AN INTRODUCTION TO THE NEURAL ODE MODEL

In the previous post, we learned about a continuous-time model based on an SDE. If we remove the diffusion coefficient, the equation becomes an ordinary differential equation over time t. The state change then becomes deterministic, so we can model the evolution of the state distribution over time, giving a continuous-time analogue of the normalizing flow. Moreover, this approach can be used like ResNet with an arbitrary architecture.

 

ResNet and differential equations

A ResNet model basically takes the following form

 

y_t = y_{t-1} + f_{t-1}(y_{t-1})

with t \in \{1, 2, ..., T\}, where y_t and f_t are the input and the transformation at layer t. If we view t as a sequence of real numbers \{t_1, t_2, ..., t_T\}, we can rewrite this as

 

y_{t_i} = y_{t_{i-1}} + (t_i - t_{i-1}) f_{t_{i-1}}(y_{t_{i-1}}).

 

This is exactly the Euler method for approximating a differential equation. More specifically, as T \to \infty, this update approximates the following equation

 

\frac{dy_t}{dt} = f(t, y_t)

 

From this perspective, we can view a neural network as a process that evolves a state y_t over time, described by the ordinary differential equation (ODE) above instead of the traditional layer-by-layer model. The output of the model is the state at time T, found by solving the ODE with initial condition y_0. This model can replace any ResNet model. Here f can be an arbitrary architecture that takes the state y and the time t and returns a vector of the same dimension as y.
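As an illustration, a minimal sketch of such an f in PyTorch might look as follows (the class name ODEFunc and the layer sizes are arbitrary choices for illustration, not from the original post):

import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    """Sketch of f(t, y): takes the time t and the state y, returns dy/dt with the same shape as y."""
    def __init__(self, dim=3, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, hidden), nn.Tanh(), nn.Linear(hidden, dim))

    def forward(self, t, y):
        # Concatenate the scalar time with the state so that f can depend on t.
        return self.net(torch.cat([t.view(1), y]))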

 

An important question about this ODE is whether y_t can actually be determined from the equation. The Picard–Lindelöf theorem shows that if f is Lipschitz in y, there exists \epsilon > 0 such that y(t) exists and is uniquely determined on [-\epsilon, \epsilon]. Thus, for the ODE to be well defined, we need a model that satisfies the Lipschitz property.

 

Solving the ODE

Given the ODE above with initial condition y_0, the state at time t is computed as

 

y_t = y_0 + \int_0^t f(\tau, y_\tau) d\tau

 

Our goal is to approximate the above integral. The simplest way is the Euler method: for a grid t_0 < t_1 < ... < t_T, we compute the T values at these times in turn as follows:

 

y_i = y_{i-1} + h \cdot f(t_{i-1}, y_{i-1}), \quad h = t_i - t_{i-1}

 

As mentioned above, this approach is similar to the familiar ResNet model.

 

import torch

def odeint_euler(f, y0, t):
    """Approximate the solution of dy/dt = f(t, y) at the times t[1:] with the Euler method."""
    def step(state, t_next):
        y_prev, t_prev = state
        dt = t_next - t_prev
        y = y_prev + dt * f(t_prev, y_prev)
        return y, t_next

    t_curr = t[0]
    y_curr = y0
    ys = []
    for t_next in t[1:]:
        y_curr, t_curr = step((y_curr, t_curr), t_next)
        ys.append(y_curr)
    return torch.stack(ys)

 

Another popular approximation with lower error is the fourth-order Runge-Kutta method, which approximates the change between time steps using four intermediate evaluations.

 

y_i = y_{i-1} + \frac{h}{6}(k_1 + 2k_2 + 2k_3 + k_4)

 

\begin{aligned} k_1 &= f(t_{i-1}, y_{i-1})\\ k_2 &= f(t_{i-1}+\frac{h}{2}, y_{i-1}+h\frac{k_1}{2})\\ k_3 &= f(t_{i-1}+\frac{h}{2}, y_{i-1}+h\frac{k_2}{2})\\ k_4 &= f(t_{i-1}+h, y_{i-1}+h k_3) \end{aligned}

 

def odeint_rk4(f, y0, t):
    """Approximate the solution of dy/dt = f(t, y) at the times t[1:] with the classic RK4 method."""
    def step(state, t_next):
        y_prev, t_prev = state
        dt = t_next - t_prev
        # Each k here already includes the step size dt (k = h * f(...)).
        k1 = dt * f(t_prev, y_prev)
        k2 = dt * f(t_prev + dt/2., y_prev + k1/2.)
        k3 = dt * f(t_prev + dt/2., y_prev + k2/2.)
        k4 = dt * f(t_prev + dt, y_prev + k3)  # evaluate at t_prev + dt, i.e. at t_next
        y = y_prev + (k1 + 2 * k2 + 2 * k3 + k4) / 6
        return y, t_next

    t_curr = t[0]
    y_curr = y0
    ys = []
    for t_next in t[1:]:
        y_curr, t_curr = step((y_curr, t_curr), t_next)
        ys.append(y_curr)
    return torch.stack(ys)

 

Try an example with the following ODE

 

\frac{dy_t}{dt} = y_t, \quad y_0 = 2

 

This ODE has the solution y_t = y_0 e^t. Using 100 steps to approximate the integral and compute y_{10}, the two methods above give the results below.

 

[Figure: Euler vs. Runge-Kutta approximations of y_t compared with the exact solution]

 

We can see that the Euler method gives a noticeably inaccurate result. This shows how the step size affects the accuracy of the approximation. We can therefore approximate the ODE more accurately by choosing the length of each step adaptively so that an estimate of the error stays small (this requires a way to estimate the error, for example by running a second method and taking the difference between the two results). However, this raises a problem: if we want to use minibatches, the error of each ODE in the batch is different, so the time grids will differ, and the whole batch can no longer be processed like an ordinary neural network. One solution is to merge the whole batch into a single ODE so that the time grid is shared, but the error may increase. In JAX we can use vmap to parallelize the ODEs in a batch (recently torch has also added a vmap implementation).
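For reference, here is a sketch of how this comparison could be reproduced with the two solvers above (the step count and tensor shapes are assumptions for illustration):

f = lambda t, y: y                        # dy/dt = y
y0 = torch.tensor([2.0])                  # y_0 = 2
t = torch.linspace(0., 10., 101)          # 100 steps from t = 0 to t = 10

y_euler = odeint_euler(f, y0, t)[-1]      # approximation of y_10
y_rk4 = odeint_rk4(f, y0, t)[-1]
y_true = y0 * torch.exp(torch.tensor(10.))

print(y_euler.item(), y_rk4.item(), y_true.item())  # RK4 should be far closer to 2*e^10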

 

Updating the parameters

In the previous post, we familiarized ourselves with a continuous-time SDE model by directly modeling the score over time. For the neural ODE, however, we model the change of the state over time, so gradient updates are no longer obvious and the gradients have to be derived with respect to the model's parameters.

 

This section shows how to compute gradients for the two modes of automatic differentiation: the vector-Jacobian product (VJP) and the Jacobian-vector product (JVP). Details of these two formulations can be found in the JAX library documentation.

 

Computing with the vector-Jacobian product (reverse mode)

For convenience, we will rewrite the differential equation as:

 

\frac{\partial y(t, y_0, \theta)}{\partial t} = f(t, y(t, y_0, \theta), \theta)

 

Suppose the objective is computed at the final state y_T via a function L(y_T, \theta). By the existence and uniqueness theorem, this objective can also be computed from any intermediate state y_t via a function L_t(y_t, \theta).

 

Our goal is the derivative with respect to the initial state y_0 and the parameters \theta, in other words to compute the partial derivatives \frac{\partial L_0(y_0,\theta)}{\partial y_0} and \frac{\partial L_0(y_0,\theta)}{\partial \theta}.

 

Define

 

a(t, y_0, \theta) = \frac{\partial L_t(y_t,\theta)}{\partial y_t},

 

We know a(T, y_0, \theta) and need to compute a(0, y_0, \theta). Thus, if we can model the change \frac{\partial a(t, y_0,\theta)}{\partial t} of the function a over time t, we can obtain a(0, y_0, \theta) by integrating over time from T back to 0.

 

Since the ODE has a unique solution in a neighborhood of y_0, we can take the partial derivative with respect to y_0 on both sides

 

\frac{\partial^2 y(t, y_0, \theta)}{\partial y_0 \partial t} = \frac{\partial f(t, y(t, y_0, \theta), \theta)}{\partial y_0}

 

Changing the order of partial derivatives and applying the chain rule we have

 

\frac{\partial^2 y(t, y_0,\theta)}{\partial t \partial y_0} = \frac{\partial f(t, y, \theta)}{\partial y} \frac{\partial y(t, y_0, \theta)}{\partial y_0}.

 

Going back to the objective function, applying the chain rule we get

 

\frac{\partial L_0(y_0,\theta)}{\partial y_0} = \frac{\partial L_t(y_t,\theta)}{\partial y_t} \frac{\partial y(t, y_0, \theta)}{\partial y_0}

 

From the two equations above (note that the left-hand side of the last identity does not depend on t, so its derivative with respect to t is zero), we can model the change of a(t, y_0, \theta) over time as follows

 

\frac{\partial a(t, y_0, \theta)}{\partial t} = -a(t, y_0, \theta) \frac{\partial f(t, y, \theta)}{\partial y}

 

Now a(0, y_0, \theta) can be calculated by

 

a(0, y_0, \theta) = a(T, y_0, \theta) - \int_T^0 a(t, y_0, \theta) \frac{\partial f}{\partial y} dt

 

To compute a(t, y_0, \theta) \frac{\partial f}{\partial y}, we use the vector-Jacobian product with respect to the input y. The state y_t itself can be recomputed from the original ODE.
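As a sketch of this step, torch.autograd.functional.vjp can compute a \frac{\partial f}{\partial y} without building the full Jacobian (the helper name adjoint_rhs and the assumption that f takes (t, y) are mine, not from the post):

from torch.autograd.functional import vjp

def adjoint_rhs(f, t, y, a):
    """Return -a * df/dy at (t, y), i.e. the right-hand side da/dt of the adjoint equation."""
    _, a_df_dy = vjp(lambda y_: f(t, y_), y, v=a)  # vector-Jacobian product a^T (df/dy)
    return -a_df_dy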

 

Next we compute the partial derivative with respect to the model's parameters; applying the chain rule we get

 

\frac{\partial L_0(y_0,\theta)}{\partial \theta} = \frac{\partial L_t(y_t,\theta)}{\partial y_t} \frac{\partial y_t}{\partial \theta} + \frac{\partial L_t(y_t,\theta)}{\partial \theta}

 

Similarly to the above, if we can model the change of b(t, y_0, \theta) = \frac{\partial L_t(y_t,\theta)}{\partial \theta} over time, then b(0, y_0, \theta) = \frac{\partial L_0(y_0,\theta)}{\partial \theta} can be calculated by integrating from the state \frac{\partial L_T(y_T,\theta)}{\partial \theta}.

 

Taking the derivative with respect to t on both sides, we have

 

\frac{\partial a(t, y_0,\theta)}{\partial t} \frac{\partial y}{\partial \theta} + a(t, y_0, \theta) \frac{\partial^2 y}{\partial t \partial \theta} + \frac{\partial b(t, y_0, \theta)}{\partial t} = 0

 

As with the initial state y_0, we can assume that the ODE is satisfied in a neighborhood of \theta and take the derivative with respect to \theta on both sides, then change the order of derivatives and apply the chain rule

 

\frac{\partial^2 y(t, y_0,\theta)}{\partial t \partial \theta} = \frac{\partial f(t, y, \theta)}{\partial y} \frac{\partial y(t, y_0, \theta)}{\partial \theta} + \frac{\partial f(t, y, \theta)}{\partial \theta}.

 

Substituting \frac{\partial a}{\partial t} and \frac{\partial^2 y}{\partial t \partial \theta}, we get

 

-a(t, y_0, \theta) \frac{\partial f(t, y, \theta)}{\partial y} \frac{\partial y(t, y_0, \theta)}{\partial \theta} + a(t, y_0, \theta) \left( \frac{\partial f(t, y, \theta)}{\partial y} \frac{\partial y(t, y_0, \theta)}{\partial \theta} + \frac{\partial f(t, y, \theta)}{\partial \theta} \right) + \frac{\partial b(t, y_0, \theta)}{\partial t} = 0

 

It follows that

 

\frac{\partial b(t, y_0, \theta)}{\partial t} = -a(t, y_0, \theta) \frac{\partial f(t, y, \theta)}{\partial \theta}.

 

Another question is the value of the initial condition. Note that the loss function is computed from the final state y_T without directly involving the parameters, so b(T, y_0, \theta) = \frac{\partial L(y_T,\theta)}{\partial \theta} = 0.

 

From here we can calculate

 

\frac{\partial L_0(y_0,\theta)}{\partial \theta} = b(0, y_0, \theta) = -\int_T^0 a(t, y_0, \theta) \frac{\partial f(t, y, \theta)}{\partial \theta} dt.

 

To sum up, to find partial derivatives with respect to the initial state and parameters of the model, we will solve the following system of differential equations:

 

d\begin{bmatrix} y_t \\ a_t \\ b_t \end{bmatrix} = \begin{bmatrix} f(t, y, \theta) \\ -a_t \frac{\partial f}{\partial y} \\ -a_t \frac{\partial f}{\partial \theta} \end{bmatrix} dt

 

with initial state

 

\begin{bmatrix} y_T \\ a_T \\ b_T \end{bmatrix} = \begin{bmatrix} y_T \\ \frac{d L(y_T)}{d y_T} \\ 0 \end{bmatrix}

 

Note: with this setup we have to integrate backwards in time. This requires the ODE solver to be time-reversible; more specifically, solving the ODE forwards and then solving it backwards should return exactly the initial condition. Standard one-step solvers (including the Euler and Runge-Kutta methods) do not satisfy this property.
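To make the system above concrete, here is a minimal sketch of the backward pass using plain Euler steps and torch's vjp, under the simplifying assumption (mine, not from the post) that the parameters are flattened into a single tensor theta; it also ignores the reversibility caveat just mentioned:

import torch
from torch.autograd.functional import vjp

def adjoint_gradients(f, theta, y_T, dLdy_T, t_grid):
    """Integrate the augmented state (y, a, b) backwards from T to 0 with Euler steps.
    f(t, y, theta) is assumed differentiable with respect to y and theta."""
    y, a, b = y_T, dLdy_T, torch.zeros_like(theta)
    for t_next, t_prev in zip(reversed(t_grid[1:]), reversed(t_grid[:-1])):
        dt = t_next - t_prev
        # VJPs give a * df/dy and a * df/dtheta without building full Jacobians.
        _, (a_df_dy, a_df_dtheta) = vjp(lambda y_, th_: f(t_next, y_, th_), (y, theta), v=a)
        y = y - dt * f(t_next, y, theta)   # step the original ODE backwards to recover y
        a = a + dt * a_df_dy               # da/dt = -a df/dy, integrated from T down to 0
        b = b + dt * a_df_dtheta           # db/dt = -a df/dtheta
    return a, b                            # dL/dy_0 and dL/dtheta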

 

Computing with the Jacobian-vector product (forward mode)

For this formulation, we are interested in the pushforward operation from y_0 and \theta to y_T. We have

 

\Delta y_t = \frac{\partial y(t, y_0,\theta)}{\partial y_0} \Delta y_0 + \frac{\partial y(t, y_0,\theta)}{\partial \theta} \Delta \theta

 

for every t (here \Delta y_0, \Delta \theta are tangent vectors at y_0, \theta and \Delta y_t is the corresponding tangent vector at y_t, representing the change at y_0, \theta, y_t). Similarly to the above, we look for the change of \Delta y_t over time.

 

\frac{d}{dt}\Delta y_t = \frac{\partial^2 y(t, y_0,\theta)}{\partial t \partial y_0} \Delta y_0 + \frac{\partial^2 y(t, y_0,\theta)}{\partial t \partial \theta} \Delta \theta.

 

Define u(t, y_0, \theta, \Delta y_0) = \frac{\partial y(t, y_0,\theta)}{\partial y_0} \Delta y_0 and v(t, y_0, \theta, \Delta \theta) = \frac{\partial y(t, y_0,\theta)}{\partial \theta} \Delta \theta. From the above we have

 

\frac{\partial u}{\partial t} = \frac{\partial f}{\partial y} u

 

\frac{\partial v}{\partial t} = \frac{\partial f}{\partial y} v + \frac{\partial f}{\partial \theta} \Delta \theta

 

Therefore

 

\frac{\partial (u+v)}{\partial t} = \frac{\partial f}{\partial y}(u+v) + \frac{\partial f}{\partial \theta} \Delta \theta

 

What remains is the initial condition. At time 0, y = y_0, so u(0) = \Delta y_0 and v(0) = 0. Finding the differential at y_T is then equivalent to solving the ODE

 

\frac{d}{dt} w_t = \frac{\partial f}{\partial y} w_t + \frac{\partial f}{\partial \theta} \Delta \theta

 

with initial condition w_0 = \Delta y_0.

 

Example

In this part, I will demonstrate with PyTorch, using the vjp and jvp functions. These two functions take any function whose inputs and outputs are tensors, and compute the VJP/JVP at a given input along some tangent vector.

 

For VJP/JVP with respect to the model's parameters, we can delete the parameter attributes and then reassign them, so that the parameters become arguments of the forward function (see here for details).

 

def del_attr(obj, names):
    if len(names) == 1:
        delattr(obj, names[0])
    else:
        del_attr(getattr(obj, names[0]), names[1:])

def set_attr(obj, names, val):
    if len(names) == 1:
        setattr(obj, names[0], val)
    else:
        set_attr(getattr(obj, names[0]), names[1:], val)

def make_functional(mod):
    orig_params = tuple(mod.parameters())
    names = []
    for name, p in list(mod.named_parameters()):
        del_attr(mod, name.split("."))
        names.append(name)
    return orig_params, names

def load_weights(mod, names, *params):
    for name, p in zip(names, params):
        set_attr(mod, name.split("."), p)

def del_weights(mod):
    for name, p in list(mod.named_parameters()):
        del_attr(mod, name.split("."))

import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.module = nn.Sequential(nn.Linear(4, 5), nn.LeakyReLU(), nn.Linear(5, 3), nn.Tanh())

    def get_params(self):
        self.params, self.names = make_functional(self)

    def forward(self, t, state, *args):
        if len(args) == 0:
            load_weights(self, self.names, *self.params)
        elif len(args) > 0:
            del_weights(self)
            load_weights(self, self.names, *args)
        return self.module(torch.cat([t.view(1), state]))

model = Model()
model.get_params()
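For instance, a quick usage sketch with the RK4 solver from earlier (the initial point and the time grid here are arbitrary choices for illustration):

y0 = torch.randn(3)                     # arbitrary initial state in R^3
t = torch.linspace(0., 1., 50)          # arbitrary time grid
trajectory = odeint_rk4(model, y0, t)   # states at t[1:], shape (49, 3)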

 

When calculating the JVP/VJP, we need to solve a system of ODEs, so the solver needs to be tweaked a bit

 

def odeint_rk4_system(f, y0, t):
    """
    y0 : list of states
    f  : function that returns a list of state derivatives
    """
    def step(state, t_next):
        y_prev, t_prev = state
        dt = t_next - t_prev
        k1 = [dt * i for i in f(t_prev, y_prev)]
        k2 = [dt * i for i in f(t_prev + dt/2., [y + j1/2. for y, j1 in zip(y_prev, k1)])]
        k3 = [dt * i for i in f(t_prev + dt/2., [y + j2/2. for y, j2 in zip(y_prev, k2)])]
        k4 = [dt * i for i in f(t_prev + dt, [y + j3 for y, j3 in zip(y_prev, k3)])]
        y = [i + (j1 + 2 * j2 + 2 * j3 + j4) / 6 for i, j1, j2, j3, j4 in zip(y_prev, k1, k2, k3, k4)]
        return y, t_next

    t_curr = t[0]
    y_curr = y0
    ys = []
    for t_next in t[1:]:
        y_curr, t_curr = step((y_curr, t_curr), t_next)
        ys.append(y_curr)
    return ys
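As a sketch of how the forward-mode sensitivity from the previous section could be set up with these pieces (model and odeint_rk4_system are defined above; the tangent dy0 is an assumption for illustration, and the \frac{\partial f}{\partial \theta}\Delta\theta term could be added by also passing the parameters and their tangent to jvp):

from torch.autograd.functional import jvp

def aug_f(t, state):
    # Augmented dynamics [y, w]: one jvp call returns (f(t, y), (df/dy) w).
    y, w = state
    dy, dw = jvp(lambda y_: model(t, y_), y, v=w)
    return [dy, dw]

# Hedged usage: push the tangent dy0 at y0 forward to time T.
# y0 = torch.randn(3); dy0 = torch.tensor([0., 0., 1.]); t = torch.linspace(0., 1., 50)
# ys = odeint_rk4_system(aug_f, [y0, dy0], t)
# y_T, dy_T = ys[-1]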

 

We will model the time derivative of the position of a point in \mathbb{R}^3 with the fourth-order Runge-Kutta method; the result is shown below.

 

[Figure: trajectory of the point in \mathbb{R}^3 computed with RK4]

 

With tangent vector [0, 0, 1] at the initial condition, the pushforward over time gives the tangent at each time as follows

 

[Figure: pushed-forward tangent vectors along the trajectory]

 

With tangent (cotangent) vector [0, 0, 1] at time T, we pull it back to y_0 and \theta. Applying the VJP with respect to y_0 gives the result shown.

Applying the VJP with respect to \theta gives the following result.

 

In the next post, we will learn about the continuous normalizing flow model built with neural ODEs, and how it relates to the SDE from the previous post.
