It was Émile Picard (1856–1941) who developed the method of successive approximations to show the existence of solutions of ordinary differential equations. He proved that it is possible to construct a sequence of functions that converges to a solution of the differential equation. One of the first steps toward understanding Picard iteration is to realize that an initial value problem can be recast as an integral equation.
Theorem 1.6.6.
The function \(u = u(t)\) is a solution to the initial value problem
\begin{align*}
x' & = f(t, x)\\
x(t_0) & = x_0,
\end{align*}
if and only if \(u\) is a solution to the integral equation
\begin{equation*}
x(t) = x_0 + \int_{t_0}^t f(s, x(s)) \, ds.
\end{equation*}
Proof.
Suppose that \(u = u(t)\) is a solution to
\begin{align*}
x' & = f(t, x)\\
x(t_0) & = x_0,
\end{align*}
on some interval \(I\) containing \(t_0\text{.}\) Since \(u\) is continuous on \(I\) and \(f\) is continuous on \(R\text{,}\) the function \(F(t) = f(t, u(t))\) is also continuous on \(I\text{.}\) Integrating both sides of \(u'(t) = f(t, u(t))\) and applying the Fundamental Theorem of Calculus, we obtain
\begin{equation*}
u(t) - u(t_0) = \int_{t_0}^t u'(s) \, ds = \int_{t_0}^t f(s, u(s)) \, ds.
\end{equation*}
Since \(u(t_0) = x_0\text{,}\) the function \(u\) is a solution of the integral equation.
Conversely, assume that
\begin{equation*}
u(t) = x_0 + \int_{t_0}^t f(s, u(s)) \, ds.
\end{equation*}
If we differentiate both sides of this equation, we obtain \(u'(t) = f(t, u(t))\text{.}\) Since
\begin{equation*}
u(t_0) = x_0 + \int_{t_0}^{t_0} f(s, u(s)) \, ds = x_0,
\end{equation*}
the initial condition is fulfilled.
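As a quick sanity check (not part of the proof), we can verify the equivalence symbolically for a concrete case. The sketch below, which assumes SymPy is available, confirms that \(u(t) = e^{kt}\) satisfies the integral equation \(x(t) = 1 + \int_0^t k x(s) \, ds\) corresponding to \(x' = kx\text{,}\) \(x(0) = 1\text{.}\)

```python
# Sketch: verify symbolically that u(t) = e^{kt} satisfies the integral
# equation x(t) = 1 + \int_0^t k x(s) ds, using SymPy.
import sympy as sp

# Assuming k > 0 keeps the integral free of piecewise cases.
t, s, k = sp.symbols("t s k", positive=True)

u = sp.exp(k * t)                                    # known solution
rhs = 1 + sp.integrate(k * u.subs(t, s), (s, 0, t))  # right-hand side of the integral equation

assert sp.simplify(u - rhs) == 0  # the two sides agree
```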
To show the existence of a solution to the initial value problem
\begin{align*}
x' & = f(t, x)\\
x(t_0) & = x_0,
\end{align*}
we will construct a sequence of functions, \(\{ u_n(t) \}\text{,}\) that will converge to a function \(u(t)\) that is a solution to the integral equation
\begin{equation*}
x(t) = x_0 + \int_{t_0}^t f(s, x(s)) \, ds.
\end{equation*}
We define the first function of the sequence using the initial condition,
\begin{equation*}
u_0(t) = x_0.
\end{equation*}
We derive the next function in our sequence using the right-hand side of the integral equation,
\begin{equation*}
u_1(t) = x_0 + \int_{t_0}^t f(s, u_0(s)) \, ds.
\end{equation*}
Subsequent terms in the sequence can be defined recursively,
\begin{equation*}
u_{n+1}(t) = x_0 + \int_{t_0}^t f(s, u_n(s)) \, ds.
\end{equation*}
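The recursion above can be carried out symbolically. The following sketch assumes SymPy is available; the helper name `picard_iterates` is my own, not from the text.

```python
# Sketch of the Picard recursion u_{k+1}(t) = x0 + \int_{t0}^t f(s, u_k(s)) ds,
# computed symbolically with SymPy.
import sympy as sp

t, s = sp.symbols("t s")

def picard_iterates(f, t0, x0, n):
    """Return the list [u_0, u_1, ..., u_n] of Picard iterates for x' = f(t, x), x(t0) = x0."""
    u = sp.sympify(x0)  # u_0(t) = x_0, the constant initial guess
    iterates = [u]
    for _ in range(n):
        # u_{k+1}(t) = x_0 + \int_{t0}^t f(s, u_k(s)) ds
        u = x0 + sp.integrate(f(s, u.subs(t, s)), (s, t0, t))
        iterates.append(sp.expand(u))
    return iterates

# For x' = x, x(0) = 1, the first iterates are 1, 1 + t, 1 + t + t**2/2, ...
us = picard_iterates(lambda s_, x_: x_, 0, 1, 2)
```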
Our goal is to show that
\(u_n(t) \rightarrow u(t)\) as
\(n \rightarrow \infty\text{.}\) Furthermore, we need to show that
\(u\) is the continuous, unique solution to our initial value problem. We will leave the proof of Picard’s Theorem to a series of exercises (
Exercise Group 1.6.5.5–13), but let us see how this works by developing an example.
Example 1.6.7.
Consider the exponential growth equation,
\begin{align*}
\frac{dx}{dt} & = kx\\
x(0) & = 1.
\end{align*}
We already know that the solution is \(x(t) = e^{kt}\text{.}\) We define the first few terms of our sequence \(\{ u_n(t) \}\) as follows:
\begin{align*}
u_0(t) & = 1,\\
u_1(t) & = 1 + \int_0^t ku_0(s) \, ds\\
& = 1 + \int_0^t k \, ds\\
& = 1 + kt,\\
u_2(t) & = 1 + \int_0^t ku_1(s) \, ds\\
& = 1 + \int_0^t k(1 + ks) \, ds\\
& = 1 + kt + \frac{(kt)^2}{2}.
\end{align*}
The next term in the sequence is
\begin{equation*}
u_3(t) = 1 + kt + \frac{(kt)^2}{2!} + \frac{(kt)^3}{3!},
\end{equation*}
and the \(n\)th term is
\begin{align*}
u_n(t) & = 1 + \int_0^t ku_{n-1}(s) \, ds\\
& = 1 + \int_0^t k\left(1 + ks + \frac{(ks)^2}{2!} + \frac{(ks)^3}{3!} + \cdots + \frac{(ks)^{n-1}}{(n-1)!}\right) \, ds\\
& = 1 + kt + \frac{(kt)^2}{2!} + \frac{(kt)^3}{3!} + \cdots + \frac{(kt)^n}{n!}.
\end{align*}
However, this is just the \(n\)th partial sum of the power series for \(e^{kt}\text{,}\) so \(u_n(t) \to e^{kt} = u(t)\) as \(n \to \infty\text{,}\) which is what we expected.
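This agreement between the Picard iterates and the Taylor partial sums can also be checked symbolically. The sketch below (assuming SymPy is available; `u_n` is an illustrative name) compares the formula for \(u_n(t)\) derived above with the degree-\(4\) Taylor polynomial of \(e^{kt}\text{.}\)

```python
# Sketch: the n-th Picard iterate for x' = kx, x(0) = 1 equals the
# n-th Taylor partial sum of e^{kt}. Checked symbolically with SymPy.
import sympy as sp

t, k = sp.symbols("t k")

def u_n(n):
    # u_n(t) = sum_{j=0}^{n} (kt)^j / j!, as derived in the text
    return sum((k * t) ** j / sp.factorial(j) for j in range(n + 1))

# Taylor polynomial of e^{kt} about t = 0, through degree 4
taylor4 = sp.series(sp.exp(k * t), t, 0, 5).removeO()

assert sp.simplify(u_n(4) - taylor4) == 0
```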