Parallel Spectral Numerical Methods

Introduction

We now briefly discuss how to solve initial value problems. For more on this see Bradie (Chap. 7)[1]. A slightly longer but still quick introduction to these ideas can also be found in Boyce and DiPrima[2].

Forward Euler

In order to compute solutions to differential equations on computers efficiently, it is convenient to do our calculations at a finite number of specified points and then interpolate between these points. For many calculations it is convenient to use a grid whose points are equally distant from each other.

For the rest of the section, h will be our step size, which is assumed to be constant. When solving an ODE or PDE, the choice of h isn't selected at random, but rather requires some intuition and/or theoretical analysis. We are going to start with the forward Euler method, which is the most basic numerical method. Let's first denote the time at the nth time-step by t^n and the computed solution at the nth time-step by y^n, where y^n ≈ y(t^n). The step size h in terms of t is defined as h = t^{n+1} - t^n. Let's start with a basic ODE with initial conditions, in which f(t, y) is some arbitrary function and y(t) is our solution:

dy/dt = f(t, y),    y(t^0) = y^0.    (1)

The differential equation can be approximated by finite differences,

(y^{n+1} - y^n) / h = f(t^n, y^n).

Now all we have to do is solve for y^{n+1} algebraically,

y^{n+1} = y^n + h f(t^n, y^n).    (2)

If we wanted to calculate the solution at some later time, then we could generate an approximation using eq. 2: starting from y^0 we first find y^1, then use it to compute y^2, and so on. We repeat this process until the final time is reached.
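As an illustration of this stepping process, here is a minimal Python sketch (our own example, not part of the original text; the function name forward_euler and the sample problem are assumptions) that applies eq. 2 repeatedly until the final time is reached:

def forward_euler(f, y0, t0, tmax, h):
    # March y' = f(t, y) from t0 to tmax with constant step h using eq. 2.
    t, y = t0, y0
    tvals, yvals = [t], [y]
    while t < tmax - 1e-12:
        y = y + h * f(t, y)      # y^{n+1} = y^n + h f(t^n, y^n)
        t = t + h
        tvals.append(t)
        yvals.append(y)
    return tvals, yvals

# Example: f(t, y) = y with y(0) = 1, as in the example computation below
tvals, yvals = forward_euler(lambda t, y: y, 1.0, 0.0, 4.0, 0.1)
print(yvals[-1])                 # rough approximation to exp(4)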

An Example Computation

Let's take the ODE in eq. 1 with initial condition y(0) = 1, where f(t, y) = y. Using forward Euler, let's plot two examples, one with h = 1 and one with h = 0.1.

A numerical solution to the ODE in eq. 1 with f(t,y) = y demonstrating the accuracy of the Forward Euler method for different choices of timestep.

It should be no surprise that a smaller step size like h = 0.1 will be more accurate than h = 1. Looking at the line for h = 1, you can see that the solution is calculated at only 4 points, and straight lines are interpolated between each point. This is obviously not very accurate, but gives a rough idea of what the function looks like. The solution for h = 0.1 requires 10 times more steps to be taken, but it is clearly more accurate. Forward Euler is an example of a first order method and approximates the exact solution using the first two terms in the Taylor expansion[footnote 1]

y(t^n + h) = y(t^n) + h y'(t^n) + O(h^2),

where terms of higher order than O(h^2) are omitted in the approximate solution. Substituting this into eq. 2, with y^n = y(t^n), we get that

y(t^n) + h y'(t^n) + O(h^2) = y(t^n) + h f(t^n, y(t^n));

after cancelling terms and dividing by h, we get that

y'(t^n) = f(t^n, y(t^n)) + O(h),

from which it is clear that the accuracy of the method changes linearly with the step size, and hence it is first order accurate.
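This linear dependence of the error on h can be checked numerically. The following short Python sketch (our own illustration, not part of the original text) applies forward Euler to y' = y, y(0) = 1, whose exact solution is e^t, and prints the error at t = 1 for a sequence of halved step sizes; the error should shrink by roughly a factor of two each time:

from math import exp

def fe_final(h, tmax=1.0):
    # Forward Euler for y' = y, y(0) = 1, integrated up to t = tmax.
    y = 1.0
    for _ in range(int(round(tmax / h))):
        y += h * y
    return y

for h in [0.1, 0.05, 0.025]:
    print(h, abs(fe_final(h) - exp(1.0)))   # error decreases roughly linearly in h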

Backward Euler

A variation of forward Euler can be obtained by approximating the derivative by a backward difference quotient. Using eq. 1 and applying this approximation we get

(y^n - y^{n-1}) / h = f(t^n, y^n).

Stepping the index up from n to n + 1 we obtain,

(y^{n+1} - y^n) / h = f(t^{n+1}, y^{n+1}).

Notice how y^{n+1} is not written explicitly like it was in the forward Euler method. This equation instead implicitly defines y^{n+1} and must be solved to determine its value. How difficult this is depends entirely on the complexity of the function f. For example, if f is just y^2, then the quadratic formula could be used, but many nonlinear PDEs require other methods. Some of these methods will be introduced later.
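To make the implicit solve concrete, here is a small Python sketch (our own illustration of the f(t, y) = y^2 case mentioned above, not code from the original text). The backward Euler update y^{n+1} = y^n + h (y^{n+1})^2 is a quadratic in y^{n+1}, and we take the root that tends to y^n as h tends to 0:

from math import sqrt

def backward_euler_step(yn, h):
    # Solve h*y**2 - y + yn = 0 for y = y^{n+1}, choosing the root
    # that reduces to y = yn in the limit h -> 0.
    return (1.0 - sqrt(1.0 - 4.0 * h * yn)) / (2.0 * h)

y, h = 1.0, 0.01
for n in range(10):
    y = backward_euler_step(y, h)
print(y)   # approximation of y' = y^2, y(0) = 1 at t = 0.1; exact value is 1/(1 - 0.1)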

Crank-Nicolson

By taking an average of the forward and backward Euler methods, we can find the Crank-Nicolson method:

(y^{n+1} - y^n) / h = (1/2) [ f(t^{n+1}, y^{n+1}) + f(t^n, y^n) ].

Rearranging we obtain,

y^{n+1} - (h/2) f(t^{n+1}, y^{n+1}) = y^n + (h/2) f(t^n, y^n).

Notice again how y^{n+1} is not written explicitly like it was in forward Euler. This equation instead implicitly defines y^{n+1}, and so the equation must be solved algebraically to obtain y^{n+1}.
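When f is nonlinear, one common way to solve this implicit equation at each step is a simple fixed-point iteration starting from the forward Euler prediction. The following Python sketch is our own illustration of this idea (the function name and the sample problem are assumptions, and the iteration is only guaranteed to converge for small enough h):

def crank_nicolson_step(f, tn, yn, h, tol=1e-12, max_iter=100):
    # Solve y = yn + (h/2)*(f(tn, yn) + f(tn + h, y)) by fixed-point iteration.
    y = yn + h * f(tn, yn)            # forward Euler initial guess
    for _ in range(max_iter):
        y_next = yn + 0.5 * h * (f(tn, yn) + f(tn + h, y))
        if abs(y_next - y) < tol:
            break
        y = y_next
    return y

# Example: one Crank-Nicolson step for y' = -y^2 starting from y(0) = 2 with h = 0.1
print(crank_nicolson_step(lambda t, y: -y * y, 0.0, 2.0, 0.1))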

Stability of Forward Euler, Backward Euler and Crank-Nicolson

Let's look at the following ODE

dy/dt = -λ y,    (3)

where λ is a constant and where y(0) = 1. Let's numerically solve this ODE using the forward Euler, backward Euler and Crank-Nicolson time stepping schemes. The results are as follows:

Forward Euler:    y^{n+1} = (1 - λh) y^n,
Backward Euler:   y^{n+1} = y^n / (1 + λh),
Crank-Nicolson:   y^{n+1} = [(2 - λh) / (2 + λh)] y^n,

and the exact solution is given by

y(t) = e^{-λt}.

A numerical solution to the ODE in eq. 3 with λ = 20 and with a timestep of h = 0.1, demonstrating the instability of the Forward Euler method and the stability of the Backward Euler and Crank-Nicolson methods.

The figure above shows how the backward Euler and Crank-Nicolson solutions follow the exact solution, while the forward Euler solution is unstable for the chosen timestep. The listing below is a Matlab program in which you can play around with the value of λ to see how, for a fixed timestep, this changes the stability of the method.

Listing A: A Matlab program to demonstrate instability of different time-stepping methods.
% A program to demonstrate instability of timestepping methods
% when the timestep is inappropriately chosen.

%Differential equation: y'(t)=-lambda*y(t), y(t_0)=y_0
%Initial Condition, y(t_0)=1 where t_0=0
clear all; clc; clf;

%Grid
h=.1;
tmax=4;
Npoints = tmax/h;
lambda=.1;

%Initial Data
y0=1;
t_0=0;
t(1)=t_0;
y_be(1)=y0;
y_fe(1)=y0;
y_imr(1)=y0;

for n=1:Npoints
%Forward Euler
    y_fe(n+1)=y_fe(n)-lambda*h*y_fe(n);
     %Backwards Euler
    y_be(n+1)=y_be(n)/(1+lambda*h);
     %Crank Nicolson
    y_imr(n+1)=y_imr(n)*(2-lambda*h)/(2+lambda*h);
    t(n+1)=t(n)+h;
end

%Exact Solution
tt=[0:.001:tmax];
exact=exp(-lambda*tt);

%Plot
figure(1); clf; plot(tt,exact,'r-',t,y_fe,'b:',t,y_be,'g--',t,y_imr,'k-.');
xlabel time; ylabel y;
legend('Exact','Forward Euler','Backward Euler',...
'Crank Nicolson');

 

 

 

 

Listing Ap: A Python program to demonstrate instability of different time-stepping methods.
#!/usr/bin/env python
"""
A program to demonstrate instability of timestepping methods
when the timestep is inappropriately chosen.
"""

from math import exp
import matplotlib.pyplot as plt
import numpy

#Differential equation: y'(t)=-l*y(t) y(t_0)=y_0
#Initial Condition, y(t_0)=1 where t_0=0

# Definition of the Grid
h = 0.1	# Time step size
t0 = 0	# Initial value
tmax = 4	# Value to be computed y(tmax)
Npoints = int((tmax-t0)/h)	# Number of points in the Grid

t = [t0]

# Initial data
l = 0.1
y0 = 1	# Initial condition y(t0)=y0
y_be = [y0]	# Variables holding the value given by the Backward Euler Iteration
y_fe = [y0]	# Variables holding the value given by the Forward Euler Iteration
y_imr = [y0]	# Variables holding the value given by the Midpoint Rule Iteration

for i in range(Npoints):
    y_fe.append(y_fe[-1]*(1-l*h))
    y_be.append(y_be[-1]/(1+l*h))
    y_imr.append(y_imr[-1]*(2-l*h)/(2+l*h))
    t.append(t[-1]+h)


print "Exact Value: y(%d)=%f" % (tmax, exp(-4))
print "Backward Euler Value: y(%d)=%f" % (tmax, y_be[-1])
print "Forward Euler Value: y(%d)=%f" % (tmax, y_fe[-1])
print "Midpoint Rule Value: y(%d)=%f" % (tmax, y_imr[-1])

# Exact Solution
tt=numpy.arange(0,tmax,0.001)
exact = numpy.exp(-l*tt)

# Plot
plt.figure()
plt.plot(tt,exact,'r-',t,y_fe,'b:',t,y_be,'g--',t,y_imr,'k-.');
plt.xlabel('time')
plt.ylabel('y')
plt.legend(('Exact','Forward Euler','Backward Euler',
'Implicit Midpoint Rule'))
plt.show()

Stability and Accuracy of Forward Euler, Backward Euler and Crank-Nicolson Time Stepping Schemes for dy/dt = -λy

We now try to understand these observations so that we have some guidelines to design numerical methods that are stable. The numerical solution to an initial value problem with a bounded solution is stable if the numerical solution can be bounded by functions which are independent of the step size. There are two methods which are typically used to understand stability. The first method is linearized stability, which involves calculating eigenvalues of a linear system to see if small perturbations grow or decay. A second method is to calculate an energy like quantity associated with the differential equation and check whether this remains bounded.

We shall assume that λ > 0, so that the exact solution to the ODE does not grow without bound. The forward Euler method gives us

y^{n+1} = (1 - λh) y^n,    so    y^n = (1 - λh)^n y^0.

If |1 - λh| > 1, that is if h > 2/λ, then the numerical solution grows in magnitude as n increases even though the exact solution decays, so forward Euler is stable only when h ≤ 2/λ. We can do a similar calculation for backward Euler to get

y^{n+1} = y^n / (1 + λh),    so    y^n = (1 + λh)^{-n} y^0,

and since 1 + λh > 1 for every h > 0, the numerical solution decays no matter how large the timestep. Thus, the backward Euler method is unconditionally stable, whereas the forward Euler method is not. We leave the analysis of the Crank-Nicolson method as an exercise.
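As a quick numerical illustration of these factors (our own sketch, not part of the original text), the following Python snippet tabulates the amplification factors of the three schemes for λ = 20, the value used in the figure; the forward Euler factor exceeds 1 in magnitude once h > 2/λ, while the other two stay at or below 1:

lam = 20.0                                   # decay constant, as in the figure above
for h in [0.05, 0.1, 0.2]:
    fe = 1.0 - lam * h                       # forward Euler amplification factor
    be = 1.0 / (1.0 + lam * h)               # backward Euler amplification factor
    cn = (2.0 - lam * h) / (2.0 + lam * h)   # Crank-Nicolson amplification factor
    print(h, abs(fe), abs(be), abs(cn))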

A second method, often used to show stability for partial differential equations, is to look for an energy-like quantity and show that this bounds the solution and prevents it from becoming too positive or too negative. Usually the quantity is chosen to be non-negative, so all one needs to do is deduce an upper bound. We sketch how this is done for an ordinary differential equation so that we can use the same ideas when looking at partial differential equations. Recall that for this ODE the forward Euler algorithm is given by

y^{n+1} = y^n - λh y^n.

Multiplying this by y^{n+1} we find that

(y^{n+1})^2 = (1 - λh) y^n y^{n+1}.

Now, to obtain a bound on (y^{n+1})^2 in terms of (y^n)^2, we use the following fact:

y^n y^{n+1} ≤ (y^n)^2 / 2 + (y^{n+1})^2 / 2.

Hence, provided 1 - λh ≥ 0, we obtain

(y^{n+1})^2 ≤ (1 - λh) [ (y^n)^2 + (y^{n+1})^2 ] / 2,

which rearranges to

(y^{n+1})^2 ≤ [(1 - λh) / (1 + λh)] (y^n)^2.

Thus if 0 < h ≤ 1/λ, then (y^{n+1})^2 ≤ (y^n)^2 and so we have stability (this energy argument gives a sufficient, though not sharp, restriction on the timestep); we again see that the algorithm is stable provided the timestep is small enough. There are many situations for which λ is large, and so the timestep h needs to be very small. In such a situation, the forward Euler method can be very slow on a computer.
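The effect of the timestep restriction on forward Euler is easy to observe directly; this short Python sketch (our own illustration, not part of the original text) integrates eq. 3 with a step just below and just above the threshold 2/λ found earlier:

def forward_euler_decay(lam, h, nsteps):
    # Forward Euler for y' = -lam*y with y(0) = 1.
    y = 1.0
    for _ in range(nsteps):
        y = (1.0 - lam * h) * y
    return y

lam = 20.0
print(forward_euler_decay(lam, 0.09, 50))   # h < 2/lam: the iterates decay towards zero
print(forward_euler_decay(lam, 0.11, 50))   # h > 2/lam: the iterates grow in magnitude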

Stability is not the only requirement for a numerical method to approximate the solution to an initial value problem well. We also want to show that as the timestep is made smaller, the numerical approximation becomes better. For the forward Euler method applied to this ODE we have that

y^{n+1} = y^n - λh y^n.

Now if the step is started from the exact value, y^n = y(t^n), then Taylor expanding the exact solution[footnote 2]

y(t^{n+1}) = y(t^n) - λh y(t^n) + O(h^2),

so

y(t^{n+1}) - y^{n+1} = O(h^2).

The error made in a single step is therefore O(h^2); since reaching a fixed final time requires O(1/h) steps, the total error is O(h) and the method is first order.

We can do a similar calculation to show that the Crank-Nicolson method is second order. In this case, however, we use Taylor expansions around t^{n+1/2} = t^n + h/2,

y(t^{n+1}) = y(t^{n+1/2}) + (h/2) y'(t^{n+1/2}) + (h^2/8) y''(t^{n+1/2}) + O(h^3),
y(t^n) = y(t^{n+1/2}) - (h/2) y'(t^{n+1/2}) + (h^2/8) y''(t^{n+1/2}) + O(h^3),

so

[y(t^{n+1}) - y(t^n)] / h = y'(t^{n+1/2}) + O(h^2)    and    [y(t^{n+1}) + y(t^n)] / 2 = y(t^{n+1/2}) + O(h^2),

hence, using y'(t^{n+1/2}) = -λ y(t^{n+1/2}), the exact solution satisfies

[y(t^{n+1}) - y(t^n)] / h = -λ [y(t^{n+1}) + y(t^n)] / 2 + O(h^2),

which is the Crank-Nicolson relation up to an O(h^2) residual. Thus this is a second order method.
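These orders are easy to confirm numerically. The Python sketch below (our own illustration, not part of the original text) integrates eq. 3 with λ = 2 up to t = 1 and prints the errors of forward Euler and Crank-Nicolson for a sequence of halved step sizes; the first error should shrink roughly like h and the second roughly like h^2:

from math import exp

def errors(lam, h, tmax=1.0):
    # Errors at t = tmax for forward Euler and Crank-Nicolson applied to y' = -lam*y, y(0) = 1.
    nsteps = int(round(tmax / h))
    y_fe = y_cn = 1.0
    for _ in range(nsteps):
        y_fe = (1.0 - lam * h) * y_fe
        y_cn = (2.0 - lam * h) / (2.0 + lam * h) * y_cn
    exact = exp(-lam * tmax)
    return abs(y_fe - exact), abs(y_cn - exact)

for h in [0.1, 0.05, 0.025]:
    print(h, errors(2.0, h))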

Exercises

  1. Determine the real values of λ and timestep h for which the implicit midpoint rule is stable for the ODE dy/dt = -λy. Sketch the stable region in a graph of λ against timestep h.
  2. Show that the backward Euler method is a first order method.

Notes

  1. The derivation of the Taylor expansion can be found in most books on calculus.
  2. We will use big `Oh' to mean that there exists a constant C so that if g(h) = O(h^p), then for all sufficiently small h we have that |g(h)| ≤ C h^p, where C is independent of h.
  1. Bradie (2006)
  2. Boyce and DiPrima (2010)

References

Boyce, W.E.; DiPrima, R.C. (2010). Elementary Differential Equations and Boundary Value Problems. Wiley. 

Bradie, B. (2006). A Friendly Introduction to Numerical Analysis. Pearson. 
