MATH2019 Introduction to Scientific Computation Coursework 4
2022
+ Numerical Differentiation
Recall, given a width h > 0, the (three-point) central difference approximation of the derivative f'(x0) of a function f(x) is given by

    f'(x0) ≈ N_1(h) := [f(x0 + h) − f(x0 − h)] / (2h).    (1)
Assuming sufficient smoothness of f over an interval [a, b] containing [x0 − h, x0 + h], Richardson extrapolation can be used to produce new approximations

    f'(x0) ≈ N_k(h),    (2)

for various ‘levels’ k > 1.
0 Introductory question, not for credit. Use appropriate Taylor expansions to show that
    N_k(h) = a_k N_{k−1}(h) + b_k N_{k−1}(h/2),    k > 1,    (3)

where the coefficients a_k and b_k are given by

    a_k = 1 / (1 − 4^(k−1)),    b_k = − 4^(k−1) / (1 − 4^(k−1)),

for k > 1.
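As a numerical check of recursion (3), the sketch below (a hypothetical implementation, not necessarily the one expected in numerical_differentiation.py) evaluates N_k(h) recursively with the coefficients above:

```python
import numpy as np

def richardson_sketch(f, x0, h, k):
    """Level-k Richardson extrapolation of the central difference (sketch)."""
    if k == 1:
        # Base case: the three-point central difference N_1(h) of Eq. (1)
        return (f(x0 + h) - f(x0 - h)) / (2 * h)
    # Recursion (3) with the coefficients a_k and b_k given above
    a_k = 1.0 / (1.0 - 4.0 ** (k - 1))
    b_k = -4.0 ** (k - 1) / (1.0 - 4.0 ** (k - 1))
    return (a_k * richardson_sketch(f, x0, h, k - 1)
            + b_k * richardson_sketch(f, x0, h / 2, k - 1))

# The error |10 cos(5) - N_k(0.1)| for f(x) = sin(10x) at x0 = 0.5
# shrinks rapidly as the level k grows
f = lambda x: np.sin(10 * x)
for k in (1, 2, 3):
    print(k, abs(10 * np.cos(5) - richardson_sketch(f, 0.5, 0.1, k)))
```

Because the recursion halves h at each level, a single call at level k evaluates f at 2^k points; memoising N_1 values would avoid the repeated work, but the plain recursion keeps the sketch close to (3).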
1 The file numerical_differentiation.py contains an unfinished function with the following definition: [10 / 40]

def richardson(f, x0, h, k):
This function returns:
– deriv_approx - a float.
• Complete the function so that, given a single point x0, a width h and a level k > 0, it will return an approximation N_k(h) to f'(x0) (output in deriv_approx) using (3).
• Also add a brief description at the top of your definition, the so-called “docstring”. This description should become visible whenever one types: help(nd.richardson) or nd.richardson?.
• Test your richardson function by running main.py, which contains the following:
# %% Question 1
f = lambda x: np.sin(10*x)
h = 0.1
x0 = 0.5
deriv_approx1 = nd.richardson(f, x0, h, 2)
deriv_approx2 = nd.richardson(f, x0, h, 3)
In this case, you should obtain the values
deriv_approx1 = 2.8308855660810632
deriv_approx2 = 2.8366132193122207
Small variations in these values are permitted.
Marks can be obtained for your richardson function for generating the required output, for certain set(s) of inputs {f,x0,h,k}. The correctness of the following may be checked:
• The type of output deriv_approx;
• The value of output deriv_approx;
• The output of help(nd.richardson).
Note that in marking your work, different input(s) may be used.
2 In numerical_differentiation.py you will find the following function definition: [10 / 40]

def richardson_errors(f, f_deriv, x0, n, h_vals, k_max):

The function should return as output:
– error_matrix - a numpy.ndarray of shape (k_max,n);
– fig - a matplotlib.figure.Figure.
• Complete the function so that, for a set of n widths {h_i} (stored in h_vals), levels k ∈ {1, 2, ..., k_max}, a point x0 and a function f(x), the errors

    E_{k−1,i} = |f'(x0) − N_k(h_i)|

are computed and returned in error_matrix. The true derivative f'(x) should be supplied in f_deriv.
• The function must call richardson from Q1.
• The function should also plot, on the same axes, the errors {E_{k−1,i}} against {h_i} for each level k. plt.loglog should be used and the figure returned in fig. This plot should have an appropriate title, axis labels and a legend.
• Add a description at the top of your richardson_errors function as a “docstring” that, when help(nd.richardson_errors) or nd.richardson_errors? is typed, will comment on and explain the results when h_vals = np.logspace(-5,1,20), k_max = 4, x0 = 0 and
(a) f(x) = sin(10x)
(b) f(x) =
• There is no need to include the usual function description in the docstring.
• Test your richardson_errors function by running main.py, which contains the following:

# %% Question 2
f = lambda x: np.sin(10*x)
f_deriv = lambda x: 10*np.cos(10*x)
n = 4
h_vals = [0.5, 0.1, 0.05, 0.01]
k_max = 3
x0 = 0.0
error_matrix, fig = richardson_errors(f, f_deriv, x0, n, h_vals, k_max)
In this case, your error_matrix output should be

error_matrix =
[[1.19178485e+01 1.58529015e+00 4.11489228e-01 1.66583353e-02]
 [6.1688657e+00  2.0222253e-02  1.2924304e-03  2.0827133e-06]
 [3.0923322e-01  3.0442267e-05  4.8220374e-07  3.0993874e-11]]

Note, small variations in these values are permitted.
Marks can be obtained for your richardson_errors function for generating the required output, for certain set(s) of inputs {f,f_deriv,x0,n,h_vals,k_max}, as well as the plots and explanation. The correctness of the following may be checked:
• The type of outputs error_matrix and fig;
• The np.shape of error_matrix;
• The values of output error_matrix;
• The plot produced in fig;
• The quality of the comments/explanation produced in help(nd.richardson_errors).
Note that in marking your work, different input(s) may be used.
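For orientation, here is one possible shape for such a function. It is a sketch only: the richardson helper is an assumed Q1 implementation inlined so the snippet is self-contained, and the plot styling choices are illustrative, not prescribed by the brief.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch runs headless
import matplotlib.pyplot as plt

def richardson(f, x0, h, k):
    # Assumed Q1 implementation of recursion (3)
    if k == 1:
        return (f(x0 + h) - f(x0 - h)) / (2 * h)
    a_k = 1.0 / (1.0 - 4.0 ** (k - 1))
    b_k = -4.0 ** (k - 1) / (1.0 - 4.0 ** (k - 1))
    return a_k * richardson(f, x0, h, k - 1) + b_k * richardson(f, x0, h / 2, k - 1)

def richardson_errors_sketch(f, f_deriv, x0, n, h_vals, k_max):
    """Sketch: errors E_{k-1,i} = |f'(x0) - N_k(h_i)| and their log-log plot."""
    error_matrix = np.zeros((k_max, n))
    for k in range(1, k_max + 1):
        for i in range(n):
            error_matrix[k - 1, i] = abs(f_deriv(x0) - richardson(f, x0, h_vals[i], k))
    # One log-log curve per level k, all on the same axes
    fig, ax = plt.subplots()
    for k in range(1, k_max + 1):
        ax.loglog(h_vals, error_matrix[k - 1, :], label=f"level k = {k}")
    ax.set_xlabel("width h")
    ax.set_ylabel("error")
    ax.set_title("Richardson extrapolation errors")
    ax.legend()
    return error_matrix, fig
```

On the Question 2 inputs above, the first entry is |10 − 2 sin(5)| ≈ 11.9178, matching the expected error_matrix; on a log-log plot the slope of each curve reveals the order of accuracy of level k.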
+ Solving ODEs Numerically - θ-schemes
Suppose we wish to solve the initial value problem:

    dy/dt = f(t, y),    t ∈ [a, b],
    y(a) = y0.
The Euler method is part of a more general set of methods called θ-schemes, which take the form:

    y_{i+1} = y_i + h [θ f(t_i, y_i) + (1 − θ) f(t_{i+1}, y_{i+1})],    i = 0, 1, ..., N − 1,    (4)

where 0 ≤ θ ≤ 1 and h = (b − a)/N.
– If θ = 1, we obtain the (forward/explicit) Euler method.
– For every other choice of θ, we have an implicit equation for y_{i+1}. Setting θ = 0 yields the backward Euler method and θ = 1/2 the trapezoidal method. The implicit equation (4) can be solved using Newton’s method.
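To make the Newton step concrete: for a scalar problem, one step of scheme (4) solves g(z) = z − y_i − h[θ f(t_i, y_i) + (1 − θ) f(t_{i+1}, z)] = 0, with g'(z) = 1 − h(1 − θ) f_y(t_{i+1}, z). A minimal sketch follows; the helper name, the explicit-Euler predictor and the scalar setting are illustrative choices, not the interface required below.

```python
def theta_step(f, df, t_i, y_i, h, theta, tol=1.0e-12, n_max=100):
    """One scalar step of the theta-scheme (4); Newton resolves the implicit part."""
    if theta == 1.0:
        return y_i + h * f(t_i, y_i)       # theta = 1: forward Euler, fully explicit
    t_next = t_i + h
    z = y_i + h * f(t_i, y_i)              # explicit Euler predictor as initial guess
    for _ in range(n_max):
        # Residual g(z) and derivative g'(z) of the implicit equation
        g = z - y_i - h * (theta * f(t_i, y_i) + (1.0 - theta) * f(t_next, z))
        dg = 1.0 - h * (1.0 - theta) * df(t_next, z)
        z_new = z - g / dg
        if abs(z_new - z) < tol:
            return z_new
        z = z_new
    return z                               # fall through after n_max iterations

# First step of the Question 3 test problem: y' = y - t^2 + 1, y(0) = 0.5
f = lambda t, y: y - t**2 + 1
df = lambda t, y: 1.0
print(theta_step(f, df, 0.0, 0.5, 1.0 / 3.0, 0.75))  # close to 1.03535354
```

Since f here is linear in y, g is linear in z and Newton converges in a single iteration; for nonlinear f the loop genuinely iterates, which is why the tolerance and iteration cap matter.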
3 In time_steppers.py you will find the following function definition: [10 / 40]

def theta_ode_solver(a, b, f, df, N, y0, theta):
The function should return as output:
– t - a numpy.ndarray of shape (N+1,);
– y - a numpy.ndarray of shape (N+1,).
• Complete the function so that, for a given function f(t, y) (stored in f) with partial derivative f_y(t, y) (stored in df), it will find approximations {y_i} (returned in y) at points {t_i} (returned in t) for a given θ-scheme. The initial condition y0 is supplied in y0.
• An implementation of Newton’s method can be found in the function newton_solver in newton.py, and this should be used to solve (4) for all 0 ≤ θ < 1. A maximum number of iterations Nmax = 100 and tolerance tol = 1.0e-12 should be used.
• Also add a brief description at the top of your definition, the so-called “docstring”. This description should become visible whenever one types: help(ts.theta_ode_solver) or ts.theta_ode_solver?.
• Test your theta_ode_solver function by running main.py, which contains the following:
# %% Question 3
a = 0; b = 2
N = 6
theta = 0.75
f = lambda t, y: y - t**2 + 1
df = lambda t, y: np.ones(np.shape(y))
y0 = 0.5
t, y = theta_ode_solver(a, b, f, df, N, y0, theta)
In this case, you should obtain:

t = [0.         0.33333333 0.66666667 1.         1.33333333 1.66666667 2.        ]

y = [0.5        1.03535354 1.70477502 2.47620836 3.3059407  4.13436358 4.88019276]
Marks can be obtained for your theta_ode_solver function for generating the required output, for certain set(s) of inputs {a,b,f,df,N,y0,theta}. The correctness of the following may be checked:
• The type of outputs t and y;
• The np.shape of t and y;
• The values of outputs t and y;
• The output of help(ts.theta_ode_solver);
Note that in marking your work, different input(s) may be used.
+ Solving ODEs Numerically - Runge-Kutta Schemes
Suppose we wish to solve the following system of m first-order initial value problems:

    y'(t) = f(t, y(t)),    t ∈ [a, b],    (5)
    y(a) = y0,    (6)

using various Runge-Kutta methods.
4 In time_steppers.py you will find the following function definition: [10 / 40]

def runge_kutta(a, b, f, N, y0, m, method):
The function should return as output:
– t - a numpy.ndarray of shape (N+1,);
– y - a numpy.ndarray of shape (m,N+1).
• Complete the function so that, for a given m-vector function f(t, y) (stored in f), it will find approximate solutions to (5)-(6) at the time points {t_i = a + ih} (output in t), where N is the number of time-steps to take and h = (b − a)/N. The initial condition y0 is supplied in y0.
• On return, the (j − 1)th row of y should contain the approximations to the jth component of y at all the time points. Finally, method selects the Runge-Kutta method to be used and can take the following values:
– method = 1 - Forward Euler Method (RK1);
– method = 2 - Midpoint Method (RK2);
– method = 3 - Heun’s 3rd order method (RK3).
• Also add a brief description at the top of your definition, the so-called “docstring”. This description should become visible whenever one types: help(ts.runge_kutta) or ts.runge_kutta?.
• Test your runge_kutta function by running main.py, which contains the following:
# %% Question 4
a = 0; b = 1
N = 4
m = 3
f = lambda t, y: np.array([y[0] - 2*y[1] + t*y[2],
                           -y[0] + y[1] + y[2] - t**2,
                           t*y[1] - y[1] + t**3])
y0 = np.array([1, 0, 1])
method = 1
t, y = runge_kutta(a, b, f, N, y0, m, method)
method = 2
t, y = runge_kutta(a, b, f, N, y0, m, method)
method = 3
t, y = runge_kutta(a, b, f, N, y0, m, method)
In all cases you should obtain:
t = [0. 0.25 0.5 0.75 1. ]
For method = 1, you should obtain:

y = [[ 1.          1.25        1.625       2.19580078  3.09838867]
 [ 0.          0.         -0.078125   -0.31542969 -0.82263184]
 [ 1.          1.          1.00390625  1.04492188  1.17010498]]

For method = 2, you should obtain:

y = [[ 1.          1.3125      1.82912445  2.7277082   4.35120032]
 [ 0.         -0.03515625 -0.21962357 -0.69201037 -1.68299021]
 [ 1.          1.00048828  1.02716637  1.12369258  1.32396449]]

For method = 3, you should obtain:

y = [[ 1.          1.32335371  1.873202    2.85396743  4.66237866]
 [ 0.         -0.04446976 -0.25061801 -0.77344478 -1.88171975]
 [ 1.          1.00321904  1.03648009  1.14166921  1.34657989]]
Marks can be obtained for your runge_kutta function for generating the required output, for certain set(s) of inputs {a,b,f,N,y0,m,method}. The
correctness of the following may be checked:
• The type of outputs t and y;
• The np.shape of t and y;
• The values of outputs t and y;
• The output of help(ts.runge_kutta);
Note that in marking your work, different input(s) may be used.
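For reference, the three single-step updates can be sketched as below, taking the standard Butcher tableaux for forward Euler, the midpoint method and Heun’s third-order method (confirm these against the lecture notes); the helper name is illustrative, not the required interface:

```python
import numpy as np

def rk_step(f, t, y, h, method):
    """One Runge-Kutta step for the system y' = f(t, y) (sketch)."""
    if method == 1:                        # forward Euler (RK1)
        return y + h * f(t, y)
    if method == 2:                        # midpoint method (RK2)
        k1 = f(t, y)
        return y + h * f(t + h / 2, y + (h / 2) * k1)
    if method == 3:                        # Heun's third-order method (RK3)
        k1 = f(t, y)
        k2 = f(t + h / 3, y + (h / 3) * k1)
        k3 = f(t + 2 * h / 3, y + (2 * h / 3) * k2)
        return y + (h / 4) * (k1 + 3 * k3)
    raise ValueError("method must be 1, 2 or 3")

# First step of the Question 4 system (h = 0.25, t0 = 0)
f = lambda t, y: np.array([y[0] - 2*y[1] + t*y[2],
                           -y[0] + y[1] + y[2] - t**2,
                           t*y[1] - y[1] + t**3])
y0 = np.array([1.0, 0.0, 1.0])
print(rk_step(f, 0.0, y0, 0.25, 2))  # close to the method = 2 column at t = 0.25
```

Because f returns an m-vector and the stages operate on NumPy arrays, the same step function covers systems of any size m; runge_kutta would simply call it N times, storing each result as a column of y.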
End of questions
2022-03-26