PROBLEMS FOR MATLAB PROGRAMMING
Request: Finish all the following problems using Matlab.
1. According to the provided PDF file, implement code in Matlab to reproduce the results presented in Table 1.1 and Table 1.2 using the bisection and Newton methods, respectively. You can also plot figures showing the approximations x_k and the corresponding function values during the iterations.
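The assignment asks for Matlab; purely to illustrate the structure of the two iterations, here is a compact Python sketch. The test problem f(x) = x^3 + 4x^2 − 10 on [1, 2] is an assumed stand-in: the actual function, interval, and starting point come from Tables 1.1 and 1.2 in the provided PDF.

```python
# Assumed example problem (NOT the one from the PDF): f(x) = x^3 + 4x^2 - 10,
# which has a single root in [1, 2].
f = lambda x: x**3 + 4*x**2 - 10
df = lambda x: 3*x**2 + 8*x      # derivative of f, needed by Newton's method

def bisection(f, a, b, tol=1e-8, itmax=100):
    """Repeatedly halve [a, b] keeping a sign change; return midpoint iterates x_k."""
    xs = []
    for _ in range(itmax):
        c = 0.5 * (a + b)
        xs.append(c)
        if f(a) * f(c) <= 0:      # root lies in [a, c]
            b = c
        else:                     # root lies in [c, b]
            a = c
        if 0.5 * (b - a) < tol:
            break
    return xs

def newton(f, df, x0, tol=1e-12, itmax=50):
    """Newton's method: x_{k+1} = x_k - f(x_k)/f'(x_k); return iterates x_k."""
    xs = [x0]
    for _ in range(itmax):
        xs.append(xs[-1] - f(xs[-1]) / df(xs[-1]))
        if abs(xs[-1] - xs[-2]) < tol:
            break
    return xs
```

Plotting the returned lists of iterates (and f evaluated at them) against the iteration index k gives the figures requested above.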
2. Iterative methods for solving linear systems
Given an n × n real matrix A and a real vector b, the problem considered is: Find x belonging to Rn such that
Ax = b (1)
Some effective iterative methods (you will learn them in the course of numerical analysis) for solving the linear system (1) include:
● Jacobi iterative method: We begin with the decomposition
A = D − E − F,
in which D is the diagonal of the given matrix A, −E its strict lower part, and −F its strict upper part. It is always assumed that the diagonal entries of A are all nonzero.
The Jacobi iteration determines the i-th component of the next approximation so as to annihilate the i-th component of the residual vector. In the following, ξ_i^(k) denotes the i-th component of the iterate x_k and β_i the i-th component of the right-hand side b. Thus, writing
(b − Ax_{k+1})_i = 0,
in which (y)i represents the i-th component of the vector y, yields
a_ii ξ_i^(k+1) = − Σ_{j=1, j≠i}^{n} a_ij ξ_j^(k) + β_i ,

or equivalently

ξ_i^(k+1) = ( β_i − Σ_{j≠i} a_ij ξ_j^(k) ) / a_ii ,  i = 1, 2, . . . , n. (2)
This is a component-wise form of the Jacobi iteration. All components of the next iterate can be grouped into the vector x_{k+1}. The above notation can be used to rewrite the Jacobi iteration in vector form as follows

x_{k+1} = D^{−1}(E + F) x_k + D^{−1} b. (3)
Hence, we have the following algorithm to approximate the solution of a linear system.
Algorithm 1: Jacobi iteration
● Given an initial guess x_0 and a tolerance ε;
● Compute r = b − Ax_0;
● while ‖r‖ > ε:
● Approximate x_k based on x_{k−1} using (2);
● Update r = b − Ax_k;
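The assignment requires a Matlab implementation; as a language-neutral illustration of Algorithm 1, the component-wise update (2) in its vectorized form and the residual stopping test can be sketched in Python/NumPy as follows (the small diagonally dominant test matrix in the test is arbitrary, not the system (6)).

```python
import numpy as np

def jacobi(A, b, x0, eps=1e-10, itmax=10000):
    """Jacobi iteration: x_{k+1} = D^{-1}((E + F) x_k + b), stopping when ||r|| <= eps."""
    d = np.diag(A)              # diagonal D of A (assumed nonzero everywhere)
    R = A - np.diag(d)          # off-diagonal part, R = -(E + F)
    x = x0.astype(float)
    for k in range(itmax):
        r = b - A @ x           # residual r = b - A x_k
        if np.linalg.norm(r) <= eps:
            return x, k
        x = (b - R @ x) / d     # all components of update (2) at once
    return x, itmax
```

Note that the whole sweep uses only the previous iterate x_k; this is what distinguishes Jacobi from Gauss-Seidel below.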
● Gauss-Seidel iteration method:
Based on the previous matrix decomposition, similar to the Jacobi iteration, the Gauss-Seidel iteration corrects the i-th component of the current approximate solution, in the order i = 1, 2, . . . , n, again to annihilate the i-th component of the residual. However, this time the approximate solution is updated immediately after each new component is determined. The newly computed components ξ_i^(k+1), i = 1, 2, . . . , n, can be changed within a working vector which is redefined at each relaxation step. Thus, since the order is i = 1, 2, . . . , the result at the i-th step is
β_i − Σ_{j=1}^{i−1} a_ij ξ_j^(k+1) − a_ii ξ_i^(k+1) − Σ_{j=i+1}^{n} a_ij ξ_j^(k) = 0,
which leads to the iteration
ξ_i^(k+1) = ( β_i − Σ_{j=1}^{i−1} a_ij ξ_j^(k+1) − Σ_{j=i+1}^{n} a_ij ξ_j^(k) ) / a_ii ,  i = 1, . . . , n. (4)
Therefore, similarly, we have the following vector form to define the (forward) Gauss-Seidel iteration
x_{k+1} = (D − E)^{−1} F x_k + (D − E)^{−1} b. (5)
Computing the new approximation in (3) requires multiplying by the inverse of the diagonal matrix D. In (5) a triangular system must be solved with D − E, the lower triangular part of A. Thus, the new approximation in a Gauss-Seidel step can be determined either by solving a triangular system with the matrix D − E or from the relation (4). The algorithm is given in the following.
Algorithm 2: Gauss-Seidel iteration
● Given an initial guess x_0 and a tolerance ε;
● Compute r = b − Ax_0;
● while ‖r‖ > ε:
● Approximate x_k based on x_{k−1} using (4);
● Update r = b − Ax_k;
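For comparison with the Jacobi sketch, relation (4) of Algorithm 2 can be illustrated in the same way (again a Python/NumPy sketch, not the required Matlab code): the inner loop overwrites x in place, so components with index below i already hold the new (k+1) values.

```python
import numpy as np

def gauss_seidel(A, b, x0, eps=1e-10, itmax=10000):
    """Gauss-Seidel iteration per relation (4), updating the working vector in place."""
    n = len(b)
    x = x0.astype(float).copy()
    for k in range(itmax):
        r = b - A @ x                          # residual r = b - A x_k
        if np.linalg.norm(r) <= eps:
            return x, k
        for i in range(n):
            # x[:i] already holds (k+1)-values, x[i+1:] still holds k-values
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
    return x, itmax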
● Conjugate gradient method: Algorithm 3: CG method
● Given an initial guess x^(0), compute r^(0) = b − Ax^(0) and set p^(0) = r^(0);
● For k = 0, 1, . . . , until convergence Do:
● α_k = (p^(k), r^(k))_I / (p^(k), p^(k))_A;
● x^(k+1) = x^(k) + α_k p^(k);
● r^(k+1) = r^(k) − α_k Ap^(k);
● β_k = (p^(k), r^(k+1))_A / (p^(k), p^(k))_A;
● p^(k+1) = r^(k+1) − β_k p^(k).
For two given vectors p and q, (p, q)_A ≡ p^T A q and (p, q)_I ≡ p^T q.
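The CG recurrence above translates almost line for line into code. Again the assignment wants Matlab; the sketch below is Python/NumPy, with each step commented against the algorithm (note that only the single product A p^(k) per iteration is needed, which is what makes CG attractive for matrix-free use).

```python
import numpy as np

def cg(A, b, x0, eps=1e-10, itmax=1000):
    """Conjugate gradient for symmetric positive definite A, as in the CG algorithm above."""
    x = x0.astype(float)
    r = b - A @ x                      # r^(0)
    p = r.copy()                       # p^(0) = r^(0)
    for k in range(itmax):
        if np.linalg.norm(r) <= eps:
            return x, k
        Ap = A @ p                     # the only matrix-vector product per step
        alpha = (p @ r) / (p @ Ap)     # alpha_k = (p, r)_I / (p, p)_A
        x = x + alpha * p              # x^(k+1)
        r = r - alpha * Ap             # r^(k+1)
        beta = (Ap @ r) / (p @ Ap)     # beta_k = (p, r^(k+1))_A / (p, p)_A
        p = r - beta * p               # p^(k+1)
    return x, itmax
```

In exact arithmetic CG terminates in at most n steps for an n × n SPD matrix, and in practice far sooner for well-conditioned systems.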
With these iterative methods, we consider solving the following linear system:

Ax = b, (6)

[the displayed n × n matrix and right-hand side of (6) were garbled in text extraction; see the original PDF for the entries — the right-hand side appears to be a vector of ones].
Matrix-free idea: from the various algorithms given before we can see that the matrix A is needed mainly for the multiplication Ax and the decomposition A = D − E − F. In general we have to store the matrix A to obtain these quantities. However, for a matrix with special structure, we DO NOT need to store the matrix to form the decomposition or the product Ax for a given vector x. This is the matrix-free idea: carrying out the matrix-vector product and other related operations without storing the matrix!
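As a concrete illustration of the matrix-free idea (in Python/NumPy; the Matlab version is analogous with a function handle), suppose for the sake of example that A is the tridiagonal matrix tridiag(−1, 2, −1) — an assumed stand-in, since the actual matrix of (6) is given in the PDF. Then Ax can be formed from x alone in O(n) time and memory, with no stored matrix:

```python
import numpy as np

def matvec_tridiag(x):
    """Compute A x for A = tridiag(-1, 2, -1) without ever storing A."""
    y = 2.0 * x          # diagonal contribution (D x, with D = 2I here)
    y[:-1] -= x[1:]      # strict upper part (superdiagonal of -F)
    y[1:] -= x[:-1]      # strict lower part (subdiagonal of -E)
    return y
```

With such a routine, Jacobi needs only the (constant) diagonal, Gauss-Seidel sweeps need only the three stencil coefficients, and CG needs only `matvec_tridiag(p)` in place of A @ p, which is how the n = 5000 case becomes tractable.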
Problem: Implement the Jacobi, Gauss-Seidel and CG methods in Matlab using the matrix-free idea and then solve the above linear system for different sizes n, with n set to 100, 1000 and 5000, respectively. [Notice: for n = 5000, the Jacobi and Gauss-Seidel methods take quite many iterations to reach the convergence criterion (you need the matrix-free idea to improve the efficiency and get them to work), while CG solves this case much more quickly.] Then give an efficiency comparison among these three methods. Lastly, plot a nice figure reflecting the change of the residual (i.e., the l2 norm of r = b − Ax_k) during the iterations.
2023-01-09