A large class of iterative methods for solving a linear equation system $A\mathbf{x} = \mathbf{b}$ involves splitting the matrix $A$ into the difference between two new matrices $M$ and $N$:

$$A = M - N \tag{1}$$

This gives us

$$M\mathbf{x} = N\mathbf{x} + \mathbf{b}$$

and the approximate solution can be found as follows:

$$M\mathbf{x}^{(k+1)} = N\mathbf{x}^{(k)} + \mathbf{b} \tag{2}$$
The iterative methods start from an initial guess $\mathbf{x}^{(0)}$ and produce better approximations $\mathbf{x}^{(1)}, \mathbf{x}^{(2)}, \dots$. If this procedure converges, i.e. $\mathbf{x}^{(k)} \rightarrow \mathbf{x}$ as $k \rightarrow \infty$, then $\mathbf{x}$ solves the original problem.
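To make the general procedure concrete, here is a minimal Julia sketch of the splitting iteration; the function name, keyword defaults, and stopping rule are assumptions for illustration, not part of the text:

using LinearAlgebra

# Generic splitting iteration: solve M*xNew = N*x + b at every step.
function splitting_iteration(M, N, b; x0 = zeros(length(b)), maxIter = 100, tol = 1e-10)
    x = copy(x0)
    for k = 1:maxIter
        xNew = M \ (N*x + b)                   # x^{(k+1)} = M^{-1}(N x^{(k)} + b)
        norm(xNew - x) <= tol && return xNew, k
        x = xNew
    end
    return x, maxIter
end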
If the diagonal entries of the matrix $A$ are all nonzero, we can express the matrix as

$$A = D - L - U$$

where $D$ is the diagonal part of $A$, $L$ is a strictly lower triangular matrix and $U$ is a strictly upper triangular matrix (a short sketch after the list below makes this splitting concrete). We can then define three standard iterative schemes as follows:
- the Jacobi method;
- the Gauss-Seidel method;
- the Successive Over-Relaxation (SOR) method, a combination of the above methods.
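The following sketch (the 3-by-3 matrix is purely illustrative) extracts $D$, $L$ and $U$ with the sign convention $A = D - L - U$ used above:

using LinearAlgebra

A = [ 4.0 -1.0  0.0;
     -1.0  4.0 -1.0;
      0.0 -1.0  4.0]

D = Diagonal(A)           # diagonal part of A
L = -tril(A, -1)          # strictly lower triangle, negated so that A = D - L - U
U = -triu(A,  1)          # strictly upper triangle, negated likewise
@assert A == D - L - U    # verify the splitting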
Jacobi Method
In this method, $M$ is simply the diagonal part of $A$, i.e. $M = D$ and $N = L + U$. Rewriting equation 2 gives us

$$D\mathbf{x}^{(k+1)} = (L + U)\,\mathbf{x}^{(k)} + \mathbf{b}$$

or, written out component by component,

$$x_i^{(k+1)} = \frac{1}{a_{ii}}\Big(b_i - \sum_{j \neq i} a_{ij}\, x_j^{(k)}\Big), \qquad i = 1, \dots, n$$
With the known values $x_j^{(k)}$ from the previous iteration level $k$, the values $x_i^{(k+1)}$ are computed at the current iteration level $k+1$. The process is repeated until the computed values converge.
One may immediately notice a way to improve the procedure: use the newly computed values $x_j^{(k+1)}$ to obtain $x_i^{(k+1)}$ as soon as they are available. Indeed, implementing this idea yields the Gauss-Seidel method, which has a better convergence rate. The Jacobi method is rarely used in practice due to its low convergence rate, but it provides a good basis for developing more advanced methods.
using LinearAlgebra

# Jacobi iteration. The function signature is assumed: the original
# snippet begins mid-function. `result` holds the initial guess.
function jacobi(mat, rhs, result, maxIter, tol)
    nbrows = size(mat, 1)
    residuals = Float64[]
    numIter = 0
    while numIter < maxIter
        numIter += 1
        deltaSol = rhs - mat*result              # residual b - A*x
        for i = 1:nbrows
            deltaSol[i] = deltaSol[i]/mat[i,i]   # scale by the diagonal: D^{-1}(b - A*x)
        end
        result = result + deltaSol               # x = x + D^{-1}(b - A*x)
        nrm = norm(deltaSol)
        push!(residuals, nrm)
        if nrm <= tol
            return result, numIter, residuals
        end
    end
    return result, numIter, residuals
end
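A quick sanity check of the routine on a small, strictly diagonally dominant system (the system is illustrative, and the call signature follows the reconstruction above):

A = [4.0 -1.0 0.0; -1.0 4.0 -1.0; 0.0 -1.0 4.0]
b = [2.0, 4.0, 10.0]
x, iters, res = jacobi(A, b, zeros(3), 100, 1e-10)
@show x iters             # x should approximate the exact solution [1.0, 2.0, 3.0]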
Gauss-Seidel Method
Besides its low convergence rate, the Jacobi method requires us to store all the components of $\mathbf{x}^{(k)}$ until we have finished computing the next iterate $\mathbf{x}^{(k+1)}$. To overcome these drawbacks, the Gauss-Seidel method uses each new value $x_j^{(k+1)}$ as soon as it has been calculated. Again, rewriting equation 2, now with $M = D - L$ and $N = U$, gives us

$$x_i^{(k+1)} = \frac{1}{a_{ii}}\Big(b_i - \sum_{j < i} a_{ij}\, x_j^{(k+1)} - \sum_{j > i} a_{ij}\, x_j^{(k)}\Big), \qquad i = 1, \dots, n$$
Each component $x_i^{(k+1)}$ at the current iteration level depends upon all previously computed values. Unlike the Jacobi method, only one storage vector is required during the computation, which can be advantageous for very large problems. In the next section, an extrapolation factor will be added to the equations of the Gauss-Seidel method, which gives us the Successive Over-Relaxation method.
using LinearAlgebra

# Gauss-Seidel iteration. The function signature is assumed: the
# original snippet begins mid-function. `result` holds the initial
# guess and is updated in place.
function gaussSeidel(mat, rhs, result, maxIter, tol)
    nbrows, nbcols = size(mat)
    residuals = Float64[]
    deltaSol = zeros(nbrows)
    numIter = 0
    while numIter < maxIter
        numIter += 1
        for i = 1:nbrows
            oldResult = result[i]
            result[i] = rhs[i]
            for j = 1:nbcols
                if i != j
                    result[i] -= mat[i,j]*result[j]   # entries with j < i are already updated
                end
            end
            result[i] /= mat[i,i]
            deltaSol[i] = result[i] - oldResult       # change in this component
        end
        nrm = norm(deltaSol)
        push!(residuals, nrm)
        if nrm <= tol
            return result, numIter, residuals
        end
    end
    return result, numIter, residuals
end
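On the same illustrative system as before, Gauss-Seidel typically needs noticeably fewer iterations than Jacobi (function names follow the reconstructions above):

A = [4.0 -1.0 0.0; -1.0 4.0 -1.0; 0.0 -1.0 4.0]
b = [2.0, 4.0, 10.0]
_, itJ, _ = jacobi(A, b, zeros(3), 100, 1e-10)
_, itG, _ = gaussSeidel(A, b, zeros(3), 100, 1e-10)
@show itJ itG             # expect itG < itJ for this diagonally dominant system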
Successive Over-Relaxation Method (SOR)
In iterative methods, we assume that the direction from $\mathbf{x}^{(k)}$ to $\mathbf{x}^{(k+1)}$ is taking us closer to the true solution, so it would make sense to move further along that direction. This key concept leads us to the Successive Over-Relaxation (SOR) method, which is derived by extrapolating the Gauss-Seidel method.
Let us rearrange the Gauss-Seidel equation as

$$x_i^{(k+1)} = \frac{1}{a_{ii}}\Big(b_i - \sum_{j < i} a_{ij}\, x_j^{(k+1)} - \sum_{j > i} a_{ij}\, x_j^{(k)}\Big)$$

Subtract $x_i^{(k)}$ from both sides to get

$$x_i^{(k+1)} - x_i^{(k)} = \frac{1}{a_{ii}}\Big(b_i - \sum_{j < i} a_{ij}\, x_j^{(k+1)} - \sum_{j \ge i} a_{ij}\, x_j^{(k)}\Big)$$

where the subtracted term $x_i^{(k)}$ has been absorbed into the second sum as its $j = i$ term. As mentioned above, the idea of the SOR method is to iterate by moving further along the direction $x_i^{(k+1)} - x_i^{(k)}$, scaling this update by a relaxation factor $\omega$. Written out in detail, the SOR method is

$$x_i^{(k+1)} = x_i^{(k)} + \frac{\omega}{a_{ii}}\Big(b_i - \sum_{j < i} a_{ij}\, x_j^{(k+1)} - \sum_{j \ge i} a_{ij}\, x_j^{(k)}\Big)$$

Finally, expanding the $j = i$ term of the second sum, the SOR method can be written as

$$x_i^{(k+1)} = (1 - \omega)\, x_i^{(k)} + \frac{\omega}{a_{ii}}\Big(b_i - \sum_{j < i} a_{ij}\, x_j^{(k+1)} - \sum_{j > i} a_{ij}\, x_j^{(k)}\Big)$$
If $\omega = 1$, the SOR method simplifies to the Gauss-Seidel method. The choice of the relaxation factor $\omega$ is not necessarily easy and depends upon the properties of the coefficient matrix. Frequently, some heuristic estimate is used, such as $\omega = 2 - O(h)$, where $h$ is the mesh spacing of the discretization of the underlying physical domain. In general, $\omega$ lies inside the interval $(0, 2)$.
using LinearAlgebra

# SOR iteration. The function signature and the outer loop header are
# assumed: the original snippet begins mid-function, and the loop header
# is restored to match the Gauss-Seidel version.
function sor(mat, rhs, result, omega, maxIter, tol)
    nbrows, nbcols = size(mat)
    residuals = Float64[]
    deltaSol = zeros(nbrows)
    numIter = 0
    while numIter < maxIter
        numIter += 1
        for i = 1:nbrows
            oldResult = result[i]
            result[i] = rhs[i]
            for j = 1:nbcols
                if i != j
                    result[i] -= mat[i,j]*result[j]
                end
            end
            result[i] *= omega                  # scale the Gauss-Seidel update by omega
            result[i] /= mat[i,i]
            result[i] += (1 - omega)*oldResult  # blend in the previous value
            deltaSol[i] = result[i] - oldResult
        end
        nrm = norm(deltaSol)
        push!(residuals, nrm)
        if nrm <= tol
            return result, numIter, residuals
        end
    end
    return result, numIter, residuals
end
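For consistently ordered matrices (tridiagonal matrices among them), a classical result gives the optimal factor $\omega = 2/(1 + \sqrt{1 - \rho^2})$, where $\rho$ is the spectral radius of the Jacobi iteration matrix $D^{-1}(L+U)$. The sketch below (system and function names follow the earlier examples) uses this estimate and compares the three methods:

using LinearAlgebra

A = [4.0 -1.0 0.0; -1.0 4.0 -1.0; 0.0 -1.0 4.0]
b = [2.0, 4.0, 10.0]

D = Diagonal(A)
rho = maximum(abs.(eigvals(Matrix(I - inv(D)*A))))   # spectral radius of D^{-1}(L+U)
omega = 2/(1 + sqrt(1 - rho^2))                      # optimal relaxation factor

_, itJ, _ = jacobi(A, b, zeros(3), 100, 1e-10)
_, itG, _ = gaussSeidel(A, b, zeros(3), 100, 1e-10)
_, itS, _ = sor(A, b, zeros(3), omega, 100, 1e-10)
@show omega itJ itG itS   # expect itS <= itG <= itJ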