NICOLE LESIRIMA
METHODS OF SOLVING LARGE SYSTEMS OF LINEAR SIMULTANEOUS EQUATIONS
- PROJECT DESCRIPTION
Linear systems are used to model real-world problems through applied numerical methods. The main aim of this project is to examine the factors that affect the efficiency of the various methods of solving linear simultaneous equations. So far, one of the key factors is rounding error, which can produce inaccurate solutions. In addition, MATLAB programs have been produced to time the computations in order to compare the efficiency of the techniques. Broadly, these methods are subdivided into two classes: direct and iterative methods. Direct methods are commonly used to solve small systems of equations, while iterative methods are used to solve real-world problems that produce systems of equations whose coefficient matrices are sparse.
Studying these methods is relevant because of their real-life applications, which are seen in fields such as science and engineering, accounting and finance, business management and operational research. The approach provides a logical framework for handling sophisticated decisions in a variety of industries. The benefit is that decisions are founded on data analysis.
Environmentalists and meteorologists might use large systems of simultaneous linear equations to forecast future outcomes. For instance, to forecast weather patterns or climate change, a sizeable amount of data is gathered over a long span of time on many parameters including solar radiation, carbon emissions and ocean conditions (Queen Mary University of London, 2015). This data is represented in the form of a transition matrix that has to be row reduced into a probability matrix, which can then be used in the prediction of climate change.
The objective of any enterprise is to maximise returns while keeping costs to a minimum. Whereas the use of large systems of simultaneous linear equations might provide a basis for fact-based business decision making within an enterprise, it is important to know which linear systems are most appropriate in order to minimise undesirable effects for a business.
- PROJECT REPORT OUTLINE
Chapter 1
Introduction
Large systems of linear simultaneous equations are used to model real-world problems through applied numerical methods. Real-life applications are seen in various domains such as science and engineering, accounting and finance, and business management. The approach provides a rational framework for dealing with intricate decisions in a wide range of industries. The benefit is that decisions are founded on data analysis. The purpose of this project is to explore the efficiency of large systems of linear simultaneous equations in optimal enterprise decision making.
Chapter 2
Direct Methods: Gaussian Elimination and LU Factorisation
Direct methods of solving linear simultaneous equations are introduced. This chapter will look at the Gaussian Elimination and LU Factorisation methods. Gaussian Elimination consists of representing the simultaneous equations in augmented form, performing elementary row operations to reduce to upper triangular form and finally back substituting to form the solution vector. LU Factorisation, on the other hand, is where a matrix A is factorised into a lower triangular matrix L and an upper triangular matrix U such that A = LU. The purpose of these lower and upper triangular matrices is that the forward and backward substitutions can be applied directly to them to obtain a solution to the linear system. An operation count and computation times using MATLAB are determined in order to identify the best method to use.
Chapter 3
Cholesky Factorisation
An introduction to the Cholesky method. This is a procedure whereby the matrix A is factorised into the product of a lower triangular matrix and its transpose; the forward and backward substitutions can be applied directly to these matrices to obtain a solution. A MATLAB program is written to compute timings. A conclusion can be drawn by comparing the three methods and determining which produces the most accurate result in the shortest computing time.
Chapter 4
Iterative Methods: Jacobi Method and Gauss-Seidel
This chapter will introduce the iterative methods that are used to solve linear systems whose coefficient matrices are large and sparse. Both methods involve splitting the matrix A into lower triangular, diagonal and upper triangular matrices L, D and U respectively. The main difference comes down to the way the x values are calculated. The Jacobi method uses only the previous x values (iteration n) to evaluate the next iterated x values (iteration n+1). The Gauss-Seidel method uses each new x value (iteration n+1) as soon as it is available; for example, the updated x1 is used immediately when computing x2.
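To make the difference concrete, here is a minimal illustrative sketch of both iterations. The project's own programs are written in MATLAB; this Python version, and the small diagonally dominant 2 x 2 system used below, are assumptions chosen purely for illustration (diagonal dominance guarantees both methods converge).

```python
def jacobi(A, b, x0, iters):
    """Jacobi iteration: every component of x^(n+1) is computed
    from the previous iterate x^(n) only."""
    n = len(A)
    x = x0[:]
    for _ in range(iters):
        x_new = x[:]
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i][i]
        x = x_new
    return x

def gauss_seidel(A, b, x0, iters):
    """Gauss-Seidel iteration: each updated component is reused
    immediately within the same sweep, so convergence is usually faster."""
    n = len(A)
    x = x0[:]
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x
```

For the assumed system 4x1 + x2 = 1, x1 + 3x2 = 2 (exact solution x1 = 1/11, x2 = 7/11), both iterations converge from the zero vector, with Gauss-Seidel needing fewer sweeps.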
Chapter 5
Successive Over-Relaxation and Conjugate Gradient
Further iterative methods are introduced. The Successive Over-Relaxation method over-relaxes the solution at each iteration: the update is computed as a weighted sum of the values from the previous iteration and the values from the Gauss-Seidel method at the current iteration. The Conjugate Gradient method involves improving the approximated value of xk towards the exact solution, which may be reached after a finite number of iterations, usually smaller than the size of the matrix.
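The over-relaxation idea can be sketched as follows. This is an illustrative Python aside (the project's programs are in MATLAB), it covers SOR only, and the 2 x 2 test system and the relaxation factor omega = 1.1 are assumed examples; omega = 1 recovers plain Gauss-Seidel.

```python
def sor(A, b, x0, omega, iters):
    """Successive Over-Relaxation: each component is set to a weighted
    blend of its previous value and the Gauss-Seidel update,
    x_i <- (1 - omega) * x_i + omega * x_i(GS)."""
    n = len(A)
    x = x0[:]
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            gs = (b[i] - s) / A[i][i]       # the Gauss-Seidel value
            x[i] = (1 - omega) * x[i] + omega * gs
    return x
```

For a symmetric positive definite matrix, SOR converges for any omega strictly between 0 and 2; a well-chosen omega > 1 accelerates Gauss-Seidel.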
Chapter 6
Conclusion
All the project findings and results are summarised in this chapter. Conclusions can be drawn from both the direct methods and the iterative methods, whereby the most accurate method with the shortest computing time can be identified. Drawbacks of each method will be pointed out, as well as its suitability for solving real-world problems.
- PROGRESS TO DATE
The project so far has covered the direct methods of solving simultaneous equations.
Gaussian Elimination
This entails representing the simultaneous equations in augmented form, performing elementary row operations to reduce the matrix to upper triangular form and lastly back substituting to form the solution vector. For instance, to solve an n x n system:
Ax = b
The goal of Gaussian Elimination is to manipulate the augmented matrix [A|b] using elementary row operations, adding a multiple of the pivot row to the rows beneath it, i.e. Ri → Ri + kRj. Once the augmented matrix is in row echelon form, the solution is found using back substitution.
The general augmented matrix, reduced to row echelon form, corresponds to the linear system

a11 x1 + a12 x2 + . . . + a1n xn = b1
         a22 x2 + . . . + a2n xn = b2
                    . . .
                          ann xn = bn

Rearranging, the final solution is given by

xn = bn / ann

and for all the remaining equations

xi = (bi - sum of aij xj for j = i + 1, . . . , n) / aii,    i = n - 1, . . . , 1.
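The elimination and back substitution just described can be sketched briefly in code. The project's programs are written in MATLAB; this Python version is only an illustrative aside, it performs no pivoting (so a zero pivot would fail), and the 2 x 2 example in the test is assumed.

```python
def gaussian_elimination(A, b):
    """Solve Ax = b: forward elimination on the augmented matrix [A|b],
    then back substitution. A is an n x n list of lists, b a length-n list."""
    n = len(A)
    # Build the augmented matrix [A | b].
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    # Forward elimination: subtract a multiple of the pivot row
    # from each row below it (Rj <- Rj - k * Ri).
    for i in range(n):
        for j in range(i + 1, n):
            k = M[j][i] / M[i][i]
            for col in range(i, n + 1):
                M[j][col] -= k * M[i][col]
    # Back substitution: xi = (bi - sum of aij * xj) / aii.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x
```

For example, the assumed system 2x + y = 5, x + 3y = 10 yields the solution x = 1, y = 3.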
An operation count and timing of the Gaussian Elimination were performed. The total number of operations for an n x n matrix using Gaussian Elimination is O(n^3).
LU Factorisation
This is where a matrix A is factorised into a lower triangular matrix L and an upper triangular matrix U such that A = LU. The purpose of these lower and upper triangular matrices is that the forward and backward substitutions can be applied directly to them to obtain a solution to the linear system.
In general, for an n x n matrix A, L is lower triangular and U is upper triangular:

L = [ l11   0    . . .   0   ]        U = [ u11  u12  . . .  u1n ]
    [ l21  l22   . . .   0   ]            [  0   u22  . . .  u2n ]
    [ . . .                  ]            [ . . .               ]
    [ ln1  ln2   . . .  lnn  ]            [  0    0   . . .  unn ]
For higher order matrices, we can derive the computation of the L and U matrices. Given a set of n elementary matrices E1, E2, . . . , En applied to matrix A, row reduce to row echelon form without permuting rows, such that A can be written as the product of two matrices L and U, that is
A = LU,
Where
U = En . . . E2 E1 A,
L = E1^-1 E2^-1 . . . En^-1
For a general n x n matrix, the total number of operations is O(n^3). A MATLAB program has been produced to time the LU Factorisation. So far, this method has proved more efficient than Gaussian Elimination.
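As an illustrative aside (the project's own programs are in MATLAB), the factorisation can be sketched compactly. Rather than accumulating inverses of elementary matrices, this sketch uses the equivalent Doolittle construction, which fills in L and U directly by equating entries of A = LU; the 2 x 2 test matrix is an assumed example, and no pivoting is performed.

```python
def lu_factorise(A):
    """Doolittle LU factorisation without pivoting: A = LU with
    L unit lower triangular (ones on the diagonal) and U upper triangular."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        # Row i of U: uij = aij - sum of lik * ukj over k < i.
        for j in range(i, n):
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        # Column i of L: lji = (aji - sum of ljk * uki over k < i) / uii.
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U
```

Once L and U are known, Ax = b is solved by forward substitution on Ly = b followed by backward substitution on Ux = y.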
Cholesky Factorisation
This is a procedure whereby the matrix A is factorised into the product of a lower triangular matrix and its transpose, i.e. A = LL^T.
The Cholesky factorisation is only possible when A is positive definite. Forward and backward substitution is then used to find the solution.
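The factorisation can be sketched as follows. Again, this Python version is only an illustrative aside to the project's MATLAB programs, and the 2 x 2 symmetric positive definite test matrix is assumed.

```python
import math

def cholesky(A):
    """Return the lower triangular L with A = L L^T.
    A must be symmetric positive definite; otherwise math.sqrt
    receives a negative argument and raises an error."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                # Diagonal entry: lii = sqrt(aii - sum of lik^2).
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                # Off-diagonal entry: lij = (aij - sum of lik * ljk) / ljj.
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L
```

Because only one triangular factor is computed and reused as its own transpose, roughly half the work of a general LU factorisation is needed, which is one reason the method times well.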
The method was also timed, and it can be concluded that it is the most reliable and efficient direct method for solving simultaneous equations.
The iterative methods have been introduced with a brief outline of what each method involves.
- Work Still to be Completed
As per the objectives laid out in the terms of reference, the following are the objectives that are yet to be completed.
- Week 13 - 16: Evaluating the convergence rate of the iterative methods in detail, as well as finding out which method improves the efficiency of the solution. Production of MATLAB programs analysing the different methods. Over the next 3 weeks, the conditions for convergence will be analysed. One of the main conditions to be analysed is the spectral radius. This is a condition applied to the iterative methods to determine how quickly or slowly a method reaches convergence. Furthermore, the project will also produce MATLAB programs for the iterative methods and use the spectral radius within these programs to determine the speed of convergence for large sparse matrices.
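As a preview of the spectral radius condition, here is an illustrative Python sketch (the planned programs are in MATLAB, and the 2 x 2 matrix A below is an assumed example). For the Jacobi method the iteration matrix is T = -D^-1(L + U); the method converges if and only if rho(T) < 1, and smaller values mean faster convergence. Because T can have a plus/minus pair of eigenvalues of equal magnitude, a one-step power-iteration ratio may not settle, so the sketch instead takes the k-th root of the total growth over k steps.

```python
def jacobi_iteration_matrix(A):
    """Jacobi iteration matrix T = -D^-1 (L + U): zero on the diagonal,
    -aij / aii elsewhere."""
    n = len(A)
    return [[0.0 if i == j else -A[i][j] / A[i][i] for j in range(n)]
            for i in range(n)]

def spectral_radius_estimate(T, k=40):
    """Estimate rho(T) as (||T^k x|| / ||x||)^(1/k) for x = (1, ..., 1),
    assuming the growth rate has settled after k steps."""
    n = len(T)
    x = [1.0] * n
    for _ in range(k):
        x = [sum(T[i][j] * x[j] for j in range(n)) for i in range(n)]
    norm = sum(v * v for v in x) ** 0.5
    return (norm / n ** 0.5) ** (1.0 / k)
```

For the assumed matrix A = [[4, 1], [1, 3]], the Jacobi iteration matrix has spectral radius sqrt(1/12), about 0.29, which is well below 1, so Jacobi converges quickly on this system.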
- Weeks 17 - 19: Introduction to the Successive Over-Relaxation (SOR) method and the Conjugate Gradient method. The Successive Over-Relaxation method improves the speed of convergence of the Gauss-Seidel method by over-relaxing the solution at every iteration, while the Conjugate Gradient method improves the approximated value of x towards the exact solution. MATLAB programs will be produced for both methods, together with the speed of convergence for different sizes of matrices.
- Week 20 - 24: Writing the results and conclusions of the report, finalising the bibliography and performing a review of the project as a whole. Preparing the oral and poster presentations.