This is gonna turn into a long answer, please bear with me. If you already know the algorithms, just skip to the examples.

Iterative techniques for solving a linear system Ax=b, where A is a square matrix with non-zero diagonal entries, start off by rearranging the system into the form x=Tx+c, where T is a fixed square matrix with zero entries along the diagonal. An initial approximation x(0) is then plugged into the Tx+c part, which gives the next set of approximation values x(1), and so on. First I'll give an example of the Jacobi method and then the Gauss-Seidel method. Both examples can be found at the start of section 7.3 of Numerical Analysis (8th edition) by R. L. Burden and J. Douglas Faires.

The Jacobi method

The Jacobi method generates the next approximation by using the formula,

xi(k+1) = ( bi - Σ_{j≠i} aij xj(k) ) / aii,   where the sum runs over j = 1, ..., n with j ≠ i.

Note that aii must be non-zero, which is why we require A to be a square matrix with non-zero diagonal entries.
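
Before the worked example, here is what that formula looks like as a minimal Python sketch (the function name jacobi_step and the use of numpy are my own choices, not from the book):

```python
import numpy as np

def jacobi_step(A, b, x_old):
    """One Jacobi sweep: every component is built only from the previous iterate x_old."""
    n = len(b)
    x_new = np.empty(n)
    for i in range(n):
        # sum of aij * xj(k) over all j except j = i
        s = sum(A[i, j] * x_old[j] for j in range(n) if j != i)
        x_new[i] = (b[i] - s) / A[i, i]   # needs aii != 0
    return x_new
```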

To show what I mean we will use the following system:

E1: 10x1 - x2 + 2x3 = 6

E2: -x1 + 11x2 - x3 + 3x4 = 25

E3: 2x1 - x2 + 10x3 - x4 = -11

E4: 3x2 - x3 + 8x4 = 15

Converting this from the form Ax=b to x=Tx+c we get:

x1(k+1) = x2(k)/10 - x3(k)/5 + 3/5

x2(k+1) = x1(k)/11 + x3(k)/11 - 3x4(k)/11 + 25/11

x3(k+1) = -x1(k)/5 + x2(k)/10 + x4(k)/10 - 11/10

x4(k+1) = -3x2(k)/8 + x3(k)/8 + 15/8

Notice how this forms a matrix T whose diagonal entries are all zero, together with the vector c = (3/5, 25/11, -11/10, 15/8).

For an initial approximation we shall use x(0) = (0, 0, 0, 0). Plugging these values into the Tx+c part we get x(1) = (0.6, 2.2727, -1.1, 1.8750), and plugging those values in we get x(2) = (1.0473, 1.7159, -0.8052, 0.8852). After 10 iterations we obtain x(10) = (1.0001, 1.9998, -0.9998, 0.9998), giving us a maximum absolute error of 0.0002, as the unique solution is x = (1, 2, -1, 1).
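
If you want to check those numbers yourself, a short loop like the following (reusing the jacobi_step sketch above) should reproduce the iterates to the rounding quoted:

```python
A = np.array([[10., -1.,  2.,  0.],
              [-1., 11., -1.,  3.],
              [ 2., -1., 10., -1.],
              [ 0.,  3., -1.,  8.]])
b = np.array([6., 25., -11., 15.])

x = np.zeros(4)                      # x(0) = (0, 0, 0, 0)
for k in range(10):
    x = jacobi_step(A, b, x)
    print(k + 1, np.round(x, 4))     # prints x(1) through x(10)
```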

You may notice that it does not matter in which order you compute the equations, as every variable only uses values from the previous iteration. This is where the Gauss-Seidel method improves upon the Jacobi method to make a 'better' iteration method.

The Gauss-Seidel method

For the G-S method the order in which you do the equations does matter: where the Jacobi method takes the matrix T as it comes, the G-S method treats the upper- and lower-triangular parts of T separately and uses the values just computed in the current iteration for the lower-triangular part.

To explain it better, go back to the iteration equations above. If you were to do them in order, then in the x2(k+1) equation the first term you have is x1(k)/11, but in the previous equation you just worked out x1(k+1). Surely it makes more sense to use this new value in the iteration for x2(k+1) than the old one, and this is exactly what the G-S method does! The G-S method generates the next approximation using the formula:

xi(k+1) = ( bi - Σ_{j=1}^{i-1} aij xj(k+1) - Σ_{j=i+1}^{n} aij xj(k) ) / aii
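
In code, the only change from the Jacobi sketch above is that the sweep overwrites the vector in place, so the components with j < i are already the new values (again, gauss_seidel_step is just an illustrative name):

```python
def gauss_seidel_step(A, b, x):
    """One Gauss-Seidel sweep: x is updated in place, so entries j < i are already new."""
    n = len(b)
    for i in range(n):
        s_new = sum(A[i, j] * x[j] for j in range(i))         # uses xj(k+1)
        s_old = sum(A[i, j] * x[j] for j in range(i + 1, n))  # uses xj(k)
        x[i] = (b[i] - s_new - s_old) / A[i, i]
    return x
```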

Using our previous example this changes our iteration equations into the following:

x1(k+1) = x2(k)/10 - x3(k)/5 + 3/5

x2(k+1) = x1(k+1)/11 + x3(k)/11 - 3x4(k)/11 + 25/11

x3(k+1) = -x1(k+1)/5 + x2(k+1)/10 + x4(k)/10 - 11/10

x4(k+1) = -3x2(k+1)/8 + x3(k+1)/8 + 15/8

Again starting with the initial approximation x(0) = (0, 0, 0, 0) and plugging into the equations above, we get x(1) = (0.6, 2.3272, -0.9873, 0.8789); now put these into the equations above and repeat. This time, after 5 iterations we get x(5) = (1.0001, 2, -1, 1), so the G-S method has given us an answer with a maximum absolute error of 0.0001 in half the number of iterations it took the Jacobi method to reach a maximum absolute error of 0.0002!
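
The same kind of check works for the Gauss-Seidel iterates, reusing the A and b arrays from the Jacobi sketch:

```python
x = np.zeros(4)                      # x(0) = (0, 0, 0, 0)
for k in range(5):
    x = gauss_seidel_step(A, b, x)
    print(k + 1, np.round(x, 4))     # prints x(1) through x(5)
```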

