Iterative Solution of Large Sparse Systems of Equations (Applied Mathematical Sciences) - Wolfgang Hackbusch - Springer-Verlag New York Inc. - ISBN 9781461287247 - September 27, 2011

Iterative Solution of Large Sparse Systems of Equations (Applied Mathematical Sciences), softcover reprint of the original 1st ed. 1994 edition

Wolfgang Hackbusch

Price
€ 67.99

Description for Sales People: This book presents a description of the state of modern iterative techniques together with a systematic analysis. It discusses classical methods, semi-iterative techniques, incomplete decompositions, conjugate gradient methods, multigrid methods and domain decomposition techniques. Every technique described is illustrated by a Pascal program.

Table of Contents: 1. Introduction.- 1.1 Historical Remarks Concerning Iterative Methods.- 1.2 Model Problem (Poisson Equation).- 1.3 Amount of Work for the Direct Solution of the System of Equations.- 1.4 Examples of Iterative Methods.- 2. Recapitulation of Linear Algebra.- 2.1 Notations for Vectors and Matrices.- 2.1.1 Nonordered Index Sets.- 2.1.2 Notations.- 2.1.3 Star Notation.- 2.2 Systems of Linear Equations.- 2.3 Permutation Matrices.- 2.4 Eigenvalues and Eigenvectors.- 2.5 Block-Vectors and Block-Matrices.- 2.6 Norms.- 2.6.1 Vector Norms.- 2.6.2 Equivalence of All Norms.- 2.6.3 Corresponding Matrix Norms.- 2.7 Scalar Product.- 2.8 Normal Forms.- 2.8.1 Schur Normal Form.- 2.8.2 Jordan Normal Form.- 2.8.3 Diagonalisability.- 2.9 Correlation Between Norms and the Spectral Radius.- 2.9.1 Corresponding Matrix Norms as Upper Bound for the Eigenvalues.- 2.9.2 Spectral Norm.- 2.9.3 Matrix Norm Approximating the Spectral Radius.- 2.9.4 Geometrical Sum (Neumann's Series) for Matrices.- 2.9.5 Numerical Radius of a Matrix.- 2.10 Positive Definite Matrices.- 2.10.1 Definition and Notations.- 2.10.2 Rules and Criteria for Positive Definite Matrices.- 2.10.3 Remarks Concerning Positive Definite Matrices.- 3. Iterative Methods.- 3.1 General Statements Concerning Convergence.- 3.1.1 Notations.- 3.1.2 Fixed Points.- 3.1.3 Consistency.- 3.1.4 Convergence.- 3.1.5 Convergence and Consistency.- 3.2 Linear Iterative Methods.- 3.2.1 Notations, First Normal Form.- 3.2.2 Consistency, Second and Third Normal Form.- 3.2.3 Representation of the Iterates x^m.- 3.2.4 Convergence.- 3.2.5 Convergence Speed.- 3.2.6 Remarks Concerning the Matrices M, N, and W.- 3.2.7 Product Iterations.- 3.2.8 Three-Term Recursions (Two-Step Iterations).- 3.3 Effectiveness of Iterative Methods.- 3.3.1 Amount of Computational Work.- 3.3.2 Effectiveness.- 3.3.3 Order of the Linear Convergence.- 3.4 Test of Iterative Methods.- 3.5 Comments Concerning the Pascal Procedures.- 3.5.1 Pascal.- 3.5.2 Concerning the Test Examples.- 3.5.3 Constants and Types.- 3.5.4 Format of the Iteration Procedures.- 3.5.5 Test Environment.- 4.
Methods of Jacobi and Gauss-Seidel and SOR Iteration in the Positive Definite Case.- 4.1 Eigenvalue Analysis of the Model Problem.- 4.2 Construction of Iterative Methods.- 4.2.1 Jacobi Iteration.- 4.2.1.1 Additive Splitting of the Matrix A.- 4.2.1.2 Definition of the Jacobi Method.- 4.2.1.3 Pascal Procedure.- 4.2.2 Gauss-Seidel Method.- 4.2.2.1 Definition.- 4.2.2.2 Pascal Procedure.- 4.3 Damped Iterative Methods.- 4.3.1 Damped Jacobi Method.- 4.3.1.1 Damping of a General Iterative Method.- 4.3.1.2 Pascal Procedures.- 4.3.2 Richardson Iteration.- 4.3.2.1 Definition.- 4.3.2.2 Pascal Procedures.- 4.3.3 SOR Method.- 4.3.3.1 Definition.- 4.3.3.2 Pascal Procedures.- 4.4 Convergence Analysis.- 4.4.1 Richardson Iteration.- 4.4.2 Jacobi Iteration.- 4.4.3 Gauss-Seidel and SOR Methods.- 4.5 Block Versions.- 4.5.1 Block-Jacobi Method.- 4.5.1.1 Definition.- 4.5.1.2 Pascal Procedures.- 4.5.2 Block-Gauss-Seidel and Block-SOR Method.- 4.5.2.1 Definition.- 4.5.2.2 Pascal Procedures.- 4.5.3 Convergence of the Block Variants.- 4.6 Computational Work of the Methods.- 4.6.1 Case of General Sparse Matrices.- 4.6.2 Amount of Work in the Model Case.- 4.7 Convergence Rates in the Case of the Model Problem.- 4.7.1 Richardson and Jacobi Iteration.- 4.7.2 Block-Jacobi Iteration.- 4.7.3 Numerical Examples for the Jacobi Variants.- 4.7.4 SOR and Block-SOR Iteration with Numerical Examples.- 4.8 Symmetric Iterations.- 4.8.1 General Form of the Symmetric Iteration.- 4.8.2 Convergence.- 4.8.3 Symmetric Gauss-Seidel Method.- 4.8.4 Adjoint and Corresponding Symmetric Iterations.- 4.8.5 SSOR: Symmetric SOR.- 4.8.6 Pascal Procedures and Numerical Results for the SSOR Method.- 5. Analysis in the 2-Cyclic Case.- 5.1 2-Cyclic Matrices.- 5.2 Preparatory Lemmata.- 5.3 Analysis of the Richardson Iteration.- 5.4 Analysis of the Jacobi Method.- 5.5 Analysis of the Gauss-Seidel Iteration.- 5.6 Analysis of the SOR Method.- 5.6.1 Consistently Ordered Matrices.- 5.6.2 Theorem of Young.- 5.6.3 Order Improvement by SOR.- 5.6.4 Practical Handling of the SOR Method.- 5.7 Application to the Model Problem.- 5.7.1 Analysis in the Model Case.- 5.7.2 Gauss-Seidel Iteration: Numerical Examples.- 5.7.3 SOR Iteration: Numerical Examples.- 5.8 Supplementary Remarks.- 5.8.1 p-Cyclic Matrices.- 5.8.2 Modified SOR.- 5.8.3 SSOR in the 2-Cyclic Case.- 5.8.4 Unsymmetric SOR Method.- 6. Analysis for M-Matrices.- 6.1 Positive Matrices.- 6.2 Graph of a Matrix and Irreducible Matrices.- 6.3 Perron-Frobenius Theory of Positive Matrices.- 6.4 M-Matrices.- 6.4.1 Definition.- 6.4.2 Connection Between M-Matrices and Jacobi Iteration.- 6.4.3 Diagonal Dominance.- 6.4.4 Further Criteria.- 6.5 Regular Splittings.- 6.6 Applications.- 7. 
Semi-Iterative Methods.- 7.1 First Formulation.- 7.1.1 The Semi-Iterative Sequence.- 7.1.2 Consistency and Asymptotical Convergence Rate.- 7.1.3 Error Representation.- 7.2 Second Formulation of a Semi-Iterative Method.- 7.2.1 General Representation.- 7.2.2 Pascal Implementation of the Second Formulation.- 7.2.3 Three-Term Recursion.- 7.3 Optimal Polynomials.- 7.3.1 Minimisation Problem.- 7.3.2 Discussion of the Second Minimisation Problem.- 7.3.3 Chebyshev Polynomials.- 7.3.4 Chebyshev Method.- 7.3.5 Order Improvement by the Chebyshev Method.- 7.3.6 Optimisation over Other Sets.- 7.3.7 Cyclic Iteration.- 7.3.8 Reformulation.- 7.3.9 Multi-Step Iterations.- 7.3.10 Pascal Procedures.- 7.3.11 Amount of Work of the Semi-Iterative Method.- 7.4 Application to Iterations Discussed Above.- 7.4.1 Preliminaries.- 7.4.2 Semi-Iterative Richardson Method.- 7.4.3 Semi-Iterative Jacobi and Block-Jacobi Method.- 7.4.4 Semi-Iterative SSOR and Block-SSOR Method.- 7.5 Method of Alternating Directions (ADI).- 7.5.1 Application to the Model Problem.- 7.5.2 General Representation.- 7.5.3 ADI Method in the Commutative Case.- 7.5.4 ADI Method and Semi-Iterative Methods.- 7.5.5 Pascal Procedures.- 7.5.6 Amount of Work and Numerical Examples.- 8. Transformations, Secondary Iterations, Incomplete Triangular Decompositions.- 8.1 Generation of Iterations by Transformations.- 8.1.1 Already Discussed Techniques for Generating Iterations.- 8.1.2 Left Transformation.- 8.1.3 Right Transformation.- 8.1.4 Two-Sided Transformation.- 8.2 Kaczmarz Iteration.- 8.2.1 Original Formulation.- 8.2.2 Interpretation as Gauss-Seidel Method.- 8.2.3 Pascal Procedures and Numerical Examples.- 8.3 Preconditioning.- 8.3.1 Meaning of "Preconditioning".- 8.3.2 Examples.- 8.3.3 Rules of Calculation for Condition Numbers.- 8.4 Secondary Iterations.- 8.4.1 Examples of Secondary Iterations.- 8.4.2 Convergence Analysis in the General Case.- 8.4.3 Analysis in the Symmetric Case.- 8.4.4 Estimate of the Amount of Work.- 8.4.5 Pascal Procedures.- 8.4.6 Numerical Examples.- 8.5 Incomplete Triangular Decompositions.- 8.5.1 Introduction and ILU Iteration.- 8.5.2 Incomplete Decomposition with Respect to a Star Pattern.- 8.5.3 Application to General Five-Point Formulae.- 8.5.4 Modified ILU Decompositions.- 8.5.5 On the Existence and Stability of the ILU Decomposition.- 8.5.6 Properties of the ILU Decomposition.- 8.5.7 ILU Decompositions Corresponding to Other Patterns.- 8.5.8 Approximative ILU Decompositions.- 8.5.9 Blockwise ILU Decompositions.- 8.5.10 Pascal Procedures.- 8.5.11 Numerical Examples.- 8.5.12 Comments.- 8.6 A Superfluous Term: Time-Stepping Methods.- 9.
Conjugate Gradient Methods.- 9.1 Linear Systems of Equations as Minimisation Problem.- 9.1.1 Minimisation Problem.- 9.1.2 Search Directions.- 9.1.3 Other Quadratic Functionals.- 9.1.4 Complex Case.- 9.2 Gradient Method.- 9.2.1 Construction.- 9.2.2 Properties of the Gradient Method.- 9.2.3 Numerical Examples.- 9.2.4 Gradient Method Based on Other Iterations.- 9.2.5 Pascal Procedures and Numerical Examples.- 9.3 The Method of the Conjugate Directions.- 9.3.1 Optimality with Respect to a Direction.- 9.3.2 Conjugate Directions.- 9.4 Conjugate Gradient Method (cg Method).- 9.4.1 First Formulation.- 9.4.2 cg Method (Applied to the Richardson Iteration).- 9.4.3 Convergence Analysis.- 9.4.4 cg Method Applied to Symmetric Iterations.- 9.4.5 Pascal Procedures.- 9.4.6 Numerical Examples in the Model Case.- 9.4.7 Amount of Work of the cg Method.- 9.4.8 Suitability for Secondary Iterations.- 9.5 Generalisations.- 9.5.1 Formulation of the cg Method with a More General Bilinear Form.- 9.5.2 Method of Conjugate Residuals.- 9.5.3 Three-Term Recursion for p^m.- 9.5.4 Stabilised Method of the Conjugate Residuals.- 9.5.5 Convergence Results for Indefinite Matrices A.- 9.5.6 Pascal Procedures.- 9.5.7 Numerical Examples.- 9.5.8 Method of Orthogonal Directions.- 9.5.9 Solution of Unsymmetric Systems.- 9.5.10 Further Comments.- 10. Multi-Grid Methods.- 10.1 Introduction.- 10.1.1 Smoothing.- 10.1.2 Hierarchy of Systems of Equations.- 10.1.3 Prolongation.- 10.1.4 Restriction.- 10.1.5 Coarse-Grid Correction.- 10.2 Two-Grid Method.- 10.2.1 Algorithm.- 10.2.2 Modifications.- 10.2.3 Iteration Matrix.- 10.2.4 Pascal Procedures.- 10.2.5 Numerical Examples.- 10.3 Analysis for a One-Dimensional Example.- 10.3.1 Fourier Analysis.- 10.3.2 Transformed Quantities.- 10.3.3 Convergence Results.- 10.4 Multi-Grid Iteration.- 10.4.1 Algorithm.- 10.4.2 Pascal Procedures.- 10.4.3 Numerical Examples.- 10.4.4 Computational Work.- 10.4.5 Iteration Matrix.- 10.5 Nested Iteration.- 10.5.1 Algorithm.- 10.5.2 Error Analysis.- 10.5.3 Amount of Computational Work.- 10.5.4 Pascal Procedures.- 10.5.5 Numerical Examples.- 10.5.6 Comments.- 10.6 Convergence Analysis.- 10.6.1 Summary.- 10.6.2 Smoothing Property.- 10.6.3 Approximation Property.- 10.6.3.1 Formulation.- 10.6.3.2 Galerkin Discretisation.- 10.6.3.3 Hierarchy of the Systems of Equations.- 10.6.3.4 Canonical Prolongation and Restriction.- 10.6.3.5 Error Estimate of the Galerkin Solution.- 10.6.3.6 Proof of the Approximation Property.- 10.6.4 Convergence of the Two-Grid Iteration.- 10.6.5 Convergence of the Multi-Grid Iteration.- 10.6.6 Case of Weaker Regularity.- 10.7 Symmetric Multi-Grid Methods.- 10.7.1 Symmetric Multi-Grid Algorithm.- 10.7.2 Two-Grid Convergence for ν1 > 0, ν2 > 0.- 10.7.3 Smoothing Property in the Symmetric Case.- 10.7.4 Strengthened Two-Grid Convergence Estimates.- 10.7.5 V-Cycle Convergence.- 10.7.6 Multi-Grid Convergence for All ν > 0.- 10.8 Combination of Multi-Grid Methods with Semi-Iterations.- 10.8.1 Semi-Iterative Smoothers.- 10.8.2 Damped Coarse-Grid Corrections.- 10.8.3 Multi-Grid Iteration as Basis of the Conjugate Gradient Method.- 10.9 Further Comments.- 10.9.1 Multi-Grid Method of the Second Kind.- 10.9.2 History of the Multi-Grid Method.- 10.9.3 Robust Methods.- 10.9.4 Frequency Filtering Decompositions.- 11.
Domain Decomposition Methods.- 11.1 Introduction.- 11.2 Formulation of the Domain Decomposition Method.- 11.2.1 General Construction.- 11.2.2 The Prolongations.- 11.2.3 Multiplicative and Additive Schwarz Iteration.- 11.2.4 Interpretation as Gauss-Seidel and Jacobi Iteration.- 11.2.5 Classical Schwarz Iteration.- 11.2.6 Approximate Solution of the Subproblems.- 11.2.7 Strengthened Estimate A ≤ ΓW.- 11.3 Properties of the Additive Schwarz Iteration.- 11.3.1 Parallelism.- 11.3.2 Condition Estimates.- 11.3.3 Convergence Statements.- 11.4 Analysis of the Multiplicative Schwarz Iteration.- 11.4.1 Convergence Statements.- 11.4.2 Proofs of the Convergence Theorems.- 11.5 Examples.- 11.5.1 Schwarz Methods with Proper Domain Decomposition.- 11.5.2 Additive Schwarz Iteration with Coarse-Grid Correction.- 11.5.3 Formulation in the Case of a Galerkin Discretisation.- 11.6 Multi-Grid Methods as Subspace Decomposition Method.- 11.6.1 The Analysis of Braess.- 11.6.2 V-Cycle Interpreted as Multiplicative Schwarz Iteration.- 11.6.3 Proof of the V-Cycle Convergence.- 11.6.4 Method of the Hierarchical Basis.- 11.6.5 Multi-Level Schwarz Iteration.- 11.6.6 Further Approaches for Decompositions into Subspaces.- 11.6.7 Indefinite and Unsymmetric Systems.- 11.7 Schur Complement Methods.- 11.7.1 Nonoverlapping Domain Decomposition with Interior Boundary.- 11.7.2 Direct Solution.- 11.7.3 Capacitance Matrix Method.- 11.7.4 Domain Decomposition Method with Nonoverlapping Domains.- 11.7.5 Multi-Gridlike Domain Decomposition Methods.- 11.7.6 Further Remarks.

Publisher Marketing: C. F. Gauss, in a letter of Dec. 26, 1823 to Gerling: "I recommend this mode to you for imitation. You will hardly ever again eliminate directly, at least not when you have more than two unknowns. The indirect procedure can be carried out while half asleep, or one can think of other things while doing it." [C. F. Gauss: Werke vol. 9, Göttingen, p. 280, 1903] What difference exists between solving large and small systems of equations? The standard methods well known to any student of linear algebra are applicable to all systems, whether large or small. The necessary amount of work, however, increases dramatically with the size, so one has to search for algorithms that most efficiently and accurately solve systems of 1000, 10,000, or even one million equations. The choice of algorithms depends on the special properties the matrices arising in practice have. An important class of large systems arises from the discretisation of partial differential equations. In this case, the matrices are sparse (i.e., they contain mostly zeros) and well-suited to iterative algorithms. Because of the background in partial differential equations, this book is closely connected with the author's Theory and Numerical Treatment of Elliptic Differential Equations, whose English translation has also been published by Springer-Verlag. This book grew out of a series of lectures given by the author at the Christian-Albrecht University of Kiel to students of mathematics.
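
To give a flavour of the methods catalogued above, the following short sketch applies the Jacobi iteration of Chapter 4 to the five-point discretisation of the Poisson model problem of Section 1.2. It is an illustration only, written in Python rather than as one of the book's Pascal procedures; the grid size n = 32, the chosen right-hand side, and the helper name jacobi_poisson are assumptions made for this example, not taken from the book.

import numpy as np

def jacobi_poisson(f, u0, sweeps):
    """Run `sweeps` Jacobi steps for the five-point Poisson problem.

    f, u0 : (n+1, n+1) arrays including the boundary rows/columns;
            the boundary values of u0 are kept fixed (Dirichlet data).
    """
    n = f.shape[0] - 1
    h2 = (1.0 / n) ** 2        # mesh width h = 1/n
    u = u0.copy()
    for _ in range(sweeps):
        unew = u.copy()
        # Jacobi: every interior value is recomputed from the *old* neighbours,
        # u_ij <- (h^2 f_ij + u_{i-1,j} + u_{i+1,j} + u_{i,j-1} + u_{i,j+1}) / 4
        unew[1:-1, 1:-1] = 0.25 * (
            h2 * f[1:-1, 1:-1]
            + u[:-2, 1:-1] + u[2:, 1:-1]
            + u[1:-1, :-2] + u[1:-1, 2:]
        )
        u = unew
    return u

if __name__ == "__main__":
    n = 32                                   # hypothetical grid size for the demo
    x = np.linspace(0.0, 1.0, n + 1)
    X, Y = np.meshgrid(x, x, indexing="ij")
    f = 2 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y)   # -Laplace(u) = f
    exact = np.sin(np.pi * X) * np.sin(np.pi * Y)              # known solution
    u = jacobi_poisson(f, np.zeros_like(f), sweeps=2000)
    print("max error after 2000 Jacobi sweeps:", np.abs(u - exact).max())

Each sweep only ever touches the five nonzero coefficients per equation; this is exactly the sparsity, mentioned in the marketing text above, that makes iterative methods attractive once a system has 10,000 or a million unknowns.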


432 pages, bibliography

Media: Books · Paperback Book (book with soft cover and glued back)
Released: September 27, 2011
ISBN13: 9781461287247
Publisher: Springer-Verlag New York Inc.
Pages: 432
Dimensions: 271 × 157 × 27 mm · 635 g
Language: English
