Scientific Computing
An Introduction using Maple and MATLAB

Walter Gander, Martin J. Gander, Felix Kwok

Published by Springer-Verlag

Table of Contents

Chapter 1. Why Study Scientific Computing?
   1.1 Example: Designing a Suspension Bridge
     1.1.1 Constructing a Model
     1.1.2 Simulating the Bridge
     1.1.3 Calculating Resonance Frequencies
     1.1.4 Matching Simulations with Experiments
   1.2 Navigating this Book: Sample Courses
     1.2.1 A First Course in Numerical Analysis
     1.2.2 Advanced Courses
     1.2.3 Dependencies Between Chapters

Chapter 2. Finite Precision Arithmetic
   2.1 Introductory Example
   2.2 Real Numbers and Machine Numbers
   2.3 The IEEE Standard
     2.3.1 Single Precision
     2.3.2 Double Precision
   2.4 Rounding Errors
     2.4.1 Standard Model of Arithmetic
     2.4.2 Cancellation
   2.5 Condition of a Problem
     2.5.1 Norms
     2.5.2 Big- and Little-O Notation
     2.5.3 Condition Number
   2.6 Stable and Unstable Algorithms
     2.6.1 Forward Stability
     2.6.2 Backward Stability
   2.7 Calculating with Machine Numbers: Tips and Tricks
     2.7.1 Associative Law
     2.7.2 Summation Algorithm by W. Kahan
     2.7.3 Small Numbers
     2.7.4 Monotonicity
     2.7.5 Avoiding Overflow
     2.7.6 Testing for Overflow
     2.7.7 Avoiding Cancellation
     2.7.8 Computation of Mean and Standard Deviation
   2.8 Stopping Criteria
     2.8.1 Machine-independent Algorithms
     2.8.2 Test Successive Approximations
     2.8.3 Check the Residual
   2.9 Problems

Chapter 3. Linear Systems of Equations
   3.1 Introductory Example
   3.2 Gaussian Elimination
     3.2.1 LU Factorization
     3.2.2 Backward Stability
     3.2.3 Pivoting and Scaling
     3.2.4 Sum of Rank-One Matrices
   3.3 Condition of a System of Linear Equations
   3.4 Cholesky Decomposition
     3.4.1 Symmetric Positive Definite Matrices
     3.4.2 Stability and Pivoting
   3.5 Elimination with Givens Rotations
   3.6 Banded Matrices
     3.6.1 Storing Banded Matrices
     3.6.2 Tridiagonal Systems
     3.6.3 Solving Banded Systems with Pivoting
     3.6.4 Using Givens Rotations
   3.7 Problems

Chapter 4. Interpolation
   4.1 Introductory Examples
   4.2 Polynomial Interpolation
     4.2.1 Lagrange Polynomials
     4.2.2 Interpolation Error
     4.2.3 Barycentric Formula
     4.2.4 Newton's Interpolation Formula
     4.2.5 Interpolation Using Orthogonal Polynomials
     4.2.6 Change of Basis, Relation with LU and QR
     4.2.7 Aitken-Neville Interpolation
     4.2.8 Extrapolation
   4.3 Piecewise Interpolation with Polynomials
     4.3.1 Classical Cubic Splines
     4.3.2 Derivatives for the Spline Function
     4.3.3 Sherman-Morrison-Woodbury Formula
     4.3.4 Spline Curves
   4.4 Trigonometric Interpolation
     4.4.1 Trigonometric Polynomials
     4.4.2 Fast Fourier Transform (FFT)
     4.4.3 Trigonometric Interpolation Error
     4.4.4 Convolutions Using FFT
   4.5 Problems

Chapter 5. Nonlinear Equations
   5.1 Introductory Example
   5.2 Scalar Nonlinear Equations
     5.2.1 Bisection
     5.2.2 Fixed Point Iteration
     5.2.3 Convergence Rates
     5.2.4 Aitken Acceleration and the epsilon-Algorithm
     5.2.5 Construction of One-Step Iteration Methods
     5.2.6 Multiple Zeros
     5.2.7 Multi-Step Iteration Methods
     5.2.8 A New Iteration Formula
     5.2.9 Dynamical Systems
   5.3 Zeros of Polynomials
     5.3.1 Condition of the Zeros
     5.3.2 Companion Matrix
     5.3.3 Horner's Scheme
     5.3.4 Number Conversions
     5.3.5 Newton's Method: Classical Version
     5.3.6 Newton Method Using Taylor Expansions
     5.3.7 Newton Method for Real Simple Zeros
     5.3.8 Nickel's Method
     5.3.9 Laguerre's Method
   5.4 Nonlinear Systems of Equations
     5.4.1 Fixed Point Iteration
     5.4.2 Theorem of Banach
     5.4.3 Newton's Method
     5.4.4 Continuation Methods
   5.5 Problems

Chapter 6. Least Squares Problems
   6.1 Introductory Examples
   6.2 Linear Least Squares Problem and the Normal Equations
   6.3 Singular Value Decomposition (SVD)
     6.3.1 Pseudoinverse
     6.3.2 Fundamental Subspaces
     6.3.3 Solution of the Linear Least Squares Problem
     6.3.4 SVD and Rank
   6.4 Condition of the Linear Least Squares Problem
     6.4.1 Differentiation of Pseudoinverses
     6.4.2 Sensitivity of the Linear Least Squares Problem
     6.4.3 Normal Equations and Condition
   6.5 Algorithms Using Orthogonal Matrices
     6.5.1 QR Decomposition
     6.5.2 Method of Householder
     6.5.3 Method of Givens
     6.5.4 Fast Givens
     6.5.5 Gram-Schmidt Orthogonalization
     6.5.6 Gram-Schmidt with Reorthogonalization
     6.5.7 Partial Reorthogonalization
     6.5.8 Updating and Downdating the QR Decomposition
     6.5.9 Covariance Matrix Computations Using QR
   6.6 Linear Least Squares Problems with Linear Constraints
     6.6.1 Solution with SVD
     6.6.2 Classical Solution Using Lagrange Multipliers
     6.6.3 Direct Elimination of the Constraints
     6.6.4 Null Space Method
   6.7 Linear Least Squares Problems with Quadratic Constraint
     6.7.1 Fitting Lines
     6.7.2 Fitting Ellipses
     6.7.3 Fitting Hyperplanes, Collinearity Test
     6.7.4 Procrustes or Registration Problem
     6.7.5 Total Least Squares
   6.8 Nonlinear Least Squares Problems
     6.8.1 Notations and Definitions
     6.8.2 Newton's Method
     6.8.3 Gauss-Newton Method
     6.8.4 Levenberg-Marquardt Algorithm
   6.9 Least Squares Fit with Piecewise Functions
     6.9.1 Structure of the Linearized Problem
     6.9.2 Piecewise Polynomials
     6.9.3 Examples
   6.10 Problems

Chapter 7. Eigenvalue Problems
   7.1 Introductory Example
   7.2 A Brief Review of the Theory
     7.2.1 Eigen-Decomposition of a Matrix
     7.2.2 Characteristic Polynomial
     7.2.3 Similarity Transformations
     7.2.4 Diagonalizable Matrices
     7.2.5 Exponential of a Matrix
     7.2.6 Condition of Eigenvalues
   7.3 Method of Jacobi
     7.3.1 Reducing Cost by Using Symmetry
     7.3.2 Stopping Criterion
     7.3.3 Algorithm of Rutishauser
     7.3.4 Remarks and Comments on Jacobi
   7.4 Power Methods
     7.4.1 Power Method
     7.4.2 Inverse Power Method (Shift-and-Invert)
     7.4.3 Orthogonal Iteration
   7.5 Reduction to Simpler Form
     7.5.1 Computing Givens Rotations
     7.5.2 Reduction to Hessenberg Form
     7.5.3 Reduction to Tridiagonal Form
   7.6 QR Algorithm
     7.6.1 Some History
     7.6.2 QR Iteration
     7.6.3 Basic Facts
     7.6.4 Preservation of Form
     7.6.5 Symmetric Tridiagonal Matrices
     7.6.6 Implicit QR Algorithm
     7.6.7 Convergence of the QR Algorithm
     7.6.8 Wilkinson's Shift
     7.6.9 Test for Convergence and Deflation
     7.6.10 Unreduced Matrices Have Simple Eigenvalues
     7.6.11 Specific Numerical Examples
     7.6.12 Computing the Eigenvectors
   7.7 Computing the Singular Value Decomposition (SVD)
     7.7.1 Transformations
     7.7.2 Householder-Rutishauser Bidiagonalization
     7.7.3 Golub-Kahan-Lanczos Bidiagonalization
     7.7.4 Eigenvalues and Singular Values
     7.7.5 Algorithm of Golub-Reinsch
   7.8 QD Algorithm
     7.8.1 Progressive QD Algorithm
     7.8.2 Orthogonal LR-Cholesky Algorithm
     7.8.3 Differential QD Algorithm
     7.8.4 Improving Convergence Using Shifts
     7.8.5 Connection to Orthogonal Decompositions
   7.9 Problems

Chapter 8. Differentiation
   8.1 Introductory Example
   8.2 Finite Differences
     8.2.1 Generating Finite Difference Approximations
     8.2.2 Discrete Operators for Partial Derivatives
   8.3 Algorithmic Differentiation
     8.3.1 Idea Behind Algorithmic Differentiation
     8.3.2 Rules for Algorithmic Differentiation
     8.3.3 Example: Circular Billiard
     8.3.4 Example: Nonlinear Eigenvalue Problems
   8.4 Problems

Chapter 9. Quadrature
   9.1 Computer Algebra and Numerical Approximations
   9.2 Newton-Cotes Rules
     9.2.1 Error of Newton-Cotes Rules
     9.2.2 Composite Rules
     9.2.3 Euler-Maclaurin Summation Formula
     9.2.4 Romberg Integration
   9.3 Gauss Quadrature
     9.3.1 Characterization of Nodes and Weights
     9.3.2 Orthogonal Polynomials
     9.3.3 Computing the Weights
     9.3.4 Golub-Welsch Algorithm
   9.4 Adaptive Quadrature
     9.4.1 Stopping Criterion
     9.4.2 Adaptive Simpson Quadrature
     9.4.3 Adaptive Lobatto Quadrature
   9.5 Problems

Chapter 10. Numerical Ordinary Differential Equations
   10.1 Introductory Examples
   10.2 Basic Notation and Solution Techniques
     10.2.1 Notation, Existence of Solutions
     10.2.2 Analytical and Numerical Solutions
     10.2.3 Solution by Taylor Expansions
     10.2.4 Computing with Power Series
     10.2.5 Euler's Method
     10.2.6 Autonomous ODE, Reduction to First Order System
   10.3 Runge-Kutta Methods
     10.3.1 Explicit Runge-Kutta Methods
     10.3.2 Local Truncation Error
     10.3.3 Order Conditions
     10.3.4 Convergence
     10.3.5 Adaptive Integration
     10.3.6 Implicit Runge-Kutta Methods
   10.4 Linear Multistep Methods
     10.4.1 Local Truncation Error
     10.4.2 Order Conditions
     10.4.3 Zero Stability
     10.4.4 Convergence
   10.5 Stiff Problems
     10.5.1 A-Stability
     10.5.2 A Nonlinear Example
     10.5.3 Differential Algebraic Equations
   10.6 Geometric Integration
     10.6.1 Symplectic Methods
     10.6.2 Energy Preserving Methods
   10.7 Delay Differential Equations
   10.8 Problems

Chapter 11. Iterative Methods for Linear Systems
   11.1 Introductory Example
   11.2 Solution by Iteration
     11.2.1 Matrix Splittings
     11.2.2 Residual, Error and the Difference of Iterates
     11.2.3 Convergence Criteria
     11.2.4 Singular Systems
     11.2.5 Convergence Factor and Convergence Rate
   11.3 Classical Stationary Iterative Methods
     11.3.1 Regular Splittings and M-Matrices
     11.3.2 Jacobi
     11.3.3 Gauss-Seidel
     11.3.4 Successive Over-relaxation (SOR)
     11.3.5 Richardson
   11.4 Local Minimization by Nonstationary Iterative Methods
     11.4.1 Conjugate Residuals
     11.4.2 Steepest Descent
   11.5 Global Minimization with Chebyshev Polynomials
     11.5.1 Chebyshev Semi-Iterative Method
     11.5.2 Acceleration of SSOR
   11.6 Global Minimization by Extrapolation
     11.6.1 Minimal Polynomial Extrapolation (MPE)
     11.6.2 Reduced Rank Extrapolation (RRE)
     11.6.3 Modified Minimal Polynomial Extrapolation (MMPE)
     11.6.4 Topological epsilon-Algorithm (TEA)
     11.6.5 Recursive Topological epsilon-Algorithm
   11.7 Krylov Subspace Methods
     11.7.1 The Conjugate Gradient Method
     11.7.2 Arnoldi Process
     11.7.3 The Symmetric Lanczos Algorithm
     11.7.4 Solving Linear Equations with Arnoldi
     11.7.5 Solving Linear Equations with Lanczos
     11.7.6 Generalized Minimum Residual: GMRES
     11.7.7 Classical Lanczos for Non-Symmetric Matrices
     11.7.8 Biconjugate Gradient Method (BiCG)
     11.7.9 Further Krylov Methods
   11.8 Preconditioning
   11.9 Problems

Chapter 12. Optimization
   12.1 Introductory Examples
     12.1.1 How Much Daily Exercise is Optimal?
     12.1.2 Mobile Phone Networks
     12.1.3 A Problem from Operations Research
     12.1.4 Classification of Optimization Problems
   12.2 Mathematical Optimization
     12.2.1 Local Minima
     12.2.2 Constrained Minima and Lagrange Multipliers
     12.2.3 Equality and Inequality Constraints
   12.3 Unconstrained Optimization
     12.3.1 Line Search Methods
     12.3.2 Trust Region Methods
     12.3.3 Direct Methods
   12.4 Constrained Optimization
     12.4.1 Linear Programming
     12.4.2 Penalty and Barrier Functions
     12.4.3 Interior Point Methods
     12.4.4 Sequential Quadratic Programming
   12.5 Problems
