- I. Mathematics foundation
- 1. Vector spaces
- 1.1. Introduction
- 1.2. Vector spaces
- 1.3. Functions on vector spaces
- 1.4. Geometric properties of vector spaces
- 1.5. The p norms
- 1.6. Subspace and basis
- 1.7. Orthogonality
- 1.8. Outlook: Banach and Hilbert spaces
- 1.9. Notes
- 2. Linear transformations
- 2.1. Introduction
- 2.2. Four fundamental vector spaces
- 2.3. Vector spaces of matrices
- 2.4. Square matrices as linear operators
- 2.5. Matrix norms
- 2.6. Orthogonal matrices
- 2.7. Projectors
- 2.8. Outlook: bivectors and tensors
- 2.9. Notes
- II. Computer science foundation
- 3. Algorithms and data structures
- 3.1. Introduction
- 3.2. Correctness and complexity of algorithms
- 3.3. Accuracy and stability of floating point arithmetic
- 3.4. Data and memory
- 3.5. Data structures
- 3.6. Graph algorithms
- 3.7. Computer graphics and the convolution algorithm
- 3.8. Outlook: simulation and time stepping
- 3.9. Notes
- 4. High performance computing
- 4.1. Introduction
- 4.2. Parallel computing
- 4.3. Parallel programming models
- 4.4. Accelerators
- 4.5. Data centric computing
- 4.6. Parallel performance
- 4.7. Emerging computing platforms
- 4.8. Outlook: quantum computing
- 4.9. Notes
- III. Matrix factorization
- 5. Direct methods for systems of linear equations
- 5.1. Introduction
- 5.2. Systems of linear equations
- 5.3. Gram-Schmidt QR factorization
- 5.4. Householder QR factorization
- 5.5. LU factorization
- 5.6. Cholesky factorization
- 5.7. Block matrix algorithms
- 5.8. Sparse matrix algorithms
- 5.9. Outlook: differential and integral operators
- 5.10. Notes
- 6. Eigenvalue and singular value decompositions
- 6.1. Introduction
- 6.2. Complex vector spaces
- 6.3. Eigenvalues and eigenvectors
- 6.4. Similarity transformations and spectral theorems
- 6.5. Generalized eigenvalues
- 6.6. Singular value decomposition
- 6.7. The QR algorithm
- 6.8. The implicit QR algorithm
- 6.9. Outlook: continuum mechanics
- 6.10. Notes
- IV. Iterative methods
- 7. Iterative methods for linear equations
- 7.1. Introduction
- 7.2. Convergence of iterative methods
- 7.3. Error estimation
- 7.4. Conditioning and stability
- 7.5. Fixed point iteration
- 7.6. Richardson iteration and preconditioning
- 7.7. Iterative methods based on matrix splitting
- 7.8. GMRES and Arnoldi iteration
- 7.9. The conjugate gradient method
- 7.10. Low rank matrix approximations
- 7.11. Outlook: linear dynamical systems
- 7.12. Notes
- 8. Iterative methods for nonlinear equations
- 8.1. Introduction
- 8.2. Continuous and differentiable functions
- 8.3. Nonlinear scalar equations
- 8.4. Systems of nonlinear equations
- 8.5. Recurrence relations, fractals, and chaos
- 8.6. Outlook: nonlinear dynamical systems
- 8.7. Notes
- V. Approximation
- 9. Function approximation
- 9.1. Introduction
- 9.2. Optimal polynomial approximation
- 9.3. Polynomial interpolation
- 9.4. Regression
- 9.5. Projection methods
- 9.6. Transforms
- 9.7. Outlook: finite element methods
- 9.8. Notes
- 10. Function approximation for multidimensional domains
- 10.1. Introduction
- 10.2. Approximation for multidimensional domains
- 10.3. Structured grids
- 10.4. Unstructured meshes
- 10.5. Mesh refinement and coarsening
- 10.6. Polynomial approximation on simplicial meshes
- 10.7. The reference element
- 10.8. Barycentric coordinates
- 10.9. Domain decomposition methods
- 10.10. Multigrid methods
- 10.11. Outlook: spline approximation
- 10.12. Notes
- VI. Integration
- 11. Integration methods
- 11.1. Introduction
- 11.2. Newton-Cotes quadrature
- 11.3. Gauss quadrature
- 11.4. The fundamental theorem of calculus
- 11.5. Measures and the Lebesgue integral
- 11.6. Lp spaces
- 11.7. The divergence theorem
- 11.8. Outlook: integral operators and kernels
- 11.9. Notes
- 12. Stochastic methods
- 12.1. Introduction
- 12.2. A very brief review of probability theory
- 12.3. Stochastic processes
- 12.4. Random samples
- 12.5. Monte Carlo integration
- 12.6. Emergence and agent-based modeling
- 12.7. Outlook: model order reduction
- 12.8. Notes
- VII. Differential equations
- 13. Scalar initial value problems
- 13.1. Introduction
- 13.2. The scalar initial value problem
- 13.3. Stability of the initial value problem
- 13.4. Stability of time stepping methods
- 13.5. A priori error analysis
- 13.6. Adjoint based a posteriori error analysis
- 13.7. Outlook: stochastic differential equations
- 13.8. Notes
- 14. Systems of initial value problems
- 14.1. Introduction
- 14.2. Systems of initial value problems
- 14.3. Harmonic oscillators
- 14.4. Energy analysis of harmonic oscillators
- 14.5. Particle models
- 14.6. Compartment models
- 14.7. Lumped parameter models and bond graphs
- 14.8. Adaptive time stepping algorithms
- 14.9. Parallel time stepping algorithms
- 14.10. Outlook: partial differential equations
- 14.11. Notes
- VIII. Optimization and learning
- 15. Optimization
- 15.1. Introduction
- 15.2. Convex minimization
- 15.3. Gradient descent minimization
- 15.4. Multi-objective and global minimization
- 15.5. Constrained minimization
- 15.6. Lagrange multipliers
- 15.7. Design optimization
- 15.8. Outlook: optimal control
- 15.9. Notes
- 16. Learning from data
- 16.1. Introduction
- 16.2. Geometric methods
- 16.3. Statistical decision theory
- 16.4. Deep learning
- 16.5. Backpropagation
- 16.6. Generative adversarial networks
- 16.7. Graph neural networks
- 16.8. Outlook: data-driven dynamical systems
- 16.9. Notes
- IX. Epilogue
- 17. Closing remarks
Computational methods are an integral part of most scientific disciplines, and a rudimentary understanding of their potential and limitations is essential for any scientist or engineer. This textbook introduces computational science through a set of methods and algorithms, with the aim of familiarizing the reader with the field's theoretical foundations and providing the practical skills to use and develop computational methods. Methods in Computational Science extends the classical syllabus with new material, including high performance computing, adjoint methods, machine learning, randomized algorithms, and quantum computing. It is centered around a set of fundamental algorithms presented as pseudocode, combines theoretical material with examples and exercises, and provides Python implementations of many key algorithms.

Methods in Computational Science is a textbook for computer science and data science students at the advanced undergraduate and graduate level. It is suitable for courses such as Advanced Numerical Analysis, Special Topics in Numerical Analysis, Topics in Data Science, Topics in Numerical Optimization, and Topics in Approximation Theory. Because the text is self-contained, it can also support continued learning for practicing mathematicians, data scientists, computer scientists, and engineers in the field of computational science.
(source: Nielsen Book Data)
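To give a flavor of the algorithm-centric, pseudocode-plus-Python style the description refers to, here is a minimal sketch of one topic from the contents, the classical Gram-Schmidt QR factorization of section 5.3. This is an illustration only, not the book's own implementation; the function name and interface are chosen for this example.

```python
import numpy as np

def gram_schmidt_qr(A):
    """Classical Gram-Schmidt QR factorization of a full-rank m x n matrix A.

    Returns Q (m x n, orthonormal columns) and R (n x n, upper triangular)
    such that A = Q @ R.
    """
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        # Start from the j-th column of A and subtract its projections
        # onto the previously computed orthonormal columns of Q.
        v = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]
            v -= R[i, j] * Q[:, i]
        # Normalize the remainder to obtain the next orthonormal column.
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R
```

For example, `Q, R = gram_schmidt_qr(A)` reconstructs `A` as `Q @ R` with `Q.T @ Q` equal to the identity; in practice the book's later sections (Householder QR, 5.4) address the numerical instability of this classical variant.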