Large and Sparse Matrix Problems (ENM and GU)
7.5 credit points
January - March 2014.
- See you at the course start on Monday January 20, time 10.00 - 12.45 in room MVH11 at Mathematical Sciences building.
Some words about the course
Large and sparse matrix problems arise, for instance, in the numerical approximation of differential equations, in network problems and in optimisation. In this course we study numerical techniques for solving systems of linear equations and eigenvalue problems with such matrices.
For systems of linear equations we present two classes of methods: iterative and direct. Among the iterative methods we study basic stationary methods like the Jacobi and SOR methods, orthogonalising methods like conjugate gradients, and multigrid methods. The direct methods are based on Gaussian elimination with different renumberings of the unknowns in order to keep computing time and memory requirements small.
The eigenvalue routines presented are based on Lanczos and Arnoldi algorithms with or without spectral transformations.
When solving the homework assignments (inlämningsuppgifter) and computer exercises (laborationer), the students gain experience in implementing and evaluating different algorithms for sparse problems.
For some learning outcomes of this course, see outcomes.
Teaching and Learning Methods
Lectures, five homework assignments (solved individually) and five computer exercises (performed in groups of two students).
The homework assignments give bonus points for the examination, in total at most five points (one point per assignment).
Passed computer exercises will be graded with grades 3, 4 or 5, based on quantity and/or quality.
Mondays 10-12 in room MVH11 (Mathematical Sciences building)
Wednesdays 15-17 in MVH11 (first week 13-15).
The following sections in the textbook will be considered.
Chapter 6: sections 1 - 6 and 9.
Chapter 7: sections 1 - 4.
The following questions in the textbook are recommended:
Chapter 6: 1, 2, 3, 4, 5, 6, 11, 12, 13, 14
Chapter 7: 1, 2, 3
Some additional questions will be given along the lectures.
Detailed program online
Outline of the course, problems and methods.
Available software on the web.
Large, sparse problems in applications
The model test problem: Poisson's equation in one and two dimensions, section 6.3.
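The model problem can be assembled in a few lines. The sketch below (in Python rather than the MATLAB used in the exercises) builds the 1D Poisson matrix tridiag(-1, 2, -1) and the 2D five-point matrix via Kronecker products, as in section 6.3; dense lists are used for clarity only.

```python
# Sketch of the model problem (sec. 6.3): the 1D Poisson matrix
# tridiag(-1, 2, -1) and the 2D five-point matrix built from Kronecker
# products. Dense lists of lists are used for readability; in the
# course the whole point is to exploit the sparsity of these matrices.

def poisson_1d(n):
    """The n-by-n tridiagonal matrix with 2 on the diagonal, -1 beside it."""
    return [[2 if i == j else -1 if abs(i - j) == 1 else 0
             for j in range(n)] for i in range(n)]

def eye(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def kron(A, B):
    """Kronecker product of two dense matrices."""
    ra, rb = len(A), len(B)
    ca, cb = len(A[0]), len(B[0])
    return [[A[i // rb][j // cb] * B[i % rb][j % cb]
             for j in range(ca * cb)] for i in range(ra * rb)]

def poisson_2d(n):
    """Five-point matrix for the n-by-n grid: kron(I, T) + kron(T, I)."""
    T, I = poisson_1d(n), eye(n)
    K1, K2 = kron(I, T), kron(T, I)
    m = n * n
    return [[K1[i][j] + K2[i][j] for j in range(m)] for i in range(m)]
```

Each row of poisson_2d(n) has 4 on the diagonal and at most four -1 entries, matching the five-point stencil.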
Discussion on HA1.
FIRST PART OF THE COURSE: Iterative methods for systems of linear equations, Chapter 6 in the book.
CE1 is based on this lecture.
Basic iterative methods: the Jacobi and Gauss-Seidel methods.
The spectral radius and convergence,
Lemma 6.4, Lemma 6.5, and Thm 6.1.
Rate of convergence
Good properties of basic iterative methods
Ordering of unknowns, red-black ordering
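Red-black ordering can be illustrated with a tiny helper (a hypothetical Python sketch, not the course's red_black.p): list the red grid points (i + j even) before the black ones for an n-by-n grid in natural ordering.

```python
def red_black_order(n):
    """Permutation for an n-by-n grid in natural (row-by-row) ordering:
    red points (i + j even) first, then black points (i + j odd)."""
    red = [i * n + j for i in range(n) for j in range(n) if (i + j) % 2 == 0]
    black = [i * n + j for i in range(n) for j in range(n) if (i + j) % 2 == 1]
    return red + black
```

Permuting the five-point matrix this way gives a 2x2 block structure whose diagonal blocks are diagonal, since red points couple only to black neighbours.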
Irreducibility, strongly connected graph
Convergence criterion for Jacobi's and Gauss-Seidel methods, Thm 6.3.
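Both basic sweeps follow directly from the splitting; a minimal Python sketch in dense storage, for illustration only:

```python
def jacobi_step(A, b, x):
    """One Jacobi sweep: every component is updated from the old iterate."""
    n = len(b)
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

def gauss_seidel_step(A, b, x):
    """One Gauss-Seidel sweep: each update uses the newest values."""
    n = len(b)
    x = x[:]
    for i in range(n):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x
```

On the model matrix tridiag(-1, 2, -1) both iterations converge, Gauss-Seidel roughly twice as fast, since its spectral radius is the square of Jacobi's.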
SOR, successive overrelaxation method
Symmetrizable iterative methods (not in the book)
The symmetric SOR (SSOR) method.
Convergence rate of SOR, property (A), consistent ordering.
Important convergence results: Thm 6.6 and Thm 6.7.
Optimal choice of SOR-parameter.
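For a consistently ordered matrix with Jacobi spectral radius rho, Thm 6.7 leads to the optimal parameter omega_opt = 2/(1 + sqrt(1 - rho^2)). A small Python sketch (dense storage, illustration only):

```python
import math

def sor_step(A, b, x, omega):
    """One SOR sweep: the Gauss-Seidel update blended with the old value."""
    n = len(b)
    x = x[:]
    for i in range(n):
        gs = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
        x[i] = (1.0 - omega) * x[i] + omega * gs
    return x

def omega_opt(rho_jacobi):
    """Optimal SOR parameter for a consistently ordered matrix."""
    return 2.0 / (1.0 + math.sqrt(1.0 - rho_jacobi ** 2))
```

For the N-point 1D Poisson matrix rho(J) = cos(pi/(N+1)), so omega_opt approaches 2 as N grows.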
Discussion on CE1 and HA2a.
Convergence acceleration, section 6.5.6.
Semi-iterative methods, polynomial acceleration
Chebyshev polynomials for minimizing the spectral radius.
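The acceleration rests on the three-term recurrence T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x), and on the fact that T_k stays bounded by 1 on [-1, 1] while growing rapidly outside. A minimal evaluation sketch:

```python
def cheb(k, x):
    """Evaluate the Chebyshev polynomial T_k at x by the recurrence
    T_0 = 1, T_1 = x, T_{k+1}(x) = 2 x T_k(x) - T_{k-1}(x)."""
    t_prev, t = 1.0, x
    if k == 0:
        return t_prev
    for _ in range(k - 1):
        t_prev, t = t, 2.0 * x * t - t_prev
    return t
```

This min-max property is what lets a degree-k polynomial in the iteration matrix shrink the error optimally over an interval containing the spectrum.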
Krylov methods, section 6.6
Krylov methods (cont.)
Arnoldi's and Lanczos methods.
Solving Ax=b with Krylov methods.
Several "best" approximations.
The Conjugate Gradient method.
Gradient methods as search methods.
Discussion on Gauss-Seidel and SOR from an optimization point of view.
Choice of search directions and steplength.
The rate of convergence of the CG method.
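The whole CG method fits in about a dozen lines; a minimal Python sketch for a dense symmetric positive definite A (no preconditioning, illustration only):

```python
def cg(A, b, tol=1e-10, maxit=200):
    """Conjugate Gradient for symmetric positive definite A (dense sketch)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A x for x = 0
    p = r[:]                      # first search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(maxit):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))   # step length
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol ** 2:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]  # A-conjugate direction
        rs = rs_new
    return x
```

In exact arithmetic CG terminates in at most n steps; in practice the rate is governed by the square root of the condition number of A.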
Incomplete Cholesky factorizations.
Modified incomplete Cholesky factorization
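The idea of IC(0) is ordinary Cholesky with all fill-in outside the sparsity pattern of A discarded. Below is a dense-storage Python sketch of the sparse algorithm (a hypothetical helper, not the course's ic0_2d.m; it assumes the pivots stay positive, as for M-matrices):

```python
import math

def ic0(A):
    """Incomplete Cholesky IC(0): Cholesky factorization, but entries of L
    outside the sparsity pattern of A are dropped. Returns lower
    triangular L with L L^T approximately equal to A."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        s = A[j][j] - sum(L[j][k] ** 2 for k in range(j))
        L[j][j] = math.sqrt(s)
        for i in range(j + 1, n):
            if A[i][j] == 0:
                continue          # drop fill outside the pattern of A
            L[i][j] = (A[i][j]
                       - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
    return L
```

For a tridiagonal matrix the exact factor causes no fill, so IC(0) reproduces it exactly; on the five-point matrix some fill is dropped and L L^T only approximates A, which is what makes it a cheap preconditioner.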
Discussion on CE2.
I got a question on general MIC: how to make more and more accurate incomplete factorizations. This MATLAB program does the job: General MIC. It is not yet optimized for high performance.
A short introduction to multigrid, section 6.9.
Multigrid methods (cont.)
Solution (smoothing), restriction and interpolation operators
Multigrid v-cycle, full multigrid.
Computational complexity for multigrid.
Multigrid for the 1D Poisson problem
Convergence rate of multigrid for the 1D Poisson problem.
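The pieces fit together in a short recursive routine. Below is a matrix-free 1D Poisson sketch in Python with damped Jacobi smoothing, full-weighting restriction and linear interpolation (helper names are my own; n = 2^k - 1 interior points):

```python
# Minimal 1D multigrid V-cycle sketch for Poisson's equation.

def apply_a(u):
    """y = A u for A = tridiag(-1, 2, -1)/h^2, h = 1/(n+1),
    with zero boundary values."""
    n = len(u)
    h2 = (n + 1) ** 2                       # this is 1/h^2
    return [h2 * (2 * u[i]
                  - (u[i - 1] if i > 0 else 0.0)
                  - (u[i + 1] if i < n - 1 else 0.0)) for i in range(n)]

def smooth(u, f, sweeps=2, w=2.0 / 3.0):
    """A few sweeps of damped Jacobi, the standard smoother."""
    n = len(u)
    d = 2 * (n + 1) ** 2                    # diagonal of A
    for _ in range(sweeps):
        au = apply_a(u)
        u = [u[i] + w * (f[i] - au[i]) / d for i in range(n)]
    return u

def restrict(r):
    """Full weighting of a fine-grid residual onto the coarse grid."""
    nc = (len(r) - 1) // 2
    return [(r[2 * i] + 2 * r[2 * i + 1] + r[2 * i + 2]) / 4.0
            for i in range(nc)]

def interpolate(ec, n):
    """Linear interpolation of a coarse correction to the fine grid."""
    e = [0.0] * n
    for i, v in enumerate(ec):
        e[2 * i + 1] = v
        e[2 * i] += v / 2.0
        if 2 * i + 2 < n:
            e[2 * i + 2] += v / 2.0
    return e

def v_cycle(u, f):
    """One V-cycle: pre-smooth, coarse-grid correction, post-smooth."""
    n = len(u)
    if n == 1:
        return [f[0] / 8.0]                 # direct solve: A = [2/h^2], h = 1/2
    u = smooth(u, f)
    au = apply_a(u)
    r = [f[i] - au[i] for i in range(n)]
    ec = v_cycle([0.0] * ((n - 1) // 2), restrict(r))
    e = interpolate(ec, n)
    u = [u[i] + e[i] for i in range(n)]
    return smooth(u, f)
```

Each V-cycle costs O(n) work and reduces the error by a grid-independent factor, which is the point of section 6.9.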
SECOND PART OF THE COURSE: Direct methods for large, sparse matrices, lecture notes.
Data structures for sparse matrices.
Compressed row and compressed diagonal storage.
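A matrix-vector product in compressed row storage touches only the nonzeros; a minimal Python sketch:

```python
def csr_matvec(values, col_idx, row_ptr, x):
    """y = A x with A in compressed sparse row (CSR) storage:
    values[k] and col_idx[k] hold the nonzeros row by row, and
    row_ptr[i]:row_ptr[i+1] is the slice of k belonging to row i."""
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y
```

For example, tridiag(-1, 2, -1) of order 3 is stored as values = [2, -1, -1, 2, -1, -1, 2], col_idx = [0, 1, 0, 1, 2, 1, 2], row_ptr = [0, 2, 5, 7].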
Please find the solution to Exercise 1: Exercise 1.
Theorem 1: No fill-in outside the band in non-pivoting Gaussian elimination.
General sparse matrices, the envelope.
Theorem 2: no fill-in outside the envelope in non-pivoting Gaussian elimination.
More on compressed row and column storages.
Dynamic and static sparse structures, cholinc and (m)ic(d), respectively.
Orderings for sparsity, for minimizing fill-in
Orderings for stability versus orderings for sparsity
Graph theory for ordering
Degree of a node, eccentricity of a node, the adjacency set of a node.
Reverse Cuthill-McKee ordering
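The algorithm is a breadth-first search from a start node, visiting neighbours in order of increasing degree, with the visit order reversed at the end. A small Python sketch (choosing a good start node, ideally pseudo-peripheral, is a separate step not shown):

```python
from collections import deque

def cuthill_mckee(adj, start):
    """Cuthill-McKee ordering: BFS from start, taking neighbours by
    increasing degree; the reversed visit order is the RCM ordering.
    adj maps each node to its neighbour list (symmetric pattern)."""
    order, seen = [], {start}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in sorted(adj[v], key=lambda u: len(adj[u])):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return order[::-1]            # reverse for RCM
```

Applied to the adjacency graph of a sparse matrix, the resulting permutation tends to shrink the envelope, and hence the fill-in of non-pivoting Gaussian elimination.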
Discussion on CE3
More on elimination graphs, Theorem 3.
The minimum degree ordering
Nested dissection orderings
MATLAB tools for sparse orderings and graphs.
Symbolic and numerical factorization
Discussion on HA4 and CE4
I solved two exercises as training for the examination:
- Exercise 5 in lecture notes on solving systems with iterative methods
- an exercise on generalized eigenproblems.
THIRD PART OF THE COURSE: Methods for large sparse eigenproblems, chapter 7 in the text book.
Introduction, the power method, inverse iteration with shift.
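The power method in a few lines of Python (a dense sketch; inverse iteration with shift is the same loop applied to the inverse of A - sigma I):

```python
def power_method(A, iters=200):
    """Power method: approximates the dominant eigenvalue of A,
    returned as the Rayleigh quotient of the final iterate."""
    n = len(A)
    x = [1.0] * n
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        nrm = max(abs(v) for v in y)
        x = [v / nrm for v in y]            # normalize to avoid overflow
    y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    return sum(y[i] * x[i] for i in range(n)) / sum(xi * xi for xi in x)
```

The error contracts by the ratio of the two largest eigenvalue moduli per step, which is why a well-chosen shift in inverse iteration converges so much faster.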
Relation to the Krylov space, Ritz vectors and Ritz values.
Krylov methods for eigenproblems.
Arnoldi's method for nonsymmetric and the Lanczos method for symmetric eigenproblems.
Theorem 7.1 with proof
The Lanczos-Rayleigh-Ritz method for symmetric problems.
Theorem 7.2 with proof
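The Lanczos recursion itself is short. A Python sketch that returns the tridiagonal entries alpha (diagonal) and beta (off-diagonal), whose eigenvalues are the Ritz values (helper name my own):

```python
import math

def lanczos(matvec, q1, m):
    """m steps of the Lanczos recursion for a symmetric operator given
    as a matrix-vector product. Returns the diagonal (alpha) and
    off-diagonal (beta) of the tridiagonal matrix T_m."""
    n = len(q1)
    nrm = math.sqrt(sum(v * v for v in q1))
    q = [v / nrm for v in q1]
    q_prev = [0.0] * n
    alpha, beta = [], []
    b = 0.0
    for _ in range(m):
        w = matvec(q)
        a = sum(w[i] * q[i] for i in range(n))
        w = [w[i] - a * q[i] - b * q_prev[i] for i in range(n)]
        alpha.append(a)
        b = math.sqrt(sum(v * v for v in w))
        beta.append(b)
        if b == 0.0:              # exact breakdown: invariant subspace found
            break
        q_prev, q = q, [v / b for v in w]
    return alpha, beta[:-1]
```

Started with q_1 = e_1 on the model matrix, the recursion reproduces the matrix itself and breaks down cleanly at step n; in floating point this clean breakdown is replaced by loss of orthogonality, which is the subject of the coming lectures.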
- Solution of Question 7.2
- Solution of Question 7.3
Discussion on Question 7.3 related to CE5.
The Lanczos method in exact arithmetic; misconvergence can occur with:
(i) a small component of q_1 in the direction of an eigenvector,
(ii) almost multiple eigenvalues.
The Lanczos method in floating point arithmetic:
+ can compute beyond the theoretical breakdown.
- loss of orthogonality.
Reorthogonalization, Selective reorthogonalization, Thm 7.3 (without proof).
Discussion on HA5 and CE5.
Numerical methods for large sparse least-squares problems, a survey (not to be examined).
Summary of the course
Part 1: Iterative methods for large, sparse systems of equations, Demmel chapter 6
Poisson's equation, the five-point matrix, Kronecker products.
Basic iterative methods:
Fixed-point iteration, splitting, the Jacobi, Gauss-Seidel and SOR methods.
Iteration matrix, norm and spectral radius.
Irreducibility, diagonal dominance.
Convergence rate, condition number.
Consistent ordering, property (A), red-black ordering.
Lanczos and Arnoldi's methods, Conjugate Gradient techniques.
Preconditioning, incomplete factorization.
V-cycle, full multigrid.
Three operators: smoother, interpolation and restriction.
Multigrid for Poisson's problem in 1D, idea of convergence rate.
Part 2: Direct methods for large, sparse systems of linear equations, lecture notes.
Gaussian elimination and reordering.
Sparse storage techniques.
Ordering methods for sparsity: RCM, minimum degree
Nested dissection for 2D Poisson's problem
Graph theory, elimination graphs.
Symbolic and numerical factorization.
Dynamic and static sparse structures.
Part 3: Iterative methods for large, sparse eigenproblems, Demmel chapter 7.
Krylov technique (again).
Lanczos method in exact arithmetic, misconvergence.
Lanczos method in floating point arithmetic, loss of orthogonality, reorthogonalization.
Material for the course
- Homework Assignment and Computer Exercise 1. Deadline January 31.
A p-code MATLAB program for red-black ordering, to be used for checking your reordering: red_black.p. A=red_black(N) gives the red-black ordered model matrix on an N×N grid, where N is an odd number.
- Homework Assignment and Computer Exercise 2. Deadline February 17.
A number of programs at your disposal: delsq3d.m, ic0_2d.m, mic0_2d.m, ic2_2d.m, mic2_2d.m,
ic0_3d.m, mic0_3d.m , ic3_3d.m, mic3_3d.m.
- Homework Assignment and Computer Exercise 3. Deadline February 28.
Programs at your disposal: makemgdemo.m, testfmgv.m, fmgv.m, mgv.m, mgvrhs.m.
- Homework Assignment and Computer Exercise 4. Deadline March 10.
- Homework Assignment and Computer Exercise 5. Deadline March 17.
Date for the written examination: Thursday March 13, 8.30 - 12.30, room V.
Means of assistance at the examination: none.
The written examination gives at most 25 points, plus up to 5 bonus points. The points attained on the written examination and on the computer exercises are added, and the final grade for the course is based on this sum.
Preliminary grades CTH: 28p for grade 3, 35p for grade 4 and 42p for grade 5.
Preliminary grades GU: 28p for grade G and 38p for grade VG.
Examination: March 13 2014.
As expected, all passed the examination.
I will keep your marked HA4, HA5, CE4, CE5, and examinations at my office for a couple of days. You can come and check your results on 20/3 PM and 21/3 PM.
Solution of examination March 13 2014: page1, page2, page3, page4.
Old examinations: March 17 2011, March 9 2012.