Large and Sparse Matrix Problems (ENM and GU)

7.5 credit points

January - March 2010



Examiner and Lecturer: Ivar Gustafsson


- See you at the course start on Monday January 18, at 10.00 in room MVF32 at Mathematical Sciences - Physics building.

Some words about the course

Large and sparse matrix problems arise, for instance, in the numerical approximation of differential equations, in network problems, and in optimisation. In this course we study numerical techniques for solving systems of linear equations and eigenvalue problems with this type of matrix.
For systems of linear equations we present two classes of methods: iterative and direct. Among the iterative methods we study basic stationary methods such as the Jacobi and SOR methods, orthogonalising methods such as conjugate gradients, and multigrid methods. The direct methods are based on Gaussian elimination with different renumberings of the unknowns in order to keep computing time and memory requirements small.
The eigenvalue routines presented are based on the Lanczos and Arnoldi algorithms, with or without spectral transformations. When solving the homework assignments (inlämningsuppgifter) and computer exercises (laborationer), the students gain experience in implementing and evaluating different algorithms for sparse problems.
For some learning outcomes of this course, see outcomes.

Course Literature


Teaching and Learning Methods

Lectures, five homework assignments (solved individually) and five computer exercises (performed in groups of two students).
The homework assignments give bonus points for the examination, in total at most five points (one point per assignment).
Passed computer exercises are graded 3, 4 or 5, based on quantity and/or quality.

Lectures

Mondays 10-12 in room MVF32 (Mathematical Sciences - Physics building)
Wednesdays (except February 10) 13-15 in MVF32.

Computer Exercises, without supervision

Thursdays 13-15 in computer room MVF24 (Mathematical Sciences - Physics building)

Outline

The following sections in the textbook will be considered.
Chapter 6: sections 1-6 and 9.
Chapter 7: sections 1-4.
The following questions in the textbook are recommended:
Chapter 6: 1, 2, 3, 4, 11, 12, 13, 14, 15
Chapter 7: 1, 2, 3
Some additional questions will be given during the lectures.

Detailed program online

January 18

INTRODUCTION.
Large, sparse problems in applications
Outline of the course, problems and methods.
Available software on the web.
The model test problem: Poisson's equation in one and two dimensions.
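As an illustration of the model problem, the 2D five-point matrix can be assembled from the 1D matrix with Kronecker products. The sketch below uses Python/SciPy rather than the MATLAB tools used in the course; the names poisson1d and poisson2d are only illustrative, and the 1/h^2 scaling is omitted.

import scipy.sparse as sp

def poisson1d(n):
    # 1D Poisson matrix tridiag(-1, 2, -1) for n interior grid points
    return sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")

def poisson2d(n):
    # 2D five-point matrix A = I (x) T + T (x) I on an n-by-n interior grid
    T = poisson1d(n)
    I = sp.identity(n, format="csr")
    return sp.kron(I, T) + sp.kron(T, I)

A = poisson2d(4)
print(A.shape, A.nnz)    # (16, 16) with 64 nonzeros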

January 20

FIRST PART OF THE COURSE.
Basic iterative methods: the Jacobi and Gauss-Seidel methods (a sketch follows after this block).
The spectral radius and convergence: Lemma 6.4, Lemma 6.5, and Thm 6.1.
Rate of convergence
Good properties of basic iterative methods
Ordering of unknowns, red-black ordering
Irreducibility, strongly connected graph
Convergence criteria for the Jacobi and Gauss-Seidel methods, Thm 6.3.
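A minimal sketch of the two basic iterations above, in Python/NumPy with dense matrices for clarity (a real sparse code would only touch the nonzeros); the function names are illustrative.

import numpy as np

def jacobi(A, b, x0, iters):
    # Jacobi: x_{k+1} = x_k + D^{-1} (b - A x_k), with D = diag(A)
    D = np.diag(A)
    x = x0.copy()
    for _ in range(iters):
        x = x + (b - A @ x) / D
    return x

def gauss_seidel(A, b, x0, iters):
    # Gauss-Seidel: sweep through the unknowns, using already updated values
    n = len(b)
    x = x0.copy()
    for _ in range(iters):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
    return x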

January 25

SOR, the successive overrelaxation method (sketch below).
Symmetrizable iterative methods
Convergence rate of SOR, property (A), consistent ordering.
Important convergence results: Thm 6.6 and Thm 6.7
Convergence acceleration, section 6.5.6: semi-iterative methods, polynomial acceleration.
Chebyshev polynomials for minimizing the spectral radius.
Discussion on CE1b.
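A sketch of the SOR sweep in the same dense Python/NumPy setting as the Jacobi/Gauss-Seidel sketch above (illustrative only).

import numpy as np

def sor(A, b, x0, omega, iters):
    # SOR: a Gauss-Seidel sweep relaxed by the factor omega, 0 < omega < 2
    n = len(b)
    x = x0.copy()
    for _ in range(iters):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (1.0 - omega) * x[i] + omega * (b[i] - s) / A[i, i]
    return x

# For a consistently ordered matrix with Jacobi spectral radius rho_J, the
# classical result gives the optimal parameter
# omega_opt = 2 / (1 + sqrt(1 - rho_J**2)).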

January 27

Stopping criteria.
Krylov methods.
The Arnoldi and Lanczos methods.
Solving Ax=b with Krylov methods.
Several "best" approximations.
The Conjugate Gradient method.
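A sketch of the Conjugate Gradient method in Python/NumPy for a symmetric positive definite A (illustrative; the textbook's formulation is the reference).

import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-8, maxiter=1000):
    x = x0.copy()
    r = b - A @ x                       # residual
    p = r.copy()                        # first search direction
    rho = r @ r
    for _ in range(maxiter):
        if np.sqrt(rho) <= tol * np.linalg.norm(b):
            break
        Ap = A @ p
        alpha = rho / (p @ Ap)          # step length along p
        x = x + alpha * p
        r = r - alpha * Ap
        rho_new = r @ r
        p = r + (rho_new / rho) * p     # next A-conjugate search direction
        rho = rho_new
    return x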

February 1

Gradient methods as search methods.
Choice of search directions and steplength.
The rate of convergence of the CG method.
Preconditioning.
Incomplete and modified incomplete Cholesky factorizations.
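A sketch of the IC(0) idea in Python/NumPy, dense for clarity: L keeps the sparsity pattern of the lower triangle of A, so that L L^T only approximates A; MIC would in addition move the dropped fill onto the diagonal. Illustrative only, not the course implementation.

import numpy as np

def ic0(A):
    # Incomplete Cholesky IC(0); safe for M-matrices such as the Poisson matrix
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.zeros_like(A)
    for k in range(n):
        L[k, k] = np.sqrt(A[k, k] - L[k, :k] @ L[k, :k])
        for i in range(k + 1, n):
            if A[i, k] != 0.0:          # keep an entry only where A has one
                L[i, k] = (A[i, k] - L[i, :k] @ L[k, :k]) / L[k, k]
    return L

# In preconditioned CG, M = L L^T is applied as two triangular solves:
# z = M^{-1} r  <=>  solve L y = r, then L^T z = y.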

February 3

Multigrid methods
Solution (smoothing), restriction and interpolation operators
Multigrid V-cycle, full multigrid.
Computational complexity of multigrid.
Multigrid for the 1D Poisson problem.
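A sketch of a multigrid V-cycle for the 1D Poisson problem (-u'' = f with zero Dirichlet boundary values) in Python/NumPy, with weighted Jacobi smoothing, full-weighting restriction and linear interpolation; the function names and parameter choices are illustrative.

import numpy as np

def residual(u, f, h):
    # r = f - A u for the stencil (-1, 2, -1)/h^2 on the interior points
    Au = 2.0 * u
    Au[1:] -= u[:-1]
    Au[:-1] -= u[1:]
    return f - Au / h**2

def smooth(u, f, h, nu, omega=2.0/3.0):
    # nu sweeps of weighted (damped) Jacobi
    for _ in range(nu):
        u = u + omega * (h**2 / 2.0) * residual(u, f, h)
    return u

def restrict(r):
    # full weighting: coarse value = (r[2i] + 2 r[2i+1] + r[2i+2]) / 4
    return 0.25 * (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2])

def prolong(ec, n_fine):
    # linear interpolation from the coarse grid to the fine grid
    e = np.zeros(n_fine)
    e[1::2] = ec                            # coarse points sit at odd fine indices
    e[2:-1:2] = 0.5 * (ec[:-1] + ec[1:])    # even interior points: average of neighbours
    e[0] = 0.5 * ec[0]
    e[-1] = 0.5 * ec[-1]
    return e

def vcycle(u, f, h, nu1=2, nu2=2):
    n = u.size
    if n == 1:                              # coarsest grid: solve exactly
        return np.array([0.5 * h**2 * f[0]])
    u = smooth(u, f, h, nu1)                               # pre-smoothing
    rc = restrict(residual(u, f, h))                       # restrict the residual
    ec = vcycle(np.zeros(rc.size), rc, 2.0 * h, nu1, nu2)  # coarse-grid correction
    u = u + prolong(ec, n)                                 # interpolate and correct
    return smooth(u, f, h, nu2)                            # post-smoothing

n = 2**7 - 1
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.pi**2 * np.sin(np.pi * x)            # exact solution u(x) = sin(pi x)
u = np.zeros(n)
for it in range(8):
    u = vcycle(u, f, h)
    print(it, np.linalg.norm(residual(u, f, h)))   # the residual drops by a roughly constant factor per cycle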

February 8

Convergence rate of multigrid for the 1D Poisson problem.
SECOND PART OF THE COURSE.
Direct methods for large, sparse linear systems.
Data structures for sparse matrices.
Compressed row and compressed diagonal storage (example below).
Band matrices: no fill-in outside the band in non-pivoting Gaussian elimination.
General sparse matrices and the envelope: no fill-in outside the envelope in non-pivoting Gaussian elimination.
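A small illustration of compressed row storage using SciPy (the course itself uses MATLAB): three arrays hold the nonzero values row by row, their column indices, and pointers to where each row starts.

import numpy as np
import scipy.sparse as sp

A = np.array([[4., 0., 0., 1.],
              [0., 3., 2., 0.],
              [0., 2., 5., 0.],
              [1., 0., 0., 6.]])
S = sp.csr_matrix(A)
print(S.data)      # values, row by row:           [4. 1. 3. 2. 2. 5. 1. 6.]
print(S.indices)   # column index of each value:   [0 3 1 2 1 2 0 3]
print(S.indptr)    # start of each row in data:    [0 2 4 6 8]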

February 15

More on compressed row and compressed column storage.
Dynamic and static sparse structures, cholinc and (m)ic(d), respectively.
Orderings for sparsity, for minimizing fill-in
Graph theory for ordering
Reverse Cuthill-McKee ordering (example below).
Elimination graphs
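An illustration of the reverse Cuthill-McKee ordering using SciPy's csgraph module (assumed here in place of MATLAB's symrcm); the symmetrically permuted matrix typically has a much smaller bandwidth and envelope.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

def bandwidth(M):
    C = sp.coo_matrix(M)
    return np.max(np.abs(C.row - C.col))

A = sp.random(50, 50, density=0.05, format="csr", random_state=0)
A = sp.csr_matrix(A + A.T + 50.0 * sp.identity(50))   # a symmetric test matrix
perm = reverse_cuthill_mckee(A, symmetric_mode=True)
B = A[perm, :][:, perm]                               # symmetric permutation P A P^T
print(bandwidth(A), bandwidth(B))                     # the RCM-permuted matrix has the (much) smaller bandwidth here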

February 17

The minimum degree ordering
Markowitz algorithm
Nested dissection orderings
Symbolic and numerical factorization
MATLAB tools for sparse orderings and graphs.
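A small experiment, using SciPy's SuperLU interface, showing how strongly the ordering affects fill-in; SuperLU's COLAMD and minimum-degree options are related to, but not the same as, the Markowitz/minimum degree and nested dissection orderings treated in the lectures (MATLAB has corresponding tools such as symamd and symrcm).

import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 40
T = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n))
I = sp.identity(n)
A = sp.csc_matrix(sp.kron(I, T) + sp.kron(T, I))      # 2D Poisson matrix, 1600 unknowns

for order in ["NATURAL", "MMD_AT_PLUS_A", "COLAMD"]:
    lu = splu(A, permc_spec=order)
    print(order, lu.L.nnz + lu.U.nnz)                 # nonzeros in the computed factors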

February 22

Discussion of HA4 and CE4, and MATLAB tools for them.
Numerical methods for large sparse least-squares problems, a survey.
Methods for large sparse eigenproblems, Chapter 7 in the textbook.
Introduction, the power method, inverse iteration with shift (sketch below).
Relation to the Krylov space, Ritz vectors and Ritz values.
Krylov methods for symmetric eigenproblems.
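A sketch of the power method and shifted inverse iteration in Python/NumPy; dense solves are used for brevity, whereas for a large sparse A one factorizes A - sigma*I once (sparse LU) and reuses the factors in every iteration.

import numpy as np

def power_method(A, x0, iters=100):
    # converges to the eigenpair of the eigenvalue largest in magnitude
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        y = A @ x
        x = y / np.linalg.norm(y)
    return x @ (A @ x), x                 # Rayleigh quotient, eigenvector estimate

def inverse_iteration(A, sigma, x0, iters=20):
    # converges to the eigenpair of the eigenvalue closest to the shift sigma
    M = A - sigma * np.eye(A.shape[0])
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        y = np.linalg.solve(M, x)         # in practice: reuse a sparse factorization of M
        x = y / np.linalg.norm(y)
    return x @ (A @ x), x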

February 24

Rayleigh-Ritz method
The Lanczos-Rayleigh-Ritz method for symmetric problems (sketch below).
Theorem 7.1 with proof
Theorem 7.2 with proof
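A sketch of the Lanczos part of the Lanczos-Rayleigh-Ritz procedure in Python/NumPy, with optional full reorthogonalization; the Ritz values are the eigenvalues of the tridiagonal T = V^T A V. Illustrative code, not the textbook's algorithm verbatim.

import numpy as np

def lanczos(A, v0, k, reorth=True):
    # k Lanczos steps for a symmetric A; with reorth=False rounding errors
    # gradually destroy the orthogonality of V (the floating point issue
    # discussed in the following lectures)
    n = A.shape[0]
    V = np.zeros((n, k + 1))
    alpha = np.zeros(k)
    beta = np.zeros(k + 1)
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(k):
        w = A @ V[:, j]
        if j > 0:
            w = w - beta[j] * V[:, j - 1]
        alpha[j] = V[:, j] @ w
        w = w - alpha[j] * V[:, j]
        if reorth:                        # full reorthogonalization against V
            w = w - V[:, :j + 1] @ (V[:, :j + 1].T @ w)
        beta[j + 1] = np.linalg.norm(w)
        if beta[j + 1] == 0.0:            # an invariant subspace has been found
            k = j + 1
            break
        V[:, j + 1] = w / beta[j + 1]
    T = np.diag(alpha[:k]) + np.diag(beta[1:k], 1) + np.diag(beta[1:k], -1)
    return np.linalg.eigvalsh(T), V[:, :k]   # Ritz values, Lanczos basis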

March 1

Solution of Question 7.2.
Solution of Question 7.3.
The Lanczos method in exact arithmetic.
Misconvergence.

March 3

The Lanczos method in floating point arithmetic
Selective reorthogonalization.
Discussion on HA5 and CE5.
I solved three exercises as practice for the examination.

Summary of the course

Part 1: Iterative methods for large, sparse systems of equations, Demmel chapter 6

Test example:
Poisson's equation, the five-point matrix, Kronecker products.
Basic iterative methods:
Fixed-point iteration, splitting, the Jacobi, Gauss-Seidel and SOR methods.
Iteration matrix, norm and spectral radius.
Irreducibility, diagonal dominance.
Convergence rate, condition number.
Consistent ordering, property (A).
Convergence acceleration.
Krylov techniques:
Krylov space.
Lanczos and Arnoldi methods, Conjugate Gradient techniques.
Preconditioning, incomplete factorization.
Multigrid method:
V-cycle, full multigrid.
Multigrid for Poisson's problem in 1D, idea of convergence rate.

Part 2: Direct methods for large, sparse systems of linear equations, lecture notes.

Gaussian elimination and reordering.
Band matrices.
Envelope minimization.
Fill-in minimization.
Sparse storage techniques.
Ordering methods for sparsity: RCM, minimum degree
Graph theory, elimination graphs.
Symbolic and numerical factorization.
Dynamic and static sparse structures.

Part 3: Iterative methods for large, sparse eigenproblems, Demmel chapter 7.

Krylov technique (again).
Lanczos-Rayleigh-Ritz method.
Lanczos method in exact arithmetic, reorthogonalization, misconvergence.
Lanczos method in floating point arithmetic, selective reorthogonalization.

Material for the course


Examination

The date for the written examination will be decided jointly by the students and me.
Means of assistance at the examination: none.
The total number of points on the written examination is 25, plus at most 5 bonus points. The points attained on the written examination and the computer exercises are added, and the final grade for the course is based on this sum.
Preliminary grades CTH: 28p for grade 3, 35p for grade 4 and 42p for grade 5.
Preliminary grades GU: 28p for grade G and 38p for grade VG.

Consultation:
Tuesday March 9, 13-15 in my office.

Examination:
Wednesday March 10, 9-14 Room MVF21.