Markov theory and algorithmic applications
Markov chains, and more generally Markov processes, are random
processes characterized by having a lack-of-memory property: the
conditional distribution of what happens in the future, given
everything up to now, depends only on the present state (and not
otherwise on the past).
Markov chains and Markov processes are among the most
fundamental objects of study in probability. An important subclass
of Markov chains is that of random walks.
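As a concrete illustration (a minimal sketch, not a model from any particular project), the simple symmetric random walk on the integers is a Markov chain: each step depends only on the current position, never on the path taken to reach it.

```python
import random

def simple_random_walk(n_steps, seed=None):
    """Simulate a simple symmetric random walk on the integers.

    The walk is a Markov chain: the next position depends only on the
    current position, via an independent +1/-1 step (memorylessness).
    """
    rng = random.Random(seed)
    position = 0
    path = [position]
    for _ in range(n_steps):
        position += rng.choice((-1, 1))  # transition uses only the present state
        path.append(position)
    return path
```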
Our theoretical investigations of Markov chains concern e.g. limit
theorems and rates of convergence to equilibrium distributions.
For finite state Markov chains we also analyze covering times (the
time taken until all states are visited), whereas for infinite state
Markov chains we are more interested in the recurrence/transience
problem (is the chain guaranteed to eventually return to a given
state?). These questions are also studied in the more intricate setting
of random walks in random environment.
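For a small finite-state example of a covering time: the expected cover time of a simple random walk on the n-cycle is n(n-1)/2, and this is easy to check by Monte Carlo (an illustrative sketch with arbitrary parameter choices, not a computation from our research):

```python
import random

def cover_time_cycle(n, rng):
    """Steps until a simple random walk on the n-cycle has visited every state."""
    state = 0
    visited = {0}
    steps = 0
    while len(visited) < n:
        state = (state + rng.choice((-1, 1))) % n
        visited.add(state)
        steps += 1
    return steps

# Monte Carlo estimate of the expected cover time for n = 10;
# the exact value is n * (n - 1) / 2 = 45.
rng = random.Random(0)
est = sum(cover_time_cycle(10, rng) for _ in range(2000)) / 2000
```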
A key tool in much of our research is the use of so-called couplings:
joint constructions of two or more copies of a random process on a
common probability space, designed so that their trajectories can be
compared directly.
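As a toy illustration of a coupling (a sketch with invented parameters, not a construction from a specific paper): two copies of a reflecting random walk on {0, ..., k}, started from different states, can be driven by the same uniform random numbers. Once the copies meet they move together forever, and this coalescence is the mechanism behind coupling proofs of convergence to equilibrium.

```python
import random

def step(x, u, k):
    """One update of a random walk on {0, ..., k}, reflecting at the ends."""
    return min(x + 1, k) if u < 0.5 else max(x - 1, 0)

def coupled_walks(x0, y0, k, n_steps, seed=0):
    """Run two walks driven by the SAME random numbers (a monotone coupling)."""
    rng = random.Random(seed)
    x, y = x0, y0
    meet_time = None
    for t in range(1, n_steps + 1):
        u = rng.random()                  # shared randomness = the coupling
        x, y = step(x, u, k), step(y, u, k)
        if meet_time is None and x == y:
            meet_time = t                 # from here on the copies agree
    return x, y, meet_time
```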
On the more applied side, we study Markov models for queues and other
systems of relevance to telecommunications.
Some of the models considered in the population
dynamics group also fall in a Markovian context.
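A standard example of such a Markov model is the M/M/1 queue, whose queue length is a birth-death process in continuous time. The following sketch (with made-up rates, not a model from a specific project) simulates the chain and can be checked against the classical stationary mean queue length ρ/(1-ρ), where ρ = λ/μ is the traffic intensity:

```python
import random

def mm1_mean_queue_length(lam, mu, n_events, seed=0):
    """Time-average number in an M/M/1 queue (arrival rate lam, service rate mu).

    The queue length is a continuous-time Markov chain: up-jumps at rate
    lam, down-jumps at rate mu while the queue is non-empty.
    """
    rng = random.Random(seed)
    n, t, area = 0, 0.0, 0.0
    for _ in range(n_events):
        rate = lam + (mu if n > 0 else 0.0)
        dt = rng.expovariate(rate)        # exponential time to the next event
        area += n * dt
        t += dt
        if rng.random() < lam / rate:     # next event is an arrival
            n += 1
        else:                             # next event is a service completion
            n -= 1
    return area / t

# For lam = 1, mu = 2 (traffic intensity rho = 0.5) the stationary
# mean queue length is rho / (1 - rho) = 1.
```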
However, our currently most intense activity in applied areas is the
study of randomized algorithms, i.e. computer algorithms that use
random numbers. Markov chains are highly suitable tools for analyzing
such algorithms. A collaboration with the Computing Science
Department at Chalmers is being initiated.
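As one example of the Markov-chain viewpoint on randomized algorithms (an illustrative sketch, not an algorithm from this collaboration): Papadimitriou's random-walk algorithm for 2-SAT repeatedly picks an unsatisfied clause and flips one of its two variables at random. Its analysis reduces to a random walk on the number of variables agreeing with a satisfying assignment, which yields an expected O(n²) running time on satisfiable inputs.

```python
import random

def random_walk_2sat(clauses, n_vars, seed=0):
    """Papadimitriou's randomized 2-SAT algorithm.

    clauses: list of pairs of literals, where literal +i / -i means
    variable i (1-based) must be True / False.  Returns a satisfying
    assignment, or None once the step budget (100 * n_vars**2,
    comfortably above the O(n^2) expected bound) is exhausted.
    """
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars)]
    satisfied = lambda lit: assign[abs(lit) - 1] == (lit > 0)
    for _ in range(100 * n_vars * n_vars):
        unsat = [c for c in clauses if not (satisfied(c[0]) or satisfied(c[1]))]
        if not unsat:
            return assign
        lit = rng.choice(rng.choice(unsat))   # random literal of a random bad clause
        assign[abs(lit) - 1] = not assign[abs(lit) - 1]
    return None
```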
An important class of randomized algorithms is that of so-called
Markov chain Monte Carlo (MCMC) algorithms, which are highly useful
for simulation purposes (they are used e.g. by the particle systems
and spatial statistics groups). Traditional
MCMC methods have the disadvantage of yielding samples which have only
approximately the desired distribution. In contrast, we work
in a new paradigm, so-called perfect simulation, to obtain algorithms
that produce samples with exactly the desired distribution.
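The best-known perfect-simulation method is the Propp-Wilson coupling-from-the-past algorithm. The sketch below (for a toy reflecting walk on {0, ..., k}, with invented parameters) runs the chain from every starting state over ever-longer stretches of the past, crucially reusing the same random numbers, until all copies coalesce; the coalesced value is an exact draw from the stationary distribution, which is uniform here since this chain is doubly stochastic.

```python
import random

def update(x, u, k):
    """Deterministic update rule: up if u < 1/2, down otherwise, reflecting."""
    return min(x + 1, k) if u < 0.5 else max(x - 1, 0)

def cftp(k, rng):
    """Coupling from the past for the reflecting walk on {0, ..., k}.

    Doubles the starting time into the past until all initial states
    coalesce; the random number for each past time step is generated
    once and then reused on every subsequent attempt.
    """
    us = []                                   # past randomness, oldest first
    T = 1
    while True:
        us = [rng.random() for _ in range(T - len(us))] + us
        states = list(range(k + 1))
        for u in us:
            states = [update(x, u, k) for x in states]
        if len(set(states)) == 1:             # coalescence: exact sample
            return states[0]
        T *= 2

# Draw repeated exact samples; they should be uniform on {0, ..., 3}.
rng = random.Random(42)
samples = [cftp(3, rng) for _ in range(4000)]
```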
by Olle Häggström