A Markov chain, named after Andrey Markov, is a mathematical system that transitions from one state to another within a finite or countable set of possible states. A Markov chain is a random process usually characterized as memoryless: the next state depends only on the current state and not on the sequence of events that preceded it. This specific kind of memorylessness is called the Markov property. Markov chains have many applications as statistical models of real-world processes.
If the random variables $X_{n-1}$ and $X_n$ take the values $X_{n-1}=i$ and $X_n=j$, then the system has made a transition $S_i \rightarrow S_j$, that is, a transition from state $S_i$ to state $S_j$ at the $n$th trial. Note that $i$ can equal $j$, so a transition from a state to itself is possible. We need to assign probabilities to the transitions $S_i \rightarrow S_j$. In a general random process, the probability that $X_n=j$ may depend on the whole sequence of random variables starting with the initial value $X_0$.
The Markov chain has the characteristic property that the probability that $X_n=j$ depends only on the immediately previous state of the system. This means that we need no further information at each step than, for each $i$ and $j$, \[P\{X_n=j|X_{n-1}=i\},\]
the probability that $X_n=j$ given that $X_{n-1}=i$; this probability is independent of the values of $X_{n-2},X_{n-3},\cdots, X_0$.
Let $S=\{s_1,s_2,\cdots,s_n\}$ be a set of states. The process starts in one of these states and moves successively from one state to another; each move is called a step. If the chain is currently in state $s_i$, then it moves to state $s_j$ at the next step with a probability denoted by $p_{ij}$, and this probability does not depend on which states the chain was in before the current state. The probabilities $p_{ij}$ are called transition probabilities ($s_i \xrightarrow[]{p_{ij}} s_j$). The process can also remain in its current state, which occurs with probability $p_{ii}$.
An initial probability distribution, defined on $S$, specifies the starting state. Usually this is done by specifying a particular state as the starting state.
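As a concrete illustration of these definitions, here is a minimal Python sketch; the three states, all probabilities, and the `simulate` helper are invented for illustration, not taken from the source.

```python
import random

# Hypothetical three-state chain; states and probabilities are invented.
states = ["s1", "s2", "s3"]
initial = [0.5, 0.3, 0.2]        # initial distribution P(X_0 = s_i)
P = [
    [0.7, 0.2, 0.1],             # p_1j: transitions out of s1
    [0.3, 0.5, 0.2],             # p_2j: transitions out of s2
    [0.2, 0.3, 0.5],             # p_3j: transitions out of s3
]

def simulate(n_steps, seed=None):
    """Draw X_0 from `initial`, then take n_steps moves, each depending
    only on the current state (the Markov property)."""
    rng = random.Random(seed)
    i = rng.choices(range(len(states)), weights=initial)[0]
    path = [states[i]]
    for _ in range(n_steps):
        i = rng.choices(range(len(states)), weights=P[i])[0]
        path.append(states[i])
    return path

print(simulate(10, seed=42))
```

Note that each row of `P` sums to 1, since from any state the chain must move somewhere (possibly back to the same state).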
A Markov chain is a sequence of random variables $X_1, X_2,\cdots$ with the Markov property that, given the present state, the future and past states are independent. Thus
\[P(X_n=x|X_1=x_1,X_2=x_2,\cdots,X_{n-1}=x_{n-1})\]
\[\quad=P(X_n=x|X_{n-1}=x_{n-1}),\]
or, in terms of the state indices, the one-step transition probability
\[p_{ij}=P(X_n=j|X_{n-1}=i).\]
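The defining property can also be checked empirically. The following sketch (an illustration, not from the source) simulates a two-state chain with assumed values $p_{00}=0.8$ and $p_{10}=0.4$, then estimates $P(X_n=0|X_{n-1}=0)$ separately for histories with $X_{n-2}=0$ and $X_{n-2}=1$; both estimates come out near $0.8$, as the Markov property predicts.

```python
import random

p00, p10 = 0.8, 0.4      # assumed: P(next=0 | cur=0) and P(next=0 | cur=1)
rng = random.Random(0)

# Simulate a long path of the two-state chain starting from state 0.
x, path = 0, [0]
for _ in range(200_000):
    to_zero = p00 if x == 0 else p10
    x = 0 if rng.random() < to_zero else 1
    path.append(x)

# Estimate P(X_n = 0 | X_{n-1} = 0) for the two possible values of X_{n-2}.
for prev2 in (0, 1):
    hits = [path[k + 2] == 0
            for k in range(len(path) - 2)
            if path[k] == prev2 and path[k + 1] == 0]
    print(prev2, round(sum(hits) / len(hits), 3))   # both should be near 0.8
```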
Example: A Two-State Markov Chain
A Markov chain $X$ on $S=\{0,1\}$ is determined by the initial distribution, given by $p_0=P(X_0=0)$ and $p_1=P(X_0=1)$, and by the one-step transition probabilities, given by $p_{00}=P(X_{n+1}=0|X_n=0)$, $p_{10}=P(X_{n+1}=0|X_n=1)$, $p_{01}=1-p_{00}$, and $p_{11}=1-p_{10}$. In matrix form, with row $i$ holding the probabilities of transitions out of state $i$ (so each row sums to 1), the one-step transition matrix is $P=\begin{pmatrix}p_{00}&p_{01}\\p_{10}&p_{11}\end{pmatrix}$.
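As a quick numeric sketch of this example (the values of $p_{00}$, $p_{10}$, and the initial distribution are invented), the distribution of $X_n$ can be computed by multiplying the initial distribution, written as a row vector, by the $n$th power of $P$; this identity is not derived in the source, but follows from iterating the one-step transition rule.

```python
import numpy as np

p00, p10 = 0.9, 0.5
P = np.array([[p00, 1 - p00],    # row 0: transitions out of state 0
              [p10, 1 - p10]])   # row 1: transitions out of state 1
init = np.array([0.3, 0.7])      # (p_0, p_1), an assumed initial distribution

# The distribution of X_n is the initial row vector times P raised to n.
for n in (1, 2, 10):
    dist = init @ np.linalg.matrix_power(P, n)
    print(f"P(X_{n}=0) = {dist[0]:.4f}, P(X_{n}=1) = {dist[1]:.4f}")
```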
Markov chains are a powerful tool for modeling various random processes. However, it’s important to remember that they assume the Markov property, which may not always hold true in real-world scenarios.
Applications of Markov Chains
- Information Theory: Markov models of information sources are used in data compression; coding schemes such as Huffman coding can exploit a Markov model of the source.
- Search Algorithms: Google's PageRank models a random surfer on the web as a Markov chain; similar ideas appear in recommender systems and website navigation analysis.
- Queueing Theory: Helps model customer arrivals and service times in queues.
- Financial Modeling: Financial analysts can use Markov chains to model stock prices or economic trends.
- Game Design: Markov chains can be used to create video games with more realistic and interesting behavior for non-player characters.
- Predictive Text: Smartphones that suggest the next word you are typing use a kind of Markov chain, where the probability of the next word depends on the current word (a toy sketch follows this list).
- Weather Modeling: Markov chains can be used to represent the probabilities of transitioning between different weather states.
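To make the predictive-text item concrete, here is a toy sketch (the corpus and the `suggest` helper are invented for illustration): it counts word-to-word transitions in a small text and suggests the most frequent follower of the current word.

```python
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat the cat ate the fish "
          "the dog sat on the rug").split()

# Count transitions: next_counts[w] tallies the words that follow w.
next_counts = defaultdict(Counter)
for w, nxt in zip(corpus, corpus[1:]):
    next_counts[w][nxt] += 1

def suggest(word):
    """Return the most frequent follower of `word`, i.e. the next state
    with the highest estimated transition probability."""
    followers = next_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(suggest("the"))   # 'cat', the most common word after 'the'
```

Real keyboards condition on more context and smooth the counts, but the state-to-state structure is the same.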