| tags |  |
|---|---|
| comments | true |
| dg-publish | true |
In the process of predicting what the weather will be like on a given day from the initial state, we may collect new meteorological evidence that affects our belief about the probability distribution over states.
A Hidden Markov Model (HMM) allows us to observe some evidence at each timestep, which can affect the belief distribution over the states. Compared to a standard Markov model, a Hidden Markov Model requires not only an initial distribution and a transition model, but also a sensor model.
To make the distinction, we'll call each state variable $W_i$ a hidden state and each evidence variable $F_i$ an observation.
The model implies similar conditional independence relationships as standard Markov models, with an additional set of relationships for the evidence variables:
Just like Markov models, Hidden Markov Models make the assumption that the transition model is stationary; in addition, the evidence variable at each timestep is conditionally independent of all earlier states and evidence given the current state, i.e. $F_i \perp (W_{0:i-1}, F_{1:i-1}) \mid W_i$.
- $f_{i}$: the evidence observed at timestep $i$; we abbreviate $f_{1:t} := f_{1}, f_{2}, \dots, f_{t}$.
- $B(W_{i}) = P(W_{i} \mid f_{1:i})$: the belief distribution at time $i$ with all evidence $f_{1}, \dots, f_{i}$ observed up to date.
- $B'(W_{i}) = P(W_{i} \mid f_{1:(i-1)})$: the belief distribution at time $i$ with only the evidence $f_{1}, \dots, f_{i-1}$ observed.
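As a concrete reference point for the formulas that follow, here is a minimal toy HMM in Python. The two states (`"sun"`, `"rain"`), the two observations (`"good"`, `"bad"`), and all the probabilities are illustrative assumptions, not values from the original note:

```python
# A toy weather HMM. States, observations, and numbers are made up for
# illustration only.
STATES = ["sun", "rain"]

# Initial distribution P(W_0)
initial = {"sun": 0.5, "rain": 0.5}

# Transition model P(W_{i+1} | W_i), indexed as transition[w_i][w_{i+1}]
transition = {
    "sun":  {"sun": 0.9, "rain": 0.1},
    "rain": {"sun": 0.3, "rain": 0.7},
}

# Sensor model P(F_i | W_i), indexed as sensor[w_i][f_i]
sensor = {
    "sun":  {"good": 0.8, "bad": 0.2},
    "rain": {"good": 0.3, "bad": 0.7},
}
```

Each inner dictionary is a conditional distribution, so its values sum to 1.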
Noting that the time-elapse update gives $B'(W_{i+1}) = \sum_{w_{i}} P(W_{i+1} \mid w_{i})\, B(w_{i})$, after some derivation (see the original note) we get:
[!extra]-
Based on the original note, I believe it should be: $\boxed{B(W_{i+1}) = \frac{P(f_{i+1}|W_{i+1})\sum_{{w_{i}}}P(W_{i+1}|w_{i})B(w_{i})}{P(f_{i+1}|f_{1:i})}}$. This is the Hidden Markov Model's forward algorithm.
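The boxed update can be sketched directly in code. This is a minimal implementation of the forward algorithm, assuming the model is given as nested dictionaries (`initial` for $B(W_0)$, `transition[w1][w2]` for $P(W_{i+1}{=}w_2 \mid W_i{=}w_1)$, `sensor[w][f]` for $P(f \mid w)$) — that representation is a choice made here, not something the note prescribes:

```python
def forward(initial, transition, sensor, observations):
    """Filtering: compute B(W_i) = P(W_i | f_{1:i}) after each observation."""
    states = list(initial)
    B = dict(initial)  # B(W_0): the initial distribution, no evidence yet
    beliefs = []
    for f in observations:
        # Time-elapse update: B'(W_{i+1}) = sum_{w_i} P(W_{i+1}|w_i) B(w_i)
        Bp = {w2: sum(transition[w1][w2] * B[w1] for w1 in states)
              for w2 in states}
        # Observation update: weight by P(f_{i+1}|W_{i+1}), then normalize.
        unnorm = {w: sensor[w][f] * Bp[w] for w in states}
        Z = sum(unnorm.values())  # Z equals the denominator P(f_{i+1}|f_{1:i})
        B = {w: p / Z for w, p in unnorm.items()}
        beliefs.append(B)
    return beliefs
```

Each iteration performs the time-elapse update to get $B'$, then the observation update; the normalizer `Z` is exactly the denominator $P(f_{i+1} \mid f_{1:i})$ in the boxed formula.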
The problem of finding the most likely sequence of hidden states can also be solved using dynamic programming, with the Viterbi algorithm. To visualize the algorithm, consider the following state trellis, a graph of states and transitions over time:
In this HMM with two possible hidden states, sun or rain, we would like to compute the highest-probability path (an assignment of a state to every timestep) from the first timestep to the last.
The weight on an edge from state $w_{t-1}$ to $w_{t}$ is $P(w_{t} \mid w_{t-1})\, P(f_{t} \mid w_{t})$. If we multiply the edge weights along a path, the product gives the joint probability of that state sequence and the observed evidence, so the heaviest path is the most likely explanation.
The algorithm consists of two passes:
- The first runs forward in time and computes the probability of the best path to each (state, timestep) tuple given the evidence observed so far.
- The second pass runs backwards in time: first it finds the terminal state that lies on the path with the highest probability, and then traverses backward through time along the path that leads into this state (which must be the best path).
Define $m_{t}[w_{t}] = \max_{w_{1:t-1}} P(w_{1:t}, f_{1:t})$, the probability of the best length-$t$ path ending in state $w_{t}$ and consistent with the evidence. Using the Markov assumptions, this satisfies the recurrence $m_{t}[w_{t}] = P(f_{t} \mid w_{t}) \max_{w_{t-1}} P(w_{t} \mid w_{t-1})\, m_{t-1}[w_{t-1}]$, which the forward pass computes; the back-pointers recorded at each maximization are what the backward pass follows.
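The two passes described above can be sketched as follows. As with the forward algorithm, the nested-dictionary model representation is an assumption of this sketch, and the returned path includes the (unobserved) state at time 0:

```python
def viterbi(initial, transition, sensor, observations):
    """Return a most likely hidden-state sequence given the observations."""
    states = list(initial)
    m = dict(initial)  # m[w]: probability of the best path ending in w
    back = []          # back[t][w]: best predecessor of state w at step t
    # Forward pass: compute m_t via the recurrence, recording back-pointers.
    for f in observations:
        prev, m, bp = m, {}, {}
        for w in states:
            best = max(states, key=lambda w0: transition[w0][w] * prev[w0])
            m[w] = sensor[w][f] * transition[best][w] * prev[best]
            bp[w] = best
        back.append(bp)
    # Backward pass: pick the best terminal state, then follow back-pointers.
    last = max(states, key=lambda w: m[w])
    path = [last]
    for bp in reversed(back):
        path.append(bp[path[-1]])
    path.reverse()
    return path  # path[0] is the state at time 0, before any evidence
```

The structure mirrors the forward algorithm, with the sum over predecessors replaced by a max plus a recorded argmax.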


