The {fHMM} package implements the hidden Markov model with a focus on applications to financial time series data. This vignette introduces the model and its hierarchical extension. It closely follows Oelschläger and Adam (2021).
The hierarchical hidden Markov model (HHMM) is a flexible extension of the HMM that can jointly model data observed on two different time scales. The two time series, one on a coarser and one on a finer scale, differ in the number of observations, e.g. monthly observations on the coarser scale and daily or weekly observations on the finer scale.
Following the concept of HMMs, we can model both time series jointly. First, we treat the time series on the coarser scale as stemming from an ordinary HMM, which we refer to as the coarse-scale HMM: At each time point \(t\) of the coarse-scale time space \(\{1,\dots,T\}\), an underlying process \((S_t)_t\) selects one state from the coarse-scale state space \(\{1,\dots,N\}\). We call \((S_t)_t\) the hidden coarse-scale state process. Depending on which state is active at \(t\), one of \(N\) distributions \(f^{(1)},\dots,f^{(N)}\) realizes the observation \(X_t\). The process \((X_t)_t\) is called the observed coarse-scale state-dependent process. The processes \((S_t)_t\) and \((X_t)_t\) have the same properties as before, namely \((S_t)_t\) is a first-order Markov process and \((X_t)_t\) satisfies the conditional independence assumption.
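To make this concrete, here is a minimal simulation sketch in base R (illustrative only, not {fHMM} code; all parameter values are arbitrary) of a coarse-scale HMM with \(N = 2\) states and normal state-dependent distributions:

```r
# Minimal sketch of a 2-state coarse-scale HMM with normal state-dependent
# distributions; all parameter values are arbitrary.
set.seed(1)
N <- 2                            # number of coarse-scale states
T <- 100                          # number of coarse-scale time points
Gamma <- matrix(c(0.9, 0.1,
                  0.2, 0.8), N, byrow = TRUE)  # transition probability matrix
mu    <- c(0, 0.5)                # state-dependent means
sigma <- c(1, 2)                  # state-dependent standard deviations
S <- numeric(T)
S[1] <- 1                         # initial state
for (t in 2:T) S[t] <- sample(1:N, 1, prob = Gamma[S[t - 1], ])
X <- rnorm(T, mean = mu[S], sd = sigma[S])     # state-dependent observations
```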
Subsequently, we segment the observations of the fine-scale time series into \(T\) distinct chunks, each of which contains all data points that correspond to the \(t\)-th coarse-scale time point. Assuming that we have \(T^*\) fine-scale observations at every coarse-scale time point, we face \(T\) chunks comprising \(T^*\) fine-scale observations each.
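In code, this segmentation is a simple reshape. A sketch, assuming the fine-scale observations are stored in time order in a vector `x_fine` (a hypothetical placeholder):

```r
# Sketch: segment T * T_star fine-scale observations into T chunks of
# T_star observations each, one chunk per coarse-scale time point.
T <- 100; T_star <- 30
x_fine <- rnorm(T * T_star)                    # placeholder fine-scale data
chunks <- matrix(x_fine, nrow = T, ncol = T_star, byrow = TRUE)
chunks[5, ]                                    # chunk for coarse-scale time point t = 5
```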
The hierarchical structure now becomes apparent as we model each of the chunks by one of \(N\) possible fine-scale HMMs. Each of the fine-scale HMMs has its own t.p.m. \(\Gamma^{*(i)}\), initial distribution \(\delta^{*(i)}\), stationary distribution \(\pi^{*(i)}\), and state-dependent distributions \(f^{*(i,1)},\dots,f^{*(i,N^*)}\). Which fine-scale HMM is selected to explain the \(t\)-th chunk of fine-scale observations depends on the hidden coarse-scale state \(S_t\). The \(i\)-th fine-scale HMM explaining the \(t\)-th chunk of fine-scale observations consists of the following two stochastic processes: At each time point \(t^*\) of the fine-scale time space \(\{1,\dots,T^*\}\), the process \((S^*_{t,t^*})_{t^*}\) selects one state from the fine-scale state space \(\{1,\dots,N^*\}\). We call \((S^*_{t,t^*})_{t^*}\) the hidden fine-scale state process. Depending on which state is active at \(t^*\), one of \(N^*\) distributions \(f^{*(i,1)},\dots,f^{*(i,N^*)}\) realizes the observation \(X^*_{t,t^*}\). The process \((X^*_{t,t^*})_{t^*}\) is called the observed fine-scale state-dependent process.
The fine-scale processes \((S^*_{1,t^*})_{t^*},\dots,(S^*_{T,t^*})_{t^*}\) and \((X^*_{1,t^*})_{t^*},\dots,(X^*_{T,t^*})_{t^*}\) likewise satisfy the Markov property and the conditional independence assumption, respectively. Furthermore, it is assumed that the fine-scale HMM explaining \((X^*_{t,t^*})_{t^*}\) depends only on \(S_t\). This hierarchical structure is illustrated by the simulation sketch below.
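The sketch (base R, illustrative only, not {fHMM} code; all parameter values are arbitrary) simulates from this structure: the coarse-scale state \(S_t\) selects which of the \(N\) fine-scale HMMs generates the \(t\)-th chunk.

```r
# Sketch of a hierarchical HMM: the coarse-scale state S_t selects one of
# N fine-scale HMMs, which then generates the t-th chunk of observations.
set.seed(1)
N <- 2; N_star <- 2; T <- 50; T_star <- 30
Gamma <- matrix(c(0.9, 0.1,
                  0.2, 0.8), N, byrow = TRUE)            # coarse-scale t.p.m.
Gamma_star <- list(matrix(c(0.95, 0.05,
                            0.05, 0.95), N_star, byrow = TRUE),
                   matrix(c(0.70, 0.30,
                            0.30, 0.70), N_star, byrow = TRUE))  # fine-scale t.p.m.s
mu_star <- list(c(-1, 1), c(-3, 3))   # fine-scale means, one set per coarse state
S <- numeric(T); S[1] <- 1
for (t in 2:T) S[t] <- sample(1:N, 1, prob = Gamma[S[t - 1], ])
X_star <- matrix(NA, T, T_star)
for (t in 1:T) {
  i <- S[t]                           # coarse-scale state selects the fine-scale HMM
  s <- numeric(T_star); s[1] <- 1
  for (tt in 2:T_star) s[tt] <- sample(1:N_star, 1, prob = Gamma_star[[i]][s[tt - 1], ])
  X_star[t, ] <- rnorm(T_star, mean = mu_star[[i]][s], sd = 1)
}
```

In {fHMM} itself, hierarchical models are specified through the package's control settings rather than simulated by hand; see the package documentation for details.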
This vignette was built using R 4.4.0 with the {fHMM} 1.4.1 package.
The package includes the normal and t-distributions for modeling log-returns and the gamma distribution for modeling absolute quantities like trading volume. Additionally, log-normal distributions can be specified.
For example, we can model price changes at time point \(t\) as being generated by different normal distributions whose mean and volatility depend on \(S_t\).
If the Markov process is irreducible, it has a unique stationary distribution, which solves \(\pi = \pi \Gamma\). If the Markov process is additionally aperiodic, its state distribution converges to the stationary distribution, see Norris (1997). Irreducibility and aperiodicity are usually satisfied in practice.
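As a quick numerical check of this property, the following sketch (with an arbitrary \(\Gamma\)) solves \(\pi = \pi \Gamma\) together with the constraint that the entries of \(\pi\) sum to one:

```r
# Sketch: stationary distribution of an irreducible t.p.m. Gamma.
# pi = pi Gamma with sum(pi) = 1 is equivalent to the linear system
# pi (I - Gamma + U) = 1', where U is a matrix of ones.
Gamma <- matrix(c(0.9, 0.1,
                  0.2, 0.8), 2, byrow = TRUE)
N <- nrow(Gamma)
pi_stat <- solve(t(diag(N) - Gamma + 1), rep(1, N))  # "+ 1" recycles to U
pi_stat                                              # 2/3, 1/3 for this Gamma
all.equal(as.numeric(pi_stat %*% Gamma), pi_stat)    # fixed-point check: TRUE
```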