Markov process



Mar·kov pro·cess

(mar'kof),
a stochastic process such that the conditional probability distribution for the state at any future instant, given the present state, is unaffected by any additional knowledge of the past history of the system.
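
As an illustration, the Markov property means that simulating such a process requires only the current state. The following is a minimal Python sketch; the two-state chain and its probabilities are invented for illustration:

    import random

    # Hypothetical two-state chain: the next state is drawn using only the
    # current state, never the path taken to reach it (the Markov property).
    TRANSITIONS = {
        "sunny": {"sunny": 0.8, "rainy": 0.2},
        "rainy": {"sunny": 0.4, "rainy": 0.6},
    }

    def step(state):
        """Draw the next state from the current state alone."""
        r = random.random()
        cumulative = 0.0
        for nxt, p in TRANSITIONS[state].items():
            cumulative += p
            if r < cumulative:
                return nxt
        return nxt  # guard against floating-point rounding

    state = "sunny"
    for _ in range(10):
        state = step(state)
        print(state)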

Markov process

EBM
A technique of decision analysis that contrasts with standard decision-tree analysis, in which a patient moves in one direction through states (e.g., from not treated, to treated, to final outcome, which may include death). In a Markov process, a hypothetical patient can move back and forth between states, for example between continuous ambulatory peritoneal dialysis and haemodialysis, with the caveat that some states, the so-called absorbing states, cannot be left once entered (e.g., death).
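
To make the idea concrete, here is a minimal cohort sketch in Python; the transition probabilities are invented for illustration and have no clinical basis. Patients move back and forth between continuous ambulatory peritoneal dialysis (CAPD) and haemodialysis (HD), while death is absorbing:

    import numpy as np

    states = ["CAPD", "HD", "dead"]
    P = np.array([
        [0.70, 0.20, 0.10],   # from CAPD
        [0.25, 0.60, 0.15],   # from HD
        [0.00, 0.00, 1.00],   # from dead: absorbing, cannot be left
    ])

    cohort = np.array([1.0, 0.0, 0.0])  # everyone starts on CAPD
    for year in range(1, 6):
        cohort = cohort @ P              # one annual transition
        print(year, dict(zip(states, cohort.round(3))))

Running the loop shows the share of the cohort in the absorbing "dead" state growing monotonically, while the CAPD and HD shares exchange members in both directions.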

Theoretical medicine
A stochastic process in which the conditional probability distribution for a system's state at any future instant, given its present state, is unaffected by additional knowledge of the system's past history.


Markov,

(Markoff), Andrei, Russian mathematician, 1865-1922.
Markov chain - a sequence of steps or events in which the probability of each step depends only on the one immediately before it.
Markov chaining - a theory used in psychiatry.
Markov process - a process such that the conditional probability distribution for the state at any future instant, given the present state, is unaffected by any additional knowledge of the past history of the system.
References in periodicals archive
In a Markov process with a finite number of states, the states are either transient or recurrent.
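
In a finite chain this classification is mechanical: a state is recurrent exactly when every state reachable from it can reach it back, and transient otherwise. A short Python sketch of that check (the example chain is hypothetical):

    def reachable(P, i):
        """States reachable from i (including i) along positive-probability edges."""
        seen, stack = {i}, [i]
        while stack:
            j = stack.pop()
            for k in range(len(P)):
                if P[j][k] > 0 and k not in seen:
                    seen.add(k)
                    stack.append(k)
        return seen

    def classify(P):
        reach = [reachable(P, i) for i in range(len(P))]
        return ["recurrent" if all(i in reach[j] for j in reach[i]) else "transient"
                for i in range(len(P))]

    # State 0 leaks into the closed class {1, 2} and is never revisited.
    P = [[0.5, 0.5, 0.0],
         [0.0, 0.3, 0.7],
         [0.0, 0.6, 0.4]]
    print(classify(P))   # ['transient', 'recurrent', 'recurrent']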
[…]_t has mean zero, is mean-square stable, and is independent of the fundamental Markov process s_t.
Also, given a matrix Markov process, one may investigate the citation ranks and PageRanks for the events.
That is, a Markov process is a stochastic process in which the probability of the process being in a given state depends only on the previous state, not on the history of how that previous state was reached.
Kim (1993a) assumes that real gross national product (GNP) consists of the sum of two independent unobserved components: one following a random walk with drift, which evolves according to a two-state Markov process, and the other following an autoregressive process.
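
A simulation sketch of that kind of regime-switching random walk may help; this is not Kim's estimation procedure, and the drift values and transition probabilities below are invented for illustration:

    import numpy as np

    rng = np.random.default_rng(0)

    drift = {0: 0.8, 1: -0.5}            # illustrative drift in each regime
    P = np.array([[0.95, 0.05],          # P[i, j] = Pr(s_t = j | s_{t-1} = i)
                  [0.10, 0.90]])

    s, trend = 0, 0.0
    for t in range(20):
        s = int(rng.choice(2, p=P[s]))   # regime follows a two-state Markov process
        trend += drift[s] + rng.normal() # random walk with regime-dependent drift
        print(t, s, round(trend, 2))

(Only the random-walk component is simulated here; Kim's model adds an independent autoregressive component on top of it.)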
If the transition probabilities are equal, then a random walk process exists and the series does not follow a first-order Markov process.
Behavior can thus be described as an ergodic Markov process with a finite state space (a state is a sequence of the last m plays) and a unique stationary distribution.
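
For such an ergodic chain the stationary distribution can be found by iterating the transition matrix until the state distribution stops changing. A brief Python sketch over a hypothetical three-state chain:

    import numpy as np

    P = np.array([[0.6, 0.3, 0.1],      # rows are states, entries sum to 1
                  [0.2, 0.5, 0.3],
                  [0.1, 0.4, 0.5]])

    pi = np.full(3, 1/3)                # any starting distribution works
    for _ in range(1000):               # power iteration converges for ergodic chains
        pi = pi @ P

    print(pi.round(4))                  # the unique stationary distribution
    print((pi @ P).round(4))            # unchanged by one more step: pi = pi @ P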
The critical concept that must be added to Frumhoff and Reeve's (1994) analysis is that rapidly evolving characters governed by the Markov process specified in Matrix 1 are unlikely to generate a pattern of character-state fixation for a single state (rather than a mosaic of both states) in all N descendant species, particularly when N is large [Figure 2 omitted].
Assumptions made relate to those inherent in a Markov process.
For example, in a Markov process setting, the regeneration times would typically correspond to successive hitting times of some fixed state.
To decide which of the two models is more appropriate for the New Jersey data, I tested the statistical significance of the difference between the proportion of employers who actually remained in a category for the 4-year period and the proportion who would remain in that category if only a simple Markov process of average transition probabilities were operating.
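
The logic of that test can be sketched in a few lines of Python; the stay probability, observed proportion, and sample size below are invented for illustration and are not the New Jersey figures:

    from math import sqrt

    # Under a simple first-order Markov process with average annual stay
    # probability p, the chance of remaining in a category through four
    # consecutive annual transitions is p**4.
    p_annual = 0.85
    expected_stay = p_annual ** 4        # Markov-implied 4-year retention

    observed_stay, n = 0.70, 500         # hypothetical observed data

    # z-statistic for the difference between observed and predicted proportions
    se = sqrt(expected_stay * (1 - expected_stay) / n)
    z = (observed_stay - expected_stay) / se
    print(f"Markov predicts {expected_stay:.3f}, observed {observed_stay:.3f}, z = {z:.2f}")

A large z indicates that category persistence exceeds what average transition probabilities alone would produce, which is evidence against the simple Markov model.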