Transitions define how agents move between regimes and how state variables evolve over time. Both are specified on the Regime:
- Regime transitions (transition field) — which regime next period?
- State transitions (state_transitions field) — how do states evolve?
import jax.numpy as jnp
from lcm import MarkovTransition, Regime, categorical
from lcm.typing import FloatND, ScalarInt

Regime Transitions¶
The regime transition function determines which regime an agent enters in the next period. There are two kinds:
Deterministic¶
The function returns an integer regime ID (from the @categorical RegimeId
class). Use this for transitions that depend deterministically on states and
actions — for example, mandatory retirement at a certain age, or a retirement
decision triggered by a discrete action.
Pass the function directly to Regime(transition=...):
@categorical(ordered=False)
class RegimeId:
    working_life: int
    retirement: int

def next_regime(age: float, retirement_age: float) -> ScalarInt:
    return jnp.where(age >= retirement_age, RegimeId.retirement, RegimeId.working_life)

print(next_regime)  # just a plain function — no wrapper needed
<function next_regime at 0x7056ef7a6980>
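To see the deterministic branch in action, here is a minimal self-contained sketch. It uses stand-in integer IDs (0 and 1 are assumptions; the real codes are assigned by the @categorical RegimeId class):

```python
import jax.numpy as jnp

# Stand-ins for RegimeId.working_life and RegimeId.retirement;
# the actual integer codes come from @categorical.
WORKING_LIFE, RETIREMENT = 0, 1

def next_regime(age, retirement_age):
    # Deterministic: returns a scalar regime ID, not a probability array.
    return jnp.where(age >= retirement_age, RETIREMENT, WORKING_LIFE)

print(next_regime(64.0, 65.0))  # 0
print(next_regime(66.0, 65.0))  # 1
```

Because the branch runs through jnp.where rather than a Python if, the function stays compatible with JAX tracing.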
Stochastic¶
The function returns a probability array over all regimes. Use this when the regime transition is uncertain — for example, a mortality risk that determines whether the agent survives to the next period.
Wrap the transition function in MarkovTransition:
@categorical(ordered=False)
class RegimeIdMortality:
    alive: int
    dead: int

def survival_transition(survival_prob: float) -> FloatND:
    """Return [P(alive), P(dead)]."""
    return jnp.array([survival_prob, 1 - survival_prob])

example_regime = Regime(
    transition=MarkovTransition(survival_transition),
    functions={"utility": lambda: 0.0},
)

Internally, deterministic transitions are converted to one-hot probability arrays, so both types end up in the same format during the solve step.
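A sketch of that equivalence in plain JAX (no lcm needed; the one-hot conversion below illustrates the idea, it is not lcm's internal code):

```python
import jax
import jax.numpy as jnp

N_REGIMES = 2  # e.g. alive, dead

# Stochastic transition: already a probability array over regimes.
def survival_transition(survival_prob):
    return jnp.array([survival_prob, 1 - survival_prob])

# Deterministic transition result: a single regime ID,
# viewed as a degenerate distribution via one-hot encoding.
deterministic_id = 1
as_probs = jax.nn.one_hot(deterministic_id, N_REGIMES)

print(survival_transition(0.95))  # [0.95 0.05]
print(as_probs)                   # [0. 1.]
```

Both results are length-2 probability vectors, which is why the solver can treat the two kinds of transition uniformly.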
State Transitions¶
The state_transitions dict on a Regime defines how each state variable evolves
over time. Every non-shock state in a non-terminal regime must have an entry.
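For orientation before the variants below, a minimal sketch of a law of motion; next_wealth and its arguments (consumption, interest_rate) are hypothetical names, not part of lcm's API:

```python
# Hypothetical law of motion: savings grow at the interest rate.
def next_wealth(wealth, consumption, interest_rate):
    return (1 + interest_rate) * (wealth - consumption)

state_transitions = {
    "wealth": next_wealth,  # deterministic: callable returns next-period value
    "education": None,      # fixed: identity transition auto-generated
}

print(next_wealth(100.0, 20.0, 0.05))
```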
Deterministic Transitions¶
Pass a callable that returns the next-period value:
state_transitions={
    "wealth": next_wealth,  # callable: (wealth, consumption, ...) -> next_wealth
}

Fixed States (None)¶
Pass None for states that don’t change over time; an identity transition is auto-generated:
state_transitions={
    "education": None,  # fixed — identity auto-generated
}

Stochastic Transitions (MarkovTransition)¶
For states with stochastic transitions (e.g., a Markov chain over health states),
wrap the transition function in MarkovTransition. The function returns a
probability vector over grid points:
from lcm import MarkovTransition
state_transitions={
    "health": MarkovTransition(health_transition_probs),
}

Target-Regime-Dependent Transitions¶
When a state’s transition depends on which regime the agent enters next period, use a dict keyed by target regime name. Every reachable target must be listed:
state_transitions={
    "health": {
        "working": MarkovTransition(health_probs_working),
        "retired": MarkovTransition(health_probs_retired),
    },
}

Within a per-target dict, stochasticity must be consistent — all entries must be MarkovTransition or all must be plain callables.
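A consistent per-target sketch, assuming a two-point health grid (good = 0, bad = 1); the function names and probabilities are made up for illustration. Both functions return probability arrays, so both entries would be wrapped in MarkovTransition:

```python
import jax.numpy as jnp

# Hypothetical transition probabilities over a 2-point health grid.
def health_probs_working(health):
    return jnp.where(health == 0, jnp.array([0.9, 0.1]), jnp.array([0.4, 0.6]))

def health_probs_retired(health):
    return jnp.where(health == 0, jnp.array([0.8, 0.2]), jnp.array([0.3, 0.7]))

# Consistent: both targets stochastic, so both get MarkovTransition:
# state_transitions={
#     "health": {
#         "working": MarkovTransition(health_probs_working),
#         "retired": MarkovTransition(health_probs_retired),
#     },
# }

for f in (health_probs_working, health_probs_retired):
    for h in (0, 1):
        print(f(h))  # each row is a valid probability distribution
```

Mixing a MarkovTransition with a plain callable inside the same per-target dict would violate the consistency rule.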
Terminal Regimes¶
Terminal regimes (transition=None) must have empty state_transitions — there is
no next period, so no transitions are needed.
Shock Grids¶
Shock grids (e.g., Normal, Tauchen) define their own transition probabilities
(based on the discretization method), so you do not specify them in
state_transitions. See Shocks for details.