
Likelihood ridges in discrete trait models.

Maximum likelihood estimation of discrete trait models has a well-known identifiability problem. When transition rates become very high relative to tree depth, the tip states converge on the stationary distribution and the likelihood becomes nearly flat across a huge range of rates. This page shows why, when it matters, and what to do about it.

The core problem

When transition rates become very high relative to tree depth, something surprising happens: trait states get "scrambled" across the phylogeny. Every lineage rapidly visits every state many times, and the phylogenetic signal is washed out.

In this limit, the expected distribution of tip states converges on the stationary distribution of the Markov chain. This distribution is determined only by the ratio of rates, not their absolute magnitude. Consider a simple example: a model with q01 = 0.1 and q10 = 0.1 (slow rates) can produce nearly the same distribution of tip states as q01 = 100 and q10 = 100 (very fast rates), if the tip data show a roughly 50/50 split between states.

This creates a likelihood ridge: a long plateau in parameter space extending from biologically realistic rate values (say, 0.001 to 1 events per unit branch length) out to arbitrarily high values. Along this ridge, the likelihood changes only minimally. The ridge runs along the direction of proportional rate scaling: both rates increase together while maintaining their ratio.

Mathematical intuition: Under the equal-rates model, the probability that a tip retains its ancestral state 0 is P(0) = 0.5 + 0.5 × exp(−2qt), where q is the rate and t is the branch length. When qt is large, exp(−2qt) ≈ 0 regardless of whether q is 0.1, 100, or 1000. More generally, the stationary distribution π₀ = q₁₀/(q₀₁ + q₁₀) depends only on the ratio of the two rates and is unaffected by scaling both together.
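The arithmetic above can be checked directly. A minimal Python sketch (function names are illustrative, not from any phylogenetics package):

```python
import math

def p_stay(q: float, t: float) -> float:
    """P(tip in state 0 | root in state 0) for the symmetric two-state
    chain with rate q over branch length t."""
    return 0.5 + 0.5 * math.exp(-2.0 * q * t)

# With t = 1, fast rates are indistinguishable: exp(-2qt) is ~0
# whether q is 100 or 1000.
for q in (0.1, 100.0, 1000.0):
    print(q, round(p_stay(q, 1.0), 6))

def stationary_p0(q01: float, q10: float) -> float:
    """Stationary probability of state 0: pi0 = q10 / (q01 + q10)."""
    return q10 / (q01 + q10)

# Scaling both rates together leaves the stationary distribution unchanged.
print(stationary_p0(0.1, 0.1), stationary_p0(100.0, 100.0))
```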

Why this matters

Red flag: If your ML estimates return rates much larger than 1 event per unit branch length, suspect ridge wandering. Check your data: if states are distributed roughly equally across the tree, those high rates are probably riding the ridge.

Interactive likelihood ridge visualization

The plot below shows a simulated likelihood surface for a simple two-state equal-rates model (q₀₁ = q₁₀ = q). The x-axis is the rate q on a logarithmic scale; the y-axis is log-likelihood. Use the slider to change the proportion of tips in state 0, and watch how the likelihood ridge forms and shifts.

[Interactive plot: log-likelihood curve over q, with the ridge region (ΔlogL < 2) shaded, the MLE position marked, and a slider (default 0.50) setting the proportion of tips in state 0.]
How to read this: The green vertical line marks the maximum likelihood estimate (MLE). The shaded teal region shows the likelihood ridge: the range of q where the log-likelihood stays within 2 units of the peak. When the proportion is near 0.5, notice how the ridge extends far to the right (high q values). When the proportion moves away from 0.5, the ridge narrows and the likelihood becomes more peaked. This is why balanced trait distributions are problematic: the ridge becomes pronounced and high rates remain plausible.
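The behavior described here can be reproduced with a deliberately crude model: tips treated as independent draws, conditional on a root fixed in state 0 (an assumption made only to keep the sketch short; real likelihoods integrate over the tree):

```python
import math

def log_lik(q, n, k, t=1.0):
    """Log-likelihood of k of n tips in state 0, root fixed in state 0,
    each tip independent at branch length t (a crude toy model)."""
    p = 0.5 + 0.5 * math.exp(-2.0 * q * t)
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

def ridge_width(n, k, delta=2.0):
    """Range of log10(q) whose log-likelihood is within `delta` of the peak."""
    grid = [10 ** (x / 50.0) for x in range(-150, 151)]  # q from 1e-3 to 1e3
    lls = [log_lik(q, n, k) for q in grid]
    peak = max(lls)
    inside = [math.log10(q) for q, ll in zip(grid, lls) if peak - ll < delta]
    return min(inside), max(inside)

# Balanced tip states: the ridge runs to the top of the grid.
print(ridge_width(100, 50))
# Unbalanced tip states: the ridge narrows sharply.
print(ridge_width(100, 90))
```

With a 50/50 tip split the within-2-log-units region spans several orders of magnitude of q; with a 90/10 split it collapses to a narrow interval, exactly the slider behavior described above.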

Solutions to ridge wandering

1. MCMC with priors

Bayesian MCMC with an exponential or lognormal prior on rates is the most principled solution. A prior naturally penalizes high values that have little data support. The posterior distribution will concentrate near biologically reasonable values unless the data strongly support high rates.

This is why tools like chromePlus and ChromEvol offer MCMC alongside maximum likelihood. The prior acts as a regularizer, pulling estimates away from the ridge toward plausible values. For example, an exponential prior with mean 0.5 assigns q = 100 a prior density many orders of magnitude lower than q = 0.5, even if both fit the likelihood equally well.
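The size of that prior penalty is easy to compute. A quick sketch using the standard exponential density (nothing here is chromePlus or ChromEvol API):

```python
import math

def log_exp_prior(q, mean=0.5):
    """Log density of an Exponential prior with the given mean."""
    rate = 1.0 / mean
    return math.log(rate) - rate * q

# If the likelihood is flat between q = 0.5 and q = 100 (the ridge),
# the prior alone decides: the log-posterior difference equals the
# log-prior difference.
diff_nats = log_exp_prior(0.5) - log_exp_prior(100.0)
print(round(diff_nats / math.log(10), 1), "orders of magnitude")
```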

2. Multiple starting points

Running ML optimization from many random starting points samples different regions of parameter space. If most runs converge near low-rate solutions but a few wander up the ridge, the true MLE is likely in the low-rate cluster. Compare the converged values: if they are stable across runs, you can trust them; if they vary wildly, you may be seeing ridge effects.
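A toy illustration of the multi-start strategy, using a hand-rolled 1-D compass search in place of a real optimizer (all names hypothetical; the likelihood is the same crude independent-tips toy model):

```python
import math
import random

def log_lik(q, n=100, k=80, t=1.0):
    """Toy tip-state log-likelihood (root in state 0, independent tips)."""
    p = 0.5 + 0.5 * math.exp(-2.0 * q * t)
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

def hill_climb(x0, f, step=0.5, tol=1e-6):
    """Crude compass search on x = log10(q); stands in for a real optimizer.
    Ties are accepted so the search can walk off exactly-flat stretches."""
    x = x0
    while step > tol:
        for cand in (x - step, x + step):
            if f(10 ** cand) >= f(10 ** x):
                x = cand
                break
        else:
            step /= 2.0
    return 10 ** x

random.seed(1)
starts = [random.uniform(-3.0, 3.0) for _ in range(20)]
fits = sorted(hill_climb(x0, log_lik) for x0 in starts)
# With k/n = 0.8 the likelihood is peaked, so every start should
# converge to about the same rate (roughly -ln(0.6)/2 ~ 0.255):
print(round(fits[0], 3), round(fits[-1], 3))
```

If the converged values instead spread across orders of magnitude, that spread itself is the diagnostic: the starts are sampling different points along a ridge.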

3. Penalized likelihood

Adding a regularization term (e.g., an L2 penalty proportional to rate magnitude) creates a unique maximum that concentrates near realistic values. The penalized log-likelihood is:

penalized logL(q) = logL(q) − λ × q²

The penalty weight λ controls how much you trust the prior belief that rates should be small. Cross-validation or information criteria can guide the choice of λ. This is less formal than a full Bayesian approach but faster and often sufficient.

4. Rate bounds

The simplest (if somewhat arbitrary) approach is to set upper bounds on the rate parameter during optimization. For example, you might constrain q ≤ 10 events per unit branch length. This prevents the optimizer from ever reaching the ridge. The downside is that the bound is ad hoc and can artificially truncate the likelihood if the true value happens to exceed it. Still, if you have strong biological priors on reasonable rate ranges, this is practical.

chromePlus and ChromEvol: best practices

The chromePlus package (developed by the Blackmon Lab) and ChromEvol both allow MCMC inference with user-specified priors precisely because of the ridge problem. When rates inferred by maximum likelihood seem unreasonably high (e.g., > 1 event per unit branch length), this is a diagnostic sign of ridge wandering.

To protect yourself: use an informative prior, run the optimization from multiple starting points, compare MCMC and ML results, and sanity-check any rate estimate above roughly 1 event per unit branch length.

See the discrete trait evolution guide for more on model selection and the Mk framework. For the history of chromosome evolution methods, including the development of these tools, consult the methods review. Explore karyotype databases for real data examples.

Summary

Likelihood ridges are an inherent feature of discrete trait inference, not a bug but a symptom of limited model identifiability. When tip states are close to equilibrium proportions, high rates become nearly indistinguishable from low rates. This is a genuine inference problem: the data alone may not constrain rates well.

The solutions (priors, multiple starts, penalization, and bounds) all aim to break the ridge by adding external information or constraints. In practice, Bayesian MCMC with an informative prior is the most reliable approach because it explicitly encodes what you know (or want to assume) about plausible rate ranges.

One nuance worth stating clearly. MCMC with an uninformative prior (e.g., Uniform(0, ∞) or Exponential with very large mean) will still wander up the ridge, because the likelihood is genuinely flat there and the prior offers no counter-pressure. Switching to Bayesian methods is not the fix on its own. What matters is a meaningfully informative prior: an Exponential with mean equal to the expected number of transitions per million years for the trait class, or a tight Lognormal centered on a biologically plausible rate. Without that, MCMC and ML can give the same misleading answer. Likewise, the multiple-starting-points strategy works because different starts sample different regions of the ridge; from any one starting point the optimizer converges deterministically. See Revell & Harmon (2008, Evol. Bioinform.) and Harmon (2019, Phylogenetic Comparative Methods) for explicit treatments of these issues.

When in doubt, compare results across methods and always sanity-check your rate estimates.
