-
Understanding Language Models 1: Mechanistic Interpretability Meets Causal Representation Learning
Mechanistic interpretability and causal representation learning study the same object (computation) from complementary angles: circuits versus variables.
-
Sequence Model 4: Nonstationary Dynamics
Extending identifiability results for sequence models to nonstationary dynamics.
-
Sequence Model 3: Past Observations as Auxiliary Variables
Deriving identifiability theorems for sequence models using the sufficient variability framework with past observations as auxiliary variables.
-
Sequence Model 2: Sufficient Variability
A family of assumptions that ensure identifiability in sequence models by leveraging sufficient variability in the latent dynamics.
-
Sequence Model 1: Identifiability
An introduction to sequence models through the lens of causal representation learning, and the fundamental challenge of recovering the true latent variables.