Aletheia Laboratories

Aletheia Laboratories is an independent artificial intelligence research initiative. We study the internal structure of learned systems to understand what they represent, how they compute, and why they fail.

The central problem in AI is not capability. It is legibility. The scientific community's understanding of what neural networks actually compute lags far behind rapidly advancing capabilities. Knowledge of how these systems develop internal representations is concentrated within a handful of frontier labs, limiting both public discourse on AI and the ability of practitioners to trust and audit the systems they deploy.

To close this gap, we conduct mechanistic interpretability research with a specific focus on sequential and financial domains. Our work sits at the intersection of formal methods, financial time series, and the emerging science of neural circuits.

Research is better when open

Scientific progress requires transparency. We believe that advancing the field's understanding of AI systems requires sharing findings with the wider research community. We plan to publish papers, technical notes, and code. Sharing our work improves both the public discourse on AI and our own research culture.

What circuits implement trend detection?

Circuits in financial neural networks. What does a transformer actually compute when it learns to detect trend continuation in futures return series? We apply activation patching, probing classifiers, and residual stream decomposition to identify the computational structures underlying learned market behavior and test whether they correspond to known statistical phenomena.
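
As a concrete illustration of the first of these techniques, the sketch below shows the shape of an activation-patching experiment on a toy PyTorch transformer: cache an activation from a clean (trending) input, splice it into a run on a corrupted input, and measure how much of the output the patch restores. The model, the patched layer, and the trend-logit readout are illustrative stand-ins, not our research code.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    d_model, seq_len = 16, 32
    model = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
        num_layers=2,
    )
    model.eval()  # deterministic: disable dropout
    readout = nn.Linear(d_model, 1)  # stand-in "trend continuation" logit

    def trend_logit(x, patch=None, layer=0):
        """Run the model; optionally overwrite one layer's output (patching)."""
        cache = {}
        def hook(module, inputs, output):
            cache["act"] = output.detach()
            return patch if patch is not None else output
        handle = model.layers[layer].register_forward_hook(hook)
        try:
            out = model(x)
        finally:
            handle.remove()
        return readout(out[:, -1]).squeeze(), cache["act"]

    clean = torch.randn(1, seq_len, d_model)    # e.g. a trending series
    corrupt = torch.randn(1, seq_len, d_model)  # e.g. the trend destroyed

    _, clean_act = trend_logit(clean)                   # cache clean activations
    base, _ = trend_logit(corrupt)                      # corrupted baseline
    patched, _ = trend_logit(corrupt, patch=clean_act)  # splice clean layer back in
    print(f"logit recovery from patching layer 0: {patched - base:+.3f}")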

Learned representations across regimes. Neural networks trained on financial time series develop internal representations with no direct human analogue. We study the geometry of these representations across volatility regimes, asset classes, and model scales to understand what the residual stream encodes and when it breaks.
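
One way to make the geometry question concrete is to compare the principal subspaces of residual-stream activations collected under different regimes. The sketch below is a minimal version that assumes activations have already been extracted; random arrays with shared low-rank structure stand in for the real thing, and SciPy's subspace_angles does the comparison.

    import numpy as np
    from scipy.linalg import subspace_angles

    rng = np.random.default_rng(0)
    d_model, n = 64, 2000

    # Stand-in activations: shared low-rank structure plus regime-specific
    # noise. With real data these arrays come from the residual stream.
    basis = rng.standard_normal((d_model, 4))
    acts_low = rng.standard_normal((n, 4)) @ basis.T * 3.0 + rng.standard_normal((n, d_model))
    acts_high = rng.standard_normal((n, 4)) @ basis.T * 3.0 + 2.0 * rng.standard_normal((n, d_model))

    def top_pcs(acts, k=8):
        """Top-k principal directions (as columns) of centered activations."""
        centered = acts - acts.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return vt[:k].T  # (d_model, k)

    # Principal angles between the regimes' top activation subspaces:
    # small angles mean the same directions carry variance in both regimes.
    angles = subspace_angles(top_pcs(acts_low), top_pcs(acts_high))
    print("principal angles (deg):", np.degrees(angles).round(1))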

Foundations of verifiable AI behavior

Formal verification of agent decision traces. How do you prove that an AI agent did precisely what its mandate specified? Drawing on verifiable computation and formal methods, we develop cryptographically sound audit infrastructure for autonomous agents operating under economic constraints. Whether an agent's behavior can be independently verified is one of the most important open problems in AI deployment.
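
The simplest ingredient such infrastructure needs can be sketched directly: a hash chain over decision records, so that any retroactive edit to a trace breaks verification. Production audit systems layer signatures and commitments on top; the record fields below are illustrative, and this is a sketch of the chaining idea rather than our actual design.

    import hashlib
    import json

    def append_record(chain, decision):
        """Append a decision record, binding it to the previous record's hash."""
        prev = chain[-1]["hash"] if chain else "0" * 64
        body = {"prev": prev, "decision": decision}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        chain.append({**body, "hash": digest})

    def verify(chain):
        """Recompute every link; any edit to any record breaks the chain."""
        prev = "0" * 64
        for rec in chain:
            body = {"prev": prev, "decision": rec["decision"]}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if rec["prev"] != prev or rec["hash"] != digest:
                return False
            prev = digest
        return True

    trace = []
    append_record(trace, {"t": 0, "action": "buy", "qty": 10, "limit": 101.5})
    append_record(trace, {"t": 1, "action": "hold"})
    assert verify(trace)

    trace[0]["decision"]["qty"] = 10_000  # retroactive tampering...
    assert not verify(trace)              # ...is detected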

Synthetic microstructure for agent training. Real market microstructure data is proprietary, expensive, and non-stationary. High-fidelity synthetic generation that preserves the statistical properties critical for training financial AI agents is an open research problem. We are building toward it.
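
As a toy illustration of what "preserving the statistical properties" means, the sketch below generates returns with one stylized fact, volatility clustering, via a GARCH(1,1) recursion. The parameters are arbitrary, and real microstructure synthesis must also match heavy tails, intraday seasonality, and order-flow dynamics; this shows only the flavor of the problem.

    import numpy as np

    rng = np.random.default_rng(0)
    omega, alpha, beta = 1e-6, 0.08, 0.90  # alpha + beta < 1 for stationarity
    n = 5000

    r = np.zeros(n)
    sigma2 = np.full(n, omega / (1 - alpha - beta))  # start at unconditional variance
    for t in range(1, n):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
        r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

    def autocorr(x, lag):
        """Sample lag-k autocorrelation."""
        x = x - x.mean()
        return (x[:-lag] * x[lag:]).mean() / x.var()

    # Volatility clustering: |r| is autocorrelated even though r is not.
    print("lag-1 autocorr of |r|:", round(autocorr(np.abs(r), 1), 3))
    print("lag-1 autocorr of r:  ", round(autocorr(r, 1), 3))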

Papers and working notes

What Do Trend-Following Transformers Actually Compute? A mechanistic analysis of the circuits that form in transformers trained on futures return series under a momentum objective. We identify the attention heads and MLP sublayers implementing the trend signal, test whether the learned circuit corresponds to a known statistical operation such as exponentially weighted averaging, and examine how circuit behavior shifts across volatility regime transitions. Working paper, forthcoming.
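
For readers outside the subfield, the sketch below spells out the reference operation named in the abstract: if a head implements exponentially weighted averaging with decay lambda, its attention to the token k steps back should fall off roughly as lambda to the k. The decay value here is an arbitrary placeholder; in practice it is fit to the learned head.

    import numpy as np

    def ewma(returns, lam=0.97):
        """m_t = lam * m_{t-1} + (1 - lam) * r_t; weights decay as lam**k."""
        m = np.empty(len(returns))
        prev = 0.0
        for t, r in enumerate(returns):
            prev = lam * prev + (1 - lam) * r
            m[t] = prev
        return m

    # Under the EWMA hypothesis, attention k steps back is proportional
    # to lam**k (normalized here over a 10-token window).
    lam, k = 0.97, np.arange(10)
    predicted_attention = lam**k / (lam**k).sum()
    print(predicted_attention.round(4))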

Superposition and Regime Encoding in Financial Sequence Models. Do financial transformers encode regime information in superposition? We probe the residual stream of models trained on multi-asset return series to test whether trending and mean-reverting regimes occupy distinct directions in representation space, and whether these directions are recoverable with linear probes. Working paper, forthcoming.
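
A minimal version of the probing setup, assuming labeled residual-stream activations have already been extracted: fit a logistic-regression probe, then check held-out accuracy and whether the probe recovers the regime direction. Synthetic data with a planted direction stands in for the real activations.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    d_model, n = 64, 4000

    # Stand-in activations: isotropic noise plus a planted, linearly
    # recoverable regime direction. Real data replaces X and y.
    regime_dir = rng.standard_normal(d_model)
    y = rng.integers(0, 2, n)  # 0 = mean-reverting, 1 = trending
    X = rng.standard_normal((n, d_model)) + 0.25 * np.outer(2 * y - 1, regime_dir)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    # High held-out accuracy is evidence the regime occupies a linear
    # direction; probe.coef_ estimates that direction.
    print(f"probe accuracy: {probe.score(X_te, y_te):.3f}")
    w = probe.coef_[0]
    cos = w @ regime_dir / (np.linalg.norm(w) * np.linalg.norm(regime_dir))
    print(f"cosine(probe direction, planted direction): {cos:.3f}")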

About

Aletheia is the Greek term, taken up by Heidegger, for unconcealment: the process by which truth emerges from hiddenness. Interpretability research is precisely this project applied to learned systems. The name was chosen deliberately.

Aletheia Laboratories was established by Arya Somu, Founder and CIO of Monolith Systematic LLC. Work produced here is released publicly and is independent of any affiliated commercial entity.

Contact

We welcome correspondence from researchers, collaborators, and anyone thinking seriously about the interpretability of learned systems.

research@aletheialaboratories.com