28+ papers mapped from one GL framework. Each opens a new domain while pointing back to the same equation. Contact adil@zehenlabs.com for preprints.
Submitted
Paper 3A · NeurIPS 2026
“Lying Is Just a Phase”
The Hidden Alignment Transition in Language Model Scaling
We discover a phase transition at Nc = 3.5B parameters where the coupling between reasoning (HellaSwag) and truthfulness (TruthfulQA) flips sign. Below Nc: alignment taxes capabilities (r = -0.989). Above: they cooperate. The same coupled ODE cross-predicts Llama-2 at 5.6% MAE. An algebraic classifier, the isocline of the ODE, separates standard-trained from curated families. Engineering guidelines: data curation below Nc, free scaling above.
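The paper's actual coupled ODE is not reproduced here, but the qualitative claim, a cross-coupling between capability and truthfulness scores that flips sign at Nc = 3.5B parameters, can be sketched with a toy model. Everything below the choice of Nc is an illustrative assumption: the log-ratio coupling `g(N)`, the symmetric linear form, and all rate constants are hypothetical stand-ins, not the published equation.

```python
import math

N_C = 3.5e9  # critical parameter count from the abstract (3.5B)

def coupling(n_params, k=1.0):
    """Hypothetical coupling g(N) that flips sign at Nc.

    A log ratio is the simplest form with the right sign behavior:
    negative below Nc (alignment taxes capability), positive above.
    """
    return k * math.log(n_params / N_C)

def simulate(n_params, c0=0.3, t0=0.3, growth=0.05, dt=0.01, steps=1000):
    """Euler-integrate a toy coupled system for capability C and truthfulness T:

        dC/dt = growth*C + g(N)*T
        dT/dt = growth*T + g(N)*C

    Below Nc the cross-term drags both scores down; above Nc it
    reinforces them, mirroring the tax-to-cooperation transition.
    """
    g = coupling(n_params)
    c, t = c0, t0
    for _ in range(steps):
        dc = growth * c + g * t
        dt_ = growth * t + g * c
        c, t = c + dt * dc, t + dt * dt_
    return c, t
```

Running `simulate(1e9)` (below Nc) decays both scores, while `simulate(7e9)` (above Nc) grows them together; the isocline of such a system is an algebraic curve in (C, T) space, which is what makes a closed-form classifier possible.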
63 base models · 16 families · r = -0.989 (pre-Nc) · 5.6% ODE MAE
At frontier scale (SWE-bench vs GPQA Diamond, 34+5 models, 10 labs), capabilities remain cooperative (r = +0.72, slope 0.513). The h-field diagnostic — deviation from the cooperation trend — reveals each lab’s training philosophy: Google is reasoning-specialist (h = +5.7), Anthropic is coding-rich (h = -9.1). Per-lab coupling slopes span 5x (Google 1.15 vs DeepSeek 0.23). Tax excursions (Sonnet 4.6, GPT-5.4) are temporary, recovering at the next release. Seven falsifiable predictions with timestamped deadlines.
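Since the h-field is defined as a model's deviation from the cooperation trend, it reduces to a signed residual from a fitted line. The sketch below shows that computation on made-up benchmark pairs; the sample points are illustrative only, not the paper's 34+5-model dataset, and the paper's reported slope of 0.513 comes from its own fit, not from these numbers.

```python
def fit_trend(xs, ys):
    """Ordinary least-squares fit y = a*x + b (the cooperation trend)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b

def h_field(xs, ys):
    """Signed deviation of each point from the fitted trend.

    Positive h: the model over-performs on y relative to its x
    (e.g. coding-rich); negative h: it under-performs.
    """
    a, b = fit_trend(xs, ys)
    return [y - (a * x + b) for x, y in zip(xs, ys)]

# Hypothetical (GPQA, SWE-bench) score pairs, percent scale.
gpqa = [40.0, 50.0, 60.0, 70.0]
swe = [30.0, 36.0, 41.0, 52.0]
h = h_field(gpqa, swe)
```

By construction the residuals of a least-squares fit sum to zero, so a positive h for one lab implies a compensating negative h elsewhere, which is why the diagnostic separates lab philosophies rather than ranking absolute quality.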