arxiv:2602.18333

On the "Induction Bias" in Sequence Models

Published on Feb 20
· Submitted by
M.Reza Ebrahimi
on Feb 24
Abstract

Transformers require far more training data than RNNs for state-tracking tasks, with requirements growing rapidly in state-space size and sequence length, and they fail to share learned mechanisms across different sequence lengths, while RNNs demonstrate effective amortized learning through weight sharing.

AI-generated summary

Despite the remarkable practical success of transformer-based language models, recent work has raised concerns about their ability to perform state tracking. In particular, a growing body of literature has shown this limitation primarily through failures in out-of-distribution (OOD) generalization, such as length extrapolation. In this work, we shift attention to the in-distribution implications of these limitations. We conduct a large-scale experimental study of the data efficiency of transformers and recurrent neural networks (RNNs) across multiple supervision regimes. We find that the amount of training data required by transformers grows much more rapidly with state-space size and sequence length than for RNNs. Furthermore, we analyze the extent to which learned state-tracking mechanisms are shared across different sequence lengths. We show that transformers exhibit negligible or even detrimental weight sharing across lengths, indicating that they learn length-specific solutions in isolation. In contrast, recurrent models exhibit effective amortized learning by sharing weights across lengths, allowing data from one sequence length to improve performance on others. Together, these results demonstrate that state tracking remains a fundamental challenge for transformers, even when training and evaluation distributions match.

Community

Transformers are data‑hungry in sequential tasks because they lack the right inductive bias.

It’s well known that for many sequential problems (from adding numbers to step‑by‑step agentic execution and multi‑hop reasoning), transformers fail to generalize to longer sequences than they were trained on. “Train short, test long” often fails.

The usual workaround is to "just train on whatever length you’ll need at test time".

📉 But we show the consequence of this is data inefficiency:

  • Transformers can learn tasks for a single fixed sequence length fairly efficiently, but learning across multiple lengths requires much more data.
  • More importantly, transformers tend not to share mechanisms across tasks of different lengths; instead, they often learn isolated, length‑specific solutions.

🧪 A simple way to test this:
Consider modular addition (with and without CoT). Train one model on a mixture of lengths, adding 2, 3, …, L numbers, and measure the data needed. Then train a separate model for each individual length (2, 3, …, L) and sum their data requirements.
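As a concrete illustration, here is a minimal sketch of how such modular-addition data could be generated, both per-length and as a mixed-length training set. All function names and the choice of modulus (97) are illustrative assumptions, not the paper's actual setup.

```python
import random

def make_example(n_terms, modulus=97, seed=None):
    """One modular-addition example: n_terms random operands and their sum mod modulus."""
    rng = random.Random(seed)
    xs = [rng.randrange(modulus) for _ in range(n_terms)]
    return xs, sum(xs) % modulus

def make_mixed_dataset(num_examples, max_len, modulus=97, seed=0):
    """Mixed-length training set: each example's length is drawn uniformly from 2..max_len."""
    rng = random.Random(seed)
    data = []
    for _ in range(num_examples):
        n = rng.randrange(2, max_len + 1)
        data.append(make_example(n, modulus, rng.random()))
    return data
```

Training one model on `make_mixed_dataset(N, L)` versus L-1 separate models on fixed-length data is the comparison the post describes; the interesting quantity is how N scales in each case.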

💡The intuition:
If a model truly shares mechanisms across lengths, learning a distribution of lengths should require far fewer samples than learning each length separately.

This comes from amortizing the learning cost: data for length n also helps the model learn length n+k.

📊 Results:

[Figure: sharing factor κ across models and sequence lengths]

Sharing Factor κ = (sum of samples to learn each length separately) ÷ (samples to learn all lengths jointly)

  • κ > 1: mechanism sharing and amortized learning.
  • κ ≈ 1: learning length-specific solutions in isolation.
  • κ < 1: destructive interference; length-specific solutions compete for model capacity.
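The definition above translates directly into a one-line computation. The sample counts below are made-up numbers purely to illustrate the three regimes, not results from the paper.

```python
def sharing_factor(separate_samples, joint_samples):
    """kappa = (sum of samples to learn each length separately) / (samples to learn jointly)."""
    return sum(separate_samples) / joint_samples

# Hypothetical sample requirements for lengths 2, 3, 4:
kappa_amortized = sharing_factor([1000, 2000, 4000], 2500)   # 2.8 -> mechanism sharing
kappa_isolated  = sharing_factor([1000, 2000, 4000], 7000)   # 1.0 -> isolated solutions
kappa_interfere = sharing_factor([1000, 2000, 4000], 14000)  # 0.5 -> destructive interference
```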

Transformers showed low sharing factors, and even destructive interference with CoT.

✨ Implications:
This suggests that end-to-end learning in applied agentic settings, like robotics or GUI control, could be even more challenging. If data requirements grow unfavorably with sequence length, that might also help explain the persistent issues we see at large context lengths (e.g., context rot).

The standard attention mechanism appears inefficient for step-by-step tasks, and we may ultimately be better off with recurrent agents.
