LLMs as Antennas: Replacing the Generative Paradigm with Receiver-First Intelligence
Mathine: Receiver-First Subfield Coupling Machine
Link: https://doi.org/10.5281/zenodo.18818955
This paper advances a receiver-first paradigm for large language models: LLM inference is better modeled as antenna-like coupling to learned statistical fields than as autonomous “generation.” Under this view, outputs are reconstructions produced under uncertainty, constrained by budgets and by regime validity.
Meaning is formalized as regime-bounded. The paper defines subfields by scope, admissible operators, invariants, falsifiers, and verification budgets, and uses them to explain why “hard problems” are often boundary problems: difficulty arises when no single regime stays valid across the full task, forcing explicit regime selection, translation, and arbitration.
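The subfield definition above (scope, admissible operators, invariants, falsifiers, verification budgets) can be sketched as a data structure, with regime selection as a filter over valid subfields. This is an illustrative reading, not the paper's formalism: every name below (`Subfield`, `valid_for`, `select_regime`, the toy regimes) is a hypothetical example, assuming a regime is valid when the task is in scope and no falsifier fires within the verification budget.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of the paper's subfield components; all names are
# illustrative, not taken from the paper itself.

@dataclass
class Subfield:
    name: str
    in_scope: Callable[[str], bool]          # scope predicate over tasks
    operators: set[str]                      # admissible operators
    invariants: list[Callable[[str], bool]]  # must hold on any output
    falsifiers: list[Callable[[str], bool]]  # if one fires, the regime is invalid
    budget: int                              # verification budget: checks allowed

    def valid_for(self, task: str) -> bool:
        """Valid when the task is in scope and no falsifier fires in budget."""
        if not self.in_scope(task):
            return False
        spent = 0
        for falsifier in self.falsifiers:
            if spent >= self.budget:
                break  # budget exhausted: remaining checks go unverified
            spent += 1
            if falsifier(task):
                return False
        return True

def select_regime(task: str, subfields: list[Subfield]) -> list[Subfield]:
    """Zero or several valid regimes signal a boundary problem needing arbitration."""
    return [s for s in subfields if s.valid_for(task)]

# Toy boundary problem: an arithmetic regime falsified by unit-bearing input,
# forcing selection of a units-aware regime instead.
arith = Subfield("arithmetic", lambda t: "add" in t, {"+"},
                 [], [lambda t: "units" in t], budget=4)
units = Subfield("units", lambda t: "units" in t, {"convert"},
                 [], [], budget=4)
print([s.name for s in select_regime("add lengths with units", [arith, units])])
# → ['units']
```

The point of the sketch is the shape of the difficulty: "hard problems" show up here as tasks for which `select_regime` returns an empty or ambiguous list, so the system must translate the task or arbitrate between regimes rather than answer within one.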
The receiver-first framing is grounded historically in engineered reception, from Marconi’s practical receivers to modern signal processing, and conceptually in the idea that intelligence is adaptation under constraints rather than global optimality. That shift matters because it treats reliability as an operational property of the receiver stack, not as a narrative about creative generation.
The paper extends the framework to comparative cognition: across species, minds are receiver stacks tuned to different ecological signals; language is a convenient human interface, not the essence of cognition. This reframes “intelligence” as signal coupling plus constraint handling, which helps separate capability from interpretive overreach.
Finally, it proposes falsifiable predictions and measurement programs for subfield stability and invariant transport, and it articulates ethical and legal safeguards that treat outputs as hypotheses requiring receipts rather than as authoritative claims.
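The safeguard of treating outputs as hypotheses requiring receipts can be made concrete with a minimal sketch. This is an assumed API, not anything prescribed by the paper: `Hypothesis`, `Receipt`, and `authoritative` are hypothetical names for the idea that a claim carries a record of the checks it passed and is never asserted without one.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch only: outputs as hypotheses that accumulate receipts
# (named verification checks) instead of being asserted as authoritative.

@dataclass
class Receipt:
    check: str
    passed: bool

@dataclass
class Hypothesis:
    claim: str
    receipts: list[Receipt] = field(default_factory=list)

    def verify(self, check_name: str, predicate: Callable[[str], bool]) -> None:
        """Run a named check and record its outcome alongside the claim."""
        self.receipts.append(Receipt(check_name, bool(predicate(self.claim))))

    @property
    def authoritative(self) -> bool:
        """Authoritative only with at least one receipt and no failures."""
        return bool(self.receipts) and all(r.passed for r in self.receipts)

h = Hypothesis("2 + 2 = 4")
print(h.authoritative)  # → False: no receipts yet, so not an authoritative claim
h.verify("arithmetic", lambda c: eval(c.replace("=", "==")))
print(h.authoritative)  # → True: the recorded check passed
```

The design choice this illustrates is that verification status lives with the output rather than being implied by it, which is what lets downstream consumers distinguish a checked claim from a bare reconstruction.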
