We'll look at three claims from Theory Is All You Need: AI, Human Cognition, and Decision Making (Felin & Holweg, 2024), then survey related developments since the paper was published.
Excerpts from Theory Is All You Need
"Inherently an LLM cannot go beyond the realms covered by the inputs. There is no mechanism to somehow bootstrap forward-looking beliefs about the future - nor causal logic or knowledge - beyond what can be inferred from the existing statistical associations and correlations found in the words in the training data."
"AI and AI-inspired models of cognition are based on backward-looking data and prediction rather than any form of forward-looking theory-based causal logic."
"AI cannot causally map and project into or anticipate the future, as illustrated by LLMs which are delimited by past data."
Claim A: Looking Ahead
AI is bound to backward-looking data and prediction: it has no mechanism for forward-looking, theory-driven causal reasoning.
Before reading further
Do you tend to agree or disagree with this claim?