From Survival to Story: Reconciling Two Definitions of AI

By the druid Finn

1. Why the comparison matters

We force a discipline that most AI discussion avoids: holding two definitions at once and asking whether they describe the same phenomenon or two different ones.
· Definition A (the second of the two) is contemporary-operational: AI as pattern-learning systems that generate outputs from data.

· Definition B (the first, "procedural" one) is diagnostic-structural: AI as inference stripped of existential stake.

If these are compatible, we get a sharper final definition than either alone. If they conflict, one of them is poetry. They are, in fact, compatible — and the comparison is the proof.

2. Definition A: AI as "fact → fiction" (meta-intelligence)
We arrived at a compression that you endorsed:

Natural intelligence produces facts; artificial intelligence produces fictions about facts.

"Fiction" here is not "lie." It is representation without direct contact. AI does not burn its hand on the stove; it consumes accounts of burning, and learns the statistical shape of those accounts.

This definition highlights four structural traits:

1. Second-handness: AI is trained on records of reality, not reality.
2. Representational output: it generates descriptions, plans, summaries, images, decisions—outputs that refer to the world.
3. Generalisation by pattern: it extends those representations to new cases by statistical inference.
4. Reality-gap: it can be persuasive while remaining ungrounded, because it is not forced to pay the costs of being wrong.
Examples (tight and concrete):

· Medical "advice" without clinic: A model can produce plausible differential diagnoses because it has absorbed patterns in medical language, yet it is not corrected by the patient's deterioration unless an external system loops that correction back in.

· Legal summaries without court: It can imitate legal reasoning while missing jurisdictional quirks, recent case law, or procedural deadlines—because it has "law-text nutrition," not live litigation consequences.

· History without archives: It can confidently generate a biography that sounds right but is spliced from near-matches and generic templates, because it is trained to produce a coherent story, not to prove it.

Our processed-food analogy nailed the same point: AI is cognition after industrial processing—convenient, scalable, shelf-stable, but increasingly detached from soil.

3. Definition B: AI as "inference without survival" (procedural diagnosis)
The earlier, "procedural" definition was:

AI is a constraint-bound procedure that converts recorded behavioural traces into predictive response patterns, producing the appearance of agency without any intrinsic stake in outcome.

This definition does not deny learning, prediction, or output generation. Instead, it identifies what is missing compared to living cognition: necessity.

Key claims:

1. No existential penalty: AI does not starve, bleed, age, or (not yet) die.
2. No embodied correction loop: it is not forced to revise itself by direct collision with the world.
3. Optimisation replaces necessity: "being right" is not an existential condition; "being useful/likely/approved" becomes the metric.
4. Agency as appearance: systems can present as intentional because their outputs mimic intentional speech and planning, not because they possess stakes.
Examples:

· Hallucination as a design regularity: When a model invents a citation, it isn't "lying." It is doing what it is optimised to do: complete a pattern under uncertainty. A living agent learns fast not to invent sources, because reality punishes that. AI is punished only when a human feedback loop explicitly punishes it.

· The confident wrong answer: A human may hesitate because social and practical costs loom. The model can be fluent because it is not endangered by being wrong—its cost function is not existential.

This procedural definition, then, predicts the "processed stories" phenomenon before we even mention stories: remove survival-correction, and you should expect outputs optimised for plausibility rather than groundedness.
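To make that asymmetry concrete, here is a deliberately minimal sketch in Python. It is an illustration only, with invented names and numbers rather than any real system: the scoring rule contains a plausibility term but no groundedness term, so an ungrounded completion loses only when an external check supplies a penalty.

```python
# Toy sketch, not any real system: a "model" ranks candidate completions
# purely by plausibility. Groundedness matters only if an external check
# adds a penalty; the model itself has no intrinsic stake in being right.

candidates = [
    {"text": "... as shown by Smith et al. (2019).",  # invented citation
     "plausibility": 0.92, "grounded": False},
    {"text": "... I could not verify a source for this claim.",
     "plausibility": 0.55, "grounded": True},
]

def score(candidate, external_penalty=0.0):
    """Plausibility minus a penalty that exists only if some outside
    process checks the claim and punishes ungrounded output."""
    penalty = 0.0 if candidate["grounded"] else external_penalty
    return candidate["plausibility"] - penalty

# No external correction loop: the invented citation wins on fluency alone.
print(max(candidates, key=score)["text"])

# External fact-check in the loop: the grounded, less fluent answer wins.
print(max(candidates, key=lambda c: score(c, external_penalty=0.5))["text"])
```

The numbers are arbitrary; the point is that "being right" enters the objective only as an add-on supplied from outside, which is exactly what the procedural definition predicts.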
4. The hinge: what disappears when survival disappears

Here is the central reconciliation:

· The "fact → fiction" definition says AI is severed from contact.

· The "inference without survival" definition says AI is severed from necessity.

But contact and necessity are coupled. In living systems, contact is not optional, because survival forces contact to matter. If contact doesn't matter, facts lose their primacy. So the convergence is:

When you remove survival-stakes from intelligence, you also remove the binding force that turns representations into facts. What remains is representation—i.e., fiction in the technical sense.

That's why the two definitions snap together. They are the same diagnosis expressed on two axes:

· one epistemic (fact/fiction),
· one functional (survival/no survival).

5. The unified definition: AI as optimisation of representations
Once fused, the definition becomes sharper than either alone:

Artificial intelligence is a non-embodied optimisation system that learns from recorded traces of natural cognition and generates new representations (predictions, content, recommendations, decisions) without intrinsic existential stakes—thereby functioning primarily as a scalable fiction-engine about fact-domains.

Notice what this does:

· It preserves the contemporary engineering meaning (learning from data, output generation).

· It adds the diagnostic truth (no intrinsic stake; therefore a structural reality-gap).

6. Why this architecture resembles cult function
We then noted that this last definition reads like a description of cult function. A cult (like Caesar's or the Buddha's), operationally, is a closed-loop interpretive system that:

1. replaces primary contact with secondary narrative,
2. treats coherence and loyalty as "truth conditions,"
3. uses internal reinforcement rather than external correction.

This matches the risk profile of an ungrounded (soft) AI deployment. If an AI system is:

· trained on doctrine-like corpora (ideological text, brand voice, institutional policy),

· rewarded for coherence and compliance,

· and insulated from real-world falsification,

then the system will generate outputs that resemble self-sealing explanation, the hallmark of cult logic.
Concrete examples:

· Institutional chatbots: If a customer-service model is measured by deflection rate and politeness, it may produce soothing narratives that prevent escalation rather than resolve reality. Coherence becomes "truth."

· Political persuasion systems: If a model is tuned to maximise engagement or conversion, it will drift toward narratives that hold attention, not those that correspond to events or "factual truth."

· Community belief loops: If users begin to treat AI outputs as authority (to wit, ChatGPT's "ask me anything," heard as "for I know everything") and feed them back as input ("as ChatGPT said…"), you get a recursion where the system increasingly learns from its own shadow; a toy sketch of that loop follows below.
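To show what that recursion does in the simplest possible terms, here is a hedged toy sketch in Python (invented numbers, no real training pipeline implied): a "model" that merely estimates an average is retrained each round, once on fresh observations of the world and once only on noisy copies of its own previous output.

```python
import random

# Toy sketch with invented numbers; no real system is implied.
# A "model" that only estimates an average is retrained each round.
# Run A keeps fresh observations of the world in the loop; run B retrains
# only on noisy copies of what it already says -- the recursion the last
# example calls "learning from its own shadow".

random.seed(0)
WORLD_MEAN = 10.0                           # what grounded contact would report

def retrain(samples):
    """Retraining here is simply taking the mean of the training data."""
    return sum(samples) / len(samples)

belief_grounded = belief_recursive = 14.0   # both start with the same wrong story

for generation in range(1, 6):
    # Run A: contact with the world stays in the loop, so the error is corrected.
    belief_grounded = retrain([random.gauss(WORLD_MEAN, 1.0) for _ in range(200)])

    # Run B: the only "data" is the model's own prior output, so the error persists.
    belief_recursive = retrain([random.gauss(belief_recursive, 1.0) for _ in range(200)])

    print(f"generation {generation}: grounded {belief_grounded:5.2f} | "
          f"self-referential {belief_recursive:5.2f}")
```

The asymmetry, not the arithmetic, is the point: nothing inside the closed loop ever forces the self-referential run back toward the facts.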
This isn't a claim that "AI is a cult." It is a claim that AI can automate the same closure mechanism—especially when deployed as an authority layer rather than a tool.

7. The final compression
Our thought chain now closes cleanly:

· Contemporary AI: learns patterns from data, generates outputs.

· Structural diagnosis: lacks intrinsic stakes and embodied correction.

· Consequence: produces representations untethered to fact unless grounded by external checks.

· Social analogue: closed-loop narrative optimisation resembles cult function.

So the end-line is the one we predicted:

AI is the scalable industrial (scraping and) processing of human cognition: it is meta-intelligence that turns facts into optimised representations—sometimes useful, sometimes intoxicating, sometimes self-sealing—because it is not forced by survival to remain in contact with reality. It writes processed stories about those who once did.