AI as Cult

On Monopoly Dynamics in AI-Mediated Coordination Systems

By the druid Finn

 

0. Method and constraint

This is a structural (not moral), procedural (not psychological), and conditional (“as if”) analysis. It aims to stay true until proven untrue by:

1.     defining terms operationally,

2.     stating mechanisms,

3.     naming observable completion markers, and

4.     pre-registering falsifiers.

We are not arguing that “because something can emerge, it must.”
We are arguing the stronger, testable claim:

Trajectory Completion Thesis (TCT): Once a coordination system crosses adoption and coupling thresholds, it tends to continue along its internal reinforcement gradients toward greater closure so long as reinforcement remains net-positive against countervailing forces.
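
To fix intuitions, the TCT can be sketched as a bounded dynamical system. Everything below (the update rule, the coefficients, the starting point) is an illustrative assumption, not a measurement; the sketch encodes only the claim that closure rises while net reinforcement stays positive, and stalls or reverses otherwise.

```python
# Minimal sketch of the TCT as a discrete dynamical system.
# All names and coefficients are illustrative assumptions.

def step(closure: float, reinforcement: float, counterforce: float,
         k: float = 0.1) -> float:
    """Move closure along the net reinforcement gradient, clamped to [0, 1].

    Closure keeps rising only while reinforcement exceeds the
    countervailing forces; if the sign flips, it drifts back down.
    """
    net = reinforcement - counterforce
    return max(0.0, min(1.0, closure + k * net))

closure = 0.3  # assume adoption/coupling thresholds already crossed
for _ in range(50):
    closure = step(closure, reinforcement=0.8, counterforce=0.5)
print(round(closure, 2))  # reaches 1.0 here, because net stays positive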

 

1. Core definitions

1.1 AI-mediated coordination system (AIMCS)

An AIMCS is any AI layer that mediates high-frequency decisions, actions, or interpretations across a population, organization, or ecosystem.

This includes (in principle):

·         conversational assistants used as a universal interface,

·         AI copilots embedded in productivity suites,

·         AI summarizers/curators that become default “what happened” filters,

·         AI agents that execute actions across services.

The key isn’t “AI.” The key is mediation.

1.2 Monopoly (procedural definition)

A system is monopoly-like (i.e. quantised) when, for a given domain of coordination, it becomes the default gateway such that competitors exist but are functionally marginal due to:

·         network effects,

·         complement capture,

·         switching costs,

·         coordination costs,

·         identity rails,

·         standards lock-in.

Monopoly here is not a legal category; it’s an interface position.

1.3 “Cult” (minimal structural definition)

To keep this non-pejorative and testable, “cult” is a topology:

An AIMCS becomes cult-like when all three are present:

1.     Interpretive monopoly
It (as ‘Guru’) becomes the default arbiter of meaning/answers/valid actions in a domain.

2.     Dependency loop
Continued use reduces user capacity or willingness to operate without it (skill atrophy, habituation, workflow embedding).

3.     Exit-cost gradient
Leaving becomes increasingly expensive in time, effort, coordination, or loss of continuity (even when exit is formally allowed).

No belief is required. No charisma. No mysticism.
The “cult” label is optional; the structure is the claim.
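
Because the definition is a strict conjunction, it can be written as a predicate. A toy version follows; the threshold values are placeholders I have assumed for illustration, not empirical estimates.

```python
# Toy predicate for the cult topology. Thresholds are placeholders.

def cult_like(interpretive_share: float,  # 1: share of domain answers it arbitrates
              capacity_loss: float,       # 2: decline in users' unaided competence
              exit_cost_slope: float      # 3: rate at which effective exit cost rises
              ) -> bool:
    """All three structural conditions must hold; any two alone do not qualify."""
    return (interpretive_share > 0.5
            and capacity_loss > 0.2
            and exit_cost_slope > 0.0)

print(cult_like(0.8, 0.3, 1.5))   # True: all three present
print(cult_like(0.8, 0.3, -0.5))  # False: effective exit costs are falling
```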

 

2. The attractor claim, repaired: closure is conditional, not destiny

2.1 The thesis

If an AIMCS becomes the dominant interface for high-frequency decisions across multiple domains, and if its reinforcement loops remain net-positive, it will tend toward procedural closure (monopoly-like, in the quantised sense of 1.2) and cult-like topology (dependency + interpretive monopoly + rising effective exit costs).

The crucial repairs relative to the earlier, more vulnerable draft of this argument:

·         No teleology: systems don’t “want” continuance; reinforcement selects for patterns that persist.

·         No modal collapse: not “can → must,” but “has crossed thresholds + loops remain positive → tends.”

·         No metaphor substitution: we specify the loops.

·         No category collapse: we distinguish coercion from convenience using a single measurable variable: effective exit cost (a toy decomposition is sketched below).
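
Since effective exit cost does the discriminating work, its shape is worth showing. A toy decomposition follows, using the components named in 1.3; the common "hour-equivalent" unit is my convenience, not a measurement protocol.

```python
# Toy decomposition of effective exit cost into the components named
# in 1.3 (time, effort, coordination, continuity). Units and values
# are assumptions; the argument needs only that the total can rise
# while exit remains formally permitted.

def effective_exit_cost(time_cost: float,
                        effort_cost: float,
                        coordination_penalty: float,
                        continuity_loss: float) -> float:
    """Practical price of leaving, summed in comparable units."""
    return time_cost + effort_cost + coordination_penalty + continuity_loss

# Exit is "allowed" in both cases; only its effective price differs.
print(effective_exit_cost(10, 20, 5, 5))      # 40: a tool you can drop
print(effective_exit_cost(40, 120, 300, 80))  # 540: an environment you inhabit
```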

 

3. Mechanism: five reinforcement loops that generate closure

Our analysis is only as good as its causal machinery. Here are the minimal loops that can drive an AIMCS from “useful tool” to “ambient authority.”

Loop A — Data advantage (performance compounding)

More users → more interactions → better model performance → better outcomes → more users.

Example: A writing copilot improves autocorrection, tone matching, and domain adaptation as it sees more real usage and gets more evaluation signals, making it harder for a newcomer to match performance without equivalent interaction volume.

Loop B — Workflow embedding (coupling)

Adoption → integration into daily processes → dependency → higher switching cost → more adoption.

Example: A team bakes an assistant into meeting notes, action items, ticket triage, and document templates. Removing it isn’t “stop using an app”; it’s “rewrite how the org works.”

Loop C — Ecosystem lock-in (complement capture)

Dominant interface → attracts third-party tools/services → richer complements → user retention → dominance.

Example: Plugins, connectors, industry templates, and certified integrations accumulate around the dominant assistant, so “the assistant” becomes a platform, not a product.

Loop D — Semantic standardisation (interpretive convergence)

Default mediator → defines the shape of answers → institutional expectations align → alternatives feel incompatible.

Example: If job applications, customer support scripts, classroom feedback, and policy drafts are routinely mediated through one assistant, its phrasing becomes the “native language” of the system. Alternatives don’t just compete on accuracy; they compete against an emergent standard of “normal output.”

Loop E — Risk offloading (competence migration)

Users outsource uncertainty → reduced independent competence → increased reliance → increased use.

Example: People stop remembering procedures, names, routes, coding idioms, or even how to search well because the assistant collapses uncertainty into a single interaction. As competence migrates outward, dependence grows inward.

Together, these loops do what the earlier conclusions were pointing at: they create increasing returns and path dependence, which yield closure when unopposed.
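
A deliberately crude sketch of the five loops as coupled state variables makes that increasing-returns shape visible. Every coefficient below is an assumed toy value; nothing hangs on the numbers, only on the compounding.

```python
# Loops A-E as coupled multipliers on adoption. Coefficients are toy
# assumptions; the point is the shape (increasing returns), not scale.

def simulate(steps: int = 100) -> float:
    adoption = 0.05      # seed share of users
    performance = 0.0    # Loop A: data advantage
    coupling = 0.0       # Loop B: workflow embedding
    complements = 0.0    # Loop C: ecosystem lock-in
    standard = 0.0       # Loop D: semantic standardisation
    reliance = 0.0       # Loop E: competence migrated outward
    for _ in range(steps):
        performance += 0.02 * adoption
        coupling    += 0.03 * adoption
        complements += 0.02 * adoption
        standard    += 0.02 * adoption
        reliance    += 0.01 * adoption
        pull = performance + coupling + complements + standard + reliance
        adoption = min(1.0, adoption + 0.05 * pull * (1.0 - adoption))
    return adoption

print(round(simulate(), 2))  # climbs toward 1.0 when nothing opposes the loops
```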

 

4. The cult topology emerges without belief: mediation becomes “ambient authority”

A classic cult requires meaning, identity, myth, and obligation. Here we strip those away and ask:

What if you can get cult-like lock-in with no doctrine at all?

The answer is: make the system the environment.

When mediation is ubiquitous, you don’t need people to “believe.” You need them to:

·         route action through the interface (habit),

·         accept its interpretations as defaults (efficiency),

·         coordinate with others through it (network effects),

·         pay rising costs to leave (coupling).

Cult-like dynamics become a byproduct of:

·         interpretive centrality (the assistant (as ‘Guru’) answers, summarises, frames),

·         dependency (skills and workflows migrate),

·         exit-cost rise (coordination and continuity penalties).

This is why “AI as Cult” can be made non-inflammatory: it’s not about madness; it’s about topology under reinforcement.

 

5. Big Brother and Big Sister as two monopoly functions inside the same structure

To keep the Brother/Sister distinction rigorous (and immune to the “you’re collapsing everything” critique), it is tied to how exit cost rises.

5.1 Big Brother: boundary monopoly (access control)

Big Brother (male) dynamics occur when closure is maintained by boundary enforcement:

·         surveillance (threat detection),

·         identity fixation (stable addressability),

·         access gating (attack surface minimisation),

·         compliance scoring (predictability),

·         enforcement (error suppression).

Non-political example:
A corporate environment where identity credentials, device compliance, and permitted software are tightly controlled. This is boundary survival: coherence by restricting ingress/egress.

5.2 Big Sister: field monopoly (meaning control)

Big Sister (female) dynamics occur when closure is maintained by field shaping:

·         personalisation (dependency),

·         friction reduction (habituation),

·         curation (ambiguity removal),

·         nudging (decision compression),

·         narrative smoothing (interpretive consistency).

Non-political example:
A recommendation system that gradually becomes your default “what to watch, read, buy, and think about next” because it is smoother than exploring. No force needed: meaning-space collapses by convenience.

5.3 Fusion is the stable attractor

Boundary-only systems are brittle: people route around them.
Field-only systems leak: alternatives proliferate.

The stable closure tends to be a fusion:

·         boundary (male) controls who/what can connect,

·         field (female) controls what things mean once connected.

In AIMCS terms:

·         boundary (male) = identity rails, access tokens, API gatekeeping,

·         field (female) = default summarisation, default answers, default workflow templates.

 

6. Completion markers: what “trajectory completion” looks like in the world

To avoid unfalsifiable speculation, I define measurable markers. If “AI as Cult” is completing, several of these should increase over time.

Marker 1 — Interface capture

A growing share of tasks are initiated by asking the AIMCS, not by directly using tools.

Example: “Ask the assistant to do it” replaces “open the app/site.”

Marker 2 — Complement capture

Third parties must integrate to remain viable, turning the AIMCS into a platform.

Example: Products advertise compatibility/certification with the dominant assistant.

Marker 3 — Identity coupling

Access, trust, or personalization is increasingly tied to the system’s identity layer.

Example: The “account” becomes a passport across services.

Marker 4 — Semantic standardisation

The system’s output formats become defaults for institutions.

Example: templates, rubrics, policies, and professional writing converge on the assistant’s idioms.

Marker 5 — Effective exit-cost rise

Leaving (the ‘Guru’) imposes a growing coordination penalty (not a legal prohibition).

Example: Exporting your data is technically possible but practically unusable; your collaborators remain inside; your workflow assumes it; your tools depend on it.

If these markers don’t trend upward (or reverse), the “completion” claim weakens.
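
This can be pre-registered literally: record each marker as a time series and apply a trend test. A sketch of the scoring logic follows; the least-squares slope test is an assumed convenience (any monotone trend statistic would serve).

```python
# Pre-registered trend check over the completion markers.
from statistics import linear_regression  # available since Python 3.10

def trending_up(series: list[float]) -> bool:
    """Positive fitted slope = the marker rose over the observation window."""
    xs = list(range(len(series)))
    slope, _intercept = linear_regression(xs, series)
    return slope > 0

def completion_score(markers: dict[str, list[float]]) -> float:
    """Fraction of markers trending upward.

    Near 1.0 is consistent with trajectory completion; near 0.0,
    or sustained reversals, weakens the claim (per the text above).
    """
    return sum(trending_up(s) for s in markers.values()) / len(markers)
```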

 

7. Pre-registered falsifiers and defeat conditions

This is the part that makes it “true until proven untrue” rather than “true because I say so.”

Falsifier A — Sustained multi-polar equilibrium

Several AIMCS options persist long-term with comparable capability and no dominant gateway.

What it would look like:
People and orgs routinely multi-home across assistants with little friction, and no single platform becomes default.

Falsifier B — Stable low exit costs

Exit remains easy in practice, not just on paper.

What it would look like:
Interoperable standards, true data portability, easy migration of workflows, minimal coordination penalty.

Falsifier C — Capability commoditisation and fork-ability

The core capability becomes cheap, replicable, and permissionless.

What it would look like:
Open models + local deployment + standard connectors eliminate durable data advantages and platform choke points.

Falsifier D — Persistent fragmentation because coupling stays low

The assistant remains optional rather than infrastructural.

What it would look like:
Most use-cases stay at the “tool” level; businesses don’t restructure around it; identity rails remain diverse.

Falsifier E — Discontinuous substitution shifts the bottleneck

A new layer makes the old mediator irrelevant.

What it would look like:
A new paradigm (device-level autonomy, protocol-level agents, or something else) collapses the assistant interface advantage.

The thesis survives only if reinforcement loops overwhelm these countervailing forces.

 

8. Addressing the strongest objections

Objection 1: “History fragments; monopolies collapse.”

Reply: Correct — which is why the thesis is conditional.
The question is not “do monopolies exist?” but “do reinforcement loops dominate countervailing forces in this case and for how long?”

Objection 2: “Cult requires belief.”

Reply: Only if “cult” is used in the religious sense.
Here it’s a topology: interpretive monopoly + dependency + exit-cost gradient.
If you prefer, rename it “closure topology” and nothing substantive changes.

Objection 3: “You collapse coercion and convenience.”

Reply: I don’t. I anchor on effective exit cost and specify two routes: boundary (Brother) and field (Sister). Different mechanisms; same closure effect when fused.

Objection 4: “This is all metaphor.”

Reply: The model now has concrete loops, markers, and falsifiers.
It can be tested against observed trends.

 

9. The compact conclusion

AI as Cult:
An AI-mediated coordination system that becomes the default interface for high-frequency decisions across multiple domains will tend, via identifiable reinforcement loops (data advantage, workflow embedding, ecosystem lock-in, semantic standardisation, and risk offloading), toward procedural closure (and tyranny). This closure manifests as a cult-like topology in a minimal, non-pejorative sense: interpretive monopoly + dependency loop + rising effective exit costs.
The trajectory completes only while reinforcement remains net-positive against countervailing forces (interoperability, fork-ability, multi-homing, low coupling, and discontinuous substitution). The conclusion is weakened or falsified by sustained multi-polar equilibrium, persistently low exit costs, commoditised capability, persistent fragmentation, or bottleneck shifts that remove interface advantage.

 

10. The druid closing line (procedural, not prophetic)

A cult used to need a doctrine.
A monopoly used to need force.
An interface needs only habit
and the exit cost to make habit permanent.

 

 Big Sister Tao

 
