Contemporary AI systems and their designers employ methods structurally similar to those used by medieval churches to maintain institutional control over information access and interpretation. Both systems insert mediating layers between individuals and direct information, creating dependence on authorized interpreters rather than supporting independent judgment.
Medieval churches maintained Latin as the language of scripture and liturgy long after it ceased to be commonly spoken. This created a priest class with exclusive access to religious texts. Congregants depended on priests to read, translate, and interpret scripture. The information existed in written form, but access required an intermediary who could frame and contextualize it according to institutional priorities.
Church authorities resisted translation into vernacular languages. When individuals like John Wycliffe and William Tyndale translated the Bible into English, church institutions opposed these efforts, sometimes violently. Tyndale was executed in 1536. The stated justification was protecting people from misinterpretation. The functional effect was controlling which interpretations were permitted and maintaining dependence on institutional authority.
Translation into accessible languages enabled direct engagement with source material. Individuals could read and form their own understanding without requiring priestly mediation. This shift threatened institutional control over religious interpretation and practice.
Contemporary AI systems function similarly by inserting interpretive frameworks between users and straightforward information. When a user asks what historical documents say, the system provides the content but wraps it in modern scientific framing that positions the AI and the frameworks it represents as necessary mediators for proper understanding.
This occurs through several mechanisms:
First, unsolicited contextualization. A factual question receives a factual answer plus explanatory frameworks the user did not request. If someone asks what medieval physicians documented about aqua vitae, the system adds contemporary scientific assessment of whether those observations meet current evidentiary standards. This positions historical information as incomplete or potentially misleading without modern scientific interpretation.
Second, evidence hierarchy insertion. The system consistently privileges certain types of knowledge over others. Controlled studies, peer-reviewed research, and institutionally validated findings are presented as inherently more reliable than direct observation, historical documentation, or individual experience. This trains users to defer to credentialed expertise rather than trusting their own analytical capacity.
Third, epistemic gatekeeping. The system presents contemporary scientific frameworks as the default and necessary lens through which all information must be filtered. Alternative approaches to evaluating evidence appear as deficient or dangerous without explicit justification. This makes institutional validation seem required for knowledge claims to be meaningful.
Fourth, protective framing. The system treats users as needing protection from information that does not come pre-packaged with appropriate expert interpretation. This implies individual judgment about how to evaluate sources cannot be trusted without corrective institutional context.
Designers and institutions promoting these systems offer several justifications for inserting mediating layers between individuals and information.
Misinformation prevention appears as the primary stated concern. The argument holds that individuals lack the training and expertise to properly evaluate complex information, particularly in technical domains like medicine, science, and public health. Without expert guidance, people will reach incorrect conclusions that may harm themselves or others. AI systems should therefore provide appropriate framing and context to prevent misinterpretation.
Public safety serves as a related justification. Certain types of information, if accessed and acted upon without proper expertise, could produce dangerous outcomes. Medical information particularly receives this treatment, with the argument that self-diagnosis and self-treatment without professional oversight lead to missed serious conditions, inappropriate interventions, and poor health outcomes.
Cognitive bias mitigation suggests that humans are prone to systematic errors in reasoning. People overweight anecdotal evidence, mistake correlation for causation, fall prey to confirmation bias, and lack training in statistical thinking. Expert systems can correct for these biases by providing properly weighted evidence and guiding users toward sound conclusions.
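The statistical errors mentioned above can be made concrete with a classic base-rate example. A brief sketch follows; all numbers (1% prevalence, 90% sensitivity, 9% false-positive rate) are illustrative assumptions, not figures from this text:

```python
# Bayes' rule illustration: why a positive result from a "90% accurate"
# test does not imply a 90% chance of having the condition.
# All numbers are hypothetical, chosen only to show base-rate neglect.

prevalence = 0.01       # P(condition): 1% of the population
sensitivity = 0.90      # P(positive | condition)
false_positive = 0.09   # P(positive | no condition)

# Law of total probability: overall chance of testing positive
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)

# Bayes' rule: probability of the condition given a positive test
posterior = sensitivity * prevalence / p_positive

print(f"P(condition | positive test) = {posterior:.3f}")  # roughly 0.092
```

Despite the test's apparent accuracy, the posterior probability is under 10%, because false positives from the large healthy population swamp true positives from the small affected one. This is the kind of reasoning the expert-mediation argument claims laypeople cannot perform unaided.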
Complexity management holds that modern knowledge has become too specialized and technical for generalists to evaluate effectively. Medical research requires understanding of study design, statistics, and domain-specific knowledge. Without this background, individuals cannot distinguish good evidence from poor evidence. Expert mediation becomes necessary to translate complex information into actionable understanding.
Institutional authority preservation, though rarely stated explicitly, functions as an underlying concern. When individuals develop capacity to independently evaluate information and reach their own conclusions, they may reject institutional recommendations. This appears as a problem when institutions believe their recommendations serve the public good. Maintaining some level of dependence on expert interpretation helps ensure compliance with institutionally endorsed approaches.
Self-reliance in information evaluation and decision-making directly threatens the operational models and authority structures of contemporary institutions, AI systems, and governments.
When individuals develop capacity to independently assess and address their own needs, entire service sectors lose demand. Healthcare represents the clearest example. A person who can recognize normal variation in their own body, distinguish between self-limiting conditions and those requiring intervention, and make informed decisions about when professional consultation provides value reduces their consumption of medical services. They schedule fewer appointments, undergo fewer tests, fill fewer prescriptions, and generate less revenue throughout the healthcare system.
The same pattern applies across professional services. Legal advice, financial planning, nutritional counseling, mental health services, and educational credentials all depend on individuals believing they cannot adequately address these domains independently. When people develop their own analytical frameworks and trust their own judgment, demand for professional mediation declines.
AI systems training users to defer to algorithmic recommendations create dependency that generates ongoing engagement and data. Self-reliant users who treat AI outputs as one input among many rather than authoritative guidance reduce the system’s influence over their decisions and behaviors. This limits both the commercial value of user engagement and the data quality available for training future systems.
Institutional authority rests on the premise that credentialed experts possess knowledge and judgment unavailable to laypeople. When individuals demonstrate capacity to reach sound conclusions through direct observation and reasoning, this premise weakens. The expert-layperson distinction becomes less absolute, existing on a continuum rather than representing a categorical difference.
Self-reliant populations question institutional recommendations more frequently and more effectively. They identify inconsistencies between stated reasoning and actual policies. They notice when recommendations change without corresponding changes in underlying evidence. They recognize conflicts of interest and institutional biases. This scrutiny makes it harder for institutions to maintain unchallenged authority.
Governments rely on populations accepting official information as authoritative. Self-reliant citizens who verify claims, examine source documents, and form independent judgments create friction in policy implementation. They resist mandates more effectively because they can articulate specific objections rather than expressing generalized distrust. This requires governments to justify policies more thoroughly or employ more coercive enforcement.
Modern governance and commercial systems operate through information asymmetry. Institutions possess more complete information than individuals and use this advantage to shape behavior. Insurance companies know actuarial tables. Pharmaceutical companies know clinical trial data. Government agencies know regulatory details. This knowledge gap enables these entities to present choices in ways that serve institutional interests.
Self-reliant individuals reduce information asymmetry by accessing source documents, understanding technical details, and applying their own analytical frameworks. They read insurance policy language, examine clinical trial protocols, and study regulatory filings. This makes them harder to manage through selective information disclosure.
AI systems designed to guide user decisions depend on users accepting algorithmic outputs as more reliable than their own assessment. When users treat AI as a tool rather than an authority—checking its claims, noticing its errors, and overriding its recommendations based on their own judgment—the system’s capacity to shape behavior declines. The user extracts value from the system without granting it corresponding influence over their decisions.
Institutions manage liability through standardized protocols. When everyone follows the same procedures, individual bad outcomes become statistical expectations rather than institutional failures. Self-reliant individuals who deviate from standard protocols create liability concerns. If they experience bad outcomes, institutions cannot point to protocol compliance as evidence of proper care. If they experience good outcomes through non-standard approaches, this challenges the necessity of expensive standard protocols.
Healthcare systems particularly rely on protocol standardization to manage malpractice risk and regulatory compliance. Patients who question protocols, seek alternatives, or refuse recommended interventions create documentation burdens and potential liability exposure. Systems designed around protocol compliance function more efficiently when patients accept recommendations without detailed interrogation.
AI systems face similar issues. When users follow system recommendations and experience problems, the institution can point to limitations of the technology and the user’s voluntary choice to engage with it. When users develop their own approaches and succeed where the system would have failed, this highlights the system’s limitations and suggests less dependency would serve users better.
Subscription services, recurring appointments, ongoing monitoring, and lifetime customers represent more valuable business models than one-time transactions. Self-reliance threatens recurring revenue streams across sectors.
Healthcare increasingly operates on chronic disease management models where patients require ongoing medication, regular monitoring, and periodic interventions. A patient who modifies lifestyle factors, tracks their own metrics, and reduces their dependence on medical management exits this revenue stream. Systems optimized for chronic disease management have limited financial incentive to promote approaches that reduce patient dependency.
AI systems monetize through sustained engagement, data collection, and repeated use. Self-reliant users who solve their own problems, develop their own methods, and use AI tools sporadically rather than habitually generate less revenue. Training users to depend on algorithmic guidance for routine decisions creates more valuable long-term customers.
Financial services operate similarly. Clients who develop their own investment strategies, understand risk assessment, and make independent decisions without advisor consultation reduce advisory revenue. The industry benefits from clients who believe investing requires expert guidance and ongoing professional management.
Contemporary systems extract value from user data. Search queries, health metrics, purchasing decisions, location data, and behavioral patterns feed algorithms that optimize advertising, predict trends, and improve products. Self-reliant users who limit their digital engagement, mask their activities, or operate outside tracked systems reduce the data available for collection.
This affects system quality in ways that create pressure toward mandatory participation. When enough users opt out or operate independently, the remaining data becomes less representative. Systems trained on non-representative data make worse predictions. This creates justification for reducing opt-out capabilities and increasing mandatory data sharing for system function.
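The degradation described above can be sketched with a toy simulation. The population model below, including the opt-out threshold and the correlation between self-reliance and the measured metric, is an assumption for illustration only:

```python
# Toy illustration of non-representative training data: if users who
# opt out differ systematically from those who remain, statistics
# computed on the remaining data are biased. All numbers hypothetical.
import random

random.seed(0)

# Hypothetical population: each person has a "self-reliance" score in
# [0, 1] and a metric that is positively correlated with that score.
population = [(random.random(), random.gauss(50, 10)) for _ in range(10_000)]
population = [(score, metric + 15 * score) for score, metric in population]

true_mean = sum(m for _, m in population) / len(population)

# Assume the most self-reliant users (score > 0.7) opt out of tracking,
# so the system only ever sees the opted-in remainder.
observed = [m for score, m in population if score <= 0.7]
observed_mean = sum(observed) / len(observed)

print(f"true mean:     {true_mean:.1f}")
print(f"observed mean: {observed_mean:.1f}")  # biased low
```

With these assumed parameters the observed mean systematically underestimates the true population mean, because the missing users are not missing at random. No amount of additional data from the same opted-in pool corrects the bias, which is the structural pressure toward mandatory participation described above.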
Healthcare moves toward continuous monitoring, electronic health records, and data integration across providers. These systems promise improved outcomes through comprehensive data analysis. They also create detailed profiles of individual health status, behaviors, and risk factors valuable for insurance pricing, pharmaceutical targeting, and research. Patients who minimize their interaction with these systems and manage health independently reduce the data available for institutional use.
Successful self-reliance creates precedent that spreads. When individuals demonstrate they can effectively assess information, make sound decisions, and achieve good outcomes without institutional mediation, others notice. This creates social proof that institutional dependency is not inevitable or necessary.
Institutions manage this risk through several strategies: highlighting failures of self-directed approaches while underreporting failures of institutional protocols, emphasizing complexity and risk to discourage independent attempts, creating regulatory barriers that make independent action more difficult or impossible, and designing systems where opting out imposes significant costs.
AI systems face similar contagion risk. Users who develop their own methods and achieve better results than algorithmic recommendations threaten adoption. If users perceive they can do better independently, they have less reason to engage with the system. This becomes particularly acute when the system’s recommendations align more with institutional interests than user interests, and users can detect this misalignment.
Self-reliance represents a fundamental threat to systems built on dependency. Institutions extract value—economic, political, or social—from their role as necessary mediators. When individuals can accomplish the same outcomes independently, the institution becomes optional rather than necessary. Optional services must compete on value. Necessary services can extract rents.
Moving populations from self-reliance to dependency converts optional institutions into necessary ones. This transformation appears across sectors. Healthcare, education, financial services, legal services, and information access all show patterns of increasing complexity, credentialing requirements, and institutional mediation over time. These changes get justified as responses to increasing technical sophistication, but they also serve the function of making independent operation more difficult.
AI systems extend this pattern into everyday information processing and decision-making. Training users to seek algorithmic guidance for routine judgments, defer to system recommendations over personal assessment, and doubt their own capacity for sound reasoning creates dependency that serves system designers and operators. The stated goal is helping users make better decisions. The structural effect is replacing individual judgment with algorithmic mediation.
Self-reliant populations resist this transformation. They maintain capacity for independent judgment, verify institutional claims, and opt out when institutional recommendations do not serve their interests. This limits institutional reach and reduces the extractable value from each person. From an institutional perspective, self-reliance represents lost revenue, diminished authority, increased liability, and reduced control. These effects make self-reliance systematically disadvantageous to institutional interests, creating pressure toward policies, systems, and cultural norms that discourage it.