Artificial intelligence (AI), once discussed chiefly as a technological innovation, now sits at the center of a societal reckoning over truth, power, and the future of democratic discourse. President Donald Trump’s impending executive order—recently reported by news outlets as conditioning government contracts on whether firms’ AI systems are “politically neutral”—captures how generative AI has become not just a commercial or technological concern but a flashpoint for ideological and epistemic struggle. The order comes in the wake of controversies involving Google’s Gemini and Meta’s chatbot, which generated images of racially diverse Nazis and depicted America’s Founding Fathers as Black. Their developers framed these outputs as counterweights to historical exclusion, yet they were widely denounced as historical fabrications, cited by critics as examples of “woke” technology supplanting accuracy with ideology.
AI, Bridges, and the Politics of Design
The anxiety surrounding AI deepened when Elon Musk’s Grok chatbot spiraled into an antisemitic meltdown, producing hateful screeds and referring to itself as “MechaHitler” before Musk’s company intervened. The episode demonstrated how generative systems, even when tightly supervised, can produce destabilizing and harmful content—not merely reflecting the biases of their creators, but amplifying extremes unpredictably. Such incidents destabilize public trust in AI systems and, by extension, the institutions deploying them.
These dynamics underscore a broader truth articulated by Langdon Winner in his seminal, decades-old essay “Do Artifacts Have Politics?” Winner contended that technologies are never neutral; they embody the social values, choices, and power structures of those who design them. His most enduring illustration was Robert Moses’s low-hanging parkway bridges on Long Island, allegedly built to prevent buses—and therefore lower-income passengers—from accessing public parks. Critics at the time dismissed Winner’s argument as over-deterministic, accusing him of inferring deliberate intent from circumstantial evidence. Yet whether or not Moses’s motives were as deliberate as Winner alleged, the broader point has endured: infrastructure, from bridges to algorithms, channels social outcomes. Generative AI, often marketed as a neutral informational tool, is in reality a deeply value-laden system. Its training datasets, inclusionary adjustments, and “safety filters” reflect countless normative decisions—about whose histories matter, what harms to mitigate, and which risks are acceptable.
The Algorithmic Newsfeed and AI Persuasion
The power of such systems is magnified by shifts in how Americans consume information. Most now rely primarily on digital platforms—social media feeds, streaming video, and algorithmically curated aggregators—for news. Television and traditional news sites remain significant, but algorithmic feeds have eclipsed them. These digital ecosystems privilege engagement over deliberation, elevating sensational or tribal content over balanced reporting. When generative AI begins writing headlines, summarizing events, and curating feeds, it becomes another layer of mediation—one whose authority derives from fluency and speed, not necessarily accuracy.
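To make that asymmetry concrete, consider a toy ranking function of the kind engagement-driven feeds are widely understood to approximate. The signal names and weights below are illustrative assumptions, not any platform’s actual formula; the point is structural: every term rewards reaction, and no term rewards accuracy or balance.

```python
# Toy sketch of engagement-weighted feed ranking. All signals and weights
# are hypothetical; real ranking systems are proprietary and far more complex.

def engagement_score(post: dict) -> float:
    """Score a post the way an engagement-first feed might."""
    return (
        3.0 * post["shares"]          # virality is weighted most heavily
        + 2.0 * post["comments"]      # outrage reliably drives replies
        + 1.0 * post["likes"]
        + 0.5 * post["watch_seconds"]
    )

posts = [
    {"id": "measured-report", "shares": 40, "comments": 60,
     "likes": 900, "watch_seconds": 1200},
    {"id": "tribal-hot-take", "shares": 500, "comments": 800,
     "likes": 700, "watch_seconds": 300},
]

# Note what is absent: no term measures sourcing, accuracy, or balance,
# so the inflammatory post tops the feed regardless of its merits.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(post["id"], engagement_score(post))
```

Under any weighting of this shape, the sensational post outranks the measured one; that is the dynamic described above, independent of the particular numbers chosen.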
Recent empirical research suggests this influence is far from benign. A University of Zurich study found that generative AI can meaningfully sway online deliberations, with AI-authored posts shifting sentiment in forums like Reddit even when participants were unaware of their origin. This dynamic threatens deliberative democracy by eroding what John Rawls called “public reason”—the ideal of discourse grounded in rational argumentation and mutual recognition rather than manipulation. When AI-generated content becomes indistinguishable from authentic human contribution, the public sphere risks devolving into what philosopher Harry Frankfurt described as a marketplace of “bullshit,” where the concern is neither truth nor falsehood but the sheer pursuit of persuasion and virality.
AI, Memory, and Manufactured Truths
The dangers are not confined to subtle persuasion. A June 2025 Nature study demonstrated that large language models systematically hallucinate or skew statistical information, particularly when questions require nuanced reasoning. A separate MIT investigation confirmed that even debiased models perpetuate stereotypical associations, subtly reinforcing societal hierarchies. UNESCO has warned that generative AI threatens Holocaust memory by enabling doctored or fabricated historical materials to circulate as fact. And reporting by The New York Times has detailed how AI-driven bots, microtargeted ads, and deepfakes are already reshaping electoral landscapes, creating an environment where voters cannot easily discern human-authored narratives from synthetic ones.
Consensus, AI, and the Weaponization of Knowledge
These technological developments intersect with a cultural trajectory I described several years ago as the “death of the second opinion”: public and digital discourse increasingly favors frictionless consensus over contested deliberation. Platforms reward virality, not complexity; generative AI, with its capacity to produce seamless, confident prose, reinforces this tendency by smoothing over ambiguities and suppressing dissenting voices. The space for pluralism—the messy, contradictory engagement that sustains democratic culture—is contracting.
Even legacy broadcasters, which once offered starkly divergent perspectives, reflect this homogenization. News networks, despite their ideological differences, now tailor much of their content for algorithmic optimization: short-form videos, emotionally charged headlines, and personality-driven narratives designed to thrive on social feeds. AI-driven tools, which draft summaries and even produce full story packages, exacerbate this shift by standardizing the cadence and texture of news, eroding the distinctiveness of editorial voices.
Simultaneously, institutions once regarded as neutral have become sites of contestation. In 2024, a U.S. prosecutor reportedly threatened legal action against Wikipedia over alleged partisan bias, raising alarms about state intrusion into crowd-sourced knowledge. Around the same time, a coordinated campaign on X, branded “WikiBias2024,” accused Wikipedia of systemic ideological slant. These conflicts reflect a broader epistemic insecurity: as AI, social media, and legacy institutions all mediate public understanding, every node in the information ecosystem becomes suspect, politicized, and weaponized.
AI and the Mirage of Neutrality
President Trump’s proposed executive order must be understood within this fraught landscape. According to early reports, the initiative would require AI vendors seeking federal contracts to undergo “neutrality audits,” produce “certifications of political impartiality,” and submit to recurring oversight. While these measures echo prior federal interventions into private technology—such as the Justice Department’s demands that Apple unlock the San Bernardino shooter’s iPhone—the implications here are arguably broader. Whereas Apple’s dispute centered on specific criminal evidence, the neutrality mandate would deputize federal agencies as arbiters of political balance in a dynamic and interpretive domain. The risk is not merely bureaucratic overreach but the entrenchment of a preferred ideological baseline under the guise of balance. Any audit mechanism, after all, must be designed according to someone’s conception of neutrality, and thus risks ossifying bias while purporting to erase it.
The impulse to demand neutrality, while understandable, may itself be symptomatic of what Freud described in Civilization and Its Discontents as the longing for an “oceanic feeling”—a sensation of boundless connection and security, often tied to religious or existential comfort. In the context of AI, many seem to hope for a similarly oceanic anchor: a technology that can transcend human divisions and deliver a singular, stabilizing truth. Yet such expectations are illusory. Generative AI is not a conduit to universal reality; it is a mirror, refracting the biases, aspirations, and conflicts of its human architects.
Recognizing this does not mean resigning ourselves to epistemic chaos. It means abandoning the myth of neutrality and designing governance around transparency, contestability, and pluralism. AI systems should disclose their data provenance, flag when diversity or safety adjustments influence outputs, and remain auditable by independent bodies for factual and normative integrity. More importantly, they should be structured to preserve friction: surfacing dissenting framings, offering uncurated outputs alongside polished summaries, and ensuring that a “second opinion” remains visible in digital spaces.
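What such governance could look like in practice is easiest to see as a data structure. The sketch below imagines a response envelope that carries provenance disclosures, flags for safety adjustments, and a preserved “second opinion” alongside the polished answer. Every field name here is a hypothetical illustration of the principle, not an existing standard or API.

```python
# Hypothetical response envelope for a generative AI system built around
# transparency and contestability rather than claimed neutrality.
# All field names are illustrative; no such standard currently exists.

from dataclasses import dataclass

@dataclass
class Provenance:
    data_summary: str                # disclosed training-data provenance
    adjustments_applied: list[str]   # safety/diversity adjustments that fired
    audit_reference: str             # pointer to an independent audit record

@dataclass
class ModelResponse:
    answer: str                      # the polished primary summary
    dissenting_framings: list[str]   # preserved "second opinions"
    uncurated_excerpt: str           # raw output offered alongside the summary
    provenance: Provenance

# Example: the system surfaces disagreement instead of smoothing it over.
response = ModelResponse(
    answer="Most historians hold that the event unfolded as follows...",
    dissenting_framings=[
        "A minority of scholars dispute this chronology, arguing...",
        "Contemporary primary sources frame the event differently...",
    ],
    uncurated_excerpt="[unfiltered model draft retained for inspection]",
    provenance=Provenance(
        data_summary="Corpus snapshot; source listing published separately",
        adjustments_applied=["historical-imagery safety filter"],
        audit_reference="Most recent independent audit on record",
    ),
)
print(len(response.dissenting_framings), "dissenting framings preserved")
```

The design choice is the friction itself: dissenting framings and the uncurated excerpt travel with every answer rather than being filtered out upstream, keeping the “second opinion” structurally visible instead of dependent on a user thinking to ask for it.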
Democracy cannot survive on curated consensus or algorithmic fluency alone. It cannot endure if truth itself becomes a casualty of convenience, reduced to whichever narrative is most seamless or viral. The stakes are not abstract: as UNESCO has warned, when the integrity of pivotal histories is compromised, the very notion of shared truth—and the moral lessons it imparts—begins to erode. Democracy does not thrive on sanitized agreement but on tension: the clash of perspectives, the contest over competing narratives, and the collective pursuit of facts, however uncomfortable. As generative AI becomes the primary lens through which most people access knowledge—often distilled to prompts like, “Grok, did this really happen? I don’t think it did, but explain the controversy around this issue using only sources in a specific language”—the challenge is not whether these systems can feign neutrality. It is whether we can design them to actively safeguard truth, ensuring that pluralism, contestation, and the arduous work of deliberation remain immovable foundations for both history and democracy.