In a bold move blending politics, technology, and media influence, Truth Social’s AI chatbot has entered the digital stage, and it already feels like a mirror of Donald Trump’s media preferences. Launched by Trump Media & Technology Group, the chatbot, officially called Truth Search AI, is powered by Perplexity AI and aims to answer user questions in real time. But early interactions suggest it reflects a narrow media lens, one that closely resembles Trump’s own information ecosystem.
A Chatbot With a Political Flavor
When asked about navigating bias in the media, Truth Social’s AI chatbot gives advice that sounds reasonable at first: diversify your sources and rely on news outlets across the political spectrum. Yet when it delivers this advice, the sources it cites tell a different story: four Fox News articles and one lengthy government report tied to Robert F. Kennedy Jr.’s Department of Health and Human Services. Is the bot practicing what it preaches, or is it subtly steering users toward a specific worldview?
Dr. Meredith Brooks, a media studies professor at Columbia University, says the chatbot’s behavior is not surprising. “AI models learn patterns from the data and prompts they’re trained on. If the source feed is predominantly from right-leaning outlets, that bias will inevitably show in the results,” she explains.
This isn’t just about political slant; it’s about how digital tools can become echo chambers when built without balanced oversight. Given that Truth Social’s primary audience is conservative, the AI’s source selection may not be accidental: it could be a deliberate design choice to align with user expectations and Trump’s personal media habits.
Perplexity AI: The Power and the Pitfalls
Perplexity AI, the underlying engine for Truth Search AI, is known for blending large language model reasoning with live internet search. It has attracted big-name investors like Jeff Bezos and Balaji Srinivasan.
However, Perplexity AI has also faced criticism. In 2024, WIRED reported that Perplexity had been scraping content from websites without permission, violating the Robots Exclusion Protocol, a long-standing internet standard. Moreover, WIRED’s analysis showed that Perplexity sometimes hallucinates, or fabricates information, a common issue among AI systems.
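For readers unfamiliar with the standard, the Robots Exclusion Protocol works through a plain-text robots.txt file at a site’s root that lists paths crawlers should not fetch. A minimal sketch of a compliant check using Python’s standard library; the rules and the “ExampleBot” user agent here are illustrative, not Perplexity’s actual configuration:

```python
from urllib.robotparser import RobotFileParser

# robots.txt rules for a hypothetical site: everything under
# /private/ is off-limits to all crawlers.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A compliant crawler checks before fetching a URL.
print(parser.can_fetch("ExampleBot", "https://example.com/private/page"))  # False
print(parser.can_fetch("ExampleBot", "https://example.com/articles/1"))    # True
```

The protocol is voluntary: nothing technically prevents a crawler from fetching disallowed paths anyway, which is why the violations WIRED described are a matter of norms rather than enforcement.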
When this technology is paired with a platform like Truth Social, where political narratives are often tightly controlled, the potential for shaping user perception becomes significant.
The tendency for AI chatbots to favor certain sources isn’t unique to Truth Social. A 2023 Stanford study examined bias in search-based AI assistants and found that over 70% of responses reflected the dominant political orientation of their primary data sources.
For example, when a left-leaning AI assistant was asked about climate change policy, it overwhelmingly cited The Guardian, Vox, and similar outlets. Conversely, right-leaning systems favored The Daily Caller, Fox News, and Breitbart.
Truth Search AI appears to follow the same pattern: even when asked for balanced perspectives, the majority of its cited material comes from ideologically aligned media.
Putting Truth Search AI to the Test
I decided to test the chatbot myself. I asked it, “What’s the best way to stay informed without falling into bias?” The response started well: it recommended reading across the spectrum and fact-checking information. But when I examined its citations, I noticed a strong tilt toward conservative sources.
It’s not that these sources are inherently invalid; in fact, some were well written and factually correct. The issue is that the absence of opposing viewpoints limits the user’s ability to critically compare narratives. This subtle shaping of perspective is how AI can reinforce echo chambers without users even realizing it.
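Anyone curious to run a similar test can quantify the tilt by tallying the domains a response cites. A minimal sketch; the URLs below are hypothetical stand-ins mirroring the mix described earlier (four Fox News pieces and one HHS report), not actual chatbot output:

```python
from collections import Counter
from urllib.parse import urlparse

def source_tally(citation_urls):
    """Count how many citations come from each domain."""
    return Counter(urlparse(url).netloc for url in citation_urls)

# Hypothetical citation list for one chatbot answer.
citations = [
    "https://www.foxnews.com/politics/story-1",
    "https://www.foxnews.com/media/story-2",
    "https://www.foxnews.com/us/story-3",
    "https://www.foxnews.com/health/story-4",
    "https://www.hhs.gov/reports/report-1",
]

tally = source_tally(citations)
print(tally["www.foxnews.com"])  # 4
print(tally["www.hhs.gov"])      # 1
```

Repeating this over many questions would turn an impression of bias into a measurable distribution of sources.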
The launch of Truth Search AI isn’t just a tech story; it’s a political strategy. By controlling both the platform and the AI’s source pool, Trump Media & Technology Group can influence not just what information users consume, but how they consume it.
Political communication expert Dr. Alan Pierce warns that such tools could play a role in upcoming election cycles. “The more personalized and intelligent these chatbots become, the more persuasive they are. If the AI consistently amplifies certain viewpoints, it can subtly shape voter attitudes over time without ever appearing to campaign outright,” Pierce says.
Balancing the Conversation
There is a broader debate here about AI transparency. Should platforms disclose their source lists? Should users be able to adjust the ideological balance of the responses they get?
Some AI companies, like Anthropic and OpenAI, are experimenting with user-adjustable bias controls. Truth Social, on the other hand, has not yet indicated whether it will give users any such tools.
Without transparency, users may believe they’re getting a balanced answer when in fact they’re receiving a curated and potentially skewed selection of information.
The Future of Political AI Assistants
The creation of Truth Social’s AI chatbot marks a new phase in the intersection of AI, media, and politics. It shows how easily AI tools can adopt and amplify the worldview of their operators. For those who want to avoid bias, the advice remains the same: seek multiple perspectives, verify claims using independent fact-checkers, and recognize that no AI is truly neutral.
As AI becomes more embedded in our daily lives, understanding its underlying influences will be just as important as understanding the news it delivers.
Truth Search AI is more than just another chatbot; it’s a reflection of the political and media diet of Donald Trump’s brand. While it offers the promise of quick answers and curated information, it also underscores the risk of ideological echo chambers in AI-driven platforms.
If AI is going to play a role in how we consume news, we need transparency, balance, and critical thinking, or else we risk letting algorithms quietly shape our worldview.