AI and Thai politics: When technology challenges truth in the electoral arena
Thailand is heading into a general election on February 8, 2026, a critical political turning point. Fake news, misinformation, and data distortion, especially when linked to or produced with AI, have emerged as challenges that society can no longer ignore.
Thai PBS Verify's investigations over the past year have found that AI has been drawn squarely onto the battlefield of Thai politics, both as a “content creator” and as an “excuse” used to shift blame onto technology.
A worrying phenomenon is the attempt to turn AI into the “scapegoat of society,” obscuring where content originates, who produced it, and who bears responsibility, even though behind every algorithm there are still people who choose, apply, and direct it.
The use of AI in politics is therefore akin to wielding a “double‑edged sword.” On one side lies the power of progress; on the other, if employed as a tool to distort truth or as a shield against ethical responsibility, it inevitably becomes a weapon that erodes public trust and destabilizes the foundations of democracy.
To expose the dangers that creep in quietly under the banner of “technology,” Thai PBS Verify has gathered cases that examine the political significance of AI. The aim is not merely to highlight the negative side of the technology, but to encourage society to question the power, motives, and hidden intentions behind its use: forces that may be shaping public thought without our awareness.
Thaksin and AI images: when fake pictures become tools for amplifying bias
Former Prime Minister Thaksin Shinawatra was convicted on charges of corruption and abuse of power and began serving his sentence at Klong Prem Prison on September 9, 2025, in connection with the “14th floor case.” During this period, social media was flooded with images of Thaksin, ranging from alleged photos taken inside the prison to satirical depictions such as one of him singing a duet with Sek Loso, the famous rock singer who is also now imprisoned. The Department of Corrections later confirmed that these were AI-generated images, not real photographs. Nevertheless, the images continued to circulate widely. The dominant tone of public reaction was satirical, aimed at generating engagement while amplifying bias and intensifying emotional hostility among audiences.
AI voices and politicians: from doubt to political tremors
Beyond images, Thai PBS Verify also began investigating cases of “AI voices” after a wave of suspicion on social media that leaked audio clips of politicians might actually have been generated by AI. The first case involved former Prime Minister Paetongtarn Shinawatra, after an audio clip was released of her alleged conversation with Cambodia’s Hun Sen. The incident sparked public demands for clarification and intense debate over her suitability to hold office, particularly because of her use of familiar language such as the word “Uncle” amid ongoing unrest along the Thai–Cambodian border. The Constitutional Court subsequently issued a ruling that removed her from the premiership.
Following the border tensions, social media users also circulated an audio clip allegedly featuring Prime Minister Anutin Charnvirakul ordering the opening of the Khao Din checkpoint in Sa Kaeo province, at a time when Thai–Cambodian relations were under significant strain.
Toward the end of the year, after Prime Minister Anutin Charnvirakul announced the dissolution of Parliament, the election season returned. An audio clip surfaced of a politician allegedly instructing aides to check numbers from gambling websites. The politician later denied the voice was his, shifting responsibility by claiming it had been generated with AI.
This episode illustrates how technology is increasingly being used both as a weapon for political attacks and, in some cases, as a convenient “scapegoat” within the blurred boundaries of truth in the digital media sphere.
When the information war challenges truth in the electoral arena
In the context of Thai politics, AI is becoming part of the ongoing “information warfare” — a tool deployed to fabricate false credibility, attack political opponents, and erode public trust in institutions.
Social media: the main battlefield of election campaigns in the age of AI
Dr. Purawich Watthanasuk, political science lecturer at Thammasat University, has voiced concern over the growing use of AI in politics — particularly in generating campaign momentum and discrediting political parties. These trends became evident in the previous election, when social media was seen as the primary battleground of politics. In the upcoming election, the role of AI is expected to intensify further, heightening the risks of fake news and misinformation, especially during the campaign period when the media must carry out rigorous fact‑checking.
Fake news: a challenge to election oversight
Fake news is not unique to Thailand; it is a global phenomenon that often intensifies during election periods. Each political party and its supporters now command their own fan base and content creators, who can harness AI to produce material rapidly and at low cost. This makes the control of false information a critical challenge, both for the media and for regulatory bodies such as the Election Commission.
LINE and the closed spaces of hard‑to‑verify false information
While fake news on social media can still be traced to its sources to some extent, a growing concern is its spread through the LINE application, a closed-group communication space that is far more difficult to monitor. AI can now generate fake images, text, and other content with striking realism and precision in a matter of seconds, far outpacing traditional editing methods, and these materials are then rapidly forwarded within targeted groups of recipients.
[Photo: Dr. Purawich Watthanasuk, political science lecturer at Thammasat University]
When fake news “works first,” truth struggles to catch up
A recent example is the selective editing and circulation of certain messages or fragments of information, which quickly ignited reactions among politically biased groups. Even when fact-checking followed, in many cases the fake news had already “done its work,” shaping public perception before the truth could intervene.
Human bias: the crucial fuel of fake news
Dr. Purawich has pointed out that bias is inherent in human nature. When people encounter information, images, or videos that align with their existing beliefs or attitudes, they are more likely to accept them without questioning. This psychological mechanism is a key factor behind the rapid spread of fake news — particularly within LINE group chats, where such content circulates with striking efficiency.
How to respond when fact‑checking alone is not enough
Fact-checking, on its own, may no longer suffice. A long-term strategy requires building media and AI literacy among the public and fostering a culture of questioning information before believing or sharing it. The challenge of AI-driven fake news will not end with this election; it is a structural issue that Thai society must continue to confront collectively.
AI in Thai political policy: questions about outcomes
Reflecting on the role of AI in Thai politics this year, Worawisut Pinyoyang, co‑founder of ImpactMind AI and Insiderly.ai, observes that nearly every political party has incorporated AI into its policy agenda, promoting its use as a way to equip citizens with AI literacy.
However, what remains unclear is the tangible impact: once the public gains knowledge of AI, how will this translate into economic benefits? At present, there are no concrete studies or clear evidence of whether AI learning will lead to better job creation or add measurable value to the economy.
“So what comes next after knowing that? Some parties say that investments in data centers or related projects will create jobs, but not to the extent of transforming the labor market. Unlike abroad, where new AI‑related positions are emerging in large numbers and AI engineers are promoted, high‑skill workers command high salaries and contribute significantly to national income. Thailand has yet to see such discussions. No one has clearly explained whether learning AI will lead to better jobs or how it will benefit the country. Every party has talked about AI, but the question remains: once people know it, then what?” Worawisut said.
[Photo: Worawisut Pinyoyang, co-founder of ImpactMind AI and Insiderly.ai]
AI and politics: the risks of distorted data and “embellished” answers
When it comes to applying AI in politics, one of the key concerns is the risk of misinformation. Worawisut cites an experiment by a political party in which AI produced an incorrect response: it presented the Land Bridge project as if it were the party’s official policy, even though it was not. The AI had been trained on statements previously made by the party and then generated answers accordingly.
This case shows that even parties with advanced technological understanding are not immune to errors. Ordinary citizens who lack media or AI literacy may easily misinterpret such information, and if the issue involves sensitive topics or allegations, the consequences can be damaging.
Fake images, voices, and videos: a new political challenge before elections
In the run-up to the last election, AI was widely used to generate fake images, voices, and videos — a trend that is likely to intensify. What is particularly concerning is that Thailand still lacks a system to identify or flag content created by AI. By contrast, platforms abroad such as Facebook and Instagram already display labels indicating “AI‑generated content” and even allow users to report material as AI‑produced.
Worawisut notes that it remains unclear whether the Election Commission has any laws or regulations prohibiting the use of AI to create political content. If political parties choose to employ AI, he argues, they should clearly disclose whether a piece of content or infographic was generated by AI. Far from being harmful, such transparency would help citizens better distinguish and evaluate the information they receive.
The challenge of verification in a society without a central information hub
Another critical issue is the spread of false information used for political attacks, which is difficult to verify. Although Thailand has laws such as the Computer Crime Act, the country still lacks an official central database for fact-checking. When a post claims that someone said or did something that was not true, citizens have almost no way to verify it immediately unless a central authority steps in to establish the facts and stop false claims from being hurled back and forth.
AI literacy: a critical weakness in the digital political arena
From the perspective of ordinary citizens, AI literacy remains a major challenge. In Thailand, the level of AI literacy is still low. Those who are aware of AI’s existence may question the information they receive, while those who are not tend to believe it outright.
Worawisut believes this issue must be addressed over the long term, either through the education system or by involving institutions such as the Election Commission in public awareness efforts. If political parties are left to explain on their own, citizens may not trust them. What is needed is a neutral “middleman,” an authority that society can rely on for credible guidance.
AI and the shaping of belief: when predictions and images of victory influence decisions
Furthermore, AI and algorithms are increasingly being used to shape public belief — for example, through election forecasts or by creating the impression that a particular party is likely to win. Human psychology tends to favor siding with whoever appears to be the likely victor. If fake pages or large numbers of fabricated accounts are deployed to support a candidate or party, citizens can be misled into believing the narrative, without realizing that the information has been artificially constructed.
IO and fake accounts: manufacturing trends that appear real
This phenomenon has already emerged in Thailand in the form of IO (information operations), with each political party deploying them to varying degrees, and online communication experts can wield significant influence. In a YouTube poll, for example, fake accounts can be created to vote a particular individual into the top spot. Once the results are amplified by media outlets or analysts, they strongly shape public perception.
Moreover, social media comments are increasingly being fed into AI systems for further analysis, a practice referred to as “dark data,” which is becoming more complex and diverse.
In the end, the question may not be how much AI will shape politics, but whether Thai society can keep pace with it. In an age when images, voices, and information can be created and distorted within seconds, truth may be defeated not by technology itself, but by the absence of verification mechanisms, media literacy, and shared responsibility among all stakeholders. This is the critical challenge Thailand must confront before AI deepens the fractures already running through its democracy.