Get ready for 2026: When “fake clips” begin to shape public opinion
As we step into 2026, generative AI, the technology for synthesizing images and audio, has developed to the point where real images can no longer be easily distinguished from fake ones. Lessons from previous years show that fake clips are no longer created merely for entertainment; they have become a crucial tool for driving social and political agendas.
How should we prepare ourselves when “fake clips” manipulate public opinion more seamlessly in 2026?
Associate Professor Dr. Wilaiwan Jongwilaikasem of the Faculty of Journalism and Mass Communication, Thammasat University, shared her views on AI technology trends in 2026. She stated that 2026 marks the entry into what can be called the era of “Hyper-Realistic Chaos,” the result of artificial intelligence capable of generating content (images, audio, and video) that is “more realistic than human senses can distinguish,” leading to widespread confusion over what is real. In 2025, we already saw AI transition from being merely a “supporting tool” to a full-fledged “actor,” or AI agent.
[caption id="attachment_8068" align="aligncenter" width="1024"]
Associate Professor Dr. Wilaiwan Jongwilaikasem, Faculty of Journalism and Mass Communication, Thammasat University[/caption]
Scammers will no longer send just fake links; they will use AI voice and video calls that imitate the voices and faces of our close acquaintances in real time to trick victims into transferring money. At the same time, misinformation will be tailored to each person’s preferences, analyzed from their browsing history and search behavior, to persuade them at a psychological level.
“In 2026, distinguishing truth with the ‘naked eye’ will be nearly impossible. The telltale flaws of the past, such as unnatural blinking, extra fingers, or unrealistic shadows, will be fully corrected. Not only will fake clips be visually sharp, but they will also seamlessly convey ‘emotion,’ instantly affecting people’s feelings.
In summary, in 2026, ‘seeing no longer means believing.’ Fake clips will become a major weapon of social engineering, used to manipulate political and social opinion.”
[caption id="" align="aligncenter" width="1214"]
An AI-generated clip so realistic that mainstream South Korean media reported it as news. The clip falsely claimed that Thailand had used F-16 fighter jets to attack a building in Cambodia. Published on December 11, 2025, the post gained 443,600 views.[/caption]
How should we prepare for 2026?
Approaches to coping with this issue in 2026 can be divided into four key strategies:
- Pause for a moment: When a clip triggers strong emotions, such as anger, shock, or fear, stop for five seconds to collect yourself before sharing.
- Use AI to check AI: Use tools or browser extensions that analyze metadata and the sources of images and videos.
- Digital Literacy 2.0: One should understand that deepfakes can now be created “live.” Even video calls cannot be trusted 100% when money is involved.
- Bear this in mind: Technology is advancing rapidly, but what AI finds hardest to replicate is “deep context” and “social plausibility.”
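The “use AI to check AI” step can begin with something far simpler than AI: checking whether a file carries any metadata at all. The Python sketch below is a toy illustration of our own, not a tool named in the article; it scans a JPEG’s raw bytes for an Exif APP1 segment, since synthetic or heavily re-encoded images often ship with no camera metadata. Absence of Exif proves nothing on its own (screenshots and re-uploads strip it too), so treat it as one weak signal to combine with source checking.

```python
def has_exif_segment(jpeg_bytes: bytes) -> bool:
    """Return True if the JPEG data appears to contain an Exif APP1 segment.

    A phone photo almost always carries Exif; many AI-generated or
    re-encoded images carry none. This is a weak provenance signal,
    not proof either way.
    """
    # Every JPEG file begins with the SOI (Start of Image) marker 0xFFD8.
    if not jpeg_bytes.startswith(b"\xff\xd8"):
        return False
    # Exif data lives in an APP1 segment (marker 0xFFE1) whose payload
    # starts with the literal header "Exif\x00\x00".
    return b"\xff\xe1" in jpeg_bytes and b"Exif\x00\x00" in jpeg_bytes
```

A real verification workflow would parse the segment structure properly (or inspect C2PA content credentials where present); this byte scan only shows the idea of looking at metadata rather than at the image itself.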
What happens when technology can create faster than verification?
Assistant Professor Dr. Vera Sa-ing, lecturer of the Department of Electrical and Computer Engineering, Faculty of Engineering, King Mongkut’s University of Technology North Bangkok (KMUTNB), shared his views on AI development over the past year. He noted that in 2025, society encountered generative AI—especially images and videos—that were increasingly difficult to verify. This leads to a projection that in 2026, society will face even more advanced AI-generated clips, driven by creation technologies that are smarter, more capable, and more realistic.
When content-generation technology advances faster than verification methods, the growing subtlety of synthetic content will make it easy for people to be misled by what they encounter.
[caption id="" align="aligncenter" width="1024"]
Assistant Professor Dr. Vera Sa-ing, Department of Electrical and Computer Engineering, Faculty of Engineering, King Mongkut’s University of Technology North Bangkok[/caption]
“What is truly frightening is the mixing of truth and falsehood within the same piece of content.” For example, blending 80–90% accurate information with 10–20% falsehood makes verification far more difficult. Recently, this has begun to appear in the research community as well: data generated by AI, never experimentally validated, has been used to produce new research papers. It must be acknowledged that such AI systems can insert fabricated information into articles, making them appear more complete or more credible. In some cases, papers that seem unrealistic or implausible have been published, prompting increased scrutiny, including greater efforts to request raw research data in order to verify the authenticity of the original sources.
[caption id="" align="aligncenter" width="1200"]
Example of fake news mixed with real information: A TikTok clip claimed to show a military response by the 31st Infantry Regiment during the Thai–Cambodian situation. The clip gained more than 3.7 million views and was widely shared by mainstream media. However, it was later found to be an old video of a nighttime military training exercise, unrelated to the border conflict.[/caption]
Nevertheless, spotting AI-generated content becomes easier with a certain level of knowledge and experience. For those who lack such background in the subject matter, however, the content can easily be believed at face value, a problem that is increasingly observed today.
Guidelines for verification in 2026:
- Start from a position of skepticism
Believe only about 50% of what you see, and always verify before accepting it as true.
- Seek reliable information
Look for confirmation from knowledgeable individuals or trustworthy institutions.
- Use judgment to separate “truth” from “preference”
Even if false information has been identified, if people cannot distinguish factual truth from their preferences, it will remain difficult to stop the spread of misinformation within their own information environment.
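The guidelines above boil down to a tally rule: start skeptical, and accept a claim only once enough independent sources confirm it. The sketch below is our own minimal illustration in Python (the `Report` type and the threshold of two are assumptions for the example, not anything prescribed in the article). Reposts of the same viral clip are flagged as non-independent, because, as the F-16 example shows, many outlets resharing one fake still amounts to a single source.

```python
from dataclasses import dataclass

@dataclass
class Report:
    outlet: str        # who published it
    confirms: bool     # does it confirm the claim?
    independent: bool  # original reporting, not a repost of the same clip

def verdict(reports: list[Report], threshold: int = 2) -> str:
    """Count only independent confirmations; reposts of one clip don't add up."""
    independent_confirms = sum(1 for r in reports if r.confirms and r.independent)
    return "corroborated" if independent_confirms >= threshold else "unverified"
```

Under this rule, three outlets all resharing the same TikTok clip still yield “unverified,” while two outlets with their own sourcing cross the threshold.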
The best defense in 2026 is strong media literacy: the habit of questioning emotionally charged content, verifying sources, and cross-checking information with credible news organizations before sharing it. Because the speed of sharing now carries greater risk than ever, pausing to think and “checking carefully before believing” is the most crucial skill for surviving the information war of 2026.