According to The Global Risks Report 2026, published by the World Economic Forum (WEF), misinformation and disinformation remain the topmost technological risks to the world, along with adverse outcomes of Artificial Intelligence (AI) technologies and cyber insecurity.
Saadia Zahidi, WEF Managing Director, wrote in her preface to the current report: “technological acceleration, while driving unprecedented opportunities, is also generating significant risks in the form of misinformation and disinformation, a top short-term concern, and creating anxiety about the potentially adverse long-term outcomes of AI, a risk that sees the sharpest increase in rank between the short term and the long term across all 33 risks covered.”
The report, on page 8, presents a graphic representation of the current global risk landscape, in which all three technological risks fall within the top 10. Misinformation and disinformation rank as the 5th biggest global risk, while adverse outcomes of AI and cyber insecurity sit at the 8th and 9th positions, respectively.

Rise in AI misinformation
In the last few years, the creation of generative AI content (audio, video, image and text) has become topical, given the role such platforms can play in creating and amplifying misinformation and disinformation, especially online.
But amid AI’s rapid evolution and the many concerns that continue to be raised, the technology’s role in policing itself has, to an extent, more recently come to the fore. For people in the information integrity space, this may amount to playing catch-up, but better late, as they say, than never.
In this piece, GhanaFact explores how specific AI platforms are helping combat possible AI misinformation and disinformation, with specific reference to generative AI content. We will use a number of publications from our AI & deepfakes category to illustrate how we leveraged AI to combat false narratives spread online.
What is SynthID?
SynthID is a robust digital watermarking technology developed by Google DeepMind that allows users to identify content generated with Google AI.
What does the detector do?
The detector allows you to verify content created or edited with Google AI by detecting the presence of SynthID watermarks that are imperceptibly embedded in the AI content. Additionally, the detector offers detailed information, such as localised information about where it believes the generated and watermarked content exists, and a confidence level indicating the likelihood that the content is AI-generated.
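Google does not publish a programmatic API for the SynthID detector portal, but conceptually its report pairs a localisation map (which spans of the content carry the watermark) with a confidence score. The toy sketch below is purely illustrative — the names `Region` and `summarise`, and the 0.8 threshold, are our own assumptions, not part of any Google product — and simply shows how a fact-checker might read such a report:

```python
from dataclasses import dataclass

@dataclass
class Region:
    """A span of the content the detector flags as likely watermarked."""
    start_s: float      # start of flagged span, in seconds (for video/audio)
    end_s: float        # end of flagged span, in seconds
    confidence: float   # 0.0-1.0 likelihood the span carries a SynthID watermark

def summarise(regions, duration_s, threshold=0.8):
    """Toy interpretation of a detection report: what share of the
    content is flagged with confidence at or above the threshold?"""
    flagged = sum(r.end_s - r.start_s for r in regions if r.confidence >= threshold)
    share = flagged / duration_s
    if share == 0:
        return "No SynthID watermark detected"
    if share < 1.0:
        return f"SynthID detected in part of the content ({share:.0%})"
    return "SynthID detected in all of the content"

# Hypothetical report: three flagged segments in a 102-second video
report = [Region(0, 20, 0.95), Region(20, 35, 0.91), Region(40, 56, 0.88)]
print(summarise(report, duration_s=102))  # SynthID detected in part of the content (50%)
```

The point of the sketch is the shape of the output, not the detection itself: the real detector distinguishes “all”, “part” or “none” of the content being watermarked, which is exactly the wording the case studies below quote.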

Using the above interface, a user (specifically fact-checkers and others in the information integrity space) can verify whether a piece of content (image, audio or video) was generated using any Google AI platform. The same resource is also available to the public via the Google Gemini platform.
To show how Google’s SynthID platform has helped us combat AI misinformation, we present three case studies published on the GhanaFact and FactSpace West Africa websites.
Case study 1 – Ken Ofori-Atta image
When news broke about former finance minister Ken Ofori-Atta’s detention by ICE officials in early January 2026, some social media users shared images and videos purportedly showing moments in which the former minister had been arrested.
The images were shared across platforms – X, Facebook, Instagram and TikTok, and in some cases by non-Ghanaian accounts or pages (see here, here, and here).
At the time these images were shared, no credible media platforms or arresting authority had released any footage showing Ken Ofori-Atta’s arrest and subsequent detention.
GhanaFact investigated the image and video using Google SynthID. The result stated that SynthID had been detected in all or part of the uploaded content, indicating a very high confidence that the content was AI-generated.

Case study 2 – Osimhen, Regina Daniels fake marriage
Shortly after Nigeria lost their semi-final fixture against Morocco at the 2025 Africa Cup of Nations (AFCON), viral posts on social media suggested that striker Victor Osimhen had proposed to actress Regina Daniels on the pitch. The posts showed a photo of Osimhen on one knee with a ring in his hand while Regina Daniels looked on, with Osimhen’s teammates captured in the background cheering him on.
We passed the image through the Google Gemini platform (the publicly available AI-detector platform), and it returned a result indicating that “the image contains significant signs of being AI-generated or digitally manipulated, specifically through the presence of anatomical distortions and inconsistent digital textures.”
After evaluating the visual evidence and real-world context, Gemini said: “While the subjects, Nigerian footballer Victor Osimhen and the Super Eagles team, are real public figures, the specific image showing a midfield marriage proposal is an AI-generated fabrication”.
It added that “The distortions in the hands of background players and the garbled text on the stadium displays confirm this is a ‘deepfake’ illustration rather than a photograph of a real event.”

Case study 3 – AI images, video during the Cameroon elections
During campaigning towards Cameroon’s October 2025 presidential election, FactSpace West Africa observed the use of AI-generated content for social media campaigns by two major candidates.
We found that a viral campaign video by incumbent Paul Biya was partly created with AI; the same was true of a final campaign video shared by a major opposition candidate, Joshua Osih of the Social Democratic Front (SDF).
When we put the Paul Biya footage through the Gemini platform, it returned a verdict that “a significant portion of this video appears to be AI-generated,” before pointing out three scenes containing the suspected AI content. It described the video as a ‘hybrid production’ combining real footage with AI-generated material.
We also ran the SDF video through the SynthID platform, and it returned the result “Made with Google AI (Video),” adding that “SynthID detected in all or part of the uploaded content.”
Further details showed that three parts of the video were generated using AI. The three parts, predominantly at the beginning of the video, account for about half of the 1-minute and 42-second video that was uploaded.

Conclusion
In a blog post in May 2025, Pushmeet Kohli, VP, Science and Strategic Initiatives, Google DeepMind, underscored the importance of Google’s AI-detection platform, which he described as “a verification portal to quickly and efficiently identify AI-generated content made with Google AI.
“The portal provides detection capabilities across different modalities in one place and provides essential transparency in the rapidly evolving landscape of generative media. It can also highlight which parts of the content are more likely to have been watermarked with SynthID,” he added.
Aside from FactSpace West Africa, other fact-checking organisations across the world are taking advantage of the platform to get better outcomes in combating AI-generated misinformation.
Researched by Gifty Danso, Samuel Adjetey Adjei and Alfa Shaban
