OSINT vs AI Deep Fakes: The New Frontline in Ukraine’s Digital Battlefield
Over the past few weeks, short videos have been popping up on X (formerly Twitter) allegedly showing Ukrainian soldiers crying, surrendering, or complaining about forced mobilization. They look real (kind of). They sound real. They're not.
They’re AI-generated deepfakes, also known as synthetic videos, designed to mimic real footage. And they’re getting frighteningly convincing and increasingly difficult to verify.

According to Ukraine’s Center for Countering Disinformation, these clips are part of an ongoing Russian PSYOPS campaign aimed at undermining morale during the intense fighting for Pokrovsk.
A PSYOPS campaign seeks to shape perceptions, influence behavior, and erode morale by blending real and fabricated information into persuasive narratives. Such operations have always been part of warfare—once spread through rumors, leaflets, or broadcasts, and now, in 2025, through AI-generated deepfakes.
What Deepfakes Are
Deepfakes are videos created or altered using artificial intelligence to make people appear to say or do things they never did.
Early versions used "face swap" models that pasted one face onto another. Now, newer text-to-video systems like Google's Veo or OpenAI's Sora can generate entire scenes from just a sentence of text, camera movement, lighting, and voice included. Combined with today's text-to-video generation, face swapping becomes even more effective.
That's the kind of technology likely behind these clips: a prompt such as "Ukrainian soldier crying about mobilization, cinematic light, shot on phone." A few minutes later, out comes a ready-to-post fake, complete with shaky handheld motion and fake tears. A patient and diligent prompter can go above and beyond to keep even the smallest details consistent (camouflage pattern, tactical equipment, and so on), at least convincingly enough for a broad audience. This particular video, subtitled in several languages, spread rapidly across the internet. It was fake.
How We Can Tell They’re Fake
A few indicators stand out if you slow down and look closely:
- Lighting and texture. The skin tone and shadow depth stay perfectly smooth (think Instagram filter), even when the person moves; real phone footage rarely looks this clean.
- Emotion mismatch. The “crying” looks a little too theatrical. Tears glisten but don’t run or smear. And expressions often freeze mid-frame.
- Audio quality. The background is silent and the voice crystal clear (e.g. no ambient wind, traffic, or echo).
- Lip sync errors. Like watching a poorly dubbed concert performance, you can immediately tell when the lips and voice don’t line up. It still happens often and remains one of the easiest giveaways of a deepfake.
- Misrendered artefacts. Many AIs still struggle to accurately depict details like glasses or other objects, hair texture, fingers, hands, writing/inscriptions, or complex backgrounds. Zoom in on these areas—blurriness or odd distortions often give the fake away.
- Distribution and Source Tracing. In this case, the origin appears to have been @fantomoko on TikTok. The account, now disabled, had over 25k followers and posted only AI-generated anti-Ukrainian propaganda videos. In many cases, however, disinfo videos appear on newly created or low-follower accounts that are quickly amplified or cross-posted on other platforms.

fantomoko’s TikTok grid display shows a plethora of AI-generated anti-Ukraine propaganda
- Facial Detection. Running facial recognition searches through Google Images, Yandex (as in this case), or PimEyes can reveal matches with real people across the internet. Open.Online found that the "Ukrainian soldier" in the video is based on the Russian Twitch streamer @kussia88; see the Open.Online investigation for the full rundown on how this particular video was verified. A minimal face-comparison sketch follows this list.
- Context. Context always matters, and this case is no exception. Russia is once again pushing a meta-narrative that Ukrainian soldiers are surrounded in Pokrovsk. While the situation is difficult, there is no proof of a complete encirclement. That is exactly why the Russians would like to 1) force a surrender or withdrawal without their forces incurring additional heavy losses for every square meter; and 2) amplify established narratives that "Russia is winning, Ukraine is losing." Another important piece of context: Ukraine's minimum mobilization age is 25, yet the synthetic character claims to be 23 (he says in Ukrainian: "I don't wanna die, I'm just 23 years old"), so even a quick qualitative fact check should cast doubt.
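To illustrate the facial detection step, here is a minimal sketch that compares a face pulled from a video frame against a candidate reference photo using the open-source face_recognition library. The file names and the 0.6 distance threshold are illustrative assumptions, not the method Open.Online used (they ran searches through services like Yandex).

```python
# Minimal sketch: compare a face from a suspect video frame against a
# candidate reference photo with the open-source face_recognition library.
# File names (candidate_photo.jpg, suspect_frame.png) are placeholders.
import face_recognition

# Load the reference photo (the person the synthetic face may be modeled on)
# and a frame extracted from the suspect video.
reference = face_recognition.load_image_file("candidate_photo.jpg")
frame = face_recognition.load_image_file("suspect_frame.png")

ref_encodings = face_recognition.face_encodings(reference)
frame_encodings = face_recognition.face_encodings(frame)

if not ref_encodings or not frame_encodings:
    raise SystemExit("No face found in one of the images")

# face_distance returns roughly 0 (identical) to 1 (unrelated);
# values around 0.6 or lower are commonly treated as a possible match.
distance = face_recognition.face_distance([ref_encodings[0]], frame_encodings[0])[0]
print(f"Face distance: {distance:.3f}")
print("Possible match" if distance < 0.6 else "No strong match")
```

A low distance is a lead, not proof; synthetic faces are often deliberate blends, so treat any hit as a starting point for the kind of manual comparison Open.Online carried out.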

Sora watermarks still visible on many AI-generated propaganda videos
In addition, a few early uploads had corner watermarks like “Sora” that were quickly cropped out once people noticed. The advantage of Sora is that its watermark appears in random spots throughout the video, so cropping a corner won’t remove it. To erase it entirely, a propagandist would need to edit multiple frames across the clip — a time-consuming task — or cut the video apart, making it largely unusable.
Other videos (example #1, example #2, example #3) showing "Ukrainian soldiers surrendering" have the Sora watermark all over.
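Because the watermark can surface in different spots across the clip, one rough way to check a suspect video for it, assuming you have a clean crop of the watermark to compare against, is to sample frames and run OpenCV template matching over each one. This is a sketch only: the file names and the 0.7 similarity threshold are assumptions, and template matching struggles with scaled, rotated, or heavily compressed watermarks.

```python
# Minimal sketch: scan sampled frames of a suspect clip for a known watermark
# using OpenCV template matching. "sora_watermark.png" and "suspect.mp4" are
# placeholder file names; the thresholds are illustrative only.
import cv2

template = cv2.imread("sora_watermark.png", cv2.IMREAD_GRAYSCALE)
cap = cv2.VideoCapture("suspect.mp4")

frame_idx = 0
hits = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % 15 == 0:  # sample roughly every half-second at 30 fps
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        result = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        if max_val > 0.7:  # illustrative similarity threshold
            hits.append((frame_idx, max_loc, round(float(max_val), 2)))
    frame_idx += 1
cap.release()

print(f"Possible watermark sightings: {len(hits)}")
for idx, loc, score in hits:
    print(f"frame {idx}: location {loc}, score {score}")
```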
Tools That Can Help
There’s no single “deepfake detector” that works every time — but a few tools can nudge you in the right direction:
- InVID — great for breaking a video into frames, doing reverse image searches, and checking metadata (though metadata checks are rarely fruitful!). A frame-extraction and metadata sketch follows at the end of this section.
- Hive and Sensity — run media through deepfake detection models and return a probability score.
- Undetectable.ai, HiveModeration — web-based tools that flag synthetic image/video generation patterns.
- PimEyes — facial recognition search engine.
Keep in mind: these tools aren't perfect, and they can lag behind new models. Tool-based results should be treated as indicators, not verdicts, and should always be combined with critical thinking and careful analysis.
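For those who prefer the command line over InVID's browser interface, a rough equivalent of its frame-and-metadata workflow can be scripted with OpenCV and ffprobe. The file name is a placeholder and the two-second sampling interval is an arbitrary choice; platform re-encoding usually strips the interesting metadata, which is why that step is rarely fruitful.

```python
# Minimal sketch of an InVID-style workflow on the command line: extract
# frames for reverse image searching and dump container metadata with ffprobe.
# "suspect.mp4" is a placeholder; requires OpenCV and a local ffprobe install.
import json
import subprocess
import cv2

VIDEO = "suspect.mp4"

# 1. Extract a frame every ~2 seconds for manual reverse image searches.
cap = cv2.VideoCapture(VIDEO)
fps = cap.get(cv2.CAP_PROP_FPS) or 30
step = int(fps * 2)
idx = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if idx % step == 0:
        cv2.imwrite(f"frame_{idx:05d}.png", frame)
        saved += 1
    idx += 1
cap.release()
print(f"Saved {saved} frames for reverse image searching")

# 2. Dump container/stream metadata as JSON (often stripped by platforms).
probe = subprocess.run(
    ["ffprobe", "-v", "quiet", "-print_format", "json",
     "-show_format", "-show_streams", VIDEO],
    capture_output=True, text=True, check=True,
)
metadata = json.loads(probe.stdout)
print(json.dumps(metadata.get("format", {}), indent=2))
```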
Human Techniques Still Matter
Technology helps — but good old OSINT reasoning is still your best defense.
- Start with the source. Who posted it first? When? Does the account look organic or recently made?
- Cross-check the context. If someone claims “mass surrender near Pokrovsk,” are there any credible reports, social chatter, or official statements confirming or denying that? Pivot and find alternative ways to document the incident’s context. If you specialize in armed conflicts, particularly the war in Ukraine, this may be easier to tell right away.
- Compare visuals. Do backgrounds, insignia, and uniforms match known Ukrainian units? Are landmarks consistent with the claimed location? They may be “similar” or “good enough” but are they really a perfect match? With AI getting better, diligence requires an extra step.
- Get acquainted. Becoming familiar with how AI-generated videos look—and the strengths, weaknesses, and quirks of each style (Sora, Veo, etc.)—is key. The more exposure you have, the faster you’ll spot subtle giveaways that casual viewers might miss.
Even without specialized AI tools, basic context and critical thinking can expose synthetic media fast.
Why It Matters
Deepfakes like these aren't random. They're a digital form of PSYOPS, designed to erode trust in Ukraine's resilience, demoralize Ukrainian soldiers and the public, and influence Western audiences who might see them in their feeds without questioning their authenticity.
They also illustrate the next phase of the information war: a world where forged video evidence is becoming increasingly difficult to verify, and for that very reason, something we need to investigate and doubt even more.
What We Can Do
We can’t stop deepfakes from appearing, but we can make them less effective. That means:
- Teaching media literacy and OSINT workflows that emphasize verification over virality.
- Using available tools (even if imperfect).
- Keeping a healthy skepticism — especially when the content is emotionally charged or confirms what we already want to believe.
In the era of AI-powered propaganda, the old OSINT rule still holds true: online tools can help, but our strongest instruments remain our own mind and eyes.
by Vlad Sutea
Founder and Lead OSINT Trainer