Deep Fake Advice

Deep fakes are increasingly sophisticated digital manipulations that can convincingly impersonate individuals or create misleading content. To stop the spread and impact of deep fakes, it is essential to verify the authenticity of messages, images, and videos before trusting them. Employing advanced detection tools, such as AI-driven deep fake detectors, and cross-referencing content with trusted sources can help identify fraudulent material. Additionally, raising awareness and educating employees about the risks and signs of deep fakes is crucial for maintaining cybersecurity.

  • Is the sender someone I recognize or expect to contact me?
  • Does the message contain any suspicious links or attachments?
  • Are there spelling or grammatical errors that seem unusual for the sender?
  • Is the message asking for personal information, passwords, or financial details?
  • Does the message create a sense of urgency or pressure me to act quickly?
  • Can I verify the sender’s identity through another trusted method?
  • Does the message reference information that only legitimate contacts would know?
  • Is the email address or phone number consistent with previous legitimate communications?

Asking these questions can help you determine whether a message is genuine or potentially fraudulent.
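As a rough illustration, the checklist above can be sketched as a simple heuristic scorer. This is a hypothetical example: the keyword lists, scoring, and function name are assumptions for illustration, not a vetted detection method.

```python
import re

# Hypothetical red-flag keywords drawn from the checklist above.
URGENCY_WORDS = {"urgent", "immediately", "act fast", "asap"}
SENSITIVE_WORDS = {"password", "ssn", "bank account", "wire transfer", "gift card"}

def red_flag_score(message: str, sender: str, known_senders: set[str]) -> int:
    """Count simple red flags in a message; a higher score means more suspicious.

    This is a naive substring-based sketch -- a real screening tool would
    need far more robust parsing and sender verification.
    """
    text = message.lower()
    score = 0
    if sender.lower() not in known_senders:       # unrecognized or unexpected sender
        score += 1
    if re.search(r"https?://", text):             # message contains a link
        score += 1
    if any(w in text for w in URGENCY_WORDS):     # pressure to act quickly
        score += 1
    if any(w in text for w in SENSITIVE_WORDS):   # asks for credentials or money
        score += 1
    return score

msg = "URGENT: wire transfer needed immediately, see http://example.com"
print(red_flag_score(msg, "stranger@example.com", {"boss@company.com"}))
```

A real screening pipeline would weigh these signals and verify the sender's identity out of band, as recommended above; this sketch only counts simple red flags.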

Experts recommend maintaining a cautious and analytical mindset when encountering unexpected or unusual digital communications.  It is wise to avoid clicking on links or downloading attachments until you have double-checked the sender’s credentials and the context of the message.  If anything feels off, independently reach out to the sender using previously established contact information rather than replying directly to the suspicious message.[1]

Additionally, experts stress the importance of keeping your software and security tools updated to guard against new deep fake techniques. Participating in regular cybersecurity training and staying informed about the latest trends in digital impersonation can further strengthen your defenses against deep fake threats.

The issue of AI-generated media is emerging so fast that experts cited in a report by the World Economic Forum estimate that by 2026, 90% of the content online could be AI-generated—and the tech is getting so good that the human eye alone can’t tell what’s synthetic anymore.

A new startup company called TruthScan is dedicated to combating the increasing flood of harmful AI-generated photos and videos on the internet. Its software allows users to upload images, videos, audio, or text to detect whether the content was created by artificial intelligence. “People want a way to verify whether the media they are seeing or receiving is authentic or AI-generated, and that’s exactly where TruthScan comes in to help,” says Christian Perry, CEO of TruthScan.

In the past, making deepfakes, a term created by a Reddit user in 2017, was costly and time-consuming.  Now, however, with advancements in AI video and image generation, anyone can create realistic deepfakes using publicly available AI tools.

The emerging damage of deepfakes - On 15 September 2025, Reuters reported that international gangs were conducting fraud operations by using AI to make online romance scams more believable. Last month, there were reports of foreign hackers using AI to generate fake military ID cards.

AI-related attacks and fraud are on the rise. Insurance companies currently use online portals for customers to submit claims; typically, an insurer requires a person to upload images of damage to their home or vehicle before paying out. The problem is that advances in AI-generated media let fraudsters exploit these systems by uploading generated images so convincing that the insurance company may be left none the wiser, paying out tens of thousands of dollars on a fraudulent claim. AI fraud has already reached over $200 million in losses this year alone.

Impersonation scams have always existed, but with easily accessible deepfake tools, such scams are becoming far more prevalent and harder to spot. “Scammers using deepfakes have been able to pull off impersonation attempts against executives at hundred-million-dollar companies,” explains the TruthScan CEO. “If executives at companies making millions of dollars can be duped, then anyone can.”

One recent scam involved an executive of an international firm being defrauded out of millions of dollars by a scammer using deepfake technology. Unlike other companies offering AI image and video detection, TruthScan has detection tools for both enterprise companies and the general public. If you come across a piece of media online and want to know whether it is AI-generated, you can run it through an analysis using TruthScan’s AI image detector.

TruthScan exists as a solution tailored toward multiple use cases and scenarios.  “If you’re conducting fraud investigations, we have a detection model for that.  If you’re just an everyday person trying to determine whether something you just saw online is real or fake, we have a general model for that too, that anyone can access.”

Another aspect that makes TruthScan interesting is the extensive research and resources that the company has in adversarial AI.  The team behind TruthScan’s deepfake detection software has previously conducted numerous case studies to bypass detection technology and understand the vulnerabilities in AI detection tools.  “Now what we are doing is taking everything we’ve learned about how detection tools are exploited, and feeding that data into our models to make them extremely difficult to bypass,” explains Perry.

TruthScan is connected to millions of training datapoints that allow the model to accurately detect whether a given photo or video is real or an AI fake.  Current internal benchmarks from TruthScan demonstrate 97% consistent accuracy in AI content analysis.

How to detect AI-generated images - To detect AI-generated images, examine visual inconsistencies such as anomalies in body parts (like extra or missing fingers, distorted faces, or unnatural proportions). Scrutinize fine details like hair, fur, or repetitive patterns that may appear unnatural, and inspect for illogical lighting, shadows, or odd background elements. For more reliable verification, use reverse image search tools like Google Images or TinEye to trace origins, or use specialized AI image detectors like TruthScan, which scans for artificial generation artifacts with high precision.
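Beyond visual inspection, one cheap automated first pass is checking for camera metadata. This is a hypothetical heuristic added for illustration (not TruthScan's method): camera photos normally embed an EXIF segment, while many AI-generated or heavily re-encoded images do not.

```python
def jpeg_has_exif(path: str) -> bool:
    """Weak heuristic: report whether a JPEG contains an EXIF (APP1) segment.

    Camera photos usually embed EXIF metadata; many AI-generated or
    re-encoded images lack it. Absence is only a hint, never proof --
    editing tools and social-media recompression also strip EXIF.
    """
    with open(path, "rb") as f:
        head = f.read(64 * 1024)  # EXIF sits near the start of the file
    # The EXIF APP1 segment begins with the identifier "Exif" + two null bytes.
    return b"Exif\x00\x00" in head
```

Pixel-level detectors like those described above remain necessary; a metadata check like this only flags candidates for closer analysis.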

One of the issues with detecting AI-generated content without analysis tools is that as generative AI improves, its output becomes harder for the human eye to spot. Luckily, thanks to companies like TruthScan, detection technology is evolving to make AI deepfakes easier to identify and stop before more damage is done.

This article is shared at no charge for educational and informational purposes only.

Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization. We provide indicators of compromise information via a notification service (RedXray) or an analysis service (CTAC). For questions, comments, or assistance, please contact the office directly at 1-844-492-7225 or feedback@redskyalliance.com.

Weekly Cyber Intelligence Briefings:

REDSHORTS - Weekly Cyber Intelligence Briefings

https://register.gotowebinar.com/register/5207428251321676122

[1] https://www.usatoday.com/story/special/contributor-content/2025/11/03/truthscan-helps-detect-deepfake-content-and-ai-fakes/87064532007/
