I Fed AI Videos Into a Deepfake Detector to See How Well It Can Identify Frauds

All of these images are AI-generated. But did the deepfake detection tool label them as such?

A security firm has released a deepfake detection tool intended to help determine whether a piece of content is AI-generated.

With so much fake media flooding the internet, there is a real need for a detection tool that can accurately determine whether something is synthetic. So far, no such tool exists.

Step forward CloudSEK, an AI-based cybersecurity company that has launched its Deepfake Analyzer tool. CloudSEK’s detection tool assesses the authenticity of video frames, searching for inconsistencies that could indicate that AI tools were used.

CloudSEK appears to have put some effort into the tool, so I put it to the test to see if it could do a good job. If the detector gives a score of 70 percent or above, the content is likely AI-generated; 40 to 70 percent is dubious; anything below 40 percent is likely human-made.

First off, I tried a recent AI viral phenomenon: a polar bear cub “being rescued” by fishermen in the Arctic.

A man in an orange and black snowsuit holds a polar bear cub on a snowy deck. Nearby, two people in similar outfits lean over a ship railing, observing a polar bear below. The scene is set against an icy ocean backdrop.

Screenshot of a webpage showing a video classification summary. The page indicates a 57% probability that the video is AI-generated. A probability breakdown bar and a text summary are also visible on the page.
The deepfake detection tool gave the AI polar bear video a score of 57 percent, accurately casting doubt on the video’s authenticity.

Next up, I fed another AI sensation from this week: Coca-Cola’s recreation of its iconic Christmas ad. The soft drinks giant controversially commissioned an AI reimagining of its festive ad featuring big trucks and Santa Claus. But could CloudSEK’s tool tell if it was AI?

Two smiling individuals wearing red knit hats and scarves stand close together in front of a brightly lit Christmas tree. Snowflakes are gently falling around them, creating a festive and joyful atmosphere.
A screengrab from the ad. The humans are based on real actors.
Screenshot of Deepfake Analyzer page for a video with a Coca-Cola logo. It shows a 35% generated probability and details about manipulated, synthesized, and original content. A video summary and request for user feedback are also present.
The deepfake tool guessed that the ad was likely human-made. This is probably because the entirely AI-generated video was made professionally, and even the AI characters are based on real actors.

Finally, I fed an AI video of a cook, generated by OpenAI’s Sora, into the deepfake detector to see how it fared.

A screenshot of a video analysis interface showing a 43% generated probability for a video titled "Delicious Mixed." Probability breakdown and video summary sections are visible, focusing on food preparation. A thumbnail-not-available note is shown.
Although the tool was less certain here than with the polar bear video, it still placed the clip in the dubious range, correctly casting doubt on its authenticity.

Conclusion

Overall, CloudSEK’s deepfake detector performed admirably, albeit in a brief test. The Coca-Cola ad was always going to be a tough one, given it was made by one of the biggest companies on the planet. But casting doubt on the other two is a job well done.

Perhaps this could be a useful tool for the less media-savvy among us.
