2023.09.20

Making synthetic image detection practical


Detecting whether an image posted on the Internet is authentic or generated by one of the recently released generative AI models is a daily challenge for journalists and fact-checkers. While most people are familiar with tools such as Midjourney, DALL-E 2 and Stable Diffusion, a growing number of tools, services and apps now make synthetic image generation extremely accessible, enabling anyone to create highly realistic images from plain text descriptions, widely known as prompts. It is only natural that this capability can be exploited by malicious actors to spread disinformation. Therefore, having capable tools in place to detect whether a suspicious image is AI-generated holds real value for media organisations and newsrooms.

Such detection tools are also abundant, with the large majority based on “deep learning models” – very large neural networks trained to distinguish between authentic and synthetic media. In academic papers, these tools have often been shown to perform exceptionally well at separating authentic from synthetic imagery. However, deploying them in operational settings presents several challenges.
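Concretely, such a detector is usually an image classifier with a binary output. The following is a minimal, illustrative PyTorch sketch – not the code of any specific tool – of a typical setup: a pretrained backbone with a single-logit head, trained with binary cross-entropy on labelled authentic/synthetic images.

```python
import torch
import torch.nn as nn
from torchvision import models

# A typical synthetic-image detector: a pretrained CNN backbone
# with a single-logit binary head (authentic vs. synthetic).
class SyntheticImageDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

    def forward(self, x):
        return self.backbone(x)  # raw logit; sigmoid gives P(synthetic)

model = SyntheticImageDetector()
criterion = nn.BCEWithLogitsLoss()  # labels: 0 = authentic, 1 = synthetic
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# One illustrative training step on a batch of images and labels.
def train_step(images, labels):
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = criterion(logits, labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```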

A primary challenge is these models’ tendency to perform well only on the restricted set of cases (referred to as a “domain”) used for their training. Consider a scenario where a researcher trained the model primarily on synthetic and authentic images of human faces. If a journalist then uses this model to check whether an image depicting a building is synthetic, the model is likely to give an unreliable response due to the domain mismatch between training (human faces) and testing (buildings). The Multimedia Knowledge and Social Media Analytics Lab (MKLab) at CERTH has recently developed a method to alleviate this issue. The method achieves better generalisation across different domains by training the detection model solely on high-quality synthetic images, which compels the model to “learn” quality-related artifacts instead of content-related cues. The method was presented at the international workshop on Multimedia AI against Disinformation (MAD’23) in Thessaloniki, Greece, and a paper providing technical details is available as part of the workshop proceedings.
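To illustrate the idea only – the paper’s actual quality assessment is more sophisticated – the sketch below filters a pool of synthetic training images with a no-reference quality score, here crudely approximated by a sharpness measure, keeping only high-quality generations for training. Both `quality_score` and the threshold are illustrative stand-ins, not the authors’ pipeline.

```python
import numpy as np

def quality_score(image: np.ndarray) -> float:
    # Crude stand-in for a real no-reference image-quality model
    # (an assumption, not the paper's metric): variance of a
    # Laplacian approximation; higher means sharper.
    gray = image.mean(axis=2)  # expects an H x W x 3 float array
    lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
           + np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4 * gray)
    return float(lap.var())

def curate_training_set(synthetic_images, threshold=100.0):
    # Discard low-quality generations. The surviving images span many
    # content domains, so the detector is pushed toward generation
    # artifacts rather than content cues (faces, buildings, ...).
    return [img for img in synthetic_images if quality_score(img) >= threshold]
```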

A second challenge when employing synthetic image detection models in practice is that most, if not all, available tools take the form of web services that send the provided images to a server for analysis. This is often due to the computational intensity of the detection models, which require a powerful server for quick calculations. However, there are situations where journalists, fact-checkers, or citizens might be uncomfortable or at risk when sharing suspicious images with third-party services. To address this challenge, the MKLab team at CERTH leveraged a newly proposed method to “compress” detection models into a much smaller size, enabling execution on a standard smartphone. This allows deepfake detection analysis without submitting the suspicious image to a third party. The compression relies on “knowledge distillation”, where a computationally expensive model acts as a “teacher” to train a lighter model (the “student”). In experiments, the model size could be halved while maintaining nearly the same detection accuracy, and even a 10-fold reduction was possible with only a slight decrease in accuracy. The method behind these results has been submitted for publication in an international journal, and a preprint is publicly available.
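For readers curious about the mechanics, knowledge distillation typically trains the student on a weighted mix of two objectives: matching the teacher’s temperature-softened output distribution and fitting the ground-truth labels. The sketch below shows the classic Hinton-style formulation in PyTorch; it is a generic illustration, and the exact loss in the preprint may differ. The function names, the temperature and the weighting `alpha` are illustrative choices.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    # Soft targets: match the teacher's temperature-softened distribution
    # (classic Hinton-style distillation; a generic sketch, not
    # necessarily the preprint's exact loss).
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    # Hard targets: standard cross-entropy on the authentic/synthetic labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

def distill_step(student, teacher, optimizer, images, labels):
    teacher.eval()
    with torch.no_grad():               # the teacher is frozen
        teacher_logits = teacher(images)
    student_logits = student(images)
    loss = distillation_loss(student_logits, teacher_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A higher temperature smooths the teacher’s probabilities and exposes the information hidden in its near-decisions; the `temperature ** 2` factor is the conventional scaling that keeps the gradients of the two terms comparable.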

It’s important to note a key limitation of the above results: both focus on detecting GAN-generated images (a well-known example being the faces produced by https://thispersondoesnotexist.com/). Reliably detecting images produced by diffusion-based models such as DALL-E 2, Stable Diffusion, and Midjourney is not yet feasible with these tools, although ongoing experiments show promise in developing tools that could enhance journalists’ and fact-checkers’ capabilities in countering disinformation.

Author: Akis Papadopoulos (CERTH)
