Ben Zhao, a professor at the University of Chicago, and four of his students have formed a team to create a smart program that will prevent the creation of fake photos with AI tools. The Glaze program “infects” images in such a way that other intelligent programs cannot use them to “learn” how to generate other photos, writes Portalb.mk.
The photo is used with permission from the authors

A professor and his four students at the University of Chicago have made it possible for artists to protect their digital works from being used without permission by smart programs like DALL-E, Midjourney, and Stable Diffusion. They invite any programmer who wants to help to join the team or contribute, so that in the future we can also protect the images we post on social networks, which can currently be used without permission to create fake photos or videos, writes Portalb.mk.

Smart programs like DeepAI and DALL-E, which generate images, and Synthesia and Colossyan, among the first to generate realistic videos, have understandably instilled fear in the general public: fear that the photos and videos we share on social networks could be used to generate fake photos and videos.

Last year, Ben Zhao, a professor at the University of Chicago, along with his four students, Shawn Shan, Wenxin Ding, Josephine Passananti, and Haitao Zheng, formed a team to create a smart program that would prevent the creation of fake photos using artificial intelligence (AI) tools.

Photo 1: Original digital photo by Tamara Menzi on Unsplash. Photo 2: Glazed digital photo by Tamara Menzi on Unsplash.
Photo 1: Blue and brown bird on a brown tree trunk. Photo by Boris Smorkovic on Unsplash. Original photo.
Photo 2: Blue and brown bird on a brown tree trunk. Photo by Boris Smorkovic on Unsplash. Glazed photo.

The program in question, Glaze, has been available to the public since March 2023. It "glazes" images so that other intelligent programs cannot use them to "learn" how to generate other photos. To the human eye, the original and the "glazed" photo look the same.

Glaze cannot yet be used for every type of photo because it does not work well on images with simple compositions, such as a photo with a single subject against a solid background. Such an image receives the same level of protection, but after "glazing" the difference from the original is visible to the naked eye.

Zhao's team is constantly working on the program, but ultimately it is all volunteer work by a small team, so not much can be accomplished in a short time. Because of this, their website asks for voluntary help from donors and other developers.

“It’s important for us to not only continue to offer Glaze for free, but also to expand its protective capabilities,” Zhao writes for those interested in helping.

 

Just 50 “glazed” photos are enough to confuse smart photo-generating programs

In their latest research, Zhao's team trained an image-generating AI model based on Stable Diffusion. Initially they fed its training database only original photos; in a second cycle they added 50 "glazed" photos, then another 100, and finally 300 more.

The Stable Diffusion model trained on original photos worked well, but with only 50 "glazed" images it started generating other things: a deformed dog when asked for a dog, an object resembling a metal suitcase when asked for a handbag, a semi-abstract image when asked for a science-fiction scene, and so on. With 300 "glazed" photos in the database, the generated results no longer match the prompts at all.

The results of the artificial intelligence models’ attempts to generate images, classified by the number of “glazed” images they had in the database. Photo used with permission from the authors
Image-generating programs are typically trained on photos taken from the Internet without permission. AI companies like OpenAI, Meta, Google, and Stability AI have faced numerous lawsuits from artists who say their digital works were taken without permission or compensation, but there is still no legal resolution to the matter.

According to the volunteer team, this problem could be solved more easily with programs like Glaze: a "glazed" photo cannot be distinguished from the original with the naked eye, and if the original does not exist online for comparison, not even software can tell them apart. A small number of "glazed" photos on the Internet would confuse every artificial-intelligence model trained on them, and since it is impossible to tell whether a photo is original or "glazed," AI companies would have to delete their entire database and start from scratch.

 

In October 2023, the same team released Nightshade, a plugin that works with Glaze to mask the style of digital artwork. If enough images on the Internet have been "infected" with Glaze and then masked with Nightshade, smart photo-generating programs will not only fail to generate matching photos, they will also be unable to copy the artwork's style. For example, they will not only generate a suitcase instead of a handbag, but even when a real handbag is requested, they will produce a pencil-drawn suitcase.

 

For AI models, the “glazed” image shows something completely different

If we compare the images at the pixel level, we can see that Glaze/Nightshade has altered many pixels across the entire surface of the image.

The pixel differences between Tamara Menzi’s original work (left) and the “glazed” one (right). Illustration from Portalb.mk

Up close, the eye can see some of the difference, but to an intelligent program the two images are completely different from each other.
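As a rough illustration of such a pixel-level comparison (a toy sketch using small synthetic arrays, not the actual tooling used by the researchers), one can count how many pixel positions differ between the two versions of an image:

```python
import numpy as np

# Hypothetical illustration: measuring how many pixels differ between an
# original image and a "glazed" copy. Real images would be loaded from
# files (e.g. with PIL); here tiny synthetic arrays keep the sketch
# self-contained.

def changed_pixel_fraction(original, glazed):
    """Fraction of pixel positions where any colour channel differs."""
    diff = np.any(original != glazed, axis=-1)
    return float(diff.mean())

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(4, 4, 3), dtype=np.uint8)

# Perturb the top half of the "glazed" copy by a barely visible +3 shift.
glazed = original.copy()
glazed[:2] = ((glazed[:2].astype(int) + 3) % 256).astype(np.uint8)

print(changed_pixel_fraction(original, glazed))  # 0.5
```

On a real glazed image, a map of these per-pixel differences is exactly the kind of illustration shown above: the perturbation is spread across the whole surface rather than confined to one spot.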

“While human eyes see an image that is largely indistinguishable from the original, an AI model sees a completely different composition of the image. For example, human eyes may see an image of a cow in a largely unchanged green field, but an AI model may see a large leather bag lying on the grass,” Zhao says.

Zhao's explanation may seem counterintuitive, but artificial-intelligence models do not analyze the image as a whole; they divide it into blocks and analyze the pixels within each block.

For example, in the aforementioned piece by Menzi, a section is marked, which an AI model traverses in blocks of 36 pixels each. For the first block (inside the green square), the model might conclude that all the pixels are more or less the same color, while in the second block it might find that the pixels gradually transition from black to blue from one corner to the other. It might also detect a slanted "line" in this block separating the black part from the blue.

How Artificial Intelligence analyzes an image. Illustration from Portalb.mk
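The block-by-block analysis described above can be caricatured in a few lines of Python. The 6-pixel block size and the mean/gradient statistics are assumptions made for this example; real models learn far richer features than these:

```python
import numpy as np

def block_features(image, block=6):
    """Split a grayscale image into block x block tiles and compute, for
    each tile, a mean brightness and a crude gradient direction -- a toy
    stand-in for the local features a vision model extracts."""
    h, w = image.shape
    features = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tile = image[y:y + block, x:x + block].astype(float)
            mean = float(tile.mean())  # average brightness of the tile
            # Horizontal gradient: right half minus left half.
            gx = float(tile[:, block // 2:].mean() - tile[:, :block // 2].mean())
            # Vertical gradient: bottom half minus top half.
            gy = float(tile[block // 2:, :].mean() - tile[:block // 2, :].mean())
            features[(y, x)] = (mean, gx, gy)
    return features

# A 12x12 toy image: left half black, right half white.
img = np.zeros((12, 12), dtype=np.uint8)
img[:, 6:] = 255

feats = block_features(img)
print(feats[(0, 0)])  # (0.0, 0.0, 0.0): uniform black tile, no gradient
```

A model trained on many images accumulates statistics like these per block, which is why scrambling pixels inside blocks, while preserving the image's overall appearance, can mislead it.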

If this model is given enough images of handbags, for example, it will compare which blocks have similar features and thus retain some general characteristics of handbags, such as curved arcs (the straps), or round and rectangular shapes spanning a certain number of same-colored blocks (the clasp that closes the handbag), etc.

Glaze/Nightshade swaps two blocks that have pixels of similar shades and then randomly rearranges the pixels within each block. Since the image looks the same from a distance but the pixel arrangement up close is distorted, the AI model "sees" something different from what the human eye sees.
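Taking this simplified description literally, it could be sketched as follows. This is a toy caricature of the article's explanation, not Glaze's actual algorithm, which computes small, carefully optimized perturbations rather than visible swaps and shuffles:

```python
import numpy as np

def toy_shuffle(image, block=6, seed=0):
    """Toy rendering of the simplified description: swap the two tiles
    with the most similar average brightness, then shuffle the pixel
    positions inside each swapped tile. NOT the real Glaze algorithm."""
    rng = np.random.default_rng(seed)
    out = image.copy()
    h, w = image.shape
    tiles = [(y, x) for y in range(0, h - block + 1, block)
                    for x in range(0, w - block + 1, block)]
    means = {t: image[t[0]:t[0] + block, t[1]:t[1] + block].mean()
             for t in tiles}
    # Choose the pair of distinct tiles with the closest mean brightness.
    (y1, x1), (y2, x2) = min(
        ((a, b) for i, a in enumerate(tiles) for b in tiles[i + 1:]),
        key=lambda pair: abs(means[pair[0]] - means[pair[1]]))
    # Swap the two tiles.
    tmp = out[y1:y1 + block, x1:x1 + block].copy()
    out[y1:y1 + block, x1:x1 + block] = out[y2:y2 + block, x2:x2 + block]
    out[y2:y2 + block, x2:x2 + block] = tmp
    # Shuffle pixel positions inside each swapped tile.
    for (y, x) in [(y1, x1), (y2, x2)]:
        flat = out[y:y + block, x:x + block].copy().ravel()
        rng.shuffle(flat)
        out[y:y + block, x:x + block] = flat.reshape(block, block)
    return out

rng0 = np.random.default_rng(1)
img = rng0.integers(0, 256, size=(12, 12), dtype=np.uint8)
shuffled = toy_shuffle(img)

# Every pixel value is preserved; only their positions change:
print(np.array_equal(np.sort(img.ravel()), np.sort(shuffled.ravel())))  # True
```

The property this toy version shares with the article's description is that the overall set of pixel values is untouched while their local arrangement, which block-based models rely on, is scrambled.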

 

Smart copyright-protection software is in a race against time

Glaze and Nightshade, like other smart software that protects copyright in its own way, will not be a permanent solution, says Nebojsa Gelevski, graphic designer and founder of the company "KOMA."

"The challenges that come with the rapid development of artificial intelligence require solutions that are equally fast. These programs seem to offer copyright protection against illegal downloading and entry into the databases of AI image generators, but somehow I am convinced that this will not be a permanent solution. I expect that artificial intelligence tools will be reprogrammed to avoid these protections," he emphasized.

Gelevski is of the opinion that we also need a legal solution.

"The European Union has adopted a draft law (the AI Act), the first such law at a global level, which aims to give creators of artificial intelligence, as well as those who implement and use artificial intelligence applications, clear requirements and obligations regarding specific uses of artificial intelligence. Therefore, I believe that legal solutions will be created for this problem as well, perhaps bans preventing image generators from keeping images in their databases whose copyright has not been settled. This may also mean a need for regulatory bodies or similar institutions able to conduct checks on their databases, which will themselves need artificial intelligence tools to examine databases with billions of images. Everything is developing so quickly that it is difficult to predict in which direction things will move," says Gelevski.

Regarding the tendency to use artificial intelligence in the fight against disinformation, Gelevski added that this is not unexpected: manipulation and disinformation are part of human nature, and technology has only provided a newer, more sophisticated tool. He thinks we will need a combination of tools, not just artificial intelligence, to protect ourselves.

"I am not sure that we will be able to completely protect ourselves from becoming victims of someone's manipulation or disinformation. From what I have read, experts believe that different approaches should be found: media and digital literacy, so that people understand what disinformation and manipulation are; cross-sectoral cooperation among the various actors fighting disinformation; greater transparency of institutions; and so on, but not the adoption of legal solutions, in order to avoid the risk of endangering freedom of speech," says Gelevski.

 

Artificial intelligence, friend and foe in the fight against disinformation

The use of artificial intelligence to detect disinformation is also being discussed in North Macedonia. At the end of February, a webinar was organized on the issue, with journalist Bojan Shashevski, photojournalist Zvonko Plavevski, and lawyer Valentin Pepeljugoski as invited guests.

The guests debated how much artificial intelligence actually harms and how much it helps in disinformation and the fight against disinformation.

Shashevski said that artificial intelligence could be useful for creating "stock" photos: for example, if an announcement about a workshop is needed, a photo with tables and participants could be generated at the journalist's request.

“But artificial intelligence cannot replace journalists who go into the field to ask people and take statements, who conduct surveys and prepare reports,” stressed Shashevski.

Photojournalist Plavevski expressed his concern about how fake content created by artificial intelligence will affect the already shaken trust in media outlets. Plavevski highlighted the example of an award-winning photo that was actually created by artificial intelligence.

However, Plavevski believes that although artificial intelligence is useful for generating “stock” images, it cannot replace a photojournalist who captures a real event on the spot.

Plavevski welcomed the fact that certain companies, such as Meta, the owner of Facebook and Instagram, have announced that they will implement a policy of watermarking photos created with artificial intelligence, which will indicate exactly which software a particular photo was created with.

Lawyer Valentin Pepeljugoski outlined the legal aspect and necessary regulations as a reaction to the massive presence of artificial intelligence technology in the media, but also beyond.

He welcomed the fact that regulations for labeling content created with artificial intelligence have already been implemented in the European Union. However, Pepeljugoski stressed that from a broader perspective, regulations are lacking in the media and in relation to cybercrime.

According to the editor-in-chief of Portalb.mk, Elida Zylbeari, media outlets may be the ones who need intelligent software the most in the fight against information manipulation. She says this is a very important topic that needs to be discussed and researched more widely in our society.

“We need to start using smart algorithms and artificial intelligence techniques that can identify manipulations and intervene in time to prevent the spread of disinformation on social networks and the Internet in general,” she says.

However, Zylbeari adds that intelligent software is not yet at the level to combat disinformation in the media without supervision.

“It is important to understand that software is not the only solution to combat manipulation and disinformation. Software must be able to distinguish the context, language and culture of a particular country, which has not yet been achieved, and therefore has difficulty recognizing spin, forms of disinformation, harmful photos, and narratives. Some manipulations can be even more sophisticated and different from the algorithms that are already known, which poses a challenge for software to detect and prevent such cases. Therefore, regardless of its potential, software must be used in collaboration with people and other strategies to combat manipulation and disinformation. An integrated approach, where technology is used to support human work and vice versa, can be more effective in protecting accurate information and building a healthier information environment for everyone,” says Zylbeari.

Because artificial intelligence has developed much faster than we have had time to adapt to it, the general public has become afraid that artificial intelligence will "overtake" us: we do not know whether we have the skills to prevent an intelligent program from lying to us, deceiving us, and ultimately surpassing us in every respect. However, the work of Zhao, Shan, Ding, Passananti, and Zheng reminds us that the future is not a war between humans and artificial intelligence, but a war of humans with artificial intelligence against humans with artificial intelligence. We should not forget that artificial intelligence is not, in principle, a being, but a tool.

Link to the original text: https://meta.mk/vi-nasproti-vi-veshtachkata-inteligencija-prijatel-i-neprijatel-vo-borbata-protiv-dezinformacii/
