The purpose of the articles posted on this blog is to share knowledge and current events concerning ecology and biodiversity conservation and protection, as biodiversity is humanity's security. Remember, these are meant to be conversation starters, not mere broadcasts :) so I kindly request that you share your comments and thoughts on the blog version of Focus on Arts and Ecology (all its past, present, and future posts).


How can you tell if a photo is AI generated? Here are some tips.

It takes 13 milliseconds to process a picture. When you take more time to look, what you find may surprise you. 

 APRIL 20, 2023

This image shows a cheetah in Namibia. Can you tell if it was produced using AI?
PHOTOGRAPH BY FRANS LANTING, NAT GEO IMAGE COLLECTION

Every day, fake pictures are getting more realistic. Today, anyone can access a web-based program like Midjourney or DALL·E and create artificial or manipulated images without much effort.

The good news is that humans have a natural instinct for sniffing them out, according to Siwei Lyu, professor of computer science and engineering at the University at Buffalo. Lyu belongs to a group of researchers battling AI with AI—they’ve found the best way to teach an AI to find synthetic images is to show it how humans do it.

We’ve been dealing with falsified images for a very long time. Image manipulation has been around just about as long as photography itself. Take, for example, this photo from 1860 with Abraham Lincoln’s head attached to another man’s body—it took painstaking work and skill to make it convincing.

What’s changed is how easy it is for someone without expertise to create something that appears genuine—resulting in an intimidating volume of synthetic images. But Lyu urges us not to panic. Here’s how to use our natural instincts to find things that aren’t quite right—and how to keep up with the lightning speed of AI advancement.

Left: This is a real photograph of a cheetah that photographer Frans Lanting took in Namibia—just like the one at the top of this story. Were you able to guess correctly?
PHOTOGRAPH BY FRANS LANTING, NAT GEO IMAGE COLLECTION
Right: This AI-generated image was produced with DALL·E 2 using the prompt “a National Geographic style profile photograph of a cheetah in Africa.” AI tends to have difficulty creating eyes that look real, and it also struggles with the physics of light, including reflections.
IMAGE BY NG STAFF

To catch a fake

The first step is to slow down. We are inundated with media all day, and we need as little as 13 milliseconds to process each image. That may be enough to register what an image is, but not enough time to consider whether it’s real. An image surprises you when it contradicts what you know to be true, so don’t ignore that instinct.

“Next time we see something interesting or funny, hopefully we’ll pause a little bit and think about it,” Lyu says. “If we suspect anything that's fishy, we don’t retweet immediately—so we stop the problem at the door instead of being part of the problem.”

(Why is AI so creepy?)

AI programs are trained to create realistic images by looking at a huge volume of real ones. What Lyu calls their “Achilles’ heel” is that these programs only know what they’ve been given—and they don’t know which details to pay attention to. This results in “artifacts,” or problems with the image that become obvious on closer inspection. For example, people in deepfake videos rarely blink, because the AI is often trained on images of people with their eyes open.
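To make that blinking cue concrete, here is a minimal sketch (not Lyu's actual detector) of how a blink check is commonly coded: the "eye aspect ratio" (EAR) computed from eye landmarks drops sharply when an eye closes, so a clip whose EAR never dips below a threshold is suspicious. Landmark extraction (e.g., with a face-landmark library) is assumed to happen elsewhere; the six-point eye layout and the 0.2 threshold are illustrative assumptions.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) landmark coordinates -- two corners, then two lid pairs."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical lid distance, first pair
    v2 = np.linalg.norm(eye[2] - eye[4])  # vertical lid distance, second pair
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal eye width
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_per_frame, closed_thresh=0.2):  # threshold is an assumption
    """Count downward threshold crossings across a video's frames."""
    blinks, closed = 0, False
    for ear in ear_per_frame:
        if ear < closed_thresh and not closed:
            blinks, closed = blinks + 1, True
        elif ear >= closed_thresh:
            closed = False
    return blinks

# Toy demo with made-up landmarks: an open eye scores ~0.33.
open_eye = np.array([[0, 2], [2, 3], [4, 3], [6, 2], [4, 1], [2, 1]], float)
print(round(eye_aspect_ratio(open_eye), 2))
```

On a real clip you would compute the ratio for every frame; several seconds of video with zero blinks is a red flag, though not proof on its own.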

“The telltales often show at the seams,”  says Paulo Ordoveza, a web developer and image verification expert who runs the Twitter account @picpedant, where he debunks fake viral posts. That might include something like a “wrinkled sleeve fading noncommittally to flesh” with no clear distinction between them. He also says to look out for “weird behaviors in strands of hair, glasses, headwear, jewelry, background” for the same reason.

If there is a person in an image, Lyu recommends looking at their hands and eyes. 

Current AI programs aren’t good at producing lifelike hands—they may have six fingers, or fingers that are all the same length or in a strange pose. In March, an AI-generated image of Pope Francis wearing a Balenciaga puffer coat went viral. If you look closely at his hand, he’s holding his coffee by the lid’s tab—a strange way to hold it, even if the cup was empty.

(How AI can tackle climate change.)

And why the eyes? Humans are really sensitive to minute characteristics of the face. Using eye trackers, we can see that people look back and forth between another person’s eyes to gain information. We evolved to do that, according to David Matsumoto, professor of psychology at San Francisco State University and an expert in microexpressions. He says this is how we determine friend from foe and evaluate the emotional state of those we encounter. We need to make these assessments quickly to decide how to respond to them or, if necessary, flee.

Humans almost always have circular pupils, but AI often produces strangely shaped shadows in the center of the eye. Light reflecting off the eyes should also be in the same place on each eye, something that current AI struggles with.
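One rough way to quantify the circular-pupil cue in code: threshold the darkest region of an eye crop and score how circular its outline is, since 4πA/P² equals 1.0 for a perfect circle and falls for ragged shapes. This is a hand-rolled sketch, not a published detector; the intensity threshold and the assumption that the largest dark blob is the pupil are both illustrative.

```python
import cv2
import numpy as np

def pupil_circularity(eye_crop_gray: np.ndarray) -> float:
    """Return 4*pi*area/perimeter^2 of the largest dark blob (1.0 = circle)."""
    _, dark = cv2.threshold(eye_crop_gray, 50, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(dark, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0.0
    pupil = max(contours, key=cv2.contourArea)  # assume biggest dark blob = pupil
    area = cv2.contourArea(pupil)
    perim = cv2.arcLength(pupil, True)
    return 4 * np.pi * area / (perim ** 2) if perim else 0.0

# Synthetic check: a drawn disc scores near 1.0; an oddly shaped pupil scores lower.
canvas = np.full((100, 100), 200, dtype=np.uint8)
cv2.circle(canvas, (50, 50), 20, 0, -1)
print(round(pupil_circularity(canvas), 2))
```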

(The battle for the soul of artificial intelligence.)

Light and shadows in general are tricky for AI to reproduce. Especially if there’s a window or reflective surface in the image, there may be light or shadow where there isn’t supposed to be. This is part of a larger problem AI has with the laws of physics, like gravity.

Many synthetic images also have an unnatural smoothness where there should be texture, and things that should be straight may be slightly curved. In the AI-generated image of the pope, his cross necklace appears to have curved edges, and also hovers slightly over his chest (ignoring gravity).
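That smoothness can also be measured crudely. The variance of the Laplacian is a standard sharpness-and-texture score: regions with real texture (fur, fabric, skin pores) score high, while over-smoothed patches score low. The sketch below is a heuristic, not a reliable detector, and the cutoff in the usage comment is an assumption that would need tuning per image type.

```python
import cv2

def texture_score(path: str) -> float:
    """Variance of the Laplacian: higher means more fine detail/texture."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(path)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

# Hypothetical usage -- flag suspiciously smooth images for a closer look:
# if texture_score("downloaded_photo.jpg") < 100:  # assumed cutoff
#     print("unusually smooth; inspect by hand")
```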

Tools to help us

But all these “tells” of AI-generated media have a downside. Hany Farid, a professor at the University of California, Berkeley, who focuses on media forensics, explains: “Whatever I tell you today, it's not going to work in a month. The reality is the space moves very, very fast. You cannot, as a matter of fact, only rely on your visual system.”

Instead, the more pragmatic method, and one with more longevity, is to be generally suspicious of media, to question its sourcing, and to double-check its veracity, he says.

One easy tool to use is Google’s reverse image search, where users can upload an image and see if there are conversations happening around its creation. This would work for an image that’s been widely circulated like that of the pope, but may not help with more unknown or unique creations.
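If you want to script that step, the sketch below opens a reverse image search in your browser. Note the caveats: searchbyimage is Google's long-standing entry point but not a documented, stable API, and it may redirect to Google Lens, so treat the URL as an assumption; the example image address is hypothetical.

```python
import webbrowser
from urllib.parse import quote

def reverse_search(image_url: str) -> None:
    """Open a Google reverse image search for a publicly hosted image."""
    # Endpoint is undocumented and may change or redirect to Google Lens.
    webbrowser.open("https://www.google.com/searchbyimage?image_url="
                    + quote(image_url, safe=""))

# reverse_search("https://example.com/viral-photo.jpg")  # hypothetical URL
```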

(How AI is changing wildlife research.)

In those situations, companies like Reality Defender offer AI-detection services to businesses for a fee. These companies are conducting “robust research” on methods such as advanced watermarking, according to Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. “These sophisticated techniques are proving to be effective,” she adds.
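Rus doesn't spell out how those watermarks work, so purely as a toy illustration of the general idea (hiding an identifying bit pattern in pixel data so downstream tools can flag a file as machine-generated) here is a least-significant-bit sketch. Real provenance schemes, such as cryptographically signed metadata, are far more robust; this toy mark would not even survive JPEG recompression.

```python
import numpy as np

TAG = "AI-GEN"  # assumed identifying marker

def embed(img: np.ndarray, tag: str = TAG) -> np.ndarray:
    """Hide the tag's bits in the least significant bits of the first pixels."""
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    flat = img.reshape(-1).copy()
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(img.shape)

def extract(img: np.ndarray, n_chars: int = len(TAG)) -> str:
    """Read the hidden tag back out of the LSBs."""
    bits = img.reshape(-1)[:n_chars * 8] & 1
    return np.packbits(bits).tobytes().decode(errors="replace")

pixels = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
print(extract(embed(pixels)))  # -> "AI-GEN"
```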

Farid says AI creators should be obligated to make sure their content bears some sort of watermark or fingerprint to identify it as computer-generated down the line—especially after it’s been shared online. For example, that picture of the pope was originally shared in a community of AI creators, not intending to fool anyone—but it quickly spread to all corners of the internet without that important original context. 

“It's very clear what's going to happen when you allow people to make very sophisticated audio and video and image things. People are going to do bad things with it,” he says. “For these companies to say, ‘Well, look, don’t blame us. We just make the technology’—I just don’t buy that argument.”

Right now, free resources to identify AI-generated media are few and far between, and not very reliable.

Lyu and his team have developed a free, web-based program called the DeepFake-o-meter, but it’s not currently available to the public.

Part of the problem, Lyu says, is that investors are flocking to fund AI creation, but not AI countermeasures. “Our side of the work gets much less attention. And we’re basically running out of resources,” Lyu says. Unlike the programs behind deepfakes, “we don’t generate revenue directly, we’re trying to prevent people from losing financially or being psychologically misled.”

As AI continues to advance, Lyu says we’re going to need more free, web-based AI-detection programs and other tools that can reveal AI signatures invisible to the eye, much as an X-ray machine reveals the internal workings of the body. The programs like this that do exist require a degree of expertise, and they aren’t always free or cheap.

Still, without these resources, you can start looking out for AI-generated images today simply by being on your guard—check first before you believe anything you see.

(Source: National Geographic)
