Saturday, July 10, 2004

Dartmouth News: Investigating digital images: What's real and what's phony?

" "Seeing is no longer believing. Actually, what you see is largely irrelevant," says Dartmouth Professor Hany Farid. He is referring to the digital images that appear everywhere: in newspapers, on Web sites, in advertising, and in business materials, for example.
Farid and Dartmouth graduate student Alin Popescu have developed a mathematical technique to tell the difference between a "real" image and one that's been fiddled with. Consider a photo of two competing CEOs talking over a document labeled "confidential - merger," or a photo of Saddam Hussein shaking hands with Osama bin Laden. The Dartmouth algorithm, presented recently at the 6th International Workshop on Information Hiding, in Toronto, Canada, can determine if someone has manipulated the photos, like blending two photos into one, or adding or taking away objects or people in an image.
"Commercially available software makes it easy to alter digital photos," says Farid, an Associate Professor of Computer Science. "Sometimes this seemingly harmless talent is used to influence public opinion and trust, especially when altered photos are used in news reports. . . .
A digital image is a collection of pixels or dots, and each pixel contains numbers that correspond to a color or brightness value. When marrying two images to make one convincing composite, you have to alter pixels. They have to be stretched, shaded, twisted, and otherwise changed. The end result is, more often than not, a realistic, believable image.
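[For illustration, and not from the article: a minimal Python sketch of the point above, using NumPy and Pillow. The file names, sizes, and paste coordinates are placeholders. Pasting one image into another at a different size forces its pixels to be resampled, that is, recomputed as weighted averages of their neighbors.]

    import numpy as np
    from PIL import Image

    # A digital image is just an array of numbers: height x width x 3 (R, G, B).
    photo = np.asarray(Image.open("ceo_meeting.jpg").convert("RGB"), dtype=np.float64)
    insert = Image.open("confidential_memo.jpg").convert("RGB")

    # To fit the second image into the first at a new size, its pixels must be
    # resampled -- bilinear interpolation computes each new pixel as a weighted
    # average of its old neighbors.
    insert_resized = np.asarray(
        insert.resize((200, 150), resample=Image.BILINEAR), dtype=np.float64
    )

    # Drop the resampled region into the host photo (top-left corner at row 50, col 80).
    composite = photo.copy()
    composite[50:50 + 150, 80:80 + 200] = insert_resized

    # Saving as PNG avoids piling further JPEG compression on top of the edit.
    Image.fromarray(composite.astype(np.uint8)).save("composite.png")

[It is exactly this interpolation step that introduces correlations between neighboring pixels that an untouched camera image would not have.]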
"With today's technology, it's not easy to look at an image these days and decide if it's real or not," says Farid. "We look, however, at the underlying code of the image for clues of tampering."
Farid's algorithm looks for the evidence inevitably left behind after image tinkering. Statistical clues lurk in all digital images, and the ones that have been tampered with contain altered statistics.
"Natural digital photographs aren't random," he says. "In the same way that placing a monkey in front of a typewriter is unlikely to produce a play by Shakespeare, a random set of pixels thrown on a page is unlikely to yield a natural image. It means that there are underlying statistics and regularities in naturally occurring images."
Farid and his students have built a statistical model that captures the mathematical regularities inherent in natural images. Because these statistics fundamentally change when images are altered, the model can be used to detect digital tampering. . . ."
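[One last illustrative sketch, and emphatically not the published Popescu-Farid method (which estimates interpolation correlations with an EM algorithm): resampled regions become unusually predictable from their neighbors, and that regularity shows up as periodic peaks in the Fourier spectrum of a simple prediction residual. The file name and the crude peak-to-mean statistic are my own assumptions for the sketch.]

    import numpy as np
    from PIL import Image

    def prediction_residual(gray):
        # Residual of predicting each pixel as the mean of its four neighbors
        # (np.roll wraps at the borders, which is acceptable for a rough sketch).
        predicted = 0.25 * (
            np.roll(gray, 1, axis=0) + np.roll(gray, -1, axis=0) +
            np.roll(gray, 1, axis=1) + np.roll(gray, -1, axis=1)
        )
        return gray - predicted

    gray = np.asarray(Image.open("suspect.jpg").convert("L"), dtype=np.float64)
    residual = prediction_residual(gray)

    # Interpolation tends to leave regularly spaced peaks in the spectrum of the
    # residual; a crude peak-to-mean ratio (ignoring the DC term) hints at that.
    spectrum = np.abs(np.fft.fft2(residual))
    spectrum[0, 0] = 0.0
    print("spectral peak-to-mean ratio:", spectrum.max() / spectrum.mean())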

2 Comments:

Anonymous said...

The implication in this excerpt is that the clues this algorithm detects would still be evident in a photo published on, for example, the front page of the NYT. After the photo is published, are these clues still extant? Is there still a remaining data set on an analog piece of newsprint?

10:10 AM  
Blogger t said...

Click the title for the full article. My guess is that skills can be developed to eventually overcome the statistical technique. And you're right, I think: a newsprint photo, diminished through compression and a low-res line screen, would be harder to analyze for tampering than the original, high-res image (the higher the compression and the lower the dpi, the more 'averaging' of related pixels, and the less statistical data for comparative analysis). Success in 'fooling' the algorithm would also depend on the original dpi, the technique used in retouching, the output dpi (and/or amount of compression), and the skill of the retoucher: blending, air-brushing, and blurring, viewed at full resolution on a computer screen, are quite detectable to the human eye, as opposed to, say, precision grafting, cutting, and pasting. But perhaps even careful grafting would show statistical anomalies undetectable to the eye....

12:56 PM  
