Is seeing still believing: a critical review of the factors that allow humans and machines to discriminate between real and generated images in the context of News & Factual content

Martyn Gates

The growing photorealistic capabilities of 3-D animation and CGI pose potential issues for broadcasters, regulators, governments, and viewers. What happens if a news story is broadcast, or made available on a website, that is CGI or a combination of CGI and camera-acquired images? Can we rely on human visual perception to discriminate between CGI and “real” moving images? There are already instances of video where there is polarized debate over the degree of CGI present in published content, a prime example being the ISIS video depicting the burning of a person. An analysis of the artefacts in that video that gave rise to the truthfulness debate is presented. The factors that humans use to discriminate, or to evaluate the realness of, photographs include shadow softness, surface smoothness, scene complexity and composition, and the number of light sources. Initial studies demonstrated a human ability to discriminate; however, more recent studies show that discrimination is becoming more difficult, although the ability improves with training. Computer vision research has evaluated image features and descriptors that discriminate between photographs and CGI. No comparable studies with quantitative data or experimental methodologies appear to exist for evaluating the CGI moving image, by either humans or computers. Possible temporal assessment factors are presented for human discrimination (motion parameters), for computer vision (optical flow), and for artificial intelligence (semantic scene analysis).

Published
2017-10
Content type
Original Research
Keywords
fake news, human visual perception, computer vision, cgi, photorealism, isis, compression artefacts, cgi artefacts, optical flow, semantic scene analysis, artificial intelligence
DOI
10.5594/M001803
ISBN
978-1-61482-959-1