{"id":8098,"date":"2025-08-27T08:12:00","date_gmt":"2025-08-27T08:12:00","guid":{"rendered":"https:\/\/sandbox.hbmadvisory.com\/amplify\/ai-still-struggles-with-photo-verification-tow-center-study-finds\/"},"modified":"2025-08-27T08:16:46","modified_gmt":"2025-08-27T08:16:46","slug":"ai-still-struggles-with-photo-verification-tow-center-study-finds","status":"publish","type":"post","link":"https:\/\/sandbox.hbmadvisory.com\/amplify\/ai-still-struggles-with-photo-verification-tow-center-study-finds\/","title":{"rendered":"AI still struggles with photo verification, Tow Center study finds"},"content":{"rendered":"<div>\n<p>The growing sophistication of AI language models such as OpenAI\u2019s GPT-5 has raised hopes that they could help verify photographs online, spotting visual clues, geolocating obscure images and even detecting fakes. But <a href=\"https:\/\/www.cjr.org\/tow_center\/why-ai-models-are-bad-at-verifying-photos.php\" rel=\"nofollow noopener\" target=\"_blank\">new research from the Tow Center for Digital Journalism<\/a> suggests the technology is far from reliable when it comes to confirming provenance \u2014 and may risk adding confusion.<\/p>\n<p>In tests involving seven leading AI platforms, including GPT-5, Gemini, Claude and Perplexity, researchers gave each system ten authentic images from major news events and asked for details such as date, location and photographer. Out of 280 queries, only 14 met the required standard for accuracy and consistency. Even GPT-5, the best performer, was correct just over a quarter of the time.<\/p>\n<p>Unlike reverse image search tools such as Google Images or TinEye, which use pixel-based matching, large language models generate descriptions of pictures and then build text-based searches around them. This can produce \u201cconfidently wrong\u201d answers when superficial clues are over-emphasised.
In one case, Grok mistook flooding in Valencia for floods in Venice after focusing on a \u201cVenice Beach\u201d t-shirt in the frame.<\/p>\n<p>The models were somewhat better at geolocation than at identifying photographers or dates. They were able, for instance, to highlight architectural details, vegetation or street furniture that might escape a human fact-checker\u2019s notice, and their optical character recognition can read faint or blurred text. Investigators say these features show promise for generating leads or providing a \u201cfirst draft\u201d of analysis.<\/p>\n<p>But errors were frequent and sometimes serious. The systems mislabelled well-documented images, fabricated claims about metadata, and even suggested authentic photos were AI-generated. A flood photo from Kazakhstan, for example, was misattributed to other events and wrongly flagged as synthetic.<\/p>\n<p>Because the models\u2019 reasoning is opaque, researchers warn that it is difficult for non-specialists to know when an answer is credible. As media researcher Mike Caulfield noted, the danger lies in untrained users treating AI\u2019s confident but inaccurate responses as fact.<\/p>\n<p>The study also places AI\u2019s weaknesses in a broader context: detection tools designed to identify synthetic media are struggling to keep up with the rapid evolution of image-generation technology.<\/p>\n<p>The Tow Center concludes that AI can play a supporting role for professional fact-checkers but should never replace traditional methods. 
Used with caution, AI might surface overlooked details or accelerate searches, but human oversight and independent corroboration remain essential if journalism is to avoid amplifying mistakes at moments when accuracy matters most.<\/p>\n<p>Source: <a href=\"https:\/\/www.noahwire.com\" rel=\"nofollow noopener\" target=\"_blank\">Noah Wire Services<\/a><\/p>\n<\/div>\n<div>\n<h3 class=\"mt-0\">Noah Fact Check Pro<\/h3>\n<p class=\"text-sm\">The draft above was created using the information available at the time the story first<br \/>\n        emerged. We\u2019ve since applied our fact-checking process to the final narrative, based on the criteria listed<br \/>\n        below. The results are intended to help you assess the credibility of the piece and highlight any areas that may<br \/>\n        warrant further investigation.<\/p>\n<h3 class=\"mt-3 mb-1 font-semibold text-base\">Freshness check<\/h3>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Score:<br \/>\n        <\/span>10<\/p>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Notes:<br \/>\n        <\/span>The narrative is based on a press release from the Tow Center for Digital Journalism, dated August 26, 2025. Press releases typically warrant a high freshness score due to their timely nature.
(<a href=\"https:\/\/www.cjr.org\/tow_center\/why-ai-models-are-bad-at-verifying-photos.php?utm_source=openai\" rel=\"nofollow noopener\" target=\"_blank\">cjr.org<\/a>)<\/p>\n<h3 class=\"mt-3 mb-1 font-semibold text-base\">Quotes check<\/h3>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Score:<br \/>\n        <\/span>10<\/p>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Notes:<br \/>\n        <\/span>No direct quotes are present in the provided text.<\/p>\n<h3 class=\"mt-3 mb-1 font-semibold text-base\">Source reliability<\/h3>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Score:<br \/>\n        <\/span>10<\/p>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Notes:<br \/>\n        <\/span>The narrative originates from the Tow Center for Digital Journalism, a reputable organisation known for its research in digital journalism.<\/p>\n<h3 class=\"mt-3 mb-1 font-semibold text-base\">Plausibility check<\/h3>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Score:<br \/>\n        <\/span>10<\/p>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Notes:<br \/>\n        <\/span>The claims align with existing research on AI&#8217;s limitations in image verification. For instance, a study published in Digital Journalism discusses the challenges and opportunities of implementing data journalism, digital verification, and AI in newsrooms.
(<a href=\"https:\/\/viewjournal.eu\/articles\/10.18146\/view.332?utm_source=openai\" rel=\"nofollow noopener\" target=\"_blank\">viewjournal.eu<\/a>)<\/p>\n<h3 class=\"mt-3 mb-1 font-semibold text-base\">Overall assessment<\/h3>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Verdict<\/span> (FAIL, OPEN, PASS): <span class=\"font-bold\">PASS<\/span><\/p>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Confidence<\/span> (LOW, MEDIUM, HIGH): <span class=\"font-bold\">HIGH<\/span><\/p>\n<p class=\"text-sm mb-3 pt-0\"><span class=\"font-bold\">Summary:<br \/>\n        <\/span>The narrative is fresh, originating from a recent press release by a reputable organisation, and presents plausible claims supported by existing research.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>The growing sophistication of AI language models such as OpenAI\u2019s GPT-5 has raised hopes that they could help verify photographs online, spotting visual clues, geolocating obscure images and even detecting fakes. But new research from the Tow Center for Digital Journalism suggests the technology is far from reliable when it comes to confirming provenance
\u2014<\/p>\n","protected":false},"author":1,"featured_media":8099,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[118],"tags":[],"class_list":{"0":"post-8098","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-publishing-news"},"amp_enabled":true,"_links":{"self":[{"href":"https:\/\/sandbox.hbmadvisory.com\/amplify\/wp-json\/wp\/v2\/posts\/8098","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/sandbox.hbmadvisory.com\/amplify\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/sandbox.hbmadvisory.com\/amplify\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/sandbox.hbmadvisory.com\/amplify\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/sandbox.hbmadvisory.com\/amplify\/wp-json\/wp\/v2\/comments?post=8098"}],"version-history":[{"count":1,"href":"https:\/\/sandbox.hbmadvisory.com\/amplify\/wp-json\/wp\/v2\/posts\/8098\/revisions"}],"predecessor-version":[{"id":8100,"href":"https:\/\/sandbox.hbmadvisory.com\/amplify\/wp-json\/wp\/v2\/posts\/8098\/revisions\/8100"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/sandbox.hbmadvisory.com\/amplify\/wp-json\/wp\/v2\/media\/8099"}],"wp:attachment":[{"href":"https:\/\/sandbox.hbmadvisory.com\/amplify\/wp-json\/wp\/v2\/media?parent=8098"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/sandbox.hbmadvisory.com\/amplify\/wp-json\/wp\/v2\/categories?post=8098"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/sandbox.hbmadvisory.com\/amplify\/wp-json\/wp\/v2\/tags?post=8098"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}