There is a moment at quiet dusk when the rhythmic ebb of shadows on ancient rock seems to murmur stories of worlds long past, and one might imagine Neanderthals, our extinct prehistoric cousins, moving through glades and firelight. Those stories have been rewritten and refined by generations of scholars, drawing on fossil bones, stone tools, and the delicate patterns of ancient DNA that whisper of their lives. Yet in the age of instant answers and shimmering screens, a new study reminds us how the swift brilliance of generative AI can still misread these echoes from the deep past, reflecting outdated knowledge and long-abandoned myths as though they were truths.
Researchers from the University of Maine and the University of Chicago set out to explore what happens when everyday AI tools — chatbots and image generators — are asked to visualize or narrate the daily life of Neanderthals. Collaborating across anthropology and computational science, they crafted prompts for two models — one for text responses and another for images — and ran them through hundreds of iterations. Their goal was not merely to generate content, but to probe how closely these tools align with current scientific understanding built over decades of careful research.
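The paper's actual harness is not reproduced here, but the shape of such an experiment is straightforward. Below is a minimal Python sketch under assumed details: the prompt wording, the run count, and the `query_text_model`/`query_image_model` helpers are hypothetical stand-ins for whichever chatbot and image-generator APIs are being tested, not the researchers' code.

```python
import json
from pathlib import Path

def query_text_model(prompt: str) -> str:
    # Placeholder: swap in a real chatbot API call here.
    return f"[model response to: {prompt}]"

def query_image_model(prompt: str) -> bytes:
    # Placeholder: swap in a real image-generator call here.
    return b""  # stands in for PNG bytes

# Two conditions: a plain request, and one that explicitly asks for
# current scholarly accuracy, so the outputs can be compared.
PROMPTS = {
    "naive": "Describe a day in the life of a Neanderthal family.",
    "expert": (
        "As an expert paleoanthropologist drawing on current research, "
        "describe a day in the life of a Neanderthal family."
    ),
}

N_RUNS = 100  # the study ran hundreds of iterations per condition
out = Path("outputs")
out.mkdir(exist_ok=True)

records = []
for label, prompt in PROMPTS.items():
    for i in range(N_RUNS):
        (out / f"{label}_{i:03d}.png").write_bytes(query_image_model(prompt))
        records.append({"condition": label, "run": i,
                        "text": query_text_model(prompt)})

# Save transcripts for later comparison against the literature.
(out / "transcripts.json").write_text(json.dumps(records, indent=2))
```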
What they found was revealing. Without guidance that explicitly called for scholarly accuracy, both the images and the narratives often echoed scientific ideas that have long been superseded. The visuals were reminiscent of 19th- and early 20th-century depictions: stooped, heavily haired figures more akin to caricature than to contemporary reconstructions. Women and children, whose presence is now well attested through archaeological and genetic evidence, were largely absent from the AI's scenes. The stories spun by the text model likewise underplayed the sophistication and variability of Neanderthal behavior that modern research has documented over recent decades.
Even when the prompts specifically asked for expert‑level descriptions, many outputs still drew on outdated material. By comparing the content against snapshots of scientific literature from across the 20th century and into the 21st, the researchers could estimate the “era” of the AI’s implicit knowledge: one model’s text often aligned with mid‑20th‑century viewpoints, while the image generator’s output was more in line with late 20th‑century academic impressions. This gap reflects not a deficiency in intelligence, but a limitation of accessible training data, much of which is constrained by copyright and availability.
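How might one date a model's implicit knowledge? One simple way to make the idea concrete, offered purely as a toy sketch and not as the study's published method, is to score each output against vocabularies associated with different research eras and report the best match. Every term list below is invented for illustration.

```python
# Illustrative, hand-picked era vocabularies. A real analysis would
# derive these from dated corpora of scientific literature.
ERA_TERMS = {
    "early 1900s": {"brutish", "stooped", "apelike", "primitive"},
    "mid 1900s": {"caveman", "club", "grunting", "dim-witted"},
    "late 1900s": {"burial", "toolmaking", "hunting", "hearth"},
    "2000s+": {"interbreeding", "pigment", "symbolic", "genome"},
}

def estimate_era(text: str) -> str:
    """Return the era whose term list overlaps the text most."""
    lowered = text.lower()
    scores = {
        era: sum(term in lowered for term in terms)
        for era, terms in ERA_TERMS.items()
    }
    return max(scores, key=scores.get)

sample = "A stooped, apelike figure shuffles out, brutish and primitive."
print(estimate_era(sample))  # -> "early 1900s"
```

A serious version would use dated corpora and proper statistical scoring rather than substring counts, but the shape of the comparison, matching generated content against era-stamped reference material, is the same.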
This study thus acts as a kind of mirror, showing not only what AI can create, but also what it cannot yet access. Many cutting‑edge discoveries, tucked inside journal articles behind paywalls or scattered across recent publications, remain invisible to the models that power everyday tools. As a result, AI can reproduce and amplify outdated narratives about our past nearly as quickly as it generates striking visuals or fluent text.
The researchers emphasize that this is not an indictment of AI in itself, but a call to awareness. These tools can process vast amounts of information and identify patterns with remarkable speed, yet they must be engaged critically — by scholars, educators, and everyday users alike — to ensure their outputs reflect contemporary knowledge rather than the shadows of it. By drawing attention to how generative AI portrays something as studied and complex as Neanderthals, the study offers a blueprint for examining similar gaps in other disciplines, reminding us that the river of knowledge flows forward even as our technologies strive to catch up.
AI Image Disclaimer
“Images in this article are AI‑generated illustrations, meant for concept only.”
---
Sources
Phys.org
UMaine News (University of Maine)
ScienceBlog.com
News Minimalist
Mirage News

