
From Invisible Plants to Visible Decisions: Explainable AI for Phytolith Identification

Orengo Romeu, Hector (BSC-CNS, ICAC)

Humanities

Ancient agriculture leaves behind a hidden record in the form of microscopic silica "skeletons" called phytoliths. These tiny fragments are key to understanding how early civilisations grew food, but identifying them under a microscope is an incredibly slow task, requiring years of specialised training.

While modern Artificial Intelligence (AI) can now automate this process in seconds, a major scientific hurdle remained: these systems often act as a "black box" (see image 1). Without knowing how the computer reaches its conclusions, scientists cannot fully trust whether the results are based on real botanical evidence. To understand why this transparency is vital, consider the "cat vs. dog" problem (see image 2): a poorly trained AI might correctly identify pets only because it recognises the grass in the background of dog photos rather than the animals themselves.

Our research successfully "opened the box" using a visual tool, called Guided Grad-CAM, to map exactly which features the AI focuses on when classifying ancient crops. We discovered that the algorithm independently learned to recognise the same complex wave patterns and surface bumps (papillae) (see image 3) that human experts use to distinguish between ancient wheat, barley, and oat. In 91% of cases, the AI's primary decision-maker was the genuine biological wave pattern of the plant.

By making AI "explainable", we are setting a new gold standard for computational archaeology. This ensures that the digital tools used to reconstruct human history are reliable.
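For readers curious about the mechanics: Grad-CAM weights each channel of a convolutional layer's feature maps by the average gradient of the class score with respect to that channel, sums the weighted maps, and keeps only the positive evidence. The heatmap step can be sketched in a few lines of NumPy (the function name and the toy arrays below are illustrative, not the authors' actual code, which operated on a trained phytolith classifier):

```python
import numpy as np

def grad_cam_heatmap(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Sketch of the Grad-CAM heatmap computation.

    feature_maps: (K, H, W) activations A_k of the last conv layer
    gradients:    (K, H, W) gradients of the class score w.r.t. A_k
    Returns an (H, W) heatmap normalised to [0, 1].
    """
    # One weight per channel: global-average-pool the gradients
    alphas = gradients.mean(axis=(1, 2))                     # shape (K,)
    # Weighted sum of feature maps, then ReLU to keep positive evidence only
    cam = np.maximum((alphas[:, None, None] * feature_maps).sum(axis=0), 0.0)
    if cam.max() > 0:
        cam /= cam.max()                                     # normalise for display
    return cam

# Toy example: 4 channels of 8x8 activations with random gradients
rng = np.random.default_rng(0)
activations = rng.random((4, 8, 8))
grads = rng.standard_normal((4, 8, 8))
heatmap = grad_cam_heatmap(activations, grads)
print(heatmap.shape)  # (8, 8)
```

In practice the heatmap is upsampled to the input image's resolution and overlaid on it (the yellowish-to-reddish areas in image 3); Guided Grad-CAM then multiplies it element-wise with a guided-backpropagation map to recover fine detail such as individual papillae.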

Explanatory image of a DL black box for phytolith classification using the algorithm of Berganzo-Besga et al. (2022). The original 256-px crop example image is classified as Triticum. See the original article (Berganzo-Besga et al., 2022) for interpretation. (Berganzo-Besga et al., 2025)

A poorly trained cat vs dog recognition model. (A) Black box issue illustrated for a first cat image example. (B) Black box issue illustrated for a first dog image example. (C) Grad-CAM applied to the first cat image example. (D) Grad-CAM applied to the first dog image example. (E) Grad-CAM applied to a second dog image example. (F) Grad-CAM applied to a second cat image example. The model correctly classifies the first image examples (C and D) but misclassifies the second ones (E and F). (Berganzo-Besga et al., 2025)

The different characteristics found and analysed after applying Guided Grad-CAM: papillae, pits, shape and wave pattern. For each of these characteristics the figure shows: (A) the original 256-px crop example image, and (B) Grad-CAM applied to the same image, shown over the original grey-scale image (the target concept is the yellowish to reddish area). (Berganzo-Besga et al., 2025)


REFERENCE

Berganzo-Besga I, Orengo HA, Lumbreras F & Ramsey MN 2025, 'Deep learning black box and pattern recognition analysis using Guided Grad-CAM for phytolith identification', Annals of Botany, vol. 136, pp. 355–366.