Computer Trained to Reveal How It Uses AI to Identify Bird Species
Researchers at Duke University have trained a computer to identify bird species through deep learning and show its reasoning. The innovation may prove useful in medical diagnostics.
The team trained their deep neural network — algorithms based on the way the brain works — by feeding it 11,788 photos of 200 bird species, ranging from swimming ducks to hovering hummingbirds.
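For readers who want to tinker, those counts match the public CUB-200-2011 dataset (11,788 photos, 200 species), so a training set like the one described can be loaded in a few lines. The sketch below is illustrative only: the directory path is hypothetical, and it assumes the standard one-folder-per-species layout.

```python
import torch
from torchvision import datasets, transforms

# Hypothetical path; assumes one sub-folder per species, the layout of the
# public CUB-200-2011 set whose counts match those quoted above.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),  # input size is an assumption
    transforms.ToTensor(),
])
birds = datasets.ImageFolder("CUB_200_2011/images", transform=preprocess)
loader = torch.utils.data.DataLoader(birds, batch_size=64, shuffle=True)
print(f"{len(birds)} photos across {len(birds.classes)} species")
```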
A Duke team trained a computer to identify up to 200 species of birds from just a photo. Given a photo of a mystery bird (top), the AI generates heat maps showing which parts of the image most resemble typical species features it has seen before. Courtesy of Chaofan Chen, Duke University.
Given a photo of a mystery bird, the network picks out identifying patterns in the image and hazards a guess by comparing those patterns to typical species traits it has seen before. Along the way, it spits out a series of heat maps highlighting those features that led it to its conclusion.
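As a rough illustration of that comparison step, the sketch below scores every location in a convolutional feature map against a set of learned "prototype" vectors; upsampling one of the resulting similarity maps to the input resolution gives a heat map like those in the figure. The shapes, the backbone, and the distance-to-similarity formula here are assumptions for illustration, not the Duke team's exact code.

```python
import torch
import torch.nn.functional as F

def prototype_heatmaps(conv_features, prototypes):
    """Score every image patch against every learned prototype vector.

    conv_features: (B, D, H, W) feature map from a CNN backbone.
    prototypes:    (P, D), one learned vector per prototypical bird part.
    Returns (B, P, H, W) similarity maps: a high value means "this patch
    of the photo looks like that prototype."
    """
    B, D, H, W = conv_features.shape
    patches = conv_features.permute(0, 2, 3, 1).reshape(B, H * W, D)
    # Squared Euclidean distance from every patch to every prototype.
    d2 = ((patches.unsqueeze(2) - prototypes) ** 2).sum(dim=-1)  # (B, H*W, P)
    # Small distance -> large activation (one common choice of mapping).
    sims = torch.log((d2 + 1) / (d2 + 1e-4))
    return sims.permute(0, 2, 1).reshape(B, -1, H, W)

# Toy usage with made-up sizes: 10 prototypes over a 7x7 feature map.
feats = torch.randn(1, 128, 7, 7)
protos = torch.randn(10, 128)
maps = prototype_heatmaps(feats, protos)     # (1, 10, 7, 7)
heat = F.interpolate(maps, size=(224, 224),  # display-ready heat map
                     mode="bilinear", align_corners=False)
```

In a full model of this kind, each similarity map would also be pooled to a single score and fed through a linear layer, so the same evidence that produces the heat maps also produces the species guess.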
The researchers found the network to be accurate up to 84% of the time, on par with some of its best-performing counterparts, none of which reveal how they are able to tell, for example, one subspecies of sparrow from the next.
That opacity, Duke professor Cynthia Rudin said, can pose problems. Unlike traditional software, deep learning models learn from data without being explicitly programmed; the bird-identifying algorithm, for example, was never told “these are wings” or “this is a beak.” As a result, how such models reach their conclusions isn’t always clear.
Rudin and her colleagues are trying to show that AI doesn’t have to be that way. She and her lab are designing deep learning models that explain the reasoning behind their predictions, making it clear exactly why and how they came up with their answers. When such a model makes a mistake, its built-in transparency makes it possible to see why.
For their next project, Rudin and her team will use their algorithm to classify suspicious areas in medical images such as mammograms. If it works, their system won’t just help doctors detect lumps, calcifications, and other findings that could be signs of breast cancer. It will also show which parts of the mammogram it’s homing in on, revealing which specific features most resemble the cancerous lesions it has seen in other patients.
In that way, Rudin said, their network is designed to mimic the way doctors make a diagnosis. “It’s case-based reasoning,” Rudin said. “We’re hoping we can better explain to physicians or patients why their image was classified by the network as either malignant or benign.”
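A minimal sketch of that case-based step, with entirely hypothetical embeddings and labels: classify and explain a query image by retrieving the stored training cases it most resembles.

```python
import torch

def explain_by_cases(query, case_vecs, case_labels, k=3):
    """Return the k stored cases nearest to a query embedding.

    query:       (D,) embedding of the new image (hypothetical).
    case_vecs:   (N, D) embeddings of past, labeled training cases.
    case_labels: list of N labels, e.g. "malignant" / "benign".
    """
    dists = ((case_vecs - query) ** 2).sum(dim=1)  # (N,)
    nearest = torch.topk(dists, k, largest=False).indices
    return [(case_labels[i], float(dists[i])) for i in nearest.tolist()]

# Toy usage: 100 made-up past cases in a 32-dimensional embedding space.
cases = torch.randn(100, 32)
labels = ["malignant" if i % 2 else "benign" for i in range(100)]
print(explain_by_cases(torch.randn(32), cases, labels))
```

The design choice the article highlights is that the explanation and the prediction come from the same comparison, rather than a justification bolted on after the fact.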