Harnessing Artificial Intelligence to Automatically Identify, Count and Describe Animals in the Wild

June 5, 2018
Motion sensor “camera traps” unobtrusively take pictures of animals in their natural environment, oftentimes yielding images not otherwise observable. The artificial intelligence system automatically processes such images, here correctly reporting this as a picture of two impala standing. Photo courtesy Snapshot Serengeti project

A new paper in the Proceedings of the National Academy of Sciences (PNAS) reports how a cutting-edge artificial intelligence technique called deep learning can automatically identify, count, and describe animals in their natural habitats.

Photographs collected automatically by motion-sensor cameras can then be described by deep neural networks. The result is a system that can automate animal identification for up to 99.3 percent of images while performing at the same 96.6 percent accuracy rate as crowdsourced teams of human volunteers.
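The "automate most images, defer the rest" behavior comes from thresholding the network's confidence: predictions above a chosen cutoff are accepted automatically, while low-confidence images are routed to human volunteers. Below is a minimal sketch of that triage step; the `triage` function, the 0.95 cutoff, and the toy probabilities are illustrative assumptions, not the paper's actual pipeline.

```python
def triage(probs, threshold=0.95):
    """Split predictions: auto-label confident images, defer the rest to humans.

    probs: list of per-image class-probability lists (e.g. softmax outputs).
    Returns (auto, human): indices of auto-labeled and deferred images.
    """
    auto, human = [], []
    for i, p in enumerate(probs):
        # An image is auto-labeled only if its top class is confident enough.
        (auto if max(p) >= threshold else human).append(i)
    return auto, human

# Toy example: 4 images, 3 hypothetical species columns.
probs = [
    [0.98, 0.01, 0.01],   # confident -> labeled automatically
    [0.40, 0.35, 0.25],   # uncertain -> routed to volunteers
    [0.05, 0.93, 0.02],   # below threshold -> volunteers
    [0.96, 0.02, 0.02],   # confident -> labeled automatically
]
auto, human = triage(probs)
```

Raising the threshold trades coverage for accuracy: fewer images are automated, but the accepted labels are more reliable.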

“This technology lets us accurately, unobtrusively, and inexpensively collect wildlife data, which could help catalyze the transformation of many fields of ecology, wildlife biology, zoology, conservation biology, and animal behavior into ‘big data’ sciences. This will dramatically improve our ability to both study and conserve wildlife and precious ecosystems,” says Jeff Clune, the senior author on the paper. He is the Harris Associate Professor at the University of Wyoming and a Senior Research Manager at Uber’s Artificial Intelligence Labs.

The paper was written by Clune, his Ph.D. student Mohammad Sadegh Norouzzadeh, his former Ph.D. student Anh Nguyen (now at Auburn University), Margaret Kosmala (Harvard University), Ali Swanson (University of Oxford), and Meredith Palmer and Craig Packer (both from the University of Minnesota).

Deep neural networks are a form of computational intelligence loosely inspired by how animal brains see and understand the world. They require vast amounts of training data to work well, and the data must be accurately labeled (e.g. each image being correctly tagged with which species of animal is present, how many there are, etc.). This study obtained the necessary data from Snapshot Serengeti, a citizen science project on the www.zooniverse.org platform. Snapshot Serengeti has deployed a large number of “camera traps” (motion-sensor cameras) in Tanzania that collect millions of images of animals in their natural habitat, such as lions, leopards, cheetahs, and elephants. The information in these photographs is only useful once it has been converted into text and numbers. For years, the best method for extracting such information was to ask crowdsourced teams of human volunteers to label each image manually. The study published today harnessed 3.2 million labeled images produced in this manner by more than 50,000 human volunteers over several years.
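As a rough illustration of what a "labeled" camera-trap image amounts to, one record per image carries the species tag, the animal count, and behavior annotations. The sketch below is hypothetical; the class name, field names, and layout are assumptions, not Snapshot Serengeti's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class CameraTrapLabel:
    """Hypothetical per-image training label; not the project's real schema."""
    image_id: str
    species: str                                   # e.g. "impala", "lion"
    count: int                                     # animals in the frame
    behaviors: list = field(default_factory=list)  # e.g. ["standing"]

# The impala photo above would be labeled roughly like this:
label = CameraTrapLabel("example_0001", "impala", 2, ["standing"])
```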

“When I told Jeff Clune we had 3.2 million labeled images, he stopped in his tracks,” said Craig Packer, who heads the Snapshot Serengeti project. “We wanted to test whether we could use machine learning to automate the work of human volunteers. Our citizen scientists have done phenomenal work, but we needed to speed up the process to handle ever greater amounts of data. The deep learning algorithm is amazing and far surpassed my expectations. This is a game changer for wildlife ecology.”

Ali Swanson, who founded Snapshot Serengeti, adds, “There are hundreds of camera trap projects in the world, and very few of them are able to recruit large armies of human volunteers to extract their data. That means that much of the knowledge in these important data sets remains untapped. Although projects are increasingly turning to citizen science for image classification, we’re starting to see it take longer and longer to label each batch of images as the demand for volunteers grows. We believe deep learning will be key in alleviating the bottleneck for camera trap projects: the effort of converting images into usable data.”

“Not only does the artificial intelligence system tell you which of 48 different species of animal is present, it also tells you how many there are and what they are doing. It will tell you if they are eating, sleeping, if babies are present, etc.,” adds Margaret Kosmala, another Snapshot Serengeti leader. “We estimate that the deep learning technology pipeline we describe would save more than 8 years of human labeling effort for each additional 3 million images. That is a lot of valuable volunteer time that can be redeployed to help other projects.”

First-author Mohammad Sadegh Norouzzadeh points out that “Deep learning is still improving rapidly, and we expect that its performance will only get better in the coming years. Here we wanted to demonstrate the value of the technology to the wildlife ecology community, but we expect that as more people research how to improve deep learning for this application and publish their datasets, the sky’s the limit. It is exciting to think of all the different ways this technology can help with our important scientific and conservation missions.”

The paper appears today in PNAS. The full citation is: Norouzzadeh M, Nguyen A, Kosmala M, Swanson A, Palmer M, Packer C, Clune J (2018) Automatically identifying, counting, and describing wild animals in camera-trap images with deep learning. Proceedings of the National Academy of Sciences.

