
Identify Anything, Anywhere, Instantly (Well, Almost) With the Newest iNaturalist Release

July 17, 2017

A new version of the California Academy of Sciences’ iNaturalist app uses artificial intelligence to offer immediate identifications for photos of any kind of wildlife. You can observe anywhere and ask the computer anything. I’ve been using it for a few weeks now and it seems like it mostly works. It is completely astonishing.

One iNaturalist user compared it to getting your hands on a real-life Star Trek tricorder.

A few days ago, for example, I was hanging out on an elementary school lawn in the North Bay and a quarter-size brownish butterfly flitted past and landed on the grass. I snuck up on it and took this iPhone picture:

A butterfly. But what kind?

I knew it was a butterfly, meaning my best guess was “Lepidoptera,” the 180,000-species order that includes butterflies and moths.

“We’re pretty sure this is in the genus Strymon,” the app suggested, and then it generated a list of its top 10 species suggestions, with the gray hairstreak, Strymon melinus, as its best recommendation. It noted that gray hairstreaks are “visually similar / seen nearby,” as opposed to several other species that were visually similar but not local. Two human experts later verified the computer’s ID for me.

There are apps out there that will attempt to use pattern recognition to automatically identify just birds, or just plants. The bird ones can be good, but they’re not really useful for an iPhone nature-stalker like me, since it’s not fair to ask a computer (or a human) to identify a pixel of brown stuff that you say is a bird. The plant ones haven’t been as good. Or as Charlie Hohn, a Vermont state wetlands scientist and regular iNaturalist user, joked by email, “considering that humans literally evolved for hundreds of thousands of years to be able to identify plants, and the ones who did a bad job ate poison ones and died, it’s hard to imagine computers already being able to get near that point.”

So there was nothing really like this before, and the few apps that had tried it with just one type of life each had some major flaw. iNaturalist is now attempting plants, mammals, birds, insects, spiders, slime molds … how do you even begin to train a computer to do that?

Alex Shepard, iNaturalist’s iOS developer, started — seriously — with an online Coursera class about a branch of artificial intelligence called neural networks and deep learning. That’s when it occurred to him that he could teach a computer to offer species identifications from the huge number of pictures that people have added to iNaturalist over the last decade.


A computer learning to identify an image proceeds, at a very superficial level, like a human visual cortex trying to identify something in the world, Shepard says. It does a rough pass to pick out the most basic parts — colors and lines, for example. Then it does a slightly more complex pass to pick out slightly more complicated parts — patterns of lines instead of just lines, say. Then it does it again, and this time it looks for places where colors overlap, darks over lights, which might be the beginning of an outline. All the information combines into something like a “gestalt,” Shepard says: “There’s basically a set of layers that the image is processed through that all sort of add up to a sense of what you’re looking at.”
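
For readers who want to see the shape of that idea in code, here is a minimal sketch of a layered image classifier, written in PyTorch. It is not iNaturalist’s actual model, just an illustration of Shepard’s description: early convolutional layers respond to colors and edges, deeper ones to patterns of patterns, and a final layer turns the whole stack into a score for each species.

```python
# A minimal sketch (not iNaturalist's real network) of the layered
# processing Shepard describes. Assumes PyTorch is installed.
import torch
import torch.nn as nn

class TinySpeciesNet(nn.Module):
    def __init__(self, num_species: int):
        super().__init__()
        self.features = nn.Sequential(
            # First rough pass: basic colors and lines
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            # Slightly more complex pass: patterns of lines, not just lines
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            # Again: overlapping regions, darks over lights, outlines
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # The layers "all sort of add up to a sense of what you're looking at"
        self.classifier = nn.Linear(64 * 28 * 28, num_species)

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)  # one score per species

# A 224x224 RGB photo in, a score for each candidate species out.
model = TinySpeciesNet(num_species=13730)
scores = model(torch.randn(1, 3, 224, 224))
```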

This part isn’t particularly groundbreaking; the idea of “computer vision” has been around for nearly 50 years. Computer vision is what Facebook is doing when it suggests people to tag in photos, or what your car is doing when it tries to detect pedestrians. But something like what iNaturalist has just built has only become possible in the last few years, says Grant van Horn, a graduate student in Caltech’s Computational Vision Lab who helped build the Cornell Lab of Ornithology’s Merlin app and advised the iNaturalist team. It took a lot of hardware innovation, plus a decade of work by researchers and tech companies on what’s called deep learning — teaching computers to teach themselves really complicated stuff — to make computer vision practical for nature.

“Even before the deep learning revolution, you could do a pretty good job on basic level categories, like car versus pedestrian versus cat,” van Horn says. “But in the past it was a super-smart human encoding what to learn from into the algorithm. Now we just ask the machine to learn from a bunch of examples. And if you don’t have the data it’s really hard to get an advantage.”


The data is why iNaturalist can take on the world in a way almost nobody else can. The app’s users upload photos of all manner of creatures, each with a date and a place attached. Other users help with identifications, and once two users agree on a species, an observation is elevated to “research grade.” iNaturalist recently passed five million observations, 2.5 million of which have reached research grade. The way to succeed in asking a machine to nimbly identify something from an image is to have it learn from a massive and well-organized dataset. The better organized the database, and the more observations in it, the more reliable the service. Check and check. “We produce the high-quality database computer vision folks have been salivating about for years,” Shepard says.

Out of 2.5 million quality observations, according to an explanation iNaturalist co-director Scott Loarie posted to the iNaturalist web site in June, there are 13,730 species that have been identified and confirmed more than 20 times. That’s the database they started building their computer vision from.
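
As a hypothetical illustration of how a training set like the one Loarie describes might be assembled, here is a short Python sketch: keep only research-grade observations, then keep only species confirmed more than 20 times. The field names are invented for the example and are not iNaturalist’s actual schema.

```python
# Illustrative only: filter observations down to a training set using
# the two criteria mentioned in the article. Field names are made up.
from collections import Counter

def build_training_set(observations):
    research_grade = [o for o in observations if o["quality_grade"] == "research"]
    # Count how many confirmed observations each species has
    counts = Counter(o["species"] for o in research_grade)
    # Only species identified and confirmed more than 20 times qualify
    eligible = {species for species, n in counts.items() if n > 20}
    return [o for o in research_grade if o["species"] in eligible]
```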

Shepard built a prototype that he says was good enough to recognize “pretty easy stuff like monkeyflowers.” When the prototype started to run up against hardware limits, Nvidia donated a pair of graphics processing machines more typically used by places like Pixar to render movies. Van Horn and the Caltech Visipedia lab helped work through database challenges. This spring, the iNaturalist team set the computer up and let it chug away in the office for five weeks, nonstop, like a student memorizing the textbook before final exams. They released the results as part of a soft-launch app update on June 29.

As a random person walking around the world who mostly just wants to know what things are, here are the caveats. You have to give the computer a reasonable photo of what you want identified. There are still some — well, lots and lots of — things that look nearly identical and can’t be identified without dissection or genetic testing. Spiders, for example, or grasses. There are also things that can’t be identified well without multiple photos, like mushrooms, for which you really need to see the cap shape and the details of the underside of the cap. Shepard and the iNaturalist engineers are working on the multiple photos challenge, but they solve the broader problem by simply letting the computer admit failure. If it can’t recommend a genus, it recommends a family. If not a family, an order. If it can’t recommend anything at all, it will just back up and say, “We’re not confident enough to make a recommendation,” and then offer a few visually similar suggestions for you to browse.
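
Here is a minimal sketch of that fallback behavior, with assumed names and an illustrative confidence threshold; the article doesn’t describe the model’s actual scoring or cutoffs.

```python
# A sketch, under assumed names, of backing up the taxonomic tree:
# species -> genus -> family -> order, then admit failure.
CONFIDENCE_THRESHOLD = 0.8  # illustrative value, not iNaturalist's

def recommend(rank_scores, visually_similar):
    """rank_scores maps a rank ('species', 'genus', 'family', 'order')
    to a (best_taxon, aggregated_confidence) pair."""
    for rank in ("species", "genus", "family", "order"):
        taxon, confidence = rank_scores[rank]
        if confidence >= CONFIDENCE_THRESHOLD:
            return f"We're pretty sure this is in the {rank} {taxon}."
    # Nothing cleared the bar: admit failure and offer lookalikes to browse.
    return ("We're not confident enough to make a recommendation, "
            f"but here are some visually similar taxa: {visually_similar}")
```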

The recommendations are also better where the data is more complete. Although iNaturalist’s observations are globally distributed, the project originated in Northern California and has a very active base of users and identifiers here. It’s super accurate in Northern California. It’s accurate in most of North America, and in parts of Europe where there are lots of users. It’s not going to be able to help at all if you’re observing from a biodiversity hotspot deep in the Amazon, simply because not enough previous iNaturalist users have done so. And even in Northern California it can’t help you much if the thing you’ve found is so uncommon, or so hard even for humans to identify, that it has no footprint yet in the iNaturalist database.

iNaturalist’s engineers built a demo version of their computer vision by April. This allowed them to confront another potential pitfall. Over the years iNaturalist has attracted and cultivated a set of curators who have devoted considerable time to identifying things for strangers. There’s a malacologist from New York City, for example, Susan Hewitt, who has repeatedly helped me and thousands of other people identify intertidal mollusks. Anyone who’s been tidepooling near Half Moon Bay has (digitally) encountered birder and amateur naturalist Donna Pomeroy. Beetles often end up in front of Boris Büche, a 50-year-old German naturalist who has identified nearly 50,000 observations in iNaturalist. California generalist James Bailey has passed 50,000 identifications. It’s a magnificent, energetic, creative community — one that the developers were terrified of even appearing to disrupt with automation.

“If we just build something that pretends to do all the things they can do, and minimizes their contributions, that’s terrible,” Shepard says.

iNaturalist’s co-directors Loarie and Ken-ichi Ueda posted the demo of what they were then calling “automated species identification” to an iNaturalist Google Forum and let the power users weigh in. Everyone took it home and tested it on what they liked. A few tried to trick it and reported on results; Loarie went on a trip through the New York Botanical Garden with botanist Daniel Atha along to provide instant expert assessment. (The app went 15-of-22 on species IDs, got three more correct to the genus level, and was outright wrong on three. “All in all,” Loarie wrote, “both Daniel and I were amazed by how accurate this machine learning technology is.”)

Charlie Hohn, who has logged 24,000 observations and 46,000 identifications in iNaturalist, initially reported skepticism. But he tried it out on a crocus and a burdock, which the app got correct; on a moth, for which it offered a “helpful” suggestion; and on his infant daughter, who the app suggested was a ringneck snake. (I tried it recently on my two-year-old daughter, and the app correctly recommended Homo sapiens, although ringneck snake was also on the suggestion list.)

Hohn suggested in the Google Forum that instead of offering an “identification,” the way another iNaturalist user can, the computer offer a “suggestion.” That idea was adopted, which is why the app will say it’s offering you a “recommendation” and not an “identification.” Flagging computer-only recommendations matters, too, for improving the network. The developers don’t want the computer training itself on things it has identified on its own, so such photographs are excluded from future training runs until human identifiers weigh in as well. Hohn, who has continued using the suggestions, says it’s like living in a “sci-fi future.”
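
A hypothetical sketch of that training-hygiene rule: an observation only becomes eligible for future training runs once at least one human identification backs it up. The field names are invented for illustration.

```python
# Illustrative only: exclude observations whose only identification
# came from the machine, so the model never trains on its own guesses.
def eligible_for_training(observation):
    human_ids = [i for i in observation["identifications"]
                 if not i["by_machine"]]
    return len(human_ids) > 0
```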

“iNat is almost like a Star Trek tricorder,” he wrote by email. “I point my little handheld device at a plant and i can often get it identified… either in a few hours or days by other users, or now with the algorithm. I’m not that old but when I was a kid computers couldn’t even display a picture. Now we’ve got these tricorders, and so much new stuff too.”

Shepard described the automated suggestions as potentially both providing instant gratification to new users of the app and relieving some of the identification burden on experienced curators. If Boris Büche can only identify some set number of beetles per month, Shepard says, it’s nice if he’s not wasting those identifications on common ladybugs that plenty of other users could figure out.

But for beginners and experts alike, there are also the benefits of human interaction, which the computer can’t provide. Nudibranch enthusiast Robin Agarwal, for example, tries not just to identify your nudibranch but to explain the identification — or explain why it’s difficult. (Once, when some naturalists from the California Center for Natural History found an unusual Okenia plana on a dock at Jack London Square, Agarwal went back the next day to re-confirm the sighting for us.) And it’s not just the model that has learned from all that expertise. As an amateur, I’ve learned more from the explanations and the uncertainty than from the easy identifications.

“While I wholeheartedly agree that speedy IDs (a big benefit of the AI model, and one I look forward to) are encouraging in and of themselves, it isn’t the same reward as an ‘Agree’ with a happyface emoji from a researcher looking for more observations of their favorite species,” she said. “Even the heated arguments over blurry photos of a bird’s brush-obscured field marks are interesting!”

About the Author

Eric Simons is a former digital editor at Bay Nature. He is author of The Secret Lives of Sports Fans and Darwin Slept Here, and is coauthor, with Tessa Hill, of At Every Depth: Our Growing Knowledge of the Changing Oceans.
