Building Zooscan: An App that Scans and Classifies Zoo Animals
My son has always been fascinated by animals. We go to the local zoo multiple times a week, and when we’re on holiday, we always make a point to visit local zoos and other animal parks. On one of our holidays in Porto, we visited the local SeaLife. While we were there, their SeaScan app caught my attention. This clever app lets you scan fish and other creatures in the aquarium to instantly learn more about them. That sparked an idea: what if I built a similar app for zoo animals?
This project became ZooScan — an app that lets you photograph animals and instantly get more information about them. Not only would this help my son explore his passion, it would also be the perfect excuse for me to dive deeper into CoreML and iOS development. This post is the first in a series where I’ll walk you through building the app from concept to launch. Let’s get started!
The App Idea
The idea is fairly simple. The app lets you take a photo of an animal or select one from your photo library. The photo is then analyzed by a machine learning model that classifies the animal and returns its name. We’d also like to display some basic information about each animal, and of course let you mark the animals you’ve scanned as favorites.
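To give a rough sense of where we’re headed, here’s a minimal sketch of how the classification step could look using Vision on top of Core ML. Note that `AnimalClassifier` is just a placeholder name for the model we’ll train later in this series, and `classifyAnimal(in:completion:)` is a hypothetical helper, not the final implementation:

```swift
import CoreML
import UIKit
import Vision

/// Classifies an animal photo and returns the predicted label.
/// `AnimalClassifier` is a placeholder for the Core ML model we'll train later.
func classifyAnimal(in image: UIImage, completion: @escaping (String?) -> Void) {
    guard let cgImage = image.cgImage,
          let coreMLModel = try? AnimalClassifier(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: coreMLModel) else {
        completion(nil)
        return
    }

    // Vision takes care of resizing and cropping the image to the model's expected input.
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // The first classification observation is the model's best guess.
        let best = (request.results as? [VNClassificationObservation])?.first
        completion(best?.identifier)
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        do {
            try handler.perform([request])
        } catch {
            completion(nil)
        }
    }
}
```

We’ll build and train the actual model in a later post; for now this just sketches the shape of the API the rest of the app will lean on.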
Below you can see a gif of the app in action:
As you can see, I’ve added several animal photos I’ve taken over the years to the iOS simulator. We select several images from the photo library—adding a giraffe, a penguin, a watusi, and some zebras. Each photo is classified before being added to the app. The animals are then shown in a carousel. Tapping an animal opens a detail view showing its name, a larger image, and a button to mark it as a favorite. At the bottom, there’s also a carousel showing our favorite animals.
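Under the hood, something like the following model could back both carousels and the favorite button. The `ScannedAnimal` and `AnimalStore` names are placeholders I’m using for illustration; the real data layer is something we’ll work out in the upcoming posts:

```swift
import SwiftUI
import UIKit

/// One scanned animal. The properties here are placeholders; the real app
/// may also store facts about the species.
struct ScannedAnimal: Identifiable {
    let id = UUID()
    let name: String
    let image: UIImage
    var isFavorite = false
}

/// Backs both carousels: the favorites row is simply a filtered view
/// of everything we've scanned so far.
final class AnimalStore: ObservableObject {
    @Published var animals: [ScannedAnimal] = []

    var favorites: [ScannedAnimal] {
        animals.filter(\.isFavorite)
    }

    func toggleFavorite(_ animal: ScannedAnimal) {
        guard let index = animals.firstIndex(where: { $0.id == animal.id }) else { return }
        animals[index].isFavorite.toggle()
    }
}
```

Keeping favorites as a computed property over the single source of truth means the favorites carousel updates automatically whenever an animal is marked or unmarked.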
In upcoming posts, we’ll focus on building the app—starting with the basic UI, and then moving on to the machine learning model that will classify the animals. Stay tuned for more!