The project is aimed at scientists who conduct research on amphibians. The main objective of this research is to track changes in animal populations, sex ratios, and similar indicators.
The tracking methods in use today harm the animals' health and are also expensive for researchers.
The application recognizes individual animals in the wild by their unique skin patterns. It is a modern approach based on computer vision and machine learning that allows fast and reliable identification of individuals against existing population databases.
The goal was to create a machine learning algorithm that identifies amphibians from photos taken by scientists with a mobile phone camera.
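The case study does not disclose the recognition algorithm itself. A common approach to pattern-based individual identification is to extract a feature embedding from the photo and compare it against the embeddings stored for known individuals; the sketch below illustrates only that matching step, with the embedding extraction, the `identify` function, the similarity threshold, and the example IDs all being assumptions for illustration.

```python
import numpy as np

def identify(query: np.ndarray,
             database: dict[str, np.ndarray],
             threshold: float = 0.8):
    """Return the best-matching individual ID and its score,
    or (None, threshold) if no stored pattern is close enough.

    query    -- embedding of the new photo's skin pattern (hypothetical)
    database -- mapping of known individual IDs to stored embeddings
    """
    best_id, best_score = None, threshold
    for animal_id, embedding in database.items():
        # Cosine similarity between the query and a stored pattern embedding
        score = float(np.dot(query, embedding) /
                      (np.linalg.norm(query) * np.linalg.norm(embedding)))
        if score > best_score:
            best_id, best_score = animal_id, score
    return best_id, best_score
```

If the best similarity stays below the threshold, the photo can be treated as a new, previously unseen individual and added to the database, which is how such a system typically grows its population record over time.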
The challenge for QA on this project was to create data sets that represented a diverse range of amphibians. The application runs on Android smartphones, so it had to be tested on a variety of devices.
Our developers did a great job designing the logic of the algorithm.
Although the algorithm is the core component of the system, we also had to think through the overall workflow. Our analyst and UI/UX specialist worked together to build an interface that scientists could easily use during their field research.
Functional testing, usability testing, compatibility testing.
Crashlytics, Reflector, Redmine, TestLink.
The end solution provides scientists with a tool for their research work. A scientist can take a photo of an animal and store it, together with its description, in the database. Thanks to the algorithm, the user can see the number of captures recorded for each animal in the database. This information is used to estimate the number of individuals in the population.
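The case study does not say how the capture counts are turned into a population estimate. One standard mark-recapture technique that works with exactly this kind of data is the Lincoln-Petersen estimator; the sketch below shows its Chapman-corrected form under the assumption that each "capture" in the database corresponds to a sighting of an identified individual.

```python
def lincoln_petersen(marked_first: int, caught_second: int, recaptured: int) -> float:
    """Chapman-corrected Lincoln-Petersen estimate of population size.

    marked_first  -- individuals identified in the first survey
    caught_second -- individuals photographed in the second survey
    recaptured    -- second-survey individuals already present in the database
    """
    # Chapman's correction (+1 terms) avoids division by zero
    # and reduces bias for small samples
    return ((marked_first + 1) * (caught_second + 1)) / (recaptured + 1) - 1
```

For example, if 50 individuals were catalogued in the first survey, 40 were photographed in the second, and 10 of those were recognized as already in the database, the estimated population is about 189 animals.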
Machine learning // Computer vision // Image processing // Google Maps // Custom UI widgets // User settings persistence // Camera API
Dagger (Dependency Injection) // Google Maps // Retrofit // Ktor