Automated species identification
Automated species identification is a method of making the expertise of taxonomists available to ecologists, parataxonomists and others via digital technology and artificial intelligence. Today, most automated identification systems rely on images of the organism to be identified.[1] A classifier is trained on precisely identified images of a species; once exposed to a sufficient amount of training data, it can then identify the trained species in previously unseen images. Accurate species identification is the basis for all aspects of taxonomic research and is an essential component of workflows in biological research.
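The train-then-identify workflow described above can be sketched in a few lines. This is an illustrative toy, not any specific published system: the "images" are stand-in feature vectors, the species names are borrowed from Janzen's quote below, and the classifier is a simple nearest-centroid rule.

```python
# Minimal sketch of image-based species identification (illustrative only).
# Each "image" is reduced to a feature vector; a nearest-centroid classifier
# is trained on labelled examples and then applied to an unseen observation.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: 20 feature vectors per species, drawn around a
# species-specific mean (stand-ins for real image features).
species_means = {"Quercus oleoides": 0.2, "Solanum nigrum": 0.8}
train = {name: rng.normal(mean, 0.05, size=(20, 16))
         for name, mean in species_means.items()}

# "Training" here is just storing the per-species centroid of the features.
centroids = {name: feats.mean(axis=0) for name, feats in train.items()}

def identify(feature_vector):
    """Return the species whose centroid is nearest to the observation."""
    return min(centroids,
               key=lambda s: np.linalg.norm(centroids[s] - feature_vector))

# A previously unseen observation resembling Solanum nigrum.
unseen = rng.normal(0.8, 0.05, size=16)
print(identify(unseen))  # Solanum nigrum
```

Real systems replace the hand-rolled centroid rule with learned models (today, usually convolutional neural networks), but the structure — labelled training images, a fitted classifier, prediction on unseen images — is the same.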
Introduction
The automated identification of biological objects such as insects (individuals) and/or groups (e.g., species, guilds, characters) has been a dream among systematists for centuries. The goal of some of the first multivariate biometric methods was to address the perennial problem of group discrimination and inter-group characterization. Despite much preliminary work in the 1950s and '60s, progress in designing and implementing practical systems for the fully automated identification of biological objects has proven frustratingly slow. As recently as 2004, Dan Janzen[2] updated the dream for a new audience:
The spaceship lands. He steps out. He points it around. It says ‘friendly–unfriendly–edible–poisonous–safe–dangerous–living–inanimate’. On the next sweep it says ‘Quercus oleoides–Homo sapiens–Spondias mombin–Solanum nigrum–Crotalus durissus–Morpho peleides–serpentine’. This has been in my head since reading science fiction in ninth grade half a century ago.
The species identification problem
Janzen's preferred solution to this classic problem involved building machines to identify species from their DNA. His predicted budget and proposed research team was “US$1 million and five bright people.” However, recent developments in computer architectures, as well as innovations in software design, have placed the tools needed to realize Janzen's vision in the hands of the systematics and computer science communities not several years hence, but now; and not just for creating DNA barcodes, but also for identification based on digital images.
A seminal survey published in 2004[3] examined why automated species identification had not become widely employed by that time and whether it would be a realistic option for the future. The authors found that "a small but growing number of studies sought to develop automated species identification systems based on morphological characters". An overview of 20 studies analyzing species' structures, such as cells, pollen, wings, and genitalia, shows identification success rates between 40% and 100% on training sets with 1 to 72 species. However, they also identified four fundamental problems with these systems: (1) training sets were too small (5–10 specimens per species), and extending them, especially for rare species, may be difficult; (2) identification errors were not studied thoroughly enough to handle them or to find systematic patterns; (3) studies considered only small numbers of species (<200); and (4) the systems were restricted to the species they had been trained on and would classify any novel observation as one of the known species.
A survey published in 2017[4] systematically compares and discusses progress and findings in automated plant species identification over the preceding decade (2005–2015). In this period, 120 primary studies were published in high-quality venues, mainly by authors with a computer science background. These studies propose a wealth of computer vision approaches, i.e., features that reduce the high dimensionality of the pixel-based image data while preserving the characteristic information, as well as classification methods. The vast majority of these studies analyze leaves for identification, while only 13 studies propose methods for flower-based identification; leaves are easier to collect and image, and are available for most of the year. Proposed features capture generic object characteristics, i.e., shape, texture, and color, as well as leaf-specific characteristics, i.e., venation and margin. The majority of studies still used evaluation datasets containing no more than 250 species. However, there is progress in this regard: one study uses a dataset with more than 2,000 species[5] and another with more than 20,000 species.[6]
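Hand-crafted shape features of the kind surveyed can be illustrated with a short sketch. This is a hedged example, not any particular paper's method: the "leaf" is a synthetic binary mask, and the two descriptors (aspect ratio and solidity, here computed against the bounding box) are simple stand-ins for the richer shape, texture, and venation features used in practice.

```python
# Sketch of hand-crafted leaf-shape features: reduce a high-dimensional pixel
# grid (here a binary mask) to a few descriptive numbers. Illustrative only.
import numpy as np

def leaf_features(mask):
    """Compute simple shape descriptors from a binary leaf mask."""
    ys, xs = np.nonzero(mask)
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    area = mask.sum()
    bounding_area = height * width
    return {
        "aspect_ratio": width / height,    # elongation of the leaf outline
        "solidity": area / bounding_area,  # how fully the leaf fills its box
    }

# Synthetic elongated "leaf": a filled region 40 px tall and 10 px wide.
mask = np.zeros((64, 64), dtype=bool)
mask[10:50, 20:30] = True
feats = leaf_features(mask)
print(feats)  # {'aspect_ratio': 0.25, 'solidity': 1.0}
```

A feature vector built from descriptors like these (plus texture, color, venation, and margin measures) is what earlier classifiers consumed; more recent work lets convolutional networks learn such features directly from pixels.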
These developments could not have come at a better time. As the taxonomic community already knows, the world is running out of specialists who can identify the very biodiversity whose preservation has become a global concern. In commenting on this problem in palaeontology as long ago as 1993, Roger Kaesler [7] recognized:
“… we are running out of systematic palaeontologists who have anything approaching synoptic knowledge of a major group of organisms … Palaeontologists of the next century are unlikely to have the luxury of dealing at length with taxonomic problems … Palaeontology will have to sustain its level of excitement without the aid of systematists, who have contributed so much to its success.”
This expertise deficiency cuts as deeply into those commercial industries that rely on accurate identifications (e.g., agriculture, biostratigraphy) as it does into a wide range of pure and applied research programmes (e.g., conservation, biological oceanography, climatology, ecology). It is also commonly, though informally, acknowledged that the technical, taxonomic literature of all organismal groups is littered with examples of inconsistent and incorrect identifications. This is due to a variety of factors, including taxonomists being insufficiently trained and skilled in making identifications (e.g., using different rules-of-thumb in recognizing the boundaries between similar groups), insufficiently detailed original group descriptions and/or illustrations, inadequate access to current monographs and well-curated collections and, of course, taxonomists having different opinions regarding group concepts. Peer review only weeds out the most obvious errors of commission or omission in this area, and then only when an author provides adequate representations (e.g., illustrations, recordings, and gene sequences) of the specimens in question.
Systematics too has much to gain, both practically and theoretically, from the further development and use of automated identification systems. It is now widely recognized that the days of systematics as a field populated by mildly eccentric individuals pursuing knowledge in splendid isolation from funding priorities and economic imperatives are rapidly drawing to a close. In order to attract both personnel and resources, systematics must transform itself into a “large, coordinated, international scientific enterprise”.[8] Many have identified use of the Internet — especially the World Wide Web — as the medium through which this transformation can be made. While establishment of a virtual, GenBank-like system for accessing morphological data, audio clips, video files and so forth would be a significant step in the right direction, improved access to observational information and/or text-based descriptions alone will not address either the taxonomic impediment or low identification reproducibility successfully. Instead, the inevitable subjectivity associated with making critical decisions on the basis of qualitative criteria must be reduced or, at the very least, embedded within a more formally analytic context.
Properly designed, flexible, and robust automated identification systems, organized around distributed computing architectures and referenced to authoritatively identified collections of training set data (e.g., images and gene sequences), can, in principle, provide all systematists with access to the electronic data archives and the analytic tools needed to handle routine identifications of common taxa. Properly designed systems can also recognize when their algorithms cannot make a reliable identification and refer that image to a specialist (whose address can be accessed from another database). Such systems can also include elements of artificial intelligence and so improve their performance the more they are used. Most tantalizingly, once morphological (or molecular) models of a species have been developed and demonstrated to be accurate, these models can be queried to determine which aspects of the observed patterns of variation and variation limits are being used to achieve the identification, thus opening the way for the discovery of new and (potentially) more reliable taxonomic characters.
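The "refer to a specialist" behaviour described above is, in machine-learning terms, a classifier with a reject option: when the model's top score falls below a confidence threshold, it abstains rather than forcing an identification. A minimal sketch, with an illustrative threshold and hypothetical species scores:

```python
# Reject-option classification: abstain and refer to a human specialist when
# the classifier's confidence is too low. Threshold and scores are illustrative.
REFER = "refer to specialist"

def identify_or_refer(probabilities, threshold=0.9):
    """Return the top-scoring species, or REFER if confidence is below threshold."""
    species, score = max(probabilities.items(), key=lambda kv: kv[1])
    return species if score >= threshold else REFER

# A confident prediction is returned; an ambiguous one is referred onward.
confident = {"Morpho peleides": 0.97, "Crotalus durissus": 0.03}
uncertain = {"Morpho peleides": 0.55, "Crotalus durissus": 0.45}
print(identify_or_refer(confident))  # Morpho peleides
print(identify_or_refer(uncertain))  # refer to specialist
```

Note that this simple threshold only flags low-confidence cases among known species; handling genuinely novel species (the fourth problem identified in the 2004 survey) requires additional open-set recognition techniques.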
- iNaturalist is a global citizen science project and social network of naturalists that incorporates both human and automatic identification of plants, animals, and other living creatures via browser or mobile apps.[9]
- Pl@ntNet is a global citizen science project which provides an app and a website for plant identification from photographs, based on machine learning.
- Leafsnap is an iOS app developed by the Smithsonian Institution that uses visual recognition software to identify North American tree species from photographs of leaves.
- FlowerChecker bot is a Facebook chatbot that uses visual recognition software to identify plant species from photographs. The bot uses a plant database collected by the FlowerChecker app for mobile phones.
- Google Photos can automatically identify various species in photographs.[10]
- Plant.id is a web application which uses a neural network trained on photos from the FlowerChecker app.[11][12]
- Flora Incognita is an app developed as part of a research project; it uses a cascade of convolutional neural networks to identify plants based on images and location data.[13]
References cited
- Wäldchen, Jana; Mäder, Patrick (November 2018). Cooper, Natalie (ed.). "Machine learning for image based species identification". Methods in Ecology and Evolution. 9 (11): 2216–2225. doi:10.1111/2041-210X.13075.
- Janzen, Daniel H. (March 22, 2004). "Now is the time". Philosophical Transactions of the Royal Society of London. B. 359 (1444): 731–732. doi:10.1098/rstb.2003.1444. PMC 1693358. PMID 15253359.
- Gaston, Kevin J.; O'Neill, Mark A. (March 22, 2004). "Automated species recognition: why not?". Philosophical Transactions of the Royal Society of London. B. 359 (1444): 655–667. doi:10.1098/rstb.2003.1442. PMC 1693351. PMID 15253351.
- Wäldchen, Jana; Mäder, Patrick (2017-01-07). "Plant Species Identification Using Computer Vision Techniques: A Systematic Literature Review". Archives of Computational Methods in Engineering. 25 (2): 507–543. doi:10.1007/s11831-016-9206-z. ISSN 1134-3060. PMC 6003396. PMID 29962832.
- Joly, Alexis; Goëau, Hervé; Bonnet, Pierre; Bakić, Vera; Barbe, Julien; Selmi, Souheil; Yahiaoui, Itheri; Carré, Jennifer; Mouysset, Elise (2014-09-01). "Interactive plant identification based on social image data". Ecological Informatics. Special Issue on Multimedia in Ecology and Environment. 23: 22–34. doi:10.1016/j.ecoinf.2013.07.006.
- Wu, Huisi; Wang, Lei; Zhang, Feng; Wen, Zhenkun (2015-08-01). "Automatic Leaf Recognition from a Big Hierarchical Image Database". International Journal of Intelligent Systems. 30 (8): 871–886. doi:10.1002/int.21729. ISSN 1098-111X. S2CID 12917626.
- Kaesler, Roger L (1993). "A window of opportunity: peering into a new century of palaeontology". Journal of Paleontology. 67 (3): 329–333. doi:10.1017/S0022336000036805. JSTOR 1306022.
- Wheeler, Quentin D. (2003). "Transforming taxonomy" (PDF). The Systematist (22): 3–5.
- "iNaturalist Computer Vision Explorations". iNaturalist.org. 2017-07-27. Retrieved 2017-08-12.
- "How Google Photos tells the difference between dogs, cats, bears, and any other animal in your photos". 2015-06-04.
- MLMU.cz. "FlowerChecker: Exciting journey of one ML startup" – O. Veselý & J. Řihák (YouTube).
- "Tvůrci FlowerCheckeru spouštějí Shazam pro kytky. Plant.id staví na AI".
- "The Flora Incognita approach".
External links
Here are some links to the home pages of species identification systems. The SPIDA and DAISY systems are essentially generic and capable of classifying any image material presented. The ABIS and DrawWing systems are restricted to insects with membranous wings, as they operate by matching a specific set of characters based on wing venation.