

Publication details for Dr Steven Bradley

Hsing, P.-Y., Bradley, S.P., Kent, V.T., Hill, R.A., Smith, G.C., Whittingham, M.J., Cokill, J., Crawley, D., MammalWeb Volunteers, & Stephens, P.A. (2018). Economical crowdsourcing for camera trap image classification. Remote Sensing in Ecology and Conservation 4(4): 361-374.

Abstract

Camera trapping is widely used to monitor mammalian wildlife but creates large image datasets that must be classified. In response, there is a trend towards crowdsourcing image classification. For high-profile studies of charismatic faunas, many classifications can be obtained per image, enabling consensus assessments of the image contents. For more local-scale or less charismatic communities, however, demand may outstrip the supply of crowdsourced classifications. Here, we consider MammalWeb, a local-scale project in North East England, which involves citizen scientists in both the capture and classification of sequences of camera trap images. We show that, for our global pool of image sequences, the probability of correct classification exceeds 99% with about nine concordant crowdsourced classifications per sequence. However, there is high variation among species. For highly recognizable species, species-specific consensus algorithms could be even more efficient; for difficult-to-spot or easily confused taxa, expert classifications might be preferable. We show that two types of incorrect classifications – misidentification of species and overlooking the presence of animals – have different impacts on the confidence of consensus classifications, depending on the true species pictured. Our results have implications for data capture and classification in increasingly numerous, local-scale citizen science projects. The species-specific nature of our findings suggests that the performance of crowdsourcing projects is likely to be highly sensitive to the local fauna and context. The generality of consensus algorithms will, thus, be an important consideration for ecologists interested in harnessing the power of the crowd to assist with camera trapping studies.
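To illustrate the kind of reasoning behind consensus thresholds like the "about nine concordant classifications" figure above, the following is a minimal Bayesian sketch. It is not the paper's actual algorithm, and the parameter values (`p_correct`, `p_same_error`, `prior`) are hypothetical assumptions chosen for illustration, representing an easily confused taxon where volunteers often agree on the same wrong species:

```python
def consensus_confidence(n, p_correct=0.75, p_same_error=0.45, prior=0.5):
    """Toy posterior probability that a consensus label is correct after
    n identical volunteer votes, assuming independent classifiers.

    p_correct:    chance a volunteer names the true species (assumed value)
    p_same_error: chance a volunteer names one particular wrong species
                  (assumed value; high here to mimic a confusable taxon)
    prior:        prior probability the consensus label is the true species
    """
    # Likelihood of n concordant votes if the consensus label is correct
    # vs. if all n volunteers made the same misidentification.
    evidence_true = prior * p_correct ** n
    evidence_false = (1 - prior) * p_same_error ** n
    return evidence_true / (evidence_true + evidence_false)

# With these illustrative parameters, confidence first exceeds 99%
# at nine concordant classifications.
for n in (1, 3, 5, 8, 9):
    print(n, round(consensus_confidence(n), 4))
```

Under this toy model, confidence grows geometrically with the ratio `p_correct / p_same_error`, which is one way to see why recognizable species need far fewer concordant votes than confusable ones.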