
Publication details for Professor Toby Breckon

Ismail, K.N. & Breckon, T.P. (2019), On the Performance of Extended Real-Time Object Detection and Attribute Estimation within Urban Scene Understanding, 18th IEEE International Conference on Machine Learning and Applications (ICMLA 2019). Boca Raton, Florida, USA, IEEE, Piscataway, NJ, 641-646.


Whilst real-time object detection has become an increasingly important task within urban scene understanding for autonomous driving, the majority of prior work concentrates on the detection of obstacles, dynamic scene objects (pedestrians, vehicles) and road signage within the scene. By contrast, for an autonomous vehicle to truly interact with occupants and other road users using a common semantic understanding of the environment it is traversing, a considerably extended scene understanding capability is required. In this work, we consider the performance of extended "long-list" object detection, via an extended end-to-end Region-based Convolutional Neural Network (R-CNN) architecture, on a large-scale 31-class detection problem over urban scene objects with integrated object attribute estimation for appropriate colour and primary orientation. We examine the performance of this multi-class object detection and attribute estimation task operating in real time with on-vehicle processing at 10 fps. Our work is evaluated under a range of real-world automotive conditions across multiple complex and cluttered urban environments.
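The multi-task structure described above (a shared detection backbone with additional per-region heads for class, colour and orientation) can be illustrated with a minimal numpy sketch. This is a hypothetical toy, not the authors' implementation: the feature dimension, attribute vocabularies (8 colours, 4 orientations) and random weights are assumptions chosen for illustration only; the 31 object classes (plus a background class) follow the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 31 object classes per the abstract (+1 background);
# colour/orientation vocabularies and feature dimension are assumptions.
N_CLASSES, N_COLOURS, N_ORIENTS, FEAT_DIM = 31, 8, 4, 256

# One linear head per task over shared pooled region features
# (stand-ins for a trained R-CNN second stage; weights here are random).
W_cls = rng.standard_normal((FEAT_DIM, N_CLASSES + 1))
W_col = rng.standard_normal((FEAT_DIM, N_COLOURS))
W_ori = rng.standard_normal((FEAT_DIM, N_ORIENTS))

def softmax(z):
    # Numerically stable softmax along the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def predict(region_feats):
    """Map shared region features to class, colour and orientation distributions."""
    return (softmax(region_feats @ W_cls),
            softmax(region_feats @ W_col),
            softmax(region_feats @ W_ori))

feats = rng.standard_normal((5, FEAT_DIM))  # pooled features for 5 candidate regions
p_cls, p_col, p_ori = predict(feats)
print(p_cls.shape, p_col.shape, p_ori.shape)  # (5, 32) (5, 8) (5, 4)
```

The design point this sketch reflects is that attribute estimation reuses the detector's features rather than running separate networks, which is what makes a 10 fps on-vehicle budget (100 ms per frame) plausible for the combined task.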