
Amir Atapour

Dr Amir Atapour-Abarghouei from our Department of Computer Science shares his research insights into machine learning and how he is using AI systems to improve the diagnosis of skin lesions.

Let me tell you a bit about myself as well as a very exciting and impactful research project I have been working on. My research journey began in 2015, when I joined Durham University as a PhD student, focusing on computer vision and scene understanding applications, with a particular emphasis on depth completion/estimation and semantic segmentation. I then joined Newcastle University as a Research Associate on two EPSRC-funded projects, broadening my research areas and my avenues for engagement and impact. In January 2021, I took the next step in my academic journey by accepting a position as a Lecturer at Newcastle University. However, my unwavering passion for Durham University led me to return as an Assistant Professor in the Department of Computer Science in October 2021, where I continue to pursue my primary research interests.

My current research primarily explores the exciting realm of machine vision and cognition, where fast-paced, cutting-edge disciplines such as machine learning, deep learning and computer vision converge to revolutionise our understanding of the world.

AI systems for diagnosis of skin lesions

One of my recent areas of research involves removing algorithmic bias from future automated skin lesion classification systems. With their dermatologist-level performance, these AI systems already show great promise for transforming the way skin lesions are diagnosed and treated. By leveraging sophisticated algorithms, they provide an accessible and cost-effective solution, ensuring that essential healthcare reaches individuals who may have limited access to specialised medical practitioners. As such, there is significant potential to enhance early detection of melanoma from skin lesion images, improve patient outcomes, and alleviate the strain on healthcare resources.

Nevertheless, it is crucial to recognise that these AI systems are not immune to the biases present in the data they are trained on. Such biases can manifest in various forms, such as underrepresentation of certain demographic groups or overrepresentation of specific skin types, leading to prediction irregularities that may disproportionately affect certain individuals or communities. Failing to address these biases could result in misdiagnoses, reduced trust in the technology, and perpetuation of healthcare disparities.

The Fitzpatrick scale (visualised in Figure 1 - Left) is a numerical classification schema for human skin colour. This set of skin type classes was originally developed in 1975 by the American dermatologist Thomas B. Fitzpatrick to estimate the response of different types of skin to ultraviolet light. Previous research has used a compiled dataset of clinical lesions with human-annotated Fitzpatrick skin type labels to demonstrate that skin tone bias is a notable issue in AI-based skin lesion classification. My research, however, does not rely on manually annotated data, since human-annotated labels are expensive and difficult to obtain accurately in practice; instead, the process of identifying skin tone classes is automated using an efficient computer vision algorithm.


Figure 1 - Left: The Fitzpatrick 6-point scale, widely accepted as the gold standard amongst dermatologists; Right: examples of artefacts typically seen in skin images including surgical markings and rulers.
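To give a flavour of how skin tone labelling can be automated, here is a minimal illustrative sketch (not necessarily the exact algorithm used in this research) based on the Individual Typology Angle (ITA), a proxy for Fitzpatrick skin type commonly used in the fairness literature. The function name, the optional lesion_mask argument and the threshold values are assumptions chosen for illustration, using one conventional ITA-to-type mapping.

```python
# Illustrative sketch only: the exact algorithm is not specified in this article.
# It estimates a Fitzpatrick-style skin type from the healthy-skin pixels of an
# image via the Individual Typology Angle (ITA), computed in CIELAB space.
import numpy as np
from skimage import color


def estimate_fitzpatrick_type(rgb_image, lesion_mask=None):
    """Estimate a Fitzpatrick-style skin type (1-6) from an RGB image.

    rgb_image:   H x W x 3 uint8 array.
    lesion_mask: optional H x W boolean array marking lesion pixels,
                 which are excluded so only healthy skin is measured.
    """
    lab = color.rgb2lab(rgb_image.astype(np.float64) / 255.0)
    L, b = lab[..., 0], lab[..., 2]

    keep = np.ones(L.shape, dtype=bool) if lesion_mask is None else ~lesion_mask
    # Individual Typology Angle in degrees, per healthy-skin pixel.
    ita = np.degrees(np.arctan2(L[keep] - 50.0, b[keep]))
    ita_median = np.median(ita)

    # One common ITA-to-skin-type mapping (lighter skin gives a higher ITA).
    thresholds = [55, 41, 28, 19, 10]  # degrees
    for skin_type, t in enumerate(thresholds, start=1):
        if ita_median > t:
            return skin_type
    return 6
```

Because such labels are produced automatically, they can be generated at scale for large lesion datasets without costly manual annotation.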

Automated diagnosis system

My work involves using the automatically generated labels to robustly remove skin type bias from the melanoma classification pipeline using a “bias unlearning technique”. Such a technique forces the machine learning model to learn the useful cues that can lead to the correct classification of the lesion while intentionally disregarding, or “unlearning”, any knowledge of skin tone. This improves the accuracy of the system beyond the performance of an experienced dermatologist. The approach also generalises to images of individuals from differing ethnic origins, reducing the disparity in melanoma detection performance between lighter and darker skin tones, even when the training dataset is dominated by individuals with lighter skin tones.
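One standard way to realise this kind of unlearning, sketched below purely as an illustration rather than the exact formulation used in this research, is an adversarial auxiliary head: a small classifier tries to predict the bias attribute (here, the automatically generated skin tone label) from the shared features, and a gradient reversal layer pushes the feature extractor to make that prediction impossible while the main head still learns to detect melanoma. The class and function names (UnlearningClassifier, training_step, etc.) are hypothetical, and the backbone is assumed to output flat feature vectors.

```python
# Minimal PyTorch sketch of one way to "unlearn" a bias attribute such as skin
# tone, using a gradient-reversal auxiliary head. Illustrative only; it is not
# necessarily the exact technique used in the research described here.
import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; flips (and scales) gradients on backward."""

    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambda_ * grad_output, None


class UnlearningClassifier(nn.Module):
    def __init__(self, backbone, feature_dim, num_skin_types=6, lambda_=1.0):
        super().__init__()
        self.backbone = backbone  # shared feature extractor, outputs (N, feature_dim)
        self.melanoma_head = nn.Linear(feature_dim, 2)  # benign vs melanoma
        self.skin_tone_head = nn.Linear(feature_dim, num_skin_types)
        self.lambda_ = lambda_

    def forward(self, x):
        feats = self.backbone(x)
        melanoma_logits = self.melanoma_head(feats)
        # Reversed gradients push the backbone to *remove* skin tone information.
        reversed_feats = GradientReversal.apply(feats, self.lambda_)
        skin_tone_logits = self.skin_tone_head(reversed_feats)
        return melanoma_logits, skin_tone_logits


def training_step(model, images, melanoma_labels, skin_tone_labels):
    """One combined loss: learn melanoma, unlearn skin tone."""
    ce = nn.CrossEntropyLoss()
    melanoma_logits, skin_tone_logits = model(images)
    return ce(melanoma_logits, melanoma_labels) + ce(skin_tone_logits, skin_tone_labels)
```

The key design point is the gradient reversal: the auxiliary head keeps trying to recover skin tone, while the shared features are trained to defeat it, so the cues the melanoma head relies on become largely independent of skin tone.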

My research also addresses other types of bias present within images, which can have a major influence on the automated diagnosis system. These biases can be introduced by surgical markings and rulers placed on the skin by clinicians for diagnostic purposes (examples seen in Figure 1 - Right). Suggesting that dermatologists avoid using these aids in the future is highly unrealistic and could potentially be detrimental to their performance, so it could not ethically be considered a viable solution.

One potential solution that has been suggested is to segment the specific lesion of interest from the surrounding skin and markings to eliminate the influence of surgical marking bias. However, past research has shown that any kind of pre-processing or segmentation may itself erroneously introduce changes that impede the classification of a lesion. Previous work demonstrates that cropping surgical markings out of the image is effective at mitigating surgical marking bias, but notes that this must be done by an experienced dermatologist to prevent the loss of important information, which is costly and time-consuming.

Instead, my research involves robustly removing bias caused by surgical markings using the bias ‘unlearning’ technique discussed earlier, which results in improved performance of our AI-based melanoma detector without having to alter the image or the behaviour of the clinicians. My research also demonstrates the generalisation benefits of unlearning spurious variation relating to the imaging instrument used to capture lesion images. This means an AI model, using my research, can be trained on data captured in a specific clinic under certain environmental conditions using a particular sensor and will experience little to no performance degradation when deployed in other clinics, or even the patient’s home, under completely different conditions.
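In the same illustrative spirit, the hypothetical sketch above can be extended with additional reversed auxiliary heads, one per spurious attribute, so that the shared features also discard information about, for example, which instrument captured the image. The snippet below reuses the GradientReversal and UnlearningClassifier classes from the previous sketch; the instrument label is assumed to be available per training image.

```python
# Hypothetical extension of the sketch above (reusing GradientReversal and
# UnlearningClassifier): a second reversed head encourages the backbone to also
# discard information about which instrument captured the image.
import torch.nn as nn


class MultiBiasUnlearningClassifier(UnlearningClassifier):
    def __init__(self, backbone, feature_dim, num_skin_types=6,
                 num_instruments=4, lambda_=1.0):
        super().__init__(backbone, feature_dim, num_skin_types, lambda_)
        self.instrument_head = nn.Linear(feature_dim, num_instruments)

    def forward(self, x):
        feats = self.backbone(x)
        melanoma_logits = self.melanoma_head(feats)
        reversed_feats = GradientReversal.apply(feats, self.lambda_)
        return (melanoma_logits,
                self.skin_tone_head(reversed_feats),
                self.instrument_head(reversed_feats))
```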

This research has led to real-world impact in the form of a Knowledge Transfer Partnership (KTP) with Evergreen Life, which is one of the largest private providers of dermatology services to the NHS.

My research interests

My research is quite extensive and spans various areas of machine learning, computer vision, image processing, bias identification and removal, 3D scene analysis, semantic and geometric scene understanding and depth prediction, as well as some natural language processing. I am currently engaged in research involving automated surveillance, anomaly detection, autonomous navigation and the identification of neurodegenerative diseases through medical image analysis.

I am open to collaborating with prospective PhD students and fellow researchers who share an interest in any of these research areas.

Find out more:

Our Department of Computer Science is growing, with ambitious plans for the future and an inclusive, vibrant and international community at its heart. Ranked as a UK Top 10 Department (Complete University Guide 2023), our students develop knowledge and gain essential and transferable skills through high quality teaching, delivered by a passionate team of leading academics.

Feeling inspired? Visit our Computer Science webpages to learn more about our postgraduate and undergraduate programmes.
