
Project description

This project posed deeper questions around the development of machine learning (ML), artificial intelligence (AI) and related data sciences, cutting across the natural and social sciences, through a series of hour-long “fireside chats” at IAS.

Primary participants

Principal Investigators:

Dr Alex Campolo, Department of Geography, alexander.campolo@durham.ac.uk 

Dr Eamonn Bell, Department of Computer Science, eamonn.bell@durham.ac.uk


According to the British Computer Society, writing in Priorities for the national AI strategy (2021), “digital transformation is held back by the lack of diverse interdisciplinary teams.”[1] UKRI, articulating a similar position in Transforming our world with AI: UKRI’s role in embracing the opportunity (2021), repeatedly stresses the need to encourage interdisciplinary AI research.[2] Notably, UKRI/AHRC has recently announced the ‘Enabling a Responsible AI Ecosystem’ programme (£8.5m over three years), to be disbursed through the Ada Lovelace Institute via a variety of funding mechanisms, including open grant calls.[3]

The past decade has seen huge growth in the development of machine learning (ML), artificial intelligence (AI) and related data sciences, cutting across the natural and social sciences. Academics, industry, and research councils recognise that these techniques demand cross-disciplinary collaboration (see below). Often, however, familiar patterns emerge: computer scientists and engineers work on technical issues while domain experts provide disciplinary ideas and context. Likewise, the urgent need for ethical engagement with ML technologies often results in importing pre-existing ethical principles or social critiques from the outside to address novel problems, an issue addressed by Amoore (2020), Katz (2020) and – in other terms – by Agre (1997).

But there is a more fundamental way in which contemporary ML technologies demand interdisciplinary thinking: they ask us to reimagine basic concepts in the social and natural sciences. What new forms of prediction and inference do they make possible, and how might these affect our ability to imagine social and political futures? What attributes of people, things, and places can be recognised and classified? These questions quickly open onto the fundamental issue that is the topic of this development project: how precisely is knowledge created from data by machine learning systems, and what claims to groundedness does such knowledge make? Prior attempts to address these questions have foundered on mutual disciplinary misrecognition between STEM and AHSS researchers, which the research proposed here aims to address.

This development project will pose these deeper questions in a series of five one-hour “fireside chats”, each between two researchers (one STEM, one AHSS), at IAS. Each event will be immediately preceded by a lunch at which the two speakers and the investigators get to know each other informally, along with a small number of invited PGRs and staff researchers. The talks themselves will be open to the wider University and Durham community. We plan to hold these events fortnightly during Epiphany Term 2022/23. Each event will centre on one central concept in ML; provisionally, these topics are: bias, learning, error, optimisation, and sampling.

This format has been selected to lay the foundations for subsequent interdisciplinary collaboration at this deeper level. The informal, conversational nature of the public talks keeps participation relatively lightweight and makes the conversations somewhat unpredictable. Rather than giving traditional presentations of their research programmes, participants will be challenged by an interlocutor (Bell and/or Campolo) to translate their experience with ML concepts and related practices across disciplinary communities, to question the assumptions they hold about these practices, and to create new ways of working together.

This series of events will help identify links between new and existing staff and contribute to efforts to connect with other Durham staff working in the broad area known as the Digital Humanities, which includes the critical analysis of digitalisation.

The project’s overarching objective is to identify basic transformative concepts in ML and AI that resonate across the sciences and humanities and can serve as the foundation of a multi-year research project. This will position Durham as a leader in the interdisciplinary analysis of the emerging transformations associated with AI and ML.

This project has the following local objectives, which will further these broader goals:

 

  1. to identify and connect staff within Computer Science, Geography, and across the university with a shared interest in ML, and to identify the basic concepts that can motivate long-term interdisciplinary research;

  2. to identify and connect stakeholders from the wider Durham community, including supporters and alumni interested in funding and benefiting from this work;

  3. to identify and target internal and external funding to establish a longer-term (c. 3-5 years) programme of research at Durham at the intersection of ML, AI and the social sciences more broadly.

 

Discussion events will take place fortnightly during Epiphany Term 2022/23.

[1] “Priorities for the national AI strategy: Policy discussion document”, British Computer Society, December 2021, https://www.bcs.org/media/7562/national-ai-strategy.pdf, p. 12.

[2] “Transforming our world with AI: UKRI’s role in embracing the opportunity”, UK Research and Innovation, January 2021, https://www.ukri.org/wp-content/uploads/2021/02/UKRI-120221-TransformingOurWorldWithAI.pdf.

[3] “£8.5 million programme to transform AI ethics and regulation”, UK Research and Innovation, 15 June 2022, https://www.ukri.org/news/8-5-million-programme-to-transform-ai-ethics-and-regulation/.