Example Research Projects

Learner Analytics on FutureLearn MOOCs

Alexandra Cristea

Data-intensive analysis of massive open online courses (MOOCs) is popular. Researchers have proposed various parameters conducive to analysing and predicting student behaviour and outcomes in MOOCs, as well as different methods to analyse and use these parameters, ranging from statistics to NLP, machine learning and even graph analysis. In this project, we explore many new aspects of learner analytics and educational data mining on one of the few genuinely large-scale data collections: 5 MOOCs, spread over 21 runs, on FutureLearn, a UK-based MOOC provider which, whilst offering a broad range of courses from many universities, NGOs and other institutions, has been evaluated far less than, e.g., its American counterparts. In particular:

  • We extract patterns and apply systematic data analysis methods;
  • We analyse temporal quiz-solving patterns; for instance, we have tackled the less explored issue of how the first weeks of data predict activities in the last weeks (see the sketch after this list);
  • We also address the classical MOOC question of completion chances, based on various parameters (including registration date, among others);
  • We discuss the type of feedback a teacher or designer could receive on their MOOCs, in terms of fine-grained analysis of their material, and what personalisation could be provided to a student.
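
As a sketch of the early-weeks prediction mentioned above, the snippet below trains a simple classifier to predict completion from per-learner activity counts in the first two weeks of a run. The file name, column names and feature set are hypothetical placeholders, not FutureLearn's actual export schema.

    # Hedged sketch: predicting MOOC completion from early-week activity.
    # The CSV layout and column names are hypothetical placeholders,
    # not FutureLearn's schema.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # One row per learner: step visits and quiz attempts in weeks 1-2,
    # plus a 0/1 label for whether the learner completed the run.
    df = pd.read_csv("learner_activity.csv")
    X = df[["steps_week1", "steps_week2", "quiz_attempts_week1",
            "quiz_attempts_week2", "days_registered_before_start"]]
    y = df["completed"]

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0, stratify=y)

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))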

A Bioimage Informatics QVEST: Quick, Versatile and Easy Segmentation & Tracking System

Carl Nelson

Bioimage informatics solutions are often developed on a case-by-case basis and, once complete, little research goes into extending them to a wide range of applications. Here we demonstrate how a single image analysis technique can be used to segment objects in different 3D microscopy images, including tracking objects through time series, segmenting complex shapes, and segmenting multiple objects in a single image, including touching or clustered objects. We have used our fast and robust deformable mesh system to segment a variety of different objects and track them through time series of images. We have designed a vector field-driven active mesh with a novel local termination method, using directional constraints for tracking and simple pre-processing steps for different scenarios. We see this as the first step towards developing a system that is Quick, Versatile (i.e. capable of dealing with many different scenarios with equal accuracy and precision) and Easy to use for a range of bioimaging Segmentation and Tracking needs: this is our QVEST.
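
To make the idea of vector field-driven evolution concrete, here is a toy 2D sketch in which contour points are advected along a smoothed image-gradient field, and each point stops individually once its displacement falls below a threshold, a crude stand-in for local termination. This is a generic active-contour illustration under those assumptions, not the QVEST mesh itself; image is any 2D greyscale array and points is an (N, 2) array of (row, col) coordinates.

    # Toy vector field-driven contour evolution (a generic active contour,
    # NOT the QVEST mesh). Points follow the gradient of a smoothed edge
    # map and stop individually once they barely move ("local termination").
    import numpy as np
    from scipy import ndimage

    def evolve_contour(image, points, steps=200, lr=0.5, tol=1e-3, sigma=2.0):
        # Edge-attraction field: gradient of the smoothed gradient magnitude.
        smooth = ndimage.gaussian_filter(image.astype(float), sigma)
        gy, gx = np.gradient(smooth)
        edge = np.hypot(gx, gy)
        fy, fx = np.gradient(ndimage.gaussian_filter(edge, sigma))

        points = points.astype(float)
        active = np.ones(len(points), dtype=bool)
        for _ in range(steps):
            if not active.any():
                break  # every point has locally terminated
            iy = np.clip(points[:, 0].astype(int), 0, image.shape[0] - 1)
            ix = np.clip(points[:, 1].astype(int), 0, image.shape[1] - 1)
            step = lr * np.stack([fy[iy, ix], fx[iy, ix]], axis=1)
            step[~active] = 0.0
            points += step
            active &= np.linalg.norm(step, axis=1) > tol  # per-point stop
        return points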

Next Generation Intelligent Sensing

Mikolaj Kundegorski

The school's image processing team have been awarded a research contract to work with the Defence Science and Technology Laboratory (DSTL) on developing an intelligent modular sensing system for surveillance and the protection of high-value assets. In both defence and civil environments there is a need to protect high-value land-based assets, such as military Forward Operating Bases (FOBs), civil dockyards and power stations, from various threats, typically human incursions into the area. Current surveillance systems impose a high operator burden through manual monitoring of the sensors. Increasing the autonomy of sensor systems is expected to reduce operator fatigue, with a resulting increase in detection and recognition performance for potential security threats. Ensuring that systems are modular in design and conform to an open architecture will reduce operating costs and increase the versatility of future military surveillance systems.

Future systems are expected to solve these problems through innovation in the form of a series of networked autonomous sensor modules acting as a single sensor network. Each sensor module will be able to decide what information to send and when and where to look for threats, while incorporating information from a central decision-making module. The Durham team are developing one such Autonomous Sensor Module (ASM), which autonomously reports the presence, trajectory and behaviour of humans within the scene using a thermal infra-red camera. It uses a range of computer vision and machine learning techniques to detect humans within the camera imagery and performs real-time tracking to provide GPS trajectory information within the scene. This will provide a real-time position update for any incursion into the “secure zone” across the wider sensing network. Further research will then investigate autonomous activity classification (i.e. what are they doing in the scene?) to provide real-time reporting and prioritisation of potential security threats. This work is being carried out in the school's Innovative Computing Group, under the direction of Dr. Toby Breckon.
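
For a flavour of the per-frame detection step, the sketch below runs OpenCV's stock HOG pedestrian detector over a video file. It is purely illustrative: the default detector is trained on visible-band imagery rather than thermal infra-red, the input file name is a placeholder, and the tracking and GPS geolocation stages are omitted.

    # Illustrative per-frame person detection with OpenCV's default HOG
    # detector. The real ASM targets thermal imagery and adds tracking
    # and GPS geolocation; none of that is shown here.
    import cv2

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    cap = cv2.VideoCapture("camera_feed.mp4")  # placeholder input file
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Returns candidate bounding boxes as (x, y, w, h) rectangles.
        boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
        for (x, y, w, h) in boxes:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("detections", frame)
        if cv2.waitKey(1) == 27:  # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()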

Virtual Environment Platforms for Socio-Educational Domains

Mohammed Farsi

The development of Virtual Environment (VE) platforms has significantly impacted socio-educational domains. With the advent of new technologies such as the Wii and the Xbox 360 Kinect, this relates not only to the learning environment, but also to the actual Human-Computer Interaction (HCI), whereby users can now experience full-body interaction without external controllers. This study explores how such technology could be incorporated in primary schools to teach the Islamic prayer and enhance the learning experience. The interactive Islamic Prayer (iIP) software has been designed for the Xbox 360 Kinect with this specific goal in mind, as an alternative to traditional learning methods such as textual or visual learning approaches. The study therefore seeks to establish whether there is a preference for an interactive style of learning over learning from a book or watching a video. The research uses a mixed-methods approach to determine which style learners prefer, as well as to ascertain whether teachers would be willing to incorporate it in their lessons.
The participants in this study are Saudi primary school children in Jeddah (n=30) and their teachers (n=3); the children currently learn the prayer using a prayer book and video. To assess preferences, an experiment with a within-group design has been devised, whereby each group experiences all methods of learning (the control conditions, video and book, as well as the iIP software) before making an informed decision on which they favour. Tests and questionnaires will be administered before and after each session and then analysed for comparison purposes. In addition, each session will be observed to see how the methods engage the learners.
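
As a sketch of how the within-group pre/post comparison might be analysed, the snippet below applies a Wilcoxon signed-rank test to paired test scores for one learning method. The score values are invented placeholders, not study data.

    # Hedged sketch: within-group pre/post comparison for one learning
    # method using a Wilcoxon signed-rank test. Scores are placeholders.
    from scipy.stats import wilcoxon

    pre  = [4, 5, 3, 6, 5, 4, 7, 5, 6, 4]   # hypothetical pre-session scores
    post = [6, 7, 5, 7, 6, 6, 8, 7, 7, 6]   # hypothetical post-session scores

    stat, p = wilcoxon(pre, post)
    print(f"Wilcoxon W={stat:.1f}, p={p:.3f}")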

Using Auditory Depth to Influence Perception of Visual Depth in Stereoscopic Images

Jonathan Berry

There has been much research concerning visual depth perception in 3D stereoscopic displays and, to a lesser extent, auditory depth perception in 3D spatial sound systems. With 3D sound systems now available in a number of different forms, there is increasing interest in integrating them with 3D displays. Designing optimal content for such systems requires us to reconcile the science of human vision and hearing with the technological limitations of the display systems. Moreover, in such cross-modal paradigms it is not uncommon for one sensory mode to influence and interact with another: there are many reports of audio-visual "illusions", where audio perception influences visual perception, or vice versa. How should these cross-modal effects inform the design of content for such displays? Can the effects be exploited to improve the viewing experience provided by such displays? How do we encapsulate our findings in software algorithms? This inter-disciplinary project, which spans Computer Science, Psychology and Engineering, seeks to improve the viewing experiences offered by the Cinema, TV, Gaming, Simulation and Visualisation industries.
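
As one concrete way such findings might be encapsulated in software, the sketch below uses standard stereoscopic viewing geometry to compute the on-screen disparity that places an object at the same distance as its sound source, together with the free-field level drop for a point source at that distance. The eye separation and screen distance values are illustrative assumptions, not project parameters.

    # Hedged sketch: matching on-screen stereo disparity to an intended
    # auditory distance, using similar-triangle viewing geometry and the
    # inverse-distance sound level law. Parameter values are illustrative.
    import math

    EYE_SEPARATION_M = 0.063   # typical adult interocular distance (assumed)
    SCREEN_DISTANCE_M = 2.0    # viewer-to-screen distance (assumed)

    def disparity_for_distance(z_m):
        # On-screen disparity (metres) that places a point at viewer
        # distance z_m. Positive = uncrossed (behind the screen),
        # negative = crossed (in front). From similar triangles:
        # d = e * (1 - V / Z).
        return EYE_SEPARATION_M * (1.0 - SCREEN_DISTANCE_M / z_m)

    def level_drop_db(z_m, ref_m=1.0):
        # Free-field point source: level falls 6 dB per doubling of
        # distance relative to the 1 m reference.
        return -20.0 * math.log10(z_m / ref_m)

    for z in (1.0, 2.0, 4.0, 8.0):
        print(f"{z:4.1f} m: disparity {disparity_for_distance(z) * 1000:+7.2f} mm, "
              f"level {level_drop_db(z):+6.1f} dB")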

Supporting Ambulance Crews through Electronic Information Provision

Eman Altuwaijri

We are investigating ways of improving services for people with epilepsy by providing better-informed ambulance responses to callouts for people who have had an epileptic seizure, working in collaboration with the North East Ambulance Service (NEAS) and James Cook Hospital (JCH).
Electronic information provision is achieved by implementing the concept of the ‘Information Broker’: a software system that acts as a trusted agent able to provide reliable, up-to-date patient health information from different sources. The Information Broker (IB) provides a service by receiving a request, then searching for and gathering relevant patient health information. The Data Access Service (DAS) acts as a transforming agent; it is an access medium for the data source as well as representing that data source to the IB. The DAS translates a query received from the IB into the local format of the data source, retrieves any relevant data and sends it back to the IB.
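
A minimal sketch of this broker pattern is shown below, under invented interfaces: each DAS wraps one data source and translates a broker query into that source's local format, while the IB fans the query out and aggregates the results. None of the class, method or field names come from the actual system.

    # Minimal sketch of the Information Broker (IB) / Data Access Service
    # (DAS) pattern described above. All names and the query shape are
    # invented for illustration, not the project's real interfaces.
    from abc import ABC, abstractmethod

    class DataAccessService(ABC):
        # Wraps one data source: translates broker queries to the source's
        # local format and returns results in a common shape.
        @abstractmethod
        def fetch(self, patient_id: str) -> list[dict]: ...

    class EpilepsyDatabaseDAS(DataAccessService):
        def __init__(self, db):
            self.db = db  # stand-in for a real database connection
        def fetch(self, patient_id: str) -> list[dict]:
            # "Translate" the broker query into this source's local format
            # (here, a simple dictionary lookup) and normalise the rows.
            rows = self.db.get(patient_id, [])
            return [{"source": "JCH epilepsy DB", **row} for row in rows]

    class InformationBroker:
        # Trusted agent: fans a patient query out to every registered DAS
        # and aggregates the responses.
        def __init__(self, services):
            self.services = services
        def query(self, patient_id: str) -> list[dict]:
            results = []
            for das in self.services:
                results.extend(das.fetch(patient_id))
            return results

    # Usage with an in-memory stand-in for the epilepsy database:
    db = {"patient-001": [{"history": "focal seizures", "medication": "lamotrigine"}]}
    broker = InformationBroker([EpilepsyDatabaseDAS(db)])
    print(broker.query("patient-001"))
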
We are initially developing a limited implementation of the broker model: when an ambulance is called to a patient identified as having some form of ‘blackout’, an enquiry will be sent to the Epileptic Patient Database that we are constructing in collaboration with JCH, where patient history is available. If the ambulance crew require additional health information, they too can make a patient-specific enquiry, and the relevant information will be summarised and relayed back to them. Finally, on scene, the crew will create an Electronic Patient Record Form (EPRF) and send it to the epilepsy database at JCH, where it will be added to the patient's health information.