Our seminars provide an enthusiastic forum for the presentation of the latest research from members of the group and visiting colleagues.
Friday 18 January 2013, 12:00, E360
Speaker: Ole Torp Lassen, Roskilde University, Denmark
Title: PLP for complex sequence analysis
Abstract: Systems that combine logic programming and statistical inference allow machine learning systems to integrate both relational and statistical information. In the LoSt project, we experiment with applying one such system, PRISM (Taisuke Sato & Yoshitaka Kameya), to complex, large-scale bioinformatics problems. Within the general domain of DNA annotation, the task of gene finding is characterized by large sets of extremely long and highly ambiguous data sequences, and it represents a challenging setting for efficient analysis. We have developed a compositional method, Bayesian Annotation Networks, in which the complex overall task is optimized or approximated by identifying and solving individual constituent tasks and, in turn, integrating their analytical results according to potential interdependencies.
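PRISM programs typically express sequence models such as hidden Markov models in probabilistic Prolog. As a rough illustration of the kind of sequence analysis involved, here is a minimal Viterbi decoder for a toy two-state gene-finding HMM in Python; all states and parameters are invented for the example and are not taken from the LoSt project.

```python
import math

# Toy HMM: hidden states are genomic region types, observations are
# nucleotides. All probabilities below are illustrative only.
states = ["noncoding", "coding"]
start = {"noncoding": 0.9, "coding": 0.1}
trans = {
    "noncoding": {"noncoding": 0.95, "coding": 0.05},
    "coding":    {"noncoding": 0.10, "coding": 0.90},
}
emit = {
    "noncoding": {"A": 0.30, "C": 0.20, "G": 0.20, "T": 0.30},
    "coding":    {"A": 0.20, "C": 0.30, "G": 0.30, "T": 0.20},
}

def viterbi(seq):
    """Most probable state path for an observed nucleotide sequence."""
    v = {s: math.log(start[s]) + math.log(emit[s][seq[0]]) for s in states}
    back = []
    for ch in seq[1:]:
        nv, ptr = {}, {}
        for s in states:
            prev, score = max(
                ((p, v[p] + math.log(trans[p][s])) for p in states),
                key=lambda t: t[1])
            nv[s] = score + math.log(emit[s][ch])
            ptr[s] = prev
        back.append(ptr)
        v = nv
    path = [max(v, key=v.get)]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

annotation = viterbi("ATATGCGCGCAT")  # one state label per nucleotide
```

In PRISM the same model would be written declaratively with `msw/2` switches, and the system would supply inference and EM learning; the point here is only the shape of the underlying computation.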
Friday 30 November 2012, 12:00, E360
Speaker: David Parsons, Massey University, New Zealand
Title: Serious Mobile Learning Games: 21st century tools for 21st century skills
Abstract: Today's educators are required to prepare students for an unpredictable future that will bring unknown technologies, careers and challenges. Employers call for graduates with not just technical competencies but the ability to engage in higher-level thinking. This seminar describes the design and development of a mobile serious game that fosters high-level cognitive and interpersonal skills such as non-routine problem solving, critical thinking, systems thinking and teamwork. The game augments a physical location to represent a virtual company, within which players act as business consultants, and its narrative-driven activities address issues common to business simulations by leveraging the contextual learning made possible by mobile devices. The game is freely available as an open source project and can be configured for use in any location.
Friday 09 November 2012, 12:00, E360
Speakers: David Corrigan and Marcin Gorzel, Trinity College Dublin
Title: A Virtual Environment for Audiovisual Depth Perception Studies
Abstract: In this talk, we will introduce a novel environment for audiovisual depth perception experiments. Our goal is to determine the minimum distance between audio and visual stimuli at which they are perceived as being apart. The visual environment for the experiments is presented to test subjects as a stereoscopic image pair displayed under orthostereoscopic conditions. The images that form the environment were captured by a controlled track of a video camera along an axis perpendicular to the camera's principal axis. This allows us to capture some monoscopic depth cues, such as lighting and perspective, naturally, and allows adjustment of the interocular distance. The audio stimuli are presented by rendering externalised acoustic images over standard headphones. Our methodology is centred on a spherical-harmonics decomposition of the sound field, known as Ambisonics. The method allows sound-field reproduction with an arbitrary level of spatial accuracy as well as correct depth. Additional accuracy and realism are achieved with head tracking, which stabilises the rendered sound sources at their intended locations. The binaural decoding is accomplished by creating a virtual array of loudspeakers that all contribute to the sound-field reproduction. The methodology has been shown to provide a sensation of auditory distance in rooms equivalent to that of real sound sources.
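To give a flavour of the Ambisonics idea, here is a minimal horizontal first-order sketch: a mono sample is encoded into B-format components (W, X, Y) and then decoded to a ring of virtual loudspeakers. The normalisation conventions below are one common choice among several, and the binaural stage the speakers describe (convolving each virtual speaker feed with an HRTF) is not shown.

```python
import math

def encode(sample, azimuth):
    """First-order B-format encoding of a mono sample from `azimuth` (rad)."""
    w = sample / math.sqrt(2)        # omnidirectional component
    x = sample * math.cos(azimuth)   # front-back figure-of-eight
    y = sample * math.sin(azimuth)   # left-right figure-of-eight
    return (w, x, y)

def decode(b_format, speaker_azimuths):
    """Basic decode to an equally spaced ring of virtual loudspeakers."""
    w, x, y = b_format
    n = len(speaker_azimuths)
    return [(w * math.sqrt(2) + 2 * (x * math.cos(a) + y * math.sin(a))) / n
            for a in speaker_azimuths]

ring = [i * math.pi / 2 for i in range(4)]   # 4 virtual speakers
gains = decode(encode(1.0, 0.0), ring)       # source straight ahead
```

For a source aligned with a speaker, that speaker receives the largest feed while the total pressure is preserved, which is the mode-matching intuition behind higher-order decoders as well.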
Friday 12 October 2012, 12:00, E360
Speaker: Boguslaw Obara, Durham University
Title: BioImage Informatics for Biological Networks
Abstract: Networks are fundamental to transport and communication in biological systems yet, despite their crucial importance in nature, little is known about their properties and design. This is mainly due to a lack of high-resolution spatio-temporal data on network structure which would allow us to visualise network dynamics which, in turn, would provide us with a means to test and validate hypotheses on structure and function. Recent advances in technology are now enabling us to acquire such data, but the sheer size and complexity of the data exceeds the capacity of human analysis, creating a fundamental barrier that hinders the potential impact of imaging-enabled biology. New revolutions in the field can be made possible only by developing the tools, technologies and work-flows to extract and exploit information from imaging data so as to achieve new fundamental biological insights and understanding, as well as developing possible therapeutic strategies in medical applications.
Here, we propose to integrate bioimaging data and bioimage informatics to investigate the multi-scale topology and dynamics of complex biological networks. We have developed a powerful new high-throughput image-processing approach specifically designed to extract the network architecture from low-contrast, noisy images of biological networks. We use intensity-independent phase-congruency tensors, applied over a range of scales and rotational angles, to selectively enhance the curvilinear features that form the basis of the biological network. This approach efficiently captures weighted networks with up to 10⁶ links from noisy, relatively low-resolution images of developing fungal mycelial systems, which we use both as a biological system in their own right and as experimentally tractable real-world networks for testing algorithm development.
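Once the curvilinear structures have been enhanced and thinned, a weighted network can be read off the resulting binary skeleton: foreground pixels become nodes and 8-connected neighbours become weighted links. The sketch below illustrates only this downstream extraction step on a tiny mask; the phase-congruency tensor enhancement itself is not reproduced here.

```python
import math

def skeleton_to_graph(mask):
    """mask: 2D list of 0/1 skeleton pixels.
    Returns an adjacency dict {node: {neighbour: distance}} with
    Euclidean pixel distances as link weights."""
    nodes = {(r, c) for r, row in enumerate(mask)
                    for c, v in enumerate(row) if v}
    graph = {n: {} for n in nodes}
    for (r, c) in nodes:
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if (dr, dc) != (0, 0) and (r + dr, c + dc) in nodes:
                    graph[(r, c)][(r + dr, c + dc)] = math.hypot(dr, dc)
    return graph

mask = [[1, 1, 0],
        [0, 1, 0],
        [0, 1, 1]]
g = skeleton_to_graph(mask)
n_links = sum(len(v) for v in g.values()) // 2   # each link stored twice
```

On real mycelial images the same idea runs over millions of pixels, with junction pixels merged into single nodes and runs of skeleton pixels collapsed into single weighted edges.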
Friday 22 June 2012, 12:00, E360
Speaker: Andrew Ferrone, RealD
Title: RealD - The New 3D
Abstract: RealD Inc., together with its subsidiaries, licenses stereoscopic 3D technologies in the United States, Canada, and internationally. The company designs, manufactures, licenses, and markets its RealD Cinema Systems that enable digital cinema projectors to show 3D motion pictures and alternative 3D content to consumers wearing the company’s RealD eyewear. It also offers RealD Display, active and passive eyewear, and RealD Format technologies to consumer electronics manufacturers, and content producers and distributors, to enable the delivery and viewing of 3D content on high definition televisions, laptops, and other displays. In addition, the company sells CrystalEyes eyewear, monitors, digital light processing television kits, polarizer films, emitters, and linear polarizing systems to companies, government agencies, academic institutions, and research and development organizations, for applications ranging from piloting the Mars Rover to theme park installations. RealD Inc. was founded in 2003 and is headquartered in Beverly Hills, California.
Friday 11 May 2012, 12:00, E245
Speaker: Pedro Gonnet, Durham University
Title: Algorithms for high-performance computing on modern shared-memory parallel architectures, e.g. multi-core CPUs and GPUs
Abstract: For the past five years, computers have become faster only by becoming more parallel: any standard desktop PC now has at least four cores and a GPU with a hundred more. Future architectures, such as the Intel MIC, are set to follow that trend. In order to exploit this new massive shared-memory parallelism for high-performance computing applications, new algorithms and/or programming approaches are needed. In this talk, I will present a simple yet powerful problem-independent parallelization scheme and discuss its application to both multi-core and GPU architectures. The take-home messages from this talk are, firstly, that good parallelization is not difficult and relies heavily on the problem formulation, and, secondly, that GPU programming is not as different from regular multi-core programming as many may think.
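The "parallelism comes from the problem formulation" point can be illustrated with a generic task pool: the problem is decomposed into independent tasks, and any free worker pulls the next one. This is only a minimal sketch of that general idea, not the speaker's actual scheme or its GPU mapping.

```python
from concurrent.futures import ThreadPoolExecutor

def run_tasks(tasks, workers=4):
    """Run a list of zero-argument tasks on a pool of workers."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda t: t(), tasks))

# Decompose a reduction over 0..999 into four independent partial sums,
# run them concurrently, then combine the results.
chunks = [range(i, i + 250) for i in range(0, 1000, 250)]
partials = run_tasks([lambda c=c: sum(c) for c in chunks])
total = sum(partials)   # == sum(range(1000)) == 499500
```

The same decomposition maps to a GPU by making each task a thread block, which is part of the talk's argument that the two programming models are closer than they appear.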
Friday 27 April 2012, 12:00, CS 2.11
Speaker: David Budgen, Durham University
Title: How well can Empirical Software Engineering provide support for Evidence-Informed Teaching & Practice?
Abstract: My talk will review a number of major developments in empirical software engineering and present some recent outcomes. I will briefly outline the ways in which our interpretation of ‘primary study’ has been evolving, both in terms of form and of quality; describe the impact made by evidence-based software engineering (EBSE) with its use of ‘secondary studies’; and then assess how well this is now able to inform our teaching, practice (and research). To do so, I will be drawing upon the outcomes from some recent studies, including a tertiary study that has analysed 150 published systematic literature reviews, as well as a recently-completed empirical study that has been seeking to determine whether the quality of empirical papers has improved in the past decade.
Friday 09 March 2012, 12:00, E360
Speaker: Paul Watson, University of Newcastle
Title: Cloud Computing for Science
Abstract: Cloud computing has the potential to revolutionise science by giving scientists access to the computational resources they need, when they need them. However, cloud computing does not make it easier to build the scalable, reliable and secure applications that scientists require. To address this, we have designed “e-Science Central” – a Science-as-a-Service platform that combines three emerging technologies – Software as a Service (so users only need a web browser to do their science), Social Networking (to encourage interaction and the creation of communities) and Cloud Computing (to provide storage and computational power). Using only a browser, users can upload data, share it in a controlled way with colleagues, and analyse it using workflows which coordinate either pre-defined services, or their own, which they can upload for execution and sharing. This talk will introduce cloud computing and illustrate how e-Science Central is used in a range of scientific applications, from chemical informatics to health monitoring.
Bio: Paul Watson is Professor of Computer Science and Director of the Digital Institute at Newcastle University. There he leads a range of projects that design and exploit cloud computing solutions, including the £12M RCUK “Social Inclusion through the Digital Economy” research hub. His research is focussed around the scalable, secure and portable "e-Science Central" cloud platform, which aims to simplify the task of exploiting the potential of cloud computing. Professor Watson joined Newcastle University from industry (ICL High Performance Systems) where he designed scalable database systems. He is a Chartered Engineer and Fellow of the British Computer Society.
Friday 24 February 2012, 12:00, E360
Speaker: Jonathan Berry, Durham University
Title: Using auditory depth to influence perception of visual depth in 3D stereoscopic displays.
Abstract: Although extensive research has been undertaken to develop and use 3D visual displays effectively, relatively little thought has been given to their integration with 3D sound systems. The literature clearly shows that what we see and hear are interdependent, giving examples of some striking multi-sensory illusions. Can we use these illusions to enhance stereoscopic content for TV, gaming, cinema and virtual environments? We will review the psychology of multi-sensory illusion and depth perception, and the physics of cutting-edge 3D sound systems and visual displays, before presenting some novel experiments demonstrating a possible application to integrated auditory and visual 3D environments.
Bio: Jonathan Berry graduated from Durham University with a BSc Joint Honours in Physics and Computer Science and is now undertaking a PhD in the Durham Visualisation Laboratory. He is interested in the effective use of 3D sound in 3D media, currently focusing on whether 3D sound has the potential to extend a 3D display's zone of comfort. His work in this field has been presented at conferences in Durham, Belgium and California and published in the Proceedings of Stereoscopic Displays and Applications XXII.
Wednesday 11 January 2012, 15:00, Visualization Lab
Speaker: Andrew Woods, Curtin University, Perth, Australia
Title: HMAS Sydney II in 3D
Abstract: In 1941 the Australian warship HMAS Sydney II was sunk with the loss of all hands by the German raider HSK Kormoran off the coast of Western Australia. When the wreck site was discovered in March 2008, it was extensively photographed in 2D, but unfortunately a 3D camera was not available. The Centre for Marine Science & Technology (CMST) recently gained access to the archive of 1443 still images and has succeeded in extracting a selection of true stereoscopic 3D images of the wreck site. These 3D images provide a unique realism to scenes that could not be seen in 3D when the wreck was originally surveyed. The 3D images have been extracted from the 2D photo archive using a process sometimes known as "accidental stereo": advanced image warping and rectification algorithms are used to match selected 2D image pairs and prepare the 3D images. The presentation will discuss the technique used to extract the 3D images and show a selection of 3D images of the wreck site.
Bio: Andrew Woods is a research engineer at Curtin University's Centre for Marine Science & Technology. He has over 20 years of experience in the design, application, and evaluation of stereoscopic 3D imaging solutions for industrial and entertainment applications. He is also co-chair of the Stereoscopic Displays and Applications conference held annually in California.
Friday 02 December 2011, 12:00, E360
Speaker: Hamish Carr, University of Leeds
Title: Urban-scale LiDAR Scan and Engineering Analysis
Abstract: Most urban modelling to date has focussed on the conventional computer graphics pipeline that constructs individual buildings with textures for visual fidelity when rendering. Civil engineers, however, are more interested in finite element models that allow them to predict the behaviour of a building under load. Moreover, at the urban level, engineers rarely have the leisure of building an urban model one building at a time.
The engineering requirements can instead be addressed by performing an aerial LiDAR scan (ALS) and converting it directly to a suitable model. We show how to perform an ALS to capture facade details in dense urban settings, and how to convert it rapidly to effective FEM models of individual buildings.
Joint work with Tommy Hinks, Linh Truong-Hong and Debra Laefer.
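Converting an aerial LiDAR scan into a model typically starts by rasterising the point cloud: (x, y, z) returns are binned into a ground-plane grid and reduced per cell. The sketch below keeps the maximum elevation per cell; the cell size and the max-reduction are illustrative choices only, not the authors' pipeline.

```python
def height_map(points, cell=1.0):
    """points: iterable of (x, y, z) LiDAR returns.
    Returns a sparse grid {(col, row): max z within that cell}."""
    grid = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        grid[key] = max(grid.get(key, float("-inf")), z)
    return grid

pts = [(0.2, 0.3, 5.0), (0.8, 0.1, 7.5), (1.4, 0.2, 2.0)]
hm = height_map(pts)    # {(0, 0): 7.5, (1, 0): 2.0}
```

From such a raster, building footprints and facade points can be segmented and meshed into the per-building FEM models the abstract mentions.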
2010 Michaelmas Term
Seminars are usually held on Friday at 12:00 in E 360, but you should check the seminar details for exceptions. Contact email@example.com for more information about this seminar series.
- 10 December 2010 12:00: TBA - Yasser Bamarouf, TEL Group, Durham University
- 26 November 2010 12:00: TBA - Anjum Maria, Durham University
- 12 November 2010 12:00: Understanding the relationship between voice and gestures in classical Indian singing - Paschalidou Panagiota-Styliani, Music Department, University of Durham
- 15 October 2010 12:00: Sketching-Based Skeleton Generation - Qingzheng Zheng, Innovative Computing Group, Durham University
- 1 October 2010 12:00: The Seminar has been cancelled - Dr Paul Cairns, Human-Computer Interaction Group, York University