Centre for Advanced Instrumentation

Abstracts for Durham AO real-time control workshop

Talks should aim to finish with a few minutes to spare; questions will be fielded during the round table sessions.

Nigel Dipper (Durham): CfAI research and development (15 minutes)

In addition to the various instrumentation projects at the CfAI, with the CANARY MOAO demonstrator being a prime example, there is an on-going program of "blue-skies" research and development in astronomical instrumentation funded by the UK Science and Technology Facilities Council.  One theme of this program is real-time control for adaptive optics.  I give here an overview of this program, which includes WFS camera pixel handling, wave-front reconstruction techniques, DM control and data transport.  The program places emphasis on investigating the appropriate hardware (CPU, GPU, FPGA etc.) to be used for the RTCS implementation as AO moves to higher orders in the era of ELTs.

Hermine Schnetler (UKATC): Systems engineering (15 minutes)

Why do we need AO systems engineering?

Kit Richards (NSO): Solar AO overview (15 minutes)

I can talk about solar AO requirements in general, the architecture of our currently running AO real-time systems, including the custom camera, and the requirements for AO for our new 4-meter solar telescope, now beginning construction.

Eddy Younger (Durham): Middleware (15 minutes)

An overview of middleware software is presented, discussing why it is important for AO RTC and what common options are available. The CANARY middleware solution is presented along with the strengths and weaknesses of this system.

Fabrice Vidal (Observatoire de Paris - LESIA):  STYC: The CANARY user interface (15 minutes)

CANARY is the MOAO demonstrator for the EAGLE instrument on the E-ELT. We will present the software interface used during the first on-sky MOAO demonstration, performed in 2010. This software, called STYC (Smart Tool in Yorick for Canary), was fully interfaced with the system and allowed us to perform several calibration schemes and compute the proper MOAO tomographic reconstructor.

Thierry Fusco (ONERA): ORCA (20 minutes)

We will present the ORCA concept (Open RTC Architecture) developed in the framework of the European FP7 program. ORCA is based on more than 10 years of collaboration and common developments between ONERA (for algorithm development) and Shaktiware (for RTC hardware and software architectures). It gathers the most advanced algorithms in the fields of both wavefront sensing and control. Based on several illustrations (ranging from a fast [more than 1.5 kHz] classical AO system working on-sky at visible wavelengths to multi-conjugate AO systems combining LGS and NGS in the lab) we will detail the specificities, performance and versatility of the ORCA system. We will show that its Linux PC-based open architecture is fully compatible with very fast, complex and highly efficient AO systems for astronomical, military and civilian applications.

Albert Conrad (MPIA): LBT Linc-Nirvana RTC and simulations (20 minutes)

Current Status of Adaptive Optics Real-time Control for Linc-Nirvana, an Imaging Interferometer for LBT
F. Briegel, A. Conrad, T. Bertram, J. Berwein, X. Zhang, J. Trowitzsch, F. Kittman

Although it will be the last of the first-generation instruments for the Large Binocular Telescope (LBT), Linc-Nirvana (L-N) will be the first instrument to measure the wavefront and then correct distortions internally, pass reconstructor values to the twin LBT adaptive secondary mirrors, or both. We report here performance results from simulations of the adaptive optics control loop being developed at the Max Planck Institute for Astronomy for this purpose. Using wavefront sensor data collected from a prior lab experiment, we have shown via simulation that a multi-core Linux system is sufficient to operate at 1 kHz, with jitter well below the needs of the final system. We tested several combinations by varying the processor type, scheduling algorithm, and number of cores used. We will present results from these tests and, in addition, report on the graphical user interface tools that were developed to visualize and compare the performance from these different configurations.

David Palmer (LLNL): Gemini planet imager RTC (20 minutes)

The Gemini Planet Imager (GPI) is a state-of-the-art near-infrared instrument being built for one of the two 8-meter Gemini telescopes. In order to achieve its primary goal of detecting and imaging extra-solar planets, GPI will need to achieve contrast ratios of order 10^-7. This requires a high-order adaptive optics (AO) system with greater than 1800 sub-apertures and corresponding deformable mirror degrees of freedom. Because a deformable mirror with that many actuators and adequate stroke to fully correct the atmosphere does not exist, two deformable mirrors are required in a woofer/tweeter arrangement. In order to adequately correct the atmosphere, the adaptive optics system will need to operate at a high rate: 1500 frames per second, with a goal of 2000. Finally, to reduce loop lag (a large controller error term), the system will need to process each frame very quickly: within the following 1.5 frames, with a goal of 1. All of this puts a significant demand on the adaptive optics computer. This demand is being met with efficient algorithms and an off-the-shelf high-end server. We will detail the architecture of the real-time controller and present measured performance results.
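
The frame-rate and delay figures quoted above imply a concrete time budget. A back-of-envelope sketch (the arithmetic uses the abstract's numbers; the budget itself is illustrative, not GPI's actual pipeline breakdown):

```python
# Illustrative latency budget for a GPI-like AO loop, using the rates quoted
# in the abstract (1500 fps baseline, 2000 fps goal, 1.5-frame compute delay).

def frame_time_us(rate_hz):
    """Duration of one WFS frame in microseconds."""
    return 1e6 / rate_hz

baseline = frame_time_us(1500)       # ~666.7 us per frame at the baseline rate
goal = frame_time_us(2000)           # 500 us per frame at the goal rate

# The abstract's 1.5-frame delay requirement translates into the time the RTC
# has to read out, reconstruct and command the DMs after a frame arrives.
compute_budget_us = 1.5 * baseline   # ~1000 us at the baseline rate

print(round(baseline, 1), round(goal, 1), round(compute_budget_us, 1))
```

At the goal of 2000 fps with a one-frame delay, the same arithmetic leaves only 500 us for the entire pixel-to-command path, which is why the abstract stresses efficient algorithms.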

Charton (Grenoble):  Adaptive optics using commodity hardware and software: myths and realities (20 minutes)

Considering the GBit/s and TFLOPs advertised for modern CPUs, GPUs and TCP links, adaptive optics should be easy to parallelize using commodity hardware and software. But in real life, latency, jitter and cache-size issues can kill performance. A mid-range AO computer (1000 actuators @ 1 kHz) has been built and benchmarked to estimate real-world performance and to identify how cheap TFLOPs can be translated into real value for AO.
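
The gap between advertised TFLOPs and delivered AO performance can be made concrete with a rough sizing of the matrix-vector multiply for the mid-range system above. The actuator count and rate come from the abstract; the slope count is an assumed, illustrative figure:

```python
# Rough sizing of the MVM workload for a 1000-actuator, 1 kHz AO computer.
# n_slopes is hypothetical (~2 slope measurements per subaperture assumed).

n_act = 1000          # actuators (from the abstract)
n_slopes = 2000       # assumed slope count, for illustration only
rate_hz = 1000        # frame rate (from the abstract)

flops = 2 * n_act * n_slopes * rate_hz     # one multiply + one add per entry
matrix_bytes = n_act * n_slopes * 4        # float32 control matrix
bandwidth = matrix_bytes * rate_hz         # matrix must stream every frame

print(flops / 1e9, "GFLOP/s")    # trivial next to an advertised TFLOP
print(bandwidth / 1e9, "GB/s")   # sustained memory traffic: the real constraint
```

Under these assumptions the compute load is only a few GFLOP/s, far below peak, while the control matrix must stream from memory every frame; this is one way "cheap TFLOPs" fail to translate directly into AO value.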

Alastair Basden (Durham): The Durham AO Real-time Controller and CANARY (20 minutes)

CANARY is an ambitious MOAO technology demonstrator instrument which saw first light in September 2010.  Here, I give details of the AO control system that was developed and used with CANARY, including the real-time control system, telemetry interfaces, control interfaces, sequencing, automation and scripting, as well as the graphical tools developed for CANARY.  A fast, modular, flexible and configurable real-time control system was developed for CANARY and for bench experiments in Durham.  The Durham Adaptive optics Real-time Controller (DARC) has been released under a GPL license and is freely available.  As well as being easy to set up in a laboratory, its modular nature means that hardware-accelerated components can also be used.  Details are given of an FPGA pixel processor and a GPU-based reconstructor that have been used on-sky.  The architecture of DARC, including the low-latency multi-threaded design, is discussed, with details of how this design allows performance to be optimised.  A discussion of implemented algorithms is given, along with performance estimates of how the design scales to large AO systems.

Remco den Breeje (TNO): A control set-up for adaptive optics and real-time vision in the loop (20 minutes)

A real-time control set-up for Adaptive Optics is discussed. It consists of commercial off-the-shelf hardware, real-time Linux and an in-house developed data communication protocol. This set-up provides an ideal experimental environment for control algorithm validation and on-line tuning. It has recently been used to implement and test H2-optimal AO control and adaptive algorithms for AO.

The second half of the presentation addresses the reduction of the closed-loop time delay. Apart from Adaptive Optics systems, a camera in the loop can be an attractive solution for other motion control applications requiring fast and accurate alignment. For these purposes the loop time delay is an essential parameter in achieving a high control bandwidth. The standard camera repetition interval and the pipelined transport and processing via various interfaces may add up to a significant time delay. Also, for AO the camera read-out and communication time may seriously limit the performance in terms of the temporal wavefront error. TNO has investigated these aspects and proposes an advanced real-time architecture to minimize the overall system latency. For a particular motion set-up a frame rate of 10 kHz has been achieved with a time delay as small as a single sample interval, i.e. 0.1 ms. The approach will be illustrated for the case of a fast focus motion system.

Enrico Fedrigo (ESO):  SPARTA: Status and plans (20 minutes)

SPARTA, the ESO Standard Platform for Adaptive optics Real Time Applications, is the real-time computing platform serving four major second-generation instruments at the VLT (SPHERE, GALACSI, GRAAL and ERIS) and several smaller instruments (GRAVITY, NAOMI). SPARTA offers a very modular and fine-grained architecture which is generic enough to serve a variety of AO systems. SPARTA includes the definitions of all the interfaces between those modules and provides libraries and tools to implement and test the various modules, as well as a map to technologies capable of delivering the required performance, most of them innovative with respect to the ESO standards in use.

For the above-mentioned instruments, SPARTA also provides a complete implementation of the AO application, with features customized for each instrument. We present the architecture of SPARTA, its technologies, functions, performance and test tools, as well as the plans to extend the reach of the platform to smaller systems with what we call SPARTA Light.

Deli Geng (Durham):  SPARTA and FPGA developments at Durham (20 minutes)

The CfAI has been using FPGAs to accelerate AO real-time control for over 7 years. We have successfully developed a pixel processing pipeline called the Wavefront Processing Unit (WPU), which handles raw pixels and calculates wavefront slopes using a weighted centre-of-gravity algorithm. FPGA work has also been done on floating-point MVM operations and high-speed serial communication. We are currently collaborating with industrial partners to develop our FPGA cluster for future E-ELT-sized AO real-time reconstruction.

Thierry Fusco (ONERA):  The SPHERE RTC (20 minutes)

The SPHERE system is the future VLT planet finder. In the coming years it will aim to directly detect and characterize extrasolar planets orbiting nearby stars. Extreme AO (SAXO) is the core of this ambitious instrument. It has to correct for turbulence and system defects at an unprecedented level. The RTC is of course an essential and critical SAXO subsystem. Based on the ESO SPARTA architecture, it is currently being developed by ESO following ONERA algorithm specifications and performance requirements. In this presentation we will recall the main and unique characteristics of SAXO and its RTC. We will analyse their consequences for the expected performance of SPHERE (which has been derived using a very complete end-to-end simulation tool available at ONERA). Finally, first laboratory results from the SAXO AIT will be presented and compared to the initial requirements.

Luis Rodriguez-Ramos (Instituto de Astrofisica de Canarias):  EDIFISE (20 minutes)

EDIFISE is a technology demonstrator instrument intended to explore the feasibility of combining Adaptive Optics with attenuated optical fibers in order to obtain high spatial resolution spectra. FPGA-based real-time control of the High Order unit will be in charge of controlling a 97-actuator deformable mirror using the information provided by a configurable wavefront sensor. The reconfigurable logic hardware will allow both zonal and modal approaches, with full access to select which mode loops are closed and with a number of utilities for influence matrix and open-loop response measurement. The design of the system will be depicted, along with the development status and the available results.

Marta Puga Antolin (Instituto de Astrofisica de Canarias and Universidad de La Laguna):  Tip-tilt digital controller: design and study of the finite precision implementation on an FPGA (20 minutes)

The correction of the low-order modes of the atmospheric perturbation (tip-tilt) is carried out in the instrument EDIFISE (IAC - Instituto de Astrofísica de Canarias) by a closed-loop digital controller designed to command a PZT platform driving a tip-tilt mirror. The designed servo consists of a proportional-integral compensator plus a phase-lag network. The real-time implementation of this servo on an FPGA involves the use of two's complement arithmetic (finite precision). The structure of this IIR filter is chosen to optimize the FPGA's efficiency in terms of memory, number of operations and speed of the calculations using parallel algorithms. The sensitivity of the structure's performance to the reduction of precision in the filter coefficients is also studied. During the algorithm execution, the overflow of registers is prevented by scaling or by finding the boundaries of the variables. An evaluation of the quantization error introduced and propagated in the system due to the finite word length is also considered. Taking all these aspects into account, the reliability of the fixed-point algorithm implemented on the FPGA is demonstrated.
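
The coefficient-quantization effect studied above can be sketched in a few lines. This is purely illustrative: the coefficients below are hypothetical, not the EDIFISE servo's, and the rounding model stands in for two's-complement fixed point:

```python
# Illustrative sketch (not the EDIFISE implementation): quantize IIR filter
# coefficients to a fixed-point grid and bound the resulting coefficient error.

def quantize(x, frac_bits):
    """Round x to the nearest value representable with frac_bits fractional bits."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

# Hypothetical compensator coefficients, for illustration only
b0, a1 = 0.2, 0.98

err = 0.0
for frac_bits in (8, 16):
    err = max(abs(b0 - quantize(b0, frac_bits)),
              abs(a1 - quantize(a1, frac_bits)))
    # worst-case coefficient error is half an LSB, i.e. 2**-(frac_bits + 1)
    print(frac_bits, err)
```

Halving the LSB halves the worst-case coefficient error; the talk's contribution is analysing how such errors, plus round-off and overflow, propagate through the closed loop.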

Vivek Venugopal (National Solar Observatory):  Real-time control for the Advanced Technology Solar Telescope (20 minutes)

Real-time processing for Adaptive Optics (AO) systems is challenging, as the motion vectors have to be computed to properly actuate the mirrors before the wavefront information becomes obsolete. The four-meter Advanced Technology Solar Telescope (ATST) will provide unprecedented resolution for solar observation due to its larger aperture. The ATST AO system, with a 2 kHz frame-rate camera, 1750 sub-apertures and 1900 actuators, requires massive parallel processing, and this increased demand in computational horsepower is far from manageable by conventional processors. Hardware accelerators such as Field Programmable Gate Arrays (FPGAs) and Graphics Processing Units (GPUs) are better equipped to harness the parallel processing requirements of such a system. We investigate the implementation of the data processing architecture for Shack-Hartmann correlation and wavefront reconstruction using FPGAs and GPUs. We benchmark the AO algorithm implemented using FPGAs and GPUs and compare it with the existing legacy FPGA-Digital Signal Processing (DSP) based hardware system used in the 76 cm Dunn Solar Telescope (DST).
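
Solar AO uses correlation rather than spot centroiding because the Sun provides extended scenes, not point sources. A one-dimensional toy version of the correlation step (a sketch for intuition, not the ATST pipeline, which works on 2-D subaperture images with sub-pixel interpolation):

```python
# Toy 1-D illustration of correlation-based shift estimation, the core of a
# solar Shack-Hartmann pipeline: the displacement of a subaperture image
# relative to a reference is the argmax of their cross-correlation.

def shift_estimate(ref, img):
    """Return the integer shift of img relative to ref via circular correlation."""
    n = len(ref)
    best_shift, best_score = 0, float("-inf")
    for s in range(-n // 2, n // 2 + 1):
        score = sum(ref[i] * img[(i + s) % n] for i in range(n))
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

ref = [0, 0, 1, 3, 1, 0, 0, 0]      # reference intensity profile
img = ref[-2:] + ref[:-2]           # same profile shifted by two pixels
print(shift_estimate(ref, img))
```

Doing this for ~1750 subapertures at 2 kHz, with 2-D correlations, is the parallel workload the abstract targets with FPGAs and GPUs.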

Damien Gratadour (Observatoire de Paris):  Introducing YoGA : Yorick with Gpu Acceleration (20 minutes)

YoGA is a plugin for Yorick, an interpreted programming language for scientific simulations and calculations. YoGA uses the standard API for interfacing Yorick packages to the interpreter and thus provides, within the "user-friendly" framework of this language, the basic tools to build GPU-accelerated applications. YoGA has been built on top of NVIDIA's CUDA library and so, for now, is restricted to this manufacturer's hardware. However, future development plans include porting YoGA to OpenCL, allowing its use on various parallel architectures. Available features include: BLAS linear algebra routines, fast Fourier transforms, fast Fourier convolution, random number generation and array scanning. YoGA is widely extendable, as it includes a custom kernel template and a corresponding auto-tuned launcher. User feedback is now required so that future developments can target useful features.

Glen Herriot (NRC-HIA): The Raven RTC and GPU studies (20 minutes)

Raven is a multi-object AO demonstrator being developed at the University of Victoria for the Subaru Telescope. Raven's RTC is expected to use a mixture of FPGAs, CPUs and GPUs. OpenCL is an emerging open multi-vendor standard for heterogeneous computing systems such as Raven. It is portable across multiple targets: CPUs, GPUs and potentially other compute devices, e.g. FPGAs, DSPs and Cell processors. OpenCL on CPUs leverages the available vector hardware, SSE and AVX, without hardcoding 4-wide SIMD; it automatically runs on Sandy Bridge and vectorizes operations. Current NVIDIA benchmarks show that OpenCL performs as well as NVIDIA CUDA, but it also works well on GPUs from AMD and other vendors. Using OpenCL for the next generation of AO RTCs has the big advantage that it decouples the development from the choice of hardware. Current FPGA-based hardware is expensive and difficult to program; GPUs are attractive alternatives. From a software engineering perspective, OpenCL enables finessing of trade-offs between speed and portability among device classes. As a low-level abstraction of the devices' hardware, OpenCL delivers a solid baseline performance and offers a seamless route to hardware-specific optimizations. We present the Raven RTC, its hardware architecture including the WFS interface, and processing benchmarks on a GPU.

Ali Bharmal (Durham):  Algorithm optimisation (20 minutes)

I will give an overview of wavefront reconstruction strategies and their implications.

Sivo Gaetano (L2TI-Paris 13):  MOAO LQG control structure for CANARY (20 minutes)

We show how the specific MOAO CANARY configuration can be embedded in a state-space framework. The state-space model includes stochastic auto-regressive models of order 2 for the turbulent phase in each layer and for vibrations affecting the telescope, measurements from laser guide star (LGS) and natural guide star (NGS) wave-front sensors (WFSs), the DM, and delays in the loop. The linear quadratic Gaussian (LQG) controller is derived from a minimum variance criterion. It combines an explicit tomographic reconstruction of phase and vibration modes using a Kalman filter with an orthogonal projection onto the deformable mirror. This formulation makes it easy to cope with asynchronous control and/or multi-rate NGS/LGS WFS operation. Identification issues for these models are also discussed.
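
The predictive use of an AR(2) model inside such a controller can be illustrated with a deliberately stripped-down sketch: a single scalar mode, a fixed observer gain standing in for the time-varying Kalman gain, and deterministic sinusoids standing in for process and measurement noise. None of this is the CANARY implementation; it only shows the predict/update structure:

```python
# Toy sketch of AR(2)-model-based prediction with a fixed observer gain
# (a stand-in for the Kalman gain); coefficients and "noise" are hypothetical.

import math

a1, a2 = 1.9, -0.95        # assumed AR(2) coefficients (stable, near-oscillatory)
gain = 0.5                 # fixed observer gain, for illustration only

phi = [0.0, 0.0]           # "true" turbulent mode
est = [0.0, 0.0]           # estimated mode

for k in range(200):
    w = 0.1 * math.sin(0.7 * k)                 # deterministic process disturbance
    new = a1 * phi[-1] + a2 * phi[-2] + w
    phi.append(new)
    meas = new + 0.05 * math.sin(1.3 * k)       # deterministic measurement error
    pred = a1 * est[-1] + a2 * est[-2]          # model-based prediction step
    est.append(pred + gain * (meas - pred))     # measurement update step

# mean squared tracking error over the last 50 samples
resid = sum((p - e) ** 2 for p, e in zip(phi[-50:], est[-50:])) / 50
print(resid)
```

The residual stays far below the mode's own variance, which is the point of model-based prediction; the real controller additionally handles tomography, vibrations, multi-rate WFSs and the DM projection.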

Rufus Fraanje (Delft):  Distributed wavefront reconstruction and prediction with implementation on a GPU (20 minutes)

Linear complexity and efficient parallel implementation of wavefront reconstruction and prediction are needed for adaptive optics (AO) in extremely large telescopes (ELTs). Our previous research has shown that the performance of wavefront reconstruction and prediction can be improved significantly using the Kalman filter. Data-driven methods have also been proposed to estimate the Kalman filter directly from measured data. However, the complexity of the design scales cubically, and the real-time operation of the Kalman filter scales quadratically, with the number of grid points of the wavefront sensor (WFS). Hence, the use of a Kalman filter will not be feasible for ELT-sized problems without further exploiting structure in the problem. This contribution shows how distributed reconstructors and predictors can be identified from local WFS data. The complexity of the identification as well as of the implementation of the filters scales linearly with the number of WFS grid points and thus will be feasible for scaling to ELT dimensions. The performance is compared with that obtained by the Kalman filter. The distributed filters are implemented on a CPU and a GPU, and their computation times are compared for different dimensions, showing the linear complexity.

Andreas Obereder (Mathconsult GmbH, Austria):  Direct reconstruction algorithms using forward operators (20 minutes)

In order to fulfill the real-time requirements for AO on ELTs, one has to either invest in (very) high performance hardware or spend some effort on the development of highly efficient reconstruction algorithms for wavefront sensors. The AAO (Austrian Adaptive Optics) team is involved in deriving reconstructors for SH and Pyramid WFS measurements, utilizing the mathematical properties of the forward operators of these wavefront sensors. At the moment we focus mainly on direct reconstructors (e.g. singular value decomposition of the SH WFS, wavelet representation of the incoming wavefront, or the so-called Cumulative Reconstructor, CuRe) with complexity O(n) (where n denotes the number of subapertures of the WFS) to make the reconstruction scalable for large telescopes. This talk will contain a brief overview of the investigated methods as well as first O(n) results.
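
The idea behind cumulative, integration-based reconstruction can be shown in one dimension. This is a sketch in the spirit of such O(n) methods, not the published CuRe algorithm (which handles 2-D apertures, chains and noise averaging):

```python
# 1-D illustration of O(n) integration-based wavefront reconstruction:
# phase values are recovered by cumulatively summing the measured slopes.

def cumulative_reconstruct(slopes, d=1.0):
    """Integrate slopes along a line of subapertures; O(n) in len(slopes)."""
    phase = [0.0]
    for s in slopes:
        phase.append(phase[-1] + s * d)
    # remove piston (the mean), which a slope sensor cannot measure
    mean = sum(phase) / len(phase)
    return [p - mean for p in phase]

true_phase = [0.0, 1.0, 3.0, 4.0, 2.0]
slopes = [b - a for a, b in zip(true_phase, true_phase[1:])]
rec = cumulative_reconstruct(slopes)
offset = true_phase[0] - rec[0]
print([round(r + offset, 6) for r in rec])  # matches true_phase up to piston
```

One pass over the slopes, constant work per subaperture: that linear scaling, rather than the quadratic cost of an explicit MVM, is what makes such reconstructors attractive at ELT scale.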

Alastair Basden (Durham):  Pixel processing algorithms, data compression and simulated system testing (20 minutes)

A consideration of algorithms that improve real-time control performance is given, with a discussion of how they can be implemented, what the difficulties are, and how they can be targeted at ELT-scale systems.  The computational complexity of such algorithms is given.  Data compression for control matrices is also discussed, along with the impact this can have on hardware reconstruction design.  We also present a real-time simulation front end which is under development at Durham, aiming to mimic the optics of an AO system and model the sky.  This front end provides input for a real-time control system and accepts output (for example DM commands) which can be used to modify the images passed into the RTCS.  Such a system will allow nearly full testing of an RTCS without requiring any expensive optical components, thus allowing multiple RTCS developers to develop in multiple locations simultaneously.  Full scripting of the system can also be tested, for example control matrix generation, since the simulated wavefront sensor images can be changed in response to DM commands in real time.  A real-time simulation front end could be an important tool for next-generation RTCS developments.
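
One simple form of control-matrix compression is integer quantisation, which halves (float32 to int16) the data that must stream through the reconstructor each frame. A hypothetical illustration, not the scheme discussed in the talk:

```python
# Hypothetical illustration of control-matrix compression by quantising
# float entries to signed 16-bit integers with a shared scale factor.

def compress(matrix, bits=16):
    """Scale entries into a signed integer range; return ints plus the scale."""
    peak = max(abs(v) for row in matrix for v in row)
    scale = (2 ** (bits - 1) - 1) / peak
    q = [[round(v * scale) for v in row] for row in matrix]
    return q, scale

def decompress(q, scale):
    return [[v / scale for v in row] for row in q]

m = [[0.5, -1.25], [0.75, 1.0]]          # toy control matrix
q, s = compress(m)
r = decompress(q, s)
err = max(abs(a - b) for ra, rb in zip(m, r) for a, b in zip(ra, rb))
print(err)   # worst-case error bounded by half an LSB of the scaled range
```

Whether such a scheme is acceptable depends on how the quantisation error propagates into the reconstructed commands, which is exactly the trade-off against hardware design that the abstract raises.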

Paul Jonathan Phillips (STFC, RAL):  Laser tuning and DMs (15 minutes)

High-powered lasers are attractive owing to their potential for a diverse range of experiments. In order to achieve these high powers, various choices for the amplification of laser beams have to be made. These choices include the size of the beam, which in turn requires laser crystals or ceramics to be manufactured close to their current limit of scalability. In order to obtain maximum energy in the focal spot, the spatial wave-front of the beam needs to have no aberrations. The Central Laser Facility (CLF) at the Rutherford Appleton Laboratory has had a programme for developing adaptive mirrors for the spatial control of laser beams for a number of years. I shall describe some of the techniques and experiments that have been conducted at the CLF for adaptive optics. Another way of achieving these high power requirements is to combine several beams into a monolithic beam, which immediately reduces the requirements on the amplifier to a more modest level. This requires technology to lock the beams spatially and temporally. We have also started to set up a laboratory to develop techniques in wavefront measurement for spatial and temporal overlap, which I shall also describe. There are a number of projects in Europe, such as HiPER (High Power laser Energy Research facility) and ELI (Extreme Light Infrastructure), that will require temporally and spatially locked laser beams.

Thomas Ruppel (Universität Stuttgart):  Feedforward control of DMs (15 minutes)

Performance enhancements for deformable membrane mirrors based on model-based feedforward control are presented. The investigated deformable mirror consists of a flexible membrane and voice-coil actuators. Feedback control of the distributed actuators cannot be implemented in these mirrors due to a lack of high-speed internal position measurements of the membrane's deformation. However, by using feedforward control, the dominant dynamics of the membrane can still be controlled, allowing for faster settling times and reduced membrane vibrations.  Experimental results are presented for an ALPAO deformable mirror with 88 distributed actuators on a circular membrane with a pupil of two centimeters in diameter. The presented methods can also be used for large-scale DMs for ELTs.
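
The principle of model-based feedforward is to invert a dynamic model of the actuator so the command compensates for its response. A minimal single-actuator sketch with an assumed first-order model (purely illustrative; the membrane dynamics addressed in the talk are distributed and higher order):

```python
# Minimal feedforward sketch: invert an assumed first-order actuator model
# y[k+1] = (1 - alpha) * y[k] + alpha * u[k] so the output lands on the target
# in one sample instead of settling geometrically. Not the ALPAO controller.

alpha = 0.3        # assumed actuator response per sample (hypothetical)
target = 1.0

# Plain step command: the output only settles geometrically toward the target
y = 0.0
for _ in range(5):
    y = (1 - alpha) * y + alpha * target
print(round(y, 4))     # still short of the target after 5 samples

# Feedforward: choose u so the model reaches the target at the next sample
y_ff = 0.0
u = (target - (1 - alpha) * y_ff) / alpha
y_ff = (1 - alpha) * y_ff + alpha * u
print(round(y_ff, 4))  # on target in a single sample
```

In practice the inversion is bounded by actuator limits and model accuracy, which is why the talk pairs feedforward with a validated dynamic model of the membrane.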

Enrico Fedrigo (ESO): SPARTA2: perspectives for the E-ELT (15 minutes)

SPARTA, the ESO Standard Platform for Adaptive optics Real Time Applications, provides a generic decomposition into functional blocks that can be applied, unchanged, to a variety of different AO systems, ranging from very small single-conjugate AO with fewer than 100 actuators to much bigger and faster systems. For AO systems under development, SPARTA provides an implementation of all those functional blocks, mapped to currently available technologies. The E-ELT with its instruments poses new challenges in terms of cost and computational complexity. Simply scaling the current SPARTA implementation to the size of an E-ELT AO system would be unnecessarily expensive and in some cases not even feasible. So, even if the general architecture is still valid, some degree of re-implementation and use of new technologies will be needed.

We analyse the new general requirements that the E-ELT and its instruments will pose, present promising technologies and solutions that could replace the current ones, and show how the SPARTA architecture could evolve to address those new requirements.

Laurent Jolissaint (Aquilaoptics):  Control-loop Telemetry-based PSF Reconstruction at the Gemini and W.M.Keck telescopes (15 minutes)

Estimating the point spread function across the imaged field, for a given AO run, is critical for AO data reduction. In this talk I will describe our recent progress (March 2011) on PSF reconstruction (PSF-R) for the Gemini North (ALTAIR) and Keck NGS-based AO systems. I will briefly re-introduce the basic theory, but will put the emphasis on the practical implementation issues we are facing at these two facilities.