The MeerKAT telescope will conduct many scientific projects over the next five years. These range from a few Large Survey Projects (LSPs) to many smaller Open Time Projects (OTPs). The LSPs and OTPs will use the excellent imaging quality of MeerKAT to explore new and exciting parts of parameter space, allowing us to study, for example, the evolution of galaxies and the nature of transient objects.
To exploit the sensitivity and imaging quality of MeerKAT, astronomers will have to deal with an enormous amount of data -- from terabytes for single projects to petabytes for the LSPs. Radio astronomers have thus turned to HPC to deal with this deluge of data, with the aim of producing high-quality science data products while keeping pace with observations. This will not only allow researchers to mitigate the negative effects of a data backlog, but will also ensure a high research throughput -- which is essential for the projects and for MeerKAT.
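To make the backlog problem concrete, the sketch below is a hypothetical back-of-the-envelope model (all rates and volumes are illustrative placeholders, not MeerKAT specifications): a backlog accumulates whenever the sustained processing rate falls below the rate at which observations produce data.

```python
# Hypothetical backlog model. All numbers are illustrative
# placeholders, not MeerKAT specifications.

ingest_rate_tb_per_day = 5.0   # average data volume produced by observations
process_rate_tb_per_day = 4.0  # sustained pipeline throughput on the cluster
survey_days = 365              # length of the observing campaign

backlog_tb = 0.0
for day in range(survey_days):
    backlog_tb += ingest_rate_tb_per_day                     # new data lands
    backlog_tb -= min(backlog_tb, process_rate_tb_per_day)   # pipeline drains what it can

print(f"Backlog after {survey_days} days: {backlog_tb:.0f} TB")
# With these rates the backlog grows by 1 TB/day, i.e. the pipeline
# never catches up unless its throughput exceeds the ingest rate.
```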
In anticipation of the challenge and opportunity of MeerKAT data, there has been a proliferation of software tools and ecosystems for all parts of the radio astronomy processing stream -- from calibration and imaging to data visualisation and analysis. In addition, the data centre plays a central role in determining the feasibility and efficiency of these various workflows by ensuring the availability of excellent and robust services.
My talk will focus on the computational challenges faced by MeerKAT users and explore the consequent requirements for data centres. I will use a simple time-cost formalism to assess the risk to science projects, and will present a variety of basic, yet instructive, scenarios.
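The abstract does not spell out the formalism itself, so the following is only a minimal sketch of what such a time-cost assessment might look like (the risk criterion and all quantities are assumptions for illustration): the time to produce science products is the data volume divided by the achievable throughput, and a project is at risk when that time exceeds the window in which its results remain competitive.

```python
# Minimal sketch of a time-cost risk estimate. The formalism and all
# numbers here are illustrative assumptions, not the one from the talk.

def processing_time_days(volume_tb: float, throughput_tb_per_day: float) -> float:
    """Wall-clock time to process a project's data at a sustained throughput."""
    return volume_tb / throughput_tb_per_day

def at_risk(volume_tb: float, throughput_tb_per_day: float,
            science_window_days: float) -> bool:
    """A project is 'at risk' if processing outlasts its competitive window."""
    return processing_time_days(volume_tb, throughput_tb_per_day) > science_window_days

# Two illustrative scenarios: a terabyte-scale OTP and a petabyte-scale LSP.
for name, volume, throughput, window in [
    ("OTP",   50.0, 4.0,  90),  # 50 TB at 4 TB/day, 3-month window
    ("LSP", 2000.0, 4.0, 365),  # 2 PB at the same throughput, 1-year window
]:
    t = processing_time_days(volume, throughput)
    print(f"{name}: {t:.1f} days to process, at risk: {at_risk(volume, throughput, window)}")
```

Even this toy version shows the shape of the argument: the small project is comfortably safe, while the LSP at the same throughput misses its window, so either the throughput (cost) must rise or the science timeline (risk) must stretch.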
Student: No