Conveners
HPC Technologies: Quantum Computing
- Chair: Dr Peter Braam
HPC Technologies: Storage and IO
- Chair: Dr Jay Lofstead (Sandia National Laboratories)
HPC Technologies
- Chair: Dr Matthew Curry (Sandia National Laboratories)
HPC Technologies
- Martin Hilgeman (Dell EMC)
HPC Technologies
- Kevin A. Brown (Tokyo Institute of Technology)
The use of quantum sensors to investigate gravity, dark matter, and the early universe is in the vanguard of a second quantum revolution; as significant as the first deployment of telescopes, it will transform the way we understand the world. The technological innovation that is the engine of society’s development has been initiated and fuelled by fundamental scientific research; from Faraday’s...
Quantum machine learning investigates how quantum computers can be used for data-driven prediction and decision making. The talk introduces this relatively young discipline and shows the potential of "Big Data" applications on near-term quantum computers, such as those currently available in the cloud. Data encoding into quantum states, quantum algorithms and routines for inference and...
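The abstract mentions data encoding into quantum states; as a minimal illustration of one common scheme, the sketch below performs amplitude encoding in plain NumPy (the function and details are illustrative assumptions, not the speaker's method):

```python
import numpy as np

def amplitude_encode(x):
    """Map a classical feature vector onto the amplitudes of a quantum state.

    n qubits hold 2**n amplitudes forming a unit vector, so the input is
    zero-padded to the next power of two and L2-normalised.
    """
    dim = 1 << int(np.ceil(np.log2(len(x))))   # next power of two
    padded = np.zeros(dim)
    padded[:len(x)] = x
    return padded / np.linalg.norm(padded)     # unit-norm state vector

# Four features fit exactly into the four amplitudes of a two-qubit state.
state = amplitude_encode(np.array([0.5, 1.0, 0.25, 0.75]))
print(state, np.isclose(np.sum(state**2), 1.0))
```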
Still in early development, quantum computing is already overturning our contemporary notions of computational methods and devices. Using new concepts of computing based in quantum physics, quantum computers will be able to solve certain problems that are completely intractable on any imaginable classical computer, such as accurate simulations of molecules and materials, or breaking public key...
Container technology offers a convenient way to package an application and supporting libraries such that moving them from platform to platform can be done without having to rebuild. Additional features, such as stateless execution, enable restarting a containerized application elsewhere with minimal penalty. Building better support for storage into the container ecosystem breaks this...
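To make the stateless-execution point concrete, here is a sketch of a worker that keeps all of its progress on externally mounted storage, so a restarted container on another host resumes where the last one stopped (STATE_DIR and the file layout are illustrative assumptions, not from the talk):

```python
import json
import os

# In a container deployment this would be a volume mounted from shared storage.
STATE_DIR = os.environ.get("STATE_DIR", ".")

def process(item):
    print("processing", item)                   # stand-in for the real application work

def load_next_item(path):
    """All state lives outside the container, so a fresh start resumes here."""
    try:
        with open(path) as f:
            return json.load(f)["next_item"]
    except FileNotFoundError:
        return 0

def main(items):
    path = os.path.join(STATE_DIR, "progress.json")
    for i in range(load_next_item(path), len(items)):
        process(items[i])
        with open(path, "w") as f:
            json.dump({"next_item": i + 1}, f)  # checkpoint to external storage

if __name__ == "__main__":
    main(["a", "b", "c"])
```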
Many important HPC applications are communication-bound and/or I/O-bound. These applications depend on efficient inter-process communication and I/O operations; hence, network interference can cause significant performance degradation. Unfortunately, most modern HPC systems use the same network infrastructure for both MPI and I/O traffic, with multiple jobs sharing the system concurrently. The...
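A simple way to observe such interference is to repeat a fixed collective operation and examine the spread of its timings; the sketch below (assuming mpi4py is available) is a generic micro-benchmark, not the authors' methodology:

```python
# Run with e.g.: mpirun -n 16 python allreduce_probe.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
buf = np.ones(1 << 20, dtype=np.float64)        # 8 MiB payload per rank
out = np.empty_like(buf)

samples = []
for _ in range(50):
    comm.Barrier()                              # line ranks up so timings are comparable
    t0 = MPI.Wtime()
    comm.Allreduce(buf, out, op=MPI.SUM)
    samples.append(MPI.Wtime() - t0)

if comm.rank == 0:
    # A wide gap between best and worst iterations on an otherwise steady job
    # suggests interference from other traffic sharing the fabric.
    print(f"min={min(samples):.6f}s max={max(samples):.6f}s")
```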
Through the first several decades of computing, two data storage abstractions/paradigms dominated common practice: Files and relational databases. While there is significant potential overlap between their use, it is often easy to decide which is more efficient for a particular application or workload. However, over the last twenty years, the rise of new patterns for parallel and distributed...
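The trade-off between the two paradigms can be shown in a few lines; this toy contrast (Python standard library only) is illustrative rather than drawn from the talk:

```python
import csv
import sqlite3

rows = [("cmip6", 2021, 3.2), ("era5", 2020, 1.7)]

# File paradigm: cheap sequential writes and scans, no query engine.
with open("runs.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

# Relational paradigm: a schema plus declarative queries over the same data.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE runs (name TEXT, year INT, tb REAL)")
db.executemany("INSERT INTO runs VALUES (?, ?, ?)", rows)
print(db.execute("SELECT name FROM runs WHERE tb > 2").fetchall())
```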
Next Generation Sequencing has brought genomic analysis within the range of a great number of laboratories, while increasing the demand for bioinformatic analysis. These typically comprise workflows composed of chains of analyses, with data flowing between workflow steps. Such analysis is amenable to High Throughput Computing, a form of high performance computing characterised by a focus on...
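The shape of such a workflow, and why it suits High Throughput Computing, can be sketched as independent per-sample pipelines fanned out over workers; the step names below are placeholders, not a real genomics toolchain:

```python
from concurrent.futures import ProcessPoolExecutor
from functools import partial
from pathlib import Path

def align(reads: Path, out: Path):
    out.write_text(f"aligned({reads.read_text()})")    # stand-in for an aligner

def call_variants(aln: Path, out: Path):
    out.write_text(f"variants({aln.read_text()})")     # stand-in for a variant caller

def run_sample(sample: Path, workdir: Path) -> Path:
    """One chain of analyses per sample; data flows between steps as files."""
    aln = workdir / f"{sample.stem}.aln"
    vcf = workdir / f"{sample.stem}.vcf"
    align(sample, aln)
    call_variants(aln, vcf)
    return vcf

if __name__ == "__main__":
    work = Path("work")
    work.mkdir(exist_ok=True)
    samples = []
    for name in ("s1", "s2", "s3"):
        p = work / f"{name}.fastq"
        p.write_text(name)
        samples.append(p)
    # High throughput: many independent pipelines, trivially run in parallel.
    with ProcessPoolExecutor() as pool:
        print(list(pool.map(partial(run_sample, workdir=work), samples)))
```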
As part of hosting the Square Kilometre Array (SKA) mid-frequency radio telescope in the Northern Cape Karoo region, the South African Radio Astronomy Observatory (SARAO) will be providing suitable facilities to house the computing, networking and data storage for both the Science Data Processor (SDP) and an SKA Regional Science Centre (SRC). These two facilities are expected to host petascale...
TensorFlow is the system driving Google's ML efforts. Many components make up this system, including a sophisticated user-friendly development environment, highly optimized language features and compilers, ultra-high performance custom chips called Tensor Processing Units (TPU), and scalable deployment on the world's devices. TPU pods may well eclipse traditional performance boundaries of...
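For readers unfamiliar with the TPU programming model, the standard TensorFlow 2 pattern looks roughly like this (a generic sketch of the public tf.distribute API, not code from the talk; it only runs where a TPU is actually reachable):

```python
import tensorflow as tf

# Locate and initialise the TPU; outside a TPU-equipped environment this fails.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    # Variables created here are replicated across the TPU cores.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )
```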
One of the prominent trends in computing is the convergence of supercomputers and embedded control computers, which have come to share many of the same requirements and limitations. These common attributes include multicore, power, reliability, programmability, and portability. The increasing use of lightweight processors like embedded cores in HPC systems prompts the need to unify multiple...
Delivering HPC solutions via cloud-based resources is still a technical challenge. Besides running HPC workloads entirely on on-premise resources or entirely on cloud resources, hybrid approaches can provide a flexible and cost-effective way of running HPC workloads. Based on two examples, a turn-key SaaS solution (HyperWorks Unlimited Virtual Appliance) and a cloud-bursting scenario...
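The core of any bursting setup is a placement decision; the toy policy below is purely hypothetical (the Job fields, thresholds, and labels are invented for illustration) and ignores real-world factors such as data movement, licensing, and cost models:

```python
from dataclasses import dataclass

@dataclass
class Job:
    cores: int
    est_hours: float

def placement(job: Job, free_onprem_cores: int, queue_wait_hours: float) -> str:
    """Toy decision rule for hybrid on-premise/cloud scheduling."""
    if job.cores <= free_onprem_cores:
        return "on-premise"
    if queue_wait_hours > job.est_hours:     # waiting would cost more than running
        return "cloud-burst"
    return "queue"

print(placement(Job(cores=512, est_hours=2.0),
                free_onprem_cores=128, queue_wait_hours=6.0))
```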
With all the advances in massively parallel and multi-core computing with CPUs and accelerators, it is often overlooked whether the computational work is being done in an efficient manner. This efficiency is largely determined at the application level, which puts the responsibility for sustaining a certain performance trajectory into the hands of the user. It is observed that the...
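One way a user can check this at the application level is to compare achieved floating-point throughput against the machine's peak; the sketch below times a matrix multiply (the peak figure is an assumed placeholder to be replaced with the node's real value):

```python
import time
import numpy as np

n = 2048
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.perf_counter()
c = a @ b                                  # roughly 2*n**3 floating-point operations
elapsed = time.perf_counter() - t0

gflops = 2 * n**3 / elapsed / 1e9
peak = 100.0                               # assumed node peak in GFLOP/s; substitute yours
print(f"{gflops:.1f} GFLOP/s achieved, {100 * gflops / peak:.0f}% of assumed peak")
```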