Description
Throughout the rapid evolution of HPC driven by the technology advances reflected in Moore's Law, processor core architecture has dominated computer design across ten or more orders of magnitude in delivered performance. But with the arrival of nanoscale device technology, that exponential gain has stagnated, demanding alternative, innovative strategies. Concurrently, workloads have pivoted from linear algebra to artificial intelligence (AI), with an emphasis on supervised machine learning (ML) applications. To address these combined challenges, transformative architectures are being explored that are memory-centric, embody data-oriented semantics, and optimize for latency and bandwidth rather than FPU utilization. This closing keynote address will describe a class of non-von Neumann architectures that will accelerate dynamic graph processing across highly scalable computing systems beyond exascale through to the end of this decade. A brief discussion of early attempts at memory-centric computing, such as SIMD and PIM, will motivate the revolutionary concepts of the future. Questions from the audience will be welcome, assuming remote communication technology permits.