Centre for High Performance Computing 2025 National Conference

Timezone: Africa/Johannesburg
Century City Conference Centre

Description

The 19th CHPC National Conference.

The aim of the conference is to bring together our users so that their work can be communicated, to include world-renowned experts, and to offer a rich programme for students in the fields of high performance computing, big data, and high-speed networking. The CHPC National Conference is co-organised by the CHPC, DIRISA and SANReN.

Cape Town

The CHPC 2025 Conference will be an in-person event with a physical programme hosted at the Century City Conference Centre, Cape Town.

For more information please see the main conference site.

From Data to Decisions: Leveraging Cyber-Infrastructure

This year's theme is the utility of cyber-infrastructure in processing, storing and moving the large data sets underpinning today's complex world.

Programme

  • 30 November: Workshops
  • 1–3 December: Conference
  • See Timetable for details

Evening Social Events

  • Monday 1 December: Conference Dinner 
  • Tuesday 2 December: Poster Showcase Cocktails [Exhibition Hall]
  • Wednesday 3 December: Awards Dinner 

Registration

Online registration will close on Friday 27 November 2025. Thereafter only onsite registration (at full fees) will be available at the venue.

Co-Organisers

CHPC, DIRISA and SANReN

Sponsors

Diamond

Platinum

Gold

Registration
Registration Form — Academic Professionals
Registration Form — CSIR Staff
Registration Form — Industry & Public Sector
Registration Form — Students & Postdocs
    • 08:00 09:00
      Registration 1h
    • 09:00 10:30
      SADC Cyber-Infrastructure Meeting 1/1-7 - Room 7


      • 09:00
        SADC Cyber-Infrastructure Meeting — Session 1 1h 30m
    • 09:00 10:30
      Workshop: AI: Custom chatbot to build a RAG 1/1-11 - Room 11

      • 09:00
How a custom chatbot works and how to build a RAG (Retrieval-Augmented Generation) system from scratch 1h 30m

        This hands-on workshop introduces students to building a basic Retrieval-Augmented Generation (RAG) system. Participants will learn how to index a document corpus using embeddings, implement a vector search retriever, and connect it to a language model for context-aware responses. The session covers key components like vector store, prompt design, and system evaluation. By the end, students will have built a simple, working RAG pipeline. Basic Python knowledge is recommended.

Requirements: Each attendee should bring a laptop with Jupyter Notebook and Python 3.10 pre-installed.

        Speaker: Walter Riviera (walterbot.ai)
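The pipeline described above (embed and index a corpus, retrieve by vector similarity, feed the hits to a language model) can be sketched in a few lines. This is a hedged toy illustration, not workshop material: the bag-of-words "embedding" and the tiny corpus are stand-ins for the real embedding model and vector store used in the session.

```python
# Toy RAG retrieval sketch (illustrative only; a real pipeline would use a
# trained embedding model and a vector store instead of bag-of-words).
import math
from collections import Counter

def embed(text):
    # Stand-in "embedding": a term-frequency vector keyed by word.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    # Rank documents by similarity to the query embedding.
    q = embed(query)
    return sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, corpus):
    # Retrieved passages become the context handed to the language model.
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Lengau is the CHPC's main compute cluster.",
    "RAG systems retrieve documents before generation.",
    "Cape Town hosts the CHPC National Conference.",
]
print(build_prompt("What does a RAG system retrieve?", corpus))
```

The missing half, calling a language model with the assembled prompt, is exactly what the workshop covers hands-on.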
    • 09:00 10:30
      Workshop: Microkinetic Modelling with ML Potentials 1/1-10 - Room 10

      • 09:00
        From Energetics to Rates: A Hands-on Workshop on Microkinetic Modelling with ML Potentials 1h 30m

        Linking atomistic energetics to macroscopic rates remains a core challenge in heterogeneous catalysis. This full-day, hands-on workshop takes participants from surface slab setup and adsorption energies (Python/ASE) to working microkinetic models. We use machine-learned interatomic potentials to approximate energetics, then build reaction networks, derive transition-state-theory rate expressions, and solve mass-balance ODEs to obtain coverages, turnover frequencies, and degree-of-rate-control sensitivities. Practical HPC on CHPC is woven throughout. Attendees leave with a functional model for a reaction of their choice and a clear roadmap to extend it.

        Speakers: Kyle Abrahams (University of Cape Town), Thobani Gambu (University of Cape Town)
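As a taste of the workflow the workshop covers (energetics in, rates and coverages out), here is a hedged minimal sketch: an Eyring transition-state-theory rate constant feeding a single-site adsorption mass balance, integrated by forward Euler. The barrier values are illustrative placeholders, not workshop data.

```python
# Minimal microkinetic sketch: TST rate constants -> coverage ODE.
# Barriers below are made-up illustrative numbers.
import math

KB = 8.617333e-5   # Boltzmann constant, eV/K
H  = 4.135668e-15  # Planck constant, eV*s

def eyring(ea_ev, temp):
    # Transition-state-theory rate constant for a barrier ea_ev (eV) at temp (K).
    return (KB * temp / H) * math.exp(-ea_ev / (KB * temp))

def coverage_vs_time(k_ads, k_des, steps=100000, dt=1e-9):
    # d(theta)/dt = k_ads*(1 - theta) - k_des*theta  (one-site mass balance)
    theta = 0.0
    for _ in range(steps):
        theta += dt * (k_ads * (1 - theta) - k_des * theta)
    return theta

k_ads = eyring(0.50, 500.0)  # illustrative adsorption barrier
k_des = eyring(0.75, 500.0)  # illustrative desorption barrier
theta = coverage_vs_time(k_ads, k_des)
# At steady state theta should approach k_ads / (k_ads + k_des).
print(theta, k_ads / (k_ads + k_des))
```

A real model replaces the single ODE with a coupled network of such balances and adds degree-of-rate-control sensitivities, as described above.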
    • 09:00 10:30
      Workshop: TBA 1/1-2 - Room 2

    • 10:30 11:00
      Break 30m
    • 11:00 12:30
      SADC Cyber-Infrastructure Meeting 1/1-7 - Room 7


      • 11:00
        SADC Cyber-Infrastructure Meeting — Session 2 1h 30m
    • 11:00 12:30
      Workshop: AI: Custom chatbot to build a RAG 1/1-11 - Room 11

      • 11:00
        AI: Custom chatbot to build a RAG part 2 1h 30m
    • 11:00 12:30
      Workshop: Microkinetic Modelling with ML Potentials 1/1-10 - Room 10

      • 11:00
        Microkinetic Modelling with ML Potentials 1h 30m

Continues from the morning session.

    • 11:00 12:30
      Workshop: TBA 1/1-2 - Room 2

    • 12:30 13:30
      Lunch 1h 1/0-Foyer+C+D - Exhibition and Competition Halls

    • 13:30 15:00
      SADC Cyber-Infrastructure Meeting 1/1-7 - Room 7


      • 13:30
        SADC Cyber-Infrastructure Meeting — Session 3 1h 30m
    • 13:30 15:00
      Workshop: Materials Modelling using DFT 1/1-10 - Room 10

      • 13:30
        Materials Modelling Workshop using Density Functional Theory (DFT) 1h 30m

        The Materials Modelling Workshop using Density Functional Theory (DFT) provides postgraduate students, early-career researchers, and interdisciplinary scientists with a solid foundation in computational materials science. DFT is a powerful quantum mechanical method for predicting the structural, electronic, optical, and magnetic properties of materials, enabling the rational design of novel systems for energy, catalysis, and optoelectronic applications. Through a combination of lectures and hands-on sessions, participants will gain practical experience with leading DFT software tools, learning how to perform structure optimization, electronic structure calculations, and property analysis. The workshop will highlight applications in sustainable energy materials, topological systems, and low-dimensional materials. Participants will also be introduced to emerging approaches that combine DFT with machine learning for accelerated materials discovery and inverse design. By the end of the workshop, attendees will possess both theoretical insight and computational skills to apply DFT (CASTEP in materials studio) techniques effectively in their own research and to contribute to innovation in materials design and development.

        Speaker: Kingsley Obodo (University of South Africa)
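Structure optimisation, one of the hands-on topics above, can be previewed without any DFT code at all. The following is a hedged toy example: relaxing a two-atom Lennard-Jones "dimer" by steepest descent on the pair distance. The sessions themselves use DFT software (CASTEP in Materials Studio), not a pair potential; this only illustrates the optimisation loop.

```python
# Toy structure optimisation: minimise a Lennard-Jones pair energy by
# steepest descent. Illustration of the relaxation loop only.
def lj_energy(r, eps=1.0, sigma=1.0):
    # Lennard-Jones pair energy for separation r.
    return 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

def relax(r0=1.5, step=1e-3, iters=20000):
    # Steepest descent on the separation using a central-difference gradient.
    r, h = r0, 1e-6
    for _ in range(iters):
        grad = (lj_energy(r + h) - lj_energy(r - h)) / (2 * h)
        r -= step * grad
    return r

r_min = relax()
print(r_min)  # analytic minimum is 2**(1/6) * sigma, about 1.1225
```

In a DFT workflow the energy function is replaced by a self-consistent electronic-structure calculation, and the gradient by computed forces.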
    • 13:30 15:00
Workshop: OpenStack Part 1 1/1-2 - Room 2

      • 13:30
OpenStack Part 1 1h 30m
    • 15:00 15:30
      Break 30m
    • 15:30 17:00
      SADC Cyber-Infrastructure Meeting 1/1-7 - Room 7


      • 15:30
        SADC Cyber-Infrastructure Meeting — Session 4 1h 30m
    • 15:30 17:00
      Workshop: Materials Modelling using DFT 1/1-10 - Room 10

      • 15:30
        Materials Modelling using DFT 1h 30m

Part 2, continuing from the morning session.

    • 15:30 17:00
Workshop: OpenStack Part 2 1/1-2 - Room 2

      • 15:30
OpenStack Part 2 1h 30m
    • 17:00 18:00
      Intermission 1h
    • 08:00 09:00
      Registration 1h
    • 09:00 10:30
      Keynote: Opening 1/0-AB - Hall A+B

Convener: Mr Mervyn Christoffels (CHPC)
      • 09:00
        Welcome 5m
Speaker: Mr Mervyn Christoffels (CHPC)
      • 09:05
        CSIR Welcome 10m
        Speaker: Dr Sandile Malinga (Group Executive)
      • 09:15
        DSTI Opening 10m
        Speaker: TBC
      • 09:25
        NICIS Welcome 20m
        Speaker: Dr Happy Sithole (CHPC)
      • 09:45
        TBC 45m

        TBC

        Speaker: Dr Thomas Schulthess (ETH Zurich / CSCS)
    • 10:30 11:00
      Refreshment Break 30m
    • 11:00 12:30
      HPC Applications: 1 1/1-11 - Room 11

      • 11:00
Entangled Worlds: Engineering, Physics, Applied Mathematics, and the Future of High-Performance Computing 20m

The presentation will discuss the use of traditional computational methods, machine and deep learning, as well as quantum computing and quantum machine learning (as a new frontier) in addressing challenges in fluid dynamics, dynamical systems, and high-energy physics research. The talk will also highlight the role of the CHPC in democratising access to critical resources and in enabling such research.

        Speaker: Prof. Muaaz Bhamjee (University of Johannesburg)
      • 11:20
        Is it Time to Switch to Machine Learnt Potentials? 20m

For over two decades I have been developing new interatomic potentials: for example, I implemented the AOM within GULP, which can be employed to model non-spherical Jahn–Teller Mn(III) ions; successfully refined potential parameters to model numerous systems, including the Peierls phase transition of VO2; and I am the author of a published interatomic potential parameter database. My interest is driven by the ability to control what physics is included (or not) through the introduction of new terms to the Hamiltonian (or potential energy), and it is an approach many will follow as, compared to DFT, it allows for modelling systems of larger sizes (more atoms), longer time periods (in MD), and more sampling (global optimisation and/or calculating the partition function).

Now ML potentials, which have many more parameters to refine and a minefield of differing functional forms to choose from, have become very topical as the data required to fit them, as well as computer resources, have become more readily available. My first real experience came when one of my earlier PhD students discovered that it was not straightforward to develop a suitable model (fit parameters); for example, the GAP ML potentials we refined suffered from erroneous oscillations.

I lead the UK's Materials Chemistry Consortium (MCC), and one of our current aims is to make the use of ML potentials more accessible to our community. Simultaneously, other groups have begun refining ML-potential models for the entire periodic table based on reproducing DFT results. In my presentation I will present results from three of my PGT students who worked on energy materials using the JANUS-core code to calculate energies and forces, based on pre-refined MACE ML potentials. Moreover, I will include recently published results on dense and microporous silica materials, where these potentials performed particularly well, and further results of ongoing research from the MCC.

        Speaker: Prof. Scott Woodley (University College London)
      • 11:40
        Active-learning driven chemical space exploration and relative binding affinity estimation of SARS-CoV-2 PLpro inhibitors using FEP+ 20m

The global pandemic, initiated by the SARS-CoV-2 virus and emerging in 2020, has profoundly influenced humanity, resulting in 772.4 million confirmed cases and approximately 7 million fatalities as of December 2023. The resultant negative impacts of travel restrictions and lockdowns have highlighted the critical need for enhanced preparedness for future pandemics. This study primarily addresses this need by traversing chemical space to design inhibitors targeting the SARS-CoV-2 papain-like protease (PLpro). Pathfinder-based retrosynthesis analysis was employed to synthesize analogues of the hit compound GRL-0617 using commercially available building blocks through the substitution of the naphthalene moiety. A total of 10 models were developed using active learning QSAR methods, which demonstrated robust statistical performance, including an R2 > 0.70, Q2 > 0.64, standard deviation < 0.30, and RMSE < 0.31 on average across all models. Subsequently, 35 potential compounds were prioritized for FEP+ calculations. The FEP+ results indicated that compound 45 was the most active in this series, with a ∆G of -7.28 ± 0.96 kcal/mol, while compound 5 exhibited a ∆G of -6.78 ± 1.30 kcal/mol. The inactive compounds in this series were compounds 91 and 23, with ∆G values of -5.74 ± 1.06 and -3.11 ± 1.45 kcal/mol, respectively. The integrated strategy implemented in this study is anticipated to provide significant advantages in multiparameter lead optimization efforts, thereby facilitating the exploration of chemical space while conserving and/or enhancing the efficacy and property space of synthetically aware design concepts. Consequently, the outcomes of this research are expected to contribute substantially to preparedness for future pandemics and their associated variants of SARS-CoV-2 and related viruses, primarily by delivering affordable therapeutic interventions to patient populations in resource-limited and underserved settings.

        Speaker: Dr Njabulo Gumede (Walter Sisulu University)
      • 12:00
        Reference Genome Assembly, Pangenome Construction, and Population Analysis of the Spotted Hyena (Crocuta Crocuta) from the Kruger National Park 20m

The spotted hyena (Crocuta crocuta) is a highly social carnivore with complex behavioural and ecological functions, making it an important model for studying genetic diversity, adaptation, and evolution. However, previous draft genomes for C. crocuta have been incomplete and derived from captive individuals, limiting insights into natural genetic variation. Here, we present a high-quality de novo genome assembly and the first pangenome of wild spotted hyenas sampled from the Kruger National Park, South Africa, alongside population-level analysis.

Using Oxford Nanopore Technologies (ONT) long-read sequencing, we assembled a 2.39 Gb reference genome with a scaffold N50 of 19.6 Mb and >98% completeness. We further performed short-read resequencing at 10–32X depth per individual, revealing >4 million single nucleotide variations and ~1 million insertions and deletions per individual. To capture genomic variation beyond a single reference, we constructed a draft pangenome using the Progressive Genome Graph Builder (PGGB). The resulting pangenome comprises ~2.47 Gb, with 35.2 million nodes, 48.4 million edges, and 159,060 paths, incorporating sequences from all individuals. Its graph structure revealed substantial topological differences, which may correspond to biologically relevant variations.

        The breadth of these analyses required extensive use of the CHPC’s computing resources. Long-read genome assembly and polishing were executed on high-memory nodes to accommodate the error-correction and scaffolding steps. Repeat and gene annotation pipelines (RepeatModeler, BRAKER3) as well as variant discovery with GATK and BCFtools were parallelised to accelerate execution. Pangenome graph construction was particularly computationally intensive, requiring large-scale parallelisation and significant memory and storage capacity to manage multi-genome alignments and graph building.

        This study provides the most contiguous wild-derived genome to date for the species, the first draft pangenome for C. crocuta, and establishes a foundation for future conservation and comparative genomics. Importantly, it demonstrates the critical role of HPC resources in enabling large-scale bioinformatics pipelines - from genome assembly to pangenome construction and population-level analysis - in non-model organisms.

        Speaker: Dr Ansia van Coller (South African Medical Research Council)
      • 12:20
        Q&A 10m

    • 11:00 12:30
      HPC Technology 1/1-8+9 - Room 8+9

      • 11:00
Building the Future: Morocco's High-Performance Computing Infrastructure 20m 1/1-8+9 - Room 8+9


        This talk presents an overview of Morocco's emerging HPC ecosystem, highlighting national initiatives, key infrastructure developments, and the growing collaborations that drive computational research and innovation. A particular focus will be given to Toubkal, Morocco's flagship supercomputer, which represents a major step forward in national computing capacity and supports applications in scientific research, artificial intelligence, and industry. The presentation outlines the architecture and capabilities of Moroccan HPC centers and the broader vision for positioning Morocco as a regional hub for advanced computing.

        Speaker: Prof. Imad Kissami (Mohammed VI Polytechnic University)
      • 11:20
        TBC (SARAO) 20m 1/1-8+9 - Room 8+9

      • 11:40
        AASTU HPC-BDA Center: Advancing Cyber-Infrastructure for Data-to-Decision Transformation in Africa 20m

        The newly established High-Performance Computing and Big Data Analytics (HPC-BDA) Centre of Excellence at Addis Ababa Science and Technology University (AASTU) represents Ethiopia’s bold entry into the continental cyber-infrastructure landscape, complementing South Africa’s CHPC and NICIS. Anchored in state-of-the-art laboratories spanning Business Analytics, HPC & Cloud Systems, Bioinformatics, Agro-Informatics, Computational Science, Cybersecurity, and Meteorological Modelling, the Centre employs a dual-layer strategy that couples foundational infrastructure with high-impact applications in agriculture, healthcare, climate resilience, and the digital economy. By embedding bioinformatics and genomics with secure, INSA-supported data governance frameworks, the Centre uniquely integrates life sciences, policy alignment, and advanced computation into actionable decision systems. Positioned as a “Research Gravity Zone,” it aspires to attract partnerships, catalyze funding, and advance Ethiopia’s Digital Ethiopia 2025 and STI policy priorities, while fostering regional collaboration toward a pan-African HPC-BDA ecosystem that translates data to decisions.

High-Performance Computing (HPC) and Big Data Analytics (BDA) are rapidly transforming the global research and innovation landscape, enabling nations to turn massive data streams into actionable insights. While South Africa's Centre for High Performance Computing (CHPC) has demonstrated continental leadership, emerging ecosystems across Africa now have the opportunity to complement and expand this capacity. This presentation introduces the newly established HPC-BDA Centre at Addis Ababa Science and Technology University (AASTU), Ethiopia, as a strategic initiative designed to position Ethiopia as a regional knowledge hub.

The Center integrates state-of-the-art laboratories in Business Analytics, Cloud & HPC Systems, Bioinformatics, Computational Science, Agro-Informatics, Network & Cybersecurity, and Meteorological Modelling. Its dual-layer strategy links advanced cyber-infrastructure with thematic domains of national priority: agriculture, healthcare, climate resilience, and the digital economy. By embedding bioinformatics and genomics, the Center uniquely connects life sciences with data-driven decision systems, strengthening Africa's capacity for health security and food sustainability. Supported by the Information Network Security Administration (INSA), the Center also incorporates advanced cybersecurity and governance frameworks, ensuring ethical, secure, and policy-aligned use of HPC-BDA resources for both national and international collaboration.

        The paper will highlight how this ecosystem fuels data-to-decision pipelines through advanced HPC workflows, robust partnerships, and alignment with Digital Ethiopia 2025 and the national Science, Technology, and Innovation (STI) policy framework. Furthermore, it will discuss the Center’s regional role in fostering collaboration with continental cyber-infrastructure leaders, including CHPC and NICIS, towards a pan-African HPC-BDA network.

        By demonstrating Ethiopia’s novel model of integrating cyber-infrastructure, applied research, and innovation ecosystems, the AASTU HPC-BDA Center aspires to create a “Research Gravity Zone” in Africa, an engine attracting partnerships, funding, and global recognition while directly advancing the CHPC 2025 theme of From Data to Decisions.

        Speaker: Adugna Woldesemayat (UNISA; Addis Ababa Sci. & Tech. U.)
      • 12:00
A computer architecture targeting RAM chips 20m 1/1-8+9 - Room 8+9


        Highly parallel computation algorithms on structured data can remain
        inside the memory chip, removing the need to pass all the data across a
        bus to a CPU chip and back.

        This can save a great deal of power for very little added complexity of
        the RAM chip itself.

        As a proof of concept, this computer architecture is implemented inside
        an FPGA, mapping the FPGA block RAM to a 1024 bit square array, with
        1024 bit serial processors, one for each row.

        Each processor consists of a single bit full adder, a little more logic,
        and a 6 bit stack.

        All processors are controlled, SIMD fashion, by a sequencer.

        Variable bit width math functions add, subtract, multiply and divide
        are implemented. As support operations there are also 8, 16 and 32 bit
        transpose, floating point to fixed point conversion, and vice versa.

        All these operations are mapped onto the bit-serial processor. Thus all
        1024 rows are processed at the same time.

To demonstrate how it might be used, a prime number finding algorithm is
implemented, which is trivial enough for the audience to understand the
workings of the bit-serial engine, and a single precision floating point
matrix multiply to demonstrate the architecture's utility.

        Were this to be realised within a 4 Gbit RAM chip, there would be
        space for a million processors, each with 4k bits of storage -
        easily sufficient for the matrix multiply algorithm used in the FPGA
        demonstrator.

The FPGA demonstrator is for algorithm research, as very few present-day
problems have solutions targeting millions of SIMD processors.

Speaker: Mr Andy Rabagliati
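The bit-serial scheme described above is easy to model in software: one full adder per row, stepped over bit positions by a sequencer, with every row advancing in lockstep. A hedged Python sketch (a software model for intuition only, not the FPGA design):

```python
# Software model of SIMD bit-serial addition: a 1-bit full adder applied
# LSB-first across many rows "simultaneously", one bit position per step.
def bitserial_add(rows_a, rows_b, width=8):
    carries = [0] * len(rows_a)
    sums = [0] * len(rows_a)
    for bit in range(width):                  # the sequencer steps bit index
        for i, (a, b) in enumerate(zip(rows_a, rows_b)):
            x = (a >> bit) & 1
            y = (b >> bit) & 1
            s = x ^ y ^ carries[i]            # full-adder sum bit
            carries[i] = (x & y) | (x & carries[i]) | (y & carries[i])
            sums[i] |= s << bit
    return sums

print(bitserial_add([3, 100, 255], [5, 27, 1], width=9))  # [8, 127, 256]
```

In the FPGA (and the envisaged RAM chip), the inner loop over rows is not a loop at all: each row has its own adder, so the cost of a step is independent of the number of rows.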
      • 12:20
        Q & A 10m 1/1-8+9 - Room 8+9

    • 11:00 12:40
      ISSA: Cybersecurity 1/1-10 - Room 10

      • 11:00
        TBC 30m

        TBC

        Speaker: Luyanda Mtombeni
      • 11:30
        TBC 30m

        TBC

        Speaker: Boitumelo Leotlela
      • 12:00
        TBC 30m

        TBC

        Speaker: Asavela Ndzondzo
    • 11:00 12:30
      Special: Quantum Computing 1/1-7 - Room 7

      • 11:00
        Quantum Reservoir Computing and Principal Component Analysis for Sustainable High-Performance Search: A New Paradigm for Digital Energy Efficiency 20m

        As digital economies scale, the hidden environmental cost of data processing—especially from AI and search engines—has become a growing concern. Search engines like Google and AI models such as Meta AI consume hundreds of kilowatt-hours (kWh) daily; in 2024, Google disclosed that each AI search can consume up to 3 watt-hours, which, at scale, parallels the energy of running a home microwave for 20–30 seconds per query. These figures point to a pressing need to rethink our computational architectures.

        We propose a novel hybrid model that combines Quantum Reservoir Computing (QRC) with Principal Component Analysis (PCA) as a means to reduce computational load while maintaining high-performance intelligence. This approach leverages quantum dynamics for memory-rich processing while applying PCA to filter and compress high-dimensional outputs, minimizing redundancy and noise. The integration is particularly designed for High-Performance Computing (HPC) tasks such as indexing, ranking, and personalization within large-scale search engines.

        Previous research in QRC has highlighted its potential for temporal processing, but it remains underutilized in real-world, energy-intensive infrastructures. Most prior work applies QRC in small-scale simulations without dimensionality reduction or power profiling. Our method introduces PCA post-processing as a compression lens—a missing piece in current quantum reservoir computing literature.

        Speaker: Biswas Kapasule (Aeon Mobility)
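The PCA "compression lens" step can be sketched independently of the quantum reservoir. The following hedged example uses synthetic low-rank data as a stand-in for reservoir readouts and performs PCA via SVD of the mean-centred data matrix.

```python
# PCA compression sketch on synthetic stand-in "reservoir" outputs.
import numpy as np

rng = np.random.default_rng(0)
# 200 synthetic 50-dimensional readouts with 3-dimensional latent structure.
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 50))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 50))

# PCA via singular value decomposition of the mean-centred data matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = (S ** 2) / (S ** 2).sum()   # variance explained per component

k = 3
X_reduced = Xc @ Vt[:k].T               # compressed representation, 50 -> 3
print(explained[:k].sum())              # close to 1: three components dominate
```

Downstream tasks (indexing, ranking, personalisation) would then operate on the compressed representation, which is where the claimed energy savings arise.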
      • 11:20
        Converting complex unitary matrices to quantum circuits using game trees 20m

        Quantum circuits can be represented using complex unitary matrices. Certain algorithms, such as Shor's factoring algorithm, produce a matrix that needs to be converted into a quantum circuit and executed multiple times as part of a larger quantum circuit. Current approaches use the mathematical properties of matrices to factor arbitrary matrices into a different set of matrices that must then be converted to quantum circuits. As such, the available basis gates of a specific quantum computer are not considered during the process. These quantum circuits cannot be directly implemented on quantum computers and require a transpilation step, where the produced quantum circuit needs to be converted to a quantum circuit which can run on a specific quantum computer. The transpilation greatly increases the number of gates used on the quantum computer, which increases the execution time needed on quantum hardware, and increases the noise observed during experiments. This study proposes a novel approach to convert complex unitary matrices into quantum circuits, while also minimising the number of gates used by the quantum computer. The proposed approach utilises a game tree, where the basis gates for a specific quantum computer are used to ensure that an optimal solution is found. The process of converting an arbitrary matrix to a quantum circuit can be modelled by storing the matrix representation of a quantum circuit and then adding new gates one at a time and recalculating the matrix representation. These matrices can be thought of as states in a game tree. At each state in the game tree, the valid moves are all the basis gates for a given quantum computer. The goal matrix can then be found by searching the generated game tree for a state with the same matrix representation as the goal matrix, and the corresponding path of the tree will correspond to the gates which produce the quantum circuit. 
This study investigates the generation and traversal of such quantum game trees. This includes efficient matrix storage in the game tree coupled with compression algorithms, as well as the accuracy functions necessary to search the game tree for the desired matrix.

        Speaker: Mr Sean Macmillan (University of Pretoria)
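The game-tree idea in this abstract can be made concrete with a hedged toy version: breadth-first search over sequences of basis gates (here just H and T on a single qubit) until the accumulated matrix matches a target unitary up to global phase. This is an illustration of the search structure only; the talk's approach targets real device basis-gate sets, matrix compression, and larger circuits.

```python
# Toy game-tree search: find a sequence of basis gates whose product
# matches a target unitary (up to global phase).
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])
GATES = {"H": H, "T": T}                 # toy basis-gate set

def matches(u, v):
    # Equality up to global phase: align v's phase to u at v's largest entry.
    idx = np.unravel_index(np.argmax(np.abs(v)), v.shape)
    phase = u[idx] / v[idx]
    return np.allclose(u, phase * v, atol=1e-9)

def find_circuit(target, max_depth=8):
    # Each node stores (gate sequence, accumulated unitary); the valid moves
    # at every node are the basis gates, exactly as in a game tree.
    frontier = [([], np.eye(2, dtype=complex))]
    for _ in range(max_depth):
        nxt = []
        for seq, u in frontier:
            for name, g in GATES.items():
                u2 = g @ u
                if matches(target, u2):
                    return seq + [name]
                nxt.append((seq + [name], u2))
        frontier = nxt
    return None

X = np.array([[0, 1], [1, 0]], dtype=complex)
print(find_circuit(X))  # e.g. a sequence equivalent to H T^4 H = H Z H = X
```

Breadth-first order guarantees a minimal-depth sequence in this toy setting; the hard part the study addresses is making the tree tractable (matrix storage, compression, and accuracy functions) for multi-qubit matrices.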
      • 11:40
Comparison of Quantum Algorithms for Quadratic Optimization 20m

Optimization problems appear widely in science and industry, yet their classical solutions often demand considerable computational resources. Quantum computing provides a promising framework for addressing such problems more efficiently by exploiting quantum superposition and entanglement [1]. In this work, we investigate several quantum gradient descent [2] approaches to find the minimum of a quadratic cost function. Implementing the algorithms through amplitude encoding, we begin with a quantum gradient descent algorithm based on phase estimation. To further enhance performance, we develop and test additional strategies, including linear combination of unitaries (LCUs) [3], the Sz.-Nagy dilation method [4], and a so-called unitary selection method, where the cost function is explicitly defined as a quadratic function. These methods are evaluated in terms of circuit depth, number of iterations, and accuracy. Our results show that the unitary selection outperforms phase estimation, LCUs provide a further improvement, and the Sz.-Nagy approach achieves the highest efficiency among all tested methods. This comparative study highlights the potential of pure quantum algorithms in solving real-world quadratic optimization problems.

        [1] Nielsen, M. A., & Chuang, I. L., Quantum Computation and Quantum Information (10th Anniversary Edition, 2010), Cambridge University Press.

        [2] Rebentrost, P., Schuld, M., Wossnig, L., Petruccione, F., and Lloyd, S., Quantum gradient descent and Newton’s method for constrained polynomial optimization, New J. Phys., 21(7):073023, (2019).

        [3] Chakraborty, Shantanav. "Implementing any linear combination of unitaries on intermediate-term quantum computers." Quantum 8 (2024): 1496.

        [4] Gaikwad, Akshay, Arvind, and Kavita Dorai. "Simulating open quantum dynamics on an NMR quantum processor using the Sz.-Nagy dilation algorithm." Physical Review A 106.2 (2022): 022424.

        Speaker: Ms Helarie Rose Medie Fah (UKZN)
      • 12:00
        Quantum Optimisation Algorithms on Calculating Ramsey Numbers 20m

The computation of Ramsey numbers in graph theory concerns the unavoidable appearance of ordered substructures in graphs of a given size. Mathematically, the calculation of a Ramsey number R(k, l) = n is a two-colouring problem that finds the smallest graph of size n that contains either a colouring of size k or a different colouring of size l [1]. This is a formidable computational challenge: classical algorithms face a search space that grows super-exponentially with the number of vertices, rendering the problem intractable. This abstract presents an approach to utilising quantum optimisation algorithms to address this complexity, with an experimental implementation targeting IBMQ quantum hardware.

The paper [2] reformulates the problem of determining whether R(k, l) > n (i.e., whether an n-vertex graph exists with no k-clique or l-independent set) into a Quadratic Unconstrained Binary Optimisation (QUBO) problem. The associated problem Hamiltonian, HP, is constructed such that its ground state corresponds to a solution that satisfies our decision problem.

We employ the variational algorithm, a leading hybrid quantum-classical method. The circuit is implemented using the Qiskit framework and executed on accessible IBMQ systems. A key aspect of our work is the introduction of quantum approaches in this field and execution on utility-scale IBMQ architecture. To our knowledge, the paper [3] solves R(5, 5) = 45 with Majorana-based algebra on a photonic quantum computer, using only 5 qubits. We have verified classical results for the computation of small, yet non-trivial, Ramsey numbers, such as R(3, 3), by benchmarking the performance of classical optimisation. We would like to investigate the scale-up performance and quality of results on utility-scale quantum computers. Our findings will contribute to the knowledge of solving problems beyond the reach of conventional High Performance Computing (HPC) resources.

        [1] Bondy, J.A., and P. Erdős. “Ramsey numbers for cycles in graphs.” Journal of Combinatorial Theory, Series B, vol. 14, no. 1, 1973, pp. 46–54. https://doi.org/10.1016/S0095-8956(73)80005-X
        [2] Wang, Hefeng. “Determining Ramsey Numbers on a Quantum Computer.” Physical Review A, vol. 93, no. 3, Mar. 2016. https://doi.org/10.1103/PhysRevA.93.032301
        [3] Tamburini, Fabrizio. “Random-projector quantum diagnostics of Ramsey numbers and a prime-factor heuristic for R(5,5)=45.” arXiv, 2025. arXiv:2508.16699.
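        The classical verification of a small Ramsey number such as R(3, 3) = 6 can be illustrated with a minimal brute-force sketch (an independent illustration, not the benchmarked optimiser): every 2-colouring of the edges of K₆ contains a monochromatic triangle, while K₅ admits a colouring that avoids one.

```python
from itertools import combinations, product

def has_mono_triangle(n, colouring):
    """True if a 2-colouring of K_n's edges contains a monochromatic
    triangle (a 3-clique whose edges all share one colour)."""
    edges = list(combinations(range(n), 2))
    colour = dict(zip(edges, colouring))
    return any(
        colour[(a, b)] == colour[(a, c)] == colour[(b, c)]
        for a, b, c in combinations(range(n), 3)
    )

def every_colouring_forced(n):
    """True if every 2-colouring of K_n contains a monochromatic triangle."""
    m = n * (n - 1) // 2  # number of edges of K_n
    return all(has_mono_triangle(n, c) for c in product((0, 1), repeat=m))

print(every_colouring_forced(5))  # False: K_5 has a triangle-free 2-colouring
print(every_colouring_forced(6))  # True: hence R(3, 3) = 6
```

        The same exhaustive strategy is exactly what becomes intractable at larger sizes, since the number of colourings grows as 2^(n(n−1)/2), which motivates the QUBO reformulation described above.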

        Speaker: Shawal Kassim (Wits)
      • 12:20
        Q&A 10m
    • 12:30 13:30
      Lunch 1h 1/0-Foyer+C+D - Exhibition and Competition Halls

      1/0-Foyer+C+D - Exhibition and Competition Halls

      Century City Conference Centre

      550
    • 13:30 15:20
      HPC Applications: 2 1/1-11 - Room 11

      1/1-11 - Room 11

      Century City Conference Centre

      100
      • 13:30
        TBC 20m

        TBC

        Speaker: Prof. Catharine Esterhuysen (Department of Chemistry and Polymer Science, University of Stellenbosch)
      • 13:50
        CCAM Heavy Rainfall Simulations: Sensitivity to Initialization Datasets 20m

        Heavy rainfall events are among the most damaging weather hazards worldwide, yet they remain difficult to simulate accurately. One key source of uncertainty is the choice of input data used to initialize weather and climate models. In this study, we tested how sensitive the Conformal Cubic Atmospheric Model (CCAM) is to different initialization datasets, including ERA5, GFS, GDAS, and JRA-3Q. Using the CHPC Lengau cluster, we ran high-resolution (3 km) convection-permitting simulations, which allowed us to capture the fine-scale features of a 3-4 June 2024 heavy rainfall event over the eastern parts of South Africa.
        We evaluated the simulations against radar and IMERG satellite precipitation estimates. While all runs reproduced the evening peak in rainfall timing, they generally underestimated intensity. Among the datasets, ERA5 produced the most reliable simulations, showing the closest match to IMERG with the lowest errors and highest correlation. In contrast, JRA-3Q and GFS-FNL performed less well. These results show that the choice of initialization dataset has a clear impact on rainfall prediction skill, and highlight the value of HPC-enabled sensitivity studies for improving extreme weather forecasting in the region.

        Speaker: Mr Tshifhiwa Rambuwani (South African Weather Service)
      • 14:10
        Recommendations for Running Bioinformatics Applications on CHPC 20m

        Running large-scale bioinformatics analyses on high-performance computing (HPC) infrastructure like the CHPC can significantly accelerate research, but comes with technical challenges—especially for researchers aiming to deploy complex workflows such as those built with Nextflow. In this talk, I present practical recommendations and lessons learned from testing and running various bioinformatics applications on the CHPC, with a particular focus on containerised workflows and resource optimisation.

        Drawing from real-world use cases and performance benchmarks, I highlight key considerations such as managing limited walltime, dealing with module and environment setup, optimising Singularity containers for reproducibility, and handling input/output bottlenecks. I also reflect on common pitfalls and how to overcome them—especially for researchers with limited systems administration experience.

        This presentation aims to equip bioinformatics users with actionable guidance on how to run workflows more efficiently, reproducibly, and with fewer frustrations on the CHPC infrastructure. It is also a call for continued collaboration between HPC support teams and domain researchers to bridge the gap between computational capacity and research usability.
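        The kind of job setup the talk addresses can be sketched as a PBS submission script for a containerised Nextflow pipeline. This is a hypothetical illustration: the queue, module, and path names below are placeholders, not verified CHPC settings.

```shell
#!/bin/bash
# Hypothetical PBS job script sketching the setup discussed in this talk.
# Queue, module, and path names are placeholders, not verified CHPC values.
#PBS -N nextflow-pipeline
#PBS -q normal
#PBS -l select=1:ncpus=24
#PBS -l walltime=48:00:00

cd "$PBS_O_WORKDIR"

module purge
module load singularity          # placeholder module name

# Keep work files on scratch to ease I/O bottlenecks, and use -resume so a
# run cut short by the walltime limit can pick up where it left off.
nextflow run my-pipeline/main.nf \
    -profile singularity \
    -work-dir /scratch/"$USER"/nf-work \
    -resume
```

        Caching via -resume is the main defence against limited walltime: the pipeline is resubmitted repeatedly and only unfinished tasks rerun.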

        Speaker: Ms Thandeka Mavundla (University of Cape Town)
      • 14:30
        Holes, Hops and Haven Ratios—Molecular Dynamics of Ion Transport and Transference in MFSI/[C₄C₁pyr][FSI] electrolytes (M = Li, Na, K) 20m

        Room-temperature ionic liquids (ILs) are molten salts with negligible vapour pressure and wide electrochemical windows, making them attractive electrolytes for beyond-lithium batteries [1]. Optimising transport properties—such as conductivity, self-diffusion, and the working-ion transference number (the fraction of the total ionic current carried by Li⁺/Na⁺/K⁺ from the added salt)—requires further quantitative, molecular-scale insight into how charge and mass move. Equilibrium molecular dynamics (MD) provides this insight by enabling transport coefficients and mechanistic signatures to be extracted from atomistic simulations. The rate capability of a battery is tightly coupled to the transport properties of the electrolyte; formulations that raise the working-ion transference number while maintaining adequate conductivity are preferred [2].

        In this work, MD simulations were used to probe 1-butyl-1-methylpyrrolidinium bis(fluorosulfonyl)imide ([C₄C₁pyr][FSI]) mixed with MFSI (M = Li, Na, K) at salt mole fractions of 0.10, 0.20, 0.40 and T = 348.15 K. A non-polarisable model based on the well-established CL&P force field was employed [3]; however, non-bonded interaction parameters were adjusted to better reflect symmetry-adapted perturbation theory (SAPT) decomposition of pairwise interactions, including cation–anion and metal–salt pairs. Equilibrium trajectories of ≥250 ns per state point were generated with LAMMPS [4]. Self-diffusion coefficients were obtained from Einstein mean-squared displacements, and ionic conductivity was computed using the Green–Kubo/Einstein–Helfand formulation. The analysis includes Nernst–Einstein estimates of conductivity ($\sigma_\text{NE}$), Haven ratios ($\sigma_\text{NE}/\sigma$) and the inverse ionicity ($\sigma/\sigma_\text{NE}$), and both apparent transference numbers (from self-diffusion coefficients) and real/collective transference numbers from conductivity decomposition in an Onsager framework. Mechanisms of ion transport are examined via Van Hove correlation functions (self and distinct), the non-Gaussian parameter, ion–anion residence times, and coordination numbers. Hole (free-volume) theory is evaluated as a compact model for conductivity across composition.

        HPC Content: Strong scaling was assessed for fixed-size systems of 256, 512, and 1024 ion pairs on 1–64 CPU cores (MPI ranks); wall-time per ns and ns/day were recorded to determine speedup and parallel efficiency. For one representative state point, transport properties are compared across these system sizes to illustrate finite-size effects.

        [1] Yang, Q.; Zhang, Z.; Sun, X.-G.; Hu, Y.-S.; Xing, H.; Dai, S., Ionic liquids and derived materials for lithium and sodium batteries. Chem. Soc. Rev. 2018, 47, 2020-2064.
        [2] Chen, Z.; Danilov, D. L.; Eichel, R.-A.; Notten, P. H. L., Porous Electrode Modeling and its Applications to Li-Ion Batteries. Adv. Energy Mater. 2022, 12, 2201506.
        [3] Canongia Lopes, J. N.; Pádua, A. A. H., CL&P: A generic and systematic force field for ionic liquids modeling. Theor. Chem. Acc. 2012, 131, 1-11.
        [4] Thompson, A. P.; Aktulga, H. M.; Berger, R.; Bolintineanu, D. S.; Brown, W. M.; Crozier, P. S.; in 't Veld, P. J.; Kohlmeyer, A.; Moore, S. G.; Nguyen, T. D.; Shan, R.; Stevens, M. J.; Tranchida, J.; Trott, C.; Plimpton, S. J., LAMMPS - a flexible simulation tool for particle-based materials modeling at the atomic, meso, and continuum scales. Comput. Phys. Commun. 2022, 271, 108171.
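        The Nernst–Einstein estimate and Haven ratio described above can be sketched as follows; the species counts, box volume, and diffusion coefficients are invented placeholders, not results from these simulations.

```python
# Illustrative sketch of the Nernst-Einstein conductivity estimate and the
# Haven ratio; all numbers below are placeholders, not simulation data.
E = 1.602176634e-19   # elementary charge, C
KB = 1.380649e-23     # Boltzmann constant, J/K

def sigma_nernst_einstein(species, volume_m3, temperature_k):
    """sigma_NE = (e^2 / (V kB T)) * sum_i N_i z_i^2 D_i for species
    given as (count N_i, charge z_i, self-diffusion D_i in m^2/s)."""
    total = sum(n * z * z * d for n, z, d in species)
    return E * E * total / (volume_m3 * KB * temperature_k)

species = [
    (100, +1, 2.0e-11),   # Li+          (illustrative)
    (400, +1, 5.0e-11),   # [C4C1pyr]+   (illustrative)
    (500, -1, 6.0e-11),   # [FSI]-       (illustrative)
]
sigma_ne = sigma_nernst_einstein(species, volume_m3=1.0e-25,
                                 temperature_k=348.15)

sigma_collective = 0.8 * sigma_ne           # stand-in for a Green-Kubo value
haven_ratio = sigma_ne / sigma_collective   # H = sigma_NE / sigma
ionicity = sigma_collective / sigma_ne      # sigma / sigma_NE
print(f"sigma_NE ~ {sigma_ne:.2f} S/m, Haven ratio = {haven_ratio:.2f}")
```

        A Haven ratio above 1 (ionicity below 1) signals correlated ion motion that reduces the true conductivity below the ideal uncorrelated estimate.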

        Speaker: Dr Gerhard Venter (University of Cape Town)
      • 14:50
        Q&A 10m
    • 13:30 15:00
      HPC Technology 1/1-8+9 - Room 8+9

      1/1-8+9 - Room 8+9

      Century City Conference Centre

      80
      • 13:30
        Intel presentation (TBC) 20m

        TBC

        Speaker: Intel Speaker (TBC)
      • 13:50
        HPE Presentation (TBC) 20m
        Speaker: HPE Speaker (TBC)
      • 14:10
        Eclipse Holdings Presentation (TBC) 20m
        Speaker: Eclipse Holdings Speaker (TBC)
      • 14:30
        VAST Data Presentation (TBC) 20m
        Speaker: VAST Data Speaker (TBC)
      • 14:50
        Q & A 10m
    • 13:30 15:10
      ISSA: Cybersecurity 1/1-10 - Room 10

      1/1-10 - Room 10

      Century City Conference Centre

      50
      • 13:30
        TBC 30m

        TBC

        Speaker: Errol Baloyi
      • 14:00
        TBC 30m

        TBC

        Speaker: Marijke Coetzee
      • 14:30
        TBC 30m

        TBC

        Speaker: Raymond Agyemang
    • 13:30 15:00
      Special: Quantum Computing 1/1-7 - Room 7

      1/1-7 - Room 7

      Century City Conference Centre

      50
      • 13:30
        Ghost Image Reconstruction with Classical and Quantum Convolutional Neural Networks 20m

        Image reconstruction is a critical problem in industry, especially in certain areas of optics, such as the ghost imaging experiment [1], [2]. The experiment has many beneficial practical applications, such as live-cell imaging and remote sensing. The key leverage here lies in its non-local imaging procedure, which allows one to view a quantum image without collapsing its state. The experimental approach requires twice the number of measurements of a classical image, owing to the real and complex parts of the quantum image; it thus requires ≈ 2N² measurements to reconstruct an N × N image [3]. The experimental procedure faces challenges in the speed and fidelity of reconstruction. Commonly used classical reconstruction methods are effective but can be computationally intensive or struggle to leverage the inherent patterns in natural images.

        We have designed classical and quantum algorithms to overcome this intensive computational task. The method we present reconstructs low-sampled images measured from the ghost imaging experiment using classical and quantum Convolutional Neural Networks (CNNs) [4]. Low-sampled images have a linear representation under the Hadamard transform, in which some coefficients of the linear decomposition are unknown. The CNNs take the low-sampled coefficients as inputs and reconstruct the complete set of coefficients. Instead of directly processing pixel-domain images, our method focuses on reconstructing missing coefficients in the Hadamard transform domain.

        The quantum CNN architecture adapts the principles of a classic U-Net Convolutional Neural Network, with the convolutional and pooling layers applied using variational circuits. Owing to quantum properties such as superposition and entanglement, the model may be able to exploit more of the intrinsic patterns and correlations within the Hadamard coefficient space. We have simulated the quantum CNN, and it appears to show improved reconstruction speed and higher fidelity compared to its classical counterpart of similar size.

        This paper will detail the proposed classical and quantum CNN architectures, the encoding scheme for Hadamard coefficients into quantum states, the variational quantum layers for feature extraction and upsampling, and the classical optimisation loop. We will present simulation results on the MNIST data set and real experimental results from the Wits Structured Light Lab, demonstrating the CNNs' ability to reconstruct full Hadamard coefficient sets from various levels of undersampling, followed by an inverse Convolutional Neural Network to generate high-fidelity pixel-domain images. The findings highlight the potential of quantum machine learning to significantly advance computational imaging techniques like ghost imaging, paving the way for faster, more accurate, quantum-enhanced imaging solutions.
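        The Hadamard-domain representation underlying the method can be sketched in a few lines (an illustrative toy, not the presented CNN pipeline): flattening an image to a vector x, its coefficients are c = Hx, and since a Sylvester Hadamard matrix satisfies HH = nI, the image is recovered exactly as x = Hc/n. Undersampling corresponds to unknown entries of c, which the CNNs are trained to fill in.

```python
# Toy illustration of the Hadamard-domain picture: coefficients c = H x,
# exact recovery x = H c / n.  Pixel values and size are arbitrary.

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    h = [[1]]
    while len(h) < n:
        h = ([row + row for row in h] +
             [row + [-v for v in row] for row in h])
    return h

def matvec(m, v):
    return [sum(a * b for a, b in zip(row, v)) for row in m]

n = 8
H = hadamard(n)
x = [3, 1, 4, 1, 5, 9, 2, 6]            # a toy 8-pixel "image"
c = matvec(H, x)                        # Hadamard coefficients (measurements)
x_rec = [v // n for v in matvec(H, c)]  # H is symmetric and H H = n I
print(x_rec == x)  # True: the full coefficient set reconstructs exactly
```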

        Speaker: Shawal Kassim (Student)
      • 13:50
        Classical and Quantum Computational Complexity of the Ramsey Number Problem 20m

        Ramsey Numbers are a computationally difficult problem to solve. The expected runtime of any algorithm to find a Ramsey Number is in the computational complexity class of $\Pi_2^P$ or $\text{co-NP}^\text{NP}$ (Burr, 1987). Here we present some preliminary results from an optimized tree-search algorithm to find the next Ramsey Number $R(4,6)$ (Radziszowski, 2024) and verify the result $R(5,5)=45$ (Tamburini, 2025) using modern parallelisation techniques and improved hardware. We provide an analysis of the efficiency of this parallel algorithm compared to other implementations. We present current progress on generalising the algorithm to find the Ramsey Numbers for general associated structures.

        Speaker: Brendan Griffiths (University of the Witwatersrand)
      • 14:10
        Identification of Quantum Hardware based on Noise Fingerprint Using Machine Learning 20m

        This project focuses on identifying quantum hardware based on its unique "quantum
        noise fingerprint" using machine learning. Each quantum computer exhibits a distinct
        noise signature due to physical imperfections, and recognizing these patterns can aid in
        hardware development, calibration, and security. We utilized basic machine learning
        algorithms (SVM, KNN) to analyse noise characteristics and predict which IBM quantum
        machine executed a given circuit.
        Methodology and Observations
        Data was gathered from IBM's Qiskit platform, including actual hardware runs (facilitated
        by a CSIR educational license) and refreshed software simulations. An HPC cluster was
        essential for processing and simulating the extensive datasets due to the computational
        demands, allowing for efficient parallel data transformation. The SVM and KNN machine
        learning models were then trained on this data, after feature engineering and parameter
        tuning was completed. Initial findings showed high accuracy (over 96%) when models
        were trained and tested on data within the same category (e.g., training on hardware data
        and testing on hardware data). However, a significant drop in accuracy was observed
        when attempting to identify machines across different data types (e.g., training on
        software simulations and testing on actual hardware). Furthermore, we noted that IBM's
        refreshed simulation noise models are not static and evolve over time.
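        The nearest-neighbour classification at the heart of this fingerprinting can be sketched as below; the two-dimensional feature vectors and machine labels are invented for illustration (the study engineers its features from Qiskit runs).

```python
# Sketch of k-nearest-neighbour attribution of a run to a machine by its
# noise features; features and labels below are invented for illustration.

def euclidean(a, b):
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

def knn_predict(train, query, k=3):
    """Majority label among the k training vectors closest to `query`;
    train is a list of (feature_vector, label) pairs."""
    nearest = sorted(train, key=lambda item: euclidean(item[0], query))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

train = [
    ([0.021, 0.11], "machine_A"), ([0.019, 0.12], "machine_A"),
    ([0.020, 0.10], "machine_A"), ([0.045, 0.30], "machine_B"),
    ([0.050, 0.28], "machine_B"), ([0.047, 0.31], "machine_B"),
]
print(knn_predict(train, [0.022, 0.11]))  # machine_A
```

        The cross-category accuracy drop reported above corresponds, in this picture, to the query being drawn from a differently distributed feature space than the training points.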

        Speakers: Mr Rameez Abdool (Wits), Ms Jenna Epstein (Wits)
      • 14:30
        Password Security Quantum Readiness Framework for IT Professionals 20m

        The rise of quantum computing poses a serious threat to password-based security systems and could break the methods we currently use to keep data safe, putting sensitive information at risk. For example, Grover's algorithm, a well-known quantum algorithm, can make brute-force password attacks much faster by reducing the number of guesses needed to roughly the square root of the total number of possible passwords, which could make attacks thousands of times faster for large key spaces.

        This research proposes a Password Security Quantum Readiness Framework to help IT professionals maintain business continuity in the face of sudden quantum-driven password security shifts. The study aims to assess the risk that quantum computing poses to password security and to evaluate countermeasures, including quantum-resistant hashing, multi-factor or password-less authentication, upgrading hashing protocols to post-quantum standards, and other protections to mitigate these risks.

        A qualitative-methods design supports the study. First, a thorough literature review will be conducted to investigate the password security risk posed by quantum computing. Second, a systematic literature review will be conducted to investigate possible countermeasures for mitigating password security risks related to quantum computing. Third, critical reasoning will be used to identify and extract key constructs for formulating the framework.

        Businesses can protect sensitive information from emerging quantum technologies by developing a quantum readiness framework for password security. This framework will help IT professionals understand the risks posed by quantum computing and equip them to address password cybersecurity challenges, creating a business-continuity architecture to safeguard password infrastructure and ensure operational resilience in the evolving quantum landscape.
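        The square-root reduction cited above can be made concrete with a little arithmetic; the password length and alphabet here are assumptions chosen for illustration, not values from the study.

```python
import math

# Illustrative arithmetic for Grover's square-root speedup: an 8-character
# lowercase password has 26**8 possibilities, and Grover's algorithm needs
# on the order of sqrt(26**8) = 26**4 iterations instead.
space = 26 ** 8                        # ~2.1e11 candidate passwords
grover_iterations = math.isqrt(space)  # exactly 26**4 = 456,976 here
speedup = space // grover_iterations

print(f"key space:         {space:,}")
print(f"Grover iterations: ~{grover_iterations:,}")
print(f"speedup factor:    ~{speedup:,}")
```

        Doubling the password length squares the key space, so the common post-quantum advice of lengthening passwords restores the classical security margin against Grover-style attacks.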

        Speaker: Anele Siwela (CSIR)
      • 14:50
        Q&A 10m
    • 15:00 15:30
      Break 30m
    • 15:30 17:00
      Keynote: Industry Inquisition 1/0-AB - Hall A+B

      1/0-AB - Hall A+B

      Century City Conference Centre

      550
      Convener: Facilitator: Mr Dan Olds (OrionX)
      • 15:30
        TBC 20m

        TBC

        Speaker: Mr Dan Olds (Olds Research)
      • 15:50
        Industry Inquisition 1h 10m

        This session has become a signature event at CHPC conferences. The rules are brutally simple. Vendors have five minutes and only three slides to put their best foot forward to the audience and the inquisitors. The panel includes industry analyst Dan Olds along with two standout students from the cluster competition who have been briefed on the vendors and their slides.

        After their five-minute presentations, the presenters will be asked three questions, two of which they know are coming followed by a final, secret, question. Frank and tough questions will be asked. Answers will be given. Punches will not be pulled. The audience will be the ultimate judge of which vendor did the best job. It’s fun, brisk, and informative.

        Speaker: Mr Dan Olds (Olds Research)
    • 17:00 17:45
      Keynote 1/0-AB - Hall A+B

      1/0-AB - Hall A+B

      Century City Conference Centre

      550
      Convener: Chair: Dr Happy Sithole (CHPC)
      • 17:00
        HPE, Intel and Eclipse Diamond Sponsor Keynote (TBC) 45m

        TBC

        Speaker: Dr Jean-Laurent Philippe (Intel)
    • 17:45 18:00
      Intermission 15m
    • 18:00 19:30
      Conference Dinner 1h 30m
    • 08:00 09:00
      Registration 1h
    • 09:00 10:30
      Keynote: Tuesday 1/0-AB - Hall A+B

      1/0-AB - Hall A+B

      Century City Conference Centre

      550
      • 09:00
        Leveraging undersea cables and optical fibre for high-volume science workflows 45m

        This Keynote presentation will be focused on the following:

        • The readiness of SADC NRENs for the challenges of petascale and exascale data to be generated by key projects such as the SKA and bioinformatics genome sequencing projects.

        • Strategic infrastructure investments and partnerships, and how to maximise private sector investments.

        • Data sovereignty and developing the right skills for our research communities to be able to handle and run these increasing scientific data flows.

        Speaker: Dr Rosalind Thomas (SAEx International Management Ltd)
      • 09:45
        Scaling Innovation: How Compute and Big Data Drive Research and Cybersecurity 45m

        In today’s rapidly evolving digital landscape, robust national cyber infrastructure is essential for driving innovation, securing critical systems, and empowering research across all sectors. This keynote explores how the strategic integration of advanced compute power and big data capabilities forms the backbone of modern cyber infrastructure, enabling nations to tackle complex challenges in science, engineering, and industry. We will highlight MathWorks’ pivotal role in supporting these efforts by delivering state-of-the-art technical tools, such as MATLAB and Simulink, that accelerate data analysis, modeling, and simulation at scale. Beyond technology, MathWorks is committed to capacity building—offering comprehensive training programs for staff and students to cultivate the next generation of cyber professionals. Furthermore, we foster collaboration by connecting academia, government, and industry, ensuring a vibrant ecosystem where innovative ideas flourish. Join us to discover inspiring case studies and practical strategies that demonstrate how a unified approach to compute, data, and community can unlock the full potential of national cyber infrastructure and drive transformative outcomes.

        Speaker: Dr Mischa Kim (MathWorks)
    • 10:30 11:00
      Break 30m
    • 11:00 12:30
      HPC Applications 1/1-11 - Room 11

      1/1-11 - Room 11

      Century City Conference Centre

      100
      • 11:00
        TBC 20m

        TBC

        Speaker: Prof. Malik Maaza (University of South Africa & iThemba LABS/National Research Foundation of South Africa)
      • 11:20
        VCo2O4 (001) surface properties in zinc-air batteries 20m

        The evolution and progress of humanity are closely linked to our ways of energy use. Reliable energy sources are vital for driving economic growth, especially as society's demand for energy keeps rising. The rapid development of zinc-air batteries (ZABs) makes them an appealing alternative to standard lithium-ion batteries for energy storage needs. However, the slow kinetics of the air cathode lead to a short lifespan and low energy efficiency in zinc-air batteries. First-principles calculations help develop catalysts that promote the formation of the most stable discharge products in Zn-air batteries. Density functional theory (DFT) is used to examine the adsorption (Γ = +1, +2) and vacancy formation (Γ = -1, -2) energies of oxygen atoms on the (001) surface of VCo2O4. The Bader charge analysis reveals how the atoms interact within the system. When oxygen atoms are reduced and adsorbed, it is observed that the V and Co atoms show minimal charge differences compared to the original phase, whether reduced or oxidized. Interplanar distances show that adding or removing oxygen causes the system to expand or contract, respectively. The work function helps assess the system's reactivity: adsorbing oxygen atoms decreases reactivity, while removing oxygen increases it. The calculations were executed concurrently on 24 of the 2400 available cores, leveraging CHPC with 2048 MB of memory. These findings provide insights into identifying catalysts that can enhance the oxygen reduction reaction (ORR) and oxygen evolution reaction (OER), thereby improving the performance of Zn-air batteries.

        Speaker: Prof. Khomotso Maenetja (Materials Modelling Centre)
      • 11:40
        A use case for integration of Earth Observations and AI in NWP modelling 20m

        This paper explores the benefits of integrating Earth Observation (EO) techniques with Artificial Intelligence (AI) to enhance the capabilities of Numerical Weather Prediction (NWP) models, particularly in the context of severe weather and environmental hazards over South Africa. NWP models often employ EO that lacks real-time resolution, which may lead to increased uncertainty in short-term forecasts and reduced reliability during high-impact weather events. EO systems provide high-resolution, near-real-time observations. AI techniques perform post-processing tasks like bias correction, anomaly detection, and pattern recognition. AI also excels at capturing non-linear relationships and fine-scale phenomena that are often poorly resolved in NWP models. This EO-AI integrated approach should improve model forecast accuracy, and detection of localized hazards. We demonstrate the benefits and shortcomings of our approach in detecting hazards such as heat waves, wildfires and wetland degradation.

        Speaker: Ms Patience Tlangelani Mulovhedzi (Council for Scientific and Industrial Research)
      • 12:00
        Uncertainty Quantification in Patient-Specific Cardiovascular CFD: A Global Sensitivity Study on the Lengau Cluster 20m

        Three-dimensional (3D) computational fluid dynamics (CFD) has emerged as a powerful tool for studying cardiovascular haemodynamics and informing the treatment of cardiovascular diseases. Patient-specific CFD models rely on boundary conditions derived from medical imaging, yet uncertainties in imaging measurements can propagate through the model and affect clinically relevant outputs such as pressure and velocity fields. To ensure that CFD-based clinical decisions are both reliable and repeatable, it is essential to quantify these uncertainties and assess the sensitivity of the outputs to boundary condition variability.
        Uncertainty quantification and sensitivity analysis (UQ/SA) typically require large numbers of simulations, which makes their application challenging in 3D CFD due to high computational costs. While Monte Carlo approaches may require hundreds of evaluations, alternative methods such as generalized polynomial chaos expansion reduce the number of runs but remain computationally demanding.
        In this study, we present a global UQ/SA framework implemented on the Lengau Cluster for coarctation of the aorta, a common form of congenital heart disease. The uncertain inputs are the lumped parameters of the 3-Element Windkessel Model, prescribed at the outlets to represent distal vasculature. We evaluate how variability in these parameters impacts pressure and velocity fields, with the objective of improving the robustness and clinical utility of patient-specific CFD simulations.
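        A minimal sketch of Monte Carlo uncertainty propagation through Windkessel parameters is given below, assuming a simplified diastolic (zero-inflow) pressure decay rather than the study's full 3D CFD coupling; all parameter values are illustrative.

```python
import math
import random

# Minimal Monte Carlo sketch (not the study's solver): propagate +-10%
# uncertainty in distal resistance R and compliance C through the diastolic
# pressure decay P(t) = P0 * exp(-t / (R C)), a zero-inflow simplification
# of the Windkessel outlet model.  All values are illustrative.
random.seed(0)
P0, t = 80.0, 0.5                        # mmHg, s
R_mean, C_mean, spread = 1.0, 1.2, 0.10  # +-10% uniform uncertainty

samples = []
for _ in range(10_000):
    R = R_mean * (1 + random.uniform(-spread, spread))
    C = C_mean * (1 + random.uniform(-spread, spread))
    samples.append(P0 * math.exp(-t / (R * C)))

mean = sum(samples) / len(samples)
std = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
print(f"P({t} s) = {mean:.1f} +/- {std:.1f} mmHg")
```

        The spread of the output distribution is the uncertainty that matters clinically; replacing each sample with a 3D CFD run is what makes such studies expensive and motivates polynomial-chaos surrogates on the Lengau Cluster.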

        Speaker: Vincent Punabantu (University of Cape Town)
      • 12:20
        Q & A 10m

    • 11:00 12:40
      HPC Technology: Cloud 1/1-8+9 - Room 8+9

      1/1-8+9 - Room 8+9

      Century City Conference Centre

      80
      • 11:00
        Curbing Non-Revenue Water Through A Dynamic Hydraulic Modeling & Smart Water Network Management System 20m

        TBC

        Speaker: Mr Lwando Ngcama (CSIR Staff)
      • 11:20
        Cloud based urban planning decision support 20m

        Urban areas across South Africa face increasing pressure to plan for sustainable growth and service delivery amid rapid change. The Urban and Regional Dynamics research group at the CSIR develops city- and provincial-level simulation and decision-support tools to assist planners and policymakers in exploring long-term urban development scenarios. These models integrate spatial, economic, and demographic data, requiring significant computational capacity for processing and visualisation. Access to the CHPC infrastructure has been critical in enabling the scalability of these systems and providing reliable, shared access for multiple users across municipalities and provinces. This talk outlines the computing challenges and successes encountered from handling large geospatial datasets to deploying interactive web-based interfaces and presents key research outcomes that demonstrate how cloud-based systems can strengthen data-driven urban planning and decision-making.

        Speaker: Ms Carike Karsten (CSIR)
      • 11:40
        TBC 20m

        TBC

        Speaker: Mr Sifiso Mgaga (CSIR)
      • 12:00
        A model for cloud-based gamification to enhance collaborative learning in South African Higher Institutions: A Case of Gauteng Province 20m

        The use of gamification has gained popularity in higher education, which offers a valuable learning experience and improved skills acquisition by incorporating game-based mechanisms such as points, badges, leaderboards, and challenges. The use of gamification in education enhances student engagement and motivation by making learning more interactive and fun, improves learning outcomes, incentivises self-directed learning, and encourages collaboration and competition, thereby building connections within academic communities.
        Despite these benefits, the implementation of cloud-based gamification in South Africa faces several challenges, including the digital divide, accessibility, infrastructure, data security, and faculty readiness. Addressing these challenges is crucial for the successful implementation of gamification within the higher education environment. The challenges, which include inadequate infrastructure, unreliable internet services, and the affordability of technology, significantly hinder the use of cloud-based platforms, especially among disadvantaged students.
        This study aims to develop a cloud-based gamification model to enhance collaborative learning in South African higher education institutions, specifically universities in Gauteng Province. The integration of the Gamification Acceptance Model, Gaming Learning Systems Framework, Self-Determination Theory, and Teaching Gamification Infrastructure Cloud-based Framework factors can be identified in the development of a cloud-based gamification model. By integrating game elements like points, badges, and leader boards, the model seeks to foster a competitive yet skills-based environment. The data will be collected from participants through quantitative surveys and qualitative interviews, and the Delphi method will be used to achieve expert consensus on the model's usability, relevance, and impact on student learning outcomes.
        The expected contributions of the study are methodological, practical, and theoretical by developing a cloud-based gamification model that will be tailored to South African higher education institutions. This model seeks to improve educational practices, enhance student engagement, motivation, and learning outcomes, and advance educational technology and collaborative learning, providing valuable insights and tools for various educational contexts to support student success.
        Keywords: cloud-based gamification, game-based mechanisms, higher education institutions

        Speaker: Mrs Irene Abraham-Samgeorge (Tshwane University of Technology)
      • 12:20
        Value-driven Artificial Intelligence, Data and Natural Language Processing solutions built on the CHPC sovereign cloud infrastructure 20m

        This talk is focused on providing an overview of the Artificial Intelligence, Data Science and Natural Language Processing fields. The audience will be given a view of use cases and solutions that are being worked on at CSIR. This will be followed by an overview of the impact that CHPC infrastructure has contributed to our NLP initiatives. Lastly, we will briefly share perspectives to enable future success at the intersection of infrastructure and AI innovation.

        Speaker: Avi Moodley (CSIR)
    • 11:00 12:30
      ISSA 1/1-10 - Room 10

      1/1-10 - Room 10

      Century City Conference Centre

      50
      • 11:00
        TBC 30m
        Speaker: Rowen Robinson
      • 11:30
        TBC 30m
        Speaker: Clarissa Weyer
      • 12:00
        TBC 30m
        Speaker: Aluta Vusisiziwe Dyasi
    • 11:00 12:30
      SA NREN 1/1-7 - Room 7

      1/1-7 - Room 7

      Century City Conference Centre

      50
      • 11:00
        TBC 30m
        Speaker: Stein Mkandawire (Zambia Research and Education Network (ZAMREN))
      • 11:30
        TBC 30m
        Speakers: Mthunzi Shabangu (Royal Science and Technology Park), Mr Sicelo Nkambule (RSTP)
      • 12:00
        TBC 30m
        Speaker: Miriam CHAHURUVA (Zimbabwe Research and Education Network)
    • 11:00 12:30
      Special: Digital Leadership 1/0-AB - Hall A+B

      1/0-AB - Hall A+B

      Century City Conference Centre

      550
      • 11:00
        Emergent Leadership in Digital Public Infrastructure: A Practice Nexus Approach to Sociotechnical Transformation in South Africa 20m

        Digital transformation in the public sector needs more than just technology; it requires a new understanding of leadership. This presentation examines how leadership is practised within South Africa’s Centre for High-Performance Computing (CHPC), a national facility driving the country’s digital research agenda.

        Building on the Leadership-as-Practice (L-A-P) framework and expanded through the Practice Nexus and Contextual Modulators, this study explores leadership as a collective, relational, and materially mediated activity. Using a qualitative phenomenological case study, it examines how leadership arises through dialogue, improvisation, and the interaction between human and AI actors.

        The research introduces the concept of bricolage leadership, developed within the context of institutional constraints and technological complexity.
        Findings show how AI tools serve as co-constitutive agents, shaping coordination and sensemaking within national digital infrastructures.

        Speaker: Mr Mervyn Christoffels (CHPC)
      • 11:20
        Transforming Health Logistics through Digital Leadership: Rwanda’s Drone Delivery flagship project. 20m

        Digital leadership is crucial for driving innovation and transformation in health systems across Africa. This case study examines the implementation of drone technology for delivering medical supplies in Rwanda, demonstrating how strategic digital leadership facilitated the successful integration of this technology into the country's health logistics system. Through interviews and document analysis, the study explores how leaders at the Ministry of Health, Rwanda in collaboration with private partners (Zipline, Rwanda), fostered a culture of digital readiness, agility and data-driven decision-making. The findings reveal that visionary leadership, cross-sectoral collaboration and adaptive governance were key to scaling up drone-based delivery networks that now supply remote hospitals with life-saving medical products. The study concludes that Rwanda’s experience demonstrates how digital leadership can overcome infrastructure limitations, promote health equity, and accelerate digital transformation in areas with limited resources.

        Speaker: Prof. Peter Weimann
      • 11:40
        Creating a Sovereign Cloud Platform for South Africa 20m

        Sovereign cloud is an increasingly important topic. Nations and businesses are realising that cloud solutions provided by foreign companies, even when deployed in South Africa, are subject to foreign laws. These laws enable states to compel those companies to hand over data from users of their cloud platforms, regardless of where the platforms are deployed. In addition, geopolitics has become erratic and vindictive: trade barriers are erected and removed at a whim, creating substantial uncertainty. This is unstable ground on which to build infrastructure of national importance. NICIS, as a provider of cyberinfrastructure, has experience localising technology and building services for the research community. This talk explores NICIS' efforts to address the lack of a sovereign cloud platform in South Africa and the progress that has been made.

        Speaker: Mr David Macleod (CHPC)
      • 12:00
        Digital Leadership in the Age of AI: Reframing Human Agency and Institutional Foresight in High-Performance Computing Futures 20m

        Digital leadership is emerging as the decisive competence of our time — the ability to align human insight, computational capacity, and organisational purpose in a world shaped by artificial intelligence. This session explores what it means to lead when cognition, creativity, and computation are increasingly interwoven.

        Far from being a technical role, digital leadership represents a new epistemic orientation — one that combines systems intelligence, ethical discernment, and strategic agility to navigate the accelerating feedback loops between human decision-making and machine learning. It requires leaders to cultivate literacies that span from data ethics and digital inclusion to the responsible deployment of AI and HPC infrastructures.

        Drawing from applied research and practice at the UWC CoLab for e-Inclusion and Social Innovation and the Samsung-funded Future-Innovation Lab, the presentation will examine how digital leadership is being developed within South Africa’s higher-education and innovation ecosystems. It will illustrate how next-generation leaders are being prepared to operate at the interface of human capability development, institutional transformation, and computational scale — where digital foresight becomes a form of national competence.

        Ultimately, the session argues that digital leadership is not about mastering technology but about shaping the conditions under which technology serves human and societal flourishing in the era of artificial intelligence.

        Speaker: Dr Wouter Grove (UWC)
      • 12:20
        Q & A 10m

        Q & A

        Speaker: Q & A
    • 12:30 13:30
      Lunch 1h 1/0-Foyer+C+D - Exhibition and Competition Halls

      1/0-Foyer+C+D - Exhibition and Competition Halls

      Century City Conference Centre

      550
    • 13:30 15:00
      HPC Applications 1/1-11 - Room 11

      1/1-11 - Room 11

      Century City Conference Centre

      100
      • 13:30
        From Punch Cards to AI-Powered HPC: A 40-Year Journey in Computational Chemistry Infrastructure 20m

        This presentation chronicles the journey of an older-generation computational chemist and a young HPC expert, mediated via AI assistance, culminating in the successful deployment of a multi-node research computing cluster. The cluster supports molecular modeling and drug design, enabling large-scale molecular dynamics and quantum chemistry calculations. The senior researcher's four-decade arc—from 1983 punch cards to 2025 AI-collaborative infrastructure—illuminates artificial intelligence's role in transforming scientific knowledge transfer.

        An initial Gaussian software request expanded into a comprehensive cluster setup: Rocky Linux 9, Slurm management, parallel filesystems, and seven key packages (Gaussian, ORCA, GAMESS-US, Psi4, NWChem, CP2K and AMBER). The setup was optimized for mixed GPU architectures (RTX A4000/RTX 4060) — a common reality in most laboratories (perhaps fortunately for the average researcher), though uniform hardware is preferable when affordable. Benchmarks yielded 85% parallel efficiency, affirming production readiness.

        The AI approach thrived despite hands-off administration, via an iterative model of problem-solving, explanation, and reasoning. Complementary tools — Claude AI for documentation, Grok for perspectives, DeepSeek for verification — fostered rapid consensus, with human-led execution, validation, and adaptation essential. This erodes barriers to retraining or consultancy, enabling expertise assimilation for resource-limited institutions and heralding a paradigm shift in scientific knowledge application.

        Keywords: High-Performance Computing, Computational Chemistry, AI-Assisted Infrastructure, Cluster Computing, Knowledge Transfer, Slurm Workload Manager, Scientific Computing, Human-AI Collaboration, HPC Democratization, Intergenerational Learning.

        Speakers: Prof. Krishna Govender (University of Johannesburg), Hendrick Kruger (UKZN)
      • 13:50
        The African Bioinformatics Institute: Building Data Infrastructure for Genomics in Africa 20m

        Africa has traditionally lagged behind in life sciences research due to limited funding, infrastructure, and human capacity. Yet the growth of genomics and other large-scale data-driven projects now demands robust cyber-infrastructure for data storage, processing, and sharing. H3ABioNet made a significant contribution to building bioinformatics capacity across Africa over 12 years, but its funding has ended. In 2024, the community received a major boost with support from the Wellcome Trust and Chan Zuckerberg Initiative to establish the African Bioinformatics Institute (ABI).
        The ABI is being developed as a distributed network of African institutions, with a mandate to coordinate bioinformatics infrastructure, research, and training. A central focus is on enabling African scientists and public health institutes to manage and analyse large, complex datasets generated by initiatives such as national genome projects, pathogen genomics surveillance, and the African Population Cohorts Consortium. To meet these needs, the ABI is working with global partners, including the GA4GH, to promote adoption of international standards and tools that enable secure, responsible data sharing.
        The Institute will coordinate the development of a federated network of trusted research environments (TREs), ensuring data governance frameworks are locally appropriate while interoperable with global systems. By hosting African databases and resources, and fostering collaborations across institutions, the ABI will both drive demand for advanced compute and storage solutions and contribute to shaping how cyber-infrastructure supports genomics on the continent. In doing so, it will bridge local and global research ecosystems and advance the responsible use of genomic data for health impact.

        Speaker: Prof. Nicola Mulder (University of Cape Town)
      • 14:10
        Exploring new ways to decarbonize metal production with multiphysics modelling and high-performance computing 20m

        Direct-current (DC) electric arc furnaces are used extensively in the recycling of steel as well as primary production of many industrial commodities such as ferrochromium, titanium dioxide, cobalt, and platinum group metals. This typically involves a process called carbothermic smelting, in which raw materials are reacted with a carbon-based reductant such as metallurgical coke to make the desired product. Although it is one of humanity’s oldest and most established technologies, carbothermic metal production is becoming increasingly unattractive due to its significant scope-1 emissions of carbon dioxide and other environmental pollutants. Because of this, many alternatives to fossil carbon reductants are currently being researched, and in the context of broad initiatives to establish a sustainable hydrogen economy both in South Africa and internationally, the possibility of directly replacing coke with hydrogen as a metallurgical reductant is of particular interest. A DC arc furnace fed with hydrogen has the potential to reduce or eliminate carbon emissions provided renewable resources are used for both electrical power and hydrogen production.

        Key to the operation of a DC arc furnace is the electric arc itself – a high-velocity, high-temperature jet of gas which has been heated until it splits into a mixture of ions and electrons (a plasma) and becomes electrically conductive. The plasma arc acts as the principal heating and stirring element inside the furnace, and understanding its behaviour is an important part of operating an arc furnace efficiently and productively. However, due to the extreme conditions under which arcs operate, studying them experimentally can be difficult, expensive, and hazardous. Coupled multiphysics models which simulate arcs from first principles of fluid flow, heat transfer and electromagnetics are therefore of great value in conducting in silico numerical experiments and building an understanding of how they behave under different process conditions. This presentation will discuss the development of an arc modelling workflow incorporating aspects of process thermochemistry, plasma property calculation from fundamental physics, and computational mechanics models of the arc itself. This workflow is then used to explore the impact of introducing hydrogen gas as an alternative reductant in metallurgical alloy smelting processes.

        In keeping with the theme of this year’s CHPC National Conference, the critical role of HPC in plasma arc modelling will be discussed in terms of the data life cycle in plasma arc modelling – from input parameters through to raw simulation data, and finally to key insights which will help guide the next generation of clean metal production technologies.

        Speaker: Dr Quinn Reynolds (Mintek)
      • 14:30
        From Idea to Impact: Scalable AI Workflows with MATLAB & HPC - Bridging AI Challenges with Scalable, Reliable, and Explainable Solutions 20m

        In an age where AI is permeating every field of engineering and science, it is essential for researchers to quickly embrace technologies that can drive breakthroughs and accelerate innovation. Tools like MATLAB are specifically designed to lower the barriers to entry into the world of AI, making advanced capabilities more accessible to engineers and scientists.
        Integrating truly AI-enabled systems into real-world applications presents significant challenges – data fragmentation, legacy system integration, and scaling advanced computations are common hurdles. This talk presents practical approaches to overcoming these obstacles, emphasizing how high-performance computing with MATLAB and Simulink can accelerate AI model development and deployment. After a quick introduction on how to access and use MATLAB at your university or institute, the session will focus on effective strategies and best practices for leveraging HPC capabilities such as parallelization and workflow optimization to achieve faster prototyping and scalable AI solutions.
        The availability of MATLAB on the CHPC cluster through your university or institute license ensures that the presented workflows are accessible and reproducible for researchers across the Southern Africa region. The Academia Teams at MathWorks and Opti-Num Solutions (local partner company) are here to support your research by helping you quickly adopt AI and scale your work with parallel computing.

        Speaker: Dr Marco Rossi (MathWorks Academia Team)
      • 14:50
        Q & A 10m

        Q & A

        Speaker: Q & A
    • 13:30 15:00
      HPC Technology: HPC Education BoF 1/1-8+9 - Room 8+9

      1/1-8+9 - Room 8+9

      Century City Conference Centre

      80
      • 13:30
        BoF: HPC Education 1h 30m

        The session will provide an opportunity for HPC Educators to share their plans, challenges, and ideas for HPC education curricula, tools, and resources with the broader HPC Educator community.

        Speaker: Coordinator: Mr Bryan Johnston (CHPC)
    • 13:30 15:00
      ISSA 1/1-10 - Room 10

      1/1-10 - Room 10

      Century City Conference Centre

      50
      • 13:30
        TBC 30m
        Speaker: Gershon Hutchinson
      • 14:00
        TBC 30m
        Speaker: Christian K. Devraj
      • 14:30
        TBC 30m
        Speaker: Emmanuel Musiiwa
    • 13:30 15:00
      SA NREN 1/1-7 - Room 7

      1/1-7 - Room 7

      Century City Conference Centre

      50
      • 13:30
        TBC 30m
        Speaker: Ben Waldeck
      • 14:00
        TBC 30m
        Speaker: Yabin Zhang
      • 14:30
        TBC 30m
        Speaker: Lebogo Kekana
    • 13:30 15:00
      Special: Supercomputing for Sustainability BoF 1/0-AB - Hall A+B

      1/0-AB - Hall A+B

      Century City Conference Centre

      550
      • 13:30
        Supercomputing for Sustainability: Balancing Performance and Energy 1h 30m

        High-performance computing and AI are at the heart of modern cyber-infrastructure, enabling the transformation of massive data sets into knowledge and decisions. Yet, as system scale and complexity grow, so do the challenges of energy consumption, sustainability, and efficient data movement. This BOF will explore strategies to balance performance with energy efficiency in large-scale systems while ensuring that scientific computing remains productive and impactful.
        Key discussion points include how future HPC and AI infrastructures can be designed and operated to reduce energy demand, how infrastructure choices affect sustainability, and how new approaches in scheduling, data management, architectures, and workflow design can align scientific progress with environmental responsibility. By bringing together several perspectives, the session aims to identify practical directions for sustainable supercomputing that can meet the dual challenge of handling ever-larger data sets while supporting informed decisions for science and society.

        Speakers: Prof. Dieter Kranzlmüller (Leibniz Supercomputing Centre), Prof. Ewa Deelman (University of Southern California), Dr Dan Stanzione (Texas Advanced Computing Center), Maximilian Höb (Leibniz Supercomputing Centre)
    • 15:00 15:30
      Break 30m
    • 15:30 17:00
      HPC Applications 1/1-11 - Room 11

      1/1-11 - Room 11

      Century City Conference Centre

      100
      • 15:30
        BoF: CHPC Users 1h 30m

        All CHPC Users, Principal Investigators and anyone interested in practical use of CHPC computational resources are invited to attend this informal Birds-of-a-Feather Session.

        The session will open with overviews of recent usage of the CHPC compute resources (HPC Cluster, GPU Cluster and Sebowa Cloud infrastructure), followed by discussion of new resources available to users and of questions and topics from the audience.

        The session provides an excellent opportunity to meet up in-person with CHPC employees and to meet and engage with colleagues benefiting from CHPC services.

        Speaker: Dr Werner Janse Van Rensburg (CHPC)
    • 15:30 17:00
      HPC Technology: Storage & IO 1/1-8+9 - Room 8+9

      1/1-8+9 - Room 8+9

      Century City Conference Centre

      80
      • 15:30
        Challenges With Implementing FAIR Data Standards 20m

        The FAIR principles have become best practice for sharing artifacts, such as data, with the public. Findable, Accessible, Interoperable, and Reusable each seem straightforward, but as implementation details are worked through, many difficult decisions and ambiguities are revealed. This talk will look at common practices and challenges in implementing these principles when sharing digital artifacts.

        Speaker: Jay Lofstead (IO500 Foundation)
      • 15:50
        TBA 20m
      • 16:10
        ® Training ML algorithms on resource-constrained devices — a memory/storage perspective 20m

        Deploying ML/AI algorithms at the edge is essential for applications such as security and surveillance, industrial IoT, autonomous vehicles, and healthcare, which require low latency, data privacy, or reduced costs. However, most edge devices lack powerful memory systems capable of handling the memory- and computation-intensive nature of such applications.
        The objective of this presentation is to highlight some optimization strategies that help overcome the memory and storage bottlenecks of ML/AI algorithms—mainly from a training perspective—to enable their deployment on low-resource devices. These optimizations can also be applied to any resource-constrained environment used for training, including low-cost virtual machines in cloud infrastructures, standard personal computers, or small-scale micro data centers.

        Speaker: Jalil Boukhobza (ENSTA, Institut Polytechnique de Paris)
      • 16:30
        ® Scalable Data Management Techniques for AI workloads 20m

        The advent of complex AI workflows that involve large learning models (training using data/pipeline/tensor parallelism, retrieval augmented generation, chaining) has prompted the need for scalable system-level building blocks that enable running them efficiently at large scale on high end machines. Of particular interest in this context are data management techniques and their implementation that bridge the gap between high-level required capabilities (fine-grain tensor access, support for transfer learning and versioning, streaming and transformation of training samples, transparent augmentation, vector databases, etc.) and the existing storage hierarchy (parallel file systems, node-local memories, etc.). This talk discusses the challenges and opportunities in the design and development of such techniques and presents several results based on VELOC and DataStates, two efforts at ANL aimed at leveraging checkpointing to capture the evolution of datasets (including AI models and their training data).

        Speaker: Bogdan Nicolae (Argonne National Laboratory)
    • 15:30 17:00
      ISSA 1/1-10 - Room 10

      1/1-10 - Room 10

      Century City Conference Centre

      50
      • 15:30
        TBC 30m
        Speaker: Kedimotse Baruni
      • 16:00
        TBC 30m
        Speaker: Rodney Buang Sebopelo
      • 16:30
        TBC 30m
        Speaker: Muhammad Abdul Moiz Zia
    • 15:30 17:00
      SA NREN 1/1-7 - Room 7

      1/1-7 - Room 7

      Century City Conference Centre

      50
      • 15:30
        TBC 30m
        Speaker: Guy Halse (TENET South Africa)
      • 16:00
        TBC 30m
        Speaker: Kasandra Pillay (SANReN)
      • 16:30
        TBC 30m
        Speaker: Ajay Makan (SANReN)
    • 17:00 19:00
      Poster: Showcase Cocktails Function 1/0-0 - Foyer

      1/0-0 - Foyer

      Century City Conference Centre

      500

      Cocktails Poster Session

    • 08:00 09:00
      Registration 1h
    • 09:00 10:30
      Keynote: Wednesday 1/0-AB - Hall A+B

      1/0-AB - Hall A+B

      Century City Conference Centre

      550
      • 09:00
        Agentic AI and the Future of Discovery 45m

        An AI agent is a computational entity that can interact with the world and adapt its actions based on learnings from these interactions. I discuss the potential for such agents to serve as next-generation scientific assistants, for example by acting as cognitive partners and laboratory assistants. In the former case, agents, with their machine learning and data-processing capabilities, complement the cognitive processes of human scientists by offering real-time data analysis, hypothesis generation, and experimental design suggestions; in the latter, they engage directly with the scientific environment on the scientist's behalf, for example by performing experiments in bio-labs or running simulations on supercomputers. I invite participants to envision a future in which human scientists and agents collaborate seamlessly, fostering an era of accelerated scientific discoveries, new horizons of understanding, and--we may hope--broader access to the benefits of science.

        Speaker: Dr Ian Foster (University of Chicago & Argonne National Laboratory)
      • 09:45
        TBC 45m

        TBC

        Speaker: Dr Rizwana Mia
    • 10:30 11:00
      Break 30m
    • 11:00 12:50
      DIRISA 1/1-7 - Room 7

      1/1-7 - Room 7

      Century City Conference Centre

      50
      • 11:00
        Advancing South Africa’s Research Data Ecosystem: Infrastructure, Innovation, and Collaboration. 20m

        DIRISA: Update on Data Infrastructure for Research Data Management and Collaborations;
        Dr More Manda.

        Speaker: Dr More Manda (CSIR)
      • 11:20
        Leveraging Cyber-infrastructure for Open Science: DIRISA's Contribution to Data-Driven Decision-Making in South Africa. 20m

        The transition from raw research data to impactful national decisions relies fundamentally on robust, accessible, and strategically managed data and data infrastructure. This presentation provides a high-level overview of the foundations of open science and frames the urgency within the unique South African context. It addresses critical systemic challenges, including data fragmentation and the complex dynamics of data ownership and governance that shape the research landscape.
        The core focus of the discussion is DIRISA's strategic mandate as the key national enabler. The presentation illustrates how DIRISA provides national data platforms, research data management support, and services that govern the full data lifecycle—from ingestion to long-term preservation and sharing.
        The presentation aims to demonstrate the national value of data stewardship: how it effectively bridges the gap between theoretical frameworks and practical, evidence-based decision-making for national benefit. The presentation concludes by exploring emerging trends and future requirements necessary to fully realize data’s potential for South Africa's sustainable development.

        Speaker: Dr Phoshoko Katlego (NICIS DIRISA)
      • 11:40
        AI Factory for Universities: An Altron and CSIR Value Proposition. 20m

        Altron Group CTO; Dr Andy Mabaso.

        Speaker: Dr Andy Mabaso (Altron Group)
      • 12:00
        AI-Assisted Optimization of Large-Scale Climate Data Transfers in South African Research Infrastructure 20m


        Background and Motivation

        The transfer of large-scale scientific datasets between South African research facilities represents a critical bottleneck in computational research workflows. As of August 2025, the climate modelling datasets of the Global Change Institute, Wits University, total just over 540 TB across three users, much of it generated by the Conformal-Cubic Atmospheric Model (CCAM). Optimized transfer strategies between the Centre for High Performance Computing (CHPC) and the Data Intensive Research Initiative of South Africa (DIRISA) storage systems are thus necessary for resilient data flows between HPC, storage, and local analysis compute facilities. Tools such as Globus Connect can identify bottlenecks within data-flow circuits, but manual command-line iRODS interfaces still present significant reliability challenges, which this work addresses through AI-assisted optimization.

        Methodology: AI-Assisted Development

        This work presents a systematic application of artificial intelligence tools (Claude Code) to develop filesystem-aware transfer optimization solutions. The AI-assisted development process generated three complementary tools in under 4 hours of development time:

        1. Performance benchmarking script for systematic testing of 24-core Data Transfer Node configurations
        2. Resilient transfer wrapper with exponential backoff retry logic and comprehensive verification
        3. Lustre-aware optimization engine that dynamically analyzes filesystem striping patterns and adjusts transfer parameters
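
        The retry behaviour described for the resilient transfer wrapper (tool 2) can be sketched in a few lines. This is an illustrative reconstruction, not the published toolchain: the `iput` flags, attempt count, and base delay are assumed defaults rather than the authors' configuration.

        ```python
        import subprocess
        import time

        def backoff_delays(max_attempts, base_delay=2.0):
            """Exponential backoff schedule: base, 2*base, 4*base, ..."""
            return [base_delay * 2 ** i for i in range(max_attempts - 1)]

        def transfer_with_retry(local_path, irods_path, max_attempts=5):
            """Push a file to iRODS with iput, retrying on failure.

            Illustrative sketch only: -K asks iput to verify checksums after
            transfer, -f overwrites a partial object from a failed attempt.
            """
            delays = backoff_delays(max_attempts)
            for attempt in range(max_attempts):
                result = subprocess.run(
                    ["iput", "-K", "-f", local_path, irods_path],
                    capture_output=True, text=True,
                )
                if result.returncode == 0:
                    return True  # verified transfer
                if attempt < max_attempts - 1:
                    time.sleep(delays[attempt])  # 2s, 4s, 8s, ...
            return False
        ```

        Doubling the delay between attempts is the standard way to ride out transient network or server congestion without hammering the storage endpoint.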

        Technical Innovation: Lustre Filesystem Integration

        The core innovation lies in automated Lustre striping analysis using lfs getstripe commands, coupled with dynamic parameter optimization. The system automatically detects:
        - Stripe counts and sizes for optimal thread allocation
        - Object Storage Target (OST) distributions for concurrency planning
        - File size patterns for buffer optimization
        - Directory structures for efficient batch processing
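
        The striping analysis above can be sketched as a small parser over `lfs getstripe` output that maps the stripe count to a transfer thread count. This is a hypothetical illustration under assumed output formatting; the actual optimization engine's heuristics are not published here.

        ```python
        import subprocess

        def parse_stripe_info(getstripe_output):
            """Extract stripe count and size from `lfs getstripe` text output."""
            info = {}
            for line in getstripe_output.splitlines():
                stripped = line.strip()
                if stripped.startswith("lmm_stripe_count:"):
                    info["stripe_count"] = int(stripped.split(":")[1])
                elif stripped.startswith("lmm_stripe_size:"):
                    info["stripe_size"] = int(stripped.split(":")[1])
            return info

        def threads_for(path, max_threads=8):
            """Match the transfer thread count to the file's Lustre stripe count,
            capped at an assumed maximum (8 threads in the validation runs)."""
            out = subprocess.run(["lfs", "getstripe", path],
                                 capture_output=True, text=True).stdout
            info = parse_stripe_info(out)
            return min(info.get("stripe_count", 1), max_threads)
        ```

        Aligning reader threads with stripe count lets each thread stream from a different OST, which is the usual rationale for striping-aware transfer tuning.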

        Performance Results

        Test Dataset: 189TB CCAM climate modeling installation (ccam_install_20240215)
        - Source: CHPC Lustre filesystem (/home/jpadavatan/lustre/)
        - Destination: DIRISA iRODS storage (/dirisa.ac.za/home/jonathan.padavatan@wits.ac.za/)

        Validation Testing (67MB, 590 files):
        - Success Rate: 100.0% (590/590 files transferred successfully)
        - Transfer Performance: 0.95 GB/hour sustained throughput
        - Reliability: Zero failed transfers with comprehensive verification
        - Peak Performance: 10.41 MB/s maximum transfer rate
        - Optimization: Automatic 8-thread, 64MB buffer configuration

        Scalability Analysis:
        - Small datasets (41-67MB): 100% success rate, 4-11 MB/s
        - Medium datasets (17GB): Structure-preserving transfers completed
        - Large datasets (20-34TB per directory): Systematic optimization applied

        AI Development Impact

        The AI-assisted approach delivered significant advantages:
        - Development Speed: Complete toolchain developed in <4 hours vs. estimated weeks for traditional development
        - Code Quality: Production-ready tools with comprehensive error handling and logging
        - Documentation: Auto-generated usage examples and architectural documentation
        - Iterative Improvement: Real-time debugging and enhancement based on performance feedback

        South African Research Infrastructure Impact

        Immediate Benefits:
        - Enables systematic transfer of 189TB CCAM climate datasets to long-term DIRISA storage
        - Provides reusable toolchain for other large-scale data transfers in SA research community
        - Demonstrates AI-assisted development methodology for research computing infrastructure

        Broader Applications:
        - Astronomical data transfers (MeerKAT, SKA precursor datasets)
        - Genomics datasets from National Health Laboratory Service
        - Earth observation data from SANSA and international collaborations
        - General HPC-to-archive workflows across CHPC user community

        Technical Contributions

        1. First documented AI-generated, striping-aware transfer optimization for African research infrastructure
        2. Open-source toolchain available at: https://github.com/padavatan/chpc-irods-transfer-tools
        3. Comprehensive auditing framework with source validation, performance tracking, and efficiency analysis
        4. Systematic methodology for AI-assisted research infrastructure development

        Future Work and Scalability

        Planned extensions include:
        - Integration with CHPC job scheduling systems for automated large-scale transfers
        - AI-assisted optimization for emerging storage technologies
        - Performance modeling for petabyte-scale climate datasets

        Conclusion

        Initially, the twin challenges of troubleshooting the Globus bottleneck and manually setting up iRODS workflow scripts presented as technical complexities requiring considerable troubleshooting effort. This work demonstrates that AI-assisted development can dramatically accelerate research infrastructure optimization while maintaining production-grade reliability. The 100% success rate achieved in validation testing, combined with comprehensive filesystem-aware optimization, provides a foundation for systematic large-scale data management in South African research computing.

        The methodology holds promise for the local HPC research community and for other institutions facing similar data transfer challenges, and it represents a paradigm shift toward AI-augmented research infrastructure development.

        Technical Implementation Stack

        AI Development Platform:
        - Claude Code (claude.ai/code): Primary AI development assistant for code generation, debugging, and optimization
        - Development Time: <4 hours vs. an estimated 2–3 weeks with a traditional approach (a 10–15x productivity improvement)

        Core Technologies:
        - Languages: Python 3.9.6, Bash scripting
        - HPC Infrastructure: CHPC 24-core DTN systems, Lustre parallel filesystem, DIRISA iRODS storage
        - Specialized Tools: iRODS iCommands (iput, ils), Lustre client tools (lfs getstripe), GNU Parallel
        - Version Control: Git, GitHub CLI, collaborative development workflow
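
        The striping-aware batching at the heart of this toolchain can be sketched in a few lines of Python. This is an illustrative sketch only, not the production code (which lives in the linked repository): file sizes would come from the filesystem and stripe counts from `lfs getstripe -c`, whereas here they are supplied as hypothetical values. The planner balances files across parallel `iput` workers using a greedy longest-processing-time heuristic, so that workers finish at roughly the same time.

```python
# Sketch of a striping-aware transfer batch planner (hypothetical example;
# the production scripts are in the linked repository). Files on a Lustre
# filesystem are grouped into balanced batches for parallel `iput` workers.
from dataclasses import dataclass
import heapq

@dataclass
class TransferFile:
    path: str
    size_bytes: int
    stripe_count: int  # from `lfs getstripe -c <path>` in production; kept for auditing

def plan_batches(files, n_workers):
    """Greedy longest-processing-time partition of files across workers."""
    # Each heap entry is (total_bytes_assigned, worker_id, batch_of_paths);
    # the unique worker_id prevents tuple comparison reaching the list.
    batches = [(0, i, []) for i in range(n_workers)]
    heapq.heapify(batches)
    for f in sorted(files, key=lambda f: f.size_bytes, reverse=True):
        total, i, batch = heapq.heappop(batches)   # least-loaded worker
        batch.append(f.path)
        heapq.heappush(batches, (total + f.size_bytes, i, batch))
    return [b for _, _, b in sorted(batches, key=lambda t: t[1])]
```

Each resulting batch could then be handed to one GNU Parallel job slot running `iput` over its file list.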

        Performance Analysis Framework:
        - Custom benchmarking: Transfer rate analysis with variance tracking
        - Comprehensive auditing: Source/destination validation, file integrity verification
        - Real-time monitoring: Speed measurements, efficiency metrics, optimization recommendations

        AI-Human Collaboration Model:
        - AI Contributions: 2,597+ lines of production code, comprehensive error handling, filesystem-aware algorithms, automated documentation
        - Human Contributions: Domain expertise, performance validation, requirements specification, production integration
        - Result: Production-grade reliability with rapid development cycles

        This technical stack demonstrates the practical implementation of AI-assisted research infrastructure development, providing a replicable methodology for other HPC environments.


        Keywords: Artificial Intelligence, High-Performance Computing, Data Transfer Optimization, Lustre Filesystem, iRODS, South African Research Infrastructure, Climate Modeling, CHPC, DIRISA

        Authors: Jonathan Padavatan¹, Mthetho Sovara², Claude (AI Assistant)³
        ¹ University of the Witwatersrand, Global Change Institute
        ² CHPC
        ³ Anthropic AI

        Contact: jonathan.padavatan@wits.ac.za
        Repository: https://github.com/padavatan/chpc-irods-transfer-tools

        Speaker: Mr Jonathan Padavatan (University of the Witwatersrand)
      • 12:20
        Q&A 10m
    • 11:00 12:30
      HPC Applications 1/1-11 - Room 11

      1/1-11 - Room 11

      Century City Conference Centre

      100
      • 11:00
        TBC 20m

        TBC

        Speaker: Dr David Khoza (Integrated Geoscience Solutions)
      • 11:20
        TBC 20m

        TBC

        Speaker: Dr Nicolette Chang (CSIR)
      • 11:40
        Tailoring structural design of spinel LiMn2O4 cathode material through high entropy doping 20m

        The spinel LiMn2O4 cathode material is an attractive candidate for the design and engineering of cost-effective and thermally sustainable lithium-ion batteries for optimal utilisation in electric vehicles and smart grid technologies. Despite its electrochemical qualities, its commercialization is delayed by the widely reported capacity loss during battery operation. The capacity attenuation is linked to structural degradation caused by the Jahn-Teller activity and disproportionation of Mn3+ ions. In several studies, the structural stability of spinel LiMn2O4 was improved by single- or dual-doping the Mn sites to curtail the number of Mn3+ ions. However, this results in a loss of active ions, which ultimately limits the amount of energy that can be obtained from the battery. Herein, a high-entropy (HE) doping strategy is used to enhance the structural stability and electrochemical performance of LiMn2O4 spinel. The unique interactions of various dopants in HE doping yield enhanced structural stability and redox coupling, which can improve the concentration of the active material in the system. An HE-doped LiMn2O4 (LiMn1.92Mg0.02Cr0.02Al0.02Co0.02Ni0.02O4) spinel structure was successfully optimized using the Vienna Ab initio Simulation Package (VASP) code. The lattice parameter of the optimized (ground-state) structure was determined to be 8.270 Å, which is less than the value of 8.274 Å for the pristine LiMn2O4 spinel structure. The resulting lattice contraction suggests stronger M–O bonding, beneficial for increased resistance to phase changes and degradation. Moreover, the concentration of Mn3+ was decreased by 5.3% to defer the onset of the Jahn-Teller distortion and enhance capacity retention. This retention is among the significant benefits conferred by dopants such as Cr3+, which can participate in storing electric charge during the charging process by forming Cr4+, thereby compensating for the capacity lost through the reduction in Mn3+ concentration.
Consequently, this work paves a path for exploration of several other fundamental properties linked to the electrochemical performance of spinel.

        Speaker: Prof. Raesibe Sylvia Ledwaba (University of Limpopo)
      • 12:00
        A first-principles study of thermoelectric properties of some chalcogenide materials 20m

        M Ramoshaba, T E Mosuang
        Department of Physics, University of Limpopo, Private Bag x1106, Sovenga, 0727, South Africa
        E-mail: moshibudi.ramoshaba@ul.ac.za

        Thermoelectric chalcogenide materials exhibit promising properties, making them suitable for energy
        conversion and cooling applications. Thermoelectric (TE) materials have attracted significant interest
        due to their potential for energy harvesting and conservation. For a material to be considered an
        efficient thermoelectric material, it must possess low thermal conductivity, high electrical
        conductivity, a high Seebeck coefficient, and a high power factor. These characteristics contribute to
        strong thermoelectric performance, leading to a favorable figure of merit (ZT). Although several
        promising bulk semiconductors have been reported by researchers, no satisfactorily high ZT values
        have yet been achieved. Chalcogenide semiconductors may provide a solution to this challenge. Using
        density functional theory (DFT) and Boltzmann transport theory, the thermoelectric properties of
        selected chalcogenide materials (Cu₂S, Cu₂Se, InS, and InSe) were analyzed. These studies revealed
        strong thermoelectric performance, as the predicted maximum ZT values indicated high efficiency in
        these materials.
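
        For reference, the figure of merit named above combines the Seebeck coefficient S, the electrical conductivity σ, the absolute temperature T, and the thermal conductivity κ as ZT = S²σT/κ. A minimal Python illustration follows; the numbers are generic textbook-order values for a competitive thermoelectric, not results from this study:

```python
def figure_of_merit(seebeck, sigma, kappa, temperature):
    """ZT = S^2 * sigma * T / kappa (SI units throughout; ZT is dimensionless)."""
    power_factor = seebeck ** 2 * sigma          # S^2 * sigma, in W m^-1 K^-2
    return power_factor * temperature / kappa

# Generic illustrative values: S = 200 uV/K, sigma = 1e5 S/m, kappa = 1.5 W/(m K)
zt = figure_of_merit(200e-6, 1e5, 1.5, 300.0)
print(round(zt, 3))  # prints 0.8
```

The expression makes the stated design trade-off explicit: raising σ or S, or suppressing κ, each raises ZT.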

        Speaker: Dr Moshibudi Ramoshaba (University of Limpopo)
      • 12:20
        Q & A 10m

        Q & A

        Speaker: Q & A
    • 11:00 12:40
      HPC Technology 1/1-8+9 - Room 8+9

      1/1-8+9 - Room 8+9

      Century City Conference Centre

      80
      • 11:00
        Dell Presentation (TBC) 20m
        Speaker: Dell Speaker (TBC)
      • 11:20
        Lenovo Presentation (TBC) 20m
        Speaker: Lenovo Speaker (TBC)
      • 11:40
        Vendor Presentation (TBC) 20m
        Speaker: Vendor Speaker (TBC)
      • 12:00
        Vendor Presentation 20m
        Speaker: Vendor Speaker (TBC)
      • 12:20
        Q & A 10m
        Speaker: Q & A
    • 11:00 12:30
      Special: Sustainable Research Software and Infrastructure for HPC 1/1-10 - Room 10

      1/1-10 - Room 10

      Century City Conference Centre

      50
      Convener: Anelda Van der Walt (Talarify)
      • 11:00
        Sustainable Research Software and Infrastructure for HPC: Practices, Challenges, and Community 1h 30m

        Motivation

        Around the world, the Research Software Engineering (RSE) movement has shown how professionalising research software practices and building RSE communities can strengthen the sustainability of HPC-enabled research. Many HPC users are writing their own code, often without formal training or long-term support, which raises challenges for efficiency, portability, reproducibility, and maintenance, all of which are foundational to sustainable research software. This workshop, the first of its kind in Africa, creates a space to showcase local software projects, share sustainability challenges, opportunities and practices, and strengthen our collective capacity for impactful computational research. Similar events have been held at computational conferences, such as ISC 2023 (where MS was an invited speaker) and SC, for the last five years.

        Objectives

        • Raise awareness of the global movement related to research software sustainability, with a specific focus on RSEs and HPC.
        • Highlight the crucial role of research software in leveraging HPC systems for scientific discovery.
        • Showcase participant HPC-related software projects with a focus on sustainability and scalability.
        • Discuss challenges unique to research software that runs on HPC systems (e.g. portability, optimisation, reproducibility).
        • Strengthen the RSE community within the HPC ecosystem in South Africa and across Africa.

        Structure (90 minutes)

        1. Welcome & Framing (10 min)

        • RSE as the “missing link” between HPC infrastructure and impactful research.
        • Why software sustainability is crucial for HPC environments:
        • Code must be portable across architectures (clusters, GPUs, cloud/HPC hybrids).
        • Performance tuning and scaling.
        • Dependency management and containerisation (Singularity/Apptainer, Docker → HPC).
        • Long-term usability beyond initial grants.
        • Outline of session flow.

        2. Lightning HPC Project Presentations (45 min)

        Participants deliver 3–4 minute lightning talks about their research software projects, following a structured template, with a focus on software sustainability in an HPC context.

        Template prompts (HPC-focused):

        • Project name & research domain
        • Software function (how it supports/accelerates research)
        • Where it runs (e.g., CHPC, regional facility (e.g. ilifu), institutional cluster, international facility)
        • Development & maintenance team (single student? research group? cross-institution?)
        • Sustainability considerations
        • Portability & scaling: Can it run on different HPC systems? GPU/CPU optimisations?
        • Documentation & training: Is it accessible to new users? HPC-specific usage guides?
        • Community & adoption: Who uses it, and how can they contribute?
        • Dependencies & environment: How are software stacks managed (modules, containers, Conda)?
        • Identifiers & citations: DOI for code/data, ORCID/ROR for credit.
        • Funding & longevity: Beyond project lifetime, who maintains it?
        • Biggest HPC-related sustainability challenge (e.g., scaling beyond a local cluster, lack of developer time, rapid hardware changes).

        We will provide a slide template in advance with these fields for participants to populate with their content. The slide template is attached to this submission for reference.

        3. Group Reflection & Discussion (25 min)

        Facilitated conversation drawing out common themes:

        • Which HPC-related sustainability challenges recur (e.g., portability, performance, lack of maintainers)?
        • What practices are helping (e.g., using containers, joining global open-source communities, institutional RSE support)?
        • How can CHPC, universities, and RSSE-Africa support the sustainability of HPC software?
        • Is there a need for a shared HPC-RSE knowledge base/training programme?

        4. Next Steps & Closing (10 min)

        • Summarise takeaways: recurring issues + promising solutions.
        • Announce possible follow-up:
        • An HPC–RSE community of practice under CHPC.
        • A repository of HPC software projects in South Africa.
        • Training opportunities (e.g., Carpentries HPC lessons, RSSE Africa workshops).
        • Share resource links (e.g., FAIR4RS principles, Software Sustainability Institute guides, containerisation best practices for HPC).

        Deliverables / Follow-up

        • Shared slide deck/Zenodo collection of participant projects.
        • Post-session blog/summary for CHPC website/newsletter.
        • Potential to propose a recurring HPC Software Sustainability SIG at future CHPC conferences.
        Speaker: Anelda Van der Walt (Talarify/UCT eResearch)
    • 12:30 13:30
      Lunch 1h 1/0-Foyer+C+D - Exhibition and Competition Halls

      1/0-Foyer+C+D - Exhibition and Competition Halls

      Century City Conference Centre

      550
    • 13:30 15:00
      DIRISA 1/1-7 - Room 7

      1/1-7 - Room 7

      Century City Conference Centre

      50
      • 13:30
        Catalysing Data Centre Investment for Africa’s High-Performance Computing and Climate Action through the Digital Investment Facility (DIF) 1h 30m

        The rapid growth of Africa’s data-intensive research, artificial intelligence (AI), and high-performance computing (HPC) workloads is driving unprecedented demand for resilient and sustainable data infrastructure. Data centres are emerging as critical enablers of scientific discovery, cloud adoption, and digital innovation, yet the region continues to face significant barriers: limited local hosting capacity, reliance on international facilities, high latency, data sovereignty concerns, and a shortage of investment-ready projects.
        The Digital Investment Facility (DIF)—a Team Europe initiative co-funded by the European Commission, Germany, and Finland, and implemented jointly by GIZ and HAUS—addresses these gaps by boosting investment in green and secure digital infrastructure, with a focus on data centres and Internet Exchange Points (IXPs). Operating as a project preparation and advisory facility, DIF supports projects from early design to contract closing, enhancing bankability through technical and financial advisory services, pre-feasibility studies, ESG integration, and investor matchmaking.
        Crucially, DIF embeds a climate nexus at the core of its work. By promoting energy-efficient, renewable-powered data centres and aligning with ISO 50001 energy management standards, DIF ensures digital infrastructure projects contribute directly to climate action and the implementation of Nationally Determined Contributions (NDCs). Greener data centres reduce emissions from digital growth, enhance resilience through disaster recovery capacity, and enable the digital tools required for climate adaptation (e.g., climate modelling, earth observation, and early warning systems).
        At CHPC, DIF will showcase how its approach enables data centres to meet the demanding requirements of HPC and advanced research—providing low-latency access, high-availability colocation, and sustainable cloud platforms that can host scientific datasets and AI workloads. The presentation will highlight the emerging pipeline of African digital infrastructure projects, the application of international standards, and the opportunities for researchers, policymakers, and investors to collaborate in building a digitally sovereign and climate-aligned HPC ecosystem in Africa.

        Speaker: Mr Mulalo Mphidi (GIZ)
    • 13:30 15:00
      HPC Applications 1/1-11 - Room 11

      1/1-11 - Room 11

      Century City Conference Centre

      100
      • 13:30
        Investigations of Selected Material Properties under Dynamical High Pressure and Temperature 20m

        The mechanical properties of materials change when subjected to dynamic conditions of high pressure and temperature. Such materials include those used in cutting and shaping applications, where they experience twisting and tensile forces. Results for selected MAX phases are presented to show variations in elastic constants as a function of dynamic pressure and temperature. Another situation where materials are subjected to such conditions is the core of the earth. The Stishovite, CaCl2 and Seifertite phases of silica, occurring in the core of the earth, are investigated, with the resulting phase transitions and related changes in seismic velocities compared with experimentally determined values.

        Speaker: Prof. George Amolo
      • 13:50
        ChemShell: defects in energy materials by hybrid QM/MM approach 20m

        The presentation showcases recent developments and applications of the ChemShell software in the field of energy materials by the Materials Chemistry HPC Consortium (UK), focusing on defect properties. This work capitalizes on the software engineering and methodological advances in recent years (including the UK Excalibur PAX project highlighted in the last year CHPC conference) by the groups of Prof. Thomas W. Keal in STFC Daresbury Laboratory (UK) and Prof. C. Richard A. Catlow at UCL and Cardiff University with several collaborators. Materials of interest include wide gap semiconductors used in electronic and optoelectronic devices as well as catalysis and solid electrolytes. The method allows one to explore both defect thermodynamics and their spectroscopic properties. Further examples show how a classical rock-salt structured insulator MgO can be usefully employed as a platform in studies of exotic states of matter, which are of fundamental interest, in particular, the unconventional cuprate superconductors with high critical temperatures, and the recently discovered phenomena in isostructural nickelate systems.

        Speaker: Dr Alexey A. Sokol (University College London)
      • 14:10
        The Dynamic Control of Head Stabilisation in Cheetahs: A Computer Vision and Optimisation Approach 20m

        The cheetah is a pinnacle of adaptation in the context of the natural world. It is the fastest land mammal and has multiple morphological specialisations for prey-tracking during high-speed manoeuvres, such as vestibular adaptations to facilitate gaze and head stabilisation [1]. Understanding the cheetah’s head stabilisation techniques is useful in fields such as biomechanics, conservation, and artificial and robotic systems; however, the dynamics of wild and endangered animals are difficult to study from a distance. This challenge necessitated a non-invasive Computer Vision (CV) technique to collect and analyse 3D points of interest. We collected a new data set to emulate a perturbed platform and isolate head stabilisation. Using MATLAB®, we built upon a method pioneered by AcinoSet [2] to build a 3D reconstruction through CV and a dynamic model-informed optimisation, which was used to quantitatively analyse the cheetah’s head stabilisation. Using our new dataset, and by leveraging optimal control methods, this work identifies and quantifies passive head stabilisation and, in conjunction with AcinoSet data, the active stabilisation during locomotion. Since this work includes computationally heavy methods, the processing of these data using optimisations and computer vision rendering can be benchmarked and compared to parallel computing methods, to further support the viability of the 3D reconstruction methods for other animal or human models and applications of high-performance and low-cost markerless motion capture.
        [1] Grohé, C et al, Sci Rep, 8:2301, 2018.
        [2] Joska, D et al, ICRA, 13901-13908, 2021.

        Speaker: Ms Kamryn Norton (African Robotics Unit, University of Cape Town)
      • 14:30
        High-Performance Computing for Multiphase Flow Modelling of Oxygen Lancing in Pyrometallurgical Tap-Holes 20m

        High-performance computing (HPC) provides the means to translate complex multiphase flow data into insight that can inform industrial decision-making. This research applies advanced computational fluid dynamics, executed on the CHPC Lengau cluster, to model reacting gas–liquid systems relevant to oxygen lancing in pyrometallurgical tap-holes. The approach couples open-source CFD solvers with thermochemical data to capture flow behaviour, heat transfer, and reaction-driven gas evolution in molten metal–slag systems. Ferrochrome smelting serves as a representative case study, enabling validation against plant data and illustrating the broader relevance of the modelling framework to other high-temperature processes. By integrating computational models, large-scale data handling, and parallel analysis workflows, the study demonstrates how national cyberinfrastructure can transform high-fidelity simulations into actionable understanding for safer, more efficient metallurgical operations.

        Speaker: Dr Markus Erwee
      • 14:50
        Q & A 10m

        Q & A

        Speaker: Q & A
    • 13:30 15:00
      HPC Technology 1/1-8+9 - Room 8+9

      1/1-8+9 - Room 8+9

      Century City Conference Centre

      80
      • 13:30
        BoF: HPC Ecosystems Project 1h 30m

        This session is an opportunity for members of the HPC Ecosystems Community, and those who identify as associates or affiliates, to convene in person. The session will allow time for members to discuss matters relating to the HPC Ecosystems Project as well as broader African HPC and emerging HPC community topics. The 90-minute session will include 60 minutes of prepared talks from members of the community, followed by a further 30 minutes of open time for discussion and meaningful community engagement. Alas, muffins are not guaranteed.

        Speaker: Mr Bryan Johnston (CHPC)
    • 13:30 15:00
      Special 1/1-10 - Room 10

      1/1-10 - Room 10

      Century City Conference Centre

      50
    • 15:00 15:30
      Break 30m
    • 15:30 17:00
      DIRISA 1/1-7 - Room 7

      1/1-7 - Room 7

      Century City Conference Centre

      50
      • 16:10
        Factors associated with sexually transmitted infection literacy among men who have sex with men and transgender people in Soweto: A machine learning approach 20m

        Sexually transmitted infections (STIs) remain a significant public health challenge in Sub-Saharan Africa (SSA), particularly among key populations such as men who have sex with men (MSM) and transgender individuals. This study aimed to assess the level of STI literacy within this population, identify its demographic, behavioral, and structural predictors, and explore its influence on knowledge, attitudes, behaviors, and healthcare-seeking. A retrospective observational mixed-methods approach was employed, combining logistic regression, structural equation modeling (SEM), and explainable machine learning (SHAP) to analyze data collected from 1,240 MSM and transgender individuals in Soweto, South Africa. The main outcome variable, STI literacy, was operationalized both as a composite score (binary: high and low) and as a categorical label (1 and 0), enabling both inferential and predictive modeling. Results revealed that 28.1% of participants demonstrated adequate STI literacy. Key positive predictors included younger age, prior STI testing, higher education, being single or married, female gender identity, and personal STI history. In contrast, older age, unemployment, lower education, substance use, and frequent sexual activity were associated with lower literacy. Structural equation modeling illuminated how STI testing experience acts as a cue to action, while stigma, cost, and fear serve as barriers. SHAP analysis confirmed these insights, highlighting modifiable predictors such as information-seeking, communication confidence, and testing accessibility. The study's findings were interpreted through Nutbeam’s Health Literacy Framework, the Health Belief Model (HBM), and the Theory of Planned Behavior (TPB). These frameworks helped contextualize the behavioral pathways linking sociodemographic factors to STI literacy and preventive actions. Notably, TPB constructs such as subjective norms and perceived behavioral control were particularly influential. 
This study contributes to the STI prevention literature by quantifying literacy gaps, modeling predictive pathways, and demonstrating how behavioral theory and machine learning can inform targeted interventions. It recommends multi-level approaches that go beyond awareness to address stigma, build self-efficacy, and enhance access to sexual health services. These insights are vital for designing inclusive, theory-driven public health strategies in SSA.

        Speaker: Dr Edith Phalane (University of Johannesburg)
      • 16:30
        TBC 20m

        To be confirmed.

    • 15:30 17:00
      HPC Applications 1/1-11 - Room 11

      1/1-11 - Room 11

      Century City Conference Centre

      100
      • 15:30
        BoF: Women in HPC “Advancement through the professional staircase” 1h 30m

        Title: “Women in High Performance Computing South Africa (WHPC-South Africa)”
        Duration: 90 minutes
        Type of session: Advancement through the professional staircase

        Organiser(s): Name Affiliation Email Address
        1. Khomotso Maenetja University of Limpopo khomotso.maenetja@ul.ac.za
        2. Raesibe Ledwaba University of Limpopo raesibe.ledwaba@ul.ac.za
        3. Beauty Shibiri University of Limpopo beauty.shibiri@ul.ac.za
        4. Tebogo Morukuladi University of Limpopo tebzamorukuladi@gmail.com
        Description:

        The WHPC BoF session for 2025 will be a reflection session in which women in HPC share how their careers have advanced over the past five years, and how the platform has encouraged women to take up leadership or management roles in their workplaces. This will provide feedback on the underrepresentation of women in HPC, especially in leadership.

        As a result, we are glad to offer an invitation to both male and female conference attendees to continue where we left off with the last session at the 2024 annual conference. The major goal of bringing them together at the meeting was to develop a network of female HPC professionals in South Africa. The CHPC executive team gave major assistance to the workshop, which was sponsored and attended by both men and women.

        Anticipated Goals
        • Address women's underrepresentation in HPC (contribute to increasing the participation of women and girls in HPC through training and networking)
        • Share information and resources that foster growth for women in HPC (institutionally and across community)
        • Raise our professional profiles
        • Encourage young girls at school level to consider HPC as a career of choice

        Size: 80
        Target audience: Women and Men
        Prerequisites: Registered CHPC conference attendees

        Chair(s):

        Ms NN Zavala and Ms MG Mphahlele

        Outline of programme: — Single 90 min
        1. Opening – Prof Khomotso Maenetja (3 min)
        2. Presentations
        a. Introduction of the guest speaker – Prof RR Maphanga
        b. Guest Speaker – Keynote speaker – 30 Min
        c. Dr Tebogo Morukuladi – Academic Journey (15 min)
        d. Ms CS Mkhonto (University of Limpopo, Faculty of Science and Agriculture Student Ambassador) (15 min)
        e. Ms Precious Makhubele and Keletso Monareng – Moving from Masters to PhD Journey (15 min)
        3. Closure – Prof RS Ledwaba (2 min)

        Speaker: Prof. Khomotso Maenetja (Materials Modelling Centre)
    • 15:30 17:00
      HPC Technology: Machine Learning & AI 1/1-8+9 - Room 8+9

      1/1-8+9 - Room 8+9

      Century City Conference Centre

      80
      • 15:30
        Co-design and federation of computing services for AI and simulation 30m

        This presentation will focus on the collaborative, quantitative co-design approach to the deployment of large-scale computing services adopted by the STFC DiRAC HPC Facility in the UK (www.dirac.ac.uk). Over the past 15 years, successive generations of DiRAC services have demonstrated how workflow-centred co-design can maximise the scientific impact of computing investments. The co-design of DiRAC services has ranged from silicon-level to system-level, alongside extensive software development effort, and has delivered significantly increased system capabilities.

        I will also discuss how federation can deliver additional research capabilities and optimise service exploitation, while lowering the bar for access to large-scale computing for new users.

        Looking to the future, I will explore how co-design can be used to develop cost-effective and energy-efficient heterogeneous computing ecosystems for AI and simulation.

        Speaker: Mark Wilkinson (STFC DiRAC HPC Facility / University of Leicester)
      • 16:00
        Federated Computing for Health Data Science in Africa 20m

        There is a great need to develop computing infrastructure to support the increased application of data science and health informatics across Africa which includes robust data sharing and federated computing, whilst fostering research collaboration. The Global Alliance for Genomics and Health (GA4GH; https://www.ga4gh.org/) aims to promote responsible data sharing standards through the use of open, community derived standards and APIs such as Data Repository Service (DRS), Workflow Execution Service (WES), Data Connect, Passports and Tool Registry Service (TRS), amongst others. The DRS API provides a generic interface to access data in repositories. The data is discovered through the Data Connect API which supports federated search of different kinds of data. The WES API provides a standardized approach for accessing computing resources with use of reproducible workflows, usually housed in a tools registry service such as Dockstore (https://dockstore.org/). The eLwazi Open Data Science Platform (ODSP) has undertaken a pilot implementation of the GA4GH standards with the aim of delivering a federated framework for data discovery and analysis within Africa for the DS-I Africa consortium. The eLwazi GA4GH pilot project was started in June 2023 as an outcome of a training hackathon by the eLwazi ODSP Infrastructure work group in collaboration with the GA4GH. The main goal of the GA4GH pilot project is to enable the findable, accessible, interoperable and reusable (FAIR) principles for data discovery and analysis. Four sites within Africa (Ilifu - South Africa, ACE Lab - Mali, ACE Lab - Uganda and UVRI - Uganda) are currently hosting the different API endpoints for authorized data discovery and analysis. 
        From within the project we can locate DRS datasets using the Data Connect API, use workflows from Dockstore via the TRS API for reproducible analysis, and submit them to the WES API for analysis without the data leaving its actual location, which provides a technical solution for data analysis within legislative data protection constraints. We are now in the process of developing a federated approach for the imputation of African genomics data as a GA4GH Implementation Forum (GIF) project collaboration, based on the lessons from the pilot GA4GH implementation project.
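
        As an illustration of how these standards decouple data location from access, a hostname-based DRS URI resolves deterministically to the objects endpoint defined by the GA4GH DRS v1 specification. A minimal sketch follows; the hostname is a made-up example, not one of the project sites:

```python
from urllib.parse import urlparse

def drs_to_https(drs_uri: str) -> str:
    """Resolve a hostname-based DRS URI to the GA4GH DRS v1 objects endpoint,
    e.g. drs://drs.example.org/314159 ->
         https://drs.example.org/ga4gh/drs/v1/objects/314159
    """
    parsed = urlparse(drs_uri)
    if parsed.scheme != "drs":
        raise ValueError(f"not a DRS URI: {drs_uri}")
    object_id = parsed.path.lstrip("/")  # the object ID follows the hostname
    return f"https://{parsed.netloc}/ga4gh/drs/v1/objects/{object_id}"
```

A client resolves the URI, issues a GET to the resulting URL, and receives object metadata with access methods, without needing to know in advance which site holds the data.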

        Speaker: Mr Takudzwa Nyasha Musarurwa (University of Cape Town - eLwazi ODSP)
      • 16:20
        Integrating High-Performance Computing and Deep Learning for Big Data-Driven Cyberinfrastructure: A Framework for Scalable AI Research in African Research Institutions 20m

        The proliferation of Artificial Intelligence (AI), data-driven research, and digital transformation has increased the global demand for powerful computing infrastructures capable of processing and analyzing enormous volumes of data. High-Performance Computing (HPC) has emerged as the cornerstone of this evolution, enabling researchers to perform complex simulations, accelerate model training, and analyze Big Data at unprecedented scales. Yet, across many African universities, access to such advanced computing capabilities remains severely limited, constraining the ability of scientists to participate meaningfully in global AI and data science innovation. This paper explores the strategic integration of HPC technologies with deep learning architectures to establish a sustainable, Big Data-driven cyberinfrastructure model tailored for African academic environments.

        Drawing inspiration from the ongoing efforts at the University of Mpumalanga (UMP) and the Council for Scientific and Industrial Research (CSIR), the study proposes a framework that connects HPC systems with scalable AI workflows in areas such as agriculture, climate modelling, energy, and cybersecurity. The framework emphasizes distributed GPU-accelerated clusters, containerized computing environments, and job scheduling mechanisms that allow multiple research teams to run parallel deep learning experiments efficiently. Beyond the technical dimension, the paper highlights the importance of local capacity development, collaboration, and institutional investment as key drivers for long-term sustainability. By showcasing how HPC can shorten AI model training times, enhance predictive accuracy, and improve data management efficiency, this research demonstrates that advanced computation is not merely a luxury for developed nations but an attainable enabler of scientific independence for African universities.
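        The job-scheduling idea above, running several teams' experiments in parallel with each bound to one GPU slot, can be sketched as follows. The experiment names and the train() body are hypothetical stand-ins; a production cluster would use a batch scheduler such as Slurm with containerized jobs rather than an in-process pool:

```python
# Minimal sketch: dispatch parallel deep learning experiments, one per
# GPU slot, using a thread pool as a stand-in for a cluster scheduler.
from concurrent.futures import ThreadPoolExecutor

N_GPUS = 4  # assumed number of GPU slots on one node

def train(experiment: str, gpu: int) -> str:
    # Placeholder for a containerized training run pinned to one device.
    return f"{experiment} finished on gpu:{gpu}"

# Hypothetical experiments from different research teams.
experiments = ["maize-yield", "climate-downscale",
               "load-forecast", "intrusion-detect"]

with ThreadPoolExecutor(max_workers=N_GPUS) as pool:
    results = list(pool.map(train, experiments, range(N_GPUS)))

for line in results:
    print(line)
```

        The design point is the same one the framework makes: capping concurrency at the number of accelerators keeps experiments from contending for the same device while still letting multiple teams share the node.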

        The findings underscore that the convergence of HPC and AI can transform research productivity, foster interdisciplinary collaboration, and support evidence-based policymaking in sectors critical to Africa’s development. Ultimately, the paper advocates the creation of a federated HPC-AI ecosystem across African institutions, allowing shared access to computational resources, open datasets, and research expertise. Such an ecosystem would democratize access to cutting-edge technologies, close the digital divide, and position African researchers as active contributors to the global knowledge economy rather than passive consumers. Through this integrative perspective, the paper offers both a technical blueprint for HPC-AI synergy and a vision for empowering scientific innovation, data sovereignty, and technological resilience within the African higher education landscape.

        Speaker: Dr Olalekan Samuel Ogunleye (University of Mpumalanga)
    • 15:30 17:00
      Special 1/1-10 - Room 10

      1/1-10 - Room 10

      Century City Conference Centre

      50
    • 17:00 18:30
      Canapés & Cocktails Networking Session 1h 30m

      The Conference Networking Session is designed to facilitate networking and informal interaction amongst delegates. Canapés will be served along with a selection of soft drinks, beers and wines.

    • 18:30 19:15
      Keynote: Closing 1/0-AB - Hall A+B

      1/0-AB - Hall A+B

      Century City Conference Centre

      550
      Convener: Chair: Dr Happy Sithole (CHPC)
      • 18:30
        TBC 45m

        TBC

        Speaker: Dr Dan Stanzione (Texas Advanced Computing Center)
    • 19:15 21:30
      Awards 2h 15m 1/0-AB - Hall A+B

      1/0-AB - Hall A+B

      Century City Conference Centre

      550