The aim of the conference is to bring together our users so that their work can be communicated, to include world-renowned experts, and to offer a rich programme for students, in the fields of high-performance computing, big data, and high-speed networking. The CHPC National Conference is co-organised by the CHPC, DIRISA and SANReN.
The CHPC 2025 Conference will be an in-person event with a physical programme hosted at the Century City Conference Centre, Cape Town.
For more information please see the main conference site.
This year's theme is the utility of cyber-infrastructure in processing, storing and moving the large data sets underpinning today's complex world.
Online registration will close on Friday 27 November 2025. Thereafter only onsite registration (at full fees) will be available at the venue.
SADC Cyber-Infrastructure Meeting
In the rapidly evolving landscape of High-Performance Computing (HPC), the theme of "Cyberinfrastructure Collaboration: Towards Accelerated Impact" underscores the critical need for synergistic efforts to drive innovation and societal progress. This keynote presentation will explore the profound concept of being a "good ancestor" and its pivotal role in shaping impactful cyberinfrastructure collaborations.
As we delve into the intricacies of HPC, we must recognize that our technological advancements are not just for the present but are legacies for future generations. Being a good ancestor involves making conscientious decisions that prioritize sustainability, ethical considerations, and long-term benefits over short-term gains. This perspective encourages us to build resilient and adaptable cyberinfrastructures that can withstand the test of time and evolving challenges.
The presentation will highlight key strategies for fostering effective collaborations that embody the principles of good ancestry. These include:
By integrating these principles, we can create a cyberinfrastructure that accelerates current impact and lays a robust foundation for future generations. This approach enhances the immediate benefits of our collaborations and ensures that we leave a positive and enduring legacy.
The presentation will discuss the use of traditional computational methods, machine and deep learning, as well as quantum computing and quantum machine learning (as a new frontier) in addressing challenges in fluid dynamics, dynamical systems, and high-energy physics research. The talk will also highlight the role of the CHPC in democratising access to critical resources and in enabling such research.
For over two decades I have been developing new interatomic potentials: for example, I implemented the AOM within GULP so that non-spherical Jahn-Teller Mn(III) ions can be modelled, successfully refined potential parameters for numerous systems including the Peierls phase transition of VO2, and authored a published interatomic potential parameter database. My interest is driven by the ability to control what physics is included (or not) through the introduction of new terms to the Hamiltonian (or potential energy), and it is an approach many will follow because, compared to DFT, it allows systems of larger sizes (more atoms), longer time periods (in MD), and more sampling (global optimisation and/or calculating the partition function) to be modelled.
Now ML potentials, which have many more parameters to refine and a minefield of differing functional forms to choose from, have become very topical as the data required to fit them, as well as the computer resources, have become more readily available. My first real experience came when one of my earlier PhD students discovered that it was not straightforward to develop a suitable model (fit parameters); for example, the GAP ML potentials we refined suffered from erroneous oscillations.
I lead the UK's Materials Chemistry Consortium (MCC), and one of our current aims is to make the use of ML potentials more accessible to our community. Simultaneously, other groups have begun refining ML-Potential models for the entire periodic table based on reproducing DFT results. In my presentation I will show results from three of my PGT students who worked on energy materials using the JANUS-core code to calculate energies and forces, based on pre-refined MACE ML-Potentials. Moreover, I will include recently published results on dense and microporous silica materials, where these potentials performed particularly well, and further results of ongoing research from the MCC.
The global pandemic, initiated by the SARS-CoV-2 virus and emerging in 2020, has profoundly influenced humanity, resulting in 772.4 million confirmed cases and approximately 7 million fatalities as of December 2023. The resultant negative impacts of travel restrictions and lockdowns have highlighted the critical need for enhanced preparedness for future pandemics. This study primarily addresses this need by traversing chemical space to design inhibitors targeting the SARS-CoV-2 papain-like protease (PLpro). Pathfinder-based retrosynthesis analysis was employed to synthesize analogues of the hit compound, GRL-0617, using commercially available building blocks through the substitution of the naphthalene moiety. A total of 10 models were developed using active learning QSAR methods, which demonstrated robust statistical performance, including an R2 > 0.70, Q2 > 0.64, standard deviation < 0.30, and RMSE < 0.31 on average across all models. Subsequently, 35 potential compounds were prioritized for FEP+ calculations. The FEP+ results indicated that compound 45 was the most active in this series, with a ∆G of -7.28 ± 0.96 kcal/mol. Compound 5 exhibited a ∆G of -6.78 ± 1.30 kcal/mol. The inactive compounds in this series were compound 91 and compound 23, with ∆G values of -5.74 ± 1.06 and -3.11 ± 1.45 kcal/mol, respectively. The integrated strategy implemented in this study is anticipated to provide significant advantages in multiparameter lead optimization efforts, thereby facilitating the exploration of chemical space while conserving and/or enhancing the efficacy and property space of synthetically aware design concepts. Consequently, the outcomes of this research are expected to contribute substantially to preparedness for future pandemics and the associated variants of SARS-CoV-2 and related viruses, primarily by delivering affordable therapeutic interventions to patient populations in resource-limited and underserved settings.
The spotted hyena (Crocuta crocuta) is a highly social carnivore with complex behavioural and ecological functions, making it an important model for studying genetic diversity, adaptation, and evolution. However, previous draft genomes for C. crocuta have been incomplete and derived from captive individuals, limiting insights into natural genetic variation. Here, we present a high-quality de novo genome assembly and the first pangenome of wild spotted hyenas sampled from the Kruger National Park, South Africa, alongside population-level analysis.
Using Oxford Nanopore Technologies (ONT) long-read sequencing, we assembled a 2.39 Gb reference genome with a scaffold N50 of 19.6 Mb and >98% completeness. We further performed short-read resequencing at 10-32X depth per individual, revealing >4 million single nucleotide variants and ~1 million insertions and deletions per individual. To capture genomic variation beyond a single reference, we constructed a draft pangenome using the Progressive Genome Graph Builder (PGGB). The resulting pangenome comprises ~2.47 Gb, with 35.2 million nodes, 48.4 million edges, and 159,060 paths, incorporating sequences from all individuals. Its graph structure revealed substantial topological differences, which may correspond to biologically relevant variation.
The breadth of these analyses required extensive use of the CHPC’s computing resources. Long-read genome assembly and polishing were executed on high-memory nodes to accommodate the error-correction and scaffolding steps. Repeat and gene annotation pipelines (RepeatModeler, BRAKER3) as well as variant discovery with GATK and BCFtools were parallelised to accelerate execution. Pangenome graph construction was particularly computationally intensive, requiring large-scale parallelisation and significant memory and storage capacity to manage multi-genome alignments and graph building.
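As an illustration of the scatter-gather pattern behind this kind of parallelisation, the minimal Python sketch below is an assumption-laden example rather than the authors' pipeline: it assumes GATK 4 is on the PATH, uses placeholder file names (ref.fa, sample.bam) and invented contig names, and runs HaplotypeCaller per chromosome in parallel. On a cluster, each call would typically be wrapped in a scheduler job instead of a local thread.

# Hedged sketch: scatter variant calling across chromosomes, then gather.
import subprocess
from concurrent.futures import ThreadPoolExecutor

CHROMOSOMES = [f"chr{i}" for i in range(1, 21)]  # hypothetical contig names

def call_variants(chrom: str) -> str:
    out = f"sample.{chrom}.g.vcf.gz"
    subprocess.run(
        ["gatk", "HaplotypeCaller",
         "-R", "ref.fa", "-I", "sample.bam",
         "-L", chrom, "-O", out, "-ERC", "GVCF"],
        check=True,  # fail loudly if any chromosome-level job fails
    )
    return out

with ThreadPoolExecutor(max_workers=8) as pool:
    gvcfs = list(pool.map(call_variants, CHROMOSOMES))

# The per-chromosome GVCFs would then be combined (e.g. with CombineGVCFs)
# before joint genotyping.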
This study provides the most contiguous wild-derived genome to date for the species, the first draft pangenome for C. crocuta, and establishes a foundation for future conservation and comparative genomics. Importantly, it demonstrates the critical role of HPC resources in enabling large-scale bioinformatics pipelines - from genome assembly to pangenome construction and population-level analysis - in non-model organisms.
Q&A
TBC
Heavy rainfall events are among the most damaging weather hazards worldwide, yet they remain difficult to simulate accurately. One key source of uncertainty is the choice of input data used to initialize weather and climate models. In this study, we tested how sensitive the Conformal Cubic Atmospheric Model (CCAM) is to different initialization datasets, including ERA5, GFS, GDAS, and JRA-3Q. Using the CHPC Lengau cluster, we ran high-resolution (3 km) convection-permitting simulations, which allowed us to capture the fine-scale features of a 3-4 June 2024 heavy rainfall event over the eastern parts of South Africa.
We evaluated the simulations against radar and IMERG satellite precipitation estimates. While all runs reproduced the evening peak in rainfall timing, they generally underestimated intensity. Among the datasets, ERA5 produced the most reliable simulations, showing the closest match to IMERG with the lowest errors and highest correlation. In contrast, JRA-3Q and GFS-FNL performed less well. These results show that the choice of initialization dataset has a clear impact on rainfall prediction skill, and highlight the value of HPC-enabled sensitivity studies for improving extreme weather forecasting in the region.
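For context, the error and correlation scores used to rank such runs can be computed in a few lines of NumPy. The sketch below is illustrative only: it assumes the simulated and observed rainfall fields have already been regridded onto a common grid, and the synthetic arrays merely stand in for CCAM output and IMERG estimates.

import numpy as np

def evaluate(sim: np.ndarray, obs: np.ndarray) -> dict:
    """Compare a simulated rainfall field against observations on the same grid."""
    sim, obs = sim.ravel(), obs.ravel()
    bias = float(np.mean(sim - obs))                   # mean error
    rmse = float(np.sqrt(np.mean((sim - obs) ** 2)))   # root-mean-square error
    corr = float(np.corrcoef(sim, obs)[0, 1])          # Pearson correlation
    return {"bias": bias, "rmse": rmse, "corr": corr}

# Example with synthetic placeholder fields:
rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 5.0, size=(100, 100))          # stand-in for IMERG estimates
sim = obs + rng.normal(0.0, 2.0, size=obs.shape)    # stand-in for a CCAM run
print(evaluate(sim, obs))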
Running large-scale bioinformatics analyses on high-performance computing (HPC) infrastructure like the CHPC can significantly accelerate research, but comes with technical challenges—especially for researchers aiming to deploy complex workflows such as those built with Nextflow. In this talk, I present practical recommendations and lessons learned from testing and running various bioinformatics applications on the CHPC, with a particular focus on containerised workflows and resource optimisation.
Drawing from real-world use cases and performance benchmarks, I highlight key considerations such as managing limited walltime, dealing with module and environment setup, optimising Singularity containers for reproducibility, and handling input/output bottlenecks. I also reflect on common pitfalls and how to overcome them—especially for researchers with limited systems administration experience.
This presentation aims to equip bioinformatics users with actionable guidance on how to run workflows more efficiently, reproducibly, and with fewer frustrations on the CHPC infrastructure. It is also a call for continued collaboration between HPC support teams and domain researchers to bridge the gap between computational capacity and research usability.
Room-temperature ionic liquids (ILs) are molten salts with negligible vapour pressure and wide electrochemical windows, making them attractive electrolytes for beyond-lithium batteries [1]. Optimising transport properties—such as conductivity, self-diffusion, and the working-ion transference number (the fraction of the total ionic current carried by Li⁺/Na⁺/K⁺ from the added salt)—requires further quantitative, molecular-scale insight into how charge and mass move. Equilibrium molecular dynamics (MD) provides this insight by enabling transport coefficients and mechanistic signatures to be extracted from atomistic simulations. The rate capability of a battery is tightly coupled to the transport properties of the electrolyte; formulations that raise the working-ion transference number while maintaining adequate conductivity are preferred [2].
In this work, MD simulations were used to probe 1-butyl-1-methylpyrrolidinium bis(fluorosulfonyl)imide ([C₄C₁pyr][FSI]) mixed with MFSI (M = Li, Na, K) at salt mole fractions of 0.10, 0.20, 0.40 and T = 348.15 K. A non-polarisable model based on the well-established CL&P force field was employed [3]; however, non-bonded interaction parameters were adjusted to better reflect symmetry-adapted perturbation theory (SAPT) decomposition of pairwise interactions, including cation–anion and metal-salt pairs. Equilibrium trajectories of ≥250 ns per state point were generated with LAMMPS [4]. Self-diffusion coefficients were obtained from Einstein mean-squared displacements, and ionic conductivity was computed using the Green–Kubo/Einstein–Helfand formulation. The analysis includes Nernst–Einstein estimates of conductivity ($\sigma_\text{NE}$), the Haven ratio ($\sigma_\text{NE}/\sigma$) and its inverse (ionicity, $\sigma/\sigma_\text{NE}$), and both apparent transference numbers (from self-diffusion coefficients) and real/collective transference numbers from conductivity decomposition in an Onsager framework. Mechanisms of ion transport are examined via Van Hove correlation functions (self and distinct), the non-Gaussian parameter, ion–anion residence times, and coordination numbers. Hole (free-volume) theory is evaluated as a compact model for conductivity across composition.
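For reference, a sketch of the standard relations underlying this analysis (using the usual conventions; $N_i$, $z_i$ and $D_i$ are the number, valence and self-diffusion coefficient of species $i$ in a simulation box of volume $V$):
\[
D_i = \lim_{t\to\infty}\frac{1}{6t}\left\langle\left|\mathbf{r}_i(t)-\mathbf{r}_i(0)\right|^2\right\rangle,
\qquad
\sigma_\text{NE} = \frac{e^2}{V k_\mathrm{B} T}\sum_i N_i z_i^2 D_i,
\]
\[
H_\mathrm{R} = \frac{\sigma_\text{NE}}{\sigma},
\qquad
\text{ionicity} = \frac{\sigma}{\sigma_\text{NE}},
\qquad
t_{+}^{\text{app}} = \frac{N_{+} z_{+}^2 D_{+}}{\sum_i N_i z_i^2 D_i},
\]
where $\sigma$ is the collective (Green–Kubo/Einstein–Helfand) conductivity and the apparent transference number $t_{+}^{\text{app}}$ is written here for the working cation.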
HPC Content: Strong scaling was assessed for fixed-size systems of 256, 512, and 1024 ion pairs on 1–64 CPU cores (MPI ranks); wall-time per ns and ns/day were recorded to determine speedup and parallel efficiency. For one representative state point, transport properties are compared across these system sizes to illustrate finite-size effects.
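In the scaling study, speedup and parallel efficiency follow the usual definitions, with $T(p)$ the wall-time per ns on $p$ cores:
\[
S(p) = \frac{T(1)}{T(p)}, \qquad E(p) = \frac{S(p)}{p}, \qquad \text{ns/day} = \frac{86\,400\ \mathrm{s}}{T(p)},
\]
the last expression assuming $T(p)$ is reported in seconds per ns.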
[1] Yang, Q.; Zhang, Z.; Sun, X.-G.; Hu, Y.-S.; Xing, H.; Dai, S., Ionic liquids and derived materials for lithium and sodium batteries. Chem. Soc. Rev. 2018, 47, 2020-2064.
[2] Chen, Z.; Danilov, D. L.; Eichel, R.-A.; Notten, P. H. L., Porous Electrode Modeling and its Applications to Li-Ion Batteries. Adv. Energy Mater. 2022, 12, 2201506.
[3] Canongia Lopes, J. N.; Pádua, A. A. H., CL&P: A generic and systematic force field for ionic liquids modeling. Theor. Chem. Acc. 2012, 131, 1-11.
[4] Thompson, A. P.; Aktulga, H. M.; Berger, R.; Bolintineanu, D. S.; Brown, W. M.; Crozier, P. S.; in 't Veld, P. J.; Kohlmeyer, A.; Moore, S. G.; Nguyen, T. D.; Shan, R.; Stevens, M. J.; Tranchida, J.; Trott, C.; Plimpton, S. J., LAMMPS - a flexible simulation tool for particle-based materials modeling at the atomic, meso, and continuum scales. Comput. Phys. Commun. 2022, 271, 108171.
Q&A
In this informative and fast-moving presentation, long-time industry analyst Dan Olds conducts a whirlwind trip discussing why the HPC/AI market is increasingly difficult to measure and highlights new technologies that will help data centers deal with rapidly increasing compute demand. Spoiler alert: he declares air cooling dead and explains why. As his big finish, Olds boldly declares that the only way forward for data centers is to become radically more efficient. Not just talk about it, but actually do it.
This session has become a signature event at CHPC conferences. The rules are brutally simple. Vendors have five minutes and only three slides to put their best foot forward to the audience and the inquisitors. The panel includes industry analyst Dan Olds along with two standout students from the cluster competition who have been briefed on the vendors and their slides.
After their five-minute presentations, the presenters will be asked three questions, two of which they know are coming followed by a final, secret, question. Frank and tough questions will be asked. Answers will be given. Punches will not be pulled. The audience will be the ultimate judge of which vendor did the best job. It’s fun, brisk, and informative.
Large scale AI applications are drivers for the design and deployment of the next generation of supercomputers. While large language model training and generative AI applications take the headlines, scientific workloads are starting to utilize AI as algorithmic extensions to their existing implementations. We will discuss how the needs between these communities differ, how system software and system middleware need to develop to support these use cases, and hopefully demonstrate how once again supercomputing turns compute-bound problems into I/O-bound problems.
TBC
The evolution and progress of humanity are closely linked to our ways of energy use. Reliable energy sources are vital for driving economic growth, especially as society's demand for energy keeps rising. The rapid development of zinc-air batteries (ZABs) makes them an appealing alternative to standard lithium-ion batteries for energy storage needs. However, the slow kinetics of the air cathode lead to a short lifespan and low energy efficiency in zinc-air batteries. First-principles calculations help develop catalysts that promote the formation of the most stable discharge products in Zn-air batteries. Density functional theory (DFT) is used to examine the adsorption (Γ = +1, +2) and vacancy formation (Γ = -1, -2) energies of oxygen atoms on the (001) surface of VCo2O4. The Bader charge analysis reveals how the atoms interact within the system. When oxygen atoms are reduced and adsorbed, it is observed that the V and Co atoms show minimal charge differences compared to the original phase, whether reduced or oxidized. Interplanar distances show that adding or removing oxygen causes the system to expand or contract, respectively. The work function helps assess the system’s reactivity: adsorbing oxygen atoms decreases reactivity, while removing oxygen increases it. The calculations were executed concurrently on 24 of the 2400 available cores, leveraging the CHPC with 2048 MB of memory. These findings provide insights into identifying catalysts that can enhance the oxygen reduction reaction (ORR) and oxygen evolution reaction (OER), thereby improving the performance of Zn-air batteries.
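As a hedged sketch of the quantities referred to above (the oxygen reference energy $\mu_\mathrm{O}$ and the sign and normalisation conventions are assumptions here, not taken from the study), adsorption energies, vacancy formation energies and the work function are commonly defined as
\[
E_\text{ads} = E_{\text{slab}+n\mathrm{O}} - E_{\text{slab}} - n\,\mu_\mathrm{O},
\qquad
E_\text{vac} = E_{\text{slab}-n\mathrm{O}} + n\,\mu_\mathrm{O} - E_{\text{slab}},
\qquad
\Phi = V_\text{vacuum} - E_\mathrm{F},
\]
where $n$ is the number of oxygen atoms added to or removed from the surface and $E_\mathrm{F}$ is the Fermi energy.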
This paper explores the benefits of integrating Earth Observation (EO) techniques with Artificial Intelligence (AI) to enhance the capabilities of Numerical Weather Prediction (NWP) models, particularly in the context of severe weather and environmental hazards over South Africa. NWP models often employ EO data that lack real-time resolution, which may lead to increased uncertainty in short-term forecasts and reduced reliability during high-impact weather events. EO systems provide high-resolution, near-real-time observations. AI techniques perform post-processing tasks such as bias correction, anomaly detection, and pattern recognition, and excel at capturing non-linear relationships and fine-scale phenomena that are often poorly resolved in NWP models. This integrated EO-AI approach should improve model forecast accuracy and the detection of localized hazards. We demonstrate the benefits and shortcomings of our approach in detecting hazards such as heat waves, wildfires and wetland degradation.
Three-dimensional (3D) computational fluid dynamics (CFD) has emerged as a powerful tool for studying cardiovascular haemodynamics and informing the treatment of cardiovascular diseases. Patient-specific CFD models rely on boundary conditions derived from medical imaging, yet uncertainties in imaging measurements can propagate through the model and affect clinically relevant outputs such as pressure and velocity fields. To ensure that CFD-based clinical decisions are both reliable and repeatable, it is essential to quantify these uncertainties and assess the sensitivity of the outputs to boundary condition variability.
Uncertainty quantification and sensitivity analysis (UQ/SA) typically require large numbers of simulations, which makes their application challenging in 3D CFD due to high computational costs. While Monte Carlo approaches may require hundreds of evaluations, alternative methods such as generalized polynomial chaos expansion reduce the number of runs but remain computationally demanding.
In this study, we present a global UQ/SA framework implemented on the Lengau Cluster for coarctation of the aorta, a common form of congenital heart disease. The uncertain inputs are the lumped parameters of the 3-Element Windkessel Model, prescribed at the outlets to represent distal vasculature. We evaluate how variability in these parameters impacts pressure and velocity fields, with the objective of improving the robustness and clinical utility of patient-specific CFD simulations.
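To make the sampling step concrete, the minimal Python sketch below is purely illustrative: the nominal values, the ±20% uniform perturbation and the peak_pressure surrogate are assumptions standing in for the full 3D CFD evaluation. It propagates uncertainty in the three Windkessel parameters at one outlet and reports output statistics and a crude correlation-based sensitivity measure.

import numpy as np

rng = np.random.default_rng(42)
N = 500  # number of Monte Carlo samples

# Nominal Windkessel parameters at one outlet (illustrative values only):
# proximal resistance Rp, compliance C, distal resistance Rd.
nominal = {"Rp": 0.05, "C": 1.0e-4, "Rd": 1.0}

def peak_pressure(Rp, C, Rd):
    """Placeholder surrogate for the 3D CFD model's peak outlet pressure."""
    q = 100.0  # assumed characteristic flow rate
    return q * (Rp + Rd) - 50.0 * C * q  # not a physical model, just a stand-in

# Perturb each parameter independently by +/-20% (uniform).
samples = {k: v * rng.uniform(0.8, 1.2, N) for k, v in nominal.items()}
outputs = peak_pressure(samples["Rp"], samples["C"], samples["Rd"])

print(f"mean = {outputs.mean():.2f}, std = {outputs.std():.2f}")
# Crude sensitivity measure: correlation of each input with the output.
for k, v in samples.items():
    print(k, round(float(np.corrcoef(v, outputs)[0, 1]), 3))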
Q & A
This presentation chronicles the journey of an older-generation computational chemist and a young HPC expert, mediated via AI assistance, culminating in the successful deployment of a multi-node research computing cluster. The cluster supports molecular modeling and drug design, enabling large-scale molecular dynamics and quantum chemistry calculations. The senior researcher's four-decade arc—from 1983 punch cards to 2025 AI-collaborative infrastructure—illuminates artificial intelligence's role in transforming scientific knowledge transfer.
An initial Gaussian software request expanded into a comprehensive cluster setup: Rocky Linux 9, Slurm management, parallel filesystems, and seven key packages (Gaussian, ORCA, GAMESS-US, Psi4, NWChem, CP2K and AMBER). The setup was optimized for mixed GPU architectures (RTX A4000/RTX 4060), a common reality in most laboratories (perhaps fortunately for the average researcher), though uniform hardware is preferable if affordable. Benchmarks yielded 85% parallel efficiency, affirming production readiness.
The AI approach thrived despite hands-off administration, via an iterative model of problem-solving, explanation, and reasoning. Complementary tools — Claude AI for documentation, Grok for perspectives, DeepSeek for verification — fostered rapid consensus, with human-led execution, validation, and adaptation essential. This erodes barriers to retraining or consultancy, enabling expertise assimilation for resource-limited institutions and heralding a paradigm shift in scientific knowledge application.
Keywords: High-Performance Computing, Computational Chemistry, AI-Assisted Infrastructure, Cluster Computing, Knowledge Transfer, Slurm Workload Manager, Scientific Computing, Human-AI Collaboration, HPC Democratization, Intergenerational Learning.
Africa has traditionally lagged behind in life sciences research due to limited funding, infrastructure, and human capacity. Yet the growth of genomics and other large-scale data-driven projects now demands robust cyber-infrastructure for data storage, processing, and sharing. H3ABioNet made a significant contribution to building bioinformatics capacity across Africa over 12 years, but its funding has ended. In 2024, the community received a major boost with support from the Wellcome Trust and Chan Zuckerberg Initiative to establish the African Bioinformatics Institute (ABI).
The ABI is being developed as a distributed network of African institutions, with a mandate to coordinate bioinformatics infrastructure, research, and training. A central focus is on enabling African scientists and public health institutes to manage and analyse large, complex datasets generated by initiatives such as national genome projects, pathogen genomics surveillance, and the African Population Cohorts Consortium. To meet these needs, the ABI is working with global partners, including the GA4GH, to promote adoption of international standards and tools that enable secure, responsible data sharing.
The Institute will coordinate the development of a federated network of trusted research environments (TREs), ensuring data governance frameworks are locally appropriate while interoperable with global systems. By hosting African databases and resources, and fostering collaborations across institutions, the ABI will both drive demand for advanced compute and storage solutions and contribute to shaping how cyber-infrastructure supports genomics on the continent. In doing so, it will bridge local and global research ecosystems and advance the responsible use of genomic data for health impact.
Direct-current (DC) electric arc furnaces are used extensively in the recycling of steel as well as primary production of many industrial commodities such as ferrochromium, titanium dioxide, cobalt, and platinum group metals. This typically involves a process called carbothermic smelting, in which raw materials are reacted with a carbon-based reductant such as metallurgical coke to make the desired product. Although it is one of humanity’s oldest and most established technologies, carbothermic metal production is becoming increasingly unattractive due to its significant scope-1 emissions of carbon dioxide and other environmental pollutants. Because of this many alternatives to fossil carbon reductants are currently being researched, and in the context of broad initiatives to establish a sustainable hydrogen economy both in South Africa and internationally, the possibility of directly replacing coke with hydrogen as a metallurgical reductant is of particular interest. A DC arc furnace fed with hydrogen has the potential to reduce or eliminate carbon emissions provided renewable resources are used for both electrical power and hydrogen production.
Key to the operation of a DC arc furnace is the electric arc itself – a high-velocity, high-temperature jet of gas which has been heated until it splits into a mixture of ions and electrons (a plasma) and becomes electrically conductive. The plasma arc acts as the principal heating and stirring element inside the furnace, and understanding its behaviour is an important part of operating an arc furnace efficiently and productively. However, due to the extreme conditions under which arcs operate, studying them experimentally can be difficult, expensive, and hazardous. Coupled multiphysics models which simulate arcs from first principles of fluid flow, heat transfer and electromagnetics are therefore of great value in conducting in silico numerical experiments and building an understanding of how they behave under different process conditions. This presentation will discuss the development of an arc modelling workflow incorporating aspects of process thermochemistry, plasma property calculation from fundamental physics, and computational mechanics models of the arc itself. This workflow is then used to explore the impact of introducing hydrogen gas as an alternative reductant in metallurgical alloy smelting processes.
In keeping with the theme of this year’s CHPC National Conference, the critical role of HPC in plasma arc modelling will be discussed in terms of the data life cycle in plasma arc modelling – from input parameters through to raw simulation data, and finally to key insights which will help guide the next generation of clean metal production technologies.
In an age where AI is permeating every field of engineering and science, it is essential for researchers to quickly embrace technologies that can drive breakthroughs and accelerate innovation. Tools like MATLAB are specifically designed to lower the barriers to entry into the world of AI, making advanced capabilities more accessible to engineers and scientists.
Integrating truly AI-enabled systems into real-world applications presents significant challenges – data fragmentation, legacy system integration, and scaling advanced computations are common hurdles. This talk presents practical approaches to overcoming these obstacles, emphasizing how high-performance computing with MATLAB and Simulink can accelerate AI model development and deployment. After a quick introduction on how to access and use MATLAB at your university or institute, the session will focus on effective strategies and best practices for leveraging HPC capabilities such as parallelization and workflow optimization to achieve faster prototyping and scalable AI solutions.
The availability of MATLAB on the CHPC cluster through your university or institute license ensures that the presented workflows are accessible and reproducible for researchers across the Southern Africa region. The Academia Teams at MathWorks and Opti-Num Solutions (local partner company) are here to support your research by helping you quickly adopt AI and scale your work with parallel computing.
Q & A
All CHPC Users, Principal Investigators and anyone interested in practical use of CHPC computational resources are invited to attend this informal Birds-of-a-Feather Session.
At the start, overviews will be presented of recent usage of the CHPC compute resources (HPC Cluster, GPU Cluster and Sebowa Cloud infrastructure), followed by a discussion of new resources available to users and of questions and topics raised by the audience.
The session provides an excellent opportunity to meet up in-person with CHPC employees and to meet and engage with colleagues benefiting from CHPC services.
Cocktails Poster Session
Intel has already put a lot of “AI in action”. Come and hear about some of the use cases that were deployed at scale during the Paris 2024 Olympic and Paralympic Games. You will be blown away by the capabilities and their results! The session will also deliver some details about the technologies and features of the Intel ingredients in these use-case solutions, as well as a look at the new Intel AI ingredients and solutions.
Research and discovery are increasingly computation- and data-intensive, interdisciplinary, and collaborative. However, reproducing results remains a significant challenge. Scholarly publications are often disconnected from the data and software that produced the results, making reproducibility difficult. Researchers today generate vast amounts of data, code, and software tools that need to be shared, but sharing data remains challenging, especially when data is large or sensitive. Moreover, funding agencies increasingly require that the data used to generate results be shared, yet data is only valuable if it is reproducible. A key challenge is that reproducible artifacts are typically created only after the research is complete, hindered by a lack of standards and insufficient motivation. Despite growing recognition of the importance of reproducibility, the research community still lacks comprehensive tools and platforms to support reproducible practices throughout the research cycle, as well as a culture that educates and trains researchers on the topic.
This presentation will introduce SHARED (Secure Hub for Access, Reliability, and Exchange of Data), a new initiative at the University of Chicago to develop a comprehensive platform for data-driven research and data management. We will discuss the challenges and opportunities of reproducibility in computational research and strategies for capturing reproducible artifacts throughout the research process. Additionally, we will share progress on building a community of practice to democratize reproducibility in scientific research.
TBC
TBC
The spinel LiMn2O4 cathode material is an attractive candidate for the design and engineering of cost-effective and thermally sustainable lithium-ion batteries for optimal utilisation in electric vehicles and smart grid technologies. Despite its electrochemical qualities, its commercialization is delayed by the widely reported capacity loss during battery operation. The capacity attenuation is linked to structural degradation caused by the Jahn-Teller activity and disproportionation of Mn3+ ions. In several studies, the structural stability of spinel LiMn2O4 was improved by single- or dual-doping the Mn sites to curtail the number of Mn3+ ions. However, this results in a loss of active ions, which ultimately limits the amount of energy that can be obtained from the battery. Herein, a high-entropy (HE) doping strategy is used to enhance the structural stability and electrochemical performance of the LiMn2O4 spinel. The unique interactions of various dopants in HE doping yield enhanced structural stability and redox coupling, which can improve the concentration of the active material in the system. An HE-doped LiMn2O4 (LiMn1.92Mg0.02Cr0.02Al0.02Co0.02Ni0.02O4) spinel structure was successfully optimized using the Vienna Ab initio Simulation Package (VASP) code. The lattice parameter of the optimized (ground-state) structure was determined to be 8.270 Å, which is less than the value of 8.274 Å for the pristine LiMn2O4 spinel structure. The resulting lattice contraction suggests a stronger M-O bond, beneficial for increased resistance to phase changes and degradation. Moreover, the concentration of Mn3+ was decreased by 5.3% to defer the onset of the Jahn-Teller distortion and enhance capacity retention. This retention is among the significant benefits afforded by dopants such as Cr3+, which can participate in storing electric charge during the charging process by forming Cr4+, thus compensating for the capacity loss incurred by the reduction in Mn3+ concentration. Consequently, this work paves a path for the exploration of several other fundamental properties linked to the electrochemical performance of the spinel.
The proliferation of Artificial Intelligence (AI), data-driven research, and digital transformation has increased the global demand for powerful computing infrastructures capable of processing and analyzing enormous volumes of data. High-Performance Computing (HPC) has emerged as the cornerstone of this evolution, enabling researchers to perform complex simulations, accelerate model training, and analyze Big Data at unprecedented scales. Yet, across many African universities, access to such advanced computing capabilities remains severely limited, constraining the ability of scientists to participate meaningfully in global AI and data science innovation. This paper explores the strategic integration of HPC technologies with deep learning architectures to establish a sustainable, Big Data-driven cyberinfrastructure model tailored for African academic environments.
Drawing inspiration from the ongoing efforts at the University of Mpumalanga (UMP) and the Council for Scientific and Industrial Research (CSIR), the study proposes a framework that connects HPC systems with scalable AI workflows in areas such as agriculture, climate modelling, energy, and cybersecurity. The framework emphasizes distributed GPU-accelerated clusters, containerized computing environments, and job scheduling mechanisms that allow multiple research teams to run parallel deep learning experiments efficiently. Beyond the technical dimension, the paper highlights the importance of local capacity development, collaboration, and institutional investment as key drivers for long-term sustainability. By showcasing how HPC can shorten AI model training times, enhance predictive accuracy, and improve data management efficiency, this research demonstrates that advanced computation is not merely a luxury for developed nations but an attainable enabler of scientific independence for African universities.
The findings underscore that the convergence of HPC and AI can transform research productivity, foster interdisciplinary collaboration, and support evidence-based policymaking in sectors critical to Africa’s development. Ultimately, the paper advocates for the creation of a federated HPC-AI ecosystem across African institutions, allowing shared access to computational resources, open datasets, and research expertise. Such an ecosystem would democratize access to cutting-edge technologies, close the digital divide, and position African researchers as active contributors to the global knowledge economy rather than passive consumers. Through this integrative perspective, the paper not only offers a technical blueprint for HPC-AI synergy but also presents a vision for empowering scientific innovation, data sovereignty, and technological resilience within the African higher education landscape.
Q & A
The mechanical properties of materials change when subjected to dynamic conditions of high pressure and temperature. Such materials include those applied in cutting and shaping, which results in twisting and tensile forces. Results for selected MAX phases are presented to show variations in elastic constants as a function of dynamic pressure and temperature. Another situation where materials are subjected to such conditions is in the core of the Earth. The stishovite, CaCl2 and seifertite phases of silica, occurring in the core of the Earth, are investigated, with outcomes of phase transitions and related changes in seismic velocities that are compared with experimentally determined values.
The presentation showcases recent developments and applications of the ChemShell software in the field of energy materials by the Materials Chemistry HPC Consortium (UK), focusing on defect properties. This work capitalizes on the software engineering and methodological advances of recent years (including the UK Excalibur PAX project highlighted at last year's CHPC conference) by the groups of Prof. Thomas W. Keal at STFC Daresbury Laboratory (UK) and Prof. C. Richard A. Catlow at UCL and Cardiff University, with several collaborators. Materials of interest include wide-gap semiconductors used in electronic and optoelectronic devices as well as in catalysis and solid electrolytes. The method allows one to explore both defect thermodynamics and defect spectroscopic properties. Further examples show how a classical rock-salt-structured insulator, MgO, can be usefully employed as a platform in studies of exotic states of matter of fundamental interest, in particular the unconventional cuprate superconductors with high critical temperatures and the recently discovered phenomena in isostructural nickelate systems.
The cheetah is a pinnacle of adaptation in the context of the natural world. It is the fastest land mammal and has multiple morphological specialisations for prey-tracking during high-speed manoeuvres, such as vestibular adaptations to facilitate gaze and head stabilisation [1]. Understanding the cheetah’s head stabilisation techniques is useful in fields such as biomechanics, conservation, and artificial and robotic systems; however, the dynamics of wild and endangered animals are difficult to study from a distance. This challenge necessitated a non-invasive Computer Vision (CV) technique to collect and analyse 3D points of interest. We collected a new data set to emulate a perturbed platform and isolate head stabilisation. Using MATLAB®, we built upon a method pioneered by AcinoSet [2] to build a 3D reconstruction through CV and a dynamic model-informed optimisation, which was used to quantitatively analyse the cheetah’s head stabilisation. Using our new dataset, and by leveraging optimal control methods, this work identifies and quantifies passive head stabilisation and, in conjunction with AcinoSet data, quantifies the active stabilisation during locomotion. Since this work involves computationally heavy methods, the processing of these data using optimisations and computer vision rendering can be benchmarked against parallel computing methods, to further support the viability of the 3D reconstruction methods for other animal or human models and applications of high-performance, low-cost markerless motion capture.
[1] Grohé, C et al, Sci Rep, 8:2301, 2018.
[2] Joska, D et al, ICRA, 13901-13908, 2021.
TBC
Q & A
The advent of Exascale computing in 2022 marks a major milestone in HPC but also demonstrates its limitations for future progress. Historically, conventional MPPs (and large commodity clusters) have achieved a factor-of-2 performance improvement every two years. In addition to CPUs, GPUs have extended this through streaming SIMD computations for certain classes of application algorithms. But the end of Moore's Law, as well as of Dennard scaling, is severely constraining future progress, especially with respect to cost, as Frontier approaches 8,000 square feet and only one other such system, Aurora, has been announced since then. The major class of supercomputer computation not adequately addressed is dynamic adaptive graph processing, which is required for advanced forms of machine intelligence; hence, the third pillar of computation. Graphs exhibit neither much spatial locality nor temporal locality but suggest what may be called “logical locality”, as the data structures explicitly define their own topologies. However, a new approach to computer architecture, a non-von Neumann family that is both dynamic and adaptive, can readily provide an order-of-magnitude performance-to-cost advantage over current methods. While this form of improvement is particularly advantageous for dynamic graph processing, it can also enhance more typical matrix processing. This closing keynote address will introduce the foundational concepts of the Active Memory Architecture being pursued at the Texas Advanced Computing Center. Questions from the participants will be addressed throughout the presentation.

