The annual CHPC National Conference ran from Wednesday to Friday, 1 to 3 December 2021, as an online conference.
Videos: Please see the menu on the left for all recordings from the conference.
The aim of the conference: to bring together our users so that their work can be communicated, to include world renowned experts, and to offer a rich programme for students, in the fields of high performance computing, big data, and high speed networking. The CHPC National Conference is co-organised by the CHPC, DIRISA and SANReN.
A big thank you to all speakers, delegates and sponsors. The 2021 Conference could not have been the success it was without your contribution.
This year's theme recognises how, as more and more work moved online in response to the pandemic, cloud technologies from compute to storage became essential to supporting solutions to these unprecedented challenges. It is the cloud's flexibility that was key to enabling epidemic forecasting, data collection, remote work, and the many other new applications that were needed.
For more information please see the main conference site.
Scientific endeavors of unprecedented scale, such as the Square Kilometre Array, the Large Synoptic Survey Telescope and the Large Hadron Collider, will be or are already generating colossal amounts of data. Furthermore, in a fundamental manner, OMICS and AI depend on the availability of and easy access to big data. This forces us to rethink the hierarchy and relative importance of the "compute - store - transport" elements. All these components must be considered on an equal footing, integrated and architected as an almost organic, coherent and coalesced federation.
In this talk I will chronicle the InfiniCortex project, which ran from 2014 to 2016. Subsequently, the U.S. DoE devised the concept of Superfacilities; a Global Research Platform was established to facilitate big scientific data collaborations at the global scale; and in Europe the EuroHPC Joint Undertaking is funding three pre-exascale computing infrastructures, each serving communities of users in many countries. All this requires the tight integration of compute, store, transport and access components. We are witnessing the emergence of Federated Supercomputing.
InfiniCortex was a supercomputer spanning the entire globe. It combined supercomputing and storage resources located in Asia, Australia, the USA and Europe into one concurrent, seamless, global electronic brain. Over the three years of the project, A*CRC in Singapore, together with dozens of partners around the world, implemented:
i) ring-around-the-world connectivity, with most links at 100 Gbps, using exclusively InfiniBand transport and InfiniBand routing, allowing almost 100% efficient, lossless and encrypted communication between all processors, everywhere (RDMA);
ii) InfiniCloud: "designer choice" HPC Cloud instances with no limits on the size of computing resources, spanning four continents;
iii) InfiniCortex-enabled applications in critical areas of science and technology.
I will also report on the continuation of this work at ICM, University of Warsaw, with partners around the globe: transmitting data between Europe and Australia, and between Europe and Singapore, and establishing networks with efficient Data Transfer Nodes (DTNs) and inter-city or inter-continental connected infrastructure enabling high-priority computing, distributed supercomputing, collaboration, and scientific research.
HPC users ask more of their HPC systems than ever before, and they rely on Intel security innovations as the trusted foundation for high performance computing, protecting applications and data. Intel delivers technology that improves foundational security, data and workload protection, and software reliability, using new hardware-based controls for cloud and enterprise environments, including Intel® Software Guard Extensions (Intel® SGX). High Performance Computing is the foundation of research and discovery. Let's look at Intel's HPC strategy and new innovations, including the latest Intel® Xeon® Scalable processors, data center GPUs and powerful software tools. Together, let's accelerate the next era of innovation in HPC.
The Intergovernmental Panel on Climate Change (IPCC) Assessment Report Six (AR6) Working Group I (WGI) report, released in August 2021, is widely regarded as the most influential and comprehensive report on climate change composed to date. The report was described as a 'code red' for humanity by the United Nations Secretary-General, and served as a direct input to the 26th Conference of the Parties (COP26) in November 2021. African climate science is well represented in the report, including several modelling papers on the projected climate change futures of the African continent. The CHPC, and most recently its Lengau cluster, played a critical role in, and in fact made feasible, the completion of several computationally expensive simulations that underpinned several papers referenced by the IPCC AR6. This talk will review those papers and their significance. Future directions in Earth System Modelling will also be discussed, in the context of convection-permitting models and exascale computing, and the role of the CHPC in ensuring that South African climate modelling remains internationally competitive will be emphasized.
Summary:
Computational chemistry involves the use of theoretical methods and computers to solve chemical and interdisciplinary problems. It has progressed with the explosive increase in computational power and the availability of user-friendly software. Computational chemistry is also benefiting from the boom in the ICT sector.
This presentation gives an overview of the fundamentals of computational chemistry methods and their applications in pharmacy and drug development.
Objectives:
1. Collaboration with the Computational Chemistry Group, Department of Chemistry, Faculty of Science, University of Mauritius.
2. Increase awareness among faculty and students of the importance of computational chemistry and its use in all fields of science, especially in pharmacy and the health sciences.
3. Explore the possibility of joint graduation projects with Prof Ramasami.
During the start of the Covid-19 pandemic last year, the National Department of Health launched a number of initiatives to combat the spread of the virus. One such initiative was to conduct household screenings during Lockdown Level 5, using Community Health Workers. A basic questionnaire was defined and deployed using the Cmore platform, a collaboration and shared situation awareness platform developed by the CSIR with both web and mobile applications. The mobile application was configured to collect screening data entered by approximately 25,000 Community Health Workers, which overloaded the existing Cmore production infrastructure hosted on dedicated hardware in the CSIR ICT Data Centre. Since the health workers were deployed across all 9 provinces, the decision was made to split the deployment into 10 servers: one for each of the 9 provinces plus a spare for national use. The CSIR CHPC team came on board and configured 10 servers, each with 164GB RAM, 32 vCPUs, and approximately 100GB disk space, in less than two weeks. However, the difficult part was to get an operational copy of the Cmore platform, with all its configured data, onto the 10 servers. This required some innovative approaches, but the servers were deployed, and more than 3 million household screening records were collected. Four of the 10 servers remain active, now for other Covid-19 related deployments in support of Western Cape Government Health, the Gauteng Department of Roads & Transport and the National Institute for Occupational Health.
The demand by consumers for access to ubiquitous and affordable communication services, including the applications of the fourth industrial revolution with their need to connect everything to the internet (machine-to-machine and human-to-machine communications), is growing exponentially in both speed and volume. Radio frequency (RF) spectrum is a finite resource of the national ICT infrastructure necessary for enabling the exchange of information. In most developing countries, wireless communication technologies remain the cost-effective and preferred solution for providing broadband communication networks, due to the lack of, or limited coverage of, fixed communication infrastructure such as fiber optic cables, which is attributable to high investment costs. However, the deployment of wireless network infrastructure depends on the availability of RF spectrum. As such, the demand for access to RF spectrum continues to increase, necessitating its efficient utilisation. Unfortunately, the dominant RF spectrum access techniques and management regimes are inefficient, since they are based on traditional command-and-control approaches, which are static in nature. The use of such (outdated) regimes has resulted in an "artificial" scarcity of RF spectrum. This artificially created scarcity leads to two main problems: i) limited or inadequate access to RF spectrum, and ii) high cost of network deployment, which translates to a high cost of data. Both problems have a negative impact on the deployment of wireless broadband networks for the provision of universal broadband and communications infrastructure to needy communities. The CSIR Smart Spectrum Sharing (S3) platform is meant to make efficient RF spectrum utilisation toolboxes available to stakeholders across the telecommunications sector value chain, including national regulators, network operators and policy-makers, to support efforts to reduce the cost of communications and ease the barrier to entry in the telecommunications sector.
Surface-enhanced Raman spectroscopy (SERS) is a phenomenon that amplifies conventional Raman signal with roughened metallic substrates. Gold and silver metallic nanoparticles are commonly used as SERS substrates. The application of SERS in diagnostics yields sensitivity, multiplexing, and quantification of disease causatives. However, the understanding of the SERS architecture and mechanism is still elusive. Hence the use of density functional theory (DFT) to study its chemistry. DFT is used to study the interaction of the metallic substrates with SERS tags and biological molecules. The simulation results inform experimental work towards the fabrication of reproducible, sensitive, qualitative SERS biosensors.
In 2002, with support from the Research Focus Area (Separation Technology) at North-West University (NWU), the Laboratory for Applied Molecular Modelling (LAMM) was established. After an evaluation of the researchers' abilities and the research needs, an investment was made in Accelrys Materials Studio software. Additionally, ten workstations and a 12-CPU cluster were acquired. The focus of the research done at that time in the LAMM was homogeneous catalysis, which was limited to reactions in the gas phase. Transition state calculations, as well as reactions in solution, were a challenge.
Around the same time, the CHPC was established. However, it was only on 23 January 2016 that the application to register a program at the CHPC was approved. The title of this program is "Computational Chemistry within the Laboratory of Applied Molecular Modelling at NWU".
Within this program, the LAMM supports and facilitates the use of computational chemistry in research at NWU. The projects that ran between January 2016 and March 2018 were: 1. solvent extraction of Ta, Nb, Hf and Zr from various minerals; 2. polymer blends, a collaboration with UFS and Qatar University; 3. identification of mechanisms in biochemistry; 4. homogeneous and heterogeneous catalysis; 5. a collaboration with a group in China on polyphosphazenes; and 6. interfaces in crystals, with mechanical engineering at NWU. In this period (26 months), the program used 213,896 CPU hours per month. Since then, some projects have run to completion, some have been added, and some have expanded. The usage has increased to 790,636 CPU hours per month.
The project that expanded the most is "Homogeneous and Heterogeneous catalysis", specifically heterogeneous catalysis. The focus in heterogeneous catalysis is on developing new/alternative catalysts to apply in the generation of alternative renewable energy and pollution control.
The South African Population Research Infrastructure Network (SAPRIN) curates longitudinal population data collected by four nodes from a total population of more than 400 000 individuals. Due to the dynamic nature of these study populations, data representing episodes of individual surveillance need to be combined in a way that maintains data integrity and takes into account variations between data collection sites.
We need to deconstruct 4.5 million person-years of observation into a day-level dataset, requiring the kind of processing and storage capacity provided by a high performance computing environment such as the CHPC.
We will describe a data processing pipeline, originally developed in Pentaho and recently converted to the Julia programming language, which scales well in the CHPC environment.
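As a hedged illustration only (the SAPRIN pipeline itself is written in Julia; all column names below are hypothetical, not SAPRIN's actual schema), the core transformation of surveillance episodes into day-level records can be sketched in a few lines:

```python
# Sketch: expand per-individual surveillance episodes into one row per
# person-day. Column names (individual_id, node, episode_start, episode_end)
# are illustrative assumptions.
import pandas as pd

episodes = pd.DataFrame({
    "individual_id": [1, 2],
    "node": ["NodeA", "NodeB"],
    "episode_start": pd.to_datetime(["2016-01-01", "2016-03-10"]),
    "episode_end": pd.to_datetime(["2016-01-05", "2016-03-12"]),
})

# Build a per-episode range of days, then explode it into day-level rows.
episodes["day"] = episodes.apply(
    lambda r: pd.date_range(r["episode_start"], r["episode_end"]), axis=1)
day_level = episodes.explode("day").drop(columns=["episode_start", "episode_end"])
print(day_level)
```

At 4.5 million person-years this expansion yields on the order of 1.6 billion rows, which is what makes the CHPC's processing and storage capacity necessary.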
CASABIO is a citizen science platform currently focused on African plants. A number of innovations have been pioneered by CASABIO and these are showcased in the presentation. Most importantly CASABIO is the only platform to create a workflow that allows one to efficiently go from field observation to label. In addition, there are a number of protective features aimed at protecting plants in the field from poaching. This is becoming increasingly critical with the poaching that is currently taking place in the arid regions of South Africa. CASABIO is also an organisation with a number of assets aimed at facilitating research. We will be publicly revealing for the first time some of these assets.
The Arm ecosystem has defined the last three decades of compute technology and will continue to do so for a long time to come. Arm’s ambitions for the emerging markets are consistent with our vision towards deploying technology that enables opportunity for a globally connected population.
The new Armv9 architecture forms the leading edge of the next 300 billion Arm-based chips. It includes the Arm Confidential Compute Architecture (Arm CCA), a key feature of Armv9-A and the next step in transforming the trust model of compute environments in every application. Armv9 is also driven by the demand for increasingly specialized, secure, and high-performance processing built on the economics, design freedom and accessibility of general-purpose compute.
Confidential compute is important for client devices. The Arm CCA security features will make their way across all tiers of computing applications, and help to protect IoT sensors, handsets, laptops, the Internet and the cloud. As digital transformation activities mature across emerging markets, public and private stakeholders are focused on ensuring that security remains at the forefront of their national interests. In the next decade of compute, the digitization activities around data sovereignty, information security and national security interests will be critical, and policy stakeholders in these markets will embrace programs and practices with a focus on security innovations that ensure competitiveness and preserve competitive advantages.
In support of these innovations, Armv9 is geared to change the economics and expectations for new and evolving technologies around 5G, cloud, and HPC. Performance-wise, the v9 instruction set is an upgrade of Arm’s Scalable Vector Extension technology (SVE). SVE is currently used in the Arm-based Fujitsu A64FX chip that powers Fugaku - the world’s fastest supercomputer - and SVE2 opens a range of new approaches to deploying more powerful AI across the Cloud, Edge, and endpoints.
In my proposed presentation, I will discuss how the Armv9 architecture will influence the next decade of compute, while supporting critical activities that allow emerging market stakeholders to fully embrace multi-sectorial digital transformation and participate in the Fourth Industrial Revolution (4IR). I will share ecosystem driven solutions that showcase the transformational power of Confidential Compute, and highlight how the Armv9 architecture, driven by our broad ecosystem, is delivering best-in-class solutions for Cloud, Edge and Endpoint AI needs for all tiers of ICT stakeholders.
Addison Snell of Intersect360 Research will give an overview of the latest market research and insights for HPC and AI, including revised market guidance for 2021, highlights of their recent HPC software survey, and insights from SC21.
The Vendor Crossfire Session is focused on giving the audience the opportunity to learn more about the products and offerings of the conference sponsors in a fun and engaging session.
The session will be facilitated by Addison Snell and Dan Olds from Intersect360 and will also involve participation of the audience to e.g. vote on answers to questions posed to vendor representatives.
Short introductory presentations will be made by each vendor representative, followed by a question-and-answer session. The following representatives have been confirmed:
Dr Jean-Laurent Philippe (Intel)
Mr Yossi Avni (Nvidia)
Mr Olivier Blondel (HPE)
Mr Kamel Beyk (Dell Technologies)
Mr Ernst Burger (Altair)
Welcome and Opening:
• Welcome remarks by the SADC Chair, Prof Chomora Mikeka, Malawi
• Remarks by Ms Anneline Morgan, SADC Secretariat
Ms Anneline Morgan, SADC Secretariat:
• Record of the 10th SADC Cyber-Infrastructure Technical Experts meeting, held virtually in December 2020
• Background, expectations and outcomes of the meeting
Progress report on implementation of SADC Cyber-Infrastructure Framework
—Dr Happy Sithole, South Africa
Update on establishment of SADC Centre of Excellence on ICT
—SADC Secretariat
United Nations Economic Commission for Africa (UNECA), Centre of Excellence on Digital Technologies
—Dr Mactar Seck
Update and next steps on Weather and Climate project
—Dr Mary-Jane, Chief Scientist, South African Weather Service
Status of Open and Distance Learning in SADC: implications of COVID-19, the future of digitisation and the role of Cyber-Infrastructures
—Prof Martin Oosthuizen, Southern African Regional Universities Association (TBC)
—Dr Tshiamo Motshegwa
Establishment of NRENs
—UbuntuNet Alliance and NICIS
Today, data is gathered at different scales and at different frequencies by different organisations using different types of devices, more than at any other time in history.
Most of it is gathered using already outdated technologies and systems. This creates an integration challenge: digesting data at the local scale, using it to update regional and nationwide scale models, and feeding the models thus learned back to the local scale to inform local inference.
If every farm has thousands of affordable sensors embedded in its environment (soil, waterways, animals), constantly "tasting" soil chemistry, flow rates, and every kind of biological activity, it will produce a massive amount of data per day that has to be stored, analysed, curated and maintained.
We could seed all this information, such as crop variations, soil health and weather patterns, combined with insurance options, credit availability and market forecasts, into a single database and then analyse it through AI and data analytics. The goal is then to develop personalised services for a sector replete with challenges such as peaking yields, water stress, degrading soil and a comparative lack of infrastructure.
This is supercomputing territory. Real-time advanced modelling and simulation for quick decision-making requires significant computing power. In this talk, Open Parallel's Nicolás Erdödy outlines the powerful modern technology tools and models available, but not often used, for the agriculture sector, and how a system of systems similar to a digital twin can be developed, from the edge to exascale computing, to optimise production while measuring climate change as it happens.
The cubic garnet-type Li7La3Zr2O12 is an eminent candidate for next-generation solid-state battery technology due to its thermal stability and high ionic conductivity. As such, its operating mechanisms need to be thoroughly understood, particularly the structural instability reported to occur at lower temperatures. Herein, the statistical sampling capability of molecular dynamics simulations is employed to investigate the fundamental structural, kinetic and thermodynamic properties arising from its subjection to pressure and temperature. Systematic application of pressure yielded a transition from the tetragonal phase to the cubic phase at 2 GPa. The lattice parameters for the cubic and tetragonal phases acquired in the current study are within 0.38% of literature values. Furthermore, the XRD patterns confirm varying phases under different pressure conditions. The temperature phase diagram for the 0 GPa structure agrees well with literature trends and, interestingly, the 2 GPa structure retained the cubic phase at various temperatures, as confirmed by the XRD patterns and the temperature phase diagram. Interrogation of the LaO8 dodecahedra and ZrO6 octahedra demonstrated no significant variations in bond lengths and bond angles, a good indication of the regulation of the Li+-transport channel size in the 2 GPa structure. The efforts in this study are a preliminary stage towards fully understanding the thermodynamic impact as a structural modification avenue, pending further investigations.
High-performance computing and artificial intelligence have driven supercomputers into wide commercial use as the primary data processing engines enabling research, scientific discoveries, and product development. Extracting the highest possible performance from supercomputing systems while achieving efficient utilization has traditionally been incompatible with the secured, multi-tenant architecture of modern cloud computing. A cloud-native supercomputing platform aims at the goal of combining peak system performance with a modern zero-trust model for security isolation and multi-tenancy. The key element enabling this architecture transition is the data processing unit (DPU).
The DPU is a fully integrated data-center-on-a-chip platform that imbues each supercomputing node with two new capabilities. First, an infrastructure control plane processor that secures user access, storage access, networking, and life-cycle orchestration for the computing node in the data center or at the edge, offloading these services from the main compute processor and enabling bare-metal multi-tenancy. Second, an isolated line-rate data path with hardware acceleration that enables high performance. All this infrastructure allows a cloud-native HPC and AI platform architecture that delivers HPC performance on an infrastructure platform that meets cloud services requirements. The implementation of the infrastructure comes from the open-source community and is driven by standards, similar to how parts of the traditional HPC software stack are maintained by a community including commercial companies, academic organizations, and government agencies.
We'll introduce the new supercomputing architecture, discuss the first cloud-native supercomputers, review first application performance results, and explore future directions.
This talk will provide an update on the way HPE is working with customers to provide technology and solutions that address the most challenging problems in high performance computing. It will cover the need to make use of heterogeneous computing elements, and how AI can be combined with HPC to enable end users to be more productive.
The Fourth Industrial Revolution (4IR) and the Coronavirus Disease 2019 (COVID-19) have disrupted the higher education environment in unprecedented ways. This presentation is based on research conducted by the Faculty of Economic and Management Sciences at the University of the Free State that identifies the impact of increasing disruption driven by the 4IR and COVID-19 on the content and curriculum design of degree programmes in economic and management sciences offered by South African universities. The setting is six South African and five top-tier US and UK universities. The study used a non-positivist qualitative research design, specifically the case-study approach. A document analysis of the information in university yearbooks and prospectuses was conducted, using a purposive sampling design. The results indicate that an online presence will become more important due to increased disruption, and will not only ensure an additional revenue stream, but also promote continuity in operations and mitigate threats from competitors. COVID-19 has accelerated the extent of this disruption and expedited the migration to online teaching and learning platforms. Furthermore, since science, technology, engineering and mathematics are integral to the majority of 4IR-related modules, South African universities must not shy away from inter- and multi-disciplinary curriculum designs in their degree programmes. Coupled with the challenges the majority of South African students face in accessing electronic devices, data and the internet, COVID-19 has thrust this challenge to the forefront of the South African higher education landscape. By comparing the developments in South African universities with those in trendsetting, top-tier, global universities, management can assess the extent to which they are internationally competitive and adapting to the demands of the 4IR.
Many companies are joining the worldwide drive toward a paperless environment. The use of digital documents, contracts and approval forms can save organisations significant cost and time, as documents can be electronically signed and transmitted to their destination within minutes. Electronic signatures bring many advantages, but a review of the current situation reveals many cybersecurity threats around the signing of documents in digital form. Many companies are still in the early phases of utilising electronic signatures, with many potential opportunities for manipulation. This paper discusses the various types of electronic signatures and how they can be exploited, and recommends general security measures individuals and organisations can follow in order to use electronic signatures more securely.
Hydrokinetic energy generation devices within water infrastructure are becoming an increasingly attractive alternative power source. In these applications the extent and characteristics of the downstream wake are of great importance. The vortex formation and diffusion of the wake are complex and depend on numerous factors. Validated Computational Fluid Dynamics (CFD) models provide detailed insight into these formations. These models allow analysis of the wake behaviour, which is helpful in the design and installation of these systems. Additionally, the effects of submergence depth and blockage ratio can also be investigated. Previously, costly laboratory testing and simplified, inaccurate numerical modelling were used, owing to the large computational resources needed to accurately simulate these applications. The presentation will discuss the simulations made possible through the CHPC resource, and the challenges, successes, and relevance of the results.
Descriptors derived from density functional theory (DFT) calculations have been the standard when it comes to screening any alloy configuration space. However, deriving descriptors using DFT comes at high computational cost, since any alloy configuration space is expansive. DFT-derived descriptors have been used in scaling relationships (SR), quantitative structure-property relationships (QSPR) and, of late, artificial intelligence/machine learning (AI/ML) for screening alloys and catalysts. However, SR and QSPR still require many DFT calculations, and AI/ML needs large amounts of training data. Catalyst supports have not been intensively investigated; much of the focus has been on the catalyst itself. Alumina (Al2O3) has been the most dominant support in use. Computational alchemy can be used to approximate a descriptor over a large number of random/hypothetical alloy configurations at low computational cost, because it requires only a single set of reference DFT calculations. Transition-metal-doped Al2O3 has been reported to possess excellent attributes, such as the ability to promote surface diffusion and to prevent clustering/sintering by suppressing grain growth. In this study, using the binding energy as a descriptor, we screen random/hypothetical alloys of the catalyst support Al2O3 using computational alchemy. We also explore some of the limitations and challenges of this approach in screening a broad range of alloys. Pt is introduced at different locations within the alloy matrix to make Al2O3 a conductor and suitable for computational alchemy. As in previous studies on metal alloys, computational alchemy predicts adsorbate binding energies in close agreement with those obtained from DFT calculations. This study provides insights into how computational alchemy can be useful for materials prediction at low computational cost.
The talk will cover some of the trends in power and CPU technology, and the challenges we face as our technology partners push the limits.
This talk will discuss the current state-of-the-art of HPC cloud usage in various application sectors and will look at best practices and common pitfalls when adopting cloud infrastructures for HPC. The focus will be on non-trivial use cases. Concrete examples of existing cloud deployments will showcase what is feasible and advisable today. The talk will also give a perspective on trends and future developments.
With the advent of the Fourth Industrial Revolution, there is a rise of connected 'smart' devices called the Internet of Things (IoT). This has implications for network architecture and brings an increase in the variety and volume of data that networks need to cater for. In addition, IoT has been seen to result in a broader attack surface for information warfare, as well as the utilisation of compromised IoT devices to conduct attacks that have disrupted large networks. The presentation will provide an overview of IoT-related security incidents and focus on the security considerations of IoT, as well as information warfare attacks enabled by IoT.
The Lost Packet Warehousing Service (LPWS) is a technological solution with a South African focus to enable the passive but continuous collection of cyber data. The purpose of LPWS is to function as the primary source of cyber data, which will support the identification and detection of emerging trends and cyberattacks. LPWS aims to monitor threats at national, organisational and private levels using a collection of deception technologies. Products offered by LPWS include raw data sets available for use by universities, cyber threat reports, as well as the Honey Net Kit - a miniaturized but deployable prototype of LPWS.
Reports and Updates from Member States on programmes, projects and infrastructures.
Update on Work Plans from Working Groups/Sub-Committees
Recap on Governance Structures and update
—SADC Secretariat
Road Map and Action Plan on Outcomes of the 11th SADC Cyber-Infrastructure Experts Meeting
Closing, next meeting in December 2022
During late 2019, the world saw the emergence of the Coronavirus Disease 2019 (COVID-19) pandemic caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). To date, more than 242 million infections have been observed in 223 countries/territories across the globe, with a staggering 4.9 million individuals losing their lives to COVID-19. South Africa has not remained unscathed by the pandemic, having more than 2.9 million COVID-19 cases and over 88 000 COVID-19 deaths. Significant inter-individual variability has been observed in host responses to COVID-19, with host genetic factors being proposed as a contributor to SARS-CoV-2 susceptibility and disease severity. This observation echoes that of the ancient disease of tuberculosis (TB), still a chief cause of death in many areas, with Africa home to most high burden countries. To find the genetic underpinnings of TB in South Africa, we have studied the complete susceptibility spectrum, from individuals with rare susceptibility mutations to common genetic variants in the general population. We used association studies, genome wide linkage studies and genome-wide association studies and incorporated population genetics and computational analyses to identify genes and loci that inform the variation in disease outcome between individuals. We are establishing a large COVID-19 cohort representative of South African populations, including individuals that have tested both positive and negative for SARS-CoV-2 to elucidate the underlying genetic markers that are associated with both infection and severe/critical COVID-19. This includes whole genome sequencing of younger individuals diagnosed with a rare Multisystem Inflammatory Syndrome in Children (MIS-C) that is directly related to a previous SARS-CoV-2 infection, which is often very mild or asymptomatic in children. Our findings could assist with the management of infectious diseases in resource-poor African settings, where an overburdened healthcare system in the past has not been able to accommodate infection surges.
Over the last year, considerable further progress has been made in using a rational design approach [1] guided by calculations with the Gaussian 09 software package on the Lengau cluster and an application of Michl’s perimeter model [1,2] to prepare novel Sn(IV) complexes of porphyrin dyes and porphyrin analogues that are suitable for use as photosensitizer dyes in photodynamic therapy [3-8]. Axial ligation results in low levels of aggregation, while the Sn(IV) ion promotes intersystem crossing resulting in relatively high singlet oxygen quantum yields through a heavy atom effect. Relatively low IC50 values have been obtained during in vitro studies against MCF-7 breast cancer cells [3-9]. Future directions on the use of the Gaussian 09 software package in the context of this research will be described.
References
[1] J. Mack, Chem. Rev. 2017, 117, 3444-3478.
[2] J. Michl, Tetrahedron 1984, 40, 3845-3934.
[3] B. Babu, J. Mack, T. Nyokong, Dalton Trans. 2020, 49, 9568-9573.
[4] B. Babu, E. Prinsloo, J. Mack, T. Nyokong, New J. Chem., 2020, 44, 11006-11012.
[5] B. Babu, J. Mack, T. Nyokong, Dalton Trans. 2020, 49, 15180-15183.
[6] B. Babu, J. Mack, T. Nyokong, Dalton Trans. 2021, 50, 2177-2182.
[7] B. Babu, J. Mack, T. Nyokong, New J. Chem., 2021, 45, 5654-5658.
[8] R.C. Soy, B. Babu, J. Mack, T. Nyokong, Dyes Pigments, 2021, 194, 109631.
[9] B. Babu, A. Sindelo, J. Mack, T. Nyokong, Dyes Pigments, 2021, 185A, 108886.
Over the past decade, given the higher number of data sources (e.g., Cloud applications, Internet of Things) and critical business demands, Big Data has transitioned from batch-oriented to real-time analytics. Stream storage systems, such as Apache Kafka, are well known for their increasing role in real-time Big Data analytics. For scalable stream data ingestion and processing, they logically split a data stream topic into multiple partitions. Stream storage systems keep multiple data stream copies to protect against data loss, implementing a stream partition as a replicated log. This architectural choice enables simplified development while trading cluster size against performance and the number of streams managed optimally. This paper introduces a shared virtual log-structured storage approach for improving cluster throughput when multiple producers and consumers write and consume data streams in parallel. Stream partitions are associated with shared replicated virtual logs transparently to the user, effectively separating the implementation of stream partitioning (and data ordering) from data replication (and durability). We implement the virtual log technique in the KerA stream storage system. When compared with Apache Kafka, KerA improves the cluster ingestion throughput (for replication factor three) by up to 4x when multiple producers write over hundreds of data streams. Furthermore, we present initial results from running experiments with KerA over InfiniBand and Singularity in an HPC cluster.
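For readers unfamiliar with the partitioned stream model that KerA shares with Apache Kafka, the following minimal sketch (broker address, topic name and key scheme are illustrative assumptions) shows keyed writes to a partitioned, replicated topic using the kafka-python client:

```python
# Sketch: keyed writes to a partitioned stream topic with kafka-python.
# Records with the same key land in the same partition, preserving per-key
# order; durability comes from topic-level replication (e.g. factor three,
# as in the experiments above).
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
for i in range(100):
    producer.send("sensor-stream",
                  key=str(i % 8).encode(),        # 8 partitions assumed
                  value=f"reading-{i}".encode())
producer.flush()

# Consumers in the same group read the partitions in parallel.
consumer = KafkaConsumer("sensor-stream",
                         bootstrap_servers="localhost:9092",
                         group_id="analytics",
                         auto_offset_reset="earliest")
```

KerA's contribution is to decouple what this model couples: stream partitioning (ordering) is implemented over shared virtual logs, while replication (durability) is handled separately and transparently to the user.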
As a recent I/O behaviour analysis [1] has revealed, High Performance Computing (HPC) storage systems may no longer be dominated by write I/O, challenging the long- and widely-held belief that HPC workloads are write-intensive. HPC applications are evolving to include not only traditional scale-up modelling and simulation bulk-synchronous workloads but also scale-out workloads [2] like artificial intelligence (AI), advanced and big data analytics [3], machine learning, deep learning [4], and complex multi-step workflows [5]-[7]. Exascale workflows are projected to include multiple different components from both scale-up and scale-out communities operating together to drive scientific discovery and innovation. With the often conflicting design choices between optimizing for write-intensive vs. read-intensive workloads, flexible I/O systems will be crucial to support these emerging hybrid workloads. Another performance aspect is the intensifying complexity of parallel file and storage systems in large-scale cluster environments. Storage system designs are advancing beyond the traditional two-tiered file system and archive model by introducing new tiers of temporary, fast storage close to the computing resources, with distinctly different performance characteristics. The changing landscape of emerging hybrid HPC workloads, along with the ever-increasing gap between compute and storage performance capabilities, reinforces the need for an in-depth understanding of extreme-scale parallel I/O and for rethinking existing data storage and management evaluation techniques and strategies.
In this talk, an overview and taxonomy [8] of the current state-of-the-art research on large-scale parallel I/O evaluation and characterization techniques in the context of HPC systems is presented. Traditionally, the process of understanding large-scale I/O behaviour and performance for specific applications or storage systems is performed iteratively and empirically in a closed-loop fashion, as outlined in Figure 1, and consists of three main phases: (1) Measurements and Statistics Collection, (2) Modelling and Prediction, and (3) Simulation. The overview and broad knowledge base provided by this talk is invaluable to the whole scientific community, as applications often observe poor performance due to bottlenecks in the parallel I/O and storage system. In addition, this talk aims to identify future research challenges with regard to emerging exascale computing systems and more complex hybrid HPC workloads.
Birds-of-a-Feather (BoF):
Research and development in the Modelling and Simulation (M&S) domain is taking place in disparate communities considering different models and phenomena across various length scales, with considerable advances in multiscale approaches and multidisciplinary optimisation. This BoF session seeks to bring together the M&S community across disciplines to engage on the National Science and Innovation System: Modelling and Simulation Domain. The discussions will guide and contribute to the development of a Modelling and Simulation roadmap.
Programme:
Welcome - Prof Regina Maphanga (3 min)
Background and Objectives for the M&S Program - Dr Jeffery Baloyi (15 min)
Proposed Business Plan - Prof Regina Maphanga (15 min)
Discussions - All (20 min)
Way forward - Dr Jeffery Baloyi (5 min)
Closure
In HPC, typical scientific codes often manage massive amounts of data using I/O middleware libraries such as HDF5, PnetCDF and ADIOS. These libraries support a variety of data structures and allow end users to optimize I/O performance by tuning configurations across multiple layers of the HPC I/O middleware stack. This work proposes SCTuner, an autotuner built within the I/O library itself to tune the configurations across I/O layers dynamically and agilely at application runtime. To this end, we introduce an I/O statistical benchmarking method to profile the behaviors of individual supercomputer I/O subsystems with varied configurations across I/O layers. Next, we use the benchmarking results as the built-in knowledge in SCTuner, implement an I/O pattern extractor, and plan to implement an online performance tuner as the SCTuner runtime. We conducted a benchmarking analysis on the Summit supercomputer and its GPFS file system Alpine. The preliminary results show that our method can effectively extract the consistent I/O behaviors of the target system under production load, building the base for I/O autotuning at application runtime.
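As a small illustration of the kind of per-layer knobs such a tuner manipulates (the values below are arbitrary examples, not SCTuner's tuned settings), HDF5 exposes chunking and compression parameters per dataset through h5py:

```python
# Sketch: two HDF5-layer parameters, chunk shape and compression, whose
# best values depend on the application's access pattern and the underlying
# parallel file system. Shapes and options here are illustrative only.
import h5py
import numpy as np

data = np.random.rand(4096, 4096)

with h5py.File("tuned.h5", "w") as f:
    f.create_dataset("x", data=data,
                     chunks=(512, 512),    # align chunks with I/O block size
                     compression="gzip",
                     compression_opts=4)   # trade CPU time for bytes written
```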
The gap between I/O performance and memory performance is decreasing due to the emergence of fast, low-latency storage such as NVMe and persistent memory (PMEM). However, traditional interfaces to storage (e.g., POSIX) do not fully leverage these new device characteristics, resulting in significant performance degradation. New interfaces to storage must be utilized in order to achieve the full potential of these low-latency technologies. To demonstrate this, we present pMEMCPY: a simple, lightweight, and portable I/O library for storing data in persistent memory. As opposed to traditional storage APIs, pMEMCPY uses memory mapping. We demonstrate that our approach is up to 2x faster than alternative interfaces to storage under real workloads.
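To illustrate the memory-mapping idea in the abstract above (a generic sketch only; pMEMCPY itself targets persistent memory with its own API, and the file path below is an assumption), I/O through a mapped region reduces to a plain memory copy:

```python
# Sketch: bypass read()/write() by copying bytes into a memory-mapped
# region and flushing it. On PMEM-backed files, this style of access is
# what lets libraries like pMEMCPY exploit low-latency devices.
import mmap
import os

payload = b"simulation checkpoint bytes"

fd = os.open("/tmp/pmem-demo.bin", os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, len(payload))       # size the backing region

with mmap.mmap(fd, len(payload)) as mm:
    mm[:] = payload                  # the "write" is a memory copy
    mm.flush()                       # persist the mapped range
os.close(fd)
```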
The increasing popularity of IoT devices allows us to communicate better, interact better, and ultimately build a new type of a scientific instrument that will allow us to explore our environment in ways that we could only dream about just a few years ago. This disruptive opportunity raises a new set of challenges: how should we manage the massive amounts of data and network traffic such instruments will eventually produce? What types of environments will be most suited to developing their full potential? What new security problems will arise? And finally: what are the best ways of leveraging intelligent edge to create new types of applications?
In a research area that creates a new deployment structure, such questions are too often approached only theoretically for lack of a realistic testbed: a scientific instrument that keeps pace with the emergent requirements of science and allows researchers to deploy, measure, and analyze relevant scientific hypotheses. To help create such an instrument, the NSF-funded Chameleon testbed, originally created to provide a platform for datacenter research, has now been extended to support experiments from cloud to edge.
In this talk, I will first describe Chameleon, a scientific instrument for computer science systems research originally created to allow exploration of research topics in cloud computing such as virtualization, programmable networking, or power management, as well as its recent extension to support experimentation at the edge. I will describe the testbed capabilities and operational practices required to provide a platform for experimentation in the edge to cloud continuum, and give examples of edge to cloud research and education projects our users created. I will also describe tools and services that Chameleon provides to improve experimental methodology and reproducibility of experiments in this environment, and illustrate how a common experimentation platform can enhance sharing and scientific productivity.
TBC
The Nectar Research Cloud has been supporting Australian national research since 2012. In the last 18 months the ARDC has doubled Nectar's national capacity and increased its investment in leading-edge technology. This has enabled ARDC Nectar to be flexible in responding to changing research demands and challenges during the COVID-19 pandemic.
The talk will highlight how we have approached significant growth in demand: completing a comprehensive refresh program whilst maintaining high levels of service provision with continual service improvement; designing and scaling new services for national benefit; and trialling and testing innovative technology at scale, ensuring we have a responsive and adaptive national research cloud for an increasingly digital research ecosystem.
ARDC has built a large community through our ARDC Research Platforms projects that are developing virtual research environments, enabling us to design, test and scale these new services with this extended community. The new and improved infrastructure, services and capabilities of Nectar will support the requirements of research platforms for image processing, machine learning, drones, genomics, ecosystems science, and sensitive data.
I will also discuss some case studies in which we were quickly able to provision urgently needed additional compute for Covid-19 modelling and responses to the pandemic in a time where supply chain issues and economic pressures made it more challenging.
Numerical modelling is an essential component of integrated ocean monitoring and, together with in situ observations and remote sensing products, is one of the critical tools informing stakeholders about highly variable regional and coastal environments. Operational ocean modelling becomes particularly valuable in South Africa when one considers the dynamic nature of its surrounding oceans, including but not limited to its proximity to one of the most energetic current systems in the world, the Agulhas Current. The Agulhas Current exhibits intense mesoscale activity in the form of eddy shedding events at the Agulhas Retroflection, the interaction of eddies from the Mozambique Channel with the Agulhas Current proper, and the meandering nature of the Agulhas Return Current. The unpredictability and intensity of the currents represent a direct risk to industrial, commercial and leisure activities; for example, accidental pollutants, such as oil spills, may advect onshore to the detriment of the coastal environment. Furthermore, the Benguela Current system is known to be sensitive to climate change and to climate variability such as the El Niño Southern Oscillation (ENSO). Understanding these dynamics and their function in the ecosystem, such as their association with harmful algal blooms (HABs), is an important part of resource management. Hence, the development of regional operational modelling capacity is of key interest. This program supports SOMISANA (Sustainable Ocean Modelling Initiative: a South African Approach), whose vision is to facilitate the local development and sustainability of an operational ocean current forecast system for the South African exclusive economic zone, and to do so in a transformative fashion. To this end, its two immediate goals are: (1) to develop local numerical ocean modelling capacity via student supervision, and (2) to develop high-resolution 'hindcast' numerical models optimized for South Africa's shelf region, as well as bay-scale forecast models downscaled from freely available global products that poorly resolve the processes in these regions. These objectives will not only lay the foundation for the development of South Africa's operational ocean forecasting system, they will also ensure our contribution to UN Decade of Ocean Science endorsed projects: CoastPredict and ForeSea.
Starke Ayres is one of Africa's largest seed companies, specializing in the development of superior vegetable seed varieties. Bioinformatics forms a crucial part of plant breeding as it helps researchers gain a deeper understanding of the genetics of their germplasm. To do this, we employ bioinformatics tools and software to perform computationally demanding processes such as read alignment to reference genomes, variant discovery, genome assembly and phylogenetics, among others. As bioinformatics must deal with large amounts of sequence data, computational resources often become a limiting factor. We have been able to use the CHPC's massive computational power to accelerate molecular marker discovery for several traits. These projects would otherwise have been extremely slow or impossible had we relied on desktop computers with limited memory, processing power and storage. The vast collection of core bioinformatics tools and software that is preinstalled and configured on the CHPC saves a lot of time that would otherwise be spent configuring and installing these software packages on our local machines. In addition, the parallelization capabilities of the CHPC using MPI have made data processing quicker and more efficient.
Prime Number algorithm for a massively parallel processor-in-memory machine
Doing the Sieve of Eratosthenes on a 1024-way bit-serial processor implemented on an FPGA. The bit-serial processors are very simple, each backed with 512 bits of RAM. All processors perform the same operation, subject to individual processor enables, encompassing boolean logic, and can read and write their own 512-bit RAM. The 512 bits are allocated among variables of arbitrary width, perhaps 32 bits. By a succession of boolean operations, addition, subtraction, multiplication and division are coded into routines. All these operations happen in parallel. Algorithm design can be difficult, as all program branch paths must be traversed.
A simple sieve on a regular processor takes time proportional to the total number of candidate primes tested and the number of factors used in the divisions. I present a sieve algorithm whose time is proportional only to the number of factors.
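For contrast with the bit-serial design, here is a minimal sequential sketch of the classic sieve (illustrative only): the inner strike-out loop is the step that, on the 1024-way machine, all processors perform in lock-step under their individual enables, which is why the parallel version's time scales with the number of factors rather than the number of candidates.

```python
# Sketch: classic Sieve of Eratosthenes. One boolean flag per candidate
# plays the role of one bit-serial processor's "is prime" variable.
def sieve(limit):
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for factor in range(2, int(limit ** 0.5) + 1):
        if is_prime[factor]:
            # On the parallel machine this marking happens for all
            # multiples simultaneously via processor enables.
            for multiple in range(factor * factor, limit + 1, factor):
                is_prime[multiple] = False
    return [n for n, flag in enumerate(is_prime) if flag]

print(sieve(100))   # [2, 3, 5, 7, 11, ..., 97]
```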
The large sample sizes of modern genetic datasets have necessitated the development of high-throughput accelerators in order to allow bioinformatics research to be performed in a reasonable amount of time. Although the inherently parallel nature of FPGAs makes them well suited to accelerating high-throughput workloads, they are not commonly employed as bioinformatics accelerators (in lieu of CPUs and/or GPUs) due to their high cost and the fact that developing FPGA-accelerated algorithms is a more complex and time-consuming process than the development of software for CPUs or GPUs. The availability of cloud-based FPGA instances, however, has made powerful FPGAs accessible to bioinformatics labs, and the continuous improvement of FPGA design tools has reduced much of the complexity of FPGA development.
This work determines the efficacy of FPGAs when applied to the acceleration of GWAS permutation testing - a computationally expensive bioinformatics algorithm that involves the repeated multiplication of a constant matrix with a changing vector - by presenting the design and evaluation of an FPGA-based accelerator designed to run on an AWS EC2 FPGA instance.
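The computational core being accelerated can be sketched as follows (a simplified illustration with arbitrary sizes and a basic score statistic, not the accelerator's exact formulation):

```python
# Sketch: GWAS permutation testing as repeated products of a constant
# genotype matrix with permuted phenotype vectors.
import numpy as np

rng = np.random.default_rng(0)
G = rng.integers(0, 3, size=(10_000, 2_000)).astype(float)  # SNPs x samples
y = rng.normal(size=2_000)                                  # phenotype

n_perm = 1_000
observed = G @ y                     # constant matrix times original vector
exceed = np.zeros(G.shape[0])

for _ in range(n_perm):
    y_perm = rng.permutation(y)      # the "changing vector"
    exceed += np.abs(G @ y_perm) >= np.abs(observed)

p_values = (exceed + 1) / (n_perm + 1)   # empirical permutation p-values
```

Because every permutation reuses the same matrix G, the workload maps naturally onto a fixed FPGA datapath streamed with fresh vectors.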
This work shows that the FPGA accelerator is orders of magnitude faster than a popular CPU-based GWAS tool, without an apparent loss of accuracy. Furthermore, it demonstrates that FPGA acceleration enables the handling of workloads which are almost infeasible for current CPU-based methods. This work therefore proves that FPGAs can effectively accelerate high-throughput bioinformatics workloads at relatively low cost.
Microbiomes mediate crucial ecosystem processes in terrestrial and marine environments, yet data regarding their precise responses to climate change remains limited. This knowledge deficit is especially true for extreme Antarctic environments where the importance of microbial communities is thought to be more pronounced due to the depauperate and oligotrophic nature of such systems. Here, I will discuss our work in understudied deserts (such as the McMurdo Dry Valleys) and oceans (such as the Southern Ocean) ecosystems. I will discuss key insights regarding the diversity and functional traits of microbiomes in these regions. I will also highlight how the application of mesocosms (such as ocean acidification experiments) has allowed us to predict the response of microbiomes to anthropogenic change. I will conclude by discussing key questions for future research in Antarctic environments.
Intermolecular interactions play a fundamentally important role in the properties of solid materials. For instance, molecules ("guests") are taken up into porous materials ("hosts") as a result of the interactions between these species, while the manner in which they interact influences the sorption ability of the porous material. Several examples from our work will be used to show that calculations performed using the CHPC's computational facility allow us to explain the role that intermolecular interactions play in the unusual sorption properties of various porous compounds. For instance, the interactions between carbon dioxide and a host porous metal-organic framework yield anomalous sorption isotherms that can be explained by the electrostatic interactions between host and guest.[1] This can be extended to study mixtures of gases, where CO2 and N2 interact differently with the frameworks, leading to non-ideal sorption behaviour that influences the ability of a porous compound to separate CO2 from N2.[2] Molecular dynamics calculations, in combination with simulation of sorption isotherms using the BioVia Materials Studio suite available through the CHPC, can hence be used to identify materials that will yield superior gas separation compounds.
[1] Bezuidenhout, C. X.; Smith, V. J.; Bhatt, P. M.; Esterhuysen, C.; Barbour, L. J. Angew. Chem. Int. Ed. 2015, 54, 2079–2083.
[2] Costandius, A. J.; Barbour, L. J.; Esterhuysen, C. In preparation.
Birds-of-a-Feather (BoF)
The NVIDIA Deep Learning Institute (DLI) offers resources for diverse learning needs giving individuals, teams, organizations, educators, and students what they need to advance their knowledge in accelerated computing, AI, accelerated data science, graphics and simulation, and more. Many DLI programs specifically target academia with the goal of providing faculty with free training for themselves and their students. This includes faculty development workshops, a certification program for faculty to deliver NVIDIA training material on NVIDIA hardware, and teaching kits with worked problems, access to online training, and credits for cloud GPU resources. Other NVIDIA programs for academia include a hardware grant program, graduate fellowships, and other funding opportunities. This session will present the programs NVIDIA has to offer academia and how they can support both instruction and research.
Target Audience: instructional and research faculty in accelerated computing, AI, and data science
We report progress in our research aiming to detect COVID-19 from smartphone audio recordings. In our previous work we reported that it is possible to discriminate between recordings of COVID-19 positive coughs and coughs by COVID-19 negative or healthy individuals using machine learning algorithms. Since the available datasets of COVID-19 coughs are small, the classifiers exhibited a fairly high variance. In subsequent work we have investigated the effectiveness of transfer learning and bottleneck feature extraction for audio COVID-19 classification, in this case performing experiments for three sound classes: cough, breath and speech. For pre-training, we use datasets that contain recordings of coughing, sneezing, speech and other noises, but do not contain COVID-19 labels. Convolutional neural network (CNN), long short-term memory (LSTM) and Resnet50 architectures were considered. The pre-trained networks are subsequently either fine-tuned using smaller datasets of coughing with COVID-19 labels in a process of transfer learning, or used as bottleneck feature extractors. Results show that a Resnet50 classifier trained by this transfer learning process delivers optimal or near-optimal performance across all datasets, achieving areas under the receiver operating characteristic curve (ROC AUC) of 0.98, 0.94 and 0.92 for the three sound classes (cough, breath and speech) respectively. This indicates that coughs carry the strongest COVID-19 signature, followed by breath and speech. Our results also show that applying transfer learning to capitalise on the larger datasets without COVID-19 labels leads not only to improved performance, but also strongly reduces the standard deviation of the classifier AUCs measured on the test sets during cross-validation, indicating better generalisation. We conclude that transfer learning and bottleneck feature extraction can improve COVID-19 cough, breath and speech audio classification, yielding automatic classifiers with higher accuracy. Since audio classification is non-contact, does not require specialist medical expertise or laboratory facilities, and can be deployed on inexpensive consumer hardware, it represents an attractive method of screening.
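A hedged sketch of the bottleneck-feature variant described above (input shape, pooling, the downstream classifier, and the use of ImageNet weights in place of the authors' audio pre-training are all illustrative assumptions):

```python
# Sketch: extract bottleneck features from a frozen ResNet50 backbone
# applied to spectrogram "images", then train a small classifier on the
# labelled COVID-19 data. Random arrays stand in for real spectrograms,
# and ImageNet weights stand in for the paper's audio pre-training.
import numpy as np
from tensorflow.keras.applications import ResNet50
from sklearn.linear_model import LogisticRegression

backbone = ResNet50(weights="imagenet", include_top=False,
                    pooling="avg", input_shape=(224, 224, 3))

spectrograms = np.random.rand(32, 224, 224, 3)   # stand-in cough spectrograms
labels = np.random.randint(0, 2, size=32)        # stand-in COVID-19 labels

features = backbone.predict(spectrograms)        # bottleneck features (32, 2048)
clf = LogisticRegression(max_iter=1000).fit(features, labels)
```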
During the pyrometallurgical production of industrial commodities such as ferromanganese and ferrochromium in electric smelting furnaces, immiscible molten slag and metal phases are tapped from the unit at regular intervals. This process involves opening a dedicated channel in the furnace sidewall (the tap-hole) and allowing the contents of the vessel to drain through it. After exiting the tap-hole, the stream of molten material is directed along open launder channels and empties into one or more storage ladles. During this process intermixing of slag and metal phases often occurs, and if not carefully managed, can result in significant metal being lost to the waste slag by entrainment.
In this presentation we show the results of a computational fluid dynamics study of the multiphase free surface fluid flow in tapping ladles, and examine how the application of high-performance computing has greatly expanded our ability to explore the unusual and challenging parameter spaces of problems in pyrometallurgy. Access to facilities such as CHPC allows us to build deeper intuition and fundamental understanding of the complex fluid flow phenomena occurring during ladle tapping, and is able to guide us to practical engineering solutions for mitigating losses due to phase mixing.
BoF: Online Education: Surprises and Insights
For some, the move to online education and training took place many years ago, while for others, the need to move online was thrust upon them due to the global pandemic. No matter what the circumstances or motivations for the move, the community of online educators and trainers have all encountered challenges, experienced surprises, and gained insights from their journeys into the online education space. After migrating, some have met with varying levels of success, while others still struggle to establish a reliable and effective online programme.
This panel session will discuss some of the challenges in moving to an online education platform. We will hear personal experiences from established online trainers from various regions about how they have implemented their programmes, what surprises they experienced along the way, and what insights they can share with others trying to move to an online education model.
Speakers:
Moderator:
Throughout the rapid evolution of HPC driven by technology advances reflected by Moore’s Law, processor core architecture has dominated computer design across ten orders (or more) of magnitude in delivered performance. But with the achievement of nanoscale device technology, exponential gain has stagnated demanding alternative innovative strategies. Concurrently, workloads have pivoted from linear algebra to artificial intelligence (AI) with emphasis on supervised machine learning (ML) applications. To address these combined challenges, transformative architectures are being explored that are memory-centric, embody data-oriented semantics, and optimize for latency and bandwidth rather than FPU utilization. This closing Keynote address will describe a class of non von Neumann architectures that will accelerate dynamic graph processing across highly scalable computing systems beyond Exascale through to the end of this decade. A brief discussion of early attempts of memory-centric computing such as SIMD and PIM will motivate revolutionary concepts of the future. Questions from the audience will be welcome assuming remote communication technology permits.
Please join the Awards Ceremony and Closing Function of the 2021 CHPC National Conference.
The program in the session is as follows:
Welcome Address: Dr Daniel Adams (DSI)
Student Micro-Talks Awards - Dr Daniel Moeketsi (CHPC)
DIRISA Student Datathon Challenge - Ms Nobubele Shozi (DIRISA)
Cyber Security Competition - Dr Renier van Heerden (SANReN)
Student Cluster Competition - Mr Nyameko Lisa (CHPC)
Vote of Thanks and Closing