ACM HPDC 2021

The 30th International Symposium on High-Performance Parallel and Distributed Computing

Stockholm, Sweden, 21–25 June 2021

Keynote Speakers

Gustavo Alonso

Gustavo Alonso, ETH Zürich

Title: Hardware specialization for distributed computing


Abstract:

Several trends in the IT industry are driving increasing specialization of the hardware layers. On the one hand, demanding workloads, large data volumes, diversity in data types, and similar factors are making general-purpose computing too inefficient. On the other hand, cloud computing and its economies of scale allow vendors to invest in specialized hardware for particular tasks that would otherwise be too expensive or consume resources needed elsewhere. In this talk I will discuss the shift towards hardware acceleration and show with several examples why specialized systems are here to stay and are likely to dominate the computing landscape for years to come. I will also discuss Enzian, an open research platform developed at ETH to enable the exploration of hardware acceleration, and present some preliminary results achieved with it.

Bio:

Gustavo Alonso is a professor in the Department of Computer Science at ETH Zürich, where he is a member of the Systems Group. His research interests include data management, databases, distributed systems, cloud computing, and hardware acceleration. Gustavo is an ACM Fellow and an IEEE Fellow, as well as a Distinguished Alumnus of the Department of Computer Science of UC Santa Barbara.


Maria Girone

Maria Girone, CERN

Title: Computing Challenges for High Energy Physics


Abstract:

High-energy physics faces unprecedented computing challenges in preparation for the high-luminosity phase of the Large Hadron Collider, known as the HL-LHC. The complexity of particle-collision events will increase together with the data collection rate, substantially outstripping the gains expected from technology evolution. The LHC experiments, through the Worldwide LHC Computing Grid (WLCG), operate a distributed computing infrastructure of about 170 sites in more than 40 countries. This infrastructure has successfully exploited the exabyte of data collected and processed during the first 10 years of the program. In the HL-LHC era, each experiment will collect an exabyte of data annually, and additional computing resources will be needed. The efficient use of HPC facilities may be an important opportunity to address the anticipated resource gap. In this talk, I will discuss the future computing needs of high-energy physics and how these can be met by combining our dedicated distributed computing infrastructure with large-scale HPC sites. As a community, we have identified common challenges for integrating these large facilities into our computing ecosystem. I will also discuss the current progress in addressing those challenges, focusing on software development for heterogeneous architectures, data management at scale, supporting services, and opportunities for collaboration.

Bio:

Maria Girone has a PhD in particle physics and extensive experience in computing for high-energy physics experiments, having worked in scientific computing since 2002. She worked for many years on the development and deployment of services and tools for the global distributed computing grid WLCG and founded the WLCG operations coordination team. Throughout 2014 and 2015, Maria was the software and computing coordinator for the CMS experiment at the LHC. As CTO of CERN openlab, Maria manages its overall technical strategy and its R&D plans in computing architectures, HPC, and AI, working with the LHC experiments on their software and computing upgrade programs and promoting opportunities for collaboration with industry. Since July 2020, Maria has coordinated for CERN the HPC collaboration with SKA, GÉANT, and PRACE to tackle challenges related to the use of high-performance computing for large data-intensive science.