Algorithms/Applications keynote: Computing, Data and COVID-19. Katherine A. Yelick
June 30th, 2020 (15:15 - 16:15 CEST)
Abstract: COVID-19 has disrupted every aspect of our lives, and never before has there been such a need to build interdisciplinary teams to address the most critical data and computing problems associated with the pandemic: advanced statistical techniques to interpret data, simulation to understand the impact of interventions, and machine learning to identify possible medical therapeutics. High-performance computing has long embraced interdisciplinary work, and in this talk I will give a broad overview of the role that data and computing are playing in the COVID-19 rapid response, along with some of the underlying challenges for computational modelling, analysis, and learning.
Bio: Kathy Yelick is the Robert S. Pepper Distinguished Professor of Electrical Engineering and Computer Sciences and the Associate Dean for Research in the Division of Computing, Data Science and Society (CDSS) at UC Berkeley. She is also a Senior Advisor on Computing at Lawrence Berkeley National Laboratory, where she was Director of NERSC from 2008 to 2012 and led the Computing Sciences Area from 2010 through 2019. Yelick is a member of the National Academy of Engineering and the American Academy of Arts and Sciences and a Fellow of ACM and AAAS.
Architecture keynote: Post-Moore Server Architecture. Babak Falsafi
July 1st, 2020 (15:00 - 16:00 CEST)
Abstract: Cloud providers are building infrastructure at unprecedented speed, laying the foundation for global IT services and cost-effective containerized apps. Unfortunately, the silicon technologies we have relied on for the past several decades, which drove the exponential growth in IT, have slowed down in scaling and will soon come to a halt, resulting in diminishing returns in digital platform scalability in the post-Moore era of computing. Meanwhile, the basic architecture of a modern server blade still dates back to the CPU-centric desktop PC of the 1980s: it manages memory at hardware speeds but accesses the network, storage, and now discrete accelerators through the OS, legacy software stacks, and peripheral interfaces. This talk will make the case for a clean-slate co-design of server software and hardware for the post-Moore era.
Bio: Babak Falsafi is a Professor in the School of Computer and Communication Sciences and the founding director of the EcoCloud research center at EPFL. He has worked on server architecture since the 1990s, with contributions impacting industrial products, including a novel shared-memory architecture in the first NUMA machines (WildFire/WildCat) by Sun Microsystems, snoop filtering and memory streaming in IBM BlueGene and ARM cores, and server evaluation methodologies in use by AMD, HPE, and Google PerfKit. His recent work on scale-out server processor design laid the foundation for the first generation of Cavium ThunderX. He is a Fellow of ACM and IEEE.
Compilers and languages keynote: Optimizing Supercompilers for Supercomputers. Michael Wolfe
July 2nd, 2020 (15:00 - 16:00 CEST)
Abstract: Between a problem statement and its solution as a computer simulation lie several steps: choosing a method, writing a program, compiling to machine code, making runtime decisions, and hardware execution. Here we will look at the middle three decision points. What decisions should be and must be left to the programmer? What decisions should be and must be relegated to a compiler? What decisions should be and must be left until runtime? Given my background, I will focus a great deal on the importance of compilers in supercomputing, and compare and contrast the advantages and impacts of compiler solutions to the "Performance + Portability + Productivity" problem with those of language and runtime solutions.
Bio: Michael Wolfe has worked on languages and compilers for parallel computing since graduate school at the University of Illinois in the 1970s. Along the way, he co-founded a compiler company, tried his hand as an academic professor, and eventually returned to commercial compiler development. He now spends most of his time as the technical lead on a team that develops and improves compilers for highly parallel computing, specifically for NVIDIA GPU accelerators.