Serge Petiton

Challenges for Extreme Scale Computational Science and Machine Learning

Exascale machines are now available, based on several different arithmetics (from 64-bit down to 16- and 8-bit formats, including mixed-precision versions and some that are no longer IEEE compliant) and on different architectures (with network-on-chip processors and/or accelerators). Brain-scale applications, from machine learning and AI for example, manipulate huge graphs that lead to very sparse non-symmetric linear algebra problems, whose performance is closer to the HPCG benchmark than to LINPACK. Moreover, those supercomputers were designed primarily for computational science, mainly numerical simulation, not for machine learning and AI. New applications maturing after the convergence of big data and HPC toward machine learning and AI will probably drive post-exascale computing that redefines some programming and application-development paradigms. End-users and scientists face many challenges associated with these evolutions and with the increasing size of the data; the convergence of data science (big data) and computational science to develop new applications raises important challenges.
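To make the mixed-precision point concrete: one classic pattern behind such mixed formats is iterative refinement, where the expensive solve runs in low precision and the correction loop in high precision. The sketch below is my own illustration, not material from the talk; it uses a small dense NumPy system for brevity, whereas real exascale kernels rely on tuned vendor libraries and may push down to 16- or 8-bit formats.

    import numpy as np

    def mixed_precision_solve(A, b, iters=5):
        # Illustrative iterative refinement: solve in float32, correct in float64.
        A32 = A.astype(np.float32)
        x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
        for _ in range(iters):
            r = b - A @ x                           # residual in full precision
            d = np.linalg.solve(A32, r.astype(np.float32))
            x += d.astype(np.float64)               # low-precision correction
        return x

    # Example on a well-conditioned random system (arbitrary test data).
    rng = np.random.default_rng(0)
    A = rng.random((100, 100)) + 100 * np.eye(100)
    b = rng.random(100)
    x = mixed_precision_solve(A, b)

Each refinement step costs only a low-precision solve plus a high-precision residual, which is the trade-off that makes reduced-precision hardware attractive for numerical workloads.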

In this talk, after a short description of some recent evolutions that have had important impacts on our results, I review some sparse linear algebra experiments for iterative methods and/or machine learning. I present results obtained on Fugaku, the #1 supercomputer on the HPCG list, and on Tianhe-2, for linear algebra problems such as sequences of sparse matrix products and the PageRank method, with respect to the sparsity and the size of the matrices on the one hand, and to the number of processes and nodes on the other (a minimal sketch of the PageRank kernel follows). Then, I introduce two open-source generators of very large data, allowing several methods to be evaluated using very large graphs/sparse matrices as data sets for several application evaluations.
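For readers unfamiliar with the kernel, PageRank reduces to a power iteration whose inner step is exactly the sparse matrix-vector product discussed above. The SciPy sketch below is illustrative only, not the code used in the experiments; the damping factor, tolerance, and random test graph are arbitrary choices, and dangling nodes are handled only crudely.

    import numpy as np
    import scipy.sparse as sp

    def pagerank(A, damping=0.85, tol=1e-8, max_iter=100):
        # Power iteration: each step is one sparse matrix-vector product,
        # the memory-bound kernel whose behavior tracks HPCG, not LINPACK.
        n = A.shape[0]
        out_deg = np.asarray(A.sum(axis=1)).ravel()
        out_deg[out_deg == 0] = 1.0                   # crude fix for dangling nodes
        P = (sp.diags(1.0 / out_deg) @ A).T.tocsr()   # column-stochastic transition
        x = np.full(n, 1.0 / n)
        for _ in range(max_iter):
            x_new = damping * (P @ x) + (1.0 - damping) / n
            if np.abs(x_new - x).sum() < tol:
                break
            x = x_new
        return x_new

    # A small random sparse graph standing in for a very large generated one.
    A = sp.random(10_000, 10_000, density=1e-3, format="csr", random_state=0)
    A.data[:] = 1.0
    ranks = pagerank(A)

The small sp.random graph stands in for the kind of very large generated data sets mentioned above; its density and size directly set the cost of each iteration.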

As a conclusion, I discuss the potential evolutions we will face in efficiently combining computational science, data science, and machine learning on future, faster supercomputers, based on the workshop "HPC Challenges for New Extreme Scale Applications" that I initiated and co-organized this spring in Paris.
 



Biography

Serge G. Petiton received the B.S. degree in mathematics, the Ph.D. degree in computer science, and the "Habilitation à diriger des recherches" from Sorbonne University, Pierre et Marie Curie Campus. He was a post-doc, registered at the graduate school, and a junior research scientist at Yale University in 1989-1990. He was a researcher at the "Site Experimental en Hyperparallelisme" (supported by CNRS, CEA, and the French DoD) from 1991 to 1994, and during the same period was an affiliate research scientist at Yale and a visiting research fellow in several US laboratories (NASA/ICASE, AHPCRC, ...). Since 1994, Serge G. Petiton has been a tenured Full Professor at the University of Lille in France, and he held a CNRS senior position at the "Maison de la Simulation" in Paris-Saclay from 2013 to 2021. He was an awarded visiting Professor at the Chinese Academy of Sciences in 2016. He has been P.I. of several international projects with Japan and Germany (ANR, CNRS, SPPEXA, ...) and has had many industrial collaborations (TOTAL, CEA, Airbus, Nvidia, Intel, ...). Serge G. Petiton has been scientific director of more than 30 Ph.D.s and has authored more than 150 articles in international journals, books, and conference proceedings. His main current research interests are in "Parallel and Distributed Computing", "Dense and Sparse Linear Algebra", "Languages and Programming Paradigms for Extreme Scientific Computing", and "Machine Learning/Transformer methods".