Serge Petiton
Challenges for Extreme Scale Computational Science and Machine Learning
Exascale machines are now available, based on several different arithmetics (from 64-bit down to 16- and 8-bit formats, including mixed-precision versions and some that are no longer IEEE compliant) and on different architectures (with network-on-chip processors and/or accelerators). Brain-scale applications, from machine learning and AI for example, manipulate huge graphs that lead to very sparse non-symmetric linear algebra problems, resulting in performance closer to the HPCG benchmark than to the LINPACK one. Moreover, these supercomputers were designed primarily for computational science, mainly numerical simulations, not for machine learning and AI. New applications maturing after the convergence of big data and HPC toward machine learning and AI will probably drive post-exascale computing that redefines some programming and application development paradigms. End-users and scientists face many challenges associated with these evolutions and with the increasing size of the data. The convergence of data science (big data) and computational science to develop new applications raises important challenges.
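As a rough illustration of why these arithmetic choices matter (a sketch of mine, not material from the talk; the array size and seed are arbitrary), the following NumPy snippet contrasts a 64-bit reduction with a pure 16-bit one and with a mixed-precision variant that stores data in 16 bits but accumulates in 32 bits:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.random(100_000)               # illustrative data, uniform in [0, 1)

s64 = x.sum()                         # 64-bit reference
x16 = x.astype(np.float16)            # 16-bit storage
s16 = x16.sum(dtype=np.float16)       # 16-bit accumulation: coarse rounding near 5e4
s_mixed = x16.sum(dtype=np.float32)   # mixed precision: 16-bit data, 32-bit accumulator

# Expect the pure fp16 accumulator to show a visibly larger error than the
# mixed-precision variant, since fp16 spacing near 50,000 is already 32.
print(f"fp64: {s64:.3f}  fp16: {float(s16):.3f}  mixed: {float(s_mixed):.3f}")
```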
In this talk, after a short description of some recent evolutions that have had an important impact on our results, I review some sparse linear algebra experiments for iterative methods and/or machine learning. I present results obtained on Fugaku, the #1 supercomputer on the HPCG list, and on Tianhe-2, for linear algebra problems such as sequences of sparse matrix products and the PageRank method, with respect to the sparsity and size of the matrices on the one hand, and to the number of processes and nodes on the other. Then, I introduce two open-source generators of very large data, which allow several methods to be evaluated using very large sparse graph matrices as data sets.
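To make the kind of kernel involved concrete, here is a minimal sketch, assuming SciPy and a randomly generated sparse matrix as a stand-in for a real graph (the size, density, and tolerance are illustrative, not those of the reported experiments): PageRank as power iteration, where each step is one sparse matrix-vector product.

```python
import numpy as np
import scipy.sparse as sp

n, density, alpha = 100_000, 1e-4, 0.85

# Random non-symmetric sparse matrix standing in for a huge graph.
A = sp.random(n, n, density=density, format="csr", random_state=0)

# Column-normalize into a stochastic matrix; track dangling (all-zero) columns.
col_sums = np.asarray(A.sum(axis=0)).ravel()
dangling = col_sums == 0.0
col_sums[dangling] = 1.0              # avoid division by zero; mass handled below
P = A @ sp.diags(1.0 / col_sums)

# Power iteration: each step is a single sparse matrix-vector product.
x = np.full(n, 1.0 / n)
for _ in range(100):
    x_new = alpha * (P @ x)
    # Redistribute dangling-node mass and teleportation uniformly.
    x_new += (alpha * x[dangling].sum() + (1.0 - alpha)) / n
    if np.abs(x_new - x).sum() < 1e-12:
        break
    x = x_new
```

At extreme scale the matrix is distributed across nodes, and this repeated irregular sparse matrix-vector product, not a dense factorization, dominates, which is why such workloads track HPCG performance rather than LINPACK.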
As a conclusion, I discuss the potential evolutions we face to efficiently combine computational science, data science, and machine learning on future, faster supercomputers, based on the workshop "HPC Challenges for New Extreme Scale Applications" that I initiated and co-organized this spring in Paris.