Alex Kogan

Improving Inference Performance of Machine Learning with the Divide-and-Conquer Principle

Many popular machine learning models scale poorly when deployed on CPUs. In this talk I will discuss the reasons behind this and propose a simple yet effective approach, based on the well-known Divide-and-Conquer Principle, to tackle this problem of great practical importance. Given an inference job, instead of using all available computing resources (i.e., CPU cores) to run it, the idea is to break the job into independent parts that can be executed in parallel, each allocated a number of cores matched to its expected computational cost. I will describe the implementation of this idea in the popular OnnxRuntime framework and discuss evaluation results of its effectiveness on several use cases, including the well-known models for optical character recognition (PaddleOCR) and natural language processing (BERT).
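To make the idea concrete, below is a minimal Python sketch of the Divide-and-Conquer Principle expressed at the application level with OnnxRuntime's public Python API; the talk covers an implementation inside the framework itself, which this does not reproduce. The model path, the input name "input", the batch split, and the core budgets are all illustrative assumptions.

# Sketch only: splits an inference job into independent parts and runs
# them in parallel, each part restricted to its own core budget.
import concurrent.futures
import numpy as np
import onnxruntime as ort

def make_session(model_path, num_threads):
    # Cap this session's intra-op parallelism at its core budget
    # instead of letting it claim every available core.
    opts = ort.SessionOptions()
    opts.intra_op_num_threads = num_threads
    opts.inter_op_num_threads = 1
    return ort.InferenceSession(model_path, sess_options=opts)

def run_divided(model_path, input_parts, core_budgets):
    # One session per independent part, each with a number of cores
    # proportional to that part's expected computational cost.
    sessions = [make_session(model_path, c) for c in core_budgets]
    # session.run releases the GIL, so a thread pool yields real parallelism.
    with concurrent.futures.ThreadPoolExecutor(len(sessions)) as pool:
        futures = [
            pool.submit(s.run, None, {"input": part})
            for s, part in zip(sessions, input_parts)
        ]
        return [f.result() for f in futures]

# Hypothetical usage: split an 8-core machine between a heavy and a light part.
# parts = [np.random.rand(6, 3, 224, 224).astype(np.float32),
#          np.random.rand(2, 3, 224, 224).astype(np.float32)]
# outputs = run_divided("model.onnx", parts, core_budgets=[6, 2])

The key design point this sketch tries to capture is that core budgets are sized per part, rather than handing the whole machine to a single inference call.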


Biography

Alex Kogan is a Principal Member of Technical Staff at Oracle Labs. His research interests are in the systems aspects of machine learning as well as in parallel and distributed computing. In particular, he is looking into ways to optimize the performance of modern systems through the efficient utilization of available resources, such as cores, caches, (non-volatile and volatile) memory, accelerators, and networks. Alex received his Ph.D. (2012), M.Sc. (2008), and B.A. (summa cum laude, 2002) from the Department of Computer Science, Technion, Israel.