Scaling scikit-learn: introducing new sets of computational routines
07-14, 11:20–11:50 (Europe/Dublin), Liffey Hall 2

For more than 10 years, scikit-learn has been bringing machine learning and data science methods to the world. Since its inception, the library has always aimed to deliver quality implementations, focusing on a clear and accessible code base built on top of the PyData ecosystem.

This talk explains the recent, ongoing work of the scikit-learn developers to boost the native performance of the library.


scikit-learn is an open-source scientific library for machine learning in Python.

Since its first release in 2010, the library has gained a lot of traction in education, research and the wider society, and has set several standards for API design in ML software. Nowadays scikit-learn is one of the most used scientific libraries in the world for data analysis. It provides reference implementations of many methods and algorithms to a userbase of millions.

With the renewed interest in machine-learning-based methods in recent years, other libraries providing efficient and highly optimised methods (such as LightGBM and XGBoost for gradient-boosting methods) have emerged. These libraries have met with similar success and have made performance and computational efficiency top priorities.

In this talk, we will present the recent work carried out by the scikit-learn core developer team to improve the library's native performance.

This talk will cover elements of the PyData ecosystem and the CPython interpreter, with an emphasis on their impact on performance. We will then examine computationally expensive patterns before presenting the technical choices behind the new routine implementations, keeping the project's requirements in mind. Finally, we will take a quick look at future work and collaborations on hardware-specialised computational routines.
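To make the kind of "computationally expensive pattern" mentioned above concrete, here is a minimal, hypothetical NumPy sketch of a pattern common in scikit-learn estimators: finding each point's nearest neighbour among a set of candidates. Materialising the full pairwise distance matrix at once can exhaust memory, so the computation is chunked; the function name `pairwise_argmin` and the `chunk_size` parameter are illustrative assumptions, not scikit-learn's actual API.

```python
import numpy as np

def pairwise_argmin(X, Y, chunk_size=256):
    """For each row of X, return the index of its nearest row in Y.

    Illustrative sketch only (not scikit-learn's implementation):
    processing X in chunks bounds the memory needed for the
    intermediate distance matrix while staying vectorised.
    """
    argmins = np.empty(X.shape[0], dtype=np.intp)
    for start in range(0, X.shape[0], chunk_size):
        chunk = X[start:start + chunk_size]
        # Squared Euclidean distances for this chunk of X against all of Y.
        dists = ((chunk[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
        argmins[start:start + chunk_size] = dists.argmin(axis=1)
    return argmins

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
Y = rng.normal(size=(500, 8))
nearest = pairwise_argmin(X, Y)
```

Even this chunked Python version pays interpreter and temporary-allocation overhead on every chunk, which is one motivation for moving such reductions into native, lower-level routines.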


Expected audience expertise: Domain

Some

Expected audience expertise: Python

Some

Abstract as a tweet

Scaling scikit-learn native performance: introducing new sets of computational routines

I am mainly interested in computational, algorithmic and mathematical methods. I first started contributing to open source in 2017, and since then my contributions have focused on scientific software.

Since April 2021, I have worked at Inria as a Research Software Engineer, mainly on improving scikit-learn's native performance. I became one of scikit-learn's maintainers in October 2021.