MLtraq: Track your ML/AI experiments at hyperspeed
2024-07-12, Terrace 2A

With every second spent waiting on slow initializations, and with obscure delays hindering high-frequency logging, further limited by what you can track, an experiment dies. Wouldn't it be nice to load and start tracking in nearly zero time? What if we could track more, and faster, even handling arbitrarily large and complex Python objects with ease?

In this talk, I will present the results of comparative benchmarks covering Weights & Biases, MLflow, FastTrackML, Neptune, Aim, Comet, and MLtraq. You will learn their strengths and weaknesses, what makes them slow or fast, and what sets MLtraq apart, making it 100x faster and capable of handling tens of thousands of experiments.
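As a rough illustration of the kind of overhead such benchmarks measure, the sketch below times the per-call cost of a value-logging callable using only the standard library. The `log_value` callable, the metric names, and the iteration count are placeholders for this example, not the API of any of the tools compared in the talk.

```python
import time

def time_logging_overhead(log_value, n_iterations=10_000):
    """Return the average seconds per call for a value-logging callable.

    `log_value` stands in for whatever tracker API is under test; the
    benchmarks discussed in the talk also cover initialization time.
    """
    start = time.perf_counter()
    for step in range(n_iterations):
        log_value("loss", 1.0 / (step + 1), step)
    elapsed = time.perf_counter() - start
    return elapsed / n_iterations

# Baseline: appending to a plain in-memory list, a lower bound to compare trackers against.
records = []
baseline = time_logging_overhead(lambda key, value, step: records.append((key, value, step)))
print(f"Baseline overhead: {baseline * 1e6:.2f} µs per logged value")
```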

This presentation will not only be enlightening for those involved in AI/ML experimentation but will also be invaluable for anyone interested in the efficient and safe serialization of Python objects.
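To give a flavour of those serialization concerns, here is a minimal, generic sketch, not MLtraq's actual implementation, contrasting pickle, which can execute arbitrary code when loading untrusted data, with a data-only round trip through JSON, which stays safe but supports fewer types out of the box.

```python
import json
import pickle

class RunsCodeOnLoad:
    """Pickle invokes __reduce__ during deserialization, so untrusted payloads can run code."""
    def __reduce__(self):
        return (print, ("side effect executed during pickle.loads",))

payload = pickle.dumps(RunsCodeOnLoad())
pickle.loads(payload)  # prints the message: code ran just by deserializing

# A data-only format such as JSON round-trips plain structures without executing anything.
record = {"experiment": "example", "metrics": {"loss": [0.9, 0.5, 0.2]}}
restored = json.loads(json.dumps(record))
assert restored == record
```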


Expected audience expertise:

Intermediate

See also: slides (2.5 MB)

Michele is an independent consultant specialising in de-risking AI applications. He has two decades of experience building analytics and predictive models for robotics, publishing, decentralized finance, megaproject management, and more. He holds a PhD in modelling and querying data with uncertainty. Connect with him on LinkedIn at https://www.linkedin.com/in/dallachiesa