MLOps: Weights & Biases Experiment Tracking

Mohammad Faizan
Aug 16, 2023

Machine learning projects involve hyperparameters, configurations, datasets, and code versions. It is sometimes hard to remember which earlier change led to a specific performance metric. Fortunately, tools like Weights & Biases make this easier: they track experiments, help you reproduce models, and manage the machine learning workflow end to end. In this blog, I will discuss how to use Weights & Biases in your machine learning projects.

Image by Analytics Vidhya (https://www.analyticsvidhya.com/blog/2021/06/will-mlops-change-the-future-of-the-healthcare-system-major-use-cases-of-mlops-in-health-care/)

Weights & Biases is used by machine learning engineers, researchers, and data scientists to manage machine learning projects. We no longer have to store logs, configurations, and other essential data in Excel files for analysis and comparison; Weights & Biases does that for us. This blog post covers how you can leverage Weights & Biases in your ML projects, focusing on experiment tracking and logging results. The complete code is available in my GitHub repository.

Experiment Tracking

Quick experimentation is essential in machine learning projects so that we can iterate and understand results. By adding just a few lines of code, you get an interactive dashboard like the one below:

Image by Weights & Biases

Install Weights & Biases and PyTorch
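Both packages are available from PyPI. A minimal sketch of the setup (exact versions are up to you); wandb login links your machine to your Weights & Biases account:

```bash
pip install wandb torch torchvision
wandb login
```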

Now that you have installed “wandb” and “torch”, it’s time to import all the necessary packages, download the dataset, and create data loaders. For demonstration purposes, I am using the simple FashionMNIST dataset, which has 10 classes, but you can use your own dataset.
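The exact code is in the repository linked above; a minimal sketch of the imports, dataset download, and data loaders might look like this (the batch size of 64 and the "data" root directory are my own choices):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
import wandb

# Convert images to tensors; FashionMNIST is downloaded on first use
transform = transforms.ToTensor()
train_set = datasets.FashionMNIST(root="data", train=True, download=True, transform=transform)
test_set = datasets.FashionMNIST(root="data", train=False, download=True, transform=transform)

# Data loaders feed mini-batches to the training and validation loops
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
test_loader = DataLoader(test_set, batch_size=64, shuffle=False)

# Human-readable class names, used later when logging predictions
class_names = train_set.classes
```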

Now that the FashionMNIST dataset is ready, let’s create a simple neural network to classify the fashion items in it.
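Here is a minimal sketch of such a network; the SimpleClassifier name, the single hidden layer, and its width of 256 are illustrative choices rather than the exact architecture from the repository:

```python
class SimpleClassifier(nn.Module):
    """A small fully connected network for 28x28 grayscale FashionMNIST images."""

    def __init__(self, hidden_size=256, num_classes=10):
        super().__init__()
        self.flatten = nn.Flatten()
        self.fc1 = nn.Linear(28 * 28, hidden_size)
        self.fc2 = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        x = self.flatten(x)          # (batch, 1, 28, 28) -> (batch, 784)
        x = F.relu(self.fc1(x))      # hidden representation
        return self.fc2(x)           # raw class logits
```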

Next, it's worth logging images, labels, predictions, and confidence scores to Weights & Biases in table form for analysis and comparison.
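One way to do this is with wandb.Table and wandb.Image. The helper below, log_predictions_table, is my own naming and column layout, sketched under the assumption that it receives a batch of images, true labels, predicted classes, and per-class probabilities:

```python
def log_predictions_table(images, labels, preds, probs, class_names):
    """Log a batch of predictions as a W&B table of image, label, prediction, confidence."""
    table = wandb.Table(columns=["image", "label", "prediction", "confidence"])
    for img, label, pred, prob in zip(images, labels, preds, probs):
        table.add_data(
            wandb.Image(img),             # the input image
            class_names[int(label)],      # ground-truth class
            class_names[int(pred)],       # predicted class
            prob[int(pred)].item(),       # confidence of the predicted class
        )
    wandb.log({"predictions_table": table})
```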

Next, let’s create a validation function to evaluate our model’s performance.
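A minimal sketch of a validation function that returns the average cross-entropy loss and accuracy over a data loader (the metrics in the original code may differ):

```python
def validate(model, loader, device="cpu"):
    """Evaluate the model on a data loader and return (avg_loss, accuracy)."""
    model.eval()
    loss_sum, correct = 0.0, 0
    with torch.no_grad():
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            logits = model(images)
            loss_sum += F.cross_entropy(logits, labels, reduction="sum").item()
            correct += (logits.argmax(dim=1) == labels).sum().item()
    n = len(loader.dataset)
    return loss_sum / n, correct / n
```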

So far we have written a few helper functions. Now it’s time to train our model and log the results.
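A sketch of the training loop with W&B logging; the project name "fashion-mnist-tracking" and the hyperparameters in config are placeholders. The important parts are wandb.init with a config dictionary and wandb.log for the metrics you want to track:

```python
config = {"epochs": 5, "lr": 1e-3, "batch_size": 64}

# wandb.init starts a run; the config dict is stored with it for later comparison
with wandb.init(project="fashion-mnist-tracking", config=config) as run:
    model = SimpleClassifier()
    optimizer = torch.optim.Adam(model.parameters(), lr=run.config["lr"])

    for epoch in range(run.config["epochs"]):
        model.train()
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = F.cross_entropy(model(images), labels)
            loss.backward()
            optimizer.step()
            wandb.log({"train/loss": loss.item()})  # per-step training loss

        # Validation metrics once per epoch
        val_loss, val_acc = validate(model, test_loader)
        wandb.log({"epoch": epoch, "val/loss": val_loss, "val/accuracy": val_acc})

        # Log one batch of test predictions to the W&B table (see helper above)
        with torch.no_grad():
            images, labels = next(iter(test_loader))
            probs = F.softmax(model(images), dim=1)
            log_predictions_table(images, labels, probs.argmax(dim=1), probs, class_names)
```

Running this with different values in config produces separate runs that you can compare side by side in the W&B dashboard.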

After training has finished, you can open your project link to see the three different experiments, along with logs, tables, configs, and system information.

In summary, we saw how to track experiments with tools like Weights & Biases. In the next story, I will write about how to visualize predictions with Weights & Biases. The complete tutorial code can be obtained here.

Please feel free to comment below if you have any questions and follow for more AI-related stories.

