Heartbeat Newsletter: Volume 29

Emilie Lewis · Published in Heartbeat · 2 min read · Jan 25, 2023

Dear Heartbeat Readers,

2023 is off to a great start with some fantastic articles and series! With our new focus areas, we’re diving into Computer Vision and NLP, spending more time on deep learning projects, and seeing how you, the community, use Comet and Kangas.

This week we’ve got pieces on YOLOv5, sentiment analysis, ExBERT, and how to use Comet for deep learning experiments. You can find even more great blogs and resources on the Comet website or by joining our Slack community channel.

Happy Reading,

Emilie, Abby & the Heartbeat Team

Object Detection Using YOLOv5 and Sending Alerts

— by Kristen Kehrer

In the final part of Kristen’s bus detection project, she takes the trained model and starts running live detection. Once the program detects the bus, she receives a text.

How to Log Your Keras Deep Learning Experiments With Comet

— by Shittu Olumide Ayodeji

This article discusses deep learning using Keras and how to log models to Comet to better track, visualize, and reproduce experiments.

ExBERT: A Visual Representation of Attention Heads

— by Adrien Payong

exBERT lets you view the inner workings of Transformer models. This article uses it to display the attention heads of the BERT-base-cased model, since that is the model the exBERT user interface selects by default.

Training Gradient Boosting Models With Comet ML

— by Mwanikii

Hyperparameters are among the most important aspects of any model in data science and machine learning, and finding the right combination of them is essential to building a great one.
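To illustrate the point, here is a minimal sketch of how a couple of hyperparameter choices can change a gradient boosting model’s results, using scikit-learn’s `GradientBoostingClassifier` on a synthetic dataset (the specific values and dataset here are illustrative, not from the article):

```python
# Sketch: comparing a few hyperparameter combinations for a gradient
# boosting classifier on synthetic data. Values are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Try a small grid over two key hyperparameters.
for lr in (0.01, 0.1):
    for depth in (2, 4):
        model = GradientBoostingClassifier(
            learning_rate=lr,
            max_depth=depth,
            n_estimators=100,
            random_state=42,
        )
        model.fit(X_train, y_train)
        print(f"learning_rate={lr}, max_depth={depth}: "
              f"test accuracy = {model.score(X_test, y_test):.3f}")
```

Logging each combination’s score to an experiment tracker (as the article does with Comet ML) makes it easy to compare runs like these side by side.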

Sentiment Analysis with Python and Streamlit

— by Gourav Bais

Build and deploy your own sentiment classification app using Python and Streamlit.
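The classification core of such an app might look like the sketch below, with scikit-learn standing in for whatever model the article builds; the Streamlit layer would wrap this in a web UI. The tiny dataset and all names here are illustrative, not from the article:

```python
# Sketch: a minimal sentiment classifier that a Streamlit app could wrap.
# The training texts below are a toy stand-in for a real labeled corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I love this product", "Absolutely fantastic experience",
    "This is great", "Terrible, would not recommend",
    "I hate it", "Really bad quality",
]
labels = ["positive", "positive", "positive",
          "negative", "negative", "negative"]

# TF-IDF features feeding a logistic regression classifier.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["I love this"])[0])
```

In the Streamlit app itself, something like `st.text_input` would collect the user’s text and `st.write` would display the predicted label.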

Accessing GLUE Datasets With the Hugging Face API

— by Adrien Payong

The General Language Understanding Evaluation (GLUE) benchmark was developed to promote and reward models that draw on shared language knowledge in a variety of contexts.
