Tree-Based Models in Machine Learning

Nathan Rosidi
9 min read · Nov 30, 2023

Mastering Tree-Based Models in Machine Learning: A Practical Guide to Decision Trees, Random Forests, and GBMs

Tree Based Models in Machine Learning
Image created by the author on Canva

Ever wondered how machines make complex decisions? Just like a tree branches out, tree-based models in machine learning do something similar. They’re the backbone of decision-making processes in AI.

In the next few sections, we’ll explore different types of tree-based models. We start with the basics: decision trees. Then we branch out to random forests and gradient boosting machines. Each has its unique way of processing data and making decisions.

But how do these models apply to real-world scenarios? From financial analysis to healthcare predictions, tree-based models have a significant impact. At the end, we’ll also look at common roadblocks and their solutions. Let’s get started!

Types of Tree-Based Models

In this section, we explore tree-based models in machine learning. We start with decision trees, understanding their basics and functionality. Next, we discuss random forests and their benefits over single decision trees.

Finally, we examine gradient boosting machines (GBMs), focusing on their unique aspects and comparison with random forests. This overview will introduce you to the key tree-based models.

So buckle up!

Decision Trees

Decision trees in machine learning are like flowcharts, making decisions based on data. They’re particularly useful in scenarios requiring clear, logical decisions.

For our Python example, we’ll use the famous Iris dataset from scikit-learn, which includes measurements of iris flowers and their species. We’ll predict the species based on these measurements. We’ll also visualize the tree to understand how decisions are made.

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, plot_tree
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt

# Load the Iris dataset
iris = load_iris()
X = iris.data
y = iris.target

# Split the data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

# Create and train the decision tree
model = DecisionTreeClassifier()
model.fit(X_train, y_train)

# Visualize the decision tree
plt.figure(figsize=(20,10))
plot_tree(model, filled=True, feature_names=iris.feature_names, class_names=list(iris.target_names))
plt.show()

Let’s see the output.

Decision Trees in Tree Based Models

Here is the decision tree we just trained and plotted.

  • Splitting Criteria: The tree splits based on petal length and width, indicating these are key features for classification.
  • Purity of Nodes: Nodes with a Gini index of 0 are perfectly pure, meaning all samples at that node belong to one class.
  • Class Distribution: Each leaf node shows how many samples fall into each class. For instance, the leftmost leaf has 36 samples, all belonging to setosa, which the very first split separates cleanly.
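
If you prefer to read these rules as text rather than as a figure, scikit-learn’s export_text prints the same splits in plain form. A minimal sketch, reusing the model and iris objects from the snippet above:

from sklearn.tree import export_text

# Print the learned rules as indented text, one line per split or leaf
rules = export_text(model, feature_names=list(iris.feature_names))
print(rules)

Each indentation level corresponds to one split, so you can trace exactly how a sample reaches a leaf.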

Random Forests

Random forests are an ensemble learning method, where multiple decision trees come together to make more accurate predictions.

Think of it as a team of experts where each member provides their opinion, and the final decision is made based on the majority vote. This method is often more accurate than a single decision tree.

Here’s how we can implement a random forest in Python using the scikit-learn library:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Load Iris dataset
iris = load_iris()
X = iris.data
y = iris.target

# Splitting dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

# Creating and training the Random Forest model
random_forest_model = RandomForestClassifier(n_estimators=100)
random_forest_model.fit(X_train, y_train)

# Making predictions
predictions = random_forest_model.predict(X_test)

# Evaluating the model
print("Accuracy:", accuracy_score(y_test, predictions))

In this code, we use the same Iris dataset. However, instead of a single decision tree, we use a random forest with 100 trees (n_estimators=100).

After training, we predict the species of the iris flowers in the test set. The accuracy of these predictions typically surpasses that of a single decision tree, showcasing the strength of random forests in handling complex data sets in data science.

Here is the output.

Random forests in Tree Based Models

Now let’s visualize this.

# Visualize one of the trees in the forest
from sklearn.tree import plot_tree
import matplotlib.pyplot as plt

plt.figure(figsize=(20, 10))
tree_index = 0  # Choose the index of the tree you want to visualize
plot_tree(random_forest_model.estimators_[tree_index], filled=True, feature_names=iris.feature_names, class_names=list(iris.target_names))
plt.show()

Here is the output.

Random forests in Tree Based Models

This is a single decision tree from the random forest model, specifically the first tree, since tree_index is set to 0. It successfully separates the iris species, with nodes showing a Gini index of 0 indicating pure classifications.

It starts by classifying based on petal width and further refines with petal length, demonstrating the model’s ability to distinguish the species, especially setosa and virginica.
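
The plotted tree shows which features drive individual splits. For an aggregated view across all 100 trees, you can also inspect the forest’s impurity-based feature importances; this short sketch reuses random_forest_model and iris from above:

# Average impurity-based feature importances across all trees in the forest
for name, importance in zip(iris.feature_names, random_forest_model.feature_importances_):
    print(f"{name}: {importance:.3f}")

On the Iris data, the petal measurements typically dominate, which matches what we saw in the single decision tree.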

Gradient Boosting Machines (GBMs)

Gradient Boosting Machines (GBMs) are another type of ensemble learning technique, but they work differently from random forests. While random forests build trees independently, GBMs build them sequentially.

Each new tree helps correct errors made by the previous ones. It’s like each tree in the sequence is learning from the mistakes of its predecessors, making the overall model more accurate.

Let’s implement a GBM using Python’s scikit-learn library:

from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Load Iris dataset
iris = load_iris()
X = iris.data
y = iris.target

# Splitting the dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

# Creating and training the GBM model
gbm_model = GradientBoostingClassifier(n_estimators=100)
gbm_model.fit(X_train, y_train)

# Making predictions and evaluating the model
predictions = gbm_model.predict(X_test)
print("Accuracy:", accuracy_score(y_test, predictions))

Here is the output.

Gradient Boosting Machines in Tree Based Models

In this example, we use the GradientBoostingClassifier to predict the species of iris flowers. The model builds 100 trees (n_estimators=100), each learning from the errors of the previous one. This improvement often results in high accuracy, making GBMs a powerful tool in data science for solving complex problems.

Let’s visualize this.

# Plotting the Mean Squared Error over boosting iterations
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import mean_squared_error

# staged_predict yields the class predictions after each boosting iteration;
# the integer class labels are treated as numbers to compute an error score
test_score = np.zeros((100,), dtype=np.float64)
for i, y_pred in enumerate(gbm_model.staged_predict(X_test)):
    test_score[i] = mean_squared_error(y_test, y_pred)

plt.figure(figsize=(12, 6))
plt.title('Mean Squared Error Over Boosting Iterations')
plt.plot(np.arange(100) + 1, test_score, 'r-', label='Test Set MSE')
plt.legend(loc='upper right')
plt.xlabel('Boosting Iterations')
plt.ylabel('Mean Squared Error')
plt.show()

Here is the output.

Gradient Boosting Machines in Tree Based Models

This graph above shows the mean squared error (MSE) of a Gradient Boosting model’s predictions on the test set over 100 boosting iterations. The MSE drops sharply in the initial iterations, indicating significant improvements in the model’s performance.

After around 20 iterations, the MSE stabilizes, suggesting that additional iterations do not contribute much to further reducing the error. This could mean that the model has reached its optimal performance early, and additional iterations are not adding value, which is a good sign of model convergence.
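
If you would rather let the model stop on its own once the curve flattens like this, GradientBoostingClassifier supports built-in early stopping through validation_fraction and n_iter_no_change. A minimal sketch, reusing X_train and y_train from above (the parameter values are illustrative):

from sklearn.ensemble import GradientBoostingClassifier

# Hold out 10% of the training data internally; stop adding trees once the
# validation score has not improved for 10 consecutive iterations
gbm_early = GradientBoostingClassifier(
    n_estimators=500,
    validation_fraction=0.1,
    n_iter_no_change=10,
    random_state=42,
)
gbm_early.fit(X_train, y_train)
print("Trees actually fitted:", gbm_early.n_estimators_)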

Decision Trees vs. Random Forests vs. Gradient Boosting Machines

Now, let’s discuss the differences between decision trees, random forests, and GBMs:

  • Decision Trees: In our example, we used a single decision tree to classify iris species. Decision trees are straightforward and easy to interpret but can be prone to overfitting. They work well for simple tasks but might not perform as well in more complex scenarios.
  • Random Forests: The random forest model we implemented uses multiple decision trees to make more accurate predictions. Unlike a single decision tree, random forests reduce the risk of overfitting by averaging the results of many trees. This makes them more robust and accurate, especially in more complex datasets.
  • Gradient Boosting Machines (GBMs): GBMs, as shown in our example, build trees one at a time, where each new tree helps to correct the errors made by previous trees. This sequential building process can lead to a highly accurate model. However, GBMs might be more sensitive to overfitting if the data is noisy and can be more challenging to tune than random forests.

Each of these methods has its strengths and weaknesses, and the choice between them depends on the specific requirements of your data and problem.

Roadblocks in Applying Decision Trees

Applying Tree Based Models in Machine Learning
Image created by the author on Canva

When applying decision trees in machine learning, several challenges or roadblocks can arise. Understanding these challenges is key to effectively using decision trees in data science projects.

1. Overfitting: One of the most common issues with decision trees is overfitting, where the model learns the training data too well, including its noise and outliers. This leads to poor performance on unseen data. Overfitting is especially likely when the tree grows too deep, capturing too much detail and complexity.

  • Solution: Implement pruning techniques to limit the depth of the tree, and use cross-validation to ensure the model generalizes well to unseen data (a sketch follows below).
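
Here is a minimal sketch of both ideas on the Iris data; the max_depth and ccp_alpha values are illustrative, not tuned:

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Limit depth (pre-pruning) and add a small cost-complexity penalty (post-pruning)
pruned_tree = DecisionTreeClassifier(max_depth=3, ccp_alpha=0.01, random_state=42)

# 5-fold cross-validation gives a more honest estimate than training accuracy
scores = cross_val_score(pruned_tree, X, y, cv=5)
print("Mean CV accuracy:", scores.mean())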

2. Handling Continuous Variables: Decision trees can struggle with continuous variables. They split these variables at various points, but finding the optimal split point can be tricky, especially when the continuous variable doesn’t have a clear, discrete boundary.

  • Solution: Use discretization methods to convert continuous variables into categorical ones, simplifying the decision-making process.
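
One way to apply this suggestion with scikit-learn is KBinsDiscretizer, which bins each continuous feature before the tree sees it. A minimal sketch, reusing X_train, X_test, y_train, and y_test from the earlier snippets:

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import KBinsDiscretizer
from sklearn.tree import DecisionTreeClassifier

# Bin each continuous feature into 5 ordinal buckets, then fit the tree on the bins
binned_tree = make_pipeline(
    KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="quantile"),
    DecisionTreeClassifier(max_depth=3),
)
binned_tree.fit(X_train, y_train)
print("Accuracy on binned features:", binned_tree.score(X_test, y_test))

Keep in mind that trees already search for their own split points, so binning mainly buys simpler, shallower trees rather than extra accuracy.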

3. Biased Trees with Imbalanced Data: If the training data is imbalanced (i.e., some classes are underrepresented), the decision tree might become biased towards the dominant class. This bias can skew the predictions, favoring the majority class at the expense of the minority class.

  • Solution: Apply techniques like SMOTE (Synthetic Minority Over-sampling Technique) or adjust class weights to balance the dataset before training.
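
SMOTE comes from the separate imbalanced-learn package; if you want to stay inside scikit-learn, the lighter-weight version of this fix is class weighting. A minimal sketch reusing the earlier train/test split:

from sklearn.tree import DecisionTreeClassifier

# class_weight='balanced' re-weights samples inversely to class frequency,
# so splits are no longer dominated by the majority class
balanced_tree = DecisionTreeClassifier(class_weight="balanced", random_state=42)
balanced_tree.fit(X_train, y_train)
print("Accuracy with balanced class weights:", balanced_tree.score(X_test, y_test))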

4. Complexity with Large Datasets: While decision trees handle small to medium-sized datasets efficiently, their performance can decrease with very large datasets. The time to train the tree increases significantly, and the tree structure can become overly complex, making it harder to interpret.

  • Solution: Use dimensionality reduction techniques or feature selection to reduce the dataset size and complexity, making the tree more manageable.
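
As a sketch of the feature-selection route (the Iris data is small, so this is purely illustrative), SelectKBest keeps only the most informative columns before the tree is trained; on a genuinely large dataset you would choose k far more carefully:

from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Keep the 2 features with the strongest ANOVA F-score, then fit the tree on them
slim_tree = make_pipeline(
    SelectKBest(score_func=f_classif, k=2),
    DecisionTreeClassifier(max_depth=3),
)
slim_tree.fit(X_train, y_train)
print("Accuracy with 2 selected features:", slim_tree.score(X_test, y_test))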

5. Limited to Axis-Aligned Decision Boundaries: Decision trees split on one feature at a time, so their decision boundaries are axis-aligned and piecewise-constant. This can be limiting in scenarios where the relationship between features and the target variable is oblique or highly non-linear.

  • Solution: Combine decision trees with other algorithms, like SVMs (Support Vector Machines), that can capture non-linear relationships.
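
One way to combine them, following the suggestion above, is stacking: a tree and an RBF-kernel SVM each make predictions, and a small meta-learner blends them. This is only an illustrative sketch (the model and variable names are mine), reusing the train/test split from earlier:

from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# A tree (axis-aligned splits) and an SVM (smooth non-linear boundary),
# blended by a logistic-regression meta-learner
stacked = StackingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(max_depth=3)),
        ("svm", SVC(kernel="rbf", probability=True)),
    ],
    final_estimator=LogisticRegression(),
)
stacked.fit(X_train, y_train)
print("Stacked model accuracy:", stacked.score(X_test, y_test))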

6. Sensitivity to Small Changes in Data: Decision trees can be quite sensitive to small changes in the training data. A slight change can lead to a significantly different tree structure. This lack of stability can be a concern in dynamic environments where data changes frequently.

  • Solution: Use ensemble methods like Random Forest or Gradient Boosting which are less sensitive to small data variations and offer more stability.

If you want to know more about machine learning algorithms, here is a good source for you → machine learning algorithms you must know.

Final Thoughts

We went through the world of tree-based models in machine learning, uncovering the nuances of decision trees, random forests, and GBMs. Using the Iris dataset, we saw these models in action, from simple visualizations to tackling complex data challenges. We also navigated through common roadblocks, offering practical solutions to enhance model performance.

Practice is key in data science. Experiment with these models, tweak them, and see how they perform with different data. Each challenge you encounter and overcome sharpens your skills further.

Join our platform to take this learning further. Engage with real-world data projects and prepare for your career in data science. Visit us, apply your knowledge, and continue your growth as a budding data scientist. Your journey is just starting, and there’s much more to explore and achieve!
