
Performance Metrics to Monitor in Machine Learning Projects

Aug 5, 2022
5 min read

One point we keep coming back to when discussing the significant difference between DevOps and MLOps is the sheer number of iterative loops a typical MLOps pipeline has to go through, which stems from the variety of architectures, parameters, and data used in machine learning models.


What are Performance Metrics in Machine Learning?

Most MLOps pipelines implode when the developer jumps straight to real-life applications. Skipping the step of first testing a model's accuracy and performance in a controlled environment leads to various anomalies in the model's desired functionality, and these anomalies only become harder to trace once the other layers of the MLOps stack are introduced.

Over the years, with mathematics as their arsenal, ML engineers have come up with various machine learning performance metrics that, unlike loss functions, estimate the model's performance during training and testing.


9 Most Important Machine Learning Performance Metrics

Let us check out some of these ML performance metrics that will enable you and your team to make your models shine brighter than ever!

Accuracy

Problems to monitor: Classification, Supervised Learning

Perhaps the most straightforward machine learning evaluation metric, accuracy is simply the number of correct predictions divided by the total number of predictions, multiplied by 100. However, this method fails in situations where the data is imbalanced. Let us see why.

[Figure: Accuracy formula]

Take cancer detection as an example, where the data contains two classes, Negative (0) and Positive (1). If the data is drawn from a comprehensive source, negative cases typically outnumber positive cases by margins as large as 90:10. Now, even if the model fails to identify a single positive case (the 10%), its accuracy is still 90% simply because of how the data is distributed. In cases like this, where the minority class is exactly what we are trying to identify, accuracy is not a sound metric.
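As a rough sketch (assuming scikit-learn and NumPy, which are not specified in this blog's stack), the snippet below reproduces the 90:10 scenario and shows why a majority-class predictor still reports 90% accuracy:

```python
import numpy as np
from sklearn.metrics import accuracy_score

# Toy labels mirroring the 90:10 cancer-detection example:
# 90 negative cases (0) and 10 positive cases (1).
y_true = np.array([0] * 90 + [1] * 10)

# A "model" that blindly predicts negative for every sample.
y_pred = np.zeros_like(y_true)

# Accuracy = correct predictions / total predictions.
print(accuracy_score(y_true, y_pred))  # 0.9 -- looks good, yet every positive case is missed
```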

To overcome this and highlight the model’s performance on minor cases, metrics like Confusion Matrix and F1 score exist, which we will read more about later in this blog.

Confusion Matrix

Problems to monitor: Classification, Supervised Learning

A confusion matrix is an N x N matrix, where N is the number of classes being predicted. One axis of the matrix represents the actual classes and the other the predicted classes. The metric's name comes from the "confusion" that arises when one class is mislabeled as another.

[Figure: Confusion matrix layout]

This method helps identify true positives, true negatives, false positives, and false negatives. Observing the ratios between these four cells gives us a better understanding of a model's performance than accuracy does, especially on skewed or imbalanced data.

  • True: the model's prediction matches the actual class
  • False: the model's prediction does not match the actual class
  • Positive: the prediction belongs to the positive class
  • Negative: the prediction belongs to the negative class
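As a small sketch (again assuming scikit-learn, with made-up labels), the four cells can be pulled straight out of confusion_matrix:

```python
from sklearn.metrics import confusion_matrix

# Hypothetical binary labels and predictions.
y_true = [0, 0, 0, 1, 1, 1, 1, 0, 1, 0]
y_pred = [0, 1, 0, 1, 0, 1, 1, 0, 0, 0]

# Rows are actual classes, columns are predicted classes:
# [[TN, FP],
#  [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"TN={tn}, FP={fp}, FN={fn}, TP={tp}")  # TN=4, FP=1, FN=2, TP=3
```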

F1 Score

Problems to monitor: Classification, Supervised Learning

The F1 score is the harmonic mean of precision and recall, both of which are calculated from true positives, false positives, and false negatives.

Precision: the number of correct positive predictions divided by the total number of positive predictions made by the model.

Recall: the number of correct positive predictions divided by the total number of instances that should have been predicted as positive.

[Figure: F1 score formula]

High precision with low recall makes the model extraordinarily accurate on the predictions it does make, but it misses many instances that should have been labelled positive. Hence, finding the best possible balance between precision and recall is essential to maximizing the F1 score.
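Here is a minimal sketch (assuming scikit-learn, with the same hypothetical labels as above) verifying that the F1 score equals the harmonic mean of precision and recall:

```python
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [0, 0, 0, 1, 1, 1, 1, 0, 1, 0]
y_pred = [0, 1, 0, 1, 0, 1, 1, 0, 0, 0]

precision = precision_score(y_true, y_pred)  # TP / (TP + FP) = 0.75
recall = recall_score(y_true, y_pred)        # TP / (TP + FN) = 0.60

# Harmonic mean of precision and recall.
harmonic_mean = 2 * precision * recall / (precision + recall)
print(harmonic_mean, f1_score(y_true, y_pred))  # both print ~0.667
```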

Mean Absolute Error

Problems to monitor: Regression, Forecasting, Time Series Analysis, Supervised Learning

As the name suggests, mean absolute error is the average of the absolute differences between a model's predicted values and the actual values. It gives an insight into how far, on average, the predictions were from the actual output. It serves as one of the most basic performance metrics for regression problems and has inspired metrics like Mean Absolute Scaled Error and Mean Squared Error.

[Figure: Mean Absolute Error formula]

One advantage of this metric is that it uses the same scale as the measured data, making it a simple and unambiguous measure of the average error.
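A minimal sketch of the calculation (assuming NumPy and scikit-learn, with made-up values) looks like this:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

# MAE = mean of |actual - predicted|, reported in the same units as the target.
mae_manual = np.mean(np.abs(y_true - y_pred))
print(mae_manual, mean_absolute_error(y_true, y_pred))  # 0.5 0.5
```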

Mean Squared Error

Problems to monitor: Regression, Forecasting, Time Series Analysis, Supervised Learning

Mean Squared Error is heavily inspired by Mean Absolute Error, the only difference being that MSE takes the average of the squared differences between the actual and predicted values. Its advantage over MAE is that the squared term is smoothly differentiable, which produces better-behaved gradients during optimization.

[Figure: Mean Squared Error formula]

The effect of squaring the error is that mistakes are penalized more heavily, since their contribution is amplified. This means the impact of larger errors becomes far more pronounced than that of smaller ones.
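Using the same hypothetical values as before, a short sketch (assuming NumPy and scikit-learn) shows the squared-error averaging:

```python
import numpy as np
from sklearn.metrics import mean_squared_error

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

# MSE = mean of (actual - predicted)^2; squaring amplifies the larger errors.
mse_manual = np.mean((y_true - y_pred) ** 2)
print(mse_manual, mean_squared_error(y_true, y_pred))  # 0.375 0.375
```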

Root Mean Squared Error

Problems to monitor: Regression, Forecasting, Time Series Analysis, Supervised Learning

Root Mean Squared Error grew out of Mean Squared Error and has become the most popular performance metric for regression problems. It works on the assumption that the errors between predicted and actual values are unbiased and follow a normal distribution. Taking the square root brings the metric back to the same scale as the target while still emphasizing larger deviations.

[Figure: Root Mean Squared Error formula]

Squaring the errors before averaging also removes the cancellation between positive and negative deviations, so the metric reflects only the magnitude of the errors. However, outlier values can make RMSE misleading, so make sure you handle them before relying on this metric.
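A short sketch (assuming NumPy and scikit-learn) simply takes the square root of the MSE from the previous example:

```python
import numpy as np
from sklearn.metrics import mean_squared_error

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

# RMSE = sqrt(MSE), which brings the error back to the units of the target.
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
print(rmse)  # ~0.612
```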

AU-ROC

Problems to monitor: Classification, Binary Classification, Supervised Learning

Area Under the Receiver Operating Characteristic curve is a performance metric heavily used to evaluate classification models. It measures how well the model can rank data points: fundamentally, it acts as a discriminative metric that gives the probability that a randomly selected positive sample is assigned a higher score than a randomly selected negative sample.

[Figure: ROC curve with the area under the curve shaded]

How do we interpret this given graph?

The AU-ROC of a model is represented by a graph of the ROC curve; the numerical value is the area under that curve.

  • An AU-ROC of 0.5 (area under the dotted line) corresponds to a useless model.
  • An AU-ROC less than 0.7 is sub-optimal.
  • An AU-ROC of 0.7-0.8 is a good performance.
  • An AU-ROC of 1.0 corresponds to a perfect classifier.
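As a minimal sketch (assuming scikit-learn, with hypothetical scores), roc_auc_score computes this area directly from predicted probabilities rather than hard labels:

```python
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 0, 1, 1, 1, 1, 0, 1, 0]
# Predicted probabilities of the positive class, not hard 0/1 labels.
y_scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.7, 0.3, 0.55, 0.25]

# Probability that a random positive sample is ranked above a random negative one.
print(roc_auc_score(y_true, y_scores))  # 0.84
```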

Kolmogorov-Smirnov Chart

Problems to monitor: Classification, Supervised Learning

The Kolmogorov-Smirnov chart measures the performance of classification models by looking at the degree of separation between the model's positive and negative score distributions. The higher the value, the better; a value of 100 means the two distributions are completely separated.

[Figure: Kolmogorov-Smirnov chart]

The major advantage of this metric is that it is nonparametric: it lets the developer compare the two distributions without assuming anything about their shape, making it easier to tackle fine-tuning head-on.
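A minimal sketch (assuming SciPy, with hypothetical scores) runs the two-sample Kolmogorov-Smirnov test on the scores of the two classes:

```python
import numpy as np
from scipy.stats import ks_2samp

# Model scores split by actual class (hypothetical values).
scores_positive = np.array([0.9, 0.8, 0.75, 0.7, 0.65])
scores_negative = np.array([0.4, 0.35, 0.3, 0.2, 0.1])

# The KS statistic is the maximum gap between the two cumulative distributions;
# SciPy reports it on a 0-1 scale, so 1.0 corresponds to the 100 mentioned above.
result = ks_2samp(scores_positive, scores_negative)
print(result.statistic)  # 1.0 -- the two distributions are completely separated here
```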

R-Squared

Problems to monitor: Regression, Supervised Learning

The R-Squared coefficient is unique among these performance metrics in that it works as a post-metric: it is computed from the same residual errors that other metrics rely on. The coefficient estimates how much of the total variation in the dependent variable Y is explained by the variation in the independent variable X.

[Figure: R-Squared formula]

R-squared provides an assessment of the relationship between the movements of a dependent variable and the movements of an independent variable. It does not indicate whether the model you choose is excellent or terrible, nor whether the data and predictions are biased.
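A minimal sketch (assuming NumPy and scikit-learn, with the same hypothetical regression values as earlier) computes R-Squared from its definition and checks it against r2_score:

```python
import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

# R^2 = 1 - (residual sum of squares / total sum of squares).
ss_res = np.sum((y_true - y_pred) ** 2)
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
r2_manual = 1 - ss_res / ss_tot
print(r2_manual, r2_score(y_true, y_pred))  # ~0.949 for both
```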

Conclusion

We hope this blog gives you a better understanding of machine learning performance metrics. As an essential part of monitoring, maintaining, and scaling your MLOps pipeline, these metrics are bound to save you a great deal of time and money on the machine learning side.

If you are looking for more guidance on setting up an MLOps team at your startup, how about checking out our blogs on Setting up your own MLOps Team and Building or Buying your MLOps Tech Stack.

Written By
Aryan Kargwal, Data Evangelist