
A Fayrix Machine Learning expert shares the performance metrics commonly used in Data Science for assessing Machine Learning models.

First of all, the metrics we optimise while tuning a model and the metrics used to evaluate its performance are not necessarily the same. Below, we discuss metrics used to optimise Machine Learning models; for performance evaluation, the original business metrics can be used.

Before choosing a metric, we need to understand what kind of problem we are trying to solve. Here is a list of some common problem types in machine learning:

- Classification. The algorithm predicts which class an item belongs to from a predefined set. For example, it may respond with yes/no/not sure.
- Regression. The algorithm predicts a continuous value. For example, tomorrow's temperature in a weather forecast.
- Ranking. The model predicts an order of items. For example, given a group of students, rank them by height from the tallest to the shortest.

The confusion matrix is used to evaluate the accuracy of a classifier and is presented in the table below.

|                 | Predicted positive  | Predicted negative  |
|-----------------|---------------------|---------------------|
| Actual positive | True Positive (TP)  | False Negative (FN) |
| Actual negative | False Positive (FP) | True Negative (TN)  |

For example, a False Positive (FP) in an anti-spam engine moves a trusted email to the junk folder.

A False Negative (FN) in medical screening can incorrectly indicate the absence of a disease when the patient actually has it.
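The four cells of the confusion matrix can be counted directly from paired actual/predicted labels. A minimal sketch in plain Python; the encoding 1 = positive, 0 = negative and the sample labels are assumptions for illustration:

```python
def confusion_matrix(y_true, y_pred):
    """Count TP, FP, FN, TN for binary labels (1 = positive, 0 = negative)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

# made-up anti-spam example: 1 = spam, 0 = trusted email
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0]
print(confusion_matrix(y_true, y_pred))  # -> (2, 1, 1, 2)
```

Here the single FP is the trusted email sent to junk, and the single FN is the spam message that slipped through.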

Accuracy is the most basic metric. It indicates the proportion of correctly classified items out of the total number of items.

Keep in mind that the accuracy metric has limitations: it does not work well with imbalanced classes, where one class has many items and the other classes have few.
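The imbalanced-class pitfall is easy to demonstrate; the data below is made up for illustration:

```python
def accuracy(y_true, y_pred):
    # share of items classified correctly
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

# imbalanced case: 95 negatives, 5 positives
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100  # a useless model that always predicts "negative"
print(accuracy(y_true, y_pred))  # -> 0.95, despite missing every positive
```

A model that never finds a single positive item still scores 95% accuracy, which is why precision and recall below are needed.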

The recall metric shows how many True Positives the model has identified out of the total number of actually positive values.

The precision metric represents the number of True Positives that are really positive compared to the total number of positively predicted values.

The F1 score is a combination of the precision and recall metrics and serves as a compromise between them. The best F1 score equals 1, while the worst one is 0.
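Precision, recall, and F1 follow directly from the confusion-matrix counts. A minimal sketch, again assuming binary labels 1/0 and made-up data:

```python
def precision_recall_f1(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp)  # TP out of all predicted positives
    recall = tp / (tp + fn)     # TP out of all actual positives
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
p, r, f = precision_recall_f1(y_true, y_pred)
print(round(p, 2), round(r, 2), round(f, 2))  # -> 0.75 0.75 0.75
```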


This regression metric, Mean Absolute Error (MAE), indicates the average of the absolute differences between the actual and predicted values.
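MAE can be computed in a couple of lines; the sample values below are made up for illustration:

```python
def mean_absolute_error(y_true, y_pred):
    # average of absolute differences between actual and predicted values
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [3.0, 5.0, 2.0, 7.0]
y_pred = [2.5, 5.0, 4.0, 8.0]
print(mean_absolute_error(y_true, y_pred))  # -> 0.875
```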

Mean Squared Error (MSE) calculates the average of the squared differences between the actual and predicted values over all data points. All differences are raised to the second power, so negative values are not compensated by positive ones. Moreover, because of the squaring, the impact of large errors is amplified: if the errors in our initial calculations are 1/2/3, their squared contributions to MSE are 1/4/9 respectively. The smaller the MSE, the more accurate our predictions are; MSE = 0 is the optimal point, at which our forecast is perfectly accurate.

**MSE has some advantages over MAE:**

1. MSE highlights large errors over small ones.

2. MSE is differentiable, which helps find minimum and maximum values using mathematical methods more effectively.

RMSE is the square root of MSE. It is easier to interpret than MSE because it is expressed in the same units as the target, and it uses smaller absolute values, which is helpful for computer calculations.
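MSE and RMSE follow from the same definitions; the sample values are made up, with errors of 1, 2, and 3 to match the 1/4/9 example above:

```python
import math

def mean_squared_error(y_true, y_pred):
    # squaring keeps every term non-negative and amplifies large errors
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def root_mean_squared_error(y_true, y_pred):
    # the square root brings the metric back to the units of the target
    return math.sqrt(mean_squared_error(y_true, y_pred))

y_true = [3.0, 5.0, 2.0, 7.0]
y_pred = [2.0, 5.0, 4.0, 10.0]
print(mean_squared_error(y_true, y_pred))       # -> 3.5  (squares 1, 0, 4, 9)
print(root_mean_squared_error(y_true, y_pred))  # -> sqrt(3.5), about 1.87
```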

RANKING

