Recall is the ratio of correct positive predictions to the total number of actual positive items in the set. It is expressed as the percentage of the positive items that the model correctly identifies. In other words, recall indicates how good the model is at picking out the correct items.
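As a minimal sketch, recall can be computed with scikit-learn's `recall_score`; the labels below are illustrative values, not from the text:

```python
# Illustrative example: computing recall with scikit-learn.
from sklearn.metrics import recall_score

# Hypothetical true and predicted labels (1 = positive, 0 = negative).
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]

# 4 actual positives, 3 of them predicted correctly: recall = 3 / 4.
print(recall_score(y_true, y_pred))  # 0.75
```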

## What is meant by true positive?

A true positive is an outcome where the model correctly predicts the positive class. Similarly, a true negative is an outcome where the model correctly predicts the negative class. A false positive is an outcome where the model incorrectly predicts the positive class.
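As a small sketch with made-up labels, the four outcome counts can be tallied directly from paired true and predicted values:

```python
# Illustrative tally of the four confusion-matrix outcomes
# (labels are assumed, with 1 = positive and 0 = negative).
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # correctly predicted positive
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # correctly predicted negative
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # incorrectly predicted positive
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # incorrectly predicted negative

print(tp, tn, fp, fn)  # 2 2 1 1
```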

## How do you evaluate a confusion matrix?

From our confusion matrix, we can calculate several metrics measuring the validity of our model.

- Accuracy (all correct / all) = (TP + TN) / (TP + TN + FP + FN).
- Misclassification (all incorrect / all) = (FP + FN) / (TP + TN + FP + FN).
- Precision (true positives / predicted positives) = TP / (TP + FP).
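The three formulas above can be sketched in a few lines; the counts are assumed values chosen for illustration:

```python
# Illustrative confusion-matrix counts (assumed values, not from the text).
tp, tn, fp, fn = 50, 35, 10, 5
total = tp + tn + fp + fn

accuracy = (tp + tn) / total           # all correct / all = 85 / 100
misclassification = (fp + fn) / total  # all incorrect / all = 15 / 100
precision = tp / (tp + fp)             # true positives / predicted positives = 50 / 60

print(accuracy, misclassification, round(precision, 3))  # 0.85 0.15 0.833
```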

## How is FPR calculated?

The false positive rate is calculated as FP / (FP + TN), where FP is the number of false positives and TN is the number of true negatives (FP + TN being the total number of negatives).
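A minimal sketch of this calculation, using assumed counts:

```python
# Illustrative false positive rate (FPR) computation.
fp, tn = 10, 90  # assumed counts: false positives and true negatives

# FPR = FP / (FP + TN), where FP + TN is the total number of negatives.
fpr = fp / (fp + tn)
print(fpr)  # 0.1
```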

## What is the use of Confusion Matrix?

A confusion matrix is a technique for summarizing the performance of a classification algorithm. Classification accuracy alone can be misleading if you have an unequal number of observations in each class or if you have more than two classes in your dataset.

## How do you get a confusion matrix in python?

How to create a confusion matrix in Python using scikit-learn

```python
# Importing the dependencies.
from sklearn import metrics

# Predicted values.
y_pred = ["a", "b", "c", "a", "b"]

# Actual values.
y_act = ["a", "b", "c", "c", "a"]

# Printing the confusion matrix.
# Rows correspond to the actual labels and columns to the predicted
# labels, in sorted label order ("a", "b", "c").
print(metrics.confusion_matrix(y_act, y_pred))
# [[1 1 0]
#  [0 1 0]
#  [1 0 1]]
```

## What is sensitivity in confusion matrix?

Sensitivity (SN) is calculated as the number of correct positive predictions (TP) divided by the total number of positives (P). It is also called recall (REC) or true positive rate (TPR).
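As a sketch with assumed counts, sensitivity follows directly from TP and FN (since P = TP + FN):

```python
# Illustrative sensitivity (recall / TPR) computation.
tp, fn = 40, 10  # assumed counts; total positives P = TP + FN = 50

sensitivity = tp / (tp + fn)
print(sensitivity)  # 0.8
```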