Mean Absolute Error (MAE)

Mean Absolute Error (MAE) is a common error metric used in regression problems.

#Getting Started

#Introduction

  • It is also known as the L1 loss function.
  • It is the average of the absolute differences between the predicted and actual values.
  • It measures how close the predicted values are to the actual values.
  • It is used in regression problems.
  • Wikipedia: https://en.wikipedia.org/wiki/Mean_absolute_error

#Algorithms Using MAE

  • Linear Regression
  • Decision Trees (see the scikit-learn sketch after this list)
  • Random Forests
  • Gradient Boosting
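
Several of these can be configured to optimize MAE directly. The sketch below is illustrative: it uses scikit-learn's DecisionTreeRegressor with criterion="absolute_error" (older scikit-learn releases named this criterion "mae") on a synthetic dataset generated only for demonstration.

from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

# Synthetic regression data, purely for illustration.
X, y = make_regression(n_samples=200, n_features=4, noise=10.0, random_state=0)

# criterion="absolute_error" makes the tree choose splits that minimize MAE
# (older scikit-learn versions called this criterion "mae").
tree = DecisionTreeRegressor(criterion="absolute_error", max_depth=3, random_state=0)
tree.fit(X, y)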

#Formula

MAE = (1/n) * Σ_{i=1}^{n} |y_i − ŷ_i|

  • y_i is the actual value.
  • ŷ_i is the predicted value.
  • n is the number of samples.

#Example

  • Let's say we have a dataset of 5 samples.
  • The actual values are [2, 4, 6, 8, 10].
  • The predicted values are [3, 3, 7, 9, 9].
  • The MAE is calculated as follows:

MAE = (|2 − 3| + |4 − 3| + |6 − 7| + |8 − 9| + |10 − 9|) / 5 = (1 + 1 + 1 + 1 + 1) / 5 = 1
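
The same arithmetic can be checked with a couple of lines of NumPy:

import numpy as np

y_true = np.array([2, 4, 6, 8, 10])  # actual values from the example above
y_pred = np.array([3, 3, 7, 9, 9])   # predicted values from the example above

print(np.mean(np.abs(y_true - y_pred)))  # 1.0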

#Advantages and Disadvantages

  • Advantages
    • It is easy to understand and interpret, since it is in the same units as the target variable.
    • It is less sensitive to outliers than squared-error metrics such as MSE, because errors are not squared (see the comparison sketch after this list).
  • Disadvantages
    • It is not differentiable at zero error, which can complicate gradient-based optimization.
    • It does not distinguish the direction of errors (over- and under-prediction are treated the same).
    • It penalizes all errors linearly, so large errors are not weighted more heavily than small ones (unlike MSE).
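
The outlier point can be seen in a small comparison sketch (values are illustrative): making a single prediction badly wrong increases MAE linearly, while MSE grows quadratically.

import numpy as np

y_true = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
y_pred = np.array([3.0, 3.0, 7.0, 9.0, 9.0])

# Same predictions, but the last one is badly wrong (an outlier error).
y_pred_outlier = y_pred.copy()
y_pred_outlier[-1] = 30.0

def mae(a, b):
    return np.mean(np.abs(a - b))

def mse(a, b):
    return np.mean((a - b) ** 2)

print(mae(y_true, y_pred), mae(y_true, y_pred_outlier))  # MAE: 1.0 -> 4.8
print(mse(y_true, y_pred), mse(y_true, y_pred_outlier))  # MSE: 1.0 -> 80.8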

#Implementation

#Python

import numpy as np

def mean_absolute_error(y_true, y_pred):
    # Accept lists or arrays; compute the average absolute difference.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean(np.abs(y_true - y_pred))

#R

mae <- function(y_true, y_pred) {
    return(mean(abs(y_true - y_pred)))
}

#Julia

using Statistics  # `mean` lives in the Statistics standard library

function mae(y_true, y_pred)
    return mean(abs.(y_true .- y_pred))
end

#scikit-learn

from sklearn.metrics import mean_absolute_error

# y_true and y_pred can be lists or NumPy arrays of the same length.
error = mean_absolute_error(y_true, y_pred)

#TensorFlow

import tensorflow as tf

# y_true and y_pred should be tensors or NumPy arrays; the result is a scalar tensor.
error = tf.keras.losses.MeanAbsoluteError()(y_true, y_pred)
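
A quick usage sketch (inputs can be tensors or NumPy arrays; .numpy() converts the resulting scalar tensor to a plain float):

import tensorflow as tf

y_true = tf.constant([2.0, 4.0, 6.0, 8.0, 10.0])
y_pred = tf.constant([3.0, 3.0, 7.0, 9.0, 9.0])

mae = tf.keras.losses.MeanAbsoluteError()
print(mae(y_true, y_pred).numpy())  # 1.0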

#PyTorch

import torch

# L1Loss takes the prediction first, then the target, and both must be tensors.
error = torch.nn.L1Loss()(y_pred, y_true)
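
A quick usage sketch (L1Loss expects floating-point tensors; .item() extracts the result as a Python float):

import torch

y_true = torch.tensor([2.0, 4.0, 6.0, 8.0, 10.0])
y_pred = torch.tensor([3.0, 3.0, 7.0, 9.0, 9.0])

loss_fn = torch.nn.L1Loss()  # mean reduction by default, i.e. MAE
print(loss_fn(y_pred, y_true).item())  # 1.0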