ReLU Function

The ReLU function, also known as the rectified linear unit, is a common activation function used in neural networks.

#Getting Started

#Introduction

ReLU maps negative inputs to 0 and passes non-negative inputs through unchanged. The sections below list where it is used, give its formula and properties, and show short implementations in several languages.

#Algorithms Using ReLU

  • Neural Networks, where ReLU is a common choice for hidden-layer activations (see the sketch after this list)
  • Deep Learning
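
As a rough illustration of how ReLU fits into these models, the sketch below applies a fully connected layer followed by a ReLU activation. It is a minimal NumPy sketch; the layer sizes, random weights, and helper names are arbitrary choices for this example, not part of any particular library.

import numpy as np

def relu(x):
    # Element-wise ReLU: negative entries become 0, the rest pass through.
    return np.maximum(0, x)

def dense_relu_layer(x, W, b):
    # One fully connected layer followed by a ReLU activation.
    return relu(x @ W + b)

# Toy example with arbitrary sizes: 4 input features, 3 hidden units.
rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))        # one input sample
W = rng.normal(size=(4, 3))        # weight matrix
b = np.zeros(3)                    # bias vector
print(dense_relu_layer(x, W, b))   # non-negative activations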

#Formula

relu(x) = max(0, x)

  • x is the input.
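
For example, relu(-2) = max(0, -2) = 0, while relu(3) = max(0, 3) = 3.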

#Properties

  • The ReLU function is always non-negative (see the numerical check after this list).
  • The ReLU function is unbounded above: it grows without limit as the input increases.
  • The ReLU function is equal to 0 when the input is less than 0.
  • The ReLU function is equal to the input when the input is greater than or equal to 0.
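
As a quick numerical check, the snippet below (a minimal sketch using NumPy; the sample points are arbitrary) evaluates ReLU over a range of inputs and verifies the non-negativity and piecewise behaviour described above:

import numpy as np

x = np.linspace(-5.0, 5.0, 101)        # arbitrary sample points
y = np.maximum(0, x)                   # element-wise ReLU

print(np.all(y >= 0))                  # True: always non-negative
print(np.all(y[x < 0] == 0))           # True: 0 for negative inputs
print(np.all(y[x >= 0] == x[x >= 0]))  # True: identity for non-negative inputs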

#Advantages and Disadvantages

  • Advantages

    • The ReLU function is differentiable everywhere except at 0, and its derivative is trivial to compute (either 0 or 1).
    • The ReLU function is monotonic.
    • The ReLU function is bounded below by 0.
  • Disadvantages

    • The ReLU function is not differentiable at 0 (see the gradient sketch after this list).
    • The ReLU function has zero gradient for all negative inputs, so units that only ever receive negative inputs stop updating (the "dying ReLU" problem).
    • The ReLU function is not bounded above.
    • The ReLU function is not symmetric.
    • The ReLU function is not centered around 0: its outputs are never negative.
    • The ReLU function is not smooth, because its derivative jumps from 0 to 1 at the origin.
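
Because the derivative is undefined at 0, implementations pick a subgradient value there. The sketch below, in plain Python, shows the piecewise gradient referred to in the list above; using 0 at x == 0 is an assumption that matches common practice rather than a universal rule:

def relu_grad(x):
    # Piecewise derivative of ReLU: 1 for positive inputs, 0 for negative inputs.
    # At x == 0 the derivative is undefined; 0 is used here as a subgradient choice.
    return 1.0 if x > 0 else 0.0

print([relu_grad(v) for v in (-2.0, 0.0, 3.0)])   # [0.0, 0.0, 1.0]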

#Implementation

#Python

def relu(x):
    # Scalar ReLU: returns 0 for negative x, otherwise x itself.
    return max(0, x)
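
The scalar version above handles one number at a time. For array inputs, a vectorized variant is common; the snippet below is a sketch that assumes NumPy is available, which the original example does not require:

import numpy as np

def relu(x):
    # Element-wise ReLU for NumPy arrays (also works for Python scalars).
    return np.maximum(0, x)

print(relu(np.array([-3.0, -1.0, 0.0, 1.0, 3.0])))   # [0. 0. 0. 1. 3.]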

#R

relu <- function(x) {
    # Scalar ReLU; for vector inputs, pmax(0, x) applies it element-wise.
    return(max(0, x))
}

#Julia

function relu(x)
    # Scalar ReLU; for arrays, broadcast with relu.(x) or max.(0, x).
    return max(0, x)
end

#TensorFlow

import tensorflow as tf
x = tf.constant([-3.0, -1.0, 0.0, 1.0, 3.0])
# TensorFlow 2.x executes eagerly, so no Session is needed.
print(tf.nn.relu(x))   # tf.Tensor([0. 0. 0. 1. 3.], shape=(5,), dtype=float32)

#PyTorch

import torch
x = torch.tensor([-3.0, -1.0, 0.0, 1.0, 3.0])
print(torch.nn.functional.relu(x))   # tensor([0., 0., 0., 1., 3.])