Sigmoid Function

#Getting Started

#Introduction

The sigmoid function is a common activation function used in neural networks. It is also known as the logistic function.

#Algorithms Using Sigmoid

  • Logistic Regression (see the sketch after this list)
  • Neural Networks (as an activation function)
  • Gradient Boosting (when fitting with logistic loss for binary classification)
  • Reinforcement Learning (e.g., as the output squashing in policy networks)
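
For illustration, here is a minimal sketch of the first item: logistic regression passes a linear score through the sigmoid (defined in the next section) to obtain a probability. The weights, bias, and feature values below are made-up numbers for demonstration only.

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Hypothetical model parameters and input (made-up values).
w = np.array([0.5, -0.25])
b = 0.1
features = np.array([2.0, 1.0])

# The linear score is w . features + b = 0.85; the sigmoid squashes it into (0, 1).
score = np.dot(w, features) + b
print(sigmoid(score))  # ~0.7006, read as the probability of the positive class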

#Formula

σ(x) = 1 / (1 + e^(-x))

  • x is the input.
  • e is Euler's number, approximately equal to 2.71828.
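
For example, σ(1) = 1 / (1 + e^(-1)) ≈ 1 / (1 + 0.367879) ≈ 0.731059, which matches the worked example below.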

#Properties

  • The sigmoid function is always positive; its output is never exactly 0.
  • The sigmoid function is always less than 1; its output is never exactly 1.
  • The sigmoid function equals 0.5 when the input is 0.
  • The sigmoid function approaches 1 as the input approaches positive infinity.
  • The sigmoid function approaches 0 as the input approaches negative infinity.
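
These properties are easy to verify numerically. Below is a minimal sketch using NumPy; the input range of -20 to 20 is an arbitrary choice.

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

x = np.linspace(-20, 20, 1001)
y = sigmoid(x)

assert np.all((y > 0) & (y < 1))  # output stays strictly between 0 and 1
assert np.all(np.diff(y) > 0)     # strictly increasing (monotonic)
assert sigmoid(0) == 0.5          # exactly 0.5 at the origin
print(sigmoid(-20), sigmoid(20))  # ~2.1e-09 and ~1.0: the tails approach 0 and 1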

#Advantages and Disadvantages

  • Advantages
    • The sigmoid function is differentiable everywhere, with the simple derivative σ'(x) = σ(x)(1 - σ(x)).
    • The sigmoid function is monotonic (strictly increasing).
    • The sigmoid function is bounded between 0 and 1, so its output can be interpreted as a probability.
    • The sigmoid function is easy to understand and implement.
  • Disadvantages
    • The sigmoid function is not zero-centered: its outputs are always positive, which can slow gradient-based optimization.
    • The sigmoid function is not sparse: its output is never exactly 0, unlike ReLU.
    • The sigmoid function is prone to saturation: for inputs of large magnitude the gradient is nearly 0, causing vanishing gradients in deep networks (see the sketch after this list).
    • The sigmoid function is comparatively expensive to compute because it requires an exponential.
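
The saturation problem follows directly from the derivative. Using the identity σ'(x) = σ(x)(1 - σ(x)), this short sketch shows how quickly the gradient collapses as the input moves away from 0:

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_grad(x):
    # Standard identity for the sigmoid's derivative.
    s = sigmoid(x)
    return s * (1 - s)

for x in [0.0, 2.0, 5.0, 10.0]:
    print(x, sigmoid_grad(x))
# The gradient peaks at 0.25 at x = 0 and falls to ~4.5e-05 by x = 10:
# the vanishing-gradient behavior that hurts deep sigmoid networks.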

#Example

  • Let's say we have a dataset of 5 samples.
  • The input values are -1, 0, 1, 2, and 3.
  • The sigmoid function is calculated as follows:
  x    sigmoid(x)
  -1   0.268941
   0   0.5
   1   0.731059
   2   0.880797
   3   0.952574

#Implementation

#Python

import numpy as np

# Vectorized sigmoid: np.exp works on scalars and NumPy arrays alike.
def sigmoid(x):
    return 1 / (1 + np.exp(-x))
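
A quick usage check that reproduces the table from the example above:

x = np.array([-1, 0, 1, 2, 3])
print(sigmoid(x))  # ≈ [0.2689 0.5 0.7311 0.8808 0.9526]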

#R

# Vectorized sigmoid: exp() is applied element-wise to numeric vectors.
sigmoid <- function(x) {
    return(1 / (1 + exp(-x)))
}

#Julia

# Scalar sigmoid; use broadcasting, e.g. sigmoid.(xs), to apply it to arrays.
function sigmoid(x)
    return 1 / (1 + exp(-x))
end

#TensorFlow

import tensorflow as tf

# tf.nn.sigmoid applies the sigmoid element-wise to the tensor.
x = tf.constant([-1, 0, 1, 2, 3], dtype=tf.float32)
y = tf.nn.sigmoid(x)

print(y)

#PyTorch

import torch

# torch.sigmoid applies the sigmoid element-wise to the tensor.
x = torch.tensor([-1, 0, 1, 2, 3], dtype=torch.float32)
y = torch.sigmoid(x)

print(y)

#SciPy

import numpy as np
from scipy.special import expit

# expit is SciPy's vectorized implementation of the logistic sigmoid.
x = np.array([-1, 0, 1, 2, 3])
y = expit(x)

print(y)