Activation Functions in Deep Learning

In the vast realm of deep learning, where intricate neural networks simulate the complexities of human cognition, lies a pivotal element that empowers these models to learn and make predictions with astonishing accuracy: activation functions. Often overshadowed by the glitz and glamour of advanced architectures and massive datasets, these unassuming mathematical formulas play a critical role in shaping the behavior and expressiveness of neural networks.

Imagine a world where neural networks are merely passive observers, incapable of distinguishing signal from noise, unable to unlock the hidden patterns buried within vast amounts of data. Activation functions hold the key to breathing life into these networks, empowering them to transform inputs into meaningful outputs, and facilitating the non-linearity required to tackle real-world challenges.

In this blog, we embark on a journey to explore the inner workings of activation functions in deep learning. We'll dive into their significance, understand the diverse range of functions available, and uncover the impact they have on model performance. By the end of this exploration, you'll gain a deeper understanding of how these seemingly inconspicuous formulas shape the very essence of deep learning and unleash its true potential.

Activation Functions

Activation functions are mathematical functions applied to the outputs of individual neurons or nodes within a neural network. They introduce non-linearity into the network, enabling it to learn and model complex relationships between inputs and outputs.

Activation functions play a vital role in determining the activation levels or "firing" of neurons. They define the output values based on the weighted sum of inputs received by a neuron and introduce non-linear transformations that allow neural networks to approximate highly nonlinear functions.

Activation functions serve two primary purposes:

  1. Introducing non-linearity: Linear operations, such as simple weighted sums, are limited in their ability to model complex relationships. Activation functions provide the non-linear element required for neural networks to learn and represent intricate patterns and mappings in the data (a short sketch after this list makes this concrete).

  2. Enabling decision-making: Activation functions determine the level of activation or firing of a neuron, thereby deciding whether the information it carries is relevant or not. This enables the network to make decisions and classify inputs into different categories.
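
To make the first point concrete, here is a minimal NumPy sketch with small hypothetical weight matrices: two stacked linear layers with no activation collapse into a single linear map, while inserting a ReLU between them breaks that equivalence.

```python
import numpy as np

# Hypothetical weights for two tiny "layers" (no biases, for brevity).
W1 = np.array([[1.0, -1.0],
               [2.0,  1.0]])
W2 = np.array([[1.0, 1.0]])
x = np.array([1.0, 2.0])

# Two linear layers with no activation collapse into a single linear map.
no_activation = W2 @ (W1 @ x)        # [3.]
collapsed = (W2 @ W1) @ x            # [3.] -- identical, so the extra layer adds nothing
print(np.allclose(no_activation, collapsed))   # True

# Inserting a ReLU between the layers breaks that equivalence,
# which is what lets deep networks represent non-linear functions.
with_relu = W2 @ np.maximum(0.0, W1 @ x)       # [4.]
print(np.allclose(with_relu, collapsed))       # False
```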

There are various types of activation functions commonly used in deep learning, each with its own characteristics and suitability for different tasks. Examples include the sigmoid function, hyperbolic tangent (tanh), rectified linear unit (ReLU), and variants like leaky ReLU and exponential linear unit (ELU).

Choosing an appropriate activation function is crucial as it impacts the network's ability to learn, convergence speed, and generalization capabilities. Deep learning researchers and practitioners often experiment with different activation functions to find the most suitable one for a specific task or architecture.

Sigmoid function:

The sigmoid function is a mathematical function commonly used in machine learning and neural networks. It maps any real-valued number to a value between 0 and 1, providing a smooth and continuous "S"-shaped curve. The function is defined as:

$$\sigma(x) = \frac{1}{1 + e^{-x}}$$

In this equation, "e" represents the base of the natural logarithm, and "x" is the input value. The sigmoid function has a range between 0 and 1, with an output of exactly 0.5 when the input is zero. As the input becomes more positive, the output approaches 1, and as the input becomes more negative, the output approaches 0.

The sigmoid function is particularly useful in machine learning because it allows us to convert an input into a probability. The output of the sigmoid function can be interpreted as the probability of a binary event occurring, where values close to 1 indicate a high probability, and values close to 0 indicate a low probability. Additionally, the smooth and differentiable nature of the sigmoid function facilitates efficient training of neural networks using gradient-based optimization algorithms.
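
As a quick illustration, here is a minimal NumPy sketch of the sigmoid; the function name and test values are chosen purely for illustration.

```python
import numpy as np

def sigmoid(x):
    """Sigmoid activation: squashes any real input into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
print(sigmoid(x))  # approx [0.0067, 0.2689, 0.5, 0.7311, 0.9933]
```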

Hyperbolic Tangent (tanh) Function:

The hyperbolic tangent function, denoted as tanh, is a mathematical function commonly used in various fields, including mathematics, physics, and machine learning. It is an activation function that maps real numbers to a range between -1 and 1. The tanh function is the hyperbolic counterpart of the ordinary tangent function and is defined in terms of exponentials, which confines its output to the range -1 to 1.

Mathematically, the tanh function can be expressed as:

$$\tanh(x) = \frac{{e^x - e^{-x}}}{{e^x + e^{-x}}}$$

Here, 'e' represents Euler's number, a mathematical constant approximately equal to 2.71828, and 'x' is the input value.

The tanh function has several key properties. Firstly, it is an odd function, meaning tanh(-x) = -tanh(x), which results in symmetry around the origin. Additionally, the tanh function is bounded, with its range limited between -1 and 1. As 'x' approaches positive or negative infinity, tanh(x) approaches 1 and -1, respectively.

As an activation function, tanh introduces non-linearity into the network, allowing it to model complex relationships between inputs and outputs. It is often preferred over the sigmoid because its outputs are centred around zero, which can aid faster convergence during training.
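
A minimal NumPy sketch of the definition above (NumPy also ships np.tanh, which gives the same result):

```python
import numpy as np

def tanh(x):
    """Hyperbolic tangent: squashes inputs into (-1, 1), centred at zero."""
    return (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))

x = np.array([-2.0, 0.0, 2.0])
print(tanh(x))     # approx [-0.964, 0.0, 0.964]
print(np.tanh(x))  # NumPy's built-in gives the same values
```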

ReLU Function:

The Rectified Linear Unit (ReLU) is a widely used activation function in artificial neural networks. It introduces non-linearity by outputting the input value if it is positive, and zero otherwise. The ReLU function is defined as:

$$f(x) = \max(0, x)$$

Mathematically, this means that for any input value 'x', the ReLU function returns 'x' if it is positive, and zero otherwise. This simple thresholding behaviour, repeated across many neurons and layers, allows networks built from ReLU units to model complex relationships between their inputs and outputs.

The ReLU activation function offers several advantages. Firstly, it is computationally efficient to evaluate compared to other activation functions like the sigmoid or tanh. The ReLU function also helps to mitigate the vanishing gradient problem, which can occur during backpropagation in deep neural networks. Additionally, ReLU activations produce sparse representations, since any neuron whose input is negative outputs exactly zero while the remaining neurons stay active.
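
A minimal NumPy sketch (names illustrative) that also shows the sparsity just mentioned: every negative input maps to exactly zero.

```python
import numpy as np

def relu(x):
    """ReLU activation: passes positive inputs through, zeroes out the rest."""
    return np.maximum(0.0, x)

x = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
print(relu(x))  # [0.  0.  0.  0.5 3. ] -- negative inputs become exactly zero
```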

However, one limitation of ReLU is that it can produce "dead" neurons: if a neuron's input is consistently negative, its output and gradient are zero, and it stops learning. To address this, various modifications have been proposed, such as Leaky ReLU, Parametric ReLU, and the Exponential Linear Unit (ELU), which introduce small non-zero responses for negative inputs to alleviate the dead neuron issue.

Leaky ReLU:

Leaky ReLU (Rectified Linear Unit) is an activation function commonly used in artificial neural networks. It is an improvement over the standard ReLU function, addressing one of its limitations. In the Leaky ReLU function, for inputs less than zero, instead of simply outputting zero as in ReLU, a small linear component is introduced. This linear component helps overcome the "dying ReLU" problem where neurons that have negative inputs during training become inactive, resulting in dead neurons that do not contribute to learning.

The mathematical representation of the Leaky ReLU function is as follows:

$$f(x) = \max(ax, x)$$

where x is the input to the function, and a is a small positive constant known as the leakage coefficient. Typically, the value of a is set to a small value such as 0.01. If x is positive, the output is equal to x. If x is negative, the output is equal to ax, where ax is a small fraction of x.

By introducing this small linear component, the Leaky ReLU function ensures that even neurons with negative inputs can contribute to the network's learning process. This helps prevent the dying ReLU problem and improves the overall performance and robustness of the neural network.
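
A minimal NumPy sketch of Leaky ReLU, with the leakage coefficient exposed as an illustrative `alpha` argument defaulting to 0.01:

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    """Leaky ReLU: like ReLU, but negative inputs keep a small slope alpha."""
    return np.where(x >= 0, x, alpha * x)

x = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
print(leaky_relu(x))  # [-0.03  -0.005  0.  0.5  3.]
```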

Exponential Linear Unit (ELU):

The Exponential Linear Unit (ELU) is an activation function commonly used in deep learning models. It addresses the limitations of other activation functions, such as the vanishing gradient problem, and helps improve the performance of neural networks.

ELU is defined by a piecewise function that consists of two parts: the linear (identity) region for non-negative inputs and the exponential region for negative inputs. Mathematically, ELU is expressed as follows:

$$f(x) = \begin{cases} x, & \text{if } x \geq 0 \\ \alpha \cdot (\exp(x) - 1), & \text{if } x < 0 \end{cases}$$
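
Here, α is a positive constant (often set to 1) that controls the value the function saturates to for large negative inputs. A minimal NumPy sketch of the definition above, with α exposed as an illustrative `alpha` argument:

```python
import numpy as np

def elu(x, alpha=1.0):
    """ELU: identity for x >= 0, smooth exponential saturation towards -alpha for x < 0."""
    return np.where(x >= 0, x, alpha * (np.exp(x) - 1.0))

x = np.array([-3.0, -1.0, 0.0, 1.0, 3.0])
print(elu(x))  # approx [-0.950, -0.632, 0., 1., 3.]
```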

Softmax Function:

The softmax function is commonly used as an activation function in machine learning and neural networks. It is particularly useful in multi-class classification problems, where the goal is to assign an input to one of several possible categories. The softmax function takes a vector of real-valued numbers as input and transforms them into a probability distribution over the classes.

Mathematically, given an input vector z = [z₁, z₂, ..., zₙ], the softmax function computes the exponential of each element, yielding exp(z₁), exp(z₂), ..., exp(zₙ). These values are then normalized by dividing each element by the sum of all exponentiated values, exp(z₁) + exp(z₂) + ... + exp(zₙ). The result is a vector of values between 0 and 1 that add up to 1, representing probabilities.

The softmax function's mathematical formula can be expressed as follows:

$$\operatorname{softmax}(z_i) = \frac{\exp(z_i)}{\sum_{j=1}^n \exp(z_j)}, \qquad i=1,2,\dots,n$$

In vector form, the same operation can be written as:

$$\operatorname{softmax}(\mathbf{z}) = \frac{\exp(\mathbf{z})}{\sum_{j=1}^n \exp(z_j)}, \qquad \mathbf{z}=[z_1, z_2, \dots, z_n]^\top$$

The softmax function amplifies the largest values in the input vector while suppressing the smaller ones. This makes the output vector suitable for interpretation as class probabilities, allowing us to choose the class with the highest probability as the predicted class during classification tasks.
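
A minimal NumPy sketch of the formula above (names illustrative). Subtracting the maximum score before exponentiating is a common practical trick, not part of the formula itself: it does not change the result but prevents overflow for large scores.

```python
import numpy as np

def softmax(z):
    """Softmax: turns a vector of real scores into a probability distribution."""
    shifted = z - np.max(z)      # stabilizes the exponentials without changing the output
    exp_z = np.exp(shifted)
    return exp_z / np.sum(exp_z)

z = np.array([2.0, 1.0, 0.1])
probs = softmax(z)
print(probs)        # approx [0.659, 0.242, 0.099]
print(probs.sum())  # 1.0
```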

Conclusion

In conclusion, we have explored several popular activation functions used in machine learning and neural networks. Each activation function offers unique properties and advantages that can impact the performance and behavior of the model.

The sigmoid function, with its S-shaped curve, is commonly used in binary classification problems. It squashes the input into a range between 0 and 1, representing probabilities. However, it suffers from the vanishing gradient problem, which can hinder training in deep networks.

The hyperbolic tangent (tanh) function also squashes the input, this time between -1 and 1, and is symmetric around the origin. Its zero-centred output and steeper gradient near the origin mitigate some of the sigmoid's drawbacks, although it still saturates for large-magnitude inputs. It is often used in recurrent neural networks (RNNs) and certain other architectures.

The rectified linear unit (ReLU) function is widely popular due to its simplicity and effectiveness. It sets negative inputs to zero, providing faster convergence during training. However, ReLU suffers from the dying ReLU problem, where neurons can get stuck in a state of inactivity.

Leaky ReLU and Parametric ReLU (PReLU) are variants of ReLU that address the dying ReLU problem by introducing a small slope for negative inputs. This helps prevent the complete "death" of neurons and improves the performance of deep networks.

The softmax function is commonly used in multi-class classification problems. It transforms a vector of real values into a probability distribution over classes, allowing us to select the class with the highest probability as the predicted class.

Choosing the appropriate activation function depends on the specific problem, network architecture, and desired behaviour. Experimentation and understanding the characteristics of each activation function are crucial for achieving optimal performance in machine learning tasks.

That's all for this blog. For any queries, feel free to write in the comments or reach out to me on social media. Know more at lakshaykumar.tech

Happy Learning!