Table of Contents

Deep Learning Meta Walkthrough

The Foundation

1. General Concepts

This is the first article of our walkthrough through deep learning neural networks. First things first, we explore some general concepts of deep learning and introduce the deep learning model.

2. Inside the Model

In this article, we explore the generic structure of a deep learning model.

The Learning Process

1. The Loss Function

We complete the deep learning model with the loss function: this is the first step toward the learning process.

2. The Backward Pass

The backward pass is the counterpart of the forward pass: this is the second step toward the learning process.

3. The Weights

The weights are the learning elements of the deep learning model: the core of the learning process.

The Deep Learning Algorithm

1. The Gradient Descent Algorithm

We use the different parts we have seen so far to run the training phase from scratch.

2. Batch Learning

A new idea to make learning more robust: learn on multiple data inputs at once.

From a Layer Perspective

1D Layers

1. The Linear Layer

We explore the Linear layer. It is the first step toward designing deep learning models. We also discuss the neural structure and a better way to compute the backward pass.

2. The Activation Layer

Let us see the neural structure for the Activation layer.

3. The Input 1D Layer

Let us see the neural structure for the Input 1D layer.

2D Layers

1. The Convolution Layer

Let us add the missing piece that allows the Convolution layer to learn.

2. The Max Pooling Layer

The Max Pooling layer helps us build effective deep learning models.

3. The Normalization Layer

The Normalization layer helps stabilize learning.

From a Network Perspective

The Linear Network

1. Weights' Balancing

We look back at the simple "Example" model to illustrate how the weights update over time.

2. The Linear Function

We investigate the global function computed by the Linear network.

The Convolutional Network

1. The Second Dimension

In this article, we add the second dimension on our journey toward Computer Vision.