# Why is Structuring Necessary?

With Python and its vast ecosystem of libraries, it has become very easy to implement Deep Neural Networks, even for those who have no idea what the network actually does behind the scenes. This becomes a problem when the model has to be tuned to improve accuracy. Most students and professionals training a model don’t follow a structured approach and land in a situation where they cannot figure out which parameters to tune or where the model has to be improved.

This leads to the necessity of following a structured approach. In this post I’ll explain the key concepts involved in a structured approach to an ML project.

# Concepts Covered:

1) Orthogonalization

2) Evaluating a model

3) Creating and Dividing your data set

4) Error Analysis

# What is Orthogonalization?

Orthogonalization, or an orthogonal approach to a project, means identifying and tuning parameters in such a way that we tackle only the issue at stake without creating new ones, i.e. we tune each parameter so that it does not affect the others.

Tuning the knobs of a radio to receive a particular station is the standard real-world example used to explain the orthogonal approach.

Consider a conventional radio which has knobs to tune the frequency. A radio works on the principle of resonance: when we tune the receiver so that it matches the frequency of the transmitted signal, we receive the transmission efficiently. There are two features we are allowed to adjust: frequency and amplitude. Tuning the frequency improves the quality of reception, whereas adjusting the amplitude (volume) changes the strength of the sound output. These two controls are independent of each other and hence don’t interfere, which is exactly the property we want when tuning a model.

Orthogonalization is an approach borrowed from mathematics and is not native to Deep Neural Networks. Anyone with basic knowledge of mathematics knows that the commonly used coordinate systems have orthogonal axes, i.e. the axes are mutually perpendicular and the variables along those axes are independent. This concept of orthogonal coordinates is the key intuition behind orthogonalization in ML.

The figure below shows a set of rectangular coordinates with their mutually perpendicular axes x, y and z.
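The independence of orthogonal axes can also be checked numerically: the dot product of any two perpendicular unit vectors is zero, so moving along one axis contributes nothing along the others. A minimal sketch using NumPy:

```python
import numpy as np

# Unit vectors along the x, y and z axes of a rectangular coordinate system
x = np.array([1.0, 0.0, 0.0])
y = np.array([0.0, 1.0, 0.0])
z = np.array([0.0, 0.0, 1.0])

# Orthogonal axes have zero dot product: a step along one axis
# does not change the component along another
print(np.dot(x, y))  # 0.0
print(np.dot(y, z))  # 0.0
print(np.dot(x, z))  # 0.0
```

This is the property we want our tuning knobs to have: each one moves the model along its own axis only.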

With enough intuition gained about what Orthogonalization is and why it should be used, let’s move on to applying it to ML problems.

# How to evaluate a model?

Consider two classification algorithms, one based on a Convolutional Neural Network and the other on a plain deep neural network. Let’s call the first classifier A and the second classifier B. How do we evaluate their performance, i.e. how do we know which model performs better? This leads us to the need for a single-number evaluation metric.

At the core of any ML problem we iterate over 3 tasks: forming an idea, implementing it in code, and running experiments.

What an evaluation metric does is help us decide which model works better and on which model we need to repeat the 3 tasks above.

There are two widely used metrics: Precision and Recall.

Precision tells us what fraction of the objects the model flags are actually the intended object, whereas Recall tells us what fraction of the intended objects the model manages to find.

Suppose the evaluation shows that classifier A is more precise in its classifications but misses many objects, whereas classifier B finds more objects but is less precise. Then the two models have to be tuned on different grounds.

If a model scores well on both metrics then it is a well-optimized model; in practice the two are often combined into a single number, the F1 score (their harmonic mean), which serves as the single-number evaluation metric.
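As a concrete illustration (the counts below are invented for this sketch, not taken from any real evaluation), precision and recall can be computed from true positives, false positives and false negatives, and combined into an F1 score:

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall and their harmonic mean (F1)."""
    precision = tp / (tp + fp)  # of everything flagged, how much was right
    recall = tp / (tp + fn)     # of everything to be found, how much was found
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts: A is precise but misses objects,
# B finds more objects but is less precise
print(precision_recall_f1(tp=90, fp=10, fn=60))   # precision 0.9, recall 0.6
print(precision_recall_f1(tp=120, fp=60, fn=30))  # precision ~0.67, recall 0.8
```

The single F1 number lets us rank the two classifiers directly instead of juggling two competing metrics.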

# How to create an efficient data set?

Any Deep Learning model requires a lot of data to perform well, and accumulating it is costly. But there are a few strategies that make the available data go further.

· Data Augmentation: This is the method of transforming the available data so as to diversify the data set in hand. Mirroring the image, random cropping, converting between BGR/RGB and grayscale, and changing the color scheme are some efficient augmentation ideas. But one must carefully analyze the data set before augmenting, otherwise it may hurt accuracy even more; cropping a data set that contains disoriented images, for example, can spoil accuracy. Mirroring is a safe strategy which I recommend from my own experience.

· Synthesizing Data: This is a less commonly suggested but effective idea. There are various graphic tools available online for creating the images we need. These synthesized images can be added to the data set after performing augmentation.

· Randomization: Shuffling the data set before dividing it into train, dev and test sets can greatly improve accuracy, because without shuffling there is a chance that the sets end up with different distributions and the model’s accuracy drops.
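The augmentation ideas above can be sketched with plain NumPy array operations (the image here is random data standing in for a real photo; shapes and offsets are illustrative assumptions):

```python
import numpy as np

def augment(image, rng):
    """Return simple augmented variants of an H x W x 3 image array."""
    mirrored = image[:, ::-1, :]  # horizontal flip -- usually a safe augmentation
    # Random crop: drop an 8-pixel border at a random offset
    # (assumes the image is larger than 8 px in each dimension)
    h_off, w_off = rng.integers(0, 8, size=2)
    cropped = image[h_off:h_off + image.shape[0] - 8,
                    w_off:w_off + image.shape[1] - 8, :]
    # Channel reversal, e.g. BGR <-> RGB
    swapped = image[:, :, ::-1]
    return mirrored, cropped, swapped

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
m, c, s = augment(img, rng)
print(m.shape, c.shape, s.shape)  # (32, 32, 3) (24, 24, 3) (32, 32, 3)
```

Each variant can be added to the training set alongside the original, multiplying the effective data size.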

# How to divide your data set?

How do we divide the data set at hand into training, development (dev) and test sets? This is a key issue in ML projects, and a model’s performance greatly depends on the distribution of the train, dev and test sets.

Suppose we have 10,000 images in our data set; then a good strategy is to use 80–90% (8,000–9,000) of the data for training, 5–8% (500–800) for the dev set and 2–5% (200–500) for the test set.
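The shuffle-then-split strategy can be sketched as follows (the 88/8/4 fractions are one choice from the ranges above, picked for illustration):

```python
import numpy as np

def split_dataset(data, labels, train_frac=0.88, dev_frac=0.08, seed=0):
    """Shuffle, then slice into train/dev/test sets (test gets the remainder)."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(data))  # randomize before splitting
    data, labels = data[order], labels[order]
    n_train = int(train_frac * len(data))
    n_dev = int(dev_frac * len(data))
    train = (data[:n_train], labels[:n_train])
    dev = (data[n_train:n_train + n_dev], labels[n_train:n_train + n_dev])
    test = (data[n_train + n_dev:], labels[n_train + n_dev:])
    return train, dev, test

X = np.arange(10_000).reshape(10_000, 1)
y = np.arange(10_000)
train, dev, test = split_dataset(X, y)
print(len(train[0]), len(dev[0]), len(test[0]))  # 8800 800 400
```

Shuffling first is what gives all three sets a similar distribution, as discussed in the randomization strategy above.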

The most important factor determining accuracy is the distribution of these sets.

· The dev and test sets must come from the same distribution, else the model’s accuracy might drop drastically when it moves from development to testing.

· The training set must be made from all available distributions of data and should be as diverse as possible.

· The dev and test sets should be smaller and more data should be used for training. The test set should contain worst case scenarios in order to check the model completely.

# How to grade and improve the performance?

The difference between human-level performance and the training error is called the bias, and the difference between the training error and the dev error is called the variance.
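For example (the error figures here are invented for illustration): if human-level error is 1%, training error is 8% and dev error is 10%, the bias is 7% and the variance is 2%, which suggests tackling bias first. A minimal sketch of this diagnosis:

```python
def diagnose(human_error, train_error, dev_error):
    """Return (bias, variance) and a hint about which to tackle first."""
    bias = train_error - human_error    # gap to human-level performance
    variance = dev_error - train_error  # gap between training and dev sets
    focus = "bias" if bias > variance else "variance"
    return bias, variance, focus

bias, variance, focus = diagnose(human_error=0.01, train_error=0.08, dev_error=0.10)
print(f"bias={bias:.2f}, variance={variance:.2f}, focus on {focus}")
```

Reducing bias and reducing variance call for different remedies (e.g. a bigger network versus more data or regularization), so this split keeps the tuning orthogonal.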

# How to perform error analysis and when?

This is the part of an ML project that demands the most brainstorming. Most people do not follow a structured approach and end up mis-tuning their model, further spoiling its accuracy. Here I’ll explain the most common problems faced during development of a model and which parameters to tune when they arise.
