Why Is Linear Algebra Critical to Understanding Deep Learning?
If Machine Learning, Mobile Development, Software Engineering, etc. are different arts of sword fighting, Competitive Programming is the blade of your sword.
To use a Deep Neural Network for Image Recognition, you don't need to understand Linear Algebra. But to understand a Deep Neural Network, you do. The fun fact is that you don't even need a Deep Neural Network to do Image Recognition: you can simply call AWS, Watson, or the Google Vision API, or maybe clone a GitHub repo. But then saying that you are a Data Scientist is a criminal offence.
Most of you will have heard of TensorFlow (Google's Deep Learning library). A 'tensor' is essentially a higher-dimensional generalization of a matrix. Linear Algebra gives a practical and scalable way of framing optimization algorithms like Gradient Descent and Limited-memory BFGS (L-BFGS).
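To make this concrete, here is a minimal sketch (not from the original article, using made-up data) of Gradient Descent framed entirely in linear-algebra terms: the parameters are a vector, the data is a matrix, and each update is one matrix-vector product away.

```python
import numpy as np

# Illustrative sketch: vectorized gradient descent for least squares.
# X, y, and true_w are synthetic, chosen only to show the linear-algebra framing.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # 100 examples, 3 features (design matrix)
true_w = np.array([2.0, -1.0, 0.5])    # the "unknown" weights we hope to recover
y = X @ true_w                         # targets produced by a known linear map

w = np.zeros(3)                        # parameter vector, updated in place
lr = 0.1                               # learning rate (step size)
for _ in range(500):
    # gradient of 0.5 * mean squared error, as a single matrix expression
    grad = X.T @ (X @ w - y) / len(y)
    w -= lr * grad

print(np.round(w, 3))                  # should land close to true_w
```

Note that the loop body never touches an individual example: the whole dataset moves through each step as one matrix operation, which is exactly what makes the method scalable.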
Machine Learning is basically used to approximate functions, which lets us build technologies that would otherwise be impossible with conventional programming.
In almost all practical scenarios, these functions take a list of inputs and generate a list of outputs.
For an Image Classification problem, the input list contains all the pixels of the image (1024 × 768), which is quite a big list. And to approximate the function, we need thousands (sometimes millions) of such images as training examples. For computational efficiency we pack these input and output lists into vectors, and can thus represent the data in a multidimensional linear space.
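A quick sketch of that packing step, with made-up shapes rather than a real dataset: each image is flattened into one long pixel vector, and the vectors are stacked into a single design matrix.

```python
import numpy as np

# Illustrative shapes only: 5 grayscale images of 1024 x 768 pixels.
images = np.random.rand(5, 1024, 768)

# Flatten each image into a row vector; rows stack into a design matrix,
# so the whole dataset lives in one multidimensional linear space.
X = images.reshape(5, -1)

print(X.shape)   # one row per image, 1024 * 768 = 786432 pixels per row
```

Every downstream operation (a linear layer, a gradient step) can now be written as a matrix operation on `X` instead of a loop over images.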
In essentially all of (supervised) Machine Learning, the approximated function is doing nothing but applying a linear transformation to the input vector so that it lands on the output vector, which is