Abheetha P
Oct 28, 2020


DEEP LEARNING — THE NEW ELECTRICITY

Introduction

Technology has advanced to the point where we can program machines to perform tasks that normally require human intelligence, such as speech recognition, decision making, sound recognition, visual perception, and language translation.

Deep learning is a subset of machine learning that makes use of deep artificial neural networks: the system learns to perform a task by propagating data forward through the network and propagating errors backward to adjust its weights.

Deep neural networks are able to process enormous datasets and make highly accurate predictions.

Deep learning models are very versatile. Different kinds of neural networks can be combined to suit the needs of a given problem.

For example, if the task is to create a chatbot that takes text input from the user and identifies the user's intent, the model could use a Recurrent Neural Network (RNN) to read the words of the sentence and feed the RNN's output into a simple feed-forward network that classifies the intent.
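A rough sketch of this design in Keras is shown below; the vocabulary size and the three example intents are invented placeholders, not taken from any real chatbot.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical sizes; a real chatbot would derive these from its data.
VOCAB_SIZE = 5000    # number of distinct words the model knows
NUM_INTENTS = 3      # e.g. "greeting", "order", "complaint"

model = keras.Sequential([
    # Turn each word index into a dense vector.
    layers.Embedding(input_dim=VOCAB_SIZE, output_dim=64),
    # The RNN reads the sentence word by word and summarizes it.
    layers.SimpleRNN(64),
    # A feed-forward layer classifies the RNN's summary into an intent.
    layers.Dense(NUM_INTENTS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```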

If the task is to describe or caption an image, the deep learning model would use a Convolutional Neural Network (CNN) to extract features from the image, effectively serving as an encoder, and a Long Short-Term Memory (LSTM) network as a decoder to generate the natural-language description.
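Below is a rough Keras sketch of such an encoder-decoder. The choice of MobileNetV2 as the encoder and the vocabulary and caption sizes are assumptions for illustration; a real captioning system would also need training data, tokenization, and a word-by-word decoding loop.

```python
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE = 8000   # hypothetical caption vocabulary
MAX_LEN = 16        # hypothetical maximum caption length

# Encoder: a pretrained CNN turns the image into a feature vector.
cnn = keras.applications.MobileNetV2(include_top=False, pooling="avg",
                                     input_shape=(224, 224, 3))
image_in = keras.Input(shape=(224, 224, 3))
features = layers.Dense(256, activation="relu")(cnn(image_in))

# Decoder: an LSTM generates the caption word by word, conditioned on
# the image features through its initial hidden and cell states.
caption_in = keras.Input(shape=(MAX_LEN,))
embedded = layers.Embedding(VOCAB_SIZE, 256)(caption_in)
hidden = layers.LSTM(256)(embedded, initial_state=[features, features])
next_word = layers.Dense(VOCAB_SIZE, activation="softmax")(hidden)

model = keras.Model([image_in, caption_in], next_word)
```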

So, evidently, the machine is able to perform the task of understanding what an image contains, which is a simulation of human intelligence.

Data is fed to deep learning models in various forms. Audio is fed as a spectrogram broken into chunks, text as sequences of words or characters, and images as multidimensional arrays of pixel values.
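As a rough illustration of these representations (the file names "photo.jpg" and "clip.wav" are placeholders, and the librosa library is assumed to be installed for the audio part):

```python
import numpy as np
from PIL import Image
import librosa  # assumed installed; used only for the audio example

# Image -> multidimensional array of pixel values (height x width x channels).
pixels = np.asarray(Image.open("photo.jpg"))
print(pixels.shape, pixels.dtype)        # e.g. (480, 640, 3) uint8

# Text -> sequence of integer indices via a (toy) word vocabulary.
vocab = {"<unk>": 0, "deep": 1, "learning": 2, "is": 3, "fun": 4}
indices = [vocab.get(w, vocab["<unk>"]) for w in "deep learning is fun".split()]
print(indices)                           # [1, 2, 3, 4]

# Audio -> spectrogram, broken into chunks along the time axis.
waveform, rate = librosa.load("clip.wav", sr=16000)
spectrogram = np.abs(librosa.stft(waveform))       # frequency x time
chunks = np.array_split(spectrogram, 10, axis=1)   # 10 time chunks
```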

Impact of dataset size in Deep Learning

The more complex the task, the bigger the training set has to be. Google's voice-recognition algorithms operate on a massive training set, yet even that is not nearly big enough to cover every possible word, phrase, or question a user might put to them.

Some applications of deep learning

Image processing

Images are typically classified using Convolutional Neural Networks (CNNs). A CNN applies learned convolutional filters to a given image, extracting features such as edges, textures, and shapes layer by layer.

[Figure: a typical CNN architecture]
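To get a feel for the convolution operation at the heart of a CNN, the sketch below slides a hand-crafted edge-detection filter over an image; in a real CNN the filter values are learned rather than fixed, and "photo.jpg" is a placeholder.

```python
import numpy as np
from PIL import Image
from scipy.signal import convolve2d

# Load an image as a 2-D grayscale array of pixel values.
gray = np.asarray(Image.open("photo.jpg").convert("L"), dtype=float)

# A classic edge-detection kernel. A CNN learns many such small filters
# instead of using hand-picked values like these.
kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]], dtype=float)

# Sliding the kernel over the image yields a "feature map" that
# responds strongly wherever the image has edges.
feature_map = convolve2d(gray, kernel, mode="same")
print(feature_map.shape)  # same height and width as the input image
```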

Handwritten digit Recognition

One of the earliest and most recognized applications of deep learning was classifying handwritten digits from the MNIST database. It is the "hello world" of deep learning programs.

The MNIST database contains handwritten digits (0 through 9) and is divided into two datasets: a training set of 60,000 examples and a test set of 10,000. The neural network learns to classify a digit from 0 to 9 by training on the labelled training set. Once trained, the network is fed the test set, and how well it recognizes those unseen digits is measured as its accuracy.
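A minimal sketch of this train-then-test workflow in Keras (the layer sizes and epoch count are illustrative choices, not prescribed by MNIST):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Load the 60,000 training and 10,000 test images (28x28 grayscale).
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixels to [0, 1]

model = keras.Sequential([
    layers.Flatten(input_shape=(28, 28)),    # 28x28 image -> 784 values
    layers.Dense(128, activation="relu"),    # hidden layer
    layers.Dense(10, activation="softmax"),  # one output per digit 0-9
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=3)            # train on labelled data
loss, accuracy = model.evaluate(x_test, y_test)  # measure on unseen digits
print(f"test accuracy: {accuracy:.3f}")
```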

Face Recognition

In 2014, Facebook put a deep learning technology called DeepFace into operation to automatically identify and tag Facebook users in photographs. The system performs face recognition using deep networks with some 120 million parameters.

Sequential Memory / Sequential Data

[Figure: an LSTM cell]

When you read a sentence, you comprehend its meaning because you retain a short-term memory of the preceding words as you read one word after another.

Imitating this kind of memory means the neural network needs some form of feedback, so that it retains a memory of the words seen so far in the sentence. This is achieved with Recurrent Neural Networks. But plain RNNs suffer from the "vanishing gradient" and "exploding gradient" problems: as gradients are propagated back through many time steps they shrink toward zero or blow up, so the network struggles to retain information over long sequences. Long Short-Term Memory networks (LSTMs) are an advancement over RNNs that address this with gating mechanisms; in essence, the mathematics used to imitate memory is improved.
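A toy numpy sketch of why this happens: backpropagating through time multiplies the gradient by the recurrent weight matrix once per time step, so its norm shrinks or grows geometrically. The matrix size and scales below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32  # hidden-state size (arbitrary)

for scale in (0.5, 1.5):
    # An orthogonal matrix times `scale`: each backprop step then
    # multiplies the gradient norm by exactly `scale`.
    W = scale * np.linalg.qr(rng.normal(size=(n, n)))[0]
    grad = np.ones(n)
    for _ in range(50):      # 50 time steps of backpropagation
        grad = W.T @ grad
    print(f"scale={scale}: gradient norm after 50 steps "
          f"= {np.linalg.norm(grad):.2e}")
# scale=0.5 vanishes (~1e-15); scale=1.5 explodes (~1e+09).
```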

Recommendation systems

Top applications like Netflix, Spotify, and YouTube make use of deep learning. It plays a major role in understanding consumers' behavior and interests, and in generating recommendations that help them choose products and services. Advertisements are now personalized to a huge extent, and consumers can find products suited to their interests and needs. The more you interact with these applications, the more information they gather, and the better the options they suggest.
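One common deep learning approach to recommendation is to learn an embedding vector for every user and every item, and to score a user-item pair by the dot product of their vectors. Here is a rough Keras sketch, with invented user and item counts:

```python
from tensorflow import keras
from tensorflow.keras import layers

NUM_USERS = 10_000   # hypothetical user count
NUM_ITEMS = 50_000   # hypothetical item (movie/song/video) count

user_in = keras.Input(shape=(1,), dtype="int32")
item_in = keras.Input(shape=(1,), dtype="int32")

# Learn a 32-dimensional "taste" vector for every user and item.
user_vec = layers.Flatten()(layers.Embedding(NUM_USERS, 32)(user_in))
item_vec = layers.Flatten()(layers.Embedding(NUM_ITEMS, 32)(item_in))

# Score a pair by the dot product: large when the vectors align,
# i.e. when the item matches the user's learned interests.
score = layers.Dot(axes=1)([user_vec, item_vec])

model = keras.Model([user_in, item_in], score)
model.compile(optimizer="adam", loss="mse")  # fit to observed ratings/clicks
```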

Machine Translation and text recognition

Deep learning has given us access to a variety of translation services. One of the most popular, Google Translate, helps its users translate between languages with ease. RNNs and LSTMs are used to translate words and sentences, and deep learning algorithms can also convert text in images into audio.
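For a quick taste of neural machine translation, the open-source Hugging Face transformers library (not mentioned in this article, just one convenient option) wraps pretrained translation models behind a one-line API:

```python
from transformers import pipeline

# Downloads a pretrained English-to-French model on first use.
translator = pipeline("translation_en_to_fr")
result = translator("Deep learning is the new electricity.")
print(result[0]["translation_text"])
```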

Deep Learning’s role during the COVID-19 pandemic

Researchers and practitioners analyze large volumes of data to forecast the spread of COVID-19, to serve as an early warning system for future pandemics, and to identify vulnerable populations. The research community has also created CORD-19, a large open dataset of scholarly articles on SARS-CoV-2 and related coronaviruses, which anybody can mine to support such analyses, from predicting trends in COVID-19 cases to estimating spread rates.

Interesting facts and timelines of Deep Learning

  1. Deep Learning has its roots in 1943, when Walter Pitts and Warren McCulloch, in their paper "A logical calculus of the ideas immanent in nervous activity", presented a mathematical model of a biological neuron.
  2. Over the years since, ANNs, RNNs, LSTMs, Boltzmann Machines, CNNs, and many other neural networks were created, and deep learning advanced rapidly, along with the software needed to implement these mathematical models.
  3. By 2001, it was observed that the internet held millions of data points; this "Big Data" was an opportunity waiting to be exploited for machine learning.
  4. In 2008, Andrew Ng's group at Stanford started to advocate the use of GPUs for training deep neural networks, to speed up training many-fold.
  5. Finding enough labeled data has always been a challenge for the deep learning community. In 2009, Fei-Fei Li, a professor at Stanford, launched ImageNet, a database of 14 million labeled images. It has served as a benchmark for deep learning researchers, who competed in the annual ImageNet competition (ILSVRC).
  6. In 2012, Google Brain released the results of a project known as The Cat Experiment, which explored the difficulties of unsupervised learning. Deep learning typically uses supervised learning, meaning the convolutional neural net is trained on labeled data; with unsupervised learning, the net is given unlabeled data and asked to find recurring patterns on its own.
  7. In 2014, the Generative Adversarial Network (GAN) was created by Ian Goodfellow, who is considered one of the great pioneers of deep learning. GANs opened doors for applications of deep learning in fashion, art, and science thanks to their ability to create realistic data.
  8. In 2016, DeepMind's deep reinforcement learning model AlphaGo beat the human champion at the complex game of Go. The game is far more complex than chess, so this feat captured everyone's imagination and took the promise of deep learning to a whole new level.

The revolutionary power of Deep Learning

Deep learning has revolutionized nearly every field of study and industry, from genetics, robotics, vehicle automation, healthcare, manufacturing, and advertising to virtual assistants, stock market prediction, finance, weather and climate analysis, traffic prediction, recommendation systems, and entertainment.

Hardware and Software Frameworks for Deep Learning

Neural networks are, at bottom, carefully devised and extremely complex algorithms. CNNs, LSTMs, GANs, and many other architectures were developed by some of the top researchers in the world.

Implementing such algorithms efficiently is possible only because of the evolution of computer hardware. Training neural networks requires a good processor, plenty of RAM, and ideally a GPU; otherwise training can become very time consuming, because the machine must iterate over the data many times to learn well. For simple problems and algorithms, an ordinary system with a decent processor, sufficient RAM, Python, and Jupyter Notebook installed is enough.
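Before training anything heavy, it is worth checking whether your framework can actually see a GPU; with TensorFlow, for example:

```python
import tensorflow as tf

# Lists any GPUs TensorFlow can use; an empty list means
# training will fall back to the (much slower) CPU.
gpus = tf.config.list_physical_devices("GPU")
print("GPUs available:", gpus)
```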

Some of the widely used software frameworks are Keras, TensorFlow, PyTorch, Caffe, and Deeplearning4j.

Extremely efficient hardware is also being built by some of the world's best electronic-component manufacturers.

NVIDIA has been dominating the current deep learning market with its massively parallel GPUs and its dedicated GPU-programming framework, CUDA. A growing number of companies are developing accelerated hardware for deep learning: Google's Tensor Processing Unit (TPU), Intel's Xeon Phi (Knights Landing), and Qualcomm's Neural Network Processor (NNU) are some examples. Companies like TeraDeep are now starting to use FPGAs (Field-Programmable Gate Arrays), which could be up to 10 times more power-efficient than GPUs.

With most deep learning software frameworks (TensorFlow, Torch, Theano, CNTK) being open source, and Facebook having open-sourced its "Big Sur" deep learning hardware, we can expect many more open-source hardware architectures for deep learning in the near future.

Conclusion

Deep Learning is a technology that works best with large datasets.

If the dataset is small, it is often better to use machine learning algorithms that don't rely on neural networks. The promise of deep learning is not that computers will start to think like humans. Rather, it demonstrates that, given a large enough dataset, fast enough processors, and a sophisticated enough algorithm, computers can begin to accomplish tasks that were once left entirely to the realm of human perception, sometimes even better than humans, and can help solve some of the world's biggest problems.
