Deep learning, also known as deep neural network learning, is an area of artificial intelligence (AI) that emulates the learning approach human beings use to acquire certain kinds of knowledge. In its simplest form, deep learning can be thought of as a way to automate predictive analytics.
Traditional machine learning algorithms
While traditional machine learning algorithms are linear, deep learning algorithms stack in a hierarchy of increasing complexity and abstraction. To understand deep learning, imagine a child whose first word is “dog.” The child learns what is (and what is not) a dog by pointing to objects and saying the word “dog.” The parent replies “Yes, that’s a dog” or “No, that’s not a dog.” As the child continues to point at objects, he becomes more aware of the characteristics that all dogs share. What the child does, without knowing it, is clarify a complex abstraction (the concept of a dog) by constructing a hierarchy in which each level of abstraction is built from the knowledge gained at the preceding layer of the hierarchy.
Computer programs that use deep learning go through the same process. Each algorithm in the hierarchy applies a non-linear transformation to its input and uses what it learns to create a statistical model as output. The iterations continue until the output has reached an acceptable level of accuracy. The number of processing layers through which the data must pass is what inspired the label “deep.”
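To make this concrete, here is a minimal sketch of the idea in NumPy: two stacked layers, each applying a non-linear (sigmoid) transformation, trained iteratively until the output is acceptably accurate. The XOR data, network size, and learning rate are illustrative choices, not anything prescribed by a particular framework.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, a problem a single linear layer cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two stacked layers, each applying a non-linear transformation.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass: each layer transforms the previous layer's output.
    h = sigmoid(X @ W1 + b1)   # first level of abstraction
    p = sigmoid(h @ W2 + b2)   # second level: the final prediction

    # Backward pass: gradients of binary cross-entropy loss.
    dz2 = p - y
    dW2 = h.T @ dz2
    db2 = dz2.sum(axis=0)
    dz1 = (dz2 @ W2.T) * h * (1 - h)
    dW1 = X.T @ dz1
    db1 = dz1.sum(axis=0)

    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

    # Stop once the output reaches an acceptable level of accuracy.
    if np.all((p > 0.5) == (y > 0.5)) and np.abs(p - y).max() < 0.1:
        break

print((p > 0.5).astype(int).ravel())
```

The “depth” is literally the two matrix-multiply-plus-nonlinearity layers the data passes through; real networks simply stack many more of them.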
In traditional machine learning, the learning process is supervised and the programmer has to be very specific in telling the computer what kinds of things to look for when deciding whether an image contains a dog. This laborious process is called feature extraction, and the computer’s success rate depends entirely on the programmer’s ability to precisely define a set of characteristics for “dog.” The advantage of deep learning is that the program constructs the feature set by itself, without supervision. This is not only faster, it is usually more accurate.
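Hand-crafted feature extraction might look like the following sketch. The feature names and the rule combining them are illustrative assumptions; the point is that the programmer, not the data, decides what “dog” means.

```python
# Hand-crafted feature extraction: the programmer decides, in advance,
# which characteristics define "dog". (Feature names are illustrative.)
def extract_features(record):
    return {
        "has_four_legs": record.get("legs") == 4,
        "has_tail": record.get("tail", False),
        "has_fur": record.get("fur", False),
    }

def is_dog(record):
    f = extract_features(record)
    # The success rate depends entirely on how well these hand-written
    # rules capture the concept of "dog".
    return f["has_four_legs"] and f["has_tail"] and f["has_fur"]

print(is_dog({"legs": 4, "tail": True, "fur": True}))    # True
print(is_dog({"legs": 2, "tail": False, "fur": False}))  # False
```

A cat would also pass these rules, which is exactly the fragility the paragraph describes: the model is only as good as the programmer’s definition.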
Initially, the computer program could be provided with training data: a set of images for which a human has labeled each image “dog” or “non-dog” with metatags. The program uses the information it receives from the training data to create a set of characteristics for “dog” and build a predictive model. In this case, the model the computer first creates might predict that anything in an image with four legs and a tail should be labeled “dog.” Of course, the program is not aware of the labels “four legs” or “tail”; it simply looks for pixel patterns in the digital data. With each iteration, the predictive model the computer creates becomes more complex and more accurate.
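The idea that the model only ever sees pixel patterns, never concepts like “tail,” can be sketched with a deliberately simple model: a nearest-centroid classifier over labeled toy images. The synthetic pixel values and the centroid approach are illustrative assumptions, a stand-in for a real learned model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "training data": 4x4 grayscale images flattened to 16 pixels,
# each tagged "dog" or "non-dog" by a human. Pixel values are synthetic.
dog_imgs = rng.normal(loc=0.8, scale=0.1, size=(20, 16))
other_imgs = rng.normal(loc=0.2, scale=0.1, size=(20, 16))

X = np.vstack([dog_imgs, other_imgs])
labels = ["dog"] * 20 + ["non-dog"] * 20

# A minimal predictive model: the mean pixel pattern per label.
# The program never sees "legs" or "tail" -- only pixel statistics.
centroids = {
    lab: X[[i for i, l in enumerate(labels) if l == lab]].mean(axis=0)
    for lab in ("dog", "non-dog")
}

def predict(image):
    # Assign the label whose average pixel pattern is closest.
    return min(centroids, key=lambda lab: np.linalg.norm(image - centroids[lab]))

new_img = rng.normal(loc=0.75, scale=0.1, size=16)
print(predict(new_img))  # closer to the "dog" pixel pattern
```

A deep network replaces the raw pixel comparison with learned layers of features, but the supervision signal is the same: human-provided labels attached to raw data.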
This process imitates human thinking
Because this process imitates human thinking, deep learning is sometimes referred to as deep neural learning or deep neural networks. Unlike the small child, who may take weeks or even months to grasp the concept of “dog,” a computer program using deep learning algorithms can be shown a training set and sort through millions of images, accurately identifying which ones contain dogs, in just a few minutes.
In order to achieve an acceptable level of accuracy, deep learning programs require access to vast amounts of training data and processing power, neither of which was readily available to programmers until the era of big data and cloud computing. Because deep learning programming is able to create complex statistical models directly from its own iterative output, it can build accurate predictive models from large amounts of unlabelled, unstructured data.
This is important as the Internet of Things (IoT) continues to become more pervasive, because most of the data that humans and machines create is unstructured and unlabelled. Today’s use cases for deep learning include all types of big data analytics applications, especially those focused on natural language processing (NLP), language translation, medical diagnostics, trading signals, network security and image recognition.
I hope this post has been interesting for you, and I look forward to your comments and recommendations!