“AI winters were not due to imagination traps, but due to lack of imaginations. Imaginations bring order out of chaos. Deep learning with deep imagination is the road map to AI springs and AI autumns.”
By Amit Ray.
1. Learn to learn

By giving machines meta-learning capabilities, it may become possible for them to improve continually on their own, because they will grasp the concept of learning itself. Until now, a deep learning system has remained confined to the environment for which it was designed. We have to give machines the capacity to understand what learning is, so that they can expand beyond their initial boundaries.
Meta-learning looks set to be one of the key concepts of the year. The difficulty is that it pushes us into the thorny territory of understanding what language, reasoning and learning actually are … questions that may not be entirely clear even to humans. And, as we discussed in another article, our advances in artificial intelligence will help us understand ourselves better.
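To make the idea concrete, here is a toy "learning to learn" sketch (an invented example, not any published algorithm): the inner loop learns one task by ordinary gradient descent, while the outer, meta-level loop learns *how* to learn by selecting the inner learning rate that lets the learner adapt best across a whole family of tasks.

```python
import random

random.seed(0)

def make_task():
    """A task: fit y = a * x for a randomly drawn slope a."""
    a = random.uniform(-2.0, 2.0)
    return [(x, a * x) for x in (-1.0, 0.5, 1.0, 2.0)]

def inner_learn(data, lr, steps=20):
    """Ordinary learning: gradient descent on mean squared error."""
    w = 0.0
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def meta_learn(candidate_lrs, n_tasks=50):
    """Meta-learning: pick the learning rate with the lowest
    post-adaptation loss summed over many sampled tasks."""
    tasks = [make_task() for _ in range(n_tasks)]
    return min(candidate_lrs,
               key=lambda lr: sum(inner_learn(t, lr) for t in tasks))

best_lr = meta_learn([0.001, 0.01, 0.1, 0.5])
print("meta-learned inner learning rate:", best_lr)
```

Real meta-learning systems optimize far richer things than a single learning rate (initializations, update rules, whole architectures), but the two-level structure — learning inside a task, and learning across tasks how to learn — is the same.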
2. Generative models applied to other sectors
Generative models will be applied in many new fields. Currently, most research focuses on image and voice generation, but we will see these methods incorporated into tools that seek to model complex systems. One of the areas where we will see the most activity is the application of deep learning to economic models.
3. Great advances in self-learning in games
Games are a perfect scenario for developing intelligence: they have explicit rules, they involve competing to find a winning strategy, and their objectives are clear, so we can measure the effectiveness of each action.
The ability of AlphaGo Zero and AlphaZero to "learn from scratch" has been a huge leap forward in Artificial Intelligence. Some believe its impact is on the same level as the discovery of deep learning itself. Learning to play by oneself is the first step on the road to developing true AI.
Curiously, the DeepMind team has no name for this technique, although another research group calls it "Expert Iteration" ("ExIt").
As a personal bet, and one we have made before, this year we will probably see AI beat humans at the game StarCraft; but we will also, surely, see this capability applied to very different domains beyond video games.
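A minimal self-play sketch can show the "learning from scratch" idea, though nothing like AlphaZero's scale: tabular Monte-Carlo value learning on a tiny invented game of Nim (5 stones; take 1 or 2; whoever takes the last stone wins). Both sides share one value table and improve together, discovering the winning strategy with no human knowledge.

```python
import random

random.seed(1)
Q = {}  # (stones_left, action) -> estimated value for the player to move

def qval(s, a):
    return Q.get((s, a), 0.0)

def choose(s, eps=0.2):
    """Epsilon-greedy move selection from state s."""
    acts = [a for a in (1, 2) if a <= s]
    if random.random() < eps:
        return random.choice(acts)
    return max(acts, key=lambda a: qval(s, a))

def play_episode(alpha=0.1):
    """One game of self-play; update values from the final outcome."""
    s, history = 5, []
    while s > 0:
        a = choose(s)
        history.append((s, a))
        s -= a
    r = 1.0  # the last mover won; rewards alternate backwards in time
    for s, a in reversed(history):
        Q[(s, a)] = qval(s, a) + alpha * (r - qval(s, a))
        r = -r

for _ in range(5000):
    play_episode()

# Game theory says the winning move at 5 stones is to take 2,
# leaving 3 (a losing position for the opponent).
print("learned move at 5 stones:", choose(5, eps=0.0))
```

AlphaZero replaces the lookup table with a deep network and guides play with tree search, but the core loop — play against yourself, then improve your evaluations from the outcome — is the same.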
4. Intuitive machines solving the challenge of semantics
Let's say there are two broad paths in AI: rational machines and intuitive machines. They have been two separate branches of research, and of course far more progress has been made on the first. Now we hope the two paths will converge.
The notion of artificial intuition will cease to be a marginal concept and become a more widely accepted idea in 2018. We will understand, if we have not realized it already, that intelligence is not only reasoning, and we will be experimenting in depth with much more complex dual-process architectures.
5. The impossible “explainability”
Neural networks, like all the increasingly complex models built around them, have a small problem: they are black boxes. When the machine produces a solution to a problem, it is very difficult to know what "reasoning" led it to that solution. Partly because there is none; but, technicalities and philosophical questions aside, the problem is that we cannot extract the machine's explanation for its answer. This may seem unimportant but, of course, when you want to apply AI in your business, you need tight control over why some decisions are being made and not others. Much current research seeks to develop methods that let machines explain their "reasoning", producing some kind of "report" of the process, but in essence these are tricks or sleights of hand.
Most likely, we will have to live with this uncertainty, at least for many years. Machines will solve increasingly complex problems but, in essence, we will have to accept not knowing how. 2018 will bring many conversations on this topic.
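One of those "report" tricks can be sketched in a few lines: permutation importance, which treats the model as a pure black box and measures how much its accuracy drops when one input feature is scrambled. The model and data below are invented for illustration; note that this describes the model's *behaviour*, not its actual reasoning.

```python
import random

random.seed(2)

def predict(x):
    """The black box: in practice a trained network we cannot inspect."""
    return 1 if 2.0 * x[0] + 0.1 * x[1] > 1.0 else 0

# Synthetic dataset; labels come from the model itself, so baseline
# accuracy is 1.0 and any drop is attributable to the shuffling.
X = [[random.random(), random.random()] for _ in range(500)]
data = [(x, predict(x)) for x in X]

def accuracy(dataset):
    return sum(predict(x) == y for x, y in dataset) / len(dataset)

def permutation_importance(feature):
    """Accuracy drop after shuffling one feature's column."""
    column = [x[feature] for x, _ in data]
    random.shuffle(column)
    broken = [(list(x), y) for x, y in data]
    for (x, _), v in zip(broken, column):
        x[feature] = v
    return accuracy(data) - accuracy(broken)

for f in (0, 1):
    print(f"importance of feature {f}: {permutation_importance(f):.3f}")
```

The report correctly says feature 0 matters far more than feature 1 — a useful audit for a business deployment — yet it tells us nothing about *how* the model combines them, which is exactly the gap the text describes.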
6. An increase in scientific studies in Deep Learning
Last year, some 4,000 papers were presented at the ICLR deep learning conference alone. The paradox of 2018 is that, in order to review all the scientific articles on Deep Learning, we will need to apply Deep Learning. There is no humanly possible way to review so much information in so little time. If we want to keep up the pace of research that scientists demand, we need machines to help us.
In addition, many of these papers are complex, with very advanced mathematics behind them. Reviewing and assessing these studies is very difficult and requires mathematical knowledge that not all researchers have.
Scientific output in Deep Learning may well triple or quadruple in 2018. Can Deep Learning understand and study the research on Deep Learning?
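A first machine-assisted triage step needs nothing deep at all. The sketch below (with invented paper titles and abstracts) ranks incoming papers by bag-of-words cosine similarity to a reviewer's topic profile; a real system would swap in a learned text model, but the pipeline is the same.

```python
from collections import Counter
from math import sqrt

def bow(text):
    """Bag-of-words representation: word -> count."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bags of words."""
    dot = sum(a[w] * b[w] for w in a)
    norm = lambda c: sqrt(sum(v * v for v in c.values()))
    return dot / (norm(a) * norm(b))

# The reviewer's area of expertise, as a keyword profile.
topic = bow("meta learning gradient descent few shot adaptation")

abstracts = {
    "paper-A": "we propose a meta learning method for few shot adaptation",
    "paper-B": "a study of convolutional filters for image classification",
}

ranked = sorted(abstracts,
                key=lambda p: cosine(bow(abstracts[p]), topic),
                reverse=True)
print("review first:", ranked[0])
```

Even this crude matching can route thousands of submissions to the right reviewers; assessing the mathematics inside them, as the text notes, is a far harder problem.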
7. New deep learning environments
The path towards more predictable and controlled development of deep learning systems runs through the development of new learning environments.
Biological systems learn from interaction with their environment; they cannot learn in isolation. We have been developing self-learning algorithms, as mentioned above, but we need environments rich in experiences, where machines can truly learn by interaction.
The current deep learning training procedure is one of the crudest teaching methods imaginable. It is based on the repetitive, random presentation of facts about the world, in the hope that the student (that is, the neural network) will manage to untangle them and build sufficient abstractions. Not even the worst of our teachers taught us that way.
In 2018 we will see new development environments and infrastructures focused on Deep Learning, where machines can experiment and interact with a particular environment.
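What such an environment looks like to the learning agent can be shown with the reset()/step() interface popularised by OpenAI Gym. The environment below is invented for illustration — a one-dimensional corridor where the agent must walk right to reach the goal — and the agent is deliberately a random one: the point is the interaction loop that learning plugs into.

```python
import random

class Corridor:
    """Tiny environment: positions 0..length, goal at the right end."""
    def __init__(self, length=5):
        self.length = length

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):  # action: 0 = step left, 1 = step right
        self.pos = max(0, min(self.length, self.pos + (1 if action else -1)))
        done = self.pos == self.length
        reward = 1.0 if done else -0.01  # small cost per wasted step
        return self.pos, reward, done

random.seed(3)
env = Corridor()
state, done, total = env.reset(), False, 0.0
while not done:
    action = random.randint(0, 1)  # a learner would choose smarter actions
    state, reward, done = env.step(action)
    total += reward
print(f"reached the goal, episode return = {total:.2f}")
```

Richer environments mean richer versions of exactly this loop: more observations, more actions, and rewards that encode experiences worth learning from.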
8. Conversational cognition
Let's go back to the goal of high-level research in artificial intelligence: the search for an Artificial General Intelligence. This is a topic we have discussed many times at Digital Bridges.
To make matters worse, what we understand by intelligence is basically human intelligence, so we try to make machines intelligent, or rather, humanly intelligent.
We know that language and conversational interaction are among the pillars of our intelligence. In fact, a long-standing anthropological debate is whether language caused our cognitive growth or whether our cognitive growth gave rise to language.
Wherever the socio-cognitive processes and phenomena of development arise, the place to discover them is the conversational context. We must give machines the ability to learn through conversation. So far, deep learning applied to chatbots has been limited; there is no cognitive learning behind them, and we are missing a golden opportunity. The amount of information machines can gather through conversation is enormous, and above all it is very rich information. Our own reasoning, our intelligence, is embodied in words. When we talk to Siri, Google or Alexa, we may be helping them learn.
Many of 2018's developments in Artificial Intelligence will consist of applying meta-learning and intuitive intelligence through conversational cognition.