Artificial Intelligence (AI) is nothing new: it is at least 60 years old. The novelty is that it is now available to a multitude of individuals and organizations, and what was once the domain of NASA is now used daily by companies and citizens. Today, thanks to the miniaturization and integration of computer systems, we carry enough computing power “in our pockets” to guide 120 million Apollo rockets simultaneously.
Despite this, autonomous vehicles are not yet everywhere, and drones do not operate beyond the line of sight. Millions of people still work every day in dangerous environments, so we may ask ourselves: what is wrong?
It is not uncommon that, thanks to a disruptive innovation, the technology “push” is now stronger than the consumer “pull”. We are capable of doing smart things, but who actually wants them?
And this is where the issue becomes complex, for the following reasons:
People are afraid of robots
We’ve all seen Robocop, right? What do we do when a robot fails? What is the social reaction when a driverless car is involved in an accident? More than 1,000 people die every day on the world’s roads, but how many of those deaths make the mainstream media? Yet when an autonomous car injures a pedestrian in the USA, the story appears on television news across the planet; and when a driver falls asleep in the passenger seat of a Tesla on a motorway in the United Kingdom, it is pilloried in the press. We have to accept that the world’s population is uneasy about AI, and we must ensure that its implementation does not put people at risk, adopting adequate regulation for public safety.
Service and technology providers need to work together with legislators and regulators to create and adapt regulatory frameworks that provide adequate management and control, while continuing to move forward. Currently, attention is being focused on the use of autonomous vehicles on motorways and on the use of drones in urban areas and controlled airspaces, such as around airports.
In AI development, it is best to avoid “two steps forward and one step back”. People remember failures more than successes, so careful implementation and measured management are essential to minimize the risk of failure and damage.
People value their privacy
The news is plagued with violations of personal data (how can we forget the latest Facebook scandal?). The only reason there are not more adverse reactions is that, in general, people recognize a positive net value in an AI that manages and learns from their data. It is a mutually agreed trade-off: people will continue to share their data as long as the perceived benefits outweigh the risks of loss or misuse of that data.
While legislators and regulators must continue to protect us, we must also ensure that the use of our data is correct and adds value. For a service provider, this equation is easier to solve. We can use data and artificial intelligence to improve the lives of citizens (directly and indirectly) wherever they live, work and travel. Improving energy efficiency, travel safety and the passenger experience, as well as designing and developing citizen-centric services, are all real, practical examples of how AI can improve things, with negligible risk to the data.
If AI is an accelerator of human behavior, we clearly want to accelerate the good rather than promote the bad.
People want to protect their jobs
There are many things we can do with artificial intelligence; the key is to prioritize those we should do. If unfocused use of AI reduces employment in a community where role adaptation is difficult, is that social progress?
There are some clear uses of AI that deliver both economic and social benefits. First, we should use AI to remove people from dangerous environments. That is invaluable to everyone. If we can do it, why are there still people working on high-risk tasks in activities related to, for example, oil and gas, road and rail maintenance, or waste processing? Our focus is on removing individuals from these environments by using AI to handle the repetitive tasks that can genuinely be automated.
The other area where the value of AI is beyond doubt is inspection and testing. In a world where skilled labor is scarce, why use our most experienced people to inspect products and assets that are in good condition? This is where AI is most reliable: continuously inspecting and monitoring conditions to ensure they stay within predetermined limits, and alerting staff when conditions fall outside those limits. That is when the most valuable human resources should be deployed, because at that point it is usually more effective to make use of a ‘human processor’, whether in infrastructure maintenance or in diagnosing the health of patients.
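The inspection pattern described above, in which routine in-limit readings are handled automatically and only the exceptions are escalated to a human, can be sketched in a few lines. The sensor names and thresholds below are purely illustrative assumptions, not drawn from any specific system:

```python
from dataclasses import dataclass

# Illustrative limits only; real limits would come from engineering standards.
LIMITS = {"temperature_c": (10.0, 80.0), "vibration_mm_s": (0.0, 7.1)}


@dataclass
class Reading:
    sensor: str
    value: float


def check_readings(readings):
    """Return only the readings outside their predetermined limits.

    In-spec readings are processed automatically; the exceptions returned
    here are the cases worth escalating to a human inspector.
    """
    alerts = []
    for r in readings:
        low, high = LIMITS[r.sensor]
        if not (low <= r.value <= high):
            alerts.append(r)
    return alerts


readings = [Reading("temperature_c", 25.0), Reading("vibration_mm_s", 9.3)]
for a in check_readings(readings):
    print(f"ALERT: {a.sensor} = {a.value} outside limits")
```

The design point is simply that the machine does the continuous, repetitive comparison, while the short list it returns is where the experienced human spends their time.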
For this reason, our interest in recruiting the brightest and best in mathematics and computer science must be balanced with developing consulting talent. When we consider the demographic profile of the relevant skills (our AI technical workforce mainly comprises Millennials and “Generation Z”, while our consultants are closer to the baby boomers of the 60s), the key is to design effective knowledge-management processes so that the artificial-intelligence geniuses of the future know how to use their skills to generate the maximum economic impact with the greatest social value.