- The world is changing and processes are accelerating. Artificial Intelligence is entering everyday life to simplify it, help companies make decisions and turn routine processes into background tasks. But how is this revolution regulated?
- Experts say there is no need for scaremongering, but they do recognise that AI is approaching human levels of learning ability. This, they warn, carries regulatory risks that do not yet have a clear answer.
In this article we take a look at the evolution of Artificial Intelligence, how it is changing our lives and whether we are really losing control over these developments.
AI law in the world
After many rumours, and with a draft on the table since 2021, the European Union’s Artificial Intelligence (AI) Act is on its way: a milestone intended to serve as an example worldwide for the development and advancement of this technology. The regulation seeks to control these systems, make them safe and ensure they respect the fundamental rights of users in both the public and private sectors.
However, the law still contains many gaps that legislators hope to resolve before the end of the year, and it will not be until at least 2026 that the regulation comes into force throughout Europe. The main idea is to regulate AI according to its capacity to cause harm to society, following a ‘risk-based’ approach: the higher the risk, the stricter the rules.
Here are the 5 key points of this new law:
- AI systems will be classified and labelled according to their level of risk.
- Transparency will be required, including compliance with copyright law.
- Applications that violate citizens’ privacy rights will be banned and/or withdrawn.
- General-purpose AI will have to provide transparency and security guarantees.
- Sanctions will be applied to ensure compliance with this regulation.
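The risk-based approach behind these points can be sketched as a simple tier-to-obligation mapping. The tier names and obligations below are illustrative assumptions loosely based on the categories publicly discussed for the draft, not the legal text:

```python
# Illustrative sketch of a "risk-based" classification scheme.
# Tier names and obligations are simplified assumptions, not the legal text.
RISK_TIERS = {
    "unacceptable": "banned (e.g. systems that violate citizens' privacy rights)",
    "high": "strict requirements: assessment, human oversight, documentation",
    "limited": "transparency obligations (e.g. disclose AI-generated content)",
    "minimal": "no specific obligations beyond existing law",
}

def obligations_for(risk_level: str) -> str:
    """Return the (simplified) regulatory consequence for a risk tier."""
    if risk_level not in RISK_TIERS:
        raise ValueError(f"Unknown risk tier: {risk_level!r}")
    return RISK_TIERS[risk_level]

print(obligations_for("high"))
```

The key design idea the law follows is exactly this lookup: the obligation depends on the tier a system is placed in, not on the specific technology used.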
Is it a real risk?
More than 300 industry experts and researchers, including the CEOs of Google DeepMind and OpenAI, have warned in a letter that AI poses an “extinction risk” comparable to that of pandemics or nuclear war.
The main concern is also linked to the large-scale dissemination of disinformation and propaganda and the possible elimination of jobs of all kinds. At the same time, we are entering uncharted territory, hitherto reserved only for fiction, in which machines surpass the intellectual capacity of a human being.
According to the experts themselves, artificial intelligence used to be capable only of executing solutions to very specific problems, but with the learning capacity brought by deep learning the picture changes completely. AI is now able to perform more cross-cutting and creative tasks, very similar to what a human can do.
AI in the drive for Industry 4.0
AI is poised to radically transform businesses of all sizes and industries. Generative artificial intelligence alone, capable of answering users’ questions or creating images on demand, could add 2.4 to 4 trillion euros to the global economy. If we talk about the technology in general terms, the figure rises to between 10.5 and 16.7 trillion.
According to Carlos Martínez, global director of AI solutions and services at Telefónica Tech: “AI is a technology with extraordinary potential to bring about an industrial and economic revolution. Its application in the business sector not only allows us to understand what has happened and why, but also to anticipate and predict what is going to happen.”
Furthermore, the data today show that 55% of companies use AI in some way to optimise their business, more than double the figure in 2017. Yet despite this notable improvement, investment is currently somewhat stagnant, owing to the complexity and pace of advances in the field and the difficulty of finding qualified personnel to fill these roles, where the learning curve is very steep.
In conclusion, AI is transforming the way businesses operate, and the industrial sector is no exception. Used well, this technology holds great promise for an industry seeking to standardise processes and decarbonise its factories. Conversely, its misuse can threaten the stability of companies and the creation of new jobs, as well as cause wider social damage in the form of disinformation or risks to national security.
The need for regulation
In view of the measures that government institutions may take, the technology industry association DigitalEurope has urged the European Union to avoid over-regulating foundational artificial intelligence (AI) models, a situation that could force startups to leave the region.
In an open letter signed by 32 digital associations across the continent, DigitalEurope points out that only 8% of businesses in Europe currently use AI, a far cry from the European Commission’s target of 75% by 2030.
It also points out that just 3% of AI-savvy companies globally come from the EU, warning that the continent’s competitiveness and financial stability depend on the ability of companies to implement AI in areas such as green technology, healthcare, manufacturing and energy.
In summary, given that AI presents itself as an unstoppable technology, regulation should arguably not affect every new AI-related technology, but only those with high-risk uses, so as not to slow the industry’s progress.
In this sense, Europe knows it cannot compete with the US in AI technological development, but it has sought to be at the forefront of what it calls ethically responsible AI. A committee of experts has been studying the issue for years in order to put in place regulatory actions that ensure ethical standards in line with European sentiment. Even so, this regulatory race seems to need bolder and faster measures to keep pace with the unstoppable technological progress now under way.