Artificial Intelligence and Industrial Cybersecurity


Artificial Intelligence is developing so rapidly that it seems to have already invaded every corner of our society. We struggle to identify which aspects of our lives it influences, or will influence, because every day brings news about it of every kind, a veritable flood of information, and we begin to think it may be useful for everything. There is some truth in that, so in this blog we are going to bring news of the latest advances in Artificial Intelligence (AI), together with some comments about its impact, preferably its impact on industry, which is what interests us most.

AI can help us a great deal, or it can cause us problems, and that largely depends on how we integrate this technological wave into our own intellectual baggage. We have to learn the essentials of Artificial Intelligence; there is no other choice, just as we once learned to use office tools, industrial automation or the basic concepts of cybersecurity. These are the times we live in. First of all, we will dedicate a few lines to placing the current development in time: it is not spontaneous at all, but the result of many years of research and development.

First steps of Artificial Intelligence

Current Artificial Intelligence systems use elements that are essentially the same as at their origin, back in the late 1980s. Yes, I am referring to neural networks, those programs that try to imitate how the neurons in our brain work. They became fashionable in academic circles at the time (although they had been designed a few years earlier) and worked almost magically, since they were the first computer systems for which you could not tell exactly how a conclusion had been reached. They still work the same way today.
Learning phase: we organize a set of input data and train the network against a previously known set of outputs, so that the system adjusts the relationships between nodes (the links between neurons) in an iterative process that must converge. This is what is called "learning" (a minimal sketch of such a training loop is shown below).

 

Graphic and real example of the operation of neural networks

Their main problem was that a lot of data was needed to "train" them; a much-repeated phrase was that no neural network worked well with fewer than 1,000 training examples. That amount of data seemed enormous at the time, when computers were ridiculously slow compared to today's machines. Even so, they were able to correctly identify handwritten digits and read car license plates. An incredible success, but it did not prosper very far, because next to each Artificial Intelligence system a computer expert was needed to fine-tune it. This acted as a brake on its expansion and, on the other hand, the power of computing equipment was insufficient for the increasingly demanding needs of neural networks, so it took almost 20 years for relevant advances to emerge.
This situation is reflected in the attached graph, which shows references in English-language literature to the term "Artificial Intelligence", according to Google. The peak centered around 1990 and the subsequent valley can be observed, until around 2015, when the current explosion begins.

*I was lucky enough to work in AI in the 1990s for a few years. It was a difficult time because of the limited echo the field had in society, and I can assure you that the change that has occurred recently is extraordinary.

 

https://books.google.com/ngrams/graph?content=artificial+intelligence&year_start=1930&year_end=2019&corpus=en-2019&smoothing=0

The publication by Google in 2017 of the article "Attention Is All You Need" (https://arxiv.org/pdf/1706.03762.pdf), paraphrasing the title of the Beatles song, was the starting signal for the current wave of development. In that article, Google engineers presented a special architecture for analyzing human language, called the "Transformer". It represented a break with the best that had existed in the field up to that point, the pseudo-statistical prediction of the next word in a sentence. We have all known, and suffered, that last approach in the predictive text correctors installed on mobile phones.
The new Transformer architecture tries, with mathematical tricks, to detect and highlight the most important words in a sentence, so that the interpretation of the remaining words, whether they come before or after, is strongly conditioned by those main words. I prefer not to go into details here, as they are described in the cited article, but a small sketch of the core idea follows.
This apparently simple trick represented a fundamental change, since the training data sets, while gigantic, became manageable for existing computers and, especially, for graphics cards (GPUs). These "accessories" have been the salvation of Artificial Intelligence (and the enrichment of their manufacturers, with NVIDIA as the main beneficiary).

Current situation of AI

The current scenario is in full swing. Google's subsidiary DeepMind, which had its great success when its system beat the world champion at the game of Go, has not managed to displace, with its new product Gemini, its main rival, OpenAI, now in Microsoft's orbit. OpenAI launched GPT-3 (the T stands for Transformer: Generative Pre-trained Transformer) in 2020, ChatGPT at the end of 2022, and GPT-4 in March 2023. In just two months ChatGPT broke all records, gaining more than 100 million users, and by the end of 2023 it had around 180 million monthly active users.
The training of these systems is something fabulous: if GPT-2 needed some 40 GB of text in 2019, GPT-3 was trained on around 570 GB of data, all taken from the Internet, and GPT-4, with data updated up to September 2021, is reported to have on the order of 220 billion parameters.
All these numbers are dizzying, but they give us an idea that these systems work on the basis of an enormous amount of data and processing capacity (with the associated energy cost). There are many initiatives to reduce these figures, but for the moment the main winner of the race is GPT-4, which today occupies first place in most AI rankings, and OpenAI plans to launch GPT-5 in 2024. We shall see.
If we look at the current AI landscape, we find many systems following a multitude of architectures and variants, but a clear distinction can be drawn between those that are general purpose (GPT-4, Mistral, Gemini, Claude 3, etc.) and those that have specialized in some application niche, for example image generation (Midjourney, Leonardo AI, DALL-E 3, etc.), voice, video and so on.

Application of Artificial Intelligence in industrial cybersecurity

With this background and this entire panorama we can now begin to analyze the importance of AI and its impact on our industrial activities.

To begin with, there is a whole series of applications that are not related to process control and that are similar to those found in other areas of corporate computing, among which we can mention production and supply logistics, planning, optimization, market analysis, customer marketing, and the study of trends and time series.
Here we see earlier generations of today's AI at work, with Machine Learning (ML) and Deep Learning architectures, and even the now "old" Big Data. We must also include in this group the applications in the field of cybersecurity: analysis of network traffic behavior, anomaly detection, help with incident management, correlation and interpretation of events in logs, malware analysis and so on; a small sketch of the anomaly-detection case follows.
And, on the other hand, there is the application of AI to the control of industrial processes, sometimes in real time: identifying instrumentation errors on the fly, spotting trends and unwanted drifts in control variables, optimizing linked control loops, advanced robotics and more (see the drift-detection sketch below).
As we can see, the field of application is immense, but while it is very important to be aware of everything mentioned above, which we will analyze in detail, we must not forget that, like all technology, it has its dark side: that of those who use it for malicious purposes. This is an extremely dangerous aspect that may represent a paradigm shift in the creation and exploitation of malware.

Conclusion

In this first chapter of the blog we have introduced the history and some basic concepts of AI, but the most interesting material will come in future installments. Our plan is to alternate between news from the world of AI and a review of the applications and main threats that are appearing, so that we can all get the most out of this reading.

AUTHOR:

Erik de Pablo Martínez

(CCI Expert)