Getronics Editorial Team
Artificial Intelligence (AI) seems to be appearing everywhere, and that’s because it is. Although AI has existed as a concept since the middle of the last century, only in recent years has it proliferated to the point where it touches so many facets of our lives. There’s so much information and noise surrounding AI and its applications that it can be difficult to keep up, so if that sounds familiar, keep reading to find out what you need to know about this pivotal technology.
AI WEBINAR: REGISTER HERE
If you want to learn more about AI beyond this article, you can register now for this event! Join us live on 30.11.2023 at 2 p.m. for our webinar in English, “A small introduction to the beauty of AI”.
What is artificial intelligence?
Artificial intelligence is a very broad term. It generally refers to any piece of software designed to tackle a problem in a way that appears intelligent. This, of course, is open to interpretation.
In most instances, artificial intelligence is used to refer to a machine’s ability to perform tasks that normally require human intelligence, such as making decisions, recognising patterns, solving problems, and, in some cases, learning from its experiences and improving itself.
Artificial intelligence is really an umbrella term: subsets such as machine learning, deep learning, and natural language processing are among the techniques that enable machines to behave intelligently.
A brief history
In 1943, a paper titled ‘A Logical Calculus of the Ideas Immanent in Nervous Activity’ described neural networks, and then in 1950, Alan Turing devised the imitation game (now referred to as the ‘Turing test’): a test of a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. These two events laid the groundwork for establishing AI as a field of study, which took place in 1956 at the Dartmouth Conference.
In the decades that followed, innovators and computer scientists theorised about and built programs to demonstrate artificial intelligence, beginning with the first artificial neural network, the ‘Perceptron’, in 1957, and ‘ELIZA’, a program that could simulate conversation, in 1966. It wasn’t until 1997, however, that the world saw the true potential of AI, when IBM’s ‘Deep Blue’ defeated chess world champion Garry Kasparov.
A little over 25 years later, AI as we currently know it is transforming virtually every industry and playing a major role in our day-to-day lives, even if we don’t always see or know it.
How does artificial intelligence work?
AI, to put it simply, works by combining huge data sets with processing algorithms. The algorithms learn by spotting patterns of behaviour within that data. The whole process can roughly be broken down into five steps:
1. Input: You can’t have AI without input data, which can take many forms: text, numbers, images, audio, sensor readings, and countless others.
2. Processing: The data must be readable by the AI algorithm. In this step, the algorithm takes in the data and (generally*) interprets it using the patterns it has learned from other data sets.
*This depends on the AI in question; some learn from real-time data, for example.
3. Prediction: Having processed the data, the algorithm predicts outcomes, and those predictions are judged a success or a failure.
4. Adjustment: If a prediction fails, the algorithm can learn from the mistake and repeat the processing step differently to correct it; the algorithm’s rules may need to be adjusted to achieve this.
5. Assessment and output: Once the AI has finished its assigned task, the final step is to assess the results and produce output, along with feedback that helps the next time the algorithm is run.
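The five steps above can be sketched in a few lines of Python. This is a deliberately tiny toy, not any real product’s pipeline: the “model” is a single threshold number, and the data is made up for illustration.

```python
# Toy illustration of the input -> process -> predict -> adjust -> assess loop.
# The "rule" being learned is a single threshold separating small from large numbers.

data = [(1, 0), (2, 0), (3, 0), (7, 1), (8, 1), (9, 1)]  # (input, expected label)
threshold = 0.0  # the algorithm's one adjustable rule

for _ in range(20):  # repeat the processing step until predictions stop failing
    failures = 0
    for x, expected in data:
        prediction = 1 if x > threshold else 0   # processing + prediction
        if prediction != expected:               # assessment: success or failure?
            failures += 1
            # adjustment: nudge the rule in the direction that fixes the mistake
            threshold += 0.5 if prediction == 1 else -0.5
    if failures == 0:
        break

print(f"learned threshold: {threshold}")  # output, useful feedback for the next run
```

After a few passes the threshold settles between the two groups of numbers, which is the essence of “learning from mistakes” described above.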
Types of artificial intelligence
AI can be classified into three types: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Superintelligence (ASI).
Artificial narrow intelligence
ANI represents all existing AI, from the most basic AI technologies to the most complex and capable. As a category, ANI is used to refer to systems that can only perform a specific task and nothing more, meaning that they have a very limited scope.
Artificial general intelligence
AGI is the ability of AI to learn, perceive, understand, and function like a human being can. These systems can, in theory, master multiple competencies and form connections across different domains, thereby replicating the multi-functional capabilities seen in humans.
Artificial superintelligence
ASI, in addition to replicating human intelligence at the AGI level, will be leaps and bounds better at everything it does. Often referred to as the “singularity”, ASI is a level that we’re not at yet—and some hope that we’ll never reach it.
AI types and applications
Machine learning

Machine learning falls within the AI superset and focuses on the use of data and algorithms to imitate the way that humans learn. As a machine learning model is fed more input data, it gradually improves its accuracy, much in the same way as humans learn while they’re being taught at school, leading to better and more powerful AI applications.
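To make “improves its accuracy as it trains” concrete, here is a minimal sketch in plain Python, assuming nothing but made-up example data: the model is one weight that gradually learns the rule y = 2x.

```python
# Toy machine-learning model: learn the rule y = 2x from examples.
# Each pass over the data nudges the weight, so predictions improve with training.

samples = [(1, 2), (2, 4), (3, 6), (4, 8)]  # (input, correct answer)
weight = 0.0
learning_rate = 0.01

for epoch in range(200):
    for x, y in samples:
        prediction = weight * x
        error = prediction - y
        weight -= learning_rate * error * x  # move the weight to shrink the error

print(round(weight, 3))  # ends up very close to 2.0
```

Real machine-learning systems work with millions of weights rather than one, but the principle, adjusting parameters to reduce prediction error, is the same.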
Natural language processing
Natural language processing (NLP) is another subset of artificial intelligence that focuses on equipping computers with the ability to understand text and spoken words, and to respond to them, much as humans do. ChatGPT is, of course, one of the best-known examples of NLP in action.
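A first step in almost any NLP system is turning raw text into something countable. The sketch below shows that step only, crude tokenisation and a word-frequency “bag of words”; the sentence is invented for illustration.

```python
# A first taste of NLP: split raw text into tokens, then count word frequencies.
# This "bag of words" is the kind of representation many language systems start from.

from collections import Counter

text = "AI can understand text and spoken words, much as humans can"

tokens = text.lower().replace(",", "").split()  # crude tokenisation
bag = Counter(tokens)                           # word -> frequency

print(bag["can"])   # appears twice
print(bag["text"])  # appears once
```

Systems such as ChatGPT use far richer representations than word counts, but the idea of converting language into numbers a model can process is the common thread.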
Generative AI

Generative AI (GenAI) is a relatively new type of artificial intelligence that can create a wide variety of synthetic data such as images, videos, audio, text, and 3D models. It does this by learning patterns from existing data, then using this knowledge to generate new and unique outputs. This AI can be used for a variety of purposes, from generating logos for businesses to providing synthetic data for training machine learning models.
Conversational AI

This type of artificial intelligence can simulate human conversation. It combines NLP with foundation models and machine learning to enable two-way conversations between humans and the AI itself. It’s most commonly used in chatbots, virtual assistants, and text-to-speech software.
Computer vision

Computer vision enables AI to derive meaningful information from digital images, videos, and other visual inputs. The AI can then act or make recommendations based on the information from that input. Computer vision therefore enables applications such as self-driving cars and traffic incident detection.
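At its lowest level, an image is just a grid of brightness values, and vision systems derive information by scanning that grid. The toy below, with an invented 3x3 “image”, locates the bright pixels, the kind of primitive operation real pipelines build on.

```python
# Toy computer-vision step: scan a tiny grayscale "image" (brightness values 0-255)
# and report the coordinates of pixels bright enough to stand out.

image = [
    [10,  12, 200],
    [11, 220, 215],
    [ 9,  10,  13],
]

bright = [(row, col)
          for row, pixels in enumerate(image)
          for col, value in enumerate(pixels)
          if value > 128]  # simple brightness threshold

print(bright)  # coordinates of the bright region in the top-right
```

Self-driving cars obviously go far beyond thresholds, layering learned models on top, but every vision system starts from pixel grids like this one.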
Robotics

Robotics has come a long way with the proliferation of AI. Nowadays, there’s no shortage of applications where AI is used to enable robots to perform very specific tasks. This is most widely seen in warehouse and factory environments, where robots automate workflows that previously required humans.