What is AI?

Artificial Intelligence (AI) may be organising our social media timelines and automatically improving the pictures on our phones, but it’s a field that’s widely misunderstood.

AI is, after all, an incredibly complex field in computer science, where the top researchers earn as much as NFL quarterback prospects. It introduces terms seldom found in daily life, like training data, neural networks, and large language models. Many view AI through the lens of popular culture, with their understanding formed by characters like Star Trek’s Lt. Commander Data.

Let’s take a step back. In this post, we’ll talk about what AI actually is. You’ll learn the essential terminology, how AI models work, and how AI is changing both our present and future worlds.

What is artificial intelligence?

According to a January 2022 study by Ipsos Mori for the World Economic Forum, only 64% of global respondents claimed to fully understand what artificial intelligence is. That figure aligns with a survey from the UK’s Centre for Data Ethics and Innovation, which found that 63% of the UK public understood AI, and that just 13% could give a detailed explanation of it.

On a basic level, artificial intelligence can be understood as a decision made by a computer whose “smartness” is indistinguishable from that of a decision made by a human — no matter how the decision is reached. As Alan Turing, the legendary computer scientist and codebreaker, put it: “A computer would deserve to be called intelligent if it could deceive a human into believing that it was human.”

What is the difference between AI and Machine Learning?

AI and machine learning are often mistakenly treated as the same thing, but there are nuances that separate the two concepts.

AI is defined as a computer program making decisions that, in their “smartness”, are indistinguishable from human decisions.

Since the 1960s, AI has grown into a very large collection of algorithms that can perform a wide range of tasks. One of those tasks, detecting and recognising patterns, is what we usually call machine learning.

Machine learning has improved rapidly over the last 15 years thanks to progress in one of its families of algorithms: neural networks. With computers more powerful than ever, neural networks can be made “deeper” (or larger), hence the rise of deep learning. The mathematical tools for optimising neural networks have also improved, with techniques such as backpropagation and the ReLU activation function allowing for fast and accurate training.
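
To make those terms a little more concrete, here’s a minimal, illustrative sketch of a tiny neural network written in Python with NumPy. It isn’t how production systems are built, but it shows the same basic mechanics at a very small scale: a ReLU activation in the forward pass, and a backpropagation step that nudges the weights towards a better answer. The toy task (learning to add two numbers) and every parameter choice are invented for the example.

```python
import numpy as np

# A tiny neural network: 2 inputs -> 3 hidden units (ReLU) -> 1 output.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)

def relu(x):
    return np.maximum(0, x)

# Toy training data: the pattern to learn is y = x0 + x1.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = X.sum(axis=1, keepdims=True)

learning_rate = 0.05
for _ in range(2000):
    # Forward pass.
    h_pre = X @ W1 + b1      # hidden layer before activation
    h = relu(h_pre)          # ReLU keeps positive values and zeroes out the rest
    y_hat = h @ W2 + b2      # the network's prediction
    loss = ((y_hat - y) ** 2).mean()

    # Backpropagation: work out how much each weight contributed to the error.
    grad_y_hat = 2 * (y_hat - y) / len(X)
    grad_W2 = h.T @ grad_y_hat
    grad_b2 = grad_y_hat.sum(axis=0)
    grad_h_pre = (grad_y_hat @ W2.T) * (h_pre > 0)   # gradient through ReLU
    grad_W1 = X.T @ grad_h_pre
    grad_b1 = grad_h_pre.sum(axis=0)

    # Nudge every weight a small step in the direction that reduces the error.
    W1 -= learning_rate * grad_W1
    b1 -= learning_rate * grad_b1
    W2 -= learning_rate * grad_W2
    b2 -= learning_rate * grad_b2

print(f"final training loss: {loss:.4f}")  # should approach zero as the pattern is learned
```

Modern deep learning systems follow the same recipe, just with vastly more weights, layers, and data.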

The combination of these advances has led to today’s generative AI, with systems becoming more scalable and accurate.

What can AI do? 

AI is an immense field of research in computer science, and we use AI in our daily lives — often without realising it. Beyond high-profile generative AI tools like ChatGPT and Midjourney (a model that can create vivid images from a single written prompt), there are the more discreet AI systems that make our Instagram pictures look amazing and protect us from online threats. It wouldn’t make sense to list every AI application — or even the most high-profile ones. Instead, let’s talk about AI’s capabilities in broad terms. One way to describe the field is to talk about the types of questions AI can answer:

  • What is best? An AI algorithm can look at a set of conditions — both present and predicted — and work out the best way to do something. Examples include a navigation app choosing the most efficient route home based on current and predicted traffic, or a website balancing traffic load between servers. 
  • What does it belong to? AI is good at identifying and categorising objects and trends. A self-driving car uses AI to recognise other vehicles on the road. An email filter uses AI to identify likely phishing or spam messages (a minimal sketch of this kind of classifier follows this list). 
  • What repeats? AI can recognise patterns and relationships in large datasets. These pattern-recognition skills have myriad applications: from a generative AI system writing coherent answers to questions, to a security system flagging a potential threat based on a person’s actions and previously observed behaviour. 
  • What is the next best action? AI can look at a situation and identify the next optimal step based on the current conditions. An example might be a self-driving car reducing its speed when it notices that the vehicle ahead has illuminated its brake lights, or a video game character adjusting its tactics and position based on the player’s behaviour.
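
To make the second question (“What does it belong to?”) concrete, here’s a minimal, hypothetical sketch of a toy spam classifier that uses a simple naive Bayes approach. The emails are invented for illustration, and real spam filters are far more sophisticated, but the underlying idea of learning word statistics from labelled examples is similar.

```python
from collections import Counter
import math

# A tiny, made-up training set of labelled emails ("spam" or "ham").
training_emails = [
    ("win a free prize now", "spam"),
    ("claim your free lottery winnings", "spam"),
    ("meeting rescheduled to friday", "ham"),
    ("lunch with the project team", "ham"),
]

# Count how often each word appears under each label.
word_counts = {"spam": Counter(), "ham": Counter()}
label_counts = Counter()
for text, label in training_emails:
    label_counts[label] += 1
    word_counts[label].update(text.split())

def classify(text):
    """Score each label with a simple naive Bayes estimate (add-one smoothing)."""
    vocabulary = {word for counts in word_counts.values() for word in counts}
    scores = {}
    for label in label_counts:
        # Start from the prior probability of the label...
        score = math.log(label_counts[label] / sum(label_counts.values()))
        total_words = sum(word_counts[label].values())
        for word in text.split():
            # ...then add the (smoothed) likelihood of each word under this label.
            word_prob = (word_counts[label][word] + 1) / (total_words + len(vocabulary))
            score += math.log(word_prob)
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("free prize inside"))       # likely "spam"
print(classify("team meeting on friday"))  # likely "ham"
```

The principle is the same for large-scale classifiers: the model learns from labelled examples which features make one category more likely than another.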

An application can use multiple AI elements and models. For example, a self-driving car will have one system that categorises the surrounding traffic and another that interprets that data to control the vehicle’s braking, acceleration, or steering. 

How do artificial intelligence systems work?

Whether we’re talking about powerful generative AI chatbots like ChatGPT or Google’s Bard, or smaller problem-specific AI systems like those that handle imaging and battery optimisation in your phone, AI systems share one common component — data.

Data is how an AI system automatically learns the rules it uses to predict, generate, and identify its target output. The data may or may not need to be annotated (see the discussion of supervised learning in our What is ML? article), and it may be text, video, sound, images, numeric values, and so on. The data gives the algorithm a particular representation of the world, and therefore the quality of the data will strongly influence the quality of the algorithm’s output.

For the data to be effective, it must be of sufficiently high quality. Quality can be determined by a number of factors (a short sketch of what such labelled data might look like follows this list). These include: 

  • Relevance: Does the image show a breakfast cereal? 
  • Quality: Can a human easily identify a breakfast cereal within the photo? Are the lighting, resolution, and framing good enough? 
  • Variability: Does the data show the same variety of cereal in a number of different ways?
  • Bias: Is the data representative, not just of yourself, but of everyone who’s likely to use your system? 
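
As promised above, here’s a small, hypothetical sketch of what an annotated image dataset for the breakfast-cereal example might look like, together with a trivial quality and balance check. The field names and file paths are invented for illustration.

```python
from collections import Counter

# A hypothetical slice of an annotated dataset for a cereal-recognition model.
# Every field name and file path here is invented for illustration.
labelled_images = [
    {"file": "images/cereal_001.jpg", "label": "breakfast_cereal", "lighting": "good"},
    {"file": "images/cereal_002.jpg", "label": "breakfast_cereal", "lighting": "poor"},
    {"file": "images/toast_014.jpg",  "label": "not_cereal",       "lighting": "good"},
]

# A trivial quality gate: keep only examples a human could plausibly verify.
usable = [example for example in labelled_images if example["lighting"] == "good"]

# A rough balance check: how many usable examples does each label have?
# A heavily skewed distribution is an early warning sign of bias.
label_distribution = Counter(example["label"] for example in usable)
print(label_distribution)
```

Real datasets contain thousands or millions of such examples, and annotating and curating them is often one of the most expensive parts of building a model.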

The future of AI is strong AI

Artificial intelligence applications fall into one of two categories: Weak AI and Strong AI.

Weak AI

Weak AI — sometimes referred to as “narrow AI” or “artificial narrow intelligence” — refers to AI systems that focus on a single task. Examples could be the iPhone’s camera, which uses AI to understand the composition of an image and adjust it accordingly, or the recommendation algorithm in your favourite social media app that learns your preferences and shows you similar content.

Don’t fixate on the words “narrow” or “weak.” These systems are seldom either. The terms simply mean that they’re focused on a single job, and therefore lack the ability to learn and operate beyond that single task. 

Every AI system used today — including those used in complex tasks, like self-driving cars — falls into this category. Narrow AI applications work by identifying patterns in training data and then implementing them in the real world. For example, the AI system within Tesla’s self-driving car software works by looking at footage of real-world driving conditions. This data helps the system anticipate the actions of pedestrians, cyclists, and other motorists. 

Artificial general intelligence

The term artificial general intelligence (AGI) — also known as “strong AI” — refers to a hypothetical class of artificial intelligence systems which have the ability, much like humans, to learn and adapt to new situations. Researchers describe this ability as “generality.” 

This is the AI of science fiction — where, much like Lt. Commander Data, computers can act, improvise, and even behave like humans. Whereas weak AI systems exist to accomplish specific tasks, an artificial general intelligence would exhibit something resembling sentience. 

Defining sentience is a tough — and often contentious — philosophical challenge. So, let’s be specific. In this context, we’re talking about a machine that can learn new tasks without having to consume vast amounts of training data or construct mathematical or statistical models. Such a machine would be able to mimic the adaptability of a human being, particularly when it comes to unforeseen or novel challenges.

While this definition seems straightforward, there are no formal criteria for what constitutes artificial general intelligence. Sara Hooker, the head of the Cohere for AI research lab, describes the debate around AGI as “value-driven” rather than technical.

There is no universally accepted test for whether an artificial intelligence system meets the threshold of AGI, although researchers and computer scientists have proposed a number of potential solutions. 

Nils John Nilsson, who performed foundational research in AI, suggested employment-centric tests, where a system is evaluated on its ability to learn and perform a new job – like a receptionist, paralegal, dishwasher, or marriage counsellor. Steve Wozniak, the co-founder of Apple, proposed seeing whether an AGI could enter an unfamiliar house and figure out how to brew a pot of coffee. 

To date, no AGI systems have been created. Moreover, the question of whether such systems could be created — or, indeed, whether they should — remains a subject of fierce discussion within the AI field. 

Some researchers believe that AGI could pose an existential threat to human life. In this scenario, most notably depicted in the Terminator franchise, an AGI would emerge not merely as a new class of being, but as one that’s higher on the evolutionary ladder than humanity. 

As the philosopher Ross Graham explained in a paper for AI & Society, an AGI could lead to an “intelligence explosion,” where it becomes “capable of designing and editing itself and other machines.” As the AGI becomes more powerful, it could “discard human beings as it deems them burdensome” or eliminate humanity through “indifference or accident.” 

There’s also the thorny question of whether an AGI would exhibit sentience, and should therefore possess the same inviolable rights as a natural-born human. This view was expressed in an op-ed for the Los Angeles Times written by philosopher Eric Schwitzgebel and AI researcher Henry Shevlin, who argued that as AGI systems achieve “something like consciousness,” they may demand ethical treatment. 

"They might demand not to be turned off, reformatted or deleted; beg to be allowed to do certain tasks rather than others; insist on rights, freedom and new powers; perhaps even expect to be treated as our equals," they wrote. This would be nothing short of personhood. 

The limits of AI

AI has the potential to improve our working and personal lives, but it has limitations and presents risks that must be mitigated and balanced. 

  • AI models can’t judge whether something they say, do, or predict is correct. Only a human can do that. That’s why many AI-based applications let users provide feedback about the accuracy or relevance of a result or action. 
  • AI can’t identify causation, only correlation. While AI models can identify relationships between events or objects, they can’t say that “X resulted in Y” with any degree of certainty. 
  • AI can be biased. A model’s ability to reason is based on the training data provided, which must be curated by a human, as well as on the algorithm used. This allows human biases to influence AI decision-making. An example could be a facial recognition system that only recognises white faces because its training data consisted primarily of pictures of white people. 
  • AI sometimes hallucinates. This issue is particularly common with large language model-based AI systems. If the system doesn’t know the answer to a question, it’ll make something up that sounds reasonable, but has no basis in reality.
  • AI is often inexplicable. This issue is particularly – but not exclusively – true for deep neural network-based models, where it’s hard to understand precisely how a model came to a decision. The only things visible to a human observer are the input (the training data) and the output (the result).
  • AI is expensive. Developing and training a sophisticated model costs millions of dollars. Because AI is computationally demanding, it often requires sophisticated hardware – like a powerful GPU or AI accelerator card – to work at scale and speed. Some companies (notably Tesla and Google) even design their own specialist chips to handle AI tasks, which adds further upfront cost.
  • AI can be inequitable. Its negative effects disproportionately fall on vulnerable populations.

Many of these problems can — and, perhaps, will — be overcome with future advances in AI technology. Others are caused by the humans that created the models. Again, these problems aren’t insurmountable. 

One solution is for AI projects to embrace transparency, allowing external parties to scrutinise the composition of training data and giving external stakeholders the opportunity to provide feedback on the aims, development, and outcomes of a project. 

Another approach, taken by OpenAI, the developer of ChatGPT, is to introduce safeguards into the training process. These safeguards include establishing limits on a model’s default behaviour and defining “values” that describe, in broad terms, how an AI should work and what its impact on society should be. OpenAI also wants to allow affected third parties to provide feedback on the default behaviours and limits of an AI system. 

Another strategy – one borrowed from the security world – sees companies launch “red team” tests on their algorithms. These tests are adversarial in nature, and the tester tries to induce the algorithm to do or say something improper. By identifying an AI system’s vulnerabilities, a company can reduce the risk of real-world harm. 

We all have a role in building the future of AI

Most people who use a generative AI system for the first time are nothing short of stunned. It’s hard not to feel impressed by the idea of a computer writing poetry, explaining difficult concepts, or creating surrealist pieces of art.

And we’re only at the beginning of our journey. As computers become faster and AI models increase in sophistication, artificial intelligence will continue to grow as a core component of daily life. Indeed, for many, it already is.

AI will assume greater responsibilities, and thus, greater risks. The idea that a computer will one day drive you home from work or interpret your X-ray results no longer feels like science fiction, but rather a description of a world that’s looming ever closer.

But before that happens, we need to foster a culture of AI governance and transparency, where entities are held accountable for the harm caused by their AI systems. AI’s power must be matched with equally robust safeguards. Society needs protections against the misuse of AI, and the unintentional harms that can emerge as a result of AI bias. Perhaps the best way to accomplish this is for everyone to understand what AI is – from how these models work, to their capabilities and limitations. If people understand the mechanics of AI, they can judge whether to trust it.

As the Association for the Advancement of Artificial Intelligence (the world’s largest scientific society concerned with AI research) has argued, using AI for maximal benefit requires “wide participation” across every stratum of society, from governments and technology companies to civil society organisations.

“Civil society organisations and their members should weigh in on societal influences and aspirations. Governments and corporations can also play important roles [and] ensure that scientists have sufficient resources to perform research on large-scale models, support interdisciplinary socio-technical research on AI and its wider influences, encourage risk assessment best practices, insightfully regulate applications, and thwart criminal uses of AI. Technology companies should engage in developing means for providing university-based AI researchers with access to corporate AI models, resources, and expertise.”

If wider society understands AI, and sees that it’s both well-regulated and reliable, they’ll trust it. And it’s that trust — as Scientific American explained back in 2018 — that’s crucial if AI is to reach its full potential.