AI Glossary

Some of the most important AI terms explained, from A as in AI to Z as in Zero-Shot.
A N3XTCODER series

Implementing AI for Social Innovation

Welcome to the N3XTCODER series on Implementing AI for Social Innovation.

In this series we are looking at ways in which Artificial Intelligence can be used to benefit society and our planet – in particular the practical use of AI for Social Innovation projects.

Overview

A

Artificial Intelligence (AI)

Artificial Intelligence describes a range of advanced computing processes that in one or more aspects seem to behave like human intelligence. A more precise term for most of the current generation of AI technologies is “Machine Learning”. 

Machine Learning systems, often called models, are “trained” with very large amounts of data, called datasets. These datasets can be virtually any type of digital information - images, text, statistics, video.

Machine learning works very differently from conventional computing, where algorithms give the computer an explicit set of instructions to follow. With a machine learning model, the computer finds connections and patterns in the data to generate its own solutions to problems and questions.

A machine learning model is usually trained with human help: people label some of the data in the dataset so that the model can detect related patterns in its own dataset and in other data. For example, if a machine learning model is trained on thousands of labelled photos of dogs, it will find dogs in other images in its dataset, and in any other photo of a dog shared with it.

Machine learning models run on very powerful computer networks that can process vast amounts of data, so they can almost instantly find patterns in data that humans alone could never see.
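To make the idea of training on labelled data concrete, here is a minimal sketch of supervised learning, assuming Python with scikit-learn installed and using its small built-in digits dataset in place of dog photos (the library and dataset are illustrative choices, not part of the original article):

```python
# A minimal supervised-learning sketch: the model is "trained" on labelled
# examples and then finds the same patterns in examples it has never seen.
# Assumes scikit-learn is installed; the digits dataset stands in for dog photos.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # small images of handwritten digits, already labelled
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=5000)  # a simple classifier stands in for a large model
model.fit(X_train, y_train)                # "training": the model learns patterns from labelled data

print("accuracy on unseen images:", model.score(X_test, y_test))
```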

AI Text Tools

AI Text Tools is an umbrella term for tools like ChatGPT, Bard and Bing AI that use artificial intelligence to generate text in human language after receiving a prompt.

AI Detector Tools

AI Detector Tools identify and analyse content generated by AI. These tools help distinguish between human-created and AI-generated text, providing insight into the authenticity and transparency of digital communication.

B

Bias in AI

AI algorithms are trained on data. When this data is skewed in some way and not completely representative, this can lead to biases in the output or performance of the algorithm.
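As a toy illustration (not from the original article, and assuming Python with NumPy and scikit-learn installed), the sketch below trains a classifier on a skewed dataset in which one group is barely represented; the model then performs noticeably worse on that under-represented group:

```python
# A toy illustration of dataset bias: if one class is barely represented in the
# training data, the trained model tends to perform worse on that class.
# Assumes NumPy and scikit-learn are installed; the data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
# 1000 examples of group 0, only 20 examples of group 1 -> a skewed dataset
X = np.vstack([rng.normal(0, 1, (1000, 2)), rng.normal(2, 1, (20, 2))])
y = np.array([0] * 1000 + [1] * 20)

model = LogisticRegression().fit(X, y)

# Evaluate on a balanced test set: the under-represented group is recalled less well
X_test = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y_test = np.array([0] * 200 + [1] * 200)
pred = model.predict(X_test)
print("recall for the majority group:", recall_score(y_test, pred, pos_label=0))
print("recall for the under-represented group:", recall_score(y_test, pred, pos_label=1))
```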

C

Computer Vision

Computer Vision is a field of AI in which machines interpret and process visual data much as humans do. A program learns to recognise patterns, objects, and scenes in images and videos; deciphering human handwriting on old documents is one example of applied computer vision. Image recognition is one of the key applications of computer vision, alongside tasks like object detection, image segmentation, and pattern recognition.
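For a sense of what image recognition looks like in practice, here is a minimal sketch, assuming Python with the Hugging Face `transformers` library (plus Pillow) installed; "photo.jpg" is a placeholder path and the pipeline downloads a default pretrained vision model:

```python
# A minimal image-recognition sketch using a pretrained model.
# Assumes `transformers` and `Pillow` are installed; "photo.jpg" is a placeholder.
from transformers import pipeline

classifier = pipeline("image-classification")  # downloads a default pretrained vision model
predictions = classifier("photo.jpg")          # path or URL of an image

for p in predictions:
    print(p["label"], round(p["score"], 3))    # predicted labels with confidence scores
```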

D

Deep Learning

Deep Learning is a subset of machine learning in which artificial neural networks emulate the learning approach of the human brain. In contrast to shallow learning, which uses simpler algorithms, Deep Learning involves complex structures with many layers that process data in depth.
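To illustrate what "many layers" means, here is a minimal sketch of a small multi-layer network, assuming Python with PyTorch installed (the layer sizes are arbitrary illustrative choices):

```python
# A minimal "deep" network: several stacked layers, each feeding its output
# into the next. Assumes PyTorch is installed; layer sizes are arbitrary.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),  # layer 1: raw input -> intermediate features
    nn.Linear(256, 64), nn.ReLU(),   # layer 2: intermediate -> higher-level features
    nn.Linear(64, 10),               # layer 3: features -> 10 output classes
)

x = torch.randn(1, 784)              # one fake input, e.g. a flattened 28x28 image
print(model(x).shape)                # torch.Size([1, 10])
```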

G

Generative AI

Generative AI refers to AI that is trained to create new output: new content, new ideas or new data patterns. This contrasts with Analytical AI, which focuses on understanding and interpreting existing information rather than generating new output.

H

Hallucinations

Large Language Models can generate very convincing answers to questions, or “prompts”, but sometimes those answers can be factually wrong. These wrong answers are often called “hallucinations”.

This happens because an LLM does not answer a question the way a person does. An LLM is not looking for a “correct” answer in its dataset; it is looking for data with language patterns similar to the prompt, and from that it generates the response that is statistically most likely.

This means that LLMs often produce an answer that is very similar to a correct answer, but not the correct answer itself.

Many AI developers are currently working on different methods to automatically cross check LLM results for accuracy, but the problem is fundamental to how LLMs work, so it is a very hard problem to solve.

In the meantime, if you use LLMs, make sure a person checks the facts in any LLM response for errors.

I

Inference

Inference is the term used for querying AI. Inference means “a conclusion reached on the basis of evidence and reasoning”. Or as the IBM blog puts it neatly: “Inference is the process of running live data through a trained AI model to make a prediction or solve a task. Inference is an AI model's moment of truth, a test of how well it can apply information learned during training to make a prediction or solve a task” (https://research.ibm.com/blog/AI-inference-explained)
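As a small, hedged sketch of this training-then-inference split (assuming Python with scikit-learn installed; the iris dataset and the measurements below are purely illustrative):

```python
# Inference in miniature: training happens once, then the trained model is
# reused to make predictions on new data. Assumes scikit-learn is installed.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
model = DecisionTreeClassifier().fit(X, y)   # training phase

# Inference phase: "live" measurements fed to the already-trained model
new_flower = [[6.0, 2.9, 4.5, 1.5]]          # illustrative sepal/petal measurements in cm
print("predicted class:", model.predict(new_flower)[0])
```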

L

Large Language Models

The latest generation of text-based AI or machine learning technologies, such as ChatGPT and Google Bard, are often called “Large Language Models” (LLMs).

LLMs are trained on vast amounts of text, from books, social media and the internet.

Typically, to use an LLM you enter a question or instruction - a “prompt” - and it will generate an answer. Up until recently, LLMs were impressive, but the answers they gave were still obviously generated by a computer.

That changed with the launch of ChatGPT in late 2022, quickly followed by Google Bard and several other LLMs. These LLMs can draw on their huge training datasets to give insightful answers, often in excellent written language, so it can be hard to detect that the answer was generated by a computer.

LLMs can be extremely useful for writing text, for research and for structuring information - for example drafting articles or emails, creating plans and itineraries, and writing computer code, among many other uses.

However, because of the way LLMs work, the answers they give can often be factually wrong, even when they look convincing. These incorrect answers are sometimes called “Hallucinations”.

M

Machine Learning

A subset of AI that enables systems to automatically learn and improve from experience without being explicitly programmed.

Models

Models in AI are algorithmic structures trained to process, interpret and respond to data in ways that mimic human decision-making. 

Multi-Purpose Models

Multi-Purpose Models are versatile algorithms designed to handle a wide range of tasks.

P

Predictive Analytics

The use of data, statistical algorithms, and machine learning techniques to identify the likelihood of future outcomes based on historical data.
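A tiny, hedged sketch of this idea, assuming Python with NumPy and scikit-learn installed (the monthly sales figures are made up for illustration):

```python
# Predictive analytics in miniature: fit a model to historical data, then use
# it to estimate a future value. Assumes scikit-learn; the sales data are made up.
import numpy as np
from sklearn.linear_model import LinearRegression

months = np.arange(1, 13).reshape(-1, 1)  # historical months 1..12
sales = np.array([10, 12, 13, 15, 16, 18, 20, 21, 23, 25, 26, 28])  # past sales per month

model = LinearRegression().fit(months, sales)
print("forecast for month 13:", model.predict([[13]])[0])
```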

Prompt

To generate an answer from an LLM, you need to enter a prompt. This often takes the form of a question in plain language; however, you may want to add specific information - for example, what to include and exclude, the style you want the response in, how long it should be, and who it is aimed at. Your prompt could also be much bigger - for example, an entire 1500-word article that you want summarised in a tweet.

A major part of using LLMs effectively is producing prompts that precisely tailor the response to your requirements. This is often called “prompt engineering” and is fast becoming a sought-after technical skill in its own right.
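As one hedged example of what a structured prompt can look like when sent to an LLM programmatically - here using the OpenAI Python client, and assuming the `openai` package is installed, an API key is configured, and the model name shown is available to you:

```python
# A prompt-engineering sketch with the OpenAI Python client (openai >= 1.0).
# Assumptions: the `openai` package is installed, an API key is configured in
# the environment, and the model name below is available to your account.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Summarise the article below in one tweet (max 280 characters). "
    "Use plain language, no hashtags, and write for a non-technical audience.\n\n"
    "ARTICLE:\n<paste the 1500-word article here>"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: swap in any chat model you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```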

T

Task Specific Models

Task Specific Models are AI models that have been trained to carry out one specific task, rather than a wide range of tasks.

Transfer Learning

Transfer Learning is a machine learning approach where a model developed for one task is reused as the starting point for a model on a different task. 
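A brief, hedged sketch of the idea, assuming Python with PyTorch and torchvision (0.13 or newer) installed: a network pretrained on ImageNet is reused as the starting point for a new 5-class task (the class count is an arbitrary illustrative choice):

```python
# A transfer-learning sketch: start from a network pretrained on ImageNet and
# reuse it for a new task. Assumes torch and torchvision >= 0.13 are installed.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pretrained starting point

for param in model.parameters():   # freeze the layers that were already learned
    param.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, 5)  # new final layer for the new 5-class task
# Only this new layer would now be trained on the new, much smaller dataset.
```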

Z

Zero-shot

The ability of a machine learning model to perform a task or recognise categories for which it has seen no training examples.
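A small, hedged example using the Hugging Face `transformers` zero-shot classification pipeline (assuming the library is installed; the sentence and candidate labels are made up, and the model received no training examples for these labels):

```python
# Zero-shot classification: the model assigns labels it received no training
# examples for. Assumes the `transformers` library is installed.
from transformers import pipeline

classifier = pipeline("zero-shot-classification")  # downloads a default pretrained model
result = classifier(
    "The city council approved a new cycling lane along the river.",
    candidate_labels=["transport", "sport", "cooking"],
)
print(result["labels"][0], round(result["scores"][0], 3))  # most likely label and its score
```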


Join us in the conversation on various social channels. We discuss the latest developments in technology as they happen!

THIS ARTICLE HAS BEEN REALISED WITH THE HELP OF
Bundesministerium für Wirtschaft und Klimaschutz
NextGenerationEU