Algorithm:
A set of rules to be followed by a computer to perform a task or solve a problem.
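For example, Euclid's method for the greatest common divisor is a classic algorithm: a fixed sequence of steps guaranteed to produce the answer. A minimal Python sketch:

```python
def gcd(a, b):
    # Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
    # until the remainder is zero; the last non-zero value is the GCD.
    while b:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # -> 6
```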
API (Application Programming Interface):
A set of rules that allows different software applications to communicate and share data with each other.
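The same idea applies within a single program: a library exposes an API that callers use without knowing its internals. Here, Python's built-in json module API converts data to and from the JSON text format that many web APIs exchange:

```python
import json

# Serialise a Python dict to a JSON string (the format web APIs exchange)...
payload = json.dumps({"model": "demo", "prompt": "hello"})

# ...and parse it back, using only the module's documented interface.
data = json.loads(payload)
print(data["prompt"])  # -> hello
```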
Artificial Intelligence:
Machines that are created to simulate human intelligence, such as learning, problem-solving and decision-making.
Bias (in AI):
Systematic error or prejudiced outcomes generated by an artificial intelligence system, often reflecting biases present in the data the system was trained on.
Big Data:
Extremely large and complex datasets that can't be easily processed by traditional data tools.
Chatbot:
A computer program designed to simulate human conversation, typically through text or voice commands, allowing users to interact in a conversational way.
Cloud Computing:
The delivery of on-demand computing services, such as storage, databases, servers, networking and software, over the internet to offer flexible resources.
Computer Vision:
A field of artificial intelligence that enables computers to interpret, analyse and understand visual information from images and videos.
Data:
Raw facts, figures and values that can be collected, stored and processed.
Data Analytics:
The process of inspecting, cleaning and interpreting data to discover patterns, draw conclusions and support decision-making.
Data Governance:
The set of policies, processes and standards that ensure effective and secure management of data within an organisation.
Data Visualisation:
The graphical representation of data and information using visual elements such as graphs, charts and maps to make complex data understandable.
Dataset:
A collection of related data used to train, test or validate an artificial intelligence model.
Deep Learning:
A subset of machine learning that uses multi-layered neural networks to learn complex patterns and features from large amounts of data.
Deployment:
The process of making a trained artificial intelligence model or software available to use in a real-world environment.
Ethical AI:
The development and use of artificial intelligence in ways that are fair, transparent, accountable, and aligned with human values, avoiding harm and discrimination.
Feature Engineering:
The process of selecting and transforming raw data into features that make the underlying patterns more visible and useful for a machine learning model.
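A small sketch of the idea, deriving numeric features from a raw record (the field names and `make_features` helper are illustrative, not from any particular library):

```python
def make_features(record):
    # Transform raw fields into numeric features a model can use.
    text = record["text"]
    return {
        "text_length": len(text),           # derived feature: character count
        "word_count": len(text.split()),    # derived feature: token count
        "price": float(record["price"]),    # type conversion: string -> number
    }

print(make_features({"text": "hello world", "price": "19.99"}))
```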
Fine-Tuning:
The process of adapting a pre-trained artificial intelligence model to a specific task or dataset by continuing its training on a more specialised dataset.
Generative AI:
A type of artificial intelligence that creates new, original content, such as text, images, music or code in response to prompts.
GPU (Graphics Processing Unit):
A specialised processor designed to rapidly manipulate and alter memory to accelerate the creation of images, now widely used for artificial intelligence and deep learning training.
Hallucination (AI):
When an AI model generates false, misleading or nonsensical information presented as truth but not based on real data.
Human-in-the-Loop (HITL):
An AI development approach where human input, judgment and oversight are explicitly integrated into an AI system's learning or decision-making process.
Hyperparameters:
Configuration settings defined before training a machine learning model that control its learning process.
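For instance, in this sketch of gradient descent, the learning rate and step count are hyperparameters: they are chosen before training and are not learned from the data.

```python
def minimise(grad, x0, learning_rate=0.1, steps=100):
    # learning_rate and steps are hyperparameters: fixed before
    # training begins, they control how the model learns.
    x = x0
    for _ in range(steps):
        x -= learning_rate * grad(x)
    return x

# Minimise f(x) = x^2 (gradient 2x); the result converges towards 0.
print(minimise(lambda x: 2 * x, 10.0))
```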
Inference:
The process of using a trained AI model and what it has learned to make predictions or decisions based on new, unseen data.
Large Language Model (LLM):
A type of AI model trained on massive text datasets and code to understand and generate human-like language.
Loss Function:
A mathematical formula that measures the difference between an AI model's predicted output and the actual output, guiding the model's learning to minimise errors.
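One common example is mean squared error, which averages the squared differences between predictions and actual values:

```python
def mse(predicted, actual):
    # Mean squared error: the average of the squared prediction errors.
    # Lower values mean the model's outputs are closer to the truth.
    return sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)

print(mse([2.0, 3.0], [1.0, 3.0]))  # -> 0.5
```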
Machine Learning (ML):
A branch of AI where systems learn from data and improve their performance on tasks without being explicitly programmed for every scenario.
Model (AI/ML):
The output of a machine learning algorithm after it has been trained on a dataset, summarising the patterns and relationships it has learned.
Multimodal AI:
AI systems that can process, understand and generate multiple types of data (e.g. text, images, audio, video) simultaneously.
Natural Language Processing (NLP):
A branch of AI that enables machines to understand, interpret, and generate human language.
Neural Network:
A machine learning model inspired by the structure and function of the human brain, used in deep learning and made of layers of interconnected nodes that can learn to recognise patterns in data.
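A single node (neuron) in such a network can be sketched as a weighted sum of its inputs passed through an activation function; here a sigmoid, which squashes any value into the range 0 to 1:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus a bias term...
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    # ...passed through a sigmoid activation, giving an output in (0, 1).
    return 1 / (1 + math.exp(-z))

print(neuron([1.0, 1.0], [0.5, -0.5], 0.0))  # -> 0.5
```

A full network stacks many such nodes into layers, with each layer's outputs feeding the next.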
Overfitting:
A problem where a model learns the training data too well, including its noise and errors, resulting in poor performance on new, unseen data.
Prompt:
The specific input, question or command given to an AI model to produce a desired output.
Prompt Engineering:
The practice of refining and optimising prompts to guide AI models to produce accurate, relevant and high-quality results.
Reinforcement Learning:
A type of machine learning where an AI agent learns to make decisions by trial and error, receiving rewards for desired actions and penalties for undesirable ones.
Responsible AI:
The development and deployment of AI systems in a way that is ethical, transparent, fair, accountable, and aligned with societal values.
SQL (Structured Query Language):
A programming language used to manage and manipulate relational databases by querying, inserting, updating and deleting data.
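The queries below run against an in-memory database using Python's built-in sqlite3 module, showing a table being created, populated and queried:

```python
import sqlite3

# An in-memory SQLite database; nothing is written to disk.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")   # define a table
conn.execute("INSERT INTO users VALUES (1, 'Ada')")          # insert a row
row = conn.execute("SELECT name FROM users WHERE id = 1").fetchone()  # query it
print(row[0])  # -> Ada
```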
Supervised Learning:
A type of machine learning where a model is trained on labeled data, meaning each input has a known correct output.
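A deliberately simple supervised learner is 1-nearest-neighbour: given labeled (input, output) pairs, it predicts the label of whichever example lies closest to a new input. A minimal sketch:

```python
def predict(x, labeled_examples):
    # labeled_examples: list of (input, label) pairs — the labeled data.
    # Predict by copying the label of the closest training example.
    nearest = min(labeled_examples, key=lambda ex: abs(ex[0] - x))
    return nearest[1]

data = [(1.0, "a"), (2.0, "b"), (5.0, "c")]
print(predict(2.1, data))  # -> b
```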
Test Data:
A portion of the dataset used to evaluate the performance and accuracy of a fully trained AI model on data it has not seen before.
Text-to-Image:
A type of generative AI that creates visual images from a textual description or prompt, using models trained on both text and image data.
Training Data:
The dataset used to teach a machine learning model by exposing it to examples with known outcomes.
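In practice a dataset is commonly divided into training, validation and test portions before any learning begins; the proportions below (70/15/15) are a typical convention, not a fixed rule:

```python
def split(data, train=0.7, val=0.15):
    # Partition a dataset into training, validation and test portions.
    n = len(data)
    a = round(n * train)          # end of the training portion
    b = round(n * (train + val))  # end of the validation portion
    return data[:a], data[a:b], data[b:]

train_set, val_set, test_set = split(list(range(100)))
print(len(train_set), len(val_set), len(test_set))  # -> 70 15 15
```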
Transparency (AI):
Making AI systems understandable and explainable, so users and stakeholders can see how decisions are made and why.
Underfitting:
A problem where a model is too simple to capture the underlying patterns in the data, resulting in poor performance on both training and test data.
Unsupervised Learning:
A type of machine learning where the model discovers patterns and structures within unlabeled data without explicit guidance or pre-defined outputs.
Validation Data:
A subset of data used during model training to tune hyperparameters and evaluate the model's performance on unseen data before final testing.