AI Glossary

Essential AI terms explained in plain language. No jargon, just clear definitions for founders and business leaders.

AI Agent

Applications

An AI system that can autonomously perform tasks, make decisions, and take actions to achieve specified goals. Can use tools, access APIs, and operate with minimal human intervention.

API (in AI context)

Infrastructure

Application Programming Interface that allows developers to access AI models and services. OpenAI, Anthropic, and others provide APIs to integrate their models into applications.

Artificial Intelligence (AI)

Fundamentals

The simulation of human intelligence by machines, enabling them to perform tasks that typically require human cognition such as learning, reasoning, problem-solving, and decision-making.

Attention Mechanism

Technical

A technique that allows models to focus on different parts of the input when producing outputs. Enables models to capture long-range dependencies in data.
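A toy sketch of the idea: scores between a query and each key are turned into weights that sum to 1, and the output is a weighted blend of the values. Real attention operates on learned matrices over many heads; this is illustrative only.

```python
import math

def attention(query, keys, values):
    # Score each key against the query (dot product), then turn the
    # scores into weights that sum to 1 (a softmax).
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # The output is the weighted average of the values: the model
    # "pays more attention" to values whose keys match the query.
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

# The query matches the first key best, so the output is pulled
# toward the first value.
out = attention(query=[1.0, 0.0],
                keys=[[1.0, 0.0], [0.0, 1.0]],
                values=[[10.0, 0.0], [0.0, 10.0]])
```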

Batch Processing

Techniques

Processing multiple AI requests together rather than one at a time. Can improve efficiency and reduce costs for non-time-sensitive workloads.

Computer Vision

Applications

AI field focused on enabling machines to interpret and understand visual information from the world, including images and videos.

Context Window

Technical

The maximum amount of text (measured in tokens) that a model can process at once. Larger context windows allow for more information to be considered when generating responses.
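In practice this means long conversations must be trimmed to fit. One common strategy, sketched below with a made-up word-count stand-in for a real tokenizer, is to keep the most recent messages that fit the budget and drop the oldest first:

```python
def fit_to_window(messages, max_tokens, count_tokens):
    # Walk the history newest-first, keeping messages until the
    # token budget is exhausted; the oldest messages get dropped.
    kept, used = [], 0
    for msg in reversed(messages):
        cost = count_tokens(msg)
        if used + cost > max_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = ["msg one", "msg two", "msg three"]
# Word count stands in for a real tokenizer here, for illustration.
trimmed = fit_to_window(history, max_tokens=4,
                        count_tokens=lambda m: len(m.split()))
```

Production systems often summarize dropped messages instead of discarding them outright.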

Deep Learning

Fundamentals

A subset of machine learning using neural networks with many layers (hence 'deep'). Particularly effective for complex tasks like image recognition and natural language processing.

Edge AI

Infrastructure

Running AI models locally on devices (phones, IoT devices) rather than in the cloud. Offers benefits in latency, privacy, and offline operation.

Embedding

Technical

A numerical representation of data (text, images, etc.) as vectors in a high-dimensional space. Similar items have similar embeddings, enabling semantic search and comparison.
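"Similar items have similar embeddings" is usually measured with cosine similarity, which scores two vectors close to 1 when they point in similar directions. A minimal sketch with tiny made-up 3-dimensional vectors (real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    # Vectors pointing in similar directions score close to 1;
    # unrelated directions score near 0.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings: "cat" and "kitten" should land near each
# other in the space, far from "invoice".
cat = [0.9, 0.8, 0.1]
kitten = [0.85, 0.75, 0.2]
invoice = [0.1, 0.2, 0.9]
```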

Few-shot Learning

Techniques

Providing a model with a few examples of a task within the prompt to help it understand the desired output format and behavior.
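For example, a few-shot prompt for sentiment classification is just the examples and the new input laid out in a consistent pattern. A sketch (the "Review/Sentiment" format is one arbitrary choice):

```python
def few_shot_prompt(examples, new_input):
    # Each example demonstrates the input -> output pattern we want
    # the model to continue.
    lines = []
    for review, sentiment in examples:
        lines.append(f"Review: {review}\nSentiment: {sentiment}\n")
    # End with the new input and leave the answer blank for the model.
    lines.append(f"Review: {new_input}\nSentiment:")
    return "\n".join(lines)

prompt = few_shot_prompt(
    [("Loved it, will buy again.", "positive"),
     ("Broke after two days.", "negative")],
    "Arrived on time and works great.")
```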

Fine-tuning

Techniques

The process of taking a pre-trained model and further training it on a specific dataset to customize it for particular tasks or domains.

Generative AI

Applications

AI systems that can create new content — text, images, code, music, video. Includes LLMs, image generators like DALL-E and Midjourney, and more.

Hallucination

Challenges

When an AI model generates information that is factually incorrect or made up, but presents it as truth. A key challenge in deploying LLMs for factual applications.

Inference

Technical

The process of using a trained AI model to make predictions or generate outputs on new data. Distinct from training, which is the process of creating the model.

Large Language Model (LLM)

Models

AI models trained on massive amounts of text data that can understand and generate human-like text. Examples include GPT-4, Claude, and Llama.

Latency

Technical

The time delay between sending a request to an AI model and receiving a response. Critical factor in user experience for real-time applications.

Machine Learning (ML)

Fundamentals

A subset of AI where systems learn and improve from experience without being explicitly programmed. ML algorithms use data to identify patterns and make predictions.

Model Drift

Challenges

When an AI model's performance degrades over time as real-world data changes from what the model was trained on. Requires monitoring and periodic retraining.

Multimodal AI

Models

AI systems that can process and generate multiple types of data — text, images, audio, video. GPT-4V and Gemini are examples of multimodal models.

Natural Language Processing (NLP)

Applications

The branch of AI focused on enabling computers to understand, interpret, and generate human language. Powers chatbots, translation, and text analysis.

Neural Network

Fundamentals

A computing system inspired by biological neural networks in the brain. Consists of interconnected nodes (neurons) that process information in layers.

Prompt Engineering

Techniques

The practice of crafting effective inputs (prompts) for AI models to achieve desired outputs. Critical skill for working with LLMs.

RAG (Retrieval Augmented Generation)

Techniques

A technique that combines LLMs with external knowledge retrieval. The model first retrieves relevant information from a knowledge base, then uses it to generate more accurate responses.
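The retrieve-then-generate flow can be sketched in a few lines. Real RAG systems rank documents by embedding similarity; this toy version uses simple word overlap so it runs standalone:

```python
import re

def words(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question, documents, top_k=1):
    # Rank documents by how many words they share with the question
    # (a stand-in for embedding similarity).
    scored = sorted(documents,
                    key=lambda d: len(words(question) & words(d)),
                    reverse=True)
    return scored[:top_k]

def build_rag_prompt(question, documents):
    # The retrieved context is placed in the prompt so the model
    # grounds its answer in it instead of guessing.
    context = "\n".join(retrieve(question, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = ["Our refund policy allows returns within 30 days.",
        "Shipping is free on orders over $50."]
prompt = build_rag_prompt("What is the refund policy?", docs)
```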

Reinforcement Learning from Human Feedback (RLHF)

Techniques

A training technique where models are refined based on human preferences and feedback. Used to make LLMs more helpful, harmless, and honest.

Temperature (in AI)

Technical

A parameter that controls randomness in model outputs. Higher temperature = more creative/random responses. Lower temperature = more focused/deterministic responses.
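Under the hood, temperature divides the model's raw scores (logits) before they are converted into probabilities. A self-contained illustration with made-up logits:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Dividing by a low temperature sharpens the distribution
    # (the top choice dominates); a high temperature flattens it
    # (choices become more even, so sampling is more random).
    scaled = [l / temperature for l in logits]
    exps = [math.exp(s - max(scaled)) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cold = softmax_with_temperature(logits, 0.2)  # near-deterministic
hot = softmax_with_temperature(logits, 2.0)   # more random
```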

Token

Technical

The basic unit of text that language models process. Can be a word, part of a word, or punctuation. Model pricing and limits are often measured in tokens.
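For back-of-envelope cost estimates, a common rule of thumb is roughly 4 characters per token for English text. The sketch below uses that heuristic; real tokenizers (such as BPE-based ones) split text differently, so always check with your provider's actual tokenizer before relying on the numbers:

```python
def estimate_tokens(text):
    # Rough heuristic: ~4 characters per token for English prose.
    # Only an approximation; real tokenizers vary.
    return max(1, round(len(text) / 4))

def estimate_cost(text, price_per_million_tokens):
    # price_per_million_tokens is a hypothetical figure; check your
    # provider's current pricing page.
    return estimate_tokens(text) / 1_000_000 * price_per_million_tokens
```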

Transformer

Fundamentals

A neural network architecture that revolutionized NLP. Uses attention mechanisms to process sequential data. Foundation of modern LLMs like GPT and BERT.

Vector Database

Infrastructure

A database optimized for storing and querying embeddings (vectors). Essential for semantic search, recommendations, and RAG systems. Examples: Pinecone, Weaviate.
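At its core, a vector database answers "which stored vectors are closest to this query?". A toy in-memory version, using brute-force distance ranking (real products like those named above add approximate-nearest-neighbor indexes, metadata filtering, and persistence):

```python
import math

class TinyVectorStore:
    """A toy in-memory vector store for illustration only."""

    def __init__(self):
        self.items = []  # (id, vector) pairs

    def add(self, item_id, vector):
        self.items.append((item_id, vector))

    def search(self, query, top_k=1):
        # Rank every stored vector by distance to the query,
        # closest first, and return the top_k ids.
        ranked = sorted(self.items, key=lambda item: math.dist(query, item[1]))
        return [item_id for item_id, _ in ranked[:top_k]]

store = TinyVectorStore()
store.add("doc-cats", [0.9, 0.1])
store.add("doc-invoices", [0.1, 0.9])
result = store.search([0.8, 0.2], top_k=1)
```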

Zero-shot Learning

Techniques

The ability of a model to perform tasks it wasn't explicitly trained for, using only natural language instructions. A key capability of modern LLMs.

Ready to put this knowledge to work?

Book a Discovery Call and let's discuss how AI can help your business.