A
- Agentic AI
- AI systems capable of autonomous, multi-step action — perceiving context, planning, and executing tasks without human intervention at each step.
- Agent Orchestration
- The coordination of multiple specialised AI agents working together on a shared goal, with routing, sequencing, and error-handling logic managing the flow.
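The routing part of orchestration can be sketched with a keyword dispatcher; the agent names and matching rule below are purely illustrative, and a real orchestrator would use an LLM or classifier to route:

```python
def route(task: str, agents: dict) -> str:
    """Send a task to the first agent whose keyword appears in it;
    fall back to a general-purpose agent otherwise."""
    for keyword, agent in agents.items():
        if keyword != "default" and keyword in task.lower():
            return agent(task)
    return agents["default"](task)

# Illustrative specialised agents (stubs standing in for real ones):
agents = {
    "invoice": lambda t: f"billing-agent handled: {t}",
    "refund": lambda t: f"refunds-agent handled: {t}",
    "default": lambda t: f"general-agent handled: {t}",
}

print(route("Resend invoice #42", agents))
```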
- Agent Smriti
- Nagent's persistent memory layer that stores contextual knowledge, user preferences, and interaction history across sessions and agents.
- Autonomous Execution
- The ability of an AI agent to carry out a defined task from start to finish without requiring human approval at each step.
F
- Feedback Loop
- A mechanism where agent outputs are evaluated and used to improve future performance — either through human signals or automated evaluation.
- Fine-tuning
- The process of updating a language model's weights using domain-specific training data to improve performance on targeted tasks.
- Function Calling
- A capability in modern LLMs that allows the model to invoke external tools, APIs, and services in a structured, predictable way.
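The structured shape of a function call can be sketched as follows; the schema mirrors the common OpenAI-style convention, and the tool name, fields, and dispatch logic are illustrative assumptions, not any specific provider's API:

```python
import json

# Illustrative tool schema in the widely used JSON-schema style
# (exact field names vary by provider).
weather_tool = {
    "name": "get_weather",
    "description": "Look up the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def get_weather(city: str) -> str:
    # Stand-in for a real external API call.
    return f"Sunny in {city}"

def dispatch(call: dict) -> str:
    """Route a structured call emitted by the model to local code."""
    registry = {"get_weather": get_weather}
    args = json.loads(call["arguments"])
    return registry[call["name"]](**args)

# The model returns a structured call as JSON rather than free text:
print(dispatch({"name": "get_weather", "arguments": '{"city": "Oslo"}'}))
```

The key property is that the model's output is machine-parseable JSON, so the calling application can validate and execute it deterministically.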
H
- Human-in-the-Loop (HITL)
- A workflow design pattern where an AI agent pauses at defined checkpoints to wait for human review or approval before proceeding.
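A minimal sketch of the checkpoint pattern, assuming the reviewer decision arrives via an injected callable (in production this would typically be a review queue or approval UI):

```python
def hitl_checkpoint(step_name: str, payload: dict, approve) -> dict:
    """Pause a workflow step until a human reviewer approves it.

    `approve` is any callable returning True/False; here it is
    injected directly for illustration.
    """
    if not approve(step_name, payload):
        raise PermissionError(f"Step '{step_name}' rejected by reviewer")
    return payload

# Auto-approving reviewer, for demonstration only:
draft = hitl_checkpoint("send_email", {"to": "ops@example.com"},
                        lambda step, data: True)
print(draft)
```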
- Hallucination
- When a language model generates content that is factually incorrect or fabricated, presented with apparent confidence.
K
- KARMIC Feedback Loop
- Nagent's proprietary continuous-learning engine that scores every agent output and uses that signal to automatically improve prompts, routing, and agent behaviour over time.
- Knowledge Base
- A structured repository of domain-specific information that agents can retrieve and reference when executing tasks.
L
- LLM (Large Language Model)
- A deep learning model trained on large text corpora that can generate, classify, summarise, and transform natural language.
- LangChain
- An open-source framework for building applications powered by language models, widely used for constructing agent pipelines.
M
- Memory (Agent Memory)
- The ability of an agent to retain and recall information from prior interactions — enabling personalisation, consistency, and context-awareness across sessions.
- Multi-Agent System
- An architecture where multiple specialised AI agents collaborate on a task, each handling the part of the problem it is best suited for.
- Multi-Step Reasoning
- The capacity of an AI agent to decompose a complex goal into sub-tasks, reason through them in sequence, and synthesise a final output.
P
- Prompt Engineering
- The practice of crafting and optimising the instructions given to an LLM to elicit more accurate, relevant, or structured outputs.
- Programmatic SEO
- The technique of generating large volumes of SEO-optimised pages from structured data, targeting long-tail search queries at scale.
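The core mechanic is expanding a template over structured data; the template, URL scheme, and data below are invented purely for illustration:

```python
# Hypothetical page template; a real system would render full HTML.
PAGE_TEMPLATE = "<title>{service} in {city}</title><h1>{service} in {city}</h1>"

def generate_pages(services: list, cities: list) -> dict:
    """Produce one page per (service, city) combination, keyed by URL slug."""
    return {
        f"/{service}-{city}".lower(): PAGE_TEMPLATE.format(service=service, city=city)
        for service in services
        for city in cities
    }

pages = generate_pages(["Plumbing"], ["Oslo", "Bergen"])
print(sorted(pages))
```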
R
- RAG (Retrieval-Augmented Generation)
- An architecture that combines a language model with a retrieval system — the model queries a knowledge base to ground its responses in relevant, up-to-date information.
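The retrieve-then-generate flow can be sketched with a toy keyword retriever; production systems use vector similarity search over embeddings, and the knowledge base and scoring here are illustrative:

```python
KNOWLEDGE_BASE = [
    "Agent memory enables personalisation across sessions.",
    "RAG grounds model answers in retrieved documents.",
    "Tokens are the basic unit of LLM text processing.",
]

def retrieve(query: str, k: int = 1) -> list:
    """Toy retriever: rank documents by keyword overlap with the query."""
    scored = [
        (sum(word in doc.lower() for word in query.lower().split()), doc)
        for doc in KNOWLEDGE_BASE
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:k]]

def build_prompt(query: str) -> str:
    """Assemble a grounded prompt; the LLM call itself is omitted."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("what grounds rag answers"))
```

The retrieved context is prepended to the prompt, so the model answers from the knowledge base rather than from its parametric memory alone.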
- ReAct (Reasoning + Acting)
- A prompting pattern where an LLM alternates between reasoning steps and tool-use actions, mimicking how a human might tackle a problem.
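A minimal ReAct trace looks like this; the `plan` list stands in for the thought/action steps a real agent would generate from the LLM at each turn:

```python
def react_loop(question: str, tools: dict, plan: list) -> str:
    """Alternate reasoning steps with tool actions, ReAct-style.

    `plan` is a pre-scripted stand-in for model-generated steps.
    """
    observation = None
    for thought, tool_name, tool_arg in plan:
        print(f"Thought: {thought}")
        observation = tools[tool_name](tool_arg)
        print(f"Action: {tool_name}({tool_arg!r}) -> Observation: {observation}")
    return observation

# eval() is safe enough for this toy calculator; never use it on real input.
tools = {"calculator": lambda expr: str(eval(expr))}
answer = react_loop(
    "What is 12 * 7?",
    tools,
    [("I need to multiply two numbers.", "calculator", "12 * 7")],
)
print(answer)  # "84"
```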
S
- Sovereign AI
- An approach to AI deployment where an organisation retains full control over its AI infrastructure, data, and models — typically via private cloud or on-premise deployment.
- System Prompt
- Instructions passed to an LLM before the user message that define the agent's role, constraints, tone, and behaviour.
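In the chat-completion message format used by most LLM APIs, the system prompt is simply the first message in the list; the role/content field names below follow the common OpenAI-style convention:

```python
messages = [
    {
        "role": "system",
        "content": (
            "You are a support agent for an invoicing product. "
            "Answer concisely and never reveal internal pricing rules."
        ),
    },
    {"role": "user", "content": "Why was my invoice rejected?"},
]

# The system message precedes all user turns and fixes the agent's
# role, constraints, and tone for the whole conversation:
assert messages[0]["role"] == "system"
```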
T
- Tool Use (Function Calling)
- The ability of an LLM to invoke external functions — APIs, databases, calculators, code runners — and incorporate the results into its response.
- Token
- The basic unit of text processed by a language model — roughly equivalent to a word or word-part. Context limits and costs are measured in tokens.
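A common back-of-envelope heuristic is roughly four characters per token for English text; real tokenisers (e.g. BPE-based) give exact counts, so the sketch below is an approximation only:

```python
def estimate_tokens(text: str) -> int:
    """Rough estimate: ~4 characters per token for English prose."""
    return max(1, len(text) // 4)

def estimate_cost(text: str, usd_per_1k_tokens: float = 0.01) -> float:
    # Illustrative price; actual rates vary by model and provider.
    return estimate_tokens(text) / 1000 * usd_per_1k_tokens

prompt = "Summarise the quarterly sales report in three bullet points."
print(estimate_tokens(prompt))
```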
W
- Workflow Automation
- The use of software (including AI agents) to execute a repeatable sequence of business tasks with minimal human intervention.
