# Agent

## What is an Agent?

An Agent is a program that receives a question, decides what tools to use, and produces an answer. It runs a ReAct loop (Reason + Act):
```
User question
  -> LLM thinks + picks a tool
  -> Tool executes, returns result
  -> LLM thinks again (with tool result)
  -> Final answer (or pick another tool)
```

The loop continues until the LLM produces a final answer or hits `maxSteps` (default: 10).
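Stripped of the SDK, the loop can be sketched in a few lines of TypeScript. This is illustrative only — `callLlm` and the `tools` map below are hypothetical stand-ins, not `@schift-io/sdk` APIs:

```typescript
// Illustrative ReAct loop sketch — not the SDK's internals.
// `callLlm` and `tools` are hypothetical stand-ins.
type LlmDecision =
  | { kind: "tool_call"; tool: string; input: string }
  | { kind: "final_answer"; text: string };

async function runReactLoop(
  question: string,
  callLlm: (history: string[]) => Promise<LlmDecision>,
  tools: Record<string, (input: string) => Promise<string>>,
  maxSteps = 10, // mirrors the default above
): Promise<string> {
  const history: string[] = [`user: ${question}`];
  for (let step = 0; step < maxSteps; step++) {
    const decision = await callLlm(history);                   // LLM thinks + picks a tool
    if (decision.kind === "final_answer") return decision.text;
    const result = await tools[decision.tool](decision.input); // tool executes
    history.push(`tool(${decision.tool}): ${result}`);         // result fed back to the LLM
  }
  throw new Error(`maxSteps (${maxSteps}) reached without a final answer`);
}
```

The key point is that the LLM, not the program, decides each iteration whether to call another tool or answer.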
## Creating an Agent

```ts
import { Schift, Agent, RAG } from "@schift-io/sdk";

const schift = new Schift({ apiKey: "sch_..." });
const rag = new RAG({ bucket: "my-docs" }, schift.transport);

const agent = new Agent({
  name: "My Agent",
  instructions: "You are a helpful assistant. Use the knowledge base.",
  rag,
  tools: [myCustomTool],
  model: "gpt-4o-mini",
  transport: schift.transport,
  maxSteps: 15,
});

const result = await agent.run("What is Schift?");
console.log(result.output);
```

## LLM Connection Modes
Agents support three ways to connect to an LLM:
### 1. Schift Cloud (default)

Routes through Schift's `/v1/chat/completions` endpoint. Supports model routing across OpenAI, Google, and Anthropic.

```ts
const agent = new Agent({
  name: "Cloud Agent",
  instructions: "...",
  model: "gpt-4o-mini", // or "claude-sonnet-4-6", "gemini-2.5-flash"
  transport: schift.transport,
});
```

### 2. Direct Provider
Connect directly to OpenAI, Google, or Anthropic endpoints. Bypasses Schift Cloud for LLM calls (RAG still uses Schift Cloud).

```ts
const agent = new Agent({
  name: "Direct Agent",
  instructions: "...",
  model: "gpt-4o-mini",
  baseUrl: "https://api.openai.com/v1",
  apiKey: process.env.OPENAI_API_KEY,
});
```

### 3. Self-hosted
Connect to Ollama, vLLM, LiteLLM, or any OpenAI-compatible endpoint.

```ts
const agent = new Agent({
  name: "Local Agent",
  instructions: "...",
  model: "llama3",
  baseUrl: "http://localhost:11434/v1",
});
```

See the Self-hosting guide for details.
## Agent vs Workflow vs Client

| | Agent | Workflow | Client |
|---|---|---|---|
| Use when | Interactive Q&A, tool calling | Fixed data pipelines (ETL, batch) | Simple embed/search calls |
| Loop | ReAct (dynamic, LLM decides) | DAG (fixed steps, deterministic) | None |
| Tools | Yes | Block types | N/A |
| Memory | Conversation history | N/A | N/A |
## Configuration Reference

| Option | Type | Default | Description |
|---|---|---|---|
| `name` | `string` | required | Display name |
| `instructions` | `string` | required | System prompt for the LLM |
| `model` | `ModelId \| string` | `"gpt-4o-mini"` | LLM model identifier |
| `transport` | `Transport` | — | Schift Cloud transport (from `schift.transport`) |
| `baseUrl` | `string` | — | Custom OpenAI-compatible endpoint |
| `apiKey` | `string` | — | API key for direct/self-hosted mode |
| `tools` | `AgentTool[]` | `[]` | Tools available to the agent |
| `rag` | `RAG` | — | RAG instance (auto-registers as a tool) |
| `memory` | `MemoryConfig` | `{ maxMessages: 50 }` | Conversation memory config |
| `maxSteps` | `number` | `10` | Max ReAct loop iterations |
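The `AgentTool` shape itself isn't spelled out here. As a rough sketch of the idea — the real interface in `@schift-io/sdk` may well differ, and the `name`/`description`/`execute` fields below are assumptions — a tool pairs a description the LLM can reason about with an executable function:

```typescript
// Hypothetical AgentTool shape — the real interface in @schift-io/sdk may
// differ; treat this as a sketch of the idea, not the SDK contract.
interface AgentTool {
  name: string;        // identifier the LLM uses to pick the tool
  description: string; // tells the LLM when the tool is appropriate
  execute: (input: string) => Promise<string>;
}

// A stub tool returning canned data, in the spirit of `myCustomTool` above.
const weatherTool: AgentTool = {
  name: "get_weather",
  description: "Returns current weather for a city name.",
  execute: async (city) => JSON.stringify({ city, tempC: 21, sky: "clear" }),
};
```

Check the SDK's type definitions for the actual contract before relying on this shape.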
## Run Result

`agent.run()` returns an `AgentRunResult`:

```ts
interface AgentRunResult {
  steps: AgentStep[];       // Each step in the ReAct loop
  output: string;           // Final answer text
  totalDurationMs: number;  // Total execution time
}
```

Each step has a `type`: `think`, `tool_call`, `tool_result`, `final_answer`, or `error`.
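The step trace is handy for debugging a run. A small helper along these lines can flatten it into one log line — `AgentRunResult` is copied from above, but any `AgentStep` fields beyond the documented `type` are unknown here, so only `type` is used:

```typescript
// Sketch: turning an AgentRunResult into a one-line trace for logging.
// The step `type` values are documented; other AgentStep fields are not
// shown in these docs, so this helper relies on `type` alone.
type StepType = "think" | "tool_call" | "tool_result" | "final_answer" | "error";
interface AgentStep { type: StepType }
interface AgentRunResult {
  steps: AgentStep[];       // Each step in the ReAct loop
  output: string;           // Final answer text
  totalDurationMs: number;  // Total execution time
}

function traceRun(result: AgentRunResult): string {
  const trace = result.steps.map((s) => s.type).join(" -> ");
  return `[${result.totalDurationMs}ms] ${trace} :: ${result.output}`;
}
```

For example, a run that thinks, calls one tool, and answers would log as `[1234ms] think -> tool_call -> tool_result -> final_answer :: ...`.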