# Workflows
Build and run DAG-based processing pipelines. Define workflows in YAML or JSON, manage them via SDK or REST API, and execute with a single call.
## Choose Your Format
Schift workflows accept both YAML and JSON. Pick the format that fits how you work:
| Format | Best for | How to use |
|---|---|---|
| YAML | Version control, human editing, code review | SDK push_yaml() / importYaml() or REST POST /v1/workflows/import |
| JSON | Programmatic generation, API-first workflows | SDK create() or REST POST /v1/workflows with graph body |
| Template | Quick start with a pre-built pipeline | SDK create(template="BASIC_RAG") or REST with template field |
**TL;DR**

If you are defining a pipeline by hand, use YAML. If your app generates pipelines dynamically, use JSON. Both produce the same workflow on the server.
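As an illustration of the equivalence, here is a minimal two-block workflow expressed as a JSON graph body built in Python. This is a sketch: it assumes the JSON body mirrors the YAML schema's keys (version, name, blocks, edges), which is the shape shown throughout this page.

```python
import json

# Minimal workflow graph as a Python dict. The key names mirror the
# documented YAML schema; serialize with json.dumps() to get the body
# for a create-with-graph request.
graph = {
    "version": 1,
    "name": "My Pipeline",
    "blocks": [
        {"id": "start", "type": "start"},
        {"id": "end", "type": "end"},
    ],
    "edges": [
        {"source": "start", "target": "end"},
    ],
}

body = json.dumps(graph)
```

Generating the graph as a dict like this is what "programmatic generation" usually looks like in practice: your app assembles blocks and edges from its own data, then submits the serialized result.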
## YAML Schema Reference
A workflow YAML file has four core top-level keys: version, name, blocks, and edges. An optional description is also accepted.
```yaml
version: 1                        # Required. Always 1.
name: "Legal Document QA"         # Required. Display name.
description: "RAG pipeline with reranking for legal docs"
blocks:
  - id: start                     # Unique ID (referenced by edges)
    type: start                   # Block type (see table below)
  - id: retriever
    type: retriever
    title: "Search Legal Docs"    # Optional display name
    config:                       # Block-specific parameters
      collection: "legal-kr"
      top_k: 10
      rerank: true
      rerank_top_k: 3
  - id: prompt
    type: prompt_template
    config:
      template: |
        Context:
        {{results}}
        Question: {{query}}
        Answer in the same language as the question.
  - id: llm
    type: llm
    config:
      model: "openai/gpt-4.1-nano"
      temperature: 0.3
      max_tokens: 1024
  - id: answer
    type: answer
  - id: end
    type: end
edges:
  - source: start
    target: retriever
  - source: retriever
    target: prompt
  - source: prompt
    target: llm
  - source: llm
    target: answer
  - source: answer
    target: end
```

## Block Types
Use GET /v1/workflows/meta/block-types to list all available types. Full reference:
| Category | Type | Description |
|---|---|---|
| Control | start | Entry point. Forwards all inputs to downstream blocks. |
| Control | end | Terminal node. Marks workflow completion. |
| Document | document_loader | Load PDF, DOCX, HWP, PPT, images, or URLs. |
| Document | document_parser | OCR and table/chart extraction. |
| Document | chunker | Split text. Strategies: recursive, semantic, sentence, fixed. |
| Embedding | embedder | Embed via Schift API with automatic canonical projection. |
| Embedding | model_selector | Auto-select the best embedding model for the task. |
| Storage | vector_store | Upsert vectors to a collection. |
| Storage | collection | Reference an existing collection by name. |
| Retrieval | retriever | Vector search with optional metadata filters and reranking. |
| Retrieval | reranker | Cross-encoder rerank on retrieved results. |
| LLM | llm | LLM generation. Prefix-routed: openai/, anthropic/, google/. |
| LLM | prompt_template | Jinja2 template for prompt construction. |
| Logic | condition | If/else branching. |
| Logic | router | Multi-path routing (question classifier). |
| Logic | ai_router | LLM-powered dynamic routing. |
| Logic | loop | Iterate over a list of items. |
| Transform | code | Python sandbox for custom logic. |
| Transform | merge | Merge multiple branch outputs. |
| Transform | variable | Set/get workflow variables. |
| Transform | field_selector | Pick columns from tables or paths from JSON. |
| Integration | http_request | Call an external API. |
| Integration | webhook | Incoming webhook trigger. |
| Ingest | webhook_source | Inbound webhook to ingest pipeline. |
| Ingest | ingest_bridge | Dedup + download + document creation. |
| Ingest | feed_poll | Periodic RSS/API polling. |
| Ingest | notify | Outbound webhook on job completion. |
| Output | answer | Chat-style response output. |
| Output | metadata_extractor | Extract structured metadata from text. |
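The prompt_template block uses Jinja2, so it supports the full Jinja2 feature set. For intuition, the simple {{variable}} substitution case can be sketched with a stdlib-only stand-in (this is an illustration of the semantics, not the engine Schift runs):

```python
import re

def render(template: str, variables: dict) -> str:
    # Replace each {{name}} placeholder with its value;
    # unknown names become empty strings.
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(variables.get(m.group(1), "")),
        template,
    )

prompt = render(
    "Context:\n{{results}}\n\nQuestion: {{query}}",
    {"results": "doc text", "query": "What is X?"},
)
```

In a real pipeline, {{results}} and {{query}} are filled from upstream block outputs and workflow inputs respectively, as in the YAML example above.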
## SDK Quickstart
Both SDKs support the full lifecycle: create, YAML import/export, run, and manage.
```python
from schift import Schift

client = Schift(api_key="sch_xxx")

# Option 1: Create from YAML file
wf = client.workflows.push_yaml("pipeline.yaml")

# Option 2: Create from template
wf = client.workflows.create("My RAG", template="BASIC_RAG")

# Option 3: Build step by step
wf = client.workflows.create("Custom Pipeline")
client.workflows.add_block(wf.id, "retriever", config={"collection": "docs", "top_k": 5})
client.workflows.add_block(wf.id, "llm", config={"model": "openai/gpt-4.1-nano"})
client.workflows.add_edge(wf.id, source="retriever", target="llm")

# Run
result = client.workflows.run(wf.id, inputs={"query": "What is the refund policy?"})
print(result.outputs)

# Export to YAML for version control
yaml_str = client.workflows.to_yaml(wf.id, path="pipeline.yaml")
```

```typescript
import { Schift } from "@schift-io/sdk";

const schift = new Schift({ apiKey: "sch_xxx" });

// Option 1: Import from YAML string (requires js-yaml)
const wf = await schift.workflows.importYaml(yamlString);

// Option 2: Create from template
const wf2 = await schift.workflows.create({ name: "My RAG", template: "BASIC_RAG" });

// Option 3: Build step by step
const wf3 = await schift.workflows.create({ name: "Custom" });
await schift.workflows.addBlock(wf3.id, { type: "retriever", config: { collection: "docs" } });
await schift.workflows.addBlock(wf3.id, { type: "llm", config: { model: "openai/gpt-4.1-nano" } });
await schift.workflows.addEdge(wf3.id, { source: "retriever", target: "llm" });

// Run
const run = await schift.workflows.run(wf3.id, { query: "What is the refund policy?" });

// Export to YAML
const yaml = await schift.workflows.exportYaml(wf3.id);
```

## REST API Reference
| Method | Endpoint | Description |
|---|---|---|
| POST | /v1/workflows | Create a workflow (blank, from template, or with full graph) |
| GET | /v1/workflows | List all workflows |
| GET | /v1/workflows/{id} | Get a single workflow with full graph |
| PATCH | /v1/workflows/{id} | Update name, description, status, or graph |
| DELETE | /v1/workflows/{id} | Delete a workflow |
| POST | /v1/workflows/import | Import from YAML string |
| GET | /v1/workflows/{id}/export?format=yaml | Export as YAML (or format=json) |
| POST | /v1/workflows/{id}/blocks | Add a block |
| DELETE | /v1/workflows/{id}/blocks/{block_id} | Remove a block |
| POST | /v1/workflows/{id}/edges | Add an edge |
| DELETE | /v1/workflows/{id}/edges/{edge_id} | Remove an edge |
| POST | /v1/workflows/{id}/validate | Validate the graph (cycles, missing connections) |
| POST | /v1/workflows/{id}/run | Execute with inputs |
| GET | /v1/workflows/{id}/runs | List past runs |
| GET | /v1/workflows/{id}/runs/{run_id} | Get a specific run result |
| POST | /v1/workflows/{id}/webhook/{path} | Trigger via external webhook |
| POST | /v1/workflows/generate | AI-generate a workflow from natural language (paid) |
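The validate endpoint checks the graph for cycles and missing connections before execution. Conceptually, the cycle check can be done with Kahn's algorithm: a graph is acyclic exactly when every block can be placed in a topological order. This is a sketch of that idea, not the server's actual implementation:

```python
from collections import deque

def has_cycle(blocks: list, edges: list) -> bool:
    # Kahn's algorithm: repeatedly remove blocks with no incoming
    # edges. If some blocks are never removed, they sit on a cycle.
    indegree = {b["id"]: 0 for b in blocks}
    adjacency = {b["id"]: [] for b in blocks}
    for e in edges:
        adjacency[e["source"]].append(e["target"])
        indegree[e["target"]] += 1

    queue = deque(bid for bid, d in indegree.items() if d == 0)
    visited = 0
    while queue:
        node = queue.popleft()
        visited += 1
        for nxt in adjacency[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)

    return visited != len(blocks)
```

Running this locally on a graph you are about to submit is a cheap pre-flight check before calling the validate endpoint.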
```bash
# Import a YAML file via REST
curl -X POST https://api.schift.io/v1/workflows/import \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $SCHIFT_API_KEY" \
  -d '{"yaml": "version: 1\nname: My Pipeline\nblocks:\n  - id: start\n    type: start\n  - id: end\n    type: end\nedges:\n  - source: start\n    target: end"}'

# Export as YAML
curl "https://api.schift.io/v1/workflows/{id}/export?format=yaml" \
  -H "Authorization: Bearer $SCHIFT_API_KEY"

# Export as JSON
curl "https://api.schift.io/v1/workflows/{id}/export?format=json" \
  -H "Authorization: Bearer $SCHIFT_API_KEY"
```

## LLM Provider Routing
The llm block routes to different providers based on the model prefix:
| Model format | Provider | Example |
|---|---|---|
| openai/model-name | OpenAI | openai/gpt-4.1-nano |
| anthropic/model-name | Anthropic | anthropic/claude-sonnet-4-6 |
| google/model-name or gemini-* | Google (Gemini) | gemini-2.5-flash |
API keys are resolved from org settings first, then from environment variables (OPENAI_API_KEY, ANTHROPIC_API_KEY, GOOGLE_API_KEY).
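The prefix rules above are simple string matching, which can be mirrored client-side when you want to know up front which API key a model will need. A minimal sketch (illustrative; the server's router is not documented here):

```python
def resolve_provider(model: str) -> str:
    # Map a model identifier to its provider using the documented
    # prefix rules; gemini-* is the prefixless alias for Google.
    if model.startswith("openai/"):
        return "openai"
    if model.startswith("anthropic/"):
        return "anthropic"
    if model.startswith("google/") or model.startswith("gemini-"):
        return "google"
    raise ValueError(f"Unknown model prefix: {model}")
```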
## Templates
Use GET /v1/workflows/meta/templates to list available templates.
| Template | Pipeline |
|---|---|
| BASIC_RAG | Start -> Retriever -> Reranker -> Prompt -> LLM -> Answer -> End |
| DOCUMENT_QA | Document QA with source attribution |
| CONVERSATIONAL_RAG | Multi-turn conversational RAG with context |
| CHAT_RAG | Chat-optimized RAG |
| IMAGE_OCR_INGEST | OCR -> Chunk -> Embed ingestion pipeline |
## Execution
Pass any number of input variables. The start node forwards all of them to downstream blocks.
```bash
# Validate before running
curl -X POST https://api.schift.io/v1/workflows/{id}/validate \
  -H "Authorization: Bearer $SCHIFT_API_KEY"

# Run with inputs
curl -X POST https://api.schift.io/v1/workflows/{id}/run \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $SCHIFT_API_KEY" \
  -d '{"inputs": {"query": "maternity leave policy", "language": "ko"}}'

# List past runs
curl https://api.schift.io/v1/workflows/{id}/runs \
  -H "Authorization: Bearer $SCHIFT_API_KEY"
```

The run response includes status, outputs, per-block block_states with timing, and error if any step failed.
## Webhook Triggers
Workflows can be triggered by external systems. POST to /v1/workflows/{id}/webhook/{path} and the payload is forwarded as workflow inputs (body, headers, query_params).
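The shape your workflow receives from a webhook trigger can be sketched as follows. The three key names (body, headers, query_params) come from the docs; the example values are invented:

```python
def webhook_to_inputs(body: dict, headers: dict, query_params: dict) -> dict:
    # The webhook payload is forwarded as workflow inputs under
    # these three keys; this sketch just mirrors that shape.
    return {"body": body, "headers": headers, "query_params": query_params}

inputs = webhook_to_inputs(
    {"event": "push"},          # JSON request body
    {"X-Source": "github"},     # request headers
    {"dry_run": "1"},           # URL query parameters
)
```

Downstream blocks can then read, for example, body.event via a field_selector or condition block.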
## AI Generation (Paid)
POST /v1/workflows/generate creates a workflow graph from a natural language description. The generated graph is returned for review and is not saved automatically.
```bash
curl -X POST https://api.schift.io/v1/workflows/generate \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $SCHIFT_API_KEY" \
  -d '{"prompt": "Search my docs, rerank results, and summarize with GPT-4o"}'
```