
Workflow Builder

Workflows are fixed, deterministic pipelines — unlike Agents (dynamic ReAct loops). Use workflows for batch processing, ETL, or when you need predictable, repeatable steps.

import { Schift } from "@schift-io/sdk";
const schift = new Schift({ apiKey: "sch_..." });
const workflow = await schift.workflows.create({ name: "My RAG Pipeline" });

Workflows are built from blocks connected by edges. Available block types:

| Type | Description |
| --- | --- |
| `start` | Entry point |
| `end` | Exit point |
| `retriever` | Search a vector store |
| `reranker` | Re-rank search results |
| `llm` | Call an LLM |
| `prompt_template` | Format a prompt |
| `document_loader` | Load documents |
| `chunker` | Split documents into chunks |
| `embedder` | Generate embeddings |
| `web_search` | Search the web |
| `code_executor` | Run custom code |
| `conditional` | Branch based on a condition |
| `loop` | Repeat a set of blocks |
| `api_call` | Call an external API |
| `outbound_webhook` | Dispatch an HMAC-signed POST to an external URL |
| `subworkflow` | Invoke another published workflow as a single block |
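As a sketch of how blocks and edges fit together, here is a hypothetical minimal definition object (the block IDs, `config` fields, and the shape of the definition are illustrative assumptions; only the block types come from the table above):

```typescript
// Hypothetical minimal definition: retrieve context, then answer with an LLM.
const definition = {
  blocks: [
    { id: "start", type: "start" },
    { id: "search", type: "retriever", config: { store: "docs" } },
    { id: "answer", type: "llm", config: { model: "openai/gpt-4.1-nano" } },
    { id: "end", type: "end" },
  ],
  edges: [
    { source: "start", target: "search" },
    { source: "search", target: "answer" },
    { source: "answer", target: "end" },
  ],
};

// Sanity check: every non-terminal block is the source of at least one edge.
const sources = new Set(definition.edges.map((e) => e.source));
const connected = definition.blocks
  .filter((b) => b.type !== "end")
  .every((b) => sources.has(b.id));
console.log(connected); // true
```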

The llm block accepts a provider-neutral JSON Schema through output_schema or response_schema. This is the same shape produced by the dashboard Structured output editor.

blocks:
  - id: company_research
    type: llm
    config:
      model: openai/gpt-4.1-nano
      template: "Research {{topic}} and return JSON only."
      output_format: json
      output_schema:
        type: object
        properties:
          companies:
            type: array
            items:
              type: object
              properties:
                company_name: { type: string }
                industry: { type: string }
                website: { type: string }
                founded_year: { type: integer }
              required: [company_name, industry, website, founded_year]
              additionalProperties: false
        required: [companies]
        additionalProperties: false

At runtime, Schift translates that schema into the provider’s native request format:

| Provider/model prefix | Request format |
| --- | --- |
| `openai/*` | Chat Completions `response_format: { type: "json_schema", json_schema: ... }` |
| `openrouter/*` | OpenRouter `response_format.json_schema` plus `provider.require_parameters: true` |
| `anthropic/*` | Claude Messages API `output_format: { type: "json_schema", schema: ... }` with the structured-output beta header |
| `gemini*` or `google/*` | Gemini native `generationConfig.responseJsonSchema` |
| `ollama/*` | Ollama native `/api/chat` `format: <json schema>` |
| `qwen/*` | DashScope JSON mode `response_format: { type: "json_object" }` plus the schema in the system instruction |
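For instance, for an `openai/*` model the schema above would be wrapped into the Chat Completions structured-output envelope roughly as follows. The `response_format` shape is OpenAI's documented format; the `name` and `strict` values, and stripping the `openai/` prefix from the model name, are assumptions about Schift's translation:

```typescript
// The provider-neutral schema from the YAML example above.
const outputSchema = {
  type: "object",
  properties: {
    companies: {
      type: "array",
      items: {
        type: "object",
        properties: {
          company_name: { type: "string" },
          industry: { type: "string" },
          website: { type: "string" },
          founded_year: { type: "integer" },
        },
        required: ["company_name", "industry", "website", "founded_year"],
        additionalProperties: false,
      },
    },
  },
  required: ["companies"],
  additionalProperties: false,
};

// Assumed translation into an OpenAI Chat Completions request.
const request = {
  model: "gpt-4.1-nano",
  messages: [{ role: "user", content: "Research Acme and return JSON only." }],
  response_format: {
    type: "json_schema",
    json_schema: { name: "output", strict: true, schema: outputSchema },
  },
};
```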

The block always returns the raw model text in text. When JSON parsing succeeds, it also returns the parsed object in data and response.

Qwen/DashScope JSON mode guarantees valid JSON, but does not enforce JSON Schema as strictly as OpenAI, OpenRouter, Claude, Gemini, or Ollama schema modes. Validate downstream if schema conformance is critical.
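If schema conformance matters on a Qwen/DashScope route, validate the parsed object before using it. A minimal hand-rolled check for the `companies` schema might look like this (a sketch covering only the fields this pipeline needs; a full validator such as Ajv would cover the whole JSON Schema spec):

```typescript
// Narrow structural check for the companies schema; not a general
// JSON Schema validator.
type Company = {
  company_name: string;
  industry: string;
  website: string;
  founded_year: number;
};

function isCompany(value: unknown): value is Company {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.company_name === "string" &&
    typeof v.industry === "string" &&
    typeof v.website === "string" &&
    Number.isInteger(v.founded_year)
  );
}

function validateOutput(data: unknown): Company[] {
  if (typeof data !== "object" || data === null) {
    throw new Error("output is not an object");
  }
  const companies = (data as { companies?: unknown }).companies;
  if (!Array.isArray(companies) || !companies.every(isCompany)) {
    throw new Error("companies failed schema check");
  }
  return companies;
}
```

Call it on the block's parsed output (e.g. `validateOutput(result.data)`) before handing the data to downstream blocks.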

import { workflowFromYaml, workflowToYaml } from "@schift-io/sdk";
// Import from YAML
const definition = workflowFromYaml(yamlString);
// Export to YAML
const yaml = workflowToYaml(definition);
const result = await schift.workflows.run(workflow.id, {
  query: "What is vector search?",
});
console.log(result);

Break a monolithic graph into small, focused child workflows and invoke them from a parent router. Each child owns its own prompts, model config, and budget surface.

parent.yaml

blocks:
  - { id: start, type: start }
  - id: stage_router
    type: router
    config:
      routes: [baby_info, letter, free_chat, fallback]
      expression: |
        workflowStage === "free_chat" ? "free_chat"
        : (workflowStage === 0 && /네|볼래|보여/.test(query)) ? "baby_info"
        : (workflowStage === 2 && currentAttachmentQuestionId) ? "letter"
        : "fallback"
  - id: sub_baby
    type: subworkflow
    config:
      workflow_id: $env.SCHIFT_WF_BABY_INFO # or a literal workflow ID
      input_mapping:
        currentWeek: "$.currentWeek" # $. reads from this block's inputs
        query: "$.query"
        weekKnowledgeEntityId: "$.weekKnowledgeEntityId"
      output_mapping:
        "*": "result" # spread child.result onto parent output
  - id: merge
    type: merge
    config: { strategy: first_non_null }
  - { id: end, type: end }
edges:
  - { source: start, target: stage_router }
  - { source: stage_router, source_handle: baby_info, target: sub_baby }
  - { source: sub_baby, target: merge }
  - { source: merge, target: end }
| Key | Type | Description |
| --- | --- | --- |
| `workflow_id` | string (required) | Target workflow ID. Supports `$env.VAR_NAME` and `{{var}}` substitution from `ctx.variables`. |
| `input_mapping` | dict | Map parent inputs/variables → child inputs. Values: `$.field` (from resolved inputs), `$var.foo.bar` (from workflow variables), or a plain string (treated as an inputs key). If omitted, the full input payload is forwarded. |
| `output_mapping` | dict | Reshape child outputs. Three forms: rename (`answer: "text"`), nested path (`answer: "result.answer"`), or spread (`"*": "result"` flattens `child.result` onto the parent output). Spread runs first; explicit keys win on collision. |
| `timeout_s` | number | Child-workflow execution timeout, in seconds. Default 30. |

Without output_mapping, the parent block receives the child workflow’s END outputs verbatim. Two meta keys are always added:

{
  "...child keys...": "...",
  "_subworkflow_id": "wf_abc123",
  "_subworkflow_run_id": "run_xyz789"
}

Meta keys are kept across remapping (via setdefault) so downstream blocks can still trace which child ran.
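The remapping rules can be sketched as follows (an illustrative re-implementation of the semantics described above, not the engine's actual code):

```typescript
type Json = Record<string, unknown>;

// Illustrative output_mapping semantics: spread ("*") runs first, explicit
// keys win on collision, and the _subworkflow_* meta keys are re-added
// setdefault-style (never overwriting an explicitly mapped value).
function applyOutputMapping(child: Json, mapping: Record<string, string>): Json {
  const out: Json = {};

  // 1. Spread: flatten the named child field onto the parent output.
  const spreadField = mapping["*"];
  if (spreadField && typeof child[spreadField] === "object" && child[spreadField] !== null) {
    Object.assign(out, child[spreadField] as Json);
  }

  // 2. Explicit keys: rename or pull a nested path from the child output.
  for (const [target, source] of Object.entries(mapping)) {
    if (target === "*") continue;
    out[target] = source.split(".").reduce<unknown>(
      (acc, key) => (typeof acc === "object" && acc !== null ? (acc as Json)[key] : undefined),
      child,
    );
  }

  // 3. Meta keys survive remapping.
  for (const meta of ["_subworkflow_id", "_subworkflow_run_id"]) {
    if (!(meta in out)) out[meta] = child[meta];
  }
  return out;
}

const child = {
  result: { greeting: "hi" },
  answer: "yes",
  _subworkflow_id: "wf_abc123",
  _subworkflow_run_id: "run_xyz789",
};
const mapped = applyOutputMapping(child, { "*": "result", final: "answer" });
console.log(mapped);
// { greeting: "hi", final: "yes", _subworkflow_id: "wf_abc123", _subworkflow_run_id: "run_xyz789" }
```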

  • Recursion: a workflow cannot invoke itself, directly or transitively, and nesting depth is capped at 5. Exceeding either limit fails the parent run.
  • Shared spend guard: the child inherits the parent’s LLM/token/call budget caps — fanning out to children cannot bypass a per-run budget.
  • Org-scoped: the child workflow must belong to the same org as the parent. Cross-org invocation fails with a not-found error.
  • Skipping via router: subworkflow blocks gated behind a router branch that doesn’t match are auto-skipped (engine _is_skipped). No explicit skip flag needed.
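The shared spend guard can be pictured as a single counter threaded through parent and children (an illustrative model of the behavior described above, not the engine's implementation):

```typescript
// One budget object is shared by reference, so a child run decrements the
// same counter as its parent -- fanning out to children cannot reset it.
class RunBudget {
  constructor(public remainingLlmCalls: number) {}
  charge(calls: number): void {
    if (calls > this.remainingLlmCalls) throw new Error("budget exceeded");
    this.remainingLlmCalls -= calls;
  }
}

function runChild(budget: RunBudget, llmCalls: number): void {
  budget.charge(llmCalls); // child inherits the parent's budget, not a fresh one
}

const budget = new RunBudget(3);
runChild(budget, 2); // first child spends 2 of the 3 allowed calls
let blocked = false;
try {
  runChild(budget, 2); // second child would exceed the shared cap
} catch {
  blocked = true;
}
console.log(budget.remainingLlmCalls, blocked); // 1 true
```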

Register custom block types for specialized processing:

import { registerCustomNode, SDKBaseNode } from "@schift-io/sdk";
class MyNode extends SDKBaseNode {
  async execute(input: unknown) {
    // Custom logic
    return { processed: true, data: input };
  }
}

registerCustomNode("my_custom_node", MyNode);