# Workflow Builder
## Overview

Workflows are fixed, deterministic pipelines — unlike Agents (dynamic ReAct loops). Use workflows for batch processing, ETL, or when you need predictable, repeatable steps.
## Creating a Workflow
```typescript
import { Schift } from "@schift-io/sdk";

const schift = new Schift({ apiKey: "sch_..." });

const workflow = await schift.workflows.create({ name: "My RAG Pipeline" });
```

## Block Types

Workflows are built from blocks connected by edges. Available block types:
| Type | Description |
|---|---|
| `start` | Entry point |
| `end` | Exit point |
| `retriever` | Search a vector store |
| `reranker` | Re-rank search results |
| `llm` | Call an LLM |
| `prompt_template` | Format a prompt |
| `document_loader` | Load documents |
| `chunker` | Split documents into chunks |
| `embedder` | Generate embeddings |
| `web_search` | Search the web |
| `code_executor` | Run custom code |
| `conditional` | Branch based on a condition |
| `loop` | Repeat a set of blocks |
| `api_call` | Call an external API |
| `outbound_webhook` | Dispatch an HMAC-signed POST to an external URL |
| `subworkflow` | Invoke another published workflow as a single block |
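To make the block/edge model concrete, here is a minimal retrieval pipeline sketched in the YAML form used elsewhere on this page. The `retriever` config key (`store_id`) and its value are hypothetical placeholders, not a documented schema; check a dashboard export for the exact config shape of each block type.

```yaml
blocks:
  - { id: start, type: start }

  - id: docs
    type: retriever
    config: { store_id: vs_example }   # hypothetical vector-store ID

  - id: answer
    type: llm
    config:
      model: openai/gpt-4.1-nano
      template: "Answer {{query}} using: {{docs}}"

  - { id: end, type: end }

edges:
  - { source: start, target: docs }
  - { source: docs, target: answer }
  - { source: answer, target: end }
```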
## Structured LLM Output
The `llm` block accepts a provider-neutral JSON Schema through `output_schema` or `response_schema`. This is the same shape produced by the dashboard Structured output editor.
```yaml
blocks:
  - id: company_research
    type: llm
    config:
      model: openai/gpt-4.1-nano
      template: "Research {{topic}} and return JSON only."
      output_format: json
      output_schema:
        type: object
        properties:
          companies:
            type: array
            items:
              type: object
              properties:
                company_name: { type: string }
                industry: { type: string }
                website: { type: string }
                founded_year: { type: integer }
              required: [company_name, industry, website, founded_year]
              additionalProperties: false
        required: [companies]
        additionalProperties: false
```

At runtime, Schift translates that schema into the provider's native request format:
| Provider/model prefix | Request format |
|---|---|
| `openai/*` | Chat Completions `response_format: { type: "json_schema", json_schema: ... }` |
| `openrouter/*` | OpenRouter `response_format.json_schema` plus `provider.require_parameters: true` |
| `anthropic/*` | Claude Messages API `output_format: { type: "json_schema", schema: ... }` with the structured-output beta header |
| `gemini*` or `google/*` | Gemini native `generationConfig.responseJsonSchema` |
| `ollama/*` | Ollama native `/api/chat` `format: <json schema>` |
| `qwen/*` | DashScope JSON mode `response_format: { type: "json_object" }` plus the schema in the system instruction |
The block always returns the raw model text in `text`. When JSON parsing succeeds, it also returns the parsed object in `data` and `response`.

Qwen/DashScope JSON mode guarantees valid JSON, but does not enforce JSON Schema as strictly as the OpenAI, OpenRouter, Claude, Gemini, or Ollama schema modes. Validate downstream if schema conformance is critical.
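Since `data` is only present when parsing succeeded, downstream code should not assume it exists. A minimal defensive-consumption sketch (the `LlmBlockOutput` interface and `extractCompanies` helper are illustrative, not part of the SDK):

```typescript
// Hypothetical shape of an llm block's output, per the behavior described above.
interface LlmBlockOutput {
  text: string;   // raw model text, always present
  data?: unknown; // parsed JSON, present only when parsing succeeded
}

// Prefer the pre-parsed `data`; fall back to parsing `text` ourselves.
// Then validate the one field we depend on, since some providers
// (e.g. Qwen/DashScope JSON mode) don't strictly enforce the schema.
function extractCompanies(output: LlmBlockOutput): { company_name: string }[] {
  const parsed = output.data ?? JSON.parse(output.text);
  const companies = (parsed as { companies?: unknown }).companies;
  if (!Array.isArray(companies)) {
    throw new Error("Model output missing `companies` array");
  }
  return companies as { company_name: string }[];
}
```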
## YAML Import/Export
```typescript
import { workflowFromYaml, workflowToYaml } from "@schift-io/sdk";

// Import from YAML
const definition = workflowFromYaml(yamlString);

// Export to YAML
const yaml = workflowToYaml(definition);
```

## Running a Workflow
```typescript
const result = await schift.workflows.run(workflow.id, {
  query: "What is vector search?",
});

console.log(result);
```

## Composing Workflows with `subworkflow`
Break a monolithic graph into small, focused child workflows and invoke them from a parent router. Each child owns its own prompts, model config, and budget surface.
```yaml
blocks:
  - { id: start, type: start }

  - id: stage_router
    type: router
    config:
      routes: [baby_info, letter, free_chat, fallback]
      expression: |
        workflowStage === "free_chat" ? "free_chat"
          : (workflowStage === 0 && /네|볼래|보여/.test(query)) ? "baby_info"
          : (workflowStage === 2 && currentAttachmentQuestionId) ? "letter"
          : "fallback"

  - id: sub_baby
    type: subworkflow
    config:
      workflow_id: $env.SCHIFT_WF_BABY_INFO  # or a literal workflow ID
      input_mapping:
        currentWeek: "$.currentWeek"         # $. reads from this block's inputs
        query: "$.query"
        weekKnowledgeEntityId: "$.weekKnowledgeEntityId"
      output_mapping:
        "*": "result"                        # spread child.result onto parent output

  - id: merge
    type: merge
    config: { strategy: first_non_null }

  - { id: end, type: end }

edges:
  - { source: start, target: stage_router }
  - { source: stage_router, source_handle: baby_info, target: sub_baby }
  - { source: sub_baby, target: merge }
  - { source: merge, target: end }
```

### Config
| Key | Type | Description |
|---|---|---|
| `workflow_id` | string (required) | Target workflow ID. Supports `$env.VAR_NAME` and `{{var}}` substitution from `ctx.variables`. |
| `input_mapping` | dict | Map parent inputs/variables to child inputs. Values: `$.field` (from resolved inputs), `$var.foo.bar` (from workflow variables), plain string (treated as an inputs key). If omitted, the full input payload is forwarded. |
| `output_mapping` | dict | Reshape child outputs. Three forms: rename (`answer: "text"`), nested path (`answer: "result.answer"`), or spread (`"*": "result"` flattens `child.result` onto the parent output). Spread runs first; explicit keys win on collision. |
| `timeout_s` | number | Child-workflow execution timeout. Default 30. |
### Output envelope
Without `output_mapping`, the parent block receives the child workflow's END outputs verbatim. Two meta keys are always added:

```json
{
  "...child keys...": "...",
  "_subworkflow_id": "wf_abc123",
  "_subworkflow_run_id": "run_xyz789"
}
```

Meta keys are kept across remapping (via `setdefault`) so downstream blocks can still trace which child ran.
### Safety
Section titled “Safety”- Recursion: a workflow cannot invoke itself transitively; max depth is 5. Exceeding either limit fails the parent run.
- Shared spend guard: the child inherits the parent’s LLM/token/call budget caps — fanning out to children cannot bypass a per-run budget.
- Org-scoped: the child workflow must belong to the same org as the parent. Cross-org invocation returns
not found. - Skipping via router: subworkflow blocks gated behind a
routerbranch that doesn’t match are auto-skipped (engine_is_skipped). No explicitskipflag needed.
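The recursion rule can be pictured as a check against the chain of ancestor workflow IDs before a child run starts. The function below is a hypothetical sketch of such a guard, assuming the engine tracks that chain; it is not the SDK's actual code.

```typescript
const MAX_DEPTH = 5;

// Hypothetical guard mirroring the documented rules: no transitive
// self-invocation, and at most MAX_DEPTH nested workflow runs.
function checkSubworkflowCall(ancestors: string[], childId: string): void {
  if (ancestors.includes(childId)) {
    throw new Error(`Recursion: ${childId} is already in the call chain`);
  }
  if (ancestors.length + 1 > MAX_DEPTH) {
    throw new Error(`Max subworkflow depth of ${MAX_DEPTH} exceeded`);
  }
}
```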
## Custom Nodes
Section titled “Custom Nodes”Register custom block types for specialized processing:
```typescript
import { registerCustomNode, SDKBaseNode } from "@schift-io/sdk";

class MyNode extends SDKBaseNode {
  async execute(input: unknown) {
    // Custom logic
    return { processed: true, data: input };
  }
}

registerCustomNode("my_custom_node", MyNode);
```