Orchestration and Decomposition¶
Status: Active
Date: March 11, 2026
Purpose: Define the deterministic control-plane model for Mozaiks runtime orchestration and the authoring contract for decomposition.
Non-Negotiable Rules¶
- The runtime control plane is deterministic.
- `mfj_extension.json` is a compiled execution artifact, not a place for prose reasoning.
- Natural-language reasoning does not belong in runtime graphs.
- LLMs may produce plans, classifications, and structured outputs inside workflows.
- The runtime may execute those outputs, but it must not interpret vague prose to decide control flow.
The Three Layers¶
1. Global Orchestrator¶
The global pack graph in `factory_app/workflows/extended_orchestration/extension_registry.json` is for sequencing across workflows.
Sequence key: `workflow_sequences[]`.
It answers:
- which workflows exist
- which workflows belong to the same journey
- which workflows run sequentially vs in parallel groups
It does not answer:
- how a workflow decomposes a task internally
- how a child fan-out is generated
- how an LLM should reason about branching
Use the global layer for coarse journey phases such as:
```
ValueEngine -> BuildApp
GreenRoom -> WritersRoom -> MainStage
Review -> Publish
```
2. Workflow-Level MFJ¶
The per-workflow pack graph in `app/workflows/<workflow>/extended_orchestration/mfj_extension.json` is for mid-flight journeys inside one workflow. Builder workflows use the same contract under `factory_app/workflows/<workflow>/extended_orchestration/mfj_extension.json`.
It answers:
- which `decomposition_agent` triggers the MFJ
- what the child spawn mode is
- which context fields must be present
- how fan-in resumes the parent
- where merged child results are injected
It does not contain business prose. It only contains executable runtime config.
3. Task Graph / DAG¶
This is optional and separate from MFJ.
A DAG only exists when a planner emits explicit dependency edges such as depends_on.
That means:
- a `decomposition_agent` output is not automatically a DAG
- an MFJ is not automatically a DAG
- a DAG is a structured task plan plus dependency edges plus a scheduler
For most Mozaiks workflows, a layered execution model is enough:
- foundation
- parallel child work
- integration
- summary / preview
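The distinction above can be made concrete: a DAG exists only once task specs carry `depends_on` edges and something schedules them. A minimal layered-scheduling sketch in Python (illustrative only; the function name is hypothetical, and the `task_id`/`depends_on` fields follow the decomposition contract described later in this document):

```python
def layer_tasks(tasks):
    """Group tasks into execution layers: each layer depends only on earlier layers."""
    done, layers = set(), []
    remaining = {t["task_id"]: t for t in tasks}
    while remaining:
        # A task is ready when every dependency has already been scheduled.
        ready = [t for t in remaining.values()
                 if all(d in done for d in t.get("depends_on", []))]
        if not ready:
            raise ValueError("dependency cycle or unknown task_id")
        layers.append([t["task_id"] for t in ready])
        for t in ready:
            done.add(t["task_id"])
            del remaining[t["task_id"]]
    return layers

# Mirrors the layered model above: foundation -> parallel child work -> integration.
tasks = [
    {"task_id": "foundation"},
    {"task_id": "child_a", "depends_on": ["foundation"]},
    {"task_id": "child_b", "depends_on": ["foundation"]},
    {"task_id": "integration", "depends_on": ["child_a", "child_b"]},
]
```

When no `depends_on` edges are present, every task lands in one layer, which is exactly why a decomposition output is not automatically a DAG.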
Runtime Contract¶
Global Pack Graph¶
Global pack graphs should stay minimal:
```json
{
  "version": 3,
  "workflows": [
    { "id": "GreenRoom" },
    { "id": "WritersRoom" },
    { "id": "MainStage" }
  ],
  "workflow_sequences": [
    {
      "id": "backstage_showcase",
      "steps": [
        { "workflows": ["GreenRoom"] },
        { "workflows": ["WritersRoom"] },
        { "workflows": ["MainStage"] }
      ]
    }
  ]
}
```
Meaning:
- `GreenRoom` runs first
- then `WritersRoom` starts
- then `MainStage` finishes the journey
- the runtime does not guess intent from prose
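Read deterministically, a sequence is just an ordered list of steps, each holding one or more workflows that may run as a parallel group. A sketch of that interpretation (the function name is hypothetical; the `workflow_sequences` shape matches the global pack graph above):

```python
def expand_sequence(pack_graph, sequence_id):
    """Return the ordered parallel groups for one workflow sequence."""
    for seq in pack_graph["workflow_sequences"]:
        if seq["id"] == sequence_id:
            # Each step is a parallel group; steps run strictly in order.
            return [step["workflows"] for step in seq["steps"]]
    raise KeyError(sequence_id)

pack_graph = {
    "version": 3,
    "workflows": [{"id": "GreenRoom"}, {"id": "WritersRoom"}, {"id": "MainStage"}],
    "workflow_sequences": [
        {"id": "backstage_showcase",
         "steps": [{"workflows": ["GreenRoom"]},
                   {"workflows": ["WritersRoom"]},
                   {"workflows": ["MainStage"]}]},
    ],
}
```

Nothing here inspects prose: the step order in the compiled artifact is the entire control decision.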
Workflow MFJ Graph¶
Workflow MFJ graphs should stay as small as possible.
Single-phase form:
```json
{
  "version": 3,
  "mid_flight_journeys": [
    {
      "id": "writers_room_cycle",
      "description": "Fan out to 3 writer children, fan in to host.",
      "decomposition_agent": "DecompositionAgent",
      "fan_out": { "spawn_mode": "workflow", "max_children": 3 },
      "fan_in": {
        "resume_agent": "WritersHostAgent",
        "inject_as": "mfj_writers_room_results"
      }
    }
  ]
}
```
Meaning:
- `decomposition_agent` must emit the child specs in its structured output
- the runtime fans out deterministically
- the runtime fans in deterministically
- the parent resumes at the configured `resume_agent`
- `aggregation_strategy` defaults to `collect_all`, so there is no need to author it
- `resume_entry_agent` defaults to `resume_agent` when omitted
Multi-stage form:
Use stages when one decomposition agent powers multiple sequential fan-out → fan-in phases. Each stage after the first requires a `gate_agent` that serves as the fan-in resume point of the prior stage and the decomposition trigger for the next fan-out.
```json
{
  "version": 3,
  "mid_flight_journeys": [
    {
      "id": "generation_journey",
      "description": "Stage 1 plans all workflows in parallel, user approves, stage 2 implements.",
      "decomposition_agent": "DecompositionAgent",
      "fan_out": { "spawn_mode": "workflow", "max_children": 10 },
      "stages": [
        {
          "id": "plan",
          "child_initial_agent": "PlanningAgent",
          "resume_agent": "ReviewAgent",
          "inject_as": "mfj_plan_results"
        },
        {
          "id": "implement",
          "gate_agent": "ApprovalAgent",
          "child_initial_agent": "ImplementationAgent",
          "resume_agent": "PackagingAgent",
          "inject_as": "mfj_impl_results"
        }
      ]
    }
  ]
}
```
The schema expands stages to flat journeys at load time. The coordinator never sees the staged format — it sees one journey per stage, with the gate agent as the decomposition trigger of the next stage.
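The load-time expansion can be pictured as follows. This is a sketch, not the actual loader; only the field names come from the staged JSON format above, and the flat-journey `id` convention shown here is hypothetical:

```python
def expand_stages(journey):
    """Expand a staged journey into one flat journey per stage.

    Each stage after the first uses its gate_agent as the decomposition
    trigger, so the coordinator only ever sees flat journeys.
    """
    flat = []
    trigger = journey["decomposition_agent"]
    for stage in journey["stages"]:
        trigger = stage.get("gate_agent", trigger)
        flat.append({
            "id": f'{journey["id"]}__{stage["id"]}',  # hypothetical naming scheme
            "decomposition_agent": trigger,
            "fan_out": journey["fan_out"],
            "fan_in": {"resume_agent": stage["resume_agent"],
                       "inject_as": stage["inject_as"]},
            "child_initial_agent": stage["child_initial_agent"],
        })
    return flat

journey = {
    "id": "generation_journey",
    "decomposition_agent": "DecompositionAgent",
    "fan_out": {"spawn_mode": "workflow", "max_children": 10},
    "stages": [
        {"id": "plan", "child_initial_agent": "PlanningAgent",
         "resume_agent": "ReviewAgent", "inject_as": "mfj_plan_results"},
        {"id": "implement", "gate_agent": "ApprovalAgent",
         "child_initial_agent": "ImplementationAgent",
         "resume_agent": "PackagingAgent", "inject_as": "mfj_impl_results"},
    ],
}
```

Note how `ApprovalAgent` becomes the decomposition trigger of the second flat journey, which is exactly the gate behavior described above.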
Advanced fields like `trigger_on`, `input_contract`, `output_contract`, `child_context_seed`, and timeout settings are optional override knobs. They exist for stricter validation or special cases, but they should not be the default authored experience. Keep those advanced knobs in roadmap profiles until the baseline authoring flow needs them (see the internal MFJ authoring roadmap notes).
Decomposition Contract¶
If a workflow needs productive fan-out, a dedicated decomposition step should prepare it.
That means:
- do not put reasoning in `mfj_extension.json`
- do put reasoning in a `decomposition_agent`
- require structured outputs from that agent
MFJ context variable auto-synthesis¶
The runtime reads `extended_orchestration/mfj_extension.json` at plan-load time and auto-registers context variables for:
- every `inject_as` key: type `object`, default `null`, scoped to the corresponding `resume_agent`
- the runtime `_mfj_resume_*` handshake keys, scoped to every `resume_agent`
Workflow authors do not declare these in `context_variables.yaml`. The only authoring obligation is in the `resume_agent`'s `[CONTEXT]` prompt section: name the `inject_as` key and describe the shape of the injected value so the agent knows what it is reading.
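One way to picture the auto-synthesis (a sketch with hypothetical names; the exact handshake key pattern under `_mfj_resume_*` is illustrative, not the runtime's real naming):

```python
def synthesize_context_vars(mfj_graph):
    """Derive the context variables the runtime auto-registers from an MFJ graph."""
    synthesized = []
    for journey in mfj_graph.get("mid_flight_journeys", []):
        fan_in = journey.get("fan_in", {})
        resume = fan_in.get("resume_agent")
        if not resume:
            continue
        # Every inject_as key: type object, default null, scoped to its resume_agent.
        synthesized.append({"name": fan_in["inject_as"], "type": "object",
                            "default": None, "scope": [resume]})
        # Runtime handshake key (illustrative name), also scoped to the resume agent.
        synthesized.append({"name": f"_mfj_resume_{journey['id']}",
                            "type": "object", "default": None, "scope": [resume]})
    return synthesized

mfj_graph = {
    "mid_flight_journeys": [
        {"id": "writers_room_cycle",
         "fan_in": {"resume_agent": "WritersHostAgent",
                    "inject_as": "mfj_writers_room_results"}},
    ],
}
```

The author writes none of this: the only manual step is documenting the `inject_as` shape in the resume agent's prompt.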
The decomposition agent is responsible for producing:
- bounded child work units
- child workflow specs
- any lane/task metadata needed for fan-in
For build-style workflows, the output should include ownership and dependency information such as:
- `task_id`
- `goal`
- `owned_paths`
- `depends_on`
- `acceptance_criteria`
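A build-style child work unit could be typed like this. This is an illustrative sketch using stdlib dataclasses; the runtime's actual structured-output models and the example values are hypothetical, only the field names come from the list above:

```python
from dataclasses import dataclass, field

@dataclass
class BuildTask:
    """One bounded child work unit emitted by a decomposition_agent."""
    task_id: str
    goal: str
    owned_paths: list[str] = field(default_factory=list)   # paths this child may touch
    depends_on: list[str] = field(default_factory=list)    # task_ids that must finish first
    acceptance_criteria: list[str] = field(default_factory=list)

# Hypothetical example instance.
task = BuildTask(
    task_id="auth_module",
    goal="Implement login and session handling",
    owned_paths=["app/auth/"],
    acceptance_criteria=["login round-trips a session token"],
)
```

Requiring a typed shape like this is what lets the runtime fan out deterministically: the plan is data, not prose.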
Cross-Workflow Data Transfer¶
Global journeys do not magically share workflow-local context.
Cross-workflow carry must be explicit:
- workflow A persists canonical fields to its `ChatSessions` document
- workflow B loads them in a `before_chat` lifecycle tool
- workflow B seeds its own context variables from those persisted fields
This is the current Mozaiks contract.
Use it for:
- `ValueEngine` canonical app spec
- `GreenRoom` set brief
- any other workflow-to-workflow carry
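In code, the carry looks roughly like this. This is a sketch: the `load_chat_session` parameter, the `set_brief` field, and the example values are hypothetical; only the `ChatSessions` / `before_chat` contract comes from this document:

```python
def before_chat(session_id, context, load_chat_session):
    """before_chat lifecycle tool sketch: seed workflow B's context from persisted fields.

    load_chat_session is injected here for illustration; a real tool would
    use the runtime's persistence API to read the ChatSessions document.
    """
    doc = load_chat_session(session_id)  # workflow A's persisted document
    # Carry only explicit, canonical fields; nothing is shared implicitly.
    if "set_brief" in doc:
        context["set_brief"] = doc["set_brief"]
    return context

# Usage sketch with an in-memory stand-in for the ChatSessions store.
fake_store = {"abc": {"set_brief": {"premise": "airport security", "duration_min": 5}}}
seeded = before_chat("abc", {}, fake_store.__getitem__)
```

If workflow A never persisted the field, workflow B's context simply lacks it, which keeps the carry contract explicit and auditable.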
Runtime Event Flow¶
The runtime emits `chat.agent_output_validated` for any agent with a registered structured-output model. Two downstream handlers react:
- `handle_tool_dispatch`: invoked only when the agent has an `auto_tool_call: true` tool in `tools.yaml`. Runs the mapped tool function deterministically.
- `handle_journey_triggered`: invoked only when the agent matches a `decomposition_agent` in the workflow's MFJ pack graph. Starts fan-out.
That means:
- MFJ decomposition agents do not need fake auto-tool bindings
- UI tool or side-effect automation should use `auto_tool_call: true` on the tool in `tools.yaml`
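The two-handler split can be sketched as below. The handler names come from this document; the dispatch function, config shapes, and example agents are hypothetical:

```python
def route_validated_output(agent, tools_config, mfj_journeys):
    """Decide which handlers a chat.agent_output_validated event fires for one agent."""
    fired = []
    # handle_tool_dispatch: only when the agent owns an auto_tool_call: true tool.
    for tool in tools_config.get(agent, []):
        if tool.get("auto_tool_call"):
            fired.append(("handle_tool_dispatch", tool["name"]))
    # handle_journey_triggered: only when the agent is a journey's decomposition_agent.
    for journey in mfj_journeys:
        if journey["decomposition_agent"] == agent:
            fired.append(("handle_journey_triggered", journey["id"]))
    return fired

tools_config = {"UIAgent": [{"name": "render_panel", "auto_tool_call": True}]}
journeys = [{"id": "writers_room_cycle", "decomposition_agent": "DecompositionAgent"}]
```

Because the two conditions are independent, a decomposition agent needs no tool binding at all to trigger fan-out.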
Showcase Pattern¶
The canonical demo in this repo is:
`GreenRoom -> WritersRoom -> MainStage`
GreenRoom¶
Purpose:
- capture a comedy premise and performer boundaries
- convert it into a canonical set brief
- persist that brief for the next workflow
WritersRoom¶
Purpose:
- load the persisted set brief
- decompose it into three parallel evaluation lanes
- fan out to three child runs inside the same workflow
- fan in to the host
- render both inline and artifact UI surfaces
MainStage¶
Purpose:
- load the writers-room summary
- package the strongest material into a final stage-ready set
- render the final artifact for presentation
This demonstrates:
- global universal orchestration
- workflow-level MFJ
- lifecycle-tool carry between workflows
- inline UI tools
- artifact UI tools
BuildApp Guidance¶
For real application generation, the pattern should be:
- `ValueEngine` produces `ProductSpec`, or `ExistingAppDiscovery` produces `ExistingProductSpec`
- downstream planning resolves `CapabilitySpec[]`
- `ExperienceSpec` and `AgentAugmentationPlan` are derived from that product model
- compilers turn those artifacts into a typed `BuildGraph` and concrete bundles
- major changes emit a typed `ChangeIntent`
- the universal orchestrator routes deterministically from typed refinement state rather than raw prose
Do not route from raw prose.
Use:
- typed `AppSpec`
- typed `ProductSpec`
- typed `CapabilitySpec`
- typed `ExperienceSpec`
- typed `AgentAugmentationPlan`
- typed `ChangeIntent`
- typed `BuildGraph`
- typed `BuildTaskSpec`
Decompose into product artifacts first, not workflows first.
- modules and persistent pages come from deterministic product planning
- workflows are attached only when a capability requires agentic behavior
- refinements route by artifact boundary (`ProductSpec`, `CapabilitySpec[]`, `ExperienceSpec`, `AgentAugmentationPlan`, `BuildGraph`)
Summary¶
- Global pack graphs sequence workflows.
- Workflow pack graphs handle MFJ inside a workflow.
- Decomposition belongs to agents, not runtime graph prose.
- DAG scheduling is optional and separate from MFJ.
- Cross-workflow carry is explicit persistence plus lifecycle loading.
- The runtime executes compiled contracts, not natural-language logic.