What this step does, in enough detail for a new contributor to understand.
"""
[[steps]]
id = "collect"
description = "Assemble metrics into evidence/{category}/{date}.json."
output = "evidence/{category}/{date}.json"
[[steps]]
id = "deliver"
description = "Commit evidence file and post summary comment to issue."
[products.evidence_file]
path = "evidence/{category}/{date}.json"
delivery = "commit to main"
schema = "evidence/README.md"
[resources]
profile = "light" # or "heavy"
concurrency = "safe to run in parallel" # or "exclusive"
```
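The `{category}` and `{date}` placeholders in paths like `evidence/{category}/{date}.json` are filled in at run time. A minimal sketch of that substitution (the function name, `str.format` mechanism, and ISO date format are assumptions, not the orchestrator's actual implementation):

```python
def evidence_path(template: str, category: str, day: str) -> str:
    # Resolve the {category}/{date} placeholders used throughout this file.
    # str.format is an assumption; the orchestrator may substitute differently.
    return template.format(category=category, date=day)

# evidence_path("evidence/{category}/{date}.json", "metrics", "2026-01-31")
# → "evidence/metrics/2026-01-31.json"
```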
## How to Add a New Formula
1. **Pick a name.** The file goes in `formulas/run-{name}.toml`. The `[formula]` `id` must match: `run-{name}`.
2. **Decide sense vs. act.** If your formula only reads state and writes evidence → `sense`. If it creates PRs, commits code, or modifies contracts → `act`.
3. **Write the TOML.** Follow the skeleton above. Key sections:
   - `[formula]` — id, name, description, type.
   - `[inputs.*]` — every tuneable parameter the script accepts.
   - `[execution]` — script path and full invocation with `{input}` interpolation.
   - `[[steps]]` — ordered list of logical steps. Always end with `collect` and `deliver`.
   - `[products.*]` — what the formula produces (evidence file, PR, issue comment).
4. **Write or wire the backing script.** The script named in `[execution]` must exist and be executable. Most scripts live in `scripts/harb-evaluator/` or `tools/`. Exit codes: `0` = success, `1` = gate failed, `2` = infra error.
5. **Define the evidence schema.** If your formula writes `evidence/{category}/{date}.json`, add the schema to `evidence/README.md`.
6. **Update this file.** Add your formula to the "Current Formulas" table above.
7. **Test locally.** Run the backing script with the required inputs and verify the evidence file is well-formed JSON.
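A minimal backing-script sketch tying steps 4, 5, and 7 together. Everything here is hypothetical (the metric, the 20% threshold, and the helper names are invented for illustration); only the exit-code convention — `0` success, `1` gate failed, `2` infra error — comes from this guide:

```python
#!/usr/bin/env python3
import json
import sys
from pathlib import Path

EXIT_OK, EXIT_GATE_FAILED, EXIT_INFRA_ERROR = 0, 1, 2

def collect_metrics() -> dict:
    # Stub sensing step; a real script would read live state here.
    return {"disk_free_pct": 72}

def run(evidence_path: str) -> int:
    try:
        metrics = collect_metrics()
    except OSError:
        return EXIT_INFRA_ERROR  # could not read state at all: infra error
    out = Path(evidence_path)
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(json.dumps(metrics, indent=2))  # evidence must be valid JSON
    # Hypothetical gate: fail if free disk drops below 20%.
    return EXIT_OK if metrics["disk_free_pct"] >= 20 else EXIT_GATE_FAILED

if __name__ == "__main__":
    sys.exit(run(sys.argv[1]))
```

Running it by hand with a scratch evidence path, then checking the exit code and that the output parses as JSON, is exactly the local test step 7 asks for.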
## Resource Profiles
| Profile | Meaning | Can run in parallel? |
|---------|---------|---------------------|
| `light` | Shell commands only (df, curl, cast). No Docker, no Anvil. | Yes — safe to run alongside anything. |
| `heavy` | Needs Anvil on port 8545, Docker containers, or long-running agents. | No — exclusive. Heavy formulas share port bindings and cannot overlap. |
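One reading of the table above, as a scheduling predicate (this helper is hypothetical, not part of the orchestrator): light formulas can always start, while a heavy formula must wait until no other heavy formula is running, since heavy formulas contend for port 8545 and Docker bindings.

```python
def can_start(profile: str, running_profiles: list[str]) -> bool:
    # Light formulas are safe alongside anything.
    if profile == "light":
        return True
    # Heavy formulas are exclusive with respect to each other.
    return "heavy" not in running_profiles
```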
## Evaluator Integration
Formula execution is dispatched by the orchestrator to scripts in
`scripts/harb-evaluator/`. See [scripts/harb-evaluator/AGENTS.md](../scripts/harb-evaluator/AGENTS.md)
for details on the evaluator runtime: stack lifecycle, scenario execution,
evidence collection, and the adversarial agent harness.