Agent Brief: Harb Stack

Core Concepts

  • KRAIKEN couples Harberger-tax staking with a dominant Uniswap V3 liquidity manager to create asymmetric slippage, sentiment-driven pricing, and VWAP "price memory" safeguards.
  • Liquidity dominance is mission-critical; treat any regression that weakens the LiquidityManager's control as a priority incident.
  • Harberger staking supplies the sentiment oracle that drives Optimizer parameters, which in turn tune liquidity placement and supply expansion.

User Journey

  1. Buy - Acquire KRAIKEN on Uniswap.
  2. Stake - Declare a tax rate on kraiken.org to earn from protocol growth.
  3. Compete - Snatch undervalued positions to optimise returns.

Operating the Stack

Quick Start

nohup ./scripts/dev.sh start &    # start (takes ~3-6 min first time)
tail -f nohup.out                  # watch progress
./scripts/dev.sh health            # verify all services healthy
./scripts/dev.sh stop              # stop and clean up

Do not launch services individually — dev.sh enforces phased startup with health gates.

Restart Modes

  • ./scripts/dev.sh restart --light — Fast (~10-20s): only webapp + txnbot, preserves Anvil/Ponder state. Use for frontend changes.
  • ./scripts/dev.sh restart --full — Full (~3-6min): redeploys contracts, fresh state. Use for contract changes.
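The two modes map to a simple dispatch inside dev.sh; a hypothetical sketch (the function name and service list are illustrative — the real logic lives in scripts/dev.sh):

```shell
# Map a restart flag to the scope it touches (sketch; not the actual dev.sh code).
restart_scope() {
  case "$1" in
    --light) echo "webapp txnbot" ;;   # preserves Anvil/Ponder state
    --full)  echo "all" ;;             # redeploys contracts, fresh state
    *)       echo "unknown mode: $1" >&2; return 1 ;;
  esac
}
```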

Common Pitfalls

  • Docker disk full: dev.sh start refuses to run if Docker disk usage exceeds 20GB. Fix: ./scripts/dev.sh stop (auto-prunes) or docker system prune -af --volumes.
  • Stale Ponder state: If Ponder fails with schema errors after contract changes, delete its state: rm -rf services/ponder/.ponder/ then ./scripts/dev.sh restart --full.
  • kraiken-lib out of date: If services fail with import errors or missing exports, rebuild: ./scripts/build-kraiken-lib.sh. The dev script does this automatically on start, but manual rebuilds are needed if you change kraiken-lib while the stack is already running.
  • Container not found errors: dev.sh expects Docker Compose v2 container names (harb-anvil-1, hyphens not underscores). Verify with docker compose version.
  • Port conflicts: The stack uses ports 8545 (Anvil), 5173 (webapp), 5174 (landing), 42069 (Ponder), 43069 (txnBot), 8081 (Caddy). Check with lsof -i :<port> if startup fails.
  • npm ci failures in containers: Named Docker volumes cache node_modules/. If dependencies change and installs fail, remove the volume: docker volume rm harb_webapp_node_modules (or similar), then restart.
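For the port-conflict pitfall, a quick pre-flight check over the ports listed above (the helper name is made up; the port list comes from this section):

```shell
# Print any stack ports already bound by another process (empty output = all free).
busy_ports() {
  local busy="" port
  for port in 8545 5173 5174 42069 43069 8081; do
    if lsof -i ":$port" -sTCP:LISTEN >/dev/null 2>&1; then
      busy="$busy $port"
    fi
  done
  echo "${busy# }"
}
```

If busy_ports prints anything, free those ports before running ./scripts/dev.sh start.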

Environments

Supported: BASE_SEPOLIA_LOCAL_FORK (default Anvil fork), BASE_SEPOLIA, and BASE. Match contract addresses and RPCs accordingly.

Prerequisites

Docker Engine (Linux) or Colima (Mac). See docs/docker.md for installation.

Component Guides

  • onchain/ - Solidity + Foundry contracts, deploy scripts, and fuzzing helpers.
  • services/ponder/ - Ponder indexer powering the GraphQL API.
  • landing/ - Vue 3 marketing + staking interface.
  • kraiken-lib/ - Shared TypeScript helpers for clients and bots.
  • services/txnBot/ - Automation bot for recenter() and payTax() upkeep.

Testing & Tooling

  • Contracts: run forge build, forge test, and forge snapshot inside onchain/.
  • Fuzzing: scripts under onchain/analysis/ (e.g., ./analysis/run-fuzzing.sh [optimizer] debugCSV) generate replayable scenarios.
  • Integration: after the stack boots, inspect Anvil logs, hit http://localhost:8081/api/graphql for Ponder, and poll http://localhost:8081/api/txn/status for txnBot health.
  • E2E Tests: Playwright-based full-stack tests in tests/e2e/ verify complete user journeys (mint ETH → swap KRK → stake). Run with npm run test:e2e from repo root. Tests use mocked wallet provider with Anvil accounts. In CI, the Woodpecker e2e pipeline runs these against pre-built service images.

Version Validation System

  • Contract VERSION: Kraiken.sol exposes a VERSION constant (currently v1) that must be incremented for breaking changes to TAX_RATES, events, or core data structures.
  • Ponder Validation: On startup, Ponder reads the contract VERSION and validates against COMPATIBLE_CONTRACT_VERSIONS in kraiken-lib/src/version.ts. Fails hard (exit 1) on mismatch to prevent indexing wrong data.
  • Frontend Check: Web-app validates KRAIKEN_LIB_VERSION at runtime (currently placeholder; future: query Ponder GraphQL for full 3-way validation).
  • CI Enforcement: Woodpecker release.yml pipeline validates that contract VERSION matches COMPATIBLE_CONTRACT_VERSIONS before release.
  • See VERSION_VALIDATION.md for complete architecture, workflows, and troubleshooting.
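The compatibility check Ponder performs at startup reduces to "is the on-chain VERSION in the list?"; a hedged sketch (the grep-based helper is illustrative, and the cast call assumes VERSION is a uint — see version.ts for the real mechanism):

```shell
# Sketch: the on-chain value would come from something like
#   onchain=$(cast call "$KRAIKEN_ADDR" "VERSION()(uint256)" --rpc-url http://localhost:8545)
version_compatible() {
  # $1 = version number, $2 = path to kraiken-lib/src/version.ts
  grep -qE "(^|[^0-9])$1([^0-9]|\$)" "$2"
}
# version_compatible "$onchain" kraiken-lib/src/version.ts || exit 1
```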

Docker Installation & Setup

  • Linux: Install Docker Engine via package manager or curl -fsSL https://get.docker.com | sh, then add user to docker group: sudo usermod -aG docker $USER (logout/login required)
  • Mac: Use Colima (open-source Docker Desktop alternative):
    brew install colima docker docker-compose
    colima start --cpu 4 --memory 8 --disk 100
    docker ps  # verify installation
    
  • Container Orchestration: docker-compose.yml has NO depends_on declarations. All service ordering is handled in scripts/dev.sh via phased startup with explicit health checks.
  • Startup Phases: (1) Start anvil+postgres and wait for healthy, (2) Start bootstrap and wait for exit, (3) Start ponder and wait for healthy, (4) Start webapp/landing/txn-bot, (5) Start caddy, (6) Smoke test via scripts/wait-for-service.sh.
  • Shared Bootstrap: Contract deployment, seeding, and funding logic lives in scripts/bootstrap-common.sh, sourced by both containers/bootstrap.sh (local dev) and scripts/ci-bootstrap.sh (CI). Constants (FEE_DEST, WETH, SWAP_ROUTER, default keys) are defined once there.
  • Logging Configuration: All services have log rotation configured (max 10MB per file, 3 files max = 30MB per container) to prevent disk bloat. Logs are automatically rotated by Docker.
  • Disk Management (Portable, No Per-Machine Setup Required):
    • 20GB Hard Limit: The stack enforces a 20GB total Docker disk usage limit (images + containers + volumes + build cache).
    • Pre-flight Checks: ./scripts/dev.sh start checks Docker disk usage before starting and refuses to start if over 20GB.
    • Aggressive Auto-Cleanup on Stop: ./scripts/dev.sh stop automatically prunes ALL unused Docker resources including build cache (the primary cause of bloat).
    • Named Volumes for node_modules: All Node.js services (ponder, webapp, landing, txnBot) use named Docker volumes for node_modules/ instead of writing to the host filesystem. This prevents host pollution (20-30GB savings) and ensures docker system prune --volumes cleans them up.
    • npm Best Practices: All entrypoints use npm ci (not npm install) for reproducible builds and npm cache clean --force to remove ~50-100MB of cache per service.
    • PostgreSQL WAL Limits: Postgres configured with wal_level=minimal, max_wal_size=128MB, and archive_mode=off to prevent unbounded WAL file growth in the postgres volume.
    • Log Rotation: All containers limited to 30MB logs (10MB × 3 files) via docker-compose logging configuration.
    • .dockerignore: Excludes node_modules/, caches, and build outputs from Docker build context to speed up builds and reduce image size.
    • Monitoring: The stack displays current Docker disk usage on startup and warns at 80% (16GB).
    • Note: Docker has no built-in portable disk quotas. All limits are enforced via aggressive pruning, bounded configurations, and isolation of dependencies to Docker volumes.
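The health gates between the startup phases above all reduce to the same poll loop that scripts/wait-for-service.sh implements; a minimal sketch (timeouts and the example checks are illustrative):

```shell
# Poll a check command until it succeeds or a timeout (seconds) elapses.
wait_until() {
  local timeout=$1; shift
  local waited=0
  until "$@" >/dev/null 2>&1; do
    sleep 1
    waited=$((waited + 1))
    if [ "$waited" -ge "$timeout" ]; then
      echo "timed out after ${timeout}s waiting for: $*" >&2
      return 1
    fi
  done
}

# Phase gates then look like:
#   wait_until 120 docker compose exec -T postgres pg_isready
#   wait_until 180 curl -fsS http://localhost:42069/health
```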

Guardrails & Tips

  • token0isWeth flips amount semantics; confirm ordering before seeding or interpreting liquidity.
  • VWAP, ethScarcity, and Optimizer outputs operate on price^2 (X96). Avoid "normalising" to sqrt inadvertently.
  • Fund the LiquidityManager with Base WETH (0x4200...0006) before expecting recenter() to succeed.
  • Ponder stores data in .ponder/; drop the directory if schema changes break migrations.
  • Keep git clean before committing; never leave commented-out code or untested changes.
  • ES Modules: The entire stack uses ES modules. kraiken-lib, txnBot, Ponder, and web-app all require "type": "module" in package.json and use import syntax.
  • kraiken-lib Build: Run ./scripts/build-kraiken-lib.sh before docker-compose up so containers mount a fresh kraiken-lib/dist from the host.
  • Live Reload: scripts/watch-kraiken-lib.sh rebuilds on file changes (requires inotify-tools) and restarts dependent containers automatically.
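For the X96 point above: with the usual Uniswap V3 slot0 convention, sqrtPriceX96 is a Q64.96 fixed-point square-root price, so (the priceX96 definition here is an assumption about how the price^2 X96 quantities are scaled; verify against onchain/UNISWAP_V3_MATH.md):

```latex
\text{price} \;=\; \left(\frac{\texttt{sqrtPriceX96}}{2^{96}}\right)^{2},
\qquad
\texttt{priceX96} \;=\; \text{price}\cdot 2^{96} \;=\; \frac{\texttt{sqrtPriceX96}^{2}}{2^{96}}
```

Taking a square root of a price^2 X96 quantity moves you back into the sqrt domain, which is exactly the inadvertent "normalisation" the guardrail warns against.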

Code Quality & Git Hooks

  • Pre-commit Hooks: Husky runs lint-staged on all staged files before commits. Each component (onchain, kraiken-lib, ponder, txnBot, web-app, landing) has .lintstagedrc.json configured for ESLint + Prettier.
  • Version Validation (Future): Pre-commit hook includes validation logic that will enforce version sync between onchain/src/Kraiken.sol (contract VERSION constant) and kraiken-lib/src/version.ts (COMPATIBLE_CONTRACT_VERSIONS array). This validation only runs if both files exist and contain version information.
  • Husky Setup: .husky/pre-commit orchestrates all pre-commit checks. Modify this file to add new validation steps.
  • To test hooks manually: git add <files> && .husky/pre-commit
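The orchestration in .husky/pre-commit can be pictured as a loop over components; a hypothetical sketch (touches_component is not a real helper, and the actual hook file may differ):

```shell
# Does the staged file list (on stdin) touch the given component directory?
touches_component() {
  grep -q "^$1/"
}

# .husky/pre-commit would then roughly do:
#   for c in onchain kraiken-lib services/ponder services/txnBot web-app landing; do
#     git diff --cached --name-only | touches_component "$c" \
#       && (cd "$c" && npx lint-staged)
#   done
```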

Handy Commands

  • foundryup - update Foundry toolchain.
  • anvil --fork-url https://sepolia.base.org - manual fork when diagnosing outside the helper script.
  • cast call <POOL> "slot0()" - inspect pool state.
  • PONDER_NETWORK=BASE_SEPOLIA_LOCAL_FORK npm run dev (inside services/ponder/) - focused indexer debugging when the full stack is already running.
  • curl -X POST http://localhost:8081/api/graphql -d '{"query":"{ stats(id:\"0x01\"){kraikenTotalSupply}}"}'
  • curl http://localhost:8081/api/txn/status

Woodpecker CI

Infrastructure

  • Server: Woodpecker 3.10.0 runs as a systemd service (woodpecker-server.service), NOT a Docker container. Binary at /usr/local/bin/woodpecker-server.
  • Host: https://ci.sovraigns.network (listens locally on http://127.0.0.1:8000)
  • Forge: Codeberg (Gitea-compatible) — repo johba/harb, forge remote ID 800173
  • Database: PostgreSQL at 127.0.0.1:5432, database woodpecker, user woodpecker
  • Config: /etc/woodpecker/server.env (contains secrets — agent secret, Gitea OAuth secret, DB credentials)
  • CLI: Downloaded to /tmp/woodpecker-cli (v3.10.0). Requires WOODPECKER_SERVER and WOODPECKER_TOKEN env vars.
  • Logs: journalctl -u woodpecker-server -f (NOT docker logs)

Pipeline Configs

  • .woodpecker/build-ci-images.yml — Builds Docker CI images using unified docker/Dockerfile.service-ci. Triggers on push to master or feature/ci when files in docker/, .woodpecker/, containers/, kraiken-lib/, onchain/, services/, web-app/, or landing/ change.
  • .woodpecker/e2e.yml — Runs Playwright E2E tests. Bootstrap step sources scripts/bootstrap-common.sh for shared deploy/seed logic. Health checks use scripts/wait-for-service.sh. Triggers on pull_request to master.
  • Pipeline numbering: even = build-ci-images (push events), odd = E2E (pull_request events). This is not guaranteed but was the observed pattern.

Monitoring Pipelines via DB

Since the Woodpecker API requires authentication (tokens are cached in server memory; DB-only token changes don't work without a server restart), monitor pipelines directly via PostgreSQL:

# Latest pipelines
PGPASSWORD='<db_password>' psql -h 127.0.0.1 -U woodpecker -d woodpecker -c \
  "SELECT number, status, branch, event, commit FROM pipelines
   WHERE repo_id = (SELECT id FROM repos WHERE full_name = 'johba/harb')
   ORDER BY number DESC LIMIT 5;"

# Step details for a specific pipeline
PGPASSWORD='<db_password>' psql -h 127.0.0.1 -U woodpecker -d woodpecker -c \
  "SELECT s.name, s.state,
    CASE WHEN s.finished > 0 AND s.started > 0 THEN (s.finished - s.started)::int::text || 's'
    ELSE '-' END as duration, s.exit_code
   FROM steps s WHERE s.pipeline_id = (
     SELECT id FROM pipelines WHERE number = <N>
     AND repo_id = (SELECT id FROM repos WHERE full_name = 'johba/harb'))
   ORDER BY s.started NULLS LAST;"

Triggering Pipelines

  • Normal flow: Push to Codeberg → Codeberg fires webhook to https://ci.sovraigns.network/api/hook → Woodpecker creates pipeline.
  • Known issue: Codeberg webhooks can stop firing if ci.sovraigns.network becomes unreachable (DNS/connectivity). Check Codeberg repo settings → Webhooks to verify delivery history and re-trigger.
  • Manual trigger via API (requires valid token — see known issues):
    WOODPECKER_SERVER=http://127.0.0.1:8000 WOODPECKER_TOKEN=<token> \
      /tmp/woodpecker-cli pipeline create --branch feature/ci johba/harb
    
  • API auth limitation: The server caches user token hashes in memory. Inserting a token directly into the DB does not work without restarting the server (sudo systemctl restart woodpecker-server).

CI Docker Images

  • docker/Dockerfile.service-ci — Unified parameterized Dockerfile for all service CI images (ponder, webapp, landing, txnBot). Uses --build-arg for service-specific configuration (SERVICE_DIR, SERVICE_PORT, ENTRYPOINT_SCRIPT, NEEDS_SYMLINKS, etc.).
    • sync-tax-rates: Builder stage runs scripts/sync-tax-rates.mjs to sync tax rates from Stake.sol into kraiken-lib before TypeScript compilation.
    • Symlinks fix (webapp only, NEEDS_SYMLINKS=true): Creates /web-app, /kraiken-lib, /onchain symlinks to work around Vite's removeBase() stripping /app/ prefix from filesystem paths.
    • CI env detection (CI=true): Disables Vue DevTools plugin in vite.config.ts to prevent 500 errors caused by path resolution issues with /app/ base path.
    • HEALTHCHECK: Configurable via build args; webapp uses --retries=84 --interval=5s = 420s (7 min), aligned with wait-for-stack step timeout.
  • Shared entrypoints: Each service uses a unified entrypoint script (containers/<service>-entrypoint.sh) that branches on CI=true env var for CI vs local dev paths. Common helpers in containers/entrypoint-common.sh.
  • Shared bootstrap: scripts/bootstrap-common.sh contains shared contract deployment, seeding, and funding functions used by both containers/bootstrap.sh (local dev) and .woodpecker/e2e.yml (CI).
  • CI images are tagged with git SHA and latest, pushed to a local registry.
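A dry-run sketch of what the per-service builds expand to (service-to-directory mapping inferred from the component list above; check .woodpecker/build-ci-images.yml for the real build matrix and the full set of build args):

```shell
# Emit one docker build command per CI image (dry run; pipe to sh to execute).
build_cmds() {
  local sha=$1 registry=registry.niovi.voyage/harb pair svc dir
  for pair in ponder:services/ponder webapp:web-app landing:landing txnbot:services/txnBot; do
    svc=${pair%%:*}
    dir=${pair#*:}
    echo "docker build -f docker/Dockerfile.service-ci" \
         "--build-arg SERVICE_DIR=$dir" \
         "-t $registry/$svc-ci:$sha -t $registry/$svc-ci:latest ."
  done
}
```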

CI Agent & Registry Auth

  • Agent: Runs as user ci (uid 1001) on harb-staging, same host as the dev environment. Binary at /usr/local/bin/woodpecker-agent.
  • Registry credentials: The ci user must have Docker auth configured at /home/ci/.docker/config.json to pull private images from registry.niovi.voyage. If images fail to pull with "no basic auth credentials", fix with:
    sudo mkdir -p /home/ci/.docker
    sudo cp /home/debian/.docker/config.json /home/ci/.docker/config.json
    sudo chown -R ci:ci /home/ci/.docker
    sudo chmod 600 /home/ci/.docker/config.json
    
  • Shared Docker daemon: The ci and debian users share the same Docker daemon. Running docker system prune as debian removes images cached for CI pipelines. If CI image pulls fail after a prune, either fix registry auth (above) or pre-pull images as debian: docker pull registry.niovi.voyage/harb/ponder-ci:latest etc.

CI Debugging Tips

  • If pipelines aren't being created after a push, check Codeberg webhook delivery logs first.
  • The Woodpecker server needs sudo to restart. Without it, you cannot: refresh API tokens, clear cached state, or recover from webhook auth issues.
  • E2E pipeline failures often come from wait-for-stack timing out. Check the webapp HEALTHCHECK alignment and Ponder indexing time.
  • The web-app/vite.config.ts allowedHosts array must include container hostnames (webapp, caddy) for health checks to succeed inside Docker networks.
  • Never use bash -lc in Woodpecker pipeline commands — login shell resets PATH via /etc/profile, losing Foundry and other tools set by Docker ENV. Use bash -c instead.
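To see the failure mode concretely (the PATH value below is a hypothetical Docker ENV; /etc/profile contents vary by image):

```shell
# A Docker ENV PATH with Foundry prepended (hypothetical value).
ci_path="/root/.foundry/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

# Non-login shell: the inherited PATH survives untouched.
kept=$(PATH="$ci_path" /bin/bash -c 'echo "$PATH"')

# Login shell: /etc/profile runs first and typically rebuilds PATH,
# dropping /root/.foundry/bin, so forge is no longer found:
#   PATH="$ci_path" /bin/bash -lc 'forge --version'   # command not found
```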

Codeberg API Access

  • Auth: Codeberg API tokens are stored in ~/.netrc (standard curl --netrc format, chmod 600):
    machine codeberg.org
    login johba
    password <api-token>
    
    The password field holds the API token — this is standard .netrc convention, not an actual password.
  • Generate tokens at https://codeberg.org/user/settings/applications.
  • Usage: Pass --netrc to curl for authenticated Codeberg API calls:
    curl --netrc -s https://codeberg.org/api/v1/repos/johba/harb/issues | jq '.[0].title'
    
  • Note: The repo uses SSH for git push/pull (ssh://git@codeberg.org), so .netrc is only used for REST API interactions (issues, PRs, releases).

References

  • Deployment history: onchain/deployments-local.json, onchain/broadcast/.
  • Deep dives: TECHNICAL_APPENDIX.md, HARBERG.md, and onchain/UNISWAP_V3_MATH.md.