Merge pull request 'feat: replace tax with holders in ring buffer, add sparkline charts (#170)' (#177) from fix/issue-170 into master

This commit is contained in:
johba 2026-02-22 21:51:26 +01:00
commit 6f7e07b4fc
12 changed files with 1045 additions and 248 deletions

1
.gitignore vendored

@@ -37,3 +37,4 @@ services/ponder/.ponder/
# Temporary files
/tmp/
logs/

120
docs/ARCHITECTURE.md Normal file

@@ -0,0 +1,120 @@
# ARCHITECTURE.md — System Map
Compressed overview for AI agents. Read this first, then drill into the source for details.
## Contract Architecture
```
Kraiken.sol (ERC-20 token)
├── liquidityManager: address (set once, immutable after)
│ └── LiquidityManager.sol (ThreePositionStrategy)
│ ├── optimizer: Optimizer (private immutable ref)
│ ├── pool: IUniswapV3Pool
│ ├── kraiken: Kraiken
│ └── Positions: Floor, Anchor, Discovery
├── stakingPool: address
│ └── Stake.sol
│ ├── Staking positions with tax rates
│ ├── Snatch mechanics (competitive staking)
│ └── getPercentageStaked(), getAverageTaxRate()
└── feeDestination: address (protocol revenue)
Optimizer.sol (UUPS Upgradeable Proxy)
├── Reads: stake.getPercentageStaked(), stake.getAverageTaxRate()
├── Computes: sentiment → 4 liquidity params
├── Versions: Optimizer, OptimizerV2, OptimizerV3, OptimizerV3Push3
└── Admin: single address, set at initialize()
```
## Key Relationships
- **Kraiken → LiquidityManager**: set once via `setLiquidityManager()`, reverts if already set
- **LiquidityManager → Optimizer**: `private immutable` — baked into constructor, never changes
- **LiquidityManager → Kraiken**: exclusive minting/burning rights
- **Optimizer → Stake**: reads sentiment data (% staked, avg tax rate)
- **Optimizer upgrades**: UUPS proxy, admin-only `_authorizeUpgrade()`
## Three-Position Strategy
All managed by LiquidityManager via ThreePositionStrategy abstract:
| Position | Purpose | Behavior |
|----------|---------|----------|
| **Floor** | Safety net | Deep liquidity at VWAP-adjusted prices |
| **Anchor** | Price discovery | Near current price, 1-100% width |
| **Discovery** | Fee capture | Borders anchor, ~3x price range (11000 tick spacing) |
**Recenter** = atomic repositioning of all three positions. Triggered by anyone, automated by txnBot.
## Optimizer Parameters
`getLiquidityParams()` returns 4 values:
1. `capitalInefficiency` (0 to 1e18) — capital buffer level
2. `anchorShare` (0 to 1e18) — % allocated to anchor position
3. `anchorWidth` (ticks) — width of anchor position
4. `discoveryDepth` (0 to 1e18) — depth of discovery position
Sentiment calculation: `sentiment = f(averageTaxRate, percentageStaked)`
- High sentiment (bull) → wider discovery, aggressive fees
- Low sentiment (bear) → tight around floor, maximum protection
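As a rough off-chain illustration, the four parameters can be mirrored like this (a hedged sketch: the field names come from this doc, while the `WAD` constant and `pctToWad` helper are illustrative, not taken from the contracts):

```typescript
// 1e18 fixed point, the scaling this doc states for three of the four params.
const WAD = 10n ** 18n;

// Hypothetical mirror of the getLiquidityParams() return values.
interface LiquidityParams {
  capitalInefficiency: bigint; // 0..1e18, capital buffer level
  anchorShare: bigint;         // 0..1e18, share allocated to the anchor position
  anchorWidth: number;         // ticks
  discoveryDepth: bigint;      // 0..1e18, depth of the discovery position
}

// Convert a whole percentage to 1e18 fixed point, e.g. 45 -> 0.45e18.
function pctToWad(pct: bigint): bigint {
  return (pct * WAD) / 100n;
}
```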
## Stack
### On-chain
- Solidity, Foundry toolchain
- Uniswap V3 for liquidity positions
- OpenZeppelin for UUPS proxy, Initializable
- Base L2 (deployment target)
### Indexer
- **Ponder** (`services/ponder/`) — indexes on-chain events
- Schema: `services/ponder/ponder.schema.ts`
- Stats table with 168-slot ring buffer (7d × 24h × 4 segments)
- Ring buffer segments: [ethReserve, minted, burned, holderCount] (slot 3 replaced tax with holderCount in this PR)
- GraphQL API at port 42069
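The 168-slot, 4-segment layout implies a simple flat-index scheme over a 672-entry array. A minimal sketch (the `slotIndex`/`orderedHours` names are illustrative, not from `helpers/stats.ts`):

```typescript
const RING_HOURS = 168;    // 7 days * 24 hours
const RING_SEGMENTS = 4;   // ethReserve, minted, burned, holderCount

// Flat index of (hour slot, segment) in the 672-entry buffer array.
function slotIndex(hour: number, segment: number): number {
  return (hour % RING_HOURS) * RING_SEGMENTS + segment;
}

// Hour slots ordered oldest-to-newest, given the pointer to the newest hour.
function orderedHours(pointer: number): number[] {
  const hours: number[] = [];
  for (let i = 0; i < RING_HOURS; i++) {
    hours.push((pointer + 1 + i) % RING_HOURS);
  }
  return hours;
}
```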
### Landing Page
- Vue 3 + Vite (`landing/`)
- Three variants: HomeView (default), HomeViewOffensive (degens), HomeViewMixed
- Docs section: HowItWorks, Tokenomics, Staking, LiquidityManagement, AIAgent, FAQ
- LiveStats component polls Ponder GraphQL every 30s
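A 30-second poll loop can be sketched as follows (hedged: the query fields and id are examples; the real component's query lives in `landing/src`):

```typescript
// Build the GraphQL request body; field selection here is an example only.
function buildStatsQuery(): string {
  return JSON.stringify({
    query: '{ stats(id: "global") { lastEthReserve holderCount } }',
  });
}

// One poll round-trip against the Ponder GraphQL endpoint.
async function pollStats(endpoint: string): Promise<unknown> {
  const res = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: buildStatsQuery(),
  });
  return res.json();
}

// Example wiring: setInterval(() => pollStats('http://localhost:42069/graphql'), 30_000);
```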
### Staking Web App
- Vue 3 (`web-app/`)
- Password-protected (multiple passwords in LoginView.vue)
- ProtocolStatsCard shows real-time protocol metrics
### Infrastructure
- Docker Compose on 8GB VPS
- Woodpecker CI at ci.niovi.voyage
- Codeberg repo: johba/harb (private)
- Container registry: registry.niovi.voyage
## Directory Map
```
harb/
├── onchain/ # Solidity contracts + Foundry
│ ├── src/ # Contract source
│ ├── test/ # Forge tests
│ └── foundry.toml # via_ir = true required
├── services/
│ └── ponder/ # Indexer service
│ ├── ponder.schema.ts
│ ├── src/
│ │ ├── helpers/stats.ts # Ring buffer logic
│ │ ├── lm.ts # LiquidityManager indexing
│ │ └── stake.ts # Stake indexing
├── landing/ # Landing page (Vue 3)
│ ├── src/
│ │ ├── components/ # LiveStats, KFooter, WalletCard, etc.
│ │ ├── views/ # HomeView variants, docs pages
│ │ └── router/
├── web-app/ # Staking app (Vue 3)
│ ├── src/
│ │ ├── components/ # ProtocolStatsCard, etc.
│ │ └── views/ # LoginView, StakeView, etc.
├── containers/ # Docker configs, entrypoints
├── docs/ # This file, PRODUCT-TRUTH.md
└── .woodpecker/ # CI pipeline configs
```

99
docs/ENVIRONMENT.md Normal file

@@ -0,0 +1,99 @@
# ENVIRONMENT.md — Local Dev Stack
How to start, stop, and verify the harb development environment.
## Stack Overview
Docker Compose services (in startup order):
| Service | Purpose | Port | Health Check |
|---------|---------|------|-------------|
| **anvil** | Local Ethereum fork (Base Sepolia) | 8545 | JSON-RPC response |
| **postgres** | Ponder database | 5432 | pg_isready |
| **bootstrap** | Deploys contracts to anvil | — | One-shot, exits 0 |
| **ponder** | On-chain indexer + GraphQL API | 42069 | HTTP /ready or GraphQL |
| **landing** | Landing page (Vue 3 + Vite) | 5174 | HTTP response |
| **webapp** | Staking app (Vue 3) | 5173 | HTTP response |
| **txn-bot** | Automated recenter/tx bot | — | Process alive |
| **caddy** | Reverse proxy / TLS | 80/443 | — |
| **otterscan** | Block explorer | 5100 | — |
## Quick Start
```bash
cd /home/debian/harb
# Start everything
docker compose up -d
# Wait for bootstrap (deploys contracts, ~60-90s)
docker compose logs -f bootstrap
# Check all healthy
docker compose ps
```
## Verify Stack Health
```bash
# Anvil (local chain)
curl -s http://localhost:8545 -X POST -H 'Content-Type: application/json' \
-d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' | jq .result
# Ponder (indexer + GraphQL)
curl -s http://localhost:42069/graphql -X POST \
-H 'Content-Type: application/json' \
-d '{"query":"{ stats { id } }"}' | jq .
# Landing page
curl -sf http://localhost:5174 | head -5
# Staking app
curl -sf http://localhost:5173 | head -5
```
## Container Network
Services communicate on `harb-network` Docker bridge.
Internal hostnames match service names (e.g., `ponder:42069`).
Landing page container IP (for Playwright testing): check with
```bash
docker inspect landing --format '{{.NetworkSettings.Networks.harb_harb-network.IPAddress}}'
```
## Common URLs (for testing/review)
- **Landing:** `http://172.18.0.6:5174` (container IP) or `http://localhost:5174`
- **Staking app:** `http://localhost:5173/app/`
- **Ponder GraphQL:** `http://localhost:42069/graphql`
- **Anvil RPC:** `http://localhost:8545`
## Resource Notes
- 8GB VPS — running full stack uses ~4-5GB RAM
- npm install inside containers can OOM with all services running
- Landing container takes ~2min to restart (npm install + vite startup)
- 4GB swap is essential for CI + stack concurrency
## Staking App Passwords
For testing login: `lobsterDao`, `test123`, `lobster-x010syqe?412!`
(defined in `web-app/src/views/LoginView.vue`)
## Contract Addresses
After bootstrap, addresses are in `/home/debian/harb/tmp/containers/contracts.env`.
Landing sources this file on startup for `VITE_KRAIKEN_ADDRESS` and `VITE_STAKE_ADDRESS`.
## Playwright Testing
```bash
# Chromium path
/home/debian/.cache/ms-playwright/chromium-1209/chrome-linux64/chrome
# Run against landing (block fonts for speed)
NODE_PATH=$(npm root -g) node test-script.cjs
```
See `tmp/user-test-r4.cjs` for the most recent test script pattern.

109
docs/PRODUCT-TRUTH.md Normal file

@@ -0,0 +1,109 @@
# PRODUCT-TRUTH.md — What We Can and Cannot Claim
This file is the source of truth for all product messaging, docs, and marketing.
If a claim isn't here or contradicts what's here, it's wrong. Update this file
when the protocol changes — not the marketing copy.
**Last updated:** 2026-02-22
**Updated by:** Johann + Clawy after user test review session
---
## Target Audience
- **Crypto natives** who know DeFi but don't know KrAIken
- NOT beginners. NOT "new to DeFi" users.
- Think: people who've used Uniswap, understand liquidity, know what a floor price means
## The Floor
✅ **Can say:**
- Every KRK token has a minimum redemption price backed by real ETH
- The floor is enforced by immutable smart contracts
- The floor is backed by actual ETH reserves, not promises
- No rug pulls — liquidity is locked in contracts
- "Programmatic guarantee" (borrowed from Baseline — accurate for us too)
❌ **Cannot say:**
- "The floor can never decrease" — **FALSE.** Selling withdraws ETH from reserves. The floor CAN decrease.
- "Guaranteed profit" or "risk-free" — staking is leveraged exposure, it has real downside
- "Floor always goes up" — only true if fee income exceeds sell pressure, which isn't guaranteed
## The Optimizer
✅ **Can say:**
- Reads staker sentiment (% staked, average tax rate) to calculate parameters
- Returns 4 parameters: capitalInefficiency, anchorShare, anchorWidth, discoveryDepth
- Runs autonomously on-chain — no human triggers needed for parameter reads
- Is a UUPS upgradeable proxy — can be upgraded to new versions
- Currently admin-upgradeable (single admin key set at initialization)
- Multiple versions exist: Optimizer, OptimizerV2, OptimizerV3, OptimizerV3Push3
- "The optimizer evolves" — true in the sense that new versions get deployed
❌ **Cannot say:**
- "No admin keys" — **FALSE.** UUPS upgrade requires admin. Admin key exists.
- "No proxy patterns" — **FALSE.** It IS a UUPS proxy.
- "Stakers vote for new optimizers" — **NOT YET.** This is roadmap, not current state.
- "Simply evolves" / "evolves without upgrades" — misleading. It's an explicit upgrade via proxy.
- "Three strategies" — **FALSE.** It's ONE strategy with THREE positions (Floor, Anchor, Discovery).
- "AI learns from the market" — overstated. The optimizer reads staking sentiment, not market data directly.
🔮 **Roadmap (can say "planned" / "coming"):**
- Staker governance for optimizer upgrades (vote with stake weight)
- On-chain training data → new optimizer contracts via Push3 transpiler
- Remove admin key in favor of staker voting
## Liquidity Positions
✅ **Can say:**
- Three positions: Floor, Anchor, Discovery
- Floor: deep liquidity at VWAP-adjusted prices (safety net)
- Anchor: near current price, fast price discovery (1-100% width)
- Discovery: borders anchor, captures fees (wide range, ~3x current price)
- The optimizer adjusts position parameters based on sentiment
- "Recenter" = atomic repositioning of all liquidity in one transaction
- Anyone can trigger a recenter; the protocol bot does it automatically
- Bull mode: wider discovery, aggressive fee capture. Bear mode: tight around floor.
❌ **Cannot say:**
- "Three trading strategies" — it's three positions in ONE strategy
- "Token-owned liquidity" — ⚠️ USE CAREFULLY. KRK doesn't "own" anything in the legal/contract sense. The LiquidityManager manages positions. Acceptable as metaphor in marketing, not in technical docs.
## Staking
✅ **Can say:**
- Staking = leveraged directional exposure
- Stakers set tax rates; positions can be "snatched" by others willing to pay higher tax
- Tax rates influence optimizer sentiment → bull/bear positioning
- "Stakers profit when the community grows" (via supply expansion + leverage)
- Staking is optional — most holders just hold
❌ **Cannot say:**
- "Start Earning" / "Earn yield" / "APY" — staking is NOT yield farming
- "Guaranteed returns" — leveraged positions amplify losses too
- "Passive income" — tax payments are a cost, not income
## Supply Mechanics
✅ **Can say:**
- Elastic supply: buy = mint, sell = burn
- Protocol controls minting exclusively through LiquidityManager
- LiquidityManager address is set once on Kraiken contract and cannot be changed
## Code / Open Source
✅ **Can say:**
- Smart contracts are verifiable on Basescan
- Key contracts are viewable on the docs/code page
- "Full source will be published at mainnet launch" (if that's the plan)
❌ **Cannot say:**
- "Open source" — the Codeberg repo is **private**. This is currently false.
- "Audited" — unless an audit has been completed
## General Rules
1. When in doubt, understate. "The floor is backed by ETH" > "The floor guarantees you'll never lose money"
2. Separate current state from roadmap. Always.
3. Technical docs: be precise. Marketing: metaphors OK but never contradict technical reality.
4. If you're not sure a claim is true, check this file. If it's not here, verify against contract source before writing it.


@@ -5,6 +5,9 @@
<div class="stat-label">ETH Reserve</div>
<div class="stat-value">{{ ethReserveAmount }}</div>
<div v-if="growthIndicator !== null" class="growth-badge" :class="growthClass">{{ growthIndicator }}</div>
<svg v-if="ethReserveSpark.length > 1" class="sparkline" viewBox="0 0 80 24" preserveAspectRatio="none">
<polyline :points="toSvgPoints(ethReserveSpark)" fill="none" stroke="rgba(96,165,250,0.5)" stroke-width="1.5" stroke-linejoin="round" stroke-linecap="round" />
</svg>
</div>
<div class="stat-item">
<div class="stat-label">ETH / Token</div>
@@ -19,10 +22,17 @@
<div class="stat-label">Supply (7d)</div>
<div class="stat-value">{{ totalSupply }}</div>
<div v-if="netSupplyIndicator !== null" class="growth-badge" :class="netSupplyClass">{{ netSupplyIndicator }}</div>
<svg v-if="supplySpark.length > 1" class="sparkline" viewBox="0 0 80 24" preserveAspectRatio="none">
<polyline :points="toSvgPoints(supplySpark)" fill="none" stroke="rgba(74,222,128,0.5)" stroke-width="1.5" stroke-linejoin="round" stroke-linecap="round" />
</svg>
</div>
<div class="stat-item">
<div class="stat-label">Holders</div>
<div class="stat-value">{{ holders }}</div>
<div v-if="holderGrowthIndicator !== null" class="growth-badge" :class="holderGrowthClass">{{ holderGrowthIndicator }}</div>
<svg v-if="holdersSpark.length > 1" class="sparkline" viewBox="0 0 80 24" preserveAspectRatio="none">
<polyline :points="toSvgPoints(holdersSpark)" fill="none" stroke="rgba(251,191,36,0.5)" stroke-width="1.5" stroke-linejoin="round" stroke-linecap="round" />
</svg>
</div>
<div class="stat-item" :class="{ 'pulse': isRecentRebalance }">
<div class="stat-label">Rebalances</div>
@@ -44,22 +54,26 @@
<script setup lang="ts">
import { ref, computed, onMounted, onUnmounted } from 'vue';
// Must match RING_BUFFER_SEGMENTS and HOURS_IN_RING_BUFFER in services/ponder/src/helpers/stats.ts
const RING_SEGMENTS = 4; // ethReserve, minted, burned, holderCount
const RING_HOURS = 168; // 7 days * 24 hours
interface Stats {
kraikenTotalSupply: string;
holderCount: number;
lastRecenterTimestamp: number;
recentersLastWeek: number;
lastEthReserve: string;
taxPaidLastWeek: string;
mintedLastWeek: string;
burnedLastWeek: string;
netSupplyChangeWeek: string;
// New fields (batch1) all nullable until indexer has sufficient history
ethReserveGrowthBps: number | null;
feesEarned7dEth: string | null;
floorPriceWei: string | null;
floorDistanceBps: number | null;
currentPriceWei: string | null;
ringBuffer: string[] | null;
ringBufferPointer: number | null;
}
const stats = ref<Stats | null>(null);
@@ -76,6 +90,107 @@ function weiToEth(wei: string | null | undefined): number {
}
}
/**
* Extract a time-ordered series from the ring buffer for a given slot offset.
* Skips leading zeros (pre-launch padding).
*/
function extractSeries(ringBuffer: string[], pointer: number, slotOffset: number): number[] {
if (ringBuffer.length !== RING_HOURS * RING_SEGMENTS) {
return [];
}
const raw: number[] = [];
for (let i = 0; i < RING_HOURS; i++) {
// Walk from oldest to newest
const idx = ((pointer + 1 + i) % RING_HOURS) * RING_SEGMENTS + slotOffset;
raw.push(Number(ringBuffer[idx] || '0'));
}
// Skip leading zeros (pre-launch padding); findIndex locates the first non-zero value.
// Note: legitimate zero values mid-series are kept; only leading zeros are trimmed.
const firstNonZero = raw.findIndex(v => v > 0);
return firstNonZero === -1 ? [] : raw.slice(firstNonZero);
}
/**
* Build cumulative net supply series from minted (slot 1) and burned (slot 2).
*/
function extractSupplySeries(ringBuffer: string[], pointer: number): number[] {
if (ringBuffer.length !== RING_HOURS * RING_SEGMENTS) return [];
const minted: number[] = [];
const burned: number[] = [];
for (let i = 0; i < RING_HOURS; i++) {
const idx = ((pointer + 1 + i) % RING_HOURS) * RING_SEGMENTS;
minted.push(Number(ringBuffer[idx + 1] || '0'));
burned.push(Number(ringBuffer[idx + 2] || '0'));
}
// Find first hour with any activity (align with extractSeries)
const firstActive = minted.findIndex((m, i) => m > 0 || burned[i] > 0);
if (firstActive === -1) return [];
// Build cumulative net supply change from first active hour
const cumulative: number[] = [];
let sum = 0;
for (let i = firstActive; i < RING_HOURS; i++) {
sum += minted[i] - burned[i];
cumulative.push(sum);
}
return cumulative;
}
/**
* Convert a number[] series to SVG polyline points string, scaled to 80x24.
*/
function toSvgPoints(series: number[]): string {
if (series.length < 2) return '';
const min = Math.min(...series);
const max = Math.max(...series);
const range = max - min || 1;
const isFlat = max === min;
return series
.map((v, i) => {
const x = (i / (series.length - 1)) * 80;
const y = isFlat ? 12 : 24 - ((v - min) / range) * 22 - 1; // center flat lines
return `${x.toFixed(1)},${y.toFixed(1)}`;
})
.join(' ');
}
// Sparkline series extracted from ring buffer
const ethReserveSpark = computed(() => {
if (!stats.value?.ringBuffer || stats.value.ringBufferPointer == null) return [];
return extractSeries(stats.value.ringBuffer, stats.value.ringBufferPointer, 0);
});
const supplySpark = computed(() => {
if (!stats.value?.ringBuffer || stats.value.ringBufferPointer == null) return [];
return extractSupplySeries(stats.value.ringBuffer, stats.value.ringBufferPointer);
});
const holdersSpark = computed(() => {
if (!stats.value?.ringBuffer || stats.value.ringBufferPointer == null) return [];
return extractSeries(stats.value.ringBuffer, stats.value.ringBufferPointer, 3);
});
// Holder growth indicator from ring buffer
const holderGrowthIndicator = computed((): string | null => {
const series = holdersSpark.value;
if (series.length < 2) return null;
const oldest = series[0];
const newest = series[series.length - 1];
if (oldest === 0) return newest > 0 ? `${newest} holders` : null;
const pct = ((newest - oldest) / oldest) * 100;
if (Math.abs(pct) < 0.1) return '~ flat';
return pct > 0 ? `+${pct.toFixed(1)}% this week` : `-${Math.abs(pct).toFixed(1)}% this week`;
});
const holderGrowthClass = computed(() => {
const series = holdersSpark.value;
if (series.length < 2) return '';
const oldest = series[0];
const newest = series[series.length - 1];
if (newest > oldest) return 'growth-up';
if (newest < oldest) return 'growth-down';
return 'growth-flat';
});
const ethReserveAmount = computed(() => {
if (!stats.value) return '0.00 ETH';
const eth = weiToEth(stats.value.lastEthReserve);
@@ -107,7 +222,6 @@ const ethPerToken = computed(() => {
const supply = Number(stats.value.kraikenTotalSupply) / 1e18;
if (supply === 0) return '—';
const ratio = reserve / supply;
// Format with appropriate precision
if (ratio >= 0.01) return `${ratio.toFixed(4)} ETH`;
if (ratio >= 0.0001) return `${ratio.toFixed(6)} ETH`;
return `${ratio.toExponential(2)} ETH`;
@@ -207,7 +321,6 @@ async function fetchStats() {
lastRecenterTimestamp
recentersLastWeek
lastEthReserve
taxPaidLastWeek
mintedLastWeek
burnedLastWeek
netSupplyChangeWeek
@@ -216,6 +329,8 @@ async function fetchStats() {
floorPriceWei
floorDistanceBps
currentPriceWei
ringBuffer
ringBufferPointer
}
}
}
@@ -338,6 +453,12 @@ onUnmounted(() => {
background: rgba(255, 255, 255, 0.05)
border-color: rgba(255, 255, 255, 0.12)
.sparkline
width: 80px
height: 24px
margin-top: 4px
opacity: 0.8
.stat-label
font-size: 12px
color: rgba(240, 240, 240, 0.6)

106
scripts/review-poll.sh Executable file

@@ -0,0 +1,106 @@
#!/usr/bin/env bash
# review-poll.sh — Poll open PRs and review those with green CI
#
# Usage: ./scripts/review-poll.sh
#
# Runs from system cron. Checks all open PRs targeting master.
# Reviews unreviewed ones sequentially via review-pr.sh.
#
# Peek while running: cat /tmp/harb-review-status
# Full log: tail -f /home/debian/harb/logs/review.log
set -euo pipefail
# --- Environment (cron-safe) ---
export PATH="/home/debian/.nvm/versions/node/v22.20.0/bin:/usr/local/bin:/usr/bin:/bin:$PATH"
export HOME="${HOME:-/home/debian}"
# --- Config ---
REPO="johba/harb"
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
CODEBERG_TOKEN="$(awk '/codeberg.org/{getline;getline;print $2}' ~/.netrc)"
API_BASE="https://codeberg.org/api/v1/repos/${REPO}"
LOGDIR="/home/debian/harb/logs"
LOGFILE="$LOGDIR/review.log"
STATUSFILE="/tmp/harb-review-status"
MAX_REVIEWS=3
mkdir -p "$LOGDIR"
log() {
local ts
ts="$(date -u '+%Y-%m-%d %H:%M:%S UTC')"
echo "[$ts] $*" >> "$LOGFILE"
echo "[$ts] $*"
}
# --- Log rotation (keep last 50KB, archive once) ---
if [ -f "$LOGFILE" ] && [ "$(stat -c%s "$LOGFILE" 2>/dev/null || echo 0)" -gt 51200 ]; then
mv "$LOGFILE" "$LOGFILE.1"
# Only keep one rotated file
rm -f "$LOGFILE.2"
log "Log rotated"
fi
log "--- Poll start ---"
# --- Fetch open PRs targeting master ---
PRS=$(curl -sf -H "Authorization: token ${CODEBERG_TOKEN}" \
"${API_BASE}/pulls?state=open&limit=20" | \
jq -r '.[] | select(.base.ref == "master") | "\(.number) \(.head.sha)"')
if [ -z "$PRS" ]; then
log "No open PRs targeting master"
exit 0
fi
TOTAL=$(echo "$PRS" | wc -l)
log "Found ${TOTAL} open PRs"
REVIEWED=0
SKIPPED=0
while IFS= read -r line; do
PR_NUM=$(echo "$line" | awk '{print $1}')
PR_SHA=$(echo "$line" | awk '{print $2}')
# Quick pre-check: CI status (avoid calling review-pr.sh just to skip)
CI_STATE=$(curl -sf -H "Authorization: token ${CODEBERG_TOKEN}" \
"${API_BASE}/commits/${PR_SHA}/status" | jq -r '.state // "unknown"')
if [ "$CI_STATE" != "success" ]; then
log " #${PR_NUM} CI=${CI_STATE}, skip"
SKIPPED=$((SKIPPED + 1))
continue
fi
# Check for existing review at this SHA
HAS_REVIEW=$(curl -sf -H "Authorization: token ${CODEBERG_TOKEN}" \
"${API_BASE}/issues/${PR_NUM}/comments?limit=50" | \
jq -r --arg sha "$PR_SHA" \
'[.[] | select(.body | contains("<!-- reviewed:")) | select(.body | contains($sha))] | length')
if [ "$HAS_REVIEW" -gt "0" ]; then
log " #${PR_NUM} already reviewed at ${PR_SHA:0:7}, skip"
SKIPPED=$((SKIPPED + 1))
continue
fi
log " #${PR_NUM} needs review (CI=success, SHA=${PR_SHA:0:7})"
if "${SCRIPT_DIR}/review-pr.sh" "$PR_NUM" 2>&1; then
REVIEWED=$((REVIEWED + 1))
else
log " #${PR_NUM} review failed"
fi
if [ "$REVIEWED" -ge "$MAX_REVIEWS" ]; then
log "Hit max reviews (${MAX_REVIEWS}), stopping"
break
fi
sleep 2
done <<< "$PRS"
log "--- Poll done: ${REVIEWED} reviewed, ${SKIPPED} skipped ---"

270
scripts/review-pr.sh Executable file

@@ -0,0 +1,270 @@
#!/usr/bin/env bash
# review-pr.sh — AI-powered PR review using claude CLI
#
# Usage: ./scripts/review-pr.sh <pr-number> [--force]
#
# Calls `claude -p --model sonnet` with context docs + diff.
# No tool access (pure text review), ~$0.02-0.05 per review.
#
# --force: skip the "already reviewed" check
#
# Concurrency: uses a lockfile to ensure only one review runs at a time.
# Status: writes live progress to /tmp/harb-review-status for peeking.
# Logs: /home/debian/harb/logs/review.log (auto-rotated at 100KB)
#
# Peek while running: cat /tmp/harb-review-status
# Watch log: tail -f ~/harb/logs/review.log
set -euo pipefail
# --- Environment (cron-safe) ---
export PATH="/home/debian/.nvm/versions/node/v22.20.0/bin:/usr/local/bin:/usr/bin:/bin:$PATH"
export HOME="${HOME:-/home/debian}"
# --- Config ---
PR_NUMBER="${1:?Usage: review-pr.sh <pr-number> [--force]}"
FORCE="${2:-}"
REPO="johba/harb"
REPO_ROOT="$(cd "$(dirname "$0")/.." && pwd)"
CODEBERG_TOKEN="$(awk '/codeberg.org/{getline;getline;print $2}' ~/.netrc)"
API_BASE="https://codeberg.org/api/v1/repos/${REPO}"
LOCKFILE="/tmp/harb-review.lock"
STATUSFILE="/tmp/harb-review-status"
LOGDIR="/home/debian/harb/logs"
LOGFILE="$LOGDIR/review.log"
MIN_MEM_MB=1500
TMPDIR=$(mktemp -d)
mkdir -p "$LOGDIR"
# --- Logging ---
log() {
local ts
ts="$(date -u '+%Y-%m-%d %H:%M:%S UTC')"
echo "[$ts] PR#${PR_NUMBER} $*" | tee -a "$LOGFILE"
}
status() {
local ts
ts="$(date -u '+%Y-%m-%d %H:%M:%S UTC')"
printf '[%s] PR #%s: %s\n' "$ts" "$PR_NUMBER" "$*" > "$STATUSFILE"
log "$*"
}
cleanup() {
rm -rf "$TMPDIR"
rm -f "$LOCKFILE" "$STATUSFILE"
}
trap cleanup EXIT
# --- Log rotation (keep ~100KB + 1 archive) ---
if [ -f "$LOGFILE" ]; then
LOGSIZE=$(stat -c%s "$LOGFILE" 2>/dev/null || echo 0)
if [ "$LOGSIZE" -gt 102400 ]; then
mv "$LOGFILE" "$LOGFILE.old"
log "Log rotated (was ${LOGSIZE} bytes)"
fi
fi
# --- Memory guard ---
AVAIL_MB=$(awk '/MemAvailable/{printf "%d", $2/1024}' /proc/meminfo)
if [ "$AVAIL_MB" -lt "$MIN_MEM_MB" ]; then
log "SKIP: only ${AVAIL_MB}MB available (need ${MIN_MEM_MB}MB)"
exit 0
fi
# --- Concurrency lock ---
if [ -f "$LOCKFILE" ]; then
LOCK_PID=$(cat "$LOCKFILE" 2>/dev/null || echo "")
if [ -n "$LOCK_PID" ] && kill -0 "$LOCK_PID" 2>/dev/null; then
log "SKIP: another review running (PID ${LOCK_PID})"
exit 0
fi
log "Removing stale lock (PID ${LOCK_PID:-?})"
rm -f "$LOCKFILE"
fi
echo $$ > "$LOCKFILE"
# --- Fetch PR metadata ---
status "fetching metadata"
PR_JSON=$(curl -sf -H "Authorization: token ${CODEBERG_TOKEN}" \
"${API_BASE}/pulls/${PR_NUMBER}")
PR_TITLE=$(echo "$PR_JSON" | jq -r '.title')
PR_BODY=$(echo "$PR_JSON" | jq -r '.body // ""')
PR_HEAD=$(echo "$PR_JSON" | jq -r '.head.ref')
PR_BASE=$(echo "$PR_JSON" | jq -r '.base.ref')
PR_SHA=$(echo "$PR_JSON" | jq -r '.head.sha')
PR_STATE=$(echo "$PR_JSON" | jq -r '.state')
log "${PR_TITLE} (${PR_HEAD} -> ${PR_BASE} ${PR_SHA:0:7})"
# --- Guards ---
if [ "$PR_STATE" != "open" ]; then
log "SKIP: state=${PR_STATE}"
exit 0
fi
status "checking CI"
CI_STATE=$(curl -sf -H "Authorization: token ${CODEBERG_TOKEN}" \
"${API_BASE}/commits/${PR_SHA}/status" | jq -r '.state // "unknown"')
if [ "$CI_STATE" != "success" ]; then
log "SKIP: CI=${CI_STATE}"
exit 0
fi
if [ "$FORCE" != "--force" ]; then
status "checking existing reviews"
EXISTING=$(curl -sf -H "Authorization: token ${CODEBERG_TOKEN}" \
"${API_BASE}/issues/${PR_NUMBER}/comments?limit=50" | \
jq -r --arg sha "$PR_SHA" \
'[.[] | select(.body | contains("<!-- reviewed:")) | select(.body | contains($sha))] | length')
if [ "$EXISTING" -gt "0" ]; then
log "SKIP: already reviewed at ${PR_SHA:0:7}"
exit 0
fi
fi
# --- Fetch diff ---
status "fetching diff"
DIFF=$(curl -sf -H "Authorization: token ${CODEBERG_TOKEN}" \
"${API_BASE}/pulls/${PR_NUMBER}.diff" | head -c 25000)
DIFF_STAT=$(echo "$DIFF" | grep -E '^\+\+\+ b/' | sed 's|^+++ b/||' | sort)
# --- Which context docs? ---
NEEDS_UX=false
for f in $DIFF_STAT; do
case "$f" in
landing/*|web-app/*) NEEDS_UX=true ;;
esac
done
# --- Build prompt file ---
status "building prompt"
cat > "${TMPDIR}/prompt.md" << PROMPT_EOF
# PR #${PR_NUMBER}: ${PR_TITLE}
## PR Description
${PR_BODY}
## Changed Files
${DIFF_STAT}
## PRODUCT-TRUTH.md (what we can/cannot claim)
$(cat "${REPO_ROOT}/docs/PRODUCT-TRUTH.md")
## ARCHITECTURE.md
$(cat "${REPO_ROOT}/docs/ARCHITECTURE.md")
PROMPT_EOF
if [ "$NEEDS_UX" = true ] && [ -f "${REPO_ROOT}/docs/UX-DECISIONS.md" ]; then
cat >> "${TMPDIR}/prompt.md" << UX_EOF
## UX-DECISIONS.md
$(cat "${REPO_ROOT}/docs/UX-DECISIONS.md")
UX_EOF
fi
cat >> "${TMPDIR}/prompt.md" << DIFF_EOF
## Diff
\`\`\`diff
${DIFF}
\`\`\`
## Your Task
Produce a structured review:
### 1. Claim Check
Extract every factual claim about the protocol from user-facing text in the diff.
For each, verify against PRODUCT-TRUTH.md:
- ✅ Accurate
- ⚠️ Partially true (explain)
- ❌ False (cite contradiction)
If no claims, say "No user-facing claims in this diff."
### 2. Code Review
Bugs, logic errors, missing edge cases, broken imports.
### 3. Architecture Check
Does this follow patterns in ARCHITECTURE.md?
### 4. UX/Messaging Check
Does copy follow UX-DECISIONS.md?
(Skip if no UX-DECISIONS context provided.)
### 5. Verdict
**APPROVE**, **REQUEST_CHANGES**, or **DISCUSS** — one line reason.
Be direct. No filler.
DIFF_EOF
PROMPT_SIZE=$(stat -c%s "${TMPDIR}/prompt.md")
log "Prompt: ${PROMPT_SIZE} bytes"
# --- Run claude -p ---
status "running claude (sonnet)"
SECONDS=0
CLAUDE_EXIT=0
# Capture the exit code on the same line: under `set -e` a failing command
# substitution would abort the script, and `$?` taken after the ELAPSED
# assignment would reflect that assignment, not claude.
REVIEW=$(claude -p \
--model sonnet \
--dangerously-skip-permissions \
--output-format text \
< "${TMPDIR}/prompt.md" 2>"${TMPDIR}/claude-stderr.log") || CLAUDE_EXIT=$?
ELAPSED=$SECONDS
if [ $CLAUDE_EXIT -ne 0 ]; then
log "ERROR: claude exited ${CLAUDE_EXIT} after ${ELAPSED}s"
log "stderr: $(tail -5 "${TMPDIR}/claude-stderr.log")"
exit 1
fi
if [ -z "$REVIEW" ]; then
log "ERROR: empty review after ${ELAPSED}s"
exit 1
fi
REVIEW_SIZE=$(echo "$REVIEW" | wc -c)
log "Review: ${REVIEW_SIZE} bytes in ${ELAPSED}s"
# --- Post to Codeberg ---
status "posting to Codeberg"
COMMENT_BODY="## 🤖 AI Review
<!-- reviewed: ${PR_SHA} -->
${REVIEW}
---
*Reviewed at \`${PR_SHA:0:7}\` · [PRODUCT-TRUTH.md](../docs/PRODUCT-TRUTH.md) · [ARCHITECTURE.md](../docs/ARCHITECTURE.md)*"
POST_CODE=$(curl -sf -o /dev/null -w "%{http_code}" \
-X POST \
-H "Authorization: token ${CODEBERG_TOKEN}" \
-H "Content-Type: application/json" \
"${API_BASE}/issues/${PR_NUMBER}/comments" \
-d "$(jq -n --arg body "$COMMENT_BODY" '{body: $body}')")
if [ "${POST_CODE}" = "201" ]; then
log "POSTED to Codeberg"
else
log "ERROR: Codeberg HTTP ${POST_CODE}"
echo "$REVIEW" > "${LOGDIR}/review-pr${PR_NUMBER}-${PR_SHA:0:7}.md"
log "Review saved to ${LOGDIR}/review-pr${PR_NUMBER}-${PR_SHA:0:7}.md"
exit 1
fi
# --- Notify OpenClaw (best effort) ---
# `|| true` keeps a missing verdict from aborting the script under pipefail
VERDICT=$(echo "$REVIEW" | grep -oP '\*\*(APPROVE|REQUEST_CHANGES|DISCUSS)\*\*' | head -1 | tr -d '*' || true)
if command -v openclaw &>/dev/null; then
openclaw system event \
--text "🤖 PR #${PR_NUMBER} reviewed: ${VERDICT:-UNKNOWN} - ${PR_TITLE}" \
--mode now 2>/dev/null || true
fi
log "DONE: ${VERDICT:-UNKNOWN} (${ELAPSED}s)"


@@ -21,16 +21,14 @@ type Query {
stackMetas(where: stackMetaFilter, orderBy: String, orderDirection: String, before: String, after: String, limit: Int): stackMetaPage!
stats(id: String!): stats
statss(where: statsFilter, orderBy: String, orderDirection: String, before: String, after: String, limit: Int): statsPage!
ethReserveHistory(id: String!): ethReserveHistory
ethReserveHistorys(where: ethReserveHistoryFilter, orderBy: String, orderDirection: String, before: String, after: String, limit: Int): ethReserveHistoryPage!
feeHistory(id: String!): feeHistory
feeHistorys(where: feeHistoryFilter, orderBy: String, orderDirection: String, before: String, after: String, limit: Int): feeHistoryPage!
positions(id: String!): positions positions(id: String!): positions
positionss(where: positionsFilter, orderBy: String, orderDirection: String, before: String, after: String, limit: Int): positionsPage! positionss(where: positionsFilter, orderBy: String, orderDirection: String, before: String, after: String, limit: Int): positionsPage!
recenters(id: String!): recenters recenters(id: String!): recenters
recenterss(where: recentersFilter, orderBy: String, orderDirection: String, before: String, after: String, limit: Int): recentersPage! recenterss(where: recentersFilter, orderBy: String, orderDirection: String, before: String, after: String, limit: Int): recentersPage!
holders(address: String!): holders holders(address: String!): holders
holderss(where: holdersFilter, orderBy: String, orderDirection: String, before: String, after: String, limit: Int): holdersPage! holderss(where: holdersFilter, orderBy: String, orderDirection: String, before: String, after: String, limit: Int): holdersPage!
transactions(id: String!): transactions
transactionss(where: transactionsFilter, orderBy: String, orderDirection: String, before: String, after: String, limit: Int): transactionsPage!
_meta: Meta _meta: Meta
} }
@@ -107,7 +105,6 @@ type stats {
 totalMinted: BigInt!
 totalBurned: BigInt!
 totalTaxPaid: BigInt!
-totalUbiClaimed: BigInt!
 mintedLastWeek: BigInt!
 mintedLastDay: BigInt!
 mintNextHourProjected: BigInt!
@@ -117,9 +114,10 @@ type stats {
 taxPaidLastWeek: BigInt!
 taxPaidLastDay: BigInt!
 taxPaidNextHourProjected: BigInt!
-ubiClaimedLastWeek: BigInt!
-ubiClaimedLastDay: BigInt!
-ubiClaimedNextHourProjected: BigInt!
+ethReserveLastDay: BigInt!
+ethReserveLastWeek: BigInt!
+netSupplyChangeDay: BigInt!
+netSupplyChangeWeek: BigInt!
 ringBufferPointer: Int!
 lastHourlyUpdateTimestamp: BigInt!
 ringBuffer: JSON!
@@ -224,14 +222,6 @@ input statsFilter {
 totalTaxPaid_lt: BigInt
 totalTaxPaid_gte: BigInt
 totalTaxPaid_lte: BigInt
-totalUbiClaimed: BigInt
-totalUbiClaimed_not: BigInt
-totalUbiClaimed_in: [BigInt]
-totalUbiClaimed_not_in: [BigInt]
-totalUbiClaimed_gt: BigInt
-totalUbiClaimed_lt: BigInt
-totalUbiClaimed_gte: BigInt
-totalUbiClaimed_lte: BigInt
 mintedLastWeek: BigInt
 mintedLastWeek_not: BigInt
 mintedLastWeek_in: [BigInt]
@@ -304,30 +294,38 @@ input statsFilter {
 taxPaidNextHourProjected_lt: BigInt
 taxPaidNextHourProjected_gte: BigInt
 taxPaidNextHourProjected_lte: BigInt
-ubiClaimedLastWeek: BigInt
-ubiClaimedLastWeek_not: BigInt
-ubiClaimedLastWeek_in: [BigInt]
-ubiClaimedLastWeek_not_in: [BigInt]
-ubiClaimedLastWeek_gt: BigInt
-ubiClaimedLastWeek_lt: BigInt
-ubiClaimedLastWeek_gte: BigInt
-ubiClaimedLastWeek_lte: BigInt
-ubiClaimedLastDay: BigInt
-ubiClaimedLastDay_not: BigInt
-ubiClaimedLastDay_in: [BigInt]
-ubiClaimedLastDay_not_in: [BigInt]
-ubiClaimedLastDay_gt: BigInt
-ubiClaimedLastDay_lt: BigInt
-ubiClaimedLastDay_gte: BigInt
-ubiClaimedLastDay_lte: BigInt
-ubiClaimedNextHourProjected: BigInt
-ubiClaimedNextHourProjected_not: BigInt
-ubiClaimedNextHourProjected_in: [BigInt]
-ubiClaimedNextHourProjected_not_in: [BigInt]
-ubiClaimedNextHourProjected_gt: BigInt
-ubiClaimedNextHourProjected_lt: BigInt
-ubiClaimedNextHourProjected_gte: BigInt
-ubiClaimedNextHourProjected_lte: BigInt
+ethReserveLastDay: BigInt
+ethReserveLastDay_not: BigInt
+ethReserveLastDay_in: [BigInt]
+ethReserveLastDay_not_in: [BigInt]
+ethReserveLastDay_gt: BigInt
+ethReserveLastDay_lt: BigInt
+ethReserveLastDay_gte: BigInt
+ethReserveLastDay_lte: BigInt
+ethReserveLastWeek: BigInt
+ethReserveLastWeek_not: BigInt
+ethReserveLastWeek_in: [BigInt]
+ethReserveLastWeek_not_in: [BigInt]
+ethReserveLastWeek_gt: BigInt
+ethReserveLastWeek_lt: BigInt
+ethReserveLastWeek_gte: BigInt
+ethReserveLastWeek_lte: BigInt
+netSupplyChangeDay: BigInt
+netSupplyChangeDay_not: BigInt
+netSupplyChangeDay_in: [BigInt]
+netSupplyChangeDay_not_in: [BigInt]
+netSupplyChangeDay_gt: BigInt
+netSupplyChangeDay_lt: BigInt
+netSupplyChangeDay_gte: BigInt
+netSupplyChangeDay_lte: BigInt
+netSupplyChangeWeek: BigInt
+netSupplyChangeWeek_not: BigInt
+netSupplyChangeWeek_in: [BigInt]
+netSupplyChangeWeek_not_in: [BigInt]
+netSupplyChangeWeek_gt: BigInt
+netSupplyChangeWeek_lt: BigInt
+netSupplyChangeWeek_gte: BigInt
+netSupplyChangeWeek_lte: BigInt
 ringBufferPointer: Int
 ringBufferPointer_not: Int
 ringBufferPointer_in: [Int]
@@ -474,101 +472,6 @@ input statsFilter {
 floorDistanceBps_lte: Int
 }
-type ethReserveHistory {
-id: String!
-timestamp: BigInt!
-ethBalance: BigInt!
-}
-type ethReserveHistoryPage {
-items: [ethReserveHistory!]!
-pageInfo: PageInfo!
-totalCount: Int!
-}
-input ethReserveHistoryFilter {
-AND: [ethReserveHistoryFilter]
-OR: [ethReserveHistoryFilter]
-id: String
-id_not: String
-id_in: [String]
-id_not_in: [String]
-id_contains: String
-id_not_contains: String
-id_starts_with: String
-id_ends_with: String
-id_not_starts_with: String
-id_not_ends_with: String
-timestamp: BigInt
-timestamp_not: BigInt
-timestamp_in: [BigInt]
-timestamp_not_in: [BigInt]
-timestamp_gt: BigInt
-timestamp_lt: BigInt
-timestamp_gte: BigInt
-timestamp_lte: BigInt
-ethBalance: BigInt
-ethBalance_not: BigInt
-ethBalance_in: [BigInt]
-ethBalance_not_in: [BigInt]
-ethBalance_gt: BigInt
-ethBalance_lt: BigInt
-ethBalance_gte: BigInt
-ethBalance_lte: BigInt
-}
-type feeHistory {
-id: String!
-timestamp: BigInt!
-ethFees: BigInt!
-krkFees: BigInt!
-}
-type feeHistoryPage {
-items: [feeHistory!]!
-pageInfo: PageInfo!
-totalCount: Int!
-}
-input feeHistoryFilter {
-AND: [feeHistoryFilter]
-OR: [feeHistoryFilter]
-id: String
-id_not: String
-id_in: [String]
-id_not_in: [String]
-id_contains: String
-id_not_contains: String
-id_starts_with: String
-id_ends_with: String
-id_not_starts_with: String
-id_not_ends_with: String
-timestamp: BigInt
-timestamp_not: BigInt
-timestamp_in: [BigInt]
-timestamp_not_in: [BigInt]
-timestamp_gt: BigInt
-timestamp_lt: BigInt
-timestamp_gte: BigInt
-timestamp_lte: BigInt
-ethFees: BigInt
-ethFees_not: BigInt
-ethFees_in: [BigInt]
-ethFees_not_in: [BigInt]
-ethFees_gt: BigInt
-ethFees_lt: BigInt
-ethFees_gte: BigInt
-ethFees_lte: BigInt
-krkFees: BigInt
-krkFees_not: BigInt
-krkFees_in: [BigInt]
-krkFees_not_in: [BigInt]
-krkFees_gt: BigInt
-krkFees_lt: BigInt
-krkFees_gte: BigInt
-krkFees_lte: BigInt
-}
 type positions {
 id: String!
 owner: String!
@@ -820,6 +723,8 @@ input recentersFilter {
 type holders {
 address: String!
 balance: BigInt!
+totalEthSpent: BigInt!
+totalTokensAcquired: BigInt!
 }
 type holdersPage {
@@ -849,4 +754,114 @@ input holdersFilter {
 balance_lt: BigInt
 balance_gte: BigInt
 balance_lte: BigInt
+totalEthSpent: BigInt
+totalEthSpent_not: BigInt
+totalEthSpent_in: [BigInt]
+totalEthSpent_not_in: [BigInt]
+totalEthSpent_gt: BigInt
+totalEthSpent_lt: BigInt
+totalEthSpent_gte: BigInt
+totalEthSpent_lte: BigInt
+totalTokensAcquired: BigInt
+totalTokensAcquired_not: BigInt
+totalTokensAcquired_in: [BigInt]
+totalTokensAcquired_not_in: [BigInt]
+totalTokensAcquired_gt: BigInt
+totalTokensAcquired_lt: BigInt
+totalTokensAcquired_gte: BigInt
+totalTokensAcquired_lte: BigInt
+}
+type transactions {
+id: String!
+holder: String!
+type: String!
+tokenAmount: BigInt!
+ethAmount: BigInt!
+timestamp: BigInt!
+blockNumber: Int!
+txHash: String!
+}
+type transactionsPage {
+items: [transactions!]!
+pageInfo: PageInfo!
+totalCount: Int!
+}
+input transactionsFilter {
+AND: [transactionsFilter]
+OR: [transactionsFilter]
+id: String
+id_not: String
+id_in: [String]
+id_not_in: [String]
+id_contains: String
+id_not_contains: String
+id_starts_with: String
+id_ends_with: String
+id_not_starts_with: String
+id_not_ends_with: String
+holder: String
+holder_not: String
+holder_in: [String]
+holder_not_in: [String]
+holder_contains: String
+holder_not_contains: String
+holder_starts_with: String
+holder_ends_with: String
+holder_not_starts_with: String
+holder_not_ends_with: String
+type: String
+type_not: String
+type_in: [String]
+type_not_in: [String]
+type_contains: String
+type_not_contains: String
+type_starts_with: String
+type_ends_with: String
+type_not_starts_with: String
+type_not_ends_with: String
+tokenAmount: BigInt
+tokenAmount_not: BigInt
+tokenAmount_in: [BigInt]
+tokenAmount_not_in: [BigInt]
+tokenAmount_gt: BigInt
+tokenAmount_lt: BigInt
+tokenAmount_gte: BigInt
+tokenAmount_lte: BigInt
+ethAmount: BigInt
+ethAmount_not: BigInt
+ethAmount_in: [BigInt]
+ethAmount_not_in: [BigInt]
+ethAmount_gt: BigInt
+ethAmount_lt: BigInt
+ethAmount_gte: BigInt
+ethAmount_lte: BigInt
+timestamp: BigInt
+timestamp_not: BigInt
+timestamp_in: [BigInt]
+timestamp_not_in: [BigInt]
+timestamp_gt: BigInt
+timestamp_lt: BigInt
+timestamp_gte: BigInt
+timestamp_lte: BigInt
+blockNumber: Int
+blockNumber_not: Int
+blockNumber_in: [Int]
+blockNumber_not_in: [Int]
+blockNumber_gt: Int
+blockNumber_lt: Int
+blockNumber_gte: Int
+blockNumber_lte: Int
+txHash: String
+txHash_not: String
+txHash_in: [String]
+txHash_not_in: [String]
+txHash_contains: String
+txHash_not_contains: String
+txHash_starts_with: String
+txHash_ends_with: String
+txHash_not_starts_with: String
+txHash_not_ends_with: String
 }

View file

@@ -2,7 +2,7 @@ import { onchainTable, index } from 'ponder';
 import { TAX_RATE_OPTIONS } from 'kraiken-lib/taxRates';
 export const HOURS_IN_RING_BUFFER = 168; // 7 days * 24 hours
-const RING_BUFFER_SEGMENTS = 4; // ethReserve, minted, burned, tax
+const RING_BUFFER_SEGMENTS = 4; // ethReserve, minted, burned, holderCount
 export const stackMeta = onchainTable('stackMeta', t => ({
 id: t.text().primaryKey(),
@@ -188,21 +188,6 @@ export const stats = onchainTable('stats', t => ({
 floorDistanceBps: t.integer(),
 }));
-// ETH reserve history - tracks ethBalance over time for 7d growth calculation
-export const ethReserveHistory = onchainTable('ethReserveHistory', t => ({
-id: t.text().primaryKey(), // block_logIndex format
-timestamp: t.bigint().notNull(),
-ethBalance: t.bigint().notNull(),
-}));
-// Fee history - tracks fees earned over time for 7d totals
-export const feeHistory = onchainTable('feeHistory', t => ({
-id: t.text().primaryKey(), // block_logIndex format
-timestamp: t.bigint().notNull(),
-ethFees: t.bigint().notNull(),
-krkFees: t.bigint().notNull(),
-}));
 // Individual staking positions
 export const positions = onchainTable(
 'positions',

View file

@@ -6,7 +6,7 @@ type HandlerArgs = Handler extends (...args: infer Args) => unknown ? Args[0] :
 export type StatsContext = HandlerArgs extends { context: infer C } ? C : never;
 type StatsEvent = HandlerArgs extends { event: infer E } ? E : never;
-export const RING_BUFFER_SEGMENTS = 4; // ethReserve, minted, burned, tax
+export const RING_BUFFER_SEGMENTS = 4; // ethReserve, minted, burned, holderCount
 export const MINIMUM_BLOCKS_FOR_RINGBUFFER = 100;
 // Get deploy block from environment (set by bootstrap)
@@ -34,46 +34,52 @@ function computeMetrics(ringBuffer: bigint[], pointer: number)
 let mintedWeek = 0n;
 let burnedDay = 0n;
 let burnedWeek = 0n;
-let taxDay = 0n;
-let taxWeek = 0n;
-// Slot 0 now stores ETH reserve snapshots per hour (latest value, not cumulative)
-let ethReserveLatest = 0n; // Most recent non-zero snapshot
-let ethReserve24hAgo = 0n; // Snapshot from ~24h ago
-let ethReserve7dAgo = 0n; // Oldest snapshot in buffer
+// Slot 0: ETH reserve snapshots per hour (latest value, not cumulative)
+let ethReserveLatest = 0n;
+let ethReserve24hAgo = 0n;
+let ethReserve7dAgo = 0n;
+// Slot 3: holderCount snapshots per hour
+let holderCountLatest = 0n;
+let holderCount24hAgo = 0n;
+let holderCount7dAgo = 0n;
 for (let i = 0; i < HOURS_IN_RING_BUFFER; i++) {
 const baseIndex = ((pointer - i + HOURS_IN_RING_BUFFER) % HOURS_IN_RING_BUFFER) * RING_BUFFER_SEGMENTS;
 const ethReserve = ringBuffer[baseIndex + 0];
 const minted = ringBuffer[baseIndex + 1];
 const burned = ringBuffer[baseIndex + 2];
-const tax = ringBuffer[baseIndex + 3];
+const holderCount = ringBuffer[baseIndex + 3];
 // Track ETH reserve at key points
 if (i === 0 && ethReserve > 0n) ethReserveLatest = ethReserve;
 if (i === 23 && ethReserve > 0n) ethReserve24hAgo = ethReserve;
 if (ethReserve > 0n) ethReserve7dAgo = ethReserve; // Last non-zero = oldest
+// Track holder count at key points
+if (i === 0 && holderCount > 0n) holderCountLatest = holderCount;
+if (i === 23 && holderCount > 0n) holderCount24hAgo = holderCount;
+if (holderCount > 0n) holderCount7dAgo = holderCount; // Last non-zero = oldest
 if (i < 24) {
 mintedDay += minted;
 burnedDay += burned;
-taxDay += tax;
 }
 mintedWeek += minted;
 burnedWeek += burned;
-taxWeek += tax;
 }
 return {
 ethReserveLatest,
 ethReserve24hAgo,
 ethReserve7dAgo,
+holderCountLatest,
+holderCount24hAgo,
+holderCount7dAgo,
 mintedDay,
 mintedWeek,
 burnedDay,
 burnedWeek,
-taxDay,
-taxWeek,
 };
 }
@@ -95,12 +101,10 @@ function computeProjections(ringBuffer: bigint[], pointer: number, timestamp: bi
 const mintProjection = project(ringBuffer[currentBase + 1], ringBuffer[previousBase + 1], metrics.mintedWeek);
 const burnProjection = project(ringBuffer[currentBase + 2], ringBuffer[previousBase + 2], metrics.burnedWeek);
-const taxProjection = project(ringBuffer[currentBase + 3], ringBuffer[previousBase + 3], metrics.taxWeek);
 return {
 mintProjection,
 burnProjection,
-taxProjection,
 };
 }
@@ -211,6 +215,15 @@ export async function updateHourlyData(context: StatsContext, timestamp: bigint)
 let pointer = statsData.ringBufferPointer ?? 0;
 const lastUpdate = statsData.lastHourlyUpdateTimestamp ?? 0n;
+// Snapshot current holderCount into ring buffer slot 3
+// NOTE: Slot 3 migrated from cumulative tax to holderCount in PR #177.
+// Existing ring buffer data will contain stale tax values interpreted as
+// holder counts for up to 7 days (168 hours) post-deploy until the buffer
+// fully rotates. Data self-heals as new hourly snapshots overwrite old slots.
+const currentHolderCount = BigInt(statsData.holderCount ?? 0);
+const base = pointer * RING_BUFFER_SEGMENTS;
+ringBuffer[base + 3] = currentHolderCount;
 if (lastUpdate === 0n) {
 await context.db.update(stats, { id: STATS_ID }).set({
 lastHourlyUpdateTimestamp: currentHour,
@@ -225,11 +238,11 @@ export async function updateHourlyData(context: StatsContext, timestamp: bigint)
 for (let h = 0; h < hoursElapsed; h++) {
 pointer = (pointer + 1) % HOURS_IN_RING_BUFFER;
-const base = pointer * RING_BUFFER_SEGMENTS;
-ringBuffer[base + 0] = 0n;
-ringBuffer[base + 1] = 0n;
-ringBuffer[base + 2] = 0n;
-ringBuffer[base + 3] = 0n;
+const newBase = pointer * RING_BUFFER_SEGMENTS;
+ringBuffer[newBase + 0] = 0n;
+ringBuffer[newBase + 1] = 0n;
+ringBuffer[newBase + 2] = 0n;
+ringBuffer[newBase + 3] = currentHolderCount; // Carry forward current holderCount
 }
 const metrics = computeMetrics(ringBuffer, pointer);
@@ -242,15 +255,12 @@ export async function updateHourlyData(context: StatsContext, timestamp: bigint)
 mintedLastWeek: metrics.mintedWeek,
 burnedLastDay: metrics.burnedDay,
 burnedLastWeek: metrics.burnedWeek,
-taxPaidLastDay: metrics.taxDay,
-taxPaidLastWeek: metrics.taxWeek,
 ethReserveLastDay: metrics.ethReserveLatest > 0n ? metrics.ethReserveLatest - metrics.ethReserve24hAgo : 0n,
 ethReserveLastWeek: metrics.ethReserveLatest > 0n ? metrics.ethReserveLatest - metrics.ethReserve7dAgo : 0n,
 netSupplyChangeDay: metrics.mintedDay - metrics.burnedDay,
 netSupplyChangeWeek: metrics.mintedWeek - metrics.burnedWeek,
 mintNextHourProjected: metrics.mintedWeek / 7n,
 burnNextHourProjected: metrics.burnedWeek / 7n,
-taxPaidNextHourProjected: metrics.taxWeek / 7n,
 });
 } else {
 const metrics = computeMetrics(ringBuffer, pointer);
@@ -262,15 +272,12 @@ export async function updateHourlyData(context: StatsContext, timestamp: bigint)
 mintedLastWeek: metrics.mintedWeek,
 burnedLastDay: metrics.burnedDay,
 burnedLastWeek: metrics.burnedWeek,
-taxPaidLastDay: metrics.taxDay,
-taxPaidLastWeek: metrics.taxWeek,
 ethReserveLastDay: metrics.ethReserveLatest > 0n ? metrics.ethReserveLatest - metrics.ethReserve24hAgo : 0n,
 ethReserveLastWeek: metrics.ethReserveLatest > 0n ? metrics.ethReserveLatest - metrics.ethReserve7dAgo : 0n,
 netSupplyChangeDay: metrics.mintedDay - metrics.burnedDay,
 netSupplyChangeWeek: metrics.mintedWeek - metrics.burnedWeek,
 mintNextHourProjected: projections.mintProjection,
 burnNextHourProjected: projections.burnProjection,
-taxPaidNextHourProjected: projections.taxProjection,
 });
 }
 }
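For orientation, the flat ring-buffer addressing used throughout the hunks above can be summarized in a standalone sketch. The constants match the diff; `slotIndex` is a hypothetical helper name introduced here for illustration, not a function in the codebase:

```typescript
// Flat ring buffer: 168 hourly rows x 4 segments, one bigint per slot.
// Segment order after this PR: [ethReserve, minted, burned, holderCount].
const HOURS_IN_RING_BUFFER = 168; // 7 days * 24 hours
const RING_BUFFER_SEGMENTS = 4;

// Flat index of segment `seg` in the row `hoursAgo` hours behind `pointer`.
// Adding HOURS_IN_RING_BUFFER before the modulo keeps the row non-negative
// when the lookback wraps past row 0.
function slotIndex(pointer: number, hoursAgo: number, seg: number): number {
  const row = (pointer - hoursAgo + HOURS_IN_RING_BUFFER) % HOURS_IN_RING_BUFFER;
  return row * RING_BUFFER_SEGMENTS + seg;
}

// With the pointer on row 2, the holderCount snapshot from 3 hours ago
// wraps around to row 167, i.e. flat index 167 * 4 + 3 = 671.
console.log(slotIndex(2, 3, 3)); // 671
```

This is the same `((pointer - i + HOURS_IN_RING_BUFFER) % HOURS_IN_RING_BUFFER) * RING_BUFFER_SEGMENTS` expression that `computeMetrics` inlines in its loop.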

View file

@@ -1,10 +1,7 @@
 import { ponder } from 'ponder:registry';
 import { getLogger } from './helpers/logger';
-import { recenters, stats, STATS_ID, ethReserveHistory } from 'ponder:schema';
-import { ensureStatsExists, recordEthReserveSnapshot } from './helpers/stats';
-import { gte, asc } from 'drizzle-orm';
-const SECONDS_IN_7_DAYS = 7n * 24n * 60n * 60n;
+import { recenters, stats, STATS_ID, HOURS_IN_RING_BUFFER } from 'ponder:schema';
+import { ensureStatsExists, recordEthReserveSnapshot, parseRingBuffer, RING_BUFFER_SEGMENTS } from './helpers/stats';
 /**
 * Fee tracking approach:
@@ -17,12 +14,6 @@
 * - Pros: No config changes needed
 * - Cons: Less accurate, hard to isolate fees from other balance changes
 *
-* Current: Fee tracking infrastructure (feeHistory table, stats fields) is in place
-* but not populated. To implement:
-* 1. Add UniswapV3Pool contract to ponder.config.ts with Collect event
-* 2. Handle Collect events to populate feeHistory table
-* 3. Calculate 7-day rolling totals from feeHistory
-*
 * The feesEarned7dEth and feesEarned7dKrk fields default to 0n until implemented.
 */
@@ -134,14 +125,6 @@ ponder.on('LiquidityManager:EthScarcity', async ({ event, context }) => {
 );
 }
-// Record ETH reserve to history for 7d growth tracking
-const historyId = `${event.block.number}_${event.log.logIndex}`;
-await context.db.insert(ethReserveHistory).values({
-id: historyId,
-timestamp: event.block.timestamp,
-ethBalance,
-});
 // Update stats with reserve data, floor price, and 7d growth
 await updateReserveStats(context, event, ethBalance, currentTick, vwapTick);
 });
@@ -195,7 +178,7 @@ ponder.on('LiquidityManager:EthAbundance', async ({ event, context }) => {
 /**
 * Shared logic for EthScarcity and EthAbundance handlers:
-* Records ETH reserve history, calculates 7d growth, floor price, and updates stats.
+* Records ETH reserve in ring buffer, calculates 7d growth from ring buffer, floor price, and updates stats.
 */
 async function updateReserveStats(
 // eslint-disable-next-line @typescript-eslint/no-explicit-any
@@ -205,29 +188,31 @@ async function updateReserveStats(
 currentTick: number | bigint,
 vwapTick: number | bigint
 ) {
-// Record ETH reserve to history for 7d growth tracking
-const historyId = `${event.block.number}_${event.log.logIndex}`;
-await context.db.insert(ethReserveHistory).values({
-id: historyId,
-timestamp: event.block.timestamp,
-ethBalance,
-});
-// Look back 7 days for growth calculation using raw Drizzle query
-const sevenDaysAgo = event.block.timestamp - SECONDS_IN_7_DAYS;
-const oldReserves = await context.db.sql
-.select()
-.from(ethReserveHistory)
-.where(gte(ethReserveHistory.timestamp, sevenDaysAgo))
-.orderBy(asc(ethReserveHistory.timestamp))
-.limit(1);
+// Record ETH reserve in ring buffer for hourly time-series
+await recordEthReserveSnapshot(context, event.block.timestamp, ethBalance);
+// Compute 7d growth from ring buffer (slot 0 = ethReserve snapshots)
+const statsData = await context.db.find(stats, { id: STATS_ID });
 let ethReserve7dAgo: bigint | null = null;
 let ethReserveGrowthBps: number | null = null;
-if (oldReserves.length > 0 && oldReserves[0]) {
-ethReserve7dAgo = oldReserves[0].ethBalance;
-ethReserveGrowthBps = calculateBps(ethBalance, ethReserve7dAgo);
+if (statsData) {
+const ringBuffer = parseRingBuffer(statsData.ringBuffer as string[]);
+const pointer = statsData.ringBufferPointer ?? 0;
+// Walk backwards through ring buffer to find oldest non-zero ETH reserve
+for (let i = HOURS_IN_RING_BUFFER - 1; i >= 0; i--) {
+const baseIndex = ((pointer - i + HOURS_IN_RING_BUFFER) % HOURS_IN_RING_BUFFER) * RING_BUFFER_SEGMENTS;
+const reserve = ringBuffer[baseIndex + 0];
+if (reserve > 0n) {
+ethReserve7dAgo = reserve;
+break;
+}
+}
+if (ethReserve7dAgo && ethReserve7dAgo > 0n) {
+ethReserveGrowthBps = calculateBps(ethBalance, ethReserve7dAgo);
+}
 }
 // Calculate floor price (from vwapTick) and current price (from currentTick)
@@ -249,7 +234,4 @@ async function updateReserveStats(
 currentPriceWei,
 floorDistanceBps,
 });
-// Record ETH reserve in ring buffer for hourly time-series
-await recordEthReserveSnapshot(context, event.block.timestamp, ethBalance);
 }
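The handler above calls `calculateBps(ethBalance, ethReserve7dAgo)`, whose body lies outside this diff. A plausible sketch of such a basis-points growth helper is shown below; the name and signature are taken from the call site, but the implementation is an assumption and the real one may differ (e.g. in rounding or null handling):

```typescript
// Growth from `past` to `current` expressed in basis points (1 bps = 0.01%).
// Returns null when there is no baseline to compare against.
function calculateBps(current: bigint, past: bigint): number | null {
  if (past === 0n) return null;
  // Scale the relative change by 10_000 before dividing so BigInt
  // integer division keeps basis-point precision.
  return Number(((current - past) * 10_000n) / past);
}

console.log(calculateBps(110n, 100n)); // 1000  (+10%)
console.log(calculateBps(95n, 100n));  // -500  (-5%)
```

Scaling before dividing is the important detail: `(current - past) / past` in BigInt arithmetic would truncate to 0 for any change under 100%.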

View file

@@ -4,12 +4,9 @@ import {
 ensureStatsExists,
 getStakeTotalSupply,
 markPositionsUpdated,
-parseRingBuffer,
 refreshOutstandingStake,
-serializeRingBuffer,
 updateHourlyData,
 checkBlockHistorySufficient,
-RING_BUFFER_SEGMENTS,
 } from './helpers/stats';
 import type { StatsContext } from './helpers/stats';
@@ -154,31 +151,16 @@ ponder.on('Stake:PositionTaxPaid', async ({ event, context }) => {
 lastTaxTime: event.block.timestamp,
 });
-// Only update ringbuffer if we have sufficient block history
+// Update totalTaxPaid counter (no longer ring-buffered)
+const statsData = await context.db.find(stats, { id: STATS_ID });
+if (statsData) {
+await context.db.update(stats, { id: STATS_ID }).set({
+totalTaxPaid: statsData.totalTaxPaid + event.args.taxPaid,
+});
+}
 if (checkBlockHistorySufficient(context, event)) {
-const statsData = await context.db.find(stats, { id: STATS_ID });
-if (statsData) {
-const ringBuffer = parseRingBuffer(statsData.ringBuffer as string[]);
-const pointer = statsData.ringBufferPointer ?? 0;
-const baseIndex = pointer * RING_BUFFER_SEGMENTS;
-ringBuffer[baseIndex + 3] = ringBuffer[baseIndex + 3] + event.args.taxPaid;
-await context.db.update(stats, { id: STATS_ID }).set({
-ringBuffer: serializeRingBuffer(ringBuffer),
-totalTaxPaid: statsData.totalTaxPaid + event.args.taxPaid,
-});
-}
 await updateHourlyData(context, event.block.timestamp);
-} else {
-// Insufficient history - update only totalTaxPaid without ringbuffer
-const statsData = await context.db.find(stats, { id: STATS_ID });
-if (statsData) {
-await context.db.update(stats, { id: STATS_ID }).set({
-totalTaxPaid: statsData.totalTaxPaid + event.args.taxPaid,
-});
-}
 }
 await refreshOutstandingStake(context);