
This hackathon has ended

This event is no longer accepting registrations or submissions.

Cortensor

Cortensor Hackathon #4


Prize Pool

3.0K $COR

Location

Online

Status

Ended


Date Range

Jan 19, 2026 - Feb 28, 2026

Submission Period

Not specified

About the Hackathon

Docs: https://docs.cortensor.network/community-and-ecosystem/hackathon/hackathon-4

Join Discord for details: https://discord.gg/cortensor


Hackathon #4 is designed to push agentic applications from “cool demos” into usable, repeatable workflows — where agents can delegate work, call tools, validate outcomes, and keep going.

Hackathon Format (unchanged): Open-ended runs continue — variable participant inflow yields high-value edge-case data. We’ll keep inviting outsiders to try agentic apps while hardening software, docs, and process.


Objective

Build proof-of-concept (PoC) agent applications, agent frameworks, bots, validators, and developer tools that leverage Cortensor’s decentralized inference protocol as the execution + trust backbone.

Primary focus areas (Agent-first)

  • Agentic Applications (core track): Agents that do real work — monitor, act, coordinate, transact, and report — with clear tool interfaces and repeatable flows.
  • Delegation + Execution Loops: Demonstrate a full loop: agent plans → delegates tasks → Cortensor executes (redundant compute) → results returned → agent continues.
  • Trust & Verification (PoI/PoUW utilization): Use PoI (redundant inference) and PoUW (validator scoring) where it adds credibility:
    • cross-model or cross-run checks
    • rubric-based scoring
    • evidence bundles / audit logs
    • “why trust this output” surfaces
  • Agent Tooling & Templates: SDK wrappers, CLI scaffolds, and starter kits for:
    • agent + tool calling
    • session management
    • logging + replay
    • evaluation harnesses
  • Operational Agents (DevOps / Community / Monitoring): Agents that keep infra and communities healthy:
    • router health checks, latency alerts
    • miner/model usage analytics
    • automated incident summaries
    • support triage bots with safe guardrails
  • Public Goods Agents: Open-access bots that provide ongoing community value (free inference tier, limited quotas, or public endpoints).

Stretch goals (bonus points)

Note: ERC-8004, x402, MCP, and COR Prover are optional R&D surfaces — not required to participate.

  • ERC-8004 artifacts: emit agent identity/validation artifacts for discoverability.
  • x402: pay-per-call rails or UI flows for usage-based access.
  • Router surfaces: integrate Router v1 (REST) with a /validate endpoint; prototype COR Prover surfaces; explore MCP-compatible patterns.

Prize Pool (in $COR equivalent)

  • 1st: $1000
  • 2nd: $800
  • 3rd: $500
  • 4th–5th: $300 each
  • 6th–7th: $150 each
  • 8th–10th: $100 each
  • 11th–20th: $50 each

Ongoing Support: Top projects may qualify for monthly $COR grants for continued maintenance and improvements.

Important Notice on Prize Payouts (eligibility condition)

To align incentives with Cortensor’s long-term growth: if a prize-winning participant is not (a) an existing node operator running >20 nodes, or (b) a holder of $500+ $COR prior to kick-off, or (c) a participant with active staked tokens, then their prize will be split:

  • 50% allocated directly to Staking Pool #1 under their address
  • 50% distributed upfront as liquid $COR

This ensures that prizes both reward winners and strengthen network security and staking participation.

Judging & Awarding Policy

  • Quality over rank: Prizes are not awarded strictly by leaderboard order. Cortensor will judge qualified entries on overall quality, impact, technical rigor, usability, documentation, and alignment with Cortensor’s roadmap.
  • Discretionary placement: Judges may award a prize to any qualified entry at a given place (e.g., a lower-ranked but higher-quality project may receive a higher prize tier).
  • No guarantee to fill all places: Cortensor may withhold or reallocate any prize tier if submissions do not meet the quality bar.
  • Ties / partial awards: Cortensor may declare ties, split awards, or issue honorable-mention grants at its discretion.
  • Compliance: Entries must comply with all rules; violation may result in disqualification and forfeiture of prizes.
  • KYC requirement: Cortensor may require KYC verification for any qualified or prize-winning entry. If requested, successful completion of KYC is mandatory to receive prizes.

Suggested Project Ideas

New and emphasized for #4 (Agent Apps)

1) Real Operators: agents that run workflows

  • Release Manager Agent: Watches repos, builds changelogs, drafts release notes, posts weekly dev summaries.
  • Incident Commander Agent: Monitors router/validator health → detects anomalies → opens tickets → posts a short incident report.
  • QA / Regression Agent: Runs daily test scripts against endpoints, compares outputs, flags drift, and files issues with evidence.
  • Community Support Agent: Triages questions in Discord/Telegram, routes them to docs, escalates edge cases to humans.

2) On-chain & crypto-native agents (safe + verifiable)

  • Receipt Verifier Agent: Takes tx hashes/receipts → explains what happened → validates expected state changes → emits an attestation artifact.
  • Treasury Watch Agent: Monitors balances/flows, posts alerts, generates periodic reports (with policy-based thresholds).
  • Governance Analyst Agent: Summarizes proposals, finds risks/tradeoffs, and provides structured “for/against” reasoning (with sources).

3) Multi-agent systems and coordination

  • Coordinator Agent: Breaks a goal into subtasks, delegates to specialized agents/tools, merges results, validates the final output.
  • Disagreement Resolver: When two agents disagree, runs multi-run validation + rubric scoring and returns a structured arbitration bundle.
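A Disagreement Resolver of this kind can be sketched in a few lines. The rubric scorer below is a keyword stub — in a real system each criterion would be scored by validator runs (PoUW) — but the shape of the arbitration bundle is the point:

```python
import json
import statistics

def rubric_score(answer: str, rubric: dict[str, float]) -> float:
    """Stub scorer: a real system would score each criterion via a
    validator run; here we just sum weights for keyword hits."""
    hits = sum(w for kw, w in rubric.items() if kw in answer.lower())
    return min(hits, 1.0)

def arbitrate(answer_a: str, answer_b: str,
              rubric: dict[str, float], trials: int = 3) -> dict:
    """Score both answers over several trials and emit a structured
    arbitration bundle an auditor can replay."""
    scores = {
        "a": [rubric_score(answer_a, rubric) for _ in range(trials)],
        "b": [rubric_score(answer_b, rubric) for _ in range(trials)],
    }
    means = {k: statistics.mean(v) for k, v in scores.items()}
    return {
        "winner": max(means, key=means.get),
        "mean_scores": means,
        "trials": scores,
        "rubric": rubric,
    }

bundle = arbitrate(
    "deploys include rollback and tests",
    "just ship it",
    rubric={"rollback": 0.5, "tests": 0.5},
)
print(json.dumps(bundle, indent=2))
```

Keeping the rubric and per-trial scores inside the bundle is what makes the arbitration auditable rather than a bare verdict.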

4) Memory & personalization (done responsibly)

  • Project Memory Agent: Maintains a structured memory store (decisions, constraints, TODOs), produces weekly deltas, and supports “why did we decide X?” queries.
  • Knowledge Base Builder: Turns docs/issues into a searchable map, then answers questions with citations + confidence + validation.

Tooling / DX ideas (still highly valued)

  • Agent Starter Templates (TS/Python): tool registry, session helpers, structured logging, replay runner
  • Evaluation Harness: local scripts to run N trials across models/runs; generate validation reports
  • Observability dashboards: agent task success rates, latency, failure reasons, validator scores over time
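The evaluation-harness idea above ("run N trials across models/runs; generate validation reports") reduces to a small script. `call_model` is a hypothetical stand-in for a real inference client (e.g., a Cortensor session); the report structure is the reusable part:

```python
import random
from collections import Counter

def call_model(model: str, prompt: str) -> str:
    """Stub for an inference call; swap in a real client when
    wiring this up. Random output simulates model variance."""
    return random.choice(["yes", "yes", "no"])

def evaluate(models: list[str], prompt: str, trials: int = 5) -> dict:
    """Run `trials` calls per model and report the majority answer
    plus an agreement rate — the core of a validation report."""
    report = {}
    for model in models:
        answers = [call_model(model, prompt) for _ in range(trials)]
        top, count = Counter(answers).most_common(1)[0]
        report[model] = {"majority": top, "agreement": count / trials}
    return report

print(evaluate(["model-a", "model-b"], "Is the endpoint healthy?"))
```

Agreement rate per model is a cheap first signal for the cross-run checks the Trust & Verification track asks for; a fuller harness would also log raw transcripts for replay.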

Ongoing Support

  • Outstanding projects may receive monthly $COR grants for ongoing improvements.
  • Exceptional teams may be offered dedicated roles in the Cortensor developer community.

How to Participate

Deliverables Checklist (Agent-focused)

  • Public repo with permissive license (MIT/Apache-2.0)
  • README with:
    • quickstart + runbook
    • architecture diagram
    • tool list (what the agent can do)
    • safety/constraints (what it refuses to do)
  • Demo link (live URL or recorded video) + reproduction steps
  • Agent runtime proof:
    • sample transcripts / logs
    • structured outputs (JSON where relevant)
    • replay script or test command
  • Verification (recommended):
    • rubric prompt(s) / scoring policy
    • cross-run checks or validator usage
    • evidence bundle format (JSON + optional IPFS)
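The brief doesn't mandate a schema for the "evidence bundle format (JSON + optional IPFS)" item above; the fields below are one plausible shape, with a content digest so the JSON can be pinned (e.g., to IPFS) and referenced elsewhere:

```python
import hashlib
import json
import time

def make_evidence_bundle(task: str, runs: list[dict]) -> dict:
    """Assemble a self-describing evidence bundle. The sha256 digest
    covers the canonical (sorted-keys) serialization of the body."""
    body = {
        "task": task,
        "created_at": int(time.time()),
        "runs": runs,  # each: {"model": ..., "output": ..., "score": ...}
    }
    payload = json.dumps(body, sort_keys=True).encode()
    body["digest"] = "sha256:" + hashlib.sha256(payload).hexdigest()
    return body

bundle = make_evidence_bundle(
    "summarize-incident-42",
    runs=[
        {"model": "model-a", "output": "router latency spike", "score": 0.9},
        {"model": "model-b", "output": "router latency spike", "score": 0.85},
    ],
)
print(json.dumps(bundle, indent=2))
```

Any verifier can recompute the digest from the body (minus the `digest` field) to confirm the bundle wasn't altered after the fact.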

Timeline

  • Kick-off: TBD (Q1 2026)
  • Submission deadline: TBD (≈6 weeks after kick-off)

If dates shift, we’ll announce updates in Discord.


Evaluation Criteria

Mandatory: Working public demonstration during the hackathon (Discord/Telegram or designated channel). Projects not demonstrated will not qualify.

Scoring rubric:

  • Agent capability & workflow completeness — 30%
  • Integration with Cortensor (sessions, routing, validators, staking/payment flows; PoI/PoUW where applicable) — 25%
  • Reliability & safety guardrails — 20%
  • Usability & demo quality — 15%
  • Public good impact (free access, docs, community value) — 10%
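The rubric weights above combine into a single score by weighted sum. A minimal sketch — the 0–10 per-criterion scale and the example scores are illustrative, not part of the official rubric:

```python
# Weights from the published rubric; must sum to 1.0.
WEIGHTS = {
    "capability": 0.30,   # agent capability & workflow completeness
    "integration": 0.25,  # integration with Cortensor
    "reliability": 0.20,  # reliability & safety guardrails
    "usability": 0.15,    # usability & demo quality
    "public_good": 0.10,  # public good impact
}

def weighted_score(scores: dict[str, float]) -> float:
    """Weighted sum of per-criterion scores (each on a 0-10 scale)."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

example = {"capability": 8, "integration": 7, "reliability": 9,
           "usability": 6, "public_good": 10}
print(weighted_score(example))
```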

Bonus considerations: ERC-8004 artifacts, x402 flows, MCP patterns, /validate usage, on-chain records, tests/benchmarks, observability dashboards.

No Auto-Generated Submissions: Purely AI-generated entries without a working implementation do not qualify.


Terms & Notes

  • Optional surfaces (FYI): COR Prover, ERC-8004, x402, and MCP are internal/WIP and not required.
  • Safety/Compliance: Projects must follow applicable laws and reasonable safety guidelines.
  • Transparency: Clearly mark any centralized components or paid services used.
  • Data: Use public or properly licensed datasets/APIs.
  • Contact: Questions and office hours will be posted in Discord.

Hackathon #4 is about agents that actually ship work — with execution you can rely on and results you can verify.

Let’s build agent apps that hold up in the real world.
