MOPC-Portal/.planning/PROJECT.md
2026-02-26 23:32:28 +01:00


MOPC — AI Ranking, Advancement & Mentoring During Rounds

What This Is

An enhancement to the MOPC jury voting platform that adds AI-powered project ranking after evaluation rounds, an admin dashboard for reviewing/adjusting rankings and advancing projects to the next round, and the ability to assign mentors during non-mentoring rounds (e.g., during document submission or evaluation) with automatic carryover across rounds.

Core Value

Admins can describe ranking criteria in natural language, the system interprets and ranks projects accordingly, and they can advance the top projects to the next round with one click — all with full override control.

Requirements

Validated

  • ✓ Competition system with ordered rounds (INTAKE → FILTERING → EVALUATION → SUBMISSION → MENTORING → LIVE_FINAL → DELIBERATION) — existing
  • ✓ Jury evaluation with scoring forms and pass/fail criteria — existing
  • ✓ AdvancementRule model with configurable rule types (AUTO_ADVANCE, SCORE_THRESHOLD, TOP_N, ADMIN_SELECTION) — existing
  • ✓ ProjectRoundState tracking per project per round — existing
  • ✓ JuryGroup and JuryAssignment for panel management — existing
  • ✓ CompetitionCategory enum (STARTUP, BUSINESS_CONCEPT) with per-project categorization — existing
  • ✓ Email notification system with Nodemailer/Poste.io — existing
  • ✓ Mentor dashboard route group (mentor) — existing
  • ✓ Round engine state machine for round transitions — existing
  • ✓ AI services with anonymization layer — existing

Active

  • AI ranking engine that interprets natural-language criteria into ranking logic
  • Admin ranking dashboard with drag-and-drop reordering per competition category
  • Side panel detail view showing evaluation data for the project selected in the ranking list
  • "Advance top X" button to promote selected projects to next round
  • Admin choice per-batch: send advancement/rejection email OR update status silently
  • Admin-editable email templates with variable insertion ({{firstName}}, {{teamName}}, etc.)
  • AI criteria preview mode: admin sees parsed rules before applying
  • Quick rank mode: AI interprets and ranks directly, admin adjusts after
  • Mentor assignment during non-MENTORING rounds (evaluation, submission, etc.)
  • Auto-persist mentor assignments across rounds (unless project eliminated)
  • Admin override for mentor assignments at any time
  • AI-suggested mentor-to-project matching with admin confirmation
  • Notification awareness: warn the admin when the next round has no automatic emails configured, so they know to send notifications manually
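The variable-insertion step for the editable email templates could be as simple as a placeholder substitution; a minimal sketch (the function name `renderTemplate` is illustrative, not existing platform code) that leaves unknown variables in place so typos are visible in a template preview:

```typescript
// Replace {{variable}} placeholders with provided values.
// Unknown variables stay as-is so the admin can spot typos in a preview.
export function renderTemplate(
  template: string,
  vars: Record<string, string>,
): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name: string) =>
    name in vars ? vars[name] : match,
  );
}
```

For example, `renderTemplate("Hello {{firstName}} of {{teamName}}!", { firstName: "Awa", teamName: "Solar Co" })` fills both variables, while a mistyped `{{fistName}}` would survive verbatim for the admin to catch.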

Out of Scope

  • Award eligibility (Spotlight on Africa, etc.) — separate workflow, later milestone
  • Changes to the juror evaluation interface — already built
  • Real-time collaborative ranking (multi-admin simultaneous drag) — unnecessary complexity
  • Public-facing ranking results — admin-only feature

Context

The competition is actively running. Evaluations for the first round are complete and the client needs to rank projects and advance semi-finalists urgently (by Monday). The ranking criteria were communicated in a mix of French and English by the organizers:

  • 2 yes votes → semi-finalist
  • 2 no votes → not semi-finalist
  • 1 yes + 1 no with ≥6/10 overall → consider as semi-finalist (depending on total count)
  • Special attention to whether evaluations included at least 1 internal + 1 external juror
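The vote rules above are deterministic enough to encode directly; a sketch in the stack's TypeScript, with illustrative type and function names and one stated assumption: "≥6/10 overall" is read here as the average of the jurors' overall scores.

```typescript
type Vote = "YES" | "NO";

interface Evaluation {
  vote: Vote;
  overallScore: number; // 0–10
  jurorType: "INTERNAL" | "EXTERNAL";
}

type Verdict = "SEMI_FINALIST" | "NOT_SEMI_FINALIST" | "BORDERLINE";

// 2 yes → semi-finalist; 2 no → out; 1 yes + 1 no → borderline when the
// average overall score is at least 6/10 (admin decides based on totals).
export function classify(evals: Evaluation[]): Verdict {
  const yes = evals.filter((e) => e.vote === "YES").length;
  const no = evals.filter((e) => e.vote === "NO").length;
  if (yes >= 2) return "SEMI_FINALIST";
  if (no >= 2) return "NOT_SEMI_FINALIST";
  const avg = evals.reduce((sum, e) => sum + e.overallScore, 0) / evals.length;
  return avg >= 6 ? "BORDERLINE" : "NOT_SEMI_FINALIST";
}

// Flags panels missing the required 1 internal + 1 external juror mix.
export function hasMixedPanel(evals: Evaluation[]): boolean {
  return (
    evals.some((e) => e.jurorType === "INTERNAL") &&
    evals.some((e) => e.jurorType === "EXTERNAL")
  );
}
```

Encoding the rules this way also gives the AI criteria preview something concrete to render: the parsed interpretation can be shown to the admin as these thresholds before being applied.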

Categories are STARTUP and BUSINESS_CONCEPT — rankings and advancement happen per-category within a single competition.

The platform already has AdvancementRule with rule types but no AI interpretation layer. The MentorAssignment concept doesn't yet support cross-round persistence or assignment during non-mentoring rounds.
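Cross-round persistence could be a copy step inside the existing round-engine transition; a sketch with hypothetical types (the real MentorAssignment model and its Prisma shape may differ):

```typescript
interface MentorAssignment {
  projectId: string;
  mentorId: string;
  roundId: string;
}

// On a round transition, copy each surviving project's mentor assignment to
// the new round; eliminated projects simply drop out of the result.
export function carryOverMentors(
  current: MentorAssignment[],
  advancedProjectIds: Set<string>,
  nextRoundId: string,
): MentorAssignment[] {
  return current
    .filter((a) => advancedProjectIds.has(a.projectId))
    .map((a) => ({ ...a, roundId: nextRoundId }));
}
```

Keeping this pure (no database calls) lets the round engine run it inside the same transaction as the round transition, and admin overrides simply edit the new round's rows afterwards.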

Constraints

  • Timeline: Semi-finalist notifications need to go out by Monday — ranking and advancement are highest priority
  • Tech stack: Must use existing stack (Next.js 15, tRPC, Prisma, OpenAI)
  • Data model: CompetitionCategory (STARTUP/BUSINESS_CONCEPT) is on the Project model, rankings must respect this split
  • Security: AI ranking criteria go through OpenAI — must anonymize project data before sending
  • Existing patterns: Follow tRPC router + Zod validation + service layer pattern
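The anonymization constraint might be met by swapping identifiers for opaque labels before the OpenAI call and keeping the reverse map server-side; a sketch with hypothetical field names (free-text fields such as the summary would still need their own scrubbing pass in a real implementation):

```typescript
interface ProjectForRanking {
  id: string;
  teamName: string;
  summary: string;
  scores: number[];
}

// Swap real IDs and team names for opaque labels before the payload leaves
// the platform; the reverse map stays server-side so the AI's ranking can
// be re-attached to real projects afterwards.
export function anonymize(projects: ProjectForRanking[]): {
  payload: { label: string; summary: string; scores: number[] }[];
  labelToId: Map<string, string>;
} {
  const labelToId = new Map<string, string>();
  const payload = projects.map((p, i) => {
    const label = `Project ${String.fromCharCode(65 + i)}`; // A, B, C, …
    labelToId.set(label, p.id);
    return { label, summary: p.summary, scores: p.scores };
  });
  return { payload, labelToId };
}
```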

Key Decisions

| Decision | Rationale | Outcome |
| --- | --- | --- |
| AI interprets natural-language criteria rather than hardcoded rules | Client changes criteria between rounds; a flexible system avoids code changes | Pending |
| Rankings per CompetitionCategory, not per JuryGroup | Categories (Startup vs Business Concept) are the meaningful split for advancement | Pending |
| Mentor assignments auto-persist across rounds | Reduces admin work; mentors build relationships with teams over time | Pending |
| Admin-editable email templates with variables | Client sends personalized emails in French/English; templates must be customizable | Pending |
| Side panel for ranking detail view | Keeps the drag-and-drop list compact while providing full evaluation context on demand | Pending |

Last updated: 2026-02-26 after initialization