Competition/Round architecture: full platform rewrite (Phases 1-9)
All checks were successful
Build and Push Docker Image / build (push) Successful in 7m45s
Replace Pipeline/Stage system with Competition/Round architecture.

New schema: Competition, Round (7 types), JuryGroup, AssignmentPolicy, ProjectRoundState, DeliberationSession, ResultLock, SubmissionWindow.

New services: round-engine, round-assignment, deliberation, result-lock, submission-manager, competition-context, ai-prompt-guard.

Full admin/jury/applicant/mentor UI rewrite. AI prompt hardening with structured prompts, retry logic, and injection detection. All legacy pipeline/stage code removed. 4 new migrations + seed aligned.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
docs/claude-architecture-redesign/00-executive-summary.md (new file, 201 lines)
@@ -0,0 +1,201 @@
# Executive Summary: MOPC Architecture Redesign

## Why This Redesign

The MOPC platform currently uses a **Pipeline -> Track -> Stage** model with generic JSON configs to orchestrate the competition. While technically sound, this architecture introduces unnecessary abstraction for what is fundamentally a **linear sequential competition flow**.

### Current Problems

| Problem | Impact |
|---------|--------|
| **3-level nesting** (Pipeline->Track->Stage) | Cognitive overhead for admins configuring rounds |
| **Generic `configJson` blobs** per stage type | Opaque — hard to know what's configurable without reading the code |
| **No explicit jury entities** | Juries are implicit (per-stage assignments); "Jury 1" cannot be managed as a named entity |
| **Single submission round** | No way to open a second submission window for semi-finalists |
| **Track layer for main flow** | MAIN track adds indirection without value for a linear flow |
| **No mentoring workspace** | Mentor file exchange exists but no comments, no promotion to submission |
| **No winner confirmation** | No multi-party agreement step to cement winners |
| **Missing round types** | Can't model a "Semi-finalist Submission", "Mentoring", or "Confirmation" step |

### Design Principles

1. **Domain over abstraction** — Models map directly to competition concepts (Jury 1, Round 2, etc.)
2. **Linear by default** — The main flow is sequential. Branching is only for special awards.
3. **Typed configs over JSON blobs** — Each round type has explicit, documented fields.
4. **Explicit entities** — Juries, submission windows, and confirmation steps are first-class models.
5. **Deep integration** — Every feature connects. Jury groups link to rounds, rounds link to submissions, submissions link to evaluations.
6. **Admin override everywhere** — Any automated decision can be manually overridden with an audit trail.

---
## Before & After: Architecture Comparison

### BEFORE (Current System)

```
Program
└── Pipeline (generic container)
    ├── Track: "Main Competition" (MAIN)
    │   ├── Stage: "Intake" (INTAKE, configJson: {...})
    │   ├── Stage: "Filtering" (FILTER, configJson: {...})
    │   ├── Stage: "Evaluation" (EVALUATION, configJson: {...})
    │   ├── Stage: "Selection" (SELECTION, configJson: {...})
    │   ├── Stage: "Live Finals" (LIVE_FINAL, configJson: {...})
    │   └── Stage: "Results" (RESULTS, configJson: {...})
    ├── Track: "Award 1" (AWARD)
    │   ├── Stage: "Evaluation" (EVALUATION)
    │   └── Stage: "Results" (RESULTS)
    └── Track: "Award 2" (AWARD)
        ├── Stage: "Evaluation" (EVALUATION)
        └── Stage: "Results" (RESULTS)

Juries: implicit (assignments per stage, no named entity)
Submissions: single round (one INTAKE stage)
Mentoring: basic (messages + notes, no workspace)
Winner confirmation: none
```
### AFTER (Redesigned System)

```
Program
└── Competition (purpose-built, replaces Pipeline)
    ├── Rounds (linear sequence, replaces Track+Stage):
    │   ├── Round 1: "Application Window" ─────── (INTAKE)
    │   ├── Round 2: "AI Screening" ──────────── (FILTERING)
    │   ├── Round 3: "Jury 1 - Semi-finalist" ── (EVALUATION) ── juryGroupId: jury-1
    │   ├── Round 4: "Semi-finalist Docs" ────── (SUBMISSION) ── submissionWindowId: sw-2
    │   ├── Round 5: "Jury 2 - Finalist" ─────── (EVALUATION) ── juryGroupId: jury-2
    │   ├── Round 6: "Finalist Mentoring" ────── (MENTORING)
    │   ├── Round 7: "Live Finals" ───────────── (LIVE_FINAL) ── juryGroupId: jury-3
    │   └── Round 8: "Confirm Winners" ───────── (CONFIRMATION)
    │
    ├── Jury Groups (explicit, named):
    │   ├── "Jury 1" ── members: [judge-a, judge-b, ...] ── linked to Round 3
    │   ├── "Jury 2" ── members: [judge-c, judge-d, ...] ── linked to Round 5
    │   └── "Jury 3" ── members: [judge-e, judge-f, ...] ── linked to Round 7
    │
    ├── Submission Windows (multi-round):
    │   ├── Window 1: "Round 1 Docs" ── requirements: [Exec Summary, Business Plan]
    │   └── Window 2: "Round 2 Docs" ── requirements: [Updated Plan, Video Pitch]
    │
    └── Special Awards (standalone):
        ├── "Innovation Award" ── mode: STAY_IN_MAIN, juryGroup: jury-2-award
        └── "Impact Award" ── mode: SEPARATE_POOL, juryGroup: dedicated-jury
```
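For orientation, the redesigned hierarchy can be sketched in plain TypeScript. These types and field names are illustrative only, not the final Prisma schema (which is defined in doc 03):

```typescript
// Illustrative sketch of the new hierarchy; not the final Prisma schema.
type RoundType =
  | "INTAKE" | "FILTERING" | "EVALUATION" | "SUBMISSION"
  | "MENTORING" | "LIVE_FINAL" | "CONFIRMATION";

interface Competition {
  id: string;
  programId: string;
  name: string;              // e.g. "Competition 2026"
  rounds: Round[];           // linear sequence, ordered by sortOrder
}

interface Round {
  id: string;
  competitionId: string;
  type: RoundType;
  name: string;              // e.g. "Jury 1 - Semi-finalist"
  sortOrder: number;
  juryGroupId?: string;      // set on EVALUATION / LIVE_FINAL rounds
  submissionWindowId?: string; // set on SUBMISSION rounds
}

// Replaces ProjectStageState: no trackId, just project + round.
interface ProjectRoundState {
  projectId: string;
  roundId: string;
  state: "PENDING" | "IN_PROGRESS" | "PASSED" | "REJECTED";
}

// Because the flow is linear, advancement needs no Track or routing logic:
// the next round is simply the one with the next-higher sortOrder.
function nextRound(competition: Competition, current: Round): Round | undefined {
  return competition.rounds
    .filter((r) => r.sortOrder > current.sortOrder)
    .sort((a, b) => a.sortOrder - b.sortOrder)[0];
}
```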
---

## Key Decisions

### 1. Eliminate the Track Layer

**Decision:** Remove the `Track` model entirely. The main competition is a linear sequence of Rounds. Special awards become standalone entities.

**Rationale:** The MOPC competition has one main flow (Intake -> Filtering -> Jury 1 -> Submission 2 -> Jury 2 -> Mentoring -> Finals -> Confirmation). The `Track` concept (MAIN/AWARD/SHOWCASE with RoutingMode and DecisionMode) was designed for branching flows that don't exist in this competition. Awards don't need their own track — they're parallel evaluation/voting processes that reference the same projects.

**Impact:**

- `Track` model deleted
- `TrackKind`, `RoutingMode` enums deleted
- `ProjectStageState.trackId` removed (becomes `ProjectRoundState` with just `projectId` + `roundId`)
- Award tracks replaced with enhanced `SpecialAward` model
- ~200 lines of Track CRUD code eliminated
### 2. Rename Pipeline -> Competition, Stage -> Round

**Decision:** Use domain-specific names that map to the competition vocabulary.

**Rationale:** Admins think in terms of "Competition 2026" and "Round 3: Jury 1 Evaluation", not "Pipeline" and "Stage". The rename costs nothing but improves comprehension.

### 3. Expand RoundType Enum

**Decision:** Add SUBMISSION, MENTORING, and CONFIRMATION to the existing types.

**Current:** `INTAKE | FILTER | EVALUATION | SELECTION | LIVE_FINAL | RESULTS`

**New:** `INTAKE | FILTERING | EVALUATION | SUBMISSION | MENTORING | LIVE_FINAL | CONFIRMATION`

**Changes:**

- `FILTER` -> `FILTERING` (clearer naming)
- `SELECTION` removed (merged into EVALUATION's advancement config)
- `RESULTS` removed (results are a view, not a round — handled by the CONFIRMATION round output)
- `SUBMISSION` added (new doc requirements for advancing teams)
- `MENTORING` added (mentor-team workspace activation)
- `CONFIRMATION` added (multi-party winner agreement)
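One way the old-to-new mapping might be expressed for a data migration is a simple lookup table. This is a hypothetical sketch, not the actual migration (which is specified in doc 21); `SELECTION` and `RESULTS` map to nothing because they are absorbed into EVALUATION config and the CONFIRMATION output respectively:

```typescript
// Hypothetical migration lookup from old StageType to new RoundType.
type StageType = "INTAKE" | "FILTER" | "EVALUATION" | "SELECTION" | "LIVE_FINAL" | "RESULTS";
type RoundType =
  | "INTAKE" | "FILTERING" | "EVALUATION" | "SUBMISSION"
  | "MENTORING" | "LIVE_FINAL" | "CONFIRMATION";

// SELECTION folds into EVALUATION's advancement config; RESULTS becomes a
// view rather than a round. Both map to null: those stages produce no Round.
const stageToRound: Record<StageType, RoundType | null> = {
  INTAKE: "INTAKE",
  FILTER: "FILTERING",
  EVALUATION: "EVALUATION",
  SELECTION: null,
  LIVE_FINAL: "LIVE_FINAL",
  RESULTS: null,
};
```

SUBMISSION, MENTORING, and CONFIRMATION have no old counterpart: they are created fresh, not migrated.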
### 4. Explicit JuryGroup Model

**Decision:** Juries are first-class entities with names, members, and per-juror configuration.

**Before:** Assignments were per-stage with no grouping concept. "Jury 1" only existed in the admin's head.

**After:** `JuryGroup` model with members, linked to specific evaluation/live-final rounds. A juror can belong to multiple groups.
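A minimal sketch of the explicit entity, with field names that are assumptions rather than the final schema:

```typescript
// Hypothetical shape for the explicit JuryGroup entity (illustrative fields).
interface JuryGroup {
  id: string;
  competitionId: string;
  name: string;            // "Jury 1", "Jury 2", ...
  memberIds: string[];     // juror user ids
}

// A juror can belong to multiple groups, so membership lookups go
// group-by-group rather than assuming one jury per juror.
function groupsForJuror(groups: JuryGroup[], jurorId: string): JuryGroup[] {
  return groups.filter((g) => g.memberIds.includes(jurorId));
}
```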
### 5. Multi-Round Submissions via SubmissionWindow

**Decision:** A new `SubmissionWindow` model handles document requirements per round, with automatic locking of previous windows.

**Before:** One INTAKE stage with one set of `FileRequirement` records.

**After:** Each submission window has its own requirements. When a new window opens, previous ones lock for applicants. Jury rounds can see docs from specific windows.
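The locking rule can be sketched as a pure function over an in-memory window list. Names like `isLockedForApplicants` and `openWindow` are illustrative, not the real service API:

```typescript
// Illustrative sketch of the window-locking rule: opening a window locks
// every earlier window for applicants.
interface SubmissionWindow {
  id: string;
  sortOrder: number;
  isLockedForApplicants: boolean;
}

function openWindow(windows: SubmissionWindow[], toOpen: string): SubmissionWindow[] {
  const target = windows.find((w) => w.id === toOpen);
  if (!target) throw new Error(`unknown window: ${toOpen}`);
  return windows.map((w) => ({
    ...w,
    // Every window before the newly opened one locks; the opened window
    // and any later (not-yet-opened) windows stay unlocked.
    isLockedForApplicants: w.sortOrder < target.sortOrder,
  }));
}
```

Jury visibility is the complement of this rule: an evaluation round reads documents from whichever windows it is configured to see, locked or not.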
### 6. Typed Configs Replace JSON Blobs

**Decision:** Replace generic `configJson: Json?` with round-type-specific config models or strongly-typed JSON with Zod validation.

**Before:** `Stage.configJson` could be anything — you'd have to read the code to know what fields exist for each StageType.

**After:** Each round type has a documented, validated config shape. The wizard presents only the fields relevant to each type.
---

## Scope Summary

| Area | Action | Complexity |
|------|--------|------------|
| **Schema** | Major changes (new models, renamed models, deleted Track) | High |
| **Stage engine** | Rename to round engine, simplify (no Track references) | Medium |
| **Assignment service** | Enhance with jury groups, hard/soft caps, category ratios | Medium |
| **Filtering service** | Minimal changes (rename stageId -> roundId) | Low |
| **Live control** | Enhanced stage manager UI, same core logic | Medium |
| **Mentor system** | Major enhancement (workspace, files, comments, promotion) | High |
| **Winner confirmation** | New system (proposal, approvals, freezing) | High |
| **Special awards** | Enhanced (standalone, two modes, own jury groups) | Medium |
| **Notification system** | Enhanced (deadline countdowns, reminder triggers) | Medium |
| **Admin UI** | Full redesign (competition wizard, round management) | High |
| **Jury UI** | Enhanced (multi-jury dashboard, cross-round docs) | Medium |
| **Applicant UI** | Enhanced (multi-round submissions, mentoring workspace) | Medium |
| **Mentor UI** | New (dedicated mentor dashboard and workspace) | High |
| **API routers** | Major refactor (rename, new endpoints, removed endpoints) | High |
| **Migration** | Data migration from old schema to new | Medium |
---

## Document Index

| # | Document | Purpose |
|---|----------|---------|
| 00 | This document | Executive summary and key decisions |
| 01 | Current System Audit | What exists today — models, services, routers, UI |
| 02 | Gap Analysis | Current vs required, feature-by-feature comparison |
| 03 | Data Model | Complete Prisma schema redesign with migration SQL |
| 04 | Round: Intake | Application window, forms, deadlines, drafts |
| 05 | Round: Filtering | AI screening, eligibility, admin overrides |
| 06 | Round: Evaluation | Multi-jury, caps, ratios, scoring, advancement |
| 07 | Round: Submission | Multi-round docs, locking, jury visibility |
| 08 | Round: Mentoring | Private workspace, file comments, promotion |
| 09 | Round: Live Finals | Stage manager, live voting, deliberation |
| 10 | Round: Confirmation | Jury signatures, admin override, result freezing |
| 11 | Special Awards | Two modes, award juries, integration |
| 12 | Jury Groups | Multi-jury architecture, members, overrides |
| 13 | Notifications & Deadlines | Countdowns, reminders, window management |
| 14 | AI Services | Filtering, assignment, summaries, eligibility |
| 15 | Admin UI Redesign | Dashboard, wizard, round management |
| 16 | Jury UI Redesign | Dashboard, evaluation, live voting |
| 17 | Applicant UI Redesign | Dashboard, multi-round uploads, mentoring |
| 18 | Mentor UI Redesign | Dashboard, workspace, file review |
| 19 | API Router Reference | tRPC changes — new, modified, removed |
| 20 | Service Layer Changes | Engine, assignment, new services |
| 21 | Migration Strategy | Schema migration, data migration, rollback |
| 22 | Integration Map | Cross-reference of all feature connections |
| 23 | Implementation Sequence | Phased order with dependencies |
docs/claude-architecture-redesign/01-current-system-audit.md (new file, 591 lines)
@@ -0,0 +1,591 @@
# Current System Audit: MOPC Platform

**Document Version:** 1.0
**Date:** 2026-02-15
**Status:** Complete
**Purpose:** Comprehensive inventory of all data models, services, routers, pages, and capabilities in the MOPC platform as of February 2026.
---

## Table of Contents

1. [Data Models](#1-data-models)
2. [Enums](#2-enums)
3. [Services](#3-services)
4. [tRPC Routers](#4-trpc-routers)
5. [UI Pages](#5-ui-pages)
6. [Strengths](#6-strengths-of-current-system)
7. [Weaknesses](#7-weaknesses-of-current-system)

---
## 1. Data Models

### 1.1 Competition Structure Models

| Model | Purpose | Key Fields | Relations |
|-------|---------|------------|-----------|
| **Pipeline** | Top-level competition round container | `programId`, `name`, `slug`, `status`, `settingsJson` | → Program, → Track[] |
| **Track** | Competition lane (MAIN or AWARD) | `pipelineId`, `name`, `kind`, `routingMode`, `decisionMode`, `sortOrder`, `settingsJson` | → Pipeline, → Stage[], → ProjectStageState[], ← SpecialAward? |
| **Stage** | Individual competition phase within a track | `trackId`, `stageType`, `name`, `slug`, `status`, `sortOrder`, `configJson`, `windowOpenAt`, `windowCloseAt` | → Track, → ProjectStageState[], → StageTransition[], → Cohort[], → LiveProgressCursor?, → LiveVotingSession? |
| **StageTransition** | Defines valid stage-to-stage movements | `fromStageId`, `toStageId`, `isDefault`, `guardJson` | → Stage (from), → Stage (to) |
| **ProjectStageState** | Tracks project position in pipeline | `projectId`, `trackId`, `stageId`, `state`, `enteredAt`, `exitedAt`, `metadataJson` | → Project, → Track, → Stage |
| **Cohort** | Groups projects for live voting | `stageId`, `name`, `votingMode`, `isOpen`, `windowOpenAt`, `windowCloseAt` | → Stage, → CohortProject[] |
| **CohortProject** | Project membership in a cohort | `cohortId`, `projectId`, `sortOrder` | → Cohort, → Project |
### 1.2 Project & Submission Models

| Model | Purpose | Key Fields | Relations |
|-------|---------|------------|-----------|
| **Project** | Core project/application entity | `programId`, `title`, `teamName`, `description`, `competitionCategory`, `oceanIssue`, `country`, `geographicZone`, `institution`, `wantsMentorship`, `foundedAt`, `status`, `submissionSource`, `submittedByEmail`, `submittedAt`, `tags`, `metadataJson`, `isDraft` | → Program, → ProjectFile[], → Assignment[], → TeamMember[], → MentorAssignment?, → FilteringResult[], → AwardEligibility[], → ProjectStageState[], → CohortProject[] |
| **ProjectFile** | File uploads attached to projects | `projectId`, `requirementId`, `fileType`, `fileName`, `mimeType`, `size`, `bucket`, `objectKey`, `version`, `replacedById`, `isLate` | → Project, → FileRequirement?, → ProjectFile (versioning) |
| **FileRequirement** | Defines required file uploads per stage | `stageId`, `name`, `description`, `acceptedMimeTypes`, `maxSizeMB`, `isRequired`, `sortOrder` | → Stage, ← ProjectFile[] |
| **TeamMember** | Team composition for projects | `projectId`, `userId`, `role`, `title` | → Project, → User |
### 1.3 Jury & Evaluation Models

| Model | Purpose | Key Fields | Relations |
|-------|---------|------------|-----------|
| **Assignment** | Jury member assigned to evaluate a project | `userId`, `projectId`, `stageId`, `method`, `isRequired`, `isCompleted`, `aiConfidenceScore`, `expertiseMatchScore`, `aiReasoning` | → User, → Project, → Stage, → Evaluation?, → ConflictOfInterest? |
| **Evaluation** | Jury member's assessment of a project | `assignmentId`, `formId`, `status`, `criterionScoresJson`, `globalScore`, `binaryDecision`, `feedbackText`, `version`, `submittedAt` | → Assignment, → EvaluationForm |
| **EvaluationForm** | Configurable evaluation criteria per stage | `stageId`, `version`, `criteriaJson`, `scalesJson`, `isActive` | → Stage, ← Evaluation[] |
| **ConflictOfInterest** | COI declarations by jury members | `assignmentId`, `userId`, `projectId`, `hasConflict`, `conflictType`, `description`, `declaredAt`, `reviewedById`, `reviewAction` | → Assignment, → User, → User (reviewer) |
| **GracePeriod** | Extended deadlines for specific jury members | `stageId`, `userId`, `projectId`, `extendedUntil`, `reason`, `grantedById` | → Stage, → User, → User (granter) |
| **EvaluationSummary** | AI-generated synthesis of evaluations | `projectId`, `stageId`, `summaryJson`, `generatedAt`, `generatedById`, `model`, `tokensUsed` | → Project, → Stage, → User |
| **EvaluationDiscussion** | Discussion thread for deliberation | `projectId`, `stageId`, `status`, `createdAt`, `closedAt`, `closedById` | → Project, → Stage, → User, → DiscussionComment[] |
| **DiscussionComment** | Individual comment in discussion | `discussionId`, `userId`, `content`, `createdAt` | → EvaluationDiscussion, → User |
### 1.4 Live Voting Models

| Model | Purpose | Key Fields | Relations |
|-------|---------|------------|-----------|
| **LiveVotingSession** | Live final event configuration | `stageId`, `status`, `currentProjectIndex`, `currentProjectId`, `votingStartedAt`, `votingEndsAt`, `projectOrderJson`, `votingMode`, `criteriaJson`, `allowAudienceVotes`, `audienceVoteWeight`, `tieBreakerMethod` | → Stage, → LiveVote[], → AudienceVoter[] |
| **LiveVote** | Individual vote during live event | `sessionId`, `projectId`, `userId`, `score`, `isAudienceVote`, `votedAt`, `criterionScoresJson`, `audienceVoterId` | → LiveVotingSession, → User?, → AudienceVoter? |
| **AudienceVoter** | Anonymous audience participant | `sessionId`, `token`, `identifier`, `identifierType`, `ipAddress`, `userAgent` | → LiveVotingSession, → LiveVote[] |
| **LiveProgressCursor** | Real-time cursor for live presentation | `stageId`, `sessionId`, `activeProjectId`, `activeOrderIndex`, `isPaused` | → Stage |
### 1.5 Awards Models

| Model | Purpose | Key Fields | Relations |
|-------|---------|------------|-----------|
| **SpecialAward** | Special prize/recognition category | `programId`, `trackId`, `name`, `description`, `status`, `criteriaText`, `autoTagRulesJson`, `useAiEligibility`, `scoringMode`, `maxRankedPicks`, `votingStartAt`, `votingEndAt`, `winnerProjectId`, `winnerOverridden`, `eligibilityJobStatus` | → Program, → Track?, → Project (winner), → AwardEligibility[], → AwardJuror[], → AwardVote[] |
| **AwardEligibility** | AI-determined award eligibility | `awardId`, `projectId`, `method`, `eligible`, `aiReasoningJson`, `overriddenBy`, `overriddenAt` | → SpecialAward, → Project, → User? |
| **AwardJuror** | Jury panel for special award | `awardId`, `userId` | → SpecialAward, → User |
| **AwardVote** | Vote for special award winner | `awardId`, `userId`, `projectId`, `rank`, `votedAt` | → SpecialAward, → User, → Project |
### 1.6 Mentoring Models

| Model | Purpose | Key Fields | Relations |
|-------|---------|------------|-----------|
| **MentorAssignment** | Mentor-project pairing | `projectId`, `mentorId`, `method`, `assignedAt`, `assignedBy`, `aiConfidenceScore`, `expertiseMatchScore`, `completionStatus` | → Project (unique), → User (mentor), → MentorNote[], → MentorMilestoneCompletion[] |
| **MentorMessage** | Chat messages between mentor and team | `projectId`, `senderId`, `message`, `isRead` | → Project, → User |
| **MentorNote** | Private notes by mentor/admin | `mentorAssignmentId`, `authorId`, `content`, `isVisibleToAdmin` | → MentorAssignment, → User |
| **MentorMilestone** | Program-wide mentorship checkpoints | `programId`, `name`, `description`, `isRequired`, `deadlineOffsetDays`, `sortOrder` | → Program, → MentorMilestoneCompletion[] |
| **MentorMilestoneCompletion** | Completion record for milestones | `milestoneId`, `mentorAssignmentId`, `completedById`, `completedAt` | → MentorMilestone, → MentorAssignment, → User |
### 1.7 Filtering Models

| Model | Purpose | Key Fields | Relations |
|-------|---------|------------|-----------|
| **FilteringRule** | Automated screening rule | `stageId`, `name`, `ruleType`, `configJson`, `priority`, `isActive` | → Stage |
| **FilteringResult** | Per-project filtering outcome | `stageId`, `projectId`, `outcome`, `ruleResultsJson`, `aiScreeningJson`, `overriddenBy`, `overriddenAt`, `overrideReason`, `finalOutcome` | → Stage, → Project, → User? |
| **FilteringJob** | Progress tracking for filtering runs | `stageId`, `status`, `totalProjects`, `processedCount`, `passedCount`, `filteredCount`, `flaggedCount`, `errorMessage`, `startedAt`, `completedAt` | → Stage |
| **AssignmentJob** | Progress tracking for assignment generation | `stageId`, `status`, `totalProjects`, `processedCount`, `suggestionsCount`, `suggestionsJson`, `errorMessage`, `fallbackUsed` | → Stage |
| **TaggingJob** | Progress tracking for AI tagging | `programId`, `status`, `totalProjects`, `processedCount`, `taggedCount`, `skippedCount`, `failedCount`, `errorsJson` | → Program? |
### 1.8 Users & Auth Models

| Model | Purpose | Key Fields | Relations |
|-------|---------|------------|-----------|
| **User** | Platform user account | `email`, `name`, `role`, `status`, `expertiseTags`, `maxAssignments`, `country`, `bio`, `phoneNumber`, `notificationPreference`, `digestFrequency`, `preferredWorkload`, `passwordHash`, `inviteToken`, `onboardingCompletedAt` | → Assignment[], → GracePeriod[], → LiveVote[], → TeamMember[], → MentorAssignment[], → AwardJuror[], → ConflictOfInterest[], → InAppNotification[] |
| **Account** | NextAuth provider accounts | `userId`, `provider`, `providerAccountId`, `access_token`, `refresh_token` | → User |
| **Session** | NextAuth active sessions | `userId`, `sessionToken`, `expires` | → User |
| **VerificationToken** | NextAuth magic link tokens | `identifier`, `token`, `expires` | (standalone) |
### 1.9 Audit & Logging Models

| Model | Purpose | Key Fields | Relations |
|-------|---------|------------|-----------|
| **AuditLog** | General platform activity log | `userId`, `action`, `entityType`, `entityId`, `detailsJson`, `previousDataJson`, `ipAddress`, `userAgent`, `sessionId`, `timestamp` | → User? |
| **DecisionAuditLog** | Pipeline decision tracking | `eventType`, `entityType`, `entityId`, `actorId`, `detailsJson`, `snapshotJson`, `createdAt` | (no FK relations) |
| **OverrideAction** | Manual admin overrides log | `entityType`, `entityId`, `previousValue`, `newValueJson`, `reasonCode`, `reasonText`, `actorId`, `createdAt` | (no FK relations) |
| **AIUsageLog** | AI API consumption tracking | `userId`, `action`, `entityType`, `entityId`, `model`, `promptTokens`, `completionTokens`, `estimatedCostUsd`, `status`, `errorMessage` | (no FK relations) |
| **NotificationLog** | Email/SMS delivery tracking | `userId`, `channel`, `provider`, `type`, `status`, `externalId`, `errorMsg` | → User |
### 1.10 Program & Resources Models

| Model | Purpose | Key Fields | Relations |
|-------|---------|------------|-----------|
| **Program** | Competition edition/year | `name`, `slug`, `year`, `status`, `description`, `settingsJson` | → Pipeline[], → Project[], → LearningResource[], → Partner[], → SpecialAward[] |
| **LearningResource** | Educational content for teams | `programId`, `title`, `description`, `contentJson`, `resourceType`, `cohortLevel`, `fileName`, `mimeType`, `bucket`, `objectKey`, `externalUrl`, `isPublished` | → Program?, → User (creator), → ResourceAccess[] |
| **ResourceAccess** | Access log for learning materials | `resourceId`, `userId`, `accessedAt`, `ipAddress` | → LearningResource, → User |
| **Partner** | Sponsor/partner organization | `programId`, `name`, `description`, `website`, `partnerType`, `visibility`, `logoFileName`, `sortOrder`, `isActive` | → Program? |
| **WizardTemplate** | Saved pipeline configuration templates | `name`, `description`, `config`, `isGlobal`, `programId`, `createdBy` | → Program?, → User |
### 1.11 Communication Models

| Model | Purpose | Key Fields | Relations |
|-------|---------|------------|-----------|
| **InAppNotification** | Bell icon notifications | `userId`, `type`, `priority`, `icon`, `title`, `message`, `linkUrl`, `linkLabel`, `metadata`, `groupKey`, `isRead`, `expiresAt` | → User |
| **NotificationEmailSetting** | Email notification toggles per type | `notificationType`, `category`, `label`, `sendEmail`, `emailSubject`, `emailTemplate` | (standalone) |
| **NotificationPolicy** | Event-driven notification config | `eventType`, `channel`, `templateId`, `isActive`, `configJson` | (no FK relations) |
| **Message** | Bulk messaging system | `senderId`, `recipientType`, `recipientFilter`, `stageId`, `templateId`, `subject`, `body`, `deliveryChannels`, `scheduledAt`, `sentAt` | → User (sender), → Stage?, → MessageTemplate?, → MessageRecipient[] |
| **MessageRecipient** | Individual message delivery | `messageId`, `userId`, `channel`, `isRead`, `readAt`, `deliveredAt` | → Message, → User |
| **MessageTemplate** | Reusable email templates | `name`, `category`, `subject`, `body`, `variables`, `isActive`, `createdBy` | → User, ← Message[] |
### 1.12 Webhooks & Integrations

| Model | Purpose | Key Fields | Relations |
|-------|---------|------------|-----------|
| **Webhook** | Outbound event webhooks | `name`, `url`, `secret`, `events`, `headers`, `maxRetries`, `isActive`, `createdById` | → User, → WebhookDelivery[] |
| **WebhookDelivery** | Webhook delivery log | `webhookId`, `event`, `payload`, `status`, `responseStatus`, `responseBody`, `attempts`, `lastAttemptAt` | → Webhook |
### 1.13 Miscellaneous Models

| Model | Purpose | Key Fields | Relations |
|-------|---------|------------|-----------|
| **SystemSettings** | Platform-wide config KV store | `key`, `value`, `type`, `category`, `description`, `isSecret` | (standalone) |
| **ExpertiseTag** | Tag taxonomy for matching | `name`, `description`, `category`, `color`, `isActive`, `sortOrder` | → ProjectTag[] |
| **ProjectTag** | Project-tag association | `projectId`, `tagId`, `confidence`, `source` | → Project, → ExpertiseTag |
| **ProjectStatusHistory** | Historical status changes | `projectId`, `status`, `changedAt`, `changedBy` | → Project |
| **ReminderLog** | Evaluation deadline reminders | `stageId`, `userId`, `type`, `sentAt` | → Stage, → User |
| **DigestLog** | Email digest delivery log | `userId`, `digestType`, `contentJson`, `sentAt` | → User |

---
## 2. Enums

### 2.1 User & Auth Enums

| Enum | Values | Usage |
|------|--------|-------|
| **UserRole** | `SUPER_ADMIN`, `PROGRAM_ADMIN`, `JURY_MEMBER`, `MENTOR`, `OBSERVER`, `APPLICANT`, `AWARD_MASTER`, `AUDIENCE` | User permissions hierarchy |
| **UserStatus** | `NONE`, `INVITED`, `ACTIVE`, `SUSPENDED` | User account state |
### 2.2 Project & Competition Enums

| Enum | Values | Usage |
|------|--------|-------|
| **ProjectStatus** | `SUBMITTED`, `ELIGIBLE`, `ASSIGNED`, `SEMIFINALIST`, `FINALIST`, `REJECTED` | Legacy project state (superseded by ProjectStageState) |
| **CompetitionCategory** | `STARTUP`, `BUSINESS_CONCEPT` | Project type (existing company vs. student idea) |
| **OceanIssue** | `POLLUTION_REDUCTION`, `CLIMATE_MITIGATION`, `TECHNOLOGY_INNOVATION`, `SUSTAINABLE_SHIPPING`, `BLUE_CARBON`, `HABITAT_RESTORATION`, `COMMUNITY_CAPACITY`, `SUSTAINABLE_FISHING`, `CONSUMER_AWARENESS`, `OCEAN_ACIDIFICATION`, `OTHER` | Project focus area |
### 2.3 Pipeline Enums

| Enum | Values | Usage |
|------|--------|-------|
| **StageType** | `INTAKE`, `FILTER`, `EVALUATION`, `SELECTION`, `LIVE_FINAL`, `RESULTS` | Stage functional type |
| **TrackKind** | `MAIN`, `AWARD`, `SHOWCASE` | Track purpose |
| **RoutingMode** | `SHARED`, `EXCLUSIVE` | Project routing behavior (can projects be in multiple tracks?) |
| **StageStatus** | `STAGE_DRAFT`, `STAGE_ACTIVE`, `STAGE_CLOSED`, `STAGE_ARCHIVED` | Stage lifecycle state |
| **ProjectStageStateValue** | `PENDING`, `IN_PROGRESS`, `PASSED`, `REJECTED`, `ROUTED`, `COMPLETED`, `WITHDRAWN` | Project state within a stage |
| **DecisionMode** | `JURY_VOTE`, `AWARD_MASTER_DECISION`, `ADMIN_DECISION` | How winners are determined in a track |
### 2.4 Evaluation & Assignment Enums

| Enum | Values | Usage |
|------|--------|-------|
| **EvaluationStatus** | `NOT_STARTED`, `DRAFT`, `SUBMITTED`, `LOCKED` | Evaluation completion state |
| **AssignmentMethod** | `MANUAL`, `BULK`, `AI_SUGGESTED`, `AI_AUTO`, `ALGORITHM` | How assignment was created |
| **MentorAssignmentMethod** | `MANUAL`, `AI_SUGGESTED`, `AI_AUTO`, `ALGORITHM` | How mentor was paired |
### 2.5 Filtering Enums

| Enum | Values | Usage |
|------|--------|-------|
| **FilteringOutcome** | `PASSED`, `FILTERED_OUT`, `FLAGGED` | Filtering result |
| **FilteringRuleType** | `FIELD_BASED`, `DOCUMENT_CHECK`, `AI_SCREENING` | Type of filtering rule |
| **FilteringJobStatus** | `PENDING`, `RUNNING`, `COMPLETED`, `FAILED` | Job progress state |
| **AssignmentJobStatus** | `PENDING`, `RUNNING`, `COMPLETED`, `FAILED` | Job progress state |
| **TaggingJobStatus** | `PENDING`, `RUNNING`, `COMPLETED`, `FAILED` | Job progress state |
### 2.6 Awards Enums

| Enum | Values | Usage |
|------|--------|-------|
| **AwardScoringMode** | `PICK_WINNER`, `RANKED`, `SCORED` | Award voting method |
| **AwardStatus** | `DRAFT`, `NOMINATIONS_OPEN`, `VOTING_OPEN`, `CLOSED`, `ARCHIVED` | Award lifecycle |
| **EligibilityMethod** | `AUTO`, `MANUAL` | How eligibility was determined |
### 2.7 Miscellaneous Enums

| Enum | Values | Usage |
|------|--------|-------|
| **FileType** | `EXEC_SUMMARY`, `PRESENTATION`, `VIDEO`, `OTHER`, `BUSINESS_PLAN`, `VIDEO_PITCH`, `SUPPORTING_DOC` | Project file categorization |
| **SubmissionSource** | `MANUAL`, `CSV`, `NOTION`, `TYPEFORM`, `PUBLIC_FORM` | How project was submitted |
| **NotificationChannel** | `EMAIL`, `WHATSAPP`, `BOTH`, `NONE` | Notification delivery method |
| **ResourceType** | `PDF`, `VIDEO`, `DOCUMENT`, `LINK`, `OTHER` | Learning resource type |
| **CohortLevel** | `ALL`, `SEMIFINALIST`, `FINALIST` | Access level for resources |
| **PartnerVisibility** | `ADMIN_ONLY`, `JURY_VISIBLE`, `PUBLIC` | Who can see partner |
| **PartnerType** | `SPONSOR`, `PARTNER`, `SUPPORTER`, `MEDIA`, `OTHER` | Partner categorization |
| **TeamMemberRole** | `LEAD`, `MEMBER`, `ADVISOR` | Team composition |
| **OverrideReasonCode** | `DATA_CORRECTION`, `POLICY_EXCEPTION`, `JURY_CONFLICT`, `SPONSOR_DECISION`, `ADMIN_DISCRETION` | Why decision was overridden |
| **ProgramStatus** | `DRAFT`, `ACTIVE`, `ARCHIVED` | Program lifecycle |
| **SettingType** | `STRING`, `NUMBER`, `BOOLEAN`, `JSON`, `SECRET` | System setting data type |
| **SettingCategory** | `AI`, `BRANDING`, `EMAIL`, `STORAGE`, `SECURITY`, `DEFAULTS`, `WHATSAPP`, `AUDIT_CONFIG`, `LOCALIZATION`, `DIGEST`, `ANALYTICS`, `INTEGRATIONS`, `COMMUNICATION` | Setting organization |
|
||||
|
||||
---
## 3. Services

All services are located in `src/server/services/*.ts`.

| Service | Purpose | Key Functions |
|---------|---------|---------------|
| **stage-engine.ts** | State machine for project transitions | `validateTransition()`, `executeTransition()`, `executeBatchTransition()` - handles guard evaluation, atomic PSS updates, audit logging |
| **stage-filtering.ts** | Runs filtering pipeline scoped to stage | `runStageFiltering()`, `resolveManualDecision()`, `getManualQueue()` - executes field-based, document, and AI rules; duplicate detection built-in |
| **stage-assignment.ts** | Smart jury assignment generation | `previewStageAssignment()`, `executeStageAssignment()`, `getCoverageReport()`, `rebalance()` - tag matching, workload balancing, COI handling |
| **stage-notifications.ts** | Event-driven notification producer | `emitStageEvent()`, `onStageTransitioned()`, `onFilteringCompleted()`, `onAssignmentGenerated()`, `onCursorUpdated()` - never throws, creates DecisionAuditLog + in-app + email |
| **live-control.ts** | Real-time live ceremony control | `startSession()`, `setActiveProject()`, `jumpToProject()`, `reorderQueue()`, `pauseResume()`, `openCohortWindow()`, `closeCohortWindow()` - manages LiveProgressCursor |
| **ai-filtering.ts** | AI-powered project screening | Anonymizes data, calls OpenAI API, confidence banding, spam detection (delegates to stage-filtering.ts for execution) |
| **ai-assignment.ts** | AI-suggested jury matching | GPT-based assignment generation with expertise matching (100 lines) |
| **ai-evaluation-summary.ts** | GPT synthesis of evaluations | Generates strengths/weaknesses summary from jury feedback |
| **ai-tagging.ts** | Automatic project categorization | Tags projects with expertise areas using GPT |
| **ai-award-eligibility.ts** | Award eligibility assessment | GPT determines if project meets award criteria |
| **anonymization.ts** | GDPR-compliant data stripping | Removes PII before AI calls (name, email, institution, etc.) |
| **ai-errors.ts** | Centralized AI error handling | Classifies errors (rate limit, token limit, API down), provides retry logic |
| **award-eligibility-job.ts** | Batch award eligibility processing | Runs AI eligibility checks across all projects for an award |
| **smart-assignment.ts** | Scoring algorithm for matching | Tag overlap, bio match, workload balance, geo diversity, COI blocking, availability checking |
| **mentor-matching.ts** | Mentor-project pairing logic | Similar to smart-assignment but for mentorship |
| **evaluation-reminders.ts** | Cron job for deadline reminders | Sends 3-day, 24h, 1h reminders to jury with incomplete evaluations |
| **email-digest.ts** | Daily/weekly email summaries | Aggregates pending tasks for users |
| **in-app-notification.ts** | In-app notification helpers | Creates bell-icon notifications with linking |
| **notification.ts** | Email sending service | Wraps Nodemailer, supports templates |
| **webhook-dispatcher.ts** | Webhook delivery service | Sends events to registered webhook URLs with retry logic |
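The `anonymization.ts` contract above (strip PII before any AI call) can be sketched as follows. The field names and the allow-list approach are illustrative assumptions, not the service's actual API:

```typescript
// Hypothetical sketch of PII stripping before an AI call.
// Field names (teamLeadName, contactEmail, institution) are assumptions.
interface ProjectForAI {
  title: string;
  description: string;
  teamLeadName?: string;
  contactEmail?: string;
  institution?: string;
}

function anonymizeForAI(project: ProjectForAI): { title: string; description: string } {
  // Allow-list only the content fields. Copying known-safe fields is
  // more robust than deleting known-PII fields, because new PII fields
  // added to the model later are excluded by default.
  return { title: project.title, description: project.description };
}
```

The allow-list design choice matters here: a deny-list silently leaks any identifying field added after the list was written.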
---
## 4. tRPC Routers

All routers are located in `src/server/routers/*.ts`. Total: 38 routers.

| Router | Procedure Count | Key Procedures | Purpose |
|--------|-----------------|----------------|---------|
| **pipeline.ts** | ~15 | `create`, `update`, `delete`, `list`, `getById`, `archive` | Pipeline CRUD, linking to Program |
| **stage.ts** | ~20 | `create`, `updateConfig`, `updateStatus`, `delete`, `getByTrack`, `reorderStages`, `createTransition` | Stage CRUD, window management, transition setup |
| **stageFiltering.ts** | ~10 | `createRule`, `runFiltering`, `getManualQueue`, `resolveManualDecision`, `getJobStatus` | Filtering rule management + execution |
| **stageAssignment.ts** | ~8 | `previewAssignment`, `executeAssignment`, `getCoverage`, `rebalance`, `bulkDelete` | Assignment generation, coverage analysis |
| **project.ts** | ~25 | `create`, `update`, `delete`, `getById`, `list`, `import`, `advanceToRound`, `updateStatus` | Project CRUD, CSV import, status changes |
| **assignment.ts** | ~12 | `create`, `bulkCreate`, `delete`, `getByUser`, `getByProject`, `markComplete` | Manual assignment management |
| **evaluation.ts** | ~15 | `create`, `update`, `submit`, `lock`, `unlock`, `getByAssignment`, `generateSummary` | Evaluation submission, locking |
| **gracePeriod.ts** | ~6 | `create`, `delete`, `getByStage`, `getByUser`, `checkActive` | Grace period management |
| **user.ts** | ~20 | `create`, `update`, `delete`, `invite`, `resendInvite`, `list`, `updateProfile`, `uploadAvatar` | User management, invites |
| **specialAward.ts** | ~15 | `create`, `update`, `delete`, `runEligibility`, `vote`, `getResults`, `overrideWinner` | Award creation, voting, eligibility |
| **live-voting.ts** | ~12 | `createSession`, `vote`, `getResults`, `closeSession`, `updateCriteria` | Live voting session management (legacy LiveVotingSession model) |
| **live.ts** | ~10 | `startSession`, `setActiveProject`, `jumpToProject`, `pauseResume`, `openCohort`, `closeCohort` | Live control (new LiveProgressCursor model) |
| **cohort.ts** | ~8 | `create`, `update`, `delete`, `addProjects`, `removeProjects`, `reorder` | Cohort management for live finals |
| **mentor.ts** | ~12 | `assignMentor`, `removeMentor`, `sendMessage`, `addNote`, `completeMilestone`, `getMentorDashboard` | Mentorship workflow |
| **learningResource.ts** | ~10 | `create`, `update`, `delete`, `list`, `upload`, `markAccessed` | Learning hub content |
| **partner.ts** | ~8 | `create`, `update`, `delete`, `list`, `uploadLogo` | Partner management |
| **tag.ts** | ~10 | `create`, `update`, `delete`, `list`, `runTagging`, `getTaggingJobStatus` | Expertise tag management |
| **notification.ts** | ~8 | `getInApp`, `markRead`, `markAllRead`, `getUnreadCount`, `updateEmailSettings` | Notification center |
| **message.ts** | ~10 | `send`, `schedule`, `list`, `getRecipients`, `createTemplate`, `listTemplates` | Bulk messaging |
| **webhook.ts** | ~8 | `create`, `update`, `delete`, `test`, `getDeliveries`, `retry` | Webhook management |
| **audit.ts** | ~6 | `getAuditLog`, `getDecisionLog`, `getOverrides`, `export` | Audit trail viewing |
| **analytics.ts** | ~12 | `getDashboardStats`, `getProjectStats`, `getJuryStats`, `getAwardStats`, `getEngagementMetrics` | Reporting and analytics |
| **dashboard.ts** | ~8 | `getAdminDashboard`, `getJuryDashboard`, `getApplicantDashboard`, `getMentorDashboard` | Role-specific dashboards |
| **export.ts** | ~8 | `exportProjects`, `exportEvaluations`, `exportVotes`, `exportAuditLog` | CSV/Excel exports |
| **file.ts** | ~8 | `uploadFile`, `getPresignedUrl`, `deleteFile`, `listFiles`, `createRequirement` | MinIO file management |
| **filtering.ts** | ~6 | Legacy filtering endpoints (superseded by stageFiltering) | Deprecated |
| **avatar.ts** | ~4 | `upload`, `delete`, `getUrl` | User profile images |
| **logo.ts** | ~4 | `upload`, `delete`, `getUrl` | Project logos |
| **decision.ts** | ~6 | `overrideFilteringResult`, `overrideAwardEligibility`, `overridePSS`, `getOverrideHistory` | Admin override controls |
| **program.ts** | ~10 | `create`, `update`, `delete`, `list`, `getById`, `archive` | Program CRUD |
| **application.ts** | ~8 | `submitApplication`, `saveDraft`, `getDraft`, `deleteDraft` | Public application form |
| **applicant.ts** | ~10 | `getMyProjects`, `updateTeam`, `uploadDocument`, `requestMentorship` | Applicant portal |
| **notion-import.ts** | ~4 | `sync`, `import`, `getStatus` | Notion integration |
| **typeform-import.ts** | ~4 | `sync`, `import`, `getStatus` | Typeform integration |
| **settings.ts** | ~8 | `get`, `set`, `getBulk`, `setBulk`, `getByCategory` | System settings KV store |
| **project-pool.ts** | ~6 | `getUnassignedProjects`, `getProjectsByStage`, `getProjectsByStatus` | Project queries for assignment |
| **wizard-template.ts** | ~8 | `create`, `update`, `delete`, `list`, `clone`, `applyTemplate` | Pipeline wizard templates |

**Total Procedures:** ~400

---
## 5. UI Pages
### 5.1 Admin Pages (`src/app/(admin)/admin/`)

| Route | Purpose | Key Features |
|-------|---------|--------------|
| `/admin` | Admin dashboard | Overview metrics, recent activity, quick actions |
| `/admin/members` | User management list | User table with filters, role assignment, status changes |
| `/admin/members/[id]` | User detail/edit | Profile editing, role changes, assignment history |
| `/admin/members/invite` | Invite new users | Bulk invite form with role selection |
| `/admin/programs` | Program list | Program cards, create/archive/edit |
| `/admin/programs/[id]` | Program detail | Program overview, linked pipelines, projects |
| `/admin/programs/[id]/edit` | Program settings editor | Name, year, status, settingsJson editor |
| `/admin/programs/[id]/apply-settings` | Application form config | Public submission form customization |
| `/admin/programs/[id]/mentorship` | Mentorship milestones | Milestone creation, completion tracking |
| `/admin/projects` | Project list | Searchable/filterable project table |
| `/admin/projects/[id]` | Project detail | Full project view with evaluations, history |
| `/admin/projects/[id]/edit` | Project editor | Edit project metadata, team, tags |
| `/admin/projects/[id]/mentor` | Mentor assignment | Assign/remove mentor, view messages |
| `/admin/projects/new` | Manual project creation | Add project without public form |
| `/admin/projects/import` | CSV/Typeform/Notion import | Bulk import wizard |
| `/admin/projects/pool` | Unassigned project pool | Projects awaiting assignment |
| `/admin/rounds/pipelines` | Pipeline list | All pipelines across programs |
| `/admin/rounds/pipeline/[id]` | Pipeline detail | Track/stage tree, project flow diagram |
| `/admin/rounds/pipeline/[id]/edit` | Pipeline settings | Name, slug, status, settingsJson |
| `/admin/rounds/pipeline/[id]/wizard` | Pipeline wizard | Step-by-step configuration UI (tracks, stages, transitions) |
| `/admin/rounds/pipeline/[id]/advanced` | Advanced pipeline editor | JSON config editor, raw transitions |
| `/admin/rounds/new-pipeline` | Pipeline creation wizard | Multi-step pipeline setup |
| `/admin/awards` | Special awards list | Award cards with status |
| `/admin/awards/[id]` | Award detail | Eligibility, votes, results |
| `/admin/awards/[id]/edit` | Award editor | Criteria, voting config, jury panel |
| `/admin/awards/new` | Create award | Award creation form |
| `/admin/mentors` | Mentor list | All users with MENTOR role |
| `/admin/mentors/[id]` | Mentor detail | Assigned projects, notes, milestones |
| `/admin/learning` | Learning hub management | Resource list, upload, publish |
| `/admin/learning/[id]` | Resource detail/edit | Content editor (BlockNote), access logs |
| `/admin/learning/new` | Create resource | Upload or link external content |
| `/admin/partners` | Partner management | Partner list, logos, visibility |
| `/admin/partners/[id]` | Partner detail/edit | Edit partner info, upload logo |
| `/admin/partners/new` | Add partner | Partner creation form |
| `/admin/messages` | Messaging dashboard | Send bulk messages, view sent messages |
| `/admin/messages/templates` | Message templates | Template CRUD |
| `/admin/settings` | System settings | Category tabs, KV editor |
| `/admin/settings/tags` | Expertise tags | Tag taxonomy management |
| `/admin/settings/webhooks` | Webhook management | Webhook CRUD, delivery logs |
| `/admin/audit` | Audit log viewer | Searchable audit trail |
| `/admin/reports` | Analytics reports | Charts, exports, metrics |
| `/admin/reports/stages` | Stage-level reports | Per-stage assignment coverage, completion rates |
### 5.2 Jury Pages (`src/app/(jury)/jury/`)

| Route | Purpose | Key Features |
|-------|---------|--------------|
| `/jury` | Jury dashboard | Assigned stages, pending evaluations, deadlines |
| `/jury/stages` | Jury stage list | Stages where user has assignments |
| `/jury/stages/[stageId]/assignments` | Assignment list for stage | Projects assigned to this user |
| `/jury/stages/[stageId]/projects/[projectId]` | Project detail view | Full project info, files, team |
| `/jury/stages/[stageId]/projects/[projectId]/evaluate` | Evaluation form | Criterion scoring, feedback, submit |
| `/jury/stages/[stageId]/projects/[projectId]/evaluation` | View submitted evaluation | Read-only evaluation, edit if not locked |
| `/jury/stages/[stageId]/compare` | Side-by-side comparison | Compare multiple projects, scoring matrix |
| `/jury/stages/[stageId]/live` | Live voting interface | Real-time voting during live ceremony |
| `/jury/awards` | Special awards list | Awards where user is juror |
| `/jury/awards/[id]` | Award voting | View eligible projects, cast votes |
| `/jury/learning` | Learning hub (jury access) | Resources for jury members |
### 5.3 Applicant Pages (`src/app/(applicant)/applicant/`)

| Route | Purpose | Key Features |
|-------|---------|--------------|
| `/applicant` | Applicant dashboard | Application status, next steps |
| `/applicant/pipeline` | Pipeline progress view | Visual pipeline with current stage |
| `/applicant/pipeline/[stageId]/status` | Stage detail view | Stage status, requirements, deadlines |
| `/applicant/pipeline/[stageId]/documents` | Document upload | Upload required files for stage |
| `/applicant/documents` | All documents | Document library, versions |
| `/applicant/team` | Team management | Add/remove team members, roles |
| `/applicant/mentor` | Mentorship dashboard | Chat with mentor, milestones |
### 5.4 Auth Pages (`src/app/(auth)/`)

| Route | Purpose | Key Features |
|-------|---------|--------------|
| `/login` | Login page | Email magic link + password login |
| `/verify` | Magic link verification | Token verification, auto-login |
| `/verify-email` | Email verification | Verify email after signup |
| `/accept-invite` | Invitation acceptance | One-click invite token handling |
| `/set-password` | Password setup | First-time password creation |
| `/onboarding` | User onboarding wizard | Profile completion, expertise tags |
| `/error` | Auth error page | Error display with retry |

---

## 6. Strengths of Current System
### 6.1 Architecture Strengths

| Strength | Description |
|----------|-------------|
| **Full Type Safety** | End-to-end TypeScript from DB → tRPC → React. Prisma generates types, tRPC enforces them, components consume them safely. |
| **Atomic Transactions** | All critical operations (stage transitions, filtering, assignments) use `$transaction` with proper rollback. |
| **Comprehensive Audit** | Dual audit system: `AuditLog` for general activity, `DecisionAuditLog` for pipeline decisions. Full traceability. |
| **RBAC Enforcement** | tRPC middleware hierarchy (`adminProcedure`, `juryProcedure`, etc.) enforces role-based access at API level. |
| **GDPR Compliance** | All AI calls strip PII via `anonymization.ts`. No personal data sent to OpenAI. |
| **Event-Driven Design** | `stage-notifications.ts` emits events on every pipeline action. Notifications never block core operations (catch all errors). |
| **Graceful AI Error Handling** | `ai-errors.ts` classifies errors (rate limit, token limit, API down) and provides retry guidance. AI failures never crash the system. |
| **Duplicate Detection** | Built-in duplicate submission detection in `stage-filtering.ts` (by email). Always flags for manual review, never auto-rejects. |
### 6.2 Data Model Strengths

| Strength | Description |
|----------|-------------|
| **Flexible Pipeline Model** | Pipeline → Track → Stage → ProjectStageState allows arbitrary round structures. Main track + multiple award tracks supported. |
| **Guard-Based Transitions** | StageTransition `guardJson` field allows complex conditional routing (e.g., "only advance if avgScore >= 7"). |
| **Stage Config Polymorphism** | `Stage.configJson` adapts to `stageType`. FILTER stages have filtering config, EVALUATION stages have evaluation config, etc. |
| **Versioned Evaluations** | `Evaluation.version` field allows rollback (though not currently used). |
| **Override Audit Trail** | `OverrideAction` model logs all admin overrides with reason codes. Immutable audit. |
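The guard-based transitions above can be illustrated with a small evaluator. The shape of the guard config (a `logic` flag plus a list of conditions) is an assumption for illustration; the real `guardJson` schema in `stage-engine.ts` may use different field names:

```typescript
// Hypothetical guardJson shape: the real schema may differ.
type Condition = { field: string; op: ">=" | "<=" | "=="; value: number };
type Guard = { logic: "AND" | "OR"; conditions: Condition[] };

// Evaluate a guard like "only advance if avgScore >= 7" against a
// context of computed metrics for the project.
function evaluateGuard(guard: Guard, ctx: Record<string, number>): boolean {
  const results = guard.conditions.map((c) => {
    const actual = ctx[c.field];
    if (actual === undefined) return false; // missing data never passes
    switch (c.op) {
      case ">=": return actual >= c.value;
      case "<=": return actual <= c.value;
      case "==": return actual === c.value;
    }
  });
  return guard.logic === "AND" ? results.every(Boolean) : results.some(Boolean);
}
```

With this shape, the example from the table is `{ logic: "AND", conditions: [{ field: "avgScore", op: ">=", value: 7 }] }`.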
### 6.3 Service Layer Strengths

| Strength | Description |
|----------|-------------|
| **State Machine Isolation** | `stage-engine.ts` is the ONLY service that modifies `ProjectStageState`. All transitions go through it. Single source of truth. |
| **Service Purity** | Services are pure functions that accept Prisma client as parameter. Testable without mocking globals. |
| **Progress Tracking** | Long-running operations (filtering, assignment, tagging) use Job models (`FilteringJob`, `AssignmentJob`, `TaggingJob`) for progress tracking. |
| **AI Batching** | All AI services batch projects (20-50 per call) to reduce API cost and latency. |
### 6.4 UX Strengths

| Strength | Description |
|----------|-------------|
| **Wizard-Driven Setup** | Pipeline wizard (`/admin/rounds/pipeline/[id]/wizard`) guides admins through complex configuration. |
| **Real-Time Live Control** | `/jury/stages/[stageId]/live` provides live voting with cursor sync via `LiveProgressCursor`. |
| **Notification Center** | In-app notification bell with grouping, priorities, expiration. |
| **Grace Period UX** | Admins can grant individual deadline extensions with reason tracking. |
| **Filtering Manual Queue** | Flagged projects go to dedicated review queue (`/admin/rounds/pipeline/[id]/filtering/manual`) for admin decision. |

---

## 7. Weaknesses of Current System
### 7.1 Data Model Issues

| Issue | Description | Impact |
|-------|-------------|--------|
| **Legacy `roundId` Fields** | 50+ models have `roundId String?` (marked "Legacy — kept for historical data"). Adds noise, not enforced. | Confusing for new developers. No FK constraints. |
| **Unclear Pipeline Lifecycle** | Pipeline has `status` enum (`DRAFT`, `ACTIVE`, `ARCHIVED`), but no enforcement. Active pipelines can have draft stages. | Inconsistent state possible. |
| **Overlapping Voting Models** | `LiveVotingSession` (old) and `Cohort` + `LiveProgressCursor` (new) both exist. Unclear which to use. | Duplicate functionality, confusion. |
| **No PSS Validation Constraints** | `ProjectStageState` allows multiple active (non-exited) records for same project/track/stage combo. Should be unique. | Data integrity risk. |
| **Track-Award Linkage Vague** | `SpecialAward.trackId` is optional. Unclear if awards MUST have a track or can exist independently. | Ambiguous design. |
### 7.2 Service Layer Issues

| Issue | Description | Impact |
|-------|-------------|--------|
| **Mixed Abstraction Levels** | `stage-filtering.ts` contains both high-level orchestration AND low-level rule evaluation. Hard to test individually. | Tight coupling. |
| **Notification Side Effects** | Services call `stage-notifications.ts` directly. If notification fails (e.g., email down), error is swallowed. | Lost notifications, no visibility. |
| **AI Service Duplication** | `ai-filtering.ts`, `ai-assignment.ts`, `ai-tagging.ts` all have similar batching/retry logic. Should be abstracted. | Code duplication. |
| **No Explicit Workflow Engine** | Stage transitions are ad-hoc. No central workflow definition. Must read code to understand flow. | Hard to visualize, modify. |
### 7.3 tRPC Router Issues

| Issue | Description | Impact |
|-------|-------------|--------|
| **Router Bloat** | `project.ts` has 25+ procedures. `user.ts` has 20+. Hard to navigate. | Monolithic routers. |
| **Inconsistent Naming** | `stage.ts` has `updateConfig`, while `stageFiltering.ts` has `updateRule`. Naming conventions vary. | Confusing API. |
| **No Batch Procedures** | Most CRUD operations are one-at-a-time. No bulk create/update/delete (except assignments). | N+1 queries in UI. |
| **Missing Pagination** | List procedures (`project.list`, `user.list`) return all records. No cursor or offset pagination. | Performance issue at scale. |
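The missing-pagination gap could be closed with cursor pagination on list procedures. A minimal sketch of the mechanics, shown over an in-memory array (the input shape `{ cursor, limit }` is an assumption; a real fix would wire this into the Prisma query via `cursor`/`take`):

```typescript
// Cursor pagination sketch: return one page plus the cursor for the next.
interface PageInput { cursor?: string; limit: number }
interface Page<T> { items: T[]; nextCursor?: string }

function paginate<T extends { id: string }>(all: T[], input: PageInput): Page<T> {
  // Resume just past the record named by the cursor, or from the start.
  const start = input.cursor ? all.findIndex((r) => r.id === input.cursor) + 1 : 0;
  const items = all.slice(start, start + input.limit);
  const last = items[items.length - 1];
  // Only hand back a cursor when more rows remain after this page.
  const nextCursor = start + input.limit < all.length ? last?.id : undefined;
  return { items, nextCursor };
}
```

Clients then loop, passing each `nextCursor` back in, until it comes back `undefined`.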
### 7.4 UI/UX Issues

| Issue | Description | Impact |
|-------|-------------|--------|
| **No Pipeline Visualization** | Pipeline detail page shows table of stages, not a flowchart. Hard to see transitions. | Poor admin UX. |
| **Filtering Manual Queue Hidden** | Flagged projects not prominently surfaced. Admin must navigate deep into pipeline detail. | Flagged items forgotten. |
| **No Bulk Actions** | Can't bulk-assign projects, bulk-approve evaluations, bulk-transition projects. Must click one-by-one. | Tedious admin work. |
| **Live Voting Lacks Feedback** | Jury votes during live event but doesn't see if vote was counted. No confirmation toast. | Uncertainty. |
| **No Undo** | All admin actions (delete pipeline, archive stage, reject project) are immediate. No soft delete or undo. | Risky operations. |
### 7.5 Missing Features

| Missing Feature | Description | Impact |
|-----------------|-------------|--------|
| **Stage Dependency Graph** | No visual representation of stage transitions and guards. Admin must infer from transitions table. | Hard to debug routing. |
| **Evaluation Calibration** | No juror calibration (e.g., flag jurors who score consistently higher/lower than peers). | Scoring bias undetected. |
| **Award Winner Tie-Breaking** | A `tieBreakerMethod` field exists on `LiveVotingSession` but not on `SpecialAward`. No tie resolution for ranked awards. | Undefined behavior on ties. |
| **Project Search Ranking** | Project search is basic string match. No relevance ranking, fuzzy matching, or faceted filters. | Poor search UX. |
| **Stage Templates** | No template system for common stage configs (e.g., "Standard 3-juror evaluation stage"). | Repetitive setup. |
| **Notification Preferences** | Users can toggle email on/off globally but not per event type. No granular control. | All-or-nothing notifications. |
| **Pipeline Cloning** | No way to duplicate a pipeline for a new year/program. Must recreate manually. | Time-consuming setup. |
| **Evaluation Rubric Library** | Each stage creates evaluation forms from scratch. No reusable rubrics. | Reinventing the wheel. |
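The missing evaluation-calibration feature could be approximated with a simple z-score check over juror score averages. This is a sketch of one possible approach, not an existing service; the 1.5 standard-deviation threshold below is an arbitrary illustration value:

```typescript
// Flag jurors whose mean score deviates strongly from the panel mean.
// scores maps jurorId -> all scores that juror has given.
function flagOutlierJurors(scores: Record<string, number[]>, zThreshold = 1.5): string[] {
  const means = Object.entries(scores).map(([juror, s]) => ({
    juror,
    mean: s.reduce((a, b) => a + b, 0) / s.length,
  }));
  const grand = means.reduce((a, m) => a + m.mean, 0) / means.length;
  // Population standard deviation of the per-juror means.
  const sd = Math.sqrt(means.reduce((a, m) => a + (m.mean - grand) ** 2, 0) / means.length);
  if (sd === 0) return []; // everyone agrees perfectly
  return means
    .filter((m) => Math.abs(m.mean - grand) / sd > zThreshold)
    .map((m) => m.juror);
}
```

A production version would also need a minimum-scores-per-juror floor so a juror with one evaluation is not flagged on noise.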
### 7.6 Code Quality Issues

| Issue | Description | Impact |
|-------|-------------|--------|
| **Inconsistent Error Messages** | Some procedures throw `TRPCError` with clear messages, others just throw generic Error. | Debugging harder. |
| **No Input Sanitization** | Zod validates types but doesn't trim strings, lowercase emails, etc. | Data inconsistency. |
| **Magic Numbers** | Hardcoded constants (e.g., `AI_CONFIDENCE_THRESHOLD_PASS = 0.75`) scattered across services. | Hard to tune. |
| **Limited Test Coverage** | Only `stage-engine.test.ts` exists. No tests for filtering, assignment, AI services. | Regression risk. |
| **No API Versioning** | tRPC routers have no version prefix. Breaking changes would break old clients. | API fragility. |
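The sanitization gap is about normalization, not validation. A sketch of the missing step (in practice this would live in shared Zod `.transform()` helpers so every router gets it for free; the function names here are illustrative):

```typescript
// Normalize user input before it reaches the database, so
// "Ada@Example.COM " and "ada@example.com" cannot become two users.
function normalizeEmail(raw: string): string {
  return raw.trim().toLowerCase();
}

function normalizeName(raw: string): string {
  // Collapse internal whitespace runs and trim the ends.
  return raw.replace(/\s+/g, " ").trim();
}
```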
### 7.7 Performance Issues

| Issue | Description | Impact |
|-------|-------------|--------|
| **N+1 Queries** | Project list page loads projects, then fetches assignments for each in a loop. | Slow page load. |
| **No Caching** | Every tRPC call hits database. No Redis, no in-memory cache. | High DB load. |
| **Unindexed Joins** | Some `ProjectStageState` queries join on `(projectId, trackId)` without composite index. | Slow at scale. |
| **AI Batching Non-Optimal** | AI services batch by count (20 projects) not by token size. Large projects can exceed token limits. | API errors. |
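The last row suggests batching by estimated token size instead of a fixed count. A possible sketch, using the common rough heuristic of about 4 characters per token (an approximation, not an exact tokenizer):

```typescript
// Pack items into batches by an estimated token budget, so one large
// project cannot push a batch over the model's context limit.
function batchByTokens<T>(items: T[], textOf: (item: T) => string, maxTokens: number): T[][] {
  const batches: T[][] = [];
  let current: T[] = [];
  let used = 0;
  for (const item of items) {
    const tokens = Math.ceil(textOf(item).length / 4); // rough estimate
    if (current.length > 0 && used + tokens > maxTokens) {
      batches.push(current); // flush before overflowing the budget
      current = [];
      used = 0;
    }
    current.push(item);
    used += tokens;
  }
  if (current.length > 0) batches.push(current);
  return batches;
}
```

An oversized single item still gets its own batch here; a real implementation would additionally truncate or reject it.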
### 7.8 Documentation Issues

| Issue | Description | Impact |
|-------|-------------|--------|
| **No Architecture Docs** | No high-level system overview. New developers must read code. | Steep onboarding. |
| **Minimal JSDoc** | Most services have file-level comments but not function-level. | Hard to use without reading implementation. |
| **No API Reference** | tRPC procedures not documented in OpenAPI or similar. | Client integration difficult. |
| **No Runbook** | No operational docs for common tasks (e.g., "How to fix a stuck pipeline"). | Manual troubleshooting. |

---

## Summary Statistics

| Category | Count |
|----------|-------|
| **Database Models** | 73 |
| **Enums** | 31 |
| **Service Files** | 20 |
| **tRPC Routers** | 38 |
| **tRPC Procedures** | ~400 |
| **Admin Pages** | 45 |
| **Jury Pages** | 11 |
| **Applicant Pages** | 7 |
| **Auth Pages** | 7 |
| **Total Distinct Routes** | ~70 |

---

## Appendix: Service Function Inventory
### stage-engine.ts

- `evaluateGuardCondition()` - Evaluates a single guard condition
- `evaluateGuard()` - Evaluates guard config with AND/OR logic
- `validateTransition()` - Checks if transition is allowed (PSS exists, transition defined, stage active, window open, guards pass)
- `executeTransition()` - Atomically transitions a project between stages (exits source PSS, creates/updates dest PSS, logs in DecisionAuditLog + AuditLog)
- `executeBatchTransition()` - Batch wrapper around executeTransition (processes 50 at a time)

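The chunking pattern behind `executeBatchTransition()` can be sketched generically. This is an illustration of the "process N at a time, collect failures without aborting the batch" idea, not the service's actual code:

```typescript
// Run an async operation over items in fixed-size chunks, collecting
// successes and failures separately so one failure does not abort the run.
async function processInChunks<T, R>(
  items: T[],
  chunkSize: number,
  fn: (item: T) => Promise<R>,
): Promise<{ ok: R[]; failed: T[] }> {
  const ok: R[] = [];
  const failed: T[] = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    const chunk = items.slice(i, i + chunkSize);
    // allSettled never rejects, so every item gets a definite outcome.
    const results = await Promise.allSettled(chunk.map(fn));
    results.forEach((r, j) => {
      if (r.status === "fulfilled") ok.push(r.value);
      else failed.push(chunk[j]);
    });
  }
  return { ok, failed };
}
```

With `chunkSize = 50` this mirrors the documented batch behavior; each individual transition would still be its own atomic `$transaction`.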
### stage-filtering.ts

- `evaluateFieldCondition()` - Evaluates a single field-based rule condition
- `evaluateFieldRule()` - Evaluates field-based rule with AND/OR logic
- `evaluateDocumentCheck()` - Checks if project has required files
- `bandByConfidence()` - AI confidence thresholding (>= 0.75 = PASSED, <= 0.25 = FILTERED_OUT, else FLAGGED)
- `runStageFiltering()` - Main orchestration: loads projects, rules, runs deterministic then AI, saves FilteringResults, creates FilteringJob
- `resolveManualDecision()` - Admin resolves a FLAGGED result to PASSED or FILTERED_OUT, logs override
- `getManualQueue()` - Returns all FLAGGED results for a stage

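The banding logic is small enough to show in full. This sketch follows the thresholds documented above (the function name matches the inventory, but the exact signature in `stage-filtering.ts` may differ):

```typescript
type Outcome = "PASSED" | "FILTERED_OUT" | "FLAGGED";

// Band an AI confidence score into a filtering outcome. Everything
// between the two thresholds is routed to the manual review queue.
function bandByConfidence(confidence: number): Outcome {
  if (confidence >= 0.75) return "PASSED";
  if (confidence <= 0.25) return "FILTERED_OUT";
  return "FLAGGED";
}
```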
### stage-assignment.ts

- `calculateTagOverlapScore()` - Counts matching tags between juror and project (max 40 points)
- `calculateWorkloadScore()` - Scores juror based on current load vs preferred (max 25 points)
- `previewStageAssignment()` - Dry run: scores all juror-project pairs, returns top N per project
- `executeStageAssignment()` - Creates Assignment records, logs in AssignmentJob
- `getCoverageReport()` - Returns per-project review counts, per-juror assignment counts
- `rebalance()` - Identifies overloaded/underloaded jurors, suggests reassignments

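The weighting idea behind the two scoring functions (tag overlap worth up to 40 points, workload up to 25) can be sketched as below. The exact formulas in `stage-assignment.ts` may differ; this only illustrates the point budgets:

```typescript
// Score how well a juror's expertise tags cover a project's tags,
// scaled to the documented 40-point budget.
function tagOverlapScore(jurorTags: string[], projectTags: string[]): number {
  const overlap = projectTags.filter((t) => jurorTags.includes(t)).length;
  return projectTags.length === 0 ? 0 : Math.round((overlap / projectTags.length) * 40);
}

// Reward under-loaded jurors, scaled to the documented 25-point budget.
function workloadScore(currentLoad: number, preferredLoad: number): number {
  if (currentLoad >= preferredLoad) return 0; // fully loaded juror
  return Math.round(((preferredLoad - currentLoad) / preferredLoad) * 25);
}
```

Summing these (plus the COI and diversity factors listed under `smart-assignment.ts`) gives a per-pair score that `previewStageAssignment()` can rank.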
### stage-notifications.ts

- `emitStageEvent()` - Core event producer: creates DecisionAuditLog, checks NotificationPolicy, creates InAppNotification, sends email (never throws)
- `resolveRecipients()` - Determines who gets notified based on event type (admins, jury, etc.)
- `buildNotificationMessage()` - Builds human-readable message from event details
- `onStageTransitioned()` - Convenience wrapper for stage.transitioned event
- `onFilteringCompleted()` - Convenience wrapper for filtering.completed event
- `onAssignmentGenerated()` - Convenience wrapper for assignment.generated event
- `onCursorUpdated()` - Convenience wrapper for live.cursor_updated event
- `onDecisionOverridden()` - Convenience wrapper for decision.overridden event

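The "never throws" contract of `emitStageEvent()` boils down to a wrapper like the following sketch. The function and parameter names are illustrative; the point is that a failing email or DB write is recorded but never propagated to the pipeline operation that triggered it:

```typescript
// Run a notification side effect; swallow any error but record it,
// so failures stay visible without blocking the core operation.
async function emitSafely(
  send: () => Promise<void>,
  log: (err: unknown) => void,
): Promise<void> {
  try {
    await send();
  } catch (err) {
    log(err);
  }
}
```

Note this is exactly the pattern the weaknesses section (7.2, "Notification Side Effects") criticizes when the `log` sink is a no-op: swallowing without durable recording means lost notifications.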
### live-control.ts

- `generateSessionId()` - Creates unique session ID (timestamp + random)
- `startSession()` - Creates/resets LiveProgressCursor, sets first project active
- `setActiveProject()` - Updates cursor to point to a specific project (validates project is in cohort)
- `jumpToProject()` - Jumps to project by order index
- `reorderQueue()` - Updates CohortProject sortOrder values in batch
- `pauseResume()` - Toggles cursor pause state
- `openCohortWindow()` - Opens voting window for a cohort (sets isOpen=true, windowOpenAt=now)
- `closeCohortWindow()` - Closes voting window for a cohort (sets isOpen=false, windowCloseAt=now)

---

**End of Document**

786
docs/claude-architecture-redesign/02-gap-analysis.md
Normal file
@@ -0,0 +1,786 @@
# Gap Analysis: Current System vs. Target 8-Step Competition Flow

**Document Version:** 1.0
**Date:** 2026-02-15
**Author:** Architecture Review (Claude)

---

## Executive Summary

This gap analysis compares the **current MOPC platform** (pipeline-based, stage-engine architecture) against the **target 8-step competition flow** required for the 2026 Monaco Ocean Protection Challenge.

**Key Findings:**

- **Foundation is Strong**: Pipeline/Track/Stage architecture, stage-engine transitions, AI filtering, jury assignment, and live voting infrastructure are all in place.
- **Critical Gaps**: Multi-jury support (named jury groups with overlap), multi-round submission windows with read-only enforcement, per-juror capacity constraints (hard cap vs soft cap + buffer), category ratio preferences, countdown timers, and mentoring workspace features are **missing or incomplete**.
- **Integration Gaps**: The current system treats each stage independently; the target flow requires **cross-stage coordination** (e.g., Round 1 docs become read-only in Round 2, jury sees cumulative files).

---

## Table of Contents
|
||||
|
||||
1. [Feature-by-Feature Comparison Table](#1-feature-by-feature-comparison-table)
|
||||
2. [Per-Step Deep Analysis](#2-per-step-deep-analysis)
|
||||
3. [Cross-Cutting Gap Analysis](#3-cross-cutting-gap-analysis)
|
||||
4. [Integration Gaps](#4-integration-gaps)
|
||||
5. [Priority Matrix](#5-priority-matrix)
|
||||
|
||||
---
## 1. Feature-by-Feature Comparison Table

| Feature | Required by Flow | Current Status | Gap Level | Notes | File References |
|---------|------------------|----------------|-----------|-------|-----------------|
| **Intake (Submission Round 1)** | | | | | |
| Public submission form | Applicants upload Round 1 docs, deadline enforcement | ✅ Exists | **None** | `applicantRouter.saveSubmission()` handles create/update, deadline checked via `Stage.windowCloseAt` | `src/server/routers/applicant.ts:126` |
| Configurable deadline behavior | Grace periods, late submission flags | ✅ Exists | **None** | `GracePeriod` model, `isLate` flag on `ProjectFile` | `prisma/schema.prisma:703-728`, `ProjectFile.isLate:606` |
| File requirements per stage | Specify required file types, max size, mime types | ✅ Exists | **None** | `FileRequirement` model linked to stages | `prisma/schema.prisma:569-588` |
| Draft support | Save progress without submitting | ✅ Exists | **None** | `isDraft`, `draftDataJson`, `draftExpiresAt` on `Project` | `prisma/schema.prisma:528-530` |
| **AI Filtering** | | | | | |
| Automated eligibility screening | Run deterministic + AI rules, band by confidence | ✅ Exists | **None** | `stage-filtering.ts` with banding logic, `FilteringResult` outcome | `src/server/services/stage-filtering.ts:173-191` |
| Admin override capability | Manually resolve flagged projects | ✅ Exists | **None** | `resolveManualDecision()` updates `finalOutcome`, logs override in `OverrideAction` | `src/server/services/stage-filtering.ts:529-611` |
| Duplicate detection | Flag duplicate submissions (same email) | ✅ Exists | **None** | Built-in duplicate check by `submittedByEmail`, always flags (never auto-rejects) | `src/server/services/stage-filtering.ts:267-289` |
| **Jury 1 (Evaluation Round 1)** | | | | | |
| Semi-finalist selection | Jury evaluates and votes Yes/No | ✅ Exists | **None** | `Evaluation.binaryDecision` field, evaluation submission flow | `src/server/routers/evaluation.ts:130-200` |
| Hard cap per juror | Max N projects per juror (enforced) | ⚠️ **Partial** | **Partial** | `User.maxAssignments` exists but used as global limit, not stage-specific hard cap | `prisma/schema.prisma:249` |
| Soft cap + buffer | Target N, allow up to N+buffer with warning | ❌ **Missing** | **Missing** | No concept of soft cap vs hard cap, no buffer configuration | — |
| Category ratio preferences per juror | Juror wants X% Startup / Y% Concept | ❌ **Missing** | **Missing** | No `User.preferredCategoryRatio` or equivalent | — |
| Explicit Jury 1 group | Named jury entity with members | ❌ **Missing** | **Missing** | All JURY_MEMBER users are a global pool, no stage-scoped jury groups | — |
| **Semi-finalist Submission (Submission Round 2)** | | | | | |
| New doc requirements | Round 2 has different file requirements | ✅ Exists | **None** | Each stage can have its own `FileRequirement` list | `prisma/schema.prisma:569-588` |
| Round 1 docs become read-only | Applicants can't edit/delete Round 1 files | ❌ **Missing** | **Missing** | No `ProjectFile.isReadOnly` or `FileRequirement.allowEdits` field | — |
| Jury sees both rounds | Jury can access Round 1 + Round 2 files | ⚠️ **Partial** | **Partial** | File access checks in `fileRouter.getDownloadUrl()` allow prior stages but complex logic, no explicit "cumulative view" | `src/server/routers/file.ts:66-108` |
| Multi-round submission windows | Distinct open/close dates for Round 1 vs Round 2 | ✅ Exists | **None** | Each stage has `windowOpenAt` / `windowCloseAt` | `prisma/schema.prisma:1888-1889` |
| **Jury 2 (Evaluation Round 2)** | | | | | |
| Finalist selection | Jury evaluates semifinalists, selects finalists | ✅ Exists | **None** | Same evaluation flow, can configure different form per stage | `prisma/schema.prisma:450-472` |
| Special awards alongside | Run award eligibility + voting in parallel | ✅ Exists | **None** | `SpecialAward` system with `AwardEligibility`, `AwardJuror`, `AwardVote` | `prisma/schema.prisma:1363-1481` |
| Explicit Jury 2 group | Named jury entity, possibly overlapping with Jury 1 | ❌ **Missing** | **Missing** | Same global jury pool issue | — |
| Same cap/ratio features | Per-juror hard cap, soft cap, category ratios | ❌ **Missing** | **Missing** | (Same as Jury 1) | — |
| **Mentoring** | | | | | |
| Private mentor-team workspace | Chat, file upload, threaded discussions | ⚠️ **Partial** | **Partial** | `MentorMessage` exists but no threading, no file comments, no promotion mechanism | `prisma/schema.prisma:1577-1590` |
| Mentor file upload | Mentor can upload files to project | ❌ **Missing** | **Missing** | No `ProjectFile.uploadedByMentorId` or mentor file upload router endpoint | — |
| Threaded file comments | Comment on specific files with replies | ❌ **Missing** | **Missing** | No `FileComment` model | — |
| File promotion to official submission | Mentor-uploaded file becomes part of official docs | ❌ **Missing** | **Missing** | No promotion workflow or `ProjectFile.promotedFromMentorFileId` | — |
| **Jury 3 Live Finals** | | | | | |
| Stage manager admin controls | Cursor navigation, pause/resume, queue reorder | ✅ Exists | **None** | `live-control.ts` service with `LiveProgressCursor`, `Cohort` | `src/server/services/live-control.ts:1-619` |
| Jury live voting with notes | Vote during presentation, add notes | ⚠️ **Partial** | **Partial** | `LiveVote` exists but no `notes` field for per-vote commentary | `prisma/schema.prisma:1073-1099` |
| Audience voting | Audience can vote with configurable weight | ✅ Exists | **None** | `AudienceVoter`, `allowAudienceVotes`, `audienceVoteWeight` | `prisma/schema.prisma:1051-1060, 1101-1117` |
| Deliberation period | Time for jury discussion before final vote | ❌ **Missing** | **Missing** | No stage-specific `deliberationDurationMinutes` or deliberation status | — |
| Explicit Jury 3 group | Named jury entity for live finals | ❌ **Missing** | **Missing** | (Same global pool issue) | — |
| **Winner Confirmation** | | | | | |
| Individual jury member confirmation | Each juror digitally signs off on results | ❌ **Missing** | **Missing** | No `JuryConfirmation` model or per-user signature workflow | — |
| Admin override to force majority | Admin can override and pick winner | ⚠️ **Partial** | **Partial** | `SpecialAward.winnerOverridden` exists, `OverrideAction` logs admin actions, but no explicit "force majority" vs "choose winner" distinction | `prisma/schema.prisma:1388-1389, 2024-2040` |
| Results frozen with audit trail | Immutable record of final decision | ⚠️ **Partial** | **Partial** | `DecisionAuditLog` exists, `OverrideAction` tracks changes, but no `ResultsSnapshot` or explicit freeze mechanism | `prisma/schema.prisma:2042-2057` |
| **Cross-Cutting Features** | | | | | |
| Multi-jury support (named entities) | Jury 1, Jury 2, Jury 3 with overlapping members | ❌ **Missing** | **Missing** | No `JuryGroup` or `JuryMembership` model | — |
| Countdown timers on dashboards | Show time remaining until deadline | ❌ **Missing** | **Missing** | Backend has `windowCloseAt` but no tRPC endpoint for countdown state | — |
| Email reminders as deadlines approach | Automated reminders at 72h, 24h, 1h | ⚠️ **Partial** | **Partial** | `processEvaluationReminders()` exists for jury, `ReminderLog` tracks sent reminders, but no applicant deadline reminders | `prisma/schema.prisma:1487-1501` |
| Full audit trail for all decisions | Every action logged, immutable | ✅ Exists | **None** | `DecisionAuditLog`, `OverrideAction`, `AuditLog` comprehensive | `prisma/schema.prisma:754-783, 2024-2057` |

**Legend:**
- ✅ **Exists** = Feature fully implemented
- ⚠️ **Partial** = Feature partially implemented, needs extension
- ❌ **Missing** = Feature does not exist

---
## 2. Per-Step Deep Analysis

### Step 1: Intake (Submission Round 1)

**What the Flow Requires:**
- Applicants submit initial docs (executive summary, pitch deck, video)
- Public submission form with deadline enforcement
- Configurable grace periods for late submissions
- Draft support to save progress without submitting
- File type/size validation per requirement

**What Currently Exists:**
- ✅ **Public submission form**: `applicantRouter.saveSubmission()` creates/updates projects, `isDraft` flag allows partial saves
- ✅ **Deadline enforcement**: `Stage.windowOpenAt` / `windowCloseAt` enforced in `evaluationRouter.submit()` and applicant submission logic
- ✅ **Grace periods**: `GracePeriod` model per stage/user, `extendedUntil` overrides default deadline
- ✅ **File requirements**: `FileRequirement` linked to stages, defines `acceptedMimeTypes`, `maxSizeMB`, `isRequired`
- ✅ **Late submission tracking**: `ProjectFile.isLate` flag set if uploaded after deadline

**What's Missing:**
- (None — intake is fully functional)

**What Needs Modification:**
- (None — intake meets requirements)

**File References:**
- `src/server/routers/applicant.ts:126-200` (saveSubmission)
- `prisma/schema.prisma:569-588` (FileRequirement)
- `prisma/schema.prisma:703-728` (GracePeriod)

---
### Step 2: AI Filtering

**What the Flow Requires:**
- Automated eligibility screening using deterministic rules (field checks, doc checks) + AI rubric
- Confidence banding: high confidence auto-pass, low confidence auto-reject, medium confidence flagged for manual review
- Admin override capability to resolve flagged projects
- Duplicate submission detection (never auto-reject, always flag)

**What Currently Exists:**
- ✅ **Filtering service**: `stage-filtering.ts` runs deterministic rules first, then AI screening if deterministic passes
- ✅ **Confidence banding**: `bandByConfidence()` function with thresholds 0.75 (pass) / 0.25 (reject), middle = flagged
- ✅ **Manual queue**: `getManualQueue()` returns flagged projects, `resolveManualDecision()` sets `finalOutcome`
- ✅ **Duplicate detection**: Built-in check by `submittedByEmail`, groups duplicates, always flags (never auto-rejects)
- ✅ **FilteringResult model**: Stores `outcome` (PASSED/FILTERED_OUT/FLAGGED), `ruleResultsJson`, `aiScreeningJson`, `finalOutcome` after override

**What's Missing:**
- (None — filtering is fully functional)

**What Needs Modification:**
- (None — filtering meets requirements)

**File References:**
- `src/server/services/stage-filtering.ts:1-647` (full filtering pipeline)
- `prisma/schema.prisma:1190-1237` (FilteringRule, FilteringResult)
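
The banding rule just described can be sketched as a small pure function. The 0.75 / 0.25 thresholds come from the notes above; treating the boundaries as inclusive is an assumption, and the real logic lives in `stage-filtering.ts`.

```typescript
// Sketch of the confidence-banding rule (assumed thresholds 0.75 / 0.25;
// boundary inclusivity is an assumption, not documented behavior).
type FilterOutcome = "PASSED" | "FILTERED_OUT" | "FLAGGED";

function bandByConfidence(
  confidence: number,
  passThreshold = 0.75,
  rejectThreshold = 0.25,
): FilterOutcome {
  if (confidence >= passThreshold) return "PASSED"; // high confidence: auto-pass
  if (confidence <= rejectThreshold) return "FILTERED_OUT"; // low confidence: auto-reject
  return "FLAGGED"; // middle band: routed to the manual review queue
}
```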
---
### Step 3: Jury 1 (Evaluation Round 1)

**What the Flow Requires:**
- Semi-finalist selection with hard/soft caps per juror
- Per-juror hard cap (e.g., max 20 projects, enforced)
- Per-juror soft cap + buffer (e.g., target 15, allow up to 18 with warning)
- Per-juror category ratio preferences (e.g., "I want 60% Startup / 40% Concept")
- Explicit Jury 1 group (named entity, distinct from Jury 2/Jury 3)

**What Currently Exists:**
- ✅ **Evaluation flow**: `evaluationRouter.submit()` accepts `binaryDecision` for yes/no semifinalist vote
- ✅ **Assignment system**: `stage-assignment.ts` generates assignments with workload balancing
- ⚠️ **Per-juror max**: `User.maxAssignments` exists but treated as global limit across all stages, not stage-specific hard cap
- ⚠️ **Workload scoring**: `calculateWorkloadScore()` in `stage-assignment.ts` uses `preferredWorkload` but not distinct soft vs hard cap
- ❌ **Soft cap + buffer**: No configuration for soft cap + buffer (e.g., target 15, allow up to 18)
- ❌ **Category ratio preferences**: No `User.preferredCategoryRatioJson` or similar field
- ❌ **Named jury groups**: All `JURY_MEMBER` users are a global pool, no `JuryGroup` model to create Jury 1, Jury 2, Jury 3 as separate entities

**What's Missing:**
1. **Soft cap + buffer**: Need `User.targetAssignments` (soft cap) and `User.maxAssignments` (hard cap), with UI warning when juror is in buffer zone
2. **Category ratio preferences**: Need `User.preferredCategoryRatioJson: { STARTUP: 0.6, BUSINESS_CONCEPT: 0.4 }` and assignment scoring that respects ratios
3. **Named jury groups**: Need `JuryGroup` model with `name`, `stageId`, `members[]`, so assignment can be scoped to "Jury 1" vs "Jury 2"

**What Needs Modification:**
- **Assignment service**: Update `stage-assignment.ts` to:
  - Filter jury pool by `JuryGroup.members` for the stage
  - Check both soft cap (warning) and hard cap (reject) when assigning
  - Score assignments based on `preferredCategoryRatioJson` to balance category distribution per juror
- **Schema**: Add `JuryGroup`, `JuryMembership`; modify `User` to have `targetAssignments` and `preferredCategoryRatioJson`
- **Admin UI**: Jury group management, per-juror cap/ratio configuration

**File References:**
- `src/server/services/stage-assignment.ts:1-777` (assignment algorithm)
- `src/server/routers/evaluation.ts:130-200` (evaluation submission)
- `prisma/schema.prisma:241-357` (User model)

---
### Step 4: Semi-finalist Submission (Submission Round 2)

**What the Flow Requires:**
- New doc requirements (e.g., detailed business plan, updated pitch deck)
- Round 1 docs become **read-only** for applicants (no edit/delete)
- Jury sees **both rounds** (cumulative file view)
- Multi-round submission windows (Round 2 opens after Jury 1 closes)

**What Currently Exists:**
- ✅ **Multi-round file requirements**: Each stage can define its own `FileRequirement` list
- ✅ **Multi-round windows**: `Stage.windowOpenAt` / `windowCloseAt` per stage
- ⚠️ **Jury file access**: `fileRouter.getDownloadUrl()` checks if juror has assignment to project, allows access to files from prior stages in same track (lines 66-108), but logic is implicit and complex
- ❌ **Read-only enforcement**: No `ProjectFile.isReadOnly` or `FileRequirement.allowEdits` field
- ❌ **Cumulative view**: No explicit "show all files from all prior stages" flag on stages

**What's Missing:**
1. **Read-only flag**: Need `ProjectFile.isReadOnlyForApplicant: Boolean` set when stage transitions, or `FileRequirement.allowEdits: Boolean` to control mutability
2. **Cumulative view**: Need `Stage.showPriorStageFiles: Boolean` or `Stage.cumulativeFileView: Boolean` to make jury file access explicit
3. **File versioning**: Current `replacedById` allows versioning but doesn't enforce read-only from prior rounds

**What Needs Modification:**
- **Applicant file upload**: Check `isReadOnlyForApplicant` before allowing delete/replace
- **File router**: Simplify jury file access by checking `Stage.cumulativeFileView` instead of complex prior-stage logic
- **Stage transition**: When project moves from Round 1 to Round 2, mark all Round 1 files as `isReadOnlyForApplicant: true`
- **Schema**: Add `ProjectFile.isReadOnlyForApplicant`, `Stage.cumulativeFileView`

**File References:**
- `src/server/routers/file.ts:12-125` (file download authorization)
- `prisma/schema.prisma:590-624` (ProjectFile)
- `prisma/schema.prisma:1879-1922` (Stage)
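
The stage-transition step above can be sketched as a pure function, assuming the proposed `isReadOnlyForApplicant` flag (which does not exist yet); the record shape is illustrative, not the actual Prisma model.

```typescript
// Illustrative sketch: when a submission round closes, every file uploaded
// in that round is flagged read-only for the applicant. The real hook would
// run inside the stage-transition service against the database.
interface FileRecord {
  id: string;
  stageId: string;
  isReadOnlyForApplicant: boolean; // proposed field, not in the current schema
}

function lockFilesForClosedStage(files: FileRecord[], closedStageId: string): FileRecord[] {
  return files.map((f) =>
    f.stageId === closedStageId ? { ...f, isReadOnlyForApplicant: true } : f,
  );
}
```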
---
### Step 5: Jury 2 (Evaluation Round 2)

**What the Flow Requires:**
- Finalist selection (same evaluation mechanics as Jury 1)
- Special awards eligibility + voting alongside main track
- Explicit Jury 2 group (named entity, may overlap with Jury 1)
- Same per-juror caps and category ratio features as Jury 1

**What Currently Exists:**
- ✅ **Evaluation flow**: Identical to Jury 1, `binaryDecision` for finalist vote
- ✅ **Special awards**: Full system with `SpecialAward`, `AwardEligibility`, `AwardJuror`, `AwardVote`, AI eligibility screening
- ✅ **Award tracks**: `Track.kind: AWARD` allows award-specific stages to run in parallel
- ❌ **Named Jury 2 group**: Same global jury pool issue as Jury 1

**What's Missing:**
- (Same as Jury 1: named jury groups, soft cap + buffer, category ratio preferences)

**What Needs Modification:**
- (Same as Jury 1: jury group scoping, cap/ratio logic in assignment service)

**File References:**
- `src/server/routers/specialAward.ts:1-150` (award management)
- `prisma/schema.prisma:1363-1481` (award models)

---
### Step 6: Mentoring

**What the Flow Requires:**
- Private mentor-team workspace with:
  - Chat/messaging (already exists)
  - Mentor file upload (mentor uploads docs for team to review)
  - Threaded file comments (comment on specific files with replies)
  - File promotion (mentor-uploaded file becomes part of official submission)

**What Currently Exists:**
- ✅ **Mentor assignment**: `MentorAssignment` model, AI-suggested matching, manual assignment
- ✅ **Mentor messages**: `MentorMessage` model for chat messages between mentor and team
- ❌ **Mentor file upload**: No `ProjectFile.uploadedByMentorId` or mentor file upload endpoint
- ❌ **Threaded file comments**: No `FileComment` model with `parentCommentId` for threading
- ❌ **File promotion**: No workflow to promote mentor-uploaded file to official project submission

**What's Missing:**
1. **Mentor file upload**: Need `ProjectFile.uploadedByMentorId: String?`, extend `fileRouter.getUploadUrl()` to allow mentors to upload
2. **File comments**: Need `FileComment` model:

   ```prisma
   model FileComment {
     id              String        @id @default(cuid())
     fileId          String
     file            ProjectFile   @relation(...)
     authorId        String
     author          User          @relation(...)
     content         String        @db.Text
     parentCommentId String?
     parentComment   FileComment?  @relation("CommentReplies", ...)
     replies         FileComment[] @relation("CommentReplies")
     createdAt       DateTime      @default(now())
   }
   ```

3. **File promotion**: Need `ProjectFile.promotedFromMentorFileId: String?` and a promotion workflow (admin/team approves mentor file as official doc)

**What Needs Modification:**
- **File router**: Add `mentorUploadFile` mutation, authorization check for mentor role
- **Mentor router**: Add `addFileComment`, `promoteFileToOfficial` mutations
- **Schema**: Add `FileComment`, modify `ProjectFile` to link mentor uploads and promotions

**File References:**
- `src/server/routers/mentor.ts:1-200` (mentor operations)
- `prisma/schema.prisma:1145-1172` (MentorAssignment)
- `prisma/schema.prisma:1577-1590` (MentorMessage)
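
With the proposed `FileComment` self-relation, threading is a one-pass tree build on the client or in the router. A sketch under those assumptions (the row shape mirrors the schema sketch above; none of this exists yet).

```typescript
// Builds a nested reply tree from flat FileComment rows via parentCommentId.
interface CommentRow {
  id: string;
  parentCommentId: string | null;
  content: string;
}
interface CommentNode extends CommentRow {
  replies: CommentNode[];
}

function buildCommentTree(rows: CommentRow[]): CommentNode[] {
  // Index every row as a node first, so replies can arrive in any order.
  const byId = new Map<string, CommentNode>(
    rows.map((r) => [r.id, { ...r, replies: [] }]),
  );
  const roots: CommentNode[] = [];
  for (const node of byId.values()) {
    const parent = node.parentCommentId ? byId.get(node.parentCommentId) : undefined;
    if (parent) parent.replies.push(node);
    else roots.push(node); // top-level comment (or orphaned reply)
  }
  return roots;
}
```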
---
### Step 7: Jury 3 Live Finals

**What the Flow Requires:**
- Stage manager admin controls (cursor navigation, pause/resume, queue reorder) — **ALREADY EXISTS**
- Jury live voting with notes (vote + add commentary per vote)
- Audience voting — **ALREADY EXISTS**
- Deliberation period (pause for jury discussion before final vote)
- Explicit Jury 3 group (named entity for live finals)

**What Currently Exists:**
- ✅ **Live control service**: `live-control.ts` with `LiveProgressCursor`, session management, cursor navigation, queue reordering
- ✅ **Live voting**: `LiveVote` model, jury/audience voting, criteria-based scoring
- ✅ **Cohort management**: `Cohort` groups projects for voting windows
- ⚠️ **Vote notes**: `LiveVote` has no `notes` or `commentary` field for per-vote notes
- ❌ **Deliberation period**: No `Cohort.deliberationDurationMinutes` or deliberation status
- ❌ **Named Jury 3 group**: Same global jury pool issue

**What's Missing:**
1. **Vote notes**: Add `LiveVote.notes: String?` for jury commentary during voting
2. **Deliberation period**: Add `Cohort.deliberationDurationMinutes: Int?`, `Cohort.deliberationStartedAt: DateTime?`, `Cohort.deliberationEndedAt: DateTime?`
3. **Named Jury 3 group**: (Same as Jury 1/Jury 2)

**What Needs Modification:**
- **LiveVote model**: Add `notes` field
- **Cohort model**: Add deliberation fields
- **Live voting router**: Add `startDeliberation()`, `endDeliberation()` procedures
- **Live control service**: Add deliberation status checks to prevent voting during deliberation

**File References:**
- `src/server/services/live-control.ts:1-619` (live session management)
- `src/server/routers/live-voting.ts:1-150` (live voting procedures)
- `prisma/schema.prisma:1035-1071, 1969-2006` (LiveVotingSession, Cohort)
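
The deliberation status check above can be sketched as a guard over the two proposed timestamps (`deliberationStartedAt` / `deliberationEndedAt` are assumptions from this document, not existing fields).

```typescript
// Sketch of the voting guard: votes are rejected while a deliberation has
// started and not yet ended. The live voting router would call this before
// accepting a LiveVote.
interface CohortDeliberation {
  deliberationStartedAt: Date | null; // proposed field
  deliberationEndedAt: Date | null; // proposed field
}

function isVotingPaused(cohort: CohortDeliberation, now: Date): boolean {
  return (
    cohort.deliberationStartedAt !== null &&
    cohort.deliberationStartedAt <= now &&
    cohort.deliberationEndedAt === null
  );
}
```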
---
### Step 8: Winner Confirmation

**What the Flow Requires:**
- Individual jury member confirmation (each juror digitally signs off on results)
- Admin override to force majority or choose winner
- Results frozen with immutable audit trail

**What Currently Exists:**
- ⚠️ **Admin override**: `SpecialAward.winnerOverridden` flag, `OverrideAction` logs admin actions, but no explicit "force majority" vs "choose winner" distinction
- ⚠️ **Audit trail**: `DecisionAuditLog`, `OverrideAction` comprehensive, but no explicit `ResultsSnapshot` or freeze mechanism
- ❌ **Individual jury confirmation**: No `JuryConfirmation` model for per-user digital signatures

**What's Missing:**
1. **Jury confirmation**: Need `JuryConfirmation` model:

   ```prisma
   model JuryConfirmation {
     id          String   @id @default(cuid())
     stageId     String
     stage       Stage    @relation(...)
     userId      String
     user        User     @relation(...)
     confirmedAt DateTime @default(now())
     signature   String   // Digital signature or consent hash
     ipAddress   String?
     userAgent   String?
   }
   ```

2. **Results freeze**: Need `Stage.resultsFrozenAt: DateTime?` to mark results as immutable
3. **Override modes**: Add `OverrideAction.overrideMode: Enum(FORCE_MAJORITY, CHOOSE_WINNER)` for clarity

**What Needs Modification:**
- **Live voting router**: Add `confirmResults()` procedure for jury members to sign off
- **Admin router**: Add `freezeResults()` procedure, check `resultsFrozenAt` before allowing further changes
- **Override service**: Update `OverrideAction` creation to include `overrideMode`

**File References:**
- `prisma/schema.prisma:1363-1418` (SpecialAward with winner override)
- `prisma/schema.prisma:2024-2040` (OverrideAction)
- `prisma/schema.prisma:2042-2057` (DecisionAuditLog)
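
The `resultsFrozenAt` check described above amounts to a guard every result-mutating procedure runs first. A minimal sketch, assuming the proposed field (names are illustrative, not the actual service API).

```typescript
// Sketch: reject mutations once results have been frozen. freezeResults()
// would set resultsFrozenAt exactly once; every later write calls this guard.
interface StageFreeze {
  resultsFrozenAt: Date | null; // proposed field, not in the current schema
}

function assertResultsMutable(stage: StageFreeze): void {
  if (stage.resultsFrozenAt !== null) {
    throw new Error(
      `Results were frozen at ${stage.resultsFrozenAt.toISOString()} and can no longer be changed`,
    );
  }
}
```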
---
## 3. Cross-Cutting Gap Analysis

### Multi-Jury Support (Named Jury Entities with Overlap)

**Requirement:**
- Create named jury groups (Jury 1, Jury 2, Jury 3) with explicit membership lists
- Allow jurors to be members of multiple groups (e.g., Juror A is in Jury 1 and Jury 3 but not Jury 2)
- Scope assignments, evaluations, and live voting to specific jury groups

**Current State:**
- All users with `role: JURY_MEMBER` are treated as a global pool
- No scoping of jury to specific stages or rounds
- `stage-assignment.ts` queries all active jury members without filtering by group

**Gap:**
- ❌ No `JuryGroup` model
- ❌ No `JuryMembership` model to link users to groups
- ❌ No stage-level configuration to specify which jury group evaluates that stage

**Required Schema Changes:**
```prisma
model JuryGroup {
  id          String   @id @default(cuid())
  programId   String
  program     Program  @relation(...)
  name        String   // "Jury 1", "Jury 2", "Jury 3"
  description String?
  createdAt   DateTime @default(now())

  memberships JuryMembership[]
  stages      Stage[] // One-to-many: stages can specify which jury group evaluates them
}

model JuryMembership {
  id          String    @id @default(cuid())
  juryGroupId String
  juryGroup   JuryGroup @relation(...)
  userId      String
  user        User      @relation(...)
  joinedAt    DateTime  @default(now())

  @@unique([juryGroupId, userId])
}

// Extend Stage model:
model Stage {
  // ... existing fields
  juryGroupId String?
  juryGroup   JuryGroup? @relation(...)
}
```

**Impact:**
- **High** — Affects assignment generation, evaluation authorization, live voting eligibility
- **Requires**: New admin UI for jury group management, updates to all jury-related queries/mutations
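
With this schema, assignment generation would start from the stage's jury group rather than the global `JURY_MEMBER` pool. A sketch of that scoping step over plain IDs instead of Prisma queries (illustrative; falling back to the global pool when no group is set is an assumed migration behavior).

```typescript
// Restrict the juror pool to the members of the stage's jury group.
interface Membership {
  juryGroupId: string;
  userId: string;
}

function juryPoolForStage(
  allJurorIds: string[],
  memberships: Membership[],
  stageJuryGroupId: string | null,
): string[] {
  // Assumed legacy behavior: no jury group configured -> global pool.
  if (stageJuryGroupId === null) return allJurorIds;
  const members = new Set(
    memberships
      .filter((m) => m.juryGroupId === stageJuryGroupId)
      .map((m) => m.userId),
  );
  return allJurorIds.filter((id) => members.has(id));
}
```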
---
### Multi-Round Submission Windows

**Requirement:**
- Distinct submission windows for Round 1 (Intake), Round 2 (Semi-finalist submission)
- Round 1 files become read-only after Round 1 closes
- Jury sees cumulative files from all prior rounds

**Current State:**
- ✅ Each stage has `windowOpenAt` / `windowCloseAt` (multi-round windows exist)
- ⚠️ File access is complex and implicit (checks prior stages in track but no clear flag)
- ❌ No read-only enforcement for applicants after stage transition

**Gap:**
- ❌ No `ProjectFile.isReadOnlyForApplicant` field
- ❌ No `Stage.cumulativeFileView` flag for jury access
- ❌ No automated mechanism to mark files as read-only on stage transition

**Required Schema Changes:**
```prisma
model ProjectFile {
  // ... existing fields
  isReadOnlyForApplicant Boolean @default(false)
}

model Stage {
  // ... existing fields
  cumulativeFileView Boolean @default(false) // If true, jury sees files from all prior stages in track
}
```

**Impact:**
- **Medium** — Affects file upload/delete authorization, jury file listing queries
- **Requires**: Stage transition hook to mark files as read-only, applicant file UI updates, jury file view updates
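
The simplified jury file query that `cumulativeFileView` would enable can be sketched as follows. Using a `sortOrder` field as the track's stage ordering is an assumption; the real ordering field may differ.

```typescript
// Sketch: which stages' files a juror may see for the current stage.
interface StageView {
  id: string;
  sortOrder: number; // assumed ordering of stages within a track
  cumulativeFileView: boolean; // proposed flag
}

function visibleFileStageIds(stages: StageView[], currentStageId: string): string[] {
  const current = stages.find((s) => s.id === currentStageId);
  if (!current) return [];
  if (!current.cumulativeFileView) return [current.id];
  // Cumulative view: the current stage plus every earlier stage in the track.
  return stages
    .filter((s) => s.sortOrder <= current.sortOrder)
    .sort((a, b) => a.sortOrder - b.sortOrder)
    .map((s) => s.id);
}
```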
---
### Per-Juror Hard Cap vs Soft Cap + Buffer

**Requirement:**
- **Hard cap**: Max N projects (e.g., 20), enforced, cannot exceed
- **Soft cap**: Target N projects (e.g., 15), preferred, can exceed with warning
- **Buffer**: Soft cap to hard cap range (e.g., 15-18), shows warning in UI

**Current State:**
- ⚠️ `User.maxAssignments` exists but treated as global hard cap
- ⚠️ `User.preferredWorkload` used in assignment scoring but not enforced as soft cap
- ❌ No buffer concept, no UI warning when juror is over target

**Gap:**
- ❌ No distinction between soft cap and hard cap
- ❌ No buffer configuration or warning mechanism

**Required Schema Changes:**
```prisma
model User {
  // ... existing fields
  targetAssignments Int? // Soft cap (preferred target)
  maxAssignments    Int? // Hard cap (absolute max, enforced)
  // preferredWorkload is deprecated in favor of targetAssignments
}
```

**Assignment Logic Changes:**
- Update `stage-assignment.ts`:
  - Filter candidates to exclude jurors at `maxAssignments`
  - Score jurors higher if below `targetAssignments`, lower if between `targetAssignments` and `maxAssignments` (buffer zone)
- UI shows warning icon for jurors in buffer zone (target < current < max)

**Impact:**
- **Medium** — Affects assignment generation and admin UI for jury workload
- **Requires**: Update assignment service, admin assignment UI to show soft/hard cap status
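
The three zones described above collapse into one small decision function; `targetAssignments` and `maxAssignments` follow the proposed schema change (a sketch, not the actual assignment service).

```typescript
// Sketch of the soft cap / buffer / hard cap decision for one juror.
type CapStatus = "under_target" | "buffer" | "at_hard_cap";

function capStatus(current: number, target: number, max: number): CapStatus {
  if (current >= max) return "at_hard_cap"; // enforced: no further assignments
  if (current >= target) return "buffer"; // allowed, but the UI shows a warning
  return "under_target"; // preferred candidates for new assignments
}
```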
---
### Per-Juror Category Ratio Preferences

**Requirement:**
- Juror specifies preferred category distribution (e.g., "I want 60% Startup / 40% Business Concept")
- Assignment algorithm respects these preferences when assigning projects

**Current State:**
- ❌ No category ratio configuration per juror
- ⚠️ Assignment scoring uses tag overlap and workload but not category distribution

**Gap:**
- ❌ No `User.preferredCategoryRatioJson` field
- ❌ Assignment algorithm doesn't score based on category distribution

**Required Schema Changes:**
```prisma
model User {
  // ... existing fields
  preferredCategoryRatioJson Json? @db.JsonB // { "STARTUP": 0.6, "BUSINESS_CONCEPT": 0.4 }
}
```

**Assignment Logic Changes:**
- Update `stage-assignment.ts`:
  - For each juror, calculate current category distribution of assigned projects
  - Score candidates higher if assigning this project would bring juror's distribution closer to `preferredCategoryRatioJson`
  - Example: Juror wants 60/40 Startup/Concept, currently has 70/30, algorithm prefers assigning Concept projects to rebalance

**Impact:**
- **Medium** — Affects assignment generation quality, requires juror onboarding to set preferences
- **Requires**: Update assignment algorithm, admin UI for juror profile editing, onboarding flow
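
One way to score the rebalancing step above is the change in L1 distance between a juror's current category mix and their preferred ratio; the metric choice is an assumption, not the documented algorithm.

```typescript
// Sketch: positive score = assigning this category moves the juror toward
// their preferred ratio; negative = it moves them further away.
type Ratio = Record<string, number>;

function distance(counts: Ratio, preferred: Ratio): number {
  const total = Object.values(counts).reduce((a, b) => a + b, 0);
  if (total === 0) return 0;
  return Object.keys(preferred).reduce(
    (sum, cat) => sum + Math.abs((counts[cat] ?? 0) / total - (preferred[cat] ?? 0)),
    0,
  );
}

function ratioScore(counts: Ratio, preferred: Ratio, candidateCategory: string): number {
  const next = { ...counts, [candidateCategory]: (counts[candidateCategory] ?? 0) + 1 };
  return distance(counts, preferred) - distance(next, preferred);
}
```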
---
### Countdown Timers on Dashboards

**Requirement:**
- Applicant dashboard shows countdown to submission deadline
- Jury dashboard shows countdown to evaluation deadline
- Admin dashboard shows countdown to stage window close

**Current State:**
- ✅ Backend has `Stage.windowCloseAt` timestamp
- ❌ No tRPC endpoint to fetch countdown state (time remaining, status: open/closing soon/closed)
- ❌ Frontend has no countdown component

**Gap:**
- ❌ No `stageRouter.getCountdown()` or similar procedure
- ❌ No frontend countdown component

**Required Changes:**
- Add a tRPC procedure:

  ```typescript
  // Inside stageRouter:
  getCountdown: protectedProcedure
    .input(z.object({ stageId: z.string() }))
    .query(async ({ ctx, input }) => {
      const stage = await ctx.prisma.stage.findUniqueOrThrow({ where: { id: input.stageId } })
      const now = new Date()
      const closeAt = stage.windowCloseAt
      if (!closeAt) return { status: 'no_deadline', timeRemaining: null }
      const remaining = closeAt.getTime() - now.getTime()
      if (remaining <= 0) return { status: 'closed', timeRemaining: 0 }
      return {
        status: remaining < 3600000 ? 'closing_soon' : 'open', // < 1 hour = closing soon
        timeRemaining: remaining,
        closeAt,
      }
    }),
  ```

- Frontend: Countdown component that polls `getCountdown()` and displays "X days Y hours Z minutes remaining"
|
||||
|
||||
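The display string the component renders can come from a small pure helper fed with `timeRemaining`. A sketch; the name `formatRemaining` is illustrative:

```typescript
/** Formats a millisecond duration as "X days Y hours Z minutes remaining". */
function formatRemaining(ms: number): string {
  const totalMinutes = Math.max(0, Math.floor(ms / 60_000))
  const days = Math.floor(totalMinutes / 1440)
  const hours = Math.floor((totalMinutes % 1440) / 60)
  const minutes = totalMinutes % 60
  return `${days} days ${hours} hours ${minutes} minutes remaining`
}
```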
**Impact:**

- **Low** — UX improvement, no data model changes
- **Requires**: New tRPC procedure, frontend countdown component, dashboard integration

---

### Email Reminders as Deadlines Approach

**Requirement:**

- Automated email reminders at 72 hours, 24 hours, and 1 hour before a deadline
- For applicants (submission deadlines) and jury (evaluation deadlines)

**Current State:**

- ⚠️ `processEvaluationReminders()` exists for jury reminders
- ⚠️ `ReminderLog` tracks sent reminders to prevent duplicates
- ❌ No applicant deadline reminder cron job
- ❌ No configurable reminder intervals (hardcoded to 3 days, 24 hours, and 1 hour in evaluation reminders)

**Gap:**

- ❌ No applicant reminder service
- ❌ No configurable reminder intervals per stage

**Required Changes:**

- Add `Stage.reminderIntervalsJson: Json?` // `[72, 24, 1]` (hours before deadline)
- Add `src/server/services/applicant-reminders.ts`:
  ```typescript
  export async function processApplicantReminders(prisma: PrismaClient) {
    const now = new Date()
    const stages = await prisma.stage.findMany({
      where: { status: 'STAGE_ACTIVE', windowCloseAt: { gte: now } },
    })
    for (const stage of stages) {
      const intervals = (stage.reminderIntervalsJson as number[]) ?? [72, 24, 1]
      for (const hoursBeforeDeadline of intervals) {
        const reminderTime = new Date(stage.windowCloseAt!.getTime() - hoursBeforeDeadline * 3600000)
        if (now >= reminderTime && now < new Date(reminderTime.getTime() + 3600000)) {
          // Send reminders to all applicants with draft projects in this stage
          // Check ReminderLog to avoid duplicates
        }
      }
    }
  }
  ```
- Add a cron job in `src/app/api/cron/applicant-reminders/route.ts`

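The in-window test inside the loop is easy to get wrong off by one, so it can be factored into a testable helper. A minimal sketch; `isInReminderWindow` is an illustrative name, not part of the existing service:

```typescript
const HOUR_MS = 3_600_000

/**
 * True when `now` falls in the one-hour send window that starts
 * `hoursBeforeDeadline` hours before `closeAt`.
 */
function isInReminderWindow(now: Date, closeAt: Date, hoursBeforeDeadline: number): boolean {
  const reminderTime = closeAt.getTime() - hoursBeforeDeadline * HOUR_MS
  return now.getTime() >= reminderTime && now.getTime() < reminderTime + HOUR_MS
}
```

The half-open window means an hourly cron fires each reminder exactly once (deduplicated further by `ReminderLog`).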
**Impact:**

- **Medium** — Improves applicant engagement, reduces late submissions
- **Requires**: New service, new cron endpoint, extend the `ReminderLog` model if needed

---

### Admin Override Capability at Every Step

**Requirement:**

- Admin can override any automated decision (filtering, assignment, voting results)
- Each override is logged with a reason code and reason text in `OverrideAction`

**Current State:**

- ✅ Filtering: `resolveManualDecision()` overrides flagged projects
- ✅ Assignment: Manual assignment creation bypasses AI
- ⚠️ Live voting: The `SpecialAward.winnerOverridden` flag exists, but there is no explicit override flow for live voting results
- ⚠️ Stage transitions: No override capability to force projects between stages

**Gap:**

- ❌ No admin UI to override stage transitions (force a project to the next stage even if a guard fails)
- ❌ No admin override for live voting results (admin can pick a winner, but it is not recorded as an override)

**Required Changes:**

- Add a `stageRouter.overrideTransition()` procedure:
  ```typescript
  overrideTransition: adminProcedure
    .input(z.object({
      projectId: z.string(),
      fromStageId: z.string(),
      toStageId: z.string(),
      reasonCode: z.nativeEnum(OverrideReasonCode),
      reasonText: z.string(),
    }))
    .mutation(async ({ ctx, input }) => {
      // Force executeTransition() without validation
      // Log in OverrideAction
    })
  ```
- Add a `liveVotingRouter.overrideWinner()` procedure (similar flow)

**Impact:**

- **Low** — Fills gaps in admin control; most of this already exists
- **Requires**: New admin procedures, UI buttons for override actions

---

## 4. Integration Gaps

### Cross-Stage File Visibility

**Issue:**

- Current file access is stage-scoped. A jury member assigned to Round 2 can technically access Round 1 files (via complex `fileRouter.getDownloadUrl()` logic that checks prior stages), but this is implicit and fragile.
- There is no explicit flag to distinguish "Round 2 jury sees Round 1 + Round 2 files" from "Round 2 jury sees only Round 2 files".

**Required:**

- Add `Stage.cumulativeFileView: Boolean` — if true, jury sees files from all prior stages in the track.
- Simplify the `fileRouter.getDownloadUrl()` authorization logic to check this flag instead of manually traversing prior stages.

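With the flag in place, the authorization check reduces to a pure predicate. A sketch under the assumption that stage order is available as a sort index; `canJurorSeeFile` and its inputs are illustrative, not the existing `fileRouter` API:

```typescript
/**
 * Decides whether a juror evaluating `evaluatedStageOrder` may download a file
 * uploaded in `fileStageOrder`, given the stage's cumulativeFileView flag.
 */
function canJurorSeeFile(
  fileStageOrder: number,      // sortOrder of the stage the file was uploaded in
  evaluatedStageOrder: number, // sortOrder of the stage the juror is evaluating
  cumulativeFileView: boolean,
): boolean {
  if (fileStageOrder === evaluatedStageOrder) return true // own stage is always visible
  return cumulativeFileView && fileStageOrder < evaluatedStageOrder // prior stages only if cumulative
}
```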
**Impact:**

- **Medium** — Simplifies file access logic and makes jury file view behavior explicit.

---

### Round 1 to Round 2 Transition (File Read-Only Enforcement)

**Issue:**

- When a project transitions from Round 1 (Intake) to Round 2 (Semi-finalist submission), Round 1 files should become read-only for applicants.
- Currently, no mechanism enforces this. Applicants could theoretically delete or replace Round 1 files during Round 2.

**Required:**

- Stage transition hook in `stage-engine.ts` `executeTransition()`:
  ```typescript
  // After creating destination PSS:
  if (fromStage.stageType === 'INTAKE' && toStage.stageType === 'INTAKE') {
    // Mark all project files uploaded in fromStage as read-only for applicant
    await tx.projectFile.updateMany({
      where: { projectId, roundId: fromStageRoundId },
      data: { isReadOnlyForApplicant: true },
    })
  }
  ```
- Applicant file upload/delete checks: reject if `ProjectFile.isReadOnlyForApplicant` is true.

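The upload/delete check in the last bullet can be a small guard shared by both mutations. A sketch; `assertFileMutable` and `ReadOnlyFileError` are illustrative helpers, not existing code:

```typescript
class ReadOnlyFileError extends Error {}

/** Throws when an applicant tries to mutate a file frozen by a round transition. */
function assertFileMutable(file: { isReadOnlyForApplicant: boolean; name: string }): void {
  if (file.isReadOnlyForApplicant) {
    throw new ReadOnlyFileError(
      `"${file.name}" was submitted in a previous round and can no longer be changed`,
    )
  }
}
```

Both the upload-replace and delete mutations would call this before touching storage, so the rule lives in one place.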
**Impact:**

- **High** — Ensures data integrity; prevents applicants from tampering with prior-round submissions.

---

### Jury Group Scoping Across All Jury-Related Operations

**Issue:**

- Assignments, evaluations, and live voting all currently draw from the global jury pool.
- Once `JuryGroup` is introduced, every jury-related query and mutation must filter by `Stage.juryGroupId`.

**Affected Areas:**

1. **Assignment generation**: `previewStageAssignment()` in `stage-assignment.ts` must switch from `prisma.user.findMany({ where: { role: 'JURY_MEMBER', ... } })` to `prisma.juryMembership.findMany({ where: { juryGroupId: stage.juryGroupId } })`.
2. **Evaluation authorization**: `evaluationRouter.submit()` must verify that `assignment.userId` is a member of `stage.juryGroupId`.
3. **Live voting authorization**: `liveVotingRouter.submitVote()` must verify that the juror is in `stage.juryGroupId`.
4. **Admin assignment UI**: The juror selection dropdown must filter by jury group.

**Impact:**

- **High** — Pervasive change across all jury-related features.
- **Requires**: Careful migration plan, extensive testing.

---

### Countdown Timer Backend Support

**Issue:**

- Dashboards need a real-time countdown to deadlines, but no backend service provides this.
- The frontend would otherwise need to poll `Stage.windowCloseAt` directly and calculate client-side, or use a tRPC subscription.

**Required:**

- Add the `stageRouter.getCountdown()` procedure (described in the Cross-Cutting section).
- Frontend uses `trpc.stage.getCountdown.useQuery()` with `refetchInterval: 60000` (1-minute polling).
- Optionally: a WebSocket subscription for real-time updates (out of scope for now; polling is sufficient).

**Impact:**

- **Low** — The backend is simple; frontend polling handles real-time updates.

---

## 5. Priority Matrix

Features ranked by **Business Impact** (High/Medium/Low) × **Implementation Effort** (High/Medium/Low).

| Feature | Business Impact | Implementation Effort | Priority Quadrant | Notes |
|---------|----------------|----------------------|-------------------|-------|
| **Multi-jury support (named groups)** | **High** | **High** | **Critical** | Required for all 3 jury rounds; affects assignments/evaluations/voting |
| **Round 1 docs read-only enforcement** | **High** | **Low** | **Quick Win** | Data integrity essential; simple flag + hook |
| **Per-juror hard cap vs soft cap + buffer** | **High** | **Medium** | **Critical** | Ensures balanced workload, prevents burnout |
| **Per-juror category ratio preferences** | **Medium** | **Medium** | **Important** | Improves assignment quality, enhances juror satisfaction |
| **Jury vote notes (live finals)** | **Medium** | **Low** | **Quick Win** | Enhances deliberation; simple schema change |
| **Deliberation period (live finals)** | **Medium** | **Low** | **Quick Win** | Required for live finals flow; simple cohort fields |
| **Individual jury confirmation** | **High** | **Medium** | **Critical** | Legal/compliance requirement for final results |
| **Results freeze mechanism** | **High** | **Low** | **Quick Win** | Immutable audit trail; simple timestamp flag |
| **Cumulative file view flag** | **Medium** | **Low** | **Quick Win** | Simplifies jury file access logic |
| **Mentor file upload** | **Medium** | **Medium** | **Important** | Enhances mentoring; requires file router extension |
| **Threaded file comments** | **Low** | **Medium** | **Nice to Have** | Improves collaboration, but not blocking |
| **File promotion workflow** | **Low** | **Medium** | **Nice to Have** | Advanced feature; can defer to a later phase |
| **Countdown timers (UI)** | **Low** | **Low** | **Nice to Have** | UX improvement; no data model changes |
| **Applicant deadline reminders** | **Medium** | **Low** | **Quick Win** | Reduces late submissions; simple cron job |
| **Admin override for stage transitions** | **Low** | **Low** | **Nice to Have** | Edge case; manual workaround exists |

**Priority Quadrants:**

- **Critical (High Impact / High Effort)**: Multi-jury support, jury confirmation — **must do**, requires careful planning
- **Quick Wins (High Impact / Low Effort)**: Read-only enforcement, results freeze, deliberation period — **do first**
- **Important (Medium Impact / Medium Effort)**: Caps/ratios, mentor file upload — **do after quick wins**
- **Nice to Have (Low Impact / Any Effort)**: File comments threading, countdown timers — **defer or phase 2**

---

## Conclusion

The current MOPC platform has a **solid foundation**: the pipeline/track/stage architecture, stage-engine transitions, AI filtering, jury assignment, and live voting infrastructure are fully implemented. The **critical gaps** are:

1. **Multi-jury support** (named jury entities with overlap) — **highest priority**; affects all jury-related features
2. **Per-juror caps and category ratio preferences** — **essential for workload balancing**
3. **Round 1 read-only enforcement + cumulative file view** — **data integrity and jury UX**
4. **Individual jury confirmation + results freeze** — **compliance and audit requirements**
5. **Mentoring workspace features** (file upload, comments, promotion) — **enhance mentoring but are lower priority**

**Recommended Approach:**

- **Phase 1 (Quick Wins)**: Read-only enforcement, results freeze, deliberation period, vote notes, applicant reminders — **2-3 weeks**
- **Phase 2 (Critical)**: Multi-jury support, jury confirmation — **4-6 weeks** (complex, pervasive changes)
- **Phase 3 (Important)**: Caps/ratios, mentor file upload — **3-4 weeks**
- **Phase 4 (Nice to Have)**: Threaded comments, file promotion, countdown timers — **defer to post-MVP**

Total estimated effort for Phases 1-3: **9-13 weeks** (assumes a single developer, includes testing).

---

**End of Gap Analysis Document**
docs/claude-architecture-redesign/03-data-model.md — 1139 lines (new file; diff suppressed because it is too large)
docs/claude-architecture-redesign/04-round-intake.md — 1539 lines (new file; diff suppressed because it is too large)
docs/claude-architecture-redesign/05-round-filtering.md — 1438 lines (new file; diff suppressed because it is too large)
docs/claude-architecture-redesign/06-round-evaluation.md — 698 lines (new file)

# Round: Evaluation (Jury 1 & Jury 2)

## 1. Purpose & Position in Flow

The EVALUATION round is the core judging mechanism of the competition. It appears **twice** in the standard flow:

| Instance | Name | Position | Jury | Purpose | Output |
|----------|------|----------|------|---------|--------|
| Round 3 | "Jury 1 — Semi-finalist Selection" | After FILTERING | Jury 1 | Score projects, select semi-finalists | Semi-finalists per category |
| Round 5 | "Jury 2 — Finalist Selection" | After SUBMISSION Round 2 | Jury 2 | Score semi-finalists, select finalists + awards | Finalists per category |

Both instances use the same `RoundType.EVALUATION` but are configured independently, with:

- Different jury groups (Jury 1 vs Jury 2)
- Different evaluation forms/rubrics
- Different visible submission windows (Jury 1 sees Window 1 only; Jury 2 sees Windows 1+2)
- Different advancement counts

---

## 2. Data Model

### Round Record

```
Round {
  id: "round-jury-1"
  competitionId: "comp-2026"
  name: "Jury 1 — Semi-finalist Selection"
  roundType: EVALUATION
  status: ROUND_DRAFT → ROUND_ACTIVE → ROUND_CLOSED
  sortOrder: 2
  windowOpenAt: "2026-04-01"    // Evaluation window start
  windowCloseAt: "2026-04-30"   // Evaluation window end
  juryGroupId: "jury-group-1"   // Links to Jury 1
  submissionWindowId: null      // EVALUATION rounds don't collect submissions
  configJson: { ...EvaluationConfig }
}
```

### EvaluationConfig

```typescript
type EvaluationConfig = {
  // --- Assignment Settings ---
  requiredReviewsPerProject: number // How many jurors review each project (default: 3)

  // --- Scoring Mode ---
  scoringMode: "criteria" | "global" | "binary"
  // criteria: Score per criterion + weighted total
  // global:   Single 1-10 score
  // binary:   Yes/No decision (semi-finalist worthy?)
  requireFeedback: boolean // Must provide text feedback (default: true)

  // --- COI ---
  coiRequired: boolean // Must declare COI before evaluating (default: true)

  // --- Peer Review ---
  peerReviewEnabled: boolean // Jurors can see anonymized peer evaluations after submission
  anonymizationLevel: "fully_anonymous" | "show_initials" | "named"

  // --- AI Features ---
  aiSummaryEnabled: boolean    // Generate AI-powered evaluation summaries
  aiAssignmentEnabled: boolean // Allow AI-suggested jury-project matching

  // --- Advancement ---
  advancementMode: "auto_top_n" | "admin_selection" | "ai_recommended"
  advancementConfig: {
    perCategory: boolean // Separate counts per STARTUP / BUSINESS_CONCEPT
    startupCount: number // How many startups advance (default: 10 for Jury 1, 3 for Jury 2)
    conceptCount: number // How many concepts advance
    tieBreaker: "admin_decides" | "highest_individual" | "revote"
  }
}
```

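For `advancementMode: "auto_top_n"`, the shortlist computation is a sort-and-slice per category. A minimal sketch; `selectTopN` and its row shape are illustrative, not part of the codebase (ties at the cutoff would go to the configured `tieBreaker`):

```typescript
type RankedProject = { id: string; category: "STARTUP" | "BUSINESS_CONCEPT"; avgScore: number }

/** Picks the top-N project ids per category by average score (auto_top_n mode). */
function selectTopN(
  projects: RankedProject[],
  counts: { startupCount: number; conceptCount: number },
): string[] {
  const pick = (category: RankedProject["category"], n: number) =>
    projects
      .filter((p) => p.category === category)
      .sort((a, b) => b.avgScore - a.avgScore)
      .slice(0, n)
      .map((p) => p.id)
  return [...pick("STARTUP", counts.startupCount), ...pick("BUSINESS_CONCEPT", counts.conceptCount)]
}
```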
### Related Models

| Model | Role |
|-------|------|
| `JuryGroup` | Named jury entity linked to this round |
| `JuryGroupMember` | Members of the jury, with per-juror overrides |
| `Assignment` | Juror-project pairing for this round, linked to JuryGroup |
| `Evaluation` | Score/feedback submitted by a juror for one project |
| `EvaluationForm` | Rubric/criteria definition for this round |
| `ConflictOfInterest` | COI declaration per assignment |
| `GracePeriod` | Per-juror deadline extension |
| `EvaluationSummary` | AI-generated insights per project per round |
| `EvaluationDiscussion` | Peer review discussion threads |
| `RoundSubmissionVisibility` | Which submission windows' docs the jury can see |
| `AdvancementRule` | How projects advance after evaluation |
| `ProjectRoundState` | Per-project state in this round |

---

## 3. Setup Phase (Before Window Opens)

### 3.1 Admin Creates the Evaluation Round

The admin uses the competition wizard or round management UI to:

1. **Create the Round** with type EVALUATION
2. **Link a JuryGroup** — select "Jury 1" (or create a new jury group)
3. **Set the evaluation window** — start and end dates
4. **Configure the evaluation form** — scoring criteria, weights, scales
5. **Set visibility** — which submission windows the jury can see (via RoundSubmissionVisibility)
6. **Configure advancement rules** — how many projects advance per category

### 3.2 Jury Group Configuration

The linked JuryGroup has:

```
JuryGroup {
  name: "Jury 1"
  defaultMaxAssignments: 20 // Default cap per juror
  defaultCapMode: SOFT      // HARD | SOFT | NONE
  softCapBuffer: 2          // Can exceed by 2 for load balancing
  categoryQuotasEnabled: true
  defaultCategoryQuotas: {
    "STARTUP": { "min": 3, "max": 15 },
    "BUSINESS_CONCEPT": { "min": 3, "max": 15 }
  }
  allowJurorCapAdjustment: true   // Jurors can adjust their cap during onboarding
  allowJurorRatioAdjustment: true // Jurors can adjust their category preference
}
```

### 3.3 Per-Juror Overrides

Each `JuryGroupMember` can override the group defaults:

```
JuryGroupMember {
  juryGroupId: "jury-group-1"
  userId: "judge-alice"
  maxAssignmentsOverride: 25 // Alice wants more projects
  capModeOverride: HARD      // Alice: hard cap, no exceptions
  categoryQuotasOverride: {
    "STARTUP": { "min": 5, "max": 20 }, // Alice prefers startups
    "BUSINESS_CONCEPT": { "min": 0, "max": 5 }
  }
  preferredStartupRatio: 0.8 // 80% startups
}
```

### 3.4 Juror Onboarding (Optional)

If `allowJurorCapAdjustment` or `allowJurorRatioAdjustment` is true:

1. When a juror first opens their jury dashboard after being added to the group,
2. a one-time onboarding dialog appears:
   - "Your default maximum is 20 projects. Would you like to adjust?" (slider)
   - "Your default startup/concept ratio is 50/50. Would you like to adjust?" (slider)
3. The juror saves their preferences → stored in `JuryGroupMember.maxAssignmentsOverride` and `preferredStartupRatio`
4. The dialog doesn't appear again (tracked via `JuryGroupMember.updatedAt` or a flag)

---

## 4. Assignment System (Enhanced)

### 4.1 Assignment Algorithm — Jury-Group-Aware

The current `stage-assignment.ts` algorithm is enhanced to:

1. **Filter the jury pool by JuryGroup** — only members of the linked jury group are considered
2. **Apply hard/soft cap logic** per juror
3. **Apply category quotas** per juror
4. **Score candidates** using the existing expertise matching + workload balancing + geo-diversity

#### Effective Limits Resolution

```typescript
function getEffectiveLimits(member: JuryGroupMember, group: JuryGroup): EffectiveLimits {
  return {
    maxAssignments: member.maxAssignmentsOverride ?? group.defaultMaxAssignments,
    capMode: member.capModeOverride ?? group.defaultCapMode,
    softCapBuffer: group.softCapBuffer, // Group-level only (not per-juror)
    categoryQuotas: member.categoryQuotasOverride ?? group.defaultCategoryQuotas,
    categoryQuotasEnabled: group.categoryQuotasEnabled,
    preferredStartupRatio: member.preferredStartupRatio,
  }
}
```

#### Cap Enforcement Logic

```typescript
function canAssignMore(
  jurorId: string,
  projectCategory: CompetitionCategory,
  currentLoad: LoadTracker,
  limits: EffectiveLimits
): { allowed: boolean; penalty: number; reason?: string } {
  const total = currentLoad.total(jurorId)
  const catLoad = currentLoad.byCategory(jurorId, projectCategory)

  // 1. HARD cap check
  if (limits.capMode === "HARD" && total >= limits.maxAssignments) {
    return { allowed: false, penalty: 0, reason: "Hard cap reached" }
  }

  // 2. SOFT cap check (can exceed by buffer)
  let overflowPenalty = 0
  if (limits.capMode === "SOFT") {
    if (total >= limits.maxAssignments + limits.softCapBuffer) {
      return { allowed: false, penalty: 0, reason: "Soft cap + buffer exceeded" }
    }
    if (total >= limits.maxAssignments) {
      // In buffer zone — apply increasing penalty
      overflowPenalty = (total - limits.maxAssignments + 1) * 15
    }
  }

  // 3. Category quota check
  if (limits.categoryQuotasEnabled && limits.categoryQuotas) {
    const quota = limits.categoryQuotas[projectCategory]
    if (quota) {
      if (catLoad >= quota.max) {
        return { allowed: false, penalty: 0, reason: `Category ${projectCategory} max reached (${quota.max})` }
      }
      // Bonus for under-min
      if (catLoad < quota.min) {
        overflowPenalty -= 15 // Negative penalty = bonus
      }
    }
  }

  // 4. Ratio preference alignment
  if (limits.preferredStartupRatio != null && total > 0) {
    const currentStartupRatio = currentLoad.byCategory(jurorId, "STARTUP") / total
    const isStartup = projectCategory === "STARTUP"
    const wantMore = isStartup
      ? currentStartupRatio < limits.preferredStartupRatio
      : currentStartupRatio > limits.preferredStartupRatio
    if (wantMore) overflowPenalty -= 10 // Bonus for aligning with preference
    else overflowPenalty += 10          // Penalty for diverging
  }

  return { allowed: true, penalty: overflowPenalty }
}
```

### 4.2 Assignment Flow

```
1. Admin opens Assignment panel for Round 3 (Jury 1)
2. System loads:
   - Projects with ProjectRoundState PENDING/IN_PROGRESS in this round
   - JuryGroup members (with effective limits)
   - Existing assignments (to avoid duplicates)
   - COI records (to skip conflicted pairs)
3. Admin clicks "Generate Suggestions"
4. Algorithm runs:
   a. For each project (sorted by fewest current assignments):
      - Score each eligible juror (tag matching + workload + geo + cap/quota penalties)
      - Select top N jurors (N = requiredReviewsPerProject - existing reviews)
      - Track load in jurorLoadMap
   b. Report unassigned projects (jurors at capacity)
5. Admin reviews preview:
   - Assignment matrix (juror × project grid)
   - Load distribution chart
   - Unassigned projects list
   - Category distribution per juror
6. Admin can:
   - Accept all suggestions
   - Modify individual assignments (drag-drop or manual add/remove)
   - Re-run with different parameters
7. Admin clicks "Apply Assignments"
8. System creates Assignment records with juryGroupId set
9. Notifications sent to jurors
```

### 4.3 AI-Powered Assignment (Optional)

If `aiAssignmentEnabled` is true in the config:

1. Admin clicks "AI Assignment Suggestions"
2. System calls `ai-assignment.ts`:
   - Anonymizes juror profiles and project descriptions
   - Sends them to GPT with matching instructions
   - Returns confidence scores and reasoning
3. AI suggestions are shown alongside the algorithm's suggestions
4. Admin picks which to use, or mixes both

### 4.4 Handling Unassigned Projects

When all jurors with a SOFT cap reach cap + buffer:

1. Remaining projects become "unassigned"
2. Admin dashboard highlights these prominently
3. Admin can:
   - Manually assign them to specific jurors (bypasses the cap — a manual override)
   - Increase a juror's cap
   - Add more jurors to the jury group
   - Reduce `requiredReviewsPerProject` for the remaining projects

---

## 5. Jury Evaluation Experience

### 5.1 Jury Dashboard

When a Jury 1 member opens their dashboard:

```
┌─────────────────────────────────────────────────────┐
│ JURY 1 — Semi-finalist Selection                    │
│ ─────────────────────────────────────────────────── │
│ Evaluation Window: April 1 – April 30               │
│ ⏱ 12 days remaining                                 │
│                                                     │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌────────┐   │
│ │    15    │ │    8     │ │    2     │ │   5    │   │
│ │  Total   │ │ Complete │ │ In Draft │ │ Pending│   │
│ └──────────┘ └──────────┘ └──────────┘ └────────┘   │
│                                                     │
│ [Continue Next Evaluation →]                        │
│                                                     │
│ Recent Assignments                                  │
│ ┌──────────────────────────────────────────────┐    │
│ │ OceanClean AI    │ Startup │ ✅ Done   │ View │    │
│ │ Blue Carbon Hub  │ Concept │ ⏳ Draft  │ Cont │    │
│ │ SeaWatch Monitor │ Startup │ ⬜ Pending│ Start│    │
│ │ ...                                          │    │
│ └──────────────────────────────────────────────┘    │
└─────────────────────────────────────────────────────┘
```

Key elements:

- **Deadline countdown** — prominent timer showing days/hours remaining
- **Progress stats** — total, completed, in-draft, pending
- **Quick action CTA** — jump to the next unevaluated project
- **Assignment list** — sorted by status (pending first, then drafts, then done)

### 5.2 COI Declaration (Blocking)

Before evaluating any project, the juror MUST declare COI:

```
┌───────────────────────────────────────────┐
│ Conflict of Interest Declaration          │
│                                           │
│ Do you have a conflict of interest with   │
│ "OceanClean AI" (Startup)?                │
│                                           │
│ ○ No conflict — I can evaluate fairly     │
│ ○ Yes, I have a conflict:                 │
│     Type: [Financial ▾]                   │
│     Description: [________________]       │
│                                           │
│ [Submit Declaration]                      │
└───────────────────────────────────────────┘
```

- If **No conflict**: Proceed to the evaluation form
- If **Yes**: Assignment flagged, admin notified, juror may be reassigned
- The COI declaration is logged in the `ConflictOfInterest` model
- Admin can review and take action (cleared / reassigned / noted)

### 5.3 Evaluation Form

The form adapts to the `scoringMode`:

#### Criteria Mode (default for Jury 1 and Jury 2)

```
┌───────────────────────────────────────────────────┐
│ Evaluating: OceanClean AI (Startup)               │
│ ──────────────────────────────────────────────    │
│                                                   │
│ [📄 Documents] [📊 Scoring] [💬 Feedback]          │
│                                                   │
│ ── DOCUMENTS TAB ──                               │
│ ┌─ Round 1 Application Docs ─────────────────┐    │
│ │ 📄 Executive Summary.pdf   [Download]      │    │
│ │ 📄 Business Plan.pdf       [Download]      │    │
│ └────────────────────────────────────────────┘    │
│                                                   │
│ (Jury 2 also sees:)                               │
│ ┌─ Round 2 Semi-finalist Docs ───────────────┐    │
│ │ 📄 Updated Business Plan.pdf [Download]    │    │
│ │ 🎥 Video Pitch.mp4           [Play]        │    │
│ └────────────────────────────────────────────┘    │
│                                                   │
│ ── SCORING TAB ──                                 │
│ Innovation & Impact  [1] [2] [3] [4] [5]  (w:30%) │
│ Feasibility          [1] [2] [3] [4] [5]  (w:25%) │
│ Team & Execution     [1] [2] [3] [4] [5]  (w:25%) │
│ Ocean Relevance      [1] [2] [3] [4] [5]  (w:20%) │
│                                                   │
│ Overall Score: 3.8 / 5.0 (auto-calculated)        │
│                                                   │
│ ── FEEDBACK TAB ──                                │
│ Feedback: [________________________________]      │
│                                                   │
│ [💾 Save Draft]  [✅ Submit Evaluation]            │
│ (Auto-saves every 30s)                            │
└───────────────────────────────────────────────────┘
```

#### Binary Mode (optional for quick screening)

```
Should this project advance to the semi-finals?
[✅ Yes]  [❌ No]

Justification (required): [________________]
```

#### Global Score Mode

```
Overall Score: [1] [2] [3] [4] [5] [6] [7] [8] [9] [10]

Feedback (required): [________________]
```

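The auto-calculated overall score in criteria mode is a weighted average of the per-criterion scores. A minimal sketch, assuming weights are fractions that sum to 1; `weightedOverall` is an illustrative name:

```typescript
/** Weighted average of criterion scores, rounded to one decimal. */
function weightedOverall(scores: { score: number; weight: number }[]): number {
  const total = scores.reduce((sum, s) => sum + s.score * s.weight, 0)
  return Math.round(total * 10) / 10 // one decimal, e.g. 3.8
}
```

With Innovation 4 (30%), Feasibility 4 (25%), Team 4 (25%), and Ocean Relevance 3 (20%), this yields the 3.8 shown in the mockup.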
### 5.4 Document Visibility (Cross-Round)

Controlled by `RoundSubmissionVisibility`:

| Round | Sees Window 1 ("Application Docs") | Sees Window 2 ("Semi-finalist Docs") |
|-------|------------------------------------|--------------------------------------|
| Jury 1 (Round 3) | Yes | No (doesn't exist yet) |
| Jury 2 (Round 5) | Yes | Yes |
| Jury 3 (Round 7) | Yes | Yes |

In the evaluation UI:

- Documents are grouped by submission window
- Each group has a label (from `RoundSubmissionVisibility.displayLabel`)
- Clear visual separation (tabs, accordion sections, or side panels)

### 5.5 Auto-Save and Submission

- **Auto-save**: The client debounces and calls `evaluation.autosave` every 30 seconds while a draft is open
- **Draft status**: An evaluation starts as NOT_STARTED → DRAFT on first save → SUBMITTED on explicit submit
- **Submission validation**:
  - All required criteria scored (if criteria mode)
  - Global score provided (if global mode)
  - Binary decision selected (if binary mode)
  - Feedback text provided (if `requireFeedback`)
  - Window is open (or the juror has a grace period)
- **After submission**: The evaluation becomes read-only for the juror (status = SUBMITTED)
- **Admin can lock**: Set status to LOCKED to prevent any further changes

### 5.6 Grace Periods

```
GracePeriod {
  roundId: "round-jury-1"
  userId: "judge-alice"
  projectId: null             // Applies to ALL of Alice's assignments in this round
  extendedUntil: "2026-05-02" // 2 days after the official close
  reason: "Travel conflict"
  grantedById: "admin-1"
}
```

- Admin can grant per-juror or per-juror-per-project grace periods
- Evaluation submission checks the grace period before rejecting past-window submissions
- The dashboard shows a "(Grace period: 2 extra days)" badge for affected jurors

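The submission check in the second bullet reduces to comparing `now` against the later of the window close and any grace extension. A sketch; `canSubmitEvaluation` is an illustrative name, not the actual validation code:

```typescript
/** True if a juror may still submit: window open, or a grace period extends their deadline. */
function canSubmitEvaluation(now: Date, windowCloseAt: Date, extendedUntil: Date | null): boolean {
  const deadline =
    extendedUntil && extendedUntil.getTime() > windowCloseAt.getTime() ? extendedUntil : windowCloseAt
  return now.getTime() <= deadline.getTime()
}
```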
---

## 6. End of Evaluation — Results & Advancement
|
||||
|
||||
### 6.1 Results Visualization
|
||||
|
||||
When the evaluation window closes, the admin sees:
|
||||
|
||||
```
|
||||
┌──────────────────────────────────────────────────────────────┐
|
||||
│ Jury 1 Results │
|
||||
│ ─────────────────────────────────────────────────────────── │
|
||||
│ │
|
||||
│ Completion: 142/150 evaluations submitted (94.7%) │
|
||||
│ Outstanding: 8 (3 jurors have pending evaluations) │
|
||||
│ │
|
||||
│ ┌─ STARTUPS (Top 10) ──────────────────────────────────────┐│
|
||||
│ │ # Project Avg Score Consensus Reviews Status ││
|
||||
│ │ 1 OceanClean AI 4.6/5 0.92 3/3 ✅ ││
|
||||
│ │ 2 SeaWatch 4.3/5 0.85 3/3 ✅ ││
|
||||
│ │ 3 BlueCarbon 4.1/5 0.78 3/3 ✅ ││
|
||||
│ │ ... ││
|
||||
│ │ 10 TidalEnergy 3.2/5 0.65 3/3 ✅ ││
|
||||
│ │ ── cutoff line ────────────────────────────────────────── ││
|
||||
│ │ 11 WavePower 3.1/5 0.71 3/3 ⬜ ││
|
||||
│ │ 12 CoralGuard 2.9/5 0.55 2/3 ⚠️ ││
|
||||
│ └──────────────────────────────────────────────────────────┘│
|
||||
│ │
|
||||
│ ┌─ CONCEPTS (Top 10) ──────────────────────────────────────┐│
|
||||
│ │ (same layout) ││
|
||||
│ └──────────────────────────────────────────────────────────┘│
|
||||
│ │
|
||||
│ [🤖 AI Recommendation] [📊 Score Distribution] [Export] │
|
||||
│ │
|
||||
│ [✅ Approve Shortlist] [✏️ Edit Shortlist] │
|
||||
└──────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
**Metrics shown:**

- Average global score (or weighted criteria average)
- Consensus score (1 - normalized stddev, where 1.0 = full agreement)
- Review count / required
- Per-criterion averages (expandable)

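The consensus metric above can be computed like this; normalizing by half the scale range (the largest stddev achievable when half the scores sit at each extreme) is an assumption of this sketch:

```typescript
// 1 - normalized standard deviation: identical scores -> 1.0,
// maximal disagreement across the scale -> 0.0.
function consensusScore(scores: number[], scaleMin = 1, scaleMax = 5): number {
  if (scores.length < 2) return 1 // a single review trivially "agrees" with itself
  const mean = scores.reduce((a, b) => a + b, 0) / scores.length
  const variance = scores.reduce((a, s) => a + (s - mean) ** 2, 0) / scores.length
  const stddev = Math.sqrt(variance)
  const maxStddev = (scaleMax - scaleMin) / 2 // worst case: half at min, half at max
  return 1 - stddev / maxStddev
}
```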
### 6.2 AI Recommendation

When admin clicks "AI Recommendation":

1. System calls `ai-evaluation-summary.ts` for each project in bulk
2. AI generates:
   - Ranked shortlist per category based on scores + feedback analysis
   - Strengths, weaknesses, themes per project
   - Recommendation: "Advance" / "Borderline" / "Do not advance"
3. Admin sees AI recommendation alongside actual scores
4. AI recommendations are suggestions only — admin has final say

### 6.3 Advancement Decision

```
Advancement Mode: admin_selection (with AI recommendation)

1. System shows ranked list per category
2. AI highlights recommended top N per category
3. Admin can:
   - Accept AI recommendation
   - Drag projects to reorder
   - Add/remove projects from advancement list
   - Set custom cutoff line
4. Admin clicks "Confirm Advancement"
5. System:
   a. Sets ProjectRoundState to PASSED for advancing projects
   b. Sets ProjectRoundState to REJECTED for non-advancing projects
   c. Updates Project.status to SEMIFINALIST (Jury 1) or FINALIST (Jury 2)
   d. Logs all decisions in DecisionAuditLog
   e. Sends notifications to all teams (advanced / not selected)
```

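Step 5 amounts to partitioning projects by the confirmed list. A pure sketch of the planned updates (the real procedure would persist these, plus audit log entries and notifications, in a single transaction; the names here are illustrative):

```typescript
type AdvancementUpdate = {
  projectId: string
  roundState: "PASSED" | "REJECTED"
  newStatus: "SEMIFINALIST" | "FINALIST" | null // null = Project.status left unchanged
}

// Every project in the round gets an explicit decision; advancing projects
// also get the status bump for this round (SEMIFINALIST after Jury 1,
// FINALIST after Jury 2).
function planAdvancement(
  allProjectIds: string[],
  advancingIds: Set<string>,
  targetStatus: "SEMIFINALIST" | "FINALIST",
): AdvancementUpdate[] {
  return allProjectIds.map(projectId =>
    advancingIds.has(projectId)
      ? { projectId, roundState: "PASSED", newStatus: targetStatus }
      : { projectId, roundState: "REJECTED", newStatus: null },
  )
}
```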
### 6.4 Advancement Modes

| Mode | Behavior |
|------|----------|
| `auto_top_n` | Top N per category automatically advance when window closes |
| `admin_selection` | Admin manually selects who advances (with AI/score guidance) |
| `ai_recommended` | AI proposes list, admin must approve/modify |

---

## 7. Special Awards Integration (Jury 2 Only)

During the Jury 2 evaluation round, special awards can run alongside the main evaluation:

### 7.1 How It Works

```
Round 5: "Jury 2 — Finalist Selection"
├── Main evaluation (all semi-finalists scored by Jury 2)
└── Special Awards (run in parallel):
    ├── "Innovation Award" — STAY_IN_MAIN mode
    │     Projects remain in main eval, flagged as eligible
    │     Award jury (subset of Jury 2 or separate) votes
    └── "Impact Award" — SEPARATE_POOL mode
          AI filters eligible projects into award pool
          Dedicated jury evaluates and votes
```

### 7.2 SpecialAward.evaluationRoundId

Each award links to the evaluation round it runs alongside:

```
SpecialAward {
  evaluationRoundId: "round-jury-2"      // Runs during Jury 2
  eligibilityMode: STAY_IN_MAIN
  juryGroupId: "jury-group-innovation"   // Can be same or different jury
}
```

### 7.3 Award Evaluation Flow

1. Before the Jury 2 window opens: Admin runs award eligibility (AI or manual)
2. During the Jury 2 window: Award jury members see their award assignments alongside regular evaluations
3. Award jury submits award votes (PICK_WINNER, RANKED, or SCORED)
4. After Jury 2 closes: Award results are finalized alongside main results

---

## 8. Differences Between Jury 1 and Jury 2

| Aspect | Jury 1 (Round 3) | Jury 2 (Round 5) |
|--------|-------------------|-------------------|
| Input projects | All eligible (post-filtering) | Semi-finalists only |
| Visible docs | Window 1 only | Window 1 + Window 2 |
| Output | Semi-finalists | Finalists |
| Project.status update | → SEMIFINALIST | → FINALIST |
| Special awards | No | Yes (alongside) |
| Jury group | Jury 1 | Jury 2 (different members, possible overlap) |
| Typical project count | 50-100+ | 10-20 |
| Required reviews | 3 (more projects, less depth) | 3-5 (fewer projects, more depth) |

---

## 9. API Changes

### Preserved Procedures (renamed stageId → roundId)

| Procedure | Change |
|-----------|--------|
| `evaluation.get` | roundId via assignment |
| `evaluation.start` | No change |
| `evaluation.autosave` | No change |
| `evaluation.submit` | Window check uses round.windowCloseAt + grace periods |
| `evaluation.declareCOI` | No change |
| `evaluation.getCOIStatus` | No change |
| `evaluation.getProjectStats` | No change |
| `evaluation.listByRound` | Renamed from listByStage |
| `evaluation.generateSummary` | roundId instead of stageId |
| `evaluation.generateBulkSummaries` | roundId instead of stageId |

### New Procedures

| Procedure | Purpose |
|-----------|---------|
| `assignment.previewWithJuryGroup` | Preview assignments filtered by jury group with cap/quota logic |
| `assignment.getJuryGroupStats` | Per-member stats: load, category distribution, cap utilization |
| `evaluation.getResultsOverview` | Rankings, scores, consensus, AI recommendations per category |
| `evaluation.confirmAdvancement` | Admin confirms which projects advance |
| `evaluation.getAdvancementPreview` | Preview advancement impact before confirming |

### Modified Procedures

| Procedure | Modification |
|-----------|-------------|
| `assignment.getSuggestions` | Now filters by JuryGroup, applies hard/soft caps, category quotas |
| `assignment.create` | Now sets `juryGroupId` on Assignment |
| `assignment.bulkCreate` | Now validates against jury group caps |
| `file.listByProjectForRound` | Uses RoundSubmissionVisibility to filter docs |

---

## 10. Service Layer Changes

### `stage-assignment.ts` → `round-assignment.ts`

Key changes to `previewStageAssignment` → `previewRoundAssignment`:

1. **Load jury pool from JuryGroup** instead of all JURY_MEMBER users:

```typescript
const juryGroup = await prisma.juryGroup.findUnique({
  where: { id: round.juryGroupId },
  include: { members: { include: { user: true } } },
})
const jurors = juryGroup.members.map(m => ({
  ...m.user,
  effectiveLimits: getEffectiveLimits(m, juryGroup),
}))
```

2. **Replace simple max check** with cap mode logic (hard/soft/none)
3. **Add category quota tracking** per juror
4. **Add ratio preference scoring** in candidate ranking
5. **Report overflow** — projects that couldn't be assigned because all jurors hit caps

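Point 2 can be sketched as a small check; the shape of the limits object is an assumption here, and "soft" overflow would feed the ratio scoring and overflow reporting in points 4-5:

```typescript
type CapMode = "hard" | "soft" | "none"
type Limits = { capMode: CapMode; maxAssignments: number }

// "hard" blocks assignment at the cap, "soft" allows it but marks the
// juror as overflowing (so they rank last among candidates), "none"
// ignores caps entirely.
function assignability(currentAssignments: number, limits: Limits): "ok" | "overflow" | "blocked" {
  if (limits.capMode === "none" || currentAssignments < limits.maxAssignments) return "ok"
  return limits.capMode === "hard" ? "blocked" : "overflow"
}
```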
### `stage-engine.ts` → `round-engine.ts`

Simplified:

- Remove trackId from all transitions
- `executeTransition` now takes `fromRoundId` + `toRoundId` (or auto-advances to the next sortOrder)
- `validateTransition` simplified — no StageTransition lookup; it just checks that the next round exists and is active
- Guard evaluation simplified — AdvancementRule.configJson replaces arbitrary guardJson

---

## 11. Edge Cases

### More projects than jurors can handle

- Algorithm assigns up to the hard/soft cap for all jurors
- Remaining projects are flagged as "unassigned" in the admin dashboard
- Admin must add jurors, increase caps, or manually assign

### Juror doesn't complete by deadline

- Dashboard shows overdue assignments prominently
- Admin can extend via GracePeriod, reassign to another juror, or mark as incomplete

### Tie in scores at cutoff

- Depending on the `tieBreaker` config:
  - `admin_decides`: Admin manually picks from tied projects
  - `highest_individual`: Project with the highest single-evaluator score wins
  - `revote`: Tied projects are sent back for quick re-evaluation
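As an example, the `highest_individual` rule can be sketched as follows (the input shape is hypothetical; projects are assumed already tied on average score):

```typescript
// Among tied projects, the one whose best single-evaluator score is
// highest wins the cutoff slot.
function breakTie(tied: { projectId: string; scores: number[] }[]): string {
  return tied.reduce((best, p) =>
    Math.max(...p.scores) > Math.max(...best.scores) ? p : best,
  ).projectId
}
```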

### Category imbalance

- If one category has far more projects, quotas ensure jurors still get a mix
- If quotas can't be satisfied (not enough projects in one category), the system relaxes the quota for that category

### Juror in multiple jury groups

- Juror Alice is in Jury 1 and Jury 2
- Her assignments for each round are independent
- Her caps are per jury group (20 for Jury 1, 15 for Jury 2)
- No cross-round cap — each round manages its own workload

2053  docs/claude-architecture-redesign/07-round-submission.md  Normal file
File diff suppressed because it is too large

499  docs/claude-architecture-redesign/08-round-mentoring.md  Normal file
@@ -0,0 +1,499 @@

# Round: Mentoring (Finalist Collaboration Layer)

## 1. Purpose & Position in Flow

The MENTORING round is **not a judging stage** — it is a collaboration layer that activates between Jury 2 finalist selection and the Live Finals. It provides finalist teams who requested mentoring with a private workspace to refine their submissions with guidance from an assigned mentor.

| Aspect | Detail |
|--------|--------|
| Position | Round 6 (after Jury 2, before Live Finals) |
| Participants | Finalist teams + assigned mentors |
| Duration | Configurable (typically 2-4 weeks) |
| Output | Better-prepared finalist submissions; some mentoring files promoted to official submissions |

### Who Gets Mentoring

- Only projects that have `Project.wantsMentorship = true` AND have advanced to finalist status (ProjectRoundState PASSED in the Jury 2 round)
- Admin can override: assign mentoring to projects that didn't request it, or skip projects that did

---

## 2. Data Model

### Round Record

```
Round {
  id: "round-mentoring"
  competitionId: "comp-2026"
  name: "Finalist Mentoring"
  roundType: MENTORING
  status: ROUND_DRAFT → ROUND_ACTIVE → ROUND_CLOSED
  sortOrder: 5
  windowOpenAt: "2026-06-01"     // Mentoring period start
  windowCloseAt: "2026-06-30"    // Mentoring period end
  juryGroupId: null              // No jury for mentoring
  submissionWindowId: null       // Mentoring doesn't collect formal submissions
  configJson: { ...MentoringConfig }
}
```

### MentoringConfig

```typescript
type MentoringConfig = {
  // Who gets mentoring
  eligibility: "all_advancing" | "requested_only"
  // all_advancing: Every finalist gets a mentor
  // requested_only: Only projects with wantsMentorship=true

  // Workspace features
  chatEnabled: boolean           // Bidirectional messaging (default: true)
  fileUploadEnabled: boolean     // Mentor + team can upload files (default: true)
  fileCommentsEnabled: boolean   // Threaded comments on files (default: true)
  filePromotionEnabled: boolean  // Promote workspace file to official submission (default: true)
  mentorCanPromote: boolean      // Mentor may promote files, see 4.3 (default: false)

  // Promotion target
  promotionTargetWindowId: string | null
  // Which SubmissionWindow promoted files go to
  // Usually the most recent window (Round 2 docs)
  // If null, promotion creates files without a window (admin must assign)

  // Auto-assignment
  autoAssignMentors: boolean     // Use AI/algorithm to assign (default: false)
  maxProjectsPerMentor: number   // Mentor workload cap (default: 3)

  // Notifications
  notifyTeamsOnOpen: boolean     // Email teams when mentoring opens (default: true)
  notifyMentorsOnAssign: boolean // Email mentors when assigned (default: true)
  reminderBeforeClose: number[]  // Days before close to remind (default: [7, 3, 1])
}
```

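The eligibility setting selects projects as sketched below; the field names follow the doc, but the Project shape is simplified for illustration:

```typescript
type MentoringProject = {
  id: string
  wantsMentorship: boolean
  jury2State: "PASSED" | "REJECTED" // ProjectRoundState in the Jury 2 round
}

// Mentoring is only offered to finalists; "requested_only" further
// restricts to teams that opted in.
function eligibleForMentoring(
  projects: MentoringProject[],
  eligibility: "all_advancing" | "requested_only",
): MentoringProject[] {
  const finalists = projects.filter(p => p.jury2State === "PASSED")
  return eligibility === "all_advancing"
    ? finalists
    : finalists.filter(p => p.wantsMentorship)
}
```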
### Related Models

| Model | Purpose |
|-------|---------|
| `MentorAssignment` | Links mentor to project (existing, enhanced) |
| `MentorMessage` | Chat messages between mentor and team (existing) |
| `MentorNote` | Mentor's private notes (existing) |
| `MentorFile` | **NEW** — Files uploaded in workspace |
| `MentorFileComment` | **NEW** — Threaded comments on files |
| `ProjectFile` | Target for file promotion |
| `SubmissionFileRequirement` | Requirement slot that promoted file fills |

---

## 3. Mentor Assignment

### 3.1 Assignment Methods

| Method | Description |
|--------|-------------|
| `MANUAL` | Admin picks mentor for each project |
| `AI_SUGGESTED` | AI recommends matches, admin approves |
| `AI_AUTO` | AI auto-assigns, admin can override |
| `ALGORITHM` | Round-robin or expertise-matching algorithm |

### 3.2 Assignment Criteria

The existing `mentor-matching.ts` service evaluates:

- **Expertise overlap** — mentor's tags vs project's tags/category
- **Country/region diversity** — avoid same-country bias
- **Workload balance** — distribute evenly across mentors
- **Language** — match if language preferences exist

### 3.3 Assignment Flow

```
1. MENTORING round opens (status → ROUND_ACTIVE)
2. System identifies eligible projects:
   - All finalists (if eligibility = "all_advancing")
   - Only finalists with wantsMentorship (if "requested_only")
3. For each eligible project without a mentor:
   a. If autoAssignMentors: Run AI/algorithm assignment
   b. Else: Flag as "needs mentor" in admin dashboard
4. Admin reviews assignments, can:
   - Accept suggestions
   - Reassign mentors
   - Skip projects (no mentoring needed)
5. Assigned mentors receive email notification
6. Workspace becomes active for mentor+team
```

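Step 3a, reduced to pure workload balancing, can be sketched as below: each unmentored project goes to the least-loaded mentor under `maxProjectsPerMentor`, and leftovers are flagged "needs mentor" for the admin. Expertise, diversity, and language scoring from 3.2 are deliberately omitted, and the function name is illustrative:

```typescript
function assignMentors(
  projectIds: string[],
  mentorIds: string[],
  maxProjectsPerMentor: number,
): { assigned: Map<string, string>; needsMentor: string[] } {
  const load = new Map(mentorIds.map(m => [m, 0]))
  const assigned = new Map<string, string>()
  const needsMentor: string[] = []
  for (const p of projectIds) {
    // Pick the mentor with the lowest current load that is still under the cap.
    const candidate = mentorIds
      .filter(m => (load.get(m) ?? 0) < maxProjectsPerMentor)
      .sort((a, b) => (load.get(a) ?? 0) - (load.get(b) ?? 0))[0]
    if (!candidate) {
      needsMentor.push(p) // every mentor is at capacity
      continue
    }
    assigned.set(p, candidate)
    load.set(candidate, (load.get(candidate) ?? 0) + 1)
  }
  return { assigned, needsMentor }
}
```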
### 3.4 Workspace Activation

When a mentor is assigned and the MENTORING round is ROUND_ACTIVE:

```typescript
// MentorAssignment is updated:
{
  workspaceEnabled: true,
  workspaceOpenAt: round.windowOpenAt,
  workspaceCloseAt: round.windowCloseAt,
}
```

The workspace is accessible from:

- **Mentor dashboard** → "My Projects" → select project → Workspace tab
- **Applicant dashboard** → "Mentor" section → Workspace tab
- **Admin** → can view any workspace at any time

---

## 4. Workspace Features

### 4.1 Messaging (Chat)

Bidirectional chat between mentor and team members:

```
┌────────────────────────────────────────────────┐
│ Mentor Workspace — OceanClean AI               │
│ ────────────────────────────────────────────── │
│ [💬 Chat]  [📁 Files]  [📋 Milestones]         │
│                                                │
│ ┌────────────────────────────────────────┐     │
│ │ Dr. Martin (Mentor)       Apr 5, 10:30 │     │
│ │ Welcome! I've reviewed your business   │     │
│ │ plan. Let's work on the financial      │     │
│ │ projections section.                   │     │
│ │                                        │     │
│ │ Sarah (Team Lead)         Apr 5, 14:15 │     │
│ │ Thank you! We've uploaded a revised    │     │
│ │ version. See the Files tab.            │     │
│ │                                        │     │
│ │ Dr. Martin (Mentor)       Apr 6, 09:00 │     │
│ │ Great improvement! I've left comments  │     │
│ │ on the file. One more round should do. │     │
│ └────────────────────────────────────────┘     │
│                                                │
│ [Type a message...                 ] [Send]    │
└────────────────────────────────────────────────┘
```

**Implementation:**

- Uses existing `MentorMessage` model
- Messages auto-marked as read when the chat is viewed
- Real-time updates via polling (every 10s) or WebSocket if available
- Both mentor and any team member can send messages

### 4.2 File Upload & Comments

The core new feature: a private file space with threaded discussion.

```
┌──────────────────────────────────────────────────┐
│ [💬 Chat]  [📁 Files]  [📋 Milestones]           │
│                                                  │
│ ┌── Workspace Files ───────────────────────────┐ │
│ │                                              │ │
│ │ 📄 Business Plan v2.pdf                      │ │
│ │    Uploaded by Sarah (Team) · Apr 5          │ │
│ │    💬 3 comments                             │ │
│ │    [Download] [Comment] [Promote →]          │ │
│ │                                              │ │
│ │ 📄 Financial Model.xlsx                      │ │
│ │    Uploaded by Dr. Martin (Mentor) · Apr 6   │ │
│ │    💬 1 comment                              │ │
│ │    [Download] [Comment]                      │ │
│ │                                              │ │
│ │ 📄 Pitch Deck Draft.pptx                     │ │
│ │    Uploaded by Sarah (Team) · Apr 8          │ │
│ │    ✅ Promoted → "Presentation" slot         │ │
│ │    [Download] [View Comments]                │ │
│ │                                              │ │
│ └──────────────────────────────────────────────┘ │
│                                                  │
│ [📤 Upload File]                                 │
└──────────────────────────────────────────────────┘
```

**File Upload Flow:**

1. User (mentor or team member) clicks "Upload File"
2. Client calls `mentor.getWorkspaceUploadUrl(mentorAssignmentId, fileName, mimeType)`
3. Server generates MinIO pre-signed PUT URL
4. Client uploads directly to MinIO
5. Client calls `mentor.saveWorkspaceFile(mentorAssignmentId, fileName, mimeType, size, bucket, objectKey, description)`
6. Server creates `MentorFile` record

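The client side of that flow can be sketched with the API client and the raw HTTP PUT injected, which keeps the sequence testable; the two procedure names follow the doc, everything else (types, parameter bundling) is illustrative:

```typescript
type WorkspaceApi = {
  getWorkspaceUploadUrl(assignmentId: string, fileName: string, mimeType: string):
    Promise<{ uploadUrl: string; objectKey: string }>
  saveWorkspaceFile(assignmentId: string, meta: {
    fileName: string; mimeType: string; size: number; objectKey: string
  }): Promise<{ id: string }>
}

async function uploadWorkspaceFile(
  api: WorkspaceApi,
  put: (url: string, body: Uint8Array) => Promise<void>,
  assignmentId: string,
  fileName: string,
  mimeType: string,
  body: Uint8Array,
): Promise<{ id: string }> {
  // Steps 2-3: ask the server for a pre-signed PUT URL
  const { uploadUrl, objectKey } = await api.getWorkspaceUploadUrl(assignmentId, fileName, mimeType)
  // Step 4: upload the bytes directly to object storage
  await put(uploadUrl, body)
  // Steps 5-6: record the metadata so the server creates the MentorFile row
  return api.saveWorkspaceFile(assignmentId, { fileName, mimeType, size: body.length, objectKey })
}
```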
**File Comments:**

```
┌── Comments on: Business Plan v2.pdf ──────────┐
│                                               │
│ Dr. Martin (Mentor) · Apr 5, 16:00            │
│ Section 3.2 needs stronger market analysis.   │
│ Consider adding competitor comparisons.       │
│ └─ Sarah (Team) · Apr 5, 18:30                │
│    Good point — we'll add a competitive       │
│    landscape section. See updated version.    │
│                                               │
│ Dr. Martin (Mentor) · Apr 6, 10:00            │
│ Revenue projections look much better now.     │
│ Ready for promotion to official submission?   │
│ └─ Sarah (Team) · Apr 6, 11:00                │
│    Yes, let's promote it!                     │
│                                               │
│ [Add comment...                      ] [Post] │
└───────────────────────────────────────────────┘
```

**Implementation:**

- `MentorFileComment` with `parentCommentId` for threading
- Both mentor and team members can comment
- Admin can view all comments
- Comments are timestamped and attributed

### 4.3 File Promotion to Official Submission

The key feature: converting a private mentoring file into an official submission document.

**Promotion Flow:**

```
1. Team member (or admin) clicks "Promote →" on a workspace file
2. Dialog appears:

   ┌────────────────────────────────────────┐
   │ Promote File to Official Submission    │
   │                                        │
   │ File: Business Plan v2.pdf             │
   │                                        │
   │ Target submission window:              │
   │   [Round 2 Docs ▾]                     │
   │                                        │
   │ Replaces requirement:                  │
   │   [Business Plan ▾]                    │
   │                                        │
   │ ⚠ This will replace the current        │
   │ "Business Plan" file for this project. │
   │                                        │
   │ [Cancel]         [Promote & Replace]   │
   └────────────────────────────────────────┘

3. On confirmation:
   a. System creates a new ProjectFile record:
      - projectId: project's ID
      - submissionWindowId: selected window
      - requirementId: selected requirement slot
      - fileName, mimeType, size: copied from MentorFile
      - bucket, objectKey: SAME as MentorFile (no file duplication)
      - version: incremented from previous file in slot
   b. Previous file in that slot gets `replacedById` set to new file
   c. MentorFile updated:
      - isPromoted: true
      - promotedToFileId: new ProjectFile ID
      - promotedAt: now
      - promotedByUserId: actor ID
   d. Audit log entry created:
      - action: "MENTOR_FILE_PROMOTED"
      - details: { mentorFileId, projectFileId, submissionWindowId, requirementId, replacedFileId }
```

**Key Rules:**

- Only files in **active** mentoring workspaces can be promoted
- Promotion **replaces** the existing file for that requirement slot (per user's decision)
- The MinIO object is **not duplicated** — both MentorFile and ProjectFile point to the same objectKey
- Once promoted, the MentorFile shows a "Promoted" badge and the promote button is disabled
- Admin can un-promote (revert) if needed, which deletes the ProjectFile and resets MentorFile flags
- Promotion is audited with full provenance chain

**Who Can Promote:**

- Team lead (Project.submittedByUserId or TeamMember.role = LEAD)
- Admin (always)
- Mentor (only if `MentoringConfig.mentorCanPromote` is true — default false for safety)

### 4.4 Privacy Model

```
Visibility Matrix:
┌──────────────────┬────────┬──────────┬───────┬──────┐
│ Content          │ Mentor │ Team     │ Admin │ Jury │
├──────────────────┼────────┼──────────┼───────┼──────┤
│ Chat messages    │ ✅     │ ✅       │ ✅    │ ❌   │
│ Workspace files  │ ✅     │ ✅       │ ✅    │ ❌   │
│ File comments    │ ✅     │ ✅       │ ✅    │ ❌   │
│ Mentor notes     │ ✅     │ ❌       │ ✅*   │ ❌   │
│ Promoted files   │ ✅     │ ✅       │ ✅    │ ✅** │
└──────────────────┴────────┴──────────┴───────┴──────┘

*  Only if MentorNote.isVisibleToAdmin = true
** Promoted files become official submissions visible to jury
```

---

## 5. Mentor Dashboard

```
┌──────────────────────────────────────────────────────────┐
│ Mentor Dashboard                                         │
│ ───────────────────────────────────────────────────────  │
│                                                          │
│ Mentoring Period: June 1 – June 30                       │
│ ⏱ 18 days remaining                                      │
│                                                          │
│ ┌─────────┐  ┌─────────┐  ┌──────────┐                   │
│ │    3    │  │   12    │  │    5     │                   │
│ │  Teams  │  │ Messages│  │  Files   │                   │
│ └─────────┘  └─────────┘  └──────────┘                   │
│                                                          │
│ My Assigned Teams                                        │
│ ┌────────────────────────────────────────────────────┐   │
│ │ OceanClean AI (Startup)                            │   │
│ │ 💬 2 unread messages · 📁 3 files · Last: Apr 6    │   │
│ │ [Open Workspace]                                   │   │
│ ├────────────────────────────────────────────────────┤   │
│ │ Blue Carbon Hub (Concept)                          │   │
│ │ 💬 0 unread · 📁 1 file · Last: Apr 4              │   │
│ │ [Open Workspace]                                   │   │
│ ├────────────────────────────────────────────────────┤   │
│ │ SeaWatch Monitor (Startup)                         │   │
│ │ ⚠ No activity yet                                  │   │
│ │ [Open Workspace]                                   │   │
│ └────────────────────────────────────────────────────┘   │
│                                                          │
│ Milestones                                               │
│ ┌────────────────────────────────────────────────────┐   │
│ │ ☑ Initial review (3/3 teams)                       │   │
│ │ ☐ Business plan feedback (1/3 teams)               │   │
│ │ ☐ Pitch deck review (0/3 teams)                    │   │
│ └────────────────────────────────────────────────────┘   │
└──────────────────────────────────────────────────────────┘
```

---

## 6. Applicant Experience

On the applicant dashboard, a "Mentoring" section appears when mentoring is active:

```
┌────────────────────────────────────────────────┐
│ Your Mentor: Dr. Martin Duval                  │
│ Expertise: Marine Biology, Sustainability      │
│                                                │
│ Mentoring Period: June 1 – June 30             │
│ ⏱ 18 days remaining                            │
│                                                │
│ [💬 Messages (2 unread)]                       │
│ [📁 Workspace Files (3)]                       │
│ [📋 Milestones]                                │
└────────────────────────────────────────────────┘
```

Clicking "Workspace Files" opens the same workspace view as the mentor (with appropriate permissions).

---

## 7. Admin Experience

Admin can:

- **Assign/reassign mentors** via bulk or individual assignment
- **View any workspace** (read-only or with full edit access)
- **Promote files** on behalf of teams
- **Track activity** — dashboard showing mentor engagement:
  - Messages sent per mentor
  - Files uploaded
  - Milestones completed
  - Last activity timestamp
- **Extend/close mentoring window** per team or globally
- **Export workspace data** for audit purposes

---

## 8. API — New and Modified Procedures

### New Procedures (mentor-workspace router)

| Procedure | Auth | Purpose |
|-----------|------|---------|
| `mentorWorkspace.getUploadUrl` | Mentor or Team | Get MinIO pre-signed URL for workspace upload |
| `mentorWorkspace.saveFile` | Mentor or Team | Create MentorFile record after upload |
| `mentorWorkspace.listFiles` | Mentor, Team, Admin | List workspace files with comment counts |
| `mentorWorkspace.deleteFile` | Uploader or Admin | Delete workspace file |
| `mentorWorkspace.getFileDownloadUrl` | Mentor, Team, Admin | Get MinIO pre-signed URL for download |
| `mentorWorkspace.addComment` | Mentor, Team, Admin | Add comment to file (with optional parentCommentId) |
| `mentorWorkspace.listComments` | Mentor, Team, Admin | Get threaded comments for a file |
| `mentorWorkspace.deleteComment` | Author or Admin | Delete a comment |
| `mentorWorkspace.promoteFile` | Team Lead or Admin | Promote workspace file to official submission |
| `mentorWorkspace.unpromoteFile` | Admin only | Revert a promotion |
| `mentorWorkspace.getWorkspaceStatus` | Any participant | Get workspace summary (file count, message count, etc.) |

### Modified Existing Procedures

| Procedure | Change |
|-----------|--------|
| `mentor.getMyProjects` | Include workspace status (file count, unread messages) |
| `mentor.getProjectDetail` | Include MentorFile[] with comment counts |
| `applicant.getMyDashboard` | Include mentor workspace summary if mentoring active |
| `file.listByProjectForRound` | Promoted files visible to jury (via ProjectFile record) |

---

## 9. Service: `mentor-workspace.ts`

### Key Functions

```typescript
// Upload handling
async function getWorkspaceUploadUrl(
  mentorAssignmentId: string,
  fileName: string,
  mimeType: string,
  actorId: string,
  prisma: PrismaClient
): Promise<{ uploadUrl: string; objectKey: string }>

// Save file metadata after upload
async function saveWorkspaceFile(
  mentorAssignmentId: string,
  uploadedByUserId: string,
  file: { fileName: string; mimeType: string; size: number; bucket: string; objectKey: string },
  description: string | null,
  prisma: PrismaClient
): Promise<MentorFile>

// Promote file to official submission
async function promoteFileToSubmission(
  mentorFileId: string,
  submissionWindowId: string,
  requirementId: string | null,
  actorId: string,
  prisma: PrismaClient
): Promise<{ mentorFile: MentorFile; projectFile: ProjectFile }>
// Steps:
// 1. Validate mentorFile exists, is not already promoted, workspace is active
// 2. If requirementId: find existing ProjectFile for that requirement, set replacedById
// 3. Create new ProjectFile (reusing same bucket/objectKey — no MinIO duplication)
// 4. Update MentorFile: isPromoted=true, promotedToFileId, promotedAt, promotedByUserId
// 5. Audit log with full provenance

// Revert promotion
async function unpromoteFile(
  mentorFileId: string,
  actorId: string,
  prisma: PrismaClient
): Promise<void>
// Steps:
// 1. Find the ProjectFile created by promotion
// 2. If it replaced a previous file, restore that file's replacedById=null
// 3. Delete the promoted ProjectFile
// 4. Reset MentorFile flags
// 5. Audit log
```

---

## 10. Edge Cases

| Scenario | Handling |
|----------|----------|
| Team doesn't want mentoring but admin assigns anyway | Assignment created; team sees mentor in dashboard |
| Mentor goes inactive during period | Admin can reassign; previous workspace preserved |
| File promoted, then mentoring period closes | Promoted file remains as official submission |
| Team tries to promote file for a requirement that doesn't exist | Error — must select valid requirement or leave requirementId null |
| Two files promoted to the same requirement slot | Second promotion replaces first (versioning) |
| Mentoring file is larger than requirement maxSizeMB | Warning shown but promotion allowed (admin override implicit) |
| Workspace closed but team needs one more upload | Admin can extend via round window or grant grace |
| Promoted file deleted from workspace | ProjectFile remains (separate record); audit shows provenance |

660  docs/claude-architecture-redesign/09-round-live-finals.md  Normal file
@@ -0,0 +1,660 @@

# Round Type: LIVE_FINAL — Live Finals Documentation

## Overview

The **LIVE_FINAL** round type orchestrates the live ceremony where Jury 3 evaluates finalist presentations in real-time. This is Round 7 in the redesigned 8-step competition flow. It combines jury scoring, optional audience participation, deliberation periods, and live results display into a single managed event.

**Core capabilities:**

- Real-time stage manager controls (presentation cursor, timing, pause/resume)
- Jury voting with multiple modes (numeric, ranking, binary)
- Optional audience voting with weighted scores
- Per-category presentation windows (STARTUP window, then CONCEPT window)
- Deliberation period for jury discussion
- Live results display or ceremony reveal
- Anti-fraud measures for audience participation

**Round 7 position in the flow:**

```
Round 1: Application Window (INTAKE)
Round 2: AI Screening (FILTERING)
Round 3: Jury 1 - Semi-finalist Selection (EVALUATION)
Round 4: Semi-finalist Submission (SUBMISSION)
Round 5: Jury 2 - Finalist Selection (EVALUATION)
Round 6: Finalist Mentoring (MENTORING)
Round 7: Live Finals (LIVE_FINAL)  ← THIS DOCUMENT
Round 8: Confirm Winners (CONFIRMATION)
```

---

## Current System (Pipeline → Track → Stage)

### Existing Models

**LiveVotingSession** — Per-stage voting session:

```prisma
model LiveVotingSession {
  id                  String   @id @default(cuid())
  stageId             String?  @unique
  status              String   @default("NOT_STARTED") // NOT_STARTED, IN_PROGRESS, PAUSED, COMPLETED
  currentProjectIndex Int      @default(0)
  currentProjectId    String?
  votingStartedAt     DateTime?
  votingEndsAt        DateTime?
  projectOrderJson    Json?    @db.JsonB // Array of project IDs in presentation order

  // Voting configuration
  votingMode          String   @default("simple") // "simple" (1-10) | "criteria" (per-criterion scores)
  criteriaJson        Json?    @db.JsonB // Array of { id, label, description, scale, weight }

  // Audience settings
  allowAudienceVotes     Boolean @default(false)
  audienceVoteWeight     Float   @default(0) // 0.0 to 1.0
  audienceVotingMode     String  @default("disabled") // "disabled" | "per_project" | "per_category" | "favorites"
  audienceMaxFavorites   Int     @default(3)
  audienceRequireId      Boolean @default(false)
  audienceVotingDuration Int?    // Minutes (null = same as jury)

  tieBreakerMethod         String @default("admin_decides") // 'admin_decides' | 'highest_individual' | 'revote'
  presentationSettingsJson Json?  @db.JsonB

  stage          Stage? @relation(...)
  votes          LiveVote[]
  audienceVoters AudienceVoter[]
}
```

**LiveVote** — Individual jury or audience vote:

```prisma
model LiveVote {
  id             String  @id @default(cuid())
  sessionId      String
  projectId      String
  userId         String? // Nullable for audience voters without accounts
  score          Int     // 1-10 (or weighted score for criteria mode)
  isAudienceVote Boolean @default(false)
  votedAt        DateTime @default(now())

  // Criteria scores (used when votingMode="criteria")
  criterionScoresJson Json? @db.JsonB // { [criterionId]: score }

  // Audience voter link
  audienceVoterId String?

  session       LiveVotingSession @relation(...)
  user          User?             @relation(...)
  audienceVoter AudienceVoter?    @relation(...)

  @@unique([sessionId, projectId, userId])
  @@unique([sessionId, projectId, audienceVoterId])
}
```

**AudienceVoter** — Registered audience participant:

```prisma
model AudienceVoter {
  id             String  @id @default(cuid())
  sessionId      String
  token          String  @unique // Unique voting token (UUID)
  identifier     String? // Optional: email, phone, or name
  identifierType String? // "email" | "phone" | "name" | "anonymous"
  ipAddress      String?
  userAgent      String?
  createdAt      DateTime @default(now())

  session LiveVotingSession @relation(...)
  votes   LiveVote[]
}
```

**LiveProgressCursor** — Stage manager cursor:

```prisma
model LiveProgressCursor {
  id               String  @id @default(cuid())
  stageId          String  @unique
  sessionId        String  @unique @default(cuid())
  activeProjectId  String?
  activeOrderIndex Int     @default(0)
  isPaused         Boolean @default(false)

  stage Stage @relation(...)
}
```

**Cohort** — Presentation groups:

```prisma
model Cohort {
  id            String  @id @default(cuid())
  stageId       String
  name          String
  votingMode    String  @default("simple") // simple, criteria, ranked
  isOpen        Boolean @default(false)
  windowOpenAt  DateTime?
  windowCloseAt DateTime?

  stage    Stage @relation(...)
  projects CohortProject[]
}
```

### Current Service Functions

`src/server/services/live-control.ts`:

- `startSession(stageId, actorId)` — Initialize/reset cursor
- `setActiveProject(stageId, projectId, actorId)` — Set currently presenting project
- `jumpToProject(stageId, orderIndex, actorId)` — Jump to specific project in queue
- `reorderQueue(stageId, newOrder, actorId)` — Reorder presentation sequence
- `pauseResume(stageId, isPaused, actorId)` — Toggle pause state
- `openCohortWindow(cohortId, actorId)` — Open voting window for a cohort
- `closeCohortWindow(cohortId, actorId)` — Close cohort window

### Current tRPC Procedures

`src/server/routers/live-voting.ts`:

```typescript
liveVoting.getSession({ stageId })
liveVoting.getSessionForVoting({ sessionId }) // Jury view
liveVoting.getPublicSession({ sessionId }) // Display view
liveVoting.setProjectOrder({ sessionId, projectIds })
liveVoting.setVotingMode({ sessionId, votingMode: 'simple' | 'criteria' })
liveVoting.setCriteria({ sessionId, criteria })
liveVoting.importCriteriaFromForm({ sessionId, formId })
liveVoting.startVoting({ sessionId, projectId, durationSeconds })
liveVoting.stopVoting({ sessionId })
liveVoting.endSession({ sessionId })
liveVoting.vote({ sessionId, projectId, score, criterionScores })
liveVoting.getResults({ sessionId, juryWeight?, audienceWeight? })
liveVoting.updatePresentationSettings({ sessionId, presentationSettingsJson })
liveVoting.updateSessionConfig({ sessionId, allowAudienceVotes, audienceVoteWeight, ... })
liveVoting.registerAudienceVoter({ sessionId, identifier?, identifierType? }) // Public
liveVoting.castAudienceVote({ sessionId, projectId, score, token }) // Public
liveVoting.getAudienceVoterStats({ sessionId })
liveVoting.getAudienceSession({ sessionId }) // Public
liveVoting.getPublicResults({ sessionId }) // Public
```

### Current LiveFinalConfig Type

From `src/types/pipeline-wizard.ts`:

```typescript
type LiveFinalConfig = {
  juryVotingEnabled: boolean
  audienceVotingEnabled: boolean
  audienceVoteWeight: number
  cohortSetupMode: 'auto' | 'manual'
  revealPolicy: 'immediate' | 'delayed' | 'ceremony'
}
```

### Current Admin UI

`src/components/admin/pipeline/sections/live-finals-section.tsx`:

- Jury voting toggle
- Audience voting toggle + weight slider (0-100%)
- Cohort setup mode selector (auto/manual)
- Result reveal policy selector (immediate/delayed/ceremony)

---

## Redesigned Live Finals Round

### Enhanced LiveFinalConfig

**New comprehensive config:**

```typescript
type LiveFinalConfig = {
  // Jury configuration
  juryGroupId: string // Which jury evaluates (Jury 3)

  // Voting mode
  votingMode: 'NUMERIC' | 'RANKING' | 'BINARY'

  // Numeric mode settings
  numericScale?: {
    min: number // Default: 1
    max: number // Default: 10
    allowDecimals: boolean // Default: false
  }

  // Criteria-based voting (optional enhancement to NUMERIC)
  criteriaEnabled?: boolean
  criteriaJson?: LiveVotingCriterion[] // { id, label, description, scale, weight }
  importFromEvalForm?: string // EvaluationForm ID to import criteria from

  // Ranking mode settings
  rankingSettings?: {
    maxRankedProjects: number // How many projects each juror ranks (e.g., top 3)
    pointsSystem: 'DESCENDING' | 'BORDA' // 3-2-1 or Borda count
  }

  // Binary mode settings (simple yes/no)
  binaryLabels?: {
    yes: string // Default: "Finalist"
    no: string // Default: "Not Selected"
  }

  // Audience voting
  audienceVotingEnabled: boolean
  audienceVotingWeight: number // 0-100, percentage weight
  juryVotingWeight: number // complement of audience weight (calculated)
  audienceVotingMode: 'PER_PROJECT' | 'FAVORITES' | 'CATEGORY_FAVORITES'
  audienceMaxFavorites?: number // For FAVORITES mode
  audienceRequireIdentification: boolean
  audienceAntiSpamMeasures: {
    ipRateLimit: boolean // Limit votes per IP
    deviceFingerprint: boolean // Track device ID
    emailVerification: boolean // Require verified email
  }

  // Presentation timing
  presentationDurationMinutes: number
  qaDurationMinutes: number

  // Deliberation
  deliberationEnabled: boolean
  deliberationDurationMinutes: number
  deliberationAllowsVoteRevision: boolean // Can jury change votes during deliberation?

  // Category windows
  categoryWindowsEnabled: boolean // Separate windows per category
  categoryWindows: CategoryWindow[]

  // Results display
  showLiveResults: boolean // Real-time leaderboard
  showLiveScores: boolean // Show actual scores vs just rankings
  anonymizeJuryVotes: boolean // Hide individual jury votes from audience
  requireAllJuryVotes: boolean // Voting can't end until all jury members vote

  // Override controls
  adminCanOverrideVotes: boolean
  adminCanAdjustWeights: boolean // Mid-ceremony weight adjustment

  // Presentation order
  presentationOrderMode: 'MANUAL' | 'RANDOM' | 'SCORE_BASED' | 'CATEGORY_SPLIT'
}

type CategoryWindow = {
  category: 'STARTUP' | 'BUSINESS_CONCEPT'
  projectOrder: string[] // Ordered project IDs
  startTime?: string // Scheduled start (ISO 8601)
  endTime?: string // Scheduled end
  deliberationMinutes?: number // Override global deliberation duration
}

type LiveVotingCriterion = {
  id: string
  label: string
  description?: string
  scale: number // 1-10, 1-5, etc.
  weight: number // Sum to 1.0 across all criteria
}
```
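
The two `pointsSystem` options can be made concrete with a small sketch. The function below is illustrative only (it is not part of the codebase) and assumes 1-based ranks:

```typescript
// Illustrative sketch of rankingSettings.pointsSystem; not actual platform code.
type RankingSettings = {
  maxRankedProjects: number // e.g. 3 → each juror ranks their top 3
  pointsSystem: 'DESCENDING' | 'BORDA'
}

// rank is 1-based (1 = the juror's top pick); unranked projects score 0.
function pointsForRank(rank: number, totalProjects: number, s: RankingSettings): number {
  if (rank < 1 || rank > s.maxRankedProjects) return 0
  if (s.pointsSystem === 'DESCENDING') {
    // 3-2-1 style: top pick earns maxRankedProjects points, last ranked pick earns 1.
    return s.maxRankedProjects - rank + 1
  }
  // Borda count: points depend on the size of the whole field.
  return totalProjects - rank
}
```

With `maxRankedProjects: 3` and six finalists, a juror's top pick is worth 3 points under DESCENDING but 5 points under BORDA, so the choice affects how much a single strong endorsement can move the totals.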

### Zod Validation Schema

```typescript
import { z } from 'zod'

const CategoryWindowSchema = z.object({
  category: z.enum(['STARTUP', 'BUSINESS_CONCEPT']),
  projectOrder: z.array(z.string()),
  startTime: z.string().datetime().optional(),
  endTime: z.string().datetime().optional(),
  deliberationMinutes: z.number().int().min(0).max(120).optional(),
})

const LiveVotingCriterionSchema = z.object({
  id: z.string(),
  label: z.string().min(1).max(100),
  description: z.string().max(500).optional(),
  scale: z.number().int().min(1).max(100),
  weight: z.number().min(0).max(1),
})

export const LiveFinalConfigSchema = z.object({
  // Jury
  juryGroupId: z.string(),

  // Voting mode
  votingMode: z.enum(['NUMERIC', 'RANKING', 'BINARY']),

  // Numeric mode settings
  numericScale: z.object({
    min: z.number().int().default(1),
    max: z.number().int().default(10),
    allowDecimals: z.boolean().default(false),
  }).optional(),

  // Criteria
  criteriaEnabled: z.boolean().optional(),
  criteriaJson: z.array(LiveVotingCriterionSchema).optional(),
  importFromEvalForm: z.string().optional(),

  // Ranking
  rankingSettings: z.object({
    maxRankedProjects: z.number().int().min(1).max(20),
    pointsSystem: z.enum(['DESCENDING', 'BORDA']),
  }).optional(),

  // Binary
  binaryLabels: z.object({
    yes: z.string().default('Finalist'),
    no: z.string().default('Not Selected'),
  }).optional(),

  // Audience
  audienceVotingEnabled: z.boolean(),
  audienceVotingWeight: z.number().min(0).max(100),
  juryVotingWeight: z.number().min(0).max(100),
  audienceVotingMode: z.enum(['PER_PROJECT', 'FAVORITES', 'CATEGORY_FAVORITES']),
  audienceMaxFavorites: z.number().int().min(1).max(20).optional(),
  audienceRequireIdentification: z.boolean(),
  audienceAntiSpamMeasures: z.object({
    ipRateLimit: z.boolean(),
    deviceFingerprint: z.boolean(),
    emailVerification: z.boolean(),
  }),

  // Timing
  presentationDurationMinutes: z.number().int().min(1).max(60),
  qaDurationMinutes: z.number().int().min(0).max(30),

  // Deliberation
  deliberationEnabled: z.boolean(),
  deliberationDurationMinutes: z.number().int().min(0).max(120),
  deliberationAllowsVoteRevision: z.boolean(),

  // Category windows
  categoryWindowsEnabled: z.boolean(),
  categoryWindows: z.array(CategoryWindowSchema),

  // Results
  showLiveResults: z.boolean(),
  showLiveScores: z.boolean(),
  anonymizeJuryVotes: z.boolean(),
  requireAllJuryVotes: z.boolean(),

  // Overrides
  adminCanOverrideVotes: z.boolean(),
  adminCanAdjustWeights: z.boolean(),

  // Presentation order
  presentationOrderMode: z.enum(['MANUAL', 'RANDOM', 'SCORE_BASED', 'CATEGORY_SPLIT']),
}).refine(
  (data) => {
    // Ensure weights sum to 100
    return data.audienceVotingWeight + data.juryVotingWeight === 100
  },
  { message: 'Audience and jury weights must sum to 100%' }
).refine(
  (data) => {
    // If criteria enabled, must have criteria
    if (data.criteriaEnabled && (!data.criteriaJson || data.criteriaJson.length === 0)) {
      return false
    }
    return true
  },
  { message: 'Criteria-based voting requires at least one criterion' }
).refine(
  (data) => {
    // Criteria weights must sum to 1.0
    if (data.criteriaJson && data.criteriaJson.length > 0) {
      const weightSum = data.criteriaJson.reduce((sum, c) => sum + c.weight, 0)
      return Math.abs(weightSum - 1.0) < 0.01
    }
    return true
  },
  { message: 'Criteria weights must sum to 1.0' }
)
```
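
The weight constraints enforced above imply a specific scoring blend. The helpers below are an illustrative sketch (the names are not from the codebase) of how a project's combined score follows from the jury/audience percentage weights and, in criteria mode, from per-criterion weights:

```typescript
// Illustrative only: blends jury and audience averages using percentage weights
// that must sum to 100, mirroring the schema's first refine().
function combinedScore(juryAvg: number, audienceAvg: number, audienceWeightPct: number): number {
  const juryWeightPct = 100 - audienceWeightPct
  return (juryAvg * juryWeightPct + audienceAvg * audienceWeightPct) / 100
}

// Criteria mode: a juror's overall score is the weighted sum of criterion scores,
// with weights summing to 1.0 (the schema's third refine()).
function criteriaScore(
  scores: Record<string, number>,
  criteria: { id: string; weight: number }[],
): number {
  return criteria.reduce((sum, c) => sum + (scores[c.id] ?? 0) * c.weight, 0)
}
```

For example, a jury average of 8.5 and an audience average of 7.2 with a 20% audience weight blend to 8.24.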

---

## Stage Manager — Admin Controls

The **Stage Manager** is the admin control panel for orchestrating the live ceremony. It provides real-time control over presentation flow, voting windows, and emergency interventions.

### Ceremony State Machine

```
Ceremony State Flow:
NOT_STARTED → (start session) → IN_PROGRESS → (deliberation starts) → DELIBERATION → (voting ends) → COMPLETED

NOT_STARTED:
- Session created but not started
- Projects ordered (manual or automatic)
- Jury and audience links generated
- Stage manager can preview setup

IN_PROGRESS:
- Presentations ongoing
- Per-project state: WAITING → PRESENTING → Q_AND_A → VOTING → VOTED → SCORED
- Admin can pause, skip, reorder on the fly

DELIBERATION:
- Timer running for deliberation period
- Jury can discuss (optional chat/discussion interface)
- Votes may be revised (if deliberationAllowsVoteRevision=true)
- Admin can extend deliberation time

COMPLETED:
- All voting finished
- Results calculated
- Ceremony locked (or unlocked for result reveal)
```
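
The ceremony flow above can be sketched as a transition table. This is illustrative, not the actual service code; pause is modeled as a flag rather than a state, matching `isPaused` on the session:

```typescript
type CeremonyState = 'NOT_STARTED' | 'IN_PROGRESS' | 'DELIBERATION' | 'COMPLETED'

// Legal forward transitions. IN_PROGRESS may jump straight to COMPLETED
// when deliberation is disabled in the round config (an assumption here).
const ceremonyTransitions: Record<CeremonyState, CeremonyState[]> = {
  NOT_STARTED: ['IN_PROGRESS'],
  IN_PROGRESS: ['DELIBERATION', 'COMPLETED'],
  DELIBERATION: ['COMPLETED'],
  COMPLETED: [],
}

function canTransition(from: CeremonyState, to: CeremonyState): boolean {
  return ceremonyTransitions[from].includes(to)
}
```

Encoding the flow as data keeps the guard logic trivial and makes illegal moves (such as reopening a COMPLETED ceremony) fail uniformly.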

### Per-Project State

Each project in the live finals progresses through these states:

```
WAITING     → Project queued, not yet presenting
PRESENTING  → Presentation in progress (timer: presentationDurationMinutes)
Q_AND_A     → Q&A session (timer: qaDurationMinutes)
VOTING      → Voting window open (jury + audience can vote)
VOTED       → Voting window closed, awaiting next action
SCORED      → Scores calculated, moving to next project
SKIPPED     → Admin skipped this project (emergency override)
```
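
A hypothetical helper for the happy path (SKIPPED is only reachable via the admin override, so it sits outside the linear order):

```typescript
// Illustrative sketch; not actual platform code.
type ProjectState = 'WAITING' | 'PRESENTING' | 'Q_AND_A' | 'VOTING' | 'VOTED' | 'SCORED' | 'SKIPPED'

const happyPath: ProjectState[] = ['WAITING', 'PRESENTING', 'Q_AND_A', 'VOTING', 'VOTED', 'SCORED']

// Returns the next state on the happy path, or null for terminal states.
function nextProjectState(current: ProjectState): ProjectState | null {
  if (current === 'SKIPPED' || current === 'SCORED') return null
  const i = happyPath.indexOf(current)
  return i === -1 ? null : happyPath[i + 1]
}
```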

### Stage Manager UI Controls

**ASCII Mockup:**

```
┌─────────────────────────────────────────────────────────────────────┐
│ LIVE FINALS STAGE MANAGER                  Session: live-abc-123    │
├─────────────────────────────────────────────────────────────────────┤
│ Status: IN_PROGRESS    Category: STARTUP    Jury: Jury 3 (8/8)      │
│                                                                     │
│ [Pause Ceremony]  [End Session]  [Emergency Stop]                   │
└─────────────────────────────────────────────────────────────────────┘

┌─ CURRENT PROJECT ───────────────────────────────────────────────────┐
│ Project #3 of 6 (STARTUP)                                           │
│ Title: "OceanSense AI" — Team: AquaTech Solutions                   │
│                                                                     │
│ State: VOTING                                                       │
│ ┌─ Presentation Timer ────┐ ┌─ Q&A Timer ─────┐ ┌─ Voting Timer ─┐  │
│ │ Completed: 8:00 / 8:00  │ │ Completed: 5:00 │ │ 0:45 remaining │  │
│ └─────────────────────────┘ └─────────────────┘ └────────────────┘  │
│                                                                     │
│ Jury Votes: 6 / 8 (75%)                                             │
│ [✓] Alice Chen    [✓] Bob Martin      [ ] Carol Davis               │
│ [✓] David Lee     [✓] Emma Wilson     [ ] Frank Garcia              │
│ [✓] Grace Huang   [✓] Henry Thompson                                │
│                                                                     │
│ Audience Votes: 142                                                 │
│                                                                     │
│ [Skip Project]  [Reset Votes]  [Extend Time +1min]  [Next Project]  │
└─────────────────────────────────────────────────────────────────────┘

┌─ PROJECT QUEUE ─────────────────────────────────────────────────────┐
│ [✓] 1. AquaClean Tech (STARTUP) — Score: 8.2 (Completed)            │
│ [✓] 2. BlueCarbon Solutions (STARTUP) — Score: 7.8 (Completed)      │
│ [>] 3. OceanSense AI (STARTUP) — Voting in progress                 │
│ [ ] 4. MarineTech Innovations (STARTUP) — Waiting                   │
│ [ ] 5. CoralGuard (STARTUP) — Waiting                               │
│ [ ] 6. DeepSea Robotics (STARTUP) — Waiting                         │
│                                                                     │
│ [Reorder Queue]  [Jump to Project...]  [Add Project]                │
└─────────────────────────────────────────────────────────────────────┘

┌─ CATEGORY WINDOWS ──────────────────────────────────────────────────┐
│ Window 1: STARTUP (6 projects)                                      │
│   Status: IN_PROGRESS (Project 3/6)                                 │
│   Started: 2026-05-15 18:00:00                                      │
│   [Close Window & Start Deliberation]                               │
│                                                                     │
│ Window 2: BUSINESS_CONCEPT (6 projects)                             │
│   Status: WAITING                                                   │
│   Scheduled: 2026-05-15 19:30:00                                    │
│   [Start Window Early]                                              │
└─────────────────────────────────────────────────────────────────────┘

┌─ LIVE LEADERBOARD (STARTUP) ────────────────────────────────────────┐
│ Rank | Project               | Jury Avg | Audience | Weighted | Gap  │
│------+-----------------------+----------+----------+----------+------│
│  1   | AquaClean Tech        |   8.5    |   7.2    |   8.2    |  —   │
│  2   | BlueCarbon Solutions  |   8.0    |   7.4    |   7.8    | -0.4 │
│  3   | OceanSense AI         |   —      |   6.8    |   —      |  —   │
│  4   | MarineTech Innov.     |   —      |   —      |   —      |  —   │
│  5   | CoralGuard            |   —      |   —      |   —      |  —   │
│  6   | DeepSea Robotics      |   —      |   —      |   —      |  —   │
└─────────────────────────────────────────────────────────────────────┘

┌─ CEREMONY LOG ──────────────────────────────────────────────────────┐
│ 18:43:22 — Voting opened for "OceanSense AI"                        │
│ 18:42:10 — Q&A period ended                                         │
│ 18:37:05 — Q&A period started                                       │
│ 18:29:00 — Presentation started: "OceanSense AI"                    │
│ 18:28:45 — Voting closed for "BlueCarbon Solutions"                 │
│ 18:27:30 — All jury votes received for "BlueCarbon Solutions"       │
└─────────────────────────────────────────────────────────────────────┘

┌─ ADMIN OVERRIDE PANEL ──────────────────────────────────────────────┐
│ [Override Individual Vote...]  [Adjust Weights...]  [Reset Session] │
└─────────────────────────────────────────────────────────────────────┘
```

### Stage Manager Features

**Core controls:**

1. **Session Management**
   - Start session (initialize cursor, generate jury/audience links)
   - Pause ceremony (freeze all timers, block votes)
   - Resume ceremony
   - End session (lock results, trigger CONFIRMATION round)

2. **Project Navigation**
   - Jump to specific project
   - Skip project (emergency)
   - Reorder queue (drag-and-drop or modal)
   - Add project mid-ceremony (rare edge case)

3. **Timer Controls**
   - Start presentation timer
   - Start Q&A timer
   - Start voting timer
   - Extend timer (+1 min, +5 min)
   - Manual timer override

4. **Voting Window Management**
   - Open voting for current project
   - Close voting early
   - Require all jury votes before closing
   - Reset votes (emergency undo)

5. **Category Window Controls**
   - Open category window (STARTUP or BUSINESS_CONCEPT)
   - Close category window
   - Start deliberation period
   - Advance to next category

6. **Emergency Controls**
   - Skip project
   - Reset individual vote
   - Reset all votes for project
   - Pause ceremony (emergency)
   - Force end session

7. **Override Controls** (if `adminCanOverrideVotes=true`):
   - Override individual jury vote
   - Adjust audience/jury weights mid-ceremony
   - Manual score adjustment

8. **Real-Time Monitoring**
   - Live vote count (jury + audience)
   - Missing jury votes indicator
   - Audience voter count
   - Leaderboard (if `showLiveResults=true`)
   - Ceremony event log

---

## Jury 3 Voting Experience

### Jury Dashboard

**ASCII Mockup:**

```
┌─────────────────────────────────────────────────────────────────────┐
│ LIVE FINALS VOTING — Jury 3                           Alice Chen    │
├─────────────────────────────────────────────────────────────────────┤
│ Status: VOTING IN PROGRESS                                          │
│ Category: STARTUP                                                   │
│                                                                     │
│ [View All Finalists]  [Results Dashboard]  [Jury Discussion]        │
└─────────────────────────────────────────────────────────────────────┘

┌─ CURRENT PROJECT ───────────────────────────────────────────────────┐
│ Project 3 of 6                                                      │
│                                                                     │
│ OceanSense AI                                                       │
│ Team: AquaTech Solutions                                            │
│ Category: STARTUP (Marine Technology)                               │
│                                                                     │
│ Description:                                                        │
│ AI-powered ocean monitoring platform that detects pollution events  │
│ in real-time using satellite imagery and underwater sensors.        │
│                                                                     │
│ ┌─ Documents ──────────────────────────────────────────────────┐    │
│ │ Round 1 Docs:                                                │    │
│ │ • Executive Summary.pdf                                      │    │
│ │ • Business Plan.pdf                                          │    │
│ │                                                              │    │
│ │ Round 2 Docs (Semi-finalist):                                │    │
│ │ • Updated Business Plan.pdf                                  │    │
│ │ • Pitch Video.mp4                                            │    │
│ │ • Technical Whitepaper.pdf                                   │    │
│ └──────────────────────────────────────────────────────────────┘    │
│                                                                     │
│ Voting closes in: 0:45                                              │
└─────────────────────────────────────────────────────────────────────┘

┌─ VOTING PANEL (Numeric Mode: 1-10) ─────────────────────────────────┐
│                                                                     │
│ How would you rate this project overall?                            │
│                                                                     │
│ ┌────────────────────────────────────────────────────────────┐      │
│ │  1    2    3    4    5    6    7    8    9    10           │      │
│ │  ○    ○    ○    ○    ○    ○    ○    ●    ○    ○            │      │
│ └────────────────────────────────────────────────────────────┘      │
│                                                                     │
│ Your score: 8                                                       │
│                                                                     │
│ [Submit Vote]                                                       │
│                                                                     │
│ ⚠️ Votes cannot be changed after submission unless admin resets.    │
└─────────────────────────────────────────────────────────────────────┘

┌─ YOUR VOTES THIS SESSION ───────────────────────────────────────────┐
│ [✓] 1. AquaClean Tech — Score: 9                                    │
│ [✓] 2. BlueCarbon Solutions — Score: 8                              │
│ [ ] 3. OceanSense AI — Not voted yet                                │
│ [ ] 4. MarineTech Innovations — Waiting                             │
│ [ ] 5. CoralGuard — Waiting                                         │
│ [ ] 6. DeepSea Robotics — Waiting                                   │
└─────────────────────────────────────────────────────────────────────┘
```
1299
docs/claude-architecture-redesign/10-round-confirmation.md
Normal file
File diff suppressed because it is too large
965
docs/claude-architecture-redesign/11-special-awards.md
Normal file
@@ -0,0 +1,965 @@
# Special Awards System

## Overview

Special Awards are standalone award tracks that run parallel to the main competition flow. They enable the MOPC platform to recognize excellence in specific areas (e.g., "Innovation Award", "Impact Award", "Youth Leadership Award") with dedicated juries and evaluation processes while referencing the same pool of projects.

### Purpose

Special Awards serve three key purposes:

1. **Parallel Recognition** — Recognize excellence in specific domains beyond the main competition prizes
2. **Specialized Evaluation** — Enable dedicated jury groups with domain expertise to evaluate specific criteria
3. **Flexible Integration** — Awards can piggyback on main rounds or run independently with their own timelines

### Design Philosophy

- **Standalone Entities** — Awards are not tracks; they're first-class entities linked to competitions
- **Two Modes** — STAY_IN_MAIN (piggyback evaluation) or SEPARATE_POOL (independent flow)
- **Dedicated Juries** — Each award can have its own jury group with unique members or shared members
- **Flexible Eligibility** — AI-suggested, manual, round-based, or all-eligible modes
- **Integration with Results** — Award results feed into the confirmation round alongside main competition winners

---

## Current System Analysis

### Current Architecture (Pipeline-Based)

**Current State:**

```
Program
└── Pipeline
    ├── Track: "Main Competition" (MAIN)
    └── Track: "Innovation Award" (AWARD)
        ├── Stage: "Evaluation" (EVALUATION)
        └── Stage: "Results" (RESULTS)

SpecialAward {
  id, programId, name, description
  trackId → Track (AWARD track)
  criteriaText (for AI)
  scoringMode: PICK_WINNER | RANKED | SCORED
  votingStartAt, votingEndAt
  winnerProjectId
  useAiEligibility: boolean
}

AwardEligibility { awardId, projectId, eligible, method, aiReasoningJson }
AwardJuror { awardId, userId }
AwardVote { awardId, userId, projectId, rank? }
```

**Current Flow:**

1. Admin creates AWARD track within pipeline
2. Admin configures SpecialAward linked to track
3. Projects routed to award track via ProjectStageState
4. AI or manual eligibility determination
5. Award jurors evaluate/vote
6. Winner selected (admin/award master decision)

**Current Limitations:**

- Awards tied to track concept (being eliminated)
- No distinction between "piggyback" awards and independent awards
- No round-based eligibility
- No jury group integration
- No evaluation form linkage
- No audience voting support
- No integration with confirmation round

---

## Redesigned System: Two Award Modes

### Mode 1: STAY_IN_MAIN

**Concept:** Projects remain in the main competition flow. A dedicated award jury evaluates them using the same submissions, during the same evaluation windows.

**Use Case:** "Innovation Award" — Members of Jury 2 who also serve on the Innovation Award jury score projects specifically for innovation criteria during the Jury 2 evaluation round.

**Characteristics:**

- Projects never leave main track
- Award jury evaluates during specific main evaluation rounds
- Award jury sees the same docs/submissions as main jury
- Award uses its own evaluation form with award-specific criteria
- No separate stages/timeline needed
- Results announced alongside main results

**Data Flow:**

```
Competition → Round 5 (Jury 2 Evaluation)
 ├─ Main Jury (Jury 2) evaluates with standard criteria
 └─ Innovation Award Jury evaluates same projects with innovation criteria

SpecialAward {
  evaluationMode: "STAY_IN_MAIN"
  evaluationRoundId: "round-5"        ← Which main round this award evaluates during
  juryGroupId: "innovation-jury"      ← Dedicated jury
  evaluationFormId: "innovation-form" ← Award-specific criteria
}
```

### Mode 2: SEPARATE_POOL

**Concept:** Dedicated evaluation with separate criteria, submission requirements, and timeline. Projects may be pulled out for award-specific evaluation.

**Use Case:** "Community Impact Award" — Separate jury evaluates finalists specifically for community impact using a unique rubric and potentially additional documentation.

**Characteristics:**

- Own jury group with unique members
- Own evaluation criteria/form
- Can have own submission requirements
- Runs on its own timeline
- Can pull projects from specific rounds
- Independent results timeline

**Data Flow:**

```
Competition
└── SpecialAward {
      evaluationMode: "SEPARATE_POOL"
      eligibilityMode: "ROUND_BASED"   ← Projects from Round 5 (finalists)
      juryGroupId: "impact-jury"
      evaluationFormId: "impact-form"
      votingStartAt: [own window]
      votingEndAt: [own window]
    }
```
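
ROUND_BASED eligibility can be sketched as a pure filter over per-round project states. The shapes below are simplified stand-ins for illustration; in particular the `outcome` values are assumptions, and the real ProjectRoundState fields may differ:

```typescript
// Simplified stand-in for the ProjectRoundState model; 'outcome' values are assumed.
type ProjectRoundState = {
  projectId: string
  roundId: string
  outcome: 'ADVANCED' | 'ELIMINATED' | 'PENDING'
}

// Projects that advanced from the configured round (e.g. Round 5 finalists)
// become the award's eligible pool.
function roundBasedEligibleProjects(states: ProjectRoundState[], eligibilityRoundId: string): string[] {
  return states
    .filter((s) => s.roundId === eligibilityRoundId && s.outcome === 'ADVANCED')
    .map((s) => s.projectId)
}
```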

---
## Enhanced SpecialAward Model

### Complete Schema

```prisma
model SpecialAward {
  id            String  @id @default(cuid())
  competitionId String  // CHANGED: Links to Competition, not Track
  name          String
  description   String? @db.Text

  // Eligibility configuration
  eligibilityMode     AwardEligibilityMode @default(AI_SUGGESTED)
  eligibilityCriteria Json?                @db.JsonB // Mode-specific config

  // Evaluation configuration
  evaluationMode    AwardEvaluationMode @default(STAY_IN_MAIN)
  evaluationRoundId String? // Which main round (for STAY_IN_MAIN)
  evaluationFormId  String? // Custom criteria
  juryGroupId       String? // Dedicated or shared jury

  // Voting configuration
  votingMode           AwardVotingMode  @default(JURY_ONLY)
  scoringMode          AwardScoringMode @default(PICK_WINNER)
  maxRankedPicks       Int?   // For RANKED mode
  maxWinners           Int    @default(1) // Number of winners
  audienceVotingWeight Float? // 0.0-1.0 for COMBINED mode

  // Timing
  votingStartAt DateTime?
  votingEndAt   DateTime?

  // Results
  status          AwardStatus @default(DRAFT)
  winnerProjectId String? // Single winner (for backward compat)

  // AI eligibility
  useAiEligibility Boolean @default(false)
  criteriaText     String? @db.Text // Plain-language criteria for AI

  // Job tracking (for AI eligibility)
  eligibilityJobStatus  String?
  eligibilityJobTotal   Int?
  eligibilityJobDone    Int?
  eligibilityJobError   String?   @db.Text
  eligibilityJobStarted DateTime?

  sortOrder Int      @default(0)
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt

  // Relations
  competition     Competition     @relation(fields: [competitionId], references: [id], onDelete: Cascade)
  evaluationRound Round?          @relation("AwardEvaluationRound", fields: [evaluationRoundId], references: [id], onDelete: SetNull)
  evaluationForm  EvaluationForm? @relation(fields: [evaluationFormId], references: [id], onDelete: SetNull)
  juryGroup       JuryGroup?      @relation("AwardJuryGroup", fields: [juryGroupId], references: [id], onDelete: SetNull)
  winnerProject   Project?        @relation("AwardWinner", fields: [winnerProjectId], references: [id], onDelete: SetNull)

  eligibilities AwardEligibility[]
  votes         AwardVote[]
  winners       AwardWinner[] // NEW: Multi-winner support

  @@index([competitionId])
  @@index([status])
  @@index([evaluationRoundId])
  @@index([juryGroupId])
}

enum AwardEligibilityMode {
  AI_SUGGESTED // AI analyzes and suggests eligible projects
  MANUAL       // Admin manually selects eligible projects
  ALL_ELIGIBLE // All projects in competition are eligible
  ROUND_BASED  // All projects that reach a specific round
}

enum AwardEvaluationMode {
  STAY_IN_MAIN  // Evaluate during a main competition round
  SEPARATE_POOL // Independent evaluation flow
}

enum AwardVotingMode {
  JURY_ONLY     // Only jury votes
  AUDIENCE_ONLY // Only audience votes
  COMBINED      // Jury + audience with weighted scoring
}

enum AwardScoringMode {
  PICK_WINNER // Simple winner selection (1 or N winners)
  RANKED      // Ranked-choice voting
  SCORED      // Criteria-based scoring
}

enum AwardStatus {
  DRAFT
  NOMINATIONS_OPEN
  EVALUATION // NEW: Award jury evaluation in progress
  DECIDED    // NEW: Winner(s) selected, pending announcement
  ANNOUNCED  // NEW: Winner(s) publicly announced
  ARCHIVED
}
```
### New Model: AwardWinner (Multi-Winner Support)

```prisma
model AwardWinner {
  id        String @id @default(cuid())
  awardId   String
  projectId String
  rank      Int // 1st place, 2nd place, etc.

  // Selection metadata
  selectedAt      DateTime @default(now())
  selectedById    String
  selectionMethod String // "JURY_VOTE" | "AUDIENCE_VOTE" | "COMBINED" | "ADMIN_DECISION"

  // Score breakdown (for transparency)
  juryScore     Float?
  audienceScore Float?
  finalScore    Float?

  createdAt DateTime @default(now())

  // Relations
  award      SpecialAward @relation(fields: [awardId], references: [id], onDelete: Cascade)
  project    Project      @relation("AwardWinners", fields: [projectId], references: [id], onDelete: Cascade)
  selectedBy User         @relation("AwardWinnerSelector", fields: [selectedById], references: [id])

  @@unique([awardId, projectId])
  @@unique([awardId, rank])
  @@index([awardId])
  @@index([projectId])
}
```
### Enhanced AwardVote Model

```prisma
model AwardVote {
  id        String  @id @default(cuid())
  awardId   String
  userId    String? // Nullable for audience votes
  projectId String

  // Voting type
  isAudienceVote Boolean @default(false)

  // Scoring (mode-dependent)
  rank  Int?   // For RANKED mode (1 = first choice)
  score Float? // For SCORED mode

  // Criteria scores (for SCORED mode)
  criterionScoresJson Json? @db.JsonB

  votedAt DateTime @default(now())

  // Relations
  award   SpecialAward @relation(fields: [awardId], references: [id], onDelete: Cascade)
  user    User?        @relation(fields: [userId], references: [id], onDelete: Cascade)
  project Project      @relation(fields: [projectId], references: [id], onDelete: Cascade)

  @@unique([awardId, userId, projectId])
  @@index([awardId])
  @@index([userId])
  @@index([projectId])
  @@index([awardId, isAudienceVote])
}
```

---
## Eligibility System Deep Dive

### Eligibility Modes

#### 1. AI_SUGGESTED

AI analyzes all projects and suggests eligible ones based on plain-language criteria.

**Config JSON:**
```typescript
type AISuggestedConfig = {
  criteriaText: string         // "Projects using innovative ocean tech"
  confidenceThreshold: number  // 0.0-1.0 (default: 0.7)
  autoAcceptAbove: number      // Auto-accept above this (default: 0.9)
  requireManualReview: boolean // All results need admin review (default: false)
  sourceRoundId?: string       // Only projects from this round
}
```

**Flow:**
1. Admin triggers AI eligibility analysis
2. AI processes projects in batches (anonymized)
3. AI returns: `{ projectId, eligible, confidence, reasoning }`
4. High-confidence results are auto-applied
5. Medium-confidence results are flagged for review
6. Low-confidence results are rejected (or flagged if `requireManualReview: true`)

**UI:**
```
┌─────────────────────────────────────────────────────────────┐
│ Innovation Award — AI Eligibility Analysis                  │
├─────────────────────────────────────────────────────────────┤
│ Status: Running... (47/120 projects analyzed)               │
│ [██████████░░░░░░░░░░░░░░] 39%                              │
│                                                             │
│ Results So Far:                                             │
│ ✓ Auto-Accepted (confidence > 0.9): 12 projects             │
│ ⚠ Flagged for Review (0.6-0.9): 23 projects                 │
│ ✗ Rejected (< 0.6): 12 projects                             │
│                                                             │
│ [View Flagged Projects] [Stop Analysis]                     │
└─────────────────────────────────────────────────────────────┘
```
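The thresholding in steps 4-6 can be sketched as a pure function. This is a minimal sketch, assuming the `AISuggestedConfig` fields above; the `AiResult` shape is a simplified stand-in for the AI response, not the platform's actual type.

```typescript
// Sketch of the confidence bucketing in steps 4-6 above.
// AiResult is a simplified stand-in for the AI's per-project response.
type AiResult = { projectId: string; eligible: boolean; confidence: number };

type Bucket = "AUTO_ACCEPTED" | "FLAGGED" | "REJECTED";

function bucketResult(
  result: AiResult,
  config: { confidenceThreshold: number; autoAcceptAbove: number; requireManualReview: boolean }
): Bucket {
  // Ineligible or low-confidence results are rejected outright,
  // unless every result must go through manual review.
  if (!result.eligible || result.confidence < config.confidenceThreshold) {
    return config.requireManualReview ? "FLAGGED" : "REJECTED";
  }
  // High-confidence results are auto-applied (unless manual review is forced).
  if (result.confidence >= config.autoAcceptAbove && !config.requireManualReview) {
    return "AUTO_ACCEPTED";
  }
  // Medium confidence: flag for admin review.
  return "FLAGGED";
}
```

With the defaults above, a confidence of 0.95 auto-accepts, 0.8 flags, and anything below 0.7 rejects.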
#### 2. MANUAL

Admin manually selects eligible projects.

**Config JSON:**
```typescript
type ManualConfig = {
  sourceRoundId?: string // Limit to projects from a specific round
  categoryFilter?: "STARTUP" | "BUSINESS_CONCEPT"
  tagFilters?: string[]  // Only projects with these tags
}
```

#### 3. ALL_ELIGIBLE

All projects in the competition are automatically eligible.

**Config JSON:**
```typescript
type AllEligibleConfig = {
  minimumStatus?: ProjectStatus // e.g., "SEMIFINALIST" or above
  excludeWithdrawn: boolean     // Exclude WITHDRAWN projects (default: true)
}
```

#### 4. ROUND_BASED

All projects that reach a specific round are automatically eligible.

**Config JSON:**
```typescript
type RoundBasedConfig = {
  sourceRoundId: string  // Required: which round
  requiredState: ProjectRoundStateValue // PASSED, COMPLETED, etc.
  autoUpdate: boolean    // Auto-update when projects advance (default: true)
}
```

**Example:**
```json
{
  "sourceRoundId": "round-5-jury-2",
  "requiredState": "PASSED",
  "autoUpdate": true
}
```
### Admin Override System

**All eligibility modes support admin override:**

```prisma
model AwardEligibility {
  id        String @id @default(cuid())
  awardId   String
  projectId String

  // Original determination
  method          EligibilityMethod @default(AUTO) // AUTO, AI, MANUAL
  eligible        Boolean           @default(false)
  aiReasoningJson Json?             @db.JsonB

  // Override
  overriddenBy   String?
  overriddenAt   DateTime?
  overrideReason String?   @db.Text

  // Final decision
  finalEligible Boolean // Computed: overridden ? override : original

  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt

  // Relations
  award            SpecialAward @relation(fields: [awardId], references: [id], onDelete: Cascade)
  project          Project      @relation(fields: [projectId], references: [id], onDelete: Cascade)
  overriddenByUser User?        @relation("AwardEligibilityOverriddenBy", fields: [overriddenBy], references: [id])

  @@unique([awardId, projectId])
  @@index([awardId, eligible])
  @@index([awardId, finalEligible])
}
```
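The `finalEligible` computation noted in the schema comment can be sketched as follows. Note that the model does not store the overriding value itself, so `overrideEligible` here is a hypothetical field added purely for illustration.

```typescript
// Resolve the final eligibility decision: an admin override, when present,
// wins over the original (AUTO / AI / MANUAL) determination.
// `overrideEligible` is hypothetical — the model above records who/when/why,
// and this sketch assumes the overriding value is carried alongside.
type EligibilityRecord = {
  eligible: boolean;           // original determination
  overriddenBy: string | null; // admin user id, if overridden
  overrideEligible?: boolean;  // hypothetical: the value the admin set
};

function resolveFinalEligible(rec: EligibilityRecord): boolean {
  if (rec.overriddenBy !== null && rec.overrideEligible !== undefined) {
    return rec.overrideEligible;
  }
  return rec.eligible;
}
```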
---
## Award Jury Groups

### Integration with JuryGroup Model

Awards can have:
1. **Dedicated Jury** — Own `JuryGroup` with unique members
2. **Shared Jury** — Reuse an existing competition jury group (e.g., Jury 2)
3. **Mixed Jury** — Some overlap with the main jury, some unique members

**Example:**
```typescript
// Dedicated jury for the Innovation Award
const innovationJury = await prisma.juryGroup.create({
  data: {
    competitionId: "comp-2026",
    name: "Innovation Award Jury",
    slug: "innovation-jury",
    description: "Technology and innovation experts",
    defaultMaxAssignments: 15,
    defaultCapMode: "SOFT",
    categoryQuotasEnabled: false,
  }
})

// Add members (can overlap with the main jury)
await prisma.juryGroupMember.createMany({
  data: [
    { juryGroupId: innovationJury.id, userId: "user-tech-1", isLead: true },
    { juryGroupId: innovationJury.id, userId: "user-tech-2" },
    { juryGroupId: innovationJury.id, userId: "jury-2-member-overlap" }, // Also on Jury 2
  ]
})

// Link to the award
await prisma.specialAward.update({
  where: { id: awardId },
  data: { juryGroupId: innovationJury.id }
})
```
### Award Jury Assignment

#### For STAY_IN_MAIN Mode

Award jury members evaluate the same projects as the main jury, but with award-specific criteria.

**Assignment Creation:**
```typescript
// Main jury assignments (created by the round)
Assignment { userId: "jury-2-member-1", projectId: "proj-A", roundId: "round-5", juryGroupId: "jury-2" }

// Award jury assignments (created separately, same round)
Assignment { userId: "innovation-jury-1", projectId: "proj-A", roundId: "round-5", juryGroupId: "innovation-jury" }
```

**Evaluation:**
- Award jury uses the `evaluationFormId` linked to the award
- Evaluations stored separately (different `assignmentId`)
- Both juries can evaluate the same project in the same round

#### For SEPARATE_POOL Mode

The award has its own assignment workflow, potentially for a subset of projects.

---
## Award Evaluation Flow

### STAY_IN_MAIN Evaluation

**Timeline:**
```
Round 5: Jury 2 Evaluation (Main)
├─ Opens: 2026-03-01
├─ Main Jury evaluates with the standard form
├─ Innovation Award Jury evaluates with the innovation form
└─ Closes: 2026-03-15

Award results calculated separately but announced together
```

**Step-by-Step:**

1. **Setup Phase**
   - Admin creates `SpecialAward { evaluationMode: "STAY_IN_MAIN", evaluationRoundId: "round-5" }`
   - Admin creates an award-specific `EvaluationForm` with innovation criteria
   - Admin creates a `JuryGroup` for the Innovation Award
   - Admin adds members to the jury group

2. **Eligibility Phase**
   - Eligibility determined (AI / manual / round-based)
   - Only eligible projects are evaluated by the award jury

3. **Assignment Phase**
   - When Round 5 opens, assignments are created for the award jury
   - Each award juror is assigned eligible projects
   - Award assignments reference the same `roundId` as the main evaluation

4. **Evaluation Phase**
   - Award jurors see projects in their dashboard
   - The form shows award-specific criteria
   - Evaluations stored with `formId` = innovation form

5. **Results Phase**
   - Scores aggregated separately from the main jury
   - Winner selection (jury vote, admin decision, etc.)
   - Results feed into the confirmation round

### SEPARATE_POOL Evaluation

**Timeline:**
```
Round 5: Jury 2 Evaluation (Main) — March 1-15
        ↓
Round 6: Finalist Selection
        ↓
Impact Award Evaluation (Separate) — March 20 - April 5
├─ Own voting window
├─ Own evaluation form
├─ Impact Award Jury evaluates finalists
└─ Results: April 10
```

---
## Audience Voting for Awards

### Voting Modes

#### JURY_ONLY

Only jury members vote. The standard model.

#### AUDIENCE_ONLY

Only the audience (public) votes. No jury involvement.

**Config:**
```typescript
type AudienceOnlyConfig = {
  requireIdentification: boolean // Require email/phone (default: false)
  votesPerPerson: number         // Max votes per person (default: 1)
  allowRanking: boolean          // Ranked-choice (default: false)
  maxChoices?: number            // For ranked mode
}
```
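When `allowRanking` is enabled, one simple tally over ranked ballots is a first-choice count. This is a sketch only — the actual tallying method is governed by `AwardScoringMode`, and the `RankedBallot` shape is an assumption.

```typescript
// Minimal first-choice tally for ranked audience ballots — a sketch, not
// the platform's mandated tally. Index 0 of projectIds is the first choice.
type RankedBallot = { projectIds: string[] };

function tallyFirstChoices(ballots: RankedBallot[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const ballot of ballots) {
    const first = ballot.projectIds[0];
    if (first === undefined) continue; // skip empty ballots
    counts.set(first, (counts.get(first) ?? 0) + 1);
  }
  return counts;
}
```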
#### COMBINED

Jury + audience votes combined with weighted scoring.

**Config:**
```typescript
type CombinedConfig = {
  audienceWeight: number // 0.0-1.0 (e.g., 0.3 = 30% audience, 70% jury)
  juryWeight: number     // 0.0-1.0 (audienceWeight + juryWeight should sum to 1.0)
  requireMinimumAudienceVotes: number // Min votes for validity (default: 50)
  showAudienceResultsToJury: boolean  // Jury sees audience results (default: false)
}
```

**Scoring Calculation:**
```typescript
function calculateCombinedScore(
  juryScores: number[],
  audienceVoteCount: number,
  totalAudienceVotes: number,
  config: CombinedConfig
): number {
  const juryAvg = juryScores.reduce((a, b) => a + b, 0) / juryScores.length
  // Guard against division by zero when no audience votes have been cast
  const audiencePercent = totalAudienceVotes > 0
    ? audienceVoteCount / totalAudienceVotes
    : 0

  // Normalize jury score to 0-1 (assuming a 1-10 scale)
  const normalizedJuryScore = juryAvg / 10

  const finalScore =
    (normalizedJuryScore * config.juryWeight) +
    (audiencePercent * config.audienceWeight)

  return finalScore
}
```
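As a worked example of the formula above, assuming a 1-10 jury scale and weights of 0.7 jury / 0.3 audience:

```typescript
// Worked example of the combined-score formula above.
// Jury scores 8, 9, 7 on a 1-10 scale -> average 8 -> normalized 0.8.
// Audience: 30 of 100 total votes -> 0.3.
const juryScores = [8, 9, 7];
const juryAvg = juryScores.reduce((a, b) => a + b, 0) / juryScores.length; // 8
const normalizedJury = juryAvg / 10;  // 0.8
const audiencePercent = 30 / 100;     // 0.3
// 0.8 * 0.7 + 0.3 * 0.3 = 0.56 + 0.09 = 0.65 (within floating-point rounding)
const finalScore = normalizedJury * 0.7 + audiencePercent * 0.3;
```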
---
## Admin Experience

### Award Management Dashboard

```
┌─────────────────────────────────────────────────────────────────────────────┐
│ MOPC 2026 — Special Awards                                    [+ New Award] │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│ ┌───────────────────────────────────────────────────────────────────────┐   │
│ │ Innovation Award                                             [Edit ▼] │   │
│ │ Mode: Stay in Main (Jury 2 Evaluation) • Status: EVALUATION           │   │
│ ├───────────────────────────────────────────────────────────────────────┤   │
│ │ Eligible Projects: 18 / 20 finalists                                  │   │
│ │ Jury: Innovation Jury (5 members)                                     │   │
│ │ Evaluations: 72 / 90 (80% complete)                                   │   │
│ │ Voting Closes: March 15, 2026                                         │   │
│ │                                                                       │   │
│ │ [View Eligibility]  [View Evaluations]  [Select Winner]               │   │
│ └───────────────────────────────────────────────────────────────────────┘   │
│                                                                             │
│ ┌───────────────────────────────────────────────────────────────────────┐   │
│ │ Community Impact Award                                       [Edit ▼] │   │
│ │ Mode: Separate Pool • Status: DRAFT                                   │   │
│ ├───────────────────────────────────────────────────────────────────────┤   │
│ │ Eligible Projects: Not yet determined (AI pending)                    │   │
│ │ Jury: Not assigned                                                    │   │
│ │ Voting Window: Not set                                                │   │
│ │                                                                       │   │
│ │ [Configure Eligibility]  [Set Up Jury]  [Set Timeline]                │   │
│ └───────────────────────────────────────────────────────────────────────┘   │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘
```

---
## Integration with Main Flow

### Awards Reference Main Competition Projects

Awards don't create their own project pool — they reference existing competition projects.

**Data Relationship:**
```
Competition
├── Projects (shared pool)
│   ├── Project A
│   ├── Project B
│   └── Project C
│
├── Main Rounds (linear flow)
│   ├── Round 1: Intake
│   ├── Round 5: Jury 2 Evaluation
│   └── Round 7: Live Finals
│
└── Special Awards (parallel evaluation)
    ├── Innovation Award
    │   ├── AwardEligibility { projectId: "A", eligible: true }
    │   └── AwardEligibility { projectId: "B", eligible: false }
    └── Impact Award
        ├── AwardEligibility { projectId: "A", eligible: true }
        └── AwardEligibility { projectId: "C", eligible: true }
```

### Award Results Feed into Confirmation Round

**Confirmation Round Integration:**

The confirmation round (Round 8) includes:
1. Main competition winners (1st, 2nd, 3rd per category)
2. Special award winners

**WinnerProposal Extension:**
```prisma
model WinnerProposal {
  id            String               @id @default(cuid())
  competitionId String
  category      CompetitionCategory? // Null for award winners

  // Main competition or award
  proposalType WinnerProposalType @default(MAIN_COMPETITION)
  awardId      String? // If proposalType = SPECIAL_AWARD

  status           WinnerProposalStatus @default(PENDING)
  rankedProjectIds String[]

  // ... rest of fields ...
}

enum WinnerProposalType {
  MAIN_COMPETITION // Main 1st/2nd/3rd place
  SPECIAL_AWARD    // Award winner
}
```

---
## API Changes

### New tRPC Procedures

```typescript
// src/server/routers/award-redesign.ts

export const awardRedesignRouter = router({
  /**
   * Create a new special award
   */
  create: adminProcedure
    .input(z.object({
      competitionId: z.string(),
      name: z.string().min(1).max(255),
      description: z.string().optional(),
      eligibilityMode: z.enum(['AI_SUGGESTED', 'MANUAL', 'ALL_ELIGIBLE', 'ROUND_BASED']),
      evaluationMode: z.enum(['STAY_IN_MAIN', 'SEPARATE_POOL']),
      votingMode: z.enum(['JURY_ONLY', 'AUDIENCE_ONLY', 'COMBINED']),
      scoringMode: z.enum(['PICK_WINNER', 'RANKED', 'SCORED']),
      maxWinners: z.number().int().min(1).default(1),
    }))
    .mutation(async ({ ctx, input }) => { /* ... */ }),

  /**
   * Run eligibility determination
   */
  runEligibility: adminProcedure
    .input(z.object({ awardId: z.string() }))
    .mutation(async ({ ctx, input }) => { /* ... */ }),

  /**
   * Cast a vote (jury or audience)
   */
  vote: protectedProcedure
    .input(z.object({
      awardId: z.string(),
      projectId: z.string(),
      rank: z.number().int().min(1).optional(),
      score: z.number().min(0).max(10).optional(),
    }))
    .mutation(async ({ ctx, input }) => { /* ... */ }),

  /**
   * Select winner(s)
   */
  selectWinners: adminProcedure
    .input(z.object({
      awardId: z.string(),
      winnerProjectIds: z.array(z.string()).min(1),
      selectionMethod: z.enum(['JURY_VOTE', 'AUDIENCE_VOTE', 'COMBINED', 'ADMIN_DECISION']),
    }))
    .mutation(async ({ ctx, input }) => { /* ... */ }),
})
```

---
## Service Functions

### Award Service Enhancements

```typescript
// src/server/services/award-service.ts

/**
 * Run round-based eligibility
 */
export async function runRoundBasedEligibility(
  award: SpecialAward,
  prisma = getPrisma()
) {
  const config = award.eligibilityCriteria as RoundBasedConfig

  if (!config.sourceRoundId) {
    throw new Error('Round-based eligibility requires sourceRoundId')
  }

  // Get all projects in the specified round with the required state
  const projectRoundStates = await prisma.projectRoundState.findMany({
    where: {
      roundId: config.sourceRoundId,
      state: config.requiredState ?? 'PASSED',
    },
    select: { projectId: true }
  })

  // Create/update eligibility records
  let created = 0
  let updated = 0

  for (const prs of projectRoundStates) {
    const existing = await prisma.awardEligibility.findUnique({
      where: {
        awardId_projectId: {
          awardId: award.id,
          projectId: prs.projectId
        }
      }
    })

    if (existing) {
      await prisma.awardEligibility.update({
        where: { id: existing.id },
        data: { eligible: true, method: 'AUTO' }
      })
      updated++
    } else {
      await prisma.awardEligibility.create({
        data: {
          awardId: award.id,
          projectId: prs.projectId,
          eligible: true,
          method: 'AUTO',
        }
      })
      created++
    }
  }

  return { created, updated, total: projectRoundStates.length }
}

/**
 * Calculate combined jury + audience score
 */
export function calculateCombinedScore(
  juryScores: number[],
  audienceVoteCount: number,
  totalAudienceVotes: number,
  juryWeight: number,
  audienceWeight: number
): number {
  if (juryScores.length === 0) {
    throw new Error('Cannot calculate combined score without jury votes')
  }

  const juryAvg = juryScores.reduce((a, b) => a + b, 0) / juryScores.length
  const normalizedJuryScore = juryAvg / 10 // Assume 1-10 scale

  const audiencePercent = totalAudienceVotes > 0
    ? audienceVoteCount / totalAudienceVotes
    : 0

  const finalScore =
    (normalizedJuryScore * juryWeight) +
    (audiencePercent * audienceWeight)

  return finalScore
}

/**
 * Create award jury assignments
 */
export async function createAwardAssignments(
  awardId: string,
  prisma = getPrisma()
) {
  const award = await prisma.specialAward.findUniqueOrThrow({
    where: { id: awardId },
    include: {
      juryGroup: {
        include: { members: true }
      }
    }
  })

  if (!award.juryGroupId || !award.juryGroup) {
    throw new Error('Award must have a jury group to create assignments')
  }

  const eligibleProjects = await getEligibleProjects(awardId, prisma)

  const assignments = []

  for (const project of eligibleProjects) {
    for (const member of award.juryGroup.members) {
      assignments.push({
        userId: member.userId,
        projectId: project.id,
        roundId: award.evaluationRoundId ?? null,
        juryGroupId: award.juryGroupId,
        method: 'MANUAL' as const,
      })
    }
  }

  await prisma.assignment.createMany({
    data: assignments,
    skipDuplicates: true,
  })

  return { created: assignments.length }
}
```

---
## Edge Cases

| Scenario | Handling |
|----------|----------|
| **Project eligible for multiple awards** | Allowed — a project can win multiple awards |
| **Jury member on both main and award juries** | Allowed — separate assignments, separate evaluations |
| **Award voting ends before main results** | Award winner held until main results are finalized, then announced together |
| **Award eligibility changes mid-voting** | Admin override can remove eligibility; active votes invalidated |
| **Audience vote spam/fraud** | IP rate limiting, device fingerprinting, email verification, admin review |
| **Tie in award voting** | Admin decision or re-vote (configurable) |
| **Award jury doesn't complete evaluations** | Admin can close voting with partial data or extend the deadline |
| **Project withdrawn after becoming eligible** | Eligibility auto-removed; votes invalidated |
| **Award criteria change after eligibility** | Re-run eligibility or grandfather existing eligible projects |
| **No eligible projects for award** | Award status set to DRAFT/ARCHIVED; no voting |
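The tie case in the table above can be detected with a simple score comparison. A minimal sketch — ties then route to admin decision or re-vote as configured:

```typescript
// Detect whether the top score is shared by more than one project.
// Supports the "Tie in award voting" edge case above; a sketch only.
function topScorers(scores: Map<string, number>): string[] {
  let best = -Infinity;
  const leaders: string[] = [];
  for (const [projectId, score] of scores) {
    if (score > best) {
      best = score;
      leaders.length = 0;      // new leader: reset the list
      leaders.push(projectId);
    } else if (score === best) {
      leaders.push(projectId); // tie with the current leader
    }
  }
  return leaders; // length > 1 means a tie
}
```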
---
## Integration Points

### With Evaluation System
- Awards use `EvaluationForm` for criteria
- Award evaluations stored in the `Evaluation` table with `formId` linkage
- The assignment system handles both main and award assignments

### With Jury Groups
- Awards can link to an existing `JuryGroup` or have dedicated groups
- Jury members can overlap between main and award juries
- Caps and quotas are honored for award assignments

### With Confirmation Round
- Award winners included in the `WinnerProposal` system
- The confirmation flow handles both main and award winners
- The approval workflow requires sign-off on all winners

### With Notification System
- Eligibility notifications sent to eligible teams
- Voting reminders sent to award jurors
- Winner announcements coordinated with main results

---
## Summary

The redesigned Special Awards system provides:

1. **Flexibility**: Two modes (STAY_IN_MAIN, SEPARATE_POOL) cover all use cases
2. **Integration**: Deep integration with competition rounds, juries, and results
3. **Autonomy**: Awards can run independently or piggyback on the main flow
4. **Transparency**: AI eligibility with admin override, full audit trail
5. **Engagement**: Audience voting support with anti-fraud measures
6. **Scalability**: Support for multiple awards, multiple winners, complex scoring

This architecture eliminates the Track dependency, integrates awards as standalone entities, and provides a robust, flexible system for recognizing excellence across multiple dimensions while maintaining the integrity of the main competition flow.
960
docs/claude-architecture-redesign/12-jury-groups.md
Normal file
@@ -0,0 +1,960 @@
# Jury Groups — Multi-Jury Architecture

## Overview

The **JuryGroup** model is the backbone of the redesigned jury system. Instead of implicit jury membership derived from per-stage assignments, juries are now **first-class named entities** — "Jury 1", "Jury 2", "Jury 3", "Innovation Award Jury" — with explicit membership, configurable assignment caps, and per-juror overrides.

### Why This Matters

| Before (Current) | After (Redesigned) |
|---|---|
| Juries are implicit — "Jury 1" exists only in the admin's head | JuryGroup is a named model with `id`, `name`, `description` |
| Assignment caps are per-stage config | Caps are per-juror on JuryGroupMember (with group defaults) |
| No concept of "which jury is this juror on" | JuryGroupMember links User to JuryGroup explicitly |
| Same juror can't be on multiple juries (no grouping) | A User can belong to multiple JuryGroups |
| Category quotas don't exist | Per-juror STARTUP/CONCEPT ratio preferences |
| No juror onboarding preferences | JuryGroupMember stores language, expertise, preferences |

### Jury Groups in the 8-Step Flow

```
Round 1: INTAKE        — no jury
Round 2: FILTERING     — no jury (AI-powered)
Round 3: EVALUATION    — ► Jury 1 (semi-finalist selection)
Round 4: SUBMISSION    — no jury
Round 5: EVALUATION    — ► Jury 2 (finalist selection)
Round 6: MENTORING     — no jury
Round 7: LIVE_FINAL    — ► Jury 3 (live finals scoring)
Round 8: CONFIRMATION  — ► Jury 3 (winner confirmation)

Special Awards:
  Innovation Award     — ► Innovation Jury (may overlap with Jury 2)
  Impact Award         — ► Impact Jury (dedicated members)
```

---
## Data Model

### JuryGroup

```prisma
model JuryGroup {
  id            String  @id @default(cuid())
  competitionId String
  name          String  // "Jury 1", "Jury 2", "Jury 3", "Innovation Award Jury"
  description   String? // "Semi-finalist evaluation jury — reviews 60+ applications"
  isActive      Boolean @default(true)

  // Default assignment configuration for this jury
  defaultMaxAssignments Int     @default(20)
  defaultCapMode        CapMode @default(SOFT)
  softCapBuffer         Int     @default(2) // Extra assignments allowed above a soft cap

  // Default category quotas (per juror)
  defaultCategoryQuotas Json? @db.JsonB
  // Shape: { "STARTUP": { min: 5, max: 12 }, "BUSINESS_CONCEPT": { min: 5, max: 12 } }

  // Onboarding
  onboardingFormId String? // Link to onboarding form (expertise, preferences)

  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt

  // Relations
  competition Competition       @relation(...)
  members     JuryGroupMember[]
  rounds      Round[]           // Rounds this jury is assigned to
  assignments Assignment[]      // Assignments made through this jury group
}
```
### JuryGroupMember

```prisma
model JuryGroupMember {
  id          String @id @default(cuid())
  juryGroupId String
  userId      String
  role        String @default("MEMBER") // "MEMBER" | "CHAIR" | "OBSERVER"

  // Per-juror overrides (null = use group defaults)
  maxAssignmentsOverride Int?
  capModeOverride        CapMode?
  categoryQuotasOverride Json?    @db.JsonB

  // Juror preferences (set during onboarding)
  preferredStartupRatio Float?   // 0.0–1.0 (e.g., 0.6 = 60% startups)
  expertiseTags         String[] // ["ocean-tech", "marine-biology", "finance"]
  languagePreferences   String[] // ["en", "fr"]
  notes                 String?  // Admin notes about this juror

  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt

  // Relations
  juryGroup JuryGroup @relation(...)
  user      User      @relation(...)

  @@unique([juryGroupId, userId])
  @@index([juryGroupId])
  @@index([userId])
}
```
### CapMode Enum
|
||||
|
||||
```prisma
|
||||
enum CapMode {
|
||||
HARD // Absolute maximum — algorithm cannot exceed under any circumstance
|
||||
SOFT // Target maximum — can exceed by softCapBuffer for load balancing
|
||||
NONE // No cap — unlimited assignments (use with caution)
|
||||
}
|
||||
```
|
||||
|
||||
### Cap Behavior
|
||||
|
||||
| Mode | Max | Buffer | Effective Limit | Behavior |
|
||||
|------|-----|--------|-----------------|----------|
|
||||
| HARD | 20 | — | 20 | Algorithm stops at exactly 20. No exceptions. |
|
||||
| SOFT | 20 | 2 | 22 | Algorithm targets 20 but can go to 22 if needed for balanced distribution |
|
||||
| NONE | — | — | ∞ | No limit. Juror can be assigned any number of projects |

```typescript
function getEffectiveCap(member: JuryGroupMember, group: JuryGroup): number | null {
  const capMode = member.capModeOverride ?? group.defaultCapMode;
  const maxAssignments = member.maxAssignmentsOverride ?? group.defaultMaxAssignments;

  switch (capMode) {
    case 'HARD':
      return maxAssignments;
    case 'SOFT':
      return maxAssignments + group.softCapBuffer;
    case 'NONE':
      return null; // no limit
  }
}

function canAssignMore(
  member: JuryGroupMember,
  group: JuryGroup,
  currentCount: number
): { allowed: boolean; reason?: string } {
  const cap = getEffectiveCap(member, group);

  if (cap === null) return { allowed: true };

  if (currentCount >= cap) {
    // Re-resolve the cap mode for the message (it is local to getEffectiveCap)
    const capMode = member.capModeOverride ?? group.defaultCapMode;
    return {
      allowed: false,
      reason: `Juror has reached ${capMode === 'HARD' ? 'hard' : 'soft'} cap of ${cap} assignments`,
    };
  }

  return { allowed: true };
}
```

---

## Category Quotas

### How Quotas Work

Each jury group (and optionally each member) can define minimum and maximum assignments per competition category. This ensures balanced coverage:

```typescript
type CategoryQuotas = {
  STARTUP: { min: number; max: number };
  BUSINESS_CONCEPT: { min: number; max: number };
};

// Example: group default
const defaultQuotas: CategoryQuotas = {
  STARTUP: { min: 5, max: 12 },
  BUSINESS_CONCEPT: { min: 5, max: 12 },
};
```

### Quota Resolution

Per-juror overrides take precedence over group defaults:

```typescript
function getEffectiveQuotas(
  member: JuryGroupMember,
  group: JuryGroup
): CategoryQuotas | null {
  if (member.categoryQuotasOverride) {
    return member.categoryQuotasOverride as CategoryQuotas;
  }
  if (group.defaultCategoryQuotas) {
    return group.defaultCategoryQuotas as CategoryQuotas;
  }
  return null; // no quotas — assign freely
}
```

### Quota Enforcement During Assignment

```typescript
function checkCategoryQuota(
  member: JuryGroupMember,
  group: JuryGroup,
  category: CompetitionCategory,
  currentCategoryCount: number
): { allowed: boolean; warning?: string } {
  const quotas = getEffectiveQuotas(member, group);
  if (!quotas) return { allowed: true };

  const categoryQuota = quotas[category];
  if (!categoryQuota) return { allowed: true };

  if (currentCategoryCount >= categoryQuota.max) {
    return {
      allowed: false,
      warning: `Juror has reached max ${categoryQuota.max} for ${category}`,
    };
  }

  return { allowed: true };
}

function checkMinimumQuotasMet(
  member: JuryGroupMember,
  group: JuryGroup,
  categoryCounts: Record<CompetitionCategory, number>
): { met: boolean; deficits: string[] } {
  const quotas = getEffectiveQuotas(member, group);
  if (!quotas) return { met: true, deficits: [] };

  const deficits: string[] = [];
  for (const [category, quota] of Object.entries(quotas)) {
    const count = categoryCounts[category as CompetitionCategory] ?? 0;
    if (count < quota.min) {
      deficits.push(`${category}: ${count}/${quota.min} minimum`);
    }
  }

  return { met: deficits.length === 0, deficits };
}
```

---

## Preferred Startup Ratio

Each juror can express a preference for what percentage of their assignments should be Startups vs Concepts.

```typescript
// On JuryGroupMember (Prisma):
//   preferredStartupRatio Float?   // 0.0 to 1.0

// Usage in assignment algorithm:
function calculateRatioAlignmentScore(
  member: JuryGroupMember,
  candidateCategory: CompetitionCategory,
  currentStartupCount: number,
  currentConceptCount: number
): number {
  const preference = member.preferredStartupRatio;
  if (preference === null || preference === undefined) return 0; // no preference

  const totalAfterAssignment = currentStartupCount + currentConceptCount + 1;
  const startupCountAfter = candidateCategory === 'STARTUP'
    ? currentStartupCount + 1
    : currentStartupCount;
  const ratioAfter = startupCountAfter / totalAfterAssignment;

  // Score: how close does adding this assignment bring the ratio to preference?
  const deviation = Math.abs(ratioAfter - preference);
  // Scale: 0 deviation = 10pts, 0.5 deviation = 0pts
  return Math.max(0, 10 * (1 - deviation * 2));
}
```

This score feeds into the assignment algorithm alongside tag overlap, workload balance, and geo-diversity.
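How the component scores are combined into a single candidate score is not fixed here; a minimal sketch might weight them linearly. The `combineScores` helper and the weight values below are illustrative assumptions, not part of the platform:

```typescript
// Hypothetical weighted combination of the per-pair scoring components.
// Field names mirror the AssignmentPreview breakdown; the weights are
// illustrative assumptions, not values prescribed by this document.
type ScoreBreakdown = {
  tagOverlap: number;      // 0–10
  workloadBalance: number; // 0–10
  ratioAlignment: number;  // 0–10 (from calculateRatioAlignmentScore)
  geoDiversity: number;    // 0–10
};

const WEIGHTS: Record<keyof ScoreBreakdown, number> = {
  tagOverlap: 0.4,
  workloadBalance: 0.3,
  ratioAlignment: 0.2,
  geoDiversity: 0.1,
};

function combineScores(b: ScoreBreakdown): number {
  // Weighted sum; with all components at 10 the result is 10.
  return (Object.keys(WEIGHTS) as (keyof ScoreBreakdown)[]).reduce(
    (sum, k) => sum + WEIGHTS[k] * b[k],
    0
  );
}
```

Tuning these weights per round (e.g., emphasizing expertise early, workload balance late) would be a natural extension.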

---

## Juror Roles

Each JuryGroupMember has a `role` field:

| Role | Capabilities | Description |
|------|-------------|-------------|
| `MEMBER` | Evaluate assigned projects, vote in live finals, confirm winners | Standard jury member |
| `CHAIR` | All MEMBER capabilities + view all evaluations, moderate discussions, suggest assignments | Jury chairperson — has broader visibility |
| `OBSERVER` | View evaluations (read-only), no scoring or voting | Observes the jury process without participating |

### Role-Based Visibility

```typescript
function getJurorVisibility(
  role: string,
  ownAssignments: Assignment[]
): VisibilityScope {
  switch (role) {
    case 'CHAIR':
      return {
        canSeeAllEvaluations: true,
        canSeeAllAssignments: true,
        canModerateDiscussions: true,
        canSuggestReassignments: true,
      };
    case 'OBSERVER':
      return {
        canSeeAllEvaluations: true, // read-only
        canSeeAllAssignments: true,
        canModerateDiscussions: false,
        canSuggestReassignments: false,
      };
    case 'MEMBER':
    default:
      // Unknown roles fall back to the most restrictive scope
      return {
        canSeeAllEvaluations: false, // only their own
        canSeeAllAssignments: false,
        canModerateDiscussions: false,
        canSuggestReassignments: false,
      };
  }
}
```

---

## Multi-Jury Membership

A single user can be on multiple jury groups. This is common for:
- A juror on Jury 2 (finalist selection) AND the Innovation Award Jury
- A senior juror on Jury 1 AND Jury 3 (semi-finalist selection + live finals)

### Overlap Handling

```typescript
// Get all jury groups for a user in a competition
async function getUserJuryGroups(
  userId: string,
  competitionId: string
): Promise<JuryGroupMember[]> {
  return prisma.juryGroupMember.findMany({
    where: {
      userId,
      juryGroup: { competitionId },
    },
    include: { juryGroup: true },
  });
}

// Check if user is on a specific jury
async function isUserOnJury(
  userId: string,
  juryGroupId: string
): Promise<boolean> {
  const member = await prisma.juryGroupMember.findUnique({
    where: { juryGroupId_userId: { juryGroupId, userId } },
  });
  return member !== null;
}
```

### Cross-Jury COI Propagation

When a juror declares a Conflict of Interest for a project in one jury group, it should propagate to all their jury memberships:

```typescript
async function propagateCOI(
  userId: string,
  projectId: string,
  competitionId: string,
  reason: string
): Promise<void> {
  // Find all jury groups this user is on
  const memberships = await getUserJuryGroups(userId, competitionId);

  for (const membership of memberships) {
    // Find assignments for this user+project in each jury group
    const assignments = await prisma.assignment.findMany({
      where: {
        userId,
        projectId,
        juryGroupId: membership.juryGroupId,
      },
    });

    for (const assignment of assignments) {
      // Check if COI already declared
      const existing = await prisma.conflictOfInterest.findUnique({
        where: { assignmentId: assignment.id },
      });

      if (!existing) {
        await prisma.conflictOfInterest.create({
          data: {
            assignmentId: assignment.id,
            reason: `Auto-propagated from ${membership.juryGroup.name}: ${reason}`,
            declared: true,
          },
        });
      }
    }
  }
}
```

---

## Jury Group Lifecycle

### States

```
DRAFT → ACTIVE → LOCKED → ARCHIVED
```

| State | Description | Operations Allowed |
|-------|-------------|-------------------|
| DRAFT | Being configured. Members can be added/removed freely | Add/remove members, edit settings |
| ACTIVE | Jury is in use. Assignments are being made or evaluations are in progress | Add members (with warning), edit per-juror settings |
| LOCKED | Evaluation or voting is in progress. No membership changes | Edit per-juror notes only |
| ARCHIVED | Competition complete. Preserved for records | Read-only |

### State Transitions

```typescript
// Jury group activates when its linked round starts
async function activateJuryGroup(juryGroupId: string): Promise<void> {
  await prisma.juryGroup.update({
    where: { id: juryGroupId },
    data: { isActive: true },
  });
}

// Jury group locks when evaluation/voting begins
async function lockJuryGroup(juryGroupId: string): Promise<void> {
  // Prevent membership changes during active evaluation
  await prisma.juryGroup.update({
    where: { id: juryGroupId },
    data: { isActive: false }, // isActive doubles as a soft lock; a dedicated locked field could be added
  });
}
```
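If an explicit status field were added to the schema (the model above only carries `isActive`), the transition rules from the state diagram could be guarded centrally. A sketch under that assumption — `JuryGroupStatus` and `canTransition` are hypothetical names, not schema or service members:

```typescript
// Hypothetical guard for the DRAFT → ACTIVE → LOCKED → ARCHIVED lifecycle.
// An explicit status field like this is an illustrative assumption; the
// documented schema only tracks isActive.
type JuryGroupStatus = 'DRAFT' | 'ACTIVE' | 'LOCKED' | 'ARCHIVED';

const ALLOWED_TRANSITIONS: Record<JuryGroupStatus, JuryGroupStatus[]> = {
  DRAFT: ['ACTIVE'],
  ACTIVE: ['LOCKED'],
  LOCKED: ['ARCHIVED'],
  ARCHIVED: [], // terminal — read-only
};

function canTransition(from: JuryGroupStatus, to: JuryGroupStatus): boolean {
  return ALLOWED_TRANSITIONS[from].includes(to);
}
```

Centralizing the rule this way keeps `activateJuryGroup`/`lockJuryGroup` from being called out of order.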

---

## Onboarding

### Juror Onboarding Flow

When a juror is added to a JuryGroup, they go through an onboarding process:

1. **Invitation** — Admin adds juror to group → juror receives email invitation
2. **Profile Setup** — Juror fills out expertise tags, language preferences, category preference
3. **COI Pre-declaration** — Juror reviews the project list and declares any pre-existing conflicts
4. **Confirmation** — Juror confirms they understand their role and responsibilities
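Progress through these four steps could be tracked as a simple checklist. A minimal sketch — the state field names (`invitedAt`, `profileComplete`, `coiReviewed`, `confirmedAt`) are assumptions for illustration, not schema fields:

```typescript
// Illustrative onboarding progress check; field names are assumptions.
type OnboardingState = {
  invitedAt: Date | null;     // step 1: invitation sent
  profileComplete: boolean;   // step 2: expertise tags + languages submitted
  coiReviewed: boolean;       // step 3: project list reviewed, COIs declared
  confirmedAt: Date | null;   // step 4: responsibilities accepted
};

function onboardingStep(s: OnboardingState): 1 | 2 | 3 | 4 | 'done' {
  // Returns the first incomplete step, or 'done' when fully onboarded.
  if (!s.invitedAt) return 1;
  if (!s.profileComplete) return 2;
  if (!s.coiReviewed) return 3;
  if (!s.confirmedAt) return 4;
  return 'done';
}
```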

### Onboarding UI

```
┌──────────────────────────────────────────────────────────────────┐
│  Welcome to Jury 1 — Semi-finalist Evaluation                    │
│                                                                  │
│  You've been selected to evaluate projects for the               │
│  Monaco Ocean Protection Challenge 2026.                         │
│                                                                  │
│  Step 1 of 3: Your Expertise                                     │
│  ─────────────────────────────────────────────────────────────   │
│                                                                  │
│  Select your areas of expertise (used for matching):             │
│  ☑ Marine Biology        ☑ Ocean Technology                      │
│  ☐ Renewable Energy      ☑ Environmental Policy                  │
│  ☐ Finance/Investment    ☐ Social Impact                         │
│  ☐ Data Science          ☐ Education                             │
│                                                                  │
│  Preferred languages:                                            │
│  ☑ English   ☑ French   ☐ Other: [________]                      │
│                                                                  │
│  Category preference (what % Startups vs Concepts):              │
│  Startups [====●=========] Concepts                              │
│           60% / 40%                                              │
│                                                                  │
│  [ Back ]                               [ Next Step → ]          │
└──────────────────────────────────────────────────────────────────┘
```

```
┌──────────────────────────────────────────────────────────────────┐
│  Step 2 of 3: Conflict of Interest Declaration                   │
│  ─────────────────────────────────────────────────────────────   │
│                                                                  │
│  Please review the project list and declare any conflicts        │
│  of interest. A COI exists if you have a personal,               │
│  financial, or professional relationship with a project team.    │
│                                                                  │
│  ┌──────────────────────────────────────┬──────────────────┐     │
│  │ Project                              │ COI?             │     │
│  ├──────────────────────────────────────┼──────────────────┤     │
│  │ OceanClean AI                        │ ○ None           │     │
│  │ DeepReef Monitoring                  │ ● COI Declared   │     │
│  │ CoralGuard                           │ ○ None           │     │
│  │ WaveEnergy Solutions                 │ ○ None           │     │
│  │ ... (60 more projects)               │                  │     │
│  └──────────────────────────────────────┴──────────────────┘     │
│                                                                  │
│  COI Details for "DeepReef Monitoring":                          │
│  ┌──────────────────────────────────────────────────────────┐    │
│  │ Former colleague of team lead. Worked together 2022-23.  │    │
│  └──────────────────────────────────────────────────────────┘    │
│                                                                  │
│  [ Back ]                               [ Next Step → ]          │
└──────────────────────────────────────────────────────────────────┘
```

```
┌──────────────────────────────────────────────────────────────────┐
│  Step 3 of 3: Confirmation                                       │
│  ─────────────────────────────────────────────────────────────   │
│                                                                  │
│  By confirming, you agree to:                                    │
│  ☑ Evaluate assigned projects fairly and impartially             │
│  ☑ Complete evaluations by the deadline                          │
│  ☑ Maintain confidentiality of all submissions                   │
│  ☑ Report any additional conflicts of interest                   │
│                                                                  │
│  Your assignments: up to 20 projects                             │
│  Evaluation deadline: March 15, 2026                             │
│  Category target: ~12 Startups / ~8 Concepts                     │
│                                                                  │
│  [ Back ]                               [ ✓ Confirm & Start ]    │
└──────────────────────────────────────────────────────────────────┘
```

---

## Admin Jury Management

### Jury Group Dashboard

```
┌──────────────────────────────────────────────────────────────────┐
│  Jury Groups — MOPC 2026                                         │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│  ┌──────────────────────────────────────────────────────────┐    │
│  │ Jury 1 — Semi-finalist Selection                   [Edit]│    │
│  │ Members: 8 | Linked to: Round 3 | Status: ACTIVE         │    │
│  │ Cap: 20 (SOFT +2) | Avg load: 15.3 projects              │    │
│  │ ████████████████░░░░ 76% assignments complete            │    │
│  └──────────────────────────────────────────────────────────┘    │
│                                                                  │
│  ┌──────────────────────────────────────────────────────────┐    │
│  │ Jury 2 — Finalist Selection                        [Edit]│    │
│  │ Members: 6 | Linked to: Round 5 | Status: DRAFT          │    │
│  │ Cap: 15 (HARD) | Not yet assigned                        │    │
│  └──────────────────────────────────────────────────────────┘    │
│                                                                  │
│  ┌──────────────────────────────────────────────────────────┐    │
│  │ Jury 3 — Live Finals                               [Edit]│    │
│  │ Members: 5 | Linked to: Round 7, Round 8 | Status: DRAFT │    │
│  │ All finalists assigned to all jurors                     │    │
│  └──────────────────────────────────────────────────────────┘    │
│                                                                  │
│  ┌──────────────────────────────────────────────────────────┐    │
│  │ Innovation Award Jury                              [Edit]│    │
│  │ Members: 4 | Linked to: Innovation Award | Status: DRAFT │    │
│  │ Shares 2 members with Jury 2                             │    │
│  └──────────────────────────────────────────────────────────┘    │
│                                                                  │
│  [ + Create Jury Group ]                                         │
└──────────────────────────────────────────────────────────────────┘
```

### Member Management

```
┌──────────────────────────────────────────────────────────────────┐
│  Jury 1 — Member Management                                      │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│  Group Defaults: Max 20 | SOFT cap (+2) | Quotas: S:5-12 C:5-12  │
│                                                                  │
│  ┌───────┬──────────────┬──────┬─────┬──────┬──────┬──────────┐  │
│  │ Role  │ Name         │ Load │ Cap │ S/C  │ Pref │ Actions  │  │
│  ├───────┼──────────────┼──────┼─────┼──────┼──────┼──────────┤  │
│  │ CHAIR │ Dr. Martin   │  18  │ 20S │ 11/7 │ 60%  │ [Edit]   │  │
│  │ MEMBER│ Prof. Dubois │  15  │ 20S │ 9/6  │ 50%  │ [Edit]   │  │
│  │ MEMBER│ Ms. Chen     │  20  │ 20H │ 12/8 │ 60%  │ [Edit]   │  │
│  │ MEMBER│ Dr. Patel    │  12  │ 15* │ 7/5  │  —   │ [Edit]   │  │
│  │ MEMBER│ Mr. Silva    │  16  │ 20S │ 10/6 │ 70%  │ [Edit]   │  │
│  │ MEMBER│ Dr. Yamada   │  19  │ 20S │ 11/8 │ 55%  │ [Edit]   │  │
│  │ MEMBER│ Ms. Hansen   │  14  │ 20S │ 8/6  │  —   │ [Edit]   │  │
│  │ OBS   │ Mr. Berger   │  —   │  —  │  —   │  —   │ [Edit]   │  │
│  └───────┴──────────────┴──────┴─────┴──────┴──────┴──────────┘  │
│                                                                  │
│  * = per-juror override    S = SOFT    H = HARD                  │
│  S/C = Startup/Concept count    Pref = preferred startup ratio   │
│                                                                  │
│  [ + Add Member ]  [ Import from CSV ]  [ Run AI Assignment ]    │
└──────────────────────────────────────────────────────────────────┘
```

### Per-Juror Override Sheet

```
┌──────────────────────────────────────────────────────────────────┐
│  Edit Juror Settings — Dr. Patel                                 │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│  Role: [MEMBER ▼]                                                │
│                                                                  │
│  ── Assignment Overrides ──────────────────────────────────────  │
│  (Leave blank to use group defaults)                             │
│                                                                  │
│  Max assignments:  [15 ]   (group default: 20)                   │
│  Cap mode:         [HARD ▼] (group default: SOFT)                │
│                                                                  │
│  Category quotas:                                                │
│    Startups:  min [3 ]  max [10]  (group: 5-12)                  │
│    Concepts:  min [3 ]  max [8 ]  (group: 5-12)                  │
│                                                                  │
│  ── Preferences ───────────────────────────────────────────────  │
│                                                                  │
│  Preferred startup ratio: [  ] %  (blank = no preference)        │
│  Expertise tags: [marine-biology, policy, ...]                   │
│  Language: [English, French]                                     │
│                                                                  │
│  ── Notes ─────────────────────────────────────────────────────  │
│  ┌──────────────────────────────────────────────────────────┐    │
│  │ Dr. Patel requested reduced load due to conference       │    │
│  │ schedule in March. Hard cap at 15.                       │    │
│  └──────────────────────────────────────────────────────────┘    │
│                                                                  │
│  [ Cancel ]                            [ Save Changes ]          │
└──────────────────────────────────────────────────────────────────┘
```

---

## Integration with Assignment Algorithm

The assignment algorithm (see `06-round-evaluation.md`) uses JuryGroup data at every step:

### Algorithm Input

```typescript
type AssignmentInput = {
  roundId: string;
  juryGroupId: string;
  projects: Project[];
  config: {
    requiredReviewsPerProject: number;
  };
};
```

### Algorithm Steps Using JuryGroup

1. **Load jury members** — Fetch all active JuryGroupMembers with role != OBSERVER
2. **Resolve effective limits** — For each member, compute the effective cap and quotas
3. **Filter by COI** — Exclude members with a declared COI for each project
4. **Score candidates** — For each (project, juror) pair, compute:
   - Tag overlap score (expertise alignment)
   - Workload balance score (prefer jurors with fewer assignments)
   - Category ratio alignment score (prefer assignments that bring the ratio closer to preference)
   - Geo-diversity score
5. **Apply caps** — Skip jurors who have reached their effective cap
6. **Apply quotas** — Skip jurors who have reached a category max
7. **Rank and assign** — Greedily assign top-scoring pairs
8. **Validate minimums** — Check if category minimums are met; warn admin if not
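Steps 5–7 above can be sketched as a greedy loop over pre-scored pairs. This is a simplified in-memory illustration: the `greedyAssign` helper and its inputs are assumptions, COI filtering (step 3) is assumed to have happened upstream, and category quotas are omitted for brevity:

```typescript
// Simplified greedy assignment over pre-scored (project, juror) pairs.
// capFor returns the juror's effective cap (null = uncapped), mirroring
// getEffectiveCap. Illustrative sketch only, not the production service.
type ScoredPair = { userId: string; projectId: string; score: number };

function greedyAssign(
  pairs: ScoredPair[],
  reviewsPerProject: number,
  capFor: (userId: string) => number | null
): ScoredPair[] {
  const jurorLoad = new Map<string, number>();
  const projectReviews = new Map<string, number>();
  const result: ScoredPair[] = [];

  // Rank all candidate pairs by descending score, then assign greedily.
  for (const p of [...pairs].sort((a, b) => b.score - a.score)) {
    const load = jurorLoad.get(p.userId) ?? 0;
    const reviews = projectReviews.get(p.projectId) ?? 0;
    const cap = capFor(p.userId);

    if (reviews >= reviewsPerProject) continue; // project fully reviewed
    if (cap !== null && load >= cap) continue;  // juror at effective cap

    result.push(p);
    jurorLoad.set(p.userId, load + 1);
    projectReviews.set(p.projectId, reviews + 1);
  }
  return result;
}
```

Projects left under `reviewsPerProject` after the loop would surface as `UNASSIGNED_PROJECT` warnings in the preview.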

### Assignment Preview

```typescript
type AssignmentPreview = {
  assignments: {
    userId: string;
    projectId: string;
    score: number;
    breakdown: {
      tagOverlap: number;
      workloadBalance: number;
      ratioAlignment: number;
      geoDiversity: number;
    };
  }[];

  warnings: {
    type: 'CAP_EXCEEDED' | 'QUOTA_UNMET' | 'COI_SKIP' | 'UNASSIGNED_PROJECT';
    message: string;
    userId?: string;
    projectId?: string;
  }[];

  stats: {
    totalAssignments: number;
    avgLoadPerJuror: number;
    minLoad: number;
    maxLoad: number;
    unassignedProjects: number;
    categoryDistribution: Record<string, { avg: number; min: number; max: number }>;
  };
};
```
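The load-related fields of the `stats` block can be derived directly from the finished assignment list. A sketch — the `computeLoadStats` helper is illustrative, not part of the documented API:

```typescript
// Illustrative derivation of the preview's load stats. Jurors with zero
// assignments still count toward min/avg, so the full juror list is passed in.
function computeLoadStats(
  assignments: { userId: string }[],
  jurorIds: string[]
): { totalAssignments: number; avgLoadPerJuror: number; minLoad: number; maxLoad: number } {
  const loads = new Map<string, number>(jurorIds.map((id) => [id, 0]));
  for (const a of assignments) {
    loads.set(a.userId, (loads.get(a.userId) ?? 0) + 1);
  }
  const values = [...loads.values()];
  return {
    totalAssignments: assignments.length,
    avgLoadPerJuror: values.reduce((s, v) => s + v, 0) / Math.max(values.length, 1),
    minLoad: values.length ? Math.min(...values) : 0,
    maxLoad: values.length ? Math.max(...values) : 0,
  };
}
```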

---

## API Procedures

### New tRPC Router: jury-group.ts

```typescript
export const juryGroupRouter = router({
  // ── CRUD ───────────────────────────────────────────────

  /** Create a new jury group */
  create: adminProcedure
    .input(z.object({
      competitionId: z.string(),
      name: z.string().min(1).max(100),
      description: z.string().optional(),
      defaultMaxAssignments: z.number().int().min(1).default(20),
      defaultCapMode: z.enum(['HARD', 'SOFT', 'NONE']).default('SOFT'),
      softCapBuffer: z.number().int().min(0).default(2),
      defaultCategoryQuotas: z.record(z.object({
        min: z.number().int().min(0),
        max: z.number().int().min(0),
      })).optional(),
    }))
    .mutation(async ({ input }) => { ... }),

  /** Update jury group settings */
  update: adminProcedure
    .input(z.object({
      juryGroupId: z.string(),
      name: z.string().min(1).max(100).optional(),
      description: z.string().optional(),
      defaultMaxAssignments: z.number().int().min(1).optional(),
      defaultCapMode: z.enum(['HARD', 'SOFT', 'NONE']).optional(),
      softCapBuffer: z.number().int().min(0).optional(),
      defaultCategoryQuotas: z.record(z.object({
        min: z.number().int().min(0),
        max: z.number().int().min(0),
      })).nullable().optional(),
    }))
    .mutation(async ({ input }) => { ... }),

  /** Delete jury group (only if DRAFT and no assignments) */
  delete: adminProcedure
    .input(z.object({ juryGroupId: z.string() }))
    .mutation(async ({ input }) => { ... }),

  /** Get jury group with members */
  getById: protectedProcedure
    .input(z.object({ juryGroupId: z.string() }))
    .query(async ({ input }) => { ... }),

  /** List all jury groups for a competition */
  listByCompetition: protectedProcedure
    .input(z.object({ competitionId: z.string() }))
    .query(async ({ input }) => { ... }),

  // ── Members ────────────────────────────────────────────

  /** Add a member to the jury group */
  addMember: adminProcedure
    .input(z.object({
      juryGroupId: z.string(),
      userId: z.string(),
      role: z.enum(['MEMBER', 'CHAIR', 'OBSERVER']).default('MEMBER'),
    }))
    .mutation(async ({ input }) => { ... }),

  /** Remove a member from the jury group */
  removeMember: adminProcedure
    .input(z.object({
      juryGroupId: z.string(),
      userId: z.string(),
    }))
    .mutation(async ({ input }) => { ... }),

  /** Batch add members (from CSV or user selection) */
  addMembersBatch: adminProcedure
    .input(z.object({
      juryGroupId: z.string(),
      members: z.array(z.object({
        userId: z.string(),
        role: z.enum(['MEMBER', 'CHAIR', 'OBSERVER']).default('MEMBER'),
      })),
    }))
    .mutation(async ({ input }) => { ... }),

  /** Update member settings (overrides, preferences) */
  updateMember: adminProcedure
    .input(z.object({
      juryGroupId: z.string(),
      userId: z.string(),
      role: z.enum(['MEMBER', 'CHAIR', 'OBSERVER']).optional(),
      maxAssignmentsOverride: z.number().int().min(1).nullable().optional(),
      capModeOverride: z.enum(['HARD', 'SOFT', 'NONE']).nullable().optional(),
      categoryQuotasOverride: z.record(z.object({
        min: z.number().int().min(0),
        max: z.number().int().min(0),
      })).nullable().optional(),
      preferredStartupRatio: z.number().min(0).max(1).nullable().optional(),
      expertiseTags: z.array(z.string()).optional(),
      languagePreferences: z.array(z.string()).optional(),
      notes: z.string().nullable().optional(),
    }))
    .mutation(async ({ input }) => { ... }),

  // ── Queries ────────────────────────────────────────────

  /** Get all jury groups a user belongs to */
  getMyJuryGroups: juryProcedure
    .query(async ({ ctx }) => { ... }),

  /** Get assignment stats for a jury group */
  getAssignmentStats: adminProcedure
    .input(z.object({ juryGroupId: z.string() }))
    .query(async ({ input }) => { ... }),

  /** Check if a user can be added (no duplicate, role compatible) */
  checkMemberEligibility: adminProcedure
    .input(z.object({
      juryGroupId: z.string(),
      userId: z.string(),
    }))
    .query(async ({ input }) => { ... }),

  // ── Onboarding ─────────────────────────────────────────

  /** Get onboarding status for a juror */
  getOnboardingStatus: juryProcedure
    .input(z.object({ juryGroupId: z.string() }))
    .query(async ({ ctx, input }) => { ... }),

  /** Submit onboarding form (preferences, COI declarations) */
  submitOnboarding: juryProcedure
    .input(z.object({
      juryGroupId: z.string(),
      expertiseTags: z.array(z.string()),
      languagePreferences: z.array(z.string()),
      preferredStartupRatio: z.number().min(0).max(1).optional(),
      coiDeclarations: z.array(z.object({
        projectId: z.string(),
        reason: z.string(),
      })),
    }))
    .mutation(async ({ ctx, input }) => { ... }),
});
```
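The logic behind `checkMemberEligibility` is not spelled out above; a minimal sketch of the duplicate and role checks might look like the following. The `checkEligibility` helper, the `ExistingMember` shape, and the `platformRole` value set are assumptions for illustration:

```typescript
// Illustrative eligibility check: reject duplicates (mirroring the
// [juryGroupId, userId] unique constraint) and applicant-role users.
// Shapes and role strings are assumptions, not the documented schema.
type ExistingMember = { userId: string; role: string };

function checkEligibility(
  members: ExistingMember[],
  candidate: { userId: string; platformRole: string }
): { eligible: boolean; reason?: string } {
  if (members.some((m) => m.userId === candidate.userId)) {
    return { eligible: false, reason: 'Already a member of this jury group' };
  }
  if (candidate.platformRole === 'APPLICANT') {
    return { eligible: false, reason: 'Applicants cannot serve as jurors' };
  }
  return { eligible: true };
}
```

Running the same check inside `addMember`/`addMembersBatch` (not only in the query) keeps the unique constraint from being the last line of defense.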

---

## Service Functions

```typescript
// src/server/services/jury-group.ts

/** Create a jury group with defaults */
export async function createJuryGroup(
  competitionId: string,
  name: string,
  config?: Partial<JuryGroupConfig>
): Promise<JuryGroup>;

/** Get effective limits for a member (resolved overrides) */
export async function getEffectiveLimits(
  member: JuryGroupMember,
  group: JuryGroup
): Promise<{ maxAssignments: number | null; capMode: CapMode; quotas: CategoryQuotas | null }>;

/** Check if a juror can receive more assignments */
export async function canAssignMore(
  userId: string,
  juryGroupId: string,
  category?: CompetitionCategory
): Promise<{ allowed: boolean; reason?: string }>;

/** Get assignment statistics for the whole group */
export async function getGroupAssignmentStats(
  juryGroupId: string
): Promise<GroupStats>;

/** Propagate COI across all jury groups for a user */
export async function propagateCOI(
  userId: string,
  projectId: string,
  competitionId: string,
  reason: string
): Promise<void>;

/** Get all active members (excluding observers) for assignment */
export async function getAssignableMembers(
  juryGroupId: string
): Promise<JuryGroupMember[]>;

/** Validate group readiness (enough members, all onboarded, etc.) */
export async function validateGroupReadiness(
  juryGroupId: string
): Promise<{ ready: boolean; issues: string[] }>;
```
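A possible shape for the checks inside `validateGroupReadiness`, once the members are loaded. The thresholds (at least 3 assignable members, one chair, everyone onboarded) are illustrative assumptions, not requirements stated in this document:

```typescript
// Illustrative readiness checks matching validateGroupReadiness's contract.
// Thresholds are assumptions; the MemberSummary shape is a simplification.
type MemberSummary = { role: string; onboarded: boolean };

function checkReadiness(members: MemberSummary[]): { ready: boolean; issues: string[] } {
  const issues: string[] = [];
  const evaluators = members.filter((m) => m.role !== 'OBSERVER');

  if (evaluators.length < 3) issues.push('Fewer than 3 assignable members');
  if (!members.some((m) => m.role === 'CHAIR')) issues.push('No chair designated');

  const pending = evaluators.filter((m) => !m.onboarded).length;
  if (pending > 0) issues.push(`${pending} member(s) have not completed onboarding`);

  return { ready: issues.length === 0, issues };
}
```

Returning all issues at once (rather than failing on the first) lets the admin UI list everything blocking activation.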

---

## Edge Cases

| Scenario | Handling |
|----------|----------|
| **Juror added to group during active evaluation** | Allowed with admin warning. New juror gets no existing assignments (assignment must be run again) |
| **Juror removed from group during active evaluation** | Blocked if the juror has pending evaluations. Must reassign first |
| **All jurors at cap but projects remain unassigned** | Warning shown to admin. Suggest increasing caps or adding jurors |
| **Category quota min not met for any juror** | Warning shown in assignment preview. Admin can proceed or adjust |
| **Juror on 3+ jury groups** | Supported. Each membership is independent. Cross-jury COI propagation ensures consistency |
| **Jury Chair also has assignments** | Allowed. Chair is a regular evaluator with extra visibility |
| **Observer tries to submit evaluation** | Blocked at procedure level (OBSERVER role excluded from evaluation mutations) |
| **Admin deletes jury group with active assignments** | Blocked. Must complete or reassign all assignments first |
| **Juror preference ratio impossible** (e.g., 90% startups when only 20% of projects are startups) | Warn during onboarding; treat the preference as best-effort |
| **Same user added twice to same group** | Blocked by the unique constraint on [juryGroupId, userId] |

---

## Integration Points

### Inbound

| Source | Data | Purpose |
|--------|------|---------|
| Competition setup wizard | Group config | Create jury groups during competition setup |
| User management | User records | Add jurors as members |
| COI declarations | Conflict records | Filter assignments, propagate across groups |

### Outbound

| Target | Data | Purpose |
|--------|------|---------|
| Assignment algorithm | Members, caps, quotas | Generate assignments |
| Evaluation rounds | Jury membership | Determine who evaluates what |
| Live finals | Jury 3 members | Live voting access |
| Confirmation round | Jury members | Who must approve the winner proposal |
| Special awards | Award jury members | Award evaluation access |
| Notifications | Member list | Send round-specific emails to the jury |

### JuryGroup → Round Linkage

Each evaluation or live-final round links to exactly one JuryGroup:

```prisma
model Round {
  // ...
  juryGroupId String?
  juryGroup   JuryGroup? @relation(...)
}
```

This means:
- Round 3 (EVALUATION) → Jury 1
- Round 5 (EVALUATION) → Jury 2
- Round 7 (LIVE_FINAL) → Jury 3
- Round 8 (CONFIRMATION) → Jury 3 (same group, different round)

A jury group can be linked to multiple rounds (e.g., Jury 3 handles both live finals and confirmation).
2898
docs/claude-architecture-redesign/13-notifications-deadlines.md
Normal file
File diff suppressed because it is too large
3384
docs/claude-architecture-redesign/14-ai-services.md
Normal file
File diff suppressed because it is too large
761
docs/claude-architecture-redesign/15-admin-ui.md
Normal file
@@ -0,0 +1,761 @@

# Admin UI Redesign

## Overview

The admin interface is the control plane for the entire MOPC competition. It must surface the redesigned Competition→Round model, jury group management, multi-round submissions, mentoring oversight, and winner confirmation — all through an intuitive, efficient interface.

### Design Principles

| Principle | Application |
|-----------|-------------|
| **Progressive disclosure** | Show essentials first; details on drill-down |
| **Linear-first navigation** | Round list is a flat, ordered timeline — not nested trees |
| **Status at a glance** | Color-coded badges, progress bars, countdowns on every card |
| **Override everywhere** | Every automated decision has an admin override within reach |
| **Audit transparency** | Every action logged; audit trail accessible from any entity |

### Tech Stack (UI)

- **Framework:** Next.js 15 App Router (Server Components default, `'use client'` where needed)
- **Styling:** Tailwind CSS 4, mobile-first breakpoints (`md:`, `lg:`)
- **Components:** shadcn/ui as base (Button, Card, Dialog, Sheet, Table, Tabs, Select, etc.)
- **Data fetching:** tRPC React Query hooks (`trpc.competition.getById.useQuery()`)
- **Brand:** Primary Red `#de0f1e`, Dark Blue `#053d57`, White `#fefefe`, Teal `#557f8c`
- **Typography:** Montserrat (600/700 headings, 300/400 body)

---
## Current Admin UI Audit

### Existing Pages

```
/admin/
├── page.tsx                    — Dashboard (stats cards, quick actions)
├── rounds/
│   ├── pipelines/page.tsx      — Pipeline list
│   ├── new-pipeline/page.tsx   — Create new pipeline
│   └── pipeline/[id]/
│       ├── page.tsx            — Pipeline detail (tracks + stages)
│       ├── edit/page.tsx       — Edit pipeline settings
│       ├── wizard/page.tsx     — Pipeline setup wizard
│       └── advanced/page.tsx   — Advanced config (JSON editor)
├── awards/
│   ├── page.tsx                — Award list
│   ├── new/page.tsx            — Create award
│   └── [id]/
│       ├── page.tsx            — Award detail
│       └── edit/page.tsx       — Edit award
├── members/
│   ├── page.tsx                — User list
│   ├── invite/page.tsx         — Invite user
│   └── [id]/page.tsx           — User detail
├── mentors/
│   ├── page.tsx                — Mentor list
│   └── [id]/page.tsx           — Mentor detail
├── projects/                   — Project management
├── audit/page.tsx              — Audit log viewer
├── messages/
│   ├── page.tsx                — Message center
│   └── templates/page.tsx      — Email templates
├── programs/                   — Program management
├── settings/                   — System settings
├── reports/                    — Reports
├── partners/                   — Partner management
└── learning/                   — Learning resources
```
### Current Limitations

| Page | Limitation |
|------|-----------|
| Pipeline list | Shows pipelines as opaque cards. No inline status |
| Pipeline detail | Nested Track→Stage tree is confusing. Must drill into each stage |
| Pipeline wizard | Generic JSON config per stage type. Not type-aware |
| Award management | Awards are separate from pipeline. No jury group link |
| Member management | No jury group concept. Can't see "Jury 1 members" |
| Mentor oversight | Basic list only. No workspace visibility |
| Winner confirmation | No confirmation UI exists at all |

---
## Redesigned Navigation

### New Admin Sitemap

```
/admin/
├── page.tsx                          — Dashboard (competition overview)
├── competition/
│   ├── page.tsx                      — Competition list
│   ├── new/page.tsx                  — Create competition wizard
│   └── [id]/
│       ├── page.tsx                  — Competition dashboard (round timeline)
│       ├── settings/page.tsx         — Competition-wide settings
│       ├── rounds/
│       │   ├── page.tsx              — All rounds (timeline view)
│       │   ├── new/page.tsx          — Add round
│       │   └── [roundId]/
│       │       ├── page.tsx          — Round detail (type-specific view)
│       │       ├── edit/page.tsx     — Edit round config
│       │       ├── projects/page.tsx — Projects in this round
│       │       ├── assignments/page.tsx — Assignments (EVALUATION rounds)
│       │       ├── filtering/page.tsx — Filtering dashboard (FILTERING)
│       │       ├── submissions/page.tsx — Submission status (INTAKE/SUBMISSION)
│       │       ├── mentoring/page.tsx — Mentoring overview (MENTORING)
│       │       ├── stage-manager/page.tsx — Live stage manager (LIVE_FINAL)
│       │       └── confirmation/page.tsx — Confirmation (CONFIRMATION)
│       ├── jury-groups/
│       │   ├── page.tsx              — All jury groups
│       │   ├── new/page.tsx          — Create jury group
│       │   └── [groupId]/
│       │       ├── page.tsx          — Jury group detail + members
│       │       └── edit/page.tsx     — Edit group settings
│       ├── submission-windows/
│       │   ├── page.tsx              — All submission windows
│       │   └── [windowId]/
│       │       ├── page.tsx          — Window detail + requirements
│       │       └── edit/page.tsx     — Edit window
│       ├── awards/
│       │   ├── page.tsx              — Special awards for this competition
│       │   ├── new/page.tsx          — Create award
│       │   └── [awardId]/
│       │       ├── page.tsx          — Award detail
│       │       └── edit/page.tsx     — Edit award
│       └── results/
│           └── page.tsx              — Final results + export
├── members/                          — User management (unchanged)
├── audit/page.tsx                    — Audit log (enhanced)
├── messages/                         — Messaging (unchanged)
├── programs/                         — Program management
└── settings/                         — System settings
```

---
## Competition Dashboard

The central hub for managing a competition. Replaces the old Pipeline detail page.

### Layout

```
┌──────────────────────────────────────────────────────────────────────────┐
│ MOPC 2026 Competition                       Status: ACTIVE      [Edit]   │
│ Program: Monaco Ocean Protection Challenge 2026                          │
├──────────────────────────────────────────────────────────────────────────┤
│                                                                          │
│  ── Quick Stats ────────────────────────────────────────────────────     │
│  ┌────────────┐ ┌────────────┐ ┌────────────┐ ┌────────────────────┐     │
│  │    127     │ │     23     │ │      8     │ │ Round 3 of 8       │     │
│  │Applications│ │ Advancing  │ │ Jury Groups│ │ Jury 1 Evaluation  │     │
│  │            │ │            │ │ 22 members │ │ ███████░░░ 68%     │     │
│  └────────────┘ └────────────┘ └────────────┘ └────────────────────┘     │
│                                                                          │
│  ── Round Timeline ─────────────────────────────────────────────────     │
│                                                                          │
│  ✓ R1     ✓ R2     ● R3     ○ R4     ○ R5     ○ R6     ○ R7     ○ R8     │
│  Intake   Filter   Jury 1   Submn 2  Jury 2   Mentor   Finals   Confirm  │
│  DONE     DONE     ACTIVE   PENDING  PENDING  PENDING  PENDING  PENDING  │
│  127      98→23    23/23                                                 │
│                    eval'd                                                │
│                                                                          │
│  ┌──────────────────────────────────────────────────────────────────┐    │
│  │ Round 3: Jury 1 — Semi-finalist Selection            [Manage →]  │    │
│  │ Type: EVALUATION | Jury: Jury 1 (8 members)                      │    │
│  │ Status: ACTIVE | Started: Feb 1 | Deadline: Mar 15               │    │
│  │                                                                  │    │
│  │ ████████████████████████████████████░░░░░░░░░░░░ 68%             │    │
│  │ Evaluations: 186 / 276 complete                                  │    │
│  │                                                                  │    │
│  │ ┌──────────────┬──────────────┬──────────────┬────────────┐      │    │
│  │ │ Assigned: 276│ Complete: 186│ Pending: 90  │ COI: 12    │      │    │
│  │ └──────────────┴──────────────┴──────────────┴────────────┘      │    │
│  │                                                                  │    │
│  │ [ View Assignments ]  [ View Results ]  [ Advance Projects ]     │    │
│  └──────────────────────────────────────────────────────────────────┘    │
│                                                                          │
│  ── Sidebar: Jury Groups ───────────────────────────────────────────     │
│  ┌─────────────────────────────┐ ┌─────────────────────────────┐         │
│  │ Jury 1 (8 members)     [→]  │ │ Jury 2 (6 members)     [→]  │         │
│  │ Avg load: 15.3 / 20         │ │ Not yet assigned            │         │
│  │ ████████████████░░░░        │ │ ░░░░░░░░░░░░░░░░░░░░        │         │
│  └─────────────────────────────┘ └─────────────────────────────┘         │
│  ┌─────────────────────────────┐ ┌─────────────────────────────┐         │
│  │ Jury 3 (5 members)     [→]  │ │ Innovation Jury (4)    [→]  │         │
│  │ Assigned to R7 + R8         │ │ Award jury                  │         │
│  └─────────────────────────────┘ └─────────────────────────────┘         │
│                                                                          │
│  ── Sidebar: Special Awards ────────────────────────────────────────     │
│  ┌─────────────────────────────────────────────────────────────────┐     │
│  │ Innovation Award   STAY_IN_MAIN    Jury: Innovation Jury   [→]  │     │
│  │ Impact Award       SEPARATE_POOL   Jury: Impact Jury       [→]  │     │
│  └─────────────────────────────────────────────────────────────────┘     │
└──────────────────────────────────────────────────────────────────────────┘
```
### Key Components

| Component | Description |
|-----------|-------------|
| `<QuickStatsGrid>` | 4 stat cards showing key metrics |
| `<RoundTimeline>` | Horizontal timeline with round status badges |
| `<ActiveRoundCard>` | Expanded card for the currently active round |
| `<JuryGroupCards>` | Grid of jury group summary cards |
| `<AwardSidebar>` | List of special awards with status |
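The timeline mock above renders ✓/●/○ glyphs from round status. A tiny pure helper sketching that mapping (the `RoundStatus` union and function name are illustrative, not the actual `<RoundTimeline>` API):

```typescript
// Timeline glyphs as used in the dashboard mock: ✓ done, ● active, ○ pending.
type RoundStatus = 'DONE' | 'ACTIVE' | 'PENDING';

function timelineGlyph(status: RoundStatus): string {
  switch (status) {
    case 'DONE': return '✓';
    case 'ACTIVE': return '●';
    case 'PENDING': return '○';
  }
}
```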
---
## Competition Setup Wizard

Replaces the old Pipeline Wizard. A multi-step form that creates the entire competition structure.

### Wizard Steps

```
Step 1: Basics             → Competition name, program, categories
Step 2: Round Builder      → Add/reorder rounds (type picker)
Step 3: Jury Groups        → Create jury groups, assign to rounds
Step 4: Submission Windows → Define file requirements per window
Step 5: Special Awards     → Configure awards (optional)
Step 6: Notifications      → Deadline reminders, email settings
Step 7: Review & Create    → Summary of everything, create button
```
### Step 1: Basics

```
┌──────────────────────────────────────────────────────────────────┐
│ Create Competition — Step 1 of 7: Basics                         │
│ ●───○───○───○───○───○───○                                        │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│  Competition Name:                                               │
│  ┌──────────────────────────────────────────────────────────┐    │
│  │ MOPC 2026 Competition                                    │    │
│  └──────────────────────────────────────────────────────────┘    │
│                                                                  │
│  Program: [MOPC 2026 ▼]                                          │
│                                                                  │
│  Category Mode:                                                  │
│  ● Shared — Both Startups and Concepts in same flow              │
│  ○ Split — Separate finalist counts per category                 │
│                                                                  │
│  Finalist Counts:                                                │
│  Startups: [3 ]   Concepts: [3 ]                                 │
│                                                                  │
│  [ Cancel ]                                       [ Next → ]     │
└──────────────────────────────────────────────────────────────────┘
```
### Step 2: Round Builder

The core of the wizard — a drag-and-drop round sequencer.

```
┌──────────────────────────────────────────────────────────────────┐
│ Create Competition — Step 2 of 7: Round Builder                  │
│ ○───●───○───○───○───○───○                                        │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│  Build your competition flow by adding rounds:                   │
│                                                                  │
│  ┌────┬──────────────────────────────┬──────────────┬────────┐   │
│  │ #  │ Round                        │ Type         │ Actions│   │
│  ├────┼──────────────────────────────┼──────────────┼────────┤   │
│  │ 1  │ ≡ Application Window         │ INTAKE       │ ✎ ✕    │   │
│  │ 2  │ ≡ AI Screening               │ FILTERING    │ ✎ ✕    │   │
│  │ 3  │ ≡ Jury 1 - Semi-finalist     │ EVALUATION   │ ✎ ✕    │   │
│  │ 4  │ ≡ Semi-finalist Documents    │ SUBMISSION   │ ✎ ✕    │   │
│  │ 5  │ ≡ Jury 2 - Finalist          │ EVALUATION   │ ✎ ✕    │   │
│  │ 6  │ ≡ Finalist Mentoring         │ MENTORING    │ ✎ ✕    │   │
│  │ 7  │ ≡ Live Finals                │ LIVE_FINAL   │ ✎ ✕    │   │
│  │ 8  │ ≡ Confirm Winners            │ CONFIRMATION │ ✎ ✕    │   │
│  └────┴──────────────────────────────┴──────────────┴────────┘   │
│                                                                  │
│  [ + Add Round ]                                                 │
│                                                                  │
│  Available Round Types:                                          │
│  ┌────────────┐ ┌────────────┐ ┌────────────┐ ┌────────────┐     │
│  │ INTAKE     │ │ FILTERING  │ │ EVALUATION │ │ SUBMISSION │     │
│  │ Collect    │ │ AI screen  │ │ Jury score │ │ More docs  │     │
│  └────────────┘ └────────────┘ └────────────┘ └────────────┘     │
│  ┌────────────┐ ┌────────────┐ ┌────────────┐                    │
│  │ MENTORING  │ │ LIVE_FINAL │ │ CONFIRM    │                    │
│  │ Workspace  │ │ Live vote  │ │ Cement     │                    │
│  └────────────┘ └────────────┘ └────────────┘                    │
│                                                                  │
│  [ ← Back ]                                       [ Next → ]     │
└──────────────────────────────────────────────────────────────────┘
```
### Step 2: Round Config Sheet

When clicking ✎ on a round, a sheet slides out with type-specific config:

```
┌──────────────────────────────────────────────────────────────────┐
│ Configure Round: Jury 1 - Semi-finalist (EVALUATION)             │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│  Round Name: [Jury 1 - Semi-finalist Selection          ]        │
│                                                                  │
│  ── Jury Group ────────────────────────────────────────────────  │
│  Assign jury group: [Jury 1 ▼]  [ + Create New ]                 │
│                                                                  │
│  ── Assignment ────────────────────────────────────────────────  │
│  Reviews per project: [3 ]                                       │
│  (Caps and quotas configured on the jury group)                  │
│                                                                  │
│  ── Scoring ───────────────────────────────────────────────────  │
│  Evaluation form: [Standard Criteria Form ▼]                     │
│  Scoring mode: ● Criteria-based  ○ Global score  ○ Binary        │
│  Score range: [1 ] to [10]                                       │
│                                                                  │
│  ── Document Visibility ───────────────────────────────────────  │
│  This round can see docs from:                                   │
│  ☑ Window 1: Application Documents                               │
│  ☐ Window 2: Semi-finalist Documents (not yet created)           │
│                                                                  │
│  ── Advancement ───────────────────────────────────────────────  │
│  Advancement mode:                                               │
│  ● Top N by score                                                │
│  ○ Admin selection                                               │
│  ○ AI recommended                                                │
│  Advance top: [8 ] projects per category                         │
│                                                                  │
│  ── Deadline ──────────────────────────────────────────────────  │
│  Start date: [Feb 1, 2026 ]                                      │
│  End date:   [Mar 15, 2026]                                      │
│                                                                  │
│  [ Cancel ]                                    [ Save Round ]    │
└──────────────────────────────────────────────────────────────────┘
```
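The "Top N by score" advancement mode configured in this sheet amounts to a per-category selection. A minimal sketch (the field names and types are assumptions, not the actual service-layer types):

```typescript
// Sketch of "Top N by score" advancement, applied per category.
interface ScoredProject {
  id: string;
  category: 'STARTUP' | 'CONCEPT';
  score: number;
}

// Returns the ids of projects that advance: the top nPerCategory
// by score within each category.
function advanceTopN(projects: ScoredProject[], nPerCategory: number): string[] {
  const byCategory = new Map<string, ScoredProject[]>();
  for (const p of projects) {
    const bucket = byCategory.get(p.category) ?? [];
    bucket.push(p);
    byCategory.set(p.category, bucket);
  }
  const advancing: string[] = [];
  for (const bucket of byCategory.values()) {
    bucket.sort((a, b) => b.score - a.score); // highest score first
    advancing.push(...bucket.slice(0, nPerCategory).map(p => p.id));
  }
  return advancing;
}
```

Ties at the cutoff are not handled here; a real implementation would need a deterministic tie-break rule (or fall back to admin selection).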
### Step 3: Jury Groups

```
┌──────────────────────────────────────────────────────────────────┐
│ Create Competition — Step 3 of 7: Jury Groups                    │
│ ○───○───●───○───○───○───○                                        │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│  ┌──────────────────────────────────────────────────────────┐    │
│  │ Jury 1 — Semi-finalist Selection                   [Edit]│    │
│  │ Linked to: Round 3                                       │    │
│  │ Members: 0 (add after creation)                          │    │
│  │ Default cap: 20 (SOFT +2)                                │    │
│  └──────────────────────────────────────────────────────────┘    │
│                                                                  │
│  ┌──────────────────────────────────────────────────────────┐    │
│  │ Jury 2 — Finalist Selection                        [Edit]│    │
│  │ Linked to: Round 5                                       │    │
│  │ Members: 0 (add after creation)                          │    │
│  │ Default cap: 15 (HARD)                                   │    │
│  └──────────────────────────────────────────────────────────┘    │
│                                                                  │
│  ┌──────────────────────────────────────────────────────────┐    │
│  │ Jury 3 — Live Finals + Confirmation                [Edit]│    │
│  │ Linked to: Round 7, Round 8                              │    │
│  │ Members: 0 (add after creation)                          │    │
│  │ All finalists auto-assigned                              │    │
│  └──────────────────────────────────────────────────────────┘    │
│                                                                  │
│  [ + Create Jury Group ]                                         │
│                                                                  │
│  Note: Add members to jury groups after competition is created.  │
│                                                                  │
│  [ ← Back ]                                       [ Next → ]     │
└──────────────────────────────────────────────────────────────────┘
```
### Step 4: Submission Windows

```
┌──────────────────────────────────────────────────────────────────┐
│ Create Competition — Step 4 of 7: Submission Windows             │
│ ○───○───○───●───○───○───○                                        │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│  ┌──────────────────────────────────────────────────────────┐    │
│  │ Window 1: Application Documents (linked to Round 1)      │    │
│  │                                                          │    │
│  │ File Requirements:                                       │    │
│  │ 1. Executive Summary (PDF, max 5MB, required)            │    │
│  │ 2. Business Plan (PDF, max 20MB, required)               │    │
│  │ 3. Team Bios (PDF, max 5MB, required)                    │    │
│  │ 4. Supporting Documents (any, max 50MB, optional)        │    │
│  │                                                          │    │
│  │ Deadline: Jan 31, 2026 | Policy: GRACE (30 min)          │    │
│  │ [ + Add Requirement ]                             [Edit] │    │
│  └──────────────────────────────────────────────────────────┘    │
│                                                                  │
│  ┌──────────────────────────────────────────────────────────┐    │
│  │ Window 2: Semi-finalist Documents (linked to Round 4)    │    │
│  │                                                          │    │
│  │ File Requirements:                                       │    │
│  │ 1. Updated Business Plan (PDF, max 20MB, required)       │    │
│  │ 2. Video Pitch (MP4, max 500MB, required)                │    │
│  │ 3. Financial Projections (PDF/XLSX, max 10MB, required)  │    │
│  │                                                          │    │
│  │ Deadline: Apr 30, 2026 | Policy: HARD                    │    │
│  │ [ + Add Requirement ]                             [Edit] │    │
│  └──────────────────────────────────────────────────────────┘    │
│                                                                  │
│  [ + Add Submission Window ]                                     │
│  [ ← Back ]                                       [ Next → ]     │
└──────────────────────────────────────────────────────────────────┘
```
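The two deadline policies shown on the windows (HARD vs GRACE with a 30-minute grace period) reduce to a cutoff comparison. A sketch, assuming these are the only two policies (the real submission-manager service may define more):

```typescript
// HARD rejects any upload after the deadline; GRACE accepts uploads
// up to a configured number of minutes past it.
type DeadlinePolicy = { kind: 'HARD' } | { kind: 'GRACE'; graceMinutes: number };

function isUploadAccepted(deadline: Date, policy: DeadlinePolicy, now: Date): boolean {
  const cutoff =
    policy.kind === 'HARD'
      ? deadline.getTime()
      : deadline.getTime() + policy.graceMinutes * 60_000;
  return now.getTime() <= cutoff;
}
```

Evaluating the policy server-side at upload time (rather than hiding the button client-side) keeps the rule enforceable.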
### Step 7: Review & Create

```
┌──────────────────────────────────────────────────────────────────┐
│ Create Competition — Step 7 of 7: Review                         │
│ ○───○───○───○───○───○───●                                        │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│  Competition: MOPC 2026 Competition                              │
│  Category Mode: SHARED (3 Startups + 3 Concepts)                 │
│                                                                  │
│  Rounds (8):                                                     │
│  1. Application Window (INTAKE)      ─── Window 1                │
│  2. AI Screening (FILTERING)                                     │
│  3. Jury 1 (EVALUATION)              ─── Jury 1                  │
│  4. Semi-finalist Docs (SUBMISSION)  ─── Window 2                │
│  5. Jury 2 (EVALUATION)              ─── Jury 2                  │
│  6. Mentoring (MENTORING)                                        │
│  7. Live Finals (LIVE_FINAL)         ─── Jury 3                  │
│  8. Confirm Winners (CONFIRMATION)   ─── Jury 3                  │
│                                                                  │
│  Jury Groups (3): Jury 1 (0 members), Jury 2 (0), Jury 3 (0)     │
│  Submission Windows (2): Application Docs, Semi-finalist Docs    │
│  Special Awards (2): Innovation Award, Impact Award              │
│  Notifications: Reminders at 7d, 3d, 1d before deadlines         │
│                                                                  │
│  ⚠ Add jury members after creation.                              │
│                                                                  │
│  [ ← Back ]                            [ Create Competition ]    │
└──────────────────────────────────────────────────────────────────┘
```

---
## Round Management

### Round Detail — Type-Specific Views

Each round type renders a specialized detail page:

#### INTAKE Round Detail

```
┌──────────────────────────────────────────────────────────────────┐
│ Round 1: Application Window                  Status: ACTIVE      │
│ Type: INTAKE | Deadline: Jan 31, 2026 (16 days)                  │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│  ┌──────────────┐ ┌──────────────┐ ┌──────────────┐              │
│  │     127      │ │      98      │ │      29      │              │
│  │  Submitted   │ │   Complete   │ │    Draft     │              │
│  └──────────────┘ └──────────────┘ └──────────────┘              │
│                                                                  │
│  Category Breakdown: 72 Startups | 55 Concepts                   │
│                                                                  │
│  Submission Progress (by day):                                   │
│  ▁▂▃▃▄▅▆▇████████████▇▇▆▅▄▃▃▂▂▁                                  │
│  Jan 1                    Jan 31                                 │
│                                                                  │
│  Recent Submissions:                                             │
│  ┌─────────────────────────────┬──────────┬──────────┬────────┐  │
│  │ Team                        │ Category │ Status   │ Files  │  │
│  ├─────────────────────────────┼──────────┼──────────┼────────┤  │
│  │ OceanClean AI               │ STARTUP  │ Complete │ 4/4    │  │
│  │ DeepReef Monitoring         │ STARTUP  │ Complete │ 3/4    │  │
│  │ BlueTide Analytics          │ CONCEPT  │ Draft    │ 1/4    │  │
│  └─────────────────────────────┴──────────┴──────────┴────────┘  │
│                                                                  │
│  [ View All Submissions ]  [ Export CSV ]  [ Extend Deadline ]   │
└──────────────────────────────────────────────────────────────────┘
```
#### FILTERING Round Detail

```
┌──────────────────────────────────────────────────────────────────┐
│ Round 2: AI Screening                        Status: ACTIVE      │
│ Type: FILTERING | Auto-advance: ON                               │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│  ┌──────────────┐ ┌──────────────┐ ┌──────────────┐              │
│  │      98      │ │      23      │ │      67      │              │
│  │   Screened   │ │    Passed    │ │    Failed    │              │
│  └──────────────┘ └──────────────┘ └──────────────┘              │
│                                                                  │
│  ┌──────────┐                                                    │
│  │    8     │                                                    │
│  │ Flagged  │ ← Require manual review                            │
│  └──────────┘                                                    │
│                                                                  │
│  Flagged for Review:                                             │
│  ┌─────────────────────────┬──────────┬──────┬─────────────┐     │
│  │ Project                 │ AI Score │ Flag │ Action      │     │
│  ├─────────────────────────┼──────────┼──────┼─────────────┤     │
│  │ WaveEnergy Solutions    │ 0.55     │ EDGE │ [✓] [✗] [?] │     │
│  │ MarineData Hub          │ 0.48     │ LOW  │ [✓] [✗] [?] │     │
│  │ CoralMapper (dup?)      │ 0.82     │ DUP  │ [✓] [✗] [?] │     │
│  └─────────────────────────┴──────────┴──────┴─────────────┘     │
│                                                                  │
│  [ View All Results ]  [ Re-run AI Screening ]  [ Override ]     │
└──────────────────────────────────────────────────────────────────┘
```
#### EVALUATION Round Detail

```
┌──────────────────────────────────────────────────────────────────┐
│ Round 3: Jury 1 — Semi-finalist              Status: ACTIVE      │
│ Type: EVALUATION | Jury: Jury 1 (8 members)                      │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│  ── Evaluation Progress ───────────────────────────────────────  │
│  █████████████████████████████████░░░░░░░░░░░░░░░ 68%            │
│  186 / 276 evaluations complete                                  │
│                                                                  │
│  Per-Juror Progress:                                             │
│  Dr. Martin   ██████████████████████████████████████ 18/18 ✓     │
│  Prof. Dubois ██████████████████████████████░░░░░░░  15/20       │
│  Ms. Chen     █████████████████████████████████████████ 20/20 ✓  │
│  Dr. Patel    █████████████████████░░░░░░░░░░░░░░    12/15       │
│  Mr. Silva    ████████████████████████████████░░░░   16/20       │
│  Dr. Yamada   ███████████████████████████████████████ 19/20      │
│  Ms. Hansen   ██████████████████████████░░░░░░░░░    14/20       │
│                                                                  │
│  ── Actions ───────────────────────────────────────────────────  │
│  [ View Assignments ]  [ View Results ]  [ Send Reminder ]       │
│  [ Run AI Summary ]  [ Advance Top N ]  [ Override Decision ]    │
└──────────────────────────────────────────────────────────────────┘
```
#### LIVE_FINAL Stage Manager

```
┌──────────────────────────────────────────────────────────────────┐
│ LIVE STAGE MANAGER — Round 7: Live Finals        [● RECORDING]   │
│ Status: IN_PROGRESS | Category: STARTUP                          │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│  Now Presenting: OceanClean AI                                   │
│  ┌──────────────────────────────────────────────────────────┐    │
│  │ Status: Q_AND_A                                          │    │
│  │ Presentation: 12:00 ✓ | Q&A: ██████░░ 6:23 / 10:00       │    │
│  │                                                          │    │
│  │ [ ▶ Start Voting ]   [ ⏸ Pause ]   [ ⏭ Skip ]            │    │
│  └──────────────────────────────────────────────────────────┘    │
│                                                                  │
│  ── Jury Votes (5 jurors) ──────────────────────────────────     │
│  Dr. Martin: ○ waiting   | Prof. Dubois: ○ waiting               │
│  Ms. Chen:   ○ waiting   | Dr. Patel:    ○ waiting               │
│  Mr. Silva:  ○ waiting   |                                       │
│                                                                  │
│  ── Audience Votes ─────────────────────────────────────────     │
│  Registered: 142 | Voted: 0 (voting not yet open)                │
│                                                                  │
│  ── Queue ──────────────────────────────────────────────────     │
│  ┌─────┬──────────────────────┬──────────┬───────────────┐       │
│  │ Ord │ Project              │ Category │ Status        │       │
│  ├─────┼──────────────────────┼──────────┼───────────────┤       │
│  │ ► 1 │ OceanClean AI        │ STARTUP  │ Q_AND_A       │       │
│  │   2 │ DeepReef Monitoring  │ STARTUP  │ WAITING       │       │
│  │   3 │ CoralGuard           │ STARTUP  │ WAITING       │       │
│  └─────┴──────────────────────┴──────────┴───────────────┘       │
│                                                                  │
│  [ Switch to CONCEPT Window ]  [ End STARTUP Window ]            │
└──────────────────────────────────────────────────────────────────┘
```
#### CONFIRMATION Round Detail

```
┌──────────────────────────────────────────────────────────────────┐
│ Round 8: Confirm Winners                     Status: ACTIVE      │
│ Type: CONFIRMATION | Jury: Jury 3                                │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│  ┌────────────────────────────────────────────────────────────┐  │
│  │ STARTUP Proposal                                           │  │
│  │ Status: APPROVED ✓          Approvals: 5/5                 │  │
│  │ 1st: OceanClean AI (92.4)                                  │  │
│  │ 2nd: DeepReef (88.7)                                       │  │
│  │ 3rd: CoralGuard (85.1)                                     │  │
│  │ [ Freeze Results ]                                         │  │
│  └────────────────────────────────────────────────────────────┘  │
│                                                                  │
│  ┌────────────────────────────────────────────────────────────┐  │
│  │ CONCEPT Proposal                                           │  │
│  │ Status: PENDING             Approvals: 3/5                 │  │
│  │ 1st: BlueTide Analytics (89.2)                             │  │
│  │ 2nd: MarineData Hub (84.6)                                 │  │
│  │ 3rd: SeaWatch (81.3)                                       │  │
│  │ [ Send Reminder ]  [ Override ]                            │  │
│  └────────────────────────────────────────────────────────────┘  │
│                                                                  │
│  [ Freeze All Approved ]  [ Export Results PDF ]                 │
└──────────────────────────────────────────────────────────────────┘
```
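The approval logic implied by the mock (5/5 → APPROVED, 3/5 → PENDING) is consistent with requiring every member of the confirmation jury to approve. A sketch under that assumption; the real rule may be configurable per competition:

```typescript
// Proposal status from approval counts. Assumes unanimous approval is
// required, matching the 5/5 → APPROVED, 3/5 → PENDING example above.
function proposalStatus(approvals: number, juryMembers: number): 'APPROVED' | 'PENDING' {
  return approvals >= juryMembers ? 'APPROVED' : 'PENDING';
}
```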
---
## Component Architecture

### Shared Components

| Component | Used In | Description |
|-----------|---------|-------------|
| `<CompetitionSidebar>` | All /competition/[id]/* pages | Left sidebar with nav links |
| `<RoundTimeline>` | Dashboard, round list | Horizontal visual timeline |
| `<StatusBadge>` | Everywhere | Color-coded status chip |
| `<ProgressBar>` | Round cards, jury progress | Animated progress bar |
| `<CountdownTimer>` | Round detail, dashboard | Real-time countdown to deadline |
| `<DataTable>` | Projects, members, assignments | Sortable, filterable table |
| `<OverrideDialog>` | Filtering, evaluation, confirmation | Override modal with reason input |
| `<AuditTrailSheet>` | Any entity detail page | Slide-out audit log viewer |
| `<JuryGroupSelector>` | Wizard, round config | Dropdown with create-new option |
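A component like `<CountdownTimer>` needs a time-remaining formatter; the round headers show values like "16 days". A hedged sketch as a pure helper (the display format is assumed, not taken from the actual component):

```typescript
// Formats the time remaining until a deadline, roughly as the round
// header mocks show it ("16 days"). Granularity degrades as the
// deadline approaches: days, then hours, then minutes.
function formatRemaining(deadline: Date, now: Date): string {
  const ms = deadline.getTime() - now.getTime();
  if (ms <= 0) return 'expired';
  const days = Math.floor(ms / 86_400_000);
  if (days >= 1) return `${days} day${days === 1 ? '' : 's'}`;
  const hours = Math.floor(ms / 3_600_000);
  if (hours >= 1) return `${hours} h`;
  return `${Math.ceil(ms / 60_000)} min`;
}
```

Keeping the formatter pure makes it trivial to unit-test apart from the ticking React component.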
### Page Components (type-specific)

| Component | Round Type | Description |
|-----------|-----------|-------------|
| `<IntakeRoundView>` | INTAKE | Submission stats, file status, deadline |
| `<FilteringRoundView>` | FILTERING | AI results, flagged queue, overrides |
| `<EvaluationRoundView>` | EVALUATION | Juror progress, assignment stats, results |
| `<SubmissionRoundView>` | SUBMISSION | Upload progress, locked windows |
| `<MentoringRoundView>` | MENTORING | Workspace activity, milestone progress |
| `<LiveFinalStageManager>` | LIVE_FINAL | Full stage manager with controls |
| `<ConfirmationRoundView>` | CONFIRMATION | Proposals, approvals, freeze |

### Dynamic Round Detail Routing
```typescript
// src/app/(admin)/admin/competition/[id]/rounds/[roundId]/page.tsx
'use client'; // tRPC React Query hooks run on the client

export default function RoundDetailPage({ params }) {
  const { data: round } = trpc.competition.getRound.useQuery({
    roundId: params.roundId,
  });

  if (!round) return <LoadingSkeleton />;

  // Render the type-specific component based on round type
  switch (round.roundType) {
    case 'INTAKE':
      return <IntakeRoundView round={round} />;
    case 'FILTERING':
      return <FilteringRoundView round={round} />;
    case 'EVALUATION':
      return <EvaluationRoundView round={round} />;
    case 'SUBMISSION':
      return <SubmissionRoundView round={round} />;
    case 'MENTORING':
      return <MentoringRoundView round={round} />;
    case 'LIVE_FINAL':
      return <LiveFinalStageManager round={round} />;
    case 'CONFIRMATION':
      return <ConfirmationRoundView round={round} />;
  }
}
```
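A switch like the routing page's relies on covering all seven round types. TypeScript can enforce that this stays true as types are added, via a `never` guard in the default branch. A sketch (the component names mirror the table above; `assertNever` is a common idiom, not an existing helper in this codebase):

```typescript
// Compile-time exhaustiveness guard for a round-type switch.
// If a new RoundType is added and a case is missing, the call to
// assertNever stops compiling.
type RoundType =
  | 'INTAKE' | 'FILTERING' | 'EVALUATION' | 'SUBMISSION'
  | 'MENTORING' | 'LIVE_FINAL' | 'CONFIRMATION';

function assertNever(x: never): never {
  throw new Error(`Unhandled round type: ${String(x)}`);
}

function viewComponentName(roundType: RoundType): string {
  switch (roundType) {
    case 'INTAKE': return 'IntakeRoundView';
    case 'FILTERING': return 'FilteringRoundView';
    case 'EVALUATION': return 'EvaluationRoundView';
    case 'SUBMISSION': return 'SubmissionRoundView';
    case 'MENTORING': return 'MentoringRoundView';
    case 'LIVE_FINAL': return 'LiveFinalStageManager';
    case 'CONFIRMATION': return 'ConfirmationRoundView';
    default: return assertNever(roundType);
  }
}
```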
---
## Responsive Design

| Breakpoint | Layout |
|------------|--------|
| `< md` (mobile) | Single column. Sidebar collapses to hamburger. Tables become cards. Stage manager simplified |
| `md` - `lg` (tablet) | Two columns. Sidebar always visible. Tables with horizontal scroll |
| `> lg` (desktop) | Full layout. Sidebar + content + optional side panel |

### Mobile Stage Manager

The live stage manager has a simplified mobile view for admins controlling from a phone:
```
┌─────────────────────────┐
│ LIVE CONTROL     [● REC]│
│                         │
│ Now: OceanClean AI      │
│ Status: Q_AND_A         │
│ Timer: 6:23 / 10:00     │
│                         │
│ ┌──────────────────────┐│
│ │  [ Start Voting ]    ││
│ │  [ Pause ]           ││
│ │  [ Skip → Next ]     ││
│ └──────────────────────┘│
│                         │
│ Jury: 0/5 voted         │
│ Audience: 0/142 voted   │
│                         │
│ Next: DeepReef Monitoring│
└─────────────────────────┘
```

---
## Accessibility

| Feature | Implementation |
|---------|---------------|
| **Keyboard navigation** | All actions reachable via Tab/Enter. Focus rings visible |
| **Screen reader** | Semantic HTML, `aria-label` on badges, `role="status"` on live regions |
| **Color contrast** | All text meets WCAG 2.1 AA. Status badges use icons + color |
| **Motion** | Countdown timers respect `prefers-reduced-motion` |
| **Focus management** | Dialog focus trap, return focus on close |

---
## Integration with tRPC

### Key Data-Fetching Hooks

```typescript
// Competition dashboard
const { data: competition } = trpc.competition.getById.useQuery({ id });
const { data: rounds } = trpc.competition.listRounds.useQuery({ competitionId: id });
const { data: juryGroups } = trpc.juryGroup.listByCompetition.useQuery({ competitionId: id });

// Round detail
const { data: round } = trpc.competition.getRound.useQuery({ roundId });
const { data: projects } = trpc.competition.getProjectsInRound.useQuery({ roundId });
const { data: assignments } = trpc.assignment.listByRound.useQuery({ roundId });

// Live stage manager (with polling)
const { data: ceremonyState } = trpc.liveControl.getCeremonyState.useQuery(
  { roundId },
  { refetchInterval: 1000 } // poll every second
);

// Confirmation
const { data: proposals } = trpc.winnerConfirmation.listProposals.useQuery({ competitionId: id });
```

### Mutation Patterns
```typescript
const utils = trpc.useUtils(); // typed cache helpers for invalidation

// Advance projects after evaluation
const advance = trpc.competition.advanceProjects.useMutation({
  onSuccess: () => {
    utils.competition.getRound.invalidate({ roundId });
    utils.competition.getProjectsInRound.invalidate({ roundId });
  },
});

// Freeze winner proposal
const freeze = trpc.winnerConfirmation.freezeProposal.useMutation({
  onSuccess: () => {
    utils.winnerConfirmation.listProposals.invalidate({ competitionId });
    toast({ title: 'Results frozen', description: 'Official results are now locked.' });
  },
});
```
 1806  docs/claude-architecture-redesign/16-jury-ui.md (new file; diff suppressed because it is too large)
 1787  docs/claude-architecture-redesign/17-applicant-ui.md (new file; diff suppressed because it is too large)
 1760  docs/claude-architecture-redesign/18-mentor-ui.md (new file; diff suppressed because it is too large)
 1734  docs/claude-architecture-redesign/19-api-router-reference.md (new file; diff suppressed because it is too large)
 2185  docs/claude-architecture-redesign/20-service-layer-changes.md (new file; diff suppressed because it is too large)
 2972  docs/claude-architecture-redesign/21-migration-strategy.md (new file; diff suppressed because it is too large)
 1907  docs/claude-architecture-redesign/22-integration-map.md (new file; diff suppressed because it is too large)
 2580  docs/claude-architecture-redesign/23-implementation-sequence.md (new file; diff suppressed because it is too large)