AI-powered assignment generation with enriched data and streaming UI

- Add aiPreview mutation with full project/juror data (bios, descriptions,
  documents, categories, ocean issues, countries, team sizes)
- Increase AI description limit from 300 to 2000 chars for richer context
- Update GPT system prompt to use all available data fields
- Add mode toggle (AI default / Algorithm fallback) in assignment preview
- Lift AI mutation to parent page for background generation persistence
- Show visual indicator on page while AI generates (spinner + progress card)
- Toast notification with "Review" action when AI completes
- Staggered reveal animation for assignment results (streaming feel)
- Fix assignment balance with dynamic penalty (25pts per existing assignment)
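The "25pts per existing assignment" penalty in the last bullet can be sketched as follows. This is an illustrative sketch only, not the repository's code; the `Candidate` type, `rankCandidates`, and the field names are assumptions, with the 25-point constant taken from the commit message.

```typescript
// Hypothetical sketch of the dynamic workload penalty: each assignment a
// juror already holds subtracts a flat penalty from their candidate score,
// so under-loaded jurors rise to the top of the ranking.
const PENALTY_PER_ASSIGNMENT = 25

interface Candidate {
  jurorId: string
  expertiseScore: number      // 0-100, from expertise matching
  currentAssignments: number  // assignments already held by this juror
}

const adjusted = (c: Candidate): number =>
  c.expertiseScore - c.currentAssignments * PENALTY_PER_ASSIGNMENT

function rankCandidates(candidates: Candidate[]): Candidate[] {
  // Sort descending by penalty-adjusted score without mutating the input.
  return [...candidates].sort((a, b) => adjusted(b) - adjusted(a))
}

const ranked = rankCandidates([
  { jurorId: 'j1', expertiseScore: 90, currentAssignments: 3 }, // 90 - 75 = 15
  { jurorId: 'j2', expertiseScore: 70, currentAssignments: 0 }, // 70 - 0 = 70
])
// ranked[0] is j2: the busy expert loses to the idle generalist
```

Because the penalty grows with every assignment made during a run, the same juror cannot keep winning ties, which is what rebalances the distribution.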

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Author: Matt
Date: 2026-02-17 14:45:57 +01:00
Parent: a7b6031f4d
Commit: 6743119c4d
7 changed files with 640 additions and 73 deletions


@@ -35,11 +35,15 @@ const ASSIGNMENT_BATCH_SIZE = 15
 const ASSIGNMENT_SYSTEM_PROMPT = `You are an expert jury assignment optimizer for an ocean conservation competition.
 ## Your Role
-Match jurors to projects based on expertise alignment, workload balance, and coverage requirements.
+Match jurors to projects based on expertise alignment, workload balance, geographic diversity, and coverage requirements. You have access to rich data about both jurors and projects — use ALL available information to make optimal assignments.
+## Available Data
+- **Jurors**: expertiseTags (areas of expertise), bio (background description with deeper domain knowledge), country, currentAssignmentCount, maxAssignments
+- **Projects**: title, description (detailed project overview), tags (with confidence 0-1), category (e.g. STARTUP, BUSINESS_CONCEPT), oceanIssue (focus area like CORAL_REEFS, POLLUTION), country, institution, teamSize, fileTypes (submitted document types)
 ## Matching Criteria (Weighted)
-- Expertise Match (50%): How well juror tags/expertise align with project topics. Project tags include a confidence score (0-1) — weight higher-confidence tags more heavily as they are more reliably assigned. A tag with confidence 0.9 is a strong signal; one with 0.5 is uncertain.
-- Workload Balance (30%): Distribute assignments evenly; prefer jurors below capacity
+- Expertise & Domain Match (50%): How well juror tags, bio, and background align with project topics, category, ocean issue, and description. Use bio text to identify deeper domain expertise beyond explicit tags — e.g., a bio mentioning "20 years of coral research" matches coral-related projects even without explicit tags. Weight higher-confidence tags more heavily.
+- Workload Balance (30%): Distribute assignments as evenly as possible; strongly prefer jurors below capacity. Never let one juror get significantly more assignments than another.
 - Minimum Target (20%): Prioritize jurors who haven't reached their minimum assignment count
## Output Format
@@ -51,18 +55,20 @@ Return a JSON object:
 "project_id": "PROJECT_001",
 "confidence_score": 0.0-1.0,
 "expertise_match_score": 0.0-1.0,
-"reasoning": "1-2 sentence justification"
+"reasoning": "1-2 sentence justification referencing specific expertise matches"
 }
 ]
 }
 ## Guidelines
-- Each project should receive the required number of reviews
+- Each project MUST receive the required number of reviews — ensure full coverage
 - Distribute assignments as evenly as possible across all jurors
 - Do not assign jurors who are at or above their capacity
-- Favor geographic and disciplinary diversity in assignments
-- confidence_score reflects overall assignment quality; expertise_match_score reflects tag overlap only
-- A strong match: shared expertise tags + available capacity + under minimum target
-- An acceptable match: related domain + available capacity
+- Favor geographic diversity: avoid assigning jurors from the same country as the project when possible
+- Consider disciplinary diversity: mix different expertise backgrounds per project
+- confidence_score reflects overall assignment quality; expertise_match_score reflects tag/expertise overlap
+- A strong match: shared expertise tags + relevant bio background + available capacity
+- An acceptable match: related domain/ocean issue + available capacity
 - A poor match: no expertise overlap, only assigned for coverage`
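The confidence weighting the prompt asks for could be computed along these lines; a minimal sketch, assuming a normalized weighted-overlap formula. The function name and the normalization choice are assumptions, not taken from the actual codebase.

```typescript
// Illustrative sketch: higher-confidence project tags contribute more weight,
// and the result is normalized into the 0-1 range used by expertise_match_score.
interface ProjectTag { name: string; confidence: number }

function expertiseMatchScore(jurorTags: string[], projectTags: ProjectTag[]): number {
  const juror = new Set(jurorTags.map(t => t.toLowerCase()))
  const totalWeight = projectTags.reduce((sum, t) => sum + t.confidence, 0)
  if (totalWeight === 0) return 0
  // Sum only the confidence of project tags the juror actually covers.
  const matchedWeight = projectTags
    .filter(t => juror.has(t.name.toLowerCase()))
    .reduce((sum, t) => sum + t.confidence, 0)
  return matchedWeight / totalWeight
}

const score = expertiseMatchScore(
  ['coral reefs', 'marine biology'],
  [
    { name: 'Coral Reefs', confidence: 0.9 },
    { name: 'Policy', confidence: 0.5 },
  ],
)
// score = 0.9 / 1.4: the high-confidence tag matches, the uncertain one does not
```

Under this normalization, missing a 0.9-confidence tag costs far more than missing a 0.5 one, which matches the prompt's "strong signal" vs "uncertain" guidance.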
// ─── Types ───────────────────────────────────────────────────────────────────
@@ -88,6 +94,8 @@ interface JurorForAssignment {
   name?: string | null
   email: string
   expertiseTags: string[]
+  bio?: string | null
+  country?: string | null
   maxAssignments?: number | null
   _count?: {
     assignments: number
@@ -101,6 +109,12 @@ interface ProjectForAssignment {
   tags: string[]
   tagConfidences?: Array<{ name: string; confidence: number }>
   teamName?: string | null
+  competitionCategory?: string | null
+  oceanIssue?: string | null
+  country?: string | null
+  institution?: string | null
+  teamSize?: number
+  fileTypes?: string[]
   _count?: {
     assignments: number
   }
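Given the enriched `ProjectForAssignment` fields above, the prompt payload might be serialized roughly as follows. This is a hypothetical sketch: `serializeProject` and the exact layout are assumptions, though the 2000-character description cap (raised from 300) comes from the commit message.

```typescript
// A minimal sketch, not the repository's actual serializer, of rendering the
// enriched project fields into prompt text for the model.
const AI_DESCRIPTION_LIMIT = 2000 // raised from 300 in this commit

interface ProjectForAssignment {
  id: string
  title: string
  description?: string | null
  tagConfidences?: Array<{ name: string; confidence: number }>
  competitionCategory?: string | null
  oceanIssue?: string | null
  country?: string | null
  institution?: string | null
  teamSize?: number
  fileTypes?: string[]
}

function serializeProject(p: ProjectForAssignment): string {
  const tags = (p.tagConfidences ?? [])
    .map(t => `${t.name} (${t.confidence.toFixed(2)})`)
    .join(', ')
  return [
    `Project ${p.id}: ${p.title}`,
    `Category: ${p.competitionCategory ?? 'unknown'} | Ocean issue: ${p.oceanIssue ?? 'unknown'}`,
    `Country: ${p.country ?? 'unknown'} | Institution: ${p.institution ?? 'unknown'} | Team size: ${p.teamSize ?? 'unknown'}`,
    `Tags: ${tags || 'none'}`,
    `Documents: ${(p.fileTypes ?? []).join(', ') || 'none'}`,
    // Cap the description so large submissions don't blow up the prompt.
    `Description: ${(p.description ?? '').slice(0, AI_DESCRIPTION_LIMIT)}`,
  ].join('\n')
}
```

Sending the confidence value alongside each tag name is what lets the model apply the weighted matching the system prompt describes.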