Jury evaluation UX overhaul + admin review features

- Fix project documents not displaying on jury project page (rewrote MultiWindowDocViewer to use file.listByProject)
- Add working download/preview for project files via presigned URLs (see the download sketch below)
- Display project tags on jury project detail page
- Add autosave for evaluation drafts, debounced 3s with a flush on unmount/beforeunload (see the autosave sketch after this list)
- Support mixed criterion types: numeric scores, yes/no booleans, text responses, and section headers (see the type sketch below)
- Replace inline criteria editor with rich EvaluationFormBuilder on admin round page
- Remove the COI (conflict-of-interest) dialog from the evaluation page
- Update AI summary service to handle boolean/text criteria (yes/no counts, text synthesis)
- Update EvaluationSummaryCard to show boolean criteria bars and text responses
- Add evaluation detail sheet on admin project page (click juror row to view full scores + feedback)
- Add Recent Evaluations dashboard widget showing latest jury reviews
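
The debounced autosave above follows a common React pattern. A minimal sketch, assuming a `saveDraft` callback that persists the draft — the hook name and draft shape are illustrative, not the actual implementation:

import { useEffect, useRef } from "react"

// Illustrative draft shape; the real evaluation draft carries more fields.
interface EvaluationDraft {
  projectId: string
  responses: Record<string, unknown>
}

// Hypothetical hook: debounce saves by delayMs, flush on unmount/tab close.
export function useAutosaveDraft(
  draft: EvaluationDraft,
  saveDraft: (draft: EvaluationDraft) => void, // assumed persistence callback
  delayMs = 3000,
) {
  const latest = useRef(draft)
  latest.current = draft

  // Debounce: restart the timer whenever the draft changes.
  useEffect(() => {
    const timer = setTimeout(() => saveDraft(latest.current), delayMs)
    return () => clearTimeout(timer)
  }, [draft, delayMs, saveDraft])

  // Flush once on unmount and when the tab is closed (beforeunload).
  useEffect(() => {
    const flush = () => saveDraft(latest.current)
    window.addEventListener("beforeunload", flush)
    return () => {
      window.removeEventListener("beforeunload", flush)
      flush()
    }
  }, [saveDraft])
}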

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Matt committed on 2026-02-18 12:43:28 +01:00 · commit 9ce56f13fd · parent 73759eaddd
12 changed files with 1137 additions and 385 deletions
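
For the download/preview bullet, a minimal sketch of the presigned-URL flow, assuming a server route that returns a short-lived URL — the route path and response shape are assumptions:

// Hypothetical route: the server responds with a short-lived presigned URL
// pointing at object storage; the path below is illustrative.
async function downloadProjectFile(fileId: string): Promise<void> {
  const res = await fetch(`/api/files/${fileId}/presigned-url`)
  if (!res.ok) throw new Error(`presigned URL request failed: ${res.status}`)
  const { url } = (await res.json()) as { url: string }

  // Click a temporary anchor so the browser streams the file straight
  // from storage; the actual filename comes from Content-Disposition.
  const a = document.createElement("a")
  a.href = url
  a.download = ""
  document.body.appendChild(a)
  a.click()
  a.remove()
}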


@@ -42,10 +42,21 @@ interface EvaluationSummaryCardProps {
roundId: string
}
interface BooleanStats {
yesCount: number
noCount: number
total: number
yesPercent: number
trueLabel: string
falseLabel: string
}
interface ScoringPatterns {
averageGlobalScore: number | null
consensus: number
criterionAverages: Record<string, number>
booleanCriteria?: Record<string, BooleanStats>
textResponses?: Record<string, string[]>
evaluatorCount: number
}
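
The mixed criterion types from the commit message could be modeled as a discriminated union feeding both the form builder and the summary shapes above. A sketch with assumed field names; the actual schema may differ:

// Hypothetical schema for the four criterion kinds.
type Criterion =
  | { kind: "score"; id: string; label: string; min: number; max: number }
  | { kind: "boolean"; id: string; label: string; trueLabel: string; falseLabel: string }
  | { kind: "text"; id: string; label: string }
  | { kind: "section"; id: string; label: string } // header only, collects no response

// Narrowing on `kind` lets renderers and aggregation handle each
// criterion type without casts.
function isBoolean(c: Criterion): c is Extract<Criterion, { kind: "boolean" }> {
  return c.kind === "boolean"
}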
@@ -296,10 +307,10 @@ export function EvaluationSummaryCard({
</div>
)}
-{/* Criterion Averages */}
+{/* Criterion Averages (Numeric) */}
{Object.keys(patterns.criterionAverages).length > 0 && (
<div>
-<p className="text-sm font-medium mb-2">Criterion Averages</p>
+<p className="text-sm font-medium mb-2">Score Averages</p>
<div className="space-y-2">
{Object.entries(patterns.criterionAverages).map(([label, avg]) => (
<div key={label} className="flex items-center gap-3">
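
Before the rendering hunk below, a sketch of how each BooleanStats entry (interface from the first hunk) might be aggregated from per-evaluator answers; the input shape is an assumption:

// Assumed raw input: one yes/no answer per evaluator for a criterion.
interface BooleanAnswer {
  value: boolean
  trueLabel: string
  falseLabel: string
}

function toBooleanStats(answers: BooleanAnswer[]): BooleanStats {
  const yesCount = answers.filter((a) => a.value).length
  const total = answers.length
  return {
    yesCount,
    noCount: total - yesCount,
    total,
    // Rounded so the bar widths and the percent labels stay in sync.
    yesPercent: total > 0 ? Math.round((yesCount / total) * 100) : 0,
    trueLabel: answers[0]?.trueLabel ?? "Yes",
    falseLabel: answers[0]?.falseLabel ?? "No",
  }
}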
@@ -323,6 +334,69 @@ export function EvaluationSummaryCard({
</div>
)}
{/* Boolean Criteria (Yes/No) */}
{patterns.booleanCriteria && Object.keys(patterns.booleanCriteria).length > 0 && (
<div>
<p className="text-sm font-medium mb-2">Yes/No Decisions</p>
<div className="space-y-3">
{Object.entries(patterns.booleanCriteria).map(([label, stats]) => (
<div key={label} className="space-y-1">
<div className="flex items-center justify-between">
<span className="text-sm text-muted-foreground truncate">
{label}
</span>
<span className="text-xs text-muted-foreground flex-shrink-0 ml-2">
{stats.yesCount} {stats.trueLabel} / {stats.noCount} {stats.falseLabel}
</span>
</div>
<div className="flex h-2 rounded-full overflow-hidden bg-muted">
{stats.yesCount > 0 && (
<div
className="h-full bg-emerald-500 transition-all"
style={{ width: `${stats.yesPercent}%` }}
/>
)}
{stats.noCount > 0 && (
<div
className="h-full bg-red-400 transition-all"
style={{ width: `${100 - stats.yesPercent}%` }}
/>
)}
</div>
<div className="flex justify-between text-xs">
<span className="text-emerald-600">{stats.yesPercent}% {stats.trueLabel}</span>
<span className="text-red-500">{100 - stats.yesPercent}% {stats.falseLabel}</span>
</div>
</div>
))}
</div>
</div>
)}
{/* Text Responses */}
{patterns.textResponses && Object.keys(patterns.textResponses).length > 0 && (
<div>
<p className="text-sm font-medium mb-2">Text Responses</p>
<div className="space-y-3">
{Object.entries(patterns.textResponses).map(([label, responses]) => (
<div key={label} className="space-y-1.5">
<p className="text-sm text-muted-foreground">{label}</p>
<div className="space-y-1.5">
{responses.map((text, i) => (
<div
key={i}
className="text-sm p-2 rounded border bg-muted/50 whitespace-pre-wrap"
>
{text}
</div>
))}
</div>
</div>
))}
</div>
</div>
)}
{/* Recommendation */}
{summaryData.recommendation && (
<div className="p-3 rounded-lg bg-blue-500/10 border border-blue-200">