Tremor constructs class names via template literals (e.g. fill-${color}-${shade}),
which Tailwind v4's scanner cannot detect statically. Added @source inline()
directives to explicitly safelist all color×shade×property combinations needed
by Tremor chart components.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The round breakdown was showing 200% for active rounds (it computed
assignments/projects) and 0% for closed rounds. Now it correctly computes
evaluations/assignments for active rounds and shows 100% for closed/archived rounds.
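The corrected calculation can be sketched as below; the function and parameter names are illustrative, not the actual code.

```typescript
// Hypothetical sketch of the fixed round-completion percentage.
type RoundStatus = "active" | "closed" | "archived";

function roundCompletion(
  status: RoundStatus,
  evaluations: number,
  assignments: number,
): number {
  // Closed/archived rounds are complete by definition.
  if (status !== "active") return 100;
  if (assignments === 0) return 0;
  // evaluations/assignments, capped at 100 to guard against stale counts.
  return Math.min(100, Math.round((evaluations / assignments) * 100));
}
```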
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Tremor generates Tailwind utility classes dynamically (fill-blue-500,
bg-emerald-500, etc.). Tailwind v4's automatic content detection doesn't scan
node_modules, so these classes were missing from CSS output, causing
all charts to render in black.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- getStatusBreakdown now uses ProjectRoundState when a specific round is selected
(fixes donut showing all "Eligible")
- Filter out boolean/section_header criteria from getCriteriaScores
(removes "Move to the Next Stage?" from bar chart)
- Replace 6 insight tiles with Top Countries horizontal bar chart
- Add round-level state labels/colors to chart-theme
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Use flex-1 on the Recently Reviewed card so it stretches to fill the
remaining vertical space in the left column, aligning its bottom with
Juror Workload and Activity Feed. Add className prop to AnimatedCard.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
When scoringMode is not 'binary', binaryDecision is null even though
jurors answer boolean criteria (e.g. "Do you recommend?"). Now falls
back to checking boolean values in criterionScoresJson. Hides the
recommendation line entirely when no boolean data exists.
Fixed in both analytics.ts (observer) and project.ts (admin).
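The fallback might be sketched like this (function name, the "any yes counts as a recommendation" rule, and the score shape are assumptions for illustration):

```typescript
// Illustrative fallback: when binaryDecision is null, look for boolean
// values in the per-criterion scores; return null when no boolean data
// exists so the UI can hide the recommendation line entirely.
function deriveRecommendation(
  binaryDecision: boolean | null,
  criterionScores: Record<string, number | boolean>,
): boolean | null {
  if (binaryDecision !== null) return binaryDecision;
  const booleans = Object.values(criterionScores).filter(
    (v): v is boolean => typeof v === "boolean",
  );
  if (booleans.length === 0) return null; // no boolean data: hide the line
  // Assumed rule for this sketch: any "yes" is a positive recommendation.
  return booleans.some(Boolean);
}
```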
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Replace breadcrumb with "Back to Projects" button on observer detail
- Remove submission links from observer project info
- Simplify files tab: remove redundant requirements checklist, show only
FileViewer (observer + admin)
- Fix round history: infer earlier rounds as PASSED when later round is
active (e.g. R2 shows Passed when project is active in R3)
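The inference in the last bullet can be sketched as follows (types and function name are hypothetical):

```typescript
// If a project is active in round N, every earlier round without an
// explicit outcome must have been passed.
type RoundOutcome = "PASSED" | "REJECTED" | "ACTIVE" | "UNKNOWN";

function inferRoundHistory(outcomes: RoundOutcome[]): RoundOutcome[] {
  const lastActive = outcomes.lastIndexOf("ACTIVE");
  return outcomes.map((o, i) =>
    o === "UNKNOWN" && lastActive > i ? "PASSED" : o,
  );
}
```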
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Restructure dashboard: score distribution + recently reviewed stacked in left column,
full-width map at bottom, activity feed in middle row
- Show all jurors in scrollable workload list (not just top 5)
- Filter recently reviewed to exclude rejected/not-reviewed projects
- Filter transition audit logs from activity feed
- Remove completion progress bar from stat tile for equal card heights
- Fix all Tremor charts: switch hex colors to named palette (cyan/teal/emerald/amber/rose)
to fix black bar rendering
- Fix transparent chart tooltips with global CSS overrides
- Remove tilted text labels from cross-round comparison charts
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Fix dashboard default round selection to target active round instead of R1
- Move edition selector from dashboard header to hamburger menu via shared context
- Add observer-friendly status labels (Not Reviewed / Under Review / Reviewed)
- Fix pipeline completion: closed rounds show 100%, cap all rates at 100%
- Round badge on projects list shows furthest round reached
- Hide scores/evals for projects with zero evaluations
- Enhance project detail round history with pass/reject indicators from ProjectRoundState
- Remove irrelevant fields (Org Type, Budget, Duration) from project detail
- Clickable juror workload with expandable project assignments
- Humanize activity feed with icons and readable messages
- Fix jurors table: responsive card layout on mobile
- Fix criteria chart: horizontal bars for readable labels on mobile
- Animate hamburger menu open/close with CSS grid transition
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@tremor/react@3.18.7 requires react@^18 but project uses react@19.
Adding .npmrc with legacy-peer-deps=true and copying it in Dockerfiles
so npm ci resolves correctly. Also fix implicit any in seed file.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
These packages are imported by new chart and toggle components but were
never added to package.json, causing the observer reports page to crash
client-side on load.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
New page at /observer/projects/[projectId] showing project info,
documents grouped by round requirements, and jury evaluations with
click-through to full review details. Dashboard table rows now link
to project detail. Also cleans up redundant programName prefixes
and fixes chart edge cases.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
notifyAdmins was using BATCH_ASSIGNED notification type, which triggers
the juror assignment email template ('X Projects Assigned'). Admins
received confusing emails that looked like they were assigned projects.
Changed to EVALUATION_MILESTONE type for admin-facing reshuffle/COI
notifications. Also included top receivers in admin notification message.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Audit log now renders JUROR_DROPOUT_RESHUFFLE and COI_REASSIGNMENT
entries as formatted tables with resolved juror names instead of raw
JSON with opaque IDs. Uses the new user.resolveNames endpoint to
batch-resolve user IDs. Also adds missing action types to the filter dropdown.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Bug fix: reassignDroppedJuror, reassignAfterCOI, and getSuggestions all
fell back to querying ALL JURY_MEMBER users globally when the round had
no juryGroupId. This caused projects to be assigned to jurors who are no
longer active in the jury pool. Now scopes to jury group members when
available, otherwise to jurors already assigned to the round.
Also adds getSuggestions jury group scoping (matching runAIAssignmentJob).
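The scoping fallback described above might look roughly like this (the juror shape and function name are assumptions, not the actual schema):

```typescript
// Prefer jury group members when a group exists; otherwise restrict to
// jurors already assigned in the round. Never fall back to the global
// juror list.
interface Juror {
  id: string;
  juryGroupIds: string[];
}

function eligibleJurorPool(
  allJurors: Juror[],
  juryGroupId: string | null,
  assignedJurorIds: Set<string>,
): Juror[] {
  if (juryGroupId) {
    return allJurors.filter((j) => j.juryGroupIds.includes(juryGroupId));
  }
  return allJurors.filter((j) => assignedJurorIds.has(j.id));
}
```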
New feature: Reassignment History panel on admin round page (collapsible)
shows per-project detail of where dropped/COI-reassigned projects went.
Reconstructs retroactive data from audit log timestamps + MANUAL
assignments for pre-fix entries. Future entries log full move details.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Disable enableSlices on ResponsiveLine with single data point (causes
null reference in Nivo internal slice computation)
- Add null check for slice.points[0] in timeline tooltip
- Guard ResponsivePie from empty data array in diversity metrics
- Add fallback for scoreDistribution.distribution on both
observer and admin reports pages
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The reassignDroppedJuror flow was missing a key step: after
reshuffling unsubmitted projects to other jurors, the dropped juror
was not removed from the jury group. This meant they could be
re-assigned in future assignment runs. Now deletes the JuryGroupMember
record after reshuffle, logs removal in audit, and updates the
confirmation dialog to reflect the full action.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
All 9 chart components now have early-return null/empty checks before
calling .map() on data props. The diversity-metrics chart guards all
nested array fields (byCountry, byCategory, byOceanIssue, byTag).
Analytics backend guards p.tags in getDiversityMetrics. This prevents
any "Cannot read properties of null (reading 'map')" crashes even if
upstream data shapes are unexpected.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
1. Evaluation submit: The requireAllCriteriaScored validation was
querying findFirst({ roundId, isActive: true }) to get the form
criteria, instead of using the evaluation's stored formId. If an
admin ever re-saved the evaluation form (creating a new version
with new criterion IDs), jurors who started evaluating before the
re-save had scores keyed to old IDs that didn't match the new
form. Now uses evaluation.form (the form assigned at start time).
2. Observer reports page: Two .map() calls on p.stages lacked null
guards, causing "Cannot read properties of null (reading 'map')"
crash. Added (p.stages || []) guards matching the pattern already
used in CrossStageTab.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
1. COI audit log: The declareCOI mutation always logged action
'COI_DECLARED' regardless of whether the user clicked "No Conflict"
or "Yes, I Have a Conflict". Now uses 'COI_NO_CONFLICT' when
hasConflict is false, showing "confirmed no conflict of interest"
in the audit trail.
2. Evaluation submission: The requireAllCriteriaScored validation
only accepted numeric values (typeof === 'number'), but boolean
criteria (yes/no questions) store true/false. This caused jurors
to get "Missing scores for criteria: criterion-xxx" errors even
after completing all fields. Now correctly validates boolean
criteria with typeof === 'boolean'. Also improved the error
message to show criterion labels instead of cryptic IDs.
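A minimal sketch of the corrected validation, assuming a criterion shape with id/label/type (the names are illustrative):

```typescript
// A criterion is satisfied by a number for scored criteria and by
// true/false for boolean (yes/no) criteria; missing entries are
// reported by label rather than by ID.
interface Criterion {
  id: string;
  label: string;
  type: "score" | "boolean";
}

function missingCriteria(
  criteria: Criterion[],
  scores: Record<string, number | boolean | undefined>,
): string[] {
  return criteria
    .filter((c) => {
      const v = scores[c.id];
      return c.type === "boolean"
        ? typeof v !== "boolean"
        : typeof v !== "number";
    })
    .map((c) => c.label); // labels, not cryptic IDs
}
```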
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
When a juror declares COI, the system now automatically:
- Finds an eligible replacement juror (not at capacity, no COI, not already assigned)
- Deletes the conflicted assignment and creates a new one
- Notifies the replacement juror and admins
- Load-balances by picking the juror with fewest current assignments
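The selection logic in the bullets above can be sketched as (candidate shape and function name are hypothetical):

```typescript
// Filter out jurors who are at capacity, have a declared COI, or already
// hold the assignment, then load-balance by fewest current assignments.
interface Candidate {
  id: string;
  assignments: number;
  cap: number | null; // null = no cap
  hasCOI: boolean;
  alreadyAssigned: boolean;
}

function pickReplacement(candidates: Candidate[]): Candidate | null {
  const eligible = candidates.filter(
    (c) =>
      !c.hasCOI &&
      !c.alreadyAssigned &&
      (c.cap === null || c.assignments < c.cap),
  );
  if (eligible.length === 0) return null;
  return eligible.reduce((a, b) => (b.assignments < a.assignments ? b : a));
}
```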
Also adds:
- "Reassign (COI)" action in assignment table dropdown with COI badge indicator
- Admin "Reassign to another juror" in COI review now triggers actual reassignment
- Per-juror notify button is now always visible (not just on hover)
- reassignCOI admin procedure for retroactive manual reassignment
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds a mail icon on hover for each juror row in the Jury Progress
table, allowing admins to send assignment notifications to individual
jurors instead of only bulk-notifying all at once.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Relative linkUrl paths (e.g. /jury/competitions) were passed as-is to
email templates, causing email clients to interpret them as local file
protocols (x-webdoc:// on macOS). Now prepends NEXTAUTH_URL to any
relative path before sending.
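A minimal sketch of the normalization (function name is illustrative):

```typescript
// Prepend the app base URL (e.g. NEXTAUTH_URL) to relative linkUrl
// paths so email clients don't interpret them as local file protocols.
function toAbsoluteUrl(linkUrl: string, baseUrl: string): string {
  if (/^https?:\/\//i.test(linkUrl)) return linkUrl; // already absolute
  return `${baseUrl.replace(/\/$/, "")}/${linkUrl.replace(/^\//, "")}`;
}
```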
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Send Reminders button now works: added sendManualReminders() that bypasses
cron-specific window/deadline/dedup guards so admin can send immediately
- Added Notify Jurors button that sends direct BATCH_ASSIGNED emails to all
jurors with assignments (not dependent on NotificationEmailSetting config)
- Fixed checkbox component: default border is now neutral grey (border-input),
red border (border-primary) only applied when checked
- Widened Add Assignment dialog from max-w-2xl to max-w-3xl to prevent overflow
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The "537 projects" count was summing projectRoundStates across all
rounds, so a project in 3 rounds was counted 3 times. Now queries
distinct projectIds across all competition rounds to show the actual
unique project count (214).
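The dedup amounts to counting distinct projectIds rather than summing per-round rows; a sketch with an assumed row shape:

```typescript
// A project active in several rounds should be counted once.
interface ProjectRoundState {
  projectId: string;
  roundId: string;
}

function distinctProjectCount(states: ProjectRoundState[]): number {
  return new Set(states.map((s) => s.projectId)).size;
}
```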
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The status summary badges (Eligible, Rejected, Assigned, etc.) were
computed from only the current page's projects. Now uses a groupBy
query on the same filters to return statusCounts for all matching
projects across all pages.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- specialAward.setEligibility: add ensureUserExists() guard and use Prisma
connect syntax to prevent FK violation on stale session user IDs
- specialAward.confirmShortlist: same ensureUserExists() guard for confirmedBy
- Round projects table: add Country column showing project origin
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Fix bug: AI assignment router read non-existent `(m as any).maxAssignments`
instead of the actual schema field `m.maxAssignmentsOverride`
- Wire `jurorLimits` record into AI assignment constraints so per-juror
caps are respected during both AI scoring and algorithmic assignment
- Add inline editable cap in jury members table (click to edit, blur/enter
to save, empty = no cap / use group default)
- Add inline editable cap badges on round page member list so admins can
set caps right from the assignment workflow
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Instead of 10 sequential GPT calls (which time out with GPT-5.1 on 99
projects), use a two-phase approach:
Phase 1 - AI Scoring: ONE API call asks GPT to score each juror's
affinity for all projects, returning a compact preference matrix with
expertise match scores and reasoning.
Phase 2 - Algorithm: Uses AI scores as the preference input to a
balanced assignment algorithm that assigns N reviewers per project,
enforcing even workload distribution, respecting per-juror caps, and
filling coverage gaps.
Benefits:
- Single API call eliminates timeout issues
- AI provides expertise-aware scoring, algorithm ensures balance
- Truncated response handling (JSON repair) for resilience
- Falls back to tag-based algorithm if AI fails
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Cap maxTokens at 12000 (was unlimited dynamic calc that could exceed model limits)
- Replace massive EXISTING array with compact CURRENT_JUROR_LOAD counts and
ALREADY_ASSIGNED per-project map (keeps prompt small across batches)
- Add coverage gap-filler: algorithmically fills projects below required reviews
- Show error state inline on page when AI fails (red banner with message)
- Add server-side logging for debugging assignment flow
- Reduce batch size to 10 projects
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Problems:
- GPT only generated 1 reviewer per project despite N being required
- maxTokens (4000) too small for N×projects assignment objects
- No fallback when GPT under-assigned
Fixes:
- System prompt now explicitly explains multiple reviewers per project
with concrete example showing 3 different juror_ids per project
- User prompt includes REVIEWS_PER_PROJECT, EXPECTED_OUTPUT_SIZE
- maxTokens dynamically calculated: expectedAssignments × 200 + 500
- Reduced batch size from 15 to 10 (fewer projects per GPT call)
- Added fillCoverageGaps() post-processor: algorithmically assigns
least-loaded jurors to any project below required coverage
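The gap-filler in the last bullet might be sketched as follows (data structures and function name are assumptions):

```typescript
// For every project below the required review count, repeatedly add the
// least-loaded juror who is not already assigned to it.
function fillCoverageGaps(
  assignments: Map<string, Set<string>>, // projectId -> assigned juror ids
  jurorLoad: Map<string, number>, // jurorId -> current assignment count
  requiredReviews: number,
): void {
  for (const [, jurors] of assignments) {
    while (jurors.size < requiredReviews) {
      const candidate = [...jurorLoad.entries()]
        .filter(([id]) => !jurors.has(id))
        .sort((a, b) => a[1] - b[1])[0];
      if (!candidate) break; // no juror left to add
      jurors.add(candidate[0]);
      jurorLoad.set(candidate[0], candidate[1] + 1);
    }
  }
}
```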
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Root cause: batches of 15 projects were processed independently, so GPT
didn't see assignments from previous batches and expert jurors ended up
with 18-22 projects while others got 4-5.
Fixes:
- Track cumulative assignments across batches (feed to each batch)
- Calculate ideal target per juror and communicate to GPT
- Add post-processing rebalancer that enforces hard caps and
redistributes excess assignments to least-loaded jurors
- Calculate sensible default max cap when not configured
- Reweight prompt: workload balance 50%, expertise 35%, diversity 15%
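The post-processing rebalancer could be sketched roughly like this (structure and name are hypothetical; the real code presumably also weighs expertise when choosing which project to move):

```typescript
// While any juror exceeds the hard cap, move one of their projects to
// the least-loaded juror who is not already on that project.
function rebalance(byJuror: Map<string, Set<string>>, cap: number): void {
  const load = (id: string) => byJuror.get(id)!.size;
  for (const [jurorId, projects] of byJuror) {
    while (projects.size > cap) {
      // Jurors with spare capacity, least-loaded first...
      const targets = [...byJuror.keys()]
        .filter((id) => id !== jurorId && load(id) < cap)
        .sort((a, b) => load(a) - load(b));
      // ...pick a project some target doesn't already have.
      const project = [...projects].find((p) =>
        targets.some((t) => !byJuror.get(t)!.has(p)),
      );
      if (!project) break; // nowhere to move anything
      const target = targets.find((t) => !byJuror.get(t)!.has(project))!;
      projects.delete(project);
      byJuror.get(target)!.add(project);
    }
  }
}
```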
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Shows warning with juror names when they have no expertise tags or bio,
so admin can ask them to onboard before committing assignments.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Rewrite AIRecommendationsDisplay: show project titles, per-project
checkboxes, Apply and Mark as Passed button with batch transition
- Show AI jury assignment reasoning directly in rows (not tooltip)
- Fix unassigned projects badge using requiredReviews instead of hardcoded 3
- Add aiParseFiles to EvaluationConfigSchema
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add general settings fields (startupAdvanceCount, conceptAdvanceCount,
notifyOnEntry, notifyOnAdvance) to ALL round config schemas, not just
FilteringConfig. Zod was stripping them on save for other round types.
- Replace floating save bar with error-only bar since autosave handles
all config persistence (800ms debounce)
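The stripping behavior that caused the bug can be modeled in a few lines; this is not Zod itself, just an illustration of why keys absent from a schema vanish on parse (Zod's z.object() drops undeclared keys by default):

```typescript
// Minimal model of schema-based key stripping: only declared keys survive.
function parseStripping<T extends Record<string, unknown>>(
  schemaKeys: string[],
  input: T,
): Partial<T> {
  return Object.fromEntries(
    Object.entries(input).filter(([k]) => schemaKeys.includes(k)),
  ) as Partial<T>;
}
```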
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>