Fix AI filtering bugs, add special award shortlist integration
All checks were successful
Build and Push Docker Image / build (push) Successful in 8m20s
Part 1 - Bug Fixes:
- Fix toProjectWithRelations() stripping file fields needed by AI (detectedLang, textContent, etc.)
- Fix parseAIData() reading the flat shape when aiScreeningJson is nested under a rule ID
- Fix getAIConfidenceScore(), which had the same nesting issue (always returned 0)

Part 2 - Special Award Track Integration:
- Add shortlistSize to SpecialAward; add qualityScore/shortlisted/confirmed fields to AwardEligibility
- Add specialAwardId to Round for award-owned rounds
- Update AI eligibility service to return qualityScore (0-100) for ranking
- Update eligibility job with filteringRoundId scoping and auto-shortlisting of the top N
- Add 8 new specialAward router procedures (listForRound, runEligibilityForRound, listShortlist, toggleShortlisted, confirmShortlist, listRounds, createRound, deleteRound)
- Create award-shortlist.tsx component with ranked table, shortlist checkboxes, and confirm dialog
- Add "Special Award Tracks" section to filtering dashboard

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
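The parseAIData() nesting fix described above might look like the following sketch: aiScreeningJson can hold results either flat or nested under a rule ID, and the old code only read the flat shape. All names and shapes here are assumptions for illustration, not the repository's actual code.

```typescript
// Hypothetical shape of a stored AI screening result.
interface AIScreeningResult {
  eligible: boolean
  confidence: number
}

// Read a screening result that may be nested under a rule ID
// ({ "rule-1": { eligible, confidence } }) or stored flat
// ({ eligible, confidence }). Returns null when neither shape matches.
function parseAIData(
  aiScreeningJson: Record<string, unknown>,
  ruleId: string,
): AIScreeningResult | null {
  // Prefer the nested shape keyed by rule ID (the bug was reading flat only).
  const nested = aiScreeningJson[ruleId]
  if (nested && typeof nested === 'object' && 'eligible' in nested) {
    return nested as AIScreeningResult
  }
  // Fall back to the legacy flat shape.
  if ('eligible' in aiScreeningJson) {
    return aiScreeningJson as unknown as AIScreeningResult
  }
  return null
}
```

Reading the nested shape first keeps legacy flat records working while fixing the "always returned 0" symptom for rule-keyed records.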
@@ -48,6 +48,7 @@ Return a JSON object:
   "project_id": "PROJECT_001",
   "eligible": true/false,
   "confidence": 0.0-1.0,
+  "quality_score": 0-100,
   "reasoning": "2-3 sentence explanation covering key dimensions",
   "dimensionScores": {
     "geographic": 0.0-1.0,
@@ -59,6 +60,8 @@ Return a JSON object:
   ]
 }
+
+quality_score is a 0-100 integer measuring how well the project fits the award criteria (used for ranking shortlists). 100 = perfect fit, 0 = no fit. Even ineligible projects should receive a score for reference.
 
 ## Guidelines
 - Base evaluation only on provided data — do not infer missing information
 - eligible=true only when ALL required dimensions score above 0.5
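The ranking use described above (quality_score ordering a shortlist of size N) can be sketched as follows. The function and field names are illustrative assumptions, not the actual eligibility-job code.

```typescript
// One scored project as the AI eligibility pass might report it.
interface ScoredProject {
  projectId: string
  eligible: boolean
  qualityScore: number // 0-100, higher = better fit for the award
}

// Auto-shortlist: keep eligible projects, rank by qualityScore
// descending, and take the top `shortlistSize` project IDs.
function autoShortlist(results: ScoredProject[], shortlistSize: number): string[] {
  return results
    .filter((r) => r.eligible)
    .sort((a, b) => b.qualityScore - a.qualityScore)
    .slice(0, shortlistSize)
    .map((r) => r.projectId)
}
```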
@@ -77,6 +80,7 @@ export interface EligibilityResult {
   projectId: string
   eligible: boolean
   confidence: number
+  qualityScore: number
   reasoning: string
   method: 'AUTO' | 'AI'
 }
@@ -229,6 +233,7 @@ Evaluate eligibility for each project.`
     project_id: string
     eligible: boolean
     confidence: number
+    quality_score?: number
     reasoning: string
   }>
 }
@@ -273,6 +278,7 @@ Evaluate eligibility for each project.`
         projectId: mapping.realId,
         eligible: eval_.eligible,
         confidence: eval_.confidence,
+        qualityScore: Math.max(0, Math.min(100, eval_.quality_score ?? 0)),
         reasoning: eval_.reasoning,
         method: 'AI',
       })
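The `Math.max(0, Math.min(100, … ?? 0))` expression in the hunk above defends against two failure modes in the raw AI response: a missing quality_score (the field is optional) and an out-of-range value. Pulled out as a helper for clarity (the helper name is mine, not the repository's):

```typescript
// Normalize a raw AI-reported quality score: missing values default
// to 0, and out-of-range values are clamped into [0, 100].
function clampQualityScore(raw: number | undefined): number {
  return Math.max(0, Math.min(100, raw ?? 0))
}
```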
@@ -305,6 +311,7 @@ Evaluate eligibility for each project.`
         projectId: mapping.realId,
         eligible: false,
         confidence: 0,
+        qualityScore: 0,
         reasoning: 'AI response parse error — requires manual review',
         method: 'AI',
       })
@@ -333,6 +340,7 @@ export async function aiInterpretCriteria(
       projectId: p.id,
       eligible: false,
       confidence: 0,
+      qualityScore: 0,
       reasoning: 'AI unavailable — requires manual eligibility review',
       method: 'AI' as const,
     }))
@@ -401,6 +409,7 @@ export async function aiInterpretCriteria(
       projectId: p.id,
       eligible: false,
       confidence: 0,
+      qualityScore: 0,
       reasoning: `AI error: ${classified.message}`,
       method: 'AI' as const,
     }))
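The last three hunks share one pattern: when the AI pass fails (parse error, service unavailable, or a classified error), every project gets a uniform fail-closed result instead of being silently dropped. A sketch of that pattern, assuming the EligibilityResult interface from the earlier hunk:

```typescript
// Result shape matching the EligibilityResult interface in the diff.
interface EligibilityResult {
  projectId: string
  eligible: boolean
  confidence: number
  qualityScore: number
  reasoning: string
  method: 'AUTO' | 'AI'
}

// Build one "needs manual review" result per project. Failing closed
// (eligible: false, confidence: 0, qualityScore: 0) means an AI outage
// never auto-approves or auto-ranks anything.
function fallbackResults(projectIds: string[], reason: string): EligibilityResult[] {
  return projectIds.map((id) => ({
    projectId: id,
    eligible: false,
    confidence: 0,
    qualityScore: 0,
    reasoning: reason,
    method: 'AI' as const,
  }))
}
```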