Commit Graph

7 Commits

e01d741f01 Fix GPT-5 nano empty response issue with token limits
GPT-5 nano (and other GPT-5 models) use reasoning that consumes
the output token budget. When max_tokens is too low, all tokens
get used by internal reasoning, leaving nothing for the response.

- Add needsHigherTokenLimit() to detect models needing more tokens
- Add getMinTokenLimit() to ensure minimum 16k tokens for GPT-5
- Update buildCompletionParams to apply minimum token limits
- This fixes the "No response from AI" error with gpt-5-nano
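A minimal sketch of the two helpers this commit describes. The model-name patterns and the function bodies are assumptions; only the rule itself (reasoning models need a 16k-token floor so internal reasoning does not exhaust the output budget) comes from the message above.

```typescript
// Floor stated in the commit message for GPT-5-family models.
const MIN_REASONING_TOKENS = 16_000;

/** Detect models whose internal reasoning consumes the output token budget (pattern assumed). */
function needsHigherTokenLimit(model: string): boolean {
  return /^(gpt-5|o[134])/.test(model);
}

/** Raise a requested max token value to a safe floor for reasoning models. */
function getMinTokenLimit(model: string, requested: number): number {
  return needsHigherTokenLimit(model)
    ? Math.max(requested, MIN_REASONING_TOKENS)
    : requested;
}
```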

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-05 15:02:22 +01:00
3986da172f Fix GPT-5 API compatibility and add AIUsageLog migration
- Add AIUsageLog table migration for token tracking
- Fix GPT-5 temperature parameter (not supported, like o-series)
- Add usesNewTokenParam() and supportsTemperature() functions
- Add GPT-5+ category to model selection UI
- Update model sorting to show GPT-5+ first

GPT-5 and newer models use max_completion_tokens and don't support
custom temperature values, similar to reasoning models.
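A sketch of how `usesNewTokenParam()` and `supportsTemperature()` might feed into `buildCompletionParams`. The regex and the parameter-builder shape are assumptions; the rule itself (GPT-5 and o-series use `max_completion_tokens` and reject custom temperature) comes from the commit message.

```typescript
/** GPT-5+ and o-series detection (pattern assumed). */
function usesNewTokenParam(model: string): boolean {
  return /^(gpt-5|o[134])/.test(model);
}

/** Reasoning-style models do not accept a custom temperature. */
function supportsTemperature(model: string): boolean {
  return !usesNewTokenParam(model);
}

/** Choose max_completion_tokens vs max_tokens and drop unsupported temperature. */
function buildCompletionParams(model: string, maxTokens: number, temperature?: number) {
  const params: Record<string, unknown> = { model };
  if (usesNewTokenParam(model)) {
    params.max_completion_tokens = maxTokens;
  } else {
    params.max_tokens = maxTokens;
    if (temperature !== undefined) params.temperature = temperature;
  }
  return params;
}
```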

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-03 15:04:16 +01:00
c0ce6f9f1f Fix GPT-5 max_completion_tokens parameter detection
GPT-5 and newer models require max_completion_tokens instead of max_tokens.
Added usesNewTokenParam() to detect GPT-5+ models separately from reasoning
model restrictions (temperature, json_object, system messages).

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-03 13:08:01 +01:00
928b1c65dc Optimize AI system with batching, token tracking, and GDPR compliance
- Add AIUsageLog model for persistent token/cost tracking
- Implement batched processing for all AI services:
  - Assignment: 15 projects/batch
  - Filtering: 20 projects/batch
  - Award eligibility: 20 projects/batch
  - Mentor matching: 15 projects/batch
- Create unified error classification (ai-errors.ts)
- Enhance anonymization with comprehensive project data
- Add AI usage dashboard to Settings page
- Add usage stats endpoints to settings router
- Create AI system documentation (5 files)
- Create GDPR compliance documentation (2 files)
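The batch sizes above suggest a shared chunking helper. This is an illustrative sketch under that assumption (the helper name and signature are not from the source): split a project list into fixed-size batches and process them sequentially, collecting results.

```typescript
/**
 * Process items in fixed-size batches, awaiting each batch before the next
 * (e.g. 15 projects/batch for assignment, 20 for filtering, per the commit).
 */
async function processInBatches<T, R>(
  items: T[],
  batchSize: number,
  handler: (batch: T[]) => Promise<R[]>,
): Promise<R[]> {
  const results: R[] = [];
  for (let i = 0; i < items.length; i += batchSize) {
    // slice() never overruns: the final batch may simply be smaller.
    results.push(...(await handler(items.slice(i, i + batchSize))));
  }
  return results;
}
```

Sequential batches keep each AI request's payload bounded and make per-batch token accounting straightforward.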

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-03 11:58:12 +01:00
d068d9b6f6 Improve AI filtering error handling and visibility
- Add listAvailableModels() and validateModel() to openai.ts
- Improve testOpenAIConnection() to test configured model
- Add checkAIStatus endpoint to filtering router
- Add pre-execution AI config check in executeRules
- Improve error messages in AI filtering service (rate limit, quota, etc.)
- Add AI status warning banner on round detail page for filtering rounds

Now admins get clear errors when AI is misconfigured instead of silent flags.
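The "rate limit, quota, etc." error messages imply some classification of API failures. A hypothetical sketch of that idea (the type and patterns are assumptions, not the actual `ai-errors.ts` contents):

```typescript
type AIErrorKind = "rate_limit" | "quota" | "invalid_model" | "unknown";

/** Map a raw OpenAI error message to a category an admin-facing banner can explain. */
function classifyAIError(message: string): AIErrorKind {
  if (/rate limit/i.test(message)) return "rate_limit";
  if (/quota|billing/i.test(message)) return "quota";
  if (/does not exist|model.*not.*found/i.test(message)) return "invalid_model";
  return "unknown";
}
```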

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-03 10:46:38 +01:00
bfcfd84008 Use admin-configured AI model and add GPT-5/o-series options
- Add getConfiguredModel() that reads ai_model from SystemSettings
- AI assignment and mentor matching now use the admin-selected model
- Remove duplicate OpenAI client in mentor-matching (use shared singleton)
- Add GPT-5, GPT-5 Mini, o3, o3 Mini, o4 Mini to model dropdown
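A sketch of what `getConfiguredModel()` reading the `ai_model` key from SystemSettings might look like. The store interface and the fallback model are assumptions; only the key name and the behavior come from the commit message.

```typescript
interface SettingsStore {
  get(key: string): Promise<string | null>;
}

const DEFAULT_MODEL = "gpt-4o-mini"; // assumed fallback, not from the source

/** Return the admin-selected model from SystemSettings, or a default if unset. */
async function getConfiguredModel(settings: SettingsStore): Promise<string> {
  return (await settings.get("ai_model")) ?? DEFAULT_MODEL;
}
```

Routing every service through one accessor (instead of per-service clients) is what lets the mentor-matching duplicate client be removed.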

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 16:24:46 +01:00
a606292aaa Initial commit: MOPC platform with Docker deployment setup
Full Next.js 15 platform with tRPC, Prisma, PostgreSQL, NextAuth.
Includes production Dockerfile (multi-stage, port 7600), docker-compose
with registry-based image pull, Gitea Actions CI workflow, nginx config
for portal.monaco-opc.com, deployment scripts, and DEPLOYMENT.md guide.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-30 13:41:32 +01:00