e01d741f01101652afac31aa2715406b7752e3b7
GPT-5 nano (and other GPT-5 models) use reasoning that consumes the output token budget. When max_tokens is too low, all tokens are consumed by internal reasoning, leaving nothing for the response.

- Add needsHigherTokenLimit() to detect models that need more tokens
- Add getMinTokenLimit() to ensure a minimum of 16k tokens for GPT-5
- Update buildCompletionParams to apply minimum token limits
- This fixes the "No response from AI" error with gpt-5-nano

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
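A minimal sketch of how the helpers named in the commit message could fit together. The function names come from the commit; the signatures, the 16k constant's exact value placement, and the prefix-based model check are assumptions for illustration.

```typescript
// Assumed minimum output budget so reasoning tokens don't starve the reply.
const GPT5_MIN_TOKENS = 16_000;

// Detect models whose internal reasoning shares the output token budget
// (hypothetical check: any model id starting with "gpt-5").
function needsHigherTokenLimit(model: string): boolean {
  return model.startsWith("gpt-5");
}

// Minimum token limit for a given model; 0 means "no floor".
function getMinTokenLimit(model: string): number {
  return needsHigherTokenLimit(model) ? GPT5_MIN_TOKENS : 0;
}

interface CompletionParams {
  model: string;
  max_tokens: number;
}

// Apply the per-model floor when building request parameters.
function buildCompletionParams(model: string, maxTokens: number): CompletionParams {
  return {
    model,
    max_tokens: Math.max(maxTokens, getMinTokenLimit(model)),
  };
}
```

For example, `buildCompletionParams("gpt-5-nano", 1024)` would raise max_tokens to 16000, while a non-GPT-5 model keeps the caller's value unchanged.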