Add pluggable LLM support with Gemini provider
- Add LLMProvider registry (llm/registry.py) that builds a provider from env vars (LLM_PROVIDER, GEMINI_API_KEY, GEMINI_MODEL)
- Add GeminiLLMProvider using the google-genai SDK
- Wire build_llm_provider() into the CLI and the web pipeline route (replacing llm=None)
- Wrap the pass 2 and pass 4 LLM calls in per-combo try/except so API errors skip individual combos rather than aborting the whole run
- Add a gemini optional dependency to pyproject.toml; the Dockerfile installs [web,gemini]
- Document the env vars in .env.example and README
- Lower requires-python to >=3.10 to match the installed system Python

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
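The env-var-driven selection described above can be sketched as follows. The names build_llm_provider, LLM_PROVIDER, GEMINI_API_KEY, and GEMINI_MODEL come from the commit; the class bodies are placeholders (the real GeminiLLMProvider wraps the google-genai SDK, which this sketch deliberately does not import, so it runs without the optional dependency), and the exact internals are assumptions.

```python
import os


class StubLLMProvider:
    """Fallback when no provider is configured (the 'stub estimation' path)."""
    name = "stub"


class GeminiLLMProvider:
    """Placeholder for the real provider, which calls the google-genai SDK."""
    name = "gemini"

    def __init__(self, api_key: str, model: str):
        self.api_key = api_key
        self.model = model


def build_llm_provider(env=None):
    """Build a provider from env vars; blank LLM_PROVIDER selects the stub."""
    env = os.environ if env is None else env
    provider = env.get("LLM_PROVIDER", "").strip().lower()
    if not provider:
        return StubLLMProvider()
    if provider == "gemini":
        api_key = env.get("GEMINI_API_KEY")
        if not api_key:
            raise ValueError("GEMINI_API_KEY is required when LLM_PROVIDER=gemini")
        model = env.get("GEMINI_MODEL", "gemini-2.0-flash")
        return GeminiLLMProvider(api_key=api_key, model=model)
    raise ValueError(f"Unknown LLM_PROVIDER: {provider!r}")
```

Passing the environment as a mapping (defaulting to os.environ) keeps the registry testable without mutating process state, which is likely why both the CLI and the web route can share one build_llm_provider() entry point.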
.env.example (new file, 11 additions)
@@ -0,0 +1,11 @@
+# Copy to .env — FLASK_SECRET_KEY is auto-generated on first run if omitted.
+FLASK_SECRET_KEY=
+
+# LLM provider (leave blank to use stub estimation)
+# Supported: gemini
+LLM_PROVIDER=
+
+# Gemini (required when LLM_PROVIDER=gemini)
+# Install: pip install -e '.[gemini]' (from repo root)
+GEMINI_API_KEY=
+GEMINI_MODEL=gemini-2.0-flash