Add pluggable LLM support with Gemini provider
- Add LLMProvider registry (llm/registry.py) that builds a provider from env vars (LLM_PROVIDER, GEMINI_API_KEY, GEMINI_MODEL)
- Add GeminiLLMProvider using the google-genai SDK
- Wire build_llm_provider() into CLI and web pipeline route (replacing llm=None)
- Wrap pass 2 and pass 4 LLM calls in per-combo try/except so API errors skip individual combos rather than aborting the whole run
- Add gemini optional dep to pyproject.toml; Dockerfile installs [web,gemini]
- Document env vars in .env.example and README
- Lower requires-python to >=3.10 to match installed system Python

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
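The env-driven registry described above could look roughly like the following. This is a hedged sketch, not the actual llm/registry.py: the GeminiLLMProvider body, the RuntimeError message, and the GEMINI_MODEL fallback value are assumptions; only the function name, env var names, and the "return None when unconfigured" behavior (implied by the original llm=None wiring) come from the commit message.

```python
import os

class GeminiLLMProvider:
    """Sketch of a provider that would wrap the google-genai SDK.

    Real network calls are omitted here; this only captures the
    configuration surface implied by the commit message.
    """
    def __init__(self, api_key: str, model: str):
        self.api_key = api_key
        self.model = model

def build_llm_provider():
    """Build an LLM provider from env vars, or return None when unset."""
    provider = os.environ.get("LLM_PROVIDER", "").lower()
    if provider == "gemini":
        api_key = os.environ.get("GEMINI_API_KEY")
        if not api_key:
            raise RuntimeError("LLM_PROVIDER=gemini requires GEMINI_API_KEY")
        # Default model name is a placeholder assumption, not from the commit.
        model = os.environ.get("GEMINI_MODEL", "gemini-2.0-flash")
        return GeminiLLMProvider(api_key=api_key, model=model)
    # No provider configured: pipeline runs without an LLM (llm=None).
    return None
```

Returning None keeps the pipeline usable without any API key, matching the pre-change behavior where the route passed llm=None.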
@@ -42,9 +42,10 @@ def _run_pipeline_in_background(
         conn.close()
         return
 
+    from physcom.llm.registry import build_llm_provider
+
     resolver = ConstraintResolver()
     scorer = Scorer(domain)
-    pipeline = Pipeline(repo, resolver, scorer, llm=None)
+    pipeline = Pipeline(repo, resolver, scorer, llm=build_llm_provider())
 
     pipeline.run(
         domain, dim_list,
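The per-combo error handling mentioned in the commit message (pass 2 and pass 4) could be sketched like this. The helper name and signature are hypothetical, assumed for illustration; the real code presumably catches the SDK's specific API error class rather than bare Exception.

```python
def run_pass_over_combos(combos, llm_call):
    """Apply an LLM call to each combo; an API error skips only that combo.

    A failing call is recorded in `skipped` instead of aborting the run,
    matching the try/except-per-combo behavior the commit describes.
    """
    results = {}
    skipped = []
    for combo in combos:
        try:
            results[combo] = llm_call(combo)
        except Exception:  # real code would catch the SDK's API error type
            skipped.append(combo)
    return results, skipped
```

With this shape, a transient Gemini API failure on one combo leaves the remaining combos' results intact rather than losing the whole pass.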