# PhysCom — Project Status
## Architecture Overview

```
src/
├── physcom/                      Core engine (CLI + SQLite + 5-pass pipeline)
│   ├── cli.py                    Click CLI: init, seed, entity, domain, run, results, review, export
│   ├── db/
│   │   ├── schema.py             DDL — 9 tables, 4 indexes
│   │   └── repository.py         Data-access layer (24 methods)
│   ├── engine/
│   │   ├── combinator.py         Cartesian product generator
│   │   ├── constraint_resolver.py  Pass 1 — requires/provides/excludes/range matching
│   │   ├── scorer.py             Pass 3 — weighted geometric mean normalizer
│   │   └── pipeline.py           Orchestrator for passes 1–5
│   ├── llm/
│   │   ├── base.py               Abstract LLMProvider interface (2 methods)
│   │   ├── prompts.py            Prompt templates for passes 2 and 4
│   │   └── providers/
│   │       └── mock.py           Deterministic stub for tests
│   ├── models/                   Dataclasses: Entity, Dependency, Domain, MetricBound, Combination, Score
│   └── seed/
│       └── transport_example.py  9 platforms + 9 power sources + 2 domains
│
├── physcom_web/                  Flask web UI
│   ├── app.py                    App factory, per-request DB connection
│   ├── routes/
│   │   ├── entities.py           Entity + dependency CRUD (9 routes, HTMX)
│   │   ├── domains.py            Domain listing (1 route)
│   │   ├── pipeline.py           Run form + execution (2 routes)
│   │   └── results.py            Browse, detail, human review (4 routes)
│   ├── templates/                Jinja2 + HTMX — 11 templates
│   └── static/style.css
│
tests/                            37 passing tests
Dockerfile                        Single-stage Python 3.13-slim
docker-compose.yml                web + cli services, shared volume
```
## What Works

| Area | Status | Notes |
|---|---|---|
| Database schema | Done | 9 tables, WAL mode, foreign keys |
| Entity/dependency CRUD | Done | CLI + web UI |
| Domain + metric weights | Done | CLI seed + web read-only |
| Pass 1 — constraint resolution | Done | requires/provides/excludes/range logic |
| Pass 2 — physics estimation | Done | Stub heuristic (force/mass-based); LLM path exists but no real provider |
| Pass 3 — scoring + ranking | Done | Weighted geometric mean with min/max normalization |
| Pass 4 — LLM plausibility review | Wired | Pipeline calls self.llm.review_plausibility() when llm is not None; only MockLLMProvider exists |
| Pass 5 — human review | Done | CLI interactive + web HTMX form |
| Web UI | Done | Entity CRUD, domain view, pipeline run, results browse + review |
| Docker | Done | Compose with web + cli services, named volume |
| Tests | 37/37 passing | Repository, combinator, constraints, scorer, pipeline |
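To make Pass 3 concrete, here is a minimal sketch of a weighted geometric mean with min/max normalization. The function name, signature, and epsilon guard are illustrative and do not reflect the actual `scorer.py` API:

```python
def weighted_geometric_mean_score(
    values: dict[str, float],
    weights: dict[str, float],
    bounds: dict[str, tuple[float, float]],
) -> float:
    """Normalize each metric to [0, 1] via its (min, max) bounds, then
    combine with a weighted geometric mean. Illustrative sketch only."""
    total_w = sum(weights.values())
    score = 1.0
    for metric, value in values.items():
        lo, hi = bounds[metric]
        norm = max((value - lo) / (hi - lo), 1e-9)  # clamp so one zero metric doesn't erase the product entirely
        score *= norm ** (weights[metric] / total_w)
    return score
```

A geometric mean (unlike an arithmetic one) penalizes combinations that are terrible on any single metric, which suits plausibility ranking.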
## What's Missing

### LLM provider — no real implementation yet
The `LLMProvider` abstract class defines two methods:

```python
class LLMProvider(ABC):
    def estimate_physics(self, combination_description: str, metrics: list[str]) -> dict[str, float]: ...
    def review_plausibility(self, combination_description: str, scores: dict[str, float]) -> str: ...
```
- **Pass 2 (`estimate_physics`)** — given a combination description like `"platform: Bicycle + power_source: Hydrogen Combustion Engine"`, return estimated metric values (speed, cost_efficiency, safety, etc.) as floats.
- **Pass 4 (`review_plausibility`)** — given a combination description and its normalized scores, return a 2–4 sentence plausibility assessment.
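To show the shape of the contract, here is a deterministic stub in the spirit of `providers/mock.py`. The class name and the hash-based value scheme are hypothetical, not the actual stub's implementation:

```python
import hashlib


class DeterministicStubProvider:
    """Illustrative stand-in for the LLMProvider contract: same inputs
    always produce the same outputs, so tests stay reproducible."""

    def estimate_physics(self, combination_description: str, metrics: list[str]) -> dict[str, float]:
        # Derive a stable pseudo-value in [0, 1] per metric from a hash of the description.
        digest = hashlib.sha256(combination_description.encode()).digest()
        return {m: digest[i % len(digest)] / 255.0 for i, m in enumerate(metrics)}

    def review_plausibility(self, combination_description: str, scores: dict[str, float]) -> str:
        best = max(scores, key=scores.get)
        return f"{combination_description}: strongest on {best} ({scores[best]:.2f})."
```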
Prompt templates already exist in `src/physcom/llm/prompts.py`. The pipeline already checks `if self.llm:` and skips gracefully when it is `None`.
To enable LLM reviews, you need to:

1. **Create a real provider** at `src/physcom/llm/providers/<name>.py` that subclasses `LLMProvider`. For example, an Anthropic provider:

   ```python
   # src/physcom/llm/providers/anthropic.py
   import json

   from anthropic import Anthropic

   from physcom.llm.base import LLMProvider
   from physcom.llm.prompts import PHYSICS_ESTIMATION_PROMPT, PLAUSIBILITY_REVIEW_PROMPT


   class AnthropicProvider(LLMProvider):
       def __init__(self, model: str = "claude-sonnet-4-20250514"):
           self.client = Anthropic()  # reads ANTHROPIC_API_KEY from env
           self.model = model

       def estimate_physics(self, description: str, metrics: list[str]) -> dict[str, float]:
           prompt = PHYSICS_ESTIMATION_PROMPT.format(
               description=description,
               metrics=", ".join(metrics),
           )
           resp = self.client.messages.create(
               model=self.model,
               max_tokens=256,
               messages=[{"role": "user", "content": prompt}],
           )
           return json.loads(resp.content[0].text)

       def review_plausibility(self, description: str, scores: dict[str, float]) -> str:
           prompt = PLAUSIBILITY_REVIEW_PROMPT.format(
               description=description,
               scores=json.dumps(scores, indent=2),
           )
           resp = self.client.messages.create(
               model=self.model,
               max_tokens=512,
               messages=[{"role": "user", "content": prompt}],
           )
           return resp.content[0].text
   ```

2. **Add the dependency** to `pyproject.toml`:

   ```toml
   [project.optional-dependencies]
   llm = ["anthropic>=0.40"]
   ```

3. **Wire it into the CLI** — in `cli.py`'s `run` command, instantiate the provider when an `--llm` flag is passed and include pass 4 in the pass list.

4. **Wire it into the web UI** — in `routes/pipeline.py`, same logic: read a config flag or env var (`PHYSCOM_LLM_PROVIDER`), instantiate the provider, and pass it to `Pipeline(...)`.

5. **Set the API key** — `ANTHROPIC_API_KEY` env var (or the equivalent for your chosen provider). In Docker, add it to `docker-compose.yml`:

   ```yaml
   environment:
     - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
   ```
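The CLI and web wiring share the same provider-selection logic, which could be factored into one helper driven by the `PHYSCOM_LLM_PROVIDER` env var. This is a sketch under that assumption; `make_llm_provider` is a hypothetical name, and only `MockLLMProvider` exists in the tree today:

```python
import os


def make_llm_provider():
    """Hypothetical shared factory for cli.py and routes/pipeline.py:
    pick a provider from PHYSCOM_LLM_PROVIDER, or return None so the
    pipeline skips pass 4 gracefully."""
    name = os.environ.get("PHYSCOM_LLM_PROVIDER", "").lower()
    if not name:
        return None  # pipeline's `if self.llm:` guard handles this
    if name == "anthropic":
        # Imported lazily so the core install works without the `llm` extra.
        from physcom.llm.providers.anthropic import AnthropicProvider
        return AnthropicProvider()
    if name == "mock":
        from physcom.llm.providers.mock import MockLLMProvider
        return MockLLMProvider()
    raise ValueError(f"Unknown LLM provider: {name!r}")
```

Lazy imports keep `anthropic` optional, matching the `[project.optional-dependencies]` split above.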
The same pattern works for OpenAI, Databricks, or any other provider — just subclass `LLMProvider` and implement the two methods.
## Other future work

- Domain creation via web UI — currently seed-only
- Database upgrade — SQLite → Postgres (docker-compose has a commented placeholder)
- Async pipeline runs — currently synchronous; fine for 81 combos, may need background tasks at scale
- Export from web UI — currently CLI-only (`physcom export`)
- Authentication — no auth on the web UI
physcom export) - Authentication — no auth on the web UI