Render
Analysts author Jinja-enriched YAML configs. `hcc-cli render` materializes them into manifests with deterministic hashes for change tracking.
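The deterministic-hash idea can be sketched as follows. This is an illustrative implementation, not the tool's actual one: it assumes manifests are JSON-serializable and hashes a canonical serialization (sorted keys, fixed separators) so that an unchanged config always re-renders to the same hash.

```python
import hashlib
import json

def manifest_hash(manifest: dict) -> str:
    """Hash a rendered manifest deterministically: canonical JSON
    (sorted keys, no whitespace variance) fed into SHA-256."""
    canonical = json.dumps(manifest, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Key order does not affect the result, so re-rendering an unchanged
# config produces an identical hash for change tracking.
a = manifest_hash({"run": "2024-01", "segment": "preview"})
b = manifest_hash({"segment": "preview", "run": "2024-01"})
assert a == b
```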
Extract
Query results stream from the data warehouse (for example, Snowflake) over JDBC directly into Parquet files stored under run-specific directories.
Score
Payloads are uploaded to the remote Node scoring service, which runs in container infrastructure and streams risk scores back for each run.
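Preparing a payload for upload might look like the sketch below: records serialized to JSON lines, gzip-compressed, with metadata headers the service could use to identify and verify the run. The header names and structure are assumptions for illustration, not the scoring service's actual API.

```python
import gzip
import hashlib
import json

def prepare_upload(run_id: str, records: list) -> tuple:
    """Serialize records to JSON lines, gzip the body, and build
    metadata headers (names here are illustrative)."""
    body = "\n".join(json.dumps(r) for r in records).encode("utf-8")
    compressed = gzip.compress(body)
    headers = {
        "Content-Encoding": "gzip",
        "X-Run-Id": run_id,
        "X-Payload-Sha256": hashlib.sha256(body).hexdigest(),
    }
    return compressed, headers

payload, headers = prepare_upload("run-2024-01", [{"id": 1}, {"id": 2}])
```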
Publish
Output Parquet files or score results can be written back to Snowflake, delivered to downstream analytics tooling, or archived with manifest metadata for audit.
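Archiving with manifest metadata could be as simple as writing a small JSON record next to the published files, so a run can be audited or replayed later. The field names below are assumptions, not the tool's actual manifest schema.

```python
import json
import pathlib
import tempfile
import time

def write_audit_manifest(out_dir: pathlib.Path, run_id: str,
                         files: list, config_hash: str) -> pathlib.Path:
    """Record what was published, when, and from which rendered
    config (field names are illustrative)."""
    manifest = {
        "run_id": run_id,
        "published_at": int(time.time()),
        "config_hash": config_hash,
        "files": sorted(files),
    }
    path = out_dir / "manifest.json"
    path.write_text(json.dumps(manifest, indent=2))
    return path

out_dir = pathlib.Path(tempfile.mkdtemp())
mpath = write_audit_manifest(out_dir, "run-2024-01",
                             ["scores.parquet"], "abc123")
```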
CLI Commands
`hcc-cli render`
Render manifests with dry-run support and JSON/YAML output formats.
`hcc-cli score`
Materialize runs, upload payloads, and optionally trigger remote scoring.
`hcc-cli emit-dag`
Generate Airflow or Prefect scaffolds to operationalize manifests.
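Scaffold generation can be pictured as rendering a DAG file from a template. This sketch emits a minimal Airflow DAG; the task layout, schedule, and `BashOperator` command are illustrative assumptions, not the actual `emit-dag` output.

```python
from string import Template

# Minimal Airflow scaffold template; task and schedule details
# are illustrative only.
DAG_TEMPLATE = Template('''\
from airflow import DAG
from airflow.operators.bash import BashOperator
from datetime import datetime

with DAG(dag_id="$dag_id", start_date=datetime(2024, 1, 1),
         schedule="@monthly", catchup=False) as dag:
    score = BashOperator(
        task_id="score",
        bash_command="hcc-cli score --manifest $manifest",
    )
''')

def emit_dag(dag_id: str, manifest: str) -> str:
    """Render an Airflow scaffold for one manifest."""
    return DAG_TEMPLATE.substitute(dag_id=dag_id, manifest=manifest)

scaffold = emit_dag("hcc_monthly_scoring", "manifests/2024-01.yaml")
```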
`hcc-cli validate`
Planned command for schema linting and connectivity diagnostics.
Power User Controls
- Templated runs: Nested Jinja `{% for %}` loops generate monthly and annual manifests with deterministic naming.
- Segment catalogs: Centralized lists (e.g., `segments.ExPreview`) map to curated source enums for consistent cohort definitions.
- Concurrency tuning: `extraction.num_readers`, `output.persist.num_writers`, and `batch_size` let operators balance speed and warehouse load.
- Schema flattening: `extract: '*'` plus targeted includes push wide score tables directly into Snowflake without hand-written SQL.
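A templated config combining these controls might look like the fragment below. It is illustrative only: the Jinja `{% for %}` loop and the knobs (`extraction.num_readers`, `batch_size`, `output.persist.num_writers`) come from the list above, but the surrounding field names and values are assumptions.

```yaml
# Illustrative Jinja-templated config; top-level field names are assumptions.
runs:
{% for month in ["2024-01", "2024-02", "2024-03"] %}
  - name: monthly_{{ month }}          # deterministic naming per month
    segment: segments.ExPreview        # centralized segment catalog entry
    extraction:
      num_readers: 4                   # concurrency tuning
      batch_size: 50000
    output:
      persist:
        num_writers: 2
{% endfor %}
```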