What is Hexi, and how is it different from other coding-agent tools? +
Hexi is a lightweight, contract-driven coding-agent runtime designed for local repositories, with a strong focus on controllability and refactor-safe architecture. The main difference is that Hexi treats model output as structured actions, not chat prose, then executes those actions under explicit policy constraints. Instead of running an opaque autonomous loop, Hexi executes one bounded step per run, logs structured events, and exits, which makes behavior easier to inspect, debug, and trust. It is intentionally modular: orchestration lives in core ports and services, while side effects are pushed into adapters. That means model providers, execution behavior, storage, and event presentation can be swapped without rewriting the core. Hexi also emphasizes practical DX through onboarding, doctor checks, templates, and demo flows, while keeping complexity low enough for maintainers to reason about quickly. It’s not trying to be a giant platform; it’s trying to be a reliable base layer.
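The exact action schema is an implementation detail, but conceptually the core receives something like the following instead of free-form text (field names here are illustrative assumptions, not Hexi’s documented schema):

```json
{
  "action": "edit_file",
  "path": "src/service/billing.py",
  "rationale": "Rename the helper to match the new module layout",
  "diff": "--- a/src/service/billing.py\n+++ b/src/service/billing.py\n@@ ...",
  "limits": { "scope": "repo", "needs_command_allowlist": false }
}
```

The core validates an action of this shape against policy before any adapter touches the filesystem or shell, which is what keeps behavior bounded and inspectable.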
Is Hexi safe to run in my repository, and what guardrails are already in place? +
Hexi includes several default guardrails intended to reduce accidental damage while still being useful for real coding tasks. File operations are restricted to repository scope, so writes cannot traverse outside the project root. Command execution is policy-controlled through an allowlist in `.hexi/config.toml`, and additional disallowed command bases help block obviously risky operations. The runtime is single-step by default, which deliberately limits blast radius and encourages review between actions. Every run emits structured events and appends to `.hexi/runlog.jsonl`, giving you a clear trace of what happened. Configuration layering also helps: shared defaults in `config.toml`, local overrides in `local.toml`, and environment-first key resolution. This does not make Hexi “automatically safe” in every context, but it creates a practical baseline where behavior is explicit, inspectable, and bounded. Teams can tighten policies further per repository as they move from experimentation to production workflows.
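As a minimal sketch, the policy and layering described above could look roughly like this; the key names are illustrative assumptions rather than Hexi’s documented schema:

```toml
# .hexi/config.toml: shared defaults, committed with the repository (illustrative keys)
[policy]
allowed_commands    = ["git", "pytest", "ruff"]   # allowlisted command bases
disallowed_commands = ["rm", "curl", "sudo"]      # blocked even if a model requests them

# .hexi/local.toml: per-developer overrides, kept out of version control (assumption)
# [policy]
# allowed_commands = ["git", "pytest", "ruff", "npm"]
```

Each run then appends one structured event per action to `.hexi/runlog.jsonl`, roughly of this shape (again, illustrative fields):

```json
{"ts": "2025-01-15T12:00:00Z", "event": "command_blocked", "command": "curl", "reason": "not in allowlist"}
```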
Which model providers are supported, and do I need OpenRouter to use Hexi? +
Hexi supports multiple provider paths so you can choose the integration style that matches your stack. Out of the box, it supports OpenAI-compatible and Anthropic-compatible adapters, plus optional OpenRouter adapters (raw HTTP and official SDK). OpenRouter is not required: you can run Hexi entirely against an OpenAI-compatible or Anthropic-compatible endpoint if you prefer. If you do want OpenRouter, it is packaged as optional extras so base installs stay cleaner and smaller. Provider selection happens in `.hexi/config.toml`, with provider-specific settings in `[providers.*]` blocks, while keys are resolved from environment variables first and optional local secrets second. The `hexi doctor` command reports the active provider, model, config paths, and key source, and can optionally probe model connectivity. That gives you a deterministic way to confirm setup before executing coding tasks, especially when switching providers or testing in fresh environments.
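As a hedged sketch, provider selection and key resolution might be wired up like this; the exact keys under `[providers.*]` are assumptions based on the description above:

```toml
# .hexi/config.toml (illustrative key names; check your generated config for the real ones)
provider = "openai_compatible"

[providers.openai_compatible]
base_url = "https://api.openai.com/v1"
model    = "gpt-4o-mini"
# The API key is resolved from the environment first (e.g. OPENAI_API_KEY),
# then from optional local secrets.
```

With the config in place, verify the setup before running any coding task:

```sh
hexi doctor    # reports active provider, model, config paths, and key source
```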
Why does Hexi use a single-step execution model instead of a continuous autonomous loop? +
Hexi’s single-step model is a deliberate product choice, not a missing feature. In early-stage agent systems, most reliability failures come from compounding uncertainty across long autonomous loops. By forcing one bounded step per invocation, Hexi improves observability, user control, and debuggability: you can review outputs, diffs, and events after each step, then decide whether to continue. This structure also makes testing and policy enforcement cleaner, because each run has a clear start, a constrained action set, and a deterministic end state. From an architecture perspective, single-step execution keeps the core small and stable while adapter capabilities evolve independently. It also aligns with practical engineering workflows where humans still own intent and risk decisions. Hexi can support iterative behavior through repeated invocations or higher-level wrappers, but the base runtime remains intentionally simple. That simplicity is a strategic advantage for trust, maintainability, and incremental hardening over time.
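If you do want iteration, a thin wrapper around repeated single-step invocations is usually enough. The sketch below assumes a `hexi run` subcommand that executes one bounded step against a task description; the actual subcommand and arguments may differ in your install:

```bash
# Run up to five bounded steps, pausing for human review between each (hypothetical `hexi run`)
for i in 1 2 3 4 5; do
  hexi run "continue the refactor described in TASK.md" || break
  git diff --stat                                   # inspect what this step changed
  read -r -p "Apply another step? [y/N] " answer
  [ "$answer" = "y" ] || break
done
```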
Should I start with `hexi new` or `hexi demo`, and when should I use each? +
Use `hexi new` when you want predictable scaffolding with minimal ceremony; use `hexi demo` when you want a creative, model-guided experience that can generate and apply idea-driven customization. `hexi new` is deterministic by default and optimized for reliable project bootstrap from built-in templates, with optional flags for naming, destination path, and git initialization. It is ideal for repeatable team workflows and automation scripts. `hexi demo` is intentionally “showcase mode”: it offers random, custom, or model-generated ideas, highlights quality disclaimers, and can run a post-scaffold customization step to shape the project according to the selected prompt. If your goal is stable infrastructure and a quick start, begin with `new`; if your goal is discovery and inspiration, use `demo`. Many teams use both: `new` for serious repo creation, then `demo` in a scratch directory to explore patterns before standardizing them into templates and policies.
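A typical split might look like the following; the flag names are assumptions standing in for the naming, destination, and git-initialization options mentioned above:

```sh
# Deterministic scaffold for real work (flag names illustrative)
hexi new --name billing-service --dest ./services/billing --git

# Exploratory, model-guided scaffold in a throwaway directory
mkdir -p /tmp/hexi-scratch && cd /tmp/hexi-scratch
hexi demo
```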