Reference: Configuration Keys

This document provides a reference for all the keys available in Lectic’s YAML configuration, including the main .lec file frontmatter and any included configuration files.

Top-Level Keys

  • imports: Optional list of config imports. Each entry is either a string path, or an object with path and optional optional: true. If the path is a directory, Lectic loads <path>/lectic.yaml.
  • interlocutor: A single object defining the primary LLM speaker.
  • interlocutors: A list of interlocutor objects for multiparty conversations.
  • kits: A list of named tool kits you can reference from interlocutors.
  • macros: A list of macro definitions. See Macros.
  • hooks: A list of hook definitions. See Hooks.
  • sandbox: A default sandbox command string applied to all exec tools and local mcp_command tools, unless overridden by interlocutor.sandbox or a tool’s own sandbox setting.
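Putting these together, a minimal frontmatter might look like this (the names and paths are illustrative):

interlocutor:
  name: Assistant
  prompt: You are a concise, helpful assistant.
imports:
  - ./shared/lectic.yaml
sandbox: ./scripts/sandbox.sh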

The imports Entry

Imports are resolved relative to the file that declares them.

Valid forms:

imports:
  - ./plugins/sales/module.yaml
  - path: ./plugins/finance
    optional: true

  • String form: a required import path.
  • Object form:
    • path: (Required) The import path.
    • optional: (Optional, boolean) If true, skip missing files without error.

Imports are recursive, and cycles are reported as errors.


The kit Object

A kit is a named list of tools that can be reused from an interlocutor’s tools array using - kit: <name>.

  • name: (Required) The kit name.
  • tools: (Required) An array of tool definitions.
  • description: (Optional) Short documentation shown in editor hovers and autocomplete.
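For example, a kit bundling two tools, referenced later from an interlocutor (all names and paths are illustrative):

kits:
  - name: data-tools
    description: Tools for inspecting local data.
    tools:
      - exec: jq
        usage: Query JSON documents.
      - sqlite: ./data/app.db
        readonly: true

interlocutor:
  name: Analyst
  prompt: You analyze local data.
  tools:
    - kit: data-tools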

The interlocutor Object

An interlocutor object defines a single LLM “personality” or configuration.

  • name: (Required) The name of the speaker, used in the :::Name response blocks.
  • prompt: (Required) The base system prompt that defines the LLM’s personality and instructions. The value can be a string, or it can be loaded from a file (file:./path.txt) or a command (exec:get-prompt). See External Prompts for details and examples.
  • hooks: A list of hook definitions. See Hooks. These hooks fire only when this interlocutor is active.
  • sandbox: A command string (e.g. /path/to/script.sh or wrapper.sh arg1) to wrap execution for all exec tools and local mcp_command tools used by this interlocutor, unless overridden by the tool’s own sandbox setting. This overrides any top-level sandbox setting.
  • output_schema: Optional JSON Schema that constrains the assistant’s output to valid JSON. You can define it inline, or load it from file: or exec: (including file:local:...). Loaded text is parsed as YAML and then validated as a JSON Schema. This is forwarded to backends that support structured outputs. See Structured Outputs for the supported schema subset and provider notes.
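A small interlocutor definition using several of these keys (the prompt file and schema contents are illustrative):

interlocutor:
  name: Extractor
  prompt: file:./prompts/extractor.txt
  output_schema:
    type: object
    properties:
      title:
        type: string
    required: [title]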

Model Configuration

  • provider: The LLM provider to use. Supported values include anthropic, anthropic/bedrock, openai (Responses API), openai/chat (legacy Chat Completions), gemini, ollama, and openrouter.
  • model: The specific model to use, e.g., claude-sonnet-4-6.
  • temperature: A number between 0 and 1 controlling the randomness of the output.
  • max_tokens: The maximum number of tokens to generate in a response.
  • max_tool_use: The maximum number of tool calls the LLM is allowed to make in a single turn.
  • thinking_effort: Optional hint about how much effort to spend reasoning, used by the openai Responses provider and by gemini-3-pro. One of none, low, medium, or high.
  • thinking_budget: Optional integer token budget for providers that support structured thinking phases (Anthropic, Anthropic/Bedrock, Gemini). Ignored by the openai and openai/chat providers.
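These keys sit directly on the interlocutor object. For example (the values are illustrative):

interlocutor:
  name: Assistant
  prompt: You are a helpful assistant.
  provider: anthropic
  model: claude-sonnet-4-6
  temperature: 0.3
  max_tokens: 2048
  max_tool_use: 8
  thinking_budget: 4096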

Providers and defaults

If you don’t specify provider, Lectic picks a default based on your environment. It checks for known API keys in this order and uses the first one it finds:

  1. ANTHROPIC_API_KEY
  2. GEMINI_API_KEY
  3. OPENAI_API_KEY
  4. OPENROUTER_API_KEY

AWS credentials for Bedrock are not considered for auto‑selection. If you want Anthropic via Bedrock, set provider: anthropic/bedrock explicitly and ensure your AWS environment is configured.

OpenAI has two provider options:

  • openai uses the Responses API. You’ll want this for native tools like search and code.
  • openai/chat uses the legacy Chat Completions API. You’ll need this for certain audio workflows that still require chat‑style models.

For a more detailed discussion of provider and model options, see Providers and Models.

Tools

  • tools: A list of tool definitions that this interlocutor can use. The format of each object in the list depends on the tool type. See the Tools section for detailed configuration guides. All tools support a hooks array for tool hooks (commonly tool_use_pre and tool_use_post) scoped to that particular tool.

Common tool keys

These keys are shared across multiple tool types:

  • name: A custom name for the tool. If omitted, a default is derived from the tool type.
  • usage: Instructions for the LLM on when and how to use the tool. Accepts a string, file:, or exec: source.
  • icon: Optional icon string serialized into <tool-call ...> XML. The LSP folding UI uses this when NERD_FONT=1.
  • hooks: A list of hooks scoped to this tool (typically tool_use_pre and/or tool_use_post).

exec tool keys

Run commands and scripts.

  • exec: (Required) The command or inline script to execute. Multi-line values must start with a shebang.
  • schema: A map of parameter name → description. When present, the tool takes named string parameters (exposed as env vars). When absent, the tool takes a required arguments array of strings.
  • sandbox: Command string to wrap execution. Arguments supported.
  • timeoutSeconds: Seconds to wait before aborting.
  • limit: Maximum output characters returned across stdout and stderr. Excess output is truncated. Default: 100000.
  • env: Environment variables to set for the subprocess.
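Two illustrative exec tools, one taking an arguments array and one taking named parameters via schema (the script path is hypothetical):

tools:
  - name: utc_time
    exec: |
      #!/bin/bash
      date -u
    usage: Get the current UTC time.
    timeoutSeconds: 10
  - name: weather
    exec: ./scripts/weather.sh
    usage: Look up the weather for a city.
    schema:
      city: The name of the city to look up.
    env:
      UNITS: metric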

sqlite tool keys

Query SQLite databases.

  • sqlite: (Required) Path to the SQLite database file.
  • readonly: Boolean. If true, opens the database in read-only mode.
  • limit: Maximum size of serialized response in bytes.
  • details: Extra context for the model. Accepts string, file:, or exec:.
  • extensions: A list of SQLite extension libraries to load.
  • init_sql: Optional SQL script used only when the database file is missing. Accepts plain text, file:, or exec:.
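An illustrative read-only database tool (the database and documentation paths are hypothetical):

tools:
  - sqlite: ./data/sales.db
    readonly: true
    limit: 50000
    details: file:./docs/sales-schema.md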

agent tool keys

Call another interlocutor as a tool.

  • agent: (Required) The name of the interlocutor to call.
  • raw_output: Boolean. If true, includes raw tool call results in the output rather than sanitized text.
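For example, a lead interlocutor that can delegate to a researcher (names and prompts are illustrative):

interlocutors:
  - name: Lead
    prompt: Answer questions, delegating research as needed.
    tools:
      - agent: Researcher
  - name: Researcher
    prompt: Research questions thoroughly and report back.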

MCP tool keys

Connect to Model Context Protocol servers.

  • One of mcp_command, mcp_ws, or mcp_shttp: (Required) The command that launches a local server, or the server’s endpoint URL.
  • args: Arguments for mcp_command.
  • env: Environment variables for mcp_command.
  • headers: A map of custom headers for mcp_shttp. Values support file: and exec:.
  • sandbox: Optional wrapper command to isolate mcp_command servers.
  • roots: Optional list of root objects for file access (each with uri and optional name).
  • exclude: Optional list of server tool names to blacklist.
  • only: Optional list of server tool names to whitelist.
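Two illustrative server configurations, one local command and one remote endpoint (the server path, tool names, and the get-auth-header command are hypothetical):

tools:
  - mcp_command: ./servers/files-server
    args: [--root, .]
    env:
      LOG_LEVEL: info
    roots:
      - uri: file:///home/user/project
        name: project
    exclude: [delete_file]
  - mcp_shttp: https://example.com/mcp
    headers:
      Authorization: exec:get-auth-header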

Other tool keys

  • native: One of search or code. Enables provider built-in tools.
  • kit: Name of a tool kit to include.
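For example, enabling a provider’s built-in search alongside a named kit (the kit name is illustrative):

tools:
  - native: search
  - kit: data-tools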

If you add keys to an interlocutor object that are not listed in this section, Lectic will still parse the YAML, but the LSP marks those properties as unknown with a warning. This is usually a sign of a typo in a key name.


The macro Object

  • name: (Required) The name of the macro, used when invoking it with :name[] or :name[args].

  • expansion: (Optional) The content to be expanded. Can be a string, or loaded via file: or exec:. When provided, it is equivalent to specifying post. See External Prompts for details about file: and exec:.

  • pre: (Optional) Expansion content for the pre-order phase.

  • post: (Optional) Expansion content for the post-order phase.

  • env: (Optional) A dictionary of environment variables to be set during the macro’s execution. These are merged with any arguments provided at the call site.

  • completions: (Optional) Argument completion source for :name[...] in the LSP. Either:

    • an inline list of items ({ completion, detail?, documentation? }), or
    • a source string: file:..., file:local:..., or exec:....

    For file: and exec:, the source output must be a single YAML document containing a sequence of completion objects. JSON arrays are also accepted.

  • completion_trigger: (Optional) One of auto or manual.

    • Default is auto for inline and file: sources.
    • Default is manual for exec: sources.
    • manual means completions are only returned for explicit completion invocation (CompletionTriggerKind.Invoked).

    For exec: completion sources, environment precedence is:

    1. process env + Lectic base env
    2. macro env
    3. dynamic vars (ARG, ARG_PREFIX, MACRO_NAME, LECTIC_COMPLETION=1)
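An illustrative pair of macros, one static and one with exec-based completions (the lookup-citation and list-citation-keys commands are hypothetical):

macros:
  - name: recap
    expansion: Summarize the conversation so far in three bullet points.
  - name: cite
    post: exec:lookup-citation
    env:
      STYLE: apa
    completions: exec:list-citation-keys
    completion_trigger: manual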

The hook Object

  • on: (Required) A single event name or a list of event names to trigger the hook. Supported events are user_message, assistant_message, assistant_final, assistant_intermediate, tool_use_pre, tool_use_post, run_start, run_end, and error. error is a derived alias of run_end and fires only when RUN_STATUS=error.
  • do: (Required) The command or inline script to run when the event occurs. If multi‑line, it must start with a shebang (e.g., #!/bin/bash). Event context is provided as environment variables. See the Hooks guide for details.
  • inline: (Optional) Boolean. If true, the output of the hook is captured and injected into the conversation. Defaults to false.
  • name: (Optional) A name for the hook. Used for merging and overriding hooks from different configuration sources. For inline hooks, this is also serialized as a name attribute on <inline-attachment kind="hook"> blocks and shown in LSP fold text.
  • icon: (Optional) Icon string for inline hook attachments. If provided, it is serialized as the icon attribute and used by LSP folding when NERD_FONT=1.
  • env: (Optional) A dictionary of environment variables to be set when the hook runs.
  • allow_failure: (Optional) Boolean. If true, non-zero exit status from this hook is ignored. Defaults to false.
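An illustrative pair of hooks, one named logger and one inline error reporter (the script path is hypothetical; RUN_STATUS is the event context variable mentioned above):

hooks:
  - name: log-run
    on: [run_start, run_end]
    do: ./scripts/log-run.sh
    allow_failure: true
  - on: error
    inline: true
    do: |
      #!/bin/bash
      echo "The previous run failed with status: $RUN_STATUS"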