Reference: Configuration Keys

This document provides a reference for all the keys available in Lectic’s YAML configuration, including the main .lec file frontmatter and any included configuration files.

Top-Level Keys

  • interlocutor: A single object defining the primary LLM speaker.
  • interlocutors: A list of interlocutor objects for multiparty conversations.
  • kits: A list of named tool kits you can reference from interlocutors.
  • macros: A list of macro definitions. See Macros.
  • hooks: A list of hook definitions. See Hooks.
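For example, a minimal .lec frontmatter defines a single interlocutor (the name and prompt text here are illustrative):

```yaml
---
interlocutor:
  name: Assistant
  prompt: You are a concise technical assistant.
---
```

For a multiparty conversation, replace interlocutor with an interlocutors list of such objects.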

The interlocutor Object

An interlocutor object defines a single LLM “personality” or configuration.

  • name: (Required) The name of the speaker, used in the :::Name response blocks.
  • prompt: (Required) The base system prompt that defines the LLM’s personality and instructions. The value can be a string, or it can be loaded from a file (file:./path.txt) or a command (exec:get-prompt). See External Prompts for details and examples.
  • hooks: A list of hook definitions. See Hooks. These hooks fire only when this interlocutor is active.
  • sandbox: A command string (e.g. /path/to/script.sh or wrapper.sh arg1) to wrap execution for all exec tools and local mcp_command tools used by this interlocutor, unless overridden by the tool’s own sandbox setting.
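Putting these keys together, an interlocutor entry might look like the following sketch (the file paths and script names are hypothetical):

```yaml
interlocutor:
  name: Researcher
  prompt: file:./prompts/researcher.md   # system prompt loaded from a file
  sandbox: ./sandbox.sh                  # wraps exec and local mcp_command tools
  hooks:
    - on: assistant_message
      do: ./scripts/notify.sh
```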

Model Configuration

  • provider: The LLM provider to use. Supported values include anthropic, anthropic/bedrock, openai (Responses API), openai/chat (legacy Chat Completions), gemini, ollama, and openrouter.
  • model: The specific model to use, e.g., claude-3-opus-20240229.
  • temperature: A number between 0 and 1 controlling the randomness of the output.
  • max_tokens: The maximum number of tokens to generate in a response.
  • max_tool_use: The maximum number of tool calls the LLM is allowed to make in a single turn.
  • thinking_effort: Optional hint (used by the openai Responses provider and by gemini-3-pro) about how much reasoning effort to spend. One of none, low, medium, or high.
  • thinking_budget: Optional integer token budget for providers that support structured thinking phases (Anthropic, Anthropic/Bedrock, Gemini). Ignored by the openai and openai/chat providers.
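The model keys can be combined freely. A sketch using the Anthropic provider (the values are illustrative, not recommendations):

```yaml
interlocutor:
  name: Assistant
  prompt: Answer briefly and cite sources.
  provider: anthropic
  model: claude-3-opus-20240229
  temperature: 0.3
  max_tokens: 2048
  max_tool_use: 5
```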

Providers and defaults

If you don’t specify provider, Lectic picks a default based on your environment. It checks for known API keys in this order and uses the first one it finds:

  1. ANTHROPIC_API_KEY
  2. GEMINI_API_KEY
  3. OPENAI_API_KEY
  4. OPENROUTER_API_KEY

AWS credentials for Bedrock are not considered for auto‑selection. If you want Anthropic via Bedrock, set provider: anthropic/bedrock explicitly and ensure your AWS environment is configured.

OpenAI has two provider options:

  • openai uses the Responses API. You’ll want this for native tools like search and code.
  • openai/chat uses the legacy Chat Completions API. You’ll need this for certain audio workflows that still require chat‑style models.
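For example, to opt out of auto-selection and pin a provider explicitly (the prompt text is illustrative):

```yaml
interlocutor:
  name: Assistant
  prompt: You are a helpful assistant.
  provider: openai   # Responses API; use openai/chat for legacy Chat Completions
```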

For a more detailed discussion of provider and model options, see Providers and Models.

Tools

  • tools: A list of tool definitions that this interlocutor can use. The format of each object in the list depends on the tool type. See the Tools section for detailed configuration guides. All tools support a hooks array for tool_use_pre hooks scoped to that particular tool.

Common tool keys

These keys are shared across multiple tool types:

  • name: A custom name for the tool. If omitted, a default is derived from the tool type.
  • usage: Instructions for the LLM on when and how to use the tool. Accepts a string, file:, or exec: source.
  • hooks: A list of hooks scoped to this tool (typically tool_use_pre).

exec tool keys

Run commands and scripts.

  • exec: (Required) The command or inline script to execute. Multi-line values must start with a shebang.
  • schema: A map of parameter name → description. When present, the tool takes named string parameters (exposed as env vars). When absent, the tool takes a required arguments array of strings.
  • sandbox: Command string to wrap execution. Arguments supported.
  • timeoutSeconds: Seconds to wait before aborting.
  • env: Environment variables to set for the subprocess.
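Combining these with the common keys above, an exec tool with named parameters might be sketched as follows (the script, parameter name, and paths are hypothetical; per the schema description above, the CITY parameter is exposed to the script as an environment variable):

```yaml
tools:
  - name: weather
    usage: Look up the current weather for a city.
    exec: |
      #!/bin/bash
      curl -s "https://wttr.in/${CITY}?format=3"
    schema:
      CITY: The name of the city to look up.
    timeoutSeconds: 30
    sandbox: ./sandbox.sh
```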

sqlite tool keys

Query SQLite databases.

  • sqlite: (Required) Path to the SQLite database file.
  • readonly: Boolean. If true, opens the database in read-only mode.
  • limit: Maximum size of serialized response in bytes.
  • details: Extra context for the model. Accepts string, file:, or exec:.
  • extensions: A list of SQLite extension libraries to load.
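A sketch of a read-only sqlite tool (the paths are hypothetical):

```yaml
tools:
  - sqlite: ./data/sales.db
    readonly: true
    limit: 10000
    details: file:./docs/sales-schema.md
```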

agent tool keys

Call another interlocutor as a tool.

  • agent: (Required) The name of the interlocutor to call.
  • raw_output: Boolean. If true, includes raw tool call results in the output rather than sanitized text.
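For example, one interlocutor can delegate to another by name (both names and prompts are illustrative):

```yaml
interlocutors:
  - name: Lead
    prompt: Delegate research questions to Researcher.
    tools:
      - agent: Researcher
  - name: Researcher
    prompt: Answer research questions thoroughly.
```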

MCP tool keys

Connect to Model Context Protocol servers.

  • (Required) One of mcp_command (a command that starts a local server), or mcp_ws, mcp_sse, or mcp_shttp (a server URL).
  • args: Arguments for mcp_command.
  • env: Environment variables for mcp_command.
  • sandbox: Optional wrapper command to isolate mcp_command servers.
  • roots: Optional list of root objects for file access (each with uri and optional name).
  • exclude: Optional list of server tool names to hide.
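A sketch of a local MCP server configuration (the server command and the excluded tool name are hypothetical):

```yaml
tools:
  - mcp_command: npx
    args: ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
    env:
      LOG_LEVEL: error
    roots:
      - uri: file:///tmp
        name: scratch
    exclude:
      - delete_file
```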

Other tool keys

  • think_about: (String) Creates a thinking/scratchpad tool with the given prompt.
  • serve_on_port: (Integer) Creates a single-use web server on the given port.
  • native: One of search or code. Enables provider built-in tools.
  • kit: Name of a tool kit to include.
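These can be mixed into the same tools list. A sketch (the kit name assumes a matching entry under the top-level kits key; all values are illustrative):

```yaml
tools:
  - think_about: the user's underlying goal
  - native: search
  - serve_on_port: 8080
  - kit: web-tools
```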

If you add keys to an interlocutor object that are not listed in this section, Lectic will still parse the YAML, but the Lectic language server (LSP) marks those properties as unknown with a warning. This is usually a sign of a typo in a key name.

The macro Object

  • name: (Required) The name of the macro, used when invoking it with :name[] or :name[args].
  • expansion: (Optional) The content to be expanded. Can be a string, or loaded via file: or exec:. Providing expansion is equivalent to providing the same content as post. See External Prompts for details about file: and exec:.
  • pre: (Optional) Expansion content for the pre-order phase.
  • post: (Optional) Expansion content for the post-order phase.
  • env: (Optional) A dictionary of environment variables to be set during the macro’s execution. These are merged with any arguments provided at the call site.
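For example, a macro expanded from a command alongside one that uses the pre and post phases (the names and content are illustrative):

```yaml
macros:
  - name: today
    expansion: exec:date
  - name: frame
    pre: Consider the question carefully before answering.
    post: Answer in at most three sentences.
    env:
      STYLE: terse
```

The first would be invoked in the document body as :today[].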

The hook Object

  • on: (Required) A single event name or a list of event names to trigger the hook. Supported events are user_message, assistant_message, error, and tool_use_pre.
  • do: (Required) The command or inline script to run when the event occurs. If multi‑line, it must start with a shebang (e.g., #!/bin/bash). Event context is provided as environment variables. See the Hooks guide for details.
  • inline: (Optional) Boolean. If true, the output of the hook is captured and injected into the conversation. Defaults to false.
  • name: (Optional) A name for the hook. Used for merging and overriding hooks from different configuration sources.
  • env: (Optional) A dictionary of environment variables to be set when the hook runs.
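Putting these keys together, a named logging hook might be sketched as follows (the script body and log path are hypothetical; the exact event-context variables are documented in the Hooks guide):

```yaml
hooks:
  - name: log-events
    on: [error, tool_use_pre]
    do: |
      #!/bin/bash
      echo "$(date): event fired" >> "$LOG_DIR/events.log"
    env:
      LOG_DIR: ./logs
```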