Configuration

CodexA stores its configuration in `.codex/config.json`, which is created by `codex init`.

Configuration File

```json
{
  "embedding": {
    "model_name": "all-MiniLM-L6-v2",
    "chunk_size": 512,
    "chunk_overlap": 64
  },
  "search": {
    "top_k": 10,
    "similarity_threshold": 0.3,
    "rrf_k": 60
  },
  "index": {
    "extensions": [".py", ".js", ".ts", ".java", ".go", ".rs", ".c", ".cpp"],
    "exclude_patterns": ["**/node_modules/**", "**/.git/**", "**/dist/**"],
    "incremental": true
  },
  "llm": {
    "provider": "openai",
    "model": "gpt-4",
    "api_key": "sk-...",
    "temperature": 0.2,
    "max_tokens": 2048
  },
  "quality": {
    "complexity_threshold": 10,
    "min_maintainability": 40.0,
    "max_issues": 20,
    "snapshot_on_index": false,
    "history_limit": 50
  }
}
```
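As a rough sketch of how such a file might be consumed, the loader below parses the JSON and fills in defaults for any missing keys. The `DEFAULTS` mapping and `load_config` function are illustrative assumptions, not CodexA's actual loader.

```python
import json

# Hypothetical defaults for one section; the real tool defines many more.
DEFAULTS = {"search": {"top_k": 10, "similarity_threshold": 0.3, "rrf_k": 60}}

def load_config(text: str) -> dict:
    """Parse config JSON and merge section defaults under user values."""
    config = json.loads(text)
    for section, defaults in DEFAULTS.items():
        merged = dict(defaults)
        merged.update(config.get(section, {}))  # user values win
        config[section] = merged
    return config

# A partial file only overrides what it mentions; the rest falls back.
cfg = load_config('{"search": {"top_k": 5}}')
```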

Sections

embedding

Controls the sentence-transformer model used for vector encoding.

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| `model_name` | string | `all-MiniLM-L6-v2` | Sentence-transformer model name |
| `chunk_size` | int | 512 | Maximum tokens per code chunk |
| `chunk_overlap` | int | 64 | Overlap between consecutive chunks |
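To make `chunk_size` and `chunk_overlap` concrete, here is a minimal sliding-window chunker. It operates on a pre-tokenized list; the actual indexer's tokenization and boundary handling may differ.

```python
def chunk(tokens, chunk_size=512, chunk_overlap=64):
    """Split a token list into windows of chunk_size that share
    chunk_overlap tokens with the previous window."""
    step = chunk_size - chunk_overlap
    return [
        tokens[i:i + chunk_size]
        for i in range(0, max(len(tokens) - chunk_overlap, 1), step)
    ]

# 1000 stand-in tokens -> windows [0:512], [448:960], [896:1000]
chunks = chunk(list(range(1000)), chunk_size=512, chunk_overlap=64)
```

Each window starts 448 tokens (`chunk_size - chunk_overlap`) after the previous one, so consecutive chunks share exactly 64 tokens of context.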

search

Controls search behavior across all modes.

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| `top_k` | int | 10 | Default number of results |
| `similarity_threshold` | float | 0.3 | Minimum cosine similarity score |
| `rrf_k` | int | 60 | Reciprocal Rank Fusion constant |
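The `rrf_k` constant appears in the standard Reciprocal Rank Fusion score, `score(d) = Σ 1 / (rrf_k + rank(d))`, summed over the rankings being merged. The sketch below assumes CodexA uses this textbook formulation to fuse, e.g., vector and keyword result lists; the exact weighting may differ.

```python
def rrf(rankings, rrf_k=60):
    """Fuse several ranked lists into one, using Reciprocal Rank Fusion.
    Ranks are 1-based; larger rrf_k flattens rank differences."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (rrf_k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# "b" is ranked 2nd and 1st, so it edges out "a" (1st and 3rd).
fused = rrf([["a", "b", "c"], ["b", "c", "a"]])
```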

index

Controls which files are indexed.

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| `extensions` | list | See above | File extensions to index |
| `exclude_patterns` | list | See above | Glob patterns to exclude |
| `incremental` | bool | true | Only re-index changed files |
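The two lists combine roughly as follows: a file is indexed when its extension matches and no exclude pattern matches its path. This filter is an illustrative sketch, not CodexA's actual matcher; note that `fnmatch`'s `*` crosses `/`, which approximates `**` well enough here but is not a full gitignore-style engine.

```python
from fnmatch import fnmatch

def should_index(path, extensions, exclude_patterns):
    """Return True if path has an indexed extension and matches
    no exclude pattern."""
    if not any(path.endswith(ext) for ext in extensions):
        return False
    return not any(fnmatch(path, pat) for pat in exclude_patterns)

ok = should_index("src/app.py", [".py"], ["**/node_modules/**"])
skipped = should_index("src/node_modules/lib.js", [".js"],
                       ["**/node_modules/**"])
```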

llm

Configure the LLM provider for AI-powered commands.

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| `provider` | string | `openai` | LLM provider: `openai`, `ollama`, or `mock` |
| `model` | string | `gpt-4` | Model name (e.g., `gpt-4`, `llama3`) |
| `api_key` | string | (none) | API key (OpenAI only) |
| `temperature` | float | 0.2 | Sampling temperature |
| `max_tokens` | int | 2048 | Maximum response tokens |
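A provider switch driven by this section could look like the following. The `MockLLM` class and `make_llm` factory are hypothetical stand-ins used only to show how the `provider` value might select an implementation; only the `mock` branch is fleshed out here.

```python
class MockLLM:
    """Stand-in provider that echoes the prompt; useful for tests."""
    def complete(self, prompt, temperature=0.2, max_tokens=2048):
        return f"[mock completion for: {prompt[:20]}]"

def make_llm(llm_cfg):
    """Pick a provider implementation from the llm config section."""
    provider = llm_cfg.get("provider", "openai")
    if provider == "mock":
        return MockLLM()
    # openai/ollama branches omitted in this sketch
    raise NotImplementedError(f"sketch only handles 'mock', got {provider!r}")

llm = make_llm({"provider": "mock"})
reply = llm.complete("Explain this function")
```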

quality

Controls the quality analysis pipeline and gates.

| Key | Type | Default | Description |
| --- | --- | --- | --- |
| `complexity_threshold` | int | 10 | Minimum cyclomatic complexity to flag |
| `min_maintainability` | float | 40.0 | Minimum maintainability index for the gate |
| `max_issues` | int | 20 | Maximum issues before gate failure |
| `snapshot_on_index` | bool | false | Auto-snapshot after indexing |
| `history_limit` | int | 50 | Maximum stored snapshots |
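A gate built on these thresholds might count one issue per file that is either too complex or too hard to maintain, and fail once the count exceeds `max_issues`. The input shape here, `(complexity, maintainability)` pairs, is a hypothetical simplification of the real analysis results.

```python
def gate_passes(files, complexity_threshold=10,
                min_maintainability=40.0, max_issues=20):
    """Count files breaching either threshold; pass if within budget."""
    issues = sum(
        1 for cc, mi in files
        if cc > complexity_threshold or mi < min_maintainability
    )
    return issues <= max_issues

# One file is too complex (12 > 10), one too unmaintainable (35 < 40):
# 2 issues, well under the default budget of 20.
ok = gate_passes([(5, 80.0), (12, 75.0), (3, 35.0)])
```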

Environment Variables

| Variable | Description |
| --- | --- |
| `OPENAI_API_KEY` | OpenAI API key (overrides config) |
| `CODEX_LLM_PROVIDER` | Force LLM provider |
| `CODEX_LOG_LEVEL` | Logging level (`DEBUG`, `INFO`, `WARNING`) |
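The documented precedence (environment variable over config file) can be sketched like this; `resolve_api_key` is an illustrative helper, not part of CodexA's API.

```python
import os

def resolve_api_key(config, environ=None):
    """Prefer OPENAI_API_KEY from the environment, falling back to
    the llm.api_key value in config.json."""
    environ = environ if environ is not None else os.environ
    return environ.get("OPENAI_API_KEY") or config.get("llm", {}).get("api_key")

# The environment wins when both are set.
key = resolve_api_key(
    {"llm": {"api_key": "sk-from-config"}},
    environ={"OPENAI_API_KEY": "sk-from-env"},
)
```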

Project Structure

After `codex init`:

```
.codex/
├── config.json     # Configuration file
├── index/          # FAISS vector index
├── cache/          # Query and embedding caches
├── sessions/       # Multi-turn chat sessions
├── memory.json     # Quality snapshots
└── plugins/        # Custom plugin files
```

Released under the MIT License.