🤖 ai-commit

A local-first, modular AI-powered git commit message generator designed with safety, privacy, and extensibility in mind.

ai-commit analyzes your staged git diff, generates a single-line conventional commit message, and keeps a human in the loop before committing. It uses local LLMs via Ollama, ensuring your code never leaves your machine.

Sometimes you know what changed but struggle to phrase it cleanly. This tool helps brainstorm meaningful commit messages and provides a solid starting point.

✨ Features

  • Reads staged diffs (git diff --staged)
  • Generates single-line conventional commit messages
  • Interactive CLI (accept / edit / cancel)
  • Uses local Ollama models (default: mistral:7b)
  • Clear separation between:
    • CLI logic
    • Git interaction
    • LLM client implementation

📦 Installation (local development)

Requirements

  • Python 3.10+
  • Git
  • Ollama installed and running
  • A local model pulled (default: mistral:7b)

ollama pull mistral:7b

Clone and set up the project

git clone https://github.com/vxm52/ai-commit.git
cd ai-commit

python3 -m venv .venv
source .venv/bin/activate

pip install -r requirements.txt

⚡ Usage

Stage your changes:

git add .

Run the CLI:

python3 -m ai_commit.cli

Example output:

───────────────────────── AI Commit Generator ─────────────────────────
Using local Ollama (Mistral:7b). No data leaves your machine.

Suggested commit message:

chore: ignore __pycache__

Use this commit message?
[y] yes    [e] edit    [n] cancel

🛠 Project Structure

ai_commit/
├── cli.py              # CLI entry point & user interaction
├── llm/
│   ├── base.py         # Abstract LLM client interface
│   └── ollama.py       # Ollama-specific implementation
├── README.md
└── .gitignore

llm/base.py defines the LLM client contract used by the CLI. Any future backend must implement the same interface (e.g. generate_commit_message(diff: str)).

This enables:

  • swapping models without touching CLI logic
  • adding cloud providers later if desired
  • consistent behavior across LLM backends

llm/ollama.py is the concrete implementation of the LLM client, talking to a locally running Ollama server. It handles:

  • prompt construction
  • request formatting
  • response parsing
  • timeouts and model configuration
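The request/response handling could look roughly like this (a sketch, not the repo's actual code; Ollama's /api/generate endpoint with "stream": false does return the text under a "response" key, and the `transport` parameter is an assumption added here so the HTTP call can be stubbed):

```python
import json
import urllib.request

class OllamaClient:
    """Minimal client for a local Ollama server."""

    def __init__(self, model: str = "mistral:7b",
                 url: str = "http://localhost:11434/api/generate",
                 transport=None):
        self.model = model
        self.url = url
        # Injectable transport lets tests replace the HTTP call with a stub.
        self.transport = transport or self._http_post

    def _http_post(self, payload: dict) -> dict:
        req = urllib.request.Request(
            self.url,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=60) as resp:
            return json.loads(resp.read())

    def generate_commit_message(self, diff: str) -> str:
        payload = {"model": self.model, "prompt": diff, "stream": False}
        reply = self.transport(payload)
        # Enforce single-line output even if the model rambles.
        return reply["response"].strip().splitlines()[0]
```

Keeping the URL pointed at localhost:11434 (Ollama's default port) is what makes the privacy guarantee concrete: there is no code path that sends the diff anywhere else.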

🧠 Prompt Strategy

The prompt is intentionally restrictive to reduce hallucinations:

  • Conventional commit format
  • Single-line output
  • No inferred intent
  • No mention of unchanged files
  • Only describe added or removed lines
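A prompt enforcing those constraints could be assembled like this (a sketch; the repo's actual wording is not shown in this README):

```python
def build_prompt(diff: str) -> str:
    """Wrap the staged diff in a deliberately restrictive instruction."""
    rules = (
        "Write ONE single-line git commit message in conventional commit format.\n"
        "Describe only the added (+) and removed (-) lines in the diff below.\n"
        "Do not infer intent and do not mention unchanged files.\n"
    )
    return f"{rules}\nDiff:\n{diff}\n\nCommit message:"
```

Spelling out the format and the prohibition on inference in the prompt itself is what pushes a small local model toward terse, literal output.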

🧠 Design Principles

1. Local-first & privacy-preserving

  • Uses locally running LLMs via Ollama
  • No external API calls
  • Git diffs never leave your machine
  • Safe for private repositories and proprietary code

2. Human-in-the-loop by default

The tool never commits automatically.

  • Users must explicitly:
    • accept the suggested message
    • edit it
    • or cancel entirely
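The accept/edit/cancel gate can be sketched as a small pure function (hypothetical names; `ask` defaults to input() but is injectable so the flow can be tested):

```python
def review(message: str, ask=input):
    """Return the approved message, an edited one, or None if cancelled."""
    choice = ask("[y] yes    [e] edit    [n] cancel ").strip().lower()
    if choice == "y":
        return message
    if choice == "e":
        edited = ask("New message: ").strip()
        return edited or message  # empty edit keeps the suggestion
    return None  # anything else cancels; nothing is committed
```

Only a non-None return value should ever reach `git commit`, which is how the "never commits automatically" guarantee is enforced.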

3. Literal diff interpretation

The LLM is constrained to:

  • Describe only what appears in the staged diff
  • Only added (+) and removed (-) lines
  • No inferred intent
  • No assumptions about previous file contents
  • Single-line output only

This intentionally trades creativity for correctness.

4. Modular LLM architecture

The project is structured so that LLM providers are interchangeable, not hard-coded.

🚧 Roadmap / Ideas

  • --dry-run mode
  • Commit type heuristics (feat, fix, chore, etc.)
  • Diff preprocessing (only + / - lines)
  • Config file for model selection
  • Package as an installable CLI (ai-commit)
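The diff-preprocessing roadmap item could be as small as this (a sketch of one possible approach, not existing code):

```python
def strip_context(diff: str) -> str:
    """Keep only added/removed lines, dropping headers and context lines."""
    kept = []
    for line in diff.splitlines():
        # File headers also start with +/- but carry no content changes.
        if line.startswith(("+++", "---")):
            continue
        if line.startswith(("+", "-")):
            kept.append(line)
    return "\n".join(kept)
```

Feeding the model only the `+`/`-` lines shrinks the prompt and reinforces the literal-diff constraint described above.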

Disclaimer

This tool assists with commit message generation. You are responsible for reviewing and approving every commit.

AI suggestions should never replace human judgment.
