
LLM

A command-line and Python toolkit for interacting with remote and local large language models.

Introduction

LLM is a command-line and Python toolkit for interacting with large language models from OpenAI, Anthropic, Google, Meta, and many other providers. It supports both remote APIs and models installed and run locally via plugins, offering prompt execution, embeddings, structured extraction, and tool execution for terminal-first experimentation and automation.

Key features

  • Dual interfaces: convenient CLI plus a reusable Python API.
  • Plugin ecosystem: extendable to local runtimes (e.g., Ollama) and various cloud model providers.
  • Persistent logging: store prompts and responses in SQLite for audit and analysis.
  • Multi-modal capabilities: extract text from images and handle attachments.
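The features above can be sketched as a short CLI session. This is an illustrative example, not a transcript: the model name is a placeholder, and a provider API key (e.g. `OPENAI_API_KEY`) must already be configured.

```shell
# Run a one-off prompt against the default model
llm "Summarize the Unix philosophy in one sentence"

# Pick a specific model with -m
llm -m gpt-4o-mini "Translate 'hello world' into French"

# Attach an image for multi-modal extraction
llm "Describe this image" -a photo.jpg

# Review recent prompts and responses from the SQLite log
llm logs -n 3
```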

Use cases

  • Quickly run and iterate on prompts from the terminal.
  • Integrate model calls into automation scripts and data pipelines.
  • Run local/offline inference by installing models via plugins.
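For integration into scripts and pipelines, the same calls are available from Python. A minimal sketch, assuming the `llm` package is installed and an API key for the chosen model is configured (the model ID below is just an example):

```python
import llm

# Look up a model by ID; requires the matching provider key to be set
model = llm.get_model("gpt-4o-mini")

# Execute a prompt and read back the response text
response = model.prompt("Three bullet points on the Unix philosophy")
print(response.text())
```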

Technical details

  • Implemented primarily in Python with a plugin-based architecture.
  • Installable via pip, pipx, Homebrew, or uvx; well-documented and tested.
  • Licensed under Apache-2.0; active community with frequent releases.
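The installation routes listed above look like this in practice (`llm-ollama` is one real example of a local-runtime plugin):

```shell
# Any one of these installs the CLI
pip install llm
pipx install llm
brew install llm

# Or run it without installing, via uv
uvx llm "Hello"

# Add a plugin for local models served by Ollama
llm install llm-ollama
```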
