What Is Hermes Agent
Hermes Agent is an open-source (MIT license) autonomous agent framework created by Nous Research and released in February 2026. It has accumulated roughly 30K GitHub stars since launch. Unlike simple chatbot wrappers, Hermes implements a closed learning loop: every complex task the agent completes can become a reusable skill stored as Markdown, making the agent faster and more reliable over time.
Key capabilities include persistent memory via SQLite with full-text search, a MEMORY.md file for long-term context, built-in chat platform connectors, an OpenAI API-compatible server for third-party UI integration, and a 5-layer security architecture covering user authentication, command approval, container isolation, credential filtering, and injection scanning.
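The persistent-memory layer described above can be pictured with SQLite's FTS5 full-text-search extension. The sketch below is illustrative only: the table name, column names, and helper functions are assumptions, not Hermes's actual schema.

```python
import sqlite3

# Minimal sketch of a full-text-searchable memory store, assuming an
# FTS5-enabled SQLite build. Schema and function names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE memory USING fts5(role, content)")

def remember(role: str, content: str) -> None:
    conn.execute("INSERT INTO memory (role, content) VALUES (?, ?)", (role, content))

def recall(query: str, limit: int = 5) -> list[tuple[str, str]]:
    # MATCH runs a full-text search; bm25() ranks hits by relevance.
    cur = conn.execute(
        "SELECT role, content FROM memory WHERE memory MATCH ? "
        "ORDER BY bm25(memory) LIMIT ?",
        (query, limit),
    )
    return cur.fetchall()

remember("user", "Deploy the staging server on Fridays")
remember("agent", "Staging deploys are scheduled for Friday afternoons")
hits = recall("staging")
```

The same pattern scales to an on-disk database, which is what a path like ~/.hermes/memory.db in the configuration below implies.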
Prerequisites
Before you begin, make sure you have the following ready:
- Operating System: Linux, macOS, or Windows with WSL 2
- Node.js 18+ (for the plugin ecosystem; the core binary is standalone)
- LLM API Key: Claude (Anthropic), GPT-4 (OpenAI), or a local Llama endpoint
- Optional — Docker: Required only if you prefer containerized deployment
- Optional — Telegram Bot Token: Create one via @BotFather if you want Telegram integration
Step 1: Install Hermes Agent
The fastest way to install is the official one-line installer. Open your terminal and run:
```shell
curl -sSL https://hermes-agent.nousresearch.com/install | bash
```
Verify the installation by checking the version:
```shell
hermes --version
```
Alternative: Docker Install
If you prefer Docker, pull and run the official image:
```shell
docker run -d --name hermes -p 8080:8080 nousresearch/hermes-agent
```
The Docker image includes all dependencies and exposes port 8080 for the built-in API server.
Step 2: Configure Hermes Agent
After installation, create or edit ~/.hermes/hermes.config.yaml. Below are three configuration variants depending on your preferred LLM provider.
Config Variant A: Claude (Anthropic)
```yaml
model:
  provider: anthropic
  name: claude-sonnet-4-20250514
  api_key_env: ANTHROPIC_API_KEY

memory:
  backend: sqlite
  path: ~/.hermes/memory.db
  full_text_search: true

security:
  command_approval: true
  container_isolation: false
  credential_filtering: true
```
Config Variant B: GPT-4 (OpenAI)
```yaml
model:
  provider: openai
  name: gpt-4o
  api_key_env: OPENAI_API_KEY

memory:
  backend: sqlite
  path: ~/.hermes/memory.db
  full_text_search: true

security:
  command_approval: true
  container_isolation: false
  credential_filtering: true
```
Config Variant C: Local Llama
```yaml
model:
  provider: openai-compatible
  name: llama-3.1-70b
  base_url: http://localhost:11434/v1
  api_key_env: OLLAMA_API_KEY

memory:
  backend: sqlite
  path: ~/.hermes/memory.db
  full_text_search: true

security:
  command_approval: true
  container_isolation: true
  credential_filtering: true
```
Note that api_key_env references an environment variable name, not the key itself. Export your key in ~/.bashrc or ~/.zshrc before starting Hermes.
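The indirection through api_key_env amounts to a simple lookup at startup. The sketch below shows that resolution step; the helper name and error behavior are hypothetical, not Hermes's actual loader.

```python
import os

def resolve_api_key(model_cfg: dict) -> str:
    # Hypothetical helper: read the environment variable named by
    # api_key_env and fail loudly if it is unset or empty.
    env_name = model_cfg["api_key_env"]
    key = os.environ.get(env_name, "")
    if not key:
        raise RuntimeError(f"Environment variable {env_name} is not set")
    return key

# Placeholder value for demonstration only, not a real key.
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-placeholder"
cfg = {"provider": "anthropic", "api_key_env": "ANTHROPIC_API_KEY"}
key = resolve_api_key(cfg)
```

Keeping only the variable name in the config file means the YAML can be committed or shared without leaking credentials.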
Step 3: Connect a Chat Platform
Hermes supports multiple chat frontends. Here are the two most popular setups plus CLI mode.
Telegram Setup
- Message @BotFather on Telegram and create a new bot
- Copy the bot token (format: 123456:ABC-DEF...)
- Add to your config:
```yaml
platforms:
  telegram:
    enabled: true
    bot_token_env: TELEGRAM_BOT_TOKEN
```
Discord Setup
- Create a Discord application at discord.com/developers
- Generate a bot token under the Bot section
- Add to your config:
```yaml
platforms:
  discord:
    enabled: true
    bot_token_env: DISCORD_BOT_TOKEN
```
CLI Mode (for Testing)
The simplest way to test your setup is CLI mode. Just run:
```shell
hermes chat
```
This opens an interactive terminal session where you can send messages and watch the agent respond in real time. CLI mode is ideal for verifying your config before connecting a chat platform.
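Because Hermes exposes an OpenAI API-compatible server (port 8080 in the Docker setup above), any OpenAI-style client can talk to it. The sketch below only builds a request payload; the endpoint path and model name follow the OpenAI chat-completions convention and are assumptions, not verified Hermes defaults.

```python
import json

# Assumed base URL for the built-in API server (port 8080 per the
# Docker instructions above); the /v1 path is an OpenAI-style convention.
BASE_URL = "http://localhost:8080/v1"

def build_chat_request(message: str) -> tuple[str, bytes]:
    # Assemble an OpenAI-style chat-completions request. The model
    # name "hermes-agent" is a placeholder.
    payload = {
        "model": "hermes-agent",
        "messages": [{"role": "user", "content": message}],
    }
    return f"{BASE_URL}/chat/completions", json.dumps(payload).encode()

url, body = build_chat_request("Summarize today's open issues")
```

Sending this body as a POST with Content-Type: application/json is all a third-party UI needs to do to integrate.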
Step 4: Your First Auto-Generated Skill
Hermes auto-generates skills after completing tasks that involve 5 or more tool calls. Here is how to trigger your first skill:
- Start a CLI session with hermes chat
- Give the agent a multi-step task, for example: “Research the top 5 trending GitHub repos today, clone the most starred one, analyze its README, summarize the architecture, and create a comparison table with the other four.”
- Watch the agent execute multiple tool calls (file operations, web requests, text processing)
- After completion, Hermes automatically saves a skill document
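The "5 or more tool calls" trigger can be pictured as a simple counter over the completed task's trace. This is an illustrative reconstruction under assumed names; the threshold constant and event shape are not Hermes internals.

```python
SKILL_THRESHOLD = 5  # per the docs: tasks with 5+ tool calls become skills

def should_generate_skill(trace: list[dict]) -> bool:
    # Count only tool-call events in the task trace; plain LLM
    # messages do not count toward the threshold.
    tool_calls = sum(1 for event in trace if event.get("type") == "tool_call")
    return tool_calls >= SKILL_THRESHOLD

trace = [
    {"type": "tool_call", "tool": "web_fetch"},
    {"type": "tool_call", "tool": "git_clone"},
    {"type": "message"},
    {"type": "tool_call", "tool": "file_read"},
    {"type": "tool_call", "tool": "text_summarize"},
    {"type": "tool_call", "tool": "file_write"},
]
qualifies = should_generate_skill(trace)
```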
Find the generated skill in ~/.hermes/skills/. Each skill is a Markdown file with this structure:
```markdown
# Skill: GitHub Trend Analysis

## Trigger
When the user asks to analyze trending repositories...

## Steps
1. Fetch trending repos from GitHub API
2. Clone the top result
3. Parse README.md
4. Generate architecture summary
5. Build comparison table

## Tools Used
- web_fetch, git_clone, file_read, text_summarize
```
The next time you or anyone else asks a similar question, Hermes recognizes the pattern and executes the skill directly — faster and with fewer LLM calls. Skills accumulate over time, turning your Hermes instance into a personalized automation library.
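Because skills are plain Markdown, they are easy to inspect or post-process with ordinary tooling. Here is a minimal sketch that splits a skill file into its ## sections; the section names follow the example above, and the parser itself is illustrative, not part of Hermes.

```python
def parse_skill(text: str) -> dict[str, str]:
    # Split a skill document on its "## " headers and return a
    # mapping of section name -> body text.
    sections: dict[str, str] = {}
    current = None
    for line in text.splitlines():
        if line.startswith("## "):
            current = line[3:].strip()
            sections[current] = ""
        elif current is not None:
            sections[current] += line + "\n"
    return sections

skill_text = """# Skill: GitHub Trend Analysis
## Trigger
When the user asks to analyze trending repositories...
## Steps
1. Fetch trending repos from GitHub API
## Tools Used
- web_fetch, git_clone
"""
sections = parse_skill(skill_text)
```

A script like this could, for example, build an index of every skill in ~/.hermes/skills/ by its Trigger section.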
Deployment Options
Choose a deployment method based on your uptime requirements and budget. All four options support the same feature set — the only differences are cost, maintenance, and availability.