DevOps · 2026-03-28

Top MCP Servers for DevOps Engineers in 2026: Git, Kubernetes, Terraform, and CI/CD

DevOps Automation Team

Why DevOps Engineers Are Adding MCP to Their Workflow

DevOps work is repetitive in the wrong ways: writing boilerplate Terraform modules, debugging failing CI pipelines, untangling Kubernetes pod states, and answering the same "what's deployed in production?" questions across Slack. MCP servers let Claude handle these tasks directly — not by generating code for you to copy-paste, but by taking real actions against your infrastructure.

The key distinction from traditional AI coding assistants: MCP servers give Claude live access to your systems. Claude can list your actual running pods, read your real pipeline logs, and apply a Terraform plan — not simulate it. This changes DevOps workflows from "AI suggests, human executes" to "AI executes, human reviews."

This guide covers the most-starred and most-practical MCP servers for DevOps tasks in 2026, organized by workflow area.

Git and Code Repository MCP Servers

1. Official Git MCP Server (Anthropic)

The official Git MCP server from Anthropic's model context protocol repository is the foundation for any DevOps workflow that touches code. It exposes git operations — commit history, diff inspection, branch management, blame — as tools Claude can use natively.

What it does: Read commit history, inspect diffs, list branches, check blame for specific lines, and navigate repository structure without requiring Claude to guess from outdated training data.

Best use cases:

  • Automated code review against commit history
  • Root cause analysis ("which commit introduced this bug?")
  • Release notes generation from git log
  • Dependency audit across recent changes

Install:

npx @anthropic-ai/git-mcp-server
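For reference, the release-notes use case above boils down to plain git commands. A runnable sketch of the manual equivalent (the MCP server talks to git directly; this is just what the same query looks like by hand, using a throwaway repository so it runs anywhere):

```shell
# Build a disposable repo with a tag and one commit after it.
tmp=$(mktemp -d) && cd "$tmp" || exit 1
git init -q
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "chore: initial release"
git tag v1.0.0
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "feat: add payment retries"

# Everything landed since the last tag, one line per commit --
# the raw material for release notes.
git log "$(git describe --tags --abbrev=0)..HEAD" --oneline --no-merges
```

The output is one line per commit since `v1.0.0`; with the Git MCP server connected, Claude runs the equivalent query itself and drafts the notes from the result.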

2. GitMCP — Remote MCP for Any GitHub Repo

GitMCP (7,800+ stars) takes a different approach: it is a free, open-source remote MCP server that converts any GitHub repository into a queryable knowledge base Claude can access without local installation. The goal is eliminating code hallucinations — Claude works from real, current repository content, not training data.

What it does: Exposes repository documentation, README files, source code structure, and commit context through a remote MCP endpoint. No local setup needed — you connect Claude to a URL.

Best use cases:

  • Working with third-party libraries without local clones
  • Getting accurate answers about project structure from any public GitHub repo
  • Teams that want zero-install MCP for repository queries

Connect Claude: Add gitmcp.io/<owner>/<repo> as an MCP server URL in your Claude configuration.

3. GitLab MCP

Teams on GitLab rather than GitHub have gitlab-mcp (1,200+ stars) — the first GitLab-native MCP server. It brings merge requests, pipelines, issues, and repository operations into Claude's tool context.

What it does: List open MRs, check pipeline status, read CI job logs, comment on issues, and manage GitLab project resources directly from Claude.

Best use cases:

  • Pipeline failure triage without leaving your AI assistant
  • Automated MR review summaries
  • Issue management and sprint planning assistance

Kubernetes MCP Servers

4. Kubernetes MCP Server

The Kubernetes MCP server (1,300+ stars) gives Claude direct access to your cluster: pods, deployments, services, namespaces, logs, and events. This is the most complete Kubernetes MCP implementation for production use, with support for both standard Kubernetes and OpenShift.

What it does: List pods, read logs, describe deployments, check resource usage, apply manifests, exec into containers, and navigate namespaces — all through Claude's tool interface.

Real-world workflow:

# Claude can now answer questions like:
# "Why is the payment-service pod in CrashLoopBackOff?"
# "Which deployments in production have less than 2 replicas?"
# "Show me the last 100 lines of the API gateway logs"

Safety note: For production clusters, configure Claude with a read-only service account first. Only grant write permissions for namespaces where you are comfortable with AI-assisted changes.
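A read-only setup like this can be expressed with standard Kubernetes RBAC. A sketch, with placeholder names (`claude-mcp`, `mcp-readonly`) and a resource list you would tailor to your cluster:

```yaml
# Illustrative read-only access for an MCP service account.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: claude-mcp
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: mcp-readonly
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "pods/log", "services", "events", "namespaces", "deployments", "replicasets"]
    verbs: ["get", "list", "watch"]   # no create/update/delete/exec
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: claude-mcp-readonly
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: mcp-readonly
subjects:
  - kind: ServiceAccount
    name: claude-mcp
    namespace: default
```

Point the MCP server at a kubeconfig using this service account's token; swap the ClusterRoleBinding for a namespaced RoleBinding if you want to scope Claude to specific namespaces.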

Best use cases:

  • Incident response: "What changed in the last 30 minutes?"
  • Health checks and capacity review
  • Manifest review and diff before applying
  • Log aggregation across multiple pods

5. K8sGPT

K8sGPT (7,600+ stars) is a specialized Kubernetes AI diagnostics tool that scans your cluster for common issues and explains them in plain language. While not a pure MCP server, it integrates into Claude workflows through its API backend.

What it does: Automated cluster scanning, issue detection (crashlooping pods, resource limits, misconfigured services), and AI-generated explanations with remediation suggestions.

Best for: Teams that want proactive cluster health monitoring and human-readable diagnosis, not just raw kubectl access.

Infrastructure as Code MCP Servers

6. Terraform MCP Server (Official HashiCorp)

The Terraform MCP server (1,300+ stars) is the official HashiCorp integration, making it the most reliable choice for Terraform-based infrastructure workflows. Claude can query the Terraform registry, read module documentation, and assist with plan and apply workflows.

What it does: Search the Terraform registry for providers and modules, read resource documentation, inspect module inputs/outputs, and assist with HCL configuration based on real registry data rather than training knowledge.

Best use cases:

  • Writing accurate Terraform configurations with current provider syntax
  • Module discovery: "Find the best AWS VPC module for multi-region setups"
  • Provider version migration guidance
  • Resource argument lookup without leaving your terminal

Install:

npx @hashicorp/terraform-mcp-server

CI/CD Pipeline MCP Servers

7. CircleCI MCP Server

The CircleCI MCP server (from the official MCP server list) enables Claude to interact directly with CircleCI pipelines. It is built for one core use case: letting AI agents diagnose and fix build failures automatically.

What it does: Read pipeline status, fetch failed job logs, inspect test results, trigger reruns, and provide step-by-step failure analysis.

Workflow example:

# With CircleCI MCP connected, Claude can:
# 1. Read the failing build logs
# 2. Identify the root cause (flaky test? dependency issue? config error?)
# 3. Suggest or apply a fix
# 4. Trigger a new build to verify the fix

Best for: Teams that want to reduce MTTR on CI failures without manually reading through log output.

8. Azure DevOps MCP

For organizations on the Microsoft stack, Azure DevOps MCP (1,150+ stars) brings pipelines, boards, repos, and artifacts into Claude. It is one of the most complete enterprise DevOps integrations in the MCP ecosystem.

What it does: Query pipeline runs, read build logs, manage work items and sprints, inspect code repositories, and trigger pipelines directly from Claude.

Best use cases:

  • Sprint review automation
  • Build failure triage and root cause analysis
  • Work item creation from incident tickets
  • Release tracking and deployment history queries

Cloud Infrastructure MCP Servers

9. AWS MCP

The AWS MCP server (296 stars) lets Claude "talk with your AWS": querying resources, reading CloudWatch logs, checking service states, and answering infrastructure questions against your real AWS account.

What it does: Query EC2 instances, S3 buckets, RDS databases, Lambda functions, CloudWatch metrics, and more, all through natural language instead of manually switching between AWS Console tabs.

Safety configuration:

# Use an IAM role with least-privilege policies
# Start with read-only: ReadOnlyAccess managed policy
# Add specific write permissions only for resources you need to modify
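A starting-point policy along those lines might look like the following. The action list is illustrative, not exhaustive (these are standard AWS actions; trim it to the services your team actually queries):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "McpReadOnly",
      "Effect": "Allow",
      "Action": [
        "ec2:Describe*",
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation",
        "logs:DescribeLogGroups",
        "logs:FilterLogEvents",
        "logs:GetLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
```

Attaching a narrow policy like this to a dedicated IAM role gives you a clean audit trail: every call Claude makes shows up in CloudTrail under that role.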

Best use cases:

  • Cost analysis: "Which EC2 instances have been running unused this month?"
  • Security audit: "List S3 buckets with public access enabled"
  • Incident response: "Show recent CloudWatch errors for the payment service"

Docker and Container MCP Tools

10. DockerShrink

DockerShrink (416 stars) is a specialized AI assistant for Docker image optimization. It analyzes your Dockerfile and docker-compose configurations to identify and implement size reduction opportunities.

What it does: Analyze Dockerfiles for bloat, identify unnecessary layers, suggest multi-stage build patterns, remove dev dependencies from production images, and implement optimizations automatically.

Why it matters: Oversized Docker images slow CI builds, increase registry storage costs, and lengthen deployment times. A well-optimized image can be 5-10x smaller than a naive first draft.

Best for: Teams with slow CI builds or high container registry costs who want automated Dockerfile optimization without manual tuning.
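The multi-stage pattern such tools recommend is worth seeing concretely. A minimal sketch for a Node.js service (base images, paths, and scripts are illustrative):

```dockerfile
# Stage 1: full toolchain to install dependencies and build.
FROM node:22 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: slim runtime image with production dependencies only.
FROM node:22-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
CMD ["node", "dist/server.js"]
```

Only the built artifacts and production dependencies reach the final image; the compiler toolchain, dev dependencies, and source tree stay in the discarded build stage, which is where most of the size reduction comes from.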

Building a Complete DevOps MCP Stack

The best DevOps MCP setups layer complementary servers rather than relying on one. A practical production stack for a Kubernetes-based team might look like:

{
  "mcpServers": {
    "git": { "command": "npx", "args": ["@anthropic-ai/git-mcp-server"] },
    "kubernetes": { "command": "npx", "args": ["kubernetes-mcp-server"] },
    "terraform": { "command": "npx", "args": ["@hashicorp/terraform-mcp-server"] },
    "aws": { "command": "npx", "args": ["aws-mcp"] }
  }
}

With this stack, Claude can trace a production incident from CloudWatch alert → pod logs → recent git commits → Terraform state — without switching contexts.

Security Considerations for DevOps MCP Servers

DevOps MCP servers are powerful precisely because they can take real actions against real infrastructure. Before connecting any of these to production environments:

  • Start read-only: Configure service accounts, IAM roles, and API tokens with read-only permissions. Add write access incrementally as you build trust in the workflow.
  • Namespace isolation: For Kubernetes MCP, restrict access to non-production namespaces first. Use RBAC to limit which resources Claude can see.
  • Audit logging: Enable audit logs on any cloud account or Kubernetes cluster where Claude has access. You want a full record of every API call made through MCP.
  • Review before apply: For Terraform and Kubernetes apply operations, build a human review step into your workflow — Claude proposes the change, you approve it.

For a complete security checklist, see our guide on How to Audit MCP Server Security.

FAQ: DevOps MCP Servers

Can Claude make changes to my production infrastructure through these MCP servers?

Yes — that is the point, but it requires deliberate configuration. By default, you should start with read-only access. Claude can propose Terraform plans, Kubernetes manifest changes, or pipeline configurations, and then apply them only when you explicitly approve. Do not grant write access to production systems before you have tested the workflow in staging.

Which MCP server should I start with for DevOps?

Start with the official Git MCP server — it has the lowest risk (read-only by nature), highest usefulness (every DevOps team uses git), and zero external API credentials needed. Once you are comfortable with how Claude uses MCP tools, add Kubernetes or Terraform MCP based on your primary pain point.

Do these MCP servers work with Claude Desktop or only with the Claude API?

Most of the servers listed here work with both Claude Desktop (via the mcpServers config) and with the Claude API through MCP-compatible SDKs. Claude Desktop is the easiest starting point: edit claude_desktop_config.json (macOS: ~/Library/Application Support/Claude/, Windows: %APPDATA%\Claude\) to add servers.

Is there a GitHub MCP server for managing pull requests and issues?

Yes — the official GitHub MCP server (from GitHub/Anthropic) lets Claude manage pull requests, issues, repositories, and workflows. Search for github in our DevOps skills directory to find it along with community alternatives.

Can I use Kubernetes MCP in a multi-cluster environment?

Yes. The Kubernetes MCP server respects your kubeconfig contexts, so you can configure it to target specific clusters. For multi-cluster setups, run separate MCP server instances per cluster (each with its own kubeconfig context) and name them clearly in your Claude configuration so Claude knows which cluster you are querying.
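As a sketch, a two-cluster Claude Desktop configuration might look like this (server names and file paths are illustrative, and it assumes the server honors the standard KUBECONFIG environment variable, as kubectl and client-go tooling do):

```json
{
  "mcpServers": {
    "k8s-staging": {
      "command": "npx",
      "args": ["kubernetes-mcp-server"],
      "env": { "KUBECONFIG": "/home/me/.kube/staging.yaml" }
    },
    "k8s-prod": {
      "command": "npx",
      "args": ["kubernetes-mcp-server"],
      "env": { "KUBECONFIG": "/home/me/.kube/prod-readonly.yaml" }
    }
  }
}
```

The distinct server names ("k8s-staging", "k8s-prod") are what Claude sees, so it is always unambiguous which cluster a tool call targets.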

How do I handle secrets and credentials for cloud infrastructure MCP servers?

Never hardcode credentials in your MCP server configuration. Use environment variables or secrets managers: AWS credentials via ~/.aws/credentials or IAM roles, Kubernetes via kubeconfig with service account tokens, Terraform via workspace environment variables. The MCP server picks up credentials from standard locations — you do not need to pass them explicitly in the config.
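As a sketch, an AWS entry using a named profile keeps keys out of the config file entirely (the profile name is illustrative; this assumes the server resolves credentials through the standard AWS SDK chain):

```json
{
  "mcpServers": {
    "aws": {
      "command": "npx",
      "args": ["aws-mcp"],
      "env": { "AWS_PROFILE": "devops-readonly" }
    }
  }
}
```

The actual access keys live in ~/.aws/credentials (or come from an assumed IAM role); the MCP config only names which profile to use.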

Where to Find More DevOps MCP Servers

The DevOps MCP ecosystem is growing rapidly. New servers for ArgoCD, Helm, Datadog, PagerDuty, and more are appearing monthly. Browse the full collection in our DevOps category, filtered by stars, tags, or specific tools. Each listing includes a security grade, installation instructions, and a link to the source repository.

How to apply this guidance in real workflows

Security advice is only useful when it changes implementation behavior. After reading this article, convert the recommendations into a short operational checklist for your team. Start by identifying where the discussed risk appears in your stack today, then assign one owner for validation and one owner for rollout. Shared ownership prevents common drift where findings are acknowledged but never implemented.

Next, classify actions by urgency. Immediate controls should block critical failure paths, such as unsafe command execution, secret leakage, or unreviewed external integrations. Secondary actions can improve observability, documentation quality, and long-term resilience. Separating urgent controls from structural improvements keeps momentum high while still building durable safeguards.

Teams adopting AI agent tooling often underestimate configuration risk. Even when a package is well maintained, local setup can introduce weak points through permissive environment variables, broad network access, or unclear update practices. Use this article as a trigger to review runtime boundaries: what the tool can read, what it can execute, and what data it can send externally.

A simple post-read implementation loop

  1. Capture the top three risks in plain language.
  2. Add one measurable control for each risk.
  3. Run a small pilot with logs enabled.
  4. Review outcomes after one week and adjust policy before broad rollout.

This loop keeps decisions evidence-based and avoids overreaction. It also creates a repeatable pattern that works across different tools and changing vendor landscapes.

Finally, document exceptions explicitly. If you accept a risk for business reasons, record the reason, mitigation, and review date. Transparent exception handling is a major trust signal for internal stakeholders and external auditors. It also improves future decision speed because teams can reference prior reasoning instead of reopening the same debate every release cycle.

If you run recurring retrospectives, archive lessons learned from each implementation cycle. A lightweight internal knowledge base turns individual fixes into team capability and steadily lowers incident frequency over time.
