K8sGPT is an AI-powered diagnostic tool for Kubernetes clusters that helps identify and solve issues using natural language processing. This cheatsheet provides a quick reference for common K8sGPT commands and operations.
Want to try it now? Check out my step-by-step interactive tutorial on K8sGPT here: https://killercoda.com/kubetools/scenario/k8sGPT
Installation
Method | Command |
---|---|
Homebrew | `brew tap k8sgpt-ai/k8sgpt && brew install k8sgpt` |
Binary | Download from the GitHub Releases page |
Krew | `kubectl krew install k8sgpt` |
Go | `go install github.com/k8sgpt-ai/k8sgpt@latest` |
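For example, a minimal install-and-verify sequence with Homebrew (the other methods in the table work just as well) looks like this:

```bash
# Install K8sGPT via Homebrew
brew tap k8sgpt-ai/k8sgpt
brew install k8sgpt

# Confirm the binary is on your PATH and check the version
k8sgpt version
```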
Basic Commands
Operation | Command | Description |
---|---|---|
Version | `k8sgpt version` | Display the current version of K8sGPT |
Help | `k8sgpt --help` | Show help information |
Authentication | `k8sgpt auth` | Configure authentication for AI providers |
Analyze | `k8sgpt analyze` | Run analysis on your Kubernetes cluster |
List | `k8sgpt list` | List all analyses that have been performed |
Serve | `k8sgpt serve` | Start K8sGPT in server mode |
Filters | `k8sgpt filters` | Show and manage resource filters |
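A typical first session strings these together: configure a provider, run an analysis, then review which resource kinds are being inspected. A minimal sketch using only the commands above:

```bash
# One-time setup: point K8sGPT at an AI provider (see Configuration below)
k8sgpt auth

# Run a cluster-wide analysis
k8sgpt analyze

# See which resource kinds K8sGPT is currently inspecting
k8sgpt filters
```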
Configuration
Operation | Command | Description |
---|---|---|
Configure Provider | `k8sgpt auth --provider` | Set the AI provider (openai, azure, etc.) |
Set API Key | `k8sgpt auth --apikey` | Configure your AI provider API key |
Set Model | `k8sgpt auth --model` | Set the AI model to use |
Configure Context | `k8sgpt auth --context` | Set the Kubernetes context |
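Putting the flags above together, a one-off provider setup might look like the sketch below. The flag names follow the table above and can differ between K8sGPT releases, so verify them with `k8sgpt auth --help` on your version; the API key variable is a placeholder.

```bash
# Configure OpenAI as the provider with a specific model
# (flag names as listed in the table above; verify with `k8sgpt auth --help`)
k8sgpt auth --provider openai --model gpt-4 --apikey "$OPENAI_API_KEY"

# Optionally pin the Kubernetes context K8sGPT should analyze
k8sgpt auth --context my-cluster-context
```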
Analysis Options
Operation | Command | Description |
---|---|---|
Basic Analysis | `k8sgpt analyze` | Run analysis on all resources |
Filter by Namespace | `k8sgpt analyze --namespace` | Analyze resources in a specific namespace |
Filter by Kind | `k8sgpt analyze --filter` | Analyze only specific resource types |
Specify Kubeconfig | `k8sgpt analyze --kubeconfig` | Use a specific kubeconfig file |
Set Severity | `k8sgpt analyze --threshold` | Filter by minimum severity (0-10) |
Generate YAML | `k8sgpt analyze --explain yaml` | Generate the explanation in YAML format |
Generate JSON | `k8sgpt analyze --explain json` | Generate the explanation in JSON format |
Cache Results | `k8sgpt analyze --cache` | Enable caching of results |
Max Results | `k8sgpt analyze --max-results` | Limit the number of results |
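These flags compose. A focused, machine-readable run combining several of the options above could look like this sketch (the namespace, threshold, and result limit are illustrative values):

```bash
# Analyze only Pods in the "app" namespace, keep findings at severity 5
# or higher, cap the output at 10 results, and ask for a JSON explanation
k8sgpt analyze \
  --namespace app \
  --filter Pod \
  --threshold 5 \
  --max-results 10 \
  --explain json
```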
Filter Management
Operation | Command | Description |
---|---|---|
List Filters | `k8sgpt filters list` | Show all active filters |
Add Filter | `k8sgpt filters add` | Add a resource kind to the filter list |
Remove Filter | `k8sgpt filters remove` | Remove a resource kind from the filter list |
Reset Filters | `k8sgpt filters reset` | Reset all filters to the defaults |
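A short filter-tuning session, sketched with the subcommands above (the `Ingress` kind is just an example):

```bash
# See which kinds are currently analyzed
k8sgpt filters list

# Add a kind to the analysis, then take it out again
k8sgpt filters add Ingress
k8sgpt filters remove Ingress

# Go back to the default filter set
k8sgpt filters reset
```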
Integration Options
Integration | Command/Reference | Description |
---|---|---|
Kubectl Plugin | `kubectl k8sgpt analyze` | Use K8sGPT as a kubectl plugin |
Kubernetes Operator | `k8sgpt-operator` | Deploy K8sGPT as an operator |
Prometheus Integration | `k8sgpt serve --metrics` | Expose Prometheus metrics |
Slack Integration | Configure via `.k8sgpt.yaml` | Send results to Slack |
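To feed Prometheus, run K8sGPT in server mode with metrics enabled, as in the sketch below. The address Prometheus should scrape depends on how and where you run `k8sgpt serve`, so treat the port here as an assumption and confirm it against your own setup.

```bash
# Run K8sGPT continuously and expose Prometheus metrics
k8sgpt serve --metrics &

# Sanity-check the metrics endpoint (the port and path are assumptions --
# check the serve logs or `k8sgpt serve --help` for the actual address)
curl -s http://localhost:8080/metrics | head
```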
Example Workflows
Task | Command Sequence | Description |
---|---|---|
Quick Troubleshooting | `k8sgpt analyze` | Run a full cluster analysis |
Focused Analysis | `k8sgpt analyze --filter Pod --namespace app` | Analyze only Pods in the app namespace |
Continuous Monitoring | `k8sgpt serve` | Run in server mode for continuous analysis |
CI/CD Integration | `k8sgpt analyze --explain json --output file.json` | Generate machine-readable output for CI/CD |
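For the CI/CD row, a pipeline step might run the analysis, save the JSON, and fail the build if the report contains findings. A rough sketch using the flags from the table above; the jq path is hypothetical, since the exact JSON schema depends on your K8sGPT version:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Produce machine-readable results for the pipeline
k8sgpt analyze --explain json --output results.json

# Fail the job if the report is non-empty (the `.results` field is a
# hypothetical path -- inspect results.json from your version first)
if jq -e '.results | length > 0' results.json > /dev/null; then
  echo "K8sGPT found issues; see results.json"
  exit 1
fi
```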
Advanced Configuration (.k8sgpt.yaml)
```yaml
ai:
  provider: openai
  model: gpt-4
  apikey: your-api-key-here
  backend: http://localhost:8080
  temperature: 0.7
filters:
  - Pod
  - Deployment
  - StatefulSet
kubernetes:
  context: my-cluster-context
  kubeconfig: /path/to/kubeconfig
```
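Where this file lives can vary by installation; a common pattern is to keep it in your home directory and let the CLI pick it up. A minimal sketch, assuming `~/.k8sgpt.yaml` is the location your build reads (check `k8sgpt --help` for the config path on your version):

```bash
# Drop the configuration shown above into your home directory
# (assumes you saved the snippet above as k8sgpt.yaml)
cp k8sgpt.yaml ~/.k8sgpt.yaml

# Subsequent runs then use the configured provider, filters, and kubeconfig
k8sgpt analyze
```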
Supported Resource Kinds
Category | Resource Kinds |
---|---|
Workloads | Pod, Deployment, StatefulSet, DaemonSet, ReplicaSet, Job, CronJob |
Services | Service, Ingress, NetworkPolicy |
Config | ConfigMap, Secret, HorizontalPodAutoscaler |
Storage | PersistentVolume, PersistentVolumeClaim, StorageClass |
RBAC | Role, RoleBinding, ClusterRole, ClusterRoleBinding, ServiceAccount |
Custom | CustomResourceDefinition, plus most custom resources |
Tips and Best Practices
- Start with a broad analysis and then narrow down with filters (see the sketch after this list)
- Use the `--explain yaml` option for sharing analysis results with team members
- Configure multiple AI providers as fallbacks
- Set appropriate severity thresholds for your environment
- Integrate with monitoring tools for continuous analysis
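The first tip in practice: do a broad pass to surface everything, then re-run against the spot that showed problems. A sketch using only flags covered earlier:

```bash
# Broad pass: analyze everything the current filters allow
k8sgpt analyze

# Narrow down: re-run against the namespace and kind that showed issues,
# raising the severity threshold to cut noise
k8sgpt analyze --namespace app --filter Pod --threshold 5
```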