Kubernetes clusters are complex, generating vast amounts of logs and events that are difficult to parse manually. k8sgpt helps SREs and DevOps engineers by automating root cause analysis with AI, considerably reducing mean time to resolution (MTTR).
kubectl apply -f ollama/ollama-deployment.yaml
# Wait for pod to be ready
kubectl wait --for=condition=ready pod -l app=ollama -n ollama --timeout=300s
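The contents of ollama/ollama-deployment.yaml are not reproduced in this section. A minimal sketch consistent with the ollama namespace and app=ollama label used above could look like the following; the resource names, image tag, and storage choice are assumptions to adapt to your cluster:

# Hypothetical ollama/ollama-deployment.yaml (sketch only; adjust to your environment)
apiVersion: v1
kind: Namespace
metadata:
  name: ollama
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ollama
  namespace: ollama
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ollama
  template:
    metadata:
      labels:
        app: ollama
    spec:
      containers:
        - name: ollama
          image: ollama/ollama:latest   # official Ollama image
          ports:
            - containerPort: 11434      # Ollama's default API port
          volumeMounts:
            - name: models
              mountPath: /root/.ollama  # where pulled models are stored
      volumes:
        - name: models
          emptyDir: {}                  # swap for a PVC to persist models across restarts
---
apiVersion: v1
kind: Service
metadata:
  name: ollama
  namespace: ollama
spec:
  selector:
    app: ollama
  ports:
    - port: 11434
      targetPort: 11434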
Load Model into Ollama
# Get pod name
POD_NAME=$(kubectl get pod -n ollama -l app=ollama -o jsonpath='{.items[0].metadata.name}')

# To load a model into Ollama, simply use
kubectl exec -n ollama $POD_NAME -- ollama pull gemma3:4b

# Verify model is loaded
kubectl exec -n ollama $POD_NAME -- ollama list
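For reference, a K8sGPT custom resource wiring the operator to the Ollama service deployed above might look like the sketch below. This is an assumption, not the document's configuration: the backend name, base URL format, and field names depend on the k8sgpt and operator versions, so verify them against the official docs before applying.

# Hypothetical K8sGPT resource pointing the operator at the local Ollama service
apiVersion: core.k8sgpt.ai/v1alpha1
kind: K8sGPT
metadata:
  name: k8sgpt-ollama
  namespace: k8sgpt-operator-system
spec:
  ai:
    enabled: true
    backend: localai                      # OpenAI-compatible backend served by Ollama (assumed)
    model: gemma3:4b                      # the model pulled above
    baseUrl: http://ollama.ollama.svc.cluster.local:11434/v1
  noCache: false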
# Get results as custom resources
kubectl get results -n k8sgpt-operator-system
# View detailed analysis
kubectl get results -n k8sgpt-operator-system -o yaml
# Watch for new results
kubectl get results -n k8sgpt-operator-system -w
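The -o yaml output above returns Result objects whose exact schema depends on the operator version installed. As an illustrative (not authoritative) sketch of the shape to expect, a single Result might look roughly like this; the names, fields, and values below are assumptions for illustration only:

# Illustrative Result object (assumed field names/values; your cluster's output may differ)
apiVersion: core.k8sgpt.ai/v1alpha1
kind: Result
metadata:
  name: defaultbrokenpod
  namespace: k8sgpt-operator-system
spec:
  kind: Pod
  name: default/broken-pod
  backend: localai
  error:
    - text: "Back-off pulling image \"nginx:nonexistent\""
  details: |
    The pod cannot start because the referenced image tag does not exist.
    Fix: update the image reference to a valid tag and redeploy.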