Version: 1.40

Troubleshoot your Okteto instance

Welcome to the Okteto troubleshooting guide. This page provides answers to some common issues encountered while using Okteto. Please also review our FAQ Guide for additional help.

How to extract logs from Okteto when asking for help

For Developers:

When reaching out to Okteto for support or when asking the Okteto Community, run okteto doctor to generate a doctor file with the okteto logs for a given development container.

For Administrators of Okteto:

Please use our Okteto Diagnostics tool to create a support bundle with cluster information, logs for the Okteto components, and other relevant details.

How to Check That Your Okteto Instance is Healthy

1. Sanity check the Okteto Helm release

On your terminal, run the following commands to check the status of your Okteto Helm release:

helm list -n okteto
helm get values "okteto" -n okteto -o yaml

The following are sample outputs of the commands:

~ % helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
okteto okteto 22 2025-10-02 10:26:07.428432 -0400 EDT deployed okteto-1.36.0 464e721e7

2. Wait for workloads to become “Available”

Check that all Deployments and StatefulSets have completed their rollouts and that all Pods are in the Ready state:

# Deployments & StatefulSets rollouts
kubectl rollout status deploy -l app.kubernetes.io/instance="okteto" -n okteto --timeout=120s
kubectl rollout status sts -l app.kubernetes.io/instance="okteto" -n okteto --timeout=120s

# All pods to Ready (every container ready)
kubectl wait pod -l app.kubernetes.io/instance="okteto" -n okteto --for=condition=Ready --timeout=180s

If any of these commands hits its timeout, jump to Step 8 (“Anything looks bad?”) to investigate further.

3. Spot Unhealthy Pods

Check for Pods that are not in the Ready state or that have container restarts:

# Pods not in the Running phase
kubectl get pods -l app.kubernetes.io/instance="okteto" -n okteto --field-selector=status.phase!=Running -o wide

# Restarts > 0
kubectl get pods -l app.kubernetes.io/instance="okteto" -n okteto -o custom-columns='POD:.metadata.name,READY:.status.containerStatuses[*].ready,RESTARTS:.status.containerStatuses[*].restartCount' | (read; echo "$REPLY"; sort -k3 -nr)
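If you keep a saved copy of the custom-columns output above, a small awk helper can filter it down to only the pods that have restarted. This is a sketch: `pods_with_restarts` is a hypothetical helper name, and it sums comma-separated restart counts so multi-container pods are handled too.

```shell
# Sketch: keep only pods whose total restart count is greater than zero.
# The RESTARTS column may be comma-separated for multi-container pods.
pods_with_restarts() {
  awk 'NR > 1 {
    n = split($3, r, ",")            # split per-container restart counts
    total = 0
    for (i = 1; i <= n; i++) total += r[i]
    if (total > 0) print $1, total   # pod name and total restarts
  }'
}
```

Pipe the kubectl command from above into it, e.g. `kubectl get pods ... -o custom-columns='...' | pods_with_restarts`.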

4. Inventory resources created by Okteto Helm release

Check that all the resources created by the Okteto Helm release are in a healthy state:

kubectl get all -l app.kubernetes.io/instance="okteto" -n okteto
kubectl get ingress,service -l app.kubernetes.io/instance="okteto" -n okteto
kubectl get cm,secret,pvc -l app.kubernetes.io/instance="okteto" -n okteto

The following are sample outputs of the commands:

~ % kubectl get all -l app.kubernetes.io/instance="okteto"
NAME READY STATUS RESTARTS AGE
pod/okteto-api-d6ccbfc8d-8fgb5 1/1 Running 0 3m7s
pod/okteto-api-d6ccbfc8d-bqrlp 1/1 Running 0 3m7s
pod/okteto-buildkit-8c96f331e1-0 1/1 Running 0 3m5s
pod/okteto-daemon-ggbh2 1/1 Running 0 3m8s
pod/okteto-daemon-hr8w4 1/1 Running 0 3m8s
pod/okteto-eventsexporter-0 1/1 Running 0 3m4s
pod/okteto-frontend-56d6fc4696-rblcn 1/1 Running 0 3m6s
pod/okteto-frontend-56d6fc4696-vlv5f 1/1 Running 0 3m6s
pod/okteto-ingress-nginx-controller-7bd8f79fbc-79gjs 1/1 Running 0 3m7s
pod/okteto-ingress-nginx-controller-7bd8f79fbc-hc9cq 1/1 Running 0 3m7s
pod/okteto-ingress-nginx-defaultbackend-88d58c8bd-9xr9g 1/1 Running 0 3m6s
pod/okteto-ingress-nginx-defaultbackend-88d58c8bd-z7zlf 1/1 Running 0 3m6s
pod/okteto-mutation-webhook-66548b9669-t9rcb 1/1 Running 0 3m5s
pod/okteto-mutation-webhook-66548b9669-z6k6w 1/1 Running 0 3m4s
pod/okteto-okteto-nginx-controller-7bb7686cdb-k5h5t 1/1 Running 0 3m7s
pod/okteto-okteto-nginx-controller-7bb7686cdb-w6jh9 1/1 Running 0 3m7s
pod/okteto-prepullimages-86c2t 4/4 Running 0 3m8s
pod/okteto-prepullimages-nt9g7 4/4 Running 0 3m8s
pod/okteto-redis-5998c6c5f4-c549r 1/1 Running 0 3m6s
pod/okteto-regcreds-6c7444697-4tvmh 1/1 Running 0 3m5s
pod/okteto-regcreds-6c7444697-r5r24 1/1 Running 0 3m6s
pod/okteto-registry-565bd4ff94-9skgf 1/1 Running 0 3m5s
pod/okteto-reloader-54d478cbbd-bzjb2 1/1 Running 0 3m7s
pod/okteto-ssh-agent-59dd765ff-5kk89 1/1 Running 0 3m5s
pod/okteto-ssh-agent-59dd765ff-kr4vt 1/1 Running 0 3m5s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/okteto-api ClusterIP 34.118.228.121 <none> 8080/TCP 3m11s
service/okteto-buildkit ClusterIP 34.118.232.126 <none> 443/TCP 3m11s
service/okteto-cluster-endpoint ExternalName <none> kubernetes.default.svc.cluster.local 443/TCP 3m9s
service/okteto-eventsexporter ClusterIP 34.118.227.140 <none> 8080/TCP 3m10s
service/okteto-frontend ClusterIP 34.118.230.239 <none> 8080/TCP 3m10s
service/okteto-ingress-nginx-controller LoadBalancer 34.118.239.1 34.11.21.218 80:32340/TCP,443:31322/TCP 3m12s
service/okteto-ingress-nginx-defaultbackend ClusterIP 34.118.238.41 <none> 80/TCP 3m10s
service/okteto-mutation-webhook ClusterIP 34.118.232.212 <none> 443/TCP 3m8s
service/okteto-okteto-nginx-controller ClusterIP 34.118.232.157 <none> 80/TCP,443/TCP 3m11s
service/okteto-redis ClusterIP 34.118.238.162 <none> 6379/TCP 3m9s
service/okteto-regcreds ClusterIP 34.118.230.229 <none> 443/TCP 3m9s
service/okteto-registry ClusterIP 34.118.227.250 <none> 5000/TCP 3m9s
service/okteto-ssh-agent ClusterIP 34.118.230.24 <none> 3000/TCP 3m8s

NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/okteto-daemon 2 2 2 2 2 <none> 3m8s
daemonset.apps/okteto-prepullimages 2 2 2 2 2 <none> 3m8s

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/okteto-api 2/2 2 2 3m7s
deployment.apps/okteto-frontend 2/2 2 2 3m6s
deployment.apps/okteto-ingress-nginx-controller 2/2 2 2 3m8s
deployment.apps/okteto-ingress-nginx-defaultbackend 2/2 2 2 3m7s
deployment.apps/okteto-mutation-webhook 2/2 2 2 3m5s
deployment.apps/okteto-okteto-nginx-controller 2/2 2 2 3m7s
deployment.apps/okteto-redis 1/1 1 1 3m6s
deployment.apps/okteto-regcreds 2/2 2 2 3m6s
deployment.apps/okteto-registry 1/1 1 1 3m6s
deployment.apps/okteto-reloader 1/1 1 1 3m7s
deployment.apps/okteto-ssh-agent 2/2 2 2 3m5s

NAME DESIRED CURRENT READY AGE
replicaset.apps/okteto-api-d6ccbfc8d 2 2 2 3m7s
replicaset.apps/okteto-frontend-56d6fc4696 2 2 2 3m6s
replicaset.apps/okteto-ingress-nginx-controller-7bd8f79fbc 2 2 2 3m7s
replicaset.apps/okteto-ingress-nginx-defaultbackend-88d58c8bd 2 2 2 3m6s
replicaset.apps/okteto-mutation-webhook-66548b9669 2 2 2 3m5s
replicaset.apps/okteto-okteto-nginx-controller-7bb7686cdb 2 2 2 3m7s
replicaset.apps/okteto-redis-5998c6c5f4 1 1 1 3m6s
replicaset.apps/okteto-regcreds-6c7444697 2 2 2 3m6s
replicaset.apps/okteto-registry-565bd4ff94 1 1 1 3m5s
replicaset.apps/okteto-reloader-54d478cbbd 1 1 1 3m7s
replicaset.apps/okteto-ssh-agent-59dd765ff 2 2 2 3m5s

NAME READY AGE
statefulset.apps/okteto-buildkit-8c96f331e1 1/1 3m5s
statefulset.apps/okteto-eventsexporter 1/1 3m5s

NAME SCHEDULE TIMEZONE SUSPEND ACTIVE LAST SCHEDULE AGE
cronjob.batch/okteto-destroy-all-checker */3 * * * * <none> False 0 103s 3m5s
cronjob.batch/okteto-gc @hourly <none> False 0 <none> 3m5s
cronjob.batch/okteto-insights-metrics */5 * * * * <none> False 0 103s 3m5s
cronjob.batch/okteto-installer-checker */5 * * * * <none> False 0 103s 3m5s
cronjob.batch/okteto-periodic-metrics @hourly <none> False 0 <none> 3m4s
cronjob.batch/okteto-resourcemanager */5 * * * * <none> False 0 103s 3m4s
cronjob.batch/okteto-telemetry @daily <none> False 0 <none> 3m4s
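When scanning long `kubectl get all` output like the sample above, it helps to flag only the pod lines whose READY column shows fewer ready containers than the total. The helper below is a sketch (`not_fully_ready` is a hypothetical name) that works on saved output as well as a live pipe.

```shell
# Sketch: from `kubectl get all` output, print pod lines whose READY column
# shows fewer ready containers than the total (for example 0/1 or 3/4).
not_fully_ready() {
  awk '$1 ~ /^pod\// {
    split($2, c, "/")                 # c[1] = ready, c[2] = total
    if (c[1] != c[2]) print $1, $2
  }'
}
```

Usage: `kubectl get all -l app.kubernetes.io/instance="okteto" -n okteto | not_fully_ready`.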

5. Check that services have endpoints

Verify that all Services have healthy Endpoints:

kubectl get svc -l app.kubernetes.io/instance="okteto" -n okteto

kubectl get endpoints -l app.kubernetes.io/instance="okteto" -n okteto

The following are sample outputs of the commands:

~ % kubectl get svc -l app.kubernetes.io/instance="okteto" -n okteto
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
okteto-api ClusterIP 34.118.233.67 <none> 8080/TCP 234d
okteto-buildkit ClusterIP 34.118.236.79 <none> 443/TCP 234d
okteto-cluster-endpoint ExternalName <none> kubernetes.default.svc.cluster.local 443/TCP 234d
okteto-eventsexporter ClusterIP 34.118.230.3 <none> 8080/TCP 234d
okteto-frontend ClusterIP 34.118.232.147 <none> 8080/TCP 234d
okteto-ingress-nginx-controller LoadBalancer 34.118.231.219 34.21.79.137 80:31270/TCP,443:32017/TCP 234d
okteto-ingress-nginx-defaultbackend ClusterIP 34.118.230.224 <none> 80/TCP 234d
okteto-mutation-webhook ClusterIP 34.118.233.42 <none> 443/TCP 234d
okteto-okteto-nginx-controller ClusterIP 34.118.228.30 <none> 80/TCP,443/TCP 234d
okteto-redis ClusterIP 34.118.227.133 <none> 6379/TCP 20d
okteto-regcreds ClusterIP 34.118.226.123 <none> 443/TCP 234d
okteto-registry ClusterIP 34.118.225.14 <none> 5000/TCP 234d
okteto-ssh-agent ClusterIP 34.118.232.197 <none> 3000/TCP 234d
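In the `kubectl get endpoints` output, a Service with no ready addresses shows `<none>` in the ENDPOINTS column. The helper below is a sketch (`services_without_endpoints` is a hypothetical name) that flags those rows; note that ExternalName Services such as okteto-cluster-endpoint have no Endpoints object and will not appear at all.

```shell
# Sketch: print Services whose Endpoints list is empty
# (the ENDPOINTS column of `kubectl get endpoints` shows "<none>").
services_without_endpoints() {
  awk 'NR > 1 && $2 == "<none>" { print $1 }'
}
```

Usage: `kubectl get endpoints -l app.kubernetes.io/instance="okteto" -n okteto | services_without_endpoints`.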

6. Validate your DNS configuration

Verify that the Okteto subdomain is resolving and accessible:

curl -i --max-time 5 "https://okteto.<Okteto instance subdomain>/healthz"
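If you want to script this health check, it helps to separate the status-code classification from the network call. The sketch below assumes only the documented /healthz endpoint; `http_ok` is a hypothetical helper that treats any 2xx response as healthy.

```shell
# Sketch: classify the HTTP status code returned by the health check
# so the probe can be scripted. Any 2xx response counts as healthy.
http_ok() {
  case "$1" in
    2??) return 0 ;;
    *)   return 1 ;;
  esac
}

# Usage (requires DNS to be set up; replace the placeholder subdomain):
# code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 \
#   "https://okteto.<Okteto instance subdomain>/healthz")
# http_ok "$code" || echo "health check failed with HTTP $code"
```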

7. Test your Okteto Instance End-to-End

Go through the build, deploy, and up commands with the Movies Okteto sample to test that Okteto deployments are working as expected:

git clone https://github.com/okteto/movies
cd movies
okteto build
okteto deploy
okteto up

8. Anything looks bad?

Check the events and describe the failing Pods or other resources to gather more information about what might be wrong:

# a) Events (cluster is telling you what’s wrong)
kubectl get events -n okteto --sort-by=.lastTimestamp | tail -n 40

# b) Describe the failing thing (image pulls, scheduling, probe failures)
kubectl describe pod <pod-name>
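When the event list is long, summarizing Warning events by reason often surfaces the recurring problem (for example FailedScheduling or BackOff). This is a sketch: `warning_reasons` is a hypothetical helper that works on saved `kubectl get events` output, whose default columns are LAST SEEN, TYPE, REASON, OBJECT, MESSAGE.

```shell
# Sketch: count Warning events by reason, most frequent first.
warning_reasons() {
  awk 'NR > 1 && $2 == "Warning" { print $3 }' | sort | uniq -c | sort -rn
}
```

Usage: `kubectl get events -n okteto | warning_reasons`.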

UPGRADE FAILED: “okteto” has no deployed releases

This error occurs when you install or upgrade again after the initial install failed. Delete the failed release before trying again:

helm uninstall okteto -n okteto

Registry pods keep restarting

This can happen when the pods can't read from or write to your cloud storage bucket. Double-check that the cloud IAM identity you created has read/write access to the specified bucket.

Deployment pipelines stay in "progressing" forever

This can happen for several reasons: among others, the installer job could not start because of a Kubernetes API error, or because the cluster is overloaded. To find out what the problem is, you can list all the jobs and pods for a specific pipeline.

You need the pipeline name (the name displayed in the Okteto UI) and the namespace where it is deployed. With that information, get the jobs and pods with these commands:

kubectl get jobs -l=dev.okteto.com/pipeline-name=movies -l=dev.okteto.com/pipeline-namespace=cindy --namespace=okteto
kubectl get pods -l=dev.okteto.com/pipeline-name=movies -l=dev.okteto.com/pipeline-namespace=cindy --namespace=okteto
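The two commands above differ only in the resource type and the label values, so they can be wrapped in a small helper. This is a convenience sketch: `pipeline_commands` is a hypothetical function that prints the kubectl commands so you can review them before running; "movies" and "cindy" are example values from the commands above.

```shell
# Sketch: print the kubectl commands that list the jobs and pods for a
# pipeline, given its name (as shown in the Okteto UI) and its namespace.
pipeline_commands() {
  pipeline="$1"; ns="$2"
  for res in jobs pods; do
    echo "kubectl get $res" \
      "-l dev.okteto.com/pipeline-name=$pipeline" \
      "-l dev.okteto.com/pipeline-namespace=$ns" \
      "--namespace=okteto"
  done
}

# pipeline_commands movies cindy | sh   # review first, then run against the cluster
```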

Using a Custom CNI

If you're using a custom CNI on your cluster, the webhooks may need additional configuration. In some cases the CNI used on the worker nodes is not the same as the CNI used by the control plane, and the webhooks must use host networking. The ports may also need to be changed to avoid collisions.

The Okteto Webhook is configured by setting webhook.hostNetwork to true. The ports are set with webhook.port. See the Okteto Webhook configuration documentation for more information.

Docker Hub credentials misconfiguration

Because you can configure your own Docker Hub account in Okteto, the credentials may end up misconfigured. When that happens, the kubelet can't pull public images from Docker Hub, which can cause widespread image pull failures in the cluster.

If this ever happens in your cluster, there is a way to fix it:

After this, the Okteto daemonset will be able to configure the right credentials for Docker Hub. Once you verify everything is working, you can restore the original base image for the Okteto daemonset.

We are here to help

Reach out to us, we're always happy to help!