
Fixing Kubernetes CreateContainerConfigError & CreateContainerError


CreateContainerConfigError and CreateContainerError are two common error messages you might see when deploying a Pod or similar objects in Kubernetes. They occur when Kubernetes is unable to start the Pod’s containers due to invalid configuration properties or a condition not being met.

Debugging CreateContainerConfigError and CreateContainerError can be confusing because these are generic messages that each cover more than one potential problem. Don't worry: in this article, we'll help you understand what causes these errors and show you how to fix them.

In this tutorial, you'll learn:

  1. What is CreateContainerConfigError
  2. How to fix CreateContainerConfigError
  3. What is CreateContainerError
  4. How to fix CreateContainerError

What is CreateContainerConfigError?

CreateContainerConfigError occurs when Kubernetes fails to validate the configuration values you provide for a Pod, Deployment, or other object that creates containers. Most often, it means your manifest references a ConfigMap or Secret that doesn't exist.

Common causes of CreateContainerConfigError

The common causes of CreateContainerConfigError include missing Kubernetes ConfigMaps and Secrets.

  • Missing ConfigMap – Referencing a ConfigMap that doesn't exist prevents Kubernetes from assembling the configuration data to supply to the container, causing this error. (Read more about Kubernetes ConfigMaps.)
  • Missing Secret – Similarly, the error also occurs when your container manifest references a Secret that doesn't exist. An example manifest that can fail in both ways is shown below.
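
For example, the following minimal Pod sketch will report CreateContainerConfigError if either referenced object is missing from the namespace (app-config matches the ConfigMap used later in this article; app-secret is a hypothetical name):

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx:latest
      envFrom:
        # Both references must resolve before the container can start
        - configMapRef:
            name: app-config
        - secretRef:
            name: app-secret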

Checking for CreateContainerConfigError

You can easily check whether any of your Pods have experienced CreateContainerConfigError by checking the STATUS values reported by Kubectl’s get pods command:

$ kubectl get pods
NAME   READY   STATUS                       RESTARTS   AGE
app    0/1     CreateContainerConfigError   0          10s

The app Pod is reporting a CreateContainerConfigError, which is preventing it from starting.

How to fix CreateContainerConfigError

To fix a CreateContainerConfigError, first retrieve the Pod’s status to confirm the problem, then use the following steps to troubleshoot the issue.

Step 1: Check the container’s events

Run kubectl describe to see the events that the Pod has stored:

$ kubectl describe pod app
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  73s               default-scheduler  Successfully assigned default/app to minikube
  Normal   Pulled     21s               kubelet            Successfully pulled image "nginx:latest" in 1.007071348s (1.007137195s including waiting)
  Normal   Pulling    8s (x6 over 73s)  kubelet            Pulling image "nginx:latest"
  Warning  Failed     7s (x6 over 72s)  kubelet            Error: configmap "app-config" not found
  Normal   Pulled     7s                kubelet            Successfully pulled image "nginx:latest" in 950.1683ms (950.184028ms including waiting)

Step 2: Find the “Failed” event

Within the event list, find the event that has Failed as its reason. The message for that event will reveal the cause of the CreateContainerConfigError:

Error: configmap "app-config" not found

Step 3: Fix the configuration issue

Now you can proceed to fix the issue. The action to take will depend on the specific cause of the CreateContainerConfigError:

1. ConfigMap not found

This means a configMapRef or configMapKeyRef field in your Pod manifest references a ConfigMap that doesn’t exist in your cluster or namespace. To resolve the problem, create a new ConfigMap with the correct name, then reapply your Pod’s manifest.
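
For instance, assuming the missing ConfigMap is the app-config object reported in the events above, you could create it imperatively (the key and value here are placeholders for your own configuration data):

$ kubectl create configmap app-config --from-literal=test_key=test_value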

2. Secret not found

In the same way, the issue can also occur when you try to access a Secret that doesn’t exist. Solve the problem by first adding the Secret to your cluster, then recreating your Pod.
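
As a sketch, assuming the missing Secret is named app-secret (a hypothetical name), you could add it with:

$ kubectl create secret generic app-secret --from-literal=password=example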

3. Couldn’t find key in ConfigMap or Secret

Finally, you’ll also get a CreateContainerConfigError when you reference a specific key inside a ConfigMap or Kubernetes Secret, but that key doesn’t exist. For example, the following Pod manifest will cause the error if the app-config ConfigMap exists but doesn’t contain a test_key entry:

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - image: nginx:latest
      name: app
      env:
        - name: TEST_KEY
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: test_key
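
One way to resolve this case is to recreate the ConfigMap so it contains the expected key. The value below is a placeholder; piping --dry-run=client output into kubectl apply works whether or not the ConfigMap already exists:

$ kubectl create configmap app-config --from-literal=test_key=test_value --dry-run=client -o yaml | kubectl apply -f -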

Step 4: Reapply the manifest

Once you’ve ensured your ConfigMaps and Secrets are correctly populated, Kubernetes should be able to successfully create your containers. You can force the creation to proceed by deleting and reapplying your Pod or Deployment manifest with Kubectl:

$ kubectl delete -f pod.yaml

$ kubectl apply -f pod.yaml
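
Afterwards, kubectl get pods should show the Pod reaching the Running state (exact timings will vary):

$ kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
app    1/1     Running   0          10s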

What is CreateContainerError?

CreateContainerError is an error that can be triggered when Kubernetes begins to transition a Pod from Pending to Running. It signals that although the container’s configuration is valid, the container couldn’t be created because of a runtime problem.

Common Causes of CreateContainerError

  1. Naming Conflict – The container name could conflict with an existing container. This usually only happens when an issue with the container runtime has prevented a previous container from being deleted successfully.
  2. Missing Command/Entrypoint – Trying to use a container image that doesn’t specify a command or entrypoint will cause this error, unless you manually set a command for the container in your Pod’s manifest. (Note that if you specify an invalid command, you will see RunContainerError instead.)
  3. Inaccessible Storage Volume – Container configurations that reference invalid storage volumes, such as misconfigured PersistentVolumes (PVs), can trigger this error.
  4. Container Runtime Issue/Failure to Start Container – Problems with the container runtime installed on the Pod’s Node can prevent new containers from being created successfully.

Checking for CreateContainerError

CreateContainerError can be identified using the same method shown for CreateContainerConfigError above.

Use Kubectl to retrieve the list of Pods from your cluster, then look for the error in the STATUS column:

$ kubectl get pods
NAME   READY   STATUS                 RESTARTS   AGE
app    0/1     CreateContainerError   0          16s

You can see that the app Pod isn’t running because there’s a CreateContainerError.

How to fix CreateContainerError

Once you’ve identified that CreateContainerError has occurred, you can use the following steps to resolve the problem.

Step 1: Check the container’s events

First, run Kubectl’s describe command to view the events associated with the failed Pod. The events are displayed at the bottom of the command’s output:

$ kubectl describe pod app
...
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  15s               default-scheduler  Successfully assigned default/app to minikube
  Normal   Pulled     3s (x3 over 14s)  kubelet            Container image "invalid-image" already present on machine
  Warning  Failed     3s (x3 over 14s)  kubelet            Error: Error response from daemon: No command specified

Step 2: Find the “Failed” event

Inspect the list of events to find the entry that has Failed as its reason. This event will explain why Kubernetes failed to create the container and had to issue a CreateContainerError.

The event’s Message column will provide detailed text that makes it clear why the error occurred. In the case shown above, there was no command specified for the container.

Step 3: Troubleshoot the issue

You should now edit your manifest to fix the cause of the error. The steps you’ll need to take will depend on which problem is reported in the Pod’s event history.

Here are a few possible scenarios:

1. No command specified

To fix this problem, you can set the command field for the container in your Pod’s manifest (spec.containers[].command):

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: example-image
      command: ["my-app"]

Alternatively, you can adjust your image’s Dockerfile so it includes an ENTRYPOINT instruction:

FROM alpine:latest
...
COPY my-app /bin/my-app
...
ENTRYPOINT ["/bin/my-app"]

2. Storage-related issues

CreateContainerError can arise as the result of an incorrect storage configuration, such as trying to attach a hostPath volume that has an invalid path:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: \\invalid\path

The event’s message will contain guidance to help you identify the problem:

$ kubectl describe pod app
...
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  15s               default-scheduler  Successfully assigned default/app to minikube
  Normal   Pulled     3s (x3 over 14s)  kubelet            Container image "invalid-image" already present on machine
  Warning  Failed     3s (x3 over 14s)  kubelet            Error: Error response from daemon: create \invalid\path: "\\invalid\path" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path

You can fix the problem by first editing your volume’s manifest to remove the configuration issue, then re-applying both the PV and the Pod to your cluster.
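
For reference, here’s a corrected sketch of the PersistentVolume above using a valid absolute path (/mnt/data is an example directory and must exist on the Node):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data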

3. Container name already in use

When one of your Nodes has a malfunctioning container runtime, old containers might not be cleaned up after Kubernetes terminates them. This can result in name collisions that lead to CreateContainerError.

To solve this problem, try restarting your container runtime, manually removing old containers, or using any methods detailed in your runtime’s documentation to force a clean-up to occur.
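
On Nodes using a CRI runtime such as containerd or CRI-O, a manual clean-up sketch could look like this (it assumes crictl is installed and requires root; the container ID is a placeholder you’d take from the list output):

# List all containers, including stopped ones that weren’t cleaned up
$ sudo crictl ps -a
# Remove a leftover container by its ID
$ sudo crictl rm <container-id>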

If circumstances prevent you from using any of these options, you can always try renaming your new Pod’s containers to prevent the conflict from occurring. More details about the issue may be available in the Kubelet logs on your Node; on systemd-based systems you can view them with journalctl -u kubelet, while some setups write to /var/log/kubelet.log.

Step 4: Reapply your Pod’s manifest

After you’ve fixed the issue by editing your Kubernetes manifest or container image, you’ll usually need to reapply your Pod to your cluster so Kubernetes recreates it, this time successfully:

$ kubectl delete -f pod.yaml

$ kubectl apply -f pod.yaml

You should then see your containers are successfully created, allowing the Pod to enter the Running state.

Key points

CreateContainerError and CreateContainerConfigError are two frustrating Kubernetes errors that you might see when there’s a problem with your Pod manifests or the container runtime on your Nodes. Although they’re related, they occur in slightly different circumstances: CreateContainerConfigError indicates Kubernetes didn’t try to create the container because of a configuration problem, whereas CreateContainerError is used when an issue arises during the creation process.

In this article, you’ve learned how to detect and solve these errors to keep your cluster healthy. Next, check out Spacelift, our collaborative CI/CD platform for IaC that lets you easily monitor your resources, find problems, and safely apply changes!

Spacelift helps you manage the complexities and compliance challenges of using Kubernetes. It brings with it a GitOps flow, so your Kubernetes Deployments are synced with your Kubernetes Stacks, and pull requests show you a preview of what they’re planning to change. It also has an extensive selection of policies, which lets you automate compliance checks and build complex multi-stack workflows. Find out more about how Spacelift works with Kubernetes.

The Most Flexible CI/CD Automation Tool

Spacelift is an alternative to using homegrown solutions on top of a generic CI. It helps overcome common state management issues and adds several must-have capabilities for infrastructure management.
