How you can build images on Kubernetes using Kaniko

If you are working with containers, you have almost certainly built images and pushed them somewhere. Those same images are then pulled by pods and run as containers. In this article, we are going to see how we can build images on Kubernetes using Kaniko.

What is Kaniko?

Kaniko is a tool to build container images from a Dockerfile, inside a container or Kubernetes cluster.

Kaniko doesn’t depend on a Docker daemon and executes each command within a Dockerfile completely in userspace. This enables building container images in environments that can’t easily or securely run a Docker daemon, such as a standard Kubernetes cluster. [Taken directly from their GitHub repo]

In short: you can build an image on a Kubernetes cluster using the same Dockerfile you would use with Docker.

How does Kaniko do this?

The Kaniko executor image is responsible for building an image from a Dockerfile and pushing it to a registry. Within the executor image, we extract the filesystem of the base image (the FROM image in the Dockerfile). We then execute the commands in the Dockerfile, snapshotting the filesystem in userspace after each one. After each command, we append a layer of changed files to the base image (if there are any) and update image metadata. [Again taken from their GitHub repo]
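
To make this concrete, here is a minimal example Dockerfile; the base image, package, and file names are made up purely for illustration. Kaniko would first extract the base image filesystem, then run each instruction and snapshot the resulting changes:

# Hypothetical Dockerfile, only to illustrate the layering behaviour described above.
# Kaniko first extracts the base filesystem of the FROM image.
FROM alpine:3.19
# The filesystem changes made by this command are snapshotted into a new layer.
RUN apk add --no-cache curl
# The copied file is appended as another layer.
COPY app.sh /app/app.sh
# Metadata-only change; no filesystem layer is produced.
ENTRYPOINT ["/app/app.sh"]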

Have a look at the Kubernetes resource YAML below.

apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args: [ "--dockerfile=<dockerfile>",
            "--context=<git-ssh-location].",
            "--destination=<destination>"]
    volumeMounts:
      - name: docker-config
        mountPath: /kaniko/.docker
  restartPolicy: Never
  volumes:
  - name: docker-config
    projected:
      sources:
      - secret:
          name: regcred
          items:
            - key: .dockerconfigjson
              path: config.json

If you look at the YAML above, the container must use the Kaniko executor image gcr.io/kaniko-project/executor:latest. Next, you pass arguments to the Kaniko container. In this example, the --dockerfile option points at the Dockerfile to build, and --context gives the build context, i.e. the location where you want Kaniko to work; if you pass a git URL, Kaniko will clone the repository. The last one is --destination, the location where you want to push the built image: a Docker registry, Artifactory, or any similar image store. To push to a private registry you need login credentials, which are provided through the docker-config volume; it mounts the regcred secret at /kaniko/.docker/config.json so Kaniko can authenticate when pushing the image to your image storage.
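
The regcred secret referenced above is a standard Docker-registry secret. If you don't have it yet, one way to create it is with kubectl; the server, username, password, and email values below are placeholders:

kubectl create secret docker-registry regcred \
  --docker-server=<your-registry-server> \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>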
To see more options for Kaniko, you can visit https://github.com/GoogleContainerTools/kaniko#additional-flags.
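
As a concrete sketch of what a filled-in args section could look like (the repository URL, registry, and tag below are hypothetical, and the git context syntax follows the Kaniko README):

args: [ "--dockerfile=Dockerfile",
        "--context=git://github.com/example/my-app.git#refs/heads/main",
        "--destination=registry.example.com/my-team/my-app:1.0.0" ]

Once the pod spec is saved, say as kaniko.yaml, you can run the build and follow its output with:

kubectl apply -f kaniko.yaml
kubectl logs -f kaniko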

This way you can build the image on Kubernetes, push it to your registry or Artifactory, and later use it to deploy your workload.
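
For example, once the image has been pushed, a Deployment can pull it just like any other image. Here is a minimal sketch, reusing the hypothetical image name and the regcred pull secret from above:

# The names, labels, and image reference here are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-team/my-app:1.0.0
      imagePullSecrets:
      - name: regcred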

If you like the article, please share and subscribe.


Gaurav Yadav

Gaurav is a cloud infrastructure engineer, full stack web developer, and blogger. Sportsperson at heart who loves football. He loves working on problems of scale and is always keen to learn new tech. Experienced with CI/CD, distributed cloud infrastructure, build systems, and a lot of SRE stuff.
