Building and Deploying a New API (Part 3)

Time to get into Kubernetes! I’m going to go through this process manually at first, in order to familiarize myself with some of the details, before I start involving tools such as Terraform and Helm.

I’ve explored Caddy in the past, and I’m a big fan of its automatic HTTPS and straightforward syntax, so part of this setup will be configuring Caddy as a reverse proxy:

demo-api.dango.space, localhost {
        respond / "Hello, World!"
        respond /health-check 204
        reverse_proxy localhost:8080
}

This Caddyfile will need to be available to our Caddy container. To do this, we’ll use the command kubectl create configmap: the --from-file flag reads in the Caddyfile, and the -o yaml output flag lets us capture the result as our configmap.yaml file.
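A sketch of that command (the ConfigMap name caddyfile is my choice; adjust to taste). The --dry-run=client flag makes kubectl print the resource instead of creating it in the cluster:

```shell
# Render the ConfigMap as YAML and save it alongside our other manifests.
kubectl create configmap caddyfile \
  --from-file=Caddyfile \
  --dry-run=client -o yaml > configmap.yaml
```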

In our deployment.yaml file, we’ll be adding the Caddy container.

- name: caddy
  image: caddy:2.9-alpine
  ports:
    - containerPort: 80
    - containerPort: 443
  volumeMounts:
    - name: caddyfile-volume
      mountPath: /etc/caddy/Caddyfile
      subPath: Caddyfile

    - name: caddy-data
      mountPath: /data
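Those volumeMounts refer to two volumes that also need to be declared in the pod spec. A minimal sketch, assuming the ConfigMap and the claim are named caddyfile and caddy-data-pvc:

```yaml
volumes:
  - name: caddyfile-volume
    configMap:
      name: caddyfile          # the ConfigMap generated from our Caddyfile
  - name: caddy-data
    persistentVolumeClaim:
      claimName: caddy-data-pvc
```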

Caddy stores its TLS certificates and other state under /data, which needs to survive pod restarts, so a PersistentVolume and PersistentVolumeClaim have been created to handle this.
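A minimal sketch of the claim (the name, access mode, and size are my assumptions):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: caddy-data-pvc
spec:
  accessModes:
    - ReadWriteOnce          # the Caddy pod is the only writer
  resources:
    requests:
      storage: 1Gi           # certificates and related state are small
```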

In a later post I’ll be taking a look at moving this over to an ingress controller.

Our demo-api image has been added to the deployment.yaml, and its ports opened up in our service.yaml. We have a cluster created on DigitalOcean, so we’ll run kubectl apply to provision our resources. We’ll obtain our external IP address with kubectl get services, add a new A record to our domain, and we should be good to go!
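The steps above boil down to something like this (the manifest filenames are my assumptions):

```shell
# Provision everything, then wait for the load balancer's external IP
# to appear in the EXTERNAL-IP column.
kubectl apply -f configmap.yaml -f deployment.yaml -f service.yaml
kubectl get services --watch
```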

https://demo-api.dango.space/

The state of the repository as of this post can be found here.

A link to the next part will be available once it is written.