Redirecting a hostname to a different IP address in your Kubernetes Pods

Imagine you’re preparing a Connections side-by-side migration and you want to install your Component Pack using the final Connections URL (let’s call it connections.example.org). However, that URL still points to the webserver of your current/old installation, while your pods need to connect to the Connections environment you’re currently building. So your pods need to get the IP address of your new webserver when they try to connect to connections.example.org. How do you do that?

I wrote an article a few weeks ago describing how to set up a simple DNS server using dnsmasq, which refers all DNS queries to the usual DNS servers except for the hostname or hostnames you want to point somewhere else. If you change /etc/resolv.conf on your Kubernetes master and worker nodes (the machines on which you installed Kubernetes) to point to this DNS server, your pods will resolve your URL to the right IP address. This works and isn’t too difficult. However, I hadn’t posted it yet, as I was wondering whether there was a better way to do this. Thanks to Heidi Harding I found out there is!
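For reference, the dnsmasq approach boils down to a configuration fragment like this (the IP address and upstream server here are made-up examples; use the address of your new webserver and your own upstream DNS):

```
# /etc/dnsmasq.conf (sketch; adjust addresses to your environment)
# Answer this one hostname with the new webserver's IP...
address=/connections.example.org/10.8.16.80
# ...and forward everything else to a regular upstream DNS server
# (if omitted, dnsmasq uses the servers from /etc/resolv.conf)
server=8.8.8.8
```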

The service in Kubernetes that handles DNS lookups is (in more recent Kubernetes versions) CoreDNS. Queries for hosts that aren’t part of the Kubernetes infrastructure are forwarded by CoreDNS to the DNS servers configured on the host; you can find these servers by checking your /etc/resolv.conf. There is a way to tell CoreDNS that specific hostnames should resolve to a different IP address, which is very nicely described in the article in the references below. In short, it boils down to these commands:

kubectl -n kube-system edit configmap/coredns
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
        hosts custom.hosts connections.example.org {
           10.8.16.80 connections.example.org
           fallthrough
        }
    }
kind: ConfigMap
[..]

The hosts block at the bottom is the part that needs to be added. Make sure you use spaces, not tabs: tabs are not allowed in YAML files.
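The entries inside the hosts block use plain /etc/hosts syntax: an IP address followed by one or more hostnames. As an illustration of that format (this is a hypothetical helper for demonstration, not CoreDNS code), a minimal Python sketch that resolves a name from such entries:

```python
def parse_hosts(text):
    """Parse /etc/hosts-style lines ("IP hostname [hostname ...]")
    into a {hostname: ip} mapping, as the CoreDNS hosts plugin reads them."""
    mapping = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        ip, *names = line.split()
        for name in names:
            mapping[name] = ip
    return mapping

entry = "10.8.16.80 connections.example.org"
print(parse_hosts(entry)["connections.example.org"])  # -> 10.8.16.80
```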

If it saved correctly (you’ll get a message if it didn’t), you need to restart the CoreDNS pods. Deleting them is safe, as the CoreDNS Deployment recreates them automatically:

kubectl delete pod -n kube-system -l k8s-app=kube-dns

Check if the pods restarted properly:

kubectl get pods --all-namespaces -o wide | grep coredns
kube-system   coredns-66bff467f8-p77f2                  1/1     Running   0          31m   192.168.19.195    con-k8s1              
kube-system   coredns-66bff467f8-v9lml                  1/1     Running   0          31m   192.168.149.194   con-k8s2              

and test if everything worked correctly:

kubectl create -f https://k8s.io/examples/admin/dns/busybox.yaml

Wait until it’s running, then try:

kubectl exec -ti busybox -- nslookup connections.example.org
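If the hosts entry is picked up, the output should look something like this (the server lines show your cluster DNS service IP, which will differ per cluster; the exact layout depends on the busybox version):

```
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      connections.example.org
Address 1: 10.8.16.80
```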

If that returns the right IP address, you can remove the busybox pod and you’re good to go!

kubectl delete -f https://k8s.io/examples/admin/dns/busybox.yaml

References

Add a custom host to Kubernetes