CoreDNS with etcd backend
Overview
This tutorial describes how to deploy CoreDNS backed by etcd as a dynamic DNS provider for external-dns.
It shows how to configure external-dns to write DNS records into etcd, which CoreDNS will then serve.
TL;DR
After completing this lab, you will have a Kubernetes environment running as containers on your local development machine, with etcd, CoreDNS and external-dns.
Notes
- CoreDNS and etcd run inside the cluster here for demonstration purposes.
- For real deployments, you can use an external etcd or secure etcd with TLS.
- The zone example.org is arbitrary; use your own domain.
- external-dns automatically maintains records in etcd under /skydns/<reversed-domain>.
Prerequisites
Before you start, ensure you have:
- A running Kubernetes cluster. In this tutorial we are going to use kind.
- kubectl and helm
- external-dns source code or helm chart
- CoreDNS helm chart
- Optional: a dnstools container for testing
- Optional: etcdctl to interact with etcd
Bootstrap Environment
1. Create cluster
kind create cluster --config=docs/snippets/tutorials/coredns/kind.yaml
Creating cluster "coredns-etcd" ...
 ✓ Ensuring node image (kindest/node:v1.33.0) 🖼
 ✓ Preparing nodes 📦 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-coredns-etcd"
You can now use your cluster with:
kubectl cluster-info --context kind-coredns-etcd
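The kind.yaml referenced above lives in the external-dns repository. A minimal sketch of a similar config (treat the exact layout as an assumption, not the actual file; the port mapping for the etcd NodePort 32379 is what later allows host access at 127.0.0.1:32379):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: coredns-etcd
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 32379   # etcd NodePort inside the cluster
        hostPort: 32379        # reachable as http://127.0.0.1:32379 on the host
  - role: worker               # second node, matching the two "Preparing nodes" boxes above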
2. Deploy etcd as a StatefulSet
There are multiple options for deploying etcd (for example a Helm chart, an operator, or a plain manifest). In this tutorial we apply a minimal manifest shipped with the external-dns repository, sketched below.
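A rough sketch of what such a single-member etcd deployment might contain (names, image tag and flags are assumptions; the authoritative file is docs/snippets/tutorials/coredns/etcd.yaml):

# Headless Service: gives the pod the stable DNS name etcd-0.etcd and makes
# etcd.default.svc.cluster.local resolvable for CoreDNS and external-dns.
apiVersion: v1
kind: Service
metadata:
  name: etcd
spec:
  clusterIP: None
  selector:
    app: etcd
  ports:
    - name: client
      port: 2379
    - name: peer
      port: 2380
---
# NodePort Service: exposes the client port on 32379 so the host can reach etcd
# through the kind port mapping.
apiVersion: v1
kind: Service
metadata:
  name: etcd-nodeport
spec:
  type: NodePort
  selector:
    app: etcd
  ports:
    - name: client
      port: 2379
      nodePort: 32379
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: etcd
spec:
  serviceName: etcd
  replicas: 1
  selector:
    matchLabels:
      app: etcd
  template:
    metadata:
      labels:
        app: etcd
    spec:
      containers:
        - name: etcd
          image: quay.io/coreos/etcd:v3.5.17   # any recent etcd v3 image works
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          command:
            - etcd
            - --name=etcd-0
            - --listen-client-urls=http://0.0.0.0:2379
            - --advertise-client-urls=http://etcd-0.etcd:2379
            - --listen-peer-urls=http://0.0.0.0:2380
            - --initial-advertise-peer-urls=http://$(POD_IP):2380
            - --initial-cluster=etcd-0=http://$(POD_IP):2380
          ports:
            - name: client
              containerPort: 2379
            - name: peer
              containerPort: 2380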
# apply custom manifest from external-dns repository
kubectl apply -f docs/snippets/tutorials/coredns/etcd.yaml
# wait until it's ready
kubectl rollout status statefulset etcd
❯❯ partitioned roll out complete: 1 new pods have been updated...
Test etcd connectivity:
kubectl exec -it etcd-0 -- etcdctl member list -wtable
+------------------+---------+--------+------------------------+-------------------------+------------+
| ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER |
+------------------+---------+--------+------------------------+-------------------------+------------+
| 3b3ae05f90cfc535 | started | etcd-0 | http://10.244.1.3:2380 | http://etcd-0.etcd:2379 | false |
+------------------+---------+--------+------------------------+-------------------------+------------+
Test etcd record management:
kubectl -n default exec -it etcd-0 -- etcdctl put /skydns/org/example/myservice '{"host":"10.0.0.10"}'
❯❯ OK
kubectl -n default exec -it etcd-0 -- etcdctl get /skydns --prefix
❯❯ /skydns/org/example/myservice
❯❯ {"host":"10.0.0.10"}
kubectl -n default exec -it etcd-0 -- etcdctl del /skydns/org/example/myservice
❯❯ 1
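The values are SkyDNS-style JSON messages, which is what the CoreDNS etcd plugin reads; besides host, fields such as ttl and text (for TXT data) are understood. A hypothetical example (the key name demo is made up for illustration):

kubectl -n default exec -it etcd-0 -- etcdctl put /skydns/org/example/demo '{"host":"10.0.0.11","ttl":60}'
kubectl -n default exec -it etcd-0 -- etcdctl del /skydns/org/example/demo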
To access etcd from the host (via the port mapping configured on the kind cluster):
etcdctl --endpoints=http://127.0.0.1:32379 member list
❯❯ 3b3ae05f90cfc535, started, etcd-0, http://10.244.1.3:2380, http://etcd-0.etcd:2379, false
3. Deploy CoreDNS using Helm
helm repo add coredns https://coredns.github.io/helm
helm repo update
helm upgrade --install coredns coredns/coredns \
-f docs/snippets/tutorials/coredns/values-coredns.yaml \
-n default
❯❯ Release "coredns" does not exist. Installing it now.
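The values-coredns.yaml used above is part of the external-dns repository. A minimal sketch of what such values might contain, assuming the coredns/coredns chart's values schema (the important part is the etcd plugin serving example.org from the /skydns key prefix):

isClusterService: false          # run alongside kube-dns instead of replacing it
serviceType: ClusterIP
servers:
  - zones:
      - zone: .
    port: 53
    plugins:
      - name: errors
      - name: health
      - name: ready
      - name: etcd
        parameters: example.org
        configBlock: |-
          stubzones
          path /skydns
          endpoint http://etcd.default.svc.cluster.local:2379
      - name: kubernetes
        parameters: cluster.local in-addr.arpa ip6.arpa
        configBlock: |-
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
      - name: forward
        parameters: . /etc/resolv.conf
      - name: cache
        parameters: 30
      - name: loadbalance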
Validate that it's running and check the logs for errors:
kubectl logs deploy/coredns -n default -c coredns --tail=50
kubectl logs deploy/coredns -n default -c resolv-check --tail=50
Test DNS Resolution
kubectl run -it --rm dnsutils --image=infoblox/dnstools
❯❯ curl -v http://etcd.default.svc.cluster.local:2379/version
❯❯ dig @coredns.default.svc.cluster.local kubernetes.default.svc.cluster.local
❯❯ dig @coredns.default.svc.cluster.local etcd.default.svc.cluster.local
4. Configure ExternalDNS
Deploy it with Helm and a minimal configuration.
Add the external-dns Helm repository and check the available versions:
helm repo add external-dns https://kubernetes-sigs.github.io/external-dns/
helm repo update
helm search repo external-dns --versions
Install with the required configuration:
helm upgrade --install external-dns external-dns/external-dns \
-f docs/snippets/tutorials/coredns/values-extdns-coredns.yaml \
-n default
❯❯ Release "external-dns" does not exist. Installing it now.
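The values-extdns-coredns.yaml used above is part of the external-dns repository. A minimal sketch of what such values might contain, assuming the kubernetes-sigs external-dns chart's values schema; the coredns provider reads its etcd endpoints from the ETCD_URLS environment variable, and the owner id and txtPrefix below are assumptions (the latter guessed from the a-a / aaaa-aa keys shown later):

provider:
  name: coredns
env:
  - name: ETCD_URLS
    value: http://etcd.default.svc.cluster.local:2379
sources:
  - service
domainFilters:
  - example.org
policy: sync
logLevel: debug
txtOwnerId: coredns-etcd-demo     # assumed owner id; any unique string works
txtPrefix: "%{record_type}-"      # assumed; would explain the a-a / aaaa-aa registry keys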
Validate the pod status and view the logs.
Alternatively, run external-dns on the host from source:
export ETCD_URLS="http://127.0.0.1:32379" # port mapping configured on kind cluster
go run main.go \
--provider=coredns \
--source=service \
--log-level=debug
5. Configure Test Services
Apply the manifest:
kubectl apply -f docs/snippets/tutorials/coredns/fixtures.yaml
kubectl get svc -l svc=test-svc
❯❯ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
❯❯ a-g1-record LoadBalancer 10.96.233.133 <pending> 80:31188/TCP 3m38s
❯❯ aa-g1-record LoadBalancer 10.96.93.4 <pending> 80:31710/TCP 3m38s
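The fixtures.yaml used above is part of the external-dns repository. A rough sketch of what such test Services might look like (the selector is an assumption and is not needed for the DNS test itself; the hostname annotation is what external-dns turns into records):

apiVersion: v1
kind: Service
metadata:
  name: a-g1-record
  labels:
    svc: test-svc
  annotations:
    external-dns.alpha.kubernetes.io/hostname: a.example.org
spec:
  type: LoadBalancer
  selector:
    app: test-app        # assumed backend label
  ports:
    - port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: aa-g1-record
  labels:
    svc: test-svc
  annotations:
    external-dns.alpha.kubernetes.io/hostname: aa.example.org
spec:
  type: LoadBalancer
  selector:
    app: test-app        # assumed backend label
  ports:
    - port: 80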
Patch the services to manually assign ingress IPs; this just makes each Service appear like a real LoadBalancer for tools and tests, since kind does not provision external load balancer IPs by default.
kubectl patch svc a-g1-record --type=merge \
-p '{"status":{"loadBalancer":{"ingress":[{"ip":"172.18.0.2"}]}}}' \
--subresource=status
❯❯ service/a-g1-record patched
kubectl patch svc aa-g1-record --type=merge \
-p '{"status":{"loadBalancer":{"ingress":[{"ip":"2001:db8::1"}]}}}' \
--subresource=status
❯❯ service/aa-g1-record patched
kubectl get svc -l svc=test-svc
❯❯ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
❯❯ a-g1-record LoadBalancer 10.96.233.133 172.18.0.2 80:31188/TCP 7m13s
❯❯ aa-g1-record LoadBalancer 10.96.93.4 2001:db8::1 80:31710/TCP 7m13s
6. Verify that records are written to etcd
Check the etcd content; you should see keys similar to:
kubectl exec -it etcd-0 -- etcdctl get /skydns/org/example --prefix --keys-only
❯❯ /skydns/org/example/a-a/1acbad7e
❯❯ /skydns/org/example/a/048b0377
❯❯ /skydns/org/example/aa/2b981607
❯❯ /skydns/org/example/aaaa-aa/1228708f
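To inspect the full SkyDNS messages that external-dns wrote, including the ownership TXT records maintained by its registry, drop --keys-only:

kubectl exec -it etcd-0 -- etcdctl get /skydns/org/example --prefix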
7. Test DNS resolution via CoreDNS
Launch a debug pod as before:
kubectl run -it --rm dnsutils --image=infoblox/dnstools
Run the following queries; the expected output is shown after each:
dig +short @coredns.default.svc.cluster.local a.example.org
❯❯ 172.18.0.2
dig +short @coredns.default.svc.cluster.local aa.example.org AAAA
❯❯ 2001:db8::1