# Developer Reference
ExternalDNS is the work of thousands of contributors and is maintained by a small team within kubernetes-sigs. This document covers the basics of working with the external-dns codebase: how to build, run, and test external-dns.
## Tools
Building and/or testing external-dns requires additional tooling.
### Go Tools
Additional Go-based tools are managed in go.tool.mod and used for code generation:
| Tool | Purpose |
|---|---|
| controller-gen | Generates CRD manifests and deepcopy methods |
| yq | YAML processing (splitting, filtering CRD outputs) |
| yamlfmt | YAML formatting |
List all installed Go tools:
Update Go tools to their latest versions:
Note: Updates are done manually because Dependabot does not yet support go.tool.mod (dependabot-core#12050).
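Assuming the standard Go 1.24+ tool-directive workflow (the repository Makefile may provide wrapper targets for this), the tools tracked in go.tool.mod can be listed and updated roughly like this:

```shell
# List the tools declared in go.tool.mod
go list -modfile=go.tool.mod tool

# Update every tool directive to its latest version
# ("tool" is the meta-pattern matching all tool directives)
go get -modfile=go.tool.mod -u tool
```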
## First Steps
Configure Development Environment
You must have a working Go environment and be able to build the project and run its tests.
## Building & Testing
The project uses the make build system, which runs code generators, tests, and static code analysis.

Build, run tests, and lint the code:

If you added any flags or metrics, re-generate the documentation.
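A typical local loop might look like the following; the target names here are conventional assumptions, so check the repository Makefile for the exact targets (including the one that regenerates flag and metrics documentation):

```shell
make build   # run code generators and compile the binary
make test    # run unit tests
make lint    # run static code analysis
```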
We require all changes to be covered by acceptance tests and/or unit tests, depending on the situation.
In the context of external-dns, acceptance tests exercise interactions with providers, such as creating, reading, and destroying DNS resources. In contrast, unit tests cover functionality wholly within the codebase itself, such as individual functions.
### Log Unit Testing
Testing log messages within the codebase provides significant advantages for debugging, monitoring, and gaining a deeper understanding of system behavior. The logging library ships with built-in testing functionality.
This practice enables:
- Early detection of logging issues
- Verification of important information
- Ensuring correct severity levels
- Improving observability and monitoring
- Driving better logging practices
To illustrate how to unit test log output within functions, consider the following example:
```go
import (
	"testing"

	log "github.com/sirupsen/logrus"

	"sigs.k8s.io/external-dns/internal/testutils"
)

func TestMe(t *testing.T) {
	hook := testutils.LogsUnderTestWithLogLevel(log.WarnLevel, t)

	// ... function under test ...

	testutils.TestHelperLogContains("example warning message", hook, t)
	// provide a negative assertion
	testutils.TestHelperLogNotContains("this message should not be shown", hook, t)
}
```
## CRD Generation
The DNSEndpoint CRD manifest is generated from Go types using controller-gen and must be regenerated whenever the types in endpoint/ or apis/ change.
This runs scripts/generate-crd.sh which:
- Generates `DeepCopy` methods for types in `endpoint/` and `apis/`
- Generates the CRD manifest into `config/crd/standard/`
- Copies the CRD (with filtered annotations) into `charts/external-dns/crds/`
The controller-gen.kubebuilder.io/version annotation in the generated YAML reflects the version of controller-gen from go.tool.mod at generation time and is updated automatically.
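To regenerate after changing the types, the script named above can be run directly (a wrapping Make target may also exist; check the Makefile):

```shell
./scripts/generate-crd.sh
```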
## Integration Tests
Integration tests live in tests/integration/ and verify behavior that spans multiple sources or wrappers together, using a fake Kubernetes client — no real cluster is required.
### Where integration tests sit
```mermaid
flowchart TD
    E2E["E2E Tests<br>Real cluster + real DNS provider<br>Slow · requires cloud credentials"]
    IT["Integration Tests ← tests/integration/<br>Fake Kubernetes API · no cluster needed<br>Tests source + wrapper combinations · fast<br>Declarative YAML scenarios"]
    UT["Unit Tests<br>One source or wrapper in isolation<br>Mocked or minimal Kubernetes client"]
    E2E --> IT --> UT
    style IT fill:#bbf7d0,stroke:#15803d,stroke-width:2px
```
### What runs during a test
```mermaid
flowchart LR
    subgraph yaml["tests/integration/scenarios/tests.yaml"]
        RES["resources<br>Service · Ingress · Pod"]
        CFG["config<br>sources · filters · wrappers"]
        EXP["expected<br>endpoints"]
    end
    subgraph toolkit["toolkit — fake Kubernetes"]
        PARSE["ParseResources()"]
        FAKE["fake.Clientset"]
        WRAP["CreateWrappedSource()"]
    end
    subgraph pipeline["ExternalDNS pipeline under test"]
        SRC["Source(s)<br>service · ingress · ..."]
        WRP["Wrapper(s)<br>dedup · targetFilter · NAT64"]
        OUT["Endpoints"]
    end
    ASSERT["ValidateEndpoints()<br>DNSName · Targets<br>RecordType · TTL"]
    RES --> PARSE --> FAKE --> WRAP
    CFG --> WRAP
    WRAP --> SRC --> WRP --> OUT --> ASSERT
    EXP --> ASSERT
```
When to add an integration test:
- You are adding or changing a source (e.g. `service`, `ingress`) and want to verify it produces the correct endpoints end-to-end.
- You are changing a wrapper (e.g. deduplication, target filtering, default targets, NAT64) and want to verify it behaves correctly when real Kubernetes resources are involved.
- You are changing a post-processor and want to confirm it applies correctly to endpoints produced by one or more sources.
- You are verifying multiple sources together (e.g. `service` and `ingress` both pointing to the same hostname) and their combined output.
- You are fixing a cross-cutting bug that only manifests when sources, wrappers, and post-processors interact.
- A unit test would require mocking too many internals — an integration test can express the scenario more clearly as a real Kubernetes resource.
How to add a scenario:
Add an entry to tests/integration/scenarios/tests.yaml. Each scenario declares Kubernetes resources (Service, Ingress, etc.), the ExternalDNS source configuration, and the expected endpoints:
```yaml
- name: my-new-scenario
  description: >
    Brief explanation of what behavior this scenario validates.
  config:
    sources: ["service"]
  resources:
    - resource:
        apiVersion: v1
        kind: Service
        metadata:
          name: my-svc
          namespace: default
          annotations:
            external-dns.alpha.kubernetes.io/hostname: my.example.com
        spec:
          type: LoadBalancer
        status:
          loadBalancer:
            ingress:
              - ip: 1.2.3.4
  expected:
    - dnsName: my.example.com
      targets: ["1.2.3.4"]
      recordType: A
```
How to run:
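Assuming the integration tests are standard Go tests under `tests/integration/`, they can be run with:

```shell
go test ./tests/integration/...
```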
## Complete test on local env
It’s possible to run ExternalDNS locally. CoreDNS can be used for easier testing.
See the related tutorials for full instructions.
## Continuous Integration
When submitting a pull request, you’ll notice that we run several automated processes on your proposed change. Some of these processes are tests to ensure your contribution aligns with our standards. While we strive for accuracy, some users may find these tests confusing.
## Execute code without building binary
ExternalDNS does not require `make build`; you can compile and run the Go program directly with `go run`. For the command to run successfully, it requires provider credentials (for example, AWS) and access to a local or remote Kubernetes cluster. To run a local cluster, please refer to running local cluster.
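As a sketch, running from source against the AWS provider might look like the following; the flags are standard external-dns flags, but the values are placeholders to adjust for your setup:

```shell
go run main.go \
  --source=service \
  --provider=aws \
  --registry=txt \
  --txt-owner-id=my-identifier \
  --dry-run \
  --once
```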
## Deploying a local build
After building local images, it is often useful to deploy them to a local cluster. We use minikube here, but Kind or any other solution would work.
- Create local cluster
- Build and load local images
- Deploy with Helm
- Deploy with kubernetes manifests
### Create a local cluster
For simplicity, minikube can be used to create a single-node cluster.
You can select a specific Kubernetes version with the `--kubernetes-version` flag. See the minikube documentation on basic controls and configuration for more details.
Once you have a configuration in place, create the cluster with that configuration:
```shell
minikube start \
  --profile=external-dns \
  --memory=2000 \
  --cpus=2 \
  --disk-size=5g \
  --kubernetes-version=v1.31 \
  --driver=docker
minikube profile external-dns
```
After the new Kubernetes cluster is ready, verify that it is running as a single-node cluster:
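For example, with kubectl:

```shell
# Expect exactly one node, in Ready state
kubectl get nodes
```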
### Building local images
When building local images with ko, you can't specify the registry used to create the image names; it will always be `ko.local`.

Note: you can skip this step if you build and push the image to your private registry, or if you use an official external-dns image.
```shell
❯❯ export KO_DOCKER_REPO=ko.local
❯❯ export VERSION=v1
❯❯ docker context use rancher-desktop ## (optional) this command is only required when using rancher-desktop
❯❯ ls -al /var/run/docker.sock ## (optional) validate that docker runtime is configured correctly and symlink exists
❯❯ ko build --tags ${VERSION}
❯❯ docker images
$$ ko.local/external-dns-9036f6870f30cbdefa42a10f30bada63 local-v1
```
Push image to minikube
Refer to load image
```shell
❯❯ minikube image load ko.local/external-dns-9036f6870f30cbdefa42a10f30bada63:local-v1
❯❯ minikube image ls
$$ registry.k8s.io/pause:3.10
$$ ...
$$ ko.local/external-dns-9036f6870f30cbdefa42a10f30bada63:local-v1
$$ ...
❯❯ kubectl run external-dns --image=ko.local/external-dns-9036f6870f30cbdefa42a10f30bada63:local-v1 --image-pull-policy=Never
```
Build and push directly in minikube
Any docker command you run in the current terminal will run against the Docker daemon inside the minikube cluster.
Refer to push directly
```shell
❯❯ eval $(minikube -p external-dns docker-env)
❯❯ echo $MINIKUBE_ACTIVE_DOCKERD
$$ external-dns
❯❯ export VERSION=v1
❯❯ ko build --local --tags ${VERSION}
❯❯ docker images
$$ REPOSITORY                                                TAG
$$ registry.k8s.io/kube-apiserver                            v1.31.4
$$ ....
$$ ko.local/external-dns-9036f6870f30cbdefa42a10f30bada63    minikube-v1
$$ ...
❯❯ eval $(minikube docker-env -u) ## unset minikube
```
Pushing to an in-cluster registry using the Registry addon
Refer to pushing images for a full configuration
```shell
❯❯ export KO_DOCKER_REPO=$(minikube ip):5000
❯❯ export VERSION=registry-v1
❯❯ minikube addons enable registry
❯❯ ko build --tags ${VERSION}
```
### Building image and push to a registry
Build container image and push to a specific registry
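A sketch with ko, which builds and pushes in one step when `KO_DOCKER_REPO` points at a remote registry (the registry name below is a placeholder):

```shell
export KO_DOCKER_REPO=registry.example.com/external-dns
export VERSION=v1
ko build --tags ${VERSION}
```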
### Deploy with Helm
To build local images if required, load them on a local cluster, and deploy the Helm chart, run:
Render chart templates locally and display the output:

```shell
❯❯ helm lint --debug charts/external-dns
❯❯ helm template external-dns charts/external-dns --output-dir _scratch
```
Deploy manifests to a cluster with required values
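For example, assuming a values file containing your provider configuration (the file path is a placeholder):

```shell
helm upgrade --install external-dns charts/external-dns \
  --namespace external-dns \
  --create-namespace \
  --values my-values.yaml
```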
Modify the chart or values and validate the diff:

```shell
❯❯ helm template external-dns charts/external-dns --output-dir _scratch
❯❯ kubectl diff -f _scratch/external-dns --recursive=true --show-managed-fields=false
```
#### Helm Values
This Helm chart comes with a JSON schema generated from the values with the helm schema plugin.
- Install required plugin(s)
- Ensure that the schema is always up-to-date
- When not up-to-date, update JSON schema
- Runs a series of tests to verify that the chart is well-formed, linted and JSON schema is valid
- Auto-generate documentation for helm charts into markdown files.
- Run helm unittests.
- Add an entry to the chart CHANGELOG.md under the `## UNRELEASED` section and open a pull request
### Deploy with kubernetes manifests
Note: the Kubernetes manifests are not up to date. Consider creating an `examples` folder.
## Contribute to documentation
All documentation is in the docs folder. If a new page is added or removed, make sure mkdocs.yml is also updated.

Install the required dependencies. In order not to break system packages, we use virtual environments with pipenv.
```shell
❯❯ pipenv shell
❯❯ pip install -r docs/scripts/requirements.txt
❯❯ mkdocs serve
$$ ...
$$ Serving on http://127.0.0.1:8000/
```
### How to add an example snippet
Let’s say we are improving the tutorial located at docs/tutorials/aws.md.
- Add a snippet to `docs/snippets/aws/<snippet-name>.<snippet-extension>`
- Add the snippet to the markdown file `docs/tutorials/aws.md`
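Assuming the site uses the pymdownx.snippets extension (the usual mkdocs mechanism for this), the snippet is then referenced from the markdown file with the scissors include marker:

```markdown
--8<-- "docs/snippets/aws/<snippet-name>.<snippet-extension>"
```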