How AWS Load Balancer controller works
The following diagram details the AWS components this controller creates. It also demonstrates the route ingress traffic takes from the ALB to the Kubernetes cluster.
This section describes each step (circle) above. The example demonstrates satisfying a single ingress resource.
1. The controller watches for ingress events from the API server. When it finds ingress resources that satisfy its requirements, it begins the creation of AWS resources.
2. An ALB (ELBv2) is created in AWS for the new ingress resource. This ALB can be internet-facing or internal. You can also specify the subnets it's created in using annotations.
3. Target Groups are created in AWS for each unique Kubernetes service described in the ingress resource.
4. Listeners are created for every port detailed in your ingress resource annotations. When no port is specified, sensible defaults (80 or 443) are used. Certificates may also be attached via annotations.
5. Rules are created for each path specified in your ingress resource. This ensures traffic to a specific path is routed to the correct Kubernetes Service.
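The steps above can be sketched with a single ingress manifest. The annotations shown (`alb.ingress.kubernetes.io/scheme`, `subnets`, `listen-ports`, `certificate-arn`) come from the controller's annotation set; the resource names, subnet IDs, and certificate ARN are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echoserver            # placeholder name
  annotations:
    # Step 2: internet-facing ALB, created in the specified subnets
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/subnets: subnet-aaaa1111,subnet-bbbb2222
    # Step 4: one listener per port; certificate attached to the HTTPS listener
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:123456789012:certificate/xxxx
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          # Step 5: each path becomes an ALB rule; each unique backend
          # service gets its own target group (step 3)
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service
                port:
                  number: 80
```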
Along with the above, the controller also...
- deletes AWS components when ingress resources are removed from k8s.
- modifies AWS components when ingress resources change in k8s.
- assembles a list of existing ingress-related AWS components on start-up, allowing recovery if the controller is restarted.
AWS Load Balancer controller supports two traffic modes:
- Instance mode
- IP mode
Instance mode is the default; users can explicitly select the mode via the `alb.ingress.kubernetes.io/target-type` annotation.
Instance mode

Ingress traffic starts at the ALB and reaches the Kubernetes nodes through each service's NodePort. This means that services referenced from ingress resources must be exposed with `type: NodePort` in order to be reached by the ALB.
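A minimal sketch of Instance mode, assuming a placeholder `echoserver` service; the `target-type` annotation belongs on the ingress resource:

```yaml
# On the ingress (instance is the default, shown here for clarity):
#   alb.ingress.kubernetes.io/target-type: instance
apiVersion: v1
kind: Service
metadata:
  name: echoserver          # placeholder name
spec:
  type: NodePort            # required: the ALB reaches the service via the NodePort
  selector:
    app: echoserver
  ports:
    - port: 80
      targetPort: 8080
```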
IP mode

Ingress traffic starts at the ALB and reaches the Kubernetes pods directly. The CNI must support directly accessible pod IPs via secondary IP addresses on ENIs.
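IP mode can be selected per ingress with the `target-type` annotation; a minimal fragment (the resource name is a placeholder):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echoserver                              # placeholder name
  annotations:
    # Register pod IPs directly as ALB targets; the backing
    # service no longer needs to be of type NodePort.
    alb.ingress.kubernetes.io/target-type: ip
```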