Network Load Balancer¶
The AWS Load Balancer Controller supports reconciliation of Kubernetes Service resources of type LoadBalancer by provisioning a Network Load Balancer (NLB) with either instance or ip target type.
secure by default
Since the v2.2.0 release, the AWS Load Balancer Controller provisions an internal NLB by default.
To create an internet-facing NLB, the following annotation is required on your Service:
```yaml
service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
```
For backwards compatibility, if the service.beta.kubernetes.io/aws-load-balancer-scheme annotation is absent, an existing NLB's scheme remains unchanged.
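As a minimal sketch (the name, selector, and ports are placeholders; the Service must also be handed over to this controller as described under Configuration below), the annotation is set in the Service metadata:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: echoserver   # placeholder name
  annotations:
    # request an internet-facing NLB; omit for the default internal scheme
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  selector:
    app: echoserver
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: LoadBalancer
```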
Prerequisites¶
- AWS Load Balancer Controller >= v2.2.0
- For Kubernetes Service resources of type `LoadBalancer`:
    - Kubernetes >= v1.20, or
    - Kubernetes >= v1.19.10 for 1.19, or
    - Kubernetes >= v1.18.18 for 1.18, or
    - EKS >= v1.16
- For Kubernetes Service resources of type `NodePort`:
    - Kubernetes >= v1.16
- For `ip` target type:
    - Pods have native AWS VPC networking configured, see Amazon VPC CNI plugin
Configuration¶
By default, Kubernetes Service resources of type LoadBalancer are reconciled by the Kubernetes controller built into the CloudProvider component of the kube-controller-manager or the cloud-controller-manager (a.k.a. the in-tree controller).
In order to let the AWS Load Balancer Controller manage the reconciliation of Kubernetes Service resources of type LoadBalancer, you need to explicitly offload the reconciliation from the in-tree controller to the AWS Load Balancer Controller.
The AWS Load Balancer Controller supports the LoadBalancerClass feature since the v2.4.0 release for Kubernetes v1.22+ clusters.
The LoadBalancerClass feature provides a CloudProvider-agnostic way of offloading the reconciliation of Kubernetes Service resources of type LoadBalancer to an external controller.
When you set spec.loadBalancerClass to service.k8s.aws/nlb on a Kubernetes Service resource of type LoadBalancer, the AWS Load Balancer Controller takes charge of reconciliation by provisioning an NLB.
Warning
- If you modify a Service resource with a matching `spec.loadBalancerClass` by changing its `type` from `LoadBalancer` to anything else, the controller will clean up the provisioned NLB for that Service.
- If `spec.loadBalancerClass` is set to a loadBalancerClass that is not recognized by this controller, the controller ignores the Service resource regardless of the `service.beta.kubernetes.io/aws-load-balancer-type` annotation.
Tip
- By default, the NLB uses the `instance` target type; you can customize it via the `service.beta.kubernetes.io/aws-load-balancer-nlb-target-type` annotation.
- This controller uses `service.k8s.aws/nlb` as the default `LoadBalancerClass`; you can customize it to a different value via the controller flag `--load-balancer-class` (see the sketch below).
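As an illustrative sketch only (it assumes the controller runs as a Deployment and uses a made-up custom class name; your actual controller installation, typically via Helm, has many more fields), the flag is passed as a container argument:
```yaml
# Excerpt of a controller Deployment spec; only the args are relevant here.
spec:
  template:
    spec:
      containers:
        - name: aws-load-balancer-controller
          args:
            - --cluster-name=my-cluster                      # placeholder cluster name
            - --load-balancer-class=example.com/custom-nlb   # hypothetical custom LoadBalancerClass value
```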
Example: instance mode
```yaml
apiVersion: v1
kind: Service
metadata:
  name: echoserver
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
spec:
  selector:
    app: echoserver
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: LoadBalancer
  loadBalancerClass: service.k8s.aws/nlb
```
Example: ip mode
```yaml
apiVersion: v1
kind: Service
metadata:
  name: echoserver
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
spec:
  selector:
    app: echoserver
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: LoadBalancer
  loadBalancerClass: service.k8s.aws/nlb
```
The AWS in-tree controller supports an AWS-specific way of offloading the reconciliation of Kubernetes Service resources of type LoadBalancer to an external controller.
When you set the service.beta.kubernetes.io/aws-load-balancer-type annotation to external on a Kubernetes Service resource of type LoadBalancer, the in-tree controller ignores the Service resource. In addition, if you specify the service.beta.kubernetes.io/aws-load-balancer-nlb-target-type annotation on the Service resource, the AWS Load Balancer Controller takes charge of reconciliation by provisioning an NLB.
Warning
- It's not recommended to modify or add the `service.beta.kubernetes.io/aws-load-balancer-type` annotation on an existing Service resource. Instead, delete the existing Service resource and recreate a new one if a change is desired.
- If you modify this annotation on an existing Service resource, you might end up with leaked AWS load balancer resources.
backwards compatibility for nlb-ip type
For backwards compatibility, both the in-tree controller and the AWS Load Balancer Controller support nlb-ip as the value of the service.beta.kubernetes.io/aws-load-balancer-type annotation. The controllers treat it as if you had specified both of the annotations below:
```yaml
service.beta.kubernetes.io/aws-load-balancer-type: external
service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
```
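For instance, a legacy Service manifest using the nlb-ip value might look like the sketch below (name, selector, and ports are placeholders); it is equivalent to the ip mode example that follows:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: echoserver   # placeholder
  annotations:
    # legacy value; equivalent to type: external + nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-type: nlb-ip
spec:
  selector:
    app: echoserver
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: LoadBalancer
```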
Example: instance mode
```yaml
apiVersion: v1
kind: Service
metadata:
  name: echoserver
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
spec:
  selector:
    app: echoserver
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: LoadBalancer
```
Example: ip mode
```yaml
apiVersion: v1
kind: Service
metadata:
  name: echoserver
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
spec:
  selector:
    app: echoserver
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: LoadBalancer
```
Protocols¶
The controller supports both TCP and UDP protocols. It also configures TLS termination on the NLB if you configure the Service with a certificate annotation.
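For example, TLS termination is typically configured with the SSL certificate and SSL ports annotations from the controller's annotation reference; the certificate ARN below is a placeholder, as are the name, selector, and ports:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: echoserver   # placeholder
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    # placeholder ACM certificate ARN
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-west-2:123456789012:certificate/example"
    # terminate TLS on port 443 only
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  selector:
    app: echoserver
  ports:
    - port: 443
      targetPort: 8080
      protocol: TCP
  type: LoadBalancer
  loadBalancerClass: service.k8s.aws/nlb
```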
For TCP, an NLB with IP targets does not pass the client source IP address unless specifically configured to do so via target group attributes. Even when the NLB does pass the client IP along, your application pods might not see it, for example in instance mode with externalTrafficPolicy set to Cluster.
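One way to do that configuration (a sketch, assuming the aws-load-balancer-target-group-attributes Service annotation and the preserve_client_ip target group attribute) is:
```yaml
# Annotation excerpt only; set on the Service metadata.
metadata:
  annotations:
    # assumption: preserve_client_ip is the relevant target group attribute for IP targets
    service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=true
```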
In such cases, you can configure NLB proxy protocol v2 via annotation if you need visibility into the client source IP address on your application pods.
To enable proxy protocol v2, apply the following annotation to your Service:
```yaml
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
```
If you enable proxy protocol v2, NLB health checks over HTTP/HTTPS work only if the health check port also supports proxy protocol v2. Because of this behavior, you should not configure proxy protocol v2 with NLB instance mode and externalTrafficPolicy set to Local.
Subnet tagging requirements¶
See Subnet Discovery for details on configuring ELB for public or private placement.
Security group¶
AWS currently does not support attaching security groups to NLBs. To allow inbound traffic from the NLB, the controller automatically adds inbound rules to the worker node security groups by default.
disable worker node security group rule management
You can disable the worker node security group rule management via annotation.
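Assuming the manage-backend-security-group-rules annotation from the controller's annotation reference is what applies here, a hedged sketch of disabling the rule management looks like:
```yaml
# Annotation excerpt only; set on the Service metadata.
metadata:
  annotations:
    # assumption: this annotation turns off automatic worker node security group rule management
    service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: "false"
```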
Worker node security groups selection¶
The controller automatically selects the worker node security groups to modify so that inbound traffic is allowed, using the following rules:
- for `instance` mode, the security group of each backend worker node's primary ENI is selected.
- for `ip` mode, the security group of each backend pod's ENI is selected.
multiple security groups on an ENI
If there are multiple security groups attached to an ENI, the controller expects only one security group tagged with the following tag:

| Key | Value |
| --- | --- |
| `kubernetes.io/cluster/${cluster-name}` | `owned` or `shared` |

`${cluster-name}` is the name of the Kubernetes cluster.
Worker node security groups rules¶
When client IP preservation is enabled (client traffic reaches the targets with its original source addresses):

| Rule | Protocol | Port(s) | IpRange(s) |
| --- | --- | --- | --- |
| Client Traffic | `spec.ports[*].protocol` | `spec.ports[*].port` | Traffic Source CIDRs |
| Health Check Traffic | TCP | Health Check Ports | NLB Subnet CIDRs |

When client IP preservation is disabled (client traffic appears to originate from the NLB subnets):

| Rule | Protocol | Port(s) | IpRange(s) |
| --- | --- | --- | --- |
| Client Traffic | `spec.ports[*].protocol` | `spec.ports[*].port` | NLB Subnet CIDRs |
| Health Check Traffic | TCP | Health Check Ports | NLB Subnet CIDRs |
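The client-traffic source CIDRs typically follow the Service's source ranges. As a sketch (CIDR, name, selector, and ports are placeholders), you can restrict them with the standard spec.loadBalancerSourceRanges field:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: echoserver   # placeholder
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
spec:
  selector:
    app: echoserver
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: LoadBalancer
  loadBalancerClass: service.k8s.aws/nlb
  loadBalancerSourceRanges:
    - 203.0.113.0/24   # placeholder client CIDR allowed to reach the NLB
```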