Network Load Balancer¶
The AWS Load Balancer Controller (LBC) supports reconciliation for Kubernetes Service resources of type LoadBalancer by provisioning an AWS Network Load Balancer (NLB) with an instance or ip target type.
Secure by default
Since the v2.2.0 release, the LBC provisions an internal NLB by default. To create an internet-facing NLB, the following annotation is required on your service:

```yaml
service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
```

For backwards compatibility, if the service.beta.kubernetes.io/aws-load-balancer-scheme annotation is absent, an existing NLB's scheme remains unchanged.
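For example, the scheme annotation goes on the Service's metadata like any other LBC annotation (a minimal fragment, not a complete manifest):

```yaml
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
```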
Prerequisites¶
- LBC >= v2.2.0
- For Kubernetes Service resources of type LoadBalancer:
    - Kubernetes >= v1.20, or
    - Kubernetes >= v1.19.10 for 1.19, or
    - Kubernetes >= v1.18.18 for 1.18, or
    - EKS >= v1.16
- For Kubernetes Service resources of type NodePort:
    - Kubernetes >= v1.16
- For ip target type:
    - Pods have native AWS VPC networking configured. For more information, see the Amazon VPC CNI plugin documentation.
Configuration¶
By default, Kubernetes Service resources of type LoadBalancer are reconciled by the Kubernetes controller built into the CloudProvider component of the kube-controller-manager or the cloud-controller-manager (also known as the in-tree controller).

In order for the LBC to manage the reconciliation of Kubernetes Service resources of type LoadBalancer, you need to explicitly offload the reconciliation from the in-tree controller to the LBC.
The LBC supports the LoadBalancerClass feature since the v2.4.0 release for Kubernetes v1.22+ clusters. The LoadBalancerClass feature provides a CloudProvider-agnostic way of offloading the reconciliation for Kubernetes Service resources of type LoadBalancer to an external controller. When you set spec.loadBalancerClass to service.k8s.aws/nlb on a Kubernetes Service resource of type LoadBalancer, the LBC takes charge of reconciliation by provisioning an NLB.
Warning
- If you modify a Service resource with a matching spec.loadBalancerClass by changing its type from LoadBalancer to anything else, the controller cleans up the provisioned NLB for that Service.
- If spec.loadBalancerClass is set to a loadBalancerClass that isn't recognized by the LBC, it ignores the Service resource, regardless of the service.beta.kubernetes.io/aws-load-balancer-type annotation.
Tip
- By default, the NLB uses the instance target type. You can customize it using the service.beta.kubernetes.io/aws-load-balancer-nlb-target-type annotation.
- The LBC uses service.k8s.aws/nlb as the default LoadBalancerClass. You can customize it to a different value using the controller flag --load-balancer-class.
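As a sketch of the second point, assuming you run the controller as a standard Deployment, the flag can be set in the container args (the class value my-company.com/nlb below is hypothetical):

```yaml
# Excerpt from the LBC Deployment spec; the custom class value is illustrative.
containers:
  - name: aws-load-balancer-controller
    args:
      - --cluster-name=my-cluster
      - --load-balancer-class=my-company.com/nlb
```

Services must then set spec.loadBalancerClass to the same custom value for the LBC to reconcile them.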
Example: instance mode
```yaml
apiVersion: v1
kind: Service
metadata:
  name: echoserver
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
spec:
  selector:
    app: echoserver
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: LoadBalancer
  loadBalancerClass: service.k8s.aws/nlb
```
Example: ip mode
```yaml
apiVersion: v1
kind: Service
metadata:
  name: echoserver
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
spec:
  selector:
    app: echoserver
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: LoadBalancer
  loadBalancerClass: service.k8s.aws/nlb
```
The AWS in-tree controller supports an AWS-specific way of offloading the reconciliation for Kubernetes Service resources of type LoadBalancer to an external controller. When you set the service.beta.kubernetes.io/aws-load-balancer-type annotation to external on a Kubernetes Service resource of type LoadBalancer, the in-tree controller ignores the Service resource. In addition, if you specify the service.beta.kubernetes.io/aws-load-balancer-nlb-target-type annotation on the Service resource, the LBC takes charge of reconciliation by provisioning an NLB.
Warning
- It's not recommended to modify or add the service.beta.kubernetes.io/aws-load-balancer-type annotation on an existing Service resource. If a change is desired, delete the existing Service resource and create a new one instead of modifying it.
- If you modify this annotation on an existing Service resource, you might end up with leaked LBC resources.
backwards compatibility for nlb-ip type
For backwards compatibility, both the in-tree controller and the LBC support nlb-ip as a value for the service.beta.kubernetes.io/aws-load-balancer-type annotation. The controllers treat it as if you specified both of the following annotations:

```yaml
service.beta.kubernetes.io/aws-load-balancer-type: external
service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
```
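For illustration, a Service using the legacy value looks like the sketch below (reusing the echoserver Service from the other examples); it behaves as if the two annotations above were set:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: echoserver
  annotations:
    # Legacy value kept for backwards compatibility.
    service.beta.kubernetes.io/aws-load-balancer-type: nlb-ip
spec:
  selector:
    app: echoserver
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: LoadBalancer
```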
Example: instance mode
```yaml
apiVersion: v1
kind: Service
metadata:
  name: echoserver
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
spec:
  selector:
    app: echoserver
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: LoadBalancer
```
Example: ip mode
```yaml
apiVersion: v1
kind: Service
metadata:
  name: echoserver
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
spec:
  selector:
    app: echoserver
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: LoadBalancer
```
Protocols¶
The LBC supports both TCP and UDP protocols. The controller also configures TLS termination on your NLB if you configure the Service with a certificate annotation.
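For example, TLS termination is typically configured with the certificate and port annotations shown below (a sketch; the ACM certificate ARN is a placeholder you must replace with your own):

```yaml
metadata:
  annotations:
    # Placeholder ARN; substitute your own ACM certificate.
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:123456789012:certificate/example
    # Ports on which to terminate TLS.
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
```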
In the case of TCP, an NLB with IP targets doesn't pass the client source IP address unless you specifically configure it to using target group attributes. Your application pods might not see the actual client IP address even when the NLB passes it along, for example when you're using instance mode with externalTrafficPolicy set to Cluster.
In such cases, if you need visibility into the client source IP address on your application pods, you can enable NLB proxy protocol v2 by applying the following annotation to your Service:

```yaml
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
```

If you enable proxy protocol v2, NLB health checks with HTTP/HTTPS only work if the health check port supports proxy protocol v2. Due to this behavior, you shouldn't configure proxy protocol v2 with NLB instance mode and externalTrafficPolicy set to Local.
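Putting the pieces together, a Service enabling proxy protocol v2 looks like this sketch (ip mode, so the annotation rather than externalTrafficPolicy carries the client IP):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: echoserver
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    # Enable proxy protocol v2 on all target groups for this Service.
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  selector:
    app: echoserver
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: LoadBalancer
  loadBalancerClass: service.k8s.aws/nlb
```

Note that the application behind targetPort 8080 must parse the proxy protocol v2 header to read the client address.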
Subnet tagging requirements¶
See Subnet Discovery for details on configuring Elastic Load Balancing for public or private placement.
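As a reminder of the discovery convention (see Subnet Discovery for the authoritative details), subnets are typically tagged along these lines:

```yaml
# Public subnets (candidates for internet-facing load balancers):
kubernetes.io/role/elb: "1"
# Private subnets (candidates for internal load balancers):
kubernetes.io/role/internal-elb: "1"
```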
Security group¶
AWS doesn't support attaching security groups to NLBs. To allow inbound traffic from an NLB, the controller automatically adds inbound rules to the worker node security groups, by default.
disable worker node security group rule management
You can disable the worker node security group rule management using an annotation.
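As an assumption based on the controller's annotation reference (verify the exact key for your LBC version), the opt-out looks like the fragment below:

```yaml
metadata:
  annotations:
    # Assumed annotation key; check the LBC annotations reference for your version.
    service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: "false"
```

If you disable rule management, you are responsible for ensuring the worker node security groups allow the client and health check traffic described below.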
Worker node security groups selection¶
The controller automatically selects the worker node security groups that it modifies to allow inbound traffic using the following rules:
- For instance mode, the security group of each backend worker node's primary elastic network interface (ENI) is selected.
- For ip mode, the security group of each backend pod's ENI is selected.
Multiple security groups on an ENI
If there are multiple security groups attached to an ENI, the controller expects only one security group tagged with the following tag:

| Key | Value |
| --- | --- |
| kubernetes.io/cluster/${cluster-name} | owned or shared |

${cluster-name} is the name of the Kubernetes cluster.
Worker node security groups rules¶
When client IP preservation is enabled:

| Rule | Protocol | Port(s) | IpRange(s) |
| --- | --- | --- | --- |
| Client Traffic | spec.ports[*].protocol | spec.ports[*].port | Traffic Source CIDRs |
| Health Check Traffic | TCP | Health Check Ports | NLB Subnet CIDRs |

When client IP preservation is disabled:

| Rule | Protocol | Port(s) | IpRange(s) |
| --- | --- | --- | --- |
| Client Traffic | spec.ports[*].protocol | spec.ports[*].port | NLB Subnet CIDRs |
| Health Check Traffic | TCP | Health Check Ports | NLB Subnet CIDRs |
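The Traffic Source CIDRs typically come from the Service's spec.loadBalancerSourceRanges field, a standard Kubernetes field (defaulting to 0.0.0.0/0 when unset). A sketch restricting client traffic to an illustrative corporate range:

```yaml
spec:
  # Illustrative CIDR; only clients in this range reach the NLB targets.
  loadBalancerSourceRanges:
    - 10.0.0.0/8
```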