# Service annotations
- Annotation keys and values can only be strings. All other types below must be string-encoded, for example:
    - boolean: `"true"`
    - integer: `"42"`
    - stringList: `"s1,s2,s3"`
    - stringMap: `"k1=v1,k2=v2"`
    - json: `"{ \"key\": \"value\" }"`
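A hedged illustration of this rule (the Service name and subnet IDs are placeholders, and the particular annotations are just examples taken from this page) as it would look in a manifest:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: echo                      # placeholder Service name
  annotations:
    # boolean, integer, and stringList values are all written as plain strings
    service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: "false"   # boolean
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "3"             # integer
    service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-xxxx,subnet-yyyy               # stringList
spec:
  type: LoadBalancer
  selector:
    app: echo
  ports:
    - port: 80
      targetPort: 8080
```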
## Annotations
Warning: these annotations are specific to the Kubernetes Service resources reconciled by the AWS Load Balancer Controller. Although the list was initially derived from the k8s in-tree kube-controller-manager, this documentation is not an accurate reference for the services reconciled by the in-tree controller.
## Traffic Routing
Traffic routing can be controlled with the following annotations:
- `service.beta.kubernetes.io/aws-load-balancer-name` specifies a custom name to use for the load balancer. A name longer than 32 characters will be treated as an error.
    - limitations: modifying this annotation after service creation has no effect.
    - Example: `service.beta.kubernetes.io/aws-load-balancer-name: custom-name`
- `service.beta.kubernetes.io/aws-load-balancer-type` specifies the load balancer type. This controller reconciles those service resources with this annotation set to either `nlb-ip` or `external`.
    - Tip: this annotation specifies the controller used to provision LoadBalancers (as specified in legacy-cloud-provider). Refer to lb-scheme to specify whether the LoadBalancer is internet-facing or internal.
    - [Deprecated] For type `nlb-ip`, the controller will provision an NLB with targets registered by IP address. This value is supported for backwards compatibility.
    - For type `external`, the NLB target type depends on the nlb-target-type annotation.
    - limitations: this annotation should not be modified after service creation.
    - Example: `service.beta.kubernetes.io/aws-load-balancer-type: external`
- `service.beta.kubernetes.io/aws-load-balancer-nlb-target-type` specifies the target type to configure for the NLB. You can choose between `instance` and `ip`.
    - `instance` mode routes traffic to all EC2 instances within the cluster on the NodePort opened for your service. The kube-proxy on the individual worker nodes sets up the forwarding of the traffic from the NodePort to the pods behind the service.
        - the service must be of type `NodePort` or `LoadBalancer` for `instance` targets
        - for k8s 1.22 and later, if `spec.allocateLoadBalancerNodePorts` is set to `false`, the `NodePort` must be allocated manually
        - default value: if you configure `spec.loadBalancerClass`, the controller defaults to the `instance` target type
        - NodePort allocation: k8s version 1.22 and later support disabling NodePort allocation by setting the service field `spec.allocateLoadBalancerNodePorts` to `false`. If the NodePort is not allocated for a service port, the controller will fail to reconcile an instance mode NLB.
    - `ip` mode routes traffic directly to the pod IP. In this mode, the AWS NLB sends traffic directly to the Kubernetes pods behind the service, eliminating the need for an extra network hop through the worker nodes in the Kubernetes cluster.
        - `ip` target mode supports pods running on AWS EC2 instances and AWS Fargate
        - the network plugin must use native AWS VPC networking configuration for the pod IP, for example the Amazon VPC CNI plugin
    - Example: `service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance`
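    As an illustrative sketch (the Service name, selector, and ports are placeholders), a Service that hands provisioning to this controller and registers pod IPs directly could look like:

    ```yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app                  # placeholder name
      annotations:
        # hand the Service to the AWS Load Balancer Controller
        service.beta.kubernetes.io/aws-load-balancer-type: external
        # register pod IPs directly as NLB targets (requires AWS VPC CNI or equivalent)
        service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    spec:
      type: LoadBalancer
      selector:
        app: my-app
      ports:
        - port: 80
          targetPort: 8080
          protocol: TCP
    ```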
- `service.beta.kubernetes.io/aws-load-balancer-subnets` specifies the Availability Zones the NLB will route traffic to. See Network Load Balancers for more details.
    - Tip: subnets are auto-discovered if this annotation is not specified; see Subnet Discovery for further details.
    - You must specify at least one subnet in any of the AZs; either the subnet ID or the subnet name (the Name tag on the subnet) can be used.
    - limitations:
        - each subnet must be from a different Availability Zone
        - AWS has restrictions on disabling existing subnets for an NLB. As a result, you might not be able to edit this annotation once the NLB gets provisioned.
    - Example: `service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-xxxx, mySubnet`
- `service.beta.kubernetes.io/aws-load-balancer-alpn-policy` allows you to configure the ALPN policies on the load balancer.
    - supported policies:
        - `HTTP1Only`: negotiate only HTTP/1.*. The ALPN preference list is http/1.1, http/1.0.
        - `HTTP2Only`: negotiate only HTTP/2. The ALPN preference list is h2.
        - `HTTP2Optional`: prefer HTTP/1.* over HTTP/2 (which can be useful for HTTP/2 testing). The ALPN preference list is http/1.1, http/1.0, h2.
        - `HTTP2Preferred`: prefer HTTP/2 over HTTP/1.*. The ALPN preference list is h2, http/1.1, http/1.0.
        - `None`: do not negotiate ALPN. This is the default.
    - Example: `service.beta.kubernetes.io/aws-load-balancer-alpn-policy: HTTP2Preferred`
- `service.beta.kubernetes.io/aws-load-balancer-target-node-labels` specifies which nodes to include in the target group registration for the `instance` target type.
    - Example: `service.beta.kubernetes.io/aws-load-balancer-target-node-labels: label1=value1, label2=value2`
- `service.beta.kubernetes.io/aws-load-balancer-eip-allocations` specifies a list of Elastic IP address allocations for an internet-facing NLB.
    - Note:
        - this configuration is optional, and you can use it to assign static IP addresses to your NLB
        - you must specify the same number of EIP allocations as subnets in the load balancer subnets annotation
        - the NLB must be internet-facing
    - Example: `service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-xyz, eipalloc-zzz`
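    For instance, a hedged fragment (subnet IDs and allocation IDs are placeholders) pairing the subnets and EIP allocations annotations, one allocation per subnet and in the same order:

    ```yaml
    metadata:
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
        # placeholders: one EIP allocation per subnet, listed in the same order
        service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-xxxx, subnet-yyyy
        service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-xyz, eipalloc-zzz
    ```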
- `service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses` specifies a list of private IPv4 addresses for an internal NLB.
    - Note:
        - the NLB must be internal
        - this configuration is optional, and you can use it to assign static IPv4 addresses to your NLB
        - you must specify the same number of private IPv4 addresses as subnets in the load balancer subnets annotation
        - you must specify the IPv4 addresses from the load balancer subnet IPv4 ranges
    - Example: `service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses: 192.168.10.15, 192.168.32.16`
- `service.beta.kubernetes.io/aws-load-balancer-ipv6-addresses` specifies a list of IPv6 addresses for a dualstack NLB.
    - Note:
        - the NLB must be dualstack
        - this configuration is optional, and you can use it to assign static IPv6 addresses to your NLB
        - you must specify the same number of IPv6 addresses as subnets in the load balancer subnets annotation
        - you must specify the IPv6 addresses from the load balancer subnet IPv6 ranges
    - Example: `service.beta.kubernetes.io/aws-load-balancer-ipv6-addresses: 2600:1f13:837:8501::1, 2600:1f13:837:8504::1`
## Traffic Listening
Traffic listening can be controlled with the following annotations:

- `service.beta.kubernetes.io/aws-load-balancer-ip-address-type` specifies the IP address type of the NLB (`ipv4` or `dualstack`).
    - Example: `service.beta.kubernetes.io/aws-load-balancer-ip-address-type: ipv4`
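Tying listening and addressing together, a hedged sketch of a dualstack, internet-facing NLB with static IPv6 addresses (the Service name, subnet IDs, selector, and ports are placeholders; the IPv6 values reuse the documented examples):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-dualstack-app          # placeholder name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-ip-address-type: dualstack
    # one subnet and one IPv6 address per AZ; values below are placeholders
    service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-xxxx, subnet-yyyy
    service.beta.kubernetes.io/aws-load-balancer-ipv6-addresses: 2600:1f13:837:8501::1, 2600:1f13:837:8504::1
spec:
  type: LoadBalancer
  selector:
    app: my-dualstack-app
  ports:
    - port: 443
      targetPort: 8443
```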
## Resource attributes
NLB resource attributes can be controlled via the following annotations:

- `service.beta.kubernetes.io/aws-load-balancer-proxy-protocol` specifies whether to enable proxy protocol v2 on the target group. Set to `*` to enable proxy protocol v2. This annotation takes precedence over the annotation `service.beta.kubernetes.io/aws-load-balancer-target-group-attributes` for proxy protocol v2 configuration.
    - The only valid value for this annotation is `*`.
- `service.beta.kubernetes.io/aws-load-balancer-target-group-attributes` specifies the Target Group Attributes to be configured.
    - Examples:
        - set the deregistration delay to 120 seconds (available range is 0-3600 seconds): `service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: deregistration_delay.timeout_seconds=120`
        - enable source IP affinity: `service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: stickiness.enabled=true,stickiness.type=source_ip`
        - enable proxy protocol version 2: `service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: proxy_protocol_v2.enabled=true`
        - enable connection termination on deregistration: `service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: deregistration_delay.connection_termination.enabled=true`
        - enable client IP preservation: `service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=true`
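    Multiple target group attributes can be combined in a single comma-separated value; a minimal fragment (this particular combination is illustrative only) as it would appear under a Service's metadata:

    ```yaml
    metadata:
      annotations:
        # illustrative combination: tune deregistration delay, keep client IPs, enable proxy protocol v2
        service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: deregistration_delay.timeout_seconds=120,preserve_client_ip.enabled=true,proxy_protocol_v2.enabled=true
    ```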
- `service.beta.kubernetes.io/aws-load-balancer-attributes` specifies the Load Balancer Attributes that should be applied to the NLB.
    - Only attributes defined in the annotation will be updated. To unset an AWS default (e.g. disabling access logs after having enabled them once), the value needs to be explicitly set back to the original value (`access_logs.s3.enabled=false`); omitting it is not sufficient.
    - Custom attributes set in this annotation's config map will be overridden by annotation-specific attributes. For backwards compatibility, existing annotations for the individual load balancer attributes take precedence in case of ties.
    - If `deletion_protection.enabled=true` is in the annotation, the controller will not be able to delete the NLB during reconciliation. Once the attribute is edited to `deletion_protection.enabled=false` during reconciliation, the deployer will force delete the resource.
    - Please note: if deletion protection is not enabled via the annotation (e.g. it was enabled via the AWS console), the controller still deletes the underlying resource.
    - Examples:
        - enable access logs to S3: `service.beta.kubernetes.io/aws-load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=my-access-log-bucket,access_logs.s3.prefix=my-app`
        - enable NLB deletion protection: `service.beta.kubernetes.io/aws-load-balancer-attributes: deletion_protection.enabled=true`
        - enable cross zone load balancing: `service.beta.kubernetes.io/aws-load-balancer-attributes: load_balancing.cross_zone.enabled=true`
- the following annotations are deprecated since the v2.3.0 release in favor of `service.beta.kubernetes.io/aws-load-balancer-attributes`:
    - `service.beta.kubernetes.io/aws-load-balancer-access-log-enabled`
    - `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name`
    - `service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix`
    - `service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled`
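    For reference, a hedged sketch of how those deprecated annotations map onto the consolidated attributes annotation (the bucket name and prefix are placeholders; the old-style values shown in comments are an assumption about typical usage, not lifted from this page):

    ```yaml
    metadata:
      annotations:
        # old style (deprecated since v2.3.0):
        #   service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true"
        #   service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: my-access-log-bucket
        #   service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: my-app
        #   service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
        # consolidated equivalent:
        service.beta.kubernetes.io/aws-load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=my-access-log-bucket,access_logs.s3.prefix=my-app,load_balancing.cross_zone.enabled=true
    ```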
## AWS Resource Tags
The AWS Load Balancer Controller automatically applies the following tags to the AWS resources it creates (NLB/TargetGroups/Listener/ListenerRule):

- `elbv2.k8s.aws/cluster: ${clusterName}`
- `service.k8s.aws/stack: ${stackID}`
- `service.k8s.aws/resource: ${resourceID}`

In addition, you can use annotations to specify additional tags:

- `service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags` specifies additional tags to apply to the AWS resources.
    - you cannot override the default controller tags mentioned above or the tags specified in the `--default-tags` controller flag
    - if any of the tags conflicts with the ones configured via the `--external-managed-tags` controller flag, the controller fails to reconcile the service
    - Example: `service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: Environment=dev,Team=test`
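A hedged manifest sketch (the Service name and the tag keys/values are placeholders); the three controller-managed tags listed above are added automatically and do not appear in the manifest:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                    # placeholder name
  annotations:
    # placeholder tags; keys must not collide with --default-tags or --external-managed-tags
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: Environment=dev,Team=test,CostCenter=1234
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```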
## Health Check
Health checks on target groups can be configured with the following annotations:

- `service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol` specifies the target group health check protocol.
    - you can specify `tcp`, `http` or `https`; `tcp` is the default
    - `tcp` is the default health check protocol if the service `spec.externalTrafficPolicy` is `Cluster`, `http` if it is `Local`
    - if the service `spec.externalTrafficPolicy` is `Local`, do not use `tcp` for the health check
    - Example: `service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: http`
- `service.beta.kubernetes.io/aws-load-balancer-healthcheck-port` specifies the TCP port to use for the target group health check.
    - default value: if you do not specify the health check port, the default value is `spec.healthCheckNodePort` when `externalTrafficPolicy=Local`, or `traffic-port` otherwise.
    - Examples:
        - set the health check port to `traffic-port`: `service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: traffic-port`
        - set the health check port to port `80`: `service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "80"`
- `service.beta.kubernetes.io/aws-load-balancer-healthcheck-path` specifies the HTTP path for the health check in case of the http/https protocol.
    - Example: `service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /healthz`
- `service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold` specifies the number of consecutive health check successes required before a target is considered healthy.
    - Example: `service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "3"`
- `service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold` specifies the number of consecutive health check failures before a target is marked unhealthy.
    - Example: `service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3"`
- `service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval` specifies the interval, in seconds, between consecutive health checks.
    - Example: `service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "10"`
- `service.beta.kubernetes.io/aws-load-balancer-healthcheck-success-codes` specifies the HTTP success codes for the health check in case of the http/https protocol.
    - Example: `service.beta.kubernetes.io/aws-load-balancer-healthcheck-success-codes: "200-399"`
- `service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout` specifies the target group health check timeout. The target has to respond within the timeout for a successful health check.
    - Note: the controller currently ignores the timeout configuration due to limitations of the AWS NLB. The default timeout for TCP is 10s and for HTTP it is 6s.
    - Example: `service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "10"`
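Putting several of the health check annotations together, a hedged sketch (the Service name, selector, ports, and the health check values are chosen for illustration only):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                    # placeholder name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    # HTTP health check against /healthz on the traffic port
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: traffic-port
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /healthz
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-success-codes: "200-399"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "3"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "10"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```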
## TLS
You can configure TLS support via the following annotations:

- `service.beta.kubernetes.io/aws-load-balancer-ssl-cert` specifies the ARN of one or more certificates managed by AWS Certificate Manager. The first certificate in the list is the default certificate; the remaining certificates are for the optional certificate list. See Server Certificates for further details.
    - Example: `service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:xxxxx:certificate/xxxxxxx`
- `service.beta.kubernetes.io/aws-load-balancer-ssl-ports` specifies the frontend ports with TLS listeners.
    - you must configure at least one certificate for TLS listeners
    - you can specify a list of port names or port values; `*` does not match any ports
    - if you don't specify this annotation, the controller creates a TLS listener for all the service ports
    - specify this annotation if you need both TLS and non-TLS listeners on the same load balancer
    - Example: `service.beta.kubernetes.io/aws-load-balancer-ssl-ports: 443, custom-port`
- `service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy` specifies the security policy for NLB frontend connections, allowing you to control the protocol and ciphers.
    - Example: `service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-TLS13-1-2-2021-06`
- `service.beta.kubernetes.io/aws-load-balancer-backend-protocol` specifies whether to use TLS for the backend traffic between the load balancer and the Kubernetes pods.
    - if you specify `ssl` as the backend protocol, the NLB uses TLS connections for the traffic to your Kubernetes pods in case of TLS listeners
    - you can specify `ssl` or `tcp` (default)
    - Example: `service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl`
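A hedged end-to-end sketch of TLS termination at the NLB (the Service name, selector, ports, and certificate ARN are placeholders; only the port named "https" gets a TLS listener):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-tls-app                # placeholder name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    # placeholder ARN; the first certificate in the list is the default certificate
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:xxxxx:certificate/xxxxxxx
    # TLS only on the "https" port; the "http" port stays a plain TCP listener
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
    service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-TLS13-1-2-2021-06
    service.beta.kubernetes.io/aws-load-balancer-alpn-policy: HTTP2Preferred
spec:
  type: LoadBalancer
  selector:
    app: my-tls-app
  ports:
    - name: http
      port: 80
      targetPort: 8080
    - name: https
      port: 443
      targetPort: 8080
```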
## Access control
Load balancer access can be controlled via the following annotations:

- `service.beta.kubernetes.io/load-balancer-source-ranges` specifies the CIDRs that are allowed to access the NLB.
    - Tip: we recommend specifying CIDRs in the service `spec.loadBalancerSourceRanges` instead
    - Default:
        - `0.0.0.0/0` will be used if the IPAddressType is "ipv4"
        - `0.0.0.0/0` and `::/0` will be used if the IPAddressType is "dualstack"
        - the VPC CIDR will be used if `service.beta.kubernetes.io/aws-load-balancer-scheme` is `internal`
    - This annotation will be ignored if preserve client IP is not enabled.
        - preserve client IP is disabled by default for `ip` targets
        - preserve client IP is enabled by default for `instance` targets
    - Preserve client IP has no effect on traffic converted from IPv4 to IPv6 or from IPv6 to IPv4; the source IP of this type of traffic is always the private IP address of the Network Load Balancer. This could cause clients that have their traffic converted to bypass the specified CIDRs that are allowed to access the NLB.
    - Example: `service.beta.kubernetes.io/load-balancer-source-ranges: 10.0.0.0/24`
- `service.beta.kubernetes.io/aws-load-balancer-scheme` specifies whether the NLB will be internet-facing or internal. Valid values are `internal` and `internet-facing`. If not specified, the default is `internal`.
    - Example: `service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"`
- `service.beta.kubernetes.io/aws-load-balancer-internal` specifies whether the NLB will be internet-facing or internal.
    - deprecation note: this annotation is deprecated starting with the v2.2.0 release in favor of the new aws-load-balancer-scheme annotation. It will still be supported, but in case of ties the aws-load-balancer-scheme annotation takes precedence.
    - Example: `service.beta.kubernetes.io/aws-load-balancer-internal: "true"`
- `service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules` specifies whether the controller should automatically add ingress rules to the instance/ENI security group.
    - If you disable the automatic management of security group rules for an NLB, you will need to manually add appropriate ingress rules to your EC2 instance or ENI security groups to allow access to the traffic and health check ports.
    - Example: `service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: "false"`
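Pulling the access control annotations together, a hedged sketch of an internet-facing NLB restricted to a single CIDR (the Service name, selector, ports, and CIDR are placeholders; remember that source ranges only take effect when client IP preservation is enabled):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-restricted-app         # placeholder name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    # client IP preservation is on by default for instance targets
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  type: LoadBalancer
  # preferred over the load-balancer-source-ranges annotation
  loadBalancerSourceRanges:
    - 203.0.113.0/24              # placeholder CIDR
  selector:
    app: my-restricted-app
  ports:
    - port: 443
      targetPort: 8443
```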
## Legacy Cloud Provider
The AWS Load Balancer Controller manages Kubernetes Services in a way that is compatible with the legacy AWS cloud provider. The annotation `service.beta.kubernetes.io/aws-load-balancer-type` is used to determine which controller reconciles the service. If the annotation value is `nlb-ip` or `external`, the legacy cloud provider ignores the service resource (provided it has the correct patch) so that the AWS Load Balancer Controller can take over. For all other values of the annotation, the legacy cloud provider handles the service. Note that this annotation should be specified during service creation and not edited later.
The legacy cloud provider patch was added in Kubernetes v1.20 and is backported to Kubernetes v1.18.18+, v1.19.10+.