Annotations
Service annotations¶
Annotation keys and values can only be strings. All other types below must be string-encoded, for example:
- boolean: "true"
- integer: "42"
- stringList: "s1,s2,s3"
- stringMap: "k1=v1,k2=v2"
- json: "{ \"key\": \"value\" }"
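As a minimal sketch of how these string encodings look on a Service manifest (the annotation names are taken from this page; the Service name, selector, ports, and values are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: echo
  annotations:
    # boolean and integer values must be quoted strings
    service.beta.kubernetes.io/aws-load-balancer-multi-cluster-target-group: "true"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "10"
    # stringList
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443,custom-port"
    # stringMap
    service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: stickiness.enabled=true,stickiness.type=source_ip
spec:
  type: LoadBalancer
  selector:
    app: echo
  ports:
    - port: 443
      targetPort: 8443
      protocol: TCP
```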
Annotations¶
Warning
These annotations are specific to the kubernetes service resources reconciled by the AWS Load Balancer Controller. Although the list was initially derived from the k8s in-tree kube-controller-manager, this documentation is not an accurate reference for the services reconciled by the in-tree controller.
Traffic Routing¶
Traffic Routing can be controlled with the following annotations:
- service.beta.kubernetes.io/aws-load-balancer-name specifies a custom name to use for the load balancer. A name longer than 32 characters will be treated as an error.
limitations
- If you modify this annotation after service creation, there is no effect.
Example
service.beta.kubernetes.io/aws-load-balancer-name: custom-name
- service.beta.kubernetes.io/aws-load-balancer-type specifies the load balancer type. This controller reconciles those service resources with this annotation set to either nlb-ip or external.
Tip
This annotation specifies the controller used to provision LoadBalancers (as specified in legacy-cloud-provider). Refer to lb-scheme to specify whether the LoadBalancer is internet-facing or internal.
- [Deprecated] For type nlb-ip, the controller will provision an NLB with targets registered by IP address. This value is supported for backwards compatibility.
- For type external, the NLB target type depends on the nlb-target-type annotation.
limitations
- This annotation should not be modified after service creation.
Example
service.beta.kubernetes.io/aws-load-balancer-type: external
- service.beta.kubernetes.io/aws-load-balancer-nlb-target-type specifies the target type to configure for NLB. You can choose between instance and ip. See the manifest sketch after this list for a combined example.
- instance mode will route traffic to all EC2 instances within the cluster on the NodePort opened for your service. The kube-proxy on the individual worker nodes sets up the forwarding of the traffic from the NodePort to the pods behind the service.
- service must be of type NodePort or LoadBalancer for instance targets
- for k8s 1.22 and later, if spec.allocateLoadBalancerNodePorts is set to false, the NodePort must be allocated manually
default value
If you configure spec.loadBalancerClass, the controller defaults to the instance target type.
NodePort allocation
k8s version 1.22 and later support disabling NodePort allocation by setting the service field spec.allocateLoadBalancerNodePorts to false. If the NodePort is not allocated for a service port, the controller will fail to reconcile an instance mode NLB.
- ip mode will route traffic directly to the pod IP. In this mode, AWS NLB sends traffic directly to the Kubernetes pods behind the service, eliminating the need for an extra network hop through the worker nodes in the Kubernetes cluster.
- ip target mode supports pods running on AWS EC2 instances and AWS Fargate
- the network plugin must use native AWS VPC networking configuration for pod IP, for example the Amazon VPC CNI plugin.
Example
service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
- service.beta.kubernetes.io/aws-load-balancer-subnets specifies the Availability Zones the NLB will route traffic to. See Network Load Balancers for more details.
Tip
Subnets are auto-discovered if this annotation is not specified; see Subnet Discovery for further details.
You must specify at least one subnet in any of the AZs; both subnetID and subnetName (the Name tag on subnets) can be used.
limitations
- Each subnet must be from a different Availability Zone
- AWS has restrictions on disabling existing subnets for NLB. As a result, you might not be able to edit this annotation once the NLB gets provisioned.
Example
service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-xxxx, mySubnet
- service.beta.kubernetes.io/aws-load-balancer-alpn-policy allows you to configure the ALPN policies on the load balancer.
supported policies
- HTTP1Only: Negotiate only HTTP/1.*. The ALPN preference list is http/1.1, http/1.0.
- HTTP2Only: Negotiate only HTTP/2. The ALPN preference list is h2.
- HTTP2Optional: Prefer HTTP/1.* over HTTP/2 (which can be useful for HTTP/2 testing). The ALPN preference list is http/1.1, http/1.0, h2.
- HTTP2Preferred: Prefer HTTP/2 over HTTP/1.*. The ALPN preference list is h2, http/1.1, http/1.0.
- None: Do not negotiate ALPN. This is the default.
Example
service.beta.kubernetes.io/aws-load-balancer-alpn-policy: HTTP2Preferred
- service.beta.kubernetes.io/aws-load-balancer-target-node-labels specifies which nodes to include in the target group registration for instance target type.
Example
service.beta.kubernetes.io/aws-load-balancer-target-node-labels: label1=value1, label2=value2
- service.beta.kubernetes.io/aws-load-balancer-eip-allocations specifies a list of elastic IP address configurations for an internet-facing NLB.
Note
- This configuration is optional, and you can use it to assign static IP addresses to your NLB
- You must specify the same number of eip allocations as in the load balancer subnets annotation
- NLB must be internet-facing
Example
service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-xyz, eipalloc-zzz
- service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses specifies a list of private IPv4 addresses for an internal NLB.
Note
- NLB must be internal
- This configuration is optional, and you can use it to assign static IPv4 addresses to your NLB
- You must specify the same number of private IPv4 addresses as in the load balancer subnets annotation
- You must specify the IPv4 addresses from the load balancer subnet IPv4 ranges
Example
service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses: 192.168.10.15, 192.168.32.16
- service.beta.kubernetes.io/aws-load-balancer-ipv6-addresses specifies a list of IPv6 addresses for a dualstack NLB.
Note
- NLB must be dualstack
- This configuration is optional, and you can use it to assign static IPv6 addresses to your NLB
- You must specify the same number of IPv6 addresses as in the load balancer subnets annotation
- You must specify the IPv6 addresses from the load balancer subnet IPv6 ranges
Example
service.beta.kubernetes.io/aws-load-balancer-ipv6-addresses: 2600:1f13:837:8501::1, 2600:1f13:837:8504::1
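A sketch of how these traffic-routing annotations combine on a single Service; the load balancer name, subnet IDs, selector, and ports are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # provision an NLB via the AWS Load Balancer Controller
    service.beta.kubernetes.io/aws-load-balancer-type: external
    # register pod IPs directly as targets (requires an AWS VPC CNI style network plugin)
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    # optional: custom load balancer name and explicit subnets
    service.beta.kubernetes.io/aws-load-balancer-name: my-app-nlb
    service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-aaaa, subnet-bbbb
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
```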
Traffic Listening¶
Traffic Listening can be controlled with the following annotations:
- service.beta.kubernetes.io/aws-load-balancer-ip-address-type specifies the IP address type of the NLB.
Example
service.beta.kubernetes.io/aws-load-balancer-ip-address-type: ipv4
Support UDP-based services over IPv6¶
You can configure a dualstack NLB to support UDP-based services over IPv6 via the following annotations:
- service.beta.kubernetes.io/aws-load-balancer-enable-prefix-for-ipv6-source-nat specifies whether Prefix for IPv6 source NAT is enabled or not. UDP-based support can be enabled for dualstack NLBs only if Prefix for IPv6 source NAT is enabled.
Note
- Applicable to Network Load Balancers using dualstack IP address type.
- This configuration is optional, and you can use it to enable UDP support over IPv6.
- Allowed values are either "on" or "off"
- Once the prefix for source NATing is enabled, it cannot be disabled while the load balancer has a UDP listener attached.
- Steps to disable aws-load-balancer-enable-prefix-for-ipv6-source-nat after it is enabled and UDP listeners are already attached:
- First remove the UDP listeners and apply the manifest.
- Then update the manifest to set source NATing to "off" and apply the manifest again.
Example
- Enable prefix for IPv6 Source NAT
service.beta.kubernetes.io/aws-load-balancer-enable-prefix-for-ipv6-source-nat: "on"
- service.beta.kubernetes.io/aws-load-balancer-source-nat-ipv6-prefixes specifies a list of IPv6 prefixes that should be used for IPv6 source NATing.
Note
- Applicable to Network Load Balancers using dualstack IP address type.
- This annotation can be specified only if the service.beta.kubernetes.io/aws-load-balancer-enable-prefix-for-ipv6-source-nat annotation is set to "on".
- This configuration is optional. You can use it to specify custom IPv6 prefixes for IPv6 source NATing to support UDP-based service routing on Network Load Balancers using dualstack IP address type.
- If the service.beta.kubernetes.io/aws-load-balancer-enable-prefix-for-ipv6-source-nat annotation is set to "on" and you don't specify this annotation, an IPv6 prefix/CIDR for source NATing will be auto-assigned to each subnet.
- If you specify this annotation, you must specify the same number of items, in the same order, as in the load balancer subnets annotation. Each item in the list can have the value "auto_assigned" or a valid IPv6 prefix/CIDR with a prefix length of 80, and it should be in the range of the corresponding subnet CIDR.
- Once the source NAT IPv6 prefixes are set, they cannot be updated while the load balancer has a UDP listener attached.
Example
service.beta.kubernetes.io/aws-load-balancer-source-nat-ipv6-prefixes: 1025:0223:0009:6487:0001::/80, auto_assigned, 1025:0223:0010:6487:0001::/80
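A sketch of a dualstack NLB Service exposing a UDP port over IPv6; it assumes two subnets (one source NAT entry per subnet, in subnet order), and the Service name, selector, and ports are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: udp-app
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-ip-address-type: dualstack
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    # required before UDP listeners can be created on a dualstack NLB
    service.beta.kubernetes.io/aws-load-balancer-enable-prefix-for-ipv6-source-nat: "on"
    # optional: one prefix (or auto_assigned) per subnet, in the same order as the subnets
    service.beta.kubernetes.io/aws-load-balancer-source-nat-ipv6-prefixes: auto_assigned, auto_assigned
spec:
  type: LoadBalancer
  selector:
    app: udp-app
  ports:
    - name: dns
      port: 53
      targetPort: 5353
      protocol: UDP
```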
Resource attributes¶
NLB resource attributes can be controlled via the following annotations:
- service.beta.kubernetes.io/aws-load-balancer-proxy-protocol specifies whether to enable proxy protocol v2 on the target group. Set to '*' to enable proxy protocol v2. This annotation takes precedence over the annotation service.beta.kubernetes.io/aws-load-balancer-target-group-attributes for proxy protocol v2 configuration. The only valid value for this annotation is *.
- service.beta.kubernetes.io/aws-load-balancer-target-group-attributes specifies the Target Group Attributes to be configured.
Example
- set the deregistration delay to 120 seconds (available range is 0-3600 seconds)
service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: deregistration_delay.timeout_seconds=120
- enable source IP affinity
service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: stickiness.enabled=true,stickiness.type=source_ip
- enable proxy protocol version 2
service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: proxy_protocol_v2.enabled=true
- enable connection termination on deregistration
service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: deregistration_delay.connection_termination.enabled=true
- enable client IP preservation
service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=true
- disable immediate connection termination for unhealthy targets and configure a 30s draining interval (available range is 0-360000 seconds)
service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: target_health_state.unhealthy.connection_termination.enabled=false,target_health_state.unhealthy.draining_interval_seconds=30
- service.beta.kubernetes.io/aws-load-balancer-attributes specifies Load Balancer Attributes that should be applied to the NLB.
Only attributes defined in the annotation will be updated. To unset any AWS defaults (e.g. disabling access logs after having them enabled once), the values need to be explicitly set to the original values (access_logs.s3.enabled=false); omitting them is not sufficient. Custom attributes set in this annotation's config map will be overridden by annotation-specific attributes. For backwards compatibility, existing annotations for the individual load balancer attributes get precedence in case of ties.
- If deletion_protection.enabled=true is in the annotation, the controller will not be able to delete the NLB during reconciliation. Once the attribute gets edited to deletion_protection.enabled=false during reconciliation, the deployer will force delete the resource.
- Please note, if deletion protection is not enabled via the annotation but elsewhere (e.g. via the AWS console), the controller still deletes the underlying resource.
Example
- enable access log to s3
service.beta.kubernetes.io/aws-load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=my-access-log-bucket,access_logs.s3.prefix=my-app
- enable NLB deletion protection
service.beta.kubernetes.io/aws-load-balancer-attributes: deletion_protection.enabled=true
- enable cross zone load balancing
service.beta.kubernetes.io/aws-load-balancer-attributes: load_balancing.cross_zone.enabled=true
- enable client availability zone affinity
service.beta.kubernetes.io/aws-load-balancer-attributes: dns_record.client_routing_policy=availability_zone_affinity
- service.beta.kubernetes.io/aws-load-balancer-listener-attributes.${Protocol}-${Port} specifies listener attributes that should be applied to the listener.
Only attributes defined in the annotation will be updated. To reset any AWS defaults, the values need to be explicitly set to the original values; omitting them is not sufficient.
Example
- configure the TCP idle timeout value
service.beta.kubernetes.io/aws-load-balancer-listener-attributes.TCP-80: tcp.idle_timeout.seconds=400
- the following annotations are deprecated in the v2.3.0 release in favor of service.beta.kubernetes.io/aws-load-balancer-attributes:
service.beta.kubernetes.io/aws-load-balancer-access-log-enabled
service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name
service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled
- service.beta.kubernetes.io/aws-load-balancer-multi-cluster-target-group allows you to share the created Target Group ARN with other Load Balancer Controller managed clusters.
This feature does not offer any Deletion Protection. Deleting the service will still delete the Target Group. If you need to support Target Groups shared with multiple clusters, it's recommended to use an out-of-band Target Group that is not managed by a Load Balancer Controller.
- It is not recommended to change this value frequently, if ever. The recommended way to set this value is on creation of the service.
Example
service.beta.kubernetes.io/aws-load-balancer-multi-cluster-target-group: "true"
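For instance, a Service combining target group, load balancer, and listener attributes might look like the following sketch; the bucket name, selector, and ports are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    # target group attributes: source IP stickiness and client IP preservation
    service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: stickiness.enabled=true,stickiness.type=source_ip,preserve_client_ip.enabled=true
    # load balancer attributes: cross-zone load balancing and access logs
    service.beta.kubernetes.io/aws-load-balancer-attributes: load_balancing.cross_zone.enabled=true,access_logs.s3.enabled=true,access_logs.s3.bucket=my-access-log-bucket
    # listener attributes for the TCP listener on port 80
    service.beta.kubernetes.io/aws-load-balancer-listener-attributes.TCP-80: tcp.idle_timeout.seconds=400
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
```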
AWS Resource Tags¶
The AWS Load Balancer Controller automatically applies the following tags to the AWS resources it creates (NLB/TargetGroups/Listener/ListenerRule):
elbv2.k8s.aws/cluster: ${clusterName}
service.k8s.aws/stack: ${stackID}
service.k8s.aws/resource: ${resourceID}
In addition, you can use annotations to specify additional tags:
- service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags specifies additional tags to apply to the AWS resources.
- you cannot override the default controller tags mentioned above or the tags specified in the --default-tags controller flag
- if any of the tags conflicts with the ones configured via the --external-managed-tags controller flag, the controller fails to reconcile the service
Example
service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: Environment=dev,Team=test
Health Check¶
Health check on target groups can be configured with the following annotations:
- service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol specifies the target group health check protocol.
- you can specify tcp, http, or https; tcp is the default
- tcp is the default health check protocol if the service spec.externalTrafficPolicy is Cluster, http if it is Local
- if the service spec.externalTrafficPolicy is Local, do not use tcp for the health check
- supports only a single protocol per service
Example
service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: http
- service.beta.kubernetes.io/aws-load-balancer-healthcheck-port specifies the TCP port to use for target group health check.
default value
- if you do not specify the health check port, the default value will be spec.healthCheckNodePort when externalTrafficPolicy=local, or traffic-port otherwise.
Example
- set the health check port to traffic-port
service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: traffic-port
- set the health check port to port 80
service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "80"
- service.beta.kubernetes.io/aws-load-balancer-healthcheck-path specifies the http path for the health check in case of http/https protocol.
Example
service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /healthz
- service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold specifies the consecutive health check successes required before a target is considered healthy.
Example
service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "3"
- service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold specifies the consecutive health check failures before a target gets marked unhealthy.
Example
service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3"
- service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval specifies the interval, in seconds, between consecutive health checks.
Example
service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "10"
- service.beta.kubernetes.io/aws-load-balancer-healthcheck-success-codes specifies the http success codes for the health check in case of http/https protocol.
Example
service.beta.kubernetes.io/aws-load-balancer-healthcheck-success-codes: "200-399"
- service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout specifies the target group health check timeout, in seconds. The target has to respond within the timeout for a successful health check.
Note
The controller currently ignores the timeout configuration due to limitations on the AWS NLB. The default timeout for TCP is 10s and for HTTP is 6s.
Example
service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "10"
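Putting several of these together, an HTTP health check configuration might look like this sketch; the path, thresholds, selector, and ports are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    # HTTP health check against /healthz on the traffic port
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /healthz
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: traffic-port
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "3"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "10"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-success-codes: "200-399"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
```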
TLS¶
You can configure TLS support via the following annotations:
- service.beta.kubernetes.io/aws-load-balancer-ssl-cert specifies the ARN of one or more certificates managed by the AWS Certificate Manager.
The first certificate in the list is the default certificate, and the remaining certificates are for the optional certificate list. See Server Certificates for further details.
Example
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:xxxxx:certificate/xxxxxxx
- service.beta.kubernetes.io/aws-load-balancer-ssl-ports specifies the frontend ports with TLS listeners.
- You must configure at least one certificate for TLS listeners
- You can specify a list of port names or port values; * does not match any ports
- If you don't specify this annotation, the controller creates a TLS listener for all the service ports
- Specify this annotation if you need both TLS and non-TLS listeners on the same load balancer
Example
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: 443, custom-port
- service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy specifies the Security Policy for NLB frontend connections, allowing you to control the protocol and ciphers.
Example
service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-TLS13-1-2-2021-06
- service.beta.kubernetes.io/aws-load-balancer-backend-protocol specifies whether to use TLS for the backend traffic between the load balancer and the kubernetes pods.
- If you specify ssl as the backend protocol, NLB uses TLS connections for the traffic to your kubernetes pods in case of TLS listeners
- You can specify ssl or tcp (default)
Example
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl
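A sketch of a Service that terminates TLS on port 443 at the NLB while keeping a plain TCP listener on port 80; the certificate ARN, selector, and ports are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    # ACM certificate used for the TLS listener (placeholder ARN)
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:111122223333:certificate/example
    # only create a TLS listener for port 443; port 80 stays plain TCP
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-TLS13-1-2-2021-06
    # keep plain TCP between the NLB and the pods
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - name: http
      port: 80
      targetPort: 8080
      protocol: TCP
    - name: https
      port: 443
      targetPort: 8080
      protocol: TCP
```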
Access control¶
Load balancer access can be controlled via the following annotations:
- service.beta.kubernetes.io/load-balancer-source-ranges specifies the CIDRs that are allowed to access the NLB.
Tip
we recommend specifying CIDRs in the service spec.loadBalancerSourceRanges instead
Default
- 0.0.0.0/0 will be used if the IPAddressType is "ipv4"
- 0.0.0.0/0 and ::/0 will be used if the IPAddressType is "dualstack"
- The VPC CIDR will be used if service.beta.kubernetes.io/aws-load-balancer-scheme is internal
This annotation will be ignored if preserve client IP is not enabled.
- preserve client IP is disabled by default for ip targets
- preserve client IP is enabled by default for instance targets
Preserve client IP has no effect on traffic converted from IPv4 to IPv6 and on traffic converted from IPv6 to IPv4. The source IP of this type of traffic is always the private IP address of the Network Load Balancer.
- This could cause clients that have their traffic converted to bypass the specified CIDRs that are allowed to access the NLB.
This annotation will be ignored if service.beta.kubernetes.io/aws-load-balancer-security-groups is specified.
Example
service.beta.kubernetes.io/load-balancer-source-ranges: 10.0.0.0/24
- service.beta.kubernetes.io/aws-load-balancer-security-group-prefix-lists specifies the managed prefix lists that are allowed to access the NLB.
This annotation will be ignored if service.beta.kubernetes.io/aws-load-balancer-security-groups is specified.
If you'd like to use this annotation, make sure your security group rule quota is enough. If you'd like to know how the managed prefix list affects your quota, see the reference in the AWS documentation for more details.
If you only use this annotation without load-balancer-source-ranges, the controller managed security group would ignore the load-balancer-source-ranges default settings.
Example
service.beta.kubernetes.io/aws-load-balancer-security-group-prefix-lists: pl-00000000, pl-1111111
- service.beta.kubernetes.io/aws-load-balancer-scheme specifies whether the NLB will be internet-facing or internal. Valid values are internal and internet-facing. If not specified, the default is internal.
Example
service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
- service.beta.kubernetes.io/aws-load-balancer-internal specifies whether the NLB will be internet-facing or internal.
deprecation note
This annotation is deprecated starting with the v2.2.0 release in favor of the new aws-load-balancer-scheme annotation. It will be supported, but in case of ties, aws-load-balancer-scheme gets precedence.
Example
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
- service.beta.kubernetes.io/aws-load-balancer-security-groups specifies the frontend securityGroups you want to attach to an NLB.
When this annotation is not present, the controller will automatically create one security group. The security group will be attached to the LoadBalancer and allow access from load-balancer-source-ranges and aws-load-balancer-security-group-prefix-lists to the listen-ports. Also, the securityGroups for target instances/ENIs will be modified to allow inbound traffic from this securityGroup.
If you specify this annotation, you need to configure the security groups on your target instances/ENIs to allow inbound traffic from the load balancer. You could also set manage-backend-security-group-rules if you want the controller to manage the security group rules.
Both the name and ID of securityGroups are supported. Name matches a Name tag, not the groupName attribute.
Example
service.beta.kubernetes.io/aws-load-balancer-security-groups: sg-xxxx, nameOfSg1, nameOfSg2
- service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules specifies whether the controller should automatically add the ingress rules to the instance/ENI security group.
If you disable the automatic management of security group rules for an NLB (e.g. by setting service.beta.kubernetes.io/aws-load-balancer-security-groups), you will need to manually add appropriate ingress rules to your EC2 instance or ENI security groups to allow access to the traffic and health check ports.
Example
service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: "false"
- service.beta.kubernetes.io/aws-load-balancer-inbound-sg-rules-on-private-link-traffic specifies whether to apply security group rules to traffic sent to the load balancer through AWS PrivateLink.
Example
service.beta.kubernetes.io/aws-load-balancer-inbound-sg-rules-on-private-link-traffic: "off"
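For example, an internet-facing NLB restricted to a single CIDR via the recommended spec.loadBalancerSourceRanges field might look like this sketch; the CIDR, selector, and ports are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
    # source ranges are only enforced when client IP preservation is enabled,
    # which is disabled by default for ip targets
    service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=true
spec:
  type: LoadBalancer
  # preferred over the load-balancer-source-ranges annotation; applied to the
  # controller-managed frontend security group
  loadBalancerSourceRanges:
    - 203.0.113.0/24
  selector:
    app: my-app
  ports:
    - port: 443
      targetPort: 8443
      protocol: TCP
```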
Legacy Cloud Provider¶
The AWS Load Balancer Controller manages Kubernetes Services in a way that is compatible with the AWS cloud provider's legacy service controller.
- For users on v2.5.0+, the AWS LBC provides a mutating webhook for service resources that sets the spec.loadBalancerClass field for Services of type LoadBalancer, effectively making the AWS LBC the default controller for Services of type LoadBalancer. Users can disable this feature and revert to using the AWS Cloud Controller Manager as the default service controller by setting the helm chart value enableServiceMutatorWebhook to false with --set enableServiceMutatorWebhook=false.
- For users on older versions, the annotation service.beta.kubernetes.io/aws-load-balancer-type is used to determine which controller reconciles the service. If the annotation value is nlb-ip or external, recent versions of the legacy cloud provider ignore the Service resource so that the AWS LBC can take over. For all other values of the annotation, the legacy cloud provider will handle the service. Note that this annotation should be specified during service creation and not edited later. Support for the annotation was added to the legacy cloud provider in Kubernetes v1.20, and is backported to v1.18.18+ and v1.19.10+.
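As an illustration of what the mutating webhook effectively does, a Service can also opt into the AWS LBC explicitly via spec.loadBalancerClass. This is a minimal sketch: service.k8s.aws/nlb is the class used by the controller for NLB Services, and the rest of the manifest is placeholder.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  # selects the AWS Load Balancer Controller instead of the legacy in-tree controller
  loadBalancerClass: service.k8s.aws/nlb
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
```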