
AWS Load Balancer Controller installation

The AWS Load Balancer Controller (LBC) provisions AWS Network Load Balancer (NLB) and Application Load Balancer (ALB) resources. The LBC watches for new Service or Ingress Kubernetes resources and configures the corresponding AWS resources.

The LBC is supported by AWS. Some clusters may be using the legacy "in-tree" functionality to provision AWS load balancers. The AWS Load Balancer Controller should be installed instead.

Existing AWS ALB Ingress Controller users

The AWS ALB Ingress Controller must be uninstalled before installing the AWS Load Balancer Controller. Follow the migration guide to migrate.

Supported Kubernetes versions

  • AWS Load Balancer Controller v2.0.0~v2.1.3 requires Kubernetes 1.15+
  • AWS Load Balancer Controller v2.2.0~v2.3.1 requires Kubernetes 1.16-1.21
  • AWS Load Balancer Controller v2.4.0+ requires Kubernetes 1.19+

Deployment considerations

Additional requirements for non-EKS clusters:

  • Ensure subnets are tagged appropriately for auto-discovery to work
  • For IP targets, pods must have IPs from the VPC subnets. You can configure the amazon-vpc-cni-k8s plugin for this purpose.

Using the Amazon EC2 instance metadata server version 2 (IMDSv2)

If you are using IMDSv2, set the hop limit to 2 or higher to allow the LBC to perform metadata introspection.

You can set the IMDSv2 hop limit as follows:

aws ec2 modify-instance-metadata-options --http-put-response-hop-limit 2 --region <region> --instance-id <instance-id>
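To confirm the change took effect, you can query the instance's metadata options; `HttpPutResponseHopLimit` is the field the command above modifies. The instance ID and Region are placeholders for your own values:

```shell
# Check the current IMDSv2 hop limit on a node instance.
aws ec2 describe-instances \
    --instance-ids <instance-id> \
    --region <region> \
    --query 'Reservations[].Instances[].MetadataOptions.HttpPutResponseHopLimit'
```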

Instead of depending on IMDSv2, you can specify the AWS Region and the VPC via the controller flags --aws-region and --aws-vpc-id.
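With the Helm chart, these flags are commonly exposed as the `region` and `vpcId` values. A sketch, assuming those value names (verify them against the chart version you install); the identifiers are placeholders:

```shell
# Sketch: pin the Region and VPC ID so the controller never needs IMDSv2.
# `region` and `vpcId` map to --aws-region and --aws-vpc-id; confirm the
# value names against your chart version before relying on them.
helm upgrade --install aws-load-balancer-controller eks/aws-load-balancer-controller \
    -n kube-system \
    --set clusterName=<cluster-name> \
    --set region=<region-code> \
    --set vpcId=<vpc-id>
```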

Configure IAM

The controller runs on the worker nodes, so it needs access to the AWS ALB/NLB APIs with IAM permissions.

The IAM permissions can either be set up using IAM roles for service accounts (IRSA) or attached directly to the worker node IAM roles. IRSA is the recommended method if you're using Amazon EKS. If you're using kOps or self-hosted Kubernetes, you must manually attach policies to node instances.

Option A: IAM roles for service accounts (IRSA)

The reference IAM policies contain the following permissive configuration:

    "Effect": "Allow",
    "Action": [
    "Resource": "*"

We recommend further scoping down this configuration based on the VPC ID or cluster name resource tag.

Example condition for VPC ID:

    "Condition": {
        "ArnEquals": {
            "ec2:Vpc": "arn:aws:ec2:<REGION>:<ACCOUNT-ID>:vpc/<VPC-ID>"

Example condition for cluster name resource tag:

    "Condition": {
        "Null": {
            "aws:ResourceTag/<CLUSTER-NAME>": "false"

  1. Create an IAM OIDC provider. You can skip this step if you already have one for your cluster.

    eksctl utils associate-iam-oidc-provider \
        --region <region-code> \
        --cluster <your-cluster-name> \
        --approve

  2. Download an IAM policy for the LBC using one of the following commands:

    If your cluster is in a US Gov Cloud region:

        curl -o iam-policy.json

    If your cluster is in a China region:

        curl -o iam-policy.json

    If your cluster is in any other region:

        curl -o iam-policy.json

  3. Create an IAM policy named AWSLoadBalancerControllerIAMPolicy. If you downloaded a different policy, replace iam-policy with the name of the policy that you downloaded.

    aws iam create-policy \
        --policy-name AWSLoadBalancerControllerIAMPolicy \
        --policy-document file://iam-policy.json
    Take note of the policy ARN that's returned.
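If you didn't capture the ARN from the command output, you can look it up afterwards with a JMESPath query; the policy name must match the one you created above:

```shell
# Look up the ARN of the policy created in the previous step.
aws iam list-policies \
    --query 'Policies[?PolicyName==`AWSLoadBalancerControllerIAMPolicy`].Arn' \
    --output text
```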

  4. Create an IAM role and Kubernetes ServiceAccount for the LBC. Use the ARN from the previous step.

    eksctl create iamserviceaccount \
        --cluster=<cluster-name> \
        --namespace=kube-system \
        --name=aws-load-balancer-controller \
        --attach-policy-arn=arn:aws:iam::<AWS_ACCOUNT_ID>:policy/AWSLoadBalancerControllerIAMPolicy \
        --override-existing-serviceaccounts \
        --region <region-code> \
        --approve
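You can confirm the ServiceAccount was created and annotated with the IAM role; the name and namespace match the eksctl command above:

```shell
# The ServiceAccount should carry an eks.amazonaws.com/role-arn annotation
# pointing at the IAM role eksctl created.
kubectl get serviceaccount aws-load-balancer-controller \
    -n kube-system \
    -o yaml
```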

Option B: Attach IAM policies to nodes

If you're not setting up IAM roles for service accounts, download the IAM policy with the following command and attach it to the worker node IAM roles at a minimum.

    curl -o iam-policy.json

The following IAM permissions subset is for those who use TargetGroupBinding only and don't plan to use the LBC to manage security group rules:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [...],
                "Resource": "*"
            }
        ]
    }

Network configuration

Review the worker node security group docs. Your node security group must permit incoming traffic on TCP port 9443 from the Kubernetes control plane. This is needed for webhook access.

If you use eksctl, this is the default configuration.
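On other clusters you may need to add the rule yourself. A sketch with the AWS CLI, where `<node-sg-id>` and `<control-plane-sg-id>` are placeholders for your node and control-plane security group IDs:

```shell
# Allow the control plane to reach the controller's webhook on TCP 9443.
aws ec2 authorize-security-group-ingress \
    --group-id <node-sg-id> \
    --protocol tcp \
    --port 9443 \
    --source-group <control-plane-sg-id>
```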

Add controller to cluster

We recommend using the Helm chart to install the controller. The chart supports Fargate and facilitates updating the controller.

If you want to run the controller on Fargate, use the Helm chart, since it doesn't depend on cert-manager.

Detailed instructions

Follow the instructions in the aws-load-balancer-controller Helm chart.


  1. Add the EKS chart repo to Helm
    helm repo add eks
  2. If upgrading the chart via helm upgrade, install the TargetGroupBinding CRDs.

    kubectl apply -k ""


    The helm install command automatically applies the CRDs, but helm upgrade doesn't.
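To check whether the CRDs are already present before upgrading, you can list them by name; these are the CRD names the controller registers (verify against your controller version):

```shell
# TargetGroupBinding and IngressClassParams CRDs installed by the chart.
kubectl get crd targetgroupbindings.elbv2.k8s.aws ingressclassparams.elbv2.k8s.aws
```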

Helm install command for clusters with IRSA:

helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=<cluster-name> --set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller

Helm install command for clusters not using IRSA:

helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=<cluster-name>
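After either install command, you can verify the controller is running; the deployment name matches the Helm release name used above:

```shell
# Wait for the controller deployment to become available, then inspect it.
kubectl rollout status deployment/aws-load-balancer-controller -n kube-system
kubectl get deployment -n kube-system aws-load-balancer-controller
```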

Install cert-manager

kubectl apply --validate=false -f

Apply YAML

  1. Download the spec for the LBC.
  2. Edit the saved yaml file, go to the Deployment spec, and set the controller --cluster-name arg value to your EKS cluster name
    apiVersion: apps/v1
    kind: Deployment
    . . .
    name: aws-load-balancer-controller
    namespace: kube-system
        . . .
                    - args:
                        - --cluster-name=<INSERT_CLUSTER_NAME>
  3. If you use IAM roles for service accounts, we recommend that you delete the ServiceAccount from the yaml spec. Doing so preserves the eksctl-created iamserviceaccount if you later delete the installation from the cluster.
    apiVersion: v1
    kind: ServiceAccount
  4. Apply the yaml file
    kubectl apply -f v2_4_7_full.yaml
  5. Optionally download the default ingressclass and ingressclass params
  6. Apply the ingressclass and params
    kubectl apply -f v2_4_7_ingclass.yaml

Create an update strategy

The controller doesn't receive security updates automatically. You need to manually upgrade to a newer version when it becomes available.

You can upgrade using helm upgrade or another strategy to manage the controller deployment.
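For Helm installs, an upgrade typically looks like the following sketch; `--reuse-values` keeps the values you set at install time:

```shell
# Pull the latest chart versions, then upgrade the controller in place.
helm repo update
helm upgrade aws-load-balancer-controller eks/aws-load-balancer-controller \
    -n kube-system \
    --reuse-values
```

Remember to apply any new TargetGroupBinding CRDs first, since helm upgrade doesn't install them automatically.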