In this blog post you will learn how to set up ingress for your Kubernetes application with AWS Application Load Balancer.
Most workloads running on your Kubernetes cluster ultimately need to be exposed to end users. This is where Ingress comes into play – a special Kubernetes object responsible for providing external access to the services in a cluster. Typically, you define rules there that describe how HTTP traffic is routed to the backend services. Every Ingress resource must belong to an Ingress class that points to a controller, which ultimately implements those rules using cloud infrastructure and services. When you are on AWS, the natural choice is the Application Load Balancer. Let’s see how to do that!
Installation
Before we can use ALB, we must first install the controller in our cluster. In this tutorial, I’ll use an alb-demo EKS cluster located in the eu-central-1 region. As usual in AWS, let’s start by defining some IAM permissions so that the controller can access ALB resources. First, we need to create an IAM OIDC provider for our cluster with the following command.
```shell
eksctl utils associate-iam-oidc-provider \
  --region eu-central-1 \
  --cluster alb-demo \
  --approve
```
Next, we create a dedicated IAM policy called AWSLoadBalancerControllerIAMPolicy. We use a predefined template, but feel free to modify it if needed.
```shell
curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.1.2/docs/install/iam_policy.json

aws iam create-policy \
  --policy-name AWSLoadBalancerControllerIAMPolicy \
  --policy-document file://iam-policy.json
```
Finally, we can create an IAM role and ServiceAccount for the controller in the kube-system namespace. Please note that we must use the policy ARN from the previous step.
```shell
eksctl create iamserviceaccount \
  --cluster=alb-demo \
  --region=eu-central-1 \
  --namespace=kube-system \
  --name=aws-load-balancer-controller \
  --attach-policy-arn=arn:aws:iam::<AWS_ACCOUNT_ID>:policy/AWSLoadBalancerControllerIAMPolicy \
  --override-existing-serviceaccounts \
  --approve
```
Now we are ready to deploy the ALB controller to our cluster. The easiest way to do that is to use a Helm chart from the official EKS chart repository.
```shell
helm repo add eks https://aws.github.io/eks-charts

kubectl apply -k "github.com/aws/eks-charts/stable/aws-load-balancer-controller//crds?ref=master"

helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system \
  --set clusterName=alb-demo \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller
```
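Before moving on, it’s worth confirming that the controller actually came up. A quick sketch of how to check (against a live cluster):

```shell
# Verify the controller deployment is available in kube-system.
kubectl get deployment -n kube-system aws-load-balancer-controller

# Optionally tail the controller logs to watch it reconcile ingress resources.
kubectl logs -n kube-system deployment/aws-load-balancer-controller --tail=20
```

If the deployment shows ready replicas, the controller is watching for Ingress resources of the alb class.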
And that’s it, you can now use ALB when creating ingress for the application. Let’s see how to do that!
ALB configuration
For the purpose of this tutorial, we will deploy a simple web application to the Kubernetes cluster and expose it to the Internet with the ALB ingress controller. The complete source code is available in the GitLab repository.
Let’s first run the application on the EKS cluster by creating a deployment and service.
```shell
kubectl apply -f app-deployment.yaml
kubectl apply -f app-service.yaml
```
Now we can deploy the ingress. Before we do that, however, let’s take a closer look at it.
```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: alb-demo-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-port: http-healthcheck
    alb.ingress.kubernetes.io/healthcheck-path: /healthcheck
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: app-service
              servicePort: 80
```
Apart from the standard ingress definition you probably noticed the special annotations attached to this resource. That is where we tell the ingress controller how to configure ALB. In our case, we want to expose our backend service to the Internet at HTTP port 80. We also define a health check so that the load balancer won’t route traffic to instances that are down. Let’s deploy the ingress.
```shell
kubectl apply -f ingress.yaml
kubectl describe ingress/alb-demo-ingress
```
After executing the above command you will probably see the following output.
```
Name:             alb-demo-ingress
Namespace:        alb-demo
Address:
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host        Path  Backends
  ----        ----  --------
  *           /*    alb-demo-service:80 (172.31.24.80:80)
Annotations:  alb.ingress.kubernetes.io/healthcheck-path: /healthcheck
              alb.ingress.kubernetes.io/healthcheck-port: http-healthcheck
              alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
              alb.ingress.kubernetes.io/scheme: internet-facing
              kubernetes.io/ingress.class: alb
Events:
  Type     Reason            Age               From     Message
  ----     ------            ----              ----     -------
  Warning  FailedBuildModel  2s (x10 over 6s)  ingress  Failed build model due to couldn't auto-discover subnets: unable to discover at least one subnet
```
The deployment failed with the error message: Failed build model due to couldn’t auto-discover subnets: unable to discover at least one subnet. This is because we must explicitly mark our subnets with a special tag: kubernetes.io/role/elb=1. Once this is done, deploying the ingress succeeds and you can access the app from the browser.
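As a sketch, the tag can be added with the AWS CLI; the subnet IDs below are placeholders you’d replace with the public subnets of the cluster’s VPC:

```shell
# Placeholder subnet IDs -- substitute the public subnets of the alb-demo VPC.
aws ec2 create-tags \
  --resources subnet-0abc1234 subnet-0def5678 \
  --tags Key=kubernetes.io/role/elb,Value=1
```

For internal (non-internet-facing) load balancers, the corresponding tag is kubernetes.io/role/internal-elb=1.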
SSL Support
It’s always a good idea to serve your application over a secure SSL connection, so let’s see how we can enable it with ALB. As with most of the configuration, this is done by adding further annotations to the ingress definition.
```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: alb-demo-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-port: http-healthcheck
    alb.ingress.kubernetes.io/healthcheck-path: /healthcheck
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-central-1:<ACCOUNT_ID>:certificate/<CERTIFICATE_ID>
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: ssl-redirect
              servicePort: use-annotation
          - path: /*
            backend:
              serviceName: app-service
              servicePort: 80
```
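The certificate-arn annotation must point to a certificate in AWS Certificate Manager in the same region as the load balancer. If you don’t have one yet, it can be requested via the CLI; the domain below is a placeholder for your own:

```shell
# Placeholder domain -- replace with the domain your app will be served from.
aws acm request-certificate \
  --domain-name demo.example.com \
  --validation-method DNS \
  --region eu-central-1
```

The command prints the new certificate ARN to plug into the annotation; complete the DNS validation before the ALB can use the certificate.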
Summary
As you can see, using the ALB ingress controller to expose Kubernetes workloads to the Internet is pretty straightforward, although it requires a few preparation steps. Most of the configuration is done using annotations, so make sure you’ve read the full documentation.
With the ALB ingress controller your web application is exposed to end users over the Internet. When traffic grows, you may need to scale your infrastructure to ensure the best user experience. Read our blog post How to enable Kubernetes cluster autoscaling in AWS to make sure it’s done automatically!
8 replies on “How to set up Kubernetes Ingress with AWS ALB Ingress Controller”
What could be other issues, even though we have added the subnet tags required for autodiscovery? I am finding it hard to fix this:
Failed build model due to couldn’t auto-discover subnets: unable to discover at least one subnet. The tag exists.
Hi Tara,
I’m not sure what other reasons there could be. Please verify again that you’ve tagged the proper subnets.
Thank you so much. I was stuck while using aws example. your blog solved my issue.
You’re welcome. I’m glad that we could help.
Why does every single example use a mixture of k8s manifests, Helm & CLI commands? Why can this not all be done in Helm to simplify automation?
Hi Paul,
the idea here was to show the atomic steps required to enable Ingress with AWS ELB, so the reader can understand the low level details.
As there are many use cases for this, it can be automated with Helm, Terraform or other tools. With the examples provided it shouldn’t be a big deal.
Can the AWS Load Balancer Controller in AWS account 1 discover subnets in AWS account 2 if the VPCs in account 1 and account 2 are connected using a transit gateway?
I don’t think it’s possible, but there might be some not-so-well-documented trick to achieve it. Maybe try reaching out to AWS support, as they are very helpful with such problems.