
Add note about internal load balancers on AWS #313

Closed
ubergesundheit wants to merge 3 commits from the internal-ingress-controller-docs branch

Conversation

ubergesundheit
Member

This PR attempts to solve issues with internal load balancers on AWS.

Tests on workload clusters (not always required)

For changes in the chart, chart templates, and ingress controller container images, I executed the following tests to verify that they work in live environments:

| Test / Provider | AWS | Azure | KVM |
| --- | --- | --- | --- |
| Upgrade from previous version | | | |
| Existing Ingress resources are reconciled correctly | | | |
| Fresh install | | | |
| Fresh Ingress resources are reconciled correctly | | | |

Testing was done using hello-world-app.

Hint for KVM:

kubectl port-forward -n kube-system svc/nginx-ingress-controller-app 8080:80
ingress_domain=host.configured.in.ingress; curl --connect-to "$ingress_domain:80:127.0.0.1:8080" "http://$ingress_domain" -v

Comment on lines +181 to +185
**Important:** On AWS, internal Services require the following prerequisites:

- The cluster is running workload cluster release [v17.3.2](https://docs.giantswarm.io/changes/workload-cluster-releases-aws/releases/aws-v17.3.2/) or above
- At least one node pool [`MachineDeployment`](https://docs.giantswarm.io/ui-api/management-api/crd/machinedeployments.cluster.x-k8s.io/) has been created with `alpha.giantswarm.io/internal-elb: true` annotation
- Controller pods are scheduled on nodes of the node pool with the above annotation (for example, by selecting nodes with a matching `giantswarm.io/machine-deployment` label via the `controller.nodeAffinity` configuration value; see the sketch below)
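
For illustration only, a minimal values sketch of that last point, assuming `controller.nodeAffinity` accepts a standard Kubernetes `nodeAffinity` block; the MachineDeployment ID `a1b2c` is a placeholder:

```yaml
# Hypothetical sketch: pin controller pods to the node pool carrying the
# alpha.giantswarm.io/internal-elb: true annotation. Replace "a1b2c" with
# the ID of your own MachineDeployment.
controller:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: giantswarm.io/machine-deployment
              operator: In
              values:
                - a1b2c
```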
Member Author


@njuettner can you quickly tell me if these instructions point in the right direction?

An additional question: do you know if external AWS load balancers on these nodes will also work?

Member


Only pods which are exposed via a node port need to be placed on those MachineDeployments. We figured out that the kube-controller updates security groups for pods which have a node port.

AFAIK the ingress controller doesn't need to be on those MachineDeployments as long as it doesn't use node ports itself.

Yes, that should work as well.

Member Author


> Only pods which are exposed via a node port need to be placed on those MachineDeployments.

The load balancer health check exposes a node port through `spec.healthCheckNodePort`. Unless the health check succeeds, AWS does not route traffic to the nodes. So I would say this seems to be required.

Another alternative which may work is to set the `alb.ingress.kubernetes.io/healthcheck-port: traffic-port` annotation on the Service. I'll test once I have a working cluster.
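
For reference, a rough and untested sketch of the kind of internal controller Service being discussed, combining the internal load balancer annotation with the proposed health check annotation; the selector label and port names are hypothetical:

```yaml
# Untested sketch, not the chart's actual Service template.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller-app
  namespace: kube-system
  annotations:
    # Ask the AWS cloud provider for an internal load balancer.
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    # Proposed alternative from the comment above; still to be verified.
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
spec:
  type: LoadBalancer
  # "Local" is what makes Kubernetes allocate spec.healthCheckNodePort.
  externalTrafficPolicy: Local
  selector:
    app.kubernetes.io/name: nginx-ingress-controller-app  # hypothetical label
  ports:
    - name: http
      port: 80
      targetPort: http
```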

@Gacko
Member

Gacko commented Jun 15, 2022

This is a duplicate, I guess?

@ubergesundheit
Member Author

nope

@Gacko
Member

Gacko commented Jun 15, 2022

But where is the difference to #316?

@ubergesundheit
Member Author

> But where is the difference to #316?

A note about internal load balancers on AWS

ubergesundheit changed the title from "Add controller.nodeAffinity configuration value" to "Add note about internal load balancers on AWS" on Jun 15, 2022
@ubergesundheit
Member Author

Closing. This seems to already be fixed in newer platform releases.

Gacko deleted the internal-ingress-controller-docs branch on October 17, 2022 at 10:52