# Add note about internal load balancers on AWS #313

## Conversation
> **Important:** On AWS, internal Services require the following prerequisites:
>
> - The cluster is running workload cluster release [v17.3.2](https://docs.giantswarm.io/changes/workload-cluster-releases-aws/releases/aws-v17.3.2/) or above
> - At least one node pool [`MachineDeployment`](https://docs.giantswarm.io/ui-api/management-api/crd/machinedeployments.cluster.x-k8s.io/) has been created with the `alpha.giantswarm.io/internal-elb: true` annotation
> - Controller pods are scheduled on nodes of the node pool carrying the above annotation, for example by selecting nodes with a matching `giantswarm.io/machine-deployment` label via the `controller.nodeAffinity` configuration value (see the sketches below)
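For illustration, the node pool annotation might look roughly like this; a minimal sketch, where the `apiVersion`, name, and namespace are placeholders and may differ between workload cluster releases:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1  # may differ by workload cluster release
kind: MachineDeployment
metadata:
  name: internal-elb-pool   # placeholder node pool name
  namespace: org-example    # placeholder namespace
  annotations:
    # Opts this node pool in to internal ELB handling
    alpha.giantswarm.io/internal-elb: "true"
```

And the `controller.nodeAffinity` configuration value could pin controller pods to that node pool, assuming it accepts a standard Kubernetes nodeAffinity structure (the node pool ID `a1b2c` is a placeholder):

```yaml
controller:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: giantswarm.io/machine-deployment
              operator: In
              values:
                - a1b2c   # placeholder node pool ID
```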
@njuettner can you quickly tell me if these instructions point in the right direction?
One additional question: do you know whether external AWS load balancers on these nodes will also work?
Only pods which are exposed via a node port need to be placed on those MachineDeployments. We figured out that the kube-controller-manager updates security groups for pods which have a node port.

AFAIK the ingress controller doesn't need to run on those MachineDeployments as long as it doesn't use node ports itself.

Yes, that should work as well.
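For context, an internal Service here means a `LoadBalancer` Service carrying the internal load balancer annotation; a minimal sketch, where the name, labels, and ports are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  annotations:
    # Provision an internal (not internet-facing) ELB
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: hello-world
  ports:
    - port: 80
      targetPort: 80
```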
> Only pods which are exposed via a node port need to be placed on those MachineDeployments.

The load balancer health check exposes a node port through `spec.healthCheckNodePort`. Unless the health check succeeds, AWS does not route traffic to the nodes, so I would say this seems to be required.

Another alternative which may work is to set the `alb.ingress.kubernetes.io/healthcheck-port: traffic-port` annotation on the Service (see the sketch below). I'll test once I have a working cluster.
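To make that concrete, a sketch combining both options; the annotation-based variant is the untested proposal from this comment, and the name and ports are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    # Untested alternative proposed above: point the health check at the
    # traffic port instead of relying on spec.healthCheckNodePort.
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
spec:
  type: LoadBalancer
  # With externalTrafficPolicy: Local, Kubernetes allocates
  # spec.healthCheckNodePort automatically; only nodes that pass the
  # health check on that port receive traffic from the load balancer.
  externalTrafficPolicy: Local
  selector:
    app: hello-world
  ports:
    - port: 80
      targetPort: 80
```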
This is a duplicate, I guess?

nope

But where is the difference to #316?

A note about internal load balancers on AWS

Closing. This seems already fixed for newer platform releases.
This PR attempts to solve issues with internal load balancers on AWS.

Tests on workload clusters (not always required)

For changes in the chart, chart templates, and ingress controller container images, I executed the following tests to verify that they work in live environments:
Testing was done using `hello-world-app`.

Hint for KVM:
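For completeness, a minimal backing Deployment for the Service sketches above; a sketch only, where the image stands in for the actual `hello-world-app` workload:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          # Placeholder image; the tests in this PR used hello-world-app
          image: nginx:1.25
          ports:
            - containerPort: 80
```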