
Add NLB IP dualstack mode support (ipv6 support for NLB) #2050

Closed
wants to merge 1 commit

Conversation

logingood

Description

The AWS load balancer ingress controller supports Service resources of type LoadBalancer (https://github.com/kubernetes-sigs/aws-load-balancer-controller/blob/main/docs/guide/service/nlb_ip_mode.md).
Currently the dualstack annotation on a Service is ignored by external-dns and the corresponding AAAA record is not created.
This PR adds IPv6 support for Services, similar to the existing Ingress resource/ALB handling.

Fixes #ISSUE

Checklist

  • Unit tests updated
  • End user documentation updated

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Apr 15, 2021
@k8s-ci-robot
Contributor

Welcome @logingood!

It looks like this is your first PR to kubernetes-sigs/external-dns 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-sigs/external-dns has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added the size/M Denotes a PR that changes 30-99 lines, ignoring generated files. label Apr 15, 2021
@seanmalloy
Member

/kind feature

@k8s-ci-robot k8s-ci-robot added the kind/feature Categorizes issue or PR as related to a new feature. label Apr 15, 2021
Contributor

@Raffo left a comment

Added a few comments. Did you test this @logingood? I don't think this is going to be enough to create AAAA records 🤔


## Dualstack NLBs

The ALB ingress controller satisifies service of "LoadBalancer" type with
Contributor

I think you mean NLB?

Author

Well, actually it is the AWS load balancer ingress controller. It satisfies the Ingress resource and Service resources of type LoadBalancer when the annotation is present. I will change it just in case, to avoid confusion.

docs/tutorials/alb-ingress.md (outdated, resolved)
@logingood
Author

logingood commented Apr 15, 2021

Added a few comments. Did you test this @logingood? I don't think this is going to be enough to create AAAA records 🤔

I'll test it today. I was relying on this piece of code, which expects the annotation to be present:

for _, endpoint := range endpoints {
change, dualstack := p.newChange(action, endpoint, recordsCache, zones)
changes = append(changes, change)
if dualstack {
// make a copy of change, modify RRS type to AAAA, then add new change
rrs := *change.ResourceRecordSet
change2 := &route53.Change{Action: change.Action, ResourceRecordSet: &rrs}
change2.ResourceRecordSet.Type = aws.String(route53.RRTypeAaaa)
changes = append(changes, change2)
}
}

and here's the test that validates this condition

func TestAWSCreateRecordsWithALIAS(t *testing.T) {

@Raffo I've just tested, it worked fine

[screenshots attached]

@logingood requested a review from Raffo April 22, 2021 01:35
@logingood
Author

@Raffo @seanmalloy any feedback on this? We are actively using a forked version and it creates AAAA records for nlb-ip load balancers perfectly. It would be very useful to fix this behavior.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 17, 2021
@crawforde

Any movement on this?

@logingood
Author

Hey, is there any update on this? @Raffo @seanmalloy are there any alternatives?

@crawforde

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 2, 2021
@crawforde

/assign @njuettner

@krish7919

Bump please?

@forsberg

How does this one compare to #2309?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 13, 2022
@logingood
Author

How does this one compare to #2309?

This PR covers IPv6 for NLB; #2309 might be more generic?

@@ -40,6 +40,10 @@ import (

const (
defaultTargetsCapacity = 10
// NLBDualstackAnnotationKey is the annotation used for determining if an NLB LoadBalancer is dualstack
NLBDualstackAnnotationKey = "service.beta.kubernetes.io/aws-load-balancer-ip-address-type"
Member

This doesn't look right; there shouldn't be any provider-specific setting inside a generic source like ingress or service.

Contributor

More specifically, what records to create should be determined by spec.ipFamilies. If you strictly test for IPv4 and IPv6 you'll also support LoadBalancer implementations that are ipv6-only.

This PR is for AWS NLB and that is how AWS does it. AWS currently does not use spec.ipFamilies.

From what I see, it replicates what is being done for ingresses:

func (sc *ingressSource) setDualstackLabel(ingress *networkv1.Ingress, endpoints []*endpoint.Endpoint) {
	val, ok := ingress.Annotations[ALBDualstackAnnotationKey]
	if ok && val == ALBDualstackAnnotationValue {
		log.Debugf("Adding dualstack label to ingress %s/%s.", ingress.Namespace, ingress.Name)
		for _, ep := range endpoints {
			ep.Labels[endpoint.DualstackLabelKey] = "true"
		}
	}
}

I'd suggest moving forward with this.
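
For context, here is a minimal sketch of what the analogous hook on the service source would presumably look like, mirroring the ingress excerpt above (shown in the same excerpt style, imports omitted; the serviceSource receiver, the function name, and the literal "dualstack" comparison are assumptions, not necessarily the exact code in this PR):

func (sc *serviceSource) setDualstackLabel(svc *v1.Service, endpoints []*endpoint.Endpoint) {
	// Assumed sketch: mark endpoints as dualstack when the NLB annotation requests it.
	if val, ok := svc.Annotations[NLBDualstackAnnotationKey]; ok && val == "dualstack" {
		log.Debugf("Adding dualstack label to service %s/%s.", svc.Namespace, svc.Name)
		for _, ep := range endpoints {
			ep.Labels[endpoint.DualstackLabelKey] = "true"
		}
	}
}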

Contributor

@james-callahan Mar 22, 2023

From kubectl explain service.spec.ipFamilies:

These families must correspond to the values of the clusterIPs field, if specified. Both clusterIPs and ipFamilies are governed by the ipFamilyPolicy field.

To me this means that the load balancer controller shouldn't set the ipFamilies to contain IPv6 unless the Service also has an IPv6 clusterIPs entry. Considering that the load balancer controller is distinct from the CNI, this doesn't seem like it can be used to indicate if the load balancer has a public IPv6 address. And hence they needed to use the annotation instead.

@szuecs I'm asking for clarification because I don't know what this means in the context of external-dns with type: LoadBalancer Service behind an AWS NLB:

you have to follow the ClusterIPs entries (i.e. the Services fields), otherwise, if the load balancer has ipv6 and ipv4 but the service only ipv4, or they implement some nat64 or it will not work, at least I can't see how

The external-dns entries do not have anything to do with ClusterIPs on the Service. I don't know what the task associated with that sentence would be. I'm asking for clarification on what you think the solution looks like.

Once again, for clarification, the use case is: there is a single stack Kubernetes cluster where everything is IPv4. All clusterIPs for all services within the cluster will always be IPv4 addresses. The load balancer controller is capable of provisioning a dual-stack load balancer. The dual-stack load balancer has a routable IPv6 address. Incoming IPv6 traffic arriving at the load balancer is NAT'd and sent to the Kubernetes Service's IPv4 clusterIP(s). Nothing in the Kubernetes API entities knows the external IPv6 address. The load balancer controller provisions this type of load balancer based on an annotation on the Service entity. The exact same behavior happens for Ingress entities as well. That use case is handled in external-dns with provider specific code in the source/ingress.go file here:

func (sc *ingressSource) setDualstackLabel(ingress *networkv1.Ingress, endpoints []*endpoint.Endpoint). It works perfectly out of the box.

An example of a type: LoadBalancer Service which is configured for NLB DualStack looks like this:

apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      external-dns.alpha.kubernetes.io/hostname: anexternalname.example.com.,anotherexternalname.example.com.
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"external-dns.alpha.kubernetes.io/hostname":"anexternalname.example.com.,anotherexternalname.example.com.","service.beta.kubernetes.io/aws-load-balancer-ip-address-type":"dualstack","service.beta.kubernetes.io/aws-load-balancer-scheme":"internet-facing","service.beta.kubernetes.io/aws-load-balancer-target-group-attributes":"proxy_protocol_v2.enabled=true"},"labels":{"skaffold.dev/run-id":"edec5eb5-9454-4d5f-afb4-7134aad56c3b"},"name":"edgeproxy-external","namespace":"edgeproxy"},"spec":{"externalTrafficPolicy":"Cluster","loadBalancerClass":"service.k8s.aws/nlb","ports":[{"name":"http","port":80,"protocol":"TCP","targetPort":8080},{"name":"https","port":443,"protocol":"TCP","targetPort":8443}],"selector":{"app":"edgeproxy"},"type":"LoadBalancer"}}
      service.beta.kubernetes.io/aws-load-balancer-ip-address-type: dualstack
      service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
      service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: proxy_protocol_v2.enabled=true
    creationTimestamp: "2023-04-28T23:17:29Z"
    finalizers:
    - service.k8s.aws/resources
    labels:
      skaffold.dev/run-id: edec5eb5-9454-4d5f-afb4-7134aad56c3b
    name: edgeproxy-external
    namespace: edgeproxy
    resourceVersion: "125018992"
    uid: 404573f8-4163-48c2-a2b6-2c42ccaf0e76
  spec:
    allocateLoadBalancerNodePorts: true
    clusterIP: 172.20.7.113
    clusterIPs:
    - 172.20.7.113
    externalTrafficPolicy: Cluster
    internalTrafficPolicy: Cluster
    ipFamilies:
    - IPv4
    ipFamilyPolicy: SingleStack
    loadBalancerClass: service.k8s.aws/nlb
    ports:
    - name: http
      nodePort: 30027
      port: 80
      protocol: TCP
      targetPort: 8080
    - name: https
      nodePort: 31165
      port: 443
      protocol: TCP
      targetPort: 8443
    selector:
      app: edgeproxy
    sessionAffinity: None
    type: LoadBalancer
  status:
    loadBalancer:
      ingress:
      - hostname: k8s-edgeproxy-edgeproxyext-fd8d0357e7ab-e7068f6d4543.elb.us-west-2.amazonaws.com
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

We do not want to create non-ALIAS A/AAAA records – we already have a solution which works for non-ALIAS records, which is CNAMEs. Presumably #3554 is another solution which works for non-ALIAS records.

We want to create ALIAS records in Route53. A simple reason to prefer ALIAS records is that they are cheaper to serve to clients under Route53's billing model. They are also the default behavior of external-dns, and ideally external-dns would support dual-stack NLBs out of the box without further configuration. It does support dual-stack ALBs targeting K8s Ingresses today.

A potential solution is to have the provider attempt to resolve the A/AAAA records and then create the corresponding ALIAS records based on which resolutions return valid addresses. There are many valid solutions. We could change the domain model to pass all the service annotations up and let the provider pick dual stack instead of making source choose it when creating the CNAME endpoint. We could change the provider to always create AAAA records if it's making ALIASes. We could use an external-dns-specific annotation to tell source to put dual-stack = true in the endpoint labels.

I'm asking for guidance on what is acceptable here. Obviously I think the current PR is perfectly reasonable: it follows existing practices in the code base; it works and is documented; it solves a surprising potential for brokenness which someone can encounter when they deploy external-dns. As a solution, it has the marked benefit of actually existing: the external-dns project is one click away from supporting DualStack NLBs on AWS! The external-dns maintainers seem to feel otherwise. That's fine. But for two years there has been very little help in actually driving towards a solution which lets this use case work out of the box with external-dns. People want this to work with their clusters. Given the information in this PR, if you tell us what is acceptable, someone will probably make it happen.

interesting, so AWS does NAT64?

Yep, exactly. I don't think it's technically NAT64, but there's only IPv4 involved on the service / target group / Kubernetes cluster side. The load balancer has its own externally routable IPv6 addresses. The load balancer has explicit targets which are eligible to receive traffic based on configured target groups and health check status and the likes. It translates incoming IPv6 connection traffic into corresponding IPv4 connection traffic when it forwards the traffic to the backends.

Press release here: https://aws.amazon.com/about-aws/whats-new/2020/11/network-load-balancer-supports-ipv6/. The language they use there is: "Your Network Load Balancer seamlessly converts IPv6 traffic to IPv4 before routing it to back-end targets."

Documentation for the feature is partially available here, for example: https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html#target-group-ip-address-type

AFAIK, the IP protocol family used by clients communicating with the load balancer and the IP protocol family used by the load balancer communicating with the target groups is completely decoupled. You could also, for example, have an IPv4 NLB in front of a purely IPv6 target group.

GCP load balancers can similarly terminate IPv6 traffic and translate it to IPv4 going to the backends: https://cloud.google.com/load-balancing/docs/ipv6. The language there is more explicit about the IPv6 connection being terminated at the GCP load balancer.

For TLS traffic to an NLB, the AWS documentation is explicit about connection termination, but for IPv6 -> IPv4 TCP translated traffic, I'm not 100% certain they claim full termination – it might be more reframing + fragmentation/MTU handling + NAT or something like that. Seems hairy to get right without connection termination, so who knows...all speculation on my part.


@aojea May 2, 2023

I see, it really is Proxy6 to 4

Contributor

@reltuk I am very happy to be that close to having dual stack support.
I have to think about and test in a cluster how it can be done best.
I think if we could ALIAS an ALIAS that would be best; in the past, afaik, that wasn't possible, which is why we CNAME the LB ALIAS. Resolving to A/AAAA IPs is a poor solution, as I tried to explain: it would work for low-scale cloud load balancers, but at scale you get more and more IPs in a cloud load balancer's pool, DNS clients don't get all the IPs resolved, and we then depend on which IPs we happened to resolve. That can easily lead to flapping, and worse, the load balancer cannot spread traffic across all the handlers it has if we don't find all the IPs.

On the wording: I think what the NLB does is forwarding; it's not proxying in the case of IP preservation. I didn't check whether they can do IP preservation for dualstack, but I don't see a problem.

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@sylr

sylr commented Oct 6, 2022

New custom build with this patch

source: https://github.com/sylr/external-dns/commits/v0.12%2Bsylr
image: ghcr.io/sylr/external-dns:v0.12.2-3-g264cde1a

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jan 3, 2023
@sylr

sylr commented Jan 31, 2023

@Raffo @seanmalloy @njuettner I've been running this patch for 6 months now, can we merge it please?

@cdobbyn

cdobbyn commented Apr 11, 2023

Not sure why but adding this to our service while using your latest image didn't work @sylr. The aws load balancer controller did update it appropriately to be dualstack to the outside world.

service.beta.kubernetes.io/aws-load-balancer-ip-address-type: "dualstack"

ingress-nginx-service.yml.txt

Would love to see this working and then merged into mainline.

AWS Loadbalancer Ingress controller supports [service resource of
LoadBalancer
type](https://github.com/kubernetes-sigs/aws-load-balancer-controller/blob/main/docs/guide/service/nlb_ip_mode.md).
Currently dualstack annotation for `service` is ignored by external dns
and corresponding `AAAA` record is not being created.
This PR adds support of ipv6 similar to ingress resource/ALB.
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: crawforde, logingood, mesge, sylr
Once this PR has been reviewed and has the lgtm label, please ask for approval from njuettner. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Apr 16, 2023
@logingood
Author

I've rebased just in case it remains relevant; this code has been running without issues for more than a year. If someone has a better/more generic way to implement this, that would be great!

@reltuk

reltuk commented Apr 18, 2023

I just independently implemented this after needing the missing functionality and realizing external-dns did not support it. I searched issues before I implemented it but I only searched PRs as I went to open my own.

This is seemingly a perfectly reasonable PR, following the same approach as is used to handle the ALB dual stack annotation, and it's been open for over 2 years now. Multiple people have commented that it works and is useful for them. I don't understand the reticence to merge. If there is a problem with this approach, can you please provide guidance on a different approach which would be acceptable?

The issue at hand is caused by a specific quirk in Route53 ALIAS records, where both A and AAAA alias records need to be created for certain load balancers. The Endpoint domain model currently models these as CNAMEs. If you run external-dns with --aws-prefer-cname things generally work as expected, since A and AAAA queries will resolve through the CNAME. But when the AWS provider converts these entries into ALIAS A records it doesn't create the required AAAA record.

Some alternative options:

  1. Change the AWS provider to always create AAAA alias records as well. AFAICT, this does not change query response behavior. In particular, a AAAA query of a AAAA ALIAS record pointing to an NLB that does not itself have an AAAA record will return a 0-answer NOERROR response. You get the same response if there is no AAAA ALIAS record at all. I'm not sure if this would have scale or backwards compatibility concerns...

  2. Create a non-provider specific annotation which helps external-dns know to create the AAAA record. It will look pretty similar to the above, but maybe it will meet your requirements. It should only apply to ALIAS-type records, and I don't know if any other DNS providers have such things. Something like placing the following annotation on the relevant Service: external-dns.alpha.kubernetes.io/create-dualstack-alias-records: "true" or external-dns.alpha.kubernetes.io/alias-record-ip-address-type: "ipv6"|"ipv4"|"dualstack" or external-dns.alpha.kubernetes.io/alias-record-types: "a"|"aaaa"|"a, aaaa" or something...

I would really love to see some solution here. It's a useful feature, and the missing behavior is surprising when someone runs across it.

@johngmyers
Contributor

A concern I have with this approach is that it is making assumptions about the behavior of the AWS Load Balancer Controller that aren't always going to be true. It's already possible to configure LBC to create dualstack ALBs by specifying that in the IngressClassParams instead of in an annotation. It's possible LBC could in the future be made configurable to create dual-stack NLBs by default, particularly since I plan to submit such a feature.

Is there a problem with always creating both A and AAAA alias records?

@szuecs
Contributor

szuecs commented May 8, 2023

@johngmyers I wonder why we need this at all, because if the ALIAS returned A/AAAA, then a CNAME to the ALIAS would just work. Of course it would depend on the resolving client asking for AAAA instead of A.

@reltuk

reltuk commented May 8, 2023

@johngmyers I wonder why we need this at all, because if the ALIAS returned A/AAAA, then a CNAME to the ALIAS would just work. Of course it would depend on the resolving client asking for AAAA instead of A.

We need this due to a particular quirk of ALIAS records in Route53. ALIAS is not a record type; it is a type of record value, which allows a record entry to resolve to another existing record in Route53. A CNAME, by contrast, is a record type with specific resolution semantics that resolvers follow. So the way to set up an AAAA record in Route53 using an ALIAS record value is to create the AAAA record, and if you want an A record, you have to create the A record.
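
To make that quirk concrete, here is a small self-contained sketch (illustrative only, not code from this PR or from external-dns; the package name, function name, and parameters are placeholders) of what "create both ALIAS records" means in terms of the AWS SDK v1 route53 types already quoted in this thread:

package aliasexample

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/route53"
)

// dualstackAliasChanges builds the two changes needed for a dualstack ALIAS:
// an A record and an AAAA record that share the same AliasTarget.
func dualstackAliasChanges(recordName, lbDNSName, lbHostedZoneID string) []*route53.Change {
	alias := &route53.AliasTarget{
		DNSName:              aws.String(lbDNSName),      // e.g. the NLB hostname from status.loadBalancer
		HostedZoneId:         aws.String(lbHostedZoneID), // the ELB-owned hosted zone ID, not the user's zone
		EvaluateTargetHealth: aws.Bool(true),
	}
	changes := make([]*route53.Change, 0, 2)
	for _, rrType := range []string{route53.RRTypeA, route53.RRTypeAaaa} {
		changes = append(changes, &route53.Change{
			Action: aws.String(route53.ChangeActionUpsert),
			ResourceRecordSet: &route53.ResourceRecordSet{
				Name:        aws.String(recordName),
				Type:        aws.String(rrType),
				AliasTarget: alias,
			},
		})
	}
	return changes
}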

As mentioned above, creating both records in all cases, regardless of whether the actual alias'd name has an AAAA record or not, is potentially fine. Questions remain around backward compatibility, any potential scale concerns arising from blindly doubling the number of records, etc.

AFAICT, the concerns around IngressClassParams do not apply to this PR; they apply to existing code in external-dns. It's true that any provider-specific code will always need to be updated to support new behavior and functionality in providers. It's also true that the existing modeling of ALIASes as CNAMEs in source, and the use of an endpoint label to model dual-stack CNAMEs in source, makes it somewhat awkward to keep provider-specific logic out of source. Those don't seem like compelling reasons to leave external-dns broken for this use case. Certainly further development work in the future could improve the situation if it's something the external-dns developers feel warrants further investment.

@johngmyers
Contributor

My proposal would be to remove endpoint.DualstackLabelKey, changing the AWS provider to act as if it was always present. That would probably keep all of the logic inside the AWS provider.
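
Roughly, a sketch of what that might look like in the provider loop quoted earlier in this thread (a sketch only, not the actual implementation in #3605; it assumes the second return value of newChange is changed to mean "this change is an ALIAS record" instead of reading the dualstack label):

for _, endpoint := range endpoints {
	// Assumed: isAlias reports whether the change uses an AliasTarget.
	change, isAlias := p.newChange(action, endpoint, recordsCache, zones)
	changes = append(changes, change)
	if isAlias {
		// Always add an AAAA copy of the ALIAS change; no endpoint label required.
		rrs := *change.ResourceRecordSet
		change2 := &route53.Change{Action: change.Action, ResourceRecordSet: &rrs}
		change2.ResourceRecordSet.Type = aws.String(route53.RRTypeAaaa)
		changes = append(changes, change2)
	}
}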

There might need to be some logic so that the provider would, possibly with the help of core, be able to reconcile discrepancies between the two alias records, such as the AAAA record not existing.

I can take a look at putting together such a PR if @logingood doesn't want dibs.

@johngmyers
Contributor

My competing PR is #3605.

@reltuk

reltuk commented Jun 12, 2023

While I thought CNAMEs were a reasonable workaround, I did come across one situation where it does not work. If you have a delegated zone like infra.mycorp.com, and you want to make the name infra.mycorp.com itself point to an external load balancer, then it's fine to create (ALIAS) records of type A and AAAA, but, of course, there cannot be a CNAME record for the apex of the zone.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 20, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 19, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closed this PR.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Mar 20, 2024
@k8s-ci-robot
Contributor

PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
