
Tons of useless warning log messages for instrumented_http.Transport #2348

Closed
fivesheep opened this issue Oct 2, 2021 · 20 comments · Fixed by #2361, #2375, #2379 or #3870
Assignees
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@fivesheep

fivesheep commented Oct 2, 2021

What happened:
The external-dns v0.10.0 process logs tons of useless warning messages each cycle:

time="2021-10-02T06:13:28Z" level=debug msg="No endpoints could be generated from service monitoring/datadog-cluster-agent"
time="2021-10-02T06:13:28Z" level=debug msg="No endpoints could be generated from service kube-system/kube-dns"
time="2021-10-02T06:13:28Z" level=info msg="All records are already up to date"
W1002 06:13:52.742729       1 transport.go:288] Unable to cancel request for *instrumented_http.Transport
W1002 06:13:52.742773       1 transport.go:288] Unable to cancel request for *instrumented_http.Transport
W1002 06:13:52.742769       1 transport.go:288] Unable to cancel request for *instrumented_http.Transport
W1002 06:13:52.756313       1 transport.go:288] Unable to cancel request for *instrumented_http.Transport
W1002 06:13:52.758841       1 transport.go:288] Unable to cancel request for *instrumented_http.Transport
W1002 06:14:22.742906       1 transport.go:288] Unable to cancel request for *instrumented_http.Transport
W1002 06:14:22.742960       1 transport.go:288] Unable to cancel request for *instrumented_http.Transport
W1002 06:14:22.743193       1 transport.go:288] Unable to cancel request for *instrumented_http.Transport
W1002 06:14:22.757389       1 transport.go:288] Unable to cancel request for *instrumented_http.Transport
W1002 06:14:22.759595       1 transport.go:288] Unable to cancel request for *instrumented_http.Transport

The message is not helpful for debugging and uses a different log format than the rest of the output. It doesn't seem to have any real impact on functionality, but I didn't see this in v0.9.0 with the same config.
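
For what it's worth, my own reading (an assumption, not confirmed): the warning comes from client-go's transport package, which wants to cancel an in-flight request but finds that the configured RoundTripper wrapper doesn't expose a CancelRequest method. A simplified, hypothetical sketch of that kind of check, with instrumentedTransport standing in for the real *instrumented_http.Transport:

package main

import (
	"log"
	"net/http"
)

// instrumentedTransport stands in for *instrumented_http.Transport: it wraps
// another RoundTripper but does not implement CancelRequest(*http.Request).
type instrumentedTransport struct {
	next http.RoundTripper
}

func (t *instrumentedTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	return t.next.RoundTrip(req)
}

// tryCancel mimics the kind of check client-go does around transport.go:288:
// cancel the request if the RoundTripper knows how, otherwise log the
// "Unable to cancel request" warning seen above.
func tryCancel(rt http.RoundTripper, req *http.Request) {
	type canceler interface{ CancelRequest(*http.Request) }
	if c, ok := rt.(canceler); ok {
		c.CancelRequest(req)
		return
	}
	log.Printf("Unable to cancel request for %T", rt)
}

func main() {
	req, _ := http.NewRequest(http.MethodGet, "https://example.com", nil)
	tryCancel(&instrumentedTransport{next: http.DefaultTransport}, req)
}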

What you expected to happen:
Improve the log message so it helps identify the actual issue, and don't log useless output at all when there is no real problem.

How to reproduce it (as minimally and precisely as possible):
Something like the Deployment below on AWS EKS:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns-public
  namespace: {{.Release.Namespace}}
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: external-dns-public
  template:
    metadata:
      labels:
        app: external-dns-public
    spec:
      serviceAccountName: external-dns
      priorityClassName: {{ .Values.priority_class }}
      containers:
        - name: external-dns
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "500m"
          env:
            - name: AWS_REGION
              value: {{ .Values.aws.region }}
          args:
            - --source=service
            - --source=ingress
            - --domain-filter={{ .Values.public_domain.domain_filter }}
            - --provider=aws-sd
            - --aws-zone-type=public
            - --txt-owner-id={{ .Values.public_domain.hosted_zone_id }}
      securityContext:
        fsGroup: 65534

Anything else we need to know?:

Environment: k8s 1.20 on EKS

  • External-DNS version (use external-dns --version): 0.10.0 (official image)
  • DNS provider: aws-sd
  • Others:
@fivesheep fivesheep added the kind/bug Categorizes issue or PR as related to a bug. label Oct 2, 2021
@fivesheep fivesheep changed the title Tons of useless Tons of useless warning log messages for instrumented_http.Transport Oct 2, 2021
@samip5
Contributor

samip5 commented Oct 2, 2021

The provider doesn't seem to matter; I see the same messages with the pdns provider on an on-prem k3s v1.21.4 cluster.

@mycrEEpy

mycrEEpy commented Oct 4, 2021

Looks like klog to me. I would assume it was the same in previous versions, but klog had been silenced before with:

replace (
	k8s.io/klog => github.com/mikkeloscar/knolog v0.0.0-20190326191552-80742771eb6b
)

This replace directive has been removed since v0.10.0.

@marcincuber

Seeing exactly the same issue on EKS 1.21 with external-dns 0.10.0.

@svonliebenstein
Contributor

@mycrEEpy I've found that it was removed in #2218, but the PR doesn't say anything about removing the replace. Maybe this was an unintentional side effect?

If so, it looks like a small fix to put it back in.

@Raffo
Contributor

Raffo commented Oct 9, 2021

I didn't notice the error when removing the replace, sorry. We can put it back and have 0.10.1 released soon.

@Raffo
Contributor

Raffo commented Oct 9, 2021

#2361 is the fix; we will work to get it tested, merged, and released next week.

@Raffo
Contributor

Raffo commented Oct 11, 2021

I tried the new build and it still happens even with the klog fix. I'm gonna have to take a deeper look at this, reopening for now.

@Raffo Raffo reopened this Oct 11, 2021
@rouke-broersma

rouke-broersma commented Oct 12, 2021

@Raffo
Contributor

Raffo commented Oct 12, 2021

@rouke-broersma yeah, I will do the diff as soon as I find some time and try to figure out what's wrong.

@michelefa1988

Same issue here

@ionutz89

Same issue here

@svonliebenstein
Contributor

@Raffo Does #2375 help you?
This could very well be the fix, but I lack the proper setup to verify it.

@Raffo
Contributor

Raffo commented Oct 14, 2021

We can try getting this in so that we get the image built and then I can re-run the e2e tests.

@Raffo
Contributor

Raffo commented Oct 14, 2021

Nope, not even that fixed the problem 😓 I have to take some time which I don't really have right now 😅

@Raffo Raffo reopened this Oct 14, 2021
@Raffo Raffo self-assigned this Oct 14, 2021
@svonliebenstein
Contributor

svonliebenstein commented Oct 15, 2021

Aah too bad, sorry it didn't fix it.

I do have a good idea of what caused the issue now. https://github.com/kubernetes-sigs/external-dns/pull/2218/files#diff-33ef32bf6c23acb95f5902d7097b7a1d5128ca061167ec0716715b0b9eeaa5f6R73 updated the client-go version to 0.22.0, which no longer depends on the old klog; it now uses the newer k8s.io/klog/v2 dependency.

It should be possible to set a log level instead of replacing the klog module, so that every client-go warning isn't logged in external-dns. I don't know how to do this myself; it would take someone with more Go experience to have a go at it.
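
A minimal sketch of one way that could look, assuming we simply want to silence klog v2 output entirely (an illustration of the idea, not necessarily what the eventual fix does): register klog's flags on a private FlagSet, stop it from writing to stderr, and discard whatever it would emit.

package main

import (
	"flag"
	"io"

	"k8s.io/klog/v2"
)

func main() {
	// Register klog v2's flags on a private FlagSet so they don't collide
	// with external-dns's own command-line flags.
	fs := flag.NewFlagSet("klog", flag.ExitOnError)
	klog.InitFlags(fs)

	// Stop klog from writing straight to stderr...
	_ = fs.Set("logtostderr", "false")
	_ = fs.Set("alsologtostderr", "false")

	// ...and discard everything it would have written, which silences
	// client-go's "Unable to cancel request for ..." warnings too.
	klog.SetOutput(io.Discard)

	// external-dns startup would continue here.
}

klog also exposes SetOutputBySeverity, which could drop only the WARNING stream if the rest of client-go's logging is still wanted.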

@Raffo
Contributor

Raffo commented Oct 15, 2021

I can figure that out, it just requires a bit more time than what I had this week 😅 Hopefully I can tackle this next week, given that we ran out of all possible easy wins.

@Raffo
Contributor

Raffo commented Oct 18, 2021

I can confirm #2379 fixes this for good, end-to-end tested. We'll make a patch release this week.

@svonliebenstein
Contributor

Super, thanks @Raffo!

@vipulJain05

I am facing the same issue, getting these errors every single minute: **Unable to cancel request for *instrumented_http.Transport**. I have v0.10.1 and am still facing it. I installed it using the Bitnami chart.

@gazal-k

gazal-k commented Nov 19, 2021

> I am facing the same issue, getting these errors every single minute: **Unable to cancel request for *instrumented_http.Transport**. I have v0.10.1 and am still facing it. I installed it using the Bitnami chart.

Application version: v0.10.1+ & chart versions 1.4.0+ have the fix
