Use k8s.pod.ip to record resource IP instead of just ip #183

Merged: 1 commit merged into open-telemetry:master on Apr 22, 2020
Conversation

@owais (Contributor) commented on Apr 22, 2020

Description: Use `k8s.pod.ip` to record the resource IP instead of just `ip`.
Since the k8s processor prefixes all labels with `k8s.*.`, this change adds the
same prefix to the IP label. We'll still look for the `ip` label on the
resource/node when we can't find the IP by other means, but we will only write
the IP back to `k8s.pod.ip`.

Testing: Tested locally and updated the unit tests.
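
For illustration, here is a minimal Go sketch of the lookup/write behaviour described above. The constant names mirror the ones discussed in this PR, but the map-based attributes and helper function are simplified placeholders, not the actual k8sprocessor code:

```go
package main

import "fmt"

// Illustrative constants; the real processor defines its own label constants.
const (
	k8sIPLabelName = "k8s.pod.ip" // prefixed label the processor now writes
	ipLabelName    = "ip"         // legacy label still accepted when reading
)

// resolvePodIP prefers the prefixed k8s.pod.ip attribute and falls back to
// the legacy ip attribute when the prefixed one is missing.
func resolvePodIP(attrs map[string]string) string {
	if ip := attrs[k8sIPLabelName]; ip != "" {
		return ip
	}
	return attrs[ipLabelName]
}

func main() {
	// A resource that only carries the legacy label.
	attrs := map[string]string{ipLabelName: "10.1.2.3"}

	if podIP := resolvePodIP(attrs); podIP != "" {
		// The IP is written back only under the prefixed key.
		attrs[k8sIPLabelName] = podIP
	}
	fmt.Println(attrs[k8sIPLabelName]) // 10.1.2.3
}
```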

```diff
 }

 // Jaeger client libs tag the process with the process/resource IP and
 // jaeger to OC translator maps jaeger process to OC node.
 // TODO: Should jaeger translator map jaeger process to OC resource instead?
 if podIP == "" && td.SourceFormat == sourceFormatJaeger {
 	if td.Node != nil {
-		podIP = td.Node.Attributes[ipLabelName]
+		podIP = td.Node.Attributes[clientIPLabelName]
```
Member commented:
Is it possible that `k8sIPLabelName` exists in `td.Node.Attributes` at this point? Do we need to check for it as well?

@owais (Contributor, Author) replied:

Unlikely but possible. I think it can happen if the OTel agent detects and adds the IP to the resource but then exports to the OTel collector in Jaeger or SAPM format instead of OTLP/OC. I don't see any downsides to covering this case. Will update.
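
A hedged sketch of the extra check being discussed, again with simplified placeholder types rather than the collector's real ones; `clientIPLabelName` resolving to the Jaeger client's `ip` tag is an assumption:

```go
package main

import "fmt"

// Illustrative constants mirroring the names in the diff above.
const (
	k8sIPLabelName    = "k8s.pod.ip"
	clientIPLabelName = "ip" // tag set by Jaeger client libraries (assumed)
)

// jaegerNodePodIP checks the prefixed k8s.pod.ip attribute on the translated
// Jaeger node first, then falls back to the client IP tag, covering the case
// where an upstream agent already added the prefixed label before exporting
// over Jaeger or SAPM.
func jaegerNodePodIP(nodeAttrs map[string]string) string {
	for _, key := range []string{k8sIPLabelName, clientIPLabelName} {
		if ip := nodeAttrs[key]; ip != "" {
			return ip
		}
	}
	return ""
}

func main() {
	attrs := map[string]string{k8sIPLabelName: "10.1.2.3"}
	fmt.Println(jaegerNodePodIP(attrs)) // 10.1.2.3
}
```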

@tigrannajaryan (Member) left a comment:

LGTM

@tigrannajaryan merged commit 7370900 into open-telemetry:master on Apr 22, 2020

@tigrannajaryan (Member) commented:

Thanks @owais!

@owais deleted the k8s-processor-use-k8s-prefixed-label-for-ip branch on June 5, 2020 at 12:23
wyTrivail referenced this pull request in mxiamxia/opentelemetry-collector-contrib Jul 13, 2020
mxiamxia referenced this pull request in mxiamxia/opentelemetry-collector-contrib Jul 22, 2020
* Use batcher and queued-retry on perf tests

The typical usage is expected to have both the batcher and the queued-retry, so the perf tests are changed to use them. Also adds a test with addattributesprocessor that requires a different configuration file.

* Merge corrections

* Fix configuration, do not batch for now

* Adjust memory for TestNoBackend10kSPS
bjrara pushed a commit to bjrara/opentelemetry-collector-contrib that referenced this pull request on Apr 3, 2024 (open-telemetry#183)

* [receivers/awscontainerinsightsreceiver] Add podresourcesstore in awscontainerinsightreceiver (open-telemetry#167)

ref: amazon-contributing#167