
kubeletstatsreceiver: Add ability to collect detailed data from PVC #743

Merged: 3 commits merged into open-telemetry:master on Aug 25, 2020

Conversation

asuresh4 (Member)

Description: For pods using a Persistent Volume Claim, the Kubelet API only provides information about the claim and not the actual underlying physical resource. This change adds the ability to optionally look up the underlying physical volume and collect labels accordingly.

Testing: Added tests.

Documentation: Updated README.
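
For context, a minimal configuration sketch that would exercise this lookup. Option names follow the receiver's README; the endpoint, interval, and auth values here are illustrative, not prescribed by this PR:

```yaml
receivers:
  kubeletstats:
    collection_interval: 20s
    auth_type: serviceAccount
    endpoint: "${K8S_NODE_NAME}:10250"
    extra_metadata_labels:
      - k8s.volume.type
    # Optional: gives the receiver Kubernetes API access so a PVC can be
    # resolved to its underlying physical volume (e.g. an EBS volume on AWS).
    k8s_api_config:
      auth_type: serviceAccount
```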

asuresh4 requested a review from a team on August 17, 2020.
asuresh4 (Member, Author)

Depends on #690. Will rebase once that's merged.

codecov bot commented on Aug 17, 2020

Codecov Report

Merging #743 into master will increase coverage by 0.06%.
The diff coverage is 100.00%.


@@            Coverage Diff             @@
##           master     #743      +/-   ##
==========================================
+ Coverage   88.06%   88.13%   +0.06%     
==========================================
  Files         233      233              
  Lines       12359    12428      +69     
==========================================
+ Hits        10884    10953      +69     
  Misses       1120     1120              
  Partials      355      355              
Flag           Coverage Δ
#integration   72.00% <ø> (ø)
#unit          87.95% <100.00%> (+0.06%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files                                      Coverage Δ
receiver/kubeletstatsreceiver/receiver.go           88.23% <ø> (ø)
receiver/kubeletstatsreceiver/config.go             100.00% <100.00%> (ø)
receiver/kubeletstatsreceiver/kubelet/metadata.go   100.00% <100.00%> (ø)
receiver/kubeletstatsreceiver/kubelet/metrics.go    100.00% <100.00%> (ø)
receiver/kubeletstatsreceiver/kubelet/resource.go   100.00% <100.00%> (ø)
receiver/kubeletstatsreceiver/kubelet/volume.go     100.00% <100.00%> (ø)
receiver/kubeletstatsreceiver/runnable.go           100.00% <100.00%> (ø)

Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 01b752a...bf15443.

dmitryax (Member)

Looks like it includes changes from the previous PR. Could you rebase, please?

asuresh4 force-pushed the kubelet-volumes branch 2 times, most recently from 1a46a26 to ebf3db4, on August 20, 2020.
@@ -86,7 +91,7 @@ func (r *runnable) Run() error {
 	}
 
 	metadata := kubelet.NewMetadata(r.extraMetadataLabels, podsMetadata)
-	mds := kubelet.MetricsData(r.logger, summary, metadata, typeStr, r.metricGroupsToCollect)
+	mds := kubelet.MetricsData(r.logger, summary, metadata, typeStr, r.metricGroupsToCollect, r.getPersistentVolumeLabelsFromClaim)
Review comment (Member):

Can't we add conditionally prefetched volumeMetadata to the metadata structure above instead of passing volumeClaimLabelsSetter around?

asuresh4 (Member, Author) replied:

I guess we could do that instead. It also seems like doing that might make it cleaner to cache information about volumes. The Metadata struct would gather all the required metadata about volumes upfront and later map it back to the corresponding volumes while actually collecting the metrics. Will try that out and push an update.

asuresh4 (Member, Author) commented on Aug 21, 2020:

Right now, as errors are encountered while collecting metadata about volumes, they're logged and metric collection for that specific volume is skipped. If the receiver were to collect all the metadata upfront, then to preserve the current behavior it would also need to track the errors alongside the collected metadata so the exact error could still be logged. To avoid that, I moved the volume metadata setter method to the Metadata struct for now, but it's still invoked when the metrics are collected. Let me know what you think.
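
A minimal sketch of the approach described in that reply, assuming hypothetical names (detailedPVCLabelsSetter, setExtraVolumeLabels are illustrative, not the PR's actual identifiers): the Metadata struct carries an optional setter that is only invoked per volume during metric collection, so a lookup failure can be logged and just that one volume skipped.

```go
package kubelet

// Metadata holds pod metadata plus an optional hook for resolving a
// PersistentVolumeClaim to its underlying physical volume. (Sketch only;
// field and function names here are hypothetical.)
type Metadata struct {
	// detailedPVCLabelsSetter resolves a claim to its backing volume and
	// writes labels such as k8s.volume.type into the provided map. It is
	// nil when k8s_api_config is not set.
	detailedPVCLabelsSetter func(claimName, namespace string, labels map[string]string) error
}

// setExtraVolumeLabels is called for each persistentVolumeClaim volume while
// metrics are being collected. Returning the error lets the caller log it and
// skip metrics for that one volume, matching the behavior described above.
func (m *Metadata) setExtraVolumeLabels(claimName, namespace string, labels map[string]string) error {
	if m.detailedPVCLabelsSetter == nil {
		// No Kubernetes API access configured: keep the claim-level labels.
		return nil
	}
	return m.detailedPVCLabelsSetter(claimName, namespace, labels)
}
```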

asuresh4 force-pushed the kubelet-volumes branch 2 times, most recently from 238ad2c to ce31ed0, on August 21, 2020.
dmitryax (Member) left a comment:

Overall, looks good


If `k8s_api_config` is set, the receiver will attempt to collect metadata from underlying storage resources for
Persistent Volume Claims. For example, if a Pod is using a PVC backed by an EBS instance on AWS, the receiver
would set the `k8s.volume.type` label to be `awsElasticBlockStore` rather than `persistentVolumeClaim`.
Review comment (Member):

Should we add a couple more sentences describing the extra metadata labels that would be added from underlying storage resources? Based on this description, it looks like only k8s.volume.type will be set.

asuresh4 (Member, Author) replied:

Updated, added a note stating this.
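
To illustrate the note being added, a hypothetical example of the labels this lookup might produce for a PVC backed by EBS. The volume ID is invented, and the exact set of extra labels depends on the backing volume type; label names follow the receiver's convention as I understand it:

```
k8s.volume.type: awsElasticBlockStore
aws.volume.id: vol-0123456789abcdef0
```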

bogdandrutu (Member)

Rebase to pass tests?

For pods using a Persistent Volume Claim, the Kubelet API only provides information about
the claim and not the actual underlying physical resource. Add ability to optionally look
up the underlying physical volume and collect labels accordingly.
asuresh4 (Member, Author)

Rebased to re-trigger tests.

bogdandrutu merged commit 2675a1d into open-telemetry:master on Aug 25, 2020.
asuresh4 deleted the kubelet-volumes branch on September 3, 2020.
dyladan referenced this pull request in dynatrace-oss-contrib/opentelemetry-collector-contrib on Jan 29, 2021.
ljmsc referenced this pull request in ljmsc/opentelemetry-collector-contrib on Feb 21, 2022.