Features: Token caching and rate-limit #99

Open
toomai opened this issue Apr 4, 2024 · 0 comments
toomai commented Apr 4, 2024

Hello,

I'd like to start by saying thank you for this Puppet function.
It's really useful for integrating easily with Vault without needing to deploy additional tools such as the Vault agent.

We plan to deploy this function and use it at a fairly large scale, so I have studied the code quite thoroughly.
I noticed some features that could be added and would like your opinion on them.
I'm happy to contribute if you feel these ideas are useful!

Reuse the Vault token and handle the lifecycle of the identity token.

Currently, secrets are cached for the duration of the catalog compilation or application, depending on whether the lookup is deferred or not.
That is not the case for the token, though: each secret lookup results in a POST call to the login endpoint to obtain a new token.
Tokens are not revoked after the lookup either.

What do you think about caching the token as well (opt-in)?
This would bring some additional complexity if we want to handle the lifecycle of the token (renew, extend TTL, and so on).
But we could decide that the token is only cached for the duration of the catalog application, neither extended nor renewed, to avoid too many difficulties; a rough sketch of what that could look like is below.
I do think revoking the token upon catalog completion would be nice regardless of the decision about token caching.
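To make the idea concrete, here is a minimal sketch of what opt-in token caching plus revocation could look like in a Ruby function. The names (`VaultTokenCache`, `login_to_vault`) are purely illustrative, not existing parts of this module: the token is memoized per Vault address for the lifetime of the Ruby process, never renewed, and revoked in one pass at the end of the run.

```ruby
require 'net/http'
require 'uri'

# Hypothetical module-level cache: tokens are memoized per Vault address for
# the lifetime of this Ruby process (i.e. at most one catalog compilation or
# application), and are never renewed or extended.
module VaultTokenCache
  @tokens = {}
  @mutex  = Mutex.new

  # Return the cached token for this Vault address, or run the block
  # (the existing login logic) once and cache its result.
  def self.fetch(vault_addr)
    @mutex.synchronize do
      @tokens[vault_addr] ||= yield
    end
  end

  # Revoke every cached token, e.g. from an at_exit hook once the run is done.
  def self.revoke_all!
    @mutex.synchronize do
      @tokens.each do |addr, token|
        uri  = URI("#{addr}/v1/auth/token/revoke-self")
        http = Net::HTTP.new(uri.host, uri.port)
        http.use_ssl = uri.scheme == 'https'
        req  = Net::HTTP::Post.new(uri)
        req['X-Vault-Token'] = token
        http.request(req)
      end
      @tokens.clear
    end
  end
end

# Example use inside a lookup; `login_to_vault` stands in for whatever auth
# call the function already performs today.
# token = VaultTokenCache.fetch(vault_addr) { login_to_vault(vault_addr) }
```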

Handle HTTP rate-limit

Vault allows administrators to define quotas above which clients receive 429 responses.
Additionally, Vault can be configured to send back the rate-limit headers Retry-After (as per RFC 9110), X-Ratelimit-Limit, X-Ratelimit-Remaining and X-Ratelimit-Reset.
These could be used to retry intelligently without overwhelming the backing Vault service; a sketch of such a retry loop follows below.
It would of course need a maximum deadline to avoid hanging Puppet for hours, and this maximum deadline should be configurable as well.
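As a rough illustration, here is what such a retry loop could look like in Ruby. The helper name and the `max_deadline` parameter are hypothetical, not existing options of this function; the sketch assumes Retry-After arrives in its delta-seconds form.

```ruby
require 'net/http'
require 'uri'

# Hypothetical retry helper: honours Retry-After on 429 responses and gives up
# once a configurable deadline (in seconds) has elapsed.
def request_with_rate_limit_retry(uri, request, max_deadline: 60)
  deadline = Time.now + max_deadline
  loop do
    response = Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == 'https') do |http|
      http.request(request)
    end
    return response unless response.code == '429'

    # Prefer the server-provided hint (assumed to be delta-seconds),
    # fall back to a small fixed delay when the header is absent.
    wait = (response['Retry-After'] || 2).to_i
    raise 'Vault rate-limit retries exceeded the configured deadline' if Time.now + wait > deadline

    sleep(wait)
  end
end
```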

Let me know what you think about these ideas; again, I'm happy to contribute.
