Protect multiple RSA requesters from each other #866
Conversation
This seems significant enough (and subtle enough) to warrant adding some tests too, I think? No better way to prevent future regressions than enforcing it in code!
// cache when we wake up in the browser as we initially provide on the
// server.
//
this.dfd.resolve(JSON.stringify(res));
I apparently can't leave comments willy-nilly in this file, but should we do the same thing for error responses (line ~209)?
Yeah, it's annoying that you can click to see those lines but can't comment on them. 👿
Nice catch, didn't realize those were handled separately. 👍
Rather than pass a different mutable copy to each requester, why not pass a single immutable reference to each requester?
@doug-wade How would that work?
If we were willing to add a library for immutability, it'd be really simple.
If instead we wanted to roll our own, we could do a recursive freeze:
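A minimal sketch of the "roll our own" recursive freeze suggested above. `deepFreeze` is not part of react-server; it's just an illustration of how a single shared reference could be made safe to hand to every requester (note it doesn't guard against cyclic objects):

```javascript
// Recursively freeze an object and everything reachable from it.
// Assumes no cycles; a production version would track visited objects.
function deepFreeze(obj) {
	Object.getOwnPropertyNames(obj).forEach(function (name) {
		var value = obj[name];
		if (value !== null && typeof value === 'object' && !Object.isFrozen(value)) {
			deepFreeze(value);
		}
	});
	return Object.freeze(obj);
}

var res = deepFreeze({ body: { items: [1, 2, 3] } });
// Mutation attempts are now silently ignored (or throw in strict mode),
// so one requester can't corrupt another's view of the data.
```

The trade-off is that callers who legitimately want to mutate their copy of the response would break, which is why this would be a breaking change.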
@doug-wade I see. That would be a pretty big breaking change. Worth discussing, but I'd still like to get a fix for the bug out in the meanwhile. 🐛
@@ -223,7 +217,7 @@ class CacheEntry {
	// server-side, we increment the number of requesters
	// we expect to retrieve the data on the frontend
	this.requesters += 1;
-	return this.dfd.promise;
+	return this.dfd.promise.then(val => JSON.parse(val));
don't we need to parse the JSON in both the `if` and `else` blocks?
Oh, probably. 😁
Nice catch @drewpc!
Seems like there's more work and it's failing the tests. Should I close the PR for now?
It's not dead yet! It could pull through!
If two ReactServerAgent requests are made to a given endpoint only one upstream http request is actually issued, and the result is provided to both requesters. Previously this result was passed by reference, so mutations by one requester interfered with the data for others. This patch provides a fresh deep copy to each requester. This has the unfortunate side effect of introducing a deep copy in the browser where we previously thought we could get away without one. It's a minor perf hit, but it's important for data integrity.
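The shared-reference bug and the JSON round-trip fix described above can be sketched as follows (illustrative only, not react-server's actual code):

```javascript
// Passing by reference: one requester's mutation leaks into the
// shared result that other requesters also see.
var shared = { body: { count: 0 } };
var viewA = shared;
viewA.body.count = 99;
console.log(shared.body.count); // 99 -- corrupted for everyone

// JSON round-trip: cache the serialized form once, then each
// requester parses its own independent deep copy.
var cached = JSON.stringify({ body: { count: 0 } });
var copy1 = JSON.parse(cached);
var copy2 = JSON.parse(cached);
copy1.body.count = 99;
console.log(copy2.body.count); // 0 -- unaffected
```

The cost is one stringify plus one parse per requester in the browser, which is the minor perf hit the description mentions.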
Good call. Done.
I don't think so, actually. Error responses are less likely to be mutated, and their handling is out of band.
The coverage here was already pretty good (thanks @roblg), so I just modified existing tests to verify that values are deeply equal, but not references. So... how about it? 😁