Various cache read and write performance optimizations. #5948
Commits on Feb 15, 2020
Use Set for DeepMerger pastCopies instead of array/indexOf. (d6edbae)
When I wrote this code, I thought this array would usually be so short that indexOf would be faster than Set.prototype.has, but of course the pathological cases are what end up mattering, and I've recently seen some result objects that cause thousands of shallow copies to be made over a series of many merges using one DeepMerger instance.
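A minimal sketch of the data-structure swap, assuming a DeepMerger whose job is to shallow-copy each object at most once per merge pass (names and surrounding details are illustrative):

```ts
class DeepMerger {
  // Previously an array scanned with indexOf, i.e. an O(n) check per lookup.
  // A Set makes the "have we already copied this object?" test O(1), which
  // matters once thousands of shallow copies accumulate over many merges
  // performed with the same DeepMerger instance.
  private pastCopies = new Set<any>();

  // Return a version of `value` that is safe to mutate, copying at most once.
  shallowCopyForMerge<T>(value: T): T {
    if (value !== null && typeof value === "object" && !this.pastCopies.has(value)) {
      value = Array.isArray(value)
        ? ((value as any).slice(0) as T)
        : ({ ...(value as any) } as T);
      this.pastCopies.add(value);
    }
    return value;
  }
}
```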
Optimize shouldInclude for common case of no directives. (db1a73d)
Since shouldInclude gets called for every single field in any read or write operation, it's important that it takes any available shortcuts to handle the common case (no directives) as cheaply as possible.
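A hedged sketch of that shortcut, using the graphql AST types; the real shouldInclude validates @skip/@include more strictly, but the point is the early return before any per-directive work:

```ts
import { SelectionNode } from "graphql";

function shouldInclude(
  selection: SelectionNode,
  variables?: Record<string, any>,
): boolean {
  // Fast path: the vast majority of fields carry no directives at all, so
  // return immediately without allocating or iterating anything.
  if (!selection.directives || !selection.directives.length) {
    return true;
  }

  // Slow path (simplified): honor @skip/@include by evaluating their `if` argument.
  return selection.directives.every(directive => {
    const name = directive.name.value;
    if (name !== "skip" && name !== "include") return true;
    const ifArg =
      directive.arguments &&
      directive.arguments.find(arg => arg.name.value === "if");
    if (!ifArg) return true;
    const ifValue =
      ifArg.value.kind === "Variable"
        ? Boolean(variables && variables[ifArg.value.name.value])
        : ifArg.value.kind === "BooleanValue" && ifArg.value.value;
    return name === "include" ? ifValue : !ifValue;
  });
}
```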
Avoid repeatedly encoding context.variables with JSON.stringify. (96b5a64)
Since any of the provided variables could be consumed by any of the fields in a selection set that we're reading, all variables are potentially relevant as part of the result object cache key, so we don't make any attempt to stringify just a subset of the variables. However, since we use the same stringified variables in every cache key, there's no need to perform that encoding repeatedly. JSON.stringify may be fast, but the variables object can be arbitrarily large.
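Sketched with hypothetical names (the real read/write context carries much more than this), the idea is simply to pay for JSON.stringify once per operation:

```ts
// Illustrative context shape.
interface ReadContext {
  variables: Record<string, any>;
  // Computed once per read or write operation and reused for every field.
  varString: string;
}

function makeReadContext(variables: Record<string, any>): ReadContext {
  return {
    variables,
    varString: JSON.stringify(variables),
  };
}

// Before: every cache key paid for JSON.stringify(context.variables).
// After: the precomputed string is just concatenated into each key.
function makeResultKey(entityId: string, context: ReadContext): string {
  return entityId + ":" + context.varString;
}
```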
Track policies.rootIdsByTypename as well as policies.rootTypenamesById. (0c6ee74)
Believe it or not, iterating over the values of policies.rootTypenamesById was noticeably expensive according to Chrome devtools profiling. Since this information almost never changes, we might as well maintain it in the format that's most convenient.
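A sketch of keeping the inverse map in sync, so a lookup by typename is a single property access instead of a scan over rootTypenamesById's values (the surrounding Policies machinery is elided):

```ts
class Policies {
  // Existing map: "ROOT_QUERY" -> "Query", "ROOT_MUTATION" -> "Mutation", ...
  readonly rootTypenamesById: Record<string, string> = Object.create(null);
  // Inverse map, maintained alongside it: "Query" -> "ROOT_QUERY", ...
  readonly rootIdsByTypename: Record<string, string> = Object.create(null);

  // Root typenames almost never change, so both maps are updated together in
  // the one place that configures them, and everywhere else just reads.
  setRootTypename(
    which: "Query" | "Mutation" | "Subscription",
    typename: string,
  ) {
    const rootId = "ROOT_" + which.toUpperCase();
    const old = this.rootTypenamesById[rootId];
    if (old !== typename) {
      if (old) delete this.rootIdsByTypename[old];
      this.rootIdsByTypename[typename] = rootId;
      this.rootTypenamesById[rootId] = typename;
    }
  }
}
```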
Avoid using JSON.stringify([...]) in makeDepKey helper function. (1b8208e)
Creating a throwaway array just to call JSON.stringify was much more expensive than string concatenation. The exact format of these cache keys is an invisible implementation detail, so I picked something that seemed unlikely ever to be ambiguous, though we can easily change it later.
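Roughly the before/after shape; the exact key format and signature are internal details, so these names are only illustrative:

```ts
// Before: allocate a throwaway array just so JSON.stringify can serialize it.
function makeDepKeyBefore(dataId: string, fieldName?: string): string {
  return JSON.stringify(fieldName ? [dataId, fieldName] : [dataId]);
}

// After: plain string concatenation. The separator only has to be unambiguous,
// since these keys never escape the cache's own bookkeeping.
function makeDepKeyAfter(dataId: string, fieldName?: string): string {
  return fieldName ? fieldName + "#" + dataId : dataId;
}
```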
Avoid calling policies.applyMerges unless mergeable fields found. (e3dd0b9)
Since policies.applyMerges doesn't change anything unless there are custom merge functions to process, we can skip calling it if no merge functions were found while processing the current entity.
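One way to sketch the shortcut, with deliberately simplified types; the real write path is more involved, but the shape of the check is the point:

```ts
// Illustrative types; the real StoreObject and Policies interfaces are richer.
type StoreObject = Record<string, any>;

interface FieldPolicies {
  hasMergeFunction(typename: string | undefined, fieldName: string): boolean;
  applyMerges(existing: StoreObject, incoming: StoreObject): StoreObject;
}

function processEntity(
  policies: FieldPolicies,
  existing: StoreObject,
  incoming: StoreObject,
): StoreObject {
  // Note, while processing the entity's fields, whether any of them have a
  // custom merge function configured.
  const hasMergeableFields = Object.keys(incoming).some(fieldName =>
    policies.hasMergeFunction(incoming.__typename, fieldName),
  );

  // applyMerges changes nothing unless some field has a merge function, so
  // skip the call (and the traversal it would do) entirely when none do.
  return hasMergeableFields
    ? policies.applyMerges(existing, incoming)
    : incoming;
}
```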
Avoid forEach and fragment recursion in processSelectionSet. (852681d)
Instead of recursively calling processSelectionSet to handle fragments, we can simply treat their fields as fields of the current selection set.
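A sketch of the flattening, with fragment type conditions and directive checks omitted; rather than recursing for each fragment, the fragment's selections are pushed onto the same worklist as the parent's own fields:

```ts
import {
  FieldNode,
  FragmentDefinitionNode,
  SelectionNode,
  SelectionSetNode,
} from "graphql";

function collectFields(
  selectionSet: SelectionSetNode,
  fragmentMap: Record<string, FragmentDefinitionNode>,
): FieldNode[] {
  const fields: FieldNode[] = [];
  // A single worklist replaces both the forEach and the recursive calls.
  const workSet: SelectionNode[] = [...selectionSet.selections];

  for (let i = 0; i < workSet.length; ++i) {
    const selection = workSet[i];
    if (selection.kind === "Field") {
      fields.push(selection);
    } else if (selection.kind === "InlineFragment") {
      // Treat the fragment's fields as fields of the current selection set.
      workSet.push(...selection.selectionSet.selections);
    } else {
      // FragmentSpread: inline the named fragment's selections the same way.
      const fragment = fragmentMap[selection.name.value];
      if (fragment) workSet.push(...fragment.selectionSet.selections);
    }
  }

  return fields;
}
```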
Avoid forEach and fragment recursion in executeSelectionSet. (df20ff5)
This change means fragment results are no longer cached separately from ordinary selection set results, which is potentially a loss of caching granularity. However, it also reduces caching overhead: we cache fewer result objects, we no longer have to merge them all together, and (most importantly) the result caching system still tracks dependencies exactly as before. It's as if we transformed the query by inlining fragment selections, except without doing any work!
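Conceptually (shown here with graphql-tag, purely as an illustration), reading the first document now produces and caches results just as if the client had been handed the second:

```ts
import gql from "graphql-tag";

// A query that spreads a fragment...
const withFragment = gql`
  query {
    currentUser {
      ...UserFields
    }
  }
  fragment UserFields on User {
    id
    name
  }
`;

// ...is now read (and its results cached) as if it had been written like this:
const inlined = gql`
  query {
    currentUser {
      id
      name
    }
  }
`;
```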
Commits on Feb 16, 2020
Use a Set to process and deduplicate fields. (dc61865)
Although this may seem like a reversion to forEach instead of a for loop, the for loop had an unexpectedly negative impact on minification, and a Set has the ability to deduplicate selection objects, so we never re-process the same field multiple times through different fragments.
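Revisiting the earlier collectFields sketch with this commit applied: a Set drives the iteration (values added during forEach are still visited, per the Set.prototype.forEach spec) and also deduplicates selection objects, so a selection reached through more than one fragment path is processed only once:

```ts
import {
  FieldNode,
  FragmentDefinitionNode,
  SelectionSetNode,
} from "graphql";

function collectFields(
  selectionSet: SelectionSetNode,
  fragmentMap: Record<string, FragmentDefinitionNode>,
): FieldNode[] {
  const fields: FieldNode[] = [];
  // Adding the same selection object twice is a no-op, so shared fragment
  // selections are never re-processed; and because forEach visits values
  // added during iteration, the Set doubles as the worklist.
  const workSet = new Set(selectionSet.selections);

  workSet.forEach(selection => {
    if (selection.kind === "Field") {
      fields.push(selection);
    } else if (selection.kind === "InlineFragment") {
      selection.selectionSet.selections.forEach(s => workSet.add(s));
    } else {
      const fragment = fragmentMap[selection.name.value];
      if (fragment) {
        fragment.selectionSet.selections.forEach(s => workSet.add(s));
      }
    }
  });

  return fields;
}
```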
Commits 7945544 and 0a4ae81 (messages not shown).