
Various cache read and write performance optimizations. #5948

Merged · 11 commits · Feb 16, 2020

Commits on Feb 15, 2020

  1. Use Set for DeepMerger pastCopies instead of array/indexOf.

    When I wrote this code, I thought this array would usually be so short
    that indexOf would be faster than Set.prototype.has, but of course the
    pathological cases are what end up mattering, and I've recently seen some
    result objects that cause thousands of shallow copies to be made over a
    series of many merges using one DeepMerger instance.
    benjamn committed Feb 15, 2020 · d6edbae
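    A minimal sketch of the data-structure swap, assuming a DeepMerger shaped
    roughly like the one described (only pastCopies and a copy-on-write helper
    are shown; the real class does more):

    ```ts
    class DeepMerger {
      // Set.prototype.has is O(1) on average, whereas Array.prototype.indexOf
      // is O(n), which matters once thousands of shallow copies accumulate
      // over a series of merges using one DeepMerger instance.
      private pastCopies = new Set<any>();

      private shallowCopyForMerge<T>(value: T): T {
        if (value && typeof value === "object" && !this.pastCopies.has(value)) {
          const copy: any = Array.isArray(value) ? value.slice(0) : { ...value };
          this.pastCopies.add(copy);
          return copy;
        }
        // Either not copyable, or already a copy this merger made (safe to mutate).
        return value;
      }
    }
    ```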
  2. Optimize shouldInclude for common case of no directives.

    Since shouldInclude gets called for every single field in any read or
    write operation, it's important that it takes any available shortcuts to
    handle the common case (no directives) as cheaply as possible.
    benjamn committed Feb 15, 2020 · db1a73d
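    The shortcut might look like this sketch; the directive-evaluation branch
    is a simplified stand-in, not the actual Apollo implementation:

    ```ts
    import { SelectionNode } from "graphql";

    function shouldInclude(
      selection: SelectionNode,
      variables: Record<string, any> = {},
    ): boolean {
      // Fast path: the vast majority of fields carry no directives at all,
      // so return before allocating or iterating anything.
      if (!selection.directives || selection.directives.length === 0) {
        return true;
      }
      return selection.directives.every(directive => {
        const name = directive.name.value;
        if (name !== "skip" && name !== "include") return true;
        const ifArg = directive.arguments?.find(arg => arg.name.value === "if");
        if (!ifArg) return true;
        const value = ifArg.value;
        const test =
          value.kind === "BooleanValue"
            ? value.value
            : value.kind === "Variable"
              ? Boolean(variables[value.name.value])
              : true; // other value kinds elided in this sketch
        return name === "include" ? test : !test;
      });
    }
    ```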
  3. Avoid repeatedly encoding context.variables with JSON.stringify.

    Since any of the provided variables could be consumed by any of the fields
    in a selection set that we're reading, all variables are potentially
    relevant as part of the result object cache key, so we don't make any
    attempt to stringify just a subset of the variables. However, since we use
    the same stringified variables in every cache key, there's no need to
    perform that encoding repeatedly. JSON.stringify may be fast, but the
    variables object can be arbitrarily large.
    benjamn committed Feb 15, 2020 · 96b5a64
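    A sketch of the memoization, assuming a context object created once per
    read or write operation (the varString field name and context shape are
    assumptions based on this description):

    ```ts
    interface ReadContext {
      variables: Record<string, any>;
      // Encoded once, when the context is created, then reused in every key.
      varString: string;
    }

    function makeReadContext(variables: Record<string, any>): ReadContext {
      return { variables, varString: JSON.stringify(variables) };
    }

    // Each per-selection-set cache key reuses context.varString instead of
    // re-running JSON.stringify on a potentially large variables object.
    function makeCacheKey(context: ReadContext, dataId: string): string {
      return dataId + ":" + context.varString;
    }
    ```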
  4. Track policies.rootIdsByTypename as well as policies.rootTypenamesById.

    Believe it or not, iterating over the values of policies.rootTypenamesById
    was noticeably expensive according to Chrome devtools profiling. Since
    this information almost never changes, we might as well maintain it in the
    format that's most convenient.
    benjamn committed Feb 15, 2020 · 0c6ee74
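    A sketch of maintaining the inverse map: the two field names come from
    the commit message, while setRootTypename and the surrounding details are
    illustrative rather than the actual Policies internals:

    ```ts
    class Policies {
      public readonly rootTypenamesById: Record<string, string> = Object.create(null);
      public readonly rootIdsByTypename: Record<string, string> = Object.create(null);

      public setRootTypename(
        which: "Query" | "Mutation" | "Subscription",
        typename: string,
      ) {
        const rootId = "ROOT_" + which.toUpperCase();
        const old = this.rootTypenamesById[rootId];
        if (typename !== old) {
          if (old) delete this.rootIdsByTypename[old];
          // Keeping both directions in sync means lookups by typename are a
          // direct O(1) property access, with no iteration over the values
          // of rootTypenamesById.
          this.rootTypenamesById[rootId] = typename;
          this.rootIdsByTypename[typename] = rootId;
        }
      }
    }
    ```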
  5. Avoid using JSON.stringify([...]) in makeDepKey helper function.

    Creating a throwaway array just to call JSON.stringify was much more
    expensive than string concatenation. The exact format of these cache keys
    is an invisible implementation detail, so I picked something that seemed
    unlikely ever to be ambiguous, though we can easily change it later.
    benjamn committed Feb 15, 2020 · 1b8208e
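    Before and after, as a sketch; the makeDepKey signature and the '#'
    separator are assumptions for illustration, not the actual format:

    ```ts
    // Before: allocates a throwaway array and walks it with JSON.stringify
    // on every call.
    function makeDepKeyBefore(dataId: string, fieldName?: string): string {
      return JSON.stringify(fieldName ? [dataId, fieldName] : [dataId]);
    }

    // After: plain concatenation, with a separator chosen to be unlikely
    // ever to make two distinct keys collide.
    function makeDepKeyAfter(dataId: string, fieldName?: string): string {
      return fieldName ? fieldName + "#" + dataId : dataId;
    }
    ```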
  6. Avoid calling policies.applyMerges unless mergeable fields found.

    Since policies.applyMerges doesn't change anything unless there are custom
    merge functions to process, we can skip calling it if no merge functions
    were found while processing the current entity.
    benjamn committed Feb 15, 2020 · e3dd0b9
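    One way to implement the short-circuit, as a sketch; hasMergeFunction and
    the surrounding shapes are assumptions, not the actual internals:

    ```ts
    interface FieldPolicies {
      hasMergeFunction(typename: string | undefined, fieldName: string): boolean;
      applyMerges<T>(existing: T, incoming: T): T;
    }

    function mergeEntityFields<T extends Record<string, any>>(
      policies: FieldPolicies,
      typename: string | undefined,
      existing: T,
      incoming: T,
    ): T {
      // Record whether any field of this entity has a custom merge function.
      let mergeableFieldsFound = false;
      for (const fieldName of Object.keys(incoming)) {
        if (policies.hasMergeFunction(typename, fieldName)) {
          mergeableFieldsFound = true;
          break; // one is enough to require the applyMerges pass
        }
      }
      // applyMerges changes nothing without custom merge functions, so skip
      // the whole traversal when none were found.
      return mergeableFieldsFound
        ? policies.applyMerges(existing, incoming)
        : incoming;
    }
    ```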
  7. Avoid forEach and fragment recursion in processSelectionSet.

    Instead of recursively calling processSelectionSet to handle fragments, we
    can simply treat their fields as fields of the current selection set.
    benjamn committed Feb 15, 2020 · 852681d
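    A sketch of the iterative flattening, with illustrative names rather than
    the actual Apollo internals: fragment selections are appended to the same
    work list the loop is already walking, so no recursive call is needed:

    ```ts
    import {
      FieldNode,
      FragmentDefinitionNode,
      SelectionSetNode,
    } from "graphql";

    function collectFields(
      selectionSet: SelectionSetNode,
      fragmentMap: Record<string, FragmentDefinitionNode>,
    ): FieldNode[] {
      const fields: FieldNode[] = [];
      const workList = [...selectionSet.selections];
      // The loop bound is re-read each iteration, so selections pushed by
      // the fragment branches below are handled by this same loop.
      for (let i = 0; i < workList.length; ++i) {
        const selection = workList[i];
        if (selection.kind === "Field") {
          fields.push(selection);
        } else if (selection.kind === "InlineFragment") {
          workList.push(...selection.selectionSet.selections);
        } else {
          // FragmentSpread: inline the named fragment's selections.
          const fragment = fragmentMap[selection.name.value];
          if (fragment) workList.push(...fragment.selectionSet.selections);
        }
      }
      return fields;
    }
    ```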
  8. Avoid forEach and fragment recursion in executeSelectionSet.

    This change means fragment results are no longer cached separately from
    normal selection set results, which potentially loses some caching
    granularity. In exchange, caching overhead goes down: we cache fewer
    result objects and no longer have to merge them all together, and (most
    importantly) the result caching system still tracks dependencies the same
    way as before.
    
    It's as if we transformed the query by inlining fragment selections,
    except without doing any work!
    benjamn committed Feb 15, 2020 · df20ff5

Commits on Feb 16, 2020

  1. Use a Set to process and deduplicate fields.

    Although this may look like a reversion from the for loop back to
    forEach, the for loop had an unexpectedly negative impact on minified
    code size, and a Set deduplicates selection objects, so we never
    re-process the same field multiple times through different fragments.
    benjamn committed Feb 16, 2020 · dc61865
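    The earlier collectFields sketch, reworked around a Set. Per the
    ECMAScript spec, Set.prototype.forEach visits values added during
    iteration, so fragment selections added mid-loop are still processed; and
    because insertion ignores duplicates, a selection reachable through two
    fragments is handled exactly once:

    ```ts
    import {
      FieldNode,
      FragmentDefinitionNode,
      SelectionNode,
      SelectionSetNode,
    } from "graphql";

    function collectFields(
      selectionSet: SelectionSetNode,
      fragmentMap: Record<string, FragmentDefinitionNode>,
    ): FieldNode[] {
      const fields: FieldNode[] = [];
      const workSet = new Set<SelectionNode>(selectionSet.selections);
      // forEach (rather than an indexed for loop) also minifies better,
      // per the commit message above.
      workSet.forEach(selection => {
        if (selection.kind === "Field") {
          fields.push(selection);
        } else if (selection.kind === "InlineFragment") {
          selection.selectionSet.selections.forEach(s => workSet.add(s));
        } else {
          const fragment = fragmentMap[selection.name.value];
          if (fragment) {
            fragment.selectionSet.selections.forEach(s => workSet.add(s));
          }
        }
      });
      return fields;
    }
    ```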
  2. Commit 7945544
  3. Mention PR #5948 in CHANGELOG.md.

    benjamn committed Feb 16, 2020 · 0a4ae81