
To review, DO NOT MERGE #6022

Draft · wants to merge 139 commits into base: dev
Conversation

@bnjjj (Contributor) commented Sep 18, 2024

No description provided.

abernix and others added 30 commits July 9, 2024 16:16
…#5630)

Currently the `Schema` struct created in the Record/Replay plugin is missing the api_schema information (it's `None`!) that's necessary when parsing an executable document. This means that with the Record/Replay plugin enabled, any incoming operation will panic with `missing API schema`.

This change adds an additional `schema.with_api_schema()` call before instantiating the plugin. The fix does reparse the schema, but this is also a debugging plugin. The expectation is that this change will be superseded by #5623 in the 1.52.0 release.
The parsing and validation step is a blocking task that can be expensive for large queries. When it runs on a tokio executor thread, that thread is unavailable to handle other asynchronous requests in the meantime, which means that if many large queries arrive at once, they can lock up all of the executor threads and the router stops handling traffic.
This moves parsing and validation into tokio's blocking-task pool. This is a set of threads allocated purely to blocking tasks, exposing an async interface to the rest of the code, so executor threads can offload the parsing there and return to the rest of the traffic while the query is parsed. If too many large queries come in at once, the blocking pool might temporarily be used entirely, but this will not affect traffic for queries that were already parsed and planned, and the request-handling timeout can trigger if a request waits too long for its query to be parsed.

Co-authored-by: Marc-Andre Giroux <mgiroux@netflix.com>
Co-authored-by: Marc-Andre Giroux <mgiroux0@gmail.com>
Co-authored-by: Gary Pennington <gary@apollographql.com>
Signed-off-by: Benjamin Coenen <5719034+bnjjj@users.noreply.github.com>
Co-authored-by: Benjamin Coenen <5719034+bnjjj@users.noreply.github.com>
Co-authored-by: Jeremy Lempereur <jeremy.lempereur@iomentum.com>
Co-authored-by: Dylan Anthony <dylan@apollographql.com>
Co-authored-by: Matthew Hawkins <matthew.hawkins@apollographql.com>
Co-authored-by: Nicholas Cioli <nicholas.cioli@apollographql.com>
Co-authored-by: Taylor Jones <taylor@apollographql.com>
Co-authored-by: Tim Hingston <tim@apollographql.com>
Co-authored-by: Dylan Anthony <dylan@apollographql.com>
License enforcement now uses the expanded supergraph, so we need to ensure that the connect link (serialized as `join__directive`) ends up in the schema.
lennyburdette and others added 22 commits September 18, 2024 09:44
Co-authored-by: Lenny Burdette <lenny@apollographql.com>
Co-authored-by: Matthew Hawkins <matthew.hawkins@apollographql.com>
Co-authored-by: Lenny Burdette <lenny@apollographql.com>
Co-authored-by: Matthew Hawkins <matthew.hawkins@apollographql.com>
…#6086)

This is a breaking change, but seems necessary to prevent parsing ambiguities.
…::Path` items (#6097)

Co-authored-by: Dylan Anthony <dylan@apollographql.com>
Co-authored-by: Lenny Burdette <lenny@apollographql.com>