Retry service-busy errors after a delay #1174

Merged: 9 commits merged into uber-go:master on Nov 7, 2022

Conversation

@Groxx (Contributor) commented Jun 22, 2022

Builds on #1167, but adds a delay before retrying service-busy errors.

For now, since our server-side RPS quotas are calculated per second, this delays
at least 1 second per service-busy error.
This is in contrast to the previous behavior, which would have retried up to about
a dozen times in the same period; that rapid retrying is what causes service-busy
retry storms, which in turn generate even more service-busy errors.


This also gives us an easy way to make use of "retry after" information in errors
we return to the caller, though currently our errors do not contain that.

Eventually this should probably come from the server, which has a global view of
how many requests this service has sent, and can provide a more precise delay to
individual callers.
E.g., our server-side ratelimiter currently works in 1-second slices, but that
isn't guaranteed to stay true. The server could also detect truly
large floods of requests, and return jittered values larger than 1 second to more
powerfully stop the storm, or to allow prioritizing some requests (like activity
responses) over others simply by returning a lower delay.
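
In code terms, the change amounts to putting a per-error floor under the normal backoff delay. A minimal sketch of the idea (all names here are illustrative assumptions, not the client's actual API):

```go
package retrysketch

import "time"

// retryWithFloor illustrates the change: the delay between attempts is the
// larger of the normal backoff and a per-error minimum, so a stream of
// service-busy errors is retried at most about once per second instead of a
// dozen times per second.
// (A real loop would also enforce an overall expiration; omitted for brevity.)
func retryWithFloor(
	op func() error,
	isRetryable func(error) bool,
	nextBackoff func() time.Duration, // normal exponential backoff, often tens of ms at first
	minRetryDelay func(error) time.Duration, // ~1s for service-busy errors, 0 otherwise
) error {
	for {
		err := op()
		if err == nil {
			return nil
		}
		if !isRetryable(err) {
			return err
		}
		delay := nextBackoff()
		if floor := minRetryDelay(err); delay < floor {
			delay = floor
		}
		time.Sleep(delay)
	}
}
```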

@Groxx Groxx requested review from mantas-sidlauskas and a team June 22, 2022 02:11
@coveralls commented Jun 22, 2022

Pull Request Test Coverage Report for Build 0183905f-6dc6-40b7-861c-6866b8ff66be

  • 46 of 58 (79.31%) changed or added relevant lines in 4 files are covered.
  • 1 unchanged line in 1 file lost coverage.
  • Overall coverage increased (+0.04%) to 64.183%

| Changes Missing Coverage | Covered Lines | Changed/Added Lines | % |
| --- | --- | --- | --- |
| internal/common/backoff/retry.go | 30 | 32 | 93.75% |
| internal/internal_task_pollers.go | 12 | 22 | 54.55% |

| Files with Coverage Reduction | New Missed Lines | % |
| --- | --- | --- |
| internal/common/backoff/retry.go | 1 | 96.59% |

Totals:
  • Change from base Build 01838582-1814-4eb0-a654-48c8844fff22: 0.04%
  • Covered Lines: 12648
  • Relevant Lines: 19706

💛 - Coveralls

Groxx added a commit that referenced this pull request on Jun 22, 2022 (#1167):

Part 1 of 2 for solving retry storms, particularly around incorrectly-categorized
errors (e.g. limit exceeded) and service-busy.

This PR moves us to `errors.As` to support wrapped errors in the future, and
re-categorizes some incorrectly-retried errors. This is both useful on its own,
and helps make #1174 a smaller and clearer change.

Service-busy behavior is actually changed in #1174; this commit intentionally
maintains the current (flawed) behavior.
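
As a rough illustration of the `errors.As` approach (the error types below are hypothetical stand-ins for the generated service errors, and the categorization is simplified):

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical stand-ins for the generated service error types; the real
// client uses its own generated types.
type limitExceededError struct{}
type badRequestError struct{}

func (*limitExceededError) Error() string { return "limit exceeded" }
func (*badRequestError) Error() string    { return "bad request" }

// isRetryable uses errors.As rather than a plain type switch, so errors
// wrapped with fmt.Errorf("...: %w", err) are still classified correctly.
func isRetryable(err error) bool {
	var limitErr *limitExceededError
	var badReqErr *badRequestError
	switch {
	case errors.As(err, &limitErr):
		return false // previously mis-categorized as retryable
	case errors.As(err, &badReqErr):
		return false
	default:
		return true
	}
}

func main() {
	wrapped := fmt.Errorf("poll failed: %w", &limitExceededError{})
	fmt.Println(isRetryable(wrapped)) // false, even though the error is wrapped
}
```
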
internal/common/backoff/retry.go: review comment (outdated, resolved)
@CLAassistant commented Jul 11, 2022

CLA assistant check
All committers have signed the CLA.

@@ -103,16 +106,40 @@ Retry_Loop:
return err
}

// Check if the error is retryable
if isRetryable != nil && !isRetryable(err) {
@Groxx (Contributor, Author) commented:
isRetryable == nil was only true in tests, so that was changed and it's now just assumed to exist in all cases.

//
// note that this is only a minimum, however. longer delays are assumed to
// be equally valid.
func ErrRetryableAfter(err error) (retryAfter time.Duration) {
@Groxx (Contributor, Author) commented Sep 30, 2022:
Decided to move it here because it's tightly related to retry logic, and one thing needs it externally, so it's exposed. And I kinda like the backoff.ErrRetryableAfter package/name; it keeps it clear that it's retry-backoff-related.
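
A minimal sketch of what such a helper could look like, assuming a made-up serviceBusyError type in place of the client's real generated error; the actual implementation may differ:

```go
package backoff

import (
	"errors"
	"time"
)

// serviceBusyError is a hypothetical stand-in for the client's generated
// service-busy error type.
type serviceBusyError struct{}

func (*serviceBusyError) Error() string { return "service busy" }

// ErrRetryableAfter returns the minimum time to wait before retrying err.
// Most errors may be retried as soon as the normal backoff allows; a
// service-busy error must wait at least one second, matching the server's
// per-second RPS quota window.
//
// Note that this is only a minimum: longer delays are assumed to be equally
// valid.
func ErrRetryableAfter(err error) (retryAfter time.Duration) {
	var busy *serviceBusyError
	if errors.As(err, &busy) {
		return time.Second
	}
	return 0
}
```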

@davidporter-id-au (Contributor) left a comment:

nice!

@Groxx (Contributor, Author) commented Nov 7, 2022

Merging, will try to follow up this week with a cleanup (if feasible, given the custom behavior I remember... I suspect it won't be, but worth checking on anyway).

@Groxx Groxx merged commit 2618d0c into uber-go:master Nov 7, 2022