
Add support for pool_count #636

Merged 2 commits into master on Oct 7, 2024
Conversation

josevalim
Member

Benchmarks coming soon.

include:
  - elixirbase: "1.11.4-erlang-23.3.4.9-alpine-3.16.9"
    postgres: "16.2-alpine"
    pool_count: "4"
josevalim (Member Author)

I think this was not picked up? 🤔

josevalim (Member Author)

@greg-rychlewski do you know if this was meant to be picked up? Or only after we merge?

greg-rychlewski (Member)

@josevalim sorry I missed this. do you still want me to look at it?

josevalim (Member Author)

Please do! I tried investigating this but I could not come up with anything. It seems the whole include is ignored. :(

josevalim (Member Author)

Oh, I think I know what it is. I think I need a default pool_count of 1. Let me give it a try.
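Such a default could be supplied where the pool options are read. A minimal hypothetical sketch (`opts` here stands for the repo's configuration keyword list; the exact location of this read is an assumption, not the actual patch):

```elixir
# Hypothetical: fall back to a single pool when pool_count is not configured,
# preserving the old single-pool behavior by default.
pool_count = Keyword.get(opts, :pool_count, 1)
```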


josevalim commented Oct 6, 2024

The following benchmark:

Benchee.run(
  %{"checkout" => fn -> Ecto.Bench.PgRepo.checkout(fn -> :ok end) end},
  time: 5,
  parallel: 8
)

It goes from 20k iterations (per process) with a pool_count of 1, to 30k with a pool_count of 4, to 48k with a pool_count of 8, on a machine with 10 cores.

When doing a Repo.get with parallel: 32, going from a pool_count of 1 to 16 increased throughput by 40% (from 1.4k to 2.0k per process).
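That Repo.get measurement can be sketched with Benchee in the same style as the checkout benchmark above (a sketch, not the exact script: the `Ecto.Bench.User` schema name and the presence of a seeded row with id 1 are assumptions):

```elixir
# Hypothetical variant of the benchmark above: a full query per iteration
# instead of a bare checkout, with 32 parallel processes.
Benchee.run(
  %{"get" => fn -> Ecto.Bench.PgRepo.get(Ecto.Bench.User, 1) end},
  time: 5,
  parallel: 32
)
```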

@greg-rychlewski (Member)

Sorry, I might be mistaken because I haven't looked at PartitionSupervisor. Are you basically starting multiple pools to reduce contention at checkout time?

If that's the case, I was wondering if your tests were run by splitting up the total number of connections. For example, if you would start 100 connections on 1 pool, did you start 100 / num_partitions connections on each partitioned pool?

The reason I ask is because normally it's better to watch the total number of connections you are making to your database to avoid issues.
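In configuration terms, the comparison described here would look something like the following (a hypothetical sketch: `MyApp.Repo` is a placeholder, and whether these exact option names match the merged implementation is an assumption):

```elixir
# Hypothetical Repo configs holding the total connection count fixed at 32.

# One pool of 32 connections:
config :my_app, MyApp.Repo,
  pool_size: 32

# Four partitioned pools of 8 connections each (4 * 8 = 32 total):
config :my_app, MyApp.Repo,
  pool_size: 8,
  pool_count: 4
```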


josevalim commented Oct 7, 2024

Good call. I double checked and here are the numbers:

| pool_count | pool_size | ips |
| --- | --- | --- |
| 1 | 32 | 1.53 K |
| 2 | 16 | 1.87 K |
| 4 | 8 | 2.13 K |
| 8 | 4 | 2.39 K |
| 16 | 2 | 2.54 K |
| 32 | 2 | 2.54 K |
| 64 | 2 | 2.54 K |

One may ask: should we instead aim for a high pool_count rather than a high pool_size? A pool is useful because, in case of high traffic, it gives you the first connection available. Having multiple pools with a low size means you may have to wait even if another pool has a connection available.

So in the case of benchmarks, where all queries take roughly the same time, pool_count will definitely be better. In the case of heterogeneous workloads, a single pool will most likely give you better overall latency.

But the benchmarks also show the pool itself has some contention under high concurrency. I am unsure how much it matters in practice, but it will definitely help us squeeze additional performance in benchmarks such as TechEmpower.

@josevalim josevalim merged commit 994292e into master Oct 7, 2024
20 checks passed
@josevalim

💚 💙 💜 💛 ❤️

I will investigate CI if it continues not to run the new build after merging.
