
feat(blockstore): benchmarks #275

Open · wants to merge 22 commits into base: main

Conversation

@dadepo (Contributor) commented Sep 18, 2024

@0xNineteen changed the title from "feat(Blockstore): Blockstore benchmark" to "feat(blockstore): benchmarks" on Sep 23, 2024
pub const max_iterations = 5;

// Analogous to [bench_write_small](https://github.com/anza-xyz/agave/blob/cfd393654f84c36a3c49f15dbe25e16a0269008d/ledger/benches/blockstore.rs#L59)
pub fn benchWriteSmall() !u64 {
@dadepo (Contributor, Author) commented:

Benchmark                  Iterations    Min(us)    Max(us)   Variance   Mean(us)
---------------------------------------------------------------------------------
benchWriteSmall                     5     757783     873809 2151500994     795654

Agave at 9c2098450ca7e5271e3690277992fbc910be27d0

running 1 test
test bench_write_small             ... bench:  23,708,904.20 ns/iter (+/- 222,018.81)

Contributor commented:

Are your benchmarks from release or debug builds of sig?

Here's what I'm seeing:

Benchmark                  Iterations    Min(us)    Max(us)   Variance   Mean(us)
---------------------------------------------------------------------------------
benchWriteSmall                     5      49872      66281   29995977      57241
test bench_write_small             ... bench:  17,406,472.70 ns/iter (+/- 8,623,383.67)

@dadepo (Contributor, Author) commented Oct 7, 2024:

It was indeed a debug build.

With a ReleaseSmall build, I also get numbers similar to yours.

Benchmark                  Iterations    Min(us)    Max(us)   Variance   Mean(us)
---------------------------------------------------------------------------------
benchWriteSmall                     5      30669      41452   13606829      34371

}

// Analogous to [bench_read_sequential](https://github.com/anza-xyz/agave/blob/cfd393654f84c36a3c49f15dbe25e16a0269008d/ledger/benches/blockstore.rs#L78)
pub fn benchReadSequential() !u64 {
@dadepo (Contributor, Author) commented Sep 30, 2024:

NOTE: debug build

Benchmark                  Iterations    Min(us)    Max(us)   Variance   Mean(us)
---------------------------------------------------------------------------------
benchReadSequential                 5     133011     156933  102519429     143209

Agave at 9c2098450ca7e5271e3690277992fbc910be27d0

running 1 test
test bench_read_sequential         ... bench:   2,740,116.70 ns/iter (+/- 304,996.16)

Contributor commented:

Benchmark                  Iterations    Min(us)    Max(us)   Variance   Mean(us)
---------------------------------------------------------------------------------
benchReadSequential                 5       6110       8331     603665       7538
test bench_read_sequential         ... bench:   2,403,107.92 ns/iter (+/- 584,953.52)

@dadepo (Contributor, Author) commented:

ReleaseSmall

Benchmark                  Iterations    Min(us)    Max(us)   Variance   Mean(us)
---------------------------------------------------------------------------------
benchReadSequential                 5       3484       3816      12416       3680

}

// Analogous to [bench_read_random](https://github.com/anza-xyz/agave/blob/92eca1192b055d896558a78759d4e79ab4721ff1/ledger/benches/blockstore.rs#L103)
pub fn benchReadRandom() !u64 {
@dadepo (Contributor, Author) commented:

Benchmark                  Iterations    Min(us)    Max(us)   Variance   Mean(us)
---------------------------------------------------------------------------------
benchReadRandom                     5    1993658    2326961 12944783078    2120888

Agave at 9c2098450ca7e5271e3690277992fbc910be27d0

running 1 test
test bench_read_random             ... bench:   2,820,841.70 ns/iter (+/- 28,719.65)

Contributor commented:

Benchmark                  Iterations    Min(us)    Max(us)   Variance   Mean(us)
---------------------------------------------------------------------------------
benchReadRandom                     5     120573     132475   24537031     126483
test bench_read_random             ... bench:   2,000,161.67 ns/iter (+/- 541,693.84)

The random performance is a lot worse than agave. The other benchmarks were much closer. It's strange, since the majority of what's being benchmarked here in both cases is rocksdb itself. I would have expected similar performance. Maybe the database needs to be tuned for random access.

I noticed that the agave benchmark reads 4369 shreds, whereas the sig benchmark reads 10499 indices. But that wouldn't explain a 50x slowdown. I also saw that in the agave benchmark, they only read 1/15 of the total number of shreds. Whereas in the sig benchmark, we read all of the shreds. But 4369 is not 1/15 of 10499, so something isn't adding up here.
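A quick check of the arithmetic here, using only the figures quoted above (the inferred total of 65535 is an assumption on my part, not something confirmed in this thread):

```rust
fn main() {
    // Figures quoted in the comment above.
    let agave_shreds_read: u32 = 4_369;
    let sig_indices_read: u32 = 10_499;

    // 1/15 of the sig count is ~700, nowhere near 4369, so the two
    // benchmarks cannot be sampling 1/15 of the same-sized data set.
    assert_eq!(sig_indices_read / 15, 699);

    // If agave really reads 1/15 of its total, that total would be
    // 4369 * 15 = 65535 (u16::MAX), which would point to the two
    // benchmarks using differently sized shred sets.
    assert_eq!(agave_shreds_read * 15, 65_535);
    assert_eq!(agave_shreds_read * 15, u16::MAX as u32);
}
```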

It looks like benchReadSequential and benchReadRandom use the same input shreds in agave.blockstore.bench_read.shreds.bin. Is that intentional? Maybe each benchmark should use shreds that were generated by the respective agave benchmark, to ensure the data is comparable.

@dadepo (Contributor, Author) commented:

With ReleaseSmall, I also got:

benchReadRandom                     5      53996      72074   38589915      60359

> It looks like benchReadSequential and benchReadRandom use the same input shreds in agave.blockstore.bench_read.shreds.bin. Is that intentional? Maybe each benchmark should use shreds that were generated by the respective agave benchmark, to ensure the data is comparable.

I'll look into this.

}

// Analogous to [bench_serialize_write_bincode](https://github.com/anza-xyz/agave/blob/9c2098450ca7e5271e3690277992fbc910be27d0/ledger/benches/protobuf.rs#L88)
pub fn benchSerializeWriteBincode() !u64 {
@dadepo (Contributor, Author) commented:

I am not sure if this is 100% identical to bench_serialize_write_bincode in agave. Agave has put_bytes, which first serializes the values before insertion; sig currently does not have a similar method.

Contributor commented:

In this case, the agave benchmark includes both serialization and insertion, so I think it is actually the same.

Side note, regarding put_bytes not existing in sig. I tried to abstract away the bytes. Converting between types and bytes is the whole purpose of the Database layer. Ideally the ledger code can just use a Database like it's a hashmap, without worrying about serialization. It feels like a leaky abstraction if the Database layer can be bypassed, exposing its internal data to the business logic of the ledger.

That said, we actually do have a getBytes method, so I didn't do a great job at my goal, haha. This is because shreds themselves are already bytes. The only reason getBytes exists is to provide a mechanism for managing the lifetime of the shred bytes without needing to copy them into a new allocation. I think this could actually be handled better using the type system, and we could probably eliminate getBytes completely. I decided I was willing to compromise on the abstraction a bit to get the job done quickly.

@dadepo dadepo marked this pull request as ready for review September 30, 2024 11:41
@dnut dnut self-requested a review October 1, 2024 19:42
@@ -425,3 +487,130 @@ pub fn TestDB(scope: []const u8) type {
}
};
}

pub const BenchmarLegger = struct {
Contributor commented:

Would you be opposed to moving this into another file called benchmarks.zig?

Suggested change
pub const BenchmarLegger = struct {
pub const BenchmarkLedger = struct {

@dadepo (Contributor, Author) commented:

Nope, I won't be opposed. I kept it in the same test file because I noticed a couple of existing benchmarks were like that.


@@ -3,6 +3,7 @@
const std = @import("std");
const sig = @import("../sig.zig");
const ledger = @import("lib.zig");
const transaction_status = @import("./transaction_status.zig");
Contributor commented:

This can be used as ledger.transaction_status and does not require the file to be imported.

Currently our convention is that you only import root source files from the current module. For example you can import sig.zig if you're in the sig project, and you can import lib.zig from the current folder. But all other imports go through the decls defined by those imports.

This is definitely up for discussion. If you'd like to adopt a different convention, I'm happy to discuss it. But if you have no preference, let's import transaction_status through the ledger decl.

@dadepo (Contributor, Author) commented:

I think importing via the root source file is indeed better. Updated in 692d561.

@@ -336,6 +360,22 @@ pub fn loadEntriesFromFile(allocator: Allocator, path: []const u8) ![]const Entr
return entries.toOwnedSlice();
}

fn create_rewards(allocator: std.mem.Allocator, count: usize) !Rewards {
Contributor commented:

I would like to switch to snake case for functions, but for now, the convention in Sig is to use camel case.

@dadepo (Contributor, Author) commented:

Old habits (from Rust) die hard. Updated in 44fa660

Comment on lines 413 to 415
/// Differs from init in that it uses the std.heap.c_allocator instead
/// of the test allocator.
pub fn initBench(comptime test_name: []const u8) !*Self {
Contributor commented:

This actually uses the GPA allocator, just like the normal init function. See that it's hardcoded above with const _allocator = gpa.allocator(). The leak check only exists for the sole purpose of verifying that the deinit function has been called, by taking advantage of the automatic memory leak checks that occur in tests.

My assumption is that you're trying to accomplish two goals here:

  1. use the c allocator for better performance
  2. avoid using the testing allocator because it only works in tests

For reason 1, you'll need to change how TestState works to make it so it can be initialized with an arbitrary allocator. It'll be complicated to make this work with the way the gpa is currently used. Instead, it would probably be easiest to just get rid of the gpa and change init to accept an allocator. For tests, pass in the testing allocator instead of the gpa. This will reduce the amount of debugging info during errors, but that doesn't really matter. Then, you could also remove the leak check since it's no longer needed.

For reason 2, it would already be mitigated if you go with the solution I just mentioned for reason 1. If you don't use that approach, and the leak check is kept around, you might just want to make the leak check optional, and set it to null outside of tests.

If you want some more context on why the gpa and leak check are included here, I'm happy to hop on a huddle to explain it. But you can also feel free to remove it, because it was probably not a great idea for me to put it here in the first place.

@dadepo (Contributor, Author) commented:

Removed in a separate PR #305 to keep the PRs contained.

@@ -407,6 +461,14 @@ pub fn TestState(scope: []const u8) type {
_allocator.destroy(self);
_ = gpa.detectLeaks();
}

pub fn deinitBench(self: *Self) void {
Contributor commented:

This can probably be removed (see my other comment)

@dadepo (Contributor, Author) commented:

Removed in dbcd012

Comment on lines 504 to 506
defer inline for (.{shreds}) |slice| {
deinitShreds(allocator, slice);
};
Contributor commented:

Why is this a loop?

dnut pushed a commit that referenced this pull request Oct 7, 2024
This PR removes the hardcoded gpa with manual leak detection from the TestState used in tests.

I believe the reason for this was to have more stack trace frames than what the std.testing.allocator offers, but this proved problematic when using the TestingState in other contexts like benchmarking.

See #275 (comment)