
bytes specialization, optimised from iterator implementation #4424

Closed

Conversation

davidhewitt (Member)

Similar to #4423, this is a follow-up to #4417 which sacrifices a bit of performance in order to stay away from unsafe code, while still trying to optimise things.
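
To illustrate the general idea (this is a minimal sketch, not the PR's actual code, assuming a recent PyO3 where `PyBytes::new` returns `Bound<'py, PyBytes>`; the helper name `bytes_from_iter` is hypothetical): instead of feeding the iterator through the generic element-by-element conversion path, buffer it into a contiguous `Vec<u8>` and create the Python bytes object from the whole slice in one call.

```rust
use pyo3::prelude::*;
use pyo3::types::PyBytes;

// Collect the iterator into a contiguous Vec<u8>, then hand the whole
// slice to CPython in a single call. The intermediate Vec is the small
// performance cost paid for not writing into uninitialised memory with
// `unsafe`.
fn bytes_from_iter<'py, I>(py: Python<'py>, iter: I) -> Bound<'py, PyBytes>
where
    I: IntoIterator<Item = u8>,
{
    let buf: Vec<u8> = iter.into_iter().collect();
    PyBytes::new(py, &buf)
}
```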

Relevant benchmarks before:

byte_slice_into_pyobject_small
                        time:   [8.0587 ns 8.1316 ns 8.2012 ns]

byte_slice_into_pyobject_medium
                        time:   [56.836 ns 57.127 ns 57.386 ns]

byte_slice_into_pyobject_large
                        time:   [6.8279 µs 7.0452 µs 7.2630 µs]

And after:

byte_slice_into_pyobject_small
                        time:   [4.6786 ns 4.7819 ns 4.9469 ns]

byte_slice_into_pyobject_medium
                        time:   [50.219 ns 50.611 ns 50.983 ns]

byte_slice_into_pyobject_large
                        time:   [7.4864 µs 7.5923 µs 7.7109 µs]
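
The timings above are criterion output. For context, a benchmark of this shape might look like the sketch below; it is illustrative only, the real benchmarks live in PyO3's bench suite and may go through the `IntoPyObject` conversion rather than calling `PyBytes::new` directly, and the slice size here is an assumption.

```rust
use criterion::{criterion_group, criterion_main, Criterion};
use pyo3::prelude::*;
use pyo3::types::PyBytes;

// Measure converting a small byte slice into a Python bytes object.
fn byte_slice_into_pyobject_small(c: &mut Criterion) {
    Python::with_gil(|py| {
        let data = [0u8; 8];
        c.bench_function("byte_slice_into_pyobject_small", |b| {
            b.iter(|| PyBytes::new(py, &data))
        });
    });
}

criterion_group!(benches, byte_slice_into_pyobject_small);
criterion_main!(benches);
```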

@davidhewitt (Member, Author)

Superseded by #4442
