
feat: encodeInto(obj, dest, [options]) #96

Draft · wants to merge 1 commit into base: master
Conversation

@rvagg (Owner) commented Sep 13, 2023

@Gozala @alanshaw I was experimenting with this as an encode API - BYO Uint8Array. Unfortunately I can't measure particularly amazing performance benefits from it; we're already attempting a fair bit of optimisation around allocation, and I think most of the bottlenecks are just in being tied to Uint8Arrays and their perf limitations in the first place. But perhaps you have a use-case that would make this worthwhile?

I'd have to spend a bit more time cleaning this up and documenting it, but wanted to see if you had any input on the API and its utility before I bother.

@alanshaw

I'm interested - is it working? I'd like to try it out. Currently seeing this flame graph:

https://bafybeiar654dcjpol4bxqtyfjzyi5n5jleld2ple2qsy4dgpmksavhhikm.ipfs.w3s.link/30826.clinic-flame.html

@rvagg (Owner, Author) commented Sep 13, 2023

Yeah, it works. I'm familiar with similar flame graphs; no matter how much I tinker I can't make it budge much. I have branches here experimenting with flattening the token array so it's not so nested, and I've also tried different methods of writing into the destination Uint8Array because of concerns about set(); none of it makes much difference. The bottlenecks inherent in Uint8Array handling keep getting in the way.
Happy to take suggestions on new magic to attempt, but mostly I keep coming back here to try new things and walk away disappointed, like with this attempt. I'd love to claim in the docs that this offers perf benefits, but from my measurements it's not great enough to crow about. There may be usage patterns where it does make sense though: where the size is known or consistent, so you can skip all of the internal allocation dance and come out winning. Or perhaps you're writing into a very specific piece of a larger buffer, encoding some large chunk where a set() copy is going to be a problem.
