new function savejson_fastfile to speed up exporting directly to files #17

Closed

spichardo opened this issue Dec 16, 2015 · 3 comments

@spichardo
I prepared an alternative function to savejson for the specific case of saving directly to files: savejson_fastfile. I'm also including a tiny fix in savejson to use "FileName" consistently in the options field (right now it mixes "filename" and "FileName", which makes it crash).
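For reference, here is a minimal sketch of the kind of consistency fix meant here, assuming the option is looked up through jsonlab's jsonopt helper; `opt` stands for the options struct and `json` for the generated string, and the actual lines in savejson may differ:

```matlab
% Sketch only: read the output file name under a single, consistently
% capitalized key instead of mixing 'filename' and 'FileName'.
filename = jsonopt('FileName', '', opt);   % jsonopt(key, default, opt)
if (~isempty(filename))
    fid = fopen(filename, 'wt');
    fwrite(fid, json, 'char');
    fclose(fid);
end
```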

I'm also adding a simple Benchmark.m and representative data that I use often, which helped identify this speed issue in savejson. You really need very large datasets, with thousands of cell entries containing all sorts of mixed structures, to notice it.

Running on my MacBook Pro as I write this, the test takes 220.17 s with savejson vs 62.3 s with savejson_fastfile, an average speedup of 3.5x. This test uses 1000 cell entries; you can inspect the data to get an idea of the kind of mixed entries I use. I regularly work with datasets of 30,000 or more cell entries like this one, so you can imagine why I was motivated to speed it up a little.
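The numbers above come from the attached Benchmark.m; a minimal harness in the same spirit could look like the following. The test data here is a made-up stand-in (the real data ships in the zip file), and the savejson_fastfile call signature is assumed to mirror savejson's:

```matlab
% Minimal timing harness in the spirit of the attached Benchmark.m.
% The data below is a placeholder; savejson_fastfile's exact signature
% is an assumption (mirroring savejson's rootname/obj/filename form).
data = cell(1, 1000);
for i = 1:numel(data)
    data{i} = struct('id', i, 'name', sprintf('entry%d', i), ...
                     'values', rand(1, 10), 'tags', {{'a', 'b'}});
end
tic; savejson('', data, 'out_savejson.json');           t1 = toc;
tic; savejson_fastfile('', data, 'out_fastfile.json');  t2 = toc;
fprintf('savejson: %.2f s, savejson_fastfile: %.2f s\n', t1, t2);
```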

The larger the data collection, the more pronounced the difference becomes, because concatenating so many strings causes a lot of dynamic memory reallocation. With 2000 entries of my typical data, savejson requires 824.9 s vs 112.58 s with savejson_fastfile, a speedup of 7.3x. Writing directly to file keeps the cost linear, whereas with string concatenation the cost appears to be at least quadratic.
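A toy illustration of this scaling argument (not taken from Benchmark.m): growing one string by repeated concatenation copies the whole buffer on every step, while streaming each piece to a file with fprintf keeps the per-entry cost roughly constant:

```matlab
% Toy comparison, not part of the benchmark: quadratic string growth
% vs. streaming the same fragments straight to disk.
n = 2e4;

tic;
s = '';
for i = 1:n
    % each concatenation copies the whole ever-growing string
    s = [s sprintf('{"idx":%d},', i)]; %#ok<AGROW>
end
t_concat = toc;

tic;
fid = fopen(tempname, 'wt');
for i = 1:n
    fprintf(fid, '{"idx":%d},', i);   % constant work per entry
end
fclose(fid);
t_file = toc;

fprintf('concatenation: %.2f s, direct file write: %.2f s\n', t_concat, t_file);
```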

Thanks for sharing your library; it has been very useful for my work. The produced JSON is imported into other libraries and systems (such as .NET) with no issues.

savejson_fastfile.zip

@fangq
Member

fangq commented Dec 16, 2015

@spichardo thanks for pointing out the efficiency issue from string concatenations.

As I mentioned in the File Exchange discussions, we have been focusing on optimizing loadjson because it was by far the slower of the two. We haven't paid much attention to savejson because most of the data I tested were simple data structures.

I was able to download and run your test cases, and I confirm that the provided savejson_fastfile script is about 2x faster than the latest savejson in git (45 s vs 92 s on an i7 Ubuntu box with an SSD). However, the savejson_fastfile implementation has limitations: it requires the user to supply a file and writes to disk with file I/O, so it cannot return the JSON string directly in memory.

Since the key cause of the performance difference is string concatenation, I updated savejson to combine strings in a more efficient way. Here is my latest commit:

https://github.com/fangq/jsonlab/commit/5ae51f1130a0f5f617695a432dced9ae683da7ef

In this update, I replaced all sprintf()-based string concatenations with cell-array operations and flatten the cell array once at the end of each sub-function. This change essentially eliminates the overhead of string concatenation (except in matdata2json, where we have a user-specified printing format).
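In rough outline, the pattern is the following (a simplified sketch, not the actual savejson code). Appending to a cell array only stores a small reference per fragment, so the single flatten at the end is the only large copy:

```matlab
% Simplified sketch of the cell-accumulate-then-flatten pattern.
n = 2e4;
parts = cell(1, n);
for i = 1:n
    parts{i} = sprintf('{"idx":%d},', i);   % build each fragment once
end
json = [parts{:}];   % flatten the cell array once at the end
```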

With the updated savejson, I was able to cut the run time for your benchmark from 92 seconds to 39 seconds, about 15% faster than savejson_fastfile. The outputs are identical (except for a pair of empty square brackets due to other prior changes).

Can you try out the new savejson and let me know if this works for you?

Again, thank you for helping make savejson faster!

@spichardo
Author

Excellent, I just tested it, and using cell concatenation followed by a final flattening definitely gives a huge boost. Funny, because I should have thought of that myself: I used cell arrays precisely to allow dynamic growth with minimal impact on performance. Since I was focused purely on output files, that's why I thought of simply redirecting the output to files.

On my machine with dual Xeon processors, the updated savejson takes 48.3 s vs 53 s with savejson_fastfile. More importantly, when doubling the cell array to 2000 entries, the cost remains linear: 96 s with savejson vs 104.9 s with savejson_fastfile.

Thanks a lot for taking a look at this.

Cheers

@fangq
Member

fangq commented Dec 17, 2015

terrific, thanks for confirming.

I am closing this issue for now. I am pretty sure there are places in savejson that can be further accelerated. Patches and suggestions are always welcome!

@fangq fangq closed this as completed Dec 17, 2015
fangq added a commit that referenced this issue Jan 7, 2016