zfs_arc_max: what is the default, and what happens when there is memory pressure? #1134

Closed
byteharmony opened this issue Dec 7, 2012 · 4 comments
Labels: Type: Documentation

@byteharmony

I had a machine configured for about 19.5 GB of RAM when it only had about 16 GB of RAM.

ZFS: 8 GB for zfs_arc_max
Linux: I estimate about 1 GB (which I know is high)
KVM VMs: 10.5 GB

It was running with all RAM in use, buffers at 100 MB, and 700 MB of swap. One VM crashed daily; the other two were stable. The strange thing was that when I shut down all the VMs, I expected to see 8 GB of RAM still in use, but I only saw 4 GB.

This parameter and kernel / ZFS updates are about the only reasons for a reboot most of the time.

Any ideas if this is normal behavior?

BK

@behlendorf
Contributor

The default zfs_arc_max value is 1/2 of all available RAM. Under memory pressure from applications, that cache space will be released to the apps, much like the page cache.
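For reference, the cap is the zfs module parameter zfs_arc_max, given in bytes; a minimal sketch of setting and checking it, using 8 GiB only because that matches the setup above:

    # persistent setting, picked up when the zfs module is loaded
    # /etc/modprobe.d/zfs.conf
    options zfs zfs_arc_max=8589934592   # 8 GiB in bytes; 0 = use the default (1/2 of RAM)

    # on reasonably recent versions the value can also be changed at runtime
    echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max

    # check what the ARC is actually doing
    awk '/^(size|c_max) / {print $1, $3}' /proc/spl/kstat/zfs/arcstats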

@byteharmony
Author

The default settings in RHEL / CentOS fight each other significantly, resulting in memory wars.

Players: ksmd, swappiness, buffers, and the ARC

KSM's default values pound the box; when it runs it frees big chunks of RAM, but perhaps the ARC is doing that because it also senses the pressure?
swappiness is 60 (pushes lots to swap, which is SLOOOOOWWW)
buffers: will they be purged when RAM pressure hits?
ARC: not sure how much it uses and what the configs do.

Namely, if I set zfs_arc_max (overriding the 1/2 default) to a lower value, will the system still release RAM from the cache?

Is this memory war really what's best for these systems? I can certainly find ways to micromanage the configs for our implementations, but should it be that hard?

My best guess right now, which has shown good results, is to drop swappiness down to 0, reduce the ARC cache, and let Linux balance the buffers after the KSM push (see the sketch below). Perhaps this is as good as it gets. I did switch a few systems to metadata-only caching for the ARC to see how performance / load vary.
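A minimal sketch of that tuning on RHEL / CentOS; the exact numbers (4 GiB ARC cap, KSM scan rate) are illustrative values, not recommendations:

    # keep anonymous pages in RAM instead of pushing them to swap
    echo 'vm.swappiness = 0' >> /etc/sysctl.conf
    sysctl -p

    # cap the ARC well below the 1/2-of-RAM default (example: 4 GiB)
    echo 'options zfs zfs_arc_max=4294967296' > /etc/modprobe.d/zfs.conf

    # slow ksmd down so it stops pounding the box (illustrative values)
    echo 2000 > /sys/kernel/mm/ksm/sleep_millisecs
    echo 64 > /sys/kernel/mm/ksm/pages_to_scan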

If this info isn't helpful please let me know.

Thanks,
BK

@ryao
Contributor

ryao commented Dec 8, 2012

ZFS will still respond to memory pressure if you set zfs_arc_max.

The default on Solaris is 3/4 of system memory. It used to be the same on ZFSOnLinux as well, but it was changed due to memory management issues. Some of them have been solved, but some will remain until page cache unification is done.

As a side note, my original belief that fragmentation was the cause of these problems was incorrect. The upper bound on fragmentation in the SLAB allocator is lower than it is in the Hoard allocator. That said, Linux seems to be happiest with zfs_arc_max set to 1/2 of RAM or less.
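One way to see the ARC actually responding to pressure is to watch its target size (c) and in-use size in arcstats while the VMs or applications allocate memory; a rough sketch:

    # size  = bytes currently held by the ARC
    # c     = the target size the ARC is shrinking/growing toward
    # c_max = the hard cap (zfs_arc_max)
    while true; do
        awk '/^(size|c|c_max) / {printf "%-6s %.1f GiB\n", $1, $3/2^30}' /proc/spl/kstat/zfs/arcstats
        echo '---'
        sleep 1
    done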

@behlendorf
Contributor

Closing issue since this is largely just a matter of documentation.
