Caching dir entries twice #3155
The dnode cache is called a cache because it uses the SPL SLAB allocator via the kmem_cache_* functions. The term dnode in ZFS means "DMU object node" and is essentially an abstract inode. The only dnodes in the "dnode cache" should roughly correspond to what the Linux kernel would consider in-memory inodes. That said, there is a risk of long-lived SLAB objects keeping SLABs from being reclaimed, which can cause the cache to use more RAM than necessary. The solution for this on Illumos was to implement the ability to defragment the dnode slab cache; we don't presently have that functionality implemented in ZoL.
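For context, the cache is registered through the SPL's Solaris-style kmem_cache_create() interface. A minimal sketch of what that registration looks like is below; the constructor/destructor names follow module/zfs/dnode.c, but treat the details as illustrative rather than the exact upstream call:

```c
/*
 * Sketch: how ZFS registers the dnode slab cache via the SPL's
 * Solaris-compatible kmem_cache_* API. Argument names and flags are
 * illustrative, not a verbatim copy of module/zfs/dnode.c.
 */
#include <sys/kmem.h>	/* SPL kmem_cache_* interface */

static kmem_cache_t *dnode_cache;

void
dnode_init(void)
{
	dnode_cache = kmem_cache_create(
	    "dnode_t",		/* name shown in slab statistics */
	    sizeof (dnode_t),	/* object size */
	    0,			/* alignment (0 = allocator default) */
	    dnode_cons,		/* per-object constructor */
	    dnode_dest,		/* per-object destructor */
	    NULL, NULL, NULL,	/* reclaim callback, private, vmem source */
	    0);			/* creation flags */
}
```

Because objects are freed back into per-SLAB free lists rather than compacted, a handful of long-lived dnodes can pin an otherwise-empty SLAB, which is exactly the fragmentation risk described above.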
I also tend to think this and related problems are likely caused by fragmentation. If I fill the dnode cache by traversing lots of inodes and then immediately apply memory pressure, the cache frees up nicely. On one of my customers' rsync servers recently, the dnode_t cache active/size ratio was only 35% and it was never being freed (the system was spending lots of time in the arc_adapt thread).
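That active/size ratio can be read straight out of the slab statistics. A minimal userspace sketch is below; it assumes the cache is visible in /proc/slabinfo (the generic kernel caches such as dentry are; on ZoL builds where the SPL manages the slab itself, dnode_t is reported in /proc/spl/kmem/slab instead, in a different format):

```c
/*
 * Sketch: compute the active/total object ratio for a slab cache from
 * /proc/slabinfo, whose data lines start with:
 *   name <active_objs> <num_objs> <objsize> ...
 * Reading /proc/slabinfo typically requires root. SPL-managed caches
 * (e.g. dnode_t on ZoL) live in /proc/spl/kmem/slab, not here.
 */
#include <stdio.h>
#include <string.h>

int
main(int argc, char **argv)
{
	const char *target = (argc > 1) ? argv[1] : "dentry";
	char line[512], name[64];
	unsigned long active, total;
	FILE *f = fopen("/proc/slabinfo", "r");

	if (f == NULL) {
		perror("/proc/slabinfo");
		return (1);
	}
	while (fgets(line, sizeof (line), f) != NULL) {
		/* Header/version lines fail the 3-field parse and are skipped. */
		if (sscanf(line, "%63s %lu %lu", name, &active, &total) == 3 &&
		    strcmp(name, target) == 0 && total > 0) {
			printf("%s: %lu/%lu objects = %.0f%% utilized\n",
			    name, active, total, 100.0 * active / total);
		}
	}
	fclose(f);
	return (0);
}
```

A utilization far below 100% that refuses to drop under memory pressure is the fragmentation signature described above: the free objects are scattered across SLABs that each still hold at least one live object.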
If needed, the dnode cache can now be limited in size by setting the
Is there anything we can do about the dentry cache, so that we don't have everything cached twice?
I know vm.vfs_cache_pressure can be set to put more pressure on the VFS to evict dentries early (see the sketch below), but the dentry cache on my systems still averages about 3.5 GB, while the dnode_t slab is about 12 GB (on one example machine).
I'd like to have either the dnode cache or the dentry cache, not both at once.
Is this possible with the Linux VFS? If not, then discard this issue :) But that would be a shame.
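For reference, vm.vfs_cache_pressure only scales how many dentries the kernel's dcache shrinker reports as reclaimable; it cannot disable the dcache. A simplified sketch of the scaling, modeled on the kernel's vfs_pressure_ratio() helper in include/linux/dcache.h (the real helper uses mult_frac() to avoid overflow):

```c
/*
 * Sketch: how vm.vfs_cache_pressure biases dcache reclaim, modeled on
 * the kernel's vfs_pressure_ratio() in include/linux/dcache.h. The
 * shrinker multiplies its count of freeable dentries by pressure/100
 * before reporting to the VM, so values above 100 invite more
 * aggressive reclaim and values below 100 less.
 */
#include <stdio.h>

static unsigned int vfs_cache_pressure = 100;	/* vm.vfs_cache_pressure */

static unsigned long
vfs_pressure_ratio(unsigned long freeable)
{
	return (freeable * vfs_cache_pressure / 100);
}

int
main(void)
{
	unsigned long dentries = 1000000;	/* hypothetical freeable count */
	unsigned int settings[] = { 50, 100, 200, 1000 };

	for (int i = 0; i < 4; i++) {
		vfs_cache_pressure = settings[i];
		printf("pressure=%4u -> shrinker reports %lu objects\n",
		    vfs_cache_pressure, vfs_pressure_ratio(dentries));
	}
	return (0);
}
```

Even at an aggressive setting like `vm.vfs_cache_pressure=200`, this only biases eviction toward dentries; the dentry and dnode copies remain two separate caches.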
On a related note, on the same server the dmu_buf_impl_t slab is about 7 GB. The reason I'm investigating this at all is that a few hours ago there was a shift in load on one of our servers, which caused the ARC to shrink to a third of its original size; most of the RAM went into SLAB and doesn't appear to be reclaimable.
I have yet to see whether it will shrink any time soon, but this is really asking for a system reset, since the machine isn't all that usable with a low ARC.