KMC_KVMEM disrupts kv_alloc() memory alignment expectations
On kernels with KASAN enabled, the following failure can be observed
as soon as the zfs module is loaded:

  VERIFY(IS_P2ALIGNED(ptr, PAGE_SIZE)) failed
  PANIC at spl-kmem-cache.c:228:kv_alloc()

The problem is that kmalloc() has never guaranteed aligned allocations;
this requirement is what led to openzfs/spl@8b45dda, which removed all
kmalloc() usage from kv_alloc().
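
For context, a sketch of the power-of-two alignment test behind the
VERIFY above, following the usual SPL shape of IS_P2ALIGNED (a pointer
is page aligned only when its low-order bits are all zero); this is a
hypothetical userspace illustration, not the kernel code itself:

  /* Hypothetical userspace demo of the check that fails under KASAN. */
  #include <stdint.h>
  #include <stdio.h>

  /* Same shape as the SPL's IS_P2ALIGNED(v, a) macro. */
  #define IS_P2ALIGNED(v, a) \
          ((((uintptr_t)(v)) & ((uintptr_t)(a) - 1)) == 0)

  int
  main(void)
  {
          uintptr_t aligned = 0x1000; /* on a 4 KiB page boundary */
          uintptr_t shifted = 0x1010; /* e.g. a kmalloc() result pushed
                                         off the boundary by a redzone */

          printf("%d %d\n", IS_P2ALIGNED(aligned, 4096),
              IS_P2ALIGNED(shifted, 4096)); /* prints: 1 0 */
          return (0);
  }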

Until a GFP_ALIGNED flag (or equivalent functionality) is provided by
the kernel, this commit partially reverts 6695588 and 6d948c3 to
prevent k(v)malloc() allocations in kv_alloc().

Reviewed-by: Kjeld Schouten <kjeld@schouten-lebbing.nl>
Reviewed-by: Michael Niewöhner <foss@mniewoehner.de>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: loli10K <ezomori.nozomu@gmail.com>
Closes #9813
loli10K authored and behlendorf committed Jan 14, 2020
1 parent 68a192e commit 7e2da77
Showing 1 changed file with 2 additions and 20 deletions.
22 changes: 2 additions & 20 deletions module/os/linux/spl/spl-kmem-cache.c
@@ -202,26 +202,8 @@ kv_alloc(spl_kmem_cache_t *skc, int size, int flags)
 	if (skc->skc_flags & KMC_KMEM) {
 		ASSERT(ISP2(size));
 		ptr = (void *)__get_free_pages(lflags, get_order(size));
-	} else if (skc->skc_flags & KMC_KVMEM) {
-		ptr = spl_kvmalloc(size, lflags);
 	} else {
-		/*
-		 * GFP_KERNEL allocations can safely use kvmalloc which may
-		 * improve performance by avoiding a) high latency caused by
-		 * vmalloc's on-access allocation, b) performance loss due to
-		 * MMU memory address mapping and c) vmalloc locking overhead.
-		 * This has the side-effect that the slab statistics will
-		 * incorrectly report this as a vmem allocation, but that is
-		 * purely cosmetic.
-		 *
-		 * For non-GFP_KERNEL allocations we stick to __vmalloc.
-		 */
-		if ((lflags & GFP_KERNEL) == GFP_KERNEL) {
-			ptr = spl_kvmalloc(size, lflags);
-		} else {
-			ptr = __vmalloc(size, lflags | __GFP_HIGHMEM,
-			    PAGE_KERNEL);
-		}
+		ptr = __vmalloc(size, lflags | __GFP_HIGHMEM, PAGE_KERNEL);
 	}
 
 	/* Resulting allocated memory will be page aligned */
@@ -249,7 +231,7 @@ kv_free(spl_kmem_cache_t *skc, void *ptr, int size)
 		ASSERT(ISP2(size));
 		free_pages((unsigned long)ptr, get_order(size));
 	} else {
-		spl_kmem_free_impl(ptr, size);
+		vfree(ptr);
 	}
 }
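
Taken together, the two hunks reduce kv_alloc()/kv_free() to two paths
that both honor the page-alignment contract. Below is a condensed
sketch of the resulting logic, assuming the surrounding SPL
declarations (kmem_flags_convert(), ASSERT, ISP2); it is not a verbatim
copy of the file:

  /*
   * After this commit: KMC_KMEM caches use whole pages and everything
   * else uses __vmalloc(), so both branches return page-aligned memory
   * and the IS_P2ALIGNED(ptr, PAGE_SIZE) VERIFY holds.
   */
  static void *
  kv_alloc(spl_kmem_cache_t *skc, int size, int flags)
  {
          gfp_t lflags = kmem_flags_convert(flags);
          void *ptr;

          if (skc->skc_flags & KMC_KMEM) {
                  /* page-based allocation: naturally page aligned */
                  ASSERT(ISP2(size));
                  ptr = (void *)__get_free_pages(lflags, get_order(size));
          } else {
                  /* vmalloc-based allocation: also page aligned */
                  ptr = __vmalloc(size, lflags | __GFP_HIGHMEM, PAGE_KERNEL);
          }

          return (ptr);
  }

  static void
  kv_free(spl_kmem_cache_t *skc, void *ptr, int size)
  {
          if (skc->skc_flags & KMC_KMEM) {
                  ASSERT(ISP2(size));
                  free_pages((unsigned long)ptr, get_order(size));
          } else {
                  vfree(ptr);
          }
  }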

