WORKAROUND for zthr_cancelled() under spa_raidz_expand_cb() #32

Conversation

fuporovvStack

The assert backtrace:
#0 0x00007f9a75cda974 in zthr_iscancelled (t=0x0) at ../../module/zfs/zthr.c:447
447 ASSERT3P(t->zthr_thread, ==, curthread);
[Current thread is 1 (Thread 0x7f99faa29700 (LWP 282061))]
(gdb) bt
#0 0x00007f9a75cda974 in zthr_iscancelled (t=0x0) at ../../module/zfs/zthr.c:447
#1 0x00007f9a75c31ca7 in spa_raidz_expand_cb (arg=0x564206c2f820, zthr=<optimized out>) at ../../module/zfs/vdev_raidz.c:3927
#2 0x00007f9a75cda508 in zthr_procedure (arg=0x5642072aa6f0) at ../../module/zfs/zthr.c:241
#3 0x00007f9a75921609 in start_thread (arg=<optimized out>) at pthread_create.c:477
#4 0x00007f9a75848293 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

From the other side, in the caller's frame:
(gdb) frame 1
#1 0x00007f9a75c31ca7 in spa_raidz_expand_cb (arg=0x564206c2f820, zthr=<optimized out>) at ../../module/zfs/vdev_raidz.c:3927
3927 i < raidvd->vdev_ms_count &&
(gdb) list
3922 vdev_t *raidvd = vdev_lookup_top(spa, vre->vre_vdev_id);
3923
3924 uint64_t guid = raidvd->vdev_guid;
3925
3926 for (uint64_t i = vre->vre_offset >> raidvd->vdev_ms_shift;
3927 i < raidvd->vdev_ms_count &&
3928 !zthr_iscancelled(spa->spa_raidz_expand_zthr) &&
3929 vre->vre_failed_offset == UINT64_MAX; i++) {
3930 metaslab_t *msp = raidvd->vdev_ms[i];
3931
(gdb) print spa->spa_raidz_expand_zthr
$1 = (zthr_t *) 0x5642072aa6f0

So we can see that the zthr_iscancelled() argument on the stack is NULL, while spa->spa_raidz_expand_zthr itself now holds the correct pointer. The only way I can see this happening is that spa_raidz_expand_cb() called zthr_iscancelled() while spa->spa_raidz_expand_zthr was still NULL.
A sleep() at the beginning of spa_raidz_expand_cb() avoids this situation.
I am not sure whether this race can be reproduced on the kernel side.
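
To make the suspected window concrete, here is a minimal standalone sketch of the same publish-after-start pattern in plain pthreads (hypothetical names, not ZFS code): the creator stores a shared handle only after starting the worker, so the worker can observe the handle as NULL, just as spa_raidz_expand_cb() can observe spa->spa_raidz_expand_zthr == NULL before the creating code has stored it.

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Plays the role of spa->spa_raidz_expand_zthr. */
static pthread_t *shared_handle;

static void *
worker(void *arg)
{
	(void) arg;
	/*
	 * Analogous to spa_raidz_expand_cb() calling
	 * zthr_iscancelled(spa->spa_raidz_expand_zthr): if the worker runs
	 * before the creator's assignment, it observes NULL here.
	 */
	if (shared_handle == NULL)
		printf("raced: shared handle still NULL\n");
	else
		printf("ok: shared handle already set\n");
	return (NULL);
}

int
main(void)
{
	pthread_t tid;

	(void) pthread_create(&tid, NULL, worker, NULL);
	/* The delay widens the window, mimicking preemption of the creator. */
	usleep(1000);
	shared_handle = &tid;	/* Published only after the worker may have run. */

	(void) pthread_join(tid, NULL);
	return (0);
}
```

The workaround in this PR attacks the window from the other direction: a sleep() at the start of spa_raidz_expand_cb() gives the creating code time to store the pointer before it is first dereferenced.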

ahrens and others added 4 commits October 13, 2021 18:54
This feature allows disks to be added one at a time to a RAID-Z group,
expanding its capacity incrementally.  This feature is especially useful
for small pools (typically with only one RAID-Z group), where there
isn't sufficient hardware to add capacity by adding a whole new RAID-Z
group (typically doubling the number of disks).

== Initiating expansion ==

A new device (disk) can be attached to an existing RAIDZ vdev by
running `zpool attach POOL raidzP-N NEW_DEVICE`, e.g. `zpool attach tank
raidz2-0 sda`.  The new device will become part of the RAIDZ group.  A
"raidz expansion" will be initiated, and the new device will contribute
additional space to the RAIDZ group once the expansion completes.

The `feature@raidz_expansion` on-disk feature flag must be `enabled` to
initiate an expansion, and it remains `active` for the life of the pool.
In other words, pools with expanded RAIDZ vdevs can not be imported by
older releases of the ZFS software.

== During expansion ==

The expansion entails reading all allocated space from existing disks in
the RAIDZ group, and rewriting it to the new disks in the RAIDZ group
(including the newly added device).

The expansion progress can be monitored with `zpool status`.

Data redundancy is maintained during (and after) the expansion.  If a
disk fails while the expansion is in progress, the expansion pauses
until the health of the RAIDZ vdev is restored (e.g. by replacing the
failed disk and waiting for reconstruction to complete).

The pool remains accessible during expansion.  Following a reboot or
export/import, the expansion resumes where it left off.

== After expansion ==

When the expansion completes, the additional space is available for use,
and is reflected in the `available` zfs property (as seen in `zfs list`,
`df`, etc).

Expansion does not change the number of failures that can be tolerated
without data loss (e.g. a RAIDZ2 is still a RAIDZ2 even after
expansion).

A RAIDZ vdev can be expanded multiple times.

After the expansion completes, old blocks remain with their old
data-to-parity ratio (e.g. a 5-wide RAIDZ2 has 3 data to 2 parity), but
distributed among the larger set of disks.  New blocks will be written
with the new data-to-parity ratio (e.g. a 5-wide RAIDZ2 which has been
expanded once to 6-wide, has 4 data to 2 parity).  However, the RAIDZ
vdev's "assumed parity ratio" does not change, so slightly less space
than is expected may be reported for newly-written blocks, according to
`zfs list`, `df`, `ls -s`, and similar tools.

Sponsored-by: The FreeBSD Foundation
Contributions-by: Fedor Uporov <fuporov.vstack@gmail.com>
Contributions-by: Stuart Maybee <stuart.maybee@comcast.net>
Contributions-by: Thorsten Behrens <tbehrens@outlook.com>
Contributions-by: Fmstrat <nospam@nowsci.com>

ahrens commented Nov 16, 2021

In spa_start_raidz_expansion_thread(), it starts the thread and then sets spa_raidz_expand_zthr. So the thread could start running and then look at spa_raidz_expand_zthr before it's been set. The solution is for the thread to use the passed-in zthr_t rather than spa_raidz_expand_zthr.
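
A minimal sketch of that change against the loop shown in the gdb listing above (loop body abbreviated; this shows the direction of the fix, not necessarily the exact committed diff):

```c
static void
spa_raidz_expand_cb(void *arg, zthr_t *zthr)
{
	spa_t *spa = arg;
	/* ... setup as in the listing above ... */

	/*
	 * Check the zthr_t passed in by zthr_procedure(), which is always
	 * valid here, rather than spa->spa_raidz_expand_zthr, which may not
	 * have been assigned yet when this thread first runs.
	 */
	for (uint64_t i = vre->vre_offset >> raidvd->vdev_ms_shift;
	    i < raidvd->vdev_ms_count &&
	    !zthr_iscancelled(zthr) &&
	    vre->vre_failed_offset == UINT64_MAX; i++) {
		metaslab_t *msp = raidvd->vdev_ms[i];
		/* ... unchanged loop body ... */
	}
}
```

With this, the loop no longer depends on when spa_raidz_expand_zthr is published, so the sleep() workaround becomes unnecessary.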
