add raidz expansion to ztest #12

Merged: 9 commits into ahrens:raidz on Sep 30, 2020

Conversation

stuartmaybee (Author):

Motivation and Context

Description

How Has This Been Tested?

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Performance enhancement (non-breaking change which improves efficiency)
  • Code cleanup (non-breaking change which makes code smaller or more readable)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation (a change to man pages or other documentation)

Checklist:

@@ -709,6 +739,7 @@ usage(boolean_t requested)
	    "\t[-o variable=value] ... set global variable to an unsigned\n"
	    "\t 32-bit integer value\n"
	    "\t[-G dump zfs_dbgmsg buffer before exiting due to an error\n"
	    "\t[-X off raidz_expand test, killing at off bytes into reflow\n"
ahrens (Owner):

Suggested change
"\t[-X off raidz_expand test, killing at off bytes into reflow\n"
"\t[-X off] raidz_expand test, killing at off bytes into reflow\n"

The above line has a similar typo; it should be [-G].
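
For reference, the corresponding fix for the -G line would be:

	    "\t[-G] dump zfs_dbgmsg buffer before exiting due to an error\n"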

stuartmaybee (Author):

Will fix. I just copied the typo from above; I'll fix that one too.


/*
* BUGBUG raidz expansion do not run this for now
* VERIFY0(vdev_raidz_impl_set("cycle"));
ahrens (Owner):

Is this still broken even though we have the SIMD support added?

stuartmaybee (Author):

Probably not; I will try adding it back in.
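
If the SIMD support has indeed fixed this, re-enabling would simply restore the call that the BUGBUG comment disabled:

	VERIFY0(vdev_raidz_impl_set("cycle"));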

* Set pool check done flag, main program will run zdb check of
* the pool when we exit.
*/
ztest_shared_opts->zo_raidz_expand_test = (UINT64_MAX - 1);
ahrens (Owner):

might make sense to have macros for the special values of this shared variable.

stuartmaybee (Author):

good idea, will fix
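
A minimal sketch of what such macros might look like; the macro names are hypothetical, not from this PR, though both values appear in the code above:

	/*
	 * Hypothetical names for the special values of zo_raidz_expand_test.
	 * (UINT64_MAX - 1) meaning "kill done, zdb check pending" comes from
	 * the code above; the zero value comes from the "!= 0" enable check.
	 */
	#define	RAIDZ_EXPAND_DISABLED	(0ULL)		/* no expansion test */
	#define	RAIDZ_EXPAND_CHECK_DONE	(UINT64_MAX - 1) /* run zdb check on exit */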

}

/*
* XXX - should we clear the reflow pause here?
ahrens (Owner):

I don't think so, because we want it to be paused here when we kill it.

stuartmaybee (Author):

My thinking here was that while we are paused we won't have writes in flight or data-structure updates in progress, so we have the window closed for some types of potential damage.

Comment on lines 7414 to 7418
done:
#if 0
/*
* XXX - we don't have any threads here because we killed the test and
* then ran a scrub in this thread rather than starting sub threads.
ahrens (Owner):

It doesn't seem like the "goto" is necessary; we could move this code to the one place where we "goto done".

stuartmaybee (Author):

Yeah, this is an artifact of copying ztest_run and hacking it around; will clean up.

ztest_opts.zo_pool);
ztest_spa_import_export(ztest_opts.zo_pool, name);
ztest_spa_import_export(name, ztest_opts.zo_pool);
}
ahrens (Owner):

Looks like this was copied from ztest_run(), but I'm not sure that all of these checks are especially relevant for expansion. Alternatively, we might break ztest_run() up into 3 sections, allowing us to reuse the code (see the sketch after this list):

  1. Set up shared state, open pool, create deadman thread
  2. Run threads (this would be different for regular vs expansion tests)
  3. Final checks and close pool
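
A rough sketch of that three-way split; the helper names (ztest_run_begin, ztest_run_end) and the expansion entry point are hypothetical, not from this PR:

	/* Hypothetical factoring of ztest_run(); names are illustrative only. */
	static void
	ztest_run_begin(ztest_shared_t *zs)
	{
		/* 1. Set up shared state, open the pool, start the deadman thread. */
	}

	static void
	ztest_run_end(ztest_shared_t *zs)
	{
		/* 3. Final checks (e.g. import/export) and close the pool. */
	}

	static void
	ztest_run(ztest_shared_t *zs)
	{
		ztest_run_begin(zs);
		/* 2a. Run the regular test threads. */
		ztest_run_end(zs);
	}

	static void
	ztest_raidz_expand_run(ztest_shared_t *zs)
	{
		ztest_run_begin(zs);
		/* 2b. Run the expansion/kill/scrub sequence instead of the regular threads. */
		ztest_run_end(zs);
	}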

stuartmaybee (Author):

Again, this is here from copying ztest_run and hacking it for our needs; I will remove it if it's not useful for the expansion test. I left it in because it seemed like it might provide an additional sanity check.


if (ztest_opts.zo_raidz_expand_test != 0 &&
ztest_opts.zo_raidz_expand_test < UINT64_MAX) {
desreflow = ztest_opts.zo_raidz_expand_test;
ahrens (Owner):

what values have you tried stopping at? Should this become part of zloop, to run with various values?

stuartmaybee (Author):

Only really tried a couple of offset values; I was mostly concentrating on getting things working. I envisioned this test being run via an outer driving script that varies the parameters (raidz size, offset, etc.), but I wasn't sure whether that would be zloop or something else. BTW, this test has revealed a problem; I will send a separate e-mail describing it.

@ahrens changed the title from "raidz" to "add raidz expansion to ztest" on Aug 30, 2020
@@ -425,6 +425,7 @@ struct spa {
	kcondvar_t	spa_waiters_cv;
	int		spa_waiters;		/* number of waiting threads */
	boolean_t	spa_waiters_cancel;	/* waiters should return */
	boolean_t	spa_raidz_expanding;	/* expansion in progress */
ahrens (Owner):

Why can't we use spa_raidz_expand != NULL, rather than adding a new field? Are they set/cleared at different times? It looks to me like spa_raidz_expand and spa_raidz_expanding are both set while the namespace lock is held, so I don't think there's any race condition when the expansion starts.

stuartmaybee (Author):

Because the spa_raidz_expand pointer is set in syncing context, which runs in a different task AIUI, we would have a race where we could exit here and another expand request could get in and also start an expand before the pointer gets set. I wanted to use the pointer setting, but this seemed better. (Unless I am misunderstanding how things work.)

ahrens (Owner):

Yeah so it happens via:

		dmu_tx_t *tx = dmu_tx_create_assigned(spa->spa_dsl_pool, txg);
		dsl_sync_task_nowait(spa->spa_dsl_pool, vdev_raidz_attach_sync,
		    newvd, 0, ZFS_SPACE_CHECK_EXTRA_RESERVED, tx);

which means that it will happen when that txg is synced, which we wait for with the call to spa_vdev_exit(spa, newrootvd, dtl_max_txg, 0). It looks like this all happens while holding the spa_namespace_lock, so another expansion couldn't be initiated before we set spa_raidz_expand.

stuartmaybee (Author):

Ah, I missed that spa_vdev_exit() waits until the txg has synced. That makes it simpler; will change to use the expand pointer.
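
A minimal sketch of the resulting check, using the names from this discussion; the error code and surrounding context are assumptions, not from this PR:

	/*
	 * Since vdev_raidz_attach_sync() runs in the txg that
	 * spa_vdev_exit() waits for, and both run under the
	 * spa_namespace_lock, the pointer itself can serve as the
	 * "expansion in progress" flag.
	 */
	ASSERT(MUTEX_HELD(&spa_namespace_lock));
	if (spa->spa_raidz_expand != NULL)
		return (spa_vdev_exit(spa, NULL, txg, EBUSY));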

@ahrens merged commit 8047b91 into ahrens:raidz on Sep 30, 2020
ahrens pushed a commit that referenced this pull request on Mar 9, 2023
Under certain loads, the following panic is hit:

    panic: page fault
    KDB: stack backtrace:
    #0 0xffffffff805db025 at kdb_backtrace+0x65
    #1 0xffffffff8058e86f at vpanic+0x17f
    #2 0xffffffff8058e6e3 at panic+0x43
    #3 0xffffffff808adc15 at trap_fatal+0x385
    #4 0xffffffff808adc6f at trap_pfault+0x4f
    #5 0xffffffff80886da8 at calltrap+0x8
    #6 0xffffffff80669186 at vgonel+0x186
    #7 0xffffffff80669841 at vgone+0x31
    #8 0xffffffff8065806d at vfs_hash_insert+0x26d
    #9 0xffffffff81a39069 at sfs_vgetx+0x149
    #10 0xffffffff81a39c54 at zfsctl_snapdir_lookup+0x1e4
    #11 0xffffffff8065a28c at lookup+0x45c
    #12 0xffffffff806594b9 at namei+0x259
    #13 0xffffffff80676a33 at kern_statat+0xf3
    #14 0xffffffff8067712f at sys_fstatat+0x2f
    #15 0xffffffff808ae50c at amd64_syscall+0x10c
    #16 0xffffffff808876bb at fast_syscall_common+0xf8

The page fault occurs because vgonel() will call VOP_CLOSE() for active
vnodes. For this reason, define vop_close for zfsctl_ops_snapshot. While
here, define vop_open for consistency.
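
A sketch of what that vop table might look like after the change; the handler names are assumptions patterned on the neighboring zfsctl vop vectors, not quoted from the commit:

	/* Sketch: zfsctl_ops_snapshot with vop_open/vop_close now defined. */
	static struct vop_vector zfsctl_ops_snapshot = {
		.vop_default =	&default_vnodeops,
		.vop_open =	zfsctl_common_open,	/* defined for consistency */
		.vop_close =	zfsctl_common_close,	/* handles vgonel()'s VOP_CLOSE */
		.vop_inactive =	zfsctl_snapshot_inactive,
		.vop_reclaim =	zfsctl_snapshot_reclaim,
	};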

After adding the necessary vop, the bug progresses to the following
panic:

    panic: VERIFY3(vrecycle(vp) == 1) failed (0 == 1)
    cpuid = 17
    KDB: stack backtrace:
    #0 0xffffffff805e29c5 at kdb_backtrace+0x65
    #1 0xffffffff8059620f at vpanic+0x17f
    #2 0xffffffff81a27f4a at spl_panic+0x3a
    #3 0xffffffff81a3a4d0 at zfsctl_snapshot_inactive+0x40
    #4 0xffffffff8066fdee at vinactivef+0xde
    #5 0xffffffff80670b8a at vgonel+0x1ea
    #6 0xffffffff806711e1 at vgone+0x31
    #7 0xffffffff8065fa0d at vfs_hash_insert+0x26d
    #8 0xffffffff81a39069 at sfs_vgetx+0x149
    #9 0xffffffff81a39c54 at zfsctl_snapdir_lookup+0x1e4
    #10 0xffffffff80661c2c at lookup+0x45c
    #11 0xffffffff80660e59 at namei+0x259
    #12 0xffffffff8067e3d3 at kern_statat+0xf3
    #13 0xffffffff8067eacf at sys_fstatat+0x2f
    #14 0xffffffff808b5ecc at amd64_syscall+0x10c
    #15 0xffffffff8088f07b at fast_syscall_common+0xf8

This is caused by a race condition that can occur when allocating a new
vnode and adding that vnode to the vfs hash. If the newly created vnode
loses the race when being inserted into the vfs hash, it will not be
recycled as its usecount is greater than zero, hitting the above
assertion.

Fix this by dropping the assertion.
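
In code terms, a sketch of the change in zfsctl_snapshot_inactive(), reconstructed from the panic message above (the exact VERIFY macro and surrounding code may differ):

	static int
	zfsctl_snapshot_inactive(struct vop_inactive_args *ap)
	{
		vnode_t *vp = ap->a_vp;

		/* Losing the vfs_hash_insert() race is legal; don't assert. */
		vrecycle(vp);
		return (0);
	}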

FreeBSD-issue: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=252700
Reviewed-by: Andriy Gapon <avg@FreeBSD.org>
Reviewed-by: Mateusz Guzik <mjguzik@gmail.com>
Reviewed-by: Alek Pinchuk <apinchuk@axcient.com>
Reviewed-by: Ryan Moeller <ryan@iXsystems.com>
Signed-off-by: Rob Wing <rob.wing@klarasystems.com>
Co-authored-by: Rob Wing <rob.wing@klarasystems.com>
Submitted-by: Klara, Inc.
Sponsored-by: rsync.net
Closes openzfs#14501