
New config framework for the different schedulers #913

Merged: 40 commits merged into master on Mar 13, 2019
Conversation

@na-- (Member) commented on Jan 29, 2019

This is a (very WIP) subset containing the config-related changes from the massive refactoring needed before we can implement the arrival-rate based execution. I haven't committed any of the unfinished tests, since this is mostly intended to serve as a starting point for discussions about the different TODO statements that litter the current code. Still, it should be able to parse (and mostly validate) something like this:

export let options = {
    execution: {
        someKey: {
            type: "constant-looping-vus",
            vus: 10,
            duration: "60s",
            startTime: "10s",
            exec: "someExportedFunctionThatsNotTheDefault",
        },
        otherKey: {
            type: "variable-looping-vus",
            startVUs: 10,
            stages: [
                { target: 50, duration: "1m" },
                { target: 50, duration: "3m30s" },
                { target: 0, duration: "30s" },
            ],
        },
        yetAnotherKey: {
            type: "shared-iterations",
            vus: 50,
            iterations: 1000,
        },
        newStuff: {
            type: "constant-arrival-rate",
            rate: 20,
            timeUnit: "1m",
            duration: "1h",
            preAllocatedVUs: 50,
            maxVUs: 100,
        },
        maybeAlsoNewStuff: {
            type: "variable-arrival-rate",
            startRate: 20,
            // Initialize 50 VUs, if more than 50 are needed, they will be dynamically allocated
            preAllocatedVUs: 50,
            // Hard limit, no more than 100 VUs will be run, no matter what
            maxVUs: 100,

            stages: [
                { /* something something */ },
                //TODO: figure out stages? same as variable-looping-vus?
            ],
        },
    }
};

@na-- na-- requested a review from mstoykov January 29, 2019 11:04
var _ Config = &ConstantArrivalRateConfig{}

// Validate makes sure all options are configured and valid
func (carc ConstantArrivalRateConfig) Validate() []error {
Inline review comment (Contributor):
I am pretty sure we will have to either use a validation library or write one ourselves, even if it means adding code generation. Maybe we can leave it until we actually redo the configuration, but... I don't like code like this; it will most definitely miss something.

@na-- (Member, Author) replied:

I looked at several validation libraries, and none of them seemed like a big improvement. The biggest obstacle was the nullable types we use: we'd have to write a bunch of adapters to support them properly, which doesn't seem worth it...

I think we can consolidate some of the repeated checks into helper functions, or reuse some validator functions (but not the struct-tag and reflection-based parts) from a library for things like IP addresses, URLs, etc., but I'm not sure it will make sense to do more than that. Code generation definitely feels like an extreme response to this problem...

And I doubt any other solution will allow us the current flexibility. Consider this validation code below:

if !carc.Rate.Valid {
	errors = append(errors, fmt.Errorf("the iteration rate isn't specified"))
} else if carc.Rate.Int64 <= 0 {
	errors = append(errors, fmt.Errorf("the iteration rate should be positive"))
}

Notice how only one of the errors will be emitted, since it doesn't make sense to show both the "not specified" and "should be positive" errors at the same time. I'm not sure how libraries would handle that correctly. We also have the flexibility to write very user-friendly and specific error messages, and we can have subtly different validations between the different configs; for example, the validation for the variable arrival-rate doesn't require the value and allows it to be 0, but not negative.

I'm sure we could create something with all of this flexibility but less boilerplate; it's just likely to be very difficult, so I'm not sure it would end up being worth it.
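One way to trim the boilerplate while keeping the per-config error messages and the "at most one error per field" behavior is a pair of small helpers. This is only a sketch under assumptions: the `NullInt64` type and the helper names are hypothetical stand-ins, not the actual k6 code.

```go
package main

import "fmt"

// NullInt64 is a stand-in for the nullable integer type used in the configs.
type NullInt64 struct {
	Int64 int64
	Valid bool
}

// checkPositive emits at most one error per field: either "not specified"
// or "should be positive", mirroring the hand-written validation above.
func checkPositive(v NullInt64, name string) []error {
	if !v.Valid {
		return []error{fmt.Errorf("the %s isn't specified", name)}
	}
	if v.Int64 <= 0 {
		return []error{fmt.Errorf("the %s should be positive", name)}
	}
	return nil
}

// checkNonNegative allows unset or zero values but rejects negatives,
// the subtly different rule the variable arrival-rate config needs.
func checkNonNegative(v NullInt64, name string) []error {
	if v.Valid && v.Int64 < 0 {
		return []error{fmt.Errorf("the %s shouldn't be negative", name)}
	}
	return nil
}

func main() {
	var errs []error
	errs = append(errs, checkPositive(NullInt64{}, "iteration rate")...)
	errs = append(errs, checkNonNegative(NullInt64{Int64: 0, Valid: true}, "start rate")...)
	fmt.Println(len(errs)) // prints 1: only the unset iteration rate is an error
}
```

Each config's `Validate` would then be a short list of helper calls plus any checks that are truly unique to it, keeping the custom error text without the repeated if/else chains.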

@codecov-io commented on Jan 30, 2019

Codecov Report

Merging #913 into master will increase coverage by 1.35%.
The diff coverage is 86.88%.


@@            Coverage Diff             @@
##           master     #913      +/-   ##
==========================================
+ Coverage   70.72%   72.07%   +1.35%     
==========================================
  Files         121      131      +10     
  Lines        9188     9598     +410     
==========================================
+ Hits         6498     6918     +420     
+ Misses       2283     2267      -16     
- Partials      407      413       +6
Impacted Files Coverage Δ
cmd/common.go 40% <ø> (ø) ⬆️
cmd/login_cloud.go 9.25% <0%> (ø) ⬆️
cmd/login_influxdb.go 3.77% <0%> (ø) ⬆️
lib/scheduler/shared_iterations.go 100% <100%> (ø)
cmd/root.go 34% <100%> (+7.33%) ⬆️
lib/scheduler/helpers.go 100% <100%> (ø)
lib/scheduler/variable_arrival_rate.go 100% <100%> (ø)
js/bundle.go 83.33% <100%> (ø) ⬆️
converter/har/converter.go 64.36% <100%> (ø) ⬆️
lib/scheduler/constant_arrival_rate.go 100% <100%> (ø)
... and 26 more

Legend:
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update cf8bccc...d88be34. Read the comment docs.

@na-- changed the title from "[wip] proposal for config framework for the different schedulers" to "New config framework for the different schedulers" on Feb 26, 2019
@mstoykov (Contributor) left a comment:

Mostly LGTM, just some minor questions and 2 ❤️ ;)

Resolved review threads: cmd/config.go (×3), lib/scheduler/helpers.go (×2), lib/scheduler/schedulers_test.go, lib/scheduler/configmap.go, lib/scheduler/constant_looping_vus.go, lib/scheduler/interfaces.go
With this, script runs without any execution params will still have a 1-iteration, 1-VU execution plan, but it will use the per-VU scheduler, so it could eventually be executed both locally and in the cloud.
This is woefully insufficient, but even that was painful to write... Further refactoring and tests will follow, especially around JSON/JS configs and mixing configuration between the CLI, env-var, JS, and JSON layers. The effort will hopefully be worth it, since we should be able to reuse these tests to verify that we're not breaking things when we finally tackle #883 properly.
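The layer mixing mentioned above boils down to overlaying partial configs in precedence order, where a field only overrides lower layers if it was explicitly set, which is exactly what nullable types make possible. A minimal sketch under assumptions: the type and field names here are illustrative, not k6's actual structs, and the precedence order shown (JSON config, then script options, then CLI flags) is a simplification.

```go
package main

import "fmt"

// NullString and NullInt64 are stand-in nullable values: a field only
// overrides lower-precedence layers when it was explicitly set.
type NullString struct {
	String string
	Valid  bool
}

type NullInt64 struct {
	Int64 int64
	Valid bool
}

type Options struct {
	Duration NullString
	VUs      NullInt64
}

// Apply overlays one config layer on top of the receiver; call it once
// per layer, from lowest to highest precedence.
func (o Options) Apply(layer Options) Options {
	if layer.Duration.Valid {
		o.Duration = layer.Duration
	}
	if layer.VUs.Valid {
		o.VUs = layer.VUs
	}
	return o
}

func main() {
	jsonCfg := Options{Duration: NullString{"10s", true}} // lowest precedence
	script := Options{VUs: NullInt64{20, true}}           // sets only VUs
	cli := Options{Duration: NullString{"1m", true}}      // highest precedence
	final := Options{}.Apply(jsonCfg).Apply(script).Apply(cli)
	fmt.Println(final.Duration.String, final.VUs.Int64) // prints: 1m 20
}
```

The tricky cases, and the reason consolidation tests matter, are fields like `execution` that are derived from or conflict with other fields, so a plain per-field overlay isn't enough.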
@na-- na-- mentioned this pull request Mar 1, 2019
@na-- (Member, Author) commented on Mar 1, 2019

I'm mostly pausing work here until #935 is done and merged back into this branch. The main remaining work there involves adding more config tests, especially ones involving different config layers, and cleaning up the JSON config file support by fixing at least some of the issues described in this comment: #883 (comment)

I still plan to add a separate error type for execution config conflicts (#913 (comment)), but I'm still not sure whether that will be in the other branch or in this one.

Resolved review threads: cmd/config.go, cmd/config_consolidation_test.go, cmd/cloud.go, lib/scheduler/helpers_test.go, lib/scheduler/schedulers_test.go (×2), cmd/root.go (×2), lib/scheduler/configmap.go (×2)
@na-- na-- requested a review from mstoykov March 11, 2019 12:57
@mstoykov (Contributor) left a comment:

LGTM, although we should probably do some pretty significant testing before releasing this. I assume you have also tested it manually, since, as you mentioned, the current tests can't cover this accurately.

@na-- (Member, Author) commented on Mar 11, 2019

@mstoykov, yeah, manual testing is definitely still necessary. Unfortunately, despite all of the refactoring efforts to make the config more testable, there are huge gaps... I've done some manual testing already, but I want to do quite a bit more today and tomorrow, so I won't merge this until I'm fairly confident there aren't any obvious issues.

@na-- na-- requested a review from mstoykov March 12, 2019 14:10
The `execution` setting is currently the only one that can cause an error when both it and a shortcut (duration/iterations/stages) are set, even though it's not yet used by k6 itself in this PR. So it needs to be zeroed out when shortcuts are used; otherwise it could never be overwritten at all.
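The zeroing behavior described in this commit message can be illustrated with a toy consolidation step. This is a hypothetical, heavily simplified sketch, not k6's actual code; the `Config`, `applyShortcut`, and `validate` names are invented for illustration.

```go
package main

import "fmt"

// Config is a toy stand-in: Execution mimics the scheduler config map,
// Duration mimics one of the shortcut options.
type Config struct {
	Duration  string
	Execution map[string]string
}

// applyShortcut sets a shortcut option from a higher-precedence layer.
// The derived execution config must be zeroed out at the same time;
// otherwise the stale execution value from a lower layer would conflict
// with the new shortcut and could never be overridden.
func applyShortcut(c Config, duration string) Config {
	c.Duration = duration
	c.Execution = nil // zero out so the shortcut can rebuild it
	return c
}

// validate rejects the conflicting combination of both settings.
func validate(c Config) error {
	if c.Duration != "" && c.Execution != nil {
		return fmt.Errorf("specifying both execution and the duration shortcut is an error")
	}
	return nil
}

func main() {
	c := Config{Execution: map[string]string{"default": "constant-looping-vus"}}
	c = applyShortcut(c, "30s")
	fmt.Println(validate(c) == nil) // prints true: no conflict after zeroing
}
```

Without the `c.Execution = nil` line, every later layer that sets a shortcut would trip the conflict check against execution config it never asked for, which is the "couldn't be overwritten at all" problem the commit message describes.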
@mstoykov (Contributor) left a comment:

LGTM

@na-- na-- merged commit 3a5ef34 into master Mar 13, 2019
@na-- na-- deleted the scheduler-config-wip branch March 13, 2019 11:50
@na-- na-- mentioned this pull request May 15, 2019