
Make AWS auth roles "roundtrippable" so we can manage them with terraform. #3837

Closed
tomwilkie opened this issue Jan 24, 2018 · 3 comments · Fixed by #3843

Comments

tomwilkie (Contributor) commented Jan 24, 2018

If I underspecify my AWS auth role in terraform as a vault_generic_secret like this:

resource "vault_generic_secret" "node_role" {
  path         = "auth/aws/role/node"
  depends_on   = ["vault_auth_backend.aws"]
  disable_read = false

  data_json = <<EOF
{
  "allow_instance_migration": false,
  "auth_type": "ec2",
  "bound_account_id": "${data.aws_caller_identity.current.account_id}",
  "bound_region": "${data.aws_region.current.name}",
  "bound_vpc_id": "${var.vpc_id}",
  "disallow_reauthentication": false,
  "max_ttl": 3600,
  "period": 0,
  "policies": [
    "fetch_ssh_public_key"
  ],
  "resolve_aws_unique_ids": true,
  "role_tag": "",
  "ttl": 3600
}
EOF
}

I get the following plan-time diff when nothing has changed:

  ~ module.vault.vault_generic_secret.node_role
      data_json:                 "{\"allow_instance_migration\":false,\"auth_type\":\"ec2\",\"bound_account_id\":\"xxx\",\"bound_ami_id\":\"\",\"bound_iam_instance_profile_arn\":\"\",\"bound_iam_principal_arn\":\"\",\"bound_iam_principal_id\":\"\",\"bound_iam_role_arn\":\"\",\"bound_region\":\"us-west-2\",\"bound_subnet_id\":\"\",\"bound_vpc_id\":\"vpc-xxx\",\"disallow_reauthentication\":false,\"inferred_aws_region\":\"\",\"inferred_entity_type\":\"\",\"max_ttl\":3600,\"period\":0,\"policies\":[\"fetch_ssh_public_key\"],\"resolve_aws_unique_ids\":true,\"role_tag\":\"\",\"ttl\":3600}" => "{\"allow_instance_migration\":false,\"auth_type\":\"ec2\",\"bound_account_id\":\"xxx\",\"bound_region\":\"us-west-2\",\"bound_vpc_id\":\"vpc-xxx\",\"disallow_reauthentication\":false,\"max_ttl\":3600,\"period\":0,\"policies\":[\"fetch_ssh_public_key\"],\"resolve_aws_unique_ids\":true,\"ttl\":3600}"
      disable_read:              "false" => "true"

If I instead fully specify the role like this:

resource "vault_generic_secret" "node_role" {
  path         = "auth/aws/role/node"
  depends_on   = ["vault_auth_backend.aws"]
  disable_read = false

  data_json = <<EOF
{
  "allow_instance_migration": false,
  "auth_type": "ec2",
  "bound_account_id": "${data.aws_caller_identity.current.account_id}",
  "bound_ami_id": "",
  "bound_iam_instance_profile_arn": "",
  "bound_iam_principal_arn": "",
  "bound_iam_principal_id": "",
  "bound_iam_role_arn": "",
  "bound_region": "${data.aws_region.current.name}",
  "bound_subnet_id": "",
  "bound_vpc_id": "${var.vpc_id}",
  "disallow_reauthentication": false,
  "inferred_aws_region": "",
  "inferred_entity_type": "",
  "max_ttl": 3600,
  "period": 0,
  "policies": [
    "fetch_ssh_public_key"
  ],
  "resolve_aws_unique_ids": true,
  "role_tag": "",
  "ttl": 3600
}
EOF
}

I indeed get no plan diff on existing clusters, but I get the following error when I apply to a new cluster:

Error: Error applying plan:

1 error(s) occurred:

* module.vault.vault_generic_secret.node_role: 1 error(s) occurred:

* vault_generic_secret.node_role: error writing to Vault: Error making API request.

URL: PUT https://vault.uswest-cluster.aws.grapeshot.co.uk:8200/v1/auth/aws/role/node
Code: 400. Errors:

* failed updating the unique ID of ARN "": &errors.errorString{s:"unrecognized arn: contains 1 colon-separated parts, expected 6"}

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

It seems to me we should support setting the various fields to empty strings to indicate they are not in use. WDYT?

joelthompson (Contributor) commented:

Hey @tomwilkie -- can you run an experiment? For a new cluster or role, try setting resolve_aws_unique_ids to false instead of true and see if it still gives you that error (I don't believe it should) or if there are other errors. If that works, then it'll be a pretty simple bug fix. Note: this must be on a new cluster or role, since you can't change that value from true to false on an existing role.

If this works for you, then it should be a simple fix and I'll try to get a PR in soon.
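A minimal sketch of the suggested experiment, reusing the under-specified resource from the issue description: the only change is resolve_aws_unique_ids flipped to false, and per the note above it has to be applied against a brand-new cluster or role.

# Sketch of the suggested experiment: identical to the under-specified role
# above, except resolve_aws_unique_ids is set to false. Apply only to a new
# cluster or role, since Vault does not allow flipping this back from true
# to false on an existing role.
resource "vault_generic_secret" "node_role" {
  path         = "auth/aws/role/node"
  depends_on   = ["vault_auth_backend.aws"]
  disable_read = false

  data_json = <<EOF
{
  "allow_instance_migration": false,
  "auth_type": "ec2",
  "bound_account_id": "${data.aws_caller_identity.current.account_id}",
  "bound_region": "${data.aws_region.current.name}",
  "bound_vpc_id": "${var.vpc_id}",
  "disallow_reauthentication": false,
  "max_ttl": 3600,
  "period": 0,
  "policies": [
    "fetch_ssh_public_key"
  ],
  "resolve_aws_unique_ids": false,
  "role_tag": "",
  "ttl": 3600
}
EOF
}

If this avoids the "unrecognized arn" error, the problem is isolated to Vault resolving an empty bound_iam_principal_arn when resolve_aws_unique_ids is true.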

joelthompson added a commit to joelthompson/vault that referenced this issue Jan 25, 2018
In cases where there doesn't need to be a bound_iam_principal_arn, i.e.,
either auth_type is ec2 or there are other bindings with the iam
auth_type, but it is specified explicitly anyway, Vault tried to parse
it to resolve to internal unique IDs. This now checks to ensure that
bound_iam_principal_arn is non-empty before attempting to resolve it.

Fixes hashicorp#3837
jefferai pushed a commit that referenced this issue Jan 25, 2018

* auth/aws: Fix error with empty bound_iam_principal_arn (same commit message as above)

Fixes #3837

* Fix extraneous newline
tomwilkie (Contributor, Author) commented:

Hi @joelthompson, sorry for the delay. I can confirm the predicted behaviour is correct. Thanks for the quick fix!

joelthompson (Contributor) commented:

Awesome, glad I could help :-)
