Since long before I worked at HashiCorp I've been interested in the problems with how Terraform interacts with security-sensitive information like passwords and private keys.

Using Terraform to manage anything that handles such information often causes today's Terraform to need to persist those values in artifacts such as saved plan files or state snapshots, which means that these artifacts become an effective second source for compromising those secrets. This is an example of secret sprawl, which is a general term for the situation where a secret is stored in multiple locations and thus only one of those locations needs to be compromised for the secret itself to become compromised.

Because of this problem, we've typically warned folks away from using Terraform to directly manage secrets, encouraging them instead to find alternative strategies that can, for example, allow Terraform to handle only metadata about the secrets rather than the secrets themselves.

Pragmatism tends to prevail though, and so folks reasonably prioritize the convenience of using Terraform's workflow for everything over the risks of secret sprawl. I did this myself, which ultimately led me to propose the relatively-straightforward solution of just encrypting state snapshots in their entirety, which mimicked a custom solution I'd developed myself in my former employer's Terraform automation.

I made that proposal back in late 2016, before I worked on the Terraform team at HashiCorp. In the meantime I've had the opportunity to discuss variants of this problem (and proposed solution!) with a number of different Terraform module authors and operators, which put me in the interesting position of disagreeing with my earlier proposal but not wanting to close it because it had good feedback, discussion, and upvotes associated with it.

This article is honestly something I should have written a while ago: an overview of how my thinking on this topic has evolved in the meantime, and why I think encrypting the entire state doesn't actually solve the underlying problem, and potentially creates new problems.

The Original Proposal

The high-level idea of my original 2016 proposal was to teach Terraform's "state managers" (the code that deals with taking snapshots of the live state and persisting them to a configured location) to optionally pass each new snapshot to a configurable encryption service, such as HashiCorp Vault or Amazon KMS.

The idea then was that the client would pre-encrypt the snapshots and then the remote storage would only "see" the ciphertext. Any future Terraform run would need to be able to take that saved ciphertext and decrypt it again to recover the cleartext state snapshot, which it could then parse and decode as normal.

This is, in retrospect, a pretty naive approach. Doing anything with Terraform at all requires the ability to read and write state snapshots, and so this approach means that any person or system running Terraform would need access to the encryption/decryption service. That could mean either direct possession of a local encryption key, or access to call a remote transit encryption service like HashiCorp Vault.

That means that in effect we've not really solved the problem: instead of reducing exposure to the secrets, we've added yet another secret that we now need to control access to, in addition to all of the existing necessary secrets such as credentials for the target infrastructure platform. While this could be considered a plausible defense-in-depth strategy, I think it falls short because the new key (or access to it) must be broadly available for practical use of Terraform.

At the time I made that proposal Terraform was a relatively simple product, but even then I'd already identified a significant drawback of full-state encryption in the proposal: it would be impossible to use the terraform_remote_state data source without giving the operators of the consuming configuration access to decrypt the latest state snapshot, thereby creating yet another entry point through which someone might compromise the state.

The proposal also did nothing about saved plan files, which are effectively a superset of a state snapshot: they contain the entire prior state and potentially more sensitive information from the current plan.

In the meantime it's become common to build various kinds of automation workflow around the terraform show -json facilities, which effectively create a copy of the state or plan data in a documented JSON format that is then processed by some other software. Should a configuration with fully-encrypted state snapshots also produce encrypted JSON representations of plan and state? But that means that now the consuming automation also needs access to the key, causing even further sprawl.

All of this led me to peel away the details of my naive proposal to reveal the root problem: Terraform ought not to be handling some of these secrets at all, and for those where Terraform does need to handle them it should do so in a way that exposes them only when performing an operation that relies on them, so that routine maintenance of a system that does not involve the secrets can be performed without any access to those secrets.

The modified goal, then, is to find ways to minimize Terraform's exposure to secrets, and in particular to avoid saving secrets in state snapshots or saved plan files.

Ephemeral Values: A new building block

After trying a number of different variations, I came to realize that there was an important missing piece in the programming model for Terraform modules: Today's Terraform largely assumes that values evaluated during the plan phase will remain unchanged during the apply phase -- this is what makes the plan phase meaningful at all -- and Terraform relies on retaining values from the previous plan/apply round to decide when actions need to be taken in a subsequent round.

But in practice not everything is shaped like that, even in today's Terraform: provider configurations get evaluated separately during plan and then during apply, because a provider instance is a transient object that exists only during a single phase. Provisioner configurations don't get evaluated at all (beyond basic validation) until the apply phase.

I decided to label this concept of values and objects that don't persist between phases or between rounds as "ephemeral", and then think about how we might generalize that idea as a cross-cutting concern in the Terraform language. Doing so requires some care, because the Terraform plan/apply workflow relies on non-ephemerality in certain contexts, but I wondered if we could carefully introduce some new capabilities that can narrowly solve new problems without breaking Terraform's existing fundamentals.

The result of that design iteration is the concept of ephemeral values, which I expect Terraform would treat in a similar manner to its current concept of "sensitive values": it's a piece of metadata about a value that travels with it to derived expressions but does not change the value itself.

Terraform uses this to perform a dynamic analysis that ensures that by default any result derived from a sensitive value is considered sensitive itself, and then the UI layer knows that whenever it's asked to render such a value it should produce a redaction message instead.

Although "sensitive values" are largely a UI concern, the language itself does constrain their use in a few ways to minimize the possibility of leaking sensitive information through metadata. For example, Terraform won't allow using a sensitive value to decide the count for a resource because then the number of instances that are planned might imply what the sensitive value was.
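
For example, something like the following (with hypothetical resource and variable names) would be rejected, because the number of planned instances would effectively disclose the length of the sensitive set:

variable "replica_names" {
  type      = set(string)
  sensitive = true
}

resource "aws_instance" "replica" {
  # Invalid: "count" may not be derived from a sensitive value,
  # because length(var.replica_names) is itself sensitive.
  count = length(var.replica_names)
}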

Marking a value as "ephemeral" is, on the other hand, primarily a language concern rather than a UI one: it allows Terraform to permit ephemerality in the parts of the language where it's acceptable, while rejecting it in locations where Terraform relies on being able to preserve values between phases.

I'll say more about that below, but as a high-level example: it's fine to use an ephemeral value in a provider block to configure a provider, because provider instances are ephemeral themselves anyway, but it's forbidden to use an ephemeral value to configure a managed resource in a resource block because the planning mechanism relies on values persisting from plan to apply and from one round to the next. (I'll discuss a potential narrow exception to that later in this article.)

Ephemeral Input Variables

The concept of ephemerality, and the idea of values potentially being ephemeral, are both cross-cutting concerns that potentially affect the entire language. But the purpose of introducing them is to give room for specific new capabilities that rely on ephemerality, which is a useful characteristic for secrets but also has applications beyond passing secrets, as we'll see.

A relatively-simple starting point is the idea of an input variable being ephemeral. Input variables don't really "do anything" themselves, so they don't have any direct need to be persisted, but today's Terraform requires them to be persisted so that it's safe to use them to configure other objects that do require persistence.

After giving the language the ability to track ephemerality of values, we can allow declaring that a specific input variable is ephemeral:

variable "aws_jwt" {
  type      = string
  sensitive = true
  ephemeral = true
}

Setting ephemeral = true has a few new implications:

  • The value obtained by a reference like var.aws_jwt will be marked as ephemeral, making it invalid to use in locations where persistence is required.

  • If the variable is declared in the root module, Terraform will no longer save the value in a saved plan file, and instead the operator must provide the value again -- or, optionally, a different value -- during the apply phase.

Even with just this small building block we solve a challenge with today's Terraform: we can now pass in a time-limited credential during the plan phase without the risk that it would've expired by the time we get to apply, because it's fine to just provide a new credential (that has equivalent access) during the apply phase:

provider "aws" {
  region = "us-west-2"

  assume_role_with_web_identity {
    role_arn           = "..."
    web_identity_token = var.aws_jwt

    # NOTE: Today's AWS provider also offers an alternative
    # web_identity_token_file argument that reads the
    # token from a file on disk, which is a viable current
    # workaround for this problem because you can change
    # the content of that file between plan and apply
    # without Terraform Core noticing. This new feature
    # avoids the need for such workarounds, and is a
    # more general solution that can work with other
    # providers that don't support such a workaround.
  }
}

Because ephemerality is "infectious" when deriving new values from other ephemeral values, we need to be careful to avoid creating confusing situations where someone tries to use a module that wasn't designed with ephemerality in mind. Therefore it's invalid to pass an ephemeral value to a child module's input variable unless the module author declared the variable as ephemeral. This uses the module boundary -- which is also a typical cross-team collaboration boundary -- as a limit on the dynamic analysis, allowing Terraform to give better feedback about incorrect use.
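
For example, a shared module that expects to receive a credential must explicitly opt in by declaring its input variable as ephemeral (the module and variable names here are hypothetical):

# In the child module:
variable "admin_password" {
  type      = string
  sensitive = true
  ephemeral = true
}

# In the calling module: passing an ephemeral value (such as an
# ephemeral root module input variable) is valid only because the
# child module declared admin_password as ephemeral.
module "database" {
  source = "./modules/database"

  admin_password = var.new_admin_password
}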

For symmetry then, output values can also be ephemeral to get the equivalent effect in the opposite direction:

output "example" {
  value     = var.something_ephemeral
  ephemeral = true
}

The above both ensures that the module author is intentionally returning something ephemeral (since that will always constrain what the calling module can do with it) and encourages module authors to document that guarantee/constraint in a way that documentation generation tools can "see" it, without having to perform full dynamic analysis themselves.

Ephemeral Resources

Although ephemeral input variables are helpful, things really get interesting if we allow arbitrary code in providers to model ephemeral objects in remote systems.

Two interesting examples I've been using for my prototyping are:

  • HashiCorp Vault secret leases: Vault has a concept of secrets being "leased" to clients, with an explicit expiration time. After the expiration time is reached, the issued secret may become invalid. Ideally leases should have a short validity period, and so Vault also allows clients to explicitly renew a lease if it is needed for longer than originally expected, which therefore discourages requesting a longer lease "just in case".

    The current hashicorp/vault provider uses some trickery to approximate this behavior, despite Terraform Core having no real understanding of leases, but that approach both causes secret sprawl (values copied into the state) and encourages using longer-than-ideal lease periods to reduce the risk that a secret used during planning will have expired before the apply phase is complete.

  • SSH tunnels: This is an interesting use-case that doesn't directly relate to secret values, and instead addresses another long-standing feature request: a mechanism for Terraform to temporarily gain access to services in a remote network while it does its work.

    This use-case illustrates that ephemerality is a concept broader than, and somewhat independent of, "sensitivity". An SSH tunnel is a security-sensitive object in that it grants access to a remote network that would otherwise be inaccessible, but the local port number of an SSH tunnel is not a secret, despite it being ephemeral.

    It's always encouraging when a cross-cutting language feature can solve problems beyond the direct motivation, as seems to be the case here.

In an earlier prototype I experimented with extending the concept of data resources to allow them to optionally be treated as ephemeral, which would mean that they would get read separately during both plan and apply -- potentially generating different results each time -- and would not have their results persisted anywhere.

However, that only really solves part of the problem. The two use-cases above require at least two lifecycle events, which I'm calling "open" and "close" to help distinguish from the CRUD-like actions we use for other kinds of resources. "Close" for a Vault secret means immediately terminating the lease, potentially allowing the temporary secret to be revoked sooner. "Close" for an SSH tunnel means disconnecting from the SSH server and closing the tunnel listen socket, preventing any further access to the remote network through the tunnel.

The Vault use-case also requires a third event: "renew". The result from "opening" a Vault secret lease includes an expiration time before which Terraform must renew the lease to keep using it. The Vault provider would tell Terraform that timestamp and then Terraform Core would ask the provider to renew the lease some safe margin before the expiration time.

This new lifecycle is different enough that it seems to deserve an entirely new kind of resource, similar to how we added "data resources" (a long time ago) as a separate kind from "managed resources", to be explicit that Terraform interacts with them in a very different way.

That then leads to the new idea of "ephemeral resources", declared with ephemeral blocks:

# (this is assuming that the hashicorp/vault provider would be extended to
# offer most or all of its current data resource types as ephemeral resource
# types too, using shorter lease durations, automatic renewal, and closing
# the lease immediately once it's no longer needed)
ephemeral "vault_aws_access_credentials" "main" {
  backend = "aws/example"
  role    = "terraform"
}

provider "aws" {
  region = "us-west-2"

  access_key = ephemeral.vault_aws_access_credentials.main.access_key
  secret_key = ephemeral.vault_aws_access_credentials.main.secret_key
}

This example shows how we could arrange to use a Vault token as the sole credential issued directly to Terraform, and then have Terraform use that token to obtain other credentials dynamically at runtime.

# (this is assuming a hypothetical "SSH provider" that implements
# the SSH client and tunnel functionality as a plugin, rather than
# built in to Terraform Core.)
ephemeral "ssh_tunnels" "vault" {
  server   = ""
  username = "terraform"

  auth_methods = [
    { password = var.bastion_password },
  ]

  tcp_local_to_remote "api" {
    # This address is resolved from the perspective of the
    # remote bastion server, so it can use the remote
    # network's internal DNS zone to refer to the Vault servers.
    remote = "vault.internal:8200"
  }
}

provider "vault" {
  # This is set to the randomly-selected port on localhost that
  # represents the local end of the SSH tunnel. Connecting here
  # will cause the SSH server to connect to the remote endpoint
  # and then forward packets through the tunnel.
  address = ephemeral.ssh_tunnels.vault.tcp_to_remote["api"].local
}

This example shows how we could use a hypothetical new "SSH provider" to establish SSH tunnels into a remote network and then configure providers for private-network-hosted services like Vault to connect through the established tunnels. Terraform would notice that the provider "vault" block refers to ephemeral.ssh_tunnels.vault and therefore know that the SSH tunnel resource instance must remain open for as long as that provider instance is working, but can be closed once the provider's work is all done.

The value at ephemeral.ssh_tunnels.vault, and everything inside that object, is ephemeral.

Write-only Resource Attributes

I mentioned earlier that it would be generally forbidden to use ephemeral values in a resource block (aside from provisioners), but that there might be a narrow exception. This section is about that exception.

There exists a genre of resource types that manage something that either is a secret, or has a secret. Initially I'd hoped to find a general solution for everything in this genre, but after various failed attempts I've concluded that we need to use different treatment for two different variants.

Firstly we have generated secrets that exist entirely in state. For example:

  • hashicorp/tls offers the tls_private_key managed resource type, which manages an RSA or ECDSA private key entirely within Terraform, using the Terraform state as the sole storage location.

  • hashicorp/random offers the random_password managed resource type, which manages a randomly-generated string that's intended for use as a password.

These essentially have no future for anyone who wants to avoid storing secrets in Terraform state, because for these ones the Terraform state is the only storage. For these then, I'm anticipating a new pattern using ephemeral resources which I'll discuss in the next section.

Secondly, we have resource types that manage remote objects that want to store their own secrets. For example:

  • hashicorp/vault offers various managed resource types for writing secrets into Vault.

  • hashicorp/aws has aws_db_instance, representing a database instance in Amazon RDS. The instance isn't itself a secret, but its configuration can include an admin password, which today's Terraform would therefore persist in plan files and state snapshots.

These are a more tractable problem to solve, because we can assert that Terraform should handle the secret information only temporarily when writing it, and then a remote system becomes the source of truth for that secret. This variation is what this section is about.

Today's Terraform relies on comparing the prior state to the desired state (as described in the configuration) to determine what actions are required. That can work only if we have a prior state to compare with, which is why today's Terraform wants to save all of the finalized attribute values in a state snapshot to use during the next round.

If we want to avoid storing certain values in the state then we need to choose a new rule for deciding what action is required, if any. My current idea for this is to allow a provider to declare that a particular attribute of a particular resource type is "write-only", which then has the following effects:

  • The "prior state" is effectively always null -- representing "unspecified" in this case -- and any "planned new value" saved in a plan file is always either null or an unknown value of the attribute's type, where the latter represents "to be determined during apply" as usual.

  • The rule for deciding a change action is that assigning any non-null value represents "change to this value", while assigning null represents leaving the value unchanged.

    (This echoes a convention already followed by some providers to avoid the need to constantly re-supply a secret on future runs once the object has been created, and is consistent with the idea that the prior state is always null.)

  • If another part of the module refers directly or indirectly to the attribute's value, it appears as an ephemeral value to represent that it is never persisted, but it can be used in locations where ephemeral values are acceptable.

  • Ephemeral values can be assigned to write-only attributes, as an exception to the typical rule that managed resource attributes must always be persistable.

An important detail in the above is that the planning phase is effectively persisting a decision based on the "nullness" of the value, despite it otherwise being treated as ephemeral. To avoid defining a new subvariant of ephemeral value, I resolved this by changing the general definition of ephemeral values to include a new rule: although the value of an ephemeral value is allowed to change between plan and apply, its "nullness" may not. If it's null during planning then it must stay null during apply.

That new rule then backpropagates to ephemeral input variables, adding the requirement that if you intend to set an ephemeral input variable to something non-null during apply then you must set it to a non-null value during planning. If you set an ephemeral variable to a non-null value during planning then that variable is effectively required during the apply phase.

With that extra rule in place, we can imagine a new pattern for managing objects that have secrets while minimizing exposure to those secrets. The simplest case is to specify a new secret value directly using an ephemeral input variable:

variable "new_rds_password" {
  type      = string
  default   = null
  sensitive = true
  ephemeral = true
}

resource "aws_db_instance" "main" {
  instance_class = "db.t3.micro"
  username       = "admin"
  # ...

  # The following would be declared as a write-only attribute
  # in the provider schema.
  new_password = var.new_rds_password
}

My assumption here is that whenever new_password is non-null in aws_db_instance, that represents intent to change the password. Leaving it as null means "do not change the password", but notice that this case doesn't give the operator access to any information about the old password, which is stored only in AWS.

Therefore in routine use where no password changes are necessary, an operator can just ignore the new_rds_password input variable and let the password remain unchanged, but when the operator's intent is to rotate this password then they can declare that explicitly by setting the variable via any of the usual means.

Specifying the new password directly as an input variable is a simple option, but not the only option. Another variation would be for the input variable to be a boolean flag that represents only the intent to change the password, which then enables an ephemeral resource to generate or fetch the new password from elsewhere and pass the non-null result into the new_password argument. In that case the operator doesn't need to directly handle the new password at all.

This part of the design is the least developed and so has the most unanswered questions, including but not limited to:

  • Do we need some way to make it clearer in the source configuration that a particular argument is defining a write-only attribute, so that a reader can know about the different treatment without having to refer to the provider documentation?

    One variation I've considered, but am not yet settled on, is to require all write-only attributes to be segregated into a nested block with a standardized name, which would draw attention to the fact that there's something special about them and allow using that block's name as a search term to find the relevant documentation about how write-only attributes behave.

  • How would we move safely from the current implementation to this new design without providers becoming incompatible with older versions of Terraform Core, and without forcing breaking changes on the providers themselves?

    The most important thing we need to deal with is what should happen when an older version of Terraform sends a value to a write-only attribute even though it doesn't understand what a write-only attribute is. In particular we'd want to avoid misleading the author that the value won't be written into plan files and state snapshots, because that would not be true for older Terraform versions.

    There are other similar concerns, such as how to safely migrate from an existing "normal" attribute, like aws_db_instance's current password attribute, to using the corresponding write-only attribute without requiring the operator to have access to the current password.

Resources that represent locally-generated secrets

By far the trickiest case is, I think, the resource types where a provider generates a secret locally and uses the Terraform state as the sole location to persist it between rounds.

I actually accept the blame for this pattern existing in the first place, since I started it by implementing tls_private_key in what eventually became the hashicorp/tls provider. It was at the time a pragmatic solution for dynamically generating SSH keys for new VMs that Terraform would then use to provision over SSH, where there was no expectation of subsequently using those keys for any other access, but alas they persisted in the state (and in the VM that accepted them) indefinitely anyway.

I even encouraged this sort of behavior in a subsequent post in my older blog, Running a TLS CA with Terraform, though I did acknowledge that the state snapshots generated by such a thing would need to be treated as secrets themselves.

All of this is to say that I have come to regret this pattern, since of course it's completely incompatible with the idea that Terraform should not persist secret values in the state: there is literally no other place to store them, since all of the work for these is happening directly on the computer where Terraform is running, with no remote service to store the results in.

The new features for ephemerality come together to offer a nice alternative, though: we can split the work of generating the secret from the work of storing it by reframing the generation part as an ephemeral resource type -- just generating one-off secrets and immediately discarding them -- and the storage part as a managed resource type with a write-only attribute.

For example, this combines some of the earlier examples to randomly generate a password for RDS and then persist it both in RDS itself and in HashiCorp Vault, with the latter then allowing authorized database clients to obtain the password through the Vault API:

variable "reset_database_credentials" {
  type    = bool
  default = false
}

ephemeral "random_password" "database" {
  # We'll generate a new password only if we're going to use it.
  # (This isn't strictly necessary, since a spuriously-generated
  # password would never be written anywhere anyway, but this
  # would be more interesting if we were reading a generated password
  # from a network service.)
  count = var.reset_database_credentials ? 1 : 0

  # (largely the same as the random_password managed resource type,
  # but with the result existing only in memory rather than saved in
  # the state.)
}

resource "aws_db_instance" "main" {
  instance_class = "db.t3.micro"
  username       = "admin"
  # ...

  # The following would be declared as a write-only attribute
  # in the provider schema, so assigning an ephemeral value
  # is allowed.
  # The "one" function and splat operator here ensure that the
  # value will be null if there are zero instances of the
  # ephemeral resource, meaning that we aren't resetting the
  # password.
  new_password = one(ephemeral.random_password.database[*].result)
}

resource "vault_kv_secret" "db_credentials" {
  path = "kv/database"

  # This would also need to be declared as a write-only attribute
  # in the Vault provider's schema, to allow the ephemeral value
  # to be included.
  # We need an explicit null check in this case, because the
  # result is derived from the ephemeral value rather than just
  # directly that value. This result will be non-null only if
  # the new password is non-null, which in turn is true only
  # when var.reset_database_credentials is set.
  new_data_json = (
    aws_db_instance.main.new_password != null ?
      jsonencode({
        username = aws_db_instance.main.username
        password = aws_db_instance.main.new_password
      }) :
      null
  )
}
As with the earlier example of explicitly resetting the RDS instance password, the operator signals their intent to reset the password by setting reset_database_credentials to true when planning. That input variable is not ephemeral, meaning that if it's set to true during planning then it is guaranteed to remain true during apply. That means that our derived expressions, which use this flag to decide whether or not to be null, will preserve the rule that the "nullness" of ephemeral values must remain constant between plan and apply.

This new approach lets us split generation from storage while avoiding persisting the newly-generated password in any of Terraform's own artifacts. Unless someone attaches a debugger (or similar) to the Terraform process while it's running, or intercepts the (encrypted) communication between Terraform Core and providers, the new password is not exposed anywhere other than the two remote systems that this module intentionally sent it to.

Other Assorted Ideas

Once I figured out the general concept of "ephemeral values", it reminded me of assorted other ideas that had previously been difficult to solve, such as:

  • A terraform.applying symbol that produces an ephemeral boolean value that is true during the apply phase but false otherwise. This could, for example, allow requesting a more privileged role for a provider during the apply phase than during the planning phase, and using remote system policy to prevent the more privileged role from being assumed from anywhere except the production automation environment.

  • Exposing one-off results as ephemeral root module output values to allow the operator to easily access them without exposing them for use elsewhere. For example, I've seen folks who are working in a development-only mode want to expose an SSH key Terraform generated so that they can immediately log into a VM, and this would enable a variation where the key is returned only once -- with the intent of saving it to a local secure location -- but can be regenerated (destroying the old key) if lost.

  • Ephemeral versions of the impure functions timestamp, uuid, and bcrypt that therefore don't need to be unknown during the planning phase, and could be used in the configuration of ephemeral resources. Today they always return unknown values during the planning phase, to avoid breaking Terraform's assumptions about value persistence.
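
For example, the first of those ideas might combine with provider configuration like this (the role ARNs here are just placeholders):

provider "aws" {
  region = "us-west-2"

  assume_role {
    # Assume the more privileged role only while actually applying
    # changes; the plan phase needs only read access.
    role_arn = (
      terraform.applying ?
        "arn:aws:iam::111122223333:role/terraform-apply" :
        "arn:aws:iam::111122223333:role/terraform-plan"
    )
  }
}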

These smaller ideas probably wouldn't arrive on day one, but I always like to explore wider possibilities for each new concept I consider for the Terraform language, because it's better to solve many problems with one orthogonal feature than to complicate the product with many small features that all then interact in hard-to-predict ways.

Do we still need state encryption?

Throughout all of this I kept in mind the idea that even if we remove direct secrets from the plan and state, it's still desirable to keep these artifacts confidential, because they effectively serve as a map of the infrastructure that might help escalate an attack even though no passwords or keys are exposed.

However, my current opinion is that with the secret sprawl resolved the stakes are lowered enough that it's more reasonable to rely on server-side access control and encryption for these artifacts. For example, if you store your state in Amazon S3 then you can rely on IAM policies for access control and make use of a few different mechanisms to encrypt the state at rest, with several different variations of key custody.

Although there will always be exceptions, for a typical Terraform user I expect they'd be far better off relying on the well-tested and widely-reviewed features of services like Amazon S3 rather than trying to build their own solution for client-side key management, and that's particularly true when running Terraform in automation where both the client and the server are effectively long-lived services that could be attacked.

That is not to say that Terraform would never offer client-side state encryption -- and none of this is the official opinion of HashiCorp or the Terraform team, as is true of everything I wrote here -- but I think something like the set of features I described in this article would reduce client-side encryption to just a "nice-to-have" feature for defense-in-depth, rather than a crucial part of any team's security posture.

What's next?

Currently this is largely just a semi-refined pile of ideas that need further validation and refinement before they could be implemented. However, I did start prototyping parts of this in pull request #35078.

My product management colleagues on the Terraform team are currently presenting variations of these ideas to those who previously shared feedback to help with getting this into a ready state to implement. Depending on what sort of feedback we get, anything I wrote here could potentially change.

However, having now spent at least eight years iterating on this problem, I feel optimistic that this variation is pretty close: it seems to offer solutions to all of the most common concerns, and also helps solve other problems beyond the storage of sensitive values.

I'm excited to see what happens next!