In Terraform's own documentation, and in a previous article of my own, Terraform is presented as having a declarative language. That framing has tended to prompt discussion about what exactly is meant by "declarative", and whether a particular new language feature is "declarative enough" to be within the spirit of Terraform.
This is another language-theoretic look at Terraform, taking a different perspective on what it means for Terraform to be declarative: that it is a data-flow language rather than a control-flow language.
The general purpose languages that many of us use to write software today are primarily control-flow languages: we write code that expresses a sequence of steps for the computer to follow, and insert into that sequence conditional jumps that create a control flow graph that expresses all of the possible execution paths through the program.
Data-flow programming, on the other hand, is concerned with the movement of data rather than with an exact order of execution. In Terraform we write code that expresses the relationships between different objects -- or, more specifically, how to use data from one object to construct another object. The result of evaluating a Terraform program is a data flow graph rather than a control flow graph.
Another familiar example of data-flow programming is spreadsheets: rather than writing down a sequence of calculations, a spreadsheet author writes inline expressions that consume data from elsewhere in the same sheet or from other sheets to produce new results.
In both Terraform and in spreadsheets we are concerned primarily with using existing data to produce new data, which we express by writing expressions that refer to existing data and transform it to produce the new result we need.
Conditional Expressions vs. Control Flow
I've seen a few folks express concern that the addition of a few new constructs in Terraform 0.12 suggests that Terraform is abandoning its declarative roots:

- The conditional operator `cond ? expr : expr`.
- The `for` repetition operator, in either of its two forms:
  - Building sequences: `[ for x, y in z : transform(x, y) ]`.
  - Building mappings: `{ for x, y in z : transform(x, y) => transform(x, y) }`.
- The `dynamic` block construct for generating nested blocks.
- The `for_each` meta-argument for resources.
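As a hedged sketch (the variable name and values below are invented for illustration), the conditional operator selects between two values as part of an expression, rather than branching execution:

```hcl
# Hypothetical example: derive an instance type from a boolean input.
# The expression describes a relationship between values; Terraform's
# language engine decides when to evaluate it.
variable "high_traffic" {
  type    = bool
  default = false
}

locals {
  instance_type = var.high_traffic ? "m5.large" : "t3.micro"
}
```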
These are all data-flow constructs. The two expression operators are just more generalized ways to combine values to produce new values -- like the built-in functions and operators that preceded them. The `dynamic` block construct extends data-flow programming from individual expressions into the world of nested blocks, which are a Terraform-specific idea. The `for_each` meta-argument generalizes Terraform's existing `count` meta-argument to allow resource repetition based on maps rather than on sequences.
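A minimal `for_each` sketch, with an invented map of subnets: each key/value pair produces one resource instance, and the author never states an iteration order:

```hcl
# Hypothetical example: one subnet per named CIDR block. The map keys
# identify the instances, so adding or removing a key adds or removes
# exactly one subnet on the next apply.
variable "subnets" {
  type = map(string)
  default = {
    app = "10.0.1.0/24"
    db  = "10.0.2.0/24"
  }
}

resource "aws_subnet" "this" {
  for_each   = var.subnets
  vpc_id     = aws_vpc.main.id # assumes a VPC declared elsewhere
  cidr_block = each.value
}
```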
What they all have in common is that none of them introduce control flow: while these constructs do imply a sequencing of operations, the evaluation order is ultimately decided by the Terraform language engine rather than the Terraform module author.
The `for` operator in particular borrows the idea of list comprehensions to describe how to project and filter one collection to produce a new collection. It describes a relationship between the input collection and the output collection, using the same expression notation used outside of the `for` operator.
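As a hedged sketch (the `aws_instance.app` resources and their `Role` tag are assumptions for illustration, with `aws_instance.app` itself declared using `for_each` so that it is a map), a `for` expression can project and filter a collection in a single expression:

```hcl
# Hypothetical example: build a map from instance name to private IP,
# keeping only the instances tagged with Role = "web".
locals {
  web_private_ips = {
    for name, inst in aws_instance.app :
    name => inst.private_ip
    if inst.tags["Role"] == "web"
  }
}
```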
Data-flow in General-purpose Languages
Terraform has a domain-specific language designed to make data-flow programming read as a set of simple declarations, but data-flow programming is possible in general-purpose imperative languages too, with varying degrees of convenience.
A common manifestation of data-flow programming is in futures and promises, which (in certain implementations, at least) introduce data-flow into a control-flow language by describing how to combine data results obtained asynchronously to produce a single result, leaving the promises implementation and its associated language runtime to decide dynamically exactly what order of operations to take based on when different data items become available:
```javascript
return Promise.all([
  fetchJSON("http://example.com/inputs.json"),
  fetchJSON("http://example.net/more-inputs.json"),
]).then(results => results[0].number + results[1].otherNumber);
```
The JavaScript language syntax adds a lot more visual noise into proceedings, but the above has a lot in common with how Terraform evaluation proceeds: the `Promise.all` states explicitly that this result depends on two other objects (which would be more implicit in Terraform via references), and the function in `then` describes how to combine data from those two objects to produce a new value.
The async/await features in various languages are, interestingly, a move to invert this once again and return to working primarily in control-flow. Control-flow programming has shown itself to be a more comfortable paradigm for general-purpose programming for many people, and I think it tends to make things easier to follow, particularly in programs that contain visible side-effects where the sequence of operations must be carefully controlled.
Terraform has a much smaller space of possible operations (create, read, update, delete) and we work at a higher level of abstraction where many separate smaller operations are grouped together into a single "action", which makes data-flow programming a more practical proposition for many cases.
Conditionals based on Data Sources
Since the introduction of data sources in Terraform 0.7, some module authors have tried to write modules that would produce a very different result based on some data determined dynamically via a data source.
For example, rather than stating that a particular module expects that a given EC2 VPC already exists and creates other objects that use it, one might be tempted to write a module that checks whether the VPC exists and creates it if not. Such authors find themselves blocked by the fact that data sources are generally (with some careful exceptions) intentionally designed to fail and block further progress if the referenced object does not exist.
The above certainly could be expressed within a data-flow paradigm: the presence or absence of the VPC would just be another data item that can be used to produce (or not produce) other values downstream.
However, to do that would introduce to Terraform many of the complexities of mixing control-flow and data-flow programming in general-purpose languages, particularly in larger systems that have been decomposed into many smaller Terraform configurations. Terraform's declarative language lacks the control-flow constructs that help make such conditional branching manageable (after a fashion) in general-purpose languages.
Instead, `data` blocks in Terraform serve a dual purpose, both as a way to refer to external objects from a Terraform configuration and as a declaration of the assumption that the external object exists. If you've decomposed your system into one configuration that sets up the network and another configuration that makes use of that network, the retrieval of the network settings in the second configuration also serves to check that these configurations are being applied in the expected order. If you make a mistake and apply the configurations in the wrong order, Terraform will report that the network doesn't exist yet, rather than having the network end up owned by some other module.
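A minimal sketch of that dual role, with invented names: the `data` block both retrieves the VPC's attributes and asserts that it already exists, so the subnet below can never be created against a missing network:

```hcl
# Hypothetical example: fail fast if the expected VPC is absent,
# rather than silently creating a competing one.
data "aws_vpc" "main" {
  tags = {
    Name = "main-network"
  }
}

resource "aws_subnet" "app" {
  vpc_id     = data.aws_vpc.main.id
  cidr_block = "10.0.1.0/24"
}
```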
If you have a problem to solve that does require this sort of dynamic decision-making, that's likely a problem better solved in a general-purpose programming language, ideally using a library or framework that employs the data-flow paradigm so you can retain many of the other benefits of Terraform: convergence on a stable state, looser order of operations, etc.
Other Applications of Data-flow Programming
If you were not previously familiar with data-flow programming as a concept, you might be interested in some other situations where it's used, beyond describing infrastructure with Terraform:
Verilog and VHDL are languages used to describe digital logic, which use expressions to describe the connectivity of different signals and to concisely declare the existence of logic gates, etc. Digital logic design is a very interesting application of data-flow programming because arguably it embodies the truest form of the idea: the computation constructs are fixed in place (as part of an electrical circuit) and the signals -- the data -- flow around and between them.
Functional reactive programming is not necessarily data-flow based but in certain applications the two paradigms can compose well by propagating signal changes through a graph of expressions. One example of that is Elm, a programming language for functional reactive programming of client-side web applications where the idiom is to change state based on incoming events and then derive an HTML DOM structure from that constantly-changing state.
GStreamer is a framework for processing audio and video streams and their associated metadata, which employs a graph-based model (a "pipeline"). It includes source components that might e.g. read raw data bytes from disk, sink components that know how to e.g. send audio to a display, and transformation components that can split common container formats like AVI, unpack compressed audio and video, apply various filters, etc. The synchronous propagation of data between these components is handled automatically within GStreamer, so that the calling program need only describe the graph.
Data-flow programming is not a solution for every problem, but it can be very effective at raising the level of abstraction when applied to the right use-cases. Many of Terraform 0.12's language enhancements were focused on improving the data-flow programming experience in the Terraform language, allowing us to more conveniently describe the relationships between objects that might not tessellate as well as we'd like.