1. 27
  1.  

    1. 11

      To successfully use an abstraction, you need to understand the problem the abstraction is trying to solve and also understand how the abstraction has solved the problem.

      At some level, yes, but I think not always, and not fully. Some cases that come to mind:

      • to use the abstraction of serializability, I don’t need to understand the many, many ways that a DB without it can go wrong, only that when using it, the DB will execute transactions as if they all happened in some specific order
      • to use a package manager, I need to understand very little about the problems that are solved
      • to use a hashmap in a modern programming language, I don’t really need to understand hash codes (I probably need to understand how to make my own class hashable, but this can be done with a method call that doesn’t require knowing how the hash is computed).
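      To make the hashmap point concrete, here’s a minimal Python sketch (assuming CPython’s built-in dict as the hashmap): a frozen dataclass becomes usable as a key via auto-generated `__hash__` and `__eq__`, with no knowledge of how the hash is actually computed.

```python
from dataclasses import dataclass

# A frozen dataclass auto-generates __hash__ and __eq__ from its fields,
# so instances work as dict keys without any hand-written hashing logic.
@dataclass(frozen=True)
class Point:
    x: int
    y: int

counts = {}
counts[Point(1, 2)] = "a"
counts[Point(1, 2)] = "b"  # equal fields -> same key, so this overwrites "a"
print(len(counts))  # 1
```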

      Perhaps I’m talking about different kinds of abstractions than the author. But if so, I think it’s a valuable point, because I think we often generalize about the meaning of “abstractions” when we can make our point better by being specific about the type of abstractions in question.

      1. 11

        To elaborate on this (with the caveat that I’m not a professional engineer so everything I’m about to say could be bunk), I draw a distinction between automation and abstraction. Automation doesn’t necessarily hide details and using an automation tool properly requires you to understand the technical details of what it’s doing and how that relates to the ways in which you use it. Abstraction does aim to let you reason about a more simplified model than whatever got abstracted. Automation makes you more efficient at executing and iterating, but it doesn’t help you plan things; abstraction helps you reason about what you’re working on and can reduce the amount of iteration you need to do, but it won’t help you execute once you have your plan worked out.

        To my mind, Terraform mostly operates as an automation tool. A well-designed Terraform module with the goal of abstracting standard operational goals can absolutely be a good abstraction tool, but that’s a property of the module, not of Terraform. It is true that Terraform abstracts cloud providers’ CLIs, but it crucially doesn’t abstract cloud providers’ services and options, which seem to be the example topic at hand here. If we didn’t have Terraform or similar tools, we’d do about the same amount of reasoning about infrastructure and the relationships between components, but we’d be moving slower because we’d be maintaining and tweaking them in Bash scripts or something else brittle and overgeneralized.

        Anyway, where I’m going with this is that it’s important not to confuse one thing for the other. Trying to build abstraction features into an automation tool isn’t inherently wise; unless it also helps automation, I think it’d be bloat, and vice versa. Expecting automation tools to provide good, non-leaky abstractions is a recipe for disappointment, but conversely, automation usually needs to be designed at the same level of abstraction it’s meant to be used in, or it’s practically going to force leaks trying to accommodate a different number of Cuils.

      2. 7

        Arguably for a perfect abstraction you wouldn’t need to understand how the problem was solved. That is the point of the abstraction.

        Unfortunately few abstractions are perfect and knowing how they solve the problem is often useful.

        1. 2

          This! The whole point of an abstraction is to abstract away the implementation details and provide a familiar/simplified interface. Unfortunately, too often the abstraction doesn’t work in some corner case (often one that wasn’t considered, because the space of cases is huge), and you have no choice but to dive into how the abstraction is implemented. At that point, the abstraction isn’t really helpful.

      3. 5

        I think Joel’s “abstraction leak” is a good summary of the situation. Yes, we want an abstraction that will let us ignore the underlying situation. Most of the time, it works and we can. But when something goes wrong, the abstraction “leaks” and we have to understand at least one more level down the stack to debug it well.

        1. 5

          Isn’t this just the definition of how good an abstraction is? Most abstractions aren’t perfect, but with a good abstraction most people could go their entire career without ever having to care about the details. There are hundreds of such abstractions; the only reason you don’t think about them is because they’re actually good.

      4. 2

        When relying on database serializability, it’s pretty important to understand whether it’s giving you repeatable read, full serializability, etc., and what the performance impacts of that decision are.

        When using a package manager, it’s fairly important to understand when it will consider an update (especially security backports when you are a major version behind the latest).

        To use a hashmap in a modern programming language, you should be aware of whether key iteration has a defined order, and whether it’s vulnerable to DoS attacks when hashing untrusted keys (both have been relevant in the past decade).
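        As a concrete instance, in CPython (one modern implementation among many) both questions have definite answers: dict iteration follows insertion order (a language guarantee since Python 3.7), and str/bytes hashes are randomized per process (see PYTHONHASHSEED) to blunt hash-flooding DoS.

```python
# dict iteration follows insertion order, not key order or hash order;
# relying on this is safe in Python 3.7+, but not in hashmaps generally.
d = {"b": 2, "a": 1, "c": 3}
print(list(d))  # ['b', 'a', 'c']

# String hashes are salted per process (controlled by PYTHONHASHSEED), so an
# attacker can't easily precompute colliding keys to degrade dict performance.
```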

      5. 2

        Perhaps I’m talking about different kinds of abstractions than the author.

        I think that’s the case, given the context of IaC. You have to keep in mind that cloud level abstractions leak all the time. They can’t hold a single drop of water. Not really comparable to lower level abstractions (for example, the way the OS abstracts over hardware) which tend to leak far less. You might not jump into the code of java.util.HashMap very often, but looking inside Terraform modules and cross referencing the relevant cloud API is routine. In addition, the need to handle incidents means you should have 200% knowledge even if your abstractions aren’t leaking right now, because they will do so at the least convenient time.

      6. 1

        I think there are often abstractions that simplify things. After all, an ext4 driver and VFS are usually a better way to interact with a block device than just seeking around /dev/sda.

        But the key difference with something like Terraform is that abstracting the configuration of a service often isn’t a goal. The goal is to abstract the CRUD operations on that resource. So yes, the 200% problem applies, but don’t forget you might be getting something new in return.

        I personally don’t like using Terraform, but I like it better than the other solutions out there. Except maybe Pulumi. I’m still on the fence, because writing Pulumi is, unironically, more like writing React than it is plain JavaScript. You’re writing code, sure, but fundamentally you’re still configuring a graph that’s being executed/resolved in a non-straightforward way.
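        That “configuring a graph” idea can be sketched with a toy model (this is not Pulumi’s actual API, just an illustration using the standard library’s topological sorter): you declare resources and dependencies, and a resolver, not your statement order, decides execution order.

```python
from graphlib import TopologicalSorter  # stdlib since Python 3.9

# Declare resources and their dependencies; the names are hypothetical.
graph = {
    "server":   {"network", "database"},  # server depends on both
    "database": {"network"},
    "network":  set(),
}

# The engine, not the declaration order, decides execution order.
order = list(TopologicalSorter(graph).static_order())
print(order)  # ['network', 'database', 'server']
```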

    2. 4

      the desire to create Terraform modules that meet every single user’s possible use case means that often the module will expose the entire surface area of the APIs the module is managing to the user

      This is an interesting way to phrase it… The way I see Terraform, its purpose is to expose the whole API of external services to the configuration. You can also have an abstraction available, but that part is optional. If the modules don’t expose 100% of the basic API in the first place, they’re usually just broken or missing functionality.

      1. 2

        I completely agree with this. To me, Terraform is just “API calls as configuration, in a deterministic manner” - I want/need it to match whatever I can do manually myself, it’s just a way of having infrastructure as code.

    3. 4

      To successfully use an abstraction, you need to understand the problem the abstraction is trying to solve and also understand how the abstraction has solved the problem.

      This is very, very wide of the mark. The author themselves probably uses plenty of abstractions successfully without understanding them as prescribed here. The majority of developers build applications successfully without properly understanding some of the abstractions they use - that’s reality.

      Now, if the application you’re deploying is an in-house application written by a team of super-smart developers, you’re in a good spot. If you’re deploying a third-party application, things get a little trickier.

      If the argument is that you need to learn about how third party applications run (performance, scaling etc), I think the same applies to internally built services as well - there’s no magic here.

    4. 4

      the 200% knowledge problem. To successfully use an abstraction, you need to understand the problem the abstraction is trying to solve and also understand how the abstraction has solved the problem.

      I don’t think this is right at all. Only at the edges do you need that deeper knowledge, or some lossy version thereof. For example, people manage to make web pages with CSS and HTML without understanding pretty much anything else. Sometimes, when they get extra clever, they can trigger cases where understanding the computer at a deeper level matters, but for a successful abstraction those cases are relatively rare.

    5. 3

      What is often overlooked is how often decisions and responsibilities that could be shared are instead made to appease the opinions of a small number of potential experts rather than the broader organisation. Intelligent individuals with political power will happily introduce this framework, that deployment model, or the other tool because they like it, they are experts, and they believe it’s simple, which it ultimately is - to them.

      When I’m designing a new system or making important architectural decisions at work, I often find myself caught between choosing the simplest, most sustainable tool for the job and choosing the comparatively humongous, comically overcomplicated hyperpostmetaframework that’s going to win the most political points with the powers that be because they’ve already invested so much time in learning it (or building it) and, by golly, the sunk cost doctrine and a decade of tangentially related work experience all but guarantee that it’s the best tool for the job. With depressing frequency, I voluntarily eschew the former in favor of the latter.

      Come promotion season, shrewd politicking will be far more generously rewarded than responsible engineering. More importantly, should I fail to make the powers that be think it was their idea to proceed without their eternally infallible hyperpostmetaframework of choice, I risk finding myself first in line at the scapegoat slaughterhouse when the project inevitably encounters any turbulence for any reason whatsoever. Personally, I prefer to think that I have better hills to die on.

      Don’t get me wrong; I’m loath to be so cynical. I’m sure there are some bona fide meritocracies out there, and I genuinely hope to work for one some day. However, in industry and academia alike, I’ve witnessed tangled mess after tangled mess of misaligned incentives mechanically sacrificing long-term organizational well-being at the altar of short-term rational self-interest.

      What’s the way out of this Molochian shitshow? What can institutions do? What can individuals do?

    6. 1

      The example of Terraform modules only rings somewhat true to me. There are, indeed, public Terraform modules that expose nearly every underlying configuration option. And there are bad public Terraform modules that are no easier to use than the raw resources. But the good ones still add value because:

      • They abstract away a lot of the graph of resources one needs to construct to arrive at a working configuration. This is probably where I take the most issue with the thrust of the article. There may indeed be knobs to control the relevant features of all those underlying resources, but you can often use the knobs without caring that module option A ends up applying to resource X while module option B ends up applying to resource Y and that X and Y are connected by resource Z. A nontrivial infrastructure configuration has a lot of connective tissue that isn’t that interesting, and good modules hide these implementation details.
      • They provide reasonable defaults. You can fiddle with a lot of options, but you usually don’t have to if you’re doing something common and boring. The underlying resources aren’t (and shouldn’t be) opinionated, but good modules often are. In practice, when I use a module for the first time, I can very often get the configuration I want by filling in the required parameters and completely ignoring everything else. Are the other settings visible? Yes. Are they a cognitive barrier? Not really.
      • They eliminate a lot of footguns. If a module is, say, creating the correct security group rules for you, you can’t accidentally forget to create those rules. This is also successful abstraction: you get to skip learning a lot of pitfalls by trial and error.

      And that’s just public modules. Most of the modules in my configuration are private ones that configure infrastructure according to my organization’s standards and conventions. When I want to spin up, say, a new database, I use my private module and I don’t have to think about the logging configuration or how to wire up alerting for errors or whether to enable on-disk encryption. That’s all abstracted away. (To be fair, I’m the one who wrote the module! But I no longer need to keep those details in my head.)
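      As a loose analogy (plain Python, with entirely hypothetical names, not any real module’s interface), a good module behaves like a function with a few required parameters, opinionated defaults, and derived settings the caller can’t forget:

```python
# Hypothetical sketch: required inputs up front, opinionated defaults for
# the rest, and footgun-prone settings derived internally.
def make_database(name, engine, *, encrypted=True, logging=True,
                  alert_on_errors=True, backup_days=7):
    config = {
        "name": name, "engine": engine, "encrypted": encrypted,
        "logging": logging, "alert_on_errors": alert_on_errors,
        "backup_days": backup_days,
        # The "security group rules" equivalent: always produced by the
        # module, so the caller can't accidentally omit them.
        "firewall_rules": [f"allow {engine} from app subnet"],
    }
    return config

# Common case: fill in the required parameters, ignore everything else.
db = make_database("orders", "postgres")
```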