Threads for kristof

  1. 29

    Nah. I don’t use dockerfiles. I don’t use nix. I’ll wait another 5 years for the sediment to settle and let everyone else dig through the shit

    1. 22

      But how do you even get anything done without using Kubernetes, Nomad, 3 different service discovery mechanisms, thousands of lines of YAML, and at least two different key/value stores?

      1. 16

        I am incredibly thankful for the existence of Docker.

        I have less-than-fond memories of trying to set up local instances of web stacks over the past 20 years, and the pile of different techniques that range from Vagrant boxes (which didn’t ever build properly) to just trying to install all the things – web server, DB server, caching server, all of it – on a single laptop and coordinate running them.

        Now? Docker + docker-compose, and it works. The meme is that Docker takes “works on my machine” and lets you deploy that to production, but the reality is that it goes the opposite direction: it lets you write a spec for what’s in production and also have it locally with minimal effort.
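To illustrate, the whole "spec for production that also runs locally" workflow can be a compose file about this size (a sketch; the service names and image tags are invented):

```yaml
# docker-compose.yml sketch: one spec for web + DB + cache,
# the stack that used to mean hand-installing three servers.
services:
  web:
    build: .
    ports:
      - "8000:8000"
    depends_on: [db, cache]
  db:
    image: postgres:16
  cache:
    image: redis:7
```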

        Things like k8s I can take or leave and would probably leave if given the choice, but Docker I will very much keep, thank you.

        1. 4

          Containers are absolutely here to stay but I think the parent comment is mostly complaining about the vast ocean of orchestration solutions, which have created whole new categories of complexity.

          1. 1

            Oh I like Docker, or more specifically the idea of containers. My issue is more that on one side you have containers, and on the other side you have this incredibly complex stack of services you probably don’t need. And yet people tend to lean towards that side, because some blog post told them you really can’t have containers without at least a dozen other services.

          2. 2

            apt, mostly, and some ansible. :-P

          3. 16

            docker and nix solve a very particular problem that many but not all developers experience. If you don’t experience that problem then awesome! count yourself lucky. But some amount of us need to use multiple versions of both languages and libraries in our various activities. Some languages provide a way to do this within the language and for specific C libraries. But almost none of them solve it in a cross language polyglot environment. For that you will need Nix, Docker, or something in the Bazel/Pants build family.

            But as you astutely note those are not pain free. You should really only use them if the pain of not using them is worse than the pain of using them.

            1. 3

              Can confirm. Have a Pants managed monorepo. It’s painful.

          1. 10

            you need to initialize quickenv per-project using quickenv reload, and rerun that command every time the .envrc changes.

            These parts are less attractive to me. I like direnv in the context of a team project where I can know that once direnv is configured, everyone will always have up-to-date environment variables. If people will need to run quickenv reload after the file changes then it will get missed and people (including myself) will likely end up using wrong env vars.

            1. 4

              yeah, I know. for the first version I chose to forgo hard problems like cache invalidation in order to keep quickenv significantly faster than direnv. I see too many people at my job opting out of our direnv setup entirely because it’s too slow, and that seemed even worse than using outdated envvars. As long as there is an obvious “fix-it button” such as quickenv reload, that tradeoff seemed good enough to me.

              1. 2

                I’m not familiar with direnv but I’ve heard of it and know what it’s for. How slow is it and why?

                1. 7

                  direnv adds to your shell’s PROMPT_COMMAND to execute a file called .envrc every time you cd into a directory containing such a file. it doesn’t cache the result, and every time you create a new shell session or cd in/out of your project, you have to wait until that file finishes executing. in the projects i’m working on, that .envrc does a ton of sanity checks that take a few hundred milliseconds to execute. too laggy for my taste.

                  direnv doesn’t have a cache, mainly because cache invalidation is hard. quickenv does have a cache, but doesn’t bother with cache invalidation (you just have to run reload or it’s stale). that’s what makes quickenv faster: it accepts a potentially stale environment over perf slowdown.
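A sketch of the tradeoff being described (invented code, not quickenv’s actual implementation): key the cached environment on a fingerprint of .envrc, so staleness is detectable, but only check it when the user explicitly runs reload rather than on every prompt:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Fingerprint the .envrc contents; the cached environment is keyed on this.
fn envrc_fingerprint(contents: &str) -> u64 {
    let mut h = DefaultHasher::new();
    contents.hash(&mut h);
    h.finish()
}

fn main() {
    let cached = envrc_fingerprint("export FOO=1\n"); // stored at `reload` time
    let current = envrc_fingerprint("export FOO=2\n"); // .envrc has since changed
    // This comparison is cheap, but a tool like quickenv would only run it
    // when asked, accepting a stale env the rest of the time.
    if current != cached {
        println!("stale: run reload");
    }
}
```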

              2. 1

                I filed an issue because I eventually want to make cache invalidation a thing: – would be interested in your feedback there

              1. 3

                For any Rustaceans, I built woah for exactly the need this article lays out. It provides a type woah::Result<T, L, F>, where L is a “local error” you can handle, and F is a fatal error you can’t. woah’s Result type propagates fatal errors up with the ? operator, giving you a std::result::Result<T, L> to handle local errors.

                It’s pre-1.0 until the Try trait is stabilized, but can be used on stable if you’re willing to forgo the ? operator and do the propagation manually.

                1. 1

                  Managing multiple different levels of errors is still difficult for me in Rust. I’ll take a look at the crate but I’m wondering if you can explain what the advantage is of using that over Result<T, LayerError> where

                    enum LayerError {
                        Io(std::io::Error),
                        // ... one variant per underlying error
                    }

                    impl From<std::io::Error> for LayerError { ... }
                  1. 1

                    The difference is in having separation between the “local errors” and “fatal errors” and how that interacts with the ? operator.

                    woah::Result is equivalent to Result<Result<T, L>, F> in terms of how it works with the ? operator. Take the following example, adapted from the project’s documentation:

                    use woah::prelude::*;
                    use rand::prelude::*;

                    fn main() {
                        match get_data() {
                            Success(data) => println!("{}", data),
                            LocalErr(e) => eprintln!("error: {:?}", e),
                            FatalErr(e) => eprintln!("error: {:?}", e),
                        }
                    }

                    /// Get data from an HTTP API.
                    fn get_data() -> Result<String, LocalError, FatalError> {
                        loop {
                            match do_http_request()? {
                                Ok(data) => return Success(data),
                                Err(e) => eprintln!("error: {:?}... retrying", e),
                            }
                        }
                    }

                    /// Make an HTTP request.
                    ///
                    /// This is simulated with randomly returning either a time out
                    /// or a request failure.
                    fn do_http_request() -> Result<String, LocalError, FatalError> {
                        if random() {
                            LocalErr(LocalError::RequestTimedOut)
                        } else {
                            FatalErr(FatalError::RequestFailed)
                        }
                    }

                    /// Errors which can be handled.
                    #[derive(Debug)]
                    enum LocalError {
                        RequestTimedOut,
                    }

                    /// Errors which can't be handled.
                    #[derive(Debug)]
                    enum FatalError {
                        RequestFailed,
                    }

                    The key line is the match do_http_request()?. The ? operator will pass any FatalError up to the caller of get_data, and return a std::result::Result<String, LocalError> which is what get_data is matching on. In the error case on that match, we have a RequestTimedOut, so we can retry.
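To see the equivalence concretely, here is the same shape in plain std Rust with invented error names; `?` peels off only the outer, fatal layer, leaving the inner Result to handle locally:

```rust
#[derive(Debug)]
enum LocalError { RequestTimedOut }
#[derive(Debug)]
enum FatalError { ConnectionRefused }

// The nested shape: outer Result is fatal, inner Result is local.
fn do_http_request() -> Result<Result<String, LocalError>, FatalError> {
    Ok(Err(LocalError::RequestTimedOut)) // simulate a retryable failure
}

fn get_data() -> Result<String, FatalError> {
    // `?` propagates FatalError to the caller; what remains is the
    // local layer, which we handle right here (e.g. by retrying).
    match do_http_request()? {
        Ok(data) => Ok(data),
        Err(LocalError::RequestTimedOut) => Ok("retried".to_string()),
    }
}

fn main() {
    println!("{:?}", get_data()); // Ok("retried")
}
```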

                1. 3

                  They finally fixed the ballast thing!

                  1. 2

                    I plan to continue to allocate an extra 100MB slab, for luck.

                  1. 3


                    This is a data serialization format and library (like Protocol Buffers) with first class support for algebraic data types (like Rust enums). It has a novel approach for ensuring compatibility between schema versions when adding fields to structs or cases to choice types.

                    In the case of adding a field to a struct and marking it “asymmetric”:

                    • new readers assume the field is optional
                    • new writers assume the field is required
                    • old writers still out in the field just keep writing messages without the new field

                    In the case of adding a case to a choice:

                    • new writers that use the new case MUST specify a fallback option
                    • new readers will happily consume the new case
                    • old readers automatically handle cases they don’t know by reading the fallback option (this can be a chain that will eventually bottom out to a case that the old reader does know how to handle)

                    The author intends for people to use this feature to gradually update readers and writers over time. Once the updates have rolled out over a sufficient period of time or a suitable fraction of users have upgraded, the asymmetric qualifiers can be removed.
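The asymmetric-field rule can be modeled in plain Rust (all names here are invented; this is not the serialization library’s API): a new reader sees the field as optional, a new writer must supply it.

```rust
#[derive(Debug)]
struct PointRead {
    x: i64,
    label: Option<String>, // asymmetric: a new reader treats it as optional
}

struct PointWrite {
    x: i64,
    label: String, // asymmetric: a new writer treats it as required
}

// Stand-in for real decoding: messages from old writers omit `label`,
// and the reader must still accept them.
fn decode(has_label: bool) -> PointRead {
    PointRead {
        x: 1,
        label: if has_label { Some("a".into()) } else { None },
    }
}

fn main() {
    // A new writer is forced to populate the field...
    let msg = PointWrite { x: 2, label: "name".into() };
    // ...while a message from an old writer still decodes fine.
    println!("{} {}", msg.label, decode(false).label.is_none());
}
```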

                    1. 23

                      Tabs for indentation, spaces for alignment.

                      1. 7

                        Exactly. Variable-width characters at the start of a line are great. Variable-width characters in the middle of a line are annoying because they won’t line up with other fixed-width things. Recent versions of clang-format now support this style, so there’s no reason to use spaces anymore.
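For reference, a minimal .clang-format for this style might look something like the following (the UseTab: AlignWithSpaces value needs a recent clang-format release, 11 or newer if I recall correctly):

```yaml
# .clang-format: tabs for indentation, spaces for alignment
UseTab: AlignWithSpaces
TabWidth: 8
IndentWidth: 8
```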

                        1. 4

                          I have to suffer through clang-format at work, and I can tell you it’s pretty bad. The worst aspect so far is that it does not let me choose where to put line breaks. It’s not enough to stay below the limit; we have to avoid unnecessary line breaks (where “unnecessary” is defined by clang-format).

                          Now, to enforce line lengths, clang-format has to assume a tab width. At my workplace it assumes 4 columns.

                          1. 2

                            You can tell clang-format what tab width to assume. If people choose to use a wider tabstop value, then it’s up to them to ensure that their editor window is wider. That remains their personal choice.

                            1. 1

                              I’ve found that clang-format respects line comments:

                              void f( //
                                  void *aPtr, size_t aLen, //
                                  void *bPtr, size_t bLen //
                              );
                          2. 7

                            I think when people say this they imagine tabs will only occur at the start of the line. But what about code samples in comments? This is common for giving examples of how to use a function or for doc-tests. It’s much harder to maintain tab discipline there because your formatter would have to parse Markdown (or whatever markup you use) to know whether to use tabs in the comment. And depending on the number of leading comment characters, the indentation can look strange due to rounding to the next tabstop. The same goes for commented-out sections of code.

                            1. 3

                              Go uses tabs for indentation and spaces for alignment. It works pretty well in practice. I can’t say that I’ve ever noticed anything misaligned because of it.

                              1. 4

                                If you wrote some example code in a // comment, would you indent with spaces or tabs? If tabs, would you write //<space><tab> since the rest of the comment has a single space, or just //<tab>? gofmt can’t fix it for you, so in a large Go codebase I expect you’ll end up with a mix of both styles. With spaces for indentation it’s a lot easier to be consistent: the tab character must not appear in source files at all.

                                1. 1

                                  I can’t say that I’ve ever written code in a comment, because I just write a ton of ExampleFunctions, which Go automatically adds to the documentation and tests. Those are just normal functions in a test file. I think what’s interesting about Go is that they don’t add all the features but the ones they do add tend to reinforce each other, like go fmt, go doc, and go test.

                                2. 3

                                  Personally, I think it would have annoyed me if go fmt didn’t exist. Aligning code with spaces is annoying, and remembering to switch between them even more so.

                                  1. 1

                                    Yes, it’s only practical if a machine does it automatically.

                              2. 1

                                I said this elsewhere in the thread, but it’s worth reiterating here: I’d be with you 100% if it weren’t for Lisp, which simply can’t be idiomatically indented with tabs (short of elastic tabs) because it doesn’t align indentation with any regular tab stops.

                              1. 1

                                Validating YAML is important. I don’t know if this exists, but I think it would be cool to have a second pane open in my editor that uses the same spec and the context of my first pane to tell me what keys are available to me and what they mean. It would make writing k8s manifests easier. It might even work for Helm… but I don’t know how this would work for dynamically generated keys.

                                1. 1

                                  Would probably require a specialized LSP, but all that information is accessible via kubectl explain

                                  1. 1

                                    Shelling out on every keystroke might be kind of slow? Could cache it, though.

                                    1. 2

                                      The kubectl command is just printing struct information from the Go code. An LSP could access the same metadata and do something more efficient.

                                      I merely mention the command because you could construct a helper tool for your environment that could print something useful without the effort of writing an LSP.

                                1. 5

                                  What your code actually needs in terms of infrastructure should be inferred as you build your application, instead of you having to think upfront about what infrastructure piece is needed and how to wire it up.

                                  I am very, very doubtful that this is the right approach. Provisioning costs money. Shouldn’t that mean it’s better to be explicit than implicit about the underlying computing resources being used? If costs overrun the budget or you decide you need more capacity, you need to be able to go change things yourself, it needs to be easy to find, and it needs to be explicitly written out what is being provisioned and why. Furthermore how can you decide what kind of resources you need without dynamic information obtained from objective profiling and subjectively weighing tradeoffs? A procedural macro could only ever obtain static information.

                                  1. 3

                                    I’m very confused by the quote. I mean, the code may include a database query, but we need real world context to know if that can be a file backed shim, or a globally distributed cluster with sharding based on local law.

                                  1. 1

                                    Does anyone know any good textbooks on this subject?

                                    1. 5

                                      Designing Data-Intensive Applications tends to be the go-to recommendation here

                                      1. 2

                                        Second that, plus the comparatively brief Distributed systems for fun and profit as an introduction.

                                    1. 3

                                      Question: is becoming a niche programmer possible for someone with little to no experience? Is it a good nontraditional route to enter the industry?

                                      1. 5

                                        I think it depends on the niche? For Clojure, my sense is that you can pick it up with little or no experience, but you have to be intelligent in a particular way.

                                        e.g. in my experience, some smart music majors can pick it up, but others might have problems. That is, people “just looking for jobs” will likely have issues with Clojure. There is a tendency to do more from first principles and not follow canned patterns. Also, there are fewer StackOverflow answers.

                                        1. 5

                                          Absolutely, my previous job hired lots of interns to work with Clojure who had very little programming experience. The interesting part we found was that people without experience often have an easier time learning Clojure because they don’t have any preconceptions about how code should be written. A few of the students my team hired ended up specializing in Clojure after and haven’t had to work with anything else since.

                                          Since Clojure is niche, companies end up having to train most of their devs, so having familiarity with the language is seen as a big plus. Having some small project on GitHub that you can link to in your resume would go a long way here.

                                          1. 2

                                            I think all this post shows is that it’s possible to do much better than the mainstream in a niche, but it’s also possible to do a lot worse.

                                            The thing about a niche is it’s unique, so how can you generalize about it? It completely depends on what niche it is.

                                          1. 1

                                            I stopped reading when it told me a pointer is a variable meant to store a memory address. That is an incredibly unhelpful thing to tell programmers. A pointer stores a value that is provided by some memory allocator (operator new, automatic storage, or static storage) and that conveys the right to access an object. The compiler will typically represent it as a memory address but if you put an arbitrary memory address in one then there’s a good chance that you’re hitting undefined behaviour and your compiler is allowed to be mean to you (and will). On a system with any kind of memory safety hardware (CHERI, MTE, and so on) then the lowering will also include some other data to help the hardware to be mean to you in similar situations.

                                            1. 1

                                              It’s unfortunate that you stopped reading at that sentence, because the very next paragraph says what you wrote.

                                              I say meant to, because if it’s correctly initialized it either stores nullptr or the address of another variable - it can even store the address of another pointer -, but if it’s not correctly initialized, it will contain random data which is quite dangerous, it can lead to undefined behaviour.

                                              1. 2

                                                I read that, but it doesn’t in any way alter my opinion. At the abstract machine level, it does not store the address of a variable. It stores something that permits you to access that variable. This may be an address. On segmented architectures, it may be a segment descriptor ID. On CHERI systems, it may be a capability. On Arm systems with MTE, it will store both the memory colour and the address. On Arm systems with PAC, it may contain an encrypted version of the address. On systems with software memory safety, it may store the address and some encoding of the bounds. With Intel’s cryptographic computing, it is a complicated encrypted value.

                                                The fact that, on some architectures, it stores an address is an implementation detail. It is not part of the abstract machine and it is increasingly not even true on more modern architectures.

                                            1. 14

                                              I’d love to see more technical details in part 2. :)

                                              For those wanting to ask, don’t use k8s. I went down the k8s route for a home lab and after spending many, many hours I deleted it all and went back to my compose setup. Unless you specifically want to learn k8s for professional reasons, there are better ways to spend your time, in my opinion of course.

                                              Nomad is intriguing for its simplicity over k8s but I’ve been hesitant to go down another path when what I have works and is low maintenance.

                                              1. 5

                                                I have a similar nomad+tailscale setup for my homelab with two nodes, one running on an arm machine and one on an amd64. Nomad is definitely overkill if you just want to run containers on a single node. I guess you would just be swapping docker compose files for nomad config files at that point.

                                                What I really like about my setup is that I was able to plug a drone CI instance into the nomad stack, so that the individual CI runners are allocated by the cluster rather than by the CI instance itself.

                                                Nomad has its quirks (OP covers some of them in the gitea setup section). What I learned is that everything works well enough until you get yourself into a weird edge case; then it’s difficult to make it do what you want it to do.

                                                I’m also looking forward to part 2!

                                                1. 2

                                                  Not only is it overkill, I genuinely don’t understand what a container scheduler does for a single node set-up; what exactly is the scheduling algorithm deciding? Isn’t some combination of systemd and podman better?

                                                  1. 2

                                                    It still can do nice things like rolling upgrades (not available with Docker Compose AFAIK; only Swarm has that feature) and has a nice workflow for running deployments from outside that node (e.g. from a CI/CD pipeline).

                                                    About that second point about deployments: I’ve never done Docker Compose deployments from outside the node that will run the workload. It didn’t seem quite suited to that, but I may have just assumed so without exploring the options. Nomad allows me to render entire configuration files as part of the deployment, and I’ve been finding that very handy.

                                                    But I’d still look for simpler solutions that provide the two benefits above. Some months ago I did some research and couldn’t find anything (it’s possible that my research wasn’t deep enough).

                                              1. 3

                                                Are there other chapters? It seems like only the Prologue is available.

                                                1. 4

                                                  This example takes advantage of beta-optimality to implement multiplication using lambda-encoded bitstrings. Once again, HVM halts instantly, while GHC struggles to deal with all these lambdas. Lambda encodings have wide practical applications. For example, Haskell’s Lists are optimized by converting them to lambdas (foldr/build), its Free Monads library has a faster version based on lambdas, and so on. HVM’s optimality opens doors for an entire unexplored field of lambda-encoded algorithms that were simply impossible before.

                                                  Probably my favorite part of the writeup.

                                                  1. 11

                                                    This is entirely subjective but I have looked at zig a few times and it does feel enormously complicated to the point of being unapproachable. This is coming from someone with a lot of experience using systems programming languages. Other people seem to really enjoy using it, though… to each their own.

                                                    1. 8

                                                      Huh, that’s interesting to hear. Out of curiosity, what were the features you found the most complicated?

                                                      I’ve had the exact opposite experience actually. I’m comparing against Rust, since it’s the last systems language I tried learning. Someone described rust as having “fractaling complexity” in its design, which is true in my experience. I’ve had a hard time learning enough of the language to get a whole lot done (even though I actually think rust does the correct thing in every case I’ve seen to support its huge feature set).

                                                      Zig, on the other hand, took me an afternoon to pick up; I was able to start building stuff once I figured out option types and error sets. (@cImport is the killer feature for me. I hate writing FFI bindings.) It’s a much smaller core language than rust, and much safer than C, so I’ve quite enjoyed it. Although the docs/stdlib are still a bit rough, so I regularly read the source code to figure out how to use certain language features…

                                                      1. 17
                                                        • An absurd preponderance of keywords and syntaxes. threadlocal? orelse? The try operator, a ++ b, error set merging?
                                                        • An overabundance of nonstandard operators that overload common symbols. When I see | and % I don’t think “saturating and wrapping”, even though it makes sense if you think about it a lot. As mentioned earlier, error set merging uses || which really throws me off.
                                                        • Sentinel-terminated arrays reusing D, Python, and Go’s slicing syntax is just cruel.
                                                        • Why are there so many built-in functions? Why can’t they just be normal function calls?
                                                        • There seem to be a lot of features that are useful for exactly one oddly shaped usecase. I’m thinking of anyopaque, all the alignment stuff… do “non-exhaustive enums” really need to exist over integer constants? Something about zig just suggests it was not designed as a set of orthogonal features that can be composed in predictable ways. That isn’t true for a lot of the language, but there are enough weird edges that put me off entirely.
                                                        • Read this section from the documentation on the switch keyword and pretend you have only ever used algol family languages before:
                                                        Item.c => |*item| blk: {
                                                            item.*.x += 1;
                                                            break :blk 6;
                                                        },

                                                        It’s sigil soup. You cannot leverage old knowledge at all to read this. It is fundamentally newcomer hostile.

                                                        • what does errdefer add to the language and why does e.g. golang not need it?
                                                        • the async facility is actually quite unique to zig and just adds to the list of things you have to learn

                                                        Any one of these things in isolation is quite simple to pick up and learn, but altogether it’s unnecessarily complex.

                                                        Someone said something about common lisp that I think is true about Rust as well: the language manages to be big but the complexity is opt-in. You can write programs perfectly well with a minimal set of concepts. The rest of the features can be discovered at your own pace. That points to good language design.

                                                        1. 13

                                                          I’m thinking of anyopaque, all the alignment stuff…

                                                          Maybe your systems programming doesn’t need that stuff, but an awful lot of mine does.

                                                          what does errdefer add to the language and why does e.g. golang not need it?

                                                          A lot! Deferring only on error return lets you keep your deallocation calls together with allocation calls, while still transferring ownership to the caller on success. Go being garbage-collected kinda removes half of the need, and the other half is just handled awkwardly.

                                                          I didn’t quite see the point for a while when I was starting out with Zig, but I pretty firmly feel errorsets and errdefer are (alongside comptime) some of Zig’s biggest wins for making a C competitor not suck in the same fundamental ways that C does. Happy to elaborate.

                                                          Someone said something about common lisp that I think is true about Rust as well: the language manages to be big but the complexity is opt-in. You can write programs perfectly well with a minimal set of concepts.

                                                          Maybe, if you Box absolutely everything, but I feel that stretches “perfectly well”. I don’t think this is generally true of Rust.

                                                          I think Zig’s a pretty small language once you actually familiarise yourself with the concepts; maybe the assortment of new ones on what looks at first blush to be familiar is a bit arresting. orelse is super good (and it’s not like it comes from nowhere; Erlang had it first). threadlocal isn’t different to C++’s thread_local keyword.

                                                          I get that it might seem more unapproachable than some, but complexity really isn’t what’s here; maybe just a fair bit of unfamiliarity and rough edges still in pre-1.0. It’s enormously simplified my systems programming experience, and continues to develop into what I once hoped Rust was going to be.

                                                          1. 8

                                                            Being unfamiliar, nonstandard, and having features for what you consider ‘oddly shaped usecases’ may be exactly what makes it a worthwhile attempt at something ‘different’ that actually solves some problems of other languages, instead of being just another slight variant that doesn’t address the core problems?

                                                            I personally think it’s unlikely that another derivative of existing languages will improve matters much. Something different is exactly what is needed.

                                                            1. 2

                                                              The question is how different it needs to be. Must everything be different, or just some aspects?

                                                            2. 3

                                                              what does errdefer add to the language and why does e.g. golang not need it?

                                                              This is a joke, right? Please tell me this is a joke.

Go does not need errdefer because it has (used to have?) if err != nil and all of the problems that came with that.

                                                        1. 8

                                                          This looks really well done! But I’m always compelled, in response to tutorials like these, to advocate for using parser/lexer generator tools instead of hand-writing them. In my experience writing your own parser is sort of a boiling-the-frog experience, where it seems pretty simple at first and then gets steadily hairier as you add more features and deal with bugs.

                                                          Of course if the goal is to learn how parsers work, it’s great to write one from scratch. I wrote a nontrivial one in Pascal back in the day (and that’s part of why I don’t want to do it again!)

                                                          Of the available types of parser generators, I find PEG ones the nicest to use. They tend to unify lexing and parsing, and the grammars are cleaner than the old yacc-type LALR grammars.

                                                          1. 27

                                                            This looks really well done! But I’m always compelled, in response to tutorials like these, to advocate for using parser/lexer generator tools instead of hand-writing them. In my experience writing your own parser is sort of a boiling-the-frog experience, where it seems pretty simple at first and then gets steadily hairier as you add more features and deal with bugs.

That’s the opposite of my experience. Writing a parser with a parser generator is fine for prototyping, and when you don’t care about particularly good error reporting or about reusing the parser for things like LSP support; beyond that, it will hurt you. In contrast, a hand-written recursive descent parser is more effort to write at the start but is then easy to maintain and extend. I don’t think any of the production compilers that I’ve worked on has used a parser generator.

                                                            1. 8

                                                              Also once you know the “trick” to recursive descent (that you are using the function call stack and regular control flow statements to model the grammar) it is pretty straightforward to write a parser in that style, and it’s all code you control and understand, versus another tool to learn.

                                                              1. 3

                                                                What’s the trick for left recursive grammars, like expressions with infix operators?

                                                                1. 4

                                                                  Shunting yard?

                                                                  1. 3

The trick is the while loop. If you have something like A = A '.' ident | ident you code this as

loop {
  parseIdent()
  if !eat('.') { break }
}
                                                                    1. 3

You can’t just blindly do left-recursion for obvious reasons, but infix is pretty easy to deal with using Pratt parsing (shunting yard, precedence climbing - all the same).
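In case it helps, here’s a minimal precedence-climbing sketch in Go. The single-digit “lexer” and all the names here are just illustrative, not anything standard:

```go
package main

import "fmt"

// Binding powers for the infix operators we recognise.
var prec = map[byte]int{'+': 1, '-': 1, '*': 2, '/': 2}

type parser struct {
	src string
	pos int
}

func (p *parser) peek() byte {
	if p.pos < len(p.src) {
		return p.src[p.pos]
	}
	return 0
}

// parseExpr handles left-associative infix operators without left
// recursion: a loop keeps extending the left operand while the next
// operator binds at least as tightly as minPrec.
func (p *parser) parseExpr(minPrec int) string {
	left := string(p.src[p.pos]) // toy lexer: single-digit operand
	p.pos++
	for {
		op := p.peek()
		pr, ok := prec[op]
		if !ok || pr < minPrec {
			return left
		}
		p.pos++
		right := p.parseExpr(pr + 1)
		left = fmt.Sprintf("(%s %c %s)", left, op, right)
	}
}

func main() {
	p := &parser{src: "1+2*3-4"}
	fmt.Println(p.parseExpr(1)) // ((1 + (2 * 3)) - 4)
}
```

The `pr + 1` in the recursive call is what makes the operators left-associative; passing `pr` instead would make them right-associative.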

                                                                  2. 1

                                                                    Having used and written both parser generators and hand-written parsers, I agree: parser generators are nice for prototypes of very simple formats, but end up becoming a pain for larger formats.

                                                                  3. 7

                                                                    Thanks! My thought process is normally: handwritten is simple enough to write, easy to explain, and common in real world software, so why learn a new tool when the fun part is what’s after the parser? I just try to focus on the rest.

                                                                    1. 4

                                                                      Lua’s grammar is also pretty carefully designed to not put you into weird ambiguous situations, with only one exception I can think of. (The ambiguity of a colon-based method call in certain contexts.)

                                                                    2. 6

                                                                      In general that is true, but with Lua there are compelling reasons (which I won’t get into here) to handwrite a single step parser for real implementations.

                                                                      That’s what the reference implementation does, despite its author being well-known for his research in PEGs (and authoring LPEG).

                                                                      1. 2

Do you have suggestions on PEG parsers that generate JavaScript as well as Java/Kotlin?
I was searching for something that I can use on a frontend webapp as well as on a backend (which is in Java).

                                                                        1. 2

                                                                          No, sorry; the one I’ve used only generates C.

                                                                      1. 5

                                                                        Generics in Go are probably gonna be a big deal, and the implementation looks nice and everything.

                                                                        But I’m a bit concerned that the rest of the language doesn’t really “support” generics as well as most other languages which feature them. Take the Max example from the article:

func Max[T constraints.Ordered](x, y T) T {
    if x > y {
        return x
    }
    return y
}

                                                                        This is a Max function which can take “any ordered type”. But “ordered” here just means the types which have comparison operators – which is a fixed set of built-in types. The Max function works with the signed integer types, the unsigned integer types, the floating point types, and strings. And that’s much better than the situation before generics where each built-in type would need a separate Max function (or, more likely, as is the case with Go’s standard library, just support float64 and require lossy casting between your type and float64). But it’s extremely limiting compared to any other generics I’ve heard of, because no user-defined type can be an “ordered” type.

                                                                        Generics (well, templates) work in C++ because a user-defined type can implement essentially all the operators a built-in type can, and can therefore act indistinguishably from a built-in type. A max function template in C++ can use the < operator to compare, and then any type with a < operator will work with that function.

                                                                        Generics “work” in Java because the generic type must be an object, not a primitive, and so a generic function can just require that the type implements the Comparable interface, which is implemented by all the relevant boxed types (Integer, Float, etc), and can be implemented by all user-defined types. So again, a max function in Java can use the compare method to compare, and then any type which implements Comparable will work with that function.

                                                                        Go is somewhere in the middle, where you can’t overload operators, so you can’t make a custom type act like a built-in primitive type, but it also doesn’t require boxing so people will be reluctant to make generic functions which require boxing (after all, in Go, that would be kind of ridiculous; you might just as well use runtime polymorphism using interfaces if you’re gonna box your types anyways).

                                                                        Maybe this turns out to not really be a problem in practice. Or maybe it turns out to be one of those things which Go users live with and Go haters bring up in every discussion as (valid, if overblown) criticism. I’d be curious to read other people’s thoughts on this. Has it been discussed in the Go community or core team? Does anyone here have experience with generics in other languages and in Go?

                                                                        Maybe the solution will be to pass in the operators/functions you need? A proper generic Max in Go could be written like this:

func Max[T any](x, y T, compare func(T, T) int) T {
    if compare(x, y) > 0 {
        return x
    } else {
        return y
    }
}

                                                                        Though this looks like it would be rather cumbersome to call all the time. I’m also slightly worried about how smart the Go compiler will be when it comes to inlining that comparison function, which will be important to reach performance comparable to C++ templates.

                                                                        All in all, generics look like a positive development for the Go language, but from what I can tell, I think it demands that we re-tread old discussions about operator overloading, function overloading, or if there are better ways to achieve what we need.

                                                                        1. 3

                                                                          But “ordered” here just means the types which have comparison operators – which is a fixed set of built-in types.

No, constraints.Ordered matches all types whose underlying types are one of those builtin types. See the source code and note the ~ notation in interfaces. For example, time.Duration is defined as type Duration int64 and satisfies this constraint.

                                                                          This mechanism doesn’t inherit any methods, but it does inherit all the operators. (The builtin primitive types don’t have methods, but if you do type A int64 and then type B A, B doesn’t inherit methods from A.) I don’t know whether this was an intentional design decision or not, but it now certainly poses a practical challenge for treating operators like methods (which is probably never going to happen anyway).

Operator overloading has also been explicitly rejected in the Go FAQ. In comparison, the FAQ never explicitly rejected generics - it used to say something like “it’s an open question, we may or may not add it”. Some people took that as a corp-speak way of rejecting generics, but it has now turned out to be an earnest statement.

                                                                          1. 3

                                                                            Fair, I suppose “fixed set of built-in types or type aliases of those built-in types” would be more technically correct. I don’t think that materially changes much of what I wrote, but it’s an important detail.

                                                                            1. 2

These aren’t type aliases. Type aliases look like type T = int64 (note the =) and T becomes equivalent to int64 everywhere. Defining type T int64 creates a new type. The new type has the same machine representation as int64 and “inherits” the operators (which work between values of type T, not between T and int64) - but nothing else. The new type is convertible to int64 (and vice versa), but an explicit conversion is needed.

                                                                              This is probably the more quirky aspect of Go’s type system. It is internally consistent though. If you have two type definitions type T struct{} and type U struct{}, these are not the same types and have distinct method sets, but they can be converted to each other with an explicit type cast.
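A tiny sketch of the alias/definition difference (Alias and Defined are of course just illustrative names):

```go
package main

import "fmt"

type Alias = int64 // alias: Alias and int64 are the same type everywhere
type Defined int64 // definition: a new type sharing int64's representation

func main() {
	var a Alias = 1
	var i int64 = a // fine: Alias *is* int64, no conversion needed

	var d Defined = 2
	// i = d        // compile error: Defined and int64 are distinct types
	i = int64(d) // explicit conversion required
	fmt.Println(a, i)
}
```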

                                                                          2. 2

                                                                            But it’s extremely limiting compared to any other generics I’ve heard of, because no user-defined type can be an “ordered” type.

                                                                            Just to clarify, are you saying it’s limited because there’s no way to implement constraints.Ordered yourself, or because there’s no way to implement a generic “ordered”? If it’s the former, then yes, right now there’s still a difference between primitives and types with methods. If it’s the latter, then the self-referential trick documented in the type parameters proposal can implement what you want:

package main

import "fmt"

type Lesser[T any] interface {
	Less(T) bool
}

func Max[T Lesser[T]](a T, more ...T) T {
	max := a
	for _, candidate := range more {
		if max.Less(candidate) {
			max = candidate
		}
	}
	return max
}

type FlippedInt int

func (n FlippedInt) Less(other FlippedInt) bool {
	return n > other
}

func main() {
	a := FlippedInt(1)
	b := FlippedInt(2)
	c := FlippedInt(3)
	fmt.Println(Max(b, a, c)) // Prints 1
}

I wouldn’t say this is “extremely limiting” though. It just means there have to be two mechanisms instead of just one, and you need to pick the right one based on what kind of type you’re dealing with. I mean, OCaml has two different operator sets for integers and floats, and that’s not considered “extremely limiting” from what I gather.

                                                                            For containers, they’ll likely be implemented by using a comparison function underneath. If the community/standard library decides on a particular pattern, you’ll likely have one convenience function for the ones that are constraints.Ordered, another for Lesser (if that ever becomes a thing), then a raw one which lets you specify whatever comparison function you want.

                                                                            1. 2

                                                                              You can write func Max[T constraints.Ordered | interface{ Less(T) bool }](a, b T) T, but with the current implementation, it’s hard to do a type switch inside of Max to figure out if you got something with a Less method or not. It can be done, but it requires reflection or some other trickery. There’s a proposal to allow type switching on T, and I think it will happen in 1.19 or 1.20. Everyone wants type switching, and it’s just a matter of ironing out all the weird corner cases with it.

                                                                              1. 2

One alternative to switching would be to have methods which duplicate the effect of the binary operators. So instead of having to have FlippedInt, we’d have an actual .Less method on all types which are ordered, so generic code which wanted to support, say, bigint and int64, could just call the methods and eschew the operators.

One nice side effect is that the method sets define natural interfaces which can be used as constraints, and I think you no longer need type sets.

Do you happen to know if this has been discussed (or if it has obvious negatives I haven’t seen)?

                                                                                1. 2

Yes, it was discussed. It turned out to be hard to get a set of methods that captured things like “uint can’t be negative”, “int8 can’t be more than 127”, etc., so you need type sets anyway, and at that point natural methods are redundant.

                                                                                2. 1

You can try a type assertion to the interface, just like any other type assertion, where the second return value indicates whether it succeeded.

                                                                                3. 1

                                                                                  Author here. I agree, the current implementation is limited!

Coming from C++, not being able to implement comparison operators on user-defined types is a big gap. The core team actually acknowledged that in their generics presentation at GopherCon this past month. I’m not sure if they plan to address this gap, but it does feel like generics (at this point) are really designed for primitives.

When I was first exploring the constraint system, I was excited to see that, in theory, any interface could be used in a generic type set. But then I had to pause and question when I would use type-set constrained generics and when I would use method-set constrained interfaces. In the absence of user-defined operators, I don’t think there’s a compelling reason to switch from the latter to the former.

For example, consider two user-defined interfaces Foo and Bar. Now we have the option to define a type set

type Buzz interface {
    Foo | Bar
}

func Do[T Buzz](x T) {
    // ...
}

But we already had a variant on this pattern with the Reader, Writer, and ReadWriter interfaces, which (in my opinion, at least) is one of the more powerful patterns in the standard library.

type ReadWriter interface {
    Reader
    Writer
}

func Do(rw ReadWriter) {
    // ...
}

So that leaves the question of “what’s the advantage of generics over ‘compound’ interfaces”. I argue that being able to generalize over operators is the biggest selling point. We’re never going to implement OO patterns like Java, so to your point, yeah, we’re sitting in limbo until we can do operator comparisons on user-defined types.

                                                                                  All that said, I think the constraint system is a compelling and clean implementation. If anything, I want to see how they expand on this concept to include variables etc. I’m confident this constraint system can be leveraged elsewhere with the right support.

                                                                                  1. 1

                                                                                    Coming from C++, not being to implement comparison operators on user-defined types is a big gap.

I don’t want comparison operators to be overloaded, but I do wish we could define methods on any type, so we could define a “Less() bool” method on builtin types to make them implement a more general interface. This comes up a lot, and creating a subtype doesn’t work because there’s no cheap way to cast a slice of ints to a slice of MyInt, for example. And on top of that, it’s not very ergonomic to have to subtype (remember the days before sort.Slice(), anyone?).
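To make the slice-ergonomics point concrete, here’s a small sketch (MyInt is just a hypothetical defined type): even though MyInt and int have the same representation, the slice has to be copied element by element, since Go won’t reinterpret it.

```go
package main

import "fmt"

type MyInt int

func (n MyInt) Less(other MyInt) bool { return n < other }

func main() {
	xs := []int{3, 1, 2}
	// ys := []MyInt(xs) // does not compile: no conversion between slice types
	ys := make([]MyInt, len(xs)) // allocate and copy element by element
	for i, x := range xs {
		ys[i] = MyInt(x)
	}
	fmt.Println(ys, ys[1].Less(ys[2]))
}
```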

                                                                                1. 1

Does anyone use Prolog? Because I only know of it being taught in school, nothing in production. For Ada we know of at least one industry using it.

                                                                                  1. 5

There are a few projects. For example:

                                                                                    1. 2

Prolog is used in a lot of linguistics and language processing. I know of a couple of companies using it.

                                                                                      Fun fact: my only paper credit ever is a piece of Prolog code ;).

                                                                                      1. 2

                                                                                        Erlang was originally implemented in Prolog. The paper Use of Prolog for developing a new programming language (Armstrong, Virding, and Williams, 1992) is a short and fun read.

                                                                                        1. 1

Gerrit Code Review uses Prolog rules to let projects customize submit requirements (e.g. what kinds of Code-Review/Verified/… flags are needed, and from whom, for a change to be submittable).

                                                                                          The verifier of pre-2014 SPARK (that Ada variant) was written in Prolog.

                                                                                          1. 1

                                                                                            I just found out about ProbLog, and this is supposedly built in ProbLog:

                                                                                            1. 1

Tau Prolog is used by the Yarn package manager.

                                                                                              1. 1

I think it’ll become more and more important going forward. Mostly this is a safe bet since type systems are getting more important, but there are a lot of other places these ideas haven’t reached yet. Personally, I’d like to replace file systems with a logic programming engine. And you also end up with a variant of logic programming if you want to describe only coordination-free programs (which are a super important class of programs in the cloud world).

                                                                                                1. 1

Can you elaborate on what “coordination-free programs” means and how logic programming improves the situation there?

                                                                                                  1. 1

The class of coordination-free distributed programs: the programs that do not require coordination. Something like conflict-free replicated data types (CRDTs), for example.

Edit: Logic programming enters the picture when you start talking about propagator networks; you can look at Edward Kmett’s talk about it or the keynote from POPL this year.

                                                                                                2. 1

                                                                                                  I use SWI Prolog for hobby stuff & a few little things in prod. There are a few companies that I know of which use it, to the extent that they’ve sponsored development of SWI and reached out for Prolog devs; in most cases though, the fact that they use Prolog is not advertised.

                                                                                                  1. 1

                                                                                                    More links to practical applications via here:

                                                                                                    My favourite is AusPig, an expert system for pig farming.

                                                                                                  1. 4

                                                                                                    There was an attempt.

                                                                                                    But after this:

                                                                                                    This is where I call bullsh*t. I have never had a static type checker (regardless of how sophisticated it is) help me prevent anything more than an obvious error (which should be caught in testing anyway). What static type checkers do, however, is get in my way. Always. Without fail.

he discredits himself.

And all this bragging that s-expressions are only trees. What about [ ] ‘ # :?

                                                                                                    1. 9

Honestly, if a type checker gets in your way, either the type checker is too simple or you have an attitude problem. Sometimes both.

                                                                                                      1. 4

                                                                                                        Or you are doing something really, really weird.

                                                                                                        1. 2

I have a question for you or anyone else interested. I’m not a type-system skeptic… I cut my teeth on OCaml and Haskell as a language hobbyist. Lately I’ve been using more Python and I’ve been wondering how to get a specific function typed the way I want. In dynamically typed languages it’s actually very common to come across a very generic “find” function over lists which takes an object to look for and the list, and optionally takes a key function (which produces the comparison object from each item in the list) and a test function (which defaults to the equality operator). This is really handy because the fewer names I have to learn (i.e. both find and findBy), the better. Naturally I wanted to see if I could do this in a statically typed language (without dependent types… those are hard for me. I know you’re the formal methods guy around here but forgive me for being recalcitrant about learning new tools).

                                                                                                          I like OCaml, so I took a crack at it in OCaml but can’t get it to be as polymorphic as I want it to be. Here’s a simple version.

let rec find (o : 'b) (xs : 'a list) (key : ('a -> 'b) option) (test : ('a -> 'a -> bool) option) =
                                                                                                            match xs with
                                                                                                            | [] -> None
                                                                                                            | x :: rest ->
                                                                                                              match key, test with
                                                                                                              | Some(k), Some(t) ->
                                                                                                                  if (t o (k x)) then (Some x)
                                                                                                                  else find o rest (Some k) (Some t)
                                                                                                              | Some(k), None ->
                                                                                                                  if (o = (k x)) then (Some x)
                                                                                                                  else find o rest (Some k) None
                                                                                                              | None, Some(t) ->
                                                                                                                  if (t o x) then (Some x)
                                                                                                                  else find o rest None (Some t)
                                                                                                              | None, None ->
                                                                                                                  if (o = x) then (Some x)
                                                                                                                  else find o rest None None

I’d like to be able to call this like find 2 ["my"; "name"; "is"; "kris"] (Some String.length) (Some (=)). Unfortunately OCaml gives me back val find : 'a -> 'a list -> ('a -> 'a) option -> ('a -> 'a -> bool) option -> 'a option = <fun>. It’s been a while since I worked through a Hindley-Milner example, but I think this is the correct inference, because: what if the needle and the haystack items are differently typed, but there’s no key or test function? The default is structural equality, which is 'a -> 'a -> bool, so it follows that the needle must always have the same type 'a as the elements. I tried a few different things, like breaking the match arms into different functions, but I’m not sure what to do. One thing I haven’t tried yet is doing this in Rust with the Any trait. Specifically, how can I, or what is the closest I can get to, have one function I can call to do all the different kinds of list searching that I want to do?

                                                                                                          I don’t think this is “weird”. In general there are a lot of examples that make command line interfaces, dynamic languages, etc. feel so ergonomic, and it’s also why I feel like the answer might lie in dependent types: there are a lot of times where a function or verb is parameterized by the types of the arguments or the way that you call it. I suspect a hypothetical language that supports overloading by keyword argument would suffice for this example but there are other examples I have where there’s a similar tradeoff between completeness and ergonomics. But furthermore, this isn’t really an ad-hoc polymorphism example, because I can write a single definition in dynamic languages. Something like

def find(o, xs, key=None, test=None):
    if len(xs) == 0:
        return None
    x = xs[0]
    k = key(x) if key else x
    t = test if test else default_equality
    if t(o, k):
        return x
    return find(o, xs[1:], key, test)
                                                                                                          1. 4

                                                                                                            Specifically, how can I, or what is the closest I can get to, have one function I can call to do all the different kinds of list searching that I want to do?

                                                                                                            Have a find function that takes a predicate for an item, and call it with different predicates. This is how List.find already works in Base: depending on how you call it, you can search for a needle in a haystack:

                                                                                                            List.find haystack ~f:(My_type.(=) needle)

                                                                                                            You can search for some nested key, of a different type:

                                                                                                            List.find haystack ~f:(fun x -> Other_type.(=) x.key needle)

                                                                                                            You can find something using something other than equality:

                                                                                                            List.find haystack ~f:(fun x -> Shape.fits_inside x.hole shape)

                                                                                                            I get what you’re asking, though. I don’t mean this to be snarky or annoying, just to say that this isn’t… the thing that works well in Python doesn’t really work well here.

I think one difficulty is that you’re using the polymorphic equality function, which… basically shouldn’t exist. It has surprising (illogical) behavior in the face of any nontrivial abstract type; it will throw exceptions if you pass it certain types of objects; it just shouldn’t exist.

                                                                                                            So let’s pretend that we have no polymorphic equality. Then how do you write that function?

                                                                                                            Well, now you need to pass an explicit comparator. Basically, make test a required argument. Okay; no problem. You wind up with a signature like this:

let find (needle : 'a) (haystack : 'a list) ?(key : ('a -> 'b)) (test : ('b -> 'b -> bool)) = ...

                                                                                                            I’m making key a named optional argument instead of an explicit positional option because that’s more stylistically natural, but it’s equivalent here.

But what’s the default value of key? Easy: the identity function. We want key to default to the identity function if it’s not provided explicitly.

Except… that doesn’t typecheck. That doesn’t actually make sense. The identity function isn’t 'a -> 'b; it’s 'a -> 'a. This makes sense if my test predicate expects an 'a, but the type signature says right there that it accepts a 'b. What if I call it with a differently typed test, but don’t provide a key argument?
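To make the default-identity problem concrete, here is a minimal sketch (using the stdlib’s Fun.id and List.find_opt rather than Base) of what happens the moment key gets the identity function as its default: OCaml unifies 'b with 'a, and key can no longer change the element type.

```ocaml
(* Sketch: give [key] a default of the identity function. *)
let find ?(key = Fun.id) ~test needle xs =
  List.find_opt (fun x -> test needle (key x)) xs
(* Inferred type:
     ?key:('a -> 'a) -> test:('b -> 'a -> bool) -> 'b -> 'a list -> 'a option
   The default forces [key : 'a -> 'a], so passing String.length as [key]
   over a string list no longer typechecks. *)
```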

                                                                                                            You kind of want to say “you can’t do that.” In other words, make key required when test has a different type, and allow it to be optional when test doesn’t. But you can’t express that – you can’t decide that an argument is optional or not, based on the type of the function passed to test.

                                                                                                            And you know all this, which is why you’re bringing up dependent types. Can they let you determine, at runtime, whether this argument is optional or not?

                                                                                                            Well, not in OCaml. But in some other type system? Sure! Dependent types wouldn’t even be necessary: just moving the type-checking of the defaulted arguments to each callsite would get you the behavior you want (you’d have to use actual optional arguments, instead of defaulting them in a match statement in the function body, but whatever).

You can express the behavior you want in OCaml; you can construct predicates like this dynamically at runtime – but just not using optional arguments and a single function call. There are other ways to do it; heck, as soon as you make key a required argument, you have done it – callsites can typecheck whether the identity function is valid for each individual invocation of your function. But that’s not the API you want.

                                                                                                            I think this is an interesting illustration of a point the original author of this post was trying to make: why is the type system getting in my way like this? After all, Python lets you have the API you want. Why doesn’t OCaml?

                                                                                                            Well, this example makes just as little sense in Python as it does in OCaml. Python just allows programs that use the “wrong” types. Want to compare a string and an int? Go ahead. False, I guess? Hope that was what you wanted; hope you meant to compare a string and an int. The Python function works if you use it right – if o is the same type that test expects, etc. But OCaml wants to ensure that all possible uses of the function are valid. It doesn’t want you to say “there is a right way to call this function,” it wants to say “all ways you can call this function are valid.”

                                                                                                            These are very different goals! There are some contexts where you want that extra assurance, and some where you don’t. Depends on what you’re doing. (Er, sorry – what I meant to say was “static typing is pointless.”)

I have run into this problem before (specifically the “I want an optional argument to transform this value that defaults to the identity function”), and it’s annoying. I wish I could express that; I wish I could hold the compiler and soothe it and explain to it that it’s going to be okay. But instead I just make it a required argument and move on.

                                                                                                            But in this case, you don’t even need to do that. The canonical OCaml way to do this is simpler and more flexible than the function you’re trying to write. You say you don’t think your function is “weird,” but it does look weird to me: why write this very specifically-shaped find function instead of the far more general one that lets you pass any predicate you want?

                                                                                                            List.find haystack ~f:(My_type.(=) needle)

                                                                                                            The key example you want where the needle is the same type as elements in the list and you still want to compare it can be accomplished with the not-very-well-named Comparable.lift:

List.find haystack ~f:(Comparable.lift Identifier.(=) ~f:(fun x -> x.id) needle)

                                                                                                            It’s verbose and kinda ugly, but OCaml is a verbose and kinda ugly language :)

                                                                                                            Last thing: you say you want a single function to do everything so you don’t have to remember which function to use. But why is find special? What if you want to, say, filter a list next? Do you have a filter function that takes these same optional arguments?

                                                                                                            By breaking out the functions that construct these predicates from the functions that use the predicates, you end up with a (subjectively!) simpler program with less to think about. Find takes a predicate. Filter takes a predicate. Partially applying a comparator will get you a predicate. Comparable.lift is one way to compose comparators. String these together, and now any function that takes a predicate gets this compare-on-a-key function; you don’t need to go in and add these optional arguments to every function you want to call.
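The composition style described above can be sketched like this (stdlib functions rather than Base; the helper name by is hypothetical): build the predicate once, then hand it to any predicate-taking function.

```ocaml
(* Sketch: a keyed predicate, constructed once and reused. *)
let by ~key ~eq needle = fun x -> eq needle (key x)

let () =
  let is_two_letters = by ~key:String.length ~eq:(=) 2 in
  (* find and filter both just take the predicate *)
  assert (List.find_opt is_two_letters ["my"; "name"; "is"] = Some "my");
  assert (List.filter is_two_letters ["my"; "name"; "is"] = ["my"; "is"])
```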

                                                                                                            Okay I’m just rambling now and I have an undefined-is-not-a-function exception to track down so I’ll go now.

                                                                                                            1. 2

I’m not an OCaml expert but I probably wouldn’t introduce option types into this signature. Your key and test parameters have reasonable “default” implementations that are as convenient to pass as typing None or Some:

let rec find (o : 'b) (xs : 'a list) (k : 'a -> 'b) (t : 'b -> 'b -> bool) =
  match xs with
  | [] -> None
  | x :: rest ->
    if t o (k x) then Some x
    else find o rest k t

let itself x = x ;;
let always x _ _ = x ;;

find 2 ["my"; "name"; "is"; "kris"] String.length (=);;
find "my" ["my"; "name"; "is"; "kris"] itself (always true);;
                                                                                                              1. 1

I presented a simplified problem. The arguments are optional because I wanted to use optional arguments in OCaml so that I didn’t have to always provide a key or test function (the typical case). Furthermore, the find function might take an arbitrarily large number of options, like fromEnd, so even though two optional arguments are simple enough to write explicitly, it would get unwieldy for more. Obviously, the more power you give a function, the less likely you are to find a single signature for it. You can easily take this too far, to the point of making a function with uselessly large surface area, but I think there’s a grey area where a function is conceptually small enough to be useful and ergonomic for a lot of people but way too large to be statically typed.

                                                                                                                1. 3

                                                                                                                  Well, you asked “Specifically, how can I [do the above]” and both I and ianthehenry gave pretty specific examples :)

                                                                                                                  I do agree with ianthehenry that it sounds like you want OCaml to be more like Python but there are some fundamental language design choices which mean that Python can do some “unsound” things that OCaml just won’t allow.

                                                                                                                  I think I get what you mean referring to large number of optional parameters. This seems to be a common pattern in Python, especially in the popular scientific and ML library APIs. sklearn.svm.SVR as an example has 11 options, and that’s not even a lot in comparison to other sklearn functions!

                                                                                                                  That sklearn API has more design issues than just having “too many” parameters. Two of those options are only valid when combined with certain choices for another option. Although this is documented, the language does nothing to help prevent usage mistakes here. Some of the options specify that they must be positive real numbers, which is a case where a stronger, dependent type system might come in handy.

                                                                                                                  Human errors in setting up complex options like this are common, and stupid mistakes like that are easy to make. This sort of bug is often much more subtle and pernicious than an error in logic. How will I know what will happen if I try to fit an SVR while invalidating some of those documented constraints? At a quick glance in the Python code, there’s no obvious runtime checking either, so who knows! How could unit testing solve this problem? Furthermore, why should the burden of testing be passed off to us end users, when we didn’t design this API?

But in a typed language this API would probably be designed differently from the start. I would imagine some sort of Builder pattern might be appropriate, as it is well known as an alternative to having lots of constructor parameters. (Again, I’m not an OCaml expert so I don’t know the idiomatic forms.)

                                                                                                                  The typed language design tenets say to “make illegal program states unrepresentable.” That does mean that typed API design requires some more effort, to carefully model parameter variants rather than dumping a bunch of defaults into the function signature. In exchange for the cost of design, you get to make certain guarantees to your clients. This is what the typing proponents mean when they say typing “eliminates a whole class of bugs.”
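As one hedged illustration of that tenet (the type and field names here are hypothetical, loosely modeled on the SVR kernel options mentioned above), a variant type can make the “these options are only valid together” constraint unrepresentable:

```ocaml
(* Sketch: gamma/degree/coef0 only exist on the kernels they apply to,
   so an illegal combination is a type error, not a documented caveat. *)
type kernel =
  | Linear
  | Rbf of { gamma : float }
  | Poly of { gamma : float; degree : int; coef0 : float }

let describe = function
  | Linear -> "linear"
  | Rbf _ -> "rbf"
  | Poly { degree; _ } -> Printf.sprintf "poly(%d)" degree
```

There is now no way to construct a linear kernel with a degree, whereas the flat keyword-argument API has to document that constraint and hope.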

                                                                                                            2. 1

                                                                                                              Thing is that I’m always doing something really really weird when I’m between states of working code, i.e. when I’m doing anything that I cannot instantly get right. I find that being in that dark zone is usually more fun and efficient with a less restricting language. A less restricting language kinda tries to work with me during the journey, but a stricter language just goes “dude, where the fuck are you even going?”

                                                                                                              1. 1

                                                                                                                I’d love to see an example of a language working with you here. I’ll admit my primary dynamic typing experience is Ruby where doing something invalid immediately results in an exception and (usually) a crash.

                                                                                                                1. 1

                                                                                                                  Perhaps I’m thinking via negative examples when working with languages like Rust or Haskell. If I do something weird there, I just get an entirely unhelpful type error.

                                                                                                            3. 4

Having worked in Haskell and Rust, I’ve definitely had the type checker get in the way. I mean, pages of Haskell community questions are filled with people asking how to chain operations through lenses, or questions about how to lift one monad into another. When you work enough in Haskell, it comes naturally as you’re holding the types (e.g. monad transformer stacks, or free monads) in your head, but let’s not forget that if you were coding in a language that doesn’t require this type-level specificity, you wouldn’t need to do any of this.

I presume (though the author does little more than rant about this so I’m being charitable here lol) that the author’s position is that the cost of the contortions necessary to satisfy the type checker, in cases where the behavior is well understood, is too high, and that guards/assertions/contracts around tricky parts of your code offer much of the same benefits as types without forcing the contortions around all of your code. I’m not sure how to test the veracity of either side of this claim, but I do think it has merit, having had to spend more time than I’d like building up State monads and lifting results out.

                                                                                                              FWIW I still largely think this is a neurotype thing. I suspect folks who find abstractions easy/empowering will have little trouble juggling abstractions in their head and will prefer to use that rather than think about weird/silly edge cases. I suspect other folks prefer focusing on a more concrete approach in their code, and these folks will lean on things like contracts, property testing, and regular old unit testing for assurance. sqlite certainly offers us a window into what the latter approach could look like. I don’t really have the data to back up my thoughts on the matter though so I’m idly speculating as much as everyone else 🤷.

                                                                                                              1. 3

                                                                                                                Monad transformer stacks (besides being an advanced feature most code doesn’t need) aren’t a type system thing, if you want the same behaviour in a unityped language you’d need very similar machinery. Free monads are a cool technique, but same thing. I don’t use lenses often (and avoid lens especially) but maybe they are poorly enough set up that the type system fights you, I couldn’t say.

In general I find that if the compiler says I’m wrong, it is I who is wrong. The idea that “oh, this code would work but the dang type system won’t take it” is the thing I just can’t see. Sure, if you’re using advanced tools you need to understand the tools, but if you took away the type system all the compile errors would become equivalent runtime crashes. You’re not fighting the types usually, at most fighting yourself / your understanding of the tool. I definitely wouldn’t suggest most new Haskellers use monad transformer stacks (and only occasionally transformers at all… maybe ExceptT) or lenses, just as I wouldn’t suggest a new Rubyist reach straight for refinements or metaprogramming, or a new Rust programmer use Rc or RefCell.

                                                                                                                1. 3


unityped

This continues to be one of the most smug and contemptuous ways people refer to dynamic typing. Not as bad as “unethical” but up there.

                                                                                                                  1. 2

                                                                                                                    How is it smug or contemptuous? I’m a Rubyist and have nothing against dynamic typing or anything like that.

                                                                                                                  2. 1

                                                                                                                    Monad transformer stacks (besides being an advanced feature most code doesn’t need) aren’t a type system thing, if you want the same behaviour in a unityped language you’d need very similar machinery. Free monads are a cool technique, but same thing.

Hm. I feel that there aren’t many benefits to using an ML type system if I can’t layer guarantees with a monad transformer stack or a free monad approach. Without those, I feel like I’m just working with any other somewhat strong type system. I may be throwing out the baby with the bathwater here since I haven’t used more “relaxed” languages like OCaml or SML, mostly spending time in Haskell.

                                                                                                                    In general I find that if the compiler says I’m wrong, it is I who is wrong.

Sure, and I don’t think that’s what TFA is talking about. I think TFA is saying that the cost the author pays for using an ML-style type system is too high in instances of simple and clear logic to justify the benefits that it offers when the compiler tells you you’re wrong. This is a more complicated value tradeoff and I think it largely depends on the person. Knuth is famous for advocating writing prose to accompany logic (literate programming), so the reader of code follows along as one does with a math textbook, and I presume this allows Knuth to keep the bugs in his code very low.

                                                                                                                    1. 2

                                                                                                                      I may be throwing out the baby for the bathwater here since I haven’t used more “relaxed” languages like Ocaml or SML, mostly spending time in Haskell.

                                                                                                                      Me too. I’ve played with SML and done some rust and swift, but mostly I use Haskell.

                                                                                                                      the cost the author pays for using an ML-style type system is too high in instances of simple and clear logic

                                                                                                                      Right, whereas I find for those cases (in Haskell anyway) I don’t need to even write any types at all. I just write the code as I would in Ruby (in ref to types, obviously syntax and approach will differ a bit) and that’s it. Types get inferred and if I make a typo or logic error I get a message when I run it with runhaskell just like I would when running the equivalent ruby script with ruby.

                                                                                                                      Anyway, I think maybe the issue is that it’s hard to discuss this in the abstract. Might be more constructive in the context of a particular fight someone is having, which is really a topic for pairing and not a forum, unfortunately.

                                                                                                            1. 9

                                                                                                              On this subject, I think the addition of low latency garbage collectors to the JVM (Shenandoah and ZGC) has made JVM applications super attractive again for networked services. People who use Clojure tend to really like it so I’m glad there is still a lot going on in the community.