Threads for lorddimwit

    1. 11

      This is the most beautiful thing I’ve seen in quite some time. Leaning on stack orientation to obviate the monad/dyad duality is an interesting choice, the choice of glyphs is great, and the auto conversion from “human typable” names is nice.

      Really, really great work. It’s just beautiful.

      1. 2

        Yeah, I’ve tried to get into J and APL a couple of times, and the whole monad/dyad thing and the trains and forks just don’t want to stay in my head, but this just makes a lot more sense to me :)

    2. 4

      I really like Lil and the Decker project of which it’s a part. It’s beautiful.

    3. 1

      Me and the kids made (and ate) a delicious apple cake together, then I’m taking them to a friend’s birthday party. Tomorrow we’re going to play miniature golf, weather permitting. I may be picking up my father from my sister’s house to have him stay with us for a week to give her a break as well (he lives with her).

    4. 3

      A friend of mine built a working 6502 out of discrete logic back in the late 90’s. This sort of thing is just incredible to me.

    5. 1

      I’ve always loved this. From the hardware up!

      I actually have a dead tree copy of the original Project Oberon on my bookshelf. Not nearly as comprehensive as this, but…I’ve always loved Oberon.

    6. 2

      I don’t see an advantage over a bitmap, which is simpler and only uses 1/16 to 1/64 of the memory (depending on the maximum int you need to store).

      Yes, the bitmap does need to be fully zeroed at the start, and iteration requires scanning the whole map, but since you’re clearing/scanning at least 64 items per memory access, I don’t think that’s a disadvantage. (And you can use vector instructions and popcount to speed up iteration further.)
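
      To make the iteration point concrete, here’s a rough Go sketch (package and function names are mine, not from any particular library) of scanning a word-at-a-time bitmap, skipping zero words and using trailing-zero counts within a word:

      package bitmap

      import "math/bits"

      // ForEachSet calls visit for every set bit in a bitmap stored as 64-bit
      // words. Zero words are skipped in a single comparison, and within a word
      // the trailing-zero count jumps straight to the next set bit, so the cost
      // tracks the number of set bits rather than the size of the universe.
      func ForEachSet(words []uint64, visit func(i int)) {
          for wi, w := range words {
              for w != 0 {
                  b := bits.TrailingZeros64(w) // position of the lowest set bit
                  visit(wi*64 + b)
                  w &= w - 1 // clear that bit and continue
              }
          }
      }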

      1. 5

        Russ Cox points out the case where you’ll be frequently clearing the bitset/sparse set. The technique comes from register allocation, and I think Russ also uses it in the context of tracking the set of NFA states that maps to a specific DFA state.

        https://research.swtch.com/sparse

      2. 4

        The best use of this structure that I’m aware of is in regex engines. I’ve used it in my own implementations, and it’s also used in the Go standard library’s regexp implementation and, I believe, in Google’s RE2.

        Basically the set is used to track the active program counters for threads (in the threaded-interpreter sense). Using a bitmap would require scanning the entire bitmap on each step to see which threads were active, and clearing the sparse set is an O(1) operation versus O(n) for a bitmap.

        Now, whether it’s a meaningful difference for all practical sizes is a different question, but it’s a really elegant solution to the problem IMHO.
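
        For anyone who hasn’t seen the trick: it’s two arrays and a counter. A minimal Go sketch (names and API are mine) of the structure being discussed:

        package sparse

        // Set is the classic two-array sparse set over integers in [0, capacity).
        // In C the arrays can be left uninitialized; Go zeroes them anyway, but the
        // point is that correctness never depends on their initial contents,
        // because membership is always double-checked through dense.
        type Set struct {
            dense  []int // the members, packed; only dense[:n] is meaningful
            sparse []int // sparse[x] = index of x within dense, if x is a member
            n      int
        }

        func New(capacity int) *Set {
            return &Set{dense: make([]int, capacity), sparse: make([]int, capacity)}
        }

        func (s *Set) Has(x int) bool {
            i := s.sparse[x]
            return i < s.n && s.dense[i] == x
        }

        func (s *Set) Add(x int) {
            if s.Has(x) {
                return
            }
            s.dense[s.n] = x
            s.sparse[x] = s.n
            s.n++
        }

        // Clear is O(1): just forget how many members there are.
        func (s *Set) Clear() { s.n = 0 }

        // Members visits only the live entries, in insertion order,
        // with no scan over empty slots.
        func (s *Set) Members() []int { return s.dense[:s.n] }

        Add, Has, and Clear are all O(1), and iteration touches only live members, which is exactly the property the regex use case wants.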

        Russ Cox writes about the use of the structure in regex engines in the following articles:

        https://research.swtch.com/sparse https://swtch.com/~rsc/regexp/regexp3.html

      3. 2

        Hierarchical bit sets offer very fast iteration, with some caveats (limit on number of elements, mostly).

        https://docs.rs/hibitset/latest/hibitset/

    7. 5

      I don’t know if the author is aware, but they’re describing the technique first presented in “An Efficient Representation of Sparse Sets” by Briggs and Torczon (1993).

    8. 4

      It is in part a love letter to BeOS, and to this day I still can’t explain how nice BeOS was to use in the late 90’s. It just felt…right. Still my favorite desktop operating system, ever (though don’t tell the Amiga that).

      1. 2

        My favorite computer artifact is my press-release BeOS R4.5 kit I got from my CompTIA A+ teacher at the turn of the century. I really do wonder what would be different if Apple had sided with Jean-Louis instead of Steve…

        It was and still is a remarkable system. It has features that many things today still aspire to accomplish.

        1. 5

          The big failure of BeOS was that it depended on BeOS features for things to be nice, just at the point heterogeneous networks were becoming widespread. Apple put a lot of effort into things like being able to use an NFS or SMB file share instead of local disk for a lot of things. Even Spotlight can do local indexing of remote shares. BeOS relied on the (fantastic) features of BFS and so you got a usability cliff when you used a FAT-formatted removable medium or a network share.

          I think a Be-based Mac would have been much harder to do well. Remember that, when Jobs returned, Apple’s market share was around 3% and falling. The OS X Mac did well, in part, because you could buy one or two, integrate them into your mostly Windows (or mostly Linux / AIX / Solaris) environment and they would be nicer to use (except the stupid puck mouse) than anything else, so then you’d buy more. A BeOS machine in such an environment would have been nicer only for things that didn’t involve sharing with the rest of the company and so would have remained siloed, without a significant reworking of BeOS.

          1. 1

            Very good points, and ones I’d not previously considered.

            I think there was another one, though, a bigger one for the question of developing a native software market: NeXTstep’s dev tools were, AFAIK, way ahead of Be’s.

            I fear that if Apple bought Be instead, it would be a fading memory now, or maybe a brand of some bigger company.

            The company, IMHO, that Be should have partnered with was Acorn.

            Acorn was on the threshold of releasing the Risc PC 2, a dual-CPU ARM desktop. But its OS didn’t support SMP.

            Be’s did. I think the marriage could have worked.

            1. 4

              I think there was another one, though, a bigger one for the question of developing a native software market: NeXTstep’s dev tools were, AFAIK, way ahead of Be’s.

              I’m somewhat biased to agree here, but it took Apple a lot of developer relations effort to persuade app developers to adopt Objective-C. Remember that when 10.0 launched, there were four application development environments:

              • Carbon, the classic Mac OS Toolbox ported to OS X.
              • Cocoa, rebranded OpenStep.
              • Java with AWT or Swing.
              • Mocha (not the official name), Java with the Core Foundation classes toll-free bridged and the Cocoa UI classes accessible.

              Apple was really uncertain that Cocoa would catch on. They put a lot of effort into Java early on because it looked like the future. For example, OS X was the first platform where multiple JVM instances could share memory for the class library. I bought my first Mac in 2003 and fully expected to mostly write Java code on it (I then started my PhD on the same grant as a GNUstep developer and drank entirely different koolaid).

              The Be platform looked like a safer bet here. They had a multithreaded GUI that looked future proof and it was written in C++, the same language that an infinite number of Windows programmers were using.

              A bunch of the things that I really like in macOS (e.g. Spotlight) were implemented by ex-Be folks, so we might have had them earlier in that alternative timeline, but I don’t think Be had the drive to build something people wanted. The key thing Steve Jobs did was create companies that build machines for Steve Jobs to use and amortise the cost by selling to other people. This led to a unified design, because there was one customer. Sometimes a stupid design (he was partially deaf, so the iPod volume defaulted to dangerously loud, for example), but at least a coherent one. If your requirements are similar to his, the machines are great. If your requirements are different, the machines suck. The closer to mainstream his requirements were, the more successful the products.

              The company, IMHO, that Be should have partnered with was Acorn.

              I’m not sure this would have helped. Acorn had really nice hardware but was basically dead from a market perspective at this point. Schools moved over to Windows on x86 because ‘that’s what companies use in the workforce’. One of the very few good things Thatcher did was institute programming-led computing teaching in schools (no government money to buy computers if they didn’t support structured programming!) and that was gone by 1998, replaced with MS Office (if you were lucky, MS Works if not) training in schools. The Acorn machines cost less, ran faster, and were far more pleasant to use, but they struggled to sell because they weren’t what companies used.

              They’d have needed something quite compelling to get back. Adding a RiscOS compat layer to BeOS might have been possible, but RiscOS was still mostly written in assembly, so it would not have been trivial to move it to a different hosted environment (unlike Symbian, which was intended to run as a DOS process for debugging and had really nice hardware abstractions).

              RiscOS and BeOS also both suffered from the same problem: lack of a decent web browser. RiscOS had NetSurf, but even by 1998 there were a lot of sites that didn’t work well with it (‘best viewed in…’). BeOS had NetPositive (everything was NetSomething, it seems), which had similar problems. As part of the settlement from a lawsuit, Microsoft had to ship Office and IE for Mac for a while, but neither Acorn nor Be had that advantage. IE5 for Mac was mostly fine (it rendered transparent PNGs, unlike IE for Windows!) but was different enough to be a problem. There was, I believe, a Netscape port for BeOS, so it had that advantage, but Netscape was dying. Maybe Firefox could have become the default browser on RiscBeOS but it would have been a lot of engineering work. Apple’s investment in WebKit and Safari turned out to be one of the best management decisions of the decade.

              BeCorn would also have hit problems with 64-bit. Apple shipped their first 64-bit PowerPC machine in 2003. By 2006, almost all new x86 machines were 64-bit. AArch64 didn’t come along until 2011 and then the first non-Apple silicon was a few years later.

              Oh, and a lot of Arm cores weren’t actually SMP capable. I don’t think any of the XScale line were (they should have been, given the Alpha heritage, but I can’t find any reference to SMP versions) and they were the performance winners for a long time. You can fake it for small core counts. I know of one company that made SMP 386 systems by adding hardware in the memory controller that let you lock the bus for exclusive memory operations, but that made atomics very expensive and effectively made other cores stall while one core was doing a memory operation. That’s mostly fine for two processors, starts to hurt at four, and kills you above eight. You really need to build it into the cache logic and fabric and, for Arm cores, the caches and fabric are part of the block that you license as a single unit.

              The ARM11 was available in 2002, but the MPCore version (SMP-capable) wasn’t until 2005. The A15 was the first of the ARMv7 cores to be SMP-capable, I believe (in 2012!) and supported up to 8 cores in two clusters.

              It’s possible that, if BeCorn had been selling desktops, Arm would have prioritised SMP and 64-bit (or they’d have been able to do in-house designs - Arm was pretty liberal about handing out architecture licenses back then and I think Acorn had some pre-existing license as part of the spin-out process). Lee Smith pushed 64-bit quite hard but got a lot of push-back from partners who didn’t see any demand for it (right up until Apple announced the first 64-bit iPhone and then suddenly everyone wanted it). Intel (XScale owners), in particular, really wouldn’t have wanted a 64-bit ARM in 2002 when they were still in denial about Itanium’s failure, so I doubt they’d have been able to do AArch64 before 2008, at the absolute earliest.

              1. 1

                Splendidly detailed and considered reply. Thank you.

                TBH I don’t know about the low-level details of SMP on Arm. I did occasionally work on at least one or two multiprocessor SMP 80386DX systems, and I can say that it was doable, in the early 1990s, and while the hardware cost a lot, for some users, it delivered. One of my clients back then, here on the Isle of Man, replaced an IBM PS/2 running Xenix (which I had brought up myself) with a Compaq SystemPro with dual 386s, and while that box was more than twice as expensive and cost as much as a nice car, it scaled well enough that their app, running on dumb terminals for a few dozen users, got materially better performance.

                I was also a member of CIX which for a while ran on a Sequent box with lots of 386s, and handled thousands of concurrent CoSy users, with excellent stability.

                In the mid-1990s I owned an AST Premmia box with dual Pentium 1 chips, which was my home NT 4 server, driving a big (physically) external JBOD RAID. 7 full-height 5.25” SCSI drives. Sounded like a Harrier manoeuvring outside the house; I couldn’t take phone calls with the server running. On this free, cast-off box, NT Server ran like a scalded cat.

                Acorn, ISTM, had 2 ways to go that it didn’t embrace: thin-and-light laptops, with its cool-running CPUs that sipped power, where RISC OS would have been a good fit… or SMP workstations. The “Phoebe”, the Risc PC 2, had 2 CPU slots. The prototypes could run 2 x 233MHz StrongARM, but it was planned to ship with a single 400MHz StrongARM; IOMC2 should have supported up to 4 StrongARMs.

                Some leaked specs: https://marutan.net/wikiref/Acorn%20Registered%20Developer%20Docs/MISC/CONF98/HTML/HARDWARE.HTM

                Acorn wouldn’t have gone for it because it had deranged daydreams of its own all-singing all-dancing microkernel OS, Galileo.

                But – also a daydream – ISTM that Acorn could potentially have offered an SMP Arm workstation with BeOS in the late 1990s at a lower price, and with lower power and cooling requirements, than any other RISC vendor, and continued to offer a quad-CPU model before the first dual-core CPUs ever appeared.

                A single-CPU Arm laptop with BeOS would have been a pretty nifty device as well, of course. BeOS’ GUI was quite adaptable, and making it look more RISC OS-like would have been perfectly doable. Maybe even some filesystem fakery to make the layout superficially RISC OS-like, and a copy of RISC OS in a VM, akin to BlueBox on OS X 10.0-10.4.

                It’s only a pipedream. (Punny reference to Colton Software intentional.) Whether the niche markets (high-performance thin-and-light laptops, beast-spec SMP workstations) would have kept them going another decade is moot.

              2. 1

                As I understand it, BeOS was a single-user OS without any internal security partitioning, so it would not have survived well if it became at all popular on the Internet. I suspect it was better than RISC OS, tho, because RISC OS did not protect OS memory from applications.

                1. 2

                  I am not convinced by that. The same was true of Windows. The BeOS kernel did have some concepts of protection domains but it didn’t fully export a POSIX protection model. That might have been a limitation, but I suspect not. The UNIX model was designed for time sharing systems where you had multiple users and needed to protect them from each other. The new world has a single user and the need to protect them from compromised components. Retrofitting this to POSIX is very hard. The microkernel design of BeOS was already a good start for introducing something like Capsicum. I can see a clear evolution from BeOS to something with good support for sandboxing / privilege separation, without all of the UNIX baggage that other systems have.

    9. 5

      I’ll admit to submitting this link because I want to ask a question about it. So first off… is submitting a link because you want to ask a question about it cool, at lobsters? I’m new and want to be cool.

      1. 12

        You’re on lobste.rs, by definition you’re cool.

        And for sure, ask questions! The unwritten rule does tend to be that you shouldn’t only submit links to stuff you write yourself/are involved in, but I’m just a sign, not a cop.

        1. 5

          The unwritten rule does tend to be that you shouldn’t only submit links to stuff you write yourself/are involved in

          Let’s see… 11 links, 1 I did not author… I’m respecting the rule I guess? More seriously though, that’s an average of 3-4 links per year, and the voting patterns suggest they were mostly welcome.

          I think the actual rule is to try and submit in good faith what we think will be interesting here.

    10. 2

      Apropos of nothing, I saw a Macintosh Portable in 1997 or 1998 and I was blown away at the display quality. I don’t know if it was actually that good but it seemed incredible at the time.

      1. 3

        It was really good for the time. I had one, and I loved it, but it wasn’t actually useful once the Powerbooks came along.

    11. 11

      It is shocking to me that user agents don’t handle this correctly.

      1. 3

        Exactly. That’s the whole point of user agents!

        1. 2

          I guess I shouldn’t be surprised, still less shocked, at user agents intentionally and aggressively not acting as agents for the user. Sigh.

    12. 8

      Damn. I’m sorry for her loved ones. I hope they find peace.

    13. 23

      My reactions:

      • oh it looks pretty neat
      • oh dear it’s written by drew devault
      • oh dear it’s written in Hare
      • Well I guess this will keep him occupied on bootstrapping systems for the next 20 years or so! (I don’t have any particular beef with ddevault that hasn’t already been hashed out, but it’s interesting to watch the trajectory of his projects go ever deeper into solipsism.)

      That said, the actual technology seems solid but not particularly novel, from what little is actually written there. Message-passing microkernel based on seL4, capability-based access control, etc. Early days for it yet.

      1. 11

        This is at risk of devolving into a flame war but I’ve seen a lot of people say Drew is…objectionable?…and I don’t know why. What little I’ve read about Hare I like, for example. I would like to be informed, but for my own mental health I’ve stopped meeting my heroes.

        (I’m hoping it’s not an Io or Jai situation, where these beautiful programming languages are marred by their creators’ just…awful stances on things.)

        1. 31

          A few years ago the best way to describe him was “being smart doesn’t mean you’re always right, and being right doesn’t mean you have permission to be a douchebag”. He said a few years ago that he was trying to tone it down a bit, but I haven’t interacted with him since then so I don’t know if it’s true. As far as I know he’s not an actual Nazi though; I pay to use Sourcehut with a clear conscience.

          It’s mainly interesting to me in that he first came to prominence for making wlroots and sway, which are useful things that needed to be made. Then he made Sourcehut, which is a useful thing that needed to be made. Now he’s making Hare, which is… competent, but imo pretty weaksauce compared to something like Zig. It feels like it suffers from the same problem as Go, having learned nothing about programming languages since 1990 or so. (Disclaimer, I am very biased.) And now an operating system that, at first glance, probably suffers from the same problem?

          I’d be pretty excited if he was building a new userland on top of seL4, tbh. I’d definitely trust him to do something interesting with that, or at least make a cool respin of Plan 9. But as someone who likes hobby OS dev, seeing yet another Mach-ish message-passing microkernel made is fairly uninspiring.

          1. 9

            Thank you.

            Douchebag I can handle, as long as the douchebaggery isn’t, you know, actual hatred. And kudos to him for trying, I suppose.

            I like minimalism, probably to a fault. And I still think Amiga Exec (which is an atypical, message-passing kernel) is a beautiful operating system.

            1. 11

              Douchebag I can handle, as long as the douchebaggery isn’t, you know, actual hatred. And kudos to him for trying, I suppose.

              It can be tricky. He got into a fight with someone trying to report a bug with Sourcehut’s Git integration, calling them an idiot and accusing them of lying about their repro steps.

              The issue turned out to be a fully reproducible bug in libgit2.

              No, that level of douchebaggery isn’t actual hatred, but it’s more than enough to drive people away from contributing to a project, and I think it’s reasonable to be a little hesitant about new projects he’s kicking off (it drives me away from them, for example). But I agree it’s a totally different situation than some of the other referenced projects.

            2. 7

              Tangent, but how come the Amiga hasn’t left more traces within FOSS? I associate it with nostalgia and strong partisanship among its adherents. Is it the lack of source, the ties to specific hardware, or something else?

              1. 8

                how come the Amiga hasn’t left more traces within FOSS?

                Vim started on Amiga.

                FreeBSD then Dragonfly lead Matt Dillon was a prominent Amiga dev, back then the author of “Dice C” and other tools.

                1. 17

                  FreeBSD then Dragonfly lead Matt Dillon was a prominent Amiga dev

                  The message-passing IPC that he put in Dragonfly was strongly Amiga inspired. Unfortunately, a lot of these ideas make sense as core abstractions that you build the system around but are much less useful bolted onto a POSIX system.

                  I recall reading a paper in the ’90s claiming that the last thing that any interesting new kernel design added was a POSIX compatibility layer, at which point everyone just ran ported UNIX code on it and ignored its exciting features (and then found it was a less good UNIX and gave up on it). POSIX has been very useful in making software portable but has been a sad barrier to innovation.

                  1. 4

                    Yeah, that’s my feeling too. Everyone fell into the comfortable local minimum of GNU/Posix, and there wasn’t energy to resurrect the Amiga experience, for example.

              2. 12

                Amiga stuff was tied to specific hardware for sure.

                But there was also, at least in my experience, a lot of hostility to open sourcing stuff in the Amiga world. I don’t know if it was because there were a lot of cottage programmers or what, but source-available programs on the Amiga were really rare until very recently and the SDK was expensive the whole time Amigas were in mass production.

                1. 8

                  Amiga stuff was tied to specific hardware for sure.

                  You might be surprised.

                  The MacroSystem DraCo was basically an independent, 3rd party, enhanced Amiga clone which ran AmigaOS and clean AmigaOS apps, but didn’t use any Amiga hardware.

                  http://theamigamuseum.com/amiga-models/draco-the-amiga-clone/

                  https://bigbookofamigahardware.com/bboah/product.aspx?id=43

                  hostility to open sourcing stuff in the Amiga world.

                  Indeed.

                  So, for instance, the successor to Rexx, Rebol, seems to be free, not FOSS – just source-available.

                  (The successor to Rebol, Red, seems to be FOSSier.)

                    AROS is FOSS, but MorphOS and AmigaOS 4 are not. AmigaNG, based on Intent, wasn’t and is lost to history.

                  1. 4

                    Note Rebol is from Carl Sassenrath, the lead engineer behind the original AmigaOS.

                    1. 3

                      Er, yes, that’s why I mentioned it. Sorry, probably should have said, for those who don’t know the history. My bad.

                    2. 2

                      More context:

                        Rexx (in the form of ARexx) was the scripting language embedded in AmigaOS. (It originates from IBM mainframe OSes. It is also found in IBM PC DOS 6.x-7.x, from the years when IBM continued developing DOS after Microsoft dropped it for Win9x and NT, and in recent versions of OS/2.)

                2. 3

                  the SDK was expensive the whole time Amigas were in mass production.

                  Yet another Commodore failure.

              3. 7

                AROS exists. It’s an open source reimplementation of AmigaOS. It’s binary compatible on original hardware, and has ports to other platforms.

                1. 1

                  I just wish there was a native Raspberry Pi version. It seems like an ideal platform for the OS.

          2. 5

            Addressed directly to you: being right doesn’t mean that you should voice your opinion bluntly in public. Speaking about someone that you think is objectionable in these terms doesn’t make your words less so.

            1. 4

              Pointing out someone’s objectionable behavior is not wrong; it’s how communities establish and demonstrate their norms, and how they prevent future abuse. It may be off topic in a particular discussion, of course, although I don’t think that that’s the case here.

              1. 2

                Calling someone a douchebag is not pointing out objectionable behaviour. There are always less hurtful, more truthful ways to convey your disagreement with someone.

            2. 3

              Touche. :-P How would you rephrase it?

              1. 3

                I can’t speak for you, but personally I’m happy for him that he found a new project and that he’s not shy of trying difficult things.

                In your case, even if you think your work is better than his, I bet you could still learn something from what he’s doing on hare and helios. Support your peers, don’t put them down.

              2. 3

                I don’t have a bone in this fight, but for starters, name calling (aka douchebag) and comparison to a Nazi don’t say anything other than that you dislike them.

                It’s commonly phrased “resort to name calling” for a reason: name calling is only the best option when you can’t/won’t back up your beliefs/opinions. It would have been far more effective to show an example of Drew’s (rudeness?), or a pattern of it.

          3. 3

            Just a heads up, the link you included in your comment appears to be broken?

            1. 3

              Good catch, thank you! Should be fixed now.

          4. -1

            So I take it the operating system you’re developing in the language that you also developed makes more inspiring choices? I’d be curious to see it. Could you share a link? Thanks!

            1. 3

              The language, Garnet:

              The only OS of @icefox’s that I see in their submissions is “Mongoose: A small toy OS with no memory protection”, which I suspect isn’t very comparable, either to a seL4-based OS (such as this Ares is said to be) or to Garnet, in terms of ambition or “inspiring choices”. In particular, last I heard, Garnet has the ambition to have memory safety, at least by default, whereas Mongoose and Hare, to my limited understanding, do not attempt memory safety (whichever kind of memory safety would apply).

              1. 0

                Thanks for the links. Mongoose 404s on sourcehut. And judging by the May 2023 status update on Garnet it still seems to have a long way to go. Not very inspiring, no? Or do we use “weaksauce” in this context? /endsnark

                1. 2

                  What is Mongoose? A browser?

                  If you’re going to snark in favor of Drew Devault, you might want to exclude not being able to reach sourcehut from the attempt, since it’s his project.

                  1. 2

                    From the parent comment, Mongoose seems to have been @icefox’s OS project.

                    And it 404s because the project was deleted BTW (?)

                    Also, that was non-partisan snarking ;) Worst case, snarking against arm chair critique.

                    1. 1

                      Thanks for the clarification.

        2. 4

          (I’m hoping it’s not a Io or Jai situation, where these beautiful programming languages are marred by their creators’ just…awful stances on things.)

          …do I want to know what’s going on there? I thought Jai was just the guy behind Braid, and, assuming Io’s the prototype language from forever ago, the only thing I’m aware of there is that the author used Io to create a porn browser, which…fine?

          1. 2

            There’s more to it IMHO, but I’ll let you draw your own conclusions from the authors’ Twitter/blogs.

            1. 8

              I think I’m gonna just go with ignorance being bliss and continue not using either

            2. 3

              Twitter is broken, and Steve Dekorte has since taken down his blogs, and archive.org isn’t being very helpful here but I’m guessing something crypto-related?

              Jonathan Blow has some really dumb political views and, more importantly, is just as opinionated and blunt about them as Drew DeVault has ever been. I’m guessing there’s nothing more than that, though.

        3. 3

          And Racket 😞

          1. 6

            Wait what’s going on with Racket? I thought it had a pretty large maintainer base at this point…

        4. 2

          OK, I’ve spent some 15+ minutes Googling so far, and have read about Jai and Jonathan Blow, and about Io and Steve Dekorte, and I still have no clue what you’re talking about. I have already invested way too much time into this.

          Can you just, y’know, tell us, please, rather than dropping elliptical hints?

          I went to a Drew de Vault talk at the last FOSDEM and I hoped to speak with him, but I didn’t get time to – I had to run off to another talk afterwards. I find his blog posts very interesting, and often highly insightful. Most of what he writes about I have no clue about, but on laptops he is absolutely 100% right, and on Linux distros, he talks a lot of sense although his desires are very different to mine.

      2. 4

        That said, the actual technology seems solid but not particularly novel, from what little is actually written there.

        And that’s a good thing. Better keep experimentation to a minimum (writing it in Hare is the experiment).

        He’s otherwise selected the best architecture available: Multiserver system based on a 3rd gen microkernel.

        It means that his system is better at a fundamental level relative to the most prominent operating systems in use today.

        1. 8

          “Better at a fundamental level” and an empty sack is worth the sack.

          1. 6

            Much higher potential than any boring “yet another UNIX” or “yet another 1st gen microkernel”. There’s hundreds of those projects, the most popular (and one of the worst) being Linux.

            In contrast, there are few that looked at the state of the art, selected the best technologies, and made something that isn’t architecturally bankrupt from the get-go:

            That’s about it. Well, there’s also a few non-opensource ones, but I’d rather focus on open source.

            1. 5

              Helios is Ares’s kernel

              1. 1

                HeliOS is also – another Amiga connection here – a FOSS cluster-native OS designed for INMOS Transputers but later ported to Arm and other CPUs. It is very loosely a continuation of TRIPOS, from which the kernel in AmigaOS 1.x was derived, but came to market on the Atari Transputer Workstation.

                https://www.theregister.com/2021/12/06/heliosng/

            2. 2

              Another seL4-based project is Robigalia, currently having the website https://rbg.systems. (It has been submitted on Lobsters before with a different domain name, but that domain name lapsed and was nabbed by someone who set up an imposter website.)

    14. 22

      The author makes some good points: names rot just like bits. I think initialisms are sometimes in a funny place. Their real meaning can become obsolete over time, while the initialism remains relevant.

      LLVM used to mean Low-Level Virtual Machine; nowadays it’s a compiler infrastructure ecosystem. HTTP is everywhere, but we do a lot more on the web than transfer and manipulate hypertext.

      1. 19

        Me: It’s called the Hypertext Transfer Protocol

        Them: Oh, so it transfers hypertext?

        Me: …I mean, sometimes.

    15. 6

      Sci-fi writer Robert Sawyer wrote a love letter to WordStar and its diamond:

      https://www.sfwriter.com/wordstar.htm

    16. 4

      I don’t know why somebody would need this; can you point me to an introduction to the topic of “expression language”? I figure some of this stuff (abs, int, float, string, max, min) is already in Go’s standard library, or some kind of “math” module. Why not just use that? The home page shows a stack machine running. Is this like a business logic execution engine, where you need configuration, introspection, and logging, instead of just plain logic execution?

      1. 5

        It’s really common for rules engines that need to restrict what the user can do. Examples (there’s a rough sketch of the first one below):

        1. HTTP routing: if path == “/foo” or cookie.bar then …
        2. Bot-blocking rules based on user agent or various signatures
        3. Firewall rules
        4. Packet filter rules

        Standard configuration/rule syntax over multiple products: https://github.com/google/cel-spec
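
        As a rough illustration of the first case, here’s what evaluating such a rule from a Go program can look like. This assumes the expr-lang/expr package’s Compile/Run API (import path and options are from memory, so treat it as a sketch rather than gospel):

        package main

        import (
            "fmt"

            "github.com/expr-lang/expr" // assumed import path (formerly github.com/antonmedv/expr)
        )

        func main() {
            // The rule lives in configuration, not in code.
            rule := `path == "/foo" or cookie.bar`

            env := map[string]any{
                "path":   "/foo",
                "cookie": map[string]any{"bar": false},
            }

            // Compile once, run per request.
            program, err := expr.Compile(rule, expr.Env(env))
            if err != nil {
                panic(err)
            }
            out, err := expr.Run(program, env)
            if err != nil {
                panic(err)
            }
            fmt.Println(out) // true: the path matches even though the cookie flag is false
        }

        The point is that the rule text stays in configuration while the host program controls exactly which names and functions it exposes.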

      2. 3

        Embeddable expression languages are used to express ‘rules’ in external configuration files/UIs.

        In the technical world, these are firewall rule configurations. In the infrastructure world, you have monitoring applications that set up ‘alerts’: an email should be sent if something bad happens. These alerts are ‘rules’ expressed in these kinds of languages.

        In Security world, there are multiple examples – intrusion detection alerts, XACML authorization rules, etc.

        In the gaming world, Lua is often used to express specific character behaviors, etc.

        This expression language is to be embedded in a Go application. In the Java world, there are several of these, e.g. http://mvel.documentnode.com/ or

        https://docs.spring.io/spring-framework/docs/4.0.0.RC2/spring-framework-reference/html/expressions.html

        But many also use JVM-compatible languages such as Groovy (although those may be too heavy for simple things).

      3. 2

        There is a list of users down in the README. I was a bit confused at first myself, but I can imagine this being super valuable when you want to give users the ability to input arbitrary expressions, but don’t want to give them a fully fledged programming environment for security or performance reasons (think infinite loops through recursion or whatnot).

        1. 2

          I’ve thought about using it for a templating language. Give users something that they can do at runtime to execute some code in templates without having to write a dynamic language myself.

        2. 2

          exactly, for instance in https://ossia.io we use http://www.partow.net/programming/exprtk/ to allow the user to do simple one-liner computations involving basic math functions & operations

      4. 1

        Many many years ago I wrote something very similar, it was used as an expression language in an in-browser template engine (back when such a thing was novel).

      5. 1

        Any time you want to give users the ability to filter, eg “run action A on this set of machines” where you might use an expression language to select the machines. Think of the --filter flag for most cloud provider CLI tools.

        .cluster == "foo" && .cloud == "abc" && .index == 1 && ! .elected_leader
        
    17. 2

      Does FreePascal address any of these issues?

      1. 6

        It addresses very literally all of them. But that’s not new; so did Turbo Pascal, so did UCSD Pascal, so did Macintosh Pascal, so did…well, literally every single production version of Pascal. To me, this is something of the reverse of claiming that C compilers don’t actually have undefined behavior because it’s nothing more than portable assembly. To some extent, sure that was true back in 1972, but it’s definitely not true in 2023.

      2. 1

        Indeed, but like the article says, different implementations solved them in different ways.

        1. 6

          I’d argue that, if only due to raw attrition, they did ultimately all address them the same way. In the end, Delphi and Turbo Pascal ruled everything, and the main differences between the two in terms of the language are largely limited to the object system. They both extend arrays the same way, they both handle strings the same way, they both do the same extensions to the module system, etc. And this is the same system that ultimately got used in things like GNU Pascal and Think Pascal and FreePascal. Yeah, there are some differences left, but I think they’re not any more pronounced than e.g. GNU CC vs. MrCpp vs. VC++ vs. Borland C++, and we got through that just fine in the end.

    18. 4

      I don’t think the claims about static are fashionable nowadays.

      1. 5

        There’s an unstated point that Pascal (the kind bwk is complaining about) does not have a module system, so variables are either local or global. It’s true that static inside a function is a code smell, but static at file scope still has important uses.

        Tangentially, Pascal’s lack of a module system and its strict code-ordering rules were among the motivations for Knuth’s literate programming. Tangle and Weave give the author more control over the order of exposition than Pascal allows. I think it is a shame that this aspect of the tooling became a core part of literate programming, because the literate approach can work well with negligible tooling when the host language is more relaxed than Pascal. The best example is Literate Haskell, where comments are the default and code is marked with >, instead of code being the default and comments marked with a double dash.

        1. 2

          Apropos of nothing, the Zeek (Bro) language has a feature that lets you add fields to structs incrementally: you can define a struct and then other, seemingly unrelated, bits of code can add fields to it.

          It captures, seemingly by accident, one of the more important features of literate programming: defining things where they are relevant, not when they first appear.

    19. 4

      (I pass over the question of how the end of a constant string like ‘hello’ can be detected, because it can’t.)

      NUL-terminated strings considered good?

      1. 17

        Remember this is an historical document. What was true in 1981 (the date of publication) isn’t true today.

        In 1972, when C was first implemented, NUL-terminated strings were pretty much the perfect design. C was implemented using a PDP-11/20, for which the typical configuration was 24KB of memory, into which you had to fit your code, your data, and the operating system. Memory was incredibly tight: every byte counted. NUL-terminated strings allow you to represent a string of any length using only a single byte of overhead. This also allowed for representing a string using just a memory address, a char *, which simplified the compiler. Because there was so little memory, there was a very low complexity budget for the compiler, and the language had to be stupidly simple or it couldn’t be compiled. Other, more sophisticated languages existed at this time. Algol 68 had “flex arrays” which included the size of the array in the array value. But you couldn’t fit a full Algol 68 compiler into the original Unix machine. Machines of this size were still being used, running Unix, in 1981.

        1. 3

          OTOH, C’s even simpler predecessor BCPL had length-prefixed strings.

          1. 3

            First of all, the length-prefixed strings in BCPL used a single character to represent the length, which would have meant a maximum string length of 255 characters in C. Note that in both the BCPL and the C string representation, the overhead is one extra byte or character. Dennis Ritchie said:

            None of BCPL, B, or C supports character data strongly in the language; each treats strings much like vectors of integers and supplements general rules by a few conventions. In both BCPL and B a string literal denotes the address of a static area initialized with the characters of the string, packed into cells. In BCPL, the first packed byte contains the number of characters in the string; in B, there is no count and strings are terminated by a special character, which B spelled `*e’. This change was made partially to avoid the limitation on the length of a string caused by holding the count in an 8- or 9-bit slot, and partly because maintaining the count seemed, in our experience, less convenient than using a terminator.

            When Ritchie says “less convenient”, he may mean “requires more code to be written”. On a machine where you are counting every byte of both data and machine code, that’s a bad thing.

            Second, BCPL is not simpler. BCPL is too complex to implement on the original machines used to implement B and C, due to memory limitations.

            B can be thought of as C without types; more accurately, it is BCPL squeezed into 8K bytes of memory and filtered through Thompson’s brain.

            The main problem with BCPL is that it has a number of features that require you to build a parse tree in memory of the entire program, before emitting any machine code. The original Unix machines did not have enough memory to allow storing parse trees in memory. Instead, the C parser directly emitted assembly code in a single pass through the program. The problem features of BCPL include:

            • The ability to call a function that is defined later in the program. C avoids this by requiring forward declarations.
            • Nested functions.
            • Block expressions, that comprise a list of statements, followed by an expression that is the result of the block expression.

            https://www.bell-labs.com/usr/dmr/www/chist.html

      2. 6

          No, length-prefixed strings. I wasn’t aware that the original Pascal didn’t have a string type! Every real implementation I used did. Actually a parameterized type, with the max length as a type parameter.

        1. 3

          This leads directly into the point made in the paper’s conclusion (emphasis mine):

          Because the language is so impotent, it must be extended. But each group extends Pascal in its own direction, to make it look like whatever language they really want. Extensions for separate compilation, Fortran-like COMMON, string data types, internal static variables, initialization, octal numbers, bit operators, etc., all add to the utility of the language for one group, but destroy its portability to others.

          1. 1

            Yup. Pascal was my axe from 1983-88 — college and a few years after — but in the form of much-extended versions by HP and Apple (both inheriting from UCSD, I think). Then I took a job requiring C and never returned.

      3. 4

        Conformant Arrays in ISO Pascal are the correct way, but came too late.

        Well, that’s not entirely true. There are still problems with having the size of the array be part of its type, but Conformant Arrays definitely help.

        You see some interesting things done in, e.g., Oberon, some versions of which don’t allow arbitrary-sized memory allocations. For example, arbitrary-sized strings are represented as linked lists of fixed-size pieces.

        At least one version of Oberon (Oberon-07, IIRC) actually switched to using null-terminated strings!

        EDIT: This is driving me crazy. The latest revision of Oberon-07 doesn’t have NUL-terminated strings (though it does mention that if a string is assigned to an array larger than it, a NUL will be appended). But I swear an earlier revision of the language had special handling for ARRAY OF CHAR and the LEN function that took the NUL into account.

    20. 22

      The real reason: the name is just an identifier, while a type can be an identifier or any number of other things like *i32 or [u8] or Vec(T), depending on what your language is. It’s easier to make a parser rule unambiguous when it starts with something as specific as possible, which lets you easily make types more complicated without needing a crazy-pants syntax like C function types. So if your type rule is complicated and can maybe look a lot like a non-type value in some cases, like foo[4], then having

      vardecl ::= IDENT type [EQUALS expr] SEMICOLON

      is a lot easier to handle nicely than

      vardecl ::= type IDENT [EQUALS expr] SEMICOLON

      It’s not a huge difference, you can make the latter work, but it’s enough of a difference that there’s no real reason to do int age. It also makes type inference syntax easier, since you can just omit the type but every var decl still starts the same way, instead of either needing a keyword like auto or trying to read “thing-that-may-be-an-ident followed by an ident but if it’s not followed by an ident you might have to backtrack a bit and try something else”.

      Having the syntax be let ... = ... makes this even easier, but afaict the purpose of that is more to make it possible to not need a return statement every time you want to return a value. Otherwise you might end up with an expression that’s just x; and your parser doesn’t know whether it’s declaring a variable or returning the value of it.
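
      Here’s a toy, self-contained Go sketch (hypothetical grammar and token shapes, not any real language) of how cheap the name-first rule is to parse: after one identifier token the parser has committed, and the type slot can simply be left empty for inference:

      package main

      import "fmt"

      // Toy parser for:  vardecl ::= IDENT [type] ["=" expr] ";"
      // After the leading identifier, anything that isn't "=" or ";" must be a
      // type, so omitting the type for inference needs no keyword and no backtracking.

      type parser struct {
          toks []string
          pos  int
      }

      func (p *parser) peek() string { return p.toks[p.pos] }
      func (p *parser) next() string { t := p.toks[p.pos]; p.pos++; return t }

      type varDecl struct {
          name, typ, init string // typ == "" means "infer from the initializer"
      }

      func (p *parser) parseVarDecl() varDecl {
          d := varDecl{name: p.next()} // IDENT: one token decides we're in a decl
          if t := p.peek(); t != "=" && t != ";" {
              d.typ = p.next() // a real parser would accept *i32, [u8], Vec(T), ...
          }
          if p.peek() == "=" {
              p.next()          // "="
              d.init = p.next() // expr (just one token in this toy)
          }
          p.next() // ";"
          return d
      }

      func main() {
          for _, toks := range [][]string{
              {"age", "int", "=", "42", ";"},
              {"age", "=", "42", ";"}, // type omitted, still unambiguous
          } {
              fmt.Printf("%+v\n", (&parser{toks: toks}).parseVarDecl())
          }
      }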

      1. 9

        It is really crazy how the generation who made Unix and the IP stack and all the other elegant software and protocols came up with C’s function pointer and variable modifier syntaxes. There must be some insight that eludes me.

        1. 17

          The insight is that C was a big improvement over what they were using before. This was not a time of 100 languages to pick from, and the hardware determined what you could even pick. They could do PDP assembly, maybe BCPL, and after that they had to invent something better from scratch. C was objectively better.

          Plus they were absolute cream of the crop computer scientists, to whom such a thing as a slightly awkward syntax did not matter in the grand scheme of things. They had much bigger fish to fry, and they did fry them.

          1. 14

            The insight is that C was a big improvement over what they were using before

            Was it? Pascal had been around, and some might argue that it was a better language. C won simply due to UNIX betting hard on it, not out of merit; at least that’s what I have seen some older folks mention in retrospect.

            1. 11

              “Why Pascal is Not My Favorite Programming Language” by Brian Kernighan has some (IMHO valid) points about the Pascal of the day.

          2. 3

            Plus they were absolute cream of the crop computer scientists, to whom such a thing as a slightly awkward syntax did not matter in the grand scheme of things.

            They were humans, not prophets. Still are, some of them. They’re allowed to make mistakes.

            1. 2

              Other things might’ve been a mistake; I just don’t agree that a syntax decision is one. Otherwise entire languages are mistakes; even some modern ones that are just starting out are inventing incredibly awkward syntax.

              The fact that they were humans is precisely what I meant to say in the first paragraph: they did the best with what they had. It is undeniable, however, that their best is better than almost any other “best”, probably evidenced by the huge impact C/UNIX still has 50 years later.

        2. 8

          I think a good part of it is that in the 70’s C had a much simpler type system, and it grew by accretion. Would be interesting to do some archeology and try to find out for real though.

          1. 1

            I’m re-reading SICP and in one footnote there’s a reference to abstract data types being researched, but the first cited papers are quite late, from around 1978 (if I recall correctly). Maybe that’s when the idea started getting more widespread traction. ML started earlier than that, but my guess would be that by the early 1970s it’d be restricted to provers and more theoretical applications.

            1. 5

              See Barbara Liskov’s Turing lecture. She covers the history of abstract data types in some detail.

              1. 1

                Great tip, will check it out.

        3. 8

          The type syntax is not “type var1, var2” but “simple-type expr1, expr2”, where simple-type is something like an int, double, or struct. For example:

          int c, *p, arr[5];
          

          Here the expressions c, *p, and arr[5] (shhhh, ignoring that the fifth element is out of range) will all have type int. A function pointer declaration like

          int (*myfunction)(int, int);
          

          is saying that (*myfunction)(some_int, some_int) will have the simple type int.

        4. 4

          I believe Chomsky hierarchies and similar theories were not well known by them at the time. That’s why C is infamously not context-free from a grammar standpoint.

          1. 7

            Not really; we got BNF with Algol (the language which, I think, introduced the Algol-style type-then-name declaration syntax).

            Also, I think C, as originally designed, was LL(1). Remember, you used to need to write struct MyStruct my_var, so every type started with a keyword. It’s only after the later introduction of typedef that the grammar becomes ambiguous and starts requiring the lexer hack.