Threads for Vaelatern

  1. 4

    Hm cool, is there any way for such scripts to work in the browser?

    Something I’ve always wanted is to unify web and CLI programs somehow, i.e. write a quick script and have it work in both places. The examples strongly remind me of HTML forms and checkboxes.

    I guess that is more of a terminal thing, but there are some terminals with rich text, images, etc.

    1. 8

      For Python, Wooey may be of interest: it exposes a CLI on the Web. Gooey similarly transforms a CLI into a desktop GUI app.
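
      Roughly how Gooey works, as a hedged sketch (the greeting script itself is made up): you write an ordinary argparse CLI, and one decorator turns it into a desktop GUI; without Gooey installed it degrades to the plain CLI.

```python
import argparse

try:
    from gooey import Gooey  # pip install Gooey
except ImportError:  # fall back to a plain CLI when Gooey isn't installed
    def Gooey(func):
        return func

@Gooey  # with Gooey present, this one line gives the CLI a desktop GUI
def main(argv=None):
    parser = argparse.ArgumentParser(description="Greet someone")
    parser.add_argument("name")
    parser.add_argument("--shout", action="store_true")
    args = parser.parse_args(argv)
    greeting = f"Hello, {args.name}!"
    return greeting.upper() if args.shout else greeting
```

      Wooey plays the analogous trick for the web, building a form out of the same argparse definition.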

      1. 4

        You could just use one of the JavaScript-based terminal packages like Hyper.js, but ultimately it’s not clear to me what you’re trying to achieve here :)

        1. 1

          The question is, once a shell script has this UX, can we find some glue to just stitch it into a web page? In the CGI model, could someone load a shell script and walk through it? Once the interactivity looks like this, why not? It would let shell devs easily give other people the same abilities CLI users already have.

          1. 1

            Respectfully, I’m not sure this makes sense.

            CGI scripts don’t expose ptys to end users, they take web requests as input and generate HTML and other content as output. They don’t magically enable interactive shell programs through your browser.

            1. 1

              CGI model, not the same tech.

              1. 3

                Thanks, I guess I’m only aware of the traditional CGI script.

                Can you point me to an example of the kind of thing you’re thinking about?

                1. 1

                  If I got the thread right, an example would be Vue.js (https://vuejs.org/).

                  Or React.

                  1. 1

                    That’s what I’m struggling with.

                    I can’t find any reference anywhere for React or Vue fitting in with a “CGI model”.

                    I’m just not understanding where folks are going with this idea - UNLESS you mean the generic HTTP request/response cycle?

                    1.  

                      I mean generic HTTP request/response with dynamic sites generated outside the web server. So a web server knows “when hit with this endpoint, run this script and connect the pipes.”

                      Don’t really have an example off the top of my head. And it would need a wrapper to have live-enough interactivity…
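
                      A minimal sketch of that “connect the pipes” idea, stdlib Python only (the route table and the script it runs are made up, and this ignores the interactivity problem entirely):

```python
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical endpoint-to-script table: "when hit with this
# endpoint, run this script and connect the pipes".
ROUTES = {"/hello": ["echo", "hello from a script"]}

def run_route(path):
    """Run the script mapped to `path` and capture its stdout."""
    cmd = ROUTES.get(path)
    if cmd is None:
        return None
    return subprocess.run(cmd, capture_output=True).stdout

class ScriptHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        out = run_route(self.path)
        if out is None:
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(out)

# To serve: HTTPServer(("127.0.0.1", 8080), ScriptHandler).serve_forever()
```

                      The live-enough interactivity wrapper is the hard part this doesn’t touch; you’d need something like a WebSocket bridge on top.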

        2. 2

          I’ve been playing with Dear ImGui with a view to doing this. It has a load of different back ends, most of which pass the output to the GPU to render, but there are two other interesting ones (from the same person):

          • ImTui produces curses-based text interfaces.
          • imgui-ws sends the compressed display lists over a web socket for rendering via WebGL in a browser.

          Annoyingly, both of them depend on slightly (differently) modified forks of Dear ImGui, so you can’t easily build a program that dynamically switches between them. I’m hoping that the hooks that they both need will eventually be upstreamed so that I can write things that render in my local GUI, my terminal, or on the web using the same Dear ImGui interfaces.

          1. 1

            Hm didn’t know about those libraries, looks pretty cool!

            I also wonder if you can just run the whole UI in WASM, without a C++ server sending WebSocket data? The funny thing is that WASM doesn’t really work well with the DOM, but maybe it would work better with terminal-based communication to JS and a virtual JS terminal. Or is that slower than WebGL?


            I think there are two directions you can approach it from:

            1. start with web -> fit in terminal
            2. start with terminal -> fit in web

            ImTui would be the latter, but I kind of like proportional fonts and hyperlinks, and I think screen readers for accessibility are an issue.

            So there is some value to the first for sure … But yeah it’s hard to do that without changing the terminal. And the problem is that most people use whatever terminal comes with their OS. You would need some kind of graceful degradation, and then that introduces all sorts of complexity.

            1. 1

              I also wonder if you can just run the whole UI in WASM, without a C++ server sending WebSocket data? The funny thing is that WASM doesn’t really work well with the DOM, but maybe it would work better with terminal-based communication to JS and a virtual JS terminal. Or is that slower than WebGL?

              I think you could compile the C++ bit to WASM and replace the WebSocket interface to the front end with something simple. My interest in this is for control-plane tools. I’ve been playing with these things to try to see if I can build a nicer replacement for things like systat using a thin C++ wrapper around a mostly Lua codebase (using Sol2 for bridging). I’d like to be able to run a terminal application via ssh, a graphical application locally, or a web app either locally or remotely, so that I can connect to a remote machine and run graphical administration tools.

              ImTui would be the latter, but I kind of like proportional fonts and hyperlinks, and I think screen readers for accessibility are an issue.

              You get proportional fonts with any of the other Dear ImGui back ends. Screen readers are not yet supported, but there are folks working on that. If they work with the display lists from Dear ImGui then it should also be possible to make console applications accessible, which would be fantastic.

              So there is some value to the first for sure … But yeah it’s hard to do that without changing the terminal. And the problem is that most people use whatever terminal comes with their OS. You would need some kind of graceful degradation, and then that introduces all sorts of complexity.

              That’s why I like the imgui-ws approach: you can jump straight into graphical mode by just having the terminal application give you a hyperlink that you click on and see the UI in your browser, or you can use ImTui to run it directly in the terminal.

        1.  

          I’m not sure I understand. If you read only the key ID from the authenticated payload in order to authenticate it, is there a problem? Or is the problem that this is error-prone for implementers? I’m no crypto expert, but I suppose I care about security more than average, and I thought it was obvious that nothing but the key ID should be used before authentication.

          1. 6

            My interpretation is that reaching in and grabbing just one thing from the untrusted payload is bad spec design, since it means that API developers are going to want to implement grabbing any ol’ thing out of the untrusted payload.

            (Facetiously) I’m beginning to think JWT is bad?

            1.  

              Meanwhile I’m beginning to think you can have an implementation of JWT that is non-compliant but good: ignore any algorithm specified in the token, and only verify, never encrypt.
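
              A sketch of what such a deliberately non-compliant verifier could look like (stdlib only, all names made up): the algorithm is hard-pinned to HS256 and the token’s own alg header is never even read, so alg: none and key-confusion tricks have nothing to latch onto.

```python
import base64
import hashlib
import hmac
import json

def b64url_encode(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(text):
    return base64.urlsafe_b64decode(text + "=" * (-len(text) % 4))

def sign(payload, key):
    """Issue a token. Always HS256, no matter what anyone asks for."""
    header = b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url_encode(json.dumps(payload).encode())
    sig = hmac.new(key, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url_encode(sig)}"

def verify(token, key):
    """Verify-only, algorithm pinned: the header's `alg` is never consulted."""
    header, body, sig = token.split(".")
    expected = hmac.new(key, f"{header}.{body}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig)):
        raise ValueError("bad signature")
    return json.loads(b64url_decode(body))
```

              Nothing from the payload or header is trusted until the constant-time signature check passes.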

              1.  

                I agree with you… but what’s the point of JWT if the only good implementations are non-compliant? I remember reading good things about paseto but I’ve never actually used it.

                1.  

                  The point is to have a tool that can be used to track sessions without maintaining much state on the server (revocation list is an obvious but, depending on your environment, plausibly optional thing). That’s all I need.

              2.  

                I’m really not a fan of JWT, but I have questions here. X.509 certificates also have an issuer field that is part of the signed data even though it doesn’t strictly need to be. Would X.509 be better if we stopped signing the issuer?

                It has some of the other problems that have gotten JWT in trouble, too: certificates identify their own issuer, leaving it to the application to decide whether that issuer is acceptable, and their own signature algorithm.

                Of course X.509 is much more tightly specified, and includes a standard way for issuer certificates to identify what kind of key they have. It also doesn’t mix asymmetric and symmetric cryptosystems. But I wonder if the main reason we consider it a reasonable security standard isn’t exactly the same reason developers might prefer JWT—the bar to implement X.509 at all is so high that people aren’t tempted to roll their own.

            1. 1

              I wonder why he didn’t pop a large resistor as the load instead of shorting the panel. Surely a fake load is safer and helps show real world operation better than just shorting the two ends!

              1. 1

                It’s all very readable, even considering it’s entirely uneditable. Does Python make a better Perl? Write once, then read-only forever?

                1. 2

                  I think AppleScript is a better candidate for this comparison than Perl? Perl is infamous for being write-only rather than read-only. AppleScript is infamous for being fairly easy to read but almost impossible to edit.

                  1. 1

                    This particular feature of Python makes it compete head to head.

                    1. 2

                      I very much disagree, and also like I say above, I think language comparisons like this add very little value and serve mostly as incendiaries.

                      Most uses of Python’s new pattern matching feature are very readable to practically anyone. You’re looking at a trained practitioner contorting their code into a knot - stunt coding because they CAN and because doing so helps those of us who study the language understand it more deeply.

                      1. 3

                        You’re looking at a trained practitioner contorting their code into a knot - stunt coding because they CAN and because doing so helps those of us who study the language understand it more deeply.

                        Yeah the motivation is less “look how dumb pattern matching is!” (it’s not dumb at all) and more “look how much fun you can have with a programming language without writing an actual program, just doing weird things”

                  2. 1

                    Python is Python and Perl is Perl. They’re both great languages which have more or less readable features, and all of this (what’s readable and what’s not) is so incredibly dependent on the fuzzy aesthetic preferences of the human doing the reading.

                  1. -1

                      Super long. Statebox-y things are the future: bijection between code and GUI tools, proper models of what it is you are doing, etc.

                    Shameless sr.ht/~ilmu/tala.saman

                    1. 1

                      I didn’t understand how tala.saman is relevant here.

                      1. 0

                        Cool, I can explain!

                          The idea of no-code is to trigger behaviours in the computer with minimal “programming”, so ideally you are interacting with some kind of GUI. The tala.saman idea is to index behaviours into menus that can have human-readable descriptions of the expected thing to happen if you trigger the selected option. However, there are many behaviours to index, and organizing them is a kind of visual programming. Who wants to organize all this stuff, though? I don’t think the “semantic web” is just a UX problem; it is also an incentive problem. Hence economics matter, and it would be cool to pay people for legitimate improvements to the index of behaviours (i.e. the internet). But no one can be trusted to assess what constitutes a legitimate improvement, so we all get to have our own opinion, and the space of opinion that emerges may actually converge into consensus in some cases, which is like finality in blockchain things. I.e., you can pay people for arguing on the internet (the ones who argue in good faith will preserve trust and will actually get paid).

                          Honestly the whole idea of “visual programming” is a bit sus imo; it’s all programming. The key is just to make it simple for people to participate in said programming, whether they are documenting observed behaviours, just supplying parameters, or actually organizing the computations that take place on the machine (i.e. “programming”). We’ll always need to make this game accessible in a ‘continuous’ manner so that people can unpeel the layers of abstraction or create new abstractions (names); it’s about being able to see what is going on and drawing trust boundaries around important resources that you want to protect from “automatic updates”. Having a universal abstraction makes it much easier to reason about how that is supposed to work, and bigraphs are that abstraction to a large extent. Statebox has already built some nice interfaces for working with them, but we still don’t have integrations with everyday tasks. Eventually the whole data-interchange-graph of your computer should be interactively editable.

                        1. 1

                          So I think I understand your point, but not what Tala.saman has to do with it

                          1. 1

                            Did you read the article? His sub-headings after motivating the problem are:

                            • Extensibility
                              • Interoperability through good APIs and Webhooks
                              • Dropping down to code when needed
                              • Open-source, at the limit
                            • Evolvability
                              • Version Control
                              • Observability

                            Tala.saman is about how to practically achieve that, if my explanation is not clear then I am willing to continue the discussion once you point out what rankles you.

                            Maybe it is the “visual” part of “visual programming” that I didn’t go deep enough into? The “data-interchange-graph” is a visual representation of how data moves around your computer (being passed between programs), the visual programming part is what statebox has done with their bigraph and string diagram editors… also, since the data being passed around needs to be meaningful we need a name system which is what this tala.saman thing is supposed to be.

                            1. 1

                              I understood his article, but the docs I found for talasaman didn’t explain how the software addresses that need. Or even what it is. The only thing I was able to take away was “kinda datalisp I guess?”

                              1. 0

                                Yeah, I know, sorry, I don’t have much written down yet, but still I think that takeaway is less than what you’d get from not following any links and just reading my comment above.

                                What I am imagining is roughly:

                                • Reproducible builds & FOSS OS
                                • Theoretically sound data interchange format
                                • Economically sound name system

                                The names are referencing contexts for giving data meaning. The data is version controlled and assessed, trusted data is allowed to make causal effects on the OS.

                                Economic soundness is just w.r.t. the user of the system. Since the problem of not knowing what to trust is symmetric in the p2p model I just need to motivate a user to share their beliefs.

                                The difference between collective vs artificial intelligence is that you don’t need a perfect assessment with collective intelligence, you can preempt the infinite recursion by delegating to the human, but I think the only way to know if it works is to build the software.

                                I’ve described some interfaces to make the UX of this system tolerable, but that isn’t really the focus for proving that this thing will work. However, in the end it kind of is all about the UX (for the CI vs AI reasons mentioned in the previous paragraph), so that should maybe be a more prominent focus in any explanation?

                                Anyway, I suggest being less dismissive and more cooperative if you are actually interested in learning what others are thinking. I’m not sure how to help you understand me given your communication so far.

                    1. 3

                      No-code businesses are fundamentally about lock-in. If it’s easy to move your business to a more scalable solution as soon as your needs start to grow, they’ll lose all their whales and be out of business.

                      1. 2

                        That isn’t the call. OP simply wants you to have the ability to grow your no-code tool as you grow: build features the original engineers didn’t. It’s kind of reminiscent of Guy Steele’s talk on growing a language.

                        1. 2

                          The lock-in observation is somewhat accurate though, the trick is to offer favorable terms so that you’d rather be locked into this thing than that thing (basically you’d need a persuasive enough economic system so that people choose to migrate from the current one where it makes sense to lie, steal and manipulate to a different one where it makes sense to share, teach and cooperate).

                          It’s super weird to wake up to silent downvotes. Idk what I am doing that is so terribly wrong.

                          1. 1

                            It’s super weird to wake up to silent downvotes. Idk what I am doing that is so terribly wrong.

                            I think it’s because you didn’t explain what tala.saman is or how it relates to the OP, so it just looks like spam.

                            1. 1

                              Aha, that is totally fair. Usually you’d flag as spam then (this is an option in the UI and iirc last time I was flagged for something I was shown the reason).

                              I am also willing to discuss things with people… However, my patience for talking to the void has worn thin.

                      1. 8

                        I didn’t enjoy this.

                        In no small part because it reminds me of unfinished work, piling up behind me.

                        I think it’s important to drop those things you are least likely to do off the back of the cart, and let them lie. This piece didn’t seem to touch on strategies or other practices for improving the work, only pointed out that backlog is bad.

                        I recommend reading The Phoenix Project. It seemed to cover all this ground in a more compelling package.

                        1. 3

                          This piece didn’t seem to touch on strategies or other practices for improving the work, only pointed out that backlog is bad.

                          That wasn’t my reading at all–the size of the backlog is a different issue than work-in-progress.

                          It’s totally fine to have a big backlog and be chunking it up into little bits and chewing through it–this is in fact one of the keys to getting a backlog back under control if you can’t just axe things entirely.

                          It’s similarly terrible to have a “backlog” of zero with a thousand things in-progress.

                        1. 33

                          There is no such thing as no-code. Only weird-code.

                          1. 3

                            That’s not quite helpful. Weird gets stuff done.

                            Perhaps there is no such thing as no-code. Only code you can’t actually change.

                            1. 13

                              What I meant was: a given system either A) doesn’t have the flexibility to stand up in the real world, or B) does have that flexibility, and therefore has all of the complications that are part and parcel of programming, but because it tries to pretend not to be programming, it isn’t amenable to the usual programming tools and techniques. Of which we’ve come up with a few good ones over the past fifty or seventy years.

                          1. 2

                            Obscurity precludes common antifeatures from software builds. For example, Firefox’s EME DRM module does not exist at all in the ppc64le package.

                            Does that mean that one can’t stream from a service like Netflix or Hulu using this motherboard?

                            1. 14

                              Yes. It means that Netflix and Hulu prevent you from streaming on anything other than Windows, OSX, and mainstream Linux on mainstream hardware. Even choosing to use an insufficiently popular libc means you can’t load the DRM module.

                              1. 2

                                Yes. It means that Netflix and Hulu prevent you from streaming on anything other than Windows, OSX, and mainstream Linux on mainstream hardware.

                                … which is an utterly predictable outcome that was pointed out to the W3C while they were standardising EME. To no avail, sadly.

                                To make it super, super, clear: those Netflix and Hulu Web experiences are completely W3C standards compliant.

                                1. 1

                                  Though they do bless some embedded solutions, like the Roku boxes (which are Linux ARM boxes). You also can’t get full resolution except on such a box, presumably because of the stronger display snooping protections those boxes provide.

                              1. 7

                                The last two words are the crux of this issue: monorepos deliberately centralize power. The underlying assumption is that the centralized authority makes good decisions. If not, the same efficiency that enables codebase-wide changes magnifies the impact of a bad decision.

                                Not to turn this into a politics thread, but this is what I find tedious about discussions of “states rights vs federal power” or “mom’n’pops vs monopolies”: whether scale is good or bad depends on who has it! If we can assume the federal government is good and competent, we don’t need states rights. If we can’t, we do. In practice, it’s usually a mixture that varies by domain. The same with big business vs small business. There’s no one right level of centralization. When things are going well, centralize and spread the benefits. When things are going poorly, decentralize and cultivate anti-fragility.

                                1. 5

                                  When things are going well, centralize and spread the benefits. When things are going poorly, decentralize and cultivate anti-fragility.

                                  The key argument for decentralisation is risk management. Centralisation introduces single points of failure. If the central authority is performing badly, this affects everyone. If a decentralised authority is performing badly then this has a more limited effect on the whole system. For a decentralised system to exhibit the same failure mode, all of the individual elements need to fail simultaneously.

                                  The counter argument to this is that failures are often correlated and reasoning about them is hard. This is a big part of the reason that Katrina was such a disaster in the US, even though it did less damage than several other hurricanes: a lot of insurance companies thought that they were insuring a load of independent risks but they all had the same root cause. When Katrina hit, it turned out that they were insuring a lot of correlated risks and so several insurance companies went out of business and were unable to pay out. Often, a decentralised system fails to decentralise the risk and ends up just hiding it in a single point of failure that can trigger failure of all of the decentralised components.

                                  1. 4

                                    I work in distributed systems, so I definitely feel this, but it sort of sounds like you’re saying “the downside of decentralized systems is that some of them may actually be centralized systems” which is true, but that’s not exactly high praise for centralized systems. Indeed, we still go through the trouble of building decentralized/distributed systems when we care about reliability even though there’s a risk we might not get it right–we don’t throw up our hands and choose explicitly centralized architectures (which sort of seems to be what you’re implying in your “counter argument” paragraph; apologies if I’m misunderstanding).

                                    1. 2

                                      I work in distributed systems, so I definitely feel this, but it sort of sounds like you’re saying “the downside of decentralized systems is that some of them may actually be centralized systems” which is true, but that’s not exactly high praise for centralized systems.

                                      I’d frame it more as ‘the down side of decentralised systems is that they are very hard to get right and if you get them wrong then you get the down sides of both centralised and decentralised systems’. A good decentralised system is a big win because it has inherent fault tolerance and there are a few fantastic examples of this. GMail might be down, but email as a whole is almost never down. The single point of failure for email is DNS and that is also a distributed system (the single point of failure for DNS is distributed between the root names - a single domain’s authoritative servers might be down, but DNS as a whole keeps going).

                                      The behaviour of a centralised system depends on first-order properties of the system. The behaviour of a distributed system depends on emergent properties of the system and, in general, humans are spectacularly bad at reasoning about even second-order effects of a system.

                                    2. 4

                                      I think in many cases there’s a systematic bias in favor of pre-mature centralization, so you probably need a culture of aggressive decentralization to balance it out. The downside risks of decentralization are usually lower: suboptimal outcomes but still better than worst case.

                                      But looking at, e.g., the US response to COVID-19, in a hypothetical country the size and composition of the US but with a centralized authority it would have been easy to do better than we did. We had things like the COVID Tracking Project from The Atlantic because even basic data reporting standards weren’t there, so the private sector had to do it. The centralized agencies we did have, like the CDC, had no actual power to do anything, so they just did pointless turf guarding to try to keep the FDA from approving rapid tests, instead of actually protecting public health by putting out clear and realistic guidelines based on clearly gathered and presented data. You couldn’t actually restrict interstate travel, so there was just a blanket policy of states being laissez-faire, plus the governor being vocally pro- or anti- the restrictions that were or were not on the books and either way entirely unenforced. We sort of got the worst of both worlds with what was in practice a uniform policy with low coherence. Oh well.

                                      Going back to centralization vs decentralization, I think what that shows is you can’t just assume that because there are many actors there’s an actually decentralized system. The actors need to actually be able to act independently for it to be a decentralized system that works, and in the case of a virus that crosses state lines and borders, there wasn’t really independence.

                                    3. 3

                                      The tradeoff between redundancy and efficiency is just another way to state the CAP theorem. I think intelligence is the ability to balance this tradeoff optimally.

                                      1. 3

                                        Between CAP and Arrow’s theorem, math has proven that politics will never be permanently settled. :-)

                                        1. 1

                                          Arrow’s theorem is just an information theoretic result, the ballot needs to have more information than ordering (also intensity). However if you do that then the issue is combining probabilistic assessments from different sources about the same thing since you get a second order probability (and third order, …, and nth order). Still, we can do much better than we do today to keep the assessments in line with our observed reality.

                                          I do agree that it will never permanently settle (I call those kinds of steady states “utopia” or “dystopia”).

                                      2. 3

                                        If we can assume the federal government is good and competent, we don’t need states rights

                                        You’ve missed the point of the federation. It’s not just to curtail the power of the fed, but the states are supposed to be able to have different laws. The fed is supposed to keep the states working together, and deal with cross-state issues, not to enforce the views and laws of the most-populous states on the rest of the citizenry.

                                        1. 3

                                          Right, but the whole reason for that is that we assume that federal law may have flaws or issues “on the ground”. If we can just make excellent, globally valid federal law, obviously that’s easier than having states’ laws. The whole system is a hedge against the central power making bad laws.

                                          You don’t need experimentation if you already have the answers. (However note: many more people think they have the answers than do.)

                                          1. 3

                                            Making perfect laws is in the realm of the Philosopher King

                                          2. 1

                                            the states are supposed to be able to have different laws

                                            That is literally the whole point of my comment. 🙄

                                            1. 1

                                              It seems like the thrust of your comment was “centralize when things are going well and decentralize when things aren’t going well”, while it seems the parent is arguing that federation (federalism?) isn’t about centralizing as you describe. I thought both points were interesting, and I upvoted accordingly.

                                              1. 1

                                                Sophistifunk seems to have the view that the Federal government in the US should just do the minimum tasks to allow decentralized control by the states. My original argument was that I find that kind of fixed view about what the Federal government should or should not do silly, because whether that’s a good idea or not depends on whether decentralization is good or not, which is context dependent.

                                                I think in general a problem with political discussions is that people approach them with certain meta-points labelled as good or bad and then lose the ability to argue back to basics and talk about why the meta-points are good or bad. Federal decentralization is good if it results in good outcomes for governance. (I suppose you can also argue that it’s a tradition, and it’s good to stick to traditions unless they are obviously breaking down, but that’s just a way of saying you don’t want to debate the merits of the issue itself.) Just saying don’t “enforce the views and laws of the most-populous states on the rest of the citizenry” is not addressing anything. Do the most populous states want to enforce views that are good, bad, or neutral on the other states? If it’s bad or neutral then yeah, we want decentralization. If it’s good, we want them to force the small states to do the good thing! I can think of cases where the states enforced good views and cases where they enforced bad views, and if you sat in a US history class, you can too. It’s not worth debating at that level.

                                                1. 2

                                                  The difficulty is that of limiting principles. You can’t write a law at a federal level just because you think it should be done at that level, there must be a limiting principle beyond “well that’s obviously a step too far” that prevents future generations or politicians from abusing the abilities the previous group created.

                                                  “We can pass law to force X, how is that substantially different from Y? I think X was bad and Y is good, and we did it in that case…”

                                                  Limited zones of authority help prevent arbitrary and personal beliefs about what is good and what is not. Federal versus State versus County versus Local versus a lack of governance on a topic requires limiting principles up and down the chain.

                                                  Bringing it back to the original post, if all code is in a monorepo, then all people in that repo will at some point need to confront the decision of whether to make a change for someone else that, to that someone else, may feel capricious. Your social model among committers needs to be able to stand up to a total lack of limiting factors. Not every organization is ready for that. The organizations that are have to gatekeep heavily to maintain the culture.

                                                  That’s a cost. It might be acceptable, but it’s there.

                                          3. 1

                                            monorepos deliberately centralize power

                                            I don’t follow how this works if different parts of the repo have different owners who gate code submission.

                                            1. 1

                                              If you have a polyrepo and the CTO wants to institute a new rule like “we don’t use library A in this company” it’s a little tricky because you need N teams to add a thing that enforces it. It’s much simpler with a monorepo because there’s a single chokepoint.
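
                                              As a sketch of what that single chokepoint can look like: a repo-wide CI check that fails the build when any file imports a banned library. The paths and the name “library_a” are stand-ins, not anything from a real codebase.

```python
# Hypothetical repo-wide check: fail CI if any Python file in the
# monorepo imports a banned library ("library_a" is a stand-in name).
import pathlib
import re

BANNED = re.compile(r"^\s*(?:import|from)\s+library_a\b")

def find_violations(root="."):
    """Return (path, line number) pairs for every banned import under root."""
    violations = []
    for path in pathlib.Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if BANNED.match(line):
                violations.append((str(path), lineno))
    return violations

# In CI you would exit non-zero when find_violations() is non-empty,
# which enforces the rule for every team in one place.
```

                                              With a polyrepo, the equivalent rule means wiring a check like this into N separate pipelines and keeping them in sync.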

                                          1. 3

                                            This presentation might work for a handful of commits but how does it fare on say the linux kernel or the nixpkgs repo? Is it able to extract some understanding from that torrent of information?

                                            1. 1

                                              So for now the tool is really designed for a reasonable number of commits on a few branches (this is mentioned in my article and the docs). There are 2 limiting factors for animations at this point:

                                              1. Manim is generating video, so each element (commit circle, arrow, etc) adds time to generate the output. I’m sure I could optimize it a lot better than I am now, but large commit sequences would take a long time to generate video for. One option for that is to add a feature to export one image of the final result instead of animating, but even so there is only so much you can squeeze in 1 static image. Maybe some sort of scrollable, zoomable image output option would be cool for large repos…

                                              2. Repos with large and complex branching structures can have many criss-crossing relationships between commits, so some TLC would need to go into making that look decent. The current logic I’m using wouldn’t scale well to massive and complex repos.

                                              1. 2

                                                If you want to try an unreasonable number of commits on a very reasonable number of branches, check out void-packages

                                                1. 1

                                                  Haha thanks for that I’ll take a peek O.o

                                            1. 4

                                              You could also use edn… easy to write (no commas!) and it has libraries in many languages

                                              1. 6

                                                I looked into EDN, it seems like a nice enough object notation. But it’s not trying to be an especially compact serialization format, so it doesn’t really have much overlap with JCOF, whose main purpose is being a compact serialization format at the cost of being hard for humans to read and edit.

                                                1. 6

                                                  So if it’s harder for humans to read and edit, why not go for full capnproto or protobuf which is efficient and tools exist to make it easy to read and edit? What’s the target need?

                                                  1. 6

                                                    Cap’n Proto and Protobuf both use a schema. Whether to use a serialization format with or without a schema is a big complicated discussion, but if you want a schemaless format, I’m not aware of one that’s smaller than JCOF.

                                                    1. 5

                                                      CBOR is such a format. Using the example on the page, just a base CBOR encoding is 198 bytes, while 136 bytes is possible if you use the string reference extension. All without a schema.

                                                      1. 7

                                                        I wasn’t that impressed with CBOR without the string reference extension.

                                                        I had to search quite a bit to find a CBOR implementation which supports string references, none of the JavaScript ones did but I found a python library which does. CBOR with string references is clearly more compact than CBOR without, but it’s still usually not as compact as JCOF:

                                                          8315 circuitsim.json
                                                          2852 circuitsim.cbor (0.343x)
                                                          2093 circuitsim.jcof (0.252x)
                                                         51949 comets.json
                                                         35639 comets.cbor (0.686x)
                                                         37480 comets.jcof (0.724x)
                                                         37996 madrid.json
                                                         13411 madrid.cbor (0.353x)
                                                         11959 madrid.jcof (0.315x)
                                                        244975 meteorites.json
                                                        119415 meteorites.cbor (0.487x)
                                                         87083 meteorites.jcof (0.355x)
                                                         56828 pokedex.json
                                                         30909 pokedex.cbor (0.544x)
                                                         23140 pokedex.jcof (0.407x)
                                                        219635 pokemon.json
                                                         60249 pokemon.cbor (0.274x)
                                                         39650 pokemon.jcof (0.181x)
                                                           299 tiny.json
                                                           144 tiny.cbor (0.482x)
                                                           134 tiny.jcof (0.448x)
                                                        

                                                        I suppose if you don’t mind a slightly bigger size, a binary format that isn’t human-writable, and an extension that doesn’t seem well supported by many CBOR libraries, but want an older, more established serialization format with libraries in more languages, then CBOR with the string reference extension is a good choice. For a lot of situations, that’s gonna be the right trade-off.
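
                                                        To make the size argument concrete, here’s a rough illustration (this is not the actual CBOR stringref or JCOF wire format) of why string references help: repeated keys and values get stored once in a table and referenced by index thereafter.

```python
# Toy demonstration of string interning for serialization: repeated
# strings (like JSON object keys) are stored once and referenced by index.
import json

records = [{"name": "pikachu", "type": "electric"},
           {"name": "eevee", "type": "normal"}] * 50

plain = json.dumps(records, separators=(",", ":"))

table, index = [], {}

def ref(s):
    """Intern s in the shared string table and return its index."""
    if s not in index:
        index[s] = len(table)
        table.append(s)
    return index[s]

packed = json.dumps(
    {"strings": table,
     "rows": [[ref(x) for kv in r.items() for x in kv] for r in records]},
    separators=(",", ":"))

print(len(packed), "<", len(plain))  # the packed form is smaller
```

                                                        Only six distinct strings exist across all 100 records, so the reference-based form collapses most of the payload into small integers.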

                                                  2. 5

                                                    Not necessarily advocating, but for future reference: There are two compact serialization formats for EDN.

                                                    First is Transit: https://github.com/cognitect/transit-format Its goal is to be an efficient transport encoding, but not necessarily ideal for data-at-rest.

                                                    Second is Fressian: https://github.com/Datomic/fressian/wiki It’s very similar to Transit, but intended for durable storage.

                                                1. 1

                                                  I’m trying to decide if I can use the pretty parts of the Ubiquiti user interface - like quickly seeing clients on the network, DHCP leases, wifi availability scheduling, etc. while at the same time not participating in their cloud offerings and explicitly blocking their hostnames from talking to my network.

                                                  1. 1

                                                    It’s very easy to do, at least on NixOS, through the unifi service.

                                                    1. 1

                                                      I think you can, from my own experience. I have not done exactly as you say but I have heard they are moving away from even requiring a cloud account at all.

                                                    1. 1

                                                      Well, last week I did exactly what a manager told me to do even though I knew it was probably insufficient for a coming flight test, because I was sick of them assuming that the Magical Flight Test Fairies would take care of everything without them doing anything. So that flight test went even worse than I expected, said manager has done what they should have done early last week and said “please be my magical flight test fairy and make sure everything that needs doing gets done”, and so my week has just filled up. Guess I brought this on myself, really.

                                                      1. 1

                                                        Ooh what kind of aerospace?

                                                        1. 2

                                                          Drones! And sometimes bigger things.

                                                          1. 1

                                                            Well you have now gained the ability to say it will take X long to do the magic…

                                                      1. 3

                                                        Why the choice to use different variables than the nomad cli tool itself?

                                                        1. 2

                                                          Good question…at one point I had conflicts between them when wanting to point at a different cluster, so I decided to make an explicit config set specifically for wander. I can see how it might be a bit annoying given that 90% of users will have the same values for NOMAD_ADDR/WANDER_ADDR and NOMAD_TOKEN/WANDER_TOKEN.

                                                          I’d be happy to change this in a backwards-compatible way. If you think it’d be valuable, do you mind throwing a thumbs up and/or comment on the issue I created for it here?

                                                          1. 1

                                                            I also had this, then just ran it locally with WANDER_ADDR=$NOMAD_ADDR wander to try it out. I did expect it to pick up the nomad envars though, other tooling I’ve used around nomad (mostly grown internally to be fair) reuses those envars.

                                                            1. 2

                                                              Update: wander now uses NOMAD_ADDR and NOMAD_TOKEN (with fallbacks and warnings on the old values) in v0.3.1
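
                                                              (wander itself is written in Go; this Python sketch just illustrates the “new variable with deprecation fallback” pattern the update describes, using the variable names from the thread.)

```python
# Sketch of reading a renamed environment variable with a fallback to
# the old name, warning when the deprecated name is still in use.
import os
import sys

def env_with_fallback(new_name, old_name, default=None):
    """Prefer new_name; fall back to old_name with a warning; else default."""
    if new_name in os.environ:
        return os.environ[new_name]
    if old_name in os.environ:
        print(f"warning: {old_name} is deprecated, use {new_name}",
              file=sys.stderr)
        return os.environ[old_name]
    return default

addr = env_with_fallback("NOMAD_ADDR", "WANDER_ADDR", "http://localhost:4646")
```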

                                                              1. 2

                                                                Ok good to know, thank you. I’ll likely complete that issue and set the prefixes to match :)

                                                                Let me know of any other joys/pains using the tool, either through issues or email which can be found on my github

                                                          1. 15

                                                            Not using a cloud service to host your app. Examples are Heroku, Vercel, Netlify and Fly.io. Most product teams will have over-architected their solution if they have to have an ops or infra team.

                                                            I have to disagree. If you are pre-revenue and will be there for a while, you can’t afford an accidental scale-up. Now maybe the bill can be kept small enough with care, but I look askance at anybody saying Just Use The Cloud without consideration for actual business needs.

                                                            More to the point of the entire article – Yes, Kubernetes is a sign you are dealing with a complicated deploy environment. Eventually your deployments get to the point of “I can’t spend person-hours on which service is on which machine” at which point something like K8s or K3s or Nomad will help. People get there at different times. If you have a lot of services and not many people, not because you built microservices but because you legitimately have a bunch of services, you might get there sooner than others.

                                                            The reason I think Kubernetes isn’t necessarily premature optimization is the one core technology that every company, no matter how small (*), should be using: configuration management. Configuration management requires putting energy where your product is not. It’s a backend tooling factor that doesn’t matter to your customers. If you have a library in Python that you want to use, a library in C, and a library in Javascript, and each library fills a unique niche in your product, you absolutely should use all three languages in your product. Not because it’s a good idea to go polyglot, but because you want to get the product out the door with the minimum amount of effort, and those libraries help you avoid effort. Configuration management is somewhat like that. It takes effort up front, but it helps you avoid major classes of effort later. It allows you to orchestrate a single node, and if that node dies, you can be back up and running a few hours later.

                                                            Configuration management lets you say “I have a box over SSH. Now, I DECLARE IT HAS THIS SERVICE” and now it is so. This works for a good long while. It is important and not premature optimization.
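
                                                            That “declare it, and now it is so” idea boils down to convergence: you state the desired end state, and the tool computes and applies only the changes needed to reach it. A toy sketch (hypothetical resource model, not any real tool’s API):

```python
# Toy model of declarative configuration management: a converge step
# diffs desired state against current state and applies only the delta.

desired = {"nginx": "running", "postgres": "running", "telnetd": "absent"}

def converge(current, desired):
    """Mutate `current` toward `desired`, returning the actions taken."""
    actions = []
    for service, state in desired.items():
        if current.get(service) != state:
            actions.append(("set", service, state))
            current[service] = state
    return actions

current = {"nginx": "stopped", "postgres": "running"}
print(converge(current, desired))
# Running converge again is a no-op: the declaration is idempotent.
print(converge(current, desired))  # []
```

                                                            The second run returning nothing is the whole point: you can re-apply the declaration forever, or to a freshly rebuilt box, and end up in the same place.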

                                                            K8s and friends allow you to say “I have a fleet, and I declare that somewhere in it I have a service,” as well as opening you up to job management tools that you don’t necessarily have across multiple computers easily. All those cloud services have a solution that falls under “and friends,” allowing cloud providers to say “You have a service. It is running somewhere. The routes are there, and I don’t care where ‘there’ is.”

                                                            Now maybe you shouldn’t be running it. I personally never want to run k8s. Running it is too much wasted work (for me, in my circumstances, with my needs). But saying it’s a red flag for premature optimization is simply missing that it’s solving an entire class of problems so you never need to worry about them again.

                                                            (*): If you are a small company and have one computer and can afford to take a week to rebuild it, or have 3 (magic number) or fewer droplets, and can afford to take a week each if they get lost in a datacenter failing, then you don’t need to be running configuration management. But most of the time you want to build your thing, and configuration management helps you get the computers out of the way.

                                                            1. 7

                                                              I recently attended a talk by Eric S. Raymond at the Southeast Linuxfest this year, he’s started a project that seems similar in goal to what this is trying to achieve:

                                                              https://gitlab.com/esr/shimmer

                                                              I’m all for de-centralized software repos, and federation amongst them. I see gitea is on the list for implementing it, it’s my go-to and favorite self-hosted VCS.

                                                              1. 2

                                                                His talk is now on YouTube. https://youtube.com/watch?v=0HMghqwa6Gs

                                                                1. 1

                                                                  That’s awesome, and you can just barely see me; I’m to his right near the wall, sitting in the front row. Thanks for posting this!

                                                                  1. 1

                                                                    We may have spoken at the conference!

                                                              1. 4

                                                                I have used the c920 on a mac for years, and it has always been overexposed. I’m not sure whether it’s Logitech or Apple or both to blame here. The solution for me is to install the app “Webcam Settings” from the Apple store (yeah it’s a generic name), which lets you tweak many settings on webcams and save profiles for them. It’s not perfect, but I already have the camera and it’s significantly easier to work with than hooking my DSLR up.

                                                                1. 5

                                                                  The equivalent to “Webcam Settings” on Linux is guvcview. I have a Microsoft LifeCam Studio and have to use this tool to adjust the exposure when I plug it into a new machine. Thereafter it persists… somehow.

                                                                  1. 6

                                                                    Or qv4l2, depending on your taste — but one advantage of qv4l2 is that it lets you set the controls even while another app has the camera open, whereas guvcview wants the camera for its own preview window, and will decline to work at all if it can’t get the video stream.

                                                                    1. 3

                                                                      Oh very nice, qv4l2 is exactly what I needed to adjust focus during a meeting. Thank you!

                                                                      1. 2

                                                                        update: someone anon-emailed me out of the blue to mention that guvcview has a -z or --control-panel option that will open the control panel without the preview window, letting you do the same thing as qv4l2. So use the one that makes you happy.

                                                                    2. 3

                                                                      Congrats, you are working around a hardware problem with a software patch.

                                                                      Me, I don’t care enough to spend the effort to get the software working. My audio input is an analog mixer, my audio output the same, and eventually my camera will be a DSLR because that way I don’t twiddle with software for something that really should just work on all my machines without me caring.

                                                                      Different tradeoffs in different environments.

                                                                      1. 8

                                                                        It’s a driver settings tool, not a patch. It doesn’t do post-processing. Every OS just fails to provide this tool; not sure why, possibly because webcam support is spotty and they don’t want to deal with user complaints. Some software (like Teams) includes an interface for the settings. Changing it in Teams will make system-wide changes. Others (like Zoom) only have post-processing effects, and these are applied after the changes you made in Teams.

                                                                        1. 2

                                                                          I can confirm this tool definitely affects the camera hardware’s exposure setting. I’ve used it for adjusting a camera that was pointed at a screen on a remote system I needed to debug. The surrounding room was dark (yay timezones!) so with automatic exposure settings it was just an overexposed white blur on a dark background. This tool fixed it. There’s no way this would have been possible with just post-processing.

                                                                          (No, VNC or similar would not have helped, as it was an incompatibility specific to the connected display, so I needed to see the physical output. And by “remote” I mean about 9000km away.)

                                                                          1. 4

                                                                            a camera that was pointed at a screen on a remote system

                                                                            Sounds like you had some fun

                                                                            1. 1

                                                                              That’s definitely one way of describing it! Not necessarily my choice of words at the time.

                                                                          2. 1

                                                                            Oh, Teams can do this? Thanks, I’ll have to check that out as an alternative.

                                                                          3. 6

                                                                            The DSLR/mirrorless ILC (interchangeable lens camera) route is great for quality but it has its risks. I started off with a $200 entry level kit and now I’ve got two bodies, a dozen lenses, 40,000 pictures, and a creatively fulfilling hobby.

                                                                            1. 2

                                                                              don’t forget the tripod! I like landscape photography, and a good tripod was surprisingly (> $200) expensive.

                                                                              1. 2

                                                                                So the risks are spending too much money?

                                                                              2. 1

                                                                                I fail to see how you’re going to use a DSLR as a webcam without “twiddling with software”. Sure, you’ll have a much better sensor, lens and resulting image quality. But I’ve yet to see a setup (at least with my Canon) that doesn’t require multiple pieces of software to even make it work as a webcam. Perhaps other brands have a smoother experience. I still question how this won’t require at least as much software as my route.

                                                                                There’s also the physical footprint that matters to me. A webcam sits out of the way on top of my monitor with a single cable that plugs into the USB on the monitor. A DSLR is obviously nowhere near this simple in wiring or physical space. The webcam also has a pretty decent pair of microphones that work perfectly for my quiet home office.

                                                                                Are either the audio or video studio quality? Nope, but that’s completely fine for my use case interacting with some coworkers on video calls.

                                                                                1. 1

                                                                                  My perception has been that a DSLR with HDMI output lets you capture the HDMI stream and present it as a webcam feed.

                                                                                  The other things that a camera does can be tweaked with knobs instead of software.

                                                                            1. 2

                                                                              Reconnecting to the hivemind. Recently had the longest break from internet since I was 9 years old.

                                                                              1. 1

                                                                                How long this time?

                                                                                1. 1

                                                                                  Few months >.<

                                                                                  1. 1

                                                                                    Congratulations!

                                                                              1. 18

                                                                                The problem, of course, was the chance of collision.

                                                                                There’s a tool named ‘,’ in the nix community https://github.com/nix-community/comma ;)

                                                                                1. 20

                                                                                  I must say, I think that’s a terrible name for a tool. No software intended for wide distribution should use single character binary names, those should be reserved for end-users to use for shell aliases and the like.

                                                                                  1. 9

                                                                                    I guess you are unfamiliar with /usr/bin/[ then. I think it’s been around since the 80s (maybe earlier).

                                                                                    1. 15

                                                                                      I am familiar with it.

                                                                                      Do I think it’s great engineering? No, not really. Do I accept its existence? Yes, of course.

                                                                                      1. 5

                                                                                        Things that are old enough get a pass on modern sensibilities.

                                                                                      2. 7

                                                                                        I’d agree in general, but in this case it’s quite a niche product and in the end it’s the user who’s deciding what to install (under which name)

                                                                                        1. 4

                                                                                          I’d be inclined to agree in principle, but can’t help but feel that this presents fantastic UX to the user. If I ever implemented an equivalent in Guix, it’d likely use the ',' character too, shortening 'guix shell firefox -- firefox' to just ', firefox'. 70% shorter, at the cost of an intentional collision in violation of rational principles. Namespacing these (even just to ,nix and ,guix) quickly cuts into the efficiency and elegance of the shortcut, highlighting a lack of support in the shell for some form of modality.

                                                                                          Perhaps flags preceding commands could apply broad modes to that invocation, like '-, firefox' or '-C firefox' to containerize it. It feels appropriate (to me) to have some modal keybinds, a la Vim or (more so) Emacs prefixes, which might work in the context of an application runner like dmenu or Rofi, but presents its own challenges on the command line in regards to pipes (which otherwise work with 'guix shell ...', and presumably with comma).

                                                                                          Which is all to say I agree, and that perhaps allowing the user to establish the shell alias to ',' themselves is the best design we have right now, but there’s an unaddressed design space for broadly-applicable per-invocation command-line switches.

                                                                                          1. 2
                                                                                            $ which [
                                                                                            /bin/[