Threads for duncaen

  1. 2

    am i the only one thinking “or, you could just get more reliable networking gear” ?

    1. 2

      That would be half the fun and double the price and totally not worth it if you have just one uplink anyways.

      1. 2

        Yeah, the most likely scenarios for a residential house are:

        • power loss to the building (you might have a UPS, but you are unlikely to have an autostarting generator with an automatic transfer switch)

        • upstream ISP loss (fiber is pretty reliable, but a truck or a backhoe can happen to anyone)

        • power supply failure on a machine with only one power supply (buy more expensive hardware and probably lose some efficiency)

        In the last 20 years, I have experienced all of these – mostly while I was at home to fix the things that were in my power.

      2. 2

        I’m fascinated by these Stapelberg posts, but yes, not doing any of that tends to be the easier path.

        Note that this is all in support of

        For the guest WiFi at an event that eventually fell through, we wanted to tunnel all the traffic through my internet connection via my home router. Because the event is located in another country, many hours of travel away, (…)

        … where one might also consider, say, not tunneling all guest WiFi traffic through a home router “hours of travel away”. Or having a fail-over scenario to some gateway at a suitable hosting location.

        1. 4

          Oh, I also had a fail-over scenario prepared with another gateway on a dedicated server in Germany.

          But, tunneling through a residential connection is preferable for residential use-cases like this one :)

      1. 11

        This missed my favorite technique, which is to ls > copy_files.sh and then edit the script in vim. It’s not very impressive, but - at least for me - much faster than writing the code presented in this article. :)
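
        For concreteness, a quick sketch of that workflow (the backup/ destination and the substitution are only an illustration, not from the article):

          ls > copy_files.sh          # copy_files.sh itself shows up in the listing; delete that line while editing
          vim copy_files.sh           # e.g. :%s/.*/cp -- & backup\//  turns each filename into a cp command
          sh copy_files.sh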

        1. 11

          I do that with vidir: it opens vim with a number and a filename per line; you can change the filenames, and when you exit vim it renames the changed files, looking up the old name by the number.

          1. 1

            See also my favourite tool, qmv from renameutils. It sounds much the same as your vidir, but it just opens whatever your $EDITOR is and doesn’t add line numbers.

            1. 1

              Wow, that’s almost what I wrote in Python: dump file lists into lines, and rename or delete from the diff of the output. Sometimes I need to create hardlinks or move files around, and it seems vidir only does renames?

              1. 2

                Deleting and moving files work too; hardlinks don’t.

                $ tree
                .
                ├── a
                ├── b
                └── c
                
                0 directories, 3 files
                $ vidir
                1   ./foo/bar/a
                3   ./foo/c
                :wq
                $ tree
                .
                └── foo
                    ├── bar
                    │   └── a
                    └── c
                
                2 directories, 2 files
                
            2. 3

              same here

                ls *.txt | vim -
              
                abc.tx -> mv abc.tx abc.txt
              
                :w !sh
                :q!
              
              1. 1

                Incidentally, I wrote a little Vim plugin earlier this week to streamline that workflow a bit, which makes editing that kind of stuff in Vim a bit easier.

                1. 1

                  Nice method! I use ranger for bulk-renaming files in a vim-style workflow. However, I frequently find myself installing ranger for that single purpose. Your method seems more easily accessible.

                1. 17

                  Unfortunately, OpenRC maintenance has stagnated: the last release was over a year ago.

                  I don’t really see this as a bad thing.

                  1. 12

                    Also, wouldn’t the obvious choice be to pick up maintenance of OpenRC rather than writing something brand new that will need to be maintained?

                    1. 10

                      There is nothing really desirable about openrc, and it simply does not support the required features like supervision. Sometimes it’s better to start fresh, or in this case with the already existing s6/s6-rc, which is built on a better design.

                      1. 3

                        There is nothing really desirable about openrc

                        I’d say this is a matter of opinion, because there’s inherent value in simplicity and systemd isn’t simple.

                        1. 5

                          But why compare the “simplicity” to systemd instead of something actually simple? Compared to OpenRC’s design choices, with all its shell wrapping, a simple supervision design with a way to express dependencies outside of the shell script is a lot simpler. The daemontools-like supervision systems have no boilerplate in shell scripts and provide good features: they track PIDs without PID files and therefore reliably signal the right processes, they are able to restart services if they go down, and they provide a nice and reliable way to collect the stdout/stderr logs of those services.

                          Edit: this is really what the post is about, taking the better design and making it more user friendly and implementing the missing parts.
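
                          For illustration, a minimal runit/s6-style run script sketch for a hypothetical daemon called exampled (the name and flag are placeholders); the supervisor starts this script and owns the exec’d PID directly, so no PID file is needed:

                            #!/bin/sh
                            # send stderr to stdout so the paired log service captures both streams
                            exec 2>&1
                            # exec the daemon in the foreground; the supervisor now tracks this PID
                            exec exampled --foreground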

                      2. 3

                        the 4th paragraph

                        This work will also build on the work we’ve done with ifupdown-ng, as ifupdown-ng will be able to reflect its own state into the service manager allowing it to start services or stop them as the network state changes. OpenRC does not support reacting to arbitrary events, which is why this functionality is not yet available.

                        also, the second to last graf

                        Alpine has gotten a lot of mileage out of OpenRC, and we are open to contributing to its future maintenance while Alpine releases still include it as part of the base system, but our long-term goal is to adopt the s6-based solution.

                        so, they are continuing to maintain OpenRC while alpine still requires it, but it doesn’t meet their needs, hence they are designing something new

                      3. 3

                        I was thinking the same thing.

                        I have no sources, but when was the last time OpenBSD or FreeBSD had a substantial change to their init systems?

                        I don’t know enough to know why there’s a need to iterate so I won’t comment on the quality of the changes or existing system.

                        1. 13

                          To my knowledge, there’s serious discussion in the FreeBSD community about replacing their init system (for example, see this talk from FreeBSD contributor and previous Core Team member Benno Rice: The Tragedy of systemd).

                          And then there’s the FreeBSD-based Darwin, whose launchd is much more similar to systemd than to either BSD init or SysVinit to my knowledge.

                          1. 4

                            this talk from FreeBSD Core Team member Benno Rice: The Tragedy of systemd).

                            This was well worth the watch/listen. Thanks for the link.

                          2. 8

                            I believe the last major change on FreeBSD was adding the rc-order stuff (from NetBSD?) that allowed expressing dependencies between services and sorting their launch order so that dependencies were fulfilled.

                            That said, writing a replacement for the FreeBSD service manager infrastructure is something I’d really, really like to do. Currently devd, inetd, and cron are completely separate things and so you have different (but similar) infrastructure for running a service:

                            • At system start / shutdown
                            • At a specific time
                            • In response to a kernel-generated event
                            • In response to a network connection

                            I really like the way that Launchd unifies these (though I hate the fact that it uses XML property lists, which are fine as a human-readable serialisation of a machine format, but are not very human-writeable). I’d love to have something that uses libucl to provide a nice composable configuration for all of these. I’d also like an init system that plays nicely with the sandboxing infrastructure on FreeBSD. In particular, I’d like to be able to manage services that run inside a jail, without needing to run a service manager inside the jail. I’d also like something that can set up services in Capsicum sandboxes with libpreopen-style behaviour.

                            1. 1

                              I believe the last major change on FreeBSD was adding the rc-order stuff (from NetBSD?) that allowed expressing dependencies between services and sorting their launch order so that dependencies were fulfilled.

                              Yep, The Design and Implementation of the NetBSD rc.d system, Luke Mewburn, 2000. One of the earlier designs of a post-sysvinit dependency based init for Unix.
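
                              For reference, the dependency metadata in that scheme lives in comments that rcorder(8) parses; a minimal FreeBSD-style rc.d sketch for a hypothetical exampled service:

                                #!/bin/sh
                                # PROVIDE: exampled
                                # REQUIRE: NETWORKING syslogd
                                # KEYWORD: shutdown

                                . /etc/rc.subr

                                name="exampled"
                                rcvar="exampled_enable"
                                command="/usr/local/sbin/exampled"

                                load_rc_config $name
                                run_rc_command "$1"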

                              1. 1

                                I’ve been able to manage standalone services that run inside a jail, but it’s more than a little hacky. For fun a while back, I wrote a finger daemon in Go, so I could keep my PGP keys available without needing to run something written in C. This runs inside a bare jail with a RO mount of the homedirs, not much else, and lots of FS restrictions. So jail.conf ended up with this in the stanza:

                                finger {
                                        # ip4.addr, ip6.addr go here; also mount and allow overrides
                                        exec.start = "";
                                        exec.stop = "";
                                        persist;
                                        exec.poststart = "service fingerd start";
                                        exec.prestop = "service fingerd stop";
                                }
                                

                                and then the service file does daemon -c jexec -u ${runtime_user_nonjail} ${jail_name} ${jail_fingerd} ...; the tricky bit was messing around inside the internals of rc.subr to make sure that pidfile management worked correctly, with the process-finding logic handling the fact that the jail is not “our” jail:

                                jail_name="finger"
                                jail_root="$(jls -j "${jail_name}" path)"
                                JID=$(jls -j ${jail_name} jid)
                                jailed_pidfile="/log/pids/fingerd.pid"
                                pidfile="${jail_root}${jailed_pidfile}"
                                

                                It works, but I suspect that stuff like $JID can change without notice to me as an implementation detail of rc.subr. Something properly supported would be nice.

                              2. 2

                                I think the core issue is that desktops have very different requirements than servers. Servers generally have fixed hardware, and thus a hard-coded boot order can be sufficient.

                                Modern desktops have to deal with many changes: USB disks being plugged in (mounting and unmounting), Wi-Fi going in and out, changing networks, multiple networks, Bluetooth audio, etc. It’s a very different problem.

                                I do think there should be some “server only” init systems, and I think there are a few meant for containers but I haven’t looked into them. If anyone has pointers I’d be interested. Desktop is a complex space but I don’t think that it needs to infect the design for servers (or maybe I’m wrong).

                                Alpine has a mix of requirements I imagine. I would only use it for servers, and its original use case was routers, but I’m guessing the core devs also use it as their desktops.

                            1. 7

                              Alpine? It doesn’t even have the same libc? But ok…

                              Statically linked glibc would still require dynamic libraries for all the things that might use NSS. Using alpine with its musl toolchain is a simple way to get fully static binaries.
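
                              A rough sketch of that workflow inside an Alpine environment (the package name and hello.c are placeholders):

                                # build-base pulls in gcc, musl-dev and friends
                                apk add build-base
                                cc -static -o hello hello.c
                                file hello    # should report "statically linked"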

                              1. 4

                                The optional -- stops option parsing, i.e. it allows arguments to start with -. In the usage, this should come after options and before arguments.

                                1. 2

                                  Sorry. I didn’t quite grasp that. You mean in the help text?

                                  1. 2

                                    Yes in the help screenshot: https://github.com/mr-karan/doggo/blob/main/www/static/help.png

                                    doggo [query options] [--] [arguments...] would be more correct; the other types of arguments are also not listed.
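
                                    A generic shell illustration of what -- buys you (not doggo-specific):

                                      # without --, a leading dash is parsed as an option
                                      rm -- -oldfile.txt   # removes the file literally named "-oldfile.txt"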

                                1. 3

                                  Both uses of stat and date are not portable.
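
                                  For context, the classic GNU vs. BSD divergence these commands usually trip over (illustrative invocations, not necessarily the ones in the script):

                                    stat -c %Y file       # GNU coreutils: mtime as epoch seconds
                                    stat -f %m file       # BSD/macOS equivalent
                                    date -d @1600000000   # GNU: format an epoch timestamp
                                    date -r 1600000000    # BSD/macOS equivalent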

                                  1. 1

                                    Isn’t this just basically measuring network speed? Even if you have a good Gbit connection, surely you’re mostly measuring how good the mirrors/CDN are. In that sense it would make more sense to actually measure with a slow internet connection to level the playing field.

                                    1. 3

                                      I haven’t tried them all, but apt seems to spend significant time “Reading package lists” from disk, even on an SSD.

                                      When it gets index updates, it seems to read them one after another. These checks are mainly limited by latency, not bandwidth, so they’d benefit from being parallelized.

                                      And the entire installation process is serial.

                                      1. 3

                                        There are many other variables, like the number of not-yet-installed dependencies that need to be downloaded and installed, and the repository metadata size. I don’t think getting rid of the mirror variable by hosting your own mirror for all the package managers will significantly change the results of the test.

                                        The test case of “fetch + install” is largely in favor of apk because it’s the only package manager which fetches and unpacks the archive at the same time. All other package managers will write packages into the cache, and after everything is downloaded they will start to read the archives again to unpack them. So you end up with apk being bottlenecked by the download speed and disk write speed, while all other package managers (I don’t know enough about nix so I’m excluding it) are additionally bottlenecked by the disk read and write speed (ignoring the page cache, which could still have the package files written into the cache in memory).

                                        Changing the test case to use the local cache instead of fetching the packages will probably even out apk, pacman and nixos, but apt and dnf are still going to be a bit slower because of their “less minimal” design and repository metadata size.

                                      1. 7

                                        Overly dramatic. And I do like getting a free t-shirt.

                                        1. 15

                                          Not at all. I have a few sub-500 stars projects, and they too get a significant amount of spammy PRs. It’s literally just “Adding a gif to the readme” or something similarly ridiculous.

                                          And I can’t begin to imagine how much crap high-profile project maintainers have to deal with. This post is fully warranted.

                                          1. 10

                                            Interestingly, I’ve gotten zero thus far. I wonder if tech choice and/or the kind of project is a factor here?

                                            1. 2

                                              I should’ve been clear—I was describing my experience from last year. After all, it’s only Oct 1st. Give it time. :)

                                              I’ve gotten two so far for Hacktoberfest 2020, by the way.

                                              1. 2

                                                I didn’t get any last year either; or any year. Actually, this is the first time I heard of the entire thing 🤔

                                            2. 4

                                              To add some context, Vaelatern is a Void Linux contributor and we encouraged hacktoberfest PRs: https://twitter.com/VoidLinux/status/1179006377219506177.

                                              There is a spam problem, but for some reason we were not the target, maybe because it’s not python, js or html.

                                              Not sure about the stars, but we are now at 1.1k and I think we already had around 500 in 2018. I think we were easily one of the top non-spammy “add your name to a file” repositories.

                                            3. 8

                                              I’d love actual contributions. Even if they were just typo fixes, or I had to guide a novice how to improve the code.

                                              But the only “contributions” I’ve got were pure useless garbage. Someone added “Requires Windows 7 or higher” (with worse spelling) to my Mac app. They didn’t even bother to read a single line of the README they were changing.

                                              1. 3

                                                I thought so too, but take a look at this: https://github.com/search?o=desc&q=is%3Apr+%22improve+docs%22&s=created&type=Issues

                                                Try “amazing project” too. It’s an onslaught. I don’t remember it ever being this bad, but perhaps it was for the more popular repos.

                                                1. 2

                                                  This small 0-star project got three “improve docs” PRs in the last 40 minutes from three different accounts (just noticed it was mentioned several times on the first page): https://github.com/tehpug/TehPUG/pulls?q=is%3Apr+is%3Aclosed

                                                  Then I clicked another random project from that list, and this 4-star project has four pages of PRs spammed: https://github.com/Moeplhausen/SunknightsWebsite/pulls – literally those entire four pages are full of this idiocy, there’s not one legitimate PR in there. This is just idiocy.

                                                  I don’t know why these projects get so many; nothing about those repos or the accounts/organisations they belong to seems well-known in the slightest; they’re just typical small projects people uploaded for code hosting. As I mentioned in my other comment, I’ve gotten zero PRs thus far in spite of having several >100 star repos. If these repos are targeted (and I think that’s an appropriate term here) then why aren’t mine? 🤔

                                                  What a clusterfuck.

                                                  1. 3

                                                    I would guess they are now targeting small/inactive repositories in the hope of maintainers not flagging their PRs within the 7 day period in which “invalid” flags are checked.

                                                    They could instead make PRs to their own repos or create organizations without bothering other projects.

                                                    1. 1

                                                      Right; that makes sense. I assumed you needed to actually have the PR merged for it to count, but it turns out you just need to make it.

                                                      As for making your own repo, the site mentions:

                                                      Bad repositories will be excluded. In the past, we’ve seen many repositories that encourage participants to make simple pull requests – such as adding their name to a file – to quickly gain a pull request toward completing Hacktoberfest. [..] We’ve implemented a system to block these repositories, and any pull requests submitted to such repositories will not be counted.

                                                2. 3

                                                  I archived a repo two days back to avoid this spam (and I’m not actively working on it anyway)

                                                  https://github.com/learnbyexample/Python_Basics/pulls?q=is%3Apr+is%3Aclosed

                                                  1. 1

                                                    man I’d feel bad if I was abhinav-TB and created a PR for some project only to have it closed without comment a mere 5 days later

                                                    1. 1

                                                      Maybe if they had read the readme first, or explained why they were making a pointless PR, then perhaps I’d have made an effort to comment.

                                                1. -11

                                                  You know what I’m going to complain about, and I think there’s a significant number of users here who share my thoughts on this.

                                                  But I’m not going to put that directly this time, just because some too sensitive people might get “offended”.

                                                  Well, just stay on topic in the posts, okay? That’s not the first time and not the only blog with this particular “issue”.

                                                  1. 46

                                                    But I’m not going to put that directly this time, just because some too sensitive people might get “offended”.

                                                    When you see something that doesn’t affect you in the slightest and was made for free and given to the community to help, and you complain anyway, maybe you should ask yourself who is “too sensitive.”

                                                    1. 27

                                                      Could you please briefly point out what the issue is? Is the problem a lack of depth, some web-technology used, the drawings?

                                                      1. 16

                                                        +1 to this request. Speaking as a moderator, there’s nothing obviously wrong with this post to me. If it in fact has some problem, fine, we can address that, but only if we know what it is.

                                                        It’s almost enough to make me think there isn’t any real complaint, just a personal vendetta… but of course, that’s hard to prove, and really it’s beside the point. Either there is a complaint or there isn’t; if there isn’t, vague insinuations accomplish nothing.

                                                        1. 8

                                                          The user already had a comment deleted, this is just a continuation/provocation.

                                                          https://lobste.rs/s/3bbj56/edutech_spyware_is_still_spyware#c_9sqrho

                                                        2. 3

                                                          He can’t point it out, because he will be downvoted into oblivion

                                                        3. 11

                                                          Haha what? I reread the post after I saw this (I don’t even like Rust man) and couldn’t find anything off-topic, or even remotely problematic. Are you referring to the art, perhaps?

                                                          1. 8

                                                            Yes. This user had a complaint about the art used in https://lobste.rs/s/3bbj56/edutech_spyware_is_still_spyware#c_9sqrho as well. In both cases, it’s their personal vendetta against cartoonish drawings of animals with human traits. They might be right that others dislike the art style as well, but it’s certainly not worth complaining about.

                                                        1. 2

                                                          I do something similar with normal comments at the beginning of the script, usually not as a help message but with a separate command called “twoman”, because adding option parsing to most scripts doesn’t really make sense.

                                                          https://xn--1xa.duncano.de/twoman.html
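
                                                          Roughly the pattern, as a sketch (not the actual twoman code): the script’s leading comment block is the documentation, and a tiny viewer prints it for any script found in $PATH:

                                                            #!/bin/sh
                                                            # twoman-like viewer (sketch): print the leading comment block of a script
                                                            # usage: twoman scriptname
                                                            exec sed -n '1d; /^#/!q; s/^# \{0,1\}//p' "$(command -v "$1")"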

                                                          1. 1

                                                            Permit me to connect your comment to @adventureloop’s

                                                          1. 2

                                                            This is not a supervisor in the daemontools, runit and s6 sense.

                                                            It doesn’t actually use signals to notice when a child process dies and is prone to the same issues as PID files (sending SIGTERM to the wrong processes on PID reuse).

                                                              1. 1

                                                                How do people find these bugs?

                                                                I don’t think it’s just code review. Maybe fuzzing?

                                                                1. 5

                                                                  Maybe they have a specialized fuzzer for finding issues with subprocess execution (see the other two vulnerabilities they’ve found, linked at the end of this comment), but I think it’s more likely that they are doing targeted reviews of this kind of bug class.

                                                                  From my experience, I would start by searching for code paths that execute programs with user input and then work backwards to see if the user input is validated at all. If not, then you already have your bug; otherwise, you might have to do a full review of how the input is validated (which they probably did in this case, and that’s how they found the logic error).

                                                                  https://www.qualys.com/2019/12/11/cve-2019-19726/local-privilege-escalation-openbsd-dynamic-loader.txt https://www.qualys.com/2019/12/04/cve-2019-19521/authentication-vulnerabilities-openbsd.txt

                                                                  1. 3

                                                                    Agree. For my own code, I’d just grep for system and exec* calls and review ‘em all. If I were in the business of reviewing other code for this, I think I’d probably write a taint checker to help me look. It feels like llvm could get you really close to that, these days. It looks like there might be at least the scaffolding for that there already. Last time I had to do it as a one-off on someone else’s code, I modified the codebase I was working with to taint certain variables then highlight whether any of a group of functions acted on them.
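
                                                                    A first-pass sketch of that grep (the pattern and the src/ path are just examples):

                                                                      # flag every call site that spawns another program, then review each one
                                                                      grep -rnE '\b(system|popen|exec[lv]p?e?)[[:space:]]*\(' src/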

                                                                1. 1

                                                                  There was a FreeBSD security advisory the same day. If this repository is the source https://github.com/freenas/os/tree/freenas/11.3-stable, then it looks like the patches are not included.

                                                                  1. 3

                                                                    Why did you choose to pass in pledges as a null-terminated string? Did you consider adding a length field? What about encoding options as flags in a variable or two, eliminating the parsing step?

                                                                    1. 1

                                                                        It is using the OpenBSD API. tame(2), the pledge predecessor, used flags. I can’t find a reference on why it was changed to strings.

                                                                      1. 1

                                                                        strings are easier to change without breaking code or recompiling.

                                                                        1. 3

                                                                          Can you elaborate on this?

                                                                          1. 1

                                                                            A stringly typed system can ignore values it doesn’t understand, while if you have bitfields and change what one means or need to expand the number of bits, you have changed your API and need to recompile.

                                                                            This is as I understand it.

                                                                            1. 1

                                                                                Ok, so let’s look at a couple of changes one might want to make, and how different APIs handle them. The primary ones are adding, merging, and splitting pledge categories.

                                                                              First, to add a new pledge category in both cases no code needs to be recompiled. This type of change could occur when adding new syscalls to the kernel. In each case, old pledge calls will still be valid. Old kernels can also ignore new pledge categories for both strings and bitfields.

                                                                                When merging two (or more) pledge categories no code needs to be recompiled. However, the bitfield case is more elegant in its changes. Consider merging the pledge categories foo and bar into foobar. With strings, new kernels will need to recognize both older categories, in addition to the new (merged) category. Old kernels encountering foobar will kill the process when syscalls from either foo or bar are made. This effectively breaks the api, meaning that merges into a new pledge category in this manner are difficult. There is no way to have one set of code work on both new and old kernels just using the merged category. Therefore, the only way to do merges is to have foo imply bar and vice versa. However, this may result in new code which breaks on old kernels since it uses syscalls from bar and only has foo in its pledge string.

                                                                              In the bitfield case, merging new categories is much easier. If the two older categories were PLEDGE_FOO = 1 and PLEDGE_BAR = 2, then a new definition PLEDGE_FOOBAR = 3 can be added. New code can use this symbol. Bugs where new code only sets PLEDGE_BAR or PLEDGE_FOO can occur, but many compilers have the ability to deprecate enums. Now, using the old values for FOO and BAR gives a warning at compile-time. Alternatively, the header writer could just define PLEDGE_BAR and PLEDGE_FOO to both be 3, in addition to defining PLEDGE_FOOBAR. (though this would not affect behaviour, since the kernel would still imply bar from foo and vice versa).

                                                                                Last, let’s look at the case of splitting a pledge category into two new categories (for more fine-grained control). As before, no code needs to be recompiled. To illustrate, consider splitting the pledge category foobar into foo and bar. In order to not break new code running on old kernels, the old category must be included with pledge calls in addition to the new ones. E.g. to maintain compatibility, the new api for pledging just bar would be to pledge both foobar and bar, and for new kernels to then disable foo since it was not also pledged. With strings, each caller must do this manually, and there is a chance for breakage on old kernels if just bar is passed. However, with bitfields, if PLEDGE_FOOBAR = 1 and bits 1 and 2 are unused, one could define PLEDGE_BAR = 3 and PLEDGE_FOO = 5. This prevents incompatibility, while allowing transparent use of the new api. If desired, the old PLEDGE_FOOBAR could be marked as deprecated.

                                                                              Of course, all these changes rely on having spare bits left. Since there are only 18 categories in use, 64 bits provides more than enough for expansion. The ability to transparently add merges in software is helpful as well. For example, if foo, bar, and baz are commonly pledged together, a constant for foobarbaz could easily be defined in software, with no change to the kernel. With a string api, such a change would be breaking. For these reasons, in addition to not having to parse (potentially unterminated) user-generated strings, I find the design of this api puzzling.

                                                                              1. 1

                                                                                  You are ignoring the case where you might want to add more than bits. Maybe instead of just ‘stdio bar’ you want to add extensions other than bits; say, for example, you want “~stdio” to mean that children are prevented from inheriting any new permissions other than stdio.

                                                                                1. 1

                                                                                  then do pledge(PLEDGE_ALL, PLEDGE_STDIO).

                                                                                  1. 1

                                                                                    I think you missed my point because my example was bad.

                                                                                    1. 1

                                                                                      Perhaps, but either way, null-terminated strings in a syscall are bad design imo. The composability of a bitfield representation is a real advantage, especially when combined with the C preprocessor. I really can’t think of a case where strings would be more extensible, except if you wanted to add more than 64ish pledges.

                                                                    1. 6

                                                                        I find it curious that the Blink team at Google takes this action in order to prevent various other teams at Google from doing harmful user-agent sniffing to block browsers they don’t like. Google certainly isn’t the only one, but they’re some of the biggest user-agent sniffing abusers.

                                                                      FWIW, I think it’s a good step, nobody needs to know I’m on Ubuntu Linux using X11 on an x86_64 CPU running Firefox 74 with Gecko 20100101. At most, the Firefox/74 part is relevant, but even that has limited value.

                                                                      1. 14

                                                                        They still want to know that. The mail contains a link to the proposed “user agent client hints” RFC, which splits the user agent into multiple more standardized headers the server has to request, making “user-agent sniffing” more effective.

                                                                        1. 4

                                                                          Oh. That’s sad. I read through a bit of the RFC now, and yeah, I don’t see why corporations wouldn’t just ask for everything and have slightly more reliable fingerprinting while still blocking browsers they don’t like. I don’t see how the proposed replacement isn’t also “an abundant source of compatibility issues … resulting in browsers lying about themselves … and sites (including Google properties) being broken in some browsers for no good reason”.

                                                                          What possible use case could a website have for knowing whether I’m on ARM or Risc-V or x86 or x86_64 other than fingerprinting? How is it responsible to let the server ask for the exact model of device you’re using?

                                                                          The spec even contains wording like “To set the Sec-CH-Platform header for a request, given a request (r), user agents MUST: […] Let value be a Structured Header object whose value is the user agent’s platform brand and version”, so there’s not even any space for a browser to offer an anti-fingerprinting setting and still claim to be compliant.

                                                                          1. 4

                                                                            What possible use case could a website have for knowing whether I’m on ARM or Risc-V or x86 or x86_64 other than fingerprinting?

                                                                            Software download links.

                                                                            How is it responsible to let the server ask for the exact model of device you’re using?

                                                                            … Okay, I’ve got nothing. At least the W3C has the presence of mind to ask the same question. This is literally “Issue 1” in the spec.

                                                                            1. 3

                                                                              Okay, I’ve got nothing.

                                                                                I have a use case for it. I’ve a server which users run on an intranet (typically either just an access point, or a mobile phone hotspot), with web browsers running on random personal tablets/mobile devices. Given that the users are generally not technical, they’d probably be able to identify a connected device as “iPad” versus “Samsung S10” if I can show that in the web app (or at least ask around to figure out whose device it is), but will not be able to do much with e.g. an IP address.

                                                                              Obviously pretty niche. I have more secure solutions planned for this, however I’d like to keep the low barrier to entry that knowing the hardware type from user agent provides in addition to those.

                                                                            2. 2

                                                                              What possible use case could a website have for knowing whether I’m on ARM or Risc-V or x86 or x86_64 other than fingerprinting?

                                                                              Benchmarking and profiling. If your site performance starts tanking on one kind of processor on phones in the Philippines, you probably want to know that to see what you can do about it.

                                                                              Additionally, you can build a website with a certain performance budget when you know what your market minimally has. See the Steam Hardware and Software Survey for an example of this in the desktop videogame world.

                                                                              Finally, if you generally know what kinds of devices your customers are using, you can buy a bunch of those for your QA lab to make sure users are getting good real-world performance.

                                                                          2. 7

                                                                            Gecko 20100101

                                                                            Amusingly, this date is a static string — it is already frozen for compatibility reasons.

                                                                            1. 2

                                                                              Any site that offers you/administrators a “login history” view benefits from somewhat accurate information. Knowing the CPU type or window system probably doesn’t help much, but knowing it’s Firefox on Ubuntu combined with a location lookup from your IP is certainly a reasonable description to identify if it’s you or someone else using the account.

                                                                              1. 2

                                                                                There are times I’d certainly like sites to know I’m using a minority browser or a minority platform, though. Yes, there are downsides because of the risk of fingerprinting, but it’s good to remind sites that people like me exist.

                                                                                1. 1

                                                                                  Though the audience here will play the world’s tiniest violin for those affected, the technical impact aspect may be of interest.

                                                                                  The version numbering is a useful low-hanging-fruit method in the ad-tech industry to catch fraud. A lot of bad actors use either just old browsers[1] or skew browser usage ratios; though of course most ‘fraud’ detection methods are naive and just assume anything older than two major releases is fraud, and ignore details such as LTS releases.

                                                                                  [1] persuade the user to install a ‘useful’ tool and it sits as a background task burning ads, or acts as a replacement for the user’s regular browser (never updated)

                                                                                  1. 5

                                                                                    “Just speak Chinese.” Source: I’ve tried DOAS.

                                                                                    1. 3

                                                                                      If only Chinese had such nice man pages…

                                                                                    2. 1

                                                                                      What if you’re not a BSD user?

                                                                                      1. 1

                                                                                  DOAS is portable; I use it on Red Hat, CentOS and Oracle Linux systems, and Ubuntu should also not be a problem.

                                                                                          1. 2

                                                                                    Nothing is perfect, and doas is quite young compared to sudo (about 15 years’ difference).

                                                                                    1. 8

                                                                                      This is especially nasty on linux, where inotify(7) events for write(2) and truncate(2)/ftruncate(2) both result in an IN_MODIFY. To make it worse, open(2) with O_TRUNC doesn’t result in IN_MODIFY, but only IN_OPEN events and there is no way to distinguish between O_RDONLY, O_WRONLY and/or O_RDWR.

                                                                                By the time tail receives and handles the events and uses stat(2) to try to detect truncation, the file could already have grown back to its pre-truncation size or larger, and there is no way to tell whether the file was truncated at all.

                                                                                      1. 3

                                                                                        Because [Podman] doesn’t need a daemon, and uses user namespacing to simulate root in the container, there’s no need to attach to a socket with root privileges, which was a long-standing concern with Docker.

                                                                                        Wait, Docker didn’t use user namespacing? I thought that was the whole point of Linux containers.

                                                                                        1. 7

                                                                                    There are two different things called user namespaces: CLONE_NEWUSER, which creates a namespace that doesn’t share user and group IDs with the parent namespace, and the kernel configuration option CONFIG_USER_NS, which allows unprivileged users to create new namespaces.

                                                                                          Docker and the tools from the article both use user namespaces as in CLONE_NEWUSER.

                                                                                    Docker by default runs as a privileged user and can create namespaces without CONFIG_USER_NS. I’m not sure if you can run docker as an unprivileged user because of other features, but technically it should be able to create namespaces without root if CONFIG_USER_NS is enabled.

                                                                                    The tools described in the article just create a namespace and then exec into the init process of the container. Because they are not daemons and don’t do a lot more than that, they can run unprivileged if CONFIG_USER_NS is enabled.

                                                                                    Edit: Another thing worth mentioning in my opinion is that UID and GID maps (which are required if you want to have more than one UID/GID in the container) can only be written by root, and tools like podman use two setuid binaries from shadow (newuidmap(1) and newgidmap(1)) to do that.
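
                                                                                    For a quick feel of the CLONE_NEWUSER side from a shell, util-linux’s unshare can set up the trivial single-entry maps itself (a sketch; podman’s multi-ID setup via newuidmap/newgidmap is more involved):

                                                                                      # create a user namespace as an unprivileged user (needs CONFIG_USER_NS);
                                                                                      # --map-root-user writes the one-line UID/GID maps, so id reports uid 0 inside
                                                                                      unshare --user --map-root-user id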

                                                                                          1. 1

                                                                                            It can, but for a long time it was off by default. Not sure if that’s still true.

                                                                                          1. 3

                                                                                            as always, feel free to submit feedback, criticism or issues!

                                                                                            1. 3

                                                                                              Just some nitpicking on dependencies:

                                                                                      • When depending on a Git repository (as you do with your colored dependency), it is a good practice to point to a particular commit or tag using the rev or tag parameter instead of the branch, as the branch’s HEAD can change but a commit or tag can only point to one specific state of the repository (see the sketch after this list).
                                                                                      • When publishing a binary (executable) crate, it is a good practice to publish the Cargo.lock along with the crate. You can find the reasoning for why you should publish this file in Cargo’s FAQ.
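
                                                                                      For illustration, what a pinned Git dependency could look like in Cargo.toml (the URL and commit hash here are placeholders):

                                                                                        [dependencies]
                                                                                        # placeholder repository URL and commit hash, for illustration only
                                                                                        colored = { git = "https://github.com/OWNER/colored", rev = "0123abc" }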

                                                                                              I will try it later though! I always complained that some prompt frameworks are using scripting languages like Python or Ruby that have slow spin-up rate, so this project seems interesting and a cool way to customize my ugly and boring prompt.

                                                                                              1. 1

                                                                                        You kind of cover this, but the Cargo.lock would capture the commit that the git dependency was at when the lock file was generated. So if the Cargo.lock was checked in, everyone would build against the same commit.

                                                                                              2. 2

                                                                                      I already implemented a similar tool, rusty-prompt, some months ago; maybe you can get some inspiration out of it.

                                                                                                1. 1

                                                                                                  sure! thanks for sharing!

                                                                                                2. 1

                                                                                                  My bashes (both the one that comes with Mac OS and the latest 5.0.7 from brew) seem to cache PS1 somehow, making pista break quite a lot.

                                                                                                  ➜  ~ /usr/local/bin/bash
                                                                                                  bash-5.0$ PS1=$(pista)
                                                                                                  ~
                                                                                                  $ cd git
                                                                                                  ~
                                                                                                  $ PS1=$(pista)
                                                                                                  ~/git
                                                                                                  $ cd nomad
                                                                                                  ~/git
                                                                                                  $ PS1=$(pista)
                                                                                                  ~/g/nomad master ·
                                                                                                  $
                                                                                                  
                                                                                                  1. 2

                                                                                        Try PS1='$(pista)'. What’s happening is that pista is getting called once, when you set PS1, and then never again. The single quotes force PS1 to literally contain the expansion, which then gets expanded (and thereby calls pista) each time the prompt is printed.

                                                                                                    1. 2

                                                                                                      Ohhh, no :( Of course. I feel like I should step far away from the computer now.

                                                                                                      1. 3

                                                                                                        looks like the installation instructions were faulty!

                                                                                                        1. 1

                                                                                                          Oh, whew, thanks for that. Now I feel slightly less stupid :)

                                                                                                    2. 1

                                                                                          can’t seem to replicate this, but it looks like a PROMPT_COMMAND thing.

                                                                                                    3. 1

                                                                                                      @hostname is nice to have if $SSH_CONNECTION is set.
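
                                                                                        A rough plain-shell sketch of the idea (pista itself would do this internally; \u and \h are bash prompt escapes):

                                                                                          # only prepend user@host when the shell is running over SSH
                                                                                          if [ -n "$SSH_CONNECTION" ]; then
                                                                                              PS1='\u@\h $(pista)'
                                                                                          else
                                                                                              PS1='$(pista)'
                                                                                          fi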

                                                                                                      1. 4

                                                                                                        i have plans to extend pista into a library, so you could build your own prompts with pista’s functions. maybe ill add a hostname function :^)