Threads for journeysquid

    1. 5

      So long, and thanks for all the fish!

    2. 12

      Happy New Year everyone. It’s been nice spending time with some of you on #lobsters (see the Chat link at the bottom of the page), including a few names from the top submitter and commenter lists. In addition to the announcements above, this year also saw the introduction of mockturtle, the #lobsters channel bot which reports every story submitted to lobste.rs via IRC. I’ve enjoyed monitoring the website this way, and @journeysquid, @flyingfisch and I really appreciate @pushcx’s tireless effort in finding stories for us.

      @jcs, I’m a tad behind, but I’ll get your commit log reporting to the channel before this time next year. ::smiles::

      Thank you everyone for making lobste.rs a nice place to hang out and mingle.

      1. 16

        Thank you everyone for making lobste.rs a nice place to hang out and mingle.

        Indeed. This is my favourite place on the web right now. I hope it stays that way in 2016.

        And it’s my favourite because there is civil disagreement in the comments. The stories are a decent intersection of what you find on places like HN, but the common subset isn’t too large. The conversation is much saner.

        Thanks lobste.rs, for making me like forums again.

      2. 3

        I’ve enjoyed monitoring the website this way, and @journeysquid, @flyingfisch and I really appreciate @pushcx’s tireless effort in finding stories for us.

        I think you may have copy-pasted incorrectly.

      1. 3

        Done. Did you use the suggest feature? I am starting to think maybe it doesn’t work since every time I use it the titles don’t change. I know the OP has to manually change the title after the suggestion, but I’ve used it 10 times or so and the OP never changed the title.

        1. 5

          I stopped using the feature after a few weeks when it never made any difference. It seems like throwing a notification to the user might be more effective.

        2. 4

          The suggest feature automatically changes the title/tags after enough people make the same suggestion. Suggestions are not visible to submitters.

    3. 3

      [OT] I’d be happy to see more ideas and discussion here, but I guess that’s not the focus of the site, given the hotness penalty on the ask tag?

    4. 1

      This makes the GPL sound very cultish. Shouldn’t the copyright holders of software be enforcing their licenses? I know it’s not cheap, but this conglomerate approach gives me an uneasy feeling.

      1. 5

        Of course copyright holders are enforcing their licenses. The role of the Software Freedom Conservancy is to help copyright holders do so.

        1. -1

          Why? Is the GPL so complicated that this is a necessity?

          1. 4

            Well, enforcing any license is rather complicated if violators do not want to comply.

          2. 3

            Our legal system is so complicated that if you want to win, you need money to do so. You can’t win a case just because you’re right. You have to invest a lot of time and effort into proving to a judge or jury that you’re right, and that requires money. Sometimes the state will provide the money, but typically only for a criminal defense, not for a civil lawsuit.

    5. 4

      Ooh, perhaps a new compiler for OpenBSD base one day. (R.I.P. pcc)

    6. 3

      Peripherally related: It’s almost 2016 and Samba still runs as root without any privilege separation or chroot of any form. I really wish there was an alternative that wasn’t “Run Windows.”

    7. 4

      I still don’t understand how people can routinely put back code into a master branch without ensuring that it even builds correctly, let alone functions as designed. When an accident happens and you break the build, or the software in general, it seems prudent to back the change out of the repository until you can understand it. Whatever the mistake was should be understood, so that you (and your colleagues) can learn from it and avoid it in future.

      Mistakes can always happen, but negligence is simply not cricket.

      1. 4

        “I still don’t understand how”: a blameless postmortem is an interesting tool for finding out. The idea is that things that in hindsight look negligent might have seemed perfectly sensible at the time. Finding out why they seemed sensible might show gaps in tooling or training. (E.g., tests were run, but not on exactly the changeset that got merged; tests have been failing for engineer X for weeks, but she’s ignoring them because they pass in CI; etc.)

        GitHub has a CI-integration “protect this branch” feature: you can configure master so that a PR can only be merged if a particular CI check has passed on the branch being merged.
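
        For reference, this is roughly how it looks through today’s GitHub REST API (a sketch, untested; OWNER/REPO, the token, and the “ci/build” status context are placeholders):

        # Hypothetical: require the "ci/build" check to pass before merges to master.
        curl -X PUT -H "Authorization: token $GITHUB_TOKEN" \
          -d '{"required_status_checks": {"strict": true, "contexts": ["ci/build"]},
               "enforce_admins": true,
               "required_pull_request_reviews": null,
               "restrictions": null}' \
          https://api.github.com/repos/OWNER/REPO/branches/master/protection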

        1. 2

          I agree wholeheartedly that it is important not to lay blame (or worse) for mistakes. Doing a post mortem analysis of mistakes is a crucial part of avoiding repeating the same mistake over and over; to ignore the problem is to become negligent.

          If you run the tests on a patch that is not the same as the one you eventually merge, you didn’t really run the tests. Discovering that this is true, and understanding that even small changes can result in unanticipated defects, is an opportunity to take ownership of, and learn from, a mistake. To continue routinely putting back changes where you did not test the actual patch is, consequently, negligence.

          If the tests routinely fail on your machine (but not on others) and you ignore those failures without understanding the cause: that’s negligence as well. Every engineer in a team is responsible for the quality of the software. This is a core part of avoiding blame games – if everybody owns, analyses, learns from and shares their mistakes, nobody need point upset fingers at others.

          1. 1

            Certainly the blameless postmortem idea is only going to work if you do something with the findings. If the kinds of mistakes you’re talking about carry on happening regularly, then yes you have a problem.

            People will still make mistakes, though. That’s the nice thing about a tooling solution like that GitHub configuration: a whole class of mistakes simply can’t happen any more.

      2. 3

        For this reason, I think more version control systems need to make an easy, visible revert part of their core functionality.

      3. 3

        Where I work, it usually happens due to portability. Someone checks in code that builds fine on their preferred dev platform and assumes it will work on the others. We have an abstraction layer that helps with differences in the system libraries, but things like mistaken capitalization in an #include will work on Windows but not Linux. Conversely, the GCC linker is more forgiving than VS about mixing up struct vs. class forward declarations.

        1. 1

          Me too. The most common portability issue we have is wchar_t: UTF-8 vs. UTF-16.

        2. 1

          Yeah, developing for more than one platform can make it much more tedious to make sure your code is tested before it goes back. If this kind of failure happened more than once or twice, though, I would probably consider adding some kind of #include case check tool to be run out of make check. We do this already for things like code style checks.

          You could conceivably make it relatively easy to install the checks as a pre-commit hook in the local clone via some target like make hooks. Pushing code to a feature branch to trigger some kind of CI build/test process before cherry-picking into master could help as well.
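
          Concretely, such a check could be a short script like this (an untested sketch; it assumes a case-sensitive filesystem, e.g. the Linux CI box, sources under src/, and project headers under include/):

          #!/bin/sh
          # check-include-case.sh: flag #include "..." directives whose target
          # does not exist with the exact capitalization used. On a
          # case-sensitive filesystem this catches Windows-only includes.
          fail=0
          for f in $(find src -name '*.cc' -o -name '*.h'); do
            for inc in $(sed -n 's/^[[:space:]]*#include[[:space:]]*"\([^"]*\)".*/\1/p' "$f"); do
              if [ ! -e "$(dirname "$f")/$inc" ] && [ ! -e "include/$inc" ]; then
                echo "bad include in $f: \"$inc\"" >&2
                fail=1
              fi
            done
          done
          exit $fail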

        3. 1

          I had close to a dozen build failures in the space of an hour because someone built live-environment integration tests into the CI test process, and they depended on a dev service that was down. “Fixing” the build entailed rerunning it unchanged once the depended-upon service had been restarted. It has always been my experience that broken CI builds are due to unforeseeable problems or circumstances outside the developer’s control, not a lack of due diligence on the developer’s part; so these “build-breaker shaming” devices seem incredibly counterproductive to me.

      4. 1

        I had it happen when I changed to a job where I had to use a different IDE from the one I was used to. I was used to making the kind of change that would show up immediately as a failure in my IDE if it was incorrect; if not, I would habitually commit to master, confident that it would work. Running a command-line build or unit tests was simply not justified in terms of the cost given the level of confidence I tended to have in such a change. With the new IDE my confidence was entirely misplaced and I broke a lot of master builds until I adjusted.

    8. 6

      On the whole I agree with the article. I have worked for 3 startups and been burned 3 times. I’m much happier with the mega-corps now. That said, I keep seeing these types of articles talking about $250K packages for a senior programmer and how you can work from anywhere with this kind of salary.

      This has not been my experience, and the Bureau of Labor Statistics numbers seem more apt, given what I have experienced:

      http://www.bls.gov/ooh/computer-and-information-technology/computer-programmers.htm

      I’m well above the BLS median, but not even half of $250K. Is this really achievable as “the norm” from anywhere in the USA, and if so where are all of these opportunities that I’m so obviously missing?

      How much of that $250K is actual salary, and how much of it is “value of benefits”? This is the other piece that is a head-scratcher: since the $250K always includes “value of benefits,” I feel like it is a bit of sleight of hand that hides the lower actual salary.

      Here is the BLS breakdown by state, which is also interesting: http://www.bls.gov/oes/current/oes151131.htm#st. The California median is around $89K, and Washington seems to have the highest median, still only $115K.

      1. 3

        Note that the BLS definition of “Computer programmer” appears to be very low level:

        “…They turn the program designs created by software developers and engineers into instructions that a computer can follow.”

        It’s likely that many of us here who would call ourselves programmers might fit into another statistical bucket for the BLS, like “software developers” - median $97k, or even “Computer and Information Research Scientists”, median $108k. Honestly, even those seem low and so I assume they’re wrapping together some jobs that I wouldn’t consider equivalent.

        In general it seems really hard to tell how much of these stories of high compensation to believe without just asking your peers, something I’m always reluctant to do. Certainly having some idea of what sort of field these offers are being made for is useful - Dan’s article was helpful in pointing out that people in “hot fields” get gobs more money.

        It’s also not always totally clear what being “senior” means. I think I had “senior” in my title once, but I think it means different things at different places. :)

      2. 1

        How much of that $250K is actual salary and then how much of it is “value of benefits”?

        Zero. A mediocre compensation package for a senior engineer today is $150k salary, $100k/yr of equity that’s not quite as good as cash (but pretty close) and bonuses.

        1. 7

          It’s certainly not anywhere near that in New York or Boston. Perhaps some SV outliers.

          1. 2

            That’s what people I know at Google make in Madison, WI. I’ve heard that numbers in places with a similar cost of living (like Austin, TX) are similar. Numbers are often much higher in SV, of course.

        2. 1

          Roughly how many years of industry experience does a senior engineer at Google correspond to? (I know that years of experience is a horridly imperfect metric, but it can be useful for HR-type stuff.)

          1. 1

            A decade, give or take a few.

          2. 1

            Three or four, or more.

    9. 5

      My favorite part was the Tumblr at the end: http://totalgarb.tumblr.com/tagged/startupbullshit

      That being said, I think we all make the startup equity mistake once. These days, I much prefer being paid hourly, as I feel my interests and the client’s are more aligned.

    10. 16

      At this point, it was reasonable to believe that Wes was operating on behalf of Synack. His account on our portal mentions Synack as his affiliation, he has interacted with us using a synack.com email address, and he has written blog posts that are used by Synack for marketing purposes.

      Sorry, but that’s a load of crap, and the author knows it. He didn’t like what Wes was doing and called his boss.

      1. 5

        This assumes two things: 1) what Wes was doing was otherwise defensible, 2) Facebook did not have legitimate reasons to believe that Wes was operating on behalf of his employer. I think both of these are poor assumptions.

        The initial RCE discovery was fine, and Wes seemed to behave appropriately up until that point. But using keys discovered during an already disclosed exploit to engage in further exploits is wrong. As someone on Hacker News explained it, it’s roughly equivalent to breaking into someone’s house, telling them so they can fix the way you got in, but also taking their access card for work while in the house and then using that to access their office. The second exploit is illegitimate, because it was only possible due to a now-fixed flaw. Worse yet, it’s pretty ethically indefensible to justify stealing from someone and then using the products of your theft to demand more money. Facebook was reasonable in assuming he was a malicious actor.

        This leads into the second point, that Facebook had a reasonable belief that Wes was working on behalf of Synack. Given that all of his communications had come from a Synack corporate email address, that his account on their bug bounty system listed him as a Synack employee, and that he also has publicly verifiable connections with Synack through his posts on their blog system, I don’t think it’s unreasonable that they believed he was participating in the bug bounty as part of his job. Furthermore, if they believed he was a malicious actor (which they certainly appear to have believed), then contacting his employer was likely the best way to limit the possibility that he could use the information he had obtained to cause harm (along with, presumably, locking down their AWS setup so that he no longer had access).

        This is not to say that Facebook acted perfectly in this situation. Their emails to him were admittedly unclear, and they seem to have rules for their bug bounty program that are not listed on the official site. Both of these things should have been different. But Wes too is not blameless. Security researchers have to show a lot of caution when pentesting other entities, particularly when that pentesting is not specifically invited (which in this case would have meant Facebook directly hiring Synack or Wes himself to pentest them).

        All turned out alright in the end. Facebook secured their systems. Wes retained the payout for the initial RCE, and didn’t face any sort of charges for his actions beyond that. But this is not a story of the big bad company trying to take out the heroic little guy trying to make them more secure. This is a story of a well-intentioned pentester engaging in some ethically questionable behavior with lack of direction from Facebook, and Facebook responding to that behavior.

        If you’re a business, the thing to take away is this:

        • Make sure the rules of your bug bounty program are as clear as possible. While this won’t stop malicious individuals from going further than you’d like, it lets security researchers know where the line is, and reduces the likelihood of excessive data/system access beyond what you’re comfortable with
        • Be as clear as possible in communication with bug bounty participants. If a bounty is not being accepted, the more information you provide, the less likely the person is to misunderstand what you’re saying, or to attempt to access systems beyond what you’d want

        If you’re an individual or business engaging in a bug bounty program:

        • Be very careful to follow the rules of the program. If you’re unsure of whether a particular action is within the bounds of the rules, contact the company and ask. This is not an “ask for forgiveness” type of situation. You are dealing with live systems and real people’s data, and you need to be respectful of the bounds put in place by the company operating the bounty program
        • Never dump data from one exploit to fuel the discovery of future exploits unless you have been clearly and explicitly asked to do so by the operators of the bounty program. There are major legal and ethical concerns otherwise.

        Hopefully everyone can learn these lessons, so companies can feel a little safer running bug bounty programs, and individuals and businesses can feel a little safer participating in them. Everyone wins.

        1. 3

          As someone on Hacker News explained it, it’s roughly equivalent to breaking into someone’s house, telling them so they can fix the way you got in, but also taking their access card for work while in the house and then using that to access their office. The second exploit is illegitimate, because it was only possible due to a now-fixed flaw. Worse yet, it’s pretty ethically indefensible to justify stealing from someone and then using the products of your theft to demand more money. Facebook was reasonable in assuming he was a malicious actor.

          That doesn’t ring true. He pointed out deficiencies the Facebook devops team should have taken care of, but didn’t. He simply informed them of the severity of the issue, and did nothing like breaking into anyone’s office. It is correct that he couldn’t have gotten to the soft underbelly without the first vuln, but the bad design would still have been there. Hey, it might still be!

          And he never demanded more money.

    11. 8
    12. 4

      I wonder if states may get involved. Most (All?) states have a weights and measures office that checks fuel pumps, scales, etc. This feels like an area that may get more regulation in the near future.

      1. 2

        Certainly, and it should.

        However, in this case the meter itself wasn’t broken, contrary to what the title implies. The wrong meter was being read. This happens all the time in the utility world (electric, water, gas), and usually has a quick resolution without additional state involvement.

        1. 1

          Certainly, and it should.

          I don’t necessarily agree. The costs involved would be prohibitive, and not much would be gained versus “we assume a 1% error in favor of the customer.”

          How would you like to see a 3rd party and/or government agency go about counting octets transferred over broadband networks?

          1. 1

            One thing the law could provide is an escalation pathway. The subject of this article would have had no recourse but an expensive lawsuit, if the media hadn’t chosen to get involved - that’s just wrong.

    13. 4

      Until the firmware of the flash device in question no longer wants to be recognized as a device. Or the filesystem is corrupted beyond repair.

      Multiple backups in multiple places. Script it.
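
      For example, a minimal sketch (hosts and paths here are made up; you’d run this from cron):

      #!/bin/sh
      # backup.sh: snapshot once, then push the same archive to several
      # independent destinations so one dead disk or host doesn't matter.
      set -e
      snap=/tmp/home-$(date +%Y-%m-%d).tar.gz
      tar -czf "$snap" /home/me/important
      sha512sum "$snap" > "$snap.sha512"   # integrity record for later checks
      for dest in /mnt/usb-backup user@offsite:/backups user@nas:/volume1/backups; do
        rsync -a "$snap" "$snap.sha512" "$dest/" || echo "WARN: copy to $dest failed" >&2
      done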

      1. 3

        Yeah. This only covers a narrow set of problems (loss of a single block inside a large file) and it’s quite a bit of work to use. cp is much easier and provides even better protection, against more errors (even rm).

        Reed-Solomon is interesting if you’re trying to transmit data over a particularly lossy link and need to make something of the garbage you receive. I’d still use more advanced checksums like sha512 for integrity, but sha512 doesn’t provide an easy way to reconstruct the correct data after failure.

        1. 3

          Usenet pirates have been using Parchive for years now - as the comments note, it has the advantage of limiting how much recovery info you have to slog around by splitting up the parity files into increasingly larger sizes; if you’re only missing a few blocks you can download a tiny par file rather than the entire thing.

          Usenet & PAR2 actually make a really decent backup strategy for stuff you don’t want to disappear. Some providers are starting to offer retention upward of seven years - and when you can upload for around $0.14 - $0.20 per gigabyte…
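
          For reference, the par2 command line is pleasantly simple (the file name and 10% redundancy level are just an example):

          par2 create -r10 backup.tar     # writes backup.tar.par2 plus recovery volumes
          par2 verify backup.tar.par2     # checks the data against the parity files
          par2 repair backup.tar.par2     # reconstructs damaged or missing blocks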

    14. 3

      What’s worse when hiring: taking on a bad fit, or letting a good fit get away?

    15. 20

      All of this, if true, is incredibly scummy and entirely unsurprising. The startup world, contrary to its own loud and consistent claims, is not a “meritocracy” and is absolutely something we have seen before. It is just like any other organization of power and money: insular and endlessly self-promoting.

      Just another reason that I dislike startups.

      1. 8

        All the grand-standing about innovation and meritocracy is a smokescreen for the complete lack thereof in most startups.

        They’re product companies run in exactly the same way as the Old Guard. And there’s nothing wrong with that! But to admit that constitutes a severe existential risk to the belief in Changing The World (and getting rich at the same time, conveniently!)

        1. 1

          but at least we have loste.rs, which doesn’t subscribe to the same lowly practices as link-aggregation sites like hackernews and product hunt (at least I hope).

          1. 1

            lost.ers

            (Sorry, couldn’t help it, but at the same time, trying to figure out if you are serious or not)

            1. 1

              too late to fix. loBste.rs (forgot the b). and I was being serious…

      2. 5

        I wanted to come in here and disagree, but I can’t. I do feel that startups further away from the valley are less prone to the same echo-chamber problems, though the problems are still there to a degree. On the whole I agree.

        1. 7

          It really frustrates me, because I think encouraging entrepreneurship is good, and investing in the development of unproven technology is good. But all the self-congratulatory pretension of Silicon Valley, the notion that they are uniquely special and immune to the problems of the rest of the world, gets to me like nothing else. Not to mention the fact that many of these startups exploit their workers, often fresh out of college, making them work long hours for few benefits (but a Foosball table!) to make the investors and founders rich.

          There are many more problems than this, and obviously not all startups have all of these problems, but I think on the whole that the economic and cultural environment of the valley is corrosive to people’s happiness, empathy, and perspective, and that the companies born there are largely sham businesses that exist only to reach a degree of hype sufficient for a sale to whatever company’s up next on the acquisition Merry-Go-Round.

          Okay, perhaps that’s enough of a rant for now.

          1. 6

            I used to curse the fact that I never ended up in SV. Then, I realized that past me was incredibly entitled, and I dodged a bullet.

            Most startups would value me only for my ability to vomit code out towards features as fast as possible, rather than think a bit and write a bit less code that implements that feature well and sustainably. They want me to be as young as possible, so they can pay me a shitty wage without me knowing it. They’d probably resent me for having the backbone to say no to unrealistic timelines and wanting to have a life outside of work.

            I realized I was incredibly lucky to be able to continue pursuing my interest in working in deeply technical things with autonomy and self-respect, rather than constantly throwing shoddy code at the wall in the vain hope of hooking users.

            1. 1

              Most startups would value me only for my ability to vomit code out towards features as fast as possible, rather than think a bit and write a bit less code that implements that feature well and sustainably.

              Really? I’ve never met a single company, SV included, that doesn’t value that.

              1. 2

                You’ve lived a charmed life, then.

              2. 2

                As a consultant, I had a client with this attitude for six months or so. We were supposed to help their team get back on track rewriting their app in a new language and adding a modern onboarding flow. We were constantly sabotaged by their one sr. dev who knew the language and was seen as the guru by the team. He cowboy’d everything with half-baked implementations that looked nice to the PM but were not a solid foundation for the features that followed - which were very well known, because we were reimplementing an existing app and migrating the data! The attitude was reflected by the entire team. We’d have design/code sessions to plan and start a reasonable implementation, then he or they would get an idea of how to half-ass it and toss out the planned design, possibly with weeks of code, rewriting even during the rewrite. It was really dispiriting to watch work constantly going up in flames from the same mistake repeated over and over, no matter how many times we tried to address the attitude, the dev, the VP, the team, the process, anything, to put better practices in place.

                A few months after we left, the board was tired of hearing wonderful things for 18 months without the rewrite shipping. They fired the VP of Engineering and entire dev team. The new VP Eng chose a new language and is hiring a 100% new team to rewrite from scratch in a third language. I felt more than a bit of schadenfreude that day.

              3. 1

                You are lucky. I quit several teams at a past employer because of this mindset.

                I was sick of being held accountable for things they would not pay for, namely, enough time to do something well. I’m not talking about gold-plating, but rather, enough time to do things such that you don’t have to circle back to it afterwards. Inevitably, something blows up in production as a result of a truncated schedule, and it’s a big emergency.

                Of course, you can’t bring up the fact that you specifically mentioned this would be a problem, because then you aren’t a Team Player. You can’t fix broken systems, and some people don’t have the imagination to see anything but broken processes.

                If I don’t get the autonomy necessary to do a good job, I leave. (And I’m lucky that I can do that.)

    16. 9

      I decided to do some tests on my backups. I generally dump a postgres instance with the following command:

      time pg_dump -U postgres head | (gzip > /home/database/koparo_head_$(/bin/date +\%Y-\%m-\%d_\%H-\%M-\%S).gz)

      Here are the timings with gzip, xz and gzip -9:

      Doing a dump and compressing it with gzip:

      real    1m6.960s
      user    1m3.220s
      sys     0m3.320s
      

      Dump performed and compressed with xz:

      real    17m57.054s
      user    17m37.457s
      sys     0m8.777s
      

      gzip -9 the db dump:

      real    1m21.947s
      user    1m16.890s
      sys     0m4.373s
      

      Resulting size:

      -rw-r--r-- 1 database database 643M Dec 12 19:49 koparo_head_2015-12-12_19-48-00.gz
      -rw-r--r-- 1 database database 477M Dec 12 20:07 koparo_head_2015-12-12_19-50-00.xz
      -rw-r--r-- 1 database database 641M Dec 12 20:11 koparo_head_2015-12-12_20-09-40.gz9
      

      In my use case, it seems gz is the most sane approach. I doubt waiting 17m57s for a DB backup is a viable case :)

      1. 6

        I happen to have an actual potential use case I tried throwing this at. I’m regularly taking snapshots of a bit of server filesystem with tar (it happens to be a Minecraft server, but I doubt that massively skews the compression performance characteristics of the data). I have snapshots easily to hand, so I grabbed one and checked it. These are all done with a warmed cache and are profoundly unscientific, but here we go:

        $ time cat snapshot.tar | cat | wc -c
        1721825280
        
        real    0m3.135s
        user    0m0.032s
        sys     0m4.504s
        $ time cat snapshot.tar | gzip | wc -c
        1044644775
        
        real    1m34.558s
        user    1m33.436s
        sys     0m5.436s
        $ time cat snapshot.tar | bzip2 | wc -c
        1037459851
        
        real    6m34.127s
        user    6m28.952s
        sys     0m8.828s
        $ time cat snapshot.tar | xz | wc -c
        1030032944
        
        real    14m14.562s
        user    14m7.672s
        sys     0m14.856s
        

        So using gzip saves about 40% on my original most-of-2GB. Not bad, especially in only a minute and a half. bzip2 saves an additional half a percent, at the cost of another five minutes of processing; and xz saves 0.8% over gzip, at the cost of almost thirteen minutes of additional processing. There’s no way that’s worth it, especially since the server needs to be doing things other than compressing snapshots while this is going on. So I guess I’ll keep using gzip.

      2. 1

        How many cores do you have available and did you try xz with -T 0? For apples-to-apples, you could try pigz, as well.
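
        E.g., against the same dump pipeline (a sketch; pigz is a drop-in parallel gzip, and -T 0, available since xz 5.2, lets xz use every core):

        time pg_dump -U postgres head | pigz > dump.gz      # multicore, gzip-compatible output
        time pg_dump -U postgres head | xz -T 0 > dump.xz   # threaded xz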

    17. 2

      I regularly run personal backups and disk images through xz; they both end up about half the size they would be with gzip. Also, having threaded compression built into the primary executable (-T 0) is nice compared to having to download a separate pigz package.

    18. 2

      Google seems to be doing groupware pretty well.

      1. 2

        That’s more a function of the shambling advertising shoggoth they are than of their core business at the start.

        Go read about the history of project Chandler to see a groupware trainwreck in action.

        EDIT: Would anybody care to disagree that advertising is the cornerstone of Google’s business?

      2. 1

        Google has no groupware products. Buzz was their closest effort.

      3. 0

        And Facebook has done well making social software to get 22 year olds laid.

    19. 1

      I received one as well. State-sponsored attacks are not uncommon against the Banana Kingdom.

      Or maybe just another Tor user.