i’m lizard 🦎

  • 1 Post
  • 78 Comments
Joined 1 year ago
Cake day: June 15th, 2023


  • It’s not what the buttons look like, it’s what they do. In Krita, making an ellipse involves clicking the ellipse button and dragging out a shape. You now have an ellipse, and you hold Shift if you want a circle instead.

    In GIMP there is no direct ellipse tool, only an ellipse select tool; likewise, you hold Shift to make it a circle. Then you use a menu item to select the border of your selection, which opens a popup asking how many pixels wide the border should be. And then you use the fill tool or a fill menu item to fill it (the whole chain is sketched as a script below). That’s a surprising number of clicks for what’s most likely the single most common task for anyone opening a screenshot in an image editor. I’m not aware of any easier or faster way to do it. It feels like one should exist, but this is also what you get if you search for how to draw a circle in GIMP, so if it exists, everyone’s missing it.

    GIMP’s method gives you more power, but you rarely need that power, and when you do, Krita also has ellipse select, border select and various fill tools that can be strung together in the same way.
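
    The GIMP chain can also be scripted. A minimal sketch of the same select → border → fill sequence, assuming GIMP 2.10’s Python-Fu console (the coordinates and border width are made-up examples):

      # Minimal sketch: GIMP's circle workflow as Python-Fu calls.
      # Select an ellipse, keep only its border, then fill that border.
      from gimpfu import *  # provides gimp, pdb and the enum constants (GIMP 2.10 names)

      image = gimp.image_list()[0]                      # the open screenshot
      drawable = pdb.gimp_image_get_active_drawable(image)

      # Equal width and height makes the ellipse a circle (the Shift-drag equivalent).
      pdb.gimp_image_select_ellipse(image, CHANNEL_OP_REPLACE, 100, 100, 200, 200)
      pdb.gimp_selection_border(image, 5)               # keep a 5 px ring of the selection
      pdb.gimp_edit_fill(drawable, FILL_FOREGROUND)     # paint it with the foreground color
      pdb.gimp_selection_none(image)
      gimp.displays_flush()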



  • Test files often represent states that can’t be represented in the library proper. Things like “a tree where node A is a child of B and node B is a child of A”, “the previous instruction repeated x times” where x was never set or there was no previous instruction, or weird combinations of mutually exclusive effects. More often than not, you can’t really generate those using the library itself, as libraries tend to be written to reject those kinds of invalid states (there’s only so much you can do about that in C, but in functional programming land, “make invalid states unrepresentable” is a straight-up mantra).

    Even if you did manage to do that, using the system under test to generate test data for the system under test is generally not very useful by itself; you’d need some kind of extra protection on top to make sure the actual test files stay identical between revisions, like hashing them (sketched at the end of this comment). Otherwise, a major incompatibility could easily be overlooked. But that also makes it hard to make any kind of valid change to the library at all. Worse yet, some libraries don’t implement everything needed to generate the test files: even xz is missing pieces, for example it has an lzip decompressor but no compressor.

    There are arguments to be made for separating the test system from the main distribution, but the end result would likely be that nobody runs the test suite at all. It’s difficult enough to get distros to run it in the first place.
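
    As for the hashing idea above, a rough sketch of pinning a SHA-256 per checked-in fixture (the file names and digests here are hypothetical placeholders):

      # Sketch: pin a SHA-256 for each binary test fixture, so any change to a
      # fixture has to show up as an explicit, reviewable edit to this manifest.
      import hashlib
      import pathlib

      # Hypothetical layout: fixtures in tests/files, expected digests listed here.
      MANIFEST = {
          "good-1.lzma": "8c0b7d7e...",        # placeholder digest
          "bad-truncated.xz": "1f4a9c22...",   # placeholder digest
      }

      def verify_fixtures(root="tests/files"):
          for name, expected in MANIFEST.items():
              digest = hashlib.sha256(pathlib.Path(root, name).read_bytes()).hexdigest()
              if digest != expected:
                  raise AssertionError(f"{name} changed: {digest} != {expected}")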


  • These things are always easy to say in hindsight, but I do believe that a closer review of the build system shenanigans used to install the backdoor would have at least raised some questions.

    Nobody noticed it because nobody reviews autotools spaghetti, and especially not autotools spaghetti that only exists as shipped in a tarball. Minor differences in those files are perfectly normal, since their contents are copied in from the shared autoconf-archive project but every distro ships a different version of that, so what any given file looks like depends on the maintainer’s computer. And hardly anybody has a good grasp of what any given line in a .m4 file will ultimately cause to be executed anyway, so why bother investigating differences? The maintainer of Meson has a good take on this.

    Shipping tarballs without any generated files and having a process to validate release tarballs against the repo would be a good step, but that’s much easier said than done, for a variety of reasons. The same can be said for shipping without any binary files in the repo: there’s a lot of value in integration tests, and xz’s README for the test blobs has correctly included this paragraph for 16 years:

    Many of the files have been created by hand with a hex editor, thus there is no better “source code” than the files themselves.


  • For any given tag, GitHub will always have an autogenerated “archive/” link, but the “release/” link is a set of maintainer-uploaded blobs. In this situation, those are the compromised ones. Any distro pulling from an “archive/” link would be unaffected, but I don’t know of any doing that.

    The problem with the “archive/” links is that GitHub reserves the right to change them. They’re promising to give notice, but it’s just not a good situation. The “release/” links only change if the maintainer tries something funny, so the distro’s usual mechanism of checking the download against a pinned hash (sketched below) normally suffices.

    NixOS 23.11 is indeed not affected.
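
    Those usual hash checks amount to little more than pinning a digest next to the URL in the package recipe; a rough sketch of the idea (the URL and hash are hypothetical placeholders):

      # Sketch of the usual distro-side check: fetch the maintainer-uploaded
      # release tarball and refuse it if it doesn't match the pinned digest.
      import hashlib
      import urllib.request

      URL = "https://github.com/example/project/releases/download/v1.2.3/project-1.2.3.tar.xz"  # hypothetical
      PINNED_SHA256 = "0123abcd..."  # recorded when the package was last bumped; placeholder

      data = urllib.request.urlopen(URL).read()
      if hashlib.sha256(data).hexdigest() != PINNED_SHA256:
          raise SystemExit("checksum mismatch: the release artifact changed since it was pinned")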




  • Sam James’s FAQ is by far the best single source; it links to other solid sources for more in-depth technical detail and also lightly debunks a few things.

    The main thing sources online disagree on is which distros are affected. That’s because it’s not a simple yes/no: some distros are taking a nuanced approach in their public communication, while others have reached for the sledgehammer to get people to upgrade their systems, keeping the nuance in the back room where the audience understood that not everything was known yet. Some distros are underselling how vulnerable they were, others are overselling it.



  • The base runtime pretty much every Flatpak uses includes xz/liblzma, but none of the affected versions are in it. You can poke around in a base runtime shell with flatpak run --command=sh org.freedesktop.Platform//23.08 or similar, and check which runtimes you have installed with flatpak list --runtime (a scripted version of this check is sketched at the end of this comment).

    23.08 is the current latest version used by most apps on Flathub and includes xz 5.4.6. 22.08 is an older version you might also still have installed and includes xz 5.2.12. They’re both pre-backdoor.

    It seems there’s an issue open on the freedesktop-sdk repo to revert xz to an even earlier version, one predating the backdoor author’s significant involvement in xz, which some other distros are also doing out of an abundance of caution.

    So, as far as we know: nothing uses the backdoored version; even if something did, the backdoor wouldn’t be compiled in (org.freedesktop.Platform isn’t built using Deb or RPM packaging, and that’s one of the backdoor’s conditions); even if it were compiled in, it would, to our current knowledge, only affect sshd; the runtime doesn’t include an sshd at all; and they’re still being extra cautious anyway.

    One caveat: There is an unstable version of the runtime that does have the backdoored version, but that’s not used anywhere (I don’t believe it’s allowed on Flathub since it entirely defeats the point of it).
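
    The scripted check mentioned above could look roughly like this: print the xz version in each installed runtime, assuming each runtime puts an xz binary on its default path:

      # Sketch: print the xz version shipped in each installed Flatpak runtime.
      import subprocess

      refs = subprocess.run(
          ["flatpak", "list", "--runtime", "--columns=ref"],
          capture_output=True, text=True, check=True,
      ).stdout.split()

      for ref in refs:
          result = subprocess.run(
              ["flatpak", "run", "--command=xz", ref, "--version"],
              capture_output=True, text=True,
          )
          version = result.stdout.splitlines()[0] if result.stdout else result.stderr.strip()
          print(f"{ref}: {version}")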


  • Reproducible builds generally work from the published source tarballs, as those tend to be easier to mirror and archive than a Git repository is. The GPG-signed source tarball includes all of the code to build the exploit.

    The Git repository does not include the code to build the backdoor (though it does include the actual backdoor itself, as a binary “test file”; it simply sits there unused).

    Verifying that the tarball and the Git repository match would be neat, but it isn’t a focus of any existing reproducible-build project that I know of. It probably should be, but quite a number of projects have legitimate differences in their tarballs, often pre-generating things like autotools configure scripts and man pages so that you get a relaxed ./configure && make && make install build without having to hunt down all of the necessary generators.
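
    A sketch of what such a check could look like: list the files that appear in the release tarball but not in git archive of the matching tag, minus an allow-list of legitimately generated files (the project name, tag and allow-list here are hypothetical):

      # Sketch: compare a release tarball against `git archive` of the same tag.
      # Anything present only in the tarball should be on an explicit allow-list
      # of release-time generated files. This compares file names only; comparing
      # contents would be the obvious next step.
      import io
      import subprocess
      import tarfile

      TAG = "v1.2.3"                       # hypothetical tag
      TARBALL = "project-1.2.3.tar.gz"     # hypothetical maintainer-uploaded tarball
      GENERATED_OK = {"configure", "Makefile.in", "aclocal.m4"}  # example allow-list

      def names(tf):
          # Drop the leading "project-1.2.3/"-style prefix before comparing.
          return {m.name.split("/", 1)[1] for m in tf.getmembers() if "/" in m.name}

      git_tar = subprocess.run(
          ["git", "archive", "--format=tar", "--prefix=repo/", TAG],
          capture_output=True, check=True,
      ).stdout

      with tarfile.open(fileobj=io.BytesIO(git_tar)) as repo, tarfile.open(TARBALL) as release:
          only_in_tarball = names(release) - names(repo) - GENERATED_OK
          if only_in_tarball:
              print("files only in the release tarball:", sorted(only_in_tarball))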








  • Storj is blockchain stuff, with the storage and bandwidth provided by individual node operators. They’ve kinda tried to bury the blockchain angle and generally keep it out of their main signup/pricing/usage flow; customers pay in USD and never have to see any of it. But it’s still there in the background, and it’s still the main reward system for node operators.

    There’s some clickwrapped T&Cs for operators that set some minimum requirements, they’ve made sure one node leaving doesn’t cause data loss, but I’d still be very wary of using them for anything irreplaceable. It only takes one crypto crash or the like for the whole thing to die out, and while they might end up suing some guys running an old NAS out of their garage, that’s not gonna get your data back.