All our servers and company laptops went down at pretty much the same time. Laptops have been boot-looping to the blue screen of death. It’s all very exciting, personally, as someone not responsible for fixing it.

Apparently caused by a bad CrowdStrike update.

Edit: now being told we (who almost all generally work from home) need to come into the office Monday, as they can only apply the fix in person. We’ll see if that changes over the weekend…

  • Monument@lemmy.sdf.org · 4 months ago

    Honestly, kind of excited for the company blogs to start spitting out their disaster-recovery and crisis-management stories.

    I mean, this is just a giant test of disaster-recovery and crisis-management plans. And while there are absolutely real-world consequences to this, the fix almost seems scriptable.

    If a company has out-of-band management deployed (IPMI on servers, or Intel’s AMT, usually sold under the vPro brand, on client machines), and their network is intact/the devices are on their network, they ought to be able to remotely address this.
    But that’s obviously predicated on them having already deployed/configured the tools.
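
    For what it’s worth, the published remediation really was that simple: delete one bad channel file and reboot. A minimal sketch of the scripted version, assuming you can get the machine into Safe Mode, WinPE, or a remotely mounted recovery environment where the filesystem is writable:

    ```python
    import glob
    import os

    # Path follows CrowdStrike's published remediation guidance: boot into
    # Safe Mode / recovery and delete the faulty channel file.
    # This is a sketch, not a tool; run at your own risk.
    CHANNEL_FILE_GLOB = r"C:\Windows\System32\drivers\CrowdStrike\C-00000291*.sys"

    def remove_bad_channel_files():
        """Delete any matching channel files and report what was removed."""
        removed = []
        for path in glob.glob(CHANNEL_FILE_GLOB):
            os.remove(path)
            removed.append(path)
        return removed

    if __name__ == "__main__":
        for path in remove_bad_channel_files():
            print(f"removed {path}")
    ```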

  • YTG123@sopuli.xyz · 4 months ago

    >Make a kernel-level antivirus
    >Make it proprietary
    >Don’t test updates… for some reason??

    • CircuitSpells@lemmy.world · 4 months ago

      I mean, I know it’s easy to be critical, but this was my exact thought: how the hell didn’t they catch this in testing?

      • Voroxpete@sh.itjust.works · 4 months ago

        Completely justified reaction. A lot of the time tech companies and IT staff get shit for stuff that, in practice, can be really hard to detect before it happens. There are all kinds of issues that can arise in production that you just can’t test for.

        But this… this has no justification. An issue this immediate and this widespread would have been caught instantly by even the most basic testing. The fact that it wasn’t raises massive questions about the safety and security of CrowdStrike’s internal processes.
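
        To put a shape on “the most basic of testing”: even a tiny canary gate, where the update goes to a handful of machines first and the fleet-wide push waits on their heartbeats, would have tripped here, since affected hosts were going down within minutes. A hypothetical sketch (none of these names reflect CrowdStrike’s actual pipeline):

        ```python
        import time

        # Hypothetical canary gate. CANARY_HOSTS, deploy_to, and is_healthy
        # are illustrative stubs, not a real fleet-management API.
        CANARY_HOSTS = ["canary-01", "canary-02", "canary-03"]
        BOOT_GRACE_SECONDS = 300  # time for canaries to reboot and report in

        def deploy_to(host, update_id):
            """Stub: a real version would call the fleet-management API."""
            print(f"deploying {update_id} to {host}")

        def is_healthy(host):
            """Stub: a real version would poll the host's heartbeat endpoint."""
            return True

        def canary_gate(update_id):
            """Push to canaries first; block the wide rollout if any host dies."""
            for host in CANARY_HOSTS:
                deploy_to(host, update_id)
            time.sleep(BOOT_GRACE_SECONDS)
            return all(is_healthy(host) for host in CANARY_HOSTS)

        if __name__ == "__main__":
            if canary_gate("channel-291"):
                print("canaries healthy; continue staged rollout")
            else:
                print("canary down; halt the rollout")
        ```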

    • areyouevenreal@lemm.ee · 4 months ago

      Lots of security systems are at least partially kernel-level; that includes SELinux and AppArmor, by the way. Running in the kernel is a necessity for these things to actually be effective.

  • richtellyard@lemmy.world · 4 months ago

    This is going to be a Big Deal for a whole lot of people. I don’t know all the companies and industries that use CrowdStrike, but I’d guess it will result in airline delays, banking outages, and hospital computer systems failing. Hopefully nobody gets hurt because of it.

    • RegalPotoo@lemmy.world · 4 months ago

      A big chunk of New Zealand’s banks apparently run it, cos 3 of the big ones can’t do credit card transactions right now.

      • index@sh.itjust.works · 4 months ago

        > cos 3 of the big ones can’t do credit card transactions right now

        Bitcoin is still up and running; perhaps people can use that.