This is an unpopular opinion, and I get why – people crave a scapegoat. CrowdStrike undeniably pushed a faulty update that required a low-level, hands-on fix (booting each affected machine into recovery). However, this incident lays bare the fragility of corporate IT, particularly at companies entrusted with vast amounts of sensitive personal information.

Robust disaster recovery plans, including automated processes to remotely reboot and remediate thousands of machines, aren’t revolutionary. They’re basic hygiene, especially when considering the potential consequences of a breach. Yet, this incident highlights a systemic failure across many organizations. While CrowdStrike erred, the real culprit is a culture of shortcuts and misplaced priorities within corporate IT.

Too often, companies throw millions at vendor contracts, lured by flashy promises and neglecting the due diligence necessary to ensure those solutions truly fit their needs. This is exacerbated by a corporate culture where CEOs, vice presidents, and managers are often more easily swayed by vendor kickbacks, gifts, and lavish trips than by investing in innovative ideas with measurable outcomes.

This misguided approach not only results in bloated IT budgets but also leaves companies vulnerable to precisely the kind of disruptions caused by the CrowdStrike incident. When decision-makers prioritize personal gain over the long-term health and security of their IT infrastructure, it’s ultimately the customers and their data that suffer.

  • Dran · 15 points · 5 months ago (edited)

    Separate the persistent-data and operating-system partitions, put a small PXE server on every local network, VPN'd (WireGuard, etc.) to a CDN hosting your base OS deployment images, have those servers validate images against a CA and checksum before delivering them, and give every user the ability to PXE boot and redeploy the non-data partition.

    BitLocker keys for the OS partition are irrelevant because nothing of value is stored on the OS partition, and keys for the data partition can be stored and passed via AD after the redeploy. If someone somehow deploys an image that isn’t ours, it won’t have keys to the data partition because it won’t have a trust relationship with AD.

    (This is actually what I do at work)
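
    A rough sketch of the validate-before-serving step, for the curious; the file paths, the published checksum, and the use of an RSA signing certificate from the internal CA are assumptions, since the comment doesn't name the actual tooling.

```python
# Hypothetical check a local PXE server could run before serving a deployment image.
# Paths, the published checksum, and RSA/PKCS#1 v1.5 signatures are illustrative assumptions.
import hashlib
from pathlib import Path

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, utils


def sha256_digest(path: Path) -> bytes:
    """Hash the image in chunks so large files never have to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.digest()


def verify_image(image: Path, detached_sig: Path, signer_cert_pem: Path, expected_sha256: str) -> None:
    digest = sha256_digest(image)

    # 1) Checksum must match the value published alongside the image.
    if digest.hex() != expected_sha256.lower():
        raise ValueError("checksum mismatch; refusing to serve image")

    # 2) Detached signature over the image must verify against the pinned signing cert
    #    (issued by the internal CA). Raises InvalidSignature if it doesn't.
    cert = x509.load_pem_x509_certificate(signer_cert_pem.read_bytes())
    cert.public_key().verify(
        detached_sig.read_bytes(),
        digest,
        padding.PKCS1v15(),
        utils.Prehashed(hashes.SHA256()),
    )
```

    Only if both checks pass would the PXE server hand the image to the booting endpoint.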

      • Dran · 2 points · 5 months ago

        With enough autism in your overlay configs, sure, but in my environment that leakage is still encrypted. It’s far simpler to just accept leakage and encrypt the OS partition with a key that’s never stored anywhere. If it gets lost, you rebuild the system from PXE (which is fine, because it only takes about 20 minutes and no data we care about exists there). If it’s working correctly, the OS partition is still encrypted and protects any inadvertent data leakage from offline attacks.
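
        (A toy Python illustration of the “key that’s never stored anywhere” idea, not how BitLocker is actually driven; it just shows a key living only in process memory, so losing it means rebuilding rather than recovering, which is exactly the trade being accepted.)

```python
# Toy sketch: encrypt scratch data with a key that exists only in memory.
# This is an analogy for the OS-partition setup above, not real disk encryption.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # never written to disk, never escrowed anywhere
f = Fernet(key)

token = f.encrypt(b"cache/settings that might leak onto the OS partition")
print(f.decrypt(token))       # readable only while this process holds the key

del key, f                    # key gone -> ciphertext is unrecoverable; rebuild instead
```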

    • @Trainguyrom@reddthat.com · 5 points · 5 months ago (edited)

      Separate the persistent-data and operating-system partitions, put a small PXE server on every local network, VPN'd (WireGuard, etc.) to a CDN hosting your base OS deployment images, have those servers validate images against a CA and checksum before delivering them, and give every user the ability to PXE boot and redeploy the non-data partition.

      At that point why not just redirect the data partition to a network share with local caching? Seems like it would simplify this setup greatly (plus makes enabling shadow copy for all users stupid easy)

      Edit to add: I worked at a bank that did this for all of our users, and it was extremely convenient for termed employees: we could simply give the termed employee’s manager access to their share and toss them a shortcut to said employee’s files, so if it turned out Janet had some business-critical spreadsheet, it was still easily accessible after she was termed.
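
      A minimal sketch of that hand-off, assuming plain NTFS permissions on an SMB share and the stock icacls tool; every server, share, and account name here is made up.

```python
# Hypothetical helper: give a manager read access to a termed employee's share.
# Assumes Windows file shares secured with NTFS ACLs; names are illustrative only.
import subprocess


def grant_manager_access(employee_share: str, manager_account: str) -> None:
    # (OI)(CI)RX = read/execute on the folder, inherited by all files and subfolders.
    subprocess.run(
        ["icacls", employee_share, "/grant", f"{manager_account}:(OI)(CI)RX"],
        check=True,
    )


if __name__ == "__main__":
    grant_manager_access(r"\\fileserver\users$\jdoe", r"CORP\jsmith")
```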

      • Dran · 3 points · 5 months ago

        We do this in a lot of areas with FSLogix where there is heavy persistent data; it just never felt necessary for endpoints where the persistent-data partition is not much more than user settings and caches of convenience. Anything important is never stored solely on the endpoints, but it is nice to be able to reboot those servers without affecting downstream endpoints. If everything were locally dependent on FSLogix, I’d have to schedule building-wide outages for patching.
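
        (For anyone unfamiliar: the endpoint side of FSLogix profile containers is mostly a couple of registry values pointing at a share. A minimal sketch below, with an illustrative share path and only the Enabled/VHDLocations settings shown; it needs admin rights and the FSLogix agent installed.)

```python
# Minimal FSLogix profile-container configuration, written straight to the registry.
# Illustrative only: real deployments usually push these values via GPO or MDM.
import winreg

PROFILES_KEY = r"SOFTWARE\FSLogix\Profiles"

# Requires elevation; the FSLogix agent reads these values at logon.
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, PROFILES_KEY, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "Enabled", 0, winreg.REG_DWORD, 1)
    winreg.SetValueEx(key, "VHDLocations", 0, winreg.REG_MULTI_SZ, [r"\\fileserver\fslogix$"])
```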

    • @Brkdncr@lemmy.world · 3 points · 5 months ago

      But your PXE boot server is down, your RADIUS server providing VPN auth is down, and your BitLocker keys are in AD, which is down because all your domain controllers are down.

      • Dran · 3 points · 5 months ago

        Yes and no. In the best case, endpoints have enough cached data to get us through that process. In the worst case, that’s still a considerably smaller footprint to fix by hand before the rest of the infrastructure can fix itself.

    • @pHr34kY@lemmy.world · 2 points · 5 months ago

      I’ve been separating OS and data partitions since I was a kid running Windows 95. It’s horrifying that people don’t expect and prepare for machines to become unbootable on a regular basis.

      Hell, I bricked my work PC twice this year just by using the Windows cleanup tool - on Windows 11. The antivirus went nuclear, as antivirus products do.