Building on an anti-spam cybersecurity tactic known as tarpitting, he created Nepenthes, malicious software named after a carnivorous plant that will “eat just about anything that finds its way inside.”

Aaron clearly warns users that Nepenthes is aggressive malware. It’s not to be deployed by site owners uncomfortable with trapping AI crawlers and sending them down an “infinite maze” of static files with no exit links, where they “get stuck” and “thrash around” for months, he tells users. Once trapped, the crawlers can be fed gibberish data, aka Markov babble, which is designed to poison AI models. That’s likely an appealing bonus feature for any site owners who, like Aaron, are fed up with paying for AI scraping and just want to watch AI burn.
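
For illustration, here is a minimal sketch of the Markov-babble idea (not Nepenthes' actual code, and seed.txt is a hypothetical corpus file): a toy word-level chain that learns which words tend to follow which, then emits text with plausible statistics and no meaning.

```python
import random
from collections import defaultdict

def build_chain(corpus, order=2):
    """Map each pair of consecutive words to the words seen following it."""
    words = corpus.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def babble(chain, length=200):
    """Emit locally plausible, globally meaningless text."""
    state = random.choice(list(chain))
    out = list(state)
    for _ in range(length):
        followers = chain.get(tuple(out[-2:]))  # assumes order=2
        if not followers:                        # dead end: jump to a random state
            out.extend(random.choice(list(chain)))
            continue
        out.append(random.choice(followers))
    return " ".join(out)

# seed.txt is any text lying around; a tarpit would presumably use a much larger corpus.
print(babble(build_chain(open("seed.txt").read())))
```

Served at scale, text like this has the right word statistics and zero information, which is the point of the poisoning.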

  • LovableSidekick@lemmy.world · 1 day ago

    OTOH infinite loop detection is a well-known coding issue with well-known, freely available solutions, so this approach will only affect the lamest implementations of AI.

    • vrighter@discuss.tchncs.de · 1 day ago

      An infinite loop detector detects when you’re going round in circles. It can’t detect when you’re going down an infinitely deep acyclic graph, because that, by definition, doesn’t have any loops to detect. The best it can do is set a threshold after which it gives up.
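
      Roughly like this sketch, where fetch_links() is a hypothetical helper that returns the URLs found on a page:

      ```python
      from collections import deque

      MAX_DEPTH = 20      # give-up threshold for endlessly deep (but acyclic) link mazes
      MAX_PAGES = 10_000  # overall budget so one site can't eat the whole crawl

      def crawl(start_url, fetch_links):
          """Breadth-first crawl that abandons any branch past a fixed depth."""
          queue = deque([(start_url, 0)])
          seen = {start_url}
          fetched = 0
          while queue and fetched < MAX_PAGES:
              url, depth = queue.popleft()
              if depth >= MAX_DEPTH:
                  continue  # threshold hit: give up on this branch
              fetched += 1
              for link in fetch_links(url):
                  if link not in seen:
                      seen.add(link)
                      queue.append((link, depth + 1))
          return fetched
      ```

      There is nothing to detect here, only a budget to exhaust; the numbers are arbitrary.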

      • LovableSidekick@lemmy.world · 1 day ago

        You can detect path points that come up repeatedly and avoid pursuing them further; technically that isn’t called “infinite loop” detection, but I don’t know the correct name. The point is that the software isn’t a Star Trek robot that starts smoking and bricks itself when it hears something illogical.

        • Crassus@feddit.nl · 1 day ago

          It can detect cycles. From a quick look at the demo of this tool, it (slowly) generates some garbage text, after which it places 10 random links. Each of these links leads to a newly generated page. So although generating the same link twice will surely happen, the chance that all 10 of the links on a page have already been generated before is small.
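
          As a toy stand-in for that behavior (not the actual tool, just Python’s standard http.server): every request gets filler text plus 10 links to paths that won’t exist until something asks for them.

          ```python
          import random
          import string
          from http.server import BaseHTTPRequestHandler, HTTPServer

          def random_path():
              # 26^12 possible names, so repeating a previously generated link is vanishingly rare
              return "/" + "".join(random.choices(string.ascii_lowercase, k=12))

          class TarpitHandler(BaseHTTPRequestHandler):
              def do_GET(self):
                  # Garbage body plus 10 links, each to a page that is generated only when visited
                  links = " ".join(f'<a href="{random_path()}">more</a>' for _ in range(10))
                  body = f"<html><body><p>{'babble ' * 100}</p>{links}</body></html>"
                  self.send_response(200)
                  self.send_header("Content-Type", "text/html")
                  self.end_headers()
                  self.wfile.write(body.encode())

          if __name__ == "__main__":
              HTTPServer(("localhost", 8080), TarpitHandler).serve_forever()
          ```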

          • LovableSidekick@lemmy.world · 1 day ago

            I would simply add links to a list as they’re visited and never revisit any. And that’s just simple web crawler logic, not even AI. Web crawlers that avoid problems like that are beginner/intermediate computer science homework.
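
            A sketch of that logic, again with a hypothetical fetch_links() helper:

            ```python
            def crawl_once_each(start_url, fetch_links):
                """Visit every URL at most once: textbook crawler deduplication."""
                visited = set()
                frontier = [start_url]
                while frontier:
                    url = frontier.pop()
                    if url in visited:
                        continue
                    visited.add(url)
                    for link in fetch_links(url):
                        if link not in visited:
                            frontier.append(link)
                return visited
            ```

            This only terminates when the set of reachable URLs is finite.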

            • dev_null@lemmy.ml · 1 day ago

              There are no loops and no repeated links to avoid. Every link leads to a brand new, freshly generated page with another set of brand new, never-before-seen links. You can go deeper and deeper forever without hitting a loop.

                • vrighter@discuss.tchncs.de · 3 hours ago

                  What part of “they do not repeat” do you still not get? You can put them in a list, but you won’t ever get a hit, so it’d just be wasting memory.