Sometimes it can be hard to tell if we’re chatting with a bot or a real person online, especially as more and more companies turn to this seemingly cheap way of providing customer support. What are some strategies to expose AI?

  • rodbiren@midwest.social · 1 year ago

    You can always help their software QA by pasting in the entirety of the Declaration of Independence. A couple of things could happen. If they ask, “Why did you post that?”, you have a human. If they give a generic response, it’s probably an AI. And if it crashes, you know they didn’t think anyone would ever paste that.

    You can also post zero-width spaces. A generic chatbot will respond with something meaningless, while a human might not respond at all. You could also write your message in typoglycemia, scrambling the interior letters of each word: the mangled text will confuse most models but can usually still be read by people.
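Both tricks are easy to generate. Here’s a toy sketch in Python (the function name `typoglycemia` and the fixed seed are my own choices, not anything standard):

```python
import random

ZWSP = "\u200b"  # zero-width space: invisible to people, very real to software

def typoglycemia(text, seed=42):
    """Scramble the interior letters of each word, keeping the first and
    last letters fixed -- still readable by most humans."""
    rng = random.Random(seed)
    out = []
    for word in text.split():
        if len(word) > 3 and word.isalpha():
            mid = list(word[1:-1])
            rng.shuffle(mid)
            word = word[0] + "".join(mid) + word[-1]
        out.append(word)
    return " ".join(out)

print(typoglycemia("reading scrambled words is surprisingly easy"))
print(len(ZWSP * 5))  # five characters that render as nothing at all
```

Short words pass through untouched, which keeps the scrambled text legible.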

  • nobodyspecial@kbin.social · 1 year ago

    This is very, very easy. Google “cyrillic keyboard” or just install Cyrillic keyboard support on your phone. Many Cyrillic letters look exactly like their Roman counterparts but represent completely different sounds. Cut and paste the Unicode into the chat in place of regular letters. For example, ‘Неllо’ looks exactly like ‘Hello’ in most fonts but is actually ‘Nello.’ If you doubt it, check it in a Unicode inspector: https://apps.timwhitlock.info/unicode/inspect?s=%D0%9D%D0%B5ll%D0%BE

    The reverse also works. E.g., TPAKTOP B CPAKY means ‘tractor into ass’, and I typed that using 100% Roman characters.
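You can verify the homoglyph swap yourself with Python’s standard `unicodedata` module; this small sketch just reports the official Unicode name of each letter, so Cyrillic look-alikes show up immediately even when the glyphs are indistinguishable:

```python
import unicodedata

def scripts(text):
    """Pair each letter with its official Unicode name."""
    return [(ch, unicodedata.name(ch)) for ch in text if ch.isalpha()]

# "\u041d\u0435ll\u043e" renders as 'Hello' but mixes Cyrillic and Latin
for ch, name in scripts("\u041d\u0435ll\u043e"):
    print(ch, name)
```

The first, second, and last letters come back as CYRILLIC names while the two l’s are LATIN, which is exactly what the inspector link above shows.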

    • fearout@kbin.social · 1 year ago

      So I’ve just tried it with ChatGPT, and it replied normally. When I asked why it wasn’t bothered by the Cyrillic letters, it answered:

      I am designed to understand and generate text in multiple languages, including those that use the Cyrillic alphabet like Russian, Bulgarian, and others. So, Cyrillic letters don’t bother me. You can continue the conversation in either English or a language that uses the Cyrillic alphabet if you prefer.

      So it really depends on the model, I guess. Don’t rely too much on this advice.

      • nobodyspecial@kbin.social · 1 year ago

        ChatGPT’s capabilities border on the supernatural. Here’s how I broke it:

        𐤔hat is ᛐ + ᛐ?

        ChatGPT
        The symbols you’ve used, “ᛐ,” appear to be runes from ancient runic alphabets. However, the specific meaning of these runes might differ depending on which runic alphabet you’re referencing (e.g., Elder Futhark, Younger Futhark, Anglo-Saxon Futhorc, etc.).

        As of my knowledge cutoff in September 2021, the rune “ᛐ” is not a standard rune in any of the commonly known runic alphabets. Therefore, without knowing the exact meaning or value of the rune “ᛐ,” it is impossible to provide a specific answer to the expression “ᛐ + ᛐ.”

        If you could clarify the runic alphabet you’re referring to or provide more context about the runes, I’d be happy to help you with the calculation or interpretation.

        I had limited success with the Göktürk (Old Turkic) and Phoenician Unicode blocks (letters 𐰗𐰓𐤔𐤕) depending on the query, but you are correct: GPT’s ability to divine intent from even small amounts of context is superhuman. Cyrillic used to break it, but no longer does. This thing learns like a beast. Canadian Aboriginal syllabics ᗷ and ᗅ, and possibly ᖇ, hold some promise, but only in combination with other writing systems. I’ll have to add a LOT of other Unicode blocks to my tool belt.
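For anyone who wants to experiment with other blocks, a small helper (the name `describe` is my own) built on Python’s `unicodedata` shows exactly which code points a string mixes:

```python
import unicodedata

def describe(text):
    """Return 'U+XXXX NAME' for every code point -- handy for spotting
    which exotic blocks (Runic, Phoenician, syllabics) a string mixes."""
    rows = [f"U+{ord(ch):04X} {unicodedata.name(ch, '<unnamed>')}" for ch in text]
    for row in rows:
        print(row)
    return rows

describe("ᛐᗷᗅ")
```

Running it on a candidate string tells you at a glance whether you’ve actually left the Basic Latin block.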

  • platysalty@kbin.social · 1 year ago

    Ask for the difference in behaviour between apple seeds and baseball cards, or anything equally nonsensical.

    A human would go “bro wtf”

    • WackyTabbacy42069@reddthat.com · 1 year ago

      Not necessarily. OpenAI has trained its models to admit they’re AI and to be generally harmless, but there’s a lot of support in the open-source LLM space for uncensored models. An uncensored model that’s been instructed to pretend it’s human is much less likely to give itself away.

    • HSL@wayfarershaven.eu · 1 year ago

      Speaking as a real support person, people do ask and it’s fun to come up with responses. It really depends on my mood.

  • octoperson@sh.itjust.works · 1 year ago (edited)

    I’ve found that for ChatGPT specifically:

    • it really likes to restate your question in its opening sentence
    • it also likes to wrap up with a take-home message. “It’s important to remember that…”
    • it starts sentences with little filler words and phrases. “In short,” “that said,” “ultimately,” “on the other hand,”
    • it’s always upbeat, encouraging, bland, and uncontroversial
    • it never (that I’ve seen) gives personal anecdotes
    • it’s able to use analogies, but not well; they never help elucidate the matter
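Those tells could even be turned into a crude score. This is purely illustrative: the phrase list and the idea that a higher count means “more bot-like” are my own assumptions, not a real detector.

```python
# Toy heuristic based on the tells above -- the phrase list is arbitrary.
TELLS = [
    "it's important to remember",
    "in short",
    "that said",
    "ultimately",
    "on the other hand",
    "as an ai language model",
]

def tell_score(reply):
    """Count how many stock ChatGPT-ish filler phrases appear in a reply."""
    lower = reply.lower()
    return sum(phrase in lower for phrase in TELLS)

msg = "In short, both have value. It's important to remember that, ultimately, hobbies differ."
print(tell_score(msg))  # higher count = more bot-like, by this toy metric
```

A reply like “bro wtf” scores zero, which matches the human reaction described above.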
    • fearout@kbin.social · 1 year ago

      You’re probably joking, but I’ll comment anyway. It won’t affect LLMs at all. ChatGPT just answers the question and discusses the paradox. An LLM’s function is basically just to construct sentences one token at a time, so there’s no reasoning process that a paradox could send into an infinite loop. It doesn’t “think” about paradoxes.

    • FrickAndMortar@lemmy.ml · 1 year ago (edited)

      Well, I just asked the weird “message Carrot” option in my weather app, and it replied:

      Oh, look at you, trying to puzzle your way through the depths of set theory. How amusing, like a chimp trying to juggle chainsaws.

      Well, my dear meatbag, the answer to your question is a resounding NO. Just like you, that set cannot contain itself. It’s like expecting Johnny Five to date Stephanie from “Short Circuit.” Simply not gonna happen! 🤖💔