Meta “programmed it to simply not answer questions,” but it did anyway.

  • doodledup@lemmy.world · 3 months ago

    AI doesn’t know what’s wrong or correct. It hallucinates every answer. It’s up to the supervisor to determine whether it’s wrong or correct.

    Mathematically verifying the correctness of these algorithms is a hard problem. That’s intentional: it’s the trade-off for the incredible efficiency.

    Besides, it can only “know” what it has been trained on. It shouldn’t be surprising that it cannot answer questions about the Trump shooting. Anyone who thinks otherwise simply doesn’t know how to use these models.

    • snooggums@midwest.social · 3 months ago

      It is impossible to mathematically determine if something is correct. Literally impossible.

      At best the most popular answer, even if it is narrowed down to reliable sources, is what it can spit out. Even that isn’t the same thing as consensus, because AI is not intelligent.

      If the ‘supervisor’ has to determine whether it is right or wrong, what is the point of AI as a source of knowledge?

      • doodledup@lemmy.world · 3 months ago

        It is impossible to mathematically determine if something is correct. Literally impossible.

        No, you’re wrong. You can indeed prove the correctness of a neural network. You can prove the correctness of many other things, too. Proof is the most integral part of mathematics and computer science.

        For example, a very simple proof: define an even number as 2k for some integer k. Then you can prove that the sum of two even numbers is again an even number (and that proof is definite): 2a + 2b = 2(a + b), and since a + b is itself an integer, the sum has the form 2k.
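
        For the skeptics, that same little fact can even be checked by a machine. A minimal sketch in Lean 4, assuming Mathlib is available for its `ring` tactic (the theorem name is mine):

        ```lean
        import Mathlib.Tactic

        -- "Even" means n = 2k for some integer k.
        -- The witness for the sum is k = a + b; `ring` closes 2*a + 2*b = 2*(a + b).
        theorem even_add_even (a b : ℤ) : ∃ k : ℤ, 2 * a + 2 * b = 2 * k :=
          ⟨a + b, by ring⟩
        ```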

        Obviously, proving more complex mathematical claims, like properties of an AI model, is more involved. But that’s why we have scientists who work on that.

        At best the most popular answer, even if it is narrowed down to reliable sources, is what it can spit out. Even that isn’t the same thing as consensus, because AI is not intelligent.

        That is correct. But it’s not a limitation; it’s by design. It’s the trade-off for the efficiency of the models. It’s like lossy JPEG compression: you accept some artifacts, but in return you get much smaller images and much faster loading times.

        But there are indeed “AIs” and neural networks that have been proven correct. This is mostly applied to safety-critical applications like airplane collision-avoidance systems or DAS. But a language model is not safety-critical, so we take full advantage.

        If the ‘supervisor’ has to determine whether it is right or wrong, what is the point of AI as a source of knowledge?

        You’re completely misunderstanding the whole thing. The only reason why it’s so incredibly good in many applications is because it’s bad in others. It’s intentionally designed that way. There are exact algorithms and there are approximation algorithms; the latter tend to be much more efficient and usable in practice.
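
        A toy illustration of that exact-vs-approximate trade-off, in Python (the subset-sum instance and both functions are made up for this sketch, not anything from the thread):

        ```python
        from itertools import combinations

        def exact_subset_sum(items, target):
            """Exhaustive search: always finds an answer if one exists, O(2^n) time."""
            for r in range(len(items) + 1):
                for combo in combinations(items, r):
                    if sum(combo) == target:
                        return combo
            return None

        def greedy_subset_sum(items, target):
            """Greedy heuristic: fast (one sorted pass), but can miss solutions."""
            picked, total = [], 0
            for x in sorted(items, reverse=True):
                if total + x <= target:
                    picked.append(x)
                    total += x
            return tuple(picked) if total == target else None

        items, target = [9, 8, 7], 15
        print(exact_subset_sum(items, target))   # (8, 7): correct, but exponential time
        print(greedy_subset_sum(items, target))  # None: fast, but wrongly gives up
        ```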

        • Cethin@lemmy.zip · 3 months ago

          You can prove some things are correct, like math problems (assuming the axioms they are based on are also correct).

          You can’t prove that things like events having happened are correct. That’s even a philosophical issue with human memory: we can’t prove anything in the past actually happened. We can hope that our memory of events is accurate and reliable and work from there, but it can’t actually be proven. In theory, everything before this moment could have just been implanted into our minds. This is incredibly unlikely (and not a useful assumption anyway), but it can’t be ruled out.

          If we could prove events in the past are true, we wouldn’t have so many pseudo-historians making up crazy things about the pyramids or whatever else. We can collect evidence and make inferences, but we can’t prove it, because it is no longer happening. There’s a chance that we miss something, or that some information can’t be recovered.

          LLMs are algorithms that use large amounts of data to identify correlations. You can tune them to give more unique answers or more consistent answers (among other settings), but they aren’t intelligent. They are, at best, correlation finders. If you give them bad data (internet conversations) or incomplete data, then at best they will (usually confidently) give back bad information. People who don’t understand how they work assume they’re actually intelligent and can do more than this. This is dangerous and should be dispelled quickly, or they believe any garbage it spits out, like the example from this post.
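
          (The “tuning” above is, in the simplest case, the sampling temperature. A rough Python sketch of the idea; the logits are invented numbers, not from any real model:)

          ```python
          import math, random

          def sample_token(logits, temperature=1.0):
              """Low temperature -> consistent picks; high -> varied, error-prone picks."""
              scaled = [l / temperature for l in logits]
              m = max(scaled)  # subtract the max for numeric stability
              weights = [math.exp(s - m) for s in scaled]
              return random.choices(range(len(logits)), weights=weights)[0]

          logits = [2.0, 1.0, 0.1]                      # scores for 3 candidate tokens
          print(sample_token(logits, temperature=0.1))  # almost always token 0
          print(sample_token(logits, temperature=2.0))  # often token 1 or 2
          ```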

          • rottingleaf@lemmy.world · 3 months ago

            You can’t prove that things like events having happened are correct.

            You can’t, and that’s so solid it shouldn’t even need to be discussed.

            What should be discussed is whether you can make a machine capable of reasoning.

            There’s symbolic logic, so you can maybe someday make a machine that constructs correct syllogisms, detects incorrect ones, and so on.
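
            The deductive half of that is already easy to mechanize in toy form. A tiny Python sketch of a syllogism-chain checker (the “facts” are the classic Socrates example; everything else is hypothetical):

            ```python
            def implies_all(facts, s, p):
                """Check 'All s are p' from known 'All a are b' facts, transitively."""
                seen, frontier = set(), {s}
                while frontier:
                    x = frontier.pop()
                    if x == p:
                        return True
                    seen.add(x)
                    frontier |= {b for (a, b) in facts if a == x and b not in seen}
                return False

            facts = {("greeks", "men"), ("men", "mortals")}
            print(implies_all(facts, "greeks", "mortals"))  # True: valid chain
            print(implies_all(facts, "mortals", "greeks"))  # False: converse not licensed
            ```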

            People who don’t understand how they work assume they’re actually intelligent and can do more than this. This is dangerous and should be dispelled quickly, or they believe any garbage it spits out, like the example from this post.

            Sadly, there’s that archetype of “the narrow-minded, uncool scientist versus the cool, brave inventor,” which means actively dispelling the myth may do harm: people who don’t understand will map the situation onto that archetype, and it will reinforce their belief.

          • doodledup@lemmy.world · 3 months ago

            Well, but that kind of correctness applies to everything. By that logic, you can’t believe anything. I’m talking about an entirely different kind of correctness, like resistance against certain adversarial attacks. Of course, proving that the model is always correct is as complicated as modelling the entire reality. That’s infeasible. But it’s also infeasible for every other piece of software.
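
            (For the curious: one standard way to prove a property like adversarial robustness is to bound a network’s outputs over a whole box of inputs at once, e.g. with interval bound propagation. A toy Python sketch for one affine layer; the weights are invented:)

            ```python
            def interval_affine(lo, hi, W, b):
                """Tight bounds on y = W @ x + b over the input box lo <= x <= hi."""
                out_lo, out_hi = [], []
                for row, bias in zip(W, b):
                    out_lo.append(bias + sum(w * (l if w >= 0 else h)
                                             for w, l, h in zip(row, lo, hi)))
                    out_hi.append(bias + sum(w * (h if w >= 0 else l)
                                             for w, l, h in zip(row, lo, hi)))
                return out_lo, out_hi

            # If the output range provably avoids a "bad" region, the property
            # holds for every input in the box -- that is the proof.
            W, b = [[1.0, -2.0], [0.5, 0.3]], [0.1, -0.2]
            print(interval_affine([0.0, 0.0], [1.0, 1.0], W, b))
            # ([-1.9, -0.2], [1.1, 0.6]): per-output lower and upper bounds
            ```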

            • conciselyverbose@sh.itjust.works · 3 months ago

              It’s not pedantic. You can mathematically prove math.

              You can’t mathematically/algorithmically prove an event happened or did not happen.

              • otp@sh.itjust.works · 3 months ago

                Adding “mathematically/algorithmically” in front of the word “prove” as if it were always implicitly there, and suggesting that it’s the only way we should be using the word “prove” seems pretty darned pedantic to me.

                • conciselyverbose@sh.itjust.works · 3 months ago

                  We’re describing the behavior of software. It must be implicitly included. Software cannot do anything that isn’t algorithmic.

              • rottingleaf@lemmy.world · 3 months ago

                You can prove things in mathematical logic, and you can tie that (not 1-to-1) to symbolic logic over natural language; since it’s not 1-to-1, because of the ambiguity of symbols, there will be much more complexity. I personally think that the future of various machine assistants lies there, and that what LLMs do now will be used in auxiliary roles for that.

                • conciselyverbose@sh.itjust.works · 3 months ago

                  The problem is that mathematical proofs rely on the basic premise that the underlying assumptions are rock solid, and that the rules of the math are rock solid. It’s rigorous logic rules, applied mathematically.

                  The real world is Bayesian. Even our hard sciences like physics are only “mostly” true, which is why stuff like relativity could throw a wrench in it. There’s inherent uncertainty for everything, because it’s all measurement based, with errors, and more importantly, the relationships all have uncertainty. There is no “we know a^2 and b^2, so c^2 must be this”. It’s “we think this news source is generally reliable and we think the sentiment of the article is that this crime was committed, so our logical assumption is that the crime was probably committed”. But no link in the chain is 100%. “Rock solid” sources get corrupted, generally with a time lag before it’s recognizable. Your interpretation of a simple article may be damn near 100%, but someone is still going to misread it, and a computer definitely can.
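
                  To make the chain idea concrete with invented numbers: multiply a few merely-probable links together and the overall confidence decays, which is exactly why no conclusion about an event reaches 100%.

                  ```python
                  # Each link in the inference chain is believable but < 1.0 (numbers invented).
                  links = {
                      "source is generally reliable": 0.95,
                      "article was read correctly":   0.98,
                      "sentiment implies the event":  0.90,
                  }
                  confidence = 1.0
                  for claim, p in links.items():
                      confidence *= p
                  print(f"{confidence:.3f}")  # 0.838 -- strong evidence, never proof
                  ```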

                  Uncertainty is central to reality, down to the fact that even quantum phenomena have to be talked about probabilistically because uncertainty is built in all the way down.

                  • bunchberry@lemmy.world · 3 months ago

                    This is why many philosophers came to criticize metaphysical logic in the 1800s, viewing it as dealing in absolutes when reality does not actually exist in absolutes, and arguing that we need some other logical system that could deal with the “fuzziness” of reality more accurately. That was the origin of the notion of dialectical logic from philosophers like Hegel and Engels, which caught on with some popularity in the east but was then mostly forgotten in the west outside of some fringe sections of academia. Even long before Bell’s theorem, the physicist Dmitry Blokhintsev, who adhered to this dialectical-materialist mode of thought, wrote a whole book on quantum mechanics whose first part discusses the need to abandon the false illusion of the rigidity and concreteness of reality, showing that it is an illusion even in the classical sciences, where everything has uncertainty, all predictions eventually break down, and it is never actually possible to fully separate something from its environment. These views heavily influenced the contemporary physicist Carlo Rovelli as well.

                  • rottingleaf@lemmy.world · 3 months ago

                    You are describing LLMs, yes. But not what I’m describing.

                    I’m talking about a machine finding syllogisms and checking their correctness. The first step can’t be rock solid, because interpreting a statement in natural language involves fuzzy semantics, but everything after that can be made rock solid. In LLMs, even that isn’t.

                    That’s what I’m talking about.

                    Humans make mistakes, but not the kind that LLM-generated texts contain.

                    I mean that one can build a reasoning machine, which an LLM isn’t.

            • doodledup@lemmy.world · 3 months ago

              No. It’s just pure math and logic. And LLMs are nothing more than billions of additions and multiplications. Literally. You can prove certain things about them, just like you can prove theorems in mathematics. It’s an ongoing research field.
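
              Taken literally, here is the entirety of one “neuron” of such a network, in Python (the numbers are invented): multiplications, additions, and a clamp. Statements about stacks of these are mathematical statements, which is why proving properties of them is possible at all.

              ```python
              x = [0.2, -1.3, 0.7]   # inputs
              w = [0.5,  0.1, -0.4]  # learned weights
              bias = 0.05
              # ReLU neuron: multiply, add, clamp at zero.
              y = max(0.0, sum(xi * wi for xi, wi in zip(x, w)) + bias)
              print(y)  # 0.0 for these made-up numbers
              ```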

              • CileTheSane@lemmy.ca · 3 months ago

                It’s just pure math and logic. And LLMs are nothing more than billions of additions and multiplications.

                Okay: using additions and multiplications, prove that the assassination attempt on Donald Trump happened.

                • doodledup@lemmy.world · 3 months ago

                  How would you even prove something like that outside of LLMs? What is your point? That you cannot prove anything except “I think, therefore I am”?

                  Either you haven’t read my comments or you’re intentionally trying to be provocative.

                  • CileTheSane@lemmy.ca · 3 months ago

                    My point is what OP’s point was (which you veered away from in order to show off that You Are Very Smart): it is literally impossible for a computer system to prove that a historical event happened.

        • jaybone@lemmy.world · 3 months ago

          Your proof example is a proof from your discrete-structures class. That’s very different from “proving” something like “the Trump assassination attempt was a conspiracy.”

          Otherwise we could have gotten rid of courts a long time ago.

          • doodledup@lemmy.world · 3 months ago

            Well, obviously. But that was not at all what I said or claimed. I just said that you can prove certain properties of neural networks, because others said that you can’t. And others also misunderstood LLMs in general: they believe it’s an information-retrieval service, which is wrong.

            Besides, your argument, as you’ve written it, applies to everything. Literally. From Wikipedia to the news, even up to your own eyesight. What can you actually prove? I don’t understand the point you’re making or how it relates to LLMs.

        • markon@lemmy.world · 3 months ago

          Just like us. Sometimes it’s better to have bullshit predictions than none.

        • snooggums@midwest.social · 3 months ago

          The only reason why it’s so incredibly good in many applications is because it’s bad in others. It’s intentionally designed that way.

          lolwut

          • doodledup@lemmy.world · 3 months ago

            It’s designed in a way that makes it inherently incorrect, even on a physical basis (due to numerical issues). It’s not a flaw of the algorithm, because it has been designed that way. The problem is that you don’t know how to use it correctly.

            I can’t explain it any differently without getting overly technical. You wouldn’t understand it anyway, judging by your comment “lolwut”. If you want to learn how LLMs work specifically, there are plenty of resources on the internet.

            • snooggums@midwest.social · 3 months ago

              It’s designed in a way that makes it inherently incorrect, even on a physical basis (due to numerical issues). It’s not a flaw of the algorithm, because it has been designed that way. The problem is that you don’t know how to use it correctly.

              “It doesn’t make a good source of knowledge.”

              “Yeah, but it is designed to be inherently wrong”

              How does that make any sense when trying to use something for knowledge? Being inherently wrong is the opposite of helpful for knowledge.

              AI is great at pattern recognition, but knowledge isn’t pattern recognition. Needing to know when it gives false information requires the “supervisor” to already have that knowledge. That makes the AI less useful than a simple reference because at least the reference can come from a trusted source.

              If people stopped trying to jam AI into situations where being correct is important it wouldn’t be a problem. But excusing that because it is designed to be inherently wrong deserves another LOLWUT.

              • doodledup@lemmy.world · 3 months ago

                How does that make any sense when trying to use something for knowledge? Being inherently wrong is the opposite of helpful for knowledge.

                It was never designed to reproduce knowledge. It was designed to do reasoning and natural language processing and generation. You’re using it wrong.

                LOLWUT

                If you don’t know what you’re talking about and don’t have any capacity to learn something new, it’s sometimes best to stop talking. Especially when you’re starting to get rude to knowledgeable people who try to explain it to you.

            • CileTheSane@lemmy.ca · 3 months ago

              It’s designed in a way that makes it inherently incorrect, even on a physical basis (due to numerical issues). It’s not a flaw of the algorithm, because it has been designed that way. The problem is that you don’t know how to use it correctly.

              So it is bad at things like giving or finding factual information. I agree, companies need to stop cramming it into everything (like search engines) for tasks that it is specifically bad at because it is not designed for it.

            • uranibaba@lemmy.world · 3 months ago

              Can you recommend any resources to start with? (If I can be picky: something I can consume after a whole day of being a parent, because there’s no energy for much else.)

      • markon@lemmy.world · 3 months ago

        We should understand that 99.9% of what we say and think and believe is whatever feels good to us, which we then rationalize with very faulty reasoning, and that’s only when really challenged! You know how I came up with these words? I hallucinated them. It’s just a guided hallucination. People with certain mental illnesses are less guided by their senses. We aren’t magic, and I don’t get why it is so hard for humans to accept how nearly useless any individual is at figuring anything out. We have to work as agents too, so why do we expect an early-days LLM to be perfect? It’s so odd to me. A computer is trying to understand our made-up bullshit. A logic machine trying to comprehend bullshit. It’s amazing it even appears to understand anything at all.

        • snooggums@midwest.social · 3 months ago

          You know how I came up with these words? I hallucinated them. It’s just a guided hallucination.

          So the word “hallucination” means literally anything you want it to. Cool, cool. Very valiant of you.