Wondering if modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to next-word predictors. Also not sure if this graph is the right way to visualize it.

  • jacksilver@lemmy.world
    1 month ago

Here’s an easy way we’re different: we can learn new things. LLMs are static models; that’s why OpenAI publishes training cutoff dates for its models.

Another is that LLMs can’t do math. Deep learning models are limited to their input domain, so when you ask an LLM to do math outside its training data, it’s almost guaranteed to fail.

    Yes, they are very impressive models, but they’re a long way from AGI.

    • DavidDoesLemmy@aussie.zone
      1 month ago

I know lots of humans who can’t do maths. At least I think they’re human. Maybe they’re LLMs, by your definition.

      • jacksilver@lemmy.world
        1 month ago

I think you’re missing the point. No LLM can do math; most humans can. No LLM can learn new information; all humans can and do (to varying degrees, but still).

And just to clarify what I mean by “not able to do math”: there’s a lack of understanding of how numbers work, so combining numbers or values outside the training data can easily trip them up. Since it’s prediction-based, exponents, trig functions, etc. will quickly produce errors when using large values.
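If you want to check this yourself, here’s a minimal sketch of how you could generate probe questions with exact answers to compare against a model’s replies. The helper name `make_probe_problems` is made up for illustration, and actually querying an LLM would need an API call that’s left out here; the point is just that large-operand exponentiation gives you answers a next-token predictor is unlikely to have memorized.

```python
import random

def make_probe_problems(n=5, seed=0):
    """Generate arithmetic probes with large operands plus exact answers.

    Large bases with modest exponents land outside anything a model is
    likely to have seen verbatim in training, which is exactly the
    regime where prediction-based math tends to break down.
    """
    rng = random.Random(seed)  # fixed seed so the probe set is reproducible
    problems = []
    for _ in range(n):
        a = rng.randint(10**6, 10**7)  # 7-digit base, unlikely to be memorized
        b = rng.randint(3, 6)          # small exponent keeps the answer printable
        problems.append((f"What is {a}**{b}?", a ** b))
    return problems

for prompt, answer in make_probe_problems():
    print(prompt, "->", answer)
```

You’d then send each prompt to the model and compare its reply to the exact integer; any digit-level mismatch counts as a failure.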

        • Zexks@lemmy.world
          1 month ago

          Yes. Some LLMs can do math. It’s a documented thing. Just because you’re unaware of it doesn’t mean it doesn’t exist.