• 0 Posts
  • 158 Comments
Joined 1 year ago
Cake day: June 30th, 2023

  • Learning how systemd manages the network was a total mindfuck. There are so many alternatives: systemd-networkd, NetworkManager, and others, each used differently by different tools and only partially supported. Some of them shared similar configuration files but kept them in different folders under /etc or /usr (see the sketch below). There were unexpected interactions between the tools… Oh man, it was so bad. I was very disappointed.

    I was really into learning how things actually work in Linux, and this was a slap in the face because my mindset was “Linux is so straightforward”. No, it is not; it is actually a mess, like most systems. I know this isn’t really a “Linux” issue, I’m just ranting about this specific ecosystem.
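
    To make the overlap concrete, here is a minimal sketch of a systemd-networkd unit; the file name and the interface name enp3s0 are placeholders. Local admin copies of files like this usually live in /etc/systemd/network/, while distro defaults ship under /usr/lib/systemd/network/, and NetworkManager keeps its own profiles somewhere else again:

        # /etc/systemd/network/20-wired.network  (sketch; names are placeholders)
        [Match]
        Name=enp3s0

        [Network]
        DHCP=yes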


  • You keep asking questions like “can a model build a house” but keep ignoring questions like “can an octopus build a house”. Then you ask “can a model learn in seconds how to escape from a complex enclosure” while ignoring “can a newborn human baby do that?”

    Can an octopus write a poem? Can a baby write an essay? Can an adult human speak every human language, including fictional languages?

    Just because it isn’t as intelligent as a human doesn’t mean this isn’t some type of intelligence.

    Go and check what we call AI in videogames. Do you think that’s a simulated human? Go see what we’ve been calling AI in chess. Is that a simulated human being playing chess? No.

    We’ve been calling things that are waaaaaay dumber than GPTs “artificial intelligence” for decades. Even in academia. Suddenly a group of people decided “artificial intelligence must be equal to human intelligence”. Nope.

    Intelligence doesn’t need to be the same type as human intelligence.




  • Things we know so far:

    • Humans can train LLMs with new data, which means they can acquire knowledge.

    • LLMs have been proven to apply that knowledge; they are acing exams that most humans wouldn’t dream of even understanding.

    • We know multi-modal models are possible, which means these models can acquire skills.

    • We already saw that these skills can be applied. If it wasn’t possible to apply their outputs, we wouldn’t use them.

    • We have seen models learn and generate strategies that humans didn’t even conceive. We’ve seen them solve problems that were unsolvable to human intelligence.

    … So what’s missing from that definition of intelligence? The only thing missing is our willingness to create a system that can train and update itself, which is possible.


  • What is intelligence?

    Even if we can’t define it with certainty, it’s still valid to say that something isn’t intelligent. For example, a rock isn’t intelligent. I think everyone would agree with that.

    Despite that, LLMs are starting to blur the lines and make us wonder whether what matters about intelligence is really the process or the result.

    An LLM will give you much better results in many of the areas that are currently used to evaluate human intelligence.

    For me, humans are a black box. I give them inputs and they give me outputs. They receive inputs from reality and they generate outputs. I’m not aware of the “intelligent” process inside other humans. How can I tell they are intelligent if all I can perceive are their inputs and outputs? Maybe all we care about are the outputs and not the process.

    If there were an LLM capable of simulating a close friend of yours perfectly, would you say the LLM is not intelligent? Would it matter?