Do we really need a video about this in 2024? Shouldn’t this be already a core part of our education as software engineers?
The title of the post is “how to avoid if-else hell”, not “how to avoid conditionals”. Not sure what your point is.
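Rough sketch of the distinction, with hypothetical handler names (not from the original post): flattening nested if-else into a dispatch table gets rid of the “hell”, but the conditional decision is still there.

```python
# Hypothetical example: "if-else hell" vs. a flat dispatch table.
# The conditional decision still exists; only the nesting is gone.

def handle_event_nested(event_type, payload):
    if event_type == "created":
        return f"created: {payload}"
    elif event_type == "updated":
        if payload:
            return f"updated: {payload}"
        return "updated: empty"
    elif event_type == "deleted":
        return "deleted"
    else:
        raise ValueError(f"unknown event: {event_type}")

# Same behavior, expressed as a lookup instead of nested branches.
HANDLERS = {
    "created": lambda p: f"created: {p}",
    "updated": lambda p: f"updated: {p}" if p else "updated: empty",
    "deleted": lambda p: "deleted",
}

def handle_event_flat(event_type, payload):
    handler = HANDLERS.get(event_type)
    if handler is None:
        raise ValueError(f"unknown event: {event_type}")
    return handler(payload)
```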
So you probably have to go and fix it now. Good luck.
It’s a joke… Before I’m sentenced to death by downvotes.
Learning how systemd manages the network was a total mindfuck. There are so many alternatives, all of them partially supported and used differently by different tools: networkd, NetworkManager… There were other tools too; they shared similar files but kept them in different /etc or /usr folders. There were unexpected interactions between the tools… Oh man, it was so bad. I was very disappointed.
I was really into learning how things actually work in Linux, and this was a slap in the face because my mindset was “Linux is so straightforward”. No, it is not; it is actually a mess like most systems. I know this isn’t a “Linux” issue, I’m just ranting about this specific ecosystem.
And yet, the worst design choice was how this meme template was used.
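To make the “similar files in different folders” part concrete, here’s a minimal sketch that just checks the usual default config locations (these paths are common defaults and can differ by distro):

```python
# Minimal sketch: where the different network stacks usually keep their
# config. Paths are common defaults; your distro may differ.
from pathlib import Path

CONFIG_DIRS = {
    "systemd-networkd (admin)":   "/etc/systemd/network",
    "systemd-networkd (vendor)":  "/usr/lib/systemd/network",
    "systemd-networkd (runtime)": "/run/systemd/network",
    "NetworkManager":             "/etc/NetworkManager/system-connections",
    "netplan":                    "/etc/netplan",
}

for tool, path in CONFIG_DIRS.items():
    p = Path(path)
    try:
        files = sorted(f.name for f in p.iterdir()) if p.is_dir() else []
    except PermissionError:
        files = ["<permission denied>"]
    print(f"{tool:28} {path:42} {', '.join(files) or 'no config found'}")
```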
I honestly don’t get why everyone is agreeing with Windows on this one. I just love how explicit Linux is.
file.txt is fucking file.txt. Don’t do any type extra magic. Do exactly as I’m saying. If I say “open file.txt”, it is “open file.txt”, not “open File.txt”.
The feature isn’t being able to create two files whose names differ only in case; nobody does that. The feature is how explicit it is.
It would be so confusing to read some code trying to access FILE.TXT and then find that the filesystem has file.txt.
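Tiny demo of the difference (behaviour depends on the filesystem, e.g. a typical ext4 volume vs. default NTFS/APFS):

```python
# Demo of case sensitivity. On a case-sensitive filesystem (typical ext4),
# "file.txt" and "File.txt" are two different files; on a case-insensitive
# one (default NTFS/APFS), the second write hits the same file.
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    (Path(tmp) / "file.txt").write_text("lowercase contents")
    (Path(tmp) / "File.txt").write_text("uppercase contents")

    entries = sorted(p.name for p in Path(tmp).iterdir())
    if len(entries) == 2:
        print("case-sensitive:", entries)    # ['File.txt', 'file.txt']
    else:
        print("case-insensitive:", entries)  # ['file.txt'], now holding "uppercase contents"
```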
The moment you finally install Arch and you realize you still feel empty inside.
For large companies that serve many customers, 5K per year is a drop in the bucket. If it provides their customers with a more secure experience, it is worth it.
Hey, I’m not connecting the dots about the revenge. Could you elaborate please?
And this won’t even stop kids from finding porn. I think it is based on good intentions, but they are too proud to say “yeah, maybe this has more cons than pros”.
Hang the DJ
Right? Especially when those minorities have been abused into one of the low social classes.
These are two related issues.
You keep asking questions like “can a model build a house” but keep ignoring questions like “can an octopus build a house”. Then you ask “can a model learn in seconds how to escape from a complex enclosure” while ignoring “can a newborn human baby do that?”
Can an octopus write a poem? Can a baby write an essay? Can an adult human speak every human language, including fictional languages?
Just because it isn’t as intelligent as a human doesn’t mean this isn’t some type of intelligence.
Go and check what we call AI in videogames. Do you think that’s a simulated human? Go see what we’ve been calling AI in chess. Is that a simulated human being playing chess? No.
We’ve been calling things waaaaaay dumber than GPTs “artificial intelligence” for decades, even in academia. Suddenly a group of people decided “artificial intelligence must be equal to human intelligence”. Nope.
Intelligence doesn’t need to be the same type as human intelligence.
Yeah, that’s what I can’t imagine. What part of their data architecture can’t be sharded?
user accounts? sessions? cache keys? profiles? graphical assets?
This isn’t a bank that needs strong transactional guarantees.
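Rough sketch of what I mean by sharding here, purely hypothetical and obviously not their actual architecture: route each record to a shard by hashing its key.

```python
# Hypothetical sketch of key-based sharding (not anyone's real setup).
# Each record type above (users, sessions, cache keys, ...) can be routed
# to a shard by hashing its key, so no single database holds everything.
import hashlib

NUM_SHARDS = 8

def shard_for(key: str) -> int:
    """Map a key to a stable shard number in [0, NUM_SHARDS)."""
    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SHARDS

print(shard_for("user:123456"))       # always the same shard for this user
print(shard_for("session:abcdef"))    # sessions spread across shards
print(shard_for("cache:profile:42"))  # same idea for cache entries
```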
Would be pretty cool if they explained the issue after fixing it.
Is an octopus intelligent? Can an octopus build an airplane?
Why do you expect these models to have human skills if they are not humans?
How can they build a house if they don’t even have vision or a physical body? Can a paralyzed human who can only hear and speak build a house? Is that human intelligent?
This is clearly not human intelligence, it clearly lacks human skills. Does it mean it isn’t intelligent and it has no skills?
Things we know so far:
Humans can train LLMs with new data, which means they can acquire knowledge.
LLMs have been proven to apply knowledge; they are acing exams that most humans wouldn’t dream of even understanding.
We know multi-modal is possible, which means these models can acquire skills.
We already saw that these skills can be applied. If it wasn’t possible to apply their outputs, we wouldn’t use them.
We have seen models learn and generate strategies that humans didn’t even conceive. We’ve seen them solve problems that were unsolvable to human intelligence.
… What’s missing here in that definition of intelligence? The only thing missing is our willingness to create a system that can train and update itself, which is possible.
What is intelligence?
Even if we don’t know what it is with certainty, it’s valid to say that something isn’t intelligence. For example, a rock isn’t intelligent. I think everyone would agree with that.
Despite that, LLMs are starting to blur the lines, making us wonder whether what matters about intelligence is really the process or the result.
An LLM will give you much better results in many of the areas currently used to evaluate human intelligence.
For me, humans are black boxes. I give them inputs and they give me outputs. They receive inputs from reality and they generate outputs. I’m not aware of the “intelligent” process of other humans. How can I tell they are intelligent if the only perception I have is their inputs and outputs? Maybe all we care about are the outputs and not the process.
If there was a LLM capable of simulating a close friend of yours perfectly, would you say the LLM is not intelligent? Would it matter?
I wonder what the scalability issue is. I’ve never seen a system that can’t be fixed by throwing more horizontally scalable resources at it.
Isn’t the sign ironic?
My expectation is that this is something core that programmers should be aware of all the time. Forgetting about this is like forgetting what an interface is; it’s at the core of what we do. At least I think so, maybe I’m wrong to assume this is something every programmer should keep in mind.