He’s very good.

  • 2 Posts
  • 117 Comments
Joined 1 year ago
Cake day: June 20th, 2023


  • This isn’t my field, and some undergraduate philosophy classes I took more than 20 years ago might not be leaving me well equipped to understand this paper. So I’ll admit I’m probably out of my element, and want to understand.

    That being said, I’m not reading this paper with your interpretation.

    This is exactly what they’ve proven. They found that if you could solve AI-by-Learning in polynomial time, you could also solve random-vs-chance (or whatever it was called), a known NP-hard problem, in tractable time. Ergo, the current learning techniques that are tractable will never result in AGI, and any technique that could would have to be intractably slow (otherwise the exact same proof presented in the paper applies to it as well). A compact sketch of that logic is at the end of this comment.

    But they’ve defined the AI-by-Learning problem in a specific way (here’s the informal definition):

    Given: A way of sampling from a distribution D.

    Task: Find an algorithm A (i.e., ‘an AI’) that, when run for different possible situations as input, outputs behaviours that are human-like (i.e., approximately like D for some meaning of ‘approximate’).

    I read this definition as framing the problem around needing to sample from D, that is, to “learn.”

    The explicit point is to show that it doesn’t matter if you use LLMs or RNNs or whatever; it will never be able to turn into a true AGI

    But the caveat I’m reading, implicit in the paper’s definition of the AI-by-Learning problem, is that it covers an entire class of methods: learning from a perfect sample of intelligent outputs in order to mimic intelligent outputs.

    General Intelligence has a set definition that the paper’s authors stick with. It’s not as simple as “it’s a human-like intelligence” or something that merely approximates it.

    The paper defines it:

    Specifically, in our formalisation of AI-by-Learning, we will make the simplifying assumption that there is a finite set of possible behaviours and that for each situation s there is a fixed number of behaviours Bs that humans may display in situation s.

    It’s just defining an approximation of human behavior, and saying that achieving that formalized approximation through inference from training data is intractable. So I’m still seeing a definition of human-like behavior, which human behavior would satisfy by definition. That’s the circularity here, and whether human behavior fits some other definition of AGI doesn’t actually affect the proof. They’re proving that learning to be human-like is intractable, not that achieving AGI is itself intractable.

    I think it’s an important distinction, if I’m reading it correctly. But if I’m not, I’m also happy to be proven wrong.
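
    For reference, the tractability argument above is the standard reduction pattern (my notation, not the paper’s). Writing H for the known NP-hard problem they reduce from:

        \[
        \text{AI-by-Learning} \in \mathrm{P} \;\text{ and }\; H \le_{p} \text{AI-by-Learning}
        \;\Longrightarrow\; H \in \mathrm{P}
        \;\Longrightarrow\; \mathrm{P} = \mathrm{NP},
        \]

    so unless P = NP, no polynomial-time method can solve AI-by-Learning as they’ve defined it.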


  • I can’t think of a scenario where we’ve improved something so much that there’s just absolutely nothing we could improve on further.

    Progress itself isn’t inevitable. Just because it’s possible doesn’t mean that we’ll get there, because the history of human development shows that societies can and do stall, reverse, etc.

    And even if all human societies tend towards progress, they could still hit dead ends and stop there. Conceptually, it’s like climbing a mountain with the algorithm “if there is higher elevation near you, go towards it, and never step downward.” Eventually that algorithm brings you to a peak. But it might only be a local peak rather than the highest point on the mountain, and while it was theoretically possible to have reached the true peak from the starting point, the climber who insists on never stepping downward is now stuck. Or the true peak might still be reachable, but only by climbing downward for a time and back up along paths not yet taken. One can imagine a society that refuses to step downward, breaking the inevitability of progress. (A tiny sketch of this greedy climb is at the end of this comment.)

    This paper identifies a specific dead end and advocates against hoping for general AI through computational training. It is, in effect, arguing that even though we can still see plenty of places that are higher elevation than where we are standing, we’re headed towards a dead end, and should climb back down. I suspect that not a lot of the actual climbers will heed that advice.
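
    To make the local-peak picture concrete, here’s a toy sketch of a greedy “never step downward” climb getting stuck on the lower of two peaks. The landscape and numbers are made up purely for illustration; nothing here comes from the paper.

      import math

      # Made-up 1-D "elevation": a small peak near x = 2 and a taller peak near x = 8.
      def elevation(x: float) -> float:
          return 3.0 * math.exp(-(x - 2) ** 2) + 5.0 * math.exp(-((x - 8) ** 2) / 4)

      # Greedy ascent: step to a strictly higher neighbour; stop when there is none.
      def greedy_climb(x: float, step: float = 0.1) -> float:
          while True:
              best = max((x - step, x + step), key=elevation)
              if elevation(best) <= elevation(x):  # no uphill step left: a local peak
                  return x
              x = best

      print(round(greedy_climb(0.0), 1))  # ~2.0: stuck on the lower peak; the true peak is near x = 8

    Getting to the taller peak from there would require stepping downhill first, which the rule forbids.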


  • That’s assuming that we are a general intelligence.

    But it’s easy to just define general intelligence as something approximating what humans already do. The paper itself only analyzed whether it was feasible to have a computational system that produces outputs approximately similar to humans, whatever that is.

    True, they’ve only calculated it’d take perhaps millions of years.

    No, you’re missing my point, at least as I read the paper. They’re saying that the method of using training data to computationally develop a neural network is a conceptual dead end. Throwing more resources at an NP-hard problem isn’t going to solve it (a quick back-of-the-envelope illustration is at the end of this comment).

    What they didn’t prove, at least by my reading of this paper, is that achieving general intelligence itself is an NP-hard problem. It’s just that this particular method of inferential training, what they call “AI-by-Learning,” is an NP-hard computational problem.
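
    As a back-of-the-envelope illustration of that point (my numbers, not anything from the paper): if the best known method scales like $2^n$ in the problem size n, then

        \[
        2^{\,n+10} \approx 1000 \cdot 2^{\,n},
        \]

    so even a thousand-fold increase in compute only lets you handle instances roughly 10 “units” larger before you are stuck again.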



  • The paper’s scope is to prove that AI cannot feasibly be trained, using training data and learning algorithms, into something that approximates human cognition.

    The limits of that finding are important here: it’s not that creating an AGI is impossible, it’s just that however it will be made, it will need to be made some other way, not by training alone.

    Our squishy brains (or perhaps more accurately, our nervous systems contained within a biochemical organism influenced by a microbiome) arose out of evolutionary selection algorithms, so general intelligence is clearly possible.

    So it may still be the case that AGI via computation alone is possible, and that creating such an AGI will not require solving an NP-hard problem. But this paper closes off one pathway that many believe is viable (assuming the paper’s proof is actually correct; I’m definitely not the person to make that evaluation). That doesn’t mean they’ve proven there’s no pathway at all.



  • Tell me, during an incumbent primary, who controls the DNC?

    Same as during a non-incumbent primary. The person who won the most recent nomination tends to have an outsized voice in the selection of party officials (because it’s their pledged delegates who vote on all the other stuff). Yes, that means Biden-affiliated insiders had an inside track in 2020, but that’s also true of Clinton allies in 2016, Obama allies in 2012, Obama allies in 2008, and Kerry allies in 2004.

    More than a year ago, the DNC adopted new rules—including a primary calendar that ignored state law in Iowa and New Hampshire and eliminated any primary debates—designed to ensure that Biden’s coronation would proceed untroubled by opposition from any credible Democrat.

    Which of those rule changes do you think were designed to benefit Biden specifically? De-emphasizing the role of Iowa and New Hampshire? People within the party have been clamoring for that for decades.

    There’s basically no set of rules that will ever create a credible challenge to an incumbent who wants to run for reelection. It’s a popularity problem, not a structural problem.


  • No one deserves to be a president any more than anyone else, and treating an incumbent as though they do, without having to go through an open, democratic primary process, is to treat them as more deserving of future authority than other citizens.

    There was a primary, and Biden got the most votes/delegates under the rules. Nobody is saying that incumbents should automatically get renomination. Or even that the incumbent should get some sort of rules advantage (like say, the way the defending world champ in chess gets an auto-bid to defend his title against a challenger who has to win a tournament to get there).

    The rules are already set up so that any challenger has an equal structural chance of winning the primary. They just won’t have the actual popular support. You know, the core principles of democratic elections.



  • After being acquired by Google, YouTube got better for years (before getting worse again). Android really improved for a decade or so after getting acquired by Google.

    The NeXT/Apple merger made the merged company way better. Apple probably wouldn’t have survived much longer without NeXT.

    I’d argue the Pixar acquisition was still good for a few decades after, and probably made Disney better.

    A good merger tends to be forgotten, where the two different parts work together seamlessly to the point that people forget they used to be separately run.




  • Like many others, I jumped on the sourdough bandwagon in 2020, but fell off sometime during the year after that.

    But a friend of mine stuck with it, and expanded into sourdough pizza doughs for NY style or Neapolitan style pizzas in his backyard pizza oven. He had a bunch of us over today, and I don’t think I understood everything he was saying (he was doing 60% hydration for 00 flour, but stuff I didn’t quite catch about when to knead/rest), but I can say that the pizzas he was making were delicious, and he made it seem so effortless to stretch the dough out to around 14 inch (35cm) diameter. And it was kinda infectious to see his enthusiasm for something he’d been churning away at for the last few years, explaining a bunch of things to a bunch of friends gathered around, and just having a great time on a Sunday afternoon.

    So a bunch of us are probably gonna try our hands at the same thing, and form a bit of an amateur pizza group, texting our successes and failures to each other.




  • If construction is delayed by an injunction

    Can you name an example? Because the reactor construction projects that I’ve seen get delayed have run into plain old engineering problems. The four proposed new reactors at Vogtle and V.C. Summer ran into cost overruns because of production issues and QA/QC issues requiring expensive redesigns mid-construction, after initial licensing and regulatory approvals were already in place. The V.C. Summer project was canceled after running up $9 billion in costs, and the Vogtle projects are about $17 billion over the original $14 billion budget, at $31 billion (and counting, as reactor 4 has been delayed once again over cooling system issues). The timeline is also about 8 years late (originally proposed to finish in 2016).

    And yes, litigation did make those projects even more expensive, but it was mostly litigation about other things (like energy buyers trying to back out of their commitments to buy power from the completed reactors because construction was taking too long), not litigation aimed at slowing construction down.

    The small modular reactor project in Idaho was just canceled too, because of the mundane issue of interest rates and buyers unwilling to commit to the high prices.

    Nuclear doesn’t make financial sense anymore. Let’s keep the plants we have for as long as we can, but we might be past the point where new plants are cost effective.



  • It’s certainly interesting that people are exploring other options for creating hot dark beverages that taste at least somewhat like coffee, but it’s also entirely possible that synthesized caffeine simply makes its way into other beverages. Obviously there’s tea as a substitute, but there are also lots of soft drinks and energy drinks with caffeine.

    So long as caffeine remains cheap, rising coffee prices will likely be met with caffeinated substitutes that have nothing to do with the coffee plant.



  • And the comment section on those types of posts isn’t the right place for “philosophical” discussions that would otherwise be on topic for that sub/community, but don’t exactly align with the topic of that post or news article.

    Can you explain why you believe this? I’ve always understood deep dives into the topic or context or general issues raised by an article to be fair game, whether we’re talking the comments on the news article itself, a link on Reddit, a link on Hacker News, a link on a vBulletin/phpBB forum, or even old newsgroup/listserv discussions.

    Reddit’s decision to start allowing “self” posts that were only links back to the comments thread itself (showing just how link-centered the design of reddit originally was, that every post had to have a link to something) came after the discussions around links became robust enough to support comments-first threads.