There is a machine learning bubble, but the technology is here to stay. Once the bubble pops, the world will be changed by machine learning. But it will probably be crappier, not better.

What will happen to AI is boring old capitalism. Its staying power will come in the form of replacing competent, expensive humans with crappy, cheap robots.

AI is defined by aggressive capitalism. The hype bubble has been engineered by investors and capitalists dumping money into it, and the returns they expect on that investment are going to come out of your pocket. The singularity is not coming, but the most realistic promises of AI are going to make the world worse. The AI revolution is here, and I don’t really like it.

  • jarfil@beehaw.org · 17 points · 1 year ago

    Its staying power will come in the form of replacing competent, expensive humans with crappy, cheap robots.

    Unlikely to replace the “most” competent humans, but probably the lower 80% (Pareto principle), where “crappy” is “good enough”.

    What’s really troubling is that it will happen all across the board; I’ve yet to find a single field where most tasks couldn’t be replaced by an AI. I used to think 3D design would take the longest, but no, there are already 3D design AIs.

    • potterman28wxcv@beehaw.org · 13 points · edited · 1 year ago

      I’ve yet to find a single field where most tasks couldn’t be replaced by an AI

      Critical-application development. For example, developing a program that drives a rocket or an airplane.

      You can have an AI write some code. But good luck proving that the code meets all the safety criteria.

      • FaceDeer@kbin.social · 8 points · 1 year ago

        You just said the same thing as the comment you’re responding to, though. He pointed out that AI can replace the lower 80%, and you said the AI can write some code but that it might have trouble doing the expert work of proving the code meets the safety criteria. That’s where the 20% comes in.

        Also, it becomes easier to recognize the potential for AI contribution when you widen your view to consider all the work required for critical-application development beyond the particular task of writing code. The company surrounding that task does a lot of non-coding work that is also amenable to AI replacement.

        • PenguinTD@lemmy.ca · 4 points · 1 year ago

          That split won’t work, because the top 20% don’t want their day job to be cleaning up AI code. Time-investment-wise, it’s much better for them to write their own template-generation tool, so the 80% can focus on the key part of their task, than to take AI templates that may or may not be wrong and then hunt all over the place to remove bugs.

          • jarfil@beehaw.org · 6 points · edited · 1 year ago

            Use the AI to fix the bugs.

            A couple months ago, I tried it on ChatGPT: I had never ever written or seen a single line in COBOL… so I asked ChatGPT to write me a program to print the first 10 elements of the Fibonacci series. I copy+pasted it into a COBOL web emulator… and it failed, with some errors. Copy+pasted the errors back to ChatGPT, asked it to fix them, and at the second or third iteration, the program was working as intended.

            If an AI were to run with enough context to keep all the requirements for a module, then iterate with input from a test suite, all one would need to write would be the requirements. Use the AI to also write the tests for each requirement, maybe make a library of them, and the core development loop could be reduced to ticking boxes for the requirements you wanted for each module… but maybe an AI could do that too?
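
            In concrete terms, that core loop might look something like the sketch below. To be clear: ask_llm() is a made-up placeholder for whatever model API you’d use, not a real ChatGPT client; this is just the shape of the idea.

            ```python
            # Sketch of the requirements -> code -> tests -> fix loop.
            # ask_llm() is a hypothetical stand-in for any LLM API call.
            import pathlib
            import subprocess
            import tempfile

            def ask_llm(prompt: str) -> str:
                """Placeholder: wire up ChatGPT or any other code model here."""
                raise NotImplementedError

            def generate_module(requirements: str, max_rounds: int = 5) -> str:
                """Turn written requirements into code that passes generated tests."""
                code = ask_llm(f"Write a Python module satisfying:\n{requirements}")
                tests = ask_llm(f"Write pytest tests for:\n{requirements}")
                for _ in range(max_rounds):
                    with tempfile.TemporaryDirectory() as tmp:
                        pathlib.Path(tmp, "module.py").write_text(code)
                        pathlib.Path(tmp, "test_module.py").write_text(tests)
                        result = subprocess.run(["pytest", tmp],
                                                capture_output=True, text=True)
                    if result.returncode == 0:
                        return code  # every requirement's test passes
                    # Feed the failures back in, like copy+pasting the COBOL errors.
                    code = ask_llm(f"Fix this code:\n{code}\nTest output:\n{result.stdout}")
                raise RuntimeError("model never converged on passing tests")
            ```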

            Weird times are coming. 😐

            • FaceDeer@kbin.social · 6 points · 1 year ago

              I’m a professional programmer and this is how I use ChatGPT. Instead of asking it “give me a script to do [big complicated task]” and then laughing at it when it fails, I tell it “give me a script to do [the first small step].” Then, when I confirm that works, I say “okay, now add a function that takes the output of the first function and does [the next step].” Repeat until done, correcting it when it makes mistakes. You still need to know how to spot problems, but it’s way faster than writing it myself, and I don’t have to go rummaging through API documentation and whatnot.
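
              As an illustration, a session like that might leave you with something like the sketch below; the task and every name in it are invented for the example, one prompt per function:

              ```python
              # Invented example of the incremental approach described above.
              import csv

              def load_rows(path: str) -> list[dict]:
                  """Prompt 1: give me a script that reads a CSV file into dicts."""
                  with open(path, newline="") as f:
                      return list(csv.DictReader(f))

              def filter_active(rows: list[dict]) -> list[dict]:
                  """Prompt 2: add a function that keeps rows with status == active."""
                  return [row for row in rows if row.get("status") == "active"]

              def count_by_department(rows: list[dict]) -> dict[str, int]:
                  """Prompt 3: add a function counting the remaining rows per department."""
                  counts: dict[str, int] = {}
                  for row in rows:
                      dept = row.get("department", "unknown")
                      counts[dept] = counts.get(dept, 0) + 1
                  return counts

              if __name__ == "__main__":
                  # Confirm each stage works before asking for the next one.
                  print(count_by_department(filter_active(load_rows("staff.csv"))))
              ```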

              • amki@feddit.de · 1 point · 1 year ago

                I mean, that is exactly what programming is, except you type to an AI and have it type the script. What is that good for?

                Could have just typed the script in the first place.

                If ChatGPT can use the API, it can’t be too complex; otherwise you are in for a surprise once you find out what ChatGPT didn’t care about (caching, usage limits, pricing, usage contracts).

                • abhibeckert@beehaw.org · 7 points · edited · 1 year ago

                  Could have just typed the script in the first place.

                  Sure - but ChatGPT can type faster than me. And for simple tasks, Copilot is even faster.

                  Also - it doesn’t just speed up typing, it also speeds up basics like “what did bob name that function?”

                  • FaceDeer@kbin.social · 3 points · 1 year ago

                    And stuff like “I know there’s a library out there that does the thing I’m trying to do, what’s it named and how do I call it?”

                    I haven’t been using ChatGPT for the “meat” of my programming, but there are so many things that little one-off scrappy Python scripts make so much easier in my line of work.

                • FaceDeer@kbin.social · 3 points · 1 year ago

                  it’s way faster than writing it myself

                  I already explained.

                  I could write the scripts myself, sure. But can I write the scripts in a matter of minutes? Even with a bit of debugging time thrown in, and the time it takes to describe the problem to ChatGPT, it’s not even close. And those descriptions of the problem make for good documentation to boot.

    • Remmock@kbin.social · 6 points · 1 year ago

      Fashion designers are being replaced by AI.
      Investment capitalists are starting to argue that C-Suite company officers are costing companies too much money.
      Our Ouroboros economy hungers.

      • jarfil@beehaw.org · 6 points · edited · 1 year ago

        C-suites can get replaced by AIs… controlled by a crypto DAO replacing the board. And while we’re at it, replace all workers with AIs, and investors with AI trading bots.

        Why have any humans, when you can put in some initial capital and have the bot invest in a DAO that controls a full-AI company? Bonus points if all the clients are also AIs.

        The future is going to be weird AF. 😆😰🙈

          • TwilightVulpine@kbin.social · 3 points · 1 year ago

            That’s where we need to ask how we define “better”. Is better “when the number goes bigger”, or is it “when more people benefit”? If an AI can optimize to extract the maximum value from people’s work and then discard them, then optimize how many ways the product can be monetized to maximize the profit from each customer, the result is a horrible company and a horrible society.

          • jarfil@beehaw.org · 2 points · 1 year ago

            In theory yes… but what do we call “doing a better job”? Is it just blindly extracting money? Or is it something more, and do we all agree on what it is? I think there could be a compounded problem of oversight.

            Like, right now an employee pays/invests some money into a retirement fund, whose managers invest in several mutual funds, whose managers invest in several companies, whose owners ask for some performance from their C-suite, who through a chain of command tell that same employee what to do. Even though it’s part of the employee’s own capital that’s controlling the company, if the company takes an action that hurts the employee, like fracking under their home or firing them, they’re powerless to do anything about it with their investment.

            With AI replacing all those steps, it would all happen much quicker and, since AIs are still basically a black box, with even less transparency than having corruptible humans at the same steps (at least we kind of know what tends to corrupt humans). Adding strict “code as contract” rules to try to keep them in check would at first sight look like an improvement, but in practice any unpredicted behavior could spread blindingly fast over the whole ecosystem, with nobody having a “stop” button anymore. That’s even before considering coding errors and malicious actors.

            I guess a possible solution would be requiring every AI to have an external stop trigger, which a judicial system could pull to… possibly paralyze the whole economy. But that would require new legislation to be passed (with AI lawyers), it would likely come too late, and it would not be fully respected by those trying to outsmart the system. Replace the judges with AIs too, politicians with AIs, talking heads on TV with AIs… and it becomes an AI world where humans have little to nothing to say. Are humans even of any use, in such a world?
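
            Purely as a thought experiment, the minimal shape of such a stop trigger could look like the sketch below. The kill-file path and names are invented, and the genuinely hard part, legally and technically, would be making the check impossible for the AI to remove:

            ```python
            # Hypothetical external stop trigger; nothing here is real policy.
            import pathlib

            # Invented path; in reality, whatever channel the external
            # authority (a court, a regulator...) actually controls.
            KILL_SWITCH = pathlib.Path("/var/run/ai-stop")

            def halted() -> bool:
                """The AI must never be able to create, delete, or ignore this file."""
                return KILL_SWITCH.exists()

            def agent_step(action) -> None:
                """Every action the AI takes passes the external check first."""
                if halted():
                    raise SystemExit("external stop trigger engaged")
                action()
            ```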

            None of those AIs need to be an AGI, so we could run ourselves into a corner with nobody and nothing having a global plan or oversight. Kind of like right now, but worse for the people.

            Alternatively, all those AIs could be eco-friendly humans-first compassionate black boxes… but I kind of doubt those are the kind of AIs that current businesses are trying to build.

            • amki@feddit.de · 1 point · 1 year ago

              Thing is, nobody will do that, because once the AI finds a way to spazz out that is totally unpredictable (black box), everything might just be gone.

              It’s a totally unrealistic scenario.

              • jarfil@beehaw.org · 1 point · 1 year ago

                People are already doing it, piece by piece, in all areas. As more AIs get input from other AIs, the chance of a cascading failure increases… but it will seem to be working “good enough” up until then, so more people will keep jumping on the bandwagon.

                The question is: can we prepare for the eventual cascading spazz out, or have we no option other than letting it catch us by surprise?

              • Honytawk@lemmy.zip · 1 point · 1 year ago

                They are working on mitigating the unpredictable “black box”.

                Like making the AI explain its working method step by step. Not only does that make the AI more transparent, it also increases the correctness of whatever it types.
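
                A rough sketch of that prompting pattern, with send() as a made-up stand-in for any real chat-model API:

                ```python
                def send(prompt: str) -> str:
                    """Placeholder for a call to whatever chat model you use."""
                    raise NotImplementedError

                question = "A burger costs $5 and extra ketchup adds 10%. Total?"
                answer = send(
                    "Solve the problem below. Number every reasoning step, "
                    "then give the final answer alone on the last line.\n\n"
                    + question
                )
                # The numbered steps show *where* the reasoning went wrong,
                # instead of only a bare (and possibly wrong) final answer.
                ```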

                AI is still in development. It is good to list the problems you have, but don’t think those problems won’t be solved in the future.

    • amki@feddit.de · 4 points · 1 year ago

      Unfortunately, everything AI does is kind of shitty. Sure, you might have a query for which the chosen AI works well, but you just as easily might not.

      If you accept that it sometimes just doesn’t work at all, then sure, AI is your revolution. Unfortunately, there are not too many use cases where that is helpful.

      • jarfil@beehaw.org · 3 points · 1 year ago

        I posit that in 80% of cases, an AI that works well even less than 50% of the time is still “good enough” to achieve the shittier 80% of goals.

        “I’ll have a burger with extra ketchup”… you get extra mayo instead… for half the price; “good enough”.