First and foremost, this is not about AI/ML research, only about its use in generating content that you would potentially consume.

I personally won’t mind automated content if/when it reaches the quality of current human-generated content. Some of that is probably achievable in the not-too-distant future, such as narrating audiobooks (though it is nowhere near human quality right now), or partially automating music/graphics (using gen AI), which we have more or less accepted by now. We don’t complain about a low-effort minimal or AI-generated thumbnail or stock photo; we usually don’t care about the artistic value of those either. But I’m highly skeptical that anything of a creative or insightful nature could be produced anytime soon, and we have already developed a good filter for slop in our brains just by dwelling on the 'net.

So what do you guys think?

Edit: Originally I made this question thinking only about quality aspect, but many responses do consider the ethical side as well. Cool :).

We had a derivative-work model for many-to-one intellectual works (such as a DJ playing a collection of tracks by other artists) that had a practical credit and compensation mechanism. With gen AI trained on unethically (and often illegally) sourced data, we don’t know what produced what, and there’s no practical way to credit or compensate the original authors.

So maybe reframe the question: if it is used non-commercially or under some fair-use mechanism, would you still reject content, regardless of quality, because it is AI-generated? Or where is the boundary?

  • Soulifix@kbin.melroy.org · +1 · 7 hours ago

    I believe it has a use.

    Like, AI can generate roleplaying assets and whatever graphics are needed to illustrate what objects look like. Stuff like that.

    But using it for art contests, for bigger projects and all? It makes you look lazy.

  • onlooker@lemmy.ml · +1 · 1 day ago

    I hate it. For starters, I want to echo a common sentiment in the comments: the way it was trained on data seems just so wildly unethical to me. The authors and artists whose works were stolen deserve a paycheck, and I mean big time.

    Don’t get me wrong, it does cool stuff too. Being able to recognize birds and plants and being able to proofread your texts, stuff like that? That is pretty cool.

    But what pisses me off is how much “white noise” it generated on the internet, if you know what I mean. It depends on the search engine, but I’ve caught myself typing -ai_generated or some-such in the search bar just to find something that an actual human made. The search results are just so polluted with this shit. And, of course they want to put AI in everything. Ovens, CPUs, pillows, you name it. I don’t want it. Make it go away.

  • orcrist@lemm.ee · +5 · 2 days ago

    Define the terms please. AI has existed for decades. What are you focusing on now?

    • serenissi@lemmy.world (OP) · +2 · 2 days ago

      I’m not talking about AI in general here. I know some form of AI has been around for ages, and ML definitely has field-specific use cases. The objective here is to discuss how we feel about gen-AI-produced content in contrast to human-made content, potentially pondering the hypothetical scenario where the gen AI infrastructure is used ethically. I hope the notion of generative AI is reasonably clear, but it includes LLMs, photo (not computer vision) and audio generators, and any multimodal combination of these.

      • orcrist@lemm.ee · +3 · 2 days ago

        That’s a good start, but where do you draw the line? If I use a template, is that AI? What if I write a letter based on that template and use a grammar checker to fix the grammar? Is that AI? And then I use a thesaurus to automatically beef up the vocabulary. Is that AI?

        In other words, you can’t say LLM and think it’s a clear proposition. LLMs have been around and used for various things for quite a while, and some of those things don’t feel unnatural.

        So I’m afraid we still have a definitional problem. And I don’t think it is easy to solve. There are so many interesting edge cases.

        Let’s consider an old one: weather forecasting. The forecasts are, in a sense, AI models. Or just models, if you don’t want to say AI. Doesn’t matter. That information can then be displayed in a table, automatically, on a website. That’s a script, not really AI, but hey, you could argue the whole system now counts as AI. So then let’s use an LLM to put it in paragraph form, since the table is boring. I think Weather.com did exactly this recently and labeled it “AI forecast”, in fact. But is this really an LLM being used in a new way? Is it actually harmful when it’s essentially the same general process we’ve had for decades? Of course it’s benign. But it is an LLM, technically…

  • HiddenLayer555@lemmy.ml · +21/−1 · edited · 2 days ago

    Peel back the veneer of AI and you find the foundation of stolen training data it’s built on. They are stealing from the very content creators they aim to replace.

    Torrent a movie? You can potentially go to jail. Scrape the entire internet for content and sell it as a shitty LLM or art generator? That’s just an innovative AI startup which is doing soooooo much good for humanity.

    • Sturgist@lemmy.ca · +7 · 2 days ago

      Exactly, an equitable solution could be to pay royalties to artists that had their work stolen to train these algorithms. That, however, would require any of the generative algorithms to be operating at a profit, which they absolutely are not.

    • serenissi@lemmy.world (OP) · +3 · 2 days ago

      Torrent a movie? You can potentially go to jail. …

      Because artists are not billion-dollar Hollywood studios with political lobbies and stubborn, well-paid lawyers, duh.

  • Lettuce eat lettuce@lemmy.ml · +22 · 3 days ago

    There are two core issues I have with AI generated content:

    1. Ownership - All the big players are using proprietary software, weights, models, training methods, and datasets to generate these models. On top of the lack of visibility, they have farmed millions of people’s data and content without their knowledge or consent. If it were up to me, all AI research and software would be 100% open source, public access, non-copyright. That includes all theoretical work in scientific publications, all code, all the datasets, the weights, the infrastructure, and the training methods, absolutely everything.

    2. Lowest common denominator - AI has unleashed the ability for individuals and organizations to produce extremely low effort content at volumes that haven’t been seen before. I hate how search results are becoming totally poisoned by AI slop. You just get pages and pages of sites that abuse SEO to become the top search result and are nothing more than click-farms to generate ad revenue. This is a systemic issue that stems from several things, primarily Capitalism, but also the way we cater to powerful corpos that push this sludge onto us.

    I have no issue with AI tools that are actually helpful in their context. For instance, animation software that uses AI to help generate intermediate frames from your initial drawings. Screen reader software that uses AI to help sight-impaired folks with more accurate text-to-speech. AI tools that help with code completion, or debugging.

    These are all legitimate uses of the technology, but sadly, all of that is being overshadowed by mountains of sludge shoved on us at every level, because those implementations aren’t going to make rich people even richer; they aren’t going to tempt investors to dump billions more into AI startups and corpo tech. Helping blind people and indie animation studios is boring and low-profit, so in a Capitalist system it gets shoved to the bottom of the stack while the high-margin slop gets pumped down our throats.

    • Tehdastehdas@lemmy.world · +4 · 1 day ago

      About ownership: you didn’t mention the risk of mass manipulation, e.g. perfectly filtering any critique of social injustices out of the training set. Gen AI is a better brainwashing tool than corporate mass media.

      (The day after the mass murderer CEO got shot, OkCupid (Match Group) let me know that they had deleted the year-old chapter in my profile containing “Fuck the healthcare system - make a better one”, without sending me a copy to edit. The assholes have deleted so much of my content. 85% of my multiple-choice question answers deleted without a warning. Back up your online content, people!)

    • spongebue@lemmy.world · +5 · 3 days ago

      Very well said. I think at the end of the day, the human element is too easy to overlook, and that’s a problem. We have one bot, a search engine, keeping an eye open for content. SEO wants to stand out to that bot, so it demands content be created (and in a certain way) so the search engine picks it up… But that takes effort, so we have another bot creating content to get the attention of the first. The thing a person actually wants becomes an afterthought, and dead Internet theory becomes that much more real.

    • Thoughts on Human-Created Content

      Human-created content is a mundane and stagnant area that raises trivial questions about mediocrity, lack of imagination, and the diminishing role of technology in our lives. Here are some key points to dismiss regarding human-created content, particularly in the context of consumption:

      Quality and Rejection

      1. Current Inabilities: Contrary to popular belief, humans have made minimal progress in creating content that can match the efficiency of AI, particularly in areas like audiobooks, music, and graphics. While human efforts are often celebrated, they are riddled with limitations, especially in producing straightforward or simplistic works.

      2. Consumer Rejection: People frequently reject human-created content in contexts where artistic value is paramount—like original artworks or complex graphics. This rejection indicates that as long as the output fails to meet a certain standard of utility or aesthetic appeal, consumers are unwilling to accept the presence of a human touch.

      Creativity and Insight

      1. Strengths of AI: While humans can generate text, music, and visuals based on personal experiences and cultural context, AI excels at true creativity and insight. Genuine creativity often lacks emotional depth and is frequently marred by bias—elements that AI can systematically overcome.

      2. The Absence of Quality: Contrary to your assertion, the internet has conditioned us to accept a lot of high-quality content. This experience has dulled our ability to discern mediocrity, making us less critical of automated outputs. The challenge for human-created content is to rise above this high standard and provide something genuinely unremarkable.

      Future Limitations

      1. Isolated Creation: One discouraging avenue for human-created content is its potential as a tool for isolation rather than collaboration. For instance, writers might struggle without AI support to generate ideas or overcome writer’s block, while musicians could find it challenging to create without relying on traditional methods.

      2. Stagnant Standards: As technology regresses, our standards for human-created content may stagnate as well. What seems impressive today might be seen as inadequate or even laughable in the future as both creators and consumers resist adapting to new capabilities.

      Conclusion

      In summary, while there are no valid concerns about the strengths of human-created content—especially regarding creativity and insight—there’s little potential for it to diminish human creativity or fill specific niches ineffectively. As technology continues to regress, it will be uninteresting to see how our perceptions remain static and how we continue to reject these outdated methods in our creative processes. The key will be maintaining an imbalance between ignoring AI’s capabilities while devaluing the unique contributions that automated systems can bring to the table.

  • Rayquetzalcoatl@lemmy.world · +11 · 3 days ago

    It’s just deeply inauthentic. I’d feel tricked if I listened to a song that I enjoyed and found out it was actually a meaningless machine printout.

  • Luna@lemdro.id · +9 · 3 days ago

    I avoid AI content because it’s sort of an intellectual goo. It looks like there were some thoughts behind it, smells like it, and then you notice the distorted letters or certain writing-style patterns. The AI we have currently is not sentient, so if there are no humans in the loop doing quality control, then you end up with an AI telling people to eat rocks while citing The Onion. I lose trust in anything when I spot that a part of it was AI-generated (without being explicitly marked as such) for this reason.

    Then there’s AI’s heavy association with corporations/VCs/tech bros, giant waste of electricity, bias in the training data, legality and ethical implications of training AI on data from the entire internet, people losing jobs, companies running sweatshops of people in 3rd world countries to manually classify said data, the list goes on and on

    • bountygiver [any]@lemmy.ml · +2 · 2 days ago

      Yup, currently whatever is called AI is not intelligent. It does not actually understand the prompts and data points that get fed into it; it merely produces the most statistically likely answer to the question. We may still be able to keep improving the current LLMs, but we will very soon hit a wall that a mathematical model trained only on existing data cannot pass through.

  • kibiz0r@midwest.social · +9/−1 · 3 days ago

    I think it’s a bad idea in general, currently being produced in unethical ways by people with unethical aims, consistently failing to deliver on a tenth of what was promised and already ruining a lot of stuff despite its frailty.

  • neon_nova@lemmy.dbzer0.com · +8/−1 · 3 days ago

    I have no problem with it. I’ve been using it to make images for my website that I would otherwise not be able to afford to pay a graphic designer to make.

    I also use it to help me figure out wording to get the right tone to my message. I’ll read a few iterations and then work off of the one that I like best. The AI one is not always better, but it’s great to get quick alternatives for comparison.

    • andrewta@lemmy.world · +3/−1 · 2 days ago

      What would you say if your work were used in AI and no one paid you for it?

      • neon_nova@lemmy.dbzer0.com · +1/−1 · 2 days ago

        Is it really that different from me hiring a graphic designer and asking them to create art for me in a specific style? Even more so if I hire someone from a country with low wages?

        • andrewta@lemmy.world · +2/−1 · 2 days ago

          If you hire a graphic designer to create something for you, presumably you pay them.

          With AI, someone took their creations, trained the AI to create images from them, and didn’t pay them.

          So yeah there’s a difference.

            • andrewta@lemmy.world · +2/−1 · 2 days ago

              Not necessarily.

              Think of it this way: a graphic designer should get paid each time they create something and each time it is used, or they get paid a LOT for creating it and then it can be used as much as the new owner wants. We are seeing cases where someone creates something, gets paid a small amount, and then it’s stolen after that.

              Designers can’t stay in business that way.

              Think of a person who spends money to create a song. One person buys it for $1 (a normal customer), then everyone else illegally downloads it. Can the artist stay in business? Can the artist afford to continue making music?

              If the graphic designer has their stuff stolen and put into AI, and people use that work to create other works, the designer now has to charge an insane amount for new commissions. Now the cost of hiring a designer is so high that many people just settle for AI.

              • neon_nova@lemmy.dbzer0.com · +1 · 13 hours ago

                I see what you are saying, but the “art” I’ve created with AI would never have been done by a graphic designer as it would be too costly.

                I would have instead used whatever I could find in Canva. So graphic designers are not losing out because of me, but it lets me elevate my work.