• 0 Posts
  • 10 Comments
Joined 3 days ago
Cake day: February 10th, 2025

  • Because people are still Reddit-brained, have no capacity for nuance, and thrive on outrage like addicts.

    For the addicts with their finger smashing the downvote button:

    Elon Musk is an idiot. But that doesn’t mean that a Tesla Model S is an idiot.

    A Hyprland developer could be transphobic, and members who comment in the community could be transphobic, but that doesn’t make the software transphobic.

    Software doesn’t have political opinions.


    If you want to avoid hypocrisy and examine all products with the same ridiculous level of scrutiny, then consider that you’re probably using electronic components in your house, car, smartphone and PC that were sourced using slave labor or child labor, or built by countries that engage in human rights abuses.

    The electricity that lets you uncritically attack people online was generated by means that contribute to climate change, which will kill or displace hundreds of millions of people.

    The language you’re using is primarily used by cultures who have historically engaged in colonialism, piracy, slavery, religious oppression, ethnic cleansing and wars of aggression.

    So, unless you’re willing to sit in a forest and never communicate with another person, you’re going to be using technology that, if you pedantically dig deep enough, has some “problematic” behaviors associated with it.

    Or, you could not act ignorant in online spaces. That’s also an option.


  • There are thousands of different diffusion models, and not all of them are trained on copyright-protected work.

    In addition, substantially transformative works are allowed to use otherwise copyright-protected content under the fair use doctrine.

    It’s hard to argue that a model, a file containing the trained weight matrices, is in any way substantially similar to any existing copyrighted work. TL;DR: There are no pictures of Mickey Mouse in a GGUF file (there’s a quick sketch at the end of this comment if you want to see what’s actually in one).

    Fair use has already been upheld in the courts concerning machine learning models trained using books.

    For instance, under the precedent established in Authors Guild v. HathiTrust and upheld in Authors Guild v. Google, the US Court of Appeals for the Second Circuit held that mass digitization of a large volume of in-copyright books in order to distill and reveal new information about the books was a fair use.

    And, perhaps more pragmatically, the genie is already out of the bottle. The software and weights are already available, and you can train and fine-tune your own models on consumer graphics cards (see the rough LoRA sketch at the end of this comment). No court ruling or regulation will restrain every country on the globe, and every one of them is rapidly researching and producing generative models.

    The battle is already over; the ship has sailed.
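
    For what it’s worth, you can verify the “no pictures of Mickey Mouse” claim yourself. Here is a minimal sketch, assuming a local file named model.gguf (a hypothetical path): the GGUF format used by llama.cpp starts with a magic string, a version number, and counts of tensors and metadata entries, all readable with nothing but Python’s standard library.

    ```python
    # Minimal sketch: peek at a GGUF header using only the standard library.
    # Assumes a local file named "model.gguf" (hypothetical path).
    # Per the llama.cpp GGUF spec, the header is: 4-byte magic "GGUF",
    # uint32 version, uint64 tensor_count, uint64 metadata_kv_count (little-endian).
    import struct

    with open("model.gguf", "rb") as f:
        magic = f.read(4)
        version, tensor_count, metadata_kv_count = struct.unpack("<IQQ", f.read(20))

    print(magic)              # b'GGUF'
    print(version)            # e.g. 3
    print(tensor_count)       # number of weight tensors stored in the file
    print(metadata_kv_count)  # number of metadata key/value pairs
    ```

    Everything after that header is metadata strings and big blocks of (often quantized) numbers: weight matrices, not images.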
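
    And on the consumer-hardware point, this is roughly what setting up a LoRA fine-tune looks like with the Hugging Face transformers and peft libraries. The model name and hyperparameters below are illustrative placeholders, a sketch rather than a recipe; the dataset and training loop are omitted.

    ```python
    # Rough sketch of setting up a LoRA fine-tune with transformers + peft.
    # Model name and hyperparameters are illustrative placeholders.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = "some-small-model"  # hypothetical; pick any causal LM that fits your GPU
    model = AutoModelForCausalLM.from_pretrained(base)

    # LoRA trains small low-rank adapter matrices instead of the full weights,
    # which is what makes fine-tuning feasible on a single consumer graphics card.
    config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
    model = get_peft_model(model, config)
    model.print_trainable_parameters()  # typically well under 1% of the full model

    # From here you would tokenize a dataset and run an ordinary training loop;
    # omitted here because the point is only that this runs on consumer hardware.
    ```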



  • Companies that are incompetently led will fail and companies that integrate new AI tools in a productive and useful manner will succeed.

    Worrying about AI replacing coders is pointless. Anyone who writes code for a living understands the limitations that these models have. It isn’t going to replace humans for quite a long time.

    Language models are hitting some hard limitations, and we’re unlikely to see improvements continue at the same pace.

    Transformers, Mixture of Experts, and some training-efficiency breakthroughs all happened around the same time, which gave the impression of an AI explosion. But current models are already taking advantage of all of those advances, and we’re seeing pretty strong diminishing returns on larger training sets (there’s a back-of-the-envelope illustration at the end of this comment).

    So language models, absent a new revolutionary breakthrough, are largely as good as they’re going to get for the foreseeable future.

    They’re not replacing software engineers; at best they’re slightly more advanced syntax checkers/LSPs. They may help with junior-developer-level tasks like refactoring or debugging… but they’re not designing applications.
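
    To put a rough number on “diminishing returns”: the Chinchilla scaling-law fit (Hoffmann et al., 2022) models loss as a power law in parameter count and training tokens, so every doubling of the training set buys a smaller improvement than the last. A back-of-the-envelope sketch, using the paper’s approximate reported coefficients purely as an illustration:

    ```python
    # Back-of-the-envelope illustration of diminishing returns from more training data,
    # using the Chinchilla scaling-law form L(N, D) = E + A/N**alpha + B/D**beta
    # with the approximate coefficients reported by Hoffmann et al. (2022).
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

    def loss(params, tokens):
        return E + A / params**alpha + B / tokens**beta

    N = 70e9  # hold model size fixed at 70B parameters
    previous = None
    for tokens in (1e12, 2e12, 4e12, 8e12):
        cur = loss(N, tokens)
        delta = "" if previous is None else f" (improvement: {previous - cur:.4f})"
        print(f"{tokens:.0e} tokens -> predicted loss {cur:.3f}{delta}")
        previous = cur
    # Each doubling of the dataset shaves off less loss than the previous one.
    ```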


  • I know that it’s a meme to hate on generated images, but people need to understand just how much that ship has sailed.

    Getting upset at generative AI is about as absurd as getting upset at CGI special effects or digital images. Both of these things were the subject of derision when they started being widely used. CGI was seen as a second rate knockoff of “real” special effects and digital images were seen as the tool of amateur photographers with their Photoshop tools acting as a crutch in place of real photography talent.

    No amount of arguments from film purists, or nostalgia for the old days of puppets and models in movies, was going to stop computer graphics and digital image capture and manipulation. Today those arguments seem so quaint and ignorant that most people aren’t even aware there was a controversy.

    Digital images and computer graphics have nearly completely displaced film photography and physical model-based special effects.

    Much like those technologies, generative AI isn’t going away and it’s only going to improve and become more ubiquitous.

    This isn’t the hill to die on no matter how many upvotes you get.