I’m using Ollama on my server with the WebUI. The server has no GPU, so replies aren’t quick, but they’re not too slow either.

I’m thinking about removing the VM since I just don’t use it. Are there any good uses or integrations with other apps that might convince me to keep it?

  • yesman@lemmy.world · 2 months ago

    Think of LLMs like a stupid office worker. You wouldn’t rely on them to make critical decisions, but they’re valuable for tedious stuff.

    For example, my calendar changed the way new events are entered, breaking my workflow. Now I just type out a skeletal schedule and have the LLM convert it into a .csv that I import.
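    A minimal sketch of that kind of conversion against Ollama’s local REST API (the model name, prompt wording, and CSV columns here are assumptions, not the commenter’s actual setup):

    ```python
    import requests

    PROMPT = """Convert this schedule into CSV with the header
    Subject,Start Date,Start Time. Output only the CSV.

    Mon 9am dentist
    Tue 2pm parent-teacher meeting
    """

    # Ollama serves a REST API on localhost:11434 by default.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": PROMPT, "stream": False},
    )
    print(resp.json()["response"])  # save as events.csv and import
    ```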

    I’m thinking of ripping my CD collection again. I’m researching a way to use an LLM to tidy up the metadata.
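    One way that could look (purely illustrative, since this is still at the research stage; Ollama’s optional "format": "json" switch asks the model for machine-readable output, and the tag fields are assumptions):

    ```python
    import json
    import requests

    messy = ["01-the_beatles-come together .flac", "02 BEATLES Something(remaster).flac"]

    prompt = ("Normalize these ripped track filenames into a JSON list of objects "
              "with artist, title, and track fields:\n" + "\n".join(messy))

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": prompt, "stream": False, "format": "json"},
    )
    tags = json.loads(resp.json()["response"])  # then write the tags with a tagger such as mutagen
    ```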

    I had a folder full of random stuff I’ve saved over the years. I had an LLM organize and categorize it for me. I had to tweak the prompt enough that this was a medium-difficulty task, but it was still way easier than doing it manually.
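    A sketch of the categorization step, assuming the filenames alone carry enough signal (the folder path and prompt are made up):

    ```python
    import pathlib
    import requests

    files = [p.name for p in pathlib.Path("~/random-stuff").expanduser().iterdir()]

    prompt = ("Sort these filenames into a few sensible categories and print "
              "one 'filename -> category' line per file:\n" + "\n".join(files))

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": prompt, "stream": False},
    )
    print(resp.json()["response"])  # review by hand before moving anything
    ```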

  • RandomLegend [He/Him]@lemmy.dbzer0.com · 2 months ago

    It’s a tool like any other. If you don’t have a use case for it, just don’t use it.

    I use it to summarize release notes and generate some minor descriptions for generic stuff in my TTRPG campaigns.

    • DrinkMonkey@lemmy.ca · 2 months ago

      “generate some minor descriptions for generic stuff in my TTRPG campaigns.”

      Need a quick 200-word description of the interior of an apothecary? Or a band of marauding orcs? It’s been a huge time saver for me.
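      With a local Ollama instance, that kind of request is a couple of lines against its API (the model name is an assumption):

      ```python
      import requests

      resp = requests.post(
          "http://localhost:11434/api/generate",
          json={
              "model": "llama3",
              "prompt": "In about 200 words, describe the interior of a fantasy apothecary.",
              "stream": False,
          },
      )
      print(resp.json()["response"])
      ```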

  • minnix@lemux.minnix.dev · 2 months ago

    Ollama without a GPU is pretty useless unless you’re running it on Apple silicon. I’d just get rid of it until you get a GPU.

      • umami_wasabi@lemmy.ml · 2 months ago

        IMO LLMs are OK for getting a head start on a search: you have a vague idea of something but don’t know the exact keywords. An LLM can suggest them, and you can use its output in whatever search engine you like. This saves a lot of time tinkering with the right keywords.
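        For example, a minimal keyword-suggestion call (model name and phrasing assumed):

        ```python
        import requests

        idea = "that thing where a filesystem snapshots itself before an update"

        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={
                "model": "llama3",
                "prompt": f"Give five short search-engine keyword phrases for: {idea}",
                "stream": False,
            },
        )
        print(resp.json()["response"])  # paste the best phrase into your search engine
        ```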

        • dwindling7373@feddit.it · 2 months ago

          Sure, or you could send an email to the leading international institution on the matter to get a very accurate answer!

          Is it the most reasonable course of action? No. Is it more reasonable than wasting a gazillion watts so you can maybe get some better keywords to paste into a search engine? Yes.

          • kitnaht@lemmy.world · 2 months ago

            Once the model is trained, the electricity that it uses is trivial. LLMs can run on a local GPU. So you’re completely wrong.

              • kitnaht@lemmy.world · 2 months ago (edited)

                Those were statements. Statements of fact.

                Once the models are already trained, it takes almost no power to use them.

                Yes, TRAINING the models uses an immense amount of power, but running a trained model locally consumes almost nothing. I can run a Llama 7B model on a 15 W Raspberry Pi, for example, while just leaving my PC on uses 400 W. And it’s all local: nothing entering or leaving the Pi, no communication with an external server, nothing being done on anybody else’s server or any AWS instance.
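                As a rough sanity check on those numbers (the 60-second generation time below is an assumption, not something measured in the comment):

                ```python
                # Back-of-the-envelope energy cost of one reply at the quoted 15 W.
                PI_WATTS = 15           # Raspberry Pi under load, per the comment
                SECONDS_PER_REPLY = 60  # assumed generation time for one answer

                wh_per_reply = PI_WATTS * SECONDS_PER_REPLY / 3600
                print(f"{wh_per_reply:.2f} Wh per reply")  # 0.25 Wh, vs. 400 Wh for one idle PC-hour
                ```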

                • dwindling7373@feddit.it
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  2 months ago

                  Notwithstanding that running an LLM is still more expensive than a search-engine query, any reasoning about running an LLM must include the training cost and, most of all, the incentive you give, as a consumer, toward further training.

                  It’s like arguing that cooking a steak has a negligible environmental impact. The point is the whole industry that exists to provide you the steak in the first place.
