llama.cpp works on Windows too (or any OS, for that matter), though Linux will give you better performance.
Revolt tries to be a Discord clone/replacement and suffers from some of the same issues. Matrix happens to have a lot of features in common, but it is focused on privacy and security at its core.
WhatsApp is Europe’s iMessage.
You can take a look at the exllama and llama.cpp source code on GitHub if you want to see how it is implemented.
If you have good enough hardware, this is a rabbit hole you could explore: https://github.com/oobabooga/text-generation-webui/
Around 48 GB of VRAM if you want to run it in 4-bit.
To run this model locally at GPT-4 writing speed you need at least 2 x 3090 or 2 x 7900 XTX. VRAM is the limiting factor in 99% of cases for inference. You could try a smaller model like Mistral-Instruct or SOLAR with your hardware, though.
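For rough sizing: 4-bit weights take about half a byte per parameter, plus headroom for the KV cache and runtime buffers. A back-of-the-envelope sketch, assuming a ~70B-parameter model and an effective ~4.5 bits per weight for a typical 4-bit quant (both assumptions, since the exact model isn't stated above):

```python
# Back-of-the-envelope VRAM estimate for 4-bit inference.
# Parameter count and effective bits-per-weight are assumptions.
def estimate_vram_gb(n_params_billion: float,
                     bits_per_weight: float = 4.5,
                     overhead_gb: float = 8.0) -> float:
    """Quantized weights plus a flat allowance for KV cache and runtime buffers."""
    weight_gb = n_params_billion * 1e9 * (bits_per_weight / 8) / 1024**3
    return weight_gb + overhead_gb

# ~36.7 GB of weights + ~8 GB overhead ≈ 45 GB,
# which is why ~48 GB (2 x 24 GB cards) is the usual target.
print(estimate_vram_gb(70))
```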
Those slowdown articles were clickbait / bad journalism; YouTube hasn’t been slowing down the site for adblock users.
I put Zorin on my parents’ computer two years ago. While it’s a great distro, their Windows app support is just marketing: it’s an out-of-date Wine version with an unmaintained launcher. Worse than tinkering with Wine yourself.
It is already here: half of the article thumbnails are already AI-generated.
It works with plugins just like Obsidian, so if their implementation is not good enough, you can always find a Grammarly plugin.
It does not work exactly like Obsidian, as it is an outliner. I use both on the same vault, and Logseq is slower on larger vaults.
It works pretty well. You can create a good dataset for a fraction of the effort and price it would have taken to do it by hand, and the quality is similar. You just have to review each prompt so you don’t train your model on bad data.
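A minimal sketch of that workflow, assuming an OpenAI-compatible local endpoint; the base URL, model name, seed prompts, and output file are all placeholders, and the `reviewed` flag is just one way to keep the manual check in the loop:

```python
# Generate candidate prompt/response pairs with a local model and keep a
# manual-review flag so bad generations can be dropped before training.
import json
from openai import OpenAI  # any OpenAI-compatible server works here

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

seed_prompts = [
    "Explain X in one short paragraph.",
    "Summarize Y for a beginner.",
]

with open("dataset.jsonl", "w", encoding="utf-8") as f:
    for prompt in seed_prompts:
        resp = client.chat.completions.create(
            model="local-model",
            messages=[{"role": "user", "content": prompt}],
        )
        # 'reviewed' stays False until a human has checked the pair.
        f.write(json.dumps({
            "prompt": prompt,
            "response": resp.choices[0].message.content,
            "reviewed": False,
        }) + "\n")
```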
Do you use ComfyUI?
You are easier to track with AdNauseam.
Being able to run benchmarks doesn’t make it a great experience to use, unfortunately. Three quarters of applications don’t run or have bugs that the devs don’t want to fix.
Windows on ARM is not in great shape, which can be a turnoff for some.
Llama models tuned for conversation are pretty good at it. ChatGPT also was, before getting nerfed a million times.
Even dumber than that: when their activation method fails, support uses massgrave to activate Windows on customers’ PCs.
They released a search engine where the model reads the first link before trying to answer your request.
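The flow is roughly: search, fetch the top hit, stuff it into the prompt. A minimal sketch of that idea, not the actual product’s code; the search backend is left as a placeholder and the model call assumes an OpenAI-compatible endpoint:

```python
# Sketch of a "read the first link, then answer" flow.
# get_first_result_url() is a placeholder for whatever search backend is used.
import requests
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

def get_first_result_url(query: str) -> str:
    """Placeholder: return the URL of the top search hit for `query`."""
    raise NotImplementedError

def answer(query: str) -> str:
    url = get_first_result_url(query)
    # Crude truncation so the page fits in the model's context window.
    page_text = requests.get(url, timeout=10).text[:8000]
    resp = client.chat.completions.create(
        model="local-model",
        messages=[{
            "role": "user",
            "content": f"Using this page:\n{page_text}\n\nAnswer the question: {query}",
        }],
    )
    return resp.choices[0].message.content
```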