• 0 Posts
  • 20 Comments
Joined 1 year ago
Cake day: June 30th, 2023


  • Despite everything, Telegram is actually great. It’s only bloated if you actually use those extra features on the device; the client is open source, with native apps for every platform, and it’s very lightweight compared to other messengers and even to some dedicated solutions. It sends files peer-to-peer when both devices are on the same network, so you don’t need to care about traffic, but it also allows on-demand downloads, so if you want, the stuff will be available outside of your network too.
    Alternatively, kdeconnect, but I find myself using Telegram instead 9 times out of 10, even though I have both installed.





  • Nalivai@lemmy.world to Privacy@lemmy.world · “I’m sick and tied of cameras” · 3 days ago

    Yeah, I know you know better than everyone else, and everyone who disagrees is just an uneducated pleb, but this one you can test for yourself. More than half of the nodal points used for face recognition sit above the centroid of the face, which is usually where the top edge of a mask ends up, and half of the remaining points are on the outer part of the face, which a mask doesn’t conceal much either. You need less than half of the points to reliably recognize a face, and even I, a stupid rube with no education, can do the math here (rough sketch below).
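    A quick back-of-the-envelope version of that math, treating the fractions above as rough assumptions (a Python sketch, not numbers from any specific recognition system):

        # Rough arithmetic using the fractions claimed above; these are
        # illustrative assumptions, not measurements from a real system.
        above_mask_line = 0.50                      # "more than half" of the nodal points sit above the mask's top edge (conservative lower bound)
        outer_face = 0.5 * (1.0 - above_mask_line)  # half of the remaining points are on the outer part of the face
        visible = above_mask_line + outer_face      # fraction a typical mask still leaves exposed: 0.75
        needed = 0.50                               # "less than half" of the points is enough for a reliable match

        print(f"visible fraction ~ {visible:.2f}, needed < {needed}")  # 0.75 > 0.5, so a mask alone doesn't defeat recognition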




  • Yeah, the scary thing about LLMs is that by their very nature they sound convincing, and it’s very easy to fall into that trap: we as humans are hardwired to mistake the ability to talk smoothly for intelligence, so when computers started to speak in complete sentences and hold the immediate context of a conversation, we immediately decided we had a thinking machine and started believing it.
    The worst thing is, there are legitimate uses for all the machine learning stuff, and for LLMs in particular, so we can’t just throw it all out of the window; we will have to collectively adapt to this very convincing randomness machine that is now just here all the time.


  • As someone with degrees and decades of experience, I urge you not to use it for that. It’s a cleverly disguised randomness machine: it will give you incorrect information that is indistinguishable from truth, because “truth” is never a criterion it can use, but being convincing is. It will seed those untruths into you, and unlearning the bad practices you picked up at the beginning might take years and cost you a career. And since you’re just starting, you have no way to tell bullshit from truth as long as the final result seems to work, and that’s the worst way to hide the bullshit from you.
    The field is already very accessible for anyone who wants to learn it: the amount of guides, examples, courses, and very useful youtube videos with thick Indian accents is already enormous, and most of them at least try to self-correct, while an LLM actively doesn’t; in fact, it tries to do the opposite.
    Best case scenario you’re learning inefficiently; worst case scenario you aren’t learning at all.