• 0 Posts
  • 47 Comments
Joined 1 year ago
Cake day: June 11th, 2023

  • Yeah, I’d agree with that.

    The point I was making was for people who thought this applied to cellphone cameras and that it would somehow work even when the camera wasn’t actively running.

    As far as war driving with an SDR goes, you’d occasionally find something interesting, but the vast majority would be cameras just pointed back out at the street. Mostly you’d be picking up scenes that are already public, where anyone who actually wanted to spy would be better off hiding their own camera.

    All that said, I would lose my shit if Hollywood did something believable for once and used this for a heist movie.





  • I work on this stuff. Short answer: no, it’s not possible. This is just yet another overly complicated TEMPEST attack. Especially with phones, the camera link is so short that it just isn’t radiating enough; they claim 30 cm, so you basically need the receiver in the same backpack as the phone. As phones get higher-resolution, faster cameras this will become even less of an issue. Most importantly, the camera has to be powered and running for this to work, so just don’t take pictures of classified stuff while carrying around a weirdly warm battery bank an unusually attractive Eastern European girl gave you as an engagement gift and you’re good.

    The actual target here is some sort of The Thing https://en.m.wikipedia.org/wiki/The_Thing_(listening_device) style attack, where someone with a huge budget gets a wildly expensive device really close to a system through a significant human-intelligence effort.

    The line of reasoning is valid, though. These satellites will have some ability to track and intercept low-power intentional emissions like WiFi and cellular packets. While those are encrypted, there’s still a lot you can do with the metadata.
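
    To make the metadata point concrete, here’s a rough sketch of how much a purely passive listener learns from WiFi traffic without decrypting anything. This is my own illustration, not something from the article: it assumes scapy and a wireless card already in monitor mode, and the interface name "wlan0mon" is just a placeholder.

    ```python
    # Passive 802.11 metadata capture: nothing here decrypts a payload, yet
    # probe requests alone reveal which devices are nearby, what networks
    # they're looking for, and roughly how strong (close) they are.
    # Assumes scapy is installed and "wlan0mon" is a monitor-mode interface.
    from scapy.all import sniff
    from scapy.layers.dot11 import Dot11, Dot11Elt, Dot11ProbeReq

    def show_metadata(pkt):
        if pkt.haslayer(Dot11ProbeReq):
            mac = pkt[Dot11].addr2                      # transmitter address
            elt = pkt.getlayer(Dot11Elt)                # first tagged element carries the SSID
            ssid = elt.info.decode(errors="replace") if elt else ""
            rssi = getattr(pkt, "dBm_AntSignal", None)  # RadioTap signal strength, if present
            print(f"{mac} probing for {ssid!r} at {rssi} dBm")

    sniff(iface="wlan0mon", prn=show_metadata, store=False)
    ```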




  • So the line of reasoning I’m taking is that current AI is just a statistical model. It’s useful for plenty of stuff, but it doesn’t do well at things that don’t lend themselves to a statistical approach. For instance, it can kinda “luck” its way through basic math problems because there are a lot of examples in its training set, but it’s fundamentally not doing the kind of forward reasoning/chaining that’s required to actually solve problems that aren’t commonly seen.

    In the case of a robot body, where are they going to get the training set required to fully control it? There isn’t a corpus of trillions of human movements available to scrape from the web. As mentioned in this thread, you can get certain types of AI to play video games, but that’s relatively easy because the environment is simple, virtual, and reproducible. In the real world you have to account for things like sample variation between actuators and forces you didn’t expect, and you don’t have infinite robots if one breaks itself trying to learn a motion (toy sketch of the actuator-variation problem below). Boston Dynamics uses forms of AI, but they’re not strictly the types that are exploding right now and don’t necessarily translate well.
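
    To put a number on the sample-variation point, here’s a toy sketch, entirely made up for illustration (plain Python, nothing to do with how Boston Dynamics or anyone else actually trains robots): a motion “learned” against one idealized simulated actuator stops hitting its target as soon as the physical actuator’s gain drifts by the kind of margin you see between real units.

    ```python
    import random

    def run(commands, actuator_gain):
        """Toy 1-D joint: each command pushes the position forward a little,
        scaled by an actuator gain that varies from one physical unit to the next."""
        pos = 0.0
        for c in commands:
            pos += actuator_gain * c * 0.1
        return pos

    def train_open_loop(target=1.0, steps=20, iters=2000):
        """'Learn' a fixed command sequence by random search against a single
        idealized simulator whose actuator gain is exactly 1.0."""
        best, best_err = None, float("inf")
        for _ in range(iters):
            cand = [random.uniform(0.0, 1.0) for _ in range(steps)]
            err = abs(target - run(cand, actuator_gain=1.0))
            if err < best_err:
                best, best_err = cand, err
        return best

    commands = train_open_loop()

    # Near-perfect on the simulator it was trained on, but ordinary
    # unit-to-unit hardware variation throws the same motion off.
    for gain in (0.8, 1.0, 1.2):
        print(f"actuator gain {gain}: end position {run(commands, gain):.3f} (target 1.0)")
    ```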


  • This…isn’t how the current paradigm of AI works at all. We’ve built glorified auto-complete bots, not something that can make a physical robot behave at a human level. Best case, they build something that can carry on a conversation long enough to excite a tech journalist and aimlessly meander like the Boston Dynamics bots, but without the pre-programmed tasking (assuming they don’t cheat and add canned routines).

    So that leaves one option: it’s a moonshot project to convince the tech illiterate public to take them and their stock price to the moon long enough for a few people to make an obscene amount of money.


  • potatopotato@sh.itjust.works to Technology@lemmy.world · Have We Reached Peak AI? · 6 months ago

    Please god I hope so. I don’t see a path to anything significantly more powerful than current models within this paradigm. ANNs like these have existed forever and have always behaved the way current LLMs do; only recently did people get them running somewhat more efficiently, with bigger context windows and training sets, which birthed GPT-3, which was then minimally tweaked into 3.5 and 4, among others. This feels a whole lot like a local maximum, where anything better will have to go back down through several more development cycles before it surpasses the current gen.





  • Part of the issue is the whole thing smells weird.

    Like, they won’t talk about their monetization strategy at all, but they acknowledge there will be one. They’re trying to randomly apply crypto to something that’s literally already the one proven blockchain tech, they started at the height of the crypto-token scam industry, and it looks a lot like they’re trying to suck up the last dregs of that cycle.

    If you’re hammering crypto into things that don’t obviously need crypto, you really need to justify it thoroughly. A relatively old company just hand-waving all of it away should raise all of the red flags.