• tyler@programming.dev · 2 days ago

    This is incredible, but why does the article end by stating that this might not have any immediate applications? Shouldn’t this immediately result in more efficient hash tables in everyday programming languages?

    • barsoap@lemm.ee · 1 day ago

      After reading through the abstract, the article is pop-sci bunk: they developed a method to save additional space with constant-time overhead.

      Which is certainly novel and nice and all kinds of things, but it’s just a tool in the toolbox. Making things more optimal in theory says little about things being faster in practice, because the theoretical cost models never match what real-world machines are actually doing. In algorithm classes we learn to analyse sorting algorithms by the number of comparisons, and indeed the minimum necessary is O(n log n); in the real world, it’s the number of cache misses that matters: CPUs can compare numbers basically instantly, and getting the stuff you want to compare from memory to the CPU is where the time is spent. It can very well be faster to make more comparisons if it means you get fewer, or more regular (so that the CPU can predict and pre-fetch), data transfers.
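      A toy illustration of that point (a hypothetical micro-benchmark, nothing from the paper; exact numbers depend on your machine, and the gap is smaller in CPython than in a compiled language): both loops below do the same number of additions, but the one that walks memory in a random order defeats prefetching and takes more cache misses, so it’s typically noticeably slower.

      ```python
      import random
      import time

      N = 10_000_000
      data = list(range(N))

      sequential = list(range(N))        # visit indices in memory order
      shuffled = sequential.copy()
      random.shuffle(shuffled)           # same indices, random order

      def total(indices):
          s = 0
          for i in indices:
              s += data[i]               # identical work per iteration
          return s

      for name, idx in (("sequential", sequential), ("random", shuffled)):
          start = time.perf_counter()
          total(idx)
          print(name, time.perf_counter() - start)
      ```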

      Consulting my crystal ball, I see this trickling down into at least the minds of the people who develop the usual KV stores, database engines, etc. Maybe it’ll help, maybe it won’t; those things are already incredibly optimised. Never trust a data structure optimisation you didn’t benchmark. Never trust any optimisation you didn’t benchmark, actually. Do your benchmarks; you’re not smarter than reality. In case it does help, it’s going to trickle down into the standard implementations of data structures that languages ship with.

      EDIT: I was looking at this paper, not this one. It’s actually disproving a conjecture of Yao, who has a Turing Award, certainly a nice feather to have in your cap. It’s also way more into the theoretical weeds than I’m comfortable with. This may have applications, or it may go the way of the Karatsuba algorithm: faster only if your data is astronomically large; for (most) real-world applications the constant overhead outweighs the asymptotic speedup.

      • tyler@programming.dev · 23 hours ago

        The reason it confused me is that the college student was clearly using the algorithm to accomplish his task, not just designing it theoretically. So it didn’t seem to be a small improvement that would only be noticeable in certain situations.

        I’m not smart enough to understand the papers so that’s why I asked.

        • barsoap@lemm.ee · 23 hours ago

          Oh no, it’s definitely a theoretical paper. Even if the theory is fully formalised, and thus executable, it still wouldn’t give much insight into how it’d perform in the real world, because theorem provers aren’t the most performant programming languages.

          And, FWIW, CS theorists don’t really care about running programs, the same way theoretical physicists don’t care much about banging rocks together; in both cases, making things work in the real world is up to engineers.

      • taladar@sh.itjust.works · 1 day ago

        Also, never even start optimizing until you profile and are sure the bit you’re trying to optimize even matters to the overall performance of your program.
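        A minimal way to do that in Python (the function names below are just placeholders for your own code): run the profiler first and let it tell you where the time actually goes.

        ```python
        import cProfile
        import pstats

        def expensive_part():              # placeholder for the code you suspect is slow
            return sum(i * i for i in range(1_000_000))

        def cheap_part():                  # placeholder for everything else
            return [i for i in range(1_000)]

        def main():
            expensive_part()
            cheap_part()

        # Profile the whole run, then print the five functions with the most cumulative time.
        cProfile.run("main()", "profile.out")
        pstats.Stats("profile.out").sort_stats("cumulative").print_stats(5)
        ```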

    • OhNoMoreLemmy@lemmy.ml · 1 day ago

      Hash tables are super efficient when they’re not nearly full. So the standard trick is just to resize them when they get too close to capacity.

      The new approach is probably only going to be useful in highly memory constrained applications, where resizing isn’t an option.
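      For anyone curious what that standard trick looks like, here’s a bare-bones sketch (plain linear probing, not the scheme from the paper): inserts stay cheap as long as the table resizes itself before the load factor gets too high, and the resize is the expensive part because every entry has to be rehashed.

      ```python
      class OpenAddressingTable:
          """Toy linear-probing hash table that resizes at ~2/3 load."""

          def __init__(self, capacity=8):
              self._slots = [None] * capacity  # (key, value) pairs or None
              self._count = 0

          def _probe(self, key):
              i = hash(key) % len(self._slots)
              while self._slots[i] is not None and self._slots[i][0] != key:
                  i = (i + 1) % len(self._slots)  # linear probing
              return i

          def put(self, key, value):
              if (self._count + 1) * 3 > len(self._slots) * 2:   # load factor > 2/3
                  self._resize(len(self._slots) * 2)              # expensive: rehash everything
              i = self._probe(key)
              if self._slots[i] is None:
                  self._count += 1
              self._slots[i] = (key, value)

          def get(self, key):
              slot = self._slots[self._probe(key)]
              return slot[1] if slot else None

          def _resize(self, new_capacity):
              old = [s for s in self._slots if s is not None]
              self._slots = [None] * new_capacity
              self._count = 0
              for k, v in old:
                  self.put(k, v)
      ```

      The expensive bit is `_resize`, which is why real implementations try hard to amortise or avoid it.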

      • deegeese@sopuli.xyz · 1 day ago

        Hash tables are used in literally everything and they always need to minimize resizing because it’s a very expensive operation.

        I suspect this will silently trickle into lots of things once it gets picked up by standard Python and JavaScript platforms, but that will take years.

    • rockSlayer@lemmy.world · 1 day ago

      Infrastructural APIs are much slower to change, and in a lot of cases the use of those APIs is dependent on a specific version. The change will definitely occur over time as the practical limitations are discovered.

    • lemmyng@lemmy.ca · 1 day ago

      I haven’t read the Tiny Pointers article yet, but the OP article implies that the new hash tables may rely on them. If so, then the blocker could be the introduction (or lack thereof) of tiny pointers in programming languages.

      • tyler@programming.dev · 23 hours ago

        “Tiny Pointers” was the paper that the student read to get the idea. The paper he co-authored was “Optimal Bounds for Open Addressing Without Reordering”.

      • zkfcfbzr@lemmy.world · 1 day ago

        Hash tables are often used behind the scenes; dicts and sets in Python both use hash tables internally, for example.
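        A trivial example of what that means in practice (nothing to do with the new algorithm, just showing where the hash tables hide):

        ```python
        # dict and set are both hash tables under the hood in CPython.
        ages = {"alice": 31, "bob": 27}    # dict: the key is hashed on insert and on lookup
        seen = {"alice", "carol"}          # set: same machinery, keys only

        print(ages["bob"])                 # hash "bob", probe the table, get 27
        print("carol" in seen)             # membership test is a hash lookup -> True
        ```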

        • source_of_truth@lemmy.world · 1 day ago

          I’ve only used Java, but Java hash tables are stupid fast in my experience; everything else in my crap programs was like 1000 times slower than the hash table access or storage.

          Just from reading the title, it’s talking about searching hash tables, which wasn’t something I was specifically doing.

              • Trailblazing Braille Taser@lemmy.dbzer0.com · 24 hours ago

                I wrote my comment not to antagonize you but to point out that you’re asking the wrong questions. I failed to articulate that, and I’m sorry for being harsh.

                Your prior comment indicated that you have used hash tables in Java, which were very fast. You said that your program accessed the hash tables, but did not “search” the table. These operations are the same thing, which led me to believe you’re out of your depth.

                This last comment asks me how much this paper’s contribution speeds up an average program. You’re asking the wrong question, and you seem to be implying the work was useless if it doesn’t have an immediate practical impact. This is a theoretical breakthrough far over my head. I scanned the paper, but I’m unsurprised they haven’t quantified the real-world impact yet. It’s entirely possible that despite finding an asymptotic improvement, the constant factors (elided by the big O analysis) are so large as to be impractical… or maybe not! I think we need to stay tuned.

                Again, sorry for being blunt. We all have to start somewhere. My advice is to be mindful of where the edge of your expertise lies and try to err on the side of not devaluing others’ work.

          • deegeese@sopuli.xyz · 1 day ago

            If you use a hash table, you search every time you retrieve an object.

            If you didn’t retrieve, why would you be storing the data in the first place?

      • lime!@feddit.nu · 2 days ago

        anything that deserializes arbitrary json will put it into a hash table, right? it would definitely speed up the web.
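        for example (trivial sketch): json.loads hands you back a plain dict, i.e. a hash table, so every field access afterwards is a hash lookup.

        ```python
        import json

        payload = '{"user": "alice", "roles": ["admin", "dev"]}'
        obj = json.loads(payload)     # arbitrary JSON object -> Python dict (a hash table)

        print(type(obj))              # <class 'dict'>
        print(obj["roles"])           # each field access is a hash lookup
        ```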