• FizzyOrange@programming.dev · 2 days ago

    Ah, this ancient nonsense. TypeScript and JavaScript get different results!

    It’s all based on

    https://en.wikipedia.org/wiki/The_Computer_Language_Benchmarks_Game

    Microbenchmarks, which are heavily gamed. Though in fairness, the overall results are fairly reasonable.

    Still, I don’t think this “energy efficiency” result is worth talking about. Faster languages are more energy efficient. Who knew?
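
    Back-of-the-envelope version of that point (the numbers are made up, and it assumes both runs draw roughly the same average power):

        // Rough model: energy ≈ average power draw × wall-clock time.
        // The figures below are invented for illustration, not taken from the paper.
        const avgPowerWatts = 30;     // assume similar CPU power draw for both runs
        const runtimeFastSec = 2;     // hypothetical "fast language" runtime
        const runtimeSlowSec = 10;    // hypothetical "slow language" runtime

        const energyFastJ = avgPowerWatts * runtimeFastSec;   // 60 J
        const energySlowJ = avgPowerWatts * runtimeSlowSec;   // 300 J

        console.log(energySlowJ / energyFastJ);  // 5x the energy, i.e. the same ratio as the runtimes

    Power draw isn’t literally identical across languages, but runtime dominates, which is why an energy ranking mostly just tracks the speed ranking.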

    Edit: this also has some hilarious visualisation WTFs - using dendrograms for performance figures (figures 4-6)! Why on earth do figures 7-12 include line graphs?

      • FizzyOrange@programming.dev · 2 days ago

        Private or obscure ones I guess.

        Real-world (macro) benchmarks are at least harder to game, e.g. how long does it take to launch Chrome and open Gmail? That’s actually a useful task, so if you speed it up, great!
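
        Something like this rough Puppeteer sketch (untested; the Gmail URL is really a placeholder, since an unauthenticated headless browser just gets bounced to a login page):

            // Macro-benchmark sketch: time a cold browser launch plus a page load.
            // Assumes puppeteer is installed and this runs as an ES module.
            import puppeteer from "puppeteer";

            const start = performance.now();
            const browser = await puppeteer.launch();   // launch a browser (Puppeteer's bundled Chromium stands in for Chrome)
            const page = await browser.newPage();
            await page.goto("https://mail.google.com", { waitUntil: "networkidle0" });  // "open Gmail"
            console.log(`launch + load: ${(performance.now() - start).toFixed(0)} ms`);
            await browser.close();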

        Also these benchmarks are particularly easy to game because it’s the actual benchmark itself that gets gamed (i.e. the code for each language), not the thing you’re trying to measure with the benchmark (the compilers). Usually the benchmark is fixed and it’s the targets that contort themselves to it, which is at least a little harder.

        For example some of the benchmarks for language X literally just call into C libraries to do the work.
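
        To make that concrete, here’s a contrived illustration (not an actual Benchmarks Game entry): one “JS” measurement whose work happens entirely inside a native C/C++ library, next to one that actually exercises the JS engine.

            // Contrived example: "entry A" delegates the hot path to native code
            // (OpenSSL via node:crypto), so it says very little about the JS engine.
            import { createHash } from "node:crypto";

            const input = Buffer.alloc(64 * 1024 * 1024, 7);  // 64 MiB of dummy data

            let t = performance.now();
            createHash("sha256").update(input).digest("hex");
            console.log(`native-backed: ${(performance.now() - t).toFixed(0)} ms`);

            // "Entry B" touches the same data from JavaScript itself.
            t = performance.now();
            let acc = 0;
            for (let i = 0; i < input.length; i++) acc = (acc + input[i]) | 0;
            console.log(`pure JS loop: ${(performance.now() - t).toFixed(0)} ms (acc=${acc})`);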

        • ExLisper@lemmy.curiana.net · 2 days ago

          Private or obscure ones I guess.

          Private and obscure benchmarks are very often gamed by the benchmarkers themselves. It’s very difficult to design a fair benchmark (e.g. Chrome can be optimized to load Gmail for obvious reasons; maybe we should pick a fairer website when comparing browsers, but which one? And how can we know that neither browser has optimizations specific to page X?). Obscure benchmarks are useless because we don’t know if they measure the same thing. Private benchmarks are definitely fun but only useful to the author.

          If a benchmark is well established you can be sure everyone is trying to game it.

    • Dumhuvud@programming.dev · 3 days ago

      TypeScript and JavaScript get different results!

      It does make sense if you skim through the research paper (page 11). They aren’t using performance.now() or whatever the current state of the art in JS is. Their measurements include invocation of the interpreter. And parsing TS involves bigger overhead than parsing JS.
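
      Rough sketch of the difference as I read it (this isn’t their actual harness, and workload.js is a placeholder):

          // Two ways to time the same script; the paper's numbers resemble the second.
          import { execSync } from "node:child_process";

          // 1) In-process: excludes interpreter startup and parsing.
          const t0 = performance.now();
          // ... workload would go here ...
          const inProcessMs = performance.now() - t0;

          // 2) Whole-process: includes starting node and parsing the source,
          //    which is where a TS entry could pay more than a plain JS one.
          const t1 = performance.now();
          execSync("node workload.js");   // placeholder script name
          const wholeProcessMs = performance.now() - t1;

          console.log({ inProcessMs, wholeProcessMs });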

      I assume (didn’t read the whole paper, honestly DGAF) they don’t do that with compiled languages, because there’s no way the gap between compiling C and Rust or C++ is that small.

      • FizzyOrange@programming.dev · 2 days ago

        Their measurements include invocation of the interpreter. And parsing TS involves bigger overhead than parsing JS.

        But TS is compiled to JS, so it’s the same interpreter in both cases. If they’re including the time for tsc in their benchmark, then that’s an even bigger WTF.
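
        A sane harness would split it like this (hypothetical sketch, main.ts is a placeholder) and only count the second step:

            // Hypothetical harness, just to show where the line should be drawn.
            import { execSync } from "node:child_process";

            // Compile step: this is tsc overhead, not program runtime.
            const tCompile = performance.now();
            execSync("npx tsc main.ts --outDir build");   // placeholder file name
            console.log(`tsc: ${(performance.now() - tCompile).toFixed(0)} ms`);

            // Run step: the exact same V8 engine a plain-JS entry would get.
            const tRun = performance.now();
            execSync("node build/main.js");
            console.log(`node: ${(performance.now() - tRun).toFixed(0)} ms`);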