Just some Internet guy

He/him/them 🏳️‍🌈

  • 1 Post
  • 464 Comments
Joined 1 year ago
Cake day: June 25th, 2023

  • For the end user, its main weakness is that complex pages can be pretty slow to render if they’re not coded well. Even then it’s not that bad: you wouldn’t go “oh, this is a React site, yuck”, they’re all like that these days, for the reasons you’d expect.

    As for React Native, its main issue is the communication between the JavaScript browser-ish environment and the Java/Kotlin native environment, which can be costly because everything has to be serialized (meaning, converted to some type of data structure both sides can understand) and deserialized, so complex screen updates don’t scale too well.

    It’s easy for developers to accidentally trigger much bigger and much more expensive re-renders than expected. If you see second-long page hangs on some websites as new content loads in, that’s usually what happened.


    For developers, it’s complicated; you kind of need to experience it to understand the footguns.

    React was born to solve one particular problem at Facebook: how do you make it so any developer can jump on any part of the UI code and add features without breaking everything? One of the most complicated aspects of a website is state management, in other words, making sure every part of the page is updated when something changes. For example, if you read a message in your inbox, the unread count needs to update in a couple of places on the page. That’s hard because you need to make sure everything that can change that count is in agreement with everything that displays that count.

    React solves that problem by hiding it away from you. Its model is simple: given a set of inputs, you have a function that outputs how to display them. Every time a value changes, React re-renders every component that used that value, compares the result with the previous one, and then modifies the page with the updated data. That’s why it’s called React: it reacts to changes and actions.
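
    A minimal sketch of that model (made-up component, plain React with hooks):

    ```tsx
    import { useState } from "react";

    // Given the unread count as input, the function describes what to
    // display. When setUnread changes the value, React re-runs the
    // function, diffs the new output against the previous one, and
    // patches only the parts of the page that changed.
    function InboxHeader() {
      const [unread, setUnread] = useState(3);

      return (
        <header>
          <span>Inbox ({unread})</span>
          <button onClick={() => setUnread(0)}>Mark all read</button>
        </header>
      );
    }
    ```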

    The downside is that if you’re not very careful, you can place something in a non-ideal spot and it cascades into re-rendering the entire page every time that thing updates. At scale it usually works out relatively okay, and it’s not like rendering the whole page is that expensive. There’s an upper cap on how bad it can be (it won’t let you get into re-render loops), but it can be slow.

    I regularly see startups with 25MB of JavaScript caused by React abuse and by favoring new features over tracking down excessive renders. The app loads the same data 5 times because “this should only render once” turned out to be false, but hey, it displays correctly. I commonly see entire forms re-rendered on every character you type because the data is stored in the form’s state, so React has to re-render that entire tree.
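
    Roughly what that form footgun looks like (field names made up for illustration):

    ```tsx
    import { useState, type ChangeEvent } from "react";

    // Anti-pattern: every field lives in one state object at the top of
    // the form, so a single keystroke in any input re-renders every
    // field in the tree.
    function SignupForm() {
      const [form, setForm] = useState({ name: "", email: "", bio: "" });

      const update =
        (key: keyof typeof form) => (e: ChangeEvent<HTMLInputElement>) =>
          setForm({ ...form, [key]: e.target.value });

      return (
        <form>
          <input value={form.name} onChange={update("name")} />
          <input value={form.email} onChange={update("email")} />
          <input value={form.bio} onChange={update("bio")} />
        </form>
      );
    }
    ```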

    But it’s not that bad. It’s entirely possible to make great and snappy sites with React. Arguably the problem isn’t React itself but how strongly it’s associated with horrible websites, because of how tolerant of bad code it is. It’s maybe a little bit too easy to learn, so it gives bad developers an undeserved sense of confidence.

    E: And we now have better solutions to this, such as signals, which SolidJS, Vue and Svelte make heavy use of. Most of the advantages, with fewer of the problems.
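
    For contrast, a tiny signals sketch in SolidJS (assuming solid-js; Vue and Svelte differ in syntax but not in spirit):

    ```tsx
    import { createSignal } from "solid-js";

    // The component function runs once. Reading count() inside the JSX
    // subscribes just that text node, so a click updates only the
    // number, not the whole component.
    function Counter() {
      const [count, setCount] = createSignal(0);

      return (
        <button onClick={() => setCount(count() + 1)}>
          Clicks: {count()}
        </button>
      );
    }
    ```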


    Anyway, none of that is really why I don’t like React. The point is: skip the web, you don’t actually need the web. React Native skipped the whole HTML part; it’s still JSX, but the components it builds the UI from are native ones with styles. The web backend worked very well: your boxes became divs with some styles, and it pretty much just worked. Do that but entirely in Rust, since Rust can run natively on all platforms. Rust gets to skip all the compromises RN needed, and skip the embedded browser entirely. Make it desktop first, then make the web version; it’ll run just as well, and might even generate better code than if a human wrote it. Making the web look native sucks, but making native fit the web is a lot easier than it looks. Letting go of HTML and CSS was a good call from React Native.



  • I wish we went the other way around: build for native and compile to HTML/CSS/WASM.

    For me the disadvantage of Electron is, well, that it doesn’t have any advantage or performance improvement over the browser version for 99% of use cases, and when you shove that onto a mobile phone it performs as horribly as the web version.

    People already use higher-level components that end up shitting out HTML and CSS anyway, so why not skip the middleman and just render the box optimally from the start? Web browsers have become good, but if you can skip parsing HTML and CSS entirely, and skip maintaining their state too, that’s even better.

    I had the misfortune of developing a React Native app, and I’d say thinking in terms of rows and columns and boxes was nice. Most of RN’s problems come from the fact that it still runs JS, so you have to bundle Node and the native messaging bridge, and of course it’s tied to the turd that is React. But I have zero complaints about the UI part when it doesn’t involve the bridge: very smooth and snappy, much more so than the browser. And the browser version was no different from standard React in performance.

    I like that it’s not yet another Chromium one at least.




  • It’ll depend a lot on your experience. I can install Arch without reading the wiki at all in about 5 minutes for something fairly vanilla. If you’re comfortable with Linux, following the wiki won’t be too hard; it took me maybe 2-3 hours on my first install before I had my DE and everything all set up (12 years ago). If you’ve never used Linux before and take the deep dive, it could take hours or days depending on how fast you can absorb all that information.

    “Easy” is very subjective. There’s stuff that’s so dumbed down for the sake of “easy” that it makes my life harder when I need to do more complex things, and I know people for whom linear algebra in 11 dimensions is easy to do and solve. Easy is relative to your own experience level and what you’re trying to accomplish.

    Install it in a VM as a test run, you’ll see for yourself.


  • No, simply because even with pure CSS, and even pure HTML, you can find ways to leak some information about the browser. For example, a background image that only loads at 1920x1080, another for 2560x1440, and so on. Make hundreds of those for every possible resolution (they can be the same file on the server, just at a different path), and there you go: from the server logs you now know the client downloaded img/background/2448x1280.png. You can use the same trick for fonts by applying it to a box on the page that is sized based on text content, repeated for every font you want to test for.
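
    A rough sketch of the resolution trick (paths hypothetical; written as a TypeScript generator for the stylesheet, since nobody would hand-write hundreds of rules):

    ```ts
    // One media query per screen size: each one fetches a different
    // URL, so the server log of which /img/background/WxH.png was
    // requested reveals the client's resolution with zero JavaScript.
    const sizes: Array<[number, number]> = [
      [1920, 1080],
      [2560, 1440],
      [2448, 1280],
      // ...one entry per resolution you want to distinguish
    ];

    const css = sizes
      .map(
        ([w, h]) =>
          `@media (width: ${w}px) and (height: ${h}px) { ` +
          `body { background-image: url("/img/background/${w}x${h}.png"); } }`
      )
      .join("\n");

    console.log(css); // serve this as the site's stylesheet
    ```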

    There’s just a ton of those little features that exist as performance optimizations, because loading a 4K background on a 480p phone is a bad experience for everyone involved. Sometimes you need to know the size of one element to position other elements relative to it. You need the mouse cursor position to open popups at the right place. You need the window size to realign popups and modals. You’d have to go back to text-only sites like it’s the 80s and 90s to avoid that kind of fingerprinting.

    And thus Tor’s solution: everyone gets the same window size, same fonts, same everything.


  • That’s the smart way to do math. I mean, not with such small numbers, but you’d do the same thing adding up large numbers: you break the numbers down and rearrange them in a way that’s easier to compute.
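
    For example, for 47 + 38 you can move 3 over so it becomes 50 + 35 = 85, and the same idea scales up: 470 + 380 = 470 + 30 + 350 = 850.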

    Algebra probably feels intuitive to you.

    They’re also trying to teach that in math classes (it gets called “new math”), but the boomers are freaking out because “why can’t they just do normal addition like we used to, this is so complicated”. And the answer to that is: 99% of the time you’ll be doing algebra, because we literally all carry a calculator in our pockets (and sometimes on our wrists) at all times, and we basically never need to do long division by hand. That kind of thinking also makes it really easy to break down formulas, because your brain learns to think in terms of moving stuff around in an equation.







  • The problem with Fedora, and especially the atomic versions, is that when you Google “how to do X on Linux” you pretty much always get information for Ubuntu and Debian derivatives. The atomic versions have it mildly harder because now you also have to learn how immutable distros work, and you can’t just make install something from GitHub (not that doing so is recommended, but if you just want your WiFi to work and that’s all you could find, it’s your best option).

    It’s not as bad as it used to be thanks to Flatpak and the like, but if you’re really a complete noob, the best experience will be the one where you can Google and get a working answer as easily as possible.

    Once you’re familiar with Linux and ready to upgrade, it makes sense to move to other distros like Fedora, Nobara, Bazzite, Kinoite and whatnot.

    I don’t like Ubuntu. I feel like Mint is to Ubuntu what Manjaro is to Arch, and Pop!_OS is okay when it doesn’t uninstall your DE while installing Steam. But I still recommend those 3 to noobs because everyone knows how to get things working on them, and the guides are mostly interchangeable as well; purely because it’s easy to search for help with those. I just tell them: when you’re tired of the bugs and comfortable enough with Linux, go distrohop a bit to find your more permanent home.


  • Ask your admin to turn it off, or if you’re the admin, turn it off.

    They really went with the worst possible way to implement this, in that it mangles the post to rewrite all images to the image proxy, so it’s not giving you a choice. If you want the original link, you have to reprocess the post to strip the proxy. It’s like when they thought it was a good idea to store the data HTML-encoded, so non-web clients had to try to undo all of it, lossily. It should be up to the clients to add the proxy as needed and if desired. Never mangle user data for storage; always reprocess it as needed, and cache it if the processing is expensive.

    Now you edit a post and your links are rewritten to the proxy, and if you save it again, you proxy the proxy. Just like when they applied the HTML processing on save: if you edited a post and saved it again, it would get double-encoded.
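
    A sketch of why rewriting at save time compounds (hypothetical helper; the exact proxy route is my assumption):

    ```ts
    // Rewriting on save is not idempotent: run it twice and you end up
    // with a proxy URL pointing at another proxy URL.
    const proxify = (url: string): string =>
      `https://lemmy.example/api/v3/image_proxy?url=${encodeURIComponent(url)}`;

    const original = "https://files.example.com/cat.png";
    const saved = proxify(original); // first save rewrites the link
    const edited = proxify(saved);   // edit + save again: proxy of a proxy

    console.log(edited); // ...image_proxy?url=https%3A%2F%2Flemmy.example%2F...
    ```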

    Personally I leave it off and let Tesseract do it instead when it renders the images, which is the right way to do it. If the user wants a fresh copy because it’s a dynamic image, they can request one on demand instead of being forced into it. And it works retroactively, unlike the Lemmy server, which only does it for new posts.


  • API documentation isn’t a tutorial; it’s there to tell you what the arguments are, what the function does, what to expect as output, and, generally, what’s available.

    I actually have the opposite problem from you: it infuriates me when a project’s documentation is purely a bunch of examples and you have to guess your way through anything off the tutorial’s paved path. Tell me everything that’s available so I can piece together something for what I need; I don’t want that info in chapter 12 of the example of building a web store. I’ve been coding for nearly two decades now, I’m not going to follow a shopping cart tutorial on the off chance that’s where you explain how the framework defines many-to-many relationships.

    I believe an ideal world has both covered: full API documentation that’s straight to the point, so experienced people know about all the options and functions available, but also a bunch of examples and a tutorial for those who are new and need to get started and learn how to use the library.

    Your case is probably a bit atypical, as PyTorch and AI stuff in general is inherently pretty complex. It likely assumes you know your calculus and linear algebra and the like, which would make the API docs extra dense.


  • And also, with the atomic/immutable distros, the switch is practically instant, so it’s not like it forces you to watch a spinning circle for 20 minutes when you turn off your computer. You reboot, and the apps all start clean with the right library versions.

    It’s rare, but I’ve seen software trash itself because the newly spawned process speaks a different protocol, which can lead to crashes or odd behavior that leads to a crash eventually. Or it tries to read a file mid-update. Kernel updates can mean that when you plug in a USB stick nothing happens, because the driver’s gone. Firefox, as you mentioned. Chromium will mostly tolerate it, but it can get very weird over time.

    The risk is non-zero, so when you target end users who don’t want to have to troubleshoot, it’s safer to just do offline updates. Especially with Flatpaks now: those get updated online, and really only system components are left, where you don’t care if the update takes effect only at the next reboot.

    If you’re new to Linux and everyone told you that you can just update without rebooting, and then you run into weird Firefox glitches, it just looks bad.



  • The stability of a distro usually has more to do with API and ABI stability than with stability in the sense of reliability. And a “stable” system can be unreliable.

    That’s why RHEL forks are said to be compatible bug for bug: you don’t know whether fixing a bug could have a cascading side effect on somebody’s very critical system.

    Arch has been nothing but reliable for me. Does it sometimes need fixing because the config format of some daemon changed, or Python or Node.js got updated and now my project doesn’t build? Absolutely. But for me, newer versions are usually better even if they need some fixing, and I like doing it piecemeal rather than all at once every couple of years.

    Stable distributions are well loved for servers because you don’t want to update 2000 servers and suddenly be losing millions because your app isn’t compatible with the latest Ruby version. You need to be able to reliably install and reinstall the same distro version, and the same packages at the same versions, over and over. I can’t deal with needing a new server up urgently and getting stuck fixing a bunch of stuff because I got a newer version of something.

    I use multiple distros regularly, for different purposes. Although lately Docker has significantly reduced my need for stable distros, and I lean more on rolling distros as the host.


  • Lemmy wasn’t ready, and still mostly isn’t, for a mass Reddit exodus. Nobody anticipated the Reddit API fiasco, and the large influx of users exposed a ton of bugs and federation issues.

    But it’s not a failure, yet. I’m sure Reddit had growing pains after the Digg exodus too. Some platforms take years to become popular; Reddit was small for quite a while before it went more mainstream.

    In a way, Lemmy feels to me a bit like Reddit must have felt a few years before I joined it 12 years ago.

    The problem is the expectation that Lemmy could replace Reddit overnight, and would immediately be a 1:1 replacement.

    Although personally I like it more here, and I get more interactions than I did on Reddit. But I am a tech nerd, so.