DefederateLemmyMl

  • Gen𝕏
  • Engineer ⚙
  • Techie 💻
  • Linux user 🐧
  • Ukraine supporter 🇺🇦
  • Pro science 💉
  • Dutch speaker
  • 1 Post
  • 217 Comments
Joined 11 months ago
Cake day: August 8th, 2023


  • most PCs by that time had built-in MIDI synthesizers

    Built-in? You had AdLib cards for FM synthesis, but they were never built-in, and most PCs didn’t even have them. AdLib cards used the Yamaha OPL2 or OPL3 chip.

    Along came Creative Labs with their AWE32, a synthesizer card that used wavetable synthesis instead of FM

    You are skipping a very important part here: cards that could output digital audio. The early Sound Blaster cards were the pioneers here (SB 1.0, SB 2.0, SB Pro, SB16). The SB16, for example, was waaaaay more popular than the AWE32 ever was, even though it still used OPL3-based FM synth for music. It’s the reason why most sound cards in the 90s were “Sound Blaster compatible”.

    Digital audio meant that games could have recorded digital sound effects. So when you fired the shotgun in Doom to kill demons, it would play an actual recording of a shotgun blast and demon grunts instead of bleeps or something synthesized, and it was awesome. This was the game changer that made sound cards popular, not wavetable.

    The wavetable cards, I feel, were more of a sideshow. They were interesting, and a nice upgrade, especially if you composed music. They never really took off, though, and they soon became obsolete as games switched from MIDI-based audio to digital audio; Quake 1, for example, already shipped its music as audio tracks on the CD-ROM, making wavetable synthesis irrelevant.

    BTW, I also feel like you are selling FM synthesis short. The OPL chips kinda sucked for plain MIDI, especially with the Windows drivers, and they were never good at reproducing instrument sounds. But if you knew how to program them and treated the chip as its own instrument rather than a tool to emulate real-world instruments (see the sketch below for what that direct programming looked like), they were capable of producing beautiful electronic music with a very distinctive sound signature. You should check out some of the AdLib trackers, like AdTrack2, for examples. Many games also had beautiful FM-synthesized soundtracks, and I often preferred them over the AWE32 wavetable versions (e.g. Doom, Descent, Dune).
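
    To make “programming the chip directly” concrete, here is a minimal sketch of DOS-era OPL2 code in C. It assumes Borland-style port I/O from <dos.h> and the standard AdLib base port 0x388; the operator settings are arbitrary values for illustration, not taken from any real game.

    ```c
    #include <dos.h>   /* outportb/inportb, Borland-style DOS port I/O */

    /* Write one OPL2 register: select it on 0x388, write the value on 0x389.
     * The dummy status reads are the classic settle delays the chip needs. */
    static void opl_write(unsigned char reg, unsigned char val)
    {
        int i;
        outportb(0x388, reg);
        for (i = 0; i < 6; i++) inportb(0x388);
        outportb(0x389, val);
        for (i = 0; i < 35; i++) inportb(0x388);
    }

    int main(void)
    {
        /* Channel 0 is the operator pair 0x00 (modulator) + 0x03 (carrier). */
        opl_write(0x20, 0x01);   /* modulator: frequency multiplier 1      */
        opl_write(0x23, 0x01);   /* carrier:   frequency multiplier 1      */
        opl_write(0x40, 0x18);   /* modulator level (shapes the FM timbre) */
        opl_write(0x43, 0x00);   /* carrier at full volume                 */
        opl_write(0x60, 0xF4);   /* modulator attack/decay                 */
        opl_write(0x63, 0xF4);   /* carrier attack/decay                   */
        opl_write(0x80, 0x7F);   /* modulator sustain/release              */
        opl_write(0x83, 0x7F);   /* carrier sustain/release                */
        /* Key on: block 4 with F-number 580 sounds roughly A-440. */
        opl_write(0xA0, 580 & 0xFF);
        opl_write(0xB0, 0x20 | (4 << 2) | (580 >> 8));
        return 0;
    }
    ```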


  • We are talking about addresses, not counters. An inherently hierarchical one at that. If you don’t use the bits you are actually wasting them.

    Bullshit.

    I have a 64-bit computer; it can address up to 18.4 exabytes, but it only has 32GB of RAM, so I will never use the vast majority of that address space. Am I “wasting” it?

    All the 128 bits are used in IPv6. ;)

    Yes, they are all “used”, but you don’t need them. We are not using 2^128 IP addresses in the world. In your own terminology: you are using 4 registers for a 2-register problem. That is much more wasteful in terms of hardware than using 40 bits to represent an IP address and wasting 24 bits (see the size check below).
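
    For a concrete sense of that footprint, here is a trivial C check using the standard POSIX socket types from <netinet/in.h>: every IPv6 address you store, hash, or compare costs four times the bytes of an IPv4 one.

    ```c
    #include <stdio.h>
    #include <netinet/in.h>

    int main(void)
    {
        printf("IPv4 address: %zu bytes\n", sizeof(struct in_addr));  /* 4  = one 32-bit register   */
        printf("IPv6 address: %zu bytes\n", sizeof(struct in6_addr)); /* 16 = four 32-bit registers */
        return 0;
    }
    ```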


  • you are wasting 24 bits of a 64-bit register

    You’re not “wasting” them if you just don’t need the extra bits. Are you wasting a 32-bit integer if your program only ever counts up to 1,000,000?

    Even so, when you do start to need them, you can gradually make the other bits available in the form of more octets. You can just define a.b.c.d.e = 0.a.b.c.d.e = 0.0.a.b.c.d.e = 0.0.0.a.b.c.d.e, so existing addresses stay valid as the format widens (see the sketch at the end of this comment).

    Recall that IPv6 came out just a year before the Nintendo 64

    If you’re worried about wasting registers, it makes even less sense to switch from a 32-bit address space to a 128-bit one in one go.

    Anyway, your explanation is a perfect example of the “second-system effect” at work. You get all caught up in the mistakes of the first system, in this case the lack of address bits, and then you go all out to correct those mistakes in your second system, giving it all the bits humanity could ever need before the heat death of the universe, while ignoring the real-world implications of your choices. And then you are surprised that nobody wants to use your 128-bit abomination.
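
    As a sketch of the gradual widening mentioned above (my own illustration, not an actual proposal; parse_extended is a made-up name): accept anywhere from 4 to 8 octets and right-align them in a 64-bit value, so missing leading octets are implicitly zero and existing addresses never change meaning.

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Parse 4 to 8 dot-separated octets, right-aligned into 64 bits, so
     * "a.b.c.d.e" parses to the same value as "0.0.0.a.b.c.d.e". */
    static uint64_t parse_extended(const char *s)
    {
        unsigned o[8] = {0};
        int n = sscanf(s, "%u.%u.%u.%u.%u.%u.%u.%u",
                       &o[0], &o[1], &o[2], &o[3], &o[4], &o[5], &o[6], &o[7]);
        uint64_t addr = 0;
        for (int i = 0; i < n; i++)
            addr = (addr << 8) | (o[i] & 0xff);
        return addr;
    }

    int main(void)
    {
        /* An existing IPv4 address keeps its value at every width. */
        printf("%llx\n", (unsigned long long)parse_extended("192.168.0.1"));         /* c0a80001 */
        printf("%llx\n", (unsigned long long)parse_extended("0.192.168.0.1"));       /* c0a80001 */
        printf("%llx\n", (unsigned long long)parse_extended("0.0.0.0.192.168.0.1")); /* c0a80001 */
        return 0;
    }
    ```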


  • Hmm, I can’t say that I’ve ever noticed this. I have a 3950x 16-core CPU and I often do video re-encoding with ffmpeg on all cores, and occasionally compile software on all cores too. I don’t notice it in the GUI’s responsiveness at all.

    Are you absolutely sure it’s not I/O-related? A compile is usually doing a lot of random I/O as well. What kind of drive are you running this on? Is it the same drive your home directory is on?

    Way back when I still had a much weaker 4-core CPU, I had issues with window and mouse lag when running certain heavy jobs as well, and it turned out that using ionice helped me a lot more than using nice (see the sketch below for what ionice actually does).

    I also remember that fairly recently there was a KDE Plasma stutter bug caused by it constantly reading from ~/.cache. Brodie Robertson talked about it: https://www.youtube.com/watch?v=sCoioLCT5_o
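
    For what it’s worth, ionice -c3 <command> essentially just puts the process in the kernel’s idle I/O scheduling class before running it. A minimal C sketch of that, assuming Linux (the constants mirror <linux/ioprio.h>; glibc has no wrapper for ioprio_set, hence the raw syscall):

    ```c
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    /* From <linux/ioprio.h>: priority = (class << 13) | level. */
    #define IOPRIO_WHO_PROCESS 1
    #define IOPRIO_CLASS_IDLE  3
    #define IOPRIO_CLASS_SHIFT 13

    int main(int argc, char *argv[])
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
            return 1;
        }
        /* who = 0 means the calling process; the idle class takes no level. */
        if (syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0,
                    IOPRIO_CLASS_IDLE << IOPRIO_CLASS_SHIFT) == -1) {
            perror("ioprio_set");
            return 1;
        }
        execvp(argv[1], &argv[1]);  /* replaces this process on success */
        perror("execvp");
        return 1;
    }
    ```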

  • You don’t even have to NAT the fuck out of your network. NAT is usually only needed in one place: where your internal network meets the outside world, and it provides a clean separation between the two as well, which I like.

    For most internal networks there really are no advantages to moving to IPv6 other than bragging rights.

    The more I think about it, the more I find IPv6 a huge, overly complicated mistake. For the issue they wanted to solve, the worldwide shortage of public IPs, they could have just added an octet to IPv4 to multiply the number of available addresses by 256 and called it a day. Not every square cm of the planet needs a public IP.


  • People have choices. If they want to keep using the Lemmy.ml community, that’s their freedom. The alternatives exist, if they want to switch, they can.

    Because the network effect is a thing, it’s really the illusion of choice. When a lemmy.ml community has 50k subscribers and the equivalent lemmy.world or programming.dev community has just a tenth of that, it’s not really a choice. People will always gravitate towards ml, and the smaller community will never gain critical mass unless some strong enough outside force influences that decision.

    Which brings me to …

    Intrigued by your name change, you are really pushing for this.

    I think defederation from lemmy.ml, together with raising awareness about it, should be that outside force to move communities off lemmy.ml.



  • It’s when you have to set static routes and such.

    For example, I have a couple of locations tied together with a WireGuard site-to-site VPN, each with several subnets. I had to write wg config files and set static routes with hardcoded subnets and IP addresses. Writing the wg config files and getting everything working was already a bit daunting with IPv4, because I was also wrapping my head around WireGuard concepts at the same time. It would have been so much worse to debug with unreadable IPv6 subnet addresses.

    Network ACLs and firewall rules are another place where you have to work with raw IPv6 addresses. For example, say you have a Samba share or proxy server that you only want to be accessible from one specific subnet: you have to match on IPv6 addresses. You can’t solve that with DNS names (see the sketch below).

    Anyway, my point is: the idea that you can simply avoid IPv6’s complexity by using DNS names is just wrong.
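
    To illustrate, this is the kind of raw-address prefix match such a rule boils down to, sketched in C with the standard <arpa/inet.h> API. The in6_match helper is my own illustration, and the 2001:db8:: addresses come from the IPv6 documentation range, not a real network.

    ```c
    #include <stdio.h>
    #include <string.h>
    #include <arpa/inet.h>

    /* Does addr fall inside prefix/len? The whole-byte part is a memcmp;
     * any leftover bits are masked and compared separately. */
    static int in6_match(const struct in6_addr *addr,
                         const struct in6_addr *prefix, int len)
    {
        int full = len / 8, rem = len % 8;
        if (memcmp(addr->s6_addr, prefix->s6_addr, full) != 0)
            return 0;
        if (rem == 0)
            return 1;
        unsigned char mask = (unsigned char)(0xff << (8 - rem));
        return (addr->s6_addr[full] & mask) == (prefix->s6_addr[full] & mask);
    }

    int main(void)
    {
        struct in6_addr subnet, host;
        inet_pton(AF_INET6, "2001:db8:1234:5678::", &subnet);
        inet_pton(AF_INET6, "2001:db8:1234:5678::42", &host);
        printf("allowed: %d\n", in6_match(&host, &subnet, 64)); /* 1 */
        return 0;
    }
    ```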