They have x86_64 models.
They couldn’t play Doom (until much later). Even to this day, the Amiga ports are lackluster. Hardware wasn’t designed for that kind of game.
There’s almost always at least a little ASM sprinkled into any kernel, so that’s not a big deal.
OTOH, there is the factor of “you know how Chrome takes up 2GB per tab? What if that was a whole OS?”
You’re probably in a country that got a ton of allocations in the 90s. If you came from a country that was a little late to build out its infrastructure, or if you tried to set up a new ISP in just about any country, you’d have a much harder time.
Yes, everyone forgets them. Mostly for good reasons.
Arm is better because there are more than three companies who can design and manufacture one.
Edit: And only one of the three x86 manufacturers is worth a damn, and it ain’t Intel.
Edit2: On further checking, VIA sold its CPU design division (Centaur) to Intel in 2021. VIA now makes things like SBCs, some with Intel, some ARM. So there’s only two x86 manufacturers around anymore.
The Rust compiler tends to turn my impostor syndrome to 11. I assume she has some kind of humiliation kink and I do not consent.
If your home router blocked incoming connections on IPv4 by default now, then it’s likely to continue doing so for IPv6. At least, I would hope so. The manufacturer did a bad job if otherwise.
You can get exactly the same benefit by blocking non-established/non-related connections on your firewall. NAT does nothing to help security.
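As a sketch of what that looks like in practice, here's a minimal stateful ruleset in nftables syntax (table/chain names are my own choice, and this assumes a host-based filter on the `input` hook; a router would use `forward` instead):

```shell
# Hypothetical nftables ruleset: drop unsolicited inbound traffic by default,
# which is exactly the protection people attribute to NAT -- no translation involved.
nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0; policy drop; }'
# Allow replies to connections this network initiated, plus related traffic (e.g. ICMP errors)
nft add rule inet filter input ct state established,related accept
# Allow loopback
nft add rule inet filter input iif lo accept
```

Note the `inet` family covers IPv4 and IPv6 in one table, so the same drop-by-default behavior carries over to IPv6 with no NAT anywhere in the picture.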
Edit: BTW, every time I see this response of “NAT can prevent external access”, I severely question the poster’s networking knowledge. Like to the level where I wonder how you manage to configure a home router correctly. Or maybe it’s the way home routers present the interface that leads people to believe the two functions are intertwined when they aren’t.
Nope. There is an industry standard way of measuring latency, and it’s measured at the halfway point of drawing the image.
Edit: you can measure this through Nvidia’s LDAT system, for example, which uses a light sensor placed in the middle of the display combined with detecting the exact moment you create an input. The light sensor picks up a change (such as the muzzle flash in an fps) and measures the difference in time. If you were to make this work on a CRT running at NTSC refresh rates, it would never show less than 8.3ms when in the middle of the screen.
If you are measuring fairly with techniques we use against LCDs, then yes, CRTs have latency.
Governments are not anyone’s issue other than other governments. If your threat model is state actors, you’re SOL either way.
That’s a silly way to look at it. Governments can be spying on a block of people at once, or just the one person they actually care about. One is clearly preferable.
Again, the obscurity benefit of NAT is so small that literally any cost outweighs it.
I don’t see where you get a cost from it.
We were forced into decisions that made the Internet more centralized and less private, for reasons that can be traced directly to NAT.
If you want to hide your hosts, just block non-established, non-related incoming connections at your firewall. NAT does not help anything besides extending IPv4’s life.
JSON and XML can be “real” languages. Mostly because of people who didn’t stop to ask if they should.
But why bother? “Let’s make my network slower and more complicated so it works like a hack on the old thing”.
So instead we open up a bunch of other issues.
With CGNAT, governments still spy on individual addresses when they want. Since those individual addresses now cover a whole bunch of people, they effectively spy on large groups, most of whom have nothing to do with whatever they’re investigating. At least with IPv6, it’d be targeted.
NAT obscurity comes at a cost. Its gain is so little that even a small cost eliminates its benefit.
IIRC, there are some sloppy ISPs who are needlessly handing out prefixes dynamically. ISPs seem to be doing everything they can to fuck this up, and it seems more incompetence than malice. They are hurting themselves with this more than anybody else.
It wasn’t designed for a security purpose in the first place. So turn the question around: why does NAT make a network more secure at all?
The answer is that it doesn’t. Firewalls work fine without NAT. Better, in fact, because NAT itself is a complication firewalls have to deal with, and complications are the enemy of security. The benefit of obfuscating hosts behind the firewall is speculative and doesn’t outweigh the other benefits of end-to-end addressing.
Obfuscation is not security, and not having IPv6 causes other issues. Including some security/privacy ones.
There is no problem having a border firewall in IPv6. NAT does not help that situation at all.
He mangles some of the pros and cons of CRTs towards the end.
They aren’t going to be indefinitely reliable. The phosphor goes bad over time and makes for a weaker image. Doubly so for color phosphors. Some of them are aging better than others, but that’s survivorship bias. We might be looking at the last decade where those old CRTs can still be in anything close to widespread use. There will probably be a few working examples here and there in private collections, of course.
CRTs do have latency, and this is something a lot of people get wrong. A modern flatscreen display can have better latency than CRTs when the hardware takes advantage of it.
The standard way of measuring latency is at the halfway point of the screen. For NTSC running at 60Hz (which is interlaced down to 30fps (roughly)), that means we have 8.33ms of latency. If you were to hit the button the moment the screen starts the next draw, and the CPU miraculously processes it in time for the draw, then it takes that long for the screen to be drawn to the halfway point and we take our measurement.
An LCD can have a response time of less than 2ms. That’s on top of the frame draw time, which can easily be 120Hz on modern systems (or more; quite a bit more in some cases). That means you’re looking at (1000 / 120) + 2 ≈ 10.3ms of latency, provided your GPU keeps up at 120 fps. Note that this is comparable to a PAL console (which runs at 50Hz) on a CRT. A 200Hz LCD with fast pixel response times is superior to NTSC CRTs. >400Hz is running up against the human limit to distinguish frame changes, and we’re getting there with some high end LCDs right now.
When talking about retro consoles, we’re limited by the hardware feeding the display, and the frame can’t start drawing until the console has transmitted everything. So then you’re looking at the 2ms LCD draw time on top of a full frame time, which for NTSC would be (1000 / 60) + 2 ≈ 18.7ms. That’s also why lightguns can’t work on LCDs.
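The arithmetic above can be sketched in a few lines (function names are mine; the 2ms pixel response time is the figure assumed in the comparison):

```python
def crt_half_screen_ms(refresh_hz: float) -> float:
    """CRT latency to the standard half-screen measurement point."""
    return 1000.0 / refresh_hz / 2

def lcd_native_ms(refresh_hz: float, response_ms: float) -> float:
    """Modern source driving an LCD at its refresh rate:
    one frame scan-out plus pixel response time."""
    return 1000.0 / refresh_hz + response_ms

def lcd_retro_ms(source_hz: float, response_ms: float) -> float:
    """Retro console on an LCD: a full source frame must arrive
    before the panel can draw, then pixel response on top."""
    return 1000.0 / source_hz + response_ms

print(round(crt_half_screen_ms(60), 1))  # NTSC CRT, half-screen
print(round(crt_half_screen_ms(50), 1))  # PAL CRT, half-screen
print(round(lcd_native_ms(120, 2), 1))   # 120Hz LCD, 2ms response
print(round(lcd_retro_ms(60, 2), 1))     # NTSC console on that LCD
```

This reproduces the numbers in the comparison: 8.3ms for an NTSC CRT, 10.0ms for PAL, 10.3ms for a 120Hz LCD with a modern source, and 18.7ms for a retro NTSC console on the same panel.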
For individuals. There are tons of benefits for everyone collectively, but as is often the case, there’s not enough incentive for any one person to bother until everybody else does.
Not sure about GP, but that’s basically what we did under “SAFe” (Scaled Agile Framework). PI planning means taking most of a sprint to plan everything for the next quarter or so. It’s like a whole week of ticket refinement meetings. Or perhaps 3 days, but when you’ve had 3 days of ticket refinement meetings, it might as well be the whole work week for as much stuff as you’re going to get done otherwise.
It’s as horrible as you’re thinking, and after a lot of agitating, we stopped doing that shit.