Did pretty much the same with a new server recently - spent ages debugging why it didn’t find the SAS disks. Turns out disks like to have power connected, and no amount of debugging at the software level will help you with that.
The overengineering bit was about work setups - if I had a cent for every time I had to argue with somebody at work not to make things more complex than we actually need, I’d have retired a long time ago.
Unless you are gunning for a job in infrastructure you don’t need to go into Kubernetes or Terraform or anything like that.
Even then, knowing when not to use k8s or similar tools is often more valuable than having deep knowledge of them - a lot of the setups where I see k8s used don’t have the uptime requirements to warrant the complexity. If something just needs to be up during working hours, and I have reliable monitoring plus the ability to redeploy it via ansible within 10 minutes if it goes poof, then putting a few additional layers that can blow up in between isn’t the best idea.
Everything is deployed via ansible - including name services. So I already have a description of my infra in ansible, and the rest is just a matter of writing scripts to pull it into a more readable form, and maybe adding a few comment labels that also get extracted for easily forgotten admin URLs.
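A minimal sketch of that extraction idea in Python, assuming a made-up "# admin-url:" comment convention - the marker name, file layout, and label format are all hypothetical, not an ansible feature:

    #!/usr/bin/env python3
    """Pull hypothetical "# admin-url: ..." comment labels out of an ansible tree."""
    import re
    from pathlib import Path

    # Hypothetical convention: annotate a play or var with "# admin-url: https://..."
    MARKER = re.compile(r"#\s*admin-url:\s*(\S+)")

    def collect(root: str = ".") -> dict[str, list[str]]:
        """Map each playbook file to the admin URLs annotated in it."""
        found: dict[str, list[str]] = {}
        for path in Path(root).rglob("*.yml"):
            urls = MARKER.findall(path.read_text(encoding="utf-8"))
            if urls:
                found[str(path)] = urls
        return found

    if __name__ == "__main__":
        for path, urls in sorted(collect().items()):
            print(path)
            for url in urls:
                print("  " + url)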
Shitty companies did it like that back then - and shitty companies still don’t properly utilize the easy tools available for controlled deployment nowadays. So nothing really changed, except that the number of people (and with that, the number of morons) skyrocketed.
I had automated builds out of CVS with deployment to staging, and the option to deploy to production after tests, over 15 years ago.
I personally prefer bzip2 - but it needs to be packed with pbzip2, not regular bzip2, to generate archives that can be extracted on multiple cores. Not a good option if you have to think about Windows users, though.
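The reason is that regular bzip2 writes one single stream, while pbzip2 writes multiple concatenated streams that can later be unpacked in parallel. A minimal sketch of driving it from Python, assuming pbzip2 is on PATH - the file name and core count are just illustrative:

    #!/usr/bin/env python3
    """Compress a tarball with pbzip2 so it can also be decompressed in parallel."""
    import shutil
    import subprocess

    def pack(tarball: str, cores: int = 0) -> None:
        if shutil.which("pbzip2") is None:
            raise RuntimeError("pbzip2 not found - plain bzip2 output won't unpack in parallel")
        cmd = ["pbzip2", "-9", tarball]      # replaces tarball with tarball.bz2
        if cores:
            cmd.insert(1, "-p%d" % cores)    # pbzip2's processor-count flag
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        pack("backup.tar", cores=8)          # -> backup.tar.bz2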
Nowadays it matters whether you use a compression tool that can utilize multiple cores for packing/unpacking larger data. For a multi-gigabyte archive that can be the difference between “I’ll grab a coffee until this is ready” and “I’ll go for lunch and hope it is done when I come back”.
Nowadays I manage my private stuff with the ansible scripts I develop for work - so my own stuff mostly doubles as a development environment for work, and therefore doesn’t need to be done on private time.
There is nothing like this available currently. Framework probably comes closest, but they only sell in a few countries, and there is a lot to dislike about their solutions - building your own around a Framework board might be feasible, though.
I have two MNT Reforms - as you said, slow and expensive. They have their use for work prototyping for me, but I generally wouldn’t recommend them. They also have the worst keyboard I’ve encountered in a notebook in the last decade.
Generally yes, but you still need hardware support (mostly kernel and mesa). They do upstream their work - but for now you generally want packages built from their git for that.
Also, the installer is very specific to Mac hardware.
A lot of the Zen-based APUs don’t support ECC. The next question is whether it takes registered or unregistered modules - everything up to Threadripper is unregistered (though I think some of the Pro parts take registered), while Epycs are registered.
That makes a huge difference in how much RAM you can add, and how much you pay for it - with DDR4, for example, unbuffered modules top out at 32 GB each, while registered ones go considerably larger.
That’s already the friendly variant. Traditional find has a mandatory path as its first argument, so to search the current directory you need to type find .
It also doesn’t know whether it really is a path - it just flags it as a likely mistake. You might simply have messed up the quoting of an argument.
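Roughly the kind of heuristic involved - an illustrative guess in Python, not fd’s actual code:

    #!/usr/bin/env python3
    """Illustrative guess at a "did you mean a path?" heuristic - not fd's actual code."""
    import os

    def looks_like_path(pattern: str) -> bool:
        # A search pattern containing a separator, or naming something that
        # exists on disk, was *probably* meant as a path - but it might just
        # be a mis-quoted regex, so this can only ever be a hint.
        return os.sep in pattern or os.path.exists(pattern)

    for arg in ["*.rs", "./src", "src/main.rs"]:
        if looks_like_path(arg):
            print("note: '%s' looks like a path, not a search pattern" % arg)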
Is it a ‘death by quantity’ thing?
Pretty much that - those companies rely on open projects to sort it out for them, so they’re essentially scraping open databases and selling the good data they pull from there. That’s why they were complaining about the kernel stuff - the required info was already there, you just needed to put the effort in, so they were asking for CVEs. Now they’ve got their CVEs - but to profit from them they’d still need to put in the same effort as they would have had to without CVEs in place.
Short version: a bunch of shitty companies have made a business model out of selling open vulnerability databases to companies that want to track security vulnerabilities - at pretty much zero effort to themselves. So they’ve been bugging the kernel folks to start issuing CVEs and doing impact analysis so they have more to sell - and the kernel folks just went “it is the kernel, everything is critical”.
tl;dr: this is pretty much an elaborate “go fuck yourself” towards shady ‘security’ companies.
Funny timing - I’m currently going through a stack of Sun hardware in my garage to decide what to keep, and what I’ll try to find a good home for (or eventually dispose of).
Admittedly I’m just toying around for entertainment purposes - but I didn’t really have any problems getting anything I wanted to try running with rocm support. The bigger annoyance was different projects targeting specific distributions or specific software versions (mostly ancient python), but as I’m doing everything in containers anyway that was also manageable.
For AI and compute… they’re far behind. CUDA just wins. I hope a joint standard comes along soon, but until then Nvidia wins.
I got a W6800 recently. I know an Nvidia model of the same generation would be faster for AI - but that thing is fast enough to run stable diffusion variants with high resolution pictures locally without getting too annoyed.
Vanilla teams is a stinking pile of shit. Corporate policies just add a bit of bonus nuclear waste on top.
It starts with them only doing initial talks about buying their hardware for your project if a 7-figure payment is on the table, and it doesn’t improve from there.
Maps has also gone to shit. Complex routing including public transport is pretty much the only thing it is still useful for. For using maps as maps, openstreetmap has been better for a long time, even before Google decided to dumb their maps down. For bicycle routing osm is also better nowadays, as Google is missing most of the small paths.