… Sata DVD-ROM drives are a thing
Hell I’ve still got one just in case
🇨🇦
A paid Plex share is a Plex server that someone is running and selling access to.
This is against Plex's terms and gets accounts banned; in some cases Plex (the company) has taken more drastic action, blocking entire VPS providers from reaching plex.tv, so the Plex server software no longer functions on those VPSes at all.
Naturally, people selling shares want to maximize profit, so they use VPS providers on the cheaper end, resulting in the cheaper VPS options being blocked for everyone.
Drink less paranoia smoothie…
I’ve been self-hosting for almost a decade now; never bothered with any of the giants. Just a domain pointed at me, and an open port or two. Never had an issue.
Don’t expose anything you don’t share with others; monitor the things you do expose with tools like fail2ban. VPN into the LAN for access to everything else.
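For anyone curious, fail2ban is mostly drop-in; a minimal jail.local that bans IPs hammering ssh might look like this (the numbers are my own picks, not defaults):

```ini
# /etc/fail2ban/jail.local -- values here are just examples
[sshd]
enabled  = true
maxretry = 5      # failed attempts before a ban
findtime = 10m    # window those attempts must fall within
bantime  = 1h     # how long the IP stays banned
```

Same idea extends to any exposed service that has a fail2ban filter.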
and using DDNS
As in, running software to update your DNS records automatically based on your current system IP. Great for dynamic IPs, or just moving location.
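Tools like ddclient handle the updating for you; a rough ddclient.conf sketch using Cloudflare as the provider (domain and credentials are placeholders, swap in your own):

```
# /etc/ddclient.conf -- example only; names/credentials are placeholders
daemon=300
use=web, web=ifconfig.me
protocol=cloudflare
zone=example.com
login=you@example.com
password=your-cloudflare-api-token
home.example.com
```

Run it as a service and your record follows your IP around.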
Sure, Cloudflare provides other security benefits, but that's not what OP was talking about. They just wanted/liked the plug-and-play aspect, which doesn't need Cloudflare.
Those ‘benefits’ are also really not necessary for the vast majority of self hosters. What are you hosting, from your home, that garners that kind of attention?
The only things I host from home are private services for myself or a very limited group; which, as far as ‘attacks’ goes, just gets the occasional script kiddy looking for exposed endpoints. Nothing that needs mitigation.
Unless you are behind CGNAT, you would have had the same plug-and-play experience by using your own router instead of the ISP-supplied one, and using DDNS.
At least, I did.
A one-off, or the occasional instance, is fine; but having to constantly reassure someone that they aren't the cause of every single frustration you encounter gets extremely exhausting.
Huh, usually they ask ‘jump where?’
If they are injecting ads into the actual video stream; it won’t matter what client you use. You request the next video chunk for playback and get served a chunk filled with advertising video instead. The clients won’t be able to tell the difference unless they start analyzing the actual video frames. That’s an entirely server-side decision that clients can’t bypass.
Only if the ads are a fixed length and always in the same place for each playback of the same video.
Inserting ads of various lengths in varying places throughout the video will alter all the time stamps for every playback.
The 5th minute of the video might happen 5min after starting playback, or it could be 5min+a 2min ad break after starting. This could change from playback to playback; so basing ad/sponsor blocking on timestamps becomes entirely useless.
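Rough bash sketch of why the stored timestamps stop lining up (all numbers made up):

```shell
#!/usr/bin/env bash
# Made-up numbers: a sponsor segment was reported at t=300s in the
# original video, and this playback got a 120s ad injected at t=60s.
sponsor_start=300   # timestamp stored by a sponsor-blocking client
ad_len=120          # length of the injected ad, in seconds
ad_pos=60           # where the server spliced it in, in seconds

# Any ad inserted before the segment pushes it later in the stream:
actual=$(( sponsor_start + (ad_pos < sponsor_start ? ad_len : 0) ))
echo "stored timestamp: ${sponsor_start}s, actual this playback: ${actual}s"
```

Here the stored 300s mark actually lands at 420s; a different playback with different ad placement shifts it somewhere else again, so a fixed timestamp database can't keep up.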
Been a while since it was updated; but I used to use Win32DiskImager for reading/writing rpi cards.
I had a couple of cards that wouldn't throw any errors during the actual write, but would then fail at the verify step (checking that what was written to the card matches the source file). The data hadn't been written correctly, yet no failures were reported during the write itself.
Perhaps this is your issue? Not sure I’d trust those cards regardless.
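The verify step is basically just read-back-and-compare; a rough sketch, with regular files standing in for the card so it's safe to run as-is (for a real card you'd swap the temp file for your device path, very carefully):

```shell
#!/usr/bin/env bash
set -euo pipefail
# Demo of verify-after-write using regular files; substitute your
# device (e.g. /dev/sdX) for "$card" on real hardware -- destructive!
img=$(mktemp)
card=$(mktemp)
head -c 1048576 /dev/urandom > "$img"                 # stand-in OS image
dd if="$img" of="$card" bs=4M conv=fsync status=none  # the "write"

# Verify: read back exactly the image's size and compare hashes.
# A mismatch means the card silently dropped or corrupted writes.
size=$(stat -c %s "$img")
src=$(sha256sum "$img" | cut -d' ' -f1)
dst=$(head -c "$size" "$card" | sha256sum | cut -d' ' -f1)
if [ "$src" = "$dst" ]; then echo "verify OK"; else echo "verify FAILED"; fi
rm -f "$img" "$card"
```

Reading back only the image's size matters, since the card is usually bigger than the image.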
I have one more thought for you:
If downtime is your concern, you could always use a mixed approach. Run a daily backup system like I described, somewhat haphazardly, with everything still running. Then once a month at 4am or whatever, perform a more comprehensive backup: loop through each docker project, shut it down, run the backup, and bring it all online again.
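A rough sketch of that monthly cold-backup loop, assuming one compose project per directory, borg for the backup itself, and BORG_REPO set in the environment (all paths are made up; it dry-runs by default so you can check it first):

```shell
#!/usr/bin/env bash
# Monthly "cold" backup sketch -- paths and repo layout are assumptions.
# Prints what it would do unless you set DRY_RUN=0.
set -euo pipefail
STACKS=${STACKS:-/opt/stacks}   # one docker compose project per subdir
DATA=${DATA:-/opt/appdata}      # bind-mounted app data to back up
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "+ $*"; else "$@"; fi; }

for proj in "$STACKS"/*/; do
  run docker compose --project-directory "$proj" down
done

# Requires BORG_REPO to be exported; {now:...} is borg's own placeholder.
run borg create --stats "::monthly-{now:%Y-%m-%d}" "$DATA"

for proj in "$STACKS"/*/; do
  run docker compose --project-directory "$proj" up -d
done
```

Wire it into cron at 4am on the 1st and the downtime only hits once a month.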
I set up borg around 4 months ago using option 1. I've messed around with it a bit, restoring a few backups, and haven't run into any issues with corrupt/broken databases.
I just used the example script provided by borg, but modified it to include my docker data, and write info to a log file instead of the console.
Daily at midnight, a new backup of around 427GB of data is taken. At the moment that takes 2-15min to complete, depending on how much data has changed since yesterday, though the initial backup was closer to 45min. Then old backups are trimmed: backups <24hr old are kept, along with 7 dailies, 3 weeklies, and 6 monthlies. Anything outside that scope gets deleted.
With the compression and de-duplication borg does, the 15 backups I have so far (5.75TB of data) currently take up 255.74GB of space. 10/10 would recommend on that aspect alone.
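That retention policy maps pretty much one-to-one onto borg prune's flags; roughly (assumes BORG_REPO is set):

```
# keep everything <24hr old, plus 7 dailies, 3 weeklies, 6 monthlies
borg prune --keep-within 24H --keep-daily 7 --keep-weekly 3 --keep-monthly 6

# on borg 1.2+, reclaim the space freed by deleted archives
borg compact
```

Prune only marks archives for deletion; compact is what actually shrinks the repo.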
/edit, one note: I'm not backing up Docker volumes directly, though you could just fine. Anything I want backed up lives in a regular folder that's then bind-mounted into a docker container (including things like paperless-ngx's database).
Damn.
I still remember playing the creative-only browser version for hours on end…
Edit: I miss the original sponges. Just place, and all water around it is permanently kept at bay. No drying needed. They made for some interesting water features.
I just set group chat notifications to silent.
I still get a notification that’s easy to spot in the tray/bar, but it doesn’t play sound or vibrate, so it’s not distracting unless I’m already looking at the screen.
GitHub isn't taking Nintendo to court over one of their users' projects. They just comply with the takedown request and remove it. The user/dev isn't going to fight Nintendo to bring it back either.
I wasn’t trying to assign blame…
Just noticed it didn’t work when I tried it, so I added the working one for others to follow if they wish.
But the crowd mentality needs something to be pissed about; so go ahead, pile on some more downvotes. Glad I could provide some release.
Indeed.
The post also contains a link to GitHub as well as a link to YouTube.
The github link does not work, hence my comment.
Could always plug it in temporarily; do what you gotta do, then remove it again.