Ah, I’m a moron. Cheers
Care to enlighten me? I’m still lost.
If you want a richer login, Authelia + Caddy is a good combo.
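For a rough idea of how that pairing fits together (hostnames and ports here are illustrative, and the Authelia endpoint is the one its docs suggest for Caddy, so double-check against your version), the Caddyfile side looks something like:

```caddyfile
app.example.com {
	# Ask Authelia whether the request is authenticated before proxying.
	forward_auth authelia:9091 {
		uri /api/authz/forward-auth
		copy_headers Remote-User Remote-Groups Remote-Name Remote-Email
	}
	# Only authenticated requests reach the app.
	reverse_proxy app:8080
}
```

Unauthenticated requests get redirected to Authelia’s login portal; everything else passes through with the user’s identity in the copied headers.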
Both. Both is fine.
You can port forward to another port without issue, then just route through to it from your server; you can always specify the port explicitly alongside the domain name in the URL. Although I’d suggest just buying a domain name, setting up dynamic DNS through a Raspberry Pi, and forwarding from your router to port 80. I use Porkbun for the latter.
That was your understanding. Clearly Leah understood something else. And nothing you’ve shared so far excuses your actions. I repeat: move on. Do something productive with your time.
Buddy… Leah took credit for their own work. Anyone could’ve contributed a better patch than you in that week. You don’t get first dibs on feature contributions, and you certainly don’t get a free pass to harass FOSS maintainers when they prioritise better functional code than your own. Take the L. Move on.
She.
And how are they keeping anything together? Market share isn’t substantially better than before, and rather than focusing on the product Mozilla was created for, they keep pivoting to weird BS like this AI grab. I actually think market share’s gone up recently… because Google pushed through Manifest V3. That would’ve happened even if Mozilla did nothing. I think Firefox is still the better browser, but that sure as hell doesn’t seem to be because of who’s in charge.
Really interesting read. I love deep dives like this.
Yep. That’s what I plan to do, just a shame it isn’t already there… also that I’m travelling from tomorrow so might have to defer it for a bit XD.
Ooh, didn’t know about podman. That’s neat.
Edit: shame they didn’t include podman-compose as well.
Curious, how is this workflow working out for you? I basically did the same thing; at this point the only real blockers are that the screen size is too small and I don’t like carrying a separate keyboard and mouse along with my case.
I use Docker, so I don’t really have to worry about reproducibility of the services or configurations; Docker will fetch the right services and versions. I’ve documented the core configurations so I can set them back up relatively easily. Anything custom I haven’t documented I’ll just have to remember or rediscover when I need to set it up again.
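The “right versions” part only holds if you pin image tags; a hypothetical compose fragment as a sketch (service name and tag are just placeholders):

```yaml
# docker-compose.yml -- pinning tags is what makes rebuilds reproducible
services:
  jellyfin:
    # A pinned tag (not :latest) fetches the same version on any machine.
    image: jellyfin/jellyfin:10.9.7
    volumes:
      - ./config:/config   # documented config lives next to the compose file
    restart: unless-stopped
```

With `:latest` instead, a re-pull on a new machine could silently give you a different version than the one your config was written against.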
As a programmer, I disagree. This isn’t the user’s fault; it’s the shell’s and filesystem’s for being too permissive. Honestly, the shell is a bad choice for pinpoint operations on files anyway. I say this as a heavy user, but selecting files is the most annoying part of using the shell, and the solution isn’t warping your filenames to make them easier to type without shell weirdness; it’s using tools built to prevent these issues. That can either be tab completion (zsh auto-escapes shell characters) or a terminal file manager like lf.
Eh, they really don’t. Maybe in shell scripts or when using a shell interactively, but basically any modern language (read: post-Perl) supports spaces fine and without any issue. Only shell scripts with bad quoting show problems.
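To illustrate the bad-quoting point, a quick sketch (the path is made up):

```shell
# Create a file with a space in its name (hypothetical path).
mkdir -p /tmp/space-demo
touch "/tmp/space-demo/my file.txt"

f="/tmp/space-demo/my file.txt"

# Bad quoting: $f undergoes word splitting, so the shell looks for
# two files, "/tmp/space-demo/my" and "file.txt" -- this fails:
#   ls $f
# Good quoting: one argument, works as expected:
ls "$f"
```

The quoted form is all it takes; it’s the unquoted expansion, not the space in the name, that breaks scripts.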
I had thought it was partially because spaces make URLs completely unreadable, since they’re replaced with %20. Dots have the advantage of being compact, self-representing, and not conflicting with any filesystem standard (that I’m aware of).
In general, yes. You can think of each container in a Docker network as a host, and Docker makes these hosts discoverable to each other. Docker also supports some other network types that may not follow this concept if you configure them as such; for example, if you force all containers to use the same networking stack as one container (I do this with gluetun so I can run everything through a VPN), all services will be reachable only from the gluetun host instead of from individual service hosts.
Furthermore, services in a container are not exposed outside of it by default. You must explicitly state when a port in a container is reachable from your host (the `ports:` option).
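A compose-style sketch of that gluetun setup (image names and ports are illustrative; the key bit is `network_mode: "service:gluetun"`, which shares one container’s network stack):

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    ports:
      - "8989:8989"   # ports are published on gluetun, not on sonarr
  sonarr:
    image: lscr.io/linuxserver/sonarr
    # Share gluetun's network stack: sonarr gets no "host" of its own
    # and is reachable only through gluetun.
    network_mode: "service:gluetun"
```

Note the `ports:` mapping has to live on the gluetun service here, since sonarr no longer has its own network namespace to publish from.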
But getting back to the question at hand, what you’re looking for is a reverse proxy. It’s a program that accepts requests from multiple clients and forwards them somewhere else. So you connect to the proxy, and it can tell based on how you connect (the URL) whether to send the request to Sonarr or Radarr: http://sonarr.localhost and http://radarr.localhost will both route to your proxy, and the proxy will pass them to the respective services based on how you configure it. For this you can use nginx, but I’d recommend Caddy, as it’s what I’m using and it makes setting up things like this a breeze.
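A minimal Caddyfile sketch of that routing (the backend names and ports assume the containers’ usual defaults, so adjust to your setup):

```caddyfile
# Route by hostname to the right backend service.
http://sonarr.localhost {
	reverse_proxy sonarr:8989
}
http://radarr.localhost {
	reverse_proxy radarr:7878
}
```

Each block matches on the hostname you typed, and `reverse_proxy` forwards the request to the named container on the Docker network.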
The arrs mostly support generating metadata usable by Jellyfin/Emby. You just need to go to settings in, for example, Sonarr, and there should be an option for metadata providers with Jellyfin/Emby listed there. Whenever Sonarr then imports an episode, it’ll add an NFO file containing everything Jellyfin needs to process the episode.
No work? They host, maintain, and provide access to a massive catalogue of subtitles, provide the metadata needed for matching media to subs, and up until recently were giving free access to everyone. Might I suggest, if you care about your wife’s access to subtitled movies this much, that maybe you should buy the 10-euro-per-year subscription for her to help keep the platform alive? Alternatively, you can find a subtitle group that does all this for free and choose to solely download their subs (and, I assume, donate to them, since you’re so appreciative of their work).
I take offence to this. We duel at dawn! /s