Unrelated, but why a full VM for Linux stuff? LXC is much more efficient.
honestly every explanation probably just ends at ‘this is what i learned on and it works’. same way i religiously use nano and try to do everything in bash first. or how a couple coworkers can’t stop explaining their vim workflow and defending python unprompted like it’s a trauma response for them. my current homelab is also running an r9 with 64gb ram and 30tb storage. if i were paying for remote hosting, still using salvaged hardware or being paid, i’d invest time learning newer processes. but containers haven’t caught my interest and this setup takes basically no effort on my part to maintain, so i can focus my limited free time elsewhere.
Yeah, lots of these answers basically boil down to “when all you have is a hammer, everything looks like a nail.”
These days the hammer is usually docker/podman/lxc containers instead of VMs though. Like, you don’t need a container to run a self-contained statically-compiled binary, yet people still do it for some reason.
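For the static-binary case, a bare systemd unit usually covers it. A minimal sketch, assuming a made-up binary name and paths (nothing here is from the thread):

    # hypothetical 'myapp' static binary; name, path and unit are placeholders
    install -m 755 ./myapp /usr/local/bin/myapp

    cat >/etc/systemd/system/myapp.service <<'EOF'
    [Unit]
    Description=myapp (static binary, no container)
    After=network-online.target

    [Service]
    ExecStart=/usr/local/bin/myapp
    DynamicUser=yes
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
    EOF

    systemctl daemon-reload
    systemctl enable --now myapp.service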
Same.
The time it takes me to write a single function in Python is the same as it takes to write a whole Bash script in nano.
Also I initially set up my homelab using Docker in a VM on Proxmox. Totally useless abstraction, but I never found the time and patience to migrate the VM to bare metal.
Not really useless, it’s an extra layer of management (a good thing). The Proxmox system can stay nearly static while giving you an external layer of management over the OS that manages the containers.
I have a 3-server Proxmox cluster running various VMs doing different things. Some of those VMs are my container systems.
Besides, you can run containers directly on Proxmox itself.
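For example, a container straight on the Proxmox host is only a couple of commands. A rough sketch, where the VMID, template version, and storage names are examples rather than anything from this thread:

    # fetch a container template, then create and start an unprivileged LXC
    # template name changes over time; 'pveam available' lists current ones
    pveam update
    pveam download local debian-12-standard_12.7-1_amd64.tar.zst
    pct create 200 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
        --hostname test --cores 2 --memory 1024 \
        --rootfs local-zfs:8 \
        --net0 name=eth0,bridge=vmbr0,ip=dhcp \
        --unprivileged 1
    pct start 200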
I can back up an entire VM snapshot very quickly and then restore it in a matter of minutes. Everything from the system files, database, Jellyfin version and configs, etc. is backed up and restored in one easy-to-manage bundle.
A container is not as easy to manage in the same way.
VMs can also be live migrated to another server in the cluster with no downtime and backups don’t need to take the VM down to do their thing. If in the future you want to move to physical hardware, you can use something like Clonezilla to back it up (not needed often, but still, something to consider).
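Roughly what that workflow looks like from the shell (VM IDs, storage, and node names here are placeholders; the web UI does the same thing):

    # snapshot-mode backup of VM 101 while it keeps running
    vzdump 101 --mode snapshot --compress zstd --storage local

    # restore that bundle as VM 102 onto whichever storage you want
    qmrestore /var/lib/vz/dump/vzdump-qemu-101-<timestamp>.vma.zst 102 --storage local-zfs

    # live-migrate VM 101 to another node in the cluster
    qm migrate 101 node2 --online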
Both have their places, but those factors are the main ones that come into play when I’m deciding whether to use a VM or an LXC.
How not?
If an LXC container is on a Btrfs subvolume or a ZFS dataset (those are created as easily as a directory; they’re not partitions), you can do a full 1:1 copy in less than one second via a snapshot, keeping all the system files, database, version, and configs.
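A minimal sketch of that, assuming Proxmox-style dataset/subvolume names (adjust to whatever your storage actually uses):

    # ZFS: instant snapshot of an LXC rootfs dataset, plus a writable clone
    zfs snapshot rpool/data/subvol-103-disk-0@pre-upgrade
    zfs clone rpool/data/subvol-103-disk-0@pre-upgrade rpool/data/subvol-103-copy

    # Btrfs: same idea with a subvolume snapshot (-r makes it read-only)
    btrfs subvolume snapshot -r /srv/lxc/103 /srv/lxc/103-snap

    # or let Proxmox drive it: pct snapshot 103 pre-upgrade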
Sure, ZFS snapshots are dead simple and fast. But you’d need to ensure that each container and its volumes are created in their respective datasets.
And none of this is implying that it’s hard. The top comment was criticizing OP for using VMs instead of containers. Neither one is better than the other for all use cases.
I have a ton of VMs for various use cases, and some of those VMs are container/Docker hosts. Each tool where it works best.
Stronger compartmentalization