Nope, that curl command says 'connect to the public IP of the server, ask for this specific site by name, and ignore SSL errors'.
So it'll make a request to the public IP for any site configured with that server name, even if DNS for that name doesn't resolve to a public IP, and it'll ignore the certificate error that results from doing that.
If there's a private site with that name configured on nginx without any ACLs, nginx will happily return the content of whatever lives at the requested server name.
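A minimal sketch of what that looks like, assuming a placeholder public IP (203.0.113.10) and a guessed internal hostname (vaultwarden.private.domain.com) — both are examples, not real targets. `--resolve` forces curl to connect to that IP for that hostname regardless of what DNS says, and `-k` swallows the certificate mismatch:

```shell
# --resolve HOST:PORT:ADDR overrides DNS for this one request, so curl
# connects to the public IP while still sending the private name via
# SNI and the Host header. -k (--insecure) ignores the resulting cert error.
curl -k --resolve vaultwarden.private.domain.com:443:203.0.113.10 \
  https://vaultwarden.private.domain.com/
```

If nginx has a `server` block matching that `server_name` and no `allow`/`deny` rules, it serves the private site straight back.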
Like I said, it’s certainly an edge case that requires some knowledge of your target, but at the same time, how many people will just name their vaultwarden install, as an example, vaultwarden.private.domain.com?
You could write a script that runs through various permutations of high-value hostnames, makes a couple hundred curl attempts, and leaves you with a nice clean list of reconned and possibly vulnerable targets.
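That script is only a few lines. This is a hypothetical sketch: the IP, domain, and name list are all placeholders, and it just reports which guessed vhosts answer with anything at all:

```shell
#!/bin/sh
# Placeholder target values -- substitute a real public IP and guessed domain.
IP=203.0.113.10
DOMAIN=private.domain.com

# Try a short wordlist of likely self-hosted service names against the
# single public IP; any non-000 HTTP status means a vhost responded.
for name in vaultwarden grafana proxmox nextcloud; do
  host="$name.$DOMAIN"
  status=$(curl -k -s -o /dev/null -w '%{http_code}' \
    --resolve "$host:443:$IP" "https://$host/" --connect-timeout 2)
  if [ "$status" != "000" ]; then
    echo "$host -> HTTP $status"
  fi
done
```

Swap in a bigger wordlist and a list of IPs and you've got exactly the recon pass described above.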
Honestly it feels like they’re trying to get away from being just a file sync platform, and are pushing for more corpo feature sets to compete with gsuite or O365.
Which I mean is great: that’s exactly what I needed and why I use it - it let me ditch almost all of my Google services and move it all to selfhosted.
But I bet it also creates incentives to prioritize fixes and features focused on that, and pushes stuff like ‘make the Android sync app work like every other file sync app in history’ to the bottom of the list.