I’m pretty sure that people were unhappy because it was opt-out at first. Now that bridging is opt-in, I don’t think most people have a problem with it and I’ve seen a number of posts from both sides of the bridge so it seems to be working.
That’s true. I know they did increase the number of filters from the initial amount, but they really should just make it effectively unlimited.
That works as long as the extension developer can be trusted with access to read and modify the data of every site you load, and trusted not to sell the extension (and its user base) for a quick buck (see Hover Zoom+ for an example of how much buyers are willing to offer, as recently as today).
There are definitely trade-offs between the permissions allowed in V2 versus V3. It really depends on where you think the main threat is (websites and online tracking versus extension developers).
I think it was more about targeting the client ISP side than the VPN provider side. So, for example, your ISP could monitor your connection (voluntarily, or compelled by a warrant or law) and report whether your connection activity matches that of someone accessing a site your local government doesn’t like. In that scenario they would be able to isolate it to at least individual customer accounts of an ISP, and ISPs usually know who you are, or where to find you, in order to provide service. I may be misunderstanding it, though.
Edit: On second reading, it looks like they might just be able to buy that info directly from monitoring companies and get much of what they need to do correlation at various points along a VPN-protected connection’s route. The Mullvad post has links to Vice articles describing the data that is being purchased by governments.
One example:
By observing that when someone visits site X it loads resources A, B, C, etc. in a specific order and with specific sizes, an observer with enough distinguishable resources like that can determine that you’re loading that site, even when it’s loaded inside a VPN connection. Think about when you load Lemmy.world: it fetches the main page, then specific images and stylesheets that may have recognizable sizes and are generally requested in a particular order as they’re encountered in the main page, its scripts, and things included by those scripts.

With enough data, instead of writing static rules like “x of size n was loaded, y of size m was loaded,” the traffic can be fed to an AI model trained on what connections to specific sites typically look like. They could even generate their own data for sites, in both normal and VPN-encrypted form, and correlate the two to better train the model on what a given site looks like when accessed over a VPN. Overall, AI lets them simplify and automate the identification process when given enough samples.
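To make the idea concrete, here’s a toy sketch of size-based matching. Everything in it is made up for illustration (the site names, the sizes, and the naive in-order matching score); real fingerprinting attacks use far more sophisticated classifiers, but the principle is the same: the observer sees only sizes and order, never the decrypted content.

```python
def similarity(observed, signature):
    """Fraction of a site's signature matched, in order, within the observed sizes."""
    matched = 0
    i = 0
    for size in observed:
        if i < len(signature) and size == signature[i]:
            matched += 1
            i += 1
    return matched / len(signature)

def guess_site(observed, signatures):
    """Return the known site whose pre-recorded signature best matches the observation."""
    return max(signatures, key=lambda site: similarity(observed, signatures[site]))

# Pre-recorded transfer-size sequences (main page, stylesheet, images, ...)
# collected by the observer for sites it cares about. Values are invented.
signatures = {
    "site-x.example": [14200, 3100, 880, 880, 45000],
    "site-y.example": [9000, 1200, 2048],
}

# Sizes seen inside an encrypted tunnel — content is hidden, sizes are not.
observed = [14200, 3100, 880, 880, 45000]
print(guess_site(observed, signatures))
```

The point of the toy is that no decryption happens anywhere: the match comes purely from metadata the VPN doesn’t hide by default.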
Mullvad is working on enabling their VPN apps to: 1. pad the data to a single size so that the different resources are less identifiable, and 2. send random data in the background so that there is more noise that has to be filtered out when matching patterns. I’m not sure about 3, to be honest.
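A minimal sketch of those two defenses, as my own illustration (this is not Mullvad’s implementation, and the record size is an assumption): pad every record to one fixed size, and emit dummy records of the same size as cover traffic.

```python
import os

RECORD_SIZE = 1500  # assumed fixed record size for illustration; real values differ

def pad_record(payload: bytes) -> bytes:
    """Defense 1: pad a payload to RECORD_SIZE so resource sizes aren't distinguishable."""
    if len(payload) > RECORD_SIZE:
        raise ValueError("payload must be split into records first")
    return payload + b"\x00" * (RECORD_SIZE - len(payload))

def dummy_record() -> bytes:
    """Defense 2: a random cover-traffic record, same size as a real padded one."""
    return os.urandom(RECORD_SIZE)

# Every record on the wire now has the same length, real or dummy.
print(len(pad_record(b"GET / HTTP/1.1")))
print(len(dummy_record()))
```

With both in place, the size-and-order signatures described above become much weaker: every transfer looks like a uniform stream of identically sized records, some of which are noise.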
I don’t propose we break the laws, I propose we change them.
For me it’s not bootlicking but recognizing that the IA made a huge unforced error that may cost us all not just that digital lending program but also the Wayback Machine and all the other good projects the IA runs.
The Internet Archive refused to follow industry standards for ebook licensing, because they aren’t a library.
It’s worse than that. They did use “Controlled Digital Lending” to limit the number of people who can access a book at one time to something resembling the number of physical books that they had. And then they turned that restriction off because of the pandemic. There is no pandemic exception to copyright laws, even if that would make sense from a public health perspective to prevent people from having unnecessary contact at libraries. They screwed themselves and I can only hope that the Wayback Machine archives get a home somewhere else if they do go under.
Laws can very well be wrong, in a moral sense, and quite a few of them still in existence today are, but trying to argue that in court is usually a bad idea.
https://fingfx.thomsonreuters.com/gfx/legaldocs/lbvggjmzovq/internetarchive.pdf
[IA] professes to perform the traditional function of a library by lending only limited numbers of these works at a time through “Controlled Digital Lending,” … CDL’s central tenet, according to a September 2018 Statement and White Paper by a group of librarians, is that an entity that owns a physical book can scan that book and “circulate [the] digitized title in place of [the] physical one in a controlled manner.” … CDL’s most critical component is a one-to-one “owned to loaned ratio.” Id. Thus, a library or organization that practices CDL will seek to “only loan simultaneously the number of copies that it has legitimately acquired.”
…
Judging itself “uniquely positioned to be able to address this problem quickly and efficiently,” on March 24, 2020, IA launched what it called the National Emergency Library (“NEL”), intending it to “run through June 30, 2020, or the end of the US national emergency, whichever is later.” … During the NEL, IA lifted the technical controls enforcing its one-to-one owned-to-loaned ratio and allowed up to ten thousand patrons at a time to borrow each ebook on the Website.
[…]
The Publishers have established a prima facie case of copyright infringement.
First, the Publishers hold exclusive publishing rights in the Works in Suit …
Second, IA copied the entire Works in Suit without the Publishers’ permission. Specifically, IA does not dispute that it violated the Publishers’ reproduction rights, by creating copies of the Works in Suit … ; the Publishers’ rights to prepare derivative works, by “recasting” the Publishers’ print books into ebooks …; the Publishers’ public performance rights, through the “read aloud” function on IA’s Website …; and the publishers’ display rights, by showing the Works in Suit to users through IA’s in-browser viewer
Bold added.
It’s pretty much not in dispute that Internet Archive distributed the copyrighted works of the publishers without permission, outside of what even a traditional library lending system would allow.
To do that they need to make sure they have adequate funding and make sure they don’t incur some huge financial liabilities somehow. The Internet Archive failed at that last part when they decided to lend out ebooks that are under copyright without many limits (and potentially with their Great 78 Project regarding music as well).
Internet Archive’s other projects like the Wayback Machine may be good but how they handled their digital lending of books during the pandemic was not. They removed the limit on the number of people that can borrow a book at a time, thus taking away any resemblance to traditional physical lending. You can argue that copyright laws are bad and should be changed (and I’d agree) but that doesn’t change the facts of what happened under the current law.
Are there any semi-popular alternative browsers still based on WebKit? I thought most of them like Brave and Vivaldi were based on Chromium’s Blink rather than WebKit.
That’s likely what they want. If you’re not viewing their ads, and your third-party app is even blocking all the tracking, then you’re not providing any value to them as a ‘customer’. Blocking you just reduces their hosting and serving costs, whether you’re cut off outright or eventually stop using the app.
To be fair, one of the apps mentioned, [Re]Vanced, is literally just the stock app with extra features patched in and the premium features enabled for free (like no ads and downloads). It makes sense that it would be more user-friendly. Allowing that modified version doesn’t get them any revenue, though, while still costing them to host and serve the content to those users.
At least with NewPipe it supports multiple sites and is its own app with their own code and UI.
They already do but it’s pretty restrictive in what can be changed about the experience:
https://developers.google.com/youtube/terms/developer-policies-guide#examples_3
Votes aren’t public just to the original instance’s admins, though, but to any instance admin, right? If you set up your own instance and federate with another, you should be able to view the votes for any community on the instance you federate with. The only privacy is that the default UI doesn’t display them, but a different UI could:
e.g. the one for this post on kbin.social that shows Lemmy upvotes as favorites.
I feel like this should be more prominently disclosed to Lemmy users.
Or things that are already made disappear and are replaced with reality TV, like what’s happening with HBO/Discovery/Max/whatever.
I agree that the no-algorithm hill gets annoying to die on once you’re following enough people.
What I don’t understand is why they don’t set up something like Bluesky has, where you can choose which algorithm you want, including ones not made directly by the Bluesky team: https://www.theverge.com/2023/5/26/23739174/bluesky-custom-feeds-algorithms-twitter-alternative
One of those algorithms could just be a chronological feed that some people seem dead set on sticking with. Everyone can be happy.
Looking at it most favorably: if they ever want to stop being dependent on Google, they need revenue to replace what they get from Google, and, like it or not, much of the money online comes from advertising. If they can find a way to get that money without being totally invasive of privacy, that’s still better than their current position.