
  • Many worlds theories are rather strange.

    If you take quantum theory at face value without trying to modify it in any way, then you unequivocally run into the conclusion that ψ is contextual, that is to say, what ψ you assign to a system depends upon your measurement context, your “perspective” so to speak.

    This is where the “Wigner’s friend paradox” arises. It’s not really a “paradox,” as it just shows that ψ is contextual. Suppose Wigner and his friend place a particle in a superposition of states, the friend says he will measure it, and Wigner steps out of the room for a moment while the measurement happens. From the friend’s perspective, he reduces ψ to an eigenstate, whereas from Wigner’s perspective ψ remains in a superposition of states, but one entangled with the measuring device.

    This isn’t really a contradiction, because in density matrix form Wigner can apply a perspective transformation and confirm that his friend would indeed perceive an eigenstate, with the probabilities for which one given by the Born rule. But it does illustrate the contextual nature of quantum theory.
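    As a rough sketch of that perspective transformation (my own toy example, modeling both the particle and the friend as single qubits):

    ```python
    import numpy as np

    # Particle prepared in the superposition (|0> + |1>)/sqrt(2).
    # From Wigner's perspective, the friend's measurement entangles
    # friend and particle: (|0>|"saw 0"> + |1>|"saw 1">)/sqrt(2).
    entangled = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
    rho = np.outer(entangled, entangled.conj())

    # Perspective transformation: trace out the friend to recover what
    # the friend's context looks like for the particle.
    rho_particle = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

    print(rho_particle.real)
    # [[0.5 0. ]
    #  [0.  0.5]]  -> the friend perceives |0> or |1>, each with
    #                 Born-rule probability 1/2, exactly as Wigner computes.
    ```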

    If you just stop there, you inevitably fall into relational quantum mechanics. Relational quantum mechanics just accepts the contextual nature of ψ and tries to make sense of it within the mathematics itself. Most other “interpretations” really aren’t even interpretations but sort of try to run away from the conclusion, such as by significantly modifying the mathematics, and even the statistical predictions, to introduce objective collapse or hidden variables, either to get rid of a contextual ψ or to get rid of ψ as something fundamental altogether.

    Many Worlds is still technically along these lines because it does add new mathematics explicitly for the purpose of avoiding the conclusion of irreducible contextuality, although it is the most subtle modification and still reproduces the same statistical predictions. If we go back to the Wigner’s friend scenario, Wigner’s friend reduced ψ relative to his own context, but Wigner, who was isolated from the friend and the particle, did not reduce ψ but instead described them as entangled.

    So, any time you measure something, you can imagine introducing a third-party that isn’t physically interacting with you or the system, and from that third party’s perspective you would be in an entangled superposition of states. But what about the physical status of the third party themselves? You could introduce a fourth party that would see the system and the third party in an entangled superposition of states. But what about the fourth party? You could introduce a fifth party… so on and so forth.

    You have an infinite regress until, somehow, you end up with Ψ, which is a sort of “view from nowhere,” a perspective that contains every physical object, is isolated from all those physical objects, and is itself not a physical object, so it can contain everything. So from the perspective of this big Ψ, everything always remains in a superposition of states forever, and all the little ψ are only contextual because they are like perspectival slices within Ψ.

    You cannot derive Ψ mathematically because there is no way to get from inherently contextual ψ to this preferred nonphysical perspective Ψ, so you cannot know its mathematical properties. There is also no way to define it, because each ψ is an element of a Hilbert space, and Hilbert spaces are constructed spaces, unlike background spaces like Minkowski space. The latter are defined independently of the objects they contain, whereas the former are defined in terms of the objects they contain. That means for two different physical systems, you will have two different ψ assigned to two different Hilbert spaces. The issue is that you cannot define the Hilbert space that Ψ is part of, because it would require knowing everything in the universe.

    Hence, Ψ cannot be derived nor defined, so it can only be vaguely postulated, and its mathematical properties also have to be postulated, as you cannot derive them from anything. It is just postulated to be this privileged cosmic perspective, a sort of godlike ethereal “view from nowhere,” and then it is postulated to have the same mathematical properties as ψ, with all the little ψ also postulated to be subsystems of Ψ. You can then write things down like how a partial trace on Ψ can give you information about any perspective of its subsystems, but only because it was defined to have those properties. It is true by definition.
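    For reference, the postulate being described here is just the standard partial trace relation (textbook notation, nothing specific to Many Worlds):

    $$\rho_S = \operatorname{Tr}_{\bar{S}}\big(|\Psi\rangle\langle\Psi|\big)$$

    where S is any subsystem and the trace runs over everything else in the universe. It only “works” because Ψ was already postulated to live in a Hilbert space with the right structure in the first place.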

    The RQM perspective just takes quantum theory at face value without bothering to introduce a Ψ, and simply accepts that ψ is contextual. Talking about a non-contextual (absolute) ψ makes about as much sense as talking about non-contextual (absolute) velocity, and talking about a privileged perspective in QM makes about as much sense as talking about a privileged perspective in special relativity. For some reason, people are perfectly happy accepting the contextual nature of special relativity, but they struggle really hard with the contextual nature of quantum theory, and feel the need to modify it, to the point of convincing themselves that there is a multiverse in order to escape it.


  • pcalau12i@lemmygrad.ml to Science Memes@mander.xyz · ETERNAL TORMENT

    There are no “paradoxes” of quantum mechanics. QM is a perfectly internally consistent theory. Most so-called “paradoxes” are just caused by people not understanding it.

    QM is both probabilistic and, in its own very unique way, relative. Probability on its own isn’t confusing: if the world were just fundamentally random, you could still describe it in the language of classical probability theory, and it wouldn’t be that difficult. If it were just relative, it could still be a bit of a mind-bender, like special relativity with its own faux paradoxes (like the twin “paradox”) that people struggle with, but ultimately people digest it and move on.

    But QM is both probabilistic and relative, and for most people this becomes very confusing, because it means a particle can take on a physical value in one perspective while not having taken on a physical value in another (called the relativity of facts in the literature). Not only that, but because it’s fundamentally random, if you apply a transformation to try to mathematically place yourself in another perspective, you don’t get definite values but only probabilistic ones, albeit not in a superposition of states.

    For example, the famous “Wigner’s friend paradox” claims there is a “paradox” because you can set up an experiment whereby Wigner’s friend would assign a particle a real physical value, whereas Wigner would be unable to from his perspective and would have to assign an entangled superposition of states to both his friend and the particle taken together, which has no clear physical meaning.

    However, what the supposed “paradox” misses is that it’s not paradoxical at all, it’s just relative. Wigner can apply a transformation in Hilbert space to compute the perspective of his friend, and what he gets out of it is a description of the particle that is probabilistic but not in a superposition of states. It’s still random, because nature is fundamentally random, so he cannot predict what his friend will see with absolute certainty, but he can predict it probabilistically, and since this distribution is not a superposition of states but what’s called a maximally mixed state, it is basically a classical probability distribution.
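    Concretely, the description Wigner computes for a two-outcome case like this is the maximally mixed state (standard notation; the 50/50 case is my example):

    $$\rho = \tfrac{1}{2}|0\rangle\langle 0| + \tfrac{1}{2}|1\rangle\langle 1| = \tfrac{1}{2}I$$

    There are no off-diagonal coherence terms here, which is what makes it read as an ordinary classical probability distribution rather than a superposition.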

    But you only get those classical distributions after applying the transformation to the correct perspective where such a distribution is to be found. What the mathematics of the theory literally implies is that only under some perspectives (defined in terms of any physical system at all, kind of like a frame of reference, nothing to do with human observers) are the physical properties of the system actually realized, while under other perspectives, the properties just aren’t physically there.

    The Schrodinger’s cat “paradox” is another example of a faux paradox. People repeat it as if it is meant to explain how “weird” QM is, but when Schrodinger put it forward in his paper “The Present Situation in Quantum Mechanics,” he was using it to mock the idea of particles literally being in two states at once, by pointing out that if you believe this, then a chain reaction caused by that particle would force you to conclude cats can be in two states at once, which, to him, was obviously silly.

    If the properties of particles only exist in some perspectives and aren’t absolute, then a particle can’t meaningfully have “individuality,” that is to say, you can’t define it in complete isolation. In his book “Science and Humanism,” Schrodinger talks about how, in classical theory, we like to imagine particles as having their own individual existence, moving around from interaction to interaction, carrying their properties with themselves at all times. But, as Schrodinger points out, you cannot actually empirically verify this.

    If you believe particles have continued existence in between interactions, this is only possible if the existence of their properties is not relative, so they can be meaningfully considered to continue to exist even when entirely isolated. Yet, if they are isolated, then by definition, they are not interacting with anything, including a measuring device, so you can never actually empirically verify they have a kind of autonomous individual existence.

    Schrodinger pointed out that many of the paradoxes in QM carry over from this Newtonian way of thinking, that particles move through space with their own individual properties like billiard balls flying around. If this were to be the case, then it should be possible to assign a complete “history” to the particle, that is to say, what its individual properties are at all moments in time without any gaps, yet, as he points out in that book, any attempt to fill in the “gaps” leads to contradiction.

    One of these contradictions is the famous “delayed choice” paradox, whereby if you imagine what the particle is doing “in flight” when you change your measurement settings, you have to conclude the particle somehow went back in time to rewrite the past to change what it is doing. However, if we apply Schrodinger’s perspective, this is not a genuine “paradox” but just an artifact of interpreting the particle as having a Newtonian-style autonomous existence, of having “individuality,” as he called it.

    He also points out in that book that when he originally developed the Schrodinger equation, the purpose was precisely to “fill in the gaps,” but he realized later that interpreting the evolution of the wave function according to the Schrodinger equation as a literal physical description of what’s going on is a mistake, because all you are doing is pushing the “gap” from those that exist between interactions in general to those that exist between measurements, and he saw no reason why “measurement” should play an important role in the theory.

    Given that it is possible to make all the same predictions without using the wave function (using a mathematical formalism called matrix mechanics), you don’t have to reify the wave function because it’s just a result of an arbitrarily chosen mathematical formalism, and so Schrodinger cautioned against reifying it, because it leads directly to the measurement problem.
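    To illustrate that the two formalisms agree (a toy single-qubit example of my own; the Hamiltonian and observable are arbitrary choices): evolving the state vector (wave mechanics) and evolving the observable instead (the descendant of Heisenberg’s matrix mechanics) yield the same prediction, so nothing forces you to reify the evolving wave function.

    ```python
    import numpy as np
    from scipy.linalg import expm

    H = np.array([[0, 1], [1, 0]], dtype=complex)    # arbitrary toy Hamiltonian
    Z = np.array([[1, 0], [0, -1]], dtype=complex)   # observable being measured
    psi0 = np.array([1, 0], dtype=complex)           # initial state |0>
    U = expm(-1j * H * 1.3)                          # evolution for t = 1.3

    # Schrodinger picture: the state evolves, the observable stays fixed.
    psi_t = U @ psi0
    schrodinger = (psi_t.conj() @ Z @ psi_t).real

    # Heisenberg (matrix mechanics) picture: the observable evolves instead.
    Z_t = U.conj().T @ Z @ U
    heisenberg = (psi0.conj() @ Z_t @ psi0).real

    print(np.isclose(schrodinger, heisenberg))       # True: same prediction
    ```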

    The EPR “paradox” is a metaphysical “paradox.” We know for certain QM is empirically local due to the no-communication theorem, which proves that no interaction a particle could undergo could ever cause an observable alteration on its entangled pair. Hence, if there is any nonlocality, it must be invisible to us, i.e. entirely metaphysical and not physical. The EPR paper reaches the “paradox” through a metaphysical criterion it states very clearly on the first page, which is to equate the ontology of a system to its eigenstates (to “certainty”). This makes it seem like the theory is nonlocal because entangled particles are not in eigenstates, but if you measure one, both are suddenly in eigenstates, which makes it seem like they both undergo an ontological transition simultaneously, transforming from not having a physical state to having one at the same time, regardless of distance.

    However, if particles only have properties relative to what they are physically interacting with, from that perspective, then ontology should be assigned to interaction, not to eigenstates. Indeed, assigning it to “certainty” as the EPR paper claims is a bit strange. If I flip a coin, even if I can predict the outcome with absolute certainty by knowing all of its initial conditions, that doesn’t mean the outcome actually already exists in physical reality. To exist in physical reality, the outcome must actually happen, i.e. the coin must actually land. Just because I can predict the particle’s state at a distance if I were to travel there and interact with it doesn’t mean it actually has a physical state from my perspective.

    I would recommend checking out this paper here, which shows how a relative ontology avoids the “paradox” in EPR. I also wrote my own blog post here; if you go to the second half, it shows some tables that walk through how the ontology differs between EPR and a relational ontology, and how the former is clearly nonlocal while the latter is clearly local.

    Some people frame Bell’s theorem as a paradox that proves some sort of “nonlocality,” but if you understand the mathematics it’s clear that Bell’s theorem only implies nonlocality for hidden variable theories. QM isn’t a hidden variable theory. It’s only a difficulty that arises in alternative theories like pilot wave theory, which due to their nonlocal nature have to come up with a new theory of spacetime, because they aren’t compatible with special relativity due to the speed of light limit. However, QM on its own, without hidden variables, is indeed compatible with special relativity, which forms the foundations of quantum field theory. This isn’t just my opinion: if you read Bell’s own paper where he introduces the theorem, he is blatantly clear in the conclusion, in simple English, that it only implies nonlocality for hidden variable theories, not for orthodox QM.
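    To make the scope of the theorem concrete, here is a minimal check (standard CHSH setup; the angles are the usual textbook choices): a local hidden variable model is bounded by |S| ≤ 2, while plain quantum mechanics predicts 2√2 for the singlet state, with no hidden variables and nothing nonlocal added.

    ```python
    import numpy as np

    def E(a, b):
        """Quantum correlation E(a, b) = -cos(a - b) for spin measurements
        at angles a and b on the singlet state."""
        return -np.cos(a - b)

    # Standard CHSH measurement angles.
    a1, a2 = 0.0, np.pi / 2
    b1, b2 = np.pi / 4, 3 * np.pi / 4

    S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
    print(abs(S))   # ~2.828 = 2*sqrt(2), beyond the local hidden variable bound of 2
    ```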

    Some “paradoxes” are just much more difficult to catch because they are misunderstandings of the mathematics, which can get hairy at times. The famous Frauchiger–Renner “paradox,” for example, stems from incorrect reasoning across incompatible bases, a very subtle point lost in all the math. The Cheshire cat “paradox” tries to show particles can dissociate from their properties, but those properties only “dissociate” across different experiments, meaning in no single experiment are they observed to dissociate.

    I ran out of charact-


    I will be the controversial one and say that I reject that “consciousness” even exists in the philosophical sense. Of course, things like intelligence, self-awareness, problem-solving capabilities, even emotions exist, but it’s possible to describe all of these things in purely functional terms, which would in turn be computable. When people talk about “consciousness not being computable,” they are talking about the Chalmerite definition of “consciousness” popular in philosophical circles specifically.

    This is really just a rehashing of Kant’s noumena-phenomena distinction, but with different language. The rehashing goes back to the famous “What is it like to be a bat?” paper by Thomas Nagel. Nagel argues that physical reality must be independent of point of view (non-contextual, non-relative, absolute), whereas what we perceive clearly depends upon point of view (contextual). You and I are not seeing the same thing for example, even if we look at the same object we will see different things from our different standpoints.

    Nagel thus concludes that what we perceive cannot be reality as it really is, but must be some sort of fabrication by the mammalian brain. It is not equivalent to reality as it really is (which is said to be non-contextual) but must be something irreducible to the subject. What we perceive, therefore, he calls “subjective,” and since observation, perception and experience are all synonyms, he calls this “subjective experience.”

    Chalmers, later, in his paper “Facing Up to the Problem of Consciousness,” renames this “subjective experience” to “consciousness.” He points out that if everything we perceive is “subjective” and created by the brain, then true reality must be independent of perception, i.e. no perception could ever reveal it; we can never observe it and it always lies beyond all possible observation. How does this entirely invisible reality, which is completely disconnected from everything we experience, in certain arbitrary configurations “give rise to” what we experience? This “explanatory gap” he calls the “hard problem of consciousness.”

    This is just a direct rehashing, in different words, of Kant’s phenomena-noumena distinction, where the “phenomena” is the “appearance of” reality as it exists from different points of view, and the “noumena” is that which exists beyond all possible appearances, the “thing-in-itself,” which, as the term implies, suggests it has absolute (non-contextual) properties, as it can be meaningfully considered in complete isolation. Velocity, for example, is contextual, so objects don’t meaningfully have velocity in complete isolation; to say objects meaningfully exist in complete isolation is thus to claim they have a non-contextual ontology. This leads to the same kind of “explanatory gap” between the two that was previously called the “mind-body problem.”

    The reason I reject Kantianism and its rehashing by the Chalmerites is because Nagel’s premise is entirely wrong. Physical reality is not non-contextual. There is no “thing-in-itself.” Physical reality is deeply contextual. The imagined non-contextual “godlike” perspective whereby everything can be conceived of as things-in-themselves in complete isolation is a fairy tale. In physical reality, the ontology of a thing can only be assigned to discrete events whereby its properties are always associated with a particular context, and, as shown in the famous Wigner’s friend thought experiment, the ontology of a system can change depending upon one’s point of view.

    This non-contextual physical reality of Nagel’s is just a fairy tale, and so his conclusion in the rest of his paper, that what we observe (synonym for: experience, perceive) is “subjective,” does not follow. And if Nagel fails to establish “subjective experience,” then Chalmers fails to establish “consciousness,” which is just a renaming of that term, and thus Chalmers fails to demonstrate an “explanatory gap” between consciousness and reality, because he has failed to establish that “consciousness” is a thing at all.

    What’s worse is that if you buy Chalmers’ and Nagel’s bad arguments, then you basically end up equating observation as a whole with “consciousness,” and thus you run into the Penrose conclusion that it’s “non-computable.” Of course we cannot compute what we observe, because what we observe is not consciousness, it is just reality, and reality itself is not computable. The way in which reality evolves through time is computable, but reality as a whole just is. It’s not even a meaningful statement to speak of “computing” it, as if existence itself were subject to computation, but the Chalmerite delusion tricks people like Penrose into thinking this reveals something profound about the human mind, when it’s not relevant to the human mind at all.


  • That’s more religion than pseudoscience. Pseudoscience tries to pretend to be science and tricks a lot of people into thinking it is legitimate science, whereas religion just makes proclamations and claims it must be wrong if any evidence debunks them. Pseudoscience is a lot more sneaky, and has become more prevalent in academia itself ever since people were infected by the disease of Popperism.

    Popperites believe something is “science” as long as it can in principle be falsified, so if you invent a theory that could in principle be tested, then you have proposed a scientific theory. So pseudoscientists come up with the most ridiculous nonsense ever, based on literally nothing, and then insist everyone must take it seriously because it could in theory be tested one day, though it is always just out of reach of actually being tested.

    Since it is “testable,” and since the brain disease of Popperism that has permeated academia leads people to be tricked by this sophistry, these pseudoscientists can sometimes even secure funding to test it, especially if they can get a big name in physics to endorse it. If it’s being tested at some institution somewhere, and there are at least a couple of papers published of someone looking into it, it must be genuine science, right?

    Meanwhile, while they create this air of legitimacy, a smokescreen around their ideas, they reach out to a lay audience through publishing books, doing documentaries on television, or publishing videos to YouTube, talking about woo nuttery like how we’re all trapped inside a giant “cosmic consciousness” and we all feel each other’s vibrations through quantum entanglement, and how science somehow proves the existence of gods.

    As they make immense dough off of the lay audience they grift, if anyone points out that their claims are based on nothing, they can just deflect to the smokescreen they created through academia.


    Color is not invented by the brain but is socially constructed. You cannot look inside someone’s brain and find a blob of green, unless, idk, you let the brain mold for a while. All you can do is ask the person to think of “green” and then correlate whatever brain patterns respond to that request, but everyone’s brain patterns are different, so the only thing that ties them all together is that we’ve all agreed as a society to associate a certain property in reality with “green.”

    If you were an alien who had no concept of green and had abducted a single person, if that person is thinking of “green,” you would have no way to know because you have no concept of “green,” you would just see arbitrary patterns in their brain that to you would seem meaningless. Without the ability to reference that back to the social system, you cannot identify anything “green” going on in their brain, or for any colors at all, or, in fact, for any concepts in general.

    This was the point of Wittgenstein’s rule-following problem, that ultimately it is impossible to tie any symbol (such as “green”) back to a concrete meaning without referencing a social system. If you were on a deserted island and forgot what “green” meant and started to use it differently, there would be no one to correct you, so that new usage might as well be what “green” meant.

    If you try to not change your usage by building up a basket of green items to remind you of what “green” is, there is no basket you could possibly construct that would have no ambiguity. If you put a green apple and a green lettuce in there, and you forget what “green” is so you look at the basket for reference, you might think, for example, that “green” just refers to healthy vegetation. No matter how many items you add to the basket, there will always be some ambiguity, some possible definition that is compatible with all your examples yet not your original intention.

    Without a social system to reference for meaning and to correct your mistakes, there is no way to be sure that today you are even using symbols the same way you used them yesterday. Indeed, there would be no reason for someone born and raised in complete isolation to even develop any symbols at all, because they would all just be fuzzy and meaningless. They would still have a brain and intelligence and be able to interpret the world, but they would not divide it up into rigid categories like “green” or “red” or “dogs” or “cats.” They would think in a way where everything kind of merges together, a mode of thought that is very alien to social creatures, and so we cannot actually imagine what it is like.



    The point wasn’t that the discussion is stupid, but that believing particles can be in two states at once is stupid. Schrodinger was making a kind of argument known as a reduction to absurdity (reductio ad absurdum) in his paper The Present Situation in Quantum Mechanics. He was saying that if you believe a single particle can be in two states at once, it could trivially cause a chain reaction that would put a macroscopic object in two states at once, and since it’s absurd to think a cat can be in two states at once, ergo a particle cannot be in two states at once.

    In his later work Science and Humanism, Schrodinger argues that all the confusion around quantum mechanics originates from assuming that particles are autonomous objects with their own individual existence. If this were the case, then the particle must have properties localizable to itself, such as its position. And if the particle’s position is localized to itself and merely a function of itself, then it would have a position at all times. That means if the particle is detected by a detector at t=0 and a detector at t=1 and no detection is made at t=0.5, the particle should have some position value at t=0.5.

    If the particle has properties like position at all times, then the changes in its position must always be continuous as there would be no gaps between t=0 and t=1 where it lacks a position but would have a position at t=0.1, t=0.2, etc. Schrodinger referred to this as the “history” of the particle, saying that whenever a particle shows up on a detector, we always assume it must have come from somewhere, that it used to be somewhere else before arriving at the detector.

    However, Schrodinger viewed this as a mistake that isn’t actually backed by the empirical evidence. We can only make observations at discrete moments in time, and to assume the particle is doing something in between those observations is to make assumptions about something we cannot, by definition, observe, and so it can never actually be empirically verified.

    Indeed, Schrodinger’s concern was not merely that it could not be verified, but that all the confusion around quantum theory comes precisely from what he called trying to “fill in the gaps” of the particle’s history. When you do so, you run into logical contradictions unless you introduce absurdities, like nonlocal action, retrocausality, or, as is popular these days, multiverses. Schrodinger also pointed out how the measurement problem, too, directly stems from trying to fill in the gaps of the particle’s history.

    Schrodinger thought it made more sense to just abandon the notion that particles are really autonomous objects with their own individual existence. They only exist at the moment they are interacting with something, and the physical world evolves through a sequence of discrete events and not through continuous transitions of autonomous entities.

    He actually used to hate this idea and criticized Heisenberg for it, as it was basically Heisenberg’s view as well, saying “I cannot believe that the electron hops about like a flea.” However, in the same book he mentions that he changed his mind precisely because of the measurement problem. He says that he introduced the Schrodinger equation as a way to “fill in the gaps” between these “hops,” but that it actually fails to achieve this, because it just shifts the gap from between “hops” to between measurements, as the system would evolve continuously up until measurement and then suddenly transition to a discrete value.

    Schrodinger didn’t think it made sense that measurement should be special or play any sort of role in the theory over any other kind of physical interaction. If you don’t try to fill in the gaps at all, then no physical interaction is treated as special, all are put on an equal playing field, and so you don’t have a problem of measurement.

    What a lot of people aren’t taught is that when quantum mechanics was originally formulated, it had no Schrodinger wave equation and no wave function, yet it was perfectly capable of making all the same predictions that modern quantum mechanics can make. The original formulation of quantum mechanics by Heisenberg is known as matrix mechanics, and it does not have the wave function; it instead really does treat particles as if they just hop from one physical interaction to the next. Heisenberg believed this process was fundamentally random, so at best you could ever hope to make a probabilistic prediction, and he therefore treated the state vector as something epistemic, i.e. the particle doesn’t literally spread out like a wave, it just hops from one interaction to the next and you make your best guess using probability rules.

    Again, matrix mechanics can make all the same predictions as standard quantum mechanics, and so the wave function formulation is really just a quirk of a very specific way to mathematically formulate the theory, so assigning it such strong ontological validity is rather dubious as it is not indispensable. Superposition is just a mathematical notation representing the likelihoods of different results when a future interaction occurs, such as with your measuring device. It doesn’t represent the ontological status of the system in that very moment, because the system does not even have its own ontological status. As Schrodinger put it, particles on their own have no “individuality.” Physical systems only have ontological reality when they are participating in a physical interaction.


    That’s literally China’s policy. The problem is most westerners are lied to about China’s model, and it is just painted as if Deng Xiaoping was an uber capitalist lover and turned China into a free market economy and that was the end of history.

    The reality is that Deng Xiaoping was a classical Marxist, so he wanted China to follow the development path of classical Marxism (grasping the large, letting go of the small) and not the revision of Marxism by Stalin (nationalizing everything), because Marxian theory is about formulating a scientific theory of socioeconomic development, and so if they wanted to develop as rapidly as possible, they needed to adhere more closely to Marxian economics.

    Deng also knew the people would revolt if the country remained poor for very long, so they should hyper-focus on economic development first and foremost, at all costs, for a short period of time. He had the foresight to predict that such a hyper-focus on development would lead to a lot of problems: environmental degradation, rising wealth inequality, etc. So he argued that this should be a two-step development model: an initial stage of rapid development, followed by a second stage of shifting to a model with more of a focus on high-quality development to tackle the problems of the previous stage once they’re a lot wealthier.

    The first stage went from Deng Xiaoping to Jiang Zemin; they then announced they were entering the second phase under Hu Jintao, and this has carried on into the Xi Jinping administration. Western media decried Xi as an “abandonment of Deng,” because western media is just pure propaganda, when in reality this was Deng’s vision. China has switched to a model that no longer prioritizes rapid growth but prioritizes high quality growth.

    One of the policies for this period has been to tackle the wealth inequality that arose during the first period. They have done this through various methods, but one major one is huge poverty alleviation initiatives which the wealthy have been required to fund. Tencent, for example, “donated” an amount worth three-quarters of its whole yearly profits to government poverty alleviation initiatives. China does tax the rich, but they have a system of unofficial “taxation” as well, where they discreetly take over a company through a combination of party cells and becoming a major shareholder under the golden share system, and then make that company “donate” its profits back to the state. As a result, China’s wealth inequality has been gradually falling since 2010, and they’ve become the #1 funder of green energy initiatives in the entire world.

    The reason you don’t see this in western countries is because they are capitalist. Most westerners have a mindset that laws work like magic spells: you can just write down on a piece of paper whatever economic system you want, and this is like casting a spell to create that system as if by magic, and so if you just craft the language perfectly to get the perfect spell, then you will create the perfect system.

    The Chinese understand this is not how reality works. Economic systems are real physical machines that continually transform nature into goods and services for human consumption, and so whatever laws you write can only meaningfully be implemented in reality if there is a physical basis for them.

    The physical basis for political power ultimately rests in production relations, that is to say, ownership and control over the means of production, and thus the ability to appropriate all wealth. The wealth appropriation in countries like the USA is entirely in the hands of the capitalist class, and so they use that immense wealth, and thus political power, to capture the state and subvert it to their own interests, and thus corrupt the state to favor those very same capital interests rather than to control them.

    The Chinese understand that if you want the state to remain an independent force that is not captured by the wealth appropriators, then the state must have its own material foundations. That is to say, the state must directly control its own means of production, it must have its own basis in economic production as well, so it can act as an independent economic force and not be wholly dependent upon the capitalists for its material existence.

    Furthermore, its economic basis must be far larger, and thus more economically powerful, than any other capitalist. Even if it owns some basis, if that basis is too small it would still become subverted by capitalist oligarchs. The Chinese state directly owns and controls the majority of its largest enterprises, and of the minority of large enterprises it doesn’t directly control, it has indirect control over most. This makes the state itself by far the largest producer of wealth in the whole country, producing 40% of the entire GDP; no other single enterprise in China even comes close to that.

    This enormous control over production allows the state to control non-state actors and not the other way around. In a capitalist country, the non-state actors, these being the wealthy bourgeois class who own the large enterprises, instead capture the state and control it for their own interests, and the state does not genuinely act as an independent body with its own independent interests, but only as an aggregation of the average interests of the average capitalist.

    No law you write that is unfriendly to capitalists under such a system will be sustainable, and such laws are often entirely non-enforceable, because in capitalist societies there is no material basis for them. The US is a great example of this. It’s technically illegal to do insider trading, but everyone in US Congress openly does insider trading, openly talks about it, and the records of them getting rich from insider trading are pretty openly public knowledge. But nobody ever gets arrested for it, because the law is not enforceable: the material basis of US society is production relations that give control of the commanding heights of the economy to the capitalist class, and so the capitalists just buy off the state for their own interests, and there is no meaningfully competing power dynamic against that in US society.


    China does tax the rich, but they also have an additional system of “voluntary donations.” For example, Tencent “volunteered” to give up an amount worth about three-quarters of its yearly profits to social programs.

    I say “voluntary” because it’s obviously not very voluntary. China’s government has a party cell inside of Tencent as well as a “golden share” that allows it to act as a major shareholder. It basically has control over the company. These “donations” also go directly to government programs like poverty alleviation and not to a private charity group.



    There is no action at a distance in quantum mechanics; that is a layman’s misconception. If there were, it would not be compatible with special relativity, yet the two are already unified under the framework of quantum field theory. The no-communication theorem is a rather simple proof that shows there is no “sharing at a distance” in quantum mechanics. It is an entirely local theory. The misconception arises from people misinterpreting Bell’s theorem, which says quantum mechanics is not compatible with a local hidden variable theory; people falsely conclude it’s a nonlocal theory, but quantum mechanics is not a hidden variable theory, and so it is not incompatible with locality. It is a local theory. Bell’s theorem only shows nonlocality if you introduce hidden variables, meaning the theorem is really only applicable to a potential replacement for quantum mechanics and is not even applicable to quantum mechanics itself. It is applicable to things like pilot wave theory, but not to quantum theory.


  • pcalau12i@lemmygrad.ml to Science Memes@mander.xyz · Don't look now

    > We know how it works, we just don’t yet understand what is going on under the hood.

    Why should we assume “there is something going on under the hood”? This is my problem with most “interpretations” of quantum mechanics. They are complex stories to try and “explain” quantum mechanics, like a whole branching multiverse, for which we have no evidence.

    It’s kind of like if someone wanted to come up with a deep explanation to “explain” Einstein’s field equations and what is “going on under the hood.” Why should anything be “underneath” those equations? If we begin to speculate, we’re doing just that, speculating, and if we take any of that speculation seriously, as in actually genuinely believe it, then we’ve left the realm of being a scientifically-minded rational thinker.

    It is much simpler to just accept the equations at face-value, to accept quantum mechanics at face-value. “Measurement” is not in the theory anywhere, there is no rigorous formulation of what qualifies as a measurement. The state vector is reduced whenever a physical interaction occurs from the reference point of the systems participating in the interaction, but not for the systems not participating in it, in which the systems are then described as entangled with one another.

    This is not an “interpretation” but me just explaining literally how the terminology and mathematics work. If we just accept this at face value, there is no “measurement problem.” The only reason there is a “measurement problem” is because this contradicts people’s basic intuitions: if we accept quantum mechanics at face value, then we have to admit that whether or not properties of systems have well-defined values actually depends upon your reference point and is contingent on a physical interaction taking place.

    Our basic intuition tells us that particles are autonomous entities floating around in space on their lonesome like little stones or billiard balls up until they collide with something, and so even if they are not interacting with anything at all, they can meaningfully be said to “exist” with well-defined properties, which should be the same properties for all reference points (i.e. the properties are absolute rather than relational). Quantum mechanics contradicts this basic intuition, so people think there must be something “wrong” with it, that there must be something “under the hood” we don’t yet understand, and that only if we make the story more complicated or make a new discovery one day will we “solve” the “problem.”

    Einstein once said, “God does not play dice,” and Bohr rebutted, “Stop telling God what to do.” This is my response to people who believe in the “measurement problem.” Stop with your preconceptions on how reality should work. Quantum theory is our best theory of nature, there is currently no evidence it is going away any time soon, and it has withstood the test of time for decades. We should stop waiting for the day it gets overturned and disappears, accept this is genuinely how reality works, accept it at face value, and drop our preconceptions. We do not need any additional “stories” to explain it.

    > The blind spot is that we don’t know what a quantum state IS. We know the maths behind it, but not the underlying physics model.

    What is a physical model if not a body of mathematics that can predict outcomes? The physical meaning of the quantum state is completely unambiguous: it is just a list of probability amplitudes. Probability captures the likelihoods of certain outcomes manifesting during an interaction; quantum probability amplitudes are somewhat unique in that they are complex-valued, but this just adds the additional degrees of freedom needed to simultaneously represent interference phenomena. The state vector is a mathematical notation to capture likelihoods of events occurring while accounting for interference effects.
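    A quick toy illustration of why the amplitudes are complex (my own example, two unnormalized paths to the same detector): the separate path probabilities always add to the same number, but the relative phase between the amplitudes produces the interference term.

    ```python
    import numpy as np

    for phase in [0, np.pi / 2, np.pi]:
        a1 = 1 / np.sqrt(2)                    # amplitude for path 1
        a2 = np.exp(1j * phase) / np.sqrt(2)   # amplitude for path 2, phase-shifted
        no_interference = abs(a1)**2 + abs(a2)**2   # always 1.0: plain probabilities
        with_interference = abs(a1 + a2)**2         # 2.0, 1.0, 0.0 as phase varies
        print(round(phase, 2), no_interference, round(with_interference, 2))
    ```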

    > It’s likely to fall out when we unify quantum mechanics with general relativity, but we’ve been chipping at that for over 70 years now, with limited success.

    There has been zero “progress” because the “problem” of unifying quantum mechanics and general relativity is a pseudoproblem. It stems from a bias: because we had success quantizing all the fundamental forces except gravity, therefore gravity should be quantizable. Since the method that worked for all the other forces, renormalization, failed for gravity, all these other theories search for a different way to do it.

    But (1) there is no reason other than blind faith to think gravity should be quantized, and (2) there is no direct compelling evidence that either quantum mechanics or general relativity are even wrong.

    Also, we can already unify quantum mechanics and general relativity just fine. It’s called semi-classical gravity, and it is what Hawking used to predict that black holes radiate. It makes quantum theory work just fine in a curved spacetime and is consistent with all experiments to this day.
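    For reference, the defining equation of semi-classical gravity is just Einstein’s field equations sourced by the expectation value of the quantum stress-energy operator:

    $$G_{\mu\nu} = \frac{8\pi G}{c^4}\,\langle \hat{T}_{\mu\nu} \rangle$$

    Spacetime stays classical while matter is treated quantum mechanically.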

    People who dislike semi-classical gravity will argue it seems to make some absurd predictions under specific conditions we currently haven’t measured. But this isn’t a valid argument to dismiss it, because until you can actually demonstrate via experiment that such conditions can be created in physical reality, it remains a purely metaphysical criticism and not a scientific one.

    If semi-classical gravity is truly incorrect, then you cannot just point to it having certain strange predictions in certain domains; you also have to demonstrate that it is physically possible to actually probe those domains, and that this isn’t just a metaphysical quirk of the theory making predictions about conditions that aren’t physically possible in the first place, in which case what it predicts there would naturally be physically impossible as well.

    If you could construct such an experiment and its prediction was indeed wrong, you’d disprove it the very second you turned on the experiment. Hence, if you genuinely think semi-classical gravity is wrong and you are actually following the scientific method, you should be doing everything in your power to figure out how to probe these domains.

    But instead people search for many different methods of trying to quantize gravity and then in a post-hoc fashion look for ways it could be experimentally verified, then when it is wrong they go back and tweak it so it is no longer ruled out by experiment, and zero progress has been made because this is not science. Karl Popper’s impact on the sciences has been hugely detrimental because now everyone just believes if something can in principle be falsified it is suddenly “science” which has popularized incredibly unscientific methods in academia.

    Sorry, but both the “measurement problem” and the “unification problem” are pseudoproblems and not genuine scientific problems; both stem from biases about how we think nature should work rather than from just fitting the best physical model to the evidence and accepting this is how nature works. Physics is making enormous progress and huge breakthroughs in many fields, but there has been zero “progress” in solving the measurement “problem” or quantizing gravity, because neither of these is a genuine scientific problem.

    They have been working at this “problem” for decades now and what “science” has come out of it? String Theory which is only applicable to an anti-de Sitter space despite our universe being a de Sitter space, meaning it only applies to a hypothetical universe we don’t live in? Loop Quantum Gravity which can’t even reproduce Einstein’s field equations in a limiting case? The Many Worlds Interpretation which no one can even agree what assumptions need to be added to be able to mathematically derive the Born rule, and thus there is also no agreed upon derivation? What “progress” besides a lot of malarkey on people chasing a pseudoproblem?

    If we want to know how nature works, we can just ask her, and that is the scientific method. The experiments are questions, the results are her answers. We should believe her answers and stop calling her a liar. The results of experimental practice—the actual real world physical data—should hold primacy above everything else. We should set all our preconceptions aside and believe whatever the data tells us. There is zero reason to try and update our theories or believe they are “incomplete” until we get an answer from mother nature that contradicts with our own theoretical predictions.

    People always cry about how fundamental physics isn’t “making progress,” but what they have failed to justify is why it should progress in the first place. The only justification for updating a theory is, again, to better fit with experimental data, but they present no data. They just complain it doesn’t fit some bias and preconception they have. That is not science.


  • On the surface, it does seem like there is a similarity. If a particle is measured over here and later over there, in quantum mechanics it doesn’t necessarily have a well-defined position in between those measurements. You might then want to liken it to a game engine where the particle is only rendered when the player is looking at it. But the difference is that to compute how the particle arrived over there when it was previously over here, in quantum mechanics, you have to actually take into account all possible paths it could have taken to reach that point.

    This is something game engines do not do and actually makes quantum mechanics far more computationally expensive rather than less.
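    A crude sketch of the difference (a made-up lattice toy of mine, with a fabricated action, purely to show the counting): the amplitude to go from here to there sums a phase over every path, and the number of paths grows exponentially with the number of time steps, the opposite of render-on-demand laziness.

    ```python
    import numpy as np
    from itertools import product

    def amplitude(start, end, steps, sites=7):
        """Toy sum-over-paths on a line of `sites` positions: add exp(i*S)
        for every path from start to end, with a made-up kinetic action S."""
        total, n_paths = 0j, 0
        for middle in product(range(sites), repeat=steps - 1):
            path = (start, *middle, end)
            S = sum((path[i + 1] - path[i]) ** 2 for i in range(steps))
            total += np.exp(1j * S)
            n_paths += 1
        return total, n_paths

    amp, n = amplitude(start=3, end=3, steps=4)
    print(n)             # 343 paths already for 4 steps: grows as sites**(steps-1)
    print(abs(amp)**2)   # relative probability, up to normalization
    ```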


  • pcalau12i@lemmygrad.ml to Science Memes@mander.xyz · Gottem. :)

  • > So usually this is explained with two scientists, Alice and Bob, on far away planets. They’re each in the possession of a particle that is entangled with the other, and in a superposition of state 1 and state 2.

    This “usual” way of explaining it is just overly complicating it and making it seem more mystical than it actually is. We should not say the particles are “in a superposition” as if this describes the current state of the particle. The superposition notation should be interpreted as merely a list of probability amplitudes predicting the different likelihoods of observing different states of the system in the future.

    It is sort of like flipping a coin: while it’s in the air, you can say there is a 50% chance it will land heads and a 50% chance it will land tails. This is not a description of the coin in the present, as if the coin is in some smeared-out state of 50% landed-heads and 50% landed-tails. It has not landed at all yet!

    Unlike classical physics, quantum physics is fundamentally random, so you can only predict events probabilistically, but one should not conflate the prediction of a future event with the description of the present state of the system. The superposition notation is only writing down probability amplitudes for what you will observe (state 1 or state 2) in the future event that you go to interact with the particles, and is not a description of the state of the particles in the present.

    > When Alice measures the state of her particle, it collapses into one of the states, say state 1. When Bob measures the state of his particle immediately after, before any particle travelling at light speed could get there, it will also be in state 1 (assuming they were entangled in such a way that the state will be the same).

    This mistreatment of the mathematical notation as a description of the present state of the system also leads to confusing language like “it collapses into one of the states,” as if the change in a probability distribution represents a physical change to the system. The mental picture people who say this often have is that the particle literally, physically becomes the probability distribution prior to measuring it (the particle “spreads out” like a wave according to the probability amplitudes of the state vector), and when you measure the particle, this allows you to update the probabilities, so they interpret this as the wave physically contracting into an eigenvalue: it “collapses” like a house of cards.

    But this is, again, overcomplicating things. The particle never spreads out like a wave, and it never “collapses” back into a particle. The mathematical notation is just a way of capturing the likelihoods of the particle showing up in one state or the other, and when you measure what state it actually shows up in, you can update your probabilities accordingly. For example, if the coin is 50%/50% heads/tails and you observe it land on tails, you can update the probabilities to 0%/100% heads/tails, because you know it landed on tails and not heads. Nothing “collapsed”: you’re just observing the actual outcome of the event you were predicting and updating your statistics accordingly.
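    In code, the classical analogue of that update is ordinary conditioning (my own toy example for the perfectly correlated case):

    ```python
    # Joint distribution for a perfectly correlated pair, like two sealed
    # envelopes known to contain matching slips.
    joint = {("heads", "heads"): 0.5, ("tails", "tails"): 0.5}

    # Alice observes "tails", so condition the distribution on her outcome.
    posterior = {pair: p for pair, p in joint.items() if pair[0] == "tails"}
    norm = sum(posterior.values())
    posterior = {pair: p / norm for pair, p in posterior.items()}

    print(posterior)   # {('tails', 'tails'): 1.0} -- the other side is now
                       # certain, yet nothing physical happened to it.
    ```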


  • > Any time you do something to the particles on Earth, the ones on the Moon are affected also

    The no-communication theorem already proves that manipulating one particle in an entangled pair has no impact at all on the other. The proof uses the reduced density matrices of the particles, which capture both their probabilities of showing up in a particular state as well as their coherence terms, which capture their ability to exhibit interference effects. No change you can make to one particle in an entangled pair can possibly lead to an alteration of the reduced density matrix of the other particle.
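    Here is a small numpy check of that statement (my own sketch for a Bell pair; the random unitary stands in for “any change you can make” on one side):

    ```python
    import numpy as np

    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # (|00>+|11>)/sqrt(2)
    rho = np.outer(bell, bell.conj())

    def bob_reduced(rho):
        """Bob's reduced density matrix: trace out Alice's qubit."""
        return np.trace(rho.reshape(2, 2, 2, 2), axis1=0, axis2=2)

    # An arbitrary unitary acting on Alice's qubit alone (QR of a random matrix).
    rng = np.random.default_rng(0)
    M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    U, _ = np.linalg.qr(M)
    U_full = np.kron(U, np.eye(2))        # acts on Alice, identity on Bob

    rho_after = U_full @ rho @ U_full.conj().T
    print(np.allclose(bob_reduced(rho), bob_reduced(rho_after)))   # True:
    # Bob's probabilities and coherence terms are untouched.
    ```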


  • pcalau12i@lemmygrad.ml to Science Memes@mander.xyz · Observer

    I don’t think solving the Schrodinger equation really gives you a good idea of why quantum mechanics is even interesting. You should also study very specific applications of it where it yields counterintuitive outcomes to see why it is interesting, such as the GHZ experiment.
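    For anyone curious, here is a bare-bones numpy version of the GHZ argument (the standard textbook construction, not tied to any one source): the three products XYY, YXY, YYX each come out to -1, so any local hidden variable assignment of ±1 outcomes would force XXX to be (-1)³ = -1, yet quantum mechanics gives +1 for XXX.

    ```python
    import numpy as np

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

    ghz = np.zeros(8, dtype=complex)
    ghz[0] = ghz[7] = 1 / np.sqrt(2)          # (|000> + |111>)/sqrt(2)

    def expect(op1, op2, op3):
        """Expectation value of a three-qubit product observable on the GHZ state."""
        O = np.kron(np.kron(op1, op2), op3)
        return (ghz.conj() @ O @ ghz).real

    print(expect(X, Y, Y), expect(Y, X, Y), expect(Y, Y, X))  # each ~ -1.0
    print(expect(X, X, X))                                    # ~ +1.0
    ```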


    You have not made any point at all. Your first reply to me entirely ignored the point of my post, which you did not read, and followed up with an attack. I replied pointing out that you ignored the whole point of my post and just attacked me without actually responding to it, and now you respond again with literally nothing of substance, just saying “you’re wrong! touch grass! word salad!”

    You have nothing of substance to say, nothing to contribute to the discussion. You are either a complete troll trying to rile me up, or you just have a weird emotional attachment to this topic and felt an emotional need to respond and attack me prior to actually thinking up a coherent thing to criticize me on. Didn’t your momma ever teach you that “if you have nothing positive or constructive to say, don’t say anything at all”? Learn some manners, boy. Blocked.


  • They are incredibly efficient for short-term production, but very inefficient for long-term production. Destroying the environment is a long-term problem that doesn’t have immediate consequences on the businesses that engage in it. Sustainable production in the long-term requires foresight, which requires a plan. It also requires a more stable production environment, i.e. it cannot be competitive because if you are competing for survival you will only be able to act in your immediate interests to avoid being destroyed in the competition.

    Most economists are under a delusion known as neoclassical economics, which is literally a nonphysical theory that treats the basis of the economy not as the material world we actually live in but as abstract human ideas, which are assumed to operate according to their own internal logic without any material causes or influences. They then derive from these imagined “laws” regarding human ideas (which no one has ever experimentally demonstrated, but which were just invented in some economists’ armchair one day) that humans, left completely free to make decisions without any regulations at all, will maximize the “utils” of the population, making everyone as happy as possible.

    With the complete failure of this policy leading to the US Great Depression, many economists recognized this was flawed and made some concessions, such as with Keynesianism, but they never abandoned the core idea. In fact, the core idea was just reformulated to be compatible with Keynesianism in what is called the neoclassical synthesis. It still exists as a fundamental belief of nearly every economist that a completely unregulated market economy without any plan at all will automagically produce a society with maximal happiness, and while they will admit some caveats to this these days (such as the need for a central organization to manage currency in Keynesianism), these are treated as the exception and not the rule. Their beliefs are still incompatible with long-term sustainable planning, because in their minds the success of markets comes from util-maximizing decisions that are fundamental to the human psyche, and so any long-term plan must contradict this and lead to a bad economy that fails to maximize utils.

    The rise of Popperism in western academia has also played a role here. A lot of material scientists have been rather skeptical of the social sciences and aren’t really going to take seriously arguments like those of neoclassical economics, which is based largely in mysticism about human free will, and so a second argument against long-term planning was put forward by Karl Popper which has become rather popular in western academia. Popper argued that it is impossible to learn from history because it is too complicated, with too many variables, and you cannot control them all. You would need a science that studies how human societies develop in order to justify a long-term development plan into the future, but if it’s impossible to study them to learn how they develop because they are too complicated, then it is impossible to have such a science, and thus impossible to justify any sort of long-term sustainable development plan. It would always be based on guesswork and so more likely to do harm than good. Popper argued that instead of long-term development plans, the state should be purely ideological, what he called an “open society,” operating purely on the ideology of liberalism rather than getting involved in economics.

    As long as both neoclassical economics and Popperism are dominant trends in western academia, there will never be long-term sustainable planning, because both are fundamentally incompatible with it.


    You did not read what I wrote, so it is ironic that you call it “word salad” when you are not even aware of the words I wrote, since you had an emotional response and wrote this reply without actually addressing what I argued. I stated that it is impossible to have a very large institution without strict rules that people follow, which requires enforcement of those rules, and that means a hierarchy, as you will have rule-enforcers.

    Also, you are insisting your personal definition of anarchism is the one true definition and that I am somehow stupid for disagreeing with it, yet anyone can scroll through the comments on this thread and see there are other people disagreeing with you while also defending anarchism. A lot of anarchists do not believe anarchism means “no hierarchy.” Like, seriously, do you unironically believe in entirely abolishing all hierarchies? Do you think a medical doctor should have as much authority over how to treat an injured patient as the janitor of the same hospital? Most anarchists aren’t even “no hierarchy”; they are “no unjustified hierarchy.”

    The fact you are entirely opposed to hierarchy makes your position even more silly than what I was criticizing.