Real doctors, with backing from academia or whoever's in charge of deciding how you science, have a tendency to just plain get it wrong and not realize it for a long time.
Homeopathy is a good example of this. It appeared to get great results when Samuel Hahnemann created it in the late 1700s, and it had such staying power that by the 1800s it was considered a legitimate, mainstream field of medical practice.
Today, those of us with sense know homeopathy is nonsense (New Age healing being, regrettably, still a thing). The only reason it got such wonderful results was that the state of medicine for a long stretch of human history was so god-awful that receiving no treatment at all was actually the smarter option. Since homeopathy is basically just "no medicine at all," that's exactly what was driving its success.
Incidentally, this is also why the Christian Science movement (which was neither Christian nor science) attracted so many followers: people genuinely lived longer on it, because it required them to stop smoking at a time when no one knew smoking killed you.
Anyhow. With that in mind, I want to know if there’s a case where the exact opposite happened.
Where scientists got together on a subject and said, "Wow, only an idiot would believe this. This clearly does not work, cannot work, and is totally impossible."
Only for someone to turn around, throw down research proving that there was no pseudo in this proposed pseudoscience with their finest “Ya know I had to do it 'em” face.
The closest I can think of is how people treated germ theory, the idea that tiny invisible creatures were making us all sick, as the ramblings of a madman. But that was more a refusal to look at evidence, not a case of evidence that said "no" being replaced by better evidence that said "disregard that, the answer is actually yes."
Can anyone who sciences for a living instead of merely reading science articles as a hobby and understanding basically only a quarter of them at best tell me if something like that has happened?
Thank you, have a nice day.
Quantum Mechanics: The early concepts of quantum mechanics, such as quantized energy levels and wave-particle duality, were initially met with resistance, even by scientists like Albert Einstein, who helped develop them.
Reason for Rejection: The ideas were counterintuitive and challenged classical physics’ deterministic view, introducing probabilistic interpretations of nature.
Adoption: The overwhelming experimental evidence, such as the photoelectric effect, blackbody radiation, and the behavior of atoms and subatomic particles, eventually led to the acceptance of quantum mechanics as a fundamental framework in physics.
Schrödinger's cat was also meant as a rejection of quantum mechanics: something cannot be both a wave and a particle until observed, the same way a cat cannot be both alive and dead until observed. However, quantum superposition does seem to be a reality, which makes the thought experiment even more bizarre.
To make it clear how bizarre: the Elitzur–Vaidman bomb test has actually been performed experimentally, demonstrating that both possibilities can simultaneously exist and interact with each other. To extend the Schrödinger's-cat joke, quantum physics allows that you may find a half-eaten dead cat in the box.
Schrödinger was not “rejecting” quantum mechanics, he was rejecting people treating things described in a superposition of states as literally existing in “two places at once.” And Schrödinger’s argument still holds up perfectly. What you are doing is equating a very dubious philosophical take on quantum mechanics with quantum mechanics itself, as if anyone who does not adhere to this dubious philosophical take is “denying quantum mechanics.” But this was not what Schrödinger was doing at all.
What you say here is a popular opinion, but it just doesn't hold up if you apply any scrutiny to it, which is what Schrödinger was trying to show. Quantum mechanics is a statistical theory where probability amplitudes are complex-valued, so an outcome can carry an amplitude of -1 (a "-100% chance") or even i (a "100i% chance"). You read off what these amplitudes mean for physical reality from how far they are from zero (the further from zero, the more probable, via the squared magnitude), but the negative signs and imaginary parts allow amplitudes to cancel out in ways that could never occur in ordinary probability theory. These cancellations, known as interference effects, are the hallmark of quantum mechanics.
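A minimal sketch of that cancellation (my own illustration, not from the thread): two paths to the same outcome, each "50% likely" on its own, can add up to a zero chance once you add the amplitudes first and only then square.

```python
# Two paths leading to the same detector, with opposite-signed amplitudes.
amp_path_1 = complex(1 / 2**0.5, 0)   # amplitude  1/sqrt(2)
amp_path_2 = complex(-1 / 2**0.5, 0)  # amplitude -1/sqrt(2)

# Born rule: the probability of each path alone is the squared magnitude.
p1 = abs(amp_path_1) ** 2  # 0.5
p2 = abs(amp_path_2) ** 2  # 0.5

# Ordinary probability theory: the chances just add.
classical_total = p1 + p2  # 1.0

# Quantum mechanics: amplitudes add FIRST, then you square.
quantum_total = abs(amp_path_1 + amp_path_2) ** 2  # 0.0, destructive interference

print(classical_total, quantum_total)
```

Flipping the sign of one amplitude instead gives constructive interference, which is exactly the kind of behavior a list of ordinary (non-negative) probabilities can never produce.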
Because quantum probabilities have this difference, some people have wondered if maybe they are not probabilities at all but describe some sort of physical entity. If you believe this, then when you describe a particle as having a 50% probability of being here and a 50% probability of being there, then this is not just a statistical prediction but there must be some sort of “smeared out” entity that is both here and there simultaneously. Schrödinger showed that believing this leads to nonsense as you could trivially set up a chain reaction that scales up the effect of a single particle in a superposition of states to eventually affect a big system, forcing you to describe the big system, like a cat, in a superposition of states. If you believe particles really are “smeared out” here and there simultaneously, then you have to believe cats can be both “smeared out” here and there simultaneously.
Ironically, it was Schrödinger himself that spawned this way of thinking. Quantum mechanics was originally formulated without superposition in what is known as matrix mechanics. Matrix mechanics is complete, meaning, it fully makes all the same predictions as traditional quantum mechanics. It is a mathematically equivalent theory. Yet, what is different about it is that it does not include any sort of continuous evolution of a quantum state. It only describes discrete observables and how they change when they undergo discrete interactions.
Schrödinger did not like this on philosophical grounds due to the lack of continuity. There were discrete “gaps” between interactions. He criticized it saying that “I do not believe that the electron hops about like a flea” and came up with his famous wave equation as a replacement. This wave equation describes a list of probability amplitudes evolving like a wave in between interactions, and makes the same predictions as matrix mechanics. People then use the wave equation to argue that the particle literally becomes smeared out like a wave in between interactions.
However, Schrödinger later abandoned this point of view because it leads to nonsense. He pointed out in one of his books that while his wave equation gets rid of the gaps between interactions, it introduces a new gap between the wave and the particle: the moment you measure the wave, it randomly "jumps" into being a particle, which is sometimes called the "collapse of the wave function." This made even less sense, because suddenly there is a special role for measurement. Take the cat example: why doesn't the cat's observation of the wave cause it to "collapse," but the person's observation does? There is no special role for "measurement" in quantum mechanics, so it is unclear how to even answer this within the framework of the theory.
Schrödinger was thus arguing to go back to the position of treating quantum mechanics as a theory of discrete interactions. There are just “gaps” between interactions we cannot fill. The probability distribution does not represent a literal physical entity, it is just a predictive tool, a list of probabilities assigned to predict the outcome of an experiment. If we say a particle has a 50% chance of being here or a 50% chance of being there, it is just a prediction of where it will be if we were to measure it and shouldn’t be interpreted as the particle being literally smeared out between here and there at the same time.
There is no reason you have to actually believe particles can be smeared out between here and there at the same time. This is a philosophical interpretation which, if you believe it, comes with an enormous number of problems, such as the one Schrödinger pointed out, which ultimately gets to the heart of the measurement problem; and there are even larger problems. Wigner also pointed out a paradox whereby two observers would assign different probability distributions to the same system. If these are merely probabilities, that isn't a problem: if I flip a coin and see that it landed heads, I would say it has a 100% chance of being heads, but if I covered it up so you did not see it, you would assign a 50% probability to heads or tails. If you believe the wave function represents a physical entity, though, then you can set up something similar in quantum mechanics where two different observers describe two different waves, so the physical shape of the wave would have to differ based on the observer.
There are a lot more problems as well. A probability distribution scales up in terms of its dimensions exponentially. With a single bit, there are two possible outcomes, 0 and 1. With two bits, there’s four possible outcomes, 00, 01, 10, and 11. With three bits, eight outcomes. With four bits, sixteen outcomes. If we assign a probability amplitude to each possible outcome, then the number of degrees of freedom grows exponentially the more bits we have under consideration.
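The counting argument above can be sketched in a few lines (my own illustration, not from the thread): one probability amplitude per possible outcome, and the outcome count doubles with every added bit.

```python
# Number of amplitudes needed to describe n bits: one per outcome, 2**n total.
amplitude_counts = {n_bits: 2 ** n_bits for n_bits in range(1, 5)}
print(amplitude_counts)  # {1: 2, 2: 4, 3: 8, 4: 16}

# The blow-up is fast: a modest 300 bits already needs more amplitudes
# than the rough estimate of atoms in the observable universe (~10**80).
print(2 ** 300 > 10 ** 80)  # True
```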
This is also true in quantum mechanics for the wave function, since it is again basically a list of probability amplitudes. If we treat the wave function as representing a physical wave, then this wave would not exist in our four-dimensional spacetime but in an enormous abstract space known as a Hilbert space, which is infinite-dimensional for continuous systems. If you want to believe the universe is actually physically made up of waves in an infinite-dimensional space, have at ya. But personally, I find it much easier to just treat a probability distribution as, well, a probability distribution.
It may be surprising today, but Einstein was not awarded the 1921 Nobel Prize in Physics (presented in 1922) for the theories of relativity (special, 1905; general, 1915), but for his explanation of the photoelectric effect (1905), as relativity was still controversially discussed at the time.
I remember they actually wrote a book “debunking him” called “100 Authors Against Einstein”
To which, like a total gigachad, he responded. “If I were really wrong, it would have only taken one.”
Our professor in quantum chemistry always told the story that no one believed in it at the beginning and everyone wanted to disprove it. This led to it becoming one of the best-tested hypotheses in the field today.
Quantum mechanics works, no doubt about it. What I seriously doubt is the interpretation of what it means. When you get right down to it, it's just our best, most successful attempt to model physical systems we can't observe in enough detail to tell exactly what's happening. The uncertainty principle isn't some magic truth about the nature of reality; it's a statement about measuring particles by bumping them with other particles. Wave-particle duality is an interpretation of the math, not something the math requires. Bell "disproved" local hidden variable theories and we've been working with our hands tied behind our backs ever since, just so we can preserve the comforting fiction that we have free will, or that it would mean anything in the first place. It honestly pisses me off that so many scientists just accept the Copenhagen interpretation as truth, to the point that it's become dogma and anyone suggesting otherwise is automatically wrong. It's no wonder particle physics has hardly made any progress in the last few decades.
Everything we do is just trying to model reality. It has always been like that, reality is not simple.
What do you mean by "hands tied behind our backs"?
It is weird that you start by criticizing our physical theories being descriptions of reality then end criticizing the Copenhagen interpretation, since this is the Copenhagen interpretation, which says that physics is not about describing nature but describing what we can say about nature. It doesn’t make claims about underlying ontological reality but specifically says we cannot make those claims from physics and thus treats the maths in a more utilitarian fashion.
The only interpretation of quantum mechanics that actually tries to take it at face value as a theory of the natural world is relational quantum mechanics, which isn't that popular, since most people dislike the notion of reality being relative all the way down. Almost all philosophers in academia define objective reality in terms of something absolute and point-of-view independent, so most academics struggle to even comprehend such a claim, and interpreting quantum mechanics at face value as a theory of nature remains very unpopular.
All other interpretations either: (1) treat quantum mechanics as incomplete and therefore something needs to be added to it in order to complete it, such as hidden variables in the case of pilot wave theory or superdeterminism, or a universal psi with some underlying mathematics from which to derive the Born rule in the Many Worlds Interpretation, or (2) avoid saying anything about physical reality at all, such as Copenhagen or QBism.
Since you mention "free will," I suppose you are talking about superdeterminism? Superdeterminism works by pointing out that at the Big Bang everything was localized to a single place, and thus locally causally connected, so all apparent nonlocality could be explained if the correlations between things were all established at the Big Bang. The problem with this point of view, however, is that it only works if you know the initial configuration of all the particles in the universe and have a supercomputer powerful enough to trace them forward to the present day.
Without it, you cannot actually predict any of these correlations ahead of time. You have to just assume that the particles “know” how to correlate to one another at a distance even though you cannot account for how this happens. Mathematically, this would be the same as a nonlocal hidden variable theory. While you might have a nice underlying philosophical story to go along with it as to how it isn’t truly nonlocal, the maths would still run into contradictions with special relativity. You would find it difficult to construe the maths in such a way that the hidden variables would be Lorentz invariant.
Superdeterministic models thus struggle to ever get off the ground; they all exist only as toy models. None of them can reproduce all the predictions of quantum field theory, which requires not just accounting for quantum mechanics but doing so in a way that is also compatible with special relativity.