• 0 Posts
  • 14 Comments
Joined 8 months ago
Cake day: March 13th, 2024

  • Lots of stretching here. The paper uses simulations of microtubules to show quantum effects when tryptophan residues are excited by UV light. But simulations are all it did, and those simulations did not include the bends and the many, many dynein molecules found on real microtubules. This matters because researchers have been hitting every biomolecule with UV excitation for decades, including microtubules, and have never observed this effect.

    A key point missing from this video is that microtubules are dynamic. They are constantly disassembling, reassembling, and recycling components, and this occurs on very short timescales. They also do not bridge cell membranes, so if information is passing through networks of microtubules, it is constantly being disrupted and never reaches other cells. Synapses do handle cell-cell information transfer (and the role of microtubules there is already well studied and not quantum in nature). Why would quantum microtubule information be limited to a single cell? Maybe it could influence coordinated assembly and disassembly at the termini, but the authors offer no evidence that this quantum phenomenon has any chemical effect, which would be required to change anything about how those enzymes behave.

    We already know of a mechanism by which information is transported across microtubules: physical transport of signalling molecules. They are walked (quite literally, dynein is cool) along the microtubules to different sites in the cell. No quantum effects needed to explain this phenomenon.


  • Go to pubmed. Type “social media mental health”. Read the studies, or the reviews if you don’t have the time.

    The average American teenager spends 4.8 hours/day on social media. Increased use of social media is associated with increased rates of depression, eating disorders, body image dissatisfaction, and externalizing problems. These studies don't show causation, but guess what: we literally cannot show causation in most human studies because of ethics.

    Social media drastically alters peer interactions, with negative interactions (bullying) associated with increased rates of self harm, suicide, internalizing and externalizing problems.

    Mobile phone use alone is associated with sleep disruption and daytime sleepiness.

    Looking forward to your peer-reviewed critiques of these studies claiming they are all “just vibes.”





  • We are not in a recession. Wage stagnation is not some temporary hiccup in the economy; it is a systemic problem. Stop conflating the two, and stop complaining that a macroeconomic term with a very specific meaning isn't defined the way you want it to be. Stop expecting the problem to heal itself if the Fed lowers rates or taxes get nudged up or down or whatever. We know how to fix wage stagnation because we have done it before: regulation, labor protections, minimum wage increases. Wage stagnation occurs in the absence of these things, and they can only be done by Congress.


  • I think where you are going wrong here is assuming that your own internal perception is not also a hallucination by your definition. It absolutely is. But our minds are embodied, so we are able to check these hallucinations against outside stimulus. Your gripe that current LLMs are unable to do that is really a criticism of current implementations of AI, which are trained on some data, frozen, then restricted from further learning by design. Imagine if your mind were removed from all stimulus and then tested. That is what current LLMs are, and I doubt a human mind would behave much better in such a scenario. Just look at what happens to people cut off from social stimulus: their mental capacities degrade rapidly, and that is just one type of stimulus.

    Another problem with your analysis is that you expect the AI to do something humans cannot do: cite sources without an external reference. Go ahead right now and, from memory, cite a source for something you know. No Google searching, just remember where you got that knowledge. Now who is the one that cannot cite sources? The way we cite sources generally requires access to the source at that moment, which current LLMs do not have by design. Once again, this is a gripe with the implementation of a very new technology.

    The main problem I have with so many of these "AI isn't really able to…" arguments is that no one offers a rigorous definition of knowledge, understanding, introspection, etc., in a way that can be measured and tested. Further, we just assume that humans are able to do all these things without any tests to see if we can. Don't even get me started on the free will vs. illusory free will debate that remains unsettled after centuries. The crux of many of these arguments is the assumption that humans can do these things and are somehow uniquely able to do them. We had the same debates about levels of intelligence in animals long ago, and we found that there really isn't any cognitive capability that is uniquely human.