☆ Yσɠƚԋσʂ ☆

Cake day: January 18th, 2020



  • The United States has done far more harm than good for humanity at large. The individualistic values it champions have led to a fragmented society that leaves many citizens in misery. Its global hegemony has resulted in the destruction of numerous countries, with countless lives lost to its military interventions, coups, and regime change operations around the world. Moreover, the US’s extractive policies have prevented other nations from developing their own economies, perpetuating a cycle of underdevelopment and dependency. Finally, as one of the largest per-capita consumers of energy and a major producer of fossil fuels, the United States is among the worst offenders when it comes to climate change, exacerbating global environmental crises with its unsustainable practices.


  • The reality is that every source will have some sort of viewpoint, which constitutes a bias. I think people should be careful with all sources, and it’s actually good to look at viewpoints from across the spectrum. You don’t have to agree with them or trust them, but it’s often useful to understand their perspective, even if only for the purpose of framing a counterpoint. If you know a source like Quillette has a particular bias, then you simply keep that in mind when you read it.

    The sources I dismiss are the ones that can’t provide primary sources for the claims they make or are known to be factually wrong. These are the kinds of sources that constitute a waste of time and should be avoided.






  • I’d say it’s not so much that this tech doesn’t have value, but that it gets hyped up and used for things it really shouldn’t be used for. Specifically, the way models work currently, they’re not suitable for any scenario where you need an exact answer. So, it’s great for stuff like generative art or creative writing, but absolutely terrible for solving math problems or driving cars. Understanding the limitations of the tech is key for applying it in a sensible way.



  • not working due to hallucinations

    It’s pretty clear that hallucinations are an issue only for specific use cases. This problem certainly doesn’t make ML useless. For example, I find it’s far faster to use a code-oriented model to get an idea of how to solve a problem than going to Stack Overflow. The output of the model doesn’t need to be perfect; it just needs to get me moving in the right direction.

    Furthermore, there is nothing to suggest that the problem of hallucinations is fundamental and can’t be addressed going forward. I’ve linked an example of a research team doing precisely that above.

    wasteful in terms of resources

    Sure, but so are plenty of other things. And as I’ve illustrated above, there are already drastic improvements happening in this area.

    creates problematic behaviors in terms of privacy

    Not really a unique problem either.

    creates more inequality

    Don’t see how that’s the case. In fact, I’d argue the opposite is true, especially if the technology is open and available to everyone.

    and other problems and is thus in most cases (say outside of e.g numerical optimization as already done at e.g DoE, so in the “traditional” sense of AI, not the LLM craze) better be entirely ignored.

    There is a lot of hype around this tech, and some of it will die down eventually. However, it would be a mistake to throw the baby out with the bath water.

    what I mean is that the argument of inevitability itself is dangerous, often abused.

    The argument of inevitability stems from the fact that people have already found many commercial uses for this tech, and there is a ton of money being poured into it. This is unlikely to stop regardless of what your personal opinion on the tech is.




  • Open source does actually pave the way towards addressing many of these problems. For example, Petals is a torrent-style system that lets regular people pool their resources to run models.

    Problems like hallucinations and energy consumption aren’t inherent either. These problems are actively being worked on, and people are finding ways to make models more efficient all the time. For example, by using the same techniques Google used to solve Go (MCTS and value backpropagation), Llama-3 8B scores 96.7% on the GSM8K math benchmark. That’s better than GPT-4, Claude, and Gemini, with 200x fewer parameters. https://arxiv.org/pdf/2406.07394

    And here’s an approach being explored for making models more reliable https://www.wired.com/story/game-theory-can-make-ai-more-correct-and-efficient/

    The reality is that we can’t put the toothpaste back in the tube now. This tech will be developed one way or the other, and it’s much better if it’s developed in the open.
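For readers unfamiliar with the MCTS technique the linked paper builds on, here is a minimal toy sketch of the select/expand/simulate/backpropagate loop. The bit-guessing task, target sequence, and all constants below are invented for illustration; they are not from the paper, which applies MCTS to refining LLM answers rather than to a toy game.

```python
import math
import random

# Toy stand-in for MCTS-guided answer search: choose 6 bits one at a
# time; the reward is the fraction of bits matching a hidden target.
TARGET = [1, 0, 1, 1, 0, 1]

def reward(path):
    """Fraction of positions that match the target sequence."""
    return sum(a == b for a, b in zip(path, TARGET)) / len(TARGET)

class Node:
    def __init__(self, path, parent=None):
        self.path = path          # choices made so far
        self.parent = parent
        self.children = {}        # action -> child Node
        self.visits = 0
        self.total = 0.0          # sum of rollout rewards seen below

    def ucb(self, c=1.4):
        # UCB1: exploit average reward, explore rarely-visited children
        if self.visits == 0:
            return float("inf")
        exploit = self.total / self.visits
        explore = c * math.sqrt(math.log(self.parent.visits) / self.visits)
        return exploit + explore

def mcts(iterations=3000, seed=0):
    random.seed(seed)
    root = Node([])
    for _ in range(iterations):
        # 1. Selection: descend via UCB1 while fully expanded
        node = root
        while len(node.children) == 2:
            node = max(node.children.values(), key=lambda ch: ch.ucb())
        # 2. Expansion: add one untried action, unless terminal
        if len(node.path) < len(TARGET):
            action = random.choice([a for a in (0, 1) if a not in node.children])
            child = Node(node.path + [action], parent=node)
            node.children[action] = child
            node = child
        # 3. Simulation: random rollout to a complete sequence
        tail = [random.randint(0, 1) for _ in range(len(TARGET) - len(node.path))]
        value = reward(node.path + tail)
        # 4. Backpropagation: credit every node back up to the root
        while node is not None:
            node.visits += 1
            node.total += value
            node = node.parent
    # Read off the most-visited action at each level as the answer
    best, node = [], root
    while node.children:
        action, node = max(node.children.items(), key=lambda kv: kv[1].visits)
        best.append(action)
    return best
```

With enough iterations the most-visited path converges toward the high-reward sequence; the paper’s contribution is pairing this search loop with an LLM that proposes and scores candidate refinements instead of random bits.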






  • Understanding what liberal ideology stands for is key to seeing why liberals are becoming an insular cult now that the ideology is in crisis. This is precisely the point I’m making here regarding the threat of populism:

    When threatened by populism, liberalism readily abandons its political ideals in favor of preserving the capitalist economic system. Liberalism ultimately serves as a mask for capitalism, concealing its exploitative nature behind a facade of individual freedom and democracy.

    Liberals see both right- and left-wing populism, which is another term for the democratic will of the majority, as a threat to their core ideology of private ownership. Hence liberals lash out whenever they see sources that fall outside the approved liberal Overton window.



  • Our tendency to perceive agency in ambiguous situations sheds light on the origins of cognitive phenomena like religion. Our minds, shaped by eons of natural selection, are finely tuned to err on the side of caution. Think of a group of ancient hunters traversing the savanna. A rustle in the tall grass could be merely the wind, or it could be a lurking predator. Those who instinctively assume the worst and flee are more likely to survive than those who dismiss the sound and remain vulnerable.

    Over time, this survival advantage has led to the evolution of cognitive models that favor the perception of agency, even when there is none. We are prone to seeing patterns, faces, and intentions in random events because the cost of mistakenly attributing agency is far less than the cost of failing to detect a real threat. This explains why we might see a face in the clouds or feel a presence in a dark room. Religion is a direct byproduct of this phenomenon.

    Furthermore, it’s important to keep in mind that every contemporary belief system stems from an uninterrupted chain of development, tracing back to the earliest human societies. This implies that every ideology has enjoyed a measure of success, having endured the test of time. This makes it difficult to definitively assert that one set of beliefs is fundamentally “more correct” than another, as truth is often subjective and dependent on context. After all, the effectiveness of a belief system in enabling a culture to thrive and grow is perhaps the most relevant measure of its “truthfulness.”

    If somebody grows up in a religious environment, then religion becomes central to their world model. It’s not an isolated concept; it’s an integral part of the tapestry of their mind. Our brains, like all physical systems, operate within the constraints of energy efficiency. Assimilating a new idea requires mental effort, as it necessitates restructuring our existing cognitive framework to accommodate the newcomer. This, in turn, translates to expending energy to rebalance the connections within the neural networks of our brain. If a novel concept clashes significantly with our established beliefs, the energetic cost of integration can be substantial. Radical ideas that demand a significant restructuring of our mental models, such as challenging deeply held religious beliefs or political ideologies, may be discarded, deemed “too expensive” from an energetic standpoint.

    This principle helps explain why it’s often so difficult to change the views of others, regardless of the soundness of your argument. The strength of the argument alone may not be enough to overcome the inherent inertia of our entrenched belief systems.
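The asymmetric-cost intuition behind the savanna example above can be made concrete with a back-of-the-envelope calculation. Every number below is invented purely for illustration: a rustle is actually a predator with some small probability, a needless flight wastes a little energy, and ignoring a real predator is catastrophically expensive.

```python
# Signal-detection toy model with made-up numbers.
p_predator = 0.05        # assumed probability a rustle is a real threat
cost_flee = 1.0          # assumed cost of fleeing from mere wind
cost_caught = 1000.0     # assumed cost of ignoring a real predator

# Policy A: always perceive agency and flee.
expected_cost_flee = cost_flee
# Policy B: always dismiss the rustle as wind.
expected_cost_ignore = p_predator * cost_caught

print(expected_cost_flee)    # 1.0
print(expected_cost_ignore)  # 50.0
```

Even with predators present only 5% of the time, the “paranoid” policy is fifty times cheaper in expectation, which is why selection favors minds that over-attribute agency.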