• 0 Posts
  • 18 Comments
Joined 4 months ago
Cake day: July 16th, 2024

  • I am not a lawyer, but you won’t be surprised to hear that:

    1. I don’t have the inside story of Bing in Germany. It could be that Microsoft either doesn’t want to do it well, or hasn’t yet done it well enough. I’m not promising either in particular, but it can be done.
    2. Generally, as an engineer you have a pile of options with trade-offs. You absolutely can build nuanced solutions, since the law and the lawyers often live in nuanced realities. That is the reality even at the best sorts of tech companies, the ones that are actually trying.

    My contention is that maximalism and strict binary assumptions won’t work on either end, and don’t satisfy what anyone truly wants or needs. If we’re not careful about what it takes to move the needle, we concede the point by saying ‘it can’t be done, so it won’t be done.’


  • That’s a good question, because there is nuance here! It’s interesting because I also ran into this issue while working on similar projects. First off, it’s important to understand what your obligation is and how to think about data deletion. No one believes it is necessary to permanently remove all copies of anything, any more than it is necessary to prevent all forms of plagiarism. No one is complaining that it is possible to plagiarize at all; we’re complaining that major institutions continue to do so with ongoing disregard for the law.

    Only maximalists fall into the trap of thinking of the world in a binary sense: either all in, or do nothing at all.

    For most of us, it’s about economics and risk profiles. Open source models get trained continuously over time; there won’t be one version. Saying that open source operators do have some obligation to curate future training in good faith has a long-tail impact on how a model evolves. Previously ingested PII or plagiarized data might still exist, but its value, novelty, and relevance to economic life drop sharply over time. No artist or writer argues that copyright protections need to exist forever. They literally just need survivable working conditions and respect for attribution. The same goes for PII: no one claims they must be made completely anonymous. They just want cybercrime to be taken seriously rather than abandoned in favor of one party taking the spoils of their personhood.

    Also, yes, there are algorithms that can control how further learning promotes or demotes growth and connections relative to various policies. The point isn’t that any one policy is perfect; what matters is a willingness to adopt policies in good faith. (Most such LLM filters are intentionally weak, so that those with $$ paying for API access can outright ignore them, while the vendor turns around and claims it can’t be solved, too bad, so sad.)

    Yes, it is possible to perturb and influence the evolution of a continuously trained neural network based on external policy, and they’re carefully lying through omission when they say they can’t 100% control it or 100% remove things. Fine. That’s not necessary, in either copyright or privacy law. It never has been.
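    To make that concrete, here is a minimal sketch of one family of techniques along these lines, gradient-ascent unlearning, on a toy online-trained logistic model. Everything in it (the model, the data, the learning rates, the step counts) is illustrative, not any particular production system:

```python
import numpy as np

# Toy sketch: "forget" one record from an online-trained logistic model
# by gradient *ascent* on that record, instead of retraining from scratch.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, x, y):
    # Gradient of the logistic loss for a single example.
    return (sigmoid(x @ w) - y) * x

def loss(w, x, y):
    p = sigmoid(x @ w)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# A training stream: ordinary data, plus one record we must later forget.
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(float)
x_forget, y_forget = X[0], y[0]

w = np.zeros(5)
for xi, yi in zip(X, y):          # one ordinary online-training pass
    w -= 0.1 * grad(w, xi, yi)

before = loss(w, x_forget, y_forget)

# Policy-driven perturbation: ascend the loss on the forgotten record,
# demoting whatever the model learned from it.
for _ in range(25):
    w += 0.1 * grad(w, x_forget, y_forget)

after = loss(w, x_forget, y_forget)
print(after > before)  # True: the model now fits the forgotten record worse
```

    It doesn’t scrub every trace, but it doesn’t need to; it demotes what the model retains about the record, which is the kind of good-faith, economically bounded step the law actually asks for.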


  • Despite what the tech companies say, there are absolutely techniques for identifying the sources of their data, and there are absolutely techniques for good-faith data removal upon request. I know this because I’ve worked on such projects at some of the less major tech companies that make some effort to abide by European law.

    The trick is, it costs money, and the economics shift such that one must eventually begin to do things like audit and curate. The shape and size of your business, plus how you address your markets, gain nuance in ways that don’t work when your entire business model is smooth, mindless amortizing of other people’s data.

    But I don’t envy these tech companies, or the increasingly absurd stories they must tell to hide the truth. A handsome sword hangs above their heads.
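    As one deliberately simplified ingredient of that kind of system: fingerprint records at ingestion so a later removal request can be matched against what was ingested. The function names, normalization scheme, and example sources below are all made up for illustration; real provenance systems also handle near-duplicates and derived artifacts:

```python
import hashlib

# Illustrative sketch of good-faith removal: hash a normalized form of
# each record at ingestion time, keyed to its source, so a deletion
# request can be matched later. This shows only the exact-match core.

def fingerprint(record: str) -> str:
    # Normalize before hashing so trivial whitespace/case changes still match.
    normalized = " ".join(record.split()).lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

ingested = {}  # fingerprint -> source metadata

def ingest(record: str, source: str):
    ingested[fingerprint(record)] = source

def removal_request(record: str):
    # Returns the source to purge and exclude from future training, if known.
    return ingested.pop(fingerprint(record), None)

ingest("The quick brown fox.", "site-a.example/page1")
ingest("Another document entirely.", "site-b.example/page9")

# A deletion request with incidental whitespace still matches.
print(removal_request("  The quick   brown fox. "))  # site-a.example/page1
print(removal_request("Never ingested text."))       # None
```

    None of this is exotic; it’s bookkeeping. It just has to be paid for at ingestion time, which is exactly the cost shift mentioned above.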


  • Moravec’s Paradox is actually more interesting than it appears. You don’t have to take his reasoning or Pinker’s seriously, but the observation is salient. The paradox also gets stated in other ways by other scientists; it’s a common theme.

    One way I often think about it: in order for you to survive, the intelligence of moving through unknown spaces and managing numerous fuzzy energy systems is far more important to prioritize and master than, like, the abstract conceptual spaces that are both not full of calories and cheaper to externalize anyway.

    It’s part of why I don’t think there is a globally coherent hierarchy of intelligence, or potentially even general intelligence at all. Just the distances and spaces that a thing occupies, and the competencies that define being in that space.



  • I feel this shouldn’t be at all surprising, and it continues to point to Diverse Intelligence as conceptually more fundamental than any sort of General Intelligence. There’s a huge difference between what something is in theory or in principle capable of, and the economic story of what that thing attends to naturally, as per its energy story.

    Broadly, even simple things are powerful precisely because of what they don’t bother trying to do until perturbed.

    Ultimately, I hypothesize that the reason VCs like the idea of LLMs doing simple things far more expensively than is already otherwise possible is that they literally can’t imagine what else to spend their money on. They are vacuous consumers by design.



  • Yeah, this lines up with what I have heard, too. There is always talk of new models, but even the stuff in the pipeline not yet released isn’t that differentiated from the existing stuff.

    The best explanation of strawberry is that it isn’t any particular thing; it’s a marketing and project framing, both internal and external, that amounts to… cost optimizations and hype driving. Shift the goalposts and tell two stories. One is that if it just gets affordable enough, genAI in a loop really can do everything (the more modest version: when genAI gets cheap enough, by several means, it’ll have several more modest and generally useful use cases, and it won’t have to be so legally grey). The other is that we’re already there, and one day you’ll wake up and your brain won’t be good enough to matter anymore, or something.

    Again, this is apparently the future of software releases. :/