Deep learning has always been classified as AI. Some consider pathfinding algorithms to be AI. AI is a broad category.
AGI is the acronym you’re looking for.
This feels to me like the LLM misinterpreted it as some kind of fictional villain talk and started to autocomplete it.
Could also be the model simply breaking. There was a time when Sydney (Bing AI, or whatever they call it now) had to be constrained to 10 messages per conversation, with some sort of supervisor model sitting on top of it, because it would occasionally throw a fit or start threatening the user for no reason.
The author’s suggesting that smart people are more likely to fall for cons: they try to dissect the con, can’t find the specific method being used, and conclude there must not be one, supposedly because they consider themselves infallible.
I disagree with this take. I don’t see how that thought process is exclusive to people who are, or consider themselves to be, smart. I think the author is tying himself into a knot to argue that smart people are actually the dumb ones, likely in preparation to drop an opinion that most experts in the field will disagree with.
The paracausal tarrasque seems like a genuinely interesting concept. Gives me False Hydra vibes.
Was this ever a thing? I have never seen or heard anyone use “gen AI” to mean AGI. In fact, I can’t find a single instance of such usage.