One of the use cases I thought it was reasonable to expect from ChatGPT and friends (LLMs) was summarising. It turns out I was wrong. ChatGPT isn’t summarising at all; it only looks like it is…
I don’t know. Maybe the endorsement of LLMs and “AIs” is a way to encourage people to create datasets, which can then be used for other things.
Also, this technology is good for one thing: flagging people for certain political sympathies, or for their likelihood to behave a certain way, based on their other behaviour.
As if it were a technology for making kill lists for fascists, if you’ll excuse my alarmism. Maybe nobody will come at night in black leather to take you away, but you won’t get anywhere near posts affecting serious decisions. An almost bloodless worldwide fascist revolution.