Whenever AI is mentioned, lots of people in the Linux space immediately react negatively. Creators like TheLinuxExperiment on YouTube always feel the need to add a disclaimer that “some people think AI is problematic” or something along those lines if an AI topic is discussed. I get that AI has many problems, but at the same time its potential is immense, especially as an assistant on personal computers (just look at what “Apple Intelligence” seems to be capable of). Gnome and other desktops need to start working on integrating FOSS AI models so that we don’t become obsolete. Using an AI-less desktop may be akin to hand-copying books after the printing press revolution. If you think of specific problems, it is better to point them out and try to think of solutions, not reject the technology as a whole.
TLDR: A lot of Luddite sentiment around AI in the Linux community.
Reminder that we don’t even have AI yet, just machine learning models, which are not the same thing despite wide misuse of the term AI.
Have you mentioned that in gaming forums as well when they talked about AI?
AI is a broad term and can mean many different things; it does not need to mean ‘true’ AI.
Yes, lots of people are using this argument when reacting negatively.
Well, it’s kind of more of a fact than an argument, but do go on!
Well, not at all. What a word means is not defined by what you might think. When the majority starts to use a word for something and that sticks, it gets adopted. That happens all the time, and I have read articles about it many times, even about our current predicament. Language evolves. Meanings change. And yes, AI today includes what is technically machine learning. Sorry friend, that’s how it works. Sure, you can be the grumpy drunk at a bar complaining that this is not strictly AI by some definition while the rest of the world rolls their eyes and proceeds to more meaningful debates.
Words have meaning and, sure, they can be abused and change meaning over time, but let’s be real here: AI is a hype term with no basis in reality. We do not have AI, and we aren’t even all that close. You can make all the ad hominem comments you want, but at the end of the day, the terminology comes from ignorant figureheads hyping shit up for profit (at great environmental cost too; LLMs, aka “AI,” take up a lot of power while yielding questionable results).
Kinda sounds like you bought into the hype, friend.
You missed the point again, oh dear! Let me try again in simpler terms: you yourself don’t define words; how they are used by the public does. So if the world calls it AI, then the word means what everybody means when they use it.
This is how words come to be, evolve, and in the end get put in the dictionary. Nobody cares what you think. AI today includes ML. Get over it.
Nice try with the deflection attempts, but I really don’t care about them. I’m only here to teach you where words come from and to tell you that the article is written about you.
Also that I’m out of time for this. Bye.
That’s just nitpicking. Everyone here knows what we mean by AI. Yes, it refers to LLMs.
Reminds me of Richard Stallman always interjecting to say “actually it’s GNU/Linux, or as I like to say, GNU plus Linux”…
Well no, Mr. Stallman, it’s actually GNU + Linux + Wayland + systemd + Chromium and whatever other software you have installed. Are you happy now??
So when we actually do have AI, what are we supposed to call it? The current use of the term “AI” is too ambiguous to be of any use.
Nothing was ever wrong with calling them “virtual assistants” — at least with that term you’re conditioned to have a low bar of expectations. So if it performs past expectations, you’ll be excited, lol.
Honestly, what we have now is AI. As in, it is not intelligent; it just tries to mimic intelligence.
Digital Intelligence, if we ever achieve it, would be a more accurate name.
Look, the naming ship has sailed and sunk somewhere in the middle of the ocean. I think it’s time to accept that “AI” just means “generative model” and what we would have called “AI” is now more narrowly “AGI”.
People call videogame enemies “AI”, too, and it’s not the end of the world, it’s just imprecise.
As someone who frequently interacts with the tech illiterate: no, they don’t. This sudden rush to put weighted text-hallucination tables into everything isn’t that helpful. The hype feels like self-driving cars or 3D TVs for those of us old enough to remember those. The potential for damage is much higher than with either of those two preceding fads, and cars actually killed people. I think many of us are expressing a healthy level of skepticism toward the people who need to sell us the next big thing, and it is absolutely warranted.
It’s exactly like self-driving: everyone is saying this is the time we are going to get AGI. But it will be like everything else: overhyped and under-delivered. Sure, it will have its uses, companies will replace people with it, and the enshittification will continue.
Doubt it. Maybe Microsoft can fuck it up somehow but the tech is here to stay and will do massive good.
You can doubt all you like, but we keep seeing the training data leaking out with passwords and personal information. This problem won’t be solved by the people who created it, since they don’t care, and fundamentally the technology will always show that lack of care. FOSS ones may do better in this regard, but they are still datasets without context. That’s the crux of the issue. The program or LLM has no context for what it says. That’s why you get these nonsensical responses telling people that killing themselves is a valid treatment for a toothache. Intelligence is understanding. The “AI” or LLM or, as I like to call them, glorified predictive text bars, doesn’t understand the words it is stringing together, and most people don’t know that due to flowery marketing language and hype. The threat is real.
Not to mention the hallucinations. What a great marketing term for “it’s fucking wrong.”