One of the use cases I thought was reasonable to expect from ChatGPT and Friends (LLMs) was summarising. It turns out I was wrong. ChatGPT isn’t summarising at all, it only looks like it…
Cheaper for now, since venture capitalist cash is paying to keep those extremely expensive servers running. The AI experiments at my work (automatically generating documentation) have about an 80% reject rate - sometimes they’re not right, sometimes they’re not even wrong - and having to review it all isn’t really a time improvement over just doing the work.
No doubt there are places where AI makes sense; a lot of those places seem to be in enhancing the output of someone who is already very skilled. So let’s see how “cheaper” works out.
At a consulting job I did recently, they got an AI down to a 25% rejection rate on a specific task. I thought that was pretty good, and the team working on it said there was no way they could do better; this was the absolute best.
So they went and asked the customers whether they would be interested in this feature and how much they would be willing to pay. The response was that nobody was willing to pay for the feature at all, and that a 25% rejection rate was too high.
The reason customers gave was that this still means a human has to check the results, so the human is still in the loop. And because the human basically has to do most, if not all, of the work to check a result, it didn’t really save much time. And knowing their people, they would probably slack on the checks, since most results are correct, which then leads to incorrect data going forward. This was simply not something customers wanted: they want to replace the humans and have it do better, not worse.
And paying for it is out of the question, because so many companies are offering AI for free or close to free. Plus, they see it as a cost-saving measure, and paying for it means it has to save even more time to be worth it.
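For what it’s worth, the time argument is easy to put rough numbers on. Here’s a minimal back-of-envelope sketch in Python, where every figure (task time, review time, redo time) is a made-up assumption for illustration; only the 25% rejection rate comes from the anecdote above:

```python
# Back-of-envelope: does a 25% rejection rate save any time at all?
# All times below are invented for illustration; only the structure matters.

manual_minutes = 10.0  # a human just doing the task themselves
review_minutes = 8.0   # properly checking one AI result (most of the work)
redo_minutes = 10.0    # redoing a rejected result by hand
reject_rate = 0.25     # the rejection rate from the anecdote

# Every AI result must be reviewed; rejected ones must also be redone.
expected_ai_minutes = review_minutes + reject_rate * redo_minutes

print(f"manual:      {manual_minutes:.1f} min per task")
print(f"AI-assisted: {expected_ai_minutes:.1f} min per task")
# -> 10.0 vs 10.5 minutes here: if checking costs nearly as much as
#    doing the work, the "assistance" can actually be a net loss.
```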
So they put the project on ice for now, hoping the technology improves. In the next customer poll they did, AI was the most requested feature. This caused some grumbles.
I think the best way to use “AI” for work is together with a human, to improve that human’s output, because the human has learned the skill of using “AI” to work more efficiently. This is happening at my workplace right now. More and more coworkers are learning when it is the right moment to start writing a prompt.
I see a future (or maybe I just hope for one) where a brilliant mind finds an efficient way to train “AI” by simply working with it, and we get so efficient that we have more time for ourselves…
We gotta fight for that, I think
I think a lot of people will have to learn the hard way that AI isn’t what it’s cracked up to be.
Saving this comment for posterior
Posterity?
https://www.merriam-webster.com/dictionary/posterity
Why would I save something for posterity when I could save it for posterior?
Butt
I use AI often as a glorified search engine these days. It’s actually kinda convenient for getting ideas to look into further when I encounter a problem to solve. But would I just take some AI output without reviewing it? Hell no 😄
People always assume that the current state of generative AI is the end point. Five years ago nobody would have believed what we have today. In five years it’ll all be a different story again.
People always assume that generative AI (and technology in general) will continue improving at the same pace it has been. They always assume that there are no limits on the number of parameters, that there’s always more useful data to train it on, and that things like physical limits on electricity infrastructure, compute resources, etc., don’t exist. In five years generative AI will have roughly the same capability it has today, barring massive breakthroughs that result in a wholesale pivot away from LLMs. (More likely, in five years it’ll be regarded similarly to how cryptocurrency is today, because once the hype dies down and the VC money runs out, the AI companies will have to jack up prices to a level where it’s economically unviable to use in most commercial environments.)
To add to this, we’re going to run into the problem of garbage in, garbage out.
LLMs are trained on text from the internet.
Currently, a massive amount of text on the internet is coming from LLMs.
This creates a cycle of models getting trained on data sets that increasingly contain output generated by older models.
The most likely outlook is that LLMs will get worse as the years go by, not better.
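That feedback loop is easy to caricature in a few lines. Here’s a toy sketch (not anyone’s actual training pipeline; the Gaussian “model” and all parameters are stand-ins) of what tends to happen when each generation learns only from the previous generation’s output:

```python
# Toy sketch of the loop above: a "model" (here just a Gaussian) is fitted
# to data, then the next generation trains only on the previous one's output.
import numpy as np

rng = np.random.default_rng(42)
samples = rng.normal(loc=0.0, scale=1.0, size=100)  # stand-in for human text

for gen in range(1, 1001):
    mu, sigma = samples.mean(), samples.std()
    # Each new generation sees only what the last generation produced.
    samples = rng.normal(loc=mu, scale=sigma, size=100)
    if gen % 200 == 0:
        print(f"generation {gen:4d}: spread = {sigma:.4f}")
# The spread tends to drift toward zero over many generations: the variety
# in the original data erodes when models keep learning from model output.
```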
In five years it’ll all be a different story again.

You don’t know that. Maybe it will take 124 years to make the next major breakthrough, and until then all that will happen is people tinkering around and finding that improving one thing makes another thing worse.