Problem is, you cannot trust it’s not hallucinating these stats
And even if it’s showing the correct number, you can’t be sure how trustworthy the source is.
This applies to any information though, it’s got nothing to do with LLMs specifically.
Not really, no. Sources of information gain a reputation as time goes on. So, even though you should still check with multiple sources, you can sort of know if a certain bit of information is likely to be correct or not.
On the other hand, LLMs will quote different sources, and sometimes they will only provide them if you ask. Even then they can hallucinate and quote a source that doesn’t actually exist, so there’s that as well.
At least it’s citing sources and you can check to make sure. And from my anecdotal evidence it has been pretty good so far. It also told me on some occasions that the queried information was not found in its sources instead of just making something up. It’s not perfect, for sure, and it’s always better to do manual research, but for a first impression and to find some entry points I’ve found it useful so far.
The problem is that you need to check those sources to make sure it’s not just making up bullshit, and at that point you didn’t gain anything from the genAI.
As I said the links provide some entry points for further research. It’s providing some use to me because I don’t need to check every search result. But to each their own and I understand the general scepticism of generative “AI”
If you don’t check every source, it might be just bullshitting you. There are people who followed your approach and got into hot shit with their bosses and judges.
There is absolutely value in something compiling sources for you to personally review. Anyone who cannot use AI efficiently is analogous to someone who can’t see the utility in a graphing calculator. It’s not magic, it’s a tool. And tools need to be used precisely, and for appropriate purposes.
If my plumber fucks up, I don’t blame his wrench. If my lawyers don’t vet their case work, I blame them.
It’s an LLM. Odds are it’s hallucinating the sources and they don’t even exist.
Know what does compile sources for you which are guaranteed to exist and be related to what you’re looking for…? A good old non-LLM-infected search engine.
If my plumber replaces their wrench for a rabid gerbil claiming it’ll be just as good I’m definitely changing plumbers.
Spoken like someone who never even tried to use an LLM and just parrots the bad things they hear online.
Lemmy is full of LLM haters, I get where they’re coming from but they take it to the extreme every single time.
I’m not an LLM hater. I run one of the biggest FOSS genAI services. It’s because of that that I know their limitations.
You said that you’re not going to check every search result, which implies you’re not checking every cited source either, which will lead you to eventually believe some LLM bullshit. And if you’re using an LLM just to compile sources that you then check yourself, it’s no different from a search engine without an LLM.
Aren’t the sources just the same results the search returns? Or at least the top results?
When I query an AI I always end with “provide sources and bibliography for your reply”. That seems to get better replies.
That being said, I can’t trust MKBHD is not hallucinating either.