LLMs spew hallucinations by their nature. But what if you could get the LLM to correct itself? Or if you just claimed your LLM could correct itself? Last week, Matt Shumer of AI startup HyperWrite/…
another valiant attempt to get “promptfondler” into more common currency
Just one more model bro