Just wrapping up a trip to China… I kept telling my partner how much every restaurant smelled like the 80s. So glad it’s not like this any more in the States.
Touché!
And you didn’t even proofread the output…
I’ve used crewai and autogen in production… And I still agree with the person you’re replying to.
The two main problems with agentic approaches I’ve found so far:
One mistake or hallucination propagates through the rest of the agentic task. I’ve even tried adding a QA agent to catch these, but those agents aren’t reliable either, which leads to the main issue:
It’s very expensive to run and rerun agents at scale. Because each agent can call another agent, the total number of calls can grow exponentially. A colleague of mine once ran a job that cost $15 for what should have been a simple task.
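To make the scaling point concrete, here’s a back-of-the-envelope sketch (my own toy model, not anything from crewai or autogen): if each agent can delegate to a few sub-agents, and those can delegate further, the call count is a geometric series in the delegation depth.

```python
# Toy model of agent-to-agent delegation cost.
# Assumption: every agent delegates to `branching` sub-agents,
# down to `depth` levels, so calls form 1 + b + b^2 + ... + b^depth.

def total_llm_calls(branching: int, depth: int) -> int:
    """Count LLM calls in a full delegation tree."""
    return sum(branching ** level for level in range(depth + 1))

# Even modest settings add up fast:
print(total_llm_calls(branching=3, depth=4))  # 1 + 3 + 9 + 27 + 81 = 121 calls
```

And that’s before any retries from a QA agent, each of which can kick off its own subtree.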
One last consideration: the current LLM providers are clearly aware of these issues, or they wouldn’t be as concerned with finding “clean” data to scrape from the web rather than using agents to train agents.
If you’re using crewai btw, be aware there is some builtin telemetry with the library. I have a wrapper to remove that telemetry if you’re interested in the code.
Personally, I’m kind of done with LLMs for now and have moved back to my original machine learning pursuits in bioinformatics.
This is great! Also… I highly recommend DEFCON for something similar – there is a Furry parade among other shenanigans.