From Re-evaluating GPT-4’s bar exam performance (linked in the article):
First, although GPT-4’s UBE score nears the 90th percentile when examining approximate conversions from February administrations of the Illinois Bar Exam, these estimates are heavily skewed towards repeat test-takers who failed the July administration and score significantly lower than the general test-taking population.
Ohhh, that is sneaky!
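To make the sneakiness concrete, here's a toy calculation (with invented score distributions, not the paper's actual data) showing how norming the same score against a weaker February pool inflates the percentile:

```python
import random

random.seed(0)

# Hypothetical UBE score distributions, invented purely for illustration.
# Repeat test-takers who failed in July tend to score lower, so the
# February pool is weaker than the general test-taking population.
july_pool = [random.gauss(280, 25) for _ in range(10_000)]
february_pool = [random.gauss(260, 25) for _ in range(10_000)]

def percentile_rank(score, pool):
    """Percentage of the pool scoring below `score`."""
    return 100 * sum(s < score for s in pool) / len(pool)

gpt4_score = 298  # GPT-4's reported UBE score (out of 400)
print(f"vs July pool:     {percentile_rank(gpt4_score, july_pool):.0f}th percentile")
print(f"vs February pool: {percentile_rank(gpt4_score, february_pool):.0f}th percentile")
# The identical score lands at a visibly higher percentile against the
# weaker February pool -- exactly the skew the paper is pointing out.
```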
What I find delightful about this is that I already wasn’t impressed! Because, as the paper goes on to say:
Moreover, although the UBE is a closed-book exam for humans, GPT-4’s huge training corpus largely distilled in its parameters means that it can effectively take the UBE “open-book”
And here I was thinking it not getting a perfect score on multiple-choice questions was already damning. But apparently it doesn’t even get a particularly good score!
Why is that a criticism? This is how it works for humans too: we study, we learn the stuff, and then try to recall it during tests. We’ve been trained on the data too; neither a human nor an AI would be able to do well on the test without learning the material first.
This is part of what makes AI so “scary”: it can basically know so much.
Because a machine that “forgets” stuff it reads seems rather useless… and considering it was a multiple-choice exam and, as a machine, ChatGPT had the book entirely memorized, it should have scored perfectly almost every time.
ChatGPT had the book entirely memorized
I feel like this exposes a fundamental misunderstanding of how LLMs are trained.
Don’t anthropomorphise. There is quite a difference between a human and an advanced lookup table.
Well… I do agree with you, but human brains are basically big prediction engines that use lookup tables (experience) to navigate life. Obviously a huge simplification, and LLMs are nowhere near humans, but it is quite a step in that direction.
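Not that a bigram counter is anything like an LLM, but here’s a toy sketch (everything invented for illustration) of the difference between an actual lookup table and a prediction engine that only keeps lossy statistics about what it read:

```python
from collections import Counter, defaultdict

# An "advanced lookup table": exact recall of what it stored, nothing else.
lookup = {"the cat sat on the": "mat", "once upon a": "time"}
print(lookup.get("the dog sat on the"))  # None -- zero generalization

# A prediction engine: statistics over training text rather than stored
# records. It can guess at inputs it never saw verbatim, but by the same
# token it cannot reproduce its training data exactly.
counts = defaultdict(Counter)
training_text = "the cat sat on the mat . the dog sat on the rug .".split()
for prev, nxt in zip(training_text, training_text[1:]):
    counts[prev][nxt] += 1

def predict(word):
    """Most likely next word according to the statistics."""
    return counts[word].most_common(1)[0][0]

print(predict("on"))  # "the" -- a statistical guess, not a retrieved record
```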
@phoenixz @Soyweiser “Let’s redefine what it means to be human, so we can say the LLM is human” have you bumped your head?
Though making an unreliable intern is amazing and was impossible 5 years ago…
thank fuck sama invented the concept of doing a shit job
I mean, it’s not shit at everything; it can be quite useful in the right context (GitHub Copilot is a prime example). Still, it doesn’t surprise me that these first-party LLM benchmarks are full of smoke and mirrors.
citation needed
That GitHub Copilot and friends are useful? I would argue that their utility is rather subjective, but there are indications that they improve developer productivity.
I’m unsure if you’ve used tools like GH Copilot before, but it primarily operates through “completions” (“spicy autocorrect” in its truest form) rather than a chatbot-like interface. It’s mostly good for filling out boilerplate and code that has a single obvious solution; not game-changing intelligence by any means, but useful in relieving the programmer of various menial tasks.
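For anyone who hasn’t tried it, the interaction looks roughly like this (a hypothetical snippet, not an actual Copilot transcript): the programmer types a signature and docstring, and the tool suggests the obvious body as greyed-out text to accept with Tab:

```python
# Typed by the programmer: a name, a signature, and a docstring.
def fahrenheit_to_celsius(temp_f: float) -> float:
    """Convert a temperature from degrees Fahrenheit to Celsius."""
    # Suggested by the completion engine: boilerplate with one obvious answer.
    return (temp_f - 32) * 5 / 9
```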
May I ask, what evidence are you hoping to see in particular?
https://awful.systems/comment/1286383
I look forward to the money that I’ll make cleaning up the mess you provide people with
all in all: underwhelming. I remain promptdubious.
I know I’m six months late to the party but how do you like “promptcritical”?