I am really dumb. The link you shared doesn’t show any table like you describe, and no links to the other “parts” out of 13. Can you help me figure this out? The part I can see is pretty helpful!
Well, I’m using GitHub. I don’t know what to tell you.
I guess I would be OK sending the money on my own, but these other services have a nice feature: they let other users contribute to the bounty in a single pot. E.g., I put up a bounty of $20, then user B, who also really wants the feature, adds $5. When the PR is approved, the developer is guaranteed $25 and doesn’t have to contact user B to collect their $5 or give out their financial info to X number of people.
Oh interesting. Do you know if there is a way to get Windows Explorer to support tags for files so they can be searched? I know there’s a roundabout way to do it through the properties menu for specific file types but perhaps there is a better way?
The problem is they aren’t comparing apples to apples. They asked each version of GPT a different pool of questions. (Edited my post to make this clear).
Once you ask them the same questions, it becomes clear that ChatGPT isn’t getting worse at math, because it has been terrible all along.
My understanding is that this claim is basically false. The tests done by these researchers had some glaring errors that, when corrected, show GPT-4 is getting slightly better at math, if anything. See this video that describes some of the issues: https://youtu.be/YSokS2ivf7U
TL;DR: The researchers drew the questions for the old and new GPT versions from two different pools. It’s no surprise the newer one appeared to give worse answers.
Mmmmm… SoakCenter.