So you’re describing a reasoning model, which is 1) still based on statistical token sequences and 2) trained on another tool (logic and discourse) that it uses to arrive at the truth. It’s a very fallible process. I can’t even begin to count the number of times a reasoning model has given me a completely false conclusion. Research shows that even the most advanced LLMs give incorrect answers as much as 40% of the time, IIRC. Which reminds me of a really common way that humans arrive at truth, one that LLMs aren’t capable of:

Fuck around and find out. Also known as the scientific method.