chucker
 
Join Date: May 2004
Location: near Bremen, Germany
2023-03-20, 20:32

Quote:
Originally Posted by Kickaha View Post
It's a surprisingly good substitute for stackoverflow surfing, because that's effectively what it's doing - putting together most likely next tokens in a programming language based on copious examples used for training, just like it does for English.

Step outside the common tasks, though, and things get a bit dicey.
Well, SO gives you context on why someone believes an answer to be correct. An LLM can only guess, really.

What I can see is for them to be used to synthesize unit tests. In that case, the risk is lower:

1) the tests succeed and your code is correct.
2) the tests fail and your code is wrong.
3) the tests fail, but your code is right.
4) the tests succeed, but your code isn’t right.

The only real risk here is 4. With 2 and 3, you already know you have work to do. With 4, you may miss the bug because you were overconfident in the “AI”. But the same can happen if you wrote the tests yourself, or if someone else did.
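Case 4 can be sketched with a tiny hypothetical example in Python: a generated test that only exercises the happy path passes even though the implementation is wrong. (Both the `clamp` function and the test are invented for illustration.)

```python
# Hypothetical buggy implementation: meant to clamp value into [lo, hi],
# but it never enforces the lower bound.
def clamp(value, lo, hi):
    if value > hi:
        return hi
    return value  # bug: values below lo are returned unchanged

# A generated test that only covers in-range and upper-bound inputs
# passes, giving false confidence (case 4).
def test_clamp():
    assert clamp(5, 0, 10) == 5
    assert clamp(15, 0, 10) == 10

test_clamp()  # passes despite the bug
# A lower-bound case would expose it: clamp(-3, 0, 10) returns -3, not 0.
```

The point isn’t that generated tests are useless, just that passing tests only tell you the inputs they actually cover behave as expected.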