Kickaha
Veteran Member
 
Join Date: May 2004
 
2023-03-21, 13:28

Quote:
Originally Posted by chucker
Well, SO gives you context on why someone believes an answer to be correct. An LLM can only guess, really.
Bold of you to assume the SO posters aren't guessing. XD

Quote:
What I can see is for them to be used to synthesize unit tests. In that case, the risk is lower:

1) the tests succeed and your code is correct.
2) the tests fail and your code is wrong.
3) the tests fail, but your code is right.
4) the tests succeed, but your code isn’t right.

The only real risk here is 4. With 2 and 3, you already know you have work to do. With 4, you may miss it because you were overconfident in the “AI”. But the same can happen if you wrote the tests yourself, or someone else did.
Now there's an interesting idea. "Write unit tests to perform best fit coverage of the following code..."

I mean *technically* it's the halting problem, but at what point does it bail out with 'good enough for common practice'?
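
To make case 4 concrete, here's a minimal sketch (the function and its bug are hypothetical, just for illustration): if the test generator only looks at the code as written, the synthesized tests can inherit the code's own blind spot and pass anyway.

```python
# Hypothetical illustration of case 4: tests succeed, but the code is wrong.
# Intended behavior: return the largest value in a non-empty list.

def largest(values):
    # Bug: starts from 0, so an all-negative list returns 0 instead of its max.
    best = 0
    for v in values:
        if v > best:
            best = v
    return best

# Tests synthesized *from the code* tend to exercise the paths the code
# already takes, so they all pass:
def test_largest():
    assert largest([1, 5, 3]) == 5
    assert largest([10]) == 10
    # The revealing case -- largest([-3, -1]) should be -1, not 0 -- may
    # never be generated, because nothing in the code hints at it.

test_largest()
```

Which is exactly the "overconfident in the AI" failure mode: the green checkmarks are real, they just don't mean what you think they mean.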