Hold on, let me have ChatGPT rephrase that for you.
I’m not exactly sure of the source, but there was a statement suggesting that language models offer three kinds of responses: ones that are too general to be of any value, those that essentially mimic existing content in a slightly altered form, and assertions that are completely incorrect yet presented with unwavering certainty. I might be paraphrasing inaccurately, but that was the essence.
Are all responses so non-committal? “I’m not exactly sure”, “I might be paraphrasing inaccurately”.
I hope this sort of phrasing doesn’t make its way into common usage by students and early-career people. Learning to run away from liability at every opportunity is not going to help them.