ylai@lemmy.ml to Technology@lemmy.world · English · 1 year ago
VR Headsets Are Approaching the Eye’s Resolution Limits (spectrum.ieee.org)
36 comments
KairuByte@lemmy.dbzer0.com · 1 year ago
> maybe the whole damn thing is outsourced to ChatGPT now, who the fuck knows.

I don’t understand why so many people assume an LLM would make glaring errors like this…

drislands@lemmy.world · 1 year ago
…because they frequently do? Glaring errors are, like, the main thing LLMs produce besides hype.

KairuByte@lemmy.dbzer0.com · 1 year ago
They make glaring errors in logic, and confidently state things that are not true. But their whole “deal” is writing proper sentences based on predictive models. They don’t make mistakes like the excerpt you highlighted.

drislands@lemmy.world · 1 year ago
Y’know what, that’s a fair point. Though I’m not the original commenter from the top, heh.

KairuByte@lemmy.dbzer0.com · 1 year ago
Ah, apologies. I’m terrible with tracking usernames; I’ll edit for clarity.

Garbanzo@lemmy.world · 1 year ago
I’m imagining that the first output didn’t cover everything they wanted, so they tweaked it, pasted the results together, and fucked it up.

Zammy95@lemmy.world · 1 year ago
I think he was being sarcastic lol. I…hope