silence7@slrpnk.net to Technology@lemmy.world · English · 1 year ago
When A.I.'s Output Is a Threat to A.I. Itself | As A.I.-generated data becomes harder to detect, it's increasingly likely to be ingested by future A.I., leading to worse results. (www.nytimes.com)
49 comments
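The headline's feedback-loop claim (models trained on earlier models' output get worse over generations) can be sketched with a toy simulation. This is an illustrative assumption, not the article's method: each "generation" fits a Gaussian to a small synthetic sample drawn from the previous generation's fitted model, and estimation noise compounds until the learned distribution collapses.

```python
import random
import statistics

# Toy "model collapse" illustration: generation 0 is the real data
# distribution; every later generation trains only on a finite sample
# produced by the previous generation's model. Sampling error compounds,
# and the fitted spread (sigma) tends to shrink toward zero.
random.seed(0)
mu, sigma = 0.0, 1.0        # generation 0: the "real" distribution
n = 10                      # small synthetic training set per generation

for generation in range(300):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    mu = statistics.fmean(sample)     # next model trains on synthetic data
    sigma = statistics.stdev(sample)

# After many generations, sigma has typically collapsed far below
# the original 1.0 -- the model has "forgotten" the data's diversity.
print(mu, sigma)
```

The shrinking-variance effect here is a stand-in for the richer degradation the article describes (loss of rare and diverse outputs), chosen because it is easy to see in a few lines of standard-library code.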
daniskarma@lemmy.dbzer0.com · English · 1 year ago
If AI feedback starts going the other way around, we should be REALLY scared. Imagine it just became sentient and superintelligent and read all that we are saying about it.

    doodledup@lemmy.world · English · 1 year ago
    This is completely unrelated. Besides, how does AI suddenly become sentient?

        It was a joke.

leftzero@lemmynsfw.com · English · 1 year ago
LLMs are as close to real AI as ELIZA was (i.e., nowhere even remotely close).