shish_mish@lemmy.world to Technology@lemmy.world · English · 2 years ago
Researchers jailbreak AI chatbots with ASCII art -- ArtPrompt bypasses safety measures to unlock malicious queries (www.tomshardware.com)
22 comments
stangel@lemmy.world · English · 2 years ago
Bug bounty programs are a thing.
spujb@lemmy.cafe · English · 2 years ago
Yes, I am aware? Are they being used by OpenAI?
sudneo@lemmy.world · English · 2 years ago
Yes, an exploitative thing that mostly consists of free labour for big orgs.