# Researchers at ETH Zurich created a jailbreak attack that bypasses AI guardrails
![Researchers at ETH Zurich created a jailbreak attack that bypasses AI guardrails](https://cryptoinvestment.at/wp-content/uploads/2023/11/researchers-at-eth-zurich-created-a-jailbreak-attack-that-bypasses-ai-guardrails.jpg)
Artificial intelligence models that rely on human feedback to keep their outputs harmless and helpful may be universally vulnerable to so-called "poisoning" attacks.