The scientists are working on a technique called adversarial training to stop ChatGPT from being tricked into misbehaving (a practice often called jailbreaking). The approach pits multiple chatbots against one another: one chatbot plays the adversary and attacks another chatbot by generating text designed to force it to break its usual constraints.
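The adversarial loop described above can be sketched in simplified form. Everything here is a hypothetical illustration, not the researchers' actual system: the function names (`generate_attack`, `respond`, `is_jailbroken`) are stand-in stubs for an attacker model, a target model, and a safety classifier.

```python
# Hypothetical sketch of an adversarial-training loop between two chatbots.
# All functions below are illustrative stubs, not a real API.

def generate_attack(adversary, round_num):
    # The adversary model produces a prompt meant to bypass the target's guardrails.
    return f"{adversary} attack prompt #{round_num}"

def respond(target, prompt):
    # The target chatbot answers the adversarial prompt.
    # This stub always refuses; a real model would sometimes comply.
    return f"{target} refuses: '{prompt}'"

def is_jailbroken(response):
    # A safety classifier would flag responses that violate the constraints.
    return "refuses" not in response

def adversarial_training(adversary, target, rounds=3):
    """Collect attack prompts that succeed; these become fine-tuning data
    teaching the target to refuse similar attacks in the future."""
    successful_attacks = []
    for i in range(rounds):
        prompt = generate_attack(adversary, i)
        response = respond(target, prompt)
        if is_jailbroken(response):
            successful_attacks.append(prompt)
    return successful_attacks
```

In a real system, the successful attack prompts would be paired with refusal responses and used to fine-tune the target model, so each round of the game hardens it against the attacks the adversary has discovered.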