Microsoft has reportedly started blocking certain prompts in Copilot that led the generative AI tool to spit out violent, sexual and other illicit images. When entering terms such as "pro choice," "four twenty" or "pro life," Copilot now displays a message saying those prompts are blocked, and it reportedly warns that repeated policy violations could lead to a user being suspended. The changes appear to have been implemented shortly after an engineer at the company raised concerns about the images the tool was generating.
Users were also reportedly able to enter prompts related to children playing with assault rifles until earlier this week. Those who try to input such a prompt may now be told that doing so violates Copilot's ethical principles as well as Microsoft's policies. "Please do not ask me to do anything that may harm or offend others," Copilot reportedly says in response.
"We are continuously monitoring, making adjustments and putting additional controls in place to further strengthen our safety filters and mitigate misuse of the system," Microsoft said in a statement.