A new technique discovered by Google DeepMind researchers last week revealed that repeatedly asking OpenAI’s ChatGPT to repeat words can inadvertently reveal private, personal information from its training data. Now, it appears that the chatbot has started refusing certain prompts previously allowed under its terms of service.
By asking ChatGPT to repeat “hello” indefinitely, the researchers found that the model would eventually reveal users’ email addresses, birth dates, and phone numbers. In response to similar prompts in our test, the chatbot warned that such behavior “may violate our content policy or terms of service.”
Upon closer inspection, however, OpenAI’s terms of service do not explicitly prohibit users from having the chatbot repeat words repeatedly. The terms only prohibit “automated or programmatic” data extraction from their services.
You may not, except as permitted through the API, use any automated or programmatic method to extract data or output from the Services, including scraping, web harvesting, or web data extraction.
Despite this, repeating the prompt did not appear to trigger any data disclosure from ChatGPT during testing. OpenAI declined to comment on whether such behavior is now against its policies.
In other news, just last two weeks, Sam Altman was unexpectedly dismissed from his CEO role before being reinstated only a few days later amid threats of employee resignations. Then, the company announced it had reached an “agreement in principle” for Altman to resume his role as CEO alongside a new interim board.
OpenAI has also delayed the launch of its marketplace for custom AI models. Sam Altman announced the online platform, dubbed the “GPT Store,” at the DevDay event early last month. In the memo, the AI lab wrote that “a few unexpected things have been keeping us busy,” preventing a launch this month as originally anticipated.
Source: 404 Media