The head of OpenAI’s trust and safety group, Dave Willner, departs the company

Earlier today, the Biden Administration announced that seven tech companies that are developing generative AI systems pledged to adhere to a number of standards for making their AI products safe, trustworthy, and accountable.

However, just before that joint announcement, one of the companies involved, OpenAI, suffered a loss in that particular area. In a Thursday post on LinkedIn, Dave Willner revealed he was leaving his position as OpenAI’s head of trust and safety. He was first hired in that role in February 2022, several months before OpenAI officially launched its ChatGPT chatbot.

In his post, Willner did not indicate that he had issues with the job itself. Rather, he felt that it was simply taking too much time away from his family. He stated:

OpenAI is going through a high-intensity phase in its development — and so are our kids. Anyone with young children and a super intense job can relate to that tension, I think, and these past few months have really crystallized for me that I was going to have to prioritize one or the other.

While he will no longer be a full-time employee of OpenAI, Willner said he would be an advisor to the company going forward. There’s no word on who might replace Willner as the new head of trust and safety.

OpenAI and its ChatGPT chatbot have come under increasing scrutiny over various issues. Lawsuits have been filed accusing OpenAI of copyright violations for illegally scraping novels and books to create summaries. Another lawsuit claims that both OpenAI and Microsoft's Bing Chat trained their chatbots on data without the consent of the people who created it.

More recently, the US Federal Trade Commission has launched a probe to see if OpenAI and ChatGPT have violated any consumer protection laws and regulations.
