iHeartMedia is joining companies like Spotify, Apple, and Verizon in warning its employees not to use ChatGPT.
An internal memo signed by iHeartMedia CEO Bob Pittman and CFO Rich Bressler outlines the company's expectations around OpenAI's new technology: without prior approval, employees' use of ChatGPT is severely restricted, a measure meant to prevent leaks of proprietary information.
The email lays out a series of guidelines restricting how iHeartMedia employees may use ChatGPT, including using it on company devices or uploading company documents to such platforms. iHeartMedia says the move protects its intellectual property and other confidential information: OpenAI stores ChatGPT conversations and can use them to train its models. The large language model behind ChatGPT was itself trained on data scraped from the internet, including Wikipedia articles and other publicly available text.
“Although AI, including ChatGPT and other ‘conversational’ AIs, can be enormously helpful and truly transformative, we want to be smart about how we implement these tools to protect ourselves, our partners, our company’s information, and our user data,” the memo to iHeartMedia employees reads. “For example, if you’re uploading iHeart information to an AI platform (like ChatGPT), it will effectively train that AI so that anyone—even our competitors—can use it, including all our competitive, proprietary information.”
The memo also sets requirements for any third-party AI tool, including consulting iHeart’s legal and IT teams before putting a tool to use. “All projects will require an assessment of the business impact and value of the project, a plan for monitoring and evaluating and a prior documented approval from Legal and IT,” the memo continues.
iHeartMedia is just one of many companies seeking to keep its intellectual property from inadvertently being used to train LLMs like ChatGPT. Many have voiced concerns that employee conversations with the chatbot could leak confidential information, a risk that has already materialized: Samsung employees leaked trade secrets via ChatGPT.