After Samsung’s temporary ban on employees using ChatGPT and AI chatbots in general, Apple is following suit and banning employees from using any AI chatbots. The reason behind this, although not confirmed, is that Apple is developing its own AI platform and is concerned about employees accidentally leaking confidential information when working with third-party chatbots.
According to The Wall Street Journal, Apple is working on its own AI chatbot technology and is extremely wary of employees leaking confidential information. The iPhone maker has also warned employees against using GitHub Copilot, an AI tool that automatically writes code. The reasoning behind these bans is understandable: conversational chatbots send the information they receive from users back to their developers, where it can be used to further train and improve the models.
Apple is afraid that employees using AI tools to write code or prepare other source material might end up serving the company's confidential work to its rivals on a silver platter. With the official ChatGPT app also coming to the Apple App Store, Apple's AI assistant Siri, which doesn't have the best track record as a virtual assistant anyway, is under threat.
The fear isn't completely unfounded: Samsung learned this the hard way in April, when three separate incidents in under a month saw employees inadvertently leak sensitive information to ChatGPT.
Apple and Samsung aren't the only companies keeping employees from using AI to lighten their workloads. JP Morgan Chase and Verizon have also restricted their employees from using ChatGPT. Nor are the bans exclusive to big firms: the New York City Public Schools system temporarily blocked student access to ChatGPT, alongside several other restricted websites, before reversing course.
In the News: Official ChatGPT iOS app released; Android app coming soon