20 May 2023, Mumbai: Apple Inc. has restricted its employees' use of ChatGPT and other external artificial intelligence (AI) tools, according to a report by the Wall Street Journal. The move comes as Apple develops its own similar technology and worries that employees using third-party AI programs could leak confidential data.
In addition to ChatGPT, Apple has advised its employees against using GitHub's Copilot, a Microsoft-owned tool that automates the writing of software code. The restriction reflects the company's commitment to safeguarding proprietary code and other sensitive internal data.

OpenAI, the creator of ChatGPT, has made efforts to address privacy concerns. Last month it introduced an "incognito mode" for ChatGPT, which ensures that users' conversation history is neither stored nor used to enhance the AI's capabilities. Scrutiny of ChatGPT and similar chatbots has been growing over how they handle and manage user data, which is commonly used to train AI models and improve their performance. Apple's decision to limit employee use of AI tools aligns with the cautious approach other major firms have taken towards generative AI platforms.
Earlier this month, rival smartphone manufacturer Samsung imposed a similar ban on ChatGPT and other AI tools after one of its engineers accidentally leaked sensitive code. Amazon, JPMorgan Chase, and several other major banks have also restricted or banned internal use of ChatGPT to prevent the sharing of confidential information and to avoid potential regulatory issues. As companies recognize the risks associated with external AI platforms, they are taking proactive measures to protect sensitive data and maintain control over their proprietary technology.
By Yashika Desai