Commentary
As the digital age progresses, the merging of artificial intelligence (AI) with consumer technology is stirring ethical debate. Major tech companies, having largely exhausted public English-language data sources, are now turning to personal devices and social platforms for AI training data. This shift has triggered public unease, particularly over privacy.
Apple recently showcased significant AI advancements, notably in its digital assistant, Siri.
During Apple’s 2024 Worldwide Developers Conference (WWDC) on June 10, the company revealed that the updated Siri now understands natural language much like ChatGPT, enabling features such as quick photo editing, email composition, and emoji and image generation through voice commands. These improvements are exclusive to the iPhone 15 Pro and to iPads and Macs with M-series chips.
Apple also announced a partnership with OpenAI to integrate ChatGPT directly into Siri, with the GPT-4o model serving as the underlying AI engine.
During the conference’s keynote, Craig Federighi, Apple’s senior VP of software engineering, emphasized the importance of data privacy in AI operations. Apple has traditionally safeguarded privacy by conducting data processing on devices rather than external servers.
Mr. Federighi highlighted that Apple’s generative AI models predominantly operate on devices, ensuring data privacy without relying on cloud processing. However, he acknowledged the necessity of accessing server-based models for more complex tasks. In such cases, Apple’s new “Private Cloud Compute” guarantees secure data processing on cloud servers, which can be independently verified.
Public Reaction and High-Profile Criticism
Despite Apple’s reassurances, skeptics raised concerns about potential exploitation of personal data by corporations for AI training or undisclosed experiments.
Tesla CEO Elon Musk emerged as a prominent critic, denouncing the move on X after the conference. Mr. Musk expressed concerns about security breaches and even suggested banning Apple devices from his companies’ premises, citing fears that integrating OpenAI into Apple’s ecosystem could expose sensitive data to misuse.
Mr. Musk’s critique extended to the transparency of data handling between Apple and OpenAI, questioning Apple’s ability to monitor OpenAI’s data usage post-transfer.
“The problem with ‘agreeing’ to share your data: nobody actually reads the terms & conditions,” Mr. Musk stated on X.
This issue was exemplified by a recent incident involving actress Scarlett Johansson. In May, Ms. Johansson threatened legal action against OpenAI, alleging that a voice in its ChatGPT product, named “Sky,” bore a striking resemblance to hers.
OpenAI had reportedly approached Ms. Johansson for her voice for the AI, an offer she declined. Months later, the similarity in voices led Ms. Johansson to seek legal counsel, expressing shock and anger.
OpenAI subsequently halted the use of the “Sky” voice, denying any intent to mimic the actress’s voice and citing privacy reasons for not disclosing the identity of the actual voice actor. This incident spurred discussions on ethical standards in AI voice replication.
Broader Implications and Legal Challenges
The discourse extends beyond Apple and OpenAI to concerns about social media data being used for AI training. Meta, the parent company of Facebook and Instagram, recently announced plans to use data from users of both platforms in the UK and Europe to train its Llama AI language model starting June 26.
Meta stated that the training data would involve publicly shared content, photos, and interactions with AI chatbots, excluding users’ private messages.
Meta assured users that they could opt out of having their data used for Llama training. In an email to users, Meta emphasized the right to object to data utilization and promised to honor objections going forward.
Concerns surrounding these practices prompted NOYB, a European digital rights advocacy group, to lodge complaints with 11 privacy watchdogs over Meta’s AI training plans. In response, the Irish Data Protection Commission, Meta’s primary regulator in the EU, requested that the company pause its plans to train Llama on social media content.
Meta expressed disappointment on June 14, calling the pause a setback for European innovation.
The Dangers of AI in the Hands of Authoritarian Regimes
With growing concerns about the misuse of AI by major tech companies, there is also a significant worry about authoritarian regimes and unethical actors utilizing AI to spread harmful ideologies, create misinformation, and manipulate public opinion.
Manipulating Global Perceptions with AI
OpenAI, in a blog post on May 30, revealed instances of AI misuse by state actors and private entities to shape global narratives. Over the preceding three months, OpenAI had identified five covert operations that used its technology to manipulate public discourse and covertly influence international opinion.
These operations, originating in Russia, China, and Iran, with one run by a private Israeli company, used OpenAI’s advanced language models to generate fake content, manage social media profiles, assist with programming, and translate texts, among other purposes.
The report highlighted China’s “Spamouflage” campaign, which employed AI to monitor social media activity and disseminate counterfeit messages in multiple languages. Russia’s “Bad Grammar” and “Doppelganger” groups, along with Iran’s International Union of Virtual Media, were also called out for spreading false news and extremist content across various digital platforms.
The integration of AI by authoritarian regimes and unethical actors poses a significant threat to global information integrity and public discourse. Heightened scrutiny and proactive measures are necessary to counter these malicious uses of AI technology.
Contributors: Kane Zhang and Ellen Wan
The opinions expressed in this article are solely those of the author and do not necessarily reflect the views of The Epoch Times.