Apple’s AI Intelligence: Safe, Secure and Ethically Sourced – Or Is It? | Commentary

Apple’s redefinition of “AI” to mean “Apple Intelligence” was all the rage earlier this week. That’s kind of funny, since a big piece of Apple’s announced AI launch strategy depends on OpenAI and ChatGPT, which means, by extension, on Microsoft, OpenAI’s biggest investor.

But casting that aside for the moment, Apple CEO Tim Cook, as expected, firmly placed privacy and security at the center of his pitch. That’s a fascinating and extremely narrow needle to thread, since promises of privacy, security and respect for intellectual property rise or fall on the data AI uses to “do its thing.”

Apple’s AI strategy comes in two parts. First, Apple – using its own homegrown AI tech – will enable users to do myriad tasks more productively and efficiently directly on their iPhones, iPads and Macs. None of those tasks – like prioritizing messages and notifications – requires any outside assistance from OpenAI or any other Big Tech generative AI. Apple Intelligence will be opt-in, which means that users must affirmatively agree to make their data available to Apple’s AI, whether processed directly on device or in Apple’s own private cloud (what it calls Private Cloud Compute) for more complex tasks. Apple assures its faithful that it will never, ever share their personal data. If all of that is true, so far, so good. No privacy or copyright harm, no infringing foul.

But Apple may be doing at least some of the same things for which OpenAI and other Big Tech AI players have been rightfully criticized. The company’s Machine Learning Research site states that its foundation AI models train on both licensed data and “publicly available data collected by its web-crawler, AppleBot.” There are those three words again – “publicly available data.” Typically, that’s code for unlicensed copyrighted works, not to mention personal data, being swept into the training set, which calls into question whether Apple Intelligence is fully “safe” and “ethically sourced.” That more troubling interpretation is bolstered by the fact that Apple says web publishers “have the option to opt out of the use of their web content for Apple Intelligence training with a data usage control.”
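Apple hasn’t detailed that control in its marketing, but its public crawler documentation indicates it rides on the long-standing robots.txt convention: a separate user agent token, Applebot-Extended, governs AI training rather than search indexing. Assuming that documented mechanism, a publisher that wants no part of Apple’s model training adds two lines to its site’s robots.txt file:

User-agent: Applebot-Extended
Disallow: /

Note what the existence of that switch implies: an opt-out only matters because inclusion is the default.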

The notion of “ethically sourced” AI also goes beyond privacy and copyright legalities. It raises larger considerations of respect for individuals, their creative works and their right to commercialize them. That’s particularly pointed for Apple, which – notwithstanding its recent “Crush” video brain freeze, when it (literally) pummeled the creative works of humanity down into an iPad – prides itself on safety and on being Big Tech’s home for the creative community.

The second part of Apple’s strategy is also problematic from this “ethically sourced” perspective. It comes into play when users seek generative AI help that Apple’s own models can’t handle: with the user’s permission, Apple hands the relevant prompt off to OpenAI and ChatGPT to do the work. Remember, ChatGPT scoops up “publicly available data,” which, again, means that third-party personal data and unlicensed copyrighted works are included to some extent.

An Apple spokesperson declined to comment, but the company said in its press materials that it takes steps to filter personally identifiable data out of publicly available information on the web. Apple has also stated that it does not use its users’ private personal data or user interactions when training the models behind Apple Intelligence.

In any event, all of this properly calls into question Apple’s “white knight” positioning. Let’s take the legal piece first. If Apple’s use of “publicly available data” means what I think it means, then Apple faces the same potentially significant legal liability that OpenAI and other Big Tech players face. It may also be legally liable when it hands off its generative AI work to OpenAI’s ChatGPT, even with user consent. The mere fact that CEO Sam Altman and his Wild West gAIng at OpenAI do the work does not necessarily excuse Apple from legal liability.

Companies can be secondarily liable for copyright-infringing behavior if they are aware of those transgressions but actively enable and encourage them anyway. That’s at least arguably the case with Apple, which is well aware that OpenAI stands accused of copyright infringement on a grand scale for training its AI models on unlicensed copyrighted works. That’s what The New York Times case, and many others like it, are all about.

To be clear, the concept of “ethically sourced” AI is nuanced beyond the strictly legal part of the equation. Creator-friendly Adobe found this out the hard way. It launched its standalone Firefly generative AI application last year with great artist-first fanfare, trumpeting the fact that its AI trained only on licensed stock works already in the Adobe family. It was later reported, however, that this wasn’t exactly true: Firefly apparently had also trained, at least in part, on images from visual AI generator Midjourney, a company that now finds itself embroiled in significant copyright litigation of its own. With that inconvenient truth, Adobe’s purity was called into question, which is fair when a company makes purity a headline feature.

But Adobe’s transgressions appear to be of a completely different order of magnitude than OpenAI’s wholesale, guardrail-less taking, and its ethical intentions seem generally honorable. Given the significant steps it takes, at least on the privacy side of the equation, Apple too seems to land closer to Adobe than to OpenAI and other Big Tech generative AI services.

That doesn’t make Apple completely innocent, though, especially when being “ethically sourced” is front and center in its pitch. The company developed its two-part strategy to serve its installed base of more than 2.2 billion active devices, keep users firmly in its walled garden and catch up in the expected multi-trillion-dollar AI race. And it built its next big thAIng knowing that its “Apple Intelligence” solution likely includes at least some third-party personal data and unlicensed copyrighted works.

Reach out to Peter at peter@creativemedia.biz. For those of you interested in learning more, sign up for his “the brAIn” newsletter, visit his firm Creative Media at creativemedia.biz, and follow him on Threads @pcsathy.
