Apple’s new initiatives against the spread of child sexual abuse material on its platforms are a good thing from the point of view of child protection. But from a privacy and data ethics perspective they are alarming and a no-go.
Recently, Apple launched a series of initiatives under the name Expanded Protections for Children. We all want this material gone, so thanks for trying. Most big tech companies already scan content for various purposes. The difference here is that Apple will scan your photos on your device, not only in the cloud as Facebook, Google and other companies do. Apple operates with ‘on-device processing’ (identifiable data stays on your phone) so that you can have full privacy and control of your data on your devices. That is great. However, if you use iCloud Photos, which many Apple users probably do, photos will now be scanned on your device for known child sexual abuse material (CSAM). That is the controversial news.
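To make the ‘known CSAM’ point concrete: the matching happens against fingerprints of images already catalogued by child-protection organizations, not against a general analysis of what your photos depict. Below is a minimal, hypothetical sketch of that kind of on-device fingerprint matching. Apple’s published design actually uses a perceptual hash (NeuralHash) together with cryptographic safety vouchers and a match threshold; the plain SHA-256 and the function names here are my own simplifying assumptions, not Apple’s implementation.

```python
import hashlib
from pathlib import Path

# Hypothetical database of fingerprints of *known* abuse images,
# shipped to the device as opaque hashes (the device never holds
# the images themselves, only the fingerprints). Apple's real
# system uses a perceptual hash; SHA-256 is a stand-in here so
# the sketch stays self-contained.
KNOWN_CSAM_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def fingerprint(photo_path: Path) -> str:
    """Compute an on-device fingerprint of a photo before upload."""
    return hashlib.sha256(photo_path.read_bytes()).hexdigest()

def flag_before_upload(photo_path: Path) -> bool:
    """Return True if the photo matches the known-CSAM database
    and would be flagged alongside its iCloud upload."""
    return fingerprint(photo_path) in KNOWN_CSAM_HASHES
```

The sketch only illustrates that the device compares opaque fingerprints against a list it cannot interpret; the controversy below is about who compiles that list, where the matching runs, and what else such a mechanism could be made to match.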
As Ben Thompson of Stratechery writes:
“The company (Apple) doesn’t want to give up on end-to-end encryption — and likely wants to expand it — which leaves on-device scanning as the only way to satisfy governments not (just) in China but also the West.”
And “I think the iPhone being fundamentally secure and iCloud backups being subject to the law is a reasonable compromise. Apple’s choices in this case, though, go in the opposite direction: instead of adding CSAM-scanning to iCloud Photos in the cloud that they own and operate, Apple is compromising the phone that you and I own and operate, without any of us having a say in the matter.”
Apple repeatedly emphasizes that its way of scanning ensures that Apple itself does not get access to the content.
If Apple ONLY wants to combat CSAM, there is little cause for concern, Thompson writes. And as the New York Times writes: “US law requires tech companies to flag cases of child sexual abuse to the authorities. Apple has historically flagged fewer cases than other companies. Last year, for instance, Apple reported 265 cases to the National Center for Missing & Exploited Children, while Facebook reported 20.3 million, according to the center’s statistics. That enormous gap is due in part to Apple’s decision not to scan for such material, citing the privacy of its users.”
Apple’s business model is not based on profiling, micro-targeting and selling access to us based on that data, so with this in mind, I am not worried about Apple abusing my data. However, there are several weighty arguments against what Apple is doing:
- The slippery-slope argument is a legitimate concern. Other companies with different business models, and undemocratic governments, could abuse this new capability.
- Apple is doing the work of the police. If law enforcement presents a search or seizure warrant (meaning a judge has decided it is vital for solving a crime), it should have access to the data. Today, NGOs working in child protection send pictures and websites to the police, who then investigate. This can be done legally. However, screening your individual device for possible matches is not okay. It is scary. In that case, a private company takes over a control function of the police without any of the rule-of-law guarantees or legal rights (such as warrants and avenues of complaint) that the police must have in place before they act.
- Apple has no independent auditing. The company says it will not have access to content, but who actually verifies that it does what it says? Independent audits are generally needed in the world of big tech.