April 23, 2024

Apple To Roll Out Child Safety Features to Fight Child Abuse

Apple said on Thursday that it will report child abuse images uploaded to iCloud to US law enforcement. To do this, the company will launch software later this year that analyzes photos stored in iCloud, with the goal of curbing the spread of Child Sexual Abuse Material (CSAM).

Apple announced that it developed the child safety features in collaboration with child safety experts. The rollout includes new communication tools that give parents a more informed role in the content their children consume online. The company will also use machine learning to warn about sensitive content without reading private messages, and it will apply new cryptographic techniques to limit the spread of sexually explicit images while keeping users’ privacy in mind.

The move aligns Apple with other cloud service providers that scan users’ files for content that violates their policies. Apple tested the features yesterday and said it will soon bring them to iOS, watchOS, and macOS. The company has previously used hash-matching systems to scan images sent over email; it will now apply the same approach to iCloud Photos. The new system relies on NeuralHash, a technology that analyzes images on the iPhone and iPad before they are uploaded to iCloud.
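For a concrete sense of how hash-based image matching works in general, here is a minimal Python sketch. It uses a simple “average hash” as a stand-in for Apple’s proprietary NeuralHash, and a plain lookup against a set of known fingerprints in place of the cryptographic matching Apple describes; the hash values and distance threshold are made up for illustration.

    # Illustrative only: a simple "average hash" stands in for Apple's
    # proprietary NeuralHash, and a plain set lookup stands in for the
    # cryptographic matching Apple describes.
    from PIL import Image  # requires the Pillow library

    def average_hash(path, size=8):
        """Compute a 64-bit perceptual fingerprint of an image."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for px in pixels:
            bits = (bits << 1) | (1 if px > mean else 0)
        return bits

    def hamming(a, b):
        """Count the bits on which two fingerprints differ."""
        return bin(a ^ b).count("1")

    # Hypothetical database of known fingerprints (the real list comes from
    # NCMEC and is never exposed to users in readable form).
    KNOWN_HASHES = {0x8F3A00000000001}

    def matches_known(path, max_distance=5):
        """Flag an image whose fingerprint is close to any known entry."""
        h = average_hash(path)
        return any(hamming(h, k) <= max_distance for k in KNOWN_HASHES)

In this sketch, visually similar images produce fingerprints that differ in only a few bits, which is why matching uses a small Hamming-distance tolerance rather than exact equality.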

Apple also published a PDF outlining how the system is designed to protect privacy. Among other points, it says that Apple will manually review the reports it sends to the National Center for Missing & Exploited Children (NCMEC), that its software learns nothing about images that are not flagged as CSAM, and that users cannot access the CSAM database, which is provided by NCMEC. Apple underscored the system’s accuracy by claiming an error rate of “less than one in 1 trillion.” The company said in a statement that “Apple only learns about users’ photos if they have a collection of known CSAM in their iCloud Photos account.”
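That quoted statement implies a threshold: nothing about an account is reported until it accumulates a collection of matches. The sketch below shows that gating logic in the same illustrative Python style; the threshold value and account bookkeeping are hypothetical, not details Apple has published here.

    # Hypothetical threshold gating: no single match reveals anything; an
    # account is only surfaced for manual review once it holds a "collection"
    # of matches.
    from collections import defaultdict

    MATCH_THRESHOLD = 10  # made-up number for illustration; Apple gives none here

    _match_counts = defaultdict(int)

    def record_match(account_id):
        """Record one matched upload for an account."""
        _match_counts[account_id] += 1

    def should_escalate(account_id):
        """Only accounts at or above the threshold are flagged for human review."""
        return _match_counts[account_id] >= MATCH_THRESHOLD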

Expanding the Features to Siri and Search

Apple is also bringing the features to its digital voice assistant, Siri, and to Search. If someone searches for abusive content, Siri will intervene, and it will guide users who want to report CSAM. Search “will explain to users that interest in this topic is harmful and problematic, and provide resources from partners to get help with this issue.” John Clark, CEO of NCMEC, praised the company for its child safety features. “These new safety measures have lifesaving potential for children who are being enticed online and whose horrific images are being circulated in child sexual abuse material,” he said in a statement.

Security Researchers Divided on Apple’s Safety Features

“Regardless of what Apple’s long term plans are, they’ve sent a very clear signal. In their (very influential) opinion, it is safe to build systems that scan users’ phones for prohibited content,” said Matthew Green, a privacy expert at Johns Hopkins University. “Whether they turn out to be right or wrong on that point hardly matters. This will break the dam — governments will demand it from everyone.” In a series of tweets, Green called the update a “really bad idea.”

“In my judgement this system will likely significantly increase the likelihood that people who own or traffic in such pictures (harmful users) are found; this should help protect children,” said David Forsyth, chair of computer science at the University of Illinois. Security researcher Alec Muffett, however, said that Apple’s move to enable scanning is a “huge and regressive step for individual privacy.” Ross Anderson, a professor of security engineering at the University of Cambridge, called it an “absolutely appalling idea” that could lead to “distributed bulk surveillance” of devices.