Apple shares a security threat review for its new CSAM detection feature

What you need to know

  • Apple has released a new document further detailing the security implications of its CSAM detection plans.
  • Apple will publish the root hash of the encrypted CSAM hash database that will ship on all iPhones and iPads.
  • Security researchers will be able to inspect the database to confirm its validity.

Apple continues to try to set the record straight about CSAM detection.

Apple today released a new document that it hopes will go some way to allaying fears surrounding the security of its new CSAM detection system. The document carries the lofty title of "Security Threat Model Review of Apple's Child Safety Features" and is available on Apple's website.

In the document, Apple explains that it will publish a further Knowledge Base article containing the root hash of the encrypted CSAM hash database that will itself be included in all versions of iOS and iPadOS. The idea is that security researchers will be able to compare the root hash of the database on their devices with the one Apple publishes, confirming that it hasn't been tampered with in any way. That's just one of the methods Apple will now employ to ensure the database of CSAM hashes being checked against is legitimate.

Apple will publish a Knowledge Base article containing a root hash of the encrypted CSAM hash database included with each version of every Apple operating system that supports the feature. Additionally, users will be able to inspect the root hash of the encrypted database present on their device, and compare it to the expected root hash in the Knowledge Base article. That the calculation of the root hash shown to the user in Settings is accurate is subject to code inspection by security researchers like all other iOS device-side security claims.
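
Apple's document doesn't spell out exactly how that root hash is constructed, but the check it describes boils down to hashing the on-device database and comparing the result to the published value. Here's a rough sketch of that idea in Swift, using a flat SHA-256 over a file as a stand-in for Apple's actual root-hash construction; the file path and published hash below are invented for the example.

```swift
import Foundation
import CryptoKit

// Hypothetical path to the on-device encrypted CSAM hash database and the
// root hash Apple would publish in its Knowledge Base article (both invented
// for this example).
let databaseURL = URL(fileURLWithPath: "/path/to/encrypted-csam-database.bin")
let publishedRootHash = "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"

do {
    // Hash the local copy and compare it to the published value. Apple's real
    // scheme likely computes a Merkle-style root over the blinded hash set
    // rather than a flat SHA-256 of one file; this only illustrates the check.
    let databaseData = try Data(contentsOf: databaseURL)
    let localRootHash = SHA256.hash(data: databaseData)
        .map { String(format: "%02x", $0) }
        .joined()

    print(localRootHash == publishedRootHash
          ? "Database matches the published root hash."
          : "Mismatch: the on-device database differs from the published one.")
} catch {
    print("Could not read the database: \(error)")
}
```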

Apple also says that the approach will allow third-party technical audits of its system, including the encrypted CSAM database.

This approach enables third-party technical audits: an auditor can confirm that for any given root hash of the encrypted CSAM database in the Knowledge Base article or on a device, the database was generated only from an intersection of hashes from participating child safety organizations, with no additions, removals, or changes. Facilitating the audit does not require the child safety organization to provide any sensitive information like raw hashes or the source images used to generate the hashes – they must provide only a non-sensitive attestation of the full database that they sent to Apple. Then, in a secure on-campus environment, Apple can provide technical proof to the auditor that the intersection and blinding were performed correctly. A participating child safety organization can decide to perform the audit as well.
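
The property the audit establishes is that every entry in the shipped database comes from the overlap of the participating organizations' submissions, with nothing added on top. As a loose illustration of that intersection check (not Apple's actual audit procedure, and with placeholder organization sets standing in for real hash submissions):

```swift
import Foundation

// Hypothetical hash submissions from two participating child safety
// organizations; the identifiers are placeholders, not real perceptual hashes.
let organizationA: Set<String> = ["hash01", "hash02", "hash03", "hash04"]
let organizationB: Set<String> = ["hash03", "hash04", "hash05"]

// Apple describes the shipped database as the intersection of the
// submissions, so an entry has to appear in more than one organization's set.
let shippedDatabase = organizationA.intersection(organizationB)

// Given attestations of the full submissions, an auditor can confirm the
// shipped database is exactly that intersection: no additions, removals,
// or changes.
func auditPasses(shipped: Set<String>, submissions: [Set<String>]) -> Bool {
    guard let first = submissions.first else { return shipped.isEmpty }
    let expected = submissions.dropFirst().reduce(first) { $0.intersection($1) }
    return shipped == expected
}

print(auditPasses(shipped: shippedDatabase, submissions: [organizationA, organizationB]))
// Prints "true"
```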

In a media briefing, Apple further confirmed that the threshold at which photos will be flagged for manual review is 30. That means an iCloud account will need 30 pieces of known CSAM content detected before a manual review takes place. Apple says the number was never intended to be kept private, since security researchers would have been able to discover it anyway. And while some may consider a threshold of 30 high, it is thought to be far lower than the number of images typically found in an offender's collection.
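
In practice, the threshold acts as a simple gate: nothing is surfaced for human review until an account accumulates 30 matches. A rough sketch of that gate, with the caveat that the real system enforces the count cryptographically through safety vouchers and threshold secret sharing rather than a plain counter:

```swift
import Foundation

// The threshold Apple confirmed: no manual review below 30 matched images.
let reviewThreshold = 30

// Hypothetical stand-in for the per-account match count. In the real system
// the count is enforced with threshold secret sharing, so Apple cannot
// decrypt any safety voucher until the threshold is crossed.
func shouldFlagForManualReview(matchCount: Int) -> Bool {
    matchCount >= reviewThreshold
}

print(shouldFlagForManualReview(matchCount: 12))  // false: below threshold
print(shouldFlagForManualReview(matchCount: 31))  // true: eligible for manual review
```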

The full document can be read on Apple's website and may go some way toward making people more comfortable with the new CSAM detection system. Apple was also keen to point out that the current dialogue surrounding the system is something it anticipated as part of the feature's rollout timeline.
