When Google added photo scanning technology to Android phones, it triggered a backlash, with the company accused of “secretly” installing new monitoring technology “without user permission.”
At the time, Google assured me that SafetyCore was an enabling framework and would not actually start scanning photos or other content. The new app, it said, “provides on-device infrastructure for securely and privately performing classification to help users detect unwanted content. Users control SafetyCore, and SafetyCore only classifies specific content when an app requests it through an optionally enabled feature.”
Well, that time has now come, and it starts with Google Messages. As reported by 9to5Google, “Google Messages is rolling out Sensitive Content Warnings that blur nude images on Android.” Not only does the feature blur such content, it also warns that this kind of imagery can be harmful and provides options to view the explicit content anyway or to block the sender’s number.
This AI scanning takes place on device, and Google assures users that nothing is sent back to its servers. GrapheneOS, the security-hardened Android operating system, backed up that claim: SafetyCore “doesn’t provide client-side scanning used to report things to Google or anyone else. It provides on-device machine learning models usable by applications to classify content as being spam, scams, malware, etc. This allows apps to check content locally without sharing it with a service and mark it with warnings for users.”
But GrapheneOS also lamented that “it’s unfortunate that it’s not open source and released as part of the Android Open Source Project and the models also aren’t open let alone open source… We’d have no problem with having local neural network features for users, but they’d have to be open source.” Back to that secrecy point, again.
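Because neither the SafetyCore service nor its models are published, any illustration of how this works has to be hypothetical. The minimal Kotlin sketch below shows only the pattern GrapheneOS describes: an app scores an image with a local model and acts on the verdict, with no network call anywhere in the path. Every name in it (OnDeviceModel, SensitiveContentChecker, the 0.8 threshold) is invented for illustration, not taken from Google’s actual API.

```kotlin
// Hypothetical sketch of the on-device classification pattern GrapheneOS
// describes. SafetyCore's real API is not public, so every name here is
// invented for illustration. The key property: the image is scored locally
// and nothing in this path touches the network.

/** Stand-in for a bundled on-device ML model. */
fun interface OnDeviceModel {
    /** Returns a score in [0.0, 1.0]; higher means more likely explicit. */
    fun score(imageBytes: ByteArray): Double
}

/** The purely local verdict the messaging app acts on. */
sealed class Verdict {
    object ShowNormally : Verdict() {
        override fun toString() = "ShowNormally"
    }
    data class BlurWithWarning(val score: Double) : Verdict()
}

class SensitiveContentChecker(
    private val model: OnDeviceModel,
    private val threshold: Double = 0.8, // hypothetical cutoff
) {
    fun check(imageBytes: ByteArray): Verdict {
        val score = model.score(imageBytes) // runs in-process, on device
        return if (score >= threshold) Verdict.BlurWithWarning(score)
        else Verdict.ShowNormally
    }
}

fun main() {
    // A fake model standing in for the real on-device classifier.
    val fakeModel = OnDeviceModel { bytes -> if (bytes.isEmpty()) 0.0 else 0.9 }
    val checker = SensitiveContentChecker(fakeModel)

    println(checker.check(ByteArray(16))) // BlurWithWarning(score=0.9)
    println(checker.check(ByteArray(0)))  // ShowNormally
}
```

A real implementation would load a bundled model file rather than this stub, but the privacy property Google and GrapheneOS are both pointing to is the same: the classification input and output never leave the device.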
The Google Messages update was expected; the question now is what comes next. The risk is that this capability is arriving just as secure, encrypted user content comes under increasing pressure from legislators and security agencies around the world. Each time such scanning technology is introduced, privacy advocates push back.
For now, the feature is disabled by default for adults but enabled by default for children. Adults can turn on the new safety measure in Google Messages under Settings > Protection & Safety > Manage sensitive content warnings. Depending on a child’s age, the setting can only be changed from their own account settings or by a parent through Family Link.
This doesn’t end here. Just as with Gmail and other platforms, Google’s 3 billion Android, email and other users will need to decide what level of AI scanning, monitoring and analysis they’re comfortable with, and where they draw the line. This feature runs on-device, but many of the other new updates don’t carry that same privacy protection.
AI monitoring is here to stay and will take some getting used to. As Phone Arena points out, the new photo scanning “also works in reverse; if you try to send or forward an image that might be considered sensitive, Messages will flash a heads-up to let you know what you’re about to share, and you’ll have to confirm before it goes through.”
Welcome to the brave new world of “big brother” AI.