
Apple Snooping


Apple scans users' images for sexual content and child abuse


 

Updated: Danger! Poisoned Apple...

Apple will add software to scan all your images, nominally for child abuse, but no doubt governments will soon be adding politically incorrect memes to the list


14th August 2021
   Apple intends to install software, initially on American iPhones, to scan for child abuse imagery, raising alarm among security researchers who warn that it will open the door to surveillance of millions of people’s personal devices.

The automated system would proactively alert a team of human reviewers if it believes it has detected illegal imagery; the reviewers would then contact law enforcement if the material can be verified. The scheme will initially roll out only in the US.

According to people briefed on the plans, every photo uploaded to iCloud in the US will be given a safety voucher saying whether it is suspect or not. Once a certain number of photos are marked as suspect, Apple will enable all the suspect photos to be decrypted and, if apparently illegal, passed on to the relevant authorities.
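For readers who want to picture the mechanism, here is a minimal sketch of that voucher idea. The names and the crude sealing step are invented for illustration; Apple's real design uses private set intersection, so the device itself never learns whether a photo matched, and the server can only open vouchers once the threshold is crossed.

import hashlib, json, os
from dataclasses import dataclass

@dataclass
class SafetyVoucher:
    payload: bytes   # sealed match data; here it is only XOR-masked as a stand-in

def fingerprint(image_bytes: bytes) -> str:
    # Stand-in fingerprint; Apple's NeuralHash is a perceptual hash, not SHA-256.
    return hashlib.sha256(image_bytes).hexdigest()

def make_voucher(image_bytes: bytes, blocklist: set, mask: bytes) -> SafetyVoucher:
    # Real PSI hides this result from the device; this toy checks membership directly.
    matched = fingerprint(image_bytes) in blocklist
    record = json.dumps({"match": matched}).encode()
    sealed = bytes(b ^ mask[i % len(mask)] for i, b in enumerate(record))
    return SafetyVoucher(payload=sealed)

# Hypothetical usage: every upload carries a voucher, whether it matched or not.
blocklist = {fingerprint(b"known-bad-example")}   # assumed toy database
mask = os.urandom(32)                             # stand-in for threshold-gated key material
uploads = [make_voucher(photo, blocklist, mask)
           for photo in (b"holiday photo bytes", b"known-bad-example")]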

The scheme seems to be a nasty compromise with governments to allow Apple to offer encrypted communication whilst allowing state security to see what some people may be hiding.

Alec Muffett, a security researcher and privacy campaigner who formerly worked at Facebook and Deliveroo, said Apple's move was tectonic and a huge and regressive step for individual privacy. Apple are walking back privacy to enable 1984, he said.

Ross Anderson, professor of security engineering at the University of Cambridge, said:

It is an absolutely appalling idea, because it is going to lead to distributed bulk surveillance of ... our phones and laptops.

Although the system is currently trained to spot child sex abuse, it could be adapted to scan for any other targeted imagery and text, for instance, terror beheadings or anti-government signs at protests, say researchers. Apple's precedent could also increase pressure on other tech companies to use similar techniques.

And given that the system is based on mapping images to a hash code and then comparing that hash code with those from known child porn images, there is surely a chance of a false positive when an innocent image just happens to map to the same hash code as an illegal image. That could have devastating consequences, with police banging on doors at dawn accompanied by the 'there's no smoke without fire' presumption of guilt that surrounds the scourge of child porn. An unlucky hash may then lead to a trashed life.
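To make the collision worry concrete, here is a toy sketch. It has nothing to do with Apple's actual NeuralHash, which is a perceptual hash over image features; it only shows that any fixed-length fingerprint admits collisions in principle. The fingerprint below is deliberately truncated to 16 bits so a clash turns up almost immediately; real hashes are vastly larger, which makes accidental matches rare but never impossible.

import hashlib

def tiny_fingerprint(data: bytes) -> int:
    # Deliberately truncated to 16 bits so collisions are trivial to find.
    return int.from_bytes(hashlib.sha256(data).digest()[:2], "big")

seen = {}
for i in range(100_000):
    item = f"innocent-image-{i}".encode()
    fp = tiny_fingerprint(item)
    if fp in seen and seen[fp] != item:
        print(f"{seen[fp]!r} and {item!r} share fingerprint {fp:#06x}")
        break
    seen[fp] = item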

Apple's official blog post inevitably frames the new snooping capability as if it was targeted only at child porn but it is clear that the capability can be extended way beyond this narrow definition. The blog post states:

Child Sexual Abuse Material (CSAM) detection

To help address this, new technology in iOS and iPadOS* will allow Apple to detect known CSAM images stored in iCloud Photos. This will enable Apple to report these instances to the National Center for Missing and Exploited Children (NCMEC). NCMEC acts as a comprehensive reporting center for CSAM and works in collaboration with law enforcement agencies across the United States.

Apple's method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users' devices.

Before an image is stored in iCloud Photos, an on-device matching process is performed for that image against the known CSAM hashes. This matching process is powered by a cryptographic technology called private set intersection, which determines if there is a match without revealing the result. The device creates a cryptographic safety voucher that encodes the match result along with additional encrypted data about the image. This voucher is uploaded to iCloud Photos along with the image.

Using another technology called threshold secret sharing, the system ensures the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content. The threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account.

Only when the threshold is exceeded does the cryptographic technology allow Apple to interpret the contents of the safety vouchers associated with the matching CSAM images. Apple then manually reviews each report to confirm there is a match, disables the user's account, and sends a report to NCMEC. If a user feels their account has been mistakenly flagged they can file an appeal to have their account reinstated.

This innovative new technology allows Apple to provide valuable and actionable information to NCMEC and law enforcement regarding the proliferation of known CSAM. And it does so while providing significant privacy benefits over existing techniques since Apple only learns about users' photos if they have a collection of known CSAM in their iCloud Photos account. Even in these cases, Apple only learns about images that match known CSAM.

Expanding guidance in Siri and Search

Apple is also expanding guidance in Siri and Search by providing additional resources to help children and parents stay safe online and get help with unsafe situations. For example, users who ask Siri how they can report CSAM or child exploitation will be pointed to resources for where and how to file a report.

Siri and Search are also being updated to intervene when users perform searches for queries related to CSAM. These interventions will explain to users that interest in this topic is harmful and problematic, and provide resources from partners to get help with this issue.

These updates to Siri and Search are coming later this year in an update to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey.*
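Apple's post leans on the phrase 'threshold secret sharing' without unpacking it. A textbook Shamir construction gives a feel for the primitive: a key is split into shares, and any number of shares below the threshold reveals nothing about it. This is a generic sketch of that building block, not Apple's implementation, and the numbers are arbitrary.

import random

PRIME = 2**127 - 1  # a Mersenne prime; any prime larger than the secret works

def make_shares(secret: int, threshold: int, count: int):
    # Random polynomial of degree threshold-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def evaluate(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, evaluate(x)) for x in range(1, count + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret).
    secret = 0
    for xj, yj in shares:
        num, den = 1, 1
        for xm, _ in shares:
            if xm == xj:
                continue
            num = num * -xm % PRIME
            den = den * (xj - xm) % PRIME
        secret = (secret + yj * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = random.randrange(PRIME)                 # stand-in for a decryption key
shares = make_shares(key, threshold=5, count=8)
assert recover(shares[:5]) == key             # five shares are enough
assert recover(shares[:4]) != key             # four shares match only by astronomical coincidence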

Update: Apple's photo scanning and snooping 'misunderstood'

13th August 2021. See article from cnet.com

Apple plans to scan some photos on iPhones, iPads and Mac computers for images depicting child abuse. The move has upset privacy advocates and security researchers, who worry that the company's newest technology could be twisted into a tool for surveillance and political censorship. Apple says those concerns are misplaced and based on a misunderstanding of the technology it's developed.

In an interview published Friday by The Wall Street Journal, Apple's software head, Craig Federighi, attributed much of people's concerns to the company's poorly handled announcements of its plans. Apple won't be scanning all photos on a phone, for example, only those connected to its iCloud Photo Library syncing system.

It's really clear a lot of messages got jumbled pretty badly in terms of how things were understood, Federighi said in his interview. We wish that this would've come out a little more clearly for everyone because we feel very positive and strongly about what we're doing.

 

 

Update: Apple offers slight improvements

14th August 2021. See article from theverge.com

The idea that Apple would be snooping on your device to detect child porn and nude images hasn't gone down well with users and privacy campaigners. The bad publicity has prompted the company to offer an olive branch.

To address the possibility of countries expanding the scope of flagged images for their own surveillance purposes, Apple says it will only detect images that exist in at least two countries' lists. Apple says it won't rely on a single government-affiliated database -- like that of the US-based National Center for Missing and Exploited Children, or NCMEC -- to identify CSAM. Instead, it will only match pictures from at least two groups with different national affiliations. The goal is that no single government could have the power to secretly insert unrelated content for censorship purposes, since it wouldn't match hashes in any other database.
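Mechanically, that safeguard amounts to a set intersection over the providers' databases: only hashes vouched for by two or more independent organisations are shipped to devices. A minimal sketch, with invented provider names and hashes:

from collections import Counter

hash_lists = {
    "NCMEC (US)": {"a1b2", "c3d4", "e5f6"},
    "OtherOrg (non-US, assumed)": {"c3d4", "e5f6", "0f0f"},
    "ThirdOrg (assumed)": {"e5f6", "9a9a"},
}
counts = Counter(h for hashes in hash_lists.values() for h in hashes)
deployable = {h for h, n in counts.items() if n >= 2}   # present in at least two lists
print(deployable)   # {'c3d4', 'e5f6'}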

Apple has also said that it would 'resist' requests from countries to expand the definition of images of interest. However, this is a worthless reassurance when all it would take is a court order to force Apple into complying with whatever requests the authorities make.

Apple has also stated the tolerances that will be applied to prevent false positives. It is alarming that innocent images can in fact generate a hash code that matches a child porn image. To try to prevent innocent people from being locked up, Apple will now require 30 images to have hashes matching illegal images before they get investigated by Apple staff. Previously Apple had declined to comment on what the tolerance value would be.
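A rough back-of-envelope calculation shows why the threshold matters so much. The per-photo collision rate and library size below are invented figures for illustration only, not anything Apple has published; the point is how quickly the chance of many independent accidental matches collapses compared with the chance of a single one.

from math import comb

def spurious_match_tail(n_photos: int, p: float, k: int) -> float:
    # P(k or more accidental matches among n photos): the binomial upper tail.
    # Terms shrink extremely fast at these rates, so capping the sum is safe.
    top = min(n_photos, k + 50)
    return sum(comb(n_photos, i) * p**i * (1 - p)**(n_photos - i)
               for i in range(k, top + 1))

n, p = 10_000, 1e-6   # assumed library size and per-photo false-match rate
print(spurious_match_tail(n, p, k=1))    # about 1 in 100: one stray collision is plausible
print(spurious_match_tail(n, p, k=30))   # below 1e-90: thirty will not happen by accident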

 

 

Comments: Poisoned Apple...

The EFF comments: Apple's Plan to Think Different About Encryption Opens a Backdoor to Your Private Life


9th August 2021

Apple has announced impending changes to its operating systems that include new 'protections for children' features in iCloud and iMessage. If you've spent any time following the Crypto Wars, you know what this means: Apple is planning to build a backdoor into its data storage system and its messaging system.

Child exploitation is a serious problem, and Apple isn't the first tech company to bend its privacy-protective stance in an attempt to combat it. But that choice will come at a high price for overall user privacy. Apple can explain at length how its technical implementation will preserve privacy and security in its proposed backdoor, but at the end of the day, even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor.

To say that we are disappointed by Apple's plans is an understatement. Apple has historically been a champion of end-to-end encryption, for all of the same reasons that EFF has articulated time and time again. Apple's compromise on end-to-end encryption may appease government agencies in the U.S. and abroad, but it is a shocking about-face for users who have relied on the company's leadership in privacy and security.

There are two main features that the company is planning to install in every Apple device. One is a scanning feature that will scan all photos as they get uploaded into iCloud Photos to see if they match a photo in the database of known child sexual abuse material (CSAM) maintained by the National Center for Missing & Exploited Children (NCMEC). The other feature scans all iMessage images sent or received by child accounts -- that is, accounts designated as owned by a minor -- for sexually explicit material, and if the child is young enough, notifies the parent when these images are sent or received. This feature can be turned on or off by parents.

When Apple releases these client-side scanning functionalities, users of iCloud Photos, child users of iMessage, and anyone who talks to a minor through iMessage will have to carefully consider their privacy and security priorities in light of the changes, and possibly be unable to safely use what was, until this development, one of the preeminent encrypted messengers.

Apple Is Opening the Door to Broader Abuse

We've said it before, and we'll say it again now: it's impossible to build a client-side scanning system that can only be used for sexually explicit images sent or received by children. As a consequence, even a well-intentioned effort to build such a system will break key promises of the messenger's encryption itself and open the door to broader abuses.

All it would take to widen the narrow backdoor that Apple is building is an expansion of the machine learning parameters to look for additional types of content, or a tweak of the configuration flags to scan, not just children's, but anyone's accounts. That's not a slippery slope; that's a fully built system just waiting for external pressure to make the slightest change. Take the example of India, where recently passed rules include dangerous requirements for platforms to identify the origins of messages and pre-screen content. New laws in Ethiopia requiring content takedowns of misinformation in 24 hours may apply to messaging services. And many other countries -- often those with authoritarian governments -- have passed similar laws. Apple's changes would enable such screening, takedown, and reporting in its end-to-end messaging. The abuse cases are easy to imagine: governments that outlaw homosexuality might require the classifier to be trained to restrict apparent LGBTQ+ content, or an authoritarian regime might demand the classifier be able to spot popular satirical images or protest flyers.

We've already seen this mission creep in action. One of the technologies originally built to scan and hash child sexual abuse imagery has been repurposed to create a database of terrorist content that companies can contribute to and access for the purpose of banning such content. The database, managed by the Global Internet Forum to Counter Terrorism (GIFCT), is troublingly without external oversight, despite calls from civil society. While it's therefore impossible to know whether the database has overreached, we do know that platforms regularly flag critical content as terrorism, including documentation of violence and repression, counterspeech, art, and satire.

Image Scanning on iCloud Photos: A Decrease in Privacy

Apple's plan for scanning photos that get uploaded into iCloud Photos is similar in some ways to Microsoft's PhotoDNA. The main product difference is that Apple's scanning will happen on-device. The (unauditable) database of processed CSAM images will be distributed in the operating system (OS), the processed images transformed so that users cannot see what the image is, and matching done on those transformed images using private set intersection where the device will not know whether a match has been found. This means that when the features are rolled out, a version of the NCMEC CSAM database will be uploaded onto every single iPhone. The result of the matching will be sent up to Apple, but Apple can only tell that matches were found once a sufficient number of photos have matched a preset threshold.
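Private set intersection is a well-studied primitive, and a classic Diffie-Hellman-style construction gives a feel for the blinding trick involved: each side raises hashed items to a secret exponent, so shared items can be recognised without either side exposing the rest of its set. The toy below is emphatically not Apple's protocol (Apple's variant is arranged so that the server, not the device, learns about matches, and only past the threshold); the names and parameters are illustrative.

import hashlib, random
from math import gcd

P = 2**255 - 19   # a large prime field for the toy (the Curve25519 field prime)

def to_group(item: bytes) -> int:
    # Hash an item to a nonzero field element.
    return int.from_bytes(hashlib.sha256(item).digest(), "big") % (P - 1) + 1

def blinding_key() -> int:
    # Pick an exponent invertible mod P-1 so blinding never merges distinct items.
    while True:
        e = random.randrange(2, P - 1)
        if gcd(e, P - 1) == 1:
            return e

device_items = [b"holiday.jpg", b"cat.png", b"suspect.jpg"]
server_items = [b"suspect.jpg", b"other-known-item"]

a, b = blinding_key(), blinding_key()        # device's and server's secret exponents

# Device blinds its hashes and sends them; the server cannot invert them.
device_blinded = [pow(to_group(x), a, P) for x in device_items]
# Server re-blinds the device's values (same order) and sends its own blinded set.
double_blinded = [pow(v, b, P) for v in device_blinded]
server_blinded = {pow(to_group(y), b, P) for y in server_items}
# Device raises the server's values to its own exponent; equal items now coincide.
server_double = {pow(w, a, P) for w in server_blinded}

matches = [x for x, d in zip(device_items, double_blinded) if d in server_double]
print(matches)   # [b'suspect.jpg']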

Once a certain number of photos are detected, the photos in question will be sent to human reviewers within Apple, who determine that the photos are in fact part of the CSAM database. If confirmed by the human reviewer, those photos will be sent to NCMEC, and the user's account disabled. Again, the bottom line here is that whatever privacy and security aspects are in the technical details, all photos uploaded to iCloud will be scanned.

Make no mistake: this is a decrease in privacy for all iCloud Photos users, not an improvement.

Currently, although Apple holds the keys to view Photos stored in iCloud Photos, it does not scan these images. Civil liberties organizations have asked the company to remove its ability to do so. But Apple is choosing the opposite approach and giving itself more knowledge of users' content.

Machine Learning and Parental Notifications in iMessage: A Shift Away From Strong Encryption

Apple's second main new feature is two kinds of notifications based on scanning photos sent or received by iMessage. To implement these notifications, Apple will be rolling out an on-device machine learning classifier designed to detect sexually explicit images. According to Apple, these features will be limited (at launch) to U.S. users under 18 who have been enrolled in a Family Account. In these new processes, if an account held by a child under 13 wishes to send an image that the on-device machine learning classifier determines is a sexually explicit image, a notification will pop up, telling the under-13 child that their parent will be notified of this content. If the under-13 child still chooses to send the content, they have to accept that the parent will be notified, and the image will be irrevocably saved to the parental controls section of their phone for the parent to view later. For users between the ages of 13 and 17, a similar warning notification will pop up, though without the parental notification.

Similarly, if the under-13 child receives an image that iMessage deems to be sexually explicit, before being allowed to view the photo, a notification will pop up that tells the under-13 child that their parent will be notified that they are receiving a sexually explicit image. Again, if the under-13 user accepts the image, the parent is notified and the image is saved to the phone. Users between 13 and 17 years old will similarly receive a warning notification, but a notification about this action will not be sent to their parent's device.
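Taken together, those two paragraphs describe a small decision table keyed on the account holder's age and on whether the on-device classifier flags the image. The sketch below merely paraphrases that description; it is not Apple's code, and every name in it is invented.

from dataclasses import dataclass

@dataclass
class Outcome:
    warn_child: bool
    notify_parent: bool
    save_for_parent: bool

def imessage_policy(age: int, classifier_flags_explicit: bool, child_proceeds: bool) -> Outcome:
    """Paraphrase of the described behaviour for child accounts on a Family plan."""
    if not classifier_flags_explicit or age >= 18:
        return Outcome(False, False, False)          # feature does not apply
    if age < 13:
        # Under-13s are warned; if they go ahead anyway, the parent is told
        # and the image is kept in the parental-controls area.
        return Outcome(True, child_proceeds, child_proceeds)
    # 13-17: warning only, no parental notification.
    return Outcome(True, False, False)

print(imessage_policy(age=12, classifier_flags_explicit=True, child_proceeds=True))
print(imessage_policy(age=15, classifier_flags_explicit=True, child_proceeds=True))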

This means that if -- for instance -- a minor using an iPhone without these features turned on sends a photo to another minor who does have the features enabled, they do not receive a notification that iMessage considers their image to be explicit or that the recipient's parent will be notified. The recipient's parents will be informed of the content without the sender consenting to their involvement. Additionally, once sent or received, the sexually explicit image cannot be deleted from the under-13 user's device.

Whether sending or receiving such content, the under-13 user has the option to decline without the parent being notified. Nevertheless, these notifications give the sense that Apple is watching over the user's shoulder -- and in the case of under-13s, that's essentially what Apple has given parents the ability to do.

It is also important to note that Apple has chosen to use the notoriously difficult-to-audit technology of machine learning classifiers to determine what constitutes a sexually explicit image. We know from years of documentation and research that machine-learning technologies, used without human oversight, have a habit of wrongfully classifying content, including supposedly sexually explicit content. When blogging platform Tumblr instituted a filter for sexual content in 2018, it famously caught all sorts of other imagery in the net, including pictures of Pomeranian puppies, selfies of fully-clothed individuals, and more. Facebook's attempts to police nudity have resulted in the removal of pictures of famous statues such as Copenhagen's Little Mermaid. These filters have a history of chilling expression, and there's plenty of reason to believe that Apple's will do the same.

Since the detection of a sexually explicit image will be using on-device machine learning to scan the contents of messages, Apple will no longer be able to honestly call iMessage end-to-end encrypted. Apple and its proponents may argue that scanning before or after a message is encrypted or decrypted keeps the end-to-end promise intact, but that would be semantic maneuvering to cover up a tectonic shift in the company's stance toward strong encryption.

Whatever Apple Calls It, It's No Longer Secure Messaging

As a reminder, a secure messaging system is a system where no one but the user and their intended recipients can read the messages or otherwise analyze their contents to infer what they are talking about. Despite messages passing through a server, an end-to-end encrypted message will not allow the server to know the contents of a message. When that same server has a channel for revealing information about the contents of a significant portion of messages, that's not end-to-end encryption. In this case, while Apple will never see the images sent or received by the user, it has still created the classifier that scans the images that would provide the notifications to the parent. Therefore, it would now be possible for Apple to add new training data to the classifier sent to users' devices or send notifications to a wider audience, easily censoring and chilling speech.

But even without such expansions, this system will give parents who do not have the best interests of their children in mind one more way to monitor and control them, limiting the internet's potential for expanding the world of those whose lives would otherwise be restricted. And because family sharing plans may be organized by abusive partners, it's not a stretch to imagine using this feature as a form of stalkerware.

People have the right to communicate privately without backdoors or censorship, including when those people are minors. Apple should make the right decision: keep these backdoors off of users' devices.

 

 

Comments: Poisoned Apple...

Comments about Apple's plans to scan people's phones and tablets seeking sexual content and child abuse


7th August 2021

Apple has announced two new snooping capabilities (initially for US users only) that will be added to its operating systems in the near future.

The first will be to analyse the content of pictures sent in messages on users' devices. Apple says that this system will only be used to inform parents when their children aged 12 or under attempt to send sexual content. No doubt Apple will come under pressure to scan the images of all users against an ever-expanding list of restrictions, eg terrorism, covid memes, copyrighted images etc.

The second scan is to match any photos being uploaded to Apple's iCloud Photo Library. If an image matches a curated list of child abuse image hashes then Apple will decrypt it, judge for itself whether it is illegal, and then inform the police. Apple claims that it will avoid the life-shattering possibility of a false positive by only investigating once several images match hashes.

Update: WhatsApp responds: A setback for people's privacy all over the world

7th August 2021. See article from dailymail.co.uk

The head of WhatsApp tweeted a barrage of criticism against Apple over plans to automatically scan iPhones and cloud storage for images of child abuse. It would see flagged owners reported to the police after a company employee has looked at their photos.

But WhatsApp head Will Cathcart said the popular messaging app would not follow Apple's strategy. His comments add to a stream of criticism of Apple's new system from privacy campaigners, who say it is the start of an infrastructure for surveillance and censorship. Cathcart said:

I think this is the wrong approach and a setback for people's privacy all over the world.

People have asked if we'll adopt this system for WhatsApp. The answer is no.

Instead of focusing on making it easy for people to report content that's shared with them, Apple has built software that can scan all the private photos on your phone -- even photos you haven't shared with anyone. That's not privacy.

We've had personal computers for decades and there has never been a mandate to scan the private content of all desktops, laptops or phones globally for unlawful content. It's not how technology built in free countries works.

Will this system be used in China? What content will they consider illegal there and how will we ever know? How will they manage requests from governments all around the world to add other types of content to the list for scanning? Can this scanning software running on your phone be error proof? Researchers have not been allowed to find out. Why not? How will we know how often mistakes are violating people's privacy? What will happen when spyware companies find a way to exploit this software? Recent reporting showed the cost of vulnerabilities in iOS software as is. What happens if someone figures out how to exploit this new system?

There are so many problems with this approach, and it's troubling to see them act without engaging experts that have long documented their technical and broader concerns with this.




 
