The Google+ social network exposed the personal information of hundreds of thousands of people who used the site between 2015 and March 2018, according to a report in the Wall Street Journal. But managers at the company chose not to disclose the failures because they worried that doing so would invite scrutiny from regulators, particularly in the wake of Facebook's security failures.
Shortly after the report was published, Google announced that it would shut down Google+ by August 2019. In the same announcement, Google unveiled a raft of new security measures for Android, Gmail and other Google platforms, adopted as a result of the privacy failures.
Google said it had discovered the issues during an internal audit called Project Strobe. Ben Smith, Google's vice president of engineering, wrote in a blog post:
Given these challenges and the very low usage of the consumer version of Google+, we decided to sunset the consumer version of Google+.
The audit found that Google+ APIs allowed app developers to access the information of Google+ users' friends, even if that data was marked as private by the user. As many as 438 applications had unauthorized access to Google+ data, according to the Journal.
Now, users will be given greater control over what account data they choose to share with each app. Apps will be required to inform users what data they will have access to, and users must grant explicit permission before an app can access it. Google is also limiting apps' ability to access users' call log and SMS data on Android devices. Additionally, Google is restricting which apps can seek access to users' consumer Gmail data: only email clients, email backup services and productivity services will be able to access this data.
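The model Google describes resembles standard OAuth incremental authorization, in which an app enumerates each scope it wants and the user grants or denies them individually at the consent screen. A minimal sketch of building such a consent URL (the client ID and redirect URI below are hypothetical; the endpoint and the Gmail read-only scope string are Google's published values):

```python
from urllib.parse import urlencode

GOOGLE_AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/v2/auth"

def consent_url(client_id: str, redirect_uri: str, scopes: list) -> str:
    """Build an OAuth 2.0 authorization URL requesting only the listed scopes."""
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",
        # Each scope is presented to the user as a separate permission.
        "scope": " ".join(scopes),
        # Ask only for new scopes; previously granted ones are retained.
        "include_granted_scopes": "true",
    }
    return f"{GOOGLE_AUTH_ENDPOINT}?{urlencode(params)}"

url = consent_url(
    "example-client-id",                   # hypothetical client ID
    "https://example.com/oauth/callback",  # hypothetical redirect URI
    ["https://www.googleapis.com/auth/gmail.readonly"],
)
```

An app that only needs to read mail would request just `gmail.readonly`; under the new rules, broader Gmail scopes would be reserved for email clients, backup services, and similar products.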
Google will continue to operate Google+ as an enterprise product for companies.
As someone who has tracked technology and human rights over the past ten years, I am convinced that digital ID, writ large, poses one of the gravest risks to human rights of any technology that we have encountered. -- Brett Solomon
Add a phone number I never gave Facebook for targeted advertising to the list of deceptive and invasive ways Facebook makes money off your personal information. Contrary to user expectations and Facebook representatives' own previous statements, the company has been using contact information that users explicitly provided for security purposes--or that users never provided at all--for targeted advertising.
A group of academic researchers from Northeastern University and Princeton University, along with Gizmodo reporters, have used real-world tests to demonstrate how Facebook's latest deceptive practice works. They found that Facebook harvests user phone numbers for targeted advertising in two disturbing ways: two-factor authentication (2FA) phone numbers, and shadow contact information.

Two-Factor Authentication Is Not The Problem
First, when a user gives Facebook their number for security purposes--to set up 2FA, or to receive alerts about new logins to their account--that phone number can become fair game for advertisers within weeks. (This is not the first time Facebook has misused 2FA phone numbers.)
But the important message for users is: this is not a reason to turn off or avoid 2FA. The problem is not with two-factor authentication. It's not even a problem with the inherent weaknesses of SMS-based 2FA in particular. Instead, this is a
problem with how Facebook has handled users' information and violated their reasonable security and privacy expectations.
There are many types of 2FA. SMS-based 2FA requires a phone number, so you can receive a text with a second-factor code when you log in. Other types of 2FA--like authenticator apps and hardware tokens--do not require a phone number to work.
However, until just four months ago, Facebook required users to enter a phone number to turn on any type of 2FA, even though it offers its authenticator as a more secure alternative. Other companies--Google notable among them--also still follow that outdated practice.
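Authenticator apps avoid the phone number entirely because the code is derived locally from a shared secret and the current time, with nothing sent over SMS. A minimal sketch of the standard TOTP scheme (RFC 6238, the algorithm behind most authenticator apps; this is not Facebook's or Google's specific implementation):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    counter = timestamp // step                    # number of 30-second steps
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890", T = 59 s
assert totp(b"12345678901234567890", 59, digits=8) == "94287082"
```

Because both sides compute the code from the shared secret and the clock, there is no phone number for the service to store--and nothing for it to later repurpose for advertising.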
Even with the welcome move to no longer require phone numbers for 2FA, Facebook still has work to do here. This finding has not only validated users who are suspicious of Facebook's repeated claims that we have complete control over our own
information, but has also seriously damaged users' trust in a foundational security practice.
Until Facebook and other companies do better, users who need privacy and security most--especially those for whom using an authenticator app or hardware key is not feasible--will be forced into a corner.

Shadow Contact Information
Second, Facebook is also grabbing your contact information from your friends. Kashmir Hill of Gizmodo provides an example:
...if User A, whom we'll call Anna, shares her contacts with Facebook, including a previously unknown phone number for User B, whom we'll call Ben, advertisers will be able to target Ben with an ad using that phone number, which I call shadow
contact information, about a month later.
This means that, even if you never directly handed a particular phone number over to Facebook, advertisers may nevertheless be able to associate it with your account based on your friends' phone books.
Even worse, none of this is accessible or transparent to users. You can't find such shadow contact information in the contact and basic info section of your profile; users in Europe can't even get their hands on it despite explicit requirements
under the GDPR that a company give users a right to know what information it has on them.
As Facebook attempts to salvage its reputation among users in the wake of the Cambridge Analytica scandal, it needs to put its money where its mouth is. Wiping 2FA numbers and shadow contact data from non-essential use would be a good start.
Facebook plans to unveil its Portal video chat device for the home next week.
Facebook originally planned to announce Portal at its annual F8 developer conference in May of this year. But the company's scandals, including the Cambridge Analytica data breach, led executives to shelve the announcement at the last minute.
Portal will feature a wide-angle video camera, which uses artificial intelligence to recognize people in the frame and follow them as they move throughout a room. In response to the breakdown in trust in Facebook, the company has recently added a privacy shutter that can physically block the camera.