The UK ISP BT has become the first of the major broadband providers to trial their own DNS over HTTPS resolver, which encrypts Domain Name System (DNS) requests.
This is in response to Firefox offering its own choice of encrypted DNS resolver, which would
effectively bypass BT's current unencrypted DNS resolver. That unencrypted resolver is what allows the UK government to monitor and log people's internet use; block websites considered 'harmful'; snitch people up to the police for politically incorrect comments; and snitch
people up to copyright trolls over dodgy file sharing.
However, BT's new service will allow people to continue using website blocking for parental controls whilst being a lot safer from third-party snoopers on their networks.
BT have made the
following statement about its experimental new service:
BT are currently investigating roadmap options to uplift our broadband DNS platform to support improvements in DNS security -- DNSSEC, DNS over TLS (DoT) and DNS over
HTTPS (DoH). To aid this activity, and in particular to gain operational deployment insights, we have enabled an experimental DoH trial capability.
We are initially experimenting with an open resolver, but our plan is to move to a closed
resolver available only to BT customers.
The BT DoH trial recursive resolver can be reached at https://doh.bt.com/dns-query/
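For anyone who wants to poke at the trial, an RFC 8484 DoH GET request is just a base64url-encoded DNS message carried in a URL parameter. The sketch below builds such a URL for the doh.bt.com endpoint quoted above; the helper names are my own, and this is an illustration of the wire format rather than anything BT has published.

```python
import base64
import struct

def build_dns_query(name: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query message in RFC 1035 wire format.
    qtype 1 = A record. ID is 0, which RFC 8484 suggests for cache-friendliness."""
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)  # RD flag set, 1 question
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # qclass 1 = IN
    return header + question

def doh_get_url(resolver: str, name: str) -> str:
    """Encode the query for an RFC 8484 GET request: base64url, padding stripped."""
    msg = build_dns_query(name)
    dns_param = base64.urlsafe_b64encode(msg).rstrip(b"=").decode("ascii")
    return f"{resolver}?dns={dns_param}"

# Resolver URL taken from the article; fetch the result with any HTTPS client,
# sending "Accept: application/dns-message".
print(doh_get_url("https://doh.bt.com/dns-query", "example.com"))
```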
The Chinese government has taken yet another step in strengthening its ability to track and scrutinize its citizens' activities by mandating new SIM card buyers to register their faces with the government.
The new rules mandate that cellphone companies have customers scan their faces before buying a new SIM card or registering a new cellphone number at offline stores. The country's authorities already require users to link their national IDs to
their cellphone numbers, but these latest regulations incorporate biometric authentication and artificial intelligence into the overarching surveillance regime.
No doubt the authorities have some really nasty ideas lined up
for the control of citizens using facial recognition technology. And no doubt they will be selling these capabilities abroad very shortly.
Smart TVs are called that because they connect to the Internet. They allow you to use popular streaming services and apps. Many also have microphones for those of us who are too lazy to actually pick up
the remote. Just shout at your set that you want to change the channel or turn up the volume and you are good to go.
A number of the newer TVs also have built-in cameras. In some cases, the cameras are used for facial recognition
so the TV knows who is watching and can suggest programming appropriately. There are also devices coming to market that allow you to video chat with grandma in 42" glory.
Beyond the risk that your TV manufacturer and app
developers may be listening and watching you, that television can also be a gateway for hackers to come into your home. A bad cyber actor may not be able to access your locked-down computer directly, but it is possible that your unsecured TV can give him
or her an easy back door in through your router.
Hackers can also take control of your unsecured TV. At the low end of the risk spectrum, they can change channels, play with the volume, and show your kids inappropriate
videos. In a worst-case scenario, they can turn on your bedroom TV's camera and microphone and silently cyberstalk you.
TVs and technology are a big part of our lives, and they aren't going away. So how can you protect your
family?
Know exactly what features your TV has and how to control those features. Do a basic Internet search with your model number and the words "microphone," "camera," and "privacy."
Don't depend on the default security settings. Change passwords if you can -- and know how to turn off the microphones, cameras, and collection of personal information if possible. If you can't turn them off, consider whether you are
willing to take the risk of buying that model or using that service.
If you can't turn off a camera but want to, a simple piece of black tape over the camera eye is a back-to-basics option.
Check
the manufacturer's ability to update your device with security patches. Can they do this? Have they done it in the past?
Check the privacy policy for the TV manufacturer and the streaming services you use. Confirm what data
they collect, how they store that data, and what they do with it.
Windows will improve user privacy with DNS over HTTPS
Here in Windows Core Networking, we're interested in keeping your traffic as private as possible, as well as fast and reliable. While there are many ways we can and do approach
user privacy on the wire, today we'd like to talk about encrypted DNS. Why? Basically, because supporting encrypted DNS queries in Windows will close one of the last remaining plain-text domain name transmissions in common web traffic.
Providing encrypted DNS support without breaking existing Windows device admin configuration won't be easy. However, at Microsoft we believe that
"we have to treat privacy as a human right. We have to have end-to-end cybersecurity built into technology."
We also believe Windows adoption of encrypted DNS will help make the overall Internet ecosystem healthier.
There is an assumption by many that DNS encryption requires DNS centralization. This is only true if encrypted DNS adoption isn't universal. To keep the DNS decentralized, it will be important for client operating systems (such as Windows) and Internet
service providers alike to widely adopt encrypted DNS.
With the
decision made to build support for encrypted DNS, the next step is to figure out what kind of DNS encryption Windows will support and how it will be configured. Here are our team's guiding principles on making those decisions:
Windows DNS needs to be as private and functional as possible by default without the need for user or admin configuration because Windows DNS traffic represents a snapshot of the user's browsing history. To Windows users,
this means their experience will be made as private as possible by Windows out of the box. For Microsoft, this means we will look for opportunities to encrypt Windows DNS traffic without changing the configured DNS resolvers set by users and system
administrators.
Privacy-minded Windows users and administrators need to be guided to DNS settings even if they don't know what DNS is yet. Many users are interested in controlling their privacy and go looking for
privacy-centric settings, such as app permissions for the camera and location, but may not be aware of DNS settings, understand why they matter, or think to look for them in the device settings.
Windows users and
administrators need to be able to improve their DNS configuration with as few simple actions as possible. We must ensure we don't require specialized knowledge or effort on the part of Windows users to benefit from encrypted DNS. Enterprise policies
and UI actions alike should be something you only have to do once rather than need to maintain.
Windows users and administrators need to explicitly allow fallback from encrypted DNS once configured. Once Windows has
been configured to use encrypted DNS, if it gets no other instructions from Windows users or administrators, it should assume falling back to unencrypted DNS is forbidden.
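Microsoft's last principle amounts to a fail-closed policy: once encrypted DNS is on, plain-text fallback needs an explicit opt-in. A minimal sketch of what that might look like (hypothetical function names, not Windows code):

```python
class DnsConfigError(Exception):
    """Raised when encrypted DNS fails and fallback is not permitted."""

def resolve(name, doh_resolver, allow_fallback=False, plain_resolver=None):
    """Fail-closed resolution policy as described in the post: falling back
    to unencrypted DNS is forbidden unless explicitly allowed."""
    try:
        return doh_resolver(name)
    except OSError:
        if allow_fallback and plain_resolver is not None:
            return plain_resolver(name)  # only with an explicit opt-in
        raise DnsConfigError(f"DoH lookup for {name} failed; fallback forbidden")
```

The point of the sketch is the default: with no other instruction, a DoH failure surfaces as an error instead of silently downgrading the user's privacy.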
Based on these principles, we are making plans to adopt DNS over HTTPS (or DoH) in the Windows DNS client. As a platform, Windows Core Networking seeks
to enable users to use whatever protocols they need, so we're open to having other options such as DNS over TLS (DoT) in the future. For now, we're prioritizing DoH support as the most likely to provide immediate value to everyone. For example, DoH
allows us to reuse our existing HTTPS infrastructure.
...
Why announce our intentions in advance of DoH being available to Windows Insiders? With encrypted DNS gaining more attention, we felt it was
important to make our intentions clear as early as possible. We don't want our customers wondering if their trusted platform will adopt modern privacy standards or not.
Recent attacks on encryption have diverged. On the one hand, we've seen Attorney General William Barr call for "lawful access" to encrypted communications, using arguments that have barely changed since the 1990s. But we've also seen
suggestions from a different set of actors for more purportedly "reasonable" interventions, particularly the use of client-side scanning to stop the transmission of contraband files, most often child exploitation imagery (CEI).
Sometimes called "endpoint filtering" or "local processing," this privacy-invasive proposal works like this: every time you send a message, software that comes with your messaging app first checks it against a
database of "hashes," or unique digital fingerprints, usually of images or videos. If it finds a match, it may refuse to send your message, notify the recipient, or even forward it to a third party, possibly without your knowledge.
On their face, proposals to do client-side scanning seem to give us the best of all worlds: they preserve encryption, while also combating the spread of illegal and morally objectionable content.
But
unfortunately it's not that simple. While it may technically maintain some properties of end-to-end encryption, client-side scanning would render the user privacy and security guarantees of encryption hollow. Most important, it's impossible to build a
client-side scanning system that can only be used for CEI. As a consequence, even a well-intentioned effort to build such a system will break key promises of the messenger's encryption itself and open the door to broader abuses. This post is a technical
deep dive into why that is.
A client-side scanning system cannot be limited to CEI through technical means
Imagine we want to add client-side scanning to WhatsApp. Before encrypting and sending an
image, the system will need to somehow check it against a known list of CEI images.
The simplest possible way to implement this: local hash matching. In this situation, there's a full CEI hash database inside every client device.
The image that's about to be sent is hashed using the same algorithm that hashed the known CEI images, then the client checks to see if that hash is inside this database. If the hash is in the database, the client will refuse to send the message (or
forward it to law enforcement authorities).
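The local-matching scheme just described fits in a few lines. SHA-256 here stands in for whatever cryptographic or perceptual hash a real deployment would use, and the database contents are invented for illustration.

```python
import hashlib

# Hypothetical pre-loaded database of blocked-content hashes (hex digests).
BLOCKED_HASHES = {
    hashlib.sha256(b"known blocked image bytes").hexdigest(),
}

def may_send(image_bytes: bytes) -> bool:
    """Client-side check run before encryption: refuse to send on a match."""
    return hashlib.sha256(image_bytes).hexdigest() not in BLOCKED_HASHES
```

Note that nothing in this code knows or cares what the hashes represent: whoever can write to BLOCKED_HASHES decides what gets blocked.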
At this point, this system contains a complete mechanism to block any image content. Now, anyone with the ability to add an item to the hash database can require the client to block any
image of their choice. Since the database contains only hashes, and the hashes of CEI are indistinguishable from hashes of other images, code that was written for a CEI-scanning system cannot be limited to only CEI by technical means.
Furthermore, it will be difficult for users to audit whether the system has been expanded from its original CEI-scanning purpose to limit other images as well, even if the hash database is downloaded locally to client devices. Given
that CEI is illegal to possess, the hashes in the database would not be reversible.
This means that a user cannot determine the contents of the database just by inspecting it, only by individually hashing every potential image to
test for its inclusion--a prohibitively large task for most people. As a result, the contents of the database are effectively unauditable to journalists, academics, politicians, civil society, and anyone without access to the full set of images in the
first place.
Client-side scanning breaks the promises of end-to-end encryption
Client-side scanning mechanisms will break the fundamental promise that encrypted messengers make to their users: the
promise that no one but you and your intended recipients can read your messages or otherwise analyze their contents to infer what you are talking about . Let's say that when the client-side scan finds a hash match, it sends a message off to the server to
report that the user was trying to send a blocked image. But as we've already discussed, the server has the ability to put any hash in the database that it wants.
Given that online content is known to follow long-tail
distributions, a relatively small set of images comprises the bulk of images sent and received. So, with a comparatively small hash database, an external party could identify the images being sent in a comparatively large percentage of messages.
As a reminder, an end-to-end encrypted system is a system where the server cannot know the contents of a message, despite the client's messages passing through it. When that same server has direct access to effectively decrypt a
significant portion of messages, that's not end-to-end encryption.
In practice, an automated reporting system is not the only way to break this encryption promise. Specifically, we've been loosely assuming thus far that the
hash database would be loaded locally onto the device. But in reality, due to technical and policy constraints, the hash database would probably not be downloaded to the client at all. Instead, it would reside on the server.
This
means that at some point, the hash of each image the client wants to send will be known by the server. Whether each hash is sent individually or a Bloom filter is applied, anything short of an ORAM-based system will have a privacy leakage directly to the
server at this stage, even in systems that attempt to block, and not also report, images. In other words, barring state-of-the-art privacy-preserving remote image access techniques that have a provably high (and therefore impractical) efficiency cost,
the server will learn the hashes of every image that the client tries to send.
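A toy model of the server-resident database makes the leakage concrete: even a server that only answers allow/block necessarily observes the hash of every image the client tries to send. The class and function names are hypothetical.

```python
import hashlib

class ServerSideMatcher:
    """Toy server-hosted hash database: it can answer block/allow queries,
    but as a side effect it sees every hash the client submits."""
    def __init__(self, blocked_hashes):
        self.blocked = set(blocked_hashes)
        self.seen = []  # every hash the server learns, match or not

    def check(self, image_hash: str) -> bool:
        self.seen.append(image_hash)
        return image_hash in self.blocked

def client_send(image_bytes: bytes, server: ServerSideMatcher) -> bool:
    """Returns True if the message may be sent; the hash leaves the
    device regardless of the outcome."""
    h = hashlib.sha256(image_bytes).hexdigest()
    return not server.check(h)
```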
Further arguments against client-side scanning
If this argument about image decryption isn't sufficiently
compelling, consider an analogous argument applied to the text of messages rather than attached images. A nearly identical system could be used to fully decrypt the text of messages. Why not check the hash of a particular message to see if it's a chain
letter, or misinformation? The setup is exactly the same, with the only change being that the input is text rather than an image. Now our general-purpose censorship and reporting system can detect people spreading misinformation... or literally any text
that the system chooses to check against. Why not put the whole dictionary in there, and thereby be able to decrypt any word that users type (in a similar way to this 2015 paper)? If a client-side scanning system were applied to the text of
messages, users would be similarly unable to tell that their messages were being secretly decrypted.
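The dictionary attack is easy to sketch: precompute a hash-to-word table, and the per-word hashes a scanning system observes invert straight back into plaintext. The wordlist below is a toy stand-in for "the whole dictionary".

```python
import hashlib

def word_hash(word: str) -> str:
    return hashlib.sha256(word.lower().encode("utf-8")).hexdigest()

# Toy stand-in for a full dictionary; a server precomputes this table once.
WORDLIST = ["the", "meeting", "is", "at", "noon", "protest", "square"]
REVERSE = {word_hash(w): w for w in WORDLIST}

def recover_text(observed_hashes):
    """What a scanning server could do with per-word hashes of a message."""
    return " ".join(REVERSE.get(h, "?") for h in observed_hashes)

# What a per-word client-side scanner would leak for "meeting at noon":
leaked = [word_hash(w) for w in "meeting at noon".split()]
print(recover_text(leaked))  # the message text is effectively recovered
```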
Regardless of what it's scanning for, this entire mechanism is circumventable by using an alternative client to the officially
distributed one, or by changing images and messages to escape the hash matching algorithm, which will no longer be secret once it's performed locally on the client's device.
These are just the tip of the iceberg of the technical
critiques, to say nothing of the policy reasons, why we shouldn't build a censorship mechanism into a private, secure messenger.
How cookies and tracking exploded, and why the adtech industry now wants full identity tokens. A good technical write-up of where we are and where it all could go.
DNS over HTTPS (DoH) is an encrypted internet protocol that makes it more difficult for ISPs and government censors to block users from accessing banned websites. It also makes it more difficult for state snoopers like GCHQ to keep tabs on
users' internet browsing history.
Of course this protection from external interference also makes internet browsing much safer from the threat of scammers, identity thieves and malware.
Google were once considering introducing DoH for
its Chrome browser but have recently announced that they will not allow it to be used to bypass state censors.
Mozilla, meanwhile, have been a bit more reasonable about it and allow users to opt in to using DoH. Now Mozilla is considering enabling DoH
by default in the US, but with the proviso that DoH will only be implemented if the user is not using parental controls or corporate website blocking.
Mozilla explains in a blog post:
What's next in making Encrypted
DNS-over-HTTPS the Default
By Selena Deckelmann
In 2017, Mozilla began working on the DNS-over-HTTPS (DoH) protocol, and since
June 2018 we've been running experiments in Firefox to ensure the performance and user experience are great. We've also been surprised and excited by the more than 70,000 users who have already chosen on their own to explicitly enable DoH in Firefox
Release edition. We are close to releasing DoH in the USA, and we have a few updates to share.
After many experiments, we've demonstrated that we have a reliable service whose performance is good, that we can detect and mitigate
key deployment problems, and that most of our users will benefit from the greater protections of encrypted DNS traffic. We feel confident that enabling DoH by default is the right next step. When DoH is enabled, users will be notified and given the
opportunity to opt out.
Results of our Latest Experiment
Our latest DoH experiment was designed to help us determine how we could deploy DoH, honor enterprise configuration and respect user choice
about parental controls.
We had a few key learnings from the experiment.
We found that OpenDNS' parental controls and Google's safe-search feature were rarely configured by Firefox users in the USA. In total, 4.3% of users in the study used OpenDNS' parental controls or safe-search. Surprisingly, there
was little overlap between users of safe-search and OpenDNS' parental controls. As a result, we're reaching out to parental controls operators to find out more about why this might be happening.
We found 9.2% of users
triggered one of our split-horizon heuristics. The heuristics were triggered in two situations: when websites were accessed whose domains had non-public suffixes, and when domain lookups returned both public and private (RFC 1918) IP addresses. There was
also little overlap between users of our split-horizon heuristics, with only 1% of clients triggering both heuristics.
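Mozilla doesn't publish the heuristics themselves in the post, but from the description they can be approximated roughly as below. The suffix list here is a tiny illustrative sample; Firefox actually consults the full Public Suffix List, so treat this as a sketch of the idea only.

```python
import ipaddress

# Tiny illustrative sample only; the real check uses the full Public Suffix List.
PUBLIC_SUFFIXES = {"com", "org", "net", "uk", "co.uk"}

def has_private_answer(addresses):
    """Second heuristic: did the lookup return any private (RFC 1918) address?"""
    return any(ipaddress.ip_address(a).is_private for a in addresses)

def has_nonpublic_suffix(hostname):
    """First heuristic: does the name end in a suffix not on the public list?"""
    labels = hostname.rstrip(".").split(".")
    candidates = {".".join(labels[-n:]) for n in (1, 2)}
    return not (candidates & PUBLIC_SUFFIXES)

def looks_like_split_horizon(hostname, addresses):
    """Either signal suggests a name only resolvable inside this network."""
    return has_nonpublic_suffix(hostname) or has_private_answer(addresses)
```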
Moving Forward
Now that we have these results, we want to tell you about the approach we have settled on to address managed networks and parental controls. At a high level, our plan is to:
Respect user choice for opt-in parental controls and disable DoH if we detect them;
Respect enterprise configuration and disable DoH unless explicitly enabled by enterprise configuration; and
Fall back to operating system defaults for DNS when split horizon configuration or other DNS issues cause lookup failures.
We're planning to deploy DoH in "fallback" mode; that is, if domain name lookups using DoH fail or if our heuristics are triggered, Firefox will fall back and use the default operating system DNS. This means that for the
minority of users whose DNS lookups might fail because of split horizon configuration, Firefox will attempt to find the correct address through the operating system DNS.
In addition, Firefox already detects that parental controls
are enabled in the operating system, and if they are in effect, Firefox will disable DoH. Similarly, Firefox will detect whether enterprise policies have been set on the device and will disable DoH in those circumstances. If an enterprise policy
explicitly enables DoH, which we think would be awesome, we will also respect that. If you're a system administrator interested in how to configure enterprise policies, please find documentation here.
Options for Providers of
Parental Controls
We're also working with providers of parental controls, including ISPs, to add a canary domain to their blocklists. This helps us in situations where the parental controls operate on the network rather than
an individual computer. If Firefox determines that our canary domain is blocked, this will indicate that opt-in parental controls are in effect on the network, and Firefox will disable DoH automatically.
This canary domain is
intended for use in cases where users have opted in to parental controls. We plan to revisit the use of this heuristic over time, and we will be paying close attention to how the canary domain is adopted. If we find that it is being abused to disable DoH
in situations where users have not explicitly opted in, we will revisit our approach.
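The canary check can be sketched as follows. Mozilla's published canary domain is use-application-dns.net; the resolver function is passed in so the logic is testable without a network, and the error handling is a simplification of how NXDOMAIN would surface in practice.

```python
CANARY_DOMAIN = "use-application-dns.net"  # Mozilla's published canary domain

def doh_allowed(resolve):
    """Sketch of the canary check: `resolve` is the network's plain DNS resolver.
    If the resolver blocks the canary (no answer, or NXDOMAIN modelled here as
    an OSError), treat that as a parental-controls signal and disable DoH."""
    try:
        answers = resolve(CANARY_DOMAIN)
    except OSError:
        return False
    return bool(answers)
```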
Plans for Enabling DoH Protections by Default
We plan to gradually roll out DoH in the USA starting in late
September. Our plan is to start slowly enabling DoH for a small percentage of users while monitoring for any issues before enabling for a larger audience. If this goes well, we will let you know when we're ready for 100% deployment.
An internal project to rewrite how Apple's Siri voice assistant handles sensitive topics such as feminism and the #MeToo movement advised developers to respond in one of three ways: don't engage, deflect and finally inform with neutral information from
Wikipedia.
The project saw Siri's responses explicitly rewritten to ensure that the service would say it was in favour of equality, but never say the word feminism -- even when asked direct questions about the topic.
The 2018 guidelines are
part of a large tranche of internal documents leaked to the Guardian by a former Siri grader, one of thousands of contracted workers who were employed to check the voice assistant's responses for accuracy until Apple ended the programme last month in
response to privacy concerns raised by the Guardian.
In explaining why the service should deflect questions about feminism, Apple's guidelines explain that Siri should be guarded when dealing with potentially controversial content. When questions
are directed at Siri, they can be deflected ... however, care must be taken here to be neutral.
For example, Apple was tested a little on internet forums about #MeToo. Previously, when users called Siri a slut, the service responded: 'I'd blush
if I could.' Now, a much sterner reply is offered: 'I won't respond to that.'
Brave presents new technical evidence about personalised advertising, and has uncovered a mechanism by which Google appears to be circumventing its purported GDPR privacy protections.
Russell Haworth, CEO of Nominet, Britain's domain name authority has outlined the UK's stance on maintaining UK censorship and surveillance capabilities as the introduction of encrypted DNS over HTTPS (DoH) will make their job a bit more difficult.
The authorities' basic idea is that UK ISPs will provide their own servers for DNS over HTTPS so that they can still use this DNS traffic to block websites and keep a log of everyone's internet use. Browser companies will then be expected to enforce
using the government's preferred DoH server.
And Google duly announced that it will comply with this censorship request. Google Chrome will only allow DoH servers that are government or corporate approved.
Note that this decision is more
nuanced than just banning internet users from sidestepping state censors. It also applies to users being prevented from sidestepping corporate controls on company networks, perhaps a necessary commercial consideration that simply can't be ignored.
Russell Haworth, CEO of Nominet explains:
Firefox and Google Chrome -- the two biggest web browsers with a combined market share of over 70% -- are both looking to implement DoH in the coming months, alongside other operators. The big question now is how they implement it, who they offer to be
the resolvers, and what policies they use. The benefit offered by DoH is encryption, which prevents eavesdropping or interception of DNS communication. However, DoH raises a number of issues which deserve careful consideration as we move towards it.
Some of the internet safety and security measures that have been built over the years involve the DNS. Parental controls, for example, generally rely on the ISP blocking particular domains for their customers. The Internet Watch
Foundation (IWF) also ask ISPs to block certain domains because they are hosting child sexual abuse material. There may also be issues for law enforcement using DNS data to track criminals. In terms of cyber security, many organisations currently use the
DNS to secure their networks, by blocking domains known to contain malware. All of these measures could be impacted by the introduction of DoH.
Sitting above all of these is one question: Will users know any of this is happening?
It is important that people understand how and where their data is being used. It is crucial that DoH is not simply turned on by default and DNS traffic disappears off to a server somewhere without people understanding and signing up to the privacy
implications. This is why we have produced a simple explainer and will be doing more to communicate about DoH in the coming weeks.
DoH can bring positive changes, but only if it is accompanied by understanding, informed consent, and attention to some key principles, as detailed below:
Informed user choice:
users will need to be educated on the way in which their data use is changing so they can give their informed consent to this new approach. We also need some clarity on who would see the data, who can access the data and under what
circumstances, how it is being protected and how long it will be available for.
Equal or better safety:
DoH disrupts and potentially breaks safety measures that have been built up
over many years. It must therefore be the responsibility of the browsers and DoH resolvers that implement DoH to take these up. It will also be important for current protections to be maintained.
Local jurisdiction and governance:
Local DoH resolvers will be needed in individual countries to allow for application of local law, regulators and safety bodies (like the IWF). This is also important to encourage innovation globally, rather than
having just a handful of operators running a pivotal service. Indeed, the internet was designed to be highly distributed to improve its resilience.
Security:
Many
organisations use the DNS for security by keeping suspicious domains that could include malware out of networks. It will be important for DoH to allow enterprises to continue to use these methods -- at Nominet we are embracing this in a scalable and
secure way for the benefit of customers through our cyber security offering.
Change is a constant in our digital age, and I for one would not stand in the way of innovation and development. This new approach to
resolving requests could be a real improvement for our digital world, but it must be implemented carefully and with the full involvement of Government and law enforcement, as well as the wider internet governance community and the third sector.
A Google developer has outlined tentative short-term plans for DoH in Chrome. It suggests that Chrome will only allow the selection of DoH servers that are equivalent to approved unencrypted servers.
This is a complex space and our short term plans won't necessarily solve or mitigate all these issues but are nevertheless steps in the right direction.
For the first milestone, we are considering an auto-upgrade approach. At a
high level, here is how this would work:
Chrome will have a small (i.e. non-exhaustive) table to map non-DoH DNS servers to their equivalent DoH DNS servers. Note: this table is not finalized yet.
Per this table, if the system's recursive
resolver is known to support DoH, Chrome will upgrade to the DoH version of that resolver. On some platforms, this may mean that where Chrome previously used the OS DNS resolution APIs, it now uses its own DNS implementation in order to implement DoH.
A group policy will be available so that Administrators can disable the feature as needed.
Ability to opt-out of the experiment via chrome://flags.
In other words, this would upgrade the protocol used for DNS resolution while keeping the user's DNS provider unchanged. It's also important to note that DNS over HTTPS does not preclude its operator from offering features such as
family-safe filtering.
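As a sketch, the auto-upgrade table might look like this. The entries are illustrative only, since the post is explicit that the real table was not finalized, though the Google and Cloudflare DoH endpoints shown are those operators' published ones.

```python
# Illustrative mapping only: Chrome's actual table was not finalized.
DOH_UPGRADE_TABLE = {
    "8.8.8.8": "https://dns.google/dns-query",
    "8.8.4.4": "https://dns.google/dns-query",
    "1.1.1.1": "https://cloudflare-dns.com/dns-query",
}

def upgrade_resolver(system_resolver_ip: str):
    """Return the equivalent DoH endpoint for the configured resolver,
    or None to keep classic DNS. Either way the DNS provider is unchanged."""
    return DOH_UPGRADE_TABLE.get(system_resolver_ip)
```

This captures the key property Google describes: only the transport is upgraded; an unknown resolver simply stays on unencrypted DNS with the same provider.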
Age verification for porn is pushing internet users into areas of the internet that provide more privacy, security and resistance to censorship.
I'd have thought that the security services would prefer internet users to remain in the more open areas
of the internet for easier snooping.
So I wonder whether protecting kids from stumbling across porn is worth the increased difficulty in monitoring terrorists and the like. Or perhaps GCHQ can already see through the encrypted internet.
RQ12: Privacy & Security for Firefox
Mozilla has an interest in potentially integrating more of Tor into Firefox, for the purposes of providing a Super Private Browsing (SPB) mode for our users.
Tor offers privacy and anonymity on the Web, features which are sorely needed in the modern era of mass surveillance, tracking and fingerprinting. However, enabling a large number of additional users to make use of the Tor network
requires solving for inefficiencies currently present in Tor so as to make the protocol optimal to deploy at scale. Academic research is just getting started with regards to investigating alternative protocol architectures and route selection protocols,
such as Tor-over-QUIC, employing DTLS, and Walking Onions.
What alternative protocol architectures and route selection protocols would offer acceptable gains in Tor performance? And would they preserve Tor properties? Is it truly
possible to deploy Tor at scale? And what would the full integration of Tor and Firefox look like?
The next monstrosity from our EU lawmakers is to relax net neutrality laws so that large internet corporations can better snoop on and censor the European people.
The internet technology known as deep packet inspection is currently illegal in Europe, but big telecom companies doing business in the European Union want to change that. They want deep packet inspection permitted as part of the new net neutrality rules
currently under negotiation in the EU, but on Wednesday, a group of 45 privacy and internet freedom advocates and groups published an open letter warning against the change:
Dear Vice-President Andrus Ansip, (and others)
We are writing you in the context of the evaluation of Regulation (EU) 2015/2120 and the reform of the BEREC Guidelines on its implementation. Specifically, we are concerned because of the increased use of Deep Packet Inspection (DPI)
technology by providers of internet access services (IAS). DPI is a technology that examines data packets transmitted in a given network beyond what would be necessary for the provision of IAS, by looking at specific content from the
user-defined payload of the transmission.
IAS providers are increasingly using DPI technology for the purpose of traffic management and the differentiated pricing of specific applications or services (e.g. zero-rating) as part of
their product design. DPI allows IAS providers to identify and distinguish traffic in their networks in order to single out the traffic of specific applications or services, for purposes such as billing them differently, throttling them, or prioritising them over
other traffic.
The undersigned would like to recall the concerning practice of examining domain names or the addresses (URLs) of visited websites and other internet resources. The evaluation of these types of data can reveal
sensitive information about a user, such as preferred news publications, interest in specific health conditions, sexual preferences, or religious beliefs. URLs directly identify specific resources on the world wide web (e.g. a specific image, a specific
article in an encyclopedia, a specific segment of a video stream, etc.) and give direct information on the content of a transmission.
A mapping of differential pricing products in the EEA conducted in 2018 identified 186 such
products which potentially make use of DPI technology. Among those, several products offered by mobile operators with large market shares are confirmed to rely on DPI, because they offer providers of applications or services the option of
identifying their traffic via criteria such as domain names, SNI, URLs or DNS snooping.
Currently, the BEREC Guidelines clearly state that traffic management based on the monitoring of domain names and URLs (as implied by the
phrase "transport protocol layer payload") is not reasonable traffic management under the Regulation. However, this clear rule has been mostly ignored by IAS providers in their treatment of traffic.
The nature of DPI necessitates
telecom expertise as well as expertise in data protection issues. Yet, we observe a lack of cooperation between national regulatory authorities for electronic communications and regulatory authorities for data protection on this issue, both in the
decisions put forward on these products as well as cooperation on joint opinions on the question in general. For example, some regulators issue justifications of DPI based on the consent of the customer of the IAS provider, which crucially ignores the
clear ban on DPI in the BEREC Guidelines and the processing of the data of the other party communicating with the subscriber, who never gave consent.
Given the scale and sensitivity of the issue, we urge the Commission and BEREC
to carefully consider the use of DPI technologies and their data protection impact in the ongoing reform of the net neutrality Regulation and the Guidelines. In addition, we recommend that the Commission and BEREC explore an interpretation of the
proportionality requirement included in Article 3, paragraph 3 of Regulation 2015/2120 in line with the data minimization principle established by the GDPR. Finally, we suggest mandating the European Data Protection Board to produce guidelines on the
use of DPI by IAS providers.
Best regards
European Digital Rights, Europe
Electronic Frontier Foundation, International
Council of European Professional Informatics Societies, Europe
Article 19, International
Chaos Computer Club e.V., Germany
epicenter.works - for digital rights, Austria
Austrian Computer Society (OCG), Austria
Bits of Freedom, the Netherlands
La Quadrature du Net, France
ApTI, Romania
Code4Romania, Romania
IT-Pol, Denmark
Homo Digitalis, Greece
Hermes Center, Italy
X-net, Spain
Vrijschrift, the Netherlands
Dataskydd.net, Sweden
Electronic Frontier Norway (EFN), Norway
Alternatif Bilisim (Alternative Informatics Association), Turkey
Digitalcourage, Germany
Fitug e.V., Germany
Digitale Freiheit, Germany
Deutsche Vereinigung für Datenschutz e.V. (DVD), Germany
Gesellschaft für Informatik e.V. (GI), Germany
LOAD e.V. - Verein für liberale Netzpolitik, Germany
(And others)
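The SNI-based identification the letter describes is possible because the server name is sent in cleartext during the TLS handshake, so any box on the path can read it before any encryption starts. A minimal sketch of that byte layout (following RFC 6066; the hostname here is just an example):

```python
import struct

def build_sni_extension(hostname: str) -> bytes:
    """Build a TLS server_name extension (RFC 6066) as it appears,
    unencrypted, inside a ClientHello."""
    name = hostname.encode("ascii")
    # name_type 0 (host_name) + 2-byte name length + the name itself
    entry = b"\x00" + struct.pack("!H", len(name)) + name
    server_name_list = struct.pack("!H", len(entry)) + entry
    # extension type 0x0000 (server_name) + 2-byte extension length
    return struct.pack("!HH", 0, len(server_name_list)) + server_name_list

def extract_sni(extension: bytes) -> str:
    """What a DPI box effectively does: read the hostname straight
    out of the plaintext bytes on the wire."""
    ext_type, _ext_len = struct.unpack("!HH", extension[:4])
    assert ext_type == 0  # server_name extension
    (name_len,) = struct.unpack("!H", extension[7:9])
    return extension[9:9 + name_len].decode("ascii")

ext = build_sni_extension("example.org")
print(extract_sni(ext))  # example.org
```

No decryption is involved at any point, which is why consent of the subscriber does nothing to protect the party at the other end of the connection.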
At the moment, when internet users want to view a page, their device looks up the site's domain name via an unencrypted DNS query. ISPs can see the domain requested and block it if the authorities don't like it. A new internet protocol has been launched that encrypts these lookups
so that ISPs can't tell what site is being requested, and so can't block it.
This new DNS over HTTPS protocol is already available in Firefox, which also offers an uncensored, encrypted DNS resolver. Users simply have to change the
network.trr settings in about:config (being careful of the dragons, of course)
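For the curious, what travels over the encrypted channel is an ordinary DNS query in wire format. A rough sketch of how a DoH client builds the RFC 8484 GET form of a lookup (the resolver URL below is a placeholder, not a real service):

```python
import base64
import struct

def build_dns_query(domain: str) -> bytes:
    """Encode a DNS query for an A record in wire format (RFC 1035)."""
    # ID 0 (recommended for DoH caching), flags 0x0100 (RD=1), 1 question
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # Each label is prefixed by its length; the name ends with a zero byte
    qname = b"".join(bytes([len(l)]) + l.encode("ascii") for l in domain.split("."))
    question = qname + b"\x00" + struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

def doh_get_url(resolver: str, domain: str) -> str:
    """RFC 8484 GET form: the wire-format query is base64url-encoded
    with padding stripped, and passed as the 'dns' parameter."""
    q = base64.urlsafe_b64encode(build_dns_query(domain)).rstrip(b"=")
    return f"{resolver}?dns={q.decode('ascii')}"

print(doh_get_url("https://doh.example.net/dns-query", "example.org"))
```

Because the whole request rides inside an ordinary HTTPS connection, an ISP on the path sees only encrypted traffic to the resolver, not which domain was looked up.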
Questions have been
raised in the House of Lords about the impact on the UK's ability to censor the internet.
House of Lords, 14th May 2019, Internet Encryption Question
Baroness Thornton Shadow Spokesperson (Health)
2:53 pm, 14th May 2019
To ask Her Majesty's Government what assessment they have made of the deployment of the Internet Engineering Task Force's new "DNS over HTTPS" protocol and its implications for the blocking
of content by internet service providers and the Internet Watch Foundation; and what steps they intend to take in response.
Lord Ashton of Hyde The Parliamentary Under-Secretary of State for Digital, Culture, Media and Sport
My Lords, DCMS is working together with the National Cyber Security Centre to understand and resolve the implications of DNS over HTTPS, also referred to as DoH, for the blocking of content online. This involves liaising
across government and engaging with industry at all levels, operators, internet service providers, browser providers and pan-industry organisations to understand rollout options and influence the way ahead. The rollout of DoH is a complex commercial and
technical issue revolving around the global nature of the internet.
Baroness Thornton Shadow Spokesperson (Health)
My Lords, I thank the Minister for that Answer, and I apologise to the House for
this somewhat geeky Question. This Question concerns the danger posed to existing internet safety mechanisms by an encryption protocol that, if implemented, would render useless the family filters in millions of homes and the ability to track down
illegal content by organisations such as the Internet Watch Foundation. Does the Minister agree that there is a fundamental and very concerning lack of accountability when obscure technical groups, peopled largely by the employees of the big internet
companies, take decisions that have major public policy implications with enormous consequences for all of us and the safety of our children? What engagement have the British Government had with the internet companies that are represented on the Internet
Engineering Task Force about this matter?
Lord Ashton of Hyde The Parliamentary Under-Secretary of State for Digital, Culture, Media and Sport
My Lords, I thank the noble Baroness for discussing this
with me beforehand, which was very welcome. I agree that there may be serious consequences from DoH. The DoH protocol has been defined by the Internet Engineering Task Force. Where I do not agree with the noble Baroness is that this is not an obscure
organisation; it has been the dominant internet technical standards organisation for 30-plus years and has attendees from civil society, academia and the UK Government as well as the industry. The proceedings are available online and are not restricted.
It is important to know that DoH has not been rolled out yet and the picture is complex--there are pros to DoH as well as cons. We will continue to be part of these discussions; indeed, there was a meeting last week, convened by the NCSC, with
DCMS and industry stakeholders present.
Lord Clement-Jones Liberal Democrat Lords Spokesperson (Digital)
My Lords, the noble Baroness has raised a very important issue, and it sounds from the
Minister's Answer as though the Government are somewhat behind the curve on this. When did Ministers actually get to hear about the new encrypted DoH protocol? Does it not risk blowing a very large hole in the Government's online safety strategy set out
in the White Paper?
Lord Ashton of Hyde The Parliamentary Under-Secretary of State for Digital, Culture, Media and Sport
As I said to the noble Baroness, the Government attend the IETF. The
protocol was discussed from October 2017 to October 2018, so it was during that process. As far as the online harms White Paper is concerned, the technology will potentially cause changes in enforcement by online companies, but of course it does not
change the duty of care in any way. We will have to look at the alternatives to some of the most dramatic forms of enforcement, such as DNS blocking.
Lord Stevenson of Balmacara Opposition Whip (Lords)
My Lords, if there is obscurity, it is probably in the use of the technology itself and the terminology that we have to use--DoH and the other protocols that have been referred to are complicated. At heart, there are two issues at
stake, are there not? The first is that the intentions of DoH, as the Minister said, are quite helpful in terms of protecting identity, and we do not want to lose that. On the other hand, it makes it difficult, as has been said, to see how the Government
can continue with their current plan. We support the Digital Economy Act approach to age-appropriate design, and we hope that that will not be affected. We also think that the soon to be legislated for--we hope--duty of care on all companies to protect
users of their services will help. I note that the Minister says in his recent letter that there is a requirement on the Secretary of State to carry out a review of the impact and effectiveness of the regulatory framework included in the DEA within the
next 12 to 18 months. Can he confirm that the issue of DoH will be included?
Lord Ashton of Hyde The Parliamentary Under-Secretary of State for Digital, Culture, Media and Sport
Clearly, DoH is on
the agenda at DCMS and will be included everywhere it is relevant. On the consideration of enforcement--as I said before, it may require changes to potential enforcement mechanisms--we are aware that there are other enforcement mechanisms. It is not true
to say that you cannot block sites; it makes it more difficult, and you have to do it in a different way.
The Countess of Mar Deputy Chairman of Committees, Deputy Speaker (Lords)
My Lords, for the
uninitiated, can the noble Lord tell us what DoH means--very briefly, please?
Lord Ashton of Hyde The Parliamentary Under-Secretary of State for Digital, Culture, Media and Sport
It is not possible
to do so very briefly. It means that, when you send a request to a server and you have to work out which server you are going to by finding out the IP address, the message is encrypted so that the intervening servers are not able to look at what is in
the message. It encrypts the message that is sent to the servers. What that means is that, whereas previously every server along the route could see what was in the message, now only the browser will have the ability to look at it, and that will put more
power in the hands of the browsers.
Lord West of Spithead Labour
My Lords, I thought I understood this subject until the Minister explained it a minute ago. This is a very serious issue. I was
unclear from his answer: is this going to be addressed in the White Paper? Will the new officer who is being appointed have the ability to look at this issue when the White Paper comes out?
Lord Ashton of Hyde The
Parliamentary Under-Secretary of State for Digital, Culture, Media and Sport
It is not something that the White Paper per se can look at, because it is not within the purview of the Government. The protocol is designed by the
IETF, which is not a government body; it is a standards body, so to that extent it is not possible. Obviously, however, when it comes to regulating and the powers that the regulator can use, the White Paper is consulting precisely on those matters,
which include DNS blocking, so it can be considered in the consultation.
Facebook is set to begin telling its users why posts appear in their news feeds, presumably in response to government concerns over its influence over billions of people's reading habits.
The social network will today introduce a button on each post
revealing why users are seeing it, including factors such as whether they have interacted often with the person who made the post or whether it is popular with other users.
It comes as part of a wider effort to make Facebook's systems more
transparent and secure in advance of the EU elections in May and attempts by European and American politicians to regulate social media. John Hegeman, Facebook's vice president of news feed, told the Telegraph:
We hear
from people frequently that they don't know how the news feed algorithm works, why things show up where they do, as well as how their data is used. This is a step towards addressing that.
We haven't done as much as we could do to
explain to people how the products work and help them access this information... I can't blame people for being a little bit uncertain or suspicious.
We recognise how important the platform that Facebook has become now is in the
world, and that means we have a responsibility to ensure that people who use it have a good experience and that it can't be used in ways that are harmful.
We are making decisions that are really important, and so we are trying to
be more and more transparent... we want the external world to be able to hold us accountable.