Vera Jourova is the European Commissioner for justice, consumers and gender equality. Once she opened a Facebook account. It did not go well. Jourova said at a news conference:
For a short time, I had a Facebook account. It was a channel of dirt. I didn't expect such an influx of hatred. I decided to cancel the account because I realised there will be less hatred in Europe after I do this.
Jourova's words carry more weight than most. She has a policy beef with Facebook, and also the means to enforce it. Jourova says Facebook's terms of service are misleading, and has called upon the company to clarify them. In a post Thursday on
that other channel of dirt, Twitter.com, she said:
I want #Facebook to be extremely clear to its users about how their service operates and makes money. Not many people know that Facebook has made available their data to third parties or that for instance it holds full copyright about any
picture or content you put on it.
Jourova says European authorities could sanction Facebook next year if they don't like what they hear from the company soon. "I was quite clear that we cannot negotiate forever," she said at the news conference. "We need to see the result."
Google bosses have forced employees to delete a confidential memo circulating inside the company that revealed details about a plan to launch a censored search engine in China, The Intercept has learned.
The memo, authored by a Google engineer, disclosed that the search system, codenamed Dragonfly, would require users to log in to perform searches, track their location -- and share the resulting history with a Chinese partner, presumably a proxy
for the government, who would have unilateral access to the data. This Chinese 'partner' would also be able to edit the data, controlling what should be censored.
The memo was shared earlier this month among a group of Google employees who have been organizing internal protests over the censored search system.
The Dragonfly memo reveals that a prototype of the censored search engine was being developed as an app for both Android and iOS devices, and would force users to sign in before they could use the service. The memo confirms, as The Intercept first
reported last week, that users' searches would be associated with their personal phone number. The memo adds that Chinese users' movements would also be stored, along with the IP address of their device and links they clicked on. It accuses
developers working on the project of creating spying tools for the Chinese government to monitor its citizens.
People's search histories, location information, and other private data would be sent out of China to a database in Taiwan, the memo states. But the data would also be provided to employees of a Chinese company who would be granted unilateral
access to the system.
The memo identifies at least 215 employees who appear to have been tasked with working full-time on Dragonfly, a number it says is larger than many Google projects.
Ex-Google boss predicts that the internet will split into a Chinese internet and a US internet
The internet will be divided into two different worlds within the next decade -- and China will lead one of them, according to ex-Google CEO Eric Schmidt.
He notes that the control the Chinese government wields over its citizens' online access means it is incompatible with the democratic internet of the west. This means there will be two distinct versions of the world wide web by 2028, one run by
China and the other by the US.
The process is already happening, with the so-called Great Firewall of China blocking Chinese citizens from accessing several of the internet's most popular websites, including Facebook and YouTube.
China wants to expand a ban on foreign TV shows during the evening prime-time hours, according to the latest proposal by the country's media censor.
Since 2004, China has banned foreign TV movies and serials during the peak 7-10pm viewing hours. Now the National Radio and Television Administration is considering banning all foreign programming during this peak period.
The rules will apply to free-to-air and paid channels, as well as streaming sites.
The censors cite ideological reasons, but maybe it's also to do with China's trade war with Donald Trump.
As China's TV gets ever more censored, many people now use streaming sites like iQiyi and Mango TV for their kicks and they are increasingly willing to pay for it. While these sites offer hit western shows such as Game of Thrones, they have also
adopted a similar strategy to Netflix by producing their own content.
But as they gain popularity they may also gain more attention from the censors.
The radio host and colourful conspiracy theorist Alex Jones has been permanently censored by Twitter.
One month after it distinguished itself from the rest of the tech industry by declining to bar the rightwing shock jock from its platform, Twitter fell in line with the other major social networks in banning Jones.
Twitter justified the censorship saying:
We took this action based on new reports of Tweets and videos posted yesterday that violate our abusive behavior policy, in addition to the accounts' past violations. We will continue to evaluate reports we receive regarding other accounts
potentially associated with @realalexjones or @infowars and will take action if content that violates our rules is reported or if other accounts are utilized in an attempt to circumvent their ban.
PayPal is the latest tech company to ban Infowars. PayPal told PCMag:
We undertook an extensive review of the Infowars sites, and found instances that promoted hate or discriminatory intolerance against certain communities and religions, which run counter to our core value of inclusion.
InfoWars said PayPal gave it 10 days to find an alternate payment provider before terminating the service. PayPal didn't cite the specific instances of hate speech, but Infowars says the content involved criticism of Islam and opposition to
transgenderism being taught to children in schools.
The UK government is preparing to establish a new internet censor that would make tech firms liable for content published on their platforms and have the power to sanction companies that fail to take down illegal material and hate speech within a set timeframe.
Under legislation being drafted by the Home Office and the Department for Digital, Culture, Media and Sport (DCMS) due to be announced this winter, a new censorship framework for online social harms would be created.
BuzzFeed News has obtained details of the proposals, which would see the establishment of an internet censor similar to Ofcom.
Home secretary Sajid Javid and culture secretary Jeremy Wright are considering the introduction of a mandatory code of practice for social media platforms and strict new rules, such as takedown deadlines, forcing websites to remove illegal hate speech
within a set timeframe or face penalties. Ministers are also looking at implementing age verification for users of Facebook, Twitter, and Instagram.
The new proposals are still in the development stage and are due to be put out for consultation later this year. The new censor would also develop new rules on controlling non-illegal content and online behaviour. The rules for what
constitutes non-illegal content will be the subject of what is likely to be a hotly debated consultation.
BuzzFeed News has also been told ministers are looking at creating a second new censor for online advertising. Its powers would include a crackdown on online advertisements for food and soft drink products that are high in salt, fat, or sugar.
BuzzFeed News understands concerns have been raised in Whitehall that the regulation of non-illegal content will spark opposition from free speech campaigners and MPs. There are also fears internally that some of the measures being considered,
including blocking websites that do not adhere to the new regulations, are so draconian that they will generate considerable opposition.
A government spokesperson confirmed to BuzzFeed News that the plans would be unveiled later this year.
Fifteen European regulators plus Washington State have made a joint declaration, while an Australian study likens loot boxes to gambling, not baseball cards
Fifteen European gambling regulators -- from the UK, Ireland, France, Austria, Poland, Latvia, the Czech Republic, Spain, the Isle of Man, Malta, Portugal, Jersey, Norway, and the Netherlands -- plus US representation from the Washington State Gambling
Regulator published the letter, noting their concerns with the business model.
In addition to the loot box problem, the letter addresses how the group will take on websites that let players gamble with, or sell, in-game items such as skins or weapons for real-world money.
One of the signatories, Neil McArthur, CEO of the UK Gambling Commission said:
We have joined forces to call on video games companies to address the clear public concern around the risks gambling and some video games can pose to children. We encourage video games companies to work with their gambling regulators and take
action now to address those concerns to make sure that consumers, and particularly children, are protected.
The letter speaks of the group's concerns but does not detail the direction that the group will take in responding to them.
According to VentureBeat, a study submitted to the Australian Parliament's Environment and Communications References Committee showed that there were links between loot box spending and problem gambling. The sample size was 7,500.
The more severe a gamer's problem gambling was, the more likely they were to spend large amounts of money on loot boxes. "These results strongly support claims that loot boxes are psychologically akin to gambling," said the report, conducted by Dr.
David Zendle and Dr. Paul Cairns.
In a statement, the pair added loot boxes could potentially act as an introduction to gambling or take advantage of gambling disorders. They note that the industry tends to brush off loot boxes as similar to harmless products like baseball cards,
football/soccer stickers, and products along those lines.
In related news, games maker EA could face legal issues for ignoring a ruling by the Belgian government to remove the Ultimate Team portion from FIFA 18.
The National Secular Society has said Ireland's impending referendum on its blasphemy law should prompt global action in defence of free speech on religion.
On Tuesday evening the Dail, the lower house of the Oireachtas (Ireland's parliament), ratified a proposal to hold a referendum on the issue on Friday 26 October. The decision passed through the house unopposed.
The upper house, the Seanad, is expected to pass the legislation on Thursday.
Currently Ireland's constitution says:
The publication or utterance of blasphemous, seditious, or indecent matter is an offence which shall be punishable in accordance with law.

The referendum will propose removing the word blasphemous from that article.
The minister for justice Charlie Flanagan said while the offence remained in the constitution, Ireland would be seen as keeping company with those who do not share the fundamental values we cherish such as belief in freedom of conscience and freedom of expression.
NSS chief executive Stephen Evans urged Ireland to take a stand for free speech when the referendum takes place:
Repealing the reference to blasphemy from Ireland's constitution would be a welcome declaration of Ireland's changing attitude to religious privilege and a statement of solidarity with free thinkers globally.
Ireland's referendum should prompt global action in defence of free speech on religion. It should send a message to the rest of the world: offending religious sensibilities is not a crime, and the world will not tolerate those who persecute
people for their thoughts and words.
Ofcom has published a prospectus angling for a role as the UK internet censor. It writes:
Ofcom has published a discussion document examining the area of harmful online content.
In the UK and around the world, a debate is underway about whether regulation is needed to address a range of problems that originate online, affecting people, businesses and markets.
The discussion document is intended as a contribution to that debate, drawing on Ofcom's experience of regulating the UK's communications sector, and broadcasting in particular. It draws out the key lessons from the regulation of content
standards -- for broadcast and on-demand video services -- and the insights that these might provide to policy makers into the principles that could underpin any new models for addressing harmful online content.
The UK Government intends to legislate to improve online safety, and to publish a White Paper this winter. Any new legislation is a matter for Government and Parliament, and Ofcom has no view about the institutional arrangements that might be put in place.
Alongside the discussion paper, Ofcom has published joint research with the Information Commissioner's Office on people's perception, understanding and experience of online harm. The survey of 1,686 adult internet users finds that 79% have
concerns about aspects of going online, and 45% have experienced some form of online harm. The study shows that protection of children is a primary concern, and reveals mixed levels of understanding around what types of media are regulated.
The sales pitch is more or less that Ofcom's TV censorship has 'benefited' viewers so would be a good basis for internet censorship.
Ofcom particularly makes a point of pushing the results of a survey of internet users and their 'concerns'. The survey is very dubious and ends up suggesting that 79% of users have concerns about going online.
And maybe this claim is actually true. After all, the Melon Farmers are amongst the 79% who have concerns about going online. The Melon Farmers are concerned that:
There are vast amounts of scams and viruses waiting to be filtered out of the Melon Farmers' email inbox every day.
The authorities never seem interested in doing anything whatsoever about protecting people from being scammed out of their life savings. Have you EVER heard of the police investigating a phishing scam?
On the other hand the police devote vast resources to prosecuting internet insults and jokes, whilst never investigating scams that see old folks lose their life savings.
So yes, there is concern about the internet. BUT, it would be a lie to infer that these concerns mean support for Ofcom's proposals to censor websites along the lines of TV.
In fact looking at the figures, some of the larger categories of 'concerns' are more about fears of real crime rather than concerns about issues like fake news.
Interestingly Ofcom has published how the 'concerns' were hyped up by prompting those surveyed a bit. For instance, Ofcom reports that 12% of internet users say they are 'concerned' about fake news without being prompted. With a little prompting by
the interviewer, the number of people reporting being concerned about fake news magically increases to 29%.
It also has to be noted that there are NO reports in the survey of internet users concerned about a lack of balancing opinions in news, a lack of algorithm transparency, a lack of trust ratings for news sources, or indeed for most of the other
suggestions that Ofcom addresses.
I've seen more fake inferences in the Ofcom discussion document than I have seen fake news items on the internet in the last ten years.
The play Stitching has opened at the Unifaun Theatre in Malta for a two-week run. But Stitching is not your average piece of theatre; it's taken 10 years, international coverage, and even a literal EU court case to get this show up and running.
Ten years ago, in October 2008, local theatre producer Adrian Buckle sent an email to playwright Anthony Neilson, asking for permission to produce his play Stitching in Malta. Neilson duly granted Unifaun the rights to a performance of his play.
Buckle booked a slot at a local theatre, hired the cast and informed the Board for Film and Stage Classification, expecting to be issued an age-rating certificate for the piece. However, instead of receiving an age certification, Buckle
received a certificate that simply stated the play had been Banned and disallowed, with no explanation or reason provided. Thus began a ten-year-long battle that finally brings us to this year's production.
However, the team at Unifaun would not stand for this lack of explanation; they chased for an answer, and in January 2009 the police commissioner delivered a letter that detailed the reasons:
Blasphemy against the State Religion
Obscene contempt for the victims of Auschwitz
An encyclopaedic review of dangerous sexual perversions leading to sexual servitude
Abby's eulogy to the child murderers Fred and Rosemary West
Reference to the abduction, sexual assault and murder of children
In conclusion, the play is a sinister tapestry of violence and perversion where the sum of the parts is greater than the whole. The Board feels that in this case the envelope has been pushed beyond the limits of public decency.
The censorship became major news in Malta and it was decided by the politicians at the time that the established censorship system was no longer compatible with EU human rights requirements, notably Article 10 of the Convention for the Protection
of Human Rights:
Freedom of expression constitutes one of the essential foundations of such a society, one of the basic conditions for its progress and for the development of every man [...] it is applicable not only to 'information' or 'ideas' that are
favourably received or regarded as inoffensive or as a matter of indifference, but also to those that offend, shock or disturb the State or any sector of the population.
The country's censorship laws were rewritten without calling on the services of stage censors. Film censorship was also reformed with new rules based on the UK's, which are at least significantly more liberal than before.
Yes, the play is crude. Yes, they swear a lot. Yes, they talk about child murderers. Yes, they use a dildo on stage. Yes, they describe sexual acts very explicitly. Yes, it probably made people very uncomfortable. That is why performances are
given an age certification. That is not a reason to censor an artist.
Three performances have passed so far and the world has not ended. Nobody has walked out of the theatre mid-performance in a fit of rage.
Theatre director Maryam Kazemi and theatre manager Saeed Assadi were detained by Iranian authorities over a video trailer for a production of Shakespeare's A Midsummer Night's Dream on 9 September 2018.
The trailer features men and women dancing together, which is illegal in Iran.
Cultural censorship official Shahram Karami said the issue was with the type of music played and the actors' movements used in the trailer.
Both were later bailed on surety of about $23,000 each.
Tech companies that fail to remove terrorist content quickly could soon face massive fines. The European Commission proposed new rules on Wednesday that would require internet platforms to remove illegal terror content within an hour of it being
flagged by national authorities. Firms could be fined up to 4% of global annual revenue if they repeatedly fail to comply.
Facebook (FB), Twitter (TWTR) and YouTube owner Google (GOOGL) had already agreed to work with the European Union on a voluntary basis to tackle the problem. But the Commission said that progress has not been sufficient.
A penalty of 4% of annual revenue for 2017 would translate to $4.4 billion for Google parent Alphabet and $1.6 billion for Facebook.
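Those figures are easy to sanity-check: the proposed cap is a flat 4% of global annual revenue, so the arithmetic is a single multiplication. A minimal sketch (the 2017 revenue inputs, roughly $110.9bn for Alphabet and $40.7bn for Facebook, are assumed approximations, and the function name is my own):

```python
def max_fine_bn(annual_revenue_bn: float, rate: float = 0.04) -> float:
    """Maximum penalty under the proposal: a flat percentage of
    global annual revenue, both expressed in billions of dollars."""
    return annual_revenue_bn * rate

# Approximate 2017 revenues in $bn (assumed figures)
print(round(max_fine_bn(110.9), 1))  # 4.4 -- Alphabet
print(round(max_fine_bn(40.7), 1))   # 1.6 -- Facebook
```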
The proposal is the latest in a series of European efforts to control the activities of tech companies.
The terror content proposal needs to be approved by the European Parliament and EU member states before becoming law.
The European Court of Human Rights (ECtHR) has found that the UK's mass surveillance programmes, revealed by NSA whistleblower Edward Snowden, did not meet the quality of law requirement and were incapable of keeping the interference
to what is necessary in a democratic society.
The landmark judgment marks the Court's first ruling on UK mass surveillance programmes revealed by Mr Snowden. The case was started in 2013 by campaign groups Big Brother Watch, English PEN, Open Rights Group and computer science expert Dr
Constanze Kurz following Mr Snowden's revelation of GCHQ mass spying.
Documents provided by Mr Snowden revealed that the UK intelligence agency GCHQ were conducting population-scale interception, capturing the communications of millions of innocent people. The mass spying programmes included TEMPORA, a bulk data
store of all internet traffic; KARMA POLICE, a catalogue including a web browsing profile for every visible user on the internet; and BLACK HOLE, a repository of over 1 trillion events including internet histories, email and instant messenger
records, search engine queries and social media activity.
The applicants argued that the mass interception programmes infringed UK citizens' rights to privacy protected by Article 8 of the European Convention on Human Rights as the population-level surveillance was effectively indiscriminate, without
basic safeguards and oversight, and lacked a sufficient legal basis in the Regulation of Investigatory Powers Act (RIPA).
In its judgment, the ECtHR acknowledged that bulk interception is by definition untargeted; that there was a lack of oversight of the entire selection process, and that safeguards were not sufficiently robust to provide adequate
guarantees against abuse.
In particular, the Court noted concern that the intelligence services can search and examine "related communications data" apparently without restriction -- data that identifies senders and recipients of communications, their
location, email headers, web browsing information, IP addresses, and more. The Court expressed concern that such unrestricted snooping could be capable of painting an intimate picture of a person through the mapping of social networks,
location tracking, Internet browsing tracking, mapping of communication patterns, and insight into who a person interacted with.
The Court acknowledged the importance of applying safeguards to a surveillance regime, stating:
In view of the risk that a system of secret surveillance set up to protect national security may undermine or even destroy democracy under the cloak of defending it, the Court must be satisfied that there are adequate and effective guarantees against abuse.
The Government passed the Investigatory Powers Act (IPA) in November 2016, replacing the contested RIPA powers and controversially putting mass surveillance powers on a statutory footing.
However, today's judgment that indiscriminate spying breaches rights protected by the ECHR is likely to provoke serious questions as to the lawfulness of bulk powers in the IPA.
Jim Killock, Executive Director of Open Rights Group said:
Viewers of the BBC drama, the Bodyguard, may be shocked to know that the UK actually has the most extreme surveillance powers in a democracy. Since we brought this case in 2013, the UK has actually increased its powers to indiscriminately
surveil our communications whether or not we are suspected of any criminal activity.
In light of today's judgment, it is even clearer that these powers do not meet the criteria for proportionate surveillance and that the UK Government is continuing to breach our right to privacy.
Silkie Carlo, director of Big Brother Watch said:
This landmark judgment confirming that the UK's mass spying breached fundamental rights vindicates Mr Snowden's courageous whistleblowing and the tireless work of Big Brother Watch and others in our pursuit for justice.
Under the guise of counter-terrorism, the UK has adopted the most authoritarian surveillance regime of any Western state, corroding democracy itself and the rights of the British public. This judgment is a vital step towards protecting millions
of law-abiding citizens from unjustified intrusion. However, since the new Investigatory Powers Act arguably poses an ever greater threat to civil liberties, our work is far from over.
Antonia Byatt, director of English PEN said:
This judgment confirms that the British government's surveillance practices have violated not only our right to privacy, but our right to freedom of expression too. Excessive surveillance discourages whistle-blowing and discourages investigative
journalism. The government must now take action to guarantee our freedom to write and to read freely online.
Dr Constanze Kurz, computer scientist, internet activist and spokeswoman of the German Chaos Computer Club said:
What is at stake is the future of mass surveillance of European citizens, not only by UK secret services. The lack of accountability is not acceptable when the GCHQ penetrates Europe's communication data with their mass surveillance techniques.
We all have to demand now that our human rights and more respect of the privacy of millions of Europeans will be acknowledged by the UK government and also by all European countries.
Dan Carey of Deighton Pierce Glynn, the solicitor representing the applicants, stated as follows:
The Court has put down a marker that the UK government does not have a free hand with the public's communications and that in several key respects the UK's laws and surveillance practices have failed. In particular, there needs to be much
greater control over the search terms that the government is using to sift our communications. The pressure of this litigation has already contributed to some reforms in the UK and this judgment will require the UK government to look again at
its practices in this most critical of areas.
The European Parliament has voted to approve new copyright powers enabling the big media industry to control how their content is used on the internet.
Article 11 introduces the link tax which lets news companies control how their content is used. The target of the new law is to make Google pay newspapers for its aggregating Google News service. The collateral damage is that millions of
websites can now be harangued for linking to and quoting articles, or even just sharing links to them.
Article 13 introduces the requirement for user content sites to create censorship machines that pre-scan all uploaded content and block anything copyrighted. The original proposal, voted on in June, directly specified that content hosts use
censorship machines (or filters, as the EU prefers to call them). After a cosmetic rethink since June, the law no longer specifies automatic filters, but instead specifies that content hosts are responsible for any copyrighted material they
publish. And of course the only feasible way that content hosts can ensure they are not publishing copyrighted material is to use censorship machines anyway. The law was introduced really just with the intention of making YouTube and Facebook
pay more for content from the big media companies. The collateral damage to individuals and small businesses was clearly of no concern to the well-lobbied MEPs.
Both articles will introduce profound new levels of censorship for all users of the internet, and will also mean reduced opportunities for people to get their contributions published or noticed on the internet. This is simply
because the large internet companies are commercial organisations and will always make decisions with costs and profitability in mind. They are not state censors with a budget to spend on nuanced decision making. So the net outcome will be to
block vast swathes of content being uploaded just in case it may contain copyright.
An example to demonstrate the point is the US censorship law, FOSTA. It requires content hosts to block content facilitating sex trafficking. Internet companies generally decided that it was easier to block all adult content rather than to try
and distinguish sex trafficking from non-trafficking sex related content. So sections of websites for dating and small ads, personal services etc were shut down overnight.
The EU has, however, introduced a few amendments to the original law to slightly lessen the impact on individuals and small-scale content creators.
Article 13 will now only apply to platforms where the main purpose ...is to store and give access to the public or to stream significant amounts of copyright protected content uploaded / made available by its users and
that optimise content and promote it for profit making purposes.
When defining best practices for Article 13, special account must now be taken of fundamental rights, the use of exceptions and limitations. Special focus should also be given to ensuring that the burden on SMEs remains
appropriate and that automated blocking of content is avoided (effectively an exception for micro/small businesses). Article 11 shall not extend to mere hyperlinks, which are accompanied by individual words (so it seems links are safe, but
quoted snippets of text must be very short), and the protection shall also not extend to factual information which is reported in journalistic articles from a press publication and will therefore not prevent anyone from reporting such factual information.
Article 11 shall not prevent legitimate private and non-commercial use of press publications by individual users .
Article 11 rights shall expire 5 years after the publication of the press publication. This term shall be calculated from the first day of January of the year following the date of publication. The right referred to in
paragraph 1 shall not apply with retroactive effect.
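The term rule quoted above is mechanical enough to sketch as date arithmetic: the clock starts on 1 January of the year after publication and the right lapses five years later. A minimal illustration, not legal advice (the function name is my own):

```python
from datetime import date

def article11_expiry(published: date, term_years: int = 5) -> date:
    """Article 11 rights are counted from 1 January of the year
    following publication and expire term_years later."""
    start = date(published.year + 1, 1, 1)
    return date(start.year + term_years, 1, 1)

# A piece published any time in 2019 would see its rights lapse on 1 January 2025.
print(article11_expiry(date(2019, 9, 12)))  # 2025-01-01
```

One consequence of the "first day of January" rule is that an article published on 31 December enjoys almost a full year less protection than one published the following day.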
Individual member states will now have to decide how Article 11 is implemented, which could create some confusion across borders.
At the same time, the EU rejected the other modest proposals to help out individuals and small creators:
No freedom of panorama. When we take photos or videos in public spaces, we're apt to incidentally capture copyrighted works: from stock art in ads on the sides of buses to t-shirts worn by protestors, to building facades claimed by architects
as their copyright. The EU rejected a proposal that would make it legal Europe-wide to photograph street scenes without worrying about infringing the copyright of objects in the background.
No user-generated content exemption, which would have made EU states carve out an exception to copyright for using excerpts from works for criticism, review, illustration, caricature, parody or pastiche.
A final round of negotiation with the EU Council and European Commission is now due to take place before member states make a decision early next year. But this is historically more of a rubber stamping process and few, if any, significant
changes are expected.
However, anybody who mistakenly thinks that Brexit will stop this from impacting the UK should be cautious. Regardless of what the EU approves, the UK might still have to implement it, and in any case the current UK Government supports many of
the controversial new measures.
Despite waves of calls and emails from European Internet users, the European Parliament today voted to accept the principle of a universal pre-emptive copyright filter for content-sharing sites, as well as the idea that news publishers should
have the right to sue others for quoting news items online -- or even using their titles as links to articles. Out of all of the potential amendments offered that would fix or ameliorate the damage caused by these proposals, they voted for the
worst on offer.
There are still opportunities, at the EU level, at the national level, and ultimately in Europe's courts, to limit the damage. But make no mistake, this is a serious setback for the Internet and digital rights in Europe.
It also comes at a trepidatious moment for pro-Internet voices in the heart of the EU. On the same day as the vote on these articles, another branch of the European Union's government, the Commission, announced plans to introduce a new regulation
on preventing the dissemination of terrorist content online. Doubling down on speedy unchecked censorship, the proposals will create a new removal order, which will oblige hosting service providers to remove content within one hour of being
ordered to do so. Echoing the language of the copyright directive, the Terrorist Regulation aims at ensuring smooth functioning of the digital single market in an open and democratic society, by preventing the misuse of hosting services for
terrorist purposes; it encourages the use of proactive measures, including the use of automated tools.
Not content with handing copyright law enforcement to algorithms and tech companies, the EU now wants to expand that to defining the limits of political speech too.
And as bad as all this sounds, it could get even worse. Elections are coming up in the European Parliament next May. Many of the key parliamentarians who have worked on digital rights in Brussels will not be standing. Marietje Schaake, author of
some of the better amendments for the directive, announced this week that she would not be running again. Julia Reda, the German Pirate Party representative, is moving on; Jan Philipp Albrecht, the MEP behind the GDPR, has already left Parliament
to take up a position in domestic German politics. The European Parliament's reserves of digital rights expertise, never that full to begin with, are emptying.
The best that can be said about the Copyright in the Digital Single Market Directive, as it stands, is that it is so ridiculously extreme that it looks set to shock a new generation of Internet activists into action -- just as the DMCA, SOPA/PIPA
and ACTA did before it.
If you've ever considered stepping up to play a bigger role in European politics or activism, whether at the national level, or in Brussels, now would be the time.
It's not enough to hope that these laws will lose momentum or fall apart from their own internal incoherence, or that those who don't understand the Internet will refrain from breaking it. Keep reading and supporting EFF, and join Europe's
powerful partnership of digital rights groups, from Brussels-based EDRi to your local national digital rights organization. Speak up for your digital business, your open source project, your hobby or fandom, and as a contributor to the global Internet.
This was a bad day for the Internet and for the European Union: but we can make sure there are better days to come.
The ePrivacy Regulation looks set to be an even bigger test for businesses than GDPR. It is a regulation likely to create a deficit in the customer information they collect, even post-GDPR.
Current cookie banner notifications, where websites inform users of cookie collection, will make way for cookie request pop-ups that deny cookie collection until a user has opted in or out of different types of cookie collection. Such a pop-up is
expected to cause a drop in web traffic as high as 40 per cent. The good news is that it will only appear should the user not have already set their cookie preferences at browser level.
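The default-deny logic behind such opt-in pop-ups can be sketched in a few lines. This is purely illustrative (the function name and consent structure are hypothetical, not any real API):

```python
# Hypothetical sketch of opt-in cookie gating under ePrivacy.
# It illustrates the default-deny rule the article describes.

def cookies_allowed(category, consent):
    """Essential cookies are always permitted; every other category
    requires an explicit opt-in recorded in `consent` (which could be
    populated from a site pop-up or from browser-level preferences).
    Absent an opt-in, collection is denied by default."""
    if category == "essential":
        return True
    return consent.get(category, False)

print(cookies_allowed("advertising", {}))                 # False: no opt-in
print(cookies_allowed("analytics", {"analytics": True}))  # True: opted in
```

The key shift from the cookie-banner era is the default: collection is off until the user acts, rather than on until the user objects.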
The outcome for businesses whose marketing and advertising lies predominantly online is the inevitable reduction in their ability to track, re-target and optimise experiences for their visitors.
For any business with a website that is dependent on cookies, the new regulations pose a severe risk of losing this vital source of consumer data. As a result, businesses must find a practical, effective and legal alternative to alleviate the burden on the shoulders of all teams involved and to offset any drastic shortfall in this crucial data.
Putting the power in the hands of consumers when it comes to setting browser-level cookie permissions will limit a business's ability to extensively track the actions users take on company websites and progress targeted cookie-based advertising.
Millions of internet users will have the option to withdraw their dataset from the view of businesses, one of the biggest threats ePrivacy poses.
MEPs approve copyright law requiring Google and Facebook to use censorship machines to block user uploads that may contain snippets of copyright material, including headlines, article text, pictures and video
The European Parliament has approved a disgraceful copyright law that threatens to destroy the internet as we know it.
The rule hands more power to news and record companies against Internet giants like Google and Facebook. But it also allows companies to make sweeping blocks of user-generated content, such as internet memes or reaction GIFs that use copyrighted material. The tough approach could spell the end for internet memes, which typically lay text over copyrighted photos or video from television programmes, films, music videos and more.
MEPs voted 438 in favour of the measures, 226 against, with 39 abstentions. The vote introduced Articles 11 and 13 to the directive, dubbed the link tax and censorship machines.
Article 13 puts the onus of policing for copyright infringement on the websites themselves. This forces web giants like YouTube and Facebook to scan uploaded content to stop the unlicensed sharing of copyrighted material. If the internet
companies find that such scanning does not work well, or makes the service unprofitable, the companies could pull out of allowing users to post at all on topics where the use of copyright material is commonplace.
The second amendment to the directive, Article 11, is intended to give publishers and newspapers a way to make money when companies like Google link to their stories. Search engines and online platforms like Twitter and Facebook will have to pay a license to link to news publishers when quoting portions of text from these outlets.
Following Wednesday's vote, EU lawmakers will now take the legislation to talks with the European Commission and the 28 EU countries.
Online game distributor Steam has approved its first uncensored adult game, Negligee: Love Stories.
Steam had announced its change of policy in June of this year, ironically after a bit of backlash when Steam proposed to step up the censorship of adult games. The previous policy required explicit content to be censored at sale but allowed subsequent patches to restore the cuts.
On Friday, Dharker Studios is slated to start selling an uncensored version of its game Negligee: Love Stories, which features nudity and sex scenes.
Other developers have also submitted uncensored games for approval on Steam.
An indie developer called Kagura Games, meanwhile, said some developers have already put their uncensored games up for review, so we'll be following that closely and consulting with Steam to decide what the best course of action is for releasing our future titles on Steam.
Australia's Herald Sun newspaper has republished its cartoon of tennis star Serena Williams on a defiant front page in which it attacked its critics and foreshadowed a future where satire is outlawed. The front page reads:
WELCOME TO PC WORLD
If the self-appointed censors of Mark Knight get their way on his Serena Williams cartoon, our new politically correct life will be very dull indeed.
The page features a collection of Mark Knight cartoons, including the depiction of Williams spitting a dummy and stamping on her racquet.
The cartoon, first published on Monday, was Knight's take on the tennis star's bad behaviour in insulting the umpire and calling him a thief.
The cartoon caused a reaction in the PC world, somehow suggesting that it is not allowed to mock the bad behaviour of a black woman.
Knight has rejected such suggestions saying:
I saw the world number one tennis player have a huge hissy fit and spit the dummy. That's what the cartoon was about, her poor behaviour on the court.
I drew her as an African-American woman. She's powerfully built. She wears these outrageous costumes when she plays tennis. She's interesting to draw. I drew her as she is, as an African-American woman.
Niche porn producers Pandora Blake and Misha Mayfair, campaigning lawyer Myles Jackman, and Backlash are campaigning to back a legal challenge to the upcoming internet porn censorship regime in the UK. They write on a new crowdfunding page:
We are mounting a legal challenge.
Do you lock your door when you watch porn -- or do you publish a notice in the paper? The new UK age verification law means you may soon have to upload proof of age to visit adult sites. This would connect your legal identity to a database of all your adult browsing. Join us to prevent the damage to your privacy.
The UK Government is bringing in age verification for adults who want to view adult content online, yet has failed to provide privacy and security obligations to ensure your private information is securely protected.
The law does not currently limit age verification providers to holding only the data needed to verify your age. Hence, other identifying data held about you could include anything from your passport information to your credit card details, up to your full search history. This is highly sensitive data.
What are the Privacy Risks?
Data Misuse - Since age verification providers are legally permitted to collect this information, what is to stop them from increasing revenue through targeting advertising at you, or even selling your personal data?
Data Breaches - No database is perfectly secure, despite good intentions. The leaking or hacking of your sensitive personal information could be truly devastating. The Ashley Madison hack led to suicides. Don't let the Government allow your
private sexual preferences be leaked into the public domain.
What are we asking money for?
We're asking you to help us crowdfund legal fees so we can challenge the new age verification rules under the Digital Economy Act 2017. We're asking for £10,000 to cover the cost of initial legal advice, since it's a complicated area of law.
Ultimately, we'd like to raise even more money, so we can send a message to Government that your personal privacy is of paramount importance.
Lucy Powell writes in the Guardian (presumably intended as an open comment):
Closed forums on Facebook allow hateful views to spread unchallenged among terrifyingly large groups. My bill would change that
You may wonder what could bring Nicky Morgan, Anna Soubry, David Lammy, Jacob Rees-Mogg and other senior MPs from across parliament together at the moment. Yet they are all sponsoring a bill I'm proposing that will tackle online hate, fake news and radicalisation. It's because, day-in day-out, whatever side of an argument we are on, we see the pervasive impact of abuse and hate online -- and increasingly offline, too.
Social media has given extremists a new tool with which to recruit and radicalise. It is something we are frighteningly unequipped to deal with.
Worryingly, it is on Facebook, which most of us in Britain use, where people are being exposed to extremist material. Instead of small meetings in pubs or obscure websites in the darkest corners of the internet, our favourite social media site is
increasingly where hate is cultivated.
Online echo chambers are normalising and allowing extremist views to go viral unchallenged. These views are spread as the cheap thrill of racking up Facebook likes drives behaviour and reinforces a binary worldview. Some people are being groomed
unwittingly as unacceptable language is treated as the norm. Others have a more sinister motive.
While in the real world, alternative views would be challenged by voices of decency in the classroom, staffroom, or around the dining-room table, there are no societal norms in the dark crevices of the online world. The impact of these bubbles of
hate can be seen, in extreme cases, in terror attacks from radicalised individuals. But we can also see it in the rise of the far right, with Tommy Robinson supporters rampaging through the streets this summer, or in increasing Islamophobia and antisemitism.
Through Facebook groups (essentially forums), extremists can build large audiences. There are many examples of groups that feature anti-Muslim or antisemitic content daily, in an environment which, because critics are removed from the groups,
normalises these hateful views. If you see racist images, videos and articles in your feed but not the opposing argument, you might begin to think those views are acceptable and even correct. If you already agree with them, you might be motivated to act.
This is the thinking behind Russia's interference in the 2016 US presidential election. The Russian Internet Research Agency set up Facebook groups, amassed hundreds of thousands of members, and used them to spread hate and fake news, organise
rallies, and attack Hillary Clinton. Most of its output was designed to stoke the country's racial tensions.
It's not only racism that is finding a home on Facebook. Marines United was a secret group of 30,000 current and former servicemen in the British armed forces and US Marines. Members posted nude photos of their fellow servicewomen, taken in
secret. A whistleblower described the group as revenge porn, creepy stalker-like photos taken of girls in public, talk about rape. It is terrifying that the group grew so large before anyone spoke out, and that Facebook did nothing until someone
informed the media.
Because these closed forums can be given a secret setting, they can be hidden away from everyone but their members. This locks out the police, intelligence services and charities that could otherwise engage with the groups and correct
disinformation. This could be particularly crucial with groups where parents are told not to vaccinate their children against diseases.
Despite having the resources to solve the problem, Facebook lacks the will. In fact, at times it actively obstructs those who wish to tackle hate and disinformation. Of course, it is not just Facebook, and the proliferation of online platforms
and forums means that the law has been much too slow to catch up with our digital world.
We should educate people to be more resilient and better able to spot fake news and recognise hate, but we must also ensure there are much stronger protections to spread decency and police our online communities. The responsibility to regulate
these social media platforms falls on the government. It is past time to act.
That's why I am introducing a bill in parliament which will do just that. By establishing legal accountability for what's published in large online forums, I believe we can force those who run these echo chambers to stamp out the evil that is
currently so prominent. Social media can be a fantastic way of bringing people together -- which is precisely why we need to prevent it being hijacked by those who instead wish to divide.
On Wednesday, the EU will vote on whether to accept two controversial proposals in the new Copyright Directive; one of these clauses, Article 13, has the potential to allow anyone, anywhere in the world, to effect mass, rolling waves of
censorship across the Internet.
The way things stand today, companies that let their users communicate in public (by posting videos, text, images, etc) are required to respond to claims of copyright infringement by removing their users' posts, unless the user steps up to
contest the notice. Sites can choose not to remove work if they think the copyright claims are bogus, but if they do, they can be sued for copyright infringement (in the United States at least), alongside their users, with huge penalties at
stake. Given that risk, the companies usually do not take a stand to defend user speech, and many users are too afraid to stand up for their own speech because they face bankruptcy if a court disagrees with their assessment of the law.
This system, embodied in the United States' Digital Millennium Copyright Act (DMCA) and exported to many countries around the world, is called notice and takedown, and it offers rightsholders the ability to unilaterally censor the Internet on
their say-so, without any evidence or judicial oversight. This is an extraordinary privilege without precedent in the world of physical copyright infringement (you can't walk into a cinema, point at the screen, declare I own that, and get the
movie shut down!).
But rightsholders have never been happy with notice and takedown. Because works that are taken down can be reposted, sometimes by bots that automate the process, rightsholders have called notice and takedown a game of whac-a-mole, where they have to keep circling back to remove the same infringing files over and over.
Rightsholders have long demanded a notice and staydown regime. In this system, rightsholders send online platforms digital copies of their whole catalogs; the platforms then build copyright filters that compare everything a user wants to post to
this database of known copyrights, and block anything that seems to be a match.
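The staydown workflow described above can be sketched in a few lines. This is a deliberately naive exact-hash version (the `StaydownFilter` class is hypothetical; real systems like Content ID use perceptual fingerprints that survive re-encoding):

```python
import hashlib

class StaydownFilter:
    """Deliberately naive sketch of a notice-and-staydown filter.

    Real filters use perceptual fingerprinting; this toy version only
    matches exact file hashes, but the workflow is the same: bulk
    claims in, automatic blocking of any upload that matches."""

    def __init__(self):
        self.blacklist = set()  # hashes of claimed works

    def register_claims(self, works):
        # Bulk submission: rightsholders upload whole catalogues at
        # once, with no human review of the claims.
        for data in works:
            self.blacklist.add(hashlib.sha256(data).hexdigest())

    def allow_upload(self, data):
        # Every user post is checked against the claims database;
        # anything that matches is blocked before publication.
        return hashlib.sha256(data).hexdigest() not in self.blacklist

f = StaydownFilter()
f.register_claims([b"new blockbuster", b"hit single"])
print(f.allow_upload(b"my holiday video"))  # True: no match, published
print(f.allow_upload(b"hit single"))        # False: blocked on upload
```

Note what is absent from the sketch: any check that the claimant actually owns the work, and any exception for quotation, parody or fair dealing. Those absences are precisely the problems discussed below.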
Tech companies have voluntarily built versions of this system. The most well-known of the bunch is YouTube's Content ID system, which cost $60,000,000 to build, and which works by filtering the audio tracks of videos to categorise them.
Rightsholders are adamant that Content ID doesn't work nearly well enough, missing all kinds of copyrighted works, while YouTube users report rampant overmatching, in which legitimate works are censored by spurious copyright claims: NASA gets blocked from posting its own Mars rover footage; classical pianists are blocked from posting their own performances; birdsong results in videos being censored; entire academic conferences lose their presenters' audio because the hall they rented played music at the lunch-break -- you can't even post silence without triggering copyright enforcement. Besides that, there is no bot that can judge whether something that does use copyrighted material is fair dealing. Fair dealing is protected under the law, but not under Content ID.
If Content ID is a prototype, it needs to go back to the drawing board. It overblocks (catching all kinds of legitimate media) and underblocks (missing stuff that infuriates the big entertainment companies). It is expensive, balky, and ineffective.
It's coming soon to an Internet near you.
On Wednesday, the EU will vote on whether the next Copyright Directive will include Article 13, which makes Content-ID-style filters mandatory for the whole Internet, and not just for the soundtracks of videos--also for the video portions, for
audio, for still images, for code, even for text. Under Article 13, the services we use to communicate with one another will have to accept copyright claims from all comers, and block anything that they believe to be a match.
This measure will censor the Internet and it won't even help artists to get paid.
Let's consider how a filter like this would have to work. First of all, it would have to accept bulk submissions. Disney and Universal (not to mention scientific publishers, stock art companies, real-estate brokers, etc) will not pay an army of
data-entry clerks to manually enter their vast catalogues of copyrighted works, one at a time, into dozens or hundreds of platforms' filters. For these filters to have a hope of achieving their stated purpose, they will have to accept thousands
of entries at once--far more than any human moderator could review.
But even if the platforms could hire, say, 20 percent of the European workforce to do nothing but review copyright database entries, this would not be acceptable to rightsholders. Not because those workers could not be trained to accurately
determine what was, and was not, a legitimate claim--but because the time it would take for them to review these claims would be absolutely unacceptable to rightsholders.
It's an article of faith among rightsholders that the majority of sales take place immediately after a work is released, and that therefore infringing copies are most damaging when they're available at the same time as a new work is released
(they're even more worried about pre-release leaks).
If Disney has a new blockbuster that's leaked onto the Internet the day it hits cinemas, they want to pull those copies down in seconds, not after precious days have trickled past while a human moderator plods through a queue of copyright claims
from all over the Internet.
Combine these three facts:
Anyone can add anything to the blacklist of copyrighted works that can't be published by Internet users;
The blacklists have to accept thousands of works at once; and
New entries to the blacklist have to go into effect instantaneously.
It doesn't take a technical expert to see how ripe for abuse this system is. Bad actors could use armies of bots to block millions of works at a go (for example, jerks could use bots to bombard the databases with claims of ownership over the collected works of Shakespeare, adding them to the blacklists faster than they could possibly be removed by human moderators, making it impossible to quote Shakespeare online).
But more disturbing is targeted censorship: politicians have long abused takedown to censor embarrassing political revelations or take critics offline, as have violent cops and homophobic trolls.
These entities couldn't use Content ID to censor the whole Internet: instead, they had to manually file takedowns and chase their critics around the Internet. Content ID only works for YouTube -- plus it only allows trusted rightsholders to add
works wholesale to the notice and staydown database, so petty censors are stuck committing retail copyfraud.
But under Article 13, everyone gets to play wholesale censor, and every service has to obey their demands: just sign up for a rightsholder account on a platform and start telling it what may and may not be posted. Article 13 has no teeth for
stopping this from happening: and in any event, if you get kicked off the service, you can just pop up under a new identity and start again.
Some rightsholder lobbyists have admitted that there is potential for abuse here, but they insist that it will all be worth it, because it will get artists paid. Unfortunately, this is also not true.
For all that these filters are prone to overblocking and ripe for abuse, they are actually not very effective against someone who actually wants to defeat them.
Let's look at the most difficult-to-crack content filters in the world: the censoring filters used by the Chinese government to suppress politically sensitive materials. These filters have a much easier job than the ones European companies will
have to implement: they only filter a comparatively small number of items, and they are built with effectively unlimited budgets, subsidized by the government of one of the world's largest economies, which is also home to tens of millions of
skilled technical people, and anyone seeking to subvert these censorship systems is subject to relentless surveillance and risks long imprisonment and even torture for their trouble.
Those Chinese censorship systems are really, really easy to break, as researchers from the University of Toronto's Citizen Lab demonstrated in a detailed research report released a few weeks ago.
People who want to break the filters and infringe copyright will face little difficulty. The many people who want to stay on the right side of copyright law -- but find themselves inadvertently on the wrong side of the filters -- will find themselves in insurmountable trouble, begging for appeal from a tech giant whose help systems all dead-end in brick walls. And any attempt to tighten the filters to catch these infringers will, of course, make it more likely that they will block legitimate material.
A system that allows both censors and infringers to run rampant while stopping legitimate discourse is bad enough, but it gets worse for artists.
Content ID cost $60,000,000 and does a tiny fraction of what the Article 13 filters must do. When operating an online platform in the EU requires a few hundred million in copyright filtering technology, the competitive landscape gets a lot more
bare. Certainly, none of the smaller EU competitors to the US tech giants can afford this.
On the other hand, US tech giants can afford this (indeed, they have pioneered copyright filters as a solution, even as groups like EFF protested it), and while their first preference is definitely to escape regulation altogether, paying a few hundred million to freeze out all possible competition is a pretty good deal for them.
The big entertainment companies may be happy with a deal that sells a perpetual Internet Domination License to US tech giants for a bit of money thrown their way, but that will not translate into gains for artists. The fewer competitors there are
for the publication, promotion, distribution and sale of creative works, the smaller the share will be that goes to creators.
We can do better: if the problem is monopolistic platforms (and indeed, monopolistic distributors), tackling that directly as a matter of EU competition law would stop those companies from abusing their market power to squeeze creators. Copyright filters are the opposite of antitrust, though: they will make the biggest companies much bigger, to the great detriment of all the little guys in the entertainment industry and in the market for online platforms for speech.
Many thanks to my local MEP Anthea McIntyre, who responded to my email about the rise of the censorship machines:
I appreciate your concerns regarding the new Copyright reform proposals. However, the objective of Article 13 is to make sure authors, such as musicians, are appropriately paid for their work, and to ensure that platforms fairly share revenues
which they derive from creative works on their sites with creators. I will be voting for new text which seeks to exclude small and microenterprise platforms from the scope and to introduce greater proportionality for SMEs.
In the text under discussion, if one of the main purposes of a platform is to share copyright works, if they optimise these works and also derive profit from them, the platform would need to conclude a fair license with the rightholders, if
rightholders request this. If not, platforms will have to check for and remove specific copyright content once this is supplied from rightholders. This could include pirated films which are on platforms at the same time as they are shown at the
cinema. However, if a platform's main purpose is not to share protected works, and it does not optimise copyright works nor make a profit from them, it would not be required to conclude a license. There are exemptions for online encyclopaedias (Wikipedia), sites where rightholders have approved the uploading of their works, and software platforms, while online marketplaces (including Ebay) are also out of scope.
Closing this value gap is an essential part of the Copyright Directive, which Secretary of State Matthew Hancock supports addressing. My Conservative colleagues and I support the general policy justification behind it, which is to make sure that platforms are responsible for their sites and that authors are fairly rewarded and
incentivised to create work. Content recognition will help to make sure creators, such as song writers, can be better identified and paid fairly for their work. Nevertheless, this should not be done at the expense of users' rights. We are
dedicated to striking the right balance between adequately rewarding rightholders and safeguarding users' rights. There are therefore important safeguards to protect users' rights, respect data protection, and to make sure that only proportionate
measures are taken.
I will therefore be supporting the mandate to enter into trilogue negotiations tomorrow so that the Directive can become law.
[Surely one understands that musicians are getting a bit of a rough deal from the internet giants, and one can see where McIntyre is coming from. However, it is clear that little thought has been given to how these rules will pan out in the real, profit-driven world, where the key stakeholders are doing their best for their shareholders, not the European peoples. It is surely driving the west into poverty when laws are so freely passed just to do a few nice things, whilst totally ignoring the cost of destroying people's businesses and incomes.]
Offsite Comment: ...And from the point of view of the internet giants
ARTICLE 19 is leading a coalition of international human rights organisations, who will tell the European Court of Justice (CJEU) that the de-listing of websites under the right to be forgotten should be limited in order to protect global
freedom of expression. The hearing will take place on September 11 with a judgment expected in early 2019.
The CJEU hearing in Google vs CNIL is taking place after France's highest administrative court asked for clarification in relation to the 2014 ruling in Google Spain. This judgment allows European citizens to ask search engines like Google to
remove links to inadequate, irrelevant or ... excessive content -- commonly known as the right to be forgotten (RTBF). While the content itself remains online, it cannot be found through online searches of the individual's name.
The CJEU has been asked to clarify whether a court or data regulator should require a search engine to de-list websites only in the country where it has jurisdiction or across the entire world.
France's data regulator, the Commission Nationale de l'Informatique et des Libertes (CNIL) has argued that if they uphold a complaint by a French citizen, search engines such as Google should not only be compelled to remove links from google.fr
but all Google domains.
ARTICLE 19 and the coalition of intervening organisations have warned that forcing search engines to de-list information on a global basis would be disproportionate. Executive Director of ARTICLE 19, Thomas Hughes said:
This case could see the right to be forgotten threatening global free speech. European data regulators should not be allowed to decide what Internet users around the world find when they use a search engine. The CJEU must limit the scope of the
right to be forgotten in order to protect the right of Internet users around the world to access information online.
ARTICLE 19 argues that rights to privacy and rights to freedom of expression must be balanced when deciding whether websites should be de-listed. Hughes added:
If European regulators can tell Google to remove all references to a website, then it will be only a matter of time before countries like China, Russia and Saudi Arabia start to do the same. The CJEU should protect freedom of expression not set
a global precedent for censorship.
Pornhub's age verification system AgeID has announced an exclusive partnership with OCL and its Portes solution, which provides anonymous face-to-face age verification: retailers confirm the age of customers who buy a card enabling porn access. The similar AVSecure scheme allows over-25s to buy its access card without showing any ID, but may require unrecorded ID from those appearing younger than 25.
According to the company, the PortesCard is available to purchase from selected high street retailers and any of the U.K.'s 29,000 PayPoint outlets as a voucher. Each PortesCard will cost £4.99 for use on a single device, or £8.99 for use across
multiple devices. This compares with £10 for the AVSecure card.
Once a card or voucher is purchased, its unique validation code must be activated via the Portes app within 24 hours, or it expires. Once the user has been verified, they will automatically be granted access to all adult sites using AgeID. Maybe this 24 hour limit is something to do with an attempt to restrict secondary sales of porn access codes, by ensuring that they get tied to devices almost immediately. It all sounds a little hasslesome.
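If the 24-hour window really is an anti-resale measure, the logic might look something like the sketch below. This is pure guesswork inferred from the article (the `PortesCode` class and its rules are hypothetical, not OCL's actual implementation):

```python
from datetime import datetime, timedelta

ACTIVATION_WINDOW = timedelta(hours=24)

class PortesCode:
    """Toy model of a purchased validation code. The class name and
    rules are guesses inferred from the article, not OCL's real code."""

    def __init__(self, purchased_at):
        self.purchased_at = purchased_at
        self.device_id = None  # unbound until first activation

    def activate(self, device_id, now):
        # The code must be activated within 24 hours of purchase,
        # tying it to a device quickly and frustrating resale.
        if now - self.purchased_at > ACTIVATION_WINDOW:
            return False  # expired before activation
        if self.device_id is not None and self.device_id != device_id:
            return False  # already bound to a different device
        self.device_id = device_id
        return True

code = PortesCode(purchased_at=datetime(2019, 4, 1, 12, 0))
print(code.activate("alices-phone", datetime(2019, 4, 1, 18, 0)))  # True
print(code.activate("resold-phone", datetime(2019, 4, 2, 9, 0)))   # False
```

Under this reading, a code sold on to a second buyer is worthless once the first buyer has activated it, and an unactivated code dies within a day.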
As an additional layer of protection, parents can quickly and simply block access on their children's devices to sites using Portes, so PortesCards cannot be associated with AgeID.
But note that an anonymously bought card is not quite a 100% safe solution. One has to consider whether, if the authorities get hold of a device, they can then see a complete history of all websites accessed using the app or access code. One
also has to consider whether someone can remotely correlate an 'anonymous' access code with all the tracking cookies holding one's ID.
The government is amending its Counter-Terrorism and Border Security Bill with regard to criminalising accessing terrorism-related content on the internet.
MPs, peers and the United Nations have already raised human rights concerns over pre-existing measures in the Counter-Terrorism and Border Security Bill, which proposed to make accessing propaganda online on three or more different occasions a criminal offence.
The Joint Human Rights Committee found the wording of the law vague and told the government it violated Article 10 of the European Convention on Human Rights (ECHR). The committee concluded in July:
This clause may capture academic and journalistic research as well as those with inquisitive or even foolish minds.
The viewing of material without any associated intentional or reckless harm is, in our view, an unjustified interference with the right to receive information...unless amended, the implementation of this clause would clearly risk breaching
Article 10 of the ECHR and unjustly criminalising the conduct of those with no links to terrorism.
The committee called for officials to narrow the new criminal offence so it requires terrorist intent and defines how people can legally view terrorist material.
The United Nations Special Rapporteur on the right to privacy also chipped in, accusing the British government of straying towards thought crime with the law.
In response, the government scrapped the three clicks rule entirely and broadened the concept of viewing to make the draft law read:
A person commits an offence if...the person views or otherwise accesses by means of the internet a document or record containing information of that kind.
It also added a clause saying a reasonable excuse includes:
Having no reason to believe that the document or record in question contained, or was likely to contain, information of a kind likely to be useful to a person committing or preparing an act of terrorism.