Melon Farmers Unrated

Internet News


March 2020


 

There are more important harms to be thinking about than Pornhub...

Miserable MPs whinge about an uptick of people entertaining themselves on Pornhub during the coronavirus lockdown


Link Here 27th March 2020
British MPs have claimed that measures to reform and regulate the porn industry have faltered, putting vulnerable people at risk.

Last year attempts to introduce age verification systems into open access porn sites to stop children being able to access extreme online content stalled, and MPs are warning that regulation proposed in a new online harms bill, currently at consultation stage in parliament, does not go far enough.

Tracy Brabin, the shadow culture secretary, whinged:

The online harms bill doesn't go far enough. We have to get control over this industry. We have a duty of care to young people whose videos are being shared who might not want them shared, and ... to potential victims of sex trafficking and rape.

MPs from both sides of the political divide agree. Conservative MP Maria Miller, chair of the women and equalities committee, said: These are hugely important issues and [the online harms bill] is taking too long, we have been talking about this for two years now. She said the promised duty of care should include a way to hold companies to account if unlawful material is posted.

Laila Mickelwait, part of a group of activists at Exodus Cry, told the Guardian: Pornhub handing out 'free' premium content is a way for them to cash in on those around the world impacted by the pandemic. Pornhub is collecting an incredible amount of user data including IP addresses by allowing web beacons and other special information targeting technology on all user devices, and monetising it for their own gain.
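For those wondering what a web beacon actually is: it is typically an invisible 1x1 image embedded in a page whose only job is to make every visitor's browser phone home to a tracking server. A minimal Python sketch (all names and the port are hypothetical, and this illustrates the general technique only, not anything from Pornhub's actual code) shows how cheaply such a pixel harvests IP addresses and user agents:

    # Sketch of a tracking-pixel server: every page that embeds
    #   <img src="http://tracker.example:8000/pixel.gif?uid=123">
    # makes each visitor's browser report in here. Hypothetical example.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # A valid 1x1 transparent GIF, the classic beacon payload.
    PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
             b"!\xf9\x04\x01\x00\x00\x00\x00"
             b",\x00\x00\x00\x00\x01\x00\x01\x00\x00\x02\x02D\x01\x00;")

    class BeaconHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # The request itself delivers the tracking data: the
            # visitor's IP address, browser details, and whatever
            # identifiers were baked into the query string.
            print(f"ip={self.client_address[0]} "
                  f"ua={self.headers.get('User-Agent')} path={self.path}")
            self.send_response(200)
            self.send_header("Content-Type", "image/gif")
            self.send_header("Content-Length", str(len(PIXEL)))
            self.end_headers()
            self.wfile.write(PIXEL)

    if __name__ == "__main__":
        HTTPServer(("", 8000), BeaconHandler).serve_forever()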

 

 

Protect Speech and Security Online...

Calling on Americans to reject the Graham-Blumenthal Proposal


Link Here 25th March 2020
Full story: Internet Censorship in USA...Domain name seizures and SOPA

Senators Lindsey Graham and Richard Blumenthal are quietly circulating a serious threat to your free speech and security online. Their proposal would give the Attorney General the power to unilaterally write new rules for how online platforms and services must operate in order to rely on Section 230, the most important law protecting free speech online. The AG could use this power to force tech companies to undermine our secure and private communications.

We must stop this dangerous proposal before it sees the light of day. Please tell your members of Congress to reject the so-called EARN IT Act.   

The Graham-Blumenthal bill would establish a National Commission on Online Child Exploitation Prevention tasked with recommending best practices for providers of interactive computer services regarding the prevention of online child exploitation conduct. But the Attorney General would have the power to override the Commission's recommendations unilaterally. Internet platforms or services that failed to meet the AG's demands could be on the hook for millions of dollars in liability.

It's easy to predict how Attorney General William Barr would use that power: to break encryption. He's said over and over that he thinks the best practice is to weaken secure messaging systems to give law enforcement access to our private conversations. The Graham-Blumenthal bill would finally give Barr the power to demand that tech companies obey him or face overwhelming liability from lawsuits based on their users' activities. Such a demand would put encryption providers like WhatsApp and Signal in an awful conundrum: either face the possibility of losing everything in a single lawsuit or knowingly undermine their own users' security, making all of us more vulnerable to criminals. The law should not pit core values--Internet users' security and expression--against one another.

The Graham-Blumenthal bill is anti-speech, anti-security, and anti-innovation. Congress must reject it.

 

 

Lost Girls and lost minds at the BBFC...

Passed 12 uncut for sexual threat, language, self-harm, sexual violence references and over 20 instances of the word 'fuck'


Link Here 17th March 2020
Lost Girls is a 2020 USA mystery thriller by Liz Garbus.
Starring Amy Ryan, Thomasin McKenzie and Gabriel Byrne. BBFC link IMDb

When Mari Gilbert's (Academy Award® nominee Amy Ryan) daughter disappears, police inaction drives her own investigation into the gated Long Island community where Shannan was last seen. Her search brings attention to over a dozen murdered sex workers Mari will not let the world forget. From Academy Award® nominated filmmaker Liz Garbus, LOST GIRLS is inspired by true events detailed in Robert Kolker's "Lost Girls: An Unsolved American Mystery."

Lost Girls is a major offering from Netflix that demonstrated a major failing at the BBFC, whose automated random rating generator is used for Netflix ratings.

A ludicrous 12 rating was posted on the BBFC site, and people started to question it. As described by Neil:

It was originally rated 12 and a few of us flagged that the system had failed because the content was above and beyond the 12 bracket (dead prostitutes, domestic abuse, over 20 instances of the word fuck (some directed and aggressively used), along with a continual menacing tone).

Funny because they had just done a press release about their new approach to classifying domestic abuse on screen at the beginning of last week!

Anyway - first thing Monday morning, some poor BBFC examiner went and re-rated it. The original 12 rating was deleted and replaced with a 15 for strong language, sex references.

Here's the Twitter thread where the BBFC confesses to how its classification system works without a BBFC examiner.

The BBFC started the conversation rolling with an ill-judged self-promotional tweet implicitly boasting about the importance of its ratings:

BBFC @BBFC · As the weekend approaches, @NetflixUK have released lots of binge-worthy content. What will you be tuning in to watch? Whatever you choose, check the age rating on our website: http://bbfc.co.uk

  • Straight Outta Compton 36.1%

  • Love Is Blind 8.2%

  • Locke & Key 9.8%

  • A Quiet Place 45.9%

Well Scott took them at their word and checked out their ratings for Lost Girls. He wasn't impressed:

You need to go back to actually classifying Netflix material formally, rather than getting an algorithm to do it. This is rated R Stateside for language throughout, which in your terms means frequent strong language, so definitely not a 12!

The BBFC responded, perhaps before realising the extent of the failing:

Hi Scott, thanks for flagging, we are looking into this. Just to explain, a person at Netflix watches the content from start to end, and tags the content as they view. Everyone who is tagging content receives appropriate training so they know what to look out for.

Scott noted that the BBFC explanation rather makes for a self-proving mistruth, as there was obviously at least one step in the process that didn't have a human in the driving seat. He tweeted:

Yeah, the BBFC and the OFLC in Aus now use an automated programme for Netflix content - nobody actually sits and watches it. I get that there's lots of material to go through, but this obviously isn't the best idea. Age ratings you trust is the BBFC's tagline - the irony.

Neil adds:

This film needs reviewing with your new guidance about domestic abuse & triggers in mind. Over 20 uses of f***, some very aggressive and directed. Descriptions of violent domestic abuse (titanium plates, etc) and dead sex workers, sustained threatening tone. Certainly not a 12.

At this point it looks as if the BBFC hadn't quite grasped that its system had clearly spewed bollox, and tried to justify the system as infallible even when it was clearly badly wrong:

These tags are then processed by an algorithm that sets out the same high standards as our classification guidelines. Then, this automatically produces a BBFC age rating for the UK, which is consistent with other BBFC rated content.

Scott adds:

Ah, I stand corrected - didn't realise there was a middle man who watches the content. Nevertheless, there's still nobody at the BBFC watching it, which I think is an oversight - this film in particular is a perfect example.

The next thing spotted was that the erroneous 12 rating had been deleted and replaced by a human-crafted 15 rating.

And one has to revisit the BBFC statement: processed by an algorithm that sets out the same high standards as our classification guidelines. Perhaps we should read the BBFC statement at face value and conclude that the BBFC's high standards are the same standard as the bollox 12 rating awarded to Lost Girls.
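The BBFC hasn't published the algorithm itself, but the pipeline it described in the thread (a person at Netflix tags the content, an algorithm converts the tags into a rating) is simple enough to sketch. The tag names and rating floors below are purely illustrative guesses, not the BBFC's actual rules; the point is how such a pipeline can output a 12 the moment the tagging understates a single element:

    # Hypothetical sketch of a tag-to-rating pipeline of the kind the
    # BBFC described: tags in, age rating out, no examiner in the loop.
    # Tag names and rating floors are invented for illustration.
    TAG_FLOOR = {
        "sexual threat": 12,
        "self-harm references": 12,
        "infrequent strong language": 12,
        "frequent strong language": 15,   # 20+ uses of 'fuck', directed
        "strong violence": 15,
    }

    def rate(tags):
        """The rating is simply the highest floor any tag triggers."""
        return max((TAG_FLOOR.get(t, 0) for t in tags), default=0)

    # The Lost Girls failure mode: if the tagger logs the swearing as
    # 'infrequent' (or the algorithm never counts the 20+ uses), every
    # tag sits inside the 12 bracket and out comes a ludicrous 12.
    print(rate(["sexual threat", "self-harm references",
                "infrequent strong language"]))              # -> 12
    print(rate(["sexual threat", "frequent strong language"]))  # -> 15

Garbage tags in, garbage rating out: the algorithm can be perfectly consistent with its own rules and still be badly wrong about the film.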

 

 

A censorship struggle...

Amazon UK bans Hitler's book, Mein Kampf


Link Here 17th March 2020
  Amazon UK has banned the sale of most editions of Hitler's Mein Kampf and other Nazi propaganda books from its store following campaigning by Jewish groups.

Booksellers were informed in recent days that they would no longer be allowed to sell a number of Nazi-authored books on the website.

In one email seen by the Guardian, individuals selling secondhand copies of Mein Kampf on the service have been told by Amazon that they can no longer offer this book as it breaks the website's code of conduct. The ban impacts the main editions of Mein Kampf produced by mainstream publishers such as London-based Random House and India's Jaico, for whom it has become an unlikely bestseller.

Other Nazi publications are also affected, including the children's book The Poisonous Mushroom, written by Nazi publisher Julius Streicher, who was later executed for crimes against humanity.

Amazon would not comment on what had prompted it to change its mind on the issue, but a recent intervention by the London-based Holocaust Educational Trust calling for the books' removal received the backing of leading British politicians.

 

 

Fined for a failed forgottening...

The Swedish government internet censor fines Google for not taking down links and for warning the targeted website about the censorship


Link Here 17th March 2020
Full story: The Right to be Forgotten...Bureaucratic censorship in the EU

The Swedish data protection censor, Datainspektionen, has fined Google 75 million Swedish kronor (7 million euro) for failing to comply with its censorship instructions.

According to the internet censor, which is affiliated with Sweden's Ministry of Justice, Google violated the terms of the right-to-be-forgotten rule, an EU-mandated regulation introduced in 2014 allowing individuals to request that potentially harmful private information be kept from popping up in internet searches and directories.

Datainspektionen says an internal audit has shown that Google has failed to properly remove two search results which were ordered to be delisted back in 2017, making either too narrow an interpretation of what content needed to be removed, or failing to remove a link to content without undue delay.

The watchdog has also slapped Google with a cease-and-desist order for its practice of notifying website owners of a delisting request, claiming that this practice defeats the purpose of link removal in the first place.

Google has promised to appeal the fine, with a spokesperson for the company saying that it disagrees with this decision on principle.

 

 

Worthy but blinkered...

Independent report on child abuse material recommends strict age/identity verification for social media


Link Here 14th March 2020
The Independent Inquiry into Child Sexual Abuse, chaired by Professor Alexis Jay, was set up because of serious concerns that some organisations had failed and were continuing to fail to protect children from sexual abuse. It describes its remit as:

Our remit is huge, but as a statutory inquiry we have unique authority to address issues that have persisted despite previous inquiries and attempts at reform.

The inquiry has just published its report with the grandiose title: The Internet.

It has considered many aspects of child abuse and come up with the following short list of recommendations:

  1. Pre-screening of images before uploading

    The government should require industry to pre-screen material before it is uploaded to the internet to prevent access to known indecent images of children.
     
  2. Removal of images

    The government should press the WeProtect Global Alliance to take more action internationally to ensure that those countries hosting indecent images of children implement legislation and procedures to prevent access to such imagery.
     
  3. Age verification

    The government should introduce legislation requiring providers of online services and social media platforms to implement more stringent age verification techniques on all relevant devices.
     
  4. Draft child sexual abuse and exploitation code of practice

    The government should publish, without further delay, the interim code of practice in respect of child sexual abuse and exploitation as proposed by the Online Harms White Paper (published April 2019).

But it should be noted that the inquiry gave not even a passing mention to some of the privacy issues that would have far-reaching consequences should age verification be required for children's social media access.

Perhaps the authorities should recall that age verification for porn failed because the lawmakers were only thinking of the children, and didn't give even a moment of passing consideration to the privacy of the porn users. The lawmakers' blinkeredness resulted in the failure of their beloved law.

Has anyone even considered the question of what will happen if they ban kids from social media? An epidemic of tantrums? Collapse of social media companies? Kids going back to hanging around on street corners? Kids finding more underground websites to frequent? Kids playing violent computer games all day instead?

 

 

Offsite Article: Tim Berners-Lee calls for urgent action to make cyberspace safer for women and girls...


Link Here 12th March 2020
Why can't we have policies that protect everyone equally? Identitarian one-sided rules have achieved little beyond unfairness, injustice and society-wide aggrievement.

See article from theguardian.com

 

 

UK and US play silly games with backdoors for encrypted messaging...

The Chinese will be probing your backdoors as soon as they are introduced


Link Here 8th March 2020
Full story: UK Government vs Encryption...Government seeks to restrict peoples use of encryption
   
 Haha he thought he was protected by a level 5 lock spell,
but every bobby on the street has a level 6 unlock spell,
and the bad guys have level 10.

The Government is playing silly games, trying to suggest ways that snooping backdoors on people's encrypted messaging could be unlocked by the authorities whilst being magically safe from bad guys, especially those armed with banks of Chinese supercomputers.

The government wants to make backdoors mandatory for messaging and offers a worthless 'promise' that authority figures would need to agree before the police are allowed to use their key to unlock messages.

Andersen Cheng, chief executive of Post-Quantum, a specialist encryption firm working with Nato and Government agencies, said a virtual key split into five parts - or more - could unlock messages when all five parties agreed and the five key fragments were joined together.

Those five parties could include the tech firm, such as Facebook; the police; the security service or GCHQ; an independent privacy advocate or specialist similar to the independent reviewer of terror legislation; and the judge authorising the warrant.
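Mechanically, what Cheng describes is an all-or-nothing secret split. A minimal Python sketch of a 5-of-5 XOR split (our assumption of the mechanism; Post-Quantum's actual scheme is not detailed here) shows how any subset short of all five fragments is just random noise:

    # Hypothetical 5-of-5 key escrow split: each party holds one
    # fragment; the message key reappears only when all five combine.
    import secrets
    from functools import reduce

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def split_key(key, parties=5):
        """Split key so that all fragments are needed to rebuild it."""
        shares = [secrets.token_bytes(len(key)) for _ in range(parties - 1)]
        # Final fragment makes the XOR of all fragments equal the key.
        return shares + [reduce(xor_bytes, shares, key)]

    def join_key(shares):
        return reduce(xor_bytes, shares)

    key = secrets.token_bytes(32)   # the escrowed message key
    frags = split_key(key)          # platform, police, GCHQ, advocate, judge
    assert join_key(frags) == key      # all five together: key recovered
    assert join_key(frags[:4]) != key  # any four: statistically just noise

The mechanics are the easy part, though: the objection above stands, because whoever can steal, subpoena or brute-force all five fragments, Chinese supercomputers or otherwise, holds everyone's messages.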

Cheng's first company TRL helped set up the secure communications system used by 10 Downing Street to talk with GCHQ, embassies and world leaders, but I bet that system did not include backdoor keys.

The government claims that official access would only be granted where, for example, the police or security service were seeking to investigate communications between suspect parties at a specific time, and where a court ruled it was in the public or nation's interest.

However the government does not address the obvious issue of bad guys getting hold of the keys and letting anyone unlock the messages for a suitable fee. And sometimes those bad guys are armed with the best brute force code cracking powers in the world.

 

 

Protecting the age of innocence...

Whilst endangering everyone else. Australian parliamentary committee recommends age verification for porn


Link Here 8th March 2020
Full story: Age Verification for Porn...Endangering porn users for the sake of the children

Protecting the age of innocence

Report of the inquiry into age verification for online wagering and online pornography

House of Representatives Standing Committee on Social Policy and Legal Affairs

Executive Summary

The Committee’s inquiry considered the potential role for online age verification in protecting children and young people in Australia from exposure to online wagering and online pornography.

Evidence to the inquiry revealed widespread and genuine concern among the community about the serious impacts on the welfare of children and young people associated with exposure to certain online content, particularly pornography.

The Committee heard that young people are increasingly accessing or being exposed to pornography on the internet, and that this is associated with a range of harms to young people’s health, education, relationships, and wellbeing. Similarly, the Committee heard about the potential for exposure to online wagering at a young age to lead to problem gambling later in life.

Online age verification is not a new concept. However, the Committee heard that as governments have sought to strengthen age restrictions on online content, the technology for online age verification has become more sophisticated, and there are now a range of age-verification services available which seek to balance effectiveness and ease-of-use with privacy, safety, and security.

In considering these issues, the Committee was concerned to see that, in so much as possible, age restrictions that apply in the physical world are also applied in the online world.

The Committee recognised that age verification is not a silver bullet, and that protecting children and young people from online harms requires government, industry, and the community to work together across a range of fronts. However, the Committee also concluded that age verification can create a significant barrier to prevent young people—and particularly young children—from exposure to harmful online content.

The Committee’s recommendations therefore seek to support the implementation of online age verification in Australia.

The Committee recommended that the Digital Transformation Agency lead the development of standards for online age verification. These standards will help to ensure that online age verification is accurate and effective, and that the process for legitimate consumers is easy, safe, and secure.

The Committee also recommended that the Digital Transformation Agency develop an age-verification exchange to support a competitive ecosystem for third-party age verification in Australia.

In relation to pornography, the Committee recommended that the eSafety Commissioner lead the development of a roadmap for the implementation of a regime of mandatory age verification for online pornographic material, and that this be part of a broader, holistic approach to address the risks and harms associated with online pornography.

In relation to wagering, the Committee recommended that the Australian Government implement a regime of mandatory age verification, alongside the existing identity verification requirements. The Committee also recommended the development of educational resources for parents, and consideration of options for restricting access to loot boxes in video games, including through the use of age verification.

The Committee hopes that together these recommendations will contribute to a safer online environment for children and young people.

Lastly, the Committee acknowledges the strong public interest in the inquiry and expresses its appreciation to the individuals and organisations that shared their views with the Committee.

 

 

But it's probably still OK to hate Trump supporters?...

Twitter updates its censorship rules about hateful content


Link Here 8th March 2020
Full story: Twitter Censorship...Twitter offers country by country take downs

Twitter updated its rules about hateful content on 5th March 2020. The changes are in the area of dehumanizing remarks, which are remarks that treat others as less than human, on the basis of age, disability, or disease. The rules now read:

Hateful conduct policy

Hateful conduct: You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease. We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories.

Hateful imagery and display names: You may not use hateful images or symbols in your profile image or profile header. You also may not use your username, display name, or profile bio to engage in abusive behavior, such as targeted harassment or expressing hate towards a person, group, or protected category.

Violent threats

We prohibit content that makes violent threats against an identifiable target. Violent threats are declarative statements of intent to inflict injuries that would result in serious and lasting bodily harm, where an individual could die or be significantly injured, e.g., "I will kill you".

Wishing, hoping or calling for serious harm on a person or group of people

We prohibit content that wishes, hopes, promotes, or expresses a desire for death, serious and lasting bodily harm, or serious disease against an entire protected category and/or individuals who may be members of that category. This includes, but is not limited to:

  • Hoping that someone dies as a result of a serious disease, e.g., "I hope you get cancer and die."

  • Wishing for someone to fall victim to a serious accident, e.g., "I wish that you would get run over by a car next time you run your mouth."

  • Saying that a group of individuals deserve serious physical injury, e.g., "If this group of protesters don't shut up, they deserve to be shot."

References to mass murder, violent events, or specific means of violence where protected groups have been the primary targets or victims

We prohibit targeting individuals with content that references forms of violence or violent events where a protected category was the primary target or victims, where the intent is to harass. This includes, but is not limited to sending someone:

  • media that depicts victims of the Holocaust;

  • media that depicts lynchings.

Inciting fear about a protected category

We prohibit targeting individuals with content intended to incite fear or spread fearful stereotypes about a protected category, including asserting that members of a protected category are more likely to take part in dangerous or illegal activities, e.g., "all [religious group] are terrorists".

Repeated and/or non-consensual slurs, epithets, racist and sexist tropes, or other content that degrades someone

We prohibit targeting individuals with repeated slurs, tropes or other content that intends to dehumanize, degrade or reinforce negative or harmful stereotypes about a protected category. This includes targeted misgendering or deadnaming of transgender individuals.

We also prohibit the dehumanization of a group of people based on their religion, age, disability, or serious disease.

Hateful imagery

We consider hateful imagery to be logos, symbols, or images whose purpose is to promote hostility and malice against others based on their race, religion, disability, sexual orientation, gender identity or ethnicity/national origin. Some examples of hateful imagery include, but are not limited to:

  • symbols historically associated with hate groups, e.g., the Nazi swastika;

  • images depicting others as less than human, or altered to include hateful symbols, e.g., altering images of individuals to include animalistic features; or

  • images altered to include hateful symbols or references to a mass murder that targeted a protected category, e.g., manipulating images of individuals to include yellow Star of David badges, in reference to the Holocaust.

 

 

Offsite Article: Get that location data turned off...


Link Here 8th March 2020
Google tracked his bike ride past a burglarized home. That made him a suspect.

See article from nbcnews.com

 

 

Offsite Article: No Porn for Chinese Stuck Under Virus Lockdown...


Link Here 7th March 2020
Internet controls have proved even more restrictive as Chinese life moves online under quarantine. By Celine Sui

See article from foreignpolicy.com

 

 

Irreconcilable differences...

EU Copyright Filters Are On a Collision Course With EU Data Privacy Rules


Link Here 4th March 2020

The European Union's controversial new copyright rules are on a collision course with EU data privacy rules. The GDPR guards data protection, privacy, and other fundamental rights in the handling of personal data. Such rights are likely to be affected by an automated decision-making system that's guaranteed to be used, and abused, under Article 17 to find and filter out unauthorized copyrighted material. Here we take a deep dive examining how the EU got here and why Member States should act now to embrace enforcement policies for the Copyright Directive that steer clear of automated filters that violate the GDPR by censoring and discriminating against users.

Platforms Become the New Copyright Police

Article 17 of the EU's Copyright Directive (formerly Article 13) makes online services liable for user-uploaded content that infringes someone's copyright. To escape liability, online service operators have to show that they made best efforts to obtain rightsholders' authorization and ensure infringing content is not available on their platforms. Further, they must show they acted expeditiously to remove content and prevent its re-upload after being notified by rightsholders.

Prior to passage of the Copyright Directive, user rights advocates alerted lawmakers that operators would have to employ upload filters to keep infringing content off their platforms. They warned that what was then Article 13 would turn online services into copyright police with special license to scan and filter billions of users' social media posts, videos, audio clips, and photos for potential infringements.

While not everyone agreed about the features of the controversial overhaul of outdated copyright rules, there was little doubt that any automated system for catching and blocking copyright infringement would impact users, who would sometimes find their legitimate posts erroneously removed or blocked. Instead of unreservedly safeguarding user freedoms, the compromise worked out focuses on procedural safeguards to counter over-blocking. Although complaint and redress mechanisms are supposed to offer a quick fix, chances are that censored Europeans will have to join a long queue of fellow victims of algorithmic decision-making and await the chance to plead their case.

Can't See the Wood For the Trees: the GDPR

There's something awfully familiar about the idea of an automated black-box judgment system that weighs user-generated content and has a significant effect on the position of individuals. At recent EU copyright dialogue debates on the technical and legal limits of copyright filters, EU data protection rules--which restrict the use of automated decision-making processes involving personal data--were not put on the agenda by EU officials. Nor were academic experts on the GDPR who have raised this issue in the past (read this analysis by Sophie Stalla-Bourdillon or have a look at this year's CPDP panel on copyright filters).

Under Article 22 of the GDPR , users have a right "not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her." Save for exceptions, which will be discussed below, this provision protects users from detrimental decisions made by algorithms, such as being turned down for an online loan by a service that uses software, not humans, to accept or reject applicants. In the language of the regulation, the word "solely" means a decision-making process that is totally automated and excludes any real human influence on the outcome.

The Copyright-Filter Test: Personal Data

The GDPR generally applies if a provider is processing personal data, which is defined as any information relating to an identified or identifiable natural person ("data subject," Article 4(1) GDPR). Virtually every post that Article 17 filters analyze will have come from a user who had to create an account with an online service before making their post. The required account registration data make it inevitable that Copyright Directive filters must respect the GDPR. Even anonymous posts will have metadata, such as IP addresses (C-582/14, Breyer v Germany), which can be used to identify the poster. Anonymization is technically fraught, and even purported anonymization will not satisfy the GDPR if the content is connected with a user profile, such as a social media profile on Facebook or YouTube.

Defenders of copyright filters might counter that these filters do not evaluate metadata. Instead, they'll say that filters merely compare uploaded content with information provided by rightsholders. However, the Copyright Directive's algorithmic decision-making is about much more than content-matching. It is the decision whether a specific user is entitled to post a specific work. Whether the user's upload matches the information provided by rightsholders is just a step along the way. Filters might not always use personal data to determine whether to remove content, but the decision is always about what a specific individual can do. In other words: how can monitoring and removing people's uploads, which express views they seek to share, not involve a decision about that individual?

Moreover, the concept of "personal data" is very broad. The EU Court of Justice (Case C-434/16 Nowak v Data Protection Commissioner) held that "personal data" covers any information "provided that it 'relates' to the data subject," whether through the content (a selfie uploaded on Facebook), through the purpose (a video is processed to evaluate a person's preferences), or through the effect (a person is treated differently due to the monitoring of their uploads). A copyright filter works by removing any content that matches materials from anyone claiming to be a rightsholder. The purpose of filtering is to decide whether a work will or won't be made public. The consequence of using filtering as a preventive measure is that some users' works will be blocked in error, while other (luckier) users' works will not be blocked, meaning the filter creates a significant effect or even discriminates against some users.
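To make that concrete, here is a minimal sketch of the matching step, assuming a naive exact-hash fingerprint database (real filters use perceptual fingerprints, but the shape of the decision is the same). The comparison is file against file, yet the record the system emits is a verdict attached to an identifiable account:

    # Hypothetical Article 17 upload filter: content matching in the
    # middle, but a decision about a specific user at the output.
    import hashlib
    from dataclasses import dataclass

    # Fingerprints supplied by rightsholders (invented example data).
    RIGHTSHOLDER_DB = {hashlib.sha256(b"protected song").hexdigest()}

    @dataclass
    class Decision:
        user_id: str        # the uploader's account: personal data
        upload_sha256: str
        blocked: bool

    def filter_upload(user_id, payload):
        digest = hashlib.sha256(payload).hexdigest()
        # The match is content vs. content; the outcome is a judgment
        # on what this individual may publish.
        return Decision(user_id, digest, digest in RIGHTSHOLDER_DB)

    print(filter_upload("alice@example.net", b"protected song"))   # blocked=True
    print(filter_upload("bob@example.net", b"an original essay"))  # blocked=False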

Even more importantly, the Guidelines on automated decision-making developed by the WP29, an official European data protection advisory body (now the EDPB), provide a user-focused interpretation of the requirements for automated individual decision-making. Article 22 applies to decisions based on any type of data. That means that Article 22 of the GDPR applies to algorithms that evaluate user-generated content that is uploaded to a platform.

Adverse Effects

Do copyright filters result in "legal" or "significant" effects as envisioned in the GDPR? The GDPR doesn't define these terms, but the guidelines endorsed by the European Data Protection Board enumerate some "legal effects," including denial of benefits and the cancellation of a contract.

The guidelines explain that even where a filter's judgment does not have legal impact, it still falls within the scope of Article 22 of the GDPR if the decision-making process has the potential to significantly affect the behaviour of the individual concerned, has a prolonged impact on the user, or leads to discrimination against the user. For example, having your work erroneously blocked could lead to adverse financial circumstances or denial of economic opportunities. The more intrusive a decision is and the more reasonable expectations are frustrated, the higher the likelihood for adverse effects.

Consider a takedown or block of an artistic video by a creator whose audience is waiting to see it (they may have backed the creator's crowdfunding campaign). This could result in harming the creator's freedom to conduct business, leading to financial loss. Now imagine a critical essay about political developments. Blocking this work is censorship that impairs the author's right of free expression. There are many more examples that show that adverse effects will often be unavoidable.

Legitimate Grounds for Automated Individual Decision-Making

There are three grounds under which automated decision-making may be allowed under Article 22(2) of the GDPR. Users may be subjected to automated decision-making if one of three exceptions applies:

  • it's necessary for entering into or performance of a contract,

  • authorized by the EU or member state law, or

  • based on the user's explicit consent.

Necessity

Copyright filters cannot justly be considered "necessary" under this rule. "Necessity" is narrowly construed in the data protection framework, and can't merely be something that is required under terms of service. Rather, a "necessity" defence for automated decision-making must be in line with the objectives of data protection law, and can't be used if there are more fair or less intrusive measures available. Mere participation in an online service does not give rise to this "necessity," and thus provides no serious justification for automated decision-making.

Authorization

Perhaps proponents of upload filters will argue that they will be authorized by the EU member state laws that implement the Copyright Directive. Whether this is what the directive requires has been ambiguous from the very beginning.

Copyright Directive rapporteur MEP Axel Voss insisted that the Copyright Directive would not require upload filters and dismissed claims to the contrary as mere scare-mongering by digital rights groups. Indeed, after months of negotiation between EU institutions, the final language version of the directive conspicuously avoided any explicit reference to filter technologies. Instead, Article 17 requires "preventive measures" to ensure the non-availability of copyright-protected content and makes clear that its application should not lead to any identification of individual users, nor to the processing of personal data, except where provided under the GDPR.

Even if the Copyright Directive does "authorize" the use of filters, Article 22(2)(b) of the GDPR says that regulatory authorization alone is not sufficient to justify automated decision-making. The authorizing law--the law that each EU Member State will make to implement the Copyright Directive--must include "suitable" measures to safeguard users' rights, freedoms, and legitimate interests. It is unclear whether Article 17 provides enough leeway for member states to meet these standards.

Consent

Without "necessity" or "authorization," the only remaining path for justifying copyright filters under the GDPR is explicit consent by users. For data processing based on automated decision-making, a high level of individual control is required. The GDPR demands that consent be freely given, specific, informed, and unambiguous. As take-it-or-leave-it situations are against the rationale of true consent, it must be assessed whether the decision-making is necessary for the offered service. And consent must be explicit, which means that the user must give an obvious express statement of consent. It seems likely that few users will be interested in consenting to onerous filtering processes.

Article 22 says that even if automated decision-making is justified by user consent or by contractual necessity, platforms must safeguard user rights and freedoms. Users always have the right to obtain "human intervention" from platforms, to express their opinion about the content removal, and to challenge the decision. The GDPR therefore requires platforms to be fully transparent about why and how users' work was taken down or blocked.

Conclusion: Copyright-Filters Must Respect Users' Privacy Rights

The significant negative effects on users subjected to automated decision-making, and the legal uncertainties about the situations in which copyright-filters are permitted, should best be addressed by a policy of legislative self-restraint. Whatever decision national lawmakers take, they should ensure safeguards for users' privacy, freedom of speech and other fundamental rights before any uploads are judged, blocked or removed.

If Member States adopt this line of reasoning and fulfill their legal obligations in the spirit of EU privacy rules, it could choke off any future for EU-mandated, fully-automated upload filters. This will set the groundwork for discussions about general monitoring and filtering obligations in the upcoming Digital Services Act.

(Many thanks to Rossana Ducato for the exchange of legal arguments, which inspired this article).




 

