Melon Farmers Unrated

Internet News


2019: March


 

Updated: The State of Play...

Age verification and UK internet porn censorship


Link Here 31st March 2019
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust
The Government has been very secretive about its progress towards the start of internet censorship for porn in the UK. Meanwhile the appointed internet porn censor, the BBFC, has withdrawn into its shell to hide from the flak. It has uttered hardly a helpful word on the subject in the last six months, just at a time when newspapers have been printing uninformed news items based on old guesstimates of when the scheme will start.

The last target date was specified months ago when DCMS minister Margot James suggested that it was intended to get the scheme going around Easter of 2019. This date was not achieved but the newspapers seem to have jumped to the conclusion that the scheme would start on 1st April 2019. The only official response to this false news is that the DCMS will now be announcing the start date shortly.

So what has been going on?

Well it seems that maybe the government realised that asking porn websites and age verification services to demand that porn users identify themselves without any real legal protection on how that data can be used is perhaps not the wisest thing to do. Jim Killock of Open Rights Group explains that the delays are due to serious concerns about privacy and data collection:

When they consulted about the shape of age verification last summer they were surprised to find that nearly everyone who wrote back to them in that consultation said this was a privacy disaster and they need to make sure people's data doesn't get leaked out.

Because if it does it could be that people are outed, have their relationships break down, their careers could be damaged, even for looking at legal material.

The delays have been very much to do with the fact that privacy has been considered at the last minute and they're having to try to find some way to make these services a bit safer. It's introduced a policy to certify some of the products as better for privacy (than others) but it's not compulsory and anybody who chooses one of those products might find they (the companies behind the sites) opt out of the privacy scheme at some point in the future.

And there are huge commercial pressures to do this because as we know with Facebook and Google user data is extremely valuable, it tells you lots about what somebody likes or dislikes or might want or not want.

So those commercial pressures will kick in and they'll try to start to monetise that data and all of that data if it leaked out would be very damaging to people so it should simply never be collected.

So the government has been working on a voluntary kitemark scheme to approve age verifiers that can demonstrate to an auditor that they will keep user data safe. The scheme seems to be in its early stages, as the audit policy was first outlined to age verifiers on 13th March 2019. AvSecure reported on Twitter:

Friday saw several AV companies meet with the BBFC & the accreditation firm, who presented the framework & details of the proposed scheme.

Whilst the scheme itself seems very deep & comprehensive, there were several questions asked that we are all awaiting answers on.

The Register reports that AgeID has already commissioned a data security audit from the information security company NCC Group. Perhaps AgeID can therefore be rapidly approved by the official auditor, whose identity seems to be being kept secret.

So presumably the implementation schedule is that the age verifiers get audited over the next couple of months, after which the government can give the official three months' notice required to allow websites time to implement the now accredited age verification schemes.

The commencement date will perhaps be about 5 or 6 months from now.

Update: Announcement this week

31st March 2019. See  article from thetimes.co.uk

The government is expected to announce a timetable on Wednesday for the long-awaited measure to force commercial providers of online porn to check users' ages.

 

 

Commented: Maybe the Brexiteers are right...

EU parliament gives final approval to internet copyright law that will destroy European livelihoods and give unprecedented censorship control to US internet and media giants


Link Here 31st March 2019
The European Parliament has backed disgraceful copyright laws which will change the nature of the net.

The new rules include holding technology companies responsible for material posted without proper copyright permission. This will destroy the livelihoods of European people making their living from generating content.

The Copyright Directive was backed by 348 MEPs, with 274 against.

It is now up to member states to approve the decision. If they do, they will have two years to implement it once it is officially published.

The two clauses causing the most controversy are known as Article 11 and Article 13.

Article 11 states that websites will either have to pay to use links from news websites or else be banned from linking to or quoting news services.

Article 13 holds larger technology companies responsible for material posted without a copyright licence.

It means they would need to pre-censor content before it is uploaded. Only the biggest US internet companies will have the technology to achieve this automatically, and even then technical difficulties in recognising content will result in inevitable over-censorship from having to err on the side of caution.

The campaign group Open Knowledge International described it as a massive blow for the internet.

We now risk the creation of a more closed society at the very time we should be using digital advances to build a more open world where knowledge creates power for the many, not the few, said chief executive Catherine Stihler.

Dark day for internet freedom: The @Europarl_EN has rubber-stamped copyright reform including #Article13 and #Article11. MEPs refused to even consider amendments. The results of the final vote: 348 in favor, 274 against #SaveYourInternet pic.twitter.com/8bHaPEEUk3 -- Julia Reda (@Senficon) March 26, 2019.

Update: Europe's efforts to curb the internet giants only make them stronger

31st March 2019. See  article from theguardian.com by Kenan Malik

New legislation on copyright will hurt small users and boost tech titans' influence

 

 

Virtual Prohibited Network...

Russia threatens that VPNs will be blocked if they don't censor a list of specified websites. Russia also introduces a bill to enable a Russian only internet.


Link Here 29th March 2019
Full story: Internet Censorship in Russia...Russia and its repressive state control of media
Russia's media censor Roskomnadzor has threatened to block access to popular VPN-services which allow users to gain access to websites which have been banned by Moscow.

Russia has introduced internet censorship laws, requiring search engines to delete some results, messaging services to share encryption keys with security services and social networks to store users' personal data on servers within the country.

But VPN services can allow users to establish secure internet connections and reach websites which have been banned or blocked. Russia's communications regulator Roskomnadzor said it had asked the owners of 10 VPN services to implement the country's registry of banned websites and block access to the specified sites.

The internet censor said that it had sent notifications to NordVPN, Hide My Ass!, Hola VPN, Openvpn, VyprVPN, ExpressVPN, TorGuard, IPVanish, Kaspersky Secure Connection and VPN Unlimited, giving them a month to reply.

In the cases of non-compliance with the obligations stipulated by the law, Roskomnadzor has threatened to block the offending VPNs.

Meanwhile a new censorship bill has been introduced to the Russian parliament (Duma) that establishes the concept of a Russian internet, called Runet, that can operate independently of the worldwide internet.

Runet is envisaged as a Russian space that allows state censors to block Russian internet users from accessing foreign websites whilst allowing them to continue using local websites approved by the internet censor. It also provides for continued internet access within Russia should the rest of the world cut off Russia.

Russia notes the overwhelming majority of the key services running the worldwide internet are under US control. Prime Minister Dmitri Medvedev said: That's not very good actually.

The legislation was initially drafted in response to a new US cyber strategy that accuses Russia, along with China, Iran, and North Korea, of using the web to undermine its democracy and economy.

Update: VPN Providers respond niet

3rd May 2019. See article from cloudwards.net

The worst Roskomnadzor can do is add VPN websites to the existing list of banned websites, which can be subverted by using a VPN that's not on the list.

That, and the whole not wanting to be an accomplice to the stifling of free speech thing, led the VPNs we contacted to refuse to comply with the order.

 

 

Offsite Article: Censorship via copyright...


Link Here 29th March 2019
Busybodies on Both Sides of the Atlantic Are Trying to Kill the Internet

See article from reason.com

 

 

Just a few days longer!...

New Zealand Government get attached to blocking widely used websites so as to block a few posts of Brenton Tarrant's murderous video


Link Here 27th March 2019

New Zealand's largest ISPs are continuing to block websites which hosted videos of the Christchurch terror attack, after a last-minute request by the Government.

In the wake of the mosque shootings, a number of New Zealand's biggest ISPs took what they themselves acknowledged was an unprecedented step - blocking websites which were hosting a live-streamed video of the recent mosque attack. In an open letter explaining the move and calling for action from larger tech companies, the chief executives of Spark, Vodafone and 2degrees said the decision was the right one in such extreme and tragic circumstances.

On Tuesday evening, both Spark and Vodafone told Newsroom they would start to remove the remaining website blocks overnight. A Spark spokeswoman said:

We believe we have now reached the point where we need to cease our extreme temporary measures to block these websites and revert to usual operating procedures.

However, less than two hours after its initial response, Spark said the websites would continue to be blocked for several more days following specific requests from Government.

 

 

Someone's telling stories...

The BBFC has made a pretty poor show of setting out guidelines for the technical implementation of age verification, and now the Stop Age Verification campaign has pointed out that the BBFC has made legal errors about text porn


Link Here 25th March 2019
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust
The BBFC seems a little behind the curve in its role as porn censor. The initial draft of its guidelines showed absolutely no concern for the safety and well-being of porn users. The BBFC spoke of incredibly sensitive identity and browsing data being entrusted to adult websites and age verifiers, purely on the forlorn hope that these companies would follow 'best practice' voluntary guidelines to keep the data safe. The BBFC offered next to no guidelines that defined how age verification should work and what it really needs to do.

As time has moved on, it has obviously occurred to the BBFC or the government that this was simply not good enough, so we are now waiting on the implementation of some sort of kite marking scheme to try to provide at least a modicum of trust in age verifiers to keep this sensitive data safe.

But even in this period of rework, the BBFC hasn't been keeping interested parties informed of what's going on. The BBFC seem very reluctant to advise or inform anyone of anything. Perhaps the rework is being driven by the government and maybe the BBFC isn't in a position to be any more helpful.

Anyway it is interesting to note that in an article from stopageverification.org.uk, the BBFC is reported to be overstepping the remit of the age verification laws contained in the Digital Economy Act:

The BBFC posts this on its age verification website:

All types of pornographic content are within the scope of the legislation. The legislation does not exclude audio or text from its definition of pornography. All providers of commercial online pornography to persons in the UK are required to comply with the age-verification requirement.

Except that's not what the legislation says:

Pornographic material is defined in s.15 of the act. This sets out nine categories of material. Material is defined in that section (s.15(2)) as: material means -- (a) a series of visual images shown as a moving picture, with or without sound; (b) a still image or series of still images, with or without sound; or (c) sound.

It clearly doesn't mention text.

The BBFC need to be clear in their role as Age Verifier. They can only apply the law as enacted by Parliament. If they seek to go beyond that they could be at risk of court action.

 

 

Commented: No, they're not thinking of the children...

The Guardian suggests that the start of internet porn censorship will be timed to help heal the government's reputational wounds after the Brexit debacle


Link Here 25th March 2019
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust
The Observer today published an article generally supporting the upcoming porn censorship and age verification regime. It did have one interesting point to note though:

Brexit's impact on the pornography industry has gone unnoticed. But the chaos caused by the UK's disorderly exit from the European Union even stretches into the grubbier parts of cyberspace.

A new law forcing pornography users to prove that they are adults was supposed to be introduced early next month. But sources told the Observer that it may not be unveiled until after the Brexit impasse is resolved as the government, desperate for other things to talk about, believes it will be a good news story that will play well with the public when it is eventually unveiled.

Comment: The illiberal Observer

25th March 2019. Thanks to Alan

Bloody hell! Have you seen this fuckwittage from the purportedly liberal Observer?

Posh-boy churnalist Jamie (definitely not Jim) Doward regurgitates the bile of authoritarian feminist Gail Dines about the crackpot attempt to stop children accessing a bit of porn. This is total bollox.

It's getting on for sixty years since I spotted that my girl contemporaries were taking on a different and interesting shape - a phenomenon I researched by reference to two bodies of literature: those helpful little books for the amateur and professional photographer in which each photo of a lady was accompanied by F number and exposure time and those periodicals devoted to naturism. This involved no greater subterfuge than taking off my school cap and turning up my raincoat collar to hide my school tie. I would fervently hope that today's lads can run rings round parental controls and similar nonsense.

 

 

Offsite Article: No, Brexit does not have the copyright on mass protest...


Link Here 24th March 2019
Tens of thousands of people across Europe staged protests on Saturday against the upcoming EU internet censorship in the name of copyright law

See article from dw.com

 

 

An Iron Curtain for the Internet...

Russia extends repressive state censorship to news deemed to be fake and to insulting politicians, even when justified


Link Here 22nd March 2019
Full story: Internet Censorship in Russia...Russia and its repressive state control of media
President Vladimir Putin has tightened his grip on the Russian Internet by signing two censorship bills into law. One bans fake news while the other makes it illegal to insult public officials.

Russia has never really been a liberal democracy. It lacks an independent judiciary, and the government has found a variety of techniques to harass and intimidate independent media in the country.

But the new legislation gives the Russian government more direct tools to censor online speech. Under one bill, individuals can face fines and jail time if they publish material online that shows a clear disrespect for society, the state, the official state symbols of the Russian Federation, the Constitution of the Russian Federation, and bodies exercising state power. Punishments can be as high as 300,000 rubles ($4,700) and 15 days in jail.

A second bill subjects sites publishing unreliable socially significant information to fines as high as 1.5 million rubles ($23,000).

 

 

Advertising your private data...

ICO and Ofcom survey public opinion on online advertising targeted using data from snooping on browsing history


Link Here 22nd March 2019
Full story: Behavioural Advertising...Serving adverts according to internet snooping

The ICO has commissioned research into consumers' attitudes towards and awareness of personal data used in online advertising.

Ofcom provided advice on the research design and analysis. The objective of the research was to understand the public's awareness and perceptions of how online advertising is served to them based on their personal data, choices and behaviour.

Advertising technology -- known as adtech -- refers to the different types of analytics and digital tools used to direct online advertising to individual people and audiences. It relies on collecting information about how individuals use the internet, such as search and browsing histories, and personal information, such as gender and year of birth, to decide which specific adverts are presented to a particular person. Websites also use adtech to sell advertising space in real-time.
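As a rough illustration of the mechanism described above, the following Python sketch picks an advert for a hypothetical user by scoring adverts against keywords derived from browsing history and an age derived from year of birth. Every name and field here is invented for illustration; real adtech stacks use real-time bidding exchanges and far richer data.

from dataclasses import dataclass, field

@dataclass
class UserProfile:
    year_of_birth: int
    gender: str
    browsing_keywords: set = field(default_factory=set)  # derived from browsing history

@dataclass
class Advert:
    name: str
    target_keywords: set
    min_age: int = 0

def choose_advert(user, inventory, current_year=2019):
    # Score each eligible advert by how many of its targeting keywords overlap
    # with the user's browsing history, and serve the best match.
    age = current_year - user.year_of_birth
    eligible = [ad for ad in inventory if age >= ad.min_age]
    return max(eligible, key=lambda ad: len(ad.target_keywords & user.browsing_keywords))

user = UserProfile(1985, "f", {"hiking", "camping", "mortgages"})
ads = [Advert("tent sale", {"camping", "hiking"}),
       Advert("credit card offer", {"mortgages", "loans"}, min_age=18)]
print(choose_advert(user, ads).name)  # -> tent sale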

The research finds that more than half (54%) of participants would rather see adverts that are relevant to them. But while 63% of people initially thought it acceptable for websites to display adverts in return for the website being free to access, this fell to 36% once it was explained how personal data might be used to target adverts.

 

 

Overreaction...

Australian and New Zealand ISPs blocked multiple websites which hosted the video by the New Zealand mosque murderer


Link Here 21st March 2019
Full story: Internet Censorship in Australia...Wide ranging state internet censorship

ISPs in Australia have blocked access to dozens of websites, including 4chan and 8chan, in the name of blocking the video of last week's New Zealand mass shooting.

In Australia, ISP Vodafone said that blocking requests generally come from courts or law enforcement agencies but that this time ISPs acted on their own. Telstra and Optus also blocked the sites in Australia. Besides 4chan and 8chan, ISP-level blocking affected the social network Voat, the blog Zerohedge, video hosting site LiveLeak, and others. The ban on 4chan was lifted a few hours later.

Raising issues of wider censorship, LiveLeak removed the offending videos but was not immediately removed from the list of censored sites.

The ISPs' decision to block access to websites was controversial as they acted to censor content without instruction from either the Australian Communications and Media Authority or the eSafety Commissioner, and most smaller service providers have decided to keep access open.

The ISPs are facing some government pressure, though. Australia Prime Minister Scott Morrison called Telstra, Optus, and Vodafone to a meeting to discuss ways to prevent distribution and livestreaming of violent videos.

New Zealand ISPs took a similar approach. The country's main ISPs, Spark, Vodafone, Vocus and 2degrees, are blocking any website which has footage of the Friday 15 March Christchurch mosque shootings. The ISPs agreed to work together to identify and block access at [the] DNS level to such online locations, such as 4chan and 8chan.
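For readers wondering what blocking at the DNS level amounts to, the Python sketch below shows the basic idea: the resolver checks a blocklist before answering and diverts blocked domains to a sinkhole address (or returns no answer at all). The domains and behaviour are purely illustrative and not a description of any particular ISP's system.

import socket

BLOCKLIST = {"blocked-example-site.invalid", "another-blocked-site.invalid"}
SINKHOLE_IP = "0.0.0.0"  # some ISPs return NXDOMAIN or a block page instead

def resolve(hostname):
    # Divert blocklisted domains to the sinkhole; resolve everything else normally.
    if hostname.lower().rstrip(".") in BLOCKLIST:
        return SINKHOLE_IP
    return socket.gethostbyname(hostname)

print(resolve("blocked-example-site.invalid"))  # -> 0.0.0.0
print(resolve("example.com"))                   # -> a normal public address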

New Zealand Telecommunications Forum Chief Executive Geoff Thorn said the industry is working together to ensure this harmful content can't be viewed by New Zealanders. He acknowledged that there is the risk that some sites that have legitimate content could have been mistakenly blacklisted, but this will be rectified as soon as possible.

Australia and New Zealand also do not have net neutrality rules that prevent ISPs from blocking websites of their own volition.

 

 

Internet Censorship Law...

New law in South Africa will appoint the film censors as the arbiters of internet hate speech


Link Here 21st March 2019
Full story: Internet Censorship in South Africa...Proposal to block all porn from South Africans
South Africa's National Assembly has officially passed the Films and Publications Amendment Bill, with the bill now scheduled to be sent to President Cyril Ramaphosa for assent.

The bill extends film censorship to online content and appoints The Film and Publications Board (FPB), the country's film censors, as arbiters of internet censorship of hate speech, revenge porn and website blocking.

Some of the other notable changes include:

  • Revenge porn: Under the bill, any person who knowingly distributes private sexual photographs and films without prior consent and with intention to cause the said individual harm shall be guilty of an offence and liable upon conviction.
  • Hate speech: The bill states that any person who knowingly distributes in any medium, including the internet and social media any film, game or publication which amounts to propaganda for war, incites imminent violence, or advocates hate speech, shall be guilty of an offence.
  • Website blocking: If an internet access provider has knowledge that its services are being used for the hosting or distribution of child pornography, propaganda for war, incitement of imminent violence or advocating hatred based on an identifiable group characteristic it shall immediately remove this content, or be subject to a fine.

According to Dominic Cull of specialised legal advice firm Ellipsis, the bill which is on its way to President Cyril Ramaphosa is extremely badly written. He notes that the introduction of the bill means that there is definite potential for abuse in terms of infringement of free speech.

One of my big objections here is that if I upload something which someone else finds objectionable, and they think it hate speech, they will be able to complain to the FPB.

If the FPB thinks the complaint is valid, they can then lodge a takedown notice to have this material removed.

These sentiments were echoed by legal expert Nick Hall of MakeGamesSA, who said:

The big question around the bill has always been enforceability and the likelihood of the FPB to do anything with it. Practically, are they going to go after small-scale YouTubers? No, probably not, as they don't have the means to do so.

Instead, my concern has always been that the legislation becomes a tool for them to use censorship.

 

 

Dark Days...

Wikipedia protests against the EU's disgraceful new copyright laws favouring US conglomerates over European people


Link Here 21st March 2019
Websites and businesses across Europe went dark yesterday in protest of disgraceful changes to copyright law being introduced by the European Union.

Ahead of a final vote on the legislation next Tuesday, March 26th, a number of European Wikipedia sites are going dark for the day, blocking all access and directing users to contact their local EU representative to protest the laws. Other major sites, such as Twitch and PornHub, are showing protest banners on their homepages and social media. Meanwhile, any users uploading content to Reddit will be shown a protest notice about the directive.

The law in question is the EU Copyright Directive, a long-awaited update to copyright law. Two provisions have been singled out by critics as dangerous to European people's freedom and livelihoods.

These are Article 11, which lets publishers charge platforms if they link to their stories (the 'link tax'), and Article 13, which makes platforms legally responsible for users uploading copyrighted material (the so-called 'upload filter').

Article 13 is particularly dangerous, say critics. It will make all platforms hosting user-generated content legally responsible for users uploading copyrighted content. The only way to stop these uploads, say critics, will be to scan content before it's uploaded, leading to the creation of filters that will err on the side of censorship and will be abused by copyright trolls.
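To make the upload filter criticism concrete, here is a toy Python sketch of the kind of pre-upload check critics expect: the platform fingerprints each upload and rejects anything matching a database of claimed works. The exact-hash matching and example data below are invented for illustration; real filters such as YouTube's Content ID use perceptual fingerprinting, which is far more complex and still prone to the over-blocking described above.

import hashlib

# Fingerprints of works claimed by rightsholders (hypothetical examples).
CLAIMED_FINGERPRINTS = {hashlib.sha256(b"some claimed recording").hexdigest()}

def allow_upload(data):
    # Block the upload if its fingerprint matches any claimed work.
    return hashlib.sha256(data).hexdigest() not in CLAIMED_FINGERPRINTS

print(allow_upload(b"some claimed recording"))  # False: blocked before publication
print(allow_upload(b"an original home video"))  # True: allowed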

Wikipedia said the rules would be a "net loss for free knowledge." Volunteer editors for the German, Czech, Danish, and Slovak Wikipedias have all blacked out their sites for the day.

As well as the website blackouts, more than five million internet users have signed a petition protesting Article 13. Marches and demonstrations are also planned in European cities across the weekend and on Monday and Tuesday before the final vote.

Update: The latest from MEP Julia Reda

21st March 2019. See tweets from twitter.com

The official version of the #copyright trilogue agreement is online now, translations will follow shortly. Don't get a heart attack when you see #Article13 has been renumbered #Article17, both the old and the new numbers will show up on MEPs' voting lists. http://www.europarl.europa.eu/doceo/document/A-8-2018-0245-AM-271-271_EN.pdf

Our efforts to defeat #Article13 just got a huge boost! Polish @Platforma_org will vote AGAINST the #copyright directive unless #Article13 is deleted! They're the second largest single political party in EPP after @CDU. Thanks @MichalBoni https://twitter.com/MichalBoni/status/1109057398566764544 #SaveYourInternet

At a press conference in Berlin, @AxelVossMdEP confirmed rumours that some press publishers have threatened parliamentarians with bad election coverage if they vote against the #copyright reform. Voss does not consider this problematic. #Article11 #Article13 #SaveYourInternet

Update: Anti censorship hub

22nd March 2019. See  article from avn.com

Pornhub posted a banner at the top of the European version of its site on Thursday. The discussion forum Reddit -- the self-described front page of the internet -- and the sprawling online encyclopedia Wikipedia also protested the planned new law, according to a Business Insider report.

 

 

Offsite Article: Britain's Pornographer and Puritan Coalition...


Link Here 21st March 2019
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust
Backlash speculates that the UK's upcoming porn censorship will play into the hands of foreign tube site monopolies

See article from backlash.org.uk

 

 

Offsite Article: Censorship or social responsibility?...


Link Here 19th March 2019
Amazon removes some books peddling vaccine misinformation.

See article from washingtonpost.com

 

 

Mental health issues...

Parliamentary group calls for Ofcom to become the UK internet censor


Link Here 18th March 2019

An informal group of MPs, the All Party Parliamentary Group on Social Media and Young People's Mental Health and Wellbeing, has published a report calling for the establishment of an internet censor. The report claims:

  • 80% of the UK public believe tighter regulation is needed to address the impact of social media on the health and wellbeing of young people.
  • 63% of young people reported social media to be a good source of health information.
  • However, children who spend more than three hours a day using social media are twice as likely to display symptoms of mental ill health.
  • Pressure to conform to beauty standards perpetuated and praised online can encourage harmful behaviours to achieve "results", including body shame and disordered eating, with 46% of girls, compared to 38% of all young people, reporting that social media has had a negative impact on their self-esteem.

The report, titled #NewFilters to manage the impact of social media on young people's mental health and wellbeing, puts forward a number of policy recommendations, including:

  • Establish a duty of care on all social media companies with registered UK users aged 24 and under in the form of a statutory code of conduct, with Ofcom to act as regulator.
  • Create a Social Media Health Alliance, funded by a 0.5% levy on the profits of social media companies, to fund research, educational initiatives and establish clearer guidance for the public.
  • Review whether the "addictive" nature of social media is sufficient for official disease classification.
  • Urgently commission robust, longitudinal research, into understanding the extent to which the impact of social media on young people's mental health and wellbeing is one of cause or correlation.
Chris Elmore MP, Chair of the APPG on Social Media and Young People's Mental Health and Wellbeing, said:

"I truly think our report is the wakeup call needed to ensure - finally - that meaningful action is taken to lessen the negative impact social media is having on young people's mental health.

For far too long social media companies have been allowed to operate in an online Wild West. And it is in this lawless landscape that our children currently work and play online. This cannot continue. As the report makes clear, now is the time for the government to take action.

The recommendations from our Inquiry are both sensible and reasonable; they would make a huge difference to the current mental health crisis among our young people.

I hope to work constructively with the UK Government in the coming weeks and months to ensure we see real changes to tackle the issues highlighted in the report at the earliest opportunity."

 

 

Offsite Article: Don't be a verified idiot...


Link Here 16th March 2019
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust
Get a VPN. The Guardian outlines some of the dangers of getting age verified for porn

See article from theguardian.com

 

 

Delegated ratings...

The BBFC allows Netflix to give BBFC age ratings to its content if it follows BBFC rules


Link Here 14th March 2019

The BBFC has launched an innovative new industry collaboration with Netflix to move towards classifying all content on the service using BBFC age ratings.

Netflix will produce BBFC age ratings for content using a manual tagging system along with an automated rating algorithm, with the BBFC taking up an auditing role. Netflix and the BBFC will work together to make sure Netflix's classification process produces ratings which are consistent with the BBFC's Classification Guidelines for the UK.

It comes as new research by the British Board of Film Classification (BBFC) and the Video Standards Council Rating Board (VSC) has revealed that almost 80% of parents are concerned about children seeing inappropriate content on video on demand or online games platforms.

The BBFC and the VSC have joined forces to respond to calls from parents and are publishing a joint set of Best Practice Guidelines to help online services deliver what UK consumers want.

The Best Practice Guidelines will help online platforms work towards greater and more consistent use of trusted age ratings online. The move is supported by the Department for Digital, Culture, Media and Sport as part of the Government's strategy to make the UK the safest place to be online.

This includes recommending consistent and more comprehensive use of BBFC age labelling symbols across all Video On Demand (VOD) services, and PEGI symbols across online games services, including additional ratings info and mapping parental controls to BBFC age ratings and PEGI ratings.

The voluntary Guidelines are aimed at VOD services offering video content to UK consumers via subscription, purchase and rental, but exclude pure catch-up TV services like iPlayer, ITV Hub, All4, My 5 and UKTV Player.

The research also shows that 90% of parents believe that it is important to display age ratings when downloading or streaming a film online, and 92% of parents think it's important for video on demand platforms to show the same type of age ratings they would expect at the cinema or on DVD and Blu-ray -- confirmed by 94% of parents saying it's important to have consistent ratings across all video on demand platforms, rather than a variety of bespoke ratings systems.

With nine in 10 (94%) parents believing it is important to have consistent ratings across all online game platforms rather than a variety of bespoke systems, the VSC is encouraging services to join the likes of Microsoft, Sony PlayStation, Nintendo and Google in providing consumers with the nationally recognised PEGI ratings on games - bringing consistency between the offline and online worlds.

The Video Recordings Act requires that the majority of video works and video games released on physical media must be classified by the BBFC or the VSC prior to release. While there is no equivalent legal requirement that online releases must be classified, the BBFC has been working with VOD services since 2008, and the VSC has been working with online games platforms since 2003. The Best Practice Guidelines aim to build on the good work that is already happening, and both authorities are now calling for the online industry to work with them in 2019 and beyond to better protect children.

David Austin, Chief Executive of the BBFC, said:

Our research clearly shows a desire from the public to see the same trusted ratings they expect at the cinema, on DVD and on Blu-ray when they choose to watch material online. We know that it's not just parents who want age ratings, teenagers want them too. We want to work with the industry to ensure that families are able to make the right decisions for them when watching content online.

Ian Rice, Director General of the VSC, said:

We have always believed that consumers wanted a clear, consistent and readily recognisable rating system for online video games and this research has certainly confirmed that view. While the vast majority of online game providers are compliant and apply PEGI ratings to their product, it is clear that more can be done to help consumers make an informed purchasing decision. To this end, the best practice recommendations will certainly make a valuable contribution in achieving this aim.

Digital Minister Margot James said:

Our ambition is for the UK to be the safest place to be online, which means having age ratings parents know and trust applied to all online films and video games. I welcome the innovative collaboration announced today by Netflix and the BBFC, but more needs to be done.

It is important that more of the industry takes this opportunity for voluntary action, and I encourage all video on demand and games platforms to adopt the new best practice standards set out by the BBFC and Video Standards Council.

The BBFC is looking at innovative ways to open up access to its classifications to ensure that more online video content goes live with a trusted age rating. Today the BBFC and Netflix announce a year-long self-ratings pilot which will see the online streaming service move towards in-house classification using BBFC age ratings, under licence.

Netflix will use an algorithm to apply BBFC Guideline standards to their own content, with the BBFC setting those standards and auditing ratings to ensure consistency. The goal is to work towards 100% coverage of BBFC age ratings across the platform.
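Neither the BBFC nor Netflix has published how the rating algorithm works, but conceptually it maps content descriptors recorded by taggers onto the most restrictive BBFC category that any descriptor implies. The Python sketch below is only a guess at that shape; the tags and thresholds are invented and are not the actual BBFC or Netflix classification rules.

# Hypothetical mapping from content descriptors to the minimum BBFC category
# each implies. These values are illustrative, not the real guidelines.
TAG_TO_MIN_RATING = {
    "mild language": "PG",
    "moderate violence": "12",
    "strong language": "15",
    "strong violence": "15",
    "sexual violence": "18",
}
RATING_ORDER = ["U", "PG", "12", "15", "18"]

def classify(tags):
    # The overall rating is the most restrictive rating implied by any tag.
    ratings = [TAG_TO_MIN_RATING.get(tag, "U") for tag in tags] or ["U"]
    return max(ratings, key=RATING_ORDER.index)

print(classify(["mild language"]))                         # -> PG
print(classify(["strong language", "moderate violence"]))  # -> 15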

Mike Hastings, Director of Editorial Creative at Netflix, said:

The BBFC is a trusted resource in the UK for providing classification information to parents and consumers and we are excited to expand our partnership with them. Our work with the BBFC allows us to ensure our members always press play on content that is right for them and their families.

David Austin added:

We are fully committed to helping families choose content that is right for them, and this partnership with Netflix will help us in our goal to do just that. By partnering with the biggest streaming service, we hope that others will follow Netflix's lead and provide comprehensive, trusted, well understood age ratings and ratings info, consistent with film and DVD, on their UK platforms. The partnership shows how the industry are working with us to find new and innovative ways to deliver 100% age ratings for families.

 

 

Enemy of the people...

With days to go until the #CopyrightDirective vote, #Article13's father admits it requires filters and says he's OK with killing Youtube


Link Here 14th March 2019
The new EU Copyright Directive will be up for its final vote in the week of Mar 25, and like any piece of major EU policy, it has been under discussion for many years and had all its areas of controversy resolved a year ago -- but then German MEP Axel Voss took over as the "rapporteur" (steward) of the Directive and reintroduced the long-abandoned idea of forcing all online services to use filters to block users from posting anything that anyone, anywhere claimed was their copyrighted work.

There are so many obvious deficiencies with adding filters to every message-board, online community, and big platform that the idea became political death, as small- and medium-sized companies pointed out that you can't fix the EU's internet by imposing costs that only US Big Tech firms could afford to pay, thus wiping out all European competition.

So Voss switched tactics, and purged all mention of filters from the Directive, and began to argue that he didn't care how online services guaranteed that their users didn't infringe anyone's copyrights, even copyrights in works that had only been created a few moments before and that no one had ever seen before, ever. Voss said that it didn't matter how billions of user posts were checked, just so long as it all got filtered.

(It's like saying, "I expect you to deliver a large, four-legged African land-mammal with a trunk, tusks and a tail, but it doesn't have to be an elephant -- any animal that fits those criteria will do.")

Now, in a refreshingly frank interview, Voss has come clean: the only way to comply with Article 13 will be for every company to install filters.

When asked whether filters will be sufficient to keep Youtube users from infringing copyright, Voss said, "If the platform's intention is to give people access to copyrighted works, then we have to think about whether that kind of business should exist." That is, if Article 13 makes it impossible to have an online platform where the public is allowed to make work available without first having to submit it to legal review, maybe there should just no longer be anywhere for the public to make works available.

Here's what Europeans can do about this:

* Pledge 2019: make your MEP promise to vote against Article 13. The vote comes just before elections, so MEPs are extremely interested in the issues on voters' minds.

* Save Your Internet: contact your MEP and ask them to protect the internet from this terrible idea.

* Turn out and protest on March 23, two days ahead of the vote. Protests are planned in cities and towns in every EU member-state.

 

 

Tumbling from grace...

Tumblr loses 30% of its traffic after its porn ban


Link Here 14th March 2019
When Tumblr announced its porn ban in December, many users reacted by explaining that they mainly used the site for browsing not-safe-for-work content, and they threatened to leave the platform if the ban were enforced. It now appears that many users have made good on that threat: Tumblr's traffic has dropped nearly 30% since December.

The ban removed explicit posts from public view, including any media that portrayed sex acts, exposed genitals, and female-presenting nipples.

 

 

Offsite Article: A review of age verification methods...


Link Here 14th March 2019
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust
This is how age verification will work under the UK's porn censorship law

See article from wired.co.uk

 

 

Where there's a will...

Despite banning internet porn Uganda finds that its 6th most visited website is porn555.com


Link Here 13th March 2019
Full story: Internet Censorship in Uganda...Banning VPNs and taxing social media
Despite the prevailing porn ban in Uganda, it can safely be said that pornographic materials and information have never been more consumed than now. The latest web rankings from Alexa show that Ugandans consume more pornographic materials and information than news and government information, among other relevant materials.

The US website Porn555.com is ranked as the 6th most popular website in Uganda, ahead of Daily Monitor, Twitter, BBC among others.

The country's internet censors claim to have blocked 30 of the main porn websites, so perhaps that is why porn555 is the most popular rather than the more obvious PornHub, YouPorn, xHamster etc.

 

 

Offsite Article: What could possibly go wrong?...


Link Here 13th March 2019
UK porn censorship risks creating sex tape black market on Twitter, WhatsApp and even USB sticks

See article from thescottishsun.co.uk

 

 

Curtained off...

Thousands of Russians protest against an extension of internet censorship


Link Here 11th March 2019
Full story: Internet Censorship in Russia...Russia and its repressive state control of media

Thousands of people in Moscow and other Russian cities took to the streets over the weekend to protest legislation they fear could lead to widespread internet censorship in the country.

The protests, which were some of the biggest protests in the Russian capital in years, came in response to a bill in parliament that would route all internet traffic through servers in Russia, making virtual private networks (VPNs) ineffective. Critics note that the bill creates an internet firewall similar to China's.

People gathered in a cordoned off Prospekt Sakharova street in Moscow, made speeches on a stage and chanted slogans such as hands off the internet and no to isolation, stop breaking the Russian internet. The rally gathered around 15,300 people, according to White Counter, an NGO that counts participants at rallies. Moscow police put the numbers at 6,500.

 

 

Censoring in a digital world...

Lords committee supports the creation of a UK internet censor


Link Here 10th March 2019
The House of Lords Communications Committee has called for a new, overarching censorship framework so that the services in the digital world are held accountable to an enforceable set of government rules.

The Lords Communications Committee writes:

Background

In its report 'Regulating in a digital world' the committee notes that over a dozen UK regulators have a remit covering the digital world but there is no body which has complete oversight. As a result, regulation of the digital environment is fragmented, with gaps and overlaps. Big tech companies have failed to adequately tackle online harms.

Responses to growing public concern have been piecemeal and inadequate. The Committee recommends a new Digital Authority, guided by 10 principles to inform regulation of the digital world.

Chairman's Comments

The chairman of the committee, Lord Gilbert of Panteg, said:

"The Government should not just be responding to news headlines but looking ahead so that the services that constitute the digital world can be held accountable to an agreed set of principles.

Self-regulation by online platforms is clearly failing. The current regulatory framework is out of date. The evidence we heard made a compelling and urgent case for a new approach to regulation. Without intervention, the largest tech companies are likely to gain ever more control of technologies which extract personal data and make decisions affecting people's lives. Our proposals will ensure that rights are protected online as they are offline while keeping the internet open to innovation and creativity, with a new culture of ethical behaviour embedded in the design of service."

Recommendations for a new regulatory approach

Digital Authority

A new 'Digital Authority' should be established to co-ordinate regulators, continually assess regulation and make recommendations on which additional powers are necessary to fill gaps. The Digital Authority should play a key role in providing the public, the Government and Parliament with the latest information. It should report to a new joint committee of both Houses of Parliament, whose remit would be to consider all matters related to the digital world.

10 principles for regulation

The 10 principles identified in the committee's report should guide all regulation of the internet. They include accountability, transparency, respect for privacy and freedom of expression. The principles will help the industry, regulators, the Government and users work towards a common goal of making the internet a better, more respectful environment which is beneficial to all. If rights are infringed, those responsible should be held accountable in a fair and transparent way.

Recommendations for specific action

Online harms and a duty of care

  • A duty of care should be imposed on online services which host and curate content which can openly be uploaded and accessed by the public. Given the urgent need to address online harms, Ofcom's remit should expand to include responsibility for enforcing the duty of care.

  • Online platforms should make community standards clearer through a new classification framework akin to that of the British Board of Film Classification. Major platforms should invest in more effective moderation systems to uphold their community standards.

Ethical technology

  • Users should have greater control over the collection of personal data. Maximum privacy and safety settings should be the default.

  • Data controllers and data processors should be required to publish an annual data transparency statement detailing which forms of behavioural data they generate or purchase from third parties, how they are stored, for how long, and how they are used and transferred.

  • The Government should empower the Information Commissioner's Office to conduct impact-based audits where risks associated with using algorithms are greatest. Businesses should be required to explain how they use personal data and what their algorithms do.

Market concentration

  • The modern internet is characterised by the concentration of market power in a small number of companies which operate online platforms. Greater use of data portability might help, but this will require more interoperability.

  • The Government should consider creating a public-interest test for data-driven mergers and acquisitions.

  • Regulation should recognise the inherent power of intermediaries.

 

 

Offsite Article: Best VPNs to avoid the UK's Porn Age Verification...


Link Here 10th March 2019
At least somebody will do well out of porn censorship

See article from vpncompare.co.uk

 

 

Disrespect of the people...

Russia's parliament passes law to jail people for disrespecting the state, politicians, national symbols, and of course Putin


Link Here 8th March 2019
Full story: Internet Censorship in Russia...Russia and its repressive state control of media

Russia's parliament has advanced repressive new internet laws allowing the authorities to jail or fine those who spread supposed 'fake news' or disrespect government officials online.

Under the proposed laws, which still await final passage and presidential signature, people found guilty of spreading indecent posts that demonstrate disrespect for society, the state, (and) state symbols of the Russian Federation, as well as government officials such as President Vladimir Putin, can face up to 15 days in administrative detention. Private individuals who post fake news can be hit with small fines of between $45 and $75, and legal entities face much higher penalties of up to $15,000, according to draft legislation.

The anti-fake news bill, which passed the Duma, or lower house of parliament, also compels ISPs to block access to content which offends human dignity and public morality.

It defines fake news as any unverified information that threatens someone's life and (or) their health or property, or threatens mass public disorder or danger, or threatens to interfere or disrupt vital infrastructure, transport or social services, credit organizations, or energy, industrial, or communications facilities.

 

 

Cuts of meat...

Images of butchered meat are now defined as sensitive and liable to offend on Instagram


Link Here 7th March 2019
A chef has criticised Instagram after it decided that a photograph she posted of two pigs' trotters and a pair of ears needed to be protected from 'sensitive' readers.

Olia Hercules, a writer and chef who regularly appears on Saturday Kitchen and Sunday Brunch, shared the photo alongside a caption in which she praised the quality and affordability of the ears and trotters before asking why the cuts had fallen out of favour with people in the UK.

However Hercules later discovered that the image had been censored by the photo-sharing app with a warning that read: Sensitive content. This photo contains sensitive content which some people may find offensive or disturbing.

Hercules hit back at the decision on Twitter, condemning Instagram and the general public for becoming detached from reality.

 

 

Offsite Article: Remote control...


Link Here 7th March 2019
Russians give up on TV news propaganda and move to YouTube instead

See article from economist.com

 

 

Offsite Article: No doubt they will again say sorry, we'll do better next time...


Link Here 7th March 2019
Facebook asked to explain why it reveals people's private phone numbers used for security without permission

See article from privacyinternational.org

 

 

Maybe realisation that endangering parents is not a good way to protect children...

Sky News confirms that porn age verification will not be starting from April 2019 and notes that a start date has yet to be set


Link Here 6th March 2019
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust

Sky News has learned that the government has delayed setting a date for when age verification rules will come into force due to concerns regarding the security and human rights issues posed by the rules. A DCMS representative said:

This is a world-leading step forward to protect our children from adult content which is currently far too easy to access online.

The government, and the BBFC as the regulator, have taken the time to get this right and we will announce a commencement date shortly.

Previously the government indicated that age verification would start from about Easter, but the law states that three months' notice must be given for the start date. Official notice has yet to be published, so the earliest it could start is now June 2019.

The basic issue is that the Digital Economy Act underpinning age verification does not mandate that the identity data and browsing history provided by porn users should be protected by law. The law makers thought that GDPR would be sufficient for data protection, but in fact it only requires user consent for the use of that data. All it takes is for users to tick the consent box, probably without reading the deliberately verbose or vague terms and conditions provided. After getting the box ticked, the age verifier can then do more or less what they want with the data.

Realising that this voluntary system is hardly ideal, and that the world's largest internet porn company Mindgeek is likely to become the monopoly gatekeeper of the scheme, the government has moved on to considering some sort of voluntary kitemark scheme to try and convince porn users that an age verification company can be trusted with the data. The kitemark scheme would appoint an audit company to investigate the age verification implementations and to approve those that use good practices.

I would guess that this scheme is difficult to set up as it would be a major risk for audit companies to approve age verification systems based upon voluntary data protection rules. If an 'approved' company were later found to be selling or misusing data, or even getting hacked, then the auditor could be sued for negligent advice, whilst the age verification company could get off scot-free.

 

 

Offsite Article: Targeted realisation...


Link Here 6th March 2019
Group of European privacy campaigners reveal that the likes of Google realised that their targeted advertising scheme is illegal under GDPR

See article from mashable.com

 

 

The Counter Terrorism Internet Referral Unit (CTIRU)...

Open Rights Group reports on UK police censors charged with removing internet terrorist material


Link Here 5th March 2019
The Counter-Terrorism Internet Referral Unit (CTIRU) was set up in 2010 by ACPO (and run by the Metropolitan Police) to remove unlawful terrorist material from the Internet, with a specific focus on UK based material.

CTIRU works with internet platforms to identify content which breaches their terms of service and requests that they remove the content.

CTIRU also compiles a list of URLs for material hosted outside the UK, which is blocked on networks of the public estate.

As of December 2017, CTIRU is linked to the removal of 300,000 pieces of illegal terrorist material from the internet.

Censor or not censor?

The CTIRU consider its scheme to be voluntary, but detailed notification under the e-Commerce Directive has legal effect, as it may strip the platform of liability protection. Platforms may have "actual knowledge" of potentially criminal material, if they receive a well-formed notification, with the result that they would be regarded in law as the publisher from this point on.

At volume, any agency will make mistakes. The CTIRU is said to be reasonably accurate: platforms say they decline only 20 or 30% of material. That shows considerable scope for errors. Errors could unduly restrict the speech of individuals, meaning journalists, academics, commentators and others who hold normal, legitimate opinions.

A handful of CTIRU notices have been made public via the Lumen transparency project. Some of these show some very poor decisions to send a notification. In one case, UKIP Voices, an obviously fake, unpleasant and defamatory blog portraying the UKIP party as cartoon figures but also vile racists and homophobes, was considered to be an act of violent extremism. Two notices were filed by the CTIRU to have it removed for extremism. However, it is hard to see that the site could fall within the CTIRU's remit as the site's content is clearly fictional.

In other cases, we believe the CTIRU had requested removal of extremist material that had been posted in an academic or journalistic context.

Some posters, for instance at wordpress.com, are notified by the service's owners, Automattic, that the CTIRU has asked for content to be removed. This affords a greater potential for a user to contest or object to requests. However, the CTIRU is not held to account for bad requests. Most people will find it impossible to stop the CTIRU from making requests to remove lawful material, which might still be actioned by companies, despite the fact that the CTIRU would be attempting to remove legal material, which is clearly beyond its remit.

When content is removed, there is no requirement to notify people viewing the content that it has been removed because it may be unlawful or what those laws are, nor that the police asked for it to be removed. There is no advice to people that may have seen the content or return to view it again about the possibility that the content may have been intended to draw them into illegal and dangerous activities, nor are they given advice about how to seek help.

There is also no external review, as far as we are aware. External review would help limit mistakes. Companies regard the CTIRU as quite accurate, and cite a 70 or 80% success rate in their applications. That is potentially a lot of requests that should not have been filed, however, and that might not have been accepted if put before a legally-trained and independent professional for review.

As many companies will perform little or no review, and requests are filed to many companies for the same content, which will then sometimes be removed in error and sometimes not, any errors at all should be concerning.

Crime or not crime?

The CTIRU is organised as part of a counter-terrorism programme and claims that its activities warrant operating in secrecy, rejecting freedom of information requests on the grounds of national security and the detection and prevention of crime.

However, its work does not directly relate to specific threats or attempt to prevent crimes. Rather, it is aimed at frustrating criminals by giving them extra work to do, and at reducing the availability of material deemed to be unlawful.

Taking material down via notification runs against the principles of normal criminal investigation. Firstly, it means that the criminal is "tipped off" that someone is watching what they are doing. Some platforms forward notices to posters, and the CTIRU does not suggest that this is problematic.

Secondly, even if the material is archived, a notification results in the destruction of evidence. Account details, IP addresses and other evidence normally vital to investigations are destroyed.

This suggests that law enforcement has little interest in prosecuting the posters of the content at issue. Enforcement agencies are more interested in the removal of content, potentially prioritised on political rather than law enforcement grounds, as it is sold by politicians as a silver bullet in the fight against terrorism.

Beyond these considerations, because removing material affects free expression, and because the police may make mistakes, the CTIRU's work should be treated as a matter of content removal rather than as something to be kept secret.

Statistics

Little is known about the CTIRU's work, but it claims to remove up to 100,000 "pieces of content" from around 300 platforms annually. This statistic is regularly quoted to parliament and is presented as evidence of major platforms' irresponsibility in failing to remove content. It has therefore had a great deal of influence on the public policy agenda.

However, the statistic is inconsistent with the transparency reports of the major platforms, where we would expect most of the takedown notices to be filed. The CTIRU insists that its figure is based on individual URLs removed. If so, much further analysis is needed to understand the impact of these removals, as the implication is that most of the material must be hosted on small, relatively obscure services.

Additionally, the CTIRU claims that no other management statistics are routinely created about its work. This seems implausible, and, if true, negligent. The CTIRU should, for instance, know its success and failure rates, or how the organisations and belief systems it targets are categorised. An absence of routine data collection implies that the CTIRU is not checking that its work is effective. We find this position, produced in response to our Freedom of Information requests, highly surprising and something that should interest parliamentarians.

Lack of transparency increases the risks of errors and bad practice at the CTIRU, and reduces public confidence in its work. Given the government's legitimate calls for greater transparency on these matters at platforms, it should apply the same standards to its own work.

Both government and companies can improve transparency at the CTIRU. The government should provide specific oversight, much in the same way as CCTV and Biometrics have a Commissioner. Companies should publish notifications, redacted if necessary, to the Lumen database or elsewhere. Companies should make the full notifications available for analysis to any suitably-qualified academic, using the least restrictive agreements practical.

 

 

European businesses will then have to pay Google for their censorship machines...

Report from the European Parliament about an upcoming internet censorship law


Link Here 5th March 2019
Full story: Internet Censorship in EU...EU introduces swathes of internet censorship law
Members of the European Parliament are considering a proposal for the censorship of terrorist internet content issued by the European Commission last September.

The IMCO Committee ("Internal Market and Consumer Protection") has just published its initial opinion on the proposal.

laquadrature.net reports

Judicial Review

The idea is that the government of any European member state will be able to order any website to remove content it considers "terrorist". No independent judicial authorisation will be needed, letting governments abuse the wide definition of "terrorism". The only safeguard IMCO agreed to add is that governments' orders be subject to "judicial review", which can mean almost anything.

In France, government orders to remove "terrorist content" are already subject to "judicial review": an independent body is notified of all removal orders and may ask judges to assess them. This has not been of much help: only once has such censorship been submitted to a judge's review. It was found to be unlawful, but more than a year and a half after it was ordered. During this time, the French government was able to abusively censor content, in this case far-left publications by two French Indymedia outlets.

Far from simplifying matters, this Regulation will add confusion, as authorities in one member state will be able to order removals in another, without necessarily understanding the context.

Unrealistic removal delays

Regarding the one-hour deadline within which a hosting service provider must block any content the police report as "terrorist", there was no real progress either. It has been replaced by a deadline of at least eight hours, with a small exception for "micro-enterprises" that have not previously been subject to a removal order (in this case, the "deadline shall be no sooner than the end of the next working day").

This narrow exception will not allow the vast majority of internet actors to comply with such a strict deadline. The IMCO Committee has removed any mention of proactive measures that can be imposed on internet actors, and has stated that "automated content filters" shall not be used by hosting service providers. Even so, this very tight deadline and the threat of heavy fines will only push providers to adopt the moderation tools developed by the web's juggernauts (Facebook and Google) and to apply the broadest possible definition of terrorism to avoid the risk of penalties. The impossible obligation to provide a point of contact reachable 24/7 has not been modified either. The IMCO opinion has even worsened the financial penalties that can be imposed: they are now "at least" 1% and up to 4% of the hosting service provider's turnover.

Next steps

The next step will be on 11 March, when the CULT Committee (Culture and Education) will adopt its opinion.

The last real opportunity to obtain the rejection of this dangerous text will be on 21 March 2019, in the LIBE Committee (Civil Liberties, Justice and Home Affairs). European citizens must contact their MEPs to demand this rejection. We have provided a dedicated page on our website with an analysis of this Regulation and a tool to directly contact the MEPs in charge.

Starting today, and for the weeks to come, call your MEPs and demand they reject this text.

 

 

AgeID scarily will require an email address and ID to view PornHub...

There's also a rather unconvincing option to use an app, but that seems to ID your device instead


Link Here 4th March 2019
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust
Pornhub and its sister websites will soon require ID from users before they can browse their porn.

The government most recently suggested that this requirement would start around Easter this year, but that date has already slipped. The government will give three months' notice of the start date, and as this has not yet been announced, the earliest possible start is currently June.

Pornhub and YouPorn will use the AgeID system, which requires users to identify themselves with an email address and a credit card, passport, driving licence or an age verified mobile phone number.

Metro.co.uk spoke to a spokesperson from AgeID to find out how it will work (and what you'll actually see when you try to log in). James Clark, AgeID spokesperson, said:

When a user first visits a site protected by AgeID, a landing page will appear with a prompt for the user to verify their age before they can access the site.

First, a user can register an AgeID account using an email address and password. The user verifies their email address and then chooses an age verification option from our list of 3rd party providers, using options such as Mobile SMS, Credit Card, Passport, or Driving Licence.

The second option is to purchase a PortesCard or voucher from a retail outlet. Using this method, a customer does not need to register an email address, and can simply access the site using the Portes app.

Thereafter, users will be able to use this username/password combination to log into all porn sites which use the AgeID system.

It is a one-time verification, with a simple single sign-on for future access. If a user verifies on one AgeID protected site, they will not need to perform this verification again on any other site carrying AgeID.

The PortesCard is available to purchase from selected high street retailers and any of the UK's 29,000 PayPoint outlets as a voucher. Once a card or voucher is purchased, its unique validation code must be activated via the Portes app within 24 hours before expiring.

If a user changes device or uses a fresh browser, they will need to login with the credentials they used to register. If using the same browser/device, the user has a choice as to whether they wish to login every time, for instance if they are on a shared device (the default option), or instead allow AgeID to log them in automatically, perhaps on a mobile phone or other personal device.
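AgeID has published no technical detail, so the following is purely a hypothetical Python sketch of the single sign-on flow as Clark describes it. Every class, function and field name is invented for illustration; none of it is taken from AgeID's actual implementation.

    # Hypothetical sketch of the described flow; nothing here reflects
    # AgeID's real implementation. All names are invented.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AgeIDAccount:
        email: str
        password_hash: str
        email_verified: bool = False
        age_verified: bool = False                   # set once by a third-party check
        verification_method: Optional[str] = None    # e.g. "mobile_sms", "passport"

    class AgeIDService:
        def __init__(self):
            self.accounts: dict[str, AgeIDAccount] = {}

        def register(self, email: str, password_hash: str) -> AgeIDAccount:
            account = AgeIDAccount(email, password_hash)
            self.accounts[email] = account
            return account

        def confirm_email(self, email: str) -> None:
            self.accounts[email].email_verified = True

        def verify_age(self, email: str, method: str, provider_says_over_18: bool) -> None:
            # One-time verification via a third-party provider; only the
            # pass/fail result is recorded here, per the stated policy.
            account = self.accounts[email]
            if account.email_verified and provider_says_over_18:
                account.age_verified = True
                account.verification_method = method

        def login(self, email: str, password_hash: str) -> bool:
            # Single sign-on: once verified, the same credentials grant
            # access on any site carrying AgeID without re-verification.
            account = self.accounts.get(email)
            return bool(account
                        and account.password_hash == password_hash
                        and account.age_verified)

Even in this minimal form, the service has to hold a verified email address and a password hash for every user, which is the point about identity information made below.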

Clark claimed that AgeID's system does not store details of people's ID, nor does it store their browsing history. This sounds a little unconvincing and must be taken on trust. The statement also seems to be contradicted by an earlier line noting that the user's email address will be verified, so at least that piece of identity information will need to be stored and read.

The Portes app solution seems a little doubtful too. It claims not to log device data, yet then explains that the PortesCard needs to be locked to a device, which rather suggests that it will in fact be using device data. It will be interesting to see what permissions the app requires when installing. Hopefully it won't ask to read your contact list.
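Portes has likewise published no technical detail, so here is a speculative Python sketch of what "locking a card to a device" would most plausibly involve: deriving an identifier from the device, hashing it, and storing the hash against the voucher code. All names are invented; the point is simply that some device-derived data has to be kept for the lock to work at all.

    # Speculative sketch only: one plausible way a voucher could be
    # "locked to a device". Every name here is invented.
    import hashlib
    import time
    from typing import Optional

    VOUCHER_LIFETIME = 24 * 60 * 60     # validation code expires after 24 hours

    # voucher code -> (time of purchase, hash of the device it is locked to, or None)
    vouchers: dict[str, tuple[float, Optional[str]]] = {}

    def issue_voucher(code: str) -> None:
        vouchers[code] = (time.time(), None)

    def activate(code: str, device_id: str) -> bool:
        """Lock the voucher to this device if the code is still within 24 hours."""
        purchased_at, locked_to = vouchers.get(code, (0.0, None))
        if purchased_at == 0.0 or time.time() - purchased_at > VOUCHER_LIFETIME:
            return False                 # unknown or expired validation code
        device_hash = hashlib.sha256(device_id.encode()).hexdigest()
        if locked_to is None:
            # Only a hash of the device identifier is stored, but it is
            # still device-derived data: hence the apparent contradiction.
            vouchers[code] = (purchased_at, device_hash)
            return True
        return locked_to == device_hash

Whether the real app works anything like this is unknown; the sketch just illustrates why "locked to a device" and "no device data logged" are hard to reconcile.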

This AgeID statement rather leaves the AVSecure card idea out in the cold. The AVSecure system of proving your age anonymously at a shop and then obtaining a password for use on porn websites seems the most genuinely anonymous idea suggested so far, but it will be pretty useless if it can't be used on the main porn websites.

 

 

Forcing Europeans to rent censorship machines from Google...

German Data Privacy Commissioner Says Article 13 Inevitably Leads to Filters, Which Inevitably Lead to Internet Oligopoly


Link Here 4th March 2019

German Data Privacy Commissioner Ulrich Kelber is also a computer scientist, which makes him uniquely qualified to comment on the potential consequences of the proposed new EU Copyright Directive. The Directive will be voted on at the end of this month, and its Article 13 requires that online communities, platforms, and services prevent their users from committing copyright infringement, rather than ensuring that infringing materials are speedily removed.

In a new official statement on the Directive (English translation), Kelber warns that Article 13 will inevitably lead to the use of automated filters, because there is no imaginable way for the organisations that run online services to examine everything their users post and determine whether each message, photo, video or audio clip is a copyright violation.

Kelber goes on to warn that this will exacerbate the already dire problem of market concentration in the tech sector, and expose Europeans to particular risk of online surveillance and manipulation.

That's because, under Article 13, Europe's online companies will be required to block all infringement, even if they are very small and specialised (the Directive gives an online community three years' grace period before it acquires this obligation, less if the service grosses over €5m/year). These small- and medium-sized European services (SMEs) will not be able to afford to license the catalogues of the big movie, music and book publishers, so they'll have to rely on filters to block the unlicensed material.

But if a company is too small to afford licenses, it is also too small to build filters. Google's Content ID for YouTube reportedly cost €100 million to build and run, and it only does a fraction of the blocking required under Article 13. That means smaller services will have to buy filtering from someone else. The most likely filter vendors are the US Big Tech companies like Google and Facebook, who will have to build and run filters anyway and could recoup their costs by renting access to those filters to smaller competitors.
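For illustration only, here is a toy Python sketch of the kind of catalogue matching an upload filter performs, using exact SHA-256 hashes of files. Real systems such as Content ID rely on perceptual fingerprinting of audio and video that survives re-encoding and cropping, and that robustness is a large part of why such filters are so expensive to build and run.

    # Toy catalogue-matching filter. Real filters use perceptual
    # fingerprints rather than exact hashes; this only illustrates the flow.
    import hashlib

    def fingerprint(data: bytes) -> str:
        # Stand-in for a real audio/video fingerprinting algorithm.
        return hashlib.sha256(data).hexdigest()

    # Fingerprints supplied by rightsholders (a stand-in for licensed catalogues).
    blocked_catalogue = {
        fingerprint(b"example copyrighted work #1"),
        fingerprint(b"example copyrighted work #2"),
    }

    def allow_upload(upload: bytes) -> bool:
        """Return True if the upload may be published, False if it is blocked."""
        return fingerprint(upload) not in blocked_catalogue

    print(allow_upload(b"a user's original post"))         # True: published
    print(allow_upload(b"example copyrighted work #1"))    # False: blocked

Even in this toy form, every single user upload passes through the matching step before it can appear; a small European service renting that capability from a US vendor would be forwarding that stream of uploads, or their fingerprints, offshore for matching.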

Another possible source of filtering services is companies that sell copyright enforcement tools, like Audible Magic (supplier to Big Tech giants like Facebook), who have spent lavishly to lobby in favour of filters (along with their competitors).

As Kelber explains, this means that Europeans who use European services in the EU will nevertheless likely have every public communication they make channeled into offshore tech companies' servers for analysis. These European services will then have to channel much of their revenues to the big US tech companies or specialist filter vendors.

So Article 13 guarantees America's giant companies a permanent share of all small EU companies' revenues and access to an incredibly valuable data-stream generated by all European discourse, conversation, and expression. These companies have a long track record of capitalising on users' personal data to their advantage, and between that advantage and the revenues they siphon off of their small European competitors, they are likely to gain permanent dominance over Europe's Internet.

Kelber says that this is the inevitable consequence of filters, and has challenged the EU to explain how Article 13's requirements could be satisfied without filters. He's called for "a thoughtful overhaul" of the bill based on "data privacy considerations," describing the market concentration as a "clear and present danger."

We agree, and so do millions of Europeans. In fact, the petition against Article 13 has attracted more signatures than any other petition in European history and is on track to be the most popular petition in the history of the human race within a matter of days.

With less than a month to go before the final vote in the European Parliament on the new Copyright Directive, Kelber's remarks couldn't be more urgent. Subjecting Europeans' communications to mass commercial surveillance and arbitrary censorship is bad for human rights and free expression, but as Kelber so ably argues, it's also a disaster for competition.

Take Action: Stop Article 13

 

 

Cyber martial law...

Thailand's military government passes extreme internet surveillance, censorship and non judicial enforcement law


Link Here4th March 2019
Full story: Internet Censorship in Thailand...Thailand implements mass website blocking
Thailand's military-controlled parliament has unanimously passed a new Cybersecurity Act to give the junta deeper control over the internet.

The act allows the National Cybersecurity Committee, run by Thailand's generals, to summon individuals for questioning and to enter private property without court orders in cases of actual or anticipated 'serious cyber threats'. Court warrants are not required for action in emergency cases, and criminal penalties will be imposed on those who do not comply with official orders.

The authorities can now search and seize data and hardware without a warrant if a threat is identified by the unaccountable body.

 

 

Six shooters...

Internet giants respond to impending government internet censorship laws with six principles that should be followed


Link Here1st March 2019
The world's biggest internet companies, including Facebook, Google and Twitter, are represented by a trade group called the Internet Association. The organisation has written to UK government ministers to outline how it believes harmful online activity should be regulated.

The letter has been sent to the culture, health and home secretaries. It will be seen as a pre-emptive move in the coming negotiation over new rules to govern the internet. The government is due to publish its delayed White Paper on online harms in the coming weeks.

The letter outlines six principles:

  • "Be targeted at specific harms, using a risk-based approach
  • "Provide flexibility to adapt to changing technologies, different services and evolving societal expectations
  • "Maintain the intermediary liability protections that enable the internet to deliver significant benefits for consumers, society and the economy
  • "Be technically possible to implement in practice
  • "Provide clarity and certainty for consumers, citizens and internet companies
  • "Recognise the distinction between public and private communication"

Many leading figures in the UK technology sector fear a lack of expertise in government, and hardening public sentiment against the excesses of the internet, will push the Online Harms paper in a more radical direction.

Three of the key areas of debate are the definition of online harm, the lack of liability for third-party content, and the difference between public and private communication.

The companies insist that government should recognise the distinction between clearly illegal content and content which is harmful but not illegal. If these leading tech companies find the government's definition of harm too broad, their insistence on that distinction may be overtaken by another set of problems.

The companies also defend the principle that platforms such as YouTube permit users to post and share information without fear that those platforms will be held liable for third-party content. Another area which will be of particular interest to the Home Office is the insistence that care should be taken to avoid regulation encroaching into the surveillance of private communications.

