UK government publishes its intentions to significantly ramp up internet censorship
|8th June 2018
See article from dailymail.co.uk
response to the internet strategy green paper [pdf] from assets.publishing.service.gov.uk
Who is liable if a user posts copyrighted music to YouTube without authority? Is it the user or is it YouTube? The answer is of course that it is the user who would be held liable should copyright holders seek compensation. YouTube would be held
responsible only if they were informed of the infringement and refused to take it down.
This is the practical compromise that lets the internet work.
So what would happen if the government changed the liability laws so that YouTube was held
liable for unauthorised music as soon as it was posted? There may be millions of views before an infringing video is spotted. If YouTube were immediately liable, they could face millions in court judgements against them.
There is a lot of blather about YouTube
having magic Artificial Intelligence that can detect copyrighted music and block it before it is uploaded. But this is nonsense: music is copyrighted by default, even a piece that has never been published and is not held in any computer database.
YouTube does not have a database containing all the licensing and authorisation details, nor of who exactly is allowed to post copyrighted material. Even big companies lie, so how could YouTube really know what could be posted and what could not?
If the law were to be changed, and YouTube were held responsible for the copyright infringement of their posters, then the only possible outcome would be for YouTube to use its AI to detect any music at all and block all videos which contain
music. The only music allowed to be published would be from the music companies themselves, and even then after providing YouTube with paperwork to prove that they had the necessary authorisation.
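To see why the reference database is the crux of the argument, here is a minimal sketch (all names and data hypothetical, using a plain hash as a stand-in for a real perceptual audio fingerprint) of database-backed matching: it can only flag works that rights holders have registered in advance, so an unpublished but still-copyrighted recording sails straight through.

```python
import hashlib

def fingerprint(audio_bytes: bytes) -> str:
    """Stand-in for a perceptual audio fingerprint (real systems are
    robust to re-encoding; a hash is enough to show the logic)."""
    return hashlib.sha256(audio_bytes).hexdigest()

# Reference database: only works that rights holders have registered.
registered = {
    fingerprint(b"hit-single-master-recording"): "Big Label Records",
}

def check_upload(audio_bytes: bytes) -> str:
    owner = registered.get(fingerprint(audio_bytes))
    if owner:
        return f"matched: claimed by {owner}"
    # An unregistered work is still copyrighted by default,
    # but the system has no way of knowing that.
    return "no match: passes the filter"

print(check_upload(b"hit-single-master-recording"))  # matched
print(check_upload(b"unpublished-bedroom-demo"))     # no match, yet still copyrighted
```

The sketch makes the blog's point concrete: strict liability would force platforms to block everything the matcher cannot clear, not just what it can identify.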
So when the government speaks of changes to
liability law they are speaking of a massive step up in internet censorship as the likely outcome.
In fact the censorship power of such liability tweaks has been proven in the US. The recently passed FOSTA law changed liability law so that
internet companies are now held liable for user posts facilitating sex trafficking. The law was sold as a 'tweak' just to take action against trafficking. But it resulted in the immediate and almost total internet censorship of all user postings
facilitating adult consensual sex work, and a fair amount of personal small ads and dating services as well.
The rub was that sex traffickers do not in any way specify that their sex workers have been trafficked; their adverts are exactly the same
as those for adult consensual sex workers. With all the artificial intelligence in the world, there is no way that internet companies can distinguish between the two.
When they are told they are liable for sex trafficking adverts, then the only possible
way to comply is to ban all adverts or services that feature anything to do with sex or personal hook ups. Which is of course exactly what happened.
So when UK politicians speak of internet liability changes and sex trafficking then they are
talking about big time, large scale internet censorship.
And Theresa May said today via a government press release as reported in the Daily Mail:
Web giants such as Facebook and Twitter must automatically
remove vile abuse aimed at women, Theresa May will demand today.
The Prime Minister will urge companies to utilise the same technology used to take down terrorist propaganda to remove rape threats and harassment.
Speaking at the G7 summit in Quebec, Mrs May will call on firms to do more to tackle content promoting and depicting violence against women and girls, including illegal violent pornography.
She will also demand
the automatic removal of adverts that are linked to people-trafficking.
May will argue they must ensure women can use the web without fear of online rape threats, harassment, cyberstalking, blackmail or vile comments.
She will say: We know that technology plays a crucial part in advancing gender equality and empowering women and girls, but these benefits are being undermined by vile forms of online violence, abuse and harassment.
What is illegal offline is illegal online and I am calling on world leaders to take serious action to deal with this, just like we are doing in the UK with our commitment to legislate on online harms such as cyber-stalking and
In a world that is being ripped apart by identitarian intolerance of everyone else, it seems particularly unfair that men should be expected to happily put up with the fear of online threats, harassment, cyberstalking,
blackmail or vile comments. Surely laws should be written so that all people are treated totally equally.
Theresa May did not say too much about liability law directly in her press release, but it is laid out pretty clearly in the government
document just published, titled Government response to the internet strategy green paper [pdf]:
Platform liability and illegal harms
Online platforms need to take responsibility for the content they host. They need to
proactively tackle harmful behaviours and content. Progress has been made in removing illegal content, particularly terrorist material, but more needs to be done to reduce the amount of damaging content online, legal and illegal.
We are developing options for increasing the liability online platforms have for illegal content on their services. This includes examining how we can make existing frameworks and definitions work better, as well as what the liability regime should look like in the long-run.
Terms and Conditions
Platforms use their terms and conditions to set out key information about who can use the service, what content is acceptable and what action can be taken if users don't comply
with the terms. We know that users frequently break these rules. In such circumstances, the platforms' terms state that they can take action, for example they can remove the offending content or stop providing services to the user. However, we do not see
companies proactively doing this on a routine basis. Too often companies simply do not enforce their own terms and conditions.
Government wants companies to set out clear expectations of what is acceptable on their platforms in
their terms, and then enforce these rules using sanctions when necessary. By doing so, companies will be helping users understand what is and isn't acceptable.
We believe that it is
right for Government to set out clear standards for social media platforms, and to hold them to account if they fail to live up to these. DCMS and Home Office will jointly work on the White Paper which will set out our proposals for forthcoming
legislation. We will focus on proposals which will bring into force real protections for users that will cover both harmful and illegal content and behaviours. In parallel, we are currently
assessing legislative options to modify the online liability
regime in the UK, including both the smaller changes consistent with the EU's eCommerce directive, and the larger changes that may be possible when we leave the EU.
Worrying or what?
The ICO thinks a few guidelines can keep kids off Facebook until they are 13
|5th January 2018
See article from dailymail.co.uk
The Information Commissioner's Office (ICO) has warned Facebook, Twitter and Snapchat to tighten up their age controls and kick off underage users.
The ICO stepped in after it became aware that millions of British children had joined the platforms before
they were 13. New ICO guidelines state that social media giants must examine whether they put children at risk -- by showing minors adverts for alcohol or gambling, for example.
The guidance, which is under consultation, also calls on the firms to
do a better job of kicking underage users off their platforms, and to stop or deter children from sharing their information online.
Elizabeth Denham, the Information Commissioner, threatened:
services to provide protection for children or having a system to verify age, organisations, including social media companies, need to change the way they offer services to children.
It's also vital that we ensure children's
interests and rights are protected online in the same way they are in all other aspects of life.
In November, an Ofcom report revealed that half of British 12-year-olds and more than a quarter of ten-year-olds have their own social
media profiles. At the moment, all the major web giants demand that users are over 13 before they get an account -- but they do next to nothing to enforce that rule.
Facebook, Twitter and Snapchat insist it is unrealistic to have to verify the age
of users under the age of 18.
The ICO does not seem to have addressed the enormity of its demand. Facebook and social networks are the very essence of smartphones. If children aren't allowed to share things, how does any website or app feed up
news and articles to anyone if it does not know what the reader likes or who is linked to that person? Typing in what you want to see is no longer practical or desirable, so the basic idea of sending people more of what they have already shown they liked is
the only game in town. Of course the kids could play games all day instead, but maybe that has a downside too.
Political campaigners at the NSPCC call for more internet censorship in the name of child protection
|29th April 2017
See article from nspcc.org.uk
The NSPCC writes:
We're calling on social networks to be regulated and fined when they fail to protect children after it was revealed that 4 out of 5 children feel social media companies aren't doing enough to protect them.
Of the 1,696 children and young people who took part in our Net Aware research, 1,380 thought social media sites needed to do more to protect them from inappropriate or harmful content. When asked about what they were coming across on social media sites,
children reported seeing:
- bullying and hatred.
We're calling on Government to draw up minimum standards that internet companies must meet to safeguard children. These standards must include:
- age-ratings in line with those for films set by the British Board of Film Classification
- safe accounts automatically offered to under-18s -- with default privacy settings,
proactive filtering of harmful content and mechanisms to guard against grooming
- fines for companies who fail to protect children.
A case that questions the Pentagon's limits on free speech for soldiers. Marine sacked after commenting
on Facebook: 'Screw Obama and I will not follow all orders from him'.
See article from guardian.co.uk
Government know better than parents when their kids are ready to use Facebook
See article from telegraph.co.uk
Tim Loughton, the Children's Minister, has accused mothers and fathers of aiding and abetting pre-teens to open accounts on Facebook.
His whinge was in response to Labour MP Ann Coffey who urged the Government and mobile phone companies to
do more to combat sexting , where teenagers send sexual pictures of themselves to each other using camera phones.
Loughton said parents had a responsibility to monitor youngsters online, adding:
Facebook page, you should be at least 13 to do that. That is not legally enforceable.
We know, and I know from personal experience, the temptations for younger children to set up a Facebook site and get involved with those social networks.
And I also know that in too many cases they do that aided and abetted by parents. So it's not just a question of giving information to parents, it's making sure parents are acting responsibly on behalf of their children
A Facebook spokesman said:
Facebook is currently designed for two age groups (13-18 year olds and 18 and up), and we provide extensive safety and privacy controls based on the age provided.
If someone reports an underage account to us then we will remove it, and use back-end technology to try and prevent them signing up again.
However, recent reports have highlighted just how difficult it is
to implement age restrictions on the Internet and that there is no single solution to ensuring younger children don't circumvent a system or lie about their age.
However, we agree with safety experts that communication between
parents/guardians and kids about their use of the Internet is vital.
Just as parents are always teaching and reminding kids how to cross the road safely, talking about internet safety should be just as important a lesson to learn.
Yahoo! Mail found blocking emails about Wall Street protest
See article from
Presumably this is along the lines of what Dave Cameron and co are thinking when they talk about internet censorship in times of troubles.
Thinking about e-mailing your friends and neighbors about the protests against Wall Street happening right
now? If you have a Yahoo e-mail account, think again. ThinkProgress has reviewed claims that Yahoo is censoring e-mails relating to the protest and found that after several attempts on multiple accounts, we too were prevented from sending messages about
the Occupy Wall Street demonstrations.
Over the weekend, thousands gathered for a Tahrir Square-style protest of Wall Street's domination of American politics. The protesters, organized online and by organizations like Adbusters,
have called their effort Occupy Wall Street and have set up the website: www.OccupyWallSt.org. However, several YouTube users posted videos of themselves trying to email a message inviting their friends to visit the Occupy Wall St campaign
website, only to be blocked repeatedly by Yahoo.
ThinkProgress emails relating to the OccupyWallSt.org protest were blocked with the following message (emphasis added):
Your message was not sent Suspicious
activity has been detected on your account. To protect your account and our users, your message has not been sent. If this error continues, please contact Yahoo! Customer Care for further help. We apologize for the inconvenience.
In a later update:
Yahoo's customer care Twitter account acknowledges blocking the emails, but says it was an unintentional error: We apologize 4 blocking 'occupywallst.org' It was not intentional & caught by
our spam filters. It is resolved, but may be a residual delay.
Social networking bosses appear for questioning by parliamentary committee
See article from blog.indexoncensorship.org
Commons Home Affairs select committee, 11th September 2011
Following accusations that social media were used to play a key role in the social unrest in August, representatives from Research in Motion, Twitter and Facebook appeared for questioning
by the Commons Home Affairs select committee.
Stephen Bates, Managing Director of BlackBerry's Research in Motion, Richard Allen, Director of Policy at Facebook and Alexander McGilvray of Twitter were questioned by the committee, chaired by MP
Keith Vaz, regarding the role of social media in the riots which spread across the country in August, and the trio insisted that all three platforms were used as a force for good.
In the midst of the unrest, calls were made to shut down social
networking, particularly BlackBerry messenger, as it was suggested that this was being used to organise violence. Cutting off Facebook, Twitter and BlackBerry messenger in times of unrest seems no different to the censorship this kind of media experiences
in China and other oppressive countries around the world.
The committee heard that, should it be necessary, all three of the social media representatives, whose companies operate within frameworks to comply with the law, would not resist closing down social
media, but none felt that it would be necessary.
Bates, Allen and McGilvray all said that throughout the unrest in August, social media were used in a positive way -- to contact family and friends to advise that users were safe, to help
clean-up in the wake of the riots, and perhaps most importantly as a tool of communication, used to quell and correct rumours.
A key issue addressed by the committee was responsibility. Bates admitted that BlackBerry messenger had been used in a
malicious way to organise crime, but stressed the need for balance when addressing the issue.
Keith Vaz advised that there may be times when closing down social media was necessary, asking: Why should the government not use the powers to close
down these networks if there is mass disorder and this is the only way to stop it happening?
The Enhanced Terrorism and Investigation Measures Bill will outline powers including curfews and further restrictions
on communications, association and movement.
See press release
The Home Office has published draft terror legislation to be used in supposedly exceptional circumstances.
The Enhanced Terrorism and Investigation Measures Bill follows the government's review of CT powers, published in January, that claims
enhanced measures are necessary in extraordinary circumstances.
Home Secretary Theresa May said:
So we will publish, but not introduce, legislation allowing more stringent measures, including curfews and
further restrictions on communications, association and movement.
These measures will require an even higher standard of proof to be met and would be introduced if in exceptional circumstances they were required to
protect the public from the threat of terrorism.
We will invite the Opposition to discuss this draft legislation with us on Privy Council terms. These powers would be enacted only with the agreement of both Houses of Parliament.
Musing that David Cameron has already got the powers to turn off the internet at times of riot
See article from
China enjoys David Cameron speaking in favour of Chinese style internet censorship
See article from blogs.computerworlduk.com