UK Parliamentary committee claims that people failing to vote the 'correct' way is nothing to do with politicians' crap policies that don't look after British people, and must be all to do with fake news
31st July 2018

28th July 2018. See article from bbc.co.uk
See committee report [pdf] from dominiccummings.files.wordpress.com
Parliament's Digital, Culture, Media and Sport (DCMS) Committee has been investigating disinformation and fake news following the Cambridge Analytica data scandal, and claims that the UK faces a democratic crisis due to the spread of pernicious views and the manipulation of personal data. In its first report it suggests that social media companies should face tighter censorship, and proposes measures to combat election interference. The report claims that the relentless targeting of hyper-partisan views, which play to the fears and prejudices of people in order to influence their voting plans, is a threat to democracy.

The report was very critical of Facebook, which has been under increased scrutiny since the scandal broke. Facebook has hampered our efforts to get information about their company throughout this inquiry. It is as if it thinks that the problem will go away if it does not share information about the problem, and reacts only when it is pressed, the report said. It provided witnesses who have been unwilling or unable to give full answers to the committee's questions.

The committee suggests:

1. Social media sites should be held responsible for harmful content on their services

Social media companies cannot hide behind the claim of being merely a 'platform', claiming that they are tech companies and have no role themselves in regulating the content of their sites, the committee said. They continually change what is and is not seen on their sites, based on algorithms and human intervention. They reward what is most engaging, because engagement is part of their business model and their growth strategy. They have profited greatly by using this model. The committee suggested a new category of tech company should be created, which is not necessarily a platform or a publisher but something in between. This should establish clear legal liability for the tech companies to act against harmful and illegal content on their platforms, the report said.

2. The rules on political campaigns should be made fit for the digital age

The committee said electoral law needed to be updated to reflect changes in campaigning techniques. It suggested that:

- a public register for political advertising should be created, so that anybody can see what messages are being distributed online
- political advertisements should carry a digital imprint stating who was responsible, as is required with printed leaflets and advertisements
- social media sites should be held responsible for interference in elections by malicious actors
- electoral fraud fines should be increased from a maximum of £20,000 to a percentage of organisations' annual turnover

3. Technology companies should be taxed to fund education and regulation

Increased regulation of social media sites would result in more work for organisations such as the Electoral Commission and Information Commissioner's Office (ICO). The committee suggested a levy on tech companies should fund the expanded responsibilities of the regulators. The money should also be spent on educational programmes and a public information campaign, to help people identify disinformation and fake news.

4. Social networks should be audited

The committee warned that fake accounts on sites such as Facebook and Twitter not only damage the user experience, but potentially defraud advertisers. It suggested an independent authority such as the Competition and Markets Authority should audit the social networks. It also said security mechanisms and algorithms used by social networks should be available for audit by a government regulator, to ensure they are operating responsibly.
Offsite Comment: Now MPs want to police political discussion 31st July 2018. See
article from spiked-online.com by Mick Hume
Those members of parliament are half right at least. Democracy in Britain and the West is at risk today. But contrary to the wild claims in their fake-news report, the real risk does not come from Russian bloggers or shady groups farming Facebook users'
data. The big threat comes from political elitists like the cross-party clique of Remainer MPs who dominate the DCMS committee. ...Read the full
article from spiked-online.com

Offsite Comment: British MPs, like authoritarians from Moscow to Malaysia...
31st July 2018. See article from nationalreview.com by Andrew Stuttaford
It looks a lot as if these MPs, like authoritarians from Moscow to Malaysia, have been inspired by the
strikingly illiberal precedent set by Angela Merkel's social media law. In particular, part of the idea behind sticking social media companies with legal liability is to scare them into going even further in muzzling free speech than the strict letter of the law requires. ...Read the full article from nationalreview.com
Germany's highest court upholds legislation allowing public Wi-Fi, which was previously impractical due to laws holding networks responsible for copyright infringement by users
31st July 2018

See article from xbiz.com
Germany's highest court last week upheld legislation that offers Wi-Fi operators immunity from acts carried out by third-party users. The decision by the Federal Court of Justice now makes it easier for individuals and businesses to offer Wi-Fi
without fearing civil prosecution for acts of copyright infringement committed by others. Prior to the ruling, because of the legal concept known as Störerhaftung, or interferer's liability, a third party who played no deliberate part in
someone else's actions could be held responsible for them. As a result, Wi-Fi hot spots are few and far between in Germany. Visitors from abroad have found themselves shut out at public venues and unable to access the web as they can in other countries. Copyright holders are still able to get court orders requiring Wi-Fi providers to block copyright-infringing websites.
In light of Facebook's disgraceful disregard for its users' digital wellbeing, Trump's government seems to be stepping in and preparing a GDPR-style privacy law
30th July 2018

See article from foxnews.com
The US Federal Government is quietly meeting with top tech company representatives to develop a proposal to protect web users' privacy amid the ongoing global fallout from scandals that have rocked Facebook and other companies. Over the past month,
the Commerce Department has met with representatives from Facebook and Google, along with Internet providers like AT&T and Comcast, and consumer advocates, sources told the Washington Post. The goal of these meetings is to come up with a data
privacy proposal at the federal level that could serve as a blueprint for Congress to pass sweeping legislation in the mould of the European Union's GDPR. There are currently no laws that govern how tech companies harness and monetize US users' data.
A total of 22 meetings with more than 80 companies have been held on this topic over the last month. One official at the White House told the Post this week that recent developments have been seismic in the privacy policy world, prompting the
government to discuss what a modern U.S. approach to privacy protection might look like.
30th July 2018
A detailed explanation of how Google ended domain fronting, so making it easier for countries like Russia to censor the internet. See article from thenextweb.com
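For readers unfamiliar with the technique, here is a minimal sketch of the general idea behind domain fronting, using hypothetical placeholder domains rather than Google's actual endpoints: the censor only sees a connection to an innocuous front domain, while the HTTP Host header asks the shared CDN to route the request to the blocked service. Google's change reportedly stopped this mismatch from working on its infrastructure.

```python
# Minimal sketch of the domain-fronting idea, assuming both hypothetical domains
# are served by the same CDN. Whether any given CDN still routes such requests
# is exactly what changed when Google disabled the technique.
import requests

FRONT_DOMAIN = "front.example.com"     # what the censor sees in DNS and TLS SNI
HIDDEN_SERVICE = "hidden.example.net"  # where the CDN is asked to route the request

def fronted_get(path: str) -> requests.Response:
    # TLS is negotiated (and the certificate checked) for the front domain,
    # while the Host header names the hidden service hosted behind the same CDN.
    return requests.get(
        f"https://{FRONT_DOMAIN}{path}",
        headers={"Host": HIDDEN_SERVICE},
        timeout=10,
    )

if __name__ == "__main__":
    print(fronted_get("/").status_code)
```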
Egypt Sentences Tourist to Eight Years in Jail for Complaining about Vacation Online
29th July 2018

See article from eff.org
When she went to Egypt for vacation, Mona el-Mazbouh surely didn't expect to end up in prison. But after the 24-year-old Lebanese tourist posted a video in which she complained of sexual harassment--calling Egypt a lowly, dirty country and its citizens
pimps and prostitutes--el-Mazbouh was arrested at Cairo's airport and found guilty of deliberately spreading false rumors that would harm society, attacking religion, and public indecency. She was sentenced to eight years in prison. The video that
el-Mazbouh posted was ten minutes long, and went viral on Facebook, causing an uproar in Egypt. In the video, el-Mazbouh also expressed anger about poor restaurant service during Ramadan and complained of her belongings being stolen. Egyptian men and
women posted videos in response to her original video, prompting el-Mazbouh to delete the original video and post a second video on Facebook apologizing to Egyptians. Nevertheless, Mona was arrested at the end of her trip at the Cairo airport on May 31, 2018 and charged with spreading false rumors that aim to undermine society, attacking religions, and public indecency. Under Egyptian law, defaming and insulting the Egyptian people is illegal. Unhappy tourists have always criticized the conditions of the countries they visit; doing so online, or on video, is no different from the centuries of similar complaints that preceded them offline or in written reviews. Beyond the injustice of applying a harsher standard to online speech than to offline speech, this case also punishes Mona for a reaction that was beyond her control. Mona had no influence over whether her video went viral. She did not intend her language or her actions to reach a wider audience or become a national topic of discussion.
It was angry commenters' reactions and social media algorithms that made the video viral and gave it significance beyond a few angry throwaway insults. Mona el-Mazbouh is just one of many innocent Internet users who have been caught up in the
Egyptian government's attempts to vilify and control the domestic use of online media. At minimum, she should be released from her ordeal and returned to her country immediately. But more widely, Egypt's leaders need to pull back from their hysterical and arbitrary enforcement of repressive laws, before more people -- including the foreign visitors on whom much of Egypt's economy depends -- are hurt.
28th July 2018
US House Judiciary Committee Falsely Claims Credit For Stopping 90% Of All Sex Trafficking Because Of FOSTA internet censorship. See article from techdirt.com
Using fake 'outrage' to censor programmes people don't like
27th July 2018

See article from standard.co.uk
See Ban fat-shaming show Insatiable, its critics cry. But none of them have seen it. From theguardian.com
Over 100,000 people have signed a petition against the release of the Netflix TV show Insatiable, accusing it of 'fat shaming'. But to date it is still unknown exactly what the plot line is and whether any 'fat shaming' is going on. Twelve hour-long episodes of Insatiable will be released on Netflix on August 10. Netflix describes Insatiable as a dark, twisted revenge comedy that will also delve into topics such as bullying, eating disorders and body image. It follows Debby Ryan as the unfortunately nicknamed Fatty Patty, who gets bullied for her weight by her high school peers. After having her jaw wired shut as a result of someone punching her in the face, she undergoes a transformation, becomes slim and hot, and vows to take revenge on the mean girls who tormented her. Social justice warriors went on the warpath after Netflix released the official trailer for Insatiable. An online petition was subsequently created by a woman named Florence, calling for the
programme to be banned. In the petition, Florence writes: The toxicity of this series, is bigger than just this one particular series. This is not an isolated case, but part of a much larger problem that I can promise
you every single woman has faced in her life, sitting somewhere on the scale of valuing their worth on their bodies, to be desirable objects for the male gaze. That is exactly what this series does. It perpetuates not only the toxicity of diet culture
but the objectification of women's bodies.
Patrolling Rubens House in Antwerp to protect social media users from nudity
26th July 2018

See article from news.artnet.com
See video from YouTube
The Flemish Tourism Board has responded with a satirical video to Facebook's relentless censorship of nudity in classical paintings by Peter Paul Rubens. In the video, a team of Social Media Inspectors blocks gallery goers from seeing paintings at the Rubens House in Antwerp. Facebook-branded security guards--called fbi--redirect unwitting crowds away from paintings that depict nude figures. We need to direct you away from nudity, even if artistic in nature, says one Social Media Inspector. The video, as well as a cheeky open letter from the tourism board and a group of Belgian museums, asks Facebook to roll back its censorship standards so that they can promote Rubens. Breasts, buttocks and Peter Paul Rubens cherubs are all considered indecent. Not by us, but by you, says the letter, addressed to Facebook CEO Mark Zuckerberg. Even though we secretly have to laugh about it, your cultural censorship is making life rather difficult for us. The Guardian reported that Facebook is planning to have talks with the Flemish tourist board.
26th July 2018
Microsoft comes clean over Windows 10 snooping as part of its GDPR compliance. See article from v3.co.uk
India parliament considers a new internet censorship bill based on the recent US FOSTA law
25th July 2018

See article from livemint.com
Indian politicians have been admiring the effectiveness of the recent US censorship law FOSTA, which bans anything adult on the internet by making websites responsible for anything that facilitates sex trafficking. As websites can't distinguish trafficking from adult consensual sex work, internet companies are forced to ban anything to do with sex work and even dating. A new session of the Indian Parliament kicked off on 18 July with the introduction of the Trafficking of Persons (Prevention, Protection and Rehabilitation) Bill. There are a few problematic provisions in the proposed legislation, which may severely impact freedom of expression. For instance, Section 36 of the Bill, which aims to prescribe
punishment for the promotion or facilitation of trafficking, proposes a minimum three-year sentence for producing, publishing, broadcasting or distributing any type of material that promotes trafficking or exploitation. An attentive reading of the
provision, however, reveals that it has been worded loosely enough to risk criminalizing many unrelated activities as well. The phrase any propaganda material that promotes trafficking of person or exploitation of a trafficked person in any manner
has wide amplitude, and many unconnected or even well-intentioned actions can be construed to come within its ambit as the Bill does not define what constitutes promotion. For example, in moralistic eyes, any sexual content online could be seen as
promoting prurient interests, and thus also promoting trafficking. In July 2015, the government asked internet service providers (ISPs) to block 857 pornography websites on grounds of outraging morality and decency, but later rescinded the
order after widespread criticism. If historical record is any indication, Section 36 in this present Bill will legitimize such acts of censorship. Section 39 proposes an even weaker standard for criminal acts by proposing that any act of
publishing or advertising which may lead to the trafficking of a person shall be punished (emphasis added) with imprisonment for 5-10 years. In effect, the provision mandates punishment for vaguely defined actions that may not actually be connected to
the trafficking of a person at all. Another by-product of passing the proposed legislation would be a dramatic shift in India's landscape of intermediary liability laws, i.e., rules which determine the liability of platforms such as Facebook and
Twitter, and messaging services like Whatsapp and Signal for hosting or distributing unlawful content. Provisions in the Bill that criminalize the publication and distribution of content, ignore that unlike the physical world, modern electronic
communication requires third-party intermediaries to store and distribute content. This wording can implicate neutral communication pipelines, such as ISPs, online platforms and mobile messengers, which currently cannot even know of the presence of such
material unless they surveil all their users. Under the proposed legislation, the fact that human traffickers used Whatsapp to communicate about their activities could be used to hold the messaging service criminally liable.
Catholic scientists develop AI-based software to cover nude female images with bikinis
25th July 2018

See article from dailymail.co.uk
An AI system developed by a Catholic institute in Brazil seeks out lewd pictures and digitally adds swimwear to censor the images. Researchers warned that while the AI was designed to be used for good, cyber criminals could one day reverse the process to erase bikinis from people's photos. The AI was trained by software engineers at the Pontifical Catholic University of Rio Grande do Sul using 2,000 images of women. It is a type of AI known as a generative adversarial network, which learns to perform tasks by recognising patterns commonly found in a set of images. Project scientist Dr Rodrigo Barros told the Register: When we train the network, it attempts to learn how to map data from one domain - nude pictures - to another domain - swimsuit pictures. He added that the AI was developed to test out a novel way of censoring images on the internet.
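The article doesn't describe the researchers' actual architecture, but as a rough illustration of the general technique Dr Barros describes (adversarial training that learns to map images from one domain to another), here is a minimal PyTorch sketch. The layer sizes, 64x64 image size and plain GAN loss are illustrative assumptions, not details from the study; a real unpaired domain-translation model would typically add extra constraints such as cycle consistency.

```python
# Minimal sketch of adversarial domain mapping in the spirit described by the
# researchers (source domain -> target domain), using PyTorch. Everything here
# is an illustrative assumption rather than the study's actual model.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps an image from the source domain to the target domain."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores whether an image looks like a real target-domain image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 1),
        )

    def forward(self, x):
        return self.net(x)

def train_step(gen, disc, g_opt, d_opt, source_batch, target_batch):
    bce = nn.BCEWithLogitsLoss()
    # 1) Discriminator: real target images vs. generated ones.
    fake = gen(source_batch).detach()
    d_loss = bce(disc(target_batch), torch.ones(target_batch.size(0), 1)) + \
             bce(disc(fake), torch.zeros(fake.size(0), 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()
    # 2) Generator: try to fool the discriminator.
    fake = gen(source_batch)
    g_loss = bce(disc(fake), torch.ones(fake.size(0), 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()

if __name__ == "__main__":
    gen, disc = Generator(), Discriminator()
    g_opt = torch.optim.Adam(gen.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(disc.parameters(), lr=2e-4)
    source = torch.randn(8, 3, 64, 64)  # stand-in for source-domain images
    target = torch.randn(8, 3, 64, 64)  # stand-in for target-domain images
    print(train_step(gen, disc, g_opt, d_opt, source, target))
```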
Infowars set to petition the British parliament for a digital rights law to guarantee free speech on the internet
24th July 2018

See article from metro.co.uk
See Infowars petition from change.org
The well-known alt-right news website Infowars is preparing to launch a campaign aimed at persuading politicians to stop tech giants censoring its content. It notes that Facebook, Google and Twitter are using algorithms to automatically clamp down on
right-wing publications as well as those which support Donald Trump. Infowars has now started its first petition on the website Change.org demanding that social media companies end censorship of alternative voices online. It is calling for a
new Digital Rights Act to guarantee free speech on the internet. Metro.co.uk adds: We have been told that Infowars staff have approached Conservative politicians in the UK and arranged for an MP to ask a
question in parliament about the issue.
Infowars is also planning to contact the White House, where its calls are likely to reach the ears of Donald Trump himself. Paul Joseph Watson, the British editor-at-large of Infowars,
told Metro.co.uk: Since social media platforms are now de facto becoming the Internet and have formed into monopolies, the argument that they are private companies who can behave with impunity is no longer a valid
argument. We demand congressional and parliamentary scrutiny. We demand a Digital Rights Act to secure free speech online.
24th July 2018
Stories that are satirical, ludicrous or (obviously) fictional might do well online, but this doesn't mean people are believing them en masse. See article from spiked-online.com
24th July 2018
Be Careful What You Wish For. By Antonio García Martínez. See article from wired.com
24th July 2018
But what can you do? shrugs judge. See article from theregister.co.uk
BBFC boss writes a 'won't somebody think of the children' campaigning piece in support of the upcoming porn censorship law, disgracefully from behind a paywall
22nd July 2018

See article from telegraph.co.uk by David Austin
David Austin has penned what looks like an official BBFC campaigning piece trying to drum up support for the upcoming internet porn censorship regime. Disgracefully, the article is hidden behind a paywall and is restricted to Telegraph paying subscribers.
Are children protected by endangering their parents or their marriage? The article is very much a one sided piece, focusing almost entirely on the harms to children. It says nothing about the extraordinary dangers faced by adults when
handing over personal identifying data to internet companies. Not a word about the dangers of being blackmailed, scammed or simply outed to employers, communities or wives, where the standard punishment for a trivial transgression of PC rules is the sack
or divorce. Austin speaks of the scale of the internet business and the scope of the expected changes. He writes: There are around five million pornographic websites across the globe. Most of them have no
effective means of stopping children coming across their content. It's no great surprise, therefore, that Government statistics show that 1.4 million children in the UK visited one of these websites in one month. ...
The BBFC will be looking for a step change in the behaviour of the adult industry. We have been working with the industry to ensure that many websites carry age-verification when the law comes into force. ...
Millions of British adults watch pornography online. So age-verification will have a wide reach. But it's not new. It's been a requirement for many years for age-restricted goods and services, including some UK hosted pornographic
material.
I guess at this last point readers will be saying: I never knew that, I've never come across age verification before. But the point here is that these previous rules devastated the British online porn industry, and the reason people don't ever come across it is that there are barely any British sites left. Are children being protected by impoverishing their parents? Not that any proponents of age verification could care less about British people being
able to make money. Inevitably the new age verification will further compound the foreign corporate monopoly control on yet another internet industry. Having lorded over a regime that threatens to devastate lives, careers and livelihoods, Austin
ironically notes that it probably won't work anyway: The law is not a silver bullet. Determined, tech-savvy teenagers may find ways around the controls, and not all pornography online will be age-restricted. For
example, the new law does not require pornography on social media platforms to be placed behind age-verification controls.
22nd July 2018
The context behind the controversy over Mark Zuckerberg's comments on Holocaust denial. By Ezra Klein. See article from vox.com
Daily Telegraph reports that the upcoming porn censorship regime looks set to be delayed by a few months
21st July 2018

See article from telegraph.co.uk
The Telegraph reveals: The government is braced for criticism next week over an anticipated delay in its prospective curbs on under 18s' access to hardcore porn sites.
The current timetable, culminating in the implementation of UK porn censorship by the end of the year, requires that the final censorship guidelines be presented to MPs before they go on holiday on Thursday. They will then be ready to approve them when they return to work in the autumn. It sounds like they won't be ready for publishing by this Thursday. The BBFC noted that they were due to send the results of the public consultation along with the BBFC censorship rules to the government by late May of this year, so presumably the government is still pondering what to do.

'Best practice' just like Facebook and Cambridge Analytica

Back in April when the BBFC published its rather naive draft rules for public consultation, its prose tried to suggest that we can trust age verifiers with our most sensitive porn browsing data because they will voluntarily follow 'best practice'. But in light of the major industry player, in this case Facebook, allowing Cambridge Analytica to so dramatically abuse our personal data, the hope that these people will follow 'best practice' is surely forlorn.

GDPR

And there was the implementation of GDPR. The BBFC seemed to think that this was all that was needed to keep our data safe. But when it comes down to it, all GDPR seems to have done is to train us, like Pavlov's dogs, to endlessly tick the consent box for all these companies to do what the hell they like with our data.

Ingenious kids

Then there was a nice little piece of research this week that revealed that network level ISP filtering of porn has next to no impact on preventing young porn seekers from obtaining their kicks. The research seems to suggest that it is not enough to block porn for one lad, because he has 30 mates whose houses he can go round to and surf the web there; or else it only takes a few lads to be able to download porn and it will soon be circulated to the whole community on a memory stick or whatever.

Mass buy-in

I guess the government is finding it tough to find age verification ideas that are both convenient for adult users, whilst remaining robust about preventing access by the under 18s. I think the government needs to find a solution that will achieve a mass buy-in by adult users. If the adults don't want to play ball with the age verification process, then the first fall back position is for them to use a VPN. I know from my use of VPNs that they are very good, and once you turn one on I find it gets left on all day. I am sure millions of people using VPNs would not go down well with the security services on the trail of more serious crimes than under age porn viewing.

I think the most likely age verification method proposed to date that has a chance of a mass buy-in is the AVSecure system of anonymously buying a porn access card from a local shop, and using a PIN, perhaps typed in once a day. Then they are able to browse without further hassle on all participating websites. But I think it would require a certain pragmatism from government to accept this idea, as it would be so open to over 18s buying a card and then selling the PIN to under 18s, or perhaps sons nicking their dad's PIN when they see the card lying around (or even perhaps installing a keyboard logger to nick the password). The government would probably like something more robust where PINs have to be matched to people's proven ID. But I think porn users would be stupid to hand over their ID to anyone on the internet who can monitor porn use. The risks are enormous: reputational damage, blackmail, fraud etc, and in this nasty PC world, the penalty of the most trivial of moral transgressions is to lose your job or even career.

A path to failure

The government is also setting out on a path where it can do nothing but fail. The Telegraph piece mentioned above is already lambasting the government for not applying the rules to social media websites such as Twitter, which host a fair bit of porn. The
Telegraph comments: Children will be free to watch explicit X-rated sex videos on social media sites because of a loophole in a new porn crackdown, Britain's chief censor has admitted. David
Austin, chief executive of the BBFC, has been charged by ministers with enforcing new laws that require people to prove they are over 18 to access porn sites. However, writing for telegraph.co.uk, Mr Austin admitted it would not be a silver bullet as
online porn on sites such as Facebook and YouTube would escape the age restrictions. Social media companies will not be required to carry age-verification for pornographic content on their platforms. He said it was a matter for government to review this
position.
Uganda introduces a significant tax on social media usage
21st July 2018

3rd July 2018. See article from torrentfreak.com
Uganda has just introduced a significant tax on social media usage. It is set at 200 shillings a day, which adds up to about 3% of the average annual income if used daily. Use of a long list of websites and apps including Facebook, WhatsApp, Twitter and Tinder triggers the daily tax, billed through ISPs. And as you may expect, Ugandan internet users are turning to VPNs so that ISPs can't detect access to taxed apps and websites. In response, the government says it has ordered local ISPs to
begin blocking VPNs. In a statement, Uganda Communications Commission Executive Director, Godfrey Mutabazi said that Internet service providers would be ordered to block VPNs to prevent citizens from avoiding the social media tax. Mutabazi told
Dispatch that ISPs are already taking action to prevent VPNs from being accessible but since there are so many, it won't be possible to block them all. In the meantime, the government is trying to portray VPNs as more expensive to use than the tax. In a
post on Facebook this morning, Mutabazi promoted the tax as the sensible economic option. It appears that many Ugandans are outraged at the prospect of yet another tax and see VPN use as a protest, despite any additional cost. Opposition figures
have already called for a boycott with support coming in from all corners of society. The government appears unmoved, however. Frank Tumwebaze, Minister of Information Technology and Communications said: If we tax
essentials like water, why not social media?
Update: And the people were not impressed 13th July 2018. See article
from bbc.com Uganda is reviewing its decision to impose taxes on the use of social media and on money transactions by mobile phone, following a public backlash. Prime Minister Ruhakana Rugunda made the announcement soon after police
broke up a protest against the taxes. President Yoweri Museveni had pushed for the taxes to boost government revenue and to restrict criticism via WhatsApp, Facebook and Twitter. The social media tax is 6000 Uganda shillings a month
(£1.25), which represents about 3% of the average wage. Activists argue that while the amount may seem little, it is a significant slice of what poorer people are paying for getting online. There is also a 1% levy on the total value of mobile
phone money transactions, affecting poorer Ugandans who rarely use banking services. In a statement to parliament, Rugunda said: Government is now reviewing the taxes taking into consideration the concerns of
the public and its implications on the budget.
A revised budget is due to be tabled in parliament on 19 July. Update: And the government continues to repress the people 21st July 2018. See
article from qz.com Uganda's government has 'reviewed' its new social media tax and has decided to stick with it. Matia Kasaija, the finance minister, decided against rescinding the social media tax. His reasoning echoed Museveni's initial reason for floating the tax: stopping gossip.
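As a rough cross-check of the figures quoted in this item, the daily and monthly rates are consistent with each other, and the implied average income below is derived purely from the article's own numbers (200 shillings a day, roughly 3% of average income), not from an independent statistic:

```python
# Back-of-envelope check of the quoted figures: 200 UGX/day, 6,000 UGX/month,
# and the claim that this amounts to roughly 3% of the average income.
daily_rate = 200          # UGX per day
monthly_rate = 6_000      # UGX per month, as quoted in the BBC follow-up
share_of_income = 0.03    # "about 3% of the average wage"

annual_cost = daily_rate * 365
print(annual_cost)                            # 73,000 UGX a year
print(monthly_rate * 12)                      # 72,000 UGX a year -- consistent
print(round(annual_cost / share_of_income))   # implied average income, ~2.4m UGX
```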
Poland ratchets up the oppression of internet users by requiring ISPs to snitch on attempts to access banned websites
20th July 2018

See article from europeangaming.eu
The Polish government is demanding that ISPs snitch on their customers who attempt to access websites it deems illegal. The government wants to make the restrictions stricter for unauthorised online gambling sites and will require local ISPs to
inform it about citizens' attempts to access them. According to the Panoptykon Foundation, a digital rights watchdog, the government will compile a central registry of unauthorized websites to monitor. According to the digital rights body, the
government seeks to introduce a chief snooper that would compel data from ISPs disclosing which citizens tried to access unauthorised websites. In addition, the ISPs would have to keep the snooping requests secret from the customer. Local organisations are unsurprisingly worried that this expansion of censorship could turn out to be the first of many steps in an escalation of online restrictions.
FOSTA Law Is Causing Online Censorship and Removal of Protected Speech
20th July 2018

18th July 2018. See press release from eff.org
On Thursday, July 19, at 4 pm, the Electronic Frontier Foundation (EFF) will urge a federal judge to put enforcement of FOSTA on hold during the pendency of its lawsuit challenging the constitutionality of the federal law. The hold is needed, in
part, to allow plaintiff Woodhull Freedom Foundation, a sex worker advocacy group, to organize and publicize its annual conference, held August 2-5. FOSTA , or the Allow States and Victims to Fight Online Sex Trafficking Act, was
passed by Congress in March. But despite its name, FOSTA attacks online speakers who speak favorably about sex work by imposing harsh penalties for any website that might be seen as facilitating prostitution or contributing to sex trafficking. In Woodhull
Freedom Foundation v. U.S. , filed on behalf of two human rights organizations, a digital library, an activist for sex workers, and a certified massage therapist, EFF maintains the law is unconstitutional because it muzzles constitutionally protected
speech that protects and advocates for sex workers and forces speakers and platforms to censor themselves. Enforcement of the law should be suspended because the plaintiffs are likely to win the case and because it has caused, and
will continue to cause, irreparable harm to the plaintiffs, EFF co-counsel Bob Corn-Revere of Davis Wright Tremaine will tell the court at a hearing this week on the plaintiffs' request for a preliminary injunction. Because of the risk of criminal
penalties, the plaintiffs have had their ads removed from Craigslist and censored information on their websites. Plaintiff Woodhull Freedom Foundation has censored publication of information that could assist sex workers negatively impacted by the law.
FOSTA threatens Woodhull's ability to engage in protected online speech, including livestreaming and live tweeting its August meeting, unless FOSTA is put on hold. Update: A little reserved 20th July 2018. See
article from avn.com
Judge Richard Leon of the United States District Court in Washington D.C. heard Woodhull's request for a preliminary injunction that would stop the law from remaining in effect until the group's lawsuit is decided, but did not issue a judgement. Nor did he announce a date when he would issue a ruling. According to one account from inside the courtroom, Leon sounded skeptical that the law had actually caused harm to the plaintiffs in the case.
Sikhs threaten to protest against a Sunny Leone biopic because of her real name
19th July 2018

See article from bbc.co.uk
Sikh leaders in India have threatened to protest over the title of a biopic because it uses the name Kaur. Sunny Leone is a former porn star turned Bollywood actress who plays herself in the web series Karenjit Kaur: The Untold Story of Sunny Leone. Kaur - Leone's real name - is used by Sikh women as a surname or middle name and symbolises gender equality. The web series depicts her life and premiered on 16 July on Zee5, a streaming platform in India. In a letter to Subhash Chandra, the chairman of Essel Group which owns Zee5, Indian politician Manjinder Singh Sirsa called for the show to be pulled from the network or have the name Kaur removed from the title. But Chandra responded simply by explaining that her name can't be changed. Other Sikh groups and leaders have expressed similar sentiments and have threatened to protest outside the network's offices if their demands aren't met.
Israel set to adopt a new internet censorship law
18th July 2018

See article from jta.org
The Israeli government would have far-reaching power to remove or block content from social media sites under legislation coming up for a vote in the Knesset. The so-called Facebook Law would allow content to be deleted for reasons that include incitement to terrorism, without criminal proceedings and without any admissible evidence. The legislation, which was approved Sunday by the Law, Constitution and Justice Committee, is expected to be voted on before the Knesset ends its summer session on July 22. Along with Facebook, the social media outlets that would be covered by the legislation include Twitter, WhatsApp, Telegram, YouTube and Reddit.
The BBFC consultation on UK internet porn censorship
17th July 2018

See BBFC minutes May 2018 [pdf] from bbfc.co.uk
Nobody seems to have heard much about the progress of the BBFC consultation about the process to censor internet porn in the UK. The sketchy timetable laid out so far suggests that the result of the consultation should be published prior to the
Parliamentary recess scheduled for 26th July. Presumably this would provide MPs with some light reading over their summer hols, ready for them to approve as soon as the hols are over. This publication may have to be hurried along though, as
pesky MPs are messing up Theresa May's plans for a non-Brexit, and she would like to send them packing a week early before they can cause trouble. ( Update 18th July . The early holidays idea has
now been shelved). The BBFC published meeting minutes this week that mentions the consultation: The public consultation on the draft Guidance on Age Verification Arrangements and the draft Guidance on Ancillary
Service Providers closed on 23 April. The BBFC received 620 responses, 40 from organisations and 580 from individuals. Many of the individual responses were encouraged by a campaign organised by the Open Rights Group. Our proposed
response to the consultation will be circulated to the Board before being sent to DCMS on 21 May.
So assuming that the response was sent to the government on the appointed day then someone has been sitting on the results for quite a
long time now. Meanwhile it's good to see that people are still thinking about the monstrosity that is coming our way. Ethical porn producer Erika Lust has been speaking to New Internationalist. She comments on the way the new law will compound MindGeek's monopolistic dominance of the online porn market: The age verification laws are going to disproportionately affect smaller low-traffic sites and independent sex workers who cannot cover the costs of
installing age verification tools. It will also impact smaller sites by giving MindGeek even more dominance in the adult industry. This is because the BBFC draft guidance does not enforce sites to offer more than one age
verification product. So, all of MindGeeks sites (again, 90% of the mainstream porn sites) will only offer their own product; Age ID. The BBFC have also stated that users do not have to verify their age on each visit if access is restricted by password
or a personal ID number. So users visiting a MindGeek site will only have to verify their age once using AgeID and then will be able to login to any complying site without having to verify again. Therefore, viewers will be less likely to visit competitor
sites not using the AgeID technology, and simultaneously competitor sites will feel pressured to use AgeID to protect themselves from losing viewers. ...Read the full
article from newint.org
A Key Victory Against European Copyright Filters and Link Taxes - But What's Next?
17th July 2018

See article from eff.org CC by Danny O'Brien
See Guy In Charge Of Pushing Draconian EU Copyright Directive, Evasive About His Own Use Of Copyright Protected Images from techdirt.com
Against all the odds, but with the support of nearly a million Europeans , MEPs voted earlier this month to
reject the EU's proposed copyright reform--including controversial proposals to create a new "snippet" right for news publishers, and mandatory
copyright filters for sites that publish user-uploaded content. The change was testimony to how powerful and fast-moving Net activists can be. Four weeks ago, few knew that these crazy provisions were even being considered. By
the June 20th vote, Internet experts were weighing in , and
wider conversations were starting on sites like Reddit. The result was a vote on July
5th of all MEPs, which ended in a 318 against 278 victory in favour of withdrawing the Parliament's support for the language. Now all MEPs will have a chance in September to submit new amendments and vote on a final text -- or reject the directive entirely. While re-opening the text was a surprising set-back for Article 13 and 11, the battle isn't over: the language to be discussed in September will be based on
the original proposal by the European Commission, from two years ago -- which included the first versions of the copyright filters, and
snippet rights. German MEP Axel Voss's controversial modifications will also be included in the debate, and there may well be a flood of other proposals, good and bad, from the rest of the European Parliament. There's still
sizeable support for the original text: Article 11 and 13's loudest proponents, led by Voss, persuaded many MEPs to support them by arguing that these new powers would restore the balance between American tech giants and Europe's newspaper and creative
industries -- or "close the value gap", as their arguments have it. But using mandatory algorithmic censors and new intellectual property rights to restore balance is like Darth Vader bringing balance to the Force: the
fight may involve a handful of brawling big players, but it's everybody else who would have to deal with the painful consequences. That's why it remains so vital for MEPs to hear voices that represent the wider public interest.
Librarians ,
academics , and redditors, everyone from small Internet businesses and celebrity Youtubers, spoke up in a way
that was impossible for the Parliament to ignore. The same Net-savvy MEPs and activists that wrote and fought for the GDPR put their names to challenge the idea that these laws would rein back American tech companies. Wikipedians stood up and were
counted: seven independent, European-language encyclopedias consensed to shut down on the day of the vote. European alternatives to Google, Facebook and
Twitter argued that this
would set back their cause . And
European artists spoke up that the EU shouldn't be setting up censorship and ridiculous link rights in their name.
To make sure the right amendments pass in September, we need to keep that conversation going. Read on to find out what you can do, and who you should be speaking to. Who Spoke Up In The European Parliament?
As we noted last week, the decision to challenge the JURI committee's language on Article 13 and 11 last week was not automatic -- a minimum of 78 MEPs needed to petition for it to be put to the vote. Here's
the list of those MEPs who actively stepped forward to stop the bill. Also heavily involved was Julia Reda, the Pirate Party MEP who worked
so hard on making the rest of the proposed directive so positive for copyright reform, and then re-dedicated herself to stopping the worst excesses of the JURI language, and
Marietje Schaake , the Parliament's foremost advocate for human rights online. These are the core of the opposition to
Article 13 and 11. A look at that list, and the final list of
votes on July 5th, shows that the proposals have opponents in every corner of Europe's political map. It also shows that every MEP who voted for Article 13 and 11, has someone close to them politically who knows why it's wrong.
What happens now? In the next few weeks, those deep in the minutiae of the Copyright directive will be crafting amendments for MEPs to vote on in September. The tentative schedule is that the amendments
are accepted until Wednesday September 5th, with a vote at 12:00 Central European Time on Wednesday September 12th. The European Parliament has a fine tradition of producing a rich supply of amendments (the GDPR had thousands).
We'll need to coalesce support around a few key fixes that will keep the directive free of censorship filters and snippet rights language, and replace them with something less harmful to the wider Net. Julia Reda already proposed
amendments. And one of Voss' strongest critics in the latest vote was Catherine Stihler, the Scottish MEP who had created and passed consumer-friendly directive language in her committee, which Voss ignored. (Here's her
barnstorming speech before the final vote.) While we wait for those amendments to appear, the next step
is to keep the pressure on MEPs to remember what's at stake -- no mandatory copyright filters, and no new ancillary rights on snippets of text. In particular, if you
talk to your MEP , it's important to convey how you feel these proposals will affect you . MEPs are hearing from giant tech and media companies. But they are only just
beginning to hear from a broader camp: the people of the Internet.
Ofcom boss Sharon White sneers at the British people, and volunteers Ofcom to be their internet news censor
16th July 2018

13th July 2018. See article from theguardian.com
Sharon White, the CEO of Ofcom, has put her case to be the British internet news censor, disgracefully from behind the paywalled website of The Times. White says Ofcom has done research showing how little users trust what they read on social media.
She said that only 39% consider social media to be a trustworthy news source, compared with 63% for newspapers, and 70% for TV. But then again many people don't much trust the biased moralising from the politically correct mainstream media,
including the likes of Ofcom. White claims social media platforms need to be more accountable in how they curate and police content on their platforms, or face regulation. In reality, Facebook's algorithm seems pretty straightforward, it
just gives readers more of what they have liked in the past. But of course the powers that be don't like people choosing their own media sources; they would much prefer that the BBC, or the Guardian, or Ofcom do the choosing. Sharon White wrote
in the Times: The argument for independent regulatory oversight of [large online players] has never been stronger. In practice, this would place much greater scrutiny on how effectively the
online platforms respond to harmful content to protect consumers, with powers for a regulator to enforce standards, and act if these are not met.
She continued, disgracefully revealing her complete contempt of the British people:
Many people admit they simply don't have the time or inclination to think critically when engaging with news, which has important implications for our democracy.
White joins a growing number of the
establishment elite arguing that social media needs censorship. The government has frequently suggested as much, with Matt Hancock, then digital, culture, media and sport secretary, telling Facebook in April: Social
media companies are not above the law and will not be allowed to shirk their responsibilities to our citizens.
Update: The whole pitch to offer Ofcom's services as a news censor 15th July 2018. See
Sunday Times article republished by Ofcom from ofcom.org.uk
Ofcom has published Sharon White's pitch for Ofcom to become the internet news censor. White is nominally commenting on two research reports. There seem to be 4 whinges about modern news reading via smart phones, and all of them are just characteristics of the medium that will never change regardless of whether we have news censors or not.
- Fake News: mostly only exists in the minds of politicians. Hardly anyone else can find any. So internet news readers are not much bothered by trying to detect it.
- Passive news reading. It's far too much trouble typing in stuff on a smart phone to be bothered to go out and find stuff for yourself. So the next best thing is to use apps that do the best job in feeding you articles that are of interest.
- Skimming and shallow reading of news feeds. Well there's so much news out there and the news feed algorithm isn't too hot anyway, so if anything isn't quite 100% interesting, then just scroll on. This isn't going to change any time soon.
- Echo chambers. This is just a put-down phrase for phone users choosing to read the news that they like. If a news censor thinks that more worthy news should be force fed into people's news readers then they will just suffer the indignity of being rapidly swiped into touch.
Anyway this is Sharon White's take: Picking up a newspaper with a morning coffee. Settling down to watch TV news after a day's work. Reading the sections of the Sunday papers in your favourite order.
For decades, habit and routine have helped to define our relationship with the news. In the past, people consumed news at set times of day, but heard little in between. But for many people, those habits, and the news landscape that
shapes them, have now changed fundamentally. Vast numbers of news stories are now available 24/7, through a wide range of online platforms and devices, with social media now the most popular way of accessing news on the internet.
Today's readers and viewers face the challenge to keep up. So too, importantly, does regulation. The fluid environment of social media certainly brings benefits to news, offering more choice, real-time updates, and a platform for
different voices and perspectives. But it also presents new challenges for readers and regulators alike -- something that we, as a regulator of editorial standards in TV and radio, are now giving thought for the online world. In
new Ofcom research, we asked people about their relationship with news in our always-on society, and the findings are fascinating. People feel there is more news than ever before, which presents a challenge for their time and
attention. This, combined with fear of missing out, means many feel compelled to engage with several sources of news, but only have the capacity to do so superficially. Similarly, as many of us now read news through social media
on our smartphones, we're constantly scrolling, swiping and clearing at speed. We're exposed to breaking news notifications, newsfeeds, shared news and stories mixed with other types of content. This limits our ability to process, or even recognise, the
news we see. It means we often engage with it incidentally, rather than actively. In fact, our study showed that, after being exposed to news stories online, many participants had no conscious recollection of them at all. For
example, one recalled seeing nine news stories online over a week -- she had actually viewed 13 in one day alone. Others remembered reading particular articles, but couldn't recall any of the detail. Social media's attraction as a
source of news also raises questions of trust, with people much more likely to doubt what they see on these platforms. Our research shows only 39% consider social media to be a trustworthy news source, compared to 63% for newspapers, and 70% for TV.
Fake news and clickbait articles persist as common concerns among the people taking part in our research, but many struggle to check the validity of online news content. Some rely on gut instinct to tell fact from fiction, while
others seek second opinions from friends and family, or look for established news logos, such as the Times. Many people admit they simply don't have the time or inclination to think critically when engaging with news, which has important implications for
our democracy. Education on how to navigate online news effectively is, of course, important. But the onus shouldn't be on the public to detect and deal with fake and harmful content. Online companies need to be much more
accountable when it comes to curating and policing the content on their platforms, where this risks harm to the public. We welcome emerging actions by the major online players, but consider that the argument for independent
regulatory oversight of their activities has never been stronger. Such a regime would need to be based on transparency, and a set of clear underpinning principles. In practice, this would place much greater scrutiny on how
effectively the online platforms respond to harmful content to protect consumers, with powers for a regulator to enforce standards, and act if these are not met. We will outline further thoughts on the role independent regulation could play in the
autumn. When it comes to trust and accountability, public service broadcasters like the BBC also have a vital role to play. Their news operations provide the bedrock for much of the news content we see online, and as the
broadcasting regulator, Ofcom will continue to hold them to the highest standards. Ofcom's research can help inform the debate about how to regulate effectively in an online world. We will continue to shine a light on the
behavioural trends that emerge, as people's complex and evolving relationship with the media continues to evolve.
And perhaps if you have skimmed over White's piece a bit rapidly, here is the key paragraph again:
In practice, this would place much greater scrutiny on how effectively the online platforms respond to harmful content to protect consumers, with powers for a regulator to enforce standards, and act if these are not met. We will
outline further thoughts on the role independent regulation could play in the autumn.
16th July 2018
A detailed critique of censorship clauses of the bill. By Index on Censorship. See article from indexoncensorship.org
YouTube bans Erika Lust's series In Conversation with Sex Workers
15th July 2018

14th July 2018. See article from motherboard.vice.com
See banned videos from erikalust.com
YouTube has banned Erika Lust's series In Conversation with Sex Workers. There was NO explicit content, NO sex, NO naked bodies, NO (female) nipples or anything else that breaks YouTube's strict guidelines in the series, Lust wrote on her
website. It was simply sex workers speaking about their work and experiences. Presumably the censorship is inspired by the US FOSTA internet censorship law, under which YouTube could be held liable for content that facilitates sex trafficking. It is cheaper and easier for YouTube to take down any content that could be in any way connected to sex trafficking than to spend time checking it out. Erika Lust, a Barcelona-based erotic filmmaker, wrote in a blog post on Wednesday that YouTube terminated her
eponymous channel on July 4, when it had around 11,000 subscribers. The ban came after an interviewee for the company's series In Conversation With Sex Workers, which had been on YouTube for about a week, tweeted to promote her involvement in the film.
Within hours of that tweet the channel was terminated, citing violation of community guidelines. Update: Charlotte Rose too 15th July 2018. See
article from twitter.com
Charlotte Rose, a well known sex industry campaigner tweets: We've just received an email to say that my YouTube channel has been suspended with no room for appeal - looks like no more live streams of
@rosetalkssex? I don't understand why I've been a target of a total censorship! @YouTube removes my Chanel and 5 years worth of content (which made them money) with no warning.
14th July 2018
Now the US has pulled out of the UN Human Rights Council, its direction accelerates away from human rights. See article from accessnow.org
14th July 2018
Russia's State Duma has adopted a draft law that aims to tackle apps through which pirated content is distributed. See article from torrentfreak.com
Research finds that ISP porn filters have an insignificant effect on preventing adolescents from seeking out porn
13th July 2018

See article from liebertpub.com
A paper has been published on the effects of network level website blocking to try and prevent adolescents from seeking out porn.

Internet Filtering and Adolescent Exposure to Online Sexual Material, by Andrew K. Przybylski and Victoria Nash

Abstract

Early adolescents are spending an increasing amount of time online, and a significant share of caregivers now use Internet
filtering tools to shield this population from online sexual material. Despite wide use, the efficacy of filters is poorly understood. In this article, we present two studies: one exploratory analysis of secondary data collected in the European Union,
and one preregistered study focused on British adolescents and caregivers to rigorously evaluate their utility. In both studies, caregivers were asked about their use of Internet filtering, and adolescent participants were interviewed about their recent
online experiences. Analyses focused on the absolute and relative risks of young people encountering online sexual material and the effectiveness of Internet filters. Results suggested that caregivers' use of Internet filtering had inconsistent and practically insignificant links with young people's reports of encountering online sexual material.

Conclusions

The struggle to shape the experiences young
people have online is now part of modern parenthood. This study was conducted to address the value of industry, policy, and professional advice concerning the appropriate role of Internet filtering in this struggle. Our preliminary findings suggested
that filters might have small protective effects, but evidence derived from a more stringent and robust empirical approach indicated that they are entirely ineffective. These findings highlight the need for a critical cost-benefit analysis in light of
the financial and informational costs associated with filtering and age verification technologies such as those now being developed in some European countries like the United Kingdom. Further, our results highlight the need for registered trials to
rigorously evaluate the effectiveness of costly technological solutions for social and developmental goals.
The write-up doesn't really put its conclusions into any real context as to what is actually happening, beyond the kids still
being able to get hold of porn. The following paragraph gives the best clue of what is going on: We calculated absolute risk reduction of exposure to online sexual material associated with caregivers using filtering
technology in practical terms. These results were used to calculate the number of households which would have to be filtered to prevent one young person, who would otherwise see sexual material online, from encountering it over a 12-month period.
Depending on the form of content, results indicated that between 17 and 77 households would need to be filtered to prevent a young adolescent from encountering online sexual material. A protective effect lower than we would consider practically
significant.
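To put those numbers in context: the "households to filter" figure is simply the reciprocal of the absolute risk reduction, so a range of 17 to 77 households corresponds to a protective effect of only about 1 to 6 percentage points. A minimal sketch of the arithmetic, using made-up exposure rates rather than the paper's actual data, reproduces the quoted range:

    # Quick sanity check of the "households that would need to be filtered" figure:
    # it is just 1 / ARR, where ARR is the absolute risk reduction in exposure.
    # The exposure rates below are hypothetical illustrative values, not the paper's data.

    def households_to_filter(p_exposed_unfiltered: float, p_exposed_filtered: float) -> float:
        """Number of households that must be filtered to prevent one exposure: 1 / ARR."""
        arr = p_exposed_unfiltered - p_exposed_filtered
        if arr <= 0:
            raise ValueError("No measurable protective effect")
        return 1.0 / arr

    # A hypothetical 40% exposure rate in unfiltered households versus 38.7% in filtered
    # ones gives ARR = 1.3%, i.e. about 77 households per exposure prevented;
    # a filtered rate of 34.1% gives ARR of roughly 5.9%, i.e. about 17 households.
    print(round(households_to_filter(0.40, 0.387)))  # -> 77
    print(round(households_to_filter(0.40, 0.341)))  # -> 17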
This seems to suggest that if one kid has a censored internet then he just goes around to a mate's house that isn't censored, and downloads from there. He wouldn't actually be blocked from viewing porn until his whole
circle of friends is similarly censored. It only takes one kid to be able to download porn, as it can then be loaded onto a memory stick and passed around. |
|
Russia calls on volunteers to snitch on websites
|
|
|
| 13th July 2018
|
|
| See article from zdnet.com |
Russia's interior minister says he wants citizens to scour the internet for banned material. The Russian internet censor Roskomnadzor has an ever-expanding list of banned sites featuring material that Russian authorities don't like. The list
takes in everything from LGBT sites to critics of the Kremlin and sites that allegedly carry terrorist propaganda, the main justification for many of Russia's online censorship and surveillance laws. Free-speech activists reckon the number of
blocked websites now tops 100,000, but how best to keep adding to that list? Russia's interior minister, Vladimir Kolokoltsev, says volunteers should step up to aid the search for banned information. Whilst speaking about the challenges faced by
search and rescue volunteers, he said volunteers could help public authorities in preventing drug abuse, combating juvenile delinquency, and monitoring the internet networks to search for banned information.
|
|
Egypt's Draconian New Cybercrime Bill Will Only Increase Censorship
|
|
|
| 13th July 2018
|
|
| See article from
eff.org |
The new 45-article cybercrime law, named the Anti-Cyber and Information Technology Crimes law, is divided into two parts. The first part of the bill stipulates that service providers are obligated to retain user information (i.e. tracking data) in the
event of a crime, whereas the second part of the bill covers a variety of cybercrimes under overly broad language (such as threat to national security). Article 7 of the law, in particular, grants the state the authority to shut
down Egyptian or foreign-based websites that incite against the Egyptian state or threaten national security through the use of any digital content, media, or advertising. Article 2 of the law authorizes broad surveillance capabilities, requiring
telecommunications companies to retain and store users' data for 180 days. And Article 4 explicitly enables foreign governments to obtain access to information on Egyptian citizens and does not make mention of requirements that the requesting country
have substantive data protection laws. Update: Passed 17th July 2018. See
article from kveo.com Egypt's parliament has passed three controversial draft bills regulating the press
and media. The draft bills, which won the parliament's approval on Monday, will also regulate the Supreme Media Regulatory Council, the National Press Authority and the National Media Authority. The bills still need to be approved by the
president, Abdel-Fattah el-Sissi, before they can become laws.
|
|
|
|
|
| 13th July 2018
|
|
|
Don't Fall for This Scam Claiming You Were Recorded Watching Porn See article from gizmodo.com |
|
FOSTA internet censorship has blind-sided police in their pursuit of pimps and traffickers
|
|
|
|
12th July 2018
|
|
| See
article from techdirt.com |
Supporters of the US internet censorship law FOSTA were supposedly attempting to target pimps and traffickers, but of course their target was the wider sex work industry. Hence they weren't really interested in the warning that the law would make it
harder to target pimps and sex traffickers as their activity would be driven off radar. Anyway it seems that the police at least have started to realise that the warning is coming true, but I don't suppose this will bother the politicians much.
Over in Indianapolis, the police have just arrested their first pimp of 2018, and it involved an undercover cop being approached by the pimp. The reporter asks why there have been so few such arrests, and the police point the finger right at the shutdown
of Backpage: The cases, according to Sgt. John Daggy, an undercover officer with IMPD's vice unit, have just dried up. The reason for that is pretty simple: the feds closed the police's best source of leads, the online personals site Backpage, earlier
this year. Daggy explained: We've been a little bit blinded lately because they shut Backpage down. I get the reasoning behind it, and the ethics behind it, however, it has blinded us. We used to look at Backpage as a
trap for human traffickers and pimps. With Backpage, we would subpoena the ads and it would tell a lot of the story. Also, with the ads we would catch our victim at a hotel room, which would give us a crime scene. There's a ton of
evidence at a crime scene. Now, since [Backpage] has gone down, we're getting late reports of them and we don't have much to go by.
|
|
Jeremy Wright is appointed as the new culture minister
|
|
|
| 12th July 2018
|
|
| See article from en.wikipedia.org |
Jeremy Wright has been appointed as the new Secretary of State for Digital, Culture, Media and Sport. He is the government minister in charge of the up 'n' coming regime to censor internet porn. He will also be responsible for several government
initiatives attempting to censor social media. He is a QC and was previously the government's Attorney General. His parliamentary career to date has not really given any pointers to his views and attitudes towards censorship. The previous
culture minister, Matt Hancock, has moved upwards to become minister for health. Perhaps in his new post he can continue to whinge about limiting what he considers the excessive amount of screen time being enjoyed by children. |
|
Sky, TalkTalk and Virgin tell Parliament that they would welcome the establishment of an official state internet censor
|
|
|
| 11th July
2018
|
|
| See article from telegraph.co.uk
|
Sky, TalkTalk and Virgin Media would back the creation of an internet censor to set out a framework for internet companies in the UK, the House of Lords Communications Committee was told. The three major UK ISPs were reporting to the House of Lords'
ongoing inquiry into internet censorship. The companies' policy heads pushed for a new censor, or the expansion of the responsibilities of a current censor, to set the rules for content censorship and to better equip children using the internet amid safety
concerns. At the moment the Information Commissioner's Office has responsibility for data protection and privacy; Ofcom censors internet TV; the Advertising Standards Authority censors adverts; and the BBFC censors adult porn. Citing a report
by consultancy Communications Chambers, Sky's Adam Kinsley said that websites and internet providers are making decisions, but in an unstructured way. Speaking about the current state of internet regulation, Kinsley said:
Companies are already policing their own platforms. There is no accountability of what they are doing and how they are doing it. The only bit of transparency is when they decide to do it on a global basis and at a time of their
choosing. Policy makers need to understand what is happening, and at the moment they don't have that. The 13-strong House of Lords committee, chaired by Lord Gilbert of Panteg, launched an inquiry earlier this year to explore how the
censorship of the internet should be improved. The committee will consider whether there is a need for new laws to govern internet companies. This inquiry will consider whether websites are sufficiently accountable and transparent, and whether they have
adequate governance and provide behavioural standards for users. The committee is hearing evidence from April to September 2018 and will launch a report at the end of the year. |
|
By demonstrating that legitimate businesses are being unconstitutionally censored
|
|
|
|
11th July 2018
|
|
| See article from eff.org CC by
David Greene |
The EFF writes: We are asking a court to declare the Allow States and Victims to Fight Online Sex Trafficking Act of 2017 ("FOSTA") unconstitutional and prevent it from being enforced. The law was written so poorly that it
actually criminalizes a substantial amount of protected speech and, according to experts, actually hinders efforts to prosecute sex traffickers and aid victims. In our lawsuit, two human rights organizations, an individual
advocate for sex workers, a certified non-sexual massage therapist, and the Internet Archive, are challenging the law as an unconstitutional violation of the First and Fifth Amendments. Although the law was passed by Congress for the worthy purpose of
fighting sex trafficking, its broad language makes criminals of those who advocate for and provide resources to adult, consensual sex workers and actually hinders efforts to prosecute sex traffickers and aid victims. EFF
strongly opposed FOSTA
throughout the
legislative process . During the months-long Congressional debate on the law we expressed our concern that the law violated
free speech rights and would do heavy damage to online freedoms. The law that was
ultimately passed by Congress and signed into law by President Trump was actually the
most egregiously bad of those Congress had been considering.
What FOSTA Changed FOSTA made three major changes to existing law. The first two involved changes to federal criminal law:
First, it created an entirely new federal crime by adding a new section to the Mann Act. The new law makes it a crime to "own, manage or operate" an online service with the intent to "promote or facilitate"
"the prostitution of another person." That crime is punishable by up to 10 years in prison. The law further makes it an "aggravated offense," punishable by up to 25 years in prison and also subject to civil lawsuits if
"facilitation" was of the prostitution of 5 or more persons, or if it was done with "reckless disregard" that it "contributed to sex trafficking." An aggravated violation may also be the basis for an individual's civil
lawsuit. The prior version of the Mann Act only made it illegal to physically transport a person across state lines for the purposes of prostitution. Second, FOSTA expanded existing federal criminal sex trafficking law.
Before SESTA, the law made it a crime to knowingly advertise sexual services of a minor or any person doing so only under force, fraud, or coercion, and also criminalized several other modes of conduct. The specific knowledge requirement for advertising
(that one must know the advertisement was for sex trafficking) was an acknowledgement that advertising was entitled to some First Amendment protection. The prior law additionally made it a crime to financially benefit from "participation in a
venture" of sex trafficking. FOSTA made seemingly a small change to the law: it defined "participation in a venture" extremely broadly to include "assisting, supporting, or facilitating." But this new very broad language has
created great uncertainty about liability for speech other than advertising that someone might interpret as "assisting" or "supporting" sex trafficking, and what level of awareness of sex trafficking the participant must have.
As is obvious, these expansions of the law are fraught with vague and ambiguous terms that have created great uncertainty about what kind of online speech is now illegal. FOSTA does not define "facilitate",
"promote", "contribute to sex trafficking," "assisting," or supporting" -- but the inclusion of all of these terms shows that Congress intended the law to apply expansively. Plaintiffs thus reasonably fear it will be
applied to them. Plaintiffs Woodhull Freedom Foundation and Human Rights Watch advocate for the decriminalization of sex work, both domestically and internationally. It is unclear whether that advocacy is considered "facilitating" prostitution
under FOSTA. Plaintiffs Woodhull and Alex Andrews offer substantial resources online to sex workers, including important health and safety information. This protected speech, and other harm reduction efforts, can also be seen as "facilitating"
prostitution. And although each of the plaintiffs vehemently opposes sex trafficking, Congress's expressed sense in passing the law
was that sex trafficking and sex work were "inextricably linked." Thus, plaintiffs are legitimately concerned that their advocacy on behalf of sex workers will be seen as being done in reckless disregard of some "contribution to sex
trafficking," even though all plaintiffs vehemently oppose sex trafficking. The third change significantly undercut the protections of one of the Internet's most important laws, 47 U.S.C. § 230, originally a provision of the
Communications Decency Act, commonly known simply as Section 230 or CDA 230:
FOSTA significantly undermined the legal protections intermediaries had under 47 U.S.C. § 230. Section 230 generally immunized intermediaries from liability arising from content created by
others--it was thus the chief protection that allowed Internet platforms for user-generated content to exist without having to review every piece of content posted to them for potential legal liability. FOSTA undercut this immunity in three
significant ways. First, Section 230 already had an exception for violations of federal criminal law, so the expansion of criminal law described above also automatically expanded the Section 230 exception. Second, FOSTA nullified the immunity also for
state criminal lawsuits for violations of state laws that mirror the violations of federal law. And third, FOSTA allows for lawsuits by individual civil litigants.
The possibility of these state criminal and private civil lawsuits is very troublesome. FOSTA vastly magnifies the risk an Internet host bears of being sued. Whereas federal prosecutors typically carefully pick and choose which
violations of law they pursue, the far more numerous state prosecutors may be more prone to less selective prosecutions. And civil litigants often do not carefully consider the legal merits of an action before pursuing it in court. Past experience teaches
us that they might file lawsuits merely to intimidate a speaker into silence -- the cost of defending even a meritless lawsuit being quite high. Lastly, whereas with federal criminal prosecutions, the US Department of Justice may offer clarifying
interpretations of a federal criminal law that addresses concerns with a law's ambiguity, those interpretations are not binding on state prosecutors and the millions of potential private litigants. FOSTA Has Already Censored
The Internet As a result of these hugely increased risks of liability, many platforms for online speech have shuttered or restructured. The following are just two examples:
Two days after the Senate passed FOSTA, Craigslist eliminated its Personals section, including non-sexual subcategories such as "Missed Connections" and "Strictly Platonic." Craigslist
attributed this change to FOSTA, explaining "Any tool or service can be misused. We can't take such risk without jeopardizing all our other services, so we are
regretfully taking craigslist personals offline. Hopefully we can bring them back some day." Craigslist also shut down its Therapeutic Services section and will not permit ads that were previously listed in Therapeutic Services to be re-listed in
other sections, such as Skilled Trade Services or Beauty Services. VerifyHim formerly maintained various online tools that helped sex workers avoid abusive clients. It described itself as "the biggest dating blacklist
database on earth." One such resource was JUST FOR SAFETY, which had screening tools designed to help sex workers check to see if they might be meeting someone dangerous, create communities of common interest, and talk directly to each other about
safety. Following passage of FOSTA, VerifyHim took down many of these tools, including JUST FOR SAFETY, and explained that it is "working to change the direction of the site."
Plaintiff Eric Koszyk is a certified massage therapist running his own non-sexual massage business as his primary source of income. Prior to FOSTA he advertised his services exclusively in Craigslist's Therapeutic Services section.
That forum is no longer available and he is unable to run his ad anywhere else on the site, thus seriously harming his business. Plaintiff the Internet Archive fears that it can no longer rely on Section 230 to bar liability for content created by third
parties and hosted by the Archive, which comprises the vast majority of material in the Archive's collection, on account of FOSTA's changes to Section 230. The Archive is concerned that some third-party content hosted by the Archive, such as archives of
particular websites, information about books, and the books themselves, could be construed as promoting or facilitating prostitution, or assisting, supporting, or facilitating sex trafficking under FOSTA's expansive terms. Plaintiff Alex Andrews
maintains the website RateThatRescue.org, a sex worker-led, public, free, community effort to share information about both the organizations and services on which sex workers can rely, and those they should avoid. Because the site is largely
user-generated content, Andrews relies on Section 230's protections. She is concerned that FOSTA now exposes her to potentially ruinous civil and criminal liability. She has also suspended moving forward with an app that would offer harm reduction
materials to sex workers. Human Rights Watch relies heavily on individuals spreading its reporting and advocacy through social media. It is concerned that social media platforms and websites that host, disseminate, or allow users to spread their reports
and advocacy materials may be inhibited from doing so because of FOSTA. And many, many others are experiencing the same uncertainty and fears of prosecution that are plaguing other advocates, service providers, platforms, and
platform users since FOSTA became law. We have asked the court to preliminarily enjoin enforcement of the law so that the plaintiffs and others can exercise their First Amendment rights until the court can issue a final ruling.
But there is another urgent reason to halt enforcement of the law. Plaintiff Woodhull Freedom Foundation is holding its annual Sexual Freedom Summit in August 2018. As in past years, the Summit features a track on sex work, this year titled "Sex as
Work," that seeks to advance and promote the careers, safety, and dignity of individuals engaged in professional sex work. In presenting and promoting the Sexual Freedom Summit, and the Sex Work Track in particular, Woodhull operates and uses
interactive computer services in numerous ways: Woodhull uses online databases and cloud storage services to organize, schedule and plan the Summit; Woodhull exchanges emails with organizers, volunteers, website developers, promoters and presenters
during all phases of the Summit; Woodhull has promoted the titles of all workshops on its Summit website ; Woodhull also publishes the biographies and contact information
for workshop presenters on its website, including those for the sex workers participating in the Sex Work Track and other tracks. Is publishing the name and contact information for a sex worker "facilitating the prostitution of another person"?
If it is, FOSTA makes it a crime. Moreover, most, if not all, of the workshops are also promoted by Woodhull on social media such as Facebook and Twitter; and Woodhull wishes to stream the Sex Work Track on Facebook, as it does
other tracks, so that those who cannot attend can benefit from the information and commentary. Without an injunction, the legality under FOSTA of all of these practices is uncertain. The preliminary injunction is necessary so that
Woodhull can conduct the Sex as Work track without fear of prosecution. It is worth emphasizing that Congress was repeatedly warned that it was passing a law that would censor far more speech than was necessary to address the
problem of sex trafficking, and that the law would indeed hinder law enforcement efforts and pose great dangers to sex workers. During the Congressional debate on FOSTA and SESTA, anti-trafficking groups such as
Freedom Network and the
International Women's Health Coalition issued statements warning that the laws would hurt efforts to aid trafficking
victims, not help them. Even Senator Richard Blumenthal, an original cosponsor of SESTA (the Senate bill), criticized the new Mann Act provision when it was proposed in the House bill, telling
Wired "there is no good reason to proceed with a proposal that is opposed by the very survivors it claims to
support." Nevertheless, Senator Blumenthal ultimately voted to pass FOSTA. In support of the
preliminary injunction , we have submitted the declarations of several experts who confirm the
harmful effects FOSTA is having, from sex workers being driven back to far more dangerous street-based work as online classified sites disappear, to the loss of online "bad date lists" that informed sex workers of risks associated with
certain clients, to making sex trafficking less visible to law enforcement, which can no longer scour and analyze formerly public websites where it had been advertised. For more information see the Declarations of
Dr. Alexandra Lutnick ,
Prof. Alexandra Frell Levy , and
Dr. Kimberly Mehlman-Orozco . |
|
A Tweetathon demanding an end to the taxing of social media in Uganda
|
|
|
| 9th July 2018
|
|
| See
article from advox.globalvoices.org CC by Nwachukwu Egbunike
|
Join the Global Voices Sub-Saharan Africa team ( @gvssafrica ) for a
multilingual tweetathon demanding an end to the taxation of social media in Uganda. On July 1, the Ugandan government began enforcing a new law that imposes a 200 shilling [US$0.05, £0.04] daily levy on people using internet
messaging platforms, despite protests from local and international online free speech advocates. This move, according to Ugandan President Yoweri Museveni, has the dual purpose of strengthening the national budget
and also curtailing gossip by Ugandans on social media. It was also popular among local telecom providers, who do not directly benefit from the use of foreign-based over-the-top services such as Facebook, Twitter, and WhatsApp. The policy was preceded by an order to register all new mobile SIM cards with the National Biometric Data Centre. The measure also forces Ugandans to use only mobile money accounts in order to recharge their SIM cards and makes it mandatory to pay a one percent levy on the total value of any mobile money transaction.
These new policies make it more costly for Ugandans -- especially those living in poverty -- to communicate and perform everyday tasks using their mobile devices. On July 2, civil society and legal
advocates in Uganda filed a court challenge against the law, arguing that it violates the country's constitution. [Photo caption: A protester demonstrates his opposition to Uganda's social media tax at a gathering on July 6, 2018.]
On July 6, concerned citizens and civil society advocates issued a joint press statement [see below] calling on Ugandans to avoid paying the tax by using alternate methods to exchange money and access social media, and to join a
National Day of Peaceful Protest Against Unfair Taxation on Wednesday, July 11, 2018. The Global Voices community and our network of friends and allies wish to support this and other efforts to demand an end to the tax. We believe
that this tax is simply a ploy to censor Ugandans and gag dissenting voices. We believe social media should be freely accessible for all people, including Ugandans. The Ugandan social media tax must go! On
Monday, July 9, beginning at 14:00 East Africa Time, we plan to tweet at community leaders, government and diplomatic actors, and media influencers to increase awareness and draw public attention to the issue. We especially encourage fellow bloggers and
social media users all over the world to join us.
|
|
Facebook censors the US Declaration of Independence as hate speech
|
|
|
| 9th July 2018
|
|
| See article from
telegraph.co.uk |
One moment Facebook's algorithms are expected to be able to automatically distinguish terrorism support from news reporting or satire; the next moment, they demonstrate exactly how crap they are by failing to distinguish hate speech from a profound,
nation-establishing statement of citizens' rights. Facebook's algorithms removed parts of the US Declaration of Independence from the social media site after determining they represented hate speech. The issue came to light when a local
paper in Texas began posting excerpts of the historic text on its Facebook page each day in the run up to the country's Independence Day celebrations on July 4. However when The Liberty County Vindicator attempted to post its tenth extract, which
refers to merciless Indian savages, on its Facebook page the paper received a notice saying the post went against its standards on hate speech. Facebook later 'apologised' as it has done countless times before and allowed the posting.
|
|
An excellent summary of the issues leading to the EU disgracefully proposing internet censorship for the benefit of mostly American media corporations
|
|
|
| 8th July 2018
|
|
| See article from
technollama.co.uk CC by Andres |
As we have been covering in the last couple of articles, a controversial EU Copyright Directive has been under discussion at the European Parliament, and in a surprising turn of events,
it voted to reject fast-tracking the tabled proposal by the JURI Committee which contained controversial proposals, particularly in
Art 11 and
Art 13 . The proposed Directive will now get a full discussion and debate in plenary in September. I say
surprising because for those of us who have been witnesses (and participants) to the Copyright Wars for the last 20 years, such a defeat of copyright maximalist proposals is practically unprecedented, perhaps with the exception of
SOPA/PIPA . For years we've had a familiar pattern in the passing of copyright legislation: a proposal has been made to enhance protection and/or
restrict liberties, a small group of ageing millionaire musicians would be paraded supporting the changes in the interest of creators. Only copyright nerds and a few NGOs and digital rights advocates would complain, their opinions would be ignored and
the legislation would pass unopposed. Rinse and repeat. But something has changed, and a wide coalition has managed to defeat powerful media lobbies for the first time in Europe, at least for now. How was this possible?
The main change is that the media landscape is very different thanks to the Internet. In the past, the creative industries were monolithic in their support for stronger protection, and they included creators, corporations, collecting
societies, publishers, and distributors; in other words the gatekeepers and the owners were roughly on the same side. But the Internet brought a number of new players, the tech industry and their online platforms and tools became the new gatekeepers.
Moreover, as people do not buy physical copies of their media and the entire industry has moved towards streaming, online distributors have become more powerful. This has created a perceived imbalance, where the formerly dominating industries need to
negotiate with the new gatekeepers for access to users. This is why creators complain about a value gap between what they
perceive they should be getting, and what they actually receive from the giants. The main result of this change from a political standpoint is that now we have two lobbying sides in the debate, which makes all the difference when
it comes to this type of legislation. In the past, policymakers could ignore experts and digital rights advocates because they never had the potential to reach them; letters and articles by academics were not taken into account, or were given lip service
during some obscure committee discussion just to be hidden away. Tech giants such as Google have provided lobbying access in Brussels, which has at least levelled the playing field when it comes to presenting evidence to legislators.
As a veteran of the Copyright Wars, I have to admit that it has been very entertaining reading the reaction from the copyright industry lobby groups and their individual representatives, some almost going apoplectic with rage at
Google's intervention. These tend to be the same people who spent decades lobbying legislators to get their way unopposed, representing large corporate interests unashamedly and passing laws that would benefit only a few, usually to the detriment of
users. It seems like lobbying must be decried when you lose. But to see this as a victory for Google and other tech giants completely ignores the large coalition that shares the view that the proposed Articles 11 and 13 are very
badly thought-out, and could represent a real danger to existing rights. Some of us have been fighting this fight since before Google even existed, or when it was but a small competitor of AltaVista, Lycos, Excite and Yahoo! At the same
time that more restrictive copyright legislation came into place, we also saw the rise of free and open source software, open access, Creative Commons and open data. All of these are legal hacks that allow sharing, remixing and openness. These were
created precisely to respond to restrictive copyright practices. I also remember how they were opposed as existential threats by the same copyright industries, and treated with disdain and animosity. But something wonderful happened, eventually open
source software started winning (we used to buy operating systems), and Creative Commons became an important part of the Internet's ecosystem by propping-up valuable common spaces such as Wikipedia. Similarly, the Internet has
allowed a great diversity of actors to emerge. Independent creators, small and medium enterprises, online publishers and startups love the Internet because it gives them access to a wider audience, and often they can bypass established gatekeepers. Lost
in this idiotic "Google v musicians" rhetoric has been the threat that both Art 11 and 13 represent to small entities. Art 11 proposes a new publishing right that has been proven to affect smaller players in Germany and Spain; while Art 13
would impose potentially crippling economic restrictions on smaller companies, as they would have to put in place automated filtering systems AND redress mechanisms against mistakes. In fact, it has often been remarked that Art 13 would benefit existing
dominant forces, as they already have filtering in place (think ContentID). Similarly, Internet advocates and luminaries see the proposals as a threat to the Internet, the people who know the Web best think that this is a bad
idea. If you can stomach it, read this thread featuring a copyright lobbyist attacking Neil Gaiman, who has been one of the Internet celebrities that
have voiced their concerns about the Directive. Even copyright experts who almost never intervene in digital rights
affairs have been vocal in their opposition to the changes. And finally we have political representatives from various parties and backgrounds who have been vocally opposed to the changes. While the leader of the political
opposition has been the amazing Julia Reda, she has managed to bring together a variety of voices from other parties and countries. The vitriol launched at her has been unrelenting, but futile. It has been quite a sight to see her opponents both try to
dismiss her as just another clueless young Pirate commanded by Google, while at the same time they try to portray her as a powerful enemy in charge of the mindless and uninformed online troll masses ready to do her bidding. All of
the above managed to do something wonderful, which was to convey the threat in easy-to-understand terms so that users could contact their representatives and make their voice heard. The level of popular opposition to the Directive has been a great sight
to behold. Tech giants did not create this alliance, they just gave various voices access to the table. To dismiss this as Google's doing completely ignores the very real and rich tapestry of those defending digital rights, and it
is quite clearly patronising and insulting, and precisely the reason why they lost. It was very late until they finally realised that they were losing the debate with the public, and not even the last-minute deployment of musical dinosaurs could save the
day. But the fight continues, keep contacting your MEPs and keep applying pressure. Appendix So who supported internet censorship in the EU parliamentary vote? Mostly the EU Conservative Group and also
half the Social Democrat MEPs and half the Far Right MEPs.
|
|
Tanzania's extortionate registration fee makes blogging impossible for anyone but the rich
|
|
|
| 8th July 2018
|
|
| See article from theverge.com
|
In May, Tanzanian bloggers lost an appeal that had temporarily suspended a new set of regulations granting the country's Communication Regulatory Authority discretionary powers to censor online content. Officially dubbed the Electronic and Postal
Communications (Online Content) Regulations, 2018 , the statute requires online content creators -- traditional media websites, online TV and radio channels, but also individual bloggers and podcasters -- to pay roughly two million Tanzanian shillings
(930 US dollars) in registration and licensing fees. They must store contributors' details for 12 months and have means to identify their sources and disclose financial sponsors. Cyber cafes must install surveillance cameras, and all owners of
electronic mobile devices, including phones, have to protect them with a password. Failure to comply with the regulations -- which also forbid online content that is indecent, annoying, or that leads to public disorder -- will result in a five million
Tanzanian shillings (2,202 US dollars) fine, a jail term of not less than a year or both. These new regulations are already forcing young content creators--and often poorer ones--offline. For a country like Tanzania, whose GDP per capita is 879 US
dollars--and where approximately 70% of the population lives on less than two dollars a day--the financial burden of these new laws means it is impossible for most to continue blogging. |
|
|
|
|
| 6th July 2018
|
|
|
You probably didn't realise that your fancy new television was spying on EVERYTHING you watch and you probably signed up for it with Samba TV See
article from thesun.co.uk |
|
|
|
|
|
4th July 2018
|
|
|
China has the world's most centralised internet system See article from economist.com
|
|
Brave Introduces Beta of Private Tabs with Tor for Enhanced Privacy while Browsing
|
|
|
| 2nd July 2018
|
|
| See article from brave.com |
Brave explains in a blog post: Today we're releasing our latest desktop browser Brave 0.23 which features Private Tabs with Tor, a technology for defending against network surveillance. This new functionality, currently in beta,
integrates Tor into the browser and gives users a new browsing mode that helps protect their privacy not only on device but over the network. Private Tabs with Tor help protect Brave users from ISPs (Internet Service Providers), guest Wi-Fi providers,
and visited sites that may be watching their Internet connection or even tracking and collecting IP addresses, a device's Internet identifier. Private Tabs with Tor are easily accessible from the File menu by clicking New Private
Tab with Tor. The integration of Tor into the Brave browser makes enhanced privacy protection conveniently accessible to any Brave user directly within the browser. At any point in time, a user can have one or more regular tabs, session tabs, private
tabs, and Private Tabs with Tor open. The Brave browser already automatically blocks ads, trackers, cryptocurrency mining scripts, and other threats in order to protect users' privacy and security, and Brave's regular private tabs
do not save a user's browsing history or cookies. Private Tabs with Tor improve user privacy in several ways. It makes it more difficult for anyone in the path of the user's Internet connection (ISPs, employers, or guest Wi-Fi providers such as coffee
shops or hotels) to track which websites a user visits. Also, web destinations can no longer easily identify or track a user arriving via Brave's Private Tabs with Tor by means of their IP address. Users can learn more about how the Tor network works by
watching this video. Private Tabs with Tor default to DuckDuckGo as the search engine, but users have the option to switch to one of Brave's other nineteen search providers. DuckDuckGo does not ever collect or share users'
personal information, and welcomes anonymous users without impacting their search experience, unlike Google, which challenges anonymous users to prove they are human and makes their search less seamless. In addition, Brave is
contributing back to the Tor network by running Tor relays. We are proud to be adding bandwidth to the Tor network, and intend to add more bandwidth in the coming months. |
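For the curious, the network-level effect Brave describes can be illustrated outside the browser. The sketch below is not Brave's code; it assumes a local Tor daemon listening on its default SOCKS port 9050 and the Python requests library installed with SOCKS support (pip install requests[socks]). It fetches the caller's apparent IP address directly and then through Tor, so the destination sees a Tor exit node's address rather than the user's own:

    import requests  # requires requests[socks] for SOCKS proxy support

    # check.torproject.org reports the address it sees and whether it is a Tor exit node.
    CHECK_URL = "https://check.torproject.org/api/ip"

    def apparent_ip(proxies=None):
        # The destination site reports the address the request appears to come from.
        return requests.get(CHECK_URL, proxies=proxies, timeout=30).json()

    if __name__ == "__main__":
        print("Direct:", apparent_ip())
        # socks5h (rather than socks5) resolves DNS inside Tor as well, so the ISP
        # sees neither the destination's IP address nor the DNS lookup.
        tor = {"http": "socks5h://127.0.0.1:9050", "https": "socks5h://127.0.0.1:9050"}
        print("Via Tor:", apparent_ip(proxies=tor))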
|
Instagram apologises for its censorship of a gay kiss
|
|
|
| 2nd July 2018
|
|
| Thanks to Nick See article from
indy100.com |
Instagram has apologised for censoring a photo of two men kissing for violating community guidelines. The photo - featuring Jordan Bowen and Luca Lucifer - was taken down from photographer Stella Asia Consonni's Instagram. A spokesperson for
the image sharing site regurgitated the usual apology for shoddy censorship, saying: This post was removed in error and we are sorry. It has since been reinstated.
The photo was published in i-D
magazine as part of a series of photos by Stella exploring modern relationships, which she plans to exhibit later this year. It only reappeared after prominent people in fashion and LGBT+ rights raised awareness about the removal of the photo.
|
|
|
|
|
| 1st July 2018
|
|
|
Russia blocking Telegram showed us how fragile the internet is. By Mehdi Daoudi See article
from thenextweb.com |
|
|