Once Upon a Time ... in Hollywood is a 2019 USA / UK comedy drama by Quentin Tarantino.
Starring Leonardo DiCaprio, Brad Pitt and Margot Robbie.
Quentin Tarantino's Once Upon a Time... in Hollywood visits 1969 Los Angeles, where everything is changing, as TV star Rick Dalton (Leonardo DiCaprio) and his longtime stunt double Cliff Booth (Brad Pitt) make their way around an industry they
hardly recognize anymore. The ninth film from the writer-director features a large ensemble cast and multiple storylines in a tribute to the final moments of Hollywood's golden age.
Director Quentin Tarantino's new film, Once Upon a Time in Hollywood, has been passed by the Indian film censors at the Central Board of Film Certification (CBFC) with an adults-only A certificate, along with a couple of curious cuts.
The censor board left multiple instances of the word 'fuck' intact but has beeped out every usage of the word 'ass', according to Pinkvilla, which has access to the censor certificate.
Instagram is adding an option for users to report posts they claim are false. The photo-sharing website is responding to increasing pressure to censor material that governments do not like.
Results then rated as false are removed from search tools, such as Instagram's explore tab and hashtag search results.
The new report facility on Instagram is being initially rolled out only in the US.
Stephanie Otway, a Facebook company spokeswoman, said:
This is an initial step as we work towards a more comprehensive approach to tackling misinformation.
Posting false information is not banned on any of Facebook's suite of social media services, but the company is taking steps to limit the reach of inaccurate information and warn users about disputed claims.
Facebook has introduced a new censorship tool known as Group Quality to evaluate private groups and scrutinize them for any 'problematic content'.
Facebook has long faced heat from the media over claims that its private groups feature harbours extremists and spreads 'fake news'. In response, the company published an article on newsroom.fb.com introducing a new feature known as Group Quality:
Being in a private group doesn't mean that your actions should go unchecked. We have a responsibility to keep Facebook safe, which is why our Community Standards apply across Facebook, including in private groups. To enforce these policies, we
use a combination of people and technology -- content reviewers and proactive detection. Over the last few years, we've invested heavily in both, including hiring more than 30,000 people across our safety and security teams.
Within this, a specialized team has been working on the Safe Communities Initiative: an effort that started two years ago with the goal of protecting people using Facebook Groups from harm. Made up of product managers, engineers, machine
learning experts and content reviewers, this team works to anticipate the potential ways people can do harm in groups and develops solutions to minimize and prevent it. As the head of Facebook Groups, I want to explain how we're making private
groups safer by focusing on three key areas: proactive detection, tools for admins, and transparency and control for members.
On the plus side, Facebook has updated the settings defining group access and visibility, which are much clearer than previous incarnations.
Critics say that Facebook's move will not curb misinformation and fake news; on the contrary, it may push such content further underground, making it harder for censors to filter or remove it from the site.
Thailand's Digital Economy and Society Minister Puttipong Punnakanta plans to set up a Fake News Center.
The digital minister confirmed that he is looking to create the Fake News Center to:
get rid of fabricated, misleading content on social media which might jeopardize the people's safety and property and violate the Computer Crime Act and other laws.
For instance, content on social media about natural disasters and health care might be fabricated or exaggerated only to confuse and scare viewers. They might be deceived by fraudulent investment scams or lured to buy illegal, hazardous health products.
He said a dozen government agencies will be asked to cooperate with the Fake News Center such as the police, the military, the Consumer Protection Board, the Food and Drugs Administration and the Public Relations Department, among others.
The UK Government recently outlined its plans for appointing Ofcom as the internet censor overseeing new EU censorship rules introduced under the updated Audiovisual Media Services (AVMS) directive.
In Ireland, the Broadcasting Authority of Ireland (BAI) has pitched for similar powers, with the government currently considering the BAI's position alongside the appointment of an online safety commissioner.
The BAI believes that it could become an EU-wide regulator for online video, because Google and Facebook's European operations are headquartered in Dublin.
Earlier this year, the government announced plans that would see a future online safety commissioner given the power to issue administrative fines, meaning the commissioner would not have to go through a court.
Requirements for Video Sharing Platforms in the Audiovisual Media Services Directive
The Audiovisual Media Services Directive (AVMSD) is the regulatory framework governing EU-wide coordination of national legislation on all audiovisual media. The government launched a consultation on implementing the newly introduced and amended
provisions in AVMSD on 30 May, which is available online.
One of the main changes to AVMSD is the extension of scope to cover video-sharing platforms (VSPs) for the first time. This extension in scope will likely capture audiovisual content on social media sites, video-sharing sites, pornography sites
and live streaming services. These services are required to take appropriate measures to: protect children from harmful content; protect the general public from illegal content and content that incites violence or hatred, and; respect certain
obligations around commercial communications.
The original consultation, published on 30 May, outlined the government's intention to implement these requirements through the regulatory framework proposed in the
Online Harms White Paper . However, we also indicated the possibility of an interim approach ahead of the regulatory framework coming into force to ensure we meet the transposition deadline of 20 September 2020. We now plan to take forward
this interim approach and have written to stakeholders on 23 July to set out our plans and consult on them.
This open letter and consultation sent to stakeholders, therefore, aims to gather views on our interim approach for implementing requirements pertaining to VSPs through appointing Ofcom as the national regulatory authority. In particular, it asks:
how to transpose the definition of VSPs into UK law, and which platforms are in the UK's jurisdiction;
the regulatory framework and the regulator's relationship with industry;
the appropriate measures that should be taken by platforms to protect users;
the information gathering powers Ofcom should have to oversee VSPs;
the appropriate enforcement and sanctions regime for Ofcom;
what form the required out of court redress mechanism should take; and
how to fund the extension of Ofcom's regulatory activities from industry.
Update: The press get wind of the EU censorship nightmare of the new AVMS directive
The government is considering giving powers to fine video-sharing apps and websites to the UK's media censor Ofcom.
The proposal would see Ofcom able to impose multi-million pound fines if it judges the platforms have failed to prevent youngsters seeing pornography, violence and other harmful material.
Ofcom are already the designated internet censor enforcing the current AVMS censorship rules. These apply to all UK-based Video on Demand platforms. The current rules are generally less stringent than Ofcom's rules for TV, so have not particularly impacted the likes of the TV catch-up services (apart from Ofcom extracting significant censorship fees for handling minimal complaints about hate speech and product placement).
The notable exception is the regulation of hardcore porn on Video on Demand platforms. Ofcom originally delegated the censorship task to ATVOD but that was a total mess and Ofcom grabbed the censorship roles back. It too became a bit of a non-job
as ATVOD's unviable age verification rules had effectively driven the UK adult porn trade into either bankruptcy or foreign ownership. In fact, this driving of the porn business offshore gave rise to the BBFC age verification regime, which is trying to find ways to censor foreign porn websites.
Anyway, the EU has now created an updated AVMS directive that extends the scope of content to be censored, as well as the range of websites and apps caught up in the law. Whereas before it caught TV-like video on demand websites, it now catches nearly all websites featuring significant video content. And of course the list of harms has expanded into the same space as all the other laws clamouring to censor the internet.
In addition, all qualifying video websites will have to register with Ofcom and have to cough up a significant fee for Ofcom's censorship 'services'.
The EU Directive is required to be implemented in EU members' laws by 20th September 2020. And it seems that the UK wants the censors to be up and running from the 19th September 2020.
Even then, it would only be an interim step until an even more powerful internet censor gets implemented under the UK's Online Harms plans.
The Telegraph reported that the proposal was quietly agreed before Parliament's summer break and would give Ofcom the power to fine tech firms up to 5% of their revenues and/or block them in the UK if they failed to comply with its rulings. Ofcom
has said that it is ready to adopt the powers.
A government spokeswoman told the BBC:
We also support plans to go further and legislate for a wider set of protections, including a duty of care for online companies towards their users.
But TechUK - the industry group that represents the sector - said it hoped that ministers would take a balanced and proportionate approach to the issue. Its deputy chief executive Antony Walker said:
Key to achieving this will be clear and precise definitions across the board, and a proportionate sanctions and compliance regime.
The Internet Association added that it hoped any intervention would be proportionate. Daniel Dyball, the association's executive director, said:
Any new regulation should be targeted at specific harms, and be technically possible to implement in practice - taking into account that resources available vary between companies.
The BBC rather hopefully noted that if the UK leaves the European Union without a deal, we will not be bound to transpose the AVMSD into UK law.
We are deeply concerned that a new form of encryption being introduced to our web browsers will have terrible consequences for child protection.
The new system -- known as DNS over HTTPS -- would have the effect of undermining the work of the Internet Watch Foundation (IWF); yet Mozilla, provider of the Firefox browser, has decided to introduce it, and others may follow.
The amount of abusive content online is huge and not declining. Last year, the IWF removed more than 105,000 web pages showing the sexual abuse of children. While the UK has an excellent record in eliminating the hosting of such illegal content,
there is still a significant demand from UK internet users: the National Crime Agency estimates there are 144,000 internet users on some of the worst dark-web child sexual abuse sites.
To fight this, the IWF provides a URL block list that allows internet service providers to block internet users from accessing known child sexual abuse content until it is taken down by the host country. The deployment of the new encryption
system in its proposed form could render this service obsolete, exposing millions of people to the worst imagery of children being sexually abused, and the victims of said abuse to countless sets of eyes.
Advances in protecting users' data must not come at the expense of children. We urge the secretary of state for digital, culture, media and sport to address this issue in the government's upcoming legislation on online harms.
Sarah Champion MP;
Tom Watson MP;
Carolyn Harris MP;
Tom Brake MP;
Stephen Timms MP;
Ian Lucas MP;
Tim Loughton MP;
Giles Watling MP;
Madeleine Moon MP;
Vicky Ford MP;
Rosie Cooper MP;
Lord Harris of Haringey
The IWF service is continually rolled out as an argument against DoH, but I am starting to wonder if it is still relevant. Given the universal revulsion against child sex abuse, I'd suspect that little of it would now be located on the open internet. Surely it would be hiding away in hard-to-find places like the dark web, which are unlikely to be stumbled on by normal people. And of course those using the dark web aren't using ISP DNS servers anyway.
In reality the point of using DoH is to evade government attempts to block legal porn sites. If they weren't intending to block legal sites then surely people would be happy to use the ISP DNS including the IWF service.
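The mechanics behind this point can be sketched in a few lines. This is a minimal, self-contained simulation (the domains, addresses and blocklist are all hypothetical, and no real DNS traffic is involved): an ISP resolver can only apply an IWF-style blocklist if the lookup actually passes through it, whereas a DoH-style query goes, encrypted, straight to a third-party resolver that the ISP cannot see or filter.

```python
# Sketch of why DoH bypasses ISP-level DNS filtering: the blocklist is
# applied by the ISP's resolver, so routing queries elsewhere defeats it.

BLOCK_LIST = {"blocked.example"}           # hypothetical IWF-style URL list
ZONE = {
    "blocked.example": "203.0.113.5",      # pretend authoritative records
    "news.example": "203.0.113.9",
}

def isp_resolve(domain):
    """ISP resolver: consults the blocklist before answering."""
    if domain in BLOCK_LIST:
        return None                        # NXDOMAIN / block page instead
    return ZONE.get(domain)

def doh_resolve(domain):
    """DoH-style lookup: the query travels inside HTTPS to a third-party
    resolver, so the ISP never sees the domain and cannot apply its list."""
    return ZONE.get(domain)

print(isp_resolve("blocked.example"))      # filtered by the ISP
print(doh_resolve("blocked.example"))      # filter silently bypassed
```

The same logic applies whether the list contains abuse imagery or legal porn sites, which is exactly the tension the letter above glosses over.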
Russia is continuing its pressure on Google to censor political opinion that the government does not like. Media censor Roskomnadzor has sent a letter to Google insisting that it stop promoting banned mass events on YouTube.
It particularly didn't like that YouTube channels were using push notifications and other measures to spread information about protests, such as the recent demonstrations objecting to Moscow banning some opposition politicians from running in
upcoming elections. Some users are allegedly receiving these alerts even if they're not subscribed to the channels.
The Russian agency said it would treat continued promotion as interference in the sovereign affairs of the country and consider Google a hostile influence ostensibly bent on obstructing elections.
Political protests have continued to grow in Russia (the most recent had about 50,000 participants), and they've turned increasingly from the Moscow-specific complaints to general dissatisfaction with President Putin's anti-democratic policies.
A draft executive order from the White House could put the Federal Communications Commission (FCC) in charge of social media censorship. The FCC has a disgraceful record on the subject of internet freedom. It recently showed total disregard for the rights of internet users when siding with big business over net neutrality.
Donald Trump's draft order, a summary of which was obtained by CNN, calls for the FCC to develop new regulations clarifying how and when the law protects social media websites when they decide to remove or suppress content on their platforms.
Although still in its early stages and subject to change, the Trump administration's draft order also calls for the Federal Trade Commission to take those new policies into account when it investigates or files lawsuits against misbehaving companies.
US media giants have clearly been showing political bias when censoring conservative views but appointing the FCC as the internet censor does not bode well.
According to the summary seen by CNN, the draft executive order currently carries the title Protecting Americans from Online Censorship. It claims that the White House has received more than 15,000 anecdotal complaints of social media platforms censoring American political discourse.
The FTC will also be asked to open a public complaint docket, according to the summary, and to work with the FCC to develop a report investigating how tech companies curate their platforms and whether they do so in neutral ways. Companies whose
monthly user base accounts for one-eighth of the U.S. population or more could find themselves facing scrutiny, the summary said, including but not limited to Facebook, Google, Instagram, Twitter, Pinterest and Snapchat.
The Trump administration's proposal seeks to significantly narrow the protections afforded to companies under Section 230 of the Communications Decency Act, a part of the Telecommunications Act of 1996. Under the current law, internet companies
are not liable for most of the content that their users or other third parties post on their platforms. This law underpins any company wanting to allow users to post their own comments without prior censorship. If these protections were removed, all user posts would need to be vetted before publication.
New Zealand ISP Spark says it will block the controversial website 8chan if it resumes service, because it continues to host disturbing material.
8chan is currently down after its web host pulled out in response to 8chan being used by US mass shooters. However, Spark said if 8chan finds another host provider, it would block access. Spark said:
We feel it is the right thing to do given the website's repeated transgressions and continual willingness to distribute disturbing material.
The 8chan internet forum was used by the accused Christchurch mosque gunman to distribute his manifesto and live stream the attack.
However Spark seemed to realise that it would now become a magnet for every easily offended social justice warrior with a pet grievance, and said that the government should step in:
Appropriate agencies of government should put in place a robust policy framework to address the important issues surrounding such material being distributed online and freely available.
Technology commentator Paul Brislen responded:
It's very, very nearly the edge of what's acceptable for your internet provider to be doing in this kind of situation.
I'm as uncomfortable as they [Spark] are about it. They do really need to find a new way to manage hate-speech and extremist content on the internet.
It's much like the Telecom of old deciding which phone calls you can and can't make.
The risk was someone would now turn around and say okay you blocked 8Chan because of hate speech, now I want you to block this other website because it allows people to access something else. It might be hate speech, it might be pornography, it
might be something that speaks out against a religious group or ethnicity.
You start down a certain track of Spark or any of the other ISPs being forced to decide what is and isn't acceptable for the NZ public and that's not their job at all. They really shouldn't be doing that.
Update: New Zealand's chief censor David Shanks chips in
I applaud the announcement by Spark that they are prepared to block access to 8chan if and when it re-emerges on the internet.
This move is both brave and meaningful. Brave, because a decision not to provide users with access to a site is quite a different thing from a decision not to provide a site with the server capacity and services it needs (which is the choice that
Cloudflare recently made). Meaningful, because everything I have seen tells me that 8chan is the white supremacist killer's platform of choice, with at least three major attacks announced on it within a few months. There is nothing indicating
that upon re-emergence 8chan will be a changed, safer platform. Indeed, it may be even more toxic.
We appreciate that our domestic ISPs have obligations to provide their customers with access to the internet according to their individual terms and conditions. Within those constraints, as the experience after the March 15 attacks shows, our ISPs can act and do the right thing to block platforms that are linked to terrorist atrocities and pose a direct risk of harm to New Zealanders.
I know that ISPs don't take these decisions lightly, and that they do not want to be in the business of making judgments around the content of sites. But these are extraordinary circumstances, and platforms that promote terrorist atrocities
should not be tolerated on the internet, or anywhere else. Spark is making the right call here.
This is a unique set of circumstances, and relying on ISPs to make these calls is not a solution for the mid or long term. I agree with calls for a transparent, robust and sensible regulatory response. Discussions have already started on what
this might look like here in NZ. Ultimately this is a global, internet problem. That makes it complex of course, but I believe that online extremism can be beaten if governments, industry and the public work together.
A few days ago Donald Trump responded to more mass shootings by calling on social networks to build tools for identifying potential mass murderers before they act. And across the government, there appears to be growing consensus that social
networks should become partners in surveillance with the government.
So quite a timely moment for the Wall Street Journal to publish an article about FBI plans for mass snooping on social media:
The FBI is soliciting proposals from outside vendors for a contract to pull vast quantities of public data from Facebook, Twitter and other social media to proactively identify and reactively monitor threats to the United States and its interests.
The request was posted last month, weeks before a series of mass murders shook the country and led President Trump to call for social-media platforms to do more to detect potential shooters before they act.
The deadline for bids is Aug. 27.
As described in the solicitation, it appears that the service would violate Facebook's ban against the use of its data for surveillance purposes, according to the company's user agreements and people familiar with how it seeks to enforce them.
The Verge comments on a privacy paradox:
But so far, as the Journal story illustrates, the government's approach has been incoherent. On one hand, it fines Facebook $5 billion for violating users' privacy; on the other, it outlines a plan to potentially store all Americans' public
posts in a database for monitoring purposes.
But of course it is not a paradox, many if not most people believe that they're entitled to privacy whilst all the 'bad' people in the world aren't.
Commercial interests are also very keen on profiling people from their social media postings. There's probably a long list of advertisers who would love a list of rich people who go to casinos and stay at expensive hotels.
Well, as Business Insider has noted, one company, Hyp3r, has been scraping all public postings on Instagram to provide exactly that information:
A combination of configuration errors and lax oversight by Instagram allowed one of the social network's vetted advertising partners to misappropriate vast amounts of public user data and create detailed records of users' physical whereabouts,
personal bios, and photos that were intended to vanish after 24 hours.
The profiles, which were scraped and stitched together by the San Francisco-based marketing firm Hyp3r, were a clear violation of Instagram's rules. But it all occurred under Instagram's nose for the past year by a firm that Instagram had
blessed as one of its preferred Facebook Marketing Partners.
Hyp3r is a marketing company that tracks social-media posts tagged with real-world locations. It then lets its customers directly interact with those posts via its tools and uses that data to target the social-media users with relevant
advertisements. Someone who visits a hotel and posts a selfie there might later be targeted with pitches from one of the hotel's competitors, for example.
The total volume of Instagram data Hyp3r has obtained is not clear, though the firm has publicly said it has a unique dataset of hundreds of millions of the highest value consumers in the world, and sources said more than 90% of its data came
from Instagram. It ingests in excess of 1 million Instagram posts a month, sources said.
The White House is circulating drafts of a proposed executive order that would address the anti-conservative bias of social media companies. This appears to be the follow up to President Donald Trump pledging to explore all regulatory and
legislative solutions on the issue.
The contents of the order remain undisclosed but it seems that many different ideas are still in the mix. A White House official is reported to have said:
If the internet is going to be presented as this egalitarian platform and most of Twitter is liberal cesspools of venom, then at least the president wants some fairness in the system. But look, we also think that social media plays a vital role.
They have a vital role and an increasing responsibility to the culture that has helped make them so profitable and so prominent.
The social media companies have denied the allegations of bias, but nevertheless the large majority of users censored by the companies are indeed on the right.
Instagram is hiding content hosted by the pole dancing community's most commonly used hashtags.
Pole dancers, performers and entrepreneurs say that the censorship is threatening their livelihood. Sweden-based instructor and performer Anna-Maija Nyman told Yahoo Lifestyle:
The censorship is affecting our whole community because it makes it harder to share and connect, I felt that our community is in danger and under attack.
The controversy for pole dancers began on July 19, when hashtags such as #poledancing, #poledancer and #polesportorg were noticeably wiped off all content previously aggregated by pole dancers around the world.
To alert fellow dancers, California-based pole star, Elizabeth Blanchard, wrote in a post that day that the banning of 19 hashtags appeared to be an effort to shadowban the community. She wrote:
There seems to have been a massive 'cleanse' on instagram and pole dancers have been deemed dirty and inappropriate... or as Instagram puts it, we don't 'meet Instagram's community guidelines'. There has been lots of talk about shadowbans lately
but this purge of hashtags is hard to mistake as being targeted towards pole dancers.
Shadowbanning is a method used by social networks to quietly silence an account by curtailing its engagement without blocking its ability to post new content. Shadowbanned users are not told that they have been affected; they can continue to post messages, add new followers and comment on or reply to other posts. But their [content] doesn't appear in feeds, their replies may be suppressed and they may not show up in searches.
Australia-based instructor, performer and business owner Michelle Shimmy points out that the current restrictions facing pole dancers on the social media platform are part of a much larger issue having to do with Instagram's policy changes to
manage 'problematic' content, which she suggests are inherently sexist.
Apparently Instagram has apologised for its censorship, but nobody is expecting a change in the policy.
Google has changed its algorithm for the search term 'lesbian' to show informative results instead of pornographic content.
Previously, the first results shown when googling the word were porn videos.
The algorithm has been changed seemingly as a result of a campaign led by the Twitter account @SEO_lesbienne and French news site Numerama. They noted that only the word lesbian linked to sexualised pages, whereas searching for gay or trans
displayed Wikipedia pages, articles and specialised blogs.
Now if you want to find some lesbian porn, you have to type 'lesbian porn'.
8chan is a forum website that has become a home for the far right and those otherwise discontented by modern society for various reasons. There's nothing special about it that cannot be easily replicated elsewhere.
And as Buzzfeed notes:
Pull the plug, it will appear somewhere else, in whatever locale will host it. Because there's nothing particularly special about 8chan, there are no content algorithms, hosting technology immaterial. The only thing radicalizing 8chan users are
other 8chan users.
However, in the past six months it has been used to distribute racist and white nationalist manifestos prior to mass shootings.
It has now been refused service by Cloudflare, which offers security services, most notably defending against denial of service attacks. Cloudflare announced in a blogpost that the company would be terminating 8chan as a client.
This represents a reversal of Cloudflare's position from less than 24 hours earlier, when the co-founder and chief executive, Matthew Prince, defended his company's relationship with 8chan as a moral obligation in an
extensive interview with the Guardian. Prince explained the change:
The rationale is simple: they have proven themselves to be lawless and that lawlessness has caused multiple tragic deaths. Even if 8chan may not have violated the letter of the law in refusing to moderate their hate-filled community, they have
created an environment that revels in violating its spirit.
While removing 8chan from our network takes heat off of us, it does nothing to address why hateful sites fester online. It does nothing to address why mass shootings occur. It does nothing to address why portions of the population feel so
disenchanted they turn to hate. In taking this action we've solved our own problem, but we haven't solved the Internet's.
You'd have thought the authorities would be advised to keep an eye on public forums so as to be aware of any grievances that are widely shared: maybe to try and resolve them, and maybe just to be aware of what people are thinking. For example, if David Cameron had been better aware of what many people thought about immigration, he might have realised that holding the EU referendum was a disastrously stupid idea.
Internet forum 8chan has gone dark after web services company Voxility banned the site -- and also banned 8chan's new host Epik, which had been leasing web space from it. Epik began working with 8chan today after web services giant Cloudflare cut
off service, following the latest of at least three mass shootings linked to 8chan. But Stanford researcher Alex Stamos noted that Epik seemed to lease servers from Voxility, and when Voxility discovered the content, it cut ties with Epik almost immediately.