Meta's punishment regime for 'wrong speak' offences likened to re-education camps
11th December 2024
See article from reclaimthenet.org
ReclaimTheNet has likened Meta's regime for punishing transgressions against its rules to the re-education camps run by repressive regimes. The group writes: Like law enforcement in some repressive virtual regimes, Meta is introducing the concept of re-education of 'citizens' (users) as an alternative to eventually sending them to 'jail' (imposing account restrictions) for first offences.

The same community standards now apply across Meta's platforms: Facebook, Instagram, Messenger and Threads. The new rule means that instead of collecting a strike for a first policy violation, users who go through an educational program can have it deleted. There is also probation: those who receive no strike for a year after that will again be eligible to participate in the remove-your-warning course. Meta first introduced the option for creators last summer and is now expanding it to everyone.

In announcing the policy change, the tech giant refers to research showing that most of those violating its rules for the first time may not be aware they are doing so. This is where the short educational program comes in, as a way to reduce the risk of receiving that first strike, and Meta says the program is designed to help better explain its policies.

The re-education takes the form of an online training course allowing errant users to own up to their crime, explain why they did it, and no doubt promise to do better next time.
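As a rough illustration of the strike workflow described above, here is a minimal sketch in Python. It models only what the report states (a first strike can be wiped by completing the course, and a strike-free year restores eligibility); the class, names and timing details are assumptions, not Meta's actual implementation.

```python
from datetime import datetime, timedelta

PROBATION = timedelta(days=365)  # assumed: one strike-free year restores eligibility

class Account:
    """Illustrative model of Meta's reported strike-forgiveness rules (hypothetical)."""

    def __init__(self):
        self.strikes = 0
        self.course_available = True   # a first offence can be wiped by the course
        self.last_strike_at = None

    def record_violation(self, now: datetime) -> bool:
        """Register a violation; return True if the educational course is offered."""
        # Probation: a full year without strikes restores course eligibility.
        if self.last_strike_at is not None and now - self.last_strike_at >= PROBATION:
            self.course_available = True
        self.strikes += 1
        self.last_strike_at = now
        return self.course_available

    def complete_course(self):
        """Completing the course deletes the strike and uses up the offer."""
        if self.course_available and self.strikes > 0:
            self.strikes -= 1
            self.course_available = False
```

In this toy model an account that completes the course returns to zero strikes, and regains the option only after a strike-free year.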
Facebook ordered to allow images showing trans female breasts whilst still banning natural female breasts
18th January 2023
See article from nypost.com
See article from oversightboard.com
Facebook and Instagram will allow transgender and non-binary users to flash their bare breasts, but women who were born female are still not allowed a similar freedom, according to Meta's advisory board.

Meta's Oversight Board, an independent body which Meta CEO Mark Zuckerberg has called the company's Supreme Court for content moderation and censorship policies, ordered Facebook and Instagram to lift a ban on images of topless women for anyone who identifies as transgender or non-binary, meaning they view themselves as neither male nor female. The same image of female-presenting nipples would be prohibited if posted by a cisgender woman but permitted if posted by an individual self-identifying as non-binary, the board noted in its decision.

The board cited a recent decision to overturn a ban on two Instagram posts by a couple who describe themselves as transgender and non-binary and who posed topless with their nipples covered, only to have the posts flagged by other users. Meta banned the image, but the couple won their appeal and the photo was restored online.

Meta's human reviewers will now be tasked with trying to determine the sex of breasts.
Meta calls for public comments about the police-requested takedown of drill music on Facebook
18th August 2022
See article from oversightboard.com
In January 2022, an Instagram account that describes itself as publicising British music posted a video with a short caption on its public account. The video is a 21-second clip of the music video for a UK drill music track called Secrets Not Safe by the rapper Chinx (OS). The caption tags Chinx (OS) as well as an affiliated artist and highlights that the track had just been released. The video clip shows part of the second verse of the song and fades to a black screen with the text OUT NOW. Drill is a subgenre of rap music popular in the UK, with a large number of drill artists active in London.

Shortly after the video was posted, Meta received a request from UK law enforcement to remove content that included this track. Meta says that it was informed by law enforcement that elements of the track could contribute to a risk of offline harm. The company was also aware that the track referenced a past shooting in a way that raised concerns that it may provoke further violence. As a result, the post was escalated for internal review by experts at Meta.

Meta's experts determined that the content violated the Violence and Incitement policy, specifically the prohibition on coded statements where the method of violence or harm is not clearly articulated, but the threat is veiled or implicit. The Community Standards list signs that content may include veiled or implicit threats. These include content that is shared in a retaliatory context, and content with references to historical or fictional incidents of violence. Further information and/or context is always required to identify and remove a number of the categories listed at the end of the Violence and Incitement policy, including veiled threats. Meta has explained to the Board that enforcement under these categories is not subject to at-scale review (the standard review process conducted by outsourced moderators) and can only be carried out by Meta's internal teams. Meta has further explained that the Facebook Community Standards apply to Instagram.

When Meta took the content down, two days after it was posted, it also removed copies of the video posted by other accounts. Based on the information it received from UK law enforcement, Meta's Public Policy team believed that the track might increase the risk of retaliatory gang violence, and that it acted as a threatening call to action that could contribute to a risk of imminent violence or physical harm.

Hours after the content was removed, the account owner appealed. A human reviewer assessed the content to be non-violating and restored it to Instagram. Eight days later, following a second request from UK law enforcement, Meta removed the content again and took down other instances of the video found on its platforms. The account in this case has fewer than 1,000 followers, the majority of whom live in the UK. The user received notifications from Meta both times their content was removed, but was not informed that the removals were initiated following a request from UK law enforcement.

In referring this matter to the Board, Meta states that this case is particularly difficult as it involves balancing the competing interests of artistic expression and public safety. Meta explains that, while the company places a high value on artistic expression, it is difficult to determine when that expression becomes a credible threat. Meta asks the Board to assess whether, in this case and more generally, the safety risks associated with the potential instigation of gang violence outweigh the value of artistic expression in drill music.

In its decisions, the Board can issue policy recommendations to Meta. While recommendations are not binding, Meta must respond to them within 60 days. As such, the Board welcomes public comments proposing recommendations that are relevant to this case. Respond via article from oversightboard.com
Facebook shamed into reversing censorship of the poster for Pedro Almodóvar's Parallel Mothers
11th August 2021
See article from bbc.co.uk
Madres paralelas is a 2022 Spanish drama by Pedro Almodóvar
Starring Penélope Cruz, Rossy de Palma and Aitana Sánchez-Gijón

Two women, Janis and Ana, coincide in a hospital room where they are going to give birth. Both are single and became pregnant by accident. Janis, middle-aged, doesn't regret it and is exultant. The other, Ana, an adolescent, is scared, repentant and traumatized. Janis tries to encourage her while they move like sleepwalkers along the hospital corridors. The few words they exchange in these hours create a very close link between the two, which chance develops and complicates, changing their lives in a decisive way.

Instagram's owner Facebook has reversed a ban on a poster for Spanish director Pedro Almodóvar's new film, Madres Paralelas (Parallel Mothers), showing a nipple producing a drop of milk. The company was shamed by bad publicity after its naff 'AI' censorship algorithm proved a failure at distinguishing art from porn. Facebook said it had made an exception to its usual ban on nudity because of the clear artistic context.

The promotional image was made to look like an eyeball producing a teardrop. Javier Jaen, who designed the advert, had said the platform should be ashamed of its censorship.
Facebook announces new censorship measures for Facebook groups
17th March 2021
See article from about.fb.com by Tom Alison, VP of Engineering
It's important to us that people can discover and engage safely with Facebook groups, so that they can connect with others around shared interests and life experiences. That's why we've taken action to curb the spread of harmful content, like hate speech and misinformation, and made it harder for certain groups to operate or be discovered, whether they're Public or Private. When a group repeatedly breaks our rules, we take it down entirely.

We're sharing the latest in our ongoing work to keep Groups safe, which includes our thinking on how to keep recommendations safe as well as reducing privileges for those who break our rules. These changes will roll out globally over the coming months.

We are adding more nuance to our enforcement. When a group starts to violate our rules, we will now start showing them lower in recommendations, which means it's less likely that people will discover them. This is similar to our approach in News Feed, where we show lower quality posts further down, so fewer people see them.

We believe that groups and members that violate our rules should have reduced privileges and reach, with restrictions getting more severe as they accrue more violations, until we remove them completely. And when necessary in cases of severe harm, we will outright remove groups and people without these steps in between.

We'll start to let people know when they're about to join a group that has Community Standards violations, so they can make a more informed decision before joining. We'll limit invite notifications for these groups, so people are less likely to join. For existing members, we'll reduce the distribution of that group's content so that it's shown lower in News Feed. We think these measures as a whole, along with demoting groups in recommendations, will make it harder to discover and engage with groups that break our rules.

We will also start requiring admins and moderators to temporarily approve all posts when that group has a substantial number of members who have violated our policies or were part of other groups that were removed for breaking our rules. This means that content won't be shown to the wider group until an admin or moderator reviews and approves it. If an admin or moderator repeatedly approves content that breaks our rules, we'll take the entire group down.

When someone has repeated violations in groups, we will block them from being able to post or comment for a period of time in any group. They also won't be able to invite others to any groups, and won't be able to create new groups. These measures are intended to help slow down the reach of those looking to use our platform for harmful purposes, and build on existing restrictions we've put in place over the last year.
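As a concrete illustration of the escalation ladder Facebook describes, here is a minimal sketch of graduated group penalties in Python. It is an assumed model for illustration only: the thresholds, the simple violation counter and the tier actions are hypothetical, not Facebook's published mechanics.

```python
from dataclasses import dataclass

# Hypothetical thresholds; Facebook does not publish its actual values.
DEMOTE_AT = 1      # start demoting the group in recommendations
RESTRICT_AT = 3    # warn on join, limit invites, reduce feed distribution
REMOVE_AT = 5      # take the group down entirely

@dataclass
class Group:
    violations: int = 0

    def penalties(self) -> list[str]:
        """Penalties accumulate as violations accrue, per the escalation described above."""
        actions = []
        if self.violations >= DEMOTE_AT:
            actions.append("demote in recommendations")
        if self.violations >= RESTRICT_AT:
            actions += ["warn users on join", "limit invite notifications",
                        "reduce distribution in News Feed"]
        if self.violations >= REMOVE_AT:
            actions.append("remove group")
        return actions
```

In Facebook's actual system the severity of a violation also matters (severe harm skips straight to removal), which a real implementation would need to model separately.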
Facebook announces that it will censor content to protect itself against being prosecuted under local laws
1st September 2020
See article from windowscentral.com
Facebook has announced changes to its Terms of Service that will allow it to remove content or restrict access if the company thinks it is necessary to avoid legal or regulatory impact. Facebook users have started receiving notifications regarding a change to its Terms of Service which state:

Effective October 1, 2020, section 3.2 of our Terms of Service will be updated to include: We also can remove or restrict access to your content, services or information if we determine that doing so is reasonably necessary to avoid or mitigate adverse legal or regulatory impacts to Facebook.

It is not clear whether this action is a response to particular laws, or whether it references the creeping censorship being implemented worldwide. Of course it could also be a pretext for continuing to impose biased political censorship in the run-up to the US presidential election.
18th April 2020

Why there's a danger in allowing a single entity to influence what our society deems decent. By Katie Wheeler. See article from theguardian.com
Facebook seems to be suggesting that if governments are so keen on censoring people's speech then perhaps they should take over the censorship job entirely...
18th February 2020

See article from about.fb.com by Monika Bickert, Vice President, Facebook Content Policy
But the approach doesn't seem to have gone down well. See Brussels pushes back on Zuckerberg pitch from politico.eu
See also Facebook's proposed regulations are just things it's already doing from theverge.com
Today, we're publishing a white paper setting out some questions that regulation of online content might address. Charting a Way Forward: Online Content Regulation builds on recent developments on this topic, including legislative efforts and scholarship. The paper poses four questions which go to the heart of the debate about regulating content online:

How can content regulation best achieve the goal of reducing harmful speech while preserving free expression? By requiring systems such as user-friendly channels for reporting content or external oversight of policies or enforcement decisions, and by requiring procedures such as periodic public reporting of enforcement data, regulation could provide governments and individuals the information they need to accurately judge social media companies' efforts.

How can regulations enhance the accountability of internet platforms? Regulators could consider certain requirements for companies, such as publishing their content standards, consulting with stakeholders when making significant changes to standards, or creating a channel for users to appeal a company's content removal or non-removal decision.

Should regulation require internet companies to meet certain performance targets? Companies could be incentivized to meet specific targets, such as keeping the prevalence of violating content below some agreed threshold.

Should regulation define which "harmful content" should be prohibited on the internet? Laws restricting speech are generally implemented by law enforcement officials and the courts. Internet content moderation is fundamentally different. Governments should create rules to address this complexity, rules that recognize user preferences and the variation among internet services, can be enforced at scale, and allow for flexibility across language, trends and context.
Guidelines for Future Regulation

The development of regulatory solutions should involve not just lawmakers, private companies and civil society, but also those who use online platforms. The following principles are based on lessons we've learned from our work in combating harmful content and our discussions with others.

Incentives. Ensuring accountability in companies' content moderation systems and procedures will be the best way to create the incentives for companies to responsibly balance values like safety, privacy, and freedom of expression.

The global nature of the internet. Any national regulatory approach to addressing harmful content should respect the global scale of the internet and the value of cross-border communications. It should aim to increase interoperability among regulators and regulations.

Freedom of expression. In addition to complying with Article 19 of the ICCPR (and related guidance), regulators should consider the impacts of their decisions on freedom of expression.

Technology. Regulators should develop an understanding of the capabilities and limitations of technology in content moderation and allow internet companies the flexibility to innovate. An approach that works for one particular platform or type of content may be less effective (or even counterproductive) when applied elsewhere.

Proportionality and necessity. Regulators should take into account the severity and prevalence of the harmful content in question, its status in law, and the efforts already underway to address the content.

If designed well, new frameworks for regulating harmful content can contribute to the internet's continued success by articulating clear ways for government, companies, and civil society to share responsibilities and work together. Designed poorly, these efforts risk unintended consequences that might make people less safe online, stifle expression and slow innovation.

We hope today's white paper helps to stimulate further conversation around the regulation of content online. It builds on a paper we published last September on data portability, and we plan on publishing similar papers on elections and privacy in the coming months.
Mark Zuckerberg pushes back against too much censorship on Facebook
2nd February 2020
See article from dailymail.co.uk
Mark Zuckerberg has declared that Facebook is going to stand up for free expression, in spite of the fact that it will piss off a lot of people. He made the claim during a fiery appearance at the Silicon Slopes Tech Summit in Utah on Friday.

Zuckerberg told the audience that Facebook had previously tried to resist moves that would be branded as too offensive, but says he now believes he is being asked to partake in excessive censorship:

Increasingly we're getting called to censor a lot of different kinds of content that makes me really uncomfortable, he claimed. We're going to take down the content that's really harmful, but the line needs to be held at some point. It kind of feels like the list of things that you're not allowed to say socially keeps on growing, and I'm not really okay with that.

This is the new approach [free expression], and I think it's going to piss off a lot of people. But frankly the old approach was pissing off a lot of people too, so let's try something different.