X updates censorship rules to specifically state that adult content is fine for self-declared adult users

6th June 2024

See article from help.x.com
See article from xbiz.com
X, the platform formerly known as Twitter, updated its adult content rules to clarify how adult content may be posted and viewed. The new policy states that users may share consensually produced and distributed adult nudity or sexual behavior, provided it's properly labeled and not prominently displayed. The policy also establishes a specific Adult Content warning, instead of the generic Sensitive Media label. The new rules from the X website read:

You may share consensually produced and distributed adult nudity or sexual behavior, provided it's properly labeled and not prominently displayed.

We believe that users should be able to create, distribute, and consume material related to sexual themes as long as it is consensually produced and distributed. Sexual expression, whether visual or written, can be a legitimate form of artistic expression. We believe in the autonomy of adults to engage with and create content that reflects their own beliefs, desires, and experiences, including those related to sexuality. We balance this freedom by restricting exposure to Adult Content for children or adult users who choose not to see it. We also prohibit content promoting exploitation, nonconsent, objectification, sexualization or harm to minors, and obscene behaviors. We also do not allow sharing Adult Content in highly visible places such as profile photos or banners.

How we define Adult Content

Adult Content is any consensually produced and distributed material depicting adult nudity or sexual behavior that is pornographic or intended to cause sexual arousal. This also applies to AI-generated, photographic or animated content such as cartoons, hentai, or anime. Examples include depictions of:

- full or partial nudity, including close-ups of genitals, buttocks, or breasts;
- explicit or implied sexual behavior or simulated acts such as sexual intercourse and other sexual acts.

How to mark your content

If you regularly post adult content on X, we ask that you please adjust your media settings. Doing so places all your images and videos behind a content warning that needs to be acknowledged before your media can be viewed. You can also add a one-time content warning on individual posts. If you continue to fail to mark your posts, we will adjust your account settings for you.

Users under 18 or viewers who do not include a birth date on their profile cannot click to view marked content. Learn more about age-restricted content here.
New Twitter CEO outlines how the platform will censor wrongthink

10th August 2023

See article from reclaimthenet.org
Linda Yaccarino, CEO of X, previously known as Twitter, has been speaking on TV about how the company will be censoring tweets. During a CNBC interview, Yaccarino discussed the demarcation of duties between herself and Elon Musk; however, it is her stance on the website's content policies that has raised eyebrows. In clarifying X's approach to moderation, Yaccarino introduced the concept of freedom of speech, not freedom of reach, a policy where users, when posting narratives that are not in line with approved speech, are labeled, possibly demonetized for that content, and have their visibility reduced on the platform. She remarked: If it is lawful but it's awful, it's extraordinarily difficult for you to see it, insinuating that even legally permissible content might be obscured if deemed undesirable by the company.

The decisions and comments made by Yaccarino might seem like a strict stance against divisive or hurtful rhetoric, but critics may see them as an alarming move away from the ethos of open dialogue and free speech.
Twitter is set to enable paywalled videos, maybe for porn

2nd November 2022

See article from gizmodo.com
Elon Musk is looking for ways to make Twitter profitable after paying $44 billion for the site. The Washington Post reports that Twitter is working on a new feature dubbed Paywalled Video, which would allow users to charge money for access to videos.
Gizmodo adds that: It's for porn. People on Twitter are going to charge for porn.
When a creator composes a tweet with a video, the creator can enable the paywall once a video has been added
to the tweet. The prices are preset, with creators allowed to charge $1, $2, $5, or $10 for access to the video, with Twitter taking a cut of the payment using Stripe.
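For illustration only, here is a minimal sketch of how a preset-price video purchase with a platform cut might be wired through Stripe Connect, assuming a destination-charge model. The price tiers come from the report above; the 20% platform fee, function names and account identifiers are invented for the example and are not Twitter's actual integration.

```python
# Hypothetical sketch of a preset-price paywalled video purchase charged
# through Stripe Connect, with the platform keeping a cut. The fee rate and
# names below are assumptions for illustration, not Twitter's implementation.
import stripe

stripe.api_key = "sk_test_..."  # platform's secret key (placeholder)

PRESET_PRICES_USD_CENTS = [100, 200, 500, 1000]  # $1, $2, $5, $10
PLATFORM_FEE_RATE = 0.20  # assumed revenue share, not a confirmed figure

def charge_for_video(price_cents: int, creator_account_id: str) -> stripe.PaymentIntent:
    """Create a destination charge: the buyer pays the preset price, the
    platform retains a fee, and the remainder is transferred to the creator's
    connected Stripe account."""
    if price_cents not in PRESET_PRICES_USD_CENTS:
        raise ValueError("price must be one of the preset tiers")
    fee_cents = int(price_cents * PLATFORM_FEE_RATE)
    return stripe.PaymentIntent.create(
        amount=price_cents,
        currency="usd",
        application_fee_amount=fee_cents,
        transfer_data={"destination": creator_account_id},
    )
```

In this pattern the platform collects the buyer's payment, keeps application_fee_amount, and passes the rest straight through to the creator's connected account.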
Twitter updates its censorship rules about hateful content

8th March 2020

See article from help.twitter.com
Twitter updated its rules about hateful content on 5th March 2020. The changes are in the area of dehumanizing remarks, which are remarks that treat others as less than human, on the basis of age, disability, or disease. The rules now read:
Hateful conduct policy

Hateful conduct: You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease. We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories.

Hateful imagery and display names: You may not use hateful images or symbols in your profile image or profile header. You also may not use your username, display name, or profile bio to engage in abusive behavior, such as targeted harassment or expressing hate towards a person, group, or protected category.

Violent threats

We prohibit content that makes violent threats against an identifiable target. Violent threats are declarative statements of intent to inflict injuries that would result in serious and lasting bodily harm, where an individual could die or be significantly injured, e.g., "I will kill you".

Wishing, hoping or calling for serious harm on a person or group of people

We prohibit content that wishes, hopes, promotes, or expresses a desire for death, serious and lasting bodily harm, or serious disease against an entire protected category and/or individuals who may be members of that category. This includes, but is not limited to:

- Hoping that someone dies as a result of a serious disease, e.g., "I hope you get cancer and die."
- Wishing for someone to fall victim to a serious accident, e.g., "I wish that you would get run over by a car next time you run your mouth."
- Saying that a group of individuals deserve serious physical injury, e.g., "If this group of protesters don't shut up, they deserve to be shot."

References to mass murder, violent events, or specific means of violence where protected groups have been the primary targets or victims

We prohibit targeting individuals with content that references forms of violence or violent events where a protected category was the primary target or victims, where the intent is to harass. This includes, but is not limited to sending someone:

Inciting fear about a protected category

We prohibit targeting individuals with content intended to incite fear or spread fearful stereotypes about a protected category, including asserting that members of a protected category are more likely to take part in dangerous or illegal activities, e.g., "all [religious group] are terrorists".

Repeated and/or non-consensual slurs, epithets, racist and sexist tropes, or other content that degrades someone

We prohibit targeting individuals with repeated slurs, tropes or other content that intends to dehumanize, degrade or reinforce negative or harmful stereotypes about a protected category. This includes targeted misgendering or deadnaming of transgender individuals. We also prohibit the dehumanization of a group of people based on their religion, age, disability, or serious disease.

Hateful imagery

We consider hateful imagery to be logos, symbols, or images whose purpose is to promote hostility and malice against others based on their race, religion, disability, sexual orientation, gender identity or ethnicity/national origin. Some examples of hateful imagery include, but are not limited to:

- symbols historically associated with hate groups, e.g., the Nazi swastika;
- images depicting others as less than human, or altered to include hateful symbols, e.g., altering images of individuals to include animalistic features; or
- images altered to include hateful symbols or references to a mass murder that targeted a protected category, e.g., manipulating images of individuals to include yellow Star of David badges, in reference to the Holocaust.
Twitter considers letting authors of tweets restrict who is allowed to reply

13th January 2020

See article from theverge.com
Speaking at CES in Las Vegas, Twitter's director of product management, Suzanne Xie, unveiled some new changes that are coming to the platform this year, focusing specifically on conversations. Xie says Twitter is adding a new setting for conversation
participants right on the compose screen. It has four options: Global, Group, Panel, and Statement. Global lets anybody reply, Group is for people you follow and mention, Panel is people you specifically mention in the tweet, and Statement simply allows
you to post a tweet and receive no replies. Xie says that Twitter is in the process of doing research on the feature. Twitter is considering the ability to quote tweets as an alternative to replying.

19th October 2019

Twitter details exactly how world leaders are partially exempted from the website's usual biased political censorship rules
See article from blog.twitter.com
Twitter users in the US and Japan can now hide responses to their tweets for everyone

23rd September 2019

See article from blog.twitter.com
Twitter explains in a blog post:

Earlier this year we started testing a way to give people more control over the conversations they start. Today, we're expanding this test to Japan and the United States!

With this test, we want to understand how conversations on Twitter change if the person who starts a conversation can hide replies. Based on our research and surveys we conducted, we saw a lot of positive trends during our initial test in Canada, including:

- People mostly hide replies that they think are irrelevant, abusive or unintelligible.
- Those who used the tool thought it was a helpful way to control what they saw, similar to when keywords are muted.
- We saw that people were more likely to reconsider their interactions when their tweet was hidden: 27% of people who had their tweets hidden said they would reconsider how they interact with others in the future.
- People were concerned hiding someone's reply could be misunderstood and potentially lead to confusion or frustration. As a result, now if you tap to hide a Tweet, we'll check in with you to see if you want to also block that account.

We're interested to see if these trends continue, and if new ones emerge, as we expand our test to Japan and the US. People in these markets use Twitter in many unique ways, and we're excited to see how they might use this new tool.
The Soska sisters' Rabid

13th July 2019

See article from oneangrygamer.net
Rabid is a 2019 Canada Sci-Fi horror by Jen Soska and Sylvia Soska. Starring Laura Vandervoort, Greg Bryk and Stephen Huszar.
An aspiring model suffers a disfiguring traffic
accident and undergoes a radical untested stem-cell treatment. The experimental transformation is a miraculous success, leaving her more beautiful than before. She finds her confidence and sexual appetite is also strangely increased resulting in several
torrid encounters. But she unknowingly sets off a spiraling contagion, and within 24 hours her lovers become rabid, violent spreaders of death and disease. As the illness mutates, it spreads through society at an accelerated rate causing an
ever-increasing number of people to rampage through the city in a violent and gruesome killing spree. Twitter has banned the account of the Soska Sisters after they posted promotional images for their forthcoming horror Rabid. The
image appears on the cover of the Rue Morgue magazine. The directors commented on their Facebook page: Bad girls. We'll be back. But man, those @mastersfx1 prosthetics in #Rabid must be medically accurate to get
us suspended for advertising our World Premiere with a FrightFest banner. I like how that makeup could be on the cover of @ruemorguemag & @fangoria, but shut down on Twitter. Wild world we are living in.
Twitter will note that tweets from anyone of lesser standing would get censored for being politically incorrect

29th June 2019

See article from blog.twitter.com
Twitter has announced a new punishment for Donald Trump's tweets that it considers politically incorrect. Twitter will mark such tweets as 'abusive' and try to hide them away from being found in searches etc. However they will not be taken down. Twitter explains how its new censorship method will work:

In the past, we've allowed certain Tweets that violated our rules to remain on Twitter because they were in the public's interest, but it wasn't clear when and how we made those
determinations. To fix that, we're introducing a new notice that will provide additional clarity in these situations, and sharing more on when and why we'll use it. Serving the public conversation includes providing
the ability for anyone to talk about what matters to them; this can be especially important when engaging with government officials and political figures. By nature of their positions these leaders have outsized influence and sometimes say things that
could be considered controversial or invite debate and discussion. A critical function of our service is providing a place where people can openly and publicly respond to their leaders and hold them accountable. With
this in mind, there are certain cases where it may be in the public's interest to have access to certain Tweets, even if they would otherwise be in violation of our rules. On the rare occasions when this happens, we'll place a notice -- a screen you have
to click or tap through before you see the Tweet -- to provide additional context and clarity. We'll also take steps to make sure the Tweet is not algorithmically elevated on our service. Who does this apply to?
We will only consider applying this notice on Tweets from accounts that meet the following criteria. The account must:
- Be or represent a government official, be running for public office, or be considered for a government position (i.e., next in line, awaiting confirmation, named successor to an appointed position);
- Have more than 100,000 followers; and
- Be verified.
That said, there are cases, such as direct threats of violence or calls to commit violence against an individual, that are unlikely to be considered in the public interest. What happens to the
Tweet that gets this notice placed on it? When a Tweet has this notice placed on it, it will feature less prominently on Twitter, and not appear in:
- Safe search
- Timeline when switched to Top Tweets
- Live events pages
- Recommended Tweet push notifications
- Notifications tab
- Explore
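As a reading aid, here is a minimal sketch of the eligibility test and visibility restrictions quoted above, expressed in Python. The Account structure, the surface names and the function names are assumptions for illustration; this is not Twitter's implementation.

```python
# Hypothetical sketch of the public-interest notice rules quoted above.
from dataclasses import dataclass

@dataclass
class Account:
    is_government_official_or_candidate: bool
    follower_count: int
    is_verified: bool

# Surfaces the policy says a noticed Tweet is excluded from (names assumed).
RESTRICTED_SURFACES = {
    "safe_search",
    "top_tweets_timeline",
    "live_events",
    "recommended_push_notifications",
    "notifications_tab",
    "explore",
}

def qualifies_for_public_interest_notice(account: Account) -> bool:
    """All three criteria from the policy must hold."""
    return (
        account.is_government_official_or_candidate
        and account.follower_count > 100_000
        and account.is_verified
    )

def visible_on_surface(tweet_has_notice: bool, surface: str) -> bool:
    """A noticed Tweet stays up, but is dropped from the listed surfaces."""
    if tweet_has_notice and surface in RESTRICTED_SURFACES:
        return False
    return True
```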
Twitter statement misleadingly suggests it will be cracking down on politicians' lies

26th April 2019

See article from blog.twitter.com
Twitter writes in a blog post: Strengthening our approach to deliberate attempts to mislead voters Voting is a fundamental human right and the public conversation occurring on Twitter is never more
important than during elections. Any attempts to undermine the process of registering to vote or engaging in the electoral process is contrary to our company's core values. Today, we are further expanding our enforcement
capabilities in this area by creating a dedicated reporting feature within the product to allow users to more easily report this content to us. This is in addition to our existing proactive approach to tackling malicious automation and other forms of platform manipulation
on the service. We will start with 2019 Lok Sabha in India and the EU elections and then roll out to other elections globally throughout the rest of the year. What types of content are in violation?
You may not use Twitter's services for the purpose of manipulating or interfering in elections. This includes but is not limited to:
- Misleading information about how to vote or register to vote (for example, that you can vote by Tweet, text message, email, or phone call);
- Misleading information about requirements for voting, including identification requirements; and
- Misleading statements or information about the official, announced date or time of an election.
Responding to the large amount of aggressive tweeting, founder Jack Dorsey says that the number of likes will soon be downgraded

17th April 2019

See article from bbc.com
Twitter co-founder Jack Dorsey has said again there is much work to do to improve Twitter and cut down on the amount of abuse and misinformation on the platform. He said the firm might demote likes and follows, adding that in hindsight he would not have
designed the platform to highlight these. Speaking at the TED technology conference he said that Twitter currently incentivised people to post outrage. Instead he said it should invite people to unite around topics and communities. Rather than focus
on following individual accounts, users could be encouraged to follow hashtags, trends and communities. Doing so would require a systematic change that represented a huge shift for Twitter. One of the choices we made was to make the number
of people that follow you big and bold. If I started Twitter now I would not emphasise follows and I would not create likes. We have to look at how we display follows and likes, he added.
Twitter outlaws misgendering or deadnaming of trans people

25th November 2018

See article from thegayuk.com
See censorship rules from help.twitter.com
See response from The Britisher from youtu.be
Deadnaming and misgendering could now get you a suspension from Twitter as it looks to shore up its safeguarding policy for people in the protected transgender category. Twitter's recently updated censorship policy now reads:

Repeated and/or non-consensual slurs, epithets, racist and sexist tropes, or other content that degrades someone

We prohibit targeting individuals with repeated slurs, tropes or other content that intends to dehumanize, degrade or reinforce negative or harmful stereotypes about a protected category. This includes targeted misgendering or deadnaming of transgender individuals.

According to the Oxford English Dictionary, misgendering means: Refer to (someone, especially a transgender person) using a word, especially a pronoun or form of address, that does not correctly reflect the gender with which they identify.
According to thegayuk.com:
Deadnaming is when a person refers to someone by a previous name, it could be done with malice or by accident. It mostly affects transgender people who have changed their name during their transition.
Twitter consults its users over proposed rules to censor insults against any conceivable group of people except white men

26th September 2018

See article from bbc.com
See article from blog.twitter.com
Twitter is consulting its users about new censorship rules banning 'dehumanising speech', in which people are compared to animals or objects. It said language that made people seem less than human had repercussions. The social network already has a hateful-conduct policy but it is applied selectively, allowing some types of insulting language to remain online. For example, countless tweets describing middle-aged white men as gammon can be found on the platform. At present it bans insults based on a person's race, ethnicity, nationality, sexual orientation, sex, gender, religious beliefs, age, disability or medical condition, but there is an unwritten secondary rule which means that the prohibition excludes groups not favoured under the conventions of political correctness. Twitter said it intended to prohibit dehumanising language towards people in an identifiable group because some researchers claim it could lead to real-world violence. Asked whether calling men gammon would
count as dehumanising speech, the company said it would first seek the views of its members. Twitter's announcement reads in part: For the last three months, we have been developing a new policy to address dehumanizing
language on Twitter. Language that makes someone less than human can have repercussions off the service, including normalizing serious violence. Some of this content falls within our hateful conduct policy (which prohibits the promotion of violence
against or direct attacks or threats against other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease), but there are still Tweets many people
consider to be abusive, even when they do not break our rules. Better addressing this gap is part of our work to serve a healthy public conversation. With this change, we want to expand our hateful conduct policy to include
content that dehumanizes others based on their membership in an identifiable group, even when the material does not include a direct target. Many scholars have examined the relationship between dehumanization and violence. For example, Susan Benesch has
described dehumanizing language as a hallmark of dangerous speech, because it can make violence seem acceptable, and Herbert Kelman has posited that dehumanization can reduce the strength of restraining forces against violence.
Twitter's critics are now using the hashtag #verifiedhate to highlight examples of what they believe to be bias in what the platform judges to be unacceptable. The gammon insult gained popularity after a collage of contributors to the BBC's Question Time programme - each middle-aged, white and male - was shared along with the phrase Great Wall of Gammon in 2017.

The scope of identifiable groups covered by the new rules will be decided after a public consultation that will run until 9 October. PS: before filling in the consultation form, note that it was broken for me and didn't accept my submission.
For the record, Melon Farmer tried to submit the comment: This is yet another policy that restricts free speech. As always, the vagueness of the rules will allow Twitter, or its moderators, to arbitrarily apply its own
morality anyway. But not to worry, the richness of language will always enable people to dream up new ways to insult others.
US internet companies line up to censor Alex Jones' Info Wars

22nd September 2018

7th September 2018. See article from theguardian.com
The radio host and colourful conspiracy theorist Alex Jones has been permanently censored by Twitter. One month after it distinguished itself from the rest of the tech industry by declining to bar the rightwing shock jock from its platform, Twitter fell in line with the other major social networks in banning Jones. Twitter justified the censorship saying: We took this action based on new reports of Tweets and videos posted yesterday that violate our abusive
behavior policy, in addition to the accounts' past violations. We will continue to evaluate reports we receive regarding other accounts potentially associated with @realalexjones or @infowars and will take action if content that violates our rules is
reported or if other accounts are utilized in an attempt to circumvent their ban.
Update: Apple censors the Infowars app

8th September 2018. See article from theverge.com

Alex Jones' Infowars app has been permanently banned from Apple's App Store. Apple confirmed the removal to BuzzFeed by citing the App Store guidelines, which forbid content that is offensive, insensitive, upsetting, intended to disgust, or in exceptionally poor taste.

Update: Paying the price of using PayPal

22nd September 2018. See article from uk.pcmag.com

PayPal is the latest tech company to ban Infowars. PayPal told PCMag:
We undertook an extensive review of the Infowars sites, and found instances that promoted hate or discriminatory intolerance against certain communities and religions, which run counter to our core value of inclusion.
InfoWars said PayPal gave it 10 days to find an alternate payment provider before terminating the service. PayPal didn't cite the specific instances of hate speech, but Infowars says the content involved criticism of Islam and opposition to transgenderism being taught to children in schools.
6th September 2018

Twitter boss hauled up in front of US Congress and admits unfair censorship
See article from bbc.co.uk
Twitter steps up the censorship, no doubt conservatives will bear the brunt of it

16th May 2018

See article from blog.twitter.com
Twitter has outlined further censorship measures in a blog post:

In March, we introduced our new approach to improve the health of the public conversation on Twitter. One important issue we've been working to address is what some might refer to as "trolls." Some troll-like behavior is fun, good and humorous. What we're talking about today are troll-like behaviors that distort and detract from the public conversation on Twitter, particularly in communal areas like conversations and search. Some of these accounts and Tweets violate our policies, and, in those cases, we take action on them. Others don't but are behaving in ways that distort the conversation.

To put this in context, less than 1% of accounts make up the majority of accounts reported for abuse, but a lot of what's reported does not violate our rules. While still a small overall number, these accounts have a disproportionately large -- and negative -- impact on people's experience on Twitter. The challenge for us has been: how can we proactively address these disruptive behaviors that do not violate our policies but negatively impact the health of the conversation?

A New Approach

Today, we use policies, human review processes, and machine learning to help us determine how Tweets are organized and presented in communal places like conversations and search. Now, we're tackling issues of behaviors that distort and detract from the public conversation in those areas by integrating new behavioral signals into how Tweets are presented. By using new tools to address this conduct from a behavioral perspective, we're able to improve the health of the conversation, and everyone's experience on Twitter, without waiting for people who use Twitter to report potential issues to us.

There are many new signals we're taking in, most of which are not visible externally. Just a few examples include if an account has not confirmed their email address, if the same person signs up for multiple accounts simultaneously, accounts that repeatedly Tweet and mention accounts that don't follow them, or behavior that might indicate a coordinated attack. We're also looking at how accounts are connected to those that violate our rules and how they interact with each other.

These signals will now be considered in how we organize and present content in communal areas like conversation and search. Because this content doesn't violate our policies, it will remain on Twitter, and will be available if you click on "Show more replies" or choose to see everything in your search setting. The result is that people contributing to the healthy conversation will be more visible in conversations and search.

Results

In our early testing in markets around the world, we've already seen this new approach have a positive impact, resulting in a 4% drop in abuse reports from search and 8% fewer abuse reports from conversations. That means fewer people are seeing Tweets that disrupt their experience on Twitter.

Our work is far from done. This is only one part of our work to improve the health of the conversation and to make everyone's Twitter experience better. This technology and our team will learn over time and will make mistakes. There will be false positives and things that we miss; our goal is to learn fast and make our processes and tools smarter. We'll continue to be open and honest about the mistakes we make and the progress we are making. We're encouraged by the results we've seen so far, but also recognize that this is just one step on a much longer journey to improve the overall health of our service and your experience on it.
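To make the mechanism concrete, here is a minimal sketch of how behavioural signals like those Twitter lists above might be combined into a score that folds replies behind "Show more replies" without removing them. The signal names, weights and threshold are illustrative assumptions, not figures Twitter has published.

```python
# Hypothetical additive scoring over the behavioural signals described above.
from dataclasses import dataclass

@dataclass
class AccountSignals:
    email_confirmed: bool
    simultaneous_signups: int        # accounts created together from one source
    mentions_of_non_followers: int   # repeated @-mentions of accounts that don't follow back
    linked_to_violating_accounts: bool

def disruption_score(s: AccountSignals) -> float:
    """Higher score = more likely to be folded away in conversations and search."""
    score = 0.0
    if not s.email_confirmed:
        score += 1.0
    score += 0.5 * max(0, s.simultaneous_signups - 1)
    score += 0.1 * s.mentions_of_non_followers
    if s.linked_to_violating_accounts:
        score += 2.0
    return score

def rank_replies(replies: list[tuple[str, AccountSignals]], threshold: float = 2.0):
    """Content stays on the platform; low-scoring replies are shown first,
    high-scoring ones sit behind a 'Show more replies' control."""
    visible = [t for t, s in replies if disruption_score(s) < threshold]
    folded = [t for t, s in replies if disruption_score(s) >= threshold]
    return visible, folded
```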
Hidden camera interviews feature Twitter engineers speaking of biased political censorship and shadow banning where tweets are quietly binned without telling the poster

13th January 2018

See article from dailymail.co.uk
See article from projectveritas.com
See video from YouTube
A new undercover video from a group of conservative investigative journalists appears to show Twitter staff and former employees talking about how they censor content they disagree with. James O'Keefe, Project Veritas founder, posted a video
showing an undercover reporter speaking to Abhinov Vadrevu, a former Twitter software engineer, at a San Francisco restaurant on January 3. There, he discussed a technique referred to as shadow banning, which means that users' content is quietly
blocked without them ever knowing about it. Their tweets would still appear to their followers, but it wouldn't appear in search results or anywhere else on Twitter. So posters just think that no one is engaging with their content, when in reality,
no one is seeing it. Olinda Hassan, a policy manager for Twitter's Trust and Safety team, was filmed talking about development of a system for down ranking shitty people. Another Twitter engineer claimed that staff already have tools to
censor pro-Trump or conservative content. One Twitter engineer appeared to suggest that the social network was trying to ban, like, a way of talking. Anyone found to be aggressive or negative will just vanish. Every single conversation is going to
be rated by a machine and the machine is going to say whether or not it's a positive thing or a negative thing, Twitter software engineer Steven Pierre was filmed on December 8 saying as he discussed the development of an automated censure system.
In the latest undercover Project Veritas video investigation, eight current and former Twitter employees are on camera explaining steps the social media giant is taking to censor political content that they don't like.
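As a minimal sketch of the shadow-banning behaviour described in the video, assuming a simple per-surface visibility check: the author's followers still see the tweet while search and other discovery surfaces quietly drop it. The function and surface names are invented for illustration and are not drawn from Twitter's code.

```python
# Hypothetical per-surface check matching the "shadow ban" description above.
def tweet_visible_to(viewer_follows_author: bool, surface: str,
                     author_shadow_banned: bool) -> bool:
    if not author_shadow_banned:
        return True
    # Followers' home timelines still show the tweet...
    if surface == "follower_timeline" and viewer_follows_author:
        return True
    # ...but search, trends and profile discovery quietly drop it, so the
    # poster only notices that nobody seems to be engaging.
    return False
```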
Twitter redefines its 'verified' tick qualifications to exclude the politically incorrect

25th November 2017

17th November 2017. See article from theverge.com
Twitter announced yesterday that it would begin removing verification badges for famous tweeters that it does not approve of. Not for what is tweeted, but for offline behaviour Twitter does not like. The key phrase in Twitter's policy update is this
one: Reasons for removal may reflect behaviors on and off Twitter. Before yesterday, the rules explicitly applied only to behavior on Twitter. From now on, holders of verified badges will be held accountable for their behavior in the real world as well.
Twitter has promised further information about the new censorship policy in due course. Many questions remain unanswered. What will the company's review consist of? How will it examine users' offline behavior? Will it simply respond to reports, or
will it actively look for violations? Will it handle the work with its existing team, or will it expand its trust and safety team? Twitter has immediately rescinded blue tick verification from accounts belonging to far-right activists, including
Jason Kessler, a US white supremacist, and Tommy Robinson, founder of the English Defence League.
Offsite Comment: Twitter has turned its back on free speech

The platform plans to exercise ideological control over its users.

25th November 2017. See article from spiked-online.com by Andrew Doyle
Twitter bosses consider expanding the scope of material to be censored

22nd October 2017

18th October 2017. See article from wired.com
There was plenty of strong language flying around on Twitter in response to the Harvey Weinstein scandal. Twitter got a bit confused about who was harassing who, and ended up suspending Weinstein critic Rose McGowan for harassment. Twitter ended up being
boycotted over its wrong call, and so Twitter bosses have been banging their heads together to do something. Wired has got hold of an email outlining an expansion of content liable to Twitter censorship and also more severe sanctions for errant tweeters. Twitter's head of safety policy wrote of new measures to be rolled out in the coming weeks:

Non-consensual nudity

Our definition of "non-consensual nudity" is expanding to more broadly include content like upskirt imagery, "creep shots," and hidden camera content. Given that people appearing in this content often do not know the material exists, we will not require a report from a target in order to remove it. While we recognize there's an entire genre of pornography dedicated to this type of content, it's nearly impossible for us to distinguish when this content may/may not have been produced and distributed consensually. We would rather err on the side of protecting victims and removing this type of content when we become aware of it.

Unwanted sexual advances

Pornographic content is generally permitted on Twitter, and it's challenging to know whether or not sexually charged conversations and/or the exchange of sexual media may be wanted. To help infer whether or not a conversation is consensual, we currently rely on and take enforcement action only if/when we receive a report from a participant in the conversation. We are going to update the Twitter Rules to make it clear that this type of behavior is unacceptable. We will continue taking enforcement action when we receive a report from someone directly involved in the conversation.

Hate symbols and imagery (new)

We are still defining the exact scope of what will be covered by this policy. At a high level, hateful imagery, hate symbols, etc will now be considered sensitive media (similar to how we handle and enforce adult content and graphic violence). More details to come.

Violent groups (new)

We are still defining the exact scope of what will be covered by this policy. At a high level, we will take enforcement action against organizations that use/have historically used violence as a means to advance their cause. More details to come here as well.

Tweets that glorify violence (new)

We already take enforcement action against direct violent threats ("I'm going to kill you"), vague violent threats ("Someone should kill you") and wishes/hopes of serious physical harm, death, or disease ("I hope someone kills you"). Moving forward, we will also take action against content that glorifies ("Praise be to for shooting up. He's a hero!") and/or condones ("Murdering makes sense. That way they won't be a drain on social services"). More details to come.

Offsite Article: Changes to the way that 'sensitive' content is defined and blocked from Twitter search

22nd October 2017. See article from avn.com
Twitter censors the words 'pot' and 'jackass' from sensitive users

28th March 2017

See article from usnews.com
If you're looking to follow news and advocacy about an anticipated Vermont legislature vote this week on legalizing marijuana, a search for the latest tweets that use the combined terms Vermont and marijuana will for many Twitter users
yield zero results. Same goes for searches for tweets using the terms pot, weed or cannabis. The latest results for jackass and jerk , words generally printed without censorship by news outlets, also yield a
blank page with a message claiming: Nothing came up for that search, which is a little weird. Maybe check what you searched for and try again. The omissions are examples of a new censorship system introduced by Twitter, with users required to opt out of a filter to see uncensored results. Top results for restricted terms still appear, but results for the most recent posts and for the photos, videos and news content tabs do not.
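A minimal sketch of the opt-out filter behaviour described above: for accounts that have not switched the filter off, restricted terms return nothing in the recent/photos/videos/news tabs while Top results still appear. The term list and tab names are assumptions based on the report, not Twitter's actual configuration.

```python
# Hypothetical default-on search filter matching the behaviour reported above.
SENSITIVE_TERMS = {"pot", "weed", "cannabis", "marijuana", "jackass", "jerk"}
FILTERED_TABS = {"latest", "photos", "videos", "news"}  # "top" is not filtered

def search_results(query: str, tab: str, results: list[str],
                   filter_enabled: bool = True) -> list[str]:
    """Return results for one search tab, applying the opt-out sensitive filter."""
    query_terms = set(query.lower().split())
    if filter_enabled and tab in FILTERED_TABS and query_terms & SENSITIVE_TERMS:
        return []  # "Nothing came up for that search, which is a little weird."
    return results
```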
Twitter greys out profiles of users with 'sensitive' content

11th March 2017

See article from dailymail.co.uk
Twitter is continuing its campaign to add controls and warnings to tweets. It now presents a warning when users click on a profile that may include sensitive content. The warning greys out the profile's tweets, bio and profile picture,
but gives users the option to view the profile if they wish. Twitter used to only mark individual tweets with a sensitivity warning, but has now expanded this to censor whole profiles unless users agree to view them. The warning message
given with the greyed out profile says: Caution: This profile may include sensitive content. You're seeing this warning because they tweet sensitive images or language. Do you still want to view it?
Twitter did not publicly announce the new feature, and tweeters whose profiles are greyed out are not informed by Twitter.
Twitter claims an unlikely sounding capability to detect abusive tweets and suspend accounts without waiting for complaints to be flagged

19th February 2017

See article from forbes.com
Twitter has introduced a new censorship system with the unlikely sounding capability to detect abusive tweets and suspend accounts without waiting for complaints to be flagged. Transgressions result in the senders receiving half-day suspensions. The
company has refused to provide details on specifically how the new system works, but using a combination of behavioral and keyword indicators, the filter flags posts it deems to be violations of Twitter's acceptable speech policy and issues users
suspensions of half a day during which they cannot post new globally accessible tweets and their existing tweets are visible only to followers. From the platform that once called itself the free speech wing of the free speech party, these
new tools mark an incredible turn of events. The anti-censorship ethic seems to have been lost in a failed attempt to sell the company after prospective buyers were unhappy with the lack of censorship control over the platform. Inevitably, Twitter
has refused to provide even outline ideas of the indicators it is using, especially when it comes to the particular linguistic cues it is concerned with. While offering too much detail might give the upper hand to those who would try to work around the
new system, it is important for the broader community to have at least some understanding of the kinds of language flagged by Twitter's new tool so that they can try and stay within the rules. It is also unclear why Twitter chose not to permit
users to contest what they believe to be a wrongful suspension. Given that the feature is brand-new and bound to encounter plenty of unforeseen contexts where it could yield a wrong result, it is surprising that Twitter chose not to provide a recovery
mechanism where it could catch these before they become news. And the first example of censorship was quick to follow. Many outlets this morning picked up on a frightening instance of the Twitter algorithm's new power to police not only the
language we use but the thoughts we express. In this case a user allegedly tweeted a response to a news report about comments made by Senator John McCain and argued that it was his belief that the senator was a traitor who had committed formal
treason against the nation. Twitter did not respond to a request for more information about what occurred in this case and if this was indeed the tweet that caused the user to be suspended, but did not dispute that the user had been suspended or that his
use of the word traitor had factored heavily into that suspension. See article from forbes.com

Update: Clues

19th February 2017. Thanks to Joe for a couple of clues about the censorship rules: I've fallen foul of this. Seems to be using trigger words in tweets to people you either don't follow or don't follow you.
Twitter updates its censorship rules concerning abusive tweets

22nd April 2015

See article from blog.twitter.com
Twitter has announced new censorship rules related to tweets deemed to be abusive. Twitter explains in a blog post: First, we are making two policy changes, one related to prohibited content, and one about how we enforce
certain policy violations. We are updating our violent threats policy so that the prohibition is not limited to direct, specific threats of violence against others but now extends to threats of violence against others or promot[ing] violence
against others. Our previous policy was unduly narrow and limited our ability to act on certain kinds of threatening behavior. The updated language better describes the range of prohibited content and our intention to act when users step over the
line into abuse. On the enforcement side, in addition to other actions we already take in response to abuse violations (such as requiring users to delete content or verify their phone number), we're introducing an additional
enforcement option that gives our support team the ability to lock abusive accounts for specific periods of time. This option gives us leverage in a variety of contexts, particularly where multiple users begin harassing a particular person or group of
people. Second, we have begun to test a product feature to help us identify suspected abusive Tweets and limit their reach. This feature takes into account a wide range of signals and context that frequently correlates with abuse
including the age of the account itself, and the similarity of a Tweet to other content that our safety team has in the past independently determined to be abusive. It will not affect your ability to see content that you've explicitly sought out, such as
Tweets from accounts you follow, but instead is designed to help us limit the potential harm of abusive content. This feature does not take into account whether the content posted or followed by a user is controversial or unpopular.
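A minimal sketch of the kind of reach-limiting check Twitter describes, using the two signals it names (account age and similarity to Tweets previously judged abusive) and exempting content the viewer explicitly sought out by following the account. The weights, threshold and crude similarity measure are illustrative assumptions, not Twitter's model.

```python
# Hypothetical reach-limiting check built from the signals named above.
from datetime import datetime, timezone

def similarity_to_known_abuse(text: str, known_abusive: list[str]) -> float:
    """Crude stand-in for a learned similarity model: word-overlap (Jaccard)."""
    words = set(text.lower().split())
    best = 0.0
    for sample in known_abusive:
        sample_words = set(sample.lower().split())
        if words and sample_words:
            best = max(best, len(words & sample_words) / len(words | sample_words))
    return best

def limit_reach(text: str, account_created: datetime, known_abusive: list[str],
                viewer_follows_author: bool) -> bool:
    """Return True if the tweet's distribution should be limited for this viewer.
    `account_created` is assumed to be a timezone-aware UTC datetime."""
    account_age_days = (datetime.now(timezone.utc) - account_created).days
    score = similarity_to_known_abuse(text, known_abusive)
    if account_age_days < 7:
        score += 0.3  # very new accounts are treated as a risk signal
    suspected_abusive = score > 0.6
    # Content you explicitly sought out (accounts you follow) is never hidden.
    return suspected_abusive and not viewer_follows_author
```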
Twitter admits to the first country-specific account block

28th February 2013

See article from advocacy.globalvoicesonline.org
In February 2012, Twitter introduced a policy that enables individual tweets and accounts to be blocked on a country-by-country basis. If a government submits a court order to Twitter, asking for a tweet or account to be blocked, Twitter will comply. But
the blocking will only occur in the country in question; to users throughout the rest of the world, the affected content will look no different. This past October, Twitter enacted this policy for the first time to block tweets from the account of
the German extreme right-wing group, Besseres Hannover. The German government has formally banned and seized the assets of the group, and some of its members have been charged with inciting racial hatred and creating a criminal organization. The
group announced that it would challenge the blocking in court, but as things stand, Twitter's move to block the group's tweets was in accordance with local German law. Twitter's general counsel, Alex MacGillivray, announced the issue on Twitter
and linked to a copy of the request from German police to block the @hannoverticker account in Germany.
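A minimal sketch of how country-by-country withholding can work in practice, assuming a simple per-tweet list of countries where a legal order applies. The data model, country codes and notice text are illustrative assumptions rather than Twitter's implementation.

```python
# Hypothetical per-country withholding check matching the policy described above.
from dataclasses import dataclass, field

@dataclass
class Tweet:
    text: str
    withheld_in_countries: set[str] = field(default_factory=set)  # ISO country codes

def render_for_viewer(tweet: Tweet, viewer_country: str) -> str:
    """Viewers in a listed country see a notice; everyone else sees the tweet."""
    if viewer_country.upper() in tweet.withheld_in_countries:
        return "This content is withheld in your country in response to a legal request."
    return tweet.text

# Example: a tweet blocked only for viewers in Germany remains visible elsewhere.
t = Tweet("example post", withheld_in_countries={"DE"})
assert render_for_viewer(t, "DE") != t.text
assert render_for_viewer(t, "FR") == t.text
```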
16th February 2012

US senators Dick Durbin and Tom Coburn have just sent a letter to Twitter CEO Dick Costolo requesting detailed information about the company's handling of takedown notices, injunctions and subpoenas.
See article from forbes.com
10th February 2012

Brazil court case to consider asking Twitter to censor tweets that reveal police speed traps

See article from articles.boston.com
A request for an injunction to stop Twitter users from alerting drivers to police roadblocks, radar traps and drunk-driving checkpoints could make Brazil the first country to take Twitter up on its plan to censor content at governments' requests. Twitter unveiled plans last month that would allow country-specific censorship of tweets that might break local laws.
As far as we know this is the first time that a country has attempted to take Twitter up on their country-by-country take down, Eva Galperin of the San Francisco-based Electronic Frontier Foundation said: Twitter has given these
countries the tool and now Brazil has chosen to use it, she said. Carlos Eduardo Rodrigues Alves, a spokesman for the federal prosecutor's office, said the injunction request was filed Monday. He said a judge was expected to announce in the
next few days whether he will issue the order against Twitter users.
10th February 2012

And how it is used to stop broadcast of the whereabouts of pirated music

See article from huffingtonpost.com
In early June, about three weeks before Beyonce's latest album came out, one of her songs, a collaboration with the rapper Andre 3000, made its way to the open seas of the Internet. Twitter recently published a batch of data that sheds light on the
leak and provides insight into how Twitter censors information on the Internet. It began when a website called RapUp published a link to the song, Party. Someone tweeted the link and lots of people retweeted it.
From the perspective of Beyonce's record label, Columbia, this was not cool. So Columbia turned to a London-based contractor called Web Sheriff, which sent a takedown request to Twitter. It contained a list of over 100 of those copyright-infringing
tweets and retweets. Twitter wrote back quickly: We have removed the reported materials from the site. Twitter has removed thousands of tweets from its site over the years, and last month, it published the more than 4,000
takedown requests that have floated into its inbox since 2009. ...Read the full article
5th February 2012

Thailand approves of the new Twitter censorship by country policy

See article from guardian.co.uk
See Could Facebook, Twitter Be Charged Under Thailand's Computer Crime Act? from pbs.org
The Thai government becomes the first to publicly endorse Twitter's decision to permit country-specific censorship of content. Thai information and communication technology minister Jeerawan Boonperm called Twitter's decision a welcome development and said the ministry already received good co-operation from internet companies such as Google and Facebook. The Thai government would soon be contacting Twitter to discuss ways in which they can collaborate, she told the Bangkok Post.

Thailand has some of the most repressive censorship laws in the world, ranking 153rd out of 178 in Reporters Without Borders' 2011 Press Freedom Index. In particular these are used to target criticism of the monarchy. Lese-majeste laws include punishments of up to 15 years in prison, but under Thailand's 2007 computer crimes act prosecutors have been able to increase sentences.

Thailand's endorsement could have profound ramifications across the region, said Sunai Phasuk of Human Rights Watch Thailand, while it already adds more damage to an already worrying trend in Thailand. Twitter gives space to different opinions and views, and that is so important in a restricted society -- it gives people a chance to speak up, he said. But if this censorship is welcomed by Thailand, then other countries, with worse records for human rights and freedom of speech, will find that they have an ally.
28th January 2012

Twitter to be censored on a per country basis

27th January 2012. See article from mashable.com
Twitter is giving itself the facility to withhold content in specific countries, while keeping that content available for the rest of the world, the company has announced. Until now, the only way for Twitter to censor content was to universally
eliminate it from the site. This change means content deemed inappropriate by a specific government can be withheld locally, explains a blog post called The Tweets Still Must Flow. When we receive a request from an authorized entity, we
will act in accordance with appropriate laws and our terms of service, a Twitter rep told Mashable. If and when content is withheld, affected users will be notified of either an account or tweet's censorship. Twitter will make that decision
public on Chilling Effects, through an expanded partnership that charts Cease and Desist Notices.

Update: Twitter Boycott

28th January 2012. See article from mashable.com

Twitter's new approach to censoring tweets has users rallying around the hashtag #TwitterBlackout, a call to boycott the microblogging service. The change lets Twitter withhold content on a country-by-country basis, when a government deems the tweets inappropriate. Rather than wholly removing the content from the site, it will now only be blocked locally. Many users have expressed dissatisfaction with the change. Tweets have been streaming in, in various languages, all with the #TwitterBlackout hashtag. Anonymous has also supported the blackout. One of its tweets read: SPREAD THE WORD #TwitterBlackout I will not tweet for the whole of January 28th due to the new twitter censor rule #Twitter #J28?

Offsite: What Does Twitter's Country-by-Country Takedown System Mean for Freedom of Expression?

28th January 2012. See article from advocacy.globalvoicesonline.org by Eva Galperin

So what should Twitter users do? Keep Twitter honest. First, pay attention to the notices that
Twitter sends and to the archive being created on Chilling Effects. If Twitter starts honoring court orders from India to take down tweets that are offensive to the Hindu gods, or tweets that criticize the king in Thailand, we want to know immediately.
Furthermore, transparency projects such as Chilling Effects allow activists to track censorship all over the world, which is the first step to putting pressure on countries to stand up for freedom of expression and put a stop to government censorship.
What else? Circumvent censorship. Twitter has not yet blocked a tweet using this new system, but when it does, that tweet will not simply disappear -- there will be a message informing you that content has been blocked due to your geographical location. Fortunately, your geographical location is easy to change on the Internet. You can use a proxy or a Tor exit node located in another country. Read Write Web also suggests that you can circumvent per-country censorship by simply changing the country listed in your profile. ...Read the full article

Update: Twitter boss explains

5th February 2012. See article from mashable.com
Twitter CEO Dick Costolo took the stage at AllThingsD's media conference to defend the company's new censorship policies. He argued that Twitter's new policies allow for greater freedom of speech on the platform. Previously, when a government
demanded that Twitter remove a tweet or block a user, access to that content would be blocked from the entire world. Now, Twitter can hide the tweet or user from that individual country, but allow the rest of the world to see it. Costolo explained:

There's been no change in our stance or attitude or policy with respect to content on Twitter. What we announced is a greater capability we now have. Now, when we are issued a valid legal order in a country in which we
operate, such as a DMCA takedown notice, we are able to leave the content up for as many people around the world as possible, while still operating within the local law. You can't operate in these countries and choose the laws you want to abide by.
We don't proactively go do anything. This is purely a reactive capability to what we determine to be a valid and applicable legal order in a country in which we operate. We're fully blocked in Iran and China. And I don't see the
current environment in either country being one in which we could go and operate anytime soon.