Mark Zuckerberg has previously described plans to create a high-level oversight board to rule on censorship issues with considerations wider than just Facebook's interests. He suggested that national government interests should be considered at this top level of policy making. Zuckerberg wrote:
We are responsible for enforcing our policies every day and we make millions of content decisions every week. But ultimately I don't believe private companies like ours should be making so many important decisions about speech on our own. That's
why I've called for governments to set clearer standards around harmful content. It's also why we're now giving people a way to appeal our content decisions by establishing the independent Oversight Board.
If someone disagrees with a decision we've made, they can appeal to us first, and soon they will be able to further appeal to this independent board. The board's decision will be binding, even if I or anyone at Facebook disagrees with it. The
board will use our values to inform its decisions and explain its reasoning openly and in a way that protects people's privacy.
The board will be an advocate for our community -- supporting people's right to free expression, and making sure we fulfill our responsibility to keep people safe. As an independent organization, we hope it gives people confidence that their
views will be heard, and that Facebook doesn't have the ultimate power over their expression. Just as our Board of Directors keeps Facebook accountable to our shareholders, we believe the Oversight Board can do the same for our community.
As well as a detailed charter, Facebook provided a summary of the design of the board.
Along with the charter, we are
providing a summary which breaks down the elements from the
draft charter, the feedback we've received, and the rationale behind our decisions in relation to both. Many issues have spurred healthy and constructive debate. Four areas in particular were:
Governance: The majority of people we consulted supported our decision to establish an independent trust. They felt that this could help ensure the board's independence, while also providing a means to provide additional accountability
checks. The trust will provide the infrastructure to support and compensate the Board.
Membership: We are committed to selecting a diverse and qualified group of 40 board members, who will serve three-year terms. We agreed with feedback that Facebook alone should not name the entire board. Therefore, Facebook will select
a small group of initial members, who will help with the selection of additional members. Thereafter, the board itself will take the lead in selecting all future members, as explained
in this post. The trust will formally appoint members.
Precedent: Regarding the board, the charter confirms that panels will be expected, in general, to defer to past decisions. This reflects the feedback received during the public consultation period. The board can also request that its
decision be applied to other instances or reproductions of the same content on Facebook. In such cases, Facebook will do so, to the extent technically and operationally feasible.
Implementation: Facebook will promptly implement the board's content decisions, which are binding. In addition, the board may issue policy recommendations to Facebook, as part of its overall judgment on each individual case. In this way, the board's decisions are expected to have lasting influence over Facebook's policies, procedures and practices.
Both Facebook and its users will be able to refer cases to the board for review. For now, the board will begin its operations by hearing Facebook-initiated cases. The system for users to initiate appeals to the board will be made available over
the first half of 2020.
Over the next few months, we will continue testing our assumptions and ensuring the board's operational readiness. In addition, we will focus on sourcing and selecting of board members, finalizing the bylaws that will complement the charter, and
working toward having the board deliberate on its first cases early in 2020.
Facebook has launched a new feature allowing Instagram users to flag posts they claim contain fake news to its fact-checking partners for vetting.
The move is part of a wider raft of measures the social media giant has taken to appease the authorities who claim that 'fake news' is the root of all social ills.
Launched in December 2016 following the controversy surrounding the impact of Russian meddling and online fake news in the US presidential election, Facebook's partnership now involves more than 50 independent 'fact-checkers' in over 30 countries.
The new flagging feature for Instagram users was first introduced in the US in mid-August and has now been rolled out globally.
Users can report potentially false posts by clicking or tapping on the three dots that appear in the top right-hand corner, selecting report, it's inappropriate and then false information.
No doubt the facility will more likely be used to report posts that people don't like rather than for 'false information'.
The Irish Communications Minister Richard Bruton has scrapped plans to introduce restrictions on access to porn in a new online safety bill, saying they are not a priority.
The Government said in June it would consider following a UK plan to block pornographic material until an internet user proves they are over 18. However, the British block has run into administrative problems and been delayed until later this year.
Bruton said such a measure in Ireland is not a priority in the Online Safety Bill, a draft of which he said would be published before the end of the year.
It's not the top priority. We want to do what we committed to do, we want to have the codes of practice, he said at the Fine Gael parliamentary party think-in. We want to have the online commissioner - those are the priorities we are committed to.
An online safety commissioner will have the power to enforce the online safety code and may in some cases be able to force social media companies to remove or restrict access. The commissioner will have responsibility for ensuring that large
digital media companies play their part in ensuring the code is complied with. It will also be regularly reviewed and updated.
Bruton's bill will allow for a more comprehensive complaint procedure for users and alert the commissioner to any alleged dereliction of duty. The Government has been looking at Australia's pursuit of improved internet safety.
Google has paid a fine for failing to block access to certain websites banned in Russia.
Roscomnadzor, the Russian government's internet and media censor, said that Google paid a fine of 700,000 rubles ($10,900) related to the company's refusal to fully comply with rules imposed under the country's censorship regime.
Search engines are prohibited under Russian law from displaying banned websites in the results shown to users, and companies like Google are asked to adhere to a regularly updated blacklist maintained by Roscomnadzor.
Google does not fully comply with the blacklist, however, and more than a third of the websites banned in Russia could be found using its search engine, Roscomnadzor said previously.
No doubt Russia is now working on increased fines for future transgressions.
Russia's powerful internal security agency FSB has enlisted the help of the telecommunications, IT and media censor Roskomnadzor to ask a court to block Mailbox and Scryptmail email providers.
It seems that the services failed to register with the authorities as required by Russian law. Both are marketed as focusing strongly on the privacy segment and offering end-to-end encryption.
News source RBK noted that the process to block the two email providers will in legal terms follow the model applied to the Telegram messaging service -- adding, however, that imperfections in the blocking system are resulting in Telegram's
continued availability in Russia.
On the other hand, some experts argued that it will be easier to block an email service than a messenger like Telegram. In any case, Russia is preparing for a new law to come into effect on November 1 that will see the deployment of Deep Packet
Inspection equipment, which should result in more efficient blocking of services.
A parliamentary committee initiated by the Australian government will investigate how porn websites can verify Australians visiting their websites are over 18, in a move based on the troubled UK age verification system.
The family and social services minister, Anne Ruston, and the minister for communications, Paul Fletcher, referred the matter for inquiry to the House of Representatives standing committee on social policy and legal affairs.
The committee will examine how age verification works for online gambling websites, and see if that can be applied to porn sites. According to the inquiry's terms of reference, the committee will examine whether such a system would push adults into unregulated markets, whether it would potentially lead to privacy breaches, and whether it would impact freedom of expression.
The committee has specifically been tasked to examine the UK's version of this system, in the UK Digital Economy Act 2017.
Hopefully they will understand better than UK lawmakers that it is of paramount importance that legislation is enacted to keep people's porn browsing information totally safe from snoopers, hackers and those who want to make money selling it.
One of the key learnings from recent events is that there is growing demand for privacy features. The Firefox Private Network is an extension which provides a secure, encrypted path to the web to protect your connection and your personal
information anywhere and everywhere you use your Firefox browser.
There are many ways that your personal information and data are exposed: online threats are everywhere, whether it's through phishing emails or data breaches. You may often find yourself taking advantage of the free WiFi at the doctor's office,
airport or a cafe. There can be dozens of people using the same network -- casually checking the web and getting social media updates. This leaves your personal information vulnerable to those who may be lurking, waiting to take advantage of this
situation to gain access to your personal info. Using the Firefox Private Network helps protect you from hackers lurking in plain sight on public connections. To learn more about Firefox Private Network, its key features and how it works exactly,
please take a look at this blog post.
As a Firefox user and account holder in the US, you can start testing the Firefox Private Network today. A Firefox account allows you to be one of the first to test potential new products and services when we make them available in Europe, so sign up today and stay tuned for further news and the Firefox Private Network coming to your location soon!
Call to regulate video game loot boxes under gambling law and ban their sale to children among measures needed to protect players, say MPs. Lack of honesty and transparency reported among representatives of some games and social media companies
in giving evidence.
The wide-ranging report calls upon games companies to accept responsibility for addictive gaming disorders, protect their players from potential harms due to excessive play-time and spending, and along with social media companies introduce more
effective age verification tools for users.
The immersive and addictive technologies inquiry investigated how games companies operate across a range of social media platforms and other technologies, generating vast amounts of user data and operating business models that maximise player
engagement in a lucrative and growing global industry.
Sale of loot boxes to children should be banned
Government should regulate loot boxes under the Gambling Act
Games industry must face up to responsibilities to protect players from potential harms
Industry levy to support independent research on long-term effects of gaming
Serious concern at lack of effective system to keep children off age-restricted platforms and games
MPs on the Committee have previously called for a new Online Harms regulator to hold social media platforms accountable for content or activity that harms individual users. They say the new regulator should also be empowered to gather data and
take action regarding addictive games design from companies and behaviour from consumers. E-sports, competitive games played to an online audience, should adopt and enforce the same duty of care practices enshrined in physical sports. Finally,
the MPs say social media platforms must have clear procedures to take down misleading deep-fake videos -- an obligation they want to be enforced by a new Online Harms regulator.
In a first for Parliament, representatives of major games including Fortnite maker Epic Games and social media platforms Snapchat and Instagram gave evidence on the design of their games and platforms.
DCMS Committee Chair Damian Collins MP said:
Social media platforms and online games makers are locked in a relentless battle to capture ever more of people's attention, time and money. Their business models are built on this, but it's time for them to be more responsible in dealing with
the harms these technologies can cause for some users.
Loot boxes are particularly lucrative for games companies but come at a high cost, particularly for problem gamblers, while exposing children to potential harm. Buying a loot box is playing a game of chance and it is high time the gambling laws
caught up. We challenge the Government to explain why loot boxes should be exempt from the Gambling Act.
Gaming contributes to a global industry that generates billions in revenue. It is unacceptable that some companies with millions of users and children among them should be so ill-equipped to talk to us about the potential harm of their products.
Gaming disorder based on excessive and addictive game play has been recognised by the World Health Organisation. It's time for games companies to use the huge quantities of data they gather about their players, to do more to proactively identify players at risk.
Both games companies and the social media platforms need to establish effective age verification tools. They currently do not exist on any of the major platforms which rely on self-certification from children and adults.
Social media firms need to take action against known deepfake films, particularly when they have been designed to distort the appearance of people in an attempt to maliciously damage their public reputation, as was seen with the recent film of
the Speaker of the US House of Representatives, Nancy Pelosi.
Regulate 'loot boxes' under the Gambling Act:
Loot box mechanics were found to be integral to major games companies' revenues, with further evidence that they facilitated profits from problem gamblers. The Report found current gambling legislation that excludes loot boxes because they do not
meet the regulatory definition failed to adequately reflect people's real-world experiences of spending in games. Loot boxes that can be bought with real-world money and do not reveal their contents in advance should be considered games of chance
played for money's worth and regulated by the Gambling Act.
Evidence from gamers highlighted the loot box mechanics in Electronic Arts's FIFA series with one gamer disclosing spending of up to £1000 a year.
The Report calls for loot boxes that contain the element of chance not to be sold to children playing games and instead be earned through in-game credits. In the absence of research on potential harms caused by exposing children to gambling, it
calls for the precautionary principle to apply. In addition, better labelling should ensure that games containing loot boxes carry parental advisories or descriptors outlining that they feature gambling content.
The Government should bring forward regulations under section 6 of the Gambling Act 2005 in the next parliamentary session to specify that loot boxes are a game of chance. If it determines not to regulate loot boxes under the Act at this time,
the Government should produce a paper clearly stating the reasons why it does not consider loot boxes paid for with real-world currency to be a game of chance played for money's worth.
UK Government should advise PEGI to apply the existing 'gambling' content labelling, and corresponding age limits, to games containing loot boxes that can be purchased for real-world money and do not reveal their contents before purchase.
Safeguarding younger players:
With three-quarters of those aged 5 to 15 playing online games, MPs express serious concern at the lack of an effective system to keep children off age-restricted platforms and games. Evidence received highlighted challenges with age verification
and suggested that some companies are not enforcing age restrictions effectively.
Legislation may be needed to protect children from playing games that are not appropriate for their age. The Report identifies inconsistencies in age-ratings stemming from the games industry's self-regulation around the distribution of games. For
example, online games are not subject to a legally enforceable age-rating system and voluntary ratings are used instead. Games companies should not assume that the responsibility to enforce age-ratings applies exclusively to the main delivery
platforms: all companies and platforms that are making games available online should uphold the highest standards of enforcing age-ratings.
China's internet censor has ordered online AI algorithms to promote 'mainstream values':
Systems should direct users to approved material on subjects like Xi Jinping Thought, or which showcase the country's economic and social development, Cyberspace Administration of China says
They should not recommend content that undermines national security, or is sexually suggestive, promotes extravagant lifestyles, or hypes celebrity gossip and scandals
The Cyberspace Administration of China released its draft regulations on managing the cyberspace ecosystem on Tuesday in another sign of how the ruling Communist Party is increasingly turning to technology to cement its ideological control.
The proposals will be open for public consultation for a month and are expected to go into effect later in the year.
The latest rules point to a strategy to use AI-driven algorithms to expand the reach and depth of the government's propaganda and ideology.
The regulations state that information providers on all manner of platforms -- from news and social media sites, to gaming and e-commerce -- should strengthen the management of recommendation lists, trending topics, hot search lists and push
notifications. The regulations state:
Online information providers that use algorithms to push customised information [to users] should build recommendation systems that promote mainstream values, and establish mechanisms for manual intervention and override.
Today, on World Suicide Prevention Day, we're sharing an update on what we've learned and some of the steps we've taken in the past year, as well as additional actions we're going to take, to keep people safe on our apps, especially those who are most vulnerable.
Earlier this year, we began hosting regular consultations with experts from around the world to discuss some of the more difficult topics associated with suicide and self-injury. These include how we deal with suicide notes, the risks of sad
content online and newsworthy depictions of suicide. Further details of these meetings are available on Facebook's new Suicide Prevention page in our Safety Center.
As a result of these consultations, we've made several changes to improve how we handle this content. We tightened our policy around self-harm to no longer allow graphic cutting images to avoid unintentionally promoting or triggering self-harm,
even when someone is seeking support or expressing themselves to aid their recovery. On Instagram, we've also made it harder to search for this type of content and kept it from being recommended in Explore. We've also taken steps to address the
complex issue of eating disorder content on our apps by tightening our policy to prohibit additional content that may promote eating disorders. And with these stricter policies, we'll continue to send resources to people who post content
promoting eating disorders or self-harm, even if we take the content down. Lastly, we chose to display a sensitivity screen over healed self-harm cuts to help avoid unintentionally promoting self-harm.
And for the first time, we're also exploring ways to share public data from our platform on how people talk about suicide, beginning with providing academic researchers with access to the social media monitoring tool, CrowdTangle. To date,
CrowdTangle has been available primarily to help newsrooms and media publishers understand what is happening on Facebook. But we are eager to make it available to select researchers who focus on suicide prevention to explore how information
shared on Facebook and Instagram can be used to further advancements in suicide prevention and support.
In addition to all we are doing to find more opportunities and places to surface resources, we're continuing to build new technology to help us find and take action on potentially harmful content, including removing it or adding sensitivity
screens. From April to June of 2019, we took action on more than 1.5 million pieces of suicide and self-injury content on Facebook and found more than 95% of it before it was reported by a user. During that same time period, we took action on
more than 800 thousand pieces of this content on Instagram and found more than 77% of it before it was reported by a user.
To help young people safely discuss topics like suicide, we're enhancing our online resources by including Orygen's #chatsafe guidelines in Facebook's Safety Center and in resources on Instagram when someone searches for suicide or self-injury
The #chatsafe guidelines were developed together with young people to provide support to those who might be responding to suicide-related content posted by others or for those who might want to share their own feelings and experiences with
suicidal thoughts, feelings or behaviors.
Banned Books Week is an annual event celebrating the freedom to read. Banned Books Week was launched in 1982 in response to a sudden surge in the number of challenges to books in schools, bookstores and libraries. Typically held during the last
week of September, it highlights the value of free and open access to information. Banned Books Week brings together the entire book community -- librarians, booksellers, publishers, journalists, teachers, and readers of all types -- in shared
support of the freedom to seek and to express ideas, even those some consider unorthodox or unpopular.
Banned Books Week UK is a nationwide campaign for radical and rebellious readers of all ages to celebrate the freedom to read. Between 22 -- 28 September 2019, bookshops, libraries, schools, literary festivals and publishers will be hosting
events and making noise about some of the most sordid, subversive, sensational and taboo-busting books around.
Index On Censorship highlights an example of the censorship
On 19 October 2018, the city of Orange City, Iowa, held a LGBT+ pride parade downtown, and a drag queen story hour in the public library. One man, however, had already checked out of the festivities--and he had taken several library books with him.
Paul Dorr, the director of the Christian organisation Rescue the Perishing, posted a live video to Facebook about an hour before the parade was scheduled to start. During the video Dorr recited a Rescue the Perishing blog post entitled
May God And The Homosexuals of OC Pride Please Forgive Us! and threw four books he claimed were from the library into a flaming trash can.
The books Dorr burned were all LGBT-themed children's books: Two Boys Kissing, by David Levithan, is a teen romance; Christine Baldacchino's Morris Micklewhite and the Tangerine Dress is about a young boy who enjoys wearing a dress; This Day in June, by Gayle E. Pitman, is a picture book about pride; and Suzanne and Max Lang's Families, Families, Families! is a children's book about nontraditional families.
After the due date for the books he had checked out passed without their return, the Orange City Attorney's Office arrested Dorr and charged him with fifth-degree criminal mischief. He has elsewhere insisted that the library has no grievance against him because he sent in money to cover the replacement costs, but Dorr will still stand trial on 6 August 2019.
DNS over HTTPS (DoH) is an encrypted internet protocol that makes it more difficult for ISPs and government censors to block users from accessing banned websites. It also makes it more difficult for state snoopers like GCHQ to keep tabs on users' internet browsing history.
Of course this protection from external interference also makes internet browsing much safer from the threat of scammers, identity thieves and malware.
Google were once considering introducing DoH for its Chrome browser but have recently announced that they will not allow it to be used to bypass state censors.
Mozilla meanwhile have been a bit more reasonable about it and allow users to opt in to using DoH. Now Mozilla is considering using DoH by default in the US, but still with the proviso of implementing DoH only if the user is not using parental controls or corporate website blocking.
Mozilla explains in a blog post:
What's next in making Encrypted DNS-over-HTTPS the Default
By Selena Deckelmann,
In 2017, Mozilla began working on the DNS-over-HTTPS (DoH) protocol, and since June 2018 we've been running experiments in Firefox to ensure the performance and user experience are great. We've also been surprised and excited by the more than
70,000 users who have already chosen on their own to explicitly enable DoH in Firefox Release edition. We are close to releasing DoH in the USA, and we have a few updates to share.
After many experiments, we've demonstrated that we have a reliable service whose performance is good, that we can detect and mitigate key deployment problems, and that most of our users will benefit from the greater protections of encrypted DNS
traffic. We feel confident that enabling DoH by default is the right next step. When DoH is enabled, users will be notified and given the opportunity to opt out.
Results of our Latest Experiment
Our latest DoH experiment was designed to help us determine how we could deploy DoH, honor enterprise configuration and respect user choice about parental controls.
We had a few key learnings from the experiment.
We found that OpenDNS' parental controls and Google's safe-search feature were rarely configured by Firefox users in the USA. In total, 4.3% of users in the study used OpenDNS' parental controls or safe-search. Surprisingly, there was little
overlap between users of safe-search and OpenDNS' parental controls. As a result, we're reaching out to parental controls operators to find out more about why this might be happening.
We found 9.2% of users triggered one of our split-horizon heuristics. The heuristics were triggered in two situations: when websites were accessed whose domains had non-public suffixes, and when domain lookups returned both public and private
(RFC 1918) IP addresses. There was also little overlap between users of our split-horizon heuristics, with only 1% of clients triggering both heuristics.
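The second heuristic (a lookup answering with both public and private RFC 1918 addresses) can be sketched in a few lines. This is an illustrative reconstruction, not Mozilla's actual code; the function names are made up for the example.

```python
import ipaddress

# The three RFC 1918 private address blocks.
RFC1918_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(addr: str) -> bool:
    """True if addr falls inside one of the RFC 1918 private ranges."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918_NETS)

def looks_split_horizon(resolved: list) -> bool:
    """A single lookup answering with a mix of private and public
    addresses is a hint that the network uses split-horizon DNS."""
    flags = [is_rfc1918(a) for a in resolved]
    return any(flags) and not all(flags)
```

For example, looks_split_horizon(["192.168.1.10", "203.0.113.9"]) flags a mixed answer, while an all-private or all-public answer does not.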
Now that we have these results, we want to tell you about the approach we have settled on to address managed networks and parental controls. At a high level, our plan is to:
Respect user choice for opt-in parental controls and disable DoH if we detect them;
Respect enterprise configuration and disable DoH unless explicitly enabled by enterprise configuration; and
Fall back to operating system defaults for DNS when split horizon configuration or other DNS issues cause lookup failures.
We're planning to deploy DoH in "fallback" mode; that is, if domain name lookups using DoH fail or if our heuristics are triggered, Firefox will fall back and use the default operating system DNS. This means that for the minority of
users whose DNS lookups might fail because of split horizon configuration, Firefox will attempt to find the correct address through the operating system DNS.
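That fallback behaviour can be sketched as below. The function and parameter names are illustrative, not Firefox internals; the resolvers are passed in as plain callables so the logic is easy to follow.

```python
import socket

def os_resolve(name: str) -> list:
    """Ask the operating system's default resolver."""
    return sorted({info[4][0] for info in socket.getaddrinfo(name, None)})

def resolve_with_fallback(name: str, doh_resolve, fallback=os_resolve) -> list:
    """Try DoH first; on a failure or an empty answer (as with a name
    that only exists behind a split-horizon resolver), fall back to
    the default operating system DNS."""
    try:
        addrs = doh_resolve(name)
        if addrs:
            return addrs
    except Exception:
        pass  # DoH endpoint unreachable, NXDOMAIN, etc.
    return fallback(name)
```

So a lookup for an internal-only hostname that the public DoH resolver cannot answer still succeeds via the OS resolver, which is exactly the behaviour described above.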
In addition, Firefox already detects that parental controls are enabled in the operating system, and if they are in effect, Firefox will disable DoH. Similarly, Firefox will detect whether enterprise policies have been set on the device and will
disable DoH in those circumstances. If an enterprise policy explicitly enables DoH, which we think would be awesome, we will also respect that. If you're a system administrator interested in how to configure enterprise policies, please find documentation here.
Options for Providers of Parental Controls
We're also working with providers of parental controls, including ISPs, to add a canary domain to their blocklists. This helps us in situations where the parental controls operate on the network rather than an individual computer. If Firefox
determines that our canary domain is blocked, this will indicate that opt-in parental controls are in effect on the network, and Firefox will disable DoH automatically.
This canary domain is intended for use in cases where users have opted in to parental controls. We plan to revisit the use of this heuristic over time, and we will be paying close attention to how the canary domain is adopted. If we find that it
is being abused to disable DoH in situations where users have not explicitly opted in, we will revisit our approach.
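The canary check amounts to something like the following sketch. Mozilla's published canary domain is use-application-dns.net; the function name and the injected resolver are illustrative, not Firefox's actual implementation.

```python
CANARY_DOMAIN = "use-application-dns.net"  # Mozilla's published canary domain

def network_opts_out_of_doh(resolver, canary=CANARY_DOMAIN) -> bool:
    """If the network's resolver blocks the canary (NXDOMAIN or an
    empty answer), treat that as network-level opt-in filtering such
    as parental controls being active, and keep DoH disabled."""
    try:
        answers = resolver(canary)
    except Exception:  # lookup failed outright
        return True
    return not answers
```

An unfiltered network resolves the canary normally and DoH stays on; a filtering network that blacklists the canary causes the lookup to fail, and DoH is switched off.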
Plans for Enabling DoH Protections by Default
We plan to gradually roll out DoH in the USA starting in late September. Our plan is to start slowly enabling DoH for a small percentage of users while monitoring for any issues before enabling for a larger audience. If this goes well, we will
let you know when we're ready for 100% deployment.
An internal project to rewrite how Apple's Siri voice assistant handles sensitive topics such as feminism and the #MeToo movement advised developers to respond in one of three ways: don't engage, deflect, and finally inform with neutral information from Wikipedia.
The project saw Siri's responses explicitly rewritten to ensure that the service would say it was in favour of equality, but never say the word feminism -- even when asked direct questions about the topic.
The 2018 guidelines are part of a large tranche of internal documents leaked to the Guardian by a former Siri grader, one of thousands of contracted workers who were employed to check the voice assistant's responses for accuracy until Apple ended
the programme last month in response to privacy concerns raised by the Guardian.
In explaining why the service should deflect questions about feminism, Apple's guidelines explain that Siri should be guarded when dealing with potentially controversial content. When questions are directed at Siri, they can be deflected ...
however, care must be taken here to be neutral.
For example, Siri was tested a little on internet forums about #MeToo. Previously, when users called Siri a slut, the service responded: I'd blush if I could. Now, a much sterner reply is offered: I won't respond to that.
One of the Pentagon's most secretive agencies, the Defense Advanced Research Projects Agency (DARPA), is developing custom software that can unearth fakes hidden among more than 500,000 stories, photos, video and audio clips.
DARPA now is developing a semantic analysis program called SemaFor and an image analysis program called MediFor, ostensibly designed to prevent the use of fake images or text. The idea would be to develop these technologies to help private
Internet providers sift through content.
Google have announced potentially far-reaching new policies about kids' videos on YouTube. A Google blog post explains:
An update on kids and data protection on YouTube
From its earliest days, YouTube has been a site for people over 13, but with a boom in family content and the rise of shared devices, the likelihood of children watching without supervision has increased. We've been taking a hard look at areas
where we can do more to address this, informed by feedback from parents, experts, and regulators, including COPPA concerns raised by the U.S. Federal Trade Commission and the New York Attorney General that we are addressing with a settlement.
New data practices for children's content on YouTube
We are changing how we treat data for children's content on YouTube. Starting in about four months, we will treat data from anyone watching children's content on YouTube as coming from a child, regardless of the age of the user. This means that
we will limit data collection and use on videos made for kids only to what is needed to support the operation of the service. We will also stop serving personalized ads on this content entirely, and some features will no longer be available on
this type of content, like comments and notifications. In order to identify content made for kids, creators will be required to tell us when their content falls in this category, and we'll also use machine learning to find videos that clearly
target young audiences, for example those that have an emphasis on kids characters, themes, toys, or games.
Improvements to YouTube Kids
We continue to recommend parents use YouTube Kids if they plan to allow kids under 13 to watch independently. Tens of millions of people use YouTube Kids every week but we want even more parents to be aware of the app and its benefits. We're
increasing our investments in promoting YouTube Kids to parents with a campaign that will run across YouTube. We're also continuing to improve the product. For example, we recently raised the bar for which channels can be a part of YouTube Kids,
drastically reducing the number of channels on the app. And we're bringing the YouTube Kids experience to the desktop.
Investing in family creators
We know these changes will have a significant business impact on family and kids creators who have been building both wonderful content and thriving businesses, so we've worked to give impacted creators four months to adjust before changes take
effect on YouTube. We recognize this won't be easy for some creators and are committed to working with them through this transition and providing resources to help them better understand these changes.
We are also going to continue investing in the future of quality kids, family and educational content. We are establishing a $100 million fund, disbursed over three years, dedicated to the creation of thoughtful, original children's content on
YouTube and YouTube Kids globally.
Today's changes will allow us to better protect kids and families on YouTube, and this is just the beginning. We'll continue working with lawmakers around the world in this area, including as the FTC seeks comments on COPPA . And in the coming
months, we'll share details on how we're rethinking our overall approach to kids and families, including a dedicated kids experience on YouTube.
The Swiss Lottery and Betting Board has published its first censorship list of foreign gambling websites to be blocked by the country's ISPs.
The censorship follows a change to the law on online gambling intended to preserve a monopoly for Swiss gambling providers.
Over 60 foreign websites have been blocked for Swiss gamblers. Last June, 73% of voters approved the censorship law. The law came into effect in January, but blocking of foreign gambling websites only started in August.
Swiss gamblers can bet online only with Swiss casinos and lotteries that pay tax in the country.
Foreign service providers that voluntarily withdraw from the Swiss market with appropriate measures will not be blocked.
35 people in New Zealand have been charged by police with sharing or possessing Brenton Tarrant's Christchurch terrorist attack video.
As of August 21st, 35 people have been charged in relation to the video, according to information released under the Official Information Act. At least 10 of the charges are against minors, which have now been referred to the Youth Court.
Under New Zealand law, knowingly possessing or distributing objectionable material is a serious offence with a maximum jail term of 14 years.
So far, nine people have been issued warnings, while 14 have been prosecuted for their involvement.