
Toolkit for Digital Safety Design Interventions and Innovations: Typology of Online Harms

INSIGHT REPORT
AUGUST 2023

Contents

Foreword
Executive summary
1 Introduction
2 Typology of Online Harms
2.1 Threats to personal and community safety
2.2 Harm to health and well-being
2.3 Hate and discrimination
2.4 Violation of dignity
2.5 Invasion of privacy
2.6 Deception and manipulation
3 Conclusion
Appendix: Resources
Contributors
Endnotes

Images: Getty Images, Unsplash. © 2023 World Economic Forum. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, including photocopying and recording, or by any information storage and retrieval system.

Disclaimer: This document is published by the World Economic Forum as a contribution to a project, insight area or interaction. The findings, interpretations and conclusions expressed herein are a result of a collaborative process facilitated and endorsed by the World Economic Forum but whose results do not necessarily represent the views of the World Economic Forum, nor the entirety of its Members, Partners or other stakeholders.

Foreword

The internet has transformed the world into a global village, connecting people from different corners of the world with ease and speed. However, it has also heightened various social harms, such as bullying and harassment, hate speech, disinformation and radicalization. The amplification of these harms has far-reaching consequences, affecting individuals, communities and societies.

While the internet is global in nature, harms can be highly local or context-specific: unique risks may arise in different countries or regions or in different communities. Factors such as cultural norms, legal systems and societal values influence how individuals perceive and respond to online threats. Within this context, it is important to acknowledge that digital safety requires a complex range of deliberations, balancing legal, policy, ethical, social and technological considerations. Digital safety decisions must be rooted in international human rights frameworks.1

To address the complex landscape of harms in online spaces, the World Economic Forum's Global Coalition for Digital Safety2 is developing the Typology of Online Harms, as outlined in this report. The report intends to work towards creating a common terminology as well as a shared understanding when discussing online harms across jurisdictions. Moreover, it aims to facilitate conversations about online harms, but it does not set out to provide any severity ratings or to be used for regulatory compliance. The Coalition, intending to foster cooperation on digital safety, has created this document through extensive research, collaboration and expert consultations.

This typology complements other significant publications by the Coalition, including Global Principles on Digital Safety,3 published in January 2023, which recognizes the need to advance a shared understanding of online harm issues. It also complements Digital Safety Risk Assessment in Action: A Framework and Bank of Case Studies,4 launched in May 2023. Together, these publications offer valuable resources for policy-makers, industry leaders, civil society organizations, researchers and individuals, helping to address the issue of harmful content online comprehensively and in a rights-respecting manner.

Daniel Dobrygowski, Head, Governance and Trust, World Economic Forum
Minos Bantourakis, Head, Media, Entertainment and Sport Industry, World Economic Forum
Julie Inman Grant, Australian eSafety Commissioner
Adam Hildreth, Founder, Crisp, a Kroll business

Executive summary

Developed by a working group of the Global Coalition for Digital Safety,
comprising representatives from industry, governments, civil society and academia, this typology serves as a foundation for facilitating multistakeholder discussions and cross-jurisdictional dialogues to find a common terminology and shared understanding of online safety.

The Typology of Online Harms is an integral part of the Toolkit for Digital Safety Design Interventions and Innovations, one of the key workstreams initiated by the Global Coalition for Digital Safety. This toolkit aims to define online harms and identify the potential technology, policy, processes and design interventions needed to advance digital safety in a rights-respecting manner. By aligning with the commitment to foster a shared understanding of online harm issues through a multistakeholder dialogue, as well as the call for governments to clearly distinguish between illegal content and content that is lawful but may be harmful, as outlined in the Global Principles on Digital Safety,5 the typology complements the Coalition's efforts to promote digital safety while respecting individuals' rights. It can also be effectively used in conjunction with Digital Safety Risk Assessment in Action,6 as this typology provides a common language for categorizing and defining the various types of online harms that require assessment.

Considering the global nature of the internet and the local implications of online harms, the typology takes into account both regional and local contexts. It recognizes the complex and interconnected nature of online safety, encompassing content, contact and conduct risks. While recognizing the value of contract as a fourth "C" to reflect risks for children in relation to commercialization and datafication,7 this typology uses a "three C" framework to encompass online safety risks for a broad range of end users. It categorizes these harms, including threats to personal and community safety such as child exploitation and extremist content, harm to health caused by content promoting suicide or disordered eating, and violations of dignity and privacy through bullying and harassment, doxxing and image-based abuse. Deception and manipulation, such as disinformation and manipulated media, are also addressed.

While this publication does not specifically cover emerging technologies such as the metaverse, Web3 or generative AI, it emphasizes the need for the ongoing development of processes that keep pace with technological advances and their societal impact. To this end, the Coalition will also look at developing a conceptual and comprehensive framework to ensure that the approach to online harms is future-proof.

In addition to the Typology of Online Harms, the Coalition is preparing two upcoming reports that will complement this work. The first report, Risk Factors, Metrics and Measurement, focuses on approaches to evaluate the risks of adverse impacts from online harms, as well as the benefits of mitigation actions; the second, Solution-Based Interventions, draws on safety-by-design principles and best practices to provide a resource to assist companies in effectively identifying and reducing digital risks, preventing harm and promoting trust and safety.

The Typology of Online Harms aims to provide a foundational common language, facilitating multistakeholder and cross-jurisdictional discussions to advance digital safety.

1 Introduction

The Typology of Online Harms serves as a foundation to build a common terminology and shared understanding of the diverse range of risks that arise online, including in the production, distribution and consumption of content. Online harms encompass various dimensions, including harm in content production and distribution, as
well as harm in content consumption:

- Harm in the production of content: for example, where a person is physically harmed, and the abuse is recorded or streamed in order to create online material. This could include images or videos of murder, assault or the sexual abuse of adults or children.

- Harm in the distribution of content: for example, where an intimate image of a person is self-produced and shared voluntarily and is later shared and distributed online without their consent. That person may not have been harmed in the production of the content but is exposed to harm once their intimate image is shared. Similarly, victims who are the subject of abuse in the production of content can face compounded trauma when that content is distributed. Those who film, share or consume the content also risk being harmed. The person objectified in such content is also harmed because the distribution of that content can reinforce negative attitudes towards certain populations. Amplifying or resharing hateful comments about a minority group serves as an example that reinforces stereotypes towards the underrepresented group, perpetuating biases and inflicting further harm on these individuals.

- Harm in the consumption of content: for example, where a person is negatively affected as a result of viewing illegal, age-inappropriate, potentially dangerous or misleading content.

Online harms can also occur as a result of online interactions with others (contact) and through behaviour facilitated by technology (conduct).

The following typology of harms builds on existing and emerging international approaches to understanding and mitigating online harms, as listed in the Resources section of this document, and considers the need to address online harms in a rights-respecting way. Harms may be concurrent and intersecting, and their categorization is not always exclusive. For example, while image-based abuse is included under the heading of privacy, it can also relate to harms to personal safety and health and well-being. Similarly, child sexual exploitation and abuse is a threat to personal and community safety, harmful to health and well-being, a violation of dignity and an invasion of privacy. Online harms may also form part of a broader harm context that can occur across a range of technologies and include behaviours perpetrated online and offline, and risk vectors can overlap for certain harms. For example, while child sexual abuse material (CSAM) is listed as primarily a content risk, it may be produced and distributed as a result of contact or conduct, such as grooming and sexual extortion.

By framing online harms through a human rights lens, this typology emphasizes the impacts on individual users and aims to provide a broad categorization of harms to support global policy development. It notes that there are regional differences in how specific harms are defined in different jurisdictions and that there is no international consensus on how to define or categorize common types of harm. Considering the contextual nature of online harm, the typology does not aim to offer precise definitions that are universally applicable in all contexts. In line with the Global Principles on Digital Safety, this typology underscores the importance of balancing different rights, acknowledging that all types of online harm have the potential to unlawfully deny individuals their right to participate and express themselves online.

At this stage it is largely up to individual online service providers to establish rules and guidelines for the types of activity and content that are or are not permitted on their platforms within community guidelines or terms of service. However, these can diverge significantly across services.

This typology can help companies, including those at the early stage, to understand the range of online harms that might occur to users of their services, as well as their impacts on victims, the different modes of abuse and factors that might contribute to harm. Moreover, the typology can aid governments by establishing a shared language to identify online harms and facilitate the efforts of civil society organizations seeking to participate in multistakeholder discussions that advocate for a safer online ecosystem. By using this typology, stakeholders from all sectors can promote a collaborative approach to address the challenges posed by online harms.

Harms to corporations and brands (e.g. copyright infringement) are not within the scope of this typology. The typology does not aim to prescribe actions to be taken in response to harms, nor does it seek to assign severity ratings to harms. Furthermore, the typology is focused on online harms affecting individuals and society, but cannot be considered fully exhaustive in terms of all types of harms (e.g. animal cruelty). It does provide a common foundation for multistakeholder discussions to develop a shared terminology for and understanding of online harms. In determining appropriate interventions for content or conduct falling within any of these harm categories, the broader context of international human rights conventions needs to be considered, as do potential contextual exceptions for content and conduct that is newsworthy, educational, artistic or has other merits.

Possible harms that may affect future technology paradigms, including the metaverse and Web3, are not outlined in this
report. However, as a next step, the Coalition will develop a future-proof framework on online harms to help different stakeholders keep pace with technological advances.

2 Typology of Online Harms

The typology recognizes the complex and interconnected nature of online safety, encompassing content, contact and conduct risks. Although online harms are commonly targeted at an individual, they have broader community and societal impacts. Similarly, societal harms have individual impacts and consequences.

2.1 Threats to personal and community safety

The United Nations Convention on the Rights of the Child General Comment 25 asserts that: "State parties should take legislative and administrative measures to protect children from violence in the digital environment, including the regular review, updating and enforcement of robust legislative, regulatory and institutional frameworks that protect children from recognized and emerging risks of all forms of violence in the digital environment."8

Article 3 of the Universal Declaration of Human Rights states that: "Everyone has the right to life, liberty and security of person."9

a. Content risks

1. Child sexual abuse material (CSAM)
Any representation by whatever means of a child engaged in real or simulated explicit sexual activities or any representation of the sexual parts of a child for primarily sexual purposes. While the laws of many countries continue to use the term "child pornography", there has been a global movement towards the use of
the term "child sexual abuse material" (CSAM) to properly convey that sexualized material depicting or otherwise representing children is indeed a representation, and a form, of child sexual abuse.10

2. Child sexual exploitation material (CSEM)
Content that sexualizes and is exploitative of the child, whether or not it shows the child's sexual abuse.

3. Pro-terror material
Material that advocates engaging in a terrorist act because it counsels, promotes, encourages or urges engaging in a terrorist act, provides instruction on engaging in a terrorist act or directly praises engaging in a terrorist act in circumstances where there is a substantial risk that such praise might have the effect of leading a person to engage in a terrorist act.11

4. Content that praises, promotes, glorifies or supports extremist organizations or individuals
Includes content that encourages participation in, or intends to recruit individuals to, violent extremist organizations, including terrorist organizations, organized hate groups, criminal organizations and other non-state armed groups that target civilians, with names, symbols, logos, flags, slogans, uniforms, gestures, salutes, illustrations, portraits, songs, music, lyrics or other objects meant to represent violent extremist organizations or individuals.

5. Violent graphic content
Content that promotes, incites, provides instruction in or depicts acts including murder, attempted murder, torture, rape and kidnapping of another person using violence or the threat of violence. It is important to consider the various contexts in which this content may arise, including both condemning/informative purposes and in the context of documenting human rights abuses.

6. Content that incites, promotes or facilitates violence
Includes content that contains direct and indirect threats of violence and intimidation.

7. Content that promotes, incites or instructs in dangerous physical behaviour
Content that promotes, incites or provides instruction in activities conducted in a non-professional context that may lead to serious injury or death for the user or members of the public.

b. Contact risks

1. Grooming for sexual abuse
When someone uses the internet to deliberately establish an emotional connection with a young person to lower their inhibitions and make it easier to have sexual contact with them. It may involve an adult posing as a child in an internet application to befriend a child and encourage them to behave sexually online or to meet in person.

2. Recruitment and radicalization
Includes posting or engaging with individuals with the purpose of recruiting individuals to a designated or dangerous organization.

c. Conduct risks

1. Technology-facilitated abuse (TFA)
Using digital technology to enable, assist or amplify abuse or coercive control of a person or group of people.

2. Technology-facilitated gender-based violence
A subset of technology-facilitated abuse that captures any act that is committed, assisted, aggravated or amplified by the use of information communication technologies or other digital tools, resulting in or likely to result in physical, sexual, psychological, social, political or economic harm or other infringements of rights and freedoms on the basis of gender characteristics.

d. Content/contact/conduct risks

1. Child sexual exploitation and abuse (CSEA)
Can refer to content (e.g. CSAM), contact (e.g. grooming) and conduct (e.g. livestreaming).

2.2 Harm to health and well-being

Harmful online content and behaviour can be seriously damaging, especially for those most at risk. The social, emotional, psychological and even
physical impacts of online harms can be immediate, experienced over time and/or enduring. They can also be experienced both online and offline. Nevertheless, online platforms and tools not only harbour harmful content and behaviour but also provide a safe space for individuals to address these issues. This allows people to share experiences, raise awareness, access mental health resources and seek support from one another, both online and offline.

The United Nations Convention on the Rights of the Child General Comment 25 states that: "The use of digital devices should not be harmful, nor should it be a substitute for in-person interactions among children or between children and parents or caregivers. State parties should pay specific attention to the effects of technology in the earliest years of life, when brain plasticity is maximal and the social environment, in particular relationships with parents and caregivers, is crucial to shaping children's cognitive, emotional and social development. In the early years, precautions may be required, depending on the design, purpose and uses of technologies. Training and advice on the appropriate use of digital devices should be given to parents, caregivers, educators and other relevant actors, taking into account the research on the effects of digital technologies on children's development, especially during the critical neurological growth spurts of early childhood and adolescence."12

Article 12(1) of the International Covenant on Economic, Social and Cultural Rights outlines the right to the "enjoyment of the highest attainable standard of physical and mental health".13

a. Content risks

1. Material that promotes suicide, self-harm and disordered eating
Content that
promotes suicidal or self-injurious behaviour. Includes content that promotes, encourages, coordinates or provides instructions on:

- Suicide
- Self-injury, including depictions of graphic self-injury imagery
- Eating disorders, including expressing desire for an eating disorder, sharing tips or coaching on disordered eating, or encouraging participation in unhealthy body measurement challenges

2. Developmentally inappropriate content
Includes children's access to pornography, particularly of a violent or extreme nature, and graphic, violent material.

2.3 Hate and discrimination

Online hate and discrimination can negatively affect a person's mental health, general well-being and online engagement. It can also, in the most extreme cases, lead to harassment and violence offline.

The United Nations Convention on the Rights of the Child General Comment 25 calls upon: "State parties to take proactive measures to prevent discrimination on the basis of sex, disability, socioeconomic background, ethnic or national origin, language or any other grounds and discrimination against minority and Indigenous children, asylum-seeking, refugee and migrant children, lesbian, gay, bisexual, transgender and intersex children, children who are victims and survivors of trafficking or sexual exploitation, children in alternative care, children deprived of liberty and children in other vulnerable situations."14

The Convention on the Elimination of All Forms of Discrimination against Women (CEDAW) and the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD) recognize rights to equality and non-discrimination, which can include protections against violations of the right to safety. Article 20(2) of the International Covenant on Civil and Political Rights (ICCPR) states that: "Any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law."15

a. Content risks

1. Hate speech
Any kind of communication in speech, writing or behaviour that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of their inherent/protected characteristics; in other words, based on their religion, ethnicity, nationality, race, colour, ancestry, gender or other identity factor. Includes dehumanization, which targets individuals or groups by calling them subhuman, comparing them to animals, insects, pests, disease or any other non-human entity.

b. Conduct risks

1. Algorithmic discrimination
A decision that results in the denial of financial and lending services, housing, insurance, education enrolment, criminal justice, employment opportunities, healthcare services or access to basic necessities, such as food and water. It is important to acknowledge that certain practices, such as age restrictions implemented by platforms, serve as protective measures to prevent harmful interactions between unrelated adults and teenagers. Therefore, in this as in other definitions, it is crucial to strike a balance, ensuring that while implementing such measures there is a commitment to upholding human rights principles, as emphasized in the Global Principles on Digital Safety.

2.4 Violation of dignity

The Universal Declaration of Human Rights Article 1 states that: "All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood."16

a. Conduct risks

1. Online bullying and harassment
The use of technology to bully someone, that is, to deliberately engage in hostile behaviour to hurt them socially, emotionally, psychologically or even physically. This can include abusive texts and emails; hurtful messages, images or videos; excluding others; spreading damaging gossip and chat; or creating fake accounts to trick or humiliate someone.

b. Contact risks

1. Sexual extortion
Also called "sextortion", the blackmailing of a person with the help of self-generated images of that person in order to extort sexual favours, money or other benefits from them under the threat of sharing the material beyond the consent of the depicted person (e.g. posting images on social media). Often, the influence and manipulation typical of groomers over longer periods of time (sometimes several months) turns into a rapid escalation of threats, intimidation and coercion once the person has been persuaded to send the first sexual images of themself.17

2.5 Invasion of privacy

The United Nations Convention on the Rights of the Child General Comment 25 states that: "Privacy is vital to children's agency, dignity and safety and for the exercise of their rights. Children's personal data are processed to offer educational, health and other benefits to them. Threats to children's privacy may arise from data collection and processing by public institutions, businesses and other organizations, as well as from such criminal activities as identity theft. Threats may also arise from children's own activities and from the activities of family members, peers or others, for example, by parents sharing photographs online or a stranger sharing information
about a child."18

a. Conduct risks

1. Doxxing
The intentional online exposure of an individual's identity, personal details or sensitive information without their consent and with the intention of placing them at risk of harm.

2. Image-based abuse
Sharing, or threatening to share, an intimate image or video without the consent of the person shown. An "intimate image/video" is one that, where there is a reasonable expectation of privacy, shows nudity, sexual poses, private activity such as showering, or someone without the religious or cultural clothing they would normally wear in public.

2.6 Deception and manipulation

Article 25 of the International Covenant on Civil and Political Rights describes the right to free and fair elections. Article 12 of the International Covenant on Economic, Social and Cultural Rights outlines the right to health. The United
Nations Convention on the Rights of the Child General Comment 25 asserts that: "State parties should ensure that uses of automated processes of information filtering, profiling, marketing and decision-making do not supplant, manipulate or interfere with children's ability to form and express their opinions in the digital environment."19

a. Content risks

1. Disinformation and misinformation
Two distinct types of false or inaccurate information. Misinformation involves the dissemination of incorrect facts, where individuals may unknowingly share or believe false information without the intent to mislead. Disinformation involves the deliberate and intentional spread of false information with the aim of misleading others. Both can be used to manipulate public opinion, interfere with democratic processes such as elections, or cause harm to individuals, particularly when it involves misleading health information. Includes gendered disinformation that specifically targets women political leaders, journalists and other public figures, employing deceptive or inaccurate information and images to perpetuate stereotypes and misogyny.

2. Deceptive synthetic media
Content that has been generated or manipulated via algorithmic processes (such as artificial intelligence or machine learning) to appear as though based on reality, when it is, in fact, artificial, and seeks to harm a particular person or group of people. Includes deepfakes, which are extremely realistic although fake images, audio or video clips that show a real person doing or saying something that they did not actually do or say.

b. Conduct risks

1. Impersonation
Posing as an existing person, group or organization in a confusing or deceptive manner.

2. Scams
Dishonest schemes that seek to manipulate and take advantage of people to gain benefits such as money or
access to personal details.

3. Phishing
The sending of fraudulent messages, pretending to be from organizations or people the receiver trusts, to try and steal details such as online banking logins, credit card details and passwords from the receiver.

4. Catfishing
The use of social media to create a false identity, usually to defraud or scam someone. People who catfish often make up fake backgrounds, jobs or friends to appear as another person. Using this fake identity, they may trick someone into believing they are in an online romance before asking the person to send money, gifts or nude images.

3 Conclusion

The Typology of Online Harms provides a comprehensive framework for understanding and categorizing various types of online harm through a human rights lens. The typology plays a key role in identifying online harms and providing a foundational terminology for multistakeholder discussions. These discussions, in turn, can facilitate the creation of policies and interventions that effectively address online harms and reduce the associated risks.

This typology recognizes the complex nature of online safety by classifying the threats into content, contact and conduct risks. Online harms can occur throughout the production, distribution and consumption of content (content) but can also arise as a result of online interactions with others (contact) and through behaviour facilitated by technology (conduct). Furthermore, the typology categorizes these online harms. For example, it refers to threats to personal and community safety, such as child sexual exploitation material, pro-terror material and extremist content, among other types. Additionally, the typology identifies harms to health and well-being caused by content that promotes suicide or disordered eating. It also acknowledges the importance of dignity and privacy by including examples of bullying and harassment, doxxing and image-based abuse as violations of these principles. Deception and manipulation form another category within the typology, focusing on
online harms related to disinformation and deceptive synthetic media.

The typology recognizes regional and local distinctions in how specific harms are defined and categorized in different jurisdictions. In this sense, it does not prescribe specific actions or assign severity ratings to online harms. Instead, it aims to serve as a valuable resource for companies to understand the online harms that may occur on their platforms.

While this typology offers a foundation for understanding online harms, new technologies, platforms and trends such as the metaverse, Web3 and generative AI may give rise to new
forms of harm or exacerbate existing ones. Although future harms are not in the scope of this publication, the Coalition will consider creating a conceptual framework to ensure the approach to online harms is future-proof.

In conclusion, the Typology of Online Harms provides a comprehensive framework that contributes to global efforts to advance digital safety. By understanding different types of online harm, stakeholders can work collaboratively to develop effective policies, interventions and innovations that promote a safer digital ecosystem while respecting human rights and fostering positive online behaviours.

Appendix: Resources

Australian Government eSafety Commissioner, Assessment Tools: https://www.esafety.gov.au/industry/safety-by-design/assessment-tools.
Australian Government eSafety Commissioner, Glossary of Terms: https://www.esafety.gov.au/about-us/glossary.
Australian Government eSafety Commissioner, Online Safety Act 2021: Abhorrent Violent Conduct Powers Regulatory Guidance, 2021: https://www.esafety.gov.au/sites/default/files/2022-03/Abhorrent%20Violent%20Conduct%20Powers%20Regulatory%20Guidance.pdf.
Berkman Center for Internet & Society at Harvard, Enhancing Child Safety & Online Technologies: Final Report of the Internet Safety Technical Task Force to the Multi-State Working Group on Social Networking of State Attorneys General of the United States, 2008: https://www.ojp.gov/ncjrs/virtual-library/abstracts/enhancing-child-safety-and-online-technologies-final-report.
Child Dignity Alliance, Child Dignity in the Digital World: Technology Working Group Report.
Digital Trust & Safety Partnership, Trust & Safety Glossary of Terms, 2023: https://dtspartnership.org/wp-content/uploads/2023/01/DTSP_Trust-Safety-Glossary13023.pdf.
DQ Institute, Outsmart the Cyber-Pandemic: Empower Every Child with Digital Intelligence by 2020, 2018 DQ Impact Report: https://www.dqinstitute.org/2018dq_impact_report/.
ECPAT International, Luxembourg Guidelines: https://ecpat.org/luxembourg-guidelines/.
EU Disinfo Lab, Gender-Based Disinformation: Advancing Our Understanding and Response, 20 October 2021: https://www.disinfo.eu/publications/gender-based-disinformation-advancing-our-understanding-and-response/.
European Parliament, Policy Department for Structural and Cohesion Policies, Brussels, Research for CULT Committee - Child Safety Online: Definition of the Problem, 2018: https://www.europarl.europa.eu/RegData/etudes/IDAN/2018/602016/IPOL_IDA(2018)602016_EN.pdf.
Europol, Internet Organised Crime Threat Assessment, 2018: https://www.europol.europa.eu/internet-organised-crime-threat-assessment-2018.
IWF, Internet Watch Foundation Annual Report 2022: www.iwf.org.uk.
Kijkwijzer, Youth Protection Roundtable Toolkit (in Dutch): https://www.kijkwijzer.nl/nieuws/nicam-dossier-5-naar-een-safer-internet-gepubliceerd/.
Livingstone, S. and M. Stoilova, 4 Cs of Online Risk: Short Report & Blog on Updating the Typology of Online Risks to Include Content, Contact, Conduct, Contract Risks, Childr

113、en Online:Research and Evidence,2021:https:/core-evidence.eu/posts/4-cs-of-online-risk.Meta,Facebook Community Standards:https:/ Harmful Online Content:A Perspective from Broadcasting and On-Demand Standards Regulation,2018:https:/www.ofcom.org.uk/_data/assets/pdf_file/0022/120991/Addressing-harmful

114、-online-content.pdf.Reddit,Reddit Content Policy:https:/ and S.Hassan,A Model of Online Protection to Reduce Childrens Risk Exposure:Empirical Evidence from Asia,Sexuality&Culture,vol.22,issue 4,2018,pp.12051229:https:/ for Digital Safety Design Interventions and Innovations13Thorn.org,Child Pornogr

115、aphy Is Sexual Abuse Material:https:/www.thorn.org/child-pornography-and-abuse-statistics/.TikTok,Community Guidelines:https:/ Help Center,Platform Use Guidelines:https:/ in a Digital World:The State of the Worlds Children 2017:https:/www.unicef.org/reports/state-worlds-children-2017.United Nations

116、Officer of the High Commissioner,General Comment No.25(2021)on Childrens Rights in Relation to the Digital Environment,2021:https:/www.ohchr.org/en/documents/general-comments-and-recommendations/general-comment-no-25-2021-childrens-rights-relation.WHO,ICD-11:International Classification of Diseases

117、11th Revision:https:/icd.who.int/en.Woodlock,D.,ReCharge:Womens Technology Safety,Legal Resources,Research&Training,2015:https:/wesnet.org.au/wp-content/uploads/sites/3/2022/05/ReCharge-national-study-findings-2015.pdf.World Economic Forum,Digital Safety Risk Assessment in Action:A Framework and Ban

118、k of Case Studies,26 May 2023:https:/www.weforum.org/reports/digital-safety-risk-assessment-in-action-a-framework-and-bank-of-case-studies.World Economic Forum,Global Principles on Digital Safety:Translating International Human Rights for the Digital Context,9 January 2023:https:/www.weforum.org/whi

119、tepapers/global-principles-on-digital-safety-translating-international-human-rights-for-the-digital-context.YouTube Help:https:/ for Digital Safety Design Interventions and Innovations14ContributorsAcknowledgementsWorld Economic Forum Minos BantourakisHead,Media,Entertainment and Sport IndustryAgust

120、ina CallegariProject Lead,Global Coalition for Digital SafetyDaniel DobrygowskiHead,Governance and TrustCathy LiHead,AI,Data and Metaverse,Centre for the Fourth Industrial Revolution;Member of the Executive CommitteeLead AuthorsDaniel ChildIndustry Affairs and Engagement Manager,eSafety Commissioner

John-Orr Hanna, Chief Intelligence Officer, Crisp, a Kroll Business
Adam Hildreth, Founder, Crisp, a Kroll Business
Julie Inman Grant, Commissioner, Australian eSafety Commissioner
Antigone Davis, Vice-President, Global Head of Safety, Meta
Julie Dawson, Chief Policy and Regulatory Officer, Yoti
Inbal Goldberger, Vice-President of Trust and Safety, ActiveFence
Susie Hargreaves, Chief Executive Officer, Internet Watch Foundation
Lisa Hayes, Head of Safety Public Policy and Senior Counsel, Americas, TikTok
Farah Lalani, Global Vice-President, Trust and Safety Policy, Teleperformance
Heidi Larson, Professor of Anthropology, Risk and Decision Science, London School of Hygiene and Tropical Medicine
Deepali Liberhan, Director, Safety Policy, Global (Head, Regional and Regulatory), Meta
Susan Ness, Distinguished Fellow, Annenberg Public Policy Center of the University of Pennsylvania
Akash Pugalia, Global President, Trust and Safety, Teleperformance
Katherine Sandell, Global Head of Platform Risk Trust and Safety, Google
Ian Stevenson, Chief Executive Officer, Cyacomb
David Sullivan, Executive Director, Digital Trust and Safety Partnership
John Tanagho, Executive Director, International Justice Mission Center to End Online Sexual Exploitation of Children
David Wright, Director, UK Safer Internet Centre

Endnotes

1. World Economic Forum, Global Principles on Digital Safety: Translating International Human Rights for the Digital Context, 9 January 2023: https://www.weforum.org/whitepapers/global-principles-on-digital-safety-translating-international-human-rights-for-the-digital-context.
2. World Economic Forum, A Global Coalition for Digital Safety, 2023: https://initiatives.weforum.org/global-coalition-for-digital-safety/home.
3. World Economic Forum, Global Principles on Digital Safety: Translating International Human Rights for the Digital Context, 9 January 2023: https://www.weforum.org/whitepapers/global-principles-on-digital-safety-translating-international-human-rights-for-the-digital-context.
4. World Economic Forum, Digital Safety Risk Assessment in Action: A Framework and Bank of Case Studies, 26 May 2023: https://www.weforum.org/reports/digital-safety-risk-assessment-in-action-a-framework-and-bank-of-case-studies.
5. World Economic Forum, Global Principles on Digital Safety: Translating International Human Rights for the Digital Context, 9 January 2023: https://www.weforum.org/whitepapers/global-principles-on-digital-safety-translating-international-human-rights-for-the-digital-context.
6. Ibid.
7. Livingstone, Sonya, and Stoilova, Mariya, The 4Cs: Classifying Online Risk to Children, CO:RE Short Report Series on Key Topics, SSOAR, 2021: https://doi.org/10.21241/ssoar.71817.
8. United Nations Convention on the Rights of the Child, General Comment No. 25 (2021) on Children's Rights in Relation to the Digital Environment, 2 March 2021: https://www.unicef.org/bulgaria/en/media/10596/file.
9. United Nations, Universal Declaration of Human Rights, 2015: https://www.un.org/en/udhrbook/pdf/udhr_booklet_en_web.pdf.
10. The International Centre for Missing and Exploited Children, Words Matter: https://www.icmec.org/resources/terminology/.
11. Gilbert + Tobin Centre of Public Law, Review of Commonwealth Laws for Consistency with Traditional Rights, Freedoms and Privileges, 25 February 2015: https://www.alrc.gov.au/wp-content/uploads/2019/08/22._org_gilbert_and_tobin_centre_for_public_law_alrc_freedoms_sub.pdf.
12. United Nations Convention on the Rights of the Child, General Comment No. 25 (2021) on Children's Rights in Relation to the Digital Environment, 2 March 2021: https://www.unicef.org/bulgaria/en/media/10596/file.
13. United Nations, International Covenant on Economic, Social and Cultural Rights, 16 December 1966: https://www.ohchr.org/en/instruments-mechanisms/instruments/international-covenant-economic-social-and-cultural-rights.
14. United Nations Convention on the Rights of the Child, General Comment No. 25 (2021) on Children's Rights in Relation to the Digital Environment, 2 March 2021: https://www.unicef.org/bulgaria/en/media/10596/file.
15. United Nations, International Covenant on Civil and Political Rights, 16 December 1966: https://www.ohchr.org/en/instruments-mechanisms/instruments/international-covenant-civil-and-political-rights#:~:text=Article%2020,-1.&text=Any%20propaganda%20for%20war%20shall,shall%20be%20prohibited%20by%20law.
16. United Nations, Universal Declaration of Human Rights: https://www.un.org/en/about-us/universal-declaration-of-human-rights#:~:text=Article%201,in%20a%20spirit%20of%20brotherhood.
17. Facilitator's Tips: Strategies to Overcome Challenges: https://cdn.icmec.org/wp-content/uploads/2020/02/20213159/PasP-facilitator-tips.pdf (adapted from Prevent Child Abuse, Facilitator's Guide to Resilience, and Appendix D, p. 18, Dealing with Difficult Audience Types, from Prevent Child Abuse America: https://preventchildabuse.org/resources/resilience/).
18. United Nations Convention on the Rights of the Child, General Comment No. 25 (2021) on Children's Rights in Relation to the Digital Environment, 2 March 2021: https://www.unicef.org/bulgaria/en/media/10596/file.
19. Ibid.

World Economic Forum
91-93 route de la Capite
CH-1223 Cologny/Geneva
Switzerland
Tel.: +41 (0)22 869 1212
Fax: +41 (0)22 786 2744
contact@weforum.org
www.weforum.org

The World Economic Forum, committed to improving the state of the world, is the International Organization for Public-Private Cooperation. The Forum engages the foremost political, business and other leaders of society to shape global, regional and industry agendas.
