Recommendations for Technology Companies.
a. Integrate safety and privacy from design to delivery. Embed robust safety and privacy policies into the full life cycle of all products and services, including every phase of design, development, delivery and decommissioning, applying policies consistently to both human and AI-generated media. Cooperate with independent, third-party organizations to conduct and make public ongoing human rights risk assessments related to all products and services to proactively minimize societal risks and mitigate potential harms, including in advance of and around pivotal societal moments. Take measures to protect and empower groups in situations of vulnerability and marginalization, members of civil society and others often targeted online; and to address gender-based and other forms of violence which occur through or are amplified by the use of technology. Innovate to address emerging challenges, including the potential prevalence of risks to information ecosystem integrity resulting from AI technologies. Ensure diversity and inclusion in staffing at all stages of product development, and in trust and safety teams. Establish procedures for internal information-sharing to ensure that risk and policy assessments are shared and collectively understood at all levels and functions of the company, including leadership. Ensure consistent enforcement of all trust and safety policies.
b. Re-evaluate business models. Assess whether and how platform architecture contributes to the erosion of information ecosystem integrity and undermines human rights, and take proportionate mitigation and remediation measures while respecting freedom of expression. Scope innovative, commercially viable business models that do not rely on targeted programmatic advertising and that serve the public interest.
c. Protect children. Establish and enforce measures to protect and uphold the rights of children, such as age verification and parental controls. Implement policies and practices to prevent and counter child sexual exploitation and abuse which occurs through or is amplified by the use of technology. Establish and publicize special reporting and complaints mechanisms for children.
d. Allocate resources. Allocate sufficient, sustained and dedicated in-house trust and safety resources and expertise proportionate to risk levels. Designate sufficient resources to address the sociocultural and linguistic contexts and languages of operation and the differentiated needs of groups in situations of vulnerability and marginalization, in particular in contexts experiencing conflict or facing unstable conditions.
e. Ensure consistent content moderation. Cooperate with independent, third-party organizations to develop content moderation processes in line with international human rights standards and ensure that such policy is enforced consistently and non-arbitrarily across areas of operation. Allocate sufficient resources for human and automated content moderation and curation, applied consistently across all languages and contexts of operation. Take measures to address content that violates platform community standards and undermines human rights, such as limiting algorithmic amplification, labelling and demonetization. Make publicly available disaggregated data on the implementation of content moderation policies and on resources allocated for content moderation across languages and contexts of operation.
f. Uphold labour standards. Provide working conditions that are aligned with international labour and human rights law and prioritize initiatives that ensure the welfare, safety and quality training of all workers, including content moderators, involved in trust and safety efforts.
g. Establish independent oversight. Commission regular, independent external human rights audits covering terms of service and community standards; trust and safety and advertising policies; risk management; the impacts of advertising and recommender systems across language and operational contexts; content moderation; complaints and appeals processes; transparency mechanisms; and data access for researchers. Assess the impact of products and services on groups in situations of vulnerability and marginalization, on gender equality and on children’s rights. Make the results of these audits public, accessible and understandable for all users.
h. Develop industry standards. Partner with civil society and other stakeholders to co-develop industry accountability frameworks with clearly defined roles and responsibilities, committing to audited public reporting and independent oversight and to robust standards for privacy, transparency, risk management and trust and safety. Make specific provisions for the needs of groups in situations of vulnerability and marginalization and in fragile contexts, establishing effective ways to measure and address risks to human rights. Ensure cooperation between platforms and services, recognizing that risks can spread across various information spaces, each with unique design flaws and policy gaps that can be exploited.
i. Elevate crisis response. Working with stakeholders operating in high-risk areas, establish early warning and escalation processes with accelerated and timely response rates in contexts of crisis and conflict. Establish mechanisms to enable prominent, timely access to reliable, accurate information that serves the public interest.
j. Support political processes. Undertake and make publicly accessible human rights risk assessments of all products and services in advance of and throughout elections and other political processes. Enforce all related policies to uphold information integrity, taking measures to address disinformation, harassment and violence against women and other groups commonly targeted in public life, including political candidates.
k. Collaborate with stakeholders. Proactively engage with a diverse range of stakeholders, including States, academia, civil society, children, youth-led organizations and the technical community, to gain deeper understanding of risks to the integrity of the information ecosystem and augment and calibrate trust and safety mechanisms accordingly.
l. Establish robust complaint mechanisms. Ensure timely, transparent, safe, secure and accessible complaint, reporting, appeals and redress mechanisms for users and non-users, including special processes for those in situations of vulnerability and marginalization. Establish and enforce procedures to prevent misuse of the reporting and complaints mechanisms, such as through coordinated inauthentic behaviour.
m. Communicate clear policies. Make terms and conditions, policies, community standards and enforcement procedures easily accessible, consistent and understandable, including for children. Make clear all policies, guidelines and rules concerning news and political content.
n. Enforce advertising policies. Establish, publicize and enforce clear and robust policies on advertising and the monetization of content. Review existing publisher and advertising tech partnerships on an ongoing basis to assess whether such policies are upheld by partners in the ad tech supply chain. Publicly report annually on the effectiveness of policy enforcement and any other actions taken.
o. Demonstrate advertising transparency. Clearly mark all adverts, making information on the advertiser, the parameters used for targeting and any use of AI-generated or -mediated content transparent to users. Maintain full, accessible, up-to-date and searchable advertising libraries with information on the source or purchaser, how much was spent and the target audience. Provide advertisers and researchers with detailed data on exactly where adverts have appeared over any given period, and on the accuracy and effectiveness of controls and services around advertising placements and brand adjacency. Undertake transparent reporting regarding revenue sources and sharing arrangements with advertisers and content creators. Clearly label all political advertising, including to indicate content that has been AI-generated or -mediated, and provide easily accessible information on why recipients are being targeted, who paid for the adverts and how much.
p. Support media safety and diversity. Create an enabling environment for the distribution of pluralistic news content, allowing consumers to access a range of media sources. Support independent, free and pluralistic media, especially local and citizen journalism conducted in diverse languages and contexts, while respecting editorial independence. Take all measures to uphold the rights of journalists and media workers online. Make explicit, transparent provisions to help safeguard journalists and media workers against harassment, abuse and threats of violence, reflecting the risks faced by journalists, especially during pivotal societal moments such as elections, natural hazards and human-made crises. Update trust and safety policies and practices specifically to mitigate and address the targeting of women journalists.
q. Provide data access. Provide researchers, including academics across disciplines, journalists, civil society and international organizations, access to the data that they need to better understand information integrity, inform policy and best practice and improve accountability, while respecting user privacy and intellectual property. Such data should be disaggregated to allow for effective study of information ecosystem integrity, including societal risks, impacts on differentiated communities and populations, the implications of the use of AI technologies, potential impacts on the achievement of the Sustainable Development Goals and the effectiveness of risk mitigation measures. It should include information on: algorithm-driven recommender systems, including explanations of how algorithms are trained to rank, recommend, distribute and flag content; accounts removed, banned or demoted; and resource allocation for trust and safety across languages and contexts. Facilitate data delivery for researchers at minimal cost in accessible, machine-readable formats.
r. Ensure disclosure. Make public State requests for content removal or placement. Disclose all collaborations with fact-checking organizations, including funding or other support provided; and funding provided to political bodies and candidates.
s. Offer control and choice. Offer user-friendly tools, functions and features that ensure informed consent and empower people to easily control their own online experience, including through interoperability with other services, giving them greater choice over the content they see and over how and where their data are used.
t. Label AI content. Clearly label AI-generated or -mediated content, investing in and developing solutions at the organizational level to ensure that users can easily identify such content and to strengthen rather than undermine user trust in information ecosystem integrity more broadly. This includes embedding information in the metadata that identifies such content as AI-generated or -mediated.
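As a purely illustrative sketch of the metadata approach described above (not a prescribed format), a platform might attach a minimal machine-readable provenance record to each media item; the field names and function below are hypothetical, loosely modelled on content-credential schemes such as C2PA:

```python
import json

def make_provenance_record(content_id: str, generator: str) -> str:
    """Build a minimal, machine-readable provenance record (hypothetical
    schema) flagging a piece of content as AI-generated or -mediated."""
    record = {
        "content_id": content_id,  # identifier of the media item
        "ai_generated": True,      # the disclosure flag itself
        "generator": generator,    # tool or model that produced the content
    }
    return json.dumps(record)

# A platform could store this record in the item's metadata so that clients
# and downstream services can render a visible "AI-generated" label.
label = json.loads(make_provenance_record("img-001", "example-model"))
print(label["ai_generated"])
```

Because the record travels with the content rather than living only in a platform database, a label of this kind can survive re-sharing across services, which is what allows users to identify such content wherever it appears.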
u. Ensure privacy. Ensure that the collection, use, sharing, sale and storage of data respects the privacy of users and that users can easily access information on how their personal data are harnessed, including for algorithmic decisions, and on how their personal data are shared with and obtained from other entities.
v. Foster digital literacy. Support media and information literacy drives to boost digital skills, including to improve public understanding of the function, effects and implications of algorithms. Dedicate literacy and capacity-building resources for all languages and areas of operation, especially fragile contexts. Provide safety-related training materials to children and youth. Enable and make publicly available independent external evaluations of the effectiveness of literacy initiatives.