Recommendations for Artificial Intelligence (AI) Actors
These recommendations are addressed to AI actors: government, private and public sector entities involved in at least one stage of the artificial intelligence system life cycle.

As artificial intelligence (AI) technologies rapidly evolve, they are poised to reshape our world. From automating everyday tasks to aiding scientific discovery, the potential benefits are vast. Alongside this progress, however, lies a critical need to ensure safe, secure and trustworthy AI design, development, deployment and decommissioning. Bias and a lack of diversity in training data can lead AI systems to generate misleading information and perpetuate unfairness, and the ability of these systems to generate realistic content can be misused at scale, creating risks to the integrity of the information ecosystem. These emerging risks can be mitigated by prioritizing transparency and fairness throughout the life cycle of AI technologies. A collaborative effort across government, technology companies and academic and research institutions is needed to ensure that AI is designed, developed, deployed and decommissioned safely and responsibly across its life cycle. By working together, these stakeholders can ensure that AI technologies benefit society and human well-being.
Recommendations
a. Ensure safe, secure and trustworthy AI. Take measures to ensure the safe, secure and trustworthy design, development, deployment, use and decommissioning of AI technologies. Address and publicly communicate the implications of any innovations or advancements in the field that may present risks to the integrity of the information ecosystem, including malicious uses of AI technologies, overreliance on AI technology without human oversight and any related potential for further erosion of trust across geographies and societal contexts. Train AI systems on reliable, inclusive information sources on issues critical to public well-being and take measures to mitigate bias stemming from training data, including gender and racial bias (an illustrative data-audit sketch follows this list). Partner with a diverse range of stakeholders in carrying out human rights risk assessments to proactively minimize societal risks and mitigate potential harms, including to women, children, youth and other groups in situations of vulnerability and marginalization.
b. Commission independent audits. Commit to providing access and legal and technical safe harbour to institutional and individual researchers to conduct independent audits of AI models, with appropriate safeguards, such as compliance with company vulnerability disclosure policies. Ensure public accessibility of the results of independent audits, of data about risks related to AI systems, such as the potential for harmful discrimination and "hallucinations" (content that appears factual but is entirely fabricated), and of the steps taken to prevent, mitigate and address potential harms.
c. Respect intellectual property. Respect intellectual property rights, ensuring fair compensation for intellectual property, including original journalism, used to train AI tools.
d. Display data provenance. Develop and implement provenance solutions and policies, in both visible and invisible forms, such as authenticity certification, watermarking and labelling (an illustrative signing sketch follows this list). Undertake multi-stakeholder efforts towards the standardization of user-friendly labelling.
e. Support literacy. Invest at the organizational level in the development and deployment of literacy initiatives to enhance public understanding of how AI models function and the implications for information consumers globally, with a focus on risks to information integrity.
f. Enable user feedback. Provide users with the ability to flag or report inaccurate or misleading provenance information, while protecting user privacy (an illustrative reporting sketch follows this list).
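Illustrative sketches (non-normative)

The first sketch illustrates the data-audit measure in recommendation (a): checking how well demographic groups are represented in a training corpus before training begins. It is a minimal Python example, not a prescribed method; the record schema, the "gender" attribute and the 10 per cent representation floor are hypothetical choices for illustration.

# Illustrative sketch: auditing demographic representation in training data.
# The attribute name and the representation floor are hypothetical; real
# audits depend on the dataset schema and the fairness criteria adopted.
from collections import Counter

def representation_report(records, attribute, floor=0.10):
    """Report each group's share for `attribute` and flag groups that
    fall below a minimum representation floor."""
    counts = Counter(r.get(attribute, "unknown") for r in records)
    total = sum(counts.values())
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {"share": round(share, 3), "below_floor": share < floor}
    return report

if __name__ == "__main__":
    # Toy corpus metadata; in practice these would be the training set's records.
    corpus = [
        {"text": "...", "gender": "female"},
        {"text": "...", "gender": "male"},
        {"text": "...", "gender": "male"},
        {"text": "...", "gender": "female"},
        {"text": "...", "gender": "non-binary"},
    ]
    print(representation_report(corpus, "gender"))

A real audit would of course cover many more attributes and use fairness criteria agreed with the stakeholders named in recommendation (a); the point here is only that such checks can be made routine and automated.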
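The next sketch illustrates the provenance measure in recommendation (d): attaching a signed manifest to a piece of content so that downstream users can verify its origin and detect tampering. Standards such as C2PA content credentials use certificate-based signatures; the shared HMAC secret below is a stand-in assumption so the example stays self-contained, and the generator name is hypothetical.

# Illustrative sketch: attaching and verifying a signed provenance manifest.
import base64, hashlib, hmac, json

SECRET = b"demo-signing-key"  # hypothetical key; real systems use PKI certificates

def attach_provenance(content: bytes, generator: str) -> dict:
    """Build a manifest recording who generated the content and a hash of it,
    then sign the manifest so later edits to either can be detected."""
    manifest = {
        "generator": generator,  # e.g. the name of the AI model used
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = base64.b64encode(
        hmac.new(SECRET, payload, hashlib.sha256).digest()
    ).decode()
    return manifest

def verify_provenance(content: bytes, manifest: dict) -> bool:
    """Check that the manifest is untampered and still matches the content."""
    claimed = dict(manifest)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    intact = hmac.compare_digest(base64.b64decode(signature), expected)
    return intact and claimed["content_sha256"] == hashlib.sha256(content).hexdigest()

if __name__ == "__main__":
    image = b"...synthetic image bytes..."
    manifest = attach_provenance(image, generator="example-image-model")
    print(verify_provenance(image, manifest))          # True
    print(verify_provenance(image + b"x", manifest))   # False: content altered

The multi-stakeholder standardization called for in recommendation (d) matters precisely because verification only works if producers and platforms agree on the manifest format and signing scheme.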
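Finally, a sketch of the user-reporting measure in recommendation (f): an intake function that records a complaint about a provenance label while storing no direct user identifier. The field names, the salted hash and the length cap on free text are hypothetical design choices, shown only to make the privacy trade-off concrete.

# Illustrative sketch: privacy-preserving intake of provenance reports.
import hashlib, os, time

REPORTS = []            # stand-in for a real datastore
SALT = os.urandom(16)   # per-deployment salt; prevents trivial reversal of hashes

def submit_provenance_report(content_id, reason, reporter_id=None):
    """Record a report that a content item's provenance label looks wrong.
    The reporter identifier, if supplied at all, is stored only as a salted
    hash, so duplicate reports can be counted without identifying the user."""
    report = {
        "content_id": content_id,
        "reason": reason[:500],  # cap free text to limit incidental personal data
        "received_at": int(time.time()),
        "reporter_hash": hashlib.sha256(SALT + reporter_id.encode()).hexdigest()
                         if reporter_id else None,
    }
    REPORTS.append(report)
    return report

if __name__ == "__main__":
    print(submit_provenance_report(
        "img-0042",
        "Label says human-made; image appears AI-generated",
        "user@example.org",
    ))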