Trust and resilience throughout societies are key components of information integrity.
Trust, in this context, refers to the confidence that people have in the sources and reliability of the information they access, including official sources and information, and in the mechanisms that allow information to flow throughout the ecosystem. Resilience refers to the ability of societies to withstand disruptions or manipulative actions within the information ecosystem.
Trust and resilience are vulnerable to actions driven
by State and non-State actors who seek to exploit the
information ecosystem for strategic, political or financial gain. These actions, at times widely coordinated,
can result in a range of harms and jeopardize people’s
ability to critically assess science and facts.
Large technology companies hold significant power
in the information ecosystem and exercise inordinate
influence over the manner in which stakeholders, including other businesses, advertisers, news media and
individual users, interact with and access information.
Advances in artificial intelligence (AI) technologies,
such as generative AI, have introduced the means to
create risks to information spaces at scale and with
minimal costs. AI-generated or AI-mediated content,
purporting to be real or original, can be highly believable, emotionally resonant and hard to detect and can
spread rapidly across algorithm-driven platforms and
media outlets. This has the potential to create, accelerate and deepen trust deficits at an exponential rate.
Addressing risks to information integrity demands
robust, forward-looking and innovative digital trust and safety practices, enforced consistently across languages and contexts. These practices should reflect
the insights of groups in situations of vulnerability and
marginalization that are disproportionately exposed to
potential harm.
Particular consideration should be given to women, older persons, children, youth, persons with disabilities, Indigenous Peoples, refugees and stateless people, and ethnic or religious minority groups.
Many young people and children spend a significant
portion of their lives online and obtain a vast range of
information from digital channels. They already often
bear the brunt of risks to information spaces and will
be most directly affected by emerging technologies and
media trends.
People are generally more resilient and better equipped
to pre-empt and navigate such risks when they have
access to a diverse range of information sources and
feel included, equal, socioeconomically secure and
politically empowered. When that is not the case, these
risks can often find more fertile ground to proliferate. Responses should therefore acknowledge underlying
societal needs to boost long-term resilience.
All stakeholders committed to acting in the public
interest can strive to adapt to the realities of a constantly evolving communications landscape by harnessing information spaces for common benefit. This
is particularly critical at pivotal societal moments such
as elections, natural hazards and human-made crises,
when risks to information spaces are pronounced and can deepen social polarization, undermine people's ability to participate in public life and, in extreme cases, be used to incite violence.
Activists, journalists, humanitarians and United Nations
personnel, including peacekeepers, election workers,
scientists, medical professionals and others, can
become targets, with potentially dire consequences.
Online harassment and other insidious tactics can
result in the silencing of voices and shrinking of civic spaces. Concerted efforts to safeguard such individuals are paramount.