Understanding Algorithms and Developing Interventions

Better knowledge of how algorithms, users, and interventions interact is important for understanding user behaviour and how to counter radicalisation online. This page explains our work to develop tools and gain knowledge in this area.

What are algorithms and positive interventions?

Algorithms are automated processes that make decisions about the content users see and engage with online. Algorithms play an important role in users’ online experience, including curating and suggesting content we might be interested in, and identifying and addressing content that violates terms of service.

Online algorithmic systems can be very complex. Their outcomes depend on interactions with users, different types of content, and other algorithms. These systems can learn and adapt over time through those interactions to become more effective at their given task.
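
To illustrate the kind of feedback loop described above, the toy Python sketch below ranks items by topic and adjusts its weights as a user engages with content. It is purely illustrative: the scoring rule, topic names and data are hypothetical and bear no relation to any real platform’s system.

    from collections import defaultdict

    class ToyRecommender:
        def __init__(self):
            # Learned preference weight per content topic, starting neutral.
            self.topic_weights = defaultdict(float)

        def score(self, item):
            # Rank items by how strongly the user has engaged with their topic.
            return self.topic_weights[item["topic"]]

        def recommend(self, candidate_items, k=3):
            # Surface the k items the current weights rate most highly.
            return sorted(candidate_items, key=self.score, reverse=True)[:k]

        def record_engagement(self, item, engaged):
            # Feedback loop: engagement nudges a topic up, ignoring it nudges it
            # down, so the ranking adapts to the user's interactions over time.
            self.topic_weights[item["topic"]] += 1.0 if engaged else -0.2

    # Hypothetical usage with made-up items.
    recommender = ToyRecommender()
    items = [{"id": 1, "topic": "sport"},
             {"id": 2, "topic": "news"},
             {"id": 3, "topic": "music"}]
    recommender.record_engagement(items[0], engaged=True)
    print(recommender.recommend(items, k=2))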

Positive interventions are measures that redirect or influence a person away from extremist or terrorist content. They can play an important role in empowering community voices and disrupting recruitment or propaganda strategies.
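
As a purely hypothetical illustration of the redirect idea, the sketch below matches a placeholder list of risk-indicative search terms and offers alternative community content instead. Real interventions are designed and delivered by practitioners and platforms, not by a simple lookup like this; the terms and links here are placeholders only.

    # Hypothetical risk indicators and counter-content; placeholders only.
    RISK_TERMS = {"example risky phrase", "another risky phrase"}
    COUNTER_CONTENT = [
        "Link to community support resources",
        "Link to testimony from former extremists",
    ]

    def positive_intervention(search_query):
        # If the query matches a risk indicator, offer alternative content
        # instead of (or alongside) the usual results.
        if search_query.lower().strip() in RISK_TERMS:
            return {"intervene": True, "suggested_content": COUNTER_CONTENT}
        return {"intervene": False, "suggested_content": []}

    print(positive_intervention("example risky phrase"))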

Why are they important?

A better understanding of how algorithmic systems play out for different types of users could help inform changes that make online platforms safer for users around the world. Such changes could enable more effective removal of terrorist and violent extremist content online, and enable communities to positively intervene in user journeys that are driving people to that content.

In the Christchurch Call, supporters have made several commitments related to algorithms. These include reviewing the operation of algorithms to better understand intervention points and to implement changes, developing effective interventions, and accelerating research into technical solutions.

In 2021 the Call Community endorsed a workplan on algorithms and positive interventions. This plan focuses our efforts on improving understanding of:

  • online user journeys and the role they may play in radicalisation
  • how online and offline factors interact
  • how content recommendation processes might be exploited and ways to mitigate this.

What work is underway?

At the 2022 Leaders’ Summit, a group of Call Supporters announced the Christchurch Call Initiative on Algorithmic Outcomes (CCIAO). In the first project under this initiative, New Zealand, the USA, Twitter, Microsoft and the charitable organisation OpenMined are developing and testing privacy-enhancing technologies to help overcome significant barriers to independent research on potential pathways to radicalisation. See more information about the initiative.

After one year, the CCIAO reported important progress towards building a privacy-enhancing system for sharing insights about algorithmic outcomes. The system has been built and tested, demonstrating that it can securely answer questions from accredited third-party researchers using information held by social media platforms.
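
The CCIAO system’s design is not detailed here, but the sketch below illustrates one well-known privacy-enhancing technique of the same general kind: answering a researcher’s aggregate question with calibrated noise (differential privacy), so that no individual record can be inferred from the published result. All names, data and parameters are hypothetical.

    import random

    def noisy_count(records, predicate, epsilon=1.0):
        # Answer "how many records satisfy the predicate?" with Laplace noise,
        # so the published result does not reveal any single record.
        true_count = sum(1 for r in records if predicate(r))
        sensitivity = 1.0          # one record changes the count by at most 1
        scale = sensitivity / epsilon
        # A Laplace sample is the difference of two exponential samples.
        noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
        return true_count + noise

    # Hypothetical platform-held records queried by an accredited researcher.
    records = [{"saw_flagged_content": (i % 7 == 0)} for i in range(1000)]
    print(noisy_count(records, lambda r: r["saw_flagged_content"]))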

At the 2023 Leaders’ Summit, it was announced that France, Dailymotion, and Georgetown University’s Center for Security and Emerging Technology had joined the CCIAO, pledging additional funding of US$1.3m. Leaders endorsed further development of the CCIAO.

Leaders also welcomed the work of the EU Internet Forum, including its 2023 Study on the role and effects of the use of algorithmic amplification to spread terrorist, violent extremist and borderline content, which provides an important snapshot of the different impacts of recommendation systems and user interactions, and they encouraged further efforts to address these risks.

New regulatory measures to promote researcher access to data, and new voluntary transparency measures, have expanded the ability to build public understanding and to promote evidence-based decision making by policymakers, developers, and civil society. Examples include online service providers publishing detailed information about the inputs to their content recommendations, and X’s decision to open-source the code and inputs that support its content recommendations.

Current priorities 

There is an ongoing need for work across the tech sector and government to understand the data and information needs for effective research. Specific actions include:

  • The development of a new governance structure and ethics framework for the CCIAO, to enable its continued growth

  • Continued work to build the CCIAO into a global network that has a significant impact for the Call, for the ethical and responsible deployment of AI, and for the public good.

 

Resources

See the Algorithms and Positive Interventions Workplan [PDF, 827 KB]

See the full text of the Christchurch Call commitments.