Artificial Intelligence (AI) algorithms play a growing role in our everyday lives, including in how we organise information and experience the internet. A majority of the content we encounter online is curated by algorithms in some form.
Through the Christchurch Call to Action, we have committed to work together to better understand the impacts that algorithms and other processes may have on terrorist and violent extremist content. Leaders and the Call Community regard this algorithmic work as a top priority.
In order to study those impacts, we have to overcome challenges around:
- how to protect user privacy and proprietary information
- how to investigate impacts holistically across society
- how to achieve reproducibility, affordability, and scale for independent researchers.
Working with an open-source non-profit organisation called OpenMined, the Algorithms Initiative will develop and test ground-breaking privacy-enhancing software infrastructure to address those challenges and help us move forward work under the Call.
While this initiative won’t tell us everything we need to know about the outcomes that algorithms are driving online, it will improve access to data so that independent researchers can answer those very questions.
The Christchurch Call Initiative on Algorithmic Outcomes is committed to supporting this work so that we can empower independent researchers to help us build safer platforms and more effective interventions to protect people both online and offline.
If successful, these technologies will be made available to the whole Christchurch Call community and beyond it. The technology, once tested and proven in the Call context, could open up a new field of algorithmic research with a much wider application.
Our community wants to understand the role of online activity as a factor in radicalisation, and how terrorist and violent extremist content spreads across platforms. The privacy-protective technology being developed through our initiative is one of the most promising ways to open those questions to independent research at a suitable scale. It could also help people working in a number of other fields.
We hope that this work will ultimately help the Christchurch Call Community to understand what online service providers, community organisations, and governments can do to make the online environment safer and more user-friendly.
What is the Christchurch Call Initiative on Algorithmic Outcomes?
New Zealand, the USA, Twitter, and Microsoft are working under the Christchurch Call, partnering with OpenMined, to develop new software tools that will help facilitate more independent research on the impacts of user interactions with algorithmic systems.
What will the Christchurch Call Initiative on Algorithmic Outcomes do?
As the first project under this initiative, the partners will work together to build and test a set of privacy-enhancing technologies. Once tested, replicated, and validated, these technologies could form the basis for an infrastructure to support independent study of impacts of algorithms and their interactions with users, including across multiple platforms and types of platforms, and could dramatically lower the barriers to doing this work.
If successful, these technologies will be made available to the Christchurch Call Community and could help support independent research that fulfils our collective Call Commitments, potentially opening up wider applications and new fields of responsible AI research beyond the Call itself.
How does this respond to the Christchurch Call Commitments?
Christchurch Call Supporters have committed to working together and with Civil Society to understand the outcomes of algorithms and other processes that may drive users towards terrorist and violent extremist content, to make changes where this occurs, and to develop effective interventions based on information sharing.
The technologies developed with the help of this initiative should help to overcome some of the barriers identified through our existing work and enable information exchanges among civil society, industry and government, to make it possible to realise those commitments.
Why is it needed?
At the moment it is costly and administratively complex for online service providers to give independent researchers access to study impacts on users while complying with ethical and regulatory obligations. Current secure access programmes often entail significant costs and time for researchers and firms alike. That greatly reduces the possible scale and breadth of the external studies needed to understand the impacts of algorithmic systems. As a result, independent research typically studies user impacts on a single platform rather than across the broader online ecosystem (i.e. across multiple platforms).
This new technology is intended to reduce the cost and complexity of access for independent researchers, and to allow a multitude of different studies: for instance, how social issues manifest online, how user interactions affect artificial intelligence systems, and how effective different actions intended to promote a safer online environment are. It could also enable studies on a range of other algorithmic and responsible AI topics.
For example, governments should not require access to private user data from online platforms. However, they can benefit from insights obtained by independent researchers that can help inform policy. Civil society has an interest, among other things, in ensuring that privacy is preserved and that human rights outcomes are effectively assessed and acted upon. These privacy-enhancing technologies could help facilitate and scale access, allowing important insights without governments or other parties requiring access to the base data.
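To illustrate how a researcher can obtain an insight without anyone handing over the base data, consider differential privacy, one of the techniques this kind of infrastructure can draw on. The sketch below is purely illustrative and hypothetical — the dataset and function names are invented for this example and are not part of the initiative's software — but it shows the core idea: a count query is answered with calibrated Laplace noise, so the released number is useful in aggregate while masking any single record.

```python
import numpy as np

def dp_count(records, predicate, epsilon, rng):
    """Return a differentially private count of records matching `predicate`.

    A counting query has sensitivity 1, so Laplace noise with scale
    1/epsilon is enough to mask any single record's contribution.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical dataset: per-session flags that are never released directly.
sessions = [{"flagged": i % 7 == 0} for i in range(1000)]
rng = np.random.default_rng(42)
noisy_total = dp_count(sessions, lambda s: s["flagged"], epsilon=0.5, rng=rng)
```

A smaller `epsilon` means more noise and stronger privacy; the researcher sees only `noisy_total`, never the individual sessions.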
What is being developed?
Software infrastructure is being built that will integrate a new set of privacy-enhancing technologies (PETs). This will enable data scientists to remotely study data and algorithms distributed across multiple secure sites. Technologies including remote execution, federated learning, differential privacy, and secure multi-party computation will enable this remote research in a way that conforms to the data-use policies of the platform under study.
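As a concrete flavour of one of these techniques, secure multi-party computation lets several platforms contribute to a joint statistic without any of them revealing its own raw numbers. The following is a minimal sketch of additive secret sharing with hypothetical values; real deployments, including OpenMined's tooling, are considerably more involved than this toy example.

```python
import secrets

PRIME = 2**61 - 1  # modulus for share arithmetic; any sufficiently large prime works

def share(value, n_parties):
    """Split `value` into additive shares; fewer than all shares reveal nothing."""
    parts = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    parts.append((value - sum(parts)) % PRIME)
    return parts

def reconstruct(shares):
    """Recombine shares to recover the shared value (mod PRIME)."""
    return sum(shares) % PRIME

# Hypothetical per-platform counts that no platform wants to reveal directly.
platform_counts = [120, 45, 300]
all_shares = [share(c, 3) for c in platform_counts]
# Each party i adds up the i-th share it received from every platform...
partial_sums = [sum(s[i] for s in all_shares) % PRIME for i in range(3)]
# ...and only the combined total is ever reconstructed.
total = reconstruct(partial_sums)  # 465, with no single platform's count exposed
```

Each share on its own is a uniformly random number, so intermediate parties learn nothing; only the agreed aggregate is revealed at the end.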
The current project plan envisages work taking place approximately over a 9-month timeframe with a total cost of approximately US$1.5m.
In brief, the project’s milestones are to:
- Build the underlying software and systems for the new infrastructure.
- Do a proof of concept test using synthetic data to show that the system works.
- Have trusted partner(s) test the system on real datasets, demonstrating reproducibility and proof of function.
What is each partner contributing?
New Zealand is providing a financial contribution and assistance with establishing the initiative. New Zealand will also provide coordination between the Call Community and the project to ensure transparency, visibility, and that opportunities are maximised for Call supporters and partners to make use of the project’s output.
The support of New Zealand and the USA will help to ensure the project outcomes are developed on behalf of the wider Call Community and will be tested for different types of online service provider.
Twitter’s ML Ethics, Transparency and Accountability team will provide the proof of concept for the PETs infrastructure. At the conclusion of Phase 1, trusted partners will be able to securely access and replicate the findings of Twitter’s algorithmic amplification paper published last year. Enabling secure access to this data is a milestone in data transparency and security.
Microsoft is providing financial support to the initiative and is exploring options to enable the initiative to be tested on a different platform.
How can I be involved?
As part of the project, OpenMined will provide periodic briefings for the Call Community about its work, and an opportunity to discuss its potential applications.
In subsequent phases of this work, there will be scope for other online service providers who want to test the capabilities of the new tool on their system, and for Call supporters who wish to help finance the development and testing of new functions and capabilities.
We envisage that these tools will take some time to build, test, validate, and refine. In the long term, we expect them to create significant new opportunities for independent researcher involvement and for the application of these tools across the Call Community and beyond.
If the pilot is successful, what are the next steps?
If successful, the pilot will prove that the underlying techniques can be scaled to meet real-world legal, policy and other requirements. Subsequent phases could focus on additional testing, moving towards building a production-ready system to enable use for a wide range of objectives by a wide range of independent researchers.
See more information about the Christchurch Call's work on Algorithms and Positive Interventions.