About
The ARTAI (Assessing Risk for Trustworthy AI) platform is designed to support assessment of the societal impact and risks of recommender algorithms used by major online platforms.
ARTAI aims to enable evaluation of how content is disseminated online and to provide transparency and oversight over a currently opaque process known to pose significant societal harms. In doing so, the project supports the implementation of EU digital regulation, including the Digital Services Act (DSA) (European Parliament and Council, 2020), which mandates increased accountability and calls for new research to foresee societal risk.

We developed a simulation platform to evaluate real-world use cases of recommender algorithms. This strategy addresses many of the shortcomings of existing methods and reflects state-of-the-art approaches to recommender system evaluation already used within the tech industry. Simulations capture the kind of content that is sent to users online depending on how they interact with the system and on what their attributes and interests may be. Evaluating these patterns of dissemination against the existing literature on social risks, and against testimony from users of online platforms themselves, allows social risks to be foreseen before they are experienced at a societal level.

To achieve these goals, the ARTAI platform's architecture is structured into modules for pre-processing, synthetic user data generation, simulation, and evaluation.
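As a rough illustration of how these four modules might fit together, the sketch below wires them into a minimal pipeline. Every class, function, and scoring rule here is an assumption made for illustration; none of it is the platform's actual API or recommendation logic.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Illustrative sketch of the four ARTAI modules as pipeline stages.
# All names and data structures are assumptions, not ARTAI's real interfaces.

@dataclass
class SyntheticUser:
    user_id: int
    interests: Dict[str, float]                 # topic -> affinity score
    history: List[str] = field(default_factory=list)

def preprocess(raw_items: List[str]) -> List[str]:
    """Pre-processing: normalise the content catalogue."""
    return [item.strip().lower() for item in raw_items if item.strip()]

def generate_users(n: int) -> List[SyntheticUser]:
    """Synthetic user data generation: users with simple interest profiles."""
    topics = ["news", "sport", "health"]
    return [
        SyntheticUser(i, {t: (i + k) % 3 / 2 for k, t in enumerate(topics)})
        for i in range(n)
    ]

def simulate(users: List[SyntheticUser], catalogue: List[str],
             steps: int = 3) -> List[SyntheticUser]:
    """Simulation: each step, recommend the item each user is most drawn to
    (a stand-in for a real recommender algorithm under test)."""
    for _ in range(steps):
        for user in users:
            best = max(catalogue, key=lambda item: user.interests.get(item, 0.0))
            user.history.append(best)
    return users

def evaluate(users: List[SyntheticUser]) -> Dict[str, int]:
    """Evaluation: count how often each item was disseminated across users."""
    counts: Dict[str, int] = {}
    for user in users:
        for item in user.history:
            counts[item] = counts.get(item, 0) + 1
    return counts
```

In a real audit, the simple affinity-matching stand-in in `simulate` would be replaced by the recommender system under evaluation, and `evaluate` would apply risk measures rather than raw counts.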
Current risk evaluation methods, such as code analysis, user surveys, and sock-puppet audits, are limited in scalability and feasibility, particularly given the increasing complexity of deep learning systems. To address this, ARTAI has, in its first phase of development, created a simulation environment that enables systematic measurement of user engagement and assessment of the societal impact of recommender algorithms in a controlled setting. ARTAI is the first platform of its kind, intended for use by AI auditors, platform providers, and researchers studying the social impact of recommender systems.

A key feature of ARTAI is its direct integration of insights from those most affected by the use of recommender algorithms, achieved through a toolkit for engaging with end-users of recommender systems. Combined with extensive stakeholder engagement with government bodies, industry representatives, researchers, and civil society groups, this ensures that ARTAI is developed to provide transparency and oversight into how recommender algorithms may be contributing to the societal risks of online platforms.

This solution directly addresses the broader societal challenge by providing a means to identify and mitigate potential harms caused by recommender algorithms. By enabling such assessments, ARTAI can uncover harmful patterns in content dissemination, contribute to a safer online environment, and support the implementation of new regulatory requirements for AI systems, aligning with both EU and national policy objectives.
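To make "uncovering harmful patterns in content dissemination" concrete, one simple measure a simulation run could report is how concentrated the recommended content is. The metric choice and function name below are assumptions for illustration, not ARTAI's actual evaluation method: a Herfindahl-Hirschman-style score that is 1.0 when every recommendation is the same item and 1/n for an even spread over n distinct items.

```python
from collections import Counter
from typing import List

def dissemination_concentration(recommended_items: List[str]) -> float:
    """Illustrative concentration score for a stream of recommendations:
    sum of squared shares of each distinct item. High values can flag that
    a recommender is funnelling users toward a narrow slice of content."""
    counts = Counter(recommended_items)
    total = sum(counts.values())
    return sum((c / total) ** 2 for c in counts.values())
```

For example, a stream where three of four recommendations are the same item scores 0.625, while four distinct items score 0.25; thresholds for what counts as a harmful pattern would come from the risk literature and user testimony, not from the metric itself.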