By DBT

Qualified Insights and User Requirements for TITAN's AI-Solution to Help Counter Disinformation


[Image: a digitally generated hand representing AI]

Earlier in 2023, an exciting initiative took place across eight European countries, where citizens actively participated in co-creation workshops. The aim of these workshops was to harness the collective wisdom of the public and use it as a guiding force in developing a trustworthy, ethical, and widely accepted AI-enabled tool for countering disinformation online.


The next step in the human-centric design process was to take the findings from the citizen workshops to experts in the field for further discussion and qualification. A wide range of experts in disinformation, critical thinking, and AI were invited to three separate two-hour workshops on the following topics:

  • Informed Consent and AI Tools

  • Data and Trust

  • AI Learning Tools and Citizens' Needs

The results of these discussions helped frame and shape the citizen feedback into user requirements for the TITAN tool, so that it addresses citizens' concerns and helps the project achieve its aim of developing a trustworthy, ethical, and socially accepted tool.


Insights from Expert Involvement in Citizens' Concerns

The three expert workshops, developed and managed by our project partners DBT and VUB, produced outputs that form a cornerstone of our human-centric design work in the areas of AI and data ethics.


Expert Workshop on Informed Consent and AI Tools

The experts were asked to consider insights from the citizens: Citizens want to understand what they give consent to and why an AI tool needs access to their data. They are also concerned about transparency and whether they can trust an AI tool. Outcomes included:

  • User-Centric Approach: Participants emphasised the importance of AI tools prioritising users' interests and needs. This includes avoiding practices such as using manipulative design to trick users into giving consent and refraining from selling user data for commercial gain.

  • Granularity of Consent: The workshop highlighted the need to give users granular control over their consent. Consent should not be an all-or-nothing choice; instead, users should be able to scope their consent by time, amount of data, and gradual disclosure (a sketch of this idea follows the list).

  • Transparency: Transparency in informed consent was a key concern. Users should receive relevant information that truly aids their decision-making, promoting trust in AI tools.
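
To make the granularity requirement concrete, here is a minimal Python sketch of what per-category, per-purpose, time-limited consent could look like. Everything in it, including the ConsentGrant and ConsentLedger names and the example data categories, is a hypothetical illustration rather than part of the actual TITAN design.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class ConsentGrant:
    """One user-granted permission: per category, per purpose, and time-limited."""
    category: str         # e.g. "quiz_answers" (hypothetical category)
    purpose: str          # e.g. "personalised feedback"
    granted_at: datetime
    valid_for: timedelta  # consent expires instead of lasting forever

    def is_active(self, now: datetime) -> bool:
        return now < self.granted_at + self.valid_for

@dataclass
class ConsentLedger:
    """All grants for one user; the absence of a grant means no consent."""
    grants: list[ConsentGrant] = field(default_factory=list)

    def permits(self, category: str, purpose: str, now: datetime) -> bool:
        return any(
            g.category == category and g.purpose == purpose and g.is_active(now)
            for g in self.grants
        )

ledger = ConsentLedger()
ledger.grants.append(ConsentGrant(
    category="quiz_answers",
    purpose="personalised feedback",
    granted_at=datetime(2023, 9, 1),
    valid_for=timedelta(days=30),
))
assert ledger.permits("quiz_answers", "personalised feedback", now=datetime(2023, 9, 15))
assert not ledger.permits("browsing_history", "personalised feedback", now=datetime(2023, 9, 15))
```

The point of the structure is that consent is scoped and expires: the tool checks the ledger before touching any data category, rather than treating a single sign-up click as a blanket opt-in.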

Expert Workshop on Data and Trust

The experts were asked to consider the following insights from the citizens: Citizens want control over their data and transparency about how it is used. They want to choose which data to give to an AI system in order to access specific functionalities, and they want the option to give no personal data at all. Discussions centred around:

  • Data Control: Citizens expressed the desire to have control over their data and the choice of what data to provide to AI systems. The tool should only request data that is necessary for service provision, avoiding over-collection.

  • Transparency in Data Use: The tool should, by default, delete user data when it is no longer needed for its intended purpose. If data is to be used for other purposes, users should be explicitly asked for consent (see the sketch after this list).

  • User-Friendly Explanation: Recognising that not all users are tech-savvy, the tool must provide contextual explanations about data use and system functionalities, ensuring that users can make informed decisions.

  • Trust-Building: Trust was identified as a core element. Exploring bottom-up infrastructure, such as data pods or data trusts, was suggested to enhance trustworthiness.
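
As a companion to the consent sketch above, the following fragment illustrates the delete-by-default and purpose-limitation ideas in the same hypothetical style: data is kept only while its original purpose still holds, and any use beyond that purpose triggers a fresh consent request. The record structure and field names are assumptions made for illustration, not the project's implementation.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class StoredRecord:
    category: str           # e.g. "shared_articles" (hypothetical category)
    purpose: str            # the purpose the data was collected for
    needed_until: datetime  # when that purpose is fulfilled

def purge_expired(records: list[StoredRecord], now: datetime) -> list[StoredRecord]:
    """Delete-by-default: keep only records still needed for their original purpose."""
    return [r for r in records if r.needed_until > now]

def reuse_requires_consent(record: StoredRecord, new_purpose: str) -> bool:
    """Any use beyond the original purpose must trigger a fresh consent request."""
    return new_purpose != record.purpose

records = [
    StoredRecord("quiz_answers", "personalised feedback", datetime(2023, 10, 1)),
    StoredRecord("shared_articles", "disinformation analysis", datetime(2023, 8, 1)),
]
records = purge_expired(records, now=datetime(2023, 9, 1))  # the second record is deleted
assert reuse_requires_consent(records[0], "research")       # new purpose: ask the user again
```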

Expert Workshop on AI Learning Tools and Citizens' Needs

The experts were asked to consider these insights from the citizens: Citizens fear AI suppressing free speech or creating an echo chamber of (dis)information. They also want a tool that is not too time-consuming, that is inclusive and non-invasive, and that can be engaged with at different levels. Discussion highlights included:

  • Balancing Purpose and Engagement: The tool should strike a balance between achieving its purpose and respecting users' time. Recognising that some users may only spend a limited amount of time with the tool, it should ensure that time is used effectively without compromising learning.

  • Formative Assessment: The tool, with its focus on critical thinking, should provide ongoing feedback to users, allowing them to track their progress and make informed choices (see the sketch after this list).

  • Web Accessibility: Accessibility for users with disabilities is paramount. Design considerations, such as symbols and colours, should be inclusive.

  • Social Interaction: Promoting user engagement through group dialogues, collaboration, or competition was suggested. A broader social network could enhance learning and user interaction.
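
To show what formative assessment could mean in practice, here is a small hypothetical Python sketch that records scores from critical-thinking exercises and returns running feedback rather than a single final verdict. The scoring scale and feedback messages are invented for illustration.

```python
from statistics import mean

class ProgressTracker:
    """Keeps a history of exercise scores and gives ongoing feedback,
    so users can see their progress rather than a one-off grade."""

    def __init__(self) -> None:
        self.scores: list[float] = []  # one score per exercise, 0.0 to 1.0

    def record(self, score: float) -> str:
        self.scores.append(score)
        recent = mean(self.scores[-2:])  # trend over the last two exercises
        overall = mean(self.scores)
        if recent > overall:
            return f"Improving: recent average {recent:.0%} vs overall {overall:.0%}."
        return f"Keep practising: recent average {recent:.0%} vs overall {overall:.0%}."

tracker = ProgressTracker()
tracker.record(0.4)
tracker.record(0.6)
print(tracker.record(0.8))  # "Improving: recent average 70% vs overall 60%."
```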


Adapting Insights into User Requirements

The insights from these workshops served as a wellspring of knowledge that researchers have now transformed into user requirements, which can be downloaded below.

By actively involving citizens in the decision-making process and by using experts to help translate their insights into user requirements, TITAN has a higher chance of creating an AI tool that not only meets the highest ethical standards but also truly serves the needs and values of its users. As we move forward, it is clear that collaboration between experts and citizens will be instrumental in building trustworthy and socially accepted AI tools.
