Mass media impact on opinion evolution in biased digital environments: a bounded confidence model
Pansanella V., Sîrbu A., Kertesz J., Rossetti G.
People increasingly shape their opinions by accessing and discussing content shared on social networking websites. These platforms contain a mixture of other users' shared opinions and content from mainstream media sources. While online social networks have fostered information access and diffusion, they also represent optimal environments for the proliferation of polluted information and content, which is argued to be among the co-causes of polarization/radicalization phenomena. Moreover, recommendation algorithms - intended to enhance platform usage - likely augment such phenomena, generating the so-called Algorithmic Bias. In this work, we study the effects of the combination of social influence and mass media influence on the dynamics of opinion evolution in a biased online environment, using a recent bounded confidence opinion dynamics model with algorithmic bias as a baseline and adding the possibility to interact with one or more media outlets, modeled as stubborn agents. We analyzed four different media landscapes and found that an open-minded population is more easily manipulated by external propaganda - moderate or extremist - while remaining undecided in a more balanced information environment. By reinforcing users' biases, recommender systems appear to help avoid the complete manipulation of the population by external propaganda.
Source: Scientific Reports (Nature Publishing Group) 13 (2023)
DOI: 10.1038/s41598-023-39725-y
Project(s): HumanE-AI-Net, SoBigData-PlusPlus
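The abstract above combines three ingredients: a bounded confidence update, an algorithmic-bias rule for partner selection, and media outlets acting as stubborn agents with fixed opinions. The following is a minimal sketch of how such a combination can be simulated; the parameter names (epsilon, gamma, mu, p_media, x_media) and the specific pairing and update rules are illustrative assumptions, not the exact specification used in the paper.

```python
import numpy as np

# Sketch: Deffuant-style bounded confidence with algorithmic bias
# plus one stubborn "media" agent. All parameters are illustrative.
rng = np.random.default_rng(0)

N = 500            # population size
epsilon = 0.3      # bounded-confidence threshold
gamma = 1.0        # algorithmic-bias exponent (0 recovers the unbiased baseline)
mu = 0.5           # convergence rate
p_media = 0.1      # probability of interacting with the media outlet
x_media = 0.05     # fixed (stubborn) opinion of the media outlet

x = rng.random(N)  # opinions in [0, 1]

def biased_partner(i):
    """Pick a partner for agent i with probability ~ |x_i - x_j|^(-gamma)."""
    d = np.abs(x - x[i])
    d[i] = np.inf                              # never pick yourself
    w = np.power(np.maximum(d, 1e-6), -gamma)  # closer opinions are favoured
    return rng.choice(N, p=w / w.sum())

for step in range(100_000):
    i = rng.integers(N)
    if rng.random() < p_media:
        # stubborn media outlet: only the regular agent moves
        if abs(x[i] - x_media) < epsilon:
            x[i] += mu * (x_media - x[i])
    else:
        j = biased_partner(i)
        if abs(x[i] - x[j]) < epsilon:
            # symmetric update toward each other (old values on the right-hand side)
            x[i], x[j] = x[i] + mu * (x[j] - x[i]), x[j] + mu * (x[i] - x[j])

print("final opinion range:", x.min(), x.max())
```

Different "media landscapes" can then be explored by varying the number of outlets, their fixed opinions x_media, and the interaction probability p_media.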
From mean-field to complex topologies: network effects on the algorithmic bias model
Pansanella V., Rossetti G., Milli L.
Nowadays, we live in a society where people often form their opinions by accessing and discussing content shared on social networking websites. While these platforms have fostered information access and diffusion, they represent optimal environments for the proliferation of polluted content, which is argued to be one of the co-causes of polarization/radicalization. Moreover, recommendation algorithms - intended to enhance platform usage - are likely to augment such phenomena, generating the so-called Algorithmic Bias. In this work, we study the impact that different network topologies have on the formation and evolution of opinions in the context of a recent opinion dynamics model that includes bounded confidence and algorithmic bias. Mean-field, scale-free and random topologies, as well as networks generated by the Lancichinetti-Fortunato-Radicchi benchmark, are compared in terms of opinion fragmentation/polarization and time to convergence.
Source: COMPLEX NETWORKS 2021 - Tenth International Conference on Complex Networks and Their Applications, pp. 329–340, Madrid, Spain, 30/11-2/12/2021
DOI: 10.1007/978-3-030-93413-2_28
Project(s): SoBigData-PlusPlus
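Moving from the mean-field setting to complex topologies amounts to restricting interaction partners to a node's neighbours. A possible sketch of such a comparison is shown below, using networkx generators for random and scale-free graphs; graph sizes, parameters, and the crude fragmentation measure are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np
import networkx as nx

# Sketch: the same bounded-confidence + algorithmic-bias update,
# but partners are drawn only among graph neighbours. Parameters
# and the cluster-count measure are illustrative. The LFR benchmark
# (nx.LFR_benchmark_graph) can be substituted for community-structured topologies.
rng = np.random.default_rng(0)
N, epsilon, gamma, mu = 500, 0.3, 1.0, 0.5

topologies = {
    "random (Erdos-Renyi)": nx.erdos_renyi_graph(N, 0.02, seed=0),
    "scale-free (Barabasi-Albert)": nx.barabasi_albert_graph(N, 5, seed=0),
}

def simulate(G, steps=100_000):
    x = rng.random(N)
    for _ in range(steps):
        i = rng.integers(N)
        nbrs = list(G.neighbors(i))
        if not nbrs:
            continue
        d = np.abs(x[nbrs] - x[i])
        w = np.power(np.maximum(d, 1e-6), -gamma)   # algorithmic bias over neighbours
        j = rng.choice(nbrs, p=w / w.sum())
        if abs(x[i] - x[j]) < epsilon:
            x[i], x[j] = x[i] + mu * (x[j] - x[i]), x[j] + mu * (x[i] - x[j])
    return x

for name, G in topologies.items():
    x = simulate(G)
    # crude fragmentation proxy: count opinion clusters separated by gaps > epsilon
    clusters = 1 + int(np.sum(np.diff(np.sort(x)) > epsilon))
    print(f"{name}: {clusters} opinion cluster(s)")
```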
Towards a social Artificial Intelligence
Pedreschi D., Dignum F., Morini V., Pansanella V., Cornacchia G.
Artificial Intelligence can both empower individuals to face complex societal challenges and exacerbate problems and vulnerabilities, such as bias, inequalities, and polarization. For scientists, an open challenge is how to shape and regulate human-centered Artificial Intelligence ecosystems that help mitigate harms and foster beneficial outcomes oriented toward the social good. In this tutorial, we discuss this issue from two sides. First, we explore the network effects of Artificial Intelligence and their impact on society by investigating its role in social media, mobility, and economic scenarios. We further provide different strategies that can be used to model, characterize and mitigate the network effects of particular Artificial Intelligence-driven individual behaviors. Second, we promote the use of behavioral models as a complement to the data-based approach, to get a further grip on emerging phenomena in society that depend on physical events for which no data are readily available. An example of this is tracking extremist behavior in order to prevent violent events. Finally, we illustrate some case studies in depth and provide the appropriate tools to become familiar with these concepts.
Source: Human-Centered Artificial Intelligence, edited by Chetouani M., Dignum V., Lukowicz P., Sierra C., pp. 415–428, 2023
DOI: 10.1007/978-3-031-24349-3_21