How human–AI feedback loops alter human perceptual, emotional and social judgements

NATURE, 18/12/2024

Shared by: Seddik Touaoula


"We begin by collecting human data in an emotion aggregation task in which human judgement is slightly biased. We then demonstrate that training an AI algorithm on this slightly biased dataset results in the algorithm not only adopting the bias but further amplifying it. Next, we show that when humans interact with the biased AI, their initial bias increases (Fig. 1a; human–AI interaction). This bias amplification does not occur in an interaction including only human participants (Fig. 1b; human–human interaction).

Fifty participants performed an emotion aggregation task (adapted from refs. 41,42,43,44). On each of 100 trials, participants were presented briefly (500 ms) with an array of 12 faces and were asked to report whether the mean emotion expressed by the faces in the array was more sad or more happy (Fig. 1a; level 1). The faces were sampled from a dataset of 50 morphed faces, created by linearly interpolating between sad and happy expressions (Methods). Based on the morphing ratio, each face was ranked from 1 (100% sad face) to 50 (100% happy face)..."
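
To make the trial structure concrete, here is a minimal Python sketch of one block of the emotion aggregation task as described in the excerpt. The uniform sampling of face ranks and the random seed are assumptions made only for illustration; the paper's Methods section specifies how the arrays were actually constructed.

```python
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed for reproducibility

# Face ranks as described in the excerpt: 1 = 100% sad, 50 = 100% happy,
# derived from the morphing ratio between the two expressions.
N_RANKS = 50
FACES_PER_ARRAY = 12
MIDPOINT = (1 + N_RANKS) / 2  # 25.5: arrays averaging above this are "more happy"

def simulate_trial():
    """Draw one 12-face array and return its ranks, mean rank and correct response.

    Uniform sampling of ranks is an assumption for this sketch; the actual
    arrays were built as described in the paper's Methods.
    """
    ranks = rng.integers(1, N_RANKS + 1, size=FACES_PER_ARRAY)
    mean_rank = ranks.mean()
    correct = "more happy" if mean_rank > MIDPOINT else "more sad"
    return ranks, mean_rank, correct

# One block of 100 trials, as in the task; print the first few for inspection.
trials = [simulate_trial() for _ in range(100)]
for ranks, mean_rank, correct in trials[:3]:
    print(ranks, f"mean rank = {mean_rank:.2f} ->", correct)
```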
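
The feedback loop summarised in the first quoted paragraph can also be written as a toy update rule: a small human bias is exaggerated by the trained model, and repeated exposure to the model's judgements pulls the human bias toward the model's. Every value below (bias size, amplification factor, adoption rate, number of rounds) is a hypothetical parameterisation, not the paper's model; the sketch only illustrates why a human–AI loop can amplify a bias while a human–human loop leaves it roughly unchanged.

```python
# Schematic of the human–AI feedback loop described above.
# All parameter values are assumptions for illustration only.

HUMAN_BIAS = 0.05        # small initial human bias (e.g. toward "more sad")
AI_AMPLIFICATION = 2.0   # assume the trained model exaggerates the learned bias
ADOPTION_RATE = 0.3      # how strongly a person drifts toward a partner's judgements
ROUNDS = 5               # repeated interaction rounds

def run_loop(partner_bias, human_bias=HUMAN_BIAS):
    """Return the human's bias after repeatedly interacting with a partner."""
    for _ in range(ROUNDS):
        # Each round, the human's bias moves part of the way toward the partner's.
        human_bias += ADOPTION_RATE * (partner_bias - human_bias)
    return human_bias

ai_bias = AI_AMPLIFICATION * HUMAN_BIAS       # bias exhibited by the trained model
print(f"Initial human bias:            {HUMAN_BIAS:.3f}")
print(f"After human-AI interaction:    {run_loop(ai_bias):.3f}")
print(f"After human-human interaction: {run_loop(HUMAN_BIAS):.3f}")
```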