Designers should offer procedural transparency in algorithmic interfaces to maintain user trust when expectations are violated.
About this paper
The author investigates how varying levels of transparency in algorithmic interfaces impact user trust in the context of peer assessment.
The study finds that a balanced approach to transparency is crucial: when users' expectations are violated, too little or too much information can erode trust, whereas a moderate level of procedural transparency helps maintain it.
Here are some methods used in this study: a controlled experiment crossing expectation violation (the received grade either matched or was worse than expected) with interface transparency (low, medium, high), with trust measured via a trust index and the hypotheses tested with a 2 by 3 ANOVA.
Which part of the paper did the design guideline come from?
“Figure 1 shows the average trust index for participants who either received a grade that matched their expectations or one that violated expectations. A 2 (expectations violated vs. not violated) by 3 (transparency: low, medium, high) ANOVA was conducted to test the first two hypotheses. Consistent with H1, trust was lower when the received grade was worse than expected (F(1, 97) = 13.4, p < 0.001, ηp² = 0.12). Moreover, as hypothesized in H2, this gap (...)” (‘Results’ section)
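For illustration, the sketch below shows how a 2 (expectation violation) by 3 (transparency) ANOVA on a trust index could be run in Python with pandas and statsmodels. The data, cell sizes, and column names (violated, transparency, trust) are hypothetical placeholders, not the paper's actual dataset or analysis code.

```python
# Illustrative sketch with simulated data (not the paper's dataset), showing how a
# 2 (expectation violated: yes/no) x 3 (transparency: low/medium/high) ANOVA on a
# trust index could be computed with pandas + statsmodels.
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n_per_cell = 17  # placeholder cell size, not the study's actual N

rows = []
for violated in ["no", "yes"]:
    for transparency in ["low", "medium", "high"]:
        # Simulated trust ratings on a 1-7 scale; effect sizes here are made up.
        base = 5.0 - (1.0 if violated == "yes" else 0.0)
        scores = rng.normal(loc=base, scale=1.0, size=n_per_cell).clip(1, 7)
        for score in scores:
            rows.append({"violated": violated,
                         "transparency": transparency,
                         "trust": score})

df = pd.DataFrame(rows)

# Two-way ANOVA with interaction, analogous to the 2 x 3 design in the quoted excerpt.
model = ols("trust ~ C(violated) * C(transparency)", data=df).fit()
print(anova_lm(model, typ=2))
```

The anova_lm output lists F statistics and p-values for the main effects of expectation violation and transparency and for their interaction, which is the kind of test summarized by the F(1, 97) result in the quoted excerpt.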
Kizilcec, R. F. (2016). How Much Information? Effects of Transparency on Trust in an Algorithmic Interface. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16).