Interaction quality as it relates to trust, together with the construct of relationship equity, was empirically examined in this study as participants interacted with a computerized agent.
Participants performed a pattern recognition task while receiving recommendations from a reliable or an unreliable AI agent over the course of 40 or 120 trials.
Behavioral trust was measured through compliance with the agent's recommendations, and subjective trust was measured via surveys administered throughout the experiment.
In all conditions, the agent committed a catastrophic error, after which trust resilience was measured and participants completed an AI-assisted transfer task.
Results indicated that relationship equity affects compliance with the agent; however, no effect of frequency of agent interaction was found.
The full article can be found here.
A hierarchical linear modeling approach for the analysis of the subjective trust survey data can also be found here.
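To illustrate what a hierarchical linear model for repeated subjective-trust ratings can look like, here is a minimal sketch using `statsmodels`' mixed-effects interface. The data are simulated and all column names (`participant`, `time`, `reliable`, `trust`) are hypothetical, not taken from the study; the model simply nests repeated survey measurements within participants via a random intercept.

```python
# Hypothetical HLM sketch: random intercept per participant for repeated
# subjective-trust survey ratings. Simulated data; illustrative names only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_participants, n_surveys = 30, 4

df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_participants), n_surveys),
    "time": np.tile(np.arange(n_surveys), n_participants),
    # 0 = unreliable-agent condition, 1 = reliable-agent condition
    "reliable": np.repeat(rng.integers(0, 2, n_participants), n_surveys),
})

# Simulated trust rating: agent reliability raises trust, with a
# per-participant random offset plus measurement noise.
df["trust"] = (
    3.0 + 0.8 * df["reliable"] + 0.1 * df["time"]
    + np.repeat(rng.normal(0, 0.5, n_participants), n_surveys)
    + rng.normal(0, 0.3, len(df))
)

# Random-intercept model: fixed effects for condition and survey time,
# grouping (level-2 units) by participant.
model = smf.mixedlm("trust ~ reliable + time", df, groups=df["participant"])
result = model.fit()
print(result.summary())
```

The random intercept absorbs stable between-participant differences in baseline trust, so the fixed-effect estimates for `reliable` and `time` reflect within- and between-condition effects rather than being confounded by individual response tendencies.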