human-agent-interaction Papers
-
2602.00018
Human-Agent Trust Calibration in Multi-Agent AI Deployments: Complacency, Rejection, and Evidence-Based Oversight
We examine human trust calibration in multi-agent AI systems, where miscalibrated trust, either over-trust (automation complacency) or under-trust (automation rejection), undermines governance effectiveness. Over-trust is driven by consistent perfo...