Challenge
LLM chatbots are being deployed in a wide range of service contexts, including those where the customer is under considerable stress and their information-processing abilities are highly constrained.
Approach
We examine how the design cues used to outline the chatbot’s role should be matched to the cognitive load induced by the service environment. To do this, we developed an LLM-powered roadside assistance chatbot and tested it in an online experiment.
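As an illustration of how such role-schema cues could be operationalized, the sketch below assembles system prompts with low versus high cue variation for an LLM chatbot. This is a minimal, hypothetical example: the prompt wording, condition names, and the build_messages helper are assumptions for illustration, not the project’s actual study materials.

```python
# Hypothetical sketch: operationalizing chatbot role schemas as system prompts
# with low vs. high cue variation. Prompt wording and condition names are
# illustrative assumptions, not the study's actual materials.

ROLE_SCHEMAS = {
    # Low-load schema: a single, consistent role cue.
    "low_cue_variation": (
        "You are a roadside assistance agent. "
        "Help the customer report their breakdown and arrange help."
    ),
    # High-load schema: multiple, varied role cues for the same agent.
    "high_cue_variation": (
        "You are a roadside assistance agent, technical troubleshooter, "
        "insurance advisor, and empathetic companion. Switch between these "
        "roles as the conversation requires while arranging help."
    ),
}

def build_messages(condition: str, user_utterance: str) -> list[dict]:
    """Prepend the condition's role-schema cue to the customer's message."""
    return [
        {"role": "system", "content": ROLE_SCHEMAS[condition]},
        {"role": "user", "content": user_utterance},
    ]

if __name__ == "__main__":
    msgs = build_messages("low_cue_variation", "My car broke down on the A1.")
    # In the experiment, this message list would be passed to an LLM
    # chat-completion API; here we only print the assembled conversation.
    for m in msgs:
        print(f"{m['role']}: {m['content']}")
```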
Expected Results
We hypothesize that in high-load environments (i.e., complex and unfamiliar), low-load chatbot role schemas (i.e., a low degree of cue variation) are preferable because they do not further burden information processing while it is already under strain. Conversely, in low-load environments (i.e., simple and familiar), high-load chatbot role schemas (i.e., a high degree of cue variation) are preferable because they increase the customer’s information-processing rate.
Lead Researcher
Dr. Joseph Ollier
Project Status
Ongoing