Would you want transparent explanations about how your data is used in personalised healthcare services?
Most people say yes, but in practice, these explanations only increase privacy concerns — a phenomenon termed the Personalisation–Privacy paradox.
In our new paper, authored by Dr. Joseph Ollier in collaboration with Dr. Marcia Nißen and Prof. Dr. Florian von Wangenheim, we explore how chatbots can help resolve this paradox by offering different types of privacy assurances, tested across services varying in their degree of personalisation.
Core take-aways:
– Privacy assurances can work well in both high- and low-personalisation services, depending on exactly what is emphasised (e.g., protection steps, who owns the data, partnership with users, etc.).
– In low-personalisation services, users skim assurances, forming only a surface-level assessment of the degree of control they have over the firm.
– In high-personalisation services, users process assurances more deeply, improving both privacy outcomes and perceived collaboration with the chatbot.
Many thanks to CSS for their support of this project, which highlights better ways to engage with users on data privacy in digital health settings.
The open-access paper can be downloaded here: https://lnkd.in/eNKTNZvW

