Exploring Audience’s Attitudes Towards Machine Learning-based Automation in Comment Moderation

Müller Kilian, Koelmann Holger, Niemann Marco, Plattfaut Ralf, Becker Jörg


Abstract

Digital technologies, particularly the internet, have created unprecedented opportunities to inform oneself freely, debate, and share thoughts. However, the reduced control exercised by traditional gatekeepers such as journalists has also led to a surge in problematic (e.g., fake news), outright abusive, and hateful content (e.g., hate speech). Under ethical and often legal pressure, many platform operators respond to the onslaught of abusive user-generated content by introducing automated, machine learning-enabled moderation tools. Although intended to protect online audiences, such systems have far-reaching implications for free speech, algorithmic fairness, and algorithmic transparency. We present a large-scale survey experiment that aims to illuminate how the degree of transparency influences commenters' acceptance of a machine-made moderation decision, depending on its outcome. With the presented study design, we seek to determine the level of transparency required for automated comment moderation to be accepted by commenters.

Keywords
Community Management; Machine Learning; Content Moderation; Algorithmic Transparency; Freedom of Expression



Publication type
Research article in online collection (conference)

Peer-reviewed
Yes

Publication status
Published

Year
2022

Conference
17. Internationale Tagung Wirtschaftsinformatik (WI 2022)

Conference location
Nürnberg, Germany

Language
English

Full text