August 28, 2024

Can worker preferences be encoded in algorithmic management tools? The role of preference elicitation

Algorithms in the consumer world are designed to anticipate our every desire, and fulfil it. Can workplace algorithms deliver on workers’ preferences in a similar way?

Algorithmic systems in workplaces can assign tasks or shifts to workers, analyse and evaluate their performance, and optimise workflows in factories, warehouses, and transportation. However, the use of such systems can have adverse effects on workers: it can intensify work, lead to erratic schedules, and reduce workers’ sense of agency and autonomy (Nguyen and Mateescu 2019; Lee et al. 2015).

As a philosopher, I am interested in the ethics of algorithmic systems, especially in the question of whether integrating worker preferences in the design of algorithmic systems can contribute to mitigating their risks. For example, algorithms could allocate workers their preferred shifts, or adapt to the workers’ preferred pace or settings. Doing so might improve the autonomy of workers - understood as the control and discretion workers have over their day-to-day work - compared to a system that does not consider worker preferences (Unruh et al. 2022, 760).

In a recent research project at the Technical University of Munich, I worked in an interdisciplinary team with colleagues from engineering and the social sciences to create a prototype of a shift scheduling algorithm that considers worker preferences. In this project, we created an interface that allows workers to provide their preferences for tasks across different categories. To give a hypothetical example, there could be a category ‘travel’, where workers can express a preference for or against tasks that require travelling to different work sites. Workers can also prioritise categories that are especially important to them by distributing points between categories. For example, a worker with a very strong preference not to travel for work could allocate more points to the category ‘travel’ and fewer points to other categories, thereby effectively giving this preference more weight than the preferences expressed in other categories. The algorithm then creates a schedule that aims to give workers their preferred workplaces and shifts, while ensuring that all shifts and workplaces are adequately staffed with qualified workers and observing additional constraints, such as requirements for regular rotation between tasks for ergonomic reasons (Haid et al. 2022; for a different and very interesting approach to preference elicitation, see Lee et al. 2021).
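To make the weighting idea concrete, here is a minimal, hypothetical sketch in Python. The category names, workers, shifts, and the brute-force assignment step are illustrative assumptions for this post, not the project’s actual implementation, which used an optimisation model with staffing, qualification, and rotation constraints (Haid et al. 2022).

```python
# Hypothetical sketch of point-weighted preference elicitation and
# shift assignment. All names and numbers are illustrative.
from itertools import permutations

CATEGORIES = ["travel", "night_shift", "heavy_lifting"]

# Each worker distributes 100 priority points across the categories and
# states a direction for each: +1 = prefers such tasks, -1 = prefers to avoid.
workers = {
    "A": {"points": {"travel": 70, "night_shift": 20, "heavy_lifting": 10},
          "prefs":  {"travel": -1, "night_shift": +1, "heavy_lifting": +1}},
    "B": {"points": {"travel": 20, "night_shift": 60, "heavy_lifting": 20},
          "prefs":  {"travel": +1, "night_shift": -1, "heavy_lifting": +1}},
    "C": {"points": {"travel": 34, "night_shift": 33, "heavy_lifting": 33},
          "prefs":  {"travel": +1, "night_shift": +1, "heavy_lifting": -1}},
}

# Each shift is described by which category attributes it involves (1 or 0).
shifts = {
    "site_visit":   {"travel": 1, "night_shift": 0, "heavy_lifting": 0},
    "night_stores": {"travel": 0, "night_shift": 1, "heavy_lifting": 1},
    "day_assembly": {"travel": 0, "night_shift": 0, "heavy_lifting": 0},
}

def satisfaction(worker: str, shift: str) -> int:
    """Point-weighted match between one worker's preferences and one shift."""
    w = workers[worker]
    return sum(w["points"][c] * w["prefs"][c] * shifts[shift][c]
               for c in CATEGORIES)

def best_assignment():
    """Brute-force the one-to-one assignment maximising total satisfaction.
    A real system would use a constraint solver and add staffing,
    qualification, and ergonomic-rotation constraints."""
    shift_names = list(shifts)
    best, best_score = None, float("-inf")
    for order in permutations(workers):
        score = sum(satisfaction(w, s) for w, s in zip(order, shift_names))
        if score > best_score:
            best, best_score = dict(zip(order, shift_names)), score
    return best, best_score

if __name__ == "__main__":
    assignment, score = best_assignment()
    # Prints {'C': 'site_visit', 'A': 'night_stores', 'B': 'day_assembly'} 64
    print(assignment, score)
```

Even in this toy version, the points do real work: worker A’s heavy anti-travel weighting is what steers the site visit to worker C.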

Despite the usefulness of such systems in certain use cases, we should be careful not to overstate the case for preference elicitation. The practical implementation of such systems raises important practical and ethical questions. Can all preferences be quantified, and should they be? How can privacy and data protection be ensured when eliciting preference data? How often do preferences need to be elicited to prevent what Delgado et al. call ‘fossilised preference models’ (2023, 12) trained on outdated data? How should the system decide when preferences conflict? And who is accountable when something goes wrong?

Integrating worker preferences in algorithmic systems needs to proceed carefully and responsibly. In cases where using an algorithmic system is beneficial, concepts and principles from AI ethics, such as privacy, fairness, and accountability, can help to design systems that consider worker preferences in responsible ways. Here, much depends on the specific context of the use case. For example, privacy requirements might differ depending on the kind of data that is being collected. Who has control over the system and can make manual adjustments when necessary might depend on organisational structure and setup. What counts as a fair distribution of shifts might differ between work communities and groups. These questions need to be considered during the design, development, and implementation of a system.

Further, preference elicitation is not sufficient for improving workers’ autonomy. Autonomy can be infringed by an algorithmic system even when workers can input their preferences into the system. For example, if there is only a limited range of options available for workers to choose from, and if workers cannot express preferences over what really matters to them, then the system might not increase their autonomy in a meaningful way. Consider, for example, a worker who has a strong preference for a certain shift pattern, but the system only allows the worker to input preferences regarding the kinds of tasks that get allocated. Moreover, the autonomy of workers might also be infringed if workers have no say in decisions about whether an algorithm that affects them is used, for what purpose it is used, and how it is designed, developed, and deployed.

Finally, it matters which method is used to elicit preferences, because different elicitation methods can affect autonomy in different ways (Gal 2018). For example, imagine an algorithmic system that automatically predicts worker preferences based on data about the workers and adjusts shift and task allocation accordingly. Such a system, even if it accurately predicts the worker’s preferences, removes the worker’s ability to make a choice. In contrast, a system that allows workers to explicitly express their choices leaves the act of choosing an option and expressing a preference to workers. The variety of potential impacts on workers’ autonomy, and the scope of additional ethically relevant considerations raised by such technologies, highlight the need to include workers in decision-making about the use of technology at work, and to develop ethical frameworks that go beyond autonomy, such as moral frameworks based on the value of human dignity (Lamers et al. 2022).

Approaches for eliciting worker preferences in designing algorithmic systems can bring benefits for both employers and workers. However, much depends on the details of the design and implementation (for discussion of the effects of algorithms on choice, see Gal 2018, 70–75). It matters, for example, how preferences are elicited, who makes decisions about the design of the system, and how values such as privacy, autonomy, and dignity are respected. Piloting and evaluating tools that encourage or facilitate preference elicitation are vital to developing more specific frameworks that ensure these approaches are used in responsible ways.

Preference elicitation should be embedded within wider frameworks and processes for the governance of technology at work, by and with workers. The IFOW model for Good Work Algorithmic Impact Assessment, which includes preference elicitation as a possible ‘mitigation’ for algorithmic management tools, also sets out a process whereby workers shape the nature of the technology from its inception, procurement, or design. Stay tuned for developments in the IFOW Sandbox to see how to get preference elicitation right.

Dr Charlotte Unruh is a Lecturer in Philosophy at the University of Southampton. Charlotte specialises in normative and applied ethics. In the philosophy of work, she is especially interested in meaningful work and the ethics of artificial intelligence at the workplace. The project “A Human Preference-Aware Optimization System” (PIs Professor Tim Büthe and Professor Johannes Fottner) at the Technical University of Munich was funded by the TUM Institute for Ethics in AI.

References

Delgado, Fernando, Stephen Yang, Michael Madaio, and Qian Yang. 2023. “The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice.” In Equity and Access in Algorithms, Mechanisms, and Optimization, 1–23. Boston, MA, USA: ACM. https://doi.org/10.1145/3617694.3623261.

Gal, Michal S. 2018. “Algorithmic Challenges to Autonomous Choice.” Michigan Technology Law Review 25 (1): 59–104.

Haid, Charlotte, Sebastian Stohrer, Charlotte Franziska Unruh, Tim Büthe, and Johannes Fottner. 2022. “Accommodating Employee Preferences in Algorithmic Worker-Workplace Allocation.” In Applied Human Factors Engineering. https://doi.org/10.54941/ahfe1002235.

Lamers, Laura, Jeroen Meijerink, Giedo Jansen, and Mieke Boon. 2022. “A Capability Approach to Worker Dignity under Algorithmic Management.” Ethics and Information Technology 24 (1): 10. https://doi.org/10.1007/s10676-022-09637-y.

Lee, Min Kyung, Daniel Kusbit, Evan Metsky, and Laura Dabbish. 2015. “Working with Machines: The Impact of Algorithmic and Data-Driven Management on Human Workers.” In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 1603–12. CHI ’15. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/2702123.2702548.

Lee, Min Kyung, Ishan Nigam, Angie Zhang, Joel Afriyie, Zhizhen Qin, and Sicun Gao. 2021. “Participatory Algorithmic Management: Elicitation Methods for Worker Well-Being Models.” In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 715–26. AIES ’21. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3461702.3462628.

Nguyen, Aiha, and Alexandra Mateescu. 2019. “Explainer: Algorithmic Management in the Workplace.” Data & Society Research Institute, February 6, 2019. https://datasociety.net/library/explainer-algorithmic-management-in-the-workplace/.

Unruh, Charlotte Franziska, Charlotte Haid, Johannes Fottner, and Tim Büthe. 2022. “Human Autonomy in Algorithmic Management.” In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, 753–62. Oxford, United Kingdom: ACM. https://doi.org/10.1145/3514094.3534168.
