Algorithmic Nudges Don’t Have to Be Unethical

“Nudging,” the strategy of changing users’ behavior based on how apparently free choices are presented to them, has come a long way since the concept was popularized by University of Chicago economist Richard Thaler and Harvard Law School professor Cass Sunstein in 2008. With so much data about individual users and the AI to process it, companies are increasingly using algorithms to manage and control individuals, and in particular employees. This has implications for workers’ privacy and has been deemed by some to be manipulation. The author outlines three ways that companies can take advantage of these strategies while staying within ethical bounds: creating win-win situations, sharing information about data practices, and being transparent about the algorithms themselves.

Companies are increasingly using algorithms to manage and control individuals not by force, but rather by nudging them into desirable behavior: in other words, learning from their personalized data and altering their choices in some subtle way. Since the Cambridge Analytica scandal broke in 2018, for example, it has been widely known that the flood of targeted advertising and highly personalized content on Facebook may not only nudge users into buying more products but also coax and manipulate them into voting for particular political parties.

University of Chicago economist Richard Thaler and Harvard Law School professor Cass Sunstein popularized the term “nudge” in 2008, but thanks to recent advances in AI and machine learning, algorithmic nudging is much more powerful than its non-algorithmic counterpart. With so much data about workers’ behavioral patterns at their fingertips, companies can now develop personalized strategies for changing individuals’ decisions and behaviors at scale. These algorithms can be adjusted in real time, making the approach even more effective.

Algorithmic nudging tactics are increasingly being employed in work environments, with companies using texts, gamification, and push notifications to influence their workforce. For example, ride-hailing company Uber has employed the psychological trick of awarding badges to incentivize its more than 3 million independent drivers to work longer hours without forcing them to do so. Likewise, Deliveroo pushes notifications to its food delivery workers’ smartphones to nudge them into working faster.

For many companies, nudging workers is a promising approach to achieving organizational goals through higher worker performance and/or cost savings. For example, Virgin Atlantic has reportedly shaped employee behavior by nudging its pilots to use less fuel, helping the British airline substantially reduce costs, and Google has reportedly employed nudging to incentivize its workforce to eat healthier snacks and reduce food waste in the cafeteria. In our four years of research on Uber, my colleagues and I found that compared to traditional management, algorithmic management and nudging allowed the company to efficiently coordinate a large workforce at relatively low cost.

But companies should be wary: These practices have come under criticism for their questionable ethics and are of increasing concern to regulators and the broader public. Challenges come largely in the form of attention to privacy violations, accusations that nudges manipulate unwitting individuals to their disadvantage, and concern about algorithmic transparency and bias. Currently, the companies that employ such techniques are largely those in the gig economy, where workers are not considered employees. That designation has largely protected employers from regulation in this area, but this may be starting to change.

For example, in July 2020 British Uber drivers filed a lawsuit against the company, claiming that it was failing to fulfill its legal obligations under Europe’s General Data Protection Regulation (GDPR) and citing its lack of transparency about its algorithms. Likewise, in the U.S., the Federal Trade Commission has repeatedly funded research studies and published consumer guidelines to promote consumer privacy and algorithmic accountability. Such privacy-related concerns have been fueled by news about questionable corporate practices such as Amazon’s employee wristbands, which can vibrate to point warehouse workers in the direction of a product but which also track the employee’s every move.

But companies should not give up on algorithmic management and nudging. Building on approaches that worked, and those that didn’t, at Uber, as well as those I have observed through publicly available reporting about Facebook, Amazon, and Google, here are three recommendations for companies that want to engage in algorithmic management and nudging while avoiding ethical and regulatory snares:

Create a Win-Win Situation

Uber’s nudging of drivers, Facebook’s nudging of social media users, and Amazon’s nudging of workers so clearly serve the interests of the companies over those of the consumers or workers that it’s easy to assume that all nudging goes against the interests of the target individual. However, I believe that companies can create win-win situations in which nudging benefits everyone involved.

Research by Thaler and Sunstein indicates that nudging can encourage individuals to improve their own health, wealth, and happiness through positive reinforcement of their decisions.

Translating this to the context of work and algorithmic management, then, organizations should seek to implement AI-powered and personalized reward systems that also benefit the worker. While I don’t know of any specific companies doing this now, I can imagine a world in which these systems nudge workers to increase their own safety and convey to them a feeling of appreciation. In the future, companies could enroll workers in retirement savings programs by default in order to encourage them to save more. Many of these behaviors in turn help the company as well, as employee safety and satisfaction benefit the bottom line.
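
To make this concrete, here is a minimal sketch of what such a default-enrollment nudge might look like in code. It is purely illustrative: the Worker record, the 5% default contribution rate, and the opt-out mechanics are assumptions, not any company’s actual system. The point is that the nudge sets a beneficial default while keeping the choice to leave one step away.

```python
from dataclasses import dataclass


@dataclass
class Worker:
    worker_id: str
    retirement_opt_out: bool = False  # enrolled by default unless the worker opts out
    contribution_rate: float = 0.05   # hypothetical 5% default contribution


def default_enroll(worker: Worker) -> str:
    """Apply the default-enrollment nudge while keeping opt-out one step away."""
    if worker.retirement_opt_out:
        return f"{worker.worker_id}: not enrolled (opted out)"
    return (f"{worker.worker_id}: enrolled at {worker.contribution_rate:.0%} "
            f"(you can opt out at any time in settings)")


print(default_enroll(Worker("driver-001")))                           # enrolled at 5%
print(default_enroll(Worker("driver-002", retirement_opt_out=True)))  # opt-out respected
```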

While Uber’s original nudges were entirely focused on outcomes that benefited the company rather than its workers, more recently the company has taken first steps toward a more balanced approach through its Uber Pro rewards program. Drivers who meet certain requirements, such as high ratings and low cancellation rates, can now access rewards ranging from discounted gym memberships to cash back on fuel purchases. While more work is needed to fully respond to drivers’ needs, the current program is a starting point.

Share Information About Data Collection and Storage

Algorithmically driven nudging depends on access to vast amounts of fine-grained data about employees’ preferences and past behavior. While tech giants such as Google and Amazon collect this kind of data about users’ web searches, clicks, likes, and purchase decisions, Uber collects data about its drivers’ every move, such as their GPS location, driving and speeding habits, and ride acceptance rate.

Companies engaging in algorithmic nudging need to be transparent about the collection and storage of this data. GDPR sets a good standard here: it requires companies to actively provide information about the nature of the personal data they collect and store about users (including workers!). For example, Twitter has updated its terms of service and privacy settings to comply with GDPR, among other things ensuring that communications to users about their data usage are concise, transparent, easily accessible, and written in clear and plain language.

Relatedly, it is a widespread practice for companies to sell data to third parties. Companies should avoid this where possible, or at least properly disclose their intent to consumers and workers, for instance via a machine-readable register of data practices like the sketch below.
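
As a sketch of what such a disclosure could look like, consider a small register of data practices that can be rendered into the plain language GDPR calls for. The fields, purposes, and retention periods below are invented for illustration; this is not a compliance template.

```python
# Hypothetical register of data practices; every entry is illustrative.
DATA_PRACTICES = [
    {"field": "GPS location", "purpose": "trip routing and fare calculation",
     "retention_days": 90, "shared_with_third_parties": False},
    {"field": "ride acceptance rate", "purpose": "rewards-program tiering",
     "retention_days": 365, "shared_with_third_parties": False},
    {"field": "app usage events", "purpose": "product analytics",
     "retention_days": 30, "shared_with_third_parties": True},
]


def plain_language_notice(practices) -> str:
    """Render the register as short, plain-language sentences."""
    lines = []
    for p in practices:
        sharing = ("shared with third parties" if p["shared_with_third_parties"]
                   else "never shared with third parties")
        lines.append(
            f"We collect your {p['field']} for {p['purpose']}, keep it for "
            f"{p['retention_days']} days, and it is {sharing}."
        )
    return "\n".join(lines)


print(plain_language_notice(DATA_PRACTICES))
```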

Explain the Algorithm’s Logic

Making it clear to employees what the algorithm is doing with their data is also critical to staying within ethical bounds. Individuals who are significantly affected by the outcomes of machine learning models are owed an accounting of how a particular decision was made. This is especially important when it comes to profiling: the use of digital technologies to tailor algorithmic decisions to a specific individual. Transparency is crucial to allow workers to make informed decisions about whether to opt out of the algorithm, and to show that an automated decision is not prone to racial or gender bias.

However, sharing information such as why some workers are treated differently from others, or what the desired outcome of a specific nudging strategy is, can be a challenge, especially given that these algorithms dynamically adapt to a changing environment. Even so, more and more companies are investing in explainable AI, employing techniques to ensure that complex computational outcomes can be understood by human stakeholders.

One way to approach this problem is with counterfactual explanations. These show what the outcome of a decision-making algorithm would have been for a specific individual if they had had different characteristics or attributes, a simple and non-technical way to show how the algorithm works.

For example, Uber could share detailed information about which factors (such as driver rating, ride acceptance rate, and number of customer complaints) affect each driver’s classification into Gold, Platinum, or Diamond status in the rewards program. Even better would be sharing exactly what rating, acceptance rate, or number of customer complaints would result in a specific driver being promoted to the next level, as the sketch below illustrates.
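
Here is a simplified sketch of how such a counterfactual explanation could be generated. The tiering rule and every threshold in it are invented for illustration (Uber’s actual Uber Pro criteria are not public in this form), but the pattern generalizes: classify the individual, then report the minimal attribute changes that would move them to the target tier.

```python
from dataclasses import dataclass


@dataclass
class DriverStats:
    rating: float            # e.g., 4.87 out of 5
    acceptance_rate: float   # fraction of ride requests accepted
    complaints: int          # customer complaints this quarter


# (tier, min_rating, min_acceptance, max_complaints); thresholds are hypothetical
TIERS = [
    ("Diamond",  4.90, 0.90, 1),
    ("Platinum", 4.85, 0.85, 3),
    ("Gold",     4.80, 0.80, 5),
]


def classify(s: DriverStats) -> str:
    """Return the highest tier whose requirements the driver meets."""
    for tier, min_r, min_a, max_c in TIERS:
        if s.rating >= min_r and s.acceptance_rate >= min_a and s.complaints <= max_c:
            return tier
    return "Base"


def counterfactual(s: DriverStats, target: str) -> list[str]:
    """List the minimal attribute changes needed to reach the target tier."""
    _, min_r, min_a, max_c = next(t for t in TIERS if t[0] == target)
    changes = []
    if s.rating < min_r:
        changes.append(f"raise rating from {s.rating} to at least {min_r}")
    if s.acceptance_rate < min_a:
        changes.append(f"raise acceptance rate from {s.acceptance_rate:.0%} "
                       f"to at least {min_a:.0%}")
    if s.complaints > max_c:
        changes.append(f"reduce complaints from {s.complaints} to at most {max_c}")
    return changes or ["already qualifies"]


stats = DriverStats(rating=4.87, acceptance_rate=0.82, complaints=2)
print(classify(stats))                    # -> "Gold"
print(counterfactual(stats, "Platinum"))  # -> raise acceptance rate to at least 85%
```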

Due to companies’ increasing access to their workers’ data and the rapid pace of technological advancement, behavioral nudging is likely to continue. Companies that join this trend need to carefully manage their nudging techniques in order to remain legitimate, by creating win-win solutions and investing in transparency about the collection, storage, and processing of their workers’ data.
