Everyone in Your Organization Needs to Understand AI Ethics

When most organizations think about AI ethics, they often overlook some of the sources of greatest risk: procurement officers, senior leaders who lack the expertise to vet ethical risk in AI projects, and data scientists and engineers who don’t understand the ethical risks of AI. Fixing this requires both awareness of and buy-in for your AI ethics program across the organization. To achieve this, consider these six strategies: 1) remove the fear of AI and AI ethics, 2) tailor your message to your audience, 3) tie your efforts to your company’s purpose, 4) define what ethics means in an operational way, 5) lean on trusted and influential individuals, and 6) never stop educating.

Many organizations have come around to seeing the business imperative of an AI ethical risk program. Countless news reports — from faulty and discriminatory facial recognition to privacy violations to black box algorithms with life-altering consequences — have put it on the agendas of boards, CEOs, and Chief Data and Analytics Officers. What most leaders don’t understand, however, is that addressing these risks requires raising awareness of them across their entire organization. Those that do understand this often don’t know how to proceed.

For companies that use AI, this needs to be a top priority. Over 50% of executives report “major” or “extreme” concern about the ethical and reputational risks of AI in their organization, given its current level of preparedness for identifying and mitigating those risks. That means an AI ethical risk program that everyone is bought into is a prerequisite for deploying AI at all. Done well, raising awareness both mitigates risks at the tactical level and lends itself to the successful implementation of a more general AI ethical risk program.

Building this awareness usually runs up against three significant problems.

First, procurement officers are one of the greatest — and most overlooked — sources of AI ethical risks. AI vendors sell into almost every department in your organization, but especially HR, marketing, and finance. If your HR procurement officers don’t know how to ask the right questions to vet AI products, they may, for instance, import the risk of discriminating against protected subpopulations during the hiring process.

Second, senior leaders often don’t have the requisite knowledge for spotting ethical flaws in their organization’s AI, putting the company at risk, both reputationally and legally. For instance, if a product team is ready to deploy an AI but first needs the approval of an executive who knows little about the ethical risks of the product, the reputation of the brand (not to mention the executive) can be at high risk.

Third, an AI ethical risk program requires knowledgeable data scientists and engineers. If they don’t understand the ethical risks of AI, they may fail to understand their new responsibilities as articulated in the program, or they may understand the responsibilities but not their importance, which in turn leads to not taking them seriously. On the other hand, an organization that understands the ethical, reputational, and legal risks of AI will understand the importance of implementing a program that systematically addresses those risks cross-organizationally.

Creating this cross-organizational awareness well requires work. It requires a consistent message that is also tailored to the specific concerns of each group. After all, the interests and responsibilities of the C-suite differ from those of product owners and designers, which in turn differ from those of data scientists and engineers; speaking in the same language to all of them results in speaking to none of them. The message can’t be superficial, or people will conclude that AI ethics is either a PR issue or a niche concern. And it needs a clear C-suite leader responsible for devising and overseeing the execution of a strategy that results in organization-wide awareness — this issue won’t be taken seriously if the message doesn’t come from the top. Here’s where organizations should start.

How to Build Awareness — and Get Buy-In

It’s crucial to ensure that every employee knows the risks and feels invested in the success of AI within the organization. Employees not only need to be aware that these issues exist; they also need to know how those risks affect their particular job and how managing them fits into their job description. It’s one thing for someone in HR to know how to hire people who are right for the job while also, incidentally, being aware of the ethical risks of AI. It’s another for that person to see identifying and mitigating those risks as part of the job, for instance, by knowing that discharging her responsibilities well includes asking AI vendors that provide hiring software for documentation on how they identified and mitigated the biases in their AI.

Here are six measures you can take for building organizational awareness and buy-in the right way.

1. Remove the fear of AI and AI ethics. One barrier organizations face is that people outside of IT can be intimidated by the topic. “Artificial intelligence,” “machine learning,” and “discriminatory algorithms” can seem like daunting concepts, which leads people to shy away from the topic altogether. It’s crucial for building organizational awareness that people become familiar and comfortable with the concepts, if not the technical underpinnings.

Possessing basic AI literacy is, in one sense, not very difficult. After all, machine learning is, in essence, learning by example, and everyone is familiar with learning by example. Similarly, everyone knows what it’s like to be bad at something because they haven’t seen enough examples. If you’re explaining how a discriminatory algorithm might be created, you can say that some are the result of software that didn’t have enough examples to learn from, and so the software makes mistakes: facial recognition software trained on too few examples of Black women’s faces, for instance, will be bad at recognizing Black women’s faces. More generally, many of the ethical risks of AI, and their various sources, can be articulated to a non-technical audience, giving employees the confidence they need to handle the issues.
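
For a technical audience, that mechanism can be made concrete in a few lines of code. The following is a minimal, hypothetical Python sketch — the data is entirely synthetic and invented for illustration, not drawn from any real system. It trains a simple classifier on data dominated by one group and shows that accuracy for the underrepresented group suffers:

```python
# Hypothetical illustration: a model trained on too few examples of one
# group performs far worse for that group. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

def make_group(n, informative_feature):
    # For each group, a different feature determines the true label.
    X = rng.normal(size=(n, 2))
    y = (X[:, informative_feature] > 0).astype(int)
    return X, y

# Group A dominates the training data; group B is barely represented.
X_a, y_a = make_group(5000, informative_feature=0)
X_b, y_b = make_group(50, informative_feature=1)   # only 50 examples

model = LogisticRegression()
model.fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))

# Evaluate on fresh, equal-sized test sets for each group.
X_a_test, y_a_test = make_group(1000, informative_feature=0)
X_b_test, y_b_test = make_group(1000, informative_feature=1)
print("accuracy for group A:", model.score(X_a_test, y_a_test))  # high
print("accuracy for group B:", model.score(X_b_test, y_b_test))  # near chance
```

The same dynamic plays out in real systems: the model isn’t malicious; it simply never saw enough examples to learn the pattern that matters for the underrepresented group.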

Requiring your people in HR and marketing to have a basic familiarity with how AI works and how AI ethical risks arise may seem like a tall order. However, most organizations have created a great deal of awareness around cybersecurity risks, which also entails cybersecurity literacy, and this probably seemed like a virtual impossibility before companies committed to making it happen. But if your people don’t know the basics, they won’t know to ask the right questions (e.g., of AI vendors) when it’s crucial that they do.

2. Tailor your communications to your audience. Senior leaders who see themselves as stewards of their brand’s reputation are interested in avoiding risks that threaten it. Speaking in the language of “ethical and reputational risk” helps them see the relevance of AI ethics to their concerns and responsibilities. Product designers, on the other hand, are less concerned with avoiding risk than with making “cool” and helpful products. Explaining how AI ethics by design improves their product, especially for the growing number of values-driven consumers and citizens, can be a highly effective way to reach that audience. Finally, data scientists and engineers want robust, effective models. Talking their language means explaining how biased algorithms and black boxes decrease the power of the tool and its adoption. No one wants to build an inaccurate or unused model.

Giving examples and stories of AI gone wrong that each audience can relate to is also important. These don’t have to be PR disasters. They can include, for instance, Amazon’s inability to sufficiently mitigate the biases in its AI-powered hiring software, which led the company, admirably, to pull the plug on the project rather than deploy something that could harm job applicants and the brand alike. Further, to the extent possible, use examples particular to your industry. There are cases in which AI has realized ethical risks in healthcare, but if you’re talking to employees of a fintech, they will connect more with a story from a peer company.

3. Tie your attempts to build organizational awareness to your company’s mission or purpose. If your mission or purpose is already built into your organizational culture, integrate your discussion of AI ethics with it. Explain how AI ethics and ethical risk management are a further extension of that mission: a set of guardrails around what you are (and are not) willing to do in pursuit of it.

For example, your mission might be to provide the best financial advice possible. But you can’t provide that advice unless people trust you, and people can’t trust you if you’re negligent in your deployment of AI. When AI goes wrong, it goes wrong at scale. Communicate to your organization that providing the best financial advice possible entails protecting your clients, and that protecting them requires the ethical, responsible, and trustworthy deployment of AI. Framed this way, AI ethics is no longer seen as something bolted onto your operations; it is a further extension of your mission and core values.

4. Define what AI ethics means in your organization in an operational way. It’s one thing to say you’re “for privacy” or that you “respect privacy.” It’s another thing to actually do something about it. To make sure your AI ethics commitments aren’t seen as mere PR, tie them to concrete guardrails, e.g., “we will never sell your data to third parties” or “we will always anonymize data shared with third parties.”
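
As a sketch of what “operational” can look like in practice, here is a small, hypothetical Python example — the dataset, column names, and salt are all invented for illustration. It implements one ingredient of the guardrail above by dropping direct identifiers and pseudonymizing the remaining key before data leaves the organization; true anonymization typically requires more, such as aggregation or k-anonymity:

```python
# Hypothetical illustration of operationalizing "we will always anonymize
# data shared with third parties." All names and columns are invented.
import hashlib
import pandas as pd

customers = pd.DataFrame({
    "name":    ["Ada Lovelace", "Alan Turing"],
    "email":   ["ada@example.com", "alan@example.com"],
    "user_id": [101, 102],
    "balance": [2500.00, 1800.50],
})

SALT = "rotate-me-per-release"  # in practice, a managed secret, not a literal

def pseudonymize(value) -> str:
    # Replace a direct identifier with a salted one-way hash.
    return hashlib.sha256(f"{SALT}:{value}".encode()).hexdigest()[:16]

shareable = (
    customers
    .drop(columns=["name", "email"])                          # drop direct identifiers
    .assign(user_id=customers["user_id"].map(pseudonymize))   # pseudonymize the key
)
print(shareable)
```

The point isn’t the specific technique; it’s that a guardrail written this way can be checked in code review, which is precisely what makes it credible rather than fuzzy.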

If you have a well-crafted AI ethics statement, it will include those guardrails, which play a dual role. First, they communicate to your team what you’re actually doing (or plan to do) about AI ethical risks. Second, they immediately signal that this is not PR or something fuzzy. When values are articulated and communicated in a way that ties them to actions, those communications are credible and memorable.

One way to ease your audience into seeing that AI ethics is not fuzzy, and can in fact be implemented, is to explain the very real and difficult ethical questions faced in healthcare and the ways practitioners have tackled them. Relatedly, you can discuss how the healthcare industry has incorporated ethical risk mitigation into its infrastructure and processes, so your audience can see that it can be done.

5. Invite trusted and influential members of various functions to join you in your efforts. Some organizations, like Microsoft, have created a system of “AI Ethics Champions”: people throughout the organization who are charged with raising awareness of AI ethical risks within their teams. One important feature of such a program is that it empowers leaders who already have the trust and support of their teams. Further, they know their respective teams better than, say, the Chief Learning Officer, the Chief Data Officer, or whoever leads the organizational awareness strategy.

6. Continuously educate. Building organizational awareness is not something you do on a Wednesday afternoon or at a weekend retreat. It requires ongoing and diverse touchpoints: internal and external speakers, workshops, newsletters, and so on. Indeed, AI and emerging technologies generally are rapidly evolving, and with those changes come novel sources of ethical risk. Continuously educating your people is a crucial bulwark against falling behind the rising tide of technological change.

Business leaders know that figuring out how to develop, procure, and deploy AI safely and ethically is crucial for continued growth and a competitive edge. It is important not to confuse this goal with a technical goal for data scientists and engineers to achieve on their own. The responsible deployment of AI — whether it is used for internal or external purposes — requires awareness of its ethical risks and organizational buy-in to a strategy for mitigating them.
