
“Ask Me Anything” with Reid Blackman, PhD & AI Ethics Consultant

June 05, 2020


Ai4 recently hosted an “Ask Me Anything” session with Reid Blackman, PhD on our Ai4 Slack Channel. Read the full transcription below…

MODERATOR: It’s my pleasure to welcome our next AMA guest Reid Blackman, Philosophy PhD and Professional AI Ethicist, Founder and CEO of Virtue. You now have 1 hour to ask Reid anything on the topic of AI Ethics! Ready, set, go.

REID: Thanks! Hi everyone - Looking forward to your questions!

PARTICIPANT 1: There are over 200 (depending on who is counting) statements of AI ethics or Principles for Ethical AI. Which should we read and why? Of the many specific principles floated, which is top priority and why?

REID: I’m not sure I would bother with them. For ethics principles to get widespread agreement, they need to be watered down. Take “We’ll always develop our AI with fairness as a priority,” or something like that. Who’s against that?! Even the KKK says they’re for fairness…they just have a very different idea about what that amounts to! What’s needed is a greater focus on what developers and companies actually need to do.

PARTICIPANT 1: Starting from that point then, that statements of principles are worth wallpaper, should we focus on audit mechanisms?

REID: Can you say more about what you mean by “audit mechanisms”?

PARTICIPANT 1: The UK ICO has an AI audit framework, which is one example: https://ico.org.uk/about-the-ico/news-and-events/ai-auditing-framework/. Another is the EU's accountability framework

REID: Got it. Yes, that’s much better, I think.

PARTICIPANT 1: Other forms of audit mechanisms might include standards for XAI or unboxing the algorithm

REID: It’s key, in my estimation, to have a strong ethical risk due diligence process in place. But that standardly can’t be generic. It has to be created with the concerns of the organization in mind, and those concerns can vary from org to org. WRT, for example, XAI, I’m not convinced explainability is always ethically required. In some cases it is enough that the AI works reliably.

My general view is that explainability is ethically important when providing explanations for a given output is part and parcel of respecting the people it affects.

PARTICIPANT 2: Hi Reid, thanks for joining! For-profit companies often have to find a balance between doing whatever it takes to increase their own profits versus acting in the best interest of others. Does AI change the nature of this topic, or is it simply a case of increasing power of firms and therefore increasing their responsibility as well? If it changes the nature of the question, what is different about AI versus other tech/actions?

REID: I don’t think AI changes the nature of the topic. I think that the ethical risks of AI are reputational risks that companies should take very seriously, even if it’s only for bottom line reasons. Since the nature of AI is to do things fast and at scale, there’s additional reason for companies to pay close attention to those risks generated by AI

PARTICIPANT 2: Makes a lot of sense! In a way, if reputation matters to firms, and with AI in the limelight as it is, it might actually be a real asset for firms to act on ethical AI versus the opposite. Thanks for your thoughts.

REID: That’s exactly right. When companies are moved to consider AI ethics, it’s standardly the result of reputational risk mitigation.

PARTICIPANT 3: Hey Reid, thanks again for being here with us! I’d like to throw a question into the mix as well: Where do you most often see companies run into ethics issues around AI? And what are the common solutions?

REID: There are a variety of issues. Most obviously there are biased algos (various companies are being investigated by regulators on that front). There are also big ethical issues arising from mishandling of people’s data (companies are getting sued on this front)

PARTICIPANT 3: Yes, biased algos have been a huge theme at our conferences. It’s too bad that litigation, regulatory fines, and bad PR have to be the driving force here! It seems like the decisions being made are grounded more in “good business” than in “good ethics,” if that makes sense.

REID: Yeah definitely. But I think that’s okay. Look...

Scenario 1: You don’t help the elderly person across the street.

Scenario 2: You help them so others praise you.

Scenario 3: You help moved by compassion.

Right now companies are mostly at scenario 1. I can’t get them to get to scenario 3. I can’t change their hearts, so to speak, but if bottom line risk gets them to scenario 2, well, that’s an improvement in my book.

PARTICIPANT 3: That is a very good point. Maybe being outcome oriented is enough until humanity can spiritually evolve a bit.

REID: There is no common solution as yet b/c orgs don’t know how to tackle the problem. More specifically, one of the big problems is that there is no cross-org standard about who owns the problem (and thus who should spend their budget on it!).

Is it the Chief Data Officer? CISO? CIO? CTO? Chief Innovation Officer? In truth, they are all good candidates, but it’s very rare that anyone understands themselves as being responsible for it. I also don’t think CEOs are paying enough attention to spend the time to figure out who should own it.

PARTICIPANT 4: Which companies/sectors are on the cutting edge of AI ethics? And what exactly does being on the cutting edge mean?

REID: I’m not sure there’s an industry at the cutting edge. I think that of the big tech firms Microsoft is doing the most in this arena and I know they are pushing for more. Salesforce has a number of people dedicated to ethical AI and they write/talk about it a lot. In general, there’s lots of talk by companies about it but very little action that anyone can see.

PARTICIPANT 5: Hi Reid. Do you think any existing regulatory bodies are well-suited to promoting or enforcing AI ethics (assuming they hire appropriate subject matter experts)? If not, what do you think are their biggest shortcomings? Rephrased: what are the qualitative differences between AI ethics and business ethics?

REID: Great question. I don’t know of any such bodies, and part of the problem, to your point, is that the right SMEs are not engaged. As an example, the EU’s High-Level Expert Group on AI was composed of around 50 people who came out with ethics guidelines, and there were only two ethicists among them!

As for bodies in the US that could regulate this area, I don’t know of any, but admittedly knowledge of US regulatory bodies is not my strong suit! Further, we’re much more likely to see this roll out state by state before we see a national AI ethics strategy. Illinois has something called BIPA, which is pretty strong, and now we have CA’s CCPA. Other states are following suit.

PARTICIPANT 6: Imagine a company wanted to be on the cutting edge of AI ethics, would they hire a programmer and retask them to ethics, or hire an ethicist and teach them to program? Which basket of skills does an AI ethicist need to do the job well enough to push a company to the bleeding edge on this issue?

REID: Great question! The answer is, neither! I can no more teach an engineer to be an ethicist in a few sessions than an engineer could teach me to be an engineer in a few sessions. Being on the cutting edge would require creating an AI (or better, because broader, digital) ethical risk management program.

That would include things like an ethical risk due diligence process placed at key points during internal product development and procurement from third-party vendors. It would also include a deliberative body (e.g. an AI Ethics Board) that would handle tougher cases. That board would have to include BOTH engineers and ethicists (and others as well, e.g. anthropologists who are aware of cultural variances and how deployments will affect different peoples).

Then there’s the process by which things go to that ethics board and what powers it has. Then there’s building organizational awareness, a compliance program, proper documentation, etc.

For some reason ethics is not regarded as an area of expertise. And this despite the fact that there are tons of experts! But ethicists (aka philosophers with a certain training) are bad at PR!

PARTICIPANT 7: Hi Reid, thanks for doing this! Do you have any thoughts on how to minimize/eliminate bias in AI?

REID: Thanks for the question. There are broadly two ways of doing this. First, there are certain tools that have been developed to track things like this during product testing. Second, there is some qualitative training that needs to get done for those who are going to use those tools. And if not training, then involving SMEs to help them use those tools. (Those tools require developers to form hypotheses about who might be discriminated against, why, and how, and that kind of social knowledge and reasoning is not the forte of engineers).
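For a rough sense of what such a tool checks, here is a minimal sketch in Python of a demographic-parity test. It’s a hypothetical illustration, not any particular vendor’s product: the model outputs, group labels, and 10% tolerance are all made up for the example.

# Minimal sketch of a demographic-parity check, the kind of test
# bias-tracking tools run during product testing. The data and the
# 0.1 tolerance below are illustrative assumptions, not a real tool's API.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of positive (e.g. 'approve') decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical outputs from a loan-approval model, split by a sensitive attribute.
preds  = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(preds, groups)
print(rates)            # {'A': 0.6, 'B': 0.4}
if gap > 0.1:           # a tolerance the review team would have to justify
    print(f"Flag for ethics review: selection-rate gap of {gap:.2f}")

Real tools go much further (intersectional groups, multiple fairness metrics, significance testing), but the core move is the same: compare outcome rates across the groups the developers hypothesize could be harmed.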

I should say that one thing that people talk a lot about in this context is creating a diverse team of engineers. While I think that’s a good thing and may play a role in mitigating bias, I think it can’t possibly be the whole solution.

One reason is that the explanation for why teams are not diverse now has to do with a set of complicated social and historical injustices that will take decades to fix. But the problem of discriminatory AI is happening now and is only ramping up. If the only strategy to combat this is to wait decades until historical injustices right themselves, we’re screwed.

The second reason is that while people have special access to what their own lives are like and plausibly the lives of people like them (in relevant respects), they aren’t thereby better sociologists or anthropologists or ethicists, nor are they better at engaging in the kinds of qualitative critical analysis required to think through issues of discrimination.

PARTICIPANT 8: In software security, we have a lot of operating system support for enforcing security policies, and enterprise software often includes integrations with management software so security teams can gain visibility into and control over misuse. Do you think a similar model would work for AI, where enterprise AI software is built to integrate with security teams, which can monitor and restrict use?

If it would work, is the academic research on AI safety generally leading in this direction, or do you see major shortcomings in research that would make such a model difficult to implement?

REID: Absolutely. AI ethical risk management processes should include combining this work with security policies, processes, etc. One person on my team is a CISO who does governance, risk, and compliance wrt security, and he’s an invaluable member of the team precisely because he can talk to those security people and find out where we can get AI ethical risk mitigation processes to dovetail with what they’re already doing.

I was talking to a CISO just the other day about adding ethical risk due diligence to their cyber due diligence when vetting third-party vendors. And to your point about research, I don’t think that’s the issue. Where, exactly, AI ethical risk mitigation comes into contact with security in an organization is a very “on the ground” matter. The question is how the program gets created. In my view, it gets created best when it’s at least partly woven into cyber policy/process/governance, etc.

PARTICIPANT 9: Hey Reid! I'm curious, what are the most egregious violations of AI ethics you’ve seen companies commit?

REID: Oof! I think the biggest ones are probably the ones many of us have seen in the news. One that is pretty scary is that Optum is being investigated by regulators for creating an AI that makes recommendations to doctors and nurses about which patients to attend to and when. Allegedly this AI recommended they pay more attention to white patients than to sicker black patients. (This was reported in the WSJ and WashPo.)

I hasten to add that there is no evidence the discrimination was intentional by the developers or anyone else involved. This kind of discrimination can arise despite the best intentions of the programmers.

PARTICIPANT 10: What do you think of AGI (artificial general intelligence) or "superintelligent" AI as a threat? Do those concepts fall within your AI ethics work? Do you find that companies developing AI models actually think about how to avoid an AI apocalypse?

REID: I’m not particularly worried, myself. There are so many massive problems even with just machine learning that I don’t see AGI coming along so quickly. Of course, it’s possible it will be invented sooner rather than later, but the mere possibility of X is no evidence for X.

As for companies trying to avoid the AI apocalypse…definitely not. They’re looking to acquire/develop AI wherever they see it’ll help their bottom line. I think the ethical risks of ANI are large enough for businesses to take them seriously…and they’re here now!

PARTICIPANT 11: What's coming next in AI ethics? Imagine we did this again in 1 year (or maybe even had an in person event safely), what do you think would be new between now and then?

REID: So in short I think we’ll be talking largely about the same stuff. Awareness of these issues, let alone willingness/money to do something about it, is just percolating now. I think we’ll have some more high profile cases that will push us along, but I don’t think anything will fundamentally change the nature of the conversation. Companies will still be behind in their AI ethical risk management efforts and they’ll suffer as a result.

Well, I do think we’ll see more regulations in place, so that’s something we’ll talk about. NY is supposed to come out with something like the CCPA or GDPR pretty soon, and that could have an interesting impact, depending on the contents of the regulation.

PARTICIPANT 12: If you were to estimate the dollar amount cost for a large enterprise to maintain healthy AI ethics, what would you say that is? I know it's a tricky question, but I'll take a rough estimate.

REID: Good question. I think there are varying levels of health (as there are with people). It starts with the C-suite knowing about those risks and having a decent idea about how to tackle them. So there we’re talking about some workshops and not a high dollar amount, particularly relative to the revenue of a large enterprise. After that we’re talking about creating a strategic roadmap to set concrete goals, strategies for achieving them, etc., and now we’re probably in the $100-300K region, depending on the size of the org. If you’re talking about a full-blown digital ethical risk program/becoming an AI ethics center of excellence, you’re probably getting close to the $1M range. As you said, these are estimates of course! And I hasten to add that this is for a large enterprise. Smaller orgs mean smaller $ amounts!

MODERATOR: Annnnd… that’s a wrap! Thank you again, Reid, for taking out some time to be with us and answer questions for the Ai4 community. It’s been great having you! Are there any final plugs or messages you’d like to convey to the group?

REID: Thanks everyone! Feel free to reach out to me directly at [email protected]. I also post a lot of content on LinkedIn: https://www.linkedin.com/in/reid-blackman-ph-d-0338a794/.
