Ethical Design & Breaking Biases in AI - with Lisa Woodley

Photo: Lisa Woodley

Lisa Woodley, Vice President of Digital Experience at NTT DATA Services, has spent the past 20 years creating and leading diverse, multidisciplinary teams that unite experts in design, psychology, computer science, data analytics, and more to drive innovation. A leader in digital experience design who shares her expertise through the Intro to User Experience Design course, Woodley is also a leader in ethical design: for more than a decade she has used her experience and her voice to teach students at Rutgers University about design ethics, how to intentionally use their tech powers for good, and, most importantly, why it is their responsibility to do so.

Woodley and colleague Anisha Biggers recently gave a great interview in which they discussed inherent biases in AI, how easily racism and misogyny can be “learned” by automation software and natural language processors, and how humans can mitigate biases in the AI and automation solutions they create. The full Q&A is reprinted below with permission from NTT DATA Services.

Breaking the Bias in AI - Q&A with Lisa Woodley and Anisha Biggers

The theme of International Women’s Day 2022 is #BreakTheBias. In preparation for a week’s worth of internal events related to IWD, we’re devoting a blog post to how NTT DATA is working to Break the Bias in automation and AI. Kim Curley, VP, Workforce Readiness Consulting, recently had an opportunity to ask NTT DATA’s Lisa Woodley and Anisha Biggers (both subject matter experts in automation solutions) about their experience with bias and how they have worked to eliminate bias in digital transformation initiatives.

NTT DATA is a leader in automation and AI and recognizes the bias traps that can be created without the right level of care and consideration on the front end. NTT DATA adopted a set of AI Guidelines in 2019 that recognizes our responsibility in creating a human-centered society in which we coexist with AI. This year’s IWD theme provided a terrific opportunity for us to talk about bias in automation and AI with these experts.

Q: What event or situation caused you to get interested in how bias is embedded into automation / AI?

Anisha Biggers: My interest in machine-based bias started in 2009, when Nikon’s face-detection cameras were accused of racism. The camera technology could not reliably recognize whether Asian faces had their eyes open or closed, and an automated message would flash on the screen asking, “Did someone blink?” I remember wondering why the design team had not considered all categories and races in the starting data set for facial recognition. It would have been the logical thing to do.

For automation specifically, it was more of a direct experience. I had a cordial ‘argument’ with one of my clients who had implemented an RPA (Robotic Process Automation) solution to reduce manual errors and improve cycle time, but the implementation was not considered successful. The solution enabled straight-through processing, and when we interviewed the identified stakeholders, we realized it had been designed by RPA developers and IT system owners, without engaging the business teams who would eventually use it. It was designed by technology teams with a simple input-processing-output approach, without considering the daily variations and ad-hoc decisions involved in that process.

Any automation/AI solution will be as good — and as biased — as its designers.

Lisa Woodley: I agree. My experience really started around my passion for ethical design. The use of data and manipulative design practices has negatively impacted our psychology and society, and I’m passionate about designers taking responsibility for where we are and doing something to change it. We represent the human in technology innovation, and it’s our job to draw a line to ensure the future we design benefits people and society or, at the very minimum, does no harm.

As I started to dive further into ethical design concepts, it became clear that inclusion HAD to be a part of the conversation. You can’t have ethical design without inclusive design. Inclusive design means understanding where the biases are and actively banishing them from what you create. Now how does this relate to bias in AI and automation? All the machine learning algorithms out there collect data, analyze it, and recommend actions, and those algorithms drive what we design — things like which offers we present to a consumer as they navigate a site.

As designers, we can’t just accept that. We have to question whatever the machine says and design for it. How are you determining your target personas and customer segments? Is it fair? Are we leaving anyone out? Are the criteria based on biases? How do we know?

Q: What causes bias to be present in automation / AI?

Anisha: We need to understand that we humans are the ones designing the solutions. Humans are inherently biased. Anything we create will be biased, as the sum of its creators’ backgrounds, experiences, and social circles. We, as people, design things based on our understanding of the world around us. So saying “Bias is present in automation/AI” might not be correct. “Bias is designed into automation/AI” is how we should approach this topic.

Lisa: Where we run into trouble is assuming that because it’s a machine, it has no bias. But AI can only be trained on what we give it — what we know and/or what’s already happening — and that is inherently biased. We have to start not from a position of preventing bias from creeping in but instead from removing the bias we know is already there.

For example, we know that historically there’s inequality in mortgage lending. We might think, “Well, let’s take the inequality out by training a machine to approve mortgages.” No human, no bias, right? But if we train the machine on historical views of who has gotten approved in the past, we’ll only propagate the inequality that’s already happening.
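To make that mechanism concrete, here is a minimal sketch on purely synthetic data (the feature names, numbers, and the use of scikit-learn are illustrative assumptions, not anything from the interview). Even when the protected attribute is left out of the model entirely, a classifier trained on historically skewed approvals reproduces the disparity through a correlated proxy:

```python
# Hypothetical sketch with synthetic data: a model trained on biased historical
# approvals propagates the inequality, even without the protected attribute as input.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, size=n)            # protected attribute (never a model input)
income = rng.normal(50_000, 10_000, size=n)   # identical income distribution for both groups
proxy = group + rng.normal(0, 0.3, size=n)    # stand-in for a correlated feature, e.g. zip code

# Historical decisions: group 1 was approved less often at the same income.
latent = (income - 45_000) / 10_000 - 1.5 * group + rng.normal(0, 0.5, size=n)
historical_approved = latent > 0

# Train only on scaled income and the proxy; "no human, no bias" is the hope.
X = np.column_stack([income / 10_000, proxy])
model = LogisticRegression().fit(X, historical_approved)

# The learned policy still approves the two groups at very different rates.
preds = model.predict(X)
print("Approval rate, group 0:", round(preds[group == 0].mean(), 2))
print("Approval rate, group 1:", round(preds[group == 1].mean(), 2))
```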

Q: Why is breaking the bias in automation / AI important?

Anisha: We live in a hyperconnected world, an age of digital revolution and social media. Hyperautomation and AI are here to stay as we progress as a society. Breaking the bias is no longer optional; we must minimize bias as much as we can. Our future depends on it. Upcoming generations are born into technology. They learn, interact, and socialize using technology. If we arm people with biased technology, we accelerate the spread of bias that could otherwise have been contained.

Lisa: We are increasingly handing decisions over to AI: who gets approved for a credit card, how large a mortgage you qualify for, your credit score, and what you see when you do internet searches or log onto social media. It is absolutely everywhere, impacting every aspect of our lives. Most people don’t realize the extent. If we don’t recognize the prevalence and work to break the impact bias has on the decisions AI makes, we will exponentially increase the socio-economic gaps that already exist.

Q: Can you give an example of unconscious bias built into automation / AI? And ways that bias could have been avoided?

Anisha: An AI chatbot, ‘Tay’ (built for conversational understanding), was taught to be racist by Twitter in less than 24 hours. The more you chat with Tay, said Microsoft, the smarter it gets, learning to engage people through “casual and playful conversation.” The moment Tay went live, people started tweeting the bot all sorts of misogynistic and racist remarks. And Tay — being essentially a hyperconnected robot parrot — started repeating these sentiments back to users, proving correct that old programming adage: garbage in, garbage out.

As we design and build intelligent decision-making solutions that learn from their human counterparts, the same sort of bad training problem can arise in more problematic circumstances. This is why, for AI/automation solutions, we need to understand not only how a specific technology solution will impact humans but also how humans will impact the solution. It is essentially a continuous learning loop between technology and humans.
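As a toy illustration of that loop (entirely hypothetical, and nothing to do with Tay’s actual implementation), here is a sketch of a bot that learns replies directly from whatever users send it, next to a variant that screens its training input first:

```python
# Toy illustration of interaction-driven bias: a bot that "learns" by echoing
# whatever users teach it, versus one that screens its training input first.
import random

BLOCKLIST = {"hateful", "slur"}  # placeholder for a real content-moderation step

class ParrotBot:
    def __init__(self, moderate_input=False):
        self.learned = ["hello!"]          # seed phrase
        self.moderate_input = moderate_input

    def learn(self, message):
        # Without moderation, every message becomes future output material.
        if self.moderate_input and any(word in message.lower() for word in BLOCKLIST):
            return  # rejected: never enters the training data
        self.learned.append(message)

    def reply(self):
        return random.choice(self.learned)

naive = ParrotBot()
guarded = ParrotBot(moderate_input=True)
for msg in ["nice weather today", "something hateful"]:
    naive.learn(msg)
    guarded.learn(msg)

print("naive bot may say:", naive.learned)      # garbage in, garbage out
print("guarded bot may say:", guarded.learned)  # harmful input filtered before learning
```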

Lisa: An example that honestly scares me the most is the type of AI facial recognition Anisha mentioned previously, only this time used for community safety and policing. I highly encourage anyone interested in this topic to watch the documentary Coded Bias on Netflix. It focuses on Joy Buolamwini, a researcher at the MIT Media Lab, and her discovery that facial recognition does not see dark-skinned faces accurately. It shows the impact this has on the real world as communities and police increasingly rely on facial recognition for crime prevention. Her discovery prompted her to push for new legislation against bias and to form the Algorithmic Justice League, whose mission is to bring together researchers, policymakers, and industry practitioners to mitigate AI bias and harm.

Q: How can we break the bias – what steps are needed to prevent bias, and what skills do we need to develop to identify and fix it where it might already exist?

Anisha: As I mentioned earlier, we are humans. And humans are biased. We cannot eliminate bias from what we design, but we can certainly minimize it. Understanding the different kinds of biases that exist — or at least the ones we have identified and categorized — and then acknowledging that we have them is a start to solving the problem. I found this explanation on TechCrunch.com very helpful:

  • Data-driven bias – facial recognition discrepancies
  • Interaction-driven bias – Tay, the corrupted Microsoft chatbot
  • Emergent bias – information-based social bubbles on Facebook
  • Similarity bias – customized news and ads on Google based on individual queries
  • Conflicting-goal bias – any site with a learning component based on click-through behavior will present opportunities that reinforce stereotypes

Lisa: We can #BreakTheBias by assuming that bias is always there, no matter what we think we designed up front. We can start every project with that assumption and then consistently ask questions about how the machine is learning, where it is getting that information, and how it will use it. What are the consequences if it gets it wrong? More importantly, what if that algorithm does EXACTLY what we programmed it to do? Are there any unintended consequences that might come out of that? AI is like a genie. It will deliver precisely the wish you ask for, and sometimes that ends up being more than you bargained for.
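One simple way to put those questions into practice is to audit the outcomes a system actually produces. The sketch below is a hypothetical check (the field names and the 10% threshold are illustrative assumptions, not an NTT DATA practice): it compares positive-outcome rates across groups and flags the model for human review when the gap is large, in the spirit of a demographic-parity check.

```python
# Hypothetical audit sketch: compare outcome rates across groups and flag large gaps.
# Field names and the threshold are illustrative assumptions.
from collections import defaultdict

def outcome_rates_by_group(records, group_key="group", outcome_key="approved"):
    """Share of positive outcomes for each group found in the decision log."""
    totals, positives = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record[group_key]] += 1
        positives[record[group_key]] += int(bool(record[outcome_key]))
    return {g: positives[g] / totals[g] for g in totals}

def flag_disparity(rates, max_gap=0.10):
    """Return the largest rate gap between groups and whether it exceeds the threshold."""
    gap = max(rates.values()) - min(rates.values())
    return gap, gap > max_gap

# Example: a small, made-up decision log.
decisions = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]
rates = outcome_rates_by_group(decisions)
gap, needs_review = flag_disparity(rates)
print(rates, f"gap={gap:.2f}", "needs review" if needs_review else "within threshold")
```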

Anisha: Skills-wise, we need design engineers who understand technology and human psychology. To be honest, we need more than just skills to prevent bias in automation/AI. Based on what we are trying to solve, we need a foundational team structure — diverse technologists and designers — to provide varied perspectives and inputs on the impact of the designed solution. We also need AI Ethics committees, like the NTT DATA Center of Excellence, to provide some guardrails around design and innovative solutions. Just because we can design something does not mean we should.

Lisa: In terms of skills? Ok, I might have my own bias here, but we need more designers and user experience researchers involved. I said it before: the designer represents the human. They create the things that interact with and impact people, so they should be the ones drawing the line between what the business wants, what is possible from a technology perspective, and what is responsible from an inclusion and ethics perspective. Design thinking, starting with empathy for and understanding of the human, needs to be at the forefront of future technology innovations and services. We need to flip the current model. Instead of leveraging technology to achieve business goals without considering the human impact, we need to put the human at the center of our technological endeavors.

Q: What’s your favorite automation / AI use case?

Anisha: Alexa bloopers would be my favorite. Our two toddlers asking Alexa questions, and her attempts to answer based on what she understood, are always interesting. This human-to-machine interaction is extremely beguiling — a determined and frustrated four-year-old trying to articulate what he wants vs. an unemotional AI repeating what she understood. Nobody wins in the end. I recently changed one of my Alexas to a male voice. My kids’ immediate response was, “This Alexa sounds mean.” Absolutely fascinating!

Lisa: My husband has become obsessed with automating our home — particularly the lights. Everything is set with timers, motion sensors, voice activation, the works. I’m a huge fan of the movie The Fifth Element, so it became his mission to figure out how to set the lights so that I have to shout “Aziz, Light!” to turn them on and “Thank you, Aziz” to turn them off.

International Women’s Day is officially March 8. The Women Inspire NTT DATA ERG celebrates women’s achievements and works to create a more equitable, inclusive, and diverse world.

Author(s): Jen Reiseman-Briscoe
Published on: 03/18/2022
Tags: design ethics, AI and automation, UXD, product design, User Experience Design