Responsible AI at Amazon Web Services: Q&A with Diya Wynn

After fielding a groundswell of customer questions, AWS' Diya Wynn realized there was room to build a customer-facing department to aid with responsible AI practices.
Mar 10th, 2023 3:00am

Last year’s release of ChatGPT alerted many to the great strides that machine learning has made, and will continue to make, in the years ahead. But how do we make sure that this great power is being used responsibly, free from bias and malicious intent?

Diya Wynn is the senior practice manager for Responsible AI at Amazon Web Services. Recently, she sat down with The New Stack to discuss all things responsible AI.

At AWS, Wynn created the customer-facing responsible AI practice and built a team of individuals with diverse backgrounds, including members of the LGBTQIA+ and differently-abled communities. Her goal was to bring responsible AI to AWS customers through a seven-pillar framework for inclusive, responsible AI use.

After fielding a groundswell of customer questions, she realized there was room to build a customer-facing department to aid with the development and implementation of responsible AI practices.

Wynn is a lifelong technologist: she expressed a desire for an engineering career by the third grade, and around that same time, when not every household had a computer, she received her first computer as a reward for high reading and math scores. She earned her undergraduate degree at Spelman College, went on to study the management of technology at the NYU Tandon School of Engineering, and later studied artificial intelligence and ethics at Harvard University Professional School and the MIT Sloan School of Management.

The New Stack: With such a diverse background, why focus on the ethics side over any other aspect?

Diya Wynn: There’s a voice and perspective that I bring, not only as a trained technologist, but also as someone who thinks about the world and its interactions holistically. Technology has the power to shape and change how we engage in the world, and that matters, especially as we start thinking about our future.

I have two sons. My oldest is in high school and my youngest in middle school. Are we preparing our children for what they’re going to encounter in the future? For me, that was really about the training and education they need for the kind of jobs they can walk into and the work that they will be doing. I don’t believe our educational systems are doing enough to shift with technology and prepare our students for the work of tomorrow.

I started exploring and found three things that were important. One was data: the relevance, importance, and value of data, and how that shapes the way we engage in work. The second was artificial intelligence and robotics, and the third was virtual worlds, AR/VR. There’s an element in all of this that’s driving and shaping the way the world is evolving.

And it’s missing the voice and perspective of people that look like me and that look like my sons. That’s why I started to explore and want to have an influence and impact on technology and what that means.

What is Responsible AI?

Responsible AI is a holistic approach that provides governance, structure, processes, aligned people and resources, and technical solutions that can be leveraged to address bias, risk, performance, and other categories.

AWS has a guiding structure and a definition that came about over time and across a number of different teams throughout the organization. Teams across the business help shape the way we look at responsible AI, and each team has ownership of and responsibility for transparency and explainability, and for showing that there’s fairness, robustness, privacy, and security. They’re all responsible for instituting AI responsibly in the services they’re developing.

What is AWS doing to democratize responsible AI?

AWS has a broad strategy. We have a commitment to transforming theory into action. That means changing and influencing the way we build our services, and it shapes the work I do with customers: engaging with them and helping them bring that practice to life and operationalize it inside their organizations.

We invest in education and training to create a more diverse future workforce. There’s an AI/ML scholarship program that’s bringing in those that typically might have been underrepresented to help them study artificial intelligence and machine learning. We also focus on training and educating those who are part of the product and machine learning lifecycle because they need to understand and be aware of potential areas of risk and how we mitigate them.

The last area, from a company perspective, is how we invest in advancing the science around responsible AI. We’ve made huge investments and continue to work with institutions; we have scholarships and research grants, provided by way of the NSF, that are helping to encourage research in the area of responsible AI. We are partnering with institutions that are advancing and working on standards, and all of that is contributing to a growing ecosystem of individuals who are paying attention to this topic.

Let’s talk about bias…

There is a very real understanding that the absence of diversity can create opportunities where bias may occur. The other reality is that we all, as people, have biases as well, right? And sometimes we’re building those biases into our systems. We are leveraging data that has bias in it, and this is especially true when we look at historical data.

We need to create a structure that not only understands this, but also brings in the intentionality about how we approach and address it by either eliminating the bias or making decisions based on a conscious awareness of where the bias exists.
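
As a concrete illustration of the kind of check that intentionality implies, the sketch below measures a demographic parity gap on a toy dataset. This is a minimal, hypothetical example in generic pandas, not AWS tooling: the column names and figures are invented, and real audits use many metrics and purpose-built services rather than this one number.

```python
# Minimal sketch of one common bias check: comparing a favorable-outcome
# rate across demographic groups (demographic parity). The dataset and
# column names are hypothetical stand-ins for historical decision records.
import pandas as pd

# Toy stand-in for historical data that may encode past bias.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Approval rate per group, and the gap between the best- and
# worst-treated groups.
rates = df.groupby("group")["approved"].mean()
gap = rates.max() - rates.min()

print(rates.to_string())
print(f"demographic parity gap: {gap:.2f}")
# A gap of 0.50 here signals that a model trained naively on this data
# would likely reproduce the historical disparity between groups.
```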

How is that bias addressed?

There are a number of different things we can do. We need to be intentional and bring the voices into the room. Education is key: making people and teams aware that they need to pay attention to this. There are tools in the way of checklists and exercises, like defining personas, to make people think about who else is or isn’t being included. Are we thinking about the people that are involved and/or everyone our products and services are supposed to serve? Are we bringing in each perspective?

Then of course there’s the aspect of physically having people, and I acknowledge that’s a challenge in technology because we don’t have the diverse representation we want across all demographics. But let’s be real: when we go in and talk about a project with a customer, we’re not expecting them to hire a new team just because we want to bring in diversity. One of the values that my team brings is a diverse set of individuals, with diverse backgrounds and diverse disciplines, coming together to actually help support customers.

The other thing you can do is leverage your chief diversity, equity and inclusion (DEI) officer. Teams are investing in and bringing in DEI officers who are trained to think about bias, understand inclusion, and look for ways in which their processes and structures can bring in representation and perspective, and, in a more general sense, to be respectful, recognizing and acknowledging the perspectives of others.

How receptive are customers toward putting responsible AI practices in place?

Customers are going to fall into one of three categories. There are customers that have had an experience where they uncovered some impact to their systems or acknowledged an area of bias. Some examples of that have been public. Because of that exposure, they’re interested in resolving the issue or putting some practices in place to alleviate those pains.

The other set of customers are genuinely interested in doing the right thing. They understand and are aware of some of the potential risks and really want to build systems that their customers trust. They’re asking the questions, “What can we do?”, “Are there practices that we can institute?”

There is another group of customers that we’re seeing that are willing to wait. They’re interested, watching what’s happening, hearing and seeing what’s going on in their market and industry as conversations come up and technology and products get released. They’re asking questions but waiting for regulation. Since there’s nothing forcing them to make changes or institute new practices, they’re not ready to make that investment. But we aren’t far from it.

The NIST AI Risk Management Framework was just released this January, meaning there are now standards that customers are going to expect individuals and companies to adhere to. ISO 42001 is coming soon, and it includes risk management and governance structure. The EU AI Act is expected to be signed next year.

NIST’s timeline for the AI Risk Management Framework.

What do you see as the biggest challenge we face moving forward?

Mindset. The first part of the mindset challenge is that we often connect bias specifically with gender and race, making some people think, “This doesn’t apply to me.” We’ve got to think about responsible AI irrespective of whether or not the application we’re serving touches on someone’s gender or race.

For example, if a model trained on a commercial dataset is used in a religious or public sector context, it won’t pull insights in the same way, because it’s biased toward the commercial domain. The data that would support the religious or public sector context isn’t there. It’s important to have a more holistic, comprehensive look.
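
To make that domain-shift point concrete, here is a self-contained sketch under stated assumptions: all data is synthetic, the “commercial” and “public sector” labels are illustrative, and the model is a generic scikit-learn classifier rather than any specific AWS service. A model fit on one distribution scores near chance when the same kind of decision is recentered in a different context.

```python
# Illustrative domain-shift sketch: a model trained on one distribution
# ("commercial") degrades on a shifted one ("public sector"). All data is
# synthetic; this is a generic demonstration, not any specific AWS model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_domain(center: float, n: int = 2000):
    # Features cluster around a domain-specific center; the true decision
    # boundary is recentered per domain, mimicking context-dependent meaning.
    X = rng.normal(loc=center, scale=1.0, size=(n, 2))
    y = (X.sum(axis=1) > 2 * center).astype(int)
    return X, y

X_train, y_train = make_domain(center=0.0)  # "commercial" training data
X_shift, y_shift = make_domain(center=2.0)  # "public sector" deployment data

model = LogisticRegression().fit(X_train, y_train)
print("in-domain accuracy:     ", model.score(X_train, y_train))  # high, ~0.99
print("shifted-domain accuracy:", model.score(X_shift, y_shift))  # near chance, ~0.5
```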

The other mindset challenge is that we know representation and diversity matter in the way we design products, and that diverse perspectives can improve outcomes and results for businesses. We know this, studies prove it, and yet why hasn’t it been done?

We don’t have the diversity we need because it requires a mindset shift and that’s not the easiest thing to do. If we knew things were unfair and all it took to change that was to go make it fair, then we wouldn’t be having this conversation, the problem would be solved. But it’s harder than that because it involves reshaping our thinking.

And lastly, there’s an issue with technology that we haven’t quite solved yet. There’s ongoing research and investment in uncovering the ways technology can holistically support and value inclusion, ensuring models are fair and free of bias.

Do you think the widespread availability of learning models and AI technology is pushing responsible AI forward or pulling it back?

I think it absolutely elevates the importance of responsible AI. We’ve been talking about AI for a while, but there’s been a groundswell in probably the last five to seven years that has flooded the conversation. In some ways, we might say this is another hype cycle, but I think it’s also introducing a great proof point for why we need responsible AI.

I can’t tell you how many conversations I’ve had where people are like, “Oh my gosh!” The system is looking at all this data, and some of the data is biased, and people are asking, “Why did I get that result?” Well, it’s because the data is biased, and so we’ve got to do something else, other than just pull large amounts of data and feed that back. This is definitely helping to advance the conversation.
