Generative AI (GenAI) is booming. It’s not just a trend; it’s produced a seismic shift in how we approach innovation and technology.

SAS Innovate 2024 has moved on from Las Vegas and is now on tour across the world. If you want a recap of what happened in Vegas or want to rewatch it, I suggest clicking those links.

During the event, I listened in on SAS Sr. Trustworthy AI Specialist Kristi Boyd’s presentation on navigating the ethical waters of GenAI. Attendees gained a deeper understanding of the challenges and opportunities in this space.

Her talk highlighted how we got here, what to do now and what’s next. Afterward, we caught up to discuss a few related topics.

We have heard about the transformative potential of GenAI. But be straight with me: have we even scratched the surface?

Kristi: I think we are still in the early stages of seeing the real potential of GenAI. We’re seeing organizations investing heavily in GenAI experimentation and proofs of concept (PoCs), but many are still unsatisfied with the outcomes and progress. I think there are still numerous opportunities to use GenAI to transform how we operate businesses, run organizations, work and live – and I’m excited to see the transformation.

The big challenge is making sure that the transformation happens in a responsible and human-centric manner. There’s a lot of focus on the technology and the tools, and we need to keep humans at the center of that focus. I’m glad to say that at SAS, our entire GenAI strategy is grounded in the cornerstone principles of responsible innovation, which help us ensure that wherever our technology shows up, it’s not harming people.

With the increasing adoption of GenAI tools, how do we ensure that ethical considerations remain at the forefront of innovation?

Kristi: We need to remember that GenAI is a subset of AI, meaning that many existing frameworks and best practices apply. This means we should be intentional about the data used to train the models and the questions we ask the models to answer, and be highly aware of the impact and outcomes for all stakeholders. For example, we should always check the quality, source and diversity of the data we feed into GenAI models and refrain from using biased, incomplete or inaccurate data.
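
To make that concrete, here is a minimal sketch of what a pre-training data audit along those lines might look like in Python. It is an illustration only, not a SAS product API; the `source` and `demographic_group` columns and the thresholds are hypothetical.

```python
import pandas as pd

def audit_training_data(df: pd.DataFrame, group_col: str = "demographic_group") -> dict:
    """Basic quality, provenance and diversity checks before data is used for training."""
    report = {}

    # Quality: flag columns with a high share of missing values.
    missing = df.isna().mean()
    report["high_missing_columns"] = missing[missing > 0.05].to_dict()

    # Source: every record should come from a known, vetted source.
    report["unknown_source_rows"] = int(df["source"].isna().sum())

    # Diversity: flag groups that are badly underrepresented.
    shares = df[group_col].value_counts(normalize=True)
    report["underrepresented_groups"] = shares[shares < 0.10].to_dict()

    return report

# Toy usage: one row is missing its source metadata.
df = pd.DataFrame({
    "text": ["a", "b", "c", "d"],
    "source": ["licensed", "licensed", None, "licensed"],
    "demographic_group": ["A", "A", "A", "B"],
})
print(audit_training_data(df))
```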

Generative AI: Benefits, risks and a framework for responsible innovation

Developing and using AI technologies ethically means asking not just “Could we?” but “Should we?” We must ensure that AI does not harm people and that it reflects our societal values. We should be transparent about the purpose and limitations of GenAI models and ensure they align with the values and goals of users and the organization. And we should monitor and evaluate the performance and behavior of GenAI models, ready to intervene or correct them if they produce harmful or undesired results.
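
At the application level, “be ready to intervene” can start as simply as gating generated output behind an automated check plus a human review queue. A minimal sketch under assumed conditions: the `flag_terms` blocklist here is hypothetical, and a production system would use trained moderation classifiers rather than string matching.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class OutputGate:
    """Withholds generated text for human review when an automated check flags it."""
    flag_terms: List[str]                       # hypothetical blocklist for illustration
    review_queue: List[str] = field(default_factory=list)

    def release(self, output: str) -> Optional[str]:
        # Intervene: route flagged outputs to a human reviewer instead of the user.
        if any(term in output.lower() for term in self.flag_terms):
            self.review_queue.append(output)
            return None  # withheld pending review
        return output

gate = OutputGate(flag_terms=["ssn", "diagnosis"])
print(gate.release("Here is a summary of the quarterly report."))  # released as-is
print(gate.release("The patient's diagnosis is ..."))              # None: queued for review
```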

For organizations, this could also mean having a framework in place before embedding GenAI into their processes. At SAS, we use a principle-driven framework – promoting human well-being, agency and equity; ensuring accessibility and inclusion; proactively identifying and mitigating adverse impacts; explaining usage openly; operating reliably and safely; and respecting the privacy of data subjects. We also follow a collaborative governance approach called the QUAD: we focus on oversight, platform, controls and culture to anticipate, mitigate and avoid unintentional harm, particularly for the most vulnerable.

What challenges do organizations face when implementing GenAI, particularly with data privacy and security? How can we help?

Kristi: Implementing GenAI presents organizations with multifaceted challenges, particularly around data privacy and security. Ensuring that GenAI outputs are accurate and representative, and the responsibility that organizations and employees bear for verifying them, are also significant challenges. Governance, transparency and the presence of unexpected biases are additional hurdles.

Generative AI: What it is and why it matters

Concerns range from accidentally breaching intellectual property and copyright by sharing data in an unlicensed or unvetted tool to the potential for privacy breaches and cybersecurity threats that GenAI can exacerbate. This data may contain private information about people, sensitive business use cases, or health care data. Unauthorized access to or inappropriate disclosure of these types of data can cause harm to individuals or organizations.

While privacy and security were previously associated with intellectual property (IP) and cybersecurity, the definition and scope have expanded in recent years to encompass data access management, data localization and the rights of data subjects.

An organization's ability to address these challenges is shaped by its GenAI literacy and its cultural attitude, whether enthusiasm or apprehension. If employees are well-versed in the risks and rewards of GenAI, they can more effectively reap the benefits while being mindful of the limitations. And if the organization is clear and intentional in communicating its vision and strategy for GenAI, it is more likely to have organizational buy-in.

We recognize this challenge at SAS and continuously invest in our offerings from both a technology and a governance perspective.

How do you see the regulatory world evolving concerning GenAI, and what steps can organizations take to stay compliant and ethical in their AI initiatives?

Kristi: GenAI impacts everyone, from vendors to customers to regulators and consumers. It raises questions about trust, compliance, transparency, and safety. The regulatory landscape reflects this reality – we’re seeing across geographies and industries that regulatory bodies are drafting and discussing the legal expectations regarding AI. SAS has also been closely involved in many of these conversations.

I should note that compliance and ethics are two separate bars: compliance is the floor that organizations must meet, but in many cases responsible innovation requires going further. The first step is establishing a system of oversight, which includes a framework for AI governance, an AI strategy and the formulation of policies. After establishing that oversight framework, an organization can consider its compliance needs.

Embedding responsible AI best practices within the model lifecycle

First, it is essential for an organization to assess and monitor its readiness for regulations. It can draw on an assessment of the regulatory landscape for its industry and then look inward to determine whether it is prepared to comply. In addition to compliance and risk management, organizations should consider their operating procedures around AI and data management.

Leadership should curate operational procedures that make sense for the organization based on the governance and strategy established in the oversight pillar. Finally, social norms will naturally develop as an organization adapts to the system for trustworthy AI. Fostering shared norms that align with organizational principles and strategy is key. This creates a positive culture where employees feel competent and empowered to engage in trustworthy AI practices, even when leadership is absent.

You and I play different roles at SAS but are a part of one huge team. What role does interdisciplinary collaboration play in addressing the ethical challenges of GenAI?

Kristi: Oh, interdisciplinary collaboration is essential! Even just thinking about how you and I partner to communicate the benefits and challenges of GenAI to different audiences and in the best medium for each audience.

If data scientists only talked to data scientists and sales only talked to sales, we would have only a partial understanding of the impact GenAI (or really any technology) is having. Silos in an organization are always challenging, and an organization's culture is made up of its individual groups and departments.

With the potential for GenAI to perpetuate biases in training data, how can we ensure that AI systems promote fairness and equality rather than reinforcing existing societal inequalities?

3 attributes of human centricity in trustworthy AI development

Kristi: GenAI is a subset of AI algorithms designed to generate new content that closely resembles the data on which it’s trained. So first, we need to recognize the biases in our data and society and then start mitigating them intentionally.

My colleague recently wrote a blog post highlighting some of the technical steps that can be taken, which include ensuring comprehensive data representation, prioritizing all communities equally, and designing products with everyone in mind.

Additionally, we need to invest in methods that make the decision-making process transparent: what data went into the model and why a certain outcome occurred. Explainability and transparency are essential to promoting fairness because they help explain how the model works and what biases may be present.
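
As one concrete, model-agnostic illustration of that kind of transparency (a sketch, not a specific SAS workflow): permutation importance scores each input feature by how much randomly shuffling it degrades model performance, which helps answer “which inputs drove this outcome?”

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a real training set.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy it causes.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```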

Asking questions about representation in the training data, performance and prediction parity, applied mitigation techniques, and steps toward a more transparent model are all essential and practical steps.
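
Prediction parity, for example, can be checked directly by comparing the rate of positive predictions across groups. A toy sketch with made-up predictions and group labels, just to show the shape of the check:

```python
import numpy as np

def positive_prediction_rates(y_pred: np.ndarray, groups: np.ndarray) -> dict:
    """Rate of positive predictions per group; large gaps suggest disparate treatment."""
    return {g: float(y_pred[groups == g].mean()) for g in np.unique(groups)}

# Hypothetical model predictions and group labels.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

rates = positive_prediction_rates(y_pred, groups)
print(rates)                                           # {'A': 0.75, 'B': 0.25}
print("max gap:", max(rates.values()) - min(rates.values()))
```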

We’re both music enthusiasts. Recently, artist Drake used GenAI to create a now-viral diss song. As GenAI becomes more advanced, do you foresee a future where AI-generated content is indistinguishable from human content? What ethical considerations does this raise?

Kristi: For what it's worth, artists have been using tools and getting support from other artists to complete their craft for a long time. Singers use backup vocals to enhance the sound and lip sync during live performances; musicians have used ghostwriters for years to help with lyricism. Plenty of canvas painters disapprove of digital art, and plenty of digital artists scoff at traditional creative methods. Even Drake’s beef with fellow artist Kendrick Lamar (the reason for the song) highlights a couple of questions that have been discussed in the art community for a while.

One of the questions concerns provenance and authorship: how do we know when an AI system created something versus a human, and how do we keep track? Does it matter who created the piece, or should the focus be on enjoyment and emotional response? If I were reading an article summarizing last night's game, would I care that AI wrote it? Does it matter who painted the piece that someone is now buying at auction?

There are plenty of examples where it's hard to tell whether content is human- or AI-generated. There are examples of terrible content written by both humans and AI, and examples of excellent content created by both. So perhaps AI helps mediocre artists write better music without giving their fans trust issues.

Another question I hear is about the scope of editorial control: “If I use AI to create something, how much of my own involvement is needed before it stops being my creation and becomes an AI creation?” And if AI is a tool we’re using for creativity and songwriting – as in your example – how much can I rely on that tool before the outcome is no longer my creation?

Now, we get very existential and philosophical because we’re unpacking the debate of the definition of art – a debate that is as old as art itself. Should Vincent Van Gogh’s work be considered art, or is that category reserved for the Renaissance geniuses? If the latter, then can Beyoncé’s Renaissance album be considered art? I’m sure you can imagine how this debate unpacks across generations and art forms.

What steps must we take to create a culture of ethical innovation and responsible AI development within our organization and society?

Kristi: There are a few key steps we can take. First, it is crucial to raise AI literacy within the organization broadly. This involves increasing awareness of AI's presence and deepening the understanding of its role in our lives and work. I want to use the example of basic literacy around electricity. I couldn’t teach a college class on it, but I know it powers my laptop and router, allowing me to do my work. I also know not to blow dry my hair while in the shower – because of safety concerns. That’s the baseline level of literacy and the level we need to achieve for AI and AI ethics. To achieve this basic level of AI literacy, it’s essential to clearly articulate the principles and values we want to embed into our AI systems and use. What do we want to promote through our AI?

Within an organization, it's vital to nurture shared norms and enhance employee competency regarding AI. Employees should feel empowered to ask challenging questions about AI's ethical implications and how the organization develops, deploys, or uses AI. Cultivating a culture of resilience is also essential. As AI continues to reshape our world and organizations, we must be comfortable with these changes and adaptable enough to thrive. Organizations play a role in developing this resiliency.

Our goal is to equip everyone with the tools for asking critical questions like “For what purpose? To what end? Who may be left out?” These questions help start the dialogue that ensures the development and deployment of AI technologies align with our shared values and ethical standards.

Register to rewatch Kristi’s presentation on demand!

About Author

Caslee Sims

I'm Caslee Sims, writer and editor for SAS Blogs. I gravitate toward spaces of creativity, collaboration and community. Whether in front of the camera, producing stories, writing them, or sharing and retweeting them, I enjoy the art of storytelling. I share interests in sports, tech, music and pop culture, among others.
