The Center for Responsible AI’s Vision for Equitable Tech

NYU Center for Data Science
5 min read · Dec 12, 2023
Responsible AI

In an era where algorithms determine everything from creditworthiness to carceral sentencing, the imperative for responsible innovation has never been more urgent. The Center for Responsible AI (NYU R/AI) is leading this charge by embedding ethical considerations into the fabric of artificial intelligence research and development. Founded by Julia Stoyanovich, Associate Professor of Data Science and Institute Associate Professor of Computer Science & Engineering, the Center for Responsible AI aims to redefine the AI landscape by ensuring that responsibility in AI is not an afterthought but foundational to its very definition. Engaging in interdisciplinary research, influencing AI policy, and educating on AI’s societal impacts, the Center is cultivating a future where AI and ethics coalesce seamlessly.

The Center for Responsible AI is a testament to NYU’s commitment to pioneering research that upholds and advances these ideals. This commitment is exemplified in the work of five of the Center’s affiliated researchers: Tandon CSE PhD students Andrew Bell and Lucas Rosenblatt, CDS PhD students Lucius Bynum and Falaah Arif Khan, and NYU Steinhardt PhD student and CDS alumna Armanda Lewis. Together, their work, buoyed by the NRT FUTURE Program, embodies the Center’s mission to harmonize AI with the core principles of social equity, inclusion, and fairness.

Andrew Bell and Lucius Bynum: Challenging Algorithmic Boundaries

Andrew Bell’s exploration of algorithmic fairness sets a foundation for the responsible AI dialogue. His paper, “The Possibility of Fairness: Revisiting the Impossibility Theorem in Practice,” written with CDS PhD student Lucius Bynum, Tandon CSE PhD student Lucas Rosenblatt, and Stoyanovich, unravels the complexities of fairness in AI, dissecting the so-called “impossibility theorem.”

The impossibility theorem posits that three common fairness desiderata (demographic parity, equalized odds, and predictive parity) cannot be satisfied simultaneously due to their mathematical incompatibility. At first glance, this theorem looks like a significant roadblock to achieving algorithmic fairness, suggesting a need to compromise on at least one fairness metric.
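To make these criteria concrete, here is a minimal sketch in Python of the per-group rates each criterion compares: selection rate for demographic parity, true and false positive rates for equalized odds, and precision for predictive parity. The function name and toy data are illustrative and not drawn from the paper.

```python
import numpy as np

def group_rates(y_true, y_pred, group):
    """Per-group rates behind three common fairness criteria.

    y_true, y_pred: binary arrays; group: boolean array (True = group A).
    """
    rates = {}
    for name, mask in [("A", group), ("B", ~group)]:
        yt, yp = y_true[mask], y_pred[mask]
        rates[name] = {
            "selection_rate": yp.mean(),  # demographic parity compares these
            "tpr": yp[yt == 1].mean(),    # equalized odds compares TPR...
            "fpr": yp[yt == 0].mean(),    # ...and FPR across groups
            "ppv": yt[yp == 1].mean(),    # predictive parity compares precision
        }
    return rates

# Toy example: demographic parity holds exactly (both selection rates are 0.5),
# yet the error-rate and precision criteria diverge, illustrating the tension.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group = np.array([True, True, True, True, False, False, False, False])
print(group_rates(y_true, y_pred, group))
```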

However, “The Possibility of Fairness” introduces a novel perspective that leverages the fact that, in practice, perfect equality between metrics is not always necessary, making the impossibility theorem less limiting than it seems. Consider “the 80% rule” in legal discrimination cases. This is an example of how, in real-world applications, a metric need not be 100% equal between groups; rather, reaching 80% of the privileged group’s value is considered “fair enough.” The rule compares selection rates: if a company interviewed 10 out of every 100 applicants from privileged backgrounds, it could legally demonstrate a fair hiring process by also interviewing at least 8 of every 100 applicants from marginalized groups. This breakthrough not only presents a methodological shift but also potentially expands the horizons for AI applications in a socially equitable manner.
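As a concrete illustration, the check behind the 80% rule (also known as the “four-fifths rule”) can be expressed in a few lines. This is a hypothetical helper for illustration, not code from the paper:

```python
def passes_four_fifths_rule(selected_priv, total_priv,
                            selected_marg, total_marg,
                            threshold=0.8):
    """Return True if the marginalized group's selection rate is at
    least `threshold` times the privileged group's selection rate."""
    rate_priv = selected_priv / total_priv
    rate_marg = selected_marg / total_marg
    return rate_marg / rate_priv >= threshold

# 10 of 100 privileged applicants interviewed vs. 8 of 100 marginalized
# applicants: the ratio is 0.8, which meets the threshold.
print(passes_four_fifths_rule(10, 100, 8, 100))  # True
```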

“We’re starting to see that what once seemed like hard-and-fast limitations in algorithmic fairness are actually more flexible than we thought,” Bell explains, highlighting how his work aligns with the Center’s ethos of challenging AI norms and pushing boundaries.

From Fairness to Inclusion: Armanda Lewis’ Educational Perspective

Armanda Lewis’ research bridges the gap between AI and education. Her paper, “Multimodal large language models for inclusive collaboration learning tasks,” is firmly rooted in the concept of inclusion within learning environments, and her work resonates with Bell’s emphasis on societal norms. “Inclusivity isn’t just a societal value but a critical component in educational outcomes,” Lewis remarks.

The focal point of Lewis’ research is the measurement and enhancement of inclusivity. Previous studies, mostly rooted in psychology, have explored what constitutes an inclusive environment and its characteristics. However, there’s a dearth of literature on practical approaches to foster inclusiveness. To bridge this gap, Lewis is harnessing her expertise in natural language processing (NLP), acquired at CDS, and applying it to large language models (LLMs), including multimodal LLMs.

This innovative approach does not solely focus on language but also integrates non-linguistic signals like gestures and body language into the AI’s analytical framework. Such a holistic approach to AI and inclusion is relatively untrodden territory, with Lewis contributing foundational evidence to this emerging field.

By using AI to model and facilitate inclusive interactions, Lewis extends the concept of fairness beyond algorithms into the practical realm of education and group dynamics.

Falaah Arif Khan: A Doctrine-First Approach to AI Fairness

Furthering the conversation, Falaah Arif Khan’s research offers a philosophical underpinning to the mathematical approaches highlighted by Bell and applied by Lewis. “Fairness in AI isn’t a mere algorithmic challenge; it’s a philosophical quandary that we must address with a doctrine-first mindset,” Arif Khan posits. In her paper, “Towards Substantive Conceptions of Algorithmic Fairness: Normative Guidance from Equal Opportunity Doctrines,” written with Eleni Manis and Stoyanovich, Arif Khan defines fairness metrics through normative considerations. Her approach creates a dialogue with Bell’s and Lewis’ methodologies, proposing a multidimensional understanding of fairness that considers the diverse impacts of AI.

This research diverges from traditional approaches by advocating a doctrine-first perspective, where the choice of fairness metric is guided by normative considerations rather than a purely mathematical or economic rationale.

This doctrine-first approach emerged as a response to previous models that, while using economic theories of equal opportunity, ultimately relied on statistical measures without considering the broader, lifelong implications of opportunity. Arif Khan’s team, in their pursuit of a richer understanding, delved into philosophy to construct a framework that could bridge the gap between abstract concepts of fairness and their computational representations.

A Responsible AI Ethos

The combined efforts of Bell, Bynum, Rosenblatt, Lewis, Arif Khan, Stoyanovich, and their co-authors signify a comprehensive approach to AI research at R/AI, encompassing the technical, educational, and philosophical facets of building AI-driven systems ethically and using them ethically. Their research, though varied in focus, converges on the common goal of advancing AI in a manner that is fair, inclusive, and attuned to the nuanced fabric of societal needs. It’s through the interlacing of these perspectives that R/AI fosters a holistic understanding of responsible AI, setting the stage for a future where AI is principled and just.

As these researchers continue to push the frontiers of their respective fields, R/AI serves as a model for the kind of interdisciplinary approach necessary to tackle the multifaceted challenges of modern AI. With the support of NSF’s NRT grant, their work not only represents the pinnacle of current research but also lights the way for the future of responsible AI development, one where AI is equitable, inclusive, and fair.

By Stephen Thomas
