University of Notre Dame Joins AI Safety Institute Consortium

ODSC - Open Data Science
2 min read · Feb 14, 2024

Artificial intelligence is a present-day reality transforming industries, societies, and daily life. While the rapid expansion of AI technologies is promising, it has also introduced complex challenges and risks. Recognizing the urgency of addressing these concerns, the University of Notre Dame has joined the Artificial Intelligence Safety Institute Consortium (AISIC).

The announcement came in a press release from the university. The AISIC is a collective initiative spearheaded by the National Institute of Standards and Technology (NIST) to advance the development and deployment of safe, reliable AI.

Established in response to a presidential executive order, the AISIC emphasizes the critical balance between leveraging AI’s potential benefits and mitigating its risks. The initiative represents a pivotal step toward responsible AI use, aiming to develop the standards and measurement techniques needed to ensure AI systems are safe and trustworthy.

Notre Dame’s engagement in the AISIC reflects its commitment to leveraging its research capabilities for the common good. By participating in this consortium, Notre Dame researchers are at the forefront of identifying and addressing the risks associated with AI, contributing their expertise to the development of safer and more ethical AI technologies.

Jeffrey F. Rhoads, vice president for research and professor of aerospace and mechanical engineering, said, “We are excited to join AISIC at a pivotal time for AI and for our society… We know that to manage AI risks, we first have to measure and understand them.”

Nitesh Chawla, the Frank M. Freimann Professor of Computer Science and Engineering and director of the Lucy Family Institute for Data and Society, said, “A special focus for the consortium will be dual-use foundation models, the advanced AI systems used for a wide variety of purposes.”

Chawla, who was recently elected a fellow of the Association for the Advancement of Artificial Intelligence, continued, “Improving evaluation and measurement techniques will help researchers and practitioners gain a deeper understanding of AI capabilities endowed in a system, including risks and benefits. They will then be able to offer guidance for the industry leaders working to create AI that is safe, secure, and trustworthy. It is a moment for human-machine teaming.”

The consortium currently has more than 200 members, including companies and organizations that aim to ensure AI’s transformative potential is realized with safety in mind. You can read more about the AISIC here.

Originally posted on OpenDataScience.com

Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday. You can also get data science training on-demand wherever you are with our Ai+ Training platform. Interested in attending an ODSC event? Learn more about our upcoming events here.
