Igor Jablokov, Pryon: Building a responsible AI future

AI News

The current incarnation of Pryon has aimed to confront AI’s ethical quandaries through responsible design focused on critical infrastructure and high-stakes use cases. “[We wanted to] create something purposely hardened for more critical infrastructure, essential workers, and more serious pursuits,” Jablokov explained.

With Generative AI Advances, The Time to Tackle Responsible AI Is Now

Unite.AI

Today, seven in 10 companies are experimenting with generative AI, meaning that the number of AI models in production will skyrocket over the coming years. As a result, industry discussions around responsible AI have taken on greater urgency.

Delivering responsible AI in the healthcare and life sciences industry

IBM Journey to AI blog

Curating AI responsibly is a sociotechnical challenge that requires a holistic approach. There are many elements required to earn people’s trust, including making sure that your AI model is accurate, auditable, explainable, fair and protective of people’s data privacy.

Making Machines Mindful: NYU Professor Talks Responsible AI

NVIDIA

Artificial intelligence is now a household term. Responsible AI is hot on its heels. Julia Stoyanovich, associate professor of computer science and engineering at NYU and director of the university’s Center for Responsible AI, wants to make the terms “AI” and “responsible AI” synonymous.

Responsible AI at Google Research: The Impact Lab

Google Research AI blog

The Impact Lab team, part of Google’s Responsible AI Team, employs a range of interdisciplinary methodologies to ensure critical and rich analysis of the potential implications of technology development. We examine systemic social issues and generate useful artifacts for responsible AI development.

This NIST Trustworthy and Responsible AI Report Develops a Taxonomy of Concepts and Defines Terminology in the Field of Adversarial Machine Learning (AML)

Marktechpost

The NIST AI Risk Management Framework and AI Trustworthiness taxonomy have indicated that these operational characteristics are necessary for trustworthy AI. The goal is to provide a thorough resource that helps shape future practice guides and standards for evaluating and controlling the security of AI systems.

Introducing the Topic Tracks for ODSC East 2024 — Highlighting Gen AI, LLMs, and Responsible AI

ODSC - Open Data Science

ODSC East 2024, coming up this April 23rd to 25th, is fast approaching, and this year we will have even more tracks comprising hands-on training sessions, expert-led workshops, and talks from data science innovators and practitioners.