AI on the Balance Beam

AI is, at best, misunderstood by most people. Many think we will be overrun by robots. Others fear their jobs will be lost to advances in AI. Naysayers warn of the loss of privacy and the potential harm of bias in AI-powered systems. Fans are relieved that automating mindless, repetitive tasks will free humans for more meaningful pursuits.

This great-debate series on AI and ethics focuses on sensitive flashpoint issues in the future of AI, viewed through the lens of ethics, social justice, inclusivity, and fairness. AI on the Balance Beam centers on provocative use cases debated by a brilliant multidisciplinary panel of thought leaders, academics, and cultural icons. The series is co-produced by the Center for Responsible AI at NYU Tandon and New York University.


Learning + AI

AI is changing how we educate, with long-term implications for society. If the goal is to deliver personalized learning systems and programs, then what are the opportunities for reskilling and upskilling learners and the current workforce? Who makes the decisions that inform AI-based learning systems? What biases are embedded in the historical data sets used for AI training? Watch and learn from three brilliant presenters as they deconstruct the ethics and efficacy of predictive AI systems determining who learns … what.

Moderator

Steve Elliott, entrepreneur, advisor, investor, and strategist

Experts

Ari Chanen, PhD, Founder and Principal at AI Ari, an international applied AI/ML consultancy exploring novel AI solutions in fintech and other fields
Dr. Allan Hamilton, Regents' Professor of Surgery; Executive Director, Arizona Simulation Technology & Education Center; Professor of Neurosurgery; Clinical Professor in Radiation Oncology, Psychology, and Electrical & Computer Engineering
Tracy Kosa, Principal PM & Privacy Researcher, Amazon

AI + Culture

How well does the public understand AI? Are people aware of the power and possibilities of this technology when it comes to its imprint on their personal lives? What is the role of leading technology companies and the federal government in safeguarding our privacy? What would happen if a handful of technology companies pledged billions to establish a Living Museum of Artificial Intelligence? Its mission would be to educate, familiarize, and generate feedback and dialogue about the opportunities and potential unintended consequences of AI. Is this a great idea? Or the worst ever? Watch and learn from our brilliant experts who debate the ramifications and possibilities.

Moderator

Steve Elliott, entrepreneur, advisor, investor, and strategist

Experts

Angle Bush, founder of Black Women in Artificial Intelligence
Kate MacNevin, Global Chairwoman & CEO, MRM, a global marketing agency division of the McCann Worldgroup network
Scott Page, Williamson Family Professor of Business Administration, Ross School of Business, University of Michigan-Ann Arbor; External Faculty, Santa Fe Institute
Enrique Vela, Director of Interiors, Olson Kundig

About The NYU Center for Responsible AI (R/AI)

The Center for Responsible AI (R/AI) is making AI work for everyone. We catalyze policy and industry impact, foster interdisciplinary research and education, and build an equitable and socially sustainable ecosystem of AI innovation.

As a research and tool production lab, R/AI is charting a path towards responsible AI. This means ensuring that technical advances are combined with a shift in business practices and much-needed regulatory mechanisms that are informed by social research and robust public participation. https://airesponsibly.com/

Register Now