An AI Discussion on Fairness, Responsibility, Bias, and Ethical Issues

Date & Time

April 26, 2022, 7 a.m. - April 26, 2022, 8:15 a.m.

Cost

$0

Location

Online


Description

Join us for an online discussion on Tuesday, April 26, at 7 p.m.

About this event

AI is changing the world in which we live, work, learn, and play, permeating our lives, jobs, devices, and places, and enabling innovations and discoveries in research and industry. However, this widespread adoption of AI, both to support human decisions and to make completely autonomous ones, across all aspects of society presents real risks as well as transformative potential. In placing our human trust in such powerful learning and decision-making systems, there is much to consider to ensure fairness. AI itself is neither fair nor unfair, but as the human developers, implementers, and beneficiaries of these systems, we must understand how to promote fairness in how they are trained and how they operate, so that no person or group suffers accidental, unfair bias, exclusion, or other negative impact. Join us as Tina Lassiter leads a discussion on the ethical implications and considerations of AI. No prior AI expertise is necessary, as this discussion will focus on the human issues in ensuring fairness in the use of AI, and everyone is encouraged to contribute!

Jay Boisseau, Executive Director, Austin Forum on Technology & Society

Jay is the executive director and founder of the Austin Forum on Technology & Society, which he started in 2006 and which is now the leading monthly technology outreach and engagement event in Austin, attracting a growing national and even international online audience. The Austin Forum is one of the pillars of the Austin tech scene, providing connections to information, ideas, collaborations, and the community overall. Through Vizias, Jay also founded the Austin Smart City Alliance (July 2015, formerly the Austin CityUP Consortium) and currently serves as its executive director, with a vision of creating an integrated smart city fabric throughout Austin in the years ahead, leveraging mobile devices and IoT data collectors as well as supercomputers and AI for predictive analytics and scenario simulation, to address city issues, empower city planning, and improve city life in general.

Tina Lassiter, AI Researcher, Responsible AI Institute and Center for AI and Digital Policy

Tina Lassiter is a former German lawyer and a recent graduate of the School of Information at the University of Texas at Austin, with a special interest in AI and ethics. She has published on the use of AI in recruiting and hiring, has examined how ethical issues surrounding AI are perceived by workers in small and large tech companies, and has studied how ethical considerations and regulatory compliance guidelines can be integrated into company best practices. She currently works as a Hiring Policy Framework and Recommendations Intern with the Responsible AI Institute, and is also part of the research group and policy clinic of the Center for AI and Digital Policy.

Max Marion, Machine Learning Engineer, KUNGFU.AI

Max Marion is a Machine Learning Engineer at KUNGFU.AI, an AI services firm based in Austin, Texas, where he helps lead the ethics group. He has worked on AI applications including detecting exterior damage in pictures of vehicles, identifying debris on runways in radar scans, processing insurance coverage text corpora, judging SXSW speaker applications, and environmental audio processing. A native Austinite, he attended Occidental College, graduating with a degree in computer science and mathematics before moving back home. In his free time, he enjoys playing Ultimate frisbee, playing strategy games, and hanging out in his backyard with his cat. You can find his poor attempts at professionalism on LinkedIn, and his worst takes on Twitter.