Datacast Episode 48: AI Ethics, Open Data, and Recommendations Fairness with Jessie Smith
The 48th episode of Datacast is my conversation with Jessie Smith — a Ph.D. student at The University of Colorado Boulder researching machine learning and AI ethics with an emphasis on algorithmic fairness and transparency. Give it a listen to hear about her foray into Computer Science Ethics, her involvement with the open data movement, her research on bias and fairness for recommendation systems, her public scholarship via Radical AI and SciFi For Real Life, and more.
Listen to the show on (1) Spotify, (2) Apple Podcasts, (3) Google Podcasts, (4) Stitcher, (5) iHeart Radio, (6) Radio Public, (7) Breaker, and (8) TuneIn
Key Takeaways
Below are highlights from my conversation with Jess:
On Studying Computer Science in College
What interests me about coding, in general, is that I’m not constrained to the physical world. It’s almost this magical space that exists in the cloud. I can create whatever I want, so that always intrigued me.
Furthermore, studying computer science is one of the best ways to impact a lot of people.
My favorite course at Cal Poly was called “Professional Responsibility,” which sounds unsexy, but it’s basically Computer Science Ethics. It sparked my interest in changing my career in the first place.
On Changing The Engineer’s Mindset
The engineer’s mindset means that technologists (especially computer scientists) tend to ask “how” when encountering a problem. If we always use this approach, we tend to lean into “solutionism”: the idea that there will always be a solution to a problem, and that technology is the best solution.
Asking “why” throws ethical speculation into the mix: Is technology the correct way to solve this problem? Should I be thinking more critically about the unintended consequences of the technology that I’m creating? Should I be asking myself whether this technology should be created in the first place?
On Interning at GoDaddy
The GoDaddy internships were my first introductions to the tech industry. It was a great way to dip my toes into the water and see what that kind of career would look like. Unfortunately, throughout that process, I figured out that career wasn’t really for me, but I still had a blast and learned a ton.
One thing that stood out about GoDaddy, in particular, is their emphasis on helping women feel comfortable and welcome, starting with the recruiting and interviewing process.
On Doing Undergraduate Research in AI Ethics
As a research assistant for the Ethics and Emerging Sciences Group at Cal Poly, I was the team’s technologist. The group is based in the Philosophy department and examines the harms of predictive policing from a philosophical perspective. It was interesting, for the first time, to have that interdisciplinary collaboration and be recognized for my technical expertise, informing them of the realistic concerns from the computer science perspective. There tends to be a gap in communication between social scientists and computer scientists regarding solutions for these problems.
I realized that the most motivated people in the responsible tech space do not come from a computer science background, but from the social science, humanities, and anthropology side.
On Open Data
Open data is valuable at a high level in places like Colombia and other emerging markets because it is synonymous with transparency. If a government is transparent with its people, through data, about what it is doing and what is happening in the country, there will be more trust between the governing body and the populace. The more trust there is from the people, the more progress the country can make.
The obstacles that hinder open data include: (1) Too many people collect data and are unwilling to share it. (2) Even when people are willing to share, the data are poorly collected: there is no shared, standardized schema, which leads to inaccuracies. (3) There is so much missing data that people are told the wrong “objective truth,” which fuels misinformation.
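To make the second and third obstacles concrete, here is a minimal sketch (mine, not from the episode) that checks an incoming open dataset against an assumed shared schema and reports how much data is missing. The column names, types, and sample values are hypothetical.

```python
# Hypothetical illustration: validating an open dataset against a shared schema
# and quantifying missing data. Columns and sample rows are assumptions.
import pandas as pd

EXPECTED_SCHEMA = {          # the agreed-upon, standardized schema (assumed)
    "municipality": "object",
    "year": "int64",
    "budget_cop": "float64",
}

def validate(df: pd.DataFrame) -> None:
    # 1) Schema check: every publisher should ship the same columns and types.
    for column, dtype in EXPECTED_SCHEMA.items():
        if column not in df.columns:
            print(f"Missing column: {column}")
        elif str(df[column].dtype) != dtype:
            print(f"Type mismatch for {column}: {df[column].dtype} != {dtype}")
    # 2) Missing-data check: quantify the gaps instead of presenting the data as truth.
    missing = df.isna().mean().sort_values(ascending=False)
    print("Share of missing values per column:")
    print(missing)

if __name__ == "__main__":
    sample = pd.DataFrame({
        "municipality": ["Bogotá", "Medellín", None],
        "year": [2019, 2019, 2020],
        "budget_cop": [1.2e9, None, 3.4e8],
    })
    validate(sample)
```

Checks like these are only a small part of the picture, but they show how a shared schema and explicit missingness reporting address the inaccuracy and misinformation problems described above.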
On Pursuing a Ph.D.
During the first three years of my undergraduate studies, I never planned to go to graduate school. In my final year, I became very interested in fairness, bias, and accountability in machine learning, and I realized that if I wanted to go into that field, I needed to become an expert and get a doctorate.
I have honestly loved every minute of my Ph.D. experience so far. I spend almost every day doing the work that I love and that I used to do just for fun.
At UC Boulder, I am co-advised by Casey Fiesler at the Internet Rules Lab in the Information Science department and Robin Burke at That Recommender Systems Lab in the Computer Science department. The community that I stumbled into here is incredibly unique and super welcoming. They never encourage competition. It’s always about support and collaboration.
On The ETHItechniCAL Framework
This is a framework that promotes anti-techno-solutionism. I argue that technology is not always the correct solution. It can even be harmful if we try to solve problems without thinking critically about the unintended consequences. Specifically, I encourage coders and technologists to think with an interdisciplinary mindset when solving problems with technology.
My call for intentionality is this: We can’t just assume that there is a best solution. We have to assume that whatever decisions we make will have consequences. We have to understand what the tradeoffs will be and who will be harmed by the design decisions we make.
On Bias and Fairness in Recommendation Systems
When people think about recommendations, they tend to think of movie, music, or product recommendations, where the stakes are low. But with job or housing recommendations, there is much more at stake if those systems are inaccurate or discriminatory.
If humans can’t agree on what it means to treat people fairly using algorithms, how could we ever write a recommendation algorithm that is supposed to be “fair”?
There’s a need for awareness of these fairness tradeoffs and a need for greater intentionality and transparency between the systems and the users.
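To give one concrete flavor of these tradeoffs, here is a minimal sketch (my own illustration, not from the episode or Jess’s papers) that measures how exposure in top-k recommendation lists is split between two hypothetical provider groups and compares it to their catalog share. The items, group labels, and parity target are all assumptions.

```python
# Hypothetical sketch of one provider-side fairness measure for recommendations:
# compare each item group's share of top-k exposure to its share of the catalog.
# All data below is invented for illustration.
from collections import Counter

# Which provider group each item belongs to (assumed labels).
item_group = {"i1": "A", "i2": "A", "i3": "B", "i4": "B", "i5": "B"}

# Top-3 recommendation lists produced by some recommender for three users.
top_k_lists = {
    "u1": ["i1", "i2", "i3"],
    "u2": ["i1", "i2", "i4"],
    "u3": ["i2", "i1", "i5"],
}

def exposure_share(lists, groups):
    """Fraction of all recommendation slots given to each group."""
    counts = Counter(groups[item] for items in lists.values() for item in items)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

catalog_share = {g: c / len(item_group)
                 for g, c in Counter(item_group.values()).items()}
observed = exposure_share(top_k_lists, item_group)

for g in sorted(catalog_share):
    print(f"group {g}: catalog share {catalog_share[g]:.2f}, "
          f"exposure share {observed.get(g, 0.0):.2f}")
# Here group B holds 60% of the catalog but receives only ~33% of the exposure,
# so re-ranking toward exposure parity would trade away some accuracy.
```

Even this toy measure forces a choice about what “fair” means (parity with the catalog? with user preferences? with something else?), which is exactly the kind of tradeoff that calls for intentionality and transparency.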
On Issues in AI and Society
Here are the topics that come up during my Radical AI podcast interviews with researchers and practitioners in the AI Ethics space: (1) racial representation in the tech field, (2) surveillance and privacy, (3) value tradeoffs, (4) AI for social good, (5) ethical design practices, (6) labor and precarious work, (7) the harms of classification, (8) data objectivity vs. data subjectivity, (9) racism and sexism, (10) algorithms of oppression, (11) using science fiction for ethical speculation, (12) interdisciplinarity and collaboration, and (13) teaching social science to computer science students.
On Cultivating Positive Social Impact
I love the idea of leaving the world a better place than it was when I entered it. I constantly come back to this when asking myself why I’m doing the work that I’m doing.
Show Notes
(2:08) Jess discussed her foray into studying Software Engineering at California Polytechnic State University and revealed her favorite course there on Computer Science Ethics.
(4:31) Jess unpacked her argument that it is important to shift the engineering mindset away from only asking “how” and toward asking “why,” referring to her blog post “Changing The Engineer’s Mindset.”
(7:27) Jess went over her summer internship experience at GoDaddy as a software engineer.
(11:39) Jess talked about her time working as a research assistant for the Ethics and Emerging Sciences Group at Cal Poly, where she examined the ethical implications of AI “predictive policing” systems and surveyed the current role of fairness metrics in battling algorithmic bias.
(16:27) Jess revealed her experience being involved with the open data movement in Colombia (read her articles “The Truth About Open Data” and “How To Use Data Science For Social Impact”).
(24:22) Jess emphasized the importance of education to spread data literacy in developing nations.
(26:35) Jess discussed her experience as a current Ph.D. student in the Department of Information Science at the University of Colorado, Boulder, where she focuses on value tradeoffs in technology and machine learning ethics.
(32:01) Jess unpacked the ETHItechniCAL framework to assist with ethical decision-making that she proposes in “The Trolley Problem Isn’t Theoretical Anymore.”
(35:39) Jess unpacked her argument that computer scientists must be educated to code with social responsibility and equipped with the right tools to do so, as indicated in “How Tech Shapes Society.”
(39:00) Jess discussed the work “Investigating Potential Factors Associated with Gender Discrimination in Collaborative Recommender Systems” with Masoud Mansoury and Himan Abdollahpouri.
(42:54) Jess discussed the work “Exploring User Opinions of Fairness in Recommender Systems” with Nasim Sonboli.
(47:12) Via her podcast Radical AI, Jess unpacked the underrated AI and social issues that she has come across.
(49:17) Via her YouTube show Sci-Fi in Real Life, Jess shared her 3 favorite videos: “Dying To Be Alive,” “Living On The Edge,” and “Black Mirror Meta Episode.”
(52:25) Jess dug deep into her mission of cultivating positive social impacts for the world.
(54:32) Closing segment.
Her Contact Info
Her Recommended Resources
UC Boulder’s Internet Rules Lab
UC Boulder’s That Recommender Systems Lab
“The Courage To Be Disliked” by Ichiro Kishimi and Fumitake Koga