The 81st episode of Datacast is my conversation with Aarti Bagul — a machine learning engineer at Snorkel AI.
Our wide-ranging conversation touches on her upbringing in India, her undergraduate education in Computer Science and Engineering at NYU, her time at Stanford teaching CS 230 and doing ML research on clinical healthcare, working at Andrew Ng’s startup and venture fund, her current journey with Snorkel AI, and much more.
Please enjoy my conversation with Aarti!
Listen to the show on (1) Spotify, (2) Apple Podcasts, (3) Google Podcasts, (4) TuneIn, (5) RadioPublic, and (6) Stitcher.
Key Takeaways
Here are the highlights from my conversation with Aarti:
On Her Upbringing
I was born and raised in India, then moved to the US for undergrad when I was 17 to go to NYU. It wasn’t something that I had planned on. Indian parents mostly want their child to become a doctor or an engineer. Right before coming to NYU, I got into med school in India. My parents made me apply to schools abroad as a backup. Then I realized I didn’t want to be in medicine, and I’m very grateful to my parents for being okay with my decision to study abroad instead.
I was from a small city in India, so coming to New York was a bit of a transition. The culture shock is definitely real. But it was a very transformational experience because I think college is already a time when you grow and find yourself as a person. The Indian school system is more academic/textbook-based, while NYU provides more practical opportunities such as working in labs. Furthermore, NYU makes students take many liberal arts classes, so I learned to be a better writer and focused more on cultural development. Finally, NYU’s campus is in the middle of the city, which made me grow up faster and feel like an adult a lot sooner. Overall, it was a very good experience.
On Her NYU Experience
NYU only offers a BA in Computer Science, not a BS. It doesn’t matter in the US, but a BA does not feel like a reputable degree in India. So the only way I could get a BS in CS was to do a second degree in the school of engineering, in Computer Engineering. Earning two degrees normally requires five years, but I did not want to stay an extra year just to get an additional degree. So I took a lot of classes every semester, with jam-packed schedules, to graduate in four years instead of five.
I started working in a research lab my sophomore year, which got me interested in computer science. Before that, I wasn’t entirely invested or sold yet. Because of the lab that I was in, I took a lot more machine learning (ML) classes starting my second year. I got to take many graduate-level courses, since NYU has an excellent ML department. It was kind of a backward process, where I got to see ML applications in the lab and then went back to learn more about the ML theory.
Coming to a foreign country and pursuing a CS major, I was looking for mentors who were a bit further along in their academic careers to get advice on what classes to take and which research mentors to work with.
ACM was one of the first student organizations I joined. It became my little community at NYU. I stayed there for a number of years and took on leadership positions. I also ran the ACM New York Meetup group, a cool way to bring in outside speakers for student-hosted events.
Women in Computing was a similar organization. I liked its values of building a community of women and supporting each other. During one semester, I was actually the President of both organizations while taking a lot of classes and doing research. So my time at NYU was quite hectic, but I enjoyed it overall.
On Going to Stanford
I started applying to Ph.D. programs. Since I had mostly done research during my undergrad, a Ph.D. felt like the natural path afterward. But then I thought more deeply about it and realized that I did not know for sure whether I wanted to be in research. That seemed like the wrong attitude with which to apply to a Ph.D. and commit the next five years. I didn’t want to be a software engineer, so I briefly considered applied ML roles and converted my Ph.D. applications to Master’s applications. Stanford accepted me with a guaranteed TA-ship that covered my tuition. The Bay Area is already a good place for tech, and I figured I could take two years to figure things out. Given all those factors, turning it down didn’t seem like an option.
At Stanford, I focused less on classes because I had already taken many graduate-level courses at NYU. What I wanted to get from my Master’s was a path into what I wanted to do next. I ended up working in Andrew Ng’s lab and at one of his startups. You can always say these decisions make sense in hindsight, and in retrospect, I’m glad it worked out that way.
On Teaching
It was fun for me to teach CS 230 for multiple quarters. Communicating complex topics to others is the best way to understand them yourself. Overall, I’m interested in making ML more accessible to people. CS 230 was a project-based course: students would go to lectures and do assignments, but the main focus was a group project that they did over the course of the quarter. I mentored around 15 of them every quarter. Overall, I got to see 50-plus projects and got a sense of how to structure ML projects, set up baselines, work through bugs, write an end-of-semester paper, etc. I also got exposed to different subfields of ML and how ML can be applied to specific tasks.
Being a good mentor requires being responsive, helpful, and friendly, while striking the balance between helping students figure things out on their own and lightly nudging them. You should be the support that gives students high-level direction, not prescriptive suggestions. For CS 230 specifically, many students are not from CS, so I needed to be patient with a wide variety of CS skill levels and domain backgrounds.
On ML Applications in Healthcare
I had worked on ML applications to clinical medicine during undergrad. That’s why I was attracted to Andrew Ng’s lab (the “Stanford Machine Learning Group”) when I came to Stanford because it focuses on ML for medical imaging. I have gotten a little more skeptical about ML applications in clinical settings because there are important things to think about in terms of robustness and interpretability. There are more risks to actually deploying them in real hospitals.
Furthermore, there isn’t that much focus on tackling problems from the administrative and operational sides. If you had asked me the same question a couple of years earlier, I was way more research-focused and looked at the novelty and coolness of the problem itself. Over the years, I have started focusing more on the practicality of getting these applications into the hands of people. Because I’m not from the US, I don’t deeply understand the US healthcare system and the subtleties of bringing things into production in medicine. So I’ve strayed further away from ML for healthcare in practice, but in the lab setting, I enjoyed it immensely.
On Entrepreneurship
At Stanford, I joined the Threshold Venture Fellowship led by Tina Seelig and Heidi Roizen, which selects a group of 12 students every year and helps them develop entrepreneurial traits. The program was an excellent way to meet other like-minded people at Stanford from different majors. Many of them have gone on to start companies. Tina and Heidi are also amazing mentors and always available to talk to.
I also participated in Greylock X Fellowship, a good way to meet people across schools, bounce new ideas off each other, and learn about different fields. We got to meet some Greylock founders weekly and hear stories about their startup journeys.
These involvements obviously come off as more intentional in hindsight, but I think most decisions I have made were also interesting in the moment. Overall, I have been optimizing for how I can learn and grow as a person by building new skill sets and meeting new people.
On Landing AI
I worked at Landing AI for six months, full-time over the summer and part-time over another quarter. At the time, they were doing defect detection for different manufacturing parts. Companies would give us images of defective and non-defective parts and ask us to automatically detect whether something was defective for quality control/assurance. I learned that deep learning works, but in many settings you don’t have a lot of data. Especially for defect detection, good manufacturing companies do not produce that many defective samples; otherwise, they wouldn’t be in business. So how do you work with a limited amount of data and still do something useful? I realized that there are many open-ended research questions around dealing with unseen data, small amounts of data, and lack of robustness.
I also learned more about building ML products. Many shortcomings of a model can be accounted for in product design decisions, like making the product more human-in-the-loop and adding a human review phase. In a class setting, even though we were working on real-world projects, they were still toy tasks. This was the first time I saw ML in practice.
On AI Fund
AI Fund hired me into an undefined AI Product role that worked across their portfolio companies. For me, that was super exciting. The upside was huge: if I found a company that worked out or that I was interested in, I would be one of the first people helping to start it. Additionally, having Andrew Ng as a mentor was crucial. He has had a wide range of experiences, such as doing research, starting teams, running companies, and now taking on more business-oriented roles. His breadth of experience has always been an inspiration to me.
There were huge learning curves everywhere. The role was very open-ended, so I had to work with others to define what success would look like and how I could get there. Besides navigating ambiguous, undefined situations, I had to figure out go-to-market, marketing, financial modeling of whether a company is going to scale, etc.
Later on, as I worked more on the product side, I talked to mentors who could guide me in the right direction and also relied on my intuition. I think a lot of product management is intuitive. Some things might not be optimal but can get the job done in the short term and work for very small companies.
AI Fund works with companies at the 0-to-1 stage, where nothing exists yet. The number one thing we saw was companies with grandiose visions of what they could be 5 to 10 years from now, but without a good idea of what their MVP looks like. So the biggest advice we kept giving was to ask, “What is your MVP? How much money do you need to get to that?” Unless they were tackling novel research tasks, they should just use something off-the-shelf. In the beginning, it’s better to have a full-stack engineer with an understanding of ML than a highly qualified ML person. It’s also better to have someone with actual domain knowledge of the problem than deep expertise in the ML itself.
On Snorkel AI
After I decided to leave AI Fund, I wanted to join a startup at a stage later than 0-to-1: How do you get the first couple of customers? How do you generalize and build up a sales pipeline? How do you scale faster? Overall, I was also interested in the MLOps space, with companies applying ML across verticals. For Snorkel specifically, I chatted with colleagues who spoke highly of the company. Once I got to see the Snorkel product and learn more about weak supervision, I felt like it was a fundamentally better way to do things. They were also flexible with my role, letting me do both product and ML.
The Snorkel project started as a way to do faster data labeling. In traditional ML, you label each example one at a time. Because there are tens of thousands of examples, it’s very time-consuming. Data labeling is also not a one-stop shop: say you mislabel something, or you change your classes. There is a lot of variability there. Snorkel does programmatic labeling: instead of explicitly labeling each data point, you write functions that label your data. The research question was how to combine the outputs of these various functions into a single label.
With the Snorkel Flow platform, you don’t achieve perfect data labeling that covers your entire dataset in one shot. You start with a certain number of labeling functions, train your model, and quickly iterate from there. It’s a whole different way of iterating on ML applications that is much faster, with more targeted analysis. It’s exhilarating to be at the cutting edge of ML with weak supervision and to work on a platform with a super clear business value proposition alongside amazing team members.
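To make the programmatic-labeling idea concrete, here is a minimal sketch using the open-source Snorkel library’s labeling API (the general style behind the approach Aarti describes, not the Snorkel Flow platform itself). The spam-flavored keyword heuristics and the tiny DataFrame are illustrative assumptions, not examples discussed in the episode.

```python
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

# Label constants: -1 means "abstain" (the labeling function offers no opinion).
ABSTAIN, NEGATIVE, POSITIVE = -1, 0, 1

# A toy unlabeled dataset (illustrative only).
df_train = pd.DataFrame({
    "text": [
        "Win a free prize now!!!",
        "Meeting moved to 3pm tomorrow",
        "Limited offer: claim your reward",
        "Lunch on Friday?",
    ]
})

# Each labeling function encodes one noisy heuristic instead of a hand label.
@labeling_function()
def lf_keyword_free(x):
    return POSITIVE if "free" in x.text.lower() else ABSTAIN

@labeling_function()
def lf_keyword_offer(x):
    return POSITIVE if "offer" in x.text.lower() else ABSTAIN

@labeling_function()
def lf_short_message(x):
    # Assume short, plain messages tend to be benign in this toy setup.
    return NEGATIVE if len(x.text.split()) <= 5 else ABSTAIN

lfs = [lf_keyword_free, lf_keyword_offer, lf_short_message]

# Apply all labeling functions to get a (num_examples x num_lfs) label matrix.
applier = PandasLFApplier(lfs=lfs)
L_train = applier.apply(df=df_train)

# The label model estimates how accurate and correlated the labeling functions
# are, then combines their (possibly conflicting) votes into a single
# probabilistic label per example -- the "combine various functions into a
# single label" step described above.
label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train=L_train, n_epochs=500, seed=42)
preds = label_model.predict(L=L_train)
print(preds)  # one training label per example
```

From here, the generated labels can train any downstream model, and improving quality becomes a matter of adding or refining labeling functions rather than relabeling examples by hand, which is the faster iteration loop described above.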
On ML Research vs. ML Engineering
In ML research, you have a fixed dataset and spend a lot of time iterating on your models. In the real world, you can’t treat the dataset as fixed. The way to get model improvements is to fix labels in the dataset and iterate on your data. Andrew has called it “data-centric AI,” and this is the idea behind Snorkel as well.
ML engineers also need decent software engineering skills because a lot of the work is building backend software or creating prototypes. Furthermore, ML has generally become much more customer-facing. Hence, ML engineers need a bit of product sense to understand why they are building something and whether they can get by with minimal heuristics or baseline methods.
Timestamps
(02:00) Aarti shared her experience growing up in India and going to New York for her undergraduate studies.
(04:47) Aarti recalled her academic experience getting dual degrees in Computer Science and Computer Engineering at New York University.
(07:17) Aarti shared details about her involvement with the ACM chapter and the Women in Computing club at NYU.
(10:46) Aarti shared valuable lessons from her research internships.
(14:16) Aarti discussed her decision to pursue an MS degree in Computer Science at Stanford University.
(20:27) Aarti reflected on her lessons learned as the Head Teaching Assistant for CS 230, one of Stanford’s most popular Deep Learning courses.
(23:59) Aarti shared her thoughts on ML applications in both clinical and administrative healthcare settings.
(26:47) Aarti unpacked the motivation and empirical work behind CheXNet, an algorithm that can detect pneumonia from chest X-rays at a level exceeding practicing radiologists.
(29:39) Aarti went over the implications of MURA, a large dataset of musculoskeletal radiographs containing over 40,000 images from close to 15,000 studies, for ML applications in radiology.
(32:50) Aarti went over her experience working briefly as an ML engineer at Andrew Ng’s startup Landing AI and applying ML to visual inspection tasks in manufacturing.
(36:56) Aarti talked about her participation in external entrepreneurial initiatives such as Threshold Venture Fellowship and Greylock X Fellowship.
(43:41) Aarti reminisced about her time in a hybrid ML engineer/product manager/VC associate role at AI Fund, which works intensively with entrepreneurs during their startups’ most critical and risky phase from 0 to 1.
(48:43) Aarti shared advice that AI Fund companies tended to receive regarding product-market fit and go-to-market strategy.
(54:04) Aarti walked through her decision to join Snorkel AI, the startup behind the popular Snorkel open-source project capable of quickly generating training data with weak supervision.
(56:36) Aarti reflected on the difference between being an ML researcher and an ML engineer.
(01:00:18) Closing segment.
Aarti’s Contact Info
People
Books and Papers
“The Art of Doing Science & Engineering” (by Richard Hamming)
“Deep Medicine: How AI Can Make Healthcare Human Again” (by Eric Topol)
“CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning” (Dec 2017)
“MURA: Large Dataset for Abnormality Detection in Musculoskeletal Radiographs” (May 2018)
About the show
Datacast features long-form, in-depth conversations with practitioners and researchers in the data community to walk through their professional journeys and unpack the lessons learned along the way. I invite guests coming from a wide range of career paths — from scientists and analysts to founders and investors — to analyze the case for using data in the real world and extract their mental models (“the WHY and the HOW”) behind their pursuits. Hopefully, these conversations can serve as valuable tools for early-stage data professionals as they navigate their own careers in the exciting data universe.
Datacast is produced and edited by James Le. Get in touch with feedback or guest suggestions by emailing khanhle.1013@gmail.com.
Subscribe by searching for Datacast wherever you get podcasts or click one of the links below:
If you’re new, see the podcast homepage for the most recent episodes to listen to, or browse the full guest list.