James Le

Datacast Episode 92: Analytics Engineering, Locally Optimistic, and Marketing-Mix Modeling with Michael Kaminsky

The 92nd episode of Datacast is my conversation with Michael Kaminsky, the co-founder of Recast (a marketing optimization platform) and the co-founder of Analytics Engineers Club (a training course for data analysts looking to improve their engineering skills).

Our wide-ranging conversation touches on his economics education at Arizona State; his analytics career across health economics, child welfare, and consumer brands; his involvement with the Locally Optimistic community for analytics leaders; his current journey with Recast building a Marketing-Mix Modeling solution for modern marketers; his thought leadership on analytics engineering, agile analytics, and data education; and much more.

Please enjoy my conversation with Michael!

Listen to the show on (1) Spotify, (2) Google Podcasts, (3) Apple Podcasts, (4) iHeartRadio, (5) RadioPublic, and (6) TuneIn

Key Takeaways

Here are the highlights from my conversation with Michael:

On Studying Economics at Arizona State

Arizona State is an incredibly well-resourced school. If you want to get a good education, you can; and if you want to spend the whole four years partying, you can do that as well. It is up to each individual to figure out how to make the most of their experience at Arizona State. I got lucky that I was able to work with a few top-tier professors who were incredible and had the budget to fund a lab and hire people like me as their research assistants.

My favorite class was an advanced microeconomics class that changed my perspective on how we can use math to reason about hard analytical questions. It convinced me that I needed to get a lot better at math very quickly if I wanted the type of career I desired. The other was an advanced environmental economics class with a professor who later offered me a job as a research assistant in his lab. That changed the course of my career, since I got exposure to working with Stata and using econometrics to answer complex problems. That was a real turning point that pushed me down the path of wanting to work in econometrics and statistics (what we now call data science).

On His First Job at The Analysis Group

Source: https://www.analysisgroup.com/

The Analysis Group has an interesting business model — they essentially do research for law firms. A lot of interesting questions come up in that sort of work, and they require advanced analyses. I spent a lot of time working with Ph.D.s and some of the most advanced economists in the world on these cases. That was a great training ground for learning to think about data and make arguments with data.

I spent much time working in health economics outcomes research, answering nuanced questions like how the effectiveness of one drug compares to another. That work taught me to think really hard about difficult problems. The hardest part of these analyses was always setting the problem up correctly — like how to get a fair comparison between two populations of patients taking two different treatments, especially when those treatments tend to be prescribed to two different classes of patients. You would never have a true randomized controlled trial to make the comparison. There was a lot of interesting work there from an analytics and statistical perspective.
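The conversation does not name the statistical techniques used on those cases, but inverse-propensity weighting is one standard way to make a fairer comparison between non-randomized treatment groups. Here is a minimal, purely illustrative sketch on synthetic data:

```python
# A minimal sketch of inverse-propensity weighting, one standard technique
# for comparing two non-randomized treatment groups. Purely illustrative;
# the source does not say which methods were actually used.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic patients: age and severity drive both treatment choice and outcome.
n = 5_000
age = rng.normal(60, 10, n)
severity = rng.normal(0, 1, n)
X = np.column_stack([age, severity])

# Sicker/older patients are more likely to receive the treatment (treated=1).
p_treat = 1 / (1 + np.exp(-(0.05 * (age - 60) + 0.8 * severity)))
treated = rng.binomial(1, p_treat)

# The true treatment effect is +2.0, but confounding biases the naive estimate.
outcome = 2.0 * treated - 0.1 * (age - 60) - 1.5 * severity + rng.normal(0, 1, n)

# Step 1: model the probability of treatment given observed covariates.
propensity = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: weight each patient by the inverse probability of the treatment
# they actually received, then compare weighted outcome means.
w = np.where(treated == 1, 1 / propensity, 1 / (1 - propensity))
effect = (np.average(outcome[treated == 1], weights=w[treated == 1])
          - np.average(outcome[treated == 0], weights=w[treated == 0]))

naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()
print(f"naive difference:    {naive:+.2f}")   # biased by confounding
print(f"weighted difference: {effect:+.2f}")  # close to the true +2.0
```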

But more importantly, what I found during my time there is that I was pretty good at writing software. I had never taken any courses on programming in college, but we wrote a lot of code in SAS (a proprietary statistical programming language that was a lot more popular back then). A lot of the analysts spent time writing the same code over and over again. I realized that if I could take a lot of those common operations and build them into libraries or packages that other people could use, I could save the company hundreds or thousands of hours.

So I started turning my focus to taking the things we did in similar ways across different projects and building them into one centralized repository of scripts and libraries that other people could use to be a lot more efficient. Once I started doing that, it opened my eyes to the power of software. It is funny when I reflect on it, because I would not have considered myself a software engineer at the time. I did not know a lot of the basics of software engineering, so it was a great experience getting exposure to what software can be: how much leverage it gives the people who write it and how it can save analysts time and money.
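As a hedged illustration of that idea (the actual work was in SAS; Python and these function names are used here only for illustration), this is what pulling repeated operations into a shared module might look like:

```python
# analysis_lib.py: a hypothetical shared module illustrating the idea of
# pulling operations that every project repeats into one reusable library.
# (The original work was in SAS; this Python sketch is only illustrative.)
import pandas as pd

def load_claims(path: str) -> pd.DataFrame:
    """Load a raw claims extract and apply the cleanup every analysis needs."""
    df = pd.read_csv(path, parse_dates=["service_date"])
    df.columns = df.columns.str.strip().str.lower()
    return df.drop_duplicates(subset=["patient_id", "service_date"])

def person_years(df: pd.DataFrame) -> float:
    """Total follow-up time in years, a step most studies recompute by hand."""
    follow_up = df.groupby("patient_id")["service_date"].agg(["min", "max"])
    return ((follow_up["max"] - follow_up["min"]).dt.days / 365.25).sum()

# Each analyst now writes two lines instead of re-implementing the cleanup:
# claims = load_claims("claims_2014.csv")
# print(person_years(claims))
```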

On Leveling Up His Programming Skills

Coming out of Analysis Group, I was sick of working for big pharmaceutical companies, for all the reasons you can probably imagine. The Analysis Group was also challenging because there were no software engineers; no one there could teach me how to be a good software engineer.

I wanted to move to New York and work on a product that I felt was doing good. Case Commons built case management software for child welfare, a difficult space with many hard problems. I learned from real software engineers who were writing Ruby on Rails (at the time, the height of modern development practice; now a bit outdated). I got exposure to the tools they were using and the way they think about problems. I learned how to use a real text editor and an IDE, git for version control, etc. — which brought a huge speedup in my efficiency. Simply getting exposure to how actual professionals work in the field, instead of trying to rediscover all these things myself, was a big unlock for me in my career.

On Building Effective Data-Driven Products

Source: https://www.casebook.net/platform-overview-more-than-case-management/

Back in 2014, the world was changing quickly in terms of the amount of data available and what people were doing with it. My team at Case Commons was filled with talented people who are now off doing incredibly impactful things. We were dedicated to working hard to build products that would help caseworkers do a better job and save kids’ lives by using all the data that we had.

I learned basically how to do what we would call product management today: what the user wants to do, how to build that for them, what the key features are, what scope to cut, etc. I also learned how hard data work is. Data work is special because it requires both a software component and a data component. It is almost infinitely more complex than just building a web app, because building an effective product out of an application plus data requires a deep understanding of what is happening in the world.

I started thinking about the problems in data. One thing that slowed down data analysts and data scientists (when I first got there) was cleaning up data. The case management system is incredibly complex. In general, these systems try to map all of the relationships that different people have with each other, so it is a giant web of interconnected people. Building a data product on top of that is complex because of all the different types of interactions and relationships between people. People would ask us questions that sounded easy but were hard to answer, because we had to combine all of the different tables and data.

I started to build tools that brought all of the data together, pre-cleaned and ready to be analyzed, which made the data scientists and analysts a lot more effective. That saved hours and allowed us to answer these questions consistently and in a meaningful way. I learned that I could make our team a lot more effective by centralizing where the data cleaning and data organization happened.

I took a few steps toward building tools that would push us onto this path of being a lot more efficient than we were before. I got exposure to complex data that I had never seen before. I could tell that a good system would let us write SQL and persist relations in the database that are semantically meaningful to the business, making the actual analytics a lot faster. We could share this centralized knowledge rather than having to clean everything from scratch every time we wanted to do a new analysis.
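A minimal sketch of that pattern, using SQLite and a hypothetical schema: clean and join the raw tables once, persist the result as a named relation, and let every analysis start from there.

```python
# A minimal sketch of persisting a cleaned, business-meaningful relation
# once, so every analysis starts from it instead of the raw tables.
# Table and column names here are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE people (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE relationships (
        person_id INTEGER, relative_id INTEGER, kind TEXT
    );
    INSERT INTO people VALUES (1, 'Ana'), (2, 'Ben'), (3, 'Cam');
    INSERT INTO relationships VALUES (1, 2, 'parent'), (3, 2, 'guardian');

    -- The semantic layer: a persisted relation that answers a common
    -- question ('who is responsible for this child?') once, centrally.
    CREATE VIEW caregivers AS
    SELECT c.name AS child, p.name AS caregiver, r.kind
    FROM relationships r
    JOIN people p ON p.id = r.person_id
    JOIN people c ON c.id = r.relative_id
    WHERE r.kind IN ('parent', 'guardian');
""")

# Analysts query the cleaned relation directly, not the raw web of tables.
for row in conn.execute("SELECT * FROM caregivers ORDER BY child"):
    print(row)
```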

On Starting a New Data Team at Harry’s

Source: https://www.harrys.com/en/us/our-story

I was looking for a place that would let me build the idea of modern analytics. I told the folks at Harry’s about my vision, and, luckily, they were willing to take a chance on me to come and build this stuff. It worked out well for them and me in the end, but it was a huge risk on both sides. I was coming into an organization that had never had a data person, and they realized that I had never built a data team before. I was grateful to them for taking a chance on me and letting me experiment with what a data team could look like.

The challenges that come with starting a data team are those that everyone talks about. Being the first data hire at a company is challenging, especially at a fast-growing startup. There are a ton of things to do. There is no support infrastructure. None of my bosses knew what analytics was. None of them knew how to write a SQL query. I was responsible for all of the company’s data.

I did one very smart thing when I started there: I focused on delivering a lot of value quickly to build trust within the organization. In particular, I focused on building tools that would make other people’s lives a lot easier. In my first week, I walked around and talked to different people who worked with data in different ways. They showed me their enormously complex Excel workbooks. One coworker told me that he had to block out every Thursday to refresh his workbook to produce the day’s reports. So I sat down and wrote code that could do most of that task for him; it went from taking a whole day to taking half an hour. I then did the same for a couple of other people and saved them a ton of time. This process earned me a lot of goodwill, which let me go on to propose more substantial changes.
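A hedged sketch of what that kind of automation might look like; the file names, columns, and report structure here are hypothetical, not the actual workbook in question:

```python
# A sketch of the "automate the Thursday refresh" idea: pull the raw export,
# rebuild the summary, and write a fresh workbook in one step.
# Requires openpyxl for the Excel output; all names are hypothetical.
import pandas as pd

def refresh_report(orders_csv: str, out_xlsx: str) -> None:
    orders = pd.read_csv(orders_csv, parse_dates=["order_date"])

    # The manual workbook rebuilt this pivot by hand every Thursday.
    weekly = (orders
              .assign(week=orders["order_date"].dt.to_period("W"))
              .groupby(["week", "channel"], as_index=False)
              .agg(revenue=("amount", "sum"), orders=("order_id", "count")))

    with pd.ExcelWriter(out_xlsx) as writer:
        weekly.to_excel(writer, sheet_name="weekly_summary", index=False)

if __name__ == "__main__":
    refresh_report("orders_export.csv", "thursday_report.xlsx")
```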

The motto of the data team at Harry’s was “we want to help organizations make better decisions faster.” We wanted to put data into the hands of everyone in the organization to help them do their jobs better. So we focused a lot on tooling, efficiency, and taking complex analyses and making them easy so other people would have access to them. Today, this approach is a lot more common, but six years ago, many data scientists took a lot of pride in being the only person who could do statistical analyses at their company, and they guarded that. Rather than building tools so that everyone could do these analyses, they felt important when everyone had to come to them with a question.

We flipped that on its head and said: “If we have to hire new people every time we want to do more statistical analyses, that will not scale well. Instead, we need to build software that will allow everyone in the organization to answer those really interesting and hard problems.” We enabled an incredible level of sophistication and data analysis across the business. Instead of building a team that’s good at answering questions, I am proud that we built a team that’s good at building tools that would allow other people to answer their own questions. That is the core insight of analytics engineering, and the idea of building tools enabled us to be successful at Harry’s.

On Analytics and Infrastructure Challenges at Harry’s

Source: https://medium.com/harrys-engineering/matching-cities-for-small-sample-experiments-harrys-engineering-be4e88c2b112

At pretty much every e-commerce company, marketing attribution is the biggest problem to solve. They want to spend as many marketing dollars as they can to grow as quickly as possible, because that is how they get better margins. But they need to know where to spend the money, and they need to make sure they do not spend too much (if they do, they will lose money). LTV versus payback is the fundamental equation of every e-commerce company, and Harry’s was no different. The majority of our analytics time went toward solving that particular problem.
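For readers unfamiliar with that equation, here is the core arithmetic with purely illustrative numbers:

```python
# Illustrative numbers only: the core LTV-vs-payback arithmetic that decides
# whether a marketing channel is worth its spend.
cac = 45.00               # cost to acquire one customer through the channel
margin_per_order = 12.00  # contribution margin per order
orders_per_year = 3.5     # purchase frequency
retention_years = 2.0     # how long a customer stays, on average

ltv = margin_per_order * orders_per_year * retention_years
payback_months = 12 * cac / (margin_per_order * orders_per_year)

print(f"LTV:     ${ltv:.2f}")                   # $84.00 vs $45 CAC: profitable
print(f"Payback: {payback_months:.1f} months")  # cash recovered in ~12.9 months
```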

There were various challenges:

  • Marketing is complex in many ways. The obvious initial approach to a problem is a good first pass, but it turns out there are thousands of edge cases to handle before something is consistently usable — how to do attribution for direct mail, how to match coupon codes, who gets credit for a conversion, etc. These questions are hard, and building a system with reasonable rules for all of them is quite complicated.

  • On top of that, the data is messy and complex. We invested a ton of time in the engineering work — building a system to apply rules to the data.

We also did other work that was interesting but maybe a bit more speculative:

  1. We experimented with tagging and prioritizing customer experience tickets as they came in based on their severity.

  2. We built an automated landing page testing system that would do multi-armed bandit allocation of traffic across different landing pages (see the sketch after this list). That allowed us to speed up our ability to A/B test.
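The episode does not say which bandit algorithm was used; Thompson sampling is one common choice, sketched here on synthetic conversion rates:

```python
# A minimal Thompson-sampling sketch of bandit traffic allocation across
# landing pages. Purely illustrative; not necessarily the algorithm used.
import numpy as np

rng = np.random.default_rng(42)
true_conversion = [0.030, 0.035, 0.050]  # unknown to the system
wins = np.ones(3)    # Beta(1, 1) prior on each page's conversion rate
losses = np.ones(3)

for _ in range(20_000):  # each iteration is one visitor
    # Sample a plausible conversion rate per page; send traffic to the best.
    page = int(np.argmax(rng.beta(wins, losses)))
    if rng.random() < true_conversion[page]:
        wins[page] += 1
    else:
        losses[page] += 1

share = (wins + losses - 2) / 20_000
print("traffic share per page:", np.round(share, 3))  # concentrates on page 2
```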

Making decisions faster was always at the core of what we were doing — whether helping a product manager interpret an A/B test or automatically choosing which landing page would get served to different customers. It all fell under the question of how we could keep pushing the business forward faster.

On Learning Spanish

Source: https://kaminsky.rocks/2020/01/learning-a-language-is-hard/

I have learned a lot about the science of how people learn languages. I took one Spanish class in New York before moving to Mexico City. Now I would say I have successfully learned Spanish — I can show up to parties, make friends, and tell jokes.

Many people claim that only young kids can learn a new language (or that it is much easier for them). It turns out that is not true: adults actually learn languages faster than kids when given the same amount of instruction time. Adults have an advantage because they already know a lot of concepts and can map grammatical concepts in the target language onto their own language. A couple of things lead to that myth:

  1. The bar is much lower for children. It is very easy for an adult to speak like a 6-year-old, but no one gives them credit for that. You cannot show up to a party, speak like a 6-year-old, and have a good time.

  2. Learning languages is hard because it requires a lot of work. No one wants to study a language for 45 minutes a day, which is what it takes. I still study Spanish every day, reviewing vocabulary and doing my flashcards. Especially during my first year in Mexico City, I was probably averaging over 45 minutes a day of dedicated study time. For most people, learning the 30,000 or 40,000 words it takes to speak fluently just seems incredibly exhausting and daunting.

On Founding Recast

Source: https://getrecast.com/attribution-stack/

I mentioned earlier that my work at Harry’s centered on marketing effectiveness: which channels are working, which ones are not, how we should allocate our marketing budget to grow as fast as possible without losing money, etc. It turns out this problem is relatively straightforward for small e-commerce brands. It is easy to tell where your customers are coming from and to run experiments to track that. If you are doing a couple of million dollars in revenue per year, you are probably only spending money on Facebook or Google.

As the business gets bigger and more complex, those problems get a lot harder. Pure digital tracking does not work, especially if you are doing offline marketing like TV, radio, and podcasts — all of which are pretty hard to measure. And if you are selling omnichannel, you are not only selling online but also in brick-and-mortar stores. There are not many good solutions out in the world for measuring marketing effectiveness in that complex environment. I had spent a lot of time thinking about this at Harry’s, which started as a DTC (direct-to-consumer) e-commerce brand and eventually launched in most major retail stores. We felt confident about measuring the marketing effectiveness of our DTC website sales, but we had no idea what was happening on the retail side. As I was leaving Harry’s, I felt that question would only get bigger — as more businesses start online-first and then operate in a mixed world of online and offline advertising and sales.

Source: https://getrecast.com/

I reached out to my friend Tom Vladeck, my partner on Recast. He is also a data scientist by training, with a background in marketing data science, and he had frequently heard about this problem from clients of his consulting business. We put our heads together and figured that none of the existing players had a good solution.

  • Many old-school media-mix modeling vendors (Nielsen, Neustar MarketShare) provide products oriented around old-school traditional brands. They sell to organizations like Pepsi: once every six months or a year, they will hand-build a model, deliver a giant 140-page PowerPoint deck to Pepsi’s executives, and suggest how much to spend on the next semi-annual media plan.

  • The issue is that this is not the way modern marketers work. Modern marketers make decisions daily or hourly, weekly at the slowest. Getting a PowerPoint deck once every six months is just not useful to them.

At Recast, we decided to embrace the idea of media mix modeling and build a statistical model that can provide valid inferences across marketing channels (online and offline, top and bottom of the funnel). It turned out that the statistics of doing this is really challenging. We spent a lot of time on pure research: a whole year of R&D without any paying clients, just working extremely hard on the underlying statistical model. We finally got to a place where it is working and is meaningfully better than anything else in the market by a large margin. Currently, we partner with a number of clients, helping them measure channels they had never been able to measure before. It has been incredibly stressful and challenging, but also a ton of fun, to build a product from scratch with someone else.
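Recast’s actual model is proprietary and far more sophisticated, but most media mix models build on two transformations: adstock (carryover of past spend) and saturation (diminishing returns). A toy sketch on synthetic data:

```python
# Not Recast's model: just a hedged sketch of the two transformations at the
# heart of most media mix models, adstock (carryover) and saturation
# (diminishing returns), followed by a simple regression on the result.
import numpy as np

def adstock(spend: np.ndarray, decay: float) -> np.ndarray:
    """Geometric carryover: today's effect includes a decayed tail of past spend."""
    out = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for t, s in enumerate(spend):
        carry = s + decay * carry
        out[t] = carry
    return out

def saturate(x: np.ndarray, half_sat: float) -> np.ndarray:
    """Diminishing returns: doubling spend less than doubles the effect."""
    return x / (x + half_sat)

rng = np.random.default_rng(7)
weeks = 104
tv = rng.gamma(2.0, 50.0, weeks)      # weekly TV spend (synthetic)
search = rng.gamma(2.0, 30.0, weeks)  # weekly search spend (synthetic)

# Synthetic sales generated from known effects, so we can sanity-check the fit.
X = np.column_stack([
    saturate(adstock(tv, decay=0.6), half_sat=100.0),
    saturate(adstock(search, decay=0.2), half_sat=60.0),
])
sales = 1_000 + X @ np.array([400.0, 250.0]) + rng.normal(0, 20, weeks)

# With the transforms fixed, channel effects reduce to linear least squares.
A = np.column_stack([np.ones(weeks), X])
coef, *_ = np.linalg.lstsq(A, sales, rcond=None)
print("baseline, tv_effect, search_effect:", np.round(coef, 1))
```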

On The Inception and Evolution of Locally Optimistic

I had gone to a few data-leader meetups in New York. After a while, four of us (Scott Breitenother, Ilan Man, Sam Swift, and myself) got together and discussed that there was no resource on the Internet where people talked about the analytics problems in their organizations. We decided to start a blog and put out our thoughts about working with data in modern companies. We also started a Slack community just to see what would happen. Initially, there were maybe ten people in the Slack community and four readers per week on the blog. We started really small.

But then it started growing. It has been such a privilege to be a part of that community and see it grow. People always ask me: “How do you have time to manage that community?” Honestly, I do not do much. The best part about a healthy community is that it is self-managing. Everyone who is a part of the Locally Optimistic community participates in keeping it healthy. We have many volunteers who want to write blog posts, so I help them come up with ideas and edit their drafts. It is also rewarding to observe the fascinating conversations people have all the time. I am proud that I got to have a hand in starting it.

On “The Analytics Engineer”

Source: https://locallyoptimistic.com/post/analytics-engineer/

This role came about when I was at Harry’s. I was the first analytics hire there — writing a lot of software, building a lot of tools, and doing a lot of analyses. Many of the tools I built were not good. I could look at the code and know that this was not the right way to do it. I knew there were people in the world who would know how to do it better than I could, so let’s hire that person. This is not a data scientist, as we did not need someone to do statistics or ML. It is not a data analyst, because this is a software engineering job. But it is not a data engineer either, since we already had a data engineering team doing other things (like writing Spark jobs in Scala). This is a software engineer sitting on the analytics team, so let’s call it an analytics engineer.

I went to Harry’s HR team with a job posting for an analytics engineer. They were like: “What on earth are you talking about? There is no way you can post this job. The title does not exist.” I emphasized that this was the role we needed and explained exactly what the job entailed. They asked: “How are we going to hire someone for a job title that no one else has? How are we going to pay them when there is no market benchmark for the compensation?” I stressed that we would just have to figure it out.

I remember that it was a real struggle to open up that job description at the time. But once I started talking about the role, especially among other analytics leaders, I got a lot of positive feedback, as it resonated with people. A common sentiment was: “You are describing my job. I am not an analyst or a data scientist. I am a software engineer building tools for people who do data work.” So we hired a couple of these analytics engineers at Harry’s, and they were incredibly impactful for the business.

As I started evangelizing this title with more people, it kept resonating. Many folks had been hired as analysts but were doing work that was not analyst’s work. In other words, they felt undervalued. When I published the blog post, it gave these people the vocabulary to start talking about the work they were already doing and to make the case that they should be valued for it. Many analysts were doing analytics and, in their spare time, building tools on the side. Those tools ended up being their most impactful work, but because their organizations did not know how to think about it, it was not valued enough. By putting a name on the role and advocating for building a career in it, the post has opened organizations up to hiring for analytics engineer roles and compensating them properly.

On The Analytics Engineers Club

Claire Carroll is an incredibly talented teacher and instructor. Because I have spent a lot of time thinking about what analytics engineering means, we figured it could be interesting to build a course together. As we started looking around, we realized there was a big gap in the market: there is a lot of demand for analytics engineers, but not many of them. No one offers training in this area because it is such a new field; there is only a handful of mostly self-taught practitioners.

Claire and I decided to create a course that teaches all the things we wish we had learned when we were data analysts becoming analytics engineers. We eventually learned those things via trial and error, and it was painful. So we wanted to help people who today are data analysts but are technically inclined and want to learn these things. We can help them do it so much faster than they would if they were doing it independently. That was where the course came from.

Claire and I have both taken online courses in different styles — MOOCs, books, boot camps, etc. We realized that the most effective way of learning is doing it with other students, which keeps learners accountable and lets them learn from peers to cement their knowledge. Unlike a boot camp oriented around getting a job, our course focuses on skills that are important to analytics engineers and can be applied in their jobs every week. From a pedagogical perspective, we wanted to focus on making the learning really effective. It is not a setup where you work on a bunch of infrastructure that we have made and that is isolated from the rest of the stack. We wanted our students to get exposure to the real tools that real software engineers use.

On Knowledge Sharing for Analysts

I am excited about figuring out how to communicate, share, and document more complex analyses. The world of BI (self-serve analytics) has made huge strides in the last five years, which has unlocked a lot of value. But some analyses are more complicated than a simple stacked bar chart that gets refreshed every week. They combine software code with the data at a point in time, and they require explanation alongside the chart to communicate the learnings. I think we are still early on, and there is a lot of tooling yet to be developed that will help analysts and data scientists pursue those things.

Show Notes

  • (01:48) Mike recalled his undergraduate experience studying Economics at Arizona State University and doing research on statistics/econometrics.

  • (04:59) Mike reflected on his three years working as an analyst in the Boston office of the Analysis Group.

  • (09:08) Mike discussed how he leveled up his programming skills at work.

  • (11:05) Mike shared his learnings about building effective data-driven products while working as a data scientist at Case Commons.

  • (17:20) Mike revisited his transition to a new role as the Director of Analytics at Harry’s, the men’s grooming brand — starting a new data team from scratch.

  • (23:04) Mike unpacked analytics and infrastructure challenges during his time at Harry’s — developing the data warehouse, an internal marketing attribution tool, and a fleet of systems for automated decision-making to improve efficiency.

  • (27:21) Mike reasoned his move to Mexico City — spending time practicing Spanish, among other things.

  • (32:22) Mike talked about his journey of starting a new consulting practice to help companies get more value out of their data, which was primarily shaped by his network.

  • (36:30) Mike shared the founding story behind Recast, whose mission is to help modern brands improve the effectiveness of their marketing dollars.

  • (42:09) Mike dissected the core technical problem that Recast is addressing: performing media mix modeling in the context of “programmatic” channels.

  • (46:14) Mike shared the story behind the inception and evolution of Locally Optimistic, a community for current and aspiring data analytics leaders.

  • (49:29) Mike walked through his 3-part blog series on Agile Analytics — discussing the good aspects, the bad aspects, and the adjustments needed for analytics teams to adopt the Scrum methodology.

  • (53:25) Mike unpacked his post “A Culture of Partnership,” which discusses the three key activities that can help an analytics team identify the most important opportunities in the business and work effectively with key stakeholders and partner teams to drive value.

  • (57:25) Mike examined his seminal piece “The Analytics Engineer,” which generated much attention from the analytics community and argues that the analytics engineer can provide a multiplier effect on the output of an analytics team.

  • (01:03:24) Mike shared the motivation and pedagogical philosophy behind the Analytics Engineers Club (co-founded with Claire Carroll), which provides a training course for data analysts looking to improve their engineering skills.

  • (01:07:57) Mike anticipated the evolution of the quickly evolving modern data stack (read his Fivetran article “The Modern Data Science Stack”).

  • (01:09:22) Mike unpacked how organizations can build, start, and maintain the data quality flywheel (read his Datafold article “The Data Quality Flywheel”).

  • (01:11:40) Mike shared his thoughts regarding the challenge of sharing complex analyses.

  • (01:13:15) Closing segment.

Mike’s Contact Info

Further Resources

Mentioned Content

Articles

People

  • Claire Carroll (Co-Instructor of the Analytics Engineers Club, Product Manager at Hex, previously Community Manager at dbt Labs)

  • Drew Banin (Head of Product at dbt Labs)

  • Barry McCardel (Co-Founder and CEO of Hex)

Notes

My conversation with Michael was recorded back in October 2021. Since then, Michael has been active in his work projects.

About the show

Datacast features long-form, in-depth conversations with practitioners and researchers in the data community to walk through their professional journeys and unpack the lessons learned along the way. I invite guests coming from a wide range of career paths — from scientists and analysts to founders and investors — to analyze the case for using data in the real world and extract their mental models (“the WHY and the HOW”) behind their pursuits. Hopefully, these conversations can serve as valuable tools for early-stage data professionals as they navigate their own careers in the exciting data universe.

Datacast is produced and edited by James Le. Get in touch with feedback or guest suggestions by emailing khanhle.1013@gmail.com.

Subscribe by searching for Datacast wherever you get podcasts, or use one of the links listed above.

If you’re new, see the podcast homepage for the most recent episodes to listen to, or browse the full guest list.
