Production

What I Learned From Tecton's apply() 2022 Conference

Back in May, I attended apply(), Tecton’s second annual virtual event for data and ML teams to discuss the practical data engineering challenges faced when building ML for the real world. There were talks on best-practice development patterns, tools of choice, and emerging architectures for successfully building and managing production ML applications.

This long-form article dissects content from the 14 sessions and lightning talks I found most useful from attending apply(). These talks cover three major areas: industry trends, production use cases, and open-source libraries. Let’s dive in!

What I Learned From Arize:Observe 2022

Last month, I had the opportunity to speak at Arize:Observe, the first conference dedicated solely to ML observability from both a business and technical perspective. More than a mere user conference, Arize:Observe features presentations and panels from industry thought leaders and ML teams across sectors. Designed to tackle both the basics and the most challenging questions and use cases, the conference has sessions on performance monitoring and troubleshooting, data quality and drift monitoring, ML observability for unstructured data, explainability, business impact analysis, operationalizing ethical AI, and more.

In this blog recap, I will dissect content from the summit’s most insightful technical talks, covering a wide range of topics from scaling real-time ML and the best practices of effective ML teams to the challenges of monitoring production ML pipelines and redesigning ML platforms.

What I Learned From Convergence 2022

Last week, I attended Comet ML’s Convergence virtual event. The event featured presentations from data science and machine learning experts, who shared their best practices and insights on developing and implementing enterprise ML strategies. There were talks discussing emerging tools, approaches, and workflows that can help you effectively manage an ML project from start to finish.

In this blog recap, I will dissect content from the event’s technical talks, covering a wide range of topics from testing models in production and data quality assessment to operational ML and the minimum viable model.

What I Learned From Attending Tecton's apply() Conference

Last week, I attended apply(), Tecton’s first-ever conference, which brought together industry thought leaders and practitioners from over 30 organizations to share and discuss the current and future state of ML data engineering. The complexity of ML data engineering remains the most significant barrier preventing most data teams from transforming their applications and user experiences with operational ML.

In this long-form blog recap, I will dissect content from the 23 sessions and lightning talks I found most useful from attending apply(). These talks cover everything from the rise of feature stores and the evolution of MLOps to novel techniques and scalable platform design. Let’s dive in!

Datacast Episode 37: Machine Learning In Production with Luigi Patruno

Luigi Patruno is a Data Scientist and the Founder of MLinProduction.com. He’s currently the Director of Data Science at 2U, where he leads a team of data scientists and ML engineers in developing machine learning models and infrastructure to predict student success outcomes. Luigi founded MLinProduction.com to educate data scientists, ML engineers, and ML product managers about best practices for running machine learning systems in production.

As a consultant for Fortune 500s and start-ups, Luigi helps companies utilize data science to create competitive advantages. He has taught graduate-level courses in Statistics and Big Data Engineering and holds a Master's in Computer Science and a BS in Mathematics.