Datacast Episode 84: Business Development and Customer Success for Emerging Technologies with Taimur Rashid
The 84th episode of Datacast is my conversation with Taimur Rashid, the Chief Business Development Officer at Redis.
Our wide-ranging conversation touches on his Computer Science education at UT Austin; his time at Oracle transitioning from a purely technical engineering-focused role to more customer-facing functions; his 10-year career as the Managing Director of Business & Market Development at AWS leading new product incubation, go-to-market, and strategic business development; his stint at Microsoft driving customer success strategy and overseeing field execution of cloud solution architects; his current journey with Redis leading initiatives related to AI/ML; the evolution of tech leadership, strategic business development, and customer success in the past two decades; and much more.
Please enjoy my conversation with Taimur!
Listen to the show on (1) Spotify, (2) Apple Podcasts, (3) Google Podcasts, (4) TuneIn, and (5) Stitcher
Key Takeaways
Here are the highlights from my conversation with Taimur:
On Studying Computer Science at UT Austin
I joined as a biology major. Like most folks from the Indo-Pacific culture, I was pre-med. After taking organic chemistry for a couple of semesters, I realized that I had limited my chances of getting into medical school. Nevertheless, I was also curious to learn new things and took a few computer science classes. Eventually, I switched over to become a computer science major in my sophomore year. Overall, it was a rich experience for me, combining different things learned across biology, computer science, and the humanities.
The three classes that I enjoyed the most at UT Austin are object-oriented programming (my first introduction to C++), automata theory (the philosophy behind computing), and knowledge-based systems (my foray into AI and machine learning).
On His Time at Oracle
In my almost 6.5 years between Siebel and Oracle, I had the opportunity to play several different roles. Even though I started as a QA engineer, I ended up taking roles in supportability engineering, product management, and eventually business development. What was interesting across these four different functions was the common thread of the products I worked on: I was aligned to business intelligence and data warehousing products.
As I was doing my current role, I got exposure to roles that were either upstream or downstream from it. That exposure made me very curious about those different roles. In order to understand what those roles entailed, I spent a lot of time with experts in such roles.
For supportability engineering, I got exposure to how certain individuals were looking at the product from a supportability perspective. That meant I had to learn a new vocabulary, the approach, and the overall thinking behind that function. It helped me make a functional leap from one mindset to the other.
In product management, I helped out with an acquisition that Siebel did during that time for a predictive analytics company. In that process of due diligence, I started learning about business development. Then, I had a natural segue into business development during my final year and a half at Oracle.
There were three key milestones during my time at Oracle:
The first one was an integration we did with Microsoft Exchange Server and Siebel CRM. It was my first exposure to partner integration. Because of that project, I flew up to Seattle for the first time back in 2004 to meet the Microsoft teams. Little did I know that Seattle would be home for me eventually.
The second one happened around 2005 when Siebel had started a new project to rearchitect the entire platform around service-oriented architecture (SOA). That was an emerging concept when web services came into play. AWS got formed around the same time as Amazon started making a big shift towards SOA. It was memorable for me because I learned so much about SOA.
The third one was a go-to-market effort in which we saw an increase in the number of SaaS companies using Oracle as their underlying platform across databases, application servers, and all aspects of the tech stack. It exposed me to cloud computing, which urged me to join AWS later in 2008.
On Joining Amazon Web Services
For the longest time, my parents encouraged me and almost forced me to do my Master's. I pushed back since I enjoyed working and was learning a lot in my day job. I ended up taking a strategic marketing class at Stanford, which opened my eyes to a whole new dimension of business and technology. After that, I decided to enroll in the Stanford Center for Professional Development as a part-time student and took one class called Dynamic Systems.
Around that time, I interviewed and got an offer from Amazon Web Services. They were going to pay me a lot more, fly me up to Seattle, and find a place for me. I could not miss this opportunity. Amazon was not one of the sponsoring companies for the program, so I had to stop the whole effort after my first class. In hindsight, that was the best decision I made. In 3 years, I learned so much about business, tech strategy, and operations. I felt like I got my own MBA from that experience.
On Sales Talent
As I interviewed more people and interacted with more of them in an organization, I identified certain characteristics that truly result in more deterministic outcomes. When looking for sales talent, there were a few things that I looked at in particular:
Self-belief and confidence: In sales, it is imperative because great sellers believe in themselves and have the confidence to project that self-belief in ways that truly matter.
Ability and willingness to learn and experiment: Sales is not an easy job. In most cases, you will get turned down, and people will close the door on you in your first several sales. You need the tenacity to experiment, learn about customers and empathize with them. Such willingness to learn and experiment demonstrates the ability not to give up and keep trying until something works.
Likability and charisma: You can create a playbook, and anyone can follow it to get a fairly deterministic sale. But the reality is that if you bring uniqueness in how you sell and bring your character into a standard playbook, you will show people your unique value. At the end of the day, people like to buy from people they trust and like.
On AWS Product and Go-To-Market Launches
AWS always takes a customer-centric view and approach to how they build products. For us, it is about understanding the customer needs, their pain points, and the overall opportunity, then building the product around that capability. One of the early strategies, when we were putting together the database strategy for AWS, was this whole concept that no one database would meet all the needs of developers (and, for that matter, applications). So we had this concept that databases are purpose-built. We always asked: "What are the attributes of the application that ultimately inform the kind of database you need to support that application?"
It was a similar concept that we applied to the compute services. What configurations of CPU, memory, storage, and networking align with the application that customers are trying to build? The footprint will change if you are running an application server versus trying to train deep learning models. That whole mentality of understanding the customer application, figuring out their needs, and building products that gave developers a choice was essentially how we went to market.
But more importantly, it was market readiness/awareness and an understanding of the product adoption lifecycle that we needed to infuse in our go-to-market strategies. In many cases, we were selling to early adopters before certain products got more mainstream. Such market awareness and an understanding of the product adoption lifecycle helped us create targeted market segments and run different GTM campaigns across those different segments.
There were many downstream teams that we had to interact with. A GTM strategy has a concept around the minimal deployable unit. That entails not only the technical resources (like solution architects) needed to support the product and the sellers who were the primary account team members. We also had to think methodically about: Who would be the partners that can help with expanding product adoption? What training-related materials do we need to build? How do we infuse this product into professional services? Our GTM strategy basically identified all the business functions that needed to be trained on the product, and we had to have relevant messaging and activities associated with each of these groups. There was a cross-functional effort that we had to do in defining the overall GTM strategy. For some products, it would be a developer-focused effort. Other products required more enterprise-like adoption, and we needed to bring in professional services or a consulting partner. So based on the product and the market we were going after, a different combination of teams would be brought around it as part of the GTM.
On Amazon's Culture of Customer Obsession, Operational Excellence, and Innovation
Some of our guiding principles across Amazon and AWS are grounded on customer obsession and operational excellence.
In many ways, we built on that in business development by making sure we really understood the customer pain point or the customer opportunity. We centered our overall business development strategy around the key customer-facing tenets, whether simplicity, convenience, or selection.
It was paramount to ensure that anything we did from a GTM perspective highlighted operational excellence. At the end of the day, that was the value proposition of cloud computing: why run your infrastructure in-house when a company like AWS has the operational experience of running it over many years? Naturally, the experience, scale, and innovation that we built around that fed our business development strategy. We could go to any market segment, whether a traditional IT organization or a line of business, and emphasize that customer experience, alongside operational excellence and security. Those became guiding principles for us as we formulated our business development strategy.
Amazon is able to take very hard problems (whether consumer-facing, developer-facing, or organizational-facing) and create an affordable offering in a very simple way. They continue to espouse value, selection, and convenience and package it into an experience that works for the end-user. Then they push it to the edge. Culturally speaking, that is what Amazon has done exceptionally well. In the case of AWS, infrastructure management was localized to a very specialized skill set and a certain amount of budget. Amazon basically democratized it and pushed it to the edge, such that developers now have enough capability to construct their own data centers. In brief, the cultural thing that Amazon has done well is taking very hard problems, making the onboarding dead simple, making the product affordable, and pushing it to the edge so many people can participate in the platform.
On Leaving Amazon and Joining Microsoft
It was a tough decision to leave a company like AWS after ten years, but I was hungry to learn new things and get more experience in enterprise sales with a technology company. In my blog post, I listed three aspects of my decision:
Mission: AWS aims to empower developers to create applications in the cloud. I was searching for something with a broader surface area, and Microsoft's mission of empowering every organization and individual to achieve more excited me. There is no reference to technology or any school of thought. It is centered around human empowerment. That mission drew me towards Microsoft.
People: In talking to certain people at Microsoft and seeing how Satya Nadella was leading the company's cultural transformation, I was excited to be a part of it. There were certain folks there whom I felt like I could learn from and who could be good mentors for me over time.
Builder Mindset: Around the time I joined Microsoft in 2018, they did a big transformation on the sales side and created a new customer success function. For me, it was an opportunity to take many years of experience at AWS and help Microsoft build this customer success culture.
On Enabling Customer Success through Evolutionary Architecture
I gave this talk during Microsoft's annual sales kickoff in summer 2018. When I joined, there were about 1000 solution architects in the organization. Many of them were trying to understand the cloud and the Azure platform. In that talk, I wanted to show how architecture and culture influence each other. More specifically, I compared physical architectures (the marquee-type architectures of the world) and how they influence the culture that gets centered around them. Then, I made the same argument with technology: the technologies you use to build a certain software or cloud architecture influence the culture you create around it, and vice versa.
If there is any technical reflection of Amazon's culture, it is AWS. So I wanted the audience to understand the guiding principles that go into the software and cloud-based architectures (like Moore's Law, Metcalfe's Law, or Conway's Law) and how they influence the culture created around them. I brought up the concept of evolutionary architectures - when you are working in the cloud, business needs are constantly evolving and changing. You must not architect for a fixed state and instead architect for an evolutionary state.
There were five key takeaways from the talk:
The business needs always determine the architectures that need to be built, not the other way around.
With all architectures, data is the most important thing to think about because we live in a world where security and privacy are of paramount importance. One must think about data classification, data protection, and data sovereignty.
Even though customers are different, customer needs have repeatable patterns that can be implemented. The more repeatability you can create within cloud architectures, the more you can scale.
When you architect in the cloud, you have to think about resiliency in a decoupled way. You have to look at independent systems that are distributed and decentralized.
Because you design architectures that are constantly evolving based on customer needs, you might build brand new ones in uncharted territory. Thus, you must be willing to constantly learn and evolve, making it a mindset.
I ended my talk with a quote from the famous architect Frank Gehry: "You build for its time and place, but you yearn for timelessness." You have to build something relevant to the context while keeping it grounded on foundational principles.
On Amazon vs Microsoft Cultures
There are similarities and differences between both cultures in how they operate as teams. Amazon is both an agile and a fast company. I make that distinction because agility is about the company's ability to change course quickly, while speed is about momentum and how fast one can move. When I look at Amazon, the process is lean, and the teams are single-threaded, so from day 1, they have had a structure that naturally makes agility and speed part of their fabric.
What is interesting about Microsoft is that, ever since Satya took over in 2014, the company has tried to transform itself both culturally and operationally in how they build and sell products. While they have made progress, their process is still a bit heavyweight. They move quite fast, but the process still needs to become leaner.
After ten years at AWS and three years at Microsoft, I noticed many similarities and differences in how they operate. Microsoft is trying to be more like AWS, and AWS has some aspects that are becoming like Microsoft. I developed an appreciation for systems thinking when evaluating both companies. As much as you try to culturally change a company (like what Microsoft is going through), there are times when you get a resurgence of the old ways.
On Joining Redis
I had exposure to Redis early on while at AWS. I helped launch a product called ElastiCache, a managed caching service. When we started it, it supported Memcached. Then, some time down the line, we ended up supporting Redis, which led to massive adoption in record time and hit revenue goals. I immediately saw the value in the Redis technology, so when I moved over to Microsoft, I helped bring that partnership over and developed a good relationship with the founders. When deciding what my next chapter would be, I looked at the Redis technology and thought that we were still in the early days of what this technology could become.
Redis is a database loved by developers. Its technical first principles are sound in the value it delivers around speed, throughput, and performance. Redis can be used so broadly across different applications and needs that there is still a lot of opportunity left on the table to realize. When I looked at the opportunity around Redis within the AI/ML market, it was an exciting opportunity to come and build this market for the company.
While Redis is the custodian of the open-source database, we also have a commercial offering, Redis Enterprise, whereby we take care of all the hard things about scaling Redis in a globally distributed way with the right level of security, scalability, and performance. It comes in the form of software that you can deploy on-premises or on any cloud and manage on your own. Alternatively, you can use the managed database service that we have for Redis on any of the three clouds today.
On Operational ML and Feature Store
Redis is really good on speed: low-latency and high-throughput performance. When you look at Redis technology today, its primary use case is being a cache in front of operational data (a relational or NoSQL database). In the ML lifecycle, so much time is spent preparing the data, labeling the data, and doing the featurization of the data, then actually feeding that into a model that you train, before you can even get to the point of deploying the model. Since the bulk of the time is spent there, we quickly realized that there are aspects of that whole lifecycle where Redis can help bring better throughput.
Interestingly enough, when you look at data in the ML world, you have features and models, so you can apply the same concept of a write-through cache and low-latency data store in front of feature data and models. The value that we see is in being part of the overall ML lifecycle and saying: for production scenarios, Redis is an excellent store for online predictions because it is all about low latency and speed. You can train models and ingest data faster. When you combine the cycle time of preparing data and building a model with low-latency serving in production, you suddenly compress the time it takes to bring ML into production. That is why our thesis is that in-memory database technologies can help this whole lifecycle.
When you look at ML and AI infrastructure today, there is a bit of a hodgepodge of different technologies underneath. When you look at the effort around feature stores, they centralize feature management and feature engineering so that distributed teams can share and reuse features for model building and online predictions. That gives us an excellent opportunity to be a part of that modern AI infrastructure. If you look at it, the data layer within that overall infrastructure is essentially what is being modernized.
For Redis, though, our primary value prop is speed and low latency. You need that in feature stores during online predictions, where ten-millisecond, even sub-millisecond, latency matters for use cases such as fraud detection and cybersecurity. Our overall GTM approach has been: Redis is the underlying data layer within the production stage of modern AI/ML platforms, and it works closely with the feature stores and MLOps providers around it. Feature stores (like Feast, Scribble Data, and Tecton) can integrate with Redis as a database, enabling ML teams to leverage a complete solution.
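To make this concrete, below is a minimal sketch of the write-through, low-latency pattern described above, using redis-py to materialize features into Redis hashes and read them back at prediction time. The entity IDs, feature names, and key layout are illustrative assumptions, not details from the episode.

```python
# Minimal sketch: Redis as an online feature store with redis-py.
# Assumes a local Redis instance; keys, entities, and features are hypothetical.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def materialize_features(user_id: str, features: dict) -> None:
    """Batch/offline pipeline step: write the latest engineered features
    for an entity into a Redis hash (a simple write-through pattern)."""
    r.hset(f"features:user:{user_id}", mapping=features)

def get_online_features(user_id: str) -> dict:
    """Online serving path: fetch features with millisecond-level latency
    right before calling the model for a prediction."""
    return r.hgetall(f"features:user:{user_id}")

materialize_features("42", {"txn_count_7d": 18, "avg_txn_amount": 52.3})
print(get_online_features("42"))  # {'txn_count_7d': '18', 'avg_txn_amount': '52.3'}
```

In practice, a feature store such as Feast or Tecton would typically manage these writes and reads on your behalf, with Redis configured as the online store underneath.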
On "First Principles in Building A Real-Time AI Platform"
I approached this talk by asking the 5 Whys, an approach I learned at Amazon in which you start with why something is the way it is and keep asking why until you get to an answer where you can take action. Operational ML is difficult. Why? Too much time is being spent on engineering and preparing the training data. Why? Data preparation has challenges around discoverability, duplication of features, and difficulty sharing them, which create inconsistency in the features that go into the models being built. Why? Every team is managing their own repository, and that is creating inefficiencies. Why? There is no centralized system of record and process to bridge the data and the models. Now you get to a point where you can see the opportunity to centralize the storage of features and create a process around it.
I ended up sharing five first principles around that whole process:
If you are going to build a real-time AI system, you first have to make sure that you are grounded on a mission that transcends technology and focuses on business outcomes that you are trying to drive or societal value that you are trying to create.
When you define your outcomes, you have to almost articulate the characteristics that define those outcomes in very clear terms.
Once you can define the characteristics, you can define scenarios. For Redis, it is important to define the ML scenarios that we are trying to enable because that will help guide requirements. For example, if your ML scenario is preventing fraud, then the requirement for your ML system is low latency.
Once you define your requirements, you want to define them in a foundational way. While they are foundational, you also want to be open-minded about how you actually implement them. There are many ML frameworks out there. Supporting all of them is a monumental task. You almost have to be very stubborn with prioritizing which frameworks to support. There is always a good balance between foundational elements and the openness that you create.
The overall message that I was trying to get across was that for real-time AI to be built, you have to make features first-class citizens. There is a very good blog post by Eugene Yan about the hierarchy of needs associated with feature stores.
I think it is foundational: you can build an AI system that is real-time and durable over time when you base it on these first principles.
On Redis' Product Vision
The first thing is reminding everyone why Redis is special and unique to what people do today. Then, being able to transcend that into the ML context by saying: "Hey, there is an opportunity to modernize the AI and ML infrastructure underneath. Here is where the technical merits of Redis database technology fit into that whole thing - from storing features and caching models for better performance to being able to store model performance data, which from an observability perspective could be leveraged in a very fast way."
The second important piece is educating our community and our customers about how Redis can be applied in the ML context. In its most simplistic form, Redis is caching for ML data. Caching ML data speeds up data ingest and model training, and speeds up online serving in production (which ultimately affects the customer experience).
Thirdly, we want to show that there is a set of applications that can be built on this modern infrastructure. Whether it is NLP, vector similarity, or computer vision, this is now in the fabric of the ML lifecycle. You can naturally extend it into different ML use cases because they are horizontal by nature.
If I had to think about it, vector similarity is the killer ML application on top of Redis. Entity matching and nearest neighbor-like search are very complex problems that have applicability across domains. Furthermore, Redis works with all cloud platforms and integrates with feature store and MLOps providers. That means taking the foundational understanding and building the ecosystem around it.
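As a rough illustration of that vector similarity use case, here is a minimal sketch of a KNN search on Redis. It assumes a Redis Stack / RediSearch deployment with vector support and redis-py 4.x or later; the index name, field names, and tiny embedding dimension are hypothetical choices for the example, not details from the conversation.

```python
# Minimal sketch: nearest-neighbor (KNN) vector search on Redis.
# Assumes Redis Stack / RediSearch with vector support and redis-py >= 4.x.
import numpy as np
import redis
from redis.commands.search.field import TagField, VectorField
from redis.commands.search.query import Query

r = redis.Redis(host="localhost", port=6379)
DIM = 4  # toy embedding size; real embeddings are usually hundreds of dimensions

# Create an index with a FLAT vector field using cosine distance.
r.ft("doc_idx").create_index([
    TagField("category"),
    VectorField("embedding", "FLAT",
                {"TYPE": "FLOAT32", "DIM": DIM, "DISTANCE_METRIC": "COSINE"}),
])

# Store a few documents as hashes; embeddings are stored as raw float32 bytes.
docs = {"a": [0.1, 0.2, 0.3, 0.4], "b": [0.9, 0.1, 0.0, 0.2]}
for doc_id, vec in docs.items():
    r.hset(f"doc:{doc_id}", mapping={
        "category": "demo",
        "embedding": np.asarray(vec, dtype=np.float32).tobytes(),
    })

# KNN query: find the 2 nearest neighbors to a query embedding.
query_vec = np.asarray([0.1, 0.2, 0.3, 0.5], dtype=np.float32).tobytes()
q = (Query("*=>[KNN 2 @embedding $vec AS vector_score]")
     .sort_by("vector_score")
     .return_fields("vector_score", "category")
     .dialect(2))
for doc in r.ft("doc_idx").search(q, query_params={"vec": query_vec}).docs:
    print(doc.id, doc.vector_score)
```

Under those assumptions, this is the kind of low-latency nearest-neighbor lookup that entity matching, semantic search, or recommendation scenarios would sit on top of.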
On Angel Investing
I made my first investment in a friend's company (Magalix) and learned a lot in the whole process. Over time, I have advised startups and continued to increase my angel investments. At the end of the day, when you look at investments, you are investing in people + technology and how both relate to a market opportunity. You first have to look at the individual aspects on their own merits. However, what is more important is what I call reading between the lines, which is understanding how the three aspects relate to each other and seeing whether there is a symbiotic relationship between the people who create the company, the product they are building, and the market opportunity they are going after.
As an investor, you want to be able to say: "Hey, each of these aspects has a certain understanding and assessment of its own. But then how do they work altogether?" The dynamics of the people, the technology, and the market opportunity play out over time in how they capture and accrue value in a very meaningful way. You might have a great product and a great opportunity, but the people behind it might not be able to capture and accrue the value in a way that is indicative of the market opportunity. I always say it is about reading between the lines and forecasting how the investment can play out over time.
On The Evolution of Tech Leadership, Business Development, and Customer Success
Tech leadership is the area that is getting much attention right now. I see three very important aspects:
Courage: Tech leaders must have courage because we are dealing with realities that are very new to us. Leaders must have the courage to address such uncharted territories. This is both the courage we have as leaders and the courage we pass on to the people we work with, so we can take on new challenges together.
Empathy: There has been an emphasis on empathetic leadership in the past 18 months. We have to put ourselves in the shoes of others to understand how to lead through difficult times.
Curiosity: The curiosity that we develop as leaders can reinforce and encourage a learning culture. For leaders, it is not just general curiosity; it must be genuine curiosity. When you are genuinely curious about something, you are a more active listener. I have seen some great leaders bring genuine curiosity to the table when they are interested in how they can help people.
On the business development side, first-principles thinking is foundational in any business development work that one does. The second is being able to look at things in a multi-dimensional way. This is where transfer learning happens: you take learnings from one discipline and transfer them to another. This is about not looking at things in a two-dimensional way but in a proverbial three- or four-dimensional way. Related to that is thinking not only about first-order effects but also second- and third-order effects, and planning how to address them over time.
There has been a whole transformation in the tech industry related to customer success, which companies like AWS have driven. This basically means flipping the traditional sales model on its head and helping customers start small at an affordable, low-barrier entry point. Customer success is all about creating durable value over time.
Show Notes
(02:27) Taimur reflected on his education studying Computer Science at UT Austin in the early 2000s.
(06:26) Taimur recalled his first job working as a quality assurance engineer at Vignette.
(08:47) Taimur went through his time with Oracle / Siebel, where he transitioned from a purely technical engineering-focused role to more customer-facing functions.
(13:44) Taimur reflected on his proudest accomplishments at Oracle.
(18:23) Taimur recalled dropping out of studying at the Stanford Center for Professional Development and moving to Seattle to work for Amazon Web Services.
(20:35) Taimur provided insights on attributes of exceptional sales talent, given his time as an enterprise sales manager in his first two years at AWS.
(23:55) Taimur shared anecdotes of successful product launches and their market expansion strategies while leading business development for AWS's database and compute services.
(28:33) Taimur discussed instituting the culture of customer obsession and operational excellence into his teams - while leading the incubation, market development, and technical go-to-market strategy and execution for the AWS Platform across infrastructure, data, developer services, and emerging technologies.
(33:14) Taimur talked about his decision to join Microsoft to lead the Worldwide Customer Success function for their Azure Data Platform, Analytics, and AI business.
(36:24) Taimur unpacked his talk called “Enabling Customer Success through Evolutionary Architectures.”
(43:07) Taimur compared the BizOps culture between Azure and AWS.
(46:29) Taimur discussed his decision to onboard Redis as their Chief Business Development Officer.
(50:07) Taimur went over the data challenges with operational ML, the emerging data architecture of feature stores, and the powerful capabilities of Redis as a solution.
(55:58) Taimur unpacked key ideas in his talk "First Principles in Building A Real-Time AI Platform."
(01:01:52) Taimur hinted at Redis' product vision of "caching for ML data."
(01:05:21) Taimur gave advice for a smart, driven operator who wants to explore angel investing.
(01:10:17) Taimur described the evolution of tech leadership, strategic business development, and customer success strategies in the past two decades.
(01:15:29) Taimur shared three books that have greatly influenced his life.
(01:16:48) Closing segment.
Taimur's Contact Info
Redis' Resources
Redis Open Source | Redis Enterprise Software | Redis Enterprise Cloud
"Redis Labs Becomes Redis" (Aug 2021)
Mentioned Content
People
Andy Jassy (CEO of Amazon)
Melanie Perkins (CEO of Canva)
Jeff Lawson (CEO of Twilio)
Books
"Man's Search For Meaning" (by Viktor Frankl)
"Thinking In Systems" (by Donella Meadows)
"A Treasury of Rumi" (by Muhammad Isa Waley and Rumi)
"Start With Why" (by Simon Sinek)
Talks
"First Principles in Building A Real-Time AI Platform" (March 2021)
"Redis as an Online Feature Store" (April 2021)
"Redis as an online feature store, Redis Labs" (May 2021)
About the show
Datacast features long-form, in-depth conversations with practitioners and researchers in the data community to walk through their professional journeys and unpack the lessons learned along the way. I invite guests coming from a wide range of career paths — from scientists and analysts to founders and investors — to analyze the case for using data in the real world and extract their mental models (“the WHY and the HOW”) behind their pursuits. Hopefully, these conversations can serve as valuable tools for early-stage data professionals as they navigate their own careers in the exciting data universe.
Datacast is produced and edited by James Le. Get in touch with feedback or guest suggestions by emailing khanhle.1013@gmail.com.
Subscribe by searching for Datacast wherever you get podcasts or click one of the links below:
If you’re new, see the podcast homepage for the most recent episodes to listen to, or browse the full guest list.