New Roles and Key Skills to Monetize Machine Learning

Season 2, episode 9 of the DataTalks.Club podcast with Vin Vashishta

We discussed monetization roles and the capabilities people need to move into those roles. The key roles are ML Researcher, ML Architect, and ML Product Manager.

We talked about:

  • Vin’s career journey
  • What does it mean to “monetize machine learning”
  • Important monetization metrics
  • Who should we have on the team to make a project successful
  • Machine Learning Researcher (applied and scientist) - background, responsibilities, and needed skills
  • Developing new categories
  • The best recipe for a startup: angry users + data scientists
  • What research actually is
  • ML Product Manager - background, responsibilities, and needed skills
  • How product managers can actually manage all their responsibilities (and they have a lot of them!)
  • ML Architect - background, responsibilities, and needed skills
  • Path to becoming an architect
  • How should we change education to make it more effective
  • Important product metrics

And more!

Did you like this episode? Check other episodes of the podcast, and register for new events.

Transcript

The transcripts are edited for clarity, sometimes with AI. If you notice any incorrect information, let us know.

Alexey: Good morning. It's 7 AM for you now, right? (0:02)

Vin: Yeah, just after 7 AM. (0:05)

Alexey: Thanks a lot for coming at such an early hour. (0:08)

Vin: You know, for me this actually isn't that early. I work a couple of different time zones. This isn't too strange for me. (0:15)

Alexey: What time do you usually wake up? (0:25)

Vin: Usually about 4:35. Somewhere in that half hour range. I've got East Coast clients, I've got UK clients. I've got one client in Germany, but we don't really get on the phone too often. I usually interact with their US division. I've got one client now that's in South Korea… I am global. (0:28)

Alexey: Thanks for finding the time in your schedule to talk to us and share your knowledge. I think we can start. Next time, I'll think if I want to do another five-hour-long conference… (0:54)

Vin: It’s like a marathon if you're the host, and you're the only one. (1:15)

Alexey: I’ll think twice next time. We still have 55 people who managed to stay till the last hour. These are probably people who really wanted to listen to our conversation. Thanks a lot for staying that long. It must be tiring, I know. Thank you also for that post you put up — I don't know if it's “viral” for you, but I consider it a pretty successful post. A lot of people noticed the conference because of it. I saw a spike of registrations right after you posted. The trick you did with “the first 45 people get it for free and the rest…” — it’s amazing. (1:19)

Vin: You know, it's funny. We're data scientists, we do all of this type of work with marketing. We build models to help automate marketing. I build models for pricing strategy and decision support. And then you see people online not use it. You have all of this knowledge. Why not use it? Why not apply it to our everyday life? You can do things that are interesting and fun and entertaining, because we've seen the data. (2:10)

Alexey: Okay, let's start. We have the last talk of the day. We will talk about the key roles and skills for monetizing machine learning. We have a special guest today — Vin Vashishta. Vin’s current focus is monetizing machine learning, which covers revenue and pricing strategies, model reliability, defining and hiring research/architecture/product management roles, and the path to production. As you see, Vin is doing a lot of interesting things. Since the topic of today's conference is career, we will focus more on roles – roles that are important for monetizing machine learning. Welcome. Thanks for joining us today. (2:38)

Vin: Thank you for having me. (3:27)

Vin’s background

Alexey: Before we go into our main topic, let's start with your background. Can you tell us a bit more about your career journey so far? (3:30)

Vin: Every time I do this, I sound old. I've been in technology, in the field, for over 25 years. I was fortunate — my mom worked at the university where I lived, so I had one of the earliest email addresses when I was 12 years old. And I fell in love with it, really did. I had a couple of good teachers in grade school who pushed me towards programming. I fell in love with it. I wanted to do machine learning, went to college for machine learning during that first boom in the early 90s. I graduated, and no one wanted to hire me to do machine learning because it had all fallen apart by the time I graduated. So I had to go into traditional technology roles. That meant a broad range of roles across my career. I've done everything from installing computers, building websites, and setting up networks – all the way out to product management roles. I've played a role in every phase of the software development lifecycle, got into strategy, led and built teams. Then in about 2010-2011 data science showed up again. We got past BI and advanced analytics and that's when I got back into the field. I had no idea it was called “Data Science” at the time. There were a bunch of us doing it outside of Silicon Valley and we all thought we invented something. (3:39)

Alexey: Data mining? (5:02)

Vin: Yeah, data mining. We were doing MapReduce. We were using some of the early Google tools. We depended on a lot of the early papers — there were a lot of publications and ideas from the 90s and early 2000s that we ended up implementing. My first client was supply chain, then I went into marketing, and it snowballed from there. A lot of the early work was me having to convince companies that there was value in gathering data in a more comprehensive way than just BI and warehousing. Like I said, it just took off. In 2015, I had Fortune 500 clients. People were bought in at that point. I've spent almost seven years in strategy for machine learning. Before that I was published on things like competitive intelligence and competitive strategies. So there's a deep strategy background, deep machine learning and data science background, deep technical background. I got lucky that I got this whole spectrum of everything over the course of these 25 years. That's what I'm working on now — helping companies with strategy, with monetization, which we're going to get into, path to production, helping companies understand exactly how to build products and make money and generate revenue. From the technical side, I work in decision support and decision science. I do that for the C-suite — for actual strategy creation and strategy planning, as well as some behavioral aspects that feed into marketing and automated, ML-based process improvement. It's an interesting field – it's an emerging category. I'm lucky, I'll be completely honest – really lucky to be where I am. (5:04)

Alexey: I guess when all this data science just started, people had no clue what it was. Being around at that time is a luxury that somebody who wants to get into data science now doesn't have. Because back then, nobody had a clue, so it was easier to get into the field. (6:56)

Vin: It was interesting. It wasn't easier. It was easy to be a data scientist, but it was really hard to get people to pay you to be a data scientist. (7:22)

Alexey: I see. Because it wasn't popular? It became more popular, maybe six years ago, seven years ago – when it boomed. But before that… I have a few questions about your career, but maybe let's keep them till the end. Because I really want to get into the main topic of monetizing machine learning. (7:34)

Monetizing Machine Learning

Alexey: Your LinkedIn headline says, “I monetize machine learning.” I remember, when I connected to you on LinkedIn, the first thing I asked you about was, “What does it mean?” So, what is that? What do you mean by “I monetize machine learning”? (7:57)

Vin: Companies have different levels of maturity when it comes to machine learning. Obviously, I'm talking about this broad spectrum of companies. They are either at the very early stages – they have data scientists and they might have some prototype projects that have gone through the pipeline and are in production. All the way up to companies that have done advanced machine learning — they actually have products based around models. No matter where you are on the spectrum, there's this one common theme – revenue. Companies are starting to look at machine learning and saying, “This is really expensive. This is more expensive than we thought.” The early companies that are still doing prototypes are looking at it from a staffing perspective — these engineers are extremely expensive, and they're telling me I have to have all these other different types of products to support machine learning. Then you have very mature companies who are looking at the cost to maintain and deploy high-end models. These very, very complex models – they're looking at those costs and saying, “You know, this just gets more and more expensive as it's in production. We need revenue.” From both ends of the spectrum, you have companies saying, “We need to make more money off of machine learning.” That's the core driver for monetization. (8:14)

Vin: What monetization is... Number one: teaching companies how to become more mature in how they look at machine learning. Number two: creating a strategy to handle everything from research onward — understanding how much value you can get from research. The most valuable thing a company can be doing is research: creating artifacts from that research and then using them for project after project, product after product. In many cases, they’re defining new categories and evolving the business model in interesting ways. The more mature they get with research, the more they understand how to monetize research and productize models, the more revenue they see coming out of it. (9:36)

Vin: That's monetization – this massive effort to take the business from toy projects to projects in production. Toy projects are very, very expensive, and the returns diminish over time. Monetization is the process of taking the business out of the machine learning Dark Ages and into a more modern way of using these models for products and efficiencies.

Alexey: Let me summarize it quickly. Machine learning is expensive. First, you need to pay people with a certain skill set. They are not easy to find. These people are expensive. Once we get these people, they run all these GPUs, Kubernetes clusters, and things like that, which are also expensive. Expensive people spend a lot of money on expensive things. Therefore, we need to say, “Okay, we keep these people because they generate these products that bring this amount of revenue”. They're not just burning money. They're doing something useful. You do that by coming up with a strategy of finding “What should we focus on? What should we do to actually show that these people are not just burning money, but they're bringing value – bringing value to the company?” Is this correct? (10:47)

Vin: Yes. (11:49)

Alexey: Did I miss something? (11:51)

Vin: We can go into a little bit more depth – into each one of those areas – I wanted to give you all of the high points. I think we can explore more of what this means, not only from a business perspective, but also people trying to get into the field. What does this mean for you? What does this mean for your career? (11:52)

ARR — Annual recurring revenue

Alexey: We definitely will go there, but it's interesting to know how companies actually evaluate the value that data scientists can bring. Coming back to your LinkedIn profile. Your bio mentions that you built and brought products to market with ARR in the hundreds of dollars? (12:07)

Vin: Hundreds of millions. (12:32)

Alexey: Hundreds of millions. Okay. (12:33)

Vin: Hundreds of dollars, they wouldn’t pay me very well for that. (12:35)

Alexey: I think this is an important money-related metric, but to be honest, I don't really know what this ARR means. Do I even pronounce it correctly? And what is it? (12:39)

Vin: You did. That's perfect. We've got an acronym soup in strategy. I'm building a strategy course on the side, and one of the first things I'm doing is building a glossary. If you're in data science, you're looking at these terms thinking “What are you talking about?” Because we have a totally different language. ARR is annual recurring revenue. You'll hear MRR — monthly recurring revenue. These are terms that the C-suite is going to listen to. My profile is very C-suite centric. I look at the board of directors — people who are CEO, CTO, CMO, in a lot of cases chief data scientists — who are product focused, who are revenue focused, who are cost-savings focused. There is this rich acronym soup, just like there is in technology. It's a different language and a different way of speaking. (12:52)

Vin: You're totally right. If you come from a data science background, this translation needs to happen between things like revenue — how companies measure revenue and talk about metrics — and how we talk about metrics. The nice thing about data science and machine learning is – we do this. CEOs like that, because they're used to strategy consultants — these large consulting and IT services companies — who come in, and for them it's hard to differentiate between what's real and what's not. Whereas when data scientists come in, we have data, we have results, we support our results. All you have to do to be successful with the C-suite is understand their language. Then convert our data — bring it into their way of thinking, into metrics like ARR. Those types of metrics are important to understand, because now you're speaking a language that is going to get you budget. That's why that's one of the first things in my profile. I build products with an expected return, which is interesting to a CEO. It’s also going to generate a budget. I'm going to be able to justify a budget to a CEO. They can look at it and say, “Okay, I'm going to spend 100,000 on this project, because I'm expecting at the mid range at least $10 to $12 million in annual revenue.” Or something broken down monthly, if that's how they work.
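
A minimal sketch of the budget math Vin describes, in Python — the figures are the hypothetical ones from his example ($100K project cost, $10–12M expected annual revenue), not real numbers:

```python
# Hypothetical budget justification, using the numbers from Vin's example.
# All figures are illustrative, not from a real engagement.

project_cost = 100_000      # one-time cost of the ML project
expected_arr = 11_000_000   # mid-range annual recurring revenue ($10-12M)

mrr = expected_arr / 12     # MRR, if the C-suite thinks in monthly terms
roi = (expected_arr - project_cost) / project_cost

print(f"MRR: ${mrr:,.0f}/month")     # -> MRR: $916,667/month
print(f"First-year ROI: {roi:.0%}")  # -> First-year ROI: 10900%
```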

Alexey: So this line in your LinkedIn bio is targeted towards your potential clients – these C-level people who find you on LinkedIn. Then they see, “This is the right person, because he speaks my language. ARR is something this person understands. Clearly, we will not have any troubles talking.” (15:26)

Alexey: What are the other important things that people on this level care about? In addition to this annual recurring revenue and monthly recurring revenue. What are the other money related metrics that they care about? (15:59)

Vin: Every company is different. You have high-level metrics which are important across the board – revenue and cost savings are your two drivers. Then you start looking at the business model. What is the business model? You're not necessarily looking at a static thing. One of the large themes in strategy right now is this concept of business model innovation — altering the business model to adopt new technologies, including machine learning. Every business now is coming to terms with that. They try to figure out how they're going to change their business model. That's where you find the core metrics. That's where you find the types of numbers that are interesting to the company. Again, this is a dense acronym soup — it really is. (16:14)

Vin: In every business, you'll find businesses talk about strategy differently. Some, when they say “strategy”, really mean “tactics”. There are some businesses that are just immature that way. That's not a bad thing. Some companies are focused on execution, especially at a very, very young phase. For small companies, the need for strategy is just as big as at any other company, but the maturity just isn't there yet, simply because they haven't made it yet. So you have an entirely different range of tactically focused metrics for those companies. Whereas a more strategic company is looking at a bit more of a long-term view. They're looking at their product roadmap. They're looking for efficiency-type products, which are going to reduce the costs of doing business and give them a competitive advantage.

Vin: You're also looking at revenue streams: increasing their current revenue streams by acquiring new customers, producing new products that may create new revenue streams, pivoting their business model. I'm getting more and more specific. I'm going from the strategy realm into more of our realm when we start talking about products and building models that are going to line up with their core business metrics.

Vin: When you're in a business, what do they care about? What are they talking to investors about? The numbers that are important will always be in their quarterly reports. That's a really quick and easy way to understand the business that you're in, or a company that you're potentially looking at being hired by. What metrics are they explaining to investors? What are they talking about in press releases? Those are going to be the core numbers that they're most interested in. So that's how you quickly understand the metrics that are important to them.

Vin: Then you can start thinking, “Okay, is this a data science problem? Is this a machine learning problem? How do I translate that into my language using their language, using these specific metrics that they talk to others about?” Now I'm going to say what my project can do, improve, and change. You can start talking about it from a business model perspective, or a current business model perspective and the metrics and goals that are important.

Alexey: I can immediately see how being able to speak this language and translate “I have a model with an AUC of 90%” into this ARR or some other number… This skill is valuable for convincing the people in charge of money to give more money — to give budgets to the machine learning team. If you fail to convince them, they will not invest in the team. They will say, “I don't understand what you're doing. I don't see how important it is for our business. Maybe we should spend this money on marketing. What's the point of keeping these people here?” (19:23)

Roles needed to monetize ML

Alexey: Coming back to our main topic of monetization roles. Who should we have on the team to make sure that the projects we're working on are successful and monetizable? (20:15)

Vin: There are three real main areas that I focus on. We've talked about strategy. So you need someone on the team who is a strategist. Who speaks both languages, straddles both worlds. You need someone who understands the language of business. It’s very revenue focused, very cost savings focused. And then we have this language of machine learning. You have a role that is a machine learning product manager. This person is the crossover with a strong strategy background but also a strong machine learning background. You need a machine learning product manager. (20:30)

Vin: We mentioned that research is such an important part of realizing revenue and creating competitive advantages. Just understanding machine learning is not enough. How is the company going to make money off of machine learning? How is it going to change the business model in order to start leveraging machine learning — instead of constantly keeping up and chasing after companies that are taking market share, that are stealing customers away from them? That's where the machine learning researcher comes into play.

Vin: You've got two different types of researchers. You have applied, and you have scientists. The more mature a company is, the more they're going to go into the science side — true, hard, science research. Applied research is still scientific. You're still making breakthroughs. But it's very, very focused on building. I do applied research – I don't have a PhD.

Vin: I could do science, but I'm not the right person. That's also important. When you look at these roles, a machine learning product manager could spend a lot of time building models, but they're most valuable because they also have that strategy side. The same goes for a machine learning researcher who is an applied researcher. I have the science focus — I have it. But I'm far more valuable because I understand how to apply research: how to take something that's new, build it, and make money with it.

Vin: You also have a machine learning architect. They encompass data engineering, machine learning engineering, and a lot of these roles, but they sit more at the front end of the process. They, again, have an almost strategic role, even though they're very technical and very software development focused, along with having strong machine learning skills.

Vin: So you have these three capabilities in three roles. They are going to speak to the C-suite and create a connection between research and strategy. That's your product manager. Then you have your researcher. They are going to take that understanding of what the business needs, what lines up with the business model, and start creating research. It results in artifacts that the company can use over and over again. These become intellectual property and can be reused from project to project to project. They drive new products, new revenue, new efficiencies for the business. And you have the ML architect, who's going to create part of the product roadmap.

Vin: The roadmap explains how expensive some of these projects are going to be. As part of that, your researchers are also going to adapt their solutions. Some solutions are viable, but way more expensive than other solutions. That drives research from a more applied and practical standpoint. Because your architect is going to say, “Look. Yes, you can do that. But in a year this is going to cost five times more. Maybe you can figure out a better approach. This is going to get expensive.”

Vin: You roll all of those roles together. And now you have this ‘front of the process’ that's connected to strategy. It’s going to create a product and a competitive advantage for the business. Now your more traditional roles downstream can be far more effective. A researcher is going to hand these artifacts off. Your data scientists and your machine learning engineers now have a solid framework to begin from. They have a solid product from the roadmap. They have requirements. They understand the connection between the business model and what they're building.

Vin: You have ML engineers who have a long-term framework that they're building out. What does this platform look like now? What do we need it to look like in two years? Where is it going to go? How's it going to scale? What are we going to support coming up in the next couple of years? What kind of considerations do we need to have? You get this more mature way of working — and you build models and get them into production at a far faster cadence, because you have these upstream roles. That's a big part of monetization – making sure that these projects aren't wasted money.

Alexey: Just want to make sure I understood you. You mentioned three roles. First, machine learning product manager – somebody who thinks about strategy and knows how to apply machine learning. Then a machine learning researcher. There are two kinds – applied and scientist, somebody with a PhD. Then we have a machine learning architect, who can say, “This is too expensive”. They think about implementation. “There’s an artifact from the researcher. And this is how we can bring it into production.” (25:22)

Alexey: So we have three roles. Then, in addition to these three roles, we also have more traditional roles like data scientists, machine learning engineers, data engineers. Right? If I understood you correctly — we already have a team. We have data scientists. We have data engineers. We have machine learning engineers. To be better at monetizing their efforts and their projects, we need to add three more roles to the team. Or maybe somebody in the team who can play these roles. Then we become better at explaining how we earn money with machine learning. Right?

Vin: Right. I’d say three more capabilities. But that's perfect. (26:51)

ML researcher

Alexey: Let's talk a bit more about this first role — machine learning researcher. We have the applied researcher. Let's talk about this person. What do they do? You mentioned that they produce some artifacts? What are these artifacts? What do they produce? (26:58)

Vin: Research is really poorly understood within the data science community. You have your applied researcher and you have your science researcher. On both sides, your main artifacts are your models and your data sets. It's really important to understand that a novel data set is just as valid an outcome of research as a new approach or a more accurate model. What a researcher will do, whether applied or scientific… You have to parse these roles based on where their focus is. A science researcher is more interested in creating an artifact, whether it be a model or a data set. It doesn’t have a specific application. It simply lines up with the business model, or a potential business model — a pivot for that particular business model. It takes a more mature company to understand how to evaluate research – pure science research – about some part of the business. (27:20)

Vin: Essentially you're modeling some piece of the business, or some interaction between the business – customers, competitors – some other portion of the market. It can be something like a supply chain. A scientist is going to do a pure science project with data and with machine learning involved in it. Then the business is going to have to look at those artifacts and say, “How do we turn these into products?” That's why it's a more mature business. In many cases, you're creating a new category. In machine learning, this happens all the time. New categories of products, or just new categories in general, are being created. The company then has to define that category, and figure out how to enter that new market or expand into that market, if they're already in it.

Category creation

Alexey: Maybe you have an example. What kind of category would it be? Let's say Google wants to get into a new market and develop a new product. Would it be a category? (29:18)

Vin: For Google, they're going in two different directions. Google has a lot under the covers that we just don't get to see. We see these small pieces of it — their open-source offerings and some of their papers. We don't really know a lot of what's going on under the covers. They've looked at things like health care. They’ve looked at finance and FinTech. They've looked at a lot of different spaces. Amazon's the same way. They just abandoned health care. I don't think health care is really ready for it. (29:38)

Alexey: This is a kind of category? Amazon wants to get into health care. And healthcare would be a category? Then there could be a researcher, a scientist — somebody with an academic background — who can think, “I know what the goals of the company are. I know what the business model is. I know what the company wants to do. And this is the area where the company wants to go.” And the task of the researcher is to figure out how they can do this? Experiment with different things. (30:13)

Vin: “How do we do it in a different way?” In a lot of cases, you're entering a crowded field. FinTech is a crowded field. Healthcare is a crowded field. Companies do two things. I bring up Amazon because they killed a project. In a lot of cases, this is what happens with science. You have to have a company that's mature enough to monetize science and real data science Research – the “big R” type of research. In some cases, this means killing projects. If you get to the end of the road, and there's just been nothing produced, there's nothing valuable, you can't find anything to productize or monetize... You tried to expand into a space and you say, “This was a failed experiment”. But the company understands. (30:42)

Vin: They understand monetization, that that little failure is worth it. If they do 10 failed projects and one of them is a success, the ROI of that success is huge: now they're entering a new market. What Amazon was trying to do was figure out if they could define a new category within healthcare. And they failed. Whereas AWS is a great example of them creating a new category. Nobody was there before this. They've also created categories around machine learning and MLOps. They were one of the early providers for MLOps type software. They created a category. And now you're seeing a lot of people rush into that category.

Vin: That's more of your applied researcher side of the world. There wasn't really a whole lot of science to do there. Amazon simply looked at a problem that they had in house and said, “A lot of people are going to have this problem. This is a product.” So that's your artifact for an applied researcher. Someone who's looking at a new category and saying, “We can build this because we already have it. We just need to scale it and offer it up to customers.” You've got a PDM, who's gonna say “This is the business model we could get into. And this is the pricing that we could support.” It’s that strategy side overlapping with research.

Vin: When you talk about creating a new category, you're talking about a product that has never existed and it can't exist without the model itself, or without some sort of machine learning, driving and running it. That's what you're looking at when you start talking about new categories and defining categories. Something like Uber, without machine learning, it wouldn't be as efficient as it is. The same thing with a lot of these new online startups. They’re creating new categories. They can use machine learning, and they can monetize machine learning, as a core piece of their business model — machine learning first. That's where a lot of the research piece comes into play.

Vin: That's where defining a new category as a result of machine learning comes into play. When you look at a Stitch Fix, it’s another great example of a company which is “machine learning first” and “creating a new category”. That product existed in the past, but not in the way that they've been able to redefine it. They have redefined the category and entered a new market because they have that machine learning there.

Research at startups

Alexey: What about companies that are small startups? They just appeared. They are not that ambitious. They don't want to be a second Google. They don't want to be a second Amazon. They found a small niche and think they can make a difference in this niche. How about them? Do they even need this kind of role? And if they do, do they need to reach some point of maturity before thinking about this role? Or is it different? (33:57)

Vin: I say this a lot: “Angry users and data scientists create startups”. In a lot of cases you know that niche that you're talking about? It's a niche because there's no product there. And you have an angry, passionate user base who's saying “Everything we do is bad. Everything we have is not a solution. My problem is that this just doesn't work.” Angry users are willing to spend money. You, as a data scientist, can pair up with an angry user who really understands the need. Who has a connection to that market? Who understands a lot of the business side of how you would get into that market? What the product would need to look like? What problems does it need to solve? Who are the most passionate people within that group of angry users? (34:30)

Vin: When you have a data scientist that looks at it and says “Oh yeah, I can solve this problem. This is a machine learning problem. I can create a novel solution.” Even in a startup, those two people, by themselves, can create and define a new category for everything from a niche to a massive market. That's how the best startups are founded. There’s an angry user who meets a data scientist. They get together and have a bright idea and build something. If you – as a two to five person startup – can do that and can tap into a market and understand how to define that new category, you can make a lot of money.

Vin: The machine learning researcher is your first or second hire. They’re a co-founder. So, big, small — it works across the board.

Skills needed for the ML researcher

Alexey: This capability of a machine learning researcher – it could be a data scientist? It doesn't have to be somebody coming from academia, with all publications, top conferences, PhD. It can be just a simple data scientist who has experience working at different startups, who knows how to process data, who knows how to train a scikit-learn model. Am I right? Or are there some other things that an average data scientist needs, to wear the hat of an “ML Researcher”? (36:10)

Vin: An average data scientist… I think we've killed the data science job title. We try to spread it too thin. It's like peanut butter that we're trying to spread across a 10-foot-wide piece of toast. We can't do it anymore. (36:49)

Vin: When I say “researcher” I mean someone who understands Research. The big R. In software, you used to have this concept of “Big R – Research” which now, in machine learning, is a whole lot more applicable. There's real research that needs to be done. There are experiments that need to be done. I talk about this process where you start out with data you can't rely on. It's garbage. It doesn't truly represent what you think it does. That first model that you build based on that data is a hypothesis. This is how you think what it is that you're modeling actually works. This is the best guess. Depending upon how well you can support that hypothesis, you either go forward and keep working that direction or maybe go back and rethink things. Gather data in a smarter way based on what's wrong with your initial hypothesis. That's research.

Vin: Once you have something that's solid, once you have a hypothesis that you can support — that initial model. It's something that you can support with data, with performance. In some cases, you test it alongside something else in production. Then you say “I see an incremental improvement here. I see something interesting in this particular model.”

Vin: That's where a lot of the automation software and workflow automation software comes in. Taking your data, creating multiple models based on it, and then displaying information to you as a researcher. You can look at it and say “That's interesting. That right there is interesting. Now I want to pull that model back and understand how that model works. There's something about it. Some piece of that model’s functionality…”.

Vin: This is where you get into explainability — deconstructing your model to understand how each feature works in order to create a better outcome. Typically, these models reveal something about a system that you didn't understand from the beginning. That initial model you create is based on your hypothesis, and the iterative process you follow produces secondary and tertiary hypotheses. You now go back and you say, “There's something interesting here that I've discovered.”
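
A minimal sketch of this kind of model deconstruction — permutation importance with scikit-learn on synthetic data. The dataset and model choice here are illustrative assumptions, not something from the episode:

```python
# Sketch: deconstruct a fitted model to see how much it relies on each feature.
# Synthetic data and the model choice are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The fitted model is the "hypothesis" about the system being modeled.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much performance drops:
# a large drop means the model genuinely depends on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, (mean, std) in enumerate(zip(result.importances_mean,
                                    result.importances_std)):
    print(f"feature {i}: {mean:.3f} +/- {std:.3f}")
```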

Vin: That's research. You create an experiment to either refute or confirm that you’ve found something interesting. That model is now giving you data about the thing you want to actually model. That hypothesis and that experiment and the results – that's research. Now I've gotten a tangible result about what it is that I'm trying to model. Now I have a better understanding of what data I need to gather. I have a better understanding of how my next iteration of model should look. I have a better understanding of an additional experiment to now discover more. Is there more to go into and understand how this particular system works based on my discovery?

Vin: This is what you find a lot. This is the iterative process of research. Most data scientists don't understand this. They're stuck in statistics. You hear this a lot. People who talk about research will say, “You don't understand statistics.” It's like, “No, you don't understand research.”

Vin: It’s not all statistics. Yes, we use statistics heavily. It's a heavy part of research and experimentation: exploring your data and understanding your results and trying to support them. But that's not everything, there's way more to this. Sometimes the statistics pushes us away from actual research. We get statistical results and statistical models. But that takes us away from the thing that we're actually trying to model and simulate and understand. That's the research side of this.

Vin: Anyone can be a researcher. I don't have a PhD. Anyone can get into the research side of this. But you need a strong, solid science background. The only way I got it is, early on, my models got destroyed by hardcore scientists. They looked at my models, and they said, “You've supported them with statistics. Here, let me destroy your model now. You're not using any of the metrics that really matter. You're not modeling anything except for the data. All you've produced for me is a representation of the data in a different form. That's worthless.”

Vin: I got taught the process by some very, very smart and somewhat abrasive people, who gave me honest feedback. That's what it takes to get into the research side. That's what it takes to move from data scientist — which is overextended — into a more rigorous research role. (41:18)

Alexey: It's more like a mindset? As I understood, researchers need to be able to come up with a hypothesis. They need to be able to test this hypothesis — to verify or reject it by experimenting. They also need to be able to do data analysis. To understand whether the data is garbage or not, and to understand the results. Go into the results of the model and understand why the model is behaving in a certain way. Explain why the model is doing that. These are the core skills, right? Then the main thing is the mindset: “Okay, I have a hypothesis. I want to test it. I want to try to verify it or reject this hypothesis.” This is where statistics also comes into play, because this is one of the tools you actually use to reject the hypothesis, right? Or verify it. (41:38)

Vin: This is where more of the advanced math comes in, too. People ask: Why do you need to understand differential geometry? Why do you need to understand topology? Why do you need to understand some of these deeper concepts around graph theory? You can get very, very niche with the type of math that you need to understand in order to explore your data, in order to come up with experiments. That's part of the skill set. Really understanding how to creatively build an experiment that's feasible, affordable, and will probably get you the results that you're looking for. To get you a better understanding of what it is that you're trying to model. It's not only a mindset, it's also a skill set. It is a capability to do more of the science side of the field. (42:44)
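
To make the verify-or-reject step concrete, here is a minimal sketch of one such experiment — a paired t-test over cross-validation scores of two model variants. The per-fold scores are made up for illustration:

```python
# Sketch: is model B's improvement over model A real, or just noise?
# The per-fold scores below are invented for illustration.
from scipy.stats import ttest_rel

scores_a = [0.81, 0.79, 0.82, 0.80, 0.78]  # baseline model, 5 CV folds
scores_b = [0.84, 0.80, 0.85, 0.83, 0.81]  # candidate model, same folds

stat, p_value = ttest_rel(scores_b, scores_a)
if p_value < 0.05:
    print(f"p = {p_value:.3f}: reject the null -- the improvement looks real")
else:
    print(f"p = {p_value:.3f}: cannot reject the null -- treat the gain as noise")
```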

ML product manager role

Alexey: I just looked at the time and noticed that we still want to cover the other two roles. We already talked about pairing angry users with data scientists. This is where the role of a machine learning product manager appears. When a user talks to data scientists, the user doesn't necessarily understand them — and the data scientist doesn't always understand the user either. Is this one of the things that a product manager helps with? I guess there are other things that they do. So maybe you can talk a bit more about what kind of work they do. (43:28)

Vin: The product manager has to address a wide range of audiences. This is one of the reasons why they need to also have a machine learning skill set. They have to speak to a machine learning team: researcher, architect. Sometimes it's your more traditional R&D and dev machine learning teams and your model development teams. They have to speak to them in their language. They can't be speaking in this abstract, fuzzy type of language. They have to speak about models. They have to be able to understand it. (44:10)

Vin: They also have to be able to talk to users, to get requirements, to make sure that the users are able to articulate their needs. Most users can't do that in terms of a machine learning solution or data science. They don't understand what's possible. A big part of what the product manager does is tell the C-suite “This is possible. I listened into your strategy planning. I heard three or four different problems. I’ve done a little backend work and built a very rough business case. This is exactly what I think we can do in order to solve some of these problems using machine learning. Now I need to give it to a researcher to prove that we can do this.”

Vin: That's where the ML architect and the ML researcher come into play. That product manager has enough understanding of user needs, user requirements, and strategy. They listen in to the planning process, hear business problems, and hear goals. Then they can write a business case saying, “With my knowledge of machine learning, I'm sure that if I give this to a researcher, we can solve this better with machine learning than we could with a traditional software approach, or with whatever we're using right now.”

Vin: That's the product management role. They need to be that translator, create the initial business case, frame and reframe the business problem. They gather requirements and work with users through the whole lifecycle and connect all of these other groups into the development process. Then they pass all of this over to the teams that are going to be responsible for researching, architecting it and actually building it, deploying and supporting it. And then they also create a gated process where that PDM is reviewing at gates, at particular checkpoints, or at achievement points.

Alexey: Like the project management sort of thing? Making sure that the project gets delivered. (46:42)

Vin: No, that's one of the key differences. The project manager is making sure the timelines are in place. The product manager is making a “kill or greenlight” decision. They look at the progress with respect to the business. Is this returning the value? Is this worth investing more money in it? (46:49)

Vin: It's almost like in academia. You have the grant and funding process. You sing for your supper every quarter or every year to try to get more grant money. The product manager is the one who evaluates whether I should continue — as a business — to fund you. Whether I should go back to the C-suite and say: “This is worth another X amount of money. We've made tangible progress. Here's what we've built. Here's why I think we should get funding for an additional round of research.” Or: “Here's why I think we should put it on the product roadmap and go forward with it.” They're in charge of that gated process. Not so much deadlines, but deliverables.

Alexey: That's a lot of responsibilities. They need to talk to the user and translate whatever the user is saying into a set of requirements. They need to work with researchers and help them. First of all, they need to know if the problem can be solved with machine learning at all — even before talking to researchers. Then they need to talk to the researchers and translate the needs of the user to the researcher. Then they also need to talk to the C-level people and translate whatever the researchers did. They need to come to the CEO and say, “This should result in whatever letter-soup metric — ARR, MRR — or something else that the company cares about.” They need to know all that. Am I correct? (47:45)

Vin: Yes. Even in traditional software development, product managers are the apex predators of the business world. They are engineer strategists. (48:39)

Product manager and others in the team

Alexey: It's one single person? It feels like a team of people. (48:54)

Vin: It can feel that way. The way that it ends up working is you have a product manager who has all of those core capabilities. (48:59)

Vin: There's a traditional requirements team. You'll have technical writers and requirements gatherers — business analysts, who do the requirements gathering. But they need the product manager in order to guide them and help them. What requirements are they actually gathering? Have they been written up correctly? Have the user needs been captured? They're not personally writing up the requirements. When they read a business case, a lot of that business case is already there.

Vin: Other groups that are interested in solving that problem have presented a lot of the background research. They have done a whole lot of the legwork around the business case. Now the product manager is really defining how we are going to use machine learning from a proposal standpoint: “This is how I propose we use it.”

Vin: Now they’re giving that to a researcher and saying “Am I right? Is this something that we can really do? Did I nail it here?” And the researcher will have a little bit of funding from the C-suite, who've seen the business case. They’ve seen this proposal and said, “Yeah, I'll give you some funding to do a two week exploratory on this.” Then the researcher will come back. It'll be a feasibility study. Is this something you can do? Do I think that you're right, as a product manager?

Vin: So, you're right. There's this ecosystem around the product manager. The ML architect gets involved as well. They're looking at that proposal and feasibility study from a path to production standpoint, from a support standpoint. Then you've got another layer of “This is a brilliant solution. Great job scientists. But here's what it's going to cost. Here's what it looks like if we actually put it into production.” You're having a lot of different roles being relied on, in order to support your product manager. But yes, the product manager has a significant skill set and a very deep understanding. Not only the technical side, but also the business side.

Background of ML product managers

Alexey: Basically, they need to be able to communicate with people who are collecting requirements. Also, with people who are building a model — this ML architect, for example. They need to be able to speak the same language with them. And if need be, they also can go ahead and do this themselves, but they typically don't. Usually, they are on a higher level, more coordinating than hands-on. Right? (50:53)

Vin: These are typically former data scientists, former ML engineers. They come out of that. A lot of them come out of startups. In a startup, you're really forced to do product management, data science and machine learning and a little bit of research too. They were forced to be unicorns. They have just enough understanding that they can quickly be upskilled within the business to do a larger range of activities. The product manager is not necessarily going to be the person that builds anything or prototypes anything. They're simply going to present the problem in data science and machine learning terms. (51:20)

Vin: Then the researcher will tell them “Based on what you told me about the problem, I think I can do this. Here's my proposal.” Very similar to an academic model, where they say, “Here's my proposal for what I'm going to do research on. What do you think?”

Vin: The product manager can then talk to the architect who says, “Based on that proposal, here's what it would take. Here's what I have to build in order to support this long-term. Here's the architecture and infrastructure. What do you think?” Now the product manager goes back and handles that ROI calculation.

Vin: So it's really them talking to the product manager. The product manager is doing a translation to machine learning terms and data science terminology. But it's technical folks driving that conversation about technical topics. The product manager isn't the sole technical owner of this. They aren’t the technical lead. They’re the business lead. But they have to understand what the researcher, the architect, the developers, and the engineering side are telling them. That's why you have to have the skill set.

Alexey: You already mentioned that usually data scientists and machine learning engineers become these ML product managers. Typically they come from startups. But can regular product managers who work in IT companies become ML product managers? They probably also have quite a good background for that. (53:17)

Vin: I've seen people try it with mixed results. Depending upon how technical the product manager is, and what their background is… You'd be surprised. Somebody with a background in economics does very, very well in the machine learning world. You'd be surprised how much relates to machine learning from an economics background — from a hardcore econ background. There are success stories. It depends upon what the background of the product manager is and how deep they got into the technical side of it. That includes understanding the platform, but also understanding it from a science side and from a model development side. (53:44)

Vin: You can upskill a product manager into a data science machine learning product manager role. But it really depends on their background. In some cases, project managers become product managers. They're semi-technical. It's really understanding how well they're going to learn the data science and machine learning side that will determine whether or not they're going to be successful.

ML architect role

Alexey: I noticed that the product manager is in the center — the communication hub, in a way. Then there's also this machine learning architect that we didn’t talk in detail about. From what I understood, this is a person who looks at the proposed solution and can understand if it's possible to implement it or not. If it's possible, they can also attach a price to this. They can say, “If you want to go with this model, this is how much it's going to cost you.” And then they can say “This is too much. We probably need to do something simpler.” Then they come back to the researchers and say, “Let's do something simpler.” Did I get it right? (54:50)

Vin: That's absolutely correct. The way that an architect provides value is creating the larger vision for the platform based on the business needs. Then the platform becomes part of the product roadmap. That's the really important piece of what an ML architect does. You see this in cloud architecture and software architecture as well. This concept of creating a platform is to not only enable development and automate as much of the development process and the research process and the experimental process as possible, but also to create a path for this model to get into production and then be supported. (55:43)

Vin: The ML architect understands the platform well enough to say: I'm looking at user requirements. I'm looking at business requirements. I'm looking at customer requirements. I'm looking at what we have in production right now and what platforms we have in production right now. I'm looking at the scale of data that's being proposed from a research standpoint, extrapolating out exactly what this would look like in the real world. Is this streaming? Is this batch? What do we have right now from a platform standpoint that can support this internally? Do we have something another team is using that we can repurpose? Do we have infrastructure already built out for this? Can we do something on premises? Or does this need to go to the cloud? Is that even feasible?

Vin: You have an architect who is going to say, “It's nice to say we're going to go on-prem, we're going to go to the cloud, we're going to go hybrid, or we're going to implement Kubernetes. But now let me tell you what it's going to take to actually do that, and how much it's going to cost.” When you start talking about it being in production — what is it going to cost to maintain this thing? A lot of models are very, very cheap to make. After three or four months in production, they have to be continuously retrained, and that’s expensive.

Vin: That researcher needs to mature, to overcome the problem of retraining. To actually build a more efficient model that performs better on the classes and the regions and scenarios that the business is most interested in. Back to the product manager, they need to define that, so that the researcher and the architect can do their job. (57:44)

Skills and tools of the ML architect

Alexey: The skills they need to have — they really need to know the tools. They probably need to know the cloud. They need to know what other tools there are. Knowing the scale of the problem and knowing the requirements, they can come up with a suggestion. If we use the cloud, we can go with AWS or Google Cloud — these are the services we can use to solve this particular problem. Because we need to build this data pipeline. We need to store this data somewhere. We need to then have this machine learning platform that makes it simpler for the researchers to deploy their models. They take care of all this tooling that makes it really easy for researchers to roll out things to production. Right? (58:04)

Vin: And give it to the model dev team. You have your research team at the front end of it. They're going to create artifacts. Those artifacts are going to go to a team who's going to use them to create products for scale, for efficiencies. Products that make things in the business run better. (58:52)

Vin: Cloud architect is a good traditional role to look at. They are very similar, but the ML architect is more focused on the machine learning side of cloud architecture. In a lot of cases, the business already has infrastructure in place. So the ML architect is looking at the infrastructure that's in place. They’re not necessarily designing from scratch in every scenario. They're looking at what's in place.

Vin: They're saying, “What can we repurpose to support the products that could come out of this research? Do we have to do anything? Can we just deploy on what we have? What is it going to take for the team responsible for deploying this to rebuild whatever it is that we may be incorporating into from an existing product standpoint? What are we going to have to do in order to support it? What's new that we might have to do to support it?”

Vin: In some cases they'll make a buy decision. “We need a platform but there's a couple of product solutions out there that we can buy. It's cheaper. If I look at the roadmap, this buy will also support: this product, this product, this product, and this project. Right now, for this project alone, it wouldn't make sense. But we're gonna have to do something for these three other projects anyway. So now it's justified from a cost standpoint.” That's why the architect’s important. The architect can look at a longer term vision and say “This seems expensive today. But over a year, over three years, this is going to seem a whole lot more reasonable. And we're gonna have to go this way anyway.”
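
The architect's longer-horizon justification reduces to simple arithmetic. Here is a hypothetical buy-vs-build comparison sketched in Python — every figure is invented for illustration:

```python
# Hypothetical buy-vs-build comparison for an ML platform.
# All figures are invented for illustration.

buy_per_year = 250_000      # annual license for an off-the-shelf platform
build_upfront = 600_000     # cost to build the equivalent in-house
build_upkeep_per_year = 200_000
projects_supported = 4      # roadmap items the platform would serve

for years in (1, 3):
    buy = buy_per_year * years
    build = build_upfront + build_upkeep_per_year * years
    per_project = buy // projects_supported
    print(f"{years} year(s): buy ${buy:,} vs build ${build:,} "
          f"(buy cost per roadmap project: ${per_project:,})")
```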

Becoming an ML architect and ML researcher

Alexey: Talking about all these three roles: researcher, architect, and product manager. All these people who perform these things, wear these hats — they are quite experienced already, right? They worked already as cloud architects, software engineers, product managers, machine learning engineers. They have to be pretty experienced to be able to do these things, right? Can a fresh graduate come and start working as a machine learning architect? Probably not? (1:00:42)

Vin: It would be like asking whether somebody with that little experience could become a software architect, or a cloud architect. I don't think so. I don't think you can really step into that role. You can step onto an architecture team. You can walk on as someone who's part of the team — a junior-level person on the team who's more hands-on, more like a machine learning engineer or a junior machine learning engineer. Especially if you have a software development background, or some other type of architecture background, and you're upskilling into the role. You can be very, very successful. A lot of software engineering best practices are finding their way into machine learning engineering, machine learning architecture, and the better-known data scientist, model-development types of roles. (1:01:18)

Vin: You can be successful coming into one of those teams and being a contributor pretty early on. But you do have to get some experience with architecture, software development, and software engineering best practices before you eventually transition into the ML architect role. It's not something that you're shut out of, or that you can't upskill into. It's just a longer journey.

Vin: The same thing with a researcher. It's not something that a data scientist can't do. You can come out of a Master's or PhD program, having some experience in academia, doing publications, or just doing applied work, and transition into a researcher role. You can be successful coming out of academia. You can be successful coming in from a research assistant position or one of those types of roles. It's not necessarily barricaded and exclusionary. But it's a longer learning process than you would expect for something like a data scientist role, where you can train up pretty quickly.

The gap in education

Alexey: We talked about the post you made when you announced the conference. One of the things you wrote there: “The data science programs are still teaching unicorn curriculum and producing people with six inches of depth across six different roles. But the data science and machine learning field is changing. Most courses, certifications, and programs are two years behind.” (1:03:12)

Alexey: It made me think when I read it. What can universities do to address this? Can they do anything there? And what can we do about this mismatch between what we study and the three roles we talked about? Because these roles are pretty hands-on. You cannot just read a textbook and go work as an architect. You have to put in a lot of effort. How can we bridge this gap? How can universities help?

Vin: I see two routes to it. The first is internal to companies. We don't look at the role of business as much as we should. Businesses should be mini-universities. There should be a pipeline of talent. As a company, you should have a farm club where you are upskilling people whose roles may become obsolete soon. Roles that, in many cases, the data science team is working to automate. These are efficiency projects, where the company is looking at automating something expensive and trying to become more competitive and more efficient. The people in those roles are domain experts. They have a lot of expertise within the company. Some may be semi-technical. (1:04:18)

Vin: Every role now — in some way, shape or form — is technical. There's the potential for that person to upskill. We have to create a farm club — a path for people in roles that are losing value. These roles are going to become increasingly automated. Create a path for them to go into these more technical roles.

Vin: I see companies having a role in this. For them, it's far cheaper to upskill: take people and improve their employee lifetime value. We think about customer lifetime value a lot, but you can also work to improve someone's employee lifetime value. Instead of constant turnover, with all the costs associated with finding new talent, laying off existing talent, and losing all that institutional knowledge in the process, you can preserve that knowledge and reduce those costs. Transition people into these roles. That's one of the key things that we can do as businesses. We can become more like universities and more like bootcamps. Try to continually upskill and reskill the people who are already there. That is, by far, the least expensive way to keep up with these ever-changing needs in the field.

Vin: From the academia standpoint, it's very difficult. Universities are trying to become more and more connected with businesses. They're trying to create partnerships. But they haven't really gotten very good at that. There are some standout universities, like MIT and Harvard, the big names in data science and machine learning. Across Europe, you find a lot of companies that are now lining themselves up with universities and research programs. The same thing is happening in India and China. In a lot of countries, core industries are getting very good at partnering with universities.

Vin: But only a small number of universities are getting these partnerships, and they're only going to the most prestigious ones. We can't have that. We have to have a more accessible educational system. We have to include smaller colleges, even down to community colleges and high schools. We need to be recruiting straight out of high school, into programs where the person is recruited into a company.

Vin: When I talk about a farm club being a huge part of this, recruiting out of high school becomes the beginning of the talent pipeline. The beginning of that farm club. The university is involved. Bootcamps can be involved — this new paradigm of learning and education that sits outside of academia. It's purely online, and more self-driven and self-directed than classroom-based and teacher-directed.

Vin: This is why we need to get into the high school level. Because if we're going to achieve any of our goals of education, we have to redefine the way that we teach at the high school level, even the middle school level. We have to be teaching people not for college, not for memorization, not to take tests, but to be productive. Eventually, this person is going to start a business and create new value for the entire marketplace. That's the redefinition that we need.

Vin: It needs to go all the way down into middle school. We need to get to the point where we're recruiting out of high school, the same way that baseball, basketball, and soccer teams do. They're pulling people out of high school. They're deeply engaged in the athletic programs, the sports, and the after-school programs. We need companies engaging in the same way at the high school level and the middle school level.

Vin: They can partner with schools and make it more accessible. By doing that, companies are going to improve the quality of the education system across the globe. They will help the transition away from memorization and test-taking. Even the best education systems are still focused on answering questions that are true/false or multiple choice. They don't put a whole lot of substance behind writing essays, behind being a creator, being a thinker, creating value from here up [points to head]. Not with these [shows hands]. We don't need these [shows hands]. We're automating these.

Vin: That's where the education system as a whole needs to be overhauled. Not only to meet these needs. We also need entrepreneurs. We need people starting businesses. That's our future. If we keep producing employees and people who are good at memorization, they will have no future in the economy in 10 to 15 years.

Alexey: You're saying that universities shouldn't even attempt to catch up with the industry? They should focus on fundamental skills. They should help people learn things faster, and let the industry take care of getting people with these skills — people who can learn fast, people who know the fundamentals. Maybe computer science fundamentals. And then they will learn everything that is needed. The universities shouldn't try to cover everything. They shouldn't try to cover data science, because it's outdated anyway, it's two years behind. (1:09:50)

Vin: What universities used to be — and need to be again — is places that build on the body of knowledge. That's the point of a university: to add to the body of knowledge, to do the research that creates ideas that can be disseminated out to businesses. This concept of business dynamism is something that a lot of economists have been looking at from a policy standpoint, going back to the 80s. Businesses are not as dynamic as they used to be. (1:10:34)

Vin: These ideas are no longer coming out of academia. They're no longer state-funded. They're no longer funded by research grants from companies to add to the body of knowledge, which is then disseminated to all companies. Instead, they come from just one company. You're seeing a lot of that happening with Google. The reason why there's so much consolidation of intellectual property and so much consolidation of wealth is that we don't have this business dynamism.

Vin: Colleges and universities need to go back to being places where people, as part of the organization, add to the body of knowledge. That's the fundamental piece behind all of these theses, research, grants, and so on. That's what research is. That's what a university needs to evolve back into. In a sense, they need to devolve. They're not factories for test takers. When you look at the liberal arts, they're focused on people learning how to think, learning how to be creative, how to look at issues, how to look at broader implications.

Vin: As engineers, we should be forced to take liberal arts courses. The reason is that those are the ones that teach us how to think. They make us look at history, understand the broader perspective, see patterns, and become more useful. Because we learn how to think.

Vin: That's where universities need to go back to. They don't need to teach niche skills. They don't need to chase after the latest trend and the latest technology. The software fundamentals, the fundamentals of development, and the best practices are easy to keep up with, because those are not constantly evolving. But trying to teach the latest programming language, or keeping up with platforms and trying to update your curriculum every two months… That's not what universities were designed for. They were designed to add to bodies of knowledge. I think that's what we need to get back to.

Alexey: We are taking more time than we originally planned. But maybe we can quickly go through a couple of questions that we still haven't answered. (1:13:19)

Is MBA needed for ML Product Managers?

Alexey: Do you think machine learning product managers need an MBA or not? (1:13:36)

Vin: No. The degree doesn't matter. I did it: I got an MBA with an econ focus. I started a business when I was very, very young, while I was going through college. I made every mistake possible. I went into the corporate world trying to understand what I did wrong. I came out of it more confused. I did an MBA because I was trying to understand how business thought and worked. No, you don't need it to be a product manager. (1:13:43)

Splitting Data Science into specific roles

Alexey: Unrelated question. How soon will we see the data science role being split into multiple different, more specific, roles? (1:14:14)

Vin: It's happening now. It's very much happening. (1:14:25)

Alexey: These roles we talked about, like the ML researcher? (1:14:29)

Vin: And more. We're going to see things like model quality assurance engineering starting to come up. If you look at software development, that's going to be the paradigm for model development. Research is going to be something totally different. If you look at biotech or pharma – they manage research. They have researchers. They've learned how to monetize research. They manage the research process. That's where the new roles and new capabilities within companies are heading. But “traditional data science” is going to follow more of that software development, software best practices paradigm. (1:14:32)

Metrics to measure angriness and satisfaction

Alexey: You said that angry users are important. What metrics do we need to use to measure the angriness or satisfaction of these users? (1:15:14)

Vin: Adoption is one of the easiest metrics to follow. If you're creating a new product, or creating and defining a new category – it's adoption. Who's using it? How long are they using it? How long does it take them to complete a task? A task that they used to do either manually or with another solution. What's the learning curve? How long does it take to get a user from “novice, never seen it before” to an “expert power user”? Once you get to a power user, how long does it take on tasks? How much time does it take per task? To complete work? Have you reduced the total number of tasks that person has to manually interact with? Are there fewer tasks in their chain? (1:15:28)

Vin: Then, in my domain, I do a lot of decision support. What are decision outcomes? What is the decision chain that leads to a particular outcome? What information are they consuming in order to make that decision? How do we improve what information we are giving them to drive better decision outcomes?

Vin: Also, the feedback loop: measuring the quality of a decision. Say 100/100 is the perfect decision. What score did this particular decision result in? Suppose you have a product with a revenue range: you have your top, you have your expected, and you have your underperform. You have a decision chain that leads to that product being deployed, being adopted, and being monetized. Were your pricing decisions correct to achieve your baseline? Did they overperform? Or did they underperform? What data did we use in order to make those pricing decisions? What data can we change? What data can we improve in order to help the next project perform better — to get closer to that mid or high end of the projected revenue or cost-savings range? Those are really tangible metrics, when it comes to adoption and to building these solutions, that we need to start focusing on.

Alexey: They're not necessarily related to machine learning. I would say these are usual product metrics, right? This is something that a usual product manager would care about. (1:17:31)

Vin: It's defining an entirely new way of connecting model output – your prediction – and model quality back to an actual hard number. Because there needs to be a hard number on the other side. If I can improve a particular piece of model performance, it results in this happening on the business side, on the revenue or cost-saving side. (1:17:44)
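To make these metrics concrete, here is a minimal sketch in Python. Everything in it (the event log, the scoring scheme, the revenue figures) is hypothetical, invented to illustrate the kind of adoption and decision-quality tracking described above, not Vin's actual method:

```python
from datetime import date

# Hypothetical usage log: (user, task date, seconds spent on the task).
events = [
    ("ana", date(2021, 3, 1), 420),
    ("ana", date(2021, 3, 8),  95),   # a week later: much faster
    ("ben", date(2021, 3, 2), 510),
    ("ben", date(2021, 3, 9), 480),   # barely improving
]

# Adoption: who is using it, and how quickly do they become power users?
by_user = {}
for user, day, seconds in events:
    by_user.setdefault(user, []).append((day, seconds))

for user, tasks in sorted(by_user.items()):
    tasks.sort()
    first, last = tasks[0][1], tasks[-1][1]
    print(f"{user}: {len(tasks)} tasks, time per task {first}s -> {last}s")

# Decision quality: score an outcome by where it lands in the projected range.
def decision_score(realized, low, high):
    """Score 0-100 by where the realized value falls within [low, high]."""
    clamped = min(max(realized, low), high)
    return round(100 * (clamped - low) / (high - low))

# Projected revenue range for a pricing decision (made-up numbers):
print(f"Pricing decision score: "
      f"{decision_score(1_400_000, 1_000_000, 2_000_000)}/100")  # 40/100
```

The same pattern generalizes: pick a projected range up front, record what actually happened, and the gap between the two becomes the hard number that model improvements are measured against.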

Alexey: I think that's all. You can see that it's already dark here. Thanks a lot for coming and sharing. I wanted to ask you so much more than I did. We had a very interesting discussion and I learned a lot from you. I'm sure the 33 people who are still here also learned a lot. That's why they stayed: this information was invaluable. It was also interesting to hear your thoughts about education and how we can change it. (1:18:12)

Vin: Thank you for having me. I really appreciate it. Thanks for all the time you put into it. And thanks everybody who stuck around for just over an hour's worth of me talking. (1:18:52)

Alexey: It's 5:20 PM now, and we started at noon. I don't know if all these 30 people have been here since noon together with me, but it's been a while. I won't take any more of your time. Thanks a lot for coming. We will chunk this long video into multiple parts. We will also upload this conversation as a podcast. Next week, we have another day of the conference, where we will talk about machine learning in production. (1:19:03)

Subscribe to our weekly newsletter and join our Slack.
We'll keep you informed about our events, articles, courses, and everything else happening in the Club.

