
DataTalks.Club

Innovation and Design for Machine Learning

Season 8, episode 3 of the DataTalks.Club podcast with Liesbeth Dingemans


Transcript

The transcripts are edited for clarity, sometimes with AI. If you notice any incorrect information, let us know.

Liesbeth’s background

Alexey: This week, we'll talk about innovation and design for machine learning. We have a special guest today, Liesbeth. Liesbeth has been working on topics related to strategy, product and AI for the last six years – first at McKinsey and then at Prosus. Prosus is the parent company of OLX, which is where I work, so Liesbeth and I were working together. Liesbeth also led an innovation division at OLX. Now she is starting a new company and will be working on climate-resilient agriculture – maybe you'll tell us a few words about that. Welcome to our event. (1:18)

Liesbeth: Thank you. And thanks so much for inviting me. I’m very happy to be here. I think you already gave a very nice overview of some of the things that I've been doing in the past years, so I won't spend too much time on it. But as you said, in the past few years I've been working on where strategy, AI and product meet. I am an engineer by training – I have a Master's degree in applied physics. But I am also an art historian and ever since my studies, I've really enjoyed trying to meet art and science, or at least trying to find ways in which more technical topics merge with softer topics like strategy or product or design or innovation – the ones that we'll talk about today. (1:59)

Liesbeth: As you mentioned, I’ve spent some years working as a strategy consultant, where I worked on all sorts of topics. But I really wanted to dive deeper into the topic of AI and machine learning. That's when I joined Prosus, where I have also been doing many things, but I think you can summarize it as trying to find ways to scale the impact that AI and machine learning have across organizations. That's where I've been working on topics like, “How does AI actually fit into the overall business strategy of a company?” Or “How can you design and optimize your products in such a way that you have amazing feedback loops going?” Or “How can you upskill a whole organization – for them to understand the basics of AI and machine learning and help enable the work of the data scientists in an organization?” So a lot of different topics that I’ve been working on, and I'm very happy to be talking about design and innovation today. (1:59)

Alexey: So you had the double degree, right? First you got one and then another? (3:41)

Liesbeth: Yeah. I have my art history degree as an undergraduate, so just a Bachelor's degree. I did that together with my physics and then I did a Master's in Applied Physics. (3:47)

Alexey: That's pretty cool. Would you say it was quite useful in your career to have both perspectives on things? (3:59)

Liesbeth: I would say that the humanities – or art history – definitely have a completely different way of thinking about problems. In physics, it was always the case that you had a challenge and there was only one possible answer. Whereas in history or the arts, there's always a single topic that you can look at from different perspectives – it's much more about trying to understand other people's perspectives on a certain situation and really being able to convince someone of your perspective. So I would say it's a very nice and complementary skill set. (4:07)

Alexey: Especially with history, right? The textbook is not enough because you always have differing opinions and you need to work with sources to really find out what happened. Textbooks are always written from someone's point of view – they sometimes can omit things. (4:45)

Liesbeth: Yes. Someone always has their own view on a certain situation. Definitely. (5:01)

What is design?

Alexey: Okay. So what is design? Is it about making websites look good or is there more to that? (5:07)

Liesbeth: Right. [laughs] So you're asking a physicist here to define design. A designer will probably have a different perspective. But because I'm not a designer by training, I'm actually very nicely able to see design as a tool, really. For me more specifically, it’s a tool to help build user-centered products. I see it as a whole set of processes that you can apply to make sure that whenever you build a product, a feature, or an AI application, you really start from the problem that you're trying to solve and always take into account the view of the customer – or whoever is going to use the application that you're building. That, for me, is what design is about. (5:15)

Alexey: Why is it called design? To me design is like, “Okay, let's design a car that is beautiful. Let's design a dress that looks beautiful. Let's design a website…” To me, it's always about the aesthetic component of it. But when I started working at OLX, I learned that designers at OLX and in other companies, especially UX designers – it's not only about making buttons look good, but also about how easy it is to find a button on the website and other things. Is that right? (6:06)

Liesbeth: Yeah. Especially when you try to combine that with AI, data science, and machine learning – for me that becomes about much more than just what the button needs to look like and what color it should be. I have some examples that I can share with you about how that might work. Maybe the closest to UX, I have an example of what you would call “algorithm-friendly design” or as it's also sometimes called “seeing as an algorithm”. It's a very nice article that I can recommend by an author named Eugene Wei. What he talks about, it's almost as if when you think about a certain product experience or the way in which a customer interacts with the product, why don't you also see the algorithm as a stakeholder in that? For an algorithm to work very well, it needs to have very clear signals. So why not build your product in such a way that the algorithm can actually collect all of the signals that it needs? I think this is kind of a new way of thinking about the interaction between design or product design and data science. But I think it's a very powerful one. It also fits into this broader story of maybe data-centric AI. (6:43)

Liesbeth: Three years ago, when I joined Prosus, I saw that a lot of data science teams would build AI applications using the data that was already there, which was usually generated for a different purpose – not necessarily for the purpose of training their models. But imagine that you move to a different scenario where, from the beginning of designing a product, you take into account that algorithm and what it might need. Think about OLX – it’s about buying and selling – sometimes buyers and sellers have different incentives. A seller just wants things to be very easy. They don't want to provide that much information, because it takes too much time. But a buyer might want to know a lot of details. So there's already a little bit of conflict there. (6:43)

Liesbeth: Usually, when you design a product, you take into account both the needs of the buyer and the seller. But imagine that you maybe also take into account the needs of the algorithm there, and maybe the algorithm is all about understanding the preferences of a certain buyer. Maybe it’s really detailed understanding such as what style of secondhand clothing they might like, or what style of furniture they might like and maybe, in a way, it’s also designing the product to collect really strong signals about exactly that. So that's an example for me where data science and design come together. (6:43)

The importance of interaction in design

Alexey: I think you mentioned one important word that I took a note of. You mentioned that it's about interaction. Design is about how you interact with something – be it a piece of furniture or a website, a physical or a virtual product. So it's about making the website easy to use. When we talk about design and machine learning, it's also about thinking how this thing will interact with the algorithm, right? Not only with the human (with the user) but also with the algorithm that we will use to make the user experience better. Is that right? (9:25)

Liesbeth: Yeah and especially with “How can we collect the signals that the algorithm might need in order to train faster?” The article that I just mentioned about “seeing as an algorithm” describes the difference between Instagram and TikTok very nicely here. So imagine scrolling an Instagram feed – I think a lot of people will be familiar with that – you scroll past a lot of different posts, but if you read it from the perspective of an algorithm, it's kind of hard to understand if someone likes a post or not. Typically, you might see one bit of a post here, you see a bit of another post below it, you're looking at the comments. So what is it that the user is actually interacting with? (10:04)

Liesbeth: If you look at TikTok – there, it's all short videos, where someone is looking at one video at a time. Maybe the user clicks on the creator, maybe they will click on the heart. Then you collect much sharper signals about that one particular video, which actually allows TikTok and its algorithms to learn much faster about what it is that you like and what you don't like. So that's where, in the way that you design your interaction with the user, you're taking into account how you can speed up the learning of the algorithm and actually extract what the interests of a particular user are. (10:04)
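To make the contrast concrete, here is a minimal, purely illustrative sketch (all names, fields, and values are hypothetical – this is not any real TikTok or Instagram API). When the interface shows exactly one item at a time, every user action can be attributed to exactly one item, so each event converts directly into a clean training label:

```python
from dataclasses import dataclass

@dataclass
class Event:
    user_id: str
    item_id: str           # exactly one item on screen (one-video-at-a-time UI)
    action: str            # "like", "follow_creator", "skip", or "watch"
    watch_fraction: float  # share of the video actually watched, 0.0-1.0

def label(event: Event) -> float:
    """Turn one unambiguous per-item event into a training label.

    Because the UI shows a single item at a time, there is no guessing
    which post in a crowded feed the user was really looking at.
    """
    if event.action in ("like", "follow_creator"):
        return 1.0  # strong explicit positive signal
    if event.action == "skip":
        return 0.0  # strong explicit negative signal
    return event.watch_fraction  # implicit signal: how long they stayed

events = [
    Event("u1", "video_42", "like", 1.0),
    Event("u1", "video_43", "skip", 0.1),
    Event("u1", "video_44", "watch", 0.8),
]
labels = [label(e) for e in events]
print(labels)  # [1.0, 0.0, 0.8]
```

In a multi-post feed, by contrast, one scroll event spans several items at once, so the same labeling step would first have to guess which item the user actually engaged with – exactly the ambiguity the "algorithm-friendly" design avoids.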

Alexey: I guess in the case of Instagram and other social networks – there are multiple ways you can interact with a piece of content. You can like it, you can comment on it or share it, or you can just watch it. Then the people – the developers, the data scientists in that company – need to figure out, “Okay, we have a ton of signals coming out of this. How do we actually get all the signals and combine them in such a way that we can put this into our recommender system and get a good fit?” In this social network, they may have first designed the feed and then thought, “Okay, now, how do we add the ranking here?” While at TikTok, they thought about this from the start. Is that a correct assessment? (11:22)

Design as a process (Double Diamond technique)

Liesbeth: Exactly, yeah. It might be nice if I mentioned another example of how, for me, design and machine learning come together, because what we've just been talking about is really a bit more about design as a way that you shape the interface between the user and the product. But for me, design can also be much more of a process, or a way of working that you agree on with each other. (12:12)

Liesbeth: An example of a technique that's used a lot by designers is called the “double diamond”. It's a way to go from a rough problem area that you might want to solve – or something that you think you want to build – to an actual working solution. Imagine the shape of two diamonds – it's kind of diverging and converging. That's how that way of working works. So you start with something like, “Hey, I want to reduce fraud in my business or in my product,” then you diverge and you start to research, “Okay, what is fraud really about? What kinds of fraud do we see on the platform? What do users care about when they think about fraud?” Then you converge again and you say, “Okay, if I want to solve fraud, there's this particular sub-problem that I actually should be solving, because it's most important to my users.” And then you start to diverge again and you look at all of these different potential solutions with which you can solve that type of fraud and you start experimenting. (12:12)

Liesbeth: You start testing them and then you converge to a particular solution that works. That, for me, is another design method that's actually used to make sure that you're solving the right problem and that you're also solving the problem in the right way. Some of this might seem a little bit obvious. But to me, again, it also touches on some of the core challenges that I associate with data science teams that I've worked with, namely, “What's really the problem that we're trying to solve here?” – first of all. And second of all, “Is machine learning really the right way of solving it?” And I have seen a lot of data science teams that go to the solution very quickly. But if, as a team, you agree on these types of design practices, it might help you to make sure that you're solving the right problem and you're using the right tools to do so. (12:12)

Alexey: I was taking notes about this “double diamond”. This is not the first time I hear this term, but let me try to summarize it to make sure I understood it. So you have these two big steps in the process – the first step is that you really need to understand the problem: “What is the problem you're trying to solve?” So first, you do a bit of brainstorming and you try to understand, “Okay – fraud – what does it actually mean?” Then you do brainstorming, you talk to people, and you collect a lot of data. Then out of this data, you actually want to pick one area and focus on that. By doing this, you really understand the problem that you want to solve – the problem that your users have. (14:32)

Alexey: So first, you start with the problem. Then the second step is, “Okay, now we found out what the problem is. Now let's find a solution.” Then you again go into brainstorming mode and you say, “Okay, I can solve it with a neural network. I can solve it with, I don't know, gradient boosting. I can solve it without any machine learning. I can solve it just by sitting there and labeling data myself.” or I don't know, “I can hire a vendor to do this.” So you list all these possible things and then, at the end you say, “Okay, now we have all these possible solutions. We can build things in-house. We can solve it without machine learning. We can hire a vendor. Let's evaluate all these solutions and find the right one.” At the end of this process, you know what the problem is and you know what the best way to solve it is. Right? Hopefully. (14:32)

Liesbeth: Exactly. Hopefully, yeah. I think that was a very nice way of describing it. Of course, it depends on the team and your organization – how exactly they might apply a process like this. But what you also often see is that when you go through that solution phase, there's a lot of experimentation. So out of these six possible ways that we've identified, maybe let's start with three of them – let's see if they actually work, if they work as easily as we thought they would. A lot of this is trying to substantiate your ideas with data, whether it comes from your users or from tests, to try to see what the problem you're solving really is and what the best solution could be. (16:02)

Alexey: So I guess this is a part of the second step – the second diamond – to say “Okay, now we have a couple of possible solutions. We can evaluate this vendor. Let's try and build a proof of concept.” Or “We can see how to solve it without any machine learning. Let me just sit down and label all this data myself and see what happens. How can I actually make a decision based on that?” (16:43)

Alexey: Then the third option could be something else – let’s just say, “Open my Jupyter Notebook and train a simple model myself.” Then we do this in parallel. We have three proofs of concept and at the end of this step we can also say, “Okay, which seems more viable?” This is also part of the process, right? (16:43)

Liesbeth: Exactly, yeah. Then usually, along the way, you have different criteria. Sometimes, you just have budget constraints, so that's why you eliminate one of the options. Sometimes you have an idea, but you put it in front of the user and it turns out that they don't like it at all. So maybe that's why you eliminate another idea. Or, as you said, you try to at least build it, but you figure out that you really have none of the data to get started – and collecting the data will take you so long that the idea is not feasible. (17:25)

Liesbeth: Then, for different reasons, you can start to eliminate some of your potential solutions. But at the very least, I like it because it's a very conscious approach to making sure that you are really doing the right thing and that you're not spending your whole team's time solving a problem that could have been solved in a much easier way. Or maybe that wasn't the most important step to actually solving that fraud issue that you started off with. (17:25)

How long does it take to go from an idea to finishing the second diamond?

Alexey: At the beginning, when you were describing this, it may have looked like a thing that could have been done in a couple of days. But when you talked about creating multiple proofs of concept and also showed them to the users it paints a different picture. (18:21)

Alexey: You also mentioned user research, I think, which involves talking to actual people and then showing them some things. So how long does it actually take from the moment when management comes in and says, “Hey, we have a lot of fraud – let's solve it,” to the moment when you finish the second diamond? (18:21)

Liesbeth: Well, ideally, just two days, right? [laughs] “I have an amazing team. We put all of them together in a room and solve it in two days!” Yeah, it depends a lot on how many customers you have, how elaborately you want to do user research, or how much of that is already there in your organization. It also depends on how complex you want to make the solution and when you would say it's done. Is it done when we have some sort of a small proof of concept? Or is it done when you've actually scaled that across all of your customers and all of the countries that you might be operating in? (18:55)

Liesbeth: I think it's difficult to say how long a process like this takes. But I do want to stress that when a team, which includes data scientists, starts to work with these types of design techniques, it doesn't mean that they'll spend 90% of their time on brainstorming sessions. [laughs] I think it's very nice to use some of these techniques in order to try to make sure that you, as a team and as an organization, keep working in a user-centered, problem-centered way. That way you don't go to a solution too quickly. It also doesn't mean that you have to throw away all of the ways of working that you had before. I think it can be a very nice add-on to some of the ways of working that teams already have. (18:55)

Design thinking (Google’s PAIR)

Alexey: I heard about this “double diamond” in the context of another term called “design thinking”. So what is design thinking? How are these two things related? (20:17)

Liesbeth: If you maybe thought that this concept of the “double diamond” was a little bit fluffy, [laughs] then I would say that “design thinking” is the overarching term, even above some of these different processes. I would say that design thinking is the overall ambition to make sure that whatever products or features or models you build take the user into account. As an organization, you could do that in different ways. (20:28)

Liesbeth: There’s actually a nice example of this – how Google does this. If you're interested to know a little bit more about what some of these design processes might look like, to make sure that you’re working in a “design thinking” way, you could look up PAIR by Google. I think it stands for “People and AI Research”. They have a whole set of tools, from scoping exercises to ways of prototyping, and they’re very nicely combined – making sure that you understand the user, but also working towards actual AI and machine learning solutions for some of the problems that you might identify. I would say it's a whole set of processes to make sure that you are user-centered. But as I said, I think it's a nice add-on and a nice tool to have, but it's not something that makes you throw away everything that you've learned before. (20:28)

Alexey: It's “pair” like, P-A-I-R. Right? (22:03)

Liesbeth: Yes. People AI Research – yes. (22:08)

Alexey: And you said it covers things like scoping? I guess this is the first diamond that we talked about, right? Meaning – how do we make sure that we're solving the right problem? And then prototyping, I guess, is the second part – when we try to find the solution. (22:13)

Alexey: In PAIR, they talk about different tools and work processes – how to actually organize our sessions in such a way that we successfully find the problem. So that then, we can successfully find the solution, right? (22:13)

Liesbeth: Yes. And I would say, it's just more of an overview of some inspiring resources. As a team that has an ambition to do more with AI, it's not as if there's already a very strictly written handbook of exactly how you should be doing it. It's typically up to a team to just find inspiration in how others might have approached this and this research from Google might be an interesting one. (22:45)

What is a Design Sprint and who should participate in it?

Alexey: And what is a design sprint? Is it related to what we talked about? (23:16)

Liesbeth: Yeah, sometimes, teams go through a process like this, which is kind of centered in one week, for example. Before, they might have already done a lot of interviews, so one week with a team – they'll try to analyze those interviews and say, “Hey, here's really the problem that we want to solve.” They'll do a whole bunch of creation/ideation type of workshops and they'll come to a prioritized list of solutions that they might want to be trying. (23:22)

Liesbeth: Sometimes teams try to really put a lot of brainpower together and try to answer a whole bunch of these types of questions around the problem and a solution, together, in one week. But that, again, depends a little bit on what it is that you're trying to solve. Sometimes, “Hey, let's solve fraud.” That might be such a big challenge that a design sprint is not really the right method for a team to do. (23:22)

Alexey: So this is one of the tools of this “design thinking” thing, right? (24:15)

Liesbeth: Yes. (24:19)

Alexey: Okay. So I guess, from what you mentioned – when we understand the problem, when we've done a bit of homework already, when we've interviewed the users about the problem that they have with our product, and we already have some materials. In order to go through this material and process it as effectively as possible, we put together a team – a bunch of people – and tell them, “Okay, you have a week. Now figure out what to do with it.” This is called a “design sprint,” right? (24:22)

Liesbeth: Yeah. I think that's how you could say it. Yes. (24:56)

Alexey: [laughs] Okay. What are the things that we need to do in this “design sprint” to be able to do it successfully? Who do we need to be in the sprint? I think we need data scientists, right? Do we need designers, product managers? Who should be there? (25:00)

Liesbeth: Well, definitely. If you're thinking about solutions that involve data science, it’s definitely helpful to have data scientists in the room. [laughs] Also, what I appreciate about these ways of working is that you'll try to involve the full team in the whole process. It's not just like, “Oh, okay. I think there's a solution with AI here. This is the moment that we call the data scientist.” No, I always really appreciate it if a data scientist maybe sat in on some of the interviews with customers or at least helped to really understand and own the problem that you're trying to solve. That's why everyone that you might need for the solution, I'd really appreciate it if all of them are there in the full process. (25:18)

Liesbeth: But yeah, for these types of sessions, it is very helpful to have someone who can facilitate, so that everyone understands, “Hey, when should we be diverging?” – where we keep researching and try to expand the problem space – and when do we start to converge and say, “Hey, this is really the core problem that we're trying to solve,” for example. This could be a designer or a service designer – they are pretty well-equipped to facilitate processes like that. But it could also be a product manager who has done that before. This is a skill that you can definitely learn – data scientists could also be the ones who facilitate a session like that. And, yeah, whoever you might need on the solution side, you would want to involve them in a session like this. (25:18)

Alexey: And the engineers, right? Front-end engineers, back-end engineers – if we need machine learning, then data scientists. The facilitator could be the product manager or the designer – or both. Basically, the entire product team that will work on implementing this should all be there. Right? (26:53)

Liesbeth: Ideally, I would say so. Yeah. This is something that I haven't always seen. I do see often a tendency that the user research is done by either researchers or designers and then once you actually start building, that's when you call the engineers. But I would say that you build better products or better features or better AI applications, if you have all that brainpower involved – both in the problem definition and in the solution definition. As I mentioned before this idea of “seeing as an algorithm” and maybe optimizing the interaction with the user in such a way that you really collect these laser-sharp signals that you might need in your models – well, that is quite a complicated task. So you probably need all of the different mindsets and all of the different specialties involved in the whole process to be able to achieve something like that. (27:13)

Why should data specialists care about design?

Alexey: The next question I had was – as a data scientist, why should they care about all that? It sounds like a joke, but I think you answered that, right? In this process, when we talk about a potential solution – even before that – for a data scientist, it's very helpful to understand the problem because it will influence how exactly you solve it. But then also during the solution phase – let’s say when we're thinking about the interface and how exactly the interface will look like – if I, as a data scientist, am not in this meeting, then I might not be able to say, “Hey, wait a minute, how exactly are we going to collect this data that we will actually need for something like this?” (28:18)

Liesbeth: Exactly. Yeah. I would almost turn the question around and say, “As a data scientist or machine learning engineer, or maybe data engineer – have you ever felt like you were working on an initiative where it wasn't 100% clear to you why you were even doing this and why this problem has to be solved or why it was an important problem?” If you ever find yourself having this tendency of just going to the solution a little bit too quickly, then that might be an indication that there's definitely some value for you and the rest of the team to get into techniques like this. (29:04)

Alexey: Yeah, definitely. I remember being in this situation and even worse than that – when somebody already comes to you with a solution. Let's say a manager comes and says, “Hey, we have this problem and I read an article that says you can solve this with deep learning. So go figure out how to do this.” Then you spend a couple of months and come up with some solution – it turns out that the problem is wrong and the solution to this problem is not correct. Then it's just two months wasted. (29:45)

Liesbeth: Yes, exactly. Yeah. Of course, some of those dynamics you cannot just solve by having a brainstorming session. If you work in a company where that's the tendency – the product direction is established somewhere by really high management and you're really in an execution-focused role – then, of course, there might be some bigger challenges to solve. (30:17)

Liesbeth: But at least this way of working with your team might allow you to start the conversation with managers about these types of topics, where you say, “Hey, but look – we did the research and this is not even the right problem to be solved anyway.” Or “This might not be the right technique to solve it.” (30:17)

Challenging your task-giver (asking “why”)

Alexey: Let's imagine we have this situation: a manager comes to me, or to the team, or to the product manager and says, “Hey, this is the problem we think we have. Let's solve it with a neural network.” So how do we challenge that person? How do we challenge the CTO or whoever and say, “No, no. Let's first figure out what the problem is.” How do we go about that? (31:04)

Liesbeth: Also, during my time working as a consultant, I think I've gained some experience with saying “no,” but maybe a friendlier way of saying “no” is asking “why?” It's almost as if your manager or the CTO or whoever gives you this assignment is also kind of like a customer. Just like you want to understand the customer and their needs and their priorities, you might also want to do the same with whoever is giving you these types of assignments. Then it may turn out that they tell you to build solution X, but what they're actually trying to do is solve fraud, or improve their monetization, or improve engagement between the different users on the platform. (31:34)

Liesbeth: So that's maybe the underlying “why” that you also need to get to with whoever gave you that assignment. And then, with your team, you can try to figure out, “Okay, if that's what we're trying to do (trying to solve fraud) then I can go back and understand what's within fraud that’s really the best thing to be addressing.” I would also see that that's kind of an engagement, where you want to get to the bottom of why someone is giving you a certain task. (31:34)

How to avoid the “Chinese whisper game” (reiterating the problem)

Alexey: I guess in this situation, I or somebody else would ask, “why,” and then understand where this is coming from? Like, “Why are you coming to me with this problem? Where did you hear about this?” And then maybe it's better to also involve that person? Or maybe if this is something that came directly from the CTO then I don't know – do we start a design sprint? How do we do this? Use the double diamond? Or, maybe there is even a bigger problem and we need to think about something else before we start? (32:51)

Liesbeth: People in larger organizations might recognize this – others maybe not. If you work in a smaller team, where the CTO just sits at the desk right next to you, then you might not have these situations. But larger organizations sometimes remind me of this Chinese whisper game, where you all sit in a circle and one person whispers something in the ear of the person sitting next to them and the story keeps traveling throughout the room. By the time it ends up at the end of the line, that story has completely changed. I think this sometimes does happen in organizations. (33:25)

Liesbeth: You know, everyone has their own interpretation of what it actually is that we're trying to achieve. Again, that's one of the bigger challenges that you can’t necessarily solve through design thinking or any sort of design process. That is probably just an organization that's in need of a pretty clear product definition or a very clear strategy statement. But, again, you go into these meetings, try to understand what the bigger problem is that you're trying to solve, and then whoever is responsible for building a solution needs to make sure that he or she has the team that they need to try to achieve that – so involve their product teams, usually – and then facilitate the right process. (33:25)

Liesbeth: It could be a design sprint, if the problem is slightly smaller and slightly more clear-cut. It could also be launching user research, when you don't yet have an understanding of the problem you’re trying to solve. Or maybe it's just a lot of analytics – sometimes that can be what you need to understand what it is that you're actually talking about. Then you combine those into an insight about the problem to solve and take it from there. (33:25)

Alexey: Yeah, this Chinese whispering – I think it happens quite often, especially in larger companies where we have multiple departments. The problem is coming from one department, then somebody tells it to their manager, to their manager, to their manager, and then the manager at the top talks to another manager, and then there could be like five-six hops before it reaches the data scientist who actually needs to solve it. I guess, here, the right way to do this would be to try to backtrack and then get together to try to solve it. (35:11)

Liesbeth: Yeah, and also, as a data scientist, don't be afraid to say “Okay, I'm just gonna write down what I think the scope is, what I think the problem is that we're trying to solve, and why we're doing this.” So just put together a very short scoping document – it can even be an email – and send it back all the way up to where you think it actually came from, and see how they respond to that. Those basic project management tools can sound kind of boring, but they can maybe save you a few months of work if they help you focus on the right topic. (35:49)

Alexey: So when somebody comes with a solution, you need to find out why and then you need to document this. Then you need to say, “Okay, this is what I think the problem is. This is the solution that I propose.” Right? Or even without the solution – just the problem. You have this document and you send it, and then you want to get an “OK,” meaning somebody saying “Yes, this is indeed the problem.” Right? (36:27)

Liesbeth: [laughs] In an ideal world, this is how it would work, yes. [laughs] (36:51)

Defining the roadmap for data science teams

Alexey: [laughs] Do we need to have some sort of processes in our organization? How exactly should this happen? I can imagine if there are such ad hoc requests coming in, I guess there should be some process, like how exactly we define the roadmap and how we prioritize things, right? How should it work? (36:55)

Liesbeth: That's maybe also a broader question around who defines priorities in an organization, maybe specifically when we talk about data science. Who defines what the roadmap looks like? What models are the teams going to build? Who leads that? I think what you're describing is one situation that I’ve sometimes seen, where it's decided by management, “Okay, we need deep learning here – let's build it,” without maybe a detailed understanding of the problems or the total solution space. But what I've also seen is maybe more the opposite, that it's data scientists that pitch ideas, maybe mostly from a technical perspective like, “Hey, we have this data available in the company and I see that we can build a computer vision model here, based on whatever data we have available. Let's build it.” (37:15)

Liesbeth: That's not always wrong, but I do sometimes see a missed opportunity for product managers to play a larger role in defining the AI and data science applications, simply because the product managers are usually the ones that sit at the center – that have an overview of all of the different problems out there that need solving. They could be the ones that very nicely say, “Hey, here I see a problem area that my customers are struggling with. Maybe we can use AI machine learning as a tool for solving it.” Priority setting is, of course, a very difficult, broader topic, and organizations have different processes in place for that. But sometimes I see that product managers can also take a leading role in at least understanding where AI might make sense. (37:15)

What is innovation?

Alexey: One other thing that I wanted to talk about here – so we talked about design – but we also wanted to talk about innovation. How is everything we talked about (design, communication with different people) how is it related to innovation? (39:13)

Liesbeth: [laughs] Again, we just discussed a very broad topic there, but you decided to go to an even broader one called innovation. From my perspective, how I see this topic – especially in large organizations, a lot of the data scientists typically work on goals for the next three months, right? A lot of large tech organizations have these things called OKRs (objectives and key results) that are set every three months. This means that a lot of data science work usually fits in a three-month block, which means that the projects are not huge, crazy endeavors, but something quite tangible with a direct impact on a certain metric. For example, “Let's try to tweak the search or recommendation models for this particular part of the product and try to improve performance by 5-8%.” (39:33)

Liesbeth: Of course, if all of the data scientists working in an organization have these projects where they get from A to B in an incremental way – at scale, that has a large impact. But I do think that can be a bit of a missed opportunity, because there might be completely new business opportunities, or completely new product opportunities, that just don't fit into a three-month time period – they will take more time. Therefore, innovation for me is finding a way of working where you spend part of your organization's resources on figuring out where the product or the business might go in six months, or a year from now. And how might AI machine learning actually enable a huge opportunity that wasn't there before? (39:33)

Liesbeth: For example, when you talk about OLX, “How can we try to do personalization to an extent that we don't do right now? How can we really in detail understand your particular tastes when it comes to furniture in such a way that we can find the perfect couch that fits into your house?” Or when you talk about food delivery, “How can I really understand exactly what type of flavors you like in such a way that I can find this dish that you've never tried before, but I'm sure you're actually going to love it?” Those could be really cool and powerful ideas that very much change the business, but it's hard to do them in a three-month project. So that's why, at least sometimes, I would say you need a bit of space in an organization in order to try out the more radical ideas. That for me is innovation. (39:33)

Alexey: I think we usually work in these increments, in these sprints, in these time blocks – I think the whole Agile thing is about moving in small things, right? You don't spend a lot of time working on something ambiguous when you can work on something tangible that will bring results right now. Right? This is the usual way of working – this is the whole Agile thing. (42:20)

Alexey: But as I understood you correctly (maybe I'm wrong) innovation is not about making these small, incremental steps but instead backing up and saying, “Okay, instead of doing the next thing we need to do for this project, let's just start from the beginning and see if there is a different way of solving this problem. Not something that we build on top of what we already have, but maybe there is a completely different road that will take us closer to what people actually need.” Right? (42:20)

Liesbeth: Exactly. Yeah. As an example – second hand cars. Within OLX, there are a lot of platforms where people can buy and sell second hand cars. But the core problem when you're buying a secondhand car is trust. Not very surprisingly, the number one emotion when you buy a secondhand car is fear – that the seller is lying to you, that the car has been in a huge accident and they're trying to hide that, or the car has actually driven twice the amount of kilometers that the seller says it has. Of course, you can try to solve some of these issues in a kind of an incremental way. (43:19)

Liesbeth: So you'll try to make a model to test the reliability of the seller, for example. But you can also say, “Okay, can we use AI, or machine learning, or any other technique, to solve that trust issue in a radically different way? Maybe we should build this huge scanner, where you drive the car into the scanner, kind of like a carwash, and it has a lot of AI machine learning. At the end of it you know every possible detail of that car.” That would be a radically different way of solving it. I'm not saying that that is the way to solve it, because then I would be going immediately to a solution. That's not what you try to do in innovation. (43:19)

Liesbeth: Again, it's the same type of design thinking where you really try to understand the problems and all of the solutions before you pick one. But if you just have three months, no one is going to try to think about building that scanner because it just wouldn't fit into the roadmap. Of course, you also don't want everyone in your organization thinking about these crazy and out-of-the-box ideas all the time, because then, of course, nothing actually gets built on a daily basis. But you might want a few people that worry about completely different directions. (43:19)

Bringing innovation to your management

Alexey: I heard this story many times – let's say somebody works at Amazon or Facebook or some other company, and they have a problem with something. But because they don't have the freedom to solve this in an innovative way, they need to take these incremental steps. Therefore, what they do is leave and start a startup. In this startup, they try to innovate – to rethink exactly how they do things. Then there is a startup that tries to take one of the things that this big company does, but they do it differently, and then some of these startups become bigger because they saw it in a more original way – in a better way, in a way that users like more. (45:13)

Alexey: I heard many stories like that. Not necessarily Amazon. The reason I brought up Amazon is because I remember listening to an interview of somebody who had a problem with Amazon (with some internal machine learning platform) and then they thought, “Okay, this seems like a problem that many customers have.” So they left Amazon and started the company. So this is innovation, right? When you see a problem and you try to find a new way of solving this problem. (45:13)

Liesbeth: Yeah, I definitely think a lot of startups started that way [laughs] it starts from a personal frustration. The thing that you've encountered that maybe you couldn't solve at the job that you have, and you just say, “Okay, let me just do this myself.” Though, I don't think you always have to leave a company to solve a problem like that. (46:30)

Liesbeth: Maybe my recommendation would be – of course, if you see a big problem, or you think of an innovative solution, it's not as if, when you pitch it to your manager, he will immediately say, “Perfect. Here – have a million and a team. Let's build it.” That would be nice. [laughs] But again, what you can try to do, similar to what I just mentioned about the double diamond, is collect evidence. You can try to collect evidence about why this is really an important problem to be solved and also evidence about why your solution might be the right one. If you just have a little bit of time and the opportunity to, for example, experiment and collect some data that way, then that might help you build a really cool business case. Then you pitch it to your manager, and then you get that million [laughs] and then solve the problem. (46:30)

Alexey: I guess if you're at Amazon, you don't want your people to leave and start a startup, right? You want to keep them in the company and you want to encourage them to do [inaudible] because they don't know, maybe this will become the new, big thing. Therefore you probably want this to happen inside your company and not have a competitor who is doing something similar. (47:46)

Liesbeth: Ideally, yes. And I hope that that is also part of an interesting culture for people to work in – where they feel like, “If I have an idea, and I'm actually able to prove that it might work – maybe even prove that my idea is better than the one that's already on the roadmap – and I have the opportunity to experiment a little bit, then I get the opportunity to go and develop it further.” (48:09)

Liesbeth: It's also a bit like Facebook back in the day – whenever you had a good idea, you would get 10,000 users to test that idea on. That sort of mindset, I think, is very interesting for tech companies to have: if a good idea pops up somewhere in the organization, people get a little bit of opportunity to run with it. Sometimes it works within existing product teams. Sometimes you may need a separate team to work on it, because otherwise it interferes too much with whatever needs to be delivered in a certain quarter. (48:09)

Task force-team approach to solving problems

Alexey: I heard that there is a company in Germany (I think in the whole of Europe) called Zalando that sells clothes. And what they have is some freedom to work on small projects. They see a problem and they form a small task force – the person who sees a problem pitches it to (I don't know to whom they pitch) “Okay, we see this problem. Give us one month to try it out – to see how we can solve it.” And then they would get some people, they would work only for this limited amount of time on this problem and then they would show the results. (49:16)

Alexey: Based on that, they will decide if they want to keep doing what they do and then actually build a team around that, or just everyone goes back to their team. I think this is a pretty cool concept. I don't know how popular it is, but I remember a recruiter from Zalando was pitching this concept to me and I found it pretty interesting. Do you know if this is something companies often do? And do you think this is a good thing? (49:16)

Liesbeth: As I mentioned, there are different ways in which you could make sure that you spend part of your resources on looking a little bit further than just the next three months and maybe going to ideas that are a bit more radical. One of them would be these ad hoc task force-like teams. I’ve also worked with iFood, a food delivery company from Brazil, and they have a similar setup. They call it Jet Skis. So imagine that most of the company is this huge oil tanker that's very difficult to steer and if you steer it, it's just going to make a small movement. Then you have sort of a Jet Ski and it can go anywhere and it can try out new and different ideas. (50:19)

Liesbeth: That definitely works, but it sometimes depends a bit on the organization and also the type of challenges that you have. Sometimes it makes sense to really have a dedicated team, instead of these ad hoc, task force-like, “Oh, we'll spend a few hours a week on this” type of teams. Because sometimes you really want to make sure that you have enough resources, and that there's really a dedicated team with a structured way of working – one that always takes the user side into account and makes sure that your ideas are linked to the overall vision of the company, rather than leaving it up to these smaller teams to hopefully come up with something cool. (50:19)

Alexey: Let me try to summarize – innovation is about finding new ways of solving the problem you already have, or maybe new problems, but not in small steps – instead trying to find a radically different way of doing it. But then you need to give people some time to actually work on this because if 100% of their time is taken up by these incremental things (by these OKRs and other things) they will not have time to work on innovation. Right? So you need to somehow give them space to work on this and this is how you innovate. (54:46)

Alexey: As a tool for innovation, you use these things that we discussed in the first half – double diamonds, design thinking, design sprints – to actually make sure you understand the problem, build a business case, and then present this business case to whoever decides if they want to invest in this team. Then based on that, the company decides what to do. Is that a correct summary? (54:46)

Liesbeth: Definitely. It’s a great summary. I think that makes a lot of sense. The way in which companies do this could be different – it could be a dedicated team, it could be something that people spend their Friday afternoons on in a project-by-project team. But my encouragement would definitely be to try to discuss these types of ideas with your direct team, or maybe the broader data science community. If you see an opportunity for more innovation, or more user-centered product development, or AI application development, try to get together and try to see if there are techniques out there that you want to try out. (52:45)

Innovation, resource management issues, and using data to back your ideas

Alexey: I imagine a different challenge. Let's say I'm a manager and I have a lot of fires to fight. Now somebody comes to me and says, “Hey, I have this amazing idea. We can completely revolutionize the way we're solving this problem.” But I have these fires. I don't have enough people. [laughs] So how can I tell them? Because I probably need to tell them, “Yeah, cool idea. But let's go back to work.” Right? [laughs] So how do I solve the problem here with innovation? Do I need to talk to my manager to ask for more people in the team? Or maybe I need to rethink my priorities? Is there a solution to this at all? (53:33)

Liesbeth: I mean, if you're always putting out fires, then there might be some underlying, bigger issues to fix – maybe around the resourcing because that's not an ideal way of working for anyone. But I do believe that, in order to keep teams motivated, they need to have a little bit of space and a little bit of a say in what they actually work on – about what the products team works on. Hopefully, there's definitely an opportunity for conversation there. (54:11)

Liesbeth: But I'm not sure if there's a one-size-fits-all solution to both work on really cool new projects and make sure that whatever the team needs to deliver actually gets delivered. A solution direction that I believe in is definitely experimentation. If I pitch a really cool idea, but my proof point is me talking to another colleague over a beer, that's not a very convincing idea. If my proof point is, “Hey, I actually very quickly ran a survey and it turns out that 90% of the customers deal with this issue,” or, “Hey, I very quickly changed this button from A to B and through that, I can actually measure that people really clicked on it (so they're really interested in it).” (54:11)

Liesbeth: Then you have a much better story. But in order to be able to quickly do that type of data collection, or that type of experimentation, you need quite advanced capability. That is not possible everywhere. But I believe it's definitely a direction that more mature tech companies need to move into, in order to be able to, together, prioritize all of the resources in the right way. (54:11)
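As a rough illustration of the quick button experiment Liesbeth describes (changing a button from A to B and measuring clicks), one common way to check whether the observed difference in click-through rates is more than noise is a two-proportion z-test. This is only a sketch with made-up numbers, not something discussed in the episode:

```python
import math

def two_proportion_ztest(clicks_a, views_a, clicks_b, views_b):
    """Two-sided z-test for a difference in click-through rates."""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled click rate under the null hypothesis of no difference
    p = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(p * (1 - p) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: variant B gets more clicks per view
z, p = two_proportion_ztest(clicks_a=120, views_a=2000,
                            clicks_b=165, views_b=2000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these invented numbers the p-value comes out well below 0.05, which is the kind of hard evidence that makes a pitch much more convincing than a hallway conversation.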

Alexey: I imagine there could be a situation when you come to a manager with a good idea and the manager says, “Yeah, cool idea. But I don't like it.” Well, maybe not in this direct manner, but I can imagine that something like this happens. “We have other priorities. We have more things to do than we can fit in a sprint – in three months.” (55:54)

Alexey: But I guess the other story would be, if you use a bit of your Friday afternoon to get hard evidence that this is a good solution – through experimentation or something else – then you bring this evidence to your discussion. Then it's quite difficult to say, “Go work on something else.” Right? (55:54)

Liesbeth: Yes, yeah. And maybe one more recommendation for an article to read – this is a recent article by Stitch Fix. They've very nicely described that, as organizations move towards using more data science, AI, and machine learning, everything becomes more measurable as well. Then people find out that their ideas for product directions – if they're based on intuition – sometimes work, but sometimes they also don't work. (56:36)

Liesbeth: So actually, if you do work with data science and everything becomes measurable, you have a much better tool to prioritize based on actual facts and data, rather than someone's gut feeling, which can also be a bit confrontational – it turns out that your ideas don't always work. But for the overall effect that you're trying to achieve, I think that's the direction that you need to be going into. (56:36)

Alexey: It's an interesting feeling when you think an idea works, but then you look at the data, and it doesn't. At first, it's hard to accept, but over time you learn. I think it's a good thing, even when you've put some time into something, to accept that it's not working, instead of trying, “Let me tweak this thing. Let me tweak that thing.” Just accept, “Okay, it's not working. Let’s try something else.” (57:27)

Alexey: It also helps to see hard evidence that “Okay, this is a dud. Don't try to keep working on this thing.” Okay, I think we should be wrapping up. I didn't ask you all the questions I wanted to, but is there anything you want to say before we finish, maybe something you wanted to bring up, but we didn't have a chance to talk about it? (57:27)

Words of advice for those interested in design and innovation

Liesbeth: Yeah. Plenty of interesting topics, right? [laughs] I guess your work – design and innovation – you can keep talking about them for a long time. But maybe one encouragement for anyone listening in who is kind of interested but maybe doesn't know where to get started. I want to say that these types of skills are definitely very learnable. Three years ago, when I joined Prosus, the playbook for successful machine learning projects hadn't really been written yet. (58:20)

Liesbeth: Now, three years later, I think there's so much documentation, so many podcasts like this one, and so many courses out there that describe exactly how to do a successful machine learning project. It will be much the same with these topics related to design and innovation. At the moment, there's no playbook written yet, but you can definitely read about it, see if you can find a course, get familiar with the topic, and maybe you might be the one that's later writing the playbook for how to incorporate design into a data science team. So I definitely encourage people to keep learning about topics like this. (58:20)

Alexey: A good starting point that I think you mentioned is this PAIR from Google, right? (59:28)

Liesbeth: Yes, definitely. Yep. (59:32)

Alexey: Okay, so if somebody has questions, what's the best way to reach out to you? (59:36)

Liesbeth: LinkedIn might be the best channel. (59:42)

Alexey: Okay. Oh, yeah. There is a comment, “Can we have links to learn more?” Maybe you can send us the links to these articles you mentioned. I think you mentioned the TikTok and Instagram algorithm. (59:47)

Liesbeth: Yes. “Seeing Like an Algorithm.” (1:00:04)

Alexey: Then there was another one about Stitch Fix, right? There was something else, and then PAIR. [cross-talk] Maybe if you send this, then I can put it in the description. I think that’s all for today. Thanks again, Liesbeth, for joining us today, thanks for sharing your knowledge. Thanks, everyone, for joining us today as well – for listening in. That’s all for today. (1:00:05)

Liesbeth: Thank you! (1:00:32)
