Season 21, episode 2 of the DataTalks.Club podcast with Lior Barak
The transcripts are edited for clarity, sometimes with AI. If you notice any incorrect information, let us know.
Alexey: This week we'll talk about mindful data strategy and how teams can shift from building pipelines to delivering real business outcomes. (0.0)
Alexey: We will look at how principles from Zen philosophy can help teams work more intentionally and effectively. (0.0)
Alexey: We have a special guest today, Lior. This is not the first time we have had a conversation. Last time we spoke was a few years ago, more than two and a half. It was primarily about hummus, if I remember correctly. (2:24)
Alexey: I don't remember the other things. The funny thing is, today I ate hummus. The one from Revit, which you probably will not approve of. I don't know your opinion about hummus. (2:24)
Lior: I'm already disappointed (2:48)
Alexey: But at least it's some hummus, better than no hummus, right? Yeah. Lior is a data strategy consultant and a former practitioner. Why "former", though? Maybe you'll tell us. He's also the author of Data Is Like a Plate of Hummus, which is why we talked primarily about hummus last time. (2:53)
Alexey: He developed the Data Vision Board and the Data Product Life Cycle Manager, frameworks designed to help teams move from reactive tasks to more strategic long-term thinking. Welcome. (2:53)
Lior: Hi, happy to be here. I think the last time we also discussed something very similar to this topic: efficiency in team structure, how to make sure communication is working well, if I remember correctly. (3:26)
Alexey: I think when I was doing that interview, I was actually a practicing manager. (3:43)
Alexey: So let's see how much of what we talk about today will resonate with me, and let's see if I was actually able to follow your advice. Let's start with your career journey so far. Can you tell us more about that? (3:43)
Lior: So, in the past year I've been working more with small to medium-sized companies while trying to write my second book. When we spoke the last time, we talked about my first one, which was Data Is Like a Plate of Hummus, and I'm working on a new book that takes more of the strategic angle on data. (4:06)
Lior: And I think that this is where I am today. Before that, I was working at idealo as the head of product for the data platform team. Just before that, I launched my own company, Tele Data. The entire idea behind it was to create the infrastructure and the tables in an automated way, which would save a lot of cost for small companies that don't have the scale and the ability to hire engineers. (4:26)
Lior: And before that, I was working for Zalando, leading the entire marketing data operation from the product perspective: setting up, for the first time ever, an infrastructure for data instead of a horrible Excel sheet and Python scripts that break every week when you need to run them. (4:54)
Lior: Then we moved to ROI forecasting, creating a better understanding of the data, and ended up in marketing automation: how you actually steer campaign budgets and make the best decisions. So this is more or less my path, and today I practice a lot of mindfulness in data. I think we're going to have a very interesting conversation today. (5:12)
Alexey: Mindfulness in data. Yes. Okay. That's interesting. So you're more of a product person, if I got that right from your description. (5:40)
Lior: I am. I actually started my career as an engineer. I wasn't as good and I wasn't as fast as other engineers, I must admit it. I moved more towards the analytics side, and then I found myself somehow in product, something like seven or eight years ago, and I've stayed in the product area since. I'm very technical, so if you ask me questions about tools, I will be able to evaluate their architecture as well. But my passion is really to bridge the gap between the business and the tech, and to actually bring everybody together. (5:54)
Alexey: This is not one of the questions we prepared for you in advance, but I recently saw a question in one of the data science communities. It was from an engineer asking the community to recommend some resources about product management. For somebody who works as an engineer or data scientist and wants to learn about product management and the product side of things, do you have any recommendations? (6:25)
Lior: I will be honest. There is Product Camp Berlin once a year, if anybody is interested, and I know they're doing a tour all over the world as well. But there's nothing specific for data product people, because I think this is very different from regular product work. If you're learning about product, it's a lot about users and communication, which is very important for us to know as well. But there is nothing data-specific yet that I'm aware of; I would love to hear about it if somebody finds something. (7:00)
Alexey: Product Camp, is this by the same people? I remember attending a workshop when I was still at OLX. I think it was Mind the Product, something like that. (7:32)
Lior: That is another one. There is also Berlin Buzzwords, which usually happens somewhere in June and also connects tech and data together. So there are different ways to learn, but for data products specifically, I haven't encountered anything. Maybe this is something I can do next. (7:38)
Alexey: Yeah. After this current book, right? What is the title? Do you have the title already? (8:03)
Lior: I have a temporary one for now it's called Wabi Sabi Your Data. (8:08)
Alexey: What? Can you say it again, slower? (8:14)
Lior: Wabi Sabi Your Data. Wabi-sabi is a Japanese concept about accepting imperfections, the perfect imperfections, and this is basically what I think about data. A lot of my philosophy is about how to handle data: it's imperfect, we need to accept it, and we should not always fight to make it perfect before we go forward. I think this is one of the biggest issues we are going through today. (8:20)
Alexey: I see where the mindfulness part comes from: the acceptance, right? So don't fight it, accept it, right? (8:53)
Lior: Be aware of it, accept it, and communicate it. If we need to take one core message out of this conversation today, it is: be aware, accept that this is the reality, and make sure you're communicating it correctly. This is my biggest learning from leading a lot of data initiatives and failing many times as well. We always expect to have this perfect product. We are always on the hunt for the perfect product, the next technology, the next thing, always something next. And actually, we should sometimes stop and accept the reality. (9:00)
Alexey: So how do we do this? (9:40)
Lior: Oh, that's a great question. As we start talking about it, let's talk about the trust crisis, right? (9:48)
Lior: I think there is a very big trust crisis right now in the industry. I saw research a few weeks ago saying that 97% of CEOs mention they are using data, but only 24% say they are data-driven. (9:48)
Lior: This already tells us something about the difference between how data is used in the organization and how it's actually leveraged. (9:48)
Lior: And there is a data trust problem. We are launching products; yes, we have data governance, yes, we have monitoring, yes, we have a lot of tools around it to make sure the data arrives to the users safely. But in reality, the people using this data don't trust it, and this is where the crisis starts. (10:15)
Lior: This is why we need to be aware of it. We need to be aware of where the flaws are. Data will never be perfect. It's enough for somebody to change one tab in the data being ingested into the data warehouse and everything can break. We need to learn how to communicate this better. (10:15)
Lior: And the fact is that we are looking at many issues right now that are related to the trust of people in the data. (10:56)
Lior: "I don't trust the data team because they are in constant migration." I spoke to a CEO recently, and one thing still resonates with me, still stuck in my head: for the past three years, his data team has been running from migration to migration, each migration taking six to eight months. Everything is stuck. (10:56)
Lior: This again is a loss of trust in the data team and sometimes we are not even aware of it. We don't even see the issue because we are trying to achieve a goal but during the attempt to achieve it we actually miss some of the components that need to be there. (10:56)
Alexey: Funny thing, when you mentioned the trust crisis, here's what came to my mind. I have a son, he's nine years old, and he likes to use ChatGPT. He speaks with ChatGPT a lot, and ChatGPT, as we all know, sometimes hallucinates. It comes up with things that don't exist. (11:47)
Alexey: I have a premium subscription, so he was like, "Why should we pay for this tool? We pay them €20 or something like that, and it outputs garbage, it outputs nonsense. How can we ever trust what it says if it sometimes hallucinates?" (11:47)
Alexey: But then my objection was that it's actually a very useful tool, because most of the time it is correct, and sometimes it's not. You just need to, how to say, cook it. You like cooking metaphors, right? You just need to cook it correctly. (12:37)
Alexey: And we just accept the fact that sometimes it hallucinates, sometimes it does not provide the correct answer. (12:37)
Alexey: We must not blindly trust whatever it says. We need to always double check. But overall it's quite a useful tool and if we use it, it is super convenient. It makes our life easier. (12:37)
Alexey: We just need to be mindful of the fact that sometimes it may be super wrong, but that does not mean we should stop. We should continue using this tool because it's super useful. (12:37)
Alexey: I was also trying to explain how the model works: it's a language model, and all it does is try to predict the next word. With time, people noticed that it's actually something useful, so why not start using it? Because most of the time the output is good, and sometimes it's not. (13:31)
Alexey: So I think it's maybe a very interesting example of a trust crisis: the tool is useful, but it was wrong a few times, so people think it's useless and we should stop using it, while that's not the case. It's a useful tool; we just need to accept the fact that it might be wrong sometimes. (13:31)
Lior: Correct. And you know, I always explain it like this: data is like Lego bricks. We can connect them and build a lot of buildings out of them. (14:09)
Lior: Sometimes we come across a fake Lego brick that we bought not from Lego themselves but from some other generic brand. It looks like Lego, but it doesn't connect, it's not working well, right? It doesn't sit there as it should, and that's exactly the problem with data. (14:09)
Lior: So we are building a castle out of Lego bricks, and sometimes some of these bricks are not going to be the original ones, and this is something we are going to feel in the rest of the construction. And it happens, we cannot change it, right? Maybe we even need this brick, because we don't have enough original bricks and we want to build a very big building. (14:40)
Lior: But we need to find the right balance between how much we are willing to trust and invest in the bigger picture, this huge castle, and how much we say, okay, now we are stuck and we are waiting until we find a replacement for this brick, which can take another week until we order it, receive it, and open the package. This is something that we need to be aware of. (14:40)
Alexey: Has it actually happened to you, the Lego brick thing? (15:11)
Lior: Oh yes. My kids have a huge box, and they had one brick that was supposed to be Lego but wasn't. It didn't connect, and they were sitting there frustrated, trying to connect it to something. (15:21)
Alexey: Okay. Because here's what happened to me. I don't know if you're into Lego, but I built the Millennium Falcon, which is one of the biggest Lego sets there is. And then a part was missing. So I had a choice: go to a store and buy a non-original one, or wait two weeks until they sent a replacement. (15:33)
Alexey: I ordered the replacement because it was free, but then I also went to a store and just bought a part. The part didn't actually fit, so I had to use a saw and manually adjust it to make it fit, but in the end it worked. By the time the replacement part arrived, I already had the entire set assembled. (15:33)
Lior: That's cool. (16:23)
Alexey: Yeah. So it didn't fit, but with enough perseverance I managed to make it work. I will take it. (16:28)
Lior: I will keep it for the next time, maybe for the next crisis. (16:34)
Lior: But again, this is the same with data. When you think about it, we sometimes manipulate the data so heavily that it fits the pattern we need, just so we can continue building. And then somebody comes and says, "But hold on, the color is not the same, I can notice this is not an original brick," and then what do we do with it? (16:41)
Lior: I think this is really where a lot of it goes. At the end of the day, we trust it, and we want this construction to hold. (16:41)
Alexey: And we need it to look good at the end of the day as well. So what do we do with this trust crisis? In my case, I tried to explain to my son that this tool is useful; we just need to be mindful of its limitations. (17:02)
Alexey: Do you suggest the same when you speak with CEOs who perhaps have these trust issues with the data? How do you convince them that they still need to rely on data? (17:02)
Lior: I always say it's like this: a lot of times when I arrive at companies, it's like a basketball team that everybody has lost hope in, and they keep losing. They know they have the capabilities and the right people to actually pull off the season and make it successful. The owners are already frustrated, the fans are frustrated with them, and they are just in this low moment. (17:32)
Lior: And I think that what we need to do is have much better communication. What I realized is that the default for a lot of us is to reach for tools. (17:32)
Lior: Okay, so now we need a monitoring tool. Now we need a new governance system to do the catalog and the lineage and so on. And we keep adding tools instead of actually stopping for a second and stepping back. (18:04)
Lior: This is where we go into mindfulness and reflect on why we have this crisis of losing trust. What are the main drivers of it? Is it specific products? Is it specific things? How do we actually create the right environment for us to reflect and understand? (18:04)
Lior: And I think this is something we are missing a lot. It is also something that will help the CEOs understand where the problems are, because when we come and say, okay, we need a new tool now, most CEOs already freak out. (18:37)
Lior: It's like: why do they need another tool now? I'm already paying 500,000, one million, two million dollars for the data team, and now we're adding another tool to it, another 100,000 or 200,000 in overhead. (18:37)
Lior: What is actually the end goal of it? How do we actually create something? (19:04)
Lior: And I think, again, this is why we create this crisis: we don't communicate clearly. We don't really prioritize and clarify what is important and what is not, and we don't talk the language of impact. So if we think that a data quality tool is important and going to help us, then why, and how is it going to impact the business goals? (19:04)
Lior: Is it going to improve the stability of the marketing data, so they will be able to make better decisions and steer the budget? (19:04)
Lior: By that, we are going to save between 20,000 and 50,000 euros a month on marketing campaigns that are wasting our money. Now we are talking monetary value and having a different conversation: people look at it not as a trust issue but as a communication gap. (19:40)
Lior: And now let's see how we can figure this one out. (19:40)
Lior: We should get out of the crisis, out of the chaos. Or rather, we should ride the chaos and flow with it: accept it and continue from there. (20:05)
Alexey: To me it sounds a bit abstract. Do you have some examples to make it more concrete? Let's say we have this trust crisis, but then you were saying that we don't necessarily need another tool. (20:18)
Alexey: We need to think about what is important and what is not. We need to take a step back. Do you have an example of such a situation? I'm just trying to visualize this, and having a concrete example will help me understand it better. (20:28)
Lior: Think about the core KPI dashboard for management. They need it daily to understand how the business operation goes. Let's say it's an e-commerce company selling shoes. The KPI could be the number of sales, for example. (20:50)
Lior: This is not related to any specific company, just the core principle here. The dashboard exists, and management needs to use it. When there is a lack of trust, what happens in many organizations is that the CEO asks the CFO, “Are the numbers accurate? Can I trust them?” (21:09)
Lior: Then the CFO goes to the analyst. The numbers include revenues, sales, stock availability, and so on. The CEO doesn’t have time to supervise this directly, and this is where the crisis starts. (21:26)
Lior: If the CEO needs to check every day whether the numbers are correct, and whether they can make decisions based on them or have to second-guess themselves, that is a big problem. (21:26)
Lior: We want to make sure the core KPI is correct and delivered reliably to those who use it. What we need to check is what will improve this. Should we invest in an extra tool, or check where the issues are in the data pipeline or lineage: is the problem at the start, the middle, or the end? (22:02)
Lior: Maybe the SQL code that computes the data for the dashboard is broken, or ingestion is failing because a team isn't ingesting the data correctly. (22:31)
Lior: We need to be much more aware of these patterns. A tool might be the easy solution, but implementing it can take six to eight months. It could be that the product team is ingesting incorrect data daily and constantly changing the data structure, breaking reports downstream, all the way to the dashboard. (22:43)
Lior: This is where data trust is lost in most cases. (23:07)
Lior: It’s usually not the tool malfunctioning, but failures in some process components that we are not aware of or don’t know how to fix. We often default to software solutions: rebuilding pipelines or adding monitoring tools on data ingestion. (23:07)
Lior: But that is the easiest, not the core, solution for building trust. (23:26)
Lior: The CEO might still not trust the dashboard because of past bad experiences with data quality. (23:26)
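To make this concrete, here is a minimal sketch of the kind of ingestion check Lior is describing: catching schema drift and silent volume drops before they reach the dashboard. The column names and thresholds are invented for illustration; a real check would use your own expected schema.

```python
import pandas as pd

# Hypothetical expected schema for a daily "orders" extract (names invented).
EXPECTED_COLUMNS = {"order_id": "int64", "order_date": "object", "revenue": "float64"}

def check_ingested_batch(df: pd.DataFrame, min_rows: int = 1000) -> list[str]:
    """Return human-readable problems found in one ingested batch."""
    problems = []
    # Schema drift: a producing team added, removed, or retyped a column.
    missing = set(EXPECTED_COLUMNS) - set(df.columns)
    extra = set(df.columns) - set(EXPECTED_COLUMNS)
    if missing:
        problems.append(f"missing columns: {sorted(missing)}")
    if extra:
        problems.append(f"unexpected new columns: {sorted(extra)}")
    for col, dtype in EXPECTED_COLUMNS.items():
        if col in df.columns and str(df[col].dtype) != dtype:
            problems.append(f"column {col} changed type to {df[col].dtype}")
    # Silent volume drop: far fewer rows than usual is a red flag too.
    if len(df) < min_rows:
        problems.append(f"only {len(df)} rows, expected at least {min_rows}")
    return problems
```

A check like this catches the "someone changed the data structure" failures early, before a wrong number ever reaches the CEO.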
Alexey: Why does the CEO ask the CFO if the numbers are correct? Is it because the numbers look strange or because of previous problems with incorrect data? (23:50)
Lior: There are many cases. Sometimes teams present different numbers. For example, marketing presents numbers different from the core KPI dashboard, creating a crisis. (24:11)
Lior: Another case is when the CEO used numbers that were later proven wrong: someone notified him, a month after the fact, that the numbers he showed to investors were not accurate. They were completely wrong, and this happens a lot. (24:11)
Lior: As data people, we often see these problems but don’t act fast enough to solve them or to make people feel more trustful of the data. (24:42)
Alexey: So trust is already lost. The CFO and CEO don't believe the data. We need to do something. (24:54)
Lior: Let me ask you, if someone gave you a dashboard about your podcast listeners, would you automatically trust the data? Or would you validate it first? (25:06)
Alexey: For Spotify, I automatically trust it because it’s the only data source I have. Having a number is better than no number. (25:18)
Lior: That's the pragmatic approach. But if you run a business, you want numbers that create the right impact. For instance, with a podcast, when you approach advertisers, you want to give numbers that convince them it’s worth investing in ads. (25:35)
Lior: This also applies to CEOs who need to prove to investors that the business is functioning well and heading in the right direction. Investments often come in chunks based on milestones, so you need credible numbers to justify them. (26:00)
Lior: Otherwise, misreporting numbers can lead to accusations of fraud. This is why CEOs ask CFOs to validate numbers and ensure they match what’s in the bank. (26:30)
Alexey: Otherwise, auditors might come and close the company. (26:49)
Lior: Exactly. There is a case in the US where a CEO reported sales numbers ten times higher than reality due to a database error where figures were multiplied by about ten. (26:54)
Lior: After discovering it, he kept reporting the inflated numbers because he couldn’t go back on what he reported. (26:54)
Alexey: So he knew about the mistake but continued to report inaccurate data? (27:28)
Lior: Yes, because he didn’t have a solution. That’s a very bad case we should avoid. (27:35)
Lior: When a CEO realizes the data was drastically wrong, he needs to explain to investors why the error wasn’t found earlier, what is being done to prevent it, and how the numbers will be accurate going forward. (27:35)
Lior: This also affects future investment decisions. (28:02)
Alexey: How do we regain trust? (28:07)
Lior: Regaining trust requires a mindful approach to data management and better control over what we do. I see three main components in day-to-day work: maintenance, product rollouts, and product innovation. (28:12)
Lior: In all three, we must focus on the end user: the data consumer. What do they need first? What does a minimum viable product (MVP) look like for reporting? We then work closely with users long term, especially during maintenance, understanding where issues come from, their frequency, and root causes. (28:38)
Lior: Many teams experience data incidents, but few use the data from those incidents to identify recurring problems. If, for example, three core products cause 60-70% of incidents, we can focus on fixing their root causes. (29:16)
Lior: Knowing where issues happen and how often helps communicate to teams the impact of their work. This transparency creates a shared reality and easier alignment across the company. (29:16)
Lior: The dashboard is no longer seen as “broken because of you,” but a problem to solve collaboratively because incorrect data risks losing investments. This is the first part of regaining trust. (30:10)
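A sketch of the incident-pattern analysis just described. It assumes you can export tickets with a product and root-cause field; the data below is made up for illustration.

```python
import pandas as pd

# Made-up incident tickets; a real export would come from Jira or similar.
tickets = pd.DataFrame({
    "product": ["kpi_dashboard", "clv_model", "kpi_dashboard",
                "orders_ingest", "kpi_dashboard", "clv_model"],
    "root_cause": ["schema_change", "stale_data", "schema_change",
                   "api_timeout", "broken_sql", "stale_data"],
})

# Share of incidents per product: a few products usually dominate.
share = tickets["product"].value_counts(normalize=True)
print(share)  # kpi_dashboard accounts for 50% here -> focus there first

# Most frequent root causes for the worst offender.
worst_product = share.index[0]
print(tickets.loc[tickets["product"] == worst_product, "root_cause"].value_counts())
```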
Lior: Second, for the CEO, we can add a traffic light indicator on the dashboard: green, yellow, red. (30:47)
Lior: Green means data is reliable, yellow means data can be used but there are known issues, red means data is broken and not trustworthy. (30:47)
Lior: This shifts communication from the CEO asking “Is the data correct?” to “Here’s the current status.” (30:47)
Lior: When yellow or red occurs, the team explains the problem and estimated fix time. (31:31)
Lior: If, following investigation, the data is verified, yellow can turn back to green. If issues persist, it moves to red until fixed. (31:31)
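The traffic light can be as simple as a status derived from currently open data issues. A minimal sketch, with made-up severity rules:

```python
from enum import Enum

class DataStatus(Enum):
    GREEN = "data is reliable"
    YELLOW = "usable, but there are known issues"
    RED = "broken, do not trust"

def dashboard_status(open_issues: list[dict]) -> DataStatus:
    """Derive the dashboard badge from the list of open data issues."""
    if any(issue["severity"] == "critical" for issue in open_issues):
        return DataStatus.RED
    if open_issues:  # non-critical issues still under investigation
        return DataStatus.YELLOW
    return DataStatus.GREEN

print(dashboard_status([]).value)                          # green: all clear
print(dashboard_status([{"severity": "minor"}]).value)     # yellow: known issue
print(dashboard_status([{"severity": "critical"}]).value)  # red: broken
```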
Lior: From my experience talking to data teams, the biggest gap is communication, not technology. (33:18)
Lior: We often tell data users to find errors themselves because “they know their data best.” (33:18)
Lior: This causes lost trust, as users spend 2-3 hours daily validating data instead of working on features or campaign optimization. (33:18)
Lior: If users can tell us patterns of data quality, we can automate the status indicators — the traffic light system. (33:18)
Alexey: Imagine an analyst sees a data discrepancy while doing their work. They flag it to the product manager to investigate. (34:39)
Alexey: Then the data team changes the dashboard traffic light from green to yellow, showing a potential issue being investigated. When the analyst provides concrete examples, the status might move to red until resolved. If a clear communication channel exists, this keeps all stakeholders informed that data is under investigation, not to be blindly trusted yet. (34:57)
Alexey: Is that how it works in practice? Or have you seen other ways to set this up? (35:42)
Lior: There are many automation approaches. For example, analysts check data and create logs during ingestion that are stored in a simple database, like PostgreSQL. (35:47)
Lior: Daily scans compare logs against trends to flag anomalies. Analysts can then deep dive into suspicious data, adjusting the dashboard status as necessary. This provides an easy-to-understand system for all users. (36:05)
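A sketch of that log-and-scan loop. SQLite stands in here for the PostgreSQL store Lior mentions, and the row counts are invented; the point is comparing today's ingestion against the recent trend.

```python
import sqlite3
import statistics

# SQLite as a stand-in for the PostgreSQL log store; numbers are made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ingest_log (day TEXT, dataset TEXT, row_count INTEGER)")
counts = [10100, 9900, 10250, 9800, 10050, 4200]  # last value looks suspicious
conn.executemany(
    "INSERT INTO ingest_log VALUES (?, ?, ?)",
    [(f"2024-05-{d:02d}", "orders", rc) for d, rc in enumerate(counts, start=1)],
)

# Daily scan: flag today's volume if it deviates strongly from the trend.
rows = [r[0] for r in conn.execute(
    "SELECT row_count FROM ingest_log WHERE dataset = 'orders' ORDER BY day")]
*history, today = rows
mean, stdev = statistics.mean(history), statistics.stdev(history)
if abs(today - mean) > 3 * stdev:
    print(f"ANOMALY: {today} rows vs ~{mean:.0f} expected -> set status to yellow")
```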
Lior: Data consumers don’t care about backend complexity; they just want reliable data. We must simplify communication to meet that need. Similarly, when a data incident occurs, users care about resolution, not the technical details. (36:38)
Lior: If data loss costs the company €200,000 to fix, we need to communicate clearly why and how the money is being spent. (36:38)
Lior: Based on my experience, transparency and communication are the biggest gaps, not the tools. (37:29)
Alexey: If you start a project from scratch, a greenfield, you can design this traffic light system from the beginning. (37:41)
Alexey: You create the dashboard with KPIs, and the traffic light is integrated from day one, which works well. (37:41)
Alexey: But in most cases, there’s already an existing product, dashboard, and trust issues. What do we do then? (38:07)
Lior: We start by identifying patterns. One key area is team stress levels and how their efforts divide among maintenance, innovation, and rollout. (38:19)
Lior: Maintenance involves existing products used by users; we must track tickets, identify patterns, and prioritize fixes. (38:37)
Lior: Sometimes, products cause recurring issues and should be deprecated to focus efforts more effectively. New rollouts require user growth and integration with the traffic light system. (38:37)
Lior: Teams should analyze historical tickets to understand issues. If past data isn’t available, start now; 30 to 60 days are enough to get meaningful insights. (39:21)
Lior: This gives a good indication of product health and where trouble lies. (39:21)
Alexey: You’re talking about the data team stress index, right? (39:51)
Lior: Yes, that’s part of it. (39:54)
Alexey: I’m trying to understand how to implement this practically. Teams spend time on maintenance, rollout, and innovation. (39:58)
Alexey: As a former data scientist, I imagine people want to focus on innovation rather than maintenance or rollout. (40:09)
Alexey: Maintenance is fixing bugs, which is acceptable, but rollout isn’t very fun either. Yet these are necessary. (40:26)
Alexey: Does the stress index go up when teams get stuck mostly in maintenance and rollout with little innovation? (40:26)
Lior: Maintenance is often higher than people realize, driven by multiple factors. (40:57)
Lior: We keep releasing new products but ignore ongoing maintenance and support. (40:57)
Lior: Healthy maintenance should be about 45 percent of the time; bug fixes, ad hoc requests, and even meetings count as disruptions. Innovation usually gets 10 to 20 percent of the time; some teams reach 30 percent, but it’s often unclear how much they should allocate because maintenance constantly interrupts them. (41:21)
Lior: Rollout closes the loop: after innovating and creating something new, you ensure users actually use and like it, not just that project completion is ticked off. Maintenance over 45 percent already puts teams under stress. (42:16)
Lior: Adding innovation targets raises stress further because maintenance pulls them back, making task completion difficult. (42:47)
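A back-of-the-envelope version of this effort split, using the thresholds from the conversation. The hours and categories are made up; a real version would come from ticket time tracking.

```python
# Hypothetical hours logged over the last 30-60 days, by work category.
hours = {"maintenance": 310, "rollout": 120, "innovation": 70}

total = sum(hours.values())
split = {category: logged / total for category, logged in hours.items()}
for category, share in split.items():
    print(f"{category}: {share:.0%}")  # maintenance 62%, rollout 24%, innovation 14%

# ~45% maintenance is the healthy ceiling mentioned above.
if split["maintenance"] > 0.45:
    print("Warning: maintenance above ~45%, the team is likely under stress")
```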
Lior: For example, a team built a forecasting model for marketing CLV (customer lifetime value), a valuable feature. They gave it to the marketing and product teams, who started using it; eventually, the model broke. (43:12)
Lior: No one monitored who the users were or how they used it; no one maintained it long term. (43:12)
Lior: This leads to chaos, collapse, and loss of trust. Once trust in a model is lost, it’s very hard to regain. (43:51)
Alexey: One important thing I noted is that a healthy level of maintenance is around 45 percent. If you only innovate all the time, you don’t have time for maintenance. You just keep releasing products and don’t fix the problems in them. We all know fixing problems isn’t fun; creating new things is more enjoyable. (44:12)
Alexey: But we need to take care of existing things too. If all we do is innovate, bugs and inconsistencies build up. That leads to lost trust. On the other hand, if we spend too much time on maintenance, we can’t innovate anymore. That’s also a problem. (44:39)
Lior: Correct. (44:59)
Alexey: Okay, clear. Now, the first question I had on the list, which we kind of diverged from, is about you describing yourself as a data Zen master. (45:09)
Alexey: What are these Zen principles? We talked about mindfulness, but in practice, how do you achieve Zen when dealing with all these problems? How do you make peace with yourself and the teams around you? (45:27)
Lior: We talked about wabi-sabi, the concept of imperfection. I bring this key concept to many teams. (45:47)
Lior: We need to accept that whatever tool or product we innovate and roll out won’t be perfect long term; it will become imperfect at some point. It’s about being aware of what you’re doing, understanding its impact, and balancing short-term and long-term actions. (45:52)
Lior: It means planning a direction for a journey, like having a vision for the next three years of how the data ecosystem will look. While we might commit to innovating 50 initiatives this year, we won’t achieve all of them, because disruptions happen. (46:18)
Lior: It’s about mindfulness and acceptance. It’s not about teams becoming completely relaxed or meditative. (46:39)
Alexey: The data is wrong... but what can we do? (46:58)
Lior: We’re just here for the paycheck, right? (47:04)
Alexey: It’s broken, but... (47:09)
Lior: We have many options to handle this. The main idea is to be aware. (47:15)
Lior: For example, my late grandma used to make an amazing strawberry cake. The strawberry season lasts only about a month and a half. (47:15)
Lior: It took her about three days to prepare the cake. When it was ready, she’d make sure to slice it carefully so everyone got a piece and enjoyed it. She said, “Enjoy the bites, because it won’t come again until next year. I don’t freeze strawberries; I only use fresh ones.” (47:33)
Lior: This is like how data teams should think. There is a limited cake of hours we can invest. We must slice it carefully to deliver the right impact for the organization. (47:33)
Lior: At the same time, we need to enjoy the process, not be in constant burnout, rushing to do everything. We shouldn’t always say, “It’s broken, it’s not the right cake.” We need to enjoy the strawberry cake. (48:20)
Alexey: There’s no way around it with you. It’s always about cooking, huh? Does it have to do with cooking? Sorry, wabi-sabi sounds like wasabi. (48:49)
Lior: It sounds like it, but it’s not. It’s a concept of imperfection accepting reality as it is. It’s about riding the flow, accepting the imperfections, enjoying the seasons, just like going outside and seeing the changing seasons. (49:01)
Alexey: You live in Berlin, right? (49:29)
Lior: Just outside Berlin. (49:31)
Alexey: In Berlin, if people want to swim in the sea, they go to the Baltic Sea, which is super cold and the only sea nearby. (49:36)
Alexey: So they either enjoy that sea or don’t enjoy the sea at all, right? (49:49)
Lior: Similarly, there is data, and people need to use it. That’s not going to change. (50:08)
Lior: With large language models and generative AI, demand for data will increase massively. (50:08)
Lior: Some teams hide from this because of “garbage in, garbage out” fears. But we need to fix these issues, and we can’t fix them until we are aware of where the problems are and how to address them. For me, that awareness and acceptance, that mindfulness toward data, is what Zen means here. It’s philosophy, not religion. (50:14)
Alexey: You mentioned accepting that a tool or product isn’t perfect. You launch an MVP acknowledging data might be off, but you have short-term and long-term plans to improve it. (50:47)
Alexey: If the project continues, here’s how it will improve step by step. How do you know when it’s good enough, ready to be shown? When do you decide the imperfect data is still useful enough to make decisions? (51:16)
Lior: We know by measuring impact. This is not spiritual; it’s about what works and what creates value. If the MVP already supplies value, even if ROI is negative but improving, we did something good. (51:41)
Lior: In the long term, data products have a lifecycle: development (usually negative ROI), rollout (people start using it and ROI improves), maturity (value peaks), and then decline (ROI drops, user satisfaction lowers). (52:04)
Lior: At decline, it may be time to kill the product and try something new. If the MVP produces value, whether cost savings or revenue generation, it’s a success. (52:58)
Lior: For example, a new recommendation engine might improve conversion by 2.5%. That increase translates to real revenue impact and can be forecasted long-term. (53:34)
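As a worked example of translating such an uplift into money: all the shop numbers below are invented, and the 2.5% is treated as a relative lift in conversion rate (an assumption; it could also be read as percentage points).

```python
# Invented shop numbers; only the structure of the calculation matters.
monthly_visitors = 400_000
baseline_conversion = 0.03      # 3% of visitors buy
avg_order_value = 80.0          # euros
relative_uplift = 0.025         # recommender lifts conversion by 2.5%

baseline_revenue = monthly_visitors * baseline_conversion * avg_order_value
monthly_gain = baseline_revenue * relative_uplift
print(f"monthly gain: {monthly_gain:,.0f} EUR")   # 24,000 EUR

# Compare the gain against running costs to judge the lifecycle stage.
monthly_cost = 15_000.0
roi = (monthly_gain - monthly_cost) / monthly_cost
print(f"monthly ROI: {roi:.0%}")                  # 60%
```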
Alexey: In your example about customer lifetime value prediction, it went through all those lifecycle stages, right? At the start, a prototype could already make it possible for users to take action based on its output. (54:07)
Alexey: Most suggestions lead to positive ROI, even if sometimes the model isn’t perfect. (54:55)
Lior: I have seen cases where a model was what users needed but the cost was higher than revenue generated, so ROI was negative. You need to evaluate if it makes sense to continue. (55:13)
Lior: The CLV model reached maturity and was valuable, but eventually declined because maintenance and optimization efforts stopped. This applies to any product: if you don’t keep maintaining and listening to users, people will drop it. (55:38)
Alexey: We have many audience questions. Do you need to leave soon or can we continue? (56:12)
Lior: I can continue, let's do it. (56:14)
Alexey: Here is an interesting one about legacy monolith databases that remain the core of the business: how do you handle trust issues when these systems are already broken? (56:19)
Alexey: Legacy projects often can’t be replaced easily. How do you find root causes and solve trust? (56:44)
Lior: Legacy systems are tricky. I was at a company recently where a legacy system automated campaign budget steering. It was built by someone who left, and it started to fail. (56:55)
Lior: The replacement team tried maintaining it but couldn’t fix it. We worked together to calculate the cost of errors and decided to stop accepting the broken product as is. (57:21)
Lior: We agreed to do only minimal maintenance and focus on building a new product to replace it so the company could move forward. (57:40)
Lior: Legacy systems are often dead weight, but we carry them around trying to survive. At some point, we must stop and replace. (57:40)
Lior: At Zalando, when I arrived, there was a Python script that required six people to manually copy-paste data daily from Excel sheets. It was a huge operation that didn’t work well and only produced value once a week instead of daily. (58:06)
Lior: We stopped fighting to keep it going and built a new data warehouse using PostgreSQL with simple APIs, then connected Tableau dashboards. (58:40)
Lior: People loved the drag-and-drop dashboards compared to heavy Excel sheets that crashed frequently. When replacing legacy systems, we must focus on the impact and added value we deliver to users. (59:11)
Lior: This impact is the selling point for replacement. (59:11)
Alexey: Legacy systems can seem good enough until they break, and then it’s hard to fix them. (59:34)
Lior: Sometimes people fall in love with legacy systems and refuse to move on. (59:46)
Alexey: Like a genius wrote it overnight, but nobody remembers how, and they’re still proud. (59:53)
Lior: Yes. We need the right balance. Ultimately, it’s all about impact: communicating the current bad impact and the better impact of a replacement. (59:59)
Lior: Make sure the new product’s impact is much better than the old one. (59:59)
Alexey: Thanks. Another question: how do you handle ad hoc requests from executives? (1:00:23)
Lior: Say no. Just kidding. (1:00:28)
Lior: With executives, you need to understand why they need it and what impact they expect. (1:00:28)
Lior: Quantifying the value changes the conversation drastically. When teams talk in the CEO’s language, CEOs start reacting differently. (1:00:59)
Alexey: Thanks. Last question: you started in development, moved to analytics, then product. How can someone tell if they are more suited for analytics or engineering? (1:01:44)
Lior: It depends on what you feel comfortable with. (1:02:08)
Lior: Engineering is about solutions. I enjoyed solving problems, but honestly, it wasn’t the most fun part for me. (1:02:15)
Lior: Analytics is about refining problems and providing solutions. (1:02:15)
Lior: Product is about defining and scoping problems, then passing them to engineers for solutions and refining based on feedback. (1:02:33)
Lior: Think about what you enjoy more: the problem space or the solution space? (1:02:33)
Lior: In the solution space, do you prefer working on explaining problems or solving them? (1:02:59)
Alexey: So it’s about mindset what you prefer doing. (1:03:12)
Alexey: I was always an engineer. I tried product management courses, like Mind the Product workshops. I took a Udacity nanodegree on product management. One module covered scrum and team processes. (1:03:18)
Alexey: I asked a product manager if that’s really what they do, and they said yes all day long. I realized I don’t want to do that; I’d rather just build things. (1:03:42)
Lior: It gives perspective. Many engineers want to move to product and vice versa. (1:04:01)
Lior: Product is about problems, talking to stakeholders, creating tickets, and accepting that maybe only 20% of tickets will be relevant next month. (1:04:09)
Lior: Mindfulness and acceptance are key: it won’t always work perfectly. (1:04:09)
Lior: Engineers also need to accept that some code won’t be accepted, but at least they’re solving problems. (1:04:30)
Alexey: Okay, thanks so much for the time and answers. Lovely talking with you. When you write your third book after Wabi-Sabi, you’re always welcome as a guest again. (1:04:36)
Alexey: Maybe sooner than three or four years! We’ll always need your advice on hummus. No way around that. (1:05:02)
Lior: I still go to restaurants, because most supermarket hummus in Berlin isn’t just chickpeas; it has a lot of oil. I prefer it pure. But the easiest is to buy a can of chickpeas, cook them a bit, add tahini, mix it, and you have better hummus than most supermarket versions. (1:05:28)
Alexey: Next time we speak, I’ll report on my progress making hummus at home. (1:06:05)
Alexey: Thank you! Amazing talking with you and thanks everyone for listening today. (1:06:05)
Subscribe to our weekly newsletter and join our Slack.
We'll keep you informed about our events, articles, courses, and everything else happening in the Club.