DataTalks.Club

What Data Scientists Don’t Mention in Their LinkedIn Profiles

Season 3, episode 9 of the DataTalks.Club podcast with Yury Kashnitsky

Did you like this episode? Check other episodes of the podcast, and register for new events.

Transcript

Alexey: This week we will talk about failures and things in our CVs that we typically don’t talk about. We have a special guest today, Yury. Yury is currently working as a senior machine learning scientist at Elsevier. His main focus at work is natural language processing. You might already know Yury from mlcourse.ai, which is an awesome machine learning course. You can go check it out. Yury is the mastermind behind this course. A couple of months ago, we had an interview with Ksenia about transitioning from project management to data science. She mentioned that course. On that day, I thought it was a good idea to invite Yury to this podcast as well. Finally, this day has come. Hi Yury, welcome. (1:30)

Yury: Hi, Alexey. It’s very cheerful here in The Hague. I am pretty happy to join you this Friday afternoon. I’m always fond of talking about careers and all this stuff. I’m happy to share this experience. Although the topic is a bit controversial — it might hurt the marketing interests of some of the companies. I’ll be careful here, so I don’t promise that I will answer all of the questions as openly as possible. (2:32)

Yury’s background

Alexey: We’ll start with your background. Can you tell us about your career journey so far? (3:05)

Yury: I lived in Israel for four years, then a year and a half in Canada. Then I went back to Russia. So I went around the world in my childhood. I grew up in Russia and I was fond of aviation, and thus I joined the best Russian technical university, the Moscow Institute of Physics and Technology, the aviation department. In the meanwhile, I realized I was fond of programming. Then I watched Andrew Ng’s course about machine learning. Then I switched to business intelligence, and from business intelligence to PhD studies — I was full-time in academia. That was a crazy time but I do not regret it. (3:13)

Yury: Then I joined the Russian IT giant Mail.Ru Group as a data scientist. I always searched for a better work-life balance. That’s why we moved to the Netherlands — through a telco operator. After that I joined Elsevier. It’s a good place to combine research with industry work — a nice compromise between academia and industry. I’m also fond of quantum computing and quantum machine learning. I’m really a nerd, in a good sense. I have a small child, a daughter, who will turn a year and a half soon. She replaced all of my hobbies — I cannot do hobbies anymore.

Alexey: So you removed your “hobbies” section from your CV? (4:58)

Yury: Yeah. No more hobbies. (5:04)

Alexey: If we ever have an episode about quantum computing, I know who to invite to talk about this. (5:07)

Yury: I’d invite more serious guys. (5:15)

Alexey: Okay. Did you find the work-life balance you were looking for? (5:20)

Yury: Absolutely — here in the Netherlands they don’t die at work and I enjoy that. (5:27)

Failing fast: Grammarly for science

Alexey: The topic today is “things we don’t share in our CVs”. We all have stories about failed projects. I have a couple and maybe I will even tell one or two today. But since you are the guest, I will first ask you for a couple of stories. One pattern I observed in my career is that we as data scientists tend to spend a lot of time on projects that do not go anywhere. We work for a couple of months or something and then the project turns out to be useless. We just waste time. Do you have any stories like that in your experience? (5:35)

Yury: One of the recent stories is a side project for a proofreading service. If you want to improve English in your paper, you can order this service. It’s pretty expensive. The project was to automatically assess the language quality in a document. Like Grammarly, but more in the scientific domain. We already had some solution when I joined. And I realized that the classifier that my colleagues built was very close to noise. It was a binary classification of whether an article is well written or badly written — that was close to noise when accurately measured. (6:22)

Yury: The proofreading service had a large dataset with original paragraphs and then rewritten paragraphs — clumsy phrases are corrected. You can train a sequence-to-sequence model that would rewrite the original sentence. But that’s rocket science — you cannot really expect such a sequence-to-sequence model to produce well-structured language. My colleagues worked on a binary classification task, and I turned it into a regression problem — you can measure some distance between the pre-edited and the edited version of the text, for example, the Levenshtein distance. You can predict this distance with a BERT model — there are four or five fine-tuned language models available. I ran a couple of experiments with a BERT regressor to predict this distance — how much a paragraph needs to be edited.
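The regression target Yury describes can be sketched in a few lines — an illustrative sketch, not Elsevier’s actual code: compute the Levenshtein edit distance between the pre-edited and edited paragraph, then normalize it so the model predicts “how much this paragraph needs to be edited”.

```python
# Sketch: turning (original, edited) pairs into a regression target.
# A normalized Levenshtein distance approximates editing effort.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    if len(a) < len(b):
        a, b = b, a                       # iterate over the longer string
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def edit_ratio(original: str, edited: str) -> float:
    """Regression target in [0, 1]: 0 means no edits were needed."""
    denom = max(len(original), len(edited), 1)
    return levenshtein(original, edited) / denom
```

A BERT regressor would then be fine-tuned to predict `edit_ratio` from the original paragraph alone.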

Yury: We organized a couple of annotation experiments between me and my colleague, an American native speaker. We went through 50 examples each. The conclusion was that our model was about 60% accurate, in terms of precision. The idea was to highlight badly written paragraphs — and such a highlight would be just 60 percent precise. Considering that it’s a BERT model, which is just a black box, I insisted on closing the project.

Yury: The problem with the project was that it was prematurely advertised. The company was eagerly waiting for a solution. Even the data scientists, my colleagues, promoted this model as “LaQuA” — language quality assessment. They called it the “LaQuA brain”, like introducing AI to your project. That was a silly thing to do — you should not advertise your model so hard unless you know that it’s brilliantly good. It just heats up the AI hype.

Yury: It’s a story about failing fast. I summoned all our product owners and my boss, and recommended dropping any attempts to build such a service. These are things that cannot be done with one mid-level data scientist, one junior data scientist, and maybe one senior. For this task you need a group of linguists. The Grammarly team is pretty large, and they have built their service over several years. So my recommendation was to use third-party tools.

Alexey: This actually was the opposite example — instead of spending four or six months working on something, you noticed it pretty early, then you got all the stakeholders together and told them “Hey folks, this is not going anywhere, let’s stop it”. (10:42)

Yury: Exactly. That was a hard decision because those were my first months in the company. Nobody knew me, and I showed up and said “Hey guys, we need to close this project, it leads nowhere”. I had to think this presentation through. I spent the whole day creating this presentation — it was pretty important. (11:06)

Not failing fast: Keyword recommender

Alexey: We have a comment from Pierre. Pierre is saying that that’s why having a data product manager is so important. They can say “this is not useful for us, we shouldn’t spend time doing this”. (11:31)

Alexey: I mentioned I also have a similar story. Back then we didn’t have a data product manager. If we had, maybe it would’ve worked out differently. I worked at an SEO company. SEO stands for “search engine optimization”. In SEO, you have a website, and you want this website to rank for certain keywords in Google. Say you’re selling hardware: if somebody goes to Google and types “monitors Berlin”, you want to be the first result there. That’s the idea behind SEO.

Alexey: In that company we wanted to build a keyword recommender. Let’s say we have a customer who has some keywords. We say “you seem to be writing about monitors, how about writing an article that compares 4k monitors versus 8k monitors”. We’d give suggestions for keywords to rank for.

Alexey: If you think about it, this is a classic recommender system with a collaborative filtering approach, so you have this huge matrix. The rows are your users — in our case, the clients. The columns are the items. In an e-commerce company, those could be phones. In our case they were keywords. You put “1” in a cell if that customer uses that keyword. You get this huge matrix and then you use something like alternating least squares to factorize it. This way, you encode the users and the items in a common vector space. Then you can compute similarity between these vectors and find items that could be interesting for a given user.
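The setup Alexey describes can be sketched as a toy alternating-least-squares loop — illustrative data and hyperparameters, not the production system:

```python
import numpy as np

# Toy client x keyword matrix R: 1 means the client already uses the keyword.
R = np.array([[1, 1, 0, 0],    # client 0 uses keywords 0, 1
              [1, 0, 1, 0],    # client 1 uses keywords 0, 2
              [0, 0, 1, 1]],   # client 2 uses keywords 2, 3
             dtype=float)

rng = np.random.default_rng(0)
k, lam = 2, 0.1                       # latent dimension, L2 regularization
U = rng.normal(size=(R.shape[0], k))  # user (client) factors
V = rng.normal(size=(R.shape[1], k))  # item (keyword) factors

for _ in range(20):                   # alternate two closed-form least squares
    U = R @ V @ np.linalg.inv(V.T @ V + lam * np.eye(k))
    V = R.T @ U @ np.linalg.inv(U.T @ U + lam * np.eye(k))

scores = U @ V.T                      # predicted affinity for every pair
# Recommend keywords client 0 doesn't use yet, best first:
recommendations = [j for j in np.argsort(-scores[0]) if R[0, j] == 0]
```

Real systems use sparse matrices and libraries with optimized ALS implementations; the loop above just shows the alternating closed-form updates.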

Alexey: This is all great in theory. We thought that it’s straightforward, we should follow this approach. We spent a couple of months training this model — collecting the data, cleaning data, preparing it in the right format, evaluating it, tuning it, trying different libraries. It was a lot of fun work. This is what data scientists love to do. We really love doing this kind of work — like Kaggle — especially when there is a good dataset. We had a good dataset and we had really great evaluation metrics.

Alexey: After a couple of months we show it to everyone. We say “This is so cool, let’s implement this”. We go to the engineers who are supposed to help us. They look at it and say “No. There’s no way we are going to integrate this into the existing architecture”. The problem was that the architecture was based on AWS Lambda. Back then, there was a very strict limit on the size of your deployment package — it had to be below 50 megabytes, which is a pretty tough constraint considering how many different libraries we had. It was very difficult. We spent a couple more months working with the engineers to reduce the size of the package. We even implemented some things from scratch — we didn’t want to depend on some libraries that were too heavy.

Alexey: Finally we did it. And nobody wanted to use it. It was a great project, a great idea. It was also an interesting engineering challenge which we overcame. But then nobody needed it. And that was very sad. The customers didn’t need these recommendations.

Alexey: In retrospect, what I would have done instead — I’d spend a few days and manually come up with these recommendations. I’d select a sample of a few clients. I’d work with the domain expert — with some SEO specialist — to suggest keywords for these clients. Then we’d just send emails to these customers — just to test if they’d be interested in these keywords. If we see that they are interested, it’s good. We spent a couple of days verifying it. If we see that they are not, we spent only a couple of days doing this manual work. But it saved us months of work.

Four steps to epiphany

Yury: I’m currently reading a book called “The Four Steps to the Epiphany”. It’s about startups and their business models. It describes exactly this problem: you think customers want something, but they don’t. Some startups cherish the product development model, but it doesn’t work well for startups. It’s better to switch to the customer development model. The book describes an approach where you start with a trial group. You make sure that you are creating a feature these customers need and that you are solving their pain. Then you keep iterating. (16:46)

Yury: And in this story, there is actually a very important skill — not to use machine learning.

Alexey: I have not read this book, but I heard about this thing called “design thinking”. It’s about what you mentioned — it’s about thinking of the customer first, going from the problem they have and validating things as fast as possible. (17:40)

Lesson learned when bringing XGBoost into production

Alexey: One of the things I mentioned in my story was about the engineering part. This is what we, data scientists, don’t like. We like doing Kaggle, trying different models, tuning parameters, and coming up with smart features. This is fun, right? But when it comes to deploying models, when it comes to the engineering part behind machine learning, then for many, it’s not as fun. (18:00)

Alexey: I watched the talk you gave a couple of years ago. You worked in some advertising company and you had some issues with the serving layer. Maybe you can talk about this story? Can you tell us in more detail what happened there?

Yury: Indeed! I had exactly the same problem. I kept iterating on improving the model while the problem was lying in a different place. As you can guess, it was in the infrastructure around the model. (19:04)

Yury: When I switched from my PhD studies, I joined the Mail.Ru Group. For search, everyone uses Google. But in Russia we have a great Google competitor, Yandex. Mail.Ru is the greatest Yandex competitor — they also have a search system. Due to morphology and all the other challenges of the Russian language, Yandex is leading. It owns around 50% of the search market. Google is very close with something like 47%. And Mail.Ru has the remaining 3%. But these 3% bring in huge revenue — something like dozens or even hundreds of millions of dollars per year.

Yury: We had gradient boosting for this search system — a highly optimized C++ implementation of gradient boosting. Boosting can do classification, regression and ranking; with some specific losses, it’s very good for ranking. We had the task of content recommendations — we had several partners, websites with different news. There are a lot of articles, but you can show only 4 or 5 related pieces. So it’s an easy monetization scheme — you replace one of the recommendations with an ad. Gradient boosting was very good in offline experiments, in cross-validation. But when deployed in production, in the online experiment, it wasn’t. We noticed that a heuristic was beating the gradient boosting model.

Yury: The heuristic was very simple. In content recommendation tasks, there is a very strong baseline — CTR, click-through rate. You show some ad a hundred times, you measure how many times it was clicked. Let’s say it was 7, then your CTR is 7%. If you rank all your content by CTR, it gives you a good baseline. In reality you have to exclude nudity and other inappropriate content, but we worked with partners, we were pretty sure that this content can be shown. Our heuristic was mostly based on CTR. It was a weekly CTR with some trend we added plus a monthly CTR with some small coefficient. We also broke it down into 10 age and gender groups. That was a fairly simple solution.
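The blended-CTR heuristic Yury describes might look something like this — the coefficient and the data are made up for illustration, not the actual production values:

```python
# Sketch of a CTR-based ranking heuristic: blend a responsive weekly CTR
# with a more stable monthly CTR, then sort content by the blended score.
# (In production this would also be broken down by age/gender group.)

def ctr(clicks: int, shows: int) -> float:
    """Click-through rate: e.g. 7 clicks out of 100 shows -> 0.07."""
    return clicks / shows if shows else 0.0

def heuristic_score(weekly, monthly, w_month=0.3):
    """weekly/monthly are (clicks, shows) tuples; w_month is a small weight."""
    return ctr(*weekly) + w_month * ctr(*monthly)

content = {
    "article_a": {"weekly": (70, 1000), "monthly": (200, 4000)},
    "article_b": {"weekly": (50, 1000), "monthly": (400, 4000)},
}
ranked = sorted(
    content,
    key=lambda c: heuristic_score(content[c]["weekly"], content[c]["monthly"]),
    reverse=True,
)
```

The appeal of such a heuristic is exactly what Yury points out: it is cheap, fast, and reliable enough to serve as a fallback.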

Yury: I iterated on improving the model. I used active learning. I created different features. I tried to improve the model itself, its architecture, hyperparameters and so on. I was still in my PhD program, so I approached the problem as a machine learning researcher. After 3-4 months, I realized that it’s a high-load system. The model was limited to 80 ms to make a prediction. If it fails, if it times out, you cannot show a blank page — you have to replace it with some quick and dirty solution. In these serious production systems, you always have a “last hope” solution. It’s a very reliable heuristic — like sorting by CTR. On the weekend, when everyone is off, if the main production system fails, this solution must work. These things should work 100% reliably.
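The “last hope” pattern Yury describes — a latency budget with a reliable fallback — can be sketched like this. The function names and the thread-based approach are illustrative assumptions, not Mail.Ru’s implementation:

```python
import concurrent.futures

# Sketch: try the model within a latency budget; on timeout, fall back
# to the reliable heuristic (e.g. sorting by CTR).

def rank_with_fallback(model_fn, fallback_fn, docs, budget_s=0.08):
    """Return (ranking, source) where source tells which path was used."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(model_fn, docs)
        try:
            return future.result(timeout=budget_s), "model"
        except concurrent.futures.TimeoutError:
            return fallback_fn(docs), "fallback"
```

A real serving layer would handle this at the RPC level rather than in Python threads, but the logic is the same — and, as in Yury’s story, the A/B test then measures a *mixture* of model and fallback, not the model alone.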

Yury: What happened is that our gradient boosting solution was timing out in many cases and was being replaced with this last-hope solution. In the end, I was testing not a gradient boosting solution, but a mixture of gradient boosting and this heuristic.

Yury: When I fixed it, it skyrocketed. The fix was pretty simple. Initially we were starting with CTR — we’d rank the content by CTR and boosting would re-rank the top 1000 documents. We replaced 1000 with 300 — we only re-ranked 300 documents. It worked exactly the same in terms of precision and recall, but much faster. Now the project is bringing in money and all is good.

Yury: The lessons that I learned there. First of all, I spoiled my relationship with my manager at this point. Four months was too much for such a project. We already had a working solution, a nice gradient boosting model, and there was pressure to deliver it earlier. At that time I launched a side project — this open machine learning course. I was a bit distracted. My personal lesson learned was that sometimes you need to earn a reputation. You need to work hard and focus on a single problem. Another conclusion was that you need to go beyond your Jupyter notebook. In a project, go to your developers — backend, MLOps, DevOps. Make sure you understand what’s happening to the model at each stage — from data collection to training to deployment. Understand the entire life cycle of a model. It’s also good to understand technical details, to avoid problems related to infrastructure rather than to machine learning itself.

Alexey: So you had a smart model but most of the time it was replaced by a simple heuristic. You found out about that later. I guess you spent a lot of time trying to figure out what is going on. I imagine going through all these logs trying to figure out what is going on there. (25:25)

When data scientists try to be engineers

Alexey: These things are not easy for data scientists. Even though I previously worked as a Java developer, it was difficult for me as well. The way I was doing things was far from best practice. I would SSH to the “production” server and do “git pull” there. I had a special branch called “production” in my git repo — everything in this branch was the “production” code. Eventually I even set up a crontab to pull automatically every minute, so I wouldn’t need to SSH to the machine. Of course, there was no CI/CD, so every time there was a bug — even simple things like syntax errors — the whole thing crashed. Then I’d need to SSH to the machine and revert it. It was annoying. (25:56)

Alexey: At some point I left the company and somebody unfortunately needed to deal with all this mess. I heard many complaints. Eventually real engineers took it over and re-did everything with proper techniques like CI/CD. I have lunches with my colleagues from there, and they still complain. Did you have anything similar in your experience?

Yury: Exactly. When I switched teams in Mail.Ru Group, I joined a predictive analytics group. Overall that was a very successful project. It dealt with marketing. We had a business intelligence solution — an app which would create nice dashboards with key marketing metrics: LTV, retention, monthly users, large payments. It was related to mobile games, and some of the tasks were to identify “whales” — players who pay thousands of dollars per month. We had a nice app for creating reports on these metrics. I was the first guy to introduce predictions — alongside the LTV reports, we would create reports with LTV predictions. (28:11)

Yury: It’s a funny experience. We had this startup vibe in a giant company. I was the first guy to set up machine learning pipelines. So I was setting up the “production”. I started with a Jupyter notebook, created some snippets, and then switched to PyCharm to create a nice project with object-oriented programming, tests and so on.

Yury: But then I’d drop these predictions into a CSV file, and another guy, a backend developer, would pick them up and rsync them (i.e. copy them) to another server. We had very similar issues. We had no CI/CD. If something didn’t work, we’d SSH in and fix problems right in production. This backend developer didn’t like us. He was cursing us data scientists. We earned 50% more than him, and it was very annoying for him to have to deal with all these issues.

Yury: Then I moved to the Netherlands and I dropped the project. But I think now it’s successful, it’s in active development. Now guys actually have the best practices there with all the CI/CD, code reviews and so on.

Alexey: You have to go through this experience, right? You have to do things incorrectly before you learn to do them correctly. (30:35)

Joining a fintech startup: Doing NLP with thousands of GPUs

Yury: At the same time, I joined a fintech startup. One of the huge advantages of companies like Google, Facebook, LinkedIn, Uber is the network. In Russia these companies are Yandex and Mail.Ru and maybe Sber. There are many smart guys. At some point, one of the directors of a huge department took the course about ML I gave at Mail.Ru. And he invited me to join a fintech startup. (30:44)

Yury: They came up with the idea to sell Bitcoin in a mobile app. Such banks already existed in Europe, but in Russia they had to solve many legal issues. Revolut actually existed at that point already — it was built by Russians who then moved to Great Britain. They had already solved the legal issues around selling Bitcoin in their mobile app. This startup was doing the same in Russia.

Yury: And they had bought 4,500 GPUs to mine Bitcoin. Then they realized that you need special hardware to mine Bitcoin — GPUs weren’t cool anymore. They also had this mining farm somewhere in the center of Moscow, but you need cheaper electricity — you need to put these farms near hydroelectric stations somewhere far away.

Yury: So they were left with 4,500 GPUs. They needed to pitch better, so they included “AI” in their pitch decks. And that’s why I was doing some “AI” for their startup. From day one, I said “No, I’m not going to predict Bitcoin prices. I don’t believe it’s possible. If you have a huge team of smart guys, maybe you can build something in 5 years, but I refuse to do that alone”.

Yury: So I was working on sentiment analysis of Bitcoin news. The idea was to create something like a “sentiment barometer”, which would show you the daily sentiment around Bitcoin. I was playing around with some state-of-the-art NLP solutions. That was before Hugging Face released their easy-to-use API, so I would fetch some GitHub repo and spend a day trying to launch something. Eventually I managed to beat TF-IDF and logistic regression by 3%.
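The TF-IDF plus logistic regression baseline Yury mentions is a few lines with scikit-learn. The headlines and labels below are toy data invented for illustration, not the startup’s dataset:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# The classic first model for text sentiment: sparse TF-IDF features
# fed into a linear classifier.
texts = [
    "bitcoin surges to record high",
    "exchange hacked, prices crash",
    "adoption grows among institutions",
    "regulators ban crypto trading",
]
labels = [1, 0, 1, 0]   # 1 = positive sentiment, 0 = negative

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # unigrams and bigrams
    LogisticRegression(),
)
model.fit(texts, labels)
predictions = model.predict(["prices crash after hack"])
```

Beating this baseline by only 3% with heavy state-of-the-art models, as Yury did, is itself a useful finding — it tells you how much headroom the fancier approach actually buys.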

Yury: Then the startup had troubles raising money and they decided they do not need AI anymore. My solution didn’t end up in production and it wasn’t bringing money. But that was actually a great experience.

Yury: I also learned that labeling can be prohibitively expensive. We had some Australian financial experts — a special Telegram chat with 15 guys. It was fun just talking to them. They were labeling this data and it was prohibitively expensive. At some point, I switched to Mechanical Turk. I was also labeling some of the data myself. So we learned that you can spend too much money just labeling data.

Alexey: It will still not be good enough. (34:36)

Yury: Yeah. (34:39)

Alexey: Do you know what happened with all these GPUs? (34:40)

Yury: At some point they explored the idea of selling them to deep learning researchers. On one side you have all these miners with many GPUs, and on the other side you have deep learning researchers who need cheap compute. You can build this bridge — I know one startup already did that. I think this startup also decided to rent out the GPUs. I don’t know how they got rid of all of them in the end. (34:45)

Alexey: It’s probably not so easy to get rid of them. That is a lot of GPUs. Do you have this startup on your LinkedIn profile? (35:18)

Yury: No. It was 4 or 5 months of experience. I didn’t mention them because I don’t want to hurt their marketing interests. (35:46)

Alexey: Okay. That’s one of the things that data scientists don’t mention on their CVs — small startups that mine bitcoins. (36:01)

Yury: Exactly. (36:11)

Working at a Telco company

Alexey: Then you moved to the Netherlands and worked at a telecom company. Can you tell us more about what you did there? (36:12)

Yury: Yeah. I switched to NLP. In this telecom company, I worked with a huge dataset. By the definition of data mining, you have a large dataset and you want to find some useful signal there. We had exactly this problem. The telco operator had many chats, calls and emails with different complaints — there’s actually a lot of useful signal there. It was all in Dutch, so I used Google Translate a lot… But you can imagine — people write that they are pissed off by some of the services, they have some technical problems and so on. So I built a service that would classify all this into broad groups, like “billing”, “general service”, “churn”, “customer satisfaction”, and things like that. That was fun. I had to work with Dutch. At that time there were no good pre-trained transformer models for Dutch, so we explored how to train a model in English and then run it on Dutch. (36:21)

Yury: I worked in a data science department that was not properly managed. Data science for such a company is like a luxury car. Like a Lamborghini — it’s cool and expensive, but what if you don’t know how to drive it? It was a challenge for the managers of this old company to manage us.

Yury: We had one project which yielded revenue. It’s called “bad debt” — it’s very similar to credit scoring. When you go to a shop and you’d like to take a loan for a mobile phone, they run a scoring model.

Yury: There was also a small cool story about this project. In my first days in the Netherlands, I went to a KPN store to buy a mobile phone for my wife. An iPhone, of course. And their system rejected my application for the loan. The model was built by my colleagues, so I went to them and asked “Why was I refused?” I was literally able to go through the learned coefficients with Shapley values, visualize them and understand why I was rejected. The killer feature was “residence permit”. I had a temporary permit for less than one year. What scammers typically do is come to the Netherlands from some other country, take hundreds of mobile phones on loans, then leave and sell these phones elsewhere. So if your residence permit is shorter than one year, you can be rejected.
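The kind of inspection Yury describes can be sketched as follows. The data and feature names are hypothetical, invented for illustration; for a linear model with independent features, coefficient times feature value is exactly each feature’s contribution to the score (which is also its Shapley value in that special case):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy scoring data. Hypothetical features:
# [permit_shorter_than_1yr, months_as_customer, has_income_proof]
X = np.array([[1,  2, 0],
              [1,  1, 0],
              [0, 24, 1],
              [0, 36, 1],
              [1,  3, 1],
              [0, 12, 0]])
y = np.array([0, 0, 1, 1, 0, 1])   # 1 = loan approved

clf = LogisticRegression().fit(X, y)

# Explain one rejection: per-feature contribution to the linear score.
applicant = np.array([1, 4, 1])         # temporary permit, new customer
contrib = clf.coef_[0] * applicant       # coefficient x feature value
killer_feature = int(np.argsort(contrib)[0])  # most negative contribution
```

On this toy data, the short residence permit is the feature pushing the score down — the same kind of finding Yury got from the real model. For tree ensembles rather than linear models, a library like SHAP computes the analogous per-feature contributions.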

Yury: That was the project yielding revenue — several millions per year. That was carte blanche for us to do research and explore ideas. That was cool! I had a perfect team there. All the guys around me were so nice. The working style was so relaxed, especially after Russia. Fridays were for working from home.

Having too much freedom

Alexey: You were looking for work-life balance, right? (39:54)

Yury: Yes, I moved to the Netherlands for work-life balance. But we had too much freedom. I used this time to do research. I also launched an initiative with Amsterdam Data Science on exploring transfer learning in NLP. That was nice because it led me to my new job. My future boss wrote me an email: “I know you from Amsterdam Data Science. Would you like to join as a senior machine learning scientist?” (39:56)

Yury: Maybe it is a bit philosophical, but if you have too much freedom and you are lacking some sense of impact, that’s also not good for your motivation. With all these lockdowns, I switched to another company.

Yury: As a consequence, I now ask a question about that during job interviews. The question is “How many projects yielding revenue do you actually have running right now in production?”

Alexey: What if they say zero? Then it is a red flag? (41:07)

Yury: Well, maybe yes. I like coming up with PoCs and doing research, but at the same time I don’t want to feel this lack of impact. (41:14)

Alexey: I liked your metaphor about a luxurious car that is cool but expensive. It’s probably not easy to find people in management who know how to drive this car. Especially in more traditional companies, like telecom companies. What I often see — these companies work with consultants like McKinsey, BCG or others. The consultants say, “You don’t seem to be doing AI, but you should be. Hire us and we will tell you how.” They hire these consultants. Then the consultants say, “You need to hire 2-3 data scientists”. So they hire them. The consultants are very expensive, so companies end their contracts with them. And now a company has 2-3 data scientists and needs to figure out what to do with them. I didn’t personally experience that, but I heard these stories from other people. (41:28)

Alexey: In the end, most of these projects were not successful. People would just be left alone with a lot of freedom. They would spend some time playing Kaggle. But you can only play Kaggle so much. You can do it for 2-3 months, but then you start feeling bad about doing this at work.

The importance of digital presence

Alexey: We actually have a question from Wahid. It’s not related to our topic today, but I’m curious to know your take on that. We already talked about Kaggle — doing this at work when your company doesn’t know how to keep you busy. The question is, how important is it to have a digital presence for landing a data science job? Something like GitHub pages, a personal data science blog, active Twitter account, Kaggle and things like this. (43:20)

Yury: I wouldn’t say it’s of critical importance, but it’s a good additional feature. The most important part in an interview is being able to describe your projects and the impact in those projects. It gets more and more important as your role matures. When you are a junior, you might be challenged to write a Fibonacci generator or take a derivative of some crazy function. But as you mature, they get more interested in how you can change the company and the processes. That’s why it’s good to show your experience from other projects. Especially if you changed the way the company runs some processes. That’s very important to describe it. (43:49)

Yury: So the first exercise I’d recommend to everyone is to go through your LinkedIn profile and analyze your past experience. Be able to describe your projects in detail, using active verbs. I think Google has a nice guide on how to reflect these in your CV. State what your role in the project was.

Yury: I still think this is important. My public activities helped me a lot — having an open machine learning course and a GitHub repo with 7000 stars does not hurt.

Alexey: I cannot imagine the scenario where it would hurt. Well, maybe I actually can. You mentioned that one of your managers was not really happy about that when you had an important project at work. (45:23)

Work-life balance

Yury: Indeed. There is a very subtle trade-off. I always kept a couple of hours per day for any creativity — reading blogs, writing blogs and things like that. For me, it’s also a way to avoid burnouts. Honestly, I am not fascinated with the idea of working hard for the company. Why would you? (45:35)

Alexey: Let’s hope nobody from your current managers is listening to that. Oh, your current managers are from the Netherlands. They take these things easy in Europe and care about work-life balance. (46:01)

Yury: Maybe some of my colleagues will also listen to this. Anyway, I understand if you work 12-14 hours per day for your own startup — if you believe in it, if you think it’s revolutionary. You still have little chance of taking off with that startup, but if you work for your own startup, you can live at work. What I honestly don’t understand is putting so much effort into someone else’s company. Unless you are motivated by some stock options — then it makes sense. That’s why I always leave a couple of hours per day, knowing that I will be distracted by some cool stuff like reading about causal inference or quantum machine learning or things like that. (46:13)

Yury: One more hack — arrange a meeting with yourself every day from 9 am to 1 pm. You create a meeting with yourself, and it’s your focus time. Sometimes, of course, you have important meetings and you are asked to reschedule. But most of the time you can do your stuff. As a data scientist you can go into code. I used the time also to work a bit on my side projects, on my public activities, blogging, shooting videos and so on. Indeed I had a negative experience with that as well. When I worked at Mail.Ru, I was maybe too involved in running this open machine learning course. My main task at work suffered. So there is a very subtle trade-off here.

Yury: But I certainly recommend having some public activities and nice talks. If you have solved some problem with A/B tests, share your experience. If you can describe all the nitty-gritty details, the caveats and how you resolve the issues — that might be very valuable. If you create 5-7 talks like that, you can already be recognized within closed circles. That would definitely help. But it’s really hard to put a label on how valuable that is in terms of money. It’s very personal.

Alexey: It just gives you a lot more opportunities than you otherwise would have. If you want to measure this in terms of money: the next time you change jobs, you can get a higher salary bump. You can just apply for a job and get it, but if you have some online presence, people will recognize you, and then you can ask for more money. That’s how you can measure it. Of course, to do a proper evaluation, you’d need to run an A/B test: take a group of people who don’t have public activity, a group of people who do, make them change jobs, and then see. Running this experiment would probably take some time, but my gut feeling is that it helps. (48:27)

Quantifying impact of failing projects on our CVs

Alexey: You mentioned that you should go through your LinkedIn profile, take your projects, and try to quantify the impact you had in these projects. But what if these projects had a negative impact? You wasted six months working on something that resulted in nothing, and you decided that you want to look for a new job. That’s the kind of thing we don’t mention in our CVs, right? I wouldn’t say on my CV that I wasted so much time at my company working on a project that resulted in nothing. I don’t become more attractive by saying that. Do you have any recommendations on how we can quantify our impact when our projects are not that great? (49:30)

Yury: All the stories that we share here are about inconvenient truths. A LinkedIn profile is about self-promotion. It pursues marketing goals — which is a euphemism for “lying”. Let’s say “self-promotion”, to make it more politically correct. It’s not lying, but it’s not 100% truthful either. (50:46)

Yury: But I think it’s worth mentioning a negative experience. You can still sell it. The next time I’m in an interview, I’d describe this language quality project where I made the decision to close the project before it was too late.

Yury: First, I have a gut feeling that not so many candidates describe such negative experiences. This can help you show off a bit by describing the negative projects.

Yury: But if you just want to put it on LinkedIn, then you don’t need to be very specific about all the financial goals of the projects and things like that. Just describe it in general. I have some of these projects on my LinkedIn profile, with pretty general descriptions — because of all the issues discussed here. Just be sure that when you talk with your next potential employer, you can defend this project and share the lessons you learned from it.

Business trips to Perm: don’t work on the weekend

Alexey: We still have some time left. Maybe you have one or two stories you want to share. (52:41)

Yury: I have a couple more funny stories. They take us back to my youth. I don’t like business trips, so I can share a story about my job with a Russian system integrator. During my master’s, I joined a company that was an Oracle partner in Russia. They had a very primitive monetization scheme. Half of their revenue came from reselling Oracle licenses: many small companies across Russia that wanted to work with Oracle went through this system integrator and bought licenses. Another 49% of the revenue came from a corporate training center, which gave courses on database management, Java and so on. And only 1% of all the revenue came from actual development. That’s approximately the role of data science in business in general, just 1% of the whole revenue. I was still a student, and I spent more than a year in the corporate training center. I am grateful to this employer; they invested a lot in me. I studied a lot of things, even Hadoop. (52:55)

Yury: But when I joined the actual projects, I went to the Russian city of Perm. It’s almost Siberia; it’s very close to the Ural mountains that separate Europe from Asia. Perm is a bit to the east of the Ural mountains, where Siberia starts. So I went on a business trip there.

Yury: I worked as a business intelligence architect, so the work itself was actually pretty good and sometimes challenging. You work with a corporate data warehouse and you need to build some logic around the warehouse. The final goal is to make reporting easy, a couple of clicks to create a nice dashboard that you show to your manager, CEO or anyone. I needed to think a lot.

Yury: But I had to go on business trips, and it was really crazy. They sent us to Perm in winter, when it’s -30. The project manager didn’t take into account that we would be living in a hotel over the weekends, so their budget wasn’t actually thought through well, and they just made us work during the weekends as well. A taxi would pick us up at 7:30 a.m. It’s -30 and windy, so we jump into the taxi, and it drives us to the company. Then we sit there till 8 p.m., and another taxi takes us back to the hotel. You don’t see the sun at all.

Yury: I had three business trips like that. One of them was just three weeks in a row, with weekends, so 21 days in a row. At some point I said “No. I don’t care”. After spending Saturday at work, I got a hockey stick and played ice hockey with guys from Perm. I just said “No, I’m not working on Sunday”. I could have been fired, but I didn’t care.

Yury: The lesson I learned is that you need to be careful with business trips. I also have a feeling that on a business trip you have to work more, to satisfy your customer. To be honest, I just don’t enjoy this business model where you have to satisfy your customers. Data science projects are very risky. You always have a risk that the project is not profitable, and then you gradually turn into your customer’s slave. I don’t like that feeling.

Yury: As for business trips, maybe someone would argue that it’s a way to explore the country, see the world. I know your company belongs to Naspers. We have their headquarters here in Amsterdam. I know a guy from this company. It’s a crazy lifestyle. One week you are in Brazil, another week you are in Guinea, then you go to Russia. It’s very challenging for your work-life balance.

Alexey: Now things are different. They don’t need to travel that much; everything is on Zoom anyway. But maybe next year life will get back to normal and they’ll be back on the plane again. (58:15)

Yury: They will get back to their normal. (58:31)

What doesn’t kill you makes you stronger

Alexey: Maybe somebody will not. It’s time to wrap up. Do you have any last words before we finish? (58:36)

Yury: I wanted to say that these failures are fine — unless you get fired. I think it’s a very valuable experience. It’s good to understand that all the people around you are not robots; they are not bringing millions to their companies every year. It’s important to understand that they also have their failures. It’s good for your self-esteem. At the same time, these failures are very important. It’s okay to make mistakes. You can listen to talks like this one, but it’s important to experience it firsthand. What doesn’t kill you makes you stronger. (58:46)

Alexey: Right. So you have to try doing these things, like SSHing into your production environment and deploying things through Git, to know how good it feels when you do things properly. Well, I’m not actually recommending this, but by doing it you see the benefits of doing things the right way. (59:49)

Yury: I can only wish you good failures that don’t make you leave the company. (1:00:16)

Alexey: How can people find you? (1:00:24)

Yury: Twitter. (1:00:30)

Alexey: I will put this in the description. Thanks for finding time to join us today. I know you have a tight schedule. This is the third talk you gave today. I imagine that now you want to take some rest from talking and drink some hot tea. Thanks for talking about these things. Not everyone would be comfortable talking about failures. Also thanks everybody for joining us today and for watching our chat with Yury. (1:00:35)

Yury: My pleasure! Thanks for having me. (1:01:27)

Subscribe to our weekly newsletter and join our Slack.
We'll keep you informed about our events, articles, courses, and everything else happening in the Club.
