CASE STUDY – This insurtech venture has found in Lean Thinking a way to tackle its many scaling issues. It’s grown from two to fifty-five people in less than four years, ultimately thanks to a strong focus on quality.
Words: JB Limare, CEO, Veygo – United Kingdom.
When we launched Veygo in October 2016, we were like all new startups: we were really fast (we released our MVP – minimum viable product – in 11 weeks thanks to our partners at Theodo), our team was tight and focused, and communication worked perfectly.
After successfully launching our first product, we started growing the team in order to confirm our early success. One of our primary goals was to turn our MVP into a mature product and, to get there, we had a long list of features we had to deliver.
For 12 months we rushed to build up the product, but then issues and inefficiencies began to creep in. There was a general feeling that we were slowing down, and we were getting worried about the scalability of our venture. We were afraid that we were catching “big company diseases”, and that the speed and decisiveness we were so proud of were just an illusion. It was easy to look good and go fast with a clean sheet, a small team and no legacy, but scaling a product and a whole business was another story. We needed to find a way to scale up well – a typical startup challenge – and so we started to explore Lean Thinking as a way to solve our issues.
I got very enthusiastic very quickly and we started “doing lean” with the faith typical of born-agains. It didn’t take us long to learn two key insights:
- Any venture (digital or not) can be conceived as a flow of parts that starts and ends with the customer. “Fake” Agile teams that only optimize the flow within IT are missing the point;
- Waste is the key concept to understand inefficiencies in our flow. To create a scalable organization, we need to spot and eliminate sources of waste.
We tried pretty much all the tools we could find in the books: mapping processes, visualizing and controlling our flow, kaizen workshops and so on. However, at that stage our goal was merely to go faster in order to produce more features.
While Lean Thinking can and should certainly help you become more efficient, at that time we were going about it the wrong way: we were trying to improve speed and productivity instead of aiming for great quality and people engagement, which would have led us to better performance. As a result of this misunderstanding of key lean principles, the relationship with my teams became tense, because they felt I was forcing the tools on them in a “command-and-control” fashion. In the end, we didn’t go any faster.
PUTTING CUSTOMERS BACK AT THE CENTER
Unfortunately, six months after we started our lean journey, our initiatives had failed to have any real impact on the business. For all our cool whiteboards and fancy process flows, we were unable to point to a single business problem that had been efficiently solved using the lean toolbox.
Worse still, we were starting to fall behind our sales targets. We were supposed to grow massively but sales were sluggish. We loved the ideas and concepts of Lean Thinking, but they seemed useless when it came to increasing sales… or so we thought.
We did not give up on lean, however. Instead, we tried to use the lean mindset to understand why our sales were disappointing. This is when our lean sensei proved instrumental. We could have addressed the issue via “obvious” solutions, such as increasing marketing expenses or building even more features, but our sensei helped us realize that the real issue was that we were more focused on our processes than our customers. We were catching, she said, the first “big company disease”.
She and I went to the gemba, where value is created, to try and find out why. More specifically, she took me to the Operations team, which handles customer requests and claims, and later on to the Product team. There, we discovered that:
- Only a few dedicated people in the Operations team ever talked to our customers, and those conversations didn’t always go well;
- We were guessing why customers were hiring us. We were making our own assumptions about customer problems without ever talking to any of them or knowing who they really were;
- We were doing no customer research to validate which project to work on next;
- Not knowing much about our customers’ habits, or the circumstances in which they needed us, led us to set unrealistic sales targets that put pressure on everyone.
As a digital, online-only business, we were trying to create a great and highly scalable business that gave customers a smooth, automated experience. We used to say that our best customers were the ones we never talked to.
However, we were at risk of forgetting the customer altogether. We were already becoming a “navel-gazing” team, closing ourselves off from real customers. We were making (pleasant) assumptions about what we thought our customers wanted. We were good at talking about them, but not at talking to them and addressing their exact needs.
To solve this issue, we started talking and listening to our customers. We created a new role dedicated to interviewing them using the “jobs to be done” method. We also created a weekly meeting with the Veygo management team to listen to those calls so we could get to know our customers better.
This exercise was quite eye-opening. It allowed us to discover how our products were fitting into our customers’ lives. We had been assuming that our products (short-term motor insurance) were bought by “cold and rational” individuals and that price must have been the main driver. In fact, we discovered that they carried very strong emotional value.
For instance, we realized that our Learner Driver insurance product (which covers a young learner driver so he or she can drive their parents’ car) gave parents the opportunity to reconnect with their teenage kids: it provided them with a “feel good” moment by teaching their kid a life skill. This helped us refocus our marketing on parents instead of just talking to the learner driver, for instance.
This step was critical for our growth in that it put the customer back at the core of all our decisions (marketing, product features, pricing, ops, etc.). Thanks to this, I realized that our obsession with speed and agility was leading us to confuse output with outcome:
- We wanted to maximize our output, i.e. increase the pace of our production, through frequent releases and higher team velocity.
- But we did not consider the actual outcome of our work for the customer, i.e. is this feature solving a real customer problem and is this bringing Veygo closer to success?
The realization that we needed to look at customer outcome instead of mere feature output is really what led us to discover how customer quality is the core of our business.
Learning the difference between outcome and output convinced us to change the way we define success and failure. We used to think, for instance, that a new product feature was considered “done” as soon as it was released. Instead, we decided to evaluate the real impact of our development work on customer satisfaction by introducing a systematic definition of success for each new feature.
But we soon realized that measuring the impact of our work was not so easy. We were not able to say with clarity whether a feature was a success or a failure from the client’s perspective.
For example, we wanted to test a new acquisition channel for our Learner Driver product, by convincing driving instructors to talk about our product to their students. We built a “quick and dirty” module to help instructors promote the Veygo product. Unfortunately, this trial generated very little traffic and sales. Our biggest problem was that we could not tell whether the test failed because selling the product through this channel was a bad idea or because the module didn’t work properly. We had spent precious time and effort on developing it, and we were learning nothing.
Until then, we had assumed that bugs were normal and that quality was somewhat optional: better to do things 80% right (or even 50%) than spend ages looking for perfection. This is typical of an MVP culture: we thought it was great to rush to release a product based on a few lucky field tests. Quality was an afterthought – usually put in the same basket as technical debt that we would repay later.
People were warning us. Our lean sensei kept telling us that bad quality is detrimental to a business and our Operations team, who faced angry customers, was trying to alert us to those issues as well. But we thought it was more important to go fast to release the next cool feature than to solve all of our customer problems. This attitude was no longer viable: looking at the outcome of our work, we started seeing quality issues creeping in everywhere.
We realized that the lead-time of our features was much longer than we thought once we took into account all the time necessary to fix bugs. In fact, many of the features that we had tagged as “delivered” over the past year were not considered “finished” yet, as we could not tell whether the problem was really gone. In the meantime, our technical debt kept growing.
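The gap between “delivered” and “finished” can be made concrete with a small sketch. The helper, the dates and the numbers below are hypothetical illustrations, not Veygo’s actual data:

```python
from datetime import date

def true_lead_time(started: date, released: date, bug_fixes: list[date]) -> int:
    """Days until a feature is genuinely finished: lead time runs until the
    last related bug fix, not just until the first release."""
    finished = max([released, *bug_fixes])
    return (finished - started).days

# A feature "delivered" in two weeks looks fast...
nominal = true_lead_time(date(2019, 3, 1), date(2019, 3, 15), [])
# ...but if it kept generating fixes for months, its real lead time balloons.
actual = true_lead_time(date(2019, 3, 1), date(2019, 3, 15),
                        [date(2019, 5, 2), date(2019, 7, 20)])
print(nominal, actual)
```

In this made-up example the “fast” two-week feature (14 days) actually took 141 days to be truly finished, which is the kind of hidden slowdown the team uncovered.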
We also started paying attention to customer complaints and discovered that most of them were related to quality issues. We had customers wanting to buy and telling us about bugs, but we were doing little to help them. Our most important quality issue was happening on the payment page: several bugs were leading to an endless loading screen, meaning the customer would be stuck without knowing if the payment was completed or not. Another frequent problem was that we were not applying a discount code that was meant to reward loyal customers, thus losing all their goodwill.
We decided to clean up our mess and focus on getting rid of this quality problem once and for all. We had to freeze our roadmap, assuming it would take us only a couple of weeks to fix things and then we would be back to normal.
Well, nine months on, this “clean-up” is still not over… and it has forced us to change our team culture completely!
First of all, we had to learn how to spot non-quality. Our teams were not trained to use reliable and healthy KPIs to learn about issues. A few senior IT staff were even resistant to the idea of using KPIs to spot issues: they were used to traditional IT departments working as a siloed black box and thought that KPIs would lead to micromanagement. They didn’t see that KPIs empower the team by making problems objective and visible, thus giving them the freedom to find and implement their own solutions.
Secondly, we had to challenge our development approach and our excessive focus on velocity. Those bugs were not just random errors; they were the result of our team culture and training (or lack thereof). We followed a very strong and demanding Agile model (weekly sprints, systematic estimation of every development task), but this was not helping our teams deliver quality. If anything, it was making things worse: teams were obsessed with achieving the target velocity of their sprint and rushed through the backlog in order to get more points in.
Moreover, we didn’t have any standards that helped the teams identify non-quality in their work. We were very far from the lean principle of recognizing a defect and stopping to fix it immediately before passing it on to the next step. So, when we first started experimenting with this concept, we inevitably slowed down, which was somewhat scary to all of us.
Here is what we changed in our way of working:
- Making quality problems visible by measuring lead-time, displaying our entire bug backlog and highlighting bugs raised by customers (which we made a priority). We removed scrum team velocity from our KPIs to ensure people would get the message that quality was now our number-1 priority;
- Systematically reviewing customer sessions to spot issues and bugs before our customers did;
- Changing our testing standards: our developers now write and run all their tests themselves. We don’t have dedicated testers anymore. This allows developers to “own” the quality of the code instead of relying on people downstream to spot quality issues for them;
- Holding bug reviews to 1) understand the root cause of each bug, 2) check whether we have done a good job fixing it, and 3) see what needs to change in our process to prevent the same problem from happening again. This exercise has proved very important yet quite hard, because it requires the team to question their own coding methods. It is not always comfortable and we are still struggling to make this practice permanent.
The first few months were hard for everyone. Our pipeline was frozen and we had little certainty that the work on quality would be a success. We could see the short-term pain, but we were not sure of the long-term gain. It took faith and courage, and I had to push for it.
Several months later, we started noticing that our conversion rate was increasing steadily. At first, we thought this was due to seasonality and our improved marketing campaigns, but after careful review we realized that this was not the case. Today, seven months after we started, our conversion rate is 45% higher and our cost per sale has decreased by 70%, while the number of customer complaints has been reduced fourfold. We attribute most of this progress to our work on quality. This has made our business significantly more efficient and profitable, and we now feel we have more solid foundations for growth.
We have learnt a few key lessons along the way:
- Speed of production is useless if you don’t have strong quality standards – focus on speed only and people will just rush and ship whatever they can;
- Ignoring quality means ignoring the customer. A team that makes non-quality normal won’t provide good service even with the best business ideas;
- Quality is good for the business.
This is not the end of our journey. There is still so much for us to learn. Our next challenge? Trying to increase our speed again. Lean Thinking helped us at every stage of our growth, so I’m confident we’ll get there.
JB Limare is the CEO of Veygo, in the United Kingdom.