Two of the Most Common Mistakes Product Teams Still Make
Feature Creep and Over-Optimization
I’ve been focusing on the question: Why are the products that we build not better? What are the things that prevent us from building great product teams? And I still see two big mistakes. The first one is one I’ve seen for a long time. The second is a bit newer.
Feature creep is still a problem. Most of the products we use have way too many features, and most of those features exist because the product or tech team couldn’t figure out what to build next to solve problems for their customers. It becomes more and more of an issue with bigger, more established products. It even shows up in smaller products when you’re trying to figure out, “What is the next critical thing I can build to get a user hooked on my product and coming back to it?”
This problem comes from: “I don’t know what to build next. I’m not sure how I’m going to solve a problem for my customer, so instead I’m just going to bolt something on and hope that it works.” It comes from a place of not actually understanding who your customer is or oftentimes not looking at your data to figure out, “Hey, this is working. This isn’t working. This is where we should double down.”
The newer problem I’m seeing more and more is over-optimization. This is the flip side of feature creep. You’ve got something that’s working, and you see some good signals in your data. Now, you should continue talking to your customers to understand, “Why is this working? What problem am I trying to solve?” Instead, the product team starts thinking, “Okay, I’m going to tweak the dials, I’m going to start changing all of the button colors, I’m going to move this thing from here to here.” All of that works, usually for a brief period of time, but eventually you hit diminishing returns, and you can’t figure out how to keep innovating within your product organization. What often happens in this scenario is that you lose the innovative process you need in order to keep building a great product. You also lose focus on who your customer is. Instead, the team starts to think of the customer as one more data point, one more bit of information they’re bringing in, rather than an actual user.
It’s interesting because one is an old problem and one is a newer problem, but I think they share a lot of the same root causes.
The biggest one is teams still aren’t great at understanding the problem that their product is solving. A lot of teams still don’t spend the time to go out and actually physically talk to their customers. They don’t spend the time to actually sit down and understand, “Tell me what you are doing with my product every single day. Tell me how you are using it.” And even if they do, they may not use those conversations to generate insights for how they’re going to build a better offering.
But then secondarily, a lot of product managers are really bad at forming hypotheses. People are not good at saying, “Okay, I’ve talked to my users and I understand the problem they’re trying to solve. I can now give a clear reason why I’m going to build the features I’m going to build, why I’m going to run the tests I’m going to run.” Instead the idea is more like, “Let me just innovate and do as many things as quickly as possible without really stepping back and thinking about what I’m actually doing, what problem I’m solving.” And this is a mistake. You probably don’t have enough data to have the luxury of going that fast. Only a few companies in the world, like Facebook or Google, do. For most product managers, it actually makes sense to spend more time thinking and coming up with the right approach, rather than wasting your engineering team’s resources by pushing them to go as fast as possible.
The next few reasons for these problems are also related. Product owners don’t validate their hypotheses with data. Often, they just start running tests or experiments. A lot of people think that the only way to validate a hypothesis is by looking at quantitative data, forgetting that you can take your screenshots and proposed ideas back to a user and see if they work before you ever build anything.
One of my career experiences that I talk about a lot is my time at IMVU, the company Eric Ries co-founded before he wrote The Lean Startup. What I think has been muddled about the Lean Startup process is the assumption that you should build an MVP, get it out quickly, and collect user validation data. But what Lean Startup actually meant for people to do first was to form a clear hypothesis, then make sure your users can validate that hypothesis in some way before you ever build anything. That step really gets missed when companies attempt to do Lean Startup nowadays. It’s something I try to instill in the product teams I build: how can we think critically about what the user is doing, what the data can tell us, and how we can use it to validate what we’re trying to learn?
The second-to-last problem occurs a lot. Teams scour the Internet, come to conferences, and think, “What are other teams doing to optimize and build the best product, and what hacks can I quickly borrow? What growth hacks are out there?” Most of the time those don’t work for you. Your customer is different from everyone else’s customer. And the fact is, if you’re just relying on copying one idea or another to get things done, you’re not spending the time to understand who your user is.
And then lastly, and this is really in the optimization realm, people focus only on the data. They don’t focus on who the user is and what that data tells them about the customer. Instead, they are only looking at the dials and how to turn them to drive more optimization for their product.
I’ll continue this in a follow-up post in the coming weeks. This originally appeared on my blog.