Senior Growth Product Manager at Monograph
13 Insights · 6 Questions · 11min Read · 15min Listen · Connect with Andrew on LinkedIn
Do your research. Deeply understand how your customers make money. Run lots of tests. Trying to understand user behavior. Double diamond model. The rugged landscape model. Feedback loops. AB testing. Fake door tests. Aligning continuous delivery on the engineering side with go-to market.
3 ways that your team converts your
market into revenue?
As a product manager, a lot of my work sits a little upstream from GTM. My teams tend to be made up of designers and engineers, and I work closely with sales, marketing, and CX on the GTM side. It's a huge part of my job to ensure that we're building the right things so that we can take those to market and convert them into revenue.
1) Do your research. To start, do your research. It's important to have confidence that you're building the right thing. Have a hypothesis that when you take this product, or this feature, to market, you can convert it to revenue. When you're doing your research, you want a balance of qualitative and quantitative data. Go talk to customers to get your qualitative. Dig into your application database, data warehouse, industry stats, and market figures to get your quantitative.
For the next piece, I'll split it into two categories: B2B and B2C.
2) So in a B2B context, one of the things that's super important is to deeply understand how your customers make money. The better my team and I understand our customers' businesses, the better we can make product decisions, because we're making those in the context of asking ourselves, "What's going to make our customers as successful as possible?" And that tends to be the best way to convert your product into revenue.
3) On the B2C side, run lots of tests. Make small changes in your product and messaging. Observe the resulting behavior. Note any impact on your revenue. This is totally applicable to B2B as well, but I map it to B2C here because when you're dealing with consumers, you tend to have a much wider range of variability and it can just be a little bit more difficult to keep a pulse on long-term trends and patterns. So always be testing.
1 hard problem that you
are solving now?
1) Aligning continuous delivery on the engineering side with go-to-market. One hard problem that's very timely. Continuous delivery, to put it simply, is when engineers are able to continually deliver updates to the code base in production. It's great on the engineering side: small, incremental scopes of work that are continuously delivered to production result in fewer dependencies between engineering teams, carry less risk with each deployment because the deployments are small, and allow engineering teams to move super fast, which is awesome.

However, on the go-to-market side, this can create quite a bit of complexity, because you have a go-to-market team that's trying to figure out, "How do we talk about this product or this update? How do we frame our product in a way that the market is going to react positively to it?" Meanwhile, the product is constantly changing, which makes it difficult for the GTM team to grab on to stable messaging points.

This is something that I'm working on, but really the entire product team at Monograph is working on it right now, and Pete and Dixon are leading the charge. At least on my team, one of the solutions that we arrived at recently, quite simply, is feature flags. It's helped a lot. I was talking with Steven, my engineering manager, and after explaining some of the dynamics there, we aligned on using feature flags, because they allow the engineering team to continually deliver to production without those updates actually being available to customers yet. That gives the GTM team time to come up with the right messaging, and create their own strategies, to take it to market all at once, so they're not taking a constantly evolving feature to market.
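To make the feature-flag idea concrete, here is a minimal sketch. The flag name, in-memory store, and page renderer are hypothetical stand-ins for illustration, not Monograph's actual setup (real teams typically use a flag service rather than a dict):

```python
# Minimal feature-flag sketch: engineering ships code to production
# continuously, but a flag gates whether customers can see it yet.
# The flag name and in-memory store are hypothetical, for illustration.

FLAGS = {
    "new_invoicing_ui": False,  # deployed to production, hidden until GTM is ready
}

def is_enabled(flag: str, default: bool = False) -> bool:
    """Check whether a feature is visible to customers."""
    return FLAGS.get(flag, default)

def render_billing_page() -> str:
    # The new code path ships alongside the old one; the flag decides.
    if is_enabled("new_invoicing_ui"):
        return "new invoicing UI"
    return "current invoicing UI"

print(render_billing_page())          # current invoicing UI
FLAGS["new_invoicing_ui"] = True      # GTM launch day: flip the flag
print(render_billing_page())          # new invoicing UI
```

The point of the pattern is that the deploy (merging code) and the launch (flipping the flag) become two separate events, so GTM controls the second one.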
⏳ Big revenue goals? Find out what GTM operators are doing now.
↓ Access lightning-fast interviews with go-to-market pros.
1 roadblock that you are
working on now?
1) Trying to understand user behavior. I'm at a new company, I'm at Monograph, I've been here for just over three months, and I'm really trying to understand user behavior. So, your application database, it's going to have lots of information about the records that your users are creating and manipulating, which is really insightful. We've been digging into that, but it won't typically tell you how your users went about doing it. It won't reveal their behavior in the product. For this, you need event tracking. You need solid instrumentation throughout your product that can capture and reveal the story of their behavior. Setting that up is a highly cross-functional effort in which product and engineering need to be super, super aligned. So, we've got a really good team working on this right now. It's a hard project, but it's fun, and I'm excited.
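The difference between the application database and event tracking can be sketched in a few lines. The event names, user IDs, and properties below are made up for illustration; in practice this would go through an analytics pipeline rather than a Python list:

```python
from datetime import datetime, timezone

# Minimal event-tracking sketch: the application database stores the
# records users create; an event stream captures *how* they got there.
# Event names and properties here are hypothetical, for illustration.

EVENTS: list[dict] = []  # stand-in for a real analytics pipeline

def track(user_id: str, event: str, **properties) -> None:
    """Record one behavioral event with a timestamp."""
    EVENTS.append({
        "user_id": user_id,
        "event": event,
        "ts": datetime.now(timezone.utc).isoformat(),
        **properties,
    })

# Instrumentation calls scattered throughout the product:
track("u42", "project_opened", project="Library Renovation")
track("u42", "budget_tab_viewed")
track("u42", "invoice_created", amount=12500)

# The event stream tells the behavioral story the database can't:
story = [e["event"] for e in EVENTS if e["user_id"] == "u42"]
print(" -> ".join(story))
```

The database would only show that an invoice record exists; the event stream shows the path the user took to create it, which is the behavior you actually want to understand.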
3 mental models that you use to
do your best work?
I love this question. I love mental models. I've got probably four books that are just filled with models, some mental models and some other kinds. Three that I tend to fall back on often:
1) Double-diamond model. I see this most commonly applied to product management, but I think it's a wonderful, general model that can be applied to many, many contexts. In the double-diamond model, you have two diamonds next to each other, if you just visualize that. Going from left to right, the first diamond opens up and then closes, and then the next one opens up and then closes. What the model captures is a sequence of divergent thinking, followed by convergent thinking, followed by divergent thinking again, followed by convergent thinking. Typically, the first diamond is focused on product discovery and definition: you're exploring different feature ideas, diverging, and then you converge on the right feature. Then you hand it over to engineering, they figure out how they want to build it, and then they deliver. I think it works great for go-to-market as well: apply divergent, then convergent, thinking to GTM.
2) The rugged landscape model, which is one of my favorites. Imagine you're on a vast landscape with peaks and valleys all around you at different heights and depths, and your goal is to get up as high as possible. One thing you could do is look around as far as you can, find the highest peak, and start walking up it. When you get there, you might think you won. But really, what you've found is your local maximum, which is different from the global maximum; those are the two important terms in the rugged landscape model. Your local maximum is the highest peak you can see. Your global maximum is somewhere on the full landscape, which you can't see all at once; there's probably some peak out in the distance that's way higher than your local maximum.

So, how do you find it? Finding your local max is good, but really we want to get to the global max. One efficient way to find a global maximum is to start in many different locations and, from each of those locations, search for the local maximum. The way this maps to product is that your different starting points are fundamentally different approaches to solving a problem. If you're taking something to market and you have three different theories for how you could do it, those three theories are your different starting points. From there, you can iterate and AB test your way to each local maximum, hoping that one of them is the global max, or at least higher than the other two.
3) Just feedback loops in general. Love feedback loops. A feedback loop is when the output of a system is fed back into itself, and it typically results in one of two scenarios: a positive feedback loop or a negative one. A positive feedback loop is when the system amplifies itself. This tends to result in wild, chaotic, unpredictable behavior. Positive feedback loops are really fun; they're high-growth situations. A negative feedback loop is when the output feeds back into itself and tends to stabilize the system; it limits the system from exponential growth. These are nice because they result in a very stable and predictable system, which can be great if your goal is to forecast revenue into the future, for example. But if your goal is to grow as fast as possible, then you're looking for a positive feedback loop.

We see both of these play out together, layered on top of each other, in all sorts of scenarios. One easy example is the stock market. If there's a buying frenzy taking place, you might buy a stock, and that drives the price up just a tiny bit, which signals to anyone watching that the price is going up and maybe the company is doing well. That causes other people to buy the stock, which drives the price up more, creating a positive feedback loop that drives the price really high. That, in turn, can be limited by a negative feedback loop, sort of the invisible hand of the market: analysts, or just different investors, start to decide that the stock is overvalued and sell it off. That drives the price down and limits the run-up. It's one interesting way those two can play together.
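The contrast between the two loop types can be shown in a tiny simulation. The growth rate and capacity below are arbitrary numbers chosen only to make the two behaviors visible:

```python
# Feedback-loop sketch: the same starting value under a positive loop alone
# (output amplifies itself -> exponential growth) versus with a negative
# loop layered on (growth slows near a limit -> stable, predictable).
# The 10% rate and 1000 capacity are arbitrary, for illustration only.

def step_positive(x: float) -> float:
    # Positive loop: 10% of the output is fed straight back in.
    return x * 1.1

def step_with_negative(x: float, capacity: float = 1000.0) -> float:
    # Negative loop: the closer x gets to capacity, the smaller the gain
    # (a logistic-style damping term limits exponential growth).
    return x + 0.1 * x * (1 - x / capacity)

x_pos = x_neg = 10.0
for _ in range(100):
    x_pos = step_positive(x_pos)
    x_neg = step_with_negative(x_neg)

print(f"positive loop only: {x_pos:,.0f}")   # explodes
print(f"with negative loop: {x_neg:,.0f}")   # levels off near capacity
```

The second curve is the one you can forecast; the first is the one you want when you're chasing growth, which is the trade-off described above.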
2 techniques that GTM teams
need to try?
1) AB testing. Everyone's familiar with AB testing. We've got to do it; it's so important. The one thing I would emphasize is that I'd discourage AB testing just to AB test. I'd encourage us to really think through, "What are our hypotheses? How is each variation in an AB test connected to a different hypothesis? And what's the underlying theory that's driving those hypotheses?" If you imagine it as a hierarchy, you have your theory at the bottom, which results in a few different hypotheses, and then at the top you have all of the different variations of your AB tests, each connected to a hypothesis. A structure like that lets you take your results and connect them back down to hypotheses and theories. That's the sort of framework that allows us to really learn effectively as a team, instead of AB testing, looking at the results, and saying, "Well, I guess B won. Let's go with B," which doesn't really get us to the next level of learning.
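The theory → hypotheses → variations hierarchy can be represented as plain data, so results always roll back up to a hypothesis. The theory, hypotheses, and conversion numbers here are all invented for illustration:

```python
# Sketch of the theory -> hypotheses -> variations hierarchy, so test
# results map back to what was learned instead of just "B won, ship B."
# All names and numbers below are made up, for illustration.

experiment = {
    "theory": "Buyers convert when pricing feels predictable",
    "hypotheses": {
        "H1: flat pricing reduces hesitation": ["A: flat-price banner"],
        "H2: cost calculators build confidence": ["B: cost-calculator CTA"],
    },
}

results = {  # (conversions, visitors) per variation
    "A: flat-price banner": (58, 1000),
    "B: cost-calculator CTA": (74, 1000),
}

# Reading results through the hierarchy: each rate is evidence for or
# against a specific hypothesis, which in turn tests the theory.
for hypothesis, variations in experiment["hypotheses"].items():
    for v in variations:
        conv, n = results[v]
        print(f"{hypothesis} <- {v}: {conv / n:.1%}")
```

With this framing, "B outperformed A" becomes "H2 has more support than H1," which is a statement about the theory you can build the next round of tests on.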
2) Fake door tests. I’ve done this in applications a few times, but I think it could be really relevant for GTM as well. So if you have a product idea, and you want to get some kind of early signal on it, one thing that you could do is drop some buttons in your application, or some CTAs in your messaging, that suggest that the feature is there when it actually isn’t. You don't need to call too much attention to it, but what you want to look for is, are customers clicking on the button? Are they clicking on the CTA? Are they asking about it in demos? Are they inquiring and just interested in the possibility of that feature? And if they do click on it, there's a couple of things you could do. You could take them to a form to sign up for a waiting list. You could provide them some more information about the feature and ask for feedback, kind of explain what you want to do with it, and then ask for input to get some more qualitative information. It's just a really cheap, quick way to do some early testing and validation on your idea.
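A fake-door button can be a very small amount of code. The feature name, user IDs, and waitlist copy below are hypothetical, for illustration:

```python
# Fake-door sketch: the button exists, the feature doesn't. Each click is
# logged as a demand signal, and the user lands on a waitlist/feedback
# message. Feature name and copy are hypothetical, for illustration.

interest_log: list[str] = []

def handle_click(user_id: str, feature: str) -> str:
    """Record interest in a not-yet-built feature and route to a waitlist."""
    interest_log.append(f"{user_id}:{feature}")  # the early signal you want
    return (f"'{feature}' is coming soon - join the waitlist "
            "and tell us what you'd want it to do.")

print(handle_click("u7", "client-portal"))
print(f"signals collected: {len(interest_log)}")
```

The log (plus any free-text feedback collected on the waitlist form) is the cheap, early validation signal: click volume tells you how much demand exists before you build anything.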
3 operators that should be our
next guests and why?
1) Mike Ivey. The first one I would call out is Mike Ivey, one of the co-founders of Modern Message, acquired by RealPage in 2020. I joined Modern Message when they were about 10 people, and I worked there for five years; Mike and I worked closely together for probably three of those five. I just had a great time. He's a great product thinker and a good leader, and his co-founder experience means that he's well-acquainted with not just product, but every aspect of a business, including go-to-market. I think he'd have a lot to share.
2) Anthony Murphy. Super knowledgeable product leader and coach. I've had lots of chats with Anthony about product and about process. And I know he'd have a lot to share on go-to-market.
3) Matt Baxter. He's currently a Director of Product at Bestow, where I worked before Monograph, and he's just a great product leader. He was taking a brand new business unit to market when I was on my way out, and it was a huge project, and it was just so much fun to watch Matt do his work. I know he’d have tons of interesting details to share about the go-to-market challenges that he faced.
Work with Andrew (and me) →
Monograph is hiring!
April 2022 · Interview by Chris Morgan, Host of Market-to-Revenue
Market-to-Revenue Podcast ⚡️ Lightning-fast interviews with GTM operators in sales, success, product, and marketing.