Your A/B Tests Suck: The Key to Running Effective Experiments


Georgiana Laudi & Claire Suellentrop

Forget The Funnel

SaaS teams use the best practice of A/B testing like a crutch for marketing, messaging and product growth decisions. 

And it makes sense. 

If you think about it, A/B testing something relieves you of so much burden. Not sure which headline is gonna resonate? Test it! Teammates have an (ahem, questionable) idea for how to improve conversions? Test it!

Here’s the problem with the “test everything” approach: The vast majority of A/B testing is devoid of customer understanding and designed to make incremental improvements, typically without impacting revenue in any meaningful way.

If you can ‘cut the cord’ from the purely data-driven statistical model of trying to optimize a page or user journey, you can build bigger, better, more powerful A/B tests that hit customers directly in the feels (and your ARR).

On this episode of the Forget the Funnel Podcast, Marc Thomas from Podia shares why testing isn’t the best practice it’s cracked up to be. Georgiana and Claire share when running tests is valuable and why knowing how your customers think, feel, and behave helps you run more productive and profitable tests.

Discussed:

  • Why “let’s just test it” isn’t the helpful universal best practice everyone thinks it is and where testing without customer insights can go wrong.

  ‱ Why you should try getting closer to your ‘clonable’ customer before you run your next A/B test, and when testing can be most valuable.

  • Why you shouldn’t always believe what the test tells you, what you need to know to build more productive tests and the measures of success you should be paying attention to instead.

Key moments

1:04 - Marc Thomas discusses A/B testing as a flawed “best practice” in SaaS. He explains why it isn’t always the answer and why getting close to your best customers is a better alternative.

8:18 - Georgiana sets the stage by explaining A/B testing (AKA split testing), how it is conducted, and what it usually tests.

10:28 - Georgiana discusses scenarios when teams might use A/B testing, like validating a hypothesis before taking a big swing. She also explores when a smaller conversion rate can actually be a good thing.

15:07 - Claire explains why testing for testing’s sake will not produce the desired results. Instead, effective testing starts with understanding your customers to form stronger hypotheses.

18:11 - Georgiana tells a story of how testing went south for one company when the team tried to find a suitable pricing model. A/B testing steered them wrong until they zeroed in on their best-fit, higher-value customers and what value meant to them.

21:55 - Claire explores excellent and bad testing examples to show how it can be a valuable use of time when it starts from in-depth customer knowledge. 

23:29 - Georgiana talks about the value of identifying Jobs-to-be-Done for one company so that they can build the customer experience around their high-value customers and optimize for the right things for the right customer.

27:25 - Claire points out how major organizational change can come from a handful of conversations and when SaaS companies should run tests.

Transcript

Gia: I think that is the biggest mistake that I see teams run into, their biggest miss: they end up focusing on increasing the conversion rate of a website instead of actually increasing activation rates or moments of value.

Intro: Hey, everybody, welcome to the Forget the Funnel podcast, where our goal is to help you as a SaaS leader finally stop guessing, understand your best customers, and drive more predictable recurring revenue. We are Gia and Claire, founders of Forget the Funnel, a product marketing and growth consultancy that helps SaaS businesses learn from their best customers, map and measure their experiences, and unlock their best levers for growth.

If you're looking to help your team make smarter decisions, this show is for you.

Claire: Hey everyone. Welcome to another episode of the Forget the Funnel podcast. Today we have another one of our best practice episodes where we invite experts from the SaaS world to share a common best practice that they actually think is totally broken.

Then, Gia and I will share our take on what we think you should do instead. Today, we are joined by Marc Thomas, Senior Growth Marketer at Podia. Marc is joining us today to share his thoughts on A/B testing. Uh, so here they are.

Marc: What's up? My name's Marc Thomas. I work in growth at Podia. What's a flawed best practice in SaaS? I think that a flawed best practice in SaaS that we do all the time is that everyone's default answer is to say, let's test it. The reality is that A/B testing is challenging. Uh, and it doesn't always produce the results that you think that it's going to.

And it can often lead you to misleading stuff. What makes it flawed? A/B testing has a number of different flaws. The first thing is you can only optimise what's on your page or your product. Like, if you've got a hundred people coming onto your product, let's say you've got a hundred signups, and you're trying to get them to convert to, you know, paid signups, you can only optimise those hundred people.

And yet you spend a huge amount of time trying to optimise for those. And even if it's a hundred thousand, like, is a 1 per cent difference really going to move the needle for your actual growth in your company? It's just statistical, right?

Even if you do this over and over and over again, those gains are not just going to compound, 1 per cent additional, 1 per cent additional, 1 per cent. You're not going to get to 100 per cent conversion just by trying to A/B test your way to it. The other thing is, you're basically doing something so inhumane by trying to A/B test your way to success. You know what it's like?

It's like when, uh, let's say, let's say you've got a piece of cheese, uh, and you put that cheese onto a table, and then you go, here are all of the obstacles for this mouse that I've just put down. Honestly, A/B testing, it's more like a rat, and the rat is the user. And now you've got to try and work out, okay, what's the optimal way that I can get this rat to that cheese so that it can have it?

The whole thing is trying to optimise a maze, right? You're trying to do the same thing for your users. You're trying to take walls out of the way, put them in different places, or move all of the walls so that they just go straight to the cheese. The reality is you could just take the cheese to the rat.

Right? It can be nice, it can be pleasant and human, uh, you don't have to build a maze, you don't have to try and optimise somebody's behaviour and take all of their, kind of, ability to think and evaluate away from them in order to get them to the goal, which is to solve their problem. And while doing it, pay you so that you can solve your problems.

Why do you think people get swept up following this best practice? I think there are loads and loads of reasons, but here is the killer one for me. Firstly, I just think it really is a way of saying shut up in a marketing meeting. Almost all the time, when somebody says, oh, I don't know, let's test it, it's because they're saying, I don't actually think your idea will work, but let's get some data just in case I could be wrong, and so that you shut up. That's the common one.

Look, the real thing is, if we knew what worked, we would just do that thing. Right now, we do not know what our users want, how they feel and how they think, and that is the key driver for trying to optimise a hundred visitors. It's trying to optimise that rat maze. The key driver is: I don't actually know what my customers think and how they feel. And so I've got to try and, like, statistically drive them to take an action that I want them to take, rather than the action that they actually want to take or the way that they want to think about it.

That's honestly the key reason for this: people just don't know how their customers feel and think. Let's think the best that we can of this, uh, let's think that people probably aren't just trying to kind of shut everyone up or just purely rely on, you know, the things they don't know. There is this angle, obviously, which is, like, well, people really struggle to talk to other people.

They struggle to ask them questions about their lives. The concept of doing customer research for a lot of people and gaining customer insight is actually really challenging. You know, maybe they're not wired that way, or maybe they feel uncomfortable with it. And so they lean back on what they can, which is we've got cold, hard numbers here, and we've got them up the wazoo.

For example, if you have a SaaS company and you're building a product, you've got numbers coming out of your ears. So, look, why don't we just use the things that we already know? And we engineer this kind of, you know, this rat maze, uh, so that it's perfect and perfectly understandable but ultimately perfectly pointless. This is not going to get you the growth that you actually want.

What should they be doing instead? Well, I guess this is pretty simple. It's simple and really hard at the same time. Okay, the simple bit of this is: you should be trying to get closer to your customers and trying to understand what it is that drives them.

Why did they sign up for this product? Or why did they even come and look at this landing page? If it's prior to the signup process, what is it that they're trying to do in that day? Why have they not chosen other tools? This kind of thing, right? Now that's a really simple kind of understanding. Getting that understanding is really challenging sometimes.

What I've seen in my own work is that it usually doesn't come from just one source. It doesn't just come from, hey, I've done ten customer interviews. Like, great if you've done that. Congratulations. You should be doing that, but maybe you also factor in some survey data. Maybe you, maybe you survey, you know, X number of people, maybe you watch screen recordings of what they do on your, not just your, uh, your app, but also your landing pages.

Like what are they doing there? What are they interested in? What are they not interested in? What kinds of things are they saying on social media? All that kind of social listening stuff. What are review sites saying? Some of that data is going to be like more relevant than other data, but all of that together will start to give you a picture of what your customer actually feels.

Now you can combine that with stuff that is very clear, very obvious. So sales tapes and self-reported attribution, when they say, Oh, I signed up to do this. All of this data goes together to build an understanding of your customers. Now, what do you do with that thing? How do you actually apply this so that it replaces A/B testing?

Again, it's simple but complex, right? The simple bit of it is that you create, from those insights, novel campaign messaging that actually responds to these people. You show off bits of the product in illustrations on your landing pages that actually apply to what they're trying to do. The reality is that that bit is still quite complicated, right? You will miss it sometimes.

My argument is that by getting away from this statistical model of trying to optimise a page to, okay, we know about our users, therefore we can build bigger, better tests that really, you know, take bolder swings and aim for hitting our customers directly in the feelings, or, like, the emotional element of actually trying to get somebody to go, well, this product works for me because it has these features and they clearly understand me better than anyone else on the market.

The kinds of things that you're going to create after that are much more effective than simply saying, well, if we change the layout of this section and we change that button colour, uh, we'll, you know, we'll get our rats to the cheese in a much more efficient way. 

Gia: Okay. Love Marc. Love that answer. Um, surprise, surprise.

We agree. But I think that to sort of set the stage for the conversation, we probably need to talk a little bit about what A/B testing is, and just get clear that, like, the thing that we're talking about here is split testing, or A/B testing, which is essentially this idea that you split traffic, or, like, audiences, because sometimes, you know, it'll happen on the product side as well as it does for websites, but traffic, so to speak, like people: you will split people, uh, into two different experiences.

In general, we're talking about an A/B test where you do, um, like a 50-50 split of, uh, of traffic. When I was in-house at Unbounce, we talked about split testing a lot. Uh, it was, you know, a huge, huge driver of value for our customers, this idea that you could test, you know, your messaging, or layout, or colour, or button colour, or headlines, or images, or videos.

Like, test everything with split testing, that was our mantra. Sometimes, we would advocate for a 50-50 split test. Other times, when people were feeling, like, cautious, we would say, you know, use your control and send about 80 per cent of your traffic to your control, and then start your variant with 20 per cent, or, if you're feeling really risk averse, 10 per cent.

But in general, when we're talking about split testing, most people are describing, like, a 50-50 split, but obviously other, you know, percentages definitely exist, depending on your situation. So when we're talking about split testing or A/B testing, that is what we're talking about. We're not talking about UX testing or usability testing.

Um, that's not the situation here at all. We're really just zoomed in on this idea that, like, if you don't know the answer to something, test it, split test it. Yeah, exactly. Okay, so let's dive in. Yeah. 
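
A quick aside for anyone who wants to see the mechanics Gia is describing: below is a minimal sketch of how a 50/50, 80/20, or 90/10 traffic split is often implemented. The visitor_id and assign_variant names are hypothetical rather than taken from any particular tool; the point is just that each visitor is deterministically bucketed into control or variant according to the chosen share.

```python
# Minimal sketch of a traffic split (hypothetical names, not from any specific tool).
# Hashing the visitor_id keeps each visitor in the same bucket across page loads.
import hashlib

def assign_variant(visitor_id: str, variant_share: float = 0.5) -> str:
    """Return 'variant' for roughly `variant_share` of visitors, else 'control'."""
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map the hash to a value in [0, 1]
    return "variant" if bucket < variant_share else "control"

# 50/50 split by default; pass 0.2 or 0.1 for the more risk-averse splits
print(assign_variant("visitor-123"))        # 50/50
print(assign_variant("visitor-123", 0.2))   # 80/20
print(assign_variant("visitor-123", 0.1))   # 90/10
```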

Claire: Yeah, good foundation. I'm hoping you can, like, talk us through some situations, like, when might teams lean on A/B testing?

What are the scenarios? Because I know you were having a conversation with a CTO just last week about, how do we test? How do we test it? When do we test it? 

Gia: Exactly. Um, there are a lot of scenarios where a team might lean on A/B testing as, uh, the sort of answer to, like, I don't know what to do in this certain situation.

One of the scenarios is definitely the one that Marc described, where, like, a head of marketing gets an idea from their boss, and their boss is like, we should add a calculator to our home page, and that head of marketing or whoever's responsible for the website is like, that is a terrible idea, and I'm going to prove to this person or to my boss that that is a terrible idea. Or it's, you know, uh, leadership on an adjacent team, or, you know, because everybody is marketed to, so we all think we're experts at marketing.

It's hard to be a head of marketing. Um, especially in, like, MarTech, but don't get me started on that. But you maybe end up in the situation where, like, I'm getting all these ideas. Um, you know, people who are stakeholders in my work really have these ideas, and I need to show them that, (a), I care about their opinion, but also, (b), it's probably a bad idea, and so I'm going to prove to them that it's a bad idea, so we'll test it.

And if they really want to test it, then, like... Another situation, which I see happen a lot too, is where teams are, like, really nervous, um, or really risk averse for a number of reasons.

And so they're like, we need to test everything because we need the data. So they're in a situation, and I see this happen with product teams a lot, where they're like, we cannot take too big of swings, like, we need to make very calculated decisions because the implications are so big.

And so, in order to get buy-in on their ideas, they run split tests to basically, like, validate their hypotheses, and they sort of run in hypothesis mode. And then there's this other situation, which is the one that you are speaking about, which is the conversation that I was in last week with the leadership team, where we have just come out of the research and learning about their customers.

And we have all this amazing intel on this really, uh, high-value group of customers. And the CTO was like, okay, but how do we test this? Like, we're not just going to overhaul everything, right? Like, we need to... we're going to test this, right? And my answer was yes, absolutely, this should be tested.

But just remember that when you're testing, let's say, new messaging for a, uh, a better-fit customer for your product, I often have to caution teams against measuring success by the conversion rate on their website, because, arguably, the conversion rate on your website could go down. But if that's generating more revenue for the business, it's still a win.

So, just because your homepage is converting more, or your pricing page on your website is converting more, that does not mean that the downstream impact of that is more revenue. And I know that we have an example related to that. So there's this sort of being a little bit cautious; I see this with teams that have been around for longer, right?

They're like, well, our website has always been for this customer. We're not just going to all of a sudden, you know, completely make this shift. It can be, um, it can be scary to make that big of a shift in messaging.

Sometimes, when you come out of a research project and you're like, oh, we've been targeting the wrong customer this whole time, teams can still feel concerned, like, I know I saw the research, but could we, like, could we split test, uh, you know, our homepage, basically, with, like, an 80/20 split or even a 90/10 split? 

Claire: Right. Give us some kind of indicator that this is going to be effective before we, I think, like in our notes, we talked about, like, before we bet. That's right.

Like, can we get some kind of indicator that this is a good idea? Um, and I think that's really, yeah, it's understandable. 

Gia: I want to make clear that we're not saying that teams shouldn't do that. We absolutely should in that scenario. But there are some situations where, like the first example, trying to prove that an idea is bad or trying to prove that an idea is good in the absence of customer understanding, which is what Marc, you know, was talking about.

That is where A/B testing can go off the rails when you're just sort of running tests for a test's sake, and you're not actually going to be moving the needle in a meaningful way. That is not the scenario that we're describing with this team that is shifting their messaging in a pretty dramatic way.

In that situation, I would say A/B testing is absolutely valid and should happen. 

Claire: This actually, uh, this is a nice segue into, like, the next part of this topic we were going to discuss, which is, like, the challenges or the limitations associated with A/B testing everything. We are not anti-A/B testing.

We are not anti-testing in general, but there's a very big difference between testing for testing's sake or testing in the absence of customer insight versus learning some of the key things you need to know about your customer base and then running tests based on what you have learned. And I think the version of A/B testing or the, you know, the best practice of A/B testing that Marc is calling out is that idea of testing in isolation.

There was one line in, um, in what Marc said about getting out of the statistical mindset that really resonated with me, because when a team is running A/B test after A/B test after A/B test, there is more of this focus on, like, we need the statistics to prove that this exercise was valuable, versus a scenario of gathering customer insight that helps you form hypotheses that are strong enough to enable you to take bigger swings, swings that can be more impactful on the revenue numbers but may not give you the sexiest-looking statistically significant results. 
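
As a side note on what "statistically significant" usually means for a split test, here is a minimal sketch of a two-proportion z-test, one common way such results are judged. The function name and the conversion counts are hypothetical; the example mainly shows why a small lift on modest traffic often fails to clear the significance bar.

```python
# Minimal sketch of a two-proportion z-test using only the standard library.
# The sample sizes and conversion counts below are hypothetical.
import math

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    """Return (z statistic, two-sided p-value) comparing conversion rates of A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# e.g. 400/10,000 control conversions vs. 440/10,000 variant conversions (a 10% relative lift)
z, p = two_proportion_z_test(400, 10_000, 440, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p is about 0.16 here, so this lift is not significant
```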

Gia: Yes. To your point, we can end up chasing our tail a little bit and chasing small, tiny incremental wins because we're focused on the individual trees as opposed to the forest. So, definitely, getting a clearer picture and a better understanding of who your ideal customers are, even if that means a drop in your website conversion rate, if it ultimately turns into, you know, higher activation rates, higher engagement rates, higher ACV or higher LTV, that is a win.

And I think that is the biggest mistake that I see teams sort of run into, their biggest miss, where they end up focusing on increasing the conversion rate of a website instead of actually increasing activation rates or moments of value, like customers hitting moments of value. So that is actually what I told the CTO: like, yes, you should absolutely test this.

And I just want to make sure that your measure of success, and that everybody in the room understands that the measure of success, is actual activation rates, not your website's conversion rate, because so many teams go wrong there. And it means a longer test, right? It might, depending on how, uh, how quickly your customers hit first value.

And we can talk a little bit about, like, what that right moment is, like, how would you measure a win, right? Like, how does a team measure the success of, like, new messaging and positioning on their website? Genuinely, if it's not the website conversion rate, well then what is it? We can maybe talk a little bit about that, but ultimately, at the end of the day, is it turning into more customers?

That is what determines the winner of your test.
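
To make that point concrete, here is a purely illustrative back-of-the-envelope calculation; every number in it is invented for the example rather than taken from the episode. It shows how a variant can convert fewer website visitors and still win once you follow those signups through to activation and revenue.

```python
# Purely illustrative numbers (made up for this example) showing how a lower
# website conversion rate can still produce more activated customers and revenue.
visitors = 10_000

# Control: broad messaging, higher signup rate, weaker-fit signups
control_signups   = visitors * 0.040          # 4.0% website conversion -> 400 signups
control_activated = control_signups * 0.20    # 20% reach first value   -> 80
control_revenue   = control_activated * 600   # $600 average annual value

# Variant: messaging aimed at the best-fit customer, lower signup rate, better fit
variant_signups   = visitors * 0.030          # 3.0% website conversion -> 300 signups
variant_activated = variant_signups * 0.35    # 35% reach first value   -> 105
variant_revenue   = variant_activated * 1_000 # higher-value customers

print(f"Control: {control_activated:.0f} activated, ${control_revenue:,.0f}")
print(f"Variant: {variant_activated:.0f} activated, ${variant_revenue:,.0f}")
# Control: 80 activated, $48,000
# Variant: 105 activated, $105,000  <- lower conversion rate, more revenue
```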

Claire: I would love to dig into, you know, an example of A/B testing leading to a problematic or misleading result. And I know there's a company we've been working with that has been struggling with this. Maybe you can paint the picture because I know you had the original conversation with them, but they had a very successful pricing and packaging test.

Gia: Ultimately, what it ended up turning into was a shift in their pricing strategy. Shifts in pricing strategy are often determined, I mean, they're determined in many ways, but one of the ways is the actual economics: let's do the math on how this would impact things.

People also make decisions about pricing strategies and pricing and packaging based on testing it on pricing pages. One of the inputs to determining one pricing strategy over another is rolling it out on your website, putting it on your pricing page, and seeing how it performs.

And so this team, again, I don't know all the details of how this test was run, but ultimately this team made a decision to move to usage-based pricing, and, like, the test won, right? And this is not going to surprise anybody listening to this, that the test won: usage-based pricing can be really, really attractive.

And what we have found since, well, we're in the middle of this, like, engagement with this client, so we haven't gone all the way through this, but this was just such a big learning. What we found in the research was that there were actually two pretty distinct groups of customers showing up.

One that usage-based pricing was really, really attractive to, making this tool, like, a no-brainer, because they were looking at this tool a little bit like a commodity and a little bit like a bolt-on to what they were doing, and they wanted to be able to make a decision quickly, implement quickly, and move on, because they were seeing this tool as being, you know, again, like a bolt-on, a bit of a commodity. So usage-based pricing for them was like, yep, it's got to be.

What we realised was that there was this other group of customers also showing up, and they were looking at this tool as a solution, a platform, or a growth lever. And for them, the usage-based pricing was actually almost worrisome because they were like, oh, this, maybe this, I should be thinking about this as a commoditised tool.

Maybe the lens through which I should be looking at this solution is actually as, like, you know, smaller, uh, lower impact, again, not to use that term again, but, like, more commoditised. And so these more serious businesses were looking at this as, like, this is a real investment we're making as a business.

This could be a huge growth lever for us, and this is a strategic business decision that we're making. Usage-based pricing, or, you know, only usage-based pricing, our hypothesis is that it might be causing a little bit of anxiety in that way for the higher-value, better-quality, better-product-fit (add in all the qualifiers here) customer.

And so we have these two customers that have surfaced in the research, and it is not a surprise that if you were basing the sort of winner of this test on conversion rates on the website, well, yeah, of course you would go with usage-based pricing if that was one of your inputs to determining a winner. And by the way, I don't want to pretend for a moment that this team made this decision solely based on an A/B test.

They didn't, but it was one of the inputs to determining the winner, which has led them down this path of realising: oh shit, this test basically attracted us a poor-fit customer, and it's dropped ACV across the board. So anyway, now we're in a situation where we know so much more about these amazing customers, and we can make way more informed decisions about what to do.

Pricing and packaging is going to work a lot better for them. Okay, so let's talk about, like, okay, what does good look like? And what's the flip side of this? What is required in order to get to a place where A/B testing actually is going to be a lot more meaningful? Can you talk us through that?

Claire: Okay, I'm gonna use two examples, like, a good and a bad. And they're going to sound so silly, but if you're listening, try to apply them to your own scenario, because they're real. I'm just, you know, packaging them up for this conversation. So, a good example of running a test on, you know, on a website or as part of a signup form: research showed our customers care about X, and so we are going to test a call to action that is more aligned with X or that promises X.

A bad example: our marketing advisor said we need to have a lead magnet, and so we're testing having a lead magnet. And that's a real no-go. I know that's so silly, but that is an actual conversation I had with a CEO who, he's an intelligent person.

He's a subject matter expert on the industry that his product serves, but he's not a go-to-market expert, so he's getting this advice and thinking, it sounds like it makes sense to me. And I think that really paints the picture of how good of an antidote A/B testing can feel like. But when it's just a best practice, and it's not rooted in something you know about your specific customer, it's usually more of a waste-of-time or chasing-your-tail exercise than it is really that productive.

Let's go back to the conversation, actually, that you were having with this leadership team, including that CTO. What did the research look like that led you to that point at which you got some really great learnings, you surfaced some meaningful insights that have the potential to change how they operate?

Gia: I think the difference maker there was, like, the, um, mental shift that the team was able to make.

Again, I can't remember what the exact number was, I want to say maybe 11 interviews were run, and we identified three different jobs to be done among these 11 customers, which was really, really interesting. The insights that we brought back to the team were both validating for the team, like, they were like, yep, we've heard this before, yep, this makes sense.

But also the differences between these three groups, that was the mindset shift for them, because they had sort of conflated all of these jobs to be done across all of their customers. And so they were solving for all of those things all at the same time, because they were thinking of their customers as being, like, one group of people that just had all of these challenges and all of these, that's not the language they would have used, but, like, jobs to be done.

And so what we did was we carved out, like, actually, there's this group over here that talks about these things. And then there's this group that talks about these things, and for them, these things are really important. And there's this other group. And for the first time, they realised, oh, these are genuinely three different situations that people show up to our front door with, so to speak, and so that was a mindset shift for them.

And what we helped them do was prioritise one, and it was not hard to convince them at all, because it became really obvious once we'd sort of dug in to the three different groups. They were like, oh, yeah, it's definitely this group over here. We like them. These are the ones we would copy, paste, clone, or whatever.

And so now we're at the point of like, okay, great. So we're focused on this group of customers. Everybody agrees that this is the customer experience that we want to build. Now comes the time to actually like change the customer experience, change the messaging and positioning on your website, change the product onboarding experience, and maybe even change some of the post-acquisition because this is a B2B situation where there are other people who need to get involved downstream.

And that's when it starts to get really scary, and they're like, oh, now, like, we actually need to, like, we made all these learnings and that was great, and it felt really good, but now we need to actually, like, put this out into the world. And that can feel really scary.

Um, it was, you know, just as much on the website side to take that sort of leap of faith as a team and put this new, you know, messaging out into the world, but also for the post-acquisition team to start thinking of themselves as post-acquisition, because they weren't thinking about themselves in that way. So that was another shift for them, where they're like, whoa, we can't remove ourselves from that acquisition process.

We're like, yes, you can, because that's not actually where you're needed. You're needed over here. So there's some, you know, they've operated in this way for ten years; it's a mindset shift that they need to make. So, obviously, testing it is going to be needed to make them all feel better about the decision.

So again, the research project was just 11 conversations, identifying those jobs to be done, having that, like, alignment with the team that, yep, we all agree that this is the customer experience that we want to solve for, these are the customers that we want more of. We're all in agreement there, right?

That can't happen in isolation. That can't just be the founder and a hired gun who's done research saying, yeah, that's our plan. Like, it has to be something that is internalised, mentally processed, given a couple of days to let it marinate and to think about, like, how does this impact my day-to-day? And then, okay, how are we actually going to put this out into the world and feel really good about our decision?

So that was how that conversation went. And I absolutely understand why they would want to split-test it. And they should, but they're doing it informed. They're doing it backed by a bunch of customer data, right? We've got a lot of customer data to back up this hypothesis of this ideal customer.

Um, so they can feel really good about that test, but still testing it and continuing to optimise. And they should continue to optimise forever. 

Claire: I like that you established, though, that, like, all of this really incredible, like, operational change comes off the back of just 11 good conversations.

And similarly, with the team we were talking about earlier, where we've learned that within their customer base there's a more, like, commodity-seeking customer type and then a more major-business-investment customer type, that came out of 10 customer conversations. 

Gia: I want to be clear that like, that is the foundational understanding that is needed to make more informed decisions about what to test.

And it is not the end-all, be-all. We wouldn't encourage you to stop there, like, run ten interviews and you're good and you're done. It's definitely something that we would encourage teams to do pretty regularly, depending on how often your product changes and what your release cycles sort of look like.

Sometimes we have teams that do it, like, twice a year, sometimes it's annually. Again, really depending on how quickly the product is changing, how quickly your market is changing. But if you are the type of person that loves running A/B tests, you would probably want to run this type of research at least twice a year, because it will give you so much to test.

It's going to keep you busy. 

Claire: It will. And, and way, way better results at the end. 

Gia: Yeah. If you want real, like, um, you know, uh, uplifts in your conversion rate, definitely do this type of, uh, research, because you will be, uh, you will be very, very happy with all of them.

Claire: You will look so smart. You will look so good.

Gia:  You will look so good. Exactly. Exactly. 

Claire: All right. See y'all next time. 

Outro: And that's it for this episode of the Forget the Funnel podcast. Thanks for tuning in. If you have any questions about the topics we covered, don't hesitate to contact Gia or me on LinkedIn. And you can also visit our website at forgetthefunnel.com 

This is still a new podcast for us. So ratings, reviews and subscriptions in your podcast platform of choice, make a huge difference. See you next time.



Georgiana Laudi & Claire Suellentrop

When it comes to growing multi-million dollar SaaS businesses, we’ve seen what works. Both separately and together, we've built best-in-class brands from the ground up and played key roles in revenue growth. While our background stories may differ — Gia’s a Canadian who’s been marketing since 2000; Claire’s an American whose marketing career began in 2012 — we’re united in wanting to support those growing SaaS companies, and to provide resources they need to step up as strategic leaders. You can learn more about us here.
