All right. I hope you are ready to talk about some testing; get your pencils and paper out and your Scantron sheets because it's all about testing today. Let's see if you pass or you fail.
Today, we're going to talk about the lifeblood of ad accounts, which is testing: both audience testing and creative testing. We're going to go through the testing procedure overview. This is going to be high level and more theoretical. We're going to dive into some comments at the end, but this will be a bit more theoretical. I realize I'm going to sound like an eighth-grade math teacher right now. That being said, once you understand the testing principles, you don't need to memorize anything.
It's essential to understand the principles and the theory behind it. When you do, you don't need to remember anything, and you can just continue on with your day and continue through testing.
Testing Overview
So first and foremost, testing overview: what is it, what are we actually doing, what does that mean? One of the inside jokes, quote unquote, of advertising is, "Oh, just test it. You just need to test more." It's like, what does that even mean? Basically, what you need to understand is that in order to scale something (in order to spend more money on something and make more money), we need to have something that works. And one of the big issues is that if you don't have a testing procedure to do that, or if you don't even know what you're testing or why you're testing it, then you've lost before you've even gotten started, essentially.
So, high overview of testing: it's important to always be testing. Even if you're spending $15,000 or $20,000 a day, your five winning ads are crushing it and getting you a 4X on $15K a day, you have 10 winning audiences, and your account is scaling on broad, you always need to be testing. Why? Because creatives will fatigue and offers will fatigue. Audiences are typically difficult to fatigue, but they can, especially if they're smaller. So it's not a matter of if, it's when, and you need to be ready. The worst place to be is when your scaling campaigns start to die off and you don't have anything to replace them with. That is exactly where you do not want to be.
That's why creative testing is so important. So testing is broken up into two parts. What do you actually need to scale Facebook ads? You need an audience and you need a creative, in this case an ad. So we need to do testing on both of those. Audiences are like the car; creatives are like the gas. Once you have a car, you have it for a while. Unless you get a really crappy car, you don't have to replace it very quickly, and you can run it for a while, but it needs gas in order to run. Creatives are that gas. If you're running low-quality creatives, low-quality gas in this case, then your car is not going to perform well and it's going to sputter out.
And once it runs out of gas, no more money, right? You've got to get more gas for the car. Another analogy: audiences are like the logs, creative is like the gasoline or lighter fluid. You need the logs to actually build your fire, and then once it's lit, you pour the lighter fluid on, and that's what keeps it going. So what does this mean? It's not to say that audiences are not important, and it's not to say that creatives are not important. Both of these things are absolutely crucial to running an ad account. It's just that audiences are typically going to last longer, and creatives are usually not going to last as long.
Within testing, you want to have a three-day timeframe minimum on all of your tests. We recommend a three-day minimum. But if you have a longer-sales-cycle product, a higher-ticket product, something that in theory could even take months to sell, then you need a longer testing timeframe. Three days is typically fine for e-commerce. For the info space, something that has an email sequence or a longer buying cycle, whether it's a service, a course, an application funnel, et cetera, those are typically going to require more time in testing, and you have to figure out what you're actually testing. What is your testing KPI? What are you benchmarking a good test against?
And obviously we're going to dive deeper into that, but before this, you should already understand what your testing KPIs are. You should know the top-of-funnel CPA you're looking for, your top-of-funnel lead cost if you're running a lead funnel, your target cost of acquisition per customer, et cetera. You should already know that. Basically, whenever you're running a longer buying cycle, we just need more data. Imagine if everybody in your test comes in on day one and they don't buy until day seven. You could shut off your test on day three and say this was horrible, it sucked, and then you look back a week later and go, wait a minute, this got a bunch of purchases, what happened here? Well, yeah, it's a longer sales cycle.
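To make that day-three-versus-day-seven point concrete, here's a toy sketch in Python. The numbers are my own, not from the lecture: I assume the test's purchases all land around day seven of a longer sales cycle.

```python
# Toy sketch (my own numbers, not from the course): a 7-day sales cycle
# means purchases from a test can land well after a 3-day cutoff.
conversion_day = 7                      # buyers convert about a week in
purchases_by_day = {day: 0 for day in range(1, 11)}
purchases_by_day[conversion_day] = 3    # the test's 3 purchases arrive late

def purchases_seen_by(day):
    """Cumulative purchases visible if you judge the test on `day`."""
    return sum(n for d, n in purchases_by_day.items() if d <= day)

print(purchases_seen_by(3))  # 0 -- day-3 verdict: "this test failed"
print(purchases_seen_by(7))  # 3 -- a week in, it actually hit target
```

Judged on day three, this test looks like a total failure; judged after the full cycle, it performed fine. That's the whole argument for stretching the testing window on longer buying cycles.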
So it's important to understand that. The short answer is that courses, high-ticket funnels, et cetera can need seven days on both an audience test and a creative test to really see. I'm going to show you some examples, but that's the high level. Okay, this is a really important one: creative testing is the secret sauce of scaling. Creatives, offers, angles, et cetera will scale an ad account way easier than audience testing. Audience testing is really easy because it's very easy to implement, and the fact that it's easy can make you feel productive. It can make you feel like, oh, I tested a bunch of new audiences and it took me 10 minutes. Yeah, exactly. What's way more difficult is to spend an hour thinking of creative angles and really only get one test up, and that doesn't feel productive.
And the problem with that is that if it doesn't feel productive, you don't get that dopamine hit, that adrenaline hit. That being said, one really, really good creative, one that hits pain points, makes a very strong offer, gets to the core of somebody's problem, and shows the transformation that your product or service can provide, can last you for six, seven, eight, nine months, honestly maybe even a year or more with minor tweaks and variations. So while the immediate dopamine hit of audience testing feels good, long-term, creative testing is really the lifeblood of accounts. To illustrate that point, imagine I have an offer that says: hey, you pay me $1 today, and 24 hours later, tomorrow, I'll give you $100. Assuming I had proper trust and credibility built up, I would be able to scale that ad up to, like, $100,000 in spend per day.
No problem, right? Now, I'm not saying that's a sustainable business, but at the same time, I would never run into scaling issues, ever, for any reason. There would never be scaling problems; everybody and their mother would buy that. So this is why creative testing is really important. Audience testing, again: yes, it's important, yes, you need it, but the accounts that really fly have an amazing creative testing process. So, moving on: testing mindset. We've got to remember that testing in general is really only 10% to 20% of our total budget. That's not to say it's not important; it's just that we're not looking for testing to be our profit center. Testing is basically boot camp: does this pass the test? Okay, cool. Now we're going to put it into the Navy SEALs, which is going to be scaling.
And that's where we're going to really reap the benefit from it. Boot camp is not where the military reaps the benefit, right? So, in order to make correct decisions on small data sets, which testing is, if you think about it (we might just be running, say, $150 on a specific creative, which is not a ton of data; with a $50 target cost per purchase, we might only get three purchases over the course of three days), we need big differences in performance. And in order to get big differences in performance, we need to make sure our changes are big differences. Think about that for two seconds, just to make sure it sinks in. Changing one word in a headline is not going to produce a big difference in most cases.
So what that means is that within testing, in order to get a really wide variance in outputs, we need a really wide variance in inputs. Big differences in outputs are typically correlated with big differences in inputs. So going back to my "pay $1, get $100 tomorrow" ad: I might have a video ad where I show what that process is, and I say, all you've got to do is click here, you pay $1, and then tomorrow $100 lands in your bank account. That might be one ad. And then a very different ad might be an image ad with a little bit of text on it that says, "See how he 100Xed his money in one day."
And that might go through a testimonial or a story. Those are very different ads, and they're going to produce very different results. We are not looking for our testing campaigns to be profitable overall on a campaign level. If they are, that's awesome, but we're not looking for it. Ideally, you want to see a nice spread of results. Among the audiences and creatives you're testing, one creative test might be getting you a 5X return on ad spend on cold traffic, and one might be getting you 0.7X. That's exactly what you want to be seeing. Otherwise, you're not testing enough out-of-the-box, inventive stuff.
There we go, you're not testing enough inventive new stuff, both for audiences and for creative. You want to be swinging for the fences. You want either a home run or a strikeout: some audiences and creatives doing horribly, others crushing it. A sign of a good test is if, before you run it, you say to yourself, "This is either going to be the best test ever or it's going to completely fail and flop, and I have no idea which." If you're saying that to yourself beforehand, it's probably a really good test. And this is just from experience running a lot of tests, honestly more than I care to even mention: usually what you think is going to work best doesn't, and what you don't think is going to work at all will.
Once we get to the end of this, I'm going to show you a couple of examples of that, and why it's important to do this. Because even for somebody who knows their market very intimately, even somebody who's been working with their market for six, seven, eight, ten-plus years, oftentimes what we think is going to work in testing just doesn't. You might be thinking, oh man, this is going to crush it. I've thought that so many times, and then we run it and it doesn't do well at all. So let's continue: brief stats overview. You guys might know I'm a bit of a stats nerd; I studied stats in college. It's an interesting subject, one that beyond a surface level is really unnecessary for advertising, but it's actually insane to me how statistically illiterate most marketers are.
But anyway, I digress, different story. So, statistical significance: what does that mean? It's different from just saying, oh, this was a significant day in my life. What it means is basically a set of data that suggests a certain event happened beyond a reasonable doubt of random variation. You might see this expressed as confidence, p-values, et cetera. If you've taken any statistics courses, you're probably like, oh yeah, I remember that. Honestly, going too far into depth with this isn't going to provide anything. What we want to avoid is getting a result that's due to pure luck and random variation rather than because an ad or an audience is actually really good. That said, waiting for true statistical significance is often overkill, because it would require too much spend.
Oftentimes, to get a really, truly statistically significant result, you might need to spend beyond a thousand dollars, and that's so dumb; you would never want to do that, especially for a minor test. Absolutely unnecessary. So, as an example, to kind of illustrate the significance of significance: take a target cost per purchase of $50. Imagine you have a product, and if you're getting a cost per purchase of 50 bucks, you're happy, you're thrilled. In testing, you might spend $150 over three days; that's 3X your target. If you get two purchases, you might say the test failed: we spent $150 and got two purchases, and that's a $75 cost per purchase when our target is actually 50 bucks.
And at $75 per purchase, that's no longer a winning ad. But if you get four purchases, you might say, oh man, this was awesome, best test ever: we're getting purchases for $37.50 against a target of $50, way below our target cost per acquisition. The takeaway is that those two purchases could have been due to random variation, and in the case where you got four purchases, you might be saying, oh man, this is an awesome ad, when really you just got lucky. And in the case where you got two purchases, you might have actually just gotten a bad pocket of people on that test. Facebook can't show your ad to a million people with that budget; it might only be able to show it to, say, a thousand people, or 4,000 or 5,000 people.
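To see how easily random variation swings a small test, here's a quick simulation. All the inputs are my own, chosen to match the hypothetical $50-CPA example; in particular, the 300 clicks and per-click purchase probability are assumptions, not figures from the course.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

# Assume the ad *truly* converts at a $50 cost per purchase. On a $150
# test we expect 3 purchases, but a single test can easily land on 2
# (observed CPA $75, "failed") or 4 (observed CPA $37.50, "winner").
spend = 150
true_cpa = 50
clicks = 300                          # hypothetical clicks bought with $150
p_buy = (spend / true_cpa) / clicks   # per-click purchase probability (3 expected)

trials = 10_000
counts = {}
for _ in range(trials):
    purchases = sum(random.random() < p_buy for _ in range(clicks))
    counts[purchases] = counts.get(purchases, 0) + 1

for n in (2, 3, 4):
    print(f"{n} purchases (CPA ${spend / n:.2f}): "
          f"{counts.get(n, 0) / trials:.0%} of tests")
```

Running this shows that landing on exactly 2 or exactly 4 purchases is each roughly as likely as landing on the "true" 3, which is exactly why a single small test can make a good ad look bad and a mediocre one look great.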
And that might not be fully representative. This is where we start getting into the power of tests, statistical error, and things like that. Again, that's way beyond the scope of this; you don't really need to understand it. Just know that you want reasonable confidence in saying, okay, this wasn't just lucky or unlucky. Because there's a very real possibility that the ad set where you only got two purchases is one you might kill, when actually it's a really, really good ad that just got served to a bad pocket of people. That's why all of our testing procedures are designed to mitigate against both of these things happening. It's never going to be perfect; that's literally the world of statistics. We cannot show our ad to every single person in the world. If we could, there'd be no error.
So there's a trade-off to make here. The size of your testing data set is important. What do I mean by that? Let's think of a coin-toss example. If you flip the same coin three times, the odds of getting heads three times in a row are one out of eight: it's one divided by two to the power of three, or one half times one half times one half, which is 12.5%. The odds of getting heads 10 times in a row are about 0.098%. The takeaway is that smaller data sets mean more chance of statistical error. If you flip a coin three times over repeated trials, one out of eight of those times you're going to get heads three times in a row, on average.
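The coin-flip arithmetic above, worked out directly:

```python
# Probability of n heads in a row with a fair coin: (1/2) ** n
def prob_all_heads(n):
    return 0.5 ** n

print(f"{prob_all_heads(3):.1%}")   # 12.5%  (1 in 8)
print(f"{prob_all_heads(10):.3%}")  # 0.098% (about 1 in 1,024)
```

Three flips leave a 1-in-8 chance of a pure-luck streak; ten flips shrink that to about 1 in 1,024, which is the whole "more data, more certainty" point.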
But if you flip a coin 10 times, the odds of getting heads 10 times in a row are extremely low. So with more data, you have more certainty that something was not due purely to chance. At this point, you might be saying, okay, this is kind of freaking me out; what happens with testing? Luckily, we have the Facebook algorithm in our favor, so this is not just pure chance, not pure random variation. Facebook is looking for the people who are most likely to take the action we desire, whether it be a purchase, a registration, booking a call, et cetera. So we're not completely on our own here, but this is just a fact of advertising life. One of the ways to mitigate against this is building an intuition; within our accounts and the client accounts we work with, we pretty much know.
You can just tell when an ad gets up into testing, usually within two or three days: you're like, oh man, I know this ad is crushing it, because it's getting us a 5X return on ad spend when our KPI might be 1.9 or 2.0. You're going to build that intuition as you run your account more. So when something is a winner, you usually know pretty quickly, and when something is a loser, you usually know pretty quickly. It's important to let it run for the full testing period before making any rash decisions, but as you start to test more, you'll figure it out; you'll know. All right, let's talk about testing schedule. We like to batch this out two days a week. You can do Monday and Thursday, or Tuesday and Friday.
It doesn't really matter. You can do Wednesday and Saturday if you want to get a little wild and crazy. What you want is two days per week, at least three days apart. Monday/Thursday is pretty good because you get a test up and running on Monday, your results come in on Thursday, and you get new tests up on Thursday; then you come back from the weekend and have data to look at on Monday. A lot of this stuff is contextual, meaning that if, say, you're working with a fresh account, or you've switched your website up, you're switching a bunch of ads out, you're switching products, et cetera, then pretty much
you're going to be doing exclusively testing. On the flip side, if your account is absolutely humming and you're spending $10K a day and getting a 5X return on ad spend (we'd all love that), you probably don't need to spend four days a week focusing on testing; you might be able to get away with one day a week. So it's a little bit contextual, but in 90%-plus of cases, running testing two days per week is going to be the move, for both audience and creative testing. And on those days you do two things; they're basically just testing days. You get your tests up, and you analyze the results from the previous testing day. So let's say we have Monday the first, Tuesday the second, Wednesday the third, Thursday the fourth.
So if you get six new audience tests up on Monday the first, then on Thursday you're analyzing those and also getting new audience tests up to replace the ones you ended up killing. Again, we're going to go through this way more in depth, but at a high level, you're testing and replacing on two days a week. Most of your in-Ads-Manager type of work can easily be managed on two days a week. So, take your account into context. This is another really big thing: advertising is very much part science, but it's also part art. What I mean by that is you need to be able to zoom out and take your account in context. If you have 10 ads giving you an amazing cost per acquisition but only two audiences, you probably don't need to spend 10 hours this week brainstorming, recording, and writing new creatives.
You probably need to focus more on audience testing instead, so it's contextual. If your scaling campaigns are starting to die off because of creative fatigue, you should be putting the majority, if not all, of your time into creative testing: new angles, new offers, new hooks, new copy, new headlines, all of that good stuff. If your entire account is in a nosedive, it might be due to an offer problem. For this example, let's say that's the case. You might need to test your entire offer, so you might need to do 10% off rather than buy-one-get-one-free, or vice versa. You might need to actually take a look at your landing page: is this still relevant? Were we running a Christmas sale?
And oh yeah, by the way, it's March. Obviously that's a very dumb example, but you'd be surprised sometimes. Is it a funnel problem? Is what we were saying three months, six months, a year ago no longer relevant? It's possible; it happens; it all changes. So it's very important to take things into context. There's an entire module dedicated to this analysis process, so again, that's not the point here, but it is important to understand what you should be testing and why. All right, let's talk about budget allocation. 10% to 20% of your total spend should be on testing. If you're spending $5K a day, you're going to be spending about $500 to $1,000 per day on testing. If you're spending $100 a day, then probably around $10 to $20.
It's going to be a little bit more than that on $100 a day, but in general, 10% to 20% of your budget goes to testing. And this is on a monthly basis, not a day-to-day thing, meaning that if you get a bunch of new tests up on a Monday, you don't need to make sure that single day stays within 10% to 20%. This is something you're checking on, say, a two-week or monthly basis: am I hitting this or not? And it can be more if you need a lot of winners. So imagine again that you have a brand-new product you're releasing within your brand; you know the audiences already, but you really have no idea what's going to happen.
You might not even know what price is going to work best, so you're probably going to need to do a ton of creative testing on the new product. That means you're going to be testing a lot of angles: images, videos, different hooks on the videos. You might be testing UGC, you might be testing an influencer video, et cetera. So you're probably going to overshoot your 20% budget. We want to spend a minimum of three times our target cost per purchase, cost per lead, cost per call, whatever your end KPI is, on a test. So if you have, for example, a target CPA of 50 bucks, you need to spend at least $150 over three days.
If your business relies a lot more on delayed attribution and a longer sales cycle, like ours does (an application funnel, a course, a service, a webinar funnel, all of that good stuff), moving to a seven-day testing period can be a savvy move. As an example, right now we're running an application-to-a-call funnel, and if somebody watches our video training, they might not book a call for seven to 10 days. So we might be getting really good lead costs, but we might not see those calls actually show up for seven days, 10 days, two weeks, a month. And obviously that's a problem, because we would make the wrong decision. So if you have a lot of delayed attribution, you might need a longer testing cycle.
And yes, just make sure to spend at least three times your target CPA. That's pretty much it. Let's go over audience testing at a high level. Audience testing is our best ads with new audiences: we're taking our absolute winning ads and testing new audiences against them. Why? We want to control the creative variable. If a new audience does not work with our best ads, is it really a good audience? The answer is maybe, but probably not. So if we test a brand-new lookalike with one of our top-performing ads and the results are not good, could it be due to statistical error? Might we have just gotten a bad pocket of people? Potentially, but if it doesn't work with our best ads, I don't know. The other thing with this is you want to be pretty ruthless and disloyal with your tests.
So audience testing is our best ads with new audiences, and as soon as you find a new best ad, that is what you start testing your new audiences with. Let's say you have video ad one, which you've been running for the past three months and which has been doing amazing, but in your last round of creative testing, you got image ad one, which is getting you a 5X return on ad spend, whereas the video was getting you a 3X. In that case, all of your new audience testing should now use the new image ad, the one getting you the 5X return on ad spend. So be ruthless with that. It's always best ad, new audiences.
The more winning audiences you get, the better, of course. That's why you always want to be audience testing, and as you get to the scaling phase, you need audiences to scale. Next: audience size of at least a million. This is just a general rule. In most cases you want your potential audience size to be above a million so that you can scale into it; below that, you can run into trouble. Again, take it in context: if you know for a fact that your audience might only be 300,000 people, you're probably not going to be able to scale as high, and you're going to have some different challenges. But in general, for almost every single business: audience size of at least a million. So for audience testing, what can you test?
You can test lookalike audiences, interests, broad targeting, geos, et cetera. We have plenty of resources with a bunch of different audiences you can try, and further in this module you're going to see more examples, but these are just broad categories. For lookalikes, we're testing 1%, 5%, and 10% lookalikes: lookalikes of purchasers, Facebook engagers, Instagram engagers, et cetera. For interests: if you sell a dog product, you might test a puppy interest, or, I don't know, a Petco interest. You might test a cat interest. I'm kidding, that one doesn't make any sense. But in all seriousness, these are ideas for what you'd actually want to test in audience testing.
Broad targeting: when you can get this to work, scaling is so easy. This would just be age and gender, so maybe it's women 35-plus, maybe it's men 18 to 24, what have you. And geos with any of the above combinations: if you're running a test in the US, say a 1% lookalike of purchasers in the US, and you're able to logistically fulfill orders to Europe, you can then test an EU 1% purchaser lookalike. Different geos are, again, different audiences. So these are just some ideas; we're going to get way further into this. Creative testing is pretty much the exact opposite of audience testing: we're taking our best audience and testing out new creatives. And the important thing with this is big results.
For big results, we need to make big changes. If we have an image that's getting us a 1.2X return on ad spend, switching out the headline is not going to get us to a 3.4 or a 4.0 or whatever you want to hit. It's not going to happen. So we don't have the luxury of changing one word in the headline; it's not a useful test. An approximate order of importance for what to test: the offer, first and foremost. To make this a little ridiculous to prove the point: an ad with a 90% discount is going to convert better than one with a 10% discount. Any time you can make your offer better, you're going to win. And a better offer doesn't just mean taking more money off. It could be: hey, this pen is useful for doing your homework.
But maybe the pen also has this unknown property where it grants you five extra years to live. Well, okay, let's highlight that as our offer rather than writing our name on a piece of paper. Dumb example, but you get the point. The next thing that's really important is the content of the image or video. Let's use the puppy example and say we're selling dog collars. Is the image showing a happy puppy with a dog collar? Is it just a product shot? Is it a video where an influencer talks about how durable it is? Are we talking about the return policy, how fast shipping is, or how cute your dog is going to look and how many Instagram likes you're going to get?
So that's the content of the image or video. Next: ad copy. Say, talking about saving money versus shipping speed; this is the angle of your ad copy. At the very top, one ad might say something like "Save more money by buying with us," and another might say "At your door in 24 hours." Again, these are all dumb examples, and I would never recommend actually running any of them, but they prove the point. Then, when you see a winning creative, you can start to do further testing. At the outset, if you don't know whether a video or an image is good, changing the headline means absolutely nothing. Once you see something doing really well, maybe an influencer video getting you a 5X in testing, that changes.
then we can start to test different ad copy angles. So again, saving money versus fast shipping speed. Then you can start to test thumbnails on that video. Then you can start to test the hook, the first zero to 10 seconds of that ad, et cetera. That's the mind frame with creative testing. And remember: your best audience is where you test new creative. So as soon as you find a new best audience, bring it into creative testing. Now, frequently asked questions. These are just some general testing roadblocks people run into. Should I test audience X? Yeah, you should, even if it sounds ridiculous. I know in the last slide I said we should test, like, a cat audience, and I was kind of just making a joke.
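The testing order just described (offer first, then image/video content, then ad copy angle, then thumbnails and hooks on proven videos) can be sketched as a simple priority list. This is an illustrative sketch only; the names and structure are assumptions for the example, not any real ads API.

```python
# Illustrative sketch of the creative-testing order described above.
# Element names are assumptions made up for this example.

TEST_PRIORITY = [
    "offer",          # biggest lever: e.g. 90% off vs 10% off
    "content",        # what the image/video actually shows
    "ad_copy_angle",  # e.g. saving money vs fast shipping
    "thumbnail",      # only once a video is proven
    "hook",           # first 0-10 seconds of a proven video
]

def next_test(already_validated):
    """Return the highest-impact element not yet validated."""
    for element in TEST_PRIORITY:
        if element not in already_validated:
            return element
    return None  # everything validated: keep iterating on angles and hooks

# Usage: the offer and the video content are proven,
# so the next thing to test is the ad copy angle.
print(next_test({"offer", "content"}))  # -> ad_copy_angle
```

The point of the ordering is that elements lower in the list only matter once the ones above them are proven: testing thumbnails on a video nobody watches tells you nothing.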
But to be honest with you, we've seen stuff like that work all the time. For a haircare brand we work with, we've seen a Whole Foods interest work really well for a really long time, and an Apple interest, Apple as in Apple computers. We've seen those do really well even though they don't make a whole lot of sense on the surface. There are reasons why they work, and we'll get into those a little more later. So, should I test it? Unless it's completely illogical, yeah, you probably should. Should I test creative Y? Same thing: probably. If it makes you a little bit nervous, if it makes you unsure whether it's going to work or not, it's probably a decent thing to test, unless it's an exact replica of your worst-performing ad ever. The key here is a trap people sometimes fall into: retesting just to retest. Oh, this creative didn't work?
Let me try it again with a different audience. Yeah, you can try that. But if you try two times, three times and it still doesn't work, there's no need to keep fitting a square peg into a round hole. Okay, what creative should I test? For this one, this is where we start to get a little more contextual, and you'll get used to it the more we go through this and the more scenarios you see. But try to make a contextual decision. If you have a video that's been performing for six months and has recently started to fall off, you don't necessarily want to abandon ship completely. What you want to do is start asking: maybe the first 10 seconds of the video are tired. Maybe people are sick of seeing that opening. Or maybe the people who responded to this hook, say the fast shipping angle, have already purchased.
And they're tired of seeing it. So maybe we need a different angle; maybe it's our return policy. Again, this is all contextual. If that's your situation, that's what you'd want to do. If you have nothing working, then you need to be running multiple, very, very different ads: an image with a customer review, a video with an influencer showing how to use the product, a carousel, different offers, et cetera. It's all contextual. Should I stop testing? No, never, for any reason, under any circumstances. My test didn't work, what should I do next? This is a really big one. Sometimes people will say, "this test didn't work, I'm just going to test a new audience."
And it's like, okay, well, why didn't the test work? Without understanding why something worked or didn't work, you might as well not have even run the test. If you don't understand something, you can't replicate it, and if you can't replicate it, you cannot get consistent results. So if you don't know why it didn't work, you absolutely need an educated guess before your next test. You're never going to be 100% certain, but you should have at least two or three potential ideas of why it didn't work. Otherwise you're flying blind and potentially doing the completely incorrect thing. For more help on the analysis side, there's an entire module dedicated to account analyses, test analyses, et cetera. This is just the high level. All right, so let me show you now
why testing is so important. Let's come back in here. I'm going to show you a couple of accounts. First off, our internal account, because this is creative testing again, by the way, just so you know. We have tested a bunch of new video ads in this account, all within the past seven days, and we're getting wildly different results. Notice this one: $7.77 leads. Our KPI in this account is $10 leads; anything below 10 bucks I am happy with. So this is a $7.77 lead. I am thrilled with that; I will take it all day. But notice that in order to get to this $7.77 lead (seven-dollar-and-seventy-seven-cent lead, oh my Lord, that is a tongue twister) and this $8.71 lead, I've had to go through $19 leads.
We've had to go through $100 leads, $34 leads, a landing page view for $15. These are atrocious, atrocious ads. They're horrible; I'm embarrassed to even show them. But guess what? To be honest with you, I thought these ads were going to do way better than those. On paper, these ads are way better: they have a very clear hook, my face is at the beginning, and obviously if my face is at the beginning, who could deny that ad, right? And the ad that's actually getting these leads is a pretty dumb ad. It has a comedic eight-second intro of me pretending to be scared to look at Ads Manager. And guess what? It's doing amazingly so far. The importance of all this is understanding that you need to get through a bunch of bad ads to get to a good ad. No one is immune from this, and this is exactly the type of spread you want to see in your tests. Again, big differences: you need to make big changes to see big changes. So this one, stupid ad, dumb ad, I would not have guessed it would work, and it turns out it's working really well. "Do you check Ads Manager like this?" It's me pretending to be scared, and then I scream at Ads Manager.
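The cost-per-lead math behind "anything below 10 bucks I am happy with" can be sketched in a few lines. The ad names and spend/lead numbers below are made-up assumptions for illustration; only the $10 KPI comes from the account described.

```python
# Illustrative sketch of the lead-cost KPI check described above.
# Ad names, spend, and lead counts are hypothetical example data.

KPI_COST_PER_LEAD = 10.00  # anything below $10/lead is a keeper in this account

ads = {
    "scared-of-ads-manager": {"spend": 54.39, "leads": 7},  # ~$7.77/lead
    "case-study-video":      {"spend": 95.00, "leads": 5},  # $19/lead
    "product-shot-image":    {"spend": 100.0, "leads": 1},  # $100/lead
}

def keepers(ads, kpi):
    """Return ad names whose cost per lead beats the KPI."""
    out = []
    for name, stats in ads.items():
        if stats["leads"] and stats["spend"] / stats["leads"] < kpi:
            out.append(name)
    return out

print(keepers(ads, KPI_COST_PER_LEAD))  # -> ['scared-of-ads-manager']
```

The spread in the example mirrors the point of the section: most ads miss the KPI badly, and the one winner pays for all of them.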
I know. All right, there you go. And basically I put that intro at the very front of a video that has worked well for six months. And then you see an ad like this, for example. We come in here, and this is one that starts out with me at the very beginning: case study, scaling 4X. I thought this was going to crush it. I thought this was going to be an amazing ad. Nope. Horrible ad. Nobody liked it. So this is why it's important. Let me show you another quick example of why it's important to run creative testing. The KPI in this account, just so you understand, is basically anything above, say, a 1.6 or 1.7 top of funnel. That is awesome, because this brand has a massive lifetime value.
So in order to get to your 2.36 return on ad spend, you need to get through tests that are getting you 1.1, 0.8, 0.6, because guess what? We're going to cut those after three days. This one might have spent 200 bucks, this one spent 160 bucks; they get cut after three days. And the winners we're going to run for the next three months and get this ROAS for three months out of them. This is why testing is important. One of the things I don't want you to do is get scared of a test failing. You want tests to fail, because once you're failing, it means you're getting closer to finding the winners. Once you have winners, then who cares? Keep failing, try more ridiculous stuff, try more interesting stuff. Okay, that's going to do it for this one. I'll see you in the next testing videos within this testing module. And yeah, get testing. All right.
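The "cut after three days" rule described above is a simple threshold check. A minimal sketch, assuming the ~1.6 top-of-funnel ROAS target from this account and a three-day test window; the function name and return labels are made up for the example.

```python
# Hedged sketch of the cut rule described above: after the test window,
# anything under the top-of-funnel ROAS target gets cut, winners keep running.

ROAS_TARGET = 1.6      # from the account described: 1.6-1.7 top of funnel
TEST_WINDOW_DAYS = 3   # give every test three days before judging it

def decide(roas, days_running):
    """Cut underperformers once the test window has elapsed."""
    if days_running < TEST_WINDOW_DAYS:
        return "keep testing"
    return "scale" if roas >= ROAS_TARGET else "cut"

print(decide(2.36, 3))  # -> scale (run it for the next three months)
print(decide(0.8, 3))   # -> cut (it only cost a few days of test spend)
print(decide(1.1, 2))   # -> keep testing (window not over yet)
```

The asymmetry is the whole economic argument of the section: a failed test costs a few hundred dollars of capped spend, while a winner earns the target ROAS for months.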