Testing advertisements has been in practice for hundreds of years—the test cycle was just slower up until recently. The same applies to Facebook: in order to have success with your ad account, you have to consistently be testing the creative, copy, and audiences. Even when you find a winning combo, it's only a matter of time before that combo dies out. Finding winners is tricky, and you've probably stumbled across a winner by chance before. Today, I'm going to teach you how to increase your chances of finding winners by sharing my bulletproof ad testing formula. I'm very excited to write this, as testing, reading the data, and optimizing are among my favorite parts of Facebook advertising, so let's get into it.
While most people see organization as a constraint or an extra thing they have to do within the account, I see the opposite. It helps with scalability and lets the media buyer focus on the important stuff. Rather than spending time deciphering mislabeled, unorganized campaigns, they can focus on strategy and implementing new ideas into the account. There are two big things you can do within your ad account to stay organized, and they go as follows.
The first is keeping all campaign priorities separate. This means that top, middle, and bottom of funnel campaigns are going to be separate here. In addition to that, I split my TOF and BOF even further to have TOF-1 (which is for testing specific audiences), TOF-2 (which is for testing creative and copy), and TOF-3 (where we combine the winners from TOF-1 and 2 to start a scaling campaign). BOF is broken into 2 separate campaigns: normal BOF targeting and BOF – Cross-sell. This is where we use dynamic product ads or special offers to upsell people who have already purchased from the store. Having every campaign separated like this is crucial to staying organized. It helps with identifying what part of the funnel is struggling if things go south, and with coming up with a solution quickly to get your account back up to speed.
The second is taking creative and copy organization to a whole new level of OCD. Within all of our ad accounts, we have everything organized down to the copy and creative variations. We utilize naming conventions that help us identify not only which creatives and copy are doing well, but also which combination of the two is performing well. For example, for copy we use a system of letters to label our ads with, such as copy "A" or copy "B." Copy A could have the same text as copy B and an entirely different headline. This is another important point: when changing variables, make sure they're accounted for and tracked by using a different letter. For our creative, we use numbers to track them. For example, say we're advertising creative 1023 with copy B. Here, our ad name would be 1023B, and if we were to use image 1024 with the same copy variation the ad would be named 1024B. Lastly, we label our TOF/MOF/BOF ads differently by giving them a different number prefix. For example, all TOF ads are labeled 1XXX, all MOF ads are labeled 2XXX, and all BOF ads are labeled 3XXX. This really helps differentiate where most of our successful efforts are coming from when it comes to reporting. It sounds like a lot of work, but honestly the work is minuscule compared to the headache of figuring out where your winning copy combination is located. With how fast digital marketing is moving, you want to have speed and efficiency on your side.
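To make the convention concrete, here's a minimal sketch of how that naming scheme can be built and parsed in code. The function names and the dictionary are my own illustration, not part of any Facebook tooling; the prefixes and letter/number scheme come straight from the convention above.

```python
# Funnel stage -> thousands prefix, per the convention above
# (TOF ads are 1XXX, MOF ads are 2XXX, BOF ads are 3XXX).
FUNNEL_PREFIX = {"TOF": 1, "MOF": 2, "BOF": 3}

def ad_name(creative_id: int, copy_variant: str) -> str:
    """Combine a creative number and a copy letter, e.g. 1023 + 'B' -> '1023B'."""
    return f"{creative_id}{copy_variant}"

def parse_ad_name(name: str) -> dict:
    """Split a name like '1023B' back into its parts and infer the funnel stage
    from the thousands digit of the creative number."""
    creative_id, copy_variant = int(name[:-1]), name[-1]
    stage = {v: k for k, v in FUNNEL_PREFIX.items()}[creative_id // 1000]
    return {"creative": creative_id, "copy": copy_variant, "funnel": stage}
```

With this in place, reporting scripts can group spend and conversions by creative, by copy variant, or by funnel stage just by parsing the ad name.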
Single variable testing is by far the most important part of this article, as setting up this foundational methodology is key to taking a more scientific approach to the way you run and test your ads in the future. So why is it so important? Because by only running tests based on a single variable, you cut out the uncertainty that comes with media buying, like not knowing whether the copy, creative, or targeting caused the failure. Here, you're testing specific copy to see if it performs, or specific creative or audiences to see if they're what moves the needle forward. You're essentially trying to figure out what mixture of variables gives you a winning combination so that you can scale your Facebook ads. This cuts out the luck (like I mentioned before) and demonstrates why an ad will do well and why it will scale well. If the creative was proven, the audience was proven, and the copy was proven, then it's good to scale it up.
Now, obviously single variable testing sounds good and all, but to you it may seem like extra effort. Well? Good. Because running Facebook ads at a competitive level and spending thousands of dollars per day takes a lot of effort, and this is where that effort comes in. For starters, I mentioned earlier that I break up my TOF campaigns into TOF-1 Audience Testing and TOF-2 Creative/Copy Testing. That's for a reason, because both campaigns have specific rules governing how they operate. For example, TOF-1 only tests new audiences; when we test for audiences, we only use a creative/copy combination that is proven. The same thing goes for TOF-2—when we test for new creative, we make sure the audience/copy are proven. The same thing goes for copy testing. This way, we single out the one variable that's actually being tested, and that tells us whether it was responsible for winning or failing in the ad account. Now you're probably wondering, what makes an audience, creative, or copy proven? We personally use one metric, and that's conversions. If the audience, creative, or copy has gotten at least 10 purchases under our ideal CPA, we consider it a winner. For audiences, it would be that specific adset, and for copy/creative it would have to be the individual ad that generated those results. This is crucial to understand when running ads, and since discovering this method and implementing it, I have not gone back to the old ways of "testing."
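The "proven" criterion above is simple enough to express as a check. This is a sketch under the stated rule (at least 10 purchases at or under the ideal CPA); the function name and the default threshold parameter are mine, and your own ideal CPA will obviously differ per account.

```python
def is_proven(purchases: int, spend: float, ideal_cpa: float,
              min_purchases: int = 10) -> bool:
    """A variable (an audience's adset, or a creative/copy ad) counts as
    'proven' once it has at least `min_purchases` purchases and its
    actual CPA (spend / purchases) is at or under the ideal CPA."""
    if purchases < min_purchases:
        return False  # not enough conversions to trust the result yet
    return spend / purchases <= ideal_cpa
```

For example, an ad with 12 purchases on $300 of spend against a $30 ideal CPA is proven ($25 actual CPA), while one with 10 purchases on $350 is not ($35 actual CPA).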
Protip: When launching new audience, creative, or copy tests, make sure to run them in separate adsets. This keeps things organized and the tests separate from each other. Additionally, try not to have too many ads in one adset when testing for copy/creative. At most, have 2-3 ads running in an adset with a budget of $25/day.
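Those guardrails (2-3 ads per testing adset, $25/day) can be enforced with a small pre-launch check. This is a hypothetical helper of my own, not a Facebook API call; it just validates a planned adset against the limits described above before you build it in Ads Manager.

```python
def validate_test_adset(ad_names: list, daily_budget: float,
                        max_ads: int = 3, max_budget: float = 25.0) -> list:
    """Return a list of warnings if a planned testing adset breaks the
    guardrails: at most `max_ads` ads, at most `max_budget` dollars/day."""
    warnings = []
    if len(ad_names) > max_ads:
        warnings.append(f"{len(ad_names)} ads in one adset; keep it to {max_ads} or fewer")
    if daily_budget > max_budget:
        warnings.append(f"${daily_budget:.0f}/day exceeds the ${max_budget:.0f}/day testing budget")
    return warnings
```

An empty list means the planned adset is within the testing guardrails.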
It's important to be aware of the sunk cost fallacy and not get too emotional when it comes to running Facebook ads. Just like traders on Wall St., you need to cut emotions out when operating ads. The reason I bring up emotions here is that it's important to understand when to cut your losses on the specific test you're running, whether it be an audience, creative, or copy.
Identifying the KPIs of failed tests and then shutting the ad down based on those KPIs is important. While overspending on a failed ad and "hoping" it will turn around is stupid, shutting ads down 2 hours after you launched them is equally stupid. They're both extremes, and you want to be in the middle: the point where you have enough statistically significant data to know that the test didn't work out and it's time to pause the ads/adsets. How do you know if your ads have enough statistical significance? Well, there are a few basic guidelines:
All of these base guidelines are important to follow, though they may vary depending on how you prefer to run your account and read your data. Regardless, I'd say the 1st rule holds true for every store testing ads for their products. You have to wait multiple days before pausing anything, because that's where most of the data is collected.
The great thing about Facebook ads is that you can implement automated rules in your ad account—especially in testing campaigns, where simple if/then logic governs decisions. Here you can input your shutdown KPIs, and FB can automatically shut down your tests based on those conditions. Rules are also easy to set up. You can use either a 1-stage rule or a 2-stage rule, depending on how deep you want to test your audience/creative/copy. We personally use a 2-stage rule with all of our TOF testing campaigns. This basically means we have 2 rules set up with different KPIs. Since most of the testing drop-off happens right after you first launch the ads, this is where those rules help weed out the bad performers. For example, this is a rule we have for one of our clients:
Basically, here we have it set up so that if we get a conversion under $30, we keep testing the ad to see if we can get 2 conversions under $60. If we can't, we pause those ads and chalk them up as failures. Automated rules are a lifesaver, especially when you want to control your ad budget for testing and make sure you don't have to be sitting in the ad account just to shut down ads.
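The 2-stage logic above can be sketched as a simple decision function. The $30/$60 thresholds are the ones from this client example; in practice you'd set these as conditions in Facebook's Automated Rules UI rather than in code, so treat this as an illustration of the logic only.

```python
def should_pause(spend: float, conversions: int,
                 stage1_spend: float = 30.0,
                 stage2_spend: float = 60.0) -> bool:
    """Two-stage shutdown rule from the example above:
    stage 1: no conversion by $30 of spend -> pause;
    stage 2: fewer than 2 conversions by $60 of spend -> pause."""
    if spend >= stage1_spend and conversions < 1:
        return True   # never got the first cheap conversion
    if spend >= stage2_spend and conversions < 2:
        return True   # got one, but couldn't get 2 under $60
    return False      # still inside the testing budget, keep running
```

So an ad with one conversion at $45 of spend keeps running, while an ad with one conversion at $60 of spend gets paused.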
When you combine these 3 major aspects of testing, you'll have a testing machine that keeps running forever, and your bottleneck will eventually become having enough creative and copy to supply it. Those testing campaigns are the backbone of scaling. Why? Because the testing campaigns are what supply the scaling campaigns with creatives, copy, and audiences. Without the machine running in the background, once you burn out of creatives in your scaling campaign it's game over, and you have to start from scratch all over again. So keep testing, keep iterating, and keep improving with Facebook ads.