Split Testing Archives - Jon Loomer Digital

5 Meta Ads Tests that Transformed My Perspective on Targeting
https://www.jonloomer.com/5-meta-ads-tests-targeting/
Thu, 24 Oct 2024

My approach to targeting completely transformed during the past year, driven primarily by the results of these five Meta ads tests...


To suggest that my perspective on Meta ads targeting has changed during the past year is an understatement. It’s completely transformed. This evolution wasn’t immediate and was reinforced through a series of tests.

Understand that it wasn’t easy to get here. It’s reasonable to say that my prior advertising strategy could have been boiled down to targeting. It was the most important step. Great ad copy and creative couldn’t overcome bad targeting.

It’s not that I don’t care about reaching a relevant audience now. It’s that the levers we pull to get there are no longer the same.

I’m getting ahead of myself. This post will help explain how I got here. I’ve run a series of tests during the past year that have opened my eyes to just how much things have changed. They’ve helped me understand how I should change, too.

In this post, we’ll discuss the following tests:

  • Test 1: How Much Do Audiences Expand?
  • Test 2: How Much Remarketing Happens When Going Broad?
  • Test 3: Do Audience Suggestions Matter When Using Advantage+ Audience?
  • Test 4: Comparing Performance and Quality of Results
  • Test 5: Understanding the Contribution of Randomness to Results

Let’s get to it…

Test 1: How Much Do Audiences Expand?

One of my primary complaints ever since Advantage Detailed Targeting (then Detailed Targeting Expansion) was introduced is the lack of transparency.

Advantage Detailed Targeting

We know that Meta can expand your audience beyond the initial targeting inputs, but will this always happen? Will your audience expand a little or a lot? We have no idea. I’ve long asked for a breakdown that would solve this problem, but I don’t anticipate getting that feature anytime soon.

The same questions about how much your audience expands also apply to Advantage Lookalike and Advantage Custom Audience. It’s a mystery.

This is important because we can’t always avoid expansion. If your performance goal aims to maximize conversions, value, link clicks, or landing page views while using original audiences, Advantage Detailed Targeting is automatically on and it can’t be turned off.

Advantage Detailed Targeting

The same is true for Advantage Lookalike when your performance goal maximizes conversions or value.

Advantage Lookalike

Are we able to clear up this mystery with a test?

The Test

I don’t believe that there’s any way to prove how much our audience is expanded when Advantage Detailed Targeting or Advantage Lookalike are applied. But, there is a way to test this with Advantage Custom Audience. While it won’t definitively prove how our audience is expanded with the other two methods, it could provide a roadmap.

This test is possible thanks to the availability of Audience Segments for all sales campaigns. Once you define your Audience Segments, you can run a breakdown of your results to view the distribution of ad spend and other metrics between three different groups:

  • Engaged Audience
  • Existing Customers
  • New Audience

For the purpose of this test, this breakdown can help us understand how much our audience is expanded. All we need to do is create an ad set using original audiences where we explicitly target the same custom audiences that are used to define our Audience Segments.

So, I did just that, and I turned on Advantage Custom Audience.

Advantage Custom Audience

I used the Sales objective so that the necessary breakdown would be available.

The Results

My only focus with this test was to uncover how my budget was distributed. Performance didn’t matter.

In this case, 26% of my budget was spent on my Engaged Audience and Existing Customers combined.

Audience Segments Breakdown

Since the custom audiences I used for targeting matched how I defined my Audience Segments, we can state definitively that, in this case, Meta spent 74% of my budget reaching people outside of my targeting inputs.
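
If you want to run the same arithmetic on your own breakdown, here's a minimal sketch in Python. The spend figures are hypothetical (I picked numbers that happen to reproduce the 26/74 split above); swap in the values from your own Audience Segments breakdown.

```python
# Hypothetical spend figures from a Breakdown by Audience Segments.
# Swap in the numbers from your own Ads Manager report.
spend_by_segment = {
    "Engaged Audience": 120.00,   # matches the custom audiences I targeted
    "Existing Customers": 75.00,  # also matches my targeting inputs
    "New Audience": 555.00,       # people outside my targeting inputs
}

total_spend = sum(spend_by_segment.values())
targeted = spend_by_segment["Engaged Audience"] + spend_by_segment["Existing Customers"]
expanded = spend_by_segment["New Audience"]

print(f"Spent on my targeting inputs: {targeted / total_spend:.0%}")        # 26%
print(f"Spent on expansion (New Audience): {expanded / total_spend:.0%}")   # 74%
```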

What I Learned

This was groundbreaking for my understanding of audience expansion. Up until this point, whether or not Meta expanded my audience — and by how much — was a mystery. This test lifted the curtain.

These results don’t mean that the 74/26 split would apply universally. Many factors likely contribute to the distribution that I saw here, including but not limited to…

  • Performance goal
  • Conversion event
  • Budget
  • Size of remarketing audiences

We also don’t know if a similar split happens when applying Advantage Detailed Targeting or Advantage Lookalike. While we don’t know, this at least gives us a point of reference rather than having to make a blind guess.

Read More

Check out the following post and video to learn more about this test:

How Much Do Audiences Expand Using Advantage Custom Audience?

Test 2: How Much Remarketing Happens When Going Broad?

Even before we had Advantage+ Shopping Campaigns and Advantage+ Audience, some advertisers swore by using original audiences to “go broad” (no inputs for custom audiences, lookalike audiences, or detailed targeting). While unconventional, this approach was largely based on gut feel, with limited ways to prove how ads were getting distributed. Those advertisers could only point to their results as evidence that it was effective.

The addition of Audience Segments to all sales campaigns would allow us to provide a bit more insight into what is happening when going broad.

The Test

I created a campaign with the following settings…

  • Campaign Objective: Sales
  • Performance Goal: Maximize Conversions
  • Conversion Event: Complete Registrations
  • Targeting: Original Audiences using only location and custom audience exclusions
  • Placements: All

The Results

Recall that we already had a remarketing distribution benchmark with the prior test. In that case, we explicitly defined the custom audiences we wanted to reach within targeting. In this case, I didn’t provide any such inputs.

And yet…

Audience Segments Going Broad

Even though no inputs were provided, Meta spent 25% of my budget on reaching prior website visitors and people who were on my email list (both paid customers and not).

What I Learned

I found this to be absolutely fascinating. While we will struggle to get any insight into who the people are that Meta reached outside of remarketing, the fact that 25% of my budget was spent on website visitors and email subscribers is important. It shows that Meta is prioritizing showing my ads to people most likely to convert.

This realization helped improve my confidence in a hands-off approach. If the percentage were closer to 0, it might suggest disorder. It could suggest that the broad targeting approach is based on smoke and mirrors and that your inputs are necessary to help steer the algorithm.

What was most shocking to me is that the remarketing distribution was nearly identical, whether I used Advantage Custom Audience and defined my target or went completely broad. This was a whole new realization.

While the first test helped me understand how much Meta expands my targeting inputs, the second made me question whether those inputs were necessary at all. I’d spend about the same amount reaching that desired group either way.

Read More

Check out the following post and video to learn more about this test:

25 Percent of My Budget Was Spent on Remarketing While Going Broad

Test 3: Do Audience Suggestions Matter When Using Advantage+ Audience?

While you have the option to switch to original audiences, the default these days is Advantage+ Audience. Meta strongly encourages you to take this route, warning that switching to original audiences can lead to a drop in performance.

Advantage+ Audience

When using Advantage+ Audience, you leverage Meta’s AI-driven algorithmic targeting. You have the option to provide audience suggestions, but it’s not required.

Advantage+ Audience

Meta says that even if you don’t provide suggestions, they will prioritize things like conversion history, pixel data, and prior engagement with your ads.

Advantage+ Audience

But, is this true? And how pronounced is it?

The Test

We could test this by again leveraging a manual sales campaign with Audience Segments. I created two ad sets:

  • Advantage+ Audience without suggestions
  • Advantage+ Audience with suggestions that match my Audience Segments

Since I can use custom audiences that exactly match the custom audiences used to define my Audience Segments, we can get a better idea of just how much (if at all) these audience suggestions impact delivery.

A reasonable hypothesis would be that Advantage+ Audience without suggestions will still result in some remarketing (potentially in the 25% range, as we discovered when going broad), but that it will make up a smaller percentage of ad spend than when providing suggestions that match my Audience Segments.

But, that didn’t play out…

The Results

Once again, quite shocking.

The ad set that used custom audiences that match those used to define my Audience Segments resulted in 32% of my budget spent on that group.

Audience Segments Breakdown

By itself, this seems meaningful. More is spent on remarketing in this case than when going broad or even using Advantage Custom Audience (wow!).

But, check out the results when not providing any suggestions at all…

Audience Segments

Your eyes aren’t deceiving you. When I used Advantage+ Audience without suggestions, 35% of my budget was spent on remarketing.

What I Learned

Every test surprised me. This one shook me.

When I provided audience suggestions, I reached the people matching those suggestions less than when I didn’t provide any suggestions at all. Providing suggestions was not a benefit. It didn’t seem to impact what the algorithm chose to do. That same group was prioritized either way, with or without suggesting them.

It’s not clear if this would be the case for other types of suggestions (lookalike audiences, detailed targeting, age maximum, and gender). But, the results of this test imply that while audience suggestions can’t hurt, it’s debatable whether they do anything.

As is the case in every test, there are several factors that will contribute to my results. Budget and the size of my remarketing audience are certainly part of that. And it’s also quite possible that I won’t always see these same results if I were to run the test multiple times.

It remains eye-opening. Not only is Advantage+ Audience without suggestions so powerful that it will prioritize my remarketing audience, it’s possible that Meta doesn’t need any suggestions at all.

Read More

Check out the following post and video to learn more about this test:

Audience Suggestions May Not Always Be Necessary

Test 4: Comparing Performance and Quality of Results

I’ve encouraged advertisers to prioritize Advantage+ Audience for much of the past year. It’s not that it’s always better, but it should be your first option. Instead, it seems that many advertisers find every excuse to distrust it and switch to original audiences.

Advertisers tell me that they get better results with detailed targeting or lookalike audiences. And even if they could get more conversions from Advantage+ Audience, they’re lower quality.

Is this the case for me? I decided to test it…

The Test

I created an A/B test of three ad sets where everything was the same, beyond the targeting. Here are the settings…

  • Objective: Sales
  • Performance Goal: Maximize Conversions
  • Conversion Event: Complete Registration
  • Attribution Setting: 1-Day Click
  • Placements: All

The three ad sets took three different approaches to targeting:

  • Advantage+ Audience without suggestions
  • Original audiences using detailed targeting (Advantage Detailed Targeting)
  • Original audiences using lookalike audiences (Advantage Lookalike)

Since the performance goal is to maximize conversions, Advantage Detailed Targeting and Advantage Lookalike were automatically applied to their respective ad sets and could not be turned off. The audience is expanded regardless.

The ads were the same in all cases, promoting a beginner advertiser subscription.

The Results

In terms of pure conversions, Advantage+ Audience led to the most, besting Advantage Detailed Targeting by 5% and Advantage Lookalike by 25%.

Ads Manager Results

Recall that this was an A/B test, and Meta had 61% confidence that Advantage+ Audience would win if the test were run again. Maybe as important, there was less than 5% confidence that Advantage Lookalike would win.

A/B Test Results

But, one of the complaints about Advantage+ Audience relates to quality. Are these empty subscriptions run by bots and people who will die on my email list?

Well, I tracked that. I created a separate landing page for each ad that utilized a unique form. Once subscribed, these people received a unique tag so that I could keep track of which audience they were in. The easiest way to measure quality was to tag the people who clicked on a link in my emails after subscribing.
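
If you want to replicate this kind of quality tracking, here's a rough sketch of how you might tally it from a CRM export. The file and column names (`subscribers_export.csv`, `tag`, `clicked_quality_link`) are hypothetical; adjust them to however your CRM labels form tags and email link clicks.

```python
import csv
from collections import Counter

registrations = Counter()
quality_clicks = Counter()

# Hypothetical CRM export: one row per subscriber, with the tag applied by the
# unique form they used and a flag for whether they clicked a tracked email link.
with open("subscribers_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        tag = row["tag"]  # e.g. "advantage-audience", "detailed-targeting", "lookalikes"
        registrations[tag] += 1
        if row["clicked_quality_link"].lower() == "yes":
            quality_clicks[tag] += 1

for tag, count in registrations.items():
    rate = quality_clicks[tag] / count
    print(f"{tag}: {count} registrations, {quality_clicks[tag]} quality clicks ({rate:.0%})")
```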

Once again, Advantage+ Audience generated the most quality subscribers.

Is this because Advantage+ Audience leaned heavily into remarketing? We can find out with a breakdown by Audience Segments!

Breakdown by Audience Segments

Nope! More was actually spent on remarketing for the Advantage Detailed Targeting ad set. Advantage+ Audience actually generated the fewest conversions from remarketing (though it was close to Advantage Lookalike).

What I Learned

This test was different than the others because the focus was on results and quality of those results, rather than on how my ads were distributed. And, amazingly, Advantage+ Audience without suggestions was again the winner.

Of course, we’re not dealing with enormous sample sizes here ($2,250 total spent on this test). It’s possible that Advantage Detailed Targeting would overtake Advantage+ Audience in a separate test. But, what’s clear here is that the difference is negligible.
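
If you'd like to sanity-check a gap like this yourself, here's a rough two-proportion z-test sketch in Python. The impression and registration counts are hypothetical, chosen only to be in the same ballpark as this test, and they show how a 5% difference can easily be noise at these volumes.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical counts: ad set A converts ~5% more than ad set B
# on similar impression volumes.
p = two_proportion_p_value(conv_a=189, n_a=60_000, conv_b=180, n_b=60_000)
print(f"p-value: {p:.2f}")  # well above 0.05, so the gap could easily be noise
```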

There just doesn’t appear to be a benefit to spending the time and effort required to switch to original audiences and provide detailed targeting or lookalike audiences. I’m getting just as good results (even better) letting the algorithm do it all for me.

As always, many factors contribute. I may get better results with Advantage+ Audience because I have extensive history on my ad account. But, as mentioned in the results section, it’s not as if it led to more results from remarketing.

The fact that Advantage+ Audience won here isn’t even necessarily the main takeaway. There could be some randomness baked into these results (more on that in a minute). But, this test further increased my confidence in letting the algorithm do its thing with Advantage+ Audience.

Read More

Check out the following post to learn more about this test:

Test Results: Advantage+ Audience vs. Detailed Targeting and Lookalikes

Test 5: Understanding the Contribution of Randomness to Results

There was something about that last test — and really all of these tests — that was nagging at me. Yes, Advantage+ Audience without suggestions kept coming out on top. But, I was quick to remind you that these tests aren’t perfect or universal. The results may be different if I were to run the tests again.

That got me thinking about randomness…

What percentage of our results are completely random? What I mean by that is that people aren’t robots. They aren’t 100% predictable when it comes to whether they will act on a certain ad. Many factors contribute to what they end up doing, and much of that is random.

If there’s a split test and the same person would be in all three audiences, which audience do they get picked for? How many of those random selections would have converted regardless of the ad set? How many converted because of the perfect conditions that day?

It might be crazy, but I felt like we could demonstrate randomness with a test.

The Test

I created an A/B test of three ad sets. We don’t need to spend a whole lot of time talking about them because they were all identical. Everything in the ad sets was the same. They all promoted identical ads to generate registrations for my Beginners subscription.

I think it’s rather obvious that we wouldn’t get identical results between these three ad sets. But, how different would they be? And what might that say about the inferences we make from other tests?

The Results

Wow. Yes, there was a noticeable difference.

One ad set generated 25% more conversions than the lowest performer. If that percentage sounds familiar, it’s because it was the exact same difference between the top and bottom performers in the last test. But in that case, that difference “felt” more meaningful.

In this case, we know there’s nothing meaningfully different about the ad sets that led to the variance in performance. And yet, Meta had a 59% confidence level (nearly the same as the level of confidence in the winner in the previous test) that the winning ad set would win if the test were run again.

A/B Test

What I Learned

Randomness is important! Yet, most advertisers completely discount it. They test every detail and make changes based on differences in performance that are even narrower than what we saw here.

Think about all of the things that advertisers test. They create multiple ad sets to test targeting. They try to isolate the best performing ad copy, creative, and combination of the two.

This test taught me that most of these tests are based on a flawed understanding of the results. Unless you can generate meaningful volume (usually because you’re spending a lot), it’s not worth your time.

Your “optimizing” may not be making any difference at all. You may be acting on differences that would flip if you tested again — or if you let the test run longer or spent more money.
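
One way to see how easily a "winner" emerges from identical setups is to simulate it. Here's a small sketch (using hypothetical impression counts and a hypothetical conversion rate, not my actual data) that estimates how often three identical ad sets end up 25% or more apart purely by chance.

```python
import numpy as np

def share_with_big_gap(trials=10_000, impressions=50_000, rate=0.003, ad_sets=3):
    """How often do identical ad sets end up 25%+ apart purely by chance?"""
    rng = np.random.default_rng(seed=1)
    # Each row is one simulated test; each column is one identical ad set.
    conversions = rng.binomial(impressions, rate, size=(trials, ad_sets))
    ratios = conversions.max(axis=1) / np.maximum(conversions.min(axis=1), 1)
    return (ratios >= 1.25).mean()

print(f"Trials where the 'winner' beat the 'loser' by 25%+: {share_with_big_gap():.0%}")
```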

It’s even reasonable to think that too much testing will hurt your results. You’re running competing campaigns and ad sets that drive up ad costs due to audience fragmentation and auction overlap — all for a perceived benefit that may not exist.

I’m not saying that you should never test anything to optimize your results. But be very aware of the contributions of randomness.

Read More

Check out the following post to learn more about this test:

Results: Identical Ad Sets, a Split Test, and Chaos

My Approach Now

You’re smart. If you’ve read this far, you can infer how these tests have altered my approach. My strategy is drastically simplified from what it once was.

I lean heavily on Advantage+ Audience without suggestions, especially when optimizing for conversions. Of course, Advantage+ Audience isn’t perfect. If I need to add guardrails, I will switch to original audiences. But when I do, I typically go broad. I rarely ever use detailed targeting or lookalikes now.

I also rarely use remarketing now, which is insane considering it once made up the majority of my ad spend. Since remarketing is baked in, there are few reasons to create separate remarketing and prospecting ad sets now. Especially when I’d normally use general remarketing (all website visitors and email subscribers) because I felt these people would be most likely to convert.

This also means far fewer ad sets. Unless I’m running one of these tests, I almost always have a single ad set in a campaign.

It doesn’t mean I’m complacent in this approach. It means that the results of these tests have raised my confidence that providing no targeting inputs will often perform just as well, and sometimes better. And I know that there are exceptions and factors that contribute to my results.

Maybe things will change. But, I no longer feel the need to micromanage my targeting. Based on the results of these tests — and of my results generally — it’s no longer a priority or a factor that I worry about.

And that, my friends, is quite the evolution from where I was not long ago.

Run Your Own Tests

I’m always quick to point out that my results are at least partially unique to me. Whether you’re curious or skeptical, I encourage you to run your own tests.

But, do so with an open mind. Don’t run these tests hoping that your current approach will prevail. Spend enough to get meaningful results.

Maybe you’ll see something different. If you do, that’s fine! The main point is that we shouldn’t get stuck in our ways or force a strategy simply because it worked at one time and we want it to work now.

Replicate what I did. Then report back!

Your Turn

Have you run tests like these before? What results did you see?

Let me know in the comments below!

Test Results: Advantage+ Audience vs. Detailed Targeting and Lookalikes
https://www.jonloomer.com/test-results-advantage-plus-audience-detailed-targeting-lookalikes/
Mon, 09 Sep 2024

I ran an A/B test to determine whether Advantage+ Audience, detailed targeting, or lookalike audiences led to the most quality results...


We should always test our assumptions. We may think that something works, or maybe it worked at one time, but it’s important to verify that it remains the path forward.

Testing our targeting strategies was the focus of a recent blog post, and I ran a test of my own as an example. This post will highlight the setup and results of the test.

I tested using the following three targeting strategies:

  1. Advantage+ Audience without suggestions
  2. Detailed Targeting with Advantage Detailed Targeting
  3. Lookalike Audiences with Advantage Lookalike

It’s important to understand that the results of this test are not universal. I will address some of the potential contributing factors at the end of this post.

Here’s what we’ll cover:

  • Campaign Basics
  • Targeting
  • A/B Test Setup
  • Surface Level Data
  • Conversion Results
  • Quality
  • Remarketing and Prospecting Distribution
  • Potential Contributing Factors
  • What it Means

My goal isn’t to convince you that your approach is right or wrong. My hope is that my test inspires you to run a similar one of your own so that you can validate or invalidate your assumptions.

Let’s begin…

Campaign Basics

I created a campaign using the Sales objective.

Sales Objective

Within that campaign, I created three ad sets. Each used the following settings…

1. Performance Goal: Maximize conversions with Complete Registration conversion event.

Maximize Conversions Performance Goal

My goal is to get registrations on a lead magnet. The reason I’m using the Sales objective is to get access to Audience Segments data (I’ll address that later).

2. Attribution Setting: 1-day click.

Attribution Setting

I recommend using a 1-day click attribution setting for most non-purchase events.

3. Budget: $25/day per ad set ($750 per ad set overall)

Daily Budget

The total spent on the test was about $2,250.

4. Locations: United States, Canada, and Australia.

Locations

I would normally include the United Kingdom, but it is no longer allowed for split testing.

5. Placements: Advantage+ Placements.

Advantage+ Placements

6. Ads: One static ad and one using Flexible Ad Format. The Flexible version utilized four different images.

Each ad sent people to a different landing page with a unique form. All three landing pages and forms appear identical to the user. This was done so that I could confirm results in my CRM — not just the number of registrations using each form, but what these people did once they subscribed.

Targeting

Each ad set utilized a different targeting approach.

1. Advantage+ Audience without suggestions.

Advantage+ Audience

There isn’t much to show here. This allows the algorithm to do whatever it wants.

2. Detailed Targeting with Advantage Detailed Targeting.

Detailed Targeting

I used Original Audiences and selected the following detailed targeting options:

  • Digital Marketing Strategist
  • Advertising agency (marketing)
  • Jon Loomer Digital (website)
  • Digital marketing (marketing)
  • Online advertising (marketing)
  • Social media marketing (marketing)

Because I’m optimizing for conversions, Advantage Detailed Targeting is automatically turned on. I cannot prevent the audience from expanding.

3. Lookalike Audiences with Advantage Lookalike.

Lookalike Audiences

I selected lookalike audiences based on the following sources:

  • Customer List
  • Power Hitters Club – Elite (Active Member)
  • All Purchases – JonLoomer.com – 180 Days

Because I’m optimizing for conversions, Advantage Lookalike is automatically turned on and can’t be turned off.

A/B Test Setup

I ran an A/B test of these three ad sets in Experiments. The key metric for finding a winner was Cost Per Result. That “result” was a registration.

A/B Test

I ran the test for 30 days and chose not to have it end early if Meta found a winner.

A/B Test

I’m glad I did it this way because Meta’s confidence in the winner wasn’t particularly high and it changed the projected winner a couple of times. This allowed the test to play out until the end.

Surface Level Data

Before we get to the results, I found this interesting. Beyond testing how these three would perform, I was curious if the cost for delivery would be much different. This, of course, could have an impact on overall performance.

Ads Manager Results

The difference in CPM is minor, but it could be impactful. It was $0.68 cheaper to deliver ads using Advantage+ Audience than Lookalikes. The difference in CPM between Advantage+ Audience and Detailed Targeting was $0.89.

While this may not seem like much (it’s not), that resulted in the delivery of between 1,500 and 2,000 more impressions when using Advantage+ Audience. It doesn’t mean that a lower CPM will lead to more results, but we should bookmark this metric for later.
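
The impressions math is simple enough to sketch out. The CPM values below are hypothetical, rounded numbers with the same $0.89 gap; the point is just to show how a small CPM difference translates into extra impressions at this budget.

```python
def impressions(spend, cpm):
    """Impressions delivered for a given spend at a given CPM (cost per 1,000 impressions)."""
    return spend / cpm * 1000

spend = 750.00  # roughly what each ad set spent in this test
lower_cpm = impressions(spend, cpm=20.00)   # hypothetical CPM for the cheaper ad set
higher_cpm = impressions(spend, cpm=20.89)  # hypothetical CPM $0.89 higher
print(f"Extra impressions from the lower CPM: {lower_cpm - higher_cpm:,.0f}")  # ~1,600
```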

Conversion Results

According to Ads Manager, Advantage+ Audience led to 9 more registrations than Detailed Targeting and 36 more than Lookalikes.

Ads Manager Results

The overall costs for these results weren’t great, but that’s also consistent with what I’ve seen when running split tests. Because these tests prevent overlap, delivery will be less efficient. Of course, “good results” weren’t the goal here.

The difference between Advantage+ Audience and Detailed Targeting may not be statistically significant, but the difference between the two and Lookalikes certainly was. The A/B test results support this assumption.

A/B Test Results

It’s possible that if the test were run again, Detailed Targeting would come out ahead (Meta estimates a 36% chance of that happening). But, it’s very unlikely (under 5%) that Lookalikes would come out on top.

Recall that each ad sent people to a different landing page that utilized a different form. This way, registrants were given a unique tag so that I knew which audience they were in. These landing pages and forms were only used for the test.

Keep in mind that the results in Ads Manager reflect all registrations, and this can include registrations for other lead magnets. This could happen if someone who subscribes to the lead magnet I’m promoting then subscribes to another (I email about other lead magnets in my nurture sequence).

The numbers from my CRM aren’t much different, but they are different.

The disparity is greater when looking at the “true” results. Advantage+ Audience led to 14 more registrations than Detailed Targeting and 43 more than Lookalikes.

At least some of this difference might be related to the slight difference in CPMs. Keep in mind, though, that Lookalikes had the second lowest CPM of the three targeting strategies, yet it performed the worst.

Quality

One of the first arguments I hear from advertisers when it comes to leveraging Advantage+ Audience over old school targeting approaches is that it’s more likely to lead to low-quality results. Was that the case here?

I was prepared to measure this. It’s one of the reasons that I used unique forms for each ad set. It allowed me to get a deeper understanding of whether these registrants did anything else.

I’d consider my funnel atypical when it comes to most businesses who collect registrations. I don’t have an expectation that many of them will buy from me within 30 days. I look at it as more of a long-tail impact, and many of the people who buy from me do so years later.

Because of that, we can’t make any reasonable assessment of registration quality based on sales at this stage. While two purchases came in via Advantage+ Audience and two from Detailed Targeting so far, these are hardly statistically significant. And it could change dramatically in a matter of months or years (and I don’t want to wait until then to publish this post).

But, there is another way to assess quality, and I first applied this when comparing lead quality from instant forms vs. website forms. Have these registrants performed a funnel event by clicking specific links in my emails?

Once again, the count of “quality clicks” is incomplete, but we can make some initial evaluations. Here’s where we stand at this moment…

While Advantage+ Audience led to a higher volume of registrations, it was not at the expense of quality. It generated 17% more quality registrants than Detailed Targeting and 54% more than Lookalikes.

These numbers are imperfect and incomplete since, like I said, a true evaluation of whether or not the registrations were “quality” can’t be made for quite some time. But, it at least shows the difference in engagement. If someone hasn’t engaged with my emails, they are less likely to be an eventual customer.

Remarketing and Prospecting Distribution

I promised I’d get back to this when I explained using the Sales objective at the top. I could have used the Leads objective (or even Engagement), but I chose Sales for one reason: Access to data using Audience Segments.

When running a Sales campaign (Advantage+ Shopping or manual), some advertisers have access to Audience Segments for reporting.

Audience Segments

Once you define your Engaged Audience and Existing Customers, you can use breakdowns to see how your budget and results are distributed between remarketing (Engaged Audience and Existing Customers) and prospecting (New Audience).

This is something that isn’t necessarily incredibly meaningful, but I find it interesting. It gives us an idea of how Meta finds the people who are likely to perform our goal event. I used this as the primary way to compare distribution using four different targeting approaches in another test.

Within that test, I saw remarketing take up 25 to 35% of my budget, regardless of the targeting approach. In that case, I ran each ad set concurrently and didn’t run an A/B test. This test could be different since it’s a true A/B test.

Here are the breakdowns…

Breakdown by Audience Segments

It’s a lot of numbers, but the distribution between remarketing and prospecting is very similar in all three cases.

  • Advantage+ Audience: 9.2% remarketing, 90.8% prospecting
  • Detailed Targeting: 10.1% remarketing, 89.9% prospecting
  • Lookalikes: 8.7% remarketing, 91.3% prospecting

More remarketing happened with Detailed Targeting, though I wouldn’t consider that statistically significant. The type of remarketing was a bit more significant, however. Advantage+ Audience spent $10 on existing customers, whereas the other two approaches spent around $5 or under. Not a lot, obviously.

Maybe somewhat surprising is that more remarketing registrations came from using Detailed Targeting (25 vs. 16 for Lookalikes and 14 for Advantage+ Audience). While that creates a seemingly significant percentage difference, we’re also dealing with very small sample sizes now that may be impacted by randomness.

My primary takeaway is that distribution to remarketing and prospecting is about the same for all three approaches. My theory regarding why it’s so much less than when I ran my other three tests is that an A/B test splits a finite (and comparatively smaller) remarketing audience into three. There isn’t as much remarketing to go around.

Potential Contributing Factors

It’s important to understand that my results are unique. They are impacted by factors that are unique to my situation and you may see different results.

1. The Detailed Targeting selected.

Some advertisers swear by detailed targeting. Maybe they have certain options that are much more precise and make using them an advantage. Maybe I would have seen different results had I used a different selection of interests and behaviors.

These things are all true. But, you should also remember that no matter what our selections, the audience is expanded when optimizing for conversions. This is why I have my doubts regarding the impact of using specific detailed targeting options.

2. The Lookalike Audiences selected.

The lookalike audiences that I selected are based on sources that are important to my business. They include both prior registrants and paying customers. But, this was also my worst performing ad set. Maybe different lookalike audiences would have changed things.

Once again, I’m not wholly convinced of this because lookalike audiences are expanded when optimizing for conversions. I doubt that any of my lookalike audiences are so different that the algorithm wouldn’t eventually end up showing my ads to the same people once expanded.

But, I can’t ignore the possibility. I was surprised that lookalikes performed so much worse than the other two, and the ones I selected could have contributed to those results.

3. Activity and history on my account.

This one is based primarily on theory because Meta isn’t particularly clear about it. We know that if audience suggestions aren’t provided when using Advantage+ Audience, Meta will prioritize conversion history, pixel data, and prior engagement with your ads.

Advantage+ Audience

It’s possible that I’m at an advantage because I have extensive history on my account. My website drives more than 100,000 visitors per month. There is a history of about a decade of pixel data.

Yes, this is possible. We just don’t know that for sure. Many advertisers jump into a new account and automatically assume that Advantage+ Audience won’t be effective without that history. Test it before making that assumption.

4. Industry.

It’s entirely possible that how each of these three approaches performs will differ based on the industry. Maybe some industries have detailed targeting that clearly makes a difference. That doesn’t seem to be the case for me, even though there are detailed targeting options that clearly fit my potential customer.

And… once again, we can’t ignore that your detailed targeting inputs will be expanded when optimizing for conversions.

5. Location.

Some of the responses I’ve received from advertisers regarding the viability of Advantage+ Audience refer specifically to their location. They say that Advantage+ Audience does not work where they are. Maybe that’s the case. I can’t say for sure.

6. Randomness.

One of the biggest mistakes that advertisers make is that they fail to account for randomness. Especially when results are close, do not ignore the potential impact of random distribution. The more data we have, the less it becomes a factor.

One of the tests on my list is to compare the results of three ad sets with identical targeting. What will happen? I’m not sure. But, a piece of me is hoping for chaos.

What it Means

As I said at the top, my goal with this test wasn’t to prove anything universally. My primary goal was to validate or invalidate my assumptions. I’ve been using Advantage+ Audience for a while now. I haven’t used detailed targeting or lookalikes for quite some time. But, these results validate that my approach is working for me.

Another goal for publishing these results is to inspire advertisers to create similar tests. Whether you use Advantage+ Audience, detailed targeting, lookalike audiences, or something else, validate or invalidate your assumptions.

A far too common response that I get from advertisers about why they don’t use Advantage+ Audience is something along the lines of, “This will never work for me because…” It’s based on an assumption.

That assumption could be because of an inability to restrict gender and age with Advantage+ Audience. But, as I’ve discussed, you should test that assumption as well — especially when optimizing for purchases.

Bottom line: These results mean that Advantage+ Audience without suggestions can be just as effective as, if not more effective than, detailed targeting and lookalikes. If that’s the case, you can save a lot of time and energy worrying about your targeting.

Test this yourself and report back.

Your Turn

Have you run a similar A/B test of targeting strategies? What did you learn?

Let me know in the comments below!

A Guide to Dynamic Creative in Meta Ads Manager
https://www.jonloomer.com/dynamic-creative/
Mon, 29 Apr 2024

While Dynamic Creative was introduced in 2017, it's possible that this feature has never been more relevant. Here's how to approach using it.


Dynamic Creative was discontinued for Sales and App Promotion objectives in June of 2024. Meta recommends using Flexible Ad Format in those cases instead. Read about the details of this update here.

Dynamic Creative was first rolled out in 2017. And yet, you can make an argument that it’s a feature that has never been more relevant.

The landscape has changed. Best practices are evolving. While testing in years past often focused on differences in the ad set, it’s now shifted almost entirely to the ad.

There are four primary ways to test ad creative:

  1. Run multiple ads
  2. Use the Text Variations feature
  3. Dynamic Creative
  4. A/B Test

The focus of this post, of course, is on Dynamic Creative: How it works, how to use it, best practices, and viewing results.

How it Works

When you create a campaign, there may be several approaches that you want to try when it comes to creative. Different images, videos, and text. You could create separate ads to test out these variations. Or you can use Dynamic Creative.

Dynamic Creative allows you to submit multiple images or videos, primary text, headlines, descriptions, and call-to-action buttons for a single ad. Meta will then mix and match to show variations based on different combinations in an effort to get you better results.

You can submit the following creative variations:

  • Images or videos (or a combination): Up to 10 total
  • Primary text: Up to 5
  • Headlines: Up to 5
  • Descriptions: Up to 5
  • CTA Buttons: Up to 5

These variations won’t be shown equally and it’s not a true split test, but it’s a scalable approach to creative variations. Instead of creating 10 or 20 ads (that may or may not get shown) based on specific copy and creative variations, submit up to 30 creative assets and let Meta find what works.
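
To put "mix and match" into perspective, a quick bit of arithmetic shows how many distinct variations those 30 assets can produce at the maximums listed above.

```python
# Maximum assets per Dynamic Creative ad (the limits listed above).
creatives = 10       # images and/or videos
primary_texts = 5
headlines = 5
descriptions = 5
cta_buttons = 5

combinations = creatives * primary_texts * headlines * descriptions * cta_buttons
print(f"Assets submitted: {creatives + primary_texts + headlines + descriptions + cta_buttons}")  # 30
print(f"Possible combinations: {combinations:,}")  # 6,250
```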

You’re unlikely to reach it, but know that you can create a maximum of 1,000 Dynamic Creative ads.

How to Set Up

Dynamic Creative is available using any campaign objective. Within the ad set, toggle Dynamic Creative on.

Dynamic Creative

When you do, you may get this message…

Now, create your ad. As noted in the message above, Catalog ads will be deactivated. Select single image or video or carousel as the ad format.

Dynamic Creative

Add up to 10 images or videos, or a combination thereof.

Dynamic Creative

If you use the carousel format, you can only include up to 10 images.

Dynamic Creative

The ability to submit up to five primary text, headlines, and descriptions was originally a Dynamic Creative-only option. It’s now available for all ads.

Text Variations

And finally, add up to five CTA button options.

Dynamic Creative

Optimize Creative for Each Person

Optimize Creative for Each Person was originally unique to Dynamic Creative, but you can also find it when running Traffic campaigns without Dynamic Creative.

When using Dynamic Creative, it’s on by default but can be turned off.

Optimize Creative for Each Person

Enhancements include optimizations like cropping, applying a template, swapping text between fields, creating videos from your images, and more. Most, if not all, of these optimizations have been absorbed into Advantage+ Creative.

Optimize Creative for Each Person

Instead of having the option of turning on Advantage+ Creative when you run Dynamic Creative ads, you can turn on Optimize Creative for Each Person.

Best Practices

When does using Dynamic Creative make the most sense, and how can you get the most out of it? Here are a few thoughts

1. One ad versus multiple defined ads. You have lots of creative and text possibilities, but you don’t have a preferred approach. Instead of throwing multiple ads into the rotation, combine copy and creative and allow the algorithm to sort it out automatically.

2. You don’t care about finding a “winner.” This isn’t a split test, and you won’t find results that tell you which combination is the top performer. But, you’re okay with that.

3. Make sure the assets will work together. Keep in mind that each image and video needs to work with each primary option that you provide. Don’t craft text that refers to your video if you may also have images. It may be best to keep copy short and simple.

4. You don’t need to submit the maximum number of options. Just because you can submit 10 images or videos doesn’t mean you should, just as you don’t need five primary text options, headlines, descriptions, and CTA buttons. If you have a large budget, feel free to take advantage of it. Otherwise, limit what you submit to your best text and creative options.

Segment Your Results

Dynamic Creative isn’t for everyone, especially if you demand full control and transparency. You won’t be able to determine how text and creative are combined. If you want that, just create ads the way you want them. And you won’t see a detailed itemization of results by creative combination.

But, there are a couple of things that you can do…

First, you have access to a Breakdown feature for Dynamic Creative. While in the Ads tab, click the Breakdown drop-down menu and select “By Dynamic Creative Element.” You’ll then get access to breakdowns by creative, text, headline, description, or CTA button.

Breakdown by Dynamic Creative Element

You won’t get results by combination of these elements, but you can get a breakdown by each element. Here’s an example for primary text…

Breakdown by Dynamic Creative Element

While you can’t get results by creative combinations, you are able to manually view the top performing combinations by engagement.

You can access this information by using the instructions below…

Dynamic Creative

Your Turn

Do you use Dynamic Creative? What results do you see?

Let me know in the comments below!

4 Ways to Approach Creative Testing with Meta Advertising
https://www.jonloomer.com/creative-testing-meta-advertising/
Mon, 22 Apr 2024

How should you approach creative testing? Well, it depends on the situation, what you care about, and what you're trying to accomplish...


Creative testing with Meta advertising is an inexact science. It seems every advertiser has their own approach. Some will swear it’s the “right way,” but the best option for you is more nuanced.

It depends on the situation, what you care about, and what you’re trying to accomplish.

In this post, we’ll cover four ways to approach creative testing:

  1. Create Multiple Ads in an Ad Set
  2. Use Text Variations
  3. Use Dynamic Creative
  4. Run an A/B Test

By the end, we’ll discuss what I do and how to determine what’s best for you.

1. Create Multiple Ads in an Ad Set

There’s nothing wrong with kicking it old school and simply creating multiple ads for a given ad set with different combinations of copy and creative. But, there are some basics to consider when doing this.

Ads won’t be shown equally, if at all.

If you’re a control freak, this will drive you crazy. Just because you created four ads doesn’t mean that Meta’s ad delivery algorithm will show them equally to help you understand what works best. In fact, one or more of the ads may not show at all.

There has to be a certain amount of letting go of control with this approach. You need to be okay with the fact that you might create an ad that doesn’t get shown. Or maybe your favorite ad won’t get shown the most.

Trust the algorithm, but understand its imperfections.

When you take this approach, you embrace the chaos and imperfection of it from the outset. Your ads won’t be shown equally and some may not be shown at all. You are trusting that the algorithm will use historical and real-time data to help deliver the right versions to the right people.

But the algorithm will also make these decisions very quickly because, in most cases, any differences in ad performance won’t be statistically significant.

This isn’t a true split test.

If you have multiple ads in an ad set, it’s not a true A/B split test. The same user can see more than one version of your ads. In many cases, this is preferred anyway. But, that overlap means that you’re not going to see results based on a true, scientific split test.

And you need to be okay with that.

Consider a limit of six ads.

Assuming you aren’t running an Advantage+ Shopping Campaign, Meta recommends using no more than six ads in an ad set. Beyond six, any additional ads offer only marginal benefit.

Create the ads at initial publication, if possible.

Every time you publish a new ad, you’ll restart the learning phase. Not every advertiser sees this as a big deal, and there are times when it definitely doesn’t matter. But, it’s typically best to create all of your ads at the start, rather than doing it later on and having to roll the dice on messing with results.

2. Use Text Variations

This feature has also been named Multiple Text Options or Multiple Text Optimization in the past. No matter what you call it, the functionality is the same.

When assembling your ad, you can create up to five variations of your primary text, headline, and description.

Text Variations

This is a great way to create variations while using only one ad. Meta will show combinations of text to people based on what they’re more likely to respond to. That could be due to what other people respond to, what the individual user has responded to in the past, the placement, and more.

Meta also generates primary text options that you can choose from using AI.

AI-generated Text Variations

I’ve found these rarely match up with my voice, so I don’t use them. But, it’s something worth testing out.

If you require control, you will not like this feature. There is no way of dictating how much a text variation is used — or whether it’s used at all. And since all of the variations contribute to the same ad, you won’t be able to see which combination led to the best results.

What you can do, though, is use the Breakdown by Dynamic Creative Element.

Breakdown by Dynamic Creative Element

A separate row will be generated for each variation, but you won’t see which combination performed the best.

Today’s advertiser needs to be okay with not always being in control while putting a certain amount of trust in the algorithm. This is a feature I regularly use, and I’m not overly concerned about “finding a winner.” Instead, I use it knowing that if I give the algorithm more options, I give it more opportunities to get the best possible results.

3. Use Dynamic Creative

Dynamic Creative was discontinued in June of 2024. Meta recommends using Flexible Ad Format instead. Read about the details of this update here.

Dynamic Creative is not a new feature (I first wrote about it in 2017), but it’s still useful.

Dynamic Creative combines multiple images, videos, and other ad components (primary text, description, headline, and CTA button) to find the best possible results while creating only one ad. This is similar to the Text Variations option, but it also includes creative and CTA buttons.

This feature is turned on in the ad set.

Dynamic Creative

When using “Single Image or Video,” you can upload a combination of up to 10 images and videos. It could be all images, all videos, or a combination thereof.

Dynamic Creative

You have the option of turning on Optimize Creative for Each Person. When this is on, ad creative and destinations vary depending on what an individual person may respond to.

Dynamic Creative

You can also test various CTA button options.

Like every option so far, this is not a true split test. If you’re hoping to test specific options against one another, this is not the option you want to use. Dynamic Creative is best for situations where you have several creative options, but you’re willing to give up control to the ad delivery algorithm.

As is the case with Text Variations, you will not see which combination of creative, text, and CTA button performs best. But, you can use Breakdowns to see how each individual item performed.

4. Run an A/B Test

A true A/B test is ideal for the control freak who has something very specific that needs to be tested. You want to find the best performer between two or more ads, free of overlap.

While you can run the options above indefinitely, an A/B test is meant to be temporary. You find a winner so that you can leverage it and turn off the losing variation. That’s why you’ll also need the benefit of time to run an A/B test.

Finally, keep in mind that your results are unlikely to be ideal during an A/B test. Your ads won’t be distributed optimally during this test because the entire goal is to segment your audience so that one half sees one version while the other half sees the other.

If you want to create a variation of an existing ad to test against the original, select the existing ad and click “Duplicate.” Then select “New A/B Test.”

A/B Test

For the variable that you want to test, select “Creative.” Then select the ad that you want to copy.

A/B Test

Pick the key metric that will determine a winner.

A/B Test

Then set a start and end date for the test. You can choose to have the test end early if a winner is found before the end date.

Unlike the other options listed in this post, an A/B test will give you a true winner — assuming that a winner is found and is statistically significant.

Which is Best for You?

The option that you choose for testing creative depends on your situation and what is important to you.

If you desire control and certainty and want to determine which ad is the top performer, use the A/B test option.

Otherwise, you’ll run a combination of the other three options. I rarely have a deep desire to know which ad is the top performer with an A/B test. It suggests that I already found two ads with preferred combinations of text and creative. And that is almost never the case.

When I create an ad set, I typically use multiple ads. Each ad will utilize a different format (video, image, or carousel), or maybe a different version of one of those formats. And each ad utilizes Text Variations.

Admittedly, I haven’t used Dynamic Creative for several years. But, I have heard that some advertisers still swear by it, and it’s not all that different from using the Text Variations optimization.

Like everything else, know your needs and style. Do what works for you.

Your Turn

Which approach do you take to creative testing?

Let me know in the comments below!

How to Create an A/B Test in Meta Experiments
https://www.jonloomer.com/how-to-create-an-a-b-test-in-meta-experiments/
Wed, 16 Aug 2023

An A/B test in Meta Experiments helps you isolate the impact of a single variable (targeting, optimization, placement) on performance.


There are many different ways you can create a split test using Meta’s built-in testing tools. In this post, we’re going to walk through creating an A/B Test within the Experiments section.

This will allow you to test two or more campaign groups, campaigns, or ad sets against one another in a controlled environment to determine which performs best based on a single variable.

Before we get to creating an A/B test in Experiments, let’s cover a little bit of background…

Understanding A/B Testing

The concept of an A/B test isn’t as simple as comparing your results between two campaigns, ad sets, or ads. While you can do that, too, the results are unscientific.

What makes an A/B test different is that it eliminates overlap to help isolate the value of a particular variable (optimization, targeting, placements, or creative). The targeting pools will be split randomly to make sure that targeted users only see an ad from one of the variations.

Without a true A/B test, there is less confidence in how much a single variable impacted your results.
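
To make the no-overlap idea concrete, here's a conceptual sketch of a deterministic split. To be clear, this is only an illustration of the principle, not how Meta actually implements it: each user lands in exactly one test arm, so nobody sees ads from two variations.

```python
import hashlib

def assign_arm(user_id: str, test_name: str, arms: int = 2) -> int:
    """Deterministically bucket a user into exactly one test arm (illustration only)."""
    digest = hashlib.sha256(f"{test_name}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % arms

# Every user lands in one arm and always the same one, so nobody sees both variations.
for uid in ["user_101", "user_102", "user_103", "user_104"]:
    print(uid, "-> variation", "AB"[assign_arm(uid, "targeting-test")])
```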

About A/B Tests in Experiments

My preferred method for creating A/B tests with Meta ads is with the Experiments tool. This allows you to select currently running campaigns or ad sets to test against one another.

This is different from the original approach to A/B testing with Meta ads. That method required you to create the variations that would be tested against one another. When the test ended, those ads stopped delivering. You would take what you learned from that test to create a new campaign.

With Experiments, you select campaigns or ad sets that are already running. When you select the schedule for your test, delivery will shift to prevent overlap during those days. Once the test is complete, the delivery of ads will go back to normal.

Create a Test

Go to the Experiments section in your Ads Manager Tools menu under “Analyze and Report.”

Meta Ads Experiments

Click “Create Test” in the left sidebar menu (or the green “Create New Test” button at the top right).

Meta Ads Experiments

You will see options for the types of tests you can set up. Examples may be A/B Test, Brand Survey, and Cross-Channel Conversion Optimization Test.

Meta Ads Experiments

We’re going to run the A/B Test. Click the “Get Started” button.

First, we’ll need to determine what we want to test: Campaign Groups, Campaigns, or Ad Sets.

Meta Ads Experiments

If you test campaign groups, you’ll see the cumulative results for each group of campaigns. You would name each group and select the campaigns that are within it.

Meta Ads Experiments

When testing campaigns, remember that all ad sets within each campaign count toward the results.

Meta Ads Experiments

Here’s an example of testing three different ad sets…

Meta Ads Experiments

Regardless of whether you test campaign groups, campaigns, or ad sets, you will need to select at least two variations and up to five. Meta strongly recommends that the variations that you select are identical except for a single variable. This helps isolate the impact of that variable.

In the example above, I'm testing three different ad sets, each optimized for a different performance goal. Everything else was set up identically: targeting, placements, and creative.

Next, schedule when you want this test to run. Keep in mind that these campaigns or ad sets are already running. This schedule will determine when the test will run.

If you want, you can have the test end early if a winner is found. Otherwise, the test will run for the length of the schedule that you select.

Name your test so that when you are looking at results you’ll easily remember what was tested.

A/B Tests in Experiments

Finally, choose the key metric that will determine your winner. You can choose from some recommended metrics or from your custom conversions and standard events.

A/B Tests in Experiments

The metric you choose may not be the performance goal you used. The key metric should be whatever your ultimate measure of success is for these campaigns or ad sets.

You can also include up to nine “Additional Metrics.”

A/B Tests in Experiments

Unlike the key metric, these metrics won’t determine success. Instead, they will be included in your report to help add context.
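As a rough illustration of that relationship, here is a small sketch with hypothetical numbers: the key metric (cost per purchase in this made-up example) decides the winner, while the additional metrics are reported only as context.

```python
# Hypothetical results for three ad sets; all numbers are made up for illustration.
results = {
    "Ad Set A": {"cost_per_purchase": 24.10, "ctr": 1.8, "cpm": 19.50},
    "Ad Set B": {"cost_per_purchase": 19.75, "ctr": 1.2, "cpm": 22.00},
    "Ad Set C": {"cost_per_purchase": 27.40, "ctr": 2.4, "cpm": 14.80},
}

key_metric = "cost_per_purchase"  # the single metric that decides the winner
winner = min(results, key=lambda name: results[name][key_metric])

print(f"Winner by {key_metric}: {winner}")
for name, metrics in results.items():
    # Additional metrics (CTR, CPM) add context only; they don't pick the winner.
    print(name, metrics)
```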

View Results

Click “View Results” in the left-hand menu within Experiments.

A/B Tests in Experiments

You will see highlights of the experiments that you have run or are running. This provides a snapshot of how each participant performed against your key metric.

A/B Tests in Experiments

Click “View Report” to see additional details.

You can view results by your key metric, additional metrics (if you provided any), or common metrics.

A/B Tests in Experiments

There is an overview of the participants in the test…

A/B Tests in Experiments

A table to compare metrics (this will include additional metrics if you provided them when setting up the test)…

A/B Tests in Experiments

And a breakdown by age…

A/B Tests in Experiments

And gender…

A/B Tests in Experiments

A Note on Testing Results

Keep in mind that you have a single goal when running an A/B test: You want to figure out which variation leads to the results that you want based on a single variable.

Don’t be distracted from that goal. I’ve found that my results tend to be worse during A/B tests, which makes some sense since targeting is restricted to prevent overlap.

What you do with the information learned from your A/B test is up to you. It’s possible that Meta will have low confidence in the results (you’ll see the percentage chance that you’d get the same results if you run the test again). In that case, you may not want to act on it.

But if confidence is high, you may want to turn off the variation(s) that lost and increase (slowly) the budget for the winning variation.
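Meta doesn't publish exactly how it computes that confidence figure, but a standard two-proportion z-test on hypothetical conversion counts gives a feel for the kind of statistics involved. This is a generic sketch, not Meta's calculation.

```python
# A generic two-proportion z-test on hypothetical conversion counts.
# This is NOT Meta's confidence calculation -- just a standard way to gauge
# whether the difference between two variations is likely to be real.
from math import sqrt, erf

def z_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = z_test(conv_a=120, n_a=10_000, conv_b=95, n_b=10_000)
print(f"z = {z:.2f}, p-value = {p:.3f}")  # smaller p-value -> more confidence the lift is real
```

The smaller the sample, the noisier the result, which is why a test that ends quickly with similar numbers should be treated with caution.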

Watch Video

I recorded a video about this, too…

Your Turn

Have you run A/B tests in Experiments? What have you learned?

Let me know in the comments below!

The post How to Create an A/B Test in Meta Experiments appeared first on Jon Loomer Digital.

]]>
https://www.jonloomer.com/how-to-create-an-a-b-test-in-meta-experiments/feed/ 0
2 Ways to Create A/B Split Tests of Organic Content on Facebook https://www.jonloomer.com/create-a-b-split-tests-of-organic-content-on-facebook/ https://www.jonloomer.com/create-a-b-split-tests-of-organic-content-on-facebook/#respond Wed, 01 Jun 2022 18:24:31 +0000 https://www.jonloomer.com/?p=36267

Did you know that you can create A/B split tests of organic content on Facebook? Here's how they work and what we can learn from them...

The post 2 Ways to Create A/B Split Tests of Organic Content on Facebook appeared first on Jon Loomer Digital.

]]>

A/B split tests are helpful for uncovering the most effective posts or ads in a scientific way. Did you know that you can do this with organic content?

You’re probably familiar with A/B split tests for Facebook ads. This process allows you to test variables like targeting, optimization, and creative in a scientific way, showing variations to random groups without overlap.

What’s great is that a similar process is available for organic posts that you publish to your page. In fact, there are two different ways to do this: one through Meta Business Suite and one through Creator Studio.

It’s entirely possible that this isn’t fully rolled out yet. In fact, I’ve seen some bugs in one version, and it also seems that what you see may be different from person to person.

With all of that said, I want to walk through the two ways that I can create A/B split tests of organic content…

Meta Business Suite

From the “Posts & Stories” section of Meta Business Suite, create a post.

A/B Test

Click the “A/B Test” icon to start a test. It will look like this…

A/B Test

You can include up to four versions of a post. All you do is create each version by providing the copy and either an image or video. For links, either provide the URL in the copy or paste it into the link preview section.

When you’re done, you can preview all versions of your post before publishing…

A/B Test

You can even schedule the post if you’d like.

A/B Test

The test will run for 30 minutes. A different segment of your page followers will see each version. Once the test is complete, the winning version will be published to your Facebook page. The other version(s) may still exist in news feeds as people engage, comment, and share.

Creator Studio

The approach using Meta Business Suite is extremely simple. But, maybe you want to add a little bit of complexity? You can do that with Creator Studio.

For whatever reason, some people have easy access to Creator Studio while others don’t. From my page and Business Suite, I no longer get links to the tool.

You should be able to access Creator Studio by clicking this link.

From there, go to Post Testing under “Tools.”

A/B Test

Click “Start a Test” and it will look like this…

A/B Test

This is a lot different from the Business Suite version, right?

The first step will be to select a Content Type.

A/B Test

Just know that video posts can only be compared against other video posts. You can mix and match between image, link, and text posts.

If you do choose to test video posts, you’ll need to click the “Edit” button at the far right after uploading the file.

A/B Test

This will give you access to all of the various customizations that are unique to videos.

A/B Test

When you’re done, it will look something like this…

A/B Test

As you can see, you can publish or schedule the test. Once you click to schedule or publish, you’ll get some more settings…

A/B Test

This is one of the big differences between A/B tests from Meta Business Suite and Creator Studio (at least for me — see the upcoming section about bugs). You can select from various “key metrics.”

A/B Test

This way, you choose the metric Meta uses to determine the winner: Comments, Shares, Reactions, People Reached, or Link Clicks.

You can also customize how long you want the test to run.

A/B Test

Choose from 10 minutes, 30 minutes, 1 hour, 3 hours, and 24 hours. Once the test is complete, the winning post will be published to your page.

A Note on Bugs

It’s entirely possible there’s something I’m not seeing when creating a test from Meta Business Suite. When I choose to publish or schedule a test, I get the following…

A/B Test

That “moment” takes many moments. Actually, I’ve never found an end to it. Originally, I thought it just took a while to publish each individual version. Then I thought maybe it wouldn’t “complete” until the test was over. But I waited that out, and this message kept showing.

I’m not sure if another important screen is supposed to appear next. It’s possible that additional customizations that happen in Creator Studio are intended to appear here. No idea. I do know that the test still ran even though this page didn’t finish loading.

It still does appear that there are significant differences between Business Suite and Creator Studio A/B split tests. But, I can’t say with 100% certainty that all differences that I’m seeing are intentional.

Viewing Results

Once again, I believe what I see may be different from what others see.

If I create an A/B test in Meta Business Suite, there is nothing (in that section, at least) that shows me the test results. I can view the individual post versions and metrics associated with them, but that’s not unique. It’s the same as any other post.

Creator Studio, though, shows clear test results. These are found within the same Post Testing section under Tools.

A/B Test

And guess what? These results include all A/B tests of organic posts, including those that originated from Meta Business Suite.

Click through to see a side-by-side comparison.

A/B Test

First, I can’t ignore that the “Reached” stats are clearly wrong for any image post. There’s no way that a post that reached one person has seven reactions.

Second, this is part of the potential problem of these short tests. This shows the results for a very short period of time when the test ran. Is that enough time to choose a winner? It’s one reason to consider going with the 3-hour or 24-hour options in Creator Studio.

You can also choose to track the results of each variant after the test is complete. Of course, the winning post will see the biggest benefit of viewing results this way.

A/B Test

If you don’t like the post that Facebook chose, you can override the selection and publish a different one to your page. That is done at the top.

A/B Test

Is This Useful?

I have to admit that I avoided experimenting with A/B split tests of organic posts for quite some time. It sounded great in theory, but I was also negatively influenced by how A/B tests work for ads and assumed this would be pretty much the same.

A/B tests for ads take anywhere from a few days to four weeks. I have no patience for doing this with every organic post. There’s also a matter of the overall usefulness of A/B tests for ads since they are best suited for learning something over a few weeks that you can then apply in the future.

But, this is way different in the best ways. The test is super easy. The test will be completed in anywhere from 10 minutes to 24 hours. The winning post is published to your page.

Also, this is really the only way you can have Facebook optimize for an action when publishing an organic post. If your goal is link clicks, you can have Facebook choose the version that results in the most link clicks. That’s great!

I’ve only started using this feature so far, but early returns are solid. I’ve run two tests, and one resulted in a winning post that has reached over 17,000 people with close to 2,000 engagements.

Of course, don’t expect this to be miraculous either. If you submit four post versions that all stink, expect to get bad results. This at least allows you more opportunities to find something that hits — and also learn from what works and what doesn’t.

An Ads Tie-In

While this is for organic posts, A/B testing can absolutely benefit our advertising.

One of the negatives of A/B testing of ad creative is that you can spend a lot of money simply running a test to see which ad is most effective. But, you could move that effort to organic testing. Find the post that gets the most link clicks (or whatever you want), then put money behind that winning post.

It would seem that this could make some of the guessing, experimentation, and budget a bit more efficient, at least in theory.

Your Turn

Have you experimented with A/B tests of organic posts? What results are you seeing?

Let me know in the comments below!

The post 2 Ways to Create A/B Split Tests of Organic Content on Facebook appeared first on Jon Loomer Digital.

]]>
https://www.jonloomer.com/create-a-b-split-tests-of-organic-content-on-facebook/feed/ 0
A Facebook Ads Experimentation Guide https://www.jonloomer.com/facebook-ads-experimentation/ https://www.jonloomer.com/facebook-ads-experimentation/#respond Tue, 26 Oct 2021 00:04:26 +0000 https://www.jonloomer.com/?p=33679

This Facebook ads experimentation guide focuses on the 18 areas that are prime for testing, as well as what you should consider for each.

The post A Facebook Ads Experimentation Guide appeared first on Jon Loomer Digital.

]]>

The most important characteristic of a successful Facebook advertiser is the willingness to experiment. It is your experimentation with Facebook ads that will lead to knowledge, solutions, and success.

Instead of asking, “Should I…?” about a basic Facebook ads strategy, know that there is very little black and white (outside of the rules themselves). What works for you may not work for me, and vice versa. If you want to know whether something will work, try it!

It’s really easy to get stuck in your ways, too. As someone who has been running my own business with Facebook ads for a decade now, I fully appreciate how quickly you can rely solely on tried and true methods. It’s so easy to end up with a templated approach.

The problem, of course, is that while your approach remains unchanged, the advertising environment is evolving quickly. It’s all so much different today than it even was a year ago. Fail to evolve your strategies, and you can expect to get buried in frustration.

You should experiment often. It’s what keeps me sharp and helps me uncover things that I’d never know without trying something new. I’m working on an experiment right now that I’ll explain in more detail at the bottom of this post.

For now, let’s cover the primary buckets of experimentation opportunities you should be taking with your Facebook advertising…

1. A/B Testing

If you’re trying to figure out whether one thing works better than another, Facebook’s built-in A/B testing tool is the only way to get a true, scientific test without overlap. This is the best way to determine the best strategies based on things like image, video, text, targeting, and more.

Facebook Split Testing

A/B testing isn’t something you should be doing for the long term. It’s a short-term test (1-30 days) to help you understand what works best going forward.

You don’t need to use A/B testing in all scenarios. Maybe you’re fine without a true, scientific test and just want to run separate ad sets to different audiences or separate ads with different creative options. All of that is fine. Facebook’s optimization will also help focus on what is working best in those cases.

Here is some additional documentation on A/B testing:

2. Campaign Budget Optimization

Should you create multiple ad sets with their own, separate budgets, or should you utilize Campaign Budget Optimization (CBO)?

Facebook Campaign Budget Optimization

If you turn on CBO, Facebook will distribute your budget “optimally” between ad sets to get the most results. So, you could set a $50 daily budget with CBO using two ad sets, and Facebook will move budget between those ad sets based on the results each one generates.

If you don’t use CBO, you would set individual budgets at the ad set level. Here, of course, you are given more control. Maybe you are okay spending more per action for a particular audience. Maybe you don’t trust Facebook’s optimization.

I prefer the control, but I will occasionally experiment with CBO, particularly when the audiences are similar in size.
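As a rough sketch of the general idea (not Meta's actual CBO algorithm), here is how a shared daily budget might tilt toward whichever ad set is converting more efficiently, using hypothetical numbers.

```python
# A rough illustration of budget shifting toward the cheaper ad set.
# This is NOT Meta's actual CBO algorithm; it only shows the general idea of
# allocating more of a shared budget to the ad set that converts more efficiently.
daily_budget = 50.0

# Hypothetical recent performance for two ad sets.
recent = {
    "Ad Set 1": {"spend": 25.0, "conversions": 5},   # $5.00 per conversion
    "Ad Set 2": {"spend": 25.0, "conversions": 2},   # $12.50 per conversion
}

# Weight each ad set by conversions per dollar (higher efficiency -> more budget).
efficiency = {name: r["conversions"] / r["spend"] for name, r in recent.items()}
total = sum(efficiency.values())

for name, eff in efficiency.items():
    share = daily_budget * eff / total
    print(f"{name}: ${share:.2f} of the ${daily_budget:.0f} daily budget")
```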

3. Budgeting

I understand that this is a tough one to test. Either you have the budget available or you don’t.

But keep in mind that volume drives Facebook ad optimization. If you struggle to generate enough volume at $10 per day to exit the learning phase, Facebook may struggle to get you the results that you want. Maybe spending $50 or $100 per day will get that volume, and everything will change.

Or maybe it won’t? Bottom line is that it’s nice to be able to try it and find out.
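A quick back-of-the-envelope check can at least tell you whether a budget is in the right ballpark. The sketch below assumes the commonly cited guideline of roughly 50 optimization events per 7-day window to exit the learning phase, plus a hypothetical $10 cost per conversion; swap in your own numbers.

```python
# Back-of-the-envelope check on whether a budget can generate enough volume.
# Assumes the commonly cited threshold of roughly 50 optimization events per
# 7 days to exit the learning phase; your actual cost per conversion will vary.
LEARNING_PHASE_EVENTS = 50  # per 7-day window (approximate guidance)

def weekly_conversions(daily_budget: float, cost_per_conversion: float) -> float:
    return daily_budget * 7 / cost_per_conversion

for budget in (10, 50, 100):
    weekly = weekly_conversions(budget, cost_per_conversion=10.0)
    status = "likely exits learning" if weekly >= LEARNING_PHASE_EVENTS else "likely stuck in learning"
    print(f"${budget}/day -> ~{weekly:.0f} conversions/week ({status})")
```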

4. Daily vs. Lifetime

This is one that I can confidently say I am stuck in my ways. I have always used Daily budgeting.

Daily Lifetime Facebook Budget

It’s not because I get better results with Daily budgets. It’s just that I feel like I have a far better understanding of what is happening and can easily adjust. Getting great results? Maybe I spend a little more per day. Results are dropping? Maybe I slow it down.

I’ve always felt that Lifetime budget is best in cases where you (or a client) have a rigid budget to work with. You know that you want to spend $500 during the month, and that’s it.

The rumor is that Facebook ad reps recommend Lifetime budgeting. Does it actually work better? If you care, test it out!

5. Dayparting

If you aren’t familiar with Dayparting, it is only available when using Lifetime budgets. It allows you to schedule your ads so that they only run during certain times or on certain days of the week.

Facebook Ads Dayparting

A few years back, I was determined to make dayparting work. I researched which specific times of day gave me the best results for a certain objective over a six-month period. Then, I focused only on those times.

The result? Costs actually went up.

Maybe you can get dayparting to work for you. I’ve never heard of anyone who has seen better results by using it. Maybe you want to use it because you need to have staff on hand during certain times. That may be the best argument for it.

6. Small Audiences vs. Large Audiences

If you ever ask Facebook advertisers whether it’s better to use small audiences or large audiences, you’re going to get a very wide range of answers. The best answer: It depends.

Facebook says you should use large audiences (in the millions) to create a large pool for optimization. Some advertisers absolutely swear by using the largest audiences possible. They even say removing any filtering at all and going with an entire region works best.

But, context likely matters. How large is the country? Is the brand well-known? How large is the brand’s built-in audience? Are there repeat customers? How are you optimizing?

I’ve found that optimization with large audiences for top-of-the-funnel actions produces garbage results. Actually, those results are good in the eyes of Facebook, but they aren’t quality results that lead to purchases (read this post about the problems with optimization).

I love micro-targeting my audience for those who have performed a specific action. Remarketing to tiny audiences of people who abandoned cart, too, often works well.

There is a place for large and small audiences. Feel free to experiment with both!

7. Lookalike Audiences vs. Interests and Behaviors

What’s more effective, targeting people based on interests or using Lookalike Audiences? And if you use Lookalike Audiences, what should be the source? And should you use 1% or 10%? Or something else? Should you layer interests on top of Lookalike Audiences and combine them?

So many questions, right? The problem is that there isn’t a universal answer. “It depends” is doing overtime here.

The performance of your interest targeting depends upon the quality of the interests you use. The quality of interests you have available to you often depends upon the industry you’re in.

Lookalike Audiences, too, will vary greatly in performance depending upon the quality of the source audience and how Facebook assembled it. Whether you use 1% or 10% is also greatly impacted by the countries used (and you may not need to debate this one anymore due to Lookalike Expansion).

There is no universal answer here because the factors involved will drastically impact the answer. My primary suggestion is that you explore both interests and Lookalike Audience targeting for top-of-the-funnel, knowing that this is their first exposure to your brand. Adjust your expectations accordingly.

But which interests and Lookalike Audiences should you use? Test, test, and test some more. This is where using the A/B Test option may prove valuable.

8. Country Targeting

If you’re a local brick-and-mortar business, this is easy. You probably only want to reach people within driving distance of your building.

But, if you create virtual products or ship globally, everything changes. Then the question becomes, “Which countries should you target?”

This gets really complicated. CPM (cost per 1,000 impressions) varies widely by country. But that’s also at least partially related to the quality and competition within those countries.

Some countries are much more prone to spam, bots, and people who can’t afford your products. Does that mean you shouldn’t target them? Maybe. Maybe not.

If you’ve been in business for a while, I encourage you to research where your paying customers come from. That should at least be a starting point.

Be careful, though. Let’s say that you have paying customers in the US, UK, Canada, Australia, and India. If you include all five countries in the same ad set, Facebook may dedicate more of your budget to India (particularly if you aren’t selling a product). The reason is that the CPMs are much lower there than in the other four countries.

You may want some control over that. This is where creating multiple ad sets for similarly priced countries may be a good idea.
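To see why that happens, here is a quick worked example with hypothetical CPMs (real CPMs vary constantly by auction, audience, and time). The same budget simply buys far more impressions in the cheapest country, so that is where delivery tends to flow when cost is the only thing being minimized.

```python
# Hypothetical CPMs to show why mixing countries in one ad set can skew delivery.
# These numbers are illustrative only.
cpms = {"US": 12.00, "UK": 9.00, "Canada": 10.00, "Australia": 11.00, "India": 0.90}

budget = 100.0
for country, cpm in cpms.items():
    impressions = budget / cpm * 1000
    print(f"${budget:.0f} spent entirely in {country}: ~{impressions:,.0f} impressions")

# If delivery simply chases the cheapest impressions or clicks, the same $100
# buys far more volume in the lowest-CPM country, which is where budget tends to flow.
```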

Have proper perspective here. Targeting globally all of the time using all objectives is probably a bad idea. Refusing to target certain countries may also limit your opportunities. Know the risks and know how to mitigate those risks.

9. All Placements vs. Select Placements

Look, I have a very strong opinion about one Facebook placement in particular. I have seen really bad stuff from Audience Network. It’s where the most click fraud and accidental clicks happen. The Audience Network is often the source of “too-good-to-be-true” results (because they are).

At least in the case of traffic and engagement campaigns. Leave that placement on and be prepared to throw some money away. And hope you catch it before it’s too late.

But, is that a hard-and-fast rule for everyone? Of course not. If you get sales from Audience Network, use it. There are a lot of placements these days. Find what works for you and what doesn’t.

Facebook recommends using “All Placements.” I can see that being fine when optimizing for a purchase. Otherwise, scrutinize your results, do a lot of testing, and figure out what works best for you.

10. Optimization Options

This is one of those areas that provides a wide variety of possibilities.

Should you optimize for conversions? Maybe. If you can get results. You may not have the budget to generate enough results to properly optimize. In that case, you may need to optimize for something else.

Does that mean optimizing for link clicks or Landing Page Views? Or Engagement? Maybe. But be wary of the results you get there (as discussed before).

When I micro-target, I don’t want Facebook to optimize for an event. I want to reach everyone within that tiny audience. In that case, I’ll optimize for Reach (or you could even use Daily Unique Reach).

You have a ton of options. There isn’t a one-size-fits-all approach. The main thing is to understand how optimization works and what Facebook needs to properly optimize. Make sure it fits your goals. Know the potential weaknesses of optimizing for a specific action.

I only started optimizing for Reach because it solved a problem I had. I’ve used it in a way that isn’t even how Facebook intends it (they see it more for broad audience, awareness targeting).

Know how it works. Know what you want to accomplish. Understand the weaknesses. Then test!

11. Bidding Options

If you need a place to start, don’t screw around with Facebook’s bidding options. Just roll with Facebook’s “Lowest Cost” defaults.

But once you’re comfortable, feel free to experiment with Cost Cap, Bid Cap, and Minimum ROAS bidding. They are all ways of manipulating how Facebook bids in the auction, rather than relying on Facebook to do it how they want.

Facebook Bid Cap

Sometimes, manual bidding just leads to frustration and a lack of delivery. It’s not magical. You can’t mysteriously tell Facebook you want $1 conversions and get $1 conversions. If you under-bid, you just won’t get any results.

Once you understand how it works, though, put bidding on your list of things to experiment with.

12. Attribution Setting

Oh, how attribution has changed…

Attribution, or how Facebook gives credit to an ad for a conversion, has evolved quite a bit over the years. The main thing to know is that the default Attribution Setting is now 7-day click and 1-day view.

This is determined within the ad set.

Facebook Attribution Setting

Not only is this how Facebook will optimize your conversions, but it’s how Facebook will report on them. If you change the Attribution Setting, it will change how Facebook selects your audience. It could also impact how many conversions are reported in your results.

Back in the day, this was no big deal. If you used 1-day click, for example, you could add columns to your reporting to see how many conversions occurred outside of that window. That option is no longer available.

So, now? You can still make the argument that 1-day click is best for opt-ins and low-cost purchases while 7-day click and 1-day view is best for higher-cost purchases. Still, I find I’m reluctant to make that change, fearing a loss of reporting.

It’s absolutely something to test, though!

13. Website vs. On-Facebook Experience

Ever since the iOS 14+ changes related to privacy and tracking, there has been more reason to run ads that keep people on Facebook. It’s understandable. Confidence in results goes way up in those cases.

That doesn’t mean there’s no longer a place for sending traffic to your website. I still do it a ton. It depends partially on your percentage of iOS traffic (mine is low) and appetite for accuracy.

Reasons to keep people on Facebook go up if your iOS traffic is high. Or you have a client with a horrible website experience.

Consider Facebook lead ads, instant experiences, video ads, and Facebook Shops. There are plenty of ways to run your business while keeping people on Facebook.

I still love to use both. Lead ads, for example, have their strengths and weaknesses. You’ll often get more sign-ups because they’re so easy to complete. But the quality of those leads may drop for that same reason.

Don’t throw all of your eggs into one basket, as they say. Experiment with keeping people on Facebook and sending them away.

14. CTAs (or None)

All these years later, and the jury’s still out on whether you should use Facebook CTA buttons with your ads. And if you do, which ones you should use.

Facebook CTA

Some CTA options may lead to more clicks, but are they the right clicks? Some CTAs may lead to fewer clicks, but people with higher intent.

One way to test this is by using Dynamic Creative.

Dynamic Creative

If you turn it on within the ad set, you can submit multiple CTA options for Facebook to test.

Dynamic Creative

15. Dynamic Ads vs. Manual

It makes a lot of sense for e-commerce businesses with hundreds or thousands of products to use Dynamic Ads to showcase the right ads to the right people while doing minimal work. Create an ad template, provide a product feed, and everything is done for you.

Of course, such ads based on a template may also be less effective on some level, as well. You may get better results by crafting a very specific message based on someone’s activity on your website who is interested in a very specific product.

There is room for both. Try both.

16. Ad Formats

You have options. Single image, collection, instant experience, carousel, video. You can even mix and match, to a point.

When determining which to use, I ask a simple question: What is the benefit of this ad format?

A single image removes options and may make a click away to your website more likely.

A collection or carousel provides your audience with options.

An Instant Experience allows you to tell a story and provide more information within a single ad.

A video will encourage engagement and allow you to communicate with a potential customer in a completely different way, but it may not lead to a click.

Start with the format that is most likely to satisfy your primary goal. From there, feel free to use Facebook’s A/B testing to test what works best. You can also simply create multiple ads, each with different formats, and allow Facebook to optimize.

17. Long Copy vs. Short Copy

It’s long been debated whether long copy or short copy is best. As always, we over-simplify this.

If you averaged the performance of all ads, you might find that ads with less copy perform best. That doesn’t mean that you should always use less copy. It just means that, for the average situation, it may be best.

Sometimes, long copy makes more sense. It’s great for the right audience. Use it for people who want to read. Use it to introduce something that people may not know about.

Short copy may be ideal in the case of an audience already knowing about your product or service. They only need to know about the deal.

This is where Dynamic Creative, Multiple Text Options, and Facebook’s A/B testing allow you to test this out.

Facebook Multiple Text Options

18. Creative Types

Which image should you use? Should it include a face? Bright colors? Or should you use a video? And how long should the video be?

Oh, goodness. So many questions.

Different images appeal to different people. Know your audience.

Long videos have the benefit of educating your audience. If someone sticks around for the entire video, they are a warm lead. Short videos can get the attention of your audience quickly and get your message across.

They all have a purpose. Test them out by creating multiple ads or by using Dynamic Creative or A/B testing.

Be Mindful of Generating Meaningful Results

Look, the possibilities are endless, as you can see. It is very easy to be overwhelmed by the limitless options and features.

Start simple.

Before you completely understand what you’re doing, use defaults. Facebook makes it about as easy as they can to create a campaign that might work without knowing what you’re doing. Just don’t mess with things if you don’t have to.

Also understand the importance of volume. Don’t create a whole bunch of options if you won’t spend the budget or generate the volume needed to produce meaningful results.

Experiment. Try new things. But create options within reason. Otherwise, you’ll only succeed at creating a messy campaign that doesn’t really tell you anything.

My Experiment

As I said at the top, I love creating experiments. I just started one last week, and it’s possible you’ve been seeing some of the ads.

The main goal of my experiment is to create ads that both reward my loyal audience and incentivize additional engagement. I’ll do this with micro-targeted audiences. I also want to see how small I can go with these audiences.

Ultimately, I want to figure out what my most engaged — and reachable — audience is. And I want to reward them with exclusive content.

For now, this is built around Reach optimization and a special type of Website Custom Audience. I’ve created audiences based on frequency of page views.

My first ad starts broad, targeting those who have viewed two pages or more of my website during the past 30 days (I eventually move to 180 days). But with each ad, I tighten up the audience. If I stick with frequency, I’ll keep climbing until Facebook no longer delivers the ads.

Facebook Ads Experiment

Of course, how I’m doing this is pretty darn complicated. Since it’s an experiment, I’m also adjusting on the fly as results come in.

How can you participate? Well, reading this blog post is a good start! The more pages of my website you view, the more likely it is you’ll see these ads.

One favor: Engage with the ads and let me know you’re seeing them! I’d love to hear what you think.

Watch Video

Your Turn

What kind of experiments do you like to run with Facebook ads?

Let me know in the comments below!

The post A Facebook Ads Experimentation Guide appeared first on Jon Loomer Digital.

]]>
https://www.jonloomer.com/facebook-ads-experimentation/feed/ 0
Create Facebook Split Test with Ad Set Duplication https://www.jonloomer.com/create-facebook-split-test-with-ad-set-duplication/ https://www.jonloomer.com/create-facebook-split-test-with-ad-set-duplication/#respond Mon, 02 Dec 2019 22:55:26 +0000 https://www.jonloomer.com/?p=29576 Facebook Ads Split Test

You can create an A/B split test of an actively running Facebook ad set by duplicating it. Here are the steps to getting this done...

The post Create Facebook Split Test with Ad Set Duplication appeared first on Jon Loomer Digital.

]]>
Facebook Ads Split Test

Want a quick way to split test different audiences with your Facebook ads? A new feature will help you.

Note that this is different from Facebook’s primary built-in split testing feature. With that feature, you would set up a split test with a new campaign based on creative, delivery optimization, audience, or placement.

Facebook Ads Split Test

This new split testing option is related only to actively running ad sets. Let’s take a closer look…

When You’d Use This

Unlike the primary built-in split testing feature where you’d plan a test from the start, this new option is for an actively running ad set.

In other words, an ad set is running and you decide at that point that you want to try something else. For example, the ad set is running fine and you don’t want to edit or stop it. But you’re curious how a variation of the targeting would perform.

Create a Split Test by Duplicating

Select the ad set that you want to split test and click the “Duplicate” button.

Facebook Ad Set Duplicate

You should then see an option to create a new split test.

Facebook Ad Set Split Test Duplication

The only options for me are Age and Gender or Saved Audience. Until recently I only had Age and Gender.

Note that within the Help Center, Facebook indicates that you can create a split test by duplicating an ad set using the following variables:

  • Audiences
  • Delivery Optimizations
  • Placements
  • Product Sets
  • Creative

That would more closely align this method with the built-in split testing feature. You may have these options. I don’t at the moment.

After making your selection, click the “Continue to Test and Learn” button.

Facebook Ad Set Split Test

First, determine how your targeting will be different in Version B.

Facebook Ad Set Split Test

You will have the ability to edit the ad set name, schedule, and budget.

Facebook Ad Set Split Test

Facebook will provide an “Estimated Test Power,” which is the likelihood that your test will detect a difference between variations, if one exists.

Facebook Ad Set Split Test

You may want to increase the budget to improve the test power. When you’re ready, click “Create Test.”

Facebook will then generate two new campaigns, one for each version…

Facebook Ad Set Split Test Duplication

Split Test Results

You’ll view the results within Ads Manager as you normally would…

Facebook Ads Split Test

But you can also view the test details within the Test and Learn section. First, you can access Test and Learn within the main Ads Manager menu…

Facebook Ads Split Test

There, you will see your test results under “Learn.”

Facebook Ads Split Test

In my case, Facebook found a winning campaign before the test completed. Click “View Report” for details.

Facebook Ads Split Test

While the results of the two campaigns were very similar, Facebook found a winner anyway. The test reveals that there would be a 66% chance of getting the same results if tested again.

Facebook Ads Split Test

Things to Consider

First, know that the two new campaigns will run in addition to your active ad set. You may want to pause the original ad set in the meantime to prevent unnecessary overlap.

Second, the budget will default to half of what was remaining in the existing ad set’s budget. As mentioned earlier, you can edit that during the duplication process.

Always consider the volume required to find a winner. If you are split testing a conversion campaign, how much do you need to spend to yield an adequate sample size? The more the product costs, the more you should expect to spend to generate those sales. Facebook is less likely to find a winner if you aren’t generating an adequate sample size.

You can increase volume either by extending the length of the campaign or increasing the daily budget.
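If you want a rough feel for the volume involved, the standard two-proportion sample-size approximation gives a ballpark figure per variation. The conversion rates below are hypothetical; plug in your own.

```python
# Rough sample-size estimate per variation for detecting a lift between two
# conversion rates (standard two-proportion approximation, ~95% confidence,
# ~80% power). The rates below are hypothetical.
from math import ceil

def sample_size_per_variation(p1: float, p2: float,
                              z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5 +
                 z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# e.g., detecting a lift from a 2% to a 2.5% conversion rate
n = sample_size_per_variation(0.02, 0.025)
print(f"~{n:,} people per variation")  # the pricier the product, the more spend this implies
```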

Split Test by Editing an Ad Set

You can also start a split test by editing an ad set. While this opens up more split testing flexibility than the duplication option, many restrictions apply.

We’ll cover this later in a separate post.

Your Turn

Have you experimented with this split testing feature? What kind of results are you seeing?

Let me know in the comments below!

The post Create Facebook Split Test with Ad Set Duplication appeared first on Jon Loomer Digital.

]]>
https://www.jonloomer.com/create-facebook-split-test-with-ad-set-duplication/feed/ 0
New Feature: Facebook Creative Split Testing https://www.jonloomer.com/facebook-creative-split-testing/ https://www.jonloomer.com/facebook-creative-split-testing/#comments Tue, 14 Nov 2017 16:54:36 +0000 https://www.jonloomer.com/?p=25791 Facebook Creative Split Testing

Facebook creative split testing is an update of Facebook's built-in split testing feature to help advertisers test to find the best-performing ad unit.

The post New Feature: Facebook Creative Split Testing appeared first on Jon Loomer Digital.

]]>
Facebook Creative Split Testing

Creative split testing of Facebook ads just became a whole lot easier with the update of Facebook’s built-in split testing feature.

Don’t confuse this update with the dynamic creative feature (which is also amazing). Facebook creative split testing is a great way to run tests to determine your best performing ad without audience overlap.

Let’s take a closer look…

Facebook Split Testing

I first told you about Facebook’s built-in split testing feature nearly a year ago.

To use the split testing feature, you’ll need to use one of the following objectives:

  • Reach
  • Traffic
  • App Installs
  • Video Views
  • Lead Generation
  • Conversions
  • Catalog Sales

While setting up a campaign, you’ll notice a checkbox for “Create Split Test” under the objective.

Facebook Split Testing

At the ad set level, you would then select the variable you want to test…

Facebook Split Testing

Until now, you could split test delivery optimization (Conversions vs. Link Clicks, for example), audience (Website Custom Audience vs. Page Connections, for example), and, added later, placement.

Facebook Ad Split Testing

One of the primary benefits of Facebook’s built-in split testing tool is the lack of audience overlap. Facebook randomly determines who sees each variation. No exclusions necessary.

Prior Creative Split Testing Options

While Facebook’s built-in split testing tool is great, it didn’t previously address creative. So, if you wanted to split test creative, it was difficult to make it a true A/B test without overlap.

In the past, you would have done one of two things:

1. Create two or more separate ads within the same ad set. By doing this, Facebook optimizes to provide the most impressions to the highest performing ad. The same audience will be served ads from the same pool of creative, but some will see only one variation while other users may see multiple.

2. Create multiple ad sets with a single ad variation within each. As long as you were careful with necessary exclusions, you could prevent overlap, but it takes more time.

Additionally, the second approach makes for a bad test because you’re comparing results from two potentially very different audiences rather than randomly splitting a single audience. Are the better results due to the creative or the audience you are targeting? It wasn’t always clear.

How to Use Creative Split Testing

Thankfully, Facebook addresses these concerns with the new creative split testing feature.

Now, when selecting the variable you want to test, you’ll see the option of Creative.

Facebook Split Testing

Set up your audience and placements as you normally would.

Your split test will need to run for between three and 14 days. This is required so that Facebook can get the sample size necessary to determine a winner.

Facebook Split Testing

However, there is an option to end the split test early once a winner has been found…

Facebook Split Testing

On the right side of the ad set, Facebook shows you how your split test is being organized.

Facebook Split Testing

Even though you only set up a single ad set, Facebook generates a separate ad set for each ad. Each ad set will have identical settings for audience, placement, and delivery.

REMINDER: While the audience is the same, there will be no overlap. Each user will only see one creative variation, and users are selected randomly.

You can create up to five ad variations. On the left, you’ll see Ad A through E (if applicable).

Facebook Split Testing

Create your ads as you normally would with image, link, headline, text, link description, CTA button, and more. When you click the “Test Another Ad” button, Facebook will copy the prior ad for easy editing.

Your Turn

This is a great new option to help advertisers uncover their highest-performing creative. By using this feature and (separately) the dynamic creative feature, advertisers are much better equipped to serve high-performing creative.

Have you tried out the creative split testing feature? What do you think?

Let me know in the comments below!

The post New Feature: Facebook Creative Split Testing appeared first on Jon Loomer Digital.

]]>
https://www.jonloomer.com/facebook-creative-split-testing/feed/ 18
How to Use the Facebook Ad Split Testing Feature https://www.jonloomer.com/facebook-ad-split-testing/ https://www.jonloomer.com/facebook-ad-split-testing/#comments Fri, 09 Dec 2016 06:23:55 +0000 https://www.jonloomer.com/?p=24192 Facebook Ad Split Testing Feature

Facebook has created a split testing feature that has simplified the process of uncovering the most effective targeting, optimization and placements.

The post How to Use the Facebook Ad Split Testing Feature appeared first on Jon Loomer Digital.

]]>
Facebook Ad Split Testing Feature

Experienced advertisers understand that extensive Facebook ad split testing is necessary to uncover what works and what doesn’t. That means experimenting with lots of things that don’t work before finding what does.

Split testing allows an advertiser to experiment with two or more versions of a single variable (like two different audiences) that would see the same ad.

Without the built-in feature, split testing on Facebook has been somewhat complicated. Yes, you can create multiple ad sets for different audiences or optimizations and show each the same ad creative. But advertisers can’t prevent overlap — a single user seeing the same ad in each ad set — with this approach.

That’s solved with Facebook’s split testing feature. Let’s take a closer look…

How Split Testing Works

Here’s how Facebook’s split testing feature works:

1. Facebook separates your audience randomly to ensure that there won’t be overlap. The same ad is then shown to each ad set’s audience, with the budget split equally (or weighted as you determine) between the ad sets.

2. Each ad set within the split test is identical, other than the testing variable. The variable differentiating the two ad sets can be audience or optimization (it also appears that placement will be a variable, though I don’t have this yet). You can test one variable at a time and include up to three variations.

3. Split testing is available to most advertisers who have a Business Manager account within either the ad create tool or Power Editor.

4. Split testing is currently only available for the Website Conversions, Mobile App Installs and Lead Generation campaign objectives.

5. In most cases, you will split test two different ad sets, but you can test up to three.

6. When testing the “Audience” variable, advertisers must currently select from saved audiences.

7. When testing the “Optimization” variable, advertisers can create variations based on the following:

  • Optimization for Ad Delivery (Conversions, Impressions, Link Clicks, Daily Unique Reach)
  • Conversion Window (1 day vs. 7 day)
  • Bid Amount (Automatic vs. Manual)

8. Facebook establishes a minimum budget required to get sufficient data to execute a test. That budget will be split equally between ad sets.

9. Facebook recommends schedules between three and 14 days for best results, though there aren’t any requirements here.

10. When the test is complete, you’ll be notified within Ads Manager and receive the results via email. The “winner” of the split test will be the ad set with the lowest cost per outcome.

How to Set Up Split Testing

Ready to set this up yourself? Let’s do it…

When using Website Conversions, Lead Generation or App Installs campaign objectives, you’ll find a check box for “Create Split Test”…

Facebook Ad Split Testing

All of the split testing is done within the ad set. First, choose your variable…

Facebook Ad Split Testing

As mentioned above, a third variable option will be Placement, however I don’t currently have that.

If you select “Delivery Optimization,” you’ll be presented with Ad Set 1 and Ad Set 2 together within the Delivery Optimization section…

Facebook Ad Split Testing

You’ll simply want to make one variation between the two ad sets. As mentioned above, it could be delivery optimization (optimizing for Website Conversions vs. Link Clicks), conversion window (1 day vs. 7 days), or bid type (automatic vs. manual).

By default, you’ll create two ad sets, but there is a button at the bottom of the variable area to “Test Another Ad Set.” As I type this, you can create up to three.

If you select the “Audience” variable, you will need to select competing audiences. This is done by choosing different saved audiences…

Facebook Ad Split Testing

Yeah, this is kind of annoying. Not sure why you need to select a saved audience instead of simply entering in your audiences. But if you haven’t created the saved audiences you want to test, you’ll need to do that first.

Finally, you’ll need to define a test budget…

Facebook Ad Split Testing

By default, your budget is split evenly between the two ad sets. However, if you click the drop-down, you’ll see an option for “Weighted Split” as well…

Facebook Ad Split Testing

When using the weighted method, you can focus more of your budget on one of the ad sets by choosing the percentage of budget to be used on Ad Set 1.

Facebook Ad Split Testing

Note that when using this method, the minimum budget tends to go up. This makes sense to make sure you get enough of a sample size in the ad set getting less budget.
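The math itself is simple. Here is a quick sketch with hypothetical numbers showing how a 70/30 weighted split divides the budget, and why the smaller side is the one that pushes the minimum upward.

```python
# Weighted split illustration with hypothetical numbers.
# With a 70/30 weighting, the smaller side only gets 30% of the budget,
# so the total minimum tends to rise to keep that side's sample size adequate.
total_budget = 200.0
weight_ad_set_1 = 0.70  # share of budget for Ad Set 1

budget_1 = total_budget * weight_ad_set_1
budget_2 = total_budget * (1 - weight_ad_set_1)
print(f"Ad Set 1: ${budget_1:.2f}, Ad Set 2: ${budget_2:.2f}")
```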

You’ll then need to create your ad. This single ad will be used across the ad sets in your split test.

Ways to Use Split Testing

I’ve mentioned a few things to test in passing above, but let’s dig in. There are some very specific ways that I test that will make this new method useful…

1. Test Audiences

Always avoid combining multiple audiences within the same ad set. This new split testing feature will help you isolate which audience is most effective.

Some examples of audiences to compare:

  • Different Custom Audiences (Website Custom Audience vs. Engagement Custom Audience)
  • Different Lookalike Audiences (based on registration vs. based on sale)
  • Different Interests

2. Test Optimization

This is a big one. I can’t even tell you how many times I’ve been asked questions like, “Should I optimize for conversions or link clicks?” or “What’s better, manual or automatic bidding?”

The truth: There is no universal answer. You need to find it out for yourself! This new split testing feature will make uncovering that answer much easier.

You should absolutely split test website conversion vs. link click optimization. This one is often a mystery, and it’s especially important to test out when the volume of conversions you get on your pixel is in the “tweener” stage for effective optimization. But you should also consider testing Daily Unique Reach or even impressions when given the right situations (extremely relevant audience).

Speaking of mysteries, you should test Facebook’s options for conversion windows. It would seem logical to use a 1-day window for conversions that typically happen within a day and a 7-day window for those that take more time, but you know what? I really have no idea. Test them!

And while I typically use automatic bidding, many advertisers swear by manual bidding. Not only can you compare manual vs. automatic, but you could compare two or three different manual bids to see what works best!

Your Turn

Have you started using the new split testing feature yet? What do you think? What types of results are you seeing?

Let me know in the comments below!

The post How to Use the Facebook Ad Split Testing Feature appeared first on Jon Loomer Digital.

]]>
https://www.jonloomer.com/facebook-ad-split-testing/feed/ 34