
5 Meta Ads Tests that Transformed My Perspective on Targeting

To suggest that my perspective on Meta ads targeting has changed during the past year is an understatement. It’s completely transformed. This evolution wasn’t immediate and was reinforced through a series of tests.

Understand that it wasn’t easy to get here. It’s reasonable to say that my prior advertising strategy could have been boiled down to targeting. It was the most important step. Great ad copy and creative couldn’t overcome bad targeting.

It’s not that I don’t care about reaching a relevant audience now. It’s that the levers we pull to get there are no longer the same.

I’m getting ahead of myself. This post will help explain how I got here. I’ve run a series of tests during the past year that have opened my eyes to just how much things have changed. They’ve helped me understand how I should change, too.

In this post, we’ll discuss the following tests:

1. How Much Do Audiences Expand?
2. How Much Remarketing Happens When Going Broad?
3. Do Audience Suggestions Matter When Using Advantage+ Audience?
4. Comparing Performance and Quality of Results
5. Understanding the Contribution of Randomness to Results

Let’s get to it…

Test 1: How Much Do Audiences Expand?

One of my primary complaints ever since Advantage Detailed Targeting (then called Detailed Targeting Expansion) was introduced has been the lack of transparency.

We know that Meta can expand your audience beyond the initial targeting inputs, but will this always happen? Will your audience expand a little or a lot? We have no idea. I’ve long asked for a breakdown that would solve this problem, but I don’t anticipate getting that feature anytime soon.

The same questions about how much your audience expands also apply to Advantage Lookalike and Advantage Custom Audience. It’s a mystery.

This is important because we can’t always avoid expansion. If your performance goal aims to maximize conversions, value, link clicks, or landing page views while using original audiences, Advantage Detailed Targeting is automatically on and it can’t be turned off.

The same is true for Advantage Lookalike when your performance goal maximizes conversions or value.

Are we able to clear up this mystery with a test?

The Test

I don’t believe that there’s any way to prove how much our audience is expanded when Advantage Detailed Targeting or Advantage Lookalike are applied. But, there is a way to test this with Advantage Custom Audience. While it won’t definitively prove how our audience is expanded with the other two methods, it could provide a roadmap.

This test is possible thanks to the availability of Audience Segments for all sales campaigns. Once you define your Audience Segments, you can run a breakdown of your results to view the distribution of ad spend and other metrics between three different groups:

- Engaged Audience
- Existing Customers
- New Audiences (everyone else)

For the purpose of this test, this breakdown can help us understand how much our audience is expanded. All we need to do is create an ad set using original audiences where we explicitly target the same custom audiences that are used to define our Audience Segments.

So, I did just that, and I turned on Advantage Custom Audience.

I used the Sales objective so that the necessary breakdown would be available.

The Results

My only focus with this test was to uncover how my budget was distributed. Performance didn’t matter.

In this case, 26% of my budget was spent on my Engaged Audience and Existing Customers combined.

Since the custom audiences I used for targeting matched how I defined my Audience Segments, we can state definitively that, in this case, Meta spent 74% of my budget reaching people outside of my targeting inputs.
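If you want to sanity-check this math on your own account, the arithmetic is simple enough to script. Below is a minimal sketch that assumes you’ve exported the Audience Segments breakdown to a CSV; the file name, column names, and segment labels are my assumptions about such an export (not an official Meta format), so adjust them to match whatever your report actually contains.

```python
# A minimal sketch of the arithmetic behind the 26/74 split.
# Assumes a CSV export of the Audience Segments breakdown with (hypothetical)
# columns "audience_segment" and "amount_spent".

import csv
from collections import defaultdict

# Segment labels are assumptions about how the breakdown names its groups.
REMARKETING_SEGMENTS = {"Engaged audience", "Existing customers"}

def spend_by_segment(csv_path: str) -> dict[str, float]:
    """Sum ad spend per audience segment from the exported breakdown."""
    totals: dict[str, float] = defaultdict(float)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["audience_segment"]] += float(row["amount_spent"] or 0)
    return dict(totals)

def remarketing_share(totals: dict[str, float]) -> float:
    """Fraction of total spend that went to the segments you explicitly targeted."""
    total = sum(totals.values())
    remarketing = sum(v for k, v in totals.items() if k in REMARKETING_SEGMENTS)
    return remarketing / total if total else 0.0

if __name__ == "__main__":
    totals = spend_by_segment("audience_segments_breakdown.csv")
    share = remarketing_share(totals)
    print(f"Remarketing share: {share:.0%}, expansion share: {1 - share:.0%}")
```

The same calculation applies to the later tests in this post; only the targeting inputs change.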

What I Learned

This was groundbreaking for my understanding of audience expansion. Up until this point, whether or not Meta expanded my audience — and by how much — was a mystery. This test lifted the curtain.

These results don’t mean that the 74/26 split would apply universally. Many factors likely contribute to the distribution that I saw here, including but not limited to…

We also don’t know whether a similar split happens when applying Advantage Detailed Targeting or Advantage Lookalike. But this at least gives us a point of reference rather than a blind guess.

Read More

Check out the following post and video to learn more about this test:

How Much Do Audiences Expand Using Advantage Custom Audience?

Test 2: How Much Remarketing Happens When Going Broad?

Even before we had Advantage+ Shopping Campaigns and Advantage+ Audience, some advertisers swore by using original audiences to “go broad” (no inputs for custom audiences, lookalike audiences, or detailed targeting). While unique, this approach was largely based on gut feel, with limited ways to prove how ads were getting distributed. Those advertisers could only point to their results as evidence that it was effective.

The addition of Audience Segments to all sales campaigns would allow us to provide a bit more insight into what is happening when going broad.

The Test

I created a campaign with the following settings…

The Results

Recall that we already had a remarketing distribution benchmark with the prior test. In that case, we explicitly defined the custom audiences we wanted to reach within targeting. In this case, I didn’t provide any such inputs.

And yet…

Even though no inputs were provided, Meta spent 25% of my budget on reaching prior website visitors and people who were on my email list (both paid customers and not).

What I Learned

I found this to be absolutely fascinating. While we’ll struggle to get any insight into who Meta reached outside of remarketing, the fact that 25% of my budget was spent on website visitors and email subscribers is important. It shows that Meta is prioritizing showing my ads to people most likely to convert.

This realization helped improve my confidence in a hands-off approach. If the percentage were closer to 0, it might suggest disorder: that the broad targeting approach is built on smoke and mirrors and that your inputs are necessary to help steer the algorithm.

What was most shocking to me is that the remarketing distribution was nearly identical, whether I used Advantage Custom Audience and defined my target or went completely broad. This was a whole new realization.

While the first test helped me understand how much Meta expands my targeting inputs, the second made me question whether those inputs were necessary at all. I spent about the same amount reaching that desired group in each case.

Read More

Check out the following post and video to learn more about this test:

25 Percent of My Budget Was Spent on Remarketing While Going Broad

Test 3: Do Audience Suggestions Matter When Using Advantage+ Audience?

While you have the option to switch to original audiences, the default these days is Advantage+ Audience. Meta strongly encourages you to take this route, warning that switching to original audiences can lead to a drop in performance.

When using Advantage+ Audience, you leverage Meta’s AI-driven algorithmic targeting. You have the option to provide audience suggestions, but it’s not required.

Meta says that even if you don’t provide suggestions, they will prioritize things like conversion history, pixel data, and prior engagement with your ads.

But, is this true? And how pronounced is it?

The Test

We could test this by again leveraging a manual sales campaign with Audience Segments. I created two ad sets:

- One using Advantage+ Audience with audience suggestions: the same custom audiences used to define my Audience Segments
- One using Advantage+ Audience without any suggestions at all

Since I can use custom audiences that exactly match the custom audiences used to define my Audience Segments, we can get a better idea of just how much (if at all) these audience suggestions impact delivery.

A reasonable hypothesis: Advantage+ Audience without suggestions would still result in some remarketing (potentially in the 25% range, as we discovered when going broad), but it would likely make up a smaller percentage of ad spend than when providing suggestions that match my Audience Segments.

But, that didn’t play out…

The Results

Once again, quite shocking.

The ad set that used custom audiences that match those used to define my Audience Segments resulted in 32% of my budget spent on that group.

By itself, this seems meaningful. More was spent on remarketing in this case than when going broad or even when using Advantage Custom Audience (wow!).

But, check out the results when not providing any suggestions at all…

Your eyes aren’t deceiving you. When I used Advantage+ Audience without suggestions, 35% of my budget was spent on remarketing.

What I Learned

Every test surprised me. This one shook me.

When I provided audience suggestions, I reached the people matching those suggestions less than when I didn’t provide any suggestions at all. Providing suggestions was not a benefit. It didn’t seem to impact what the algorithm chose to do. That same group was prioritized either way, with or without suggesting them.

It’s not clear if this would be the case for other types of suggestions (lookalike audiences, detailed targeting, age maximum, and gender). But, the results of this test imply that while audience suggestions can’t hurt, it’s debatable whether they do anything.

As is the case in every test, there are several factors that contribute to my results. Budget and the size of my remarketing audience are certainly part of that. It’s also quite possible that I wouldn’t see these same results if I ran the test multiple times.

It remains eye-opening. Advantage+ Audience without suggestions is powerful enough to prioritize my remarketing audience on its own, and it’s possible that Meta doesn’t need any suggestions at all.

Read More

Check out the following post and video to learn more about this test:

Audience Suggestions May Not Always Be Necessary

Test 4: Comparing Performance and Quality of Results

I’ve encouraged advertisers to prioritize Advantage+ Audience for much of the past year. It’s not that it’s always better, but it should be your first option. Instead, it seems that many advertisers find every excuse to distrust it and switch to original audiences.

Advertisers tell me that they get better results with detailed targeting or lookalike audiences. And even if they could get more conversions from Advantage+ Audience, they say those conversions are lower quality.

Is this the case for me? I decided to test it…

The Test

I created an A/B test of three ad sets where everything was the same, beyond the targeting. Here are the settings…

The three ad sets took three different approaches to targeting:

- Advantage+ Audience (without suggestions)
- Original audiences with detailed targeting
- Original audiences with a lookalike audience

Since the performance goal was to maximize conversions, Advantage Detailed Targeting and Advantage Lookalike were automatically applied to their respective ad sets and could not be turned off. Those audiences were expanded regardless.

The ads were the same in all cases, promoting a beginner advertiser subscription.

The Results

In terms of pure conversions, Advantage+ Audience led to the most, besting Advantage Detailed Targeting by 5% and Advantage Lookalike by 25%.

Recall that this was an A/B test, and Meta had 61% confidence that Advantage+ Audience would win if the test were run again. Maybe as important, there was less than 5% confidence that Advantage Lookalike would win.
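For context on what a confidence number like that means, here’s one standard way to put a “probability this arm is truly best” figure on an A/B result: sample conversion rates from Beta posteriors and count how often each arm comes out on top. To be clear, this is not Meta’s calculation, and the conversion and reach numbers below are hypothetical placeholders chosen only to mirror the relative gaps described above.

```python
# A rough Bayesian sketch of "confidence the winner would win again".
# NOT Meta's method; the counts below are hypothetical placeholders.

import numpy as np

rng = np.random.default_rng(seed=3)

# arm name: (conversions, people reached), both hypothetical
arms = {
    "advantage_plus_audience": (105, 20_000),
    "advantage_detailed_targeting": (100, 20_000),
    "advantage_lookalike": (84, 20_000),
}

SAMPLES = 100_000
# Beta(1 + conversions, 1 + non-conversions) posterior under a uniform prior
draws = {
    name: rng.beta(1 + conv, 1 + reach - conv, size=SAMPLES)
    for name, (conv, reach) in arms.items()
}

stacked = np.column_stack(list(draws.values()))
winners = np.array(list(draws.keys()))[stacked.argmax(axis=1)]

for name in arms:
    print(f"P({name} has the best true rate): {(winners == name).mean():.0%}")
```

Run with counts from your own test and you get a feel for how thin the evidence behind a “61% confidence” style number can be.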

But, one of the complaints about Advantage+ Audience relates to quality. Are these empty subscriptions run by bots and people who will die on my email list?

Well, I tracked that. I created a separate landing page for each ad that utilized a unique form. Once subscribed, these people received a unique tag so that I could keep track of which audience they were in. The easiest way to measure quality was to tag the people who clicked on a link in my emails after subscribing.
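If you want to run a similar quality check, the comparison boils down to counting tagged subscribers and the subset who later clicked an email link. Here’s a rough sketch under that assumption; the tag names and the data shape are hypothetical, standing in for whatever your email platform exports.

```python
# A rough sketch of the quality comparison. Assumes an export of subscribers
# with the ad-set tag applied at signup and a flag for whether they later
# clicked a link in an email. Tags and sample rows are hypothetical.

from collections import Counter

# (ad_set_tag, clicked_email_link): one tuple per subscriber
subscribers = [
    ("advantage_plus_audience", True),
    ("advantage_detailed_targeting", False),
    ("advantage_lookalike", True),
    # ...replace with the full export from your email platform
]

signups = Counter(tag for tag, _ in subscribers)
engaged = Counter(tag for tag, clicked in subscribers if clicked)

for tag, total in signups.items():
    rate = engaged[tag] / total
    print(f"{tag}: {total} subscribers, {engaged[tag]} engaged ({rate:.0%} quality)")
```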

Once again, Advantage+ Audience generated the most quality subscribers.

Is this because Advantage+ Audience leaned heavily into remarketing? We can find out with a breakdown by Audience Segments!

Nope! More was actually spent on remarketing for the Advantage Detailed Targeting ad set. Advantage+ Audience actually generated the fewest conversions from remarketing (though it was close to Advantage Lookalike).

What I Learned

This test was different than the others because the focus was on results and quality of those results, rather than on how my ads were distributed. And, amazingly, Advantage+ Audience without suggestions was again the winner.

Of course, we’re not dealing with enormous sample sizes here ($2,250 total spent on this test). It’s possible that Advantage Detailed Targeting would overtake Advantage+ Audience in a separate test. But, what’s clear here is that the difference is negligible.

There just doesn’t appear to be a benefit to spending the time and effort required to switch to original audiences and provide detailed targeting or lookalike audiences. I’m getting just as good results (even better) letting the algorithm do it all for me.

As always, many factors contribute. I may get better results with Advantage+ Audience because I have extensive history on my ad account. But, as mentioned in the results section, it’s not as if it led to more results from remarketing.

The fact that Advantage+ Audience won here isn’t even necessarily the main takeaway. There could be some randomness baked into these results (more on that in a minute). But, this test further increased my confidence in letting the algorithm do its thing with Advantage+ Audience.

Read More

Check out the following post to learn more about this test:

Test Results: Advantage+ Audience vs. Detailed Targeting and Lookalikes

Test 5: Understanding the Contribution of Randomness to Results

There was something about that last test — and really all of these tests — that was nagging at me. Yes, Advantage+ Audience without suggestions kept coming out on top. But, I was quick to remind you that these tests aren’t perfect or universal. The results may be different if I were to run the tests again.

That got me thinking about randomness…

What percentage of our results are completely random? What I mean by that is that people aren’t robots. They aren’t 100% predictable when it comes to whether they will act on a certain ad. Many factors contribute to what they end up doing, and much of that is random.

If there’s a split test and the same person would be in all three audiences, which audience do they get picked for? How many of those random selections would have converted regardless of the ad set? How many converted because of the perfect conditions that day?

It might sound crazy, but I felt like we could demonstrate randomness with a test.

The Test

I created an A/B test of three ad sets. We don’t need to spend a whole lot of time talking about them because they were all identical. Everything in the ad sets was the same. They all promoted identical ads to generate registrations for my Beginners subscription.

I think it’s rather obvious that we wouldn’t get identical results between these three ad sets. But, how different would they be? And what might that say about the inferences we make from other tests?
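Before getting to the actual results, it’s worth sketching what chance alone can produce. The simulation below assumes three truly identical ad sets with a made-up reach and conversion rate; it isn’t Meta’s confidence calculation, just a way to see how large the winner-versus-loser gap tends to be when nothing real separates the ad sets, and how often the “winner” would repeat in a fresh run.

```python
# A Monte Carlo sketch of how much three truly identical ad sets can differ by
# chance alone. Reach and conversion rate are made-up assumptions, not numbers
# from this test.

import numpy as np

TRIALS = 10_000
PEOPLE_PER_AD_SET = 5_000      # assumed reach per ad set (made up)
TRUE_CONVERSION_RATE = 0.01    # identical underlying rate for all three ad sets

rng = np.random.default_rng(seed=7)

# Conversion counts for three identical ad sets, simulated twice:
# once for the "test" and once for an independent re-run.
first = rng.binomial(PEOPLE_PER_AD_SET, TRUE_CONVERSION_RATE, size=(TRIALS, 3))
rerun = rng.binomial(PEOPLE_PER_AD_SET, TRUE_CONVERSION_RATE, size=(TRIALS, 3))

# Relative gap between the "best" and "worst" ad set in each trial
gaps = (first.max(axis=1) - first.min(axis=1)) / first.min(axis=1)

# How often the first-run "winner" also wins the independent re-run
winner_repeats = (first.argmax(axis=1) == rerun.argmax(axis=1)).mean()

print(f"Median winner-vs-loser gap: {np.median(gaps):.0%}")
print(f"First-run winner wins again in {winner_repeats:.0%} of re-runs")
```

With these made-up numbers, double-digit percentage gaps between the “best” and “worst” ad set are routine, and the first-run winner repeats only about a third of the time, which is what symmetry predicts when the ad sets really are identical.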

The Results

Wow. Yes, there was a noticeable difference.

One ad set generated 25% more conversions than the lowest performer. If that percentage sounds familiar, it’s because it was the exact same gap between the top and bottom performers in the last test. But in that case, that difference “felt” more meaningful.

In this case, we know there’s nothing meaningfully different about the ad sets that led to the variance in performance. And yet, Meta had a 59% confidence level (nearly the same as the level of confidence in the winner in the previous test) that the winning ad set would win if the test were run again.

What I Learned

Randomness is important! Yet, most advertisers completely discount it. They test every detail and make changes based on differences in performance that are even narrower than what we saw here.

Think about all of the things that advertisers test. They create multiple ad sets to test targeting. They try to isolate the best performing ad copy, creative, and combination of the two.

This test taught me that most of these tests are based on a flawed understanding of the results. Unless you can generate meaningful volume (usually because you’re spending a lot), it’s not worth your time.

Your “optimizing” may not be making any difference at all. You may be acting on differences that would flip if you tested again — or if you let the test run longer or spent more money.

It’s even reasonable to think that too much testing will hurt your results. You’re running competing campaigns and ad sets that drive up ad costs due to audience fragmentation and auction overlap — all for a perceived benefit that may not exist.

I’m not saying that you should never test anything to optimize your results. But be very aware of the contributions of randomness.

Read More

Check out the following post to learn more about this test:

Results: Identical Ad Sets, a Split Test, and Chaos

My Approach Now

You’re smart. If you’ve read this far, you can infer how these tests have altered my approach. My strategy is drastically simplified from what it once was.

I lean heavily on Advantage+ Audience without suggestions, especially when optimizing for conversions. Of course, Advantage+ Audience isn’t perfect. If I need to add guardrails, I will switch to original audiences. But when I do, I typically go broad. I rarely ever use detailed targeting or lookalikes now.

I also rarely use remarketing now, which is insane considering it once made up the majority of my ad spend. Since remarketing is baked in, there are few reasons to create separate remarketing and prospecting ad sets, especially since I’d normally use general remarketing (all website visitors and email subscribers) because I felt those people would be most likely to convert anyway.

This also means far fewer ad sets. Unless I’m running one of these tests, I almost always have a single ad set in a campaign.

It doesn’t mean I’m complacent in this approach. It means that the results of these tests have raised my confidence that providing no targeting inputs will not only perform just as well, but oftentimes better. And I know that there are exceptions and factors that contribute to my results.

Maybe things will change. But, I no longer feel the need to micromanage my targeting. Based on the results of these tests — and of my results generally — it’s no longer a priority or a factor that I worry about.

And that, my friends, is quite the evolution from where I was not long ago.

Run Your Own Tests

I’m always quick to point out that my results are at least partially unique to me. Whether you’re curious or skeptical, I encourage you to run your own tests.

But, do so with an open mind. Don’t run these tests hoping that your current approach will prevail. Spend enough to get meaningful results.

Maybe you’ll see something different. If you do, that’s fine! The main point is that we shouldn’t get stuck in our ways or force a strategy simply because it worked at one time and we want it to work now.

Replicate what I did. Then report back!

Your Turn

Have you run tests like these before? What results did you see?

Let me know in the comments below!
