Three Big Mistakes Direct Marketers Make When Reading Test Results
By Cynthia Baughan Wheaton
Principal, Wheaton Group
Original Version of an article that appeared in the May
2003 issue of Target Marketing
I love the way the numbers of our business can "talk" to us - giving us
guidance. We pride ourselves on the precision of our results. And
yet, if we are not careful, the numbers can lie to us - wasting time and
resources, and misdirecting our businesses. Often, we must tease out the true answer
rather than take the simple answer at face value.
The following are three big mistakes that many marketers are making. They
may seem obvious, yet I see companies overlooking the obvious in the quest to
get things done and move to the next opportunity.
Big Mistake #1: Skipping the Homework
The first step in analysis is to gather the facts. Otherwise,
critical clues may be overlooked in the more traditional
phase of the analysis. Homework should include a thorough
check of every phase of the solicitation and fulfillment of the mailing.
Start with a review of the name selection instructions, looking for any bias
that may have been inadvertently introduced, particularly during splitting and
key-code assignment. (Written instructions should have been kept current
throughout the selection process.) Go on to check name and address output. If
possible, get dumps of names and verify that selection criteria were applied
properly. At the least, check the duplicates listing to see if there were any anomalies.
Create a partnership with your printer. Get printed samples, preferably
by attending the print run rather than relying on hand-selected samples.
Next, verify what actually circulated: counts, dates, keycodes, and
physical samples of the solicitations. Get samples to check for proper
insertion, correctness of the order form, readability of the ink-jet message,
and other critical elements.
Ideally, you will have visited the lettershop and examined machine-inserted samples
to verify correctness before the mail is turned over for delivery.
On occasion, bags of mail are found at the printer after it is too late for
insertion into the postal system, or when the delayed timing will bias the results.
In addition, if different creative packages have gone out from different
vendors, the results can be biased if one panel arrives on a different date than
its counterpart. For instance, a test between a solo catalog and a
catalog in an envelope, produced by different vendors, would require close
coordination of mail dates.
Talk to your team of supporters. Your list broker may have a fresh
perspective on any outside lists that were used. Check with Customer
Service, Systems, and Fulfillment to see if there have been any complications
that may impact the methodology you use to evaluate results.
The mail can provide important clues. Monitor arrival dates and the
condition of your own "seeds." Also monitor the arrival dates and offers
of your competitors. Did they coincide with yours?
Big Mistake #2: Lack of Perspective
If you look at a mailing or test in isolation, you will not understand
its overall relevance to your business. Often, by putting
results in proper context, you will see clear direction in unexpected
places. Also, it is important to determine cause and effect
as best you can. Understand the "why?" as well as the "what?"
of your results.
If you have changed more than one variable when comparing two tests, be careful
with the analysis. You will not know the individual effect of each
change. Perhaps one change helped a lot, while the second change depressed
response. You may need to re-test in a different combination.
Measure test results incrementally against a control, and look for
statistically significant differences. Anything less will be
misleading. Check your actual response rates against your sample
sizes. Test quantities should have been determined statistically, based
on the expected response rate or something slightly lower. If the actual
response is quite a bit lower than expected, your reading of the test cells
will probably be jeopardized. If the quantities are too small to read with
confidence, plan a re-test.
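To make the "statistically significant" check concrete, here is a minimal sketch (all counts hypothetical) of a standard two-proportion z-test comparing a test panel against a control:

```python
import math

def two_prop_z(orders_a, mailed_a, orders_b, mailed_b):
    """Two-proportion z-statistic: how large is the gap between two
    response rates relative to its sampling error?"""
    p_a = orders_a / mailed_a
    p_b = orders_b / mailed_b
    pooled = (orders_a + orders_b) / (mailed_a + mailed_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / mailed_a + 1 / mailed_b))
    return (p_a - p_b) / se

# Hypothetical counts: the control mails 250,000 and pulls 2.0%;
# the test panel mails 35,000 and pulls 2.2%.
z = two_prop_z(5000, 250000, 770, 35000)
significant = abs(z) > 1.96  # roughly the 95% confidence threshold
print(f"z = {z:.2f}, significant at 95%: {significant}")
```

Here |z| is about 2.5, so this 0.2-point lift clears the usual 95% bar; run the same rates through a 10,000-piece test panel and it would not, which is exactly why test quantities must be sized in advance.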
In list testing, if you are testing another variable, such as timing, be sure
to test both alternatives against the same set of lists. For instance, it
is not fair to compare Cover A (Lists 1 through 5) against Cover B (Lists 1, 2,
3, 7, and 10). Also, be sure that the included lists are in the same
proportion to each other within the test panels. For instance, if List A
is 20% of the circulation for the Cover A test, then List A should be 20% of
the Cover B test. If it is not, arithmetically adjust the proportions so
that an unbiased read of results is possible.
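The arithmetic adjustment can be sketched as follows (list names and counts are hypothetical): each panel's response is re-weighted to a common list mix, so that differences in mix cannot masquerade as differences between covers.

```python
# Hypothetical per-list results: {list name: (pieces mailed, orders)}.
# Both panels pull 2.2% on List 1 and 1.5% on List 2, but in different
# proportions, so the raw panel rates differ.
cover_a = {"List 1": (20000, 440), "List 2": (80000, 1200)}
cover_b = {"List 1": (40000, 880), "List 2": (60000, 900)}

def raw_rate(panel):
    """Unadjusted response rate: total orders over total pieces mailed."""
    return sum(o for _, o in panel.values()) / sum(m for m, _ in panel.values())

def weighted_rate(panel, mix):
    """Response rate re-weighted to a common list mix."""
    return sum(mix[name] * orders / mailed
               for name, (mailed, orders) in panel.items())

mix = {"List 1": 0.5, "List 2": 0.5}  # same mix applied to both panels
print(f"raw:      A {raw_rate(cover_a):.2%} vs B {raw_rate(cover_b):.2%}")
print(f"adjusted: A {weighted_rate(cover_a, mix):.2%} "
      f"vs B {weighted_rate(cover_b, mix):.2%}")
```

Unadjusted, Cover B appears to win 1.78% to 1.64%; weighted to the same mix, both covers come out at 1.85% - the raw gap was entirely a list-mix artifact.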
Individual test panels do not have to be identical in size to be compared
accurately. For instance, your control may be 250,000, while the two test
panels may be 35,000 and 23,000. Each test panel needs to be big enough
to read with confidence, but each test does not have to be identical - or even
particularly close in size.
The caveat to this is when the panels (any combination of test and control)
involve extremely different quantities of different physical packages, with one
(likely the control) qualifying for a postal discount, while the second panel
does not. The smaller panel will experience a lower rate of
deliverability, which in turn will lower corresponding response.
Do not apply averages when calculating results unless you have no
alternative. For instance, you cannot assume that web orders are the same
percentage of total orders within each keycode. You also want to use the
true cost of each individual rental list segment, rather than an overall
average cost per thousand.
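A quick sketch (quantities and prices hypothetical) of why the blended average misleads: two rental segments at different costs per thousand yield very different cost-per-order figures than the average CPM suggests.

```python
# Hypothetical rental segments: (pieces mailed, cost per thousand, orders).
segments = {
    "List A": (50000, 120.00, 900),
    "List B": (50000, 80.00, 1100),
}

total_pieces = sum(m for m, _, _ in segments.values())
# Blended average CPM across all rental segments.
avg_cpm = sum(m * cpm for m, cpm, _ in segments.values()) / total_pieces

for name, (mailed, cpm, orders) in segments.items():
    true_cost = mailed / 1000 * cpm / orders     # true list cost per order
    avg_cost = mailed / 1000 * avg_cpm / orders  # blended-average version
    print(f"{name}: true ${true_cost:.2f}/order vs averaged ${avg_cost:.2f}/order")
```

With a $100 blended CPM, List A's rental cost per order is understated by over a dollar and List B's is overstated by nearly as much - enough to flip a marginal continuation decision.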
Costs included in an analysis should be carefully chosen. For instance,
rollout-quantity-based costs should be used to evaluate a test, rather than the
actual cost of producing the test. Using test costs will put it at a
disadvantage, because the overhead of developing the test is spread over much
smaller circulation than the control.
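The point about rollout-quantity-based costs is simple amortization arithmetic (figures hypothetical): a fixed development cost spread over a small test run inflates the per-piece cost that the test panel, but not the control, must carry.

```python
def cost_per_piece(fixed_cost, variable_cost, quantity):
    """Unit cost when a fixed development cost is amortized over a run."""
    return variable_cost + fixed_cost / quantity

# Hypothetical: $25,000 of creative/development, $0.45 variable cost per piece.
test_cpp = cost_per_piece(25_000, 0.45, 35_000)        # at test quantity
rollout_cpp = cost_per_piece(25_000, 0.45, 1_000_000)  # at rollout quantity

print(f"test-quantity cost:    ${test_cpp:.3f} per piece")
print(f"rollout-quantity cost: ${rollout_cpp:.3f} per piece")
```

Judged at roughly $1.16 per piece, the test package can never beat a control whose rollout unit cost is about $0.48; evaluated at its own rollout quantity, the comparison becomes fair.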
Big Mistake #3: Forgetting the Future
Mailings are made for two reasons: 1) generating revenue and profit,
and 2) learning how to increase your profitability in subsequent
contacts. Always consider the future implications of your results.
Pay particular attention to unexpected results from ongoing offers or
lists. If a "gold standard" list for your business suddenly has bad
results, consider a retest. This is particularly important if your
homework provides no reasonable explanation. Act cautiously on weak test results.
Identify any operational changes that must be made to have "winning" tests
ready to roll out. Factor incremental cost requirements into your rollout projections.
Document your findings in a simple but thorough manner. Communicate
findings to the appropriate people and encourage future feedback, so your
company can build on the base of knowledge rather than constantly reinventing the wheel.
Perhaps most importantly, identify next steps. What do these results tell
you about the past? What do they mean for the future? If results
are inconclusive, or suspect due to an error, your best long-term alternative
may be to re-test.
These three big mistakes come from one
root cause - lack of time. That could be due to a shortage
of resources, or cash-flow considerations. But there is no
aspect of the business that is more important than understanding
your results. Testing will never be perfect, but if you put
in the time required for thorough planning and appropriate analysis,
you will ultimately save time and money.
Cynthia Baughan Wheaton is a Principal at Wheaton Group, which specializes in
direct marketing consulting and data mining, data quality assessment and
assurance, and the delivery of cost-effective data warehouses and marts.
Cynthia can be reached at 919-969-9218, or firstname.lastname@example.org