People are influenced by innumerable whims, impulses, and values. You never know what marketing and fundraising strategies and tactics will work best until you test them. That is why testing is a must for nonprofits.

Don’t just trust your gut; run experiments. We’re doing our causes a disservice each time we don’t test — potentially leaving donations, actions, and opportunities on the table.


We recommend creating an annual testing calendar built around the scientific method so you can optimize your learning. For instance, by December (the biggest fundraising month of the year), you’ll want to have tested your donation forms thoroughly so you are serving the best-performing version.

 


Testing 101


Here’s a quick primer on how effective testing works:


Step 1: Be clear on your goals.


What are your objectives with this campaign or effort? If you are unclear on your goals, you won’t know how to measure success.


 


Step 2: Outline a testable hypothesis.


The key word here is testable. Your hypothesis should make a prediction about how two variables are related that the test can confirm or refute. That is what makes it a real experiment.


Here’s an example: Integrated online/offline messages will outperform unrelated online and offline messages on money raised, average gift, and response rate (both online and offline).

 


Step 3: Outline your testing methodology.

Test group: 50% of donors (who have given both online and offline) for whom we have an email and mailing address.

Control group: Remaining 50% of donors (who have given both online and offline) for whom we have an email and mailing address.

Test group segments will receive:

  • A pre-mailing email that mirrors the direct mail messaging
  • Offline letter

Control group segments will receive:

  • Control online treatment
  • Offline letter
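
To make the split concrete, here is a minimal Python sketch. The donor records and field names (email, mailing_address) are hypothetical stand-ins for whatever your CRM exports; the point is simply a randomized, reproducible 50/50 split of eligible donors.

    import random

    def split_test_control(donors, seed=42):
        """Randomly assign eligible donors to equal test and control cells."""
        # Keep only donors with both an email and a mailing address on file.
        eligible = [d for d in donors if d.get("email") and d.get("mailing_address")]
        random.Random(seed).shuffle(eligible)  # fixed seed keeps the split reproducible
        midpoint = len(eligible) // 2
        return eligible[:midpoint], eligible[midpoint:]

    # Toy records; the third donor is excluded for lacking a mailing address.
    donors = [
        {"id": 1, "email": "a@example.org", "mailing_address": "1 Main St"},
        {"id": 2, "email": "b@example.org", "mailing_address": "2 Oak Ave"},
        {"id": 3, "email": "c@example.org", "mailing_address": None},
    ]
    test_group, control_group = split_test_control(donors)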

 


Step 4: Outline the metrics you will measure.

  • Total money raised (measured separately by channel and then combined)
  • Average gift (measured separately by channel and then combined)
  • Response rate (measured separately by channel and then combined)
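
As an illustration of how these roll up, the sketch below (with made-up numbers) computes each metric per channel and then for both channels combined.

    def campaign_metrics(solicited, gifts):
        """Total raised, average gift, and response rate for one cell and channel.

        solicited -- how many people were solicited
        gifts     -- one gift amount per responding donor
        """
        total = sum(gifts)
        return {
            "total_raised": total,
            "average_gift": total / len(gifts) if gifts else 0.0,
            "response_rate": len(gifts) / solicited if solicited else 0.0,
        }

    # Measure each channel separately, then combine.
    online = campaign_metrics(solicited=2000, gifts=[25, 50, 100])
    offline = campaign_metrics(solicited=2000, gifts=[40, 60])
    combined = campaign_metrics(solicited=4000, gifts=[25, 50, 100, 40, 60])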

 

Beware: Common Testing Pitfalls


We strongly advocate testing well. A poorly run test isn’t worth the effort you and your staff will invest in it. Here are some testing pitfalls to avoid:


1. When looking for breakthrough results, look beyond the small things.


Testing small items such as subject lines and the color of your call-to-action button may uncover low-hanging fruit. When looking for a big breakthrough, however, think big with your tests.


 

  • Test content.
  • Test treatments across segments.
  • Test a long-term cultivation program on a test cell.
  • Test messengers.
  • Test channels.

Get creative and bold — but make sure your creativity and boldness can be tested.


 


2. Avoid sample sizes that are too small to produce statistically significant results.


It’s not how many people you solicit; it’s how many responses you receive. A statistically valid test requires 100 responses for each test cell. You’ll need 200 responses for a simple A/B test. For a donor renewal effort with a projected 5% response rate, this means soliciting 4,000 names (2,000 per cell) for a valid test. In a new donor acquisition effort with a 1% response rate, you’d need to solicit 20,000 names (10,000 per cell).
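
Worked out in code, using the 100-responses-per-cell rule of thumb above:

    import math

    def names_needed(expected_response_rate, cells=2, responses_per_cell=100):
        """Names to solicit for a statistically valid test, per cell and in total."""
        per_cell = math.ceil(responses_per_cell / expected_response_rate)
        return per_cell, per_cell * cells

    print(names_needed(0.05))  # donor renewal at 5%: (2000, 4000)
    print(names_needed(0.01))  # new donor acquisition at 1%: (10000, 20000)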

If you don’t have a large list size, here are some suggestions:

 

  • Test fewer elements. Ditch the four-way test and try a 50/50 split test.
  • Carry the test across multiple efforts until a statistically significant number is reached (one way to check significance is sketched after this list).
  • Don’t extrapolate. When you don’t test a statistically valid quantity, you can’t assume a larger group will behave the same way.
  • Retest. Always retest to see if you replicate your results.
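
One standard way to check whether two cells’ response rates truly differ is a two-proportion z-test; this is our illustration with hypothetical numbers, not a method the steps above mandate.

    import math

    def two_proportion_ztest(responses_a, n_a, responses_b, n_b):
        """Two-sided z-test for a difference in response rates between two cells."""
        p_a, p_b = responses_a / n_a, responses_b / n_b
        pooled = (responses_a + responses_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        # Normal CDF via erf; two-sided p-value.
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return z, p_value

    # Hypothetical results: test cell 130 responses of 2,000 solicited;
    # control cell 95 responses of 2,000.
    z, p = two_proportion_ztest(130, 2000, 95, 2000)
    print(f"z = {z:.2f}, p = {p:.3f}")  # p below 0.05 is a common significance threshold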


 

3. Don’t ignore past test results.


Your test results are the voice of your donors and activists. Listen to what they are saying even if it’s not what you expected to hear. Keep a “testing bible” that brings together your organization’s learnings over time.

 


4. Don’t think that what worked for a competitor or another campaign will work for you.

You must test it with your audience. Enough said.



5. The data you generate is only as good as your analysis of it. 


Set up systems to accurately measure your test and incorporate that learning into future campaigns.


Finally, don’t be afraid to fumble. We’ve learned a lot about testing through failed tests. Being data-driven is a daily practice that you must exercise to excel.