The marketing world, especially the online marketing world, has made great strides in recent years toward being more scientific, more data-driven, and more evidence-based in its approaches.
The ability to run live experiments on web pages (e.g. A/B/n split and Multivariate) has made being “scientific” about conversion optimization much more feasible, so hats off to those software vendors that continue to bring us those abilities.
So while we all “talk the talk” of testing our landing pages and shopping carts, I sometimes get the impression that we’d “walk the walk” even better if we had more solid backgrounds in science and especially in the Scientific Method as it pertains to experimentation.
I was guilty of not paying attention in science classes, and not focusing much on science courses in college. In fact, I think I’ve learned more about the Scientific Method in my work on Conversion Rate Optimization than I did in school!
If you could use a primer on how exactly the Scientific Method should be applied to a test on your website or other marketing touchpoints, read on: I'm going to break down the scientific steps in very marketing-centric language.
Note: There isn’t consensus on how many “steps” one follows when applying the Scientific Method to experimentation, so don’t worry about whether you follow a 6-step, an 8-step, or a 5-step model. I’m using an 8-step to be as thorough as possible.
Scientific Steps to a Marketing Experiment
Step 1 – Define The Problem
This one is easy. The problem can always be stated as, “I’m not getting enough desired actions out of the traffic that views my _______.” Desired actions could be transactions, leads, likes, shares, video views, ad impressions, or anything else that drives your business. Even a page that makes a lot of money could likely be making more, so you can always default to this simple problem statement.
Step 2 – Research/Observe The Problem
This step will usually rely heavily on the quantitative data that comes pouring out of your web analytics software each week. If you’re not getting enough desired actions out of the traffic that views your _______, your web analytics is a great place to look for more information to help you understand the problem. For example, segmenting by traffic source, looking at entrance keywords, time on page, bounce rate, previous pages, next pages, etc.
There are lots of other ways to research your problem as well, including usability studies, customer surveys, heuristic analysis, call logs, chat transcripts, using personas, and more. Feel free to loop those information sources into this phase, as the more context and data you have, the better chance you have at success in Step 3...
Step 3 – Form A Hypothesis
Once you have a full understanding of your problem, and ideas as to why it’s occurring, it’s time to form your hypothesis. Your hypothesis is your best, educated guess as to what will help fix the problem you’ve stated. For example, if your problem is that you’re not getting enough conversions on your landing page, your hypothesis might be that “adding an explanatory video to the landing page will increase conversion rate by 20%.”
It’s important to establish a quantitative goal in your hypothesis in order to be able to prove or disprove it in your experiment. If you just say “…will increase conversions,” and your test raises conversions by 1.5%, is that a success? It opens the door to challenges that you don’t need.
I recommend putting your stake in the ground at a 10-20% increase, as achieving double-digit increases in conversion isn’t that difficult or rare. If you’re not feeling that bullish, you can always state that “…will increase conversion rate by a statistically confident amount.” Then, so long as your testing tool reaches ‘confidence,’ you’ll have proved your hypothesis. Note that confidence and lift are different things: if your tool declares a winner at 95% confidence, that means there’s only a 5% probability the observed difference was due to random chance; it says nothing about how large the increase is.
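For the curious, here’s a rough sketch of what a testing tool does under the hood when it “declares a winner.” This is a minimal one-sided two-proportion z-test in Python; real tools may use different methods (Bayesian, sequential, etc.), and the visitor and conversion counts below are entirely hypothetical:

```python
from math import sqrt, erf

def ab_test_confidence(visitors_a, conversions_a, visitors_b, conversions_b):
    """Two-proportion z-test: confidence that B's conversion rate beats A's."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # One-sided confidence that B > A, via the standard normal CDF
    confidence = 0.5 * (1 + erf(z / sqrt(2)))
    return p_a, p_b, confidence

# Hypothetical numbers: control page vs. variation with an explanatory video
p_a, p_b, conf = ab_test_confidence(10_000, 300, 10_000, 360)
print(f"Control: {p_a:.1%}, Variation: {p_b:.1%}, confidence B > A: {conf:.1%}")
```

With these made-up numbers the tool would comfortably clear the common 95% threshold, even though the absolute difference in conversion rate is under one percentage point.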
Step 4 – Conduct The Experiment
Conducting a valid online experiment is complex enough to deserve its own post, but suffice to say that the point of the experiment is to prove or disprove, with validity, your hypothesis. Anything in the experiment that isn’t crafted to prove or disprove the hypothesis is potential “noise” in your data, and should be avoided.
There are 3 scenarios when you conduct a marketing experiment:
- You prove the hypothesis with validity = 🙂
- You disprove the hypothesis with validity = Don’t fret, because you learned something, and you’re one step closer to applying scientific learnings to solve the problem you defined at the outset.
- Your experiment ends up not being valid = 🙁 This is the worst case scenario, a waste of money, and will eventually get you fired if you continue down this path.
Step 5 – Analyze Experiment Results
Assuming you’ve conducted a valid experiment, you’ve now got results to analyze. If the data shows that the hypothesis was proven, analyze the data to understand “by how much?” Did the results exceed everyone’s expectations? Are they shocking? Any unexpected movement in Key Performance Indicators (KPIs)? Any impacts elsewhere in the organization (e.g. more calls to your customer service line)? The key here is to not celebrate yet; just be objective.
If the hypothesis was disproved, the same procedures still apply.
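To make “by how much?” concrete, the key number to compute is relative lift, which you then compare against the quantitative goal stated in your hypothesis (20% in the earlier example). A minimal sketch in Python, using hypothetical conversion rates:

```python
def relative_lift(control_rate, variation_rate):
    """Relative improvement of the variation over the control."""
    return (variation_rate - control_rate) / control_rate

# Hypothetical rates: control converts at 5.0%, variation at 6.2%
lift = relative_lift(0.050, 0.062)
print(f"Observed lift: {lift:.0%}")  # a 24% relative increase

# Compare against the quantitative goal from the hypothesis (Step 3)
goal = 0.20
print("Goal met" if lift >= goal else "Goal not met")
```

Note the distinction: the variation adds 1.2 percentage points in absolute terms, but that's a 24% relative lift, and the relative figure is the one your hypothesis should be judged against.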
Step 6 – Form A Conclusion
Based on your analysis, what conclusions can you come to about the experiment? Hopefully, the conclusion will include a proven hypothesis as well as increased KPIs for your business. Better yet, those increased KPIs are tied to estimated, incremental revenue!
If the hypothesis proved false, the conclusion will need a data-driven idea of why the experiment showed the opposite result, and what next steps should be taken, whether that’s refining the existing hypothesis or forming a new one to drive a follow-up experiment.
Step 7 – Publish Results
In the scientific community, experiment results are published so that other scientists can attempt to recreate the results, and the body of knowledge can be updated.
In the marketing community, or in your internal marketing team, results are published to increase the body of knowledge, end opinion-based debates, and quantify the business impact of the new knowledge you have gained. In the best scenarios, publishing should be cause for internal celebration because you’ve found a better way to resonate with your target audience and make more money in the process!
A warning, though: publishing (just like in the scientific community) opens you up to scrutiny and challenge. If your results go against someone’s strongly held opinion/belief, they may challenge the results. This is why steps 1-6 are so crucial. If you’ve done a good job following the scientific method, your results will be bullet-proof.
Step 8 – Re-test
As mentioned before, re-testing occurs among scientists so that results can be recreated by impartial parties, biases can be ruled out, etc. As marketers, we don’t want our competitors to know what we’re learning via testing, so we won’t publicize our findings in most cases. Our re-testing will happen within our own organizations.
This step is often skipped when the experiment is a success, but that’s a dangerous move. Audiences change, tastes change, and popular culture changes. What worked last year won’t necessarily work this year, so re-testing is crucial. Any experiment that is a success should be queued for re-testing in 6-12 months. So long as it re-validates, the hypothesis is still true, and you can make ongoing marketing investments based on it.
(Optional) Step 9 – Form A Theory
I almost hesitate to include this step because it seems so dangerous for us marketers. In scientific experimentation and thought, a series of consistent test results that have been validated and re-validated might one day contribute to the formulation of an overarching theory.
In marketing, if you’ve run dozens of tests and observed the same behavior, you might think about developing a “marketing theory” for your company. For example, you might formulate a theory that “our target audience prefers blue page backgrounds on landing pages.” Or, “our target audience prefers free gifts to monetary discounts as incentive offers.”
I hope you can see how risky these types of broad theories can be unless you’ve run a LOT of valid experiments. Proceed at your own risk.
I hope this has been a helpful way to think about applying the rigor of the scientific method to your marketing testing efforts. It’s a lot of work, and takes discipline, but the rewards and competitive advantage are there for the taking. The more your marketing organization can apply these principles, the more knowledgeable and successful you’ll be! Just don’t wear a white lab coat to work, OK? 😉