Let me start by saying I’m a huge fan of testing in SEO and other online marketing disciplines. Proper tests can take away a lot of the FUD (Fear, Uncertainty, and Doubt) that exists in the online marketing industry.
And there are several search engine prophets out there that are doing a superb job testing SEO myths and the impact of new search features. David Harry at SEO Dojo has done some great stuff, and so has the crew at SEOmoz (though a bit hit and miss).
The problems that exist with testing in SEO aren’t deliberate. I’m convinced that pretty much all tests performed by SEOs start with the best of intentions: to help uncover truths and add quality information and best practices to our methods.
But nonetheless there are some fundamental issues that need to be addressed:
Most search engine optimisers aren’t scientists. There are a few notable exceptions (Marie-Claire Jenkins from the Science for SEO blog to name but one – and if you’re not reading that blog you really should), but generally SEOs seem to come from backgrounds of sales/marketing/pr, journalism, and IT.
This means that most SEOs are unfamiliar with the methodology of proper scientific testing – control groups, double-blind setups, statistical significance, etc. The inevitable result is that many tests performed by SEOs suffer from intrinsic shortcomings that may influence their outcomes. The tests may be biased towards a certain result, the results themselves may be misinterpreted, and often there is no adequate control within the test.
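To make the statistics point concrete: before declaring that a change "worked", you can at least check whether the observed difference between a test group and a control group is statistically significant. Here's a minimal sketch of a pooled two-proportion z-test in Python – the page counts and "win" numbers are entirely hypothetical, and a real experiment would also need proper randomisation and a pre-registered hypothesis:

```python
from math import sqrt

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Pooled two-proportion z-test: how many standard errors
    apart are the two observed rates?"""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical example: 1000 control pages with 48 ranking "wins",
# 1000 test pages with 63 wins after the change being tested.
z = two_proportion_z(48, 1000, 63, 1000)
significant = abs(z) > 1.96  # rough two-sided 95% confidence threshold
print(z, significant)
```

Note that in this made-up example the uplift looks real (63 wins versus 48) but the z statistic falls short of the 95% threshold – exactly the kind of result that, without a significance check, gets written up as a Grand Truth of SEO.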
By its very nature the online marketing industry as a whole, and the SEO industry specifically, is filled with egos. This is not a bad thing – before you can market someone else’s product effectively you need to be able to market yourself. Truly effective marketing starts with unfailing belief in the product you’re selling. And when that product contains your own skills and services, you need an abundance of self-confidence. (I’m certainly no exception.)
The result is that many SEOs conducting tests are inclined to attach too much value to their interpretation of the outcome. There’s a tendency to proclaim Grand Truths of SEO based on little more than a single test (which is often flawed as per point 1).
Where most of conventional science tries to understand the inner workings of the universe, SEOs are trying to understand the inner workings of human constructs. Search engines, for all their complexity, are built by humans for a specific purpose. And search engines are not static – they’re continuously changing, adapting to new trends and developments.
On Monday a well-conducted SEO test can reveal an incredibly valuable insight into an aspect of Google, only for this to become obsolete on Friday when Google’s engineers make a small tweak to their algorithms. But it might take months or even years for that obsolete insight to be disproved and discredited.
This is due in no small part to point 2. Often SEOs are loath to let go of their discoveries, or to properly frame them in the inherent uncertainty that these discoveries deserve. As a result these Grand SEO Truths continue to exist on blogs and in books long after they’ve lost their validity, where they are often taken at face value by newcomers to the industry.
So should we all stop doing tests? Heck no. We need proper tests to help improve the quality of our industry, fight the spread of old knowledge and outdated practices, and keep the search engines on their toes.
But we need to try to make sure we do things right. I’m not saying every SEO who wants to run a test should first get a degree in a scientific discipline (though it would help), but perhaps we can work together and use the enormous amounts of skill, knowledge, and experience contained in our industry to codify a set of best practices for running SEO tests.
There’s been a lot of talk about qualifications and certification for SEO professionals and companies, and maybe this can be a part of that process.
I’d love to hear the thoughts and ideas of other SEOs on the matter, so please leave a comment.