Trying to Reverse Engineer the Google Algorithm is a Futile Exercise.

So it turns out LDA doesn’t have quite as strong a correlation with search results as SEOmoz initially led us to believe. It turns out they made a mistake in their calculations. I’m not surprised; it looked like some serious number crunching on their end, but then I gave up halfway through my maths A-level, so I’m no expert.

Fair play to Rand & co; their analysis of the data they’ve been getting out of their tools is pretty unique in the blogosphere, but this latest mishap really illustrates that trying to work out exactly what Google look at to order rankings is a lost cause.

I’m sure if you got a team of eggheads together, locked them in a room away from SEO dogma, they might come up with some interesting insights, but in that time I’ve been too busy making money for my clients.

We all know there are hundreds of factors in the algo; no one factor, or even combination of factors, is really going to be a silver bullet. It’s getting worse as well: we all know personalisation is in play, and the social graph working in unison with the link graph is only going to escalate that change. You could kidnap a bunch of Googlers and force them to create a model at gunpoint, and I bet they’d struggle to build something close to the algo.

These attempts at reverse engineering the algo are admirable, but they bring out two of the worst sides of the SEO industry: we love to procrastinate and we love to argue. So rather than getting on with the type of SEO work we can all be confident is going to work, we distract ourselves reading the latest study on a new ranking factor. Findings in the blogosphere may influence you to tweak your approach slightly, but if you tried to keep up with every half-decent SEO blog you’d never get any real work done.

Then we get the arguments. I’m a sucker for them: I spent a good fifteen or twenty minutes reading about the mathematical flaws in SEOmoz’s sums, and another ten on the difference between LDA and LSI; I didn’t really understand much of it. In that time I could have made a decent crack at a compelling blog post that could have brought in dozens of links. In hindsight, I know which would have been a better use of my time.

This whole LDA thing has probably been a bit embarrassing for SEOmoz, but what’s worse is how much time we wasted on it as a whole community.

So resist the urge to get sucked in. Occasionally something that was once recommended, like PageRank sculpting, will come off the menu, and you won’t miss it. Try to wean yourself off the constant updates from all us SEO bloggers; we might be doing you more harm than good.

We’ll try to share something valuable that may help you out in the future, but you need some time actually left in your day to implement everything you’ve learned – right?

A big part of my job at SiteVisibility is research, looking at new trends and patterns, but even I realise that at some point the research needs to stop and the action needs to happen. So stop reading about SEO, close down Twitter, exit the tabs you’ve got open with twenty blog posts you’ve been meaning to read, and do some work. We’re all experienced professionals with a great understanding of what the search engines like and appreciate; we won’t go too far wrong.

About Kelvin Newman

Kelvin Newman is Creative Director at SiteVisibility and specialises in achieving natural search results and producing link-worthy online content, working with a variety of brands including the RSPCA & uSwitch.

3 thoughts on “Trying to Reverse Engineer the Google Algorithm is a Futile Exercise.”

  1. So true Kelvin, good to know that I am not the only one in this situation. But I always find articles on this site really useful. Just implemented the “Track Google Products Search” filters.

    Not going to close this tab :)

  2. Your post seems to make 2 points, one of which I agree with and one of which I don’t.

    1. SEOs should do more work and spend less time reading blogs – agree
    2. Researching ranking factors and their relative importance is a waste of time – disagree

    Even with point 2, I partly agree: an individual practitioner (or most agencies) will not get a good return on the time spent, even if they do have the resources to gather enough link data to draw good conclusions.

    In the post you say “In that time I could have made a decent crack at a compelling blog post that could have brought in dozens of links”, but how do you know your time is best spent getting dozens of links rather than tweaking title tags or on-page content? This is the sort of question that research like SEOmoz’s (if done right) can answer, and it is the main reason why I think such studies have value.

  3. Bullseye!

    I really admire Rand for his almost scientific approach to SEO, and now and then some really good stuff comes out of it. But at the end of the day, hunch and experience are the best tools.

Comments are closed.