What is Rank Tracking and Do We Recognise it Anymore?

26th April 2017

I love rank tracking – in fact, SEOs like me are addicted to it. It’s an industry where a fixation on tools & software is expected, if not essential.

But the love affair with rankings is more to do with the feeling of security they bring. Confidence in numbers (in something tangible) goes part of the way to explaining why they’re still one of the most-used KPIs on SEO campaigns.

Crafting a narrative around a campaign is easy with rankings, as is benchmarking performance & progress. Rankings are easy to understand – perhaps too easy. But if nothing else, rankings were born out of an “easier” time of SEO – a time of predictability and quick results.

Problems with Rankings

It’s a well-trodden area, but rank tracking hasn’t been straightforward when we consider the following:

  1. The position of organic results changes – i.e. where on the page the organic rankings start. If position 1 is below the fold, the significance of being first is reduced.
  2. Ads are becoming harder to avoid – more people click on them than before.
  3. SERPs aren’t always 10 organic results – sometimes anything from 0 to 13.
  4. Position 0 – various non-paid results sit above the organic listings: answers, the Knowledge Graph etc.
  5. Personalisation – meaning that rankings can vary from person to person.
  6. Local – location-based searches are, again, very inconsistent and harder to track.
  7. News/Tweets – and other QDF (query deserves freshness) results.

What do the above mean for rank tracking? Broadly speaking, it means that the significance of position one can vary from one SERP to another. The straightforward days of position one == winning are long gone.

This isn’t to say that rank trackers and SEOs have given up – in a lot of cases they’ve kept up well. Identifying which of the above “SERP features” appear within your results is relatively easy, but their presence still makes the significance of a ranking harder to understand.

But fundamentally these tools just enable the habit – we still rely on rank tracking as if it were 2012.

An Experiment into Hourly Rank Tracking

Over the last few months we’ve been running an experiment, which may inadvertently help further prove the fallibility of rank tracking.

But that wasn’t the intention – remember I love rank tracking.

I don’t have an issue with commercial rank tracking, nor is it something that we at StrategiQ are looking to move into. What’s also important to note is that the information which follows isn’t a complete argument and I am no expert on the subject.

I am someone who wants to further dig into this and I do invite all the constructive feedback I can get my hands on.

Oh, and I am mildly paranoid – but, that’s what 5+ years in SEO will do to you.

“What does Hourly Rank Tracking Look Like?”

The original question came out of a period of ranking instability for one of my clients. During one afternoon a key “money term” shifted on page one at least three times, and I asked what hourly rank tracking would look like. We, like most, track rankings daily – which wasn’t nearly enough to pick up the fluctuations we saw “live”.

The answer was this:

That might look a little like a magic-eye puzzle, but what we’re seeing is the top one hundred positions tracked over nearly two weeks at hourly intervals.

If we break it down to make it more digestible, this represents page one – which is pretty stable.

Page two is pretty similar, although there is much more in the way of noteworthy movement.

Where it gets really interesting – or, to me, most surprising when rendered in graph form – is page three onwards…

But that was just one keyword – hardly enough to prove anything.

As part of this, however, we had tracked five other terms – all within the same niche, with search volumes ranging from 2,000 to ~9,000 per month.

If you want to take a further look at this data, my colleague Simon wrote more about it here.

However, the real takeaway point is “Across 1,056 SERPs we looked at, not one of them was an exact match to the previous hour — that’s 0% stability.”
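
To make that figure concrete, the check itself is simple: two consecutive hourly snapshots only count as “stable” if the ordered list of ranking URLs is identical. Here’s a minimal sketch in Python – the data structures and example values are entirely made up, not our actual tooling:

```python
from typing import Dict, List

# snapshots: keyword -> list of hourly snapshots, each an ordered list of ranking URLs
def stability_rate(snapshots: Dict[str, List[List[str]]]) -> float:
    """Share of hour-to-hour comparisons where the SERP was an exact match."""
    comparisons = 0
    identical = 0
    for keyword, hourly in snapshots.items():
        for previous, current in zip(hourly, hourly[1:]):
            comparisons += 1
            if previous == current:  # same URLs, in the same order
                identical += 1
    return identical / comparisons if comparisons else 0.0

# Toy example: three hours of a three-result SERP for one keyword.
example = {
    "blue widgets": [
        ["a.com/p", "b.com/q", "c.com/r"],
        ["a.com/p", "c.com/r", "b.com/q"],   # positions 2 and 3 swapped
        ["a.com/p", "c.com/r", "b.com/q"],   # identical to the previous hour
    ]
}
print(f"{stability_rate(example):.0%} of hourly comparisons were exact matches")
```

A 0% result simply means not one of those hour-to-hour comparisons came back identical.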

When we reflect back on the original question, what does that mean? What does hourly rank tracking bring that daily doesn’t?

The best way to consider this is if we compare the daily (top) against hourly (bottom).

This might as well be showing two completely different pieces of data – although presented as above, both struggle to tell a coherent story.
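
One way to picture why: daily tracking is effectively a single hourly snapshot per day, so the story you tell depends on when you happen to sample. A toy illustration (all positions invented):

```python
# One day of hourly positions for a single keyword (made-up numbers).
hourly = [23, 24, 22, 31, 19, 27, 24, 23, 25, 30, 21, 24,
          26, 22, 29, 23, 24, 28, 20, 25, 23, 27, 24, 22]

print("daily tracker sampling at 09:00 reports:", hourly[9])    # 30
print("daily tracker sampling at 15:00 reports:", hourly[15])   # 23
print("actual range across the day:", min(hourly), "-", max(hourly))  # 19 - 31
```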

Within the hourly data we do see 4-6 hour periods which are significantly ‘noisy’. Whilst the majority of moves are 3-5 positions in total, during these periods there is no real consistency at all.

This is counterpointed by periods of relative calm, which take place over similar lengths of time.
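
A rough way of separating those bursts from the calmer stretches is to sum the absolute hour-to-hour movement inside a sliding window and flag anything over a threshold. The sketch below is illustrative only; the series, window size and threshold are invented rather than taken from our data:

```python
def noisy_windows(hourly_positions, window=4, threshold=20):
    """Return the start hours of windows whose total movement exceeds the threshold."""
    moves = [abs(b - a) for a, b in zip(hourly_positions, hourly_positions[1:])]
    return [start for start in range(len(moves) - window + 1)
            if sum(moves[start:start + window]) >= threshold]

# Mostly small drift, with one burst of larger swings in the middle.
positions = [34, 34, 35, 34, 34, 33, 41, 29, 44, 31, 40, 35, 34, 34, 35, 34]
print(noisy_windows(positions))  # -> [3, 4, 5, 6, 7, 8]
```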

Could it be that we’re seeing “live” algorithm changes? Perhaps new pages being dropped & added to the index?

But we shouldn’t get carried away making assumptions about what we’re witnessing – we’ve got to be really sure before jumping to any conclusions.

A Word on Rank Tracking

Rank tracking is hard.

This is mostly because Google doesn’t like us doing it, and it makes rank trackers’ lives hard through various means. Most people who have worked as (or with) an SEO for any period of time will probably have seen the following:

Fundamentally, rank tracking is ‘unnatural’.

It looks unnatural and leaves a footprint; anything which leaves a footprint can be identified and blocked – hence the captcha which Google serves to “unusual” network traffic sources.

This type of captcha can stop rank tracking dead in its tracks, unless you invest in a way of manually working around it – which isn’t worth doing at any kind of scale.

Google is likely screwing with us (those trying to check rankings) – in fact, you could argue that it should be. It’s hard to imagine the additional overheads that large-scale scraping of ranking results imposes on Google. Never mind the fact that rank tracking can be used to try to reverse-engineer Google’s algorithms.

So we try a number of things to keep our tracking looking more natural: we rotate proxies, we randomise the time intervals between searches and we generally switch things up (commercial rank trackers probably do more, but it’s achievable).
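
For illustration only, a stripped-down version of the first two of those measures might look something like this. The proxy addresses, delay range and request details are placeholders, and real trackers deal with parsing, captchas and plenty more on top:

```python
import itertools
import random
import time

import requests

# Hypothetical proxy pool, rotated on every request.
PROXIES = itertools.cycle([
    "http://proxy-1.example:8080",
    "http://proxy-2.example:8080",
    "http://proxy-3.example:8080",
])

def fetch_serp(query: str) -> str:
    """Fetch one results page through the next proxy in the rotation."""
    proxy = next(PROXIES)
    response = requests.get(
        "https://www.google.com/search",
        params={"q": query},
        proxies={"http": proxy, "https": proxy},
        headers={"User-Agent": "Mozilla/5.0"},  # still looks 'unusual' at any real scale
        timeout=10,
    )
    return response.text

for keyword in ["example keyword one", "example keyword two"]:
    html = fetch_serp(keyword)
    time.sleep(random.uniform(30, 180))  # randomised interval between searches
```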

But this could be adding ‘noise’, and it’s still pretty unnatural. Real people don’t search like this, so the variables we have to contend with to make our rank tracking ‘look natural’ could themselves be making the data less reliable.

If rankings don’t represent something consistent – what do they represent?

Assuming that the method of collecting the ranking data is credible at the scale we attempted, what does the data represent?

Pages 1-2 of the search results look relatively stable, although we suspect this will vary depending on the term type, competition and other factors such as QDF, SERP features, paid ads etc. However, the “crapshoot” which is page 3 onwards is harder to rationalise.

It’s likely that the hourly flux we see in the above graphs occurs because the ranking signals get weaker further down the results, so small changes have a more pronounced effect. However, if hourly tracking looks so radically different from daily (the ‘norm’), what kind of narrative are we providing?

A slow rise from page 10 to page 3 feels far less convincing when viewed as near-constant flux, yet over a three-month period the trend is still upward – the key is how you’re viewing the data, and what it’s for.

Hourly inspection of anything beyond page 2, if used as a KPI, feels next to worthless when considered against the data we’ve collected so far.

Some SEO ‘philosophy’?

Here’s something I pondered a few weeks back which resonated with some: “If a keyword is ranked, but nobody searches for it, is it really there?”

My view? Nope.

I’m serious. For some this may be obvious, but Google only ranks a website when someone searches for it; it only generates page four if page four is requested.

How much of the traffic past page 3-4 of the SERPs is actually human, and how much is bot traffic?

I’m sure we’ll never answer this accurately – but we (myself, Simon and the rest of the team who’ve chipped in on this) are beginning to think that rank tracking could, in some instances, be self-fulfilling.

Google Could be Screwing with Us Even Further

If the above wasn’t enough, since starting these tests my mind has gone back to a blog post Bill Slawski penned some time ago about a rank-modifying patent.

For those who haven’t read it, there is the potential that Google is randomising the ranking changes if it has reason to believe it is being ‘gamed’ or ‘spammed’.

Granted, this patent is nearly five years old, and what Google means by ‘spamming’ is neither clear nor unproblematic – I told you I was paranoid.

F-U-D

Whether or not the above patent suggests that Google really is screwing with us when ranking content, the point is that they are utilising FUD: fear, uncertainty and doubt.

The term – which, courtesy of Google (the irony!), can be traced back to Gene Amdahl after he left IBM – essentially means that instilling fear, uncertainty or doubt is enough to influence behaviour.

There is more than enough FUD around rank tracking in this instance – at least for me.

More Reason to FUD

Finally, when I first started formulating this blog, Google announced that they were about to launch a new evolution of their reCAPTCHA service – Invisible reCAPTCHA.

Being involved in a number of web projects, I love reCAPTCHA and what it means for developers and users alike. Its ability to spot spam (or at least the more basic bots) puts it pretty far ahead of most of the competition.

The key point in the announcement of an invisible version is that Google is pretty close to providing a totally implicit method of detecting bots.

If it can detect bots in this context, what’s to stop it identifying and blocking those scraping SERPs to collect ranking data? This could cause a real headache for anyone who needs this data. Of course, there’s a chance that Google isn’t able to do this yet…

F-U-D!

The Arms Race

So as the rank-tracking arms race continues, there’s no doubt that software engineers will try to keep up with Google’s changes – but there’s also no doubt that this could be an anxious time for those who rely on this data for income.

Whilst the above results are only an early reflection, we have to face the very real future in which ranking data becomes less consistent, less reliable and therefore less used by the industry.

Time to Pick Better KPIs

I’m not the first to suggest this – nor will I be the last – but if rankings are a core KPI, the future could be challenging. Maybe it’s time we reported on what matters more to the business, like traffic and revenue, rather than on something which is becoming less and less simple to understand.

Hourly rank tracking (and rank tracking in general) has its uses for SEOs and marketers alike, but we need to use it as a tool, not a success-metric.

It’s early days still, but hourly tracking is great for the following:

  1. It’s far better at spotting cannibalisation (see the sketch after this list)
  2. It helps you see more accurately when tweets, news etc. are included
  3. It can measure the speed of indexation and what that does to the rest of the SERP
  4. It enables a more accurate correlation between rankings and traffic
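
On the first of those, here’s a hypothetical sketch of what spotting cannibalisation from hourly data might look like: flagging hours where more than one of your URLs ranks for a term, or where the URL Google chooses to show for your domain swaps from the previous hour. The function and example data are purely illustrative:

```python
from urllib.parse import urlparse

def cannibalisation_flags(hourly_snapshots, domain):
    """Flag hours where our ranking URL swaps, or more than one of our URLs ranks."""
    flags, last_top = [], None
    for hour, snapshot in enumerate(hourly_snapshots):
        ours = [u for u in snapshot if urlparse(u).netloc.endswith(domain)]
        if len(ours) > 1:
            flags.append((hour, "multiple URLs ranking", ours))
        if ours and last_top and ours[0] != last_top:
            flags.append((hour, "ranking URL swapped", [last_top, ours[0]]))
        if ours:
            last_top = ours[0]
    return flags

example = [
    ["https://a.com/x", "https://ours.com/page-1", "https://b.com/y"],
    ["https://ours.com/page-2", "https://a.com/x", "https://ours.com/page-1"],
]
print(cannibalisation_flags(example, "ours.com"))
```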

Want to Help us Test This Further?

We’re going to be releasing this (for free) on a limited basis if people want it badly enough – click on the image below, or here.

All feedback is gratefully received, so please leave your comments below or come and find me on Twitter!

This blog is heavily based on a recent talk I gave at Search London – you can see the slides here!

Written By
Chris Green is Head of Marketing Innovation at Footprint Digital, a digital marketing agency based in the East of England.