What Really Impacts Google Suggest Suggestions? An Experiment

18th April 2011

A few weeks ago I read with a huge amount of interest about Rishi’s experiment around Google Suggest, in which he suggested it was the sheer volume of mentions that influenced what was suggested. This seemed a slightly different outcome from Brent Payne’s, who appeared to be influencing the suggestions by increasing search volume.

Two different theories – seemed like the perfect opportunity for a test.

I’ll start by saying there’s no perfect way of testing this but I thought this experiment would be valuable all the same.

I started with an assumption, a pretty big assumption truth be told, that if something was the first suggestion it was a stronger suggestion than the tenth suggestion.

It seemed fairly logical that there was some kind of sequence of importance in the suggestions, and if this was the case there might be a correlation with either the search volume or the number of mentions.

So we took ten different key-phrases chosen completely at random (a mixture of commercial, navigational, branded and informational search queries) and compiled the top ten suggestions made by Google, giving each a score: the first suggestion scored one, the second two, and so on.
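For anyone wanting to reproduce that collection step, here’s a rough Python sketch. It isn’t how the data was gathered at the time; it leans on Google’s unofficial suggest endpoint (suggestqueries.google.com with the “firefox” client format), which is an assumption on my part and may change or disappear, and the seed phrases are just placeholders rather than the ten key-phrases from the test.

```python
# A rough sketch only: the endpoint is unofficial and assumed, and the seed
# phrases below are placeholders, not the key-phrases used in the experiment.
import json
import urllib.parse
import urllib.request


def top_suggestions(seed, limit=10):
    """Return up to `limit` Google Suggest suggestions for a seed phrase."""
    url = ("https://suggestqueries.google.com/complete/search"
           "?client=firefox&q=" + urllib.parse.quote(seed))
    with urllib.request.urlopen(url) as resp:
        # The "firefox" client format is: [seed, [suggestion, suggestion, ...]]
        payload = json.loads(resp.read().decode("utf-8", errors="replace"))
    return payload[1][:limit]


seeds = ["skinny mocha", "car insurance"]  # placeholder key-phrases
for seed in seeds:
    for position, suggestion in enumerate(top_suggestions(seed), start=1):
        # Position 1 is treated as the "strongest" suggestion, per the assumption above.
        print(seed, position, suggestion)
```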

We used the Google keyword tool to measure exact-match search volume, and counted the number of search results for each phrase both with and without inverted commas, to see whether the volume of mentions appeared to influence the sequence.

And was there a correlation? I’m no stats geek, but the correlation looked completely random.
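If you want to put a number on that rather than eyeballing it, something like the snippet below does the job. The figures are made up purely for illustration (the real volumes and result counts aren’t reproduced in this post), and it needs Python 3.10+ for statistics.correlation.

```python
# Illustrative numbers only; requires Python 3.10+ for statistics.correlation.
from statistics import correlation  # Pearson's r

positions = list(range(1, 11))  # suggestion 1 (top) through 10 (bottom)
search_volume = [880, 30, 1600, 0, 210, 90, 5400, 12, 320, 70]      # made up
exact_results = [2, 45000, 310, 9, 120000, 18, 760, 51, 3400, 95]   # made up

# Values near zero mean no obvious relationship between position and the metric.
print("position vs search volume:", round(correlation(positions, search_volume), 2))
print("position vs exact-match results:", round(correlation(positions, exact_results), 2))
```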

One flaw was perhaps the huge differences of scale, i.e. some search terms had half a dozen searches or mentions a month while others had hundreds of thousands. So I decided maybe we should just look at the rank, i.e. where each suggestion sat among the ten for that metric: which of the ten suggestions had the biggest search volume, the most exact mentions, the most phrase mentions and so on.
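That rank-only comparison is effectively Spearman’s rank correlation, i.e. Pearson’s r calculated on ranks instead of raw values. Here’s a small sketch of that idea, again with made-up numbers and deliberately naive tie handling.

```python
# Again, made-up numbers; the point is the method, not the data.
from statistics import correlation  # Pearson's r, Python 3.10+


def ranks(values):
    """Rank values so that 1 = largest; ties are broken naively by position."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    ranked = [0] * len(values)
    for rank, index in enumerate(order, start=1):
        ranked[index] = rank
    return ranked


positions = list(range(1, 11))
search_volume = [880, 30, 1600, 0, 210, 90, 5400, 12, 320, 70]  # made up

# Correlating suggestion position with the metric's *rank* strips out the
# huge differences in scale between phrases.
print(round(correlation(positions, ranks(search_volume)), 2))
```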

And still completely random correlation.

Now both Brent’s and Rishi’s experiments clearly had an influence, but this test did appear to either challenge those theories a little, or confirm that those factors may trigger a suggestion without having any direct influence over the sequence of the suggestions.

As well as this, there were a couple of other really interesting insights that came out of this experiment.

When looking at exact search mentions, some of the suggestions made by Google had hardly any mentions; I mean single digits. For example, one of the suggestions at the time of the test for “skinny mocha” was “skinny mocha starbucks calories”, which as an exact-match phrase has a grand total of two results.

Also, some of the phrases being suggested had, according to the keyword tool, tiny search volumes. For example, according to the tool “skinny mocha latte starbucks” has zero search volume. That is just as likely a weakness in the keyword research tool, but it does show that if volume is a factor, it probably doesn’t need much volume to make an impact.

So caveat emptor: this was far from a perfect test, but it has made me realise that there’s probably a lot more to what triggers a suggestion than just volume of mentions or searches.

Well worth some further investigation.

If you enjoyed this, you’ll probably enjoy this post from Danny Sullivan.


Written By
Kelvin Newman is Creative Director at SiteVisibility and specialises in achieving natural search results and producing link-worthy online content, working with a variety of brands including the RSPCA & uSwitch.