Friday Commentary: Traffic as Quality Indicator: What’s next in Link Evaluation?
Welcome to the fourth installment of our Friday Commentary. In this series, every Friday experts shine a light on the digital industry: where are we heading, what is going on, and how should we approach this as decision makers? This Friday it’s the turn of Bastian Grimm, founder and managing partner of Grimm Digital, who specializes in international SEO strategy and management consulting.
It’s hard to miss – but in case you did, or you just tried to ignore it, let me quickly wrap up a couple of things for you before we get started:
- Google’s Head of Search Quality, Matt Cutts, seems to be having a good time these days scaring site owners on a per-country basis: over the last weeks, Cutts and/or his team issued “reminders on paid links” in different languages (such as French, German and Polish), followed by a bit of tweeting and usually a crackdown on one (or more) mostly unimportant (and long dead) link networks.
- In addition, outing seems to be coming into fashion (or rather: it’s already there). I’m not going to point fingers here, but we saw drops from Expedia, Halifax as well as a couple of big fashion brands lately.
This development is worrisome from many perspectives. Besides the fact that it doesn’t generate good press for SEOs when big sites get penalized by Google (because of “those shady SEO tactics”), it also causes a lot of confusion. Even worse, it scares people away from doing proper SEO – which, of course, includes proper and solid link building. That again is not really helpful, because one thing is still a fact: even though their importance has probably declined, you won’t rank anything in a competitive industry or niche without caring about links to your site; simple as that.
If you’ve seen me speaking somewhere lately, you might have caught me saying that link building will not be going away any time soon, simply because Google relies far too strongly on link-based signals – what will change, though, is how they evaluate the quality of each and every link. But I’m also not much of a believer in what people call social signals. Yes, I get that for some industries being popular on Facebook can certainly help, and I’m also on board with using Google+ for indexation on steroids, BUT: this does not work the same way for every industry and niche – links, however, do. And just as a side note on, for example, Facebook: I really don’t see how Google could rely on third-party data (and even replace their main asset, the link graph) without knowing the connections, relations and potential flaws in those datasets.
So let’s fast-forward a bit and shine some light on something that a) is in my opinion much harder to game and b) Google can (now) measure without having to rely on third-party data: traffic.
Potential data sources for measuring traffic
Traffic is great, the more the better; everybody loves traffic – and so does Google. Websites that are visited more frequently than others tend to be more important, especially if you do that comparison within industries and verticals. And you (or Google) don’t even need to know the absolute numbers; traffic sampling (if you’re interested in how this works, be sure to read this) is totally sufficient. And in an ideal world, you can back up these samples using a variety of different sources.
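To make the sampling idea a bit more concrete, here is a minimal and entirely hypothetical sketch in Python: even a modest panel of sampled visits is enough to recover the *relative* popularity of sites within a vertical, without ever knowing absolute traffic numbers. The site names and their “true” shares are invented purely for illustration.

```python
import random
from collections import Counter

# Invented "true" visit shares per site within one vertical (hidden from the
# estimator -- this is what sampling tries to recover).
TRUE_SHARE = {"shop-a.example": 0.55, "shop-b.example": 0.30, "shop-c.example": 0.15}

def sample_visits(n):
    """Draw n simulated visits from the hidden true distribution."""
    sites, weights = zip(*TRUE_SHARE.items())
    return random.choices(sites, weights=weights, k=n)

def estimated_share(visits):
    """Relative traffic share estimated from the sample alone."""
    counts = Counter(visits)
    total = sum(counts.values())
    return {site: counts[site] / total for site in counts}

random.seed(42)
panel = sample_visits(10_000)
est = estimated_share(panel)
for site, share in sorted(est.items(), key=lambda kv: -kv[1]):
    print(f"{site}: {share:.2%}")
```

With a panel of 10,000 visits, the estimated shares land close to the true ones and, more importantly, the *ranking* of the sites comes out right – which is exactly the kind of comparison a search engine would care about.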
To give you a bit of an idea, let’s have a look at some data sources that Google might be able to use (and yes, in some cases – right now – they tell us, they don’t…):
1. The Google Chrome Browser: The usage rate is increasing significantly and, depending on whom you believe, sits somewhere around 30% – or even more – globally (TheNextWeb: 17.6% (08/13), Netmarketshare: 15.5% (01/14), clicky.com & StatCounter: 40.0%+ (02/14)). This can give a pretty solid impression of which sites are used and visited, as well as which links are being clicked – and, more importantly, which are not. Probably also good to know: there is a lot of other data being transferred (check this if you’re interested) that Google can potentially incorporate – such as bookmarked URLs, for example.
2. The Google Toolbar: If you’re not on Chrome already, that’s not a problem at all. Especially for folks still running Internet Explorer, the “Google Experience” is available as well – not to mention the fact that, again, the Google Toolbar can sample traffic data and report details back home. Much like Chrome, it can also track bounce rates and other signals on websites that are not Google properties (more on this over here).
Google has had a massive infrastructure in place for years to track all of this data, and just by using these two sources, Google can get a very clear picture of the web. But it gets even better if you look at some of the web-based services that Google offers, most of them “for free”:
3. Advertising: There is a huge variety of Google-owned advertising services. To give you a bit of an idea, the following tracking codes are embedded on the top 1 million websites, according to builtwith.com: DoubleClick: 18.9%, AdSense: 13.5%, Google Publisher Tag: 1.7%, Google Remarketing: 2.2% and AdSense for Search: 2.2%.
4. Web Analytics: This is even better – Analytics is currently present on around 48% of the top 1 million sites.
5. Widgets: Another type of tracking code comes in the form of widgets, such as: Google+ Widget: 5.6%, Google+ Button: 3.4% and Google Custom Search: 2.1% – and let’s also not forget about Google Maps with a massive 69%!
As mentioned, these numbers are based on the most important 1 million sites – and even if they are smaller when applied to the web as a whole, they still help paint an even clearer picture of traffic and users’ movements. But we’re not done yet – if you’re not scared yet, now is probably the time:
6. Google & Starbucks: Google recently announced that they’re going to power Starbucks’ Wi-Fi – how convenient to also be able to track traffic patterns in public places all over the world. And did you know that, according to engadget.com, Google is also building a new app for Android and iOS that automatically logs you into those networks? Kind of practical – isn’t it?
7. Google Fiber: Let me quote Google on this: “A different kind of Internet and TV: Google Fiber starts with a connection that is up to 100 times faster than today’s average broadband speeds. Instant downloads. Crystal clear high definition TV. And endless possibilities.” – so Google becomes an ISP as well; you don’t even need a browser anymore to track user movement as you’re basically at the core of things…
8. Google Data Compression Proxy: This one is really scary… have you ever heard of the Chrome proxy for data compression? Basically, Google sends you a message when using Chrome on your mobile and asks whether they are “allowed to improve your user experience by making mobile browsing faster and helping you save bandwidth”. How nice…! In an interview over at RepublicWireless, Google’s Ilya Grigorik says: “As the name implies, Chrome Data Compression is a Chrome-only feature available for Android and iOS platforms. When enabled, unless the request is made from an Incognito window or is going to an HTTPS site, it is routed to the compression proxy which then fetches the content, optimizes it, and returns the content to the user’s device.” – yes, that means Google is able to see every single request you make on your mobile device as well, except for requests to SSL-based sites (for now).
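Grigorik’s description boils down to a very simple routing rule. Here’s a hypothetical sketch of that rule (this is obviously not Chrome’s actual code, just the logic as described):

```python
def goes_through_proxy(url: str, incognito: bool) -> bool:
    """Routing rule as described: requests from an Incognito window and
    requests to HTTPS sites bypass the compression proxy; everything
    else is routed through it (i.e., is visible to Google)."""
    return not incognito and not url.startswith("https://")

visible = goes_through_proxy("http://example.com/", incognito=False)    # plain HTTP: proxied
hidden = goes_through_proxy("https://example.com/", incognito=False)    # SSL: bypasses the proxy
private = goes_through_proxy("http://example.com/", incognito=True)     # Incognito: bypasses the proxy
print(visible, hidden, private)
```

In other words: on a non-SSL site, outside Incognito, every request flows through Google’s infrastructure – which is exactly the point being made above.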
And there is more, of course: we didn’t touch on things such as Google Apps (which also “might” allow them to track smartphones and therefore help in determining things such as “real users”), Google’s newest acquisitions in the field of robotics (Wi-Fi-providing robots, anyone?) or even Google Loon, their approach to “balloon-powered Internet for everyone”.
If you sum things up, it becomes quite clear that Google knows a lot about how we actually move around the web: which sites are used, how often and when, which links are clicked and which are not – and it’s easy to break that down by vertical, even. So yes, a lot of data; and it’s not even about a single user but rather about relevant samples.
But wait – what takes them so long?
One might come to the conclusion that a lot of this data has been around for quite some time – and I’m not going to argue with that. As a matter of fact, aside from acquisitions, Google is pretty well known for its appetite for statistics, so rest assured that they start tracking as soon as a service is launched.
That said, there are a couple of things to keep in mind: first and foremost, the fact that interpreting traffic data has its own flaws. For example, I could imagine that in certain industries so-called traffic exchanges (e.g. due to sneaky exit pop-ups, pop-unders, etc.) are something Google needs to tackle properly before really relying on traffic across the board.
Further, there might be pitfalls if you consider the following case: if a website was banned by a search engine or, even more likely, otherwise suffered a significant reduction in traffic caused by the loss of a very strong inbound link, the website’s popularity would also decline. In the following months, the overall traffic coming from other engines would decline because of the website’s lower popularity – which in turn would reduce traffic further, causing search engines to rank the site lower, and so on and so forth until traffic stabilized at a new, significantly lower level. This is probably not as easy to handle as it sounds at first glance – however, I’m sure Google will get around it.
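That feedback loop is easy to simulate. The following toy model (all numbers invented purely for illustration) shows how a one-off traffic drop – say, losing a strong inbound link – compounds through rankings and then settles at a new, lower equilibrium rather than spiraling to zero:

```python
def simulate_decline(traffic, shock=0.30, feedback=0.4, months=24):
    """Toy model of the ranking feedback loop: each month, a fraction
    ("feedback") of the previous month's *loss* compounds via lower
    rankings. Because feedback < 1, the losses shrink geometrically
    and traffic converges to a new, lower level."""
    history = [traffic]
    traffic *= (1 - shock)          # the initial drop: the lost link
    history.append(traffic)
    for _ in range(months):
        loss = history[-2] - history[-1]
        traffic -= feedback * loss  # rankings react to the previous decline
        history.append(traffic)
    return history

curve = simulate_decline(100_000.0)
print(f"start: {curve[0]:.0f}, after shock: {curve[1]:.0f}, stabilized: {curve[-1]:.0f}")
```

With these made-up coefficients, 100,000 monthly visits drop to 70,000 after the shock and then bleed down month by month until stabilizing around 50,000 – every month’s decline is smaller than the last, which is why the spiral eventually stops.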
And lastly, there is absolutely no evidence that Google is NOT already – at least partially – using traffic in link evaluation (and rankings); of course, there is no official evidence confirming the usage of traffic and its patterns either – but it would seem very strange to me for Google, sitting on that massive amount of data, NOT to be using it at all. I just don’t buy it… what do you think?