A Midsummer Night’s Digital Dream

While the whole Search world is busy talking about the life and death of Google Plus, during these relatively quiet days, thanks to the (short) period of (half) vacation I gifted myself, I started "dreaming" about the future of search and digital marketing.

It seemed an interesting thing to dream about; after all, are we not already in the second half of this Year of Our Lord 2015?

On the other hand, thinking about the evolution of digital marketing, and Google in particular, is something I have always liked, and it has recently been the subject of some interesting posts and talks.

As a disclaimer, though, I warn you that I am not going to talk about privacy issues or the rise of Skynet because:

  1. That would not be about digital marketing, but about politics;
  2. I do not think that is a possible scenario (you can call me an optimist, if you want).

However, if you have been reading my posts for a long time, you also know that I am not a naive person; quite the contrary! I lean toward the conspiranoic side of life, and I tend to consider the possibility of hidden-from-the-public reasons behind what we experience.

[Image: David 8 / Google]

As Fox Mulder, that great authority on investigation, used to say, I "trust no one"… well, I tend to trust friends and family, but I surely do not trust big corporations.

For instance, I cannot help but think that all the frenzy Google and Facebook show for offering Internet access to the entire world (Project Loon and Internet.org) is really about expanding their markets, not just being generous.

Another example: I do not believe at all that Google's model for its ideal evolution is the Star Trek computer. Sure, that can be part of its long-term project, but only as a step toward its own David 8 (if not, why is Google investing so much in robotics?).

Machine Learning and Deep Learning

SEOs discovered Machine Learning with a slap.

The slap was Panda, back in 2011, the infamous algorithm able to learn through iterations what a “quality website” was and was not.

[Image: Panda algorithm machine learning diagram]

Inspired by an original Rand Fishkin graphic.

Panda is just an example of what Google does with Machine Learning (here you can find what Google shares about it).
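To make the idea concrete, here is a minimal, purely illustrative sketch of an iteratively retrained quality classifier. Every feature name, number and threshold below is invented; Google has never published Panda's internals.

```python
# Hypothetical sketch of a "Panda-like" quality classifier: a model is
# trained on human quality ratings, then scores unrated pages at scale.
# All features and data are invented for illustration only.
from sklearn.ensemble import GradientBoostingClassifier

# Each row: [ad_density, content_depth, duplicate_ratio, readability]
rated_pages = [
    [0.60, 0.20, 0.70, 0.30],  # thin, ad-heavy page
    [0.10, 0.90, 0.05, 0.85],  # in-depth original article
    [0.50, 0.30, 0.60, 0.40],
    [0.05, 0.80, 0.10, 0.90],
]
quality_labels = [0, 1, 0, 1]  # 0 = low quality, 1 = high (human raters)

model = GradientBoostingClassifier().fit(rated_pages, quality_labels)

# Each Panda "iteration" would retrain on fresh ratings and re-score the web.
print(model.predict([[0.55, 0.25, 0.65, 0.35]]))  # -> [0], likely low quality
```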

One of the first public examples was offered by Matt Cutts at Pubcon in 2013, when he presented how Google was using machine learning to associate different entities with one another.

[Image: Matt Cutts presenting Google's machine learning "baby steps" at Pubcon 2013]
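As a toy illustration of what "associating entities" can mean in practice, closeness in a learned vector space stands in for relatedness. The vectors below are invented; real systems learn them from massive corpora.

```python
# Toy sketch: entities as vectors, association as cosine similarity.
# The three-dimensional vectors are invented for illustration.
import math

entity_vectors = {
    "Leonardo da Vinci": [0.90, 0.10, 0.80],
    "Mona Lisa":         [0.85, 0.15, 0.75],
    "Python (language)": [0.10, 0.90, 0.20],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norms

# "Leonardo da Vinci" sits far closer to "Mona Lisa" than to an unrelated entity.
print(cosine(entity_vectors["Leonardo da Vinci"], entity_vectors["Mona Lisa"]))
print(cosine(entity_vectors["Leonardo da Vinci"], entity_vectors["Python (language)"]))
```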

Sure, in 2013 Google was still taking its first baby steps in that direction, but two years in tech nowadays are like two centuries in the past; we just have to look at how Google Knowledge Graph and Featured Snippets have evolved to understand how hard Mountain View has pressed the accelerator of its (self-driving) Search-Car.

Knowledge Graph (in its broadest sense) and machine learning go hand in hand, as this patent explained by Bill Slawski also seems to suggest.

It is logical to think that Google will not limit machine learning to the Knowledge Graph, and will start using it in the Search phase of its algorithm itself, which means that sooner rather than later it will determine how a document ranks in the SERPs.

This also seems the logical evolution of the Knowledge Graph, as an entity-based system able to offer timely answers to users performing a search, especially in a mobile and voice search environment, and the logical final merging of Universal Search and Answers, whose borders are blurring more every day.

Deep Learning

However, how could Google use machine learning to determine rankings too? Rand Fishkin presented a fascinating hypothesis at the last MozCon.

Rand suggested that, just as Google is able to understand that a photo of a cat actually is a photo of a cat without needing to be instructed about felines, it is theoretically possible that a Deep Learning machine could start understanding why the sites now ranking in the top 10 are ranking there.

I know, it sounds a little convoluted, but in reality it is so simple that Occam himself would suggest it as the best solution Google has for offering the best results.

If we reflect on what Google has been doing in the last few years and the amount of data it owns, and try to stop thinking, at least for a second, about just our little SEO world, then we will understand that this is something Google is already experimenting with.

Let's see:

  1. Google has the data (the Knowledge Base);
  2. Google has the hardware for performing such a huge calculation (not only its server hubs, but also neural computers, which it uses for analyzing Big Big Data);
  3. Google has the human resources able to set up the deep learning algorithm (e.g., Jeff Dean).

How could this process work? Rand represents it well in the graphic below:

[Image: Rand Fishkin's Google Search deep learning hypothesis diagram]
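In code, the hypothesis could be sketched roughly like this (a speculative toy, not Google's method: the signals, data and model choice are all invented): feed a model the pages that already rank and let it infer what separates the top from the bottom.

```python
# Speculative sketch of Rand's hypothesis: instead of hand-coding ranking
# factors, a model learns what top-ranking pages have in common.
# All signals and numbers are invented for illustration.
from sklearn.neural_network import MLPRegressor

# Each row: arbitrary page signals (e.g. dwell time, CTR, depth, load time)
page_signals = [
    [0.80, 0.12, 0.90, 0.30],
    [0.75, 0.10, 0.85, 0.40],
    [0.20, 0.02, 0.30, 0.90],
    [0.15, 0.03, 0.25, 0.95],
]
observed_rank = [1, 2, 9, 10]  # position in the SERP (1 = best)

ranker = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
ranker.fit(page_signals, observed_rank)

# A new page with "top-like" signals should be predicted near the top.
print(ranker.predict([[0.78, 0.11, 0.88, 0.35]]))
```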

OK… maybe I (and you) should really start worrying about Skynet, but I still prefer the image of Sergey Brin and Larry Page as real-life versions of Peter Weyland.

I already suspected that no one at Google really knows the Search algorithm in its entirety – only little bits of it – but Rand's hypothesis tells us that in the future probably nobody will know even those little bits. It will be us against the machines.

What can we do, then? Is there room for hope?

Yes, there is! Also because we are already working on what may become the "factors" that a deep learning search algorithm will consider positive.

Now that I am having this public midsummer night's digital (and conspiranoic) dream, I cannot help but associate Google's embrace of machine and deep learning with the rise of the definition of SEO as Search Experience Optimization. The definition was not new (Bryan Eisenberg was the first to use – almost – this definition nearly ten years ago), but the first time a Googler used it was just a few months before Google started making public that it was experimenting with machine learning.

Search Experience Optimization means that SEOs should start thinking about optimizing websites to really answer searchers' needs, so that searchers will (as the sketch after this list illustrates):

  • Click more on their search snippets;
  • Stay longer on those sites;
  • Search for them more by their named entity ID (the brand);
  • Offer strong “positive” amplification in terms of links, mentions and co-citations for their given topics;
  • Have stronger interaction/engagement on-page (is this a reason why Google signed a new contract with Twitter?).
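Here is a hedged sketch of those signals as a feature vector a learning algorithm could consume; every name, weight and threshold is invented, standing in for whatever weighting a trained model would actually learn.

```python
# Illustrative only: the user signals listed above as a feature vector.
# Field names, weights and the 0.5 threshold are all invented.
from dataclasses import dataclass

@dataclass
class UserSignals:
    snippet_ctr: float           # clicks / impressions on the search snippet
    avg_dwell_seconds: float     # time on site before returning to the SERP
    branded_search_share: float  # share of searches using the brand entity
    amplification: float         # links, mentions, co-citations (normalized)
    onpage_engagement: float     # comments, shares, scroll depth (normalized)

def looks_satisfying(s: UserSignals) -> bool:
    """Crude stand-in for the weighting a trained model would learn."""
    score = (0.3 * s.snippet_ctr
             + 0.3 * min(s.avg_dwell_seconds / 180, 1.0)
             + 0.2 * s.branded_search_share
             + 0.1 * s.amplification
             + 0.1 * s.onpage_engagement)
    return score > 0.5

print(looks_satisfying(UserSignals(0.35, 240, 0.6, 0.4, 0.7)))  # -> True
```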

Wait! Are these not things we should be working on already? I think they are, because otherwise I cannot justify all the frenzy about Content, CRO, Site Speed et al. that the SEO world has been experiencing since Google started forcing it to change its mindset with Panda, Penguin and all the updates of these last 4 years [N.o.A.: oh yes, there are also those doing all that just to discover bugs with which to trick Google].

However, stay with me, because there is something more about my “dream” I want to share with you.


If Rand's hypothesis is true, and I strongly suspect he is right, then all the writers of "technical SEO is useless" articles are going to eat their own words sooner rather than later.

Remember that the algorithm right now is possibly a sum of different algorithms. So, in my opinion, it will also be in the future: a user-signals-based algorithm matched with renewed code-signal ones.

Google, in fact, will still need to understand what a website is about in order to present SERPs that the user-signal algorithm can then match against the factors it determined through its deep learning process. And that means Semantics.
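A speculative sketch of that two-layer idea, with an invented gate and invented weights: the semantic side decides whether a page is understood and relevant at all, and the learned user-signal side then lifts or sinks it.

```python
# Speculative sketch of "a sum of different algorithms": a semantic score
# gates the page, a learned user-signal score then weighs in.
# The 0.2 gate and the 0.4/0.6 weights are invented for illustration.
def combined_rank_score(semantic_relevance: float, user_signal_score: float) -> float:
    if semantic_relevance < 0.2:
        return 0.0  # Google cannot tell what the page is about: no ranking
    return 0.4 * semantic_relevance + 0.6 * user_signal_score

print(combined_rank_score(0.8, 0.7))   # -> 0.74
print(combined_rank_score(0.1, 0.95))  # -> 0.0, user signals never even apply
```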

I am not talking about Semantics in literary terms, but specifically in information science terms.

This explains the renewed interest Google has shown in Schema.org over the last 18 months, especially with the JSON-LD implementation, but it also gives new meaning to the importance of links and to the growing influence that experiments are showing for unlinked mentions and "no-followed" co-citations.
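For the curious, this is roughly what such markup looks like: a minimal schema.org Article in JSON-LD, built here in Python for convenience, with property values invented for illustration.

```python
# Minimal example of schema.org markup in the JSON-LD format the post
# refers to. All values below are illustrative.
import json

article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "A Midsummer Night's Digital Dream",
    "author": {"@type": "Person", "name": "Gianluca Fiorelli"},
    "about": {"@type": "Thing", "name": "Machine learning in search"},
}

# Embedded in a page inside:
# <script type="application/ld+json"> ... </script>
print(json.dumps(article, indent=2))
```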

However, Semantics in its more classic and literary meaning is, and will remain, important too.

[Image: David 8 – "real emotions"]

In a post I wrote last year, I presented a theory according to which Google would soon move from Semantics to Semiotics. I still believe in that idea, and I think that the voice recognition Google has developed (and not only Google: Apple with Siri and Microsoft with Cortana too) is a great base for training its search algorithm to recognize rhetorical structures and the pragmatic side of language, hence understanding the real tone of voice of a document. I am talking about sentiment analysis based on machine learning, and not just on randomly present signals (e.g., negative comments, poor user-based votes).
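A hedged sketch of that difference: instead of counting explicit signals, a model learns wording patterns from labeled examples. The tiny training set below is invented and far too small for real use.

```python
# Toy sketch of sentiment analysis based on machine learning: the model
# learns sentiment from labeled text rather than from explicit signals
# like star ratings. Training data is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Brilliant guide, saved me hours of work",
    "Clear, well researched and genuinely useful",
    "Misleading clickbait, a total waste of time",
    "Thin content stuffed with ads, avoid it",
]
labels = ["positive", "positive", "negative", "negative"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

print(classifier.predict(["A genuinely useful, well researched piece"]))
```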

In this context, other trends like über-personalization, predictive search, pervasiveness (or fluid mobility, as Microsoft defines it) and glocalization take on a new, stronger relevance too.

In my digital dream, I see us SEOs working in a sort of schizophrenic, but ultimately consistent, way: optimizing both for users and for bots.

Oh… Gianluca, but is that not the Present already? I think it is.



Gianluca Fiorelli is an SEO and Web Marketing Strategist who operates in the Italian, Spanish and English-speaking markets. He also works regularly as an independent consultant with bigger international SEO agencies.