Player Piano, or How Digital Marketers Can Survive the Advent of AI

28th March 2017

This post is the written version of the talk I presented at The Inbounder World Tour Madrid on March 17. Other future The Inbounder events will be held in London on May 2nd and in New York on May 22nd.

Those who know me, or simply follow me on social networks, know that I am a science fiction geek, which is not surprising for a man who, as a child, saw "Star Wars" in the movie theater and the original "Battlestar Galactica" series on television.

Robots are among the main characters in sci-fi TV series, movies and novels.

These mechanical beings with human or superhuman intelligence have always fascinated us.

If we look only at the history of cinema, we can find robots that are naive, like the Tin Man of "The Wizard of Oz", subtly dangerous, like Ava in "Ex Machina", or more human than human, like the mythical Roy Batty of "Blade Runner".

Regardless of their dramatic roles, though, the common feature of all robots is that they are our substitutes.

Perspectives on the impact of robots on our work

The player piano in Westworld (Credit: Westworld / HBO)

Beyond the fascination we feel towards our synthetic-skin substitutes, the problematic relationship between humans and robots has been placed at the center of discussions about the future that awaits us on many occasions in the last couple of years.

The evolution of robotics, in fact, has reached such a cruising speed that what once appeared to be only science fiction may now become reality (if it is not already).

The First Industrial Revolution devalued muscle work, then the second one devalued routine mental work. (Player Piano by Kurt Vonnegut, 1952)

What Kurt Vonnegut foretold in 1952 in his novel Player Piano, a world where most of the population does not work because it has been replaced by machines and thus, in fact, loses control over its own life, has never seemed so probable.

His dystopian vision of the future, which at heart hid some irony about the enthusiasm for the evolution of science and technology typical of American society in the 1950s, is common in science fiction.

Maybe we won't live in a Terminator-like future, and we won't become like those obese cruise passengers, the humans that WALL-E misses so much in the Pixar movie, but it is true that all the latest studies on the impact of robotics on jobs seem to point in the direction indicated by the great scientist Stephen Hawking:

The automation of factories has already decimated jobs in traditional manufacturing, and the rise of artificial intelligence is likely to extend this job destruction deep into the middle classes, with only the most caring, creative or supervisory roles remaining.

Probability of computerization of different occupations, 2013

This concern, and therefore the need to intervene to balance (if not counter) this future without work, is not new.

In 1962, American president John Fitzgerald Kennedy himself indicated that it is a duty of the state to maintain full employment at a time when automation is replacing human workers.

Not much different was the concern that the economist Keynes expressed in the 1930s, when he coined the term "technological unemployment".

But do we really have to worry?

The answer is yes.

If we take as an example the impact that robots and automation will have on the US labor market, we can see that 43% of the US workforce is at risk of being replaced by machines.

If the figure of 43% does not impress you, then maybe this will: it means 87,720,000 workers left unemployed in the next few years in the United States because of robotics.

If we think about the working-class crisis affecting the industrial states of the USA, we can see that, more than the relocation of plants to cheaper nations, it was the massive implementation of industrial robots that made people lose their jobs (and made them vote for Trump, even if they don't know it).

It is true that other, less pessimistic studies exist which, on the contrary, indicate that AI and robotics will also create new jobs (many of which we cannot even imagine yet) to replace the lost ones.

This positive view is also based on what has always happened in similar situations, such as when steam replaced the muscular strength of men and animals during the First Industrial Revolution.

The problem, however, is that the creation of new jobs does not proceed at the same pace as the destruction of the old ones or, at least, does not fully make up for them.

If we re-examine the list above of jobs considered at risk from automation, we will see that the clear majority of them are jobs mostly made up of routine tasks.

Are jobs in the web marketing industry endangered too?

At first glance, we digital marketers could rest easy.

In fact, if we think of robots strictly as physical machines, then shepherd dogs have more to worry about than us, because there is already a robot, SwagBot, designed to replace them:

SwagBot, the shepherd robot

On the other hand, if we also think of automation as software systems that facilitate and even replace human operators thanks to their undoubtedly greater computing power, then things start to look less certain.

Automation in marketing is not new.

Who, in fact, does not know or has not used software such as Zapier or IFTTT to automate mechanical tasks like saving the files we receive in our emails to Dropbox?

Many of the tools we use daily, from Answerthepublic.com for keyword research to marketing automation systems like HubSpot or Pardot, are nothing but software that frees us from performing repetitive tasks, making our work not only faster but also more effective.

The most advanced tools are based on techniques that are on everyone's lips today:

  1. Machine Learning;
  2. Deep Learning;
  3. Artificial Intelligence.

The issue is that these terms are so often misunderstood and misused, especially by the press, that it is worth clarifying them.

First: avoid misunderstandings

Artificial Intelligence

When we talk about Artificial Intelligence, we mean computer systems capable of performing tasks that normally require human intelligence, such as visual perception, voice recognition, decision making and translation between languages.

There are, then, three different types of Artificial Intelligence, as Tim Urban brilliantly explained in this article:

  1. ASI (Artificial Super Intelligence), which ranges from a machine that is just a little smarter than a human being to one that is billions of times more intelligent. Fortunately, this type of A.I. exists only in science fiction.
  2. AGI (Artificial General Intelligence), which is a machine that can perform any intellectual task that a human being can. Creating an AGI is objectively a very complex task. However, thanks to the research of companies like Google's DeepMind, AGI seems like something that may happen within not so many years from now.
  3. ANI (Artificial Narrow Intelligence), in which Artificial Intelligence specializes in only one area. This is the A.I. that already exists and the one Google itself refers to when it says (for example, when it announced RankBrain) that it uses A.I. to improve products such as its search engine.

Machine and Deep Learning

Machine and Deep Learning are often used as synonyms for Artificial Intelligence, when – in fact – they are not.

They are techniques used to equip machines with Artificial Intelligence.

What is Machine Learning?

Machine Learning is the most commonly used technique nowadays; in its most basic version, it is the practice of using algorithms to analyse data, learn from it and then determine or predict something about the analysed subject.

So, instead of manually coding software routines with a specific set of instructions to perform a task, the machine is “trained” using large amounts of data and algorithms that give it the ability to learn how to perform the assigned task.
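
To make this idea concrete, here is a minimal, purely illustrative sketch in Python with scikit-learn: instead of hand-coding rules that decide whether a page title is "engaging", we feed a model a few labeled examples and let it learn the pattern. The titles and labels are invented for this example.

```python
# Minimal Machine Learning sketch: learn from examples instead of coding rules.
# The training data below is invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

titles = [
    "10 quick tips to improve your on-page SEO today",
    "How we doubled organic traffic in 6 months",
    "Annual report of the committee, volume II",
    "Minutes of the board meeting, March session",
]
labels = [1, 1, 0, 0]  # 1 = engaging title, 0 = not engaging (toy labels)

# The pipeline turns text into numeric features and trains a classifier on them.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(titles, labels)

# The model now predicts something it was never explicitly programmed to do.
print(model.predict(["7 simple tricks to boost your click-through rate"]))
```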

A good example of Machine Learning use is the Panda filter, which Google has implemented in its algorithm to improve the quality of its search results.

Also at the marketing software level, we can already find tools based on Machine Learning that perform the complex calculations underlying their functions.

Two Spanish examples of this kind of software are Safecont, which aims to measure whether a website is at risk of a Google penalty for low quality, and Adinton, which uses Machine Learning to offer the best conversion attribution data and, for PPC, the ideal bids to use to get the maximum ROI from a campaign.

What is Deep Learning?

Defining Deep Learning in a simple way is not simple, partly because many definitions of it exist, so I will limit myself to reporting the most common ones:

  1. The use of a layer cascade with non-linear processing units to extract and transform variables. Each layer uses the output of the previous layer as input. Algorithms can use supervised learning or unsupervised learning, and applications include data modeling and pattern recognition.
  2. A system based on multiple levels of learning of characteristics or representations of data. The higher-level characteristics are derived from lower-level characteristics to form a hierarchical representation.
  3. Learning multiple levels of representation that correspond to different levels of abstraction. These levels form a hierarchy of concepts.

In other words, Deep Learning is based on creating a base algorithm built on top of other algorithms which, in turn, can create new ones by itself in order to achieve a designated objective.

One of the first examples of Google using Deep Learning was the creation of an algorithm capable of recognizing cats in YouTube videos.

If we talk about practical applications of Deep Learning, TensorFlow – thanks also to its open source nature – is the most common platform for creating Deep Learning based tools.
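
As a purely illustrative sketch of the "layer cascade" idea described above, here is a tiny network built with TensorFlow's Keras API, where each layer takes the output of the previous one as its input; the shapes and the random training data are invented.

```python
# A tiny Deep Learning sketch with TensorFlow/Keras: each layer feeds the next.
# Input shape, layer sizes and the random training data are invented for illustration.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(32, activation="relu"),   # lower-level features
    tf.keras.layers.Dense(16, activation="relu"),   # higher-level features
    tf.keras.layers.Dense(1, activation="sigmoid"), # final prediction
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Train on random toy data just to show the mechanics of the cascade.
X = np.random.rand(200, 10)
y = (X.sum(axis=1) > 5).astype(int)
model.fit(X, y, epochs=5, verbose=0)
print(model.predict(X[:3]))
```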

A good example of Deep Learning in use is what Distilled did with DeepRank, a system that can predict which web page is more likely to rank better simply by asking DeepRank to analyse them:

DeepRank operating schema

The results of this experiment showed that DeepRank was more effective at evaluating the potential ranking of one page over another than even the most experienced SEOs.

As you can imagine, the question of whether our jobs, which we consider so safe and so "novel", are in danger returns even more strongly after the examples of software based on "artificial intelligence" shown so far.

In Social Media Marketing, every Community Manager specialized in Social Customer Care should begin to look warily at the evolution of chatbots, which are being developed more and more (not without some problems, to tell the truth) on instant messaging platforms like Facebook Messenger or WhatsApp.

Likewise, agency account managers, who normally devote much of their time to creating reports for their clients, should know that there are programs like NarrativeScience that are able to create reports much faster than they do and just as effectively.

Also, even if we think we are not "in danger" because our jobs are less routine and more creative, we should not be so confident.

For example, Condé Nast recently started using IBM Watson's Artificial Intelligence software, not a group of Social Media experts, to identify which influencers to target among tens of thousands of social network profiles.

Did an algorithm create this commercial for Smint?

Finally, there are even examples of commercials created exclusively by algorithms, which up to 46% of a test group considered better than the ad created by a human creative.

Changing model so as not to be outdone

For several years, digital marketing has taken as its model the figure of the T-Shaped Marketer, a professional who, while having a basic knowledge of the online marketing disciplines closest to his daily work, specializes in a single discipline or even in only a few areas of his main discipline:

The T-Shaped web marketer

With the advent of techniques and software based on Machine and Deep Learning, though, this model is beginning to show weaknesses because, as we have seen, the more specialized the task, the more likely a machine can do it.

If we are hyper-specialized in one or a few tasks, to give just a couple of examples, the poorly named SEO Copywriting or Semantic Search, then we will be more likely to be replaced by software that knows how to do our work at least as well as we do, but faster and more cheaply.

Fortunately, all the studies done so far also tell us that algorithms are (still) weak at replicating decisions taken on unstructured data… that is, they cannot yet intuitively create solutions based on their analysis.

This great difference between us and the algorithms can resolve our doubts about our future, and suggests moving to a new model: the π-Shaped Marketer, a figure that IBM itself is relaunching.

In addition, the π-Shaped Marketer is the model that best corresponds to the true nature of digital marketing professionals: being Technical marketers:

In other words, it is not us or the robots, but us with the robots.

“New online marketing” examples and ideas

SEO, Keyword Research and Natural Language

The most recent studies show that voice searches, also favored by smartphone usage, already account for 20% of all searches performed on Google.

In addition, beyond what Googlers "officially" claim, it seems that one of the characteristics of Google's latest updates is rewarding or penalizing websites based on how well the pages that rank match the search intent behind the queries themselves.

This evolution, both in how people search and in how Google interprets the content of online documents, is making the more traditional keyword research techniques more and more obsolete.

One of the most effective solutions, then, is to use tools based on Machine and Deep Learning, such as the Cloud Natural Language API developed by the Natural Language Understanding Team of Google.

Thanks to its APIs, we can perform a thorough syntactic analysis, sentiment analysis and entity extraction on the web pages that rank in the Top 10 for the keywords we identified in our initial keyword research.

Based on this analysis (a minimal code sketch follows the list below), we can identify:

  1. The common “dictionary” used by these pages, which we can use for optimising ours too;
  2. The tone of voice that best corresponds to the intent behind users' searches;
  3. The entities that are usually referenced when talking about a subject for which competitors rank in the Top 10 for a given search, and that are therefore considered semantically related to that same search.
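
A hedged sketch of this workflow with the google-cloud-language Python client might look like the following; the page text is a placeholder and the client version and authentication setup are assumptions, so check Google's current documentation before relying on it.

```python
# Hedged sketch: entity and sentiment analysis of a ranking page's text with the
# Cloud Natural Language API. Assumes GOOGLE_APPLICATION_CREDENTIALS is set and
# the google-cloud-language client library is installed.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

page_text = "Placeholder: the extracted body text of a Top 10 ranking page goes here."
document = language_v1.Document(
    content=page_text, type_=language_v1.Document.Type.PLAIN_TEXT
)

# Entities referenced by the page (people, places, products, concepts...).
entities = client.analyze_entities(document=document).entities
for entity in entities:
    print(entity.name, entity.type_.name, round(entity.salience, 3))

# Overall sentiment, a rough proxy for the page's tone of voice.
sentiment = client.analyze_sentiment(document=document).document_sentiment
print("score:", sentiment.score, "magnitude:", sentiment.magnitude)
```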

In addition, if we are doing international SEO keyword research, thanks to another Google API based on Deep Learning, the Cloud Translation API, we can replicate this analysis in other languages much more easily and, above all, much faster.
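
For the translation step, a short hedged sketch with the google-cloud-translate client (basic v2 interface assumed) could look like this; the seed terms are invented.

```python
# Hedged sketch: replicating a keyword/entity list in another language with the
# Cloud Translation API (google-cloud-translate, basic v2 client assumed).
from google.cloud import translate_v2 as translate

client = translate.Client()

seed_terms = ["running shoes", "trail running", "waterproof jacket"]  # example terms
for term in seed_terms:
    result = client.translate(term, target_language="es", source_language="en")
    print(term, "->", result["translatedText"])
```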

The most interesting thing is that we can do all this without having to program anything, simply by using a tool like MonkeyLearn.

SEO and Internal Linking optimization improved by Machine Learning algorithms

All SEOs know that one of their "secret weapons" is good internal link optimization.

This task, however, is not simple: to carry it out really effectively, it is necessary to analyze very carefully things like server logs, user behavior within the pages, conversion metrics and, finally, SEO metrics like organic search traffic, average rankings and so on.

In short, it is a task that already costs hours of effort for a small or medium-sized site; for a potentially huge one, such as an e-commerce or news site, it can become so complex that, in the end, very few really strive to optimize this very important on-site SEO facet.

A possible way to solve this issue could be to use a Machine Learning algorithm trained with data like conversions per page, user signals per page, organic landing pages, anchor texts used and distance from the homepage, so as to discover which pages of our site should be helped with stronger internal linking.
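
One possible, hedged sketch of such an approach is below, assuming we have already exported the metrics mentioned above into a CSV; the file name, column names and the "under-performance gap" heuristic are all assumptions made for illustration.

```python
# Hedged sketch: flag pages that might benefit from stronger internal linking.
# 'site_pages.csv' and its column names are hypothetical; replace with your own export.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

df = pd.read_csv("site_pages.csv")  # one row per URL

features = ["internal_links_in", "anchor_text_variety", "click_depth", "user_signals_score"]
target = "organic_sessions"

# Train a model that relates link-related signals to organic traffic.
model = RandomForestRegressor(n_estimators=200, random_state=42)
model.fit(df[features], df[target])

# One heuristic: pages whose actual traffic falls well short of what the model
# predicts from their signals are candidates for a stronger internal linking push.
df["predicted_sessions"] = model.predict(df[features])
df["gap"] = df["predicted_sessions"] - df[target]
print(df.sort_values("gap", ascending=False)[["url", "gap"]].head(10))
```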

SEO and CTR Optimization

One of the most "hated" tasks among SEOs, because of its boring nature, is the optimization of the title tag and meta description elements.

In fact, especially when we are working with a website with thousands of pages, the tendency is to create basic meta tag templates which, by applying simple rules, can be adapted to almost every type of page.

This classic practice, however, cannot always be effectively applied.

In addition, it forces us to create meta descriptions that tend to be poor in terms of CTR, and this, as some experiments seem to confirm, can become harmful on an SEO level, apart from obviously delivering organic traffic results often far below what the same rankings should get.

Fortunately, there are algorithms like Summarizer, available in the Algorithmia algorithm marketplace, that facilitate this task.

Based on semantic recognition, Summarizer does what its name promises: it creates meaningful summaries of even very large blocks of text.
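
A hedged sketch of calling it from Python with the Algorithmia client follows; the API key and the algorithm version string are placeholders, so check the algorithm's own page for the current ones.

```python
# Hedged sketch: generating a meta-description-length summary with Algorithmia's
# Summarizer. The API key and the algorithm version string are placeholders.
import Algorithmia

client = Algorithmia.client("YOUR_API_KEY")
algo = client.algo("nlp/Summarizer/0.1.8")  # version string is an assumption

page_text = "Placeholder: the full body text of the page to summarize goes here."
summary = algo.pipe(page_text).result

# Trim to a meta-description-friendly length before using it in the <meta> tag.
print(summary[:155])
```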

Think about what this can mean in terms of work and savings for SEO and for any writer who is asked to create excerpts of articles and posts.

Reporting

If there is an activity that, although necessary, is a black hole of work hours for every marketer, it is reporting.

In fact, this task is so costly in terms of actual work hours that practically all the tools we use offer us a way to automate it.

The problem, however, arises when we must put together reports from different tools and of a different nature because, obviously, there is no standard and every tool has its own way, also graphically, of creating them.

Therefore, it is not rare for us to be forced to create Excel templates after having collected dozens of different .csv files.

However, Natural Language-based algorithms can also help us here, with tools like Wordsmith, which can read and interpret Excel, Google Sheets, Tableau and Zapier files and transform them, after we have set some simple initial rules, into a written report.
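
As a rough illustration of the principle (not of Wordsmith's actual API), a few pandas lines can already merge heterogeneous CSV exports and turn the numbers into rule-based sentences; the file names and column names are invented for this sketch.

```python
# Rough illustration of rule-based reporting; not Wordsmith's API.
# The CSV file names and column names are invented for this sketch.
import pandas as pd

analytics = pd.read_csv("analytics_export.csv")   # e.g. columns: month, sessions
ads = pd.read_csv("ads_export.csv")               # e.g. columns: month, spend, conversions

report = analytics.merge(ads, on="month")
latest, previous = report.iloc[-1], report.iloc[-2]

# Simple "initial rules" that turn numbers into sentences.
change = (latest["sessions"] - previous["sessions"]) / previous["sessions"] * 100
trend = "grew" if change >= 0 else "dropped"
print(f"Organic sessions {trend} by {abs(change):.1f}% in {latest['month']}.")
print(f"Ad spend was {latest['spend']:.0f} with {latest['conversions']:.0f} conversions.")
```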

SEO, Content Marketing, Content Strategy and image bank optimization

One of the areas where Deep Learning has shown its full potential is the recognition and interpretation of images.

Machines can now recognize what is portrayed even in the most blurred photos, just like in Hollywood action movies.

Specifically, what all programs based on image recognition do is to label the images themselves with the elements present in them.

An already classic example is Facebook: if we look at the code of a Facebook page, we can see how it algorithmically tags all the images we upload; so if we upload a photo of a dog playing with a ball, Facebook will tag the photo with "dog" and "ball".

Facebook labels images via Deep Learning algorithms in order to better understand the content we upload to our walls and thus collect data that can then be used by Facebook itself and its advertisers to target us better.

Satisfying a similar need is also behind the many uses that can be made of a tool like Clarifai or of free algorithms like Altify.
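
Since client libraries change often, here is a hedged sketch of the same labeling idea using the Google Cloud Vision API rather than Clarifai's own SDK; the image URL is a placeholder and the authentication setup is an assumption.

```python
# Hedged sketch of automatic image labeling, here via the Cloud Vision API
# (used as an illustration of the same idea behind Clarifai or Altify).
# Assumes GOOGLE_APPLICATION_CREDENTIALS is set; the image URL is a placeholder.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
image = vision.Image(
    source=vision.ImageSource(image_uri="https://example.com/dog-with-ball.jpg")
)

response = client.label_detection(image=image)
for label in response.label_annotations:
    # e.g. "Dog 0.98", "Ball 0.91": labels we can reuse as tags, alt text or filters.
    print(label.description, round(label.score, 2))
```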

But these are not the only uses we can give to image recognition algorithms.

Consider, for example, how they can facilitate the work of classifying and, ultimately, of using huge image banks, such as those owned by news sites or stock photo companies, to create better categories, filters or even curated content.

Ecommerce and online personal shopping improved by Machine Learning

It is already possible to offer the users of an e-commerce site the option of shopping accompanied by a robot, which helps us buy better and faster what we are looking for.

An example is what The North Face does on its website with the help of IBM Watson.

As you can see in the screencast below, we train the algorithm with each search and purchase, and it, guided also by rules based on the product categorization and established filters, and crossing them with open data such as weather forecasts, is able to propose the products that best respond to our buying intention.
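
A toy, purely hypothetical sketch of the underlying idea, crossing a catalogue's own categories and filters with open weather data, might look like this (the catalogue and the forecast are invented):

```python
# Toy sketch of rule-based product suggestions crossed with weather data.
# The catalogue, the shopper's chosen category and the forecast are all invented.
catalogue = [
    {"name": "Alpine Shell Jacket", "category": "jackets", "waterproof": True,  "warmth": 3},
    {"name": "Light Windbreaker",   "category": "jackets", "waterproof": False, "warmth": 1},
    {"name": "Down Parka",          "category": "jackets", "waterproof": True,  "warmth": 5},
]

def recommend(category, forecast):
    """Filter by the shopper's chosen category, then re-rank using the forecast."""
    candidates = [p for p in catalogue if p["category"] == category]
    if forecast["rain"]:
        candidates = [p for p in candidates if p["waterproof"]]
    # Prefer warmth levels close to what the temperature suggests.
    target_warmth = 5 if forecast["temp_c"] < 0 else 3 if forecast["temp_c"] < 10 else 1
    return sorted(candidates, key=lambda p: abs(p["warmth"] - target_warmth))

print(recommend("jackets", {"rain": True, "temp_c": 4}))
```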

Content Curation and podcasting

Of all the things we can do thanks to Machine and Deep Learning and the creative use of Artificial Intelligence, the most exciting is that we can now create products that would previously have been extremely tedious and expensive to create.

For example, we all know how podcasting has gradually conquered market share and how millions of people are listening to and downloading podcasts.

What is missing, however, is a website that not only offers the best podcasts by thematic category (there are many of those), but also presents these podcasts in a transcribed version. Transcriptions can be a great advantage both for users and for SEO, because they are a very effective way of ranking the podcasts themselves in the search results, acquiring organic traffic and, therefore, contributing even more to increasing the number of downloads.

Thanks to tools such as Import.io, which we could use to scrape podcast listings by category and thus download them systematically, and to an automated transcription tool based on natural language recognition and interpretation such as Api.ai, we can quite easily create websites based on podcast curation.
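
As a hedged sketch of the transcription step, here is how it could look with Google's Cloud Speech-to-Text API, shown as an alternative to the Api.ai route mentioned above; the Cloud Storage URI, audio encoding and language code are placeholder assumptions.

```python
# Hedged sketch: transcribing a podcast episode with Cloud Speech-to-Text,
# shown as an alternative to the Api.ai route mentioned above.
# The Cloud Storage URI, encoding and language code are placeholder assumptions.
from google.cloud import speech

client = speech.SpeechClient()

audio = speech.RecognitionAudio(uri="gs://my-bucket/podcast-episode-001.flac")
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.FLAC,
    language_code="en-US",
)

# Long audio requires the asynchronous long_running_recognize operation.
operation = client.long_running_recognize(config=config, audio=audio)
response = operation.result(timeout=3600)

transcript = " ".join(r.alternatives[0].transcript for r in response.results)
print(transcript[:500])  # publish the full transcript alongside the episode page
```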

Moreover, we can use substantially the same technique to create content based on recorded public talks or on transcriptions of YouTube videos.

The combined power of men and machines

The seven examples presented above clearly show what we said before: the best way to survive the advent of bots is to work with them, not to fight against them.

The algorithms are ideal for:

  • Collecting data;
  • Processing data;
  • Analyzing data objectively;
  • Creating logical solutions based on data analysis.

We humans, on the other hand, are ideal for debugging the data with which Machine and Deep Learning algorithms are trained, because the greatest risk of any technology based on Artificial Intelligence is failing to provide the algorithms with all the representative data related to what we want them to learn.

And if an algorithm is given erroneous or incomplete data, it will offer erroneous results and degrade over time.

In addition to debugging, our role is and will remain to experiment with and test the solutions proposed by the algorithms.

On the other hand, since the algorithms free us from the most repetitive and time-consuming tasks, we will have more time to develop our true role as consultants and drivers of change for our clients and our companies.

But the most important thing is that, unlike algorithms, we are naturally inclined to creative lateral thinking, so, unlike robots, we can find creative or even illogical-but-effective solutions to problems for which algorithms only propose logical ones:

The changes that the arrival of Artificial Intelligence is forcing us to make are enormous, scary but exciting at the same time, and, above all, they force us to radically rethink our role as digital marketing professionals.

Going back to the television series and movies that I love so much, the time has come to abandon models like Don Draper, the purely creative marketing man of Mad Men, or Elliot Alderson, Mr. Robot's super tech geek, and to transform ourselves into Robert Ford, the creator of Westworld: a storyteller who builds on Artificial Intelligence, data analysis and manipulation.

Written By
Gianluca Fiorelli is an SEO and Web Marketing Strategist who operates in the Italian, Spanish and English-speaking markets. He also works regularly as an independent consultant with bigger international SEO agencies.