SEO Forecasting: A Poor Allocation of Resource

9 March 2011 by Sam Crocker


As I’m sure we all know, SEO forecasting is one of the most tedious and frustrating tasks that many of us have to cope with for pitches. In my short time at OMD I am quickly learning how important these forecasts are to larger clients, and as a result our team have been forced to improve the sophistication of our forecasting models.

When I first spoke to Bas about writing this post, my intention was to write a bit of a rant about the cost and time-consuming nature of forecasting, since, as many have illustrated before, “a forecast for SEO is not even worth the paper it is printed on.” However, given our involvement in an ongoing pitch and the subsequent improvements we have made to our model, I think it is worth sharing my findings and suggestions, and I would love to hear what others in the industry have come up with to cope with the myriad issues thrown up by this sort of reporting.

So, if you are a technical SEO and you already know how horrible the data is, I urge you to skip to the end of the post and weigh in below about the techniques and modelling you have done to make life easier (e.g. Excel hacks, estimation tricks, the number of terms used when trying to create a sample, how you deal with the long tail, etc.). That said, you may find some of the statistics and arguments useful should you find yourself needing to forecast!

However, if you are a client, if you have to deal with clients, or if you have to sell to clients, I strongly urge you to read about the issues below with the data most SEOs use for forecasting, and about the amount of time this sort of work can take – time that we could be spending making your website better!

Issues with the Data – Mo’ Variables, Mo’ Problems

As most folks who have ever had to do a forecast will know, there are limited places from which to get relevant data, and the type of forecasting and the budget available to you determine whether or not this data (flawed as it is) means anything at all. Additional requirements put in place by clients often limit the accuracy further, though they can save a good bit of time from a research perspective (i.e. we only want non-brand, we only want long tail, etc.).

AdWords Keyword Tool
When conducting forecasting in new industries, or in areas where you do not have a great deal of analytics and PPC data, you will be forced to use data from the Google AdWords Keyword Tool (please note that the API data is generally a bit better). Amongst the SEO community this data is notoriously inaccurate.

For example, in one recent exercise we found that on average the purported exact match volume for keywords being reported by the tool was off by more than 700% (e.g. the tool was reporting a total search volume for a term as 65 and we had seen 400+ visits for that term the previous month). I am sure there are worse examples and the fact of the matter is that we obviously can only rank for one spot – thus the total volume for this term is not only likely to be 400+ but more realistically will be well over 1,000.

This data is notoriously bad, but often it is the only data we have (particularly for new or small agencies working with new or small businesses). However, if we are lucky, we are already receiving traffic for some of these terms, or can run tests through PPC to try and gauge where this data is coming from and hopefully get more accurate insights.

*I would suggest using Richard Baxter’s keyword tool here as it relies on the volume from the API and allows you to get the volumes for specific terms much more quickly.

The 70/30 split (or was it 80/20? No, 60/40)
The next major issue with the data was very aptly pointed out in a pitch situation, after our first attempt at a forecasting model that relied heavily upon the AdWords data. The above data (though incredibly inaccurate) was designed for PPC purposes, as Google clearly has no need or incentive to provide SEOs with tools. Therefore, it is probably safe to assume that this data is intended to show all search volume, rather than the share of search volume that goes strictly to organic search.

It is essential, then, that if we are to use the AdWords Keyword Tool data (horrible as it is) we also discount the traffic opportunity through organic search by ~30% (or more). This one is easy enough to overlook, but it adds another element of inaccuracy and uncertainty, because this split (and the degree to which it exists) varies wildly and, as Pete Young points out, “is due a significant review.” Hence: another layer of uncertainty.
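As a back-of-the-envelope illustration of that discount (a minimal sketch in Python; the volume figure and the 70/30 split are assumptions to tune per SERP):

```python
reported_volume = 10_000  # AdWords Keyword Tool exact-match searches/month (illustrative)
organic_share = 0.70      # assumed share of all clicks that go to the organic listings
organic_opportunity = reported_volume * organic_share
print(organic_opportunity)  # 7000.0 searches/month realistically available to organic
```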

Analytics Segments
One way to improve the reliability of the first set of data (that is to say, volume and potential volume for certain keywords) is to use analytics data for existing traffic to your site for a sample of terms. Sounds great, right? If we don’t have to deal with the Keyword Tool data, we should get much more accurate predictions!

Unfortunately this is not the case. One of the most sensible ways to look at the traffic data would be to create a filter to catch all organic traffic around the term or set of terms in which you are interested. This could save you a significant amount of time compared with looking at every single term on a case-by-case basis, and would allow you to spit the data out into Excel (where it is much easier to manipulate).

The problem with this (particularly for bigger sites over longer time periods) is the increasingly common and unfriendly reminder below:

[Screenshot: Google Analytics warning that the report is based on sampled data]

Samples can be good and accurate though! …

Not so much in this case. I think my point here is becoming clear. AdWords tool volumes are off by as much as 50% (in our experience anyway), but can be off by 1,000% or more. Analytics and PPC data, whilst more accurate than the other options, will still lead to sampled data when looking at large sites or long time periods, and +/- 99% accuracy on my forecasting model is not something that is going to sit well.

Click Through Rate (CTR) Data
It is fairly safe to assume that most SEOs doing forecasting projections rely quite heavily upon the AOL click-through rate data to forecast increases in traffic based upon position in the search engine. If you are lucky, your agency keeps track of a broad set of terms and tries to look at the impact of different rankings and so forth, but almost all of the forecasting models I have seen use this data rather than in-house data (which requires a load of servers, hacks and ongoing attention).
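To make the mechanics concrete, here is a minimal sketch of that standard model in Python (the AOL figures are the commonly cited approximations from the leaked data set, and the 70% organic share is the assumed split discussed above):

```python
# Commonly cited click-through rates by position from the leaked 2006 AOL
# data (approximate figures for positions 1-10).
AOL_CTR = {1: 0.4213, 2: 0.1190, 3: 0.0850, 4: 0.0606, 5: 0.0486,
           6: 0.0399, 7: 0.0337, 8: 0.0298, 9: 0.0283, 10: 0.0297}

def estimated_visits(monthly_volume, position, organic_share=0.70):
    """Naive standard model: search volume x organic share x CTR at position."""
    return monthly_volume * organic_share * AOL_CTR.get(position, 0.0)

print(estimated_visits(10_000, 5))  # ~340 visits/month if you rank 5th
```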

So, what’s the problem?

This click-stream data came from a leaked AOL document covering the behaviour of roughly 650,000 searchers across 20 million queries over a period of three months (source). This is great data from a massive set, but the problem is that the landscape has changed a great deal. We don’t have one SERP anymore, or one “type” of SERP anymore. As I’ve mentioned in the past, the landscape is constantly changing, and many results now throw up local results (including a map), shopping results, etc.

And just to make things even more interesting, it is thought that adding microformats increases CTR as well!

So, it would appear that we have yet another layer of complication and another imprecise variable leading to further muddied waters.

Final Word on the Data (TL;DR)
In the very best case scenario, we would be asked to improve rankings on a certain set of terms for which we have both PPC and organic traffic data. Failing this, you would at least be able to pay to run a number of short PPC campaigns to capture impression data for the terms in question (don’t forget to set the phrases to exact match!).

However, even with the most accurate possible data, we are dealing with a number of unknown and ever-changing variables (CTR, popularity of queries, analytics and user behaviour, etc.) that can drastically throw off this information. Without proper funding and resource it is a time-consuming task that will not lead to reliable results (no matter how scientifically we try to treat a decidedly unscientific task).

Issues with the Task Itself

Time is Money
At the end of the day, this sort of research (to be done “well”) requires a great deal of time. Setting aside the fact that my time is valuable and I should be paid for this sort of work, there is a much more important factor: my aim is to help clients increase their traffic, increase their converting traffic (either through CRO or by conducting smarter keyword research), and help them provide a better experience for their users so they make more money.

I would love to tell them just how much money I can make them, but given the amount of time forecasting takes, I’d really rather just get to work on actually improving their performance and increasing revenue – we can discuss whether it has been satisfactory after a short time period, and most contracts are more forgiving in the first few months.

“Getting Burned”
Every single time I have conducted forecasting at the request of a client I have been told how they have been “burned by this sort of thing in the past.” I am by no means having a go at any individual here because literally every client has said it to me and I believe it is largely the result of SEOs (and sales teams!) promising the world when they cannot really deliver. They can blame the data if they like, but ultimately most times it will be because they have stretched the truth or promised something that they do not genuinely believe they can deliver.

There are a number of causes for this, but I urge the utmost caution to all future clients: it is about trust. If it sounds too good to be true, it probably is! I know these points sound a bit cliché, but at the end of the day it all comes down to trust. Some of these contracts are worth a great deal of money, and it is easy to get overzealous with this sort of thing.

It is my aim to always be honest, to be realistic, but also to give ranges. When doing a forecast I try to provide a number of cases: where you will be if you continue as you are, where you will be if you do nothing, where you will be if you hire us, and where you will be if you actually help us along the way.

Many people provide pie-in-the-sky figures and then blame the client for not implementing their ideas. I would much prefer to tell the client that the best results will not come without their cooperation, and to show them realistic estimations based upon their work with us, versus our work without them, versus their work with someone else. Timelines do not always permit such scrutiny, but it is my aim to deliver forecasting with these ranges to help paint a clear picture of the opportunity.

Pro-tip to clients and salespeople: do not promise something you cannot deliver, and always scrutinize forecasts – even the best forecasts rely on heavily flawed data. The reason clients get burned is either because they believe the overhyped and unrealistic projections OR because they put on pressure to achieve unrealistic results founded upon unrealistic expectations and unreliable numbers.

Keyword Research
For a forecast to be done properly (and as a reasonable demand on the agency or consultant), it really requires in-depth knowledge of your industry. Because of non-competes and other legal issues, it is rare that we will have an existing client doing the exact same thing you are in the exact same industry. To get the most out of forecasting you really need this knowledge, either by way of previous (recent!) work from a trusted source or by conducting the research yourself.

For this very reason I would strongly advise selling a piece of keyword research (perhaps as part of, or as a complement to, a site audit) before agreeing to undertake an in-depth and robust forecasting model. With this data you will be in a much better position to look through the SERPs, and you will already know where the biggest opportunities lie.

You Spent How Long?!
Somewhat separate to the above point that I would rather be improving a website and a company’s revenue than working on forecasting models, the point remains that doing forecasting well takes an excessive amount of time (even with existing models).

Some agencies have dedicated sales teams with support staff to produce these sorts of things, and treat that as overhead or new-business resource. However, at the agencies for whom I have worked, this modelling has been handled by some of our most experienced SEOs and/or dedicated statistics and business forecasting professionals.

To do this work properly (including the keyword research, if not already done) we are looking at anywhere from 20-60 hours of labour. This is a rather large range because the more often you have done this sort of forecasting, the more quickly you can work; Excel wizardry and specialised support help too, but many agencies don’t have them.

If basic forecasting is requested it can obviously be completed in much less time, but reaching a model in which you are confident (at least in terms of the keywords chosen, the regressions used, and so forth) is a huge piece of work. If this sort of forecasting is required (for the board or whoever else) that is fine, but it requires compensation – and it is no wonder that most forecasting predictions are thin and usually about as good as a finger in the air at predicting results.

Ever Changing Rankings
As we’ve all read ad nauseam, rankings don’t mean what they used to. We’ve got “local”, “seven packs”, “personalised search” and what seems like a hundred other varieties of results pages. There are increasingly fewer “traditional” ranking results, and this requires a more holistic approach to SEO as a whole, but it also truly limits our ability to predict and forecast results. If Google decides to drastically change the layout of the SERPs tomorrow, such that only paid advertisements show on page one, our assumption that we could get you to position six, with the pertaining traffic, is no longer valid.

At the end of the day the work is interesting because things are always changing and a good SEO can stay on top of these changes. But with massive changes to the landscape of the SERPs results may not come overnight and that term for which you moved from eighth position to sixth position may now be on page two!

Final Suggestions to Sales, Clients and People Involved in Pitches
If you are thinking about asking for forecasting or selling in forecasting, I would strongly advise that you either require complete cooperation (with access to all available data) and bill the time accordingly, or, better still, sell the initial work that is likely to improve the quality of the forecast as a one-off project and insist that all forecasting wait until extensive keyword research is done. Your SEO team will thank you and will likely deliver better results.

Please remember the following when considering forecasting:

  • SEO forecasting is not accurate due to data limitations and should not be relied upon.
  • SEO forecasting will be accurate to the extent the data available is accurate (translation: they may have to pay to run some PPC testing or provide you with existing PPC results).
  • The data will be even worse if you do not provide the agency or team with analytics access.
  • Good forecasting should be paid for. Our most recent forecasting (which included enlisting the help of our Data Science team) has taken up a minimum of 20 hours. The model we have built is sound, but this time would almost always be better spent consulting in our area of expertise.

The conclusion here is quite obvious, but just to be perfectly clear: SEO forecasting is unreliable by nature of the data. Intelligent estimates can be produced though doing so is immensely time consuming and a poor allocation of resource.

When you are looking for an agency judge them upon how you get on with them, the degree to which you trust them, what others have to say about them and their past results. Anyone can make promises they won’t fulfil, but this is not the way to pick an agency.

Rant Over – Tune in Technical SEOs!

I hate to write a post without providing at least one takeaway tip for SEOs because the bottom line is: clients are still going to request these forecasts. The tip, unfortunately, does very little to help with the accuracy of the data though it has yielded a considerable time saving. If you are looking for basic instruction to predict yearly traffic look here first.

Our Current Model & Other Techniques
Our model has come from countless hours and attempts at forecasting, and from the need to reduce time spent on these activities. The recent pitches in which we have participated have required a level of attention to detail we’ve not previously seen, and as a result we have teamed up with our Data Science team (please visit your resident Excel/stats expert if you have one on your team) to produce a spreadsheet and a few models that can be reused and fairly easily manipulated.

Previously we relied on manually going through the SERPs for as many as 50 keywords and working backwards from where we reasonably thought we could get a site ranking in twelve months’ time (based upon competitor research, intuition, and experience in the industry). In this model we had to manually enter the progress month on month, multiply this by the volume and CTR data we had available, apply a seasonal impact, and estimate traffic growth month on month for these terms. We then took the average uplift in traffic (across these terms, compared to year-on-year data) and applied this average percentage uplift to the rest of the data.
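In code, the per-keyword arithmetic of that manual model looks roughly like this (a minimal sketch; the CTR table reuses the AOL approximations from earlier, while the seasonal index and the hand-entered rank path are invented for illustration):

```python
CTR = {1: 0.4213, 2: 0.1190, 3: 0.0850, 4: 0.0606, 5: 0.0486,
       6: 0.0399, 7: 0.0337, 8: 0.0298, 9: 0.0283, 10: 0.0297}
SEASONAL = [1.0, 0.9, 1.1, 1.0, 1.0, 0.9, 0.8, 0.8, 1.0, 1.1, 1.2, 1.3]  # monthly index

def keyword_traffic(monthly_ranks, volume, organic_share=0.70):
    """Turn a hand-entered 12-month rank path into monthly visit estimates:
    volume x organic share x CTR at (rounded) rank x seasonal index.
    Ranks outside the top 10 are treated as earning no traffic."""
    return [round(volume * organic_share * CTR.get(round(r), 0.0) * SEASONAL[m])
            for m, r in enumerate(monthly_ranks)]

# Hand-entered monthly rank path for one sampled term (illustrative):
print(keyword_traffic([14, 12, 10, 9, 8, 7, 6, 6, 5, 5, 4, 4], volume=5_000))
```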

This wasn’t a perfect approach (due to the data issues I mention above), but we felt it was fairly reliable and offered a reasonable approximation; however, it was exceedingly time consuming!

There were some obvious problems with this model though so we have upped the ante a bit and I would strongly encourage others to look into ways to automate these processes and provide feedback or tips in the comments below.

Upgrade 1: Use Existing Data to Automate
Obviously the manual task of filling out month-by-month ranking improvements is not scalable in the slightest. However, it’s also not reasonable to just assume “oh, we’ll improve two positions each month until we reach #1”. In reality, positions jump around a bit, and there is usually a quick uplift at the beginning (more pronounced when you have a strong site creating new content) before things get more difficult – it’s not linear.

There are a number of ways to automate this, and I strongly suggest enlisting the help of someone with mathematics and Excel expertise. Once you have created regressions for the improvement anticipated over twelve months, you can automate the process such that you enter the current rank and the rank you expect in twelve months’ time, and Excel returns the intermediary monthly rankings showing how you get there. We achieved this automation using coefficients and logs based upon where we are now and where we expect to be in 12 months (the result of our consultation with the Data Science team), and used our previous manual method to sense-check the results.
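Our actual coefficients came out of that session with the Data Science team, but a log-shaped curve of the following general form reproduces the “quick early uplift, then diminishing returns” pattern (a hypothetical stand-in, not our model):

```python
import math

def rank_path(current_rank, target_rank, months=12):
    """Interpolate monthly rankings between the current position and the
    position expected after `months`, front-loading the improvement so
    early gains come quickly and later ones slowly (log-shaped curve)."""
    span = current_rank - target_rank
    return [round(current_rank - span * math.log(1 + m) / math.log(1 + months), 1)
            for m in range(months + 1)]

print(rank_path(28, 6))
# [28.0, 22.1, 18.6, ...] – roughly a quarter of the gain arrives in month one
```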

This will not perfectly mimic improvements, because there tends to be a fair bit of bouncing around and ranking improvements are usually inconsistent, though we have found the output to correlate well with the previous method. This has saved an extraordinary amount of time going forward and, once created, can be used over and over again.

In an ideal world these spreadsheets could be built to account for different types of SERPs (e.g. enter “3” into a cell for one type of curve, “2” for another), and I have no doubt that this can be taken a great deal further, but the level of analysis required to do so would truly demand a dedicated effort with a great deal of resource.

One of the areas we are still working to automate a bit better (and where I would love to see advice or tips from others) is accounting for the long tail in these predictions. This is an obvious area of difficulty, because a site with a lot of authority can produce new, meaningful content and usually rank quite easily for long-tail terms. However, forecasting this area remains extremely difficult due to the knock-on effect of other improvements across the site, as well as the lack of search volume data for long-tail queries.

One obvious issue with the regression model is that it weights all keywords the same, and it is unrealistic to try to optimise for a number of terms all at the exact same time (onsite and offsite). You can mitigate this by creating groups of keywords based upon priority level and staggering the start dates for improvement (reflecting more modest targets if a term won’t be targeted until month 9 – “where could we be in 3 months for this term?”), or by fitting improvement to more gradual curves; see the sketch below.
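Building on the `rank_path` sketch above, staggering might look like this (the group definitions and start months are hypothetical):

```python
def staggered_paths(groups, horizon=12):
    """Hold each keyword at its current rank until its group's work
    begins, then interpolate towards the target over the months left."""
    paths = {}
    for keyword, (current, target, start_month) in groups.items():
        flat = [float(current)] * start_month                     # before work starts
        improving = rank_path(current, target, horizon - start_month)
        paths[keyword] = flat + improving                         # horizon + 1 entries
    return paths

# Priority-one term worked from month 0; a lower-priority term is left
# until month 9 with a correspondingly modest three-month target.
print(staggered_paths({"hero term": (15, 3, 0), "tail term": (40, 28, 9)}))
```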

Finally, as I mentioned earlier, one nice filter we have added (for the cases where we are forced to rely on the AdWords data) is simply a column that lets you tweak the PPC/SEO split (dependent upon the number of ads served in that SERP, etc.).

Essentially, once you have set up this spreadsheet, it is considerably easier to adjust variables (by changing your estimated final ranking position) based upon an optimistic, pessimistic, or independent outcome, and to adjust for changes to ads served for a given term, priority levels, etc.
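In the sketches above, producing those outcomes amounts to nothing more than swapping the target rank per scenario (the targets here are invented):

```python
# Reusing rank_path from the sketch above; a term currently ranking 28th.
for scenario, target in {"pessimistic": 12, "independent": 8, "optimistic": 4}.items():
    print(scenario, rank_path(28, target))
```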

Pro-Tip: Providing different layers of predictions allows you to illustrate the importance of working WITH the client and that with their participation your results can be considerably stronger. Showing them the difference between where they would be without you, where they’ll be if they leave it up to you, and where they’ll be if they work with you and implement your suggestions can be a very persuasive argument and help ensure things get done!

Bonus Tip to deal with CTR Accuracy
As mentioned above, there are serious concerns around CTR accuracy. Although we discussed ways to potentially improve the accuracy of the search volume data (by running PPC campaigns for sample terms and by using existing analytics data where it is not sampled), this still doesn’t account for the inaccuracy of the CTR data (which dates from 2006).

As mentioned, the best possible way to deal with this would probably be to monitor the SERPs for quite some time and do your own testing based upon the type of result and the position in which you are ranking (using full referral strings or other, shadier methods).

However, one more accessible way to cope with this outdated CTR data was provided to me by Sarah Carling: an average-position model across a sample of keywords, which she describes in full in her comment below.
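A minimal sketch of how her model might be operationalised (the monthly gain, and anchoring the blended CTR at 0.5% for an average position of 25, are my assumptions based on her comment):

```python
def average_position_forecast(ranks, months=12, monthly_gain=1.5):
    """Average-position model: unranked terms (worse than 50) count as 51;
    project the blended average position improving each month, capped at 5,
    and map it to a blended CTR anchored at 0.5% for position 25, capped at 5%."""
    avg = sum(min(r, 51) for r in ranks) / len(ranks)
    forecast = []
    for month in range(1, months + 1):
        position = max(5.0, avg - monthly_gain * month)  # never better than 5
        ctr = min(0.05, 0.005 * 25.0 / position)         # 0.5% at position 25
        forecast.append((round(position, 1), round(ctr, 4)))
    return forecast
```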

I think this tip is probably most useful amongst rankings outside of the top 3, but it is an interesting alternative to relying upon old data.

CTR accuracy will still always be limited, and it is important to take into consideration that the type of results shown for a given query will have a significant impact. If you cannot rank in Google Places (as a result of not having a retail location in that area), it is worth being realistic about the number of organic listings available for a given term, and evaluating whether you are truly likely to jump onto the first page for these terms before promising the traffic from them. With the increasingly geo-targeted and geo-sensitive nature of results this may cause issues for a number of online businesses, but it’s important to be realistic!

Example: in the SERP above, ranking “5th” for the term “gym london” in the natural listings (i.e. not Places) is not going to give you anywhere near a 4.86% click-through anymore; it may not even get you onto the first page, which means we’re looking at more like <0.66% CTR.

The moral of the story here is that you have to be realistic and that good forecasting will take time (to give an accurate estimate of where you will rank you will probably have to look at each SERP manually to see the type of result displayed). There are ways to automate and some ways to improve data accuracy, but the task is still incredibly demanding.

We will continue to be asked to forecast, but the story remains the same: doing it well takes time, time is money, and the forecast is only as valuable as the data (not very). Either forecasting will become increasingly expensive or it will continue to yield unreliable results. The best advice I can give to fellow SEOs is to continue to push back on this, cite the above reasons and in the meantime to automate as much of the process as possible – oh, and make sure you learn Excel or hire someone who knows it inside and out.

Thanks very much for your thoughts below in the comments and please get in touch: Sam Crocker on Twitter.
AUTHORED BY:

Sam Crocker is SEO Associate Director at OMD UK. Sam focuses on increasing traffic and conversions for websites whilst always keeping his eye on a company’s bottom line.
  • http://www.receptional.com Dixon Jones

    Hi Sam,

    Nice article and I share your frustrations. I am glad I don’t have to do that so much these days at a personal level. But here’s my take…

    I tend to work with an idea that Sara Andersson and Bill Hunt have illustrated in the past, of an “opportunity matrix”. Instead of predicting every possible outcome, the idea is that you take the headline (or long tail) phrases, check the client’s prominence and then show the potential uplift in traffic if they ranked #1 for every phrase. Now of course this is unlikely to be achieved, but it shows the POTENTIAL traffic available from search. If the client knows average conversion rates, then this can equate to a real P&L number – at least in theory. Now the question becomes… is it better to spend resources chasing THAT traffic over (say) banners or offline or other marketing strategies.

    That’s the theory, anyway. In practice we end up giving day rates and seeing if they want to progress.

  • http://www.samuelcrocker.com Sam Crocker

    Hey Dixon,

    Thanks very much for your response. I like the idea of the opportunity matrix and have had a poke around as I’d not done a lot of reading around that so very much appreciate the response and the new suggestion.

    Out of curiosity, do you have any suggestions for folks about finding out just how long the long tail is? In the scenario you describe it seems like the sky is sort of the limit. At what point do you stop and what tools might you suggest folks use to help uncover those long tail opportunities (any time saving tips without having to do the full keyword research would be immensely helpful).

    In the end I think this is a good approach to take, though I’m not convinced it would be enough in the case of many of the clients who have been burned in the past. I agree that it’s all a bit “finger in the air” anyway, but it would seem like this opportunity matrix is very optimistic, and in the case that you were pitted against another agency (rather than pitching for a first-time project) I think it might be tough to rely on. There is still plenty of scope for it to be used, though (if in some of the less scrupulous cases), and it does seem much more practical in terms of time implications as well.

    Thanks Dixon,
    Sam

  • http://twitter.com/sarahcarling Sarah Carling

    Ultimately I agree, forecasting is a nightmare for SEO and barely worth the paper it’s written on, but I think it’s an area SEOs need to come to terms with. Every other marketing discipline has to forecast growth; if you’re in-house it should form part of your marketing plan, and if you’re agency-side, you have to accept that your contact may have to provide it as part of theirs.

    Just to clarify how I go about forecasting: I use an average position model, so for a new client I would take a sample of about 20 keywords and check rankings, with anything worse than position 50 counted as 51 for the sake of this average. So, assuming the average comes out at position 25 and the click-through rate is 0.5%, every average position increase would increase that CTR. There are a few caveats I build into that too: I never project an average position higher than 5, or a CTR higher than 5% (again, bear in mind these are averages across all positions); for growth above that I build in increases in potential volume from keyword expansion.

    I’ve been using variations on this model for a couple of years, and generally find it to be quick, easy to produce and conservative in terms of what is achievable.

    Also if you have previous data for the client, you can build this in, looking at their average position over time and CTR growth against the available volume to get a more accurate indication of CTR growth.

  • http://samuelcrocker.com Sam Crocker

    Thanks so much Sarah for sharing your methodology. Please accept my apologies if I misunderstood or misquoted you in any way. Your way does definitely sound much quicker and, if I’m honest, it sounds as good as any of the other standard approaches I’ve heard.

    It is how I have historically done forecasting as well, but ultimately a few potential clients of late have been expecting/wanting more. It is totally understandable why companies would want something reliable, but obviously this leads to a lot more work (unfortunately) and, as highlighted, the real stumbling block most times is the quality of the data available in the first place.

    Thanks again,
    Sam

  • http://richardfergie.com Richard Fergie

    I’d be interested in seeing a forecasting model for a TV ad campaign or similar. Has anyone got any experience of this?

    Sometimes I think that we are too hard on ourselves in digital because our tracking and data is not accurate. In fact, it is far more accurate than any other channel out there but we still seem to beat ourselves up about it.

  • http://bowdeni.com Bowdeni

    Awesome post, and one that’s very pragmatic. I think I’m in exactly the same boat as you, working for a big media agency on TCR. I’d say at least 50% of my time is spent forecasting at the moment, a combination of new and existing business.

    Reading through your post, I’ve encountered exactly the same issues in regard to CTR and accounting for the long tail. Generally our approach has been the same: knock out X keywords, and forecast their progress and the incremental clicks. It is archaic and, with it, frustrating. CTRs vary hugely, especially in the financial SERPs. Take a SERP such as credit cards: the percentage of clicks coming through organic is pretty horrible. In some cases I’ve only forecasted about 35% of clicks coming into organic, and that’s before even working out CTR.
    For the long tail, I generally apply a proxy. Forget the 80:20 or 70:30 rules: if you know the phrase-to-exact-match ratio, you have a good idea of how much the hero term accounts for against the longer tail. Generally CTR is better on the long tail, so factor that in… yeah, forecasting is a headache.

    Already having traffic data is a godsend – forecasting is then pretty straightforward, and I’m usually pretty accurate with it. Today I’ve forecasted the impact on clicks and QTS for a high-street brand. It’s got to the stage where I find assessing my forecast against actuals as exciting as adding incremental volume (!).
    Please do continue to blog your learnings. I for one find these some of the most interesting and practical articles around SEO.

  • http://patrickmoogan.com/blog/ Paddy Moogan

    Great post Sam and one which I know has come from your own experience and frustrations over recent weeks!

    I’d agree with Sarah that SEOs have got to get used to this kind of work, especially if you want to win the big contracts. This opinion also ties in with my belief that whilst SEO will never die, it will become more aligned with traditional marketing methods and expectations. Marketing people are expected to create forecasts, SEOs will need to do the same.

    I think that one thing SEOs can do right now is to look at keyword potential as an aggregate, rather than trying to predict traffic on an individual basis. For example, if you have a group of 100 keywords which all contain the words “plumber london”, then you look at the search landscape for these 100 words to get an idea of where the potential client sits in the market. I think by using a large set of data and getting an average position for the client, you are showing the size of the market whilst showing that there is a lot of work to do in order to get to the top.

    Be great to see you publishing more posts like this :)

  • http://www.level8ds.com Geoffrey

    Samuel – thanks for the opportunity to gain from your insights. More than anything, I wanted to compliment your soft reference to P. Diddy and Biggie Smalls in your first h3 tag – Mo’ Variables, Mo’ Problems. Thanks for keeping SEO slightly gangster.

  • http://www.seolair.com George Fischer

    Sam,

    This post is a breath of fresh air! Data-driven forecasting is key for anyone who is doing real SEO. It’s extremely important to have an NDA and gain access to all of the client’s current and past analytics. I’ve been using CTR data that is more recent than AOL’s, from Chitika: http://insights.chitika.com/2010/the-value-of-google-result-positioning/

    Also, when I’m doing forecasting I usually give multiple ranges, for example forecasting keywords in positions 1, 3, 5, and 7-10 (avg) so they can see the huge differences in revenue (or leads) based on achieving various SERP positions.

    CTR data gets even more complex based on the industry. For example, users searching for high price point products or products with longer buying cycles are usually going to click on more listings than a user shopping for clothing, etc…

    Great stuff –
    George

  • http://www.seoskeptic.com/ Aaron Bradley

    Thanks for this post Sam. If SEO is a black art, SEO forecasting is the black art of black arts. I have been required to provide many SEO forecasts in my life, and they were all predicated on a hopeless chain of suppositions that – as Sarah said – render the forecast “barely worth the paper” it is written on. An honest forecast would look like this:

    Here’s the forecast. If we can rank for these 50 keywords, and if the rankings for these keywords also elevate rankings for a related number of long-tail keywords that are not enumerated, and if our estimate of traffic generated from above-fold-rankings for these keywords are correct, and if this traffic converts at or near the historical average of similar traffic from search on our site, then I feel confident that this will result in a net revenue increase of X dollars for the company. Thank you, and please approve the SEO budget required to meet these objectives.

    An additional quick-and-dirty methodological suggestion, at least for ecommerce sites. Google Analytics will provide you with an average keyword value. I segment this into branded and non-branded keywords. However fanciful a traffic projection may be, this enables you to quickly make a bottom-line calculation: if my non-branded AKV is $0.50, and I project (ha) that SEO efforts will result in an additional 100,000 visits from search in a year, this translates into $50,000.00. I use this, actually, in conjunction with a version of your optimistic/independent/pessimistic scale. This can often be effective in demonstrating the basic value of SEO to a skeptical executive – “look, even if only the pessimistic forecast is correct, this will result in a revenue increase of X for an investment of only Y.” Of course, for non-ecommerce (store) sites this methodology can also be applied to other success metrics (even though it may require more number crunching): for example, taking the average revenue generated from a signup, calculating how many search visits it takes to generate a signup, and calculating the average search visit value from that.

    But I also agree with Sarah that it is a requirement – not only to win new contracts but, in-house, to win resources for SEO efforts. One of the reasons why companies are more willing to invest in PPC than SEO is because it’s a hell of a lot easier to calculate ROI for the former than the latter (and often, ironically, they’re willing to make that investment even when the ROI is demonstrably negative on PPC, with an emphasis on “demonstrably”).

    On a side note, I think it’s interesting that so many of the methodologies discussed here revolve around a predictive analysis of rankings. Yet at the same time over the past few years there’s a chorus of opinion that SEOs shouldn’t worry about rankings but only focus on traffic. I don’t think it’s possible to estimate traffic without taking rankings into account – including conducting a correlative analysis of keywords one does rank for to estimate likely CTRs and traffic volume for keywords you hope to rank for.

  • http://twitter.com/daveminchala Minchala

    Very nicely done here, sir. I also appreciate the top-notch contributions to this comment thread (especially Aaron’s).

    One concept I’m stuck on is that in order to land a big account, sometimes one must do a forecast. It seems that you and others in the comment thread take that stance, and I don’t disagree per se, but I wonder if you’d consider an alternative approach. Yes, most clients are used to receiving forecast deliverables before signing off on any marketing campaign, but then those same clients hired you to do something different than what they’re used to in online marketing. I’m always loath to tell clients how much more traffic or revenue the campaign will generate, for the same reasons you point out in this article. Sure, we can spend the time crafting a well-thought-out forecast that incorporates very safe and reasonable assumptions, intense formulas, and a matrix of data points that would make the nerds at IBM quiver in fear. It’s still fuzzy math based on data that we know is flawed (or downright wrong).

    My point of view is somewhat unique in that at work we can offer the client a diverse solution that incorporates SEO, PPC, display, retargeting, and email. So we have other opportunities to give them projections for other traffic/revenue streams based on better data. When we talk about SEO, we try to steer the conversation to where performance is now and what the website has to contribute in order for there to be a positive return on investment. We do gut-check the assessment of whether or not that required contribution is reasonable and achievable, but it is a less exhaustive proposition than a forecast.

    So what I’m driving at is that even if you are able to convince your client that any forecast you give them won’t be worth the work required to do it as well as possible, you do still have to provide value beyond a list of keywords. So I understand the pressure to agree to do a forecast. But perhaps, as the experts in the room, you can convince your prospect that you ought to leave the forecasts for channels that are more reliably quantified, and find a way to integrate the SEO strategy into their overall marketing and even branding strategies, so that the value becomes supplementary, measured over time, as opposed to target numbers that aren’t real. Again, I realise it might be easier to do this if you’re already going in there with CPM and CPA forecasts for other channels, but the core is still valid in my opinion.

    If everyone’s good at their job, then you’re set up to delight the client. If in the beginning you acquiesce to client demands for things that won’t help them, you’re already starting down a slippery slope. At the end of the engagement if things haven’t gone the way they were projected because of so much client involvement in an area that isn’t their core competency, it’ll still be seen as your fault. And deservedly so.

  • http://samuelcrocker.com/blog Sam Crocker

    I just wanted to return to thank everyone for their open sharing of techniques as well as the commiseration. Really pleased with such a response!

    @George – I have used Chitika data as well and some swear by it. I realise it is more up to date than the AOL data but it still doesn’t seem to represent the data we’re dealing with today – i.e. it doesn’t seem to fix the issue of dealing with one-box, universal, three-packs, etc. It is a bit of a nightmare still and I think to a certain extent consistency is the best one can hope for here (e.g. using the same data throughout). I would love to see someone run this over a larger data set *hint**hint* :)

    @Aaron – thank you so much, a quick and dirty data methodology is always welcome and ultimately is probably about as far as this sort of prediction warrants. Unfortunately, as has been the case in a recent set of pitches, it wasn’t enough for us and we had to go back to the drawing board to provide something a bit more robust. I’m not complaining (too much) because we now have a pretty neat spreadsheet where you can just drag and drop the volume data in and the rest is handled pretty much automatically, but it’s still not great. In any event, thank you very much for your thoughts and I think we’re definitely on a similar page.

    @Minchala – thanks again for the comment. Similar to my point with Aaron, I totally agree that as the experts in the room we should be able to convince the prospect that forecasts ought to be left off the table. The problem was (in a particular handful of cases) it was the case of very large organisations who needed to justify their spend to their CEO, board, etc. Unfortunately the answer: “the data is meaningless” didn’t seem to matter and the concern was much more about methodology than accuracy. I think it may well also have been a case of wanting to see how we deal with this, how our minds work, and what our capabilities are. Needless to say, I would not look forward to doing the activity again. I totally agree with your words of warning, but in the case of some of the bigger clients we work with some of the contract is always going to be based upon performance so I think the exercise (though a complete waste of time in some respects) is equally valuable to us as the pitching agency to see if what they are expecting is realistic against what we think we can deliver.

    Thanks again for all of the great comments and for sharing some of your methods. Really helpful stuff!

  • Curtis

    Great post Sam.

    I have to ask – does anyone have/know of a sample of one of these monsters out in the open?

  • http://www.yucca.co.uk Ingo Bousa

    Very interesting stuff, Sam. You know we talked about the quality of the basic data before, and it’s very easy to get frustrated when approaching an SEO ROI forecast based on rankings, traffic, CTRs, etc. The biggest problems are the CTR and traffic figures and some other variables that each play their little part without being easy to take into the equation. And then there is the main problem: how do I actually include direct competitor efforts over time in my equation? Very difficult. Plus the effect on the long tail. Plus additional traffic from ‘other search engines’ [if you’re only ‘doing Google’]. But clients love these kinds of forecast models. I’ve seen client-side forecast models before where marketing managers combined broad match AdWords data with #1-ranking AOL CTR figures, coming up with mind-boggling figures. Crazy…

    So what can we do? I do the forecasts as well as the data I have allows, and then I am extremely transparent and honest about the forecasted results. Especially the question “and how much time/money will you need now to rank us #1 for keyphrase xyz so we can get xyz amount of traffic and therefore xyz amount of sales?” can only be answered with: “The more money/time you give me the higher the chances, but with the given budget/time we need to reassess the campaign/budget/time after 3/6 months to see what we have achieved and then maybe press harder.”

    The temptation to manipulate forecast data to make a proposed strategy look more credible is most of the time almost overwhelming. Don’t do it.

    The problem with all forecasted data is that, in the end, someone will hold you responsible for your forecasts… and if you don’t want to look like an idiot, you’d better not overpromise – and you’d better not predict too cautiously either, as you then might not get the job.

    Good luck everybody ; )

  • http://www.usereffect.com Dr. Pete

    I’m way behind on my reading, but I’m glad I bookmarked this a couple of weeks ago – good stuff, Sam. It’s funny, but I think that, as much as SEOs over-promise, clients are too eager to go after confident-sounding promises vs. cautious honesty. I run into that all the time – I’m always hedging data, and people don’t respond well to that, in a sales environment. Businesses need to learn that people who are cautious also won’t take huge risks with their money.

  • Sam Crocker

    @Ingo – Thank you for your sharing as well. I know we’ve spoken about some of this business in the past but I appreciate your thoughts.

    @Curtis – I’m sorry to say that (at the moment) we’ve not got much to share with you here. I can’t say I’ve come across a live example of this but when our model is refined there is a chance we will be willing/able to find it (though more likely at a conference or something than in its full form). Obviously I would love to share it and if it were solely my work I would, but I don’t want to claim too much credit for all of our Data Science team’s hard work!

    @Dr. Pete- thanks very much for your comment and for coming back to read this a bit later. I couldn’t agree more with your thoughts and that is an absolute cracker of a quote: “People who are cautious [with their forecasting] won’t take huge risks with their money”. I will be sure to refer folks back to that if needed :)


  • Danny

    Your article is worthless. I can read rants like this on craigslist. Thank you for wasting 10 minutes of my life to tell me shit any SEO already knows.

    • http://www.basvandenbeld.com Bas van den Beld

      That’s too bad, Danny. Luckily, as you can see, there are others who do like the post. Not everybody has the same taste :). Could you tell me which posts on State of Search you did like, so we’ll know what kind of content we can provide you with?
