If you are a small business owner on a small budget, the chances are you are learning everything you can to set up and promote your business. You are the CEO, the CMO, the bookkeeper and the post room, and chances are you make a cracking cuppa. Not every startup is VC funded, not every startup has a budget for everything, and not every business can afford an agency to support every aspect of its marketing efforts. If this is you then perhaps you are hoping to get your online presence ranking in the search engines in order to attract customers through organic search. In some business sectors this can be very difficult and very competitive, but in other cases, for example local businesses in less populated areas, you have a decent chance at success providing you have some rudiments in place and an understanding of what search engines are evaluating when they attempt to rank your pages.
Those of us who work agency-side as full-time search professionals come across a number of common mistakes and oversights time and time again, whether auditing a website or talking to friends and contacts about their small business sites. Even if you don’t have budget for an SEO campaign, there are a number of things to check and to be aware of if your website is nowhere to be seen in the rankings.
First, a little disclaimer: search algorithms are rather complex, evaluating over two hundred factors, some of which matter more than others. If one of the below “mistakes” affects you, fixing it doesn’t necessarily mean your site will rank overnight; rather, these are very common mistakes that are often found on new websites.
You may or may not know that there is a file that allows you to instruct search engine spiders (or bots) as to how to crawl your site. It is called a robots.txt file and here’s an example.
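A minimal robots.txt might look like the following (the paths here are placeholders for illustration; your own file would list whatever areas of your site you don’t want crawled):

```
# Apply these rules to every crawler
User-agent: *

# Block crawling of the admin and staging areas (example paths only)
Disallow: /admin/
Disallow: /staging/
```

The file simply lives at the root of your site, e.g. http://www.yourwebsite.com/robots.txt, where crawlers look for it before fetching anything else.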
You (or the person that built your website) can address instructions in the file to specific user-agents, such as Googlebot or Bingbot, though in the example above the use of * means all of them. Using the Disallow: directive you can restrict a user-agent from crawling the pages or folders that you specify on your website.
The State of Search website has a URL, or address, which I can type into a browser in order to access it directly. The format of the address may differ from site to site, but when you buy a domain for your web business you are purchasing the domain name, specific to a TLD or ccTLD.
A TLD is a top-level domain, such as .com, and a ccTLD is a country-code top-level domain, such as .fr (for France). The domain part is the name that should be akin to your business name. However, as has been common practice since the web became popular, many websites precede the domain with what is actually a sub-domain: www. You don’t need the sub-domain to make your website work, and this is where problems can arise. Publishing your website to both www.yourwebsite.com and http://yourwebsite.com, if left untreated, means you have essentially duplicated your website. Search engines do not like duplicated content across URLs, as they may struggle to work out which version is intended or preferred. If your site has been around for a while and has gathered some links that point to it, stronger links pointing to one version rather than the other may tip the balance as to which version is treated as preferred.
Rather than publishing your website to both versions of the address, it is generally best practice to pick a preferred URL, e.g. www.yourwebsite.com, and use a 301 redirect instruction so that the http://yourwebsite.com version simply redirects to the preferred version.
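As a sketch of how that redirect is commonly set up, here is an example .htaccess fragment, assuming your site runs on an Apache server with mod_rewrite enabled (replace yourwebsite.com with your own domain; other servers use different configuration):

```
# .htaccess — permanently redirect the bare domain to the www version
# (assumes Apache with mod_rewrite enabled; domain is a placeholder)
RewriteEngine On
RewriteCond %{HTTP_HOST} ^yourwebsite\.com$ [NC]
RewriteRule ^(.*)$ http://www.yourwebsite.com/$1 [R=301,L]
```

The R=301 flag is what makes the redirect permanent, which is the signal search engines use to consolidate the two versions.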
The reason you would want to do this is so that from the get-go the search engines do not have to use other signals to work out why you have a duplicated website and which version is preferred. Perhaps more importantly any equity that you accrue in links is funnelled towards the preferred version of the URL. Even if somebody links to you using the non-preferred version the equity afforded to that URL is passed on via the 301 redirect instruction. A bit like when you move house and get your post redirected.
Check what is happening with your website by typing in both versions of the URL. For example, with State of Search, if I type http://stateofsearch.com into my browser it will redirect me to www.stateofsearch.com without my doing anything. You can easily check if the correct redirection is in place using a free tool like this Redirect Checker.
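The logic such a checker applies can be sketched in a few lines: given the HTTP status code and Location header that the non-preferred URL answers with, it verifies a permanent (301) redirect to the preferred version. (This is an illustrative sketch only; a real tool would fetch the URL over HTTP, and the domain below is a placeholder.)

```python
# Sketch of a redirect check: the non-preferred URL should answer with
# a 301 (permanent) redirect whose Location header is the preferred URL.
def redirect_ok(status_code, location, preferred="http://www.yourwebsite.com/"):
    # 302 is a temporary redirect and passes signals differently,
    # so we insist on a 301 here.
    return status_code == 301 and location == preferred

print(redirect_ok(301, "http://www.yourwebsite.com/"))  # True: correct setup
print(redirect_ok(302, "http://www.yourwebsite.com/"))  # False: temporary redirect
print(redirect_ok(200, None))                           # False: no redirect at all
```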
Another thing to be very careful about, which I have seen on many occasions, is different content being published to each version of a URL: for example, a holding page at http://yourwebsite.com and the finished website at www.yourwebsite.com. I have even seen entirely different websites on the two versions. This can really inhibit your progress and potential in organic search.
I asked some of my peers at other SEO agencies which common mistakes they see with new websites, and Hannah Smith of Distilled pointed out a great one that is more to do with a loss of traffic data than rankings: when your tracking code (e.g. Google Analytics) has been mistakenly removed. Perhaps you started out with some traffic showing in your Google Analytics account and then suddenly it all disappears. This can send people looking in all sorts of places for explanations such as Google penalties or a website hack. Sometimes the simplest explanation is the answer, and the first place to start is to check that the code has not simply been removed, usually through human error.
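A quick sanity check is simply to look at your page source for the Analytics script reference and your property ID. As a minimal sketch (the ID "UA-XXXXXX-1" is a placeholder; substitute your own):

```python
# Check whether a page's HTML still contains the Google Analytics
# snippet: the script loaded from google-analytics.com plus your
# property ID (placeholder ID used here for illustration).
def has_analytics(html, property_id="UA-XXXXXX-1"):
    return "google-analytics.com" in html and property_id in html

page_with_code = (
    '<script src="http://www.google-analytics.com/ga.js"></script>'
    '<script>_gaq.push(["_setAccount", "UA-XXXXXX-1"]);</script>'
)
page_without_code = "<html><body>Hello</body></html>"

print(has_analytics(page_with_code))     # True: snippet present
print(has_analytics(page_without_code))  # False: snippet missing
```

If the snippet is missing from the live page, that alone explains the flatlined traffic graph, with no penalty or hack required.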
For a site to rank broadly and well it has to be crawlable. Search engines use crawlers, as mentioned in the first point above, to discover the pages on your site and to crawl and index the content on them. If your site is hard to access, or contains a lot of image and video content, there is a lot more work to be done to make that content accessible to bots. In fact, in a timely post on Search Engine Watch today, author Danny Goodwin cites Google’s head of webspam Matt Cutts as saying that this is the number one problem he sees with websites that don’t rank.
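For example, a crawler cannot interpret the pixels of an image or the frames of a video, but it can read the text in your markup. A minimal illustration (the filenames here are hypothetical):

```html
<!-- Bots can't "see" the image, but they can read its alt text -->
<img src="blue-widget.jpg" alt="Blue widget with chrome finish">

<!-- For video, a visible text transcript gives crawlers
     indexable content to work with -->
<p>Transcript: in this video we show how to assemble the blue widget.</p>
```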
Another of my fellow writers and peers, Barry Adams of Pierce Communications, says that the biggest mistake is one of perception. All too often Adams gets the question: “Why isn’t my site ranking? It has been live for over a week!” Simply having a functioning website does not mean it is going to rank. A large part of getting your pages crawled, indexed and ranking well depends on how many links point to them. Much like reviews for a bed & breakfast, or references on a LinkedIn profile, good links don’t happen overnight unless you have a huge marketing budget or you are the source of a scandal! Do beware of any service that offers you X links in X weeks. If the offer seems too good to be true then it almost certainly is, and poor quality links can actually do your site far more harm than good.
If you suspect that a lack of links to your youthful site is inhibiting your progress, then we have a wealth of content on this site, and from our bloggers, on link building the right way.