Information Architecture: Common pitfalls (1/2)
Editorial comment: We welcome Bastian Grimm to our blogging team! His first post is published in two parts: part 1 today, part 2 tomorrow.
“A properly optimized and therefore search engine friendly information architecture is probably one of the most underestimated on-page optimization concepts in search.” – That being said, welcome to my very first post at State of Search!
I think one of the reasons – maybe even the number one reason – why websites fail to have real success online is poor information architecture. You see it happen all the time: basic rules such as “one piece of content, one static URL” are ignored, and guess what: most of these websites do badly in search! Since I will be attending Search Engine Strategies 2010 in San Francisco and covering it for State of Search, and the agenda includes a dedicated session on this very topic (really looking forward to that one!), I thought it might be nice to revisit some elements of good IA and give an overview of what is important to consider. There are a lot of great resources out there – if you know more, please feel free to mention them in the comments.
But before we really dive in, let’s have a quick look at a possible definition of IA. While researching the phrase itself I came across an organization called the Information Architecture Institute (honestly, I hadn’t really heard of them before). They have a pretty solid definition of what IA means: “The art and science of organizing and labeling web sites, intranets, online communities and software to support usability and findability.” – So let’s see how this correlates with the topics we’re covering in this post. And a quick side note: yes, I’m mainly covering the search point of view and will certainly be simplifying things a lot – in a real-world process you would most likely work together with a user-experience designer, etc.
Domains & sub-domains
As Adam Audette states, the domain name itself is one of the most critical assets you have, because users will see it in the search results (SERPs), hopefully talk about it or even print it on paper. It really makes a difference whether you have a short, memorable domain name people can actually type in, or something that no one can remember and everyone misspells all the time (and maybe even worse: one with two, three or even more hyphens in it).
In general, if you go after a broad range of keywords, it’s very unlikely that you’ll be able to include all of them in your domain name – in this scenario it makes sense to pick something that is simply short, easy to remember and, generally speaking, “brand-able”. However, with the brand update Google rolled out a while ago, it pays even more to have short-head, generic keywords in the domain name – so if you’re really focused on a single niche, try to get a short domain that includes your most important keyword. This will very likely earn you additional scoring points when it comes down to ranking for that specific keyword or phrase. And in some cases you will see a better click-through rate in the search results.
When you actually start using the domain, make sure you decide on one canonical version: either with or without www – but not both! And don’t forget to properly 301-redirect the version you’re not going to use.
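As a minimal sketch of such a redirect, assuming an Apache server with mod_rewrite enabled and the www version chosen as canonical (example.com is just a placeholder domain), an .htaccess file could look like this:

```apache
# Permanently (301) redirect all non-www requests to the www version.
# Replace "example.com" with your own domain.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```

If you prefer the non-www version instead, simply swap the host names around – the important part is that exactly one version answers with a 200 and the other always 301-redirects to it.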
Usage of sub-domains
Finally, let’s also look at sub-domains for a second: in my opinion, whether it makes sense to use them is really a case-by-case decision. Most of the time it’s not really worth it, considering that search engines treat a sub-domain – more or less – like a new domain (and you will most likely start out with zero link equity). This means you’d have to repeat all your link building efforts for the sub-domain(s) as well. That is a lot of work, and even if search engines pass some of the domain’s trust on to the sub-domain – which, in my opinion, is up for speculation – it will still not be as powerful as the domain itself. Therefore, most of the time sub-directories will do the trick.
On the other hand, it can be worth the effort if, for example, you run a generic top-level domain like domain.com and want to go international: say es.domain.com (for Spain) and/or de.domain.com (for Germany). In this case you want to build country-specific backlinks anyway, and sub-domains can be a workable solution (although country-code top-level domains like domain.es or domain.de might still work slightly better).
URL structure
In general, one could say that a proper URL structure reflects how things are organized within your domain. So what does that mean? To explain it in a way everyone understands, Scott Willoughby wrote a post for SEOmoz in which he draws an analogy between a filing cabinet and proper information architecture. Interesting approach, let’s have a look:
“[…] Let’s first look at how a standard filing cabinet is organized: You have the individual cabinet, drawers in that cabinet, folders within the drawers, files within the folders, and documents within the files. There is only one copy of any individual document, and it’s located in a particular spot. There is a very clear navigation path to get to it.
If I want to find the January 2008 Invoice for a client (Amalgamated Glove & Spat), I will go to the cabinet, and perhaps open the drawer marked Client Accounts, find the Amalgamated Glove & Spat folder, look for the Invoices file, and then flip through the documents until I come to the January 2008 invoice I’m looking for [...].”
I think this is a good explanation of what IA really means from an SEO perspective: it basically comes down to having one easy-to-understand, nicely labeled route pointing to every piece of content on the domain. And “easy to understand” means easy for everyone: users as well as search engines. There are a couple of things to do to achieve this, but before we move on, there is something you should do upfront (if the domain is already filled with content): reduce the clutter!
What does that mean? Generally speaking, it’s about eliminating pages that help neither users nor search engines. For example (if you run an ecommerce site), product pages that co-exist in two different categories under different URLs but are exactly the same. Or long articles split across several pages with different URLs. The same goes for print versions with a dedicated URL and similar things. It does no good, probably causes keyword cannibalization (which means you’re targeting the same keyword on more than one landing page), and that will harm your rankings. Get rid of it, or at least make sure you exclude it from the search engines (by using the robots meta tag or the rel canonical tag).
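To sketch the two options just mentioned (the URLs below are placeholders, not real pages): a duplicate page can either point search engines to the preferred version via rel canonical, or be kept out of the index entirely via the robots meta tag. Both snippets belong in the `<head>` of the duplicate page:

```html
<!-- Option 1 (e.g. a product listed in two categories): point search
     engines to the preferred URL; "/category-a/product" is a placeholder -->
<link rel="canonical" href="http://www.example.com/category-a/product" />

<!-- Option 2 (e.g. a print version): keep this page out of the index
     while still allowing its links to be followed -->
<meta name="robots" content="noindex, follow" />
```

As a rule of thumb, use the canonical tag when the duplicate should consolidate its ranking signals into the preferred URL, and noindex when the page simply has no business appearing in the search results at all.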
This is part one of two for this post, a great opening post by Bastian Grimm – read the next part tomorrow on State of Search!