It has already been two weeks since the A4Uexpo in Munich, but I still want to cover the “Panda session”, because it was one of the best sessions at this conference. Ralf Schwoebel of the download portal Tradebit, one of the victims of Panda, opened the session by explaining what changes he made to get back on track. After that, Dixon Jones, Marketing Manager of Majestic SEO, showed the results of some research Majestic SEO did.
Ralf is one of the victims of Panda. His open download marketplace Tradebit lost about 15% of its traffic after the Panda update was rolled out. But he managed to increase his revenue by 20%! In his session he explained how he succeeded.
Ralf states that the Panda update is about quality, and that quality is machine-readable. He points to the “23 questions from Amit Singhal” as a way to understand the thinking behind the algorithm.
But according to Ralf, SEO for Google is still based on machine calculations – the whole game just got a bit more complicated. New factors come into play.
| Old School | Panda School |
| --- | --- |
| on-page (title, keyword density etc.) | user behaviour (bounce rate, scrolling) |
| on-site (internal links) | semantic analysis |
| off-site (link profile) | off-site (link profile) |
| penalties for thin, duplicate content etc. | harder penalties for thin, duplicate content etc. |
| … and more | … and more |
Some things to think about:
– The position of your keywords: Google knows whether they are above or below the fold.
– Google knows semantics: use Google’s advanced search and play around with the reading levels to see whether your competition ranks at different reading levels too.
– Remove Analytics from thin or slow pages; Google knows everything about your site.
– Link profile: don’t use only money-keyword links, but also brand links, “read more” links etc. Real link profiles are a wild mix with a lot of brand links.
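The reading-level point above can be made concrete. Google’s advanced search groups results by reading level; a common proxy for reading level is the Flesch reading-ease formula. The sketch below is my own illustration (the formula is standard, but nothing here is Ralf’s or Google’s actual method), using a rough syllable-counting heuristic:

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count runs of vowels, drop one for a trailing 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text: str) -> float:
    """Flesch reading ease: higher scores mean easier-to-read text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Hypothetical page texts, for illustration only.
my_page = "Google groups results by reading level. Short words keep it basic."
competitor = "Sophisticated terminological constructions invariably complicate comprehension."
print(flesch_reading_ease(my_page) > flesch_reading_ease(competitor))
```

Comparing your score against the pages that outrank you is one way to check whether you and your competition sit at different reading levels.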
To “recover” from Panda, this is what Tradebit did:
1. User behaviour: they added trust seals. Users who trust a site stay on it longer. They also added Facebook likes and recommendations.
2. Link profile: they changed keyword links to brand links.
3. They added ratings and removed negative files from the catalog
This resulted in a 20% increase in revenue.
After this session, Dixon Jones talked about “how to methodically analyze algo changes like Panda”. You can find his presentation here: http://blog.majesticseo.com/general/analyzing-effects-like-panda/
To analyze Panda, Dixon took the winners-and-losers lists from Searchmetrics and Sistrix. He built a custom Google search engine to compare the results of winners and losers for the search query “machine code”. Strangely enough, winners also ranked with duplicate-content pages. So Google is not simply de-duplicating.
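To see what “duplicate content” means in practice, near-duplicate pages are often detected by comparing word shingles (overlapping word windows) with Jaccard similarity. This is a textbook technique, not Dixon’s or Google’s actual method, and the page texts are invented for illustration:

```python
def shingles(text: str, k: int = 3) -> set:
    """Return the set of k-word shingles (overlapping word windows)."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical page snippets, for illustration only.
page_a = "machine code is the lowest level representation of a program"
page_b = "machine code is the lowest level form of a computer program"
page_c = "pandas eat bamboo in the mountains of central china"

# Near-duplicates share many shingles; unrelated pages share almost none.
print(round(jaccard(shingles(page_a), shingles(page_b)), 2))
print(round(jaccard(shingles(page_a), shingles(page_c)), 2))
```

Dixon’s observation is that even pages scoring high on such a similarity measure still ranked for the winners, which suggests Panda is doing more than filtering duplicates.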
In the next step, Dixon compiled his own winners-and-losers data, using Searchmetrics, Sistrix and Alexa data. He too refers to the 23 questions of Amit Singhal.
Dixon did some research and found Stanford University papers describing a study of the factors that matter to people when they use a website, with each factor assigned a weight. This amounts to a mathematical opinion of quality. Dixon tries to find correlations between these elements and his winners-and-losers lists.
Dixon uses a tool called ConAn Content Analyzer to analyze just the relevant data of a page.
Majestic is still testing a lot to find out exactly what Panda is, but so far they have found a significant correlation between sites that were hit by Panda and the percentage of pages on those sites that were already in the supplemental index.
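A correlation like the one Majestic reports could be measured with the standard Pearson coefficient. The numbers below are invented purely to illustrate the computation; Majestic has not published its raw data:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data, for illustration only: per-site share of pages in the
# supplemental index vs. visibility loss after Panda.
supplemental_pct = [5, 10, 20, 40, 60, 80]
visibility_loss_pct = [2, 4, 9, 18, 30, 45]

print(round(pearson(supplemental_pct, visibility_loss_pct), 2))
```

A coefficient near 1 would mean that the more of a site’s pages sat in the supplemental index, the harder the site was hit, which is what Majestic’s finding suggests.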
So Dixon went back to analyze the “machine code” pages and discovered that the winners had some good deep links into the individual internal pages. The losers had no deep links, or very poor ones, to those pages.
Majestic is still assessing how to analyse the “Stanford factors” to see how a sitewide Panda penalty can be avoided, but for now Dixon says that if you’re in the “Panda bucket”, the pages with poor deep-link equity are the ones that lose out.