Ok, this post is aimed at SEOs rather than a general audience, and is something I’ve been mulling over for quite some time without getting round to posting about it. I got the nudge to finally do so from a post on Search Engine Land – PageRank Sculpting Leaves NoFollowed Tags Behind.
In that article the author essentially discusses PageRank Sculpting. I should say immediately that in an article-length piece, as opposed to something more thesis-length, there is a limit to how much can be discussed, so we both have to keep things simpler than we might prefer. We also have to set aside, for the purposes of discussion, the fact that since you can’t measure real PageRank (as opposed to the hopelessly inaccurate Toolbar PageRank) it’s extremely difficult to know whether your efforts at sculpting have worked at all.
That said, as I mentioned in my comments to that article there are some fundamental assumptions and questions inherent in trying to decide on a real-world strategy based on this type of sculpting by navigation.
All links are equal but some are more equal than others
The first assumption is that all links are created equal – but are they? There has been much written about the idea that editorial links, that is, links surrounded by body text, count for more than navigational links in the menu system. We don’t know how this works of course – it could, for instance, be part of the relevancy aspect of the algorithm that boosts the body links, or it could be part of a PageRank calculation that reduces the value of menu links.
Shooting yourself in the foot(er)
The same applies to footers and other items that can be thought of as boilerplate content. There has been some suggestion that such boilerplate content gets “ignored” for content purposes. Does that mean the links in it also get ignored or downgraded? And again, is that to do with context or with PageRank value?
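Neither mechanism is documented anywhere, but the PageRank side of the idea – navigational and boilerplate links passing less value than editorial ones – can be sketched as a weighted power iteration. Everything here is hypothetical: the four-page site, the 1.0/0.5/0.1 weights, and the assumption that any downgrading happens on the link-value side at all.

```python
# Weighted-link PageRank sketch. The weights (editorial 1.0, menu 0.5,
# footer 0.1) are invented for illustration -- nobody outside Google
# knows whether or how such downgrading actually happens.

def weighted_pagerank(links, damping=0.85, iterations=100):
    """links: dict mapping page -> list of (target, weight) pairs."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            total_w = sum(w for _, w in outs)
            for q, w in outs:
                # Each link passes a share proportional to its weight.
                new[q] += damping * rank[p] * w / total_w
        rank = new
    return rank

EDITORIAL, MENU, FOOTER = 1.0, 0.5, 0.1  # hypothetical weights

site = {
    "home": [("article", EDITORIAL), ("about", MENU), ("sitemap", FOOTER)],
    "article": [("home", MENU), ("about", EDITORIAL)],
    "about": [("home", MENU)],
    "sitemap": [("home", FOOTER), ("article", FOOTER), ("about", FOOTER)],
}
ranks = weighted_pagerank(site)
```

In this toy run the sitemap, reached only by a footer-weighted link, ends up with far less PageRank than the pages reached by editorial or menu links – which is exactly the kind of effect the speculation above would predict if it were true.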
Navigating – by the seat of your pants?
Then there is the thorny question of sitemaps. Not, in this case, the Google/XML kind (they are thorny as well, but for entirely different reasons which I’ll leave for another post) but the old-fashioned HTML sitemap. Many SEOs (and many amateurs) all too easily trot out the mantras of “you must have a sitemap” and “it helps the spiders find your pages”. But what are the implications of a sitemap for how PageRank flows around your site? This has always seemed to me a crucial consideration, and one that is usually ignored.
I well recall spending ages playing about with and discussing the results of the tool at
when it first appeared back around 2002. Again it’s a very theoretical tool and we don’t know how much the PageRank implementation has changed since then so I’m wary of reading too much into the results these days, but a bit of experimentation with it suggests some pretty major changes in PageRank distribution by using different methods of navigation.
Some people may suggest that a smaller site should use a “get anywhere from anywhere” type of navigation, and there is often much benefit to the users in that approach. Others may suggest a more siloed method where different aspects of the site are largely kept separate from each other. Testing these different scenarios using the tool suggests very different PageRank distributions – sometimes concentrating most of it on the home page and sometimes distributing it almost evenly throughout the site. I’ll leave you to play around with it and see these effects for yourselves, but it may explain some of the odd effects with certain sites where the home page seems to be the one that ranks for everything and the deeper pages don’t seem to get the rankings their content deserves.
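You can get a feel for this without any external tool by running the classic power-iteration form of PageRank over a toy site. The five-page layouts and the 0.85 damping factor here are illustrative assumptions, not a claim about how any real engine scores these structures:

```python
# Minimal PageRank power iteration over two hypothetical five-page sites:
# a fully interlinked "get anywhere from anywhere" mesh, and a siloed
# layout where two sections only link within themselves and back to home.

def pagerank(links, damping=0.85, iterations=100):
    """links: dict mapping page -> list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            share = damping * rank[p] / len(outs)
            for q in outs:
                new[q] += share
        rank = new
    return rank

# "Get anywhere from anywhere": every page links to every other page.
pages = ["home", "a", "b", "c", "d"]
mesh = {p: [q for q in pages if q != p] for p in pages}

# Siloed: home links to two section heads (a, c); each silo keeps to itself.
silo = {
    "home": ["a", "c"],
    "a": ["b", "home"],
    "b": ["a"],
    "c": ["d", "home"],
    "d": ["c"],
}

mesh_ranks = pagerank(mesh)
silo_ranks = pagerank(silo)
```

The mesh spreads PageRank perfectly evenly, while the siloed layout concentrates it on the home page and section heads at the expense of the deeper pages – the same broad pattern the tool shows.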
My point here is that if you simulate a site and observe the PageRank distribution, then add a simulated sitemap – even one linked only from the home page – the distribution changes a great deal. Whether this is matched in a real-world situation is another matter, but it bears some thought before blindly following the advice to add a sitemap.
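The same toy simulation makes the point. Here is a small hierarchical site before and after an HTML sitemap page is added, linked only from the home page; the layout is invented, and this only shows how the raw PageRank formula redistributes value, not what a real engine would do:

```python
# Sketch of the effect of adding an HTML sitemap page to a hypothetical
# strict hierarchy. The sitemap is linked only from the home page but
# links out to every page on the site.

def pagerank(links, damping=0.85, iterations=100):
    """links: dict mapping page -> list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            share = damping * rank[p] / len(outs)
            for q in outs:
                new[q] += share
        rank = new
    return rank

# Strict hierarchy: home -> sections -> products, children link back up.
site = {
    "home": ["sec1", "sec2"],
    "sec1": ["home", "prod1"],
    "sec2": ["home", "prod2"],
    "prod1": ["sec1"],
    "prod2": ["sec2"],
}
before = pagerank(site)

# Same site plus a sitemap linked only from the home page.
with_map = {p: list(outs) for p, outs in site.items()}
with_map["home"] = ["sec1", "sec2", "sitemap"]
with_map["sitemap"] = ["home", "sec1", "sec2", "prod1", "prod2"]
after = pagerank(with_map)
```

In this toy run the home page’s absolute share drops once the sitemap is added, while the product pages’ share relative to the site average rises – though whether a real engine behaves this way is exactly the open question.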
And of course if there is a negative bias against footer links and your sitemap is linked from there… Complicated isn’t it?
And it doesn’t stop there. Generally it’s the higher level pages that target the more generic terms and thus need more PageRank to succeed in a competitive market. But generic rankings often don’t convert well, so does that suggest that we should be distributing PageRank more evenly in order to get more of it to the lower level pages where the better-converting specific terms are being targeted by product pages that will actually produce profits?
Good job we don’t have to factor clients with their own agendas into this mix, or it would get really complicated… Oh! We do.
Now I don’t have the answers to these questions, and maybe no-one outside Google does, but I think they are important questions to consider and discuss – which is the point of this post. Have you had experience of trying to sculpt PageRank and seen unusual ranking results? Have you any pages with loads of good content that seem dead in ranking and traffic terms despite seemingly normal navigation to them? Have you any ideas about the questions I’ve posed here? Maybe we can all gain some insights into a very complex area, but at the very least I hope I’ve encouraged you to think about it.