Lessons of the SEOMoz quiz

Earlier this week I wrote a short article on the Oyster Web site about the SEOMoz quiz. If by some chance you aren’t an SEO and don’t know about this, it contained 75 multiple choice questions relating to our dark and mysterious art. After we’d torn the answers apart we enjoyed reading Danny Sullivan and Vanessa Fox doing the same the following day. They seemed to agree that some of the questions were vague, or had multiple possible answers, or were just wrong. It was of course a great bit of link bait and bound to attract endless comment, but it was also a salutary reminder that there are very few hard and fast rules in our business, and what is regarded as a certainty by one expert may be only a possibility to another.

SEO as an empirical art…

A lot of what we do is based on our own experience of what has worked for us in the (usually recent) past, and on observation of what is or isn’t working on sites we are asked to look at. Google occasionally feed us the odd crumb of information amongst a sea of generalities; Yahoo and MSN/Live don’t even go that far. What worked last week may have less effect this week after the filters have been adjusted. That is why there is unlikely to be a useful academic course in SEO in the foreseeable future – there is no standards body, and any examination would be out of date before it was set. Whenever you read a book or article about optimisation, the first thing you do is check how old it is, to gauge how much of it you can still trust.

…with a commercial twist

Those of us at the sharp end of the business have to constantly rethink our opinions and check what is working, while still maintaining our faith in the fundamentals of structural web coding, good writing, and semantic markup. Those from a marketing background may tend towards one style of working, perhaps emphasising links and social networking solutions a little more, while those from a web design background may tend towards another, emphasising technical considerations.

Some theorists try to conduct experiments (if that isn’t a contradiction!) to determine what really works and what doesn’t, but that means having an isolated site with no commercial importance – you can’t do that with a client’s site. With so many variables involved it’s notoriously difficult to isolate definite effects, and an isolated site has a further problem: it is often a site’s very connectedness that causes some of the effects you want to measure.

This constant change of core knowledge is part of what makes SEO so interesting, but it has a high price in terms of research, and is part of the reason that good optimisers are hard to find and worth paying for.

To nofollow or not?

As a consequence of the paid link controversy and the question of whether you should use the nofollow attribute on paid outbound links, there is now another debate about whether webmasters should use it on internal links. On one side there is the original suggestion from Matt Cutts in his interview with Rand Fishkin of SEOMoz, and follow-ups from the likes of Dan Thies, while on the other side are people like Michael Martinez who find the idea of trying to manipulate PageRank by this method dubious at best and downright dangerous in most cases.

In the meantime there is also the question of using nofollow on blogs as an anti-spam measure, or not, depending on your perspective.

Nofollow on Blogs

Let’s take the easier one first – blogs. This is what nofollow was invented for: to stop people spamming the comments sections of blogs and forums with pointless messages containing embedded links back to their sites. Even then it was a bit controversial, and many argued that there were better ways of combating link spam. The debate has raged on and there is now a “dofollow” movement that advocates removing nofollow attributes from blogs (some blog software adds them automatically). Of course you then need other methods of defence against the robot blog-spammers, but this can be managed – Akismet does a pretty good job in WordPress blogs, and you can pre-moderate if you have low levels of comments or only allow people who have already had comments approved. I already make links within my posts carry full weight and I’m inclined to go the dofollow route on comments too – just need to decide on the best method of doing it.
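
For anyone wondering what the change actually amounts to at the markup level, here is a rough Python sketch of the decision a blog engine makes when rendering the link on a commenter’s name. The function and option names are invented for illustration – WordPress and its plugins do the equivalent in their own template code – but the rel="nofollow" attribute is the whole story.

```python
# Illustrative only: how a blog engine might decide whether to add
# rel="nofollow" to the link on a commenter's name. The function and
# setting names are made up; real platforms (WordPress etc.) do the
# same job in their own templating code.
from html import escape

def render_comment_link(author_name, author_url, dofollow_comments=False):
    """Build the anchor tag shown beside an approved comment."""
    rel = '' if dofollow_comments else ' rel="nofollow"'
    return '<a href="%s"%s>%s</a>' % (
        escape(author_url, quote=True), rel, escape(author_name))

# Default behaviour: the link carries nofollow, so it passes no weight.
print(render_comment_link('Alice', 'http://example.com/'))
# The "dofollow" route: drop the attribute for approved commenters.
print(render_comment_link('Alice', 'http://example.com/', dofollow_comments=True))
```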

Nofollow on internal links

On the thornier topic of using nofollow to manipulate PageRank within a site, there are a few arguments against it that I find persuasive.

Firstly, I don’t believe most people understand PageRank well enough to start fiddling with it (there’s a toy illustration at the end of this post of what they would actually be fiddling with). Anyone who reads the SEO forums will know that they are full of questions which show that people believe the most nonsensical rubbish about the subject and pick up on old wives’ tales at the slightest opportunity. The sort of mess that these folk could make of their sites with nofollow doesn’t bear thinking about.

Secondly, the kind of pages being suggested as candidates for downgrading – About Us and Contact Us pages, for instance – are perfectly useful pages that can often be made to rank well for important terms. The potential gains are far outweighed by the likely losses.

Thirdly, there are the problems this would cause for the usability of sites. Many websites use Google’s own search system to provide site-search facilities, and studies show that many users will use the search box to navigate a site. If you close off some of your pages with nofollow then those pages won’t show up in those search results. Why would you want that to happen? Golden rule – build sites to serve your users.

Fourthly, there is something very fundamental here which I think needs to be addressed. Google have always said that you should show users and search engines the same things. That’s why hidden text and cloaking are so disliked by them. If you show a user a link then you are telling them that it’s worth following. If you put a nofollow attribute on it then you are telling the search spiders that it isn’t worth following. Isn’t that rather dishonest? Isn’t that against the very rules that Google want us to adhere to? I think it is, and for that reason, as well as the others listed above, I won’t be using it.
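
For the record, here is the toy PageRank calculation I promised, showing what the sculpting advocates are actually proposing in mechanical terms. It uses the textbook PageRank formula on an invented four-page site – nothing like Google’s real system, so treat the numbers purely as illustration – and shows how nofollowing the navigation links to About and Contact removes those edges from the link graph and starves those pages of rank.

```python
# A toy PageRank calculation on a four-page site, purely to illustrate
# what "sculpting" with nofollow is supposed to achieve. This is the
# textbook formula on a made-up link graph, not Google's real system.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

# Normal navigation: the home page links to every section page.
full_nav = {
    'home':     ['products', 'about', 'contact'],
    'products': ['home'],
    'about':    ['home'],
    'contact':  ['home'],
}
# "Sculpted" navigation: home nofollows About and Contact, so those
# links simply vanish from the graph as far as this model is concerned.
sculpted = dict(full_nav, home=['products'])

print(pagerank(full_nav))   # About and Contact receive a share of rank
print(pagerank(sculpted))   # they are left with only the baseline (1-d)/N
```

Whatever is “gained” by the Products page in the second run comes entirely at the expense of pages which, as argued above, are usually worth ranking anyway.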

Bad Reputation

Why does SEO have a bad name?

No matter what we think of ourselves, beavering away in our own market sectors – whether we come from a web design background, a pure SEO background, or a marketing background – the fact is that we have a fairly poor reputation with some of the general public and, more importantly, with the very business people who need our services the most. The situation seems to be worse in the USA, but we should be aware of it here in the UK as well.

The other day I wrote an article on the Oyster Web site (Why Danny has to use the f-word) about the exasperation felt by SEO guru Danny Sullivan over persistent attacks by two US businessmen who repeatedly refer to SEOs as spammers and a problem for search. While we in Scotland have escaped a lot of the bad reputation, possibly due to our national image as solid, trustworthy and straightforward, there remains a suggestion that search engine optimisation is all tricks and snake oil, done by geeks fronted by used-car salesmen barely keeping one step ahead of the algorithms.

You sometimes get the impression that some SEO companies perpetuate this in order to make it seem that they have some secret knowledge you have to buy to be successful. At the other end of the scale there is the real nuisance of the cowboys who charge a few hundred quid and do a grab-it-and-run job, leaving the hapless client with a site that will drop out of the indexes as soon as the spiders spot the hidden text or cloaked content. Their bad practice inevitably tars the whole industry.

Improving our image

If SEO is to take its correct place within a mature web industry we have to get rid of this poor image and be seen as the skilled professionals that most of us are. Let’s say it loud and clear: SEO is the art of combining well-written quality content, good coding and programming, good design, good usability, and good marketing to make websites that are attractive to users, relevant to search engines, and deserving of high regard from both. It takes skills in all these areas and the ability to combine them successfully. No tricks, no cheating, just good professional work. (And very little swearing!)

Google dumping data?

I’ve mentioned a few times recently that I was seeing some pretty inconsistent search positions on a number of my sites. There now appears to be a potential explanation. Following a post from Michael Martinez of SEO Theory, a couple of the SEO forums are running threads which suggest that Google may be dumping data in a way that we’ve seen them do before, and that as a result some sites are seeing lower numbers of indexed pages.

This coincided with my seeing a drop in indexed pages on a couple of the sites I monitor, and also the strange appearance of a large number of erroneous link reports in Google Webmaster Tools for another site. That site is a bookshop, yet it was being reported as appearing in high positions for a number of totally unrelated topics. Looking at the external links report showed a big increase in the number of links, and that their origins and link text tallied with the strange search terms. Needless to say, the pages concerned had no links pointing at this site, nor would they ever have any reason to.

What I now suspect, though obviously this can only be conjecture at this stage, is that somewhere along the line the indexes have become corrupted and Google are having to rebuild them. We saw something similar a couple of years ago when they brought a load of new data centres online and something appeared to go wrong. It’s also possible that this has something to do with the dreaded Supplemental index – although that is pure speculation.

Whatever the reason, the only thing to do is sit tight, wait for it all to blow over, and see what the search results look like in a few weeks. Keep producing new content, as that appears to be getting indexed as normal, but don’t do anything drastic with the old content – it will probably come back into the indexes in good time.

Paid link controversy and thoughts about ranking algorithms

There’s been much reporting of a session at the SES conference in the USA which discussed Google’s stance that paid links should always carry a nofollow attribute or risk a penalty. On the expert panel it seems to have been largely a case of Matt Cutts versus the rest, and the audience seems to have been firmly aligned with the rest’s anti-Google stance.

This situation not only raises questions about how Google can be impartial when they stand to profit from the effects via AdWords, but also about the whole basis of the main Google algorithm.

A great deal has changed since the original algorithm appeared, based on the conceptually simple but mathematically complex idea of using links as a measure of value. There are now, of course, many factors at work in the algorithm, but two very important ones are the true PageRank of the sites which link to you and the anchor text used in those links. In very simplistic terms, the former indicates the strength of a link while the latter indicates its relevance to a particular set of search terms. One well-known SEO commentator, Michael Martinez of SEO Theory, thinks that Google should stop passing the anchor text, and that this would largely solve the paid text link problem. Now Martinez can be rather outspoken and seldom suffers fools (or amateurs pretending to be professionals), but he’s usually worth listening to even if you don’t always end up agreeing with him. So what would the effect be of dropping the link text from the algorithm? Indeed, how easy is it to predict the effects of any change to it?
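
To make those two factors a little more concrete, here is a deliberately crude sketch in Python in which each inbound link contributes the strength of the page it comes from multiplied by how well its anchor text matches the query. The model, the weighting and the numbers are entirely invented – nobody outside Google knows how the real signals are combined – but it shows why dropping anchor text would shift the balance back towards raw link strength.

```python
# A deliberately crude model of the two factors described above: each
# inbound link contributes (strength of the linking page) multiplied by
# (how well its anchor text matches the query). Everything here is
# invented for illustration; it is not Google's algorithm.

def link_score(inbound_links, query, count_anchor_text=True):
    """inbound_links is a list of (linking_page_strength, anchor_text) pairs."""
    query_words = set(query.lower().split())
    total = 0.0
    for strength, anchor in inbound_links:
        if count_anchor_text:
            anchor_words = set(anchor.lower().split())
            relevance = len(query_words & anchor_words) / len(query_words)
        else:
            relevance = 1.0   # Martinez's suggestion: ignore anchor text entirely
        total += strength * relevance
    return total

links = [(5.0, 'second hand books'),   # modest page, highly relevant anchor
         (8.0, 'click here')]          # strong page, meaningless anchor
print(link_score(links, 'second hand books'))                            # 5.0
print(link_score(links, 'second hand books', count_anchor_text=False))   # 13.0
```

With the anchors switched off, the strong-but-irrelevant link dominates, which is exactly the temptation discussed below.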

The algorithm now appears to be so complex that it’s almost like observing a natural-world system – and some pretty unpredictable things can happen to those. For instance, if you decide to play god and wipe out an irritating insect, then the creatures that feed on it will be affected. Some may themselves be drastically reduced in number while others may be able to switch to another food source, which in turn affects another species. Movements may occur in populations which then allow other movements and changes in other predators and prey. Similar effects can sometimes be seen in search engines – filters designed to get rid of spammy sites can end up affecting perfectly good sites – remember Big Daddy?

So let’s say we remove link text. That would strip out a fair degree of relevance, leaving mainly the subject matter of the two linked pages to determine it. If that happens then it will no longer matter as much that links come from related sites, so existing poor links may increase in value and webmasters will be more tempted to follow the “get loads of links from anywhere” route. Another possibility is that PageRank would become relatively more important again, so we might see a return to the tedious reciprocal link requests that insist on a minimum PageRank for the link back, as well as those PR calculators for working out how to concentrate PR on pages by manipulating the navigation (usually making the site unusable in the process).

This whole area needs discussion and thought from people with different perspectives if we are to have any chance of coming to a sensible conclusion. Anyone else got any thoughts on the likely effects? Do you agree with Martinez, or do you think his solution is too simplistic to work? Would Google listen to us anyway, or are we at their mercy, with their experts playing god with the search results? What sort of search engines do we want to see in the future?

Personal search – what is it for?

We’ve had a few weeks now to get used to Universal Search, although the effects still seem minimal on this side of the Atlantic, but for those who stay logged in to their Google accounts (unlike me), what about Personal search? Initially hailed by a host of bandwagon jumpers, it is now beginning to attract suggestions that not everyone is so enamoured any more. The big question is: what is it for?

Supposedly it learns what you search for and which results you find useful, so it can tailor future results to your preferences. But hang on: if you were happy with what you got the first time you searched on a phrase, why are you searching again? Surely to try different sites, because you didn’t get everything you wanted from the first ones. If you are presented with more of the same, you’re likely to find those less useful too. To me it seems as if the only people who will find this useful are those who use search engines as a universal interface – doing the same searches time and again because they can’t remember, or don’t bookmark, the sites that they like. Yet these could very well be the people who aren’t savvy enough to take out accounts in the first place.
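
Nobody outside Google knows how the personalisation is actually done, but the basic idea presumably amounts to something like the following Python sketch, in which results from sites you have clicked on before get a boost. Every name, score and weighting here is invented; it is only meant to show why “more of the same” is the likely outcome.

```python
# Pure conjecture about what "tailoring results to your preferences"
# implies: results from sites you have clicked before get nudged up the
# list. All names and numbers are invented for illustration.
from collections import Counter

def personalise(results, click_history, boost=0.5):
    """results: list of (url, base_score); click_history: urls previously clicked."""
    clicks_per_site = Counter(url.split('/')[2] for url in click_history)
    def adjusted(item):
        url, score = item
        return score + boost * clicks_per_site[url.split('/')[2]]
    return sorted(results, key=adjusted, reverse=True)

history = ['http://familiar-shop.example/books', 'http://familiar-shop.example/maps']
results = [('http://new-site.example/books', 2.0),
           ('http://familiar-shop.example/books', 1.8)]
# The site you already know leapfrogs the fresh result you were looking for.
print(personalise(results, history))
```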

We could end up with a situation where the experts and the net-illiterate are the ones who don’t use personal search, while the ones in the middle do. But only the experts will get the results they want, because they know when to turn off their accounts. It all seems a bit Alice Through the Looking Glass, doesn’t it?

In the meantime I’m keeping my account signed out except when I’m using Analytics or Webmaster Tools.