Google changes search results but is it for the better?

Yesterday word started filtering through from America of some fairly fundamental changes in the way Google display search results. This morning it hit the UK datacentres and we’re getting our first proper look at them. It looks like being a massive change!

Google intend mixing into the main body of the search results the kind of results we’ve occasionally seen for local search in separate additional boxes at the top of pages. However these aren’t additional any more but form part of the normal 10 results (for anyone using the default setup). That’s not all: it won’t just be local search; there may be book search, image search, video search, and others. Essentially Google have decided to fold all the additional search features that have been sitting mostly unused on the menu bar above the search box into the main search results.

This change could have a profound impact on the usefulness of results, and depending on what you’re looking for it could be either much better or much worse. So far there is no sign of a way to avoid these changes in your preferences. Nor is there any indication of how often such results will appear.

For many companies who depend on first page/top ten results for traffic it could be a disaster if these new results push them off that vital first page. One example I’ve seen shows a search where local results take up three of the ten places. If you were in positions 8-10 the chances are you’ve been pushed down to the second page.

Only time will tell if this proves popular with users or not and what the ramifications will be for the SEO industry.
If users like it then it will cement Google’s position as the pre-eminent search engine and may consign the struggling MSN/Live Search to oblivion. However if users find their searches producing poor or irrelevant results then we might see a swing back towards anyone who can provide a good alternative. In the meantime I expect to see the conspiracy theorists out in force saying that it’s all a scheme to increase Adwords spending by the folk whose rankings get pushed down.

Absence of competition in the search market

A few short years ago there was still a fair amount of competition for your questions and searches. Even when Google had become the major player there was still a good percentage of the market who used Yahoo and MSN, and a reasonable number who used Alltheweb, Hotbot, Alta Vista, and a few others, while the major ISPs and portals such as AOL, Netscape, Blueyonder etc. also had search results that were somewhat independent even if they were based on one of the major players.

Now however we are seeing almost total domination by Google and a lot of people are getting worried. A recent analysis of search engine share showed that both Yahoo and MSN/Live Search are down to single-figure percentage shares and the rest are nowhere. Google UK alone beats all except Yahoo and is not far behind them. Google Canada alone has almost as much share as MSN. Google Germany alone beats Ask.

Meanwhile, reading the webmaster and SEO forums shows a lot of people who are totally dependent on Google for their traffic and therefore their business survival, with no alternative strategy left to them. They are worried, and with good reason. Even discounting the ones who proclaim innocence of unethical methods but are shown to be riddled with them when you look at their sites, there are still many who find themselves suddenly dropping out of sight when Google make a change to their algorithm.

Indeed it happened to me when the last really major change took place about a year and a half ago – some terms that I had been clearly the most relevant site for dropped completely out of the results, yet once the BigDaddy update was resolved they suddenly reappeared as high as before and are still there. If I had been dependent on them for my income then their loss for three months could have had serious consequences. Yet throughout this time there was stony silence from Google other than the general platitudes about creating good content, and many webmasters and small businesses will have spent a lot of wasted time making unnecessary changes to their sites in the vain hope of making a difference.

Such a situation makes people jittery and it hasn’t been helped at all by the recent utterances about paid links and suggestions from Google that you should snitch on your competitors. If this had been confined to the more blatant attempts to manipulate PageRank then that would have been one thing, but in fact they appeared to be declaring that all paid links are bad unless they carry nofollow tags, and that has really got a lot of backs up.

Suddenly Google, the company that everyone liked because of their simple, clean and advert-free interface, and their “don’t be evil” ethos that harked back to a more innocent and somewhat hippy-inspired internet, is being cast in the role of villain and their monopoly seen as a threat.

In fact the unthinkable has happened – Microsoft, the company that everyone loved to hate for their own monopolistic practices, is being seen as a last hope for competition and is being urged to follow through on the rumoured merger with Yahoo.

Google need to be very careful if they are to retain/regain their previous blissful public relations position. Trotting out cuddly Matt Cutts and cute Vanessa may not be enough any more. With worry about the ever-increasing amount of private data being collected, people naturally think about Big Brother (and I don’t mean the atrocious reality TV programme). They might also start thinking about Lord Acton’s well-known line – “power corrupts, absolute power corrupts absolutely”. If that happens then instead of us worrying about how much Google trusts our websites, Google might have to start worrying about how much we trust them!

Keeping out the spiders

I spend most of my time telling people how best to allow spiders into a website; however, it can sometimes be just as important to know how to keep them out.

Why would you want your content kept out of the search results? There could be a number of reasons:

  • Unwanted duplication – if you provide printer-friendly versions of some pages for people who need a hard copy of your content without extraneous images or navigation buttons, then you don’t want those spidered as they would appear as duplicate content. The search engines don’t like showing their customers two versions of the same material, so make it easy for them to tell which one they should use by excluding the printer version.
  • If you keep any sensitive data on your site such as wholesale prices for trade customers then you want to make sure that isn’t made public.
  • If you have any large images or large numbers of moderately sized images then you may wish to avoid high bandwidth usage or high server loads by stopping search engines from indexing these images.
  • If you find bandwidth is being taken up by spiders which are no good to you – link checking spiders, or academic plagiarism spiders for instance – then you may want to keep them out altogether.
  • If you’re troubled by scraper spiders that simply come to steal your content, then you’ll want to shut them out too.

The remedies depend on the situation and your site structure. An individual page can be kept out of the indexes using a simple robots meta-tag set to noindex (the major search spiders obey this but more specialist ones may not). Larger sets of files in directories can be isolated using the robots.txt file, and specific spiders can be excluded from part or all of a site by the same method.
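
To make that concrete, here’s a minimal sketch of both approaches; the directory names and the spider name below are placeholders rather than anything you should copy verbatim. For a single page, the meta-tag goes in the head section of the page:

    <meta name="robots" content="noindex">

For whole directories, or to shut out one particular (well-behaved) spider, a robots.txt file at the site root does the job:

    # keep printer-friendly copies and the trade-price area out of the indexes
    User-agent: *
    Disallow: /print/
    Disallow: /trade/

    # exclude one named spider from the entire site
    User-agent: ExampleBot
    Disallow: /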

Rogue spiders can be more of a problem since by their very nature they usually don’t abide by robots.txt instructions, so you may need to detect them in your log files, identify their IP addresses, and then ban them in your server settings. This is a very specialist area and we recommend that you research it thoroughly before embarking on action. It’s all too easy to ban a wider range of people than intended, and it’s also easy to spend an enormous amount of time chasing down rogues who then change IP addresses and reappear.
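
If you do go down that road, and assuming your site runs on Apache, the ban itself is the easy part once the detective work is done. A minimal .htaccess sketch might look like this – the IP address and user-agent name are placeholders for whatever your own log files turn up:

    # block a rogue spider by IP address and/or user-agent string
    SetEnvIfNoCase User-Agent "ExampleScraperBot" rogue_spider
    Order Allow,Deny
    Allow from all
    Deny from 203.0.113.42
    Deny from env=rogue_spider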

5 Great Myths of SEO

Somewhere along the line some marketing executives and managers seem to have picked up a lot of “knowledge” about SEO – I get it recounted to me daily when taking enquiries – and it’s usually nonsense. Some of it is simply three or four years out of date; some of it was always nonsense. Astonishingly you still see some of it appearing in the poorer SEO forums as well. Here are five of the most common myths.

1. The more traffic you get the higher your rankings will be.

How would the search engines know? Have they been reading your log files? Maybe in a more developed web, where such information was easily obtainable, this might be possible, but not currently.

2. Using PPC advertising will improve your rankings

Much as the conspiracy theorists would like to think so, there’s no evidence to suggest that this is true. And it would be self-defeating for the search companies: you could use adverts to buy your way into a stronger position where higher visibility in the natural search results would attract links, and the site could then drop the adverts but maintain its rankings.

3. Putting all your keywords in image alt tags/noscript tags/comments will help your ranking

Once upon a time you could do some of this and get away with it. Now if you put different text in noscript tags from what is visible to the normal user it is regarded as a basic form of cloaking. Comment tag content is read but discarded unless there’s so much of it that it’s regarded as a spam attempt. Alt text is read and indexed but is given much less weight than was once the case, while keyword stuffing of the tags is considered spam.

4. Google can read Flash so it’s ok to build Flash-only sites.

While it’s true that the latest versions of Flash allow some very basic reading of content, there is no structure to it and it thus carries very little weight. Flash navigation is still a problem for spiders too, so your internal pages will not be spidered. A properly structured HTML site will always beat a Flash-only site under current conditions.

5. You should only swap links with sites which have high PageRank.

Toolbar PageRank, which is all you can see, is a waste of time. It’s seldom updated and is not really indicative of the real underlying figure. People have got hung up on PR as indicating the quality of a site – it doesn’t and never did. It’s all maths and indicates the chances of a random surfer landing on the site in question.
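
For anyone curious about the maths, the formula as published in the original Brin and Page paper (whatever Google run today will have moved well beyond it) is:

    PR(A) = (1 - d) + d\left(\frac{PR(T_1)}{C(T_1)} + \cdots + \frac{PR(T_n)}{C(T_n)}\right)

where T1 to Tn are the pages linking to page A, C(T) is the number of outbound links on page T, and d is a damping factor usually quoted as 0.85. Nothing in there measures quality – it’s link counting and probability, nothing more.
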
There are far more important things to think about when trying to get links from another site – the quality of its content, the relevance of the site to yours, whether the users of the two sites will find them useful, the future prospects of the other site. Forget PageRank.


Get your DOCTYPE right

I look at dozens of sites every week and one of the first things I do is check the code to see how they’ve been constructed and, amongst other things, whether the code is valid or not. What I see time and again is designers who don’t understand DOCTYPES and cause themselves and their clients problems. A lot of this stems from the defaults used in Dreamweaver and designers over-relying on it to do things for them.

Earlier versions used an HTML version 4 DOCTYPE but crucially omitted the address (the system identifier URL), meaning that the pages were rendered in quirks mode by most browsers. That meant lots of cross-browser incompatibilities and problems rendering modern CSS properly. It was a well-known problem amongst the better web design forums and newsgroups but many designers never seem to have picked up on it. It’s easy enough to set up the various correct types as snippets, and it’s even quite easy to change the default in one of the internal template files.
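
To illustrate: the kind of incomplete declaration described above is the first example below, while the full HTML 4.01 Transitional DOCTYPE – address included – is the second. Only the second keeps most browsers out of quirks mode.

    <!-- incomplete: no system identifier, so most browsers drop into quirks mode -->
    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">

    <!-- complete: the address triggers standards rendering -->
    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
        "http://www.w3.org/TR/html4/loose.dtd">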

Recent versions of Dreamweaver use an XHTML DOCTYPE by default, and amongst the more stringent requirements of this is that you must close any tag that doesn’t naturally have a closing tag. This includes meta tags and image tags, where you must add a trailing / before the closing > in order to write valid code. A good designer really should be aware of this but unfortunately a great many aren’t, and the result is a lot of sites with a lot of validation errors. Some search engines are more forgiving of validation errors than others, but why make it harder for them? All my experience says that cleaning up the code helps rankings; and of course valid code is the only real foundation for maintaining cross-browser compatibility and future-proofing your site.
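
For example, under an XHTML 1.0 Transitional DOCTYPE the empty elements need that trailing slash (the attribute values here are just placeholders):

    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
        "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

    <meta name="description" content="A short description of the page" />
    <img src="logo.gif" alt="Company logo" />
    <br />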

Incidentally, Dreamweaver is a good product and certainly the best of the visually capable web editors, but like any program it has to be used with knowledge and understanding of the language it’s creating. It generally produces decent code but if you push it in the wrong directions or use it inappropriately then it can produce rubbish just like any other program.

Learn about the different DOCTYPES and the reason for using them at the Web Design Group site, and the W3C Recommended list of DTDs, and make your web designs a lot more professional and your sites a lot more search engine friendly.

The changing landscape of search

It’s amazing how often you need to step back and reassess aspects of the search engine optimisation industry. Sometimes you catch yourself trotting out a piece of advice or information that you’ve been giving for ages and you suddenly realise that the last few projects have shown that those general guidelines have changed.

For one thing I really must stop talking about MSN – it’s been Live Search for months now and the search algorithm appears to be entirely different from the old MSN one if the results I see are anything to go by.

Fast Google results?

A good example of the changes that creep up on you is how easily, and in what order, you can get improved search results in the big three engines. We used to regularly tell potential clients that for a new site you could get MSN/Live Search and Yahoo results in a couple of months, but Google would seldom produce anything useful for about six months due to ageing filters. Now it seems things have moved on. A new jewellery design site I built from scratch has seen some excellent results inside three months of the initial files going live, and from this and others I’m working on I’m increasingly convinced that if you get the structure and content right from the start you can get Google to pay attention much more readily than was previously the case.

That’s the key though – building the site correctly, and having everything in place – no spider stoppers, good structure, good navigation, and good content well formatted. This shows how important it is to involve an SEO expert at the earliest stages, preferably before the design is finalised. It makes it a lot harder if the site is already built with structural problems in any of these areas; even if the SEO gets to it quite quickly and rectifies matters it may already have got off on the wrong foot with the search engines and that may set you back a few months.

And what about links, I hear the old hands asking? Well, my experience is that a small number of trusted and relevant links are far more useful for setting a new site on the right track than large numbers of lesser ones. Of course in a highly competitive market where your main rivals have massive numbers of backlinks you’ll need more, but quality wins over quantity every time, and by that I don’t mean PageRank either! And that’s something else that’s changed a lot – though you wouldn’t think so from the way some of the link requests that come in are worded.

Must go and do a search-and-replace on MSN in my reports…
