Greatest wastes of time in SEO #2 – superfluous meta-tags

To continue the theme of the first “wastes of time” post, here are some more activities that still get thousands of pointless column-inches in SEO forums and that people still spend countless hours on. Meta-tags are a prime source of misinformation simply because you can pretty much create anything you like and claim it’s useful. Some of the worst examples are the following:

The Revisit tag

<meta name="revisit-after" content="15 days">

Supposed to be an “instruction” to the spiders about how often to return. Totally useless. Was never implemented by the major engines and is never likely to be.

The Meta-Title tag

<meta name="title" content="">

Not to be confused with the Title tag. Not supported by any of the major search engines. Complete waste of SEO time.
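For comparison, the tag that actually matters is the ordinary title element in the page head – something along these lines (the wording is purely illustrative):

<title>Blue Widgets – prices, sizes and delivery | Acme Ltd</title>

That one is used by every major engine, both for ranking and for the clickable text on the results page; the meta version above does nothing.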

The Index Follow tag

<meta name="robots" content="index, follow">

The waste of time here is the time it takes to get it right. Every page is treated as “index, follow” by default (assuming your robots.txt file hasn’t already blocked it), so what’s the point of including an instruction to do what the spider is going to do anyway? But I’ve seen so many mis-formed versions of this tag, some so bad that they could potentially disrupt the spidering, that if you’re going to use it you need to spend time getting it right. Don’t bother – just take it out and get a smaller page as well.
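For the record, the only time a robots meta tag is worth writing is when you want the opposite of the default – to keep a page out of the index – and then it pays to copy the correct form exactly (this is just an illustrative snippet, not something to paste in blindly):

<meta name="robots" content="noindex, nofollow">

Leave out the name attribute, or paste in curly quotes from a word processor, and you end up with the sort of mangled versions I mentioned above.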

The Dublin Core tags

These are a set of tags created originally for library classification (the name refers to Dublin, Ohio, not Dublin, Ireland). They form an extensive categorisation system that is of considerable value in the right context. You may decide you want them if your site has the sort of content that merits their use, but be aware that they take a lot of time to set up correctly. I’m definitely not saying they are useless, but as far as SEO is concerned the time taken to add them is wasted time, because again none of the major search engines pay any attention to them. It could be argued that they should, but that’s another matter.
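For anyone curious, a Dublin Core block in a page head looks something like this (the values are made up for illustration):

<link rel="schema.DC" href="http://purl.org/dc/elements/1.1/">
<meta name="DC.title" content="An example page title">
<meta name="DC.creator" content="A. N. Author">
<meta name="DC.subject" content="Example subject classification">

Multiply that by a dozen or more elements on every page and you can see where the time goes – time the major engines will simply ignore.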

The Meta-Keywords tag

And now we move to that most talked about tag in amateur circles, the one that people spend hours debating and constructing. Just last week Matt Cutts had to reiterate what any remotely professional SEO knew years ago – that Google don’t use it for ranking calculations and haven’t for many years. And yet blogs and forums all over the world have reacted as if it was major news! Come on guys, where have you been for the last five years?

Once upon a time in a far distant galaxy this was a useful tag – not vital, but certainly useful. Last century it could get you a ranking almost on its own. I can remember even in 2002, when I was working for Bigmouthmedia, carefully arranging keywords in different orders so that they automatically created phrases within the keywords tag. By the following year it was already becoming a very secondary activity, and it soon became pretty pointless. Worse, many sites would put ALL their keywords into every page instead of targeting them at the pages that were relevant to them. All that does is get you flagged as a potential spammer. In recent years only Yahoo has used the keywords tag in any way at all for ranking. I’ll sometimes stick a few main terms into the tag for a page, more for the sake of completeness or because a client expects it, but often I’ll leave it out altogether – there are far more important things to spend time on.
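For completeness, the tag itself looks like this (the terms are placeholders):

<meta name="keywords" content="blue widgets, widget prices, buy blue widgets">

A handful of terms that are actually relevant to that particular page is the most it ever deserves – and even that is optional.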

How mature is Bing and is it worth optimising for it?

While in the US they seem to be raving about the rise in search market share that Bing has achieved, even before the extent of the Yahoo results takeover becomes clear, here in Europe it’s making little impact on Google’s dominance. In fact the figures I see on the major market-share analyst sites don’t seem to bear out the hype headlines anyway, which seem to focus more on percentage growth rates than on actual share, but that’s beside the point I want to make right now.

I monitor a number of sites for search results and the impression I’m getting from these results is that Bing is far from stable. I’m seeing results yo-yoing wildly and the page being ranked changing frequently.

Yesterday I saw a strange result concerning two pages dealing with particular product names. The result for product A dropped from near the top ranking to the sixth page, because page A was no longer ranking; instead page B was the one being ranked. Page B has no mention whatever of product A apart from the main menu entry. In another instance the ranking for product C moved from page C to the home page, which again has no mention of product C.

Does Bing have a supplemental index?

So is this because the Bing boffins are still tweaking their algorithm, or is it maybe because they still haven’t finished a full round of indexing? Or could it perhaps be another reason?

This is the sort of behaviour we might expect from Google’s supplemental index when a page that has borderline PageRank drops from the main index to the supplemental index – though with Google you would expect the secondary ranking page to have at least some mention of the product being searched for!

Could it be that Microsoft have ambitions to index the same sort of volumes of data that Google aim at, and are running into the same problem of having to exclude some pages from their main index in order to speed up search times? Now this is pure speculation on my part, but if it’s true then what signals are they using to decide which pages are main-index material and which are relegated to whatever their equivalent of the supplemental index is (assuming they don’t just tip them out of consideration altogether)? Does Bing have something like PageRank or is there some other indicator that they use?

The reason this might be important is that I’ve seen a number of articles (admittedly American) suggesting that we should all start optimising our sites for Bing. Now aside from the fact that we should optimise primarily for users, not search engines, how would we go about this? If they do have a means of separating out the pages they take notice of we’d need to have a good idea of what it was before we could be sure that the sort of scenario I’ve mentioned above can be avoided. We know very little about the Bing algorithm and we don’t know yet how Yahoo will use their results when they start displaying them. In the UK it probably isn’t that important right now but if something were to cause a swing away from Google then it could get to be. And in the US it could affect 20% of the market.

Webmaster Tools – new feature lets you exclude parameters

Last night while going over some clients’ accounts in Google Analytics and Webmaster Tools I noticed a new feature which hadn’t been there a few days ago. Under Site configuration / Settings there is now a section called Parameter Handling where you can set whether or not Google should ignore any of the parameters that are contained in your URL strings.

Please be very careful with these unless you’re absolutely certain you know what you’re doing. While the tool may well be useful to avoid pages appearing under multiple URLs, a mistake here could easily drop pages from their index that you want in there. There’s a description of the new tool from ex-Google employee Vanessa Fox published on the Search Engine Land site yesterday. Recommended reading. I’ll add any other discussions I see on it that look useful.
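To give an idea of what the tool is aimed at, imagine the same product page reachable under several URLs (these are made-up examples):

http://www.example.com/product.php?id=123
http://www.example.com/product.php?id=123&sessionid=a1b2c3
http://www.example.com/product.php?id=123&sort=price

Telling Google to ignore sessionid and sort would consolidate those into one listing; tell it to ignore id by mistake and the product pages themselves could disappear from the index – exactly the sort of slip the warning above is about.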

Adobe heading for the Big League

The news that Adobe are buying the analytics company Omniture for $1.8 billion (http://www.omniture.com/press/777) suggests that Adobe are looking to move firmly into the markets that are controlled by Google and Microsoft. This seems to be a continuation of the direction they embarked on with the acquisition of Macromedia. At that time some commentators assumed it was merely a fit with their web design and graphics software, but this new move suggests that it was really the Flash format they wanted to control, and that they’re moving much more towards tracking and interaction analysis for online marketing.

I have to say I’ve never been a big fan of Adobe – back in the 1990s I developed a dislike of their interfaces in PageMaker and Photoshop, and I’d much rather they’d left Dreamweaver and Fireworks alone. As an SEO I naturally have a jaundiced eye for Flash, and as a user who values my privacy I really dislike Flash cookies, which seem to be everywhere – since I installed a Firefox plug-in to delete them I’m finding them all over the place. I can’t really blame Adobe for those though – they just acquired something that was already there. However, the combination of Flash and the sort of tracking capability in Omniture (I haven’t used it myself, but I’m told by a friend who has that it’s far more detailed than Google Analytics) promises to give Adobe a lot of clout in the targeted-advertising and marketing-profile business, if I’m reading the situation correctly.

Greatest wastes of time in SEO – #1 Submission

It never ceases to amaze me how many old, discredited techniques and actions are still being recommended by people who purport to be experts. So this is the first in an occasional series of pointless things that are not worth the time it takes to think about them, let alone to do them. This one was inspired by a thread on a well-known business networking site.

Submitting to search engines – don’t do it!!

I can barely remember when it was last even a vaguely good idea to do this. You see there are these things called spiders that go out and find sites every minute of every day – yeah radical stuff eh – and all you need to do is give them one link to a new site or page and they’ll be on it like a rash the next time they crawl the page with the link on it. You’ll be doing this anyway if you have the slightest intention of anyone ever seeing the page so it’s automatic. Hell, Google will sometimes go looking for a site that’s not linked to yet (it may still be in development) because it knows about the domain registration. It’s hard to stop them at times. If you can’t get a new page into the index you’ve got a lot more to worry about than submissions.

I haven’t submitted a site to Google this decade, or Yahoo, and only ever one to MSN when for some reason it was having trouble finding it. And most of my new sites get found within a couple of days at the very most. One was indexed in half an hour because I linked to it from a blog.

And if you don’t stamp out the practice by firmly saying it’s unnecessary, the danger is that your clients may go and submit multiple times themselves and end up being seen as submission spammers.

So the next time some idiot tries to sell you submission services tell him you’d rather speak to an SEO!

Localised fallout from bank search

I often see ads and articles for local SEO that concentrate on getting into the Google Local results that appear for geographically-related searches – usually after the third organic result. While it’s a useful area to cover I do tend to take it with a little pinch of salt because I’ve seen some odd results from it and there’s no clear relationship between the strength of a site and whether you can get one of the coveted 10 places. But I always thought there was at least some degree of checking of who registers a business at an address – clunky though the send-a-postcard system is. But what are we to make of the result below for the search term “bank of scotland”? Note item H.

A curious Google local result

Whatever you think of the banks and the current financial crisis, it seems a bit strange that a site apparently devoted to exposing banking fraud should appear as a local business – and, when you follow the “more” link, with the address of the Bank of Scotland in St Andrew Square. You’d think there’d be enough screening to avoid such entries.