It seems like everyone is practising a little search-engine optimisation (SEO) these days. While you might have the basics covered, do you know where SEO is heading? I’ve been optimising websites for search engines since the late 1990s, which has given me the fortunate opportunity to watch search-engine algorithms evolve. Based on that experience, this article offers some insight into where SEO is moving.

The evolution of SEO has been an interesting one. The term was first coined in 1997, but the intentions behind it have been practised since the early days of web search; one could argue as far back as Jumpstation or even Archie in the early 1990s.

SEO is defined by Wikipedia as: “The process of improving the volume and quality of traffic to a website from search engines via ‘natural’ (‘organic’ or ‘algorithmic’) search results for targeted keywords.” When performed successfully, SEO can tap into the tremendous number of searches performed on a daily basis and deliver a considerable stream of traffic and revenue to a website owner.

It is this potential for huge reward that has forced search engines to grow smarter over the years. The early search engines were so primitive that the first phase in the life of SEO began with on-page optimisation. Quite simply, webmasters could tweak the content and various elements of their web pages or documents and, in doing so, be relatively confident of ranking well on their chosen keywords and phrases. I say words and phrases because 15 years ago one-word searches were commonplace; these days, as users have evolved, the average query runs to three or four words.
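To give a flavour of just how crude those early on-page algorithms were, here is a toy sketch in Python. The scoring weights and the sample page are entirely my own inventions for illustration; no real engine’s formula is implied.

```python
# A toy illustration of early on-page ranking: score a page purely on how
# often the query terms appear in its title and body text. The weights and
# the example page are made up for demonstration only.

def on_page_score(query: str, title: str, body: str) -> float:
    terms = query.lower().split()
    title_words = title.lower().split()
    body_words = body.lower().split()
    score = 0.0
    for term in terms:
        score += 3.0 * title_words.count(term)  # assume title matches count more
        score += 1.0 * body_words.count(term)
    return score

# A webmaster of the era simply repeated the target phrase until the score rose.
print(on_page_score("holiday deals",
                    "Cheap holiday deals",
                    "Great holiday deals. More holiday deals than anyone else."))
```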

This kind of search optimisation really took off with the introduction of Excite and Yahoo! in 1994. I remember trying to access these sites with my 14400 modem without much joy. Instead I went back to the bulletin boards and, of course, the beloved IRC. By 1995, smart webmasters were starting to make some money as consumers began to turn to search engines in droves. There were so few pages on the web at that time that ranking well was incredibly easy. A common tactic was to optimise pages of your site for high volume phrases (often sex-related … I’ll say no more) and then hope the (male) visitor to your site bought whatever you were selling. These days that kind of untargeted approach to search marketing would be unthinkable.

By the late 1990s, the volume of sites on the web was starting to pick up, and the user experience from search was beginning to buckle as ranking on on-page factors alone showed its inherent weaknesses when sorting large numbers of documents. Fortunately, good ol’ Sergey and Larry came along with the idea that the authority, trust and credibility of a page could be determined by looking at the pages that link to it. It was a new take on a system familiar to academics: your work commands greater respect the more people cite it in theirs.
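To make the citation idea concrete, here is a minimal, illustrative sketch of a PageRank-style calculation in Python. The tiny link graph and the damping factor are assumptions for the example only; Google’s production algorithm is, of course, far more sophisticated.

```python
# Minimal sketch of link-based ranking: a page's score is built from the
# scores of the pages linking to it. The link graph below is invented for
# illustration; this is not Google's production algorithm.

links = {
    "a.com": ["b.com", "c.com"],  # a.com links out to b.com and c.com
    "b.com": ["c.com"],
    "c.com": ["a.com"],
}

damping = 0.85
scores = {page: 1.0 / len(links) for page in links}

for _ in range(20):  # iterate until the scores settle
    new_scores = {}
    for page in links:
        # Each inbound link passes on a share of the linking page's own score,
        # split across that page's outbound links.
        inbound = sum(scores[src] / len(outs)
                      for src, outs in links.items() if page in outs)
        new_scores[page] = (1 - damping) / len(links) + damping * inbound
    scores = new_scores

# Pages with more (and better-endorsed) inbound links float to the top.
for page, score in sorted(scores.items(), key=lambda item: -item[1]):
    print(page, round(score, 3))
```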

So, Google was born in 1997 or 1998, depending on who’s telling the story. (I remember it because it was around the time I got my first 56k V.90 modem; it was blistering!) Its simple, search-only interface and quality, reliable and consistent results soon attracted the attention of internet users. Since its inception, Google has dominated search and grown stronger by the day. Over the past 10 years, the other major engines have caught up with Google’s innovation, and now the number of links to your page (and, more importantly, the relevance and link authority of the pages linking to you) is the primary driver of search-engine rankings for any competitive phrase.

There is little sign of Google’s dominance relenting, but it is not a company to rest on its laurels. It knows that if someone else can deliver a better user experience, it will lose its customers. Indeed, every search engine faces the same imperative: deliver consistent, relevant and reliable results, or users will go elsewhere for their search needs.

Page and Brin sum it up in their pre-Google paper The Anatomy of a Large-Scale Hypertextual Web Search Engine: “The most important measure of a search engine is the quality of its search results.”

It is at this point that I start to speculate about the future of SEO based on the logic above. This logic tells me that if search engines have to deliver quality results consistently, then they either need to innovate and constantly better their algorithms or outsource this function to one of those innovators (as Yahoo! has just done with Google). If they don’t do this, the quality of their results will in time stagnate as their competitors improve. Perhaps that is why Excite, once one of the most powerful brands on the internet, is no longer alive today …

If search engines need to innovate to improve, one needs to ask: Are links the best way of determining the real value of a website to a particular search-engine user? I will agree that, on a technical level, links can demonstrate a website’s relative importance to a search engine. However, I believe they are merely a proxy for the real determinant of a website’s quality: the actual usage data of the website itself. If a search engine could see how users were using a site, how long they spent on it and whether they came back, it would know exactly which sites were worthwhile and could guide its trusting users accordingly.
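Purely as a thought experiment, a usage-based quality signal might blend a handful of such measures. Everything in the sketch below (the fields, the weights and the five-minute ceiling) is an assumption of mine for illustration; no search engine has published such a formula.

```python
# Hypothetical sketch of a usage-based quality signal, blending time on site
# with how often visitors return. All field names, weights and thresholds are
# illustrative assumptions, not anything a search engine has disclosed.

from dataclasses import dataclass

@dataclass
class SiteUsage:
    avg_time_on_site: float   # average seconds per visit
    return_visit_rate: float  # share of visitors who come back, 0.0 to 1.0

def usage_quality(site: SiteUsage) -> float:
    # Normalise dwell time against an assumed five-minute ceiling, then blend
    # it with the return-visit rate using made-up weights.
    engagement = min(site.avg_time_on_site / 300.0, 1.0)
    return 0.6 * engagement + 0.4 * site.return_visit_rate

sticky_site = SiteUsage(avg_time_on_site=240, return_visit_rate=0.45)
drive_by_site = SiteUsage(avg_time_on_site=15, return_visit_rate=0.02)
print(usage_quality(sticky_site), usage_quality(drive_by_site))
```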

As an example of the value of genuine user data, one only has to look at the success of Hitwise. In its dominant UK and Australian markets, it places software on the servers of ISPs, which gives Hitwise a detailed picture of what the customers of those ISPs are doing online. It then combines this data with credit-card and other information and sells an expensive but exceptionally valuable insight into online consumer behaviour. Very few players are able to provide such detailed and accurate data, and even Hitwise is only relevant in the regions in which it has widespread ISP relationships. So how can search engines ever hope to gain an understanding of user behaviour across the entire internet?

Yes, one might consider perfect access to user data a utopian ideal, but is it? Google already has two particular factors in play: the back button on your browser and your interactions with its results pages. For example, if you click on the first result of a search query and then click the back button two seconds later, chances are the page wasn’t very relevant to your query. If thousands of users repeat the same pattern, it sends a message to Google that the page should not be ranked first, and it is highly likely to be dropped in the rankings.
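A crude sketch of that quick back-click signal might look something like this. The log format, the two-second threshold and the demotion rule are my own assumptions for the example; Google has never documented how, or whether, it uses this data.

```python
# Illustrative sketch of the quick back-click ("pogo-sticking") signal:
# results that many users abandon within a couple of seconds are flagged as
# demotion candidates. The log format and thresholds are assumptions made
# for this example, not a documented Google mechanism.

from collections import defaultdict

# Each entry: (query, clicked_url, seconds before returning to the results
# page, or None if the user never came back).
click_log = [
    ("cheap flights", "example-a.com", 2),
    ("cheap flights", "example-a.com", 1),
    ("cheap flights", "example-b.com", None),  # never returned: likely satisfied
    ("cheap flights", "example-a.com", 90),
]

QUICK_BACK_SECONDS = 2
DEMOTION_THRESHOLD = 0.5  # more than half of clicks bounce straight back

stats = defaultdict(lambda: {"clicks": 0, "quick_backs": 0})
for query, url, back_after in click_log:
    entry = stats[(query, url)]
    entry["clicks"] += 1
    if back_after is not None and back_after <= QUICK_BACK_SECONDS:
        entry["quick_backs"] += 1

for (query, url), entry in stats.items():
    bounce_rate = entry["quick_backs"] / entry["clicks"]
    if bounce_rate > DEMOTION_THRESHOLD:
        print(f"Demotion candidate for '{query}': {url} "
              f"({bounce_rate:.0%} quick back-clicks)")
```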

However, measuring interactions with the results page only scratches the surface. Google also has a wealth of other usage data that, while not all-encompassing, provides it with a very valuable set of information. To name but a few sources, it has Google Analytics, which must be installed on millions of sites by now, AdWords, AdSense, Gmail, Website Optimiser and, of course, the Google Toolbar. Given that I believe user data is the best way to deliver quality search results, I’m hard-pressed to believe that Google isn’t at least thinking about incorporating this enormous collection of data into its algorithm.

As Hitwise has demonstrated, the best way to collect user data is to be at the heart of it, and that means being an internet service provider connecting users and servers to the world wide web. Of all the search engines, Google has probably made the biggest leaps into this arena. Not only does it have a few WiMAX projects on the go, but it is also offering ISP tools, and there is little doubt in my mind that it will soon offer a cloud-based hosting environment to emulate the success of Amazon in particular.

If one were a conspiracy theorist, one might think twice about Google’s recent announcement that it would be incorporating page load times into its paid search Quality Score algorithm. The Quality Score seriously affects the ranking and ultimately the return on investment of any paid search campaign. So the question on the conspiracist’s lips is whether hosting your site in the Google cloud will deliver the fastest page load times and therefore the best Quality Score. If that were the case, webmasters would have a financial incentive to host their sites with Google. Once Google has driven enough site owners to give up their data, it can really start to leverage this in its rankings. Then there will be a tipping point where, if you don’t allow Google to see the value of your site, you will lose out on the traffic. Does this mean Google will eventually host the internet? I doubt it; Steve Ballmer wouldn’t let them — he’ll throw his chairs before that happens …

Right now, user data is probably only a very small part of any search engine’s ranking algorithm, but its role will grow in time, and I wouldn’t be surprised if it accounts for half of the ranking weight within the next decade. What does this shift from link-driven rankings to usage-data-driven rankings mean for website owners and online marketers? The answer to that question is for another day, but in the meantime, put yourself in the mind of your user and ask yourself this question: Would you visit your website and go back to it?


