Google’s Update Completing Cycles
Google Products Nov 23, 2005

Ever since Google introduced its latest algorithm update in September, a fair amount of column space has been dedicated to telling webmasters and small business owners to wait until the update is complete. Although it can be said that the Jagger Update will never be complete, the final cycle of the immediate update appears to be unfolding.
Jagger was a different sort of algorithm update for Google. Its infamous predecessors, Florida and Hilltop, were generally limited shifts in the values Google assigned domains based on content and links. After the immediate punch of previous updates, the search engine results pages (SERPs) would generally return to a stable and predictable state. SERPs generated under Jagger, by contrast, are expected to keep updating themselves with a greater degree of flux.
So, what exactly happened during the Jagger Update, and what might it mean to your website? Quite a bit as it turns out.
The Jagger Update was introduced for three main reasons. The first was to deal with manipulative link-network schemes, sites generated with scraped content, and other forms of SEO spam. The second was to account for the inclusion of a greater number of spiderable documents and file types. The third was to accommodate new methods of site acquisition beyond the Googlebot spider.
The update made its first public appearance in late September but had its most significant impact in early October.
At that time, hundreds of thousands of websites that had previously enjoyed strong listings were suddenly struck and sent to the relative oblivion found beyond the second page of results.
Most of those sites lost position due to participation in what Google obviously considers inappropriate linking schemes. This was the first conclusion we came to in late September, based on the experience of clients who had joined link networks that had not been vetted by our link experts.
Discussions in various search engine forums now support this. At the same time, most of those hurt by this part of the update are good people running honest businesses. Google believes that irrelevant link networks, no matter how simple or complex, are unhealthy additions to what might otherwise be a good website.
The problem Google faced was that some webmasters misunderstood what links are for and how Google uses them to rank documents. For whatever reason, many webmasters and site administrators participated in wholesale link mongering, bulking up on as many inbound links as possible without consideration of the most important factor (in Google’s estimation): the relevance of those inbound links.
Google appears to be applying filters based on historic data it has collected about all sites in its index over time. In other words, Google likely knows a lot more about documents linking to a particular website than the person who placed or requested the link in the first place.
SEOs and webmasters should review the “Information Retrieval Based on Historical Data” patent application that Google filed on March 31, 2005, for detailed information.
Google judges sites based on who they link to, as well as who links to them. Before the update, a link from your site to an irrelevant site was more of a waste of time than a liability. Today, irrelevant links appear to be both.
Google’s desire to provide stable and highly relevant search engine results pages (SERPs) while preventing outright manipulation of those SERPs was the primary cause of the shift.
The second and third reasons for updating the algorithm at this time involve the ability to index documents and information obtained from alternative sources, such as Google Base, Froogle, and blogs, as well as other social networking tools.
Google’s stated goal is to grow to include a reference to all the world’s information.
That information is expressed in multiple places using several unique file formats, some of which are difficult to compare with others. By checking the file or document in question against its long-term history of related documents, Google is better able to establish its theme and intent.
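To make the idea concrete, the short Python sketch below scores how well a document’s current text matches archived versions of related documents using a simple term-overlap measure. The function names, the similarity measure, and the sample texts are assumptions made purely for illustration; Google has not published how its history-based filters actually work.

```python
from collections import Counter
import math
import re


def term_vector(text):
    """Build a simple bag-of-words frequency vector from raw text."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words)


def cosine_similarity(vec_a, vec_b):
    """Cosine similarity between two term-frequency vectors."""
    shared = set(vec_a) & set(vec_b)
    dot = sum(vec_a[t] * vec_b[t] for t in shared)
    norm_a = math.sqrt(sum(c * c for c in vec_a.values()))
    norm_b = math.sqrt(sum(c * c for c in vec_b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)


def theme_consistency(current_text, historical_texts):
    """Average similarity of a document to historical snapshots of related pages.

    A low score suggests the document has drifted from the theme its history
    established -- the sort of signal a history-aware filter might weigh.
    Purely illustrative; this is not Google's method.
    """
    current = term_vector(current_text)
    scores = [cosine_similarity(current, term_vector(t)) for t in historical_texts]
    return sum(scores) / len(scores) if scores else 0.0


if __name__ == "__main__":
    history = ["victoria bc whale watching tours and charters",
               "whale watching season schedules for victoria bc"]
    print(theme_consistency("cheap pills casino poker chips", history))        # low score
    print(theme_consistency("book a whale watching tour in victoria", history))  # higher score
```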
The mass adoption of blogs, promoted by Google, created several problems for the search engine.
Web admins and search marketers will take almost any opportunity to promote their sites, by any means available. Blogs provided ample opportunities, and soon issues ranging from comment spam to scraped content started to clutter the SERPs. By comparing document content with the history of other related documents in its index, Google has become much better at spotting blog-enabled spam.
Google also faced problems with other forms of search engine spam, such as fake directories, and with on-page spamming techniques such as hiding information in CSS files.
The Jagger Update appears to be designed to address these issues by applying Google’s extensive knowledge about items in its index to every document or file it ranks. A site that scrapes content, for example, might be weighed against the original pages on which that content was published and the intent of the republishing.
A site that hides information in its CSS file will similarly run up against Google’s memory of how the same domain looked and operated before the spam content was inserted.
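As a rough illustration of the kind of pattern involved, the Python sketch below flags CSS declarations commonly used to hide text from human visitors while leaving it readable to crawlers. The pattern list and function name are assumptions for the example only and are in no way a description of Google’s actual filters.

```python
import re

# Declarations commonly associated with hiding text from visitors.
# Illustrative and deliberately incomplete -- not Google's spam filter.
SUSPICIOUS_CSS_PATTERNS = [
    r"display\s*:\s*none",
    r"visibility\s*:\s*hidden",
    r"text-indent\s*:\s*-\d{3,}px",   # e.g. text-indent: -9999px
    r"font-size\s*:\s*0",
]


def flag_suspicious_rules(css_text):
    """Return the CSS declarations that match a known text-hiding pattern."""
    hits = []
    for pattern in SUSPICIOUS_CSS_PATTERNS:
        hits.extend(re.findall(pattern, css_text, flags=re.IGNORECASE))
    return hits


if __name__ == "__main__":
    stylesheet = """
    .keywords { display: none; }
    h1 { font-size: 2em; }
    .offscreen { text-indent: -9999px; }
    """
    print(flag_suspicious_rules(stylesheet))
    # ['display: none', 'text-indent: -9999px']
```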
The third reason for the algorithm update comes from Google’s expansion. Google is now much larger than it was when the Bourbon update was introduced in the early summer.
Audio and video content is spiderable and searchable. Google’s comparison shopping tool, Froogle, is starting to integrate with Google Local, as Google Local and Google Maps are merging. There is some speculation in the SEO community that Google is preparing to incorporate personalized data into the search results served to specific individuals. A common assumption is that Jagger is part of Google’s movement towards personalization, although there is little firm evidence to support the idea.
If your website is still suffering the lagging effects of the Jagger Update, your SEO or SEM vendor should be able to offer sound advice.
Chances are, the first thing they will do is a point-by-point inspection of the inbound and outbound links associated with your website. Next, they will likely suggest making it easier for Google to crawl the various document file types on your site by providing an XML sitemap to guide Google’s crawler; a minimal example of generating one appears below.
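A basic sitemap can be produced with a few lines of script. The sketch below writes a minimal XML sitemap for a handful of placeholder URLs; the file name, URLs, and schema reference are assumptions for illustration, and Google’s own Sitemaps documentation should be consulted for the exact format it expects.

```python
import xml.etree.ElementTree as ET
from datetime import date

# Placeholder URLs -- replace with the pages you want Google's crawler to find.
PAGES = [
    "http://www.example.com/",
    "http://www.example.com/services.html",
    "http://www.example.com/contact.html",
]


def build_sitemap(urls, outfile="sitemap.xml"):
    """Write a minimal XML sitemap listing each URL with today's date."""
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for page in urls:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = page
        ET.SubElement(url, "lastmod").text = date.today().isoformat()
    ET.ElementTree(urlset).write(outfile, encoding="utf-8", xml_declaration=True)


if __name__ == "__main__":
    build_sitemap(PAGES)
```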
Lastly, they will likely suggest a look at how website visitors behave when visiting your site. Site visitor behavior will play a role in Google’s evaluation of a site’s importance and relevance in its index.
The introduction of Google Analytics provides web admins with a wealth of free information about site visitors, as well as insights into how the site performs on Google’s search engine results pages. It also provides Google with a great deal of information about the sites that run it.
About the Author
Jim Hedger is a senior editor for ISEDB.com. He is also a writer, speaker, and search engine marketing expert, working for StepForth Search Engine Placement in Victoria, BC.