Google's New Web Page Spider

Feb 19, 2007, 21:39

Subhash

Search engines use automated software programs that crawl the web. These programs, called "crawlers" or "spiders", go from link to link and store the text and keywords from the pages in a database. "Googlebot" is the name of Google's spider software.

Types of Google Spiders

Many webmasters have noticed that there are now two different Google spiders that index their web pages. At least one of them is performing a complete site scan:

• The normal Google spider: 66.249.64.47 - "GET /robots.txt HTTP/1.0" 404 1227 "-" "Googlebot/2.1"

• The additional Google spider: 66.249.66.129 - "GET / HTTP/1.1" 200 38358 "-" "Mozilla/5.0 (compatible; Googlebot/2.1)"
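The first log entry shows Googlebot requesting /robots.txt, the file a well-behaved crawler checks before fetching pages; the 404 status means that particular site has none. As a minimal illustration (the /private/ directory is only an example path, not anything from the logs above), a robots.txt that lets Googlebot crawl everything except one directory could look like this:

# Let Googlebot crawl everything except one directory
User-agent: Googlebot
Disallow: /private/

# Default rule: all other crawlers may fetch everything
User-agent: *
Disallow: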

Difference Between the Two Google Spiders

The new Google spider identifies itself with a slightly different user agent, "Mozilla/5.0 (compatible; Googlebot/2.1)", and, as the log entries above show, it requests pages over HTTP/1.1 instead of HTTP/1.0. The new spider might be able to understand more content formats, including compressed HTML.
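If the new spider really does handle compressed HTML, serving gzip-compressed pages would make transfers faster. As a minimal PHP sketch, the built-in ob_gzhandler output handler does this safely: it checks the client's Accept-Encoding header and compresses the page only for clients that declare gzip support.

<?php
// ob_gzhandler inspects the client's Accept-Encoding header and
// gzip-compresses the output only if the client supports it;
// all other clients receive the page uncompressed.
ob_start('ob_gzhandler');
?>
<html>
<head><title>Example page</title></head>
<body>
<p>This page is delivered gzip-compressed to clients that accept it.</p>
</body>
</html>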

AdWords Spider

Google is using a new crawler program for its AdWords advertising system that automatically spiders and analyzes the content of advertising landing pages. Google tries to determine the quality of the ad landing pages with the new bot. The content of the landing page is used for the Quality Score that Google assigns to your ads. Google uses the Quality Score together with the amount you are willing to pay to determine the position of your ads. Ads with a high Quality Score can rank higher even if you pay less than others for the ad.
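Google has not published the exact calculation, but conceptually the ad position comes from combining the bid with the Quality Score. The sketch below is a hypothetical illustration of that idea only, not Google's actual formula:

<?php
// Hypothetical illustration only -- Google's real formula is not public.
// Conceptually, an ad's rank combines the maximum bid with the
// Quality Score, so a cheaper ad with a better score can still win.
function ad_rank($max_bid, $quality_score) {
    return $max_bid * $quality_score;
}

echo ad_rank(0.50, 8); // 4.0: lower bid, high Quality Score
echo "\n";
echo ad_rank(1.00, 3); // 3.0: higher bid, but ranks below the first ad
?>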

Purpose of the New Google Spider

Google hasn't revealed its purpose yet. There are two main theories:

• The first theory is that Google uses the new spider to spot web sites that use cloaking, JavaScript redirects and other dubious web site optimization techniques. As the new spider seems to be more powerful than the old spider, this sounds plausible.

• The second theory is that Google's extensive crawling might be a panic reaction because the index needs to be rebuilt from the ground up in a short time period. The reason for this might be that the old index contains too many spam pages.

What Does This Mean for Your Web Site?

If you use questionable techniques such as cloaking or JavaScript redirects, you might get into trouble. If Google really uses the new spider to detect spamming web sites, it's likely that these sites will be banned from the index. To obtain long-term results on search engines, it's better to use ethical search engine optimization methods.

Receive an Email When Google Spiders Your Page

If your web site allows you to use PHP code, your web pages can inform you when Google's spider has indexed them. A small piece of PHP code can recognize Googlebot when it visits the page and notify you by email.
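The article does not reproduce the script itself, so the following is only a minimal sketch of how such a page might work. The address you@example.com is a placeholder, and note that a user agent string can be faked, so the check is a hint rather than proof of a real Googlebot visit:

<?php
// Email a notification whenever a visitor identifying itself as
// Googlebot requests this page. Replace you@example.com with your
// own address. User agents can be spoofed, so treat this as a hint.
$user_agent = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';

if (stripos($user_agent, 'Googlebot') !== false) {
    $page    = $_SERVER['REQUEST_URI'];
    $subject = 'Googlebot visited ' . $page;
    $body    = "Googlebot requested $page on " . date('Y-m-d H:i:s') .
               "\nUser agent: $user_agent";
    mail('you@example.com', $subject, $body);
}
?>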

For more details on Google's spider, visit www.halfvalue.com and www.halfvalue.co.uk.

For more information on books, visit www.lookbookstores.com.