How to prevent duplicate content with effective use of the robots.txt file and the robots tag

Learn how to control search engine spiders and prevent them from dismissing parts of your site as duplicate or irrelevant content.

Duplicate content is one of the problems we regularly come across as part of the search engine optimization services we offer. If the search engines determine that your site contains duplicate content, this may result in ranking penalties and even exclusion from their indexes. Fortunately, it's a problem that is easily rectified.

Your primary weapon of choice against duplicate content can be found within the Robots Exclusion Protocol, which has now been adopted by all the major search engines. There are two ways to control how the search engine spiders index your site:

1. The Robots Exclusion File or "robots.txt", and

2. The Robots Tag

The Robots Exclusion File (robots.txt)

This is a simple text file that can be created in Notepad. Once created, you must upload the file to the root directory of your website, e.g. www.yourwebsite.com/robots.txt. Before a search engine spider indexes your website, it looks for this file, which tells it exactly how to index your site's content. The robots.txt file is best suited to static HTML sites, or to excluding certain files in dynamic sites. If the majority of your site is dynamically created, then consider using the Robots Tag instead.
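Once uploaded, it is worth confirming that the file really is publicly reachable at the root of your domain. A minimal check using Python's standard library (the domain below is this article's placeholder, so substitute your own):

from urllib import request

# Fetch and print the live robots.txt file; replace the placeholder
# domain with your own site before running.
print(request.urlopen("http://www.yourwebsite.com/robots.txt").read().decode())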

Creating your robots.txt file

Example 1

Scenario: You want the file to apply to all search engine spiders and to make the entire site available for indexing. The robots.txt file would look like this:

User-agent: *

Disallow:

Explanation

The use of the asterisk in the "User-agent" line means this robots.txt file applies to all search engine spiders. By leaving the "Disallow" line blank, all parts of the site are made available for indexing.

Example 2

Scenario: You want the file to apply to all search engine spiders and to stop them from indexing the faq, cgi-bin, and images directories, as well as a specific page called faqs.html in the root directory. The robots.txt file would look like this:

User-agent: *

Disallow: /faq/

Disallow: /cgi-bin/

Disallow: /images/

Disallow: /faqs.html

Explanation

The use of the asterisk in the "User-agent" line means this robots.txt file applies to all search engine spiders. Access to the directories is prevented by naming them, and the specific page is referenced directly. The named files and directories will no longer be indexed by any search engine spider.
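If you want to sanity-check a set of rules like this before uploading the file, Python's standard library ships with a robots.txt parser. A minimal sketch that feeds it the Example 2 rules directly:

from urllib import robotparser

# Parse the Example 2 rules in place, without fetching anything
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /faq/",
    "Disallow: /cgi-bin/",
    "Disallow: /images/",
    "Disallow: /faqs.html",
])

print(rp.can_fetch("*", "/faq/index.html"))  # False: inside a disallowed directory
print(rp.can_fetch("*", "/faqs.html"))       # False: the disallowed page itself
print(rp.can_fetch("*", "/about.html"))      # True: not covered by any rule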

Example 3

Scenario: You want the file to apply only to Google's spider, googlebot, and to stop it from indexing the faq, cgi-bin, and images directories, as well as a specific page called faqs.html in the root directory. The robots.txt file would look like this:

User-agent: googlebot

Disallow: /faq/

Disallow: /cgi-bin/

Disallow: /images/

Disallow: /faqs.html

Explanation

By naming a particular search spider in the "User-agent" line, you prevent it from indexing the content you specify. Access to the directories is prevented by simply naming them, and the specific page is referenced directly. The named files and directories will not be indexed by Google, while other spiders remain unaffected.
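Records can also be combined in a single robots.txt file. A spider obeys the most specific record that matches its name and ignores the rest, so in the hypothetical combination below googlebot would avoid only faqs.html, while every other spider would also stay out of the three directories:

User-agent: googlebot

Disallow: /faqs.html

User-agent: *

Disallow: /faq/

Disallow: /cgi-bin/

Disallow: /images/

Disallow: /faqs.html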

That's all there is to it! As mentioned earlier, the robots.txt file can be difficult to implement in the case of dynamic sites, and in this case it's probably necessary to use a combination of the robots.txt file and the robots tag.

The Robots Tag

This alternative way of telling the search engines what to do with site content is placed in the <head> section of each web page. A simple example would be as follows:

<meta name="robots" content="noindex,nofollow">

In this example we are telling all search engines not to index the page or to follow any of the links contained within it.

In this second example, I don't want Google to cache the page, because the site contains time-sensitive information. This can be achieved simply by adding the "noarchive" directive:

<meta name="googlebot" content="noarchive">

What could be simpler! Although there are other ways of preventing duplicate content from appearing in the search engines, these are the simplest to implement, and every website should operate a robots.txt file, a robots tag, or a combination of the two.
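For dynamically generated pages, the tag can be written out by the page templates themselves. A minimal sketch in Python, assuming a hypothetical robots_meta helper and illustrative page flags:

def robots_meta(page):
    # Keep duplicate views, such as printer-friendly copies, out of the index
    if page.get("is_print_version"):
        return '<meta name="robots" content="noindex,nofollow">'
    # Allow indexing of time-sensitive pages, but ask Google not to cache them
    if page.get("is_time_sensitive"):
        return '<meta name="googlebot" content="noarchive">'
    # Everything else is indexed normally
    return '<meta name="robots" content="index,follow">'

# Example: a printer-friendly duplicate of an article page
print(robots_meta({"is_print_version": True}))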


ABOUT THE AUTHOR


Should you require further information about our search engine marketing or optimization services, please visit us at http://www.e-prominence.co.uk – the specialist search marketing company.


