Duplicate Content - How To Beat Duplicate Content Penalties When Publishing Articles Online

Aug 13, 2007

Elaine Currie

Many webmasters fear Google's duplicate content penalty, but not all of them realise how the penalty works, how it can be avoided and how to escape from "Google Hell".

The talk about duplicate content penalties often creates misunderstanding in the minds of inexperienced webmasters, who come away with the impression that their whole website will be de-indexed if it contains a phrase that can be found elsewhere online. Some get the idea that if their websites include pages containing other people's articles, the weight of the penalties imposed on those pages will drag their whole website down into obscurity. This is not how the duplicate content penalty works; it is brutal, but not quite that brutal.

One reason why search engines filter duplicate content is to keep search results free from unhelpful duplication. Therefore, any web page containing a significant amount of text that already exists elsewhere online is unlikely to make it to the top of the search results.

(This avoids the situation where a search would bring up page after page of the same "cookie cutter" websites.) So, if you write an article and submit it to 100 online article directories, your article might show up on page one of a Google search, but the search won't produce 100 instances of it (ie one for each directory where it is posted) in the first 10 pages of results. A recently published article might appear several times in a search result but, over time, most of the duplicate entries will be weeded out and moved to the supplemental index (aka "Google Hell").
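To make the filtering idea concrete, here is a minimal sketch of how a search engine might group near-duplicate pages and keep only one per group. This is not Google's actual method (those details are secret); it simply compares overlapping word sequences ("shingles"), and the shingle size and similarity threshold are arbitrary values chosen for illustration.

```python
def shingles(text, size=4):
    """Break text into overlapping word sequences ("shingles")."""
    words = text.lower().split()
    return {" ".join(words[i:i + size]) for i in range(len(words) - size + 1)}

def similarity(a, b):
    """Jaccard similarity between two shingle sets (0 = no overlap, 1 = identical)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def filter_duplicates(pages, threshold=0.8):
    """Keep one page per near-duplicate cluster; mark the rest as filtered.

    `pages` is a dict of {url: page_text}. Pages whose shingle sets overlap by
    more than `threshold` are treated as copies of the same article.
    """
    kept, dropped = [], set()
    sets = {url: shingles(text) for url, text in pages.items()}
    for url in pages:
        if url in dropped:
            continue
        kept.append(url)
        for other in pages:
            if other != url and other not in dropped:
                if similarity(sets[url], sets[other]) >= threshold:
                    dropped.add(other)  # near-duplicate: filtered from the main results
    return kept, dropped

# Example: the same article syndicated to two directories, plus one unrelated page.
article = "how to beat duplicate content penalties when publishing articles online " * 5
pages = {
    "http://example-directory-1.com/article": article,
    "http://example-directory-2.com/article": article,
    "http://unique-site.example.com/original-essay": "a completely different essay about gardening " * 5,
}
kept, dropped = filter_duplicates(pages)
print("shown in results:", kept)
print("moved aside as duplicates:", sorted(dropped))
```

Running the example keeps one copy of the syndicated article and the unrelated page, while the second directory copy is set aside, which is roughly the behaviour described above.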

In theory, the way duplicate content filters work (I say "in theory" because the system is far from 100% perfect) is that the website where the article was first posted will be recognised as the original. So, if you want your own website to appear in the search results and you also want to distribute your article to article directories, you need to make sure the search engines know where they saw it first. The way to do this is to post the article on your website, wait a few days, and then check whether your page has been indexed by searching for it on the major search engines. Once you know the search engines have indexed your page, you can submit your article to the article directories.
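If you would rather automate the "has my page been indexed yet?" check than search by hand, a small script can run a site:-restricted query for the page's URL. The sketch below uses Google's Custom Search JSON API; the API key, search engine ID and URL are all placeholders you would replace with your own, and it simply reports whether any result comes back for the page.

```python
import requests

API_KEY = "YOUR_GOOGLE_API_KEY"      # placeholder: your Custom Search API key
SEARCH_ENGINE_ID = "YOUR_CSE_ID"     # placeholder: your Programmable Search Engine ID

def is_indexed(page_url):
    """Return True if a site:-restricted query for the page returns any results."""
    response = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={
            "key": API_KEY,
            "cx": SEARCH_ENGINE_ID,
            "q": f"site:{page_url}",
        },
        timeout=10,
    )
    response.raise_for_status()
    data = response.json()
    # The "items" field is absent from the response when the query matches nothing.
    return bool(data.get("items"))

if __name__ == "__main__":
    url = "http://www.example.com/my-new-article.html"  # placeholder URL
    if is_indexed(url):
        print("Page appears in the index - safe to submit to article directories.")
    else:
        print("Page not indexed yet - wait a few more days before syndicating.")
```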

Make sense so far? Here comes the part that makes the system less than 100% perfect. Google's filters don't take the date of first online publication into account when deciding which website has the best claim to an article. Instead, they use the number of links pointing back to each website carrying the article to judge its importance. So, if your article is published on a high-ranking article directory, you will most likely find that it is the directory's copy that appears in the search results, while the web page containing your original article has been demoted to the supplemental index.
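As a rough illustration of that reasoning (not Google's real ranking code), the toy function below picks which copy of an article to display purely by counting links to each hosting page and ignoring publication dates, so an earlier but poorly linked page can lose out to a later, well-linked directory copy. The URLs, dates and link counts are invented for the example.

```python
from datetime import date

def pick_displayed_copy(copies):
    """Choose which duplicate to show in results by backlink count, not by date.

    `copies` maps each URL carrying the article to
    (publication_date, number_of_inbound_links).
    """
    return max(copies, key=lambda url: copies[url][1])

copies = {
    # Your own site published first, but only a few links point at it.
    "http://www.my-small-site.example.com/article.html": (date(2007, 8, 1), 3),
    # A high-ranking article directory picked the piece up later, with many links.
    "http://big-article-directory.example.com/article": (date(2007, 8, 10), 450),
}

print(pick_displayed_copy(copies))
# -> the directory copy wins, even though it was published later
```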

Is there a way to get your web pages recognised as original? Yes: by having unique content. Is there a way to rescue your web pages from Google Hell? Yes: the way things (ie Google's famously secret algorithms) work at the moment, this can be achieved by building up the number of links back to your web pages until Google recognises your website as being too "important" for Google Hell.