A new video was posted today on the Google Webmaster Help YouTube Channel, in which Matt Cutts answers a question from Gary Taylor, a webmaster in the UK, about duplicate content.

Gary Taylor from Stratford-upon-Avon, UK, asks:

How does Google handle duplicate content and what negative effects can it have on rankings from an SEO perspective?

That’s a great question, as duplicate content has long been a common fear for many webmasters.

Before all of the recent Google updates and data refreshes, it was common practice for many website owners to scrape, copy and rehash content to save time and make more money.

Matt says it’s a question they get a lot, before adding:

It’s important to realise that if you look at content on the web, something like 25 or 30% of all of the web’s content is duplicate content.

That’s a pretty crazy percentage, don’t you think?

I knew the internet would be filled with duplicate content, probably mostly innocent, but I never would have imagined that the percentage would be so high. I suppose manuals for things like operating systems and software, which are often republished across many sites, could play a large part in that, though, as Matt mentions.

Matt also says:

It’s not the case that every single time there’s duplicate content, it’s spam, and if we made that assumption then the changes that happened as a result would probably end up hurting our search quality rather than helping our search quality.

Although most of the duplicate content Google finds isn’t treated as spam and doesn’t result in a site being penalised, Matt does make it clear that they’ll usually only show one of the sites hosting the same content and push the others back in their search results, though he didn’t say which site they would favour.

Remember that, although they don’t view most instances of duplicate content as spam, they will take action against autoblogs and sites scraping content on a large scale.

Here’s the video: