Search Engine Optimization (SEO) is an essential subject for any website operator, and it's a multifaceted topic, offering many ways to increase your website's ROI. As always, your number one priority should be creating fantastic content and establishing yourself as an authority in your area of expertise. Beyond content, however, there are several technical components of a website that can become "roadblocks" when search engines determine how to rank your site in search results, and these aspects are often missed. Let's take a look at these common issues and how they can be resolved:
www vs. Non-www URLs
Most websites add a "www" to their URLs, but pages on the site load the same whether or not it is included. The problem this can cause is that search engines will treat yoursite.com and www.yoursite.com (as well as internal pages such as yoursite.com/our-services/ and www.yoursite.com/our-services/) as two distinct URLs. Even though both point to the same page, they will be considered duplicate content, which dilutes your search rankings.
This issue can be addressed by creating a 301 (permanent) redirect at the server level that always sends traffic to the www version of any page (this standardization is known as "URL canonicalization"). This doesn't require any changes to content, and once it is set up, it applies not only to existing site content but to any content created in the future, so making sure it is configured correctly is a lasting benefit to SEO.
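On an Apache server, a minimal sketch of this redirect in an .htaccess file might look like the following (yoursite.com is a placeholder for your own domain, and this assumes mod_rewrite is enabled):

```apache
# Redirect any request for the bare domain to the www version,
# preserving the requested path and query string.
# R=301 marks the redirect as permanent; L stops further rule processing.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^yoursite\.com$ [NC]
RewriteRule ^(.*)$ https://www.yoursite.com/$1 [R=301,L]
```

On other servers (nginx, IIS) the syntax differs, but the principle is the same: one permanent redirect rule at the server level covers every page on the site.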
404 Error Codes
Another common issue occurs when a page on a site is not found but the correct status code is not returned to the requester. When a page is not found, a 404 error should be returned; this lets search engines know that the missing page should be removed from their index. However, it's common to see sites that mistakenly return a 200 (OK) code when a page is not found. This makes search engines treat the "Page not found" page as actual content, and since the same page comes up whenever any content is missing, it will be considered duplicate content, which will lower search rankings. Making sure the site is configured to return the correct status codes is essential for search engines to index your site correctly.
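On Apache, one common cause of this "soft 404" behavior is pointing the ErrorDocument directive at a full URL instead of a local path. As a sketch (not-found.html is a placeholder for your own error page):

```apache
# Correct: a local path keeps the 404 status code on the response.
ErrorDocument 404 /not-found.html

# Incorrect: a full URL causes Apache to redirect to the error page,
# which is then served with a 200 (OK) status -- a "soft 404".
# ErrorDocument 404 http://www.yoursite.com/not-found.html
```

You can verify your site's behavior by requesting a URL you know doesn't exist and checking the status code in your browser's developer tools.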
Consistent URL Formatting
Another issue that can cause search engines to think your site has duplicate content is the case of the words used in your URLs. Search engines will view www.yoursite.com/About-Us/ as a different URL than www.yoursite.com/about-us/, so it's important to use a consistent strategy when creating URLs. We recommend using all lowercase characters, with hyphens between words rather than underscores: since search engines do not recognize underscores as spaces, using underscores can cause them to read the URL keywords as a single keyword composed of all the words in the URL (e.g. aboutus vs. about us).
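These conventions can be applied at the moment URLs are generated. As a sketch, the following `make_slug` function (a hypothetical helper, not tied to any particular CMS) turns a page title into a lowercase, hyphen-separated slug:

```python
import re

def make_slug(title: str) -> str:
    """Build a consistent, lowercase, hyphen-separated URL slug."""
    slug = title.lower()
    # Replace every run of non-alphanumeric characters (spaces,
    # underscores, punctuation) with a single hyphen.
    slug = re.sub(r"[^a-z0-9]+", "-", slug)
    # Trim any leading or trailing hyphens left over.
    return slug.strip("-")

print(make_slug("About Us"))       # -> about-us
print(make_slug("Our_Services!"))  # -> our-services
```

Generating slugs this way ensures every URL your site produces already follows one consistent pattern, so there is nothing for search engines to see as a duplicate.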
A great strategy for ensuring URLs are always credited in lowercase, no matter which version is rendered in the browser, is the use of rel="canonical" tags. These are link tags in the page's head which tell search engines which version of a URL to give "weight" to when indexing a site, e.g. the lowercase version of the URL. For static sites, these tags will need to be set up on a page-by-page basis, but they can be generated programmatically in dynamic, CMS-driven sites. Since CMS-driven websites are notorious for URL-based duplicate content issues, the configuration of rel="canonical" tags is critically important in your overall SEO URL strategy.
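The tag itself is a single line inside the page's head. As a sketch, both www.yoursite.com/About-Us/ and www.yoursite.com/about-us/ would carry the same tag pointing at the lowercase URL:

```html
<head>
  <!-- Tells search engines which URL should receive indexing credit -->
  <link rel="canonical" href="https://www.yoursite.com/about-us/" />
</head>
```

Whichever variant of the URL a visitor (or crawler) lands on, the canonical tag consolidates ranking signals onto the one version you've chosen.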
Page Load Speed
Page load speed is another factor search engines take into account when ranking pages, and it directly affects user experience and conversion rates as well. Common improvements include compressing and properly sizing images, enabling browser caching, minifying CSS and JavaScript, and serving assets from a content delivery network (CDN). Tools such as Google's PageSpeed Insights can help identify which of these issues affect your site.
While we continue to stress that creating quality content should be your number one SEO priority, these technical considerations can make the difference in how well that content ranks. If you have any questions about how to implement these configurations, please contact us, or feel free to leave a comment below.