Tips and Tricks for SEO Best Practices

Search Engine Optimisation – SEO – is simple at a basic level. Design a website with compelling content for users, build the site well, follow best practices in web coding, and follow a handful of rules that help search engines crawl your site. Too often, webmasters let themselves get overwhelmed by the concept of SEO and are paralyzed into doing nothing. Follow these best practices to cover the basics. Once those are covered, move on to advanced topics that differentiate your site further. Remember – if you’re not doing the basics, don’t worry about the advanced stuff – getting the basics right is always more important with SEO best practices.

There are two sides to SEO best practices. One is building your page the right way from a technical perspective. The other is targeting your content at the appropriate level of specificity. I’ll cover the engineering mechanics second, since they rely on understanding the targeting first.

The most basic guidance I can give people is to build a webpage that is legitimately valuable to real people – that alone is half of the work. SEO principles applied to an already-valuable site are compelling. Start by deciding on the “target” of each page: ask yourself what its specific purpose is. If you’re not clear on that, search engines (and probably people) won’t be either.

A few years ago, people discovered websites by “drilling down” from a high-level portal like MSN or Yahoo’s directory into specific topics and categories. Those days are gone. People now use Google to search on a term or phrase, and the search engine returns the specific web pages that best match the person’s search words. As a result, each page on your website needs a clear target audience – and you, as the creator of the content, need to know which specific keywords and phrases you want people to discover your page through.

As you create the content on your page (the “editorial”), ask yourself what search terms someone would likely type into Google to find it, then build your content consciously around those phrases. The more relevant you make your page’s content and the more closely you match these keywords, the higher the chances of someone actually finding your page. A page that covers a huge range of content without a reason for doing so probably won’t score nearly as well as a page built around a specific theme or topic. One very useful way to make sure you target your website with the right keywords is to create your page, then use Google’s Keyword Analyzer to help generate similar words. Using that tool, you can make sure that what YOU think are important keywords ACTUALLY ARE the words people search on.

You should also assume that each individual page on your website is a potential entry point into your entire site. Because people arrive at your page from a search engine, assume they know nothing else about your site – and make it easy for them to discover other compelling content related to the page they happened to land on. Do you provide links from each page into other areas of your site? Is it clear how they can navigate to related topics, get back to your homepage, discover other interesting content, etc.?
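As a sketch of what those navigation aids might look like in markup, the snippet below shows a breadcrumb trail plus a related-links block; every URL and title here is an invented placeholder, not a page from any real site.

```html
<!-- Hypothetical navigation aids for an article page.
     All URLs and titles below are made-up placeholders. -->
<nav>
  <!-- Breadcrumb trail: shows visitors (and crawlers) where this page sits -->
  <a href="/">Home</a> &gt;
  <a href="/articles/">Articles</a> &gt;
  SEO Best Practices
</nav>

<aside>
  <h3>Related articles</h3>
  <ul>
    <li><a href="/articles/choosing-keywords">Choosing the right keywords</a></li>
    <li><a href="/articles/friendly-urls">Writing friendly URLs</a></li>
  </ul>
</aside>
```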

In addition to thinking through these topics, consider how to build a reputable website. From an SEO perspective, multiple factors play into how a search engine decides on your site’s relevance to a user’s search query. One big factor is how many other reputable sites link to your website. Remember that reputation is very important – some people try to trick search engines into thinking a site is valuable, for example by buying lots of incoming links through affiliate programs. Such tactics may work in the short term, but once the search engine catches on, your site will most likely be categorized as spam and you will lose all standing in search results. The only way to build a solid SEO reputation is to focus on making your content truly valuable to real users – the rest will follow naturally. Never try to trick a search engine: focus on the basics and never be deceitful.

Everything above concerns the content of the page. That is absolutely critical to building a search-friendly website, not to mention a user-friendly one. But content itself is not enough: you must also follow technical best practices to achieve great SEO results. The following are the technical best practices for SEO:

In the “head” section of your webpage, which defines overall page-level information, the top mandatory SEO items include:

  • The title tag: The title tag is what appears in the Internet Explorer (or Firefox, etc.) title bar at the top of the webpage. It should be less than 50 characters and as specific as possible about the content of the page. Always start with the most specific part and work your way to the most generic. For example, you might have the title “Analysis of SEO Best Practices brought to you by KingFriday.co.uk”. Here, the more specific content (“Analysis of SEO Best Practices”) comes first, followed by the more generic portion (“brought to you by KingFriday.co.uk”). There should be one and only one instance of the title tag.
  • Meta description is another crucial tag for the head section of your page. The meta description is up to 200 characters that literally describe the page content in a human-understandable way. It is most often the short description search engines display below the link on the search engine results page (SERP). There should be one and only one instance of meta description.
  • The Meta keywords tag gives search engines clues to the topic of the page, using keywords you provide. Think of them as strong hints for the search engine that help narrow down the type of content and specific keywords most important to your page. There should be one and only one instance of meta keywords.
  • Canonical URL: this tag lets you tell the search engine your preferred URL for the page. Since search engines treat each unique URL as a unique page to index, webmasters run into problems when several different URLs all display the same content. Take for example a website at http://www.website.com. The homepage can probably be accessed at the http://www.website.com address, maybe at website.com, and perhaps even at website.com/default.aspx. All of these URLs are unique, so a search engine will index each one – but the content is the same. Which one should it return to the user in a search result? And what about people who link TO the website – what if some of them link to one version, while others link to a different one? The canonical URL tag tells search engines that all of these pages are the same, and it provides the webmaster’s preferred URL for the search engine to consider “right”. It also strongly hints to the search engine that incoming links to alternate versions of the URL should be consolidated and treated as pointing to the “right” URL.
  • Friendly URLs: the actual URL users will use to access your page is important. There are two primary aspects: (1) Make the URL as descriptive and short as possible. For example, if you have a news article about the USA Presidential race and you’re reviewing Hillary Clinton, your ideal URL might be http://presidential.reviews.com/Hillary-Clinton. This is definitely better than a URL like http://reviews.com/review.asp?id=12345. (2) Use few or no parameters – the fewer, the better, and in no case should the URL have more than 3. The URL matters for at least two reasons: (1) URLs are often shown in search results, and if you have keywords in the URL itself, they will often be highlighted; and (2) people who link to your page (bloggers, for example) often use the URL as the actual link text, and search engines examine link text to infer the context and relevancy of sites. Bottom line? If you can, make your URL “human readable” and understandable. If a human can look at the URL alone and know what content they’re going to see, you’ll gain some bonus points with search engines. Use only letters and numbers in the URL, and if you want to separate words, use a dash and NOT an underscore. For reference, look at the URL of the webpage you are on.
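Putting the head-section items together, here is a minimal sketch of what they look like in markup; the domain, title, and descriptions are invented for illustration.

```html
<head>
  <!-- One title tag: most specific first, most generic last, under ~50 characters -->
  <title>SEO Best Practices - Example Site</title>

  <!-- One meta description: a human-readable summary of up to ~200 characters,
       often shown under your link on the results page -->
  <meta name="description"
        content="A walkthrough of on-page SEO basics: titles, meta tags, canonical URLs and friendly URLs.">

  <!-- One meta keywords tag: hints about the page's most important terms -->
  <meta name="keywords" content="SEO, best practices, meta tags, canonical URL">

  <!-- Canonical URL: the single preferred address for this page -->
  <link rel="canonical" href="http://www.example.com/seo-best-practices">
</head>
```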

The head section is clearly important for setting the overall information about the page’s content. It is equally important to ensure the rest of the page follows SEO best practices too. These are the best practices for the main body of the page, still primarily focused on technical aspects; they affect the “body” section of your webpage.

  • The H1 tag is displayed in the page itself and represents the main “header” for the page. Consider it mandatory on every page. You will often see the H1 tag rendered as highlighted or bolded text stating the main topic of the page. Obviously, the H1 tag and the title tag are related in that they both summarize the main purpose of the page; but they should not be identical. It’s fine to use similar keywords in both, but each should be a unique string – change the order or describe the content slightly differently. You must have one and only one H1 tag on your entire page.
  • The H2..Hx tags are optional tags used to differentiate the various sub-sections of your page. For example, if your H1 tag indicates the page is about the History of England, you might use H2 tags to break the page into sections by year range (the 1200s, the 1300s) and then H3 tags to break each year range into further sections. You can have multiple H2 tags, multiple H3 tags, etc. – or none at all. One word of caution: don’t use these tags for anything other than subdividing your page into groupings. If you do, you will confuse search engines, and they may consider your page relevant for something it’s not – or worse, think it’s not relevant when it should be.
  • Alt and title attributes on images are important to help search engines understand what an image is about. They are also key to ensuring visually-impaired visitors can use screen readers to explore your site. When you have an image on the page, make sure it has alt and title attributes that describe it. It is okay to use the identical description for both.
  • Write the site in plain HTML and upgrade the experience to Flash or Silverlight if the browser supports it. While you can build quite an appealing visual site using Flash or Silverlight, you should always do so as an upgraded experience when the client supports it, not as the default behavior. Search engines (not to mention visually impaired users) can’t crawl a Flash or Silverlight site as well as they can crawl HTML, and without a highly-crawlable site, the search engine doesn’t know what your pages are about. There has been some improvement in the ability to crawl Flash, but building a site primarily in Flash is still a very risky move for the webmaster. Silverlight is better than Flash since it has been designed specifically to be crawlable – but again, HTML is by far the better route for the default behavior on your site.
  • Well-formed content means that you properly follow the mechanics of opening and closing HTML tags, you follow the coding standards agreed by the W3C, and your HTML output is well-written enough that search engines and users’ browsers can render your site correctly. You should always validate your site through the W3C Markup Validator and resolve any issues before going live (see below for tool links).
  • Use 301, 302, and 404 codes properly. These are status codes a webserver can return, and the three most important are 301, 302, and 404. If a piece of content moves, or if you want several URLs to automatically direct the user to the same piece of content at a different URL, you would typically use a redirect. There are two types of redirect, and both are sent along with the URL of the “new” location. A 301 code tells search engines that the content has permanently moved to the new location: search engines will drop the old location from their index and start crawling the new location instead. Crucially, any page rank the old page had will be transferred to the new URL. A 302 code tells the search engine that the old URL is still valid and that the new URL is only temporary. The two URLs therefore remain separate from the search engine’s perspective and are crawled and treated separately, so you do not benefit from combined page ranking. Finally, the 404 status code is required any time the webserver receives a request for a URL that isn’t valid on your site. This happens when someone mistypes part of the URL, or when a page has expired or moved elsewhere. The 404 status code is important because you can still “capture” that the person or search engine came to your site: you can display a friendly “sorry, the page you requested doesn’t exist” message and give them a site map to easily find the content they are looking for. For a search engine, a 404 status code is crucial because it says the URL isn’t valid and should not be crawled or returned in search results. When your 404 page also contains a site map, the search engine can easily crawl through to the rest of your site and make sure your live URLs are properly indexed.
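To tie the body-section points together, here is a minimal sketch of a page body that follows the heading and image guidance above; the headings, file names, and text are invented for illustration.

```html
<body>
  <!-- Exactly one H1: related to the title tag, but not an identical string -->
  <h1>A Review of SEO Best Practices</h1>

  <!-- H2/H3 tags only subdivide the content into genuine sections -->
  <h2>Head-section tags</h2>
  <p>Section copy goes here.</p>

  <h2>Images</h2>
  <!-- alt and title describe the image; identical text for both is fine -->
  <img src="/images/serp-example.png"
       alt="Example of a search engine results page"
       title="Example of a search engine results page">
</body>
```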

There are some specific free tools on the web that are extremely useful to ensure you remain SEO compliant:

  • The W3C Validator will check a webpage to ensure it is well formed and passes the appropriate standards. You can see a demo of two pages in the Validator tool by looking at the analysis of this blog post you’re reading (which passes validation hopefully!) and an analysis of a poor web page (which doesn’t pass validation and is highly amusing because it is the homepage of an SEO consulting company).
  • The Reaction Engine performs an analysis of your page for specific keywords. It’s a great tool to use when you’ve completed your page and want to know how it “scores” for certain keywords.
  • So you think your site is easily crawlable by search engines? Check that assumption by seeing your site like a search engine does. This tool will return to you exactly what Google “sees” when it crawls your website. If the content you want Google to see isn’t returned, you’re doing something wrong.
  • To stay on top of more advanced SEO topics, there are two sources that I have found very useful. One is the official Google Webmaster Central blog, and the other is from a group called SEO Chat.
  • This HTTP Header sniffer site will show you what HTTP responses are being returned by your 301, 302, 404 and 200 pages.

Remember that building a search-friendly and optimized webpage is important so that users can find your content. Most web users today simply go to their favorite search engine, type in a few keywords or phrases, and go to one of the first results. By following the best practices outlined on this page, your website will be highly crawlable and discoverable by users. Your ultimate success is a function of all of these items – do the best you can with as many as you can, and good luck! If it helps as a starting point, you can download this page’s source HTML (stripped of the CSS and non-essential content) by clicking here.
