In the previous part of this series, we discussed many SEO terms and their concepts in detail, but it mainly covered off-page SEO terminology. What was left out of that article are the on-page terms, all of which will be covered in detail here.
Beyond the on-page terms, we will also cover several other critically important, miscellaneous SEO terms and concepts that need to be understood in depth. As I mentioned in the previous article, you may want to bookmark or archive this article and read it several times to properly grasp these terms and the concepts behind them.
The Title tag encloses the clickable text that appears on the search engine results page (SERP). Unlike meta tags (which we will discuss in the very next section), Title tags are not optional. It is very important to pay careful attention to framing every single word of the Title tag.
This is one of the first things the search algorithms look at in your document; the title gives them a good idea of what the document is all about. Beyond that, the title is what can compel searchers to choose your site over others. So the Title tag is important both for SEO and for attracting more visitors.
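As a quick illustration, the Title tag lives inside the `<head>` of your HTML document (the page name below is just a placeholder):

```html
<!DOCTYPE html>
<html>
<head>
  <!-- The clickable headline shown in the SERPs; keep it brief and descriptive -->
  <title>On-Page SEO Terms Explained | Example Blog</title>
</head>
<body>
  ...
</body>
</html>
```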
(Image Source: SearchEngineLand.com)
Other important tags similar to the Title tag are the H1, H2, and H3 tags. The H1 tag, or headline text, is the second most important element the search engines look at after the Title tag. It is the largest text in the document and is meant to be brief yet descriptive. The H2 tags serve as the sub-headings or subtitles of your document: there can be several H2 tags in an article, and they give a much more specific and concrete description of your content.
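A minimal sketch of how these heading tags typically nest in a page body (the headings themselves are placeholders):

```html
<body>
  <!-- One H1 per page: the main headline of the document -->
  <h1>A Complete Glossary of On-Page SEO Terms</h1>

  <!-- H2 tags break the content into sub-topics -->
  <h2>Title and Heading Tags</h2>
  <p>...</p>

  <h2>Meta Tags</h2>
  <!-- H3 tags can subdivide an H2 section further -->
  <h3>Meta Description</h3>
  <p>...</p>
</body>
```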
Meta tags are embedded in the HTML code of a webpage and provide additional information about that page to the search engines. They are not visible in the body of the webpage itself, but they convey important details about the page to the engines.
(Image Source: Moz.com)
The meta description is a short description of the webpage. Though not directly visible on the page itself, it can serve as the searcher's first impression, since it is often shown as the snippet under your link in the SERPs. It's like a free advertisement for your link in the SERPs.
Similarly, meta keywords are nothing but a list of some of the most important keywords relevant to the webpage, embedded in its HTML. These keywords can help the search engines understand the relevance of your page better.
Nowadays the engines place very little importance on this meta information, but it is still something you should not ignore entirely, as it can improve the chances of your webpage being visited from the SERPs.
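Putting the two together, meta tags sit alongside the Title tag in the `<head>`; the description and keyword values below are illustrative placeholders:

```html
<head>
  <title>On-Page SEO Terms Explained | Example Blog</title>

  <!-- Often shown as the snippet under your link in the SERPs -->
  <meta name="description" content="A plain-English glossary of on-page SEO terms: title tags, meta tags, keywords, robots.txt, sitemaps and more.">

  <!-- Given very little weight by modern engines, but harmless to include -->
  <meta name="keywords" content="on-page SEO, title tag, meta description, keyword research">
</head>
```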
The keyword is one of the most fundamental units of on-page SEO, and there is huge scope for discussion on this topic too. Keywords are simply the words or phrases that people are actually typing into the search engines, and it is essential for you to be specifically aware of them.
The most fundamental task in SEO is to analyze your target keywords, which is technically termed Keyword Research. You need to research and find keywords that have the largest search volume and the least competition. This is the first and foremost step in SEO; everything else, like building backlinks, comes after it.
(Image Source: SEOnick.net)
After you have researched and found the keywords that are searched most often and have the least competition, you need to use them properly in your webpage. The ratio of the number of times these keywords are used to the total number of words in your webpage is known as Keyword Density. For example, a keyword used 10 times in a 500-word article has a density of 2%.
Various myths hover over the idea of Keyword Density. You may find sites giving you a variety of numbers (2%, 5% or even 10%) as the right percentage to maintain for your target keyword. Years ago, the search engines did match keywords word by word and count the number of times they were used, but in today's SEO world Keyword Density essentially carries no weight; treating it as a ranking factor is a pure myth.
(Image Source: SubmitEdge.com)
Since we have already talked about black-hat SEO, let's discuss some examples of black-hat techniques related to keywords that gained popularity years back and got many webmasters completely removed from the search index. Keyword Stuffing is the act of cramming in superfluous keywords to artificially improve the rankings of a webpage for a certain search query.
A similar offense is Keyword Cloaking, the technique of presenting different keywords to users than to the search engine bots. Some keywords present in the webpage are crawled by the search engines while being kept hidden from the actual users, just to make the page rank for keywords it should not rank for.
Another idea that belongs in this context is Keyword Cannibalization, which is not a black-hat technique but a mistake many webmasters tend to make early in their SEO careers. It is the act of targeting the same or similar keywords on different pages of your website, thereby confusing the search engines and making it increasingly difficult for them to decide which page is the most relevant one to rank. It is nothing better than eating up your own hard work and efforts.
The topic of keywords would be incomplete without introducing LSI (Latent Semantic Indexing) keywords. These are keywords or phrases that are synonymous with (or have nearly the same meaning as) your target keywords. Instead of repeating only your target keywords, weaving in LSI keywords can help the search engines understand the relevance of your page better.
(Image Source: MLMDreamSaver.com)
Some Miscellaneous Terms
Robots.txt and Sitemap
Robots.txt is a simple text file containing directives that tell crawlers or bots which parts of a website they may or may not crawl. It is a very important file that needs to be properly configured to get the maximum SEO, security and server-performance benefits.
But all bots don’t necessarily obey the restrictions put forward by the robots.txt file, but still most of them do and it’s very important for you to learn the proper way of writing the directives in the robots.txt file to get the most out from it.
(Image Source: SearchEngineLand.com)
A sitemap is another important page, which lists all the user-accessible pages of a website and behaves like a roadmap that helps both users and search engines navigate your site better. The search engine robots use the sitemap to find a pathway to any page within the website.
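For the search engines, the sitemap usually takes the form of an XML file following the sitemaps.org protocol; the URLs and date below are illustrative:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2016-01-01</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/about/</loc>
  </url>
</urlset>
```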
Pages whose content exactly or partially matches pages on a different site, or other pages on your own site, are known as Duplicate Content. The engines always want fresh and unique content, but that does not mean you will be heavily penalized every time you publish duplicate content.
Never try to copy or scrape others' content, as the search engines give very little value to duplicate content and to the sites hosting it. But there are many cases where you can become a victim of duplicate content even when you have never deliberately duplicated anything from any site.
If you are using a CMS (like WordPress), you can fall victim to duplicate content at any time. It is neither your fault nor the fault of the CMS; it is simply the way such systems are designed that leads to duplicate-content issues, and you need to learn the ways to combat this problem and stay safe.
(Image Source: Linkdex.com)
Have you ever wondered what the search engine bots should do if they encounter the same page at two (or more) URLs? Take the example of your homepage, which is the same page whether accessed through, for example:

http://example.com
http://www.example.com
http://example.com/index.html

All of these URLs lead to a single page: the homepage of your site. This is what invites the need for Canonicalization, the process of designating the most important, standard, established and original URL from the set. It is also a very important step in combating the problem of duplicate content.
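The usual way to declare the canonical URL is a `rel="canonical"` link in the `<head>` of every duplicate variant (the URL below is a placeholder):

```html
<head>
  <!-- Tells the engines which URL is the original, preferred version -->
  <link rel="canonical" href="http://www.example.com/">
</head>
```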
Now it's time to end this article, as we have covered plenty of SEO terms, both on-page and off-page, across these two articles. We have not only learned some essential SEO terms, but also the deeper concepts behind them and the way they actually work.
In the upcoming articles of this series, we will learn the techniques for achieving an "almost perfect" on-page optimization. One of the most detailed and complete guides on on-page optimization is coming next, and it's worth the wait. See you in the next article of this series, only on corePHP.