What Is SEO and How Does It Work? – Part I

Welcome to the first part of “The Complete Guide on SEO” series. This will be a long series of around 10 articles covering almost everything you need to know about beginner, intermediate and even slightly advanced level SEO.

I will cover the basics first and then move on to intermediate and advanced SEO topics. It should be an enjoyable and interactive SEO learning course for all readers, with 1 or 2 episodes coming up every week.

Introduction

If you want to understand SEO in depth, it is very important for you to first learn how search engines work. Without a deep and thorough understanding of how the search engines operate, it would be simply impossible for you to learn SEO the right way.

To define it in the simplest possible way, SEO or Search Engine Optimization is the science of making websites search engine-friendly. Many people mistake SEO for a set of cheap techniques to manipulate search engine results. But there is a significant difference between the two words, “optimization” and “manipulation”, and it’s time for you to realize this.

The main goal of SEO is to optimize your site so as to make it more important in the eyes of the search engines. You need to tell the search engines about your existence in this huge ocean of trillions of webpages on the World Wide Web. “Manipulation”, on the other hand, is nothing but deceiving both the search engines and your actual visitors for better visibility.

“Optimization” is the real form of SEO, which we will learn and practice in this series, but first we need to learn how the search engines actually work. You simply can’t be a doctor without studying the human body first.

(Image: SEO-friendly. Source: imz6.com)

How Do Search Engines Work?

The main aim of the search engines is to provide answers, or rather relevant and useful answers, to the questions of searchers. But accomplishing this task is not as simple as you might think. The search engines put in their best efforts digging through trillions of webpages and finding the best possible results for searchers in less than a second. Even a 1-2 second delay can prove dissatisfying to searchers.

The search engines have two main tasks to accomplish:

1) Building a huge directory of information

2) Finding the relevant and useful information from that directory

First Task: Building a Huge Directory of Information

“The more you learn and know, the better your chances of answering all the possible questions you face”, and this is what the search engines try to follow. Their first and foremost aim is to gather as much information as possible from the entire World Wide Web, build a huge directory with this collected information, and maintain it in an organized way so it can be retrieved later.

In technical terms, this huge directory of data is known as the “Index”, and the process of gathering and organizing data in this directory is known as “Indexing”. But the engines need an automated mechanism to jump from one webpage to another and collect data, as it is not possible for people to manually browse each and every webpage and gather information from them.
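To make the idea of an “Index” concrete, here is a toy sketch in Python. The page contents and filenames are made up for illustration; real search engines use far more sophisticated storage, but the core data structure, an inverted index mapping each word to the pages that contain it, looks roughly like this:

```python
from collections import defaultdict

# Made-up pages standing in for the crawled web.
pages = {
    "page1.html": "cheap flowers delivered fast",
    "page2.html": "premium dog food for happy dogs",
    "page3.html": "cheap and fresh flowers online",
}

# Inverted index: word -> set of pages containing that word.
index = defaultdict(set)
for url, text in pages.items():
    for word in text.lower().split():
        index[word].add(url)

# Looking up a word instantly gives every page it appears on.
print(sorted(index["flowers"]))  # ['page1.html', 'page3.html']
```

This is why answering a query doesn’t require re-reading the whole web: the hard work of organizing happens once, at indexing time.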

Therefore, the search engines have built automated robots which jump from one webpage to another with the aim of collecting as much information as possible. These bots are also known as “Crawlers” or “Spiders”. Why do people love calling them Spiders? Simply because the internet is also known as the World Wide Web, and these bots crawl through this web from one corner to the other.


(Image Source: dailydealmedia.com)

Now, you may wonder how these crawlers move from one webpage to another. The answer is simple: through links, or “hyperlinks”, which are the roadways for the robots to travel across this huge universe of webpages. A link is like a wormhole which can take these bots from one webpage to another instantly.

The links act as bridges connecting webpages, allowing both real human visitors and search engine crawlers to jump from one webpage to another without delay.
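The crawling process described above can be sketched in a few lines of Python. The link graph here is hard-coded for illustration; a real crawler would fetch pages over HTTP and extract the hyperlinks from their HTML, but the “follow every link until everything reachable is visited” logic is the same:

```python
# Toy link graph: each page lists the pages it links to.
links = {
    "home.html":  ["about.html", "blog.html"],
    "about.html": ["home.html"],
    "blog.html":  ["post1.html", "post2.html"],
    "post1.html": ["blog.html"],
    "post2.html": [],
}

def crawl(start):
    """Visit every page reachable from `start` by following links."""
    visited, frontier = set(), [start]
    while frontier:
        page = frontier.pop()
        if page in visited:
            continue                          # skip pages already crawled
        visited.add(page)                     # "index" the page
        frontier.extend(links.get(page, []))  # follow its hyperlinks
    return visited

print(sorted(crawl("home.html")))
```

Notice that a page with no links pointing to it would never be discovered, which is exactly why links matter so much in SEO.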

Second Task: Finding Relevant and Useful Information from the Directory

Now comes another important job for the search engines: finding and retrieving the most relevant and useful information from the huge index. This step can essentially be divided into two main parts:

1) Finding the most relevant documents based on the searcher’s query

2) Sorting the relevant documents on the basis of their popularity and usefulness

The first thing that the search engines need to do is find those documents which are relevant to the searcher’s query. For example, if you search for “cheap flowers” and the search engines show you documents about “dog food”, will that be helpful to you at all? You just want some suggestions for buying cheap flowers and are not interested in dog food at all (you may not even have a pet dog in your house).

So, the search engines need to dig through the entire “index” of trillions of web pages and find the results which are the most relevant. But the job doesn’t simply end here, as it would be a complete mess if the searcher were provided with all the relevant documents untouched, without any kind of sorting.

So, the search engines perform an additional task of sorting the documents based on their usefulness, importance and popularity. The best ones are at the top of the list and the worst ones are pushed to the end.
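The two steps, filter by relevance, then sort by importance, can be sketched in Python. The documents and popularity scores below are invented for illustration; real engines combine hundreds of signals rather than a single number, but the overall shape of the process is this:

```python
# Made-up documents with made-up popularity scores.
docs = [
    {"title": "Cheap flowers online",  "text": "buy cheap flowers",       "popularity": 42},
    {"title": "Dog food reviews",      "text": "best dog food",           "popularity": 90},
    {"title": "Flower delivery deals", "text": "cheap flowers delivered", "popularity": 77},
]

def search(query, docs):
    words = set(query.lower().split())
    # Step 1: keep only documents sharing at least one word with the query.
    relevant = [d for d in docs if words & set(d["text"].split())]
    # Step 2: sort the survivors by popularity, best first.
    return sorted(relevant, key=lambda d: d["popularity"], reverse=True)

for d in search("cheap flowers", docs):
    print(d["title"])
```

Note that “Dog food reviews” never appears, however popular it is: relevance filtering happens before popularity sorting, which is exactly the two-part split described above.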

A Practical Scenario


Suppose you search for the term “world wide web” and get about 592 million results for your search query. Behind the scenes, the search engines dug through their entire index of trillions of web pages and found only those documents relevant to your query on “world wide web”, and you are lucky enough to have about 592 million documents relevant to it.

But is it possible for anybody to read a few hundred million documents and find the useful ones? You never asked the search engines for millions of documents, only the most useful ones. Therefore, the engines need to work hard to sort all these results on the basis of their importance, usefulness and popularity. The best ones appear on the first page, and the very best rank in the top 3 positions.

Relevancy and Usefulness – From an SEO Point of View

I don’t want to go too deep here, as I will be focusing on these topics in much greater depth in the upcoming articles of this guide. But now that you understand how the search engines work, it will be easier for you to grasp this concept right away.


(Image Source: SubmitEdge.com)

On-Page Search Engine Optimization

“On-Page SEO” takes care of making your website or documents more relevant and understandable to the search engines. Though the search engines use very sophisticated algorithms to judge the relevancy of your website, they are still not as powerful as humans and cannot think and understand the way humans do.

So, you need to give the engines appropriate clues and hints to pay more attention to your website. You have to make them aware of your presence on this huge World Wide Web. Though the search engines try their best to understand the relevancy of websites and documents, you still need to make certain optimizations to your website to gain more relevancy and importance in the eyes of the search engines.
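To make “clues and hints” a little more concrete: on-page signals include things like the page’s `<title>` tag and its headings, which an indexer can pull straight out of the HTML. Here is a toy extractor using only Python’s standard library; the sample page is invented, and real engines parse far more than these two tags:

```python
from html.parser import HTMLParser

class SignalExtractor(HTMLParser):
    """Collect the text of <title> and <h1> tags, two classic on-page clues."""
    def __init__(self):
        super().__init__()
        self.signals = {}
        self._current = None

    def handle_starttag(self, tag, attrs):
        if tag in ("title", "h1"):
            self._current = tag   # start capturing text for this tag

    def handle_endtag(self, tag):
        if tag == self._current:
            self._current = None  # stop capturing

    def handle_data(self, data):
        if self._current:
            self.signals[self._current] = data.strip()

page = ("<html><head><title>Cheap Flowers Online</title></head>"
        "<body><h1>Fresh Cheap Flowers</h1><p>Buy flowers.</p></body></html>")

p = SignalExtractor()
p.feed(page)
print(p.signals)  # {'title': 'Cheap Flowers Online', 'h1': 'Fresh Cheap Flowers'}
```

A page whose title and headings actually describe its content hands the engines exactly the kind of hint this section is talking about.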

Off-Page Search Engine Optimization

If you guessed it correctly, “Off-Page SEO” takes care of making your site or a particular document more important and trustworthy to the search engines. Relevancy alone is not enough to make your site appear higher in the search results pages, as there can be millions of other sites equally relevant to yours.

You need to be more important and popular in the eyes of the search engines. You need to make the engines pay more attention to you and put you higher up in the search results page. You need to tell them that you are the best and more useful to the searchers.

Although this should happen naturally, sometimes you need to make a sound loud enough to make people aware of your existence.

Suppose you host a website and it is visited by 10 people at first, and 3 of them like it enough to share it with their friends and reference your site from their own sites.

Now with these shares your site becomes visible to 100 more people, and suppose 20 of them repeat the sharing and referencing. So a chain reaction starts, and your website slowly builds up its traffic and, most importantly, its reputation and popularity in the eyes of the engines.
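The chain reaction above can be put into rough numbers. The ratios here are pure assumptions extrapolated from the scenario (about 33 new visitors per share, since 3 shares brought in roughly 100 people, and about 20% of new visitors sharing again), not real-world figures:

```python
reach = 33        # assumed: new visitors brought in per share
share_rate = 0.2  # assumed: fraction of new visitors who share again

visitors, total = 10, 10  # the initial push: 10 first visitors
sharers = 3               # 3 of those first 10 share the site

for round_no in range(1, 4):
    visitors = sharers * reach            # new visitors this round
    total += visitors
    sharers = int(visitors * share_rate)  # some of them share again
    print(f"round {round_no}: {visitors} new visitors, {total} total")
```

Even with modest ratios, three rounds of sharing turn 10 visitors into a few thousand, which is why that initial push matters so much.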

But you need to make it visible to those first 10 people, don’t you? You need to give it the initial push it needs to survive in the long run.

In the next article of the series, we will talk about the necessity of SEO and why we need to practice it. We will also talk about the marketing aspects of SEO and its future in the internet marketing industry. So stay tuned for the next article, and keep looking for it only at corePHP.

Aritra Roy
Aritra Roy is a Blogger, Freelance Writer, Designer and Online Entrepreneur who believes in the power of written words to educate, influence and inspire people.