HOW SEARCH ENGINES WORK
Search engines have two major functions: crawling and building an index, and providing searchers with a ranked list of the websites they've determined are most relevant to their queries.
Imagine the World Wide Web as a network of stops in a big city subway system.
It is the job of search engines to "crawl" every stop in that city. Each stop is a unique document (usually a web page, but sometimes a PDF, an image, or another file type). To crawl the entire city, the engines use the best path available: links.
Crawling and Indexing
Search engines perform two core tasks:
- Crawling and indexing the billions of documents, pages, files, news stories, videos, and media on the World Wide Web.
- Providing answers to users' queries by retrieving relevant pages and ranking them by relevance.
The link structure of the web serves to bind all of the pages together.
Links allow the search engines' automated robots, called "crawlers" or "spiders," to reach the many billions of interconnected documents on the web.
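The way crawlers follow links can be sketched as a breadth-first traversal of a link graph. The sketch below is a toy model, assuming an in-memory dictionary of invented page names in place of real HTTP fetches and HTML parsing:

```python
from collections import deque

# Toy link graph standing in for the web: each "page" lists the pages it
# links to. All page names here are invented for illustration.
LINK_GRAPH = {
    "home": ["about", "products"],
    "about": ["home", "contact"],
    "products": ["widget", "home"],
    "widget": ["products"],
    "contact": [],
}

def crawl(seed):
    """Breadth-first traversal: follow links from the seed, visiting each page once."""
    seen = {seed}
    queue = deque([seed])
    order = []
    while queue:
        page = queue.popleft()
        order.append(page)
        for link in LINK_GRAPH.get(page, []):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return order

print(crawl("home"))  # pages reachable from the seed, in crawl order
```

A real crawler replaces the dictionary lookup with an HTTP request and link extraction, and adds politeness rules, but the traversal idea is the same.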
Once the engines find these pages, they decipher the code from them and store selected pieces in massive databases, to be recalled later when needed for a search query. To accomplish the monumental task of holding billions of pages that can be accessed in a fraction of a second, search engine companies have constructed datacenters all over the world.
These monstrous storage facilities hold thousands of machines processing large quantities of information very quickly. When a person performs a search at any of the major engines, they demand results instantaneously; even a one- or two-second delay can cause dissatisfaction, so the engines work hard to provide answers as fast as possible.
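At its core, the "store selected pieces for quick recall" step resembles an inverted index: a map from each word to the documents that contain it. Here is a minimal sketch with an invented three-page corpus; real engine indexes are vastly more elaborate:

```python
import re
from collections import defaultdict

# Tiny corpus of invented "pages".
PAGES = {
    "page1": "Search engines crawl the web by following links",
    "page2": "An index lets engines recall pages quickly",
    "page3": "Links connect pages across the web",
}

def build_index(pages):
    """Map each word to the set of page ids containing it (an inverted index)."""
    index = defaultdict(set)
    for page_id, text in pages.items():
        for word in re.findall(r"[a-z]+", text.lower()):
            index[word].add(page_id)
    return index

def search(index, query):
    """Return the pages containing every query word (a simple AND query)."""
    words = query.lower().split()
    if not words:
        return []
    return sorted(set.intersection(*(index.get(w, set()) for w in words)))

index = build_index(PAGES)
print(search(index, "web links"))  # → ['page1', 'page3']
```

Looking words up in a prebuilt index is why results come back in a fraction of a second: the expensive work happened at indexing time, not at query time.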
Search engines are answer machines. When a person performs an online search, the search engine scours its corpus of billions of documents and does two things: first, it returns only those results that are relevant to the searcher's query; second, it ranks those results according to the popularity of the websites serving the information. It is both relevance and popularity that the process of SEO is meant to influence.
How do search engines determine relevance and popularity?
To a search engine, relevance means more than finding a page with the right words. In the early days of the web, search engines didn't go much further than this simplistic step, and search results were of limited value. Over the years, smart engineers have devised better ways to match results to searchers' queries. Today, hundreds of factors influence relevance, and we'll discuss the most important of these in this guide.
Search engines typically assume that the more popular a site, page, or document, the more valuable the information it contains must be. This assumption has proven fairly successful in terms of user satisfaction with search results.
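The best-known link-based popularity measure is PageRank, which Google has described publicly: a page is popular if other popular pages link to it. The sketch below runs a simplified power iteration over an invented three-page link graph; the 0.85 damping factor follows the original PageRank paper, but everything else here is illustrative:

```python
# Simplified PageRank: a page is popular if popular pages link to it.
# The link graph is invented, and every page must have at least one
# outlink (no handling of dangling pages in this sketch).
LINKS = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}

def pagerank(links, damping=0.85, iterations=50):
    """Iteratively redistribute each page's rank across its outlinks."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

ranks = pagerank(LINKS)
print(max(ranks, key=ranks.get))  # "c" receives the most link equity here
```

Page "c" wins because it is linked to by both other pages, including the relatively popular "a". Real engines combine many such signals, not link analysis alone.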
Popularity and relevance aren't determined manually. Instead, the engines employ algorithms (mathematical equations) to sort the wheat from the chaff (relevance), and then to rank the wheat in order of quality (popularity).
These algorithms often comprise hundreds of variables, referred to in the search marketing field as "ranking factors."
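To see how hundreds of weighted variables might combine into a single score, consider this toy model. The factor names and weights are entirely invented; real engines use far more factors and keep their weights secret:

```python
# Toy illustration of combining "ranking factors" into one score.
# Both the factor names and the weights are invented for illustration.
WEIGHTS = {
    "keyword_in_title": 3.0,
    "keyword_in_body": 1.0,
    "inbound_links": 2.0,
    "page_speed": 0.5,
}

def score(page_factors):
    """Weighted sum of the ranking factors a page exhibits."""
    return sum(WEIGHTS[f] * value for f, value in page_factors.items())

pages = {
    "page_a": {"keyword_in_title": 1, "keyword_in_body": 4, "inbound_links": 2},
    "page_b": {"keyword_in_body": 6, "page_speed": 1},
}
ranked = sorted(pages, key=lambda p: score(pages[p]), reverse=True)
print(ranked)  # page_a (score 11.0) outranks page_b (score 6.5)
```

The point of the sketch: a page that repeats a keyword heavily (page_b) can still lose to one with fewer repetitions but stronger signals elsewhere (page_a).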
For instance, if an engine lists Ohio State's page above Harvard's for the search "universities," you can deduce that the engine considers Ohio State's page the more popular and relevant result for that query.
What should I do?
Or, "How Search Marketers Succeed"
The complicated algorithms of search engines may seem impenetrable. Indeed, the engines themselves provide little insight into how to achieve better results or garner more traffic. What little information they have given us about optimization and best practices is described below:
SEO INFORMATION FROM GOOGLE WEBMASTER GUIDELINES
To get an improved ranking in their search engine, Google recommends the following:
- Make pages primarily for users, not for search engines. Don't deceive your users or present different content to search engines than you display to users, a practice commonly referred to as "cloaking."
- Make a site with a clear hierarchy and text links. Every page should be reachable from at least one static text link.
- Create a useful, information-rich site, and write pages that clearly and accurately describe your content. Make sure that your <title> elements and alt attributes are descriptive and accurate.
- Keep URLs descriptive and human-friendly, with keywords where appropriate. Provide one version of a URL to reach a document, using 301 redirects or the rel="canonical" attribute to address duplicate content.
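The "one version of a URL" advice is often called URL canonicalization. Below is a small sketch using Python's urllib; the specific normalization rules (lowercase host, strip default port and trailing slash, drop fragments) are common conventions chosen for illustration, not a standard every engine follows:

```python
from urllib.parse import urlsplit, urlunsplit

def canonicalize(url):
    """Collapse common URL variations to a single canonical form.
    The rules here (lowercase host, drop fragment, strip trailing slash
    and default port) are illustrative conventions, not a standard."""
    parts = urlsplit(url)
    host = parts.netloc.lower()
    if parts.scheme == "http" and host.endswith(":80"):
        host = host[:-3]  # :80 is the default port for http
    path = parts.path.rstrip("/") or "/"
    return urlunsplit((parts.scheme, host, path, parts.query, ""))

# These variants collapse to the same canonical URL.
print(canonicalize("HTTP://Example.com:80/Products/"))  # → http://example.com/Products
print(canonicalize("http://example.com/Products"))      # → http://example.com/Products
```

In practice you would pair a mapping like this with 301 redirects or a rel="canonical" link element, so that engines consolidate duplicate URLs onto one address.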
SEO INFORMATION FROM BING WEBMASTER GUIDELINES
To get better rankings in their search engine, Bing engineers at Microsoft recommend the following:
- Ensure a clean, keyword-rich URL structure is in place.
- Create keyword-rich content that matches what users are searching for, and keep it fresh with regular updates.
- Make sure content you want indexed exists as text, not only inside images. For example, if you want your company name indexed, don't display it only within your company logo.
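Bing's keyword-rich URL advice is often implemented with a "slug" function that turns a page title into a readable URL path segment. A minimal sketch, with rules (lowercase ASCII words joined by hyphens) that are a common convention rather than a requirement of any engine:

```python
import re

def slugify(title):
    """Turn a page title into a keyword-rich URL slug.
    The rules (lowercase ASCII words joined by hyphens) are a common
    web convention, not something any engine mandates."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

print(slugify("How Search Engines Work!"))  # → how-search-engines-work
```

A URL like /how-search-engines-work carries the page's keywords to both users and crawlers, unlike an opaque one such as /page?id=4823.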
Fear Not, Partner in Search Marketing!
In addition to this freely-given advice, over the 15+ years that web search has existed, search marketers have found methods to extract information about how the search engines rank pages. SEO experts and marketers use that data to help their own sites and their clients' sites achieve better positioning.
Surprisingly, the engines support many of these efforts, though the public visibility is frequently low. Conferences on search marketing, such as Search Engine Strategies, Search Marketing Expo, Distilled, and Pubcon, attract engineers and representatives from all of the major engines. Search representatives also assist webmasters by occasionally participating in blogs, forums, and groups.
Perhaps no greater tool is available to webmasters researching the activities of the engines than the freedom to use the search engines themselves to perform experiments, test hypotheses, and form opinions. It is through this iterative, sometimes painstaking process that a considerable amount of knowledge about how the engines operate has been gleaned. Some of the experiments we have tried go something like this:
- Register a new website with nonsense keywords (e.g., falopuyhul.com).
- Create multiple pages on that website, all targeting a similarly ludicrous term (e.g., hiiilopile).
- Make the pages as close to identical as possible, then alter one variable at a time, experimenting with keyword usage, text placement, link structures, formatting, etc.
- Point links at the domain from well-crawled, indexed pages on other domains.
- Make a record of how the pages rank in search engines.
- Now make small alterations to the pages and assess their impact on search results, to determine which factors push a page up or down.
- Record any results that appear to be effective, and re-test them on other domains or with other terms. If several tests consistently return the same results, chances are you've discovered a pattern used by the search engines.
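The last step, checking whether several tests return consistent results, can be sketched as a simple comparison of recorded rankings. The numbers below are invented; position 1 is the best ranking:

```python
# Sketch of the final experiment step: judging whether a change moved
# rankings consistently across test domains. All rankings are invented;
# position 1 is the best possible ranking.
trials = [
    {"before": 8, "after": 3},   # test domain 1
    {"before": 10, "after": 4},  # test domain 2
    {"before": 6, "after": 2},   # test domain 3
]

def consistent_improvement(trials):
    """True if every trial ranked better (a lower position) after the change."""
    return all(t["after"] < t["before"] for t in trials)

print(consistent_improvement(trials))  # True: the change looks like a real pattern
```

A single improved ranking could be noise; only when the same change helps across several unrelated domains and terms is it reasonable to suspect a genuine ranking factor.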