
What is the search engine mechanism to know before SEO?

No matter how much you learn about SEO and content-marketing techniques, that alone will not produce results. Understand how search engines gather information, and strengthen your website from the structure up.

Interest in SEO (Search Engine Optimization) and content marketing is growing as a way to improve results on the web. However, many people learn only how to raise the ranking for a single keyword, and fail to get results because a bigger piece is missing.

The important point is that search engines do not merely record which keywords appear on which page; they try to learn what kind of structure the site has, and to judge whether it is useful to search users.

If you create lots of keyword-stuffed pages without understanding this mechanism, search engines will not value them. So this time, let's understand how the search engine looks at a site.

What is the difference between search engines and search services?

A search engine is what you use to find pages, images, and videos on the Internet. Google currently holds the largest share, and Yahoo! and Bing are also well known.

These sites are commonly called "search engines," but strictly speaking they should be called "search services." A search engine is the "search mechanism used by a search service." In other words, what provides the search box on a site is the "search service," and the "search engine" behind it is what organizes information and returns it in response to a search.

Yahoo! is a good example. Even now many people use the Yahoo! search box, but the search mechanism behind it is Google's. If all you need is a search box, anyone can start a "search service" by embedding a Google search box on their own site: the keyword you enter is sent to Google's search engine, which finds matching information in Google's database and returns it.

Two types of search engines: directory type and robot type

Search engines can be broadly divided into "directory type" and "robot type." There is also a mechanism that queries multiple robot-type search engines at once, sometimes called "meta search," but first let's understand the directory type and the robot type.

Directory-type search engines

The directory type was originally a mechanism in which people collected and classified information by hand and registered it in a database. Yahoo!, born in the United States in 1994, is the representative example.

At first there was no search box; it was essentially a collection of links. The names and descriptions of many sites were registered, sorted into categories, and chosen from a table of contents.

For example:

 Business > B2B > Metal processing

Descending through the levels of the table of contents in this order, you reach the "list of metal-processing companies" you are after. These destinations are called "directories." The directory tells you that such sites and companies exist, so the page introduced is basically the top page. As a result, in the era of the Yahoo! Directory, more than 70% of access to corporate sites began at the top page.

With directory-type search engines, humans had to collect the information, and the registrars also wrote the descriptions. This took time, but it had the advantage that "poor-quality sites were not listed."

Above all, you could reach the target site just by following the categories, so searching was easy. The drawback was that if you did not know which category the information you wanted belonged to, you could not reach it.

Later a search box appeared on Yahoo!, letting you search the registered categories, site names, and descriptions by keyword. But each site's description was short: at the time, a Yahoo! description was under 30 characters, a terse line such as "Metal-processing company in Tokyo. Molds, etc."

Moreover, review and registration by human eyes took a long time. The wait stretched to three months, so a paid expedited-registration service called "Business Express" was eventually created.

And from 1997, when the number of sites began to explode, the sites registered in the Yahoo! Directory became enormous; even when you reached the category you were aiming for, there were so many sites that you could no longer tell which one you wanted.

As a result, "robot-type" search engines became popular in place of the directory type.

Robot-type search engines

A robot-type search engine is a mechanism in which small programs called robots automatically patrol the Internet. The programs are also called "crawlers" because they crawl across the net. AltaVista, a pioneer of robot-type search engines, appeared in 1995, and Google followed in 1998.

The robot type and the directory type differ in how information is gathered and classified, but the result is the same: the collected information is registered in a database and returned when a user searches.

If you are in charge of the web, remember the difference between the directory type and the robot type as follows.

Because of this difference, robot-type search engines increasingly send visitors to pages other than the top page. Today, less than 20% of access starts at the top page; more than 80% starts at a page deep in the hierarchy.

And because matching is done against the words actually on the site, not a fixed description, sites became able to meet customers beyond the "category" framework decided by people. One former web manager recalled being delighted that, when Google appeared, he could meet customers through the words users actually search.

So "the words users search" are what matter, but at some point this gets misunderstood as "the words the company cares about." It is unfortunate to raise your search ranking with words that no user actually uses.

People searching with a robot-type search engine often have trouble finding a good answer. So they repeat the search many times with keywords they devise, and are delighted when they finally arrive at the site: "I finally found a good one!" The important point is that they are searching precisely because they do not know.

When we think about search engines, we should first picture the happy face of the person who searches and finds the site. If you think about ranking rules first, you will fail to meet people even if your search ranking rises. Let me explain why.

Mechanism of robot-type search engines

Robot-type search engines such as Google gather information using programs called robots or crawlers.

What separates a site that robots visit often from one they rarely visit? Unless your site becomes one that is visited frequently, your SEO efforts will not be rewarded.

When analyzing server logs, I aggregate the pages viewed by robots separately from human access. Content a company is focusing on may never be noticed by the robots, and if the robots don't notice it, there is no SEO or content marketing to speak of.

There are also pages that robots visit often even though the company does not consider them important. For example, the pages of a monthly public-relations magazine are often the most frequently updated information on the site, so they are important to Google's robots.

When Google's robot visits the server, a record remains in the form "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)". This field, called the "user agent," records information about the browser the visitor used.

The "Mozilla (compatible)" part inherits the name of Mosaic, the browser of the early 1990s. Back then, more suspicious programs roamed the Internet, so servers checked the agent information and let through access that identified itself as a Mosaic-lineage browser, on the reasoning that "people viewing the web use Mosaic." As a result, virtually every program that moves around the Internet came to claim to be "Mozilla compatible."
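Separating robot access from human access, as described above, starts with reading the user-agent field out of the server log. The following is a minimal sketch; the log line is a hypothetical example in Apache "combined" format, where the user agent is the last quoted field.

```python
import re

# Hypothetical access-log line (Apache combined format); the user agent
# is the final double-quoted field.
LOG_LINE = (
    '66.249.66.1 - - [10/Oct/2023:13:55:36 +0900] "GET /news/ HTTP/1.1" '
    '200 5120 "-" '
    '"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"'
)

def user_agent(line: str) -> str:
    """Return the last double-quoted field of a combined-format log line."""
    return re.findall(r'"([^"]*)"', line)[-1]

def is_googlebot(agent: str) -> bool:
    """Crude check: the Googlebot token appears in the agent string."""
    return "Googlebot" in agent

print(is_googlebot(user_agent(LOG_LINE)))  # → True
```

Aggregating hits this way per requested path shows which pages the bots actually see, which is the comparison the article recommends.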

It is no coincidence that a robot program claims to be a browser: its job is to read the content of pages, the same as a "text browser." A browser reads a page's content in order to display it on a computer; a search robot reads it in order to send the information to the search engine's database.

The "Googlebot" seen in the agent string is the name of this robot. There are many bots, and small bots swarm over a site, carrying off page information from here and there.

Each bot visits from its own IP address, so you can find out what each bot is doing.
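Because the user agent can be faked, Google documents a way to confirm that an IP address really belongs to Googlebot: reverse-resolve the IP, check that the hostname ends in googlebot.com or google.com, then forward-resolve that hostname back to the same IP. A sketch (the sample hostname is illustrative):

```python
import socket

def looks_like_google_host(host: str) -> bool:
    """The hostname suffixes Google documents for genuine Googlebot IPs."""
    return host.endswith((".googlebot.com", ".google.com"))

def verify_googlebot_ip(ip: str) -> bool:
    """Reverse-resolve the IP, check the hostname, then forward-resolve
    the hostname and confirm it maps back to the same IP."""
    try:
        host, _, _ = socket.gethostbyaddr(ip)
        if not looks_like_google_host(host):
            return False
        return ip in socket.gethostbyname_ex(host)[2]
    except OSError:  # lookup failed: treat as unverified
        return False

print(looks_like_google_host("crawl-66-249-66-1.googlebot.com"))  # → True
```

The round trip (reverse then forward lookup) matters: anyone can point reverse DNS for their own IP at a google.com name, but only Google controls the forward records.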

Bots don't crawl all pages, so create a table of contents

Next, let me explain how the bots come. First comes a bot that acts like a lookout. This bot is a kind of update checker.


If the timing is right, it comes and fetches the latest information as soon as you update; if not, you may wait a while. It is important to find out at what pace this watchman visits.

The watchman checks important pages such as the top page. Whether a page is important is judged by items like the following.

Depending on the site, the pages the watchman checks may be the "news list" or the "public-relations magazine back-number list"; usually they are the table-of-contents pages of sections that are constantly updated.

Next, when the watchman bot notices that such a page has been updated, it calls other bots, as if announcing, "This site has updated." The other bots then arrive together and collect information, each with its own behavior: "I specialize in news," "I'll look wide and shallow," "Let's cover many pages this time." That is the image.

Bots can move from page to page because they find links within pages. When a bot notices a link while reading a page's content, it moves to that page. So if you make news easy to notice, the latest information is picked up quickly. In this way the search engine learns the relationships between pages and the structure of the site.
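Moving from page to page by following links is, at its core, just extracting href targets from the HTML. A minimal sketch using Python's standard html.parser (the table-of-contents snippet is a made-up example):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href targets, the way a crawler discovers pages to visit next."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Hypothetical table-of-contents fragment: links near the top are found first.
page = """
<ul>
  <li><a href="/news/2023-10.html">New! October news</a></li>
  <li><a href="/news/2023-09.html">September news</a></li>
</ul>
"""
collector = LinkCollector()
collector.feed(page)
print(collector.links)  # → ['/news/2023-10.html', '/news/2023-09.html']
```

This is why a clearly linked table of contents helps: every page reachable from it is one hop away for the bot.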

But bots do not look at every page. There is a "deep crawl" roughly once a month that covers many pages, but most bots leave after viewing one or two, so some pages are never seen.

The XML file "sitemap.xml" (sitemap file) exists to avoid this problem: it teaches the search engine the site's structure so that bots can make their rounds efficiently. SEO how-to sites say that if you put it on the server and tell the search engine it exists, the bots will crawl based on it, but in practice it is not that simple.
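For reference, a sitemap.xml is a small XML document following the sitemaps.org protocol. The sketch below builds a minimal one with the standard library; the URLs and dates are placeholder examples, and in practice you would generate the list from your CMS.

```python
import xml.etree.ElementTree as ET

# Placeholder URL list: (location, last-modified date).
URLS = [
    ("https://www.example.com/", "2023-10-01"),
    ("https://www.example.com/news/", "2023-10-10"),
]

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def build_sitemap(urls):
    """Build a minimal sitemap.xml document as a string."""
    ET.register_namespace("", NS)  # emit the sitemaps.org default namespace
    urlset = ET.Element(f"{{{NS}}}urlset")
    for loc, lastmod in urls:
        url = ET.SubElement(urlset, f"{{{NS}}}url")
        ET.SubElement(url, f"{{{NS}}}loc").text = loc
        ET.SubElement(url, f"{{{NS}}}lastmod").text = lastmod
    return ET.tostring(urlset, encoding="unicode")

print(build_sitemap(URLS))
```

The resulting file is uploaded to the server root and referenced from robots.txt or submitted via Search Console; as the article notes, this invites crawling but does not guarantee it.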

In practice, it is best to check in the server logs which pages the bots are actually viewing and adjust accordingly. The basic adjustment is to put a link to the page you want noticed at the top of a page the bots view often. Today, however, this kind of aggregation is difficult to do.

In a robot-type search engine, the "important-page update-monitoring bot" calls out the "partial information-collecting bots." This means that Google is trying to learn the "structure of the site": which pages are tables of contents, and what information useful to searchers belongs under them. The web manager therefore needs to create tables of contents and teach Google that they exist. For bots to visit the site frequently, the following matters.

Bots visit such table-of-contents pages frequently and carry off the latest information as soon as possible. A "monthly public-relations magazine page," for example, generally meets this condition: information is added regularly, content accumulates around its own theme, the table-of-contents page clearly exists, and the link to the latest issue sits at the top. For Google, this is a very easy-to-understand, "useful site structure."

Do not think you can raise your search ranking by hiring a contractor while never updating the site's content. Even if such a site is displayed at the top, it serves no searcher, and Google will notice and the ranking will drop.

If the site is well structured and always transmitting the latest information, bots will visit frequently. And this is not for the search engine's sake, but for customers and prospects.

In addition, the three items called "E-A-T" (Expertise, Authoritativeness, Trustworthiness) are said to be what Google emphasizes when evaluating page quality.

That is: does the site demonstrate the company's expertise, authoritativeness, and trustworthiness?

Some companies balk when they hear about continuous updating: "We don't release new products every year; we have no latest information." Is that true? Your products keep improving. Are the features adopted in a minor change reflected on the site? Are the phone inquiries handled by your customer-support desk reflected in the "Frequently Asked Questions"? That is your expertise, authoritativeness, and trustworthiness.

The information found by the bots is sent to the "indexer"

Next, the bots that visit the site read the information on each page and send it to a program called the "indexer." The indexer, as the name implies, makes an index (table of contents): it organizes the received information and registers it in the database.

Thanks to this organization, the engine can decide how to present the pages that match a search. The rules that determine this order are sometimes called the "algorithm." Strictly, "algorithm" is a general programming word for a "calculation procedure or processing flow," not something specific to search.
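The core data structure an indexer maintains is, roughly, an inverted index: a map from each word to the pages it appears on, so that a keyword lookup is instant. A toy sketch with made-up pages and text:

```python
from collections import defaultdict

# Toy page database: URL -> page text (hypothetical examples).
pages = {
    "/metal/": "metal processing company in tokyo",
    "/molds/": "precision molds and metal parts",
}

def build_index(pages):
    """Map each word to the set of pages containing it (an inverted index)."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in text.split():
            index[word].add(url)
    return index

index = build_index(pages)
print(sorted(index["metal"]))  # → ['/metal/', '/molds/']
```

A real indexer adds far more (normalization, positions, link structure, ranking signals), but the lookup-by-word organization is the same idea.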

Web managers work hard to learn Google's algorithm (meaning, here, its ranking rules). Occasionally the algorithm changes again. But chasing it brings nothing good; leave that to the experts.

Instead, write content in your specialty on the site. Keep your antenna up inside the company, and keep adding and updating the latest information useful to customers. You do not even have to increase the number of pages; refining the content of existing pages is enough.

As a result, the important table-of-contents pages are updated frequently, with links to the updated pages posted at the top. Then the bots come often, move on to the updated pages, and send the information to the indexer.

Relationship between SEO and the search engine

So far we have explained how the search engine works; now let's connect this to SEO. No matter how closely you follow the algorithm, bots will not come to a site that never updates, so placing a sitemap.xml or writing title tags does not, by itself, mean much.

Google's Search Console service lets you send Google an indexing request, so a bot may come once, but that does not mean Google understands that "this site is important."

Also, in content marketing many companies do the work of "creating pages one after another and linking each to the inquiry form," but this does not produce "comprehensive" content. And since the theme bundling the added pages is not unique to the company, no matter how many pages you add, no expertise, authoritativeness, or trustworthiness comes through.

Building a website is like compiling a dictionary. Deciding on a unique theme in your specialty and increasing pages within it is not duplication but "coverage." A site that only makes pages with duplicated content because it wants more keyword-bearing pages will not be ranked higher by Google.

First create at least an "overall table of contents," and only then start creating pages. A table of contents brought by a vendor is a page count tailored to the estimate, so it is not necessarily "comprehensive." Having many pages is not in itself great; a table of contents full of omissions is a liability.

The core of SEO: which page of the site is most useful to the searcher?

The core of SEO is the question: "Which page will most please, or be most useful to, someone who searched with this keyword?"

SEO vendors are not renewal companies that remake the whole site; they adjust the site so as not to affect the whole while making it advantageous for search. This is genuinely difficult, brain-intensive work, but it tends to think about ranking rules "per page," and in that mode the core question becomes invisible.

Some people care only about the ranking, searching a certain keyword every day just to check the position. If you search and find your own site, click through and see which page is being presented. Click properly and display the page. Do it to understand the feelings of a searcher who is disappointed: "Why did this page come up?"

If an unsuitable page is presented, about 95% of search visitors leave immediately. At technical companies, PDFs are often what gets presented. There are even sad sites where a search for the company name turns up the privacy policy.

For example, look at the figure below: there is an overall table of contents, and like a blog, the content is divided into categories.

With a configuration like the figure, the user looks at the category's table of contents and selects an article. The yellow pages are the pages where the searched keyword appears. Now, which page, if presented by the search engine, would be most useful to the user?

The correct answer is 2.

When the "category table of contents" page (2) is presented in the search results, the searcher notices that there are many pages relevant to their interest, and proceeds from there to the page they want.

Even if the "overall table of contents" page (1) is presented in the search results, the searcher cannot tell that there are many pages about the keyword they just searched.

And what if one of the "individual pages" were presented? The page itself matches the searcher's interest and may be interesting. But the searcher will not notice that there are many related pages on the same keyword, and will find it hard to choose a next page.

In this way, what matters is that a table of contents per category (per emphasized keyword) gives the content as a whole a structure, and that the search engine can learn that structure. The mindset of inserting keywords page by page cannot "teach the search engine the structure and have it present the appropriate page." Thinking in single-page terms, that page may gain search visitors, but the bounce rate rises and you cannot guide them effectively. The more keyword-bearing pages you add, the more the effect disperses and weakens.

How to create an effective site for SEO?

For example, suppose that during a renewal you created a site in which similar keywords appear across the entire site.

The keyword exists on each product page, in the corporate information, and in the technical information. But in this state there is no table-of-contents page listing the pages that contain the keyword.

Even if the overall top page plays the role of a table of contents, it links only to "product information" and "corporate information," so it will not be presented to people who searched for this keyword. Sites that keep adding keyword-bearing pages to a blog one after another tend to show exactly this "invisible to search engines" phenomenon.

If you know how a robot-type search engine works (many bots come, learn the site's structure, and fetch page information), you will understand why you need to create a table of contents that cuts across the site and gives it structure.

So how do you deliver the information people want and make the site advantageous for SEO? First, create a table-of-contents page that overlooks the whole, as in the figure below, and link from it to each page.

If this table-of-contents page is presented by the search engine, searchers will notice the wealth of content and select the page they want to see. They will feel, "I found a good site with lots of information!" Moreover, by creating this table-of-contents page, you let the search engine learn the site's structure for this keyword.

Linking to this table-of-contents page from the top page and the sitemap page makes it easier for search engines to crawl it and to grasp its importance. Also link, in text, from the keyword on each scattered page back to this table-of-contents page. By structuring the scattered information on the table-of-contents page in this way, you teach the search engine which page should be presented.

It is also good to have links to the latest information on the table-of-contents page. At the top of the page, add a mark such as "New!" and create an area that introduces and links to the latest items. Each time information is added this table of contents is updated, so Googlebot visits the page frequently, quickly notices the latest information, and passes it to the indexer.

In this way, rather than memorizing the algorithm's details, think about which page the search engine should present, so that searchers notice they have found a site full of related information.

Bonus: "Meta keywords are not used in Google's ranking, but still matter"

Google is said not to use "meta keywords" in ranking, and some sites have stopped using the HTML element. A meta keyword is a tag of the form `<meta name="keywords" content="...">`, listing keywords related to the page.

Check whether this tag is present on your own site. The procedure is as follows.

  1. Display your site's top page in a browser, place the cursor anywhere on the page, and right-click.
  2. Choose "View page source" from the menu.
  3. Press Ctrl+F and search for "meta" or "keyword".
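The same check can be automated over many pages. A minimal sketch that pulls the keywords out of a page's HTML with the standard library (the sample HTML is a made-up example; in practice you would fetch each page and feed it in):

```python
from html.parser import HTMLParser

class MetaKeywordFinder(HTMLParser):
    """Extract the content of <meta name="keywords" ...> from an HTML page."""
    def __init__(self):
        super().__init__()
        self.keywords = None

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            if d.get("name", "").lower() == "keywords":
                self.keywords = d.get("content", "")

# Hypothetical page source.
html = '<head><meta name="keywords" content="metal processing, molds, Tokyo"></head>'
finder = MetaKeywordFinder()
finder.feed(html)
print(finder.keywords)  # → metal processing, molds, Tokyo
```

Running this across a site list quickly shows which pages have the tag and what is written in it.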

Google certainly no longer uses the meta keyword tag for ranking, but Google still reads the tag, and many other search services still use it.

If you write your products' part numbers in the meta keywords of their pages, you can use them in site search: when a part is discontinued, you can search the meta keywords and immediately identify where it appears. Even though Google no longer uses it for ranking decisions, the meta keyword is a tag that serves the web staff in many other operational ways.