How to build a semantic core from scratch. How to compose a semantic core for an online store: step-by-step instructions.

03.12.2021

The semantic core of a site is the set of keywords (queries) that users type into search engines to find the services, goods, and other information the site offers. For webmasters, it is an action plan for promoting the resource. Ideally, the semantic core is created once, before optimization and promotion begin.


The semantic core of a site is usually compiled in several stages:

  1. All kinds of words (phrases) suitable for the site's theme are selected. To begin with, you can limit yourself to 100-200 search queries. To know which queries suit you, answer the question "What do I want to devote my site to?"
  2. The semantic core is expanded through associative queries.
  3. Unsuitable words are filtered out. Here you discard the phrases for which you will not promote your site; usually more than half of the collected words fall into this category.
  4. Highly competitive queries, for which there is no point in promoting the site, are eliminated. Typically, three words out of every five or more are removed.
  5. Finally, the list of search queries is correctly distributed across the pages of the resource. It is recommended to leave highly competitive queries for the resource's main page; less competitive ones should be grouped by meaning and placed on other pages. To do this, create a document in Excel and break the keywords down by page (a small sketch of such a layout follows below).
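To make step 5 concrete, here is a minimal sketch of such a page-to-keyword breakdown written out as a CSV file from Python; the pages, queries, and frequency numbers are invented for illustration, and in practice many people simply maintain this table directly in Excel.

```python
import csv

# Hypothetical layout: one row per keyword, grouped by landing page.
# All pages, phrases, and frequencies below are made up for the example.
rows = [
    # page,        keyword,                   frequency
    ("/",          "lunch delivery",          2400),  # highly competitive -> main page
    ("/offices",   "lunch delivery office",    320),
    ("/offices",   "business lunch delivery",  210),
    ("/cheap",     "cheap lunch delivery",     150),
]

with open("semantic_core.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["page", "keyword", "frequency"])
    writer.writerows(rows)
```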

Search query selection and frequency check

The first thing to do is to collect as many queries on your topic as possible that interest users on the Web. There are two groups of tools for this:

  • Free ones, which include: Yandex Wordstat, Slovoeb, the old-fashioned manual way, hints from Google (External Keyword Tool), analysis of competitors' semantics, and search suggestions.
  • Paid ones, which include Key Collector, Semrush, Pastukhov's databases, and some other services.

These tools suit different purposes (for example, Semrush is best for Western-market sites). Of course, all of this can be outsourced to optimizers, but then there is a chance you will end up with an incomplete semantic core.

Many people use Pastukhov's databases to collect key phrases, but with Key Collector it is much more convenient to collect queries from the Yandex and Google statistics services.

At the initial stage, it is best to collect queries in Excel; it looks like this:


If Google matters more for your resource, focus on it, but also take into account and analyze keywords from Yandex. It is also very important to collect a long tail of low-frequency queries; they will bring you traffic much faster.

Another option is to find out the key phrases (words) of your competitors and use them. At this stage, you simply collect as many key phrases (words) as possible on your resource's topic, and then move on to the next stage - filtering.

Analyzing queries and removing "dummies"

This stage is easier: here you need to filter out dummy words and those unrelated to the site's topic. For example, you offer lunch delivery in Kiev, but there are other cities on the list.

How do you identify empty queries? Go to Yandex Wordstat and enter the keyword:


You see 881 impressions per month, but to be more precise:


Now a completely different picture emerges. This may not be the best example, but the main thing is that you understand the point: many key phrases appear to show decent traffic when in reality there is almost nothing behind them. That is why you need to weed out such phrases.

For example, if a person, before (or after) typing the query "lunch delivery", entered another phrase in the search bar (within one search session), Yandex assumes that these search phrases are somehow related. If such a relationship is observed across several people, these associative queries are shown in the right column of Wordstat.


These search queries are sorted in the Wordstat window in descending order of how often they were entered together with the main query this month (their frequency of use in the Yandex search engine is shown). Use this information to expand the semantic core of your resource.

Distribution of requests by pages

After that, you need to distribute the keywords (phrases) you have collected across the pages of your site. Distribution is much easier while the pages themselves do not exist yet.

Focus primarily on the keywords in the search queries and their frequency. As for competition, do this: allocate the main page to one or two highly competitive queries.

For medium-competitive or low-competitive queries, optimize the section and article pages accordingly.

If search queries are semantically similar, simply collect those phrases into one group. When compiling keywords to promote a resource, always use not only standard tools but also a creative approach.

Combining non-standard and classic methods, you can simply and quickly create a semantic core of the site, choose the most optimal promotion strategy and achieve success much faster!

Semantic Core is a scary name that SEOs have come up with to refer to a fairly simple thing. We just need to select the key queries for which we will promote our site.

And in this article I will show you how to correctly compose a semantic core so that your site quickly goes to the TOP instead of stagnating for months. There are "secrets" here too.

And before we move on to compiling the semantic core, let's take a look at what it is and what we should eventually arrive at.

What is the semantic core in simple words

Strange as it may seem, the semantic core is a regular Excel file containing a list of the key queries for which you (or your copywriter) will write articles for the site.

For example, this is how my semantic core looks like:

I have marked in green the keywords for which I have already written articles; in yellow, those I am going to write about in the near future. Colorless cells mean that those queries' turn will come a little later.

For each key query, I have determined the frequency and competition, and come up with a "catchy" headline. You should end up with roughly the same file. Right now my semantic core consists of 150 keywords, which means I am supplied with "material" at least 5 months in advance (even if I write one article a day).

Below we will talk about what to expect if you decide to order a semantic core from specialists. For now I will say briefly: you will be given the same kind of list, but for thousands of "keys". However, in a semantic core it is not quantity that matters but quality, and that is what we will focus on.

Why do you need a semantic core at all?

And in fact, why do we need all this torment? You can just write quality articles and attract an audience that way, right? You can write, but you won't attract anyone.

The main mistake of 90% of bloggers is simply writing high-quality articles. I'm not kidding - they have genuinely interesting and useful content. But search engines don't know about it. They are not psychics, just robots, and accordingly they do not put such articles in the TOP.

There is another subtle point here - the title. For example, you have a very high-quality article on the topic "How to do business on the 'mordobook'" (a slang name for Facebook). There you describe everything about Facebook in great detail and professionally, including how to promote communities there. Your article is the highest-quality, most useful and interesting article on this topic on the Internet; nothing else comes close. But it still won't help you.

Why high-quality articles fly out of the TOP

Imagine that your site is visited not by a robot but by a live inspector (an assessor) from Yandex. He realizes you have the coolest article and manually puts you in first place in the search results for the query "Promotion of a community on Facebook".

Do you know what happens next? You will fly out of there very soon, because no one will click on your article even in first place. People enter the query "Community promotion on Facebook", and your headline is "How to properly do business on the 'mordobook'". Original, fresh, funny, but... not what was asked. People want to see exactly what they were looking for, not your creativity.

Accordingly, your article will occupy its place in the TOP of the SERP to no effect. And the living assessor, an ardent admirer of your work, can beg his superiors as much as he likes to leave you at least in the TOP-10. It won't help. All the first places will be taken by articles as empty as seed husks, copied from one another by yesterday's schoolchildren.

But those articles will have the correct "relevant" title - "Promoting a Facebook community from scratch" (step by step, in 5 steps, from A to Z, free, etc.). Frustrating? You bet. Well then, let's fight the injustice and put together a competent semantic core so that your articles take the first places they deserve.

Another reason to start compiling a semantic core right now

There is one more thing that people somehow rarely think about. You need to write articles often - at least every week, and preferably 2-3 times a week - to gain more traffic, faster.

Everyone knows this, but almost no one does it. And all because of "creative stagnation", "just can't force myself", "plain laziness". In fact, the whole problem lies precisely in the absence of a specific semantic core.

Step # 1 - Collecting Base Keys

I entered one of my base keys - "smm" - into the search field, and Yandex immediately gave me a dozen hints about what else might interest people who care about "smm". All I have to do is copy these keys into a notebook. Then I will check each of them the same way and collect hints for them as well.

After the first stage of collecting the semantic core, you should have a text document with 10-30 broad base keys, which we will work with further.

Step # 2 - Parsing the Base Keys in Slovoeb

Of course, if you write an article for the query "webinar" or "smm", a miracle will not happen. You will never reach the TOP with such a broad query. We need to split the base key into many small queries on the topic, and we will do this with a special program.

I use Key Collector, but it is paid. You can use its free analogue - the Slovoeb program, which you can download from the official website.

The trickiest part of working with this program is configuring it correctly. I have already shown how to set up and use Slovoeb in a separate article, but there the focus is on selecting keys for Yandex Direct.

And here, let's take a step-by-step look at the features of using this program for compiling a semantic core for SEO.

First, we create a new project, and name it according to the broad key that you want to parse.

I usually give the project the same name as the base key so I don't get confused later. And let me warn you against one more mistake: don't try to parse all the base keys at the same time, or it will be very difficult to separate the "empty" keywords from the grains of gold. Parse one key at a time.

After creating the project, we carry out the basic operation - we parse the key through Yandex Wordstat. To do this, click the "Wordstat" button in the program interface, enter your base key, and click "Start collection".

For example, let's parse the base key "contextual advertising" for my blog.

After that, the process will start, and after a while the program will give us a result - up to 2000 key queries, which contain "contextual advertising".

Next to each query there will also be a "dirty" frequency - how many times this key (plus its word forms and tails) was searched per month on Yandex. But I do not advise drawing any conclusions from these numbers.

Step # 3 - Collecting Accurate Key Frequencies

The dirty frequency won't show us anything. If you rely on it, don't be surprised later when a key with 1000 "requests" does not bring a single click per month.

We need to get the pure (exact) frequency. To do this, first select all the found keys with checkmarks, then click the "Yandex Direct" button and start the process again. Now Slovoeb will look up the exact monthly frequency for each key.

Now we have an objective picture - how many times each query was entered by Internet users over the last month. Next, I propose grouping all key queries by frequency to make them more convenient to work with.

To do this, click on the "filter" icon in the "Frequency" column! ", And specify - to filter out keys with the value" less than or equal to 10 ".

Now the program will show only the queries whose frequency is less than or equal to 10. You can delete these queries or copy them to another group of key queries for later. Less than 10 is very little - writing articles for these queries is a waste of time (a small sketch of the same cutoff outside the program is shown below).
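If you later export the collected keys out of the program, the same cutoff is trivial to reproduce in code. A minimal sketch, assuming a hypothetical keys.csv export with keyword and exact_freq columns (the file and column names are made up):

```python
import csv

# Hypothetical export: keys.csv with "keyword" and "exact_freq" columns.
with open("keys.csv", encoding="utf-8") as f:
    keys = list(csv.DictReader(f))

kept    = [k for k in keys if int(k["exact_freq"]) > 10]   # worth writing for
dropped = [k for k in keys if int(k["exact_freq"]) <= 10]  # too little traffic

print(f"kept {len(kept)}, dropped {len(dropped)}")
```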

Now we need to choose the keywords that will bring us more or less good traffic. And for this we need to find out one more parameter - the competition level of the query.

Step # 4 - Checking Query Competition

All "keys" in this world are divided into 3 types: high-frequency (HF), mid-frequency (MF), low-frequency (LF). They can also be highly competitive (VC), medium competitive (SK) and low competitive (NK).

As a rule, HF queries are also HC. That is, if a query is often searched on the Internet, there are a lot of sites that want to rank for it. But this is not always the case; there are happy exceptions.

The art of compiling a semantic core lies precisely in finding queries that have high frequency but a low level of competition. Determining the level of competition manually is very difficult.

You can look at indicators such as the number of main pages in the TOP-10, the length and quality of the texts, and the trust level of the sites in the TOP for the query. All of this will give you some idea of how tough the battle for positions is for this particular query.

But I recommend using the Mutagen service. It takes into account all the parameters mentioned above, plus a dozen more that neither you nor I have probably even heard of. After the analysis, the service gives an exact value - the level of competition for this query.

Here I checked the query "setting up contextual advertising in google adwords". Mutagen showed that this key has a competition level of "more than 25" - the maximum value it reports. And this query gets only 11 views per month. It definitely doesn't suit us.

We can copy all the keys picked up in Slovoeb and run a bulk check in Mutagen. After that, we only have to look through the list and take the queries that have many searches and a low level of competition.

Mutagen is a paid service, but you can do 10 checks a day for free, and the cost of a check is very low. In all the time I have worked with it, I have not yet spent even 300 rubles.

By the way, about competition levels: if you have a young site, it is better to choose queries with a competition level of 3-5. And if you have been promoting for more than a year, you can take 10-15.

And speaking of query frequency: we now need to take the final step, which will let you attract a lot of traffic even from low-frequency queries.

Step # 5 - Collecting Tails for Selected Keys

As has been proven and verified many times, your site will receive the bulk of its traffic not from the main keys but from the so-called "tails" - strange keywords people type into the search box with a frequency of 1-2 per month, but of which there are a great many.

To see the "tail" - just go to Yandex and enter your chosen keyword in the search bar. Here's what you will see roughly.

Now you just need to write out these additional words in a separate document and use them in your article. Moreover, you do not need to always put them next to the main key. Otherwise, search engines will see "over-optimization" and your articles will drop in the search results.

Just use them in different places in your article, and then you will receive additional traffic from them as well. I would also recommend that you try to use as many word forms and synonyms as possible for your main keyword query.

For example, take the query "Setting up contextual advertising". Here is how you can reformulate it:

  • Set up = configure, make, build, launch, run, enable, host ...
  • Contextual advertising = context, Direct, teaser, YAN, AdWords ...

You never know exactly how people will look for information. Add all these additional words to your semantic core and use them when writing texts (see the sketch below for one way to combine them).
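If you want to enumerate the candidate combinations mechanically before checking them, a minimal sketch along these lines can help; the word lists are taken from the example above, and every generated variant still needs its real frequency checked in Wordstat:

```python
from itertools import product

# Synonym lists for each part of the query "setting up contextual advertising".
# The lists are illustrative, not exhaustive.
actions  = ["set up", "configure", "launch", "enable"]
subjects = ["contextual advertising", "context", "Direct", "AdWords"]

# Every action/subject pair is a candidate phrase to check in Wordstat.
variants = [f"{a} {s}" for a, s in product(actions, subjects)]
print(len(variants))   # 16 candidate queries
print(variants[:3])    # ['set up contextual advertising', 'set up context', ...]
```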

So, we collect a list of 100-150 keywords. If this is your first time compiling a semantic core, it may take you several weeks.

Or maybe spare your eyes the strain? Maybe the preparation of the semantic core can be delegated to specialists who will do it better and faster? Yes, such specialists exist, but you don't always need their services.

Should I order a semantic core from specialists?

By and large, semantic core specialists will only do steps #1-3 of our scheme for you. Sometimes, for a large additional fee, they will also do steps #4-5 (collecting tails and checking query competition).

After that, they will give you several thousand keywords that you will need to work with in the future.

And the question here is whether you are going to write the articles yourself or hire copywriters. If you want to focus on quality, not quantity, you have to write yourself. But then just getting a list of keys won't be enough. You will need to choose topics you understand well enough to write a quality article.

And this is where the question arises: why do we actually need semantic core specialists at all? Admit it, parsing a base key and collecting the exact frequencies (steps #1-3) is not difficult at all. It will take you literally half an hour.

The most difficult thing is to select HF queries that have low competition. And then, as it turns out, you need HF-LC queries on which you can write a good article. This is exactly what will take 99% of your time working on the semantic core, and no specialist will do it for you. Well, is it worth spending money on such services?

When the services of a semantic core specialist are useful

It's another matter if you plan from the start to bring in copywriters. Then you don't need to understand the subject of the query. Your copywriters won't understand it either; they will simply take a few articles on the topic and compile their own text from them.

Such articles will be empty, squalid, almost useless. But there will be many of them. On your own, you can write at most 2-3 high-quality articles a week, while an army of copywriters will supply 2-3 crappy texts a day. They will be optimized for the queries, which means they will attract some kind of traffic.

In this case, yes, feel free to hire a semantic core specialist. Let them also draw up the briefs (terms of reference) for the copywriters at the same time. But you understand - this will also cost money.

Summary

Let's go over the main points in the article again to consolidate the information.

  • The semantic core is simply a list of keywords for which you will write articles for the site's promotion.
  • Texts must be optimized for exact key queries; otherwise even your highest-quality articles will never reach the TOP.
  • The semantic core is like a content plan for social media. It keeps you out of "creative crisis" and means you always know exactly what you will write about tomorrow, the day after tomorrow, and in a month.
  • For compiling the semantic core it is convenient to use the free Slovoeb program - it is all you really need.
  • There are five steps to compiling a semantic core: 1 - selecting base keys; 2 - parsing the base keys; 3 - collecting the exact frequency of the queries; 4 - checking the competition of the keys; 5 - collecting the "tails".
  • If you want to write the articles yourself, it is better to build the semantic core yourself, for yourself. Semantic core compilers will not be able to help you here.
  • If you want to work for quantity and use copywriters to write the articles, then it is entirely possible to delegate the compilation of the semantic core as well - as long as there is enough money for everything.

I hope this tutorial was helpful. Save it to your favorites so you don't lose it, and share it with your friends. And don't forget to download my book - there I show the fastest way from zero to the first million on the Internet (an extract from 10 years of personal experience =)

See you later!

Your Dmitry Novosyolov

Novice webmasters, faced with the need to create a semantic core, often do not know where to start, although there is nothing complicated about the process. Simply put, you need to collect a list of the key phrases by which Internet users look for information on your site.

The more complete and accurate this list is, the easier it is for a copywriter to write a good text and for you to get high positions in search for the queries you need. This article discusses how to correctly compose a large, high-quality semantic core and what to do with it afterwards so that the site reaches the top and collects a lot of traffic.

The semantic core is a set of key phrases grouped by meaning, where each group reflects one need or desire of the user (an intent) - that is, what a person is thinking about when they type their query into the search bar.

The whole process of creating a kernel can be represented in 4 steps:

  1. We are faced with a task or problem;
  2. We formulate in our head how to find its solution through search;
  3. We type the query into Yandex or Google. Other people do the same;
  4. The most frequent variants of these queries end up in analytics services and become the key phrases that we collect and group by need. The result of all these manipulations is the semantic core.

Is it necessary to select key phrases or can you do without it?

Previously, semantics were compiled to find the most frequent keywords on a topic, work them into the text, and gain good search visibility for them. For the last 5 years, search engines have been moving to a model where a document's relevance to a query is assessed not by the number of words and the variety of their variations in the text, but by how well the intent is covered.

Google started it in 2013 with the Hummingbird algorithm, Yandex in 2016 and 2017 with Palekh and Korolev technologies, respectively.

Texts written without a semantic core cannot fully cover a topic, which means it will be impossible to compete for the TOP on high-frequency and medium-frequency queries. And betting only on low-frequency queries makes no sense - they carry too little traffic.

If you want to successfully promote yourself or your product on the Internet in the future, you need to learn how to compose the correct semantics that fully reveals the needs of users.

Search query classification

Let's look at 3 types of parameters by which keywords are evaluated.

By frequency:

  • High-frequency (HF) - phrases that define the topic. They consist of 1-2 words. On average, the number of searches starts at 1000-3000 per month and, depending on the topic, can reach hundreds of thousands of impressions. Site home pages are most often optimized for them.
  • Medium-frequency (MF) - separate directions within the topic. They mostly contain 2-3 words, with an exact frequency of 500 to 1000. Usually these are the categories of a commercial site or topics for large informational articles.
  • Low-frequency (LF) - queries looking for a specific answer to a question. As a rule, 3-4 words. These can be product cards or article topics; on average, 50 to 500 people search for them per month.
  • When analyzing metrics or data from statistics counters, one more type can be found - micro-LF keys. These are phrases often asked only once. There is no point in optimizing a page for them; it is enough to rank in the top for the LF queries that include them.



By competition:

  • Highly competitive (HC);
  • Medium competitive (MC);
  • Low competitive (LC).

By intent:

  • Navigational. Express the user's desire to find a specific Internet resource or information on it;
  • Informational. Characterized by a need for information in response to the query;
  • Transactional. Directly related to the desire to make a purchase;
  • Fuzzy or general. Those for which it is difficult to determine the intent precisely;
  • Geo-dependent and geo-independent. Reflect the need to search for information or complete a transaction in one's own city, or without regional reference.


Depending on the type of site, the following recommendations can be given for selecting key phrases for the semantic core.

  1. Informational resource. The main emphasis should be on finding article topics in the form of MF and LF queries with low competition. It is recommended to cover the topic broadly and deeply, optimizing each page for a large number of LF keys.
  2. Online store or commercial site. We collect HF, MF, and LF queries, segmenting as precisely as possible so that all phrases are transactional and belong to one cluster. The focus is on finding well-converting low-competition LF keywords.

How to correctly compose a large semantic core - step-by-step instructions

We have reached the main part of the article, where I will walk through the main stages of building the core of a future site one by one. To make the process clearer, every step comes with examples.

Search for basic phrases

Work on the semantic core begins with choosing a primary list of base words and phrases (HF) that best characterize the topic in its broad sense. They are also called markers.

These can be the names of directions, product types, or popular queries from the topic. As a rule, they consist of 1-2 words and have tens, sometimes hundreds of thousands of impressions per month. It is better not to take keys that are too broad, so as not to drown in negative keywords at the expansion stage.

The most convenient way to select marker phrases is to use Yandex Wordstat. Typing a query into it, in the left column we see the phrases the query contains, and in the right column similar queries, among which you can often find suitable topics for expansion. The service also shows the base frequency of a phrase, that is, how many times per month it was asked in all word forms and with any words added to it.

By itself, such a frequency is of little interest, so to get more accurate values, you need to use operators. Let's analyze what it is and what it is for.

Yandex Wordstat operators:

1) "..." - quotes. A query in quotation marks shows how many times the phrase was searched in Yandex in all its word forms, but without other words added (no tails).

2) ! - exclamation mark. Placed before each word in the query, it fixes the word form, giving the number of impressions for the key phrase only in the specified word form, but with tails.

3) "!... !... !..." - quotes plus an exclamation mark before each word. The most important operator for an optimizer. It shows how many times per month the keyword is requested strictly as the given phrase, exactly as written, without any added words.

4) +. Yandex Wordstat ignores prepositions and pronouns in a query. If you need it to take them into account, put a plus sign in front of them.

5) -. The second most important operator. It quickly eliminates unsuitable words: after the analyzed phrase, put a minus and the stop word. If there are several, repeat the procedure.

6) (... | ...). If you need Yandex Wordstat data for several phrases at once, enclose them in brackets and separate them with a vertical bar. In practice, this method is rarely used.
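To make the difference between these operators tangible, here is a minimal sketch that builds the three main variants of one phrase; the phrase is just an example, and the resulting strings are what you would paste into Wordstat:

```python
# A minimal sketch: wrapping one phrase in Yandex Wordstat operators.
phrase = "lunch delivery"

broad  = phrase                       # base frequency: all word forms + any tails
quoted = f'"{phrase}"'                # "lunch delivery": word forms kept, no tails
exact  = '"' + " ".join("!" + w for w in phrase.split()) + '"'
# -> '"!lunch !delivery"': exact word forms only, no tails

for query in (broad, quoted, exact):
    print(query)   # paste each into Wordstat to compare the three frequencies
```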

For convenient work with the service, I recommend installing the "Wordstat Assistant" browser extension. It is available for Mozilla Firefox, Google Chrome, and Yandex Browser, and lets you copy phrases and their frequencies with one click on the "+" or "Add all" icon.


Let's say we decided to start our own SEO blog. Let's choose some basic phrases for it:

  • semantic core;
  • optimization;
  • copywriting;
  • promotion;
  • monetization;
  • Direct

Search for synonyms

When formulating a query to search engines, users can use words that are similar in meaning, but different in spelling.

For example, "car" and "car".

It is important to find as many synonyms for the main words as possible in order to increase the coverage of the future semantic core. If this is not done, then when parsing, we will miss a whole layer of key phrases that reveal user needs.

What we use:

  • Brainstorming;
  • The right column of Yandex Wordstat;
  • Queries typed in Cyrillic;
  • Special terms, abbreviations, and slang expressions from the subject area;
  • The Yandex and Google "searched together with this query" blocks;
  • Competitors' snippets.

As a result of all actions for the selected topic, we get the following list of phrases:


Extending Basic Queries

Let's parse these keywords to identify the basic needs of people in this area. The most convenient way to do this is in the Key Collector program, but if you begrudge the 1800 rubles for a license, use its free analogue - Slovoeb.

In terms of functionality it is certainly weaker, but it is fine for small projects. If you don't want to dig into desktop software, you can use the Just-Magic service or Rush Analytics. Still, it's better to spend a little time and figure out the programs.

I will show how this works in Key Collector, but if you work with Slovoeb, everything will be clear too - the program interfaces are similar.

Procedure:

1) Add the list of basic phrases to the program and collect the base and exact frequency for them. If we plan to promote in a specific region, we specify the regionality. For informational sites this is most often unnecessary.


2) Parse the left column of Yandex Wordstat for the added words to get all the queries from our topic.


3) At the output we got 3374 phrases. Let's collect the exact frequency for them, as in step 1.


4) Let's check whether the list contains keys with zero base frequency.


If there are any, delete them and move on to the next step.

Negative keywords

Many people neglect collecting negative keywords, replacing it with simply deleting unsuitable phrases. But later you will realize that it is convenient and really saves time.

Open the Data -> Analysis tab in Key Collector. Select grouping by individual words and scroll through the list of keys. If we see a phrase that does not fit, click the blue icon and add the word, together with all its word forms, to the stop words.


In Slovoeb, work with stop words is implemented in a more simplified form, but you can likewise create your own list of unsuitable phrases and apply it to the list.

Do not forget to use sorting by base frequency and by number of phrases. This option helps to quickly shrink the list of original phrases or filter out rare ones.


After we have compiled a list of stop words, we apply them to our project and proceed to collecting search suggestions.

Parsing search suggestions

When you enter a query in Yandex or Google, the search engines offer options for continuing it, based on the most popular phrases that users type. These keywords are called search suggestions.

Many of them never make it into Wordstat, so when building the semantic core it is essential to collect such queries as well.

By default, Key Collector parses them by iterating over endings, the Cyrillic and Latin alphabets, and a space after each phrase. If you are ready to sacrifice quantity to significantly speed up the process, tick the box "Collect only TOP suggestions without brute force and space after the phrase".


Among search suggestions you can often find phrases with good frequency and competition ten times lower than in Wordstat, so in narrow niches I recommend collecting as many words as possible.

The time needed to parse suggestions depends directly on the number of simultaneous calls to the search engine's servers. Key Collector supports at most 50-thread operation, but to parse in this mode you will need the same number of proxies and Yandex accounts.

For our project, collecting suggestions produced 29,595 unique phrases. The whole process took a little over 2 hours on 10 threads; with 50 threads it would fit into about 25 minutes.


Determination of the base and exact frequency for all phrases

For further work, it is important to determine the base and exact frequencies and weed out all the zeros. Keep queries with a low number of impressions if they are targeted: they will help you better understand the intent and create a more complete article structure than what is currently in the top.

Before collecting the frequencies, we first filter out everything unnecessary (a sketch of the same cleanup in code follows below):

  • phrases with repeated words;
  • keys containing special characters;
  • duplicate phrases (via the Implicit Duplicates Analysis tool).
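For those who prefer to run this cleanup over a plain text export rather than inside the program, here is a minimal sketch; the file names are invented, and the "implicit duplicate" check is approximated as a word-order-insensitive comparison, which is an assumption about how the tool behaves:

```python
# Hypothetical input: keys.txt, one phrase per line.
seen: set[str] = set()
clean: list[str] = []

for line in open("keys.txt", encoding="utf-8"):
    phrase = line.strip().lower()
    words = phrase.split()
    if not words:
        continue
    if any(not w.isalnum() for w in words):   # keys with special characters
        continue
    if len(words) != len(set(words)):         # repeated words in one phrase
        continue
    fingerprint = " ".join(sorted(words))     # implicit duplicates: same words,
    if fingerprint in seen:                   # different order
        continue
    seen.add(fingerprint)
    clean.append(phrase)

with open("keys_clean.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(clean))
```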


For the remaining phrases, we determine the exact and base frequency.

a) for phrases of up to 7 words:

  • Select them with the filter "Phrase consists of no more than 7 words";
  • Open the "Collect from Yandex.Direct" window by clicking the "D" icon;
  • If necessary, specify the region;
  • Choose the guaranteed-impressions mode;
  • Set the collection period to 1 month and tick the required frequency types;
  • Click "Get data".


b) for phrases of 8 words or more:

  • Set a filter on the "Phrase" column: "consists of at least 8 words";
  • If you need to promote in a specific city, specify the region below;
  • Click the magnifying glass and select "Collect all frequency types".


Cleaning keywords from garbage

Having received the impression counts for our keys, we can start filtering out the unsuitable ones.

Let's consider the order of actions in steps:

1. Go to the "Group Analysis" Key Collector and sort the keys by the number of words used. The task is to find non-target and frequent ones and add them to the list of stop words.
We do everything in the same way as in the "Minus words" paragraph.


2. We apply all the found stop words to our phrase list and look through it to make sure no target queries are lost. After checking, click "Delete marked phrases".


3. We filter out dummy phrases - those rarely used in an exact match but with a high base frequency. To do this, in the Key Collector settings, under the "KEY & SERP" item, insert the calculation formula KEY 1 = (YandexWordstatBaseFreq) / (YandexWordstatQuotePointFreq) and save the changes.


4. We calculate KEY 1 and delete the phrases for which this parameter is 100 or more (a sketch of the same check in code is shown below).
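The same dummy filter is easy to reproduce outside the program. A minimal sketch, assuming a hypothetical keys.csv export with base_freq and exact_freq columns (the file and column names are made up):

```python
import csv

# Hypothetical export: keys.csv with "phrase", "base_freq", "exact_freq" columns.
with open("keys.csv", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

kept = []
for row in rows:
    base = float(row["base_freq"])
    exact = float(row["exact_freq"])
    if exact == 0:            # zero exact frequency: a pure dummy
        continue
    key1 = base / exact       # KEY 1 = base frequency / exact frequency
    if key1 < 100:            # 100 or more means the broad number is inflated
        kept.append(row)

print(f"kept {len(kept)} of {len(rows)} phrases")
```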


The remaining keys need to be grouped by landing page.

Clustering

The distribution of queries into groups begins with clustering phrases by the top of the SERP using the free program Majento Clusterizer. I recommend KeyAssort - a paid analogue with broader functionality and faster operation - but for a small core the free one is quite enough. The only caveat: to work in either of them you will need to buy XML limits. The average price is 5 rubles per 1000 requests, so processing an average core of 20-30 thousand keys will cost 100-150 rubles. See the screenshot below for the address of the service used.


The essence of clustering keys by this method is to group together phrases whose Yandex top-10 results share URLs:

  • with every other phrase in the group (Hard);
  • with the most frequent query in the group (Soft).

Depending on how many such matches are required across sites, clustering thresholds of 2, 3, 4 ... 10 are distinguished (a small sketch of the Hard check follows below).
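To make the Hard rule concrete, here is a minimal sketch with made-up SERP data; in a real run, each phrase's top-10 URL set would come from the XML limits mentioned above, and the greedy grouping order is a simplification of what the clustering tools actually do:

```python
# A minimal sketch of Hard clustering over invented top-10 data.
THRESHOLD = 3  # minimum number of shared top-10 URLs

serps = {
    "semantic core":       {"a.ru", "b.ru", "c.ru", "d.ru"},
    "build semantic core": {"a.ru", "b.ru", "c.ru", "e.ru"},
    "seo copywriting":     {"x.ru", "y.ru", "z.ru"},
}

clusters: list[list[str]] = []
for phrase, urls in serps.items():
    for cluster in clusters:
        # Hard: the phrase must share >= THRESHOLD URLs with EVERY member.
        if all(len(urls & serps[p]) >= THRESHOLD for p in cluster):
            cluster.append(phrase)
            break
    else:
        clusters.append([phrase])

print(clusters)  # [['semantic core', 'build semantic core'], ['seo copywriting']]
```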

The advantage of this method is that phrases are grouped by people's actual needs, not just by synonyms. This lets you immediately understand which keywords can be used on one landing page.

Approaches that suit informational sites:

  • Soft with a threshold of 3-4, followed by manual cleanup;
  • Hard with a threshold of 3, followed by merging clusters by meaning.

Online stores and commercial sites are, as a rule, promoted with Hard clustering at a threshold of 3. The topic is large, so I will cover it later in a separate article.

For our project, grouping by the Hard method with a threshold of 3 produced 317 groups.


Competition check

There is no point in promoting highly competitive queries: it is hard to reach the top, and without the top there will be no traffic to the article. To understand which topics are profitable to write on, we use the following approach:

We look at the total exact frequency of the group of phrases the article is written for, and at the Mutagen competition. For informational sites, I recommend taking into work topics with a total exact frequency of 300 or more and a competition coefficient of 1 to 12 inclusive.

In commercial topics, look at the margin of the product or service and at what competitors in the top 10 are doing. Even 5-10 targeted queries per month can justify a separate page.

How to check query competition:

a) manually, by typing in the appropriate phrase in the service itself or through bulk tasks;


b) in batch mode through the Key Collector program.


Topic selection and grouping

Let's go through each group obtained for our project after clustering and select topics for the site. Majento, unlike KeyAssort, does not let you download the number of impressions for each phrase, so we will have to collect them additionally through Key Collector.

Instructions:

1) Export all groups from Majento in CSV format;
2) Concatenate the phrases in Excel using the "group:key" mask;
3) Load the resulting list into Key Collector. In the settings, the "Group:Key" import mode must be ticked, and checking for the phrase's presence in other groups must be off;


4) Collect the base and exact frequency for the keywords in the newly created groups. (If you use KeyAssort, this is not necessary - the program can work with additional columns.)
5) Look for clusters with a unique intent that contain at least 3 phrases and more than 300 impressions in total across all queries. Then check the 3-4 most frequent of them for Mutagen competition. If among these phrases there are keys with competition below 12, we take the topic to work;

6) Look through the remaining groups. If there are phrases close in meaning that can be covered on the same page, we merge them. For groups containing new meanings, we assess the prospects by the total frequency of the phrases; if it is below 150 per month, we set the group aside until we have gone through the entire core. Perhaps it can be merged with another cluster to reach 300 exact impressions - the minimum at which an article is worth taking into work. To speed up manual grouping, use the auxiliary tools: the quick filter and the frequency dictionary. They will help you quickly find suitable phrases in other clusters;


Attention!!! How do you know that clusters can be merged? Take 2 frequent keys selected in step 5 for the landing page and 1 query from the new group. Add them to Arsenkin's "Upload Top 10" tool, specifying the region if needed. Then look at the number of color-coded intersections of the 3rd phrase with the other two. We merge the groups if there are 3 or more intersections. If there are no matches, or only one, the groups cannot be merged - the intents are different; with 2 intersections, inspect the SERP by hand and use logic.

7) After grouping the keys, we get a list of promising article topics and the semantics for them.


Deleting queries for other content types

When compiling the semantic core, it is important to understand that blogs and content sites do not need commercial queries, just as online stores do not need informational ones.

We go through each group and clean out everything unnecessary. If the intent of a query cannot be determined precisely, we compare the actual SERPs or use the tools:

  • The commerciality check from Pixel Tools (free, but with a daily limit of checks);
  • The Just-Magic service - clustering with the "check query commerciality" box ticked (paid; the cost depends on the tariff).

After that, we move on to the last stage.

Optimizing phrases

We tidy up the semantic core so that it is convenient for an SEO specialist and a copywriter to work with later. To do this, in each group we keep the key phrases that fully reflect people's needs and contain as many synonyms of the main phrases as possible.

Algorithm of actions:

  • Sort the keywords in Excel or Key Collector alphabetically from A to Z;
  • Choose those that reveal the topic from different angles and in different words. All other things being equal, keep phrases with a higher exact frequency or a lower KEY 1 (the ratio of base frequency to exact frequency);
  • Remove keywords with fewer than 7 impressions per month that carry no new meanings and contain no unique synonyms.

An example of what a well-composed semantic core looks like -

In red, I marked phrases that do not fit the intent. If you neglect my recommendations on manual grouping and do not check compatibility, the page will end up optimized for incompatible key phrases, and you will never see high positions for the promoted queries.

Final checklist

  1. Select the main high-frequency queries that define the topic;
  2. Look for synonyms for them, using the left and right columns of Wordstat, competitors' sites, and their snippets;
  3. Expand the collected queries by parsing the left column of Wordstat;
  4. Prepare a list of stop words and apply them to the collected phrases;
  5. Parse Yandex and Google search suggestions;
  6. Collect the base and exact frequencies;
  7. Expand the list of negative keywords; clean out garbage and dummy queries;
  8. Cluster via Majento or KeyAssort: in Soft mode with a threshold of 3-4 for informational sites, by the Hard method with a threshold of 3 for commercial resources;
  9. Import the data into Key Collector and determine the competition of 3-4 phrases for each cluster with a unique intent;
  10. Select topics and decide on landing pages for the queries, based on the total exact impressions of all phrases in a cluster (from 300 for informational sites) and the Mutagen competition of the most frequent of them (up to 12);
  11. For each suitable page, look for other clusters with similar user needs. If they can be covered on one page, merge them. When the need is unclear, or there is a suspicion that the answer to it should be a different type of content or page, check the SERP or use the Pixel Tools or Just-Magic tools. For content sites the core should consist of informational queries, for commercial sites of transactional ones. Delete the rest;
  12. Sort the keys in each group alphabetically and keep those that describe the topic from different angles and in different words. All other things being equal, give priority to queries with a lower base-to-exact frequency ratio and a higher number of exact impressions per month.

What to do with the SEO core after it is created

You hand the list of keys to the author, and he writes an excellent article that fully covers all the meanings. Ah, if only... A decent text will only come out if the copywriter clearly understands what you want from him and how he can check his own work.

Let's analyze 4 components; work through them well, and you are guaranteed to get plenty of targeted traffic to the article:

A good structure. We analyze the queries selected for the landing page and identify what needs people have within the topic. Then we write an article outline that fully answers them. The task is to ensure that people who land on the page get a complete, comprehensive answer matching the semantics you have compiled. This produces good behavioral signals and high relevance to the intent. Once you have made the outline, look at competitors' sites by typing in the main query you are promoting. Do it exactly in this order: first on your own, then see what others have and refine if necessary.

Optimization for keys. The article itself is optimized for the 1-2 most frequent keys with Mutagen competition up to 12. Another 2-3 mid-frequency phrases can be used as subheadings, but in diluted form - that is, with extra words inserted, using synonyms and word forms. From the low-frequency phrases we pull out the unique part - the tail - and embed it evenly in the text. The search engines will find and piece everything together themselves.

Synonyms for the main queries. We write them out separately from the semantic core and have the copywriter use them evenly throughout the text. This reduces the density of the main words while keeping the text optimized enough to reach the top.

Topic-setting phrases. By themselves, LSI phrases do not promote a page, but their presence suggests that the text was most likely written by an expert, which is a plus for content quality. To find thematic phrases, we use the "Terms of Reference for a Copywriter" tool from Pixel Tools.


An alternative method of selecting key phrases using services for competitor analysis

There is a quick approach to building a semantic core that works for both novice and advanced users. The essence of the method is that we initially select keys not for the entire site or a category, but for a specific article or landing page.

It can be implemented in 2 ways, which differ in how we choose the topic for the page and how deeply we expand the key phrases:

  • by parsing primary keys;
  • based on competitor analysis.

Each of them can be implemented at a simpler and more complex level. Let's take a look at all the options.

Without using programs

A copywriter or webmaster often has no desire to deal with the interfaces of numerous programs, yet still needs good topics and key phrases for them. This method is exactly for beginners and for those who do not want to bother: all actions are performed without additional software, using simple and understandable services.

What you need:

  • The Keys.so service for competitor analysis - 1,500 rubles (15% discount with the "altblog" promo code);
  • Mutagen - query competition checks at 30 kopecks each; collecting base and exact frequency at 2 kopecks per check;
  • Bukvarix - the free version, or a business account for 995 rubles (currently discounted to 695 rubles).

Option 1. Selecting a topic by parsing basic phrases:

  1. Select the main keys of the topic in the broad sense, using brainstorming and the left and right columns of Yandex Wordstat;
  2. Then look for synonyms for them, using the methods described earlier;
  3. Feed all the collected marker queries into Bukvarix (a paid tariff is required) in the extended "Search by keyword list" mode;
  4. Set the filter: "!Exact !frequency" from 50, number of words from 3;
  5. Export the entire list to Excel;
  6. Select all the keywords and send them for grouping to the Kulakov Clusterizer service. If the site is regional, select the desired city. Leave the clustering threshold at 2 for informational sites; set it to 3 for commercial ones;
  7. After grouping, select article topics by examining the resulting clusters. Take those with 3 or more phrases and a unique intent. Analyzing the URLs of top-ranking sites in the "Competitors" column (on the right in Kulakov's table) helps you better understand people's needs. Do not forget to check Mutagen competition: check 2-3 queries from the cluster, and if all are above 12, the topic should not be taken;
  8. Once the name of the future landing page is decided, it remains to choose key phrases for it;
  9. From the "Competitors" field, copy 3 URLs with the appropriate page type (for an informational site take links to articles; for a commercial one, to stores);
  10. Insert them one by one into keys.so and download all their key phrases;
  11. Combine the lists in Excel and delete duplicates;
  12. The service's data alone is not enough, so we expand it, using Bukvarix again;
  13. Send the resulting list for clustering to the Kulakov Clusterizer;
  14. Select the query groups that fit the landing page, judging by intent;
  15. Collect the base and exact frequency through Mutagen in "Bulk tasks" mode;
  16. Export the list with the refined impression data to Excel, and remove the zeros for both frequency types;
  17. Also in Excel, add a formula for the ratio of base frequency to exact frequency, and keep only the keys for which this ratio is less than 100;
  18. Delete queries for other content types;
  19. Keep the phrases that reveal the main intent as fully as possible and in different words;
  20. Repeat steps 8-19 for the rest of the topics.

Option 2. Choosing a topic through competitor analysis:

1. We look for top sites in our topic by entering high-frequency queries and examining the results with Arsenkin's "Top-10 Analysis" tool. It is enough to find 1-2 suitable resources. If the site is promoted in a specific city, we specify the region;
2. Go to the keys.so service, enter the URLs of the sites we found, and see which of the competitors' pages bring the most traffic;
3. Check 3-5 of their queries with the highest exact frequency for competition. If all the phrases are above 12, it is better to look for a less competitive topic;
4. If you need more sites to analyze, open the "Competitors" tab and set the parameters: similarity - 3, thematicity - 10. Sort the data in descending order of traffic;
5. Once a topic is chosen, enter its name into the search and copy 3 URLs from the top of the results;
6. Then repeat steps 10-19 from option 1.

Using Key Collector or Slovoeb

This method differs from the previous one only in using the Key Collector program for some operations and in a deeper expansion of the keys.

What you need:

  • Key Collector program - 1800 rubles;
  • all the same services as in the previous method.

"Advanced - 1"

  1. Parse the left and right Yandex Wordstat columns for the entire list of phrases;
  2. Collect the exact and base frequency through Key Collector;
  3. Calculate the KEY 1 indicator;
  4. Delete the zero queries and those with KEY 1 > 100;
  5. Then do everything as in steps 18-19 of option 1.

"Advanced - 2"

  1. Do steps 1-5 as in option 2;
  2. Collect the keys from keys.so for each URL;
  3. Remove duplicates in Key Collector;
  4. Repeat steps 1-4 of the "Advanced - 1" method.

Now let's compare the number of keys obtained and their total exact frequency when the semantic core is collected by the different methods:

As the table shows, the best result came from the alternative method of building a core for a page - "Advanced 1, 2". It yielded 34% more target keys, and the total traffic across the cluster turned out to be 51% higher than with the classical method.

The screenshots below show what the finished core looks like in each case. I took phrases with 7 or more exact impressions per month so the keyword quality can be judged. For the full semantics, see the table at the "View" link.

A)


B)


C)

Now you know that the most common way - the one everyone uses - is not always the most correct, but you should not abandon the other methods either. Much depends on the topic itself. For commercial sites, where there are not that many keys, the classic version is quite enough. On informational sites you can also get excellent results if you draw up the copywriter's brief correctly and build a good structure and SEO optimization. We will talk about all this in detail in the following articles.

3 common mistakes when creating a semantic core

1. Skimming the surface when collecting phrases. Parsing Wordstat alone is not enough for a good result! More than 70% of the queries people enter rarely or occasionally never get there at all. Yet among them there are often key phrases with good conversion and genuinely low competition. How not to miss them? Be sure to collect search suggestions and combine data from different sources (site counters, statistics services, and keyword databases).

2. Mixing informational and commercial queries on one page. We have already discussed that key phrases differ by the type of need. If a visitor who wants to make a purchase arrives at your site and sees an article page in response to his query, will he be satisfied? No! Search engines reason the same way when ranking a page, so you can immediately forget about the top for MF and HF phrases. Therefore, if you are in doubt about a query's type, check the SERP or use the Pixel Tools or Just-Magic tools to determine its commerciality.

3. Choosing highly competitive queries for promotion. Positions for HF HC phrases depend 60-70% on behavioral factors, and to earn those you need to get into the top first. The more contenders, the longer the queue and the higher the requirements for the sites. Everything is like in life or sport: becoming world champion is much harder than earning the same title in your own city.
So it is better to enter a quiet niche rather than an overheated one.

Getting to the top used to be even harder. Positions were held on a first-come-first-served basis: the leaders stayed in front, and they could only be displaced by accumulating behavioral factors - and how do you get those from the second or third page? Yandex broke this vicious circle in the summer of 2015 by introducing the "multi-armed bandit" algorithm. Its essence is to randomly raise and lower site positions in order to see whether more worthy candidates for the top have appeared.

How much money do you need to start?

To answer this question, let's calculate the cost of the required arsenal of programs and services for preparing and grouping key phrases for 100 articles.

The very minimum (suitable for the classic version):

1. Slovoeb - free
2. Majento Clusterizer - free
3. Captcha recognition - 30 rubles
4. XML limits - 70 rubles
5. Checking query competition with Mutagen - 10 checks a day free of charge
6. If you are in no hurry and are ready to spend 20-30 hours on parsing, you can do without proxies.
—————————
The total is 100 rubles. If you enter the captchas yourself and get XML limits in exchange for those shared from your own site, you can actually prepare the core for free. You will just need to spend an extra day setting up and mastering the programs, and another 3-4 days waiting for the parsing results.

The standard semantist's set (for the advanced and classic methods):

1. Key Collector - 1900 rubles
2. Key Assort - 1700 rubles
3. Bukvarix (business account) - 650 rubles.
4. Service of the analysis of competitors keys.so - 1500 rubles.
5. 5 proxies - 350 rubles per month
6. Anticaptcha - about 30 rubles.
7. Xml limits - about 80 rubles.
8. Checking competition with Mutagen (1 check = 30 kopecks) - we will keep within 200 rubles.
———————-
The total is 6410 rubles. You can, of course, do without KeyAssort by replacing it with the Majento clusterer, and use Slovoeb instead of Key Collector. Then 2810 rubles will be enough.

Is it worth trusting the development of the "pro" kernel, or is it better to figure it out and do it yourself?

If a person regularly does what he loves and keeps improving at it, then, logically, his results should be clearly better than a beginner's. But with keyword selection, things turn out exactly the opposite.

Why does a beginner do better than a professional in 90% of cases?

It is all about the approach. The hired specialist's task is not to collect the best possible core for you, but to finish the job in the shortest possible time at a quality level you will accept.

If you do everything yourself according to the algorithms mentioned earlier, the result will be an order of magnitude higher for two reasons:

  • You understand the topic. That means you know the needs of your clients or site users and, at the initial stage, can broaden the seed queries for parsing as much as possible, using a large number of synonyms and specific terms.
  • You are interested in doing everything properly. The business owner, or an employee of the company, will naturally approach the issue more responsibly and try to do everything as thoroughly as possible. The more complete the core and the more low-competition queries it contains, the more targeted traffic it will bring, which means higher profit with the same investment in content.

Where do the remaining 10% come from, the ones who will build the core better than you?

Look for companies in which the selection of key phrases is a core competence. And discuss upfront what result you want: one like everyone else's, or the maximum. In the second case it will cost 2-3 times more, but in the long term it will pay off many times over. For those who want to order the service from me, all the necessary information and conditions are available. I guarantee the quality!

Why is it so important to fully work out semantics

Here, as in any field, the principle of the "good and bad choice" applies. What is its essence?
Every day we choose between options such as:

  • dating someone who seems fine but does not really move you, or figuring out what you want and building a harmonious relationship with the right person;
  • doing work you dislike, or finding something your heart is in and making it your profession;
  • renting store premises in a low-traffic location, or waiting until a truly suitable spot becomes available;
  • hiring not the best sales manager, but merely the one who performed best in today's interviews.

Everything seems clear. But now look at it from the other side, treating each choice as an investment in the future. This is where it gets interesting!

You saved 3-5 thousand on the semantic core. Pleased as punch! But here is what it leads to:

a) for information sites:

  • A loss of traffic by at least 1.5 times with the same investment in content. Comparing different methods of obtaining key phrases, we have already established empirically that the alternative method collects 51% more;
  • The project sinks faster in the search results: competitors easily overtake us by giving a more complete answer to the intent.

b) for commercial projects:

  • Fewer leads, or more expensive ones. If our semantics is like everyone else's, we are promoting on the same queries as our competitors. With demand constant, a large number of identical offers reduces each player's share of the market;
  • Lower conversion. Specific queries convert into sales better. By saving on the semantic core, we lose the most converting keys;
  • Harder promotion. The more contenders for the top, the higher the requirements for each candidate.

I wish you always make the good choice and invest only at a profit!

P.S. For the bonus "How to write a good article with bad semantics", as well as other life hacks for promoting and making money on the Internet, see my group.

Organic search is the most effective source of targeted traffic. To use it, you need to make the site interesting and visible to users of the search engines Yandex and Google. There is no need to reinvent the wheel here: it is enough to define what the audience of your project is interested in and how they seek information. This task is solved when building a semantic core.

A semantic core is a set of words and phrases that reflect the topic and structure of the site. Semantics is a branch of linguistics that studies the meaning of language units. Hence the terms "semantic core" and "core of meaning" are identical. Remember this; it will keep you from sliding into keyword stuffing, cramming content with keywords.

Composing the semantic core, you answer the global question: what information can be found on the site. Since one of the main principles of business and marketing is customer focus, you can look at the creation of the semantic core from the other side. You need to determine what search terms users use to search for information that will be published on the site.

Building a core of meaning solves another problem: distributing search phrases across the pages of the resource. Working with the core, you determine which page answers a particular search query or group of queries best.

There are two approaches to solving this problem.

  • The first assumes creating the site structure based on the results of analyzing users' search queries. In this case, the semantic core defines the framework and architecture of the resource.
  • The second involves planning the resource structure before analyzing search queries. In this case, the semantic core is distributed over a ready-made framework.

Both approaches work. But it is more logical to plan the site structure first and only then determine the queries by which users will find a given page. That way you stay proactive: you choose for yourself what to tell potential customers. If you fit the resource structure to the keys, you remain merely an object, reacting to the environment rather than actively changing it.

The difference between the SEO and marketing approaches to building a core needs to be clearly emphasized here. Here is the logic of a typical old-school SEO: to build a website, you need to find keywords and select the phrases that are easiest to reach the top of the results with. After that, you need to create a site structure and distribute the keys across the pages. The page content then needs to be optimized for the key phrases.

This is the logic of a businessman or a marketer: you need to decide what information to broadcast to the audience using the site. To do this, you need to know your industry and business well. First, you need to plan a rough site structure and a preliminary list of pages. After that, when building a semantic core, you need to find out how the audience is looking for information. With the help of content, you need to answer the questions that the audience asks.

What are the negative consequences of the "SEO" approach in practice? When development always starts from keyword statistics rather than from the audience, the informational value of the resource decreases. The business must shape trends and choose what to tell customers; it should not limit itself to reacting to search-phrase statistics and creating pages only for the sake of optimizing for some key.

The planned result of building a semantic core is a list of key queries distributed across the pages of the site. It contains page URLs, search queries and an indication of their frequency.

How to build a site structure

The site structure is a hierarchical page layout. With its help, you solve several problems: you plan the information policy and the logic of information presentation, ensure the usability of the resource, and ensure that the site meets the requirements of search engines.

To build a structure, use a convenient tool: spreadsheet editors, Word or other software. You can also draw the structure on a piece of paper.

When planning your hierarchy, answer two questions:

  1. What information do you want to communicate to users?
  2. Where should this or that information block be published?

Imagine planning a site structure for a small pastry shop. The resource includes information pages, a publications section, and a showcase or product catalog. Visually, the structure might look like this:

For further work with the semantic core, arrange the site structure as a table. In it, list the page names and indicate their subordination. Also include columns for page URLs, keywords, and frequency. The table might look like this:

You will fill in the URL, Keys and Frequency columns later. Now go to search for keywords.
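For illustration, the same table can also be kept as structured data. Here is a hypothetical sketch for the pastry-shop example; the url, keys and frequency fields stay empty until the later stages:

```python
# Hypothetical rows of the structure table described above. The pastry-shop
# pages are examples; url/keys/frequency are filled in during later stages.
structure = [
    {"page": "Home",           "parent": None,      "url": "", "keys": [], "frequency": None},
    {"page": "Catalog",        "parent": "Home",    "url": "", "keys": [], "frequency": None},
    {"page": "Cakes to order", "parent": "Catalog", "url": "", "keys": [], "frequency": None},
    {"page": "Blog",           "parent": "Home",    "url": "", "keys": [], "frequency": None},
]

# Print the hierarchy: each page with its parent.
for row in structure:
    print(row["page"], "->", row["parent"])
```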

What you need to know about keywords

To build a semantic core, you must understand what keywords are and which keys your audience uses. With this knowledge you will be able to use keyword research tools correctly.

What keys are used by the audience

Keys are words or phrases that potential customers use to find the information they need. For example, to make a cake, the user enters the query "Napoleon recipe with photo" into the search box.

Keywords are classified according to several criteria. By popularity, high-, medium- and low-frequency queries are distinguished. According to various sources, search phrases are grouped as follows:

  • Low-frequency: queries with up to 100 impressions per month. Some experts raise the group's threshold to 1,000 impressions.
  • Mid-frequency: queries with up to 1,000 impressions. Sometimes experts raise the threshold to 5,000 impressions.
  • High-frequency: phrases with more than 1,000 impressions. Some authors count keys with 5,000 or even 10,000 queries as high-frequency.

The difference in the frequency estimate is due to the different popularity of the topics. If you are building a core for an online store that sells laptops, the phrase "buy a samsung laptop" with a frequency of about 6 thousand per month will be mid-range. If you are building a core for a sports club site, the request for "aikido section" with a frequency of about 1000 requests will be high-frequency.

What do you need to know about frequency when composing a semantic core? According to various sources, from two-thirds to four-fifths of all user queries are low-frequency. Therefore you need to build as broad a semantic core as possible, and in practice it must be constantly expanded with low-frequency phrases.

Does this mean that high and medium frequency requests can be ignored? No, you can't do without them. But consider low-frequency keys as the main resource for attracting targeted visitors.
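As a small illustration, the frequency bands described above could be encoded like this; the 100 and 1,000 thresholds follow the lower bounds given in the text and should be tuned to the popularity of your own topic:

```python
# A sketch of the frequency split described above. The thresholds (100 and
# 1,000 impressions per month) are the lower bounds from the text; adjust
# them to the popularity of your niche.

def frequency_band(impressions_per_month: int) -> str:
    if impressions_per_month <= 100:
        return "low-frequency"
    if impressions_per_month <= 1000:
        return "mid-frequency"
    return "high-frequency"

print(frequency_band(6000))  # "high-frequency" by these absolute thresholds
```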

According to the needs of users, the keys are combined into the following groups:

  • Informational. The audience uses them to find information. Examples: "how to store baked goods correctly", "how to separate the yolk from the white".
  • Transactional. Users enter them when they plan to take an action. This group includes the keys "buy a bread maker", "download a recipe book", "order pizza with delivery".
  • Other queries. These are key phrases for which it is hard to determine the user's intent. For example, a person entering the key "cake" may be planning to buy one or to bake one himself. In addition, the user may simply want information about cakes: definition, features, classification, and so on.

Some experts distinguish navigation queries into a separate group. With their help, the audience searches for information on specific sites. Here are some examples: "laptops connected", "city express track delivery", "sign up for LinkedIn." Navigation queries that are not specific to your business can be ignored when compiling the semantic core.
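To make the classification concrete, here is a naive sketch that sorts phrases by specifier words; the word lists are illustrative assumptions, and ambiguous phrases still need manual or SERP-based review:

```python
# A naive intent classifier based on specifier words. The word sets are
# illustrative assumptions; real cores need manual review or SERP-based
# tools for ambiguous phrases.

TRANSACTIONAL = {"buy", "order", "download", "price", "delivery"}
INFORMATIONAL = {"how", "recipe", "why", "what"}

def classify_intent(phrase: str) -> str:
    words = set(phrase.lower().split())
    if words & TRANSACTIONAL:
        return "transactional"
    if words & INFORMATIONAL:
        return "informational"
    return "other"

for q in ("buy a bread maker", "how to separate the yolk from the white", "cake"):
    print(q, "->", classify_intent(q))
```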

How do you use this classification when constructing a semantic core? First, consider the needs of your audience when distributing keys across pages and creating your content plan. Everything is obvious here: publications in informational sections should answer informational queries. Most key phrases without an explicit intent should also go there. Transactional queries should be answered by pages in the "Store" or "Showcase" sections.

Second, remember that many transactional queries are commercial. To attract natural traffic for the query "buy a Samsung smartphone", you will have to compete with Euroset, Eldorado and other business heavyweights. You can avoid this unequal competition by using the advice given above: expand the core as much as possible and lean on lower-frequency queries. For example, the frequency of the query "buy a Samsung Galaxy s6 smartphone" is an order of magnitude lower than the frequency of the key "buy a Samsung Galaxy smartphone".

What you need to know about the anatomy of search queries

Search phrases consist of several parts: a body, a specifier and a tail. This is easiest to see with an example.

What about the query "cake"? It cannot be used to determine the user's intent. It is high-frequency, which determines the high competition in the search results. Using this request for promotion will bring a large proportion of non-targeted traffic, which negatively affects behavioral metrics. The high frequency and non-specificity of the “cake” request is determined by its anatomy: it consists only of the body.

Now look at the query "buy a cake". It consists of the body "cake" and the specifier "buy". The latter determines the user's intent: specifiers indicate whether a key is transactional or informational. Consider these examples:

  • Buy a cake.
  • Cake recipes.
  • How to serve a cake.

Sometimes specifiers express exactly opposite intents. A simple example: with the same body "car", the specifiers "buy" and "sell" describe users planning opposite actions.

Now look at the query "buy cake with delivery". It consists of a body, a specifier, and a tail. The tail does not change the user's intent or informational need, but details it. Consider these examples:

  • Buy cake online.
  • Buy a cake in Tula with delivery.
  • Buy homemade cake in Oryol.

In each case, the intention of the person to purchase the cake is visible. And the tail of the key phrase details this need.

Knowledge of the anatomy of search phrases allows you to derive a conditional formula for the selection of keys for the semantic core. You must define basic terms related to your business, product, and user needs. For example, customers of a confectionery firm are interested in cakes, pastries, cookies, pastries, cupcakes and other confectionery products.

After that, you need to find the tails and specifiers that the project's audience uses with the basic terms. Tailed phrases simultaneously increase your reach and reduce the competitiveness of the core. A sketch of this combinatorial "formula" is shown below.
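Here is a minimal sketch of that "formula": combining bodies with specifiers and tails to produce seed phrases for parsing. All the word lists are examples:

```python
# A sketch of the body x specifier x tail "formula" for generating seed
# phrases to feed into a parser. All word lists are illustrative examples.
from itertools import product

bodies = ["cake", "pastry", "cupcake"]
specifiers = ["buy", "order", "recipe"]
tails = ["", "with delivery", "in Tula"]

# Empty tails are filtered out so "buy cake" appears without a trailing space.
seeds = [" ".join(filter(None, (s, b, t))) for b, s, t in product(bodies, specifiers, tails)]
# e.g. "buy cake", "buy cake with delivery", "order cupcake in Tula", ...
print(len(seeds), "seed phrases generated")
```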

Long tail is the term for a strategy of promoting a resource with low-frequency keywords. It consists in using the maximum number of keys with a low level of competition. Low-frequency promotion makes marketing campaigns highly effective, for the following reasons:

  • Promotion by low-frequency keywords requires less effort than promotion by high-frequency competitive queries.
  • Working with long-tail queries reliably brings results, even though marketers cannot always predict exactly which keywords will generate traffic. With high-frequency queries, honest marketers cannot guarantee results at all.
  • Low-frequency queries return results that match users' needs more precisely.

For large sites, the semantic core can contain tens of thousands of requests, and it is almost impossible to select and correctly group them by hand.

Services for compiling the semantic core

There are quite a few keyword research tools out there. You can build a core using paid or free services and programs. Choose a specific tool depending on your tasks.

Key Collector

You cannot do without this tool if you are engaged in internet marketing professionally, develop several sites or compose the core for a large site. Here is a list of the main tasks that the program solves:

  • Selection of keywords. Key Collector collects queries through Yandex's Wordstat.
  • Parsing search suggestions.
  • Clipping inappropriate search phrases with stop words.
  • Filtering requests by frequency.
  • Search for implicit duplicate queries.
  • Definition of seasonal requests.
  • Collection of statistics from third-party services and platforms: Liveinternet.ru, Metrika, Google Analytics, Google AdWords, Direct, Vkontakte and others.
  • Search for pages relevant to the request.
  • Search query clustering.

Key Collector is a multifunctional tool that automates the operations required to build a semantic core. The program is paid. Everything Key Collector "knows how" to do can also be done with free alternatives, but you will have to juggle several services and programs.

SlovoEB

This is a free tool from the creators of Key Collector. The program collects keywords through Wordstat, determines the frequency of queries, parses search suggestions.

To use the program, in the settings, specify the username and password for the "Direct" account. Do not use your main account, as Yandex may block it for automatic requests.

Create a new project. On the "Data" tab, select the "Add phrases" option. Specify search phrases that the project audience is supposed to use to find information about products.

In the "Collect keywords and statistics" section of the menu, select the required option and run the program. For example, determine the frequency of key phrases.

The tool allows you to select keywords, as well as automatically perform some tasks related to the analysis and grouping of queries.

Search queries in Yandex.Webmaster

To see which phrases a page is displayed for in Yandex search results, open the "Search queries" tab in the Yandex.Webmaster panel -> "Recent requests".

We see the phrases for which there were clicks or the site snippet was shown in the TOP-50 of Yandex for the last 7 days.

To view data only for the page that interests us, you need to use filters.

The possibilities of searching for additional phrases in Yandex.Webmaster are not limited to this.

Go to the "Search queries" tab -> Recommended Requests.

There may not be many phrases here, but you can find additional phrases for which the promoted page does not fall into the TOP-50.

Request history

The big disadvantage of visibility analysis in Yandex.Webmaster is, of course, that data is available only for the last 7 days. To work around this limitation a little, try supplementing the list via the "Search queries" tab -> "Query history".

Here you will need to select "Popular Searches".

You will receive a list of the most popular phrases from the last 3 months.

To get phrases from Google Search Console, go to the "Search Traffic" tab -> "Search Analytics". Then enable "Impressions", "CTR" and "Clicks". This lets you see more information that can be useful when analyzing phrases.

By default, the tab displays data for 28 days, but you can expand the range to 90 days. You can also select the desired country.

As a result, we get a list of requests, similar to the one shown in the screenshot.

New version of Search Console

Google has already made some of the tools available in the new version of the panel. To view queries for a page, go to the "Status" tab -> "Performance".

In the new version, the filters are arranged differently, but the filtering logic is preserved. I think there is no point in dwelling on this question. Of the significant differences, it is worth noting the ability to analyze data for a longer period, and not just for 90 days. A significant advantage when compared to Yandex.Webmaster (only 7 days).

Analysis services for competing websites

Competitor sites are a great source of keyword ideas. If you are interested in a specific page, you can manually determine the search phrases for which it is optimized. To find the main keys, it is usually enough to read the material or check the content of the keywords meta tag in the page code. You can also use services for semantic analysis of texts, for example, Istio or Advego.

If you need to analyze an entire site, use comprehensive competitor analysis services:

You can use other tools to collect key phrases as well, for example Google Trends, WordTracker, WordStream, Ubersuggest and Topvisor. But do not rush to master every service and program at once. If you are composing the semantic core for your own small site, use a free tool such as the Yandex keyword selection service (Wordstat) or Google Keyword Planner.

How to find keywords for the semantic core

The process of selecting key phrases consists of several stages:

  1. In the first, you will define the base keywords that the audience uses to search for your product or business.
  2. The second stage is devoted to the expansion of the semantic core.
  3. In the third step, you will remove inappropriate search phrases.

Defining base keys

Fill in a spreadsheet or write down general search phrases related to your business and products. Gather colleagues and brainstorm. Record all proposed ideas without discussion.

Your list will look something like this:

Most of the keys you wrote down are high in frequency and low in specificity. In order to get high-specificity mid- and low-frequency search phrases, you need to expand your core as much as possible.

Expanding the semantic core

You will accomplish this task using keyword research tools such as Wordstat. If your business is tied to a region, select the appropriate region in the settings.

Using the service for selecting key phrases, you need to analyze all the keys recorded at the previous stage.

Copy the phrases from the left column of Wordstat and paste into the table. Pay attention to the right column of Wordstat. In it, Yandex offers phrases that people used in conjunction with the main query. Depending on the content, you can immediately select the appropriate keys from the right column or copy the entire list. In the second case, inappropriate requests will be eliminated at the next stage.

The result of this stage will be a list of search phrases for each basic key that you brainstormed. Lists can contain hundreds or thousands of queries.

Remove inappropriate search phrases

This is the most time consuming stage of working with the kernel. You need to manually remove inappropriate search phrases from the kernel.

Do not use frequency, competition, or other purely "SEO" metrics as criteria for evaluating keys. Do you know why old-school SEOs consider certain search phrases junk? Take the key "diet cake": the Wordstat service predicts only 3 impressions per month for it in the Cherepovets region.

To promote pages for specific keywords, old school SEOs bought or rented links. By the way, some experts still use this approach. It is clear that search phrases with low frequency in most cases do not pay off the money spent on buying links.

Now look at the phrase "diet cake" through the eyes of an ordinary marketer. Some members of the confectionery company's target audience are genuinely interested in such products. So the key can and should be included in the semantic core. If the pastry shop makes suitable products, the phrase will come in handy in the product description section. If the company for some reason does not make diet cakes, the key can be used as a content idea for the informational section.

What phrases can be safely excluded from the list? Here are some examples:

  • Keys mentioning competing brands.
  • Keys that mention products or services that you do not sell or plan to sell.
  • Keys with the words "inexpensive", "cheap" or "at a discount". If you are not competing on price, cut off bargain hunters so they do not spoil your behavioral metrics.
  • Near-duplicate keys. For example, of the three keys "cakes to order for a birthday", "birthday cakes to order" and "cakes to order for a birthday celebration", it is enough to keep the first one.
  • Keys mentioning inappropriate regions or addresses. For example, if you serve residents of the Northern District of Cherepovets, the key "cakes to order industrial district" is not suitable for you.
  • Phrases entered with errors or typos. Search engines understand that the user is looking for croissants even when the key is typed with a spelling mistake.

After deleting the inappropriate phrases, you have a list of queries for the base key "cakes to order". The same lists need to be drawn up for the other base keys from the brainstorming stage. After that, move on to grouping key phrases. A sketch of such automated cleaning follows below.
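A possible sketch of the stop-word cleanup described in the checklist above; the stop-word set is an example to be extended with competitor brands, foreign regions and any other markers that do not fit your business:

```python
# A sketch of cleaning the raw list with stop words, per the checklist above.
# The stop-word set is an example; extend it with competitor brands, wrong
# regions and other markers that do not fit your business.

STOP_WORDS = {"cheap", "inexpensive", "discount", "industrial district"}

def is_clean(phrase: str) -> bool:
    low = phrase.lower()
    return not any(stop in low for stop in STOP_WORDS)

raw = ["cakes to order", "cakes to order cheap", "cakes to order industrial district"]
core = [p for p in raw if is_clean(p)]
print(core)  # ['cakes to order']
```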

How to group keywords and build a relevance map

The search phrases that users employ, or will employ, to find your site are combined into semantic clusters: groups of queries that are similar in meaning. This process is called search query clustering. For example, the semantic cluster "Cake" includes all key phrases associated with this word: cake recipes, ordering a cake, photos of cakes, wedding cake, and so on.

A semantic cluster is a group of queries united by meaning. It is a multi-level structure. Inside the first-order cluster "Cake" there are second-order clusters "Cake recipes", "Ordering cakes", "Photos of cakes". Within the second-order cluster "Cake recipes" one could theoretically distinguish a third order: "Recipes for cakes with mastic", "Recipes for biscuit cakes", "Recipes for shortbread cakes". The number of levels in a cluster depends on the breadth of the topic. In practice, in most topics it is enough to single out business-specific second-order clusters within the first-order ones.

In theory, a semantic cluster can have many levels.
In practice, you will have to work with clusters of the first and second levels.

You brainstorm most of the first-level clusters when you write down your basic key phrases. For this it is enough to understand your own business and to glance at the site diagram you drew up before starting work on the semantic core.

It is very important to perform second-level clustering correctly. At this level, search phrases are distinguished by specifiers that indicate user intent. A simple example is the "cake recipes" and "custom cakes" clusters. The first cluster's phrases are used by people who need information; the second cluster's keys are used by customers who want to buy a cake.

You have identified the search phrases for the cake-to-order cluster using Wordstat and manual filtering. They must be distributed between the pages of the "Cakes" section.

For example, in the cluster there are searches for “custom-made football cakes” and “custom-made soccer cakes”.

If the company's assortment includes such a product, create a separate page for it in the "Mastic Cakes" section. Add it to the site structure: record the name, the URL and the search phrases with their frequency.

Use Wordstat or similar keyword research tools to see what other search phrases potential customers use to find soccer-themed cakes, and add the relevant ones to the page's key list.

In the list of cluster search phrases, mark the distributed keys in a convenient way. Distribute the remaining search phrases.

If necessary, change the structure of the site: create new sections and categories. For example, the page "Custom Paw Patrol Cakes" should go under the "Baby Cakes" section. At the same time, it can be included in the "Mastic Cakes" section.

Pay attention to two points. First, the cluster may not have matching phrases for the page you are planning to create. This can happen for a variety of reasons. For example, these include the imperfection of tools for collecting search phrases or their incorrect use, as well as the low popularity of the product.

The absence of a suitable key in the cluster is not a reason to refuse to create a page and sell a product. For example, imagine that a confectionery company sells children's cakes featuring Peppa Pig's characters. If the corresponding keys are not included in the list, clarify the needs of the audience using Wordstat or another service. In most cases, there will be suitable queries.

Secondly, even after removing unnecessary keys, search phrases may remain in the cluster that are not suitable for the created and scheduled pages. They can be ignored or used in another cluster. For example, if a pastry shop for some reason fundamentally does not sell Napoleon cake, the corresponding key phrases can be used in the Recipes section.

Search query clustering

Grouping of search queries can be done manually, in Excel or Google spreadsheets, or automated, using special applications and services.

Clustering allows you to understand how requests can be distributed across the pages of the site for their fastest and most effective promotion.

Automatic clustering or grouping of search queries of the semantic core is carried out based on the analysis of sites included in the TOP-10 results of the search engines Google and Yandex.

How automatic query grouping works: for each query, the TOP-10 sites in the search results are retrieved. If at least 4-6 of those sites coincide for two queries, the queries can be grouped together and placed on one page. A sketch of this logic is shown below.
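A minimal sketch of this grouping logic, assuming a get_top10() helper that returns result URLs for a query; in practice those URLs would come from a SERP-scraping service or a search API:

```python
# A sketch of SERP-overlap clustering as described above. get_top10 is a
# placeholder: in practice the URLs come from a SERP service or search API.

def cluster_by_serp(queries, get_top10, threshold=4):
    """Greedily group queries whose TOP-10 results share >= threshold URLs."""
    tops = {q: set(get_top10(q)) for q in queries}
    clusters = []
    for q in queries:
        for cluster in clusters:
            seed = cluster[0]  # compare against the cluster's first query
            if len(tops[q] & tops[seed]) >= threshold:
                cluster.append(q)
                break
        else:
            clusters.append([q])  # no match found: start a new cluster
    return clusters
```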

Automatic grouping is the fastest and most effective way to combine keywords to form a nearly ready-to-use site structure.

If the grouping is wrong from the point of view of search engine statistics, it will, alas, be impossible to form the site structure, distribute queries across its pages, and successfully promote those pages to the TOP!

Applications and services for automatic grouping of search queries

Among the services that automate the grouping of keywords, it is worth highlighting:

  • Key Collector.
  • Rush Analytics.
  • TopVisor.

After distributing all the keys, you will receive a list of existing and planned site pages with URL, search phrases and frequency. What to do with them next?

What to do with the semantic core

A table with the semantic core should become your roadmap and the main source of ideas when forming the content plan and the site structure.

Look: you have a list with preliminary page titles and search phrases. They define the needs of the audience. When drawing up a content plan, you just need to clarify the title of the page or publication. Include your main search term in it. This is not always the most popular key. In addition to popularity, the query in the title should best reflect the needs of the page audience.

Use the rest of the search phrases as an answer to the question "what to write about". Remember, you do not have to force every search phrase into an article or product description at all costs. The content should cover the topic and answer users' questions. Note again: focus on informational needs, not on search phrases and how to squeeze them into the text.

Semantic core for online stores

The specifics of preparing and clustering semantics for an online store lie in four groups of pages that are very important for subsequent promotion:

  • Home page.
  • Pages of sections and subsections of the catalog.
  • Product card pages.
  • Blog article pages.

Above, we already discussed the different types of search queries: informational, transactional, commercial, navigational. For an online store's section and product pages, transactional queries are of primary interest, i.e. queries whose users want to see sites where they can make a purchase.

It is necessary to start forming the core with a list of products that you are already selling or planning to sell.

For online stores:

  • as " body»Requests will be made product names;
  • as " specifiers"Phrases:" buy», « price», « sale», « to order», « Photo», « description», «

Many web publications talk about the importance of the semantic core.

There are similar texts on our site "What to Do". Yet they often cover only the general theoretical side of the issue, while the practice remains unclear.

All experienced webmasters insist that you must form a basis for promotion, but only a few can explain how to use it in practice. To lift the veil of secrecy from this issue, we decided to highlight the practical side of using the semantic core.

Why do you need a semantic core

It is, first of all, the basis and plan for further filling and promoting the site. The semantic basis, distributed across the structure of the web resource, serves as a set of signposts on the way to systematic, purposeful development of the site.

Once you have such a foundation, you don't have to think about the topic of each next article, you just need to follow the points on the list. With the core, website promotion moves much faster. And the promotion becomes clear and transparent.

How to use the semantic core in practice

To begin with, it is worth understanding how the semantic basis is generally compiled. In essence, this is a list of key phrases for your future project, supplemented by the frequency of each request.

It will not be difficult to collect such information using the Yandex Wordstat service:

http://wordstat.yandex.ru/

or any other special service or program. In this case, the procedure will be as follows ...

How to build a semantic core in practice

1. Collect in a single file (Excel, Notepad, Word) all the queries on your key topic taken from statistics data. Also include phrases "out of your head": logically plausible phrases, morphological variants (the ways you yourself would search for your topic) and even variants with typos!

2. The list of semantic queries is sorted by frequency, from the queries with the highest frequency down to the least popular ones.

3. All garbage queries that do not match the topic or focus of your site are removed from the semantic basis. For example, if you tell people about washing machines for free but do not sell them, you do not need to use words like:

  • "buy"
  • "wholesale"
  • "delivery"
  • "to order"
  • "cheap"
  • "Video" (if there are no videos on the site) ...

The point: you must not mislead users! Otherwise your site will rack up a huge number of bounces, which will affect its rankings. And this is important!

4. When the main list has been cleared of unnecessary phrases and queries and contains a sufficient number of items, you can use the semantic core in practice.

IMPORTANT: A semantic list can never be considered completely finished and complete. In any topic, you will have to update and supplement the core with new phrases and queries, periodically tracking innovations and changes.

IMPORTANT: the number of articles on the future site will depend on the number of items in the list. Consequently, this will affect the volume of the required content, the working time of the author of the articles, and the duration of filling the resource.

Overlaying the semantic core on the site structure

To make sense of the entire list received, you need to distribute requests (depending on frequency) across the structure of the site. It is difficult to give specific numbers here, since the scale and frequency difference can be quite significant for different projects.

If, for example, you take a query with a frequency in the millions as the basis, even a phrase with 10,000 queries will seem mediocre.

On the other hand, when your main query has a frequency of 10,000, a mid-frequency one will be about 5,000 queries per month. That is, the division is relative:

"HF - MF - LF", or "maximum - middle - minimum"

But in any case (even visually) you need to divide the whole core into 3 categories:

  1. high-frequency queries (HF - short phrases with the maximum frequency);
  2. low-frequency queries (LF - rarely requested phrases with low frequency);
  3. mid-frequency queries (MF - all the middle queries that fall between the two).
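For illustration, such a relative split can be expressed in code; the 50% and 10% cut-offs are assumptions to be adjusted per niche:

```python
# A sketch of the relative "maximum - middle - minimum" split. Thresholds are
# fractions of the top query's frequency, since absolute numbers differ from
# niche to niche; the 50% and 10% cut-offs are illustrative assumptions.

def split_core(phrases_with_freq):
    """phrases_with_freq: list of (phrase, monthly frequency) tuples."""
    ranked = sorted(phrases_with_freq, key=lambda pf: pf[1], reverse=True)
    top = ranked[0][1]
    bands = {"HF": [], "MF": [], "LF": []}
    for phrase, freq in ranked:
        if freq >= top * 0.5:
            bands["HF"].append(phrase)
        elif freq >= top * 0.1:
            bands["MF"].append(phrase)
        else:
            bands["LF"].append(phrase)
    return bands

core = [("site promotion", 10000), ("article site promotion", 4000),
        ("inexpensive site promotion by links", 300)]
print(split_core(core))
```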

At the next stage, one or several (at most 3) queries are selected for the main page. These phrases should be as high-frequency as possible: the HF queries go on the main page!

Next, from the overall logic of the semantic core, single out several main key phrases from which the site's sections (categories) will be created. Here you can use high-frequency queries with a lower frequency than the main one, or better, mid-frequency queries.

The remaining low-frequency phrases are sorted into the created sections and categories and turned into topics for future site publications. This is easier to understand with an example.

EXAMPLE

An illustrative example of using the semantic core in practice:

1. Home page (HF) - high-frequency query - "website promotion".

2. Section pages (MF) - "custom site promotion", "self-promotion", "site promotion by articles", "site promotion by links". Or simply (if adapted for the menu):

Section number 1 - "to order"
Section number 2 - "independently"
Section number 3 - "article promotion"
Section number 4 - "link promotion"

All this is very similar to the data structure on your computer: logical drive (main) - folders (partitions) - files (articles).

3. Pages of articles and publications (LF) - "fast site promotion for free", "promotion to order cheap", "how to promote a site with articles", "promotion of a project on the Internet to order", "inexpensive site promotion by links", etc.

In this list, you will have the largest number of various phrases and phrases for which you will have to create further site publications.

How to use a ready-made semantic core in practice

Using the query list is internal content optimization. The trick is to optimize (tailor) each page of the web resource to the corresponding item of the core. That is, you take a key phrase and write the most relevant article and page possible for it. A dedicated service will help you assess relevance.

In order to have at least some guidelines in your SEO work, it is better to first check the relevance of sites from the TOP for specific queries.

For example, if you write a text on the low-frequency phrase "inexpensive website promotion by links", then first simply enter it in the search and rate the TOP-5 sites in the SERP using the relevance rating service.

If the service showed that sites in the TOP-5 for the query "inexpensive website promotion by links" have a relevance of 18% to 30%, then you should aim for the same percentages. Better yet, create unique copy with keywords and a relevance of about 35-50%. By slightly outdoing your competitors at this stage, you lay a good foundation for further promotion.
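As a rough stand-in for such a service, here is a naive keyword-density estimate; real relevance tools use far more signals than simple word counting:

```python
# A naive keyword-density estimate in the spirit of the relevance check above.
# Real relevance services use many more signals; this only counts how often
# the words of the key phrase occur in the text.
import re

def keyword_density(text: str, phrase: str) -> float:
    words = re.findall(r"\w+", text.lower())
    keys = set(re.findall(r"\w+", phrase.lower()))
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in keys)
    return 100.0 * hits / len(words)

sample = "Inexpensive site promotion by links: why links still work..."
print(round(keyword_density(sample, "inexpensive site promotion by links"), 1))
```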

IMPORTANT: the use of the semantic core in practice implies that one phrase corresponds to one unique resource page. The maximum here is 2 requests per article.

The more fully the semantic core is revealed, the more informative your project will be. But if you are not ready for long-term work and thousands of new articles, you do not need to tackle wide thematic niches. Even a narrow, specialized area with 100% coverage will bring in more traffic than an unfinished large site.

For example, you could take as the basis of the site not a high-frequency key "site promotion" (where there is colossal competition), but a phrase with a lower frequency and narrower specialization - "article site promotion" or "link promotion", but reveal this topic to the maximum in all articles of the virtual platform! The effect will be higher.

Useful information for the future

All further use of your semantic core in practice will consist of the following:

  • correct and update the list;
  • write optimized texts with high relevance and uniqueness;
  • publish articles on the site (1 request - 1 article);
  • increase the usefulness of the material (edit ready-made texts);
  • improve the quality of articles and the site as a whole, monitor competitors;
  • mark in the kernel list those queries that have already been used;
  • supplement optimization with other internal and external factors (links, usability, design, usefulness, videos, online help tools).

Note: all of the above is a greatly simplified description of the work. In fact, on the basis of the core you can create sublevels, deeply nested structures, and branches into forums, blogs and chats. But the principle always stays the same.

GIFT: a useful tool for collecting the kernel in the Mozilla FireFox browser -
