How to build a semantic core in Key Collector: a complete guide

21.01.2022

Sergey Arsentiev

KeyCollector (Key Collector): professional compilation of the semantic core.

The semantic core is the basis for the successful promotion of any Internet project. Therefore, it is important to pay maximum attention to this stage. To do this, you can use the most popular program for collecting key search queries - KeyCollector.

KeyCollector is a paid program but worth every dollar spent on it.

It is a truly powerful parser for a wide range of indicators: keywords, bids in Yandex.Direct and Google AdWords, checks of geo-dependence and correctness of word forms, competition level, and advanced analysis of data from leading SEO aggregators.

It is simply impossible to describe all the features of the program in one article, so for now I will focus on the most important one: professional compilation of the semantic core.

There are quite a few methods for selecting, filtering, and grouping requests, but I will describe the key-selection process in Key Collector exactly as I have gotten the hang of doing it: with minimal time spent and the desired result achieved. Selecting keys for one promoted topic usually takes me about 10-15 minutes. So let's get started.

Immediately upon opening, Key Collector offers to create a new project or open an existing one. If the site is relatively small, say up to 1000 pages, it is advisable to select and store key queries for the entire site in one project. Therefore, my project name is usually the name of the site.

So we create a new project, save it under any name, and first of all specify the site address in the URL field at the top of the page.

Next, specify the region where you want to collect requests. To do this, click each of the Regions buttons at the bottom of the program (there are four in total) and select the desired region in the window that appears. The first button is needed to collect statistics and word forms from Yandex.Wordstat, the second to get frequencies from Yandex.Direct, the third to analyze the level of competition, and the last to collect statistics from Google.

Basic settings.

Now you need to configure the collection of statistics from Yandex. It is configured by default, but you need to make small changes depending on the specific task of compiling the semantic core.

First, specify the lower frequency limit for added phrases. This is done via the "Add phrases with frequencies from ... to" setting. If your goal is to collect thousands of low-frequency queries, set an approximate range of 5-50. If you need high-frequency queries, set the lower limit to 50 or more. In all other cases the default of 10 is fine.

It also makes sense to check the "Automatically write 0" box, so that low-frequency queries whose statistics cannot be collected do not leave empty results.

You probably noticed that we left the parsing depth at 0. A large depth makes no sense when collecting regular regional queries, since they usually fit within the default 40 pages. A depth of 1 is worth setting only if the task is to collect the maximum number of relevant keys for very high-frequency queries; in that case you also need to specify a decent value, say 100, in the option "Do not add phrases for in-depth research with a base frequency equal to or lower than".

I usually increase the delays between requests a little. With values of 8000-15000 ms, I have never hit a captcha even without proxy servers, and the speed of work is more than acceptable.

But if the captcha pops up too often, then the most reasonable choice, of course, is to use proxy servers; here is a proxy service suited to Key Collector: https://proxy-sale.com

It remains only to open the Yandex.Direct tab in the same settings and add 5-6 specially created throwaway accounts in the login:password format.

The program is set up and ready to collect keywords.

Collection of keywords.

Personally, I find it most convenient to collect keywords for a specific page, although some SEOs prefer to collect thousands of phrases first and then distribute them across pages using filters.

In this article, I will cover the first option, since it is simpler and, in my opinion, more correct and preferable when compiling the core for ordinary business sites or blogs.

We click the Yandex.Wordstat statistics collection icon and enter a list of keys suitable for the promoted page or section of the site. Here you need to show a little imagination and come up with all kinds of words that reflect the essence of your page and that potential visitors might type to find the promoted site.

If imagination fails you, just look up examples of similar search queries manually in Yandex itself.

Of course, you can also collect search suggestions with KeyCollector itself, but when compiling a semantic core for a specific section or page this is usually redundant; it is faster to simply type in a few common words, and there are usually only 5-10 of them per promoted page.

You don't need to worry about endings or word forms - the program will find all the variants. Just enter the most common words that fit the meaning and rule out other interpretations.

For an article reviewing eternal-link exchanges, I originally used these words:

If I had used only the word "exchange", the program would have returned a lot of garbage that does not fit the content of my article - about stocks, stock speculation, labor exchanges, and so on. For the query "link exchange", however, a variety of combinations were found, all well suited to my page, including "the best link exchange", "eternal link exchanges", etc.

At the same time, it is best to use the word "miralinks" on its own in order to find the largest possible number of query variants with it: after all, only people interested in this particular topic type it at all.

We weed out the excess.

After collecting keywords with KeyCollector, the current tab will show a list of all phrases from the search statistics that match the configured parameters.

Among them, there are two types of keys that are not suitable for promotion:

  • Keys with a small number of exact-match occurrences.
  • Keys containing negative keywords.

Keys with few exact-match occurrences.

I have already written about what this means; here is a brief recap. Initially, the program collects keywords with their base frequency. This means that the reported number of requests per month includes all word forms and extensions of the query.

For example, the query "link buying exchange" = 55 impressions per month. But these impressions include many variations of the query, such as "reviews about the link buying exchange" or "sape link buying exchange", etc.
It is impossible to use all these variants for promotion at once: key queries have to fit into meta tags, which are very limited in length, into page headings and subheadings, which are not elastic either, and into the text at a normal density without overspamming.

Therefore, it is logical to choose the most frequently typed queries and optimize the page for them, so that as many people as possible visit the promoted site. So how many impressions did "link buying exchange" get on its own, without any additions and variations? To find out, you need to collect data with the so-called exact "!" frequency.

Let's get started.
Click on the Yandex.Direct icon, check the box next to “! word” and click “Get data”.

As you can see, the number of exact requests for the "link buying exchange" key is quite small - only 3 (!) requests per month instead of 55. So I personally see no point in optimizing the page for this query.

Therefore, it is important to filter out all queries that have few exact occurrences in search - for example, fewer than 5. To do this quickly, simply sort all collected keys by the exact "!" frequency.

Then we select the lines containing words with insufficient frequency and delete them.
Ten seconds, and a list of hundreds of words is typically reduced 3-5 times. Now it makes sense to discard the remaining irrelevant keys.
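If you like to sanity-check this filtering outside the program, the rule is trivial to express in code. Here is a minimal Python sketch with invented numbers; Key Collector itself does all of this in the interface:

    # Drop phrases whose exact ("!") frequency is below a threshold.
    # Tuples are (phrase, base frequency, exact "!" frequency); the data is made up.
    phrases = [
        ("link buying exchange", 55, 3),
        ("best link exchange", 120, 41),
        ("eternal link exchanges", 80, 12),
    ]

    MIN_EXACT = 5  # minimum exact-match searches per month worth optimizing for

    kept = [row for row in phrases if row[2] >= MIN_EXACT]
    for phrase, base, exact in sorted(kept, key=lambda r: r[2], reverse=True):
        print(f"{phrase}: base={base}, exact={exact}")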

Keys with negative keywords.

What are they? These are search queries typed by people who are not looking for the promoted page and are obviously not potential customers.
For example, for a store selling vacuum cleaners, such irrelevant queries are keys with words like "instructions" or "reviews".
That is, the query "instructions for a Samsung vacuum cleaner" is unlikely to be useful to the owner of an ordinary online store: even if many people land on the page, the most they will do is download the manual, not buy a new vacuum cleaner.

In my case, it made sense to remove the keys with the words "script", "www", etc.

Let's get started.
Click the Stop Words button. In the window that appears, enter your negative keywords and make sure that the occurrence search type at the bottom is set to "Independent of the word form of the stop word". This lets you use general stems rather than writing out each negative keyword exactly.
In that case, a negative keyword such as "instructions" will remove the search queries "vacuum cleaners instructions", "download instructions for vacuum cleaners", and so on - that is, queries with any word form.

Click on "Mark in table" and as a result, all keywords containing negative keywords become flagged.

Now it remains only to remove them.
Open the "Data" tab at the top, select "Delete marked phrases", and that's it: you now have a list of keys without extraneous queries.
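As a rough mental model, matching "independent of the word form" behaves like comparing word stems. A minimal Python sketch of the idea (the stop list and queries are invented for illustration):

    # Flag a query if any of its words starts with one of the stop-word stems,
    # which roughly mimics matching independent of word form.
    stop_stems = ["instruction", "review"]
    queries = [
        "vacuum cleaner instructions",
        "download instructions for vacuum cleaners",
        "buy samsung vacuum cleaner",
    ]

    flagged = [
        q for q in queries
        if any(word.startswith(stem) for word in q.split() for stem in stop_stems)
    ]
    print(flagged)  # the first two queries are marked; the purchase query survives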

Determine the level of competition.

A great feature of KeyCollector is that, for each search query, it parses the number of indexed pages found, the number of TOP-10 sites that contain the key phrase in the page Title, and how many of the top ten results are home pages.

It is clear that the more indexed pages, optimized titles and main pages in the SERP, the more difficult it will be to compete with them.

However, it often happens that good high-frequency queries have relatively low competition. That is why it is very important to analyze all collected key queries for competition level, so you can select and push forward on the most profitable, still unoccupied keys.

To do this, click on the "KEI" icon and select "Get data for Yandex PS".
You can, of course, also measure the competition level in other search engines - it depends on the promotion task - but in most cases Yandex is enough for an objective picture of how hard a particular query is to promote.

Of course, this way of gauging competition is imperfect. It would be ideal if the program could also parse, for each query:

  • The average TIC and PR of sites in the TOP-10.
  • The average page length.
  • The average number of external links to each competitor, etc.

In this case, the result would be more accurate.
But as practice shows, even such a cursory competition analysis is enough to find profitable keywords and move up on them quickly, since many optimizers do not perform it at all. As a result, many sites push hard on complex, competitive queries while queries of comparable frequency, with no optimized competitors whatsoever, lie right next to them.
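For intuition, indices like KEI usually boil down to a simple ratio of demand to competition. One common textbook variant squares the frequency and divides by the number of competing pages; the sketch below uses that variant with invented figures and is not Key Collector's internal formula:

    # One common KEI variant: frequency squared over the number of competing pages.
    # A higher KEI means more potential traffic per unit of competition.
    def kei(frequency, competing_pages):
        return frequency ** 2 / max(competing_pages, 1)

    candidates = [
        ("eternal link exchanges", 120, 15000),
        ("best link exchange", 300, 2000000),
    ]
    for phrase, freq, pages in sorted(candidates, key=lambda c: kei(c[1], c[2]), reverse=True):
        print(f"{phrase}: KEI = {kei(freq, pages):.3f}")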

We have already written about how to assemble the semantic core. But before dividing queries into categories, they should be thoroughly cleaned. How do you remove eight layers of slag and leave pure silver? You will need a Key Collector account and 12 minutes to read this post.

1. Cleaning the semantic core by marker words

You can also add all unnecessary words directly from the full list of queries. In this case, we create a separate group - specifically for such stop words.

Action algorithm:

5. We clean the core using the word group analysis function

Groups marked in the table are automatically marked in the main query list. After all inappropriate words have been marked, we close the table and delete all unnecessary queries.

6. Find and remove implicit duplicates

To use this method, you must first collect information about the frequency of requests. After that, go to the tab "Data" - "Analysis of implicit duplicates":

Among each set of implicit duplicates, the program will automatically mark those with the lower frequency in the specified search engine.

7. Manual search in a group of queries

Finally, you can manually mark all unnecessary words in the semantic core: slang, misspelled words, and so on. The main array of irrelevant queries has already been cleared earlier, so manual cleaning will not take much time.

8. Cleaning requests by frequency

Using the advanced filter in KeyCollector, we set the query frequency parameters and mark all low-frequency phrases. This step is not always needed.

Conclusions

To clean the semantic core of garbage properly, perform the following steps in KeyCollector:

  1. Semantic core cleaning by marker words.
  2. Removing duplicate words.
  3. Removal of Latin letters, special characters, queries with numbers.
  4. Cleaning the core with the word group analysis function.
  5. Search and removal of implicit duplicates.
  6. Manual search in a group of queries.
  7. Cleaning requests by frequency.

At each stage, it is advisable to review the words marked for deletion, as there is a risk of deleting high-quality and relevant queries.

Instead of deleting unnecessary queries, it is better to create a separate group and move them there. In recent updates of Key Collector, a corresponding default group has appeared: the "Trash".

After a thorough cleaning of the semantic core, you can proceed to the next stage - clustering and grouping requests.

Note that there is always a risk of a couple of irrelevant queries slipping through during core cleaning. Fortunately, they are very easy to identify and remove at the grouping stage, but more on that next time.

Dear friends, today I want to talk about how to effectively clean search queries in the Key Collector program http://www.key-collector.ru/.

To clean up the semantic core, I use the following methods:

  1. Cleaning the semantic core with regular expressions.
  2. Delete using the stopword list.
  3. Deletion using word groups.
  4. Filter cleaning.

Using them will allow you to quickly and efficiently clean up the list of collected keywords and remove all phrases that are not suitable for your site.

To show everything clearly, I decided to record a video tutorial:

The video is best viewed in full-screen mode at 720 HD. Also, don't forget to subscribe to my channel on YouTube so you don't miss new videos.

I will show you several ways to do this. If you know more ways, drop them in the comments. I myself use all the described methods. They save me a lot of time.

So let's go.

1. Cleaning the semantic core with regular expressions

Regular expressions greatly expand the possibilities for selecting queries and save time.

Let's say we want to select all search queries that contain numbers.

To do this, click on the specified icon in the "Phrase" column:

Select the option "matches the reg. expression" and insert the following regular expression into the field:

It remains to click the "Apply" button, and you will receive a list of all requests that contain numbers.

I love using regular expressions to find search terms that are questions.

For example, if you specify this regular expression:

Then we get a list of all queries that begin with the word "how" (as well as with words like "what" and "which"):

Such requests are great for informational articles, even if the site is commercial.

If you use this expression:

free$

Then we get all requests that end with the word "free":

Thus, you can immediately get rid of freebie lovers 🙂. Really, who types the request "air conditioner free"? The thirst for freebies knows no bounds. It's like the joke "I will accept a Bentley as a gift" 😉. Okay, let's get serious.

If we need to find all phrases that contain letters of the Latin alphabet, then the following expression will come in handy:

Here are examples of other regular expressions that I use:

^(\S+?\s\S+?)$ - all queries consisting of 2 words

^(\S+?\s\S+?\s\S+?)$ - consisting of 3 words

^(\S+?\s\S+?\s\S+?\s\S+?)$ - consisting of 4 words

^(\S+?\s\S+?\s\S+?\s\S+?\s\S+?)$ - of 5 words

^(\S+?\s\S+?\s\S+?\s\S+?\s\S+?\s\S+?)$ - of 6 words

^(\S+?\s\S+?\s\S+?\s\S+?\s\S+?\s\S+?\s\S+?)$ - of 7 words

^(\S+?\s\S+?\s\S+?\s\S+?\s\S+?\s\S+?\s\S+?\s\S+?)$ - of 8 words

Searching for queries of 6 or more words is useful, as they often contain a lot of junk phrases.

The program has another opportunity to find such requests - just select the desired item below from the drop-down menu:
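The same filters are easy to reproduce outside the program with ordinary regular expressions. Here is a Python sketch: the digit and Latin-letter patterns are my stand-ins for the regexes shown in the screenshots above, and the helper generalizes the hand-written N-word patterns from the list:

    import re

    queries = ["buy air conditioner 2021", "miralinks review", "скачать реферат бесплатно"]

    has_digits = [q for q in queries if re.search(r"\d", q)]       # assumed digit filter
    has_latin = [q for q in queries if re.search(r"[a-zA-Z]", q)]  # assumed Latin-letter filter

    # All the N-word patterns listed above follow one template:
    def n_words(n):
        return re.compile(r"^\S+?(?:\s\S+?){%d}$" % (n - 1))

    two_word = [q for q in queries if n_words(2).match(q)]
    print(has_digits)  # ['buy air conditioner 2021']
    print(has_latin)   # ['buy air conditioner 2021', 'miralinks review']
    print(two_word)    # ['miralinks review']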

2. List of stop words

To clean up search queries, it makes sense to create a list of unwanted words that you want to remove from the collected queries.

For example, if you have a commercial site, you can use stop words such as:

free

download

essay

I deliberately write some words only partially, as stems, to cover all possible variants (in the Russian original these are word stems; the idea carries over to any language). For example, the stem "free" will keep out requests containing:

for free

freebies

The stem "download" will keep out requests that include:

downloads

downloading

In the Key Collector program, in the "Data collection" tab, go to the "Stop words" item:

And we add unwanted words through the options "Add as a list" or "Load from file":

Going to the main window of the program, we will see how many requests were marked for the specified stop words:

It remains only to find the marked queries, right-click on them and select "Delete marked lines":

Comrades who want air conditioners for free are of no interest to us 🙂.

You don't even need to find a marked query: just right-click any query, even an unmarked one, and select "Delete marked lines".

I also actively use city names as stop words. For example, if I need to collect requests only for Moscow, stop words with city names prevent collecting requests that mention other cities.

Here are examples of such stop words:

saint

peter

petersburg

All these stems will prevent collecting requests containing the various spellings of St. Petersburg. As in the previous example, I use shortened city names.

I also advise using the numbers of previous years as stop words, since practically no one types queries with them anymore:
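Lists like these are easier to generate than to type. A small Python sketch (the city stems are illustrative):

    from datetime import date

    # Build stop words from the last ten years plus shortened city stems.
    current_year = date.today().year
    year_stops = [str(y) for y in range(current_year - 10, current_year)]

    city_stems = ["saint", "peter", "petersburg"]  # illustrative St. Petersburg stems
    stop_words = year_stops + city_stems
    print("\n".join(stop_words))  # paste via "Add as a list" or save for "Load from file"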

I will share with you my list of stop words, which contains:

  • cities of Russia
  • cities of Ukraine
  • cities of Belarus
  • cities of Kazakhstan

As well as my list of common stop words (free, download, abstract, pdf, etc.).

A complete list of stop words can be obtained by anyone absolutely free of charge.

I use this method very actively. In any topic, there will be queries that cannot be removed using the same stop words or word groups.

For example, stop words do not cover the full variety of possible word forms.

Let's say your company sells air conditioners. However, services such as refueling and repairs are not provided.

When viewing requests, you can send inappropriate words to the list of stop words using the specified icon:

However, requests that contain the words "refuel", "refills", etc. will not be covered.

In order to catch the full range of such requests you want to delete and save yourself unnecessary work, do the following.

When viewing a list of queries, some words will not be covered, as in the example above.

I enter just part of the word, "refuel", into the text filter to cover all possible word forms based on it:

As a result, I get a list of search queries with all possible variants of the word "refuel":

To reset the quick filter, click on the indicated checkbox:

This method lets you delete all word forms of unsuitable queries right as you work. The main thing is to use shortened versions of words for maximum coverage.

In many topics, collection methods that use sources such as search suggestions end up producing a lot of junk queries. Suggestions are still worth using - excellent keywords turn up among them - but they need cleaning too.

To quickly clean up such requests, it makes sense to use this method.

Click on the specified icon at the top of the "Source" column:

After that, select the desired source. I usually work with suggestions from different search engines:

You can work with the suggestions of each search engine separately, or you can add a condition:

Use "OR" instead of "AND" and select multiple suggestion sources at once:

As a result, you will get a list of queries from search suggestions from several sources at once - Yandex, Google, etc.

From my own experience, I can say that cleaning queries against such a list based on sources is much faster and more efficient.
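Conceptually, the "OR" condition is just set membership over the source field. A hedged Python sketch of the idea (the field names and rows are invented stand-ins for an export):

    # Keep only queries whose source is any of several suggestion feeds ("OR").
    rows = [
        {"phrase": "buy air conditioner", "source": "Yandex.Wordstat"},
        {"phrase": "how to choose an air conditioner", "source": "Yandex suggestions"},
        {"phrase": "air conditioner cheap", "source": "Google suggestions"},
    ]

    suggest_sources = {"Yandex suggestions", "Google suggestions"}
    from_suggests = [r for r in rows if r["source"] in suggest_sources]
    print([r["phrase"] for r in from_suggests])  # only the suggestion-fed queries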

Everyone knows this method. It consists of simply checking one or more queries, right-clicking, and selecting "Delete marked lines":

I use this method at the final stage. After all the other purges, you should review all the queries once more and manually delete those that do not fit but passed all the previous filters.

So to say, this is the final "polishing" of the semantic core 🙂 .

Key Collector is one of the main SEO tools. This program, created for building the semantic core, belongs in the must-have category for promotion. It is as important as a scalpel to a surgeon or a control yoke to a pilot. After all, promotion without keywords is unthinkable.

In this article, we will look at what the Key Collector is and how to work with it.

What is Key Collector for?

Then go to the settings (the gear button on the panel in the upper-left corner of the program window) and find the "Yandex.Direct" tab.

Click the "Add to list" button and enter the created accounts in the Login:Password format.

Attention! There is no need to add @yandex.ru after the login!

After all these operations, you get something like this:

But that's not all. Now you need to create a Google AdWords account linked to your Google account. Without an AdWords account it will be impossible to get keyword data, since that is where it comes from. When creating the account, select your language, time zone, and currency. Please note that these settings cannot be changed later.

After creating your AdWords account, re-open the Key Collector settings and go to the "Google.AdWords" tab. Here it is recommended to use only one Google account.

Anticaptcha

This item is optional, but I still recommend using an anti-captcha service. Of course, if you enjoy entering captchas manually every time, that's up to you. But if you don't want to waste time on this, find the "Anti-captcha" tab in the settings, select the "Antigate" radio button (or any other of the proposed options), and enter your anti-captcha key in the field that appears. If you don't have a key yet, create one.

Captcha recognition is a paid service, but $10 lasts at least a month. And if you do not parse search engines daily, that amount can last a year.

Proxy

By default, the program parses from your main IP address. If you use Key Collector only occasionally, you can forget about proxy settings. But if you work with the program a lot, search engines may start serving captchas frequently and can even temporarily ban your IP. On top of that, everyone who goes online through the shared IP will suffer - a common problem in offices.

Also, users from Ukraine may experience difficulties when parsing Yandex from the main IP.

Finding free proxies that are not yet banned by search engines can be quite difficult. If you have a list of such addresses, enter them in the settings on the "Net" tab, then click the "Add line" button.

Another option is to create a file with the addresses in the IP:port format, copy them to the clipboard, and add them to the Collector using the "Add from buffer" button.

But I recommend connecting a paid VPN from hidemy.name. In this case, an application is installed on the computer that enables or disables the VPN on demand. In the same application you can change the proxy itself and its country. You do not have to configure anything extra: just turn on the VPN and work comfortably with the Collector.

I have listed the basic settings that are needed to start work. I advise you to go through all the tabs yourself and study the program settings. Maybe you will find items in the settings that are right for you.

Keyword search with Key Collector

Finally, we have reached the actual selection of the semantic core. In the main window of the program, click the big "New project" button. I advise naming the project file after the site, for example site.ru, and saving it to a folder created specifically for Key Collector projects, so you don't waste time searching for it later.

In the Collector, it is convenient to sort keywords into groups. I find it convenient for the group hierarchy in the project to match the future structure of the site, so the first (default) group corresponds to the site's main page.

For example, let's work with the topic "website creation Moscow". Let's start with Yandex.

First you need to set the region:

Now you need to open "Batch collection of words from the left column of Yandex.Wordstat" and, in the window that appears, enter the 5 most obvious key phrases in the topic (they will be used for parsing).

Now click the "Start collecting" button.

That's it, you can go make coffee or switch to other tasks. It will take some time for the Key Collector to parse key phrases.

As a result, something like this will be displayed:

Stop words

Now you need to filter out words and phrases that do not currently fit. For example, the combination "creation of sites Moscow for free" will not work, as we do not provide free services. Searching for such phrases manually in a semantic core of hundreds or thousands of queries is a thrilling pastime, but it is better to use a dedicated tool.

Then you need to click on the plus sign:

You have probably noticed that the program offers a large number of options for working with keywords. I am covering only the most basic operations in Key Collector.

Working with request frequency

After filtering by negative keywords, you can start parsing by frequency.

So far we see only the column with the total frequency. To get the exact frequency of each keyword, you would enter it in Wordstat inside the quote operator: "keyword".

In the Collector, this is done as follows:

If necessary, you can collect the frequency with the "!word" operator.

Then sort the list by the quoted ("") frequency and remove words with a frequency of less than 10 (sometimes 20-30).

The second way to collect frequency (slower):

If you know for sure that frequencies below a certain value do not interest you, you can set a threshold in the program settings; phrases below it will not be added to the list at all. But this way you can miss promising phrases, so I don't use this setting and don't advise you to. However, do as you please.

As a result, a semantic core more or less suitable for subsequent work is obtained:

Please note that this semantic core is just an example, created only to demonstrate how the program works. It is not suitable for a real project, as it is poorly developed.

The right column of Yandex.Wordstat

Sometimes it makes sense to parse the right column of Wordstat (queries similar to "your query"). To do this, click on the appropriate button:

Google and Key Collector

Queries from Google statistics are parsed by analogy with Yandex. If you have created a Google account and an AdWords account (as we remember, a Google account alone is not enough), click on the corresponding button:

In the window that opens, enter the queries of interest and start the selection. All by analogy with Wordstat parsing. If necessary, in the same window, specify additional settings specifically for Google (clicking on the question icon will display help).

As a result, you will receive the following data on AdWords:

And you can continue to work with semantics.

Conclusions

We have gone over the basic settings of Key Collector - the ones without which you cannot start working. We also looked at the simplest (and most essential) ways to use the program, and collected a simple semantic core using Yandex.Wordstat and Google AdWords statistics.

As you understand, the article shows about 20% of all the features of the program. To master the Key Collector, you need to spend a few hours and study the official manual. But it's worth it.

If after this article you decide that it is easier to order a semantic core from specialists than to figure it out yourself, write to me via the contact page and we will discuss the details.

And a bonus video: a dude named Derek Brown masterfully plays the saxophone. I even went to his concert during the jazz festival, it's really cool.

How to collect negative keywords for Yandex Direct, as well as the cleanest and most selling semantics for your advertising campaigns.

There are 2 approaches to collecting negative keywords and phrases:

Express Method

Many Direct specialists offer methods for collecting negative keywords in 20 minutes, 10, or even less.

The point is that you collect the Wordstat output, bring it into Excel or an equivalent, and pull out the keys and relevant qualifier words from there; what is left over becomes your list of negatives.

Why this method is bad and why I will teach you another method:

— Limited coverage.

Wordstat only lets you collect queries from the first 40 pages. Even if you don't collect each query manually but use some plugin, you will still sit like a fool flipping through forty pages.

— The inability to collect a complete list of stop words.

Because of Wordstat's limitations you will get a decent list, but not a complete one. This is especially true for broad semantics like "apartments in Moscow": there, the queries on the last page still have a frequency of around 300, while you need to get down to 30 or less, because if you show up for non-targeted queries, even low-frequency ones, you will drain a lot of money and just as many tears - the niche is expensive.

In addition, mobile results matter in many niches, and Yandex likes to get clever with geolocation on phones - it keeps placing me in Nizhny Novgorod instead of Saratov. For the SERP to be geo-relevant, you have to enable advanced geo-targeting and at the same time add as negatives all the cities in which you do not want to appear.

The express method gives you no such option; you have to add all of that yourself.

- AND MOST IMPORTANTLY: with my method, in addition to the list of negatives, you will collect ALL the targeted keywords!

Collecting negative keywords in Key Collector

It is much easier and more convenient to parse both the semantics and the "bad" words for it in dedicated tools. You can use the free Slovoeb, the Magadan parser (there is a free version), and so on - there are quite a few programs.

I'll show you how to filter stop words in Key Collector and get clean conversion semantics for ads at the same time.

It all starts with collecting the main masks (directions for parsing) from Wordstat or the campaign budget forecaster, multiplying them, and loading them into Key Collector.

For example, the task is to clean up the semantics for the sale of trampolines in Moscow. We collected all the requests, we see the picture:

1. Go to the stop words window.

2. In it, select the occurrence search type: Independent of the word form of the stop word. This way we catch all variants of one word (rent, rents, rental).

3. Click on the green plus sign.

4. In the window that opens, enter the unwanted words (separated by commas, or one per line).

What negative keywords to write here?

For starters, the most obvious ones - words that signal an intention not to buy but, say, to read reviews or do it yourself: how, which, better, for free, with your own hands, repair, photo, video. If you are selling new goods and do not use message boards, add "avito". And so on.

Here you need to dig into the niche and understand the sales process and how the client thinks while searching: what matters when choosing - characteristics, purchase terms, delivery, warranty, service.

5. Click the "Mark phrases in the table" button.

6. The 15-20 most obvious words gave me 396 junk queries. We send them to the trash.

Select "Move phrases to another group", in the drop-down window, click "Trash" and then "OK". We admire the result (circled in green):

BUT! We are not done with these queries yet. We go into the trash and review the queries for commercial intent. We see a key like "buy a trampoline price photo":

We press Ctrl + click on it, and the program takes us to the Yandex results for this query. We scroll, follow the links, and see that the key is commercial:

We drag it back.

7. Above the list in the "quick filter" field, write the word "price" and analyze:

Similarly, filter by other selling words (buy, order, city names, prices) and take back the keys you need. In this niche, people also search for trampolines without a net; those ended up in the trash too, so I move them back to the working group.

8. The next step is to delete incomplete keys.

These are queries that have decent numbers for the base frequency, but zeros or ones for the quoted frequency (which counts word forms of the phrase but not added words).

In other words, no one searches for them in this exact form; they merely occur inside queries with broader semantics, or they are purely informational. People don't type "a trampoline with a cm net" - they most likely mean a trampoline of a certain diameter. And that is an info key.

Procedure: click on the quoted "Frequency" column header to sort the quoted frequencies in descending order. Scroll down to frequency 0 and review. We see the following:

We delete everything like that, otherwise we will be left without money.
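The rule itself fits in one line: decent base frequency, near-zero quoted frequency. An illustrative Python sketch with invented numbers:

    # Flag "incomplete" keys: high base frequency but (near-)zero quoted frequency,
    # i.e. nobody searches the phrase in this exact form.
    keys = [
        ("trampoline with net", 900, 140),   # (phrase, base freq, quoted freq)
        ("trampoline with cm net", 250, 0),  # info noise: no one types this literally
    ]

    incomplete = [k for k in keys if k[1] >= 50 and k[2] <= 1]
    print([k[0] for k in incomplete])  # candidates for deletion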

9. Cleaning cities.
