Googlebot is the robot (also called a crawler or spider) that Google uses to collect web page information for its search index. If Googlebot cannot reach a page, Google cannot index it and the page will not appear in Google search results, so allowing Googlebot to access and index your pages is a basic prerequisite for SEO.
Conversely, if there are pages you do not want in the index, you can use robots.txt to specify the pages or areas you do not want crawled.
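As a minimal sketch, a robots.txt file placed at the site root might look like the following (the paths shown are hypothetical examples, not real directives for any particular site):

```
# Block all crawlers from a private directory (hypothetical path)
User-agent: *
Disallow: /private/

# Block only Googlebot from a specific page (hypothetical path)
User-agent: Googlebot
Disallow: /drafts/old-page.html
```

Note that robots.txt controls crawling rather than indexing itself; a page blocked this way can still appear in results if other sites link to it, so a noindex meta tag is the more reliable way to keep a specific page out of the index.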

In addition to collecting page information, Google also performs rendering (checking how the page will look in a browser). This is done by a system called Caffeine, which is distinct from Googlebot, though the two are often confused.
For a long time Caffeine's rendering capability was equivalent to Chrome 41, so it could not keep up with modern browsers. This was addressed in 2019, when the renderer was updated to use the current Chrome rendering engine.
