robots.txt is a file for controlling access by crawlers such as search engine bots. It must be placed at the root of the site, and it lets you address a specific user agent (User-agent) and declare which paths are forbidden (Disallow) or permitted (Allow).
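For example, a robots.txt that blocks all crawlers from one directory while explicitly allowing another might look like this (the paths are hypothetical):

```
# Applies to all crawlers
User-agent: *
# Hypothetical paths for illustration
Disallow: /private/
Allow: /public/
```

A crawler that honors the file reads the group matching its User-agent string and skips any URL whose path matches a Disallow rule.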
However, robots.txt is not enforceable. Google states that its crawlers honor robots.txt, but there is no guarantee that other crawlers will. If you need to reliably block crawlers from a page, protect it with server-side authentication, such as a password.
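As a sketch of such server-side protection, HTTP Basic authentication can be enabled on an Apache server with an .htaccess file like the following (the realm name and file path are hypothetical; the .htpasswd file would be created separately with the htpasswd tool):

```
# Apache .htaccess — enables HTTP Basic authentication for this directory
AuthType Basic
# Hypothetical realm name shown in the login prompt
AuthName "Restricted Area"
# Hypothetical path to the password file created with htpasswd
AuthUserFile /path/to/.htpasswd
Require valid-user
```

Unlike robots.txt, this blocks every client that lacks valid credentials, crawlers included, regardless of whether they choose to follow any rules.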