Hi,
Robots.txt tells search engine crawlers such as Googlebot which pages of your site they may crawl and which they may not, which in turn affects what gets indexed.
For example, if you specify in your Robots.txt file that you don’t want the search engines to be able to access your thank you page, that page won’t be able to show up in the search results and web users won’t be able to find it.
Search engines send out small programs called “spiders” or “robots” to crawl your site and bring information back so that your pages can be indexed in the search results and found by web users. Your Robots.txt file instructs these programs not to crawl the pages on your site that you designate with a “Disallow” directive.
For example, the following Robots.txt command:
User-agent: *
Disallow: /images
would block all search engine robots from visiting any page on your website whose path begins with /images.
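If you want to check what a rule like this actually blocks, you can test it locally with Python's built-in urllib.robotparser. This is just a minimal sketch; the site https://www.example.com is a placeholder, not a real address.

from urllib.robotparser import RobotFileParser

# The same rule as above, parsed locally instead of fetched from a live site.
rules = [
    "User-agent: *",
    "Disallow: /images",
]

parser = RobotFileParser()
parser.parse(rules)

# Paths under /images are disallowed for every robot ("*"); other paths are allowed.
print(parser.can_fetch("*", "https://www.example.com/images/photo.jpg"))  # False
print(parser.can_fetch("*", "https://www.example.com/contact"))           # True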