txt file is then parsed and instructs the robot as to which pages are not to be crawled. Because a search engine crawler may keep a cached copy of this file, it may occasionally crawl pages a webmaster does not wish to have crawled. Pages typically prevented from being crawled include login-specific pages
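
As a rough illustration of how a crawler might honour these rules, the following sketch uses Python's standard urllib.robotparser module to fetch and parse a robots.txt file before deciding whether a page may be crawled. The site URL and user-agent name shown are placeholders, not taken from the original text, and a real crawler would typically cache the parsed file rather than re-fetch it for every request.

    import urllib.robotparser

    # Fetch and parse the site's robots.txt (placeholder URL for illustration).
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()

    # Ask whether a hypothetical crawler ("ExampleBot") may fetch a given page,
    # such as a login-specific page that webmasters commonly disallow.
    if rp.can_fetch("ExampleBot", "https://example.com/login"):
        print("Allowed to crawl this page")
    else:
        print("Disallowed by robots.txt")
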