Category: Crawl


URL Parameters

Data provided in the URL to specify a site's behavior.
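As a sketch, URL parameters are the key-value pairs after the `?` in a URL, and they can be read with Python's standard library (the URL and parameter names below are made up for illustration):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical URL: "page" and "sort" are illustrative parameters.
url = "https://example.com/products?page=2&sort=price"

# parse_qs maps each parameter name to a list of its values.
query = parse_qs(urlparse(url).query)
print(query["page"][0])  # "2"
print(query["sort"][0])  # "price"
```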

Robots.txt File

This file instructs web spiders/crawlers such as Googlebot not to access all or parts of your publicly viewable website.
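A minimal sketch of how a well-behaved crawler checks these rules, using Python's standard `urllib.robotparser` against a hypothetical robots.txt that blocks all crawlers from a `/private/` directory:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: block every crawler from /private/,
# leave the rest of the site open.
rules = [
    "User-agent: *",
    "Disallow: /private/",
]

rp = RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("Googlebot", "https://example.com/private/page"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/index.html"))    # True
```

Note that robots.txt is advisory: compliant crawlers honor it, but it is not an access-control mechanism.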


RSS

Stands for Really Simple Syndication. It’s an easy way to “feed” content from one site to another.
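Under the hood, an RSS feed is just XML, so the receiving site can read it with a standard parser. A minimal sketch with a made-up feed (real feeds carry more metadata):

```python
import xml.etree.ElementTree as ET

# Hypothetical, minimal RSS 2.0 feed.
feed = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item>
      <title>First Post</title>
      <link>https://example.com/first-post</link>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(feed)
# Each <item> is one piece of syndicated content.
for item in root.iter("item"):
    print(item.findtext("title"), item.findtext("link"))
```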

Web Spider

Search engines use spiders to crawl the linked pages of a website, index them, and determine their rankings for relevant search terms.
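The core of that crawling step is extracting links from each page so the spider knows which pages to visit next. A minimal sketch using Python's standard `html.parser` (fetching, politeness rules, and indexing are omitted; the HTML is a made-up page):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag encountered while parsing."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# Hypothetical page with two internal links.
page = '<html><body><a href="/about">About</a> <a href="/contact">Contact</a></body></html>'

parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # ['/about', '/contact']
```

A real spider would queue these links, fetch each one (honoring robots.txt), and repeat.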