• Schedule and run crawling/scraping tasks;
  • Control crawlers’ robots.txt settings;
  • Set email notifications for various events;
  • Track progress and get statistical reports.
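The robots settings mentioned above can be illustrated with Python's standard urllib.robotparser; this is a minimal sketch, not the service's actual implementation, and the site URL, rules, and bot name below are hypothetical:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical target site and robots.txt rules.
rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")
# rp.read() would fetch the file over HTTP; here we parse inline for illustration.
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
    "Crawl-delay: 2",
])

# A crawler consults the parsed rules before fetching each URL.
print(rp.can_fetch("ExampleBot", "https://example.com/private/page"))  # False
print(rp.can_fetch("ExampleBot", "https://example.com/public/page"))   # True
```

A "control robots settings" option in such a system typically decides whether this check is enforced, ignored, or overridden per crawl task.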

Functionality includes crawling of web resources (HTML and XML pages; binary documents such as PDF, DOC, and PPT; and binary images such as JPG, GIF, and PNG), including collection of URL links up to a configured maximum limit.
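Collecting URL links up to a maximum limit can be sketched with Python's standard html.parser; this is an illustrative assumption about how such a limit might work, not the service's code, and the page content and URLs are made up:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    """Collect href links from an HTML page, stopping at a maximum limit."""

    def __init__(self, base_url, max_links):
        super().__init__()
        self.base_url = base_url
        self.max_links = max_links
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Stop collecting once the configured limit is reached.
        if tag == "a" and len(self.links) < self.max_links:
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the page URL.
                    self.links.append(urljoin(self.base_url, value))

page = """<html><body>
<a href="/docs/report.pdf">Report</a>
<a href="page2.html">Next</a>
<a href="https://other.example/img.png">Image</a>
</body></html>"""

collector = LinkCollector("https://example.com/index.html", max_links=2)
collector.feed(page)
print(collector.links)
# ['https://example.com/docs/report.pdf', 'https://example.com/page2.html']
```

The third link is dropped because the limit of two is already reached; a real crawler would apply the same cutoff per task or per crawl.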


The internal structure consists of two layers:

  • Web User interface (UI)
  • Distributed Crawler (DC) service installation

Both layers work independently and run on separate, isolated platforms.

© 2015-2016 TagsReaper. All rights reserved.