WCRI 111 - The user shall have an option to search only within the specified domain.
WCRI 110 - The crawler shall use multiple threads to avoid putting too much stress on an individual web host.
WCRI 109 - The crawler shall follow the Robots Exclusion Protocol.
WCRI 108 - The user shall be kept apprised of the total number of pages crawled.
WCRI 107 - The user shall be kept apprised of the total number of pages left to crawl.
WCRI 106 - The user shall be notified when the crawl is complete.
WCRI 105 - The user shall be allowed to stop the crawl at any time before it finishes.
WCRI 104 - The user shall be allowed to specify the maximum number of websites to crawl before stopping.
WCRI 103 - The user shall have the ability to specify a log file in which to save the results of the crawl.
WCRI 102 - The user shall have the ability to specify the number of back-links required for a website to be retained in the final list.
WCRI 101 - The user shall be allowed to specify the starting website (if none is specified, will be used).
WCRI 100 - The user shall have the ability to perform a web crawl based on a starting website.

Bolded requirements represent Critical Project Requirements.

ARI 109 - The application shall be able to be closed without having to press Control-C at the command line.
ARI 108 - The application shall be able to be minimized.
ARI 107 - The application shall be platform independent.
ARI 106 - The application's menu bar shall contain shortcut keys.
ARI 105 - The application's Help menu shall contain, at a minimum, an About menu item.
ARI 104 - The application shall allow the user to save entity search results.
without having to perform any setup steps)
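As an illustration of WCRI 109, a compliant crawler consults a host's robots.txt rules before fetching any page from that host. A minimal sketch using Python's standard urllib.robotparser; the rules and URLs shown here are hypothetical, and a real crawler would fetch robots.txt from each host it visits:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content for illustration only; in a real crawl
# these lines would be downloaded from http://<host>/robots.txt.
rules = [
    "User-agent: *",
    "Disallow: /private/",
]

parser = RobotFileParser()
parser.parse(rules)

# Check each candidate URL against the host's rules before fetching it.
print(parser.can_fetch("*", "http://example.com/index.html"))  # True (allowed)
print(parser.can_fetch("*", "http://example.com/private/x"))   # False (disallowed)
```

A crawler would typically cache one parser per host so robots.txt is fetched only once per site, keeping with WCRI 110's goal of limiting load on any individual web host.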