Q: What are the features of web crawlers?

Best Answer

* User satisfaction from search-directed access to resources and easier browsability (via the maintenance and improvement of the Web that result from the crawlers' analyses).

* Reduced network traffic in the document space as a result of search-directed access.

* Carrying out archiving/mirroring and populating caches (and producing the associated benefits).

* Monitoring relevant areas of Web-space and informing users of changes to them.

* "Schooling" network traffic into localised neighbourhoods once mirroring, archiving or caching has been put in place.

* Multi-functional robots can perform a number of the above tasks, perhaps simultaneously.


Wiki User

11y ago
More answers

Wiki User

12y ago

A web crawler (also known as a web spider or web robot) is a software program or automated script that browses the World Wide Web in a methodical, automated manner in order to produce or populate an index or a database.
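
To make that definition concrete, here is a minimal sketch (Python, standard library only, not production code) of a crawler-style program that fetches a list of pages and populates a tiny word-to-URL index; the start URL is just a placeholder.

```python
import re
import urllib.request

def fetch(url):
    """Download a page and return its text, or '' on failure."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read().decode("utf-8", errors="replace")
    except Exception:
        return ""

def build_index(urls):
    """Map each word to the set of URLs it appears on."""
    index = {}
    for url in urls:
        text = fetch(url)
        for word in set(re.findall(r"[a-z0-9]+", text.lower())):
            index.setdefault(word, set()).add(url)
    return index

if __name__ == "__main__":
    index = build_index(["https://example.com/"])  # placeholder seed URL
    print(index.get("example", set()))
```

A real crawler would also throttle its requests, respect robots.txt, and follow the links it discovers, as the related answers below describe.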


Related questions

What is the part of the search engine responsible for collecting data on the web?

Web crawlers are charged with the responsibility of visiting webpages and reporting what they find to the search engines. Google has its own web crawlers (also known as robots), which it calls Googlebots. Web crawlers have also been referred to as spiders, although I think this term is now more commonly replaced by "bots".
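
As a side note, well-behaved crawlers such as Googlebot consult a site's robots.txt file before visiting its pages. A small illustrative sketch using Python's standard urllib.robotparser (the site and URL are placeholders):

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")  # placeholder site
rp.read()

url = "https://example.com/some/page.html"  # placeholder page
if rp.can_fetch("Googlebot", url):
    print("allowed to crawl:", url)
else:
    print("disallowed by robots.txt:", url)
```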


What web crawlers use PHP?

PHPCrawl and PHP Parallel Web Scraper are two examples; I'm sure there are many others.


What search engine uses web crawlers?

The most well known are Google, Bing, and Yahoo.


What are crawlers and for what purpose are they used?

A crawler is a computer program whose purpose is to visit web sites and do something with the information it finds there. Many crawlers crawl for search engines, indexing whatever page they visit; such crawlers often return several times per day to check for updates. Another use is to gather information such as e-mail addresses or whatever else suits the owner. These crawlers check all the links on a page and visit them after collecting the information, so they never stop but keep crawling over (the public parts of) the Web.
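
The link-following behaviour described above can be sketched in a few lines of Python (standard library only; the seed URL and page limit are arbitrary placeholders):

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
import urllib.request

class LinkParser(HTMLParser):
    """Collects the href targets of <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, max_pages=10):
    queue, seen = deque([seed]), {seed}
    while queue and len(seen) <= max_pages:
        url = queue.popleft()
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except Exception:
            continue
        parser = LinkParser()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return seen

print(crawl("https://example.com/"))  # placeholder seed
```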


When was The Crawlers created?

The Crawlers was created in 1954.


What is a sitemap?

A sitemap is a file placed on a web server that lists the site's pages so that search crawlers can discover them. Google Sitemaps, for example, is an experiment in web crawling that uses Sitemaps to inform and direct Google's search crawlers. Webmasters can place a Sitemap-formatted file on their web server, which lets Google's crawlers find out which pages are present and which have recently changed, and crawl the site accordingly. Google Sitemaps is intended for all website owners, from those with a single web page to companies with millions of ever-changing pages.


What is a Google Sitemap?

Google Sitemaps is an experiment in web crawling that uses Sitemaps to inform and direct Google's search crawlers. Webmasters can place a Sitemap-formatted file on their web server, which lets Google's crawlers find out which pages are present and which have recently changed, and crawl the site accordingly. Google Sitemaps is intended for all website owners, from those with a single web page to companies with millions of ever-changing pages.
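
For illustration, a Sitemap-formatted file in the sitemaps.org protocol is a small XML document listing each page's location and last-modified date. A minimal sketch in Python that writes such a file (the URLs and dates are made up):

```python
# Placeholder pages: (location, last-modified date)
pages = [
    ("https://example.com/", "2012-01-01"),
    ("https://example.com/about.html", "2012-02-15"),
]

lines = ['<?xml version="1.0" encoding="UTF-8"?>',
         '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">']
for loc, lastmod in pages:
    lines += ["  <url>",
              f"    <loc>{loc}</loc>",
              f"    <lastmod>{lastmod}</lastmod>",
              "  </url>"]
lines.append("</urlset>")

# Write the sitemap where crawlers can fetch it from the web server.
with open("sitemap.xml", "w", encoding="utf-8") as f:
    f.write("\n".join(lines))
```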


When was Creepy Crawlers created?

Creepy Crawlers was created in 1964.


When was The Sky Crawlers created?

The Sky Crawlers was created in 2001.


How does one raise night crawlers?

How do you raise night crawlers


How does Lycos fetch submitted documents?

Lycos fetches submitted documents by sending out automated web crawlers, also known as spiders, to systematically browse and index content from publicly accessible web pages. These crawlers follow links from one page to another, collecting information to be stored in the search engine's database for retrieval in response to user queries.


What is the duration of The Sky Crawlers?

The duration of The Sky Crawlers is 2.02 hours.