Wednesday, February 4, 2009

Crawler Search Engines

A crawler search engine, like Google, ‘crawls’ or ‘spiders’ through your website and all its related links: the spider visits your website, reads all the pages, and follows all the links. Spiders usually return to your website every month or so to look for changes. Everything the spider finds goes into an index, which is like a large catalogue containing a copy of every webpage the spiders find in cyberspace. If your webpage changes, the spider should find the change and update the index.

When you type a keyword or phrase into a crawler-based search engine, the search engine software sifts through the millions of pages stored in the index and gives you the results it believes are most relevant to your query. Most crawler-based search engines work this way, with minor differences in software, indexing, and so on. That is why typing the same query into different search engines can give varied results.
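The crawl-then-index process described above can be sketched in a few lines of Python. This is only a toy illustration: the pages, their text, and their links below are made up, and a real spider would fetch live pages over the network rather than read from a dictionary.

```python
from collections import defaultdict

# A toy "web": page URL -> (page text, outgoing links).
# These pages and links are invented for illustration only.
WEB = {
    "a.html": ("search engines crawl the web", ["b.html"]),
    "b.html": ("spiders follow links and index pages", ["c.html"]),
    "c.html": ("the index is a catalogue of pages", []),
}

def crawl(start):
    """Visit a page, follow its links, and build an inverted index."""
    index = defaultdict(set)      # word -> set of pages containing it
    seen, queue = set(), [start]
    while queue:
        url = queue.pop(0)
        if url in seen:
            continue              # don't revisit pages we've already read
        seen.add(url)
        text, links = WEB[url]
        for word in text.split():
            index[word].add(url)  # record every page each word appears on
        queue.extend(links)       # follow the page's links, like a spider
    return index

def search(index, word):
    """Return the pages whose text contains the query word."""
    return sorted(index.get(word, set()))

index = crawl("a.html")
print(search(index, "index"))     # pages that mention the word "index"
```

Real engines differ mainly in how they rank the matching pages, which is why the same query can return different results on different engines.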

Directories differ from crawler-based search engines in that you, as a webmaster, submit a short description of your site, which is then categorised by humans. Search results are based on the description you submitted rather than on the pages themselves.

Hybrid search engines are a combination of the two types of search engines listed above.

1 Comment:

  1. khansa said...
    nice... more knowledge to be found here
