IT-E-02 Web Harvesting

As the amount of information on the Web grows, that information becomes ever harder to
keep track of and use. Search engines are a big help, but they can do only part of the work, and
they are hard-pressed to keep up with daily changes.
Consider that even when you use a search engine to locate data, you still have to do the
following to capture the information you need: scan the content until you find the
information, mark it (usually by highlighting with a mouse), switch to another
application (such as a spreadsheet, database or word processor), and paste the information
into that application.

A better solution, especially for companies that are aiming to exploit a broad swath of data
about markets or competitors, lies with Web harvesting tools.
Web harvesting software automatically extracts information from the Web, picking up
where search engines leave off and doing the work they can't. Extraction tools automate
the reading, copying and pasting necessary to collect information for analysis, and they have
proved useful for pulling together information on competitors, prices and financial data of all
types.
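
To make that concrete, here is a minimal Python sketch of the read-copy-paste cycle such a tool automates. The HTML fragment is invented, and in practice the page would first be fetched from a live URL (e.g. with urllib):

```python
from html.parser import HTMLParser

class CellExtractor(HTMLParser):
    """Collect the text of every <td> cell on a page."""
    def __init__(self):
        super().__init__()
        self.in_cell = False
        self.cells = []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_cell = True

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_cell = False

    def handle_data(self, data):
        if self.in_cell and data.strip():
            self.cells.append(data.strip())

# Invented page fragment; a real tool would fetch the page first,
# e.g. urllib.request.urlopen(url).read().decode("utf-8").
sample = "<table><tr><td>Acme Corp</td><td>$19.99</td></tr></table>"
parser = CellExtractor()
parser.feed(sample)
print(parser.cells)  # ['Acme Corp', '$19.99'], ready for a spreadsheet or database
```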
There are three ways we can extract more useful information from the Web.
The first technique, Web content harvesting, is concerned directly with the specific content
of documents or their descriptions, such as HTML files, images or e-mail messages. Since most
text documents are relatively unstructured (at least as far as machine interpretation is concerned),
one common approach is to exploit what's already known about the general structure of
documents and map this to some data model.
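
For example, if inspection shows that every press release on a target site follows a fixed headline/date/body layout, that known structure can be mapped onto records. The tag layout and field names below are assumptions for illustration:

```python
import re
from dataclasses import dataclass

@dataclass
class PressRelease:
    headline: str
    date: str
    body: str

# Assumed fragment layout, learned by inspecting the site:
# each release is a headline, a date, then a body paragraph.
PATTERN = re.compile(
    r"<h2>(?P<headline>.*?)</h2>\s*"
    r"<time>(?P<date>.*?)</time>\s*"
    r"<p>(?P<body>.*?)</p>",
    re.DOTALL,
)

def harvest(html: str) -> list[PressRelease]:
    """Map the known page structure onto a simple data model."""
    return [PressRelease(**m.groupdict()) for m in PATTERN.finditer(html)]

sample = "<h2>Q3 results</h2><time>2024-10-01</time><p>Revenue grew...</p>"
print(harvest(sample))
```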
The other approach to Web content harvesting involves trying to improve on the content
searches that tools like search engines perform. This type of content harvesting goes beyond
keyword extraction and the production of simple statistics relating to words and phrases in
documents.
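
One illustrative step in that direction, going beyond raw keyword counts, is weighting each term by how distinctive it is across a collection of documents, as TF-IDF does. A pure-Python sketch:

```python
import math
from collections import Counter

def tf_idf(docs: list[list[str]]) -> list[dict[str, float]]:
    """Score each term in each document by its frequency, weighted
    against how many documents in the collection contain it."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append({
            term: (count / len(doc)) * math.log(n / df[term])
            for term, count in tf.items()
        })
    return scores

docs = [
    "web harvesting extracts web data".split(),
    "search engines index web pages".split(),
]
print(tf_idf(docs))  # terms shared by every document score zero
```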
Another technique, Web structure harvesting, takes advantage of the fact that Web pages
can reveal more information than just their obvious content. Links from other sources that point
to a particular Web page indicate the popularity of that page, while links within a Web page that
point to other resources may indicate the richness or variety of topics covered in that page. This
is like analyzing bibliographical citations: a paper that is often cited in bibliographies and other
papers is usually considered to be important.
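
Taken to its iterative conclusion, this is the idea behind PageRank-style scoring: a page is important if important pages link to it. A simplified sketch over an invented miniature link graph:

```python
def pagerank(links: dict[str, list[str]], damping=0.85, iters=20):
    """Iteratively score pages: each page shares its score among
    the pages it links to (simplified PageRank)."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new[target] = new.get(target, 0) + share
        rank = new
    return rank

# Hypothetical miniature Web: /a and /b both cite /c.
graph = {"/a": ["/c"], "/b": ["/c", "/a"], "/c": ["/a"]}
print(pagerank(graph))
```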
The third technique, Web usage harvesting, uses data recorded by Web servers about user
interactions to help understand user behavior and evaluate the effectiveness of the Web structure.
General access-pattern tracking analyzes Web logs to understand access patterns and
trends in order to identify structural issues and resource groupings.
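
As a sketch, that kind of analysis can start with nothing more elaborate than tallying requests per path in a standard server log; the log lines below are invented examples in Common Log Format:

```python
import re
from collections import Counter

# Common Log Format: host ident user [time] "METHOD path HTTP/x" status size
LOG_LINE = re.compile(r'"[A-Z]+ (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3})')

def access_counts(lines):
    """Tally successful requests per path to expose popular
    resources and dead spots in the site structure."""
    counts = Counter()
    for line in lines:
        m = LOG_LINE.search(line)
        if m and m.group("status").startswith("2"):
            counts[m.group("path")] += 1
    return counts

sample = [
    '1.2.3.4 - - [01/Jan/2024:10:00:00 +0000] "GET /pricing HTTP/1.1" 200 512',
    '1.2.3.4 - - [01/Jan/2024:10:00:05 +0000] "GET /about HTTP/1.1" 404 128',
]
print(access_counts(sample).most_common())
```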
Customized usage tracking analyzes individual trends so that Web sites can be personalized
to specific users. Over time, based on access patterns, a site can be dynamically customized for a
user in terms of the information displayed, the depth of the site structure and the format of the
resources presented.
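
A toy version of such customization: record each user's visits per site section and present that user's habitual sections first. The user names and paths are illustrative only:

```python
from collections import Counter, defaultdict

class UsageTracker:
    """Record per-user page visits and rank site sections by habit."""
    def __init__(self):
        self.history = defaultdict(Counter)

    def record(self, user: str, path: str):
        # Treat the first path segment as the site section.
        section = path.strip("/").split("/")[0] or "home"
        self.history[user][section] += 1

    def preferred_sections(self, user: str, k: int = 3):
        return [s for s, _ in self.history[user].most_common(k)]

tracker = UsageTracker()
for path in ["/news/a", "/news/b", "/sports/c", "/news/d"]:
    tracker.record("alice", path)
print(tracker.preferred_sections("alice"))  # ['news', 'sports']
```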

