Web archiving is the process of collecting portions of the World Wide Web and ensuring the collection is preserved in an archive, such as an archive site, for future researchers, historians, and the public. Because of the massive size of the Web, web archivists typically employ web crawlers for automated collection. The largest web archiving organization based on a crawling approach is the Internet Archive, which strives to maintain an archive of the entire Web. National libraries, national archives, and various consortia of organizations are also involved in archiving culturally important Web content. Commercial web archiving software and services are also available to organizations that need to archive their own web content for legal or regulatory purposes.
Because websites are often copyrighted, web archiving must address legal and social issues, and the global nature of the Web makes those issues especially complex.
Collecting the Web
Methods of collection
The most common web archiving technique uses web crawlers to automate the process of collecting web pages. Web crawlers typically view web pages in the same manner as users with a browser see the Web, and therefore provide a comparatively simple method of remotely harvesting web content.
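To make the crawling approach concrete, the following is a minimal sketch of an archiving crawler written in Python. It is not the code of any of the crawlers listed below; the seed URL, output directory, and page limit are hypothetical, and a production crawler would add politeness delays, robots.txt checks, and a proper storage format such as WARC.

```python
import hashlib
import pathlib
import time
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

ARCHIVE_DIR = pathlib.Path("archive")  # hypothetical output directory


class LinkExtractor(HTMLParser):
    """Collects href targets so the crawl frontier can be extended."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def archive_page(url):
    """Fetch one page and store the raw bytes with a capture timestamp."""
    with urlopen(url) as response:
        body = response.read()
    timestamp = time.strftime("%Y%m%d%H%M%S", time.gmtime())
    name = hashlib.sha1(url.encode()).hexdigest()
    ARCHIVE_DIR.mkdir(exist_ok=True)
    (ARCHIVE_DIR / f"{name}-{timestamp}.html").write_bytes(body)
    parser = LinkExtractor()
    parser.feed(body.decode("utf-8", errors="replace"))
    return [urljoin(url, link) for link in parser.links]


def crawl(seed, limit=50):
    """Breadth-first crawl from a seed URL, bounded to avoid crawler traps."""
    frontier, seen = [seed], set()
    while frontier and len(seen) < limit:
        url = frontier.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            frontier.extend(archive_page(url))
        except (OSError, ValueError):
            pass  # skip pages that fail to download or parse


crawl("https://example.org/")
```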
Examples of web crawlers frequently used for web archiving include:
Heritrix is the Internet Archive's web crawler, which was specially designed for web archiving. It is open-source and written in Java. The main interface is accessible using a web browser, and there is a command-line tool that can optionally be used to initiate crawls.
Heritrix was developed jointly by the Internet Archive and the Nordic national libraries on specifications written in early 2003. The first official release was in January 2004, and it has since been continually improved by members of the Internet Archive and other interested third parties.
A number of organizations and national libraries are using Heritrix, among them:
- Library and Archives Canada
- Bibliothèque nationale de France
- National and University Library of Iceland
- National Library of New Zealand
- Documenting Internet2
HTTrack is a free and open source Web crawler and offline browser, developed by Xavier Roche and licensed under the GNU General Public License, that allows one to download World Wide Web sites from the Internet to a local computer. By default, HTTrack arranges the downloaded site by the original site's relative link-structure. The downloaded (or "mirrored") website can be browsed by opening a page of the site in a browser.
HTTrack can also update an existing mirrored site and resume interrupted downloads. HTTrack is fully configurable by options and by filters (include/exclude), and has an integrated help system. There is a basic command-line version and two GUI versions (WinHTTrack and WebHTTrack); the former can be part of scripts and cron jobs.
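Because the command-line version can run unattended, a mirroring job can be scheduled from a script or cron entry. The sketch below simply invokes the httrack binary from Python; the site, mirror directory, and filter pattern are hypothetical, and the exact options should be checked against the documentation for the installed HTTrack version.

```python
import subprocess

# Hypothetical target site and mirror directory; adjust as needed.
SITE = "https://example.org/"
MIRROR_DIR = "/var/archives/example.org"

# -O sets the output path; the "+*.example.org/*" filter keeps the crawl
# on the original domain; --update refreshes an existing mirror.
# Verify these options against your HTTrack version before relying on them.
cmd = ["httrack", SITE, "-O", MIRROR_DIR, "+*.example.org/*", "--update"]
subprocess.run(cmd, check=True)
```

Run weekly from cron, such a script keeps the mirror up to date without manual intervention.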
There are numerous services that may be used to archive web resources "on-demand," using web crawling techniques:
- WebCite, a service specifically for scholarly authors, journal editors and publishers to permanently archive and retrieve cited Internet references (Eysenbach and Trudel, 2005).
- Archive-It, a subscription service, allows institutions to build, manage and search their own web archive.
- Hanzo Archives offer commercial web archiving tools and services, implementing an archive policy for web content and enabling electronic discovery, litigation support or regulatory compliance.
Database archiving refers to methods for archiving the underlying content of database-driven websites. It typically requires the extraction of the database content into a standard schema, often using XML. Once stored in that standard format, the archived content of multiple databases can then be made available using a single access system. This approach is exemplified by the DeepArc and Xinq tools developed by the Bibliothèque nationale de France and the National Library of Australia respectively. DeepArc enables the structure of a relational database to be mapped to an XML schema, and the content exported into an XML document. Xinq then allows that content to be delivered online. Although the original layout and behavior of the website cannot be preserved exactly, Xinq does allow the basic querying and retrieval functionality to be replicated.
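The relational-to-XML mapping at the heart of this approach can be illustrated with a short, self-contained Python sketch; it is independent of DeepArc and Xinq, and the table and column names are invented for the example.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical schema standing in for a real database-driven site.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (id INTEGER, title TEXT, body TEXT)")
conn.execute("INSERT INTO articles VALUES (1, 'Hello', 'First archived article')")

# Map each row of the relational table onto elements of an XML document.
root = ET.Element("articles")
for article_id, title, body in conn.execute("SELECT id, title, body FROM articles"):
    item = ET.SubElement(root, "article", id=str(article_id))
    ET.SubElement(item, "title").text = title
    ET.SubElement(item, "body").text = body

# The resulting XML document can then be loaded into a common access system.
ET.ElementTree(root).write("articles.xml", encoding="utf-8", xml_declaration=True)
```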
Transactional archiving is an event-driven approach, which collects the actual transactions which take place between a web server and a web browser. It is primarily used as a means of preserving evidence of the content which was actually viewed on a particular website, on a given date. This may be particularly important for organizations which need to comply with legal or regulatory requirements for disclosing and retaining information.
A transactional archiving system typically operates by intercepting every HTTP request to, and response from, the web server, filtering each response to eliminate duplicate content, and permanently storing the responses as bitstreams. A transactional archiving system requires the installation of software on the web server, and cannot therefore be used to collect content from a remote website.
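The following is a minimal sketch of that idea, written as Python WSGI middleware rather than as any commercial product: each response body is hashed so that duplicate content is filtered out, and new content is written to disk as a bitstream together with the requested path and a capture timestamp. The storage directory and class name are hypothetical.

```python
import hashlib
import pathlib
import time

STORE = pathlib.Path("transaction_archive")  # hypothetical storage location


class TransactionalArchiver:
    """WSGI middleware sketch: records each distinct response body served."""

    def __init__(self, app):
        self.app = app
        self.seen = set()  # hashes of response bodies already stored

    def __call__(self, environ, start_response):
        body = b"".join(self.app(environ, start_response))
        digest = hashlib.sha256(body).hexdigest()
        if digest not in self.seen:  # eliminate duplicate content
            self.seen.add(digest)
            STORE.mkdir(exist_ok=True)
            stamp = time.strftime("%Y%m%d%H%M%S", time.gmtime())
            record = STORE / f"{stamp}-{digest[:12]}.bin"
            # Store the response bitstream together with the path that produced it.
            record.write_bytes(environ.get("PATH_INFO", "").encode() + b"\n" + body)
        return [body]
```

In practice the middleware would simply wrap the existing WSGI application object on the server being archived.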
Difficulties and limitations
Web archives which rely on web crawling as their primary means of collecting the Web are influenced by the difficulties of web crawling:
- The robots exclusion protocol may request that crawlers not access portions of a website. Some web archivists may ignore the request and crawl those portions anyway (a robots.txt check is sketched after this list).
- Large portions of a web site may be hidden in the Deep Web. For example, the results page behind a web form lies in the deep web because a crawler cannot follow a link to the results page.
- Some web servers may return a different page for a web crawler than it would for a regular browser request. This is typically done to fool search engines into sending more traffic to a website.
- Crawler traps (e.g., calendars) may cause a crawler to download an infinite number of pages, so crawlers are usually configured to limit the number of dynamic pages they crawl.
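The robots.txt check mentioned in the first point is straightforward to perform; the sketch below uses Python's standard-library robot parser against a hypothetical site and user-agent string, and honouring the result remains a policy decision for the archivist.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical site; an archiving crawler would check each host it visits.
robots = RobotFileParser("https://example.org/robots.txt")
robots.read()  # fetch and parse the robots.txt file

for url in ["https://example.org/", "https://example.org/private/report.html"]:
    if robots.can_fetch("my-archiving-bot", url):
        print("allowed:", url)
    else:
        print("excluded by robots.txt:", url)  # an archivist may still choose to crawl it
```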
The Web is so large that crawling a significant portion of it takes a large amount of technical resources. The Web is changing so fast that portions of a website may change before a crawler has even finished crawling it.
Not only must web archivists deal with the technical challenges of web archiving, they must also contend with intellectual property laws. Peter Lyman (2002) states that "although the Web is popularly regarded as a public domain resource, it is copyrighted; thus, archivists have no legal right to copy the Web." However, national libraries in many countries do have a legal right to copy portions of the Web under an extension of legal deposit.
Some publicly accessible, private non-profit web archives, such as WebCite and the Internet Archive, allow content owners to hide or remove archived content that they do not want the public to access. Other web archives are only accessible from certain locations or have regulated usage. WebCite's FAQ also cites a recent lawsuit against the caching mechanism, which Google won.
Aspects of Web curation
Web curation, like any digital curation, entails:
- Collecting verifiable Web assets
- Providing Web asset search and retrieval
- Certification of the trustworthiness and integrity of the collection content
- Semantic and ontological continuity and comparability of the collection content
Thus, in addition to the discussion of methods for collecting the Web, methods of providing access, certification, and organization must also be included. A set of popular tools addresses these curation steps:
A suite of tools for Web curation by the International Internet Preservation Consortium:
- Heritrix - collecting Web assets
- NutchWAX - search Web archive collections
- Wayback (open source Wayback Machine) - search and navigate Web archive collections using NutchWAX
- Web Curator Tool - selection and management of Web collections
Other open source tools for manipulating web archives:
- WARC Tools - for creating, reading, parsing, and manipulating web archives programmatically (a small reading example follows this list)
- Search Tools - for indexing and searching full-text and metadata within web archives
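As an illustration of working with web archive data programmatically, the sketch below reads response records from a WARC file using the third-party Python library warcio (shown here as a stand-in, not as part of the IIPC WARC Tools); the file name is hypothetical.

```python
from warcio.archiveiterator import ArchiveIterator

# Hypothetical WARC file produced by a crawler such as Heritrix.
with open("crawl-00000.warc.gz", "rb") as stream:
    for record in ArchiveIterator(stream):
        if record.rec_type == "response":  # skip request and metadata records
            url = record.rec_headers.get_header("WARC-Target-URI")
            payload = record.content_stream().read()
            print(url, len(payload), "bytes")
```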
An example of a web archive
The Internet Archive
The Internet Archive (IA) is a nonprofit organization dedicated to building and maintaining a free and openly accessible online digital library, which includes an archive of the World Wide Web. With offices located in the Presidio in San Francisco, California, and data centers in San Francisco, Redwood City, and Mountain View, California, the archive includes "snapshots of the World Wide Web" (archived copies of pages, taken at various points in time), software, movies, books, and audio recordings. To ensure the stability and endurance of the Internet Archive, its collection is mirrored at the Bibliotheca Alexandrina in Egypt, so far the only library in the world to hold such a mirror. The IA makes its collections available at no cost to researchers, historians, and scholars. It is a member of the American Library Association and is officially recognized by the State of California as a library.
See also
- Digital preservation
- Internet Archive
- Library of Congress Digital Library project
- National Digital Information Infrastructure and Preservation Program
- Web crawling
Notes
- The Internet Archive at the New Library of Alexandria, International School of Information Science (ISIS). Retrieved November 22, 2008.
- "Internet Archive officially a library" Retrieved November 22, 2008.
- web.archive.org Retrieved November 22, 2008.
References
- Brown, A. 2006. Archiving Websites: a practical guide for information management professionals. Facet Publishing. ISBN 1-85604-553-6
- Brügger, N. 2005. Archiving Websites. General Considerations and Strategies The Centre for Internet Research. ISBN 87-990507-0-6. Retrieved November 11, 2008.
- Day, M. 2003. Preserving the Fabric of Our Lives: A Survey of Web Preservation Initiatives Research and Advanced Technology for Digital Libraries: Proceedings of the 7th European Conference (ECDL), 461–472. Retrieved November 11, 2008.
- Eysenbach, G. and M. Trudel. 2005. Going, going, still there: using the WebCite service to permanently archive cited web pages Journal of Medical Internet Research 7 (5). Retrieved November 11, 2008.
- Fitch, Kent. 2003. "Web site archiving - an approach to recording every materially different response produced by a website" Ausweb 03. Retrieved November 11, 2008.
- Lyman, P. 2002. Archiving the World Wide Web Building a National Strategy for Preservation: Issues in Digital Media Archiving. Retrieved November 11, 2008.
- Masanès, J. (ed.). 2006. Web Archiving. Springer-Verlag. ISBN 3-540-23338-5
External links
All links retrieved June 11, 2020.
- International Internet Preservation Consortium (IIPC) - International consortium whose mission is to acquire, preserve, and make accessible knowledge and information from the Internet for future generations
- International Web Archiving Workshop (IWAW) - Annual workshop that focuses on web archiving
- The Library of Congress, Digital Collections and Programs
- Library of Congress, Web Capture
- Web archiving bibliography - Lengthy list of web-archiving resources