URL normalization is the process by which URLs are modified and standardized in a consistent manner. The goal of the normalization process is to transform a URL into a normalized URL so it is possible to determine if two syntactically different URLs may be equivalent.
Search engines employ URL normalization to reduce the indexing of duplicate pages. Web crawlers perform URL normalization to avoid crawling the same resource more than once. Web browsers may perform normalization to determine whether a link has been visited or whether a page has been cached.
There are several types of normalization that may be performed. Some of them always preserve semantics, while others may not.
Decoding percent-encoded triplets of unreserved characters: percent-encoded triplets in the ranges of ALPHA (%41–%5A and %61–%7A), DIGIT (%30–%39), hyphen (%2D), period (%2E), underscore (%5F), or tilde (%7E) should not be created by URI producers and, when found in a URI, should be decoded to their corresponding unreserved characters by URI normalizers. Example:
http://example.com/%7Efoo → http://example.com/~foo
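The rule above can be sketched in a few lines of Python. This is a minimal illustration, not a full normalizer: it decodes only triplets that encode unreserved characters and uppercases the hex digits of any triplet it leaves encoded.

```python
import re

# Characters that RFC 3986 classifies as "unreserved": ALPHA, DIGIT,
# hyphen, period, underscore, and tilde.
UNRESERVED = set(
    "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    "abcdefghijklmnopqrstuvwxyz"
    "0123456789-._~"
)

def decode_unreserved(url: str) -> str:
    """Decode %XX triplets that encode unreserved characters; other
    triplets keep their encoding, normalized to uppercase hex."""
    def repl(match: re.Match) -> str:
        char = chr(int(match.group(1), 16))
        return char if char in UNRESERVED else "%" + match.group(1).upper()
    return re.sub(r"%([0-9A-Fa-f]{2})", repl, url)

print(decode_unreserved("http://example.com/%7efoo%2fbar"))
# http://example.com/~foo%2Fbar
```

Note that %2F stays encoded: it represents a reserved character ("/"), so decoding it would change how the path is parsed.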
For http and https URLs, the following normalizations listed in RFC 3986 may result in equivalent URLs, but the standards do not guarantee it:
.." component, e.g. "
b/..", is a symlink to a directory with a different parent, eliding "
b/.." will result in a different path and URL. In rare cases depending on the web server, this may even be true for the root directory (e.g. "
//www.example.com/.." may not be equivalent to "
Applying the following normalizations results in a semantically different URL, although the new URL may refer to the same resource:
Removing or adding "www" as the first domain label: http://example.com/ and http://www.example.com/ may access the same website. Many websites redirect the user from the www to the non-www address, or vice versa. A normalizer may determine whether one of these URLs redirects to the other and normalize all URLs accordingly.
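Once the preferred direction has been established for a site (for example, by observing which form the server redirects to), applying it is mechanical. A sketch, assuming the non-www form has been chosen as canonical by default:

```python
from urllib.parse import urlsplit, urlunsplit

def normalize_www(url: str, prefer_www: bool = False) -> str:
    """Rewrite the "www." domain label in one agreed direction.
    The correct direction must be confirmed per site beforehand."""
    parts = urlsplit(url)
    # parts.hostname is lowercased and omits any port or userinfo,
    # which is acceptable for this sketch.
    host = parts.hostname or ""
    if prefer_www and not host.startswith("www."):
        host = "www." + host
    elif not prefer_www and host.startswith("www."):
        host = host[len("www."):]
    return urlunsplit((parts.scheme, host, parts.path, parts.query, parts.fragment))

print(normalize_www("http://www.example.com/page"))  # http://example.com/page
```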
Some normalization rules may be developed for specific websites by examining URL lists obtained from previous crawls or from web server logs. For example, if one URL repeatedly appears in a crawl log alongside a second, syntactically different URL, we may assume that the two URLs are equivalent and normalize both to one of the two forms.
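The crawl-log idea can be illustrated by grouping logged URLs that fetched identical content and electing one canonical form per group. The URLs, hashes, and shortest-URL tie-break below are all illustrative assumptions, and a real system would also require repeated co-occurrence before trusting such a rule:

```python
from collections import defaultdict

def build_canonical_map(crawl_log):
    """Map each logged URL to a canonical form, where URLs sharing a
    content hash are treated as equivalent. Sketch only: canonical is
    the shortest URL in the group, ties broken alphabetically."""
    groups = defaultdict(set)
    for url, content_hash in crawl_log:
        groups[content_hash].add(url)
    canonical = {}
    for urls in groups.values():
        target = min(urls, key=lambda u: (len(u), u))
        for url in urls:
            canonical[url] = target
    return canonical

# Hypothetical crawl log: (url, hash of fetched content)
log = [
    ("http://example.com/story?id=1", "h1"),
    ("http://example.com/story/1",    "h1"),
    ("http://example.com/about",      "h2"),
]
mapping = build_canonical_map(log)
print(mapping["http://example.com/story?id=1"])  # http://example.com/story/1
```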
Schonfeld et al. (2006) present a heuristic called DustBuster for detecting DUST (different URLs with similar text) rules that can be applied to URL lists. They showed that once the correct DUST rules were found and applied with a normalization algorithm, they were able to find up to 68% of the redundant URLs in a URL list.