Link rot is also called "link death", "link breaking" or "reference rot". A link that no longer works is called a "broken link", "dead link", or "dangling link". Formally, this is a form of dangling reference: the target of the reference no longer exists.
One of the most common reasons for a broken link is that the web page to which it points no longer exists. This frequently results in a 404 error, which indicates that the web server responded but the specific page could not be found. Another type of dead link occurs when the server that hosts the target page stops working or relocates to a new domain name. The browser may return a DNS error or display a site unrelated to the content originally sought. The latter can occur when a domain name lapses and is reregistered by another party. Other reasons for broken links include:
Websites can be restructured or redesigned, or the underlying technology can be changed, altering or invalidating large numbers of inbound or internal links.
Many news sites keep articles freely accessible for only a short time period, and then move them behind a paywall. This causes a significant loss of supporting links in sites discussing news events and using media sites as references.
Content may automatically expire after a certain time period.
Content may be intentionally removed by the owner.
A server may be upgraded and code (e.g. PHP) may no longer function properly.
Links may be removed as a result of legal action or court order.
Search results from social media such as Facebook and Tumblr are prone to link rot because of frequent changes in user privacy settings, the deletion of accounts, search results pointing to a dynamic page whose contents differ from the cached result, or the deletion of links or photos.
Links can contain ephemeral, user-specific information such as session or login data. Because these are not universally valid, the result can be a broken link.
A website may be closed or taken down, invalidating the links which are pointing to it.
A website might change its domain name. Links pointing to the old name might then become invalid.
Dead links can occur on the authoring side, when website content is assembled from Internet sources and deployed without properly verifying the link targets.
With the introduction of new private gTLDs, some top-level domains such as .mcdonalds and .xperia have been revoked, invalidating any links that used them.
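The session-data problem above can be mitigated at authoring time by stripping ephemeral query parameters before a URL is cited. A minimal sketch in Python; the parameter names below are illustrative assumptions, not a standard list:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Illustrative set of ephemeral, user-specific parameters
# (an assumption, not an exhaustive or standard list).
EPHEMERAL_PARAMS = {"sessionid", "sid", "phpsessid", "jsessionid", "token"}

def strip_ephemeral(url: str) -> str:
    """Remove session/login query parameters that make a link user-specific."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k.lower() not in EPHEMERAL_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(kept), parts.fragment))
```

For example, `strip_ephemeral("https://example.com/a?id=5&sessionid=abc")` keeps only the stable `id` parameter, so the cited link remains valid for other readers.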
The 404 "Not Found" response is familiar to even the occasional web user. A number of studies have examined the prevalence of link rot on the web, in academic literature, and in digital libraries. In a 2003 experiment, Fetterly et al. discovered that about one link out of every 200 disappeared each week from the Internet. McCown et al. (2005) discovered that half of the URLs cited in D-Lib Magazine articles were no longer accessible 10 years after publication, and other studies have shown link rot in academic literature to be even worse (Spinellis, 2003; Lawrence et al., 2001). Nelson and Allen (2002) examined link rot in digital libraries and found that about 3% of the objects were no longer accessible after one year. In 2014, bookmarking site Pinboard's owner Maciej Cegłowski reported a "pretty steady rate" of 5% link rot per year.
A 2014 Harvard Law School study by Jonathan Zittrain, Kendra Albert and Lawrence Lessig, determined that approximately 50% of the URLs in U.S. Supreme Court opinions no longer link to the original information. They also found that in a selection of legal journals published between 1999 and 2011, more than 70% of the links no longer functioned as intended. A 2013 study in BMC Bioinformatics analyzed nearly 15,000 links in abstracts from Thomson Reuters' Web of Science citation index and found that the median lifespan of web pages was 9.3 years, and just 62% were archived. In August 2015 Weblock analyzed more than 180,000 links from references in the full-text corpora of three major open access publishers and found that overall 24.5% of links cited were no longer available.
Broken links can be discovered manually or automatically. Automated methods, including plug-ins for WordPress, Drupal and other content management systems, can detect broken URLs, as can dedicated tools such as Xenu's Link Sleuth. However, a URL that returns an HTTP 200 (OK) response may still point to a page whose content has changed and is no longer relevant, so manual checking remains necessary. Some web servers also return a soft 404, reporting to clients that a link works even though it does not. Bar-Yossef et al. (2004) developed a heuristic for automatically discovering soft 404s.
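The soft-404 heuristic can be sketched as follows: fetch the target URL and a deliberately nonexistent sibling path, and if the server answers 200 to both with near-identical bodies, the "working" link is probably a soft 404. This is a simplified sketch in the spirit of Bar-Yossef et al. (2004), not their exact algorithm; the similarity threshold is an illustrative assumption:

```python
import uuid
from difflib import SequenceMatcher

def bogus_sibling(url: str) -> str:
    """Build a sibling URL that almost certainly does not exist,
    to be fetched for comparison against the target page."""
    return url.rstrip("/") + "/" + uuid.uuid4().hex

def looks_like_soft_404(page_status, page_body, bogus_status, bogus_body,
                        similarity_threshold=0.9):
    """Classify a page as a probable soft 404.

    Takes the status code and body of the target page and of a
    deliberately bogus sibling path. The 0.9 threshold is an
    illustrative assumption, not a published parameter."""
    if page_status != 200:
        return False   # a real error code, not a soft 404
    if bogus_status != 200:
        return False   # the server reports missing pages correctly
    ratio = SequenceMatcher(None, page_body, bogus_body).ratio()
    return ratio >= similarity_threshold
```

In practice the two bodies would be fetched over HTTP; the classification itself needs only the responses, so it can be tested offline.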
There are numerous solutions for tackling broken links: Some work to prevent them in the first place, while others try to resolve them when they have occurred. There are also numerous tools that have been developed to help combat link rot.
Carefully select and implement hyperlinks, and verify them regularly after publication. Best practices include linking to primary rather than secondary sources and prioritizing stable sites. McCown et al. (2005) suggest avoiding URL citations that point to resources on researchers' personal pages.
Whenever possible, use persistent identifiers (URLs designed for durability) such as ARKs, DOIs, Handle System references, and PURLs.
Avoid linking to PDF documents if possible. Because PDFs are documents rather than web pages, their content can change without notice, and their names are more likely to contain characters such as spaces that must be translated into safe codes for URLs. Large PDFs may also download slowly and cause a timeout error.
Avoid linking to pages deep in a website, a practice known as deep linking.
Use web archiving services (for example, WebCite) to permanently archive and retrieve cited Internet references (Eysenbach and Trudel, 2005).
Never change URLs and never remove pages. If there is a reason to no longer have a page, such as a news site redacting a story, replace it with a message explaining its removal.
IBM's Peridot attempts to automatically fix broken links.
Permalinking prevents broken links by guaranteeing that the content will not move for the foreseeable future. Another form of permalinking is linking to a permalink that then redirects to the actual content, ensuring that even if the content itself is moved, links pointing to the resource stay intact.
Design URLs (for example, semantic URLs) such that they won't need to change when a different person takes over maintenance of a document or when different software is used on the server.
The Linkgraph widget determines the URL of the correct page from the old broken URL, using historical location information.
The Google 404 Widget attempts to "guess" the correct URL, and also provides the user with a search box to find the correct page.
When a user receives a 404 response, the Google Toolbar attempts to assist the user in finding the missing page.
To combat link rot, web archivists are actively engaged in collecting the Web, or particular portions of it, and ensuring the collection is preserved in an archive, such as an archive site, for future researchers, historians, and the public. The goal of the Internet Archive is to maintain an archive of the entire Web, taking periodic snapshots of pages that can then be accessed for free via the Wayback Machine. In January 2013 the organization announced that it had reached the milestone of 240 billion archived URLs. National libraries, national archives and other organizations are also involved in archiving culturally important Web content.
Individuals may use a number of tools that allow them to archive web resources that may go missing in the future:
The Wayback Machine, at the Internet Archive, is a free service that archives old web pages. It does not archive websites whose owners have stated they do not want their website archived.
WebCite, a tool specifically for scholarly authors, journal editors and publishers to permanently archive "on-demand" and retrieve cited Internet references (Eysenbach and Trudel, 2005).
Archive.is, an archive site which stores snapshots of web pages. It retrieves one page at a time, but unlike WebCite, it includes Web 2.0 sites such as Google Maps and Twitter.
Perma.cc, which is supported by the Harvard Law School together with a broad coalition of university libraries, takes a snapshot of a URL's content and returns a permanent link.
The Hiberlink project, a collaboration between the University of Edinburgh, the Los Alamos National Laboratory and others, is working to measure "reference rot" in online academic articles, and also to what extent Web content has been archived. A related project, Memento, has established a technical standard for accessing online content as it existed in the past.
Some social bookmarking websites allow users to make online clones of any web page on the internet, creating a copy at an independent url which remains online even if the original page goes down.
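The Internet Archive exposes an availability API that reports the closest archived snapshot of a page, which a link checker can fall back to when a cited URL has died. A minimal sketch of building the query and reading its JSON response; the endpoint is the Archive's public availability API, but the field handling below is a simplified assumption about its response shape:

```python
from urllib.parse import urlencode

def wayback_query(url, timestamp=None):
    """Build a query URL for the Internet Archive's availability API.
    timestamp (optional) is a YYYYMMDDhhmmss string; the API returns
    the snapshot closest to it."""
    params = {"url": url}
    if timestamp:
        params["timestamp"] = timestamp
    return "https://archive.org/wayback/available?" + urlencode(params)

def closest_snapshot(api_response):
    """Extract the snapshot URL from a decoded availability-API JSON
    response, or return None if the page was never archived."""
    snap = api_response.get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap and snap.get("available") else None
```

A dead citation could then be repaired by replacing it with the returned `web.archive.org` snapshot URL.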
Bar-Yossef, Ziv; Broder, Andrei Z.; Kumar, Ravi; Tomkins, Andrew (2004). "Sic transit gloria telae: towards an understanding of the Web's decay". Proceedings of the 13th international conference on World Wide Web. pp. 328-337. doi:10.1145/988672.988716.
Dellavalle, Robert P.; Hester, Eric J.; Heilig, Lauren F.; Drake, Amanda L.; Kuntzman, Jeff W.; Graber, Marla; Schilling, Lisa M. (2003). "Going, Going, Gone: Lost Internet References". Science. 302 (5646): 787-788. doi:10.1126/science.1088234. PMID 14593153.
Sellitto, Carmine (2005). "The impact of impermanent Web-located citations: A study of 123 scholarly conference publications". Journal of the American Society for Information Science and Technology. 56 (7): 695-703. CiteSeerX 10.1.1.473.2732. doi:10.1002/asi.20159.
Nelson, Michael L.; Allen, B. Danette (2002). "Object Persistence and Availability in Digital Libraries". D-Lib Magazine. 8 (1). doi:10.1045/january2002-nelson.
Zittrain, Jonathan; Albert, Kendra; Lessig, Lawrence (12 June 2014). "Perma: Scoping and Addressing the Problem of Link and Reference Rot in Legal Citations". Legal Information Management. 14 (2): 88-99. doi:10.1017/S1472669614000255.