From what I read here, this is a (computing)-type article, centered entirely on the specific use of the term "high availability" in the computing world. However, I find that much of what is written here could be perfectly valid for mass transportation systems (like elevators and escalators) as well as for countless industrial systems. Would it be best to rename the article and create another (more general) one using it as a base? Or should this article instead be reshaped a little?
(just wanted to discuss it before being bold...) Helsinkijaui 10:11, 25 September 2007 (UTC)
Is the term commonly used for those systems also, or just that it could be used? In either case, I'm suggesting (below) to merge this article into the one on availability; I'm not sure how that might interact with your suggestion. NathanWalther 17:41, 5 November 2007 (UTC)
I think "high availability" should just be a section of a discussion on availability. Any objections to merging this article into Availability?
NathanWalther 17:36, 5 November 2007 (UTC)
High availability systems are basically an entirely separate set of computer systems; they have at least one redundancy per choke point, or sometimes two. Some high availability systems are two sets of the exact same hardware, with one set to kick in if the other fails. This is fairly separate from just availability. I suggest leaving it. --Preceding unsigned comment added by 18.104.22.168 (talk) 19:21, 16 March 2010 (UTC)
This article is an article on high availability in computing, and as such, I'm going to make some changes to make it clear that this is specific to computing. It would require a lot of work to merge into a general availability article, but it could probably be done. 22.214.171.124 (talk) 02:51, 26 August 2010 (UTC)
True high availability systems extend beyond the computers (hardware, software, networks, etc.) and include the people, processes and procedures. Where the system is available but running at less than ideal performance, it still provides degraded availability, to the extreme where the processes and procedures provide a service with no running computer systems. EFTPOS is an example: a manual paper-based process can be used when the computers or networks fail, meaning the customer can still make use of the system.
The article is overly simplistic and fails to consider that high reliability does not come from reliable components. Contrary to what the article states, once a system has achieved true high reliability, unreliable individual components are no longer a risk to the system as a whole. For instance, computing hardware in air traffic control systems sold today (by the two largest manufacturers) is COTS hardware running the Linux OS and many open-source components, yet it provides some of the most reliable systems in the world. --Preceding unsigned comment added by 126.96.36.199 (talk) 08:39, 21 September 2010 (UTC)
This PDF seems somewhat informative, but there is at least one circular reference, as it points back to Wikipedia, and at a #REDIRECT page at that. It's also not particularly well-written. I personally stopped reading it at about the 4th or 5th bad spelling and 3rd or so bad capitalization. I suppose you could call it a quirk of mine, but I find it too distracting to read on after a few errors like this. -- Joe (talk) 03:53, 23 November 2010 (UTC)
In percentage calculation: "The following table shows the downtime that will be allowed for a particular percentage of availability, presuming that the system is required to operate continuously.". Either the table was never written or an edit has removed the table. Can someone please create or bring back the table? --188.8.131.52 (talk) 19:55, 1 September 2012 (UTC)
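If the table needs to be rebuilt, the underlying arithmetic is simple: under the article's stated assumption of continuous operation, allowed downtime is just (1 − availability) × period length. A minimal sketch (the function name and the choice of a 365.25-day year are my own assumptions, not from the article):

```python
# Allowed downtime per year for a given availability percentage,
# assuming the system is required to operate continuously.
# Assumes a 365.25-day year; other definitions give slightly different values.

MINUTES_PER_YEAR = 365.25 * 24 * 60


def downtime_minutes_per_year(availability_percent):
    """Minutes of downtime per year permitted at the given availability."""
    return (1 - availability_percent / 100) * MINUTES_PER_YEAR


for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct}%: {downtime_minutes_per_year(pct):.2f} minutes/year")
```

Per-month, per-week, and per-day columns follow the same formula with the corresponding period length.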
I think the idea of quantifying the impact here is important, but I doubt the validity of quoting a study from 1996 on the impact of system availability. In 96 ecommerce and eBusiness were marketing concepts that had yet to be fully realized. Yes, there were large business systems and wide use of interconnected technology, but the interdependence of systems that we have today was barely imagined. -- Preceding unsigned comment added by 184.108.40.206 (talk) 10:25, 11 December 2012 (UTC)
This is an area I know quite a bit about - IT systems and high availability - in my professional capacity, and while there's a bit of jargon here and there, overall, this is an admirable, high-level, non-technical discussion of the concept and what's important about it as it relates to information systems. FWIW, high availability as a term of art is singular to the IT industry even though similar concepts apply everywhere, from the Space Shuttle's famous redundancy systems to railroad schedules and power delivery grids. This wiki article is accessible, correct and relevant pretty much as is, and the statistical charts on uptime percentages are great. I would leave this alone unless it turns out to have been plagiarized, even though the references are pretty bunk. 220.127.116.11 (talk) 17:46, 2 January 2013 (UTC)
This article currently duplicates most of Class of 9s, and I see very little scope for that article to be expanded usefully. As such, I'm proposing that article be merged into this one. --me_and 16:51, 11 March 2013 (UTC)
As with the above merge proposal, Nines (engineering) doesn't contain anything that wouldn't fit perfectly well into this article. As such, I'm proposing that article be merged into this one in the same manner as Class of 9s.
I don't think it makes any difference, but I'm noting it here for clarity anyway: there was a previous proposal to merge Nines (engineering) into Class of 9s last August/September. The only two accounts to comment there have both been blocked indefinitely as being sockpuppets of Gamsbart. As such, I think that discussion should be considered null and void. My confusion over what happened there is the reason I didn't propose both merges at once.
High availability is more than just IT-industry jargon. It comes from network theory/telecommunication, e.g. phone lines or fly-by-wire controls. The article at hand doesn't cover these aspects very much (despite its general reference to system design), nor does it touch much upon the OSI/ISO model. Now, percentage calculation can always apply, but it is chiefly the (professional data centre) IT industry that makes use of the "Class of 9s". To that extent, it becomes more specific, thus justifying an article of its own. So, leaving the articles as they are would be preferable. LordZebedee (talk) 10:34, 2 June 2013 (UTC)
The table in the Percentage calculation section has had columns and rows added over the years. Unfortunately, the editors doing this seemed to have used different definitions for the number of days in a year each time they made an addition. The downtime per year column used 365 day years for most values other than the 97% row which used a 365.25 day year. The downtime per month column used a 360 day accounting year for most rows other than 99.8% and 99.95% which used a 359.3 day year and the three nines plus four nines rows used either a 365 or 365.25 day year (the results are the same as whoever added it rounded the values). The rows for eight and nine nines used a 365.24219 day mean tropical year. In other words, there was no consistency from one row or column to the next. The per week and per day columns were okay as those do not depend on the length of the year.
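The inconsistency described above is easy to see numerically: the same availability figure yields different downtime values depending on which year length an editor assumed. A quick illustration (the year definitions listed are the ones mentioned above; 99.9% is just a convenient example):

```python
# How the "downtime per year" value at 99.9% availability shifts with the
# definition of a year -- the source of the row-to-row inconsistency noted above.

YEAR_DEFINITIONS = {
    "365-day year": 365,
    "365.25-day Julian year": 365.25,
    "365.24219-day mean tropical year": 365.24219,
    "360-day accounting year": 360,
}

for name, days in YEAR_DEFINITIONS.items():
    minutes = (1 - 0.999) * days * 24 * 60  # downtime at 99.9% availability
    print(f"{name}: {minutes:.3f} minutes/year")
```

The spread between the 360-day and 365.25-day figures is several minutes, which is why mixing definitions across rows makes the table internally inconsistent.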
I have seen Service level agreements define the year as