A Case for Redundant Arrays of Inexpensive Disks (RAID)

Transcript

David A. Patterson, Garth Gibson, and Randy H. Katz
Computer Science Division, Department of Electrical Engineering and Computer Sciences
571 Evans Hall, University of California, Berkeley, CA 94720

Abstract: Increasing performance of CPUs and memories will be squandered if not matched by a similar performance increase in I/O. While the capacity of Single Large Expensive Disks (SLED) has grown rapidly, the performance improvement of SLED has been modest. Redundant Arrays of Inexpensive Disks (RAID), based on the magnetic disk technology developed for personal computers, offers an attractive alternative to SLED, promising improvements of an order of magnitude in performance, reliability, power consumption, and scalability. This paper introduces five levels of RAIDs, giving their relative cost/performance, and compares RAID to an IBM 3380 and a Fujitsu Super Eagle.

1. Background: Rising CPU and Memory Performance

The users of computers are currently enjoying unprecedented growth in the speed of computers. Gordon Bell said that between 1974 and 1984 single chip computers improved in performance by 40% per year, about twice the rate of minicomputers [Bell 84]. In the following year Bill Joy predicted an even faster growth [Joy 85]:

    MIPS = 2^(Year - 1984)

Mainframe and supercomputer manufacturers, having difficulty keeping pace with the rapid growth predicted by "Joy's Law," cope by offering multiprocessors as their top-of-the-line product.

But a fast CPU does not a fast system make. Gene Amdahl related CPU speed to main memory size using this rule [Siewiorek 82]: each CPU instruction per second requires one byte of main memory. If computer system costs are not to be dominated by the cost of memory, then Amdahl's constant suggests that memory chip capacity should grow at the same rate. Gordon Moore predicted that growth rate over 20 years ago:

    transistors/chip = 2^(Year - 1964)

As predicted by Moore's Law, RAMs have quadrupled in capacity every two [Moore 75] to three years [Myers 86]. Recently the ratio of megabytes of main memory to MIPS has been defined as alpha [Garcia 84], with Amdahl's constant meaning alpha = 1. In part because of the rapid drop of memory prices, main memory sizes have grown faster than CPU speeds, and many machines are shipped today with alphas of 3 or higher.

To maintain the balance of costs in computer systems, secondary storage must match the advances in other parts of the system. A key measure of magnetic disk technology is the growth in the maximum number of bits that can be stored per square inch, that is, the bits per inch in a track times the number of tracks per inch. Called MAD, for maximal areal density, the "First Law in Disk Density" predicts [Frank 87]:

    MAD = 10^((Year - 1971)/10)

Magnetic disk technology has doubled capacity and halved price every three years, in line with the growth rate of semiconductor memory, and in practice between 1967 and 1979 the disk capacity of the average IBM data processing system more than kept up with its main memory [Stevens 81].
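To make these growth rules concrete, the following minimal sketch evaluates the three rules of thumb quoted above (Joy's Law, Moore's Law, and the First Law in Disk Density) for a given year. The rule forms come from the text; the function names and the sample year are our own illustrative assumptions, not part of the original paper.

    # Minimal sketch of the three growth rules quoted above.
    # The formulas are from the text; names and the sample year are ours.

    def joys_law_mips(year):
        """Joy's Law: MIPS = 2^(Year - 1984)."""
        return 2 ** (year - 1984)

    def moores_law_transistors(year):
        """Moore's prediction: transistors/chip = 2^(Year - 1964)."""
        return 2 ** (year - 1964)

    def disk_density_mad(year):
        """First Law in Disk Density: MAD = 10^((Year - 1971)/10)."""
        return 10 ** ((year - 1971) / 10)

    if __name__ == "__main__":
        y = 1988  # the year the paper appeared, used only as an example
        print(joys_law_mips(y), moores_law_transistors(y), disk_density_mad(y))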
Capacity is not the only memory characteristic that must grow rapidly to maintain system balance, since the speed with which instructions and data are delivered to a CPU also determines its ultimate performance. The speed of main memory has kept pace for two reasons: (1) the invention of caches, showing that a small buffer can be managed automatically to contain a substantial fraction of memory references, and (2) the SRAM technology used to build caches, whose speed has improved at the rate of 40% to 100% per year.

In contrast to primary memory technologies, the performance of single large expensive magnetic disks (SLED) has improved at a modest rate. These mechanical devices are dominated by the seek and rotation delays: from 1971 to 1981, the raw seek time for a high-end IBM disk improved by only a factor of two while the rotation time did not change [Harker 81]. Greater density means a higher transfer rate when the information is eventually found, and extra heads can reduce the average seek time, but the raw seek time has only improved at a rate of 7% per year. There is no reason to expect a faster rate in the near future.

To maintain balance, computer systems have been using even larger main memories or solid state disks to buffer some of the I/O activity. This may be a fine solution for applications whose I/O activity has locality of reference and for which volatility is not an issue, but applications dominated by a high rate of random requests for small pieces of data (such as transaction processing) or by a low number of requests for massive amounts of data (such as large simulations running on supercomputers) are facing a serious performance limitation.

2. The Pending I/O Crisis

What is the impact of improving the performance of some pieces of a problem while leaving others the same? Amdahl's answer is now known as Amdahl's Law [Amdahl 67]:

    S = 1 / ((1 - f) + f/k)

where:
    S = the effective speedup;
    f = fraction of work in faster mode; and
    k = speedup while in faster mode.

Suppose that some current applications spend 10% of their time in I/O. Then when computers are 10X faster (according to Bill Joy, in just over three years), Amdahl's Law predicts the effective speedup will be only 5X. When we have computers 100X faster, via evolution of uniprocessors or by multiprocessors, this application will be less than 10X faster, wasting 90% of the potential speedup.
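The arithmetic of the preceding paragraph is easy to check; the sketch below simply evaluates Amdahl's Law as stated above with the 10% I/O example from the text (the function name is ours, not the paper's).

    # Minimal sketch: Amdahl's Law as stated above.
    # S = 1 / ((1 - f) + f / k), where f is the fraction of work that is
    # sped up and k is the speedup of that fraction.

    def effective_speedup(f, k):
        return 1.0 / ((1.0 - f) + f / k)

    if __name__ == "__main__":
        # Application spending 10% of its time in I/O: f = 0.9 of the work
        # (the CPU part) gets faster while the I/O part does not.
        print(effective_speedup(f=0.9, k=10))    # ~5.3x, roughly the 5X in the text
        print(effective_speedup(f=0.9, k=100))   # ~9.2x, less than 10X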

While we can imagine improvements in software file systems via buffering for near term I/O demands, we need innovation to avoid an I/O crisis [Boral 83].

3. A Solution: Arrays of Inexpensive Disks

Rapid improvements in the capacity of large disks have not been the only target of disk designers, since personal computers have created a market for inexpensive magnetic disks. These lower cost disks have lower performance as well as less capacity. Table I below compares the top-of-the-line IBM 3380 model AK4 mainframe disk, the Fujitsu M2361A "Super Eagle" minicomputer disk, and the Conners Peripherals CP 3100 personal computer disk.

    Characteristics                 IBM      Fujitsu   Conners   3380 v.   2361 v.
                                    3380     M2361A    CP3100    3100      3100
                                                                 (>1 means CP3100 is better)
    Disk diameter (inches)          14       10.5      3.5       4         3
    Formatted Data Capacity (MB)    7,500    600       100       .01       .2
    Price/MB (controller incl.)     $18-$10  $20-$17   $10-$7    1-2.5     1.7-3
    MTTF Rated (hours)              30,000   20,000    30,000    1         1.5
    MTTF in practice (hours)        100,000  ?         ?         ?         ?
    No. Actuators                   4        1         1         .25       1
    Maximum I/O's/second/Actuator   50       40        30        .6        .8
    Typical I/O's/second/Actuator   30       24        20        .7        .8
    Maximum I/O's/second/box        200      40        30        .2        .8
    Typical I/O's/second/box        120      24        20        .2        .8
    Transfer Rate (MB/sec)          3        2.5       1         .3        .4
    Power/box (W)                   6,600    640       10        660       64
    Volume (cu. ft.)                24       3.4       .03       800       110

Table I. Comparison of the IBM 3380 disk model AK4 for mainframe computers, the Fujitsu M2361A "Super Eagle" disk for minicomputers, and the Conners Peripherals CP 3100 disk for personal computers. By "Maximum I/O's/second" we mean the maximum number of average seeks and average rotates for a single sector access. Cost and reliability information on the 3380 comes from widespread experience [IBM 87] [Gawlick 87] and the information on the Fujitsu comes from the manual [Fujitsu 87], while some numbers on the new CP3100 are based on speculation. The price per megabyte is given as a range to allow for different prices for volume discount and the different mark-up practices of the vendors. (The 8 watt maximum power of the CP3100 was increased to 10 watts to allow for the inefficiency of an external power supply, since the other drives contain their own power supplies.)

One surprising fact is that the number of I/Os per second per actuator in an inexpensive disk is within a factor of two of the large disks. In several of the other metrics, including price per megabyte, the inexpensive disk is superior or equal to the large disks.

The small size and low power are even more impressive since disks such as the CP3100 contain full track buffers and most functions of the traditional mainframe controller. Small disk manufacturers can provide such functions in high volume disks because of the efforts of standards committees in defining higher level peripheral interfaces, such as the ANSI X3.131-1986 Small Computer System Interface (SCSI). Such standards have encouraged companies like Adaptec to offer SCSI interfaces as single chips, in turn allowing disk companies to embed mainframe controller functions at low cost. Figure 1 compares the traditional mainframe disk approach and the small computer disk approach. The same SCSI interface chip embedded as a controller in every disk can also be used as the direct memory access (DMA) device at the other end of the SCSI bus.

Figure 1. Comparison of organizations for typical mainframe and small computer disk interfaces. Single chip SCSI interfaces such as the Adaptec AIC-6250 allow the small computer to use a single chip as the DMA interface as well as provide an embedded controller for each disk [Adaptec 87]. (The price per megabyte in Table I includes everything in the shaded boxes of the figure.)

Such characteristics lead to our proposal for building I/O systems as arrays of inexpensive disks, either interleaved for the large transfers of supercomputers [Kim 86] [Livny 87] [Salem 86] or independent for the many small transfers of transaction processing. Using Table I, 75 inexpensive disks potentially have 12 times the I/O bandwidth of the IBM 3380 and the same capacity, with lower power consumption and cost.

4. Caveats

We cannot explore all issues associated with such arrays in the space available for this paper, so we concentrate on fundamental estimates of price-performance and reliability. Our reasoning is that if there are no advantages in price-performance or terrible disadvantages in reliability, then there is no need to explore further. We characterize a transaction-processing workload to evaluate the performance of a collection of inexpensive disks, but remember that such a collection is just one hardware component of a complete transaction-processing system. While designing a complete transaction-processing system based on these ideas is enticing, we will resist that temptation in this paper. Cabling and packaging, certainly an issue in the cost and reliability of an array of many inexpensive disks, is also beyond this paper's scope.

5. And Now The Bad News: Reliability

The unreliability of disks forces computer systems managers to make backup versions of information quite frequently in case of failure. What would be the impact on reliability of having a hundredfold increase in disks? Assuming a constant failure rate (that is, an exponentially distributed time to failure) and that failures are independent (both assumptions made by disk manufacturers when calculating the Mean Time To Failure), the reliability of an array of disks is:

    MTTF of a Disk Array = MTTF of a Single Disk / Number of Disks in the Array

Using the information in Table I, the MTTF of 100 CP 3100 disks is 30,000/100 = 300 hours, or less than 2 weeks. Compared to the 30,000 hour (> 3 years) MTTF of the IBM 3380, this is dismal. If we consider scaling the array to 1000 disks, then the MTTF is 30 hours, or about one day, requiring an adjective worse than dismal. Without fault tolerance, large arrays of inexpensive disks are too unreliable to be useful.
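As a quick check of the arithmetic above, the following sketch evaluates the array MTTF formula for the 100-disk and 1000-disk cases quoted in the text (the helper name is ours, not the paper's).

    # Minimal sketch of the array MTTF formula quoted above:
    # MTTF(array) = MTTF(single disk) / (number of disks), assuming
    # independent, exponentially distributed failures.

    def array_mttf(mttf_single_disk_hours, num_disks):
        return mttf_single_disk_hours / num_disks

    if __name__ == "__main__":
        print(array_mttf(30_000, 100))    # 300 hours, i.e. less than 2 weeks
        print(array_mttf(30_000, 1000))   # 30 hours, i.e. about one day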
6. A Better Solution: RAID

To overcome the reliability challenge, we must make use of extra disks containing redundant information to recover the original information when a disk fails. Our acronym for these Redundant Arrays of Inexpensive Disks is RAID. To simplify the explanation of our final proposal, and to avoid confusion with previous work, we give a taxonomy of five different organizations of disk arrays, beginning with mirrored disks and progressing through a variety of alternatives with differing performance and reliability. We refer to each organization as a RAID level. The reader should be forewarned that we describe all levels as if implemented in hardware solely to simplify the presentation; RAID ideas are applicable to software implementations as well as hardware.

Reliability. Our basic approach will be to break the arrays into reliability groups, with each group having extra "check" disks containing redundant information. When a disk fails we assume that within a short time the failed disk will be replaced and the information will be reconstructed onto the new disk using the redundant information. This time is called the mean time to repair (MTTR). The MTTR can be reduced if the system includes extra disks to act as "hot" standby spares; when a disk fails, a replacement disk is switched in electronically. Periodically a human operator replaces all failed disks.

Here are other terms that we use:

    D  = total number of disks with data (not including the extra check disks);
    G  = number of data disks in a group (not including the extra check disks);
    C  = number of check disks in a group;
    nG = D/G = number of groups.

As mentioned above, we make the same assumptions that disk manufacturers make: that failures are exponential and independent. (An earthquake or power surge is a situation where an array of disks might not fail independently.)

Since these reliability predictions will be very high, we want to emphasize that the reliability is only that of the disk-head assemblies with this failure model, and not of the whole software and electronic system. In addition, in our view the pace of technology means that extremely high disk MTTFs are "overkill": independent of expected lifetime, users will replace obsolete disks. After all, how many people are still using 20 year old disks?

The general MTTF calculation for single-error-repairing RAIDs is given in two steps. First, the group MTTF is:

    MTTF(Group) = MTTF(Disk)/(G+C) * 1 / (Probability of another failure in a group before repairing the dead disk)

As more formally derived in the appendix, the probability of a second failure before the first has been repaired is:

    Probability of another failure = MTTR / (MTTF(Disk) / (No. of disks - 1))
                                   = MTTR * (G+C-1) / MTTF(Disk)

The intuition behind the formal calculation in the appendix comes from trying to calculate the average number of second-disk failures during the repair time for X single-disk failures. Since we assume that disk failures occur at a uniform rate, this average number of second failures during the repair time for X first failures is

    X * MTTR / (MTTF of the remaining disks in the group)

The average number of second failures for a single first failure is then

    MTTR / (MTTF(Disk) / No. of remaining disks in the group)

The MTTF of the remaining disks is just the MTTF of a single disk divided by the number of good disks in the group, giving the result above.

The second step is the reliability of the whole system, which is approximately (since MTTF(Group) is not quite exponentially distributed):

    MTTF(RAID) = MTTF(Group) / nG

Plugging it all together, we get:

    MTTF(RAID) = MTTF(Disk)/(G+C) * MTTF(Disk)/((G+C-1)*MTTR) * 1/nG
               = MTTF(Disk)^2 / ((G+C) * nG * (G+C-1) * MTTR)

Since the formula is the same for each level, we make the abstract numbers concrete using these parameters as appropriate: D = 100 total data disks, G = 10 data disks per group, MTTF(Disk) = 30,000 hours, MTTR = 1 hour, with the number of check disks per group C determined by the RAID level.

Reliability Overhead Cost. This is simply the extra check disks, expressed as a percentage of the number of data disks D. As we shall see below, the cost varies with RAID level from 100% down to 4%.

Useable Storage Capacity Percentage. Another way to express this reliability overhead is in terms of the percentage of the total capacity of data disks and check disks that can be used to store data. Depending on the organization, this varies from a low of 50% to a high of 96%.
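The two-step formula above is straightforward to evaluate; the sketch below computes MTTF(Group) and MTTF(RAID) with the example parameters given in the text (the function and variable names are ours, not the paper's).

    # Minimal sketch of the two-step RAID MTTF formula derived above:
    #   MTTF(RAID) = MTTF(Disk)^2 / ((G+C) * nG * (G+C-1) * MTTR)

    def mttf_group(mttf_disk, mttr, G, C):
        return mttf_disk ** 2 / ((G + C) * (G + C - 1) * mttr)

    def mttf_raid(mttf_disk, mttr, D, G, C):
        n_groups = D / G
        return mttf_group(mttf_disk, mttr, G, C) / n_groups

    if __name__ == "__main__":
        # Example parameters from the text: D = 100, G = 10,
        # MTTF(Disk) = 30,000 hours, MTTR = 1 hour; C depends on the level.
        hours = mttf_raid(mttf_disk=30_000, mttr=1, D=100, G=10, C=1)  # C = 1 as in levels 3-5
        print(hours, hours / (24 * 365))  # roughly 820,000 hours, i.e. over 90 years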
Performance. Since supercomputer applications and transaction-processing systems have different access patterns and rates, we need different metrics to evaluate both. For supercomputers we count the number of reads and writes per second for large blocks of data, with large defined as getting at least one sector from each data disk in a group. During large transfers all the disks in a group act as a single unit, each reading or writing a portion of the large data block in parallel.

A better measure for transaction-processing systems is the number of individual reads or writes per second. Since transaction-processing systems (e.g., debits/credits) use a read-modify-write sequence of disk accesses, we include that metric as well. Ideally during small transfers each disk in a group can act independently, either reading or writing independent information. In summary, supercomputer applications need a high data rate while transaction processing needs a high I/O rate.

For both the large and small transfer calculations we assume the minimum user request is a sector, that a sector is small relative to a track, and that there is enough work to keep every device busy. Thus sector size affects both disk storage efficiency and transfer size. Figure 2 shows the ideal operation of large and small disk accesses in a RAID.

Figure 2. Large transfer vs. small transfers in a group of G disks: (a) a single large or "grouped" read (one read spread over G disks); (b) several small or individual reads and writes (G reads and/or writes spread over G disks).

The six performance metrics are then the number of reads, writes, and read-modify-writes per second for both large (grouped) and small (individual) transfers. Rather than give absolute numbers for each metric, we calculate efficiency: the number of events per second for a RAID relative to the corresponding number of events per second for a single disk. (This is Boral's I/O bandwidth per gigabyte [Boral 83] scaled to gigabytes per disk.) In this paper we are after fundamental differences, so we use simple, deterministic throughput measures for our performance metric rather than latency.

Effective Performance Per Disk. The cost of disks can be a large portion of the cost of a database system, so the I/O performance per disk, factoring in the overhead of the check disks, suggests the cost/performance of a system. This is the bottom line for a RAID.

7. First Level RAID: Mirrored Disks

Mirrored disks are a traditional approach for improving the reliability of magnetic disks. This is the most expensive option we consider since all disks are duplicated (G = 1 and C = 1), and every write to a data disk is also a write to a check disk. Tandem doubles the number of controllers for fault tolerance, allowing an optimized version of mirrored disks that lets reads occur in parallel. Table II shows the metrics for a Level 1 RAID assuming this optimization.

    MTTF                            Exceeds Useful Product Lifetime
                                    (4,500,000 hrs or > 500 years)
    Total Number of Disks           2D
    Overhead Cost                   100%
    Useable Storage Capacity        50%

    Events/Sec vs. Single Disk      Full RAID     Efficiency Per Disk
    Large (or Grouped) Reads        2D/S          1.00/S
    Large (or Grouped) Writes       D/S           .50/S
    Large (or Grouped) R-M-W        4D/3S         .67/S
    Small (or Individual) Reads     2D            1.00
    Small (or Individual) Writes    D             .50
    Small (or Individual) R-M-W     4D/3          .67

Table II. Characteristics of a Level 1 RAID. Here we assume that writes are not slowed by waiting for the second write to complete because the slowdown for writing 2 disks is minor compared to the slowdown S for writing a whole group of 10 to 25 disks. Unlike a "pure" mirrored scheme with extra disks that are invisible to the software, we assume an optimized scheme with twice as many controllers allowing parallel reads to all disks, giving full disk bandwidth for large reads and allowing the reads of read-modify-writes to occur in parallel.
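To make the structure of Table II explicit, the sketch below reproduces its entries from D and the slowdown factor S discussed in the next paragraphs; it is only a restatement of the table under names of our own choosing, not code from the paper.

    # Minimal sketch reproducing the Level 1 (mirrored) entries of Table II.
    # D is the number of data disks; S is the slowdown factor discussed below.
    # Values are events per second for the full RAID, as in the table.

    def level1_full_raid_events(D, S):
        return {
            "large_reads":  2 * D / S,      # both copies can be read in parallel
            "large_writes": D / S,          # every write goes to both copies
            "large_rmw":    4 * D / (3 * S),
            "small_reads":  2 * D,
            "small_writes": D,
            "small_rmw":    4 * D / 3,
        }

    if __name__ == "__main__":
        # Efficiency per disk divides by the total of 2D disks,
        # giving 1.00/S, .50/S, .67/S, 1.00, .50 and .67 as in Table II.
        for name, value in level1_full_raid_events(D=100, S=1.3).items():
            print(name, value, value / (2 * 100))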
When individual accesses are distributed across multiple disks, average queueing, seek, and rotate delays may differ from the single disk case. Although bandwidth may be unchanged, it is distributed more evenly, reducing the variance in queueing delay and, if the disk load is not too high, also reducing the expected queueing delay through parallelism [Livny 87]. When many arms seek to the same track and then rotate to the described sector, the average seek and rotate time will be larger than the average for a single disk, tending toward the worst case times. This effect should not generally more than double the average access time to a single sector while still getting many sectors in parallel. In the special case of mirrored disks with sufficient controllers, the choice between arms that can read any data sector will reduce the time for the average read seek by up to 45% [Bitton 88].

To allow for these factors but to retain our fundamental emphasis, we apply a slowdown factor S whenever more than two disks in a group must work in parallel; in general, 1 <= S < 2. With synchronous disks the spindles of all disks in the group are synchronized so that the corresponding sectors of a group of disks pass under the heads simultaneously [Kurzweil 88], so for synchronous disks there is no slowdown and S = 1. Since a Level 1 RAID has only one data disk in its group, we assume that the large transfer requires the same number of disks acting in concert as found in the groups of the higher level RAIDs: 10 to 25 disks.

Duplicating all disks can mean doubling the cost of the database system, or using only 50% of the disk storage capacity. Such largess inspires the next levels of RAID.

8. Second Level RAID: Hamming Code for ECC

The history of main memory organizations suggests a way to reduce the cost of reliability. With the introduction of 4K and 16K DRAMs, computer designers discovered that these new devices were subject to losing information due to alpha particles. Since there were many single-bit DRAMs in a system, and since they were usually accessed in groups of 16 to 64 chips at a time, system designers added redundant chips to correct single errors and to detect double errors in each group. This increased the number of memory chips by 12% to 38%, depending on the size of the group, but it significantly improved reliability.

As long as all the data bits in a group are read or written together, there is no impact on performance. However, reads of less than the group size require reading the whole group to be sure the information is correct, and writes to a portion of the group mean three steps:

    1) a read step to get all the rest of the data;
    2) a modify step to merge the new and old information;
    3) a write step to write the full group, including the check information.

Since we have scores of disks in a RAID and since some accesses are to groups of disks, we can mimic the DRAM solution by bit-interleaving the data across the disks of a group and then adding enough check disks to detect and correct a single error. A single parity disk can detect a single error, but to correct an error we need enough check disks to identify the disk with the error. For a group size of 10 data disks (G) we need 4 check disks (C) in total, and if G = 25 then C = 5 [Hamming 50]. To keep down the cost of redundancy, we assume the group size will vary from 10 to 25 (the sketch below illustrates how the check-disk count grows with group size).
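The paper quotes the check-disk counts without giving a formula; the sketch below is our own illustration using the standard single-error-correcting Hamming condition 2^C >= G + C + 1, which reproduces the figures quoted above (C = 4 for G = 10, C = 5 for G = 25).

    # Illustrative sketch (ours, not from the paper): the number of check
    # disks C needed so a Hamming code over G data disks can identify the
    # single failed disk.  The standard single-error-correcting condition
    # 2^C >= G + C + 1 reproduces the figures quoted above.

    def check_disks_needed(G):
        C = 1
        while 2 ** C < G + C + 1:
            C += 1
        return C

    if __name__ == "__main__":
        print(check_disks_needed(10))  # 4, as quoted for G = 10
        print(check_disks_needed(25))  # 5, as quoted for G = 25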
Since our individual data transfer unit is just a sector, bit-interleaved disks mean that a large transfer for this RAID must be at least G sectors. Like the DRAMs, reads of a smaller amount imply reading a full sector from each of the bit-interleaved disks in a group, and writes of a single unit involve the read-modify-write cycle on all the disks in the group. Table III shows the metrics of this Level 2 RAID.

    MTTF                          G = 10                  G = 25
                                  Exceeds Useful          Exceeds Useful
                                  Lifetime                Lifetime
                                  (494,500 hrs            (103,500 hrs
                                  or > 50 years)          or 12 years)
    Total Number of Disks         1.40D                   1.20D
    Overhead Cost                 40%                     20%
    Useable Storage Capacity      71%                     83%

    Events/Sec         Full RAID    Efficiency    L2/L1    Efficiency    L2/L1
    (vs. Single Disk)               Per Disk L2            Per Disk L2
                                    (G = 10)               (G = 25)
    Large Reads        D/S          .71/S         71%      .86/S         86%
    Large Writes       D/S          .71/S         143%     .86/S         172%
    Large R-M-W        D/S          .71/S         107%     .86/S         129%
    Small Reads        D/SG         .07/S         6%       .03/S         3%
    Small Writes       D/2SG        .04/S         6%       .02/S         3%
    Small R-M-W        D/SG         .07/S         9%       .03/S         4%

Table III. Characteristics of a Level 2 RAID. The L2/L1 column gives the % performance of level 2 in terms of level 1 (>100% means L2 is faster).

As long as the transfer unit is large enough to spread over all the disks of a group, the large I/Os get the full bandwidth of each disk, divided by S to allow all the disks in a group to complete. Level 1 large reads are faster because the data is duplicated, so the redundancy disks can also do independent accesses. Small I/Os still require accessing all the disks in a group, so only D/G small I/Os can happen at a time, again divided by S to allow a group of disks to finish. Small Level 2 writes are like small read-modify-writes because the full sectors must be read before new data can be written onto part of each sector.

For large writes, the level 2 system has the same performance as level 1 even though it uses fewer check disks, and so on a per-disk basis it outperforms level 1. For small data transfers the performance is dismal either for the whole system or per disk: all the disks of a group must be accessed for a small transfer, limiting the maximum number of simultaneous accesses to D/G, and we include the slowdown factor S since the access must wait for all the disks of the group to complete.

Thus level 2 RAID is desirable for supercomputers but inappropriate for transaction processing systems, with increasing group size increasing the disparity in performance per disk for the two applications. In recognition of this fact, Thinking Machines Incorporated announced a Level 2 RAID this year for its Connection Machine supercomputer, called the "Data Vault," with G = 32 and C = 8, including one hot standby spare [Hillis 87].

Before improving small data transfers, we concentrate once more on lowering the cost of reliability.

9. Third Level RAID: Single Check Disk Per Group

Most of the check disks in the level 2 RAID are used to determine which disk failed, for only one redundant parity disk is needed to detect an error. These extra disks are truly "redundant" since most disk controllers can already detect if a disk has failed: either through special signals provided in the disk interface or through the extra checking information at the end of a sector used to detect and correct soft errors. So the information on the failed disk can be reconstructed by calculating the parity of the remaining good disks and then comparing it bit-by-bit to the parity calculated for the original full group.

When these two parities agree, the failed bit was a 0; otherwise it was a 1. If the check disk itself is the failure, just read all the data disks and store the group parity in the replacement disk.

Reducing the check disks to one per group (C = 1) reduces the overhead cost to between 4% and 10% for the group sizes considered here. The performance for the third level RAID system is the same as for the Level 2 RAID, but the effective performance per disk increases since fewer check disks are needed. This reduction in total disks also increases reliability, but since it is still far larger than the useful lifetime of disks, this is a minor point. One advantage of a level 2 system over level 3 is that the extra check information associated with each sector can be used to correct soft errors without having to reread the sector, increasing the useable capacity per disk by perhaps 10%; level 2 also allows soft errors to be corrected "on the fly" without rereading a sector. Table IV summarizes the characteristics of this third RAID level and Figure 3 compares the sector layout and check disks for levels 2 and 3.

    MTTF                          G = 10                  G = 25
                                  Exceeds Useful          Exceeds Useful
                                  Lifetime                Lifetime
                                  (820,000 hrs            (346,000 hrs
                                  or > 90 years)          or 40 years)
    Total Number of Disks         1.10D                   1.04D
    Overhead Cost                 10%                      4%
    Useable Storage Capacity      91%                      96%

    Events/Sec         Full RAID   Efficiency   L3/L2   L3/L1   Efficiency   L3/L2   L3/L1
    (vs. Single Disk)              Per Disk L3                  Per Disk L3
                                   (G = 10)                     (G = 25)
    Large Reads        D/S         .91/S        127%    91%     .96/S        112%    96%
    Large Writes       D/S         .91/S        127%    182%    .96/S        112%    192%
    Large R-M-W        D/S         .91/S        127%    136%    .96/S        112%    142%
    Small Reads        D/SG        .09/S        127%    8%      .04/S        112%    3%
    Small Writes       D/2SG       .05/S        127%    8%      .02/S        112%    3%
    Small R-M-W        D/SG        .09/S        127%    11%     .04/S        112%    5%

Table IV. Characteristics of a Level 3 RAID. The L3/L2 column gives the % performance of L3 in terms of L2 and the L3/L1 column gives it in terms of L1 (>100% means L3 is faster). The performance for the full systems is the same in RAID levels 2 and 3, but since there are fewer check disks the performance per disk improves.

Park and Balasubramanian proposed a third level RAID system without suggesting a particular application [Park 86]. Our calculations suggest it is a much better match to supercomputer applications than to transaction processing systems. This year two disk manufacturers have announced level 3 RAIDs for such applications using synchronized 5.25 inch disks with G = 4 and C = 1: one from Maxtor and one from Micropolis [Maginnis 87].

This third level has brought the reliability overhead cost to its lowest level, so in the last two sections we improve the performance of small accesses without changing cost or reliability.

Figure 3. Comparison of the location of data and check information in sectors for RAID levels 2, 3, and 4 for G = 4. Not shown is the small amount of check information per sector added by the disk controller to detect and correct soft errors within a sector. Remember that we use physical sector numbers and hardware control to explain these ideas, but RAID can be implemented by software using logical sectors and disks.
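The reconstruction rule described above (XOR the surviving disks of the group and compare with the stored parity) maps directly to code; the sketch below is an illustrative byte-level version under our own naming, not code from the paper.

    # Illustrative sketch of level 3 recovery as described above: the data on
    # a failed disk is the XOR (parity) of the corresponding bits on all the
    # remaining disks of the group, including the check disk.

    def xor_blocks(blocks):
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                out[i] ^= b
        return bytes(out)

    def parity_block(data_blocks):
        """What the check disk stores for one sector across the group."""
        return xor_blocks(data_blocks)

    def reconstruct(surviving_blocks):
        """Rebuild the failed disk's sector from all surviving disks (data + check)."""
        return xor_blocks(surviving_blocks)

    if __name__ == "__main__":
        group = [b"\x01\x02", b"\x0f\x00", b"\xaa\x55"]   # G = 3 data sectors
        check = parity_block(group)                        # stored on the check disk
        lost = 1                                           # suppose disk 1 fails
        survivors = [blk for i, blk in enumerate(group) if i != lost] + [check]
        assert reconstruct(survivors) == group[lost]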
10. Fourth Level RAID: Independent Reads/Writes

Spreading a transfer across all the disks within a group has the following advantage:

    Large or grouped transfer time is reduced because the transfer bandwidth of the entire array can be exploited.

But it has the following disadvantages as well:

    Reading or writing to a disk in a group requires reading or writing all the disks in the group; levels 2 and 3 RAIDs can perform only one I/O at a time per group.
    If the disks are not synchronized, you do not see average seek and rotational delays; the observed delays should move towards the worst case, hence the S factor in the equations above.

This fourth level RAID improves the performance of small transfers through parallelism: the ability to do more than one I/O per group at a time. We no longer spread the individual transfer information across several disks, but keep each individual unit in a single disk.

The virtue of bit-interleaving is the easy calculation of the Hamming code needed to detect or correct errors in level 2. But recall that in the third level RAID we rely on the disk controller to detect errors within a single disk sector. Hence, if we store an individual transfer unit in a single sector, we can detect errors on an individual read without accessing any other disk. Figure 3 shows the different ways the information is stored in a sector for RAID levels 2, 3, and 4. By storing a whole transfer unit in a sector, reads can be independent and operate at the maximum rate of a disk, yet still detect errors. Thus the primary change between level 3 and level 4 is that we interleave data between disks at the sector level rather than at the bit level.

At first thought you might expect that an individual write to a single sector still involves all the disks in a group, since (1) the check disk must be rewritten with the new parity data, and (2) the rest of the data disks must be read to be able to calculate the new parity data. Recall that each parity bit is just a single exclusive OR of all the corresponding data bits in a group. In level 4 RAID, unlike level 3, the parity calculation is much simpler since, if we know the old data value and the old parity value as well as the new data value, we can calculate the new parity information as follows:

    new parity = (old data xor new data) xor old parity

In level 4 a small write then uses 2 disks to perform 4 accesses (2 reads and 2 writes), while a small read involves only one read on one disk. Table V summarizes the fourth level RAID characteristics. Note that all small accesses improve, dramatically for the reads, but the small read-modify-write is still so slow relative to a level 1 RAID that its applicability to transaction processing is doubtful. Recently Salem and Garcia-Molina proposed a Level 4 system [Salem 86].
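The parity update rule above maps directly to code; the sketch below is an illustrative byte-level version under our own naming (not the paper's), showing that only the old data and old parity are needed to produce the new parity.

    # Illustrative sketch of the small-write parity update described above:
    #   new parity = (old data XOR new data) XOR old parity
    # Only 2 disks are touched: read old data and old parity, write new data
    # and new parity (the reads can proceed in parallel, as can the writes).

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def new_parity(old_data: bytes, new_data: bytes, old_parity: bytes) -> bytes:
        return xor(xor(old_data, new_data), old_parity)

    if __name__ == "__main__":
        # Group of G = 3 data sectors plus their parity.
        group = [b"\x01\x02", b"\x0f\x00", b"\xaa\x55"]
        parity = xor(xor(group[0], group[1]), group[2])
        # Rewrite the sector on disk 1 without reading disks 0 and 2.
        new_data = b"\x3c\x3c"
        updated = new_parity(group[1], new_data, parity)
        group[1] = new_data
        assert updated == xor(xor(group[0], group[1]), group[2])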

Before proceeding to the next level we need to explain the performance of small writes in Table V (and hence of small read-modify-writes, since they entail the same operations in this RAID). The formula for small writes divides D by 2 instead of 4 because 2 of the accesses can proceed in parallel: the old data and old parity can be read at the same time, and the new data and new parity can be written at the same time. The performance of small writes is also divided by G because the single check disk in a group must be read and written with every small write in that group, thereby limiting the number of writes that can be performed at a time to the number of groups. The check disk is the bottleneck, and the final level RAID removes this bottleneck.

    MTTF                          G = 10                  G = 25
                                  Exceeds Useful          Exceeds Useful
                                  Lifetime                Lifetime
                                  (820,000 hrs            (346,000 hrs
                                  or > 90 years)          or 40 years)
    Total Number of Disks         1.10D                   1.04D
    Overhead Cost                 10%                      4%
    Useable Storage Capacity      91%                      96%

    Events/Sec         Full RAID   Efficiency   L4/L3    L4/L1   Efficiency   L4/L3    L4/L1
    (vs. Single Disk)              Per Disk L4                   Per Disk L4
                                   (G = 10)                      (G = 25)
    Large Reads        D/S         .91/S        100%     91%     .96/S        100%     96%
    Large Writes       D/S         .91/S        100%     182%    .96/S        100%     192%
    Large R-M-W        D/S         .91/S        100%     136%    .96/S        100%     146%
    Small Reads        D           .91          1200%    91%     .96          3000%    96%
    Small Writes       D/2G        .05          120%     9%      .02          120%     4%
    Small R-M-W        D/G         .09          120%     14%     .04          120%     6%

Table V. Characteristics of a Level 4 RAID. The L4/L3 column gives the % performance of L4 in terms of L3 and the L4/L1 column gives it in terms of L1 (>100% means L4 is faster). Small reads improve because they no longer tie up a whole group at a time. Small writes and R-M-Ws improve because we make the same assumptions as we made in Table II: the slowdown for two related I/Os can be ignored because only two disks are involved.

11. Fifth Level RAID: No Single Check Disk

While level 4 RAID achieved parallelism for reads, writes are still limited to one per group since every write must read and write the check disk. The final level RAID distributes the data and check information across all the disks, including the check disks. Figure 4 compares the location of check information in the sectors of disks for level 4 and level 5 RAIDs.

Figure 4. Location of check information per sector for Level 4 RAID vs. Level 5 RAID, each with G = 4 and C = 1; the sectors are shown below the disks, with shading indicating the check information. (a) In Level 4, writes to s0 of disk 2 and s1 of disk 3 imply writes to s0 and s1 of check disk 5, so the check disk (5) becomes the write bottleneck. (b) In Level 5 the data and check information are spread evenly through all the disks; writes to s0 of disk 2 and s1 of disk 3 still imply 2 check writes, but they can be split across 2 disks: to s0 of disk 5 and to s1 of disk 4.

The performance impact of this small change is large, since RAID level 5 can support multiple individual writes per group. For example, suppose in Figure 4 above we want to write sector 0 of disk 2 and sector 1 of disk 3. As shown on the left in Figure 4, in RAID level 4 these writes must be sequential since both sector 0 and sector 1 of disk 5 must be written. However, as shown on the right, in RAID level 5 the writes can proceed in parallel, since a write to sector 0 of disk 2 still involves a write to disk 5, but a write to sector 1 of disk 3 involves a write to disk 4.
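The paper's Figure 4 shows the check information spread evenly across all disks but does not specify a placement function; the sketch below is our own illustration of one common rotating layout that is consistent with the example above (sector 0's check on disk 5, sector 1's check on disk 4, for G = 4 data disks plus one check disk).

    # Illustrative sketch (ours, not from the paper): one way to spread the
    # check information evenly across the disks of a group, consistent with
    # the Figure 4 example above.

    def check_disk_for_sector(sector, disks_per_group=5):
        """Return the 1-indexed disk holding the check (parity) for this sector."""
        return disks_per_group - (sector % disks_per_group)

    if __name__ == "__main__":
        for s in range(6):
            print("sector", s, "-> check on disk", check_disk_for_sector(s))
        # Writes to sector 0 of disk 2 and sector 1 of disk 3 touch check
        # disks 5 and 4 respectively, so the two small writes can proceed
        # in parallel.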
These changes bring RAID level 5 near the best of both worlds: small read-modify-writes now perform close to the per-disk speed of a level 1 RAID while keeping the large transfer performance per disk and the high useful storage capacity percentage of RAID levels 3 and 4. Spreading the data across all the disks even improves the performance of small reads, since there is one more disk per group that contains data. Table VI summarizes the characteristics of this RAID.

    MTTF                          G = 10                  G = 25
                                  Exceeds Useful          Exceeds Useful
                                  Lifetime                Lifetime
                                  (820,000 hrs            (346,000 hrs
                                  or > 90 years)          or 40 years)
    Total Number of Disks         1.10D                   1.04D
    Overhead Cost                 10%                      4%
    Useable Storage Capacity      91%                      96%

    Events/Sec         Full RAID      Efficiency   L5/L4   L5/L1   Efficiency   L5/L4   L5/L1
    (vs. Single Disk)                 Per Disk L5                  Per Disk L5
                                      (G = 10)                     (G = 25)
    Large Reads        D/S            .91/S        100%    91%     .96/S        100%    96%
    Large Writes       D/S            .91/S        100%    182%    .96/S        100%    192%
    Large R-M-W        D/S            .91/S        100%    136%    .96/S        100%    144%
    Small Reads        (1+C/G)D       1.00         110%    100%    1.00         104%    100%
    Small Writes       (1+C/G)D/4     .25          550%    50%     .25          1300%   50%
    Small R-M-W        (1+C/G)D/2     .50          550%    75%     .50          1300%   75%

Table VI. Characteristics of a Level 5 RAID. The L5/L4 column gives the % performance of L5 in terms of L4 and the L5/L1 column gives it in terms of L1 (>100% means L5 is faster). Because reads can be spread over all disks, including what were check disks in level 4, all small I/Os improve by a factor of 1+C/G. Small writes and R-M-Ws improve because they are no longer constrained by group size, getting the full disk bandwidth for the 4 I/Os associated with these accesses. We again make the same assumptions as in Tables II and V: the slowdown for two related I/Os can be ignored because only two disks are involved.

Keeping in mind the caveats given earlier, a Level 5 RAID appears very attractive if you just want to do supercomputer applications, or just transaction processing when storage capacity is limited, or if you want to do both supercomputer applications and transaction processing.

12. Discussion

Before concluding the paper, we wish to note a few more interesting points about RAIDs. The first is that while the schemes for disk striping and parity support were presented as if they were done in hardware, there is no necessity to do so. We just give the method, and the decision between hardware and software solutions is strictly one of cost and benefit. For example, in cases where disk buffering is effective, there are no extra disk reads for level 5 small writes, since the old data and old parity would already be in main memory, so software would give the best performance as well as the least cost.

In this paper we have assumed the transfer unit is a multiple of the sector. As the size of the smallest transfer unit grows larger than one sector per drive (such as a full track with an I/O protocol that supports data returned out-of-order), the performance of RAIDs improves significantly because of the full track buffer in every disk. For example, if every disk begins transferring to its buffer as soon as it reaches the next sector, then S may reduce to less than 1 since there would be virtually no rotational delay. With transfer units the size of a full track, it is not even clear whether synchronizing the disks in a group improves RAID performance.

This paper makes two separable points: the advantages of building I/O systems from personal computer disks, and the advantages of five different disk array organizations, independent of the disks used in those arrays. The latter point starts with the traditional mirrored disks to achieve acceptable reliability, with each succeeding level improving:

• the data rate, characterized by a small number of requests per second for massive amounts of sequential information (supercomputer applications);

• the I/O rate, characterized by a large number of read-modify-writes to a small amount of random information (transaction processing);
• or the useable storage capacity;
or possibly all three.

Figure 5 shows the improvements in performance per disk for each RAID level. The highest performance per disk comes from either Level 1 or Level 5. In transaction-processing situations using no more than 50% of storage capacity, the choice is mirrored disks (Level 1). However, if the situation calls for using more than 50% of storage capacity, or for supercomputer applications, or for combined supercomputer applications and transaction processing, then Level 5 looks best. Both the strength and the weakness of Level 1 is that it duplicates data rather than calculating check information: the duplicated data improves read performance but lowers capacity and write performance, while check data is useful only on a failure.

Inspired by the space-time product of paging studies [Denning 78], we propose a single figure of merit called the space-speed product: the useable storage fraction times the efficiency per event. Using this metric, Level 5 has an advantage over Level 1 of 1.7 for reads and 3.3 for writes for G = 10.

Let us return to the first point, the advantages of building I/O systems from personal computer disks. Compared to traditional Single Large Expensive Disks (SLED), Redundant Arrays of Inexpensive Disks (RAID) offer significant advantages for the same cost. Table VII compares a level 5 RAID using 100 inexpensive data disks with a group size of 10 to the IBM 3380. As you can see, a level 5 RAID offers a factor of roughly 10 improvement in performance, reliability, and power consumption (and hence air conditioning costs) and a factor of 3 reduction in size over this SLED. Table VII also compares a level 5 RAID using 10 inexpensive data disks with a group size of 10 to a Fujitsu M2361A "Super Eagle". In this comparison RAID offers roughly a factor of 5 improvement in performance, power consumption, and size, with more than two orders of magnitude improvement in (calculated) reliability.

RAID offers the further advantage of modular growth over SLED. Rather than being limited to 7,500 MB per increase for $100,000 as in the case of this model of IBM disk, RAIDs can grow at either the group size (1000 MB for $11,000) or, if partial groups are allowed, at the disk size (100 MB for $1,100). The flip side of the coin is that RAID also makes sense in systems considerably smaller than a SLED. Small incremental costs also make hot standby spares practical to further reduce MTTR and thereby increase the MTTF of a large system. For example, a 1000 disk level 5 RAID with a group size of 10 and a few standby spares could have a calculated MTTF of over 45 years.

A final comment concerns the prospect of designing a complete transaction processing system from either a Level 1 or a Level 5 RAID. The drastically lower power per megabyte of inexpensive disks allows systems designers to consider battery backup for the whole disk array; the power needed for 110 PC disks is less than that of two Fujitsu Super Eagles. Another approach would be to use a few such disks to save the contents of battery backed-up main memory in the event of an extended power failure. The smaller capacity of these disks also ties up less of the database during reconstruction, leading to higher availability. (Note that Level 5 ties up all the disks in a group in the event of a failure, while Level 1 only needs the single mirrored disk during reconstruction, giving Level 1 an edge in availability.)

13. Conclusion

RAIDs offer a cost effective option to meet the challenge of exponential growth in processor and memory speeds. We believe the size reduction of personal computer disks is a key to the success of disk arrays, just as Gordon Bell argues that the size reduction of microprocessors is a key to the success of multiprocessors [Bell 85]. In both cases the smaller size simplifies the interconnection of the many components as well as the packaging and cabling. While large arrays of mainframe processors (or SLEDs) are possible, it is certainly easier to construct an array from a number of microprocessors (or PC drives). Just as Gordon Bell coined the term "multi" to distinguish a multiprocessor made from microprocessors, we use the term "RAID" to identify a disk array made from personal computer disks.

With advantages in cost-performance, reliability, power consumption, and modular growth, we expect RAIDs to replace SLEDs in future I/O systems. There are, however, several open issues that may bear on the practicality of RAIDs:
• What is the impact of a RAID on latency?
• What is the impact on the MTTF calculations of non-exponential failure assumptions for individual disks?
• What will be the real lifetime of a RAID vs. the MTTF calculated using the independent failure model?
• How would synchronized disks affect level 4 and 5 RAID performance?
• How does the "slowdown" S actually behave? [Livny 87]
• How do defective sectors affect RAID?
• How do you schedule I/O to level 5 RAIDs to maximize write parallelism?
• Is there locality of reference of disk accesses in transaction processing?
• Can information be automatically redistributed over 100 to 1000 disks to reduce contention?
• Will disk controller design limit RAID performance?
• How should 100 to 1000 disks be constructed and physically connected to the processor?
• What is the impact of cabling on cost, performance, and reliability?
• Where should a RAID be connected to a CPU so as not to limit performance? Memory bus? I/O bus? Cache?
• Can a file system allow different striping policies for different files?
• What is the role of solid state disks and WORMs in a RAID?
• What is the impact on RAID of "parallel access" disks (access to every ...)?

    Characteristics                 SLED      RAID       RAID      SLED       RAID       RAID
                                    (IBM      (100,10    v SLED    (Fujitsu   (10,10     v SLED
                                    3380)     CP3100s)   (>1       M2361A)    CP3100s)   (>1
                                                         better)                         better)
    Formatted Data Capacity (MB)    7,500     10,000     1.33      600        1,000      1.67
    Price/MB (controller incl.)     $18-$10   $11-$8     2.2-.9    $20-$17    $11-$8     2.5-1.5
    Rated MTTF (hours)              30,000    820,000    27        20,000     8,200,000  410
    MTTF in practice (hours)        100,000   ?          ?         ?          ?          ?
    No. Actuators                   4         110        27.5      1          11         11
    Max I/O's/Actuator              50        30         .6        40         30         .8
    Typ I/O's/Actuator              30        20         .7        24         20         .8
    Max Grouped RMW/box             100       1250       12.5      20         125        6.2
    Max Individual RMW/box          100       825        8.3       20         82         4.2
    Typ Grouped RMW/box             60        833        13.9      12         83         6.9
    Typ Individual RMW/box          60        550        9.2       12         55         4.6
    Volume/Box (cubic feet)         24        10         2.4       3.4        1          3.4
    Power/box (W)                   6,600     1,100      6         640        110        5.8
    Minimum Expansion Size (MB)     7,500     100-1000   7.5-75    600        100-1000   .6-6

Table VII. Comparison of the IBM 3380 disk model AK4 to a Level 5 RAID using 100 Conners & Associates CP 3100 disks and a group size of 10, and comparison of the Fujitsu M2361A "Super Eagle" to a Level 5 RAID using 10 inexpensive data disks with a group size of 10. Numbers greater than 1 in the comparison columns favor the RAID.

Figure 5. Plot of Large (Grouped) and Small (Individual) Read-Modify-Writes per second per disk and useable storage capacity for all five levels of RAID (D = 100, G = 10). We assume a single slowdown factor S uniformly for all levels, with S = 1.3 where it is needed.

Acknowledgements

We wish to acknowledge the following people who participated in the discussions from which these ideas emerged: Michael Stonebraker, John Ousterhout, Doug Johnson, Ken Lutz, Anapum Bhide, Gaetano Borriello, Mark Hill, David Wood, and the students in the SPATS seminar offered at U.C. Berkeley in Fall 1987. We also wish to thank the following people who gave comments useful in the preparation of this paper: Anapum Bhide, Pete Chen, Ron David, Dave Ditzel, Fred Douglis, Dieter Gawlick, Jim Gray, Mark Hill, Doug Johnson, Joan Pendleton, Martin Schulze, and Herve Touati. This work was supported by the National Science Foundation under grant # MIP-8715235.

References

[Adaptec 87] AIC-6250 IC Product Guide, Adaptec, stock # DB0003-00 rev. B, 1987, p. 46.
[Amdahl 67] G. M. Amdahl, "Validity of the single processor approach to achieving large scale computing capabilities," Proceedings AFIPS 1967 Spring Joint Computer Conference, Vol. 30 (Atlantic City, New Jersey, April 1967), pp. 483-485.
[Bell 84] C. G. Bell, "The Mini and Micro Industries," IEEE Computer, Vol. 17, No. 10 (October 1984), pp. 14-30.
[Bell 85] C. G. Bell, "Multis: a new class of multiprocessor computers," Science, Vol. 228 (April 26, 1985), pp. 462-467.
[Bitton 88] D. Bitton and J. Gray, "Disk Shadowing," in press, 1988.
[Boral 83] H. Boral and D. J. DeWitt, "Database Machines: An Idea Whose Time Has Passed? A Critique of the Future of Database Machines," Proc. International Conf. on Database Machines, edited by H.-O. Leilich and M. Missikoff, Springer-Verlag, Berlin, 1983.
[Denning 78] P. J. Denning and D. F. Slutz, "Generalized Working Sets for Segment Reference Strings," CACM, Vol. 21, No. 9 (Sept. 1978), pp. 750-759.
[Frank 87] P. D. Frank, "Advances in Head Technology," presentation at Challenges in Disk Technology Short Course, Institute for Information Storage Technology, Santa Clara University, Santa Clara, California, December 15-17, 1987.
[Fujitsu 87] "M2361A Mini-Disk Drive Engineering Specifications," (revised) Feb. 1987, B03P-4825-0001A.
[Garcia 84] H. Garcia-Molina, R. Cullingford, P. Honeyman, R. Lipton, "The Case for Massive Memory," Technical Report 326, Dept. of EE and CS, Princeton Univ., May 1984.
[Gawlick 87] D. Gawlick, private communication, Nov. 1987.
[Hamming 50] R. W. Hamming, "Error Detecting and Correcting Codes," The Bell System Technical Journal, Vol. XXVI, No. 2 (April 1950), pp. 147-160.
[Harker 81] J. M. Harker et al., "A Quarter Century of Disk File Innovation," IBM Journal of Research and Development, Vol. 25, No. 5 (Sept. 1981), pp. 677-689.
[Hillis 87] D. Hillis, private communication, October 1987.
[IBM 87] "IBM 3380 Direct Access Storage Introduction," IBM GC 26-4491-0, September 1987.
[Joy 85] B. Joy, presentation at ISSCC '85 panel session, Feb. 1985.
[Kim 86] M. Y. Kim, "Synchronized disk interleaving," IEEE Trans. on Computers, Vol. C-35, No. 11, Nov. 1986.
[Kurzweil 88] F. Kurzweil, "Small Disk Arrays - The Emerging Approach to High Performance," presentation at Spring COMPCON 88, March 1, 1988, San Francisco, CA.
[Livny 87] M. Livny, S. Khoshafian, H. Boral, "Multi-disk management algorithms," Proc. of ACM SIGMETRICS, May 1987.
[Maginnis 87] N. B. Maginnis, "Store More, Spend Less: Mid-range Options Abound," Computerworld, Nov. 16, 1987, p. 71.
[Moore 75] G. E. Moore, "Progress in Digital Integrated Electronics," Proc. IEEE Digital Integrated Electronic Device Meeting, (1975), p. 11.
[Myers 86] G. J. Myers, A. Y. C. Yu, and D. L. House, "Microprocessor Technology Trends," Proc. IEEE, Vol. 74, No. 12 (December 1986), pp. 1605-1622.
[Park 86] A. Park and K. Balasubramanian, "Providing Fault Tolerance in Parallel Secondary Storage Systems," Department of Computer Science, Princeton University, CS-TR-057-86, Nov. 7, 1986.
[Salem 86] K. Salem and H. Garcia-Molina, "Disk Striping," IEEE 1986 Int. Conf. on Data Engineering, 1986.
[Siewiorek 82] D. P. Siewiorek, C. G. Bell, and A. Newell, Computer Structures: Principles and Examples, McGraw-Hill, 1982, p. 46.
[Stevens 81] L. D. Stevens, "The Evolution of Magnetic Storage," IBM Journal of Research and Development, Vol. 25, No. 5, Sept. 1981, pp. 663-675.
Appendix: Reliability Calculation

Using probability theory we can calculate the MTTF of a group. We first assume independent and exponential failure rates. Our model uses a biased "coin" with the probability of heads being the probability that a second failure will occur within the MTTR of a first failure. Since disk failures are exponential:

    Probability(at least one of the remaining disks failing in MTTR)
        = 1 - (e^(-MTTR/MTTF_Disk))^(G+C-1)

In all practical cases

    MTTF_Disk / (G+C-1) >> MTTR

and since (1 - e^(-X)) is approximately X for 0 < X << 1:

    Probability(at least one of the remaining disks failing in MTTR)
        ~= MTTR * (G+C-1) / MTTF_Disk

Then, on each disk failure we flip this coin:

    heads => a second failure occurs before the system is repaired, causing a system crash;
    tails => the system recovers from the error and continues.

Then

    Expected[time between failures in a group] = MTTF_Disk / (G+C)

    MTTF_Group = Expected[time until first heads]
               = Expected[number of flips until first heads] * Expected[time between failures in a group]
               = (1 / Probability(heads)) * MTTF_Disk / (G+C)
               = (MTTF_Disk / (MTTR * (G+C-1))) * MTTF_Disk / (G+C)
               = MTTF_Disk^2 / ((G+C) * (G+C-1) * MTTR)

Group failure is not precisely exponential in our model, but we have validated this simplifying assumption for practical cases of MTTR << MTTF_Disk/(G+C). This makes the MTTF of the whole system just MTTF_Group divided by the number of groups, nG.
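As a numeric check (ours, not part of the original appendix), the sketch below compares the exact and approximate probabilities of a second failure during the MTTR and the resulting group MTTF, using the example parameters from the body of the paper.

    # Minimal numeric check of the appendix approximation above:
    #   exact  P = 1 - exp(-MTTR/MTTF_disk)^(G+C-1)
    #   approx P = MTTR*(G+C-1)/MTTF_disk
    import math

    def p_second_failure_exact(mttf_disk, mttr, G, C):
        return 1.0 - math.exp(-mttr / mttf_disk) ** (G + C - 1)

    def p_second_failure_approx(mttf_disk, mttr, G, C):
        return mttr * (G + C - 1) / mttf_disk

    if __name__ == "__main__":
        mttf_disk, mttr, G, C = 30_000, 1, 10, 1     # example parameters from the text
        exact = p_second_failure_exact(mttf_disk, mttr, G, C)
        approx = p_second_failure_approx(mttf_disk, mttr, G, C)
        print(exact, approx)                          # both about 3.3e-4
        print((mttf_disk / (G + C)) / approx)         # MTTF_Group, about 8.2 million hours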
