-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Mithun Bhattacharya
Sent: Saturday, January 14, 2006 1:00 PM
To: The Linux-Delhi mailing list
Subject: Re: [ilugd] NAS idea
--- Manish Verma <[EMAIL PROTECTED]> wrote:
> I would like to add a few more points to consider while designing a
> NAS solution:
>
> 1. Writing of data onto the HDD should be fast enough. When you are
> mounting central storage on multiple servers, the concurrent NFS
> operations make this very difficult and writes on the box slow down.
> E.g. NetApp uses the WAFL filesystem (for that matter, all NAS
> storage writes the same way, through a cache), which flushes data to
> disk from the cache. Your solution should make maximum use of the
> cache for both reads and writes. I have seen xfs and ext3 going down
> under load.

I guess corporate customers would like SCSI disks; even home users
shouldn't go for anything less than SATA disks.

**************
It's not a question of corporate meaning SCSI -- people are using SATA
disks in production as well. It depends on your requirement. The NetApp
R200 and Intransa IP SAN both use either SATA or PATA disks. If you are
writing data from cache, performance is not a problem; the only thing
you need to take care of while using SATA/PATA is redundancy, and dual
parity does that.

I do recall NFS having load issues; that definitely needs to be looked
into. Guess we will have to develop a proper set of test cases based
on your feedback :).

*********
If you are ready with your NAS header solution I can offer you a DE in
my IDC. I would be more than happy to be your beta customer.

> 2. You will have to check the boot time of your NAS box, and it
> should be under control. There are NAS boxes (I would not put the
> name here, but you can easily find it out) which take 15-20 minutes
> to boot. The box should be able to boot in 3-4 minutes even after an
> abnormal reboot.

I have used pretty simple installations of LVM over RAID5 on Fedora
Core 3 and it did come up in a few minutes. I do agree complex LVM and
RAID installations need to be tested.

> 3. You should benchmark your NAS box for NFS operations, not I/O
> operations on disk.
> You may get good I/O on disk but may not be able to get good NFS
> performance.
>
> 4. The redundancy of the storage array should not be limited to
> RAID5, because the capacity of individual disks is going up (500GB
> SATA, 300GB SCSI and FC-AL disks are in the market). You should be
> able to handle a dual disk failure in the same RAID group. Let's say
> you have a 14-disk RAID group, one disk has failed and rebuilding is
> going on (which will take some time as the capacity is large); if
> your second disk also fails during that time, the whole RAID volume
> will go offline. You should have a dual-parity kind of solution.

Well, isn't the concept of spare disks in RAID meant for these
purposes?

***********************
Spare disks are used across the different RAID groups; they are global
spares. If you have multiple RAID groups in a single box, you can fail
one disk in each RAID group and a spare will take over, but if more
than one disk fails in the same RAID group, that group will crash.

> 5. The NAS solution should be modular, i.e. I should be able to add
> both storage and processing. E.g. if I am handling n NFS operations
> using one NAS header today with N TB of storage, and tomorrow I want
> to handle more NFS operations, I should be able to add more
> processing (another NAS header on the same storage). And let's say I
> don't want to increase the NFS operations but I want to add more
> storage -- I should be able to do that as well, and that too "on the
> fly", because if my box is in production I can't shut it down.

Adding disks would be limited by the enclosure holding them. A highly
modular system, I am afraid, won't be there in a first cut; we would
probably have to set up a research lab or something where CPUs and
hard disks are both modular and are being used efficiently. Also, if I
add a CPU, does it contribute to an existing box or create a new box?
There are various degrees of modularity which could be achieved, but
definitely not everything in the first version.
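To make the dual-parity argument in point 4 concrete, here is a toy
Python sketch (the class name, method names and disk counts are mine,
purely illustrative -- this models no real array's implementation): a
single-parity group survives one disk failure but goes offline if a
second disk in the same group fails before the rebuild completes,
while a dual-parity group stays online.

```python
# Toy model of RAID group failure tolerance (illustrative only).
class RaidGroup:
    def __init__(self, disks, parity=1):
        self.disks = disks    # number of member disks in the group
        self.parity = parity  # 1 = single parity (RAID5-like),
                              # 2 = dual parity (RAID6-like)
        self.failed = 0       # disks currently failed, rebuild pending

    def fail_disk(self):
        # One more member disk fails before rebuild has finished.
        self.failed += 1

    def online(self):
        # The group survives as long as failures <= parity disks.
        return self.failed <= self.parity

raid5 = RaidGroup(disks=14, parity=1)
raid6 = RaidGroup(disks=14, parity=2)

raid5.fail_disk(); raid6.fail_disk()   # first failure
print(raid5.online(), raid6.online())  # True True

raid5.fail_disk(); raid6.fail_disk()   # second failure mid-rebuild
print(raid5.online(), raid6.online())  # False True
```

This is also why a global spare doesn't help here: the spare only
matters once the rebuild onto it finishes, and the dangerous window is
exactly the rebuild time, which grows with disk capacity.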
*********************
It's not CPU and disk modularity -- it was DE (disk enclosure) and
controller (NAS header) modularity I was talking about.

> Maybe there are other things that could also be considered and
> included, but that is all I could put down offhand.
>
> Whenever you are ready with your NAS solution I could be a good
> customer for you :).

Haha, as soon as I get someone interested in coughing up resources you
can be the beta tester. Unless of course you have some spare cash and
the know-how to make cabinets suitable for housing hard disks with
proper cooling. I wonder how temperature could be measured in the
cabinet over a period of time -- anyone has any thoughts on the same?

***********************
SCSI DEs are available. If you want to go for a branded one, there is
the PowerVault from Dell or the MSA30/500/1000 from HP. All that is
required is to put a NAS header on top of that and you are done with
your NAS solution. The problem people are facing today is not DEs but
having a good NAS header which can write data onto your storage box.

> Also read up on "onstor" while designing the solution.

No kidding, please -- I gave you a pointer to some good technology.
It's up to you to read it or not.

Thanks
Manish

_______________________________________________
ilugd mailinglist -- [email protected]
http://frodo.hserus.net/mailman/listinfo/ilugd
Archives at: http://news.gmane.org/gmane.user-groups.linux.delhi
http://www.mail-archive.com/[email protected]/
