--- Alan DuBoff <[EMAIL PROTECTED]> wrote:

> On Tue, 10 Apr 2007, Chung Hang Christopher Chan wrote:
>
> > Alan, maybe Nexenta is not quite considered Open Solaris but I had
> > that installed and updated to 'elatte' with the b55 kernel/ON but
> > then I had to wipe it out with Solaris Express b59 to get the latest
> > iscsi-target code that is supposed to fix some issue with the
> > Windows iscsi initiator.
>
> That's certainly OpenSolaris to me.
Great, made your day then.

> > This box is the backup server for the company I work for with two
> > 750GB SATA disks sitting in single-drive eSATA cases connected via
> > eSATA cables for the zfs mirror array besides the single sata
> > system disk.
>
> How did this support compare to your previous operating system,
> whatever that might be? I suspect Linux from the way you've been
> talking about it.

This backup server was Open Solaris from the start since I wanted to
give Open Solaris a shot, as there is no iscsi target support included
with the Linux kernel as yet.

> Does Linux have good support for RAID bundled with it?

RAID0, RAID1 and RAID10 are okay. Never tried RAID5 since I have heard
quite a few horror stories about Linux software raid arrays in raid5
mode.

> What filesystem are you comparing zfs to?

Hmm. I don't believe I have actually made disparaging comments about
the Solaris kernel and the stuff that comes with it besides the
userland packaging. And nobody has done an in-my-face 'you do that with
this on Open Solaris', so I guess that means I have to wait.

> > Plugging in card, turning on box and then plugging in the sata
> > drives and running cfgadm twice and then a zpool command without
> > any 'echo "magic" > /proc/scsi' as you do on Linux was a really
> > nice experience. At least this is what you have to do with hotswap
> > stuff in Linux 2.4. Not sure what happens with the latest 2.6.18
> > kernels that will come with RHEL5 which may even things.
>
> I'm not familiar with the support on Linux, but how does a company
> like Red Hat support such practice? I mean, give us something to
> laugh about so we can pee our pants!

Ah, I don't know if you can find that in the official Red Hat docs
(must be somewhere in the RHEL3 docs...) but you can go here for
laughs:

http://www.nber.org/sys-admin/granite-digital-linux.html

The line:

  echo "scsi add-single-device 0 0 0 0" >/proc/scsi/scsi

tells the scsi system to rescan, I believe...

http://tldp.org/HOWTO/Software-RAID-HOWTO-4.html

The line:

  echo "scsi remove-single-device 0 0 2 0" > /proc/scsi/scsi

goes BEFORE you yank the disk... Oh, and you have to get the numbers
right too...

> How can RHEL5 even things? Can it offer you 128-bit filesystems?
> Does it have stable RAID support?

I was only talking about the hotswap procedures. :P

Regarding hardware raid... when will 3ware come to Solaris? Are those
guys not at all interested?

> I admit to not having much experience with Linux RAID since the
> 2.2/2.4 kernel days, but the support was not anything I would write
> home about and the author was working on the drivers at VA Linux
> Systems when I worked there.
>
> Do you use ReiserFS? Ext3? What are you comparing zfs to?

Both of those are a joke and so is XFS, the other filesystem in the
top three most popular Linux filesystems, so I obviously did not make
any ZFS vs Linux 'x' filesystem comparison.

> How did you configure your filesystems on zfs for your testing? How
> many disks did you give it?

The two 750GB SATA disks went to the mirror, and they were given whole,
without any partitioning.
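For anyone curious, the hotplug-plus-pool-creation sequence I mentioned
above went roughly like this. The sata attachment point names and disk
ids below are only placeholders from memory, so treat it as a sketch
rather than a literal transcript:

  # configure the port after plugging in the first eSATA enclosure
  cfgadm -c configure sata1/0

  # same again for the second enclosure
  cfgadm -c configure sata1/1

  # build the mirrored pool on the two whole disks, no slicing
  zpool create backup mirror c2t0d0 c2t1d0

And that was it, no magic strings echoed anywhere.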
> > unless of course there is a newer driver release of the si3124
> > driver/SATA framework/whatever it is that is behind what appears to
> > be command timeouts, as was first reported by Chris Csanady on the
> > driver-discuss list, leading to long periods of inactivity during
> > rcps of bzipped tarballs.
>
> What's your experience? Could be other hardware related issues that
> he's seeing. Have you tried this yourself?

Well, he does have another chipset, si3132 I believe. My controller
uses the si3124 chip. He is reporting cp from one zfs to another zfs on
pools sitting on sata disks connected to his si3132 controller.

I am reporting that rcp from a linux box (maildir mail host + file
server) to the solaris box will run into 'timeouts' at random times,
and I have apparently worked around it by using rsync over rsh instead
of rcp (roughly the invocation shown at the end of this mail). In other
words, I wanted to rcp the initial stuff over and then later use rsync
to keep things in sync, but in the end I had to use rsync over rsh for
the initial copy too since rcp was too unpredictable.

This is with Solaris Express b59, so if there has been an update to the
si3124 driver/SATA framework, please let me know how I can get it so I
can test.
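PS: in case anyone wants to reproduce the workaround, the copy job I
ended up running was along these lines; the host name and paths are
made-up placeholders, the flags are the relevant part:

  # push from the linux box to the solaris box over rsh instead of rcp
  rsync -a -e rsh /export/maildirs/ solarisbox:/backup/maildirs/

rcp of the same trees would stall at random points; this has behaved
so far.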
