Re: [zfs-discuss] file system full corruption in ZFS

2007-05-29 Thread dudekula mastan
At least in my experience, I saw corruption when the ZFS file system was full. So far there is no way to check the file system consistency on ZFS (to the best of my knowledge). ZFS people claim that the ZFS file system is always consistent and there is no need for an FSCK command. I cannot
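ZFS does not ship an fsck; its consistency check runs online as zpool scrub, which re-reads every allocated block and verifies it against its checksum. A minimal sketch, assuming a pool named tank (the name is made up):

   # start a scrub that verifies every block against its checksum
   zpool scrub tank

   # watch progress and see any checksum errors it turned up
   zpool status -v tank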

Re: [zfs-discuss] file system full corruption in ZFS

2007-05-29 Thread Manoj Joseph
Michael Barrett wrote: Normally if you have a ufs file system hit 100% and you have a very high level of system and application load on the box (that resides in the 100% file system) you will run into inode issues that require a fsck and show themselves by not being able to long list out all

Re: [zfs-discuss] file system full corruption in ZFS

2007-05-29 Thread Manoj Joseph
dudekula mastan wrote: At least in my experience, I saw corruption when the ZFS file system was full. So far there is no way to check the file system consistency on ZFS (to the best of my knowledge). ZFS people claim that the ZFS file system is always consistent and there is no need for FSCK

Re: [zfs-discuss] file system full corruption in ZFS

2007-05-29 Thread Michael Barrett
dudekula mastan wrote: At least in my experience, I saw corruption when the ZFS file system was full. So far there is no way to check the file system consistency on ZFS (to the best of my knowledge). ZFS people claim that the ZFS file system is always consistent and there is no need for FSCK

Re: [zfs-discuss] file system full corruption in ZFS

2007-05-29 Thread Michael Barrett
Manoj Joseph wrote: Michael Barrett wrote: Normally if you have a ufs file system hit 100% and you have a very high level of system and application load on the box (that resides in the 100% file system) you will run into inode issues that require a fsck and show themselves by not being able

[zfs-discuss] zfs migration

2007-05-29 Thread Krzys
Hello folks, I have a question. Currently I have a zfs pool (mirror) on two internal disks... I wanted to connect that server to SAN, then add more storage to this pool (double the space) and then start using it. Then what I wanted to do is just take the internal disks out of that pool and use
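The usual way to do this kind of migration is to attach the SAN LUNs as extra sides of the existing mirror, let the resilver finish, and then detach the internal disks. A rough sketch, with made-up pool and device names:

   # attach a SAN LUN alongside each existing mirror member
   zpool attach tank c1t0d0 c4t600A0B8000116500d0
   zpool attach tank c1t1d0 c4t600A0B8000116501d0

   # wait for the resilver to complete
   zpool status tank

   # then drop the internal disks out of the mirror
   zpool detach tank c1t0d0
   zpool detach tank c1t1d0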

Re: [zfs-discuss] Re: b64 zfs on boot ?

2007-05-29 Thread Lori Alt
Also, build 64 still had this bug: 6553537 (zfs root fails to boot from a snv_63+zfsboot-pfinstall netinstall image) which affects zfs roots set up with netinstall/dvdinstall, but not the manual install. The bug is fixed in build 65. And yes, the standard installation software still

Re: [zfs-discuss] zfs migration

2007-05-29 Thread Cyril Plisko
On 5/29/07, Krzys [EMAIL PROTECTED] wrote: Hello folks, I have a question. Currently I have a zfs pool (mirror) on two internal disks... I wanted to connect that server to SAN, then add more storage to this pool (double the space) and then start using it. Then what I wanted to do is just take out the

Re: [zfs-discuss] zfs migration

2007-05-29 Thread Krzys
Perfect, I will try to play with that... Regards, Chris On Tue, 29 May 2007, Cyril Plisko wrote: On 5/29/07, Krzys [EMAIL PROTECTED] wrote: Hello folks, I have a question. Currently I have a zfs pool (mirror) on two internal disks... I wanted to connect that server to SAN, then add more

[zfs-discuss] how to move a zfs file system between disks

2007-05-29 Thread H E
Hi all, I am trying to write a script to move disk partitions from one disk to another. The ufs partitions are transferred using ufsdump and ufsrestore - quite easily. My question is: How can I do a dump and restore of a partition that contains a ZFS file system? P.S. My script would have
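For ZFS, the rough equivalent of ufsdump/ufsrestore is to snapshot the file system and stream it with zfs send and zfs receive. A minimal sketch, assuming hypothetical pool and dataset names:

   # snapshot the source file system
   zfs snapshot oldpool/data@migrate

   # stream the snapshot into the destination pool
   zfs send oldpool/data@migrate | zfs receive newpool/data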

Re: [zfs-discuss] ZVol Panic on 62

2007-05-29 Thread Mark J Musante
On Fri, 25 May 2007, Ben Rockwood wrote: May 25 23:32:59 summer unix: [ID 836849 kern.notice] May 25 23:32:59 summer ^Mpanic[cpu1]/thread=1bf2e740: May 25 23:32:59 summer genunix: [ID 335743 kern.notice] BAD TRAP: type=e (#pf Page fault) rp=ff00232c3a80 addr=490 occurred in

[zfs-discuss] Deterioration with zfs performance and recent zfs bits?

2007-05-29 Thread Jürgen Keil
Has anyone else noticed a significant zfs performance deterioration when running recent opensolaris bits? My 32-bit / 768 MB Toshiba Tecra S1 notebook was able to do a full opensolaris release build in ~ 4 hours 45 minutes (gcc shadow compilation disabled; using an lzjb compressed zpool / zfs

[zfs-discuss] Re: [storage-discuss] NCQ performance

2007-05-29 Thread Lida Horn
Point one, the comments that Eric made do not give the complete picture. All the tests that Eric's referring to were done through the ZFS filesystem. When sequential I/O is done to the disk directly there is no performance degradation at all. Second point, it does not take any additional time in

Re: [zfs-discuss] Re: [storage-discuss] NCQ performance

2007-05-29 Thread johansen-osdev
When sequential I/O is done to the disk directly there is no performance degradation at all. All filesystems impose some overhead compared to the rate of raw disk I/O. It's going to be hard to store data on a disk unless some kind of filesystem is used. All the tests that Eric and I have
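One way to see the gap being debated is to time the same sequential read against the raw device and against a file on ZFS. A rough sketch, with made-up device and file names:

   # sequential read straight off the raw disk device
   dd if=/dev/rdsk/c2t0d0s0 of=/dev/null bs=1024k count=1024

   # the same amount of data read back through a ZFS file system
   dd if=/tank/testfile of=/dev/null bs=1024k count=1024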

[zfs-discuss] Re: [storage-discuss] NCQ performance

2007-05-29 Thread eric kustarz
On May 29, 2007, at 1:25 PM, Lida Horn wrote: Point one, the comments that Eric made do not give the complete picture. All the tests that Eric's referring to were done through the ZFS filesystem. When sequential I/O is done to the disk directly there is no performance degradation at all.

Re: [zfs-discuss] Re: ZFS - Use h/w raid or not? Thoughts. Considerations.

2007-05-29 Thread Richard Elling
Robert Milkowski wrote: Hello Richard, Thursday, May 24, 2007, 6:10:34 PM, you wrote: RE Incidentally, thumper field reliability is better than we expected. This is causing RE me to do extra work, because I have to explain why. I've got some thumpers and they're very reliable. Even disks

Re: [zfs-discuss] Re: ZFS - Use h/w raid or not? Thoughts. Considerations.

2007-05-29 Thread Carson Gaspar
Richard Elling wrote: But I am curious as to why you believe 2x CF are necessary? I presume this is so that you can mirror. But the remaining memory in such systems is not mirrored. Comments and experiences are welcome. CF == bit-rot-prone disk, not RAM. You need to mirror it for all the

RE: [zfs-discuss] Re: ZFS - Use h/w raid or not? Thoughts. Considerations.

2007-05-29 Thread Ellis, Mike
Also the unmirrored memory for the rest of the system has ECC and ChipKill, which provides at least SOME protection against random bit-flips. -- Question: It appears that CF and friends would make a decent live-boot (but don't run on me like I'm a disk) type of boot-media due to the limited

Re: [zfs-discuss] Re: ZFS - Use h/w raid or not? Thoughts. Considerations.

2007-05-29 Thread Richard Elling
Ellis, Mike wrote: Also the unmirrored memory for the rest of the system has ECC and ChipKill, which provides at least SOME protection against random bit-flips. CF devices, at least the ones we'd be interested in, do have ECC as well as spare sectors and write verification. Note: flash

Re: [zfs-discuss] Re: ZFS - Use h/w raid or not? Thoughts. Considerations.

2007-05-29 Thread Bill Sommerfeld
On Tue, 2007-05-29 at 18:48 -0700, Richard Elling wrote: The belief is that COW file systems which implement checksums and data redundancy (e.g., ZFS and the ZFS copies option) will be redundant over CF's ECC and wear leveling *at the block level.* We believe ZFS will excel in this area, but
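The copies option referred to here is a per-dataset property that makes ZFS store extra replicas of each block, even on a single device such as a CF card. A small sketch, with a hypothetical dataset name:

   # keep two copies of every block in this file system
   zfs set copies=2 syspool/root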

RE: [zfs-discuss] Re: ZFS - Use h/w raid or not? Thoughts. Considerations.

2007-05-29 Thread Ellis, Mike
Hey Richard, thanks for sparking the conversation... This is a very interesting topic (especially if you take it out of the HPC "we need 1000 servers to have this minimal boot image" space into general purpose/enterprise computing) -- Based on your earlier note, it appears you're not planning

[zfs-discuss] RAIDZn+1 (related to the h/w raid ponderings)

2007-05-29 Thread Dale Ghent
Dropping in on this convo a little late, but here's something that has been nagging me - gaining the ability to mirror two (or more) RAIDZ sets. A little background on why I'd really like to see this: I have two data centers on my campus and my FC-based SAN stretches between them.

Re: [zfs-discuss] Re: [storage-discuss] NCQ performance

2007-05-29 Thread Lida Horn
Roch Bourbonnais wrote: On 29 May 07, at 22:59, [EMAIL PROTECTED] wrote: When sequential I/O is done to the disk directly there is no performance degradation at all. All filesystems impose some overhead compared to the rate of raw disk I/O. It's going to be hard to store data on a disk

[zfs-discuss] Mirrored RAID-z2

2007-05-29 Thread Brett
Hi All, I've been reading through the documentation for ZFS and have noted in several blogs that ZFS should support more advanced layouts like RAID1+0, RAID5+0, etc. I am having a little trouble getting these more advanced configurations to play nicely. I have two disk shelves, each with 9x
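For context, ZFS builds these layouts by striping across several mirror or raidz vdevs within one pool; it does not layer one vdev type on top of another. A hedged sketch with made-up device names, showing two alternative pool layouts:

   # RAID1+0 style: a stripe across three two-way mirrors
   zpool create tank mirror c2t0d0 c3t0d0 mirror c2t1d0 c3t1d0 mirror c2t2d0 c3t2d0

   # or, RAID6+0 style: a stripe across two raidz2 sets
   zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0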

Re: [zfs-discuss] Mirrored RAID-z2

2007-05-29 Thread Ian Collins
Brett wrote: Hi All, I've been reading through the documentation for ZFS and have noted in several blogs that ZFS should support more advanced layouts like RAID1+0, RAID5+0, etc. I am having a little trouble getting these more advanced configurations to play nicely. I have two disk