Re: [zfs-discuss] Zpool import not working - I broke my pool...

2008-08-06 Thread Ross Smith
Hmm... got a bit more information for you to add to that bug I think. Zpool import also doesn't work if you have mirrored log devices and either one of them is offline. I created two ramdisks with: # ramdiskadm -a rc-pool-zil-1 256m # ramdiskadm -a rc-pool-zil-2 256m And added them to the
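
For reference, a minimal sketch of the setup being described (the pool name rc-pool is my guess from the ramdisk names; the device paths are the ones ramdiskadm creates):

  # ramdiskadm -a rc-pool-zil-1 256m
  # ramdiskadm -a rc-pool-zil-2 256m
  # zpool add rc-pool log mirror /dev/ramdisk/rc-pool-zil-1 /dev/ramdisk/rc-pool-zil-2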

[zfs-discuss] more ZFS recovery

2008-08-06 Thread Tom Bird
Hi, I have a problem with ZFS on a single device: 48 1TB SATA drives presented as a 42TB LUN via hardware RAID 6 on a SAS bus, with a single ZFS on it. There was a problem with the SAS bus which caused various errors including the inevitable kernel panic; the thing

Re: [zfs-discuss] OpenSolaris+ZFS+RAIDZ+VirtualBox - ready for production systems?

2008-08-06 Thread Orvar Korvar
I use an Intel Q9450 + P45 mobo + ATI 4850 + ZFS + VirtualBox. I have installed WinXP. It works well and is stable. There are features not implemented yet, though. For instance, USB. I suggest you try VB yourself. It is ~20MB and installs quickly. I used it on a 1GB RAM P4 machine. It worked fine.

[zfs-discuss] zfs status -v tries too hard?

2008-08-06 Thread James Litchfield
After some errors were logged about a problem with a ZFS file system, I ran zpool status followed by zpool status -v... # zpool status pool: ehome state: ONLINE status: One or more devices has experienced an error resulting in data corruption. Applications may be affected. action: Restore
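
For context, the two commands being compared, as a sketch (-v additionally walks the pool's persistent error log and prints the affected file names, which can take a while on a damaged pool):

  # zpool status ehome       # pool state, status message, recommended action
  # zpool status -v ehome    # same, plus a list of files with permanent errors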

Re: [zfs-discuss] zpool upgrade wrecked GRUB

2008-08-06 Thread Timothy Noronha
Almost. I did exactly the same thing to my system -- upgrading ZFS. The 2008.11 development snapshot CD I found is based on snv_93 and doesn't yet support ZFS v.11, so it refuses to import the pool. My system doesn't have a DVD drive, so I cannot boot the SXCE snv_94 DVD. I guess I have to
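
A sketch of checking for this mismatch before (or after) upgrading; a pool can only be imported by software that supports at least its on-disk version:

  # zpool upgrade -v    # highest pool version this build of ZFS supports
  # zpool upgrade       # versions of the currently imported pools
  # zpool import        # lists visible pools; warns if a pool's version is newer than supported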

Re: [zfs-discuss] Zpool import not working - I broke my pool...

2008-08-06 Thread Neil Perrin
Ross, Thanks, I have updated the bug with this info. Neil. Ross Smith wrote: Hmm... got a bit more information for you to add to that bug I think. Zpool import also doesn't work if you have mirrored log devices and either one of them is offline. I created two ramdisks with: #

Re: [zfs-discuss] more ZFS recovery

2008-08-06 Thread Richard Elling
Tom Bird wrote: Hi, I have a problem with ZFS on a single device: 48 1TB SATA drives presented as a 42TB LUN via hardware RAID 6 on a SAS bus, with a single ZFS on it. There was a problem with the SAS bus which caused various errors including the inevitable

[zfs-discuss] ZFS on 32bit.

2008-08-06 Thread Bryan Allen
Good afternoon, I have a ~600GB zpool living on older Xeons. The system has 8GB of RAM. The pool is hanging off two LSI Logic SAS3041X-Rs (no RAID configured). When I put a moderate amount of load on the zpool (like, say, copying many files locally, or deleting a large number of ZFS filesystems), the

[zfs-discuss] Strange burstiness in write speed with a mirror

2008-08-06 Thread Will Murnane
I've got a pool which I'm currently syncing a few hundred gigabytes to using rsync. The source machine is pretty slow, so it only goes at about 20 MB/s. Watching zpool iostat -v local-space 10, I see a pattern like this (trimmed to take up less space): capacity operations
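
The command in question, for reference (the trailing number is the sampling interval in seconds, and the first report is an average since boot, so only subsequent reports reflect the live pattern), plus a conventional cross-check against the underlying devices:

  # zpool iostat -v local-space 10    # per-vdev capacity, operations and bandwidth
  # iostat -xnz 10                    # per-device view, non-zero lines only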

Re: [zfs-discuss] ZFS on 32bit.

2008-08-06 Thread Will Murnane
On Wed, Aug 6, 2008 at 13:31, Bryan Allen [EMAIL PROTECTED] wrote: I have a ~600GB zpool living on older Xeons. The system has 8GB of RAM. The pool is hanging off two LSI Logic SAS3041X-Rs (no RAID configured). You might try taking out 4GB of the RAM (!). Some 32-bit drivers have problems doing
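
If physically pulling DIMMs is inconvenient, a sketch of capping usable memory in /etc/system instead; this is an alternative I'm suggesting rather than something from the thread, it assumes 4KB pages, and it takes effect on reboot:

  * /etc/system: limit physical memory to 4GB (0x100000 4KB pages)
  set physmem=0x100000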

Re: [zfs-discuss] more ZFS recovery

2008-08-06 Thread Miles Nordin
re == Richard Elling [EMAIL PROTECTED] writes: tb == Tom Bird [EMAIL PROTECTED] writes: tb There was a problem with the SAS bus which caused various tb errors including the inevitable kernel panic, the thing came tb back up with 3 out of 4 zfs mounted. re In general, ZFS can

Re: [zfs-discuss] more ZFS recovery

2008-08-06 Thread Will Murnane
On Wed, Aug 6, 2008 at 13:57, Miles Nordin [EMAIL PROTECTED] wrote: re == Richard Elling [EMAIL PROTECTED] writes: tb == Tom Bird [EMAIL PROTECTED] writes: tb There was a problem with the SAS bus which caused various tb errors including the inevitable kernel panic, the thing came tb

Re: [zfs-discuss] more ZFS recovery

2008-08-06 Thread Richard Elling
Miles Nordin wrote: re == Richard Elling [EMAIL PROTECTED] writes: tb == Tom Bird [EMAIL PROTECTED] writes: tb There was a problem with the SAS bus which caused various tb errors including the inevitable kernel panic, the thing came tb back up with 3 out of 4 zfs

Re: [zfs-discuss] more ZFS recovery

2008-08-06 Thread Miles Nordin
re == Richard Elling [EMAIL PROTECTED] writes: c If that's really the excuse for this situation, then ZFS is c not ``always consistent on the disk'' for single-VDEV pools. re I disagree with your assessment. The on-disk format (any re on-disk format) necessarily assumes no

[zfs-discuss] zfs crash CR6727355 marked incomplete

2008-08-06 Thread Michael Hale
A bug report I've submitted for a zfs-related kernel crash has been marked incomplete and I've been asked to provide more information. This CR has been marked as incomplete by User 1-5Q-2508 for the reason Need More Info. Please update the CR providing the information requested in the

Re: [zfs-discuss] zfs crash CR6727355 marked incomplete

2008-08-06 Thread Neil Perrin
Michael Hale wrote: A bug report I've submitted for a zfs-related kernel crash has been marked incomplete and I've been asked to provide more information. This CR has been marked as incomplete by User 1-5Q-2508 for the reason Need More Info. Please update the CR providing the
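
For anyone asked for the same thing, a sketch of pulling the basics out of a crash dump saved by savecore (paths assume savecore's defaults under /var/crash/<hostname>):

  # cd /var/crash/`hostname`
  # mdb unix.0 vmcore.0
  > ::status     # panic message and dump summary
  > ::stack      # stack trace of the panicking thread
  > ::msgbuf     # kernel message buffer leading up to the panic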

Re: [zfs-discuss] Strange burstiness in write speed with a mirror

2008-08-06 Thread Bob Friesenhahn
On Wed, 6 Aug 2008, Will Murnane wrote: I've got a pool which I'm currently syncing a few hundred gigabytes to using rsync. The source machine is pretty slow, so it only goes at about 20 MB/s. Watching zpool iostat -v local-space 10, I see a pattern like this (trimmed to take up less

Re: [zfs-discuss] ZFS on 32bit.

2008-08-06 Thread Thomas Garner
For what it's worth, I see this as well on 32-bit Xeons, 1GB RAM, and dual AOC-SAT2-MV8 (large amounts of I/O sometimes resulting in a lockup requiring a reboot --- though my setup is Nexenta b85). Nothing in the logs, and loadavg doesn't increase significantly. It could be the regular Marvell driver

Re: [zfs-discuss] ZFS on 32bit.

2008-08-06 Thread Brian D. Horn
In the most recent code base (both OpenSolaris/Nevada and S10Ux with patches) all the known marvell88sx problems have long ago been dealt with. However, I've said this before. Solaris on 32-bit platforms has problems and is not to be trusted. There are far, far too many places in the source

Re: [zfs-discuss] ZFS on 32bit.

2008-08-06 Thread James C. McPherson
Brian D. Horn wrote: In the most recent code base (both OpenSolaris/Nevada and S10Ux with patches) all the known marvell88sx problems have long ago been dealt with. However, I've said this before. Solaris on 32-bit platforms has problems and is not to be trusted. There are far, far too

Re: [zfs-discuss] OpenSolaris+ZFS+RAIDZ+VirtualBox - ready for production systems?

2008-08-06 Thread Evert Meulie
Oh, I have 'played' with them all: VirtualBox, VMware, KVM... But now I need to set up a production system for various Linux and Windows guests. And none of the three mentioned is 100% perfect, so the choice is difficult... My first choice would be KVM+RAIDZ, but since KVM only works on Linux, and

Re: [zfs-discuss] zfs-auto-snapshot 0.11 work (was Re: zfs-auto-snapshot with at scheduling )

2008-08-06 Thread Rob
The other changes that will appear in 0.11 (which is nearly done) are: Still looking forward to seeing .11 :) Think we can expect a release soon? (or at least svn access so that others can check out the trunk?)

Re: [zfs-discuss] more ZFS recovery

2008-08-06 Thread Al Hopper
On Wed, Aug 6, 2008 at 8:20 AM, Tom Bird [EMAIL PROTECTED] wrote: Hi, I have a problem with ZFS on a single device: 48 1TB SATA drives presented as a 42TB LUN via hardware RAID 6 on a SAS bus, with a single ZFS on it. There was a problem with the SAS bus which

Re: [zfs-discuss] ZFS on 32bit.

2008-08-06 Thread Carson Gaspar
Brian D. Horn wrote: In the most recent code base (both OpenSolaris/Nevada and S10Ux with patches) all the known marvell88sx problems have long ago been dealt with. Not true. The working marvell patches still have not been released for Solaris. They're still just IDRs. Unless you know

Re: [zfs-discuss] ZFS on 32bit.

2008-08-06 Thread Brian D. Horn
As far as I can tell from the patch web pages: for Solaris 10 x86, 138053-01 should have the fixes (it does depend on other earlier patches, though). I find it very difficult to tell what the story is with patches, as the patch numbers seem to have very little in them to correlate them to code
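
A sketch of checking whether that patch is actually on a given box (both commands list installed patches on Solaris 10; the patch ID is the one quoted above):

  # showrev -p | grep 138053
  # patchadd -p | grep 138053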

Re: [zfs-discuss] ZFS on 32bit.

2008-08-06 Thread Mike Gerdts
On Wed, Aug 6, 2008 at 6:22 PM, Carson Gaspar [EMAIL PROTECTED] wrote: Brian D. Horn wrote: In the most recent code base (both OpenSolaris/Nevada and S10Ux with patches) all the known marvell88sx problems have long ago been dealt with. Not true. The working marvell patches still have not been

Re: [zfs-discuss] ZFS on 32bit.

2008-08-06 Thread Peter Bortas
On Thu, Aug 7, 2008 at 5:32 AM, Peter Bortas [EMAIL PROTECTED] wrote: On Wed, Aug 6, 2008 at 7:31 PM, Bryan Allen [EMAIL PROTECTED] wrote: Good afternoon, I have a ~600GB zpool living on older Xeons. The system has 8GB of RAM. The pool is hanging off two LSI Logic SAS3041X-Rs (no RAID

Re: [zfs-discuss] ZFS on 32bit.

2008-08-06 Thread Marc Bevand
Bryan, Thomas: these hangs of 32-bit Solaris under heavy (fs, I/O) loads are a well-known problem. They are caused by memory contention in the kernel heap. Check 'kstat vmem::heap'. The usual recommendation is to change the kernelbase. It worked for me. See:
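
A sketch of what that looks like in practice. The kstat shows how close the kernel heap arena is to exhaustion; the eeprom line is the commonly suggested workaround of lowering kernelbase to enlarge the kernel's address space (0x80000000 is one frequently quoted value; the trade-off is less virtual address space for user processes, and a reboot is required):

  # kstat -p vmem::heap:mem_inuse vmem::heap:mem_total
  # eeprom kernelbase=0x80000000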

Re: [zfs-discuss] more ZFS recovery

2008-08-06 Thread Anton B. Rang
From the ZFS Administration Guide, Chapter 11, Data Repair section: Given that the fsck utility is designed to repair known pathologies specific to individual file systems, writing such a utility for a file system with no known pathologies is impossible. That's a fallacy (and is incorrect

Re: [zfs-discuss] more ZFS recovery

2008-08-06 Thread Anton B. Rang
As others have explained, if ZFS does not have a config with data redundancy - there is not much that can be learned - except that it just broke. Plenty can be learned by just looking at the pool. Unfortunately ZFS currently doesn't have tools which make that easy; as I understand it, zdb
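
A sketch of the kind of read-only poking that is possible today with zdb (device and pool names here are illustrative):

  # zdb -l /dev/rdsk/c0t0d0s0    # dump the four vdev labels from a device
  # zdb -e tank                  # examine a pool that is exported or won't import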

Re: [zfs-discuss] more ZFS recovery

2008-08-06 Thread Nicolas Williams
On Wed, Aug 06, 2008 at 02:23:44PM -0400, Will Murnane wrote: On Wed, Aug 6, 2008 at 13:57, Miles Nordin [EMAIL PROTECTED] wrote: If that's really the excuse for this situation, then ZFS is not ``always consistent on the disk'' for single-VDEV pools. Well, yes. If data is sent, but

Re: [zfs-discuss] more ZFS recovery

2008-08-06 Thread Nicolas Williams
On Wed, Aug 06, 2008 at 03:44:08PM -0400, Miles Nordin wrote: re == Richard Elling [EMAIL PROTECTED] writes: c If that's really the excuse for this situation, then ZFS is c not ``always consistent on the disk'' for single-VDEV pools. re I disagree with your assessment. The

Re: [zfs-discuss] ZFS on 32bit.

2008-08-06 Thread Brian D. Horn
Yes, there have been bugs with heavy I/O and ZFS running the system out of memory. However, there was contention in the thread about it possibly being due to marvell88sx driver bugs (most likely not). Further, my statement that 32-bit Solaris is unsafe at any speed is still true. Without

Re: [zfs-discuss] zpool upgrade wrecked GRUB

2008-08-06 Thread andrew
So finally, I gathered up some courage, and installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2d0s0 seemed to write out what I assume is a new MBR. Not the MBR - the stage1 and stage2 files are written to the boot area of the Solaris FDISK partition. I then tried to also run installgrub on the
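
For reference, a sketch of the two variants (the -m flag is what actually touches the master boot record; without it, stage1 and stage2 go only to the boot area of the Solaris FDISK partition):

  # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2d0s0
  # installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2d0s0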

Re: [zfs-discuss] more ZFS recovery

2008-08-06 Thread Miles Nordin
re == Richard Elling [EMAIL PROTECTED] writes: re If your pool is not redundant, the chance that data re corruption can render some or all of your data inaccessible is re always present. 1. data corruption != unclean shutdown 2. other filesystems do not need a mirror to recover

Re: [zfs-discuss] more ZFS recovery

2008-08-06 Thread Miles Nordin
nw == Nicolas Williams [EMAIL PROTECTED] writes: nw Without ZFS the OP would have had silent, undetected (by the nw OS that is) data corruption. It sounds to me more like the system would have panicked as soon as he pulled the cord, and when it rebooted, it would have rolled the UFS log