[zfs-discuss] ZFS related kernel panic

2006-12-04 Thread Douglas Denny
Last Friday, one of our V880s kernel panicked with the following message. This is a SAN-connected ZFS pool attached to one LUN. From this, it appears that the SAN 'disappeared' and then there was a panic shortly after. Am I reading this correctly? Is this normal behavior for ZFS? This is a

Re: [zfs-discuss] ZFS related kernel panic

2006-12-04 Thread James C. McPherson
Douglas Denny wrote: Last Friday, one of our V880s kernel panicked with the following message. This is a SAN-connected ZFS pool attached to one LUN. From this, it appears that the SAN 'disappeared' and then there was a panic shortly after. Am I reading this correctly? Yes. Is this

Re: [zfs-discuss] ZFS related kernel panic

2006-12-04 Thread Douglas Denny
On 12/4/06, James C. McPherson [EMAIL PROTECTED] wrote: Is this normal behavior for ZFS? Yes. You have no redundancy (from ZFS' point of view at least), so ZFS has no option except panicking in order to maintain the integrity of your data. This is interesting from an implementation point of
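A minimal sketch of how to give ZFS its own redundancy on a SAN, so it can recover rather than panic (the pool and device names here are hypothetical): present two LUNs and mirror the pool across them instead of using a single LUN.

  # create a pool mirrored across two SAN LUNs; ZFS can then survive
  # (and self-heal after) the loss of one LUN
  zpool create tank mirror c2t0d0 c3t0d0
  # confirm the pool reports ZFS-level redundancy
  zpool status tank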

Re: [zfs-discuss] ZFS related kernel panic

2006-12-04 Thread Richard Elling
Douglas Denny wrote: On 12/4/06, James C. McPherson [EMAIL PROTECTED] wrote: Is this normal behavior for ZFS? Yes. You have no redundancy (from ZFS' point of view at least), so ZFS has no option except panicking in order to maintain the integrity of your data. This is interesting from a

Re: [zfs-discuss] replacing a drive in a raidz vdev

2006-12-04 Thread Krzys
I am having no luck replacing my drive as well. A few days ago I replaced my drive and it's completely messed up now. pool: mypool2 state: DEGRADED status: One or more devices is currently being resilvered. The pool will continue to function, possibly in a degraded state. action: Wait
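For readers in the same spot, a minimal sketch of the usual replace-and-watch sequence (the pool name follows the post; the device names are hypothetical):

  # replace the failed device with a new one; resilvering starts automatically
  zpool replace mypool2 c1t3d0 c1t4d0
  # watch resilver progress and any per-device errors
  zpool status -v mypool2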

Re: [zfs-discuss] ZFS related kernel panic

2006-12-04 Thread Jason J. W. Williams
Hi all, Having experienced this, it would be nice if there was an option to offline the filesystem instead of kernel panicking on a per-zpool basis. If it's a system-critical partition like a database, I'd prefer it to kernel-panic and thereby trigger a fail-over of the application. However, if
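Later ZFS releases added per-pool control of exactly this kind as the 'failmode' pool property; a minimal sketch, assuming a pool version that supports it (the pool name is hypothetical):

  # block I/O and wait for the devices to return, rather than panicking
  zpool set failmode=wait tank
  # alternatives are 'continue' (return EIO) and 'panic' (the old behavior)
  zpool get failmode tank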

Re: [zfs-discuss] replacing a drive in a raidz vdev

2006-12-04 Thread Bill Sommerfeld
On Mon, 2006-12-04 at 13:56 -0500, Krzys wrote: mypool2/[EMAIL PROTECTED] 34.4M - 151G - mypool2/[EMAIL PROTECTED] 141K - 189G - mypool2/d3 492G 254G 11.5G legacy I am so confused with all of this... Why is it taking so long to replace that one bad
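One thing worth checking when untangling output like the above: snapshots keep old blocks referenced. A minimal sketch for inspecting them (the dataset name follows the post; the snapshot name in the destroy example is hypothetical, since the real names are obfuscated above):

  # list snapshots along with the space each one holds
  zfs list -t snapshot -o name,used,referenced
  # destroy a snapshot that is no longer needed to release its blocks
  zfs destroy mypool2/d3@oldsnap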

[zfs-discuss] ZFS on multi-volume

2006-12-04 Thread Albert Shih
Hi all, Sorry if my question is not very clear, I'm not very familiar with ZFS (which is why I ask this question). Suppose I have a lot of low-cost RAID disk arrays (like Brownie, meaning IDE/SATA disks), all SCSI-attached (around 10 of them, with a total of ~20 TB). Now if I buy some «high» level big

Re: [zfs-discuss] ZFS related kernel panic

2006-12-04 Thread Matthew Ahrens
Jason J. W. Williams wrote: Hi all, Having experienced this, it would be nice if there was an option to offline the filesystem instead of kernel panicking on a per-zpool basis. If it's a system-critical partition like a database, I'd prefer it to kernel-panic and thereby trigger a fail-over of

[zfs-discuss] Re: ZFS related kernel panic

2006-12-04 Thread Peter Eriksson
If you take a look at these messages, the somewhat unusual condition that may lead to unexpected behaviour (i.e. fast giveup) is that whilst this is a SAN connection, it is achieved through a non-Leadville config; note the fibre-channel and sd references. In a Leadville compliant

Re: [zfs-discuss] ZFS related kernel panic

2006-12-04 Thread Jason J. W. Williams
Any chance we might get a short refresher warning when creating a striped zpool? O:-) Best Regards, Jason On 12/4/06, Matthew Ahrens [EMAIL PROTECTED] wrote: Jason J. W. Williams wrote: Hi all, Having experienced this, it would be nice if there was an option to offline the filesystem

Re: [zfs-discuss] Re: ZFS related kernel panic

2006-12-04 Thread James C. McPherson
Peter Eriksson wrote: If you take a look at these messages, the somewhat unusual condition that may lead to unexpected behaviour (i.e. fast giveup) is that whilst this is a SAN connection, it is achieved through a non-Leadville config; note the fibre-channel and sd references. In a Leadville

Re: [zfs-discuss] ZFS related kernel panic

2006-12-04 Thread Dale Ghent
Matthew Ahrens wrote: Jason J. W. Williams wrote: Hi all, Having experienced this, it would be nice if there was an option to offline the filesystem instead of kernel panicking on a per-zpool basis. If it's a system-critical partition like a database, I'd prefer it to kernel-panic and thereby

[zfs-discuss] Re: ZFS on multi-volume

2006-12-04 Thread Anton B. Rang
It is possible to configure ZFS in the way you describe, but your performance will be limited by the older array. All mirror writes have to be stored on both arrays before they are considered complete, so writes will be as slow as the slowest disk or array involved. ZFS does not currently
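A minimal sketch of the configuration under discussion (pool and LUN names hypothetical): one side of the mirror on the new array, the other on an old low-cost array; every write must commit to both sides, so the slower array sets the write latency.

  # c4t0d0 = LUN on the new array, c5t0d0 = LUN on an old array
  # writes complete only when both sides have them
  zpool create tank mirror c4t0d0 c5t0d0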

[zfs-discuss] Re: ZFS related kernel panic

2006-12-04 Thread Anton B. Rang
And to panic? How can that in any sane way be a good way to protect the application? *BANG* - no chance at all for the application to handle the problem... I agree -- a disk error should never be fatal to the system; at worst, the file system should appear to have been forcibly unmounted (and

Re: [zfs-discuss] Re: ZFS related kernel panic

2006-12-04 Thread James C. McPherson
Anton B. Rang wrote: Peter Eriksson wrote: And to panic? How can that in any sane way be a good way to protect the application? *BANG* - no chance at all for the application to handle the problem... I agree -- a disk error should never be fatal to the system; at worst, the file system should

Re: [zfs-discuss] Re: ZFS related kernel panic

2006-12-04 Thread Richard Elling
Anton B. Rang wrote: And to panic? How can that in any sane way be a good way to protect the application? *BANG* - no chance at all for the application to handle the problem... I agree -- a disk error should never be fatal to the system; at worst, the file system should appear to have been

Re: [zfs-discuss] ZFS related kernel panic

2006-12-04 Thread Richard Elling
Dale Ghent wrote: Matthew Ahrens wrote: Jason J. W. Williams wrote: Hi all, Having experienced this, it would be nice if there was an option to offline the filesystem instead of kernel panicking on a per-zpool basis. If it's a system-critical partition like a database, I'd prefer it to

Re: [zfs-discuss] ZFS related kernel panic

2006-12-04 Thread Dale Ghent
Richard Elling wrote: Actually, it would be interesting to see how many customers change the onerror setting. We have some data, just need more days in the hour. I'm pretty sure you'd find that info in over 6 years of submitted Explorer output :) I imagine that stuff is sandboxed away in

[zfs-discuss] need Clarification on ZFS

2006-12-04 Thread dudekula mastan
Hi All, I am new to Solaris. Please clarify the following questions for me. 1) On Linux, to detect the presence of an ext2/ext3 file system on a device, we use the tune2fs command. Similar to tune2fs, is there any command to detect the presence of a ZFS file system on a device? 2)

Re: [zfs-discuss] need Clarification on ZFS

2006-12-04 Thread Jason A. Hoffman
Hi Mastan, On Dec 4, 2006, at 11:13 PM, dudekula mastan wrote: Hi All, I am new to Solaris. Please clarify the following questions for me. 1) On Linux, to detect the presence of an ext2/ext3 file system on a device, we use the tune2fs command. Similar to tune2fs, is there any command to know

Re: [zfs-discuss] need Clarification on ZFS

2006-12-04 Thread Darren Dunham
1) On Linux, to detect the presence of an ext2/ext3 file system on a device, we use the tune2fs command. Similar to tune2fs, is there any command to detect the presence of a ZFS file system on a device? You can use 'zpool import' to check normal disk devices, or give an optional list of
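A minimal sketch of ways to check a device for ZFS, based on the answer above plus standard Solaris tools (the device name is hypothetical):

  # scan attached devices for importable ZFS pools, without importing them
  zpool import
  # fstyp identifies the filesystem type on a slice, reporting 'zfs' if present
  fstyp /dev/dsk/c1t0d0s0
  # zdb -l dumps the ZFS labels on the device, if any exist
  zdb -l /dev/dsk/c1t0d0s0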