Re: [zfs-discuss] replacing a drive in a raidz vdev

2006-12-05 Thread Jeremy Teo
On 12/5/06, Bill Sommerfeld [EMAIL PROTECTED] wrote: On Mon, 2006-12-04 at 13:56 -0500, Krzys wrote:
mypool2/[EMAIL PROTECTED]  34.4M     -   151G   -
mypool2/[EMAIL PROTECTED]   141K     -   189G   -
mypool2/d3                  492G  254G  11.5G   legacy
I am so confused with all of

Re: [zfs-discuss] need Clarification on ZFS

2006-12-05 Thread Ian Collins
dudekula mastan wrote: 5) Like the fsck command on Linux, is there any command to check the consistency of a ZFS file system? As others have mentioned, ZFS doesn't require offline consistency checking. You can run 'zpool scrub' on a live system and check the result with 'zpool status'.
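For reference, a minimal sketch of that sequence (pool name illustrative):

  # zpool scrub mypool
  # zpool status mypool

zpool status reports scrub progress while it runs and a summary, including any errors found, once it completes.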

Re: [zfs-discuss] need Clarification on ZFS

2006-12-05 Thread Joerg Schilling
dudekula mastan [EMAIL PROTECTED] wrote: 1) On Linux, to detect the presence of an ext2/ext3 file system on a device, we use the tune2fs command. Similar to tune2fs, is there any command to detect the presence of a ZFS file system on a device? 2) When a device is shared between two
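Besides fstyp (mentioned elsewhere in this thread), one way to check a device for ZFS is to dump its vdev labels with zdb; the device path here is illustrative:

  # zdb -l /dev/dsk/c0t1d0s0

If ZFS is present, this prints the label nvlists (pool name, guid, vdev tree); if not, it reports that it failed to unpack the labels.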

Re: [zfs-discuss] need Clarification on ZFS

2006-12-05 Thread Albert Shih
On 04/12/2006 at 23:34:39 -0800, Jason A. Hoffman wrote: Hi Mastan, Likewise, can we share a ZFS file system between two machines? If so, please explain. It's always going from machine 1 to machine 2? zfs send [EMAIL PROTECTED] | ssh [EMAIL PROTECTED] | zfs recv
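Spelled out with illustrative snapshot, pool, and host names, that pipeline is:

  zfs snapshot mypool/fs@today
  zfs send mypool/fs@today | ssh machine2 zfs recv otherpool/fs

The stream carries the whole snapshot, and zfs recv recreates it as a file system on the remote pool.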

Re: [zfs-discuss] Re: ZFS on multi-volume

2006-12-05 Thread Tim Foster
Hi Albert, On Tue, 2006-12-05 at 14:16 +0100, Albert Shih wrote: Is it possible to configure the server, the high-level RAID array, and the pool on my old RAID array so that: 1/ the server reads/writes from the high-level RAID, and 2/ the server makes a copy of all data from the high

Re: [zfs-discuss] need Clarification on ZFS

2006-12-05 Thread Tim Foster
On Tue, 2006-12-05 at 14:56 +0100, Albert Shih wrote: That's impressive. What's the size of the file you send through ssh? Is that size exactly the size of the FS, or just the space used in the FS? Can I send just the diff? For example, at t=0 I send a big file using your command
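The diff case is what incremental sends are for; a sketch with illustrative names:

  # initial full send
  zfs snapshot mypool/fs@t0
  zfs send mypool/fs@t0 | ssh machine2 zfs recv otherpool/fs
  # later: send only what changed between t0 and t1
  zfs snapshot mypool/fs@t1
  zfs send -i mypool/fs@t0 mypool/fs@t1 | ssh machine2 zfs recv otherpool/fs

The -i stream contains only the blocks that differ between the two snapshots, so its size tracks the change, not the file system.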

Re: [zfs-discuss] Re: ZFS on multi-volume

2006-12-05 Thread Roch - PAE
How about attaching the slow storage and kicking off a scrub during the night? Then detach in the morning? Downside: you are running an unreplicated pool during the day. Storage-side errors won't be recoverable. -r Albert Shih writes: On 04/12/2006 at 21:24:26 -0800, Anton B. Rang wrote
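A sketch of that dance, assuming the slow storage shows up as a single device (names illustrative):

  # evening: mirror the fast device onto the slow one
  zpool attach mypool c1t0d0 c2t0d0
  # after the resilver finishes, scrub overnight
  zpool scrub mypool
  # morning: break the mirror again
  zpool detach mypool c2t0d0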

[zfs-discuss] Re: need Clarification on ZFS

2006-12-05 Thread Anton B. Rang
is there any command to know the presence of ZFS file system on a device ? fstyp is the Solaris command to determine what type of file system may be present on a disk:

  # fstyp /dev/dsk/c0t1d0s6
  zfs

When a device is shared between two machines [ ... ] You can use the same mount/unmount
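One wrinkle worth noting: fstyp inspects a single slice, and a disk handed whole to ZFS gets an EFI label with the data in slice 0, so that is the slice to point it at (device name illustrative):

  # fstyp /dev/dsk/c0t1d0s0
  zfs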

[zfs-discuss] Re: Re: ZFS related kernel panic

2006-12-05 Thread Anton B. Rang
But it's still not the application's problem to handle the underlying device failure. But it is the application's problem to handle an error writing to the file system -- that's why the file system is allowed to return errors. ;-) Some applications might not check them, some applications

[zfs-discuss] weird thing with zfs

2006-12-05 Thread Krzys
ok, two weeks ago I noticed one of the disks in my zpool had problems. I was getting Corrupt label; wrong magic number messages, and then when I looked in format it did not see that disk... (the last disk) I had that setup running for a few months, and all of a sudden the last disk failed. So I ordered

Re: [zfs-discuss] weird thing with zfs

2006-12-05 Thread Krzys
Thanks, ah another weird thing is that when I run format on that drive I get a coredump :(

  # format
  Searching for disks...
  efi_alloc_and_init failed.
  done
  AVAILABLE DISK SELECTIONS:
    0. c1t0d0 SEAGATE-ST337LC-D703 cyl 45265 alt 2 hd 16 sec 809
       /[EMAIL

Re: [zfs-discuss] weird thing with zfs

2006-12-05 Thread Torrey McMahon
Krzys wrote: Thanks, ah another weird thing is that when I run format on that drive I get a coredump :( Run pstack /path/to/core and send the output.

Re: [zfs-discuss] weird thing with zfs

2006-12-05 Thread Krzys
[12:00:40] [EMAIL PROTECTED]: /d/d3/nb1 pstack core
core 'core' of 29506: format -e
- lwp# 1 / thread# 1
000239b8 c_disk (51800, 52000, 4bde4, 525f4, 54e78, 0) + 4e0
00020fb4 main (2, 0, ffbff8e8, 0, 52000, 29000) + 46c
000141a8 _start (0, 0,

Re: [zfs-discuss] weird thing with zfs

2006-12-05 Thread Al Hopper
On Tue, 5 Dec 2006, Krzys wrote: Thanks, ah another weird thing is that when I run format on that drive I get a coredump :( ... snip Try zeroing out the disk label with something like:

  dd if=/dev/zero of=/dev/rdsk/c?t?d?p0 bs=1024k count=1024

Regards, Al Hopper Logical Approach
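One caveat to hedge: an EFI label keeps a backup copy in the last sectors of the disk, so if format still trips after zeroing the front, the tail may need zeroing too. A sketch, where DISKSIZE is a placeholder for the disk's size in 512-byte sectors (from prtvtoc or format):

  dd if=/dev/zero of=/dev/rdsk/c?t?d?p0 bs=512 seek=`expr $DISKSIZE - 2048` count=2048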

Re: [zfs-discuss] weird thing with zfs

2006-12-05 Thread Joerg Schilling
Al Hopper [EMAIL PROTECTED] wrote: On Tue, 5 Dec 2006, Krzys wrote: Thanks, ah another weird thing is that when I run format on that drive I get a coredump :( ... snip Try zeroing out the disk label with something like: dd if=/dev/zero of=/dev/rdsk/c?t?d?p0 bs=1024k

Re: [zfs-discuss] weird thing with zfs

2006-12-05 Thread Al Hopper
On Tue, 5 Dec 2006, Joerg Schilling wrote: Al Hopper [EMAIL PROTECTED] wrote: On Tue, 5 Dec 2006, Krzys wrote: Thanks, ah another weird thing is that when I run format on that drive I get a coredump :( ... snip Try zeroing out the disk label with something like:

Re: [zfs-discuss] weird thing with zfs

2006-12-05 Thread Krzys
Does not work :(

  dd if=/dev/zero of=/dev/rdsk/c3t6d0s0 bs=1024k count=1024
  dd: opening `/dev/rdsk/c3t6d0s0': I/O error

That is so strange... it seems like I lost another disk... I will try to reboot and see what I get, but I guess I need to order another disk then and give it a try... Chris

[zfs-discuss] Re: Managed to corrupt my pool

2006-12-05 Thread Jim Hranicky
So the questions are: - is this fixable? I don't see an inum I could run find on to remove, and I can't even do a zfs volinit anyway:

  nextest-01# zfs volinit
  cannot iterate filesystems: I/O error

- wouldn't enabling zil_disable have prevented this? - Should I have
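For context, zil_disable in this era was a kernel tunable set in /etc/system; a sketch (requires a reboot, and it sacrifices synchronous-write semantics, so it is not a recommended setting):

  * /etc/system: disable the ZFS intent log
  set zfs:zil_disable = 1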

Re: [zfs-discuss] weird thing with zfs

2006-12-05 Thread Krzys
Ok, so here is an update. I restarted my system; I powered it off and powered it on. Here is a screen capture of my boot. I certainly do have some hard drive issues and will need to take a look at them... But I got my disk back visible to the system and zfs is doing resilvering again. Rebooting

Re: [zfs-discuss] replacing a drive in a raidz vdev

2006-12-05 Thread Mark Maybee
Jeremy Teo wrote: On 12/5/06, Bill Sommerfeld [EMAIL PROTECTED] wrote: On Mon, 2006-12-04 at 13:56 -0500, Krzys wrote:
mypool2/[EMAIL PROTECTED]  34.4M     -   151G   -
mypool2/[EMAIL PROTECTED]   141K     -   189G   -
mypool2/d3                  492G  254G  11.5G   legacy
I am so

[zfs-discuss] Re: Managed to corrupt my pool

2006-12-05 Thread Jim Hranicky
Anyone have any thoughts on this? I'd really like to be able to build a nice ZFS box for file service but if a hardware failure can corrupt a disk pool I'll have to try to find another solution, I'm afraid. Sorry, I worded this poorly -- if the loss of a disk in a mirror can corrupt the

[zfs-discuss] Re: weird thing with zfs

2006-12-05 Thread Chris Gerhard
What OS is this? What is the hardware? Can you try running format with efi_debug set? You have to run format under a debugger and patch the variable. Here is how, using mdb (set a breakpoint in main so that the dynamic linker has done its stuff, then update the value of efi_debug to 1,
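A sketch of the mdb session being described, assuming efi_debug is a writable global in format's address space (output elided):

  # mdb /usr/sbin/format
  > main:b
  > :r
  > efi_debug/W 1
  > :c

The breakpoint on main ensures the process is fully linked before /W writes 1 into efi_debug; :c then continues format normally with the debug output enabled.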

[zfs-discuss] Re: Re: ZFS related kernel panic

2006-12-05 Thread Peter Eriksson
So ZFS should be more resilient against write errors, and the SCSI disk or FC drivers should be more resilient against LIPs (the most likely cause of your problem) or other transient errors. (Alternatively, the ifp driver should be updated to support the maximum number of targets on a

[zfs-discuss] Re: Re: ZFS related kernel panic

2006-12-05 Thread Peter Eriksson
Hmm... I just noticed this qla2100.conf option:

  # During link down conditions enable/disable the reporting of
  # errors.
  # 0 = disabled, 1 = enable
  hba0-link-down-error=1;
  hba1-link-down-error=1;

I _wonder_ what might possibly happen if I change that 1 to a 0 (zero)... :-)
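For the record, trying that would presumably mean editing the driver's .conf file (commonly under /kernel/drv, though the path may vary by driver package), then rebooting or rereading the configuration with update_drv:

  hba0-link-down-error=0;
  hba1-link-down-error=0;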

Re: [zfs-discuss] weird thing with zfs

2006-12-05 Thread Richard Elling
This looks more like a cabling or connector problem. When that happens you should see parity errors and transfer rate negotiations. -- richard Krzys wrote: Ok, so here is an update. I restarted my system; I powered it off and powered it on. Here is a screen capture of my boot. I certainly do

Re: [zfs-discuss] weird thing with zfs

2006-12-05 Thread Richard Elling
BTW, there is a way to check what the SCSI negotiations resolved to. I wrote about it once in a BluePrint http://www.sun.com/blueprints/0500/sysperfnc.pdf See page 11 -- richard Richard Elling wrote: This looks more like a cabling or connector problem. When that happens you should see

Re: [zfs-discuss] Re: Re: ZFS related kernel panic

2006-12-05 Thread Douglas Denny
On 12/5/06, Peter Eriksson [EMAIL PROTECTED] wrote: Hmm... I just noticed this qla2100.conf option: # During link down conditions enable/disable the reporting of # errors. # 0 = disabled, 1 = enable hba0-link-down-error=1; hba1-link-down-error=1; This is the driver that we are using in this

Re: [zfs-discuss] weird thing with zfs

2006-12-05 Thread Nathan Kroenert
Hm. If the disk has no label, why would it have an s0? Or, did you mean p0? Nathan. On Wed, 2006-12-06 at 04:45, Krzys wrote: Does not work :( dd if=/dev/zero of=/dev/rdsk/c3t6d0s0 bs=1024k count=1024 dd: opening `/dev/rdsk/c3t6d0s0': I/O error That is so strange... it seems like I

Re: [zfs-discuss] raidz DEGRADED state

2006-12-05 Thread David Bustos
Quoth Thomas Garner on Thu, Nov 30, 2006 at 06:41:15PM -0500: I currently have a 400GB disk that is full of data on a linux system. If I buy 2 more disks and put them into a raid-z'ed zfs under solaris, is there a generally accepted way to build a degraded array with the 2 disks, copy the

Re: [zfs-discuss] raidz DEGRADED state

2006-12-05 Thread Thomas Garner
So there is no current way to specify the creation of a 3-disk raid-z array with a known missing disk? On 12/5/06, David Bustos [EMAIL PROTECTED] wrote: Quoth Thomas Garner on Thu, Nov 30, 2006 at 06:41:15PM -0500: I currently have a 400GB disk that is full of data on a linux system. If I buy

Re: [zfs-discuss] Re: Managed to corrupt my pool

2006-12-05 Thread Neil Perrin
Jim, I'm not at all sure what happened to your pool. However, I can answer some of your questions. Jim Hranicky wrote on 12/05/06 11:32: So the questions are: - is this fixable? I don't see an inum I could run find on to remove, I think the pool is busted. Even the message printed in your

[zfs-discuss] Re: Shared ZFS pools

2006-12-05 Thread Anton B. Rang
You specify the mirroring configuration. The top-level vdevs are implicitly striped. So if you, for instance, request something like zpool create mirror AA BA mirror AB BB then you will have a pool consisting of a stripe of two mirrors. Each mirror will have one copy of its data at each
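Concretely, with illustrative device names where c1 and c2 are the two controllers (arrays A and B):

  zpool create tank mirror c1t0d0 c2t0d0 mirror c1t1d0 c2t1d0

Writes are striped across the two mirrors, and each mirror keeps one copy of its data on each array.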

[zfs-discuss] Re: Re: Managed to corrupt my pool

2006-12-05 Thread Anton B. Rang
I think the pool is busted. Even the message printed in your previous email is bad:

  DATASET  OBJECT  RANGE
  15       0       lvl=4294967295 blkid=0

as the level is way out of range. I think this could be from dmu_objset_open_impl(). It sets object to 0 and level to -1 (=

[zfs-discuss] Re: raidz DEGRADED state

2006-12-05 Thread Anton B. Rang
Creating an array configuration with one element being a sparse file, then removing that file, comes to mind, but I wouldn't want to be the first to attempt it. ;-)
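Untested, as he says, but a sketch of the trick with illustrative sizes and names:

  # sparse file the same size as the real disks
  mkfile -n 400g /var/tmp/fake
  zpool create tank raidz c0t1d0 c0t2d0 /var/tmp/fake
  # take the fake device out; the pool runs DEGRADED but usable
  zpool offline tank /var/tmp/fake
  rm /var/tmp/fake
  # ...copy the data over, then hand the freed real disk to the pool
  zpool replace tank /var/tmp/fake c0t3d0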