[zfs-discuss] Re: zpool create -f ... fails on disk with previous

2007-05-15 Thread Matthew Flanagan
On 5/15/07, eric kustarz [EMAIL PROTECTED] wrote: On May 12, 2007, at 2:12 AM, Matthew Flanagan wrote: On May 10, 2007, at 10:04 PM, Matthew Flanagan wrote: Hi, I have a test server that I use for testing my different jumpstart installations. This system is continuously installed and

Re: [zfs-discuss] Odd zpool create error

2007-05-15 Thread Trevor Watson
Ian, It looks like the error message is wrong - slice 7 overlaps slice 4 - note that slice 4 ends at c6404, but slice 7 starts at c6394. Slice 6 is also completely contained within slice 4's range of cylinders, but that won't matter unless you attempt to use it. Trev Ian Collins wrote:
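
For reference, slice overlaps can also be read straight off the disk label with prtvtoc; a quick sketch (the device name is only an example):

   # prtvtoc /dev/rdsk/c6t0d0s2
   (compare the First Sector and Sector Count columns of each slice to spot overlaps)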

Re: [zfs-discuss] zpool create -f ... fails on disk with previous UFS on it

2007-05-15 Thread Robert Milkowski
Hello Matthew, Friday, May 11, 2007, 7:04:06 AM, you wrote: Check in your script (df -h?) whether s6 isn't mounted anyway... -- Best regards, Robert  mailto:[EMAIL PROTECTED]  http://milek.blogspot.com
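
A quick way to run that check before the zpool create -f (slice name is just an example):

   # df -h | grep c1t0d0s6      (is the slice mounted?)
   # umount /dev/dsk/c1t0d0s6   (if so, unmount it first)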

Re: [zfs-discuss] Re: zfs and jbod-storage

2007-05-15 Thread Robert Milkowski
Hello Gino, Monday, May 14, 2007, 4:07:31 PM, you wrote: G We are using a lot of EMC DAE2. Works well with ZFS. Without head units? Dual-pathed connections to hosts + MPxIO? -- Best regards, Robert  mailto:[EMAIL PROTECTED]

Re: [zfs-discuss] Odd zpool create error

2007-05-15 Thread Ian Collins
Trevor Watson wrote: Ian, It looks like the error message is wrong - slice 7 overlaps slice 4 - note that slice 4 ends at c6404, but slice 7 starts at c6394. Slice 6 is also completely contained within slice 4's range of cylinders, but that won't matter unless you attempt to use it.

Re: [zfs-discuss] Odd zpool create error

2007-05-15 Thread Ian Collins
Ian Collins wrote: Trevor Watson wrote: Ian, It looks like the error message is wrong - slice 7 overlaps slice 4 - note that slice 4 ends at c6404, but slice 7 starts at c6394. Slice 6 is also completely contained within slice 4's range of cylinders, but that won't matter unless you

Re: [zfs-discuss] Odd zpool create error

2007-05-15 Thread Trevor Watson
I don't suppose that it has anything to do with the flag being wm instead of wu on your second drive, does it? Maybe if the driver thinks slice 2 is writeable, it treats it as a valid slice? Trev Ian Collins wrote: Ian Collins wrote: Trevor Watson wrote: Ian, It looks like the error

[zfs-discuss] Re: Re: zfs and jbod-storage

2007-05-15 Thread Gino
Hello Robert, G We are using a lot of EMC DAE2. Works well with ZFS. Without head units? Yes. Just make sure to format disks to 512 bytes per sector if they are from EMC. Dual-pathed connections to hosts + MPxIO? sure. Also we are using some Xyratex JBOD boxes. gino This message
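
For the dual-path question, the MPxIO state of the paths can be checked with the bundled tools; for example (output will vary by setup):

   # stmsboot -L        (list devices under MPxIO control)
   # mpathadm list lu   (show multipathed logical units and their path counts)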

Re[2]: [zfs-discuss] How does ZFS write data to disks?

2007-05-15 Thread Robert Milkowski
Hello James, Thursday, May 10, 2007, 11:12:57 PM, you wrote: zfs will interpret zero'd sectors as holes, so won't really write them to disk; it just adjusts the file size accordingly. It does that only with compression turned on. -- Best regards, Robert
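
For anyone who wants to verify that, compression is a per-dataset property; a minimal example (pool/dataset names are placeholders):

   # zfs set compression=on tank/fs   (zero-filled blocks are then stored as holes)
   # zfs get compression tank/fs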

Re: [zfs-discuss] Re: Optimal strategy (add or replace disks) to build a cheap and raidz?

2007-05-15 Thread Robert Milkowski
Hello Pal, Friday, May 11, 2007, 6:41:41 PM, you wrote: PB Note! You can't even regret what you have added to a pool. Being PB able to evacuate a vdev and replace it by a bigger one would have PB helped. But this isn't possible either (currently). Actually you can. See 'zpool replace'. So you
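
A sketch of the replace step being referred to (device names are hypothetical):

   # zpool replace tank c1t2d0 c1t3d0   (resilver the data onto the larger disk)
   # zpool status tank                  (wait for the resilver to finish before replacing the next disk)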

[zfs-discuss] Re: Re: Optimal strategy (add or replace disks) tobuild a cheap and raidz?

2007-05-15 Thread Christian Rost
Yes, I have tested this virtually with VMware. Replacing disks with bigger ones works great, but the new space becomes usable only after replacing *all* disks. I had hoped that the new space would become usable after replacing 3 or 4 disks. I think the best strategy for me now is buying 2 x 750 GB disks and

[zfs-discuss] Re: Re: Lots of overhead with ZFS - what am I doing wrong?

2007-05-15 Thread Nick G
I have no idea what to make of all this, except that ZFS has a problem with this hardware/drivers that UFS and other traditional file systems don't. Is it a bug in the driver that ZFS is inadvertently exposing? A specific feature that ZFS assumes the hardware has, but it doesn't?

[zfs-discuss] snv63: kernel panic on import

2007-05-15 Thread Tomasz Torcz
Hi, I have a problem with a ZFS filesystem on an array. The ZFS filesystem was created by Solaris 10 U2. Some glitches with the array made Solaris panic on boot. I've installed snv63 (as snv60 contains some important fixes); the system boots, but the kernel panics when I try to import the pool. This is with zfs_recover=1.
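
If savecore captured a dump, the panic can be inspected afterwards; a rough sketch (paths assume the default /var/crash/<hostname> location):

   # cd /var/crash/`hostname`
   # mdb -k unix.0 vmcore.0
   > ::status    (panic string)
   > ::stack     (panic stack trace)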

Re: [zfs-discuss] Odd zpool create error

2007-05-15 Thread Mark J Musante
On Tue, 15 May 2007, Trevor Watson wrote: I don't suppose that it has anything to do with the flag being wm instead of wu on your second drive does it? Maybe if the driver thinks slice 2 is writeable, it treats it as a valid slice? If the slice doesn't take up the *entire* disk, then it

[zfs-discuss] Re: Remove files when at quota limit

2007-05-15 Thread Ben Miller
Has anyone else run into this situation? Does anyone have any solutions other than removing snapshots or increasing the quota? I'd like to put in an RFE to reserve some space so files can be removed when users are at their quota. Any thoughts from the ZFS team? Ben We have around 1000
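
A quick way to see how much of a user's space is pinned by snapshots (dataset name is only an example):

   # zfs list -t snapshot -r pool/home/user   (USED is the space each snapshot holds)
   # zfs get quota,reservation pool/home/user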

Re: [zfs-discuss] Re: Remove files when at quota limit

2007-05-15 Thread Wade . Stuart
[EMAIL PROTECTED] wrote on 05/15/2007 09:01:00 AM: Has anyone else run into this situation? Does anyone have any solutions other than removing snapshots or increasing the quota? I'd like to put in an RFE to reserve some space so files can be removed when users are at their quota. Any

[zfs-discuss] Re: Best way to migrate filesystems to ZFS?

2007-05-15 Thread Pål Baltzersen
I would use rsync; over NFS if possible, otherwise over ssh. (NFS performs significantly better on read than on write, so preferably share from the old host and mount on the new.)
old# share -F nfs -o [EMAIL PROTECTED],[EMAIL PROTECTED] /my/data    (or edit /etc/dfs/dfstab and shareall)
new# mount -r

[zfs-discuss] Re: Best way to migrate filesystems to ZFS?

2007-05-15 Thread Pål Baltzersen
Sorry, I realize I was a bit misleading in the path handling and need to correct this part:
new# mount -r old:/my/data /mnt
new# mkdir -p /my/data
new# cd /mnt ; rsync -aRHDn --delete ./ /my/data/
new# cd /mnt ; rsync -aRHD --delete ./ /my/data/
new# umount /mnt
..
new# cd /mnt ; rsync -aRHD
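
Where NFS isn't available, roughly the same copy can be pulled over ssh instead; one possible form (flags as above):

   new# rsync -aHD -e ssh --delete old:/my/data/ /my/data/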

Re: [zfs-discuss] Re: Remove files when at quota limit

2007-05-15 Thread Eric Schrock
On Tue, May 15, 2007 at 09:36:35AM -0500, [EMAIL PROTECTED] wrote:
* Ignore snapshot reservations when calculating quota -- Don't punish users for administratively driven snap policy. See: 6431277 want filesystem-only quotas
* Ignore COW overhead for quotas (allow unlink anytime) -- from my

[zfs-discuss] Clear corrupted data

2007-05-15 Thread XIU
Hey, I'm currently running on Nexenta alpha 6 and I have some corrupted data in a pool. The output from sudo zpool status -v data is:
pool: data
state: ONLINE
status: One or more devices has experienced an error resulting in data corruption. Applications may be affected.
action:

[zfs-discuss] Re: Re: Lots of overhead with ZFS - what am I doing wrong?

2007-05-15 Thread Jürgen Keil
Would you mind also doing: ptime dd if=/dev/dsk/c2t1d0 of=/dev/null bs=128k count=1 to see the raw performance of underlying hardware. This dd command is reading from the block device, which might cache data and probably splits requests into maxphys pieces (which happens to be 56K on an
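
To take the block-device cache and the maxphys split out of the picture, the same read can be repeated against the raw device; a sketch (device path and count are just examples):

   # ptime dd if=/dev/rdsk/c2t1d0s0 of=/dev/null bs=128k count=10000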

Re: [zfs-discuss] Re: Re: Lots of overhead with ZFS - what am I doing wrong?

2007-05-15 Thread Jonathan Edwards
On May 15, 2007, at 13:13, Jürgen Keil wrote: Would you mind also doing: ptime dd if=/dev/dsk/c2t1d0 of=/dev/null bs=128k count=1 to see the raw performance of underlying hardware. This dd command is reading from the block device, which might cache data and probably splits requests into

Re: [zfs-discuss] Clear corrupted data

2007-05-15 Thread eric kustarz
On May 15, 2007, at 9:37 AM, XIU wrote: Hey, I'm currently running on Nexenta alpha 6 and I have some corrupted data in a pool. The output from sudo zpool status -v data is: pool: data state: ONLINE status: One or more devices has experienced an error resulting in data corruption.

[zfs-discuss] Re: ZFS over a layered driver interface

2007-05-15 Thread Shweta Krishnan
With what Edward suggested, I got rid of the ldi_get_size() error by defining the prop_op entry point appropriately. However, the zpool create still fails - with zio_wait() returning 22. bash-3.00# dtrace -n 'fbt::ldi_get_size:entry{self->t=1;} fbt::ldi_get_size:entry/self->t/{}
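
One way to narrow down where the error 22 (EINVAL) originates is an fbt return probe across the zfs module; a rough sketch, not specific to this driver:

   # dtrace -n 'fbt:zfs::return /arg1 == 22/ { @[probefunc] = count(); }'
   (run the zpool create in another shell, then Ctrl-C to print which functions returned 22)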

Re: [zfs-discuss] Clear corrupted data

2007-05-15 Thread XIU
Hey, Using the steps on http://www.opensolaris.org/jive/thread.jspa?messageID=39450&tstart=0 confirms that it's the iso file. Removing the file does work, I'll just download the file again and let a scrub clean up the error message. Steve On 5/15/07, eric kustarz [EMAIL PROTECTED] wrote: On
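
For reference, the recovery sequence being described is roughly (pool name from the thread, file path is just an example):

   # rm /data/path/to/file.iso
   # zpool scrub data
   # zpool status -v data    (once the scrub completes, the error list should be empty)
   # zpool clear data        (reset the error counters)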

Re: [zfs-discuss] Re: Lots of overhead with ZFS - what am I doing wrong?

2007-05-15 Thread johansen-osdev
Each drive is freshly formatted with one 2G file copied to it. How are you creating each of these files? Also, would you please include the output from the isalist(1) command? These are snapshots of iostat -xnczpm 3 captured somewhere in the middle of the operation. Have you

[zfs-discuss] Re: snv63: kernel panic on import

2007-05-15 Thread Nigel Smith
I seem to have got the same core dump, in a different way. I had a zpool set up on an iscsi 'disk'. For details see: http://mail.opensolaris.org/pipermail/storage-discuss/2007-May/001162.html But after a reboot the iscsi target was no longer available, so the iscsi initiator could not provide the

Re: [zfs-discuss] Re: zpool create -f ... fails on disk with previous

2007-05-15 Thread Matthew Flanagan
On 5/15/07, Matthew Flanagan [EMAIL PROTECTED] wrote: On 5/15/07, eric kustarz [EMAIL PROTECTED] wrote: On May 12, 2007, at 2:12 AM, Matthew Flanagan wrote: On May 10, 2007, at 10:04 PM, Matthew Flanagan wrote: Hi, I have a test server that I use for testing my different

Re: [zfs-discuss] Lots of overhead with ZFS - what am I doing wrong?

2007-05-15 Thread Matthew Ahrens
Marko Milisavljevic wrote: I was trying to simply test bandwidth that Solaris/ZFS (Nevada b63) can deliver from a drive, and doing this: dd if=(raw disk) of=/dev/null gives me around 80MB/s, while dd if=(file on ZFS) of=/dev/null gives me only 35MB/s!?. Our experience is that ZFS gets very

[zfs-discuss] Solaris Backup Server

2007-05-15 Thread Hazvinei Mugwagwa
I have an OpenSolaris server running with a raidz zfs pool with almost 1TB of storage. This is intended to be a central fileserver via samba and ftp for all sorts of purposes. I also want to use it to back up my XP laptop. I am having trouble finding out how I can set up Solaris to allow my XP

Re: [zfs-discuss] Solaris Backup Server

2007-05-15 Thread Michael Hale
On May 15, 2007, at 9:32 PM, Hazvinei Mugwagwa wrote: I have an opensolaris server running with a raidz zfs pool with almost 1TB of storage. This is intended to be a central fileserver via samba and ftp for all sorts of purposes. I also want to use it to backup my XP laptop. I am having
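
One common approach for the XP backup target is a writable Samba share on the pool; a minimal smb.conf fragment (share and path names are placeholders, and the file lives under /etc/sfw on SXCE-era systems):

   [laptop-backup]
      path = /tank/backup/laptop
      writable = yes
      valid users = myuser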

Re: [zfs-discuss] Lots of overhead with ZFS - what am I doing wrong?

2007-05-15 Thread Marko Milisavljevic
Hello Matthew, Yes, my machine is 32-bit, with 1.5G of RAM. -bash-3.00# echo ::memstat | mdb -k
Page Summary            Pages     MB   %Tot
Kernel                 123249    481    32%
Anon

Re: [zfs-discuss] Re: Re: Lots of overhead with ZFS - what am I doing wrong?

2007-05-15 Thread Marko Milisavljevic
I tried as you suggested, but I notice that output from iostat while doing dd if=/dev/dsk/... still shows that reading is done in 56k chunks. I haven't seen any change in performance. Perhaps iostat doesn't say what I think it does. Using dd if=/dev/rdsk/.. gives 256k, and dd if=zfsfile gives 128k

Re: [zfs-discuss] Re: Lots of overhead with ZFS - what am I doing wrong?

2007-05-15 Thread Marko Milisavljevic
On 5/15/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote: Each drive is freshly formatted with one 2G file copied to it. How are you creating each of these files? zpool create tank c0d0 c0d1; zfs create tank/test; cp ~/bigfile /tank/test/ Actual content of the file is random junk from