Re: [zfs-discuss] SOLVED: Mount ZFS pool on different system

2009-01-06 Thread Cyril Payet
Btw, when you import a pool, you must know its name. Is there any command to get the name of the pool to which a non-imported disk belongs? # vxdisk -o alldgs list does this with VxVM. Thanks for your replies. C. -Original Message- From: zfs-discuss-boun...@opensolaris.org

Re: [zfs-discuss] SOLVED: Mount ZFS pool on different system

2009-01-06 Thread Rodney Lindner - Services Chief Technologist
Yep.. Just run zpool import without a pool name and it will list any pools that are available for import, e.g.:

sb2000::# zpool import
  pool: mp
    id: 17232673347678393572
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        mp          ONLINE
          raidz2    ONLINE
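A pool that shows up in this listing can then be imported by name or by the numeric identifier; a minimal sketch using the pool from the output above (-f is only needed if the pool was last in use on another host and was not cleanly exported):

    # zpool import mp
    # zpool import 17232673347678393572
    # zpool import -f mp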

Re: [zfs-discuss] Metaslab alignment on RAID-Z

2009-01-06 Thread Robert Milkowski
Is there any update on this? You suggested that Jeff had some kind of solution for this - has it been integrated or is someone working on it?

Re: [zfs-discuss] ZFS vs HardWare raid - data integrity?

2009-01-06 Thread Anton B. Rang
For SCSI disks (including FC), you would use the FUA bit on the read command. For SATA disks ... does anyone care? ;-)

[zfs-discuss] zfs list improvements?

2009-01-06 Thread Chris Gerhard
To improve the performance of scripts that manipulate zfs snapshots, and the zfs snapshot service in particular, there needs to be a way to list all the snapshots for a given object and only the snapshots for that object. There are two RFEs filed that cover this:
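As an illustration of the capability being requested, where zfs list supports a type filter the snapshots of one dataset can be pulled out like this (the dataset name is a placeholder); the RFEs are about making such a listing cheap rather than walking every dataset in the pool:

    # zfs list -r -t snapshot -o name,creation tank/home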

Re: [zfs-discuss] SOLVED: Mount ZFS pool on different system

2009-01-06 Thread Cyril Payet
OK, got it: just use zpool import. Sorry for the inconvenience ;-) C. -Original Message- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On behalf of Cyril Payet Sent: Tuesday, January 6, 2009 09:16 To: D. Eckert; zfs-discuss@opensolaris.org Subject

Re: [zfs-discuss] How to find out the zpool of an uberblock printed with the fbt:zfs:uberblock_update: probes?

2009-01-06 Thread Marcelo Leal
Hi, Hello Bernd, After I published a blog entry about installing OpenSolaris 2008.11 on a USB stick, I read a comment about a possible issue with wearing out blocks on the USB stick after some time because ZFS overwrites its uberblocks in place. I did not understand well what you

[zfs-discuss] Observation of Device Layout vs Performance

2009-01-06 Thread Jacob Ritorto
My OpenSolaris 2008.11 PC seems to attain better throughput with one big sixteen-device RAIDZ2 than with four stripes of 4-device RAIDZ. I know it's by no means an exhaustive test, but catting /dev/zero to a file in the pool now frequently exceeds 600 Megabytes per second, whereas before with

Re: [zfs-discuss] Observation of Device Layout vs Performance

2009-01-06 Thread Keith Bierman
On Jan 6, 2009, at 9:44 AM, Jacob Ritorto wrote: but catting /dev/zero to a file in the pool now [...] Do you get the same sort of results from /dev/random? I wouldn't be surprised if /dev/zero turns out to be a special case. Indeed, using any of the special files is probably not ideal.

Re: [zfs-discuss] Observation of Device Layout vs Performance

2009-01-06 Thread Bob Friesenhahn
On Tue, 6 Jan 2009, Jacob Ritorto wrote: My OpenSolaris 2008.11 PC seems to attain better throughput with one big sixteen-device RAIDZ2 than with four stripes of 4-device RAIDZ. I know it's by no means an exhaustive test, but catting /dev/zero to a file in the pool now frequently exceeds

Re: [zfs-discuss] Observation of Device Layout vs Performance

2009-01-06 Thread Bob Friesenhahn
On Tue, 6 Jan 2009, Keith Bierman wrote: Do you get the same sort of results from /dev/random? /dev/random is very slow and should not be used for benchmarking. Bob == Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/

Re: [zfs-discuss] Observation of Device Layout vs Performance

2009-01-06 Thread Jacob Ritorto
Is urandom nonblocking? On Tue, Jan 6, 2009 at 1:12 PM, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote: On Tue, 6 Jan 2009, Keith Bierman wrote: Do you get the same sort of results from /dev/random? /dev/random is very slow and should not be used for benchmarking. Bob

Re: [zfs-discuss] Observation of Device Layout vs Performance

2009-01-06 Thread Keith Bierman
On Jan 6, 2009, at 11:12 AM, Bob Friesenhahn wrote: On Tue, 6 Jan 2009, Keith Bierman wrote: Do you get the same sort of results from /dev/random? /dev/random is very slow and should not be used for benchmarking. Not directly, no. But copying from /dev/random to a real file and

Re: [zfs-discuss] Observation of Device Layout vs Performance

2009-01-06 Thread Bob Friesenhahn
On Tue, 6 Jan 2009, Jacob Ritorto wrote: Is urandom nonblocking? The OS provided random devices need to be secure, and so they depend on collecting entropy from the system so the random values are truly random. They also execute complex code to produce the random numbers. As a result, both

Re: [zfs-discuss] Observation of Device Layout vs Performance

2009-01-06 Thread Jacob Ritorto
OK, so use a real I/O test program, or at least pre-generate files large enough to exceed RAM caching? On Tue, Jan 6, 2009 at 1:19 PM, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote: On Tue, 6 Jan 2009, Jacob Ritorto wrote: Is urandom nonblocking? The OS provided random devices need to
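A rough sketch of that approach (paths and sizes are placeholders): build the test data once outside the measurement, then time only the transfers against the pool, so neither /dev/zero nor the random devices skew the result:

    # dd if=/dev/urandom of=/var/tmp/testfile bs=1024k count=8192    # ~8 GB, generated once
    # ptime dd if=/var/tmp/testfile of=/pool/testfile bs=1024k       # time the write
    # ptime dd if=/pool/testfile of=/dev/null bs=1024k               # time the read back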

Re: [zfs-discuss] X4500, snv_101a, hd and zfs

2009-01-06 Thread Elaine Ashton
Ok, to be a bit more specific: hdadm and write_cache run 'format -e -d $disk'. On this system, format will produce the list of devices in short order - format -e, however, takes much, much longer, which would explain why it takes hours to iterate over 48 drives. It's very curious and

[zfs-discuss] Questions about OS 2008.11 partitioning scheme

2009-01-06 Thread Alex Viskovatoff
Hi all, I did an install of OpenSolaris in which I specified that the whole disk should be used for the installation. Here is what format verify produces for that disk:

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       1 - 60797       465.73GB

Re: [zfs-discuss] Observation of Device Layout vs Performance

2009-01-06 Thread A Darren Dunham
On Tue, Jan 06, 2009 at 08:44:01AM -0800, Jacob Ritorto wrote: Is this increase explicable / expected? The throughput calculator sheet output I saw seemed to forecast better iops with the striped raidz vdevs and I'd read that, generally, throughput is augmented by keeping the number of vdevs

[zfs-discuss] Performance issue with zfs send of a zvol

2009-01-06 Thread Brian H. Nelson
I noticed this issue yesterday when I first started playing around with zfs send/recv. This is on Solaris 10U6. It seems that a zfs send of a zvol issues 'volblocksize' reads to the physical devices. This doesn't make any sense to me, as zfs generally consolidates read/write requests to

Re: [zfs-discuss] Questions about OS 2008.11 partitioning scheme

2009-01-06 Thread Cindy . Swearingen
Alex, I think the root cause of your confusion is that the format utility and disk labels are very unfriendly and confusing. Partition 2 identifies the whole disk, and on x86 systems space is needed for boot-related information, which is currently stored in partition 8. Neither of these partitions
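For what it's worth, that layout can also be checked without running format; printing the VTOC of the whole-disk slice (device name is a placeholder) shows the root slice and the small boot partition Cindy mentions:

    # prtvtoc /dev/rdsk/c4t0d0s2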

Re: [zfs-discuss] Observation of Device Layout vs Performance

2009-01-06 Thread Jacob Ritorto
I have that iozone program loaded, but its results were rather cryptic for me. Is it adequate if I learn how to decipher the results? Can it thread out and use all of my CPUs? Do you have tools to do random I/O exercises? -- Darren

Re: [zfs-discuss] ZFS send fails incremental snapshot

2009-01-06 Thread Brent Jones
On Mon, Jan 5, 2009 at 4:29 PM, Brent Jones br...@servuhome.net wrote: On Mon, Jan 5, 2009 at 2:50 PM, Richard Elling richard.ell...@sun.com wrote: Correlation question below... Brent Jones wrote: On Sun, Jan 4, 2009 at 11:33 PM, Carsten Aulbert carsten.aulb...@aei.mpg.de wrote: Hi

Re: [zfs-discuss] Observation of Device Layout vs Performance

2009-01-06 Thread Bob Friesenhahn
On Tue, 6 Jan 2009, Jacob Ritorto wrote: I have that iozone program loaded, but its results were rather cryptic for me. Is it adequate if I learn how to decipher the results? Can it thread out and use all of my CPUs? Yes, iozone does support threading. Here is a test with a record size of
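For anyone who wants to reproduce a run like that, an iozone invocation along these lines should match the parameters described (directory and sizes are placeholders; -t sets the thread count, -r the record size, -s the per-thread file size, -o forces synchronous writes, and -i 0/-i 1 select the write and read tests):

    # cd /pool/testdir
    # iozone -t 8 -r 8k -s 2g -o -i 0 -i 1

Adjust -s so the total working set comfortably exceeds RAM, otherwise the ARC serves most of the reads.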

[zfs-discuss] Practical Application of ZFS

2009-01-06 Thread Rob
ZFS is the bomb. It's a great file system. What are its real-world applications besides Solaris userspace? What I'd really like is to utilize the benefits of ZFS across all the platforms we use. For instance, we use Microsoft Windows Servers as our primary platform here. How might I utilize

Re: [zfs-discuss] Practical Application of ZFS

2009-01-06 Thread Marcelo Leal
Hello, - One way is virtualization: if you use a virtualization technology that uses NFS, for example, you could put your virtual images on a ZFS filesystem. NFS can be used without virtualization too, but as you said the machines are Windows, and I don't think the NFS client for Windows is

Re: [zfs-discuss] Practical Application of ZFS

2009-01-06 Thread Bob Friesenhahn
On Tue, 6 Jan 2009, Rob wrote: The only way I can visualize doing so would be to virtualize the Windows server and store its image in a ZFS pool. That would add additional overhead but protect the data at the disk level. It would also allow snapshots of the Windows machine's virtual file.

Re: [zfs-discuss] Practical Application of ZFS

2009-01-06 Thread Rob
I am not experienced with iSCSI. I understand it's block-level disk access via TCP/IP. However, I don't see how using it eliminates the need for virtualization. Are you saying that a Windows Server can access a ZFS drive via iSCSI and store NTFS files?

Re: [zfs-discuss] Practical Application of ZFS

2009-01-06 Thread Bob Friesenhahn
On Tue, 6 Jan 2009, Rob wrote: Are you saying that a Windows Server can access a ZFS drive via iSCSI and store NTFS files? A volume is created under ZFS, similar to a large sequential file. The iSCSI protocol is used to export that volume as a LUN. Windows can then format it and put NTFS
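A minimal sketch of that setup on an OpenSolaris release of this vintage (pool and volume names are placeholders; on the Windows side the LUN is attached with the Microsoft iSCSI initiator and then formatted as NTFS):

    # zfs create -V 100g tank/winlun          # create a 100 GB zvol
    # zfs set shareiscsi=on tank/winlun       # export the zvol as an iSCSI target
    # iscsitadm list target                   # confirm the target is being advertised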

Re: [zfs-discuss] Questions about OS 2008.11 partitioning scheme

2009-01-06 Thread A Darren Dunham
On Tue, Jan 06, 2009 at 10:22:20AM -0800, Alex Viskovatoff wrote: I did an install of OpenSolaris in which I specified that the whole disk should be used for the installation. Here is what format verify produces for that disk:

Part      Tag    Flag     Cylinders         Size

Re: [zfs-discuss] Questions about OS 2008.11 partitioning scheme

2009-01-06 Thread Volker A. Brandt
http://docs.sun.com/app/docs/doc/817-5093/disksconcepts-20068?a=view (To add more confusion, partitions are also referred to as slices.) Nope, at least not on x86 systems. A partition holds the Solaris part of the disk, and that part is subdivided into slices. Partitions are visible to other

Re: [zfs-discuss] zfs create performance degrades dramatically with increasing number of file systems

2009-01-06 Thread Alastair Neil
On Mon, Jan 5, 2009 at 5:27 AM, Roch roch.bourbonn...@sun.com wrote: Alastair Neil writes: I am attempting to create approx 10600 zfs file systems across two pools. The devices underlying the pools are mirrored iscsi volumes shared over a dedicated gigabit Ethernet with jumbo frames

Re: [zfs-discuss] Questions about OS 2008.11 partitioning scheme

2009-01-06 Thread A Darren Dunham
On Tue, Jan 06, 2009 at 11:49:27AM -0700, cindy.swearin...@sun.com wrote: My wish for this year is to boot from EFI-labeled disks so examining disk labels is mostly unnecessary because ZFS pool components could be constructed as whole disks, and the unpleasant disk format/label/partitioning

Re: [zfs-discuss] ZFS vs HardWare raid - data integrity?

2009-01-06 Thread JZ
[ok, no one replying, my spam then...] Open folks just care about SMART so far. http://www.mail-archive.com/linux-s...@vger.kernel.org/msg07346.html Enterprise folks care more about spin-down. (not an open thing yet, unless new practical industry standard is here that I don't know. yeah right.)

Re: [zfs-discuss] Observation of Device Layout vs Performance

2009-01-06 Thread Jacob Ritorto
Yes, iozone does support threading. Here is a test with a record size of 8KB, eight threads, synchronous writes, and a 2GB test file:

        Multi_buffer. Work area 16777216 bytes
        OPS Mode. Output is in operations per second.
        Record Size 8 KB
        SYNC Mode.
        File

Re: [zfs-discuss] Practical Application of ZFS

2009-01-06 Thread Rob
Wow. I will read further into this. That seems like it could have great applications. I assume the same is true of FCoE?

Re: [zfs-discuss] Questions about OS 2008.11 partitioning scheme

2009-01-06 Thread JZ
Hello Darren, This one, ok, was a valid thought/question -- On Solaris, root pools cannot have EFI labels (the boot firmware doesn't support booting from them). http://blog.yucas.info/2008/11/26/zfs-boot-solaris/ But again, this is a ZFS discussion, and obviously EFI is not a ZFS, or even

Re: [zfs-discuss] Questions about OS 2008.11 partitioning scheme

2009-01-06 Thread Alex Viskovatoff
Cindy, Well, it worked. The system can boot off c4t0d0s0 now. But I am still a bit perplexed. Here is how the invocation of installgrub went:

a...@diotiima:~# installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4t0d0s0
Updating master boot sector destroys existing boot managers (if

[zfs-discuss] POSIX permission bits, ACEs, and inheritance confusion

2009-01-06 Thread Peter Skovgaard Nielsen
I am running a test system with Solaris 10u6 and I am somewhat confused as to how ACE inheritance works. I've read through http://opensolaris.org/os/community/zfs/docs/zfsadmin.pdf but it doesn't seem to cover what I am experiencing. The ZFS file system that I am working on has both aclmode

Re: [zfs-discuss] Questions about OS 2008.11 partitioning scheme

2009-01-06 Thread Cindy . Swearingen
Hi Alex, The fact that you have to install the boot blocks manually on the second disk that you added with zpool attach is a bug! I should have mentioned this bug previously. If you had used the initial installation method to create a mirrored root pool, the boot blocks would have been applied
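Until that is fixed, the manual sequence looks roughly like this (device names are placeholders; the slice rather than the whole disk is attached because root pools require an SMI label):

    # zpool attach rpool c3t0d0s0 c4t0d0s0
    # zpool status rpool          # wait for the resilver to finish
    # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4t0d0s0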

Re: [zfs-discuss] POSIX permission bits, ACEs, and inheritance confusion

2009-01-06 Thread Mark Shellenbaum
ls -V file
-rw-r--r--+  1 root  root  0 Jan 6 21:42 d
     user:root:rwxpdDaARWcCos:--:allow
            owner@:--x---:--:deny
            owner@:rw-p---A-W-Co-:--:allow
            group@:-wxp--:--:deny

Re: [zfs-discuss] POSIX permission bits, ACEs, and inheritance confusion

2009-01-06 Thread Nicolas Williams
On Tue, Jan 06, 2009 at 01:27:41PM -0800, Peter Skovgaard Nielsen wrote:

ls -V file
--+  1 root  root  0 Jan 6 22:15 file
     user:root:rwxpdDaARWcCos:--:allow
         everyone@:--:--:allow

Not bad at all. However, I contend that this
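For reference, an ACE is only propagated to newly created files and directories when it carries inheritance flags; a hedged example using the documented verbose chmod syntax (directory name and principal are placeholders):

    # chmod A+user:webadm:read_data/write_data/execute:file_inherit/dir_inherit:allow /tank/share
    # ls -dV /tank/share          # the new ACE should show the fd inheritance flags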

[zfs-discuss] Problems at 90% zpool capacity 2008.05

2009-01-06 Thread Sam
I've run into this problem twice now: before with 10x500GB drives in a ZFS+ setup, and now again with a 12x500GB ZFS+ setup. The problem is that when the pool reaches ~85% capacity I get random read failures, and around ~90% capacity I get read failures AND zpool corruption. For example: - I open a

Re: [zfs-discuss] Questions about OS 2008.11 partitioning scheme

2009-01-06 Thread Alex Viskovatoff
Thanks for clearing that up. That all makes sense. I was wondering why ZFS doesn't use the whole disk in the standard OpenSolaris install. That explains it.

Re: [zfs-discuss] Questions about OS 2008.11 partitioning scheme

2009-01-06 Thread A Darren Dunham
On Tue, Jan 06, 2009 at 01:24:17PM -0800, Alex Viskovatoff wrote:

a...@diotiima:~# installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4t0d0s0
Updating master boot sector destroys existing boot managers (if any). continue (y/n)? y
stage1 written to partition 0 sector 0 (abs 16065)

Re: [zfs-discuss] Questions about OS 2008.11 partitioning scheme

2009-01-06 Thread Alex Viskovatoff
Hi Cindy, I now suspect that the boot blocks are located outside of the space in partition 0 that actually belongs to the zpool, in which case it is not necessarily a bug that zpool attach does not write those blocks, IMO. Indeed, that must be the case, since GRUB needs to get to stage2 in

Re: [zfs-discuss] Questions about OS 2008.11 partitioning scheme

2009-01-06 Thread A Darren Dunham
On Tue, Jan 06, 2009 at 04:10:10PM -0500, JZ wrote: Hello Darren, This one, ok, was a valid thought/question -- Darn, I was hoping... On Solaris, root pools cannot have EFI labels (the boot firmware doesn't support booting from them). http://blog.yucas.info/2008/11/26/zfs-boot-solaris/

Re: [zfs-discuss] Practical Application of ZFS

2009-01-06 Thread Tim
On Tue, Jan 6, 2009 at 2:58 PM, Rob rdyl...@yahoo.com wrote: Wow. I will read further into this. That seems like it could have great applications. I assume the same is true of FCoE? -- Yes, iSCSI, FC, and FCoE all present a LUN to Windows. For the layman, from the Windows system the disk

Re: [zfs-discuss] ZFS vs HardWare raid - data integrity?

2009-01-06 Thread JZ
Ok, folks, new news - [feel free to comment in any fashion, since I don't know how yet.] EMC ACQUIRES OPEN-SOURCE ASSETS FROM SOURCELABS http://go.techtarget.com/r/5490612/6109175

Re: [zfs-discuss] 'zfs recv' is very slow

2009-01-06 Thread Brent Jones
On Sat, Dec 6, 2008 at 11:40 AM, Ian Collins i...@ianshome.com wrote: Richard Elling wrote: Ian Collins wrote: Ian Collins wrote: Andrew Gabriel wrote: Ian Collins wrote: I've just finished a small application to couple zfs_send and zfs_receive through a socket to remove ssh from the

Re: [zfs-discuss] Problems at 90% zpool capacity 2008.05

2009-01-06 Thread Orvar Korvar
It is not recommended to fill more than 90% of any file system, I think. For instance, NTFS can behave very badly when it runs out of space. It is similar to filling up your RAM when you have no swap space: the computer starts to thrash badly. Not recommended. Avoid 90% and above, and you

Re: [zfs-discuss] Problems at 90% zpool capacity 2008.05

2009-01-06 Thread Sam
I was hoping that this was the problem (because just buying more discs is the cheapest solution given time=$$), but when I ran it by somebody at work they said that going over 90% can cause decreased performance but is unlikely to cause the strange errors I'm seeing. However, I think I'll stick a 1TB

Re: [zfs-discuss] Problems at 90% zpool capacity 2008.05

2009-01-06 Thread Carson Gaspar
On 1/6/2009 4:19 PM, Sam wrote: I was hoping that this was the problem (because just buying more discs is the cheapest solution given time=$$), but when I ran it by somebody at work they said that going over 90% can cause decreased performance but is unlikely to cause the strange errors I'm seeing.

Re: [zfs-discuss] Practical Application of ZFS

2009-01-06 Thread David Magda
On Jan 6, 2009, at 14:21, Rob wrote: Obviously ZFS is ideal for large databases served out via application level or web servers. But what other practical ways are there to integrate the use of ZFS into existing setups to experience its benefits? Remember that ZFS is made up of the ZPL

Re: [zfs-discuss] Problems at 90% zpool capacity 2008.05

2009-01-06 Thread Tim
On Tue, Jan 6, 2009 at 6:19 PM, Sam s...@smugmug.com wrote: I was hoping that this was the problem (because just buying more discs is the cheapest solution given time=$$), but when I ran it by somebody at work they said that going over 90% can cause decreased performance but is unlikely to cause the

Re: [zfs-discuss] Problems at 90% zpool capacity 2008.05

2009-01-06 Thread Nicholas Lee
Since zfs is so smart in other areas, is there a particular reason why a high water mark is not calculated and the available space not reset to this? I'd far rather have a zpool of 1000GB that said it only had 900GB but did not have corruption as it ran out of space. Nicholas
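One approximation of that today is to carve out the headroom yourself with a reservation on an otherwise unused dataset, so the rest of the pool can never be filled past the chosen mark (names and sizes are placeholders, sized here for roughly 10% of a 1 TB pool):

    # zfs create tank/headroom
    # zfs set reservation=100g tank/headroom
    # zfs set mountpoint=none tank/headroom    # nothing should ever be written to it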

Re: [zfs-discuss] Problems at 90% zpool capacity 2008.05

2009-01-06 Thread Tim
On Tue, Jan 6, 2009 at 10:25 PM, Nicholas Lee emptysa...@gmail.com wrote: Since zfs is so smart in other areas, is there a particular reason why a high water mark is not calculated and the available space not reset to this? I'd far rather have a zpool of 1000GB that said it only had 900GB but

Re: [zfs-discuss] Problems at 90% zpool capacity 2008.05

2009-01-06 Thread JZ
BTW, the high water mark method is not perfect; here is some Novell material on water mark support... best, z http://www.novell.com/coolsolutions/tools/16991.html Based on my own belief that there had to be a better way and the number of issues I'd seen reported in the Support Forums, I spent a lot of

Re: [zfs-discuss] Problems at 90% zpool capacity 2008.05

2009-01-06 Thread Neil Perrin
On 01/06/09 21:25, Nicholas Lee wrote: Since zfs is so smart in other areas, is there a particular reason why a high water mark is not calculated and the available space not reset to this? I'd far rather have a zpool of 1000GB that said it only had 900GB but did not have corruption as it

Re: [zfs-discuss] 'zfs recv' is very slow

2009-01-06 Thread Carsten Aulbert
Hi, Brent Jones wrote: Using mbuffer can speed it up dramatically, but this seems like a hack without addressing a real problem with zfs send/recv. Trying to send any meaningfully sized snapshot from, say, an X4540 takes up to 24 hours, for as little as a 300GB change rate. I have not found a
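For anyone wanting to try the mbuffer approach Brent mentions, the pipeline looks roughly like this (host names, port, buffer sizes, and snapshot names are placeholders; mbuffer has to be installed on both ends):

    receiver# mbuffer -s 128k -m 1G -I 9090 | zfs receive -d tank
    sender#   zfs send tank/fs@snap | mbuffer -s 128k -m 1G -O receiver:9090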