Re: [zfs-discuss] deduplication

2009-09-17 Thread Cyril Plisko
2009/9/17 Brandon High bh...@freaks.com: 2009/9/11 C. Bergström codest...@osunix.org: Can we make a FAQ on this somewhere? 1) There is some legal bla bla between Sun and green-bytes that's tying up the IP around dedup... (someone knock some sense into green-bytes please) 2) there's an

[zfs-discuss] Persistent errors - do I believe?

2009-09-17 Thread Chris Murray
I can flesh this out with detail if needed, but a brief chain of events is: 1. RAIDZ1 zpool with drives A, B, C, D (I don't have access to see the original drive names) 2. New disk E. Replaced A with E. 3. Part way through the resilver, drive D was 'removed' 4. 700+ persistent errors detected, and lots
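
For reference, the commands typically used to inspect and clear errors like these, as a minimal sketch (pool name illustrative):

zpool status -v tank    # -v lists the files affected by persistent errors
zpool clear tank        # resets the error counters once the damage is dealt with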

Re: [zfs-discuss] deduplication

2009-09-17 Thread Thomas Burgess
I think you're right, and I also think we'll still see a new post asking about it once or twice a week. On Thu, Sep 17, 2009 at 2:20 AM, Cyril Plisko cyril.pli...@mountall.com wrote: 2009/9/17 Brandon High bh...@freaks.com: 2009/9/11 C. Bergström codest...@osunix.org: Can we make a FAQ on

Re: [zfs-discuss] RAIDZ versus mirrroed

2009-09-17 Thread Eugen Leitl
On Wed, Sep 16, 2009 at 10:23:01AM -0700, Richard Elling wrote: This line of reasoning doesn't get you very far. It is much better to take a look at the mean time to data loss (MTTDL) for the various configurations. I wrote a series of blogs to show how this is done.
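
As a rough illustration of the kind of calculation involved (the standard first-order reliability model, not necessarily the exact formulas from the blog series): for a 2-way mirror with per-disk mean time between failures MTBF and mean time to repair MTTR,

MTTDL(2-way mirror) ~= MTBF^2 / (2 * MTTR)
e.g. MTBF = 1,000,000 h, MTTR = 24 h  ->  ~2.1e10 hours per mirror pair
(with P independent mirror pairs in the pool, divide by P)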

Re: [zfs-discuss] RAIDZ versus mirrroed

2009-09-17 Thread Tomas Ögren
On 17 September, 2009 - Eugen Leitl sent me these 2,0K bytes: On Wed, Sep 16, 2009 at 08:02:35PM +0300, Markus Kovero wrote: It's possible to do 3-way (or more) mirrors too, so you may achieve better redundancy than raidz2/3 I understand there's almost no additional performance penalty

Re: [zfs-discuss] RAIDZ versus mirrroed

2009-09-17 Thread Erik Trimble
Eugen Leitl wrote: On Wed, Sep 16, 2009 at 08:02:35PM +0300, Markus Kovero wrote: It's possible to do 3-way (or more) mirrors too, so you may achieve better redundancy than raidz2/3 I understand there's almost no additional performance penalty to raidz3 over raidz2 in terms of CPU

Re: [zfs-discuss] ZFS Export, Import = Windows sees wrong groups in ACLs

2009-09-17 Thread Kyle McDonald
Owen Davies wrote: Thanks. I took a look and that is exactly what I was looking for. Of course I have since just reset all the permissions on all my shares but it seems that the proper way to swap UIDs for users with permissions on CIFS shares is to: Edit /etc/passwd Edit /var/smb/smbpasswd
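
Based only on the truncated steps above, the sequence sketched would be something like this (treat it as illustrative, not a verified procedure; the full message may list further steps):

vi /etc/passwd           # change the user's UID
vi /var/smb/smbpasswd    # update the matching entry so CIFS stays in sync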

Re: [zfs-discuss] RAIDZ versus mirrroed

2009-09-17 Thread Eugen Leitl
On Thu, Sep 17, 2009 at 12:55:35PM +0200, Tomas Ögren wrote: It's not a fixed value per technology, it depends on the number of disks per group. RAID5/RAIDZ1 loses 1 disk worth to parity per group. RAID6/RAIDZ2 loses 2 disks. RAIDZ3 loses 3 disks. RAID1/mirror loses half the disks. So in your
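
Worked through for the 14-disk example that comes up later in this thread:

14 disks, one raidz2 group  : 14 - 2      = 12 disks of data
14 disks, two 7-disk raidz2 : (5+2)+(5+2) = 10 disks of data
14 disks, 2-way mirrors     : 14 / 2      =  7 disks of data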

Re: [zfs-discuss] RAIDZ versus mirrroed

2009-09-17 Thread Darren J Moffat
Erik Trimble wrote: So SSDs for ZIL/L2ARC don't bring that much when used with raidz2/raidz3, if I write a lot, at least, and don't access the cache very much, according to some recent posts on this list. Not true. Remember: ZIL = write cache. ZIL is NOT a write cache. The ZIL is the
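
For context, the ZIL can be moved to a dedicated log device, and an SSD can separately serve as L2ARC; a minimal sketch (pool and device names illustrative):

zpool add tank log c4t0d0     # separate intent log (slog) on an SSD
zpool add tank cache c4t1d0   # L2ARC read cache on another SSD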

Re: [zfs-discuss] USB WD Passport 500GB zfs mirror bug

2009-09-17 Thread Matthias Pfützner
Might be related to Solaris bug 6881590 http://sunsolve.sun.com/search/document.do?assetkey=1-1-6881590-1 Matthias

[zfs-discuss] migrating from linux to solaris ZFS

2009-09-17 Thread Paul Archer
I recently (re)built a fileserver at home, using Ubuntu and zfs-fuse to create a ZFS filesystem (RAIDz1) on five 1.5TB drives. I had some serious issues with NFS not working properly (kept getting stale file handles), so I tried to switch to OpenSolaris/Nexenta, but my SATA controller wasn't
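
The usual export/import sequence for moving a pool between operating systems, as a sketch (pool name illustrative):

# on the Linux (zfs-fuse) side, before moving the disks:
zpool export tank
# on the OpenSolaris/Nexenta side:
zpool import          # scans attached devices for importable pools
zpool import tank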

[zfs-discuss] opensolaris/zfs as virtualization host

2009-09-17 Thread nikola toshev
Hi, ZFS drew my attention and I'm thinking of using it to manage the storage for a virtual machine host. The plan is to get a Core i7 machine with 12GB RAM and 6 SATA disks (+1 PATA for boot/swap), configure the 6 disks in a tank of mirrored pairs and keep on that pool a number of virtual
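
A pool of three mirrored pairs as described would be created roughly like this (device names illustrative):

zpool create tank mirror c1t0d0 c1t1d0 \
                 mirror c1t2d0 c1t3d0 \
                 mirror c1t4d0 c1t5d0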

[zfs-discuss] d2d2t

2009-09-17 Thread Greg
Hello all, I have an OpenSolaris server which is used as an iSCSI SAN on snv_122. I am then using two ESXi boxes to connect to them and this is where the storage for the virtual machines lies. On here are several VMs including Linux and Windows servers. We have another server which is almost
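
One common way to replicate such a pool to a second server is snapshot send/receive; a hedged sketch (host and dataset names illustrative):

zfs snapshot tank/vms@mon
zfs send tank/vms@mon | ssh backuphost zfs recv -F backup/vms
# later, send only the changes since the previous snapshot:
zfs snapshot tank/vms@tue
zfs send -i mon tank/vms@tue | ssh backuphost zfs recv backup/vms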

Re: [zfs-discuss] migrating from linux to solaris ZFS

2009-09-17 Thread Fajar A. Nugraha
On Thu, Sep 17, 2009 at 8:55 PM, Paul Archer p...@paularcher.org wrote: I can reboot into Linux and import the pools, but haven't figured out why I can't import them in Solaris. I don't know if it makes a difference (I wouldn't think so), but zfs-fuse under Linux is using ZFS version 13, where

Re: [zfs-discuss] migrating from linux to solaris ZFS

2009-09-17 Thread Paul Archer
10:09pm, Fajar A. Nugraha wrote: On Thu, Sep 17, 2009 at 8:55 PM, Paul Archer p...@paularcher.org wrote: I can reboot into Linux and import the pools, but haven't figured out why I can't import them in Solaris. I don't know if it makes a difference (I wouldn't think so), but zfs-fuse under

Re: [zfs-discuss] migrating from linux to solaris ZFS

2009-09-17 Thread Darren J Moffat
Paul Archer wrote: 10:09pm, Fajar A. Nugraha wrote: On Thu, Sep 17, 2009 at 8:55 PM, Paul Archer p...@paularcher.org wrote: I can reboot into Linux and import the pools, but haven't figured out why I can't import them in Solaris. I don't know if it makes a difference (I wouldn't think so),

Re: [zfs-discuss] migrating from linux to solaris ZFS

2009-09-17 Thread Paul Archer
10:40am, Paul Archer wrote: I can reboot into Linux and import the pools, but haven't figured out why I can't import them in Solaris. I don't know if it makes a difference (I wouldn't think so), but zfs-fuse under Linux is using ZFS version 13, whereas Nexenta is using version 14. Just a

Re: [zfs-discuss] RAIDZ versus mirrroed

2009-09-17 Thread Erik Trimble
Darren J Moffat wrote: Erik Trimble wrote: So SSDs for ZIL/L2ARC don't bring that much when used with raidz2/raidz3, if I write a lot, at least, and don't access the cache very much, according to some recent posts on this list. Not true. Remember: ZIL = write cache. ZIL is NOT a write

Re: [zfs-discuss] Persistent errors - do I believe?

2009-09-17 Thread David Dyer-Bennet
On Thu, September 17, 2009 04:29, Chris Murray wrote: 2. New disk E. Replaced A with E. 3. Part way through resilver, drive D was 'removed' 4. 700+ persistent errors detected, and lots of checksum errors on all drives. Surprised by this - I thought the absence of one drive could be

[zfs-discuss] Adding new disks and ditto block behaviour

2009-09-17 Thread Joe Toppi
I have a machine that had 2x 1TB drives in it. They were in the same zpool, and that entire zpool is set to copies=2. From what I understand this will store all my data twice, and if the SPA is doing its job right it will store the copies on different disks and store the checksum for any given
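
For reference, the property in question (pool name illustrative); note that copies applies only to blocks written after the property is set:

zfs set copies=2 tank
zfs get copies tank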

Re: [zfs-discuss] deduplication

2009-09-17 Thread Tim Cook
On Thu, Sep 17, 2009 at 5:27 AM, Thomas Burgess wonsl...@gmail.com wrote: I think you're right, and I also think we'll still see a new post asking about it once or twice a week. On Thu, Sep 17, 2009 at 2:20 AM, Cyril Plisko cyril.pli...@mountall.com wrote: 2009/9/17 Brandon High

[zfs-discuss] stat() performance on files on zfs vs. ufs

2009-09-17 Thread Robert Milkowski
Hi, Bug ID: 6775100 stat() performance on files on zfs should be improved was fixed in snv_119. I wanted to do a quick comparison between snv_117 and snv_122 on my workstation to see what kind of improvement there is. I wrote a small C program which does a stat() N times in a loop. This is of
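
A minimal sketch of such a microbenchmark (file name and iteration count illustrative; this is not the original program):

cat > statbench.c <<'EOF'
#include <sys/types.h>
#include <sys/stat.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    struct stat sb;
    long i, n;

    if (argc != 3)
        return 2;
    n = atol(argv[2]);
    for (i = 0; i < n; i++)          /* stat() the same path N times */
        if (stat(argv[1], &sb) != 0) {
            perror("stat");
            return 1;
        }
    return 0;
}
EOF
cc -O -o statbench statbench.c
ptime ./statbench /tank/somefile 1000000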

Re: [zfs-discuss] Adding new disks and ditto block behaviour

2009-09-17 Thread Carson Gaspar
Joe Toppi wrote: I have a machine that had 2x 1TB drives in it. They were in the same zpool and that entire zpool is set to copies=2. From what I understand this will store all my data twice, and if the SPA is doing its job right it will store the copies on different disks and store the checksum

Re: [zfs-discuss] migrating from linux to solaris ZFS

2009-09-17 Thread Paul Archer
5:08pm, Darren J Moffat wrote: Paul Archer wrote: 10:09pm, Fajar A. Nugraha wrote: On Thu, Sep 17, 2009 at 8:55 PM, Paul Archer p...@paularcher.org wrote: I can reboot into Linux and import the pools, but haven't figured out why I can't import them in Solaris. I don't know if it makes a

Re: [zfs-discuss] Persistent errors - do I believe?

2009-09-17 Thread Chris Murray
Thanks David. Maybe I misunderstand how a replace works? When I added disk E and used 'zpool replace [A] [E]' (still can't remember those drive names), I thought that disk A would still be part of the pool, and be read from in order to build the contents of disk E? Sort of like a safer way of

Re: [zfs-discuss] migrating from linux to solaris ZFS

2009-09-17 Thread Darren J Moffat
Paul Archer wrote: What kind of partition table is on the disks, is it EFI? If not that might be part of the issue. I don't believe there is any partition table on the disks. I pointed zfs to the raw disks when I set up the pool. If you run fdisk on OpenSolaris against this disk what does

Re: [zfs-discuss] RAIDZ versus mirrroed

2009-09-17 Thread Marion Hakanson
rswwal...@gmail.com said: It's not the stripes that make a difference, but the number of controllers there. What's the system config on that puppy? The zpool status -v output was from a Thumper (X4500), slightly edited, since in our real-world Thumper, we use c6t0d0 in c5t4d0's place in the

Re: [zfs-discuss] migrating from linux to solaris ZFS

2009-09-17 Thread Paul Archer
6:44pm, Darren J Moffat wrote: Paul Archer wrote: What kind of partition table is on the disks, is it EFI? If not that might be part of the issue. I don't believe there is any partition table on the disks. I pointed zfs to the raw disks when I set up the pool. If you run fdisk on

Re: [zfs-discuss] migrating from linux to solaris ZFS

2009-09-17 Thread Darren J Moffat
Paul Archer wrote:
r...@ubuntu:~# fdisk -l /dev/sda
Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xce13f90b
Device Boot Start End Blocks Id System
/dev/sda1

[zfs-discuss] Moving volumes to new controller

2009-09-17 Thread Nilsen, Vidar
Hi, I'm trying to move disks in a zpool from one SATA controller to another. It's 16 disks in 4x4 raidz. Just to see if it could be done, I moved one disk from one raidz over to the new controller. The server was powered off. After booting the OS, I get this: Zpool status (...) raidz1 DEGRADED

Re: [zfs-discuss] RAIDZ versus mirrroed

2009-09-17 Thread Adam Leventhal
On Thu, Sep 17, 2009 at 01:32:43PM +0200, Eugen Leitl wrote: reasons), you will lose 2 disks worth of storage to parity leaving 12 disks worth of data. With raid10 you will lose half, 7 disks to parity/redundancy. With two raidz2 sets, you will get (5+2)+(5+2), that is 5+5 disks worth of

Re: [zfs-discuss] Moving volumes to new controller

2009-09-17 Thread Marion Hakanson
vidar.nil...@palantir.no said: I'm trying to move disks in a zpool from one SATA controller to another. It's 16 disks in 4x4 raidz. Just to see if it could be done, I moved one disk from one raidz over to the new controller. Server was powered off. . . . zpool replace storage c10t7d0 c11t0d0
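
An export/import cycle (rather than zpool replace) is the commonly suggested way to let ZFS rediscover disks under their new controller paths; a sketch:

zpool export storage
zpool import storage   # device paths are re-scanned on import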

Re: [zfs-discuss] zfs not sharing nfs shares on OSOl 2009.06?

2009-09-17 Thread Tom de Waal
All, after long searching I found the reason: the IPS package SUNWnfsskr was missing. Thanks for all your replies and help. Regards, Tom. Tom de Waal wrote: Hi, I'm trying to identify why my NFS server does not work. I'm using a more or less core install of OSOL 2009.06 (release)
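
For anyone hitting the same thing, installing the missing package is a one-liner (restarting the NFS service afterwards is an assumption, not from the original post):

pkg install SUNWnfsskr
svcadm restart svc:/network/nfs/server   # assumed follow-up step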

Re: [zfs-discuss] migrating from linux to solaris ZFS

2009-09-17 Thread Paul Archer
7:37pm, Darren J Moffat wrote: Paul Archer wrote:
r...@ubuntu:~# fdisk -l /dev/sda
Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xce13f90b
Device Boot Start End

Re: [zfs-discuss] ZFS file disk usage

2009-09-17 Thread Robert Milkowski
Andrew Deason wrote: As I'm sure you're all aware, filesize in ZFS can differ greatly from actual disk usage, depending on access patterns. e.g. truncating a 1M file down to 1 byte still uses up about 130k on disk when recordsize=128k. I'm aware that this is a result of ZFS's rather different
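
The difference is easy to observe directly; a sketch (path illustrative):

ls -l  /tank/cache/somefile   # logical length (st_size)
du -k  /tank/cache/somefile   # kilobytes actually allocated on disk
ls -ls /tank/cache/somefile   # both at once: allocated blocks, then size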

Re: [zfs-discuss] deduplication

2009-09-17 Thread James C. McPherson
On Thu, 17 Sep 2009 11:50:17 -0500 Tim Cook t...@cook.ms wrote: On Thu, Sep 17, 2009 at 5:27 AM, Thomas Burgess wonsl...@gmail.com wrote: I think you're right, and i also think we'll still see a new post asking about it once or twice a week. [snip] As we should. Did the video of the

Re: [zfs-discuss] migrating from linux to solaris ZFS

2009-09-17 Thread Andrew Gabriel
Darren J Moffat wrote: Paul Archer wrote:
r...@ubuntu:~# fdisk -l /dev/sda
Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xce13f90b
Device Boot Start End Blocks

Re: [zfs-discuss] ZFS file disk usage

2009-09-17 Thread Andrew Deason
On Thu, 17 Sep 2009 22:55:38 +0100 Robert Milkowski mi...@task.gda.pl wrote: IMHO you won't be able to lower a file's blocksize other than by creating a new file. For example: Okay, thank you. If you are not worried about this extra overhead and you are mostly concerned with proper accounting

Re: [zfs-discuss] ZFS file disk usage

2009-09-17 Thread Robert Milkowski
If you create a dedicated dataset for your cache and set a quota on it, then instead of tracking disk space usage for each file you could easily check how much disk space is being used in the dataset. Would it suffice for you? Setting recordsize to 1k if you have lots of files (I
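
As a sketch of the suggestion (names and sizes illustrative):

zfs create -o recordsize=1k -o quota=10g tank/cache
zfs get used,available,quota tank/cache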

Re: [zfs-discuss] migrating from linux to solaris ZFS

2009-09-17 Thread Fajar A. Nugraha
On Fri, Sep 18, 2009 at 4:08 AM, Paul Archer p...@paularcher.org wrote: I did a little research and found that parted on Linux handles EFI labelling. I used it to change the partition scheme on sda, creating an sda1. I then offlined sda and replaced it with sda1. I wish I had just tried a
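
The sequence described would look roughly like this (a hedged sketch; parted arguments and partition boundaries are illustrative):

# on Linux: put a GPT/EFI label on the disk, then one large partition
parted /dev/sda mklabel gpt
parted /dev/sda mkpart primary 1MiB 100%    # boundaries illustrative
# swap the whole-disk vdev for the new partition
zpool offline tank sda
zpool replace tank sda sda1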

Re: [zfs-discuss] Adding new disks and ditto block behaviour

2009-09-17 Thread Bob Friesenhahn
On Thu, 17 Sep 2009, Joe Toppi wrote: I filled this with data. So I added a 1.5 TB drive to the pool. Where will my ditto blocks and checksums go? Will it migrate data from the other drives automatically? Will it migrate data if I scrub or resilver? Will it never migrate data and just store

Re: [zfs-discuss] migrating from linux to solaris ZFS

2009-09-17 Thread Paul Archer
Tomorrow, Fajar A. Nugraha wrote: There was a post from Ricardo on the zfs-fuse list some time ago. Apparently if you do a zpool create on whole disks, Linux and Solaris behave differently: - Solaris will create an EFI partition on that disk, and use the partition as the vdev - Linux will use the whole

Re: [zfs-discuss] migrating from linux to solaris ZFS

2009-09-17 Thread Al Muckart
On 18/09/2009, at 1:08 PM, Fajar A. Nugraha wrote: There was a post from Ricardo on the zfs-fuse list some time ago. Apparently if you do a zpool create on whole disks, Linux and Solaris behave differently: - Solaris will create an EFI partition on that disk, and use the partition as the vdev - Linux