Re: [zfs-discuss] ZFS data loss

2009-04-08 Thread Fajar A. Nugraha
On Wed, Apr 8, 2009 at 4:06 PM, Tomas Ögren st...@acc.umu.se wrote:
> Do you think there is something that can be done to recover lost data? Thanks, Vic
Does 'zpool import' find anything? Perhaps run 'devfsadm -v' first to re-scan devices... or, per the info from the other thread, boot from disk into
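The suggested recovery steps can be sketched as a short session (the pool name `tank` is hypothetical; whether anything is found depends on the state of the devices):

```shell
# Re-scan /dev for devices that may have appeared or moved
devfsadm -v

# Ask ZFS which exported or orphaned pools it can see on attached devices
zpool import

# If the pool shows up, import it by name; add -f only if it was last
# used on another system and you are certain it is not in use there
zpool import tank
```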

Re: [zfs-discuss] Can this be done?

2009-04-08 Thread Cindy . Swearingen
Michael, You can't attach disks to an existing RAIDZ vdev, but you can add another RAIDZ vdev. Also keep in mind that you can't detach disks from RAIDZ pools either. See the syntax below. Cindy
# zpool create rzpool raidz2 c1t0d0 c1t1d0 c1t2d0
# zpool status
  pool: rzpool
 state: ONLINE
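Since disks can't be attached to an existing RAIDZ vdev, growing the pool means adding a whole new top-level RAIDZ vdev. A minimal sketch, continuing Cindy's example (the `c2t*` device names are hypothetical):

```shell
# Add a second top-level raidz2 vdev to the existing pool
zpool add rzpool raidz2 c2t0d0 c2t1d0 c2t2d0

# The pool now stripes new writes across both raidz2 vdevs
zpool status rzpool
```

Note that `zpool add` is one-way: a top-level vdev cannot be removed from the pool afterwards.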

Re: [zfs-discuss] Can this be done?

2009-04-08 Thread Miles Nordin
ms == Michael Shadle mike...@gmail.com writes:
ms When I attach this new raidz2, will ZFS auto rebalance data
ms between the two, or will it keep the other one empty and do
ms some sort of load balancing between the two for future writes
ms only?
The second choice. You can see
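One way to see that existing data stays put while only new writes spread across vdevs is to watch per-vdev usage after the add (a sketch; the pool name is from the earlier example and hypothetical here):

```shell
# Per-vdev capacity and I/O: a newly added raidz2 vdev shows almost no
# space used until new writes land on it; the old vdev stays full
zpool iostat -v rzpool
```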

Re: [zfs-discuss] Importing zpool after one side of mirror was destroyed

2009-04-08 Thread Miles Nordin
gs == Geoff Shipman geoff.ship...@sun.com writes:
gs At this point boot from disk into single user mode and move
gs the /etc/zfs/zpool.cache file to a different name.
and these days ``boot single-user'' seems to often mean 'boot -m milestone=none'. The old 'boot -s' will, AFAICT,
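The procedure Geoff describes can be sketched as follows (assuming a SPARC-style OBP prompt; the backup filename is an arbitrary choice, the thread only says "a different name"):

```shell
# From the OpenBoot prompt, boot to the 'none' milestone rather than
# the old single-user 'boot -s':
#   ok boot -m milestone=none

# Rename the cache file so ZFS will not try to open the damaged pool
# automatically at the next boot
mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bak
```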

Re: [zfs-discuss] Can this be done?

2009-04-08 Thread Michael Shadle
On Wed, Apr 8, 2009 at 9:39 AM, Miles Nordin car...@ivy.net wrote:
ms == Michael Shadle mike...@gmail.com writes:
ms When I attach this new raidz2, will ZFS auto rebalance data
ms between the two, or will it keep the other one empty and do
ms some sort of load balancing between the two

Re: [zfs-discuss] Data size grew.. with compression on

2009-04-08 Thread Richard Elling
Harry Putnam wrote:
> Robert Milkowski mi...@task.gda.pl writes:
> > Then if a block doesn't compress by better than 12.5%, it won't be compressed at all. Then in ZFS you need extra space for checksums, etc.
How did the OP come up with how much data is being used? OP: just used `du -sh' at both
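The 12.5% figure means compression must save at least one eighth of the record, or ZFS stores the block uncompressed. A minimal arithmetic sketch (the record and compressed sizes are hypothetical):

```shell
recordsize=131072                            # default 128K ZFS record
threshold=$((recordsize - recordsize / 8))   # must compress to <= 114688 bytes
compressed=120000                            # hypothetical compressed result
if [ "$compressed" -gt "$threshold" ]; then
  echo "stored uncompressed"                 # savings under 12.5%
else
  echo "stored compressed"
fi
```

For measuring what actually happened, `zfs get compressratio` on the filesystem is usually more meaningful than comparing `du -sh` output across filesystems, since different filesystems account for blocks and metadata differently.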

Re: [zfs-discuss] Data size grew.. with compression on

2009-04-08 Thread Harry Putnam
Richard Elling richard.ell...@gmail.com writes:
> Harry Putnam wrote:
> > Robert Milkowski mi...@task.gda.pl writes:
> > > Then if a block doesn't compress by better than 12.5%, it won't be compressed at all. Then in ZFS you need extra space for checksums, etc.
> How did the OP come up with how much data

Re: [zfs-discuss] Data size grew.. with compression on

2009-04-08 Thread Harry Putnam
Harry Putnam rea...@newsguy.com writes:
> Richard Elling richard.ell...@gmail.com writes:
> > Harry Putnam wrote:
> > > Robert Milkowski mi...@task.gda.pl writes:
> > > > Then if a block doesn't compress by better than 12.5%, it won't be compressed at all. Then in ZFS you need extra space for checksums, etc.

Re: [zfs-discuss] Data size grew.. with compression on

2009-04-08 Thread Jeff Bonwick
Yes, I made note of that in my OP on this thread. But is it enough to end up with 8 GB of non-compressed files measuring 8 GB on reiserfs (Linux), and the same data showing nearly 9 GB when copied to a ZFS filesystem with compression on? Whoops... a hefty exaggeration; it only shows about

[zfs-discuss] ZFS Panic

2009-04-08 Thread Grant Lowe
Hi All, Don't know if this is worth reporting, as it's human error. Anyway, I had a panic on my ZFS box. Here's the error:
marksburg /usr2/glowe grep panic /var/log/syslog
Apr 8 06:57:17 marksburg savecore: [ID 570001 auth.error] reboot after panic: assertion failed: 0 ==