[zfs-discuss] OS restore to first hard disk on ZFS while booted from second hard disk

2011-01-24 Thread Ddl
Hi, I have a Solaris 10 x86 server with 2 hard disks running in a mirrored UFS configuration. Currently we are trying to implement an OS backup solution using Networker 7.6. I can successfully back up the OS to a remote Networker server. But now the trouble is if I need to perform a full Solaris

[zfs-discuss] ZFS/ARC consuming all memory on heavy reads (w/ dedup enabled)

2011-01-24 Thread Clemens Kalb
Greetings Gentlemen, I'm currently testing a new setup for a ZFS based storage system with dedup enabled. The system is set up on OI 148, which seems quite stable w/ dedup enabled (compared to the OpenSolaris snv_136 build I used before). One issue I ran into, however, is quite baffling: With
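[For reference, on illumos/OpenIndiana the usual first mitigation for ARC pressure is capping it in /etc/system; a minimal sketch, assuming an 8 GiB ceiling is acceptable for this box (the value is purely illustrative):

    * /etc/system entry, takes effect after a reboot: cap the ARC at 8 GiB
    set zfs:zfs_arc_max = 0x200000000

    # check the current ARC size from a shell
    kstat -p zfs:0:arcstats:size

With dedup enabled the deduplication table competes for that same ARC space, so an undersized cap tends to trade memory pressure for read performance.]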

[zfs-discuss] Recurring checksum errors on RAIDZ2 vdev

2011-01-24 Thread Ashley Nicholls
Hello all, I'm having a problem that I find difficult to diagnose. I have an IBM x3550 M3 running Nexenta Core Platform 3.0.1 (134f) with 7 x 6-disk RAIDZ2 vdevs (see listing at bottom). Every day a disk fails with "Too many checksum errors", is marked as degraded and rebuilt onto a hot spare. I've
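[For readers hitting similar symptoms, a rough first-pass triage on a Solaris-derived system looks like the sketch below; pool and device names are placeholders:

    zpool status -v tank       # which devices show CKSUM errors, and whether any files are affected
    fmdump -eV | less          # FMA error telemetry: look for transport/driver errors behind the checksums
    iostat -En                 # per-device hard/soft/transport error counters
    zpool clear tank c1t5d0    # reset the counters on a suspect device once the cause is understood

Checksum errors that follow a controller slot, cable or expander port rather than a particular disk usually point at the path rather than the drives.]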

Re: [zfs-discuss] OS restore to first hard disk on ZFS while booted from second hard disk

2011-01-24 Thread Ian Collins
On 01/24/11 09:13 PM, Ddl wrote: Hi, I have a Solaris 10 x86 server with 2 hard disks running in a mirrored UFS configuration. Currently we are trying to implement an OS backup solution using Networker 7.6. I can successfully back up the OS to a remote Networker server. But now the trouble is

[zfs-discuss] zfs create -p only creates the parent but not the child

2011-01-24 Thread Rahul Deb
I have a pool tank and dir1 is the filesystem on that pool. zfs list and df -h both show tank/dir1 mounted. # zfs list tank 124K 228G 32K /tank tank/dir1 31K 228G 31K /tank/dir1 # # df
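[For context, zfs create -p is expected to create every missing dataset along the path; a quick way to see what actually got created and mounted, using the names from the original mail:

    zfs create -p tank/dir1/dir2/dir3   # create intermediate datasets as needed
    zfs list -r tank                    # should list tank/dir1, tank/dir1/dir2 and tank/dir1/dir2/dir3
    zfs mount                           # shows which of those datasets are actually mounted]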

[zfs-discuss] Shrinking a pool, Increasing hotspares

2011-01-24 Thread Phillip V
Hey all, I have a 10 TB root pool set up like so: pool: s78 state: ONLINE scrub: resilver completed after 2h0m with 0 errors on Wed Jan 19 22:04:39 2011 config: NAME STATE READ WRITE CKSUM s78 ONLINE 0 0 0 mirror ONLINE 0
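[A minimal sketch of the usual way to free a disk from a mirrored pool and turn it into a hot spare; the device name is hypothetical, and note that detaching leaves that mirror on a single disk until a replacement is attached:

    zpool detach s78 c0t8d0        # drop one side of a mirror; the data stays on the remaining disk
    zpool add s78 spare c0t8d0     # reuse the freed disk as a hot spare
    zpool status s78               # confirm the new layout

Removing an entire top-level mirror vdev to genuinely shrink the pool is a different matter; at this point it generally means send/recv into a smaller pool.]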

Re: [zfs-discuss] Recurring checksum errors on RAIDZ2 vdev

2011-01-24 Thread Ian Collins
On 01/25/11 06:52 AM, Ashley Nicholls wrote: Hello all, I'm having a problem that I find difficult to diagnose. I have an IBM x3550 M3 running Nexenta Core Platform 3.0.1 (134f) with 7 x 6-disk RAIDZ2 vdevs (see listing at bottom). Every day a disk fails with "Too many checksum errors", is

Re: [zfs-discuss] Shrinking a pool, Increasing hotspares

2011-01-24 Thread Erik Trimble
On Mon, 2011-01-24 at 13:56 -0800, Phillip V wrote: Hey all, I have a 10 TB root pool set up like so: pool: s78 state: ONLINE scrub: resilver completed after 2h0m with 0 errors on Wed Jan 19 22:04:39 2011 config: NAME STATE READ WRITE CKSUM s78

[zfs-discuss] Question regarding incremental zfs send/recv for hundreds of zfs filesystems under one pool

2011-01-24 Thread Rahul Deb
There is only one pool and hundreds of zfs file systems under that pool. New file systems are getting created on the fly. Is it possible to automate zfs incremental send/recv in this scenario? My assumption is that it is not, since an incremental send/recv needs a full snapshot to be sent first before

Re: [zfs-discuss] Question regarding incremental zfs send/recv for hundreds of zfs filesystems under one pool

2011-01-24 Thread Ian Collins
On 01/25/11 12:30 PM, Rahul Deb wrote: There is only one pool and hundreds of zfs file systems under that pool. New file systems are getting created on the fly. Is it possible to automate zfs incremental send/recv in this scenario? My assumption is that it is not, since an incremental send/recv needs a
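[For reference, the approach Ian is describing can be sketched with a recursive snapshot plus a replication stream; the snapshot names and destination host/pool are hypothetical:

    zfs snapshot -r tank@snap2                        # recursive snapshot of every dataset in the pool
    zfs send -R -i tank@snap1 tank@snap2 | \
        ssh backuphost zfs receive -Fdu backup        # incremental replication of the whole hierarchy

With -R, descendant file systems that did not exist at the time of the earlier snapshot are normally sent in full as part of the stream, and with -F on the receive side, file systems and snapshots that have been destroyed on the source are destroyed on the target as well, which also speaks to Bryan's deletion question below.]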

Re: [zfs-discuss] Question regarding incremental zfs send/recv for hundreds of zfs filesystems under one pool

2011-01-24 Thread Bryan Hodgson
And does it handle the deletions as well? Bryan On Tue, Jan 25, 2011 at 12:34:57PM +1300, Ian Collins wrote: On 01/25/11 12:30 PM, Rahul Deb wrote: There is only one pool and hundreds of zfs file systems under that pool. New file systems are getting created on the fly.

Re: [zfs-discuss] OS restore to first hard disk on ZFS while booted from second hard disk

2011-01-24 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Ddl But now the trouble is if I need to perform a full Solaris OS restore, I need to perform an installation of the Solaris 10 base OS and install Networker 7.6 client to call back the data
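[Independent of the backup product, the generic bare-metal steps for getting a ZFS root back onto the first disk look roughly like this sketch; pool, boot environment, device and stream path are all hypothetical:

    zpool create -f rpool c0t0d0s0                  # recreate the root pool on the first disk
    zfs receive -Fdu rpool < /backup/rpool.zfs      # restore the datasets, here from a saved send stream
    zpool set bootfs=rpool/ROOT/s10_be rpool        # point the pool at the restored boot environment
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0   # make the disk bootable again

With Networker the restore step would presumably use its own client instead of a send stream, but the pool creation, bootfs property and installgrub steps stay the same.]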

Re: [zfs-discuss] zfs create -p only creates the parent but not the child

2011-01-24 Thread Richard Elling
comment below... On Jan 24, 2011, at 1:58 PM, Rahul Deb wrote: I have a pool tank and dir1 is the filesystem on that pool. zfs list and df -h both show tank/dir1 mounted. - # zfs list tank 124K 228G 32K /tank

Re: [zfs-discuss] zfs create -p only creates the parent but not the child

2011-01-24 Thread Rahul Deb
Thanks Richard for the prompt response. But the second time, the same command creates dir3 too. I mean to say, as I said earlier, the first time it gives the mounting error and does not create dir3: # zfs create -p tank/dir1/dir2/dir3 cannot mount '/tank/dir1/dir2': directory is not empty # but if I
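[When that "directory is not empty" message appears, it usually means the mountpoint directory already contains entries, so the overlay mount is refused; a hedged way to check and recover (the stash location is hypothetical):

    ls -a /tank/dir1/dir2                                # see what is sitting in the mountpoint directory
    zfs list -r tank/dir1                                # check which datasets actually got created
    mkdir -p /var/tmp/stash && mv /tank/dir1/dir2/* /var/tmp/stash/   # move the stray entries aside
    zfs mount -a                                         # retry mounting everything in the hierarchy]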

Re: [zfs-discuss] Question regarding incremental zfs send/recv for hundreds of zfs filesystems under one pool

2011-01-24 Thread Rahul Deb
Thanks Ian for your response. So you are saying that if I create a recursive snapshot of the pool, it will be able to do the incremental send/recv for the file systems created on the fly? I was thinking that if the file systems are created on the fly, then there is no previous snapshot for the newly