Re: [zfs-discuss] Can I recover filesystem from an offline pool?

2010-05-30 Thread Jim Horng
10 GB of memory + 5 days later, the pool was imported. This file server is a virtual machine. I allocated 2 GB of memory and 2 CPU cores, assuming this was enough to manage 6 TB (6x 1TB disks). However, the pool I am trying to recover is only 700 GB, not the 6 TB pool I am trying to migrate. So I decided

[zfs-discuss] Can I recover filesystem from an offline pool?

2010-05-25 Thread Jim Horng
Hi all, is there any procedure to recover a filesystem from an offline pool, or to bring a pool online quickly? Here is my issue: * One 700 GB zpool * 1 filesystem with compression turned on (only using a few MB) * Tried to migrate another filesystem from a different pool as a dedup stream, with zfs send
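
For context, a rough sketch of the import attempts usually tried in this situation, using a hypothetical pool name (tank); the -F recovery option exists on snv_134 but discards the last few transactions, so it is a last resort:
# zpool import                 (list pools that are available for import)
# zpool import -f tank         (force the import if the pool appears in use)
# zpool import -F tank         (recovery mode: roll back the last few transactions)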

Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-14 Thread Jim Horng
You may or may not need to add the log device back. zpool clear should bring the pool online. Either way it shouldn't affect the data.
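
A minimal sketch of that sequence, assuming a hypothetical pool named tank and log device c1t5d0:
# zpool clear tank             (clear device errors and try to bring the pool back online)
# zpool status tank            (confirm the pool state and whether the log device is listed)
# zpool add tank log c1t5d0    (only needed if the log device really is gone)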

Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-13 Thread Jim Horng
When I boot up without the disks in the slots, I manually bring the pool online with zpool clear poolname. I believe that was what you were missing from your command. However, I did not try to change controllers. Hopefully you have only been unplugging disks while the system is turned off. If that's

Re: [zfs-discuss] How can I be sure the zfs send | zfs received is correct?

2010-05-10 Thread Jim Horng
I was expecting zfs send tank/export/projects/project1...@today to send everything up to @today. That is the only snapshot and I am not using the -i option. The thing that worries me is that tank/export/projects/project1_nb was the first file system that I tested with full dedup and
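
For reference, the difference between a full send of one snapshot and an incremental send, sketched with hypothetical dataset names (tank/projects/p1 into mpool/projects/p1):
# zfs send tank/projects/p1@today | zfs receive mpool/projects/p1                (full stream: everything referenced by @today)
# zfs send -i @yesterday tank/projects/p1@today | zfs receive mpool/projects/p1  (incremental: only the changes between the two snapshots)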

[zfs-discuss] How can I be sure the zfs send | zfs received is correct?

2010-05-09 Thread Jim Horng
Okay, so after some tests with dedup on snv_134, I decided we cannot use the dedup feature for the time being. Since I was unable to destroy a dedupped file system, I decided to migrate the file system to another pool and then destroy the pool. (see below)
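
A sketch of that migrate-then-destroy approach, with hypothetical names (oldpool/fs1 copied to newpool/fs1):
# zfs snapshot oldpool/fs1@migrate
# zfs send oldpool/fs1@migrate | zfs receive newpool/fs1
# zpool destroy oldpool        (only after verifying the copy on newpool)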

Re: [zfs-discuss] How can I be sure the zfs send | zfs received is correct?

2010-05-09 Thread Jim Horng
size of snapshot?
r...@filearch1:/var/adm# zfs list mpool/export/projects/project1...@today
NAME                                      USED  AVAIL  REFER  MOUNTPOINT
mpool/export/projects/project1...@today      0      -   407G  -
r...@filearch1:/var/adm# zfs list

Re: [zfs-discuss] Panic when deleting a large dedup snapshot

2010-04-30 Thread Jim Horng
Looks like I am hitting the same issue now as in the earlier post that you responded to: http://opensolaris.org/jive/thread.jspa?threadID=128532&tstart=15 I continued my test migration with dedup=off and synced a couple more file systems. I decided to merge two of the file systems together by

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-29 Thread Jim Horng
Why would you recommend a spare for raidz2 or raidz3? -- richard
A spare is there to minimize the reconstruction time. Remember, a vdev cannot start resilvering until a spare disk is available, and with disks as big as they are today, resilvering also takes many hours. I would rather have
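
Adding a hot spare to an existing pool is a one-liner; the pool and device names here are hypothetical:
# zpool add tank spare c2t7d0
# zpool status tank            (the disk now appears under a separate "spares" section)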

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-29 Thread Jim Horng
Would your opinion change if the disks you used took 7 days to resilver? -- Bob
That would only make a stronger case that a hot spare is absolutely needed. It would also make a strong case for choosing raidz3 over raidz2, as well as for vdevs with a smaller number of disks.

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Jim Horng
For this type of migration, downtime is required. However, it can be reduced to anywhere from a few hours to a few minutes, depending on how much change needs to be synced. I have done this many times on a NetApp Filer, but it can be applied to ZFS as well. The first thing to consider is to only do the migration once
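
A sketch of that sync-then-cutover pattern in ZFS terms, with hypothetical names: take a baseline copy while the share stays live, then do a short final incremental sync during the downtime window.
# zfs snapshot -r oldpool/projects@base
# zfs send -R oldpool/projects@base | zfs receive -d newpool
(later, during the cutover window, with writes stopped)
# zfs snapshot -r oldpool/projects@final
# zfs send -R -i @base oldpool/projects@final | zfs receive -F -d newpool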

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Jim Horng
So, on the point of not needing a migration back: even at 144 disks, they won't all be in the same raid group. So figure out what the best raid group size is for you, since ZFS doesn't support changing the number of disks in a raidz yet. I usually use the number of slots per shelf, or a good number is

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Jim Horng
Sorry, I need to correct myself. Mirroring LUNs on the Windows side to switch the storage pool underneath is a great idea, and I think you can do this without downtime.

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Jim Horng
I understand your point. However, in most production systems the shelves are added incrementally, so it makes sense to tie the raid group size to the number of slots per shelf. And in most cases, being able to withstand a shelf failure is too much overhead on storage anyway. For example, in his case he would have to configure 1+0

[zfs-discuss] OpenSolaris snv_134 zfs pool hangs after some time with dedup=on

2010-04-28 Thread Jim Horng
Sorry for the double post, but I think this is better suited for the zfs forum. I am running OpenSolaris snv_134 as a file server in a test environment, testing deduplication. I am transferring a large amount of data from our production server using rsync. The data pool is on a separate raidz1-0
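
For reference, dedup in this setup is enabled per dataset and its overall effect shows up at the pool level; the names below are hypothetical:
# zfs set dedup=on tank/data
# zfs get dedup tank/data
# zpool list tank              (the DEDUP column reports the pool-wide dedup ratio)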

Re: [zfs-discuss] OpenSolaris snv_134 zfs pool hangs after some time with dedup=on

2010-04-28 Thread Jim Horng
This is not a performance issue. The rsync hangs hard and one of the child processes cannot be killed (I assume it's the one writing to the ZFS). By "the command gets slower" I am referring to the output of the file system commands (zpool, zfs, df, du, etc.) from a different shell. I left the

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Jim Horng
3 shelves with 2 controllers each, 48 drives per shelf. These are Fibre Channel attached. We would like all 144 drives added to the same large pool.
I would do either 12- or 16-disk raidz3 vdevs and spread the disks across controllers within each vdev. You may also want to leave at least 1 spare
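
A sketch of what the first 12-disk raidz3 vdev plus a spare might look like at creation time, with hypothetical device names (two disks from each of six controllers c1..c6); further vdevs would be attached the same way with zpool add:
# zpool create bigpool raidz3 c1t0d0 c1t1d0 c2t0d0 c2t1d0 c3t0d0 c3t1d0 c4t0d0 c4t1d0 c5t0d0 c5t1d0 c6t0d0 c6t1d0 spare c1t2d0
# zpool add bigpool raidz3 c1t3d0 c1t4d0 c2t3d0 c2t4d0 c3t3d0 c3t4d0 c4t3d0 c4t4d0 c5t3d0 c5t4d0 c6t3d0 c6t4d0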

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-27 Thread Jim Horng
Unclear what you want to do. What's the goal of this exercise? If you want to replace the pool with larger disks and the pool is a mirror or raidz, you just replace one disk at a time and allow the pool to rebuild itself. Once all the disks have been replaced, it will automatically realize the
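
The replace-one-at-a-time growth path looks roughly like this (pool and device names hypothetical); on builds that have the autoexpand property, it should be on so the extra space shows up once the last disk is done:
# zpool set autoexpand=on tank
# zpool replace tank c1t0d0    (after physically swapping in the larger disk; wait for the resilver to finish)
# zpool status tank            (then repeat for each remaining disk)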