[zfs-discuss] Mount ZFS pool on different system

2009-01-03 Thread D. Eckert
Hi, I have a faulty hard drive in my notebook, but I have all my data stored on an external USB HDD formatted with ZFS. Now I want to mount that external ZFS HDD on a different notebook that also runs Solaris and supports ZFS. I am unable to do so. If I ran zpool create, it would wipe out my

[zfs-discuss] SOLVED: Mount ZFS pool on different system

2009-01-03 Thread D. Eckert
RTFM seems to solve many problems ;-) : # zpool import poolname
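(For reference, a minimal sketch of the import workflow behind that one-liner; the pool name "poolname" is just a placeholder, and -f is only needed if the pool was never exported from the old host:)

  # zpool import              (lists pools visible on attached devices but not yet imported)
  # zpool import poolname     (imports the named pool; the numeric pool ID shown above works too)
  # zpool import -f poolname  (forces the import if the pool was last used on another host and never exported)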

[zfs-discuss] ZFS disk read failure when pools are simultaneously scrubbed, x86 snv_104

2009-01-03 Thread Jake Carroll
Hi. Running snv_104 x86 against some very generic hardware as a testbed for some fun projects and as a home fileserver. Rough specifications of the host: * Intel Q6600 * 6GB DDR2 * Multiple 250GB, 500GB SATA-connected HDDs of mixed vendors * Gigabyte GA-DQ6 series motherboard * etc. The
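(The preview is cut off, but the scenario is scrubbing several pools at once. A sketch of how that is typically started and monitored, with "tank1" and "tank2" standing in for whatever the pools are actually called:)

  # zpool scrub tank1    (returns immediately; the scrub runs in the background)
  # zpool scrub tank2    (second scrub now runs concurrently with the first)
  # zpool status -v      (shows scrub progress and any read/checksum errors per device)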

Re: [zfs-discuss] ZFS disk read failure when pools are simultaneously scrubbed, x86 snv_104

2009-01-03 Thread Tomas Ögren
On 03 January, 2009 - Jake Carroll sent me these 5.9K bytes: Hi. Running snv_104 x86 against some very generic hardware as a testbed for some fun projects and as a home fileserver. Rough specifications of the host: * Intel Q6600 * 6GB DDR2 * Multiple 250GB, 500GB SATA-connected HDDs

Re: [zfs-discuss] ZFS poor performance on Areca 1231ML

2009-01-03 Thread Roch Bourbonnais
On 20 Dec 08 at 22:34, Dmitry Razguliaev wrote: Hi, I ran into a similar problem to Ross's, but still have not found a solution. I have a raidz of 9 SATA disks connected to the internal and 2 external SATA controllers. Bonnie++ gives me the following results: nexenta,8G,
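(The bonnie++ output above is truncated. As a point of reference, a typical invocation for this kind of test looks roughly like the line below; the mount point and size are assumptions, not Dmitry's actual parameters:)

  # bonnie++ -d /tank/bench -s 8g -u root    (-d test directory on the pool, -s file size, -u user to run as)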

Re: [zfs-discuss] Mount ZFS pool on different system

2009-01-03 Thread Mattias Pantzare
Now I want to mount that external ZFS HDD on a different notebook that also runs Solaris and supports ZFS. I am unable to do so. If I ran zpool create, it would wipe out my external HDD, which I of course want to avoid. So how can I mount a ZFS filesystem on a different machine
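(Mattias' reply is truncated above; for context, the usual sequence for moving a pool between hosts, sketched here and not necessarily his exact wording, is to export on the old machine and import on the new one:)

  # zpool export poolname    (on the old host, if it is still bootable)
  # zpool import poolname    (on the new host; add -f if the export step was impossible)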

Re: [zfs-discuss] ZFS disk read failure when pools are simultaneously scrubbed, x86 snv_104

2009-01-03 Thread Bob Friesenhahn
On Sat, 3 Jan 2009, Jake Carroll wrote: 1. Am I just experiencing some form of crappy consumer-grade controller I/O limitation, or an issue of the controllers on this consumer-grade kit not being up to the task of handling multiple scrubs occurring on different filesystems at any given

Re: [zfs-discuss] Error 16: Inconsistent filesystem structure after a change in the system

2009-01-03 Thread Rafal Pratnicki
I recovered the system and created the opensolaris-12 BE. The system was working fine; I had the GRUB menu, and it was fully recovered. At this stage I decided to create a new BE, but leave the opensolaris-12 BE as the active BE and manually boot into the opensolaris-13 BE. So the situation looked like
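(For readers unfamiliar with the BE juggling described here, a rough sketch of the beadm commands involved; the BE names match the ones Rafal mentions, the rest is an assumption:)

  # beadm create opensolaris-13    (clones the active BE into a new boot environment)
  # beadm list                     (opensolaris-12 stays the active BE unless explicitly activated)

The new BE is then booted by picking its entry from the GRUB menu by hand, rather than running beadm activate.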

Re: [zfs-discuss] Error 16: Inconsistent filesystem structure after a change in the system

2009-01-03 Thread Jan Spitalnik
Hey Rafal, this sounds like missing gang-block support in GRUB. Check out the putback log for snv_106 (afaik); there's a bug where GRUB fails like this. Cheers, Spity On 3.1.2009, at 21:11, Rafal Pratnicki wrote: I recovered the system and created the opensolaris-12 BE. The system was working

Re: [zfs-discuss] ZFS vs HardWare raid - data integrity?

2009-01-03 Thread A Darren Dunham
On Wed, Dec 31, 2008 at 01:53:03PM -0500, Miles Nordin wrote: The thing I don't like about the checksums is that they trigger for things other than bad disks, such as if your machine loses power during a resilver, or other corner cases and bugs. I think the NetApp block-level RAID-layer

Re: [zfs-discuss] What will happen when writing a block of 8k if the recordsize is 128k? Will 128k be written instead of 8k?

2009-01-03 Thread Robert Milkowski
Hello qihua, Saturday, December 27, 2008, 7:04:06 AM, you wrote: After we changed the recordsize to 8k, we first used dd to move the data files around. We could see the time to recover an archive log drop from 40 minutes to 4 minutes. But when using iostat to check, the read I/O is about 8K
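(A sketch of the recordsize change being discussed; "pool/oradata" is a made-up dataset name. The key detail is that recordsize only affects blocks written after the change, which is why the data files had to be rewritten with dd:)

  # zfs get recordsize pool/oradata      (shows the current value; 128K is the default)
  # zfs set recordsize=8k pool/oradata   (only blocks written after this point use 8k)

Existing files keep their old block size until they are rewritten or copied.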