Hi,
I have a faulty hard drive on my notebook, but I have all my data stored on an
external USB HDD with ZFS.
Now I want to mount that external ZFS HDD on a different notebook running
Solaris and supporting ZFS as well.
I am unable to do so. If I ran zpool create, it would wipe out my external
HDD, which I of course want to avoid. So how can I mount a ZFS filesystem on a
different machine?
RTFM seems to solve many problems ;-)
# zpool import poolname
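In full, the import might look something like this (the pool name `poolname` and the `-f`/`-d` usage below are illustrative, not from the original post):

```shell
# With the USB disk attached, list pools available for import
zpool import

# Import the pool by name; -f forces the import if the pool was
# never exported on the old (dead) notebook
zpool import -f poolname

# If the disk's device nodes live in a non-default location,
# tell zpool where to search for them
zpool import -d /dev/dsk poolname
```

After a successful import, the pool's filesystems mount automatically under their configured mountpoints.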
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Hi.
Running snv_104 x86 against some very generic hardware as a testbed for some
fun projects and as a home fileserver. Rough specifications of the host:
* Intel Q6600
* 6GB DDR2
* Multiple 250GB and 500GB SATA-connected HDDs of mixed vendors
* Gigabyte GA-DQ6 series motherboard
* etc.
On 20 Dec 08 at 22:34, Dmitry Razguliaev wrote:
Hi, I ran into a similar problem to Ross's, but still have not
found a solution. I have a raidz made of 9 SATA disks connected to
the internal and 2 external SATA controllers. Bonnie++ gives me the
following results:
nexenta,8G,
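For reference, a bonnie++ run that produces a CSV result line like the one above might look like this (the target directory and user are assumptions):

```shell
# 8 GB working set: larger than the host's RAM, so the disks,
# not the ARC, are what actually gets measured
bonnie++ -d /tank/bench -s 8g -u nobody
```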
On Sat, 3 Jan 2009, Jake Carroll wrote:
1. Am I just experiencing some form of crappy consumer-grade
controller I/O limitation, or are the controllers on this
consumer-grade kit simply not up to the task of handling multiple
scrubs occurring on different filesystems at any given time?
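One way to narrow this down (the pool name `tank` is hypothetical; these are standard zpool commands) is to watch per-device throughput while a scrub runs:

```shell
# Scrubs are started per pool, not per filesystem
zpool scrub tank

# Check scrub progress and any errors found so far
zpool status tank

# Watch per-vdev bandwidth at 5-second intervals; if every disk on
# one controller tops out at the same low figure, that controller is
# the likely bottleneck
zpool iostat -v tank 5
```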
I recovered the system and created the opensolaris-12 BE. The system was
working fine. I had the GRUB menu; it was fully recovered.
At this stage I decided to create a new BE but leave opensolaris-12 as the
active BE and manually boot into the opensolaris-13 BE.
So the situation looked like
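The BE layout described above can be set up with beadm; a sketch using the BE names from the message:

```shell
# Create the new BE as a clone of the currently running one
beadm create opensolaris-13

# List BEs; the Active column shows N (booted now) / R (on reboot)
beadm list

# opensolaris-12 stays the active BE; opensolaris-13 is then chosen
# manually from the GRUB menu at boot time
```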
Hey Rafal,
this sounds like missing GANG block support in GRUB. Check out the putback
log for snv_106 (afaik); there's a bug where GRUB fails like this.
Cheers,
Spity
On 3.1.2009, at 21:11, Rafal Pratnicki wrote:
I recovered the system and created the opensolaris-12 BE. The system
was working
On Wed, Dec 31, 2008 at 01:53:03PM -0500, Miles Nordin wrote:
The thing I don't like about the checksums is that they trigger for
things other than bad disks, like if your machine loses power during a
resilver, or other corner cases and bugs. I think the Netapp
block-level RAID-layer
Hello qihua,
Saturday, December 27, 2008, 7:04:06 AM, you wrote:
After we changed the recordsize to 8k, we first used dd to move the data files around. We could see the time to recover an archive log drop from 40 mins to 4 mins. But when using iostat to check, the read I/O is about 8K
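For context, `recordsize` only affects blocks written after the change, which is why the files had to be rewritten with dd; a sketch (the dataset name `tank/oradata` is hypothetical):

```shell
# Match the dataset's recordsize to the database block size;
# existing files keep their old block size
zfs set recordsize=8k tank/oradata
zfs get recordsize tank/oradata

# Rewrite a datafile so it picks up the new recordsize
dd if=/tank/oradata/file.dbf of=/tank/oradata/file.dbf.new bs=8k
mv /tank/oradata/file.dbf.new /tank/oradata/file.dbf
```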