Is 'zpool attach' enough for a root pool?
I mean, does it install the GRUB bootblocks on the disk?
On Wed, Jul 2, 2008 at 1:10 PM, Robert Milkowski [EMAIL PROTECTED] wrote:
Hello Tommaso,
Wednesday, July 2, 2008, 1:04:06 PM, you wrote:
the root filesystem of my thumper is a ZFS pool with a single disk
we made a mistake :(
tom
On Wed, Jul 2, 2008 at 5:58 PM, Richard Elling [EMAIL PROTECTED] wrote:
Tommaso Boccali wrote:
Ciao, the root filesystem of my thumper is a ZFS pool with a single disk:
bash-3.2# zpool status rpool
pool: rpool
state: ONLINE
scrub: none requested
config:
As Edna and Robert mentioned, zpool attach will add the mirror.
But note that the X4500 has only two possible boot devices:
c5t0d0 and c5t4d0. This is a BIOS limitation. So you will want
to mirror with c5t4d0 and configure the disks for boot. See the
docs on ZFS boot for details on how to
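To make the advice above concrete, here is a hedged sketch of the attach-and-bootblock steps. The device names come from this thread; the use of slice 0 (`s0`) and an SMI label on the boot disks are assumptions — check your own layout before running anything like this.

```shell
# Attach the second boot device's slice 0 as a mirror of the root pool.
# Note: 'zpool attach' alone does NOT install the boot blocks.
zpool attach rpool c5t0d0s0 c5t4d0s0

# Wait for the resilver to finish before relying on the new half.
zpool status rpool

# Install the GRUB stage1/stage2 boot blocks on the new half of the mirror.
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t4d0s0
```

With both halves carrying boot blocks, the X4500 can boot from either of its two BIOS-visible boot devices.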
On Sun, Jul 6, 2008 at 8:48 AM, Rob Clark [EMAIL PROTECTED] wrote:
I am new to SX:CE (Solaris 11) and ZFS but I think I found a bug.
I have eight 10GB drives.
...
I have 6 remaining 10 GB drives and I desire to raid 3 of them and mirror
them to the other 3 to give me raid security and
I would just swap the physical locations of the drives, so that the
second half of the mirror is in the right location to be bootable.
ZFS won't mind -- it tracks the disks by content, not by pathname.
Note that SATA is not hotplug-happy, so you're probably best off
doing this while the box is powered off.
Peter Tribble wrote:
Because what you've created is a pool containing two
components:
- a 3-drive raidz
- a 3-drive mirror
concatenated together.
OK. Seems odd that ZFS would allow that (would people want that configuration
instead of what I am attempting to do?).
I think that what
On Sun, Jul 6, 2008 at 10:27 AM, Jeff Bonwick [EMAIL PROTECTED] wrote:
I would just swap the physical locations of the drives, so that the
second half of the mirror is in the right location to be bootable.
ZFS won't mind -- it tracks the disks by content, not by pathname.
Note that SATA is
On Sun, Jul 6, 2008 at 10:13 AM, Rob Clark [EMAIL PROTECTED] wrote:
Is there a way to get mirror performance (double speed) with raid integrity
(one drive can fail and you are OK)? I can't imagine nobody would want that
configuration.
That's what mirroring does - you
Can anybody tell me how to measure the raw performance of a new system I'm
putting together? I'd like to know what it's capable of in terms of IOPS and
raw throughput to the disks.
I've seen Richard's raidoptimiser program, but I've only seen results for
random read iops performance, and I'm
I'm no expert in ZFS, but I think I can explain what you've created there:
# zpool create temparray1 mirror c1t2d0 c1t4d0 mirror c1t3d0 c1t5d0 mirror
c1t6d0 c1t8d0
This creates a stripe of three mirror sets (or in old-fashioned terms, you have
a raid-0 stripe made up of three raid-1 sets of two disks each).
On Sun, Jul 6, 2008 at 3:46 PM, Ross [EMAIL PROTECTED] wrote:
For your second one I'm less sure what's going on:
# zpool create temparray raidz c1t2d0 c1t4d0 raidz c1t3d0 c1t5d0 raidz
c1t6d0 c1t8d0
This creates three two-disk raid-z sets and stripes the data across them.
The problem is
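For contrast, here is a hedged sketch of what a single raidz vdev over all six disks would look like (disk names taken from the commands quoted in this thread; whether this is what the poster actually intended is an assumption):

```shell
# The command quoted above creates three separate 2-disk raidz sets and
# stripes across them; each pair can only lose one disk from its own pair.

# A single 6-disk raidz vdev (one parity disk shared across all six) is
# likely the intended "raid security" layout:
zpool create temparray raidz c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t8d0

# Verify the layout: all six disks should appear under one raidz-1 vdev.
zpool status temparray
```

The trade-off: a 6-disk raidz gives five disks' worth of capacity but still only tolerates a single failure, whereas three 2-way mirrors tolerate one failure per pair.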
I have a zpool which has grown organically. I had a 60GB disk, I added a
120GB, then a 500GB; then I got a 750GB, sliced it, and mirrored the other pieces.
The 60 and the 120 are internal PATA drives, the 500 and 750 are Maxtor
OneTouch USB drives.
The original system I created the 60+120+500 pool
Is there a way to do it via software? (attach/remove/add/detach)
If nothing else, it would help me quite a lot to understand the underlying
ZFS mechanism ...
thanks
;)
tom
On Sun, Jul 6, 2008 at 10:27 AM, Jeff Bonwick [EMAIL PROTECTED] wrote:
I would just swap the physical locations of the
I'm doing another scrub after clearing insufficient replicas only to find
that I'm back to the report of insufficient replicas, which basically leads me
to expect this scrub (due to complete in about 5 hours from now) won't have any
benefit either.
-bash-3.2# zpool status local
pool: local
Then I went and bought an Intel PCI Gigabit Ethernet card for 25€ which seems
to have solved the problem.
Is this really the case? If so, that is an important clue to finding out why
virtualized OpenSolaris performance is so poor. I tried every network adapter
in virtualbox and vmware and
Tommaso Boccali wrote:
Is there a way to do it via software? (attach/remove/add/detach)
Skeleton process:
1. detach c1t7d0 from the root mirror
2. replace c5t4d0 with c1t7d0
In the details, you will need to be careful with the partitioning
for the root mirror. You will need to
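The two-step skeleton above might look like the following in practice. This is a sketch only: the device names come from the thread, while the `s2` backup slice, SMI labels, and the `prtvtoc | fmthard` partition copy are assumptions about this particular setup.

```shell
# 1. Detach c1t7d0 from the existing root mirror.
zpool detach rpool c1t7d0s0

# Copy the partition table from the current boot disk onto c1t7d0 so the
# root slice geometry matches (prtvtoc/fmthard work on SMI-labeled disks).
prtvtoc /dev/rdsk/c5t0d0s2 | fmthard -s - /dev/rdsk/c1t7d0s2

# 2. Replace c5t4d0 with the freshly repartitioned c1t7d0 and resilver.
zpool replace rpool c5t4d0s0 c1t7d0s0
zpool status rpool
```

The partition-copy step is the "careful with the partitioning" part: if the slices don't line up, the replace will fail or the disk won't be bootable.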
Ross wrote:
Can anybody tell me how to measure the raw performance of a new system I'm
putting together? I'd like to know what it's capable of in terms of IOPS and
raw throughput to the disks.
I've seen Richard's raidoptimiser program, but I've only seen results for
random read iops
Ross Smith wrote:
Thanks Richard, filebench sounds ideal for testing the abilities of
the server, far better than I expected to find actually.
nfsstat might be tricky however, since the clients are going to be
running XP :). I've got a very basic free benchmark that I'll use to
check
As a first step, 'fmdump -ev' should indicate why it's complaining
about the mirror.
Jeff
On Sun, Jul 06, 2008 at 07:55:22AM -0700, Pete Hartman wrote:
I'm doing another scrub after clearing insufficient replicas only to find
that I'm back to the report of insufficient replicas, which
Hello Ross,
We're trying to accomplish the same goal over here, i.e. serving multiple
VMware images from a NFS server.
Could you tell us what kind of NVRAM device you ended up choosing? We bought
a Micromemory PCI card but can't get a Solaris driver for it...
Thanks
Gilberto
On 7/6/08 9:54 AM,
On Saturday the X4500 system panicked, and rebooted. For some reason the
/export/saba1 UFS partition was corrupt, and needed fsck. This is why
it did not come back online. /export/saba1 is mounted logging,noatime,
so fsck should never (-ish) be needed.
SunOS x4500-01.unix 5.11 snv_70b i86pc
I'm not sure how to interpret the output of fmdump:
-bash-3.2# fmdump -ev
TIME CLASS ENA
Jul 06 23:25:39.3184 ereport.fs.zfs.vdev.bad_label
0x03b3e4e8b1900401
Jul 07 03:32:14.3561 ereport.fs.zfs.checksum
0xdaffb466a7e1
Jul 07 03:32:14.3561
Jorgen Lundman wrote:
On Saturday the X4500 system panicked, and rebooted. For some reason the
/export/saba1 UFS partition was corrupt, and needed fsck. This is why
it did not come back online. /export/saba1 is mounted logging,noatime,
so fsck should never (-ish) be needed.
SunOS
Since the panic stack only ever goes through ufs, you should
log a call with Sun support.
We do have support, but they only speak Japanese, and I'm still quite
poor at it. But I have started the process of having it translated and
passed along to the next person. It is always fun to see what
Jorgen Lundman wrote:
Since the panic stack only ever goes through ufs, you should
log a call with Sun support.
We do have support, but they only speak Japanese, and I'm still quite
poor at it. But I have started the process of having it translated and
passed along to the next person.
I don't know, I'm not a UFS expert (heck, I'm not an expert
on _anything_). Have you investigated putting your paying
customers onto zfs and managing quotas with zfs properties
instead of ufs?
Yep, we spent about 6 weeks during the trial period of the x4500 to try
to find a way for ZFS to be
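The property-based approach suggested above can be sketched as follows. The dataset names are made up for illustration; per-customer datasets with `quota` and `reservation` properties stand in for UFS user quotas.

```shell
# One dataset per customer, instead of UFS quota entries.
zfs create export/saba1/customerA

# Hard cap on the space the customer's dataset can consume.
zfs set quota=10G export/saba1/customerA

# Optionally guarantee a minimum amount of pool space.
zfs set reservation=1G export/saba1/customerA

# Inspect the settings.
zfs get quota,reservation export/saba1/customerA
```

The catch, and likely what the six weeks of trials ran into, is that this only works if each customer can be mapped to their own dataset rather than to a user ID within one shared filesystem.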