before the zoneadm attach or boot you must create the configuration on the
second host, manually or with the detached config from the first host:
zonecfg -z heczone 'create -a /hecpool/zones/heczone'
zoneadm -z heczone attach (to attach, the requirements must be fulfilled:
pkgs and patches in
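A minimal sketch of the whole move, assuming the zone lives on a ZFS dataset
under /hecpool/zones/heczone and both hosts have compatible packages and
patches (host and snapshot names here are placeholders, not from the
original post):

  (on the first host)
  zoneadm -z heczone halt
  zoneadm -z heczone detach
  zfs snapshot hecpool/zones/heczone@move
  zfs send hecpool/zones/heczone@move | ssh host2 zfs receive hecpool/zones/heczone

  (on the second host)
  zonecfg -z heczone 'create -a /hecpool/zones/heczone'
  zoneadm -z heczone attach
  zoneadm -z heczone boot

The 'create -a' step reads the SUNWdetached.xml left in the zonepath by the
detach, so the configuration does not have to be retyped by hand.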
Hi,
Chris Quenelle wrote:
Thanks, Constantin! That sounds like the right answer for me.
Can I use send and/or snapshot at the pool level? Or do I have
to use it on one filesystem at a time? I couldn't quite figure this
out from the man pages.
the ZFS team is working on a zfs send -r
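Until a recursive send arrives, a common workaround is a recursive snapshot
followed by one send per filesystem; a rough sketch, assuming the pool is
called tank and the target host and pool names are placeholders:

  zfs snapshot -r tank@backup
  for fs in $(zfs list -H -o name -r tank); do
      zfs send "$fs@backup" | ssh otherhost zfs receive -d backuppool
  done

The -d option on the receiving side keeps the original dataset hierarchy
under backuppool instead of flattening everything into one filesystem.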
Joe S writes:
After researching this further, I found that there are some known
performance issues with NFS + ZFS. I tried transferring files via SMB, and
got average write speeds of 25 MB/s.
So I will have my UNIX systems use SMB to write files to my Solaris server.
This seems
Also introduces the Veritas sfop utility, which is the 'simplified'
front-end to VxVM/VxFS.
As imitation is the sincerest form of flattery, this smacks of a
desperate attempt to prove to their customers that Vx can be just as
slick as ZFS.
More details at
Hi,
If I add an entire disk to a new pool by doing zpool create, is this
reversible?
I.e. if there was data on that disk (e.g. it was the sole disk in a zpool in
another system) can I get this back or is zpool create destructive?
Joubert
Because you have to read the entire stripe (which probably spans all the
disks) to verify the checksum.
Then I have a wrong idea of what a stripe is. I always thought it's the
interleave block size.
-mg
partition> p
Current partition table (original):
Total disk cylinders available: 49771 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  7       home    wm     3814 - 49769      63.11GB    (45956/0/0) 132353280
--- If I run the command zpool create pool
On Thu, 2007-06-21 at 06:16 -0700, satish s nandihalli wrote:
Part      Tag    Flag     Cylinders         Size            Blocks
  7       home    wm     3814 - 49769      63.11GB    (45956/0/0) 132353280
--- If I run the command zpool create with a pool name and the 7th slice
(shown above), which is mounted
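For reference, creating a pool on a single slice looks roughly like this; the
pool and device names are made up, and the slice has to be unmounted first or
zpool create will refuse to touch it:

  umount /dev/dsk/c0t0d0s7
  zpool create homepool c0t0d0s7
  zpool status homepool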
Hi,
I was playing around with NexentaCP and its zfs boot facility. I tried
to figure out what commands to run, so I ran zpool history like
this
# zpool history
2007-06-20.10:19:46 zfs snapshot syspool/[EMAIL PROTECTED]
2007-06-20.10:20:03 zfs clone syspool/[EMAIL PROTECTED] syspool/myrootfs
Hi,
I've got some issues with my 5-disk SATA stack using two controllers. Some of
the ports are acting strangely, so I'd like to play around and change which
ports the disks are connected to. This means that I need to bring down the
pool, swap some connections and then bring the pool back up.
[hourly] marvell88sx error in command 0x2f: status 0x51
ah, it's some kind of SMART or FMA query that
model WDC WD3200JD-00KLB0
firmware 08.05J08
serial number WD-WCAMR2427571
supported features:
48-bit LBA, DMA, SMART, SMART self-test
SATA1 compatible
capacity = 625142448 sectors
drives
On 20 Jun 07, at 04:59, Ian Collins wrote:
I'm not sure why, but when I was testing various configurations with
bonnie++, 3 pairs of mirrors did give about 3x the random read
performance of a 6 disk raidz, but with 4 pairs, the random read
performance dropped by 50%:
3x2
Blockread:
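For context, the two layouts being compared would be built roughly like this
(disk names are placeholders):

  zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0
  zpool create tank raidz  c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

With mirrors, each small random read can be satisfied by either side of any
pair, so random-read throughput tends to scale with the number of top-level
vdevs; a raidz group behaves more like a single spindle for small random
reads.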
Mario Goebbels wrote:
Because you have to read the entire stripe (which probably spans all the
disks) to verify the checksum.
Then I have a wrong idea of what a stripe is. I always thought it's the
interleave block size.
Nope. A stripe generally refers to the logical block as spread across the
disks.
On Jun 21, 2007, at 8:47 AM, Niclas Sodergard wrote:
Hi,
I was playing around with NexentaCP and its zfs boot facility. I tried
to figure out what commands to run, so I ran zpool history like
this
# zpool history
2007-06-20.10:19:46 zfs snapshot syspool/[EMAIL PROTECTED]
Joubert Nel wrote:
Hi,
If I add an entire disk to a new pool by doing zpool create, is this
reversible?
I.e. if there was data on that disk (e.g. it was the sole disk in a zpool in
another system) can I get this back or is zpool create destructive?
Short answer: you're
Sorry I can't volunteer to test your script.
I want to do the steps by hand to make sure I understand them.
If I have to do it all again, I'll get in touch.
Thanks for the advice!
--chris
Constantin Gonzalez wrote:
Hi,
Chris Quenelle wrote:
Thanks, Constantin! That sounds like the right
good work!
Run cfgadm to see what ports are recognized as hot-swappable. Run
cfgadm -c unconfigure <portname> and then make sure it's logically
disconnected with cfgadm, then pull the disk and put it in another
port. Then run cfgadm -c configure <newport> and it'll be ready to be
imported again.
Will
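Put together, the whole reshuffle might look something like this; the pool
name and the attachment-point names (sata0/1, sata1/3) are illustrative, the
real ones come from the cfgadm listing:

  zpool export tank              (bring the pool down cleanly first)
  cfgadm                         (list attachment points and their states)
  cfgadm -c unconfigure sata0/1
  (physically move the disk to the other port)
  cfgadm -c configure sata1/3
  zpool import tank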
On Thu, Jun 21, 2007 at 11:03:39AM -0700, Joubert Nel wrote:
When I ran zpool create, the pool got created without a warning.
zpool(1M) will disallow creating a pool on the disk if it contains data in
active use (mounted fs, zfs pool, dump device, swap, etc.). It will warn
if it contains a recognized
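In other words, if zpool create recognizes existing data on the device it
either refuses or asks to be forced; the disk name below is a placeholder:

  zpool create newpool c2t0d0       (warns or refuses if existing data is detected)
  zpool create -f newpool c2t0d0    (-f overrides the check and overwrites the disk)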
Roch Bourbonnais wrote:
On 20 Jun 07, at 04:59, Ian Collins wrote:
I'm not sure why, but when I was testing various configurations with
bonnie++, 3 pairs of mirrors did give about 3x the random read
performance of a 6 disk raidz, but with 4 pairs, the random read
performance dropped by
Quick question,
Are there any tunables, or is there any way to specify devices in a pool to use
for the ZIL specifically? I've been thinking through architectures to mitigate
performance problems on SAN and various other storage technologies where
disabling ZIL or cache flushes has been
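Separate intent-log (slog) devices were being added to ZFS around this time;
on a build that supports them, dedicating fast devices to the ZIL looks
roughly like this (pool and device names are placeholders):

  zpool add tank log c3t0d0                  (single dedicated log device)
  zpool add tank log mirror c3t0d0 c3t1d0    (or a mirrored pair, for safety)
  zpool status tank                          (log vdevs show up under 'logs')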
Joubert Nel wrote:
If the device was actually in use on another system, I
would expect that libdiskmgmt would have warned you about
this when you ran zpool create.
AFAIK, libdiskmgmt is not multi-node aware. It does know about local
uses of the disk. Remote uses of the disk, especially those