Hi Ray,
MPxIO is on by default for x86 systems that run the Solaris 10 9/10
release.
On my Solaris 10 9/10 SPARC system, I see this:
# stmsboot -L
stmsboot: MPxIO is not enabled
stmsboot: MPxIO disabled
You can use the stmsboot CLI to disable multipathing. You are prompted
to reboot the
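For example, a sketch:
# stmsboot -d
(then reboot when prompted)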
Hi Chris,
Yes, this is a known problem and a CR is filed.
I haven't tried these in a while, but consider one of the
workarounds below.
#1 is the most drastic, so make sure you've got the right device name:
no sanity checking is done by the dd command.
Other experts can comment on a
Hi Ian,
You are correct.
Previous Solaris releases displayed older POSIX ACL info on this
directory. It was changed to the new ACL style with the integration of
this CR:
6792884 Vista clients cannot access .zfs
Thanks,
Cindy
On 02/13/11 19:30, Ian Collins wrote:
While scanning filesystems
wrote:
On 1/31/2011 3:14 PM, Cindy Swearingen wrote:
Hi Mike,
Yes, this is looking much better.
Some combination of removing corrupted files indicated in the zpool
status -v output, running zpool scrub and then zpool clear should
resolve the corruption, but it depends on how bad the corruption
Hi Krunal,
It looks to me like FMA thinks that you removed the disk, so you'll need
to confirm whether the cable dropped or something else happened.
I agree that we need to get email updates for failing devices.
See if fmdump generated an error report using the commands below.
Thanks,
Cindy
# fmdump
On Tue, Feb 1, 2011 at 1:29 PM, Cindy Swearingen
cindy.swearin...@oracle.com wrote:
I agree that we need to get email updates for failing devices.
Definitely!
See if fmdump generated an error report using the commands below.
Unfortunately not, see below:
movax@megatron:/root# fmdump
TIME
Hi Mike,
Yes, this is looking much better.
Some combination of removing corrupted files indicated in the zpool
status -v output, running zpool scrub and then zpool clear should
resolve the corruption, but it depends on how bad the corruption is.
First, I would try the least destructive method:
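For example, a sketch, assuming a pool named tank and the file names
reported by zpool status -v:
# rm /tank/path/to/corrupted-file
# zpool scrub tank
# zpool clear tank
# zpool status -v tank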
Hi Alex,
Disks that are part of the root pool must contain a valid
slice 0 (this is a boot restriction) and the disk names that you
present to ZFS for the root pool must also specify the slice
identifier (s0). For example, instead of this syntax:
# zpool attach -f rpool c0t0d0 c0t2d0
try this
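With the slice identifiers, a sketch:
# zpool attach -f rpool c0t0d0s0 c0t2d0s0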
Hi Phillip,
We don't yet support removing a mirrored pair, but you could reconfigure
this pool by the process below.
First, you would want to make sure that all the hardware is fully
operational by reviewing fmdump -eV, iostat -En, /var/adm/messages
and having a good backup. Yes, I'm paranoid
You're mixing a mkdir operation with a zfs create operation and only
the zfs create operation creates a file system that is mounted, which
is why df -h doesn't show dir2 as mounted. dir2 is just a directory,
not a file system.
ZFS does two things with a default zfs create operation:
o creates
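A minimal sketch of the difference, assuming a pool named tank:
# zfs create tank/dir1
(a file system, created and mounted at /tank/dir1)
# mkdir /tank/dir1/dir2
(a plain directory; df -h will not list it)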
My first inclination is 128k is too small for a pool component.
You might try something more reasonable, like 1G, if you're
just testing.
Thanks,
Cindy
# zfs create -V 2g sanpool/vol1
# stmfadm create-lu /dev/zvol/rdsk/sanpool/vol1
Logical unit created: 600144F0C49A05004CC84BE20001
On
Original Message
Subject: [osol-discuss] Its Official...!! GA release on 14th Jan 2011
Date: Wed, 12 Jan 2011 05:34:18 PST
From: darshin dars...@kqinfotech.com
To: opensolaris-disc...@opensolaris.org
Hi All,
Happy New Year !
First of all, a big thanks to you all for the
Hi David,
You might try importing this pool on an Oracle Solaris Express system,
where a pool recovery feature (it rolls back to a previous transaction)
might be able to bring this pool back, or, if that fails, you could
import this pool by using the read-only option to at least
Hi Karl,
I would keep your mirrored root pool separate on the smaller disks as
you have setup now.
You can move your root pool; it's easy enough. You can even replace
or attach larger disks to the root pool and detach the smaller disks.
You can't currently boot from snapshots, you must boot
Hi Brandon,
I'm not the right person to evaluate your zstreamdump output, but I
can't reproduce this error on my b152 system, which is as close as I
could get to b151a. See below.
Are the rpool and radar pool versions reasonably equivalent?
In your follow-up, I think you are saying that
On 01/05/11 12:21, Brandon High wrote:
On Wed, Jan 5, 2011 at 9:44 AM, Cindy Swearingen
cindy.swearin...@oracle.com wrote:
In your follow-up, I think you are saying that rp...@copy is a recursive
snapshot and you are able to receive the individual rpool snapshots. You
just can't receive
output for clues.
Thanks,
Cindy
On 01/05/11 14:01, Brandon High wrote:
On Wed, Jan 5, 2011 at 11:57 AM, Cindy Swearingen
cindy.swearin...@oracle.com wrote:
In the meantime, you could rule out a problem with zfs send/recv on your
system if you could create another non-BE dataset with descendent
Hey ZFSers,
This is a moderated discussion list and if you are not a member of this
list, your postings are not posted until a moderator approves them.
The list moderators will be on vacation and posting approvals will be
delayed. If you are not a member of this list and you want to post to
Hi Per,
Disk devices are used to create ZFS storage pools. Then, you create file
systems that can access all the available disk space in the storage
pool. ZFS file systems are not constrained to any physical disk in the
storage pool.
Consider that you will need to backup your data regardless
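A minimal sketch, with hypothetical device and file system names:
# zpool create tank mirror c0t0d0 c0t1d0
# zfs create tank/home
# zfs create tank/home/docs
Both file systems can draw on all of the available space in tank.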
You should take a look at the ZFS best practices guide for RAIDZ and
mirrored configuration recommendations:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
It's easy for me to say because I don't have to buy storage, but
mirrored storage pools are currently more flexible,
Hi Lanky,
Other follow-up posters have given you good advice.
I don't see where you are getting the idea that you can combine
pools with pools. You can't do this and I don't see that the
southbrain tutorial illustrates this either. All of his examples
for creating redundant pools are
Hi Chris,
I have attempted to document the steps to restrict LUN access, here:
http://docs.sun.com/app/docs/doc/821-1459/gkgnr?l=en&a=view
Please see if this info helps. If it doesn't, let me know the errors.
Thanks,
Cindy
On 12/13/10 16:30, Chris Mosetick wrote:
I have found this post from
Hi Don,
I'm no snapshot expert but I think you will have to remove the previous
receiving side snapshots, at least.
I created a file system hierarchy that includes a lower-level snapshot,
created a recursive snapshot of that hierarchy and sent it over to
a backup pool. Then, did the same steps
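The rough shape of those steps, with hypothetical pool names:
# zfs snapshot -r tank@backup1
# zfs send -R tank@backup1 | zfs receive -d bpool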
Karel,
You can't create snapshots in a read-only pool.
You will have to use something else besides zfs snapshots, such as
tar or cpio.
You could have used zfs send if a snapshot already existed but you
can't write anything to the pool when it is in read-only mode.
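For example, a sketch with tar and hypothetical paths:
# cd /ropool/fs1
# tar cvf /backup/fs1.tar .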
Thanks,
Cindy
On 11/25/10
Hi Karel,
Try /usr/bin/find instead of /usr/gnu/bin/find:
# which find
/usr/gnu/bin/find
# zfs snapshot rpool/cin...@snap1
# cd /rpool/cindys/.zfs
# /usr/bin/find . -type f
./snapshot/snap1/file.1
./snapshot/snap1/file.2
Thanks,
Cindy
On 11/25/10 15:22, Karel Gardas wrote:
Hello,
after
On 23/11/2010 21:01, StorageConcepts wrote:
r...@solaris11:~# zfs list mypool/secret_received
cannot open 'mypool/secret_received': dataset does not exist
r...@solaris11:~# zfs send mypool/plaint...@test | zfs receive -o encryption=on mypool/secret_received
cannot receive: cannot
Hi Markus,
Jeff Bonwick integrated this feature so I'll let him describe it.
In a nutshell:
If you create a RAIDZ pool in OS 11 Express or if you are running at
least build 129, some of the pool metadata is mirrored automatically.
This is a performance feature that should increase read I/O
need to upgrade the pool version to use this feature. In this case,
newly written metadata would be mirrored.
Thanks,
Cindy
On 11/18/10 10:15, Cindy Swearingen wrote:
Hi Markus,
Jeff Bonwick integrated this feature so I'll let him describe it.
In a nutshell:
If you create a RAIDZ pool in OS 11
Hi Ian,
The pool and file system version information is available in
the ZFS Administration Guide, here:
http://docs.sun.com/app/docs/doc/821-1448/appendixa-1?l=en&a=view
The OpenSolaris version pages are up-to-date now also.
Thanks,
Cindy
On 11/15/10 16:42, Ian Collins wrote:
Is there an
Hi Rainer,
I haven't seen this in a while but I wonder if you just need to set the
bootfs property on your new root pool and/or reapplying the bootblocks.
Can you import this pool by booting from a LiveCD and review the
bootfs property value? I would also install the boot blocks on the
rpool2
about this since I'm sure
others will also run into this problem at some point if they have a
mixed Linux/Solaris environment.
-Moazam
On Tue, Nov 2, 2010 at 3:15 PM, Cindy Swearingen
cindy.swearin...@oracle.com wrote:
Hi Moazam,
The initial diagnosis is that the LSI controller is reporting bogus
Hi Moazam,
The initial diagnosis is that the LSI controller is reporting bogus
information. It looks like Roy is using a similar controller.
You might report this problem to LSI, but I will pass this issue
along to the format folks.
Thanks,
Cindy
On 11/02/10 15:26, Moazam Raja wrote:
I'm
Hi SR,
You can create a mirrored storage pool, but you can't mirror
an existing raidz2 pool nor can you convert a raidz2 pool
to a mirrored pool.
You would need to copy the data from the existing pool,
destroy the raidz2 pool, and create a mirrored storage
pool.
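A sketch of that copy, assuming enough spare disks for the new pool
and hypothetical names:
# zpool create newpool mirror c3t0d0 c4t0d0
# zfs snapshot -r oldpool@move
# zfs send -R oldpool@move | zfs receive -d newpool
# zpool destroy oldpool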
Cindy
On 10/28/10 11:19, SR
Hi Bill,
Do you have another equivalent sized disk available?
If so, and assuming this is the root pool, create one large slice 0 (s0)
with an SMI label on the replacement disk. Attach that disk to create a
mirrored root pool. After the replacement disk has resilvered, install
the bootblocks,
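A sketch of those steps, with hypothetical device names:
# zpool attach rpool c0t0d0s0 c0t1d0s0
(wait for the resilver to complete, then, on SPARC:)
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0
(or, on x86:)
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0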
Hi Andy,
What is the setting for the aclinherit property? I think you want
to set this property to passthrough.
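For example, with a hypothetical file system name:
# zfs set aclinherit=passthrough tank/fs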
Thanks,
Cindy
On 10/26/10 07:25, Andy Graybeal wrote:
Yes, if you set up the directory ACLs for inheritance (include :fd:
when you specify the ACEs), the ACLs on copied files will
Hi Frank,
You can't simulate the aclmode-less world in the upcoming release
by setting aclmode to discard in b134.
The reason you see your aclmode discarded is that aclmode applies
to both chmod operations and file/dir create operations. It
Hi Harry,
Generally, you need to use zpool clear to clear the pool errors, but I
can't reproduce the removed files reappearing in zpool status on my own
system when I corrupt data so I'm not sure this will help. Some other
larger problem is going on here...
Did any hardware changes lead up to
Krunal,
The file system size changes are probably caused by these
snapshots being created and deleted automatically.
The recurring messages below are driver related and probably
have nothing to do with the snapshots.
Thanks,
Cindy
On 10/20/10 10:50, Krunal Desai wrote:
Argh, yes, lots of
Hi Sridhar,
The answer to the first question is definitely no:
No way exists to change a pool name without exporting and importing
the pool. I thought we had an open CR that covered renaming pools but I
can't find it.
The underlying pool devices contain pool information and no easy way
exists
On 10/19/10 14:33, Tuomas Leikola wrote:
On Mon, Oct 18, 2010 at 8:18 PM, Simon Breden sbre...@gmail.com wrote:
So are we all agreed then, that a vdev failure will cause pool loss ?
--
unless you use copies=2 or 3, in which case your data is still safe
for those datasets that have this
Derek,
The c0t5000C500268CFA6Bd0 disk has some kind of label problem.
You might compare the label of this disk to the other disks.
I agree with Richard that using whole disks (use the d0 device)
is best.
You could also relabel it manually by using format -> fdisk to
delete the current
Hi James,
I'm looking into this and will get back to you shortly.
Thanks,
Cindy
On 10/13/10 00:14, James Patterson wrote:
I'm testing the new online zpool expansion feature of Solaris 10 9/10. My
zpool was created using the entire disk (ie. no slice number was used). When I
resize my LUN
Hi Hans-Christian,
Can you provide the commands you used to create this pool?
Are the pool devices actually files? If so, I don't see how you
have a pool device that starts without a leading slash. I tried
to create one and it failed. See the example below.
By default, zpool import looks in
Hi Christian,
Yes, with non-standard disks you will need to provide the path to zpool
import.
I don't think the force import of a degraded pool would cause the pool
to be faulted. In general, the I/O error is caused when ZFS can't access
the underlying devices. In this case, your non-standard
I would not discount the performance issue...
Depending on your workload, you might find that performance increases
with ZFS on your hardware RAID in JBOD mode.
Cindy
On 10/07/10 06:26, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
Hi Sridhar,
Most of the answers to your questions are yes.
If I have a mirrored pool mypool, like this:
# zpool status mypool
pool: mypool
state: ONLINE
scan: none requested
config:
NAME     STATE   READ WRITE CKSUM
mypool   ONLINE     0     0     0
Budy,
Your previous zpool status output shows a non-redundant pool with data
corruption.
You should use the fmdump -eV command to find out the underlying cause
of this corruption.
You can review the hardware-level monitoring tools, here:
Hi Sridhar,
After a zpool split operation, you can access the newly created
pool by using the zpool import command.
If the LUNs from mypool are available on host1 and host2, you
should be able to import mypool_snap from host2. After mypool_snap
is imported, it will be available for backups, but
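The rough sequence, assuming host2 can see the LUNs:
host1# zpool split mypool mypool_snap
host2# zpool import mypool_snap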
Hi Brian,
You could manually detach the spare, like this:
# zpool detach pool2 c10t22d0
Sometimes, you might need to clear the pool error but I don't
see any residual errors in this output:
# zpool clear pool2
I would use fmdump -eV to see what's going with c10t11d0.
Thanks,
Cindy
On
Hi--
Yes, you would use the zpool attach command to convert a
non-redundant configuration into a mirrored pool configuration.
http://docs.sun.com/app/docs/doc/819-5461/gcfhe?l=en&a=view
See:
Example 4-6 Converting a Nonredundant ZFS Storage Pool to a Mirrored ZFS
Storage Pool
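In short, a sketch with hypothetical devices:
# zpool attach tank c1t0d0 c2t0d0
This attaches c2t0d0 as a mirror of the existing c1t0d0.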
If you have
errors
Actually, this zpool consists of two FC raids and I think I created it
simply by adding these two devs to the pool.
Does this disqualify my zpool for upgrading?
Thanks,
budy
Am 04.10.10 16:48, schrieb Cindy Swearingen:
Hi--
Yes, you would use the zpool attach command to convert a
non
for upgrading?
Thanks,
budy
Am 04.10.10 16:48, schrieb Cindy Swearingen:
Hi--
Yes, you would use the zpool attach command to convert a
non-redundant configuration into a mirrored pool configuration.
http://docs.sun.com/app/docs/doc/819-5461/gcfhe?l=en&a=view
See:
Example 4-6 Converting
that there. I tried replace/remove.
I guess the spare is actually a mirror of the original disk and the
spare disk and is treated as such.
Thanks again,
Brian
On Oct 4, 2010, at 10:27 AM, Cindy Swearingen wrote:
Hi Brian,
You could manually detach the spare, like this:
# zpool detach pool2 c10t22d0
Hi Simon,
I don't think you will see much difference for these reasons:
1. The CIFS server ignores the aclinherit/aclmode properties.
2. Your aclinherit=passthrough setting overrides the aclmode
property anyway.
3. The only difference is that if you use chmod on these files
to manually change
Hi Ian,
If this is a release prior to b122, you might be running into CR 6860996.
Please see this thread for a possible resolution:
http://opensolaris.org/jive/thread.jspa?messageID=493866#493866
Thanks,
Cindy
On 09/30/10 09:34, Ian Levesque wrote:
Hello,
I have a ZFS filesystem (zpool
Hi Tony,
The current behavior is that you can add a spare to a root pool. If the
spare kicks in automatically, you would need to apply the boot blocks
manually before you could boot from the spared-in disk.
A good alternative is to create a two-way or three-way mirrored root
pool.
We're
Tony,
A brief follow-up is that the issue of applying the boot blocks
automatically to a spare for a root pool is covered by this
existing CR 6668666. See this URL for more details.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6668666
Thanks,
Cindy
On 09/29/10 08:38, Cindy
Hi Ketan,
My flash archive experience is minimal, but..
This error suggests that the disk components of this pool might have some
SVM remnants. Is that possible? I would check with the metastat command,
review /etc/vfstab, or /etc/lu/ICF.* to see if they are referencing
meta devices.
Thanks,
Ketan,
Someone with more flash archive experience than me says that you
can't install a ZFS root flash archive with live upgrade at this
time. Duh, I knew that.
Sorry for the red herring... :-)
Cindy
On 09/28/10 08:30, Cindy Swearingen wrote:
Hi Ketan,
My flash archive experience
Hi Stephan,
Yes, the aclmode property was removed, but we're not sure how
this change is impacting your users.
Can you provide their existing ACL information and we'll take
a look.
Thanks,
Cindy
On 09/24/10 01:41, Stephan Budach wrote:
Hi,
I recently installed oi147 and I noticed that the
On 23/09/2010 11:06 PM, casper@sun.com wrote:
Ok, that doesn't seem to have worked so well ...
I took one of the drives offline, rebooted and it
just hangs at the
splash screen after prompting for which BE to boot
into.
It gets to
hostname: blah
and just sits there.
On 9/22/10 1:40 PM, Peter Taps wrote:
Neil,
Thank you for your help.
However, I don't see anything about l2cache under the Cache devices man pages.
To be clear, there are two different vdev types defined in zfs source code - cache and l2cache.
I am familiar with cache devices. I am curious
Hi--
It might help to review the disk component terminology description:
c#t#d#p# = represents the fdisk partition on x86 systems, where
you can have up to 4 fdisk partitions, such as one for the Solaris
OS or a Windows OS. An fdisk partition is the larger container of the
disk or disk
Hi Craig,
D'oh. I kept wondering where those p0 examples were coming from.
Don't use the p* devices for your storage pools. They represent
the larger fdisk partition.
Use the d* devices instead, like this example below:
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
Craig,
I'm sure the other home file server users will comment on your gear
and any possible benefit of a L2ARC or separate log device...
Use the default checksum, which is fletcher4; I fixed the tuning guide
reference; skip dedup for now. Keep things as simple as possible.
Thanks,
Cindy
On
Hi Marion,
I'm not the right person to analyze your panic stack, but a quick
search says the page_sub: bad arg(s): pp panic string might be
associated with a bad CPU or a page locking problem.
I would recommend running CPU/memory diagnostics on this system.
Thanks,
Cindy
On 09/02/10 20:31,
This is the right forum, fire away...
Feel free to review ZFS information in advance:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/docs
ZFS Administration Guide (Solaris 10):
http://docs.sun.com/app/docs/doc/819-5461
ZFS Best Practices Guide:
Dominik,
You overwrote your data when you recreated a pool with the same
name and the same disks with zpool create.
If I try to recreate a pool that already exists, at least exported,
I will see a message similar to the following:
# zpool create tank c3t3d0
invalid vdev specification
use '-f'
Yes, I did try to import the pool. However, the
response of the command was no pools available to
import.
I'm not sure what happened to your pool, but I think it is possible
that the pool information on these disks was removed accidentally.
I'm not sure what the diskutil command does but if
Hi Rainer,
I'm no device expert, but we see this problem when firmware updates or
other device/controller changes alter the device ID associated with
the devices in the pool.
In general, ZFS can handle controller/device changes if the driver
generates or fabricates device IDs. You can view
It's hard to tell what caused the smart predictive failure message,
like a temp fluctuation. If ZFS noticed that a disk wasn't available
yet, then I would expect a message to that effect.
In any case, I think I would have a replacement disk available.
The important thing is that you continue to
Hi Mark,
I would recheck with fmdump to see if you have any persistent errors
on the second disk.
The fmdump command will display faults and fmdump -eV will display
errors (persistent faults that have turned into errors based on some
criteria).
If fmdump -eV doesn't show any activity for
Hi Phillip,
What's the error message?
How did you share the ZFS file system?
# zfs create tank/cindys
# zfs set sharenfs=on tank/cindys
# share
- /tank/cindys rw
# cp /usr/dict/words /tank/cindys/file.1
# cd /tank/cindys
# chmod 666 file.1
# ls -l file.1
-rw-rw-rw- 1 root
but since this is a ZFS filesystem being shared over NFS. Who knows!!!
Phillip
-Original Message-
From: Cindy Swearingen [mailto:cindy.swearin...@oracle.com]
Sent: Friday, August 13, 2010 12:59 PM
To: Phillip Bruce (Mindsource)
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] NFS
Hi Giovanni,
The spare behavior and the autoreplace property behavior are separate
but they should work pretty well in recent builds.
You should not need to perform a zpool replace operation if the
autoreplace property is set. If autoreplace is set and a replacement
disk is inserted into the
You would look for the device name that might be a problem, like this:
# fmdump -eV | grep c2t4d0
vdev_path = /dev/dsk/c2t4d0s0
vdev_path = /dev/dsk/c2t4d0s0
vdev_path = /dev/dsk/c2t4d0s0
vdev_path = /dev/dsk/c2t4d0s0
Then, review the file more closely for the details of these errors,
such as
Yes, as long as the pools are on the same system, you can share
a spare between two pools, but we are not recommending sharing
spares at this time.
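For example, with a hypothetical device:
# zpool add pool1 spare c5t0d0
# zpool add pool2 spare c5t0d0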
We'll keep you posted.
Thanks,
Cindy
On 08/10/10 07:39, Tony MacDoodle wrote:
I have 2 ZFS pools all using the same drive type and size. The
Hi Brian,
Is the pool exported before the update/upgrade of PowerPath software?
This recommended practice might help the resulting devices to be more
coherent.
If the format utility sees the devices the same way as ZFS, then I don't
see how ZFS can rename the devices.
If the format utility
The ZFS best practices is here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
Run zpool scrub on a regular basis to identify data integrity problems.
If you have consumer-quality drives, consider a weekly scrubbing
schedule. If you have datacenter-quality drives,
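A sketch of a weekly scrub from root's crontab, assuming a pool named
tank:
0 2 * * 0 /usr/sbin/zpool scrub tank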
Hi Peter,
I don't think we have any property that determines who created
the file system.
Would this work instead:
# zfs list -r mypool
NAME           USED  AVAIL  REFER  MOUNTPOINT
mypool         172K   134G    33K  /mypool
mypool/cifs1    31K   134G    31K  /mypool/cifs1
mypool/cifs2    31K
Because this is a non-redundant root pool, you should still
check fmdump -eV to make sure the corrupted files aren't
due to some ongoing disk problems.
cs
On 08/04/10 13:45, valrh...@gmail.com wrote:
Oooh... Good call!
I scrubbed the pool twice, then it showed a real filename from an old
In general, ZFS can detect device changes but we recommend
exporting the pool before you move hardware around.
You might try exporting and importing this pool to see if
ZFS recognizes this device again.
Make sure you have a good backup of this data before you
export it because it's hard to tell
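For example, with a hypothetical pool name:
# zpool export tank
# zpool import tank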
Which Solaris release is this and are you using /usr/bin/ls and
/usr/bin/chmod?
Thanks,
Cindy
On 07/29/10 02:44, . . wrote:
Hi ,
while playing with ZFS acls I have noticed chmod strange behavior, it
duplicates some acls , is it a bug or a feature :) ?
For example scenario:
#ls -dv ./2
individually, like this:
# zfs upgrade space/direct
# zfs upgrade space/dcc
Thanks,
Cindy
On 07/29/10 09:48, Cindy Swearingen wrote:
Hi Gary,
This should just work without having to do anything.
Looks like a bug but I haven't seen this problem before.
Anything unusual about the mount points
/synchronize:allow
On 07/29/10 11:56, Cindy Swearingen wrote:
Which Solaris release is this and are you using /usr/bin/ls and
/usr/bin/chmod?
Thanks,
Cindy
On 07/29/10 02:44, . . wrote:
Hi ,
while playing with ZFS acls I have noticed chmod strange behavior, it
duplicates some acls
Hi Gary,
If your root pool is getting full, you can replace the root pool
disk with a larger disk. My recommendation is to attach the replacement
disk, let the replacement disk resilver, install the boot blocks, and
then detach the smaller disk. The system will see the expanded space
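On releases with the autoexpand property, a sketch of letting the pool
grow into the larger disk:
# zpool set autoexpand=on rpool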
Hi Mark,
A couple of things are causing this to fail:
1. The user needs permissions to the underlying mount point.
2. The user needs both create and mount permissions to create ZFS datasets.
See the syntax below, which might vary depending on your Solaris
release.
Thanks,
Cindy
# chmod
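A sketch of the usual pair, with hypothetical user and pool names:
# chmod A+user:mike:add_subdirectory:fd:allow /tank
# zfs allow mike create,mount tank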
Mike,
Did you also give the user permissions to the underlying mount point:
# chmod A+user:user-name:add_subdirectory:fd:allow /rpool
If so, please let me see the syntax and error messages.
Thanks,
Cindy
On 07/28/10 12:23, Mike DeMarco wrote:
Thanks, adding mount did allow me to create it
Hi Sol,
What kind of disks?
You should be able to use the fmdump -eV command to identify when the
checksum errors occurred.
Thanks,
Cindy
On 07/28/10 13:41, sol wrote:
Hi
Having just done a scrub of a mirror I've lost a file and I'm curious how this
can happen in a mirror. Doesn't it
Hi Ketan,
The supported LU + zone configuration migration scenarios
are described here:
http://docs.sun.com/app/docs/doc/819-5461/gihfj?l=en&a=view
I think the problem is that /zones is a mountpoint.
You might have better results if /zones was just a directory.
See the examples in this
A small follow-up is that creating pools from components of other pools
can cause system deadlocks.
This approach is not recommended.
Thanks,
Cindy
On 07/26/10 12:19, Saxon, Will wrote:
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
You might look at the zpool split feature, where you can
split off the disks from a mirrored pool to create an identical
pool, described here:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/docs
ZFS Admin Guide, p. 87
Thanks,
Cindy
On 07/26/10 12:51, Dav Banks wrote:
I wanted to
Hi Ryan,
You are seeing this CR:
http://bugs.opensolaris.org/view_bug.do?bug_id=6916574
zpool add -n displays incorrect structure
This is a display problem only.
Thanks,
Cindy
On 07/22/10 15:54, Ryan Schwartz wrote:
I've got a system running s10x_u7wos_08 with only half of the disks
The answer depends on your goals: space, performance, or reliability.
To me, optimal is best performance and reliability so use:
- JBOD
- ZFS mirrored pool of 22x2 + 2 spares
- Mirror the disk pairs across both controllers
Let ZFS protect your data.
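A sketch of that layout, with hypothetical device names and one disk
from each controller per pair:
# zpool create tank \
    mirror c1t0d0 c2t0d0 \
    mirror c1t1d0 c2t1d0 \
    spare c1t11d0 c2t11d0
(continue the mirror lines through all 22 pairs before the spare line)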
Cindy
On 07/21/10 15:10, John Andrunas
Hi--
I don't know what's up with iostat -En but I think I remember a problem
where iostat does not correctly report drives running in legacy IDE mode.
You might use the format utility to identify these devices.
Thanks,
Cindy
On 07/18/10 14:15, Alxen4 wrote:
This is a situation:
I've got an
-Original Message-
From: Cindy Swearingen [mailto:cindy.swearin...@oracle.com]
Sent: Monday, July 19, 2010 9:16 AM
To: Yuri Homchuk
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Help identify failed drive
Hi--
I don't know what's up with iostat -En but I think I remember
that all 7 drives are Seagate Barracudas, which is
definitely not correct.
This is the reason for my original question.
I need to know if c2t3d0 is a Seagate or a Western Digital.
Thanks,
-Original Message-
From: Cindy Swearingen [mailto:cindy.swearin...@oracle.com]
Sent: Monday, July 19, 2010 9:48
-Original Message-
From: Cindy Swearingen [mailto:cindy.swearin...@oracle.com]
Sent: Monday, July 19, 2010 10:28 AM
To: Yuri Homchuk
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Help identify failed drive
I think you are saying that even though format shows 9 devices (0-8
Hi Ned,
One of the benefits of using a mirrored ZFS configuration is just replacing
each disk with a larger disk, in place, online, and so on...
It's probably easiest to use zfs send -R (recursive) to do a recursive snapshot
of your root pool.
Check out the steps here:
Hi Daniel,
No conversion from a mirrored to RAIDZ configuration is available yet.
Mirrored pools are more flexible and generally provide good performance.
You can easily create a mirrored pool of two disks and then add two
more disks later. You can also replace each disk with larger disks
if