Hi Jason,
I think you are asking how to tell ZFS that you want to replace the
failed disk c8t7d0 with the spare, c8t11d0.
I just tried doing this on my Nevada build 124 lab system, simulating a
disk failure and using zpool replace to replace the failed disk with
the spare. The spare is now busy
Hi Rodney,
I've not seen this problem.
Did you install using LiveCD or the automated installer?
Here are some things to try/think about:
1. After a reboot with no swap or dump devices, run this command:
# zfs volinit
If this works, then this command isn't getting run on boot.
Let me know th
ink he needs to back up
his pool, reconfigure and expand the solaris2 partition, and
then reinstall OpenSolaris.
Cindy
On 10/13/09 10:47, Cindy Swearingen wrote:
Except that you can't add a disk or partition to a root pool:
# zpool add rpool c1t1d0s0
cannot add to 'rpool': root pool can not have multiple vdevs or separate
logs
He could try to attach the partition to his existing pool (I'm not sure
how), and this would only create a mirrored root pool, i
Hi--
Unfortunately, you cannot change the partitioning underneath your pool.
I don't see any way of resizing this partition except for backing up
your data, repartitioning the disk, and reinstalling OpenSolaris
2009.06.
Maybe someone else has a better idea...
Cindy
On 10/13/09 06:32, Julio wr
Hua,
The behavior below is described here:
http://docs.sun.com/app/docs/doc/819-5461/setup-1?a=view
The top-level /tank file system cannot be removed, so it is
less flexible than using descendent datasets.
If you want to create snapshot or clone and later promote
the /tank clone, then it is bes
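As a rough sketch of the clone-and-promote sequence with a hypothetical
descendent dataset:
# zfs snapshot tank/data@today
# zfs clone tank/data@today tank/data_clone
# zfs promote tank/data_clone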
Dirk,
I'm not sure I'm following you exactly but this is what I think you are
trying to do:
You have a RAIDZ pool that is built with slices and you are trying to
convert the slice configuration to whole disks. This isn't possible
because you are trying to replace the same disk. This is what happens
Hi Stacy,
If you can't import the pool, then it is difficult to remove the disks.
If the pool had enough redundancy, you could attempt to unconfigure the
corrupted disks with cfgadm and then try to import the pool.
Until we have a zpool clean feature, you could wipe the disk labels with
dd in th
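A sketch of what wiping a label with dd might look like; the device name is
hypothetical and this is destructive, so only use it on disks you intend to
reuse:
# dd if=/dev/zero of=/dev/rdsk/c1t2d0s0 bs=1024k count=100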
Hi Osvald,
If you physically replaced the failed disk with even a slightly smaller
disk in a RAIDZ pool and ran the zpool replace command, you would have
seen a message similar to the following:
# zpool replace rescamp c0t6d0 c2t2d0
cannot replace c0t6d0 with c2t2d0: device is too small
Did y
Hi Osvald,
Can you comment on how the disks shrank or how the labeling on these
disks changed?
We would like to track the issues that cause the hardware underneath
a live pool to change so that we can figure out how to prevent pool
failures in the future.
Thanks,
Cindy
On 10/03/09 09:46, Osv
Ray,
The checksums are set on the file systems, not the pool.
If a new checksum is set and *you* rewrite the data, then the rewritten
data will contain the new checksum. If your pool has the space for you
to duplicate the user data and a new checksum is set, then the duplicated
data will have the
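A sketch of the idea, with hypothetical dataset and path names:
# zfs set checksum=sha256 tank/data
# cp -rp /tank/data/project /tank/data/project.new   (the rewritten copy uses the new checksum)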
Yes, you can use the zpool replace process with any kind of drive:
failed, failing, or even healthy.
cs
On 10/02/09 12:15, Dan Transue wrote:
Does the same thing apply for a "failing" drive? I have a drive that
has not failed but by all indications, it's about to. Can I do the
same thing
David,
When you get back to the original system, it would be helpful if
you could provide a side-by-side comparison of the zpool create
syntax and the zfs list output of both pools.
Thanks,
Cindy
On 10/01/09 13:48, David Stewart wrote:
Cindy:
I am not at the machine right now, but I installe
You are correct. The zpool create -O option isn't available in a Solaris
10 release but will be soon. This will allow you to set the file system
checksum property when the pool is created:
# zpool create -O checksum=sha256 pool c1t1d0
# zfs get checksum pool
NAME PROPERTY VALUE SOURCE
poo
Hi David,
Which Solaris release is this?
Are you sure you are using the same ZFS command to review the sizes
of the raidz1 and raidz pools? The zpool list and zfs list commands
will display different values.
See the output below of my tank pool created with raidz or raidz1
redundancy. The pool
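For example, on a raidz pool, zpool list reports total capacity including
parity while zfs list reports usable space, so the two commands are expected
to show different values:
# zpool list tank
# zfs list tank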
Hi Ron,
Any reason why you want to use slices except for the root pool?
I would recommend a 4-disk configuration like this:
mirrored root pool on c1t0d0s0 and c2t0d0s0
mirrored app pool on c1t1d0 and c2t1d0
Let the install use one big slice for each disk in the mirrored root
pool, which is req
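A sketch of the non-root pool; the pool name is hypothetical and the root
pool is built by the installer:
# zpool create apppool mirror c1t1d0 c2t1d0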
Hi David,
All system-related components should remain in the root pool, such as
the components needed for booting and running the OS.
If you have datasets like /export/home or other non-system-related
datasets in the root pool, then feel free to move them out.
Moving OS components out of the ro
Hi Donour,
You would use the boot -L syntax to select the ZFS BE to boot from,
like this:
ok boot -L
Rebooting with command: boot -L
Boot device: /p...@8,60/SUNW,q...@4/f...@0,0/d...@w2104cf7fa6c7,0:a
File and args: -L
1 zfs1009BE
2 zfs10092BE
Select environment to boot: [ 1 - 2 ]: 2
The opensolaris.org site will be transitioning to a wiki-based site
soon, as described here:
http://www.opensolaris.org/os/about/faq/site-transition-faq/
I think it would be best to use the new site to collect this
information because it will be much easier for community members
to contribute.
Karl,
I'm not sure I'm following everything. If you can't swap the drives,
then which pool would you import?
If you install the new v210 with snv_115, then you would have a bootable
root pool.
You could then receive the snapshots from the old root pool into the
root pool on the new v210.
I wo
m specific info stored in the root pool?
Thanks
Peter
2009/9/24 Cindy Swearingen :
Hi Karl,
Manually cloning the root pool is difficult. We have a root pool
recovery procedure that you might be able to apply as long as the
systems are identical. I would not attempt this with LiveUpgrade
and manually tweaking.
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting
Dustin,
You didn't describe the process that you used to replace the disk so it's
difficult to comment on what happened.
In general, you physically replace the disk and then let ZFS know that
the disk is replaced, like this:
# zpool replace pool-name device-name
This process is described here:
Hi Chris,
Unless we can figure out the best way to provide this info, please ask
about specific features and we'll tell you.
One convoluted way is that a CR that integrates a ZFS feature
identifies the Nevada integration build and the Solaris 10 release,
but not all CRs provide this info. You can
Dave,
I've searched opensolaris.org and our internal bug database.
I don't see that anyone else has reported this problem.
I asked someone from the OSOL install team and this behavior
is a mystery.
If you destroyed the phantom pools before you reinstalled,
then they probably returned from the i
Cindy Swearingen wrote:
Michael,
ZFS handles EFI labels just fine, but you need an SMI label on the
disk that you are booting from.
Are you saying that localtank is your root pool?
no... (I was on the plane yesterday, I'm still jet-lagged), I should
have realised that that's st
Michael,
ZFS handles EFI labels just fine, but you need an SMI label on the disk
that you are booting from.
Are you saying that localtank is your root pool?
I believe the OSOL install creates a root pool called rpool. I don't
remember if it's configurable.
Changing labels or partitions from
In addition, if you need the flexibility of moving disks around until
the device removal CR integrates, then mirrored pools are more flexible.
Detaching disks from a mirror isn't ideal but if you absolutely have
to reuse a disk temporarily then go with mirrors. See the output below.
You can repla
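For example, temporarily borrowing a disk from a mirror might look like this
(names hypothetical); the pool loses redundancy until the disk is attached
again:
# zpool detach tank c1t3d0
# zpool attach tank c1t2d0 c1t3d0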
Hi RB,
We have a draft of the ZFS/flar image support here:
http://opensolaris.org/os/community/zfs/boot/flash/
Make sure you review the Solaris OS requirements.
Thanks,
Cindy
On 09/14/09 11:45, RB wrote:
Is it possible to create flar image of ZFS root filesystem to install it to
other maci
Hi Brian,
I'm tracking this issue and expected resolution, here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#RAID-Z_Checksum_Errors_in_Nevada_Builds.2C_120-123
Thanks,
Cindy
On 09/10/09 13:21, Brian Hechinger wrote:
I've hit google and it looks like this is still
Hi Jon,
If the zpool import command shows the old rpool and associated disk
(c1t1d0s0), then you might be able to import it like this:
# zpool import rpool rpool2
Which renames the original pool, rpool, to rpool2, upon import.
If the disk c1t1d0s0 was overwritten in any way then I'm not sure
th
Hi Mike,
I reviewed this doc and the only issue I have with it now is that it uses
/var/tmp as an example of storing snapshots in "long-term storage"
elsewhere.
For short-term storage, storing a snapshot as a file is an acceptable
solution as long as you verify that the snapshot files are valid
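A sketch of storing a snapshot as a file and checking that the stream is
readable; names are hypothetical:
# zfs send rpool/export@backup > /var/tmp/export.snap
# zfs receive -n -v tank/verify < /var/tmp/export.snap   (dry run, nothing is written)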
g from DVD but nothing showed up. Thanks for the ideas, though.
Maybe your other sources might have something?
----- Original Message -----
From: Cindy Swearingen
To: Grant Lowe
Cc: zfs-discuss@opensolaris.org
Sent: Thursday, August 27, 2009 6:24:00 PM
Subject: Re: [zfs-discuss] Boot error
Hi
Hi Grant,
I don't have all my usual resources at the moment, but I would
boot from alternate media and use the format utility to check
the partitioning on newly added disk, and look for something
like overlapping partitions. Or, possibly, a mismatch between
the actual root slice and the one you
Hi Dick,
I'm testing root pool recovery from remotely stored snapshots rather
than from files.
I can send the snapshots to a remote pool easily enough.
The problem I'm having is getting the snapshots back while the
local system is booted from the miniroot to simulate a root pool
recovery. I don
Hi Chris,
You might repost this query on desktop-discuss to find out
the status of the Access List tab.
Last I heard, it was being reworked.
Cindy
On 08/21/09 10:14, Chris wrote:
How do I get this in OpenSolaris 2009.06?
http://www.alobbs.com/albums/albun26/ZFS_acl_dialog1.jpg
thanks.
_
ch is a good thing.
Is there further documentation on this yet?
I just asked Cindy Swearingen, the tech writer for ZFS, about this and
sadly, it appears that there isn't any documentation for this available
outside of Sun yet. The documentation for using flash archives to set
up systems
Hey Richard,
I believe 6844090 would be a candidate for an s10 backport.
The behavior of 6844090 worked nicely when I replaced a disk of the same
physical size even though the disks were not identical.
Another flexible storage feature is George's autoexpand property (Nevada
build 117), where yo
Hi Michael,
I will get this fixed.
Thanks for letting us know.
Cindy
On 08/07/09 09:24, Michael Marburger wrote:
Who do we contact to fix mis-information in the evil tuning guide?
at:
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#How_to_Tune_Cache_Sync_Handling_Per_St
Dang. This is a bug we talked about recently that is fixed in Nevada and
an upcoming Solaris 10 release.
Okay, so you can't offline the faulted disk, but you were able to
replace it and detach the spare.
Cool beans...
Cindy
On 08/06/09 15:35, Andreas Höschler wrote:
Hi Cindy,
I think you c
Hi Kyle,
Except that in the case of spares, you can't replace them.
You'll see a message like the one below.
Cindy
# zpool create pool mirror c1t0d0 c1t1d0 spare c1t5d0
# zpool status
pool: pool
state: ONLINE
scrub: none requested
config:
NAME        STATE     READ WRITE CKSUM
Andreas,
I think you can still offline the faulted disk, c1t6d0.
The difference between these two replacements:
zpool replace tank c1t6d0 c1t15d0
zpool replace tank c1t6d0
Is that in the second case, you are telling ZFS that c1t6d0
has been physically replaced in the same location. This would
Andreas,
More comments below.
Cindy
On 08/06/09 14:18, Andreas Höschler wrote:
Hi Cindy,
Good job for using a mirrored configuration. :-)
Thanks!
Your various approaches would work.
My only comment about #2 is that it might take some time for the spare
to kick in for the faulted disk.
Hi Andreas,
Good job for using a mirrored configuration. :-)
Your various approaches would work.
My only comment about #2 is that it might take some time for the spare
to kick in for the faulted disk.
Both 1 and 2 would take a bit more time than just replacing the faulted
disk with a spare dis
Brian,
CR 4852783 was updated again this week so you might add yourself or
your customer to continue to be updated.
In the meantime, a reminder is that a mirrored ZFS configuration
is flexible in that devices can be detached (as long as the redundancy
is not compromised) or replaced as long as t
Hi Steffen,
My advice is to go with a mirrored root pool, with all the disk space in s0
on each disk. Simple is best, and redundant simple is even better.
I'm no write cache expert, but a few simple tests on Solaris 10 5/09,
show me that the write cache is enabled on a disk that is labeled with
an SM
Hi Will,
I simulated this issue on s10u7 and then imported the pool on a
current Nevada release. The original issue remains, which is you
can't remove a spare device that no longer exists.
My sense is that the bug fix prevents the spare from getting messed
up in the first place when the device I
Hi Nawir,
I haven't tested these steps myself, but the error message
means that you need to set this property:
# zpool set bootfs=rpool/ROOT/BE-name rpool
Cindy
On 08/05/09 03:14, nawir wrote:
Hi,
I have sol10u7 OS with 73GB HD in c1t0d0.
I want to clone it to 36GB HD
These steps below is w
Hi Will,
Since no workaround is provided in the CR, I don't know if importing on
a more recent OpenSolaris release and trying to remove it will work.
I will simulate this error, try this approach, and get back to you.
Thanks,
Cindy
On 08/04/09 18:34, Will Murnane wrote:
On Tue, Aug 4, 2009
Hi Will,
It looks to me like you are running into this bug:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6664649
This is fixed in Nevada and a fix will also be available in an
upcoming Solaris 10 release.
This doesn't help you now, unfortunately.
I don't think this ghost of a de
Andrew,
Take a look at your zpool list output, which identifies the size of your
iscsi-pool pool.
Regardless of how the volume size was determined, your remaining
pool size is still 33GB and yes, some of it is used for metadata.
cs
On 08/03/09 11:26, andrew.r...@sun.com wrote:
hi cindy,
tnx
Hi Andrew,
The AVAIL column indicates the pool size, not the volsize
in this example.
In your case, the iscsi-pool/log_1_1 volume is 24 GB in size
and the remaining pool space is 33.7G. The 33.7G reflects
your pool space, not your volume size.
The sizing is easier to see if you include the zpoo
I apologize for replying in the middle of this thread, but I never
saw the initial snapshot syntax of mypool2, which needs to be
recursive (zfs snapshot -r mypool2@snap) to snapshot all the
datasets in mypool2. Then, use zfs send -R to pick up and
restore all the dataset properties.
What was the
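A sketch of that sequence; the receiving pool name is hypothetical:
# zfs snapshot -r mypool2@snap
# zfs send -R mypool2@snap | zfs receive -d mypool1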
Hi Dick,
The Solaris 10 volume management service is volfs.
If you attach the USB hard disk and run volcheck, the disk should
be mounted under the /rmdisk directory.
If the auto-mounting doesn't occur, you can disable volfs and mount
it manually.
You can read more about this feature here:
htt
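If the automatic mount doesn't happen, a manual sequence might look like
this; the device name and file system type are hypothetical and depend on
how the USB disk is formatted:
# svcadm disable volfs
# mkdir -p /mnt/usb
# mount -F pcfs /dev/dsk/c3t0d0p0:c /mnt/usb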
Hi Laurent,
I was able to reproduce it on a Solaris 10 5/09 system.
The problem is fixed in the current Nevada bits and also in
the upcoming Solaris 10 release.
The bug fix that integrated this change might be this one:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6328632
zpool o
Tim,
If you could send me your email address privately, the
OpenSolaris list folks have a better chance of resolving
this problem.
I promise I won't sell it to anyone. :-)
Cindy
On 07/27/09 16:25, cindy.swearin...@sun.com wrote:
Tim,
I sent your subscription problem to the OpenSolaris help l
Tim,
I sent your subscription problem to the OpenSolaris help list.
We should hear back soon.
Cindy
On 07/27/09 16:15, Tim Cook wrote:
So it is broken then... because I'm on week 4 now, no responses to this thread,
and I'm still not getting any emails.
Anyone from Sun still alive that can a
Hi Dick,
I haven't seen this problem when I've tested these steps.
And it's been a while since I've seen the nobody:nobody problem, but it
sounds like NFSMAPID didn't get set correctly.
I think this question is asked during installation and generally is set
to the default DNS domain name.
The dom
Hi--
With 40+ drives, you might consider two pools anyway. If you want to
use a ZFS root pool, something like this:
- Mirrored ZFS root pool (2 x 500 GB drives)
- Mirrored ZFS non-root pool for everything else
Mirrored pools are flexible and provide good performance. See this site
for more tips:
h
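A sketch of the second pool; device names are hypothetical and the root pool
is created by the installer:
# zpool create tank mirror c2t0d0 c3t0d0 mirror c2t1d0 c3t1d0 mirror c2t2d0 c3t2d0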
Hi Laurent,
Yes, you should be able to offline a faulty device in a redundant
configuration as long as enough devices are available to keep
the pool redundant.
On my Solaris Nevada system (latest bits), injecting a fault
into a disk in a RAID-Z configuration and then offlining a disk
works as expec
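For example, with hypothetical names:
# zpool offline tank c1t2d0
# zpool online tank c1t2d0    (when the device is healthy again)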
Hi Shawn,
I have no experience with this configuration, but you might review
the information in this blog:
http://blogs.sun.com/erickustarz/entry/poor_man_s_cluster_end
ZFS is not a cluster file system and yes, possible data corruption
issues exist. Eric mentions this in his blog.
You might al
FYI...
The -u option is described in the ZFS admin guide and the ZFS
troubleshooting wiki in the areas of restoring root pool snapshots.
The -u option is described in the zfs.1m man page starting in the
b115 release:
http://docs.sun.com/app/docs/doc/819-2240/zfs-1m
Cindy
Lori Alt wrote:
T
Hua-Ying,
The partition table *is* confusing so don't try to make sense of it. :-)
Partition or slice 2 represents the entire disk, cylinders 0-24317.
You created slice 0, which is cylinders 1-24316. Slice 8 is a reserved,
legacy area for boot info on some x86 systems. You can ignore it.
Looks
Hi Hua-Ying,
Some disks don't have target identifiers, like your c3d0
and c3d1 disks.
To attach your c3d1 disk, you need to relabel it with an
SMI label and provide a slice, s0, for example.
See the steps here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Replacing.2
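In outline, with hypothetical slice names (the SMI label and slice 0 are
created interactively in format -e):
# format -e c3d1
# zpool attach rpool c3d0s0 c3d1s0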
Hi Tertius,
Yes, I agree that the process is complex and mirroring the root
pool reduces downtime due to hardware failures, but I would
still keep root pool snapshots.
Richard posted a follow-up that you can attach a disk that is
slightly smaller after the integration of 6844090 into build 117:
Hi Patrick,
To answer your original question, yes, you can create your root swap
and dump volumes before you run the lucreate operation. LU won't change
them if they are already created.
Keep in mind that you'll need approximately 10 GB of disk space for the
ZFS root BE and the swap/dump volume
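A sketch of pre-creating the volumes; the sizes are hypothetical:
# zfs create -V 2G rpool/swap
# zfs create -V 2G rpool/dump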
Hi Tertius,
I think you are saying that you have an OpenSolaris system with a
one-disk root pool and a 6-way RAIDZ non-root pool.
You could create root pool snapshots and send them over to the non-root
pool or to a pool on another system. Then, consider purchasing another
disk for a mirrored
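A sketch of sending the root pool snapshots to the RAIDZ pool; the dataset
names are hypothetical:
# zfs snapshot -r rpool@backup
# zfs create tank/rpool-backup
# zfs send -R rpool@backup | zfs receive -d tank/rpool-backup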
Hi Mykola,
Yes, if you are speaking of the automatic TimeSlider snapshots,
the snapshots are rotated. I think the threshold is 80% full
disk space.
Cheers,
Cindy
Mykola Maslov wrote:
How to turn off the timeslider snapshots on certain file systems?
http://wikis.sun.com/display/OpenSolari
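For the per-file-system question above, TimeSlider honors the
com.sun:auto-snapshot property, so a sketch of disabling it on one dataset
(dataset name hypothetical) is:
# zfs set com.sun:auto-snapshot=false tank/scratch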
Hi Kyle,
The first thing to plan for is that the Solaris CIFS services are not
available in the Solaris 10 release.
You can use the property descriptions in this table to review the CIFS
related features. Using your browser's find in page feature and
searching on CIFS is probably the easiest way
Hi Harry,
Are you attempting this change when logged in as yourself or
as root?
The top section of this procedure describes how to add yourself
to the zfssnap role. Otherwise, if you are doing this step as a
non-root user, it probably won't work.
Cindy
Harry Putnam wrote:
dick hoogendijk writes:
Dave,
If I knew I would tell you, which is the problem. :-)
I see a good follow-up about device links, but probably
more is lurking.
I generally don't trust anything I haven't tested myself,
and I know that the manual process hasn't always worked.
I think Scott Dickson's instructions would hav
Hi Kent,
This is what I do in similar situations:
1. Import the pool to be destroyed by using the ID. In your case,
like this:
# zpool import 3280066346390919920
If tank already exists you can also rename it:
# zpool import 3280066346390919920 tank2
Then destroy it:
# zpool destroy tank2
I
Hi Dave,
Until the ZFS/flash support integrates into an upcoming Solaris 10
release, I don't think we have an easy way to clone a root pool/dataset
from one system to another system because system specific info is still
maintained.
Your manual solution sounds plausible but probably won't work be
Hi UNIX admin,
I would check fmdump -eV output to see if this error is isolated or
persistent.
If fmdump says this error is isolated, then you might just monitor the
status. For example, if fmdump says that these errors occurred on 6/15
and you moved this system on that date or you know that som
Hi Roland,
Current Solaris releases, SXCE (build 98) or OpenSolaris 2009.06,
provide space accounting features to display space consumed by
snapshots, descendent datasets, and so on.
On my OSOL 2009.06 system with automatic snapshots running, I can see
the space that is consumed by snapshots by
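For example, the space accounting columns (USEDSNAP, USEDDS, USEDCHILD, and
so on) can be displayed like this; the dataset name is hypothetical:
# zfs list -o space rpool/export/home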
Hi Harry,
I use this stuff every day and I can't figure out the right syntax
either. :-)
Reviewing the zfs man page syntax, it looks like you should be able
to use this syntax:
# zfs list -t snapshot dataset
But it doesn't work:
# zfs list -t snapshot rpool/export
cannot open 'rpool/export':
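One form that may work is adding -r to list the snapshots under a given
dataset; this is a sketch, not verified on every release:
# zfs list -r -t snapshot rpool/export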
Hi Dick,
I've rewritten the instructions for relabeling/repartitioning a disk
that is intended for the root pool, here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Replacing.2FRelabeling_the_Root_Pool_Disk
Generally, the format utility will show you the size of the
Hi Frank,
The reason that ZFS let you create "rpool" with an EFI label is that at this
point, it doesn't know that this is a root pool. It's just a pool named
"rpool." The best solution is for us to provide a bootable EFI label.
I see an old bug that says if you already have a pool with the same name
Christo,
We don't have an easy way to re-propagate ACL entries on existing files
and directories.
You might try using a combination of find and chmod, similar to the
syntax below.
Which Solaris release is this? We might be able to provide better
hints if you can identify the release and the ACL
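A rough sketch of the find/chmod combination; the ACL entry and path are
hypothetical and the exact syntax depends on the release:
# find /tank/data -type d -exec chmod A+user:webuser:read_data/execute:allow {} \;
# find /tank/data -type f -exec chmod A+user:webuser:read_data:allow {} \;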
Hi Krenz,
Can you provide your zfs list output and your snapshot syntax?
See the output below from my Solaris 10 5/09 system. Snapshot
syntax and behavior should be similar to the Solaris 10 10/08
release.
When you take a snapshot of the root pool you must use the
-r option to recursively snapshot
Hi Richard,
I ran into some quirks resizing swap last week.
If you are seeing an out-of-space error when trying to remove a swap area, then a
reboot clears this up. I think the bugs are already filed, but I would
like to see your scenario as well.
Can you restate your steps?
Thanks,
Cindy
Jan Dambo
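For reference, a sketch of the resize sequence being discussed; the size is
hypothetical:
# swap -d /dev/zvol/dsk/rpool/swap
# zfs set volsize=4G rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap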
Hi Frank,
This bug was filed with bugster, but I see that the opensolaris bug
database is currently unavailable. I sent a note about this problem.
When a root cause is determined for 6844090, then we'll see whether
this particular issue is a ZFS problem or a format/fdisk problem.
In any case, im
Hi Noz,
This problem was reported recently and this bug was filed:
6844090 zfs should be able to mirror to a smaller disk
I believe slice 9 (alternates) is an older method for providing
alternate disk blocks on x86 systems. Apparently, it can be removed by
using the format -e command. I haven't
Aurelien,
I don't think this scenario has been tested and I'm unclear about what
other steps might be missing, but I suspect that you need to set the
bootfs property on the root pool, which, depending on your ZFS BE, would
look something like this:
# zpool set bootfs=rpool/ROOT/zfsBE-name rpool
This s
Hi Rich,
Yes, your zpool syntax is correct.
I just tested what I think is your final
configuration.
Cindy
# zpool create dpool c1t0d0 c1t1d0 c1t2d0
# zpool attach dpool c1t1d0 c1t3d0
# zpool attach dpool c1t0d0 c1t4d0
# zpool attach dpool c1t2d0 c1t5d0
# zpool status dpool
pool: dpool
state
Hi Ian,
This procedure identifies the zfs send/receive syntax:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Complete_Solaris_ZFS_Root_Pool_Recovery
Cindy
Ian Collins wrote:
I'm trying to use zfs send/receive to replicate the root pool of a
system and I can't think
Hi Howard,
Which Solaris release is this?
You shouldn't have to register the ZFS app, but other problems prevented
the ZFS GUI tool from launching successfully in the Solaris 10 release.
If you can provide the Solaris release info and specific error messages,
I can try to get some answers.
T
Hi Ian,
Other than bug fixes, the only notable feature in the Solaris 10 5/09
release is that Solaris Live Upgrade supports additional zones configurations.
You can read about these configurations here:
http://docs.sun.com/app/docs/doc/819-5461/gigek?l=en&a=view
I hope someone else from the tea
Hi Grant,
We have predefined ACL sets, which integrated into build 99.
With ZFS delegated permissions, you can create a permission set that can
be re-used.
See the example 9-2 here:
http://docs.sun.com/app/docs/doc/817-2271/gbchv?l=en&q=permission+sets&a=view
zfs allow [-s] ... perm|@setname
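A sketch of defining and granting a set; the set name, user, and datasets
are hypothetical:
# zfs allow -s @basicsnap snapshot,clone,mount tank
# zfs allow grant @basicsnap tank/home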
Hi Uwe,
You can use the fmdump feature to help determine whether these disk
errors are persistent.
Using fmdump -ev will provide a lot of detail but you can review
how many disk errors have occurred and for how long.
A brief description is provided here:
http://www.solarisinternals.com/wiki/i
Hi Ravi,
I think a previous bug prevented the use of volumes in non-global zones
and the man page was not updated. This is a bug in the man page. I will
fix this.
I agree that this text here:
http://docsview.sfbay.sun.com/app/docs/doc/819-5461/ftyxh?a=view
A ZFS volume is a dataset that repres
Michael,
You can't attach disks to an existing RAIDZ vdev, but you can add another
RAIDZ vdev. Also keep in mind that you can't detach disks from RAIDZ
pools either.
See the syntax below.
Cindy
# zpool create rzpool raidz2 c1t0d0 c1t1d0 c1t2d0
# zpool status
pool: rzpool
state: ONLINE
scrub:
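Expanding the pool with a second RAIDZ vdev would look something like this;
the device names are hypothetical:
# zpool add rzpool raidz2 c2t0d0 c2t1d0 c2t2d0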
Hi Harry,
I was on vacation so am late to this discussion.
For this part of your question:
The zpool export/import feature is a pool-level operation for moving
the pool, disks, and data to another system.
For moving data from one pool to another pool, you would want to use
zfs send/recv, rsync
Harry,
Bob F. has given you some excellent advice about using mirrored
configurations. I can answer your RAIDZ questions but your original
configuration was for a root pool and non-root pool using 4 disks
total.
Start with two mirrored pools of two disks each. In the future,
you will be able to a
Hi Neal,
This example needs to be updated with a ZFS root pool. It could
also be that I mapped the wrong boot disks in this example.
You can name the root pool whatever you want: rpool, mpool,
mypool.
In these examples, I was using rpool for a RAIDZ pool and mpool
for a mirrored pool, not knowing
Grant,
If I'm following correctly, you can't mount a ZFS resource
outside of the pool in which the resource resides.
Is this a UFS directory, here:
# mkdir -p /opt/mis/oracle/data/db1
What are you trying to do?
Cindy
Grant Lowe wrote:
Another newbie question:
I have a new system with zfs
Neal,
You'll need to use the text-based initial install option.
The steps for configuring a ZFS root pool during an initial
install are covered here:
http://opensolaris.org/os/community/zfs/docs/
Page 114:
Example 4–1 Initial Installation of a Bootable ZFS Root File System
Step 3, you'll be p
Hi Steven,
I don't have access to my usual resources to test the ACL syntax but
I think the root cause is that you don't have execute permission
on the "Not Started" directory.
Try the chmod syntax again but this time include execute:allow for
admin on "Not Started" or add it like this:
# chmod
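A sketch of what the full command might look like; the path and permission
list are hypothetical:
# chmod A+user:admin:read_data/write_data/execute:allow "/export/home/Not Started"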
It's your BE, you can run it if you want to. :-)
Select the BE from the GRUB menu to boot from even if it isn't the
(lu)activated one.
Be sure that every BE that you want to boot from has been activated and
booted (using init 6) before you try to boot from it.
Also, the BE that you just bo