but nothing showed up. Thanks for the ideas, though.
Maybe your other sources might have something?
- Original Message
From: Cindy Swearingen cindy.swearin...@sun.com
To: Grant Lowe gl...@sbcglobal.net
Cc: zfs-discuss@opensolaris.org
Sent: Thursday, August 27, 2009 6:24:00 PM
Subject
Hi Grant,
I don't have all my usual resources at the moment, but I would
boot from alternate media and use the format utility to check
the partitioning on the newly added disk, and look for something
like overlapping partitions. Or, possibly, a mismatch between
the actual root slice and the one
Hi Chris,
You might repost this query on desktop-discuss to find out
the status of the Access List tab.
Last I heard, it was being reworked.
Cindy
On 08/21/09 10:14, Chris wrote:
How do I get this in OpenSolaris 2009.06?
http://www.alobbs.com/albums/albun26/ZFS_acl_dialog1.jpg
thanks.
again which is a good thing.
Is there further documentation on this yet?
I just asked Cindy Swearingen, the tech writer for ZFS, about this and
sadly, it appears that there isn't any documentation for this available
outside of Sun yet. The documentation for using flash archives to set
up systems with zfs roots won't
Hey Richard,
I believe 6844090 would be a candidate for an s10 backport.
The behavior of 6844090 worked nicely when I replaced a disk of the same
physical size even though the disks were not identical.
Another flexible storage feature is George's autoexpand property (Nevada
build 117), where
Hi Andreas,
Good job for using a mirrored configuration. :-)
Your various approaches would work.
My only comment about #2 is that it might take some time for the spare
to kick in for the faulted disk.
Both 1 and 2 would take a bit more time than just replacing the faulted
disk with a spare
Andreas,
More comments below.
Cindy
On 08/06/09 14:18, Andreas Höschler wrote:
Hi Cindy,
Good job for using a mirrored configuration. :-)
Thanks!
Your various approaches would work.
My only comment about #2 is that it might take some time for the spare
to kick in for the faulted
Andreas,
I think you can still offline the faulted disk, c1t6d0.
The difference between these two replacements:
zpool replace tank c1t6d0 c1t15d0
zpool replace tank c1t6d0
Is that in the second case, you are telling ZFS that c1t6d0
has been physically replaced in the same location. This would
Hi Kyle,
Except that in the case of spares, you can't replace them.
You'll see a message like the one below.
Cindy
# zpool create pool mirror c1t0d0 c1t1d0 spare c1t5d0
# zpool status
pool: pool
state: ONLINE
scrub: none requested
config:
NAME        STATE     READ WRITE CKSUM
Dang. This is a bug we talked about recently that is fixed in Nevada and
an upcoming Solaris 10 release.
Okay, so you can't offline the faulted disk, but you were able to
replace it and detach the spare.
Cool beans...
Cindy
On 08/06/09 15:35, Andreas Höschler wrote:
Hi Cindy,
I think you
Hi Will,
Since no workaround is provided in the CR, I don't know if importing on
a more recent OpenSolaris release and trying to remove it will work.
I will simulate this error, try this approach, and get back to you.
Thanks,
Cindy
On 08/04/09 18:34, Will Murnane wrote:
On Tue, Aug 4,
Hi Will,
I simulated this issue on s10u7 and then imported the pool on a
current Nevada release. The original issue remains, which is you
can't remove a spare device that no longer exists.
My sense is that the bug fix prevents the spare from getting messed
up in the first place when the device
Hi Steffen,
My advice is to go with a mirrored root pool, with all the disk space in s0
on each disk. Simple is best and redundant simple is even better.
I'm no write cache expert, but a few simple tests on Solaris 10 5/09
show me that the write cache is enabled on a disk that is labeled with
an
Brian,
CR 4852783 was updated again this week so you might add yourself or
your customer to continue to be updated.
In the meantime, a reminder is that a mirrored ZFS configuration
is flexible in that devices can be detached (as long as the redundancy
is not compromised) or replaced as long as
Hi Will,
It looks to me like you are running into this bug:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6664649
This is fixed in Nevada and a fix will also be available in an
upcoming Solaris 10 release.
This doesn't help you now, unfortunately.
I don't think this ghost of a
Hi Andrew,
The AVAIL column indicates the pool size, not the volsize
in this example.
In your case, the iscsi-pool/log_1_1 volume is 24 GB in size
and the remaining pool space is 33.7G. The 33.7G reflects
your pool space, not your volume size.
The sizing is easier to see if you include the
Andrew,
Take a look at your zpool list output, which identifies the size of your
iscsi-pool pool.
Regardless of how the volume size was determined, your remaining
pool size is still 33GB and yes, some of it is used for metadata.
cs
On 08/03/09 11:26, andrew.r...@sun.com wrote:
hi cindy,
tnx
Hi Dick,
The Solaris 10 volume management service is volfs.
If you attach the USB hard disk and run volcheck, the disk should
be mounted under the /rmdisk directory.
If the auto-mounting doesn't occur, you can disable volfs and mount
it manually.
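For example, something along these lines should work (the device name and
file system type below are just assumptions and will vary):
# svcadm disable volfs
# mount -F pcfs /dev/dsk/c3t0d0p0:c /mnt
Re-enable volfs with svcadm enable volfs when you are done.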
You can read more about this feature here:
I apologize for replying in the middle of this thread, but I never
saw the initial snapshot syntax of mypool2, which needs to be
recursive (zfs snapshot -r mypool2@snap) to snapshot all the
datasets in mypool2. Then, use zfs send -R to pick up and
restore all the dataset properties.
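A minimal sketch of that sequence, assuming a snapshot named snap and a
target pool named mypool3 (both placeholders):
# zfs snapshot -r mypool2@snap
# zfs send -R mypool2@snap | zfs receive -Fd mypool3
Note that -F rolls back or overwrites existing datasets on the receiving
side, so use it carefully.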
What was the
Tim,
I sent your subscription problem to the OpenSolaris help list.
We should hear back soon.
Cindy
On 07/27/09 16:15, Tim Cook wrote:
So it is broken then... because I'm on week 4 now, no responses to this thread,
and I'm still not getting any emails.
Anyone from Sun still alive that can
Tim,
If you could send me your email address privately, the
OpenSolaris list folks have a better chance of resolving
this problem.
I promise I won't sell it to anyone. :-)
Cindy
On 07/27/09 16:25, cindy.swearin...@sun.com wrote:
Tim,
I sent your subscription problem to the OpenSolaris help
Hi Laurent,
I was able to reproduce it on a Solaris 10 5/09 system.
The problem is fixed in the current Nevada bits and also in
the upcoming Solaris 10 release.
The bug fix that integrated this change might be this one:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6328632
zpool
Hi Dick,
I haven't seen this problem when I've tested these steps.
And it's been a while since I've seen the nobody:nobody problem, but it
sounds like NFSMAPID didn't get set correctly.
I think this question is asked during installation and generally is set
to the default DNS domain name.
The
Hi--
With 40+ drives, you might consider two pools anyway. If you want to
use a ZFS root pool, something like this:
- Mirrored ZFS root pool (2 x 500 GB drives)
- Mirrored ZFS non-root pool for everything else
Mirrored pools are flexible and provide good performance. See this site
for more tips:
Hi Laurent,
Yes, you should be able to offline a faulty device in a redundant
configuration as long as enough devices are available to keep
the pool redundant.
On my Solaris Nevada system (latest bits), injecting a fault
into a disk in a RAID-Z configuration and then offlining a disk
works as
Hi Shawn,
I have no experience with this configuration, but you might review
the information in this blog:
http://blogs.sun.com/erickustarz/entry/poor_man_s_cluster_end
ZFS is not a cluster file system and yes, possible data corruption
issues exist. Eric mentions this in his blog.
You might
FYI...
The -u option is described in the ZFS admin guide and the ZFS
troubleshooting wiki in the areas of restoring root pool snapshots.
The -u option is described in the zfs.1m man page starting in the
b115 release:
http://docs.sun.com/app/docs/doc/819-2240/zfs-1m
Cindy
Lori Alt wrote:
Hi Hua-Ying,
Some disks don't have target identifiers, like your c3d0
and c3d1 disks.
To attach your c3d1 disk, you need to relabel it with an
SMI label and provide a slice, s0, for example.
See the steps here:
Hua-Ying,
The partition table *is* confusing so don't try to make sense of it. :-)
Partition or slice 2 represents the entire disk, cylinders 0-24317.
You created slice 0, which is cylinders 1-24316. Slice 8 is a reserved,
legacy area for boot info on some x86 systems. You can ignore it.
Looks
Hi Patrick,
To answer your original question, yes, you can create your root swap
and dump volumes before you run the lucreate operation. LU won't change
them if they are already created.
Keep in mind that you'll need approximately 10 GBs of disk space for the
ZFS root BE and the swap/dump
Hi Tertius,
I think you are saying that you have an OpenSolaris system with a
one-disk root pool and a 6-way RAIDZ non-root pool.
You could create root pool snapshots and send them over to the non-root
pool or to a pool on another system. Then, consider purchasing another
disk for a mirrored
Hi Mykola,
Yes, if you are speaking of the automatic TimeSlider snapshots,
the snapshots are rotated. I think the threshold is 80% full
disk space.
Cheers,
Cindy
Mykola Maslov wrote:
How to turn off the timeslider snapshots on certain file systems?
Hi Kyle,
The first thing to plan for is that the Solaris CIFS services are not
available in the Solaris 10 release.
You can use the property descriptions in this table to review the CIFS
related features. Using your browser's find in page feature and
searching on CIFS is probably the easiest
Hi Kent,
This is what I do in similar situations:
1. Import the pool to be destroyed by using the ID. In your case,
like this:
# zpool import 3280066346390919920
If tank already exists you can also rename it:
# zpool import 3280066346390919920 tank2
Then destroy it:
# zpool destroy tank2
I
Hi Dave,
Until the ZFS/flash support integrates into an upcoming Solaris 10
release, I don't think we have an easy way to clone a root pool/dataset
from one system to another system because system specific info is still
maintained.
Your manual solution sounds plausible but probably won't work
Hi Roland,
Current Solaris releases, SXCE (build 98) or OpenSolaris 2009.06,
provide space accounting features to display space consumed by
snapshots, descendent datasets, and so on.
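In these releases, something like the following should break the usage
down per dataset (the dataset name is just an example):
# zfs list -o space rpool/export
The USEDSNAP column shows the space consumed by snapshots of each dataset.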
On my OSOL 2009.06 system with automatic snapshots running, I can see
the space that is consumed by snapshots by
Hi UNIX admin,
I would check fmdump -eV output to see if this error is isolated or
persistent.
If fmdump says this error is isolated, then you might just monitor the
status. For example, if fmdump says that these errors occurred on 6/15
and you moved this system on that date or you know that
Hi Harry,
I use this stuff every day and I can't figure out the right syntax
either. :-)
Reviewing the zfs man page syntax, it looks like you should be able
to use this syntax:
# zfs list -t snapshot dataset
But it doesn't work:
# zfs list -t snapshot rpool/export
cannot open 'rpool/export':
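One form that should list just those snapshots is the recursive variant:
# zfs list -r -t snapshot rpool/export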
Hi Frank,
The reason that ZFS let you create rpool with an EFI label is that at this
point, it doesn't know that this is a root pool. It's just a pool named
rpool. The best solution is for us to provide a bootable EFI label.
I see an old bug that says if you already have a pool with the same name
Hi Dick,
I've rewritten the instructions for relabeling/repartitioning a disk
that is intended for the root pool, here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Replacing.2FRelabeling_the_Root_Pool_Disk
Generally, the format utility will show you the size of the
Hi Krenz,
Can you provide your zfs list output and your snapshot syntax?
See the output below from my Solaris 10 5/09 system. Snapshot
syntax and behavior should be similar to the Solaris 10 10/08
release.
When you take a snapshot of the root pool you must use the
-r option to recursively
Christo,
We don't have an easy way to re-propagate ACL entries on existing files
and directories.
You might try using a combination of find and chmod, similar to the
syntax below.
Which Solaris release is this? We might be able to provide better
hints if you can identify the release and the
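For illustration only, a hypothetical sketch of such a find/chmod
combination (the path and ACL entry here are assumptions, not a tested
recipe):
# find /export/data -type d -exec chmod A+user:webadmin:read_data/execute:allow {} \;
# find /export/data -type f -exec chmod A+user:webadmin:read_data:allow {} \;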
Hi Richard,
I ran into some quirks resizing swap last week.
If you are seeing an out-of-space error when trying to remove a swap area,
then a reboot should clear this up. I think the bugs are already filed, but I would
like to see your scenario as well.
Can you restate your steps?
Thanks,
Cindy
Jan
Hi Frank,
This bug was filed with bugster, but I see that the opensolaris bug
database is currently unavailable. I sent a note about this problem.
When a root cause is determined for 6844090, then we'll see whether
this particular issue is a ZFS problem or a format/fdisk problem.
In any case,
Hi Noz,
This problem was reported recently and this bug was filed:
6844090 zfs should be able to mirror to a smaller disk
I believe slice 9 (alternates) is an older method for providing
alternate disk blocks on x86 systems. Apparently, it can be removed by
using the format -e command. I
Hi Rich,
Yes, your zpool syntax is correct.
I just tested what I think is your final
configuration.
Cindy
# zpool create dpool c1t0d0 c1t1d0 c1t2d0
# zpool attach dpool c1t1d0 c1t3d0
# zpool attach dpool c1t0d0 c1t4d0
# zpool attach dpool c1t2d0 c1t5d0
# zpool status dpool
pool: dpool
Hi Ian,
This procedure identifies the zfs send/receive syntax:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Complete_Solaris_ZFS_Root_Pool_Recovery
Cindy
Ian Collins wrote:
I'm trying to use zfs send/receive to replicate the root pool of a
system and I can't think
Hi Howard,
Which Solaris release is this?
You shouldn't have to register the ZFS app, but other problems prevented
the ZFS GUI tool from launching successfully in the Solaris 10 release.
If you can provide the Solaris release info and specific error messages,
I can try to get some answers.
Hi Ian,
Other than bug fixes, the only notable feature in the Solaris 10 5/09
release is that Solaris Live Upgrade supports additional zones configurations.
You can read about these configurations here:
http://docs.sun.com/app/docs/doc/819-5461/gigek?l=en&a=view
I hope someone else from the
Hi Grant,
We have predefined ACL sets, which were integrated into build 99.
With ZFS delegated permissions, you can create a permission set that can
be re-used.
See the example 9-2 here:
http://docs.sun.com/app/docs/doc/817-2271/gbchv?l=en&q=permission+sets&a=view
zfs allow [-s] ... perm|@setname
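A hypothetical example of defining a set and granting it (the set name,
permissions, user, and pool are placeholders):
# zfs allow -s @simple create,destroy,snapshot,mount tank
# zfs allow cindys @simple tank
# zfs allow tank
The last command just displays the permissions now in effect on the pool.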
Hi Uwe,
You can use the fmdump feature to help determine whether these disk
errors are persistent.
Using fmdump -ev will provide a lot of detail but you can review
how many disk errors have occurred and for how long.
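For a quick overview, something like this is usually enough (the exact
ereport classes vary by driver):
# fmdump -e
# fmdump -eV | more
The -e form prints one line per error report with a timestamp and class,
so you can see how many reports exist and when they occurred; -eV adds
the full detail for each report.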
A brief description is provided here:
Hi Ravi,
I think a previous bug prevented the use of volumes in non-global zones
and the man page was not updated. This is a bug in the man page. I will
fix this.
I agree that this text here:
http://docsview.sfbay.sun.com/app/docs/doc/819-5461/ftyxh?a=view
A ZFS volume is a dataset that
Michael,
You can't attach disks to an existing RAIDZ vdev, but you can add another
RAIDZ vdev. Also keep in mind that you can't detach disks from RAIDZ
pools either.
See the syntax below.
Cindy
# zpool create rzpool raidz2 c1t0d0 c1t1d0 c1t2d0
# zpool status
pool: rzpool
state: ONLINE
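To grow a pool like this you would add a second RAIDZ vdev, for example
(the device names are assumptions):
# zpool add rzpool raidz2 c1t3d0 c1t4d0 c1t5d0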
Hi Harry,
I was on vacation so am late to this discussion.
For this part of your question:
The zpool export/import feature is a pool-level operation for moving
the pool, disks, and data to another system.
For moving data from one pool to another pool, you would want to use
zfs send/recv,
Harry,
Bob F. has given you some excellent advice about using mirrored
configurations. I can answer your RAIDZ questions but your original
configuration was for a root pool and non-root pool using 4 disks
total.
Start with two mirrored pools of two disks each. In the future,
you will be able to
Hi Neal,
This example needs to be updated with a ZFS root pool. It could
also be that I mapped the wrong boot disks in this example.
You can name the root pool whatever you want, rpool, mpool,
mypool.
In these examples, I was using rpool for RAIDZ pool and mpool
for mirrored pool, not knowing
Neal,
You'll need to use the text-based initial install option.
The steps for configuring a ZFS root pool during an initial
install are covered here:
http://opensolaris.org/os/community/zfs/docs/
Page 114:
Example 4–1 Initial Installation of a Bootable ZFS Root File System
Step 3, you'll be
Hi Steven,
I don't have access to my usual resources to test the ACL syntax but
I think the root cause is that you don't have execute permission
on the Not Started directory.
Try the chmod syntax again but this time include execute:allow for
admin on Not Sorted or add it like this:
# chmod
Hi Rafael,
The information on that site looks very out-of-date. I will attempt to
resolve this problem.
Other than using Live Upgrade to migrate a UFS root file system to a ZFS
root file system, you can use ufsdump and ufsrestore to migrate UFS data
to a ZFS file system.
Other data migration
Leonid,
You could use the fmdump -eV command to look for problems with these
disks. This command might generate a lot of output, but it should be
clear if the root cause is a problem accessing these devices.
I would also check /var/adm/messages for any driver-related messages.
Cindy
Leonid
Hi Gordon,
We are working toward making the root pool recovery process easier
in the future, for everyone. In the meantime, this is awesome work.
After I run through these steps myself, I would like to add this
procedure to the ZFS t/s wiki.
Thanks,
Cindy
Gordon Johnson wrote:
I hope this
Jean-Paul,
Our goofy disk formatting is tripping you up...
Put the disk space of c8t0d0 in c8t0d0s0 and try the
zpool add syntax again. If you need help with the
format syntax, let me know.
This command syntax should have complained:
pfexec zpool add rpool cache /dev/rdsk/c8t0d0
See the zpool
Handojo,
Use the format utility to put the disk space of c4d0 into c4d0s0
and try the zpool attach syntax again, like this:
# zpool attach rpool c3d0s0 c4d0s0
Let the newly added disk resilver; monitor progress with zpool status.
Then, install the bootblocks on the newly added disk, like this:
#
Jean-Paul,
Regarding your comments here:
Expected because s0 is defined as 0 bytes in the partition table I presume?
Yes, you need to put the disk space into s0 by using the format
utility. Using the modify option from format's partition menu is
probably the easiest way. Email me directly if you
We should also add some
new
members to both Contributor and Core contributor levels.
First the current list of Core contributors:
Bill Moore (billm)
Cindy Swearingen (cindys)
Lori M. Alt (lalt)
Mark Shellenbaum (marks)
Mark Maybee (maybee)
Matthew A. Ahrens (ahrens)
Neil V. Perrin (perrin)
Jeff
Hi Peter,
Yes, ZFS supports extended attributes.
The runat.1 and fsattr.5 man pages are good places
to start.
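A quick hypothetical example of writing and listing an extended attribute
with runat (the file and attribute names are placeholders):
# runat myfile cp /tmp/attrdata attr1
# runat myfile ls -l
Also, ls -@ myfile flags files that have extended attributes.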
Cindy
Peter Reiher wrote:
Does ZFS currently support actual use of extended attributes? If so, where
can I find some documentation that describes how to use them?
Orvar,
In an existing RAIDZ configuration, you would add the cache device like
this:
# zpool add pool-name cache device-name
Currently, cache devices are only supported in the OpenSolaris and SXCE
releases.
The important thing is determining whether the cache device would
improve your
Hi Amy,
You can review the ZFS/LU/zones issues here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Live_Upgrade_with_Zones
The entire Solaris 10 10/08 UFS to ZFS with zones migration is described
here:
http://docs.sun.com/app/docs/doc/819-5461/zfsboot-1?a=view
Let
Hi everyone,
Recent updates to the ZFS admin guide and the troubleshooting wiki include
the following.
1. Revised root pool recovery steps.
The process has been changed slightly due to a recently uncovered zfs
receive problem. You can create a recursive root pool snapshot as was
previously
Nico,
If you want to enable snapshot display as in previous releases,
then set this parameter on the pool:
# zpool set listsnapshots=on pool-name
Cindy
Nico Sabbi wrote:
On Wednesday 14 January 2009 11:44:56 Peter Tribble wrote:
On Wed, Jan 14, 2009 at 10:11 AM, Nico Sabbi
Orvar,
Two choices are described below, where safety is the priority.
I prefer the first one (A).
Cindy
A. Replace each 500GB disk in the existing pool with a 1 TB drive.
Then, add the 5th 1TB drive as a spare. Depending on the Solaris
release you are running, you might need to export/import
Hi Orvar,
Option A effectively doubles your existing pool (500GB x 4 -> 1TB x 4)
*and* provides increased reliability. This is the difference between
options A and B.
I also like the convenience of just replacing the smaller disks with
larger disks in the existing pool and not having to create a
Alex,
I think the root cause of your confusion is that the format utility and
disk labels are very unfriendly and confusing.
Partition 2 identifies the whole disk, and on x86 systems, space is
needed for boot-related information, which is currently stored in
partition 8. Neither of these partitions
Hi Alex,
The fact that you have to install the boot blocks manually on the
second disk that you added with zpool attach is a bug! I should have
mentioned this bug previously.
If you had used the initial installation method to create a mirrored
root pool, the boot blocks would have been applied
Jianhua,
Use the label command in the format utility, as shown in the output below.
Cindy
# format -e c0t1d0
selecting c0t1d0
[disk formatted]
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current
Iman,
Yes, you can do either of the following:
o Select two disks for creating a mirrored root pool during an initial
installation
o Attach a second disk after the initial installation, like this:
# zpool attach rpool old-disk new-disk
In the attach disk scenario, you will also need to add the
Iman,
Sure, just select both disks during the install, like the screen below.
If you don't see all the disks on the system during the initial install,
then either there is an underlying configuration problem or you just
need to scroll down to see all the disks.
Cindy
Select Disks
On
Daniel,
You can replace the disks in both of the supported root pool
configurations:
- single disk (non-redundant) root pool
- mirrored (redundant) root pool
I've tried both recently and I prefer attaching the replacement disk to
the single-disk root pool and then detaching the old disk, using
Hi Alex,
Not exactly. Just hadn't thought of that specific example yet, but it's a
good one so I'll add it.
In your case, ZFS might not see the expanded capacity of the larger disk
automatically due to a recent bug. For non-root pools, the workaround to
see the expanded space is to export and
Michael,
Sure. You can use Solaris 10 10/08 to initially install a ZFS root
file system as long as you're not interested in migrating a UFS
root file system to a ZFS root file system.
But if you want to migrate your existing UFS root file system to
a ZFS root file system, then you must perform
Hi Marlanne,
Excellent question and thank you for asking...
We have a set of instructions for creating root pool snapshots and
root pool recovery, here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#ZFS_Root_Pool_Recovery
The zfs send and recv options used in this
Hi Peter,
You need to select the text-mode install option to select a ZFS root
file system.
Other ZFS root installation tips are described here:
http://docs.sun.com/app/docs/doc/817-2271/zfsboot-1?a=view
I'll be attending Richard Elling's ZFS workshop at LISA08.
Hope to see you. :-)
Cindy
Good point and we've tried to document this issue all over the place
and will continue to publicize this fact.
With the new ZFS boot and install features, it is a good idea to read
the docs first. Tell your friends.
I will send out a set of s10 10/08 doc pointers as soon as they are
available.
Dick,
Well, not at the same time. :-)
If you are running a recent SXCE release and you have a mirrored ZFS
root pool with two disks, for example, you can boot off either disk,
as described in the ZFS Admin Guide, pages 81-85, here:
http://opensolaris.org/os/community/zfs/docs/
If you create a
Chris,
Tim Foster sent out this syntax previously:
zfs set com.sun:auto-snapshot=false dataset
Unless I'm misunderstanding your questions, try this for the dataset
on the removable media device.
Let me know if you have any issues.
I'm tracking the auto snapshot experience...
Cindy
Chris
Hi Eric,
Are you saying that you selected two disks for a mirrored root pool
during the initial install and because you changed the default
rpool name, the pool was created with just one disk?
I netinstalled build 96, selected two disks for the root pool mirror,
backspaced over rpool with mypool
Ross,
No need to apologize...
Many of us work hard to make sure good ZFS information is available so a
big thanks for bringing this wiki page to our attention.
Playing with UFS on ZFS is one thing but even inexperienced admins need
to know this kind of configuration will provide poor
Alain,
I think you want to use fmdump -eV to display the extended device
information. See the output below.
Cindy
class = ereport.fs.zfs.checksum
ena = 0x3242b9cdeac00401
detector = (embedded nvlist)
nvlist version: 0
version = 0x0
Hi Ivan,
If you are asking how you can make a ZFS root file system on a Solaris
10 system, then you'll need to wait a bit until that release is
available.
This feature is currently provided in the SXCE, build 90 release,
which provides similar support. You can read more about this support
Hi Richard,
Yes, sure. We can add that scenario.
What's been on my todo list is a ZFS troubleshooting wiki.
I've been collecting issues. Let's talk soon.
Cindy
Richard Elling wrote:
Tom Bird wrote:
Richard Elling wrote:
I see no evidence that the data is or is not correct. What we
Soren,
At this point, I'd like to know what fmdump -eV says about your disk so
you can determine whether it should be replaced or not.
Cindy
soren wrote:
soren wrote:
ZFS has detected that my root filesystem has a
small number of errors. Is there a way to tell which
specific files have been
Hi Ron,
Try again by using this syntax:
ok boot cdrom - text
Make sure you have reviewed the ZFS boot/install chapter in the ZFS
admin guide, here:
http://opensolaris.org/os/community/zfs/docs/
Cindy
Ron Halstead wrote:
I have a Sun Blade 2500 running nv_88. I want to install nv_94 with a
Mark,
I filed two bugs for these issues but they are not visible in the
Opensolaris bug database yet:
6731639 More NFSv4 ACL changes for ls.1 (Nevada)
6731650 More NFSv4 ACL changes for acl.5 (Nevada)
The current ls.1 man page can be displayed on docs.sun.com, here:
Mark,
Thanks for your detailed review comments. I will check where the latest
man pages are online and get back to you.
In the meantime, I can file the bugs to get these issues fixed on your
behalf.
Thanks again,
Cindy
Marc Bevand wrote:
I noticed some errors in ls(1), acl(5) and the ZFS
Hi Alan,
ZFS doesn't swap to a slice in build 92. In this build, a ZFS root
environment requires separate ZFS volumes for swap and dump devices.
The ZFS boot/install project and information trail starts here:
http://opensolaris.org/os/community/zfs/boot/
Cindy
Alan Burlison wrote:
I'm
Alan,
Just make sure you use dumpadm to point to a valid dump device and
this setup should work fine. Please let us know if it doesn't.
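For example (this assumes a root pool named rpool with the default dump
volume):
# dumpadm -d /dev/zvol/dsk/rpool/dump
Running dumpadm with no arguments shows the current configuration.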
The ZFS strategy behind automatically creating separate swap and
dump devices includes the following:
o Eliminates the need to create separate slices
o Enables
Rainer,
Sorry for your trouble.
I'm updating the installboot example in the ZFS Admin Guide with the
-F zfs syntax now. We'll fix the installboot man page as well.
Mark, I don't have an x86 system to test right now, can you send
me the correct installgrub syntax for booting a ZFS file system?
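For anyone following along, the syntax should be roughly as follows (the
disk slices are examples):
SPARC:
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0
x86:
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0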
For the record, the source of the ZFS Admin Guide is created with
an SGML editor that is not Framemaker. I agree that the evince PDF
display problems are with the font changes only.
Cindy
Akhilesh Mritunjai wrote:
Welcome to font hell :-(. For many years, Sun
documentation was written
in the