Hi Ricardo,
I just tested most of the links from the zones.5 man page on
docs.sun.com and they all seem to be working now.
Outages occurred a couple of weeks ago, but everything seems to be
back to normal.
Please let me know if you have any more problems with docs.sun.com and
I'll file a service
D'oh! Thanks for the tip.
I was testing the Solaris Express version, which is working fine, here:
http://docs.sun.com/app/docs/doc/819-2252/6n4i8rtv2?a=view
If you're working with OpenSolaris features, then the Solaris Express
docs will more closely correlate than the Solaris 10 man pages.
Dennis,
You are absolutely correct that the doc needs a step to verify
that the backup occurred.
I'll work on getting this step added to the admin guide ASAP.
Thanks for feedback...
Cindy
Dennis Clarke wrote:
Am I missing something here? [1]
Dennis
[1] I am fully prepared for RTFM
Sorry, here's the correct URL:
http://docs.sun.com/app/docs/doc/817-2271/6mhupg6gl?a=view
Cindy
Al Hopper wrote:
On Tue, 1 Aug 2006, Cindy Swearingen wrote:
Hi Patrick,
Here's a pointer to the volume section in the ZFS admin guide:
http://docsview.sfbay/app/docs/doc/817-2271/6mhupg6gl
Hi Neal,
The ZFS administration class, available in the fall, I think, covers
basically the same content as the ZFS Admin Guide, only with extensive
lab exercises.
If you're an experienced admin, I think you can pick up most of
the basic features from the ZFS Admin Guide. If you can't, please
James,
I noticed your link to the ZFS Admin Guide is out of date because
I had appended the date to the PDF filename. That doesn't work: when
I update the guide, roughly once a month, you wouldn't get the latest
version.
So, I simplified this by renaming it as zfsadmin.pdf
The month/year is
Hi Brian,
See the previous posting about this below.
You can read about these features in the ZFS Admin Guide.
Cheers,
Cindy
Subject: Solaris 10 ZFS Update
From: George Wilson [EMAIL PROTECTED]
Date: Mon, 31 Jul 2006 11:51:09 -0400
To: zfs-discuss@opensolaris.org
We have putback a
Yes, hot spares are in the upcoming Solaris 10 release...
You can read about hot spares in the Solaris Express docs, here:
http://docs.sun.com/app/docs/doc/817-2271/6mhupg6ft?a=view#gcvcw
Essentially the same information will appear in the upcoming Solaris 10
version.
Cindy
ozan s. yigit
Hi--
ZFS stripes data across all pool configurations, but you can only detach
a device from a mirrored storage pool.
For more information, see this section:
http://docs.sun.com/app/docs/doc/817-2271/6mhupg6ft?a=view
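A minimal sketch, assuming a hypothetical two-way mirror named tank
built from c1t0d0 and c1t1d0:
# zpool detach tank c1t1d0
Running the same command against a raidz pool fails, since detach only
applies to mirrors (and to disks that are in the middle of a replace).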
However, figuring out that this operation is only supported in a
mirrored
Hi Mike,
Yes, outside of the hot-spares feature, you can detach, offline, and
replace existing devices in a pool, but you can't remove devices, yet.
This feature work is being tracked under this RFE:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=4852783
Cindy
Mike Seda wrote:
Hi Betsy,
Yes, part of this is a documentation problem.
I recently documented the find -inum scenario in the community version
of the admin guide. Please see page 156 (well, for next time), here:
http://opensolaris.org/os/community/zfs/docs/
We're working on the larger issue as well.
Cindy
Hi Torrey,
The MD21 entries were removed from the /etc/format.dat file in the
Solaris 10 release, although the controller itself was EOL'd long
before that release.
However, the entries are not removed when you upgrade from a previous
release, which is covered by the following bug:
http://bugs.opensolaris.org/view_bug.do?bug_id=6280662
Cindy
roland wrote:
is it planned to add some other compression algorithm to zfs ?
lzjb is quite good and performs very well, but I'd like to have better compression (bzip2?), no matter how much worse the performance
Final for the first draft. :-)
Use the .../community/zfs/docs link to get to this doc; the link is at the
bottom of the page. The current version is indeed 0822.
More updates are needed, but the dnode description is still applicable.
Someone will correct me if I'm wrong.
cs
James Blackburn wrote:
Or
Uwe,
It was also unclear to me that legacy mounts were causing your
troubles. The ZFS Admin Guide describes ZFS mounts and legacy
mounts, here:
http://docs.sun.com/app/docs/doc/819-5461/6n7ht6qs6?a=view
Richard, I think we need some more basic troubleshooting info, such
as this mount failure.
Matt,
Generally, when a disk needs to be replaced, you replace the disk,
use the zpool replace command, and you're done...
This is only a little more complicated in your scenario below because
the disk is shared between ZFS and UFS.
Most disks are hot-pluggable so you generally don't need
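A minimal sketch of the usual case, assuming a hypothetical pool named
tank and a failed disk c0t2d0 that is swapped out in the same slot:
# zpool replace tank c0t2d0
# zpool status tank        (wait for the resilver to complete)
If the replacement goes into a different slot, name both the old and
the new device:
# zpool replace tank c0t2d0 c0t3d0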
Hi Kory,
No, they don't have to be the same size. But the pool size will be
constrained by the smallest disk, which might not be the best
use of your disk space.
See the output below. I'd be better off mirroring the two 136-GB
disks and using the 4-GB disk for something else. :-)
Cindy
c0t0d0 =
Malachi,
The section on adding devices to a ZFS storage pool in the ZFS Admin
Guide, here, provides an example of adding to a raidz configuration:
http://docsview.sfbay/app/docs/doc/817-2271/6mhupg6ft?a=view
I think I need to provide a summary of what you can do with
both raidz and mirrored
Here's the correct link:
http://docs.sun.com/app/docs/doc/817-2271/6mhupg6ft?a=view
The same example exists on page 52 of the 817-2271 PDF posted on
the opensolaris.../zfs/documentation page.
Cindy
Malachi de Ælfweald wrote:
FYI, that page is not publicly viewable. It was the 817-2271 PDF I
Hi Martin,
Yes, you can do this with the zpool attach command.
See the output below.
An example in the ZFS Admin Guide is here:
http://docs.sun.com/app/docs/doc/817-2271/6mhupg6ft?a=view
Cindy
# zpool create mpool c1t20d0
# zpool status mpool
pool: mpool
state: ONLINE
scrub: none
will be implemented?
Cindy Swearingen wrote:
Hi Mike,
Yes, outside of the hot-spares feature, you can detach, offline, and
replace existing devices in a pool, but you can't remove devices, yet.
This feature work is being tracked under this RFE:
http://bugs.opensolaris.org/bugdatabase/view_bug.do
Chris,
Looks like you're not running a Solaris release that contains
the zfs receive -F option. This option is in the current Solaris community
release, build 48.
http://docs.sun.com/app/docs/doc/817-2271/6mhupg6f1?a=view#gdsup
Otherwise, you'll have to wait until an upcoming Solaris 10 release.
Chris,
This option will be available in the upcoming Solaris 10 release, a
few months from now.
We'll send out a listing of the new ZFS features around that time.
Cindy
Krzys wrote:
Ah, ok, not a problem, do you know Cindy when next Solaris Update is
going to be released by SUN? Yes, I am
Mario,
Until zpool remove is available, you don't have any options to remove a
disk from a non-redundant pool.
Currently, you can:
- replace or detach a disk in a ZFS mirrored storage pool
- replace a disk in a ZFS RAID-Z storage pool
Please see the ZFS best practices site for more info about
Hi Rainer,
This is a long thread and I wasn't commenting on your previous
replies regarding mirror manipulation. If I was, I would have done
so directly. :-)
I saw the export-a-pool-to-remove-a-disk-solution described in
a Sun doc.
My point (and I agree with your points below) is that making a
Hi Lee,
You can decide whether you want to use ZFS for a root file system now.
You can find this info here:
http://opensolaris.org/os/community/zfs/boot/
Consider this setup for your other disks, which are:
250, 200 and 160 GB drives, and an external USB 2.0 600 GB drive
250GB = disk1
200GB
Lee,
Yes, the hot spare (disk4) should kick in if another disk in the pool fails
and yes, the data is moved to disk4.
You are correct:
160 GB (the smallest disk) * 3 + raidz parity info
Here's the size of a raidz pool comprised of 3 136-GB disks:
# zpool list
NAME                    SIZE
Arif,
You need to boot from {net | DVD} in single-user mode, like this:
boot net -s or boot cdrom -s
Then, when you get to a shell prompt, relabel the disk like this:
# format -e
select disk
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[0]: 0
Then, you should be able to
Huitzi,
Yes, you are correct. You can add more raidz devices in the future as
your excellent graphic suggests.
A similar zpool add example is described here:
http://docs.sun.com/app/docs/doc/817-2271/6mhupg6fu?a=view
This new section describes what operations are supported for both raidz
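A minimal sketch, assuming a hypothetical pool named tank that already
contains one raidz vdev:
# zpool add tank raidz c2t0d0 c2t1d0 c2t2d0
# zpool status tank
The new raidz vdev is striped alongside the existing one, so the pool
grows and new writes are spread across both vdevs.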
Hi Ed,
This BP was added as a lesson learned about not mixing these
models, because it's too confusing to administer, and for no other reason.
I'll update the BP to be clear about this.
I'm sure someone else will answer your NFSv3 question. (I'd like
to know too).
Cindy
Ed Ravin wrote:
Looking over
Jens,
Someone already added it to the ZFS links page, here:
http://opensolaris.org/os/community/zfs/links/
I just added a link to the links page from the zfs docs page
so it is easier to find.
Thanks,
Cindy
Jens Elkner wrote:
On Tue, Jun 19, 2007 at 05:19:05PM +0200, Constantin Gonzalez
Hi Young,
I will link these versions on the ZFS community docs page.
Thanks for the reminder. :-)
Cindy
Young Joo Pintaske wrote:
Hi ZFS Community,
Some time ago I posted a message that the ZFS Administration Guide was translated
(Russian and Brazilian Portuguese). There are several other
Sean,
This scenario is covered in the ZFS Admin Guide, found here:
http://docs.sun.com/app/docs/doc/817-2271/6mhupg6fu?a=view#gcfhe
I provided an example below.
Cindy
# zpool create tank02 c0t0d0
# zpool status tank02
pool: tank02
state: ONLINE
scrub: none requested
config:
Marko,
The ZFS Admin Guide has been updated to include the delegated
administration feature.
See Chapter 8, here:
http://opensolaris.org/os/community/zfs/docs/zfsadmin.pdf
Cindy
Matthew Ahrens wrote:
Marko Milisavljevic wrote:
Hmm.. my b69 installation understands zfs allow, but man zfs
Paul,
Scroll down a bit in this section to the default passwd/group tables:
http://docs.sun.com/app/docs/doc/819-2379/6n4m1vl99?a=view
Cindy
Paul Kraus wrote:
On 9/17/07, Darren J Moffat [EMAIL PROTECTED] wrote:
Why not use the already assigned webservd/webserved 80/80 uid/gid pair ?
Note
The log device feature integrated into snv_68.
You can read about it here:
http://blogs.sun.com/perrin/entry/slog_blog_or_blogging_on
And starting on page 18 of the ZFS Admin Guide, here:
http://opensolaris.org/os/community/zfs/docs
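A minimal sketch of configuring a log device, assuming a hypothetical
pool named tank and a spare device c2t5d0 to dedicate to the intent log:
# zpool add tank log c2t5d0
Or at pool creation time:
# zpool create tank mirror c1t0d0 c1t1d0 log c2t5d0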
Albert Chin wrote:
On Tue, Sep 18, 2007 at 12:59:02PM
Mike, Grant,
I reported the zoneadm.1m man page problem to the man page group.
I also added some stronger wording to the ZFS Admin Guide and the
ZFS FAQ about not using ZFS for zone root paths for the Solaris 10
release and that upgrading or patching is not supported for either
Solaris 10 or
Hi Stephen,
No, you can't replace a single device with a raidz device, but you can
create a mirror from one device by using zpool attach. See the output
below.
The other choice is to add to an existing raidz configuration. See
the output below.
I thought we had an RFE to expand an existing raidz
Chris,
I agree that your best bet is to replace the 128-MB device with
another device, fix the emcpower2a device manually, and then replace it
back. I don't know these drives at all, so I'm unclear about the
"fix it manually" step.
Because your pool isn't redundant, you can't use zpool offline
or detach.
Chris,
You need to use the zpool replace command.
I recently enhanced this section of the admin guide with more explicit
instructions on page 68, here:
http://opensolaris.org/os/community/zfs/docs/zfsadmin.pdf
If these are hot-swappable disks, for example, c0t1d0, then use this syntax:
#
Jonathan,
Thanks for providing the zpool history output. :-)
You probably missed the message after this command:
# zpool add tank c4t0d0
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses raidz and new vdev is disk
I provided some
Jonathan,
I think I remember seeing this error in an older Solaris release. The
current zpool.1m man page doesn't have this error unless I'm missing it:
http://docs.sun.com/app/docs/doc/819-2240/zpool-1m
In a current Solaris release, this command fails as expected:
# zpool create mirror
Hi Doug,
ZFS uses an EFI label so you need to use format -e to set it back to a
VTOC label, like this:
# format -e
Specify disk (enter its number)[4]: 3
selecting c0t4d0
[disk formatted]
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[1]: 0
Warning: This disk has an EFI label.
Shawn,
Using slices for ZFS pools is generally not recommended, so I think
we minimized any command examples with slices:
# zpool create tank mirror c1t0d0s0 c1t1d0s0
Keep in mind that using the slices from the same disk for both UFS
and ZFS makes administration more complex. Please see the ZFS
Hey Kory,
I think you must mean: can you detach one of the 73GB disks from moodle
and then add it to another pool of 146GB disks, and still save the
data from the 73GB disk?
You can't do this and save the data. By using zpool detach, you are
removing any knowledge of ZFS from that disk.
If you
Hi Kory,
Yes, I get it now. You want to detach one of the disks and then re-add
the same disk, but lose the redundancy of the mirror.
Just as long as you realize you're losing the redundancy.
I'm wondering if zpool add will complain. I don't have a system to
try this at the moment.
Cindy
Kory
Hi Kava,
Your questions are hard for me to answer without seeing your syntax.
Also, you don't need to futz with slices if you are using whole disks.
I added some add'l information to the zpool replace section
on page 74, here:
http://opensolaris.org/os/community/zfs/docs/zfsadmin.pdf
Note
Kava,
Because of a recent bug, you need to export and import the pool to see
the expanded space after you use zpool replace.
Also, you don't need to detach first. The process would look like this:
# zpool create test mirror 8gb-1 8gb-2
# zpool replace test 8gb-1 12gb-1
# zpool replace test
Because of the mirror mount feature, which integrated into Solaris
Express, build 77.
You can read about here on page 20 of the ZFS Admin Guide:
http://opensolaris.org/os/community/zfs/docs/zfsadmin.pdf
Cindy
Andrew Tefft wrote:
Let's say I have a zfs called pool/backups and it contains
Chris,
You can replace the disks one at a time with larger disks. No problem.
You can also add another raidz vdev, but you can't add disks to an
existing raidz vdev.
See the sample output below. This might not solve all your problems,
but should give you some ideas...
Cindy
# zpool create
Chris,
You would need to replace all the disks to see the expanded space.
Otherwise, space on the 1-2 larger disks would be wasted. If
you replace all the disks with larger disks, then yes, the
disk space in the raidz config would be expanded.
A ZFS mirrored config would be more flexible but it
David,
Try detaching the spare, like this:
# zpool detach pool-name c10t600A0B80001139967CE145E80D4Dd0
Cindy
David Smith wrote:
Addtional information:
It looks like perhaps the original drive is in use, and the hot spare is
assigned but not in use; see the zpool iostat output below:
The description of the file-system-only quotas and reservations feature
starts here:
http://docs.sun.com/app/docs/doc/817-2271/gfwpz?a=view
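Assuming this refers to the refquota and refreservation properties
(my reading of the section title), a minimal sketch:
# zfs set refquota=10g tank/home/user
# zfs set refreservation=10g tank/home/user
Unlike quota and reservation, these limits apply only to the file
system's own data, not to its snapshots or descendant datasets.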
cs
Eric Schrock wrote:
On Thu, Mar 20, 2008 at 06:41:42PM -0500, [EMAIL PROTECTED] wrote:
There was a change request put in to disable snaps affecting quota
Hi Mertol,
Log devices aren't supported in the Solaris 10 release yet. You would
have to run a Solaris Express version to configure log devices, such
as SXDE 9/07 or SXDE 1/08, described here:
http://docs.sun.com/app/docs/doc/817-2271/gfgaa?a=view
cs
Mertol Ozyoney wrote:
Hi All ;
I
Jeff,
No easy way exists to convert this configuration to a mirrored
configuration currently.
If you had two more disks, you could use zpool attach to create
two two-way mirrors. See the output below.
A more complicated solution is to create two files that are the size of
your existing
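For the two-extra-disks option, a minimal sketch, assuming a hypothetical
pool named tank striped across c1t0d0 and c1t1d0, with new disks c1t2d0
and c1t3d0:
# zpool attach tank c1t0d0 c1t2d0
# zpool attach tank c1t1d0 c1t3d0
# zpool status tank        (wait for each resilver to finish)
The result is two two-way mirrors.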
Hi Sam,
You might review the ZFS best practice site for maintenance
recommendations, here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
Cindy
Sam wrote:
I have a 10x500 disk file server with ZFS+; do I need to perform any sort of
periodic maintenance to the
Simon,
I think you should review the checksum error reports from the fmdump
output (dated 4/30) that you supplied previously.
You can get more details by using fmdump -ev.
Use zpool status -v to identify checksum errors as well.
Cindy
Simon Breden wrote:
Thanks Max,
I have not been able
Okay, thanks.
I wanted to rule out that the checksum errors reported on 4/30
were persistent enough to be picked up by zpool status. ZFS is
generally quick to identify device problems.
Since fmdump doesn't show any add'l recent errors either, I
think you can rule out hardware problems other
Hi Tom,
You need to use the zpool attach command, like this:
# zpool attach pool-name disk1 disk2
Cindy
Tom Buskey wrote:
I've always done a disksuite mirror of the boot disk. It's been easy to do
after the install in Solaris. With Linux I had to do it during the install.
OpenSolaris
Hi Orvar,
This section describes the operations you can do with a mirrored storage
pool:
http://docs.sun.com/app/docs/doc/817-2271/gazhv?a=view
This section describes the operations you can do with a raidz storage
pool:
http://docs.sun.com/app/docs/doc/817-2271/gcvjg?a=view
Go with mirrored
Tim,
Start at the zfs boot page, here:
http://www.opensolaris.org/os/community/zfs/boot/
Review the information and follow the links to the docs.
Cindy
- Original Message -
From: Tim [EMAIL PROTECTED]
Date: Wednesday, June 4, 2008 4:29 pm
Subject: Re: [zfs-discuss] Get your SXCE on
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Cindy
Swearingen
Sent: Wednesday, June 04, 2008 6:50 PM
To: Tim
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Get your SXCE on ZFS here!
Tim,
Start at the zfs boot page, here:
http
Uwe,
Please see pages 55-80 of the ZFS Admin Guide, here:
http://opensolaris.org/os/community/zfs/docs/
Basically, the process is to upgrade from nv81 to nv90 by using the
standard upgrade feature. Then, use lucreate to migrate your UFS root
file system to a ZFS file system, like this:
1.
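A minimal sketch of those steps, with a hypothetical slice c0t1d0s0 for
the root pool and a boot environment named zfsBE:
# zpool create rpool c0t1d0s0      (root pools must be created on slices)
# lucreate -n zfsBE -p rpool       (copy the current UFS BE into the pool)
# luactivate zfsBE
# init 6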
Mike,
As we discussed, you can't currently break out other datasets besides
/var. I'll add this issue to the FAQ.
Thanks,
Cindy
Ellis, Mike wrote:
In addition to the standard containing-the-carnage arguments used to
justify splitting /var/tmp, /var/mail, /var/adm (process accounting,
etc.),
Vincent,
I think you are running into some existing bugs, particularly this one:
http://bugs.opensolaris.org/view_bug.do?bug_id=6668666
Please review the list of known issues here:
http://opensolaris.org/os/community/zfs/boot/
Also check out the issues described on page 77 in this section:
You want to install the zfs boot block, not the ufs bootblock.
Check the syntax in the ZFS Admin Guide that is available
from this location:
http://opensolaris.org/os/community/zfs/docs
Cindy
- Original Message -
From: Vincent Fox [EMAIL PROTECTED]
Date: Friday, June 13, 2008 3:49 pm
Hi Dan,
I filed a bug 6715550 to fix this issue.
Thanks for reporting it--
Cindy
Dan Reiland wrote:
Yeah. The command line works fine. Thought it to be a
bit curious that there was an issue with the HTTP
interface. It's low priority I guess because it
doesn't impact the functionality really.
Sure. This operation can be done with whole disks too. The disk
(new_device) should be the same size or larger than the existing disk
(device).
You can review some examples here:
http://docs.sun.com/app/docs/doc/817-2271/gcfhe?a=view
If the disks are of unequal size, then some disk space will
I modified the ZFS Admin Guide to show a simple zfs send | zfs recv
example, then a more complex example using ssh to another system.
Thanks for the feedback...
Cindy
Andrius wrote:
James C. McPherson wrote:
Andrius wrote:
Boyd Adamson wrote:
Andrius [EMAIL PROTECTED] writes:
Hi,
Hi--
You can replace the failed disk and then detach the spare using the
general scenario described below. Some steps might be optional but I'm
pretty cautious about disk replacement, even when it's this easy.
Cindy
1. Physically replace the failed disk.
2. Let ZFS know that you replaced the
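A minimal sketch of the remaining commands, assuming a hypothetical pool
named tank, failed disk c1t2d0 replaced in the same slot, and spare
c1t5d0 currently in use:
# zpool replace tank c1t2d0      (tell ZFS the disk was physically replaced)
# zpool status tank              (wait for the resilver to complete)
# zpool detach tank c1t5d0       (return the hot spare to the spare pool)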
Hi--
I'm not quite sure about the exact sequence of events here, but it
sounds like you had two spares and replaced the failed disk with one of
the spares, which you can do manually with the zpool replace command.
The remaining spare should drop back into the spare pool if you detached
it. Check
Mark,
If you don't want to back up the data, destroy the pool, and
re-create the pool as a mirrored configuration, then another
option is to attach two more disks to create two mirrors of two
disks each.
See the output below.
Cindy
# zpool create zp01 c1t3d0 c1t4d0
# zpool status
pool: zp01
state:
ZFS uses EFI when a storage pool is created with whole disks.
ZFS uses the old-style VTOC label when a storage pool is created
with slices.
To be able to boot from a ZFS root pool, the storage pool must be
created with slices. This is a new requirement in ZFS land, and is
described in the doc
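A minimal sketch, assuming a hypothetical disk c0t0d0 with an SMI (VTOC)
label and a slice 0 that covers the usable space:
# zpool create rpool c0t0d0s0
Creating the pool on the slice (rather than on c0t0d0) preserves the
VTOC label that booting requires.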
Hi Joe,
Is it possible that your c0t1d0s0 disk has an existing EFI label instead
of a VTOC label?
(You can tell by using format--disk--partition and seeing whether
the cylinder info is displayed. If there is no cylinder info, it's an EFI
label.)
Relabel with a VTOC label, like this:
# format -e
select disk
For the record, the source of the ZFS Admin Guide is created with
an SGML editor, not FrameMaker. I agree that the evince PDF
display problems are with the font changes only.
Cindy
Akhilesh Mritunjai wrote:
Welcome to font hell :-(. For many years, Sun
documentation was written
in the
Rainer,
Sorry for your trouble.
I'm updating the installboot example in the ZFS Admin Guide with the
-F zfs syntax now. We'll fix the installboot man page as well.
Mark, I don't have an x86 system to test right now, can you send
me the correct installgrub syntax for booting a ZFS file system?
Hi Alan,
ZFS doesn't swap to a slice in build 92. In this build, a ZFS root
environment requires separate ZFS volumes for swap and dump devices.
The ZFS boot/install project and information trail starts here:
http://opensolaris.org/os/community/zfs/boot/
Cindy
Alan Burlison wrote:
I'm
Alan,
Just make sure you use dumpadm to point to a valid dump device (see the
sketch below) and this setup should work fine. Please let us know if it doesn't.
The ZFS strategy behind automatically creating separate swap and
dump devices includes the following:
o Eliminates the need to create separate slices
o Enables
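A minimal sketch of checking and, if necessary, pointing the dump device
at the ZFS volume (assuming the default rpool/dump name):
# dumpadm
# dumpadm -d /dev/zvol/dsk/rpool/dump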
Mark,
Thanks for your detailed review comments. I will check where the latest
man pages are online and get back to you.
In the meantime, I can file the bugs to get these issues fixed on your
behalf.
Thanks again,
Cindy
Marc Bevand wrote:
I noticed some errors in ls(1), acl(5) and the ZFS
Hi Ron,
Try again by using this syntax:
ok boot cdrom - text
Make sure you have reviewed the ZFS boot/install chapter in the ZFS
admin guide, here:
http://opensolaris.org/os/community/zfs/docs/
Cindy
Ron Halstead wrote:
I have a Sun Blade 2500 running nv_88. I want to install nv_94 with a
Mark,
I filed two bugs for these issues but they are not visible in the
Opensolaris bug database yet:
6731639 More NFSv4 ACL changes for ls.1 (Nevada)
6731650 More NFSv4 ACL changes for acl.5 (Nevada)
The current ls.1 man page can be displayed on docs.sun.com, here:
Soren,
At this point, I'd like to know what fmdump -eV says about your disk so
you can determine whether it should be replaced or not.
Cindy
soren wrote:
soren wrote:
ZFS has detected that my root filesystem has a
small number of errors. Is there a way to tell which
specific files have been
Hi Richard,
Yes, sure. We can add that scenario.
What's been on my todo list is a ZFS troubleshooting wiki.
I've been collecting issues. Let's talk soon.
Cindy
Richard Elling wrote:
Tom Bird wrote:
Richard Elling wrote:
I see no evidence that the data is or is not correct. What we
Hi Ivan,
If you are asking how you can make a ZFS root file system on a Solaris
10 system, then you'll need to wait a bit until that release is
available.
This feature is currently available in the SXCE build 90 release,
which provides similar support. You can read more about this support
Alain,
I think you want to use fmdump -eV to display the extended device
information. See the output below.
Cindy
class = ereport.fs.zfs.checksum
ena = 0x3242b9cdeac00401
detector = (embedded nvlist)
nvlist version: 0
version = 0x0
Ross,
No need to apologize...
Many of us work hard to make sure good ZFS information is available so a
big thanks for bringing this wiki page to our attention.
Playing with UFS on ZFS is one thing but even inexperienced admins need
to know this kind of configuration will provide poor
Hi Eric,
Are you saying that you selected two disks for a mirrored root pool
during the initial install and, because you changed the default
rpool name, the pool was created with just one disk?
I netinstalled build 96, selected two disks for the root pool mirror,
backspaced over rpool with mypool
Chris,
Tim Foster sent out this syntax previously:
zfs set com.sun:auto-snapshot=false dataset
Unless I'm misunderstanding your questions, try this for the dataset
on the removable media device.
Let me know if you have any issues.
I'm tracking the auto snapshot experience...
Cindy
Chris
Dick,
Well, not at the same time. :-)
If you are running a recent SXCE release and you have a mirrored ZFS
root pool with two disks, for example, you can boot off either disk,
as described in the ZFS Admin Guide, pages 81-85, here:
http://opensolaris.org/os/community/zfs/docs/
If you create a
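A rough sketch for SPARC, assuming a hypothetical OpenBoot device alias
disk1 for the second disk (the exact alias or device path depends on
your hardware):
ok boot disk1
On x86, you would instead pick the second disk in the BIOS boot order.
In either case, the boot blocks must already be installed on that disk.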
Hi Peter,
You need to select the text-mode install option to select a ZFS root
file system.
Other ZFS root installation tips are described here:
http://docs.sun.com/app/docs/doc/817-2271/zfsboot-1?a=view
I'll be attending Richard Elling's ZFS workshop at LISA08.
Hope to see you. :-)
Cindy
Good point. We've tried to document this issue all over the place
and will continue to publicize it.
With the new ZFS boot and install features, it is a good idea to read
the docs first. Tell your friends.
I will send out a set of s10 10/08 doc pointers as soon as they are
available.
Hi Marlanne,
Excellent question and thank you for asking...
We have a set of instructions for creating root pool snapshots and
root pool recovery, here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#ZFS_Root_Pool_Recovery
The zfs send and recv options used in this
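A minimal sketch of the snapshot-and-send half, assuming a release with
recursive send support and a hypothetical NFS-mounted location for the
stream:
# zfs snapshot -r rpool@backup
# zfs send -Rv rpool@backup > /net/remote-system/backups/rpool.backup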
Hi Alex,
Not exactly. Just hadn't thought of that specific example yet, but it's a
good one so I'll add it.
In your case, ZFS might not see the expanded capacity of the larger disk
automatically due to a recent bug. For non-root pools, the workaround to
see the expanded space is to export and
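A minimal sketch of that workaround, with a hypothetical non-root pool
named tank:
# zpool export tank
# zpool import tank
# zpool list tank        (SIZE should now reflect the larger disk)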
Michael,
Sure. You can use Solaris 10 10/08 to initially install a ZFS root
file system as long as you're not interested in migrating a UFS
root file system to a ZFS root file system.
But if you want to migrate your existing UFS root file system to
a ZFS root file system, then you must perform
Daniel,
You can replace the disks in both of the supported root pool
configurations:
- single disk (non-redundant) root pool
- mirrored (redundant) root pool
I've tried both recently and I prefer attaching the replacement disk to
the single-disk root pool and then detaching the old disk, using
Iman,
Yes, you can do either of the following:
o Select two disks for creating a mirrored root pool during an initial
installation
o Attach a second disk after the initial installation, like this:
# zpool attach rpool old-disk new-disk
In the attach disk scenario, you will also need to add the
Iman,
Sure, just select both disks during the install, like the screen below.
If you don't see all the disks on the system during the initial install,
then either there is an underlying configuration problem or you just
need to scroll down to see all the disks.
Cindy
Select Disks
On
Jianhua,
Use the format--label command, like the output below.
Cindy
# format -e c0t1d0
selecting c0t1d0
[disk formatted]
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current
Alex,
I think the root cause of your confusion is that the format utility and
disk labels are very unfriendly and confusing.
Partition 2 identifies the whole disk. On x86 systems, space is
needed for boot-related information, which is currently stored in
partition 8. Neither of these partitions
Hi Alex,
The fact that you have to install the boot blocks manually on the
second disk that you added with zpool attach is a bug! I should have
mentioned this bug previously.
If you had used the initial installation method to create a mirrored
root pool, the boot blocks would have been applied