Hi Michael,
Whenever I see commands hanging, I would first rule out
any hardware issues.
I'm not sure how to do that on OS X.
Cindy
On 12/06/09 09:14, Michael Armstrong wrote:
Hi, I'm using zfs version 6 on mac os x 10.5 using the old macosforge
pkg. When I'm writing files to the fs they
On 12/07/09 09:37, Cindy Swearingen wrote:
Hi Xavier,
Neither the SMC interface nor the ZFS webconsole is available
in OpenSolaris releases. The SMC cannot be used for ZFS
administration in any Solaris release.
I'm not sure what the replacement plans are but you might
check with the experts
I agree that zpool attach and add look similar in their syntax,
but if you attempt to add a disk to a redundant config, you'll
see an error message similar to the following:
# zpool status export
pool: export
state: ONLINE
scrub: none requested
config:
NAME        STATE     READ
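For example, attempting to add a single disk to a raidz pool fails with
something like this (the device name is just a placeholder and the exact
wording can vary by release):
# zpool add export c1t8d0
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses raidz and new vdev is disk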
Hi Matthias,
I'm not sure I understand all the issues that are going on
in this configuration, but I don't see that you used the
zpool replace command to complete physical replacement
of the failed disk, which would look like this:
# zpool replace performance c1t3d0
Then run zpool clear to
Hi Alex,
The SXCE Admin Guide is generally up-to-date on docs.sun.com.
The section that covers the autoreplace property and default
behavior is here:
http://docs.sun.com/app/docs/doc/817-2271/gazgd?a=view
Thanks,
Cindy
On 12/07/09 14:50, Alexandru Pirvulescu wrote:
Thank you. That fixed the
practices guide, here, for guidelines on
creating ZFS storage pools:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
Cindy
On 12/03/09 15:26, Ragnar Sundblad wrote:
Thank you Cindy for your reply!
On 3 dec 2009, at 18.35, Cindy Swearingen wrote:
A bug might exist but you
Hi Dennis,
Yes, sorry for the confusion.
I just added the ZFS pool version (19-22) pages in the old OpenSolaris
site to avoid any problems, but it doesn't look like the newer pages
are redirecting correctly from the old site.
I filed a bug to alert people that the version pages have moved due
Hi Gary,
To answer your questions, the hardware read some data and ZFS detected
a problem with the checksums in this dataset and reported this problem.
ZFS can do this regardless of ZFS redundancy.
I don't think a scrub will fix these permanent errors, but it depends
on the corruption. If it's
Hi Bill,
I can't comment on why your USB device names are changing, but I have
seen BIOS upgrades do similar things to device names.
If you must run a root pool on USB sticks, then I think you would have
to boot from the LiveCD before running the BIOS upgrade. Maybe someone
can comment. On Sun
in device names and the beadm activate will fail with
something like this:
ERROR: Unable to determine the configuration of the current boot environment
Cindy
On 12/04/09 15:26, Cindy Swearingen wrote:
Hi Bill,
I can't comment on why your USB device names are changing, but I have
seen BIOS upgrades do
Hi Ragnar,
A bug might exist but you are building a pool based on the ZFS
volumes that are created in another pool. This configuration
is not supported and possible deadlocks can occur.
If you can retry this example without building a pool on another
pool, like using files to create a pool and
I'm not sure we have any LDOMs experts on this list.
You might try reposting this query on the LDOMs discuss list,
which I think is this one:
http://forums.sun.com/forum.jspa?forumID=894
Thanks,
Cindy
On 12/02/09 08:17, Andre Boegelsack wrote:
Hi to all,
I have a short question regarding
Hi Jim,
Nevada build 128 had some problems so it will not be released.
The dedup space fixes should be available in build 129.
Thanks,
Cindy
On 12/02/09 02:37, Jim Klimov wrote:
Hello all
Sorry for bumping an old thread, but now that snv_128 is due to appear as a
public DVD download, I
Apparently, I don't know a DomU from a LDOM...
I should have pointed you to the Xen discussion list, here:
http://opensolaris.org/jive/forum.jspa?forumID=53
Cindy
On 12/02/09 08:58, Cindy Swearingen wrote:
I'm not sure we have any LDOMs experts on this list.
You might try reposting
Hi Chris,
If you have 40 or so disks then you would create 5-6 RAIDZ virtual
devices of 7-8 disks each, or possibly include two disks for the root
pool, two disks as spares, and then 36 (6 RAIDZ vdevs of 6 disks) disks
for a non-root pool.
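As a rough sketch of building up such a non-root pool (disk names are
placeholders), each RAIDZ vdev is added in turn:
# zpool create datapool raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
# zpool add datapool raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0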
This configuration guide hasn't been updated for
. Like I said, our storage group presents 15G LUNs to use --
so it'd be difficult to keep the TLDs under 9 and have a very large
filesystem.
Let me know what you think. Thanks!
Chris
On Tue, Dec 1, 2009 at 10:47 AM, Cindy Swearingen
cindy.swearin...@sun.com mailto:cindy.swearin...@sun.com
I was able to reproduce this problem on the latest Nevada build:
# zpool create tank raidz c1t2d0 c1t3d0 c1t4d0
# zpool add -n tank raidz c1t5d0 c1t6d0 c1t7d0
would update 'tank' to the following configuration:
tank
raidz1
c1t2d0
c1t3d0
Hi Stuart,
In which Solaris release are you seeing this behavior?
I would like to reproduce it and file a bug, if necessary.
Thanks,
Cindy
On 11/29/09 13:06, Stuart Reid wrote:
Answered my own question...
When using the -n switch the output is truncated, i.e., the d0 is not printed.
When actually
Hi all,
on an x4500 with a relatively well-patched Sol10u8
# uname -a
SunOS s13 5.10 Generic_141445-09 i86pc i386 i86pc
I've started a scrub after about 2 weeks of operation
and have a lot of
checksum errors:
s13:~# zpool status
Thanks old friend
I was surprised to read in the S10 zfs man page that there was the
option sharesmb=on. I thought I had missed the CIFS server making S10
whilst I
Hi Sean,
I sympathize with your intentions but providing pseudo-names for these
disks might cause more confusion than actual help.
The c4t5... name isn't so bad. I've seen worse. :-)
Here are the issues with using the aliases:
- If a device fails on a J4200, an LED will indicate which disk has
Hi Daniel,
Unfortunately, the permanent errors are in this pool's metadata so it is
unlikely that this pool can be recovered.
Is this an external USB drive? These drives are not always well-behaved
and it's possible that it didn't synchronize successfully.
Is the data accessible? I don't know
Hi Daniel,
In some cases, when I/O is suspended, permanent errors are logged and
you need to run a zpool scrub to clear the errors.
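For example (the pool name is just a placeholder):
# zpool scrub tank
# zpool status -v tank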
Are you saying that a zpool scrub cleared the errors that were
displayed in the zpool status output? Or, did you also use zpool
clear?
Metadata is duplicated even
Seems like upgrading from b126 to b127 will have the
same problem.
Yes, good point. I provided a blurb about this issue, here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Live_Upgrade_Problem_.28Starting_in_Nevada.2C_build_125.29
It's a good idea to review this
Original Message
Subject: [osol-announce] IMPT: Infrastructure upgrade this weekend, 11/13-15
Date: Wed, 11 Nov 2009 12:37:19 -0800
From: Derek Cicero derek.cic...@sun.com
Reply-To: mai...@opensolaris.org
To: opensolaris-annou...@opensolaris.org
All,
Due to infrastructure
Hi Tim,
In a pool with mixed disk sizes, ZFS can use only the amount of disk
space that is equal to the smallest disk and spares aren't included in
pool size until they are used.
In your RAIDZ-2 pool, this is equivalent to 10 x 500 GB disks, which
should be about 5 TBs.
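(That is, with two disks' worth of parity set aside, the remaining ten
disks at 500 GB each give roughly 5 TB of usable space.)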
I think you are running a
Cook wrote:
On Thu, Nov 12, 2009 at 4:05 PM, Cindy Swearingen
cindy.swearin...@sun.com mailto:cindy.swearin...@sun.com wrote:
Hi Tim,
In a pool with mixed disk sizes, ZFS can use only the amount of disk
space that is equal to the smallest disk and spares aren't included in
pool
This feature is described in this RFE:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=4930014
Secure delete option: erase blocks after they're freed
cs
On 11/11/09 09:17, Darren J Moffat wrote:
Brian Kolaci wrote:
Hi,
I was discussing the common practice of disk eradication used
Cook wrote:
On Tue, Nov 10, 2009 at 4:38 PM, Cindy Swearingen
cindy.swearin...@sun.com mailto:cindy.swearin...@sun.com wrote:
Hi Tim,
I'm not sure I understand this output completely, but have you
tried detaching the spare?
Cindy
Hey Cindy,
Detaching did in fact solve
Hi Orvar,
Correct, I don't see any marvell88sx2 driver changes between b125-126.
So far, only you and Tim are reporting these issues.
Generally, we see bugs filed by the internal test teams if they see
similar problems.
I will try to reproduce the RAIDZ checksum errors separately from the
Hi Tim,
I'm not sure I understand this output completely, but have you
tried detaching the spare?
Cindy
On 11/10/09 09:21, Tim Cook wrote:
So, I currently have a pool with 12 disks raid-z2 (12+2). As you may
have seen in the other thread, I've been having on and off issues with
b126
Hi,
I can't find any bug-related issues with marvell88sx2 in b126.
I looked over Dave Hollister's shoulder while he searched for
marvell in his webrevs of this putback and nothing came up:
driver change with build 126?
not for the SATA framework, but for HBAs there is:
Hi Tim and all,
I believe you are saying that marvell88sx2 driver error messages started
in build 126, along with new disk errors in RAIDZ pools.
Is this correct? If so, please send me the following information:
1. Hardware you are running
2. If you are also seeing new disk errors in your
Hi Rich,
In build 125, the device naming changed for redundant pools.
LU doesn't understand the new device naming if you have a mirrored root pool.
I believe an upgrade from 121 to 126 will be okay. Any LU operation on
your build 126 system will likely fail unless you follow Casper's steps
for
Hi Karl,
Welcome to Solaris/ZFS land ...
ZFS administration is pretty easy but our device administration
is more difficult.
I'll probably bungle this response because I don't have similar
hardware and I hope some expert will correct me.
I think you will have to experiment with various forms
Alex,
You can download the man page source files from this URL:
http://dlc.sun.com/osol/man/downloads/current/
If you want a different version, you can navigate to the available
source consolidations from the Downloads page on opensolaris.org.
Thanks,
Cindy
On 11/02/09 16:39, Cindy
Hi David,
This RFE is filed for this feature:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6893282
Allow the zpool command to wipe labels from disks
Cindy
On 11/03/09 09:00, David Dyer-Bennet wrote:
On Mon, November 2, 2009 20:23, Marion Hakanson wrote:
You'll need to give
Hi Alex,
I'm checking with some folks on how we handled this handoff
for the previous project.
I'll get back to you shortly.
Thanks,
Cindy
On 11/02/09 16:07, Alex Blewitt wrote:
The man pages documentation from the old Apple port
Hi Dan,
Could you provide a bit more information, such as:
1. zpool status output for tank
2. the format entries for c0d0 and c1d1
Thanks,
Cindy
- Original Message -
From: Daniel dan.lis...@gmail.com
Date: Thursday, October 29, 2009 9:59 am
Subject: [zfs-discuss] adding new disk to
ONLINE 0 0 0
errors: No known data errors
format> current
Current Disk = c1d1
ST315003- 6VS08NK-0001-16777215.
/p...@0,0/pci-...@1f,2/i...@0/c...@1,0
On Thu, Oct 29, 2009 at 12:04 PM, Cindy Swearingen
cindy.swearin...@sun.com mailto:cindy.swearin...@sun.com wrote:
Hi
cannot create 'tank2': invalid argument for this pool operation
Thanks for your help.
On Thu, Oct 29, 2009 at 1:54 PM, Cindy Swearingen
cindy.swearin...@sun.com mailto:cindy.swearin...@sun.com wrote:
I might need to see the format->partition output for both c0d0 and
c1d1
Jeremy,
I generally suspect device failures in this case and if possible,
review the contents of /var/adm/messages and fmdump -eV to see
if the pool hang could be attributed to failed or failing devices.
Cindy
On 10/26/09 17:28, Jeremy Kitchen wrote:
Cindy Swearingen wrote:
Hi Jeremy,
Can
Hi Frederik,
In most cases, you can use the zfs get syntax below or you can use the
zfs get all fs-name to review all current property settings.
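For example (the file system name is just a placeholder):
# zfs get checksum tank/home
# zfs get all tank/home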
The checksum property is a bit different in that you need to review
the zfs.1m man page checksum property description to determine the value
of the
this
device until it is replaced. If you have another device available,
you might replace the suspect drive and see if that solves the
pool hang problem.
Cindy
On 10/27/09 12:04, Jeremy Kitchen wrote:
Cindy Swearingen wrote:
Jeremy,
I generally suspect device failures in this case
, which would
allow reads to continue in case of a device failure, might prevent
the pool from hanging.
If offlining the disk or replacing the disk doesn't help, let us know.
Cindy
On 10/27/09 13:13, Jeremy Kitchen wrote:
Jeremy Kitchen wrote:
Cindy Swearingen wrote:
Jeremy,
I generally suspect
Hi Ross,
The CR ID is 6740597:
zfs fletcher-2 is losing its carries
Integrated in Nevada build 114 and the Solaris 10 10/09 release.
This CR didn't get a companion man page bug to update the docs
so I'm working on that now.
The opensolaris.org site seems to be in the middle of its migration
Hi Jeremy,
Can you use the command below and send me the output, please?
Thanks,
Cindy
# mdb -k
::stacks -m zfs
On 10/26/09 11:58, Jeremy Kitchen wrote:
Jeremy Kitchen wrote:
Hey folks!
We're using zfs-based file servers for our backups and we've been having
some issues as of late with
Hi Sean,
A better way probably exists but I use fmdump -eV to identify the
pool and the device information (vdev_path) that is listed like this:
# fmdump -eV | more
.
.
.
pool = test
pool_guid = 0x6de45047d7bde91d
pool_context = 0
pool_failmode = wait
Hi Karim,
All ZFS storage pools are going to use some amount of space for
metadata and in this example it looks like 3 GB. This is what
the difference between zpool list and zfs list is telling you.
No other way exists to calculate the space that is consumed by
metadata.
pool space (199 GB)
I'm stumped too. Someone with more FM* experience needs to comment.
Cindy
On 10/23/09 14:52, sean walmsley wrote:
Thanks for this information.
We have a weekly scrub schedule, but I ran another just to be sure :-) It
completed with 0 errors.
Running fmdump -eV gives:
TIME
Probably if you try to use any LU operation after you have upgraded to
build 125.
cs
On 10/23/09 16:18, Chris Du wrote:
Sorry, do you mean luupgrade from previous versions or from 125 to future
versions?
I luupgrade from 124 to 125 with mirrored root pool and everything is working
fine.
Thanks for your comments, Frank.
I will take a look at the inconsistencies.
Cindy
On 10/22/09 08:29, Frank Cusack wrote:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#RAID-Z_Configuration_Requirements_and_Recommendations
says that the number of disks in a RAIDZ
Hi Bruno,
I see some bugs associated with these messages (6694909) that point to
an LSI firmware upgrade that causes these harmless errors to be displayed.
According to the 6694909 comments, this issue is documented in the
release notes.
As they are harmless, I wouldn't worry about them.
Maybe
Hi Jason,
Since spare replacement is an important process, I've rewritten this
section to provide 3 main examples, here:
http://docs.sun.com/app/docs/doc/817-2271/gcvcw?a=view
Scroll down the section:
Activating and Deactivating Hot Spares in Your Storage Pool
Example 4–7 Manually Replacing
Hi Matthew,
You can use various forms of fmdump to decode this output.
It might be easier to use fmdump -eV and look for the
device info in the vdev path entry, like the one below.
Also see if the errors on these vdevs are reported in
your zpool status output.
Thanks,
Cindy
# fmdump -eV |
this is resolved, is there some
documentation
available that will let me calculate this by hand? I would like to know
how large
the current 3-4% metadata storage I am observing can potentially grow.
Thanks.
On Oct 20, 2009, at 8:57 AM, Cindy Swearingen wrote:
Hi Stuart,
The reason why used
Hi Stacy,
Can you try to forcibly create a new pool using the devices from
the corrupted pool, like this:
# zpool create -f newpool disk1 disk2 ...
Then, destroy this pool, which will release the devices.
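For example, with placeholder device names:
# zpool create -f newpool c2t1d0 c2t2d0
# zpool destroy newpool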
This CR has been filed to help resolve the pool cruft problem:
6893282 Allow the zpool
Hi Stuart,
The reason used is larger than the volsize is that we
aren't accounting for metadata, which is covered by this CR:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6429996
6429996 zvols don't reserve enough space for requisite meta data
Metadata is usually only a
It's updated now. Thanks for mentioning it.
Cindy
On 10/18/09 10:19, Sriram Narayanan wrote:
All:
Given that the latest S10 update includes user quotas, the FAQ here
[1] may need an update
-- Sriram
[1] http://opensolaris.org/os/community/zfs/faq/#zfsquotas
Hi Markus,
The numbered VDEVs listed in your zpool status output facilitate log
device removal, which integrated in build 125. Eventually, they will
also be used for removal of redundant devices when device removal
integrates.
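Removing a log device, for example, looks like this (pool and device
names are placeholders):
# zpool remove tank c1t5d0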
In build 125, if you create a pool with mirrored log devices, and
Hi Tomas,
I think you are saying that you are testing what happens when you
increase a slice under a live ZFS storage pool and then reviewing
the zdb output of the disk labels.
Increasing a slice under a live ZFS storage pool isn't supported and
might break your pool.
I think you are seeing
We are working on evaluating all the issues and will get problem
descriptions and resolutions posted soon. I've asked some of you to
contact us directly to provide feedback and hope those wheels are
turning.
So far, we have these issues:
1. Boot failure after LU with a separate var dataset.
, Tomas Ögren wrote:
On 19 October, 2009 - Cindy Swearingen sent me these 2,4K bytes:
Hi Tomas,
I think you are saying that you are testing what happens when you
increase a slice under a live ZFS storage pool and then reviewing
the zdb output of the disk labels.
Increasing a slice under a live
Hi everyone,
Currently, the device naming changes in build 125 mean that you cannot
use Solaris Live Upgrade to upgrade or patch a ZFS root dataset in a
mirrored root pool.
If you are considering this release for the ZFS log device removal
feature, then also consider that you will not be able
Hi Greg,
With two disks, I would start with a mirror. Then, you could add
two more disks for expansion. You can also detach disks in a mirrored
configuration. Or, you could attach another disk to create a 3-way
mirror.
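For example, with placeholder device names: create the mirror, add a
second mirror later to expand, or attach a third disk for a 3-way mirror:
# zpool create tank mirror c1t0d0 c1t1d0
# zpool add tank mirror c1t2d0 c1t3d0
# zpool attach tank c1t0d0 c1t4d0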
With a RAIDZ configuration, you would not be able to expand the
two disks to
Rodney,
I added a second swap device to my OSOL 2009.06 laptop and my system
running Nevada build 124. I can't reproduce this. Both swap devices
appear after reboot.
I would agree with Darren's comments that copies=2 is a better
configuration for a one-disk pool.
The fact that you can attach a
Other than how to turn these features on and off, only so much
performance-related info can be shoehorned into a man page.
You might check out these blogs:
http://blogs.sun.com/roch/entry/people_ask_where_are_we
See the direct I/O section
Hi Rodney,
I've not seen this problem.
Did you install using LiveCD or the automated installer?
Here are some things to try/think about:
1. After a reboot with no swap or dump devices, run this command:
# zfs volinit
If this works, then this command isn't getting run on boot.
Let me know
Hi Jason,
I think you are asking how you tell ZFS that you want to replace the
failed disk c8t7d0 with the spare, c8t11d0.
I just tried to do this on my Nevada build 124 lab system, simulating a
disk failure and using zpool replace to replace the failed disk with
the spare. The spare is now
0 0
c0t5d0 ONLINE 0 0 0
c0t7d0 ONLINE 0 0 0 48.5K resilvered
errors: No known data errors
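The sequence was roughly this (the pool name is a placeholder; the
device names are from the original question): replace the failed disk
with the spare, then detach the failed disk so the spare stays in place:
# zpool replace tank c8t7d0 c8t11d0
# zpool detach tank c8t7d0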
On 10/14/09 15:23, Eric Schrock wrote:
On 10/14/09 14:17, Cindy Swearingen wrote:
Hi Jason,
I think you are asking how do you tell ZFS
searchable What to do when you have
a zfs disk failure with lots of examples would be great. There are a
lot of attempts out there, but nothing I've found is comprehensive.
Jason
On Wed, Oct 14, 2009 at 4:23 PM, Eric Schrock eric.schr...@sun.com wrote:
On 10/14/09 14:17, Cindy Swearingen wrote:
Hi
/09 16:02, Eric Schrock wrote:
On 10/14/09 14:33, Cindy Swearingen wrote:
Hi Eric,
I tried that and found that I needed to detach and remove
the spare before replacing the failed disk with the spare
disk.
You should just be able to detach 'c0t6d0' in the config below. The
spare (c0t7d0
Hi--
Unfortunately, you cannot change the partitioning underneath your pool.
I don't see any way of resizing this partition except for backing up
your data, repartitioning the disk, and reinstalling OpenSolaris
2009.06.
Maybe someone else has a better idea...
Cindy
On 10/13/09 06:32, Julio
Except that you can't add a disk or partition to a root pool:
# zpool add rpool c1t1d0s0
cannot add to 'rpool': root pool can not have multiple vdevs or separate
logs
He could try to attach the partition to his existing pool, I'm not sure
how, and this would only create a mirrored root pool,
to backup
his pool, reconfigure and expand the solaris2 partition, and
then reinstall OpenSolaris.
Cindy
On 10/13/09 10:47, Cindy Swearingen wrote:
Except that you can't add a disk or partition to a root pool:
# zpool add rpool c1t1d0s0
cannot add to 'rpool': root pool can not have multiple vdevs
Hua,
The behavior below is described here:
http://docs.sun.com/app/docs/doc/819-5461/setup-1?a=view
The top-level /tank file system cannot be removed, so it is
less flexible than using descendent datasets.
If you want to create a snapshot or clone and later promote
the /tank clone, then it is
Dirk,
I'm not sure I'm following you exactly but this is what I think you are
trying to do:
You have a RAIDZ pool that is built with slices and you are trying to
convert the slice configuration to whole disks. This isn't possible
because you are trying to replace the same disk. This is what
Hi Osvald,
If you physically replaced the failed disk with even a slightly smaller
disk in a RAIDZ pool and ran the zpool replace command, you would have
seen a message similar to the following:
# zpool replace rescamp c0t6d0 c2t2d0
cannot replace c0t6d0 with c2t2d0: device is too small
Did
Hi Stacy,
If you can't import the pool, then it is difficult to remove the disks.
If the pool had enough redundancy, you could attempt to unconfigure the
corrupted disks with cfgadm and then try to import the pool.
Until we have a zpool clean feature, you could wipe the disk labels with
dd in
Hi Osvald,
Can you comment on how the disks shrank or how the labeling on these
disks changed?
We would like to track the issues that cause the hardware underneath
a live pool to change so that we can figure out how to prevent pool
failures in the future.
Thanks,
Cindy
On 10/03/09 09:46,
Yes, you can use the zpool replace process with any kind of drive:
failed, failing, or even healthy.
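For example, with both drives still connected (names are placeholders):
# zpool replace tank c0t3d0 c0t8d0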
cs
On 10/02/09 12:15, Dan Transue wrote:
Does the same thing apply for a failing drive? I have a drive that
has not failed but by all indications, it's about to. Can I do the
same thing
Ray,
The checksums are set on the file systems, not the pool.
If a new checksum is set and *you* rewrite the data, then the rewritten
data will contain the new checksum.
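The property is set per file system, for example (names are placeholders):
# zfs set checksum=sha256 tank/data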
If your pool has the space for you to duplicate the user data and a new
checksum is set, then the duplicated data will have
Hi David,
Which Solaris release is this?
Are you sure you are using the same ZFS command to review the sizes
of the raidz1 and raidz pools? The zpool list and zfs list commands
will display different values.
See the output below of my tank pool created with raidz or raidz1
redundancy. The pool
You are correct. The zpool create -O option isn't available in a Solaris
10 release but will be soon. This will allow you to set the file system
checksum property when the pool is created:
# zpool create -O checksum=sha256 pool c1t1d0
# zfs get checksum pool
NAME PROPERTY VALUE SOURCE
David,
When you get back to the original system, it would be helpful if
you could provide a side-by-side comparison of the zpool create
syntax and the zfs list output of both pools.
Thanks,
Cindy
On 10/01/09 13:48, David Stewart wrote:
Cindy:
I am not at the machine right now, but I
Hi Ron,
Any reason why you want to use slices except for the root pool?
I would recommend a 4-disk configuration like this:
mirrored root pool on c1t0d0s0 and c2t0d0s0
mirrored app pool on c1t1d0 and c2t1d0
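A minimal sketch of creating the app pool afterward (the root pool
mirror is set up during the install; device names are from the layout above):
# zpool create apppool mirror c1t1d0 c2t1d0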
Let the install use one big slice for each disk in the mirrored root
pool, which is
The opensolaris.org site will be transitioning to a wiki-based site
soon, as described here:
http://www.opensolaris.org/os/about/faq/site-transition-faq/
I think it would be best to use the new site to collect this
information because it will be much easier for community members
to contribute.
Hi Donour,
You would use the boot -L syntax to select the ZFS BE to boot from,
like this:
ok boot -L
Rebooting with command: boot -L
Boot device: /p...@8,60/SUNW,q...@4/f...@0,0/d...@w2104cf7fa6c7,0:a
File and args: -L
1 zfs1009BE
2 zfs10092BE
Select environment to boot: [ 1 - 2 ]:
Hi David,
All system-related components should remain in the root pool, such as
the components needed for booting and running the OS.
If you have datasets like /export/home or other non-system-related
datasets in the root pool, then feel free to move them out.
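One way to move such a dataset (names are placeholders) is a snapshot
plus send/receive into another pool:
# zfs snapshot rpool/export/home@move
# zfs send rpool/export/home@move | zfs receive tank/home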
Moving OS components out of the
Hi Karl,
Manually cloning the root pool is difficult. We have a root pool
recovery procedure that you might be able to apply as long as the
systems are identical. I would not attempt this with Live Upgrade
and manual tweaking.
info stored in the root pool?
Thanks
Peter
2009/9/24 Cindy Swearingen cindy.swearin...@sun.com:
Hi Karl,
Manually cloning the root pool is difficult. We have a root pool recovery
procedure that you might be able to apply as long as the
systems are identical. I would not attempt
Karl,
I'm not sure I'm following everything. If you can't swap the drives,
then which pool would you import?
If you install the new v210 with snv_115, then you would have a bootable
root pool.
You could then receive the snapshots from the old root pool into the
root pool on the new v210.
I
Dustin,
You didn't describe the process that you used to replace the disk, so it's
difficult to comment on what happened.
In general, you physically replace the disk and then let ZFS know that
the disk is replaced, like this:
# zpool replace pool-name device-name
This process is described
Michael,
ZFS handles EFI labels just fine, but you need an SMI label on the disk
that you are booting from.
Are you saying that localtank is your root pool?
I believe the OSOL install creates a root pool called rpool. I don't
remember if it's configurable.
Changing labels or partitions
Dave,
I've searched opensolaris.org and our internal bug database.
I don't see that anyone else has reported this problem.
I asked someone from the OSOL install team and this behavior
is a mystery.
If you destroyed the phantom pools before you reinstalled,
then they probably returned from the
In addition, if you need the flexibility of moving disks around until
the device removal CR integrates, then mirrored pools are more flexible.
Detaching disks from a mirror isn't ideal but if you absolutely have
to reuse a disk temporarily then go with mirrors. See the output below.
You can
Hi RB,
We have a draft of the ZFS/flar image support here:
http://opensolaris.org/os/community/zfs/boot/flash/
Make sure you review the Solaris OS requirements.
Thanks,
Cindy
On 09/14/09 11:45, RB wrote:
Is it possible to create flar image of ZFS root filesystem to install it to
other
Hi Brian,
I'm tracking this issue and expected resolution, here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#RAID-Z_Checksum_Errors_in_Nevada_Builds.2C_120-123
Thanks,
Cindy
On 09/10/09 13:21, Brian Hechinger wrote:
I've hit google and it looks like this is
Hi Jon,
If the zpool import command shows the old rpool and associated disk
(c1t1d0s0), then you might able to import it like this:
# zpool import rpool rpool2
Which renames the original pool, rpool, to rpool2, upon import.
If the disk c1t1d0s0 was overwritten in any way then I'm not sure
Hi Mike,
I reviewed this doc and the only issue I have with it now is that it uses
/var/tmp as an example of storing snapshots in long-term storage
elsewhere.
For short-term storage, storing a snapshot as a file is an acceptable
solution as long as you verify that the snapshots as files are valid