Hi Ian,
I see the problem. In the URL you included below, you didn't
include the /N suffix that appears in the zpool upgrade
output.
CR 6898657 is still filed to identify the change.
If you copy and paste the URL from the zpool upgrade -v output:
Hi Paul,
Example 11-1 in this section describes how to replace a
disk on an x4500 system:
http://docs.sun.com/app/docs/doc/819-5461/gbcet?a=view
Cindy
On 01/09/10 16:17, Paul B. Henson wrote:
On Sat, 9 Jan 2010, Eric Schrock wrote:
If ZFS removed the drive from the pool, why does the
Hi Gary,
You might consider running OSOL on a later build, like build 130.
Have you reviewed the fmdump -eV output to determine on which devices
the ereports below have been generated? This might give you more clues
as to what the issues are. I would also be curious if you have any
driver-level
Hi--
The best approach is to correct the issues that are causing these
problems in the first place. The fmdump -eV command will identify
the hardware problems that caused the checksum errors and the corrupted
files.
You might be able to use some combination of zpool scrub, zpool clear,
and
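A rough sequence, assuming the pool is named tank (the name is illustrative):
# zpool scrub tank
# zpool status -v tank
# zpool clear tank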
Hi Dan,
I'm not sure I'm following everything here but I will try:
1. How do you offline a zvol? Can you show your syntax?
You can only offline a redundant pool component, such as a file, slice,
or whole disk.
2. What component does black represent? Only a pool can be exported.
3. In
Dan,
I see now how you might have created this config.
I tried to reproduce this issue by creating a separate pool on another
disk and a volume to attach to my root pool, but my system panics when
I try to attach the volume to the root pool.
This is on Nevada, build 130.
Panic aside, we don't
Hi,
I think you are saying that you copied the data on this system from a
previous system with hardware problems. It looks like the data that was
copied was corrupt, which is causing the permanent errors on the new
system (?)
The manual removal of the corrupt files, zpool scrub and zpool clear
Hi John,
The message below is a ZFS message, but it's not enough to figure out
what is going on in an LDOM environment. I don't know of any LDOMs
experts that hang out on this list so you might post this on the
ldoms-discuss list, if only to get some more troubleshooting data.
I think you are
Hi John,
In general, ZFS will warn you when you attempt to add a device that
is already part of an existing pool. One exception is when the system
is being re-installed.
I'd like to see the set of steps that led to the notification failure.
Thanks,
Cindy
On 01/19/10 20:58, John wrote:
I was
Hi Frank,
I couldn't reproduce this problem on SXCE build 130 by failing a disk in
a mirrored pool and then immediately running a scrub on the pool. It works
as expected.
Any other symptoms (like a power failure?) before the disk went offline?
Is it possible that both disks went offline?
We
Hi Frank,
We need both files.
Thanks,
Cindy
On 01/20/10 15:43, Frank Middleton wrote:
On 01/20/10 04:27 PM, Cindy Swearingen wrote:
Hi Frank,
I couldn't reproduce this problem on SXCE build 130 by failing a disk in
a mirrored pool and then immediately running a scrub on the pool. It works
Hi Alexander,
I'm not sure about the OpenSolaris release specifically, but for the
SXCE and Solaris 10 releases, we provide this requirement:
http://docs.sun.com/app/docs/doc/817-2271/zfsboot-1?a=view
* Solaris OS Components – All subdirectories of the root file system
that are part of the OS
Younes,
Including your zpool list output for tank would be helpful because zfs
list includes the AVAILABLE pool space. Determining volume space is a
bit trickier because volume size is set at creation time but the
allocated size might not be consumed.
I include a simple example below that might
Thanks Jack,
I was just a listener in this case. Tim did all the work. :-)
Cindy
On 01/23/10 21:49, Jack Kielsmeier wrote:
I'd like to thank Tim and Cindy at Sun for providing me with a new zfs binary
file that fixed my issue. I was able to get my zpool back! Hurray!
Thank You.
Hi Thomas,
Looks like a known problem in b131 that is fixed in b132:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6912791
Unable to set sharename using zfs set sharesmb=name=value
The workaround is to use sharemgr instead.
Thanks,
Cindy
On 01/23/10 21:50, Thomas Burgess wrote:
132 drop? if it's pretty soon i guess
i could just wait.
Thanks for the reply.
On Tue, Jan 26, 2010 at 10:42 AM, Cindy Swearingen
cindy.swearin...@sun.com wrote:
Hi Thomas,
Looks like a known problem in b131 that is fixed in b132:
http
the share.
# sharemgr show -vp
default nfs=()
zfs
myshare smb=()
mystuff=/tank/cindys
# cat /etc/dfs/sharetab
/tank/cindys-...@myshare smb
On 01/26/10 13:30, Thomas Burgess wrote:
On Tue, Jan 26, 2010 at 2:36 PM, Cindy Swearingen
cindy.swearin...@sun.com
Brad,
If you are referring to this thread that started in 2006, then I would
review this updated section:
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Cache_Flushes
Check to see if your array is described or let us know which array you
are referring to...
Thanks,
Brad,
It depends on the Solaris release. What Solaris release are you running?
Thanks,
Cindy
On 01/27/10 11:43, Brad wrote:
Cindy,
It does not list our SAN (LSI/STK/NetApp)...I'm confused about disabling cache
from the wiki entries.
Should we disable it by turning off zfs cache syncs via
Hi Dick,
Based on this message:
cannot attach c5d0s0 to c4d0s0: device is too small
c5d0s0 is the disk you are trying to attach so it must be smaller than
c4d0s0.
Is it possible that c5d0s0 is just partitioned so that the s0 is smaller
than s0 on c4d0s0?
On some disks, the default
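One way to compare the slice sizes, for example:
# prtvtoc /dev/rdsk/c4d0s0
# prtvtoc /dev/rdsk/c5d0s0
Compare the sector counts reported for slice 0 on each disk.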
Hi Brad,
You should see better performance on the dev box running 10/09 with the
sd and ssd drivers as is because they should properly handle the SYNC_NV
bit in this release.
If you have determined that the 11/06 system is affected by this issue,
then the best method is to set this parameter
/28/10 07:55, dick hoogendijk wrote:
Cindy Swearingen wrote:
On some disks, the default partitioning is not optimal and you have to
modify it so that the bulk of the disk space is in slice 0.
Yes, I know, but in this case the second disk indeed is smaller ;-(
So I wonder, should I reinstall
On 01/28/10 08:52, Thomas Maier-Komor wrote:
On 28.01.2010 15:55, dick hoogendijk wrote:
Cindy Swearingen wrote:
On some disks, the default partitioning is not optimal and you have to
modify it so that the bulk of the disk space is in slice 0.
Yes, I know, but in this case the second disk indeed
On 01/28/10 14:19, Lori Alt wrote:
On 01/28/10 14:08, dick hoogendijk wrote:
On Thu, 2010-01-28 at 12:34 -0700, Lori Alt wrote:
But those could be copied by send/recv from the larger disk (current
root pool) to the smaller disk (intended new root pool). You won't be
attaching anything
do the extra entries get added? The extra entry at the top seems
to block me from accessing the file.
On 01/25/2010 09:18 PM, Cindy Swearingen wrote:
Hi CD,
Practical in what kind of environment? What are your goals?
Do you want the ACL deny entries to be inherited?
Do you plan to use CIFS
Hi Michelle,
Your previous mail about the disk label reverting to EFI makes me wonder
whether you used the format -e option to relabel the disk, but your disk
label below looks fine.
This also might be a known bug (6419310), whose workaround is to use the
-f option to zpool attach.
An
I think the SATA(2)--SATA(1) connection will negotiate correctly,
but maybe some hardware expert will confirm.
cs
On 01/28/10 15:27, dick hoogendijk wrote:
On Thu, 2010-01-28 at 08:44 -0700, Cindy Swearingen wrote:
Or, if possible, connect another larger disk and attach it to the original
Hi Tony,
I'm no JumpStart expert but it looks to me like the error is
on the pool entry in the profile.
I would retest this install by changing the pool entry in the
profile like this:
install_type flash_install
archive_location nfs://192.168.1.230/export/install/media/sol10u8.flar
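followed by a pool line along these lines (the pool name, sizes, and device
are only illustrative, not necessarily what your profile should contain):
pool rpool auto auto auto c0t0d0s0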
Hi Michelle,
You're almost there, but install the bootblocks in s0:
# installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c19d0s0
Thanks,
Cindy
On 01/29/10 11:10, Michelle Knight wrote:
Well, I nearly got there. I used -f to force the overwrite and then installed
grub to slice 8
either BE in either pool. I thought beadm
would be similar, but let me find out.
Thanks,
Cindy
On 01/29/10 11:07, Dick Hoogendijk wrote:
Op 28-1-2010 17:35, Cindy Swearingen schreef:
Thomas,
Excellent and much better suggestion... :-)
You can use beadm to specify another root pool by using
Michelle,
Yes, the bootblocks and the pool coexist, even happily sometimes.
In general, you shouldn't have to deal with the boot partition stuff
that you see in the disk format output. If I could hide all this low-
level stuff from you, I would, because it's so dang confusing.
Looks like you got
Hi--
Were you trying to swap out a drive in your pool's raidz1 VDEV
with a spare device? Was that your original intention?
If so, then you need to use the zpool replace command to replace
one disk with another disk including a spare.
I would put the disks back to where they were and retry with
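For example (pool and device names here are illustrative):
# zpool replace tank c7t9d0 c7t11d0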
It's Monday morning so it still doesn't make sense. :-)
I suggested putting the disks back because I'm still not sure if you
physically swapped c7t11d0 for c7t9d0 or if c7t9d0 is still connected
and part of your pool. You might try detaching the spare as described
in the docs. If you put the
You are correct. Should be fine without -m.
Thanks,
Cindy
On 01/30/10 09:15, Fajar A. Nugraha wrote:
On Sat, Jan 30, 2010 at 2:02 AM, Cindy Swearingen
cindy.swearin...@sun.com wrote:
Hi Michelle,
You're almost there, but install the bootblocks in s0:
# installgrub -m /boot/grub/stage1
ZFS can generally detect device changes on Sun hardware, but for other
hardware, the behavior is unknown.
The most harmful pool problem I see, besides inadequate redundancy levels
or no backups, is device changes. Recovery can be difficult.
Follow recommended practices for replacing devices in a
depends on the
driver--ZFS interaction and we can't speak for all hardware.
Thanks,
Cindy
On 02/01/10 12:52, Frank Cusack wrote:
On February 1, 2010 10:19:24 AM -0700 Cindy Swearingen
cindy.swearin...@sun.com wrote:
ZFS has recommended ways for swapping disks so if the pool is exported
Hi,
How ZFS reacts to a failed disk can be difficult to anticipate
because some systems don't react well when you remove a disk. On an
x4500, for example, you have to unconfigure a disk before you can remove
it.
Before removing a disk, I would consult your h/w docs to see what the
Frank,
ZFS, Sun device drivers, and the MPxIO stack all work as expected.
Cindy
On 02/01/10 14:55, Frank Cusack wrote:
On February 1, 2010 4:15:10 PM -0500 Frank Cusack
frank+lists/z...@linetwo.net wrote:
On February 1, 2010 1:09:21 PM -0700 Cindy Swearingen
cindy.swearin...@sun.com wrote
Even if the pool is created with whole disks, you'll need to
use the s* identifier as I provided in the earlier reply:
# zdb -l /dev/dsk/cvtxdysz
Cindy
On 02/02/10 01:07, Tonmaus wrote:
If I run
# zdb -l /dev/dsk/c#t#d#
the result is failed to unpack label for any disk attached to
Hi Joerg,
Enabling the autoexpand property after the disk replacement is complete
should expand the pool. This looks like a bug. I can reproduce this
issue with files. It seems to be working as expected for disks.
See the output below.
Thanks, Cindy
Create pool test with 2 68 GB drives:
#
Hi David,
This feature integrated into build 117, which would be beyond
your OpenSolaris 2009.06. We anticipate this feature will be
available in an upcoming Solaris 10 release.
You can read about it here:
http://docs.sun.com/app/docs/doc/817-2271/githb?a=view
ZFS Device Replacement
Hi Brian,
If you are considering testing dedup, particularly on large datasets,
see the list of known issues, here:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/dedup
Start with build 132.
Thanks,
Cindy
On 02/04/10 16:19, Brian wrote:
I am starting to put together a home NAS
Hi Francois,
The autoreplace property works independently of the spare
feature.
Spares are activated automatically when a device in the main
pool fails.
Thanks,
Cindy
On 02/05/10 09:43, Francois wrote:
Hi list,
I've a strange behaviour with autoreplace property. It is set to off by
Hi Cesare,
If you want another way to replicate pools, you might be interested
in the zpool split feature that Mark Musante integrated recently.
You can read about it here:
http://blogs.sun.com/mmusante/entry/seven_years_of_good_luck
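A minimal sketch of the split itself (pool names are illustrative):
# zpool split tank tank2
# zpool import tank2
The new pool is left exported after the split until you import it.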
Cindy
- Original Message -
From: Cesare
Hi Richard,
I last updated this FAQ on 1/19.
Which part is not well-maintained?
:-)
Cindy
On 02/08/10 14:50, Richard Elling wrote:
This is a FAQ, but the FAQ is not well maintained :-(
http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq
On Feb 8, 2010, at 1:35 PM, Lasse Osterild
Hi Lasse,
I expanded this entry to include more details of the zpool list and
zfs list reporting.
See if the new explanation provides enough details.
Thanks,
Cindy
On 02/08/10 16:51, Lasse Osterild wrote:
On 09/02/2010, at 00.23, Daniel Carosone wrote:
On Mon, Feb 08, 2010 at 11:28:11PM
Hi Tester,
It is difficult for me to see all that is going on here. Can you provide
the steps and the complete output?
I tried to reproduce this on latest Nevada bits and I can't. The
snapshot sizing looks correct to me after a snapshot/clone promotion.
Thanks,
Cindy
# zfs create
Hi Marc,
I've not seen an unimportable pool when all the devices are reported as
ONLINE.
You might see if the fmdump -eV output reports any issues that happened
prior to this failure.
You could also attempt to rename the /etc/zfs/zpool.cache file and then
try to re-import the pool so that the
Hi Charles,
What kind of pool is this?
The SIZE and AVAIL amounts will vary depending on the ZFS redundancy and
whether the deflated or inflated amounts are displayed.
I attempted to explain the differences in the zpool list/zfs list
display, here:
Hi--
From your pre-promotion output, both fs1-patch and snap1 are
referencing the same 16.4 GB, which makes sense. I don't see how fs1
could be a
clone of fs1-patch because it should be REFER'ing 16.4 GB as well in
your pre-promotion zfs list.
If you snapshot, clone, and promote, then the
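For reference, the basic sequence looks like this (dataset and snapshot names
follow your example but are illustrative):
# zfs snapshot tank/fs1-patch@snap1
# zfs clone tank/fs1-patch@snap1 tank/fs1
# zfs promote tank/fs1
# zfs list -r -t all tank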
Hi Dennis,
You might be running into this issue:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6856341
The workaround is to force load the drivers.
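For example, by adding a forceload entry to /etc/system and rebooting
(the driver name depends on your HBA; mpt is only an illustration):
forceload: drv/mpt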
Thanks,
Cindy
On 02/17/10 14:33, Dennis Clarke wrote:
I find that some servers display a DEGRADED zpool status at boot. More
Hi Ethan,
Great job putting this pool back together...
I would agree with the disk-by-disk replacement by using the zpool
replace command. You can read about this command here:
http://docs.sun.com/app/docs/doc/817-2271/gazgd?a=view
Having a recent full backup of your data before making any
Hi David,
It's a life-long curse to describe the format utility. Trust me. :-)
I think you want to relabel some disks with an EFI label to SMI label
to be used in your ZFS root pool, and you have overlapping slices
on one disk. I don't think ZFS would let you attach this disk.
To fix the
Frank,
I can't comment on everything happening here, but please review the ZFS
root partition information in this section:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide
Replacing/Relabeling the Root Pool Disk
The p0 partition identifies the larger Solaris partition,
Hi Harry,
Our current scrubbing guideline is described here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
Run zpool scrub on a regular basis to identify data integrity problems.
If you have consumer-quality drives, consider a weekly scrubbing
schedule. If you have
Hi Dirk,
I'm not seeing anything specific to hanging scrubs on b132
and I can't reproduce it.
Any hardware changes or failures directly before the scrub?
You can rule out any hardware issues by checking fmdump -eV,
iostat -En, or /var/adm/messages output.
Thanks,
Cindy
On 02/20/10 12:56,
Hi Jeff,
The vmware pool is unavailable because the only device in the pool,
c7t0d0, is unavailable.
This problem is probably due to the device failing or being removed
accidentally.
You can follow the steps at the top of this section to help you
diagnose the c7t0d0 problems:
Ray,
Log removal integrated into build 125, so yes, if you upgraded to at
least OpenSolaris build 125 you could fix this problem. See the syntax
below on my b133 system.
In this particular case, importing the pool from b125 or later media
and attempting to remove the log device could not fix
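The removal itself is a single command (pool and device names are illustrative):
# zpool remove tank c0t5d0
# zpool status tank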
Hi Romain,
The option to select a ZFS root file system or a UFS root file system
is available starting in the Solaris 10 10/08 release.
Which Solaris 10 release are you trying to install?
Thanks,
Cindy
On 03/01/10 09:23, Romain LAMAISON wrote:
Hi all,
I wish to install a Solaris 10 on a
Hi Thomas,
I see that Richard has suggested mirroring your existing pool by
attaching slices from your 1 TB disk if the sizing is right.
You mentioned file security and I think you mean protecting your data
from hardware failures. Another option is to get one more disk to
convert this
Hi David,
I think installgrub is unhappy that no s2 exists on c7t1d0.
I would detach c7t1d0s0 from the pool and follow these steps
to relabel/repartition this disk:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide
Replacing/Relabeling the Root Pool Disk
Then, reattach
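A sketch of the sequence, assuming c7t0d0s0 is the remaining root pool disk
(device names are illustrative):
# zpool detach rpool c7t1d0s0
(relabel/repartition c7t1d0 per the guide above, with the bulk of the space in s0)
# zpool attach rpool c7t0d0s0 c7t1d0s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c7t1d0s0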
Hi Greg,
You are running into this bug:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6929751
Currently, building a pool from files is not fully supported.
Thanks,
Cindy
On 03/05/10 16:15, Gregory Durham wrote:
Hello all,
I am using Opensolaris 2009.06 snv_129
I have a quick
wrote:
Great...will using lofiadm still cause this issue? either by using
mkfile or by using dd making a sparse file? Thanks for the heads up!
On Fri, Mar 5, 2010 at 3:48 PM, Cindy Swearingen
cindy.swearin...@sun.com wrote:
Hi Greg,
You are running
Good catch Eric, I didn't see this problem at first...
The problem here, and Richard described it well, is that the ctdp* devices
represent the larger fdisk partition, which might also contain a ctds*
device.
This means that in this configuration, c7t0d0p3 and c7t0d0s0 might
share the same
Hi Tony,
Good questions...
Yes, you can assign a spare disk to multiple pools on the same system,
but not shared across systems.
The problem with sharing a spare disk with a root pool is that if the
spare kicks in, a boot block is not automatically applied. The
differences in the labels is
Hi Tim,
I'm not sure why your spare isn't kicking in, but you could manually
replace the failed disk with the spare like this:
# zpool replace fserv c7t5d0 c3t6d0
If you want to run with the spare for awhile, then you can also detach
the original failed disk like this:
# zpool detach fserv
Hi D,
Is this a 32-bit system?
We were looking at your panic messages and they seem to indicate a
problem with memory and not necessarily a problem with the pool or
the disk. Your previous zpool status output also indicates that the
disk is okay.
Maybe someone with similar recent memory
Hi Harry,
Reviewing other postings where permanent errors were found on redundant
ZFS configs, one was resolved by re-running the zpool scrub and one
resolved itself because the files with the permanent errors were most
likely temporary files.
One of the files with permanent errors below is
Hi Grant,
I don't have a v240 to test but I think you might need to unconfigure
the disk first on this system.
So I would follow the more complex steps.
If this is a root pool, then yes, you would need to use the slice
identifier, and make sure it has an SMI disk label.
After the zpool
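A rough outline, assuming the failed disk is c1t1d0 (the attachment point and
device names are illustrative):
# cfgadm -c unconfigure c1::dsk/c1t1d0
(physically replace the disk)
# cfgadm -c configure c1::dsk/c1t1d0
# zpool replace rpool c1t1d0s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0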
as this is production! Even an IM
would be helpful.
--- On Wed, 3/10/10, Cindy Swearingen cindy.swearin...@sun.com wrote:
From: Cindy Swearingen cindy.swearin...@sun.com
Subject: Re: [zfs-discuss] Replacing a failed/failed mirrored root disk
To: Grant Lowe gl...@sbcglobal.net
Cc: zfs-discuss@opensolaris.org
Date
Mar 2010 15:28:40 -0800 Cindy Swearingen wrote:
Hey list,
Grant says his system is hanging after the zpool replace on a v240,
running Solaris 10 5/09, 4 GB of memory, and no ongoing snapshots.
No errors from zpool replace so it sounds like the disk was physically
replaced successfully
Ian,
You might consider converting this pool to a mirrored pool, which is
currently more flexible than a raidz pool and provides good performance.
It's easy, too. See the example below.
Cindy
A non-redundant pool of one disk (33 GB).
# zpool status tank
pool: tank
state: ONLINE
scrub: none
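The conversion step itself is just an attach (device names are illustrative):
# zpool attach tank c1t0d0 c2t0d0
# zpool status tank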
://hub.opensolaris.org/bin/view/Community+Group+zfs/boot
And review Lori's slides at the bottom of this page.
Thanks,
cindy
On 03/12/10 08:41, David L Kensiski wrote:
On Mar 11, 2010, at 3:08 PM, Cindy Swearingen wrote:
Hi David,
In general, an I/O error means that the slice 0 doesn't exist
or some
Hi Michael,
For a RAIDZ pool, the zpool list command identifies the inflated space
for the storage pool, which is the physical space available without
accounting for the redundancy overhead.
The zfs list command identifies how much actual pool space is available
to the file systems.
See the
Hi Svein,
Here are a couple of pointers:
http://wikis.sun.com/display/OpenSolarisInfo/comstar+Administration
http://blogs.sun.com/observatory/entry/iscsi_san
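A minimal COMSTAR sketch of the equivalent workflow (volume name and size are
illustrative):
# zfs create -V 10g tank/iscsivol
# sbdadm create-lu /dev/zvol/rdsk/tank/iscsivol
# stmfadm add-view <GUID-from-sbdadm-output>
# svcadm enable -r svc:/network/iscsi/target:default
# itadm create-target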
Thanks,
Cindy
On 03/16/10 12:15, Svein Skogen wrote:
Things used to be simple.
zfs create -V xxg -o shareiscsi=on
Hi Dave,
I'm unclear about the autoreplace behavior with one spare that is
connected to two pools. I don't see how it could work if the autoreplace
property is enabled on both pools, which formats and replaces a spare
disk that might be in use in another pool (?) Maybe I misunderstand.
1. I
Hi Ned,
If you look at the examples on the page that you cite, they start
with single-parity RAIDZ examples and then move to a double-parity RAIDZ
example with supporting text, here:
http://docs.sun.com/app/docs/doc/819-5461/gcvjg?a=view
Can you restate the problem with this page?
Thanks,
this all idea is not that bad at all..
Can you provide anything on this subject?
Thanks,
Bruno
On 31-3-2010 23:49, Cindy Swearingen wrote:
Hi Ned,
If you look at the examples on the page that you cite, they start
with single-parity RAIDZ examples and then move to double-parity RAIDZ
example
documentation should not
have such example. However if people made such example in Sun
documentation, perhaps this all idea is not that bad at all..
Can you provide anything on this subject?
Thanks,
Bruno
On 31-3-2010 23:49, Cindy Swearingen wrote:
Hi Ned,
If you look at the examples on the page
Hi Marlanne,
I can import a pool that is created with files on a system running the
Solaris 10 10/09 release. See the output below.
This could be a regression from a previous Solaris release, although I
can't reproduce it, but creating a pool with files is not a recommended
practice as
Patrick,
I'm happy that you were able to recover your pool.
Your original zpool status says that this pool was last accessed on
another system, which I believe is what caused the pool to fail,
particularly if it was accessed simultaneously from two systems.
It is important that the cause of
Daniel,
Which Solaris release is this?
I can't reproduce this on my lab system that runs the Solaris 10 10/09
release.
See the output below.
Thanks,
Cindy
# zfs destroy -r tank/test
# zfs create -o compression=gzip tank/test
# zfs snapshot tank/t...@now
# zfs send -R tank/t...@now | zfs
Hi Daniel,
D'oh...
I found a related bug when I looked at this yesterday but I didn't think
it was your problem because you didn't get a busy message.
See this RFE:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6700597
Cindy
On 04/07/10 17:59, Daniel Bakken wrote:
We have found
Jonathan,
For a different diagnostic perspective, you might use the fmdump -eV
command to identify what FMA indicates for this device. This level of
diagnostics is below the ZFS level and definitely more detailed so
you can see when these errors began and for how long.
Cindy
On 04/14/10 11:08,
Hi Tony,
Is this on an x86 system?
If so, you might also check whether this disk has a Solaris fdisk
partition or an EFI fdisk partition.
If it has an EFI fdisk partition then you'll need to change it to a
Solaris fdisk partition.
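One way to check and change it (the device name is illustrative; fdisk -B
rewrites the table to a single whole-disk Solaris partition, so only use it on
a disk you intend to wipe):
# fdisk -W - /dev/rdsk/c1t0d0p0
# fdisk -B /dev/rdsk/c1t0d0p0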
See the pointers below.
Thanks,
Cindy
MstAsg,
Is this the root pool disk?
I'm not sure I'm following what you want to do but I think you want
to attach a disk to create a mirrored configuration, then detach
the original disk.
If this is a ZFS root pool that contains the Solaris OS, then
follow these steps:
1. Attach disk-2.
#
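Roughly, the sequence looks like this (disk-1 and disk-2 stand in for your
actual cXtYdZs0 device names):
# zpool attach rpool disk-1 disk-2
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/disk-2
(wait for the resilver to complete; check with zpool status rpool)
# zpool detach rpool disk-1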
If this isn't a root pool disk, then skip steps 3-4. Letting
the replacement disk resilver before removing the original
disk is good advice for any configuration.
cs
On 04/16/10 16:15, Cindy Swearingen wrote:
MstAsg,
Is this the root pool disk?
I'm not sure I'm following what you want to do
Hi Brandon,
I think I've done a similar migration before by creating a second root
pool and then creating a new BE in the new root pool, like this:
# zpool create rpool2 mirror disk-1 disk2
# lucreate -n newzfsBE -p rpool2
# luactivate newzfsBE
# installgrub ...
reboot to newzfsBE
I don't think
Hi Harry,
Both du and df are pre-ZFS commands and don't really understand ZFS
space issues, which are described in the ZFS FAQ here:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq
Why does du(1) report different file sizes for ZFS and UFS? Why doesn't
the space consumption that is
at 7:42 AM, Cindy Swearingen
cindy.swearin...@oracle.com wrote:
I don't think LU cares that the disks in the new pool are smaller,
obviously they need to be large enough to contain the BE.
It doesn't look like OpenSolaris includes LU, at least on x86-64.
Anyhow, wouldn't the method you mention fail
. Install the boot blocks.
5. Test that the system boots from the second root pool.
6. Update BIOS and GRUB to boot from new pool.
On 04/20/10 08:36, Cindy Swearingen wrote:
Yes, I apologize. I didn't notice you were running the OpenSolaris
release. What I outlined below would work on a Solaris 10
Hi Justin,
Maybe I misunderstand your question...
When you export a pool, it becomes available for import by using
the zpool import command. For example:
1. Export tank:
# zpool export tank
2. What pools are available for import:
# zpool import
pool: tank
id: 7238661365053190141
Hi Clint,
Your symptoms point to disk label problems, dangling device links,
or overlapping partitions. All could be related to the power failure.
The OpenSolaris error message (b134, I think you mean) brings up these
bugs:
6912251 describes the dangling links problem, which you might be
Hi Vlad,
The create-time permissions do not provide the correct permissions for
destroying descendent datasets, such as clones.
See example 9-5 in this section that describes how to use zfs allow -d
option to grant permissions on descendent datasets:
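For example (the user and dataset names are illustrative):
# zfs allow -d someuser mount,destroy tank/home/someuser
The -d option grants the permissions only on descendent datasets, not on
tank/home/someuser itself.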
Yes, it is helpful in that it reviews all the steps needed to get the
replacement disk labeled properly for a root pool and is identical
to what we provide in the ZFS docs.
The part that is not quite accurate is the reasoning for having to relabel
the replacement disk with the format utility.
Hi Lutz,
You can try the following commands to see what happened:
1. Someone else replaced the disk with a spare, which would be
recorded in this command:
# zpool history -l zfs01vol
2. If the disk had some transient outage then maybe the spare kicked
in. Use the following command to see if
Hi everyone,
Please review the information below regarding access to ZFS version
information.
Let me know if you have questions.
Thanks,
Cindy
CR 6898657:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6898657
ZFS commands zpool upgrade -v and zfs upgrade -v refer to URLs that
Hi Wolf,
Which Solaris release is this?
If it is an OpenSolaris system running a recent build, you might
consider the zpool split feature, which splits a mirrored pool into two
separate pools, while the original pool is online.
If possible, attach the spare disks to create the mirrored pool as
-0600, Cindy Swearingen wrote:
The revised ZFS Administration Guide describes the ZFS version
descriptions and the Solaris OS releases that provide the version
and feature, starting on page 293, here:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/docs
It's not entirely clear how much
Hi Abdullah,
You can review the ZFS/MySQL presentation at this site:
http://forge.mysql.com/wiki/MySQL_and_ZFS#MySQL_and_ZFS
We also provide some ZFS/MySQL tuning info on our wiki,
here:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/zfsanddatabases
Thanks,
Cindy
On 04/28/10