I hope to see everyone on the other side...
***
The ZFS discussion list is moving to java.net.
This opensolaris/zfs discussion will not be available after March 24.
There is no way to migrate the existing list to the new list.
The solaris-zfs project is
Hi Ned,
This list is migrating to java.net and will not be available
in its current form after March 24, 2013.
The archive of this list is available here:
http://www.mail-archive.com/zfs-discuss@opensolaris.org/
I will provide an invitation to the new list shortly.
Thanks for your patience.
Hi Everyone,
The ZFS discussion list is moving to java.net.
This opensolaris/zfs discussion will not be available after March 24.
There is no way to migrate the existing list to the new list.
The solaris-zfs project is here:
http://java.net/projects/solaris-zfs
See the steps below to join
Hi Andrew,
Your original syntax was incorrect.
A p* device is a larger container for the d* device or s* devices.
In the case of a cache device, you need to specify a d* or s* device.
That you can add p* devices to a pool is a bug.
Adding different slices from c25t10d1 as both log and cache
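[The rest of that example is truncated in the archive; a hedged sketch of adding separate slices of one disk as log and cache, with hypothetical pool and slice names:]
# zpool add tank log c25t10d1s0
# zpool add tank cache c25t10d1s1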
Hi Hans,
Start with the ZFS Admin Guide, here:
http://docs.oracle.com/cd/E26502_01/html/E29007/index.html
Or, start with your specific questions.
Thanks, Cindy
On 03/19/13 03:30, Hans J. Albertsson wrote:
as used on Illumos?
I've seen a few tutorials written by people who obviously are
Hi Jim,
We will be restaging the ZFS community info, most likely on OTN.
The zfs discussion list archive cannot be migrated to the new
list on java.net, but you can pick it up here:
http://www.mail-archive.com/zfs-discuss@opensolaris.org/
We are looking at other ways to make the zfs discuss
Hey Ned and Everyone,
This was news to us too, and we were just talking over some options
yesterday afternoon, so please give us a chance to regroup and provide
some alternatives.
This list will be shut down, but we can start a new one on java.net.
There is a huge ecosystem around Solaris and
Hi Jamie,
Yes, that is correct.
The S11u1 version of this bug is:
https://bug.oraclecorp.com/pls/bug/webbug_print.show?c_rptno=15852599
and has this notation which means Solaris 11.1 SRU 3.4:
Changeset pushed to build 0.175.1.3.0.4.0
Thanks,
Cindy
On 01/11/13 19:10, Jamie Krier wrote:
It
I believe the bug.oraclecorp.com URL is accessible with a support
contract, but it's difficult for me to test.
I should have mentioned it. I apologize.
cs
On 01/14/13 14:02, Nico Williams wrote:
On Mon, Jan 14, 2013 at 1:48 PM, Tomas Forsmanst...@acc.umu.se wrote:
Free advice is cheap...
I personally don't see the advantage of caching reads
and logging writes to the same devices. (Is this recommended?)
If this pool is serving CIFS/NFS, I would recommend testing
for best performance with a mirrored log device first without
a separate cache device:
#
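[The command is truncated in the archive; a hedged sketch of adding a mirrored log device, with hypothetical pool and device names:]
# zpool add tank log mirror c1t5d0 c1t6d0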
Existing Solaris 10 releases are not impacted. S10u11 isn't released yet so
I think
we can assume that this upcoming Solaris 10 release will include a
preventative fix.
Thanks, Cindy
On Thu, Dec 27, 2012 at 11:11 PM, Andras Spitzer wsen...@gmail.com wrote:
Josh,
You mention that Oracle is
Hi Ned,
Which man page are you referring to?
I see the zfs receive -o syntax in the S11 man page.
The bottom line is that not all properties can be set on the
receiving side and the syntax is one property setting per -o
option.
See below for several examples.
Thanks,
Cindy
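[The examples are truncated in the archive; a hedged sketch of the one-property-per-option -o syntax, with hypothetical dataset names:]
# zfs send tank/home@snap1 | zfs receive -o compression=on -o mountpoint=/export/home2 pool/home2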
I don't think
Hi Sol,
You can review the Solaris 11 ZFS best practices info, here:
http://docs.oracle.com/cd/E26502_01/html/E29007/practice-1.html#scrolltoc
The above section also provides info about the full pool performance
penalty.
For S11 releases, we're going to increase the 80% pool capacity
not impacted by this problem.
If scrubbing the pool finds permanent metadata errors,
then you should open an SR.
B. If zdb doesn't complete successfully, open an SR.
On 12/18/12 09:45, Cindy Swearingen wrote:
Hi Sol,
The appliance is affected as well.
I apologize. The MOS article is for internal
Hi Sol,
The appliance is affected as well.
I apologize. The MOS article is for internal diagnostics.
I'll provide a set of steps to identify this problem
as soon as I understand them better.
Thanks, Cindy
On 12/18/12 05:27, sol wrote:
*From:* Cindy Swearingen cindy.swearin...@oracle.com
Hi Jamie,
No doubt. This is a bad bug and we apologize.
Below is a misconception that this bug is related to the VM2 project.
It is not. It's related to a problem that was introduced in the ZFS ARC
code.
If you would send me your SR number privately, we can work with the
support person to
Hey Sol,
Can you send me the core file, please?
I would like to file a bug for this problem.
Thanks, Cindy
On 12/14/12 02:21, sol wrote:
Here it is:
# pstack core.format1
core 'core.format1' of 3351: format
-----------------  lwp# 1 / thread# 1  -----------------
0806de73
Hi Morris,
I hope someone has done this recently and can comment, but the process
is mostly manual and it will depend on how much gear you have.
For example, if you have some extra disks, you can build a minimal ZFS
storage pool to hold the bulk of your data. Then, you can do a live
migration
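[A hedged sketch of such a temporary holding pool, with hypothetical device names:]
# zpool create holding mirror c2t0d0 c2t1d0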
Hi Andreas,
Which release is this... Can you provide the /etc/release info?
It works fine for me on a S11 Express (b162) system:
# zfs create -o readonly=off pond/amy
# zfs get readonly pond/amy
NAME      PROPERTY  VALUE  SOURCE
pond/amy  readonly  off    local
This is somewhat redundant
Hi Charles,
Yes, a faulty or failing disk can kill performance.
I would see if FMA has generated any faults:
# fmadm faulty
Or, if any of the devices are collecting errors:
# fmdump -eV | more
Thanks,
Cindy
On 10/04/12 11:22, Knipe, Charles wrote:
Hey guys,
I’ve run into another ZFS
You said you're new to ZFS, so you might consider using zpool list
and zfs list rather than df -k to reconcile your disk space.
In addition, your pool type (mirrored or RAIDZ) provides a
different space perspective in zpool list that is not always
easy to understand.
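[A minimal sketch, with a hypothetical pool name:]
# zpool list tank
# zfs list -r tank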
changes
6430818 Solaris needs mechanism of dynamically increasing LUN size
-Original Message-
From: Hung-Sheng Tsao (LaoTsao) Ph.D [mailto:laot...@gmail.com]
Sent: July 26, 2012 14:49
To: Habony, Zsolt
Cc: Cindy Swearingen; Sašo Kiselkov; zfs-discuss@opensolaris.org
Subject: Re: [zfs
Hi--
Patches are available to fix this so I would suggest that you
request them from MOS support.
This fix fell through the cracks and we tried really hard to
get it in the current Solaris 10 release but sometimes things
don't work in your favor. The patches are available though.
Relabeling
Hi--
I guess I can't begin to understand patching.
Yes, you provided a whole disk to zpool create but it actually
creates a part(ition) 0 as you can see in the output below.
Part  Tag  Flag  First Sector  Size     Last Sector
  0   usr  wm    256           19.99GB
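[For reference, a hedged way to inspect that label after a whole-disk pool creation, with a hypothetical device name:]
# zpool create tank c1t0d0
# prtvtoc /dev/rdsk/c1t0d0s0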
Here's a better link below.
I have seen enough bad things happen to pool devices when hardware is
changed or firmware is updated to recommend that the pool is exported
first, even an HBA firmware update.
Either shutting the system down (where pool is hosted) or exporting
the pool should do it.
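[A minimal sketch of that sequence, with a hypothetical pool name:]
# zpool export tank
(update the HBA or disk firmware, or change the hardware)
# zpool import tank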
I speak for myself... :-)
If the real bug is in procfs, I can file a CR.
When xattrs were designed right down the hall from me,
I don't think /proc interactions were considered, which
is why I mentioned an RFE.
Thanks,
Cindy
On 07/15/12 15:59, Cedric Blancher wrote:
On 14 July 2012
I don't think that xattrs were ever intended or designed
for /proc content.
I could file an RFE for you if you wish.
Thanks,
Cindy
On 07/13/12 14:00, ольга крыжановская wrote:
Yes, accessing the files through runat works.
I think /proc (and /dev/fd, which has the same trouble but only works
Hi Rich,
I don't think anyone can say definitively how this problem resolved,
but I believe that the dd command overwrote some of the disk label,
as you describe below.
Your format output below looks like you relabeled the disk and maybe
that was enough to resolve this problem.
I have had
Hi Hans,
Its important to identify your OS release to determine if
booting from a 4k disk is supported.
Thanks,
Cindy
On 06/15/12 06:14, Hans J Albertsson wrote:
I've got my root pool on a mirror on 2 512 byte blocksize disks.
I want to move the root pool to two 2 TB disks with 4k blocks.
Hi--
You don't say what release this is, but I think that the checksum
error accumulation on the spare was a zpool status formatting bug that
I have seen myself. This is fixed in a later Solaris release.
Thanks,
Cindy
On 05/28/12 22:21, Stephan Budach wrote:
Hi all,
just to wrap this
Hi Karl,
I'd like to verify that no dead or dying disk is killing pool
performance, and your zpool status looks good. Jim has replied
with some ideas to check your individual device performance.
Otherwise, you might be impacted by this CR:
7060894 zfs recv is excruciatingly slow
This CR covers
Hi Karl,
Someone sitting across the table from me (who saw my posting)
informs me that CR 7060894 would not impact Solaris 10 releases,
so I kindly withdraw my comment about CR 7060894.
Thanks,
Cindy
On 5/7/12 11:35 AM, Cindy Swearingen wrote:
Hi Karl,
I'd like to verify that no dead or dying
Is this a known issue with ZFS? A bug?
cheers
Matt
On 04/16/12 10:05 PM, Cindy Swearingen wrote:
Hi Matt,
I don't have a way to reproduce this issue and I don't know why
this is failing. Maybe someone else does. I know someone who
recently split a root pool running the S11 FCS release without
problems
Hi Matt,
I don't have a way to reproduce this issue and I don't know why
this is failing. Maybe someone else does. I know someone who
recently split a root pool running the S11 FCS release without
problems.
I'm not a fan of root pools on external USB devices.
I haven't tested these steps in a
Hi Peter,
The root pool disk labeling/partitioning is not so easy.
I don't know which OpenIndiana release this is but in a previous
Solaris release we had a bug that caused the error message below
and the workaround is exactly what you did, use the -f option.
We don't yet have an easy way to
System
Step 9 uses the format -> disk -> partition -> modify sequence and
sets the free hog space to slice 0. Then, you press return for
each existing slice to zero them out. This creates one large
slice 0.
cs
On 04/12/12 11:48, Cindy Swearingen wrote:
Hi Peter,
The root pool disk labeling/partitioning
Hi Matt,
There is no easy way to access data from a detached device.
You could try to force import it on another system or under
a different name on the same system with the remaining device.
The easiest way is to split the mirrored pool. See the
steps below.
Thanks,
Cindy
# zpool status
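[The remaining steps are truncated in the archive; a hedged sketch of splitting a mirrored pool, with hypothetical names:]
# zpool split tank tank2
# zpool import tank2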
Hi Bob,
Not many options because you can't attach disks to convert a
non-redundant pool to a RAIDZ pool.
To me, the best solution is to get one more disk (for a total of 4
disks) to create a mirrored pool. Mirrored pools provide more
flexibility. See 1 below.
See the options below.
Thanks,
In theory, instead of this missing
disk approach I could create a two-disk raidz pool and later add the
third disk to it, right?
No, you can't add a 3rd disk to an existing RAIDZ vdev of two disks.
You would want to add another 2 disk RAIDZ vdev.
See Example 4-2 in this section:
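[The URL is truncated in the archive; a hedged sketch of adding a second two-disk RAIDZ vdev, with hypothetical device names:]
# zpool add tank raidz c3t0d0 c4t0d0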
Hi Ian,
This looks like CR 7097870.
To resolve this problem, apply the latest s11 SRU to both systems.
Thanks,
Cindy
On 02/08/12 17:55, Ian Collins wrote:
Hello,
I'm attempting to dry run the send the root data set of a zone from one
Solaris 11 host to another:
sudo zfs send -r
Hi Jan,
These commands will tell you if FMA faults are logged:
# fmdump
# fmadm faulty
This command will tell you if errors are accumulating on this
disk:
# fmdump -eV | more
Thanks,
Cindy
On 02/01/12 11:20, Jan Hellevik wrote:
I suspect that something is wrong with one of my disks.
This
:
On Thu, Dec 15, 2011 at 04:39:07PM -0700, Cindy Swearingen wrote:
Hi Anon,
The disk that you attach to the root pool will need an SMI label
and a slice 0.
The syntax to attach a disk to create a mirrored root pool
is like this, for example:
# zpool attach rpool c1t0d0s0 c1t1d0s0
BTW. Can
Hi Tim,
No, in current Solaris releases the boot blocks are installed
automatically with a zpool attach operation on a root pool.
Thanks,
Cindy
On 12/15/11 17:13, Tim Cook wrote:
Do you still need to do the grub install?
On Dec 15, 2011 5:40 PM, Cindy Swearingen cindy.swearin...@oracle.com
to do the partitioning by hand, which is
just silly to fight with anyway.
Gregg
Sent from my iPhone
On Dec 15, 2011, at 6:13 PM, Tim Cook t...@cook.ms mailto:t...@cook.ms
wrote:
Do you still need to do the grub install?
On Dec 15, 2011 5:40 PM, Cindy Swearingen
cindy.swearin...@oracle.com
Yep, well said, understood, point taken, I hear you, you're
preaching to the choir. Have faith in Santa.
A few comments:
1. I need more info on the x86 install issue. I haven't seen this
problem myself.
2. We don't use slice 2 for anything and it's not recommended.
3. The SMI disk is a
Hi Anon,
The disk that you attach to the root pool will need an SMI label
and a slice 0.
The syntax to attach a disk to create a mirrored root pool
is like this, for example:
# zpool attach rpool c1t0d0s0 c1t1d0s0
Thanks,
Cindy
On 12/15/11 16:20, Anonymous Remailer (austria) wrote:
On
Hi Francois,
A similar recovery process in OS11 is to just mount the BE,
like this:
# beadm mount s11_175 /mnt
# ls /mnt/var
adm       cron   inet        logadm  preserve  tmp
ai        db     info        mail    run       tpm
apache2   dhcp   installadm  nfs
I think "too many open files" is a generic error message about
running out of file descriptors. You should check your shell ulimit
information.
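[A minimal sketch of checking and raising the per-shell limit; the default varies by release:]
# ulimit -n
# ulimit -n 8192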
On 11/29/11 09:28, sol wrote:
Hello
Has anyone else come across a bug moving files between two zfs file systems?
I used mv
Hi Sol,
For 1) and several others, review the ZFS Admin Guide for
a detailed description of the share changes, here:
http://docs.oracle.com/cd/E23824_01/html/821-1448/gayne.html
For 2-4), You can't rename a share. You would have to remove it
and recreate it with the new name.
For 6), I think
Hi John,
CR 7102272:
ZFS storage pool created on a 3 TB USB 3.0 device has device label
problems
Let us know if this is still a problem in the OS11 FCS release.
Thanks,
Cindy
On 11/10/11 08:55, John D Groenveld wrote:
In message4e9db04b.80...@oracle.com, Cindy Swearingen writes
Hi John,
I'm going to file a CR to get this issue reviewed by the USB team
first, but if you could humor me with another test:
Can you run newfs to create a UFS file system on this device
and mount it?
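[A hedged sketch of that test, with a hypothetical device name:]
# newfs /dev/rdsk/c1t0d0s0
# mount /dev/dsk/c1t0d0s0 /mnt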
Thanks,
Cindy
On 10/18/11 08:18, John D Groenveld wrote:
In message
Yeah, okay, duh. I should have known that large sector size
support is only available for a non-root ZFS file system.
A couple more things if you're still interested:
1. If you re-create the pool on the whole disk, like this:
# zpool create foo c1t0d0
Then, resend the prtvtoc output for
Hi Paul,
Your 1-3 is very sensible advice and I must ask about this
statement:
I have yet to have any data loss with ZFS.
Maybe this goes without saying, but I think you are using
ZFS redundancy.
Thanks,
Cindy
On 10/18/11 08:52, Paul Kraus wrote:
On Tue, Oct 18, 2011 at 9:38 AM, Gregory
This is CR 7102272.
cs
On 10/18/11 10:50, John D Groenveld wrote:
In message 4e9da8b1.7020...@oracle.com, Cindy Swearingen writes:
1. If you re-create the pool on the whole disk, like this:
# zpool create foo c1t0d0
Then, resend the prtvtoc output for c1t0d0s0.
# zpool create snafu c1t0d0
John,
Any USB-related messages in /var/adm/messages for this device?
Thanks,
Cindy
On 10/12/11 11:29, John D Groenveld wrote:
In message 4e95cb2a.30...@oracle.com, Cindy Swearingen writes:
What is the error when you attempt to import this pool?
cannot import 'foo': no such pool available
Hi John,
What is the error when you attempt to import this pool?
Thanks,
Cindy
On 10/11/11 18:17, John D Groenveld wrote:
Banging my head against a Seagate 3TB USB3 drive.
Its marketing name is:
Seagate Expansion 3 TB USB 3.0 Desktop External Hard Drive STAY3000102
format(1M) shows it
In the steps below, you're missing a zpool import step.
I would like to see the error message when the zpool import
step fails.
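[For reference, the missing step, using the pool name from the thread:]
# zpool import foo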
Thanks,
Cindy
On 10/12/11 11:29, John D Groenveld wrote:
In message 4e95cb2a.30...@oracle.com, Cindy Swearingen writes:
What is the error when you attempt to import
Hi Kelsey,
I haven't had to do this myself so someone who has done this
before might have a better suggestion.
I wonder if you need to make links from the original device
name to the new device names.
You can see from the zdb -l output below that the device path
is pointing to the original
in JBOD mode.
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
cs
On 08/15/11 08:41, Cindy Swearingen wrote:
Hi Tom,
I think you should test this configuration with and without the
underlying hardware RAID.
If RAIDZ is the right redundancy level for your workload,
you might
Hi Ned,
The difference is that for mirrored pools, zpool list displays the
actual available space so that if you have a mirrored pool of two
30-GB disks, zpool list will display 30 GBs, which should jibe with
the zfs list output of available space for file systems.
For RAIDZ pools, zpool list
Hi Judy,
Without much to go on, let's try the easier task first.
Is it possible that you can re-attach the detached disk back
to original root mirror disk?
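[If so, the re-attach would look something like this sketch; the device names are hypothetical:]
# zpool attach rpool c0t0d0s0 c0t1d0s0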
Thanks,
Cindy
On 07/28/11 13:16, Judy Wheeler (QTSI) X7567 wrote:
Does anyone know where the Jeff Bonwick tool to recover a ZFS label
Hi Judy,
The disk label of the detached disk is intact but the pool info is no
longer accessible so no easy way exists to solve this problem.
It's not simply a matter of getting the boot info back on there; it's the
pool info.
Your disk will have to meet up with its other half before it can be
Subject: Re: [zfs-discuss] Adding mirrors to an existing zfs-pool
Date: Tue, 26 Jul 2011 08:54:38 -0600
From: Cindy Swearingen cindy.swearin...@oracle.com
To: Bernd W. Hennig consult...@hennig-consulting.com
References: 342994905.11311662049567.JavaMail.Twebapp@sf-app1
Hi Bernd,
If you
Hi Roberto,
Yes, you can reinstall the OS on another disk and as long as the
OS install doesn't touch the other pool's disks, your
previous non-root pool should be intact. After the install
is complete, just import the pool.
Thanks,
Cindy
On 07/26/11 10:49, Roberto Scudeller wrote:
Hi all,
That is correct.
Those of us working in ZFS land recommend one file system per user, but the
software has not yet caught up to that model. The wheels are turning though.
When I get back to office, I will send out some steps that might help during
this transition.
Thanks,
Cindy
--
This
Hi Chris,
Which Solaris release is this?
Depending on the Solaris release, you have a couple of
different options.
Here's one:
1. Physically replace original failed disk and detach
the spare.
A. If c10t0d0 was the disk that you physically replaced,
issue this command:
# zpool replace tank
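[The command is truncated in the archive; a hedged sketch of replacing the failed disk and then detaching the spare, with hypothetical device names:]
# zpool replace tank c10t0d0
# zpool detach tank c10t9d0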
Hi David,
If the permanent error is in some kind of metadata, then it doesn't
translate to a specific file name.
You might try another zpool scrub and then a zpool clear to see if
it clears this error.
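[A minimal sketch of that sequence, with a hypothetical pool name:]
# zpool scrub tank
# zpool status -v tank
# zpool clear tank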
Thanks,
Cindy
On 07/13/11 12:47, David Smith wrote:
I recently had an issue with my LUNs
Hi Ram,
Which Solaris release is this and how was the OS re-imaged?
If this is a recent Solaris 10 release and you used Live Upgrade,
then the answer is yes.
I'm not so sure about zone behavior in the Oracle Solaris 11
Express release.
You should just be able to import testpool and boot your
to create a non-global zone and it is giving an error
bash-3.00# zonecfg -z Test
Test: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:Test> create -a /zones/Test
invalid path to detached zone
Thanks,
Ram
On Thu, Jul 7, 2011 at 3:14 PM, Cindy Swearingen
Hi Adrian,
I wonder if you have seen these setup instructions:
http://www.oracle.com/technetwork/articles/servers-storage-dev/autosnapshots-397145.html
If you have, let me know if you are still having trouble.
Thanks,
Cindy
On 07/05/11 16:37, Adrian Carpenter wrote:
I've been trying to
Hi Jiawen,
Yes, the boot failure message would be very helpful.
The first thing to rule out is:
I think you need to be running a 64-bit kernel to
boot from a 2 TB disk.
Thanks,
Cindy
On 07/01/11 02:58, Jiawen Chen wrote:
Hi,
I have Solaris 11 Express with a root pool installed on a
Hi Dave,
Consider the easiest configuration first and it will probably save
you time and money in the long run, like this:
73g x 73g mirror (one large s0 on each disk) - rpool
73g x 73g mirror (use whole disks) - data pool
Then, get yourself two replacement disks, a good backup strategy,
and
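[A hedged sketch of the data-pool half of that layout; device names are hypothetical, and the installer builds the rpool mirror:]
# zpool create datapool mirror c0t2d0 c0t3d0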
Hi Kitty,
Try this:
# zpool create test c5t0d0
Thanks,
Cindy
On 06/23/11 12:34, Kitty Tam wrote:
It wouldn't let me
# zpool create test_pool c5t0d0p0
cannot create 'test_pool': invalid argument for this pool operation
Thanks,
Kitty
On 06/23/11 03:00, Roy Sigurd Karlsbakk wrote:
I
Hi David,
I see some inconsistencies between the mirrored pool tank info below
and the device info that you included.
1. The zpool status for tank shows some remnants of log devices (?),
here:
tank FAULTED corrupted data
logs
Generally, the log devices are listed after the
Hi Todd,
Yes, I have seen zpool scrub do some miracles but I think it depends
on the amount of corruption.
A few suggestions are:
1. Identify and resolve the corruption problems on the underlying
hardware. No point in trying to clear the pool errors if this
problem continues.
The fmdump
Hi Ed,
This is current Solaris SMB sharing behavior. CR 6582165 is filed to
provide this feature.
You will need to reshare your 3 descendent file systems.
NFS sharing does this automatically.
Thanks,
Cindy
On 06/22/11 09:46, Ed Fang wrote:
Need a little help. I set up my zfs storage last
Hi Clive,
What you are asking is not recommended nor supported and could render
your ZFS root pool unbootable. (I'm not saying that some expert
couldn't do it, but it's risky, as in data-corruption risky.)
ZFS expects the partition boundaries to remain the same unless you
replace the original
Hi Lanky,
If you created a mirrored pool instead of a RAIDZ pool, you could use
the zpool split feature to split your mirrored pool into two identical
pools.
For example, If you had 3-way mirrored pool, your primary pool will
remain redundant with 2-way mirrors after the split. Then, you would
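[The rest of the example is truncated in the archive; a hedged sketch of splitting one disk out of a 3-way mirror, with hypothetical names:]
# zpool split tank tank2 c1t3d0
# zpool import tank2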
Hi Matt,
You have several options in terms of migrating the data but I think the
best approach is to do something like I have described below.
Thanks,
Cindy
1. Create snapshots of the file systems to be migrated. If you
want to capture the file system properties, then see the zfs.1m
man page
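[The steps are truncated in the archive; a hedged sketch of a recursive, property-preserving migration, with hypothetical pool names:]
# zfs snapshot -r tank@migrate
# zfs send -R tank@migrate | zfs receive -F newpool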
Hi Bill,
I'm assuming you've already upgraded to a Solaris 10 release that supports a UFS
to ZFS migration...
I don't think Live Upgrade supports the operations below. The UFS to ZFS
migration takes your existing UFS file systems and creates one ZFS BE in a root
pool. An advantage to this is
@SATA_WDC_WD5000AAKS-0_WD-WCAWF1769949/a'
phys_path: '/pci@0,0/pci8086,2845@1c,3/pci1095,3132@0/disk@1,0:a'
whole_disk: 1
DTL: 893
On May 20, 2011, at 8:34 AM, Cindy Swearingen wrote:
Hi Alex
More scary than interesting to me.
What kind of hardware
Hi Alex
More scary than interesting to me.
What kind of hardware and which Solaris release?
Do you know what steps led up to this problem? Any recent hardware
changes?
This output should tell you which disks were in this pool originally:
# zpool history tank
If the history identifies
Hi Ketan,
What steps led up to this problem?
I believe the boot failure messages below are related to a mismatch
between the pool version and the installed OS version.
If you're using the JumpStart installation method, then the root pool is
re-created each time, I believe. Does it also
Hi--
I don't know why the spare isn't kicking in automatically, it should.
A documented workaround is to outright replace the failed disk with one
of the spares, like this:
# zpool replace fwgpool0 c4t5000C5001128FE4Dd0 c4t5000C50014D70072d0
The autoreplace pool property has nothing to do
Hi Darren,
Yes, a bootable root pool must be created on a disk slice.
You can use a cache device, but not a log device, and the cache device
must be a disk slice.
See the output below.
Thanks,
Cindy
# zpool add rpool log c0t2d0s0
cannot add to 'rpool': root pool can not have multiple vdevs
Hi Karl...
I just saw this same condition on another list. I think the poster
resolved it by replacing the HBA.
Drives go bad but they generally don't all go bad at once, so I would
suspect some common denominator like the HBA/controller, cables, and
so on.
See what FMA thinks by running
is resolved
completely.
If not, then see the recommendation below.
Thanks,
Cindy
On 04/15/11 13:18, Cindy Swearingen wrote:
Hi Karl...
I just saw this same condition on another list. I think the poster
resolved it by replacing the HBA.
Drives go bad but they generally don't all go bad at once, so I
Yes, the Solaris 10 9/10 release has the fix for RAIDZ checksum errors
if you have ruled out any hardware problems.
cs
On 04/15/11 14:47, Karl Rossing wrote:
Would moving the pool to a Solaris 10U9 server fix the random RAIDZ errors?
On 04/15/2011 02:23 PM, Cindy Swearingen wrote:
D'oh. One
Arjun,
Yes, you can choose any name for the root pool, but an existing
limitation is that you can't rename the root pool by exporting it
and importing it with a new name.
Too much internal boot info is tied to the root pool name.
What info are you changing? Instead, could you create a new
Hi Albert,
I didn't notice that you are running the Solaris 10 9/10 release.
Although the autoexpand property is provided, the underlying driver
changes to support the LUN expansion are not available in this release.
I don't have the right storage to test, but a possible workaround is
to
You can add and remove mirrored or non-mirrored log devices.
Jordan is probably running into CR 7000154:
cannot remove log device
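[For reference, a hedged sketch of removing a mirrored log device; the vdev name comes from zpool status, and the names here are hypothetical:]
# zpool status tank
# zpool remove tank mirror-1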
Thanks,
Cindy
On 03/31/11 12:28, Roy Sigurd Karlsbakk wrote:
http://pastebin.com/nD2r2qmh
Here is zpool status and zpool version
The only thing I wonder
Hi Jim,
Yes, the Solaris 10 9/10 release supports log device removal.
http://download.oracle.com/docs/cd/E19253-01/819-5461/gazgw/index.html
See Example 4-3 in this section.
The ability to import a pool with a missing log device is not yet
available in the Solaris 10 release.
Thanks,
Cindy
Hi Robert,
We integrated some fixes that allowed you to replace disks of equivalent
sizes, but 40 MB is probably beyond that window.
Yes, you can do #2 below and the pool size will be adjusted down to the
smaller size. Before you do this, I would check the sizes of both
spares.
If both spares
Robert,
Which Solaris release is this?
Thanks,
Cindy
On 03/04/11 11:10, Mark J Musante wrote:
The fix for 6991788 would probably let the 40mb drive work, but it would
depend on the asize of the pool.
On Fri, 4 Mar 2011, Cindy Swearingen wrote:
Hi Robert,
We integrated some fixes
Hi Paul,
I've seen some spare stickiness too and it's generally when I'm trying to
simulate a drive failure (like you are below) without actually
physically replacing the device.
If I actually physically replace the failed drive, the spare is
detached automatically after the new device is
(Dave P...I sent this yesterday, but it bounced on your email address)
A small comment from me would be to create some test pools and replace
devices in the pools to see if device names remain the same or change
during these operations.
If the device names change and the pools are unhappy,
Hi Dave,
Still true.
Thanks,
Cindy
On 02/25/11 13:34, David Blasingame Oracle wrote:
Hi All,
In reading the ZFS Best practices, I'm curious if this statement is
still true about 80% utilization.
from :
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
Hi Bill,
I think the root cause of this problem is that time slider implemented
the zfs destroy -d feature but this feature is only available in later
pool versions. This means that the routine removal of time slider
generated snapshots fails on older pool versions.
The zfs destroy -d feature
Sergey,
I think you are saying that you had 4 separate ZFS storage pools on 4
separate disks and one ZFS pool/fs didn't import successfully.
If you created a new storage pool on the disk for the pool that
failed to import then the data on that disk is no longer available
because it was
The best way to remove the pool is to reconnect the device and then
destroy the pool, but if the device is faulted or no longer available,
then you'll need a workaround.
If the external drive with the FAULTED pool remnants isn't connected to
the system, then rename the /etc/zfs/zpool.cache file
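[A hedged sketch of that workaround; the pool name is hypothetical:]
# mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bak
# zpool import tank
(re-import each healthy pool; the FAULTED remnant is forgotten)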