Hi Thomas,
I see that Richard has suggested mirroring your existing pool by
attaching slices from your 1 TB disk if the sizing is right.
You mentioned file security and I think you mean protecting your data
from hardware failures. Another option is to get one more disk to
convert this non-redund
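A rough sketch of what attaching a matching slice would look like, assuming
the existing pool is named tank on c0t0d0s0 and the matching slice on the
1 TB disk is c0t1d0s0 (both names are hypothetical):
# zpool attach tank c0t0d0s0 c0t1d0s0
After the resilver completes, the pool is redundant.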
Hi David,
I think installgrub is unhappy that no s2 exists on c7t1d0.
I would detach c7t1d0s0 from the pool and follow these steps
to relabel/repartition this disk:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide
Replacing/Relabeling the Root Pool Disk
Then, reattach
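A rough sketch of that sequence, assuming the root pool is named rpool and
the other half of the mirror is c7t0d0s0 (adjust the names for your system):
# zpool detach rpool c7t1d0s0
(relabel c7t1d0 with format: SMI label, slice 0 covering the disk)
# zpool attach rpool c7t0d0s0 c7t1d0s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c7t1d0s0
Let the resilver finish before rebooting.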
Hi Greg,
You are running into this bug:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6929751
Currently, building a pool from files is not fully supported.
Thanks,
Cindy
On 03/05/10 16:15, Gregory Durham wrote:
Hello all,
I am using Opensolaris 2009.06 snv_129
I have a quick que
wrote:
Great...will using lofiadm still cause this issue? Either by using
mkfile or by using dd making a sparse file? Thanks for the heads up!
On Fri, Mar 5, 2010 at 3:48 PM, Cindy Swearingen
<cindy.swearin...@sun.com> wrote:
Hi Greg,
You are running into this bug:
Good catch Eric, I didn't see this problem at first...
The problem here, as Richard described well, is that the ctdp* devices
represent the larger fdisk partition, which might also contain a ctds*
device.
This means that in this configuration, c7t0d0p3 and c7t0d0s0 might
share the same block
Hi Tony,
Good questions...
Yes, you can assign a spare disk to multiple pools on the same system,
but it cannot be shared across systems.
The problem with sharing a spare disk with a root pool is that if the
spare kicks in, a boot block is not automatically applied. The
differences in the labels are prob
Hi Tim,
I'm not sure why your spare isn't kicking in, but you could manually
replace the failed disk with the spare like this:
# zpool replace fserv c7t5d0 c3t6d0
If you want to run with the spare for a while, then you can also detach
the original failed disk like this:
# zpool detach fserv c7t
Hi D,
Is this a 32-bit system?
We were looking at your panic messages and they seem to indicate a
problem with memory and not necessarily a problem with the pool or
the disk. Your previous zpool status output also indicates that the
disk is okay.
Maybe someone with similar recent memory problem
Hi Harry,
Reviewing other postings where permanent errors were found on redundant
ZFS configs, one was resolved by re-running the zpool scrub and one
resolved itself because the files with the permanent errors were most
likely temporary files.
One of the files with permanent errors below is a
From: Harry Putnam
Date: Tuesday, March 9, 2010 4:00 pm
Subject: Re: [zfs-discuss] what to do when errors occur during scrub
To: zfs-discuss@opensolaris.org
> Cindy Swearingen writes:
>
> > Hi Harry,
> >
> > Reviewing other postings where permanent errors were found on
>
Hi Grant,
I don't have a v240 to test but I think you might need to unconfigure
the disk first on this system.
So I would follow the more complex steps.
If this is a root pool, then yes, you would need to use the slice
identifier, and make sure it has an SMI disk label.
After the zpool replace
is is production! Even an IM
would be helpful.
--- On Wed, 3/10/10, Cindy Swearingen wrote:
From: Cindy Swearingen
Subject: Re: [zfs-discuss] Replacing a failed/failed mirrored root disk
To: "Grant Lowe"
Cc: zfs-discuss@opensolaris.org
Date: Wednesday, March 10, 2010, 1:09 PM
Hi Grant,
Wed, 10 Mar 2010 15:28:40 -0800 Cindy Swearingen wrote:
Hey list,
Grant says his system is hanging after the zpool replace on a v240,
running Solaris 10 5/09, 4 GB of memory, and no ongoing snapshots.
No errors from zpool replace so it sounds like the disk was physically
replaced success
Ian,
You might consider converting this pool to a mirrored pool, which is
currently more flexible than a raidz pool and provides good performance.
It's easy, too. See the example below.
Cindy
A non-redundant pool of one disk (33 GB).
# zpool status tank
pool: tank
state: ONLINE
scrub: none r
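A minimal sketch of the conversion, assuming the existing disk is c1t0d0 and
the new disk c1t1d0 is at least as large (both device names are hypothetical):
# zpool attach tank c1t0d0 c1t1d0
# zpool status tank
After the resilver completes, tank is a two-way mirror.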
http://hub.opensolaris.org/bin/view/Community+Group+zfs/boot
And review Lori's slides at the bottom of this page.
Thanks,
cindy
On 03/12/10 08:41, David L Kensiski wrote:
On Mar 11, 2010, at 3:08 PM, Cindy Swearingen wrote:
Hi David,
In general, an I/O error means that the slice 0 doesn'
Hi Michael,
For a RAIDZ pool, the zpool list command identifies the "inflated" space
for the storage pool, which is the physical space available without
accounting for the redundancy overhead.
The zfs list command identifies how much actual pool space is available
to the file systems.
See the ex
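A quick sketch of the two views, assuming a pool named tank:
# zpool list tank   (reports the inflated pool size, before parity overhead)
# zfs list tank     (reports the space actually usable by the file systems)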
Hi Svein,
Here are a couple of pointers:
http://wikis.sun.com/display/OpenSolarisInfo/comstar+Administration
http://blogs.sun.com/observatory/entry/iscsi_san
Thanks,
Cindy
On 03/16/10 12:15, Svein Skogen wrote:
Things used to be simple.
"zfs create -V xxg -o shareiscsi=on pool/iSCSI/mynewvo
Hi Dave,
I'm unclear about the autoreplace behavior with one spare that is
connected to two pools. I don't see how it could work if the autoreplace
property is enabled on both pools, because it would format and use a
spare disk that might already be in use in the other pool (?). Maybe I
misunderstand.
1. I th
Hi Grant,
An I/O error generally means that there is some problem accessing the
disk or disks in this pool, or that a disk label got clobbered.
Does zpool status provide any clues about what's wrong with this pool?
Thanks,
Cindy
On 03/19/10 10:26, Grant Lowe wrote:
Hi all,
I'm trying to
Hi Ned,
If you look at the examples on the page that you cite, they start
with single-parity RAIDZ examples and then move to a double-parity RAIDZ
example with supporting text, here:
http://docs.sun.com/app/docs/doc/819-5461/gcvjg?a=view
Can you restate the problem with this page?
Thanks,
Cindy
n, perhaps this all idea is not that bad at all..
Can you provide anything on this subject?
Thanks,
Bruno
On 31-3-2010 23:49, Cindy Swearingen wrote:
Hi Ned,
If you look at the examples on the page that you cite, they start
with single-parity RAIDZ examples and then move to double-parity RAI
ficial Sun documentation should not
have such example. However if people made such example in Sun
documentation, perhaps this all idea is not that bad at all..
Can you provide anything on this subject?
Thanks,
Bruno
On 31-3-2010 23:49, Cindy Swearingen wrote:
Hi Ned,
If you look at the examples o
Hi Marlanne,
I can import a pool that is created with files on a system running the
Solaris 10 10/09 release. See the output below.
This could be a regression from a previous Solaris release, although I
can't reproduce it. In any case, creating a pool with files is not a
recommended practice, as described
Patrick,
I'm happy that you were able to recover your pool.
Your original zpool status says that this pool was last accessed on
another system, which I believe is what caused the pool to fail,
particularly if it was accessed simultaneously from two systems.
It is important that the cause of
Daniel,
Which Solaris release is this?
I can't reproduce this on my lab system that runs the Solaris 10 10/09
release.
See the output below.
Thanks,
Cindy
# zfs destroy -r tank/test
# zfs create -o compression=gzip tank/test
# zfs snapshot tank/test@now
# zfs send -R tank/test@now | zfs re
). Both
filesystems are zfs version 3.
Mystified,
Daniel Bakken
On Wed, Apr 7, 2010 at 10:57 AM, Cindy Swearingen
<cindy.swearin...@oracle.com> wrote:
Daniel,
Which Solaris release is this?
I can't reproduce this on my lab system that runs the Solaris 10
Hi Daniel,
D'oh...
I found a related bug when I looked at this yesterday but I didn't think
it was your problem because you didn't get a busy message.
See this RFE:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6700597
Cindy
On 04/07/10 17:59, Daniel Bakken wrote:
We have found
Jonathan,
For a different diagnostic perspective, you might use the fmdump -eV
command to identify what FMA indicates for this device. This level of
diagnostics is below the ZFS level and more detailed, so you can see
when these errors began and how long they have continued.
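A minimal sketch of that check:
# fmdump -e          (one-line summary of the error telemetry, with timestamps)
# fmdump -eV | more  (full detail for each error report, including the device)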
Cindy
On 04/14/10 11:08,
Hi Tony,
Is this on an x86 system?
If so, you might also check whether this disk has a Solaris fdisk
partition or an EFI fdisk partition.
If it has an EFI fdisk partition then you'll need to change it to a
Solaris fdisk partition.
See the pointers below.
Thanks,
Cindy
http://www.solaris
MstAsg,
Is this the root pool disk?
I'm not sure I'm following what you want to do but I think you want
to attach a disk to create a mirrored configuration, then detach
the original disk.
If this is a ZFS root pool that contains the Solaris OS, then
follow these steps:
1. Attach disk-2.
#
If this isn't a root pool disk, then skip steps 3-4. Letting
the replacement disk resilver before removing the original
disk is good advice for any configuration.
cs
On 04/16/10 16:15, Cindy Swearingen wrote:
MstAsg,
Is this the root pool disk?
I'm not sure I'm following what
Hi Brandon,
I think I've done a similar migration before by creating a second root
pool and then creating a new BE in the new root pool, like this:
# zpool create rpool2 mirror disk-1 disk2
# lucreate -n newzfsBE -p rpool2
# luactivate newzfsBE
# installgrub ...
I don't think LU cares that the
Hi Harry,
Both du and df are pre-ZFS commands and don't really understand ZFS
space issues, which are described in the ZFS FAQ here:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq
Why does du(1) report different file sizes for ZFS and UFS? Why doesn't
the space consumption that is
10 at 7:42 AM, Cindy Swearingen
wrote:
I don't think LU cares that the disks in the new pool are smaller,
obviously they need to be large enough to contain the BE.
It doesn't look like OpenSolaris includes LU, at least on x86-64.
Anyhow, wouldn't the method you mention fail becau
. Install the boot blocks.
5. Test that the system boots from the second root pool.
6. Update BIOS and GRUB to boot from new pool.
On 04/20/10 08:36, Cindy Swearingen wrote:
Yes, I apologize. I didn't notice you were running the OpenSolaris
release. What I outlined below would work on a Solar
Hi Justin,
Maybe I misunderstand your question...
When you export a pool, it becomes available for import by using
the zpool import command. For example:
1. Export tank:
# zpool export tank
2. What pools are available for import:
# zpool import
pool: tank
id: 7238661365053190141
state
Hi Clint,
Your symptoms point to disk label problems, dangling device links,
or overlapping partitions. All could be related to the power failure.
The OpenSolaris error message (b134, I think you mean) brings up these
bugs:
6912251 describes the dangling links problem, which you might be ab
Hi Vlad,
The create-time permissions do not provide the correct permissions for
destroying descendent datasets, such as clones.
See example 9-5 in this section that describes how to use zfs allow -d
option to grant permissions on descendent datasets:
http://docs.sun.com/app/docs/doc/819-5461/ge
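A hedged sketch of the syntax (the user and dataset names are hypothetical):
# zfs allow -d someuser mount,destroy tank/home
# zfs allow tank/home     (displays the delegated permissions)
The -d option grants the permissions on descendent datasets only, which
covers clones created later.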
Yes, it is helpful in that it reviews all the steps needed to get the
replacement disk labeled properly for a root pool and is identical
to what we provide in the ZFS docs.
The part that is not quite accurate is the reason given for having to
relabel the replacement disk with the format utility.
If
Hi Lutz,
You can try the following commands to see what happened:
1. Someone else might have replaced the disk with a spare, which would
be recorded by this command:
# zpool history -l zfs01vol
2. If the disk had some transient outage then maybe the spare kicked
in. Use the following command to see if so
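A sketch of the second check:
# zpool status zfs01vol
If the spare kicked in, the spare is listed as INUSE and shown alongside
the original device.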
Hi everyone,
Please review the information below regarding access to ZFS version
information.
Let me know if you have questions.
Thanks,
Cindy
CR 6898657:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6898657
ZFS commands zpool upgrade -v and zfs upgrade -v refer to URLs that
a
Hi Wolf,
Which Solaris release is this?
If it is an OpenSolaris system running a recent build, you might
consider the zpool split feature, which splits a mirrored pool into two
separate pools, while the original pool is online.
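A minimal sketch of the split (the pool names are hypothetical):
# zpool split tank tank2
# zpool import tank2    (on this system or another one, to bring it online)
One side of each mirror is detached and becomes the exported pool tank2.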
If possible, attach the spare disks to create the mirrored pool as
:04AM -0600, Cindy Swearingen wrote:
The revised ZFS Administration Guide describes the ZFS version
descriptions and the Solaris OS releases that provide the version
and feature, starting on page 293, here:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/docs
It's not entirely clea
Hi Abdullah,
You can review the ZFS/MySQL presentation at this site:
http://forge.mysql.com/wiki/MySQL_and_ZFS#MySQL_and_ZFS
We also provide some ZFS/MySQL tuning info on our wiki,
here:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/zfsanddatabases
Thanks,
Cindy
On 04/28/10 03:42
Hi Mary Ellen,
We have been looking at this and are unsure what the problem is...
To rule out NFS as the root cause, could you create and share a test ZFS
file system without any ACLs to see if you can access the data from the
Linux client?
Let us know the result of your test.
Thanks,
Ci
Hi Euan,
For full root pool recovery see the ZFS Administration Guide, here:
http://docs.sun.com/app/docs/doc/819-5461/ghzvz?l=en&a=view
Recovering the ZFS Root Pool or Root Pool Snapshots
Additional scenarios and details are provided in the ZFS troubleshooting
wiki. The link is here but the s
zpat:umass mfitzpat
updated auto.home on linux client (nona-man):
test  -rw,hard,intr  hecate:/zp-ext/test
nona-man:/# cd /fs/test
nona-man:/fs/test# ls -l
total 3
drwxr-xr-x+ 2 root root 2 Apr 29 11:15 mfitzpat
Permissions did not carry over from the zfs share.
Willing to test/try the next step.
Ma
Thanks,
Cindy
On 04/29/10 21:42, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Cindy Swearingen
For full root pool recovery see the ZFS Administration Guide, here:
http://docs.sun.com/app/docs/doc/819-5461/ghzvz?l=
Brandon,
You're probably hitting this CR:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6924824
I'm tracking the existing dedup issues here:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/dedup
Thanks,
Cindy
On 04/29/10 23:11, Brandon High wrote:
I tried destroying a
Hi Richard,
Renaming the root pool is not recommended. I have some details on what
actually breaks, but I can't find it now.
This limitation is described in the ZFS Admin Guide, but under the
LiveUpgrade section in the s10 version. I will add this limitation under
the general limitations section.
create -o version=19 rpool c1t3d0s0
I will add this info to the root pool recovery process.
Thanks for the feedback...
Cindy
On 04/30/10 22:46, Edward Ned Harvey wrote:
From: Cindy Swearingen [mailto:cindy.swearin...@oracle.com]
Sent: Friday, April 30, 2010 10:46 AM
Hi Ned,
Unless I
Hi Robert,
Could be a bug.
What kind of system and disks are reporting these errors?
Thanks,
Cindy
On 05/02/10 10:55, Lutz Schumann wrote:
Hello,
thanks for the feedback and sorry for the delay in answering.
I checked the log and the fmadm. It seems the log does not show changes, however
t/mount-at-boot components that need to be changed.
Cindy
On 05/03/10 18:34, Brandon High wrote:
On Mon, May 3, 2010 at 9:13 AM, Cindy Swearingen
wrote:
Renaming the root pool is not recommended. I have some details on what
actually breaks, but I can't find it now.
Really? I asked about u
beadm create -p rpool2 osol2BE
3. Activate the new BE.
# beadm activate osol2BE
4. Install the boot blocks (see the sketch after this list).
5. Test that the system boots from the second root pool.
6. Update BIOS and GRUB to boot from new pool.
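For step 4 on x86, a hedged sketch (the rpool2 disk name is hypothetical):
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
On SPARC, installboot with the ZFS bootblk is the equivalent step.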
On 05/04/10 11:04, Brandon High wrote:
On Tue, May 4, 2010 at 7:19 AM,
Hi Dick,
Experts on the cifs-discuss list could probably advise you better.
You might even check the cifs-discuss archive because I hear that
the SMB/NFS sharing scenario has been covered previously on that
list.
Thanks,
Cindy
On 05/04/10 03:06, Dick Hoogendijk wrote:
I have some ZFS datasets
Hi Bob,
You can review the latest Solaris 10 and OpenSolaris release dates here:
http://www.oracle.com/ocom/groups/public/@ocom/documents/webcontent/059542.pdf
Solaris 10 release, CY2010
OpenSolaris release, 1st half CY2010
Thanks,
Cindy
On 05/05/10 18:03, Bob Friesenhahn wrote:
On Wed, 5 M
Hi--
Even though the dedup property can be set on a file system basis,
dedup space usage is accounted for at the pool level by using the
zpool list command.
My non-expert opinion is that it would be nearly impossible to report
space usage for dedup and non-dedup file systems at the file system
level
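A short sketch of the pool-level view (the pool name is hypothetical):
# zpool list tank           (the DEDUP column shows the pool-wide ratio)
# zpool get dedupratio tank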
Hi Eduardo,
Please use the following steps to collect more information:
1. Use the following command to get the PID of the zpool import process,
like this:
# ps -ef | grep zpool
2. Use the actual PID found in step 1 in the following
command, like this:
echo "0t::pid2proc|::walk thread|::findsta
Hi--
The scenario in the bug report below is that the pool is exported.
The spare can't kick in if the pool is exported. It looks like the
issue reported in this CR's See Also section, CR 6887163, is still
open.
Thanks,
Cindy
On 05/18/10 11:19, eXeC001er wrote:
Hi.
In bugster i found bug abo
I think the remaining CR is this one:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6911420
cs
On 05/18/10 12:08, eXeC001er wrote:
6887163
11-Closed:Duplicate (Closed)
6945634
11-Closed:Duplicate (Closed)
2010/5/18 Cindy Swearingen <cindy.swearin...@oracle.
Hi Roi,
You need disks of equivalent size for a mirrored pool. When you attempt to
attach a disk that is too small, you will see a message similar to the
following:
cannot attach c1t3d0 to c1t2d0: device is too small
In general, an "I/O error" message means that the partition slice is not
avai
Andreas,
Does the pool tank actually have 6 disks (c7t0-c7t5), with c7t3d0 now
masking c7t5d0, or is it a 5-disk configuration with c7t5 repeated twice?
If it is the first case (c7t0-c7t5), then I would check how these
devices are connected before attempting to replace the c7t3d0 disk.
What does t
Hi Thomas,
This looks like a display bug. I'm seeing it too.
Let me know which Solaris release you are running and
I will file a bug.
Thanks,
Cindy
On 05/25/10 01:42, Thomas Burgess wrote:
I was just wondering:
I added a SLOG/ZIL to my new system today...I noticed that the L2ARC
shows up u
Hi Reshekel,
You might review these resources for information on using ZFS without
having to hack code:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/docs
ZFS Administration Guide
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide
I will add a section on migrat
Hi--
I apologize for misunderstanding your original issue.
Regardless of the original issues and the fact that current Solaris
releases do not let you set the bootfs property on a pool that has a
disk with an EFI label, the secondary bug here is not being able to
remove a bootfs property on
Hi--
I'm glad you were able to resolve this problem.
I drafted some hints in this new section:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Pool_Migration_Issues
We had all the clues; you and Brandon got it, though.
I think my brain function was missing yesterday.
r filed against zfs for not allowing the bootfs property to
be set to ""? We should always let that request succeed.
lori
On 05/26/10 09:09 AM, Cindy Swearingen wrote:
Hi--
I'm glad you were able to resolve this problem.
I drafted some hints in this new section:
http://www.solar
Cassandra,
Which Solaris release is this?
This is working for me between a Solaris 10 server and an OpenSolaris
client.
Nested mount points can be tricky, and I'm not sure if you are looking
for the mirror mount feature, which is not available in the Solaris 10
release, where new directory conte
Hi--
I can't speak to running a ZFS root pool in a VM, but the problem is
that you can't add another disk to a root pool. All the boot info needs
to be contiguous. This is a boot limitation.
I've not attempted either of these operations in a VM but you might
consider:
1. Replacing the root pool
slice?
On Sun, May 30, 2010 at 9:05 PM, me <dea...@gmail.com> wrote:
Thanks! It is exactly what I was looking for.
On Sat, May 29, 2010 at 12:44 AM, Cindy Swearingen
<cindy.swearin...@oracle.com> wrote:
2. Attaching a larger disk to the root poo
Hi--
I'm no user property expert, but I have some syntax for you to try to
resolve these problems. See below.
Maybe better ways exist but one way to correct datapool inheritance of
com.sun:auto-snapshot:dailyy is to set it to false, inherit the false
setting, then reset the correct property at t
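A hedged sketch of that sequence (datapool/somefs is a hypothetical
descendent dataset; adjust the names for your layout):
# zfs set com.sun:auto-snapshot:dailyy=false datapool
# zfs inherit com.sun:auto-snapshot:dailyy datapool/somefs
# zfs set com.sun:auto-snapshot:daily=true datapool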
Hi Ned,
If you do incremental receives, this might be CR 6860996:
temporary clones are not automatically destroyed on error
A temporary clone is created for an incremental receive and
in some cases, is not removed automatically.
Victor might be able to describe this better, but consider
the f
s mirror mounting is? Would that help me?
Is there something else I could be doing to approach this better?
Thank you for your insight.
-
Cassandra
Unix Administrator
On Thu, May 27, 2010 at 5:25 PM, Cindy Swearingen
<cindy.swearin...@oracle.com> wrote:
Cassandra,
Whic
s, for the purpose of having compressed and uncompressed
directories.
-
Cassandra
(609) 243-2413
Unix Administrator
"From a little spark may burst a mighty flame."
-Dante Alighieri
On Thu, Jun 3, 2010 at 3:00 PM, Cindy Swearingen
<cindy.swearin...@oracle.com>
Frank,
The format utility is not technically correct because it refers to
slices as partitions. Check the output below.
We might say that the "partition" menu is used to partition the
disk into slices, but all of format refers to partitions, not slices.
I agree with Brandon's explanation,
Hi--
Pool names must contain alphanumeric characters as described here:
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/common/zfs/zfs_namecheck.c
The problem you might be having is probably with special characters,
such as umlauts or accents (?). Pool names only allow 4 specia
Hi Toyama,
You cannot restore an individual file from a snapshot stream the way
the ufsrestore command can. If you have snapshots stored on your
system, you might be able to access them from the .zfs/snapshot
directory. See below.
Thanks,
Cindy
% rm reallyimportantfile
% cd .zfs/snapshot
% cd recent-
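If the file is present under the snapshot directory, copying it back
completes the restore (the snapshot name recent-snap and the mountpoint
are hypothetical):
% cp /tank/home/.zfs/snapshot/recent-snap/reallyimportantfile /tank/home/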
Hi Joe,
The REMOVED status generally means that a device was physically removed
from the system.
If necessary, physically reconnect c0t7d0 or if connected, check
cabling, power, and so on.
If the device is physically connected, see what cfgadm says about this
device. For example, a device that
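A sketch of the checks (the attachment point name is hypothetical):
# cfgadm -al | grep c0t7d0
# cfgadm -c configure sata1/7    (if the device shows up unconfigured)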
changing the hardware. After the device is back online and the pool is
imported, you might need to use zpool clear to clear the pool status.
Thanks,
Cindy
On 06/08/10 11:11, Joe Auty wrote:
Cindy Swearingen wrote:
Hi Joe,
The REMOVED status generally means that a device was physically removed
from
On 06/08/10 11:39, Joe Auty wrote:
Cindy Swearingen wrote:
Joe,
Yes, the device should resilver when its back online.
You can use the fmdump -eV command to discover when this device was
removed and other hardware-related events to help determine when this
device was removed.
I would recommend expo
Thanks,
Cindy
On 06/08/10 23:56, Joe Auty wrote:
Cindy Swearingen wrote:
According to this report, I/O to this device caused a probe failure
because the device isn't available on May 31.
I was curious if this device had any previous issues over a longer
period of time.
Failing or faulted
Hi Alvin,
Which Solaris release is this?
If you are using an OpenSolaris release (build 131), you might consider
the zpool split feature that allows you to clone a mirrored pool by
attaching the HDD to the pool, letting it resilver, and using zpool
split to clone the pool. Then, move the HDD and
Hi Tom,
Did you boot from the OpenSolaris LiveCD and attempt to manually
mount the data3 pool? The import might take some time.
I'm also curious whether the device info is coherent after the
power failure. You might review the device info for the root
pool to confirm.
If the device info is okay
Tom,
If you freshly installed the root pool, then those devices
should be okay so that wasn't a good test. The other pools
should remain unaffected by the install, and I hope, from
the power failure.
We've seen device info get messed up during a power failure,
which is why I asked.
If you don't
Hi Giovanni,
My Monday morning guess is that the disk/partition/slices are not
optimal for the installation.
Can you provide the partition table of the disk on which you are
attempting to install? Use format-->disk-->partition-->print.
You want to put all the disk space in c*t*d*s0. See this sect
> Hello all,
>
> I've been running OpenSolaris on my personal
> fileserver for about a year and a half, and it's been
> rock solid except for having to upgrade from 2009.06
> to a dev version to fix some network driver issues.
> About a month ago, the motherboard on this computer
> died, and I upg
Hi Austin,
Not much help, as it turns out.
I don't see any evidence that a recovery mechanism, where you might lose
a few seconds of data transactions, was triggered.
It almost sounds like your file system was rolled back to a previous
snapshot because the data is lost as of a certain date. I
Hi--
No way exists to outright disable ACLs on a ZFS file system.
The removal of the aclmode property was a recent dev build change.
The zfs.1m man page you cite is for a Solaris release that is no longer
available and will be removed soon.
What are you trying to do? You can remove specific ACL
Hi Jay,
I think you mean that you want to connect the disk with a potentially
damaged ZFS BE to another system and mount the ZFS BE for possible
repair purposes.
This recovery method is complicated by the fact that changing the root
pool name can cause the original system not to boot.
Other potent
Hi--
ZFS command operations involving disk space accept input and display
output as numeric values specified either exactly or in a human-readable
form with a suffix of B, K, M, G, T, P, E, Z for bytes, kilobytes,
megabytes, gigabytes, terabytes, petabytes, exabytes, or zettabytes.
Thanks,
Cindy
Hi Ben,
Any other details about this pool, like how it might be different from
the other two pools on this system, might be helpful...
I'm going to try to reproduce this problem.
We'll be in touch.
Thanks,
Cindy
On 06/17/10 07:02, Ben Miller wrote:
I upgraded a server today that has been r
P.S.
User/group quotas are available in the Solaris 10 release,
starting in the Solaris 10 10/09 release:
http://docs.sun.com/app/docs/doc/819-5461/gazvb?l=en&a=view
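A minimal sketch of the syntax (the user and dataset names are hypothetical):
# zfs set userquota@student1=10G tank/home
# zfs get userquota@student1 tank/home
# zfs userspace tank/home        (reports per-user space consumption)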
Thanks,
Cindy
On 06/18/10 07:09, David Magda wrote:
On Fri, June 18, 2010 08:29, Sendil wrote:
I can create 400+ file system
Hi Curtis,
You might review the ZFS best practices info to help you determine
the best pool configuration for your environment:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
If you're considering using dedup, particularly on a 24T pool, then
review the current known is
If the device driver generates or fabricates device IDs, then moving
devices around is probably okay.
I recall the Areca controllers are problematic when it comes to moving
devices under pools. Maybe someone with first-hand experience can
comment.
Consider exporting the pool first, moving the de
Hi Justin,
This looks like an older Solaris 10 release. If so, this looks like
a zpool status display bug, where it looks like the checksum errors
are occurring on the replacement device, but they are not.
I would review the steps described in the hardware section of the ZFS
troubleshooting wiki
On 06/23/10 10:40, Evan Layton wrote:
On 6/23/10 4:29 AM, Brian Nitz wrote:
I saw a problem while upgrading from build 140 to 141 where beadm
activate {build141BE} failed because installgrub failed:
# BE_PRINT_ERR=true beadm activate opensolarismigi-4
be_do_installgrub: installgrub failed for
Hi Shawn,
I think this can happen if you apply patch 141445-09.
It should not happen in the future.
I believe the workaround is this:
1. Boot the system from the correct media.
2. Install the boot blocks on the root pool disk(s).
3. Upgrade the pool.
Thanks,
Cindy
On 06/24/10 09:24, Shawn
Sean,
If you review the doc section you included previously, you will see
that all the root pool examples include slice 0.
The slice is a long-standing boot requirement and is described in
the boot chapter, in this section:
http://docs.sun.com/app/docs/doc/819-5461/ggrko?l=en&a=view
ZFS Storag
Tiernan,
Hardware redundancy is important, but I would be thinking about how you
are going to back up data in the 6-24 TB range, if you actually need
that much space.
Balance your space requirements with good redundancy and how much data
you can safely back up because stuff happens: hardware fai
Hi Donald,
I think this is just a reporting error in the zpool status output,
depending on which Solaris release this is.
Thanks,
Cindy
On 06/27/10 15:13, Donald Murray, P.Eng. wrote:
Hi,
I awoke this morning to a panic'd opensolaris zfs box. I rebooted it
and confirmed it would panic each time it