Thanks Cindy and Enda for the info.
On Tue, Sep 28, 2010 at 12:18:49PM -0700, Paul B. Henson wrote:
On Sat, 25 Sep 2010, Ralph Böhme wrote:
The Darwin ACL model is nice and slick; the new NFSv4 one in 147 is just braindead. chmod resulting in ACLs being discarded is a bizarre design decision.
Agreed.
Hi Cindy,
I did see your first email pointing to that bug:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6538600
Apologies for not addressing it earlier. It is my opinion that the behavior Mike and I (http://illumos.org/issues/217), or anyone else upgrading pools right now, are seeing
Additionally, even though 'zpool get version' and 'zfs get version' display the true and updated versions, I'm not convinced that the problem is zdb, as the label config is almost certainly set by the zpool and/or zfs commands. Somewhere, something is not happening that is supposed to happen when initiating a zpool
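For anyone who wants to compare the two views side by side, roughly this kind of check is what I have in mind (the device path below is only an example; substitute one of your rpool devices):

  # what the commands report
  zpool get version rpool
  zfs get version rpool

  # what the on-disk label carries (example device path)
  zdb -l /dev/rdsk/c0t0d0s0 | grep version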
On 9/28/2010 2:13 PM, Nicolas Williams wrote:
The version of Samba bundled with Solaris 10 seems to insist on chmod'ing stuff. I've tried all of the various options that should disable mapping to mode bits, yet still, randomly, when people copy files in over CIFS, ACLs get destroyed by
Well, strangely enough, I just logged into an OS b145 machine. Its rpool is not mirrored, just a single disk. I know that zdb reported zpool version 22 after at least the first 3 reboots after the rpool upgrade, so I stopped checking. zdb now reports version 27. This machine has probably been
Interesting thread. So how would you go about fixing this?
I suspect you have to track down the vnode, the znode_t, and eventually modify one of the kernel buffers for the znode_phys_t. If you're otherwise left with a complete rebuild, then repairing it this way might be the only choice some people have.
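Purely as an untested sketch of the inspection side (addresses are placeholders, and the z_phys pointer assumes the older, pre-SA znode layout, so this will not match a b137+ kernel), something like the following under mdb -k:

  mdb -k                                    (read-only; you would need -kw to actually patch anything)
  > ::pgrep myapp | ::pfiles                <- note the vnode address of the file
  > <vnode addr>::print vnode_t v_data      <- on ZFS, v_data points at the znode_t
  > <znode addr>::print znode_t z_phys
  > <zphys addr>::print znode_phys_t zp_mode zp_acl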
On Wed, Sep 29, 2010 at 03:44:57AM -0700, Ralph Böhme wrote:
On 9/28/2010 2:13 PM, Nicolas Williams wrote:
The version of Samba bundled with Solaris 10 seems to insist on chmod'ing stuff. I've tried all of the various
Just in case it's not clear, I did not write the quoted text. (One
Using ZFS v22, is it possible to add a hot spare to rpool?
Thanks
Hi Tony,
The current behavior is that you can add a spare to a root pool. If the
spare kicks in automatically, you would need to apply the boot blocks
manually before you could boot from the spared-in disk.
A good alternative is to create a two-way or three-way mirrored root
pool.
We're
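For the archives, a rough outline of both approaches (disk names are only examples; on SPARC you would use installboot rather than installgrub):

  # add a hot spare to the root pool
  zpool add rpool spare c0t2d0s0

  # if the spare ever takes over, put boot blocks on it before booting from it (x86/GRUB)
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t2d0s0

  # or simply mirror the root pool instead
  zpool attach rpool c0t0d0s0 c0t1d0s0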
Is there any way to stop a resilver?
We gotta stop this thing - at minimum, completion time is 300,000 hours, and
maximum is in the millions.
Raidz2 array, so it has the redundancy, we just need to get data off.
Has it been running long? Initially the numbers are way off. After a while
it settles down into something reasonable.
How many disks, and what size, are in your raidz2?
-Scott
On 9/29/10 8:36 AM, LIC mesh licm...@gmail.com wrote:
Is there any way to stop a resilver?
We gotta stop this
It always restarts in less than an hour.
It usually starts at around a 300,000-hour estimate (at 1 minute in), climbs to an estimate in the millions (about 30 minutes in), and restarts.
It never gets past 0.00% completion, or more than a few K resilvered, on any LUN.
64 LUNs: 32 x 5.44T and 32 x 10.88T, in 8 vdevs.
On Wed, Sep 29, 2010
What version of OS?
Are snapshots running (turn them off).
So are there eight disks?
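If the snapshots are the time-slider auto-snapshots, they can be paused for the duration with something like this (stock OpenSolaris service names):

  svcs -a | grep auto-snapshot          # see which instances are enabled
  svcadm disable svc:/system/filesystem/zfs/auto-snapshot:frequent
  svcadm disable svc:/system/filesystem/zfs/auto-snapshot:hourly
  svcadm disable svc:/system/filesystem/zfs/auto-snapshot:daily
  # re-enable them with 'svcadm enable ...' once the resilver completes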
On 9/29/10 8:46 AM, LIC mesh licm...@gmail.com wrote:
It always restarts in less than an hour.
It usually starts at around a 300,000-hour estimate (at 1 minute in), climbs to an estimate in the millions (about 30 minutes
What caused the resilvering to kick off in the first place?
Lin
On Sep 29, 2010, at 8:46 AM, LIC mesh wrote:
It always restarts in less than an hour.
It usually starts at around a 300,000-hour estimate (at 1 minute in), climbs to an estimate in the millions (about 30 minutes in), and restarts.
Never gets
This is an iSCSI/COMSTAR array.
The head was running 2009.06 stable with version 14 ZFS, but we updated that
to build 134 (kept the old OS drives) - did not, however, update the zpool -
it's still version 14.
The targets are all running 2009.06 stable, exporting 4 raidz1 LUNs each of
6 drives -
Most likely an iSCSI timeout, but that was before my time here.
Since then, there have been various individual drives lost along the way on
the shelves, but never a whole LUN, so, theoretically, /except/ for iSCSI
timeouts, there has been no great reason to resilver.
On Wed, Sep 29, 2010 at
Hi all,
Thanks to some clues from people on this list, I have finally
resolved this issue!
To summarise, I was having problems with timeouts when applications
on my MacBook Pro tried to create new files on an NFS file system
that was mounted from my server running snv_130 (writes to existing
Tony,
A brief follow-up: the issue of applying the boot blocks automatically to a spare for a root pool is covered by existing CR 6668666. See this URL for more details:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6668666
Thanks,
Cindy
On 09/29/10 08:38, Cindy
(I left the list off last time, sorry.)
No, the resilver should only be happening if there was a spare available. Is the whole thing scrubbing? It looks like it. Can you stop it with 'zpool scrub -s pool'?
So... word of warning, I am no expert at this stuff. Think about what I am suggesting
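Something along these lines, in other words (pool name is just an example); note that -s cancels a scrub but will not stop a resilver:

  zpool status tank        # does it say "scrub in progress" or "resilver in progress"?
  zpool scrub -s tank      # stops a scrub; has no effect on a resilver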
The endless resilver problem still persists on OI b147. It restarts when it should complete.
I see no other solution than to copy the data to safety and recreate the array. Any hints would be appreciated, as that takes days unless I can stop or pause the resilvering.
On Mon, Sep 27, 2010 at 1:13 PM,
Answers below...
Tuomas Leikola wrote:
The endless resilver problem still persists on OI b147. Restarts when it
should complete.
I see no other solution than to copy the data to safety and recreate the
array. Any hints would be appreciated, as that takes days unless I can
stop or pause the
Can you post the output of 'zpool status'?
Thanks,
George
LIC mesh wrote:
Most likely an iSCSI timeout, but that was before my time here.
Since then, there have been various individual drives lost along the way
on the shelves, but never a whole LUN, so, theoretically, /except/ for
iSCSI
Thanks for taking an interest. Answers below.
On Wed, Sep 29, 2010 at 9:01 PM, George Wilson george.r.wil...@oracle.com wrote:
On Mon, Sep 27, 2010 at 1:13 PM, Tuomas Leikola tuomas.leik...@gmail.com wrote:
(continuous resilver loop) has been going on
Currently I'm still using OpenSolaris b134 and I had used the 'aclmode'
property on my file systems. However, the aclmode property has been dropped
now:
http://arc.opensolaris.org/caselog/PSARC/2010/029/20100126_mark.shellenbaum
I'm wondering what will happen to the ACLs on these files and
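For what it's worth, while you are still on b134 the property is there to inspect and set (dataset name below is just an example):

  zfs get aclmode,aclinherit tank/export
  zfs set aclmode=passthrough tank/export   # keep ACLs intact across chmod(2) for as long as the property exists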
On Wed, September 22, 2010 21:25, Aleksandr Levchuk wrote:
I ran out of space, and consequently could not rm or truncate files. (It makes sense because it's copy-on-write and any transaction needs to be written to disk. It worked out really well - all I had to do was destroy some snapshots.)
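For reference, finding and removing the space-hungry snapshots is basically (names below are made up):

  zfs list -t snapshot -o name,used -s used     # sorted ascending, so the big ones end up at the bottom
  zfs destroy tank/data@zfs-auto-snap_hourly-2010-09-22-21h00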
rb == Ralph Böhme ra...@rsrc.de writes:
rb The Darwin kernel evaluates permissions in a first-match
rb paradigm, evaluating the ACL before the mode
Well... I think it would be better to AND them together like AFS did.
In that case it doesn't make any difference in which order you do it
You can truncate a file:
echo > bigfile
That will free up space without the 'rm'.
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of David Dyer-Bennet
Sent: Wednesday, September 29, 2010 12:59 PM
To:
On Wed, September 29, 2010 15:17, Matt Cowger wrote:
You can truncate a file:
echo > bigfile
That will free up space without the 'rm'.
Copy-on-write: the new version gets written to disk before the old version is released; it doesn't just overwrite. And if it's in any snapshots, the
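Concretely, something like this can happen (pool and file names are made up): the truncate itself succeeds, but the space only comes back once no snapshot still references the old blocks:

  echo > bigfile                                  # truncate in place
  zfs list -o name,used,referenced tank/data      # 'referenced' drops, 'used' may not
  zfs list -r -t snapshot tank/data               # snapshots still hold the old blocks
  zfs destroy tank/data@hourly-2010-09-29         # space returns when the last referencing snapshot goes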
Keep in mind that Windows lacks a mode_t. We need to interop with
Windows. If a Windows user cannot completely change file perms because
there's a mode_t completely out of their reach... they'll be frustrated.
Thus an ACL-and-mode model where both are applied doesn't work. It'd be
nice, but it
Keep in mind that Windows lacks a mode_t. We need to
interop with Windows.
Oh my, I see. Another itch to scratch. Now at least Windows users are happy, while I and maybe others are not.
-r
This must be resilver day. :)
I just had a drive failure. The hot spare kicked in, and access to the pool over NFS was effectively zero for about 45 minutes. Currently the pool is still resilvering, but for some reason I can access the file system now.
Resilver speed has been beaten to death, I
On Wed, Sep 29, 2010 at 03:09:22PM -0700, Ralph Böhme wrote:
Keep in mind that Windows lacks a mode_t. We need to
interop with Windows.
Oh my, I see. Another itch to scratch. Now at least Windows users are happy, while I and maybe others are not.
Yes. Pardon me for forgetting to mention
On Wed, Sep 29, 2010 at 05:21:51PM -0500, Nicolas Williams wrote:
On Wed, Sep 29, 2010 at 03:09:22PM -0700, Ralph Böhme wrote:
Keep in mind that Windows lacks a mode_t. We need to
interop with Windows.
Oh my, I see. Another itch to scratch. Now at least Windows users are
happy while
I should add that I have 477 snapshots across all file systems. Most of them are hourly snaps (225 of them, anyway).
On Sep 29, 2010, at 3:16 PM, Scott Meilicke wrote:
This must be resilver day. :)
I just had a drive failure. The hot spare kicked in, and access to the pool over NFS was
Yeah, I'm having a combination of this and the resilver constantly restarting issue.
And nothing to free up space.
It was recommended that I replace any expanders between the HBA and the drives with extra HBAs, but my array doesn't have expanders.
If yours does, you may want to try