Hello Richard,
Wednesday, March 21, 2007, 1:48:23 AM, you wrote:
RE Yes, PSARC 2007/121 integrated into build 61 (and there was much rejoicing
:-)
RE I'm working on some models which will show the effect on various RAID
RE configurations and intend to post some results soon. Suffice to say, if
Hello Robert,
Saturday, March 17, 2007, 6:49:05 PM, you wrote:
RM Hello Thomas,
RM Saturday, March 17, 2007, 11:46:14 AM, you wrote:
TN On Fri, 16 Mar 2007, Anton B. Rang wrote:
It's possible (if unlikely) that you are only getting checksum errors on
metadata. Since ZFS always internally
JS writes:
The big problem is that if you don't do your redundancy in the zpool,
then the loss of a single device flatlines the system. This occurs in
single device pools or stripes or concats. Sun support has said in
support calls and Sunsolve docs that this is by design, but I've never
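To make the distinction concrete, a minimal sketch (device names are hypothetical): the first pool keeps its redundancy inside ZFS, the second is a plain concat with none.

    # redundancy inside the pool: losing one disk degrades it, but the system stays up
    zpool create tank mirror c1t0d0 c1t1d0
    # no pool-level redundancy: losing either disk takes the whole pool down
    zpool create tank c1t0d0 c1t1d0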
Ah :-)
Btw, that bug note is a bit misleading - our use case had nothing to do with
ZFS Root filesystems - he was trying to install in a completely separate
filesystem - a very large one. And yes, he found out that setting a quota was a
good workaround :-)
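For reference, the sort of workaround being described looks roughly like this (the filesystem name is hypothetical):

    # cap the large filesystem so an install can't consume the entire pool
    zfs set quota=100g tank/bigfs
    zfs get quota tank/bigfs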
Hi Gino,
What version of Solaris is your server running?
What happens here is that while opening your pool, ZFS tries to process
the ZFS Intent Log (ZIL) of this pool and discovers an inconsistency between
the on-disk state and the ZIL contents.
What was the first panic you refer to?
Wbr,
Victor
Gino
Did you say what version of Solaris 10 you were using? I had similar problems
on Sol10 U2, booting a database. This involved first initializing the data
files (a few Gb), then starting the server(s) which tried to allocate a large
chunk of shared memory. This failed miserably since ZFS had
Gino,
S10U2
Ok, then if you have a support contract for this system, you may want to
open a new case for this issue.
Unfortunately we have nothing in the logs about the first panic!
This is not good... Without it, it may be impossible to find out what went
wrong. You may have nothing in the logs,
Gino,
Gino Ruopolo writes:
Victor,
can we try to mount the zpool on a S10U3 system?
No, this may require using one of the recent Solaris Nevada builds. I'm
trying to check the relevant build number.
What about answers to my other questions?
Wbr,
Victor
From: Victor Latushkin [EMAIL PROTECTED]
Gino,
Gino Ruopolo writes:
Victor,
1) The crash dump dir was moved onto the crashed zpool a few days ago.
Anyway, we think the crash is related to mpxio. We had tens of crashes in the
last few weeks, but we never lost a zpool!!
2) That particular unit is out of Sun contract
We hope there is a way to
We're running Update 3. Note that the DB _does_ come up, just not in the two
minutes they were expecting. If they wait a few moments after their two-minute
start-up attempt, it comes up just fine.
I was looking at vmstat, and it seems to tell me what I need. It's just that I
need to present
On Wed, 21 Mar 2007, Rainer Heilke wrote:
[... reformatted ]
We're running Update 3. Note that the DB _does_ come up, just not in the
two minutes they were expecting. If they wait a few moments after their
two-minute start-up attempt, it comes up just fine.
So why don't you state the
Richard Elling wrote:
I think this is a systems engineering problem, not just a ZFS problem.
Few have bothered to look at mount performance in the past because
most systems have only a few mounted file systems[1]. Since ZFS does
file system quotas instead of user quotas, now we have the
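A rough way to see that cost, sketched under the assumption of a test pool called tank:

    # create a couple of thousand filesystems, then time mounting them all
    i=1
    while [ $i -le 2000 ]; do
        zfs create tank/fs$i
        i=`expr $i + 1`
    done
    zfs umount -a      # unmount all ZFS filesystems
    time zfs mount -a  # time how long it takes to mount them again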
Robert Milkowski wrote:
Hello Richard,
Wednesday, March 21, 2007, 1:48:23 AM, you wrote:
RE Yes, PSARC 2007/121 integrated into build 61 (and there was much rejoicing
:-)
RE I'm working on some models which will show the effect on various RAID
RE configurations and intend to post some results
Folks -
I'm preparing to submit the attached PSARC case to provide better
support for device removal and insertion within ZFS. Since this is a
rather complex issue, with a fair share of corner cases, I thought I'd
send the proposal out to the ZFS community at large for further comment
before
[EMAIL PROTECTED] wrote on 03/21/2007 11:00:43 AM:
The problem is that in order to restrict disk usage, ZFS *requires*
that you create this many filesystems. I think most in this situation
would prefer not to have to do that. The two solutions I see would
be to add user quotas to ZFS
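For anyone following along, "this many filesystems" means roughly the following, with hypothetical names:

    # one filesystem per user, each with its own quota, standing in for user quotas
    zfs create tank/home/alice
    zfs set quota=10g tank/home/alice
    zfs create tank/home/bob
    zfs set quota=10g tank/home/bob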
eric kustarz [EMAIL PROTECTED] writes:
I just integrated into snv_62:
6529406 zpool history needs to bump the on-disk version
The original CR for 'zpool history':
6343741 want to store a command history on disk
was integrated into snv_51.
Both of these are planned to make s10u4.
But
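For those who haven't tried the feature yet, a quick sketch with a hypothetical pool name:

    # show the command history stored on disk for a pool
    zpool history tank
    # pools created before the feature may need their on-disk version bumped first
    zpool upgrade tank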
Hello Richard,
Wednesday, March 21, 2007, 6:23:05 PM, you wrote:
RE Robert Milkowski wrote:
RE Wouldn't that fall under the generic rewrite/shrink functionality we're also
RE anxiously waiting for? Note that this also brings up a nasty edge case where
RE the rewrite may cause you to run out
Is this the same panic I observed when moving a FireWire disk from a SPARC
system running snv_57 to an x86 laptop with snv_42a?
6533369 panic in dnode_buf_byteswap importing zpool
Yep, thanks - i was looking for that bug :) I'll close it out as a dup.
eric
JS wrote:
I'd definitely prefer owning a sort of SAN solution that would basically just
be trays of JBODs exported through redundant controllers, with enterprise-level
service. The world is still playing catch-up to integrate with all the
possibilities of ZFS.
It was called the A5000, later
On Thu, Mar 22, 2007 at 01:03:48AM +0100, Robert Milkowski wrote:
What if I have a failing drive (still works but I want it to be
replaced) and I have a replacement drive on a shelf. All I want is
to remove failing drive, insert new one and resilver. I do not want
a hot spare to
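The manual path for that case looks roughly like this (device names are hypothetical):

    # swap the failing disk for the replacement and let the resilver run
    zpool replace tank c1t2d0 c1t3d0
    # watch resilver progress
    zpool status -v tank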
Anyone have any experience with this?
http://www.storagebuilder.com/ssr212cc/
Kory,
I'm sorry that you had to go through this. We're all working very hard to
make ZFS better for everyone. We've noted this problem on the ZFS Best
Practices wiki to try and help avoid future problems until we can get the
quotas issue resolved.
-- richard
Kory Wheatley wrote:
Richard,
I
Hi,
S10U3: It seems that UFS POSIX ACLs are not properly translated to ZFS
ACL4 entries when one transfers a directory tree from UFS to ZFS.
Test case:
Assuming one has users A and B, both belonging to group G and having their
umask set to 022:
1) On UFS
- as user A do:
mkdir /dir
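(Not the full test case, but as a generic sketch with hypothetical paths, one way to compare the two sides is:)

    # on the UFS side: show the POSIX-draft ACL
    getfacl /ufs/dir
    # after copying to ZFS: show the resulting ACL4 entries
    ls -dv /tank/dir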
Hello Eric,
Thursday, March 22, 2007, 1:13:19 AM, you wrote:
ES On Thu, Mar 22, 2007 at 01:03:48AM +0100, Robert Milkowski wrote:
What if I have a failing drive (still works but I want it to be
replaced) and I have a replacement drive on a shelf. All I want is
to remove failing drive,
I'm strongly considering using iSCSI with ZFS. What is the current
status with respect to bugs or bad configurations, for S10 U3 patched to 125101-03?
I mean things like, if you have 2 iscsi target hosts as zfs mirrors, and
one goes away, will solaris panic? Will the data be safe after reboot?
Or, if you
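For context, a minimal sketch of the setup being asked about (target addresses and device names are invented):

    # point the initiator at the two iSCSI target hosts
    iscsiadm add discovery-address 192.168.1.10:3260
    iscsiadm add discovery-address 192.168.1.11:3260
    iscsiadm modify discovery --sendtargets enable
    # mirror one LUN from each host so either target can drop away
    zpool create tank mirror c2t1d0 c3t1d0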