Hello Max,
Sunday, August 17, 2008, 1:02:05 PM, you wrote:
mbc> Darren Dunham wrote:
If the most recent uberblock appears valid, but doesn't have useful
data, I don't think there's any way currently to see what the tree of an
older uberblock looks like. It would be nice to see if that data
Hello Paul,
Thursday, August 14, 2008, 9:25:45 PM, you wrote:
PR> This problem is becoming a real pain to us again and I was wondering
PR> if there has been any known fix or workaround in the past few months.
PR> I normally create zfs fs's like this:
PR> zfs create -o quota=131G -o reserv=131G
Paul B. Henson wrote:
Sweet. Might I request an acl evaluation function? Which basically, given a
user and a requested permission, returns either true (user has permission),
false (user doesn't have permission), or error condition. Similar to the
POSIX access() call, but for ACLs. If I had
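The tri-state contract being requested can be illustrated with a short Python sketch. The function name `acl_access` is hypothetical (libsec provides no such call today), and `os.access()` merely stands in for a real ACL evaluator:

```python
import os

# Hypothetical sketch of the requested ACL-evaluation interface,
# modeled on POSIX access(2): True if the user has the permission,
# False if not, and an exception for error conditions.
def acl_access(path, mode):
    """Return True/False for permission `mode` on `path`;
    raise OSError if the question cannot be answered at all."""
    if not os.path.exists(path):
        raise OSError("cannot evaluate ACL: %s does not exist" % path)
    # os.access() stands in for a real ACL walk here.
    return os.access(path, mode)
```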
Hi,
I am searching for a roadmap for shrinking a pool. Is there a
project? Where can I find information on when it will be implemented in
Solaris 10?
Thanks
Regards
Bernhard
--
Bernhard Holzer
Sun Microsystems Ges.m.b.H.
Wienerbergstraße 3/7
A-1100 Vienna, Austria
Phone x60983/+43 1 60563
Try to change uberblock
http://www.opensolaris.org/jive/thread.jspa?messageID=217097
Looks like you are the originator of that thread. In the last message you
promised to post some details on how you recovered, but never did. Can
you please post some details? How did you figure out
Long story short,
There isn't a project, there are no plans to start a project, and don't
expect to see it in Solaris 10 in this lifetime without some serious push
from large Sun customers. Even then, it's unlikely to happen anytime soon
due to the technical complications of doing so.
I read that that should be the case, but did not see it in practice. I
created one volume without the recsize setting and one with. Then I
copied the same data to both (lots of small files). The 'du' report
on the one without the recsize was significantly bigger than the
one where I made it and in
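A toy model of why 'du' can differ: ZFS stores a file smaller than the recordsize in a single block sized to the data, while larger files consume whole records. The function below is purely illustrative and deliberately simplified (it ignores metadata, compression, and raidz/mirror overhead):

```python
def estimated_usage(file_sizes, recordsize=128 * 1024, sector=512):
    """Rough on-disk usage model for ZFS file data.

    Files at or below the recordsize get one variable-sized block
    rounded up to sector granularity; larger files are rounded up
    to whole records. Illustration only -- not the real allocator.
    """
    total = 0
    for size in file_sizes:
        if size <= recordsize:
            total += -(-size // sector) * sector        # ceil to sector
        else:
            total += -(-size // recordsize) * recordsize  # ceil to record
    return total
```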
Hi all,
I have defined a zfs pool on Solaris 10:
[EMAIL PROTECTED]:/kernel/drv zpool list
NAME         SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
nsrcatalog   125G   72.7G  52.4G  58%  ONLINE  -
[EMAIL PROTECTED]:/kernel/drv zpool status nsrcatalog
pool:
Ian Collins wrote:
Mark Shellenbaum wrote:
Paul B. Henson wrote:
Are the libsec undocumented interfaces likely to remain the same when the
acl_t structure changes? They will still require adding the prototypes to
my code so the compiler knows what to make of them, but less chance of
breakage
On Mon, Aug 18, 2008 at 04:43:02PM +0200, Dony Pierre wrote:
I have defined a zfs pool on Solaris 10:
I would now like to use mpxio - the multipathing driver. So, I will change
the file /kernel/drv/fp.conf and set the parameter mpxio-disable=no;
But, I think Solaris will
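For context, the relevant fp.conf fragment looks like the sketch below; the quoting and trailing semicolon follow the driver.conf(4) format, and on Solaris 10 the usual supported way to flip this is `stmsboot -e`, which edits the file for you:

```
# /kernel/drv/fp.conf -- enable MPxIO on fibre-channel ports
mpxio-disable="no";
```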
initiator_host:~ # dd if=/dev/zero bs=1k of=/dev/dsk/c5t0d0 count=100
So this is going at 3000 x 1K writes per second, or
330 usec per write. The iscsi target is probably doing an
over-the-wire operation for each request. So it looks fine
at first glance.
-r
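The arithmetic in the reply checks out; a quick restatement (pure arithmetic, no ZFS or iscsi specifics):

```python
# 3000 x 1 KiB writes per second -> per-write latency and throughput.
writes_per_second = 3000
seconds_per_write = 1.0 / writes_per_second
microseconds_per_write = seconds_per_write * 1e6   # ~333 usec per write
throughput_bytes = writes_per_second * 1024        # ~3 MB/s at 1 KiB writes
```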
Cody Campbell writes:
Kyle McDonald writes:
Ross wrote:
Just re-read that and it's badly phrased. What I meant to say is that a
raid-z / raid-5 array based on 500GB drives seems to have around a 1 in 10
chance of losing some data during a full rebuild.
Actually, I think it's been
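For what it's worth, the "1 in 10" ballpark is consistent with the standard unrecoverable-read-error (URE) arithmetic. The parameters below are assumptions for illustration (a 1e-14 per-bit URE rate and a 4+1 raidz of 500 GB drives), not figures from the thread:

```python
# Probability of hitting at least one unrecoverable read error
# while rebuilding a raidz of 500 GB drives.
ure_per_bit = 1e-14            # assumed consumer-drive URE spec
data_disks = 4                 # assumed width of surviving disks
bytes_read = data_disks * 500e9  # full read during rebuild
bits_read = bytes_read * 8
p_at_least_one = 1 - (1 - ure_per_bit) ** bits_read
# comes out around 0.15, i.e. the same order as "1 in 10"
```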
Hi ZFS team,
I am currently working on fixes for a couple of bugs in the OpenSolaris Caiman
installer, and since they are related to ZFS, I would like to kindly
ask you if you could please help me understand whether the issues
encountered and mentioned below are known (somebody works on them or
if
Mark Shellenbaum wrote:
Ian Collins wrote:
Mark Shellenbaum wrote:
Paul B. Henson wrote:
Are the libsec undocumented interfaces likely to remain the same when the
acl_t structure changes? They will still require adding the prototypes to
my code so the compiler knows what to make of them,
Suppose that ZFS detects an error in the first case. It can't tell
the storage array "something's wrong, please fix it" (since the
storage array doesn't provide for this with checksums and intelligent
recovery), so all it can do is tell the user "this file is corrupt,
recover it from
Ask your hardware vendor. The hardware corrupted your
data, not ZFS.
Right, that's all because of these storage vendors. All problems come from
them! Never from ZFS :-) I get a similar answer from them: ask Sun, ZFS is
buggy. Our storage is always fine. That is really ridiculous! People pay
Ian Collins wrote:
I have a pretty standard ZFS boot AMD64 desktop. At the moment, most ZFS
related commands are hanging (can't be killed). Running 'truss share',
the last few lines I see are:
Can you provide a kernel thread list report? You can use mdb -k to get
that. Once in mdb type
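The quoted reply is cut off before the actual command. For reference, a kernel thread list is normally produced with mdb's ::threadlist dcmd; whether that is exactly what was meant here is an assumption:

```
# mdb -k
> ::threadlist -v
```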
Borys Saulyak wrote:
Suppose that ZFS detects an error in the first case. It can't tell
the storage array "something's wrong, please fix it" (since the
storage array doesn't provide for this with checksums and intelligent
recovery), so all it can do is tell the user "this file is
Dony Pierre wrote:
Hi all,
I have defined a zfs pool on Solaris 10:
[EMAIL PROTECTED]:/kernel/drv zpool list
NAME         SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
nsrcatalog   125G   72.7G  52.4G  58%  ONLINE  -
[EMAIL PROTECTED]:/kernel/drv
Mark Shellenbaum wrote:
Ian Collins wrote:
I have a pretty standard ZFS boot AMD64 desktop. At the moment, most ZFS
related commands are hanging (can't be killed). Running 'truss share',
the last few lines I see are:
Can you provide a kernel thread list report? You can use mdb -k to get
* andrew [EMAIL PROTECTED] [2008-08-16 00:38]:
Hmm... Just tried the same thing on SXCE build 95 and it works fine.
Strange. Anyone know what's up with OpenSolaris (the distro)? I'm
using the ISO of OpenSolaris 2008.11 snv_93 image-updated to build 95,
if that makes a difference. I've not tried