I have been using the iosnoop script (see
http://www.opensolaris.org/os/community/dtrace/scripts/) written by Brendan
Gregg to look at the I/O operations of my application. When I was running my
test program on a UFS filesystem I could see both read and write operations,
like:
UID PID D BLOCK
G'Day Trond,
On Thu, 20 Jul 2006, Trond Norbye wrote:
I have been using the iosnoop script (see
http://www.opensolaris.org/os/community/dtrace/scripts/) written by
Brendan Gregg to look at the I/O operations of my application. When I was
running my test program on a UFS filesystem I could see both
Hello,
Does the work of IEEE's Security in Storage Working Group [1] have
any effect on the design of ZFS's encryption modules? Or do the two
efforts deal with different layers?
Seems that 1619 is more geared towards SAN disks, which 'regular'
file systems tend to sit on top of and not know about
Has anyone looked into adding support for ZFS ACLs into rsync? It would be
really convenient if it supported transparent conversion from old-style
POSIX ACLs to ZFS ACLs on the fly.
One-way POSIX-to-ZFS is probably good enough. I've tried Googling, but haven't
come up with much. There
Peter Eriksson wrote:
Has anyone looked into adding support for ZFS ACLs into rsync? It would be
really convenient if it supported transparent conversion from old-style
POSIX ACLs to ZFS ACLs on the fly.
One-way POSIX-to-ZFS is probably good enough. I've tried Googling, but haven't
come
... and in a related question - since rsync uses the ACL code from the Samba
project - has there been some progress in that direction too?
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
The zdb interface is certainly unstable. We plan on automatically doing
this at a future date (bugid not handy), but it's a little tricky for
live filesystems. If your filesystem is undergoing a lot of churn, you
may notice that zdb(1M) will blow up with an I/O error or assertion
failure
What does 'zpool status -v' show? This sounds like you have corruption
in the dnode (a.k.a. metadata). This corruption is unrepairable at the
moment, since we have no way of knowing the extent of the blocks that
this dnode may be referencing. You should be able to move this file
aside, however.
Hey, does anybody know the timeframe for when Legato Networker will support ZFS? Or, perhaps, a workaround whereby backups of ZFS by Networker will succeed?
-Gregory Shaw, IT Architect
Phone: (303) 673-8273  Fax: (303) 673-2773
IT CTO Group, Sun Microsystems Inc.
1 StorageTek Drive ULVL4-382
Gregory Shaw wrote:
Hey, does anybody know the timeframe for when Legato Networker will
support ZFS?
Or, perhaps, a workaround whereby backups of ZFS by Networker will succeed?
EMC has a patch that we have tested and it appears to work.
Last I heard they were planning on releasing the patch
Luc I. Suryo wrote:
Gregory Shaw wrote:
Hey, does anybody know the timeframe for when Legato Networker will
support ZFS?
Or, perhaps, a workaround whereby backups of ZFS by Networker will succeed?
EMC has a patch that we have tested and it appears to work.
Last I heard they were planning
The EMC/Legato NetWorker (a.k.a. Sun StorEdge EBS) support for ZFS
NFSv4 ACLs will be in the NetWorker 7.3.2 release, currently targeted
for September.
Regards,
-- Anne
Mark Shellenbaum wrote:
Gregory Shaw wrote:
Hey, does anybody know the timeframe for when Legato Networker will
do you know if this is for 7.3, or will it work for 7.2 too?
we are still using 7.2 and have no plans to update to 7.3 yet...
right now we are doing snapshots and sending them to tar on tape; ugly...
Do you have ACLs you need to maintain? Can you just specify a snapshot
as a saveset directly?
--
Darren
Anne Wong wrote:
The EMC/Legato NetWorker (a.k.a. Sun StorEdge EBS) support for ZFS
NFSv4 ACLs will be in the NetWorker 7.3.2 release, currently targeted
for September.
Will it also support the new ZFS-style automounts?
Or do I have to set
zfs set mountpoint=legacy
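If legacy mounting does turn out to be necessary, it is just a property change plus a vfstab entry; a rough sketch, with a hypothetical dataset name:

```shell
# Hypothetical dataset: switch tank/export to traditional mount management
zfs set mountpoint=legacy tank/export

# Mount it the old way...
mount -F zfs tank/export /export

# ...and make it persistent with a vfstab line such as:
#   tank/export  -  /export  zfs  -  yes  -
```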
On Thu, Jul 20, 2006 at 12:58:31AM -0700, Trond Norbye wrote:
I have been using the iosnoop script (see
http://www.opensolaris.org/os/community/dtrace/scripts/) written by
Brendan Gregg to look at the I/O operations of my application.
...
So how can I get the same information from a ZFS file-system?
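For what it's worth, one way to approximate iosnoop for ZFS is to trace at the VFS layer with the fsinfo provider (if your DTrace build ships it); the io provider only fires for physical disk I/O, so ZFS reads satisfied from the ARC never show up there. A rough sketch:

```shell
# Per-file ZFS reads/writes at the VFS layer (sketch; assumes the fsinfo
# provider is available). arg1 is the byte count for read/write probes.
dtrace -n '
  fsinfo:::read, fsinfo:::write
  /args[0]->fi_fs == "zfs"/
  {
      printf("%-6s %8d bytes  %s", probename, arg1, args[0]->fi_pathname);
  }'
```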
Eric Schrock wrote:
What does 'zpool status -v' show? This sounds like you have corruption
# zpool status -v
pool: junk
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in
Well the fact that it's a level 2 indirect block indicates why it can't
simply be removed. We don't know what data it refers to, so we can't
free the associated blocks. The panic on move is quite interesting -
after BFU give it another shot and file a bug if it still happens.
- Eric
On Thu,
On Thu, 20 Jul 2006, Darren Dunham wrote:
Well the fact that it's a level 2 indirect block indicates why it can't
simply be removed. We don't know what data it refers to, so we can't
free the associated blocks. The panic on move is quite interesting -
after BFU give it another shot and
Note that there are two common reasons to have a fsck-like utility -
1. Detect corruption
2. Repair corruption
For the first, we have scrubbing (and eventually background scrubbing)
so it's pointless in the ZFS world. For the latter, the types of things
it repairs are known pathologies endemic
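For the detection half, the fsck equivalent is simply a scrub; with a hypothetical pool name:

```shell
# Walk every allocated block in the pool and verify its checksum;
# any errors found are reported in the status output.
zpool scrub tank
zpool status -v tank
```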
Do you have ACLs you need to maintain? Can you just specify a snapshot
as a saveset directly?
Well, we're not (yet) worried about the ACLs as long as we have a backup; we use
zfs send/receive of the snapshot into a single tar and then to tape..
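For what it's worth, the intermediate tar step can be skipped by streaming the snapshot straight to the tape device; a sketch, with hypothetical dataset and device names:

```shell
# Hypothetical names: snapshot, then stream directly to tape
zfs snapshot tank/home@nightly
zfs send tank/home@nightly | dd of=/dev/rmt/0n bs=1048576

# restore later into a new dataset with:
#   dd if=/dev/rmt/0n bs=1048576 | zfs receive tank/restored
```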
I meant, rather than taring it up, can you just
Darren Dunham wrote:
I meant, rather than taring it up, can you just pass the snapshot mount
point to Networker as a saveset?
Yup, in my brief testing, I was able to backup a snapdir using
Networker. Pointing Networker at a ZFS mountpoint with the snapdir shown
( .zfs, at the top level
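A note for anyone trying to reproduce this: the .zfs directory is hidden by default, so making it visible first can help a file-based backup tool walk it (hypothetical dataset name):

```shell
zfs set snapdir=visible tank/home   # default is snapdir=hidden
ls /tank/home/.zfs/snapshot         # one subdirectory per snapshot
```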
Och, sorry - a clarification might be needed to my reply:
Tim Foster wrote:
Darren Dunham wrote:
I meant, rather than taring it up, can you just pass the snapshot mount
point to Networker as a saveset?
Yup, in my brief testing, I was able to backup a snapdir using
Networker.
... ** with the
Basically, the first step is to identify the file in question so the
user knows what's been lost. The second step is a way to move these
blocks into purgatory, where they won't take up filesystem namespace,
but still account for used space. The final step is to actually delete
the blocks
Joseph Mocker wrote:
...
Anyways, I found the ::memstat dcmd for mdb. So I gave it a spin and it
looked something like
Page Summary        Pages      MB   %Tot
Kernel             139650    1091    36%
So what's going on! Please help. I want my memory back!
This is essentially by design, due to the way that ZFS uses kernel
memory for caching and other stuff.
You can alleviate this somewhat by running on a 64-bit processor, which
has a significantly larger address space to play with.
Uhh.
There are two things to note here:
1. The vast majority of the memory is being used by the ZFS cache, but
appears under 'kernel heap'. If you actually need the memory, it
_should_ be released. Under UFS, this cache appears as the 'page
cache', and users understand that it can be released
Eric,
Thanks for the explanation. I am familiar with the UFS cache and assumed
the ZFS cache would work the same way.
However, it seems like there are a few bugs here. Here's what I see.
1. I can cause an out of memory situation by simply copying a bunch of
files between folders in a ZFS
Something I often do when I'm a little suspicious of this sort of
activity is to run something that steals vast quantities of memory...
eg: something like this:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    int memsize = 0;
    char *memory;

    /* sketch: grab 10MB at a time, touching it, until malloc() fails */
    while ((memory = malloc(10 * 1024 * 1024)) != NULL) {
        memset(memory, 1, 10 * 1024 * 1024);
        memsize += 10;
        printf("allocated %d MB\n", memsize);
    }
    return (0);
}
Bart Smaalders wrote:
How much swap space is configured on this machine?
Zero. Is there any reason I would want to configure any swap space?
--joe
Yeah, I was a little suspicious of my mkfile-in-tmpfs test, so I went
ahead and wrote a program not so different from this one.
The results were the same. I could only allocate about 512M before
things went bad.
--joe
Nathan Kroenert wrote:
Something I often do when I'm a little suspicious