Re: [zfs-discuss] zfs/nfs issue editing existing files

2008-06-10 Thread Brandon High
On Mon, Jun 9, 2008 at 10:44 PM, Robert Thurlow [EMAIL PROTECTED] wrote:
 Brandon High wrote:
 AFAIK, you're doing the best that you can while playing within the
 constraints of ZFS. If you want to use nfs v3 with your clients,
 you'll need to use UFS as the back end.

 Just a clarification: NFSv3 isn't a problem in general, to my knowledge;
 Andy has a problem with an HP-UX client bug for which he can't seem to
 get a fix.

It was clear to me when I wrote it, but by "your clients" I meant
Andy's buggy hosts that can't be upgraded to a version with a fix in
place.

ZFS works fine with other nfs v3 clients (or v2, or v4), AFAIK.

-B

-- 
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs/nfs issue editing existing files

2008-06-09 Thread Andy Lubel

On Jun 6, 2008, at 11:22 AM, Andy Lubel wrote:

 That was it!

 hpux-is-old.com -> nearline.host NFS C GETATTR3 FH=F6B3
 nearline.host -> hpux-is-old.com NFS R GETATTR3 OK
 hpux-is-old.com -> nearline.host NFS C SETATTR3 FH=F6B3
 nearline.host -> hpux-is-old.com NFS R SETATTR3 Update synch mismatch
 hpux-is-old.com -> nearline.host NFS C GETATTR3 FH=F6B3
 nearline.host -> hpux-is-old.com NFS R GETATTR3 OK
 hpux-is-old.com -> nearline.host NFS C SETATTR3 FH=F6B3
 nearline.host -> hpux-is-old.com NFS R SETATTR3 Update synch mismatch
 hpux-is-old.com -> nearline.host NFS C GETATTR3 FH=F6B3
 nearline.host -> hpux-is-old.com NFS R GETATTR3 OK
 hpux-is-old.com -> nearline.host NFS C SETATTR3 FH=F6B3
 nearline.host -> hpux-is-old.com NFS R SETATTR3 Update synch mismatch
 hpux-is-old.com -> nearline.host NFS C GETATTR3 FH=F6B3

 It is too bad our silly hardware only allows us to go to 11.23.
 That's OK though; in a couple months we will be replacing this server
 with new x4600s.

 Thanks for the help,

 -Andy



Update:

We tried nfs v2 and the speed was terrible, but the getattr/setattr
issue was gone.  So what I'm looking at doing now is to create a raw
volume, format it with ufs, mount it locally, then share it over
nfs.  Luckily we will only have to do it this way for a few months; I
don't like the extra layer, and the block device isn't as fast as we
hoped (I get about 400MB/s on the zfs filesystem and 180MB/s using the
ufs-formatted local disk).  I sure hope I'm not breaking any
rules by implementing this workaround that will come back to haunt me
later.

-Andy






Re: [zfs-discuss] zfs/nfs issue editing existing files

2008-06-09 Thread Andy Lubel

On Jun 9, 2008, at 12:28 PM, Andy Lubel wrote:



 Update:

 We tried nfs v2 and the speed was terrible, but the getattr/setattr
 issue was gone.  So what I'm looking at doing now is to create a raw
 volume, format it with ufs, mount it locally, then share it over
 nfs.  Luckily we will only have to do it this way for a few months; I
 don't like the extra layer, and the block device isn't as fast as we
 hoped (I get about 400MB/s on the zfs filesystem and 180MB/s using the
 ufs-formatted local disk).  I sure hope I'm not breaking any
 rules by implementing this workaround that will come back to haunt me
 later.

 -Andy

Tried this today, and although things appear to function correctly, the
performance seems to be steadily degrading.  Am I getting burnt by
double-caching?  If so, what is the best way to work around my sad
situation?  I tried directio for the ufs volume and it made it even
worse..

The only thing left I know to do is to destroy one of my zfs pools and go
back to SVM until we can get some newer nfs clients writing to this
nearline.  It pains me deeply!!

TIA,

-Andy







Re: [zfs-discuss] zfs/nfs issue editing existing files

2008-06-09 Thread Brandon High
On Mon, Jun 9, 2008 at 3:14 PM, Andy Lubel [EMAIL PROTECTED] wrote:
 Tried this today, and although things appear to function correctly, the
 performance seems to be steadily degrading.  Am I getting burnt by
 double-caching?  If so, what is the best way to work around my sad
 situation?  I tried directio for the ufs volume and it made it even
 worse..

AFAIK, you're doing the best that you can while playing within the
constraints of ZFS. If you want to use nfs v3 with your clients,
you'll need to use UFS as the back end.

You can check the size of the caches to see if that's the problem. If
you just want to take a shot in the dark, and if this is the only
filesystem in your zpool, either reduce the size of the zfs ARC cache
or reduce the size of the UFS cache.
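As a sketch of the ARC side of that (the 1 GB cap below is only an example value, not a recommendation, and the tunable requires a build that supports zfs_arc_max):

```shell
# Report the current ARC size in bytes:
kstat -p zfs:0:arcstats:size

# To cap the ARC, add a line like this to /etc/system and reboot.
# 0x40000000 (1 GB) is only an example -- size it for your workload:
#   set zfs:zfs_arc_max = 0x40000000
```

The UFS page cache has its own /etc/system tunables (e.g. bufhwm); check the Solaris Tunable Parameters guide before changing either side.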

-B

-- 
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche


Re: [zfs-discuss] zfs/nfs issue editing existing files

2008-06-09 Thread Robert Thurlow
Brandon High wrote:
 On Mon, Jun 9, 2008 at 3:14 PM, Andy Lubel [EMAIL PROTECTED] wrote:
 Tried this today, and although things appear to function correctly, the
 performance seems to be steadily degrading.  Am I getting burnt by
 double-caching?  If so, what is the best way to work around my sad
 situation?  I tried directio for the ufs volume and it made it even
 worse..
 
 AFAIK, you're doing the best that you can while playing within the
 constraints of ZFS. If you want to use nfs v3 with your clients,
 you'll need to use UFS as the back end.

Just a clarification: NFSv3 isn't a problem in general, to my knowledge;
Andy has a problem with an HP-UX client bug for which he can't seem to
get a fix.

Rob T


Re: [zfs-discuss] zfs/nfs issue editing existing files

2008-06-06 Thread Andy Lubel
That was it!

hpux-is-old.com -> nearline.host NFS C GETATTR3 FH=F6B3
nearline.host -> hpux-is-old.com NFS R GETATTR3 OK
hpux-is-old.com -> nearline.host NFS C SETATTR3 FH=F6B3
nearline.host -> hpux-is-old.com NFS R SETATTR3 Update synch mismatch
hpux-is-old.com -> nearline.host NFS C GETATTR3 FH=F6B3
nearline.host -> hpux-is-old.com NFS R GETATTR3 OK
hpux-is-old.com -> nearline.host NFS C SETATTR3 FH=F6B3
nearline.host -> hpux-is-old.com NFS R SETATTR3 Update synch mismatch
hpux-is-old.com -> nearline.host NFS C GETATTR3 FH=F6B3
nearline.host -> hpux-is-old.com NFS R GETATTR3 OK
hpux-is-old.com -> nearline.host NFS C SETATTR3 FH=F6B3
nearline.host -> hpux-is-old.com NFS R SETATTR3 Update synch mismatch
hpux-is-old.com -> nearline.host NFS C GETATTR3 FH=F6B3

It is too bad our silly hardware only allows us to go to 11.23.
That's OK though; in a couple months we will be replacing this server
with new x4600s.

Thanks for the help,

-Andy


On Jun 5, 2008, at 6:19 PM, Robert Thurlow wrote:

 Andy Lubel wrote:

 I've got a real doozie..  We recently implemented a b89 as a
 zfs/nfs/cifs server.  The NFS client is HP-UX (11.23).
 What's happening is when our dba edits a file on the nfs mount
 with vi, it will not save.
 I removed vi from the mix by doing 'touch /nfs/file1' then
 'echo abc > /nfs/file1' and it just sat there while the nfs server's
 cpu went up to 50% (one full core).

 Hi Andy,

 This sounds familiar: you may be hitting something I diagnosed
 last year.  Run snoop and see if it loops like this:

 10920   0.00013 141.240.193.235 -> 141.240.193.27 NFS C GETATTR3
 FH=6614
 10921   0.7 141.240.193.27 -> 141.240.193.235 NFS R GETATTR3 OK
 10922   0.00017 141.240.193.235 -> 141.240.193.27 NFS C SETATTR3
 FH=6614
 10923   0.7 141.240.193.27 -> 141.240.193.235 NFS R SETATTR3
 Update synch mismatch
 10924   0.00017 141.240.193.235 -> 141.240.193.27 NFS C GETATTR3
 FH=6614
 10925   0.00023 141.240.193.27 -> 141.240.193.235 NFS R GETATTR3 OK
 10926   0.00026 141.240.193.235 -> 141.240.193.27 NFS C SETATTR3
 FH=6614
 10927   0.9 141.240.193.27 -> 141.240.193.235 NFS R SETATTR3
 Update synch mismatch

 If you see this, you've hit what we filed as Sun bugid 6538387,
 HP-UX automount NFS client hangs for ZFS filesystems.  It's an
 HP-UX bug, fixed in HP-UX 11.31.  The synopsis is that HP-UX gets
 bitten by the nanosecond resolution on ZFS.  Part of the CREATE
 handshake is for the server to send the create time as a 'guard'
 against almost-simultaneous creates - the client has to send it
 back in the SETATTR to complete the file creation.  HP-UX has only
 microsecond resolution in their VFS, and so the 'guard' value is
 not sent accurately and the server rejects it, lather rinse and
 repeat.  The spec, RFC 1813, talks about this in section 3.3.2.
 You can use NFSv2 in the short term until you get that update.

 If you see something different, by all means send us a snoop.

 Rob T



Re: [zfs-discuss] zfs/nfs issue editing existing files

2008-06-05 Thread Lisa Week

Andy Lubel wrote:
 Hello,

 I've got a real doozie..  We recently implemented a b89 as a
 zfs/nfs/cifs server.  The NFS client is HP-UX (11.23).

 What's happening is when our dba edits a file on the nfs mount with  
 vi, it will not save.

 I removed vi from the mix by doing 'touch /nfs/file1' then 'echo abc
 > /nfs/file1' and it just sat there while the nfs server's cpu went up
 to 50% (one full core).

 This nfsstat is most troubling (I zeroed it and only tried to echo
 data into a file, so these are the numbers for about 2 minutes before I
 CTRL-C'ed the echo command).

 Version 3: (11242416 calls)
 null         getattr      setattr      lookup   access   readlink
 0 0%         5600958 49%  5600895 49%  19 0%    9 0%     0 0%
 read         write        create       mkdir    symlink  mknod
 0 0%         40494 0%     5 0%         0 0%     0 0%     0 0%
 remove       rmdir        rename       link     readdir  readdirplus
 3 0%         0 0%         0 0%         0 0%     0 0%     7 0%
 fsstat       fsinfo       pathconf     commit
 12 0%        0 0%         0 0%         14 0%


 That's a lot of getattr and setattr!  Does anyone have any advice on
 where I should start to figure out what is going on?  truss, dtrace,
 snoop.. so many choices!

snoop would be a fine place to start. It'll tell us how the server is
responding to all those getattr/setattr calls.
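A minimal capture along those lines (the hostname and output path are placeholders, and port 2049 assumes the default NFS port):

```shell
# Capture NFS traffic between the server and the client to a file:
snoop -o /tmp/nfs.snoop host hpux-client and port 2049

# Summarize the capture afterwards; a GETATTR/SETATTR ping-pong will
# show up as repeating "C GETATTR3" / "R SETATTR3" lines:
snoop -i /tmp/nfs.snoop | head -40
```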

- Lisa


Re: [zfs-discuss] zfs/nfs issue editing existing files

2008-06-05 Thread Robert Thurlow
Andy Lubel wrote:

 I've got a real doozie..  We recently implemented a b89 as a
 zfs/nfs/cifs server.  The NFS client is HP-UX (11.23).
 
 What's happening is when our dba edits a file on the nfs mount with
 vi, it will not save.
 
 I removed vi from the mix by doing 'touch /nfs/file1' then 'echo abc
 > /nfs/file1' and it just sat there while the nfs server's cpu went up
 to 50% (one full core).

Hi Andy,

This sounds familiar: you may be hitting something I diagnosed
last year.  Run snoop and see if it loops like this:

10920   0.00013 141.240.193.235 -> 141.240.193.27 NFS C GETATTR3 FH=6614
10921   0.7 141.240.193.27 -> 141.240.193.235 NFS R GETATTR3 OK
10922   0.00017 141.240.193.235 -> 141.240.193.27 NFS C SETATTR3 FH=6614
10923   0.7 141.240.193.27 -> 141.240.193.235 NFS R SETATTR3 Update synch mismatch
10924   0.00017 141.240.193.235 -> 141.240.193.27 NFS C GETATTR3 FH=6614
10925   0.00023 141.240.193.27 -> 141.240.193.235 NFS R GETATTR3 OK
10926   0.00026 141.240.193.235 -> 141.240.193.27 NFS C SETATTR3 FH=6614
10927   0.9 141.240.193.27 -> 141.240.193.235 NFS R SETATTR3 Update synch mismatch

If you see this, you've hit what we filed as Sun bugid 6538387,
HP-UX automount NFS client hangs for ZFS filesystems.  It's an
HP-UX bug, fixed in HP-UX 11.31.  The synopsis is that HP-UX gets
bitten by the nanosecond resolution on ZFS.  Part of the CREATE
handshake is for the server to send the create time as a 'guard'
against almost-simultaneous creates - the client has to send it
back in the SETATTR to complete the file creation.  HP-UX has only
microsecond resolution in their VFS, and so the 'guard' value is
not sent accurately and the server rejects it, lather rinse and
repeat.  The spec, RFC 1813, talks about this in section 3.3.2.
You can use NFSv2 in the short term until you get that update.

If you see something different, by all means send us a snoop.
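As an illustrative sketch of that guard check (the timestamp below is made up, and real clients do this in kernel code, not shell): ZFS hands out a create time with nanosecond precision, the client can only store microseconds, so the value it echoes back in SETATTR never compares equal.

```shell
#!/bin/sh
# Hypothetical create time in nanoseconds since the epoch (made-up value):
server_ctime_ns=1212712853123456789

# An HP-UX 11.23 client keeps only microseconds, so the low three
# digits are lost when it stores the time...
client_guard_ns=$(( server_ctime_ns / 1000 * 1000 ))

# ...and the exact-match 'guard' comparison in SETATTR then fails:
if [ "$client_guard_ns" -eq "$server_ctime_ns" ]; then
    echo "SETATTR3 OK"
else
    echo "SETATTR3 Update synch mismatch"   # client retries, forever
fi
```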

Rob T


Re: [zfs-discuss] zfs / nfs issue (not performance :-) with courier-imap

2007-01-25 Thread Robert Milkowski
Hello Chad,

Thursday, January 25, 2007, 9:14:24 AM, you wrote:

CLSNL I am not sure if this is a zfs issue, an nfs issue, a combination
CLSNL of the two, or not an issue with them per se (caching or whatever),
CLSNL or a courier-imap issue, or even a mail client issue.

CLSNL However, the issue happens in at least two different unrelated mail
CLSNL clients, so I don't think it is client related, and I have spoken to
CLSNL someone who uses courier-imap on nfs mounted directories for maildir
CLSNL mailstore using FreeBSD 6.x to NetApp nfs servers without issue (my
CLSNL nfs client is FreeBSD 6.x while the server is Solaris 10 x86 serving
CLSNL ZFS backed filesystems over nfs), so maybe it is something to do with
CLSNL ZFS and NFS interaction.

CLSNL Basically, I have a few maildir mailstores that are mounted on my
CLSNL FreeBSD imap server from a Solaris 10 server that serves them using
CLSNL NFSv3 from ZFS filesystems (each maildir has its own ZFS
CLSNL filesystem).  Most of my maildirs are on a local disk and do not have
CLSNL a problem, and a few on the nfs/zfs do not have the problem, and a few
CLSNL have the problem that appeared right after they were migrated from
CLSNL the local disk to the zfs/nfs filesystem for testing (we would
CLSNL eventually like to move all mail over to this nfs/zfs setup).

CLSNL Basically, in the affected accounts (under Apple Mail.app and Windows
CLSNL Thunderbird), you can delete 1 or more messages, (mark for delete),  
CLSNL expunge, and then mail starting some place in the list after the  
CLSNL deleted messages starts to show the wrong mail content for the given  
CLSNL message as shown in the list view.

CLSNL say I have messages A B C D E F G etc

CLSNL A
CLSNL B
CLSNL C
CLSNL D
CLSNL E
CLSNL F
CLSNL G

CLSNL I delete C and expunge

CLSNL Now it looks like this

CLSNL A
CLSNL B
CLSNL D
CLSNL E
CLSNL F
CLSNL G

CLSNL but if I click, say, E, it has F's contents, F has G's contents, and no
CLSNL mail has D's contents that I can see.  But the list in the mail
CLSNL client list view is correct.

I don't believe it's a problem with the nfs/zfs server.

Please try a simple dtrace script (or even truss) to see what files
your imapd actually opens when you click E - I don't believe it opens E
and you get F's contents; I would bet it opens F.



-- 
Best regards,
 Robert                    mailto:[EMAIL PROTECTED]
                           http://milek.blogspot.com



Re: [zfs-discuss] zfs / nfs issue (not performance :-) with courier-imap

2007-01-25 Thread Ben Rockwood

Robert Milkowski wrote:

CLSNL but if I click, say, E, it has F's contents, F has G's contents, and no
CLSNL mail has D's contents that I can see.  But the list in the mail
CLSNL client list view is correct.

I don't believe it's a problem with the nfs/zfs server.

Please try a simple dtrace script (or even truss) to see what files
your imapd actually opens when you click E - I don't believe it opens E
and you get F's contents; I would bet it opens F.


I completely agree with Robert.  I'd personally suggest 'truss' to start 
because it's trivial to use, then start using DTrace to further narrow down 
the problem.


In the case of Courier-IMAP, the best way to go about it would be to 
truss the parent (courierlogger, which calls courierlogin and ultimately 
imapd) using 'truss -f -p PID'.  Then open the mailbox and watch 
those stat and open calls closely.
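A sketch of that session (finding the PID with pgrep is my own convenience here, and the egrep pattern is only illustrative):

```shell
# Attach to the courierlogger parent and follow every child it forks
# (-f), then filter the syscall trace down to file lookups and opens:
pid=$(pgrep -x courierlogger)
truss -f -p "$pid" 2>&1 | egrep 'open|stat'
```

Then open the affected mailbox in the client and watch which maildir filenames actually get opened when you select message E.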


I'll be very interested in your findings.  We use Courier on NFS/ZFS 
heavily and I'm thankful to report having no such problems.


benr.


Re: [zfs-discuss] ZFS/NFS issue...

2006-11-03 Thread eric kustarz

Erik Trimble wrote:

I actually think this is an NFSv4 issue, but I'm going to ask here
anyway...


Server:  Solaris 10 Update 2 (SPARC), with several ZFS file systems
shared via the legacy method (/etc/dfs/dfstab and share(1M), not via the
ZFS property).  Default settings in /etc/default/nfs.

bigbox# share
-   /data/archive   rw,anon=0   
bigbox# ls -ld /data/archive
drwxrwxrwx   9 root other 10 Nov  3 14:15 /data/archived




Client A:  Solaris 10 (various patchlevels, both x86 and SPARC)

user1% cd /net/bigbox/data/archived
user1% ls -ld .
drwxrwxrwx   9 nobody   nobody10 Nov  3 14:49 ./
user1% touch me
user1% mkdir foo
mkdir: Failed to make directory foo; Permission denied




Client B:  Solaris 8/9, various Linuxes, both x86/SPARC

user1% cd /net/bigbox/data/archived
user1% ls -ld .
drwxrwxrwx   9 root other 11 Nov  3 14:49 ./
user1% touch me
user1% mkdir foo




It looks like the Solaris 10 machines aren't mapping the userIDs
correctly. All machines belong to the same NIS domain. I suspect NFSv4,
but can't be sure. Am I doing something wrong here?



Make sure your NFSv4 mapid domain matches (client and server).

http://blogs.sun.com/erickustarz/entry/nfsmapid_domain

You can override the default in /etc/default/nfs, and you can check what 
your current one is in /var/run/nfs4_domain.
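A quick way to compare the two (the domain value shown is only an example):

```shell
# Run on both client and server -- the effective NFSv4 mapid domain:
cat /var/run/nfs4_domain

# If they differ, set it explicitly in /etc/default/nfs, e.g.:
#   NFSMAPID_DOMAIN=example.com
# then restart the mapid service so the change takes effect.
```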


eric



Re: [zfs-discuss] ZFS/NFS issue...

2006-11-03 Thread Karen Yeung
Don't forget to restart mapid after modifying the default domain in 
/etc/default/nfs.

As root, run 'svcadm restart svc:/network/nfs/mapid'.

I've run into this in the past.

Karen


