Timed out
02/11/2009 23:27:04 com.apple.Finder[257] mount_nfs: bad MNT RPC: RPC: Timed out
02/11/2009 23:27:04 com.apple.Finder[257] mount_nfs: can't access /export/home/ben/Documents: Permission denied
02/11/2009 23:27:45 com.apple.Finder[257] mount_nfs: bad MNT RPC: RPC: Timed out
ile on the server everything
was fine.
I did notice while playing that SSH'ing to the server was taking a stupid
amount of time (perhaps 40 seconds to ask for my password and finally connect).
I presume SSH didn't time out as quickly as the mount request did.
Many thanks to you all :)
Ben
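A 40-second stall before the password prompt is the classic signature of a
reverse-DNS lookup timing out on the server, which would also explain the
mount_nfs RPC timeouts above. A quick check (the client address here is
illustrative, not from Ben's message):

    # time getent hosts 192.168.100.50   # should return instantly
    # nslookup 192.168.100.50            # if this stalls, sshd and mountd stall too

If the lookup hangs, fixing the resolver configuration or the PTR record (or
disabling client-hostname lookups in sshd_config, where the SSH implementation
supports it) removes the delay.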
-
Hey Guys,
I'm trying to tune my NFS environment and have yet to make any improvement; I
was hoping someone could offer some experience in this situation.
Here's the problem: I've got a bunch of X4100 clients (NV_B43) and a Thumper
(NV_B43) NFS server using ZFS for storage. I'm currently u
I've got a Thumper doing nothing but serving NFS. It's using B43 with
zil_disabled. The system is being consumed in waves, but by what I don't know.
Notice vmstat:
 kthr      memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr s0 s1 s2 s3   in   sy   cs us sy id
 3 0 0 25693580 2586268 0 0 0 0 0 0 0 0 0 0 0 926 91 703 0 25 75
 21 0 0 25693580 2586268 0 0 0 0 0 0 0 0 0 13
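The last three vmstat columns (us/sy/id) show the waves landing in sys time,
so the consumer is in the kernel. A DTrace sketch (not from the original
thread) that samples kernel stacks for 30 seconds and prints the 20 hottest:

    # dtrace -n 'profile-997 /arg0/ { @[stack()] = count(); }
        tick-30s { trunc(@, 20); printa(@); exit(0); }'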
Eric Kustarz wrote:
> Ben Rockwood wrote:
> > I've got a Thumper doing nothing but serving NFS. Its using B43 with
> > zil_disabled. The system is being consumed in waves, but by what I
> > don't know. Notice vmstat:
>
> We made several performance fixes in
I wanted to add one more piece of information to this problem that may
or may not be helpful.
On an NFS client if we just do "ls" commands over and over and over we
can snoop the wire and see TCP retransmits whenever the CPU is burned
up. nfsstat doesn't record these retransmits, they are happ
eric kustarz wrote:
> So i'm guessing there's lots of files being created over NFS in one
> particular dataset?
>
> We should figure out how many creates/second you are doing over NFS (i
> should have put a timeout on the script). Here's a real simple one
> (from your snoop it looked like you'r
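The script itself is cut off in this archive. A sketch in the same spirit,
counting NFSv3 creates per second on the server via the fbt provider (the
function name is an assumption based on the NFS server code of that era):

    # dtrace -n 'fbt::rfs3_create:entry { @c = count(); }
        tick-1s { printa("creates/sec: %@d\n", @c); trunc(@c); }'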
Spencer Shepler wrote:
> Good to hear that you have figured out what is happening, Ben.
>
> For future reference, there are two commands that you may want to
> make use of in observing the behavior of the NFS server and individual
> filesystems.
>
> There is the trusty, nfss
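The quote is truncated; the first command is evidently nfsstat, and fsstat is
the likely second, given the mention of individual filesystems. For example:

    # nfsstat -s        # server-side RPC and per-operation NFS counters
    # fsstat zfs 5      # per-filesystem-type operation rates at 5s intervals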
Bill Moore wrote:
> On Fri, Dec 08, 2006 at 12:15:27AM -0800, Ben Rockwood wrote:
>
>> Clearly ZFS file creation is just amazingly heavy even with ZIL
>> disabled. If creating 4,000 files in a minute squashes 4 2.6Ghz Opteron
>> cores we're in big trouble in the
>>
>
> ek> Actually i forgot he had 'zil_disable' turned on, so it won't matter in
> ek> this case.
>
>
> Ben, are you sure zil_disable was set to 1 BEFORE pool was imported?
>
Yes, absolutely. Set var in /etc/system, reboot, system comes up.
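For reference, the tunable in question was set in /etc/system on builds of
that era (it was later removed in favor of the per-dataset sync property):

    * /etc/system -- disable the ZFS intent log (Nevada-era tunable)
    set zfs:zil_disable = 1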
With all the B43 systems I have, there has been only one recurring problem
that I see across almost all of them. From time to time a system will panic,
and they all look like this:
> ::status
debugging crash dump vmcore.0 (64-bit) from rosario
operating system: 5.11 snv_43 (i86pc)
panic message
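The rest of the ::status output is cut off here. For anyone triaging a
similar panic, the dump can be inspected the same way:

    # cd /var/crash/rosario
    # mdb unix.0 vmcore.0
    > ::status     # panic string and dump metadata
    > ::stack      # stack of the panicking thread
    > ::msgbuf     # console messages leading up to the panic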
Robert Gordon wrote:
>
> This panic is covered by the bug id 6459866, and was delivered in
> build 52...
>
> -- Robert.
Thanks Robert!
benr.
Any idea as to how feasible or difficult it would be to back port this to B43
as a patch?
Mahesh Siddheshwar wrote:
> Robert Gordon wrote:
>>
>> I'm not sure. The suggested fix in the bug looks like a simple one
>> liner.. maybe Sameer/Mahesh can comment ?
>>
>>
>> Robert..
>>
>>
>> On Dec 13, 2006, at 3:16 PM, Ben Rockwood wrote:
Could you please point me to the magic one line fix?
Can someone please clarify the ability to utilize ACLs over NFSv3 from a ZFS
share? I can "getfacl" but I can't "setfacl". I can't find any documentation
in this regard. My suspicion is that ZFS shares must be NFSv4 in order to
utilize ACLs, but I'm hoping this isn't the case.
Can anyon
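The behavior in question looks like this (paths are illustrative, and the
exact error text varies):

    $ getfacl /mnt/export/file                  # works: returns a POSIX-draft view
    $ setfacl -m user:ben:rwx /mnt/export/file  # fails over NFSv3 from ZFS
    setfacl: Operation not supported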
For the archives' sake I'll say that this question was answered on the ZFS list
by Mark Shellenbaum. Short answer: NFSv3 does not support ACLs when served
from ZFS, use NFSv4. See the message of the same subject on ZFS-Discuss for
the full reply.
benr.
Is there a "standard" or "recommended" way of securing the various client
daemons (statd and lockd for NFSv3, add mapid and cbd for NFSv4)? Even with
rpcbind set to local_only the ports are still accessible and thus exploitable.
Is IPFilter the only real solution?
benr.
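A sketch of the IPFilter approach, with bge0 and the trusted network made up
for illustration; lockd listens on 4045, but statd and the v4 daemons take
dynamic ports, so allowing the trusted net and dropping the rest is simpler
than enumerating ports:

    # /etc/ipf/ipf.conf
    pass  in quick on bge0 proto tcp/udp from 192.168.100.0/24 to any
    block in quick on bge0 proto tcp/udp from any to any port = 4045
    block in       on bge0 proto tcp/udp from any to any

Then enable it with: svcadm enable network/ipfilter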
I encountered an odd issue recently where NFS mounts and permissions worked
properly but NFS locking did not.
If I export a share with the options: "rw,root=192.168.100.50" locking doesn't
work. However, if I just reshare using the options "anon=0,rw=192.168.100.50"
locking works fine.
Can so
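Spelled out as share commands (the path is a placeholder), the difference
being compared is:

    # locking reportedly broken:
    share -F nfs -o rw,root=192.168.100.50 /export/home
    # locking reportedly fine:
    share -F nfs -o anon=0,rw=192.168.100.50 /export/home

Note that plain "rw" grants read/write to everyone while "rw=host" restricts
it to the listed host, so the two shares differ in more than just the
anon/root mapping.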
I just rebooted this host this morning and the same thing happened again. I
have the core file from zfs.
[ Apr 26 07:47:01 Executing start method ("/lib/svc/method/nfs-server start") ]
Assertion failed: pclose(fp) == 0, file ../common/libzfs_mount.c, line 380, function zfs_share
Abort - core dumped
I was able to duplicate this problem on a test Ultra 10. I put in a workaround
by adding a service that depends on /milestone/multi-user-server which does a
'zfs share -a'. It's strange this hasn't happened on other systems, but maybe
it's related to slower systems
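The lowest-tech version of that workaround is a legacy rc script, which
milestone/multi-user-server runs at the same point in boot (the SMF service
described above is the tidier form):

    # cat > /etc/rc3.d/S99zfsshare <<'EOF'
    #!/sbin/sh
    # Re-share all ZFS filesystems after nfs/server is up.
    /usr/sbin/zfs share -a
    EOF
    # chmod +x /etc/rc3.d/S99zfsshare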
I was really hoping for some option other than ZIL_DISABLE, but finally gave up
the fight. Some people suggested NFSv4 would help over NFSv3, but it didn't...
at least not enough to matter.
ZIL_DISABLE was the solution, sadly. I'm running B43/X86 and hoping to get up
to 48 or so soonish (I BFU'd
I have several different NFS environments; in some cases .nfs* files are a big
problem, in others they're not. As I understand it, a .nfs* file is created
when a file is removed while it still has an open file handle.
Can I disable this behavior via some tunable?
What, outside of the applicatio
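The mechanism is reproducible from any NFS client (paths and the exact .nfs
name are illustrative): the protocol has no server-side notion of an open
file, so the client renames instead of removing, and cleans up on last close.

    $ tail -f /mnt/docs/app.log &    # hold the file open from this client
    $ rm /mnt/docs/app.log           # unlink it while still open
    $ ls -a /mnt/docs                # the client "silly-renamed" it
    .  ..  .nfs4D2A0001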
The files are not removed. The means by which that happens is the cronjob
scheduled weekly; however, these are very large NFS servers, and finds of
that sort have a massive impact on performance and take a very long time to
run, so I disable it.
In all cases it's mail serving (
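The weekly job being referred to is presumably the stock nfsfind entry in
root's crontab; disabling it amounts to commenting out that line:

    # crontab -l | grep nfsfind
    15 3 * * 0 /usr/lib/fs/nfs/nfsfind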
Mike Gerdts wrote:
> Over the last couple of months I have noticed lots of .nfs files being
> left around while using cvs.
>
> A typical command:
>
> cvs -d $cvsroot co -d dotnfs-`uname -r` -r $release jass
>
> On snv_99:
>
> $ find dotnfs-5.11 -name .nfs\* | wc -l
> 822
>
> That number does n