Our NFS server has recently been failing to serve clients. Users were not
able to cd to their home directories:

messages.0:May  2 09:31:33 loginhost  automountd[760]: [ID 801587 
daemon.error] nfsp-prv:/home/usera: File table overflow

Applications were failing to start: 

| Date/Time         :- Friday May 02 11:01:09 CDT 2008    | 
| Host Name         :- hostname (SunOS 5.9)     | 
| PIDS              :- 5724B4103    | 
| LVLS              :- 530.5  CSD05    | 
| Product Long Name :- WebSphere MQ for Sun Solaris    | 
| Vendor            :- IBM    | 
| Probe Id          :- XC022001    | 
| Application Name  :- MQM    | 
| Component         :- xcsDisplayMessage    | 
| Build Date        :- Sep 27 2003    | 
| CMVC level        :- p530-05-L030926    | 
| Build Type        :- IKAP - (Production)    | 
| UserID            :- 00000215 (mqm)    | 
| Program Name      :- runmqchl_nd    | 
| Process           :- 00025118    | 
| Thread            :- 00000001    | 
| Major Errorcode   :- xecF_E_UNEXPECTED_SYSTEM_RC    | 
| Minor Errorcode   :- OK    | 
| Probe Type        :- MSGAMQ6119    | 
| Probe Severity    :- 2    | 
| Probe Description :- AMQ6119: An internal WebSphere MQ error has 
occurred   |
|   ('23 - File table overflow' from open.)    | 
| FDCSequenceNumber :- 0    | 
| Arith1            :- 23 17    | 
| Comment1          :- '23 - File table overflow' from open. 



There were a lot of warning messages in /var/adm/messages:

May  2 13:16:04 hostname vxfs: [ID 702911 kern.warning] WARNING: msgcnt 
978 mesg 014: V-2-14: vx_iget - inode table overflow

During the NFS server failure, vxfs_ninode was set to

set vxfs:vxfs_ninode=427815


Sun has provided us with an explanation of error 23, "File table overflow":


"File table overflow" or errno=23 is reported by the application when it
tries opening a file on vxfs file system that has exhausted in core
inode table entries. So when vxfs reports:

    "vx_iget - inode table overflow"

It is propagated to application as errno=23 (ENFILE):

#define ENFILE  23      /* File table overflow    */


We were not able to find an unusually large number of open files, either. 
(As far as we understand, the VxFS in-core inode table also caches inodes 
of files that are not currently open, so the table can overflow even when 
the open-file count looks normal.) 

It looks like this is a bug in Veritas. We are running 4.1 with patches. 


[EMAIL PROTECTED]:/var/../errors # pkginfo -l VRTSvxvm
   PKGINST:  VRTSvxvm
      NAME:  VERITAS Volume Manager, Binaries
  CATEGORY:  system
      ARCH:  sparc
   VERSION:  4.1,REV=02.17.2005.21.28
   BASEDIR:  /
    VENDOR:  VERITAS Software
      DESC:  Virtual Disk Subsystem
    PSTAMP:  VERITAS-4.1z:2005-02-17
  INSTDATE:  May 19 2007 19:03
   HOTLINE:  800-342-0652
     EMAIL:  [EMAIL PROTECTED]
    STATUS:  completely installed
     FILES:      809 installed pathnames
                  26 shared pathnames
                  17 linked files
                  98 directories
                 417 executables
              291119 blocks used (approx)

[EMAIL PROTECTED]:/var/../errors # modinfo | grep -i vxvm
 17  11c75c7 20f869 315   1  vxio (VxVM 4.1z I/O driver)
 18  13b66f0  26abc 211   1  vxdmp (VxVM 4.1z: DMP Driver)
 35 78046944   1499 213   1  vxspec (VxVM 4.1z control/status driver)

We have increased it to

set vxfs:vxfs_ninode=640000

but we have no confidence that this is enough. 

We do have a number of large directories containing hundreds of thousands 
of files. 

Has anyone seen this problem? Does anyone know what it is? 
If this is just the way VxFS works, how can we calculate an appropriate 
vxfs_ninode value? 


--
Sincerely,


Andrey Shinkarev, PMP
Infrastructure Architect
CIT Technology Engineering
Unix/VMS Platforms
Abbott
100 Abbott Park Rd
AP14B GB08
Abbott Park, IL
60064-6041 USA
Phone 847 938 7559
Fax 847 937 4160
Pager 847 774 4487
Assistance 847 937 6547
[EMAIL PROTECTED]






_______________________________________________
Veritas-vx maillist  -  Veritas-vx@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-vx
