Hello!
On May 15, 2008, at 5:34 AM, Jakob Goldbach wrote:
On a regular basis a process gets stuck in __d_lookup. When I dig in, it
seems I'm caught in the hlist_for_each_entry_rcu loop, never satisfying
the exit-from-loop condition.
Hm, so you actually have a circular loop? I wonder if you can
Hello!
On May 13, 2008, at 12:37 PM, Wei-keng Liao wrote:
What versions of Lustre support fcntl for byte-range file locking?
Attached is a test program extracted from ROMIO to test fcntl() for
locking. It ran fine on some Lustre versions.
Certainly many versions, going far back to 1.2 and
Hello!
On Apr 14, 2008, at 3:43 AM, Jakob Goldbach wrote:
Is this under-reporting breaking anything for you, I wonder?
No - I was just worried what happened to all my inodes. I planned about
200M inodes on the MDS and less than half showed up on the clients.
Do you want me to file a
Hello!
On Apr 11, 2008, at 3:48 AM, Jakob Goldbach wrote:
The inode count you get on the client side also takes into account the
number of inodes available on the OSTs and the default stripe count.
The stripe count is 1 (from /proc/../lov/../stripecount). The OSTs have
plenty, ~400M.
Hello!
On Mar 6, 2008, at 10:57 AM, Balagopal Pillai wrote:
On a few of the HPC cluster nodes, I am seeing a new Lustre
error that is pasted below. The volumes are working fine and there
is nothing on the OSS and MDS to report.
LustreError:
Hello!
On Mar 5, 2008, at 11:33 AM, Joe Barjo wrote:
While making my tests, I saw that the flock system call was not working.
Googling around, I found the flock option of the mount command, and it
seems to work just fine.
However, I've read in the documentation that flock will only be
Hello!
On Mar 4, 2008, at 4:44 AM, gas5x1 wrote:
Could you please advise me how, if at all possible, to install
Lustre on IBM PPC64? I already have a Lustre 1.6 installation working
for Intel i386 and AMD Opteron nodes, and now would like to access it
from IBM clients.
You just compile as
Hello!
On Feb 18, 2008, at 4:55 PM, Charles Taylor wrote:
Feb 18 15:25:50 hpcmds kernel: LustreError: 7162:0:(mgs_handler.c:515:mgs_handle()) lustre_mgs: operation 101 on unconnected MGS
Feb 18 15:25:50 hpcmds kernel: LustreError: 7162:0:(mgs_handler.c:515:mgs_handle()) Skipped 263
Hello!
On Feb 18, 2008, at 5:04 PM, Charles Taylor wrote:
Well, yes. But the evictions are the result of the job trying to
start. Absent that, there are no evictions. A bunch of threads
trying to open the same file should not cause the clients to be
evicted. That's an odd way
Hello!
On Feb 18, 2008, at 5:13 PM, Charles Taylor wrote:
Feb 18 15:32:47 r5b-s42 kernel: LustreError: 11-0: an error occurred while communicating with [EMAIL PROTECTED] The mds_close operation failed with -116
Feb 18 15:32:47 r5b-s42 kernel: LustreError: Skipped 3 previous similar messages