[Lustre-discuss] ll_inode_revalidate_fini()) failure -108 inode

2010-01-23 Thread DT Piotr Wadas


Hello,
Using a 1.8.1.1 server and a patched client, with one OST for now;
the remote directory is mounted with the flock option.
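For reference, the client mount is along these lines (mgsnode and testfs below
are placeholders, not the actual names):

  mount -t lustre -o flock mgsnode@tcp0:/testfs /mnt/lustre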

The following appears in the kernel log:

LustreError: 12864:0:(mdc_locks.c:618:mdc_enqueue()) ldlm_cli_enqueue: -108
LustreError: 12864:0:(mdc_locks.c:618:mdc_enqueue()) Skipped 2103 previous 
similar messages
LustreError: 12881:0:(file.c:3143:ll_inode_revalidate_fini()) failure -108 
inode 1024203
LustreError: 12881:0:(file.c:3143:ll_inode_revalidate_fini()) Skipped 2103 
previous similar messages

The same inode number appears each time. Actually, this shows up on only
one Lustre client (there are two for now; both use flock).
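For what it's worth, the -108 is a negative errno value; a quick check of what
it maps to (nothing Lustre-specific, just the stock errno tables):

  # prints ESHUTDOWN / "Cannot send after transport endpoint shutdown" on Linux
  python -c "import errno, os; print(errno.errorcode[108]); print(os.strerror(108))"

As far as I understand, Lustre reports this when the client's connection
(import) to the MDS has been invalidated - for example after an eviction -
rather than when something is wrong on disk.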

What does this message mean? Should I run lfsck on the filesystem?
Regards,
DT

-- 
http://dtpw.pl/buell [ 25th anniversary of Buell - American Motorcycles ]
Linux aleft 2.6.27.29-0.1_lustre.1.8.1.1-default #1 SMP
drbd version: 8.3.7 (api:88/proto:86-91)
pacemaker 1.0.6-cebe2b6ff49b36b29a3bd7ada1c4701c7470febe




Re: [Lustre-discuss] MDS crashes daily at the same hour

2010-01-23 Thread Christopher J. Walker
Brian J. Murrell wrote:
> On Wed, 2010-01-06 at 11:25 +0200, David Cohen wrote:
>> It was indeed the *locate update; a simple edit of /etc/updatedb.conf on the
>> clients and the system is stable again.

I've just encountered the same thing - the MDS crashing at the same time 
several times this week, just after the *locate update. I've added lustre 
to the excluded filesystems, and so far so good.
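For anyone hitting the same thing, the change is roughly the following,
assuming the mlocate-style /etc/updatedb.conf (variable names can differ
between locate implementations, so treat this as a sketch):

  # /etc/updatedb.conf on each Lustre client: append "lustre" to the
  # existing PRUNEFS list so updatedb never descends into Lustre mounts
  PRUNEFS = "nfs nfs4 proc sysfs tmpfs lustre"

Adding the Lustre mount point to PRUNEPATHS should work just as well.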

> Great.  But as Andreas said previously, load should not have caused the
> LBUG that you got.  Could you open a bug on our bugzilla about that?
> Please attach to that bug an excerpt from the tech-mds log that covers a
> window of time of 12 hours prior to the LBUG and an hour after.

I don't see an LBUG in my logs, but there are several Call Traces. Would 
it be useful if I filed a bug too, or would you prefer that I add to David's 
bug? If the latter, can you let me know the bug number, as I can't find it 
in bugzilla. Would you like /tmp/lustre-log.* too?

Chris


[Lustre-discuss] Using SSDs for OSS Read Cache on servers with little RAM?

2010-01-23 Thread Scott Adametz
Is there any possibility to redirect Lustre's OSS cache, or portions of the Linux
pagecache, to a location other than RAM, such as an internal SSD (perhaps
even two in RAID0)? I ask because the servers we have available (read:
second-hand servers) for OSSs physically max out at 8GB RAM.

The wiki states: *It uses a regular Linux pagecache to store the data.*  I
looked into pagecache and pdflush tunables but found little on directing the
actual location of the cached content; the material tends to focus heavily on
tuning the timing of writes-*to*-disk vs reads. Yes, I understand the absurdity
of trying to redirect RAM-cached content back to disk, but with rapidly
evolving SSD design and performance it actually makes some sense in this
case for Lustre cache data.  Obviously I would prefer to redirect *only*
Lustre OSS cache data and not the entire operating system page cache.
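For what it's worth, the OSS cache tunables I'm aware of in 1.8 control whether
and how much gets cached, not where the cache lives (parameter names below are
from memory, so double-check them against the manual):

  # on an OSS: query the read cache settings
  lctl get_param obdfilter.*.read_cache_enable
  lctl get_param obdfilter.*.readcache_max_filesize

  # e.g. cache only files up to 32MB (33554432 bytes) so small hot files
  # stay in the limited RAM and large video streams bypass the cache
  lctl set_param obdfilter.*.readcache_max_filesize=33554432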

While an SSD is nowhere near as quick as RAM, it would still seem beneficial
to have more frequently viewed content available for reads even from an SSD
vs a spinning disk, especially with our use case (HD video editing and
playback).  Until SSD prices come down to reasonable levels and we can
afford to replace all spinning disks with SSDs, any little bit helps ;-)

In sum: just a thought - it would be nice to be able to designate, in the
Lustre configuration, an additional low-latency swap or cache location on the
OSSs.  Adding one SSD to each OSS sure beats having to buy new servers with
more RAM capacity, or going all-SSD and blowing the budget...

Thanks for the time,

Scott Adametz
Systems Engineer
Big Ten Network
FOX Networks Group
scott.adam...@fox.com