On Mon, May 11, 2009 23:57, Sage Weil wrote:
>> how much should i expect it to be working?
>
> Clustered mds isn't working, and the snapshots haven't been tested with
> some of the recent changes, but for basic fs usage it should be
> pretty stable.  There are some lingering issues with the mds restarting
> that have to be sorted out, but it will mostly work.
>
>> the kernel client was stalled several times while
>> copying directories in my test setup.
>
> When the kernel is doing its async data writeback, metadata is also
> flushed back to the mds, which makes the progress of other mds operations
> stall.

hi sage!

wanted to let you know about some progress i made:
i changed the virtualbox network settings so the vboxes now bridge
to a host-internal tun/tap interface. this seems better
performance-wise and i have not seen stalled shells or anything
similar so far. there is one issue though:
i wrote a small zope test application (zope is a python appserver)
that shows a) the last 20 db transactions and b) the last 20 created
files (in a ceph-mounted dir). "newtxn" creates a db txn and a file
on each call.
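the file side of the test app boils down to something like this
(a rough sketch with made-up names, not the actual zope code; the
db part is omitted):

```python
import os
import time


def newtxn(dirpath, counter):
    """Sketch of the 'newtxn' action: commit a db txn (omitted here)
    and drop a marker file into the ceph-mounted directory."""
    path = os.path.join(dirpath, "txn-%d-%d" % (counter, int(time.time())))
    with open(path, "w") as f:
        f.write("txn %d\n" % counter)
    return path


def last_created(dirpath, n=20):
    """List the n most recently modified files, the way the
    status page would show them (newest first)."""
    entries = [os.path.join(dirpath, e) for e in os.listdir(dirpath)]
    entries.sort(key=os.path.getmtime, reverse=True)
    return entries[:n]
```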

db transactions work fine, they're handled by a mysql backend.
but there's an issue with files:

initially there were lots of files in the folder and the zope app
showed the latest 20 modified ones. then i purged the whole dir
on a second server that has the same mount.

from that point on the zope app still showed the contents of the
dir as they were before the purge (20 files). if i commit a new txn
and write a new file to the folder, the reloaded page shows only the
newly created files (the correct dir state). but if i reload for a
read only, it shows the old contents again.

so to summarize: directory contents are out of sync across
multiple mounts/nodes. the stale listing also shows up in the
shell, so it's definitely not a problem of the zope app!
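the check can be reduced to a tiny script like this (my own sketch,
hypothetical names), run against the same dir on each mount to compare
what the kernel client on each node currently reports:

```python
import os
import sys


def snapshot(dirpath):
    """Return the directory listing as this mount reports it right now.
    On a coherent fs, all mounts agree; here the purging node and the
    other node disagree until a write happens."""
    return sorted(os.listdir(dirpath))


if __name__ == "__main__":
    # usage: python snapshot.py /mnt/ceph/testdir
    d = sys.argv[1] if len(sys.argv) > 1 else "."
    names = snapshot(d)
    print("%d entries in %s" % (len(names), d))
    for n in names[:20]:
        print(n)
```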

if such things happen, should there be any hints in the logfiles?
can i do anything to find out what's going on here?

btw, one log entry that just doesn't look right:
09.05.12 00:44:16.207487 1150765392 -- 10.0.85.102:6800/4690/0 >>
  10.0.85.102:6801/4722/0 pipe(0x1291280 sd=11 pgs=0 cs=0).connect
  claims to be 0.0.0.0:6801/4722/0 not 10.0.85.102:6801/4722/0 -
  presumably this is the same node!

is that of concern? mon0/ofs0/mds ARE on the same node,
just like mon1/ofs1/the second mds - is that the problem?

still trying to get qemu with kvm support and bridged ethernet
devices working, for a comparison with that damn virtualbox
setup. will need some further reading i guess...

best regards!
jürgen
--
>> XLhost.de - eXperts in Linux hosting ® <<

XLhost.de GmbH
Jürgen Herrmann, Geschäftsführer
Boelckestrasse 21, 93051 Regensburg, Germany

Geschäftsführer: Volker Geith, Jürgen Herrmann
Registriert unter: HRB9918
Umsatzsteuer-Identifikationsnummer: DE245931218

Fon:  +49 (0)700 XLHOSTDE [0700 95467833]
Fax:  +49 (0)700 XLHOSTDE [0700 95467833]

WEB:  http://www.XLhost.de
IRC:  #xlh...@irc.quakenet.org


_______________________________________________
Ceph-devel mailing list
Ceph-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ceph-devel
