Fwd: Is it still not recommended to place an RBD device on nodes where an OSD daemon is located?

2012-11-22 Thread ruslan usifov
images and take advantage of the newly-available space on the separate host, just as though you were expanding a RAID device. Maybe that fits your use case, Ruslan? On 11/21/2012 12:05 PM, ruslan usifov wrote: Yes, I mean exactly this. It's a great pity :-( Maybe there is some Ceph equivalent
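The suggestion quoted above is to grow the image in place. A minimal sketch of what that could look like, assuming a hypothetical image named myimage (in the default rbd pool) that is already mapped at /dev/rbd0 and carries an ext4 filesystem:

    # Grow the image to 20 GB; this era of the rbd tool takes the size
    # in megabytes. Image name and size are illustrative.
    rbd resize --size 20480 myimage

    # Let the filesystem claim the new space (ext4 supports online resize).
    sudo resize2fs /dev/rbd0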

Re: Is it still not recommended to place an RBD device on nodes where an OSD daemon is located?

2012-11-21 Thread ruslan usifov
Yes, I mean exactly this. It's a great pity :-( Maybe there is some Ceph equivalent that solves my problem? 2012/11/21 Gregory Farnum g...@inktank.com: On Wed, Nov 21, 2012 at 4:33 AM, ruslan usifov ruslan.usi...@gmail.com wrote: So, is it not possible to use Ceph as a scalable block device without
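For context, the co-location warning concerns the kernel rbd client, which can deadlock under memory pressure on a host that also runs an OSD. One commonly suggested workaround (a sketch under that assumption, not an answer taken from this thread) is a userspace librbd client instead, e.g. attaching the image to a VM through qemu; pool and image names are illustrative:

    # Attach an RBD image via userspace librbd rather than the kernel
    # rbd module, avoiding the kernel-client-on-OSD-host deadlock risk.
    qemu-system-x86_64 -m 1024 \
      -drive format=rbd,file=rbd:rbd/myimage,cache=writeback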

Is it still not recommended to place an RBD device on nodes where an OSD daemon is located?

2012-11-20 Thread ruslan usifov
Hello. I can't find the link now where I read this (it was on the old Ceph wiki), but it said that running RBD on an OSD node can lead to hangs. Is that still the case today?

Fwd: rbd kernel module fail

2012-11-13 Thread ruslan usifov
Hello. I am testing a Ceph cluster on VMware machines (3 nodes in the cluster) to build a scalable RBD block device, and I run into trouble when trying to map an RBD image to a device; I get the following message in kernel.log: Nov 13 15:32:47 ceph-precie-64-02 kernel: [ 188.319319] [ cut here ] Nov 13
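For reference, the mapping step that triggers the oops typically looks like the following sketch; the image name and size are illustrative:

    # Create a small test image (size in megabytes on this rbd version).
    rbd create --size 10240 myimage

    # Load the kernel client and map the image to a /dev/rbd* device.
    sudo modprobe rbd
    sudo rbd map myimage

    # List current mappings to confirm.
    rbd showmapped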

Re: Fwd: rbd kernel module fail

2012-11-13 Thread ruslan usifov
How can I compile the current version of the rbd module? Right now I use the rbd module that ships with the standard Linux kernel in Ubuntu 12.04. 2012/11/13 Alex Elder el...@inktank.com: On 11/13/2012 05:54 AM, ruslan usifov wrote: Hello. I am testing a Ceph cluster on VMware machines (3 nodes in the cluster) to build a scalable RBD
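One possible way to run newer rbd code (my sketch, not Alex's truncated reply) is to build a whole kernel from the ceph-client tree, which carries the current rbd and libceph drivers; building the full kernel sidesteps module symbol-version mismatches. The repository URL and job count here are assumptions:

    # Build a kernel from the ceph-client tree to pick up current rbd code.
    sudo apt-get install git build-essential fakeroot dpkg-dev
    git clone git://github.com/ceph/ceph-client.git
    cd ceph-client
    cp /boot/config-$(uname -r) .config   # start from the running config
    yes "" | make oldconfig               # accept defaults for new options
    make -j4 deb-pkg                      # produces installable .deb packages
    sudo dpkg -i ../linux-image-*.deb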

Re: periodically delays when one of mons dies

2012-03-25 Thread ruslan usifov
2012/3/24 Sage Weil s...@newdream.net: On Fri, 23 Mar 2012, ruslan usifov wrote: Sorry for my bad English. I mean that if, through Pacemaker, we organize a fault-tolerant monitor (a monitor that keeps working all the time, even in the failure case), we prevent ... The other key thing to keep in mind
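Background for this exchange: Ceph monitors already provide their own fault tolerance by forming a Paxos quorum, so with three monitors any two keep the cluster available and an external HA layer such as Pacemaker is generally unnecessary for the monitor itself. A sketch of the relevant ceph.conf section, with illustrative hostnames and addresses:

    ; Three monitors; any two of them constitute a quorum.
    [mon.a]
        host = node1
        mon addr = 192.168.1.101:6789
    [mon.b]
        host = node2
        mon addr = 192.168.1.102:6789
    [mon.c]
        host = node3
        mon addr = 192.168.1.103:6789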

Re: periodically delays when one of mons dies

2012-03-23 Thread ruslan usifov
2012/3/22 Greg Farnum gregory.far...@dreamhost.com: On Wednesday, March 21, 2012 at 8:30 AM, ruslan usifov wrote: Hello. I'm new to Ceph and perhaps misunderstand some things. I have a test configuration with 3 VMware machines (I am testing RBD). My setup consists of 3 mons and 3 OSDs. When I

periodically delays when one of mons dies

2012-03-21 Thread ruslan usifov
Hello. I'm new to Ceph and perhaps misunderstand some things. I have a test configuration with 3 VMware machines (I am testing RBD). My setup consists of 3 mons and 3 OSDs. When I kill one mon (to simulate a failure), from time to time (periodically) I get small delays when working with the RBD device; perhaps this happens
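A quick way to see whether the surviving monitors still hold quorum while one is down (a sketch using the era's ceph CLI, not commands from the thread itself):

    # Overall cluster state, including monitor quorum.
    ceph -s

    # Monitor map summary: which mons exist and which are in quorum.
    ceph mon stat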

Ceph mon crash

2012-03-20 Thread ruslan usifov
2012/3/20 Greg Farnum gregory.far...@dreamhost.com: On Monday, March 19, 2012 at 11:44 AM, ruslan usifov wrote: Sorry, but no: I use precompiled binaries from http://ceph.newdream.net/debian. Perhaps this helps: initially I configured all Ceph services (mon, mds, osd), but then I tested only RBD
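For reference, installing from that repository typically means an APT entry like the following sketch; whether the repository carries your distribution's codename is an assumption to verify:

    # Add the repository named in the message above and install ceph.
    echo "deb http://ceph.newdream.net/debian/ $(lsb_release -sc) main" | \
      sudo tee /etc/apt/sources.list.d/ceph.list
    sudo apt-get update && sudo apt-get install ceph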

Re: Ceph mon crash

2012-03-19 Thread ruslan usifov
2012/3/19 Greg Farnum gregory.far...@dreamhost.com: On Monday, March 19, 2012 at 7:33 AM, ruslan usifov wrote: Hello. I have the following stack trace: #0 0xb77fa424 in __kernel_vsyscall () (gdb) bt #0 0xb77fa424 in __kernel_vsyscall () #1 0xb77e98a0 in raise () from /lib/i386-linux-gnu
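A trace that stops in libc frames like this usually means the binary has no debug symbols loaded. A sketch of how to get a fuller backtrace from a monitor core dump, where the binary path and the -dbg package name are assumptions about the installed build:

    # Allow core dumps before reproducing the crash.
    ulimit -c unlimited

    # Install gdb and debug symbols for the ceph binaries.
    sudo apt-get install gdb ceph-dbg

    # Print a backtrace for every thread from the core file.
    gdb /usr/bin/ceph-mon core -ex 'thread apply all bt' -ex quit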