Fwd: Is it still not recommended to place an rbd device on nodes where an OSD daemon is located?

2012-11-22 Thread ruslan usifov
Hello. Thank you for your attention, and sorry for my bad English! In my draft architecture, I want to use the same hardware for OSD and rbd devices. In other words, I have 5 nodes with 5 TB of software-RAID disk space on each. I want to build a ceph cluster on these nodes. All 5 nodes will run OSDs and, on the

Re: Is it still not recommended to place an rbd device on nodes where an OSD daemon is located?

2012-11-21 Thread Gregory Farnum
On Wed, Nov 21, 2012 at 4:33 AM, ruslan usifov ruslan.usi...@gmail.com wrote: So, is it not possible to use ceph as a scalable block device without virtualization? I'm not sure I understand, but if you're trying to take a bunch of compute nodes and glue their disks together, no, that's not a supported use

Re: Is it still not recommended to place an rbd device on nodes where an OSD daemon is located?

2012-11-21 Thread ruslan usifov
Yes, I mean exactly this. It's a great pity :-( Maybe there is some ceph equivalent that solves my problem? 2012/11/21 Gregory Farnum g...@inktank.com: On Wed, Nov 21, 2012 at 4:33 AM, ruslan usifov ruslan.usi...@gmail.com wrote: So, is it not possible to use ceph as a scalable block device without

Re: Is it still not recommended to place an rbd device on nodes where an OSD daemon is located?

2012-11-21 Thread Dan Mick
Still not certain I'm understanding *just* what you mean, but I'll point out that you can set up a cluster with rbd images, mount them from a separate non-virtualized host with kernel rbd, and expand those images and take advantage of the newly-available space on the separate host, just as
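For illustration, a minimal sketch of the create-and-grow workflow Dan describes, using the librbd Python bindings (the rados and rbd modules shipped with Ceph). It assumes a cluster config at /etc/ceph/ceph.conf and a pool named "rbd"; the image name "test-image" is hypothetical. Mapping the image on the separate, non-virtualized host would still be done with the kernel client (rbd map test-image), after which the filesystem on it can be grown to use the new space.

import rados
import rbd

GiB = 1024 ** 3

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')  # pool that holds the image
    try:
        # Create a 10 GiB image in the pool.
        rbd.RBD().create(ioctx, 'test-image', 10 * GiB)

        # Later, grow the image to 20 GiB; the extra space becomes visible
        # to whichever host has it mapped via kernel rbd.
        with rbd.Image(ioctx, 'test-image') as image:
            image.resize(20 * GiB)
    finally:
        ioctx.close()
finally:
    cluster.shutdown()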

Is it still not recommended to place an rbd device on nodes where an OSD daemon is located?

2012-11-20 Thread ruslan usifov
Hello. I can't find the link where I read this (it was the old ceph wiki), but it said that using rbd on an OSD node can cause a hang. Is this still the case today?