images and take
advantage of the newly-available space on the separate host, just as though
you were expanding a RAID device. Maybe that fits your use case, Ruslan?
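For what it's worth, growing an existing RBD image can be sketched as below. This is a dry run that only echoes the commands (they need a live cluster to execute), and the pool and image names are made up:

```shell
#!/bin/sh
# Dry-run sketch: commands are echoed, not executed, since they need a
# running Ceph cluster. Pool and image names are hypothetical.
POOL=rbd
IMAGE=myimage
NEW_SIZE_MB=$((20 * 1024))   # rbd sizes are given in MB; this is 20 GiB

# Grow the image on the cluster side.
echo "rbd resize --pool $POOL --size $NEW_SIZE_MB $IMAGE"

# Then grow the filesystem inside it on the host that has it mapped
# (ext4 shown; the device name depends on the mapping order).
echo "resize2fs /dev/rbd0"
```

The image itself grows online; only the filesystem step needs to run on the host that has the device mapped.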
On 11/21/2012 12:05 PM, ruslan usifov wrote:
Yes, I mean exactly this. It's a great pity :-( Is there perhaps some Ceph
equivalent that solves my problem?
2012/11/21 Gregory Farnum g...@inktank.com:
On Wed, Nov 21, 2012 at 4:33 AM, ruslan usifov ruslan.usi...@gmail.com
wrote:
So, it's not possible to use Ceph as a scalable block device without
Hello
I can't find the link where I read this now (it was on the old Ceph wiki),
but it said that using RBD on an OSD node can lead to hangs. Is that still
the case these days?
Hello
I'm testing a Ceph cluster on VMware machines (3 nodes in the cluster) to
build a scalable RBD block device, and I have trouble when I try to map an
RBD image to a device; I get the following message in kernel.log:
Nov 13 15:32:47 ceph-precie-64-02 kernel: [  188.319319] ------------[ cut here ]------------
Nov 13
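For reference, the usual kernel-client mapping sequence looks roughly like the sketch below; it is echoed rather than executed (it needs root and a reachable cluster), and the image name is made up:

```shell
#!/bin/sh
# Dry-run sketch: echoed, not executed (needs root and a live cluster).
# The image name is hypothetical.
IMAGE=myimage

echo "modprobe rbd"          # load the kernel RBD client module
echo "rbd map $IMAGE"        # creates /dev/rbdN on success
echo "rbd showmapped"        # lists image-to-device mappings
```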
How can I compile the current version of the rbd module? Right now I use
the rbd module that ships with the standard Linux kernel in Ubuntu 12.04.
2012/11/13 Alex Elder el...@inktank.com:
On 11/13/2012 05:54 AM, ruslan usifov wrote:
Hello
I'm testing a Ceph cluster on VMware machines (3 nodes in the cluster) to
build a scalable RBD
2012/3/24 Sage Weil s...@newdream.net:
On Fri, 23 Mar 2012, ruslan usifov wrote:
Sorry for my bad English.
I mean that if, through Pacemaker, we set up a fault-tolerant monitor (a
monitor that keeps working all the time, even in the failure case), we prevent
The other key thing to keep in mind
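For context, Ceph monitors already form their own Paxos quorum, so an external HA layer such as Pacemaker isn't needed for monitor availability; the usual sketch is simply to run three mons. The hostnames and addresses below are made up:

```ini
[mon.a]
    host = node1
    mon addr = 192.168.0.1:6789

[mon.b]
    host = node2
    mon addr = 192.168.0.2:6789

[mon.c]
    host = node3
    mon addr = 192.168.0.3:6789
```

With three monitors, any single one can fail and the remaining two still hold a majority, so the cluster keeps running.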
2012/3/22 Greg Farnum gregory.far...@dreamhost.com:
On Wednesday, March 21, 2012 at 8:30 AM, ruslan usifov wrote:
Hello
I'm new to Ceph, and perhaps I misunderstand some things.
I have a test configuration with 3 VMware machines (I'm testing RBD). My setup
consists of:
3 mons
3 osds
When I
Hello
I'm new to Ceph, and perhaps I misunderstand some things.
I have a test configuration with 3 VMware machines (I'm testing RBD). My setup
consists of:
3 mons
3 osds
When I kill one mon (to simulate a failure), I periodically get small
delays when working with the RBD device; perhaps this happens
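Those periodic delays are typically the client noticing the dead monitor and "hunting" for another one, and the interval is tunable. A sketch, assuming the `mon client hunt interval` option (in seconds) exists in your release; check your version's documentation before relying on it:

```ini
[client]
    ; how long the client waits before hunting for a different monitor
    mon client hunt interval = 3
```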
2012/3/20 Greg Farnum gregory.far...@dreamhost.com:
On Monday, March 19, 2012 at 11:44 AM, ruslan usifov wrote:
Sorry, but no; I use precompiled binaries from
http://ceph.newdream.net/debian. Perhaps this helps: initially I configured
all the Ceph services (mon, mds, osd), but then I tested only RBD
2012/3/19 Greg Farnum gregory.far...@dreamhost.com:
On Monday, March 19, 2012 at 7:33 AM, ruslan usifov wrote:
Hello
I have the following stack trace:
(gdb) bt
#0  0xb77fa424 in __kernel_vsyscall ()
#1  0xb77e98a0 in raise () from /lib/i386-linux-gnu
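When reporting a crash like this, a backtrace from all threads with symbols installed is much more useful. A sketch, echoed rather than executed; which binary crashed isn't clear from the snippet, so the binary and core paths below are placeholders:

```shell
#!/bin/sh
# Dry-run sketch: echoed, not executed. Both paths are placeholders.
BIN=/usr/bin/ceph-osd
CORE=/tmp/core.1234

# Batch mode prints the full backtrace of every thread and exits.
echo "gdb -batch -ex 'thread apply all bt full' $BIN $CORE"
```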