Re: defaults paths

2012-04-05 Thread Andrey Korolyov
feel it's up to the sysadmin to mount / symlink the correct storage devices on the correct paths - ceph should not be concerned that some volumes might need to sit together. Rgds, Bernard On 05 Apr 2012, at 09:12, Andrey Korolyov wrote: Right, but probably we need journal separation
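
For instance, journal separation needs nothing from ceph itself; a minimal sketch, assuming osd.0 with the default data path and a spare partition /dev/sdb1 set aside for the journal:

  sudo service ceph stop osd.0
  sudo ceph-osd -i 0 --flush-journal                      # drain the old journal first
  sudo ln -sf /dev/sdb1 /var/lib/ceph/osd/ceph-0/journal  # point the journal at the new device
  sudo ceph-osd -i 0 --mkjournal                          # initialize the journal there
  sudo service ceph start osd.0

The same separation can also be expressed with an "osd journal = /path" entry in ceph.conf instead of the symlink.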

Re: rbd snapshot in qemu and libvirt

2012-04-18 Thread Andrey Korolyov
I tested all of them about a week ago; everything works fine. Also, it would be very nice if rbd could list the actual allocated size of every image or snapshot in the future. On Wed, Apr 18, 2012 at 5:22 PM, Martin Mailand mar...@tuxadero.com wrote: Hi Wido, I am looking into doing the snapshots via
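
At the time there was no built-in command for this; a commonly used approximation, assuming an image named rbd/vm1, is to sum the extents reported by "rbd diff":

  # add up the lengths of all allocated extents (column 2 of rbd diff output)
  rbd diff rbd/vm1 | awk '{ sum += $2 } END { printf "%.1f MB allocated\n", sum/1024/1024 }'

Later releases added "rbd du" for exactly this purpose.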

Re: rbd snapshot in qemu and libvirt

2012-04-18 Thread Andrey Korolyov
: Disk 'rbd/vm1:rbd_cache_enabled=1' does not support snapshotting. Maybe the rbd_cache option is the problem? -martin On 18.04.2012 16:39, Andrey Korolyov wrote: I tested all of them about a week ago; everything works fine. Also, it would be very nice if rbd could list the actual allocated size
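
If the cache option is the culprit, note that it is usually spelled rbd_cache rather than rbd_cache_enabled; a hedged sketch of the qemu drive string, assuming an image rbd/vm1 and a qemu build with rbd support:

  qemu-system-x86_64 -m 1024 \
      -drive file=rbd:rbd/vm1:rbd_cache=true,format=raw,if=virtio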

collectd and ceph plugin

2012-04-21 Thread Andrey Korolyov
Hello everyone, I have just tried the ceph collectd fork on wheezy and noticed that all logs for the ceph plugin produce nothing but zeroes (see below) for all types of nodes. The Python cephtool works just fine. Collectd runs as root, there are no obvious errors like socket permissions, and no tips from its
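
One way to narrow this down is to query the daemons' admin sockets directly (which the plugin is expected to read); a sketch assuming default socket paths and an osd.0 on this host:

  sudo ls -l /var/run/ceph/                                                  # socket files and their permissions
  sudo ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf schema | head  # counters the daemon exposes
  sudo ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump   | head  # their current values

Non-zero counters here combined with zeroes in collectd would point at the plugin rather than the daemons.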

'rbd map' asynchronous behavior

2012-05-15 Thread Andrey Korolyov
Hi, There is a strange bug when I try to map an excessive number of block devices inside the pool, like the following: for vol in $(rbd ls); do rbd map $vol; [some-microsleep]; [some operation or nothing, I have stubbed a guestfs mount here]; [some-microsleep]; unmap /dev/rbd/rbd/$vol ;
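
For reference, a filled-in version of the loop, with arbitrary 0.1 s pauses standing in for the [some-microsleep] placeholders and a trivial read standing in for the stubbed guestfs mount:

  for vol in $(rbd ls); do
      rbd map "$vol"
      sleep 0.1
      dd if=/dev/rbd/rbd/"$vol" of=/dev/null bs=1M count=1   # stand-in for the real operation
      sleep 0.1
      rbd unmap /dev/rbd/rbd/"$vol"
  done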

Re: 'rbd map' asynchronous behavior

2012-05-16 Thread Andrey Korolyov
josh.dur...@inktank.com wrote: On 05/15/2012 04:49 AM, Andrey Korolyov wrote: Hi, There is a strange bug when I try to map an excessive number of block devices inside the pool, like the following: for vol in $(rbd ls); do rbd map $vol; [some-microsleep]; [some operation or nothing, I have

Re: how to debug slow rbd block device

2012-05-22 Thread Andrey Korolyov
Hi, I ran into almost the same problem about two months ago, and there are a couple of corner cases: near-default TCP parameters, a small journal size, disks that are not backed by a controller with NVRAM cache, and high load on the osd's CPU caused by side processes. Finally, I was able to achieve 115Mb/s
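
As an example of the first point, moving away from near-default TCP parameters might look like this; the values are purely illustrative and need tuning per setup:

  sudo sysctl -w net.core.rmem_max=16777216                  # raise the socket buffer ceilings
  sudo sysctl -w net.core.wmem_max=16777216
  sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"     # min / default / max per-socket
  sudo sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"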

Re: how to debug slow rbd block device

2012-05-23 Thread Andrey Korolyov
Hi, For Stefan: Increasing socket memory gave me a few percent on fio tests inside the VM (I measured the 'max-iops-until-ceph-throws-message-about-delayed-write' parameter). More importantly, the osd process, if possible, should be pinned to a dedicated core or two, and all other processes
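
A rough sketch of the pinning, assuming a single ceph-osd on the host and that cores 0-1 are reserved for it:

  taskset -cp 0,1 "$(pidof ceph-osd)"     # pin the osd to cores 0-1
  taskset -cp 2-7 "$(pidof collectd)"     # keep other busy daemons on the remaining cores (example)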

Re: 'rbd map' asynchronous behavior

2012-05-25 Thread Andrey Korolyov
] [8146a839] ? system_call_fastpath+0x16/0x1b On Wed, May 16, 2012 at 12:24 PM, Andrey Korolyov and...@xdel.ru wrote: This is most likely due to a recently-fixed problem. The fix is found in this commit, although there were other changes that led up to it:   32eec68d2f   rbd: don't drop

Re: Random data corruption in VM, possibly caused by rbd

2012-06-07 Thread Andrey Korolyov
Hmm, can't reproduce that (phew!). Qemu-1.1-release, 0.47.2, guest/host mainly debian wheezy. The only major difference between my setup and yours is the underlying fs - I'm tired of btrfs's unpredictable load issues and moved back to xfs. BTW, you calculate sha1 in the test suite, not sha256 as you mentioned

Re: Rolling upgrades possible?

2012-06-22 Thread Andrey Korolyov
On Fri, Jun 22, 2012 at 1:23 PM, John Axel Eriksson j...@insane.se wrote: I guess this has been asked before, I'm just new to the list and wondered whether it's possible to do rolling upgrades of mons, osds and radosgw? We will soon be in the process of migrating from our current storage
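
A rough per-host sketch of what such a rolling procedure can look like, assuming sysvinit-style "service ceph" control, osd ids 0-3 on the host, and a mon named after the short hostname:

  sudo apt-get install -y ceph                                   # upgrade the packages first
  for id in 0 1 2 3; do
      sudo service ceph restart osd.$id
      while ! ceph health | grep -q HEALTH_OK; do sleep 10; done # let recovery settle between restarts
  done
  sudo service ceph restart mon.$(hostname -s)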

Re: Adding a delay when restarting all OSDs on a host

2014-07-22 Thread Andrey Korolyov
On Tue, Jul 22, 2014 at 5:19 PM, Wido den Hollander w...@42on.com wrote: Hi, Currently on Ubuntu with Upstart, when you invoke a restart like this: $ sudo restart ceph-osd-all It will restart all OSDs at once, which can increase the load on the system quite a bit. It's better to restart
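
A workaround until such a delay exists in the packaging is to restart the Upstart instances one by one with a pause in between; the 30 s value is an arbitrary example:

  for dir in /var/lib/ceph/osd/ceph-*; do
      id=${dir##*-}                    # osd id from the data directory name
      sudo restart ceph-osd id=$id
      sleep 30
  done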

Re: Adding a delay when restarting all OSDs on a host

2014-07-22 Thread Andrey Korolyov
On Tue, Jul 22, 2014 at 6:28 PM, Wido den Hollander w...@42on.com wrote: On 07/22/2014 03:48 PM, Andrey Korolyov wrote: On Tue, Jul 22, 2014 at 5:19 PM, Wido den Hollander w...@42on.com wrote: Hi, Currently on Ubuntu with Upstart when you invoke a restart like this: $ sudo restart ceph

Re: [Qemu-devel] qemu drive-mirror to rbd storage : no sparse rbd image

2014-10-11 Thread Andrey Korolyov
On Sat, Oct 11, 2014 at 12:25 PM, Fam Zheng f...@redhat.com wrote: On Sat, 10/11 10:00, Alexandre DERUMIER wrote: What is the source format? If the zero clusters are actually unallocated in the source image, drive-mirror will not write those clusters either. I.e. with drive-mirror sync=top,
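
For context, a hedged sketch of driving the mirror by hand through QMP, assuming a libvirt domain vm1, a source device named drive-virtio-disk0, and a pre-created target image rbd/vm1-mirror:

  virsh qemu-monitor-command vm1 '{
    "execute": "drive-mirror",
    "arguments": {
      "device": "drive-virtio-disk0",
      "target": "rbd:rbd/vm1-mirror",
      "sync": "top",
      "mode": "existing",
      "format": "raw"
    }
  }'

With sync=top, only clusters allocated in the source's top layer are copied, which is what keeps the destination sparse when the source itself is sparse.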

Multiple issues with glibc heap management

2014-10-13 Thread Andrey Korolyov
Hello, for a very long period (at least since cuttlefish) many users, including me, have experienced rare but still very disturbing client crashes (#8385, #6480, and a couple of other same-looking traces for different code pieces; I may open the corresponding separate bugs if necessary). The main
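
Two quick experiments that can help tell whether glibc's arena handling is involved at all, assuming a qemu/librbd client launched from a shell and a tcmalloc library at the path shown; neither is a fix for the referenced bugs:

  # limit glibc to a single malloc arena
  MALLOC_ARENA_MAX=1 qemu-system-x86_64 -m 2048 \
      -drive file=rbd:rbd/vm1,format=raw,if=virtio
  # or swap the allocator out entirely (library path is an assumption)
  LD_PRELOAD=/usr/lib/libtcmalloc.so.4 qemu-system-x86_64 -m 2048 \
      -drive file=rbd:rbd/vm1,format=raw,if=virtio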

leaking mons on a latest dumpling

2015-04-15 Thread Andrey Korolyov
Hello, there is a slow leak which I assume is present in all ceph versions, but it only becomes clearly visible over large time spans and on large clusters. It looks like the lower a monitor is placed in the quorum hierarchy, the higher the leak is:
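
The growth can be tracked without restarting anything, assuming a monitor id of "a", default admin socket paths, and daemons built with tcmalloc:

  ceph tell mon.a heap stats                                              # allocator-level view of the heap
  sudo ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok perf dump | head # internal counters for comparison
  # stop-gap: restart leaking mons one at a time so the quorum survives
  sudo service ceph restart mon.a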

Re: leaking mons on a latest dumpling

2015-04-16 Thread Andrey Korolyov
On Thu, Apr 16, 2015 at 11:30 AM, Joao Eduardo Luis j...@suse.de wrote: On 04/15/2015 05:38 PM, Andrey Korolyov wrote: Hello, there is a slow leak which I assume is present in all ceph versions, but it only becomes clearly visible over large time spans and on large clusters. It looks like

Re: Preliminary RDMA vs TCP numbers

2015-04-08 Thread Andrey Korolyov
On Wed, Apr 8, 2015 at 11:17 AM, Somnath Roy somnath@sandisk.com wrote: Hi, Please find the preliminary performance numbers of TCP Vs RDMA (XIO) implementation (on top of SSDs) in the following link. http://www.slideshare.net/somnathroy7568/ceph-on-rdma The attachment didn't go

Re: osd: new pool flags: noscrub, nodeep-scrub

2015-09-11 Thread Andrey Korolyov
On Fri, Sep 11, 2015 at 4:24 PM, Mykola Golub wrote: > On Fri, Sep 11, 2015 at 05:59:56AM -0700, Sage Weil wrote: > >> I wonder if, in addition, we should also allow scrub and deep-scrub >> intervals to be set on a per-pool basis? > > ceph osd pool set [deep-]scrub_interval
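
For illustration, what the proposal looks like in use, assuming a pool named rbd and that the flags and per-pool intervals are exposed through "ceph osd pool set" (as they eventually were); 0 means fall back to the global defaults:

  ceph osd pool set rbd noscrub 1
  ceph osd pool set rbd nodeep-scrub 1
  ceph osd pool set rbd scrub_min_interval 86400         # seconds
  ceph osd pool set rbd deep_scrub_interval 604800       # seconds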
