[ceph-users] Small modifications to Bluestore migration documentation

2018-02-27 Thread Alexander Kushnirenko
Hello, Luminous 12.2.2. There were several discussions on this list concerning Bluestore migration, as the official documentation does not yet work quite well. In particular this one: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-January/024190.html Is it possible to modify the official
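
For context, the procedure under discussion is the per-OSD Filestore-to-Bluestore replacement described in the official Luminous documentation. A sketch of that flow follows; the OSD id and device are placeholders, and the linked thread proposes amendments to exactly these steps:

```shell
# Replace one Filestore OSD with Bluestore (sketch of the documented flow).
# ID and DEVICE are placeholders -- substitute your OSD id and its device.
ID=3
DEVICE=/dev/sdX
ceph osd out $ID
# wait until the data has been re-replicated off this OSD
while ! ceph osd safe-to-destroy osd.$ID ; do sleep 60 ; done
systemctl kill ceph-osd@$ID
ceph-volume lvm zap $DEVICE
ceph osd destroy $ID --yes-i-really-mean-it
ceph-volume lvm create --bluestore --data $DEVICE --osd-id $ID
```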

Re: [ceph-users] Bareos and libradosstriper work only for 4M stripe_unit size

2017-10-16 Thread Alexander Kushnirenko
47 48 .. 4f  50 .. 57  58 .. 5f   Obj5=8M  Obj6=8M  Obj7=8M  Obj8=8M   60 .. 67  68 .. 6f  70 .. 77  78 .. 7f   Alexander. On Wed, Oct 11, 2017 at 3:19 PM, Alexander Kushnirenko <kushnire...@gmail.com> wrote: > Oh! I put a wrong

Re: [ceph-users] Bareos and libradosstriper work only for 4M stripe_unit size

2017-10-11 Thread Alexander Kushnirenko
, Alexander Kushnirenko <kushnire...@gmail.com> wrote: > Hi, Ian! > > Thank you for your reference! > > Could you comment on the following rule: > object_size = stripe_unit * stripe_count > Or is it not necessarily so? > > I refer to page 8 in this report: >

Re: [ceph-users] Bareos and libradosstriper work only for 4M stripe_unit size

2017-10-11 Thread Alexander Kushnirenko
Hi, Ian! Thank you for your reference! Could you comment on the following rule: object_size = stripe_unit * stripe_count. Or is it not necessarily so? I refer to page 8 in this report: https://indico.cern.ch/event/531810/contributions/2298934/at

Re: [ceph-users] Bareos and libradosstriper work only for 4M stripe_unit size

2017-10-11 Thread Alexander Kushnirenko
triper.hpp has one in RadosStriper > set_object_layout_object_size(unsigned int object_size); > > So I imagine you specify it with those the same way you've set the stripe > unit and counts. > > On Sat, Oct 7, 2017 at 12:38 PM Alexander Kushnirenko < > kushnire...@gmail.com>
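
The layout rule quoted in this thread can be checked mechanically. Below is a small Python sketch, not librados code; the function name `validate_layout` is hypothetical, and it encodes only the strict form of the rule discussed above, object_size = stripe_unit * stripe_count:

```python
def validate_layout(object_size: int, stripe_unit: int, stripe_count: int) -> bool:
    """Check the strict striper layout rule quoted in this thread:
    object_size must equal stripe_unit * stripe_count."""
    if stripe_unit <= 0 or stripe_count <= 0:
        return False
    return object_size == stripe_unit * stripe_count

M = 1024 * 1024
# 4M stripe unit with 2 stripes gives 8M objects.
print(validate_layout(8 * M, 4 * M, 2))   # True
print(validate_layout(8 * M, 3 * M, 2))   # False: 3M * 2 != 8M
```

In the real API these three values are set independently (set_object_layout_stripe_unit, set_object_layout_stripe_count, set_object_layout_object_size), which is why a consistency check like this is worth doing before writing.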

[ceph-users] advice on number of objects per OSD

2017-10-10 Thread Alexander Kushnirenko
Hi, Are there any recommendations on the limit at which OSD performance starts to decline because of a large number of objects? Or perhaps a procedure for finding this number (luminous)? My understanding is that the recommended object size is 10-100 MB, but is there any performance hit due
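
As a back-of-the-envelope aid (not a Ceph limit; the capacity, fill ratio, and object sizes below are illustrative assumptions), the object count per OSD follows directly from OSD capacity and mean object size:

```python
def objects_per_osd(osd_capacity_bytes: int, mean_object_bytes: int,
                    fill_ratio: float = 0.7) -> int:
    """Rough object count on one OSD at a given fill ratio."""
    return int(osd_capacity_bytes * fill_ratio // mean_object_bytes)

TB = 10**12
MB = 10**6
# A 4 TB OSD at 70% full holds ~280,000 objects at 10 MB each,
# or ~28,000 at 100 MB each -- the 10-100 MB range mentioned above.
print(objects_per_osd(4 * TB, 10 * MB))    # 280000
print(objects_per_osd(4 * TB, 100 * MB))   # 28000
```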

Re: [ceph-users] Bareos and libradosstriper work only for 4M stripe_unit size

2017-10-07 Thread Alexander Kushnirenko
13bc9d72b1] 10: (()+0xbca) [0x55dd87b40bca] On Fri, Sep 29, 2017 at 11:46 PM, Gregory Farnum <gfar...@redhat.com> wrote: > I haven't used the striper, but it appears to make you specify sizes, > stripe units, and stripe counts. I would expect you need to make sure that > the size i

[ceph-users] How to use rados_aio_write correctly?

2017-10-03 Thread Alexander Kushnirenko
Hello, I'm working on third-party code (the Bareos storage daemon) which gives very low write speeds for Ceph. The code was written to demonstrate that it is possible, but the speed is about 3-9 MB/s, which is too slow. I modified the routine to use rados_aio_write instead of rados_write, and was
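
Running rados_aio_write itself needs a live cluster, so as an illustration of the pattern only, here is a Python sketch that mimics it: queue several writes without waiting on each, then wait for all completions. The backend (`FakeIoctx`) is a stand-in, not librados; the lock stands in for the OSD's per-object write ordering:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class FakeIoctx:
    """Stand-in for a rados ioctx: stores object data in a dict.
    aio_write() mimics rados_aio_write(): it returns a completion
    immediately and the write lands in the background."""
    def __init__(self):
        self.objects = {}
        self._lock = threading.Lock()              # per-object write ordering
        self._pool = ThreadPoolExecutor(max_workers=8)

    def aio_write(self, oid, data, offset):
        return self._pool.submit(self._write, oid, data, offset)

    def _write(self, oid, data, offset):
        with self._lock:
            buf = bytearray(self.objects.get(oid, b""))
            if len(buf) < offset + len(data):       # grow object if needed
                buf.extend(b"\0" * (offset + len(data) - len(buf)))
            buf[offset:offset + len(data)] = data
            self.objects[oid] = bytes(buf)

ioctx = FakeIoctx()
chunk = b"x" * (4 * 1024 * 1024)
# Keep eight 4 MiB writes in flight instead of one synchronous write at a time.
completions = [ioctx.aio_write("vol-0001", chunk, i * len(chunk)) for i in range(8)]
for c in completions:
    c.result()   # analogous to rados_aio_wait_for_complete()
print(len(ioctx.objects["vol-0001"]))   # 33554432 (8 x 4 MiB)
```

The gain over the synchronous version comes entirely from overlapping the per-op round trips, which is the change described in this message.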

Re: [ceph-users] rados_read versus rados_aio_read performance

2017-10-01 Thread Alexander Kushnirenko
here. In > that case you are dominated by the per-op latency already rather than the > throughput of your cluster. Using aio or multiple threads will let you > parallelize requests. > -Greg > On Fri, Sep 29, 2017 at 3:33 AM Alexander Kushnirenko < > kushnire...@gmail.com> wrote: >

[ceph-users] rados_read versus rados_aio_read performance

2017-09-29 Thread Alexander Kushnirenko
Hello, We see very poor performance when reading/writing rados objects. The speed is only 3-4 MB/s, compared to 95 MB/s in rados benchmarking. When you look at the underlying code, it uses the librados and libradosstriper libraries (both have poor performance), and the code uses rados_read and rados_write
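
The ~3-4 MB/s figure is consistent with being latency-bound rather than bandwidth-bound, as the reply above points out. With per-op round trip t, object size s, and streaming bandwidth B, a synchronous loop achieves s / (t + s/B). A small Python sketch; the 15 ms latency and 95 MB/s bandwidth are illustrative assumptions, not measurements from this cluster:

```python
def sync_throughput_mb_s(obj_mb: float, rtt_s: float, bw_mb_s: float) -> float:
    """Effective MB/s when each op waits for the previous one to finish."""
    return obj_mb / (rtt_s + obj_mb / bw_mb_s)

# Hypothetical numbers: small 64 KB ops with a 15 ms round trip barely move
# data, while 4 MB ops approach the cluster's streaming bandwidth.
print(round(sync_throughput_mb_s(0.064, 0.015, 95.0), 1))   # 4.1
print(round(sync_throughput_mb_s(4.0, 0.015, 95.0), 1))     # 70.0
```

Under these assumed numbers, small synchronous ops land right in the observed 3-4 MB/s range, which is why aio or multiple threads (overlapping the round trips) recovers most of the benchmark throughput.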

[ceph-users] Bareos and libradosstriper work only for 4M stripe_unit size

2017-09-29 Thread Alexander Kushnirenko
Hi, I'm trying to use CEPH-12.2.0 as storage for Bareos-16.2.4 backups with libradosstriper1 support. Libradosstriper was suggested on this list to solve the problem that current CEPH-12 discourages users from using objects with very big size (>128MB). Bareos treats a Rados object as a volume

Re: [ceph-users] osd crashes with large object size (>10GB) in luminous Rados

2017-09-26 Thread Alexander Kushnirenko
e data across multiple objects. Objects shouldn’t be > stored as large as that and performance will also suffer. > > > > *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf > Of *Alexander Kushnirenko > *Sent:* 26 September 2017 13:50 > *To:* ceph-users@lists.

[ceph-users] osd crashes with large object size (>10GB) in luminous Rados

2017-09-26 Thread Alexander Kushnirenko
Hello, We successfully use rados to store backup volumes in the Jewel version of Ceph. Typical volume size is 25-50 GB. The backup software (Bareos) uses Rados objects as backup volumes and it works fine. Recently we tried Luminous for the same purpose. In Luminous, developers reduced osd_max_object_size
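
For reference, the limit being discussed is the osd_max_object_size option, whose default Luminous lowered to 128 MB. If larger objects are unavoidable it can be raised in ceph.conf; the value below is illustrative, and the reply quoted above explains why striping the data across multiple objects is the preferred fix:

```
[osd]
# illustrative: restore a larger per-object cap (bytes); luminous default is 128 MB
osd max object size = 34359738368
```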