Thank you Greg,
I will look into it, and I hope self-managed and pool snapshots will also work
for erasure-coded pools; we predominantly use erasure coding.
Thanks,
Muthu
On Wednesday, 2 August 2017, Gregory Farnum <gfar...@redhat.com> wrote:
> On Tue, Aug 1, 2017 at 8:29 AM Muthusam
Hi,
Is there a librados API to clone objects?
I can see options in the radosgw API to copy objects and in RBD to clone
images, but I am not able to find similar options in the native librados
library to clone an object.
It would be good if you could point me to the right documentation if this is
possible.
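For reference, librados does not expose a single-call clone for plain RADOS objects the way RBD does for images; a common fallback is simply reading the object and writing it back under a new name (the `rados` CLI's `cp` subcommand does essentially this). Below is a minimal sketch assuming the python-rados bindings; the function name `clone_object` and the chunking scheme are illustrative, not part of any Ceph API:

```python
# Sketch: copy a RADOS object chunk by chunk within one pool, using the
# python-rados Ioctx methods stat(), read(), write_full() and write().
# This produces a full copy, not a copy-on-write clone.

def clone_object(ioctx, src_name, dst_name, chunk_size=4 * 1024 * 1024):
    """Copy object `src_name` to `dst_name` in the same ioctx; return bytes copied."""
    size, _mtime = ioctx.stat(src_name)   # total object size in bytes
    offset = 0
    while offset < size:
        data = ioctx.read(src_name, chunk_size, offset)
        if offset == 0:
            ioctx.write_full(dst_name, data)   # create/replace destination
        else:
            ioctx.write(dst_name, data, offset)  # append subsequent chunks
        offset += len(data)
    return size
```

In practice this would be wrapped in a `rados.Rados(conffile=...)` connection and `open_ioctx(pool)`; error handling and zero-length objects are left out of the sketch.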
On 20 March 2017 at 22:13, Wido den Hollander <w...@42on.com> wrote:
>
> > On 18 March 2017 at 10:39, Muthusamy Muthiah <
> muthiah.muthus...@gmail.com> wrote:
> >
> >
> > Hi,
> >
> > We had a similar issue on one of the 5 node
> > On 13 February 2017 at 12:57, Muthusamy Muthiah <
> muthiah.muthus...@gmail.com> wrote:
> >
> >
> > Hi All,
> >
> > We also have the same issue on one of our platforms, which was upgraded
> > from 11.0.2 to 11.2.0. The issue occurs on one node alone, where C
On one of our platforms, ceph-mgr uses 3 CPU cores. Is there a ticket
available for this issue?
Thanks,
Muthu
On 14 February 2017 at 03:13, Brad Hubbard wrote:
> Could one of the reporters open a tracker for this issue and attach
> the requested debugging data?
>
> On Mon, Feb 13,
Hi Wido,
Thanks for the information; let us know if this is a bug.
As a workaround we will go with a small bluestore_cache_size of 100 MB.
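A minimal ceph.conf fragment for that workaround might look as follows, assuming the option takes a value in bytes and is applied to all OSDs on restart (section placement and exact value are illustrative):

```ini
[osd]
# ~100 MB bluestore cache per OSD (value is in bytes)
bluestore_cache_size = 104857600
```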
Thanks,
Muthu
On 16 February 2017 at 14:04, Wido den Hollander <w...@42on.com> wrote:
>
> > On 16 February 2017 at 7:19, Mu
Thanks, Ilya Letkowski, for the information; we will change this value
accordingly.
Thanks,
Muthu
On 15 February 2017 at 17:03, Ilya Letkowski <mj12.svetz...@gmail.com>
wrote:
> Hi, Muthusamy Muthiah
>
> I'm not totally sure that this is a memory leak.
> We had the same problems with
Hi All,
We also have the same issue on one of our platforms, which was upgraded from
11.0.2 to 11.2.0. The issue occurs on one node alone, where CPU hits 100%
and the OSDs of that node are marked down. The issue is not seen on a cluster
that was installed from scratch with 11.2.0.
> On Wed, Feb 1, 2017 at 3:38 AM, Gregory Farnum <gfar...@redhat.com> wrote:
> > On Tue, Jan 31, 2017 at 9:06 AM, Muthusamy Muthiah
> > <muthiah.muthus...@gmail.com> wrote:
> >> Hi Greg,
> >>
> >> the problem is in
On 31 January 2017 at 18:17, Muthusamy Muthiah <muthiah.muthus...@gmail.com>
wrote:
> Hi Greg,
>
> Following are the test outcomes on an EC profile (n = k + m):
>
>
>
> 1. Kraken filestore and bluestore with m=1: recovery does not start.
>
> 2. Je
. Kraken bluestore with m=2: recovery happens when one OSD is down and also
when 2 OSDs fail.
So the issue seems to be in the ceph-kraken release. Your views…
Thanks,
Muthu
On 31 January 2017 at 14:18, Muthusamy Muthiah <muthiah.muthus...@gmail.com>
wrote:
> Hi Greg,
>
> N
>
> On Mon, Jan 30, 2017 at 1:23 PM, Gregory Farnum <gfar...@redhat.com>
> wrote:
> > On Sun, Jan 29, 2017 at 6:40 AM, Muthusamy Muthiah
> > <muthiah.muthus...@gmail.com> wrote:
> >> Hi All,
> >>
> >> Also tried EC profile 3+1 on 5 node clust
, 282 TB / 322 TB avail
941 active+clean
75 remapped+incomplete
8 active+clean+scrubbing
This seems to be an issue with bluestore; recovery is not happening properly
with EC.
Thanks,
Muthu
On 24 January 2017 at 12:57, Muthusamy Muthiah
the equivalent of
> > min_size for EC pools, but I don't know the parameters off the top of
> > my head.
> > -Greg
> >
> > On Fri, Jan 20, 2017 at 2:15 AM, Muthusamy Muthiah
> > <muthiah.muthus...@gmail.com> wrote:
> >> Hi ,
> >>
> >&g
Hi,
We are validating Kraken 11.2.0 with bluestore on a 5-node cluster with EC
4+1.
When an OSD is down, peering does not happen and the ceph health status
moves to ERR state after a few minutes. This was working in previous
development releases. Is any additional configuration required in v11.2.0?
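The behaviour reported in this thread (EC 4+1 stalling on a single OSD failure while 4+2 recovers) is consistent with the pool sitting below its minimum shard count. As a back-of-the-envelope check, assuming the common default of min_size = k + 1 for EC pools (the helper name below is illustrative, not a Ceph API):

```python
# Why an EC k+1 pool can block I/O on a single failure: with one shard gone,
# only k shards remain, which is enough to reconstruct data but (under the
# assumed default min_size = k + 1) not enough to serve I/O.

def ec_pool_state(k, m, failed_osds):
    """Return (can_serve_io, can_reconstruct) for an EC k+m pool."""
    available = k + m - failed_osds
    min_size = k + 1              # assumed default for EC pools
    can_serve_io = available >= min_size
    can_reconstruct = available >= k   # any k shards suffice to rebuild
    return can_serve_io, can_reconstruct

# EC 4+1, one OSD down: data survives but PGs sit below min_size.
print(ec_pool_state(4, 1, 1))   # (False, True)
# EC 4+2 tolerates one failure cleanly.
print(ec_pool_state(4, 2, 1))   # (True, True)
```

This matches the test outcomes quoted earlier in the thread: with m=1 recovery never starts, while with m=2 it proceeds.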
Thanks, Wido, for the information; I hope it is possible to upgrade from
11.1.0 to the intermediate Kraken releases and to upcoming releases.
Thanks,
Muthu
On 11 January 2017 at 19:12, Wido den Hollander wrote:
>
> > On 11 January 2017 at 12:24, Jayaram R