Hi everyone,
We have a Luminous cluster (12.2.10) on Ubuntu Xenial, though we have
also observed the same behavior on 12.2.7 on Bionic (download.ceph.com
doesn't build Luminous packages for Bionic, and 12.2.7 is the latest
distro build).
The primary use case for this cluster is radosgw. 6 OSD
On Wed, 2018-12-26 at 13:14 +0100, jes...@krogh.cc wrote:
> Thanks for the insight and links.
>
> > As I can see you are on Luminous. Since Luminous the Balancer plugin is
> > available [1]; you should use it instead of reweights, especially
> >
Good Morning,
I re-ran the verification and it matches exactly the original data that
was backed up (approx. 300 GB). There were no further messages issued on
the client or any OSD originally involved (2,9,18). I believe the data
to be OK. The cluster is currently healthy (all PGs
Thanks for the insight and links.
> As I can see you are on Luminous. Since Luminous the Balancer plugin is
> available [1]; you should use it instead of reweights, especially
> in upmap mode [2].
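If I read [1] and [2] right, enabling it should be roughly this (a sketch
from the docs, not tried here yet):
ceph mgr module enable balancer
ceph balancer mode upmap
ceph balancer on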
I'll try it out again - last time I tried it complained about older clients -
it should be better
> On Wed, 2018-12-26 at 13:14 +0100, jes...@krogh.cc wrote:
>> Thanks for the insight and links.
>>
>> > As I can see you are on Luminous. Since Luminous the Balancer plugin is
>> > available [1]; you should use it instead of reweights,
>>
Hi Konstantin,
Thanks for your response.
The issue was that, by default, Ceph refuses to accept an encryption key over
a non-secure (non-SSL) connection, and that was making my request fail.
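For the archives: the option involved appears to be rgw_crypt_require_ssl
(true by default). Relaxing it for a test setup would look something like
this in ceph.conf (the instance name below is made up):
[client.rgw.gateway1]
# test only: accept SSE-C keys over plain HTTP; the default is true
rgw_crypt_require_ssl = false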
Best Regards,
Rishabh
> On 26-Dec-2018, at 8:58 AM, Konstantin Shalygin wrote:
>
>> put_object
On Wed, 2018-12-26 at 16:30 +0100, jes...@krogh.cc wrote:
> > On Wed, 2018-12-26 at 13:14 +0100, jes...@krogh.cc wrote:
> > > Thanks for the insight and links.
> > >
> > > > As I can see
Hi all,
Some useful information
>>> What do the following return?
>>>
>>> $ radosgw-admin zone get
root@ctrl1:~# radosgw-admin zone get
{
"id": "8bfdf8a3-c165-44e9-9ed6-deff8a5d852f",
"name": "default",
"domain_root": "default.rgw.meta:root",
What is happening on the individual nodes when you reach that point
(iostat -x 1 on the OSD nodes)? Also, what throughput do you get when
benchmarking the replicated pool?
I guess one way to start would be by looking at ongoing operations at
the OSD level:
ceph daemon osd.X dump_blocked_ops
ceph
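For the raw pool throughput question, a minimal rados bench run against the
replicated pool would look like this (the pool name "rbd" is just an example):
rados bench -p rbd 30 write --no-cleanup   # 4 MB object writes for 30 s
rados bench -p rbd 30 seq                  # sequential reads of those objects
rados -p rbd cleanup                       # remove the benchmark objects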
> Have a look at this thread on the mailing list:
> https://www.mail-archive.com/ceph-users@lists.ceph.com/msg46506.html
OK, done. How do I see that it is actually working?
Second - should the reweights be set back to 1 then?
Jesper
Hi all,
I have a Ceph radosgw deployment as an OpenStack Swift backend with
multitenancy enabled in rgw.
I can create containers and store data through the Swift API.
I am trying to retrieve user data for a given user with the radosgw-admin
CLI tool, but I am only able to get the admin user's info, no one else's.
$
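For the archives: with rgw multitenancy, radosgw-admin seems to need the
tenant-qualified uid, along these lines (tenant and user names are made up):
radosgw-admin user info --uid=admin                    # default (empty) tenant
radosgw-admin user info --uid='testtenant$testuser'    # tenant$user form
radosgw-admin user info --tenant=testtenant --uid=testuser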
I'll try it out again - last time I tried it complained about older clients -
it should be better now.
upmap is supported since kernel 4.13.
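Before switching, it is worth confirming that nothing older is still
connected; a quick check could be:
ceph features                                    # per-release client counts
ceph osd set-require-min-compat-client luminous  # required before upmap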
Second - should the reweights be set back to 1 then?
Yes, also:
1. `ceph osd crush tunables optimal`
2. All your buckets should be straw2, but in case
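To see whether the balancer is actually doing anything, and to check the
bucket algorithm, something like this should do (a sketch; the crush check
just greps the dump output):
ceph balancer status                 # shows mode and any queued plans
ceph balancer eval                   # distribution score, lower is better
ceph osd crush dump | grep '"alg"'   # every bucket should report "straw2"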
We've been using ceph-fuse with a pretty good stability record (against
the Luminous 12.2.8 back end). Unfortunately ceph-fuse has extremely
poor small-file performance (understandably), so we've been testing the
kernel client. The latest Red Hat kernel 3.10.0-957.1.3.el7.x86_64 seems
to work
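For reference, the kernel-client mount we are testing is roughly the
standard form (monitor address and secret file are placeholders):
mount -t ceph 192.168.1.1:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret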
On Sat, 22 Dec 2018 at 19:18, Brian wrote:
> Sorry to drag this one up again.
Not as sorry to drag it up as you
> Just got the unsubscribed due to excessive bounces thing.
And me.
> 'Your membership in the mailing list ceph-users has been disabled due
> to excessive bounces The last bounce