[ceph-users] EC pools grinding to a screeching halt on Luminous

2018-12-26 Thread Florian Haas
Hi everyone, We have a Luminous cluster (12.2.10) on Ubuntu Xenial, though we have also observed the same behavior on 12.2.7 on Bionic (download.ceph.com doesn't build Luminous packages for Bionic, and 12.2.7 is the latest distro build). The primary use case for this cluster is radosgw. 6 OSD

Re: [ceph-users] Balancing cluster with large disks - 10TB HHD

2018-12-26 Thread Heðin Ejdesgaard Møller
On mik, 2018-12-26 at 13:14 +0100, jes...@krogh.cc wrote: > Thanks for the insight and links. > > > As I can see you are on Luminous. Since the Luminous Balancer plugin is > > available [1], you should use it instead of manual reweights, especially > >

Re: [ceph-users] Strange Data Issue - Unexpected client hang on OSD I/O Error

2018-12-26 Thread Dyweni - Ceph-Users
Good Morning, I re-ran the verification and it matches exactly the original data that was backed up (approx. 300 GB). There were no further messages issued on the client or on any OSD originally involved (2, 9, 18). I believe the data to be OK. The cluster is currently healthy (all PGs

Re: [ceph-users] Balancing cluster with large disks - 10TB HHD

2018-12-26 Thread jesper
Thanks for the insight and links. > As I can see you are on Luminous. Since the Luminous Balancer plugin is > available [1], you should use it instead of manual reweights, especially > in upmap mode [2] I'll try it out again - last time I tried it complained about older clients - it should be better

Re: [ceph-users] Balancing cluster with large disks - 10TB HHD

2018-12-26 Thread jesper
> On mik, 2018-12-26 at 13:14 +0100, jes...@krogh.cc wrote: >> Thanks for the insight and links. >> >> > As I can see you are on Luminous. Since the Luminous Balancer plugin is >> > available [1], you should use it instead of manual reweights, >>

Re: [ceph-users] InvalidObjectName Error when calling the PutObject operation

2018-12-26 Thread Rishabh S
Hi Konstantin, Thanks for your response. The issue was that, by default, Ceph rejects requests that send an encryption key over a non-secure link, which was making my request fail. Best Regards, Rishabh > On 26-Dec-2018, at 8:58 AM, Konstantin Shalygin wrote: > >> put_object
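The failure mode Rishabh describes is consistent with radosgw's default of refusing SSE-C encryption keys sent over plain HTTP. If TLS is terminated in front of radosgw (or for testing only), that check can be relaxed; a hedged config sketch, where the section name client.rgw.gateway is a placeholder for your actual rgw instance:

```ini
# ceph.conf on the radosgw host. Default is true, i.e. reject SSE-C
# keys arriving over non-SSL connections. Only set to false if TLS
# is terminated by a trusted proxy in front of the gateway.
[client.rgw.gateway]
rgw crypt require ssl = false
```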

Re: [ceph-users] Balancing cluster with large disks - 10TB HHD

2018-12-26 Thread Heðin Ejdesgaard Møller
On mik, 2018-12-26 at 16:30 +0100, jes...@krogh.cc wrote: > > On mik, 2018-12-26 at 13:14 +0100, jes...@krogh.cc wrote: > > > Thanks for the insight and links. > > > > > > > As I can see

Re: [ceph-users] radosgw-admin unable to store user information

2018-12-26 Thread Dilip Renkila
Hi all, Some useful information: >> What do the following return? >> $ radosgw-admin zone get root@ctrl1:~# radosgw-admin zone get { "id": "8bfdf8a3-c165-44e9-9ed6-deff8a5d852f", "name": "default", "domain_root": "default.rgw.meta:root",

Re: [ceph-users] EC pools grinding to a screeching halt on Luminous

2018-12-26 Thread Mohamad Gebai
What is happening on the individual nodes when you reach that point (iostat -x 1 on the OSD nodes)? Also, what throughput do you get when benchmarking the replicated pool? I guess one way to start would be by looking at ongoing operations at the OSD level: ceph daemon osd.X dump_blocked_ops ceph
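For readers hitting similar stalls, the diagnostics Mohamad mentions can be run roughly as follows; this is a sketch against a live cluster, where osd.X is a placeholder for a suspect OSD and the `ceph daemon` commands must run on the node hosting that OSD's admin socket:

```shell
# Per-device utilization on each OSD node while the workload runs;
# watch %util and await for saturated or slow disks.
iostat -x 1

# Inspect currently blocked and recently slow operations on a
# suspect OSD (run locally on the node that hosts osd.X):
ceph daemon osd.X dump_blocked_ops
ceph daemon osd.X dump_historic_ops
```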

Re: [ceph-users] Balancing cluster with large disks - 10TB HHD

2018-12-26 Thread jesper
> Have a look at this thread on the mailing list: > https://www.mail-archive.com/ceph-users@lists.ceph.com/msg46506.html Ok, done.. how do I see that it actually works? Second - should the reweights be set back to 1 then? Jesper

[ceph-users] radosgw-admin unable to store user information

2018-12-26 Thread Dilip Renkila
Hi all, I have a ceph radosgw deployment as an openstack swift backend with multitenancy enabled in rgw. I can create containers and store data through the swift api. I am trying to retrieve user data with the radosgw-admin cli tool for a user. I am able to get only the admin user's info but no one else's. $
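With rgw multitenancy enabled, users other than admin are typically created under a tenant, and `radosgw-admin user info` only finds them when the tenant is supplied. A hedged sketch (tenant and uid names below are placeholders, not from the thread):

```shell
# Tenant-scoped lookup; either name the tenant explicitly...
radosgw-admin user info --uid=someuser --tenant=sometenant
# ...or use the combined tenant$user form:
radosgw-admin user info --uid='sometenant$someuser'
# List all known users (tenant-qualified uids show as tenant$user):
radosgw-admin user list
```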

Re: [ceph-users] Balancing cluster with large disks - 10TB HHD

2018-12-26 Thread Konstantin Shalygin
I'll try it out again - last time I tried it complained about older clients - it should be better now. upmap is supported since kernel 4.13. Second - should the reweights be set back to 1 then? Yes, also: 1. `ceph osd crush tunables optimal` 2. All your buckets should be straw2, but in case
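Konstantin's advice can be sketched as the following command sequence on a Luminous cluster; this is a sketch rather than verbatim from the thread, and the first command will refuse to proceed if pre-Luminous clients are still connected:

```shell
# Reject clients too old to understand pg-upmap entries
# (kernel clients need >= 4.13):
ceph osd set-require-min-compat-client luminous
# Switch the mgr balancer to upmap mode and enable it:
ceph balancer mode upmap
ceph balancer on
ceph balancer status
# Clear earlier manual overrides so they don't fight the balancer
# (repeat per previously reweighted OSD; 7 is a placeholder id):
ceph osd reweight 7 1.0
```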

[ceph-users] cephfs kernel client instability

2018-12-26 Thread Andras Pataki
We've been using ceph-fuse with a pretty good stability record (against the Luminous 12.2.8 back end).  Unfortunately ceph-fuse has extremely poor small file performance (understandably), so we've been testing the kernel client.  The latest RedHat kernel 3.10.0-957.1.3.el7.x86_64 seems to work
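For anyone comparing the two clients Andras mentions, minimal invocations look roughly like this; monitor address, user name, and secret path are placeholders:

```shell
# FUSE client (reads cluster details from /etc/ceph):
ceph-fuse -n client.admin /mnt/cephfs

# Kernel client (cephfs module):
mount -t ceph 192.0.2.10:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret
```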

Re: [ceph-users] list admin issues

2018-12-26 Thread Janne Johansson
On Sat, 22 Dec 2018 at 19:18, Brian wrote: > Sorry to drag this one up again. Not as sorry to drag it up as you > Just got the unsubscribed due to excessive bounces thing. And me. > 'Your membership in the mailing list ceph-users has been disabled due > to excessive bounces' The last bounce