Re: [ceph-users] Strange Data Issue - Unexpected client hang on OSD I/O Error

2018-12-25 Thread Dyweni - Ceph-Users
Hi again! Prior to rebooting the client, I found this file (and its contents):

# cat /sys/kernel/debug/ceph/8abf116d-a710-4245-811d-c08473cb9fb4.client7412370/osdc
REQUESTS 1 homeless 0
1459933 osd2 4.3120c635 [2,18,9]/2 [2,18,9]/2 rbd_data.6b60e8643c9869.157f
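For anyone chasing a similar hang: a minimal sketch for walking every kernel client's osdc file and printing the requests still in flight, assuming the layout shown above (a "REQUESTS <n> homeless <m>" header, then one line per request: tid, target OSD, PG, up/acting sets, object name). The exact field layout varies across kernel versions, and reading debugfs needs root, so treat the parsing as illustrative:

#!/usr/bin/env python3
# Sketch: list in-flight requests stuck in the kernel Ceph client.
# Assumes the osdc layout seen above; field order differs between
# kernel versions, so adjust the indices for your kernel.
import glob

for path in glob.glob("/sys/kernel/debug/ceph/*/osdc"):
    print(path)
    with open(path) as f:
        lines = f.read().splitlines()
    for line in lines[1:]:                  # skip the REQUESTS header
        fields = line.split()
        if len(fields) >= 3 and fields[1].startswith("osd"):
            tid, osd, pg = fields[:3]
            print(f"  tid {tid} waiting on {osd}, pg {pg}, obj {fields[-1]}")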

[ceph-users] Strange Data Issue - Unexpected client hang on OSD I/O Error

2018-12-25 Thread Dyweni - Ceph-Users
Hi Everyone/Devs, Would someone please help me troubleshoot a strange data issue (unexpected client hang on OSD I/O Error)? On the client, I had a process reading a large amount of data from a mapped RBD image. I noticed tonight that it had stalled for a long period of time (which never
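One way to see which OSDs serve the object a stalled request is waiting on is `ceph osd map <pool> <object>`. A small sketch, with placeholder pool and object names (substitute the full rbd_data.* name from your own osdc output):

#!/usr/bin/env python3
# Sketch: ask the cluster which PG and OSDs an RBD data object maps to.
# The pool and object names below are placeholders, not from this thread.
import subprocess

pool, obj = "rbd", "rbd_data.0123456789ab.0000000000000000"
out = subprocess.check_output(["ceph", "osd", "map", pool, obj], text=True)
print(out.strip())  # e.g. "... pg 4.3120c635 ... acting ([2,18,9], p2)"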

Re: [ceph-users] InvalidObjectName Error when calling the PutObject operation

2018-12-25 Thread Konstantin Shalygin
Dear Members, I am trying to upload an object using an SSE-C (customer-provided) key and am getting the following error:

botocore.exceptions.ClientError: An error occurred (InvalidObjectName) when calling the PutObject operation: Unknown

>>> s3.list_buckets()
{u'Owner': {u'DisplayName': 'User for
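For comparison, a minimal sketch of what a working SSE-C upload with boto3 usually looks like; the bucket, object name, endpoint, and key material are all placeholders, and boto3 base64-encodes the key and adds the key-MD5 header itself:

#!/usr/bin/env python3
# Sketch: PutObject with an SSE-C (customer-provided) key via boto3.
# Bucket, object name, and endpoint are placeholders. boto3 computes
# the base64 encoding and key-MD5 header from the raw 32-byte key.
import os
import boto3

s3 = boto3.client("s3", endpoint_url="http://rgw.example.com:7480")
key = os.urandom(32)           # 256-bit customer key; store it safely

s3.put_object(
    Bucket="mybucket",
    Key="hello.txt",           # a plain ASCII key, to rule out odd
    Body=b"hello world",       # characters as the InvalidObjectName cause
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=key,
)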

Re: [ceph-users] Balancing cluster with large disks - 10TB HDD

2018-12-25 Thread Konstantin Shalygin
$ sudo ceph osd df tree
 ID CLASS    WEIGHT    REWEIGHT SIZE USE    AVAIL  %USE  VAR  PGS TYPE NAME
 -8          639.98883 -        639T 327T   312T   51.24 1.00 -   root default
-10          111.73999 -        111T 58509G 55915G 51.13 1.00 -   host bison
 78 hdd_fast 0.90900   1.0

Re: [ceph-users] InvalidObjectName Error when calling the PutObject operation

2018-12-25 Thread Rishabh S
A kind reminder, in case anyone can help me with this "InvalidObjectName Error when calling the PutObject operation". Thanks & Regards, Rishabh

> On 21-Dec-2018, at 10:03 AM, Rishabh S wrote:
>
> Dear Members,
>
> I am trying to upload an object using an SSE-C (customer-provided) key and getting

Re: [ceph-users] Balancing cluster with large disks - 10TB HDD

2018-12-25 Thread jesper
> Please, paste your `ceph osd df tree` and `ceph osd dump | head -n 12`.

$ sudo ceph osd df tree
 ID CLASS WEIGHT    REWEIGHT SIZE USE  AVAIL %USE  VAR  PGS TYPE NAME
 -8       639.98883 -        639T 327T 312T  51.24 1.00 -   root default
-10       111.73999 -        111T
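To put a number on the imbalance, the utilization spread can be computed straight from the JSON form of `ceph osd df`. A sketch, assuming the "nodes"/"utilization" fields that recent Ceph releases emit (verify the field names on your version):

#!/usr/bin/env python3
# Sketch: summarize the per-OSD utilization spread from `ceph osd df`.
# Assumes each entry in "nodes" carries "utilization" (percent) and
# "kb"; zero-sized entries such as down/out OSDs are skipped.
import json
import statistics
import subprocess

raw = subprocess.check_output(["ceph", "osd", "df", "--format", "json"])
utils = [n["utilization"] for n in json.loads(raw)["nodes"] if n["kb"] > 0]

print(f"OSDs:    {len(utils)}")
print(f"min/max: {min(utils):.1f}% / {max(utils):.1f}%")
print(f"stddev:  {statistics.pstdev(utils):.2f}")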

Re: [ceph-users] Balancing cluster with large disks - 10TB HDD

2018-12-25 Thread Konstantin Shalygin
We hit an OSD_FULL last week on our cluster - with an average utilization of less than 50% .. thus hugely imbalanced. This has driven us to adjust PGs upwards and reweight the OSDs more aggressively. Question: What do people see as an "acceptable" variance across OSDs? x N

[ceph-users] Balancing cluster with large disks - 10TB HDD

2018-12-25 Thread jesper
Hi. We hit an OSD_FULL last week on our cluster - with an average utilization of less than 50% .. thus hugely imbalanced. This has driven us to adjust PGs upwards and reweight the OSDs more aggressively. Question: What do people see as an "acceptable" variance across OSDs? x
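Beyond manual `ceph osd reweight`, the reweight-by-utilization machinery has a dry-run variant that is worth running first. A hedged sketch of the usual sequence; the 110 threshold (flag OSDs more than 10% above mean utilization) is illustrative, not a recommendation:

#!/usr/bin/env python3
# Sketch: dry-run reweight-by-utilization, then apply once the
# proposed weight changes look sane. Threshold 110 is illustrative.
import subprocess

def ceph(*args: str) -> str:
    return subprocess.check_output(["ceph", *args], text=True)

print(ceph("osd", "test-reweight-by-utilization", "110"))  # dry run only
# After reviewing the proposed changes:
# print(ceph("osd", "reweight-by-utilization", "110"))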