[ceph-users] Changing the distribution of pgs to be deep-scrubbed

2016-08-25 Thread Mark Kirkwood
Deep scrubbing is a pain point for some (many?) Ceph installations. We have recently been hit by deep scrubbing causing noticeable latency increases to the entire cluster, but only on certain (infrequent) days. This led me to become more interested in the distribution of pg deep scrubs.
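
One quick way to look at that distribution is to bucket the per-PG deep-scrub timestamps by day. A rough sketch (the JSON layout of "ceph pg dump" differs between releases; in Jewel the per-PG entries are under "pg_stats", in newer releases under "pg_map"):
$ ceph pg dump --format json 2>/dev/null | python -c "import json, sys, collections; d = json.load(sys.stdin); stats = d.get('pg_stats') or d['pg_map']['pg_stats']; print(collections.Counter(s['last_deep_scrub_stamp'][:10] for s in stats))"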

[ceph-users] 答复: RGW 10.2.2 SignatureDoesNotMatch with special characters in object name

2016-08-25 Thread zhu tong
I don't have such a problem: [root@ceph-node1 ~]# s3cmd ls s3://aaa 2016-08-26 02:22 10 s3://aaa/@@@.txt And I don't think RGW supports the AWS v4 signer. I guess you are developing a client? Maybe comparing the output signature with one generated by an AWS SDK could help.
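
The v2 signature can also be recomputed by hand and compared with what the client sends; a sketch using openssl rather than an SDK (the string-to-sign below assumes a plain GET of /aaa/@@@.txt with no Content-MD5, Content-Type or x-amz-* headers, and the key is a placeholder):
$ SECRET_KEY='your-secret-key'
$ DATE='Fri, 26 Aug 2016 02:22:00 +0000'
$ printf 'GET\n\n\n%s\n/aaa/@@@.txt' "$DATE" | openssl sha1 -hmac "$SECRET_KEY" -binary | base64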

Re: [ceph-users] RGW 10.2.2 SignatureDoesNotMatch with special characters in object name

2016-08-25 Thread Henrik Korkuc
Looks like my problem is a little different. I am not using v4, and the object names which fail for you work for me. On 16-08-25 11:52, jan hugo prins wrote: Could this have something to do with: http://tracker.ceph.com/issues/17076 Jan Hugo Prins On 08/25/2016 10:34 AM, Henrik Korkuc wrote:

Re: [ceph-users] CephFS Big Size File Problem

2016-08-25 Thread Lazuardi Nasution
Hi, I'm using the admin key with the following "ceph auth list" output. I don't see any problem there. I'm not sure it is related to path restriction since dd with a small file was OK. client.admin key: AQBBk81Wk76dBxAA0SGKyJGgfSUt202NKX8tNQ== caps: [mds] allow * caps: [mon]

Re: [ceph-users] CephFS Big Size File Problem

2016-08-25 Thread Lazuardi Nasution
Hi, I have rebooted the Nova compute node with the D-state process, so I cannot check that anymore. Best regards, On Thu, Aug 25, 2016 at 7:58 PM, Yan, Zheng wrote: > On Thu, Aug 25, 2016 at 11:12 AM, Lazuardi Nasution > wrote: > > Hi Gregory, > > > > Since I

Re: [ceph-users] mounting a VM rbd image as a /dev/rbd0 device

2016-08-25 Thread Oleksandr Natalenko
You should: 1) inspect /dev/rbd0 with fdisk -l to get the partition offsets; 2) mount the desired partition with the -o offset= option. On Thursday, 25 August 2016 17:31:52 EEST, Deneau, Tom wrote: > If I have an rbd image that is being used by a VM and I want to mount it > as a read-only /dev/rbd0 kernel
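
A sketch of that workflow (image name, mount point and offset are illustrative; recent util-linux sets up the loop device implicitly when offset= is given, older versions may need losetup by hand):
$ rbd map mypool/vm-image --read-only   # or reuse the already-mapped /dev/rbd0
$ fdisk -l /dev/rbd0                    # note the partition's start sector and sector size
$ mount -o ro,offset=$((2048 * 512)) /dev/rbd0 /mnt/inspect
If the kernel has already created partition devices (e.g. /dev/rbd0p1), mounting one of those read-only directly is an alternative to the offset trick.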

[ceph-users] mounting a VM rbd image as a /dev/rbd0 device

2016-08-25 Thread Deneau, Tom
If I have an rbd image that is being used by a VM and I want to mount it as a read-only /dev/rbd0 kernel device, is that possible? When I try it I get: mount: /dev/rbd0 is write-protected, mounting read-only mount: wrong fs type, bad option, bad superblock on /dev/rbd0, missing codepage

Re: [ceph-users] CephFS + cache tiering in Jewel

2016-08-25 Thread Gregory Farnum
On Wed, Aug 24, 2016 at 11:21 PM, Burkhard Linke wrote: > Hi, > > > On 08/24/2016 10:22 PM, Gregory Farnum wrote: >> >> On Tue, Aug 23, 2016 at 7:50 AM, Burkhard Linke >> wrote: >>> >>> Hi, >>> >>>

Re: [ceph-users] Fast Ceph a Cluster with PB storage

2016-08-25 Thread Александр Пивушков
Hello, > Yes, I gathered that. > The question is, what servers between the Windows clients and the final > Ceph storage are you planning to use. Got it! :) I think I will run Samba on the OSD nodes, if possible using this project: https://ctdb.samba.org/ For each OSD I will install Samba and
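
For illustration only, the pieces a clustered Samba (CTDB) setup on the OSD nodes would involve look roughly like this (IPs, paths and share names are assumptions, not a tested configuration):
# /etc/ctdb/nodes -- one private IP per Samba/OSD node
10.0.0.1
10.0.0.2
10.0.0.3
# /etc/samba/smb.conf -- enable clustering and export a path on shared storage
[global]
    clustering = yes
[export]
    path = /mnt/cephfs/export
    read only = no
Note that CTDB also needs its recovery lock file on storage visible to all nodes.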

Re: [ceph-users] CephFS Big Size File Problem

2016-08-25 Thread Yan, Zheng
On Thu, Aug 25, 2016 at 11:12 AM, Lazuardi Nasution wrote: > Hi, > > My setup is default. How can I know if there is a path restriction? If it is > a path restriction, why is the operation with a small file size OK? > run "ceph auth list" and check if there is "path=/XXX" in the MDS'
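
For reference, this is roughly what the difference looks like in "ceph auth list" output (illustrative entries; a restricted client carries a path= clause in its mds cap, an unrestricted one does not):
client.admin
        caps: [mds] allow *
client.restricted
        caps: [mds] allow rw path=/somedir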

Re: [ceph-users] librados Java support for rados_lock_exclusive()

2016-08-25 Thread Dan Jakubiec
Thanks Wido, I will have a look at it late next week. -- Dan > On Aug 25, 2016, at 00:23, Wido den Hollander wrote: > > Hi Dan, > > Not on my list currently. I think it's not that difficult, but I never got > around to maintaining rados-java and keep up with librados. > > You

Re: [ceph-users] CephFS Big Size File Problem

2016-08-25 Thread Yan, Zheng
On Thu, Aug 25, 2016 at 11:12 AM, Lazuardi Nasution wrote: > Hi Gregory, > > Since I have mounted it with /etc/fstab, of course it is the kernel client. What > log do you mean? I cannot find anything related in dmesg. > log in to the node that ran dd. find the process id of dd.
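
The usual next step for a process stuck in D state is to look at its kernel stack, along these lines (pid is illustrative, and reading /proc/<pid>/stack needs root):
$ pgrep -x dd
12345
$ cat /proc/12345/stack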

Re: [ceph-users] ONE pg deep-scrub blocks cluster

2016-08-25 Thread ceph
Hey JC, Thank you very much for your mail! I will provide the information tomorrow when I am at work again. Hope that we will find a solution :) - Mehmet On 24 August 2016 16:58:58 MESZ, LOPEZ Jean-Charles wrote: >Hi Mehmet, > >I'm just seeing your message and read

[ceph-users] Re: Best practices for extending a ceph cluster with minimal client impact data movement

2016-08-25 Thread Steffen Weißgerber
Hi, >>> Wido den Hollander wrote on Tuesday, 9 August 2016 at 10:05: >> On 8 August 2016 at 16:45, Martin Palma wrote: >> >> >> Hi all, >> >> we are in the process of expanding our cluster and I would like to >> know if there are some best

Re: [ceph-users] RGW 10.2.2 SignatureDoesNotMatch with special characters in object name

2016-08-25 Thread jan hugo prins
Could this have something to do with: http://tracker.ceph.com/issues/17076 Jan Hugo Prins On 08/25/2016 10:34 AM, Henrik Korkuc wrote: > Hey, > > I stumbled on the problem that RGW upload results in > SignatureDoesNotMatch error when I try uploading file with '@' or some > other special

[ceph-users] RGW 10.2.2 SignatureDoesNotMatch with special characters in object name

2016-08-25 Thread Henrik Korkuc
Hey, I stumbled on a problem where RGW upload results in a SignatureDoesNotMatch error when I try uploading a file with '@' or some other special characters. Can someone confirm the same issue? I didn't manage to find bug reports about it.
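
A minimal reproduction attempt would look something like this (bucket and file names are illustrative):
$ echo test > 'a@b.txt'
$ s3cmd put 'a@b.txt' s3://aaa/
$ s3cmd ls s3://aaa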

Re: [ceph-users] Best practices for extending a ceph cluster with minimal client impact data movement

2016-08-25 Thread Martin Palma
Hi Wido, only to clarify things: I checked some OSD daemons with the following command: $ sudo ceph daemon osd.42 config show | grep backfills "osd_max_backfills": "1", $ sudo ceph daemon osd.42 config show | grep recovery_threads "osd_recovery_threads": "1", So it seems we already
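
Should you want to change them at runtime anyway, something like the following ought to work (values are illustrative; add them to ceph.conf as well so they survive OSD restarts):
$ ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
$ sudo ceph daemon osd.42 config show | grep -E 'backfills|recovery_max_active'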

Re: [ceph-users] CephFS + cache tiering in Jewel

2016-08-25 Thread Burkhard Linke
Hi, On 08/24/2016 10:22 PM, Gregory Farnum wrote: On Tue, Aug 23, 2016 at 7:50 AM, Burkhard Linke wrote: Hi, the Firefly and Hammer releases did not support transparent usage of cache tiering in CephFS. The cache tier itself had to be
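
For reference, the usual commands to put a cache tier in front of the CephFS data pool look like this (pool names are illustrative; a hit_set is also expected on the cache pool so it can make promotion decisions):
$ ceph osd tier add cephfs_data cephfs_cache
$ ceph osd tier cache-mode cephfs_cache writeback
$ ceph osd tier set-overlay cephfs_data cephfs_cache
$ ceph osd pool set cephfs_cache hit_set_type bloom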