[ceph-users] Accessing krbd client metrics

2017-08-18 Thread Mingliang LIU
Hi all, I have a quick question about the RBD kernel module: what is the best way to collect its metrics or perf numbers? The command 'ceph -w' does print some useful cluster-wide event logs, while I'm interested in per-client/per-image/per-volume read/write bytes, latency, etc. For *librbd*, I
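Since a mapped krbd image appears as an ordinary block device (/dev/rbd0 and so on), per-device I/O counters can be read from the standard block layer; a minimal sketch, assuming an image mapped as rbd0 (device name hypothetical):

    # cumulative read/write sector and request counters for the mapped image
    cat /sys/block/rbd0/stat
    # or sample throughput and await times continuously (sysstat package)
    iostat -x 1 /dev/rbd0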

Re: [ceph-users] Luminous radosgw hangs after a few hours

2017-08-18 Thread Kamble, Nitin A
I see the same issue with ceph v12.1.4 as well. We are not using OpenStack or Keystone, and we see these errors in the rgw log. RGW is not hanging, though. Thanks, Nitin From: ceph-users on behalf of Martin Emrich Date: Monday, July

Re: [ceph-users] Fwd: Can't get fullpartition space

2017-08-18 Thread David Turner
Have you tried zapping your disk to remove any and all partitions? sgdisk -Z /dev/sda3 On Fri, Aug 18, 2017 at 12:48 PM Maiko de Andrade wrote: > Hi, > > I tried using bluestore_block_size but I receive this error (I used values in > bytes, KB, MB, GB, and 1)
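For readers of the archive, a hedged sketch of the zap step being suggested; the partition path comes from the thread, and the whole-device variant is an assumption about the poster's ceph-deploy setup:

    # wipe GPT and MBR structures on the partition in question
    sgdisk -Z /dev/sda3
    # or zap the whole device via ceph-deploy (destroys all data on it)
    ceph-deploy disk zap ceph:/dev/sda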

Re: [ceph-users] Fwd: Can't get fullpartition space

2017-08-18 Thread Maiko de Andrade
Hi, I tried using bluestore_block_size but I receive this error (I used values in bytes, KB, MB, GB, and 1): [ceph][WARNIN] /build/ceph-12.1.4/src/os/bluestore/BlueFS.cc: 172: FAILED assert(bdev[id]->get_size() >= offset + length) FULL LOG: $ ceph-deploy osd activate ceph:/dev/sda3
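The assert suggests BlueFS was asked for more space than the device actually provides, so any bluestore_block_size value has to fit inside the target partition. A sketch of setting it, assuming a plain byte count in ceph.conf (the 50 GB figure is hypothetical):

    [osd]
    # must be <= the size of /dev/sda3, in bytes, or BlueFS asserts as above
    bluestore_block_size = 53687091200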

Re: [ceph-users] RBD only keyring for client

2017-08-18 Thread David Turner
That's exactly what I was missing. Thank you. On Thu, Aug 17, 2017 at 3:15 PM Jason Dillaman wrote: > You should be able to set a CEPH_ARGS='--id rbd' environment variable. > > On Thu, Aug 17, 2017 at 2:25 PM, David Turner > wrote: > > I already
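A minimal sketch of the workaround, assuming a cephx user named client.rbd whose keyring sits in the default location:

    # every ceph/rbd invocation in this shell now authenticates as client.rbd
    export CEPH_ARGS='--id rbd'
    rbd ls mypool    # pool name hypothetical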

Re: [ceph-users] BlueStore WAL or DB devices on a distant SSD ?

2017-08-18 Thread David Turner
Specifying them to be on the same device is redundant: they are placed on the BlueStore device by default unless you direct them to another device. On Fri, Aug 18, 2017 at 9:17 AM Hervé Ballans wrote: > On 16/08/2017 at 16:19, David Turner wrote
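For concreteness, a hedged sketch of placing the DB and WAL elsewhere with luminous-era ceph-disk; device names are hypothetical, and the flags assume the ceph-disk tooling of that release:

    # data on /dev/sdb; DB and WAL explicitly directed to an SSD
    ceph-disk prepare --bluestore /dev/sdb --block.db /dev/sdc --block.wal /dev/sdc
    # omit --block.db/--block.wal and both stay on /dev/sdb by default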

Re: [ceph-users] How to distribute data

2017-08-18 Thread Oscar Segarra
Hi, yes, you are right, the idea is cloning a snapshot taken from the base image... And yes, I'm working with the current RC of Luminous. In this scenario (base image in raw format + snapshot + snapshot clones for end-user Windows 10 VDI), could SSD+HDD tiering help? Thanks a lot. On 18
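The workflow being described, as a minimal sketch with hypothetical pool and image names:

    rbd snap create vdi/win10-base@gold              # snapshot the base image
    rbd snap protect vdi/win10-base@gold             # clones need a protected snapshot
    rbd clone vdi/win10-base@gold vdi/win10-user01   # copy-on-write clone per desktop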

Re: [ceph-users] BlueStore WAL or DB devices on a distant SSD ?

2017-08-18 Thread Hervé Ballans
On 16/08/2017 at 16:19, David Turner wrote: Would reads and writes to an SSD on another server be faster than reads and writes to an HDD on the local server? If the answer is no, then even if this were possible it would be worse than just putting your WAL and DB on the same HDD locally. I

Re: [ceph-users] Modify user metadata in RGW multi-tenant setup

2017-08-18 Thread Sander van Schie
I tried using quotes before, which didn't suffice. Turns out you just need to escape the dollar sign: radosgw-admin metadata get user:\$ On Thu, Aug 17, 2017 at 10:38 PM, Sander van Schie wrote: > Hello, > > I'm trying to modify the metadata of an RGW user in a
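That is, for a tenant-qualified user the shell would otherwise expand everything after the $ as a variable; a sketch with a hypothetical tenant and user:

    # unescaped, the shell swallows "$someuser" before radosgw-admin sees it
    radosgw-admin metadata get user:sometenant\$someuser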

Re: [ceph-users] ceph pgs state forever stale+active+clean

2017-08-18 Thread David Turner
What were the settings for your pool? What was the size? It looks like the size was 2 and that the PGs only existed on OSDs 2 and 6. If that's the case, it's like having a 4-disk RAID 1+0, removing 2 disks of the same mirror, and complaining that the other mirror didn't pick up the data... Don't
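The pool settings and PG placement in question can be checked directly; a minimal sketch, with the pool name and PG ID hypothetical:

    ceph osd pool get rbd size    # replica count for the pool
    ceph pg map 0.1f              # which OSDs a given PG maps to
    ceph pg dump_stuck stale      # list PGs stuck in the stale state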