Hi all,
I have a quick question about the RBD kernel module: what is the best way to
collect metrics or perf numbers? The command 'ceph -w' does print some useful
cluster-wide event logs, but I'm interested in per-client/per-image/per-volume
read/write bytes, latency, etc.
For the *librbd*, I
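(For librbd clients, a minimal sketch of one known route: enable a client admin
socket and dump its perf counters. The socket path and client name below are
illustrative:

# ceph.conf, [client] section
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.asok

$ ceph --admin-daemon /var/run/ceph/ceph-client.admin.12345.asok perf dump

This covers librbd only; the kernel RBD client does not go through librbd, so
it exposes no such socket.)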
I see the same issue with ceph v12.1.4 as well. We are not using OpenStack or
Keystone, yet we see these errors in the RGW log. RGW is not hanging, though.
Thanks,
Nitin
From: ceph-users on behalf of Martin Emrich
Date: Monday, July
Have you tried zapping your disks to remove any and all partitions?
sgdisk -Z /dev/sda3
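(If ceph-deploy is being used, as in the log further down, the equivalent zap
would be something like the following; the host name 'ceph' is taken from that
log and the whole-disk target is an assumption:

$ ceph-deploy disk zap ceph:/dev/sda

ceph-deploy runs sgdisk on the remote host to wipe the partition table.)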
On Fri, Aug 18, 2017 at 12:48 PM Maiko de Andrade wrote:
> Hi,
>
> I tried using bluestore_block_size but I get this error (I used values in
> bytes, KB, MB, GB, and 1)
Hi,
I tried using bluestore_block_size but I get this error (I used values in
bytes, KB, MB, GB, and 1):
[ceph][WARNIN] /build/ceph-12.1.4/src/os/bluestore/BlueFS.cc: 172: FAILED
assert(bdev[id]->get_size() >= offset + length)
Full log:
$ ceph-deploy osd activate ceph:/dev/sda3
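(For reference, a sketch of how this option would be set; the value here is my
assumption:

# ceph.conf, [osd] section
bluestore block size = 10737418240   # 10 GiB, expressed in bytes

The assert quoted above fires when BlueFS addresses past the end of its block
device, so the configured size has to be consistent with what /dev/sda3 can
actually hold.)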
That's exactly what I was missing. Thank you.
On Thu, Aug 17, 2017 at 3:15 PM Jason Dillaman wrote:
> You should be able to set a CEPH_ARGS='--id rbd' environment variable.
>
> On Thu, Aug 17, 2017 at 2:25 PM, David Turner wrote:
> > I already
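(A minimal illustration of the CEPH_ARGS suggestion above; the 'rbd ls' call
is my example, the id 'rbd' is from the thread:

$ export CEPH_ARGS='--id rbd'
$ rbd ls    # now runs as client.rbd without repeating --id

The Ceph command-line tools read CEPH_ARGS from the environment.)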
Specifying them to be the same device is redundant and unnecessary. They
will be put on the bluestore device by default unless you specify that they
go to another device.
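(Conversely, a sketch of explicitly placing the DB and WAL on another device,
using the ceph-disk syntax of this release; device names are placeholders:

$ ceph-disk prepare --bluestore /dev/sdb --block.db /dev/sdc --block.wal /dev/sdc

Leaving out --block.db and --block.wal keeps both on /dev/sdb, which is the
default behaviour described above.)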
On Fri, Aug 18, 2017 at 9:17 AM Hervé Ballans wrote:
> On 16/08/2017 at 16:19, David Turner wrote:
Hi,
Yes, you are right, the idea is cloning a snapshot taken from the base
image...
And yes, I'm working with the current RC of Luminous.
In this scenario: base image (raw format) + snapshot + snapshot clones
(for end-user Windows 10 VDI). Could SSD+HDD tiering help?
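(As a minimal sketch of the chain being described; pool and image names are
placeholders:

$ rbd snap create vms/win10-base@gold
$ rbd snap protect vms/win10-base@gold
$ rbd clone vms/win10-base@gold vms/vdi-user1

Each clone shares the base image's data until it writes, so many VDI clones
read the same base objects.)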
Thanks a lot
On 18
On 16/08/2017 at 16:19, David Turner wrote:
Would reads and writes to the SSD on another server be faster than
reads and writes to the HDD on the local server? If the answer is no, then
even if this were possible it would be worse than just putting your WAL
and DB on the same HDD locally. I
I tried using quotes before, which didn't suffice. Turns out you just
need to escape the dollar sign:
radosgw-admin metadata get user:\$
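(For illustration, with a hypothetical user id containing a dollar sign, e.g.
a tenant-scoped 'tenant$user':

$ radosgw-admin metadata get user:tenant\$user

Without the backslash, the shell expands $user as an (empty) variable before
radosgw-admin ever sees it.)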
On Thu, Aug 17, 2017 at 10:38 PM, Sander van Schie wrote:
> Hello,
>
> I'm trying to modify the metadata of an RGW user in a
What were the settings for your pool? What was the size? It looks like the
size was 2 and that the PGs only existed on OSDs 2 and 6. If that's the
case, it's like having a 4-disk RAID 1+0, removing 2 disks of the same
mirror, and complaining that the other mirror didn't pick up the data...
Don't
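(A sketch of how to check; the pool name 'rbd' and the PG id are placeholders:

$ ceph osd pool get rbd size   # replica count of the pool
$ ceph pg map 1.0              # which OSDs hold a given PG

With size 2, a PG whose two OSDs are both gone has no surviving copy.)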