Re: [ceph-users] Any CEPH's iSCSI gateway users?

2019-06-11 Thread Glen Baars
Interesting performance increase! I'm running iSCSI at a few installations and now wonder what version of CentOS is required to improve performance. Did the cluster go from Luminous to Mimic? Glen -Original Message- From: ceph-users On Behalf Of Heðin Ejdesgaard Møller Sent: Saturday, 8

[ceph-users] Learning rig, is it a good idea?

2019-06-11 Thread Inkatadoc
Hi all! I'm thinking about building a learning rig for ceph. This is the parts list: PCPartPicker Part List: https://pcpartpicker.com/list/s4vHXP TL;DR 8-core 3 GHz Ryzen CPU, 64 GB RAM, 5 x 2 TB HDDs, one 240 GB SSD in a tower case. My plan is to build a KVM-based setup, both for ceph and workload
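For anyone putting together a similar KVM-based lab, a minimal sketch of creating one guest for a Ceph node with virt-install; the VM name, sizes, ISO path and OS variant below are placeholders, not from the original post:

    # one guest per Ceph node; the second disk becomes an OSD device inside the VM
    virt-install --name ceph-node1 \
        --memory 8192 --vcpus 2 \
        --disk size=40 \
        --disk size=200 \
        --cdrom /var/lib/libvirt/images/CentOS-7-x86_64.iso \
        --os-variant centos7.0 \
        --network network=default

Repeat with different names and disk counts for the monitor and OSD roles.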

Re: [ceph-users] Large OMAP object in RGW GC pool

2019-06-11 Thread J. Eric Ivancich
Hi Wido, Interleaving below On 6/11/19 3:10 AM, Wido den Hollander wrote: > > I thought it was resolved, but it isn't. > > I counted all the OMAP values for the GC objects and I got back: > > gc.0: 0 > gc.11: 0 > gc.14: 0 > gc.15: 0 > gc.16: 0 > gc.18: 0 > gc.19: 0 > gc.1: 0 > gc.20: 0 >
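For reference, one way to produce per-shard counts like these is to list the OMAP keys of each GC object with rados; this sketch assumes the default default.rgw.log pool, the gc namespace used by recent releases, and the default of 32 GC shards (rgw_gc_max_objs):

    # count OMAP keys per RGW GC shard object
    for i in $(seq 0 31); do
        echo -n "gc.$i: "
        rados -p default.rgw.log --namespace gc listomapkeys gc.$i | wc -l
    done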

Re: [ceph-users] limitations to using iscsi rbd-target-api directly in lieu of gwcli

2019-06-11 Thread Jason Dillaman
On Tue, Jun 11, 2019 at 10:24 AM Wesley Dillingham wrote: > > Thanks Jason for the info! A few questions: > > "The current rbd-target-api doesn't really support single path LUNs." > > In our testing, using single path LUNs, listing the "owner" of a given LUN > and then connecting directly to

Re: [ceph-users] limitations to using iscsi rbd-target-api directly in lieu of gwcli

2019-06-11 Thread Paul Emmerich
On Tue, Jun 11, 2019 at 4:24 PM Wesley Dillingham wrote: > (running 14.2.0 and ceph-iscsi-3.0-57.g4ae) > > and configuring the dash as follows: > > ceph dashboard set-iscsi-api-ssl-verification false > ceph dashboard iscsi-gateway-add http://admin:admin@${MY_HOSTNAME}:5000 > systemctl

Re: [ceph-users] is rgw crypt default encryption key long term supported ?

2019-06-11 Thread Casey Bodley
The server side encryption features all require special x-amz headers on write, so they only apply to our S3 APIs. But objects encrypted with SSE-KMS (or a default encryption key) can be read without any x-amz headers, so swift should be able to decrypt them too. I agree that this is a bug and
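For context, the cluster-wide default key being discussed is the test-only rgw option set roughly as below; the section name is a placeholder and the value must be a base64-encoded 256-bit key, and the documentation warns this knob is not meant for production use:

    # ceph.conf on the radosgw host -- test-only setting
    [client.rgw.gateway1]
    rgw crypt default encryption key = <base64-encoded 256-bit key>
    rgw crypt require ssl = false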

Re: [ceph-users] limitations to using iscsi rbd-target-api directly in lieu of gwcli

2019-06-11 Thread Wesley Dillingham
Thanks Jason for the info! A few questions: "The current rbd-target-api doesn't really support single path LUNs." In our testing, using single path LUNs, listing the "owner" of a given LUN and then connecting directly to that gateway yielded stable and well-performing results, obviously, there
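A rough sketch of that workflow with standard tooling, assuming the owner is read from gwcli; the portal address and target IQN below are placeholders:

    # show the configured disks and which gateway currently owns each LUN
    gwcli ls

    # then discover and log in to that owning gateway directly
    iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260
    iscsiadm -m node -T iqn.2003-01.com.redhat.iscsi-gw:ceph-igw -p 192.0.2.10:3260 --login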

Re: [ceph-users] Error when I compare hashes of export-diff / import-diff

2019-06-11 Thread ceph
On 6/11/19 3:24 PM, Rafael Diaz Maurin wrote: > 3- I create a snapshot inside the source pool > rbd snap create ${POOL-SOURCE}/${KVM-IMAGE}@${TODAY-SNAP} > > 4- I export the snapshot from the source pool and import it into the > destination pool (via a pipe) > rbd export-diff
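For readers following along, steps 3-4 amount to a pipe along these lines; the variable names are adapted from the thread and the --from-snap value (the previous snapshot) is an assumption:

    # 3- snapshot the image on the source pool
    rbd snap create ${POOL_SOURCE}/${KVM_IMAGE}@${TODAY_SNAP}

    # 4- stream the incremental diff straight into the destination pool
    rbd export-diff --from-snap ${YESTERDAY_SNAP} \
        ${POOL_SOURCE}/${KVM_IMAGE}@${TODAY_SNAP} - \
      | rbd import-diff - ${POOL_DESTINATION}/${KVM_IMAGE}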

Re: [ceph-users] limitations to using iscsi rbd-target-api directly in lieu of gwcli

2019-06-11 Thread Jason Dillaman
On Tue, Jun 11, 2019 at 9:29 AM Wesley Dillingham wrote: > > Hello, > > I am hoping to expose a REST API to a remote client group who would like to > do things like: > > > Create, List, and Delete RBDs and map them to gateway (make a LUN) > Create snapshots, list, delete, and rollback >

Re: [ceph-users] Error when I compare hashes of export-diff / import-diff

2019-06-11 Thread Jason Dillaman
On Tue, Jun 11, 2019 at 9:25 AM Rafael Diaz Maurin wrote: > > Hello, > > I have a problem when I want to validate (using md5 hashes) rbd > export/import diff from an rbd source-pool (the production pool) towards > another rbd destination-pool (the backup pool). > > Here is the algorithm: > 1-

[ceph-users] limitations to using iscsi rbd-target-api directly in lieu of gwcli

2019-06-11 Thread Wesley Dillingham
Hello, I am hoping to expose a REST API to a remote client group who would like to do things like: * Create, List, and Delete RBDs and map them to a gateway (make a LUN) * Create snapshots, list, delete, and rollback * Determine Owner / Active gateway of a given LUN I would run 2-4

[ceph-users] Error when I compare hashes of export-diff / import-diff

2019-06-11 Thread Rafael Diaz Maurin
Hello, I have a problem when I want to validate (using md5 hashes) rbd export/import diff from an rbd source-pool (the production pool) towards another rbd destination-pool (the backup pool). Here is the algorithm: 1- First of all, I validate that the two hashes from last snapshots source
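A minimal sketch of the kind of comparison being described, assuming both pools carry the same snapshot name and that hashing the full rbd export stream of each snapshot is acceptable for the image sizes involved:

    # hash the image content at the same snapshot on both pools and compare
    SRC_MD5=$(rbd export ${POOL_SOURCE}/${KVM_IMAGE}@${SNAP} - | md5sum | cut -d' ' -f1)
    DST_MD5=$(rbd export ${POOL_DESTINATION}/${KVM_IMAGE}@${SNAP} - | md5sum | cut -d' ' -f1)
    [ "$SRC_MD5" = "$DST_MD5" ] && echo "hashes match" || echo "hashes differ"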

Re: [ceph-users] Expected IO in luminous Ceph Cluster

2019-06-11 Thread John Petrini
I certainly would, particularly on your SSDs. I'm not familiar with those Toshibas but disabling disk cache has improved performance on my clusters and others on this list. Does the LSI controller you're using provide read/write cache and do you have it enabled? 7.2k spinners are likely to see a
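For anyone wanting to try the same, a hedged example of disabling the volatile on-disk write cache; device names are placeholders, and the controller's own cache is configured separately through the vendor tooling:

    # SATA drive: turn off the drive's write cache
    hdparm -W 0 /dev/sdX

    # SAS drive: clear the Write Cache Enable bit (persistently with --save)
    sdparm --clear WCE --save /dev/sdY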

Re: [ceph-users] ceph monitor keep crash

2019-06-11 Thread Joao Eduardo Luis
On 06/04/2019 07:01 PM, Jianyu Li wrote: > Hello, > > I have a ceph cluster running for over 2 years and the monitor began crashing > yesterday. I had some flapping OSDs up and down occasionally, > sometimes I need to rebuild the OSD. I found 3 OSDs are down yesterday, > they may cause this issue

Re: [ceph-users] Remove rbd image after interrupt of deletion command

2019-06-11 Thread Sakirnth Nagarasa
On 6/11/19 10:42 AM, Igor Podlesny wrote: > On Tue, 11 Jun 2019 at 14:46, Sakirnth Nagarasa > wrote: >> On 6/7/19 3:35 PM, Jason Dillaman wrote: > [...] >>> Can you run "rbd rm --log-to-stderr=true --debug-rbd=20 >>> ${POOLNAME}/${IMAGE}" and provide the logs via pastebin.com? >>> Cheers,

Re: [ceph-users] Remove rbd image after interrupt of deletion command

2019-06-11 Thread Igor Podlesny
On Tue, 11 Jun 2019 at 14:46, Sakirnth Nagarasa wrote: > On 6/7/19 3:35 PM, Jason Dillaman wrote: [...] > > Can you run "rbd rm --log-to-stderr=true --debug-rbd=20 > > ${POOLNAME}/${IMAGE}" and provide the logs via pastebin.com? > > > >> Cheers, > >> Sakirnth > > It is not necessary anymore the

Re: [ceph-users] Remove rbd image after interrupt of deletion command

2019-06-11 Thread Sakirnth Nagarasa
On 6/7/19 3:35 PM, Jason Dillaman wrote: > On Fri, Jun 7, 2019 at 7:22 AM Sakirnth Nagarasa > wrote: >> >> On 6/6/19 5:09 PM, Jason Dillaman wrote: >>> On Thu, Jun 6, 2019 at 10:13 AM Sakirnth Nagarasa >>> wrote: On 6/6/19 3:46 PM, Jason Dillaman wrote: > Can you run "rbd trash ls
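For reference, a small sketch of the trash workflow being discussed; the pool name and image ID are placeholders taken from the rbd trash ls output:

    # list images pending deletion in the pool's trash, with their IDs
    rbd trash ls --pool ${POOLNAME}

    # remove one trashed image by the ID shown above
    rbd trash rm --pool ${POOLNAME} ${IMAGE_ID}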

Re: [ceph-users] Expected IO in luminous Ceph Cluster

2019-06-11 Thread Stolte, Felix
Hi John, I have 9 HDDs and 3 SSDs behind a SAS3008 PCI-Express Fusion-MPT SAS-3 from LSI. HDDs are HGST HUH721008AL (8 TB, 7200 rpm), SSDs are Toshiba PX05SMB040 (400 GB). OSDs are bluestore format, 3 HDDs have their WAL and DB on one SSD (DB size 50 GB, WAL 10 GB). I did not change any cache
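For context, an OSD laid out this way (data on the HDD, DB and WAL on an SSD) is typically created along these lines; the device paths are placeholders, and the 50 GB/10 GB split from the mail would be carved out of the SSD beforehand as partitions or LVs:

    # bluestore OSD: data on the HDD, RocksDB and WAL on SSD partitions/LVs
    ceph-volume lvm create --bluestore \
        --data /dev/sdb \
        --block.db /dev/sdk1 \
        --block.wal /dev/sdk2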

Re: [ceph-users] slow requests are blocked > 32 sec. Implicated osds 0, 2, 3, 4, 5 (REQUEST_SLOW)

2019-06-11 Thread BASSAGET Cédric
Hello Robert, I did not make any changes, so I'm still using the prio queue. Regards On Mon, 10 Jun 2019 at 17:44, Robert LeBlanc wrote: > I'm glad it's working, to be clear did you use wpq, or is it still the > prio queue? > > Sent from a mobile device, please excuse any typos. > > On Mon,
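For anyone who does want to test wpq, the switch is a two-line ceph.conf change on the OSD side (a sketch; the OSDs need a restart for it to take effect):

    [osd]
    osd op queue = wpq
    osd op queue cut off = high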

Re: [ceph-users] Large OMAP object in RGW GC pool

2019-06-11 Thread Wido den Hollander
On 6/4/19 8:00 PM, J. Eric Ivancich wrote: > On 6/4/19 7:37 AM, Wido den Hollander wrote: >> I've set up a temporary machine next to the 13.2.5 cluster with the >> 13.2.6 packages from Shaman. >> >> On that machine I'm running: >> >> $ radosgw-admin gc process >> >> That seems to work as