Re: [ceph-users] Slow requests troubleshooting in Luminous - details missing

2018-03-02 Thread Maged Mokhtar
On 2018-03-02 07:54, Alex Gorbachev wrote: > On Thu, Mar 1, 2018 at 10:57 PM, David Turner wrote: > Blocked requests and slow requests are synonyms in ceph. They are 2 names > for the exact same thing. > > On Thu, Mar 1, 2018, 10:21 PM Alex Gorbachev > wrote: > On Thu, Mar 1, 2018 at 2:47 PM

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-02 Thread Max Cuttins
Hi Federico, Hi Max, On Feb 28, 2018, at 10:06 AM, Max Cuttins wrote: This is true, but having something that just works, in order to have minimum compatibility and start to dismiss old disks, is something you should think about. You'll have ages in order to improve and get better performance

Re: [ceph-users] ceph mgr balancer bad distribution

2018-03-02 Thread Stefan Priebe - Profihost AG
Thanks! Your patch works great! The only problem I still see is that the balancer kicks in even when the old optimization has not finished. It seems it only evaluates the degraded value, but while remapping it can happen that none are degraded while a lot are still misplaced. I think the balancer should check that too; a rough sketch of such a check follows.
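
A rough sketch of the kind of pre-check being suggested, assuming the Luminous JSON field names under pgmap (degraded_objects, misplaced_objects); verify them against your release before relying on them:

  # Show the current recovery state before letting the balancer build a new plan
  ceph status
  # Or pull the raw counters (field names assumed) and only optimize when both are zero
  ceph status --format json | jq '.pgmap | {degraded_objects, misplaced_objects}'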

[ceph-users] Cluster is empty but it still use 1Gb of data

2018-03-02 Thread Max Cuttins
Hi everybody, I deleted everything from the cluster after some tests with RBD. Now I see that there is still something in use:

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 bytes
    usage:   *9510 MB used*, 8038 GB / 8048 GB avail
    pgs:

Is this the overhead of

Re: [ceph-users] Cluster is empty but it still use 1Gb of data

2018-03-02 Thread Janne Johansson
2018-03-02 11:21 GMT+01:00 Max Cuttins : > Hi everybody, > > i deleted everything from the cluster after some test with RBD. > Now I see that there something still in use: > > data: > pools: 0 pools, 0 pgs > objects: 0 objects, 0 bytes > usage: *9510 MB used*, 8038 GB / 8048 GB a

Re: [ceph-users] Cluster is empty but it still use 1Gb of data

2018-03-02 Thread Max Cuttins
I don't care about getting that space back. I just want to know whether it's expected or not, because I ran several rados bench runs with the --no-cleanup flag, and maybe I left something behind. On 02/03/2018 11:35, Janne Johansson wrote: 2018-03-02 11:21 GMT+01:00 Max Cuttins
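
For reference, rados bench with --no-cleanup does leave its benchmark objects behind; something along these lines (the pool name is only an example) removes them again:

  # A write benchmark that deliberately keeps its objects around
  rados bench -p rbd 60 write --no-cleanup
  # Remove the leftover benchmark objects from that pool afterwards
  rados -p rbd cleanup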

Re: [ceph-users] Cluster is empty but it still use 1Gb of data

2018-03-02 Thread Gonzalo Aguilar Delgado
Hi Max, No, that's not normal, 9 GB for an empty cluster. Maybe you reserved some space, or you have another service that's taking up the space, but it seems way too much to me. On 02/03/18 at 12:09, Max Cuttins wrote: I don't care of get back those space. I just want to know if it's expecte

Re: [ceph-users] Cluster is empty but it still use 1Gb of data

2018-03-02 Thread Max Cuttins
How can I analyze this? On 02/03/2018 12:18, Gonzalo Aguilar Delgado wrote: Hi Max, No that's not normal. 9GB for an empty cluster. Maybe you reserved some space or you have other service that's taking the space. But It seems way to much for me. On 02/03/18 at 12:09, Max Cuttins

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-02 Thread Federico Lucifredi
On Fri, Mar 2, 2018 at 4:29 AM, Max Cuttins wrote: > > > Hi Federico, > > Hi Max, >> >> On Feb 28, 2018, at 10:06 AM, Max Cuttins wrote: >>> >>> This is true, but having something that just works in order to have >>> minimum compatibility and start to dismiss old disk is something you should >>>

[ceph-users] Multipart Upload - POST fails

2018-03-02 Thread Ingo Reimann
3ca700 10 v4 credential format = 8DGDGA57XL9YPM8DGEQQ/20180302/us-east-1/s3/aws4_request
2018-03-02 13:59:04.927587 7fe2053ca700 10 access key id = 8DGDGA57XL9YPM8DGEQQ
2018-03-02 13:59:04.927589 7fe2053ca700 10 credential scope = 20180302/us-east-1/s3/aws4_request
2018-03-02 13:59:04.927620 7fe2053ca700 10 cano

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-02 Thread Max Cuttins
On 02/03/2018 13:27, Federico Lucifredi wrote: On Fri, Mar 2, 2018 at 4:29 AM, Max Cuttins wrote: Hi Federico, Hi Max, On Feb 28, 2018, at 10:06 AM, Max Cuttins wrote: This is t

Re: [ceph-users] ceph mgr balancer bad distribution

2018-03-02 Thread Dan van der Ster
On Fri, Mar 2, 2018 at 10:12 AM, Stefan Priebe - Profihost AG wrote: > Thanks! Your patch works great! Cool! I plan to add one more feature to allow operators to switch off components of the score function. Currently, by only changing the key to 'bytes', we aren't able to fully balance things bec
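
For context, a minimal crush-compat balancer session on Luminous looks roughly like this; the crush_compat_metrics config-key is an assumption about how the score components are selected, so check the mgr balancer docs for your release:

  ceph mgr module enable balancer
  ceph balancer mode crush-compat
  # Assumed knob: restrict the score to the bytes component only
  ceph config-key set mgr/balancer/crush_compat_metrics bytes
  ceph balancer eval              # score the current distribution
  ceph balancer optimize myplan   # build a plan
  ceph balancer eval myplan       # score the plan before executing it
  ceph balancer execute myplan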

Re: [ceph-users] Case where a separate Bluestore WAL/DB device crashes...

2018-03-02 Thread Hervé Ballans
Thanks Jonathan, your feedback is really interesting. It makes me more comfortable about adding separate SSDs for the WAL/DB partitions. I have to implement a new Ceph cluster with 6 OSD nodes (each containing 22 SAS 10k OSDs). Following the recommendations on http://docs.ceph.com/docs/master/rados/c
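
As a sketch of what that layout means per OSD (device names are placeholders), ceph-volume can keep the data on the SAS disk while the RocksDB, and implicitly the WAL, goes to a partition or LV on the shared SSD:

  # One DB partition/LV on the SSD per OSD; the WAL lives with the DB
  # unless a separate --block.wal device is specified
  ceph-volume lvm create --bluestore \
      --data /dev/sdb \
      --block.db /dev/nvme0n1p1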

Re: [ceph-users] Cluster is empty but it still use 1Gb of data

2018-03-02 Thread Igor Fedotov
Hi Max, how many OSDs do you have? Are they BlueStore? What's the "ceph df detail" output? On 3/2/2018 1:21 PM, Max Cuttins wrote: Hi everybody, I deleted everything from the cluster after some tests with RBD. Now I see that there is still something in use: data: pools: 0 poo
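
The commands being asked for, for anyone following along (output obviously differs per cluster):

  ceph df detail   # per-pool usage and object counts
  ceph osd df      # per-OSD utilisation, to see where the "used" space sits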

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-02 Thread Daniel K
There have been quite a few VMware/Ceph threads on the mailing list in the past. One setup I've been toying with is a Linux guest running on the VMware host on local storage, with the guest mounting a Ceph RBD with a filesystem on it, then exporting that via NFS to the VMware host as a datastore. Ex
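
A very rough sketch of that gateway-VM approach, with made-up pool, image and path names, and no claim that it beats the iSCSI gateway:

  # On the Linux guest acting as the NFS gateway
  rbd create vmware-pool/datastore1 --size 4T   # hypothetical pool/image
  rbd map vmware-pool/datastore1                # device name may differ from rbd0
  mkfs.xfs /dev/rbd0
  mkdir -p /export/datastore1
  mount /dev/rbd0 /export/datastore1
  # Export it over NFS so the ESXi hosts can mount it as a datastore
  echo '/export/datastore1 *(rw,no_root_squash,sync)' >> /etc/exports
  exportfs -ra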

[ceph-users] Luminous and Calamari

2018-03-02 Thread Budai Laszlo
Dear all, is it possible to use Calamari with Luminous? (I know about the manager dashboard, but that is "read only"; I need a tool for also managing Ceph.) Kind regards, Laszlo

Re: [ceph-users] Cluster is empty but it still use 1Gb of data

2018-03-02 Thread David Turner
[1] Here is a ceph status on a brand new cluster that has never had any pools created or data put into it at all: 323 GB used out of 2.3 PB. That's ~0.01% overhead, but we're using 10 TB disks for this cluster, and the overhead is more so per OSD than per TB. It is 1.1 GB overhead per OSD. 34 of the osd

[ceph-users] Typos in Documentation: Bluestoremigration

2018-03-02 Thread Bjoern Laessig
Hi, I just migrated our little cluster from Jewel to Luminous and found on http://docs.ceph.com/docs/master/rados/operations/bluestore-migration/#mark-out-and-replace some wrong or missing lines: 9) ceph-volume lvm create --bluestore --data $DEVICE --osd-id $ID (the lvm) and there is a missing
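
For anyone hitting the same page, the replace sequence being discussed is roughly the following (adapted from the mark-out-and-replace section; step numbering and details may differ in your version of the docs):

  ID=2               # the OSD being replaced (example)
  DEVICE=/dev/sdc    # its new backing device (example)
  ceph osd out $ID
  while ! ceph osd safe-to-destroy osd.$ID ; do sleep 60 ; done
  systemctl kill ceph-osd@$ID
  umount /var/lib/ceph/osd/ceph-$ID
  ceph-volume lvm zap $DEVICE
  ceph osd destroy $ID --yes-i-really-mean-it
  # The corrected step 9 from the mail: recreate the OSD reusing the old id
  ceph-volume lvm create --bluestore --data $DEVICE --osd-id $ID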

Re: [ceph-users] Typos in Documentation: Bluestoremigration

2018-03-02 Thread Alfredo Deza
On Fri, Mar 2, 2018 at 9:19 AM, Bjoern Laessig wrote: > Hi, > i just migrated our little cluster from jewel to luminous and found > on http://docs.ceph.com/docs/master/rados/operations/bluestore-migration/#mark-out-and-replace > some wrong or missing lines: > > 9) ceph-volume lvm create --bluesto

Re: [ceph-users] Slow requests troubleshooting in Luminous - details missing

2018-03-02 Thread Alex Gorbachev
On Fri, Mar 2, 2018 at 4:17 AM Maged Mokhtar wrote: > On 2018-03-02 07:54, Alex Gorbachev wrote: > > On Thu, Mar 1, 2018 at 10:57 PM, David Turner > wrote: > > Blocked requests and slow requests are synonyms in ceph. They are 2 names > for the exact same thing. > > > On Thu, Mar 1, 2018, 10:21 P
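
When the cluster log only names the OSD involved, the per-request detail can usually still be pulled from that OSD's admin socket; not necessarily what this thread settled on, but for example:

  # Run on the node hosting the OSD in question (osd id is a placeholder)
  ceph daemon osd.12 dump_ops_in_flight   # requests currently stuck in flight
  ceph daemon osd.12 dump_historic_ops    # recent slow ops with per-phase timestamps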

Re: [ceph-users] Luminous and Calamari

2018-03-02 Thread Sébastien VIGNERON
Hi, Did you look at the openATTIC project? Cordialement / Best regards, Sébastien VIGNERON CRIANN, Ingénieur / Engineer Technopôle du Madrillet 745, avenue de l'Université 76800 Saint-Etienne du Rouvray - France tél. +33 2 32 91 42 91 fax. +33 2 32 91 42 92 http://www.criann.fr mailto:seba

Re: [ceph-users] Luminous and Calamari

2018-03-02 Thread Budai Laszlo
Hi, I've seen openATTIC. I would like to know if there are any instructions on how to get it running on Ubuntu 16.04 or CentOS 7? Thank you. Laszlo On 02.03.2018 17:26, Sébastien VIGNERON wrote: Hi, Did you look the OpenAttic project? Cordialement / Best regards, Sébastien VIGNERON

Re: [ceph-users] Cluster is empty but it still use 1Gb of data

2018-03-02 Thread Max Cuttins
Umh... taking a look at your computation, I think the overhead ratio really is about 1.1 GB per OSD, because I have 9 NVMe OSDs alive right now, so about 9.5 GB of overhead. So I guess this is just its normal behaviour. Fine! On 02/03/2018 15:18, David Turner wrote: [1] Here is a cep
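
The arithmetic behind that conclusion, just to spell it out with the numbers from the thread:

  # 9510 MB reported as used, spread over 9 BlueStore OSDs
  echo "9510 / 9" | bc   # about 1056 MB, i.e. roughly 1 GB of overhead per OSD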

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-02 Thread Mike Christie
On 03/02/2018 01:24 AM, Joshua Chen wrote: > Dear all, > I wonder how we could support VM systems with ceph storage (block > device)? My colleagues are waiting for my answer for vmware (vSphere 5). We were having difficulties supporting older versions, because they will drop down to using SCSI-2

Re: [ceph-users] Cluster is empty but it still use 1Gb of data

2018-03-02 Thread Igor Fedotov
Yes, by default BlueStore reports 1 GB per OSD as used by BlueFS. On 3/2/2018 8:10 PM, Max Cuttins wrote: Umh Taking a look to your computation I think the ratio OSD/Overhead it's really about 1.1Gb per OSD. Because I have 9 NVMe OSD alive right now. So about 9.5Gb of overhead. So I guess

Re: [ceph-users] Cluster is empty but it still use 1Gb of data

2018-03-02 Thread Max Cuttins
Perfect. On 02/03/2018 19:18, Igor Fedotov wrote: Yes, by default BlueStore reports 1Gb per OSD as used by BlueFS. On 3/2/2018 8:10 PM, Max Cuttins wrote: Umh Taking a look to your computation I think the ratio OSD/Overhead it's really about 1.1Gb per OSD. Because I have 9 NVMe O

[ceph-users] Jewel Release

2018-03-02 Thread Alex Litvak
Are there plans to release Jewel 10.2.11 before the end of its support period?

Re: [ceph-users] ceph mgr balancer bad distribution

2018-03-02 Thread Stefan Priebe - Profihost AG
Hi, Am 02.03.2018 um 14:29 schrieb Dan van der Ster: > On Fri, Mar 2, 2018 at 10:12 AM, Stefan Priebe - Profihost AG > wrote: >> Thanks! Your patch works great! > > Cool! I plan to add one more feature to allow operators to switch off > components of the score function. Currently, by only changi

Re: [ceph-users] Luminous and Calamari

2018-03-02 Thread Kai Wagner
Hi, given the fact that we don't have Ubuntu or CentOS packages, you could install directly from our sources: http://download.openattic.org/sources/3.x/openattic-3.6.2.tar.bz2 Our docs are hosted at http://docs.openattic.org/en/latest/ Kai On 03/02/2018 04:39 PM, Budai Laszlo wrote: > Hi, > > I
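
Fetching the source release mentioned above is just the following; the actual build and install steps are in the linked docs:

  wget http://download.openattic.org/sources/3.x/openattic-3.6.2.tar.bz2
  tar xjf openattic-3.6.2.tar.bz2
  cd openattic-3.6.2
  # From here, follow http://docs.openattic.org/en/latest/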

[ceph-users] how is iops from ceph -s client io section caculated?

2018-03-02 Thread shadow_lin
Hi list, there is a client io section in the output of ceph -s, and I find its value kind of confusing. I am using fio to test rbd sequential write performance with a 4M block size. The throughput is about 2000 MB/s and fio shows 500 IOPS, but from the ceph -s client io section the throughput is abo
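
For reference, the fio numbers themselves are self-consistent; the puzzle is only why the ceph -s counters differ. A quick check with the values from the mail:

  # Sequential 4 MB writes: IOPS times block size should equal the bandwidth
  echo "500 * 4" | bc   # = 2000 MB/s, matching what fio reports
  # The ceph -s "client io" op rate is counted at the rados level and need not
  # map 1:1 onto fio's I/Os, which is presumably where the discrepancy comes from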