Re: [ceph-users] inconsistent number of pools

2019-05-28 Thread Lars Täuber
Yes, thanks. This helped. Regards, Lars Tue, 28 May 2019 11:50:01 -0700 Gregory Farnum ==> Lars Täuber : > You’re the second report I’ve seen of this, and while it’s confusing, you > should be able to resolve it by restarting your active manager daemon. > > On Sun, May 26, 2019 at 11:52 PM

Re: [ceph-users] SSD Sizing for DB/WAL: 4% for large drives?

2019-05-28 Thread Konstantin Shalygin
Dear All, Quick question regarding SSD sizing for a DB/WAL... I understand 4% is generally recommended for a DB/WAL. Does this 4% continue for "large" 12TB drives, or can we economise and use a smaller DB/WAL? Ideally I'd fit a smaller drive providing a 266GB DB/WAL per 12TB OSD, rather than

Re: [ceph-users] SSD Sizing for DB/WAL: 4% for large drives?

2019-05-28 Thread Frank Yu
Hi Jake, I have the same question about the size of DB/WAL per OSD. My situation: 12 OSDs per OSD node, 8 TB (maybe 12TB later) per OSD, Intel NVMe SSD (Optane P4800X) 375G per OSD node, which means the DB/WAL can use about 30GB per OSD (8TB). I mainly use CephFS to serve the HPC cluster for ML. (plan to

Re: [ceph-users] Cephfs free space vs ceph df free space disparity

2019-05-28 Thread Robert Ruge
Thanks for everyone's suggestions which have now helped me to fix the space free problem. The newbie mistake was not knowing anything about rebalancing. Turning on the balancer and using upmap I have gone from 7TB free to 50TB free on my cephfs. Seeing that the object store is saying 180TB free

[ceph-users] Balancer: uneven OSDs

2019-05-28 Thread Tarek Zegar
I enabled the balancer plugin and even tried to manually invoke it but it won't allow any changes. Looking at ceph osd df, it's not even at all. Thoughts? root@hostadmin:~# ceph osd df ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS 1 hdd 0.00980 0 0 B 0 B
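A minimal sketch of turning the balancer on in upmap mode and re-checking the distribution, assuming a Luminous-or-newer cluster where all clients can be required to speak luminous:

    ceph balancer status                                # is the module on, and in which mode?
    ceph osd set-require-min-compat-client luminous     # upmap needs luminous+ clients
    ceph balancer mode upmap
    ceph balancer on
    ceph osd df                                         # re-check the %USE spread afterwards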

[ceph-users] Meaning of Ceph MDS / Rank in "Stopped" state.

2019-05-28 Thread Wesley Dillingham
I am working to develop some monitoring for our file clusters and as part of the check I inspect `ceph mds stat` for damaged, failed, or stopped MDSes/ranks. Initially I set my check to alarm if any of these states was discovered, but as I distributed it out I noticed that one of our clusters had the
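A hedged sketch of how such a check might read those states from the FSMap; the JSON field names and jq paths below are assumptions and may differ between releases:

    ceph mds stat                 # short human-readable summary
    ceph fs dump -f json | jq '.filesystems[].mdsmap | {damaged, failed, stopped}'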

Re: [ceph-users] inconsistent number of pools

2019-05-28 Thread Gregory Farnum
You’re the second report I’ve seen of this, and while it’s confusing, you should be able to resolve it by restarting your active manager daemon. On Sun, May 26, 2019 at 11:52 PM Lars Täuber wrote: > Fri, 24 May 2019 21:41:33 +0200 > Michel Raabe ==> Lars Täuber , > ceph-users@lists.ceph.com :
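A minimal sketch of bouncing the active manager as suggested above; the daemon and host names are placeholders:

    ceph -s | grep mgr                       # the active mgr shows up as mgr: <name>(active)
    ceph mgr fail <active-mgr-name>          # fail over to a standby ...
    systemctl restart ceph-mgr@<hostname>    # ... or restart the daemon on its host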

Re: [ceph-users] SSD Sizing for DB/WAL: 4% for large drives?

2019-05-28 Thread Benjeman Meekhof
I suggest having a look at this thread, which suggests that sizes 'in between' the requirements of different RocksDB levels have no net effect, and sizing accordingly. http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-October/030740.html My impression is that 28GB is good (L0+L1+L3), or 280

Re: [ceph-users] SSD Sizing for DB/WAL: 4% for large drives?

2019-05-28 Thread Igor Fedotov
Hi Jake, just my 2 cents - I'd suggest using LVM for DB/WAL so you can seamlessly extend their sizes if needed. Once you've configured things this way, and if you're able to add more NVMe later, you're almost free to select any size at the initial stage. Thanks, Igor On 5/28/2019 4:13 PM, Jake
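A rough sketch of provisioning the DB on an LVM volume so it can be grown later, assuming a hypothetical vg-nvme volume group and /dev/sdb data disk:

    lvcreate -L 64G -n db-osd0 vg-nvme       # start small, leave free extents on the NVMe
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db vg-nvme/db-osd0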

Re: [ceph-users] SSD Sizing for DB/WAL: 4% for large drives?

2019-05-28 Thread Martin Verges
Hello Jake, you can use 2.2% as well and performance will most of the time be better than without having a DB/WAL. However, if the DB/WAL fills up, a spillover to the regular drive occurs and performance will drop as if you didn't have a DB/WAL drive. I believe that you could use
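One hedged way to watch for that spillover is via the BlueFS perf counters on the admin socket; osd.0 is a placeholder and the counter names may vary by release:

    # non-zero slow_used_bytes means BlueFS has spilled onto the main (slow) device
    ceph daemon osd.0 perf dump | jq '.bluefs | {db_total_bytes, db_used_bytes, slow_used_bytes}'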

Re: [ceph-users] SSD Sizing for DB/WAL: 4% for large drives?

2019-05-28 Thread Jake Grimmett
Hi Martin, thanks for your reply :) We already have a separate NVMe SSD pool for cephfs metadata. I agree it's much simpler & more robust not using a separate DB/WAL, but as we have enough money for a 1.6TB SSD for every 6 HDDs, it's tempting to go down that route. If people think a 2.2%

Re: [ceph-users] is rgw crypt default encryption key long term supported ?

2019-05-28 Thread Casey Bodley
On 5/28/19 11:17 AM, Scheurer François wrote: Hi Casey I greatly appreciate your quick and helpful answer :-) It's unlikely that we'll do that, but if we do it would be announced with a long deprecation period and migration strategy. Fine, just the answer we wanted to hear ;-)

Re: [ceph-users] is rgw crypt default encryption key long term supported ?

2019-05-28 Thread Scheurer François
Hi Casey I greatly appreciate your quick and helpful answer :-) >It's unlikely that we'll do that, but if we do it would be announced with a >long deprecation period and migration strategy. Fine, just the answer we wanted to hear ;-) >However, I would still caution against using either as

Re: [ceph-users] SSD Sizing for DB/WAL: 4% for large drives?

2019-05-28 Thread Martin Verges
Hello Jake, do you have any latency requirements such that you require the DB/WAL at all? If not, CephFS with EC on SATA HDD works quite well as long as you have the metadata on a separate SSD pool. -- Martin Verges Managing director Mobile: +49 174 9335695 E-Mail: martin.ver...@croit.io Chat:
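A rough sketch of that layout - SSD-backed metadata plus an EC data pool - with pool names, k/m values and the crush rule name chosen only for illustration:

    ceph osd crush rule create-replicated ssd-rule default host ssd
    ceph osd pool set cephfs_metadata crush_rule ssd-rule
    ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
    ceph osd pool create cephfs_data_ec 512 512 erasure ec42
    ceph osd pool set cephfs_data_ec allow_ec_overwrites true   # required for CephFS on EC
    ceph fs add_data_pool cephfs cephfs_data_ec
    setfattr -n ceph.dir.layout.pool -v cephfs_data_ec /mnt/cephfs/bulk   # steer a directory to the EC pool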

Re: [ceph-users] is rgw crypt default encryption key long term supported ?

2019-05-28 Thread Casey Bodley
Hi François, Removing support for either of rgw_crypt_default_encryption_key or rgw_crypt_s3_kms_encryption_keys would mean that objects encrypted with those keys would no longer be accessible. It's unlikely that we'll do that, but if we do it would be announced with a long deprecation

[ceph-users] SSD Sizing for DB/WAL: 4% for large drives?

2019-05-28 Thread Jake Grimmett
Dear All, Quick question regarding SSD sizing for a DB/WAL... I understand 4% is generally recommended for a DB/WAL. Does this 4% continue for "large" 12TB drives, or can we economise and use a smaller DB/WAL? Ideally I'd fit a smaller drive providing a 266GB DB/WAL per 12TB OSD, rather than
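For reference, the arithmetic behind the question, per 12 TB OSD:

    echo "4%:   $(( 12000 * 4 / 100 )) GB"     # ~480 GB per OSD
    echo "2.2%: $(( 12000 * 22 / 1000 )) GB"   # ~264 GB, roughly the 266 GB mentioned above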

Re: [ceph-users] Cephfs free space vs ceph df free space disparity

2019-05-28 Thread Peter Wienemann
On 27.05.19 09:08, Stefan Kooman wrote: > Quoting Robert Ruge (robert.r...@deakin.edu.au): >> Ceph newbie question. >> >> I have a disparity between the free space that my cephfs file system >> is showing and what ceph df is showing. As you can see below my >> cephfs file system says there is

Re: [ceph-users] Luminous OSD: replace block.db partition

2019-05-28 Thread Konstantin Shalygin
On 5/28/19 5:16 PM, Igor Fedotov wrote: LVM volume and raw file resizing is quite simple, while a partition-based one might need manual data movement to another target via dd or something. This is also possible and tested; a how-to is here https://bit.ly/2UFVO9Z k

Re: [ceph-users] Quotas with Mimic (CephFS-FUSE) clients in a Luminous Cluster

2019-05-28 Thread Oliver Freyermuth
On 28.05.19 at 03:24, Yan, Zheng wrote: On Mon, May 27, 2019 at 6:54 PM Oliver Freyermuth wrote: On 27.05.19 at 12:48, Oliver Freyermuth wrote: On 27.05.19 at 11:57, Dan van der Ster wrote: On Mon, May 27, 2019 at 11:54 AM Oliver Freyermuth wrote: Dear Dan, thanks for the quick
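For context, quotas themselves are set and read as directory xattrs; the path and value below are placeholders, and enforcement happens on the client side:

    setfattr -n ceph.quota.max_bytes -v 107374182400 /mnt/cephfs/somedir   # 100 GiB
    getfattr -n ceph.quota.max_bytes /mnt/cephfs/somedir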

[ceph-users] Problem with adding new OSDs on new storage nodes

2019-05-28 Thread Luk
Hi, We have six storage nodes, and added three new SSD-only storage nodes. I started increasing the weights to fill the freshly added OSDs on the new storage nodes; the command was: ceph osd crush reweight osd.126 0.2 The cluster started to rebalance: 2019-05-22 11:00:00.000253 mon.ceph-mon-01 mon.0
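A hedged sketch of stepping the crush weight up in stages and letting the cluster settle in between; the OSD id follows the thread, while the target weights and sleep interval are arbitrary:

    for w in 0.4 0.6 0.8 1.0; do
        ceph osd crush reweight osd.126 $w
        while ! ceph health | grep -q HEALTH_OK; do sleep 60; done   # wait out backfill before the next step
    done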

[ceph-users] is rgw crypt default encryption key long term supported ?

2019-05-28 Thread Scheurer François
Dear Casey, Dear Ceph Users The following is written in the radosgw documentation (http://docs.ceph.com/docs/luminous/radosgw/encryption/): rgw crypt default encryption key = 4YSmvJtBv0aZ7geVgAsdpRnLBEwWSWlMIGnRS8a9TSA= Important: This mode is for diagnostic purposes only! The ceph

Re: [ceph-users] BlueStore bitmap allocator under Luminous and Mimic

2019-05-28 Thread Marc Roos
I switched on the first of May, and did not notice too much difference in memory usage. After the restart of the OSDs on the node I see the memory consumption gradually going back to where it was before. Can't say anything about latency. -Original Message- From: Konstantin Shalygin Sent:

Re: [ceph-users] Luminous OSD: replace block.db partition

2019-05-28 Thread Igor Fedotov
Konstantin, one should resize the device before using the bluefs-bdev-expand command. So the first question should be: what's the backend for block.db - a simple device partition, an LVM volume, or a raw file? LVM volume and raw file resizing is quite simple, while a partition one might need manual data
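For the LVM case, a minimal sketch of that sequence (OSD id, VG/LV names and size are placeholders; bluefs-bdev-expand needs a recent enough release, 12.2.11+ per this thread):

    systemctl stop ceph-osd@3
    lvextend -L +80G /dev/vg-nvme/db-osd3                                   # grow the underlying device first
    ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-3
    systemctl start ceph-osd@3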

Re: [ceph-users] Object Gateway - Server Side Encryption

2019-05-28 Thread Scheurer François
Hi Casey Thank you for your help. We fixed the problem on the same day but then I forgot to post back the solution here... So basically we had 2 problems: - the barbican secret key payload needs to be exactly 32 bytes - the ceph.conf needs a user ID (username not OK): rgw keystone barbican user =
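For reference, a hedged sketch of the relevant ceph.conf section (option names as in the radosgw Barbican docs; the rgw instance name and all values are placeholders, and the user must be given as a Keystone ID, not a name, per the above):

    [client.rgw.gateway1]
    rgw barbican url = http://barbican.example.com:9311
    rgw keystone barbican user = 1234567890abcdef1234567890abcdef
    rgw keystone barbican password = secret
    rgw keystone barbican project = service
    rgw keystone barbican domain = default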

Re: [ceph-users] BlueStore bitmap allocator under Luminous and Mimic

2019-05-28 Thread Konstantin Shalygin
Hi, With the release of 12.2.12 the bitmap allocator for BlueStore is now available under Mimic and Luminous. [osd] bluestore_allocator = bitmap bluefs_allocator = bitmap Before setting this in production: What might the
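A small sketch of applying and verifying the change (osd.0 is a placeholder); the allocator only switches after an OSD restart:

    systemctl restart ceph-osd@0
    ceph daemon osd.0 config get bluestore_allocator   # should report "bitmap"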

Re: [ceph-users] Luminous OSD: replace block.db partition

2019-05-28 Thread Konstantin Shalygin
Hello - I have created an OSD with 20G block.db, now I wanted to change the block.db to 100G size. Please let us know if there is a process for the same. PS: Ceph version 12.2.4 with bluestore backend. You should upgrade to 12.2.11+ first! Expand your block.db via `ceph-bluestore-tool

[ceph-users] RGW multisite sync issue

2019-05-28 Thread Matteo Dacrema
Hi All, I’ve configured a multisite deployment on Ceph Nautilus 14.2.1 with one zone group “eu”, one master zone and two secondary zones. If I upload (on the master zone) 200 objects of 80MB each and then delete all of them without waiting for the replication to finish, I end up with one
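A couple of hedged commands for checking whether the secondary zones have caught up before deleting (the bucket name is a placeholder):

    radosgw-admin sync status                              # run on a secondary zone
    radosgw-admin bucket sync status --bucket=mybucket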

Re: [ceph-users] QEMU/KVM client compatibility

2019-05-28 Thread Kevin Olbrich
On Tue., 28 May 2019 at 10:20, Wido den Hollander wrote: > > > On 5/28/19 10:04 AM, Kevin Olbrich wrote: > > Hi Wido, > > > > thanks for your reply! > > > > For CentOS 7, this means I can switch over to the "rpm-nautilus/el7" > > repository and Qemu uses a nautilus compatible client? > > I

Re: [ceph-users] QEMU/KVM client compatibility

2019-05-28 Thread Wido den Hollander
On 5/28/19 10:04 AM, Kevin Olbrich wrote: > Hi Wido, > > thanks for your reply! > > For CentOS 7, this means I can switch over to the "rpm-nautilus/el7" > repository and Qemu uses a nautilus compatible client? > I just want to make sure, I understand correctly. > Yes, that is correct. Keep

Re: [ceph-users] QEMU/KVM client compatibility

2019-05-28 Thread Kevin Olbrich
Hi Wido, thanks for your reply! For CentOS 7, this means I can switch over to the "rpm-nautilus/el7" repository and Qemu uses a nautilus compatible client? I just want to make sure I understand correctly. Thank you very much! Kevin On Tue., 28 May 2019 at 09:46, Wido den Hollander wrote

[ceph-users] Any CEPH's iSCSI gateway users?

2019-05-28 Thread Igor Podlesny
What is your experience? Does it make sense to use it -- is it solid enough, or rather beta quality (both in terms of stability and performance)? I've read it was more or less packaged to work with RHEL. Does that still hold true? What's the best way to install it on, say, CentOS or Debian/Ubuntu?

Re: [ceph-users] QEMU/KVM client compatibility

2019-05-28 Thread Wido den Hollander
On 5/28/19 7:52 AM, Kevin Olbrich wrote: > Hi! > > How can I determine which client compatibility level (luminous, mimic, > nautilus, etc.) is supported in Qemu/KVM? > Does it depend on the version of ceph packages on the system? Or do I > need a recent version Qemu/KVM? This is mainly
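A hedged way to answer both halves of the question on an RPM-based hypervisor (binary and package names are assumptions for that platform):

    rpm -q librbd1                                        # ceph client package installed on the host
    ldd $(command -v qemu-system-x86_64) | grep librbd    # which librbd qemu actually loads
    ceph features                                         # release level of currently connected clients (run against a mon)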