Re: [ceph-users] amount of PGs/pools/OSDs for your openstack / Ceph

2018-04-09 Thread Subhachandra Chandra
Our use case is not Openstack but we have a cluster of a similar size to what you are looking at. Our cluster has 540 OSDs with 4PB of raw storage spread across 9 nodes at this point. Two pools: one with 512 PGs and 3-way replication, and one with 32768 PGs and RS(6,3) erasure coding (99.9% of the data is in this pool). The
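
For reference, a minimal sketch of how two pools with those PG counts might be created on Luminous; the pool and profile names are placeholders, not taken from the thread:

    # erasure-coded data pool, RS(6,3), 32768 PGs
    ceph osd erasure-code-profile set rs63 k=6 m=3 crush-failure-domain=host
    ceph osd pool create data-ec 32768 32768 erasure rs63
    # replicated pool, 512 PGs, 3 copies
    ceph osd pool create data-rep 512 512 replicated
    ceph osd pool set data-rep size 3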

Re: [ceph-users] User deletes bucket with partial multipart uploads in, objects still in quota

2018-04-09 Thread David Turner
I believe there is a command in radosgw-admin to change the owner of a bucket which might be able to resolve the incorrect quota issue. I don't know if that will work since the bucket doesn't think it exists. Perhaps creating a new bucket of the same name and trying to run commands against that
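
Hedged examples of the radosgw-admin calls this likely refers to (bucket name and uid are placeholders); whether they still work once the bucket no longer thinks it exists is exactly the open question:

    # re-link the bucket to a (new) owner
    radosgw-admin bucket link --bucket=mybucket --uid=newowner
    # check the bucket index and fix stale multipart/object accounting
    radosgw-admin bucket check --bucket=mybucket --check-objects --fix
    # recalculate the user's usage/quota statistics
    radosgw-admin user stats --uid=newowner --sync-stats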

Re: [ceph-users] Question to avoid service stop when osd is full

2018-04-09 Thread David Turner
The proper way to prevent this is to set your full ratios to safe values and monitor your disk usage. That will allow you to either clean up old data or add new storage before you get to 95% full on any OSDs. What I mean by setting your full ratios to safe values is that if your use case can fill 20% of your disk
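
As an illustration, the Luminous commands for adjusting those ratios and for watching usage (the values shown are the defaults, not a recommendation):

    ceph osd set-nearfull-ratio 0.85      # HEALTH_WARN threshold
    ceph osd set-backfillfull-ratio 0.90  # backfill is refused above this
    ceph osd set-full-ratio 0.95          # writes are blocked above this
    ceph osd df                           # per-OSD utilisation to monitor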

Re: [ceph-users] Ceph scrub logs: _scan_snaps no head for $object?

2018-04-09 Thread Marc Roos
I have this on an rbd pool with images/snapshots that were created in Luminous > Hi Stefan, Mehmet, > > Are these clusters that were upgraded from prior versions, or fresh > luminous installs? > > > This message indicates that there is a stray clone object with no > associated head or
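
One hedged way to inspect such an object once the _scan_snaps log line gives you its name (the pool and object name below are placeholders):

    # list the head and clone/snapshot entries RADOS has for the object
    rados -p rbd listsnaps rbd_data.1234abcd.0000000000000000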

Re: [ceph-users] Ceph scrub logs: _scan_snaps no head for $object?

2018-04-09 Thread Marc Roos
I have found one image; how do I know which snapshot version to delete? I have multiple -Original Message- From: c...@elchaka.de [mailto:c...@elchaka.de] Sent: Sunday 8 April 2018 13:30 To: ceph-users Subject: Re: [ceph-users] Ceph scrub logs: _scan_snaps no head for $object? Am
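
To map a snap id from the logs to an rbd snapshot, something like the following is commonly used (pool, image and snapshot names are placeholders):

    rbd snap ls rbd/myimage        # the SNAPID column corresponds to the rados snap id
    rbd snap rm rbd/myimage@snap1  # remove a specific snapshot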

Re: [ceph-users] Ceph luminous - troubleshooting performance issues overall DSK 100%, busy 1%

2018-04-09 Thread Steven Vacaroaia
The disk controller seems fine. Any other suggestions would be really appreciated. megacli -AdpBbuCmd -aAll BBU status for Adapter: 0 BatteryType: BBU Voltage: 3925 mV Current: 0 mA Temperature: 17 C Battery State: Optimal BBU Firmware Status: Charging Status: None Voltage
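
When the BBU itself looks healthy, another controller-side item often checked in these threads is the logical drive cache policy (WriteBack vs WriteThrough); a hedged example:

    megacli -LDInfo -Lall -aAll            # shows the current cache policy per logical drive
    megacli -LDGetProp -Cache -LAll -aAll  # cache policy summary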

Re: [ceph-users] Admin socket on a pure client: is it possible?

2018-04-09 Thread Wido den Hollander
On 04/09/2018 04:01 PM, Fulvio Galeazzi wrote: > Hallo, > >   I am wondering whether I could have the admin socket functionality > enabled on a server which is a pure Ceph client (no MDS/MON/OSD/whatever > running on such server). Is this at all possible? How should ceph.conf > be configured?

[ceph-users] Admin socket on a pure client: is it possible?

2018-04-09 Thread Fulvio Galeazzi
Hallo, I am wondering whether I could have the admin socket functionality enabled on a server which is a pure Ceph client (no MDS/MON/OSD/whatever running on such server). Is this at all possible? How should ceph.conf be configured? Documentation pages led me to write something like this:
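
For what it is worth, a typical ceph.conf fragment for a client-side admin socket looks roughly like this (the path pattern is an assumption, and the directory must be writable by the client process):

    [client]
        admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok

    # then query the socket, e.g.:
    ceph --admin-daemon /var/run/ceph/ceph-client.admin.<pid>.<cctid>.asok perf dump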

[ceph-users] Scrubbing for RocksDB

2018-04-09 Thread Eugen Block
Hi list, we were wondering if and how the consistency of OSD journals (BlueStore) is checked. Our cluster runs on Luminous (12.2.2) and we migrated all our filestore OSDs to BlueStore a couple of months ago. During that process we placed each RocksDB on a separate partition on a
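
For reference, BlueStore ships an offline consistency check in ceph-bluestore-tool; a hedged example with the default OSD path (the OSD has to be stopped first):

    systemctl stop ceph-osd@0
    ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-0         # metadata consistency check
    ceph-bluestore-tool fsck --deep --path /var/lib/ceph/osd/ceph-0  # also reads object data and verifies checksums
    systemctl start ceph-osd@0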

[ceph-users] Ceph Dashboard v2 update

2018-04-09 Thread Lenz Grimmer
Hi all, a month has passed since the Dashboard v2 was merged into the master branch, so I thought it might be helpful to write a summary/update (with screenshots) of what we've been up to since then: https://www.openattic.org/posts/ceph-dashboard-v2-update/ Let us know what you think!

Re: [ceph-users] Issue with fstrim and Nova hw_disk_discard=unmap

2018-04-09 Thread Fulvio Galeazzi
Hallo Jason, thanks again for your time and apologies for the long silence, but I was busy upgrading to Luminous and converting Filestore->Bluestore. In the meantime, the staging cluster where I was running tests was upgraded both to Ceph-Luminous and to OpenStack-Pike: the good news is
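
For anyone following along, the pieces usually involved in getting fstrim/discard working for RBD-backed Nova instances are roughly the following (option and property names as documented for Pike; treat this as a sketch):

    # nova.conf on the compute nodes
    [libvirt]
    hw_disk_discard = unmap

    # the image needs virtio-scsi so the guest gets a discard-capable disk
    openstack image set --property hw_scsi_model=virtio-scsi \
                        --property hw_disk_bus=scsi <image-id>

    # inside the guest
    fstrim -v /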

[ceph-users] Question to avoid service stop when osd is full

2018-04-09 Thread 渥美 慶彦
Hi, I have 2 questions. I want to use ceph for OpenStack's volume backend by creating 2 ceph pools. One pool consists of OSDs on SSD, and the other consists of OSDs on HDD. The storage capacity of the SSD pool is much smaller than that of the HDD pool, so I want a configuration that does not stop all IO
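
One hedged option for this kind of setup is a per-pool quota, so that the small SSD pool stops accepting writes at its quota instead of driving its OSDs to the cluster full ratio; the pool name and size below are placeholders:

    ceph osd pool set-quota ssd-volumes max_bytes 10995116277760  # cap the SSD pool at ~10 TiB
    ceph df detail                                                # shows per-pool quota and usage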

Re: [ceph-users] Fwd: Separate --block.wal --block.db bluestore not working as expected.

2018-04-09 Thread Hervé Ballans
Hi, Just a little question regarding this operation: [root@osdhost osd]# ceph-volume lvm prepare --bluestore --data /dev/sdc --block.wal /dev/sda2 --block.db /dev/sda1 From a previous post, I understood that if both the wal and db are stored on the same separate device, then we could use a
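
If I understand the earlier post correctly, when the WAL and DB would end up on the same separate device it is enough to pass only --block.db, since the WAL is co-located with the DB by default; so, as a sketch with the same devices:

    ceph-volume lvm prepare --bluestore --data /dev/sdc --block.db /dev/sda1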

[ceph-users] Move ceph admin node to new other server

2018-04-09 Thread Nghia Than
Hello, We use 1 server as the deploy host (called ceph-admin-node) for 3 MON and 4 OSD nodes. We have created a folder called *ceph-deploy* to deploy all node members. May we move this folder to another server? This folder contains all of the following files: total 1408 -rw--- 1 root root 113 Oct
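
As a rough sketch, moving a ceph-deploy working directory is mostly a matter of copying it (it holds ceph.conf and the generated keyrings, so permissions matter) and making sure the new host has ceph-deploy installed and SSH access to the cluster nodes; the paths and hostname below are placeholders:

    rsync -av --chmod=go-rwx ~/ceph-deploy/ newadmin:~/ceph-deploy/
    ssh newadmin 'cd ~/ceph-deploy && ceph-deploy --version'  # sanity check on the new admin host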