Re: [ceph-users] Error bluestore doesn't support lvm

2018-07-20 Thread Satish Patel
After googling and digging I found this bug; why is it not pushed to all branches? https://github.com/ceph/ceph-ansible/commit/d3b427e16990f9ebcde7575aae367fd7dfe36a8d#diff-34d2eea5f7de9a9e89c1e66b15b4cd0a On Fri, Jul 20, 2018 at 11:26 PM, Satish Patel wrote: > My Ceph version is > >

Re: [ceph-users] Error bluestore doesn't support lvm

2018-07-20 Thread Satish Patel
My Ceph version is [root@ceph-osd-02 ~]# ceph -v ceph version 12.2.7 (3ec878d1e53e1aeb47a9f619c49d9e7c0aa384d5) luminous (stable) On Fri, Jul 20, 2018 at 11:24 PM, Satish Patel wrote: > I am using openstack-ansible with ceph-ansible to deploy my Ceph > cluster and here is my config in yml file >

[ceph-users] Error bluestore doesn't support lvm

2018-07-20 Thread Satish Patel
I am using openstack-ansible with ceph-ansible to deploy my Ceph cluster and here is my config in the yml file: --- osd_objectstore: bluestore osd_scenario: lvm lvm_volumes: - data: /dev/sdb - data: /dev/sdc - data: /dev/sdd - data: /dev/sde This is the error I am getting: TASK [ceph-osd :
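For readability, here is the same lvm scenario config from the message laid out as YAML (device paths are the poster's own; a minimal sketch of the usual ceph-ansible group_vars form, not a validated playbook; whether raw devices are accepted for lvm_volumes depends on the ceph-ansible version, which is what the fix linked in the follow-up addresses):

    osd_objectstore: bluestore
    osd_scenario: lvm
    lvm_volumes:
      - data: /dev/sdb
      - data: /dev/sdc
      - data: /dev/sdd
      - data: /dev/sde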

[ceph-users] 12.2.7 - Available space decreasing when adding disks

2018-07-20 Thread Glen Baars
Hello Ceph Users, We added more SSD storage to our Ceph cluster last night. We added 4 x 1TB drives and the available space went from 1.6TB to 0.6TB (in `ceph df` for the SSD pool). I would assume that the weight needs to be changed, but I didn't think I would need to? Should I change
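If the concern is CRUSH weights, the new OSDs can be checked and adjusted with something like the following (the OSD id and weight value are placeholders; weights are normally derived automatically from drive size):

    ceph osd df tree                       # per-OSD CRUSH weight, size and utilisation
    ceph osd tree                          # confirm the new OSDs landed under the right host/root
    ceph osd crush reweight osd.42 0.909   # example: set the CRUSH weight of one OSD (TiB units)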

Re: [ceph-users] JBOD question

2018-07-20 Thread Oliver Freyermuth
Hi Satish, that really completely depends on your controller. For what it's worth: we have AVAGO MegaRAID controllers (9361 series). They can be switched to a "JBOD personality". After doing so and reinitializing (powercycling), the cards change PCI-ID and run a different firmware, optimized

Re: [ceph-users] JBOD question

2018-07-20 Thread Satish Patel
Thanks Brian, that makes sense because I was reading the documentation and found you can either choose RAID or JBOD On Fri, Jul 20, 2018 at 5:33 PM, Brian : wrote: > Hi Satish > > You should be able to choose different modes of operation for each > port / disk. Most Dell servers will let you do RAID and

Re: [ceph-users] JBOD question

2018-07-20 Thread Brian :
Hi Satish You should be able to choose different modes of operation for each port / disk. Most Dell servers will let you do RAID and JBOD in parallel. If you can't do that and can only turn RAID on or off as a whole, then you can use SW RAID for your OS. On Fri, Jul 20, 2018 at 9:01 PM, Satish Patel
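If the controller really only offers all-RAID or all-JBOD, a software mirror for the OS disks is the usual fallback; a minimal sketch with mdadm (device names are assumptions):

    # mirror two OS partitions with Linux software RAID 1
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mkfs.xfs /dev/md0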

[ceph-users] mon fail to start for disk issue

2018-07-20 Thread Satish Patel
I am getting this error; why is it complaining about the disk even though we have enough space? 2018-07-20 16:04:58.313331 7f0c047f8ec0 0 set uid:gid to 167:167 (ceph:ceph) 2018-07-20 16:04:58.313350 7f0c047f8ec0 0 ceph version 12.2.7 (3ec878d1e53e1aeb47a9f619c49d9e7c0aa384d5) luminous (stable), process
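When a mon refuses to start and complains about the disk, it is usually the free-space check on its data directory rather than an actual I/O error; a quick way to confirm (default paths assumed):

    df -h /var/lib/ceph/mon        # the mon aborts if free space drops below mon_data_avail_crit (default 5%)
    du -sh /var/lib/ceph/mon/*     # check whether store.db has grown unexpectedly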

[ceph-users] JBOD question

2018-07-20 Thread Satish Patel
Folks, I have never used JBOD mode before and now I am planning to, so I have a stupid question: if I switch the RAID controller to JBOD mode, how will my OS disk get mirrored? Do I need to use software RAID for the OS disk when I use JBOD mode?

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-07-20 Thread Vasu Kulkarni
On Fri, Jul 20, 2018 at 7:29 AM, Thode Jocelyn wrote: > Hi, > > > > I noticed that in commit > https://github.com/ceph/ceph-deploy/commit/b1c27b85d524f2553af2487a98023b60efe421f3, > the ability to specify a cluster name was removed. Is there a reason for > this removal? > > > > Because right

[ceph-users] [Ceph-deploy] Cluster Name

2018-07-20 Thread Thode Jocelyn
Hi, I noticed that in commit https://github.com/ceph/ceph-deploy/commit/b1c27b85d524f2553af2487a98023b60efe421f3, the ability to specify a cluster name was removed. Is there a reason for this removal? Because right now, there is no possibility to create a Ceph cluster with a different name
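A commonly used workaround since the removal is to deploy the cluster under the default name and give clients a differently named config file; a sketch (the cluster name 'backup' is an example):

    # /etc/ceph/backup.conf + /etc/ceph/backup.client.admin.keyring on the client, then:
    ceph --cluster backup -s
    rbd --cluster backup ls
    # or, for tools that honour it:
    export CEPH_ARGS='--cluster backup'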

Re: [ceph-users] Default erasure code profile and sustaining loss of one host containing 4 OSDs

2018-07-20 Thread Ziggy Maes
Hello Caspar That makes a great deal of sense, thank you for elaborating. Am I correct to assume that if we were to use a k=2, m=2 profile, it would be identical to a replicated pool (since there would be an equal amount of data and parity chunks)? Furthermore, how should the proper erasure

Re: [ceph-users] [RBD]Replace block device cluster

2018-07-20 Thread Nino Bosteels
In response to my own questions: I read that with bluestore you shouldn't separate your journal / RocksDB from the disks where your data resides. And the general rule of one core per OSD seems to be unnecessary, since in the current clusters we've got 4 cores with 5 disks and CPU usage never

Re: [ceph-users] Default erasure code profile and sustaining loss of one host containing 4 OSDs

2018-07-20 Thread Caspar Smit
Ziggy, For EC pools: min_size = k+1. So in your case (m=1) -> min_size is 3, which is the same as the number of shards. So if ANY shard goes down, I/O is frozen. If you choose m=2, min_size will still be 3 but you now have 4 shards (k+m = 4), so you can lose a shard and still retain availability.
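To check this on an existing pool (the pool name is a placeholder):

    ceph osd erasure-code-profile get default   # the default profile is k=2 m=1
    ceph osd pool get ecpool min_size           # for EC pools this defaults to k+1
    # with k=2 m=2 there are 4 shards and min_size stays 3, so one failure domain can be lost without blocking I/O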

Re: [ceph-users] Default erasure code profile and sustaining loss of one host containing 4 OSDs

2018-07-20 Thread Ziggy Maes
Caspar, Thank you for your reply. I'm in all honesty still not clear on what value to use for min_size. From what I understand, it should be set to the sum of k+m for erasure coded pools, as it is set by default. Additionally, could you elaborate why m=2 would be able to sustain a node

Re: [ceph-users] Pool size (capacity)

2018-07-20 Thread Sébastien VIGNERON
Correct, sorry, I have just read the first question and answered too quickly. As far as I know, the available space is "shared" (the space is a combination of OSD drives and the crushmap) between pools using the same device class, but you can define a quota for each pool if needed. ceph osd pool
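Setting a per-pool quota looks like this (pool name and limits are examples):

    ceph osd pool set-quota mypool max_bytes $((100 * 1024**3))   # cap the pool at 100 GiB
    ceph osd pool set-quota mypool max_objects 1000000            # or cap the object count
    ceph osd pool get-quota mypool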

Re: [ceph-users] Default erasure code profile and sustaining loss of one host containing 4 OSDs

2018-07-20 Thread Caspar Smit
Ziggy, The default min_size for your pool is 3, so losing ANY single OSD (not even a host) will result in reduced data availability: https://patchwork.kernel.org/patch/8546771/ Use m=2 to be able to handle a node failure. Kind regards, Caspar Smit Systemengineer SuperNAS

[ceph-users] Default erasure code profile and sustaining loss of one host containing 4 OSDs

2018-07-20 Thread Ziggy Maes
Hello I am currently trying to find out if Ceph can sustain the loss of a full host (containing 4 OSDs) in a default erasure coded pool (k=2, m=1). We currently have a production EC pool with the default erasure profile, but would like to make sure the data on this pool remains accessible
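For reference, a profile built to survive the loss of a whole host generally looks like the sketch below (profile name, pool name and PG count are examples; an existing pool's EC profile cannot be changed in place, so data would have to be copied to a new pool):

    ceph osd erasure-code-profile set ec-k2m2 k=2 m=2 crush-failure-domain=host
    ceph osd pool create ecpool-k2m2 128 128 erasure ec-k2m2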

Re: [ceph-users] design question - NVME + NLSAS, SSD or SSD + NLSAS

2018-07-20 Thread Satish Patel
I'm no expert by any means, and let's see what other folks suggest, but I would say go with Intel if you only care about performance. Sent from my iPhone > On Jul 19, 2018, at 12:54 PM, Steven Vacaroaia wrote: > > Hi, > I would appreciate any advice (with arguments, if possible) regarding the >

Re: [ceph-users] Converting to BlueStore, and external journal devices

2018-07-20 Thread Marc Roos
I had a similar question a while ago; maybe you want to read these: https://www.mail-archive.com/ceph-users@lists.ceph.com/msg46768.html https://www.mail-archive.com/ceph-users@lists.ceph.com/msg46799.html -Original Message- From: Satish Patel [mailto:satish@gmail.com] Sent:

Re: [ceph-users] Converting to BlueStore, and external journal devices

2018-07-20 Thread Satish Patel
What is the use of LVM in bluestore? I have seen people using LVM but don't know why. Sent from my iPhone > On Jul 19, 2018, at 10:00 AM, Eugen Block wrote: > > Hi, > > if you have SSDs for RocksDB, you should provide that in the command > (--block.db $DEV), otherwise Ceph will use the one
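ceph-volume manages bluestore OSDs through LVM because it stores the OSD metadata in LVM tags, so devices can be activated without the GPT-label/udev tricks ceph-disk relied on. A typical invocation with a separate RocksDB device (device paths are assumptions):

    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1
    ceph-volume lvm list    # shows which LVs back which OSDs and their tags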

Re: [ceph-users] Pool size (capacity)

2018-07-20 Thread Marc Roos
That is the USED column, is it not? [@c01 ~]# ceph df GLOBAL: SIZE AVAIL RAW USED %RAW USED G G G 60.78 POOLS: NAME ID USED %USED MAX AVAIL OBJECTS iscsi-images 16

Re: [ceph-users] Pool size (capacity)

2018-07-20 Thread sinan
Hi Sebastien, Your command(s) return the replication size and not the size in terms of bytes. I want to see the size of a pool in terms of bytes. The MAX AVAIL in "ceph df" is: [empty space of the OSD disk with the least empty space] multiplied by [number of OSDs]. That is not what I am looking
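For per-pool usage expressed in bytes and objects rather than the replication size, the usual commands are:

    ceph df detail    # per-pool USED, %USED, MAX AVAIL and object counts
    rados df          # per-pool usage as seen by RADOS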

Re: [ceph-users] Be careful with orphans find (was Re: Lost TB for Object storage)

2018-07-20 Thread CUZA Frédéric
Hi Matthew, Thanks for the advice, but we are no longer using orphans find since the problem does not seem to be solved by it. Regards, -----Original Message----- From: Matthew Vernon Sent: 20 July 2018 11:03 To: CUZA Frédéric; ceph-users@lists.ceph.com Subject: Be careful with orphans

[ceph-users] Be careful with orphans find (was Re: Lost TB for Object storage)

2018-07-20 Thread Matthew Vernon
Hi, On 19/07/18 17:19, CUZA Frédéric wrote: > After that we tried to remove the orphans: > > radosgw-admin orphans find --pool=default.rgw.buckets.data > --job-id=ophans_clean > > radosgw-admin orphans finish --job-id=ophans_clean > > It finds some orphans: 85, but the finish command seems

Re: [ceph-users] Pool size (capacity)

2018-07-20 Thread Eugen Block
Hi, ceph osd pool get your_pool_name size ceph osd pool ls detail These are commands to get the size of a pool regarding the replication, not the available storage. So the capacity in 'ceph df' is returning the space left on the pool and not the 'capacity size'. I'm not aware of a

Re: [ceph-users] 12.2.6 CRC errors

2018-07-20 Thread Stefan Schneebeli
In the meantime I upgraded the cluster to 12.2.7 and added the osd distrust data digest = true setting in ceph.conf because it's a mixed cluster. But I still see a constantly growing number of inconsistent PGs and scrub errors. If I check the running ceph config with ceph --admin-daemon
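To confirm what a running OSD actually has for those options, the admin socket can be queried like this (socket path and OSD id are examples):

    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep data_digest
    ceph daemon osd.0 config get osd_distrust_data_digest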

Re: [ceph-users] Pool size (capacity)

2018-07-20 Thread Sébastien VIGNERON
# for a specific pool: ceph osd pool get your_pool_name size > On 20 Jul 2018, at 10:32, Sébastien VIGNERON wrote: > > #for all pools: > ceph osd pool ls detail > > >> On 20 Jul 2018, at 09:02, si...@turka.nl wrote: >> >> Hi, >> >> How can I see the size of a pool? When I create

Re: [ceph-users] Pool size (capacity)

2018-07-20 Thread Sébastien VIGNERON
#for all pools: ceph osd pool ls detail > On 20 Jul 2018, at 09:02, si...@turka.nl wrote: > > Hi, > > How can I see the size of a pool? When I create a new empty pool I can see > the capacity of the pool using 'ceph df', but as I start putting data in > the pool the capacity is decreasing.

Re: [ceph-users] 12.2.6 upgrade

2018-07-20 Thread Glen Baars
Thanks, we are fully bluestore and therefore just set osd skip data digest = true Kind regards, Glen Baars -Original Message- From: Dan van der Ster Sent: Friday, 20 July 2018 4:08 PM To: Glen Baars Cc: ceph-users Subject: Re: [ceph-users] 12.2.6 upgrade That's right. But please
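A sketch of applying that option, per the 12.2.7 release notes (whether injectargs takes effect at runtime for this particular option may vary, so persisting it in ceph.conf and restarting the OSDs is the safe route):

    # ceph.conf, [osd] section:
    #   osd skip data digest = true
    ceph tell osd.* injectargs '--osd_skip_data_digest=true'
    ceph daemon osd.0 config get osd_skip_data_digest    # verify on a running OSD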

Re: [ceph-users] 12.2.6 upgrade

2018-07-20 Thread Dan van der Ster
That's right. But please read the notes carefully to understand if you need to set osd skip data digest = true or osd distrust data digest = true .. dan On Fri, Jul 20, 2018 at 10:02 AM Glen Baars wrote: > > I saw that on the release notes. > > Does that mean that the

Re: [ceph-users] 12.2.6 upgrade

2018-07-20 Thread Glen Baars
I saw that on the release notes. Does that mean that the active+clean+inconsistent PGs will be OK? Is the data still getting replicated even if inconsistent? Kind regards, Glen Baars -Original Message- From: Dan van der Ster Sent: Friday, 20 July 2018 3:57 PM To: Glen Baars Cc:

Re: [ceph-users] 12.2.6 upgrade

2018-07-20 Thread Dan van der Ster
CRC errors are expected in 12.2.7 if you ran 12.2.6 with bluestore. See https://ceph.com/releases/12-2-7-luminous-released/#upgrading-from-v12-2-6 On Fri, Jul 20, 2018 at 8:30 AM Glen Baars wrote: > > Hello Ceph Users, > > > > We have upgraded all nodes to 12.2.7 now. We have 90PGs ( ~2000 scrub

Re: [ceph-users] active+clean+inconsistent PGs after upgrade to 12.2.7

2018-07-20 Thread Dan van der Ster
On Thu, Jul 19, 2018 at 11:51 AM Robert Sander wrote: > > On 19.07.2018 11:15, Ronny Aasen wrote: > > > Did you upgrade from 12.2.5 or 12.2.6 ? > > Yes. > > > sounds like you hit the reason for the 12.2.7 release > > > > read : https://ceph.com/releases/12-2-7-luminous-released/ > > > > there

[ceph-users] Pool size (capacity)

2018-07-20 Thread sinan
Hi, How can I see the size of a pool? When I create a new empty pool I can see the capacity of the pool using 'ceph df', but as I start putting data in the pool the capacity is decreasing. So the capacity in 'ceph df' is returning the space left on the pool and not the 'capacity size'. Thanks!

[ceph-users] PGs go to down state when OSD fails

2018-07-20 Thread shrey chauhan
Hi, I am trying to understand what happens when an OSD fails. A few days back I wanted to check what happens when an OSD goes down; to do that, I went to the node and stopped one of the OSD services. When the OSD went into the down state, PGs started recovering, and after some time everything
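A few commands that help explain why a PG ends up down rather than just degraded during such a test (OSD id and PG id are placeholders):

    ceph osd set noout            # optional: avoid rebalancing during a short test
    systemctl stop ceph-osd@3     # take one OSD down
    ceph health detail            # or: ceph pg dump_stuck
    ceph pg 2.1f query            # the peering/down info lists which OSDs the PG is waiting for
    ceph osd unset noout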

[ceph-users] 12.2.6 upgrade

2018-07-20 Thread Glen Baars
Hello Ceph Users, We have upgraded all nodes to 12.2.7 now. We have 90 PGs (~2000 scrub errors) to fix from the time when we ran 12.2.6. It doesn't seem to be affecting production at this time. Below is the log of a PG repair. What is the best way to correct these errors? Is there any
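The usual workflow for these scrub errors looks like the following (the PG id is a placeholder; check the 12.2.7 release notes before bulk-repairing):

    ceph health detail | grep inconsistent                    # list the inconsistent PGs
    rados list-inconsistent-obj 2.1f --format=json-pretty     # inspect what scrub found
    ceph pg repair 2.1f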