Re: [ceph-users] how to set osd_crush_initial_weight 0 without restart any service

2019-10-01 Thread Satish Patel
Paul, I have tried your idea but it didn't work: I did set nobalance, but it still rebalanced and filled lots of data onto my new OSD. I believe your option doesn't work with the ceph-ansible playbook. On Tue, Oct 1, 2019 at 2:45 PM Satish Patel wrote: > > You are saying set "c
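For reference, the flag-based approach discussed in this thread looks roughly like the following sketch (as noted above, it did not stop backfill in this particular ceph-ansible deployment):

    ceph osd set norebalance
    ceph osd set nobackfill
    # ... add the new OSDs ...
    ceph osd unset nobackfill
    ceph osd unset norebalance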

Re: [ceph-users] how to set osd_crush_initial_weight 0 without restart any service

2019-10-01 Thread Satish Patel

[ceph-users] how to set osd_crush_initial_weight 0 without restart any service

2019-10-01 Thread Satish Patel
Folks, Method 1: In my lab I am playing with Ceph and trying to understand how to add a new OSD without starting rebalancing. I want to add this option on the fly so I don't need to restart any services. $ ceph tell mon.* injectargs '--osd_crush_initial_weight 0' $ ceph daemon
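A minimal sketch of the two variants being tried here. As far as I can tell, osd_crush_initial_weight is consumed on the OSD side when a new OSD first registers itself in the CRUSH map, so the ceph.conf route has to be in place on the OSD node before the new OSD is activated:

    # runtime attempt against the monitors (the form quoted above)
    ceph tell mon.* injectargs '--osd_crush_initial_weight 0'

    # persistent form, in ceph.conf on the OSD nodes, [osd] section
    osd_crush_initial_weight = 0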

Re: [ceph-users] what is Implicated osds

2018-08-20 Thread Satish Patel
clearing up. On Mon, Aug 20, 2018 at 7:11 PM, Brad Hubbard wrote: > On Tue, Aug 21, 2018 at 2:37 AM, Satish Patel wrote: >> Folks, >> >> Today i found ceph -s is really slow and just hanging for minute or 2 >> minute to give me output also same with "ceph osd tree"

[ceph-users] what is Implicated osds

2018-08-20 Thread Satish Patel
Folks, today I found that ceph -s is really slow, hanging for a minute or two before giving output; the same goes for "ceph osd tree", which also hangs a long time before returning. This is the output I am seeing: one OSD is down, I am not sure why it is down and what the relation is with
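A minimal diagnostic sketch for a case like this (assuming admin socket access on the OSD hosts; <id> stands for the implicated OSD's number):

    ceph health detail                       # lists slow requests and the implicated OSDs
    ceph osd tree                            # confirm which OSD is down and where it sits
    ceph daemon osd.<id> dump_ops_in_flight  # run on the host of the implicated OSD
    ceph daemon osd.<id> dump_historic_ops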

Re: [ceph-users] A few questions about using SSD for bluestore journal

2018-08-16 Thread Satish Patel
Eugen you beat me!!! On Thu, Aug 16, 2018 at 6:37 PM, Satish Patel wrote: > I am new too but i had same question and this is my opinion (i would > wait for other people to correct me or add more) > > A1. I didn't find any formula but i believe 10 to 20G is more than > enou

Re: [ceph-users] A few questions about using SSD for bluestore journal

2018-08-16 Thread Satish Patel
I am new too, but I had the same question and this is my opinion (I would wait for other people to correct me or add more). A1. I didn't find any formula, but I believe 10 to 20G is more than enough for each OSD (there is some variation, e.g. how long you are going to hold data). A2. The basic rule is 5

Re: [ceph-users] Ceph-mon MTU question

2018-08-16 Thread Satish Patel
sized packets, which will be dropped by the switch > > I do not know if there is osd -> mon traffic > Yet such configuration will clearly be bug-prone > > > On 08/16/2018 05:53 PM, Satish Patel wrote: >> Folks, >> >> I am changing all my OSD node MTU to 9000 and

Re: [ceph-users] Ceph-mon MTU question

2018-08-16 Thread Satish Patel
s where at 9000, everything was fine again. > Cluster ran fine with 9000 on the OSD's + Clients and 1500 on MON's (but wht > would you?) > > > - Original Message - > From: "Satish Patel" > To: "ceph-users" > Sent: Thursday, August 16, 2018 5:53:07 PM >

[ceph-users] Ceph-mon MTU question

2018-08-16 Thread Satish Patel
Folks, I am changing all my OSD nodes' MTU to 9000 and just wondering: do the ceph-mon nodes need MTU 9000 too? I know they are not going to deal with high-volume data, but I am curious whether it impacts functionality if ceph-mon runs on MTU 1500 while all OSD data nodes run on MTU 9000 (FYI: they all are on

Re: [ceph-users] pg count question

2018-08-10 Thread Satish Patel
4 per OSD. For the smaller > pool 16 seems too low. You can go with 32 and 256 if you want lower number > of PGs in the vms pool and expand later. The calculator recommends 32 and > 512 for your settings. > > Subhachandra > > > > > On Fri, Aug 10, 2018 at 8:43 AM, Satis

Re: [ceph-users] pg count question

2018-08-10 Thread Satish Patel
at 9:23 AM, Satish Patel wrote: > Re-sending it, because i found my i lost membership so wanted to make > sure, my email went through > > On Fri, Aug 10, 2018 at 7:07 AM, Satish Patel wrote: >> Thanks, >> >> Can you explain about %Data field in that calculation, is

Re: [ceph-users] pg count question

2018-08-10 Thread Satish Patel
Re-sending it because I found I had lost my list membership, so I wanted to make sure my email went through. On Fri, Aug 10, 2018 at 7:07 AM, Satish Patel wrote: > Thanks, > > Can you explain about %Data field in that calculation, is this total data > usage for specific pool or total ? >

Re: [ceph-users] pg count question

2018-08-10 Thread Satish Patel
from my iPhone > On Aug 9, 2018, at 4:25 PM, Subhachandra Chandra > wrote: > > I have used the calculator at https://ceph.com/pgcalc/ which looks at > relative sizes of pools and makes a suggestion. > > Subhachandra > >> On Thu, Aug 9, 2018 at 1:11 PM,

Re: [ceph-users] pg count question

2018-08-09 Thread Satish Patel
; On Wed, Aug 8, 2018 at 12:40 AM, Sébastien VIGNERON > wrote: >> >> The formula seems correct for a 100 pg/OSD target. >> >> >> > Le 8 août 2018 à 04:21, Satish Patel a écrit : >> > >> > Thanks! >> > >> > Do you have any commen

Re: [ceph-users] pg count question

2018-08-07 Thread Satish Patel
gt;> Le 7 août 2018 à 16:50, Satish Patel a écrit : >> >> Folks, >> >> I am little confused so just need clarification, I have 14 osd in my >> cluster and i want to create two pool (pool-1 & pool-2) how do i >> device pg between two pool with replica

[ceph-users] pg count question

2018-08-07 Thread Satish Patel
Folks, I am a little confused so I just need clarification. I have 14 OSDs in my cluster and I want to create two pools (pool-1 & pool-2); how do I divide PGs between the two pools with replication 3? Question 1: Is this the correct formula? 14 * 100 / 3 / 2 = 233 (rounded up to a power of 2 would be 256). So should I give
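Worked out step by step for this example (target of ~100 PGs per OSD, replica 3, split evenly across the two pools):

    14 OSDs * 100 = 1400 PG-replica slots
    1400 / 3 replicas = ~466 PGs total for the cluster
    466 / 2 pools = ~233 per pool, rounded up to the next power of two = 256 per pool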

Re: [ceph-users] ceph lvm question

2018-07-30 Thread Satish Patel
ng quotes. Always quote template expression brackets when they\nstart a value. For instance:\n\nwith_items:\n - {{ foo }}\n\nShould be written as:\n\nwith_items:\n - \"{{ foo }}\"\n"} On Mon, Jul 30, 2018 at 1:11 PM, Alfredo Deza wrote: > On Sat, Jul 28, 2018 a
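The fix the error message is pointing at is just YAML quoting of the Jinja2 expression; a sketch (foo is a placeholder, not the actual variable from the playbook):

    # breaks: YAML treats the leading brace as the start of a dict
    with_items:
      - {{ foo }}

    # works: quote the whole template expression
    with_items:
      - "{{ foo }}"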

[ceph-users] pg calculation question

2018-07-29 Thread Satish Patel
Folks, I am building new Ceph storage and I currently have 8 OSDs total (in the future I am going to add more). Based on the official documentation I should do the following to calculate the total PG count: 8 * 100 / 3 = 266 (rounded up to the next power of 2 is 512). Now I have 2 pools at present in my ceph cluster (images & vms), so
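If the two pools are expected to hold roughly equal amounts of data, one way to apportion that total would be something like the sketch below (256 + 256 = 512; adjust the split if one pool will carry most of the data):

    ceph osd pool create images 256 256 replicated
    ceph osd pool create vms 256 256 replicated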

[ceph-users] ceph lvm question

2018-07-27 Thread Satish Patel
I have a simple question: I want to use LVM with BlueStore (it's the recommended method). If I have only a single SSD disk for an OSD, I want to keep the journal + data on the same disk, so how should I create the LVM to accommodate that? Do I need to do the following: pvcreate /dev/sdb; vgcreate vg0 /dev/sdb. Now I
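One possible way to lay that out for a single collocated SSD, as a sketch (vg0/data-sdb are arbitrary names; with BlueStore the DB/WAL simply stay on the same LV unless pointed elsewhere):

    pvcreate /dev/sdb
    vgcreate vg0 /dev/sdb
    lvcreate -l 100%FREE -n data-sdb vg0
    ceph-volume lvm create --bluestore --data vg0/data-sdb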

Re: [ceph-users] Why lvm is recommended method for bluestore

2018-07-24 Thread Satish Patel
t 2:33 PM, Satish Patel wrote: >> Alfredo, >> >> Thanks, I think i should go with LVM then :) >> >> I have question here, I have 4 physical SSD per server, some reason i >> am using ceph-ansible 3.0.8 version which doesn't create LVM volume >> itself so i h

[ceph-users] ceph cluster monitoring tool

2018-07-23 Thread Satish Patel
My 5-node Ceph cluster is ready for production; now I am looking for a good monitoring tool (open source). What is the majority of folks using in their production?

Re: [ceph-users] Reclaim free space on RBD images that use Bluestore?????

2018-07-23 Thread Satish Patel
Forgive me, I found this post, which solved my issue: https://www.sebastien-han.fr/blog/2015/02/02/openstack-and-ceph-rbd-discard/ On Mon, Jul 23, 2018 at 11:22 PM, Satish Patel wrote: > I have same issue, i just build new Ceph cluster for my Openstack VMs > workload using rbd and i have c
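For anyone hitting the same thing, the gist of that post, sketched from memory (an OpenStack/libvirt setup is assumed; check the post itself for the authoritative steps):

    # nova.conf on the compute nodes, [libvirt] section
    hw_disk_discard = unmap

    # Glance image properties so the guest gets a virtio-scsi disk, which supports discard
    openstack image set --property hw_scsi_model=virtio-scsi --property hw_disk_bus=scsi <image>

    # then inside the guest
    sudo fstrim -av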

Re: [ceph-users] Reclaim free space on RBD images that use Bluestore?????

2018-07-23 Thread Satish Patel
I have the same issue. I just built a new Ceph cluster for my OpenStack VM workload using RBD, created a bunch of VMs, and did some dd tests creating big files to test performance. Now I have deleted all the dd files but Ceph is still showing the USED space. I tried to do this from the guest VM: [root@c7-vm ~]# sudo fstrim

Re: [ceph-users] JBOD question

2018-07-23 Thread Satish Patel
I am planning to buy an "LSI SAS 9207-8i"; does anyone know whether it supports both RAID & JBOD mode together, so I can do RAID-1 on the OS disks and JBOD for the other disks? On Sat, Jul 21, 2018 at 11:16 AM, Willem Jan Withagen wrote: > On 21/07/2018 01:45, Oliver Freyermuth wrote: >> >> Hi Satish, >> >> that really

Re: [ceph-users] Why lvm is recommended method for bluestore

2018-07-23 Thread Satish Patel
on, Jul 23, 2018 at 1:56 PM, Satish Patel wrote: >> This is great explanation, based on your details look like when reboot >> machine (OSD node) it will take longer time to initialize all number >> of OSDs but if we use LVM in that case it shorten that time. > > That is on

Re: [ceph-users] Why lvm is recommended method for bluestore

2018-07-23 Thread Satish Patel
et 2018 à 09:51 -0400, Satish Patel a écrit : >>> I read that post and that's why I open this thread for few more >>> questions and clearence, >>> >>> When you said OSD doesn't come up what actually that means? After >>> reboot of node or after service

Re: [ceph-users] Why lvm is recommended method for bluestore

2018-07-22 Thread Satish Patel
e.com/ceph-users@lists.ceph.com/msg47802.html > > > > > > -Original Message- > From: Satish Patel [mailto:satish@gmail.com] > Sent: Saturday, July 21, 2018 20:59 > To: ceph-users > Subject: [ceph-users] Why lvm is recommended method for bluestore > > Folks, >

[ceph-users] Why lvm is recommended method for bluestore

2018-07-21 Thread Satish Patel
Folks, I think I am going to boil the ocean here. I googled a lot about why LVM is the recommended method for BlueStore, but didn't find any good, detailed explanation, not even on the official Ceph website. Can someone explain it here in basic language? I am no expert, so I just want to

[ceph-users] bluestore lvm scenario confusion

2018-07-21 Thread Satish Patel
I am trying to deploy ceph-ansible with the lvm OSD scenario, reading http://docs.ceph.com/ceph-ansible/master/osds/scenarios.html. I have all-SSD disks and no separate journal; my plan is to keep the WAL/DB on the same disk because they are all SSDs of the same speed. ceph-ansible doesn't create the LVM, so I
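For an lvm-scenario deploy where the VGs/LVs are created by hand beforehand, the playbook variables look roughly like this sketch (names are placeholders; one entry per OSD, with WAL/DB left collocated on the data LV):

    osd_objectstore: bluestore
    osd_scenario: lvm
    lvm_volumes:
      - data: data-lv1
        data_vg: ceph-vg1
      - data: data-lv2
        data_vg: ceph-vg2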

Re: [ceph-users] Error bluestore doesn't support lvm

2018-07-20 Thread Satish Patel
After googling and digging I found this bug; why is it not pushed to all branches? https://github.com/ceph/ceph-ansible/commit/d3b427e16990f9ebcde7575aae367fd7dfe36a8d#diff-34d2eea5f7de9a9e89c1e66b15b4cd0a On Fri, Jul 20, 2018 at 11:26 PM, Satish Patel wrote: > My Ceph version is > > [

Re: [ceph-users] Error bluestore doesn't support lvm

2018-07-20 Thread Satish Patel
My Ceph version is [root@ceph-osd-02 ~]# ceph -v ceph version 12.2.7 (3ec878d1e53e1aeb47a9f619c49d9e7c0aa384d5) luminous (stable) On Fri, Jul 20, 2018 at 11:24 PM, Satish Patel wrote: > I am using openstack-ansible with ceph-ansible to deploy my Ceph > custer and here is my config in ym

[ceph-users] Error bluestore doesn't support lvm

2018-07-20 Thread Satish Patel
I am using openstack-ansible with ceph-ansible to deploy my Ceph cluster, and here is my config in the yml file: --- osd_objectstore: bluestore osd_scenario: lvm lvm_volumes: - data: /dev/sdb - data: /dev/sdc - data: /dev/sdd - data: /dev/sde This is the error I am getting: TASK [ceph-osd :

Re: [ceph-users] JBOD question

2018-07-20 Thread Satish Patel
you do RAID and JBOD in > parallel. > > If you can't do that and can only either turn RAID on or off then you > can use SW RAID for your OS > > > On Fri, Jul 20, 2018 at 9:01 PM, Satish Patel wrote: >> Folks, >> >> I never used JBOD mode before and now i am pla

[ceph-users] mon fail to start for disk issue

2018-07-20 Thread Satish Patel
I am getting this error; why is it complaining about the disk even though we have enough space? 2018-07-20 16:04:58.313331 7f0c047f8ec0 0 set uid:gid to 167:167 (ceph:ceph) 2018-07-20 16:04:58.313350 7f0c047f8ec0 0 ceph version 12.2.7 (3ec878d1e53e1aeb47a9f619c49d9e7c0aa384d5) luminous (stable), process

[ceph-users] JBOD question

2018-07-20 Thread Satish Patel
Folks, I never used JBOD mode before and now I am planning to, so I have a stupid question: if I switch the RAID controller to JBOD mode, how will my OS disk get mirrored? Do I need to use software RAID for the OS disks when I use JBOD mode?
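If the controller is flipped entirely to JBOD/IT mode, the usual answer is indeed software RAID-1 for the OS disks, along the lines of this sketch (device names are examples; most installers can set this up at install time):

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1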

Re: [ceph-users] design question - NVME + NLSAS, SSD or SSD + NLSAS

2018-07-20 Thread Satish Patel
No way am I an expert, and let's see what other folks suggest, but I would say go with Intel if you only care about performance. Sent from my iPhone > On Jul 19, 2018, at 12:54 PM, Steven Vacaroaia wrote: > > Hi, > I would appreciate any advice ( with arguments , if possible) regarding the >

Re: [ceph-users] Converting to BlueStore, and external journal devices

2018-07-20 Thread Satish Patel
What is the use of LVM in BlueStore? I have seen people using LVM but don't know why. Sent from my iPhone > On Jul 19, 2018, at 10:00 AM, Eugen Block wrote: > > Hi, > > if you have SSDs for RocksDB, you should provide that in the command > (--block.db $DEV), otherwise Ceph will use the one
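Concretely, the command form Eugen is referring to looks something like this sketch (the device names are examples; an LV can be given for --block.db instead of a raw partition):

    ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/nvme0n1p1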

Re: [ceph-users] Need advice on Ceph design

2018-07-19 Thread Satish Patel
ce class. > > For your OSD nodes, I recommend at least 6 GB of RAM (1 GB of RAM per TB plus > some for the OS). > > For the cluster administration, look for ansible-ceph. > > You should bench your disks and pools when created to see what is best for > you. > > If anybod

Re: [ceph-users] RAID question for Ceph

2018-07-19 Thread Satish Patel
Thanks for the massive detail. So what options do I have? Can I disable the RAID controller, run the system without RAID, and use software RAID for the OS? Does that make sense? Sent from my iPhone > On Jul 19, 2018, at 6:33 AM, Willem Jan Withagen wrote: >> On 19/07/2018 10:53, Simon Ironside

[ceph-users] RAID question for Ceph

2018-07-18 Thread Satish Patel
If I have 8 OSD drives in a server on a P410i RAID controller (HP), and I want to make this server an OSD node, how should I configure RAID? 1. Put all drives in one RAID-0? 2. Put each individual HDD in RAID-0, creating 8 individual RAID-0 volumes so the OS sees 8 separate HDD drives? What most people

Re: [ceph-users] Need advice on Ceph design

2018-07-18 Thread Satish Patel
For production, it's recommended to have dedicated MON/MGR nodes. > - You may also need dedicated MDS nodes, depending the CEPH access > protocol(s) you choose. > - If you need commercial support afterward, you should see with a Redhat > representative. > > Samsung 850 pro is

[ceph-users] Need advice on Ceph design

2018-07-18 Thread Satish Patel
I have decided to set up 5-node Ceph storage and the following is my inventory; just tell me if it is good enough to start a first cluster for an average load. 0. Ceph BlueStore 1. Journal SSD (Intel DC 3700) 2. OSD disk Samsung 850 Pro 500GB 3. OSD disk SATA 500GB (7.5k RPMS) 4. 2x10G NIC (separate public/cluster

Re: [ceph-users] SSDs for data drives

2018-07-16 Thread Satish Patel
% On Mon, Jul 16, 2018 at 1:18 PM, Michael Kuriger wrote: > I dunno, to me benchmark tests are only really useful to compare different > drives. > > > > > > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Paul Emmerich > Sent: Monday, July 1

Re: [ceph-users] SSDs for data drives

2018-07-16 Thread Satish Patel
p and cheerful and let the data > availability be handled by Ceph, don’t expect the performance to keep up. > > > > > > > > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Satish Patel > Sent: Wednesday, 11 July 2018 10:50 PM > To: Paul Emme

[ceph-users] osd prepare issue device-mapper mapping

2018-07-13 Thread Satish Patel
I am installing Ceph on my lab box using ceph-ansible. I have two HDDs for OSDs and I am getting the following error on one of the OSDs; not sure what the issue is. [root@ceph-osd-01 ~]# ceph-disk prepare --cluster ceph --bluestore /dev/sdb ceph-disk: Error: Device /dev/sdb1 is in use by a device-mapper
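A minimal cleanup sketch for the "in use by a device-mapper mapping" case, assuming the disk really is free to wipe (the mapping name comes from the dmsetup output, not from this example):

    dmsetup ls                 # find the mapping holding /dev/sdb1
    dmsetup remove <mapping>
    wipefs -a /dev/sdb         # or: ceph-volume lvm zap /dev/sdb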

[ceph-users] Ceph-ansible issue with libselinux-python

2018-07-11 Thread Satish Patel
I am installing a Ceph cluster using openstack-ansible and having this strange issue. I did google it: some people say it is a bug, and some say it can be solved by a hack. This is my error: TASK [ceph-config : create ceph conf directory and assemble directory]
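The "hack" usually mentioned for this one is simply making sure the Python SELinux bindings exist on the target hosts before the playbook writes files there; a sketch, assuming CentOS/RHEL targets:

    yum install -y libselinux-python

or as a pre_task in the playbook:

    - name: ensure libselinux-python is installed
      yum:
        name: libselinux-python
        state: present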

Re: [ceph-users] SSDs for data drives

2018-07-11 Thread Satish Patel
Prices go way up if I pick the Samsung SM863a for all data drives. We have many servers running on consumer-grade SSD drives and we have never noticed any performance issue or fault so far (but we never used Ceph before). I thought that was the whole point of Ceph, to provide high availability if

Re: [ceph-users] Journal SSD recommendation

2018-07-10 Thread Satish Patel
I am planning to use an Intel 3700 (200GB) for the journal and a 500GB Samsung 850 EVO for the OSD; do you think this design makes sense? On Tue, Jul 10, 2018 at 3:04 PM, Simon Ironside wrote: > > On 10/07/18 19:32, Robert Stanford wrote: >> >> >> Do the recommendations apply to both data and journal SSDs

Re: [ceph-users] Journal SSD recommendation

2018-07-10 Thread Satish Patel
On Tue, Jul 10, 2018 at 11:51 AM, Simon Ironside wrote: > Hi, > > On 10/07/18 16:25, Satish Patel wrote: >> >> Folks, >> >> I am in middle or ordering hardware for my Ceph cluster, so need some >> recommendation from communities. >> >> - Wha

[ceph-users] Journal SSD recommendation

2018-07-10 Thread Satish Patel
Folks, I am in the middle of ordering hardware for my Ceph cluster, so I need some recommendations from the community. - What company/vendor SSD is good? - What size should be good for the journal (for BlueStore)? I have lots of Samsung 850 EVOs but they are consumer drives; do you think consumer drives should be
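Whatever the vendor, the usual sanity check for a journal/DB device is a single-threaded O_DSYNC 4k write test, roughly as in this sketch (it writes to the raw device, so it destroys data on /dev/sdX):

    fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based \
        --group_reporting --name=journal-test

Consumer drives such as the 850 EVO tend to do very poorly on this particular test, which is why enterprise drives keep being recommended on this list.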

[ceph-users] SSD for bluestore

2018-07-08 Thread Satish Patel
Folks, I'm just reading in multiple posts that BlueStore doesn't need an SSD journal; is that true? I'm planning to build a 5-node cluster, so depending on that I will purchase SSDs for the journal. If it does require an SSD for the journal, then what would be the best vendor and model that lasts long? Any

[ceph-users] Small ceph cluster design question

2018-07-06 Thread Satish Patel
Folks, I'm new to the Ceph world and still in the learning stage. This is what I have in my inventory to build a cluster; please, I need some suggestions. 5 x HP DL360p G8 / 32 cores 2.9GHz / 32GB memory / 2x10G NIC (server has 10 HDD slots) 3 x HP DL 460c blades for monitors (I'm not very worried about