[ceph-users] Unable to track different ceph client version connections

2020-01-21 Thread Pardhiv Karri
Hi, We upgraded our Ceph cluster from Hammer to Luminous and it is running fine. Post-upgrade we live-migrated all our OpenStack instances (not 100% sure). Currently we see 1658 clients still on the Hammer version. To track the clients we increased debug levels to debug_mon=10/10, debug_ms=1/5,
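A minimal sketch of how leftover Hammer clients can usually be located without raising mon debug levels (the monitor name is a placeholder; the feature bitmask is the one reported for the hammer client group elsewhere in this archive):
    # per-release summary of connected clients (available since Luminous)
    ceph features
    # on each monitor host, dump sessions and look for the hammer feature bitmask
    # to find the client IP addresses behind that count
    ceph daemon mon.$(hostname -s) sessions | grep 0x81dff8eeacfffb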

[ceph-users] Ceph Rebalancing Bug in Luminous?

2019-09-03 Thread Pardhiv Karri
Hi, When I set the crush weight of two OSDs to zero in a cluster with 575 OSDs, with all flags set to prevent rebalancing, there is an insane spike in client IO and bandwidth for a few seconds, and then when the flags are removed there are too many slow requests every few seconds. Does anyone know why it happens, is
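For reference, a sketch of the kind of sequence described (the flag set and OSD ids are assumptions, not taken from the thread):
    ceph osd set norebalance
    ceph osd set nobackfill
    ceph osd set norecover
    ceph osd crush reweight osd.100 0    # placeholder OSD ids
    ceph osd crush reweight osd.101 0
    # peering still happens immediately even with the flags set, which is one
    # common source of a short client-IO spike at this point
    ceph osd unset norecover
    ceph osd unset nobackfill
    ceph osd unset norebalance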

Re: [ceph-users] Ceph Clients Upgrade?

2019-06-18 Thread Pardhiv Karri
Hi Paul, The Ceph packages on all the underlying compute nodes were already upgraded, but the instances were not. So are you saying that live-migrate will get them upgraded? Thanks, Pardhiv Karri On Tue, Jun 18, 2019 at 7:34 AM Paul Emmerich wrote: > You can live-migrate VMs to a ser
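A running QEMU process keeps using the librbd it was started with, so each guest only picks up the upgraded client once its process is restarted or moved. A minimal sketch with the nova CLI (the instance ID is a placeholder):
    # moving the VM restarts its QEMU process on the target host, which then
    # loads the already-upgraded librbd installed there
    nova live-migration <instance-uuid>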

[ceph-users] Ceph Clients Upgrade?

2019-06-17 Thread Pardhiv Karri
um": 619 } }, "client": { "group": { "features": "0x81dff8eeacfffb", "release": "hammer", "num": 3316 }, "group": { "fea

Re: [ceph-users] Update crushmap when monitors are down

2019-04-01 Thread Pardhiv Karri
mon.sh1ora1301 mon.1 10.15.29.15:6789/0 295 : cluster [INF] mon.sh1ora1301 calling monitor election
2019-04-02 00:52:39.810572 mon.sh1ora1301 mon.1 10.15.29.15:6789/0 296 : cluster [INF] mon.sh1ora1301 is new leader, mons sh1ora1301,sh1ora1302 in quorum (ranks 1,2)
Thanks, Pardhiv Karri On Mon, Apr

[ceph-users] Update crushmap when monitors are down

2019-04-01 Thread Pardhiv Karri
, Pardhiv Karri

[ceph-users] Ceph crushmap re-arrange with minimum rebalancing?

2019-03-07 Thread Pardhiv Karri
it? Thanks, *Pardhiv Karri* "Rise and Rise again until LAMBS become LIONS"

[ceph-users] Ceph 2 PGs Inactive and Incomplete after node reboot and OSD toast

2019-02-27 Thread Pardhiv Karri
240361653 objects degraded (0.000%); recovery 151527/240361653 objects misplaced (0.063%)
pg 13.110c is stuck inactive since forever, current state incomplete, last acting [490,16,120]
pg 7.9b7 is stuck inactive since forever, current state incomplete, last acting [492,680,265]
Thanks, -- *Pardhiv Kar

Re: [ceph-users] Is it possible to increase Ceph Mon store?

2019-01-07 Thread Pardhiv Karri
" to get it back earlier this week. Currently the mon store is around 12G on each monitor. If it doesn't grow then I won't change the value, but if it grows and gives the warning then I will increase it using "mon_data_size_warn". Thanks, Pardhiv Karri On Mon, Jan 7, 2019 at 1:55 PM Br
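Two relevant knobs here, shown only as a sketch (the 32 GB value mirrors the original question in this thread; the monitor name is a placeholder):
    # raise the mon store warning threshold to 32 GB (the value is in bytes)
    ceph tell mon.\* injectargs '--mon_data_size_warn=34359738368'
    # or persist it in ceph.conf under [mon]: mon_data_size_warn = 34359738368
    # trigger an on-demand compaction of one monitor's store
    ceph tell mon.<mon-id> compact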

[ceph-users] Is it possible to increase Ceph Mon store?

2019-01-07 Thread Pardhiv Karri
ze of the mon store to 32GB or something to avoid the Ceph health going to a warning state due to the mon store growing too big? -- Thanks, *Pardhiv Karri*

Re: [ceph-users] Ceph OOM Killer Luminous

2018-12-21 Thread Pardhiv Karri
Thank you for the quick response, Dyweni! We are using FileStore, as this cluster was upgraded from Hammer --> Jewel --> Luminous 12.2.8. 16x 2TB HDDs per node on all nodes. The R730xd has 128GB and the R740xd has 96GB of RAM. Everything else is the same. Thanks, Pardhiv Karri On Fri, Dec 21, 2018 at 1

[ceph-users] Ceph OOM Killer Luminous

2018-12-21 Thread Pardhiv Karri
actively rebooting the nodes in a timely manner to avoid crashes. On one R740xd node we set all the OSDs to 0.0 and there is no memory leak there. Any pointers to fix the issue would be helpful. Thanks, *Pardhiv Karri*

Re: [ceph-users] Ceph Cluster to OSD Utilization not in Sync

2018-12-21 Thread Pardhiv Karri
Thank you, Dyweni, for the quick response. We have 2 Hammer clusters which are due for upgrade to Luminous next month and 1 Luminous 12.2.8 cluster. Will try this on the Luminous cluster and, if it works, will apply the same once the Hammer clusters are upgraded, rather than adjusting the weights. Thanks, Pardhiv Karri

[ceph-users] Ceph Cluster to OSD Utilization not in Sync

2018-12-21 Thread Pardhiv Karri
space unused as some OSDs are above 87% and some are below 50%. If the OSDs above 87% reach 95% then the cluster will have issues. What is the best way to mitigate this issue? Thanks, *Pardhiv Karri*
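The thread does not show which option was suggested; two common Luminous-era ways to even out per-OSD utilization, as a sketch:
    # see the current spread of weights and utilization
    ceph osd df tree
    # dry-run, then apply, an automatic reweight of the most-full OSDs
    ceph osd test-reweight-by-utilization
    ceph osd reweight-by-utilization
    # or use the mgr balancer module (Luminous and later)
    ceph mgr module enable balancer
    ceph balancer mode crush-compat
    ceph balancer on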

[ceph-users] How to setup Ceph OSD auto boot up on node reboot

2018-09-04 Thread Pardhiv Karri
start ceph-osd-all" isn't working well, and I don't like the idea of "sudo start ceph-osd id=1" for each OSD in an rc file. Need to do it for both Hammer (Ubuntu 14.04) and Luminous (Ubuntu 16.04). -- Thanks, Pardhiv Karri
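A sketch of the usual per-platform approach (the OSD id is a placeholder; this is not taken from the thread's replies):
    # Luminous on Ubuntu 16.04 (systemd): enable the per-OSD units and the target
    sudo systemctl enable ceph-osd@1
    sudo systemctl enable ceph.target
    # Hammer on Ubuntu 14.04 (upstart): OSDs prepared with ceph-disk are normally
    # re-activated by udev at boot; to start everything by hand:
    sudo start ceph-osd-all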

[ceph-users] Ceph Luminous RocksDB vs WalDB?

2018-06-26 Thread Pardhiv Karri
Hi, I am playing with Ceph Luminous and getting conflicting information around the usage of the WAL vs RocksDB. I have a 2TB NVMe drive which I want to use for the WAL/RocksDB and 5x 2TB SSDs for OSDs. I am planning to create 5 30GB partitions for RocksDB on the NVMe drive; do I need to create partitions of
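For context, a minimal sketch of a BlueStore OSD created with a separate DB partition (device names are placeholders): if only --block.db is given, the WAL lives inside the DB device, so a separate WAL partition is not required.
    # one SSD OSD using a 30GB NVMe partition for RocksDB (WAL included in block.db)
    sudo ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1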

[ceph-users] Is Ceph Full Tiering Possible?

2018-06-14 Thread Pardhiv Karri
pool and reduce the slow storage pool, it will be easier to migrate and also currently works for our budget in getting Ceph faster for heavy users? I also looked at storage tiering, but that won't be of much help as the usage cannot be combined between storage tiers. Thanks, Pardhiv Karri

Re: [ceph-users] Adding additional disks to the production cluster without performance impacts on the existing

2018-06-11 Thread Pardhiv Karri
before using it in production.
Script Name: osd_crush_reweight.py
Config File Name: rebalance_config.ini
Script: https://jpst.it/1gwrk
Config File: https://jpst.it/1gwsh
--Pardhiv Karri On Fri, Jun 8, 2018 at 12:20 AM, mj wrote: > Hi Pardhiv, > > On 06/08/2018 05:07 AM, Pardhiv Ka
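The paste links above may no longer resolve; purely as a sketch of the same gradual-reweight idea (not the osd_crush_reweight.py script itself; the OSD id, step weights and polling interval are made up):
    osd=osd.612                         # placeholder OSD id
    for w in 0.4 0.8 1.2 1.6 1.82; do   # placeholder steps up to the final crush weight
        ceph osd crush reweight "$osd" "$w"
        # assumes the cluster is otherwise HEALTH_OK; wait for this step's
        # backfill/recovery to finish before taking the next step
        until ceph health | grep -q HEALTH_OK; do sleep 60; done
    done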

Re: [ceph-users] Adding additional disks to the production cluster without performance impacts on the existing

2018-06-07 Thread Pardhiv Karri
' --Pardhiv Karri On Thu, Jun 7, 2018 at 2:23 PM, Paul Emmerich wrote: > Hi, > > the "osd_recovery_sleep_hdd/ssd" options are way better to fine-tune the > impact of a backfill operation in this case. > > Paul > > 2018-06-07 20:55 GMT+02:00 David Turner : >
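For reference, these options can be changed at runtime; the values below are only illustrative (the Luminous defaults are 0.1 for HDD and 0 for SSD):
    ceph tell osd.\* injectargs '--osd_recovery_sleep_hdd 0.2 --osd_recovery_sleep_ssd 0.0'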

Re: [ceph-users] Openstack VMs with Ceph EC pools

2018-06-07 Thread Pardhiv Karri
Thank you, Andrew and Jason for replying. Jason, Do you have a sample ceph config file that you can share which works with RBD and EC pools? Thanks, Pardhiv Karri On Thu, Jun 7, 2018 at 9:08 AM, Jason Dillaman wrote: > On Thu, Jun 7, 2018 at 11:54 AM, Andrew Denton > wrote: > >
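Not a config from the thread, just a sketch of the usual Luminous-era layout for RBD on erasure coding: image metadata stays in a replicated pool and only the data objects go to the EC pool (pool names, PG counts and the EC profile are placeholders):
    ceph osd pool create rbd_ec_data 1024 1024 erasure myprofile
    ceph osd pool set rbd_ec_data allow_ec_overwrites true   # requires BlueStore OSDs
    ceph osd pool application enable rbd_ec_data rbd
    rbd create --size 100G --data-pool rbd_ec_data volumes/test-image
    # for OpenStack clients, rbd_default_data_pool can be set in the [client]
    # section of ceph.conf so Cinder/Glance-created images use the EC data pool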

[ceph-users] Openstack VMs with Ceph EC pools

2018-06-06 Thread Pardhiv Karri
Hi, Is anyone using OpenStack with Ceph erasure-coded pools, now that Luminous supports RBD on them? If so, how's the performance? Thanks, Pardhiv Karri

Re: [ceph-users] Can Bluestore work with 2 replicas or still need 3 for data integrity?

2018-05-25 Thread Pardhiv Karri
Thank you, Linh, for the info. Started reading about this solution. Could be a lot of cost savings; need to check the limitations though. Not sure how it works with OpenStack as a front end to Ceph with erasure-coded pools in Luminous. Thanks, Pardhiv Karri On Thu, May 24, 2018 at 6:39 PM

[ceph-users] Can Bluestore work with 2 replicas or still need 3 for data integrity?

2018-05-24 Thread Pardhiv Karri
to BlueStore on Luminous, all SSD. Due to the cost of SSDs, we want to know if 2 replicas are good enough or if we still need 3. Thanks, Pardhiv Karri

Re: [ceph-users] Some OSDs never get any data or PGs

2018-05-24 Thread Pardhiv Karri
Finally figured out that it is happening because of an unbalanced rack structure. When we moved the host/OSDs to another rack they worked just fine. We then balanced the racks by moving hosts; some rebalancing happened due to that, but everything is fine now. Thanks, Pardhiv Karri On Tue, May 22
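Rebalancing the CRUSH racks described here is normally just a matter of moving host buckets (the host and rack names are placeholders, not the thread's actual layout):
    ceph osd crush move cephhost01 rack=rack2   # move the host bucket under another rack
    ceph osd tree                               # verify the new layout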

Re: [ceph-users] Some OSDs never get any data or PGs

2018-05-22 Thread Pardhiv Karri
{
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type rack
    step emit
}
# end crush map
Thanks, Pardhiv Karri On Tue, May 22, 2018 at 9:58 AM, Pardhiv Karri <meher4in...@gmail.com> wrote: > Hi David, > > We are using tree algorithm.

Re: [ceph-users] Some OSDs never get any data or PGs

2018-05-22 Thread Pardhiv Karri
Hi David, We are using the tree algorithm. Thanks, Pardhiv Karri On Tue, May 22, 2018 at 9:42 AM, David Turner <drakonst...@gmail.com> wrote: > Your PG counts per pool per osd doesn't have any PGs on osd.38. that > definitely matches what your seeing, but I've never seen this hap

Re: [ceph-users] Some OSDs never get any data or PGs

2018-05-22 Thread Pardhiv Karri
gp_num: 64
Working on pool: compute  pg_num: 512   pgp_num: 512
Working on pool: volumes  pg_num: 1024  pgp_num: 1024
Working on pool: images   pg_num: 128   pgp_num: 128
root@or1010051251044:~#
Thanks, Pardhiv Karri On Tue, May 22, 2018 at 9:16 AM, David Turner

Re: [ceph-users] How to see PGs of a pool on a OSD

2018-05-22 Thread Pardhiv Karri
This is exactly what I'm looking for. Tested it in our lab and it works great. Thank you, Caspar! Thanks, Pardhiv Karri On Tue, May 22, 2018 at 3:42 AM, Caspar Smit <caspars...@supernas.eu> wrote: > Here you go: > > ps. You might have to map your poolnames to po
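Caspar's actual snippet is cut off above; one way to get the same per-pool-per-OSD view, shown only as a sketch (the pool id is the part of the PG id before the dot; OSD 0 is a placeholder):
    osd=0
    ceph pg ls-by-osd osd.$osd | awk '$1 ~ /^[0-9]+\./ { split($1, p, "."); c[p[1]]++ }
        END { for (id in c) print "pool " id ": " c[id] " PGs" }'
    ceph osd lspools   # map the numeric pool ids to pool names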

Re: [ceph-users] Some OSDs never get any data or PGs

2018-05-22 Thread Pardhiv Karri
:~# Thanks, Pardhiv Karri On Tue, May 22, 2018 at 5:01 AM, David Turner <drakonst...@gmail.com> wrote: > What are your `ceph osd tree` and `ceph status` as well? > > On Tue, May 22, 2018, 3:05 AM Pardhiv Karri <meher4in...@gmail.com> wrote: > >> Hi, >> >> We

[ceph-users] How to see PGs of a pool on a OSD

2018-05-22 Thread Pardhiv Karri
Hi, Our Ceph cluster has 12 pools and only 3 pools are really used. How can I see the number of PGs on an OSD and which PGs belong to which pool on that OSD? Something like below: OSD 0 = 1000 PGs (500 PGs belong to PoolA, 200 PGs belong to PoolB, 300 PGs belong to PoolC) Thanks, Pardhiv Karri

[ceph-users] Some OSDs never get any data or PGs

2018-05-22 Thread Pardhiv Karri
STDDEV: 8.26 Thanks, Pardhiv Karri

Re: [ceph-users] Ceph osd crush weight to utilization incorrect on one node

2018-05-11 Thread Pardhiv Karri
, May 11, 2018, 10:06 PM Pardhiv Karri <meher4in...@gmail.com> > wrote: > >> Hi David, >> >> Thanks for the reply. Yeah we are seeing that 0.0001 usage on pretty much >> all OSDs. But this node is different: whether at full weight or just >> 0.2 of OSD 611, the O

Re: [ceph-users] Ceph osd crush weight to utilization incorrect on one node

2018-05-11 Thread Pardhiv Karri
//docs.ceph.com/docs/master/rados/operations/crush-map/#hammer-crush-v4 >> >> That should help as well. Once that's enabled you can convert your >> existing >> buckets to straw2 as well. Just be careful you don't have any old clients >> connecting to your cl

Re: [ceph-users] Ceph osd crush weight to utilization incorrect on one node

2018-05-11 Thread Pardhiv Karri
ur cluster that don't support that feature yet. > > Bryan -- *Pardhiv Karri* "Rise and Rise again until LAMBS become LIONS"
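The quoted advice maps to roughly the following steps, shown only as a sketch (file names are placeholders): converting straw buckets to straw2 needs every connected client to support CRUSH_V4 (Hammer or newer) and will move some data when applied.
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # edit crushmap.txt and change "alg straw" to "alg straw2" in each bucket
    crushtool -c crushmap.txt -o crushmap-new.bin
    ceph osd setcrushmap -i crushmap-new.bin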

[ceph-users] Ceph osd crush weight to utilization incorrect on one node

2018-05-10 Thread Pardhiv Karri
Hi, We have a large 1PB Ceph cluster. We recently added 6 nodes with 16x 2TB disks each to the cluster. Five of the nodes rebalanced well without any issues, but the sixth/last node's OSDs started acting weird: as I increase the weight of one OSD the utilization doesn't change, but a different OSD on the