Re: [ceph-users] Filestore to Bluestore migration question

2018-11-05 Thread Amit Ghadge
On Mon, 5 Nov 2018, 21:13 Hayashida, Mami wrote: > Additional info -- I know that /var/lib/ceph/osd/ceph-{60..69} are not mounted at this point (i.e. mount | grep ceph-60, and 61-69, returns nothing). They don't show up when I run "df", either. > ceph-volume command automatically mount
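For context, a minimal sketch of the checks discussed in the thread and of the activation step that does the mounting; the OSD IDs follow the thread, and the exact activate invocation depends on how the OSDs were prepared:

    # check whether the converted OSD directories are mounted (IDs from the thread)
    mount | grep ceph-60
    df -h | grep ceph-6

    # ceph-volume mounts /var/lib/ceph/osd/ceph-<id> itself when it activates the OSD,
    # e.g. activate everything prepared on this host:
    ceph-volume lvm activate --all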

[ceph-users] Ceph luminous custom plugin

2018-11-14 Thread Amit Ghadge
Hi, I copied my custom module into /usr/lib64/ceph/mgr and ran "ceph mgr module enable --force" to enable the plugin. The plugin loads and prints some messages, but it does not print any log output to the ceph-mgr log file. Thanks, Amit G
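A rough sketch of the usual enable-and-debug sequence, assuming a luminous mgr; the module name "hello" and the mgr id are placeholders, and raising debug_mgr via the admin socket is just one way to make the module's self.log messages visible:

    # copy the module and enable it ("hello" is a hypothetical module name)
    cp -r hello /usr/lib64/ceph/mgr/
    ceph mgr module enable hello --force

    # messages logged via self.log only land in the ceph-mgr log if debug_mgr
    # is high enough; bump it temporarily on the active mgr daemon:
    ceph daemon mgr.<id> config set debug_mgr 20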

Re: [ceph-users] Ceph luminous custom plugin

2018-11-14 Thread Amit Ghadge
On Wed, Nov 14, 2018 at 5:11 PM Amit Ghadge wrote: > Hi, I copied my custom module into /usr/lib64/ceph/mgr and ran "ceph mgr module enable --force" to enable the plugin. The plugin loads and prints some messages, but it does not print any log output to the ceph-mgr log file.

[ceph-users] [Ceph-users] Multisite-Master zone still in recover mode

2019-01-02 Thread Amit Ghadge
Hi, We followed the steps at http://docs.ceph.com/docs/master/radosgw/multisite/ to migrate a single-site setup to a master zone and then set up a secondary zone. We did not delete the existing data, and all objects have synced to the secondary zone, but the master zone still shows as being in recovery mode; dynamic resharding is disabled.
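A short sketch of the commands usually used to see which shards are still recovering; the bucket name is a placeholder:

    # overall multisite sync state, run against the master zone
    radosgw-admin sync status

    # per-bucket detail for a bucket that stays behind ("mybucket" is hypothetical)
    radosgw-admin bucket sync status --bucket=mybucket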

[ceph-users] Journal drive recommendation

2018-11-26 Thread Amit Ghadge
Hi all, We are planning to use SSDs for the data drives, so for the journal drive, is the recommendation to use the same drive or a separate drive? Thanks, Amit
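For reference, a sketch of what the two layouts look like at OSD-creation time with ceph-volume; the device names are examples only:

    # Filestore with the journal on a separate device/partition
    ceph-volume lvm create --filestore --data /dev/sdb --journal /dev/sdc1

    # BlueStore equivalent: DB (and implicitly WAL) on a separate device
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sdc1

    # journal/DB colocated on the same SSD: simply omit --journal / --block.db
    ceph-volume lvm create --bluestore --data /dev/sdb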

Re: [ceph-users] Journal drive recommendation

2018-11-26 Thread Amit Ghadge
D: DE310638492 > Com. register: Amtsgericht Munich HRB 231263 > Web: https://croit.io > YouTube: https://goo.gl/PGE1Bx > On Tue, 27 Nov 2018, 02:50 Amit Ghadge wrote: >> Hi all, We are planning to use SSDs for the data drives, so

Re: [ceph-users] Cluster Status:HEALTH_ERR for Full OSD

2019-01-30 Thread Amit Ghadge
A better way is to increase osd set-full-ratio slightly (e.g. to 0.97) and then remove the buckets. -AmitG On Wed, 30 Jan 2019, 21:30 Paul Emmerich wrote: > Quick and dirty solution: take the full OSD down to issue the deletion command ;) > Better solutions: temporarily increase the full limit (ceph osd
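A minimal sketch of the approach described above (raise the full ratio, delete, put it back); 0.95 is the usual default:

    # temporarily raise the cluster full threshold
    ceph osd set-full-ratio 0.97

    # ... remove the buckets / objects to free space ...

    # restore the default once the cluster is back below the threshold
    ceph osd set-full-ratio 0.95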

Re: [ceph-users] Multisite Ceph setup sync issue

2019-01-30 Thread Amit Ghadge
Have you committed your changes on the slave gateway? First run the commit command on the slave gateway, then try again. -AmitG On Wed, 30 Jan 2019, 21:06 Krishna Verma wrote: > Hi Casey, > Thanks for your reply; however, I tried the "--source-zone" option with the sync command but am getting the below error:
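A rough sketch of the commit step being suggested; the endpoint URL and keys are placeholders and must match the realm's system user:

    # on the secondary (slave) gateway: pull the current period from the master...
    radosgw-admin period pull --url=http://master-gw:8080 --access-key=<key> --secret=<secret>

    # ...commit any pending period changes, then re-check sync
    radosgw-admin period update --commit
    radosgw-admin sync status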