A better way is to increase osd set-full-ratio slightly (e.g. to 0.97) and then
remove the buckets.
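For example (the bucket name is hypothetical; restore the default ratio afterwards):

    # raise the full ratio slightly so deletes are accepted again
    ceph osd set-full-ratio 0.97
    # delete the bucket together with its objects
    radosgw-admin bucket rm --bucket=big-bucket --purge-objects
    # put the default back once space has been freed
    ceph osd set-full-ratio 0.95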
-AmitG
On Wed, 30 Jan 2019, 21:30 Paul Emmerich wrote:
> Quick and dirty solution: take the full OSD down to issue the deletion
> command ;)
>
> Better solutions: temporarily increase the full limit (ceph osd
> set-full-ratio)
Have you committed your changes on the slave gateway?
First run the commit command on the slave gateway and then try again.
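A minimal sketch, assuming the standard multisite commands:

    # on the slave (secondary) gateway: commit the pending period
    radosgw-admin period update --commit
    # then verify that sync picks up
    radosgw-admin sync status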
-AmitG
On Wed, 30 Jan 2019, 21:06 Krishna Verma wrote:
> Hi Casey,
>
> Thanks for your reply. However, I tried the "--source-zone" option with
> the sync command, but I am getting the error below:
>
> Syn
Hi,
We followed the steps at http://docs.ceph.com/docs/master/radosgw/multisite/ to
migrate from single-site to a master zone and then set up a secondary zone.
We did not delete the existing data, and all objects synced to the secondary
zone, but the master zone still shows it in recovery mode; dynamic resharding
is disabled.
M
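For reference, the recovery state can be inspected per zone like this (the zone name "secondary" is only an example):

    # overall multisite sync state, run on the master zone
    radosgw-admin sync status
    # per-shard data sync state against a given source zone
    radosgw-admin data sync status --source-zone=secondary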
Hi all,
We are planning to use SSDs as the data drives; for the journal drive, is there
any recommendation to use the same drive or a separate one?
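If it should be a separate drive, I believe the journal device is given at
creation time, e.g. (device paths are just examples):

    # FileStore OSD with data and journal on different devices
    ceph-volume lvm create --filestore --data /dev/sdb --journal /dev/sdc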
Thanks,
Amit
Hi,
I copied my custom module into /usr/lib64/ceph/mgr and ran "ceph mgr module
enable --force" to enable the plugin. The plugin loads and prints some
messages, but nothing from it appears in the ceph-mgr log file.
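For reference, this is roughly what I run (the module name "mymodule" is an
example; as far as I know, only output written via the module's self.log ends
up in the ceph-mgr log, not print()):

    # enable the module
    ceph mgr module enable mymodule --force
    # raise mgr verbosity through the admin socket (<id> = active mgr name)
    ceph daemon mgr.<id> config set debug_mgr 20
    # module log lines should then show up here
    grep -i mymodule /var/log/ceph/ceph-mgr.*.log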
Thanks,
Amit G
On Mon, 5 Nov 2018, 21:13 Hayashida, Mami wrote:
> Additional info -- I know that /var/lib/ceph/osd/ceph-{60..69} are not
> mounted at this point (i.e., mount | grep ceph-60, and 61-69, returns
> nothing). They don't show up when I run "df", either.
>
The ceph-volume command automatically mounts the ceph-{60..69} directories when
the OSDs are activated.
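A minimal sketch, assuming the OSDs were prepared with ceph-volume lvm:

    # show what ceph-volume knows about the prepared OSDs
    ceph-volume lvm list
    # activate (and thereby mount) all of them
    ceph-volume lvm activate --all

For BlueStore OSDs the mount at /var/lib/ceph/osd/ceph-<id> is a small tmpfs,
so it only shows up in df once the OSD has been activated.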