Great, thanks for the update.
Jason
On Fri, Apr 13, 2018 at 11:06 PM, Alex Gorbachev
wrote:
> On Thu, Apr 12, 2018 at 9:38 AM, Alex Gorbachev
> wrote:
>> On Thu, Apr 12, 2018 at 7:57 AM, Jason Dillaman wrote:
>>> If you run "partprobe" after you resize in your second example, is the
>>> chan
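For context, the resize-then-rescan sequence being discussed looks roughly like this; a minimal sketch assuming a krbd-mapped image (the pool/image names and device node are placeholders, not taken from the thread):

rbd map rbd/testimage                  # maps the image; prints the device node, e.g. /dev/rbd0
rbd resize --size 20G rbd/testimage    # grow the image while it is mapped
partprobe /dev/rbd0                    # ask the kernel to re-read the partition table on the resized device
lsblk /dev/rbd0                        # check whether the new size and partitions are visible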
On Fri, Apr 13, 2018 at 8:20 PM, Rhian Resnick wrote:
> Evening,
>
> When attempting to create an OSD, we receive the following error.
>
> [ceph-admin@ceph-storage3 ~]$ sudo ceph-volume lvm create --bluestore
> --data /dev/sdu
> Running command: ceph-authtool --gen-print-key
> Running command: cep
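For what it's worth, the zap-and-retry path mentioned further down in this thread, as a rough sketch; it assumes /dev/sdu is the intended device and that wiping its LVM metadata is acceptable:

sudo ceph-volume lvm zap /dev/sdu --destroy                # wipe the partial LVM/partition state left by the failed attempt
sudo ceph-volume lvm create --bluestore --data /dev/sdu    # retry the OSD creation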
Hi,
Still leaking after the update to 12.2.4: around 17 GB after 9 days.
USER       PID %CPU %MEM      VSZ      RSS TTY  STAT START    TIME COMMAND
ceph    629903 50.7 25.9 17473680 17082432 ?    Ssl  Apr05 6498:21
/usr/bin/ceph-mds -f --cluster ceph --id ceph4-1.odiso.net --setuser ceph
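A few commands that may help narrow down where the MDS memory is going, run on the MDS host; this is only a sketch and assumes the admin socket is reachable and the daemon was built with tcmalloc for the heap stats:

ceph daemon mds.ceph4-1.odiso.net dump_mempools   # break resident memory down by internal memory pool
ceph daemon mds.ceph4-1.odiso.net cache status    # cache usage compared against mds_cache_memory_limit
ceph tell mds.ceph4-1.odiso.net heap stats        # allocator-level view (tcmalloc only)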
Thanks all,
Here is a link to our command being executed: https://pastebin.com/iy8iSaKH
Here are the results from the command, executed with debug enabled (after a zap with destroy):
[root@ceph-storage3 ~]# ceph-volume lvm create --bluestore --data /dev/sdu
Running command: ceph-authtool
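In case it helps anyone reproduce this, one way to get verbose output from ceph-volume (an assumption about how "debug enabled" was done above) is its debug environment variable and its log file:

sudo CEPH_VOLUME_DEBUG=1 ceph-volume lvm create --bluestore --data /dev/sdu   # full tracebacks instead of the short error
less /var/log/ceph/ceph-volume.log                                            # the complete run is also logged here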
I deleted my default.rgw.buckets.data and default.rgw.buckets.index pools
in an attempt to clean them out. I brought this up on the list and
received replies telling me, essentially, "You shouldn't do that." There
was, however, no helpful advice on recovering.
When I run 'radosgw-admin bucket lis
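For anyone in the same situation, a rough sketch of getting radosgw answering again after those pools were deleted; this only recreates empty pools (the object data itself is gone), and the pg counts below are placeholders:

ceph osd pool create default.rgw.buckets.data 64 64             # recreate the data pool (pg count is a guess)
ceph osd pool create default.rgw.buckets.index 8 8              # recreate the index pool
ceph osd pool application enable default.rgw.buckets.data rgw   # tag the pools for rgw (Luminous warns otherwise)
ceph osd pool application enable default.rgw.buckets.index rgw
radosgw-admin bucket list    # bucket metadata may still list buckets whose objects no longer exist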
Afternoon,
Happily, I resolved this issue.
Running vgdisplay showed that ceph-volume had tried to create a volume on a
failed disk. (We didn't know we had a bad disk, so this was new information
to us.) When the command failed, it left three bad volume groups behind.
Since you cannot rename them yo
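For completeness, a sketch of how the leftover volume groups can be inspected and removed; the VG name and device below are placeholders (ceph-volume names its volume groups ceph-<uuid>):

vgs -o vg_name,pv_name    # list volume groups and the physical volume each one sits on
vgremove -f ceph-<uuid>   # remove a stale volume group and its logical volumes (placeholder name)
pvremove /dev/sdX         # clear the LVM label from the failed disk (placeholder device)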