Hi Everyone,
I've got a couple of pools that I don't believe are being used, but they
hold a reasonably large number of PGs (roughly 50% of our total). I'd like
to delete them, but as they were pre-existing when I inherited the cluster,
I wanted to make sure they aren't needed for anything first.
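A sketch of the checks I have in mind before deleting anything (pool names
below are placeholders, and the delete flag assumes Luminous or later):

# any data or recent I/O in the pool?
ceph df detail
rados df
# any RBD images, if it is an RBD pool?
rbd ls -p <pool>
# on Luminous+, deletion itself needs the mon flag enabled first:
ceph tell mon.* injectargs --mon-allow-pool-delete=true
ceph osd pool delete <pool> <pool> --yes-i-really-really-mean-it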
I think what you're looking for is the public bind addr option.
Hi all,
I am installing Ceph using ceph-deploy on 4 VMs which have private IP
addresses with public IPs NATed to them. But even after adding
public network = 0.0.0.0/0
the daemons still listen on the private IPs.
I tried doing the steps mentioned in
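For reference, my understanding of the public bind addr suggestion is a
ceph.conf stanza along these lines (addresses below are made up; the monmap
advertises the NATed public address while the daemon binds to the private
one):

[mon.mon1]
# address clients use to reach the mon (goes into the monmap)
public addr = 203.0.113.10
# address the daemon actually binds to behind the NAT
public bind addr = 10.0.0.5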
Unfortunately, I don't see that setting documented anywhere other than the
release notes. It's hard to find guidance for questions in that case, but
luckily you noted it in your blog post. I wish I knew what value to set it
to. I did use the deprecated one after moving to hammer a while
Sorry for the delay.
Here are the results when using bs=16k and rw=write.
(Note: I am running the command directly on an OSD host as root.)
fio /home/cephuser/write.fio
write-4M: (g=0): rw=write, bs=16K-16K/16K-16K/16K-16K, ioengine=rbd,
iodepth=32
fio-2.2.8
Starting 1 process
rbd engine: RBD
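(For reference, a job file matching the header above would look roughly like
this; the cephx user, pool, and image names are placeholders, not my actual
file:)

[write-4M]
# client, pool, and image below are placeholders
ioengine=rbd
clientname=admin
pool=rbd
rbdname=fio-test
rw=write
bs=16k
iodepth=32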
Below is the status of my disastrous self-inflicted journey. I will preface
this by admitting that it could not have been prevented by software
attempting to keep me from being stupid.
I have a production cluster with over 350 XFS-backed OSDs running Luminous.
We want to transition the cluster to
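Assuming the target is BlueStore (the usual move away from XFS-backed
filestore on Luminous), the per-OSD replacement loop I understand the docs
to recommend looks roughly like this (the OSD id and device are
placeholders):

# drain the OSD, then rebuild it in place as bluestore
ceph osd out 123
while ! ceph osd safe-to-destroy 123 ; do sleep 60 ; done
systemctl stop ceph-osd@123
ceph osd destroy 123 --yes-i-really-mean-it
ceph-volume lvm zap /dev/sdX
ceph-volume lvm create --bluestore --data /dev/sdX --osd-id 123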
OK, I fixed the address error. The service is able to start now, but ceph -s
hangs.
gentooserver ~ # ceph -s
^CError EINTR: problem getting command descriptions from mon.
I'm not sure how to fix the permissions issue. /var/run/ceph is a temporary
directory (recreated at boot), so I can't just chown it.
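For what it's worth, on systemd-based installs the run directory is
recreated at boot from a tmpfiles.d entry, so the ownership fix would go
there rather than in a one-off chown (the path and ceph user below are my
assumption; worth verifying against the Gentoo packaging):

# /etc/tmpfiles.d/ceph.conf
d /run/ceph 0770 ceph ceph -
# apply it now without a reboot:
systemd-tmpfiles --create /etc/tmpfiles.d/ceph.conf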
Hi list,
I'm getting the following error when trying to run Docker with an RBD volume
(either pre-existing or not):
"VolumeDriver.Create: Unable to create Ceph RBD Image"
Please could someone give me a clue as to how to debug this further and
resolve it?
Details of my platform:
1. ceph version
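While the details above are cut off, a sketch of a first check outside
Docker (the cephx user, pool, and image names are placeholders; use whatever
the volume plugin is configured with):

# can that client key create an image at all?
rbd --id dockeruser create --size 1024 rbd/test-vol
rbd --id dockeruser ls rbd
# then look at the plugin's own logging, e.g.
journalctl -u docker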
Hi all,
We have a 5-node Ceph cluster (Luminous 12.2.1) installed via ceph-ansible.
All servers have 16 x 1.5 TB SSD disks.
3 of these servers also act as MON+MGRs.
We don't have separate cluster and public networks; each node has 4
NICs bonded together (40G) serving both cluster and public traffic
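With everything on one bonded network, the matching ceph.conf is just a
single subnet (the subnet below is a placeholder):

[global]
public network = 10.10.0.0/24
# no "cluster network" line: replication traffic
# shares the same bonded interfaces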