For anyone interested or having similar issues, I figured out what was
wrong by running:
# radosgw-admin --cluster ceph bucket check \
    --bucket=12856/weird_bucket --check-objects > obj.check.out
I reviewed the 1M+ entries in the file; I wasn't really sure what the
output was about, but figured it was prob
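With that many entries, skimming the file first helps; a small sketch (obj.check.out is the file written by the redirect above):

```shell
# Sketch: skim the check output before wading through 1M+ entries.
# obj.check.out is the file written by the redirect above.
if [ -f obj.check.out ]; then
    wc -l obj.check.out       # how many entries were flagged
    head -n 20 obj.check.out  # sample a few to learn the entry format
fi
```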
Hi cephers,
Recently I ran a long-running test that simply untars the Linux kernel
source into two different directories and compares them, inside a KVM VM.
The two directories were two RBD volumes in a Ceph cluster.
I noticed that sometimes there was a diff, but when I manually compared
again the diff disappeared. It seem
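The untar-and-compare loop described above can be sketched as a small function; in the reported setup the two target directories would be the mount points of the two RBD volumes inside the VM (the tarball name and paths in the usage line are hypothetical):

```shell
# Sketch: extract the same tarball into two directories and compare them.
compare_extracts() {
    tarball=$1 dir1=$2 dir2=$3
    tar -xf "$tarball" -C "$dir1"
    tar -xf "$tarball" -C "$dir2"
    # -r recurses, -q only names the files that differ
    if diff -rq "$dir1" "$dir2"; then
        echo "no differences"
    else
        echo "differences found -- compare again to see if they persist"
    fi
}
```

Usage would look like `compare_extracts linux-4.12.tar.xz /mnt/rbd1 /mnt/rbd2`.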
This is the final release candidate for Luminous, the next long-term
stable release. Please note that this is still a *release candidate* and
not the final release; we're expecting the final Luminous release in
about a week's time. In the meantime, testing and feedback are appreciated.
Ceph Luminous (v12.
I am not sure if I am the only one seeing this, but there is an issue
with the collectd plugin and the Luminous release. I don't think I had
this in Kraken; it looks like something changed in the JSON? I also
reported it here: https://github.com/collectd/collectd/issues/2343. I
have no idea who
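For context, the collectd ceph plugin reads perf counters as JSON from the daemons' admin sockets, and that schema is presumably what changed between releases. A minimal collectd.conf fragment along these lines (the daemon name and socket path are assumptions):

```
LoadPlugin ceph
<Plugin ceph>
  # Each Daemon block points at a Ceph admin socket; the plugin pulls
  # perf counters as JSON from it.
  <Daemon "osd.0">
    SocketPath "/var/run/ceph/ceph-osd.0.asok"
  </Daemon>
</Plugin>
```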
Hi,
We ran a test with 1500 MTU and 9000 MTU on a small Ceph test cluster
(3 mons + 10 hosts with 2 SSDs each, one for the journal and one for data)
and found only minimal (~10%) performance improvements.
We tested with FIO for 4K, 8K and 64K block sizes, using RBD directly.
Anyone else have any experience with this?
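For what it's worth, a job file along these lines drives RBD directly through fio's rbd ioengine (the pool, image, and client names are assumptions; bs was varied across 4k/8k/64k in our runs):

```ini
[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=fio-test
runtime=60
time_based

[randwrite-4k]
rw=randwrite
bs=4k
iodepth=32
```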
Hi,
I have no numbers to share, nor did I test with Ceph specifically.
However, I saturated a 4x10G NIC, with and without jumbo frames.
In CPU consumption, jumbo frames used 2 to 3 times less CPU.
If you can use jumbo frames, just do it - there is no drawback, and the
gains are appreciable.
On 11/08/2017 2
I'm no expert, but another test might be to run iperf and watch your CPU
utilization while doing it.
You can set iperf to run between a couple of monitors and OSD servers.
Try setting it at 1500 or your switch's stock MTU,
then put the servers at 9000 and the switch at 9128 (for packet
overhead/managem
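A quick way to confirm what MTU is actually in effect on each box before and after the change (interface names vary per host; the iperf3 invocations shown in the comments are one way to run the test, not the only one):

```shell
# Print the MTU currently configured on every interface, so both ends
# of the iperf run can be verified before and after changing it.
for dev in /sys/class/net/*; do
    printf '%-12s mtu %s\n' "$(basename "$dev")" "$(cat "$dev/mtu")"
done

# Then, between a monitor and an OSD host (run on two hosts):
#   iperf3 -s                       # on one host
#   iperf3 -c <server-host> -t 30   # on the other; watch CPU with top/mpstat
```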
> On 11 Aug 2017 at 20:22, Sameer Tiwari wrote:
>
> Hi,
>
> We ran a test with 1500 MTU and 9000MTU on a small ceph test cluster (3mon +
> 10 hosts with 2 SSD each, one for journal and one for data) and found minimal
> ~10% perf improvements.
>
> We tested with FI
The cluster is OK and the mgr is active, but I'm unable to get the
dashboard to start. I see the following errors in the logs:
2017-08-12 15:40:07.805991 7f508effd500 0 pidfile_write: ignore empty
--pid-file
2017-08-12 15:40:07.810124 7f508effd500 -1 auth: unable to find a keyring
on /var/lib/ceph/mgr/ceph-0/key
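The log suggests the mgr cannot find its keyring. One way this is commonly resolved is to create one in the default location; the mgr id (0) is taken from the log, while the full keyring path and the caps shown are the usual defaults and stated here as assumptions:

```
# Command sketch (requires admin access to the cluster):
ceph auth get-or-create mgr.0 \
    mon 'allow profile mgr' osd 'allow *' mds 'allow *' \
    -o /var/lib/ceph/mgr/ceph-0/keyring
chown ceph:ceph /var/lib/ceph/mgr/ceph-0/keyring
```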