Hi!
1. Use it at your own risk. I'm not responsible for any damage you may cause by
running this script.
2. What is it for?
A Ceph OSD daemon has a so-called 'admin socket': a Unix socket, local to the
OSD host, that we can
use to issue commands to that OSD. The script connects to a list of OSD hosts
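For anyone unfamiliar with it, the admin socket can also be poked by hand with the ceph CLI; a rough sketch (the socket path depends on your cluster name and OSD id):

```shell
# List the commands this particular daemon understands
sudo ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok help

# A couple of commonly useful ones:
sudo ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump    # internal performance counters
sudo ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show  # the daemon's running configuration
```

These need to be run on the host where that OSD lives, since the socket is local.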
Ok, just to update everyone: after moving all the PG directories that were no
longer valid PGs off the OSD, I was able to start it and the cluster is
back to healthy.
I'm going to trigger a deep scrub of osd.3 to be safe prior to deleting any
of those PGs though.
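For the archives, triggering the deep scrub mentioned above looks roughly like this (osd id taken from the thread):

```shell
# Ask osd.3 to deep-scrub its PGs
ceph osd deep-scrub 3

# Watch scrub progress and results in the cluster log
ceph -w
```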
If I understand the gist of
My hardware setup:
One OSD host
- EL6
- 10 spinning disks, each configured as
  - sda (hpsa0): 450GB (0%) RAID-0 == 1 x 450GB 15K SAS/6
- 31GB memory
- 1Gb/s Ethernet link
Monitor and gateway hosts have the same configuration with just one disk.
I am benchmarking newstore
Hi
Is it safe to tweak the value of `mon pg warn max object skew` from the
default of 10 to a higher value of 20-30 or so? What would be a
safe upper limit for this value?
Also what does exceeding this ratio signify in terms of the cluster
health? We are sometimes hitting this limit in
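For reference, the ratio that this option guards can be computed by hand: the monitor warns when a pool's objects-per-PG exceeds the skew value times the cluster-wide average objects-per-PG. A sketch with made-up numbers (on a real cluster they would come from `ceph df detail` and the pools' `pg_num`):

```shell
#!/bin/sh
# Hypothetical figures for illustration only.
pool_objects=1200000    # objects in the pool that triggers the warning
pool_pgs=64             # pg_num of that pool
total_objects=3000000   # objects across the whole cluster
total_pgs=2048          # sum of pg_num over all pools

# skew = (pool objects per PG) / (cluster-wide average objects per PG)
skew=$(awk -v po="$pool_objects" -v pp="$pool_pgs" \
           -v to="$total_objects" -v tp="$total_pgs" \
           'BEGIN { printf "%.1f", (po / pp) / (to / tp) }')
echo "object skew for this pool: $skew"
```

With these numbers the skew comes out at 12.8, which would already trip the default threshold of 10.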
On 05/20/2015 11:02 AM, Robert LeBlanc wrote:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
I've downloaded the new tarball, placed it in rpmbuild/SOURCES then
with the extracted spec file in rpmbuild/SPEC, I update it to the new
version and then rpmbuild -ba program.spec. If you install the
Hi Warren,
Following our brief chat after the Ceph Ops session at the Vancouver
summit today, I added a few more notes to the etherpad
(https://etherpad.openstack.org/p/YVR-ops-ceph).
I wonder whether you'd considered setting up crush layouts so you can
have multiple cinder AZs or volume-types
Hi,
Just to add, there’s also a collectd plugin at
https://github.com/rochaporto/collectd-ceph
Things to check when you have slow read performance:
*) How much fragmentation is there on those XFS partitions? With some workloads you
get high values
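A quick way to check that fragmentation figure (read-only, safe to run on a mounted filesystem; the device name is just an example):

```shell
# Report the file fragmentation factor of an XFS filesystem
sudo xfs_db -c frag -r /dev/sda4

# If the value is high, files can be defragmented online with xfs_fsr, e.g.:
# sudo xfs_fsr -v /var/lib/ceph/osd/ceph-0
```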
On 05/21/2015 08:47 AM, Brad Hubbard wrote:
On 05/20/2015 11:02 AM, Robert LeBlanc wrote:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
I've downloaded the new tarball, placed it in rpmbuild/SOURCES then
with the extracted spec file in rpmbuild/SPEC, I update it to the new
version and then
This could be a stupid/bad question: is a three-tier cache/mid/cold setup supported?
Example would be:
1. Fast NVMe drives —(write-back)— 2. Mid-grade MLC SSDs for the primary working
set —(write-back)— 3. Super-cold EC pool for the cheapest/deepest/oldest data
Theoretically, that middle tier of quality
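As far as I know, only a single cache tier on top of a base pool is documented. For reference, that two-tier layout looks roughly like this (pool names and PG counts are made up, and the cache pool is assumed to land on the fast drives via a suitable crush rule):

```shell
# Base pool: erasure-coded, for the cold data
ceph osd pool create cold-ec 128 128 erasure

# Cache pool on the fast drives
ceph osd pool create hot-cache 128

# Attach the cache tier and put it in writeback mode
ceph osd tier add cold-ec hot-cache
ceph osd tier cache-mode hot-cache writeback
ceph osd tier set-overlay cold-ec hot-cache
```

Whether a second cache layer can be stacked on top of that is exactly the open question.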
Hi Brad!
Thanks for pointing out that for CentOS 6 the fix is included! Good to
know that!
But I think the original package doesn't support RBD by default, so
it has to be rebuilt, am I right?
If that's correct, then starting from there and building a new RPM with
RBD support is
On 05/19/2015 11:31 AM, Srikanth Madugundi wrote:
Hi,
I am seeing a write performance hit with small files (60K) using radosgw.
The radosgw is configured to run with 600 threads. Here is the write
speed I get with file sizes of 60K:
# sudo ceph -s
cluster
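For reference, the 600-thread figure mentioned above is normally the `rgw thread pool size` option in ceph.conf; a sketch (the client section name is a guess and depends on how the gateway instance is named):

```ini
[client.radosgw.gateway]
rgw thread pool size = 600
```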