Hello
Could you help me, please?
ceph status
cluster 4da1f6d8-ca10-4bfa-bff7-c3c1cdb3f888
health HEALTH_WARN 229 pgs peering; 102 pgs stuck inactive; 236 pgs stuck unclean; 1 mons down, quorum 0,1 st1,st2
monmap e3: 3 mons at
Hi,
perhaps your filesystem is too full?
df -k
du -hs /var/lib/ceph/mon/ceph-st3/store.db
What output/error message do you get if you start the mon in the foreground?
ceph-mon -i st3 -d -c /etc/ceph/ceph.conf
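The mon log may also show why it is down; a quick check, assuming the default log path:
# last lines of the st3 monitor log
tail -n 50 /var/log/ceph/ceph-mon.st3.log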
Udo
Dear all,
I am following this guide http://ceph.com/docs/master/radosgw/config/
to set up Object Storage on CentOS 6.5.
My problem is that when I try to start the service as indicated here:
http://ceph.com/docs/master/radosgw/config/#restart-services-and-start-the-gateway
I get nothing
#
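For reference, what I run is the default from that guide (the init script name, section name, and log path below assume a stock install):
# start the gateway and check its log
sudo /etc/init.d/ceph-radosgw start
sudo tail -n 50 /var/log/ceph/radosgw.log
# running it in the foreground with debug logging may surface the actual error
radosgw -d -c /etc/ceph/ceph.conf -n client.radosgw.gateway --debug-rgw 20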
Hi,
does ceph -s also get stuck on a missing keyring?
Do you have a keyring like:
cat /etc/ceph/keyring
[client.admin]
key = AQCdkHZR2NBYMBAATe/rqIwCI96LTuyS3gmMXp==
Or do you have another keyring defined in ceph.conf?
global section - keyring = /etc/ceph/keyring
The key is in ceph - see
ceph
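If the keyring file is missing entirely, it can be rewritten from the cluster; a minimal sketch, assuming client.admin exists and the monitors are reachable:
# export the admin credentials to the default keyring location
ceph auth get client.admin -o /etc/ceph/keyring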
1) ceph -s is working as expected
# ceph -s
cluster c465bdb2-e0a5-49c8-8305-efb4234ac88a
health HEALTH_OK
monmap e1: 1 mons at {master=192.168.0.10:6789/0}, election epoch 1, quorum 0 master
mdsmap e111: 1/1/1 up {0=master=up:active}
osdmap e114: 2 osds: 2 up, 2 in
Hi Cephers,
I am trying to configure ceph rbd as a backend for cinder and glance by
following the steps mentioned in:
http://ceph.com/docs/master/rbd/rbd-openstack/
Before I start, all OpenStack services are running normally and the ceph
cluster health shows HEALTH_OK.
But once I am done with all
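For reference, the settings being applied from that guide look roughly like the following (pool and user names are the guide's defaults; the secret UUID is environment-specific, so all values here are examples):
# /etc/cinder/cinder.conf
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = cinder
rbd_secret_uuid = <libvirt-secret-uuid>
glance_api_version = 2
# /etc/glance/glance-api.conf
default_store = rbd
rbd_store_user = glance
rbd_store_pool = images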
I have a performance problem I would like advice on.
I have the following sub-optimal setup:
* 2 Servers (WTFM008 WTFM009)
* HP Proliant DL180
* SmartArray G6 P410 raid-controller
* 4x 500GB RAID5 (seq writes = 230MB/s)
* CentOS 6.5 x86_64
* 2,000,000 files (MS Word), with no directory
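(For reference, the 230MB/s sequential figure comes from a plain dd-style test, roughly along these lines; the target path is an example:)
# sequential write throughput, bypassing the page cache
dd if=/dev/zero of=/data/ddtest bs=1M count=4096 oflag=direct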
Dear Ceph experts,
We've found that a single client running rados bench can drive other
users, e.g. RBD users, into slow requests.
Starting with a cluster that is not particularly busy, e.g.:
2014-02-15 23:14:33.714085 mon.0 xx:6789/0 725224 : [INF] pgmap
v6561996: 27952 pgs: 27952
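The bench itself is nothing exotic; an ordinary write test along these lines (pool name and concurrency are illustrative):
# 60-second write benchmark with 16 concurrent 4MB ops
rados bench -p testpool 60 write -t 16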
Hi,
I created a 1TB rbd image formatted with VMFS (VMware) for an ESX server - but
with a wrong order (25 instead of 22 ...). The rbd man page tells me that for
export/import/cp, rbd will use the order of the source image.
Is there a way to change the order of an rbd image by doing some conversion?
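For example, would an export/import round trip like the following do it, with the order set explicitly on the new image (image names are placeholders)?
# copy the data out, then back in with the desired order
rbd export rbd/esx-vmfs /tmp/esx-vmfs.img
rbd import --order 22 /tmp/esx-vmfs.img rbd/esx-vmfs-22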
Hi Jean,
Here is the output of ceph auth list for client.cinder
client.cinder
key: AQCKaP9ScNgiMBAAwWjFnyL69rBfMzQRSHOfoQ==
caps: [mon] allow r
caps: [osd] allow class-read object_prefix rbd_children, allow rwx
pool=volumes, allow rx pool=images
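(Side note: if these caps ever need correcting, ceph auth caps can rewrite them in place instead of recreating the user; for example, to set exactly the caps shown above:)
ceph auth caps client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'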
Here is the output of