[ceph-users] Issues with CentOS RDO Liberty (OpenStack) and Ceph Repo (dependency resolution failed)

2015-11-06 Thread c...@dolphin-it.de
Dear Ceph-users, I just set up a new CentOS 7 Ceph and OpenStack cluster. When "ceph-deploy install compute2" starts to set up the Ceph repo, it fails at dependency resolution: === Loaded plugins: fastestmirror, langpacks, priorities Loading mirror speeds from cached hostfile * base:
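One thing worth ruling out (an assumption, since the preview cuts off before the actual dependency error) is the yum priorities plugin letting another repo shadow the Ceph packages. A minimal check, assuming the stock /etc/yum.repos.d/ceph.repo layout:

  yum clean all
  # see whether the Ceph repo carries a priority at all
  grep priority /etc/yum.repos.d/ceph.repo
  # give the Ceph repo the highest priority (1 = highest)
  sudo sed -i 's/^priority=.*/priority=1/' /etc/yum.repos.d/ceph.repo
  yum repolist enabled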

Re: [ceph-users] Ceph OSDs with bcache experience

2015-11-06 Thread Wido den Hollander
On 11/05/2015 11:03 PM, Michal Kozanecki wrote: > Why did you guys go with partitioning the SSD for ceph journals, instead of > just using the whole SSD for bcache and leaving the journal on the filesystem > (which itself is on top of bcache)? Was there really a benefit to separating the > journals

[ceph-users] Soft removal of RBD images

2015-11-06 Thread Wido den Hollander
Hi, Since Ceph Hammer we can protect pools from being removed from the cluster, but we can't protect against this: $ rbd ls|xargs -n 1 rbd rm That would remove all RBD images that are not currently open from the cluster. This requires direct access to your Ceph cluster and keys with the proper permission, but
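One workaround available today is a protected snapshot, since rbd refuses to remove an image that still has snapshots. A sketch with made-up pool/image names:

  rbd snap create rbd/vm-disk-1@guard
  rbd snap protect rbd/vm-disk-1@guard
  # "rbd rm rbd/vm-disk-1" now fails until the guard snapshot
  # is explicitly unprotected and purged:
  rbd snap unprotect rbd/vm-disk-1@guard
  rbd snap purge rbd/vm-disk-1
  rbd rm rbd/vm-disk-1

This only raises the bar (an attacker with the right keys can still run the extra steps), which is presumably why the thread asks for a proper soft-delete.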

Re: [ceph-users] Ceph Openstack deployment

2015-11-06 Thread Vasiliy Angapov
There must be something in /var/log/cinder/volume.log or /var/log/nova/nova-compute.log that points to the problem. Can you post it here? 2015-11-06 20:14 GMT+08:00 Iban Cabrillo : > Hi Vasiliy, > Thanks, but I still see the same error: > > cinder.conf (of course I just

Re: [ceph-users] Ceph Openstack deployment

2015-11-06 Thread Iban Cabrillo
Hi Vasiliy, Of course; from cinder-volume.log: 2015-11-06 12:28:52.865 366 WARNING oslo_config.cfg [req-41a4-4bec-40d2-a7c1-6e8d73644b4c b7aadbb4a85745feb498b74e437129cc ce2dd2951bd24c1ea3b43c3b3716f604 - - -] Option "lock_path" from group "DEFAULT" is deprecated. Use option "lock_path" from
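For reference, that oslo_config deprecation warning normally points at the [oslo_concurrency] group; it is harmless but is silenced by moving the option there (the path below is an example, not from the thread):

  [oslo_concurrency]
  lock_path = /var/lib/cinder/tmp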

Re: [ceph-users] Soft removal of RBD images

2015-11-06 Thread Gregory Farnum
On Fri, Nov 6, 2015 at 2:03 AM, Wido den Hollander wrote: > Hi, > > Since Ceph Hammer we can protect pools from being removed from the > cluster, but we can't protect against this: > > $ rbd ls|xargs -n 1 rbd rm > > That would remove all not opened RBD images from the cluster. > >

Re: [ceph-users] Group permission problems with CephFS

2015-11-06 Thread Aaron Ten Clay
I'm seeing similar behavior as well.

  -rw-rw-r-- 1 testuser testgroup 6 Nov  6 07:41 testfile
  aaron@testhost$ groups
  ... testgroup ...
  aaron@testhost$ cat > testfile
  -bash: testfile: Permission denied

Running version 9.0.2. Were you able to make any progress on this? Thanks, -Aaron On Tue,

Re: [ceph-users] Group permission problems with CephFS

2015-11-06 Thread Burkhard Linke
Hi, On 11/06/2015 04:52 PM, Aaron Ten Clay wrote: I'm seeing similar behavior as well.

  -rw-rw-r-- 1 testuser testgroup 6 Nov  6 07:41 testfile
  aaron@testhost$ groups
  ... testgroup ...
  aaron@testhost$ cat > testfile
  -bash: testfile: Permission denied

Running version 9.0.2. Were you able to

Re: [ceph-users] Ceph Openstack deployment

2015-11-06 Thread Iban Cabrillo
Hi Vasiliy, Thanks, but I still see the same error: cinder.conf (of course I just restarted the cinder-volume service): # default volume type to use (string value) [rbd-cephvolume] rbd_user = cinder rbd_secret_uuid = 67a6d4a1-e53a-42c7-9bc9-xxx volume_backend_name=rbd volume_driver =

[ceph-users] ceph-deploy on lxc container - 'initctl: Event failed'

2015-11-06 Thread Bogdan SOLGA
Hello, everyone! I just tried to create a new Ceph cluster, using 3 LXC containers as monitors, and the 'ceph-deploy mon create-initial' command fails for each of the monitors with an 'initctl: Event failed' error, when running the following command: [ceph-mon-01][INFO ] Running command: sudo

[ceph-users] osd fails to start, rbd hangs

2015-11-06 Thread Philipp Schwaha
Hi, I have an issue with my (small) ceph cluster after an osd failed. ceph -s reports the following:

  cluster 2752438a-a33e-4df4-b9ec-beae32d00aad
   health HEALTH_WARN
          31 pgs down
          31 pgs peering
          31 pgs stuck inactive
          31 pgs stuck unclean
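The usual first diagnostics for PGs stuck down/peering, using the stock Ceph CLI (the PG id below is a placeholder):

  ceph health detail
  ceph pg dump_stuck inactive
  # query one of the listed PGs to see which OSDs it is waiting on
  ceph pg 2.5 query
  ceph osd tree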

Re: [ceph-users] osd fails to start, rbd hangs

2015-11-06 Thread Gregory Farnum
http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/ :) On Friday, November 6, 2015, Philipp Schwaha wrote: > Hi, > > I have an issue with my (small) ceph cluster after an osd failed. > ceph -s reports the following: > cluster

Re: [ceph-users] Ceph Openstack deployment

2015-11-06 Thread Iban Cabrillo
Hi, One more step in debugging this issue (the hypervisor/nova-compute node is Xen 4.4.2): I think the problem is that libvirt is not getting the correct user or credentials to access the pool; in the instance qemu log I see: xen be: qdisk-51760: error: Could not open
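On a libvirt-managed host the standard fix is to register the Ceph key as a libvirt secret whose UUID matches rbd_secret_uuid. A sketch of the documented libvirt procedure (whether Xen's qdisk backend honors it the same way KVM does is not confirmed in the thread):

  # secret.xml -- the uuid must match rbd_secret_uuid in cinder.conf
  <secret ephemeral='no' private='no'>
    <uuid>67a6d4a1-e53a-42c7-9bc9-xxx</uuid>
    <usage type='ceph'>
      <name>client.cinder secret</name>
    </usage>
  </secret>

  virsh secret-define --file secret.xml
  virsh secret-set-value --secret 67a6d4a1-e53a-42c7-9bc9-xxx \
      --base64 "$(ceph auth get-key client.cinder)"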

Re: [ceph-users] Ceph Openstack deployment

2015-11-06 Thread Vasiliy Angapov
In cinder.conf you should place these options: rbd_user = cinder rbd_secret_uuid = 67a6d4a1-e53a-42c7-9bc9-xxx in the [rbd-cephvolume] section instead of [DEFAULT]. 2015-11-06 19:45 GMT+08:00 Iban Cabrillo : > Hi, > One more step debugging this issue
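Put together, a minimal RBD backend layout would look roughly like this; the rbd_pool and rbd_ceph_conf values are illustrative placeholders, not from the thread:

  [DEFAULT]
  enabled_backends = rbd-cephvolume

  [rbd-cephvolume]
  volume_backend_name = rbd
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_pool = volumes
  rbd_ceph_conf = /etc/ceph/ceph.conf
  rbd_user = cinder
  rbd_secret_uuid = 67a6d4a1-e53a-42c7-9bc9-xxx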

Re: [ceph-users] osd fails to start, rbd hangs

2015-11-06 Thread Philipp Schwaha
On 11/06/2015 09:25 PM, Gregory Farnum wrote: > http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/ > > :) > Thanks, I tried to follow the advice to "... start that ceph-osd and things will recover." for the better part of the last two days, but did not succeed in
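To see why the ceph-osd process won't come up, running it in the foreground with elevated logging is usually the quickest route; OSD id 0 and the log path are placeholders:

  # -d: run in foreground, log to stderr
  ceph-osd -i 0 -d --debug-osd 20
  # or inspect the existing log for the failing assertion/error
  tail -n 200 /var/log/ceph/ceph-osd.0.log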

Re: [ceph-users] ceph-deploy on lxc container - 'initctl: Event failed'

2015-11-06 Thread Robert LeBlanc
I've put monitors in LXC, but I haven't done it with ceph-deploy. I've had no problems with it. - Robert LeBlanc PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1 On Fri, Nov 6, 2015 at 12:55 PM, Bogdan SOLGA
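For anyone wanting to skip ceph-deploy entirely, the manual monitor bootstrap from the Ceph docs condenses to roughly the following; the id, IP, and fsid are placeholders, and a complete setup also needs the admin and bootstrap keyrings:

  # initial mon keyring and monmap
  ceph-authtool --create-keyring /tmp/mon.keyring --gen-key -n mon. --cap mon 'allow *'
  monmaptool --create --add ceph-mon-01 10.0.0.11 --fsid <fsid> /tmp/monmap
  # initialize and start the monitor (id "ceph-mon-01")
  ceph-mon --mkfs -i ceph-mon-01 --monmap /tmp/monmap --keyring /tmp/mon.keyring
  ceph-mon -i ceph-mon-01

Running the daemon by hand like this also sidesteps upstart/initctl inside the container, which appears to be what ceph-deploy is tripping over.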

Re: [ceph-users] osd fails to start, rbd hangs

2015-11-06 Thread Iban Cabrillo
Hi Philipp, I see you only have 2 osds. Have you checked that your pool size ("ceph osd pool get <pool> size") is 2, and that min_size is 1? Cheers, I 2015-11-06 22:05 GMT+01:00 Philipp Schwaha : > On 11/06/2015 09:25 PM, Gregory Farnum wrote: > > >
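Spelled out, the checks (and the change, if needed) would be roughly as follows; "rbd" stands in for the affected pool:

  ceph osd pool get rbd size
  ceph osd pool get rbd min_size
  # with only 2 OSDs, min_size 1 lets I/O continue after one failure
  ceph osd pool set rbd min_size 1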

[ceph-users] v9.2.0 Infernalis released

2015-11-06 Thread Sage Weil
[I'm going to break my own rule and do this on a Friday only because this has been built and in the repos for a couple of days now; I've just been traveling and haven't had time to announce it.] This major release will be the foundation for the next stable series. There have been some major