Re: [ceph-users] Help needed porting Ceph to RSockets

2013-08-09 Thread Matthew Anderson
So I've had a chance to revisit this since Bécholey Alexandre was kind enough to let me know how to compile Ceph with the RDMACM library (thank you again!). At this stage it compiles and runs, but there appears to be a problem with calling rshutdown in Pipe, as it seems to just wait forever for the

Re: [ceph-users] Openstack glance ceph rbd_store_user authentification problem

2013-08-09 Thread Steffen Thorhauer
Hi, thanks for your answers. It was my fault. I had configured everything at the beginning of the [DEFAULT] section of glance-api.conf and overlooked the default settings later in the file (the default Ubuntu glance-api.conf has a default RBD Store Options section further down). On 08/08/2013 05:04 PM, Josh Durgin wrote:

Re: [ceph-users] ceph-deploy behind corporate firewalls

2013-08-09 Thread Luc Dumaine
Hi, I was able to use ceph-deploy behind a proxy by defining the appropriate environment variables used by wget, e.g. on Ubuntu just add to /etc/environment: http_proxy=http://host:port ftp_proxy=http://host:port https_proxy=http://host:port Regards, Luc. - Mail original - De:
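For reference, the proxy setup described above can be sketched as a shell fragment (host:port is a placeholder for the actual corporate proxy endpoint):

```shell
# Proxy variables honored by wget, which ceph-deploy relies on to
# fetch release keys and packages. On Ubuntu these lines can also be
# placed in /etc/environment (without "export") to apply system-wide.
export http_proxy=http://host:port
export https_proxy=http://host:port
export ftp_proxy=http://host:port
```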

[ceph-users] pgs stuck unclean -- how to fix? (fwd)

2013-08-09 Thread Jeff Moskow
Hi, I have a 5 node ceph cluster that is running well (no problems using any of the rbd images and that's really all we use). I have replication set to 3 on all three pools (data, metadata and rbd). ceph -s reports: health HEALTH_WARN 3 pgs degraded;
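To narrow down which placement groups are affected, the stuck PGs can be listed directly. A minimal sketch, assuming a working admin keyring on the node (cluster commands, so this only runs against a live cluster):

```shell
# Show per-PG detail behind the HEALTH_WARN summary.
ceph health detail
# List PGs stuck unclean or stale, along with the OSDs they map to,
# so the problem OSDs can be identified.
ceph pg dump_stuck unclean
ceph pg dump_stuck stale
```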

Re: [ceph-users] pgs stuck unclean -- how to fix? (fwd)

2013-08-09 Thread Wido den Hollander
On 08/09/2013 10:58 AM, Jeff Moskow wrote: Hi, I have a 5 node ceph cluster that is running well (no problems using any of the rbd images and that's really all we use). I have replication set to 3 on all three pools (data, metadata and rbd). ceph -s reports:

Re: [ceph-users] qemu-1.4.0 and onwards, linux kernel 3.2.x, ceph-RBD, heavy I/O leads to kernel_hung_tasks_timout_secs message and unresponsive qemu-process, [Qemu-devel] [Bug 1207686]

2013-08-09 Thread Oliver Francke
Hi Josh, I just opened http://tracker.ceph.com/issues/5919 with all collected information, including the debug log. Hope it helps, Oliver. On 08/08/2013 07:01 PM, Josh Durgin wrote: On 08/08/2013 05:40 AM, Oliver Francke wrote: Hi Josh, I have a session logged with:

[ceph-users] All old pgs in stale after recreating all osds

2013-08-09 Thread Da Chun
On CentOS 6.4, Ceph 0.61.7. I had a ceph cluster of 9 OSDs. Today I destroyed all of the OSDs and recreated 6 new ones. Now all the old PGs are stale. [root@ceph0 ceph]# ceph -s health HEALTH_WARN 192 pgs stale; 192 pgs stuck inactive; 192 pgs stuck stale; 192 pgs stuck unclean
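Because every OSD that held the old PGs was destroyed, those PGs have no surviving copy and will stay stale until they are recreated empty; their data is gone. A hedged sketch of the recovery steps, assuming the old data is not needed (cluster commands, not runnable standalone):

```shell
# List the stale PGs, then force each one to be recreated (empty)
# on the new OSDs. WARNING: force_create_pg discards the PG's data.
ceph pg dump_stuck stale
ceph pg force_create_pg <pgid>   # repeat for each stale PG
```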

Re: [ceph-users] pgs stuck unclean -- how to fix? (fwd)

2013-08-09 Thread Jeff Moskow
Thanks for the suggestion. I had tried stopping each OSD for 30 seconds, then restarting it, waiting 2 minutes and then doing the next one (all OSDs were eventually restarted). I tried this twice.

[ceph-users] mounting a pool via fuse

2013-08-09 Thread Georg Höllrigl
Hi, I'm using ceph 0.61.7. When using ceph-fuse, I couldn't find a way to mount only one pool. Is there a way to mount a pool, or is it simply not supported? Kind Regards, Georg
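For context: ceph-fuse mounts the CephFS namespace, not a pool, so there is no per-pool mount. The closest approximation is restricting the mount to a subdirectory; a sketch, where /mydir is a hypothetical CephFS directory (requires a live cluster):

```shell
# -r mounts only the given subtree of the CephFS namespace instead
# of the filesystem root. Data placement for a directory can then be
# steered to a particular pool via file layouts.
ceph-fuse -r /mydir /mnt/mydir
```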

[ceph-users] do we need to install ceph on KVM hypervisor for cloudstack-ceph integration

2013-08-09 Thread Suresh Sadhu
Hi, to access the storage cluster from a KVM hypervisor, what packages need to be installed on the KVM hypervisor (do we need to install qemu and ceph on the KVM host for cloudstack-ceph integration)? My hypervisor version is RHEL 6.3. Regards, Sadhu

Re: [ceph-users] do we need to install ceph on KVM hypervisor for cloudstack-ceph integration

2013-08-09 Thread Wido den Hollander
On 08/09/2013 01:51 PM, Suresh Sadhu wrote: Hi, to access the storage cluster from a KVM hypervisor, what packages need to be installed on the KVM hypervisor (do we need to install qemu and ceph on the KVM host for cloudstack-ceph integration)? You only need librbd and librados. The Ceph CLI tools and
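In package terms, on a RHEL 6.3 hypervisor that would look roughly like the following sketch (package names are the usual Ceph client RPM names and should be verified against the repository in use):

```shell
# The hypervisor only needs the client libraries plus a qemu build
# with RBD support; no OSD/MON daemons run on the KVM host.
yum install librados2 librbd1
# Quick sanity check: does the installed qemu-img know the rbd format?
qemu-img --help | grep rbd
```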

Re: [ceph-users] qemu-1.4.0 and onwards, linux kernel 3.2.x, ceph-RBD, heavy I/O leads to kernel_hung_tasks_timout_secs message and unresponsive qemu-process, [Qemu-devel] [Bug 1207686]

2013-08-09 Thread Andrei Mikhailovsky
I can confirm that I am having similar issues with Ubuntu VM guests using fio with bs=4k direct=1 numjobs=4 iodepth=16. Occasionally I see hung tasks, occasionally the guest VM stops responding without leaving anything in the logs, and sometimes I see a kernel panic on the console. I typically leave
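The workload described above can be reproduced inside a guest with a command along these lines (job name, target file, and size are placeholders; requires the fio binary and scratch space):

```shell
# Direct 4k random writes, 4 jobs, queue depth 16 -- the pattern
# reported to trigger hung-task messages on RBD-backed guests.
fio --name=rbdtest --filename=/root/testfile --size=1G \
    --bs=4k --direct=1 --numjobs=4 --iodepth=16 \
    --rw=randwrite --ioengine=libaio --group_reporting
```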

Re: [ceph-users] qemu-1.4.0 and onwards, linux kernel 3.2.x, ceph-RBD, heavy I/O leads to kernel_hung_tasks_timout_secs message and unresponsive qemu-process, [Qemu-devel] [Bug 1207686]

2013-08-09 Thread Stefan Hajnoczi
On Fri, Aug 09, 2013 at 03:05:22PM +0100, Andrei Mikhailovsky wrote: I can confirm that I am having similar issues with ubuntu vm guests using fio with bs=4k direct=1 numjobs=4 iodepth=16. Occasionally i see hang tasks, occasionally guest vm stops responding without leaving anything in the

[ceph-users] CEPH-DEPLOY TRIALS/EVALUATION RESULT ON CEPH VERSION 61.7

2013-08-09 Thread Aquino, BenX O
CEPH-DEPLOY EVALUATION ON CEPH VERSION 61.7 ADMINNODE: root@ubuntuceph900athf1:~# ceph -v ceph version 0.61.7 (8f010aff684e820ecc837c25ac77c7a05d7191ff) root@ubuntuceph900athf1:~# SERVERNODE: root@ubuntuceph700athf1:/etc/ceph# ceph -v ceph version 0.61.7 (8f010aff684e820ecc837c25ac77c7a05d7191ff)

Re: [ceph-users] ceph-deploy behind corporate firewalls

2013-08-09 Thread Alfredo Deza
On Fri, Aug 9, 2013 at 1:34 AM, Luc Dumaine lduma...@sitiv.fr wrote: Hi, I was able to use ceph-deploy behind a proxy, by defining the appropriate environment variables used by wget.. I.E. on ubuntu just add to /etc/environnement: http_proxy=http://host:port ftp_proxy=http://host:port

Re: [ceph-users] STGT targets.conf example

2013-08-09 Thread Dan Mick
Awesome. Thanks Darryl. Do you want to propose a fix to stgt, or shall I? On Aug 8, 2013 7:21 PM, Darryl Bond db...@nrggos.com.au wrote: Dan, I found that the tgt-admin perl script looks for a local file: if (-e $backing_store && ! -d $backing_store && $can_alloc == 1) { A bit nasty, but I

Re: [ceph-users] Why is my mon store.db is 220GB?

2013-08-09 Thread Joao Eduardo Luis
On 07/08/13 15:14, Jeppesen, Nelson wrote: Joao, have you had a chance to look at my monitor issues? I ran 'ceph-mon -i FOO --compact' last week but it did not improve disk usage. Let me know if there's anything else I can dig up. The monitor is still at 0.67-rc2 with the OSDs at 0.61.7. Hi
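For reference, the compaction attempt mentioned above can be run offline, or triggered automatically at daemon startup via ceph.conf; a sketch of both (command and config fragment against a real cluster):

```shell
# Compact the monitor's leveldb store offline (stop the monitor first):
ceph-mon -i FOO --compact

# Or compact the store every time the monitor starts, by adding to
# the [mon] section of ceph.conf:
#   mon compact on start = true
```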

[ceph-users] [ANN] ceph-deploy v1.2 has been released!

2013-08-09 Thread Alfredo Deza
I am very pleased to announce the release of ceph-deploy to the Python Package Index. The OS packages are yet to come, I will make sure to update this thread when they do. For now, if you are familiar with Python install tools, you can install directly from PyPI with pip or easy_install:
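For those unfamiliar with the Python install tools mentioned, the PyPI install is a one-liner; a sketch (network access required, ideally inside a virtualenv):

```shell
# Install ceph-deploy from the Python Package Index with pip,
pip install ceph-deploy
# or with setuptools:
easy_install ceph-deploy
# then confirm it is on the PATH:
ceph-deploy --version
```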

Re: [ceph-users] [ANN] ceph-deploy v1.2 has been released!

2013-08-09 Thread Sébastien RICCIO
Hi! Awesome :)) Thanks for such great work! Cheers, Sébastien On 10.08.2013 02:52, Alfredo Deza wrote: I am very pleased to announce the release of ceph-deploy to the Python Package Index. The OS packages are yet to come, I will make sure to update this thread when they do. For now, if