Re: [ceph-users] All VMs cannot start up after rebooting all the Ceph hosts.

2018-12-04 Thread linghucongsong
Thanks to all! I might have found the reason. It looks like it is related to the bug below: https://bugs.launchpad.net/nova/+bug/1773449 At 2018-12-04 23:42:15, "Ouyang Xu" wrote: Hi linghucongsong: I have hit this issue before; you can try to fix it as below: 1. use r
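
[The quoted fix is cut off by the archive. A commonly reported remedy for VMs that cannot start after an unclean Ceph reboot is clearing stale RBD exclusive locks; that is an assumption here, not confirmed by the truncated reply. A minimal sketch with hypothetical pool, image, lock-id and locker names:

    # list locks left on the VM's disk image (pool/image names are hypothetical)
    rbd lock ls vms/instance-0001_disk
    # remove a stale lock using the lock id and locker printed by the command above
    rbd lock rm vms/instance-0001_disk "auto 123456789" client.4567
]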

Re: [ceph-users] All VMs cannot start up after rebooting all the Ceph hosts.

2018-12-04 Thread linghucongsong
c. 2018 at 09:49, linghucongsong wrote: Hi all! I have a Ceph test environment using Ceph with OpenStack. There are some VMs running on OpenStack. It is just a test environment. My Ceph version is 12.2.4. Yesterday I rebooted all the Ceph hosts; before this I did not shut down the VMs on OpenStack

[ceph-users] All VMs cannot start up after rebooting all the Ceph hosts.

2018-12-04 Thread linghucongsong
Hi all! I have a Ceph test environment using Ceph with OpenStack. There are some VMs running on OpenStack. It is just a test environment. My Ceph version is 12.2.4. Yesterday I rebooted all the Ceph hosts; before this I did not shut down the VMs on OpenStack. When all the hosts boot up and the

Re: [ceph-users] Can Luminous Ceph RGW only run with civetweb?

2018-09-20 Thread linghucongsong
-09-20 11:29:28, "Konstantin Shalygin" wrote: >On 09/20/2018 10:09 AM, linghucongsong wrote: >> By the way, I use keepalived+LVS for load balancing and HA. > >This is good. But in that case I wonder why fastcgi+nginx, instead >

Re: [ceph-users] Can Luminous Ceph RGW only run with civetweb?

2018-09-19 Thread linghucongsong
Thank you, Shalygin, for sharing. I now know the reason: in Luminous the fastcgi frontend is disabled by default. I have re-enabled fastcgi and it works well now. By the way, I use keepalived+LVS for load balancing and HA. Thanks again! At 2018-09-18 18:36:46, "Konstantin Shalygin" wrote: >>
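
[The re-enabling step itself is not shown in the archive. A minimal sketch of a Luminous-era RGW config that switches the frontend from the civetweb default to fastcgi; the TCP port is hypothetical, not taken from the thread:

    [client.rgw.ceph-11]
    # civetweb is the default frontend in Luminous; select fastcgi explicitly
    rgw frontends = fastcgi socket_port=9000 socket_host=0.0.0.0

With a TCP socket like this, nginx (or another fastcgi-capable proxy) is pointed at port 9000 rather than at a unix socket.]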

Re: [ceph-users] Ceph talks from Mountpoint.io

2018-09-05 Thread linghucongsong
Thank you, Gregory, for providing this. By the way, where can I get the PDFs for these talks? Thanks again! At 2018-09-06 07:03:32, "Gregory Farnum" wrote: >Hey all, >Just wanted to let you know that all the talks from Mountpoint.io are >now available on YouTube. These are reasonably

[ceph-users] Can Luminous Ceph RGW only run with civetweb?

2018-08-31 Thread linghucongsong
In Jewel, RGW worked well with nginx using the config below, but with Luminous nginx does not seem to work with RGW: 10.11.3.57, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/var/run/ceph/ceph-client.rgw.ceph-11.asok:", host: "10.11.3.57:7480" 2018/08/31 16:38:25
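
[The original nginx config is truncated out of the snippet; the error line above shows nginx passing requests to RGW over a unix fastcgi socket. A minimal sketch of that style of setup, assuming the socket path from the log and otherwise hypothetical values:

    server {
        listen 7480;
        location / {
            # socket path taken from the error log above
            fastcgi_pass unix:/var/run/ceph/ceph-client.rgw.ceph-11.asok;
            # pass the standard fastcgi request variables to RGW
            include fastcgi_params;
        }
    }
]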

Re: [ceph-users] New Ceph community manager: Mike Perez

2018-08-28 Thread linghucongsong
Welcome! At 2018-08-29 09:13:24, "Sage Weil" wrote: >Hi everyone, > >Please help me welcome Mike Perez, the new Ceph community manager! > >Mike has a long history with Ceph: he started at DreamHost working on >OpenStack and Ceph back in the early days, including work on the original

Re: [ceph-users] Why do we show removed snaps in ceph osd dump pool info?

2018-03-26 Thread linghucongsong
but that was the gist of the answer I got back when we were working on some bugs with the Ceph support team previously. On Wed, Mar 14, 2018 at 5:38 AM linghucongsong <linghucongs...@163.com> wrote: What is the purpose of showing the removed snaps? It looks like the removed snaps are of no use to

[ceph-users] Why do we show removed snaps in ceph osd dump pool info?

2018-03-14 Thread linghucongsong
What is the purpose of showing the removed snaps? It looks like the removed snaps are of no use to the user. We use rbd export and import to back up images from one Ceph cluster to another. The incremental image backup depends on the snapshot, and we remove the snapshot after the backup, so it will
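
[A minimal sketch of the snapshot-based incremental backup flow the post describes, with hypothetical pool, image, and snapshot names:

    # full export up to the first snapshot
    rbd snap create rbd/img@snap1
    rbd export-diff rbd/img@snap1 img-snap1.diff
    # later: incremental export of only the changes since snap1
    rbd snap create rbd/img@snap2
    rbd export-diff --from-snap snap1 rbd/img@snap2 img-snap1-to-snap2.diff
    # apply on the second cluster (the target image must already hold snap1)
    rbd import-diff img-snap1-to-snap2.diff rbd/img
    # the old snapshot is then removed, which is what lands in removed_snaps
    rbd snap rm rbd/img@snap1
]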

[ceph-users] In the same Ceph cluster, why are some objects in the same OSD 8M and some 4M?

2018-01-01 Thread linghucongsong
Hi, all! I just use Ceph RBD for OpenStack. My Ceph version is 10.2.7. I found a surprising thing: among the objects saved in the OSDs, in some PGs the objects are 8M, and in some PGs the objects are 4M. Can someone tell me why? Thanks!
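
[RBD object size is a per-image property fixed at creation time (the image "order"; order 22 = 4 MiB, order 23 = 8 MiB), and different OpenStack services are often configured with different RBD chunk sizes, so a mix of 4M and 8M objects is a plausible explanation, though the thread does not confirm it. A sketch with hypothetical names:

    # the "order" line in the output shows the object size of an existing image
    rbd info volumes/volume-0001
    # creating an image with 8 MiB objects (order 23) instead of the 4 MiB default
    rbd create --size 10G --order 23 images/img-0001
]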

Re: [ceph-users] Gracefully reboot OSD node

2017-08-03 Thread linghucongsong
Set the osd flags noout and nodown. At 2017-08-03 18:29:47, "Hans van den Bogert" wrote: Hi all, One thing which has bothered me since the beginning of using Ceph is that a reboot of a single OSD causes a HEALTH_ERR state for the cluster for at least a couple of seconds. In
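
[The flags suggested above map to two ceph CLI calls; clear them again once the node is back up:

    ceph osd set noout      # do not mark out (and rebalance) OSDs that go down
    ceph osd set nodown     # do not mark rebooting OSDs down
    # ... reboot the node ...
    ceph osd unset nodown
    ceph osd unset noout
]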

Re: [ceph-users] "rbd create" hangs for specific pool

2017-08-03 Thread linghucongsong
root ssds {
        id -9           # do not change unnecessarily
        # weight 0.000
        alg straw
        hash 0          # rjenkins1
}

It is empty in ssds!

rule ssdpool {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take ssds
        step chooseleaf firstn
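
[With no devices under the ssds root, a rule that does "step take ssds" can map no OSDs, PGs in that pool stay unmapped, and writes such as rbd create block; that is the diagnosis implied above. A sketch of populating the root, with hypothetical host bucket, OSD id, and weight:

    ceph osd tree                                   # confirm the ssds root is empty
    ceph osd crush add-bucket ssd-host-1 host       # hypothetical host bucket
    ceph osd crush move ssd-host-1 root=ssds
    ceph osd crush create-or-move osd.12 1.0 root=ssds host=ssd-host-1
]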

Re: [ceph-users] jewel - recovery keeps stalling (continues after restarting OSDs)

2017-07-28 Thread linghucongsong
17:57:11, "Nikola Ciprich" <nikola.cipr...@linuxbox.cz> wrote: >On Fri, Jul 28, 2017 at 05:52:29PM +0800, linghucongsong wrote: >> >> >> >> You have two crush rule? One is ssd the other is hdd? >yes, exactly.. > >> >> Can you show

Re: [ceph-users] jewel - recovery keeps stalling (continues after restarting OSDs)

2017-07-28 Thread linghucongsong
Do you have two crush rules, one for SSD and the other for HDD? Can you show the output of ceph osd dump | grep pool and ceph osd crush dump? At 2017-07-28 17:47:48, "Nikola Ciprich" <nikola.cipr...@linuxbox.cz> wrote: > >On Fri, Jul 28, 2017 at 05:43:14PM +0800, linghucongsong wrote: >>

Re: [ceph-users] jewel - recovery keeps stalling (continues after restarting OSDs)

2017-07-28 Thread linghucongsong
It looks like the OSDs in your cluster are not all the same size. Can you show the ceph osd df output? At 2017-07-28 17:24:29, "Nikola Ciprich" wrote: >I forgot to add that the OSD daemons really seem to be idle, no disk >activity, no CPU usage.. it just looks to me like