Re: [ceph-users] how to mount one of the cephfs namespace using ceph-fuse?

2018-12-04 Thread Zhenshi Zhou
Hi, I can mount cephfs manually this way, but how do I edit fstab so that the system auto-mounts cephfs via ceph-fuse? Thanks. Yan, Zheng wrote on Tue, Nov 20, 2018 at 8:08 PM: > ceph-fuse --client_mds_namespace=xxx > On Tue, Nov 20, 2018 at 7:33 PM ST Wong (ITSC) wrote: > > Hi all, > > > > We’re
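One common way to do the fstab part (an untested sketch; the client name `admin`, mount point `/mnt/cephfs`, and namespace `xxx` are placeholders): the `fuse.ceph` filesystem type passes `ceph.*` options through to ceph-fuse, so the `--client_mds_namespace` flag from the earlier reply becomes a `ceph.client_mds_namespace` mount option:

```
# /etc/fstab entry (placeholder client "admin", namespace "xxx")
none  /mnt/cephfs  fuse.ceph  ceph.id=admin,ceph.client_mds_namespace=xxx,_netdev,defaults  0 0
```

`_netdev` delays the mount until networking is up, which matters for a cluster filesystem.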

Re: [ceph-users] Decommissioning cluster - rebalance questions

2018-12-04 Thread Marco Gaiarin
Mandi! si...@turka.nl wrote: > What I don't get is, when I perform 'ceph osd out ' the cluster is > rebalancing, but when I perform 'ceph osd crush remove osd.' it again > starts to rebalance. Why does this happen? I've recently hit the same 'strangeness'. Note that I'm not

Re: [ceph-users] all vms can not start up when boot all the ceph hosts.

2018-12-04 Thread linghucongsong
Thank you for the reply! But this was a sudden power-off of all the hosts. So is the best approach here to keep snapshots of the imported VMs, or to mirror the images to another Ceph cluster? At 2018-12-04 17:30:13, "Janne Johansson" wrote: On Tue, 4 Dec 2018 at

[ceph-users] [cephfs] Kernel outage / timeout

2018-12-04 Thread ceph
Hi, I see occasional hard freezes using cephfs with the kernel driver. For instance: [Tue Dec 4 10:57:48 2018] libceph: mon1 10.5.0.88:6789 session lost, hunting for new mon [Tue Dec 4 10:57:48 2018] libceph: mon2 10.5.0.89:6789 session established [Tue Dec 4 10:58:20 2018] ceph: mds0 caps stale [..]

[ceph-users] Assert when upgrading from Hammer to Jewel

2018-12-04 Thread Smith, Eric
We were upgrading from Ceph Hammer to Ceph Jewel, we updated our OS from CentOS 7.1 to CentOS 7.3 prior to this without issue – we ran into 2 issues: 1. FAILED assert(0 == "Missing map in load_pgs") * We found the following article fixed this issue:

[ceph-users] High average apply latency Firefly

2018-12-04 Thread Klimenko, Roman
Hi everyone! On the old prod cluster - baremetal, 5 nodes (24 cpu, 256G RAM) - ceph 0.80.9 filestore - 105 osd, size 114TB (each osd 1.1T, SAS Seagate ST1200MM0018) , raw used 60% - 15 journals (each journal 0.4TB, Toshiba PX04SMB040) - net 20Gbps - 5 pools, size 2, min_size 1 we have

[ceph-users] all vms can not start up when boot all the ceph hosts.

2018-12-04 Thread linghucongsong
Hi all! I have a Ceph test environment using Ceph with OpenStack, and some VMs run on OpenStack. It is just a test environment. My Ceph version is 12.2.4. Yesterday I rebooted all the Ceph hosts without shutting down the VMs on OpenStack first. When all the hosts booted up and the

Re: [ceph-users] Decommissioning cluster - rebalance questions

2018-12-04 Thread Jarek
On Mon, 03 Dec 2018 16:41:36 +0100 si...@turka.nl wrote: > Hi, > > Currently I am decommissioning an old cluster. > > For example, I want to remove OSD Server X with all its OSD's. > > I am following these steps for all OSD's of Server X: > - ceph osd out > - Wait for rebalance (active+clean)
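A commonly recommended variant that avoids the double rebalance discussed in this thread: zero the OSD's CRUSH weight first, so the mapping changes only once, and `crush remove` afterwards triggers no further movement. A sketch (osd.12 is a placeholder id; run against a healthy cluster):

```
# Drain the OSD by zeroing its CRUSH weight; data migrates once
ceph osd crush reweight osd.12 0
# wait until all PGs are active+clean, then:
ceph osd out 12
systemctl stop ceph-osd@12
ceph osd crush remove osd.12
ceph auth del osd.12
ceph osd rm 12
```

The reason `out` followed by `crush remove` rebalances twice is that `out` changes the reweight while `crush remove` changes the bucket weights, and the two produce different CRUSH mappings.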

Re: [ceph-users] [cephfs] Kernel outage / timeout

2018-12-04 Thread NingLi
Hi, maybe this reference can help you: http://docs.ceph.com/docs/master/cephfs/troubleshooting/#disconnected-remounted-fs > On Dec 4, 2018, at 18:55, c...@jack.fr.eu.org wrote: > > Hi, > > I have some wild freeze using cephfs with the kernel driver > For instance: > [Tue Dec 4 10:57:48 2018]

Re: [ceph-users] all vms can not start up when boot all the ceph hosts.

2018-12-04 Thread Janne Johansson
On Tue, 4 Dec 2018 at 09:49, linghucongsong wrote: > HI all! > > I have a ceph test envirment use ceph with openstack. There are some vms > run on the openstack. It is just a test envirment. > my ceph version is 12.2.4. Last day I reboot all the ceph hosts before > this I do not shutdown the vms

Re: [ceph-users] all vms can not start up when boot all the ceph hosts.

2018-12-04 Thread Janne Johansson
On Tue, 4 Dec 2018 at 10:37, linghucongsong wrote: > Thank you for reply! > But it is just in case suddenly power off for all the hosts! > So the best way for this it is to have the snapshot on the import vms or > have to mirror the > images to other ceph cluster? Best way is probably to do

Re: [ceph-users] High average apply latency Firefly

2018-12-04 Thread Janne Johansson
On Tue, 4 Dec 2018 at 11:20, Klimenko, Roman wrote: > > Hi everyone! > > On the old prod cluster > - baremetal, 5 nodes (24 cpu, 256G RAM) > - ceph 0.80.9 filestore > - 105 osd, size 114TB (each osd 1.1T, SAS Seagate ST1200MM0018) , raw used 60% > - 15 journals (each journal 0.4TB, Toshiba

Re: [ceph-users] all vms can not start up when boot all the ceph hosts.

2018-12-04 Thread Simon Ironside
On 04/12/2018 09:37, linghucongsong wrote: But it is just in case suddenly power off for all the hosts! I'm surprised you're seeing I/O errors inside the VMs once they're restarted. Is the cluster healthy? What's the output of ceph status? Simon

Re: [ceph-users] all vms can not start up when boot all the ceph hosts.

2018-12-04 Thread Jason Dillaman
I would check to see if the images have an exclusive-lock still held by a force-killed VM. librbd will generally automatically clear this up unless it doesn't have the proper permissions to blacklist a dead client from the Ceph cluster. Verify that your OpenStack Ceph user caps are correct [1][2].

Re: [ceph-users] all vms can not start up when boot all the ceph hosts.

2018-12-04 Thread Ouyang Xu
Hi linghucongsong: I have hit this issue before; you can try to fix it as below: 1. use 'rbd lock ls' to get the lock for the VM 2. use 'rbd lock rm' to remove that lock for the VM 3. start the VM again. Hope that helps. regards, Ouyang On 2018/12/4 4:48 PM, linghucongsong wrote: HI all!
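The steps above in command form (the pool name `vms`, image name `instance-0001`, lock id, and locker are all placeholder values; take the real lock id and locker from the `rbd lock ls` output):

```
# 1. list locks on the image; note the locker (client id) and lock id columns
rbd lock ls vms/instance-0001
# 2. remove the stale lock using the values reported above
rbd lock rm vms/instance-0001 "auto 140339841909760" client.4156
# 3. start the VM again
```

As Jason notes elsewhere in the thread, librbd normally cleans this up itself if the OpenStack Ceph user has caps sufficient to blacklist the dead client.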

[ceph-users] Need help related to authentication

2018-12-04 Thread Rishabh S
Dear Members, I am new to Ceph and implementing an object store using Ceph. I have the following scenario: 1. I have an application which needs to store thousands of files in a Ceph cluster 2. My application will be deployed in a Kubernetes cluster 3. My application will communicate using a REST API

Re: [ceph-users] [cephfs] Kernel outage / timeout

2018-12-04 Thread Jack
Thanks. However, I do not think this tip is related to my issue. Best regards, On 12/04/2018 12:00 PM, NingLi wrote: > > Hi, maybe this reference can help you > > http://docs.ceph.com/docs/master/cephfs/troubleshooting/#disconnected-remounted-fs > > >> On Dec 4, 2018, at 18:55,

Re: [ceph-users] rbd IO monitoring

2018-12-04 Thread Michael Green
Interesting, thanks for sharing. I'm looking at the example output in the PR 25114: write_bytes 409600/107 409600/107 write_latency 2618503617/107 How should these values be interpreted? -- Michael Green > On Dec 3, 2018, at 2:47 AM, Jan Fajerski wrote: > >> Question: what tools

Re: [ceph-users] [cephfs] Kernel outage / timeout

2018-12-04 Thread Jack
Why is the client frozen in the first place? Is this because it somehow lost the connection to the mon (I have not found anything about this yet)? How can I prevent this? Can I make the client reconnect in less than 15 minutes, to lessen the impact? Best regards, On 12/04/2018 07:41 PM,

Re: [ceph-users] [cephfs] Kernel outage / timeout

2018-12-04 Thread Gregory Farnum
Yes, this is exactly it with the "reconnect denied". -Greg On Tue, Dec 4, 2018 at 3:00 AM NingLi wrote: > > Hi,maybe this reference can help you > > > http://docs.ceph.com/docs/master/cephfs/troubleshooting/#disconnected-remounted-fs > > > > On Dec 4, 2018, at 18:55, c...@jack.fr.eu.org wrote:

[ceph-users] Cephalocon (was Re: CentOS Dojo at Oak Ridge, Tennessee CFP is now open!)

2018-12-04 Thread Matthew Vernon
On 03/12/2018 22:46, Mike Perez wrote: Also as a reminder, let's try to coordinate our submissions on the CFP coordination pad: https://pad.ceph.com/p/cfp-coordination I see that mentions a Cephalocon in Barcelona in May. Did I miss an announcement about that? Regards, Matthew -- The

Re: [ceph-users] Need help related to authentication

2018-12-04 Thread Paul Emmerich
You are probably looking for radosgw-admin which can manage users on the shell, e.g.: radosgw-admin user create --uid username --display-name "full name" radosgw-admin user list radosgw-admin user info --uid username The create and info commands return the secret/access key which can be used
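For reference, `radosgw-admin user create` (and `user info`) print the keys as JSON; a heavily abbreviated, illustrative shape of that output (all values here are placeholders, not real keys):

```
{
    "user_id": "username",
    "display_name": "full name",
    "keys": [
        {
            "user": "username",
            "access_key": "EXAMPLEACCESSKEY0000",
            "secret_key": "exampleSecretKeyPlaceholder0000000000000"
        }
    ]
}
```

The `access_key`/`secret_key` pair from `keys[0]` is what an S3-style client (e.g. the application in the original question) uses to authenticate against radosgw.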

Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Vasu Kulkarni
On Tue, Dec 4, 2018 at 2:42 PM Matthew Pounsett wrote: > Going to take another stab at this... > > We have a development environment–made up of VMs–for developing and > testing the deployment tools for a particular service that depends on > cephfs for sharing state data between hosts. In

Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Matthew Pounsett
On Tue, 4 Dec 2018 at 18:31, Vasu Kulkarni wrote: > >> Is there a way we can easily set that up without trying to use outdated >> tools? Presumably if ceph still supports this as the docs claim, there's a >> way to get it done without using ceph-deploy? >> > It might be more involved if you are

Re: [ceph-users] Cephalocon (was Re: CentOS Dojo at Oak Ridge, Tennessee CFP is now open!)

2018-12-04 Thread Mike Perez
On 16:25 Dec 04, Matthew Vernon wrote: > On 03/12/2018 22:46, Mike Perez wrote: > > >Also as a reminder, lets try to coordinate our submissions on the CFP > >coordination pad: > > > >https://pad.ceph.com/p/cfp-coordination > > I see that mentions a Cephalocon in Barcelona in May. Did I miss an >

Re: [ceph-users] [cephfs] Kernel outage / timeout

2018-12-04 Thread Yan, Zheng
On Tue, Dec 4, 2018 at 6:55 PM wrote: > > Hi, > > I have some wild freeze using cephfs with the kernel driver > For instance: > [Tue Dec 4 10:57:48 2018] libceph: mon1 10.5.0.88:6789 session lost, > hunting for new mon > [Tue Dec 4 10:57:48 2018] libceph: mon2 10.5.0.89:6789 session established

Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Matthew Pounsett
Going to take another stab at this... We have a development environment–made up of VMs–for developing and testing the deployment tools for a particular service that depends on cephfs for sharing state data between hosts. In production we will be using filestore OSDs because of the very low

Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Matthew Pounsett
On Tue, 4 Dec 2018 at 18:04, Vasu Kulkarni wrote: > > > On Tue, Dec 4, 2018 at 2:42 PM Matthew Pounsett > wrote: > >> you are using HOST:DIR option which is bit old and I think it was >> supported till jewel, since you are using 2.0.1 you should be using only >> 'osd create' with logical

Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Vasu Kulkarni
On Tue, Dec 4, 2018 at 3:19 PM Matthew Pounsett wrote: > > > On Tue, 4 Dec 2018 at 18:12, Vasu Kulkarni wrote: > >> >>> As explained above, we can't just create smaller raw devices. Yes, >>> these are VMs but they're meant to replicate physical servers that will be >>> used in production,

Re: [ceph-users] rbd IO monitoring

2018-12-04 Thread Jason Dillaman
The "osd_perf_query" mgr module is just a demo / testing framework. However, the output was tweaked prior to merge to provide more readable values instead of the "{value summation} / {count}" in the original submission. On Tue, Dec 4, 2018 at 1:56 PM Michael Green wrote: > > Interesting, thanks
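Given the "{value summation} / {count}" encoding Jason describes, the averages for Michael's sample can be computed directly; the latency sum is in nanoseconds (a sketch of the arithmetic only, not of the mgr module's API):

```shell
# write_latency is {nanosecond sum}/{op count}; write_bytes is {byte sum}/{op count}
awk 'BEGIN {
  printf "avg write latency: %.1f ms\n", 2618503617 / 107 / 1e6
  printf "avg write size:    %.1f KiB\n", 409600 / 107 / 1024
}'
```

So the sample works out to roughly 24.5 ms per write and about 3.7 KiB per write on average.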

Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Vasu Kulkarni
On Mon, Dec 3, 2018 at 4:47 PM Matthew Pounsett wrote: > > I'm in the process of updating some development VMs that use ceph-fs. It > looks like recent updates to ceph have deprecated the 'ceph-deploy osd > prepare' and 'activate' commands in favour of the previously-optional > 'create'

Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Vasu Kulkarni
On Tue, Dec 4, 2018 at 3:07 PM Matthew Pounsett wrote: > > > On Tue, 4 Dec 2018 at 18:04, Vasu Kulkarni wrote: > >> >> >> On Tue, Dec 4, 2018 at 2:42 PM Matthew Pounsett >> wrote: >> >>> you are using HOST:DIR option which is bit old and I think it was >>> supported till jewel, since you are

Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Matthew Pounsett
On Tue, 4 Dec 2018 at 18:12, Vasu Kulkarni wrote: > >> As explained above, we can't just create smaller raw devices. Yes, these >> are VMs but they're meant to replicate physical servers that will be used >> in production, where no such volumes are available. >> > In that case you will have to
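Following Vasu's suggestion to use logical volumes instead of the removed HOST:DIR mode, a hedged sketch of a filestore OSD with ceph-deploy 2.x (the volume group `vg0`, LV sizes, and host `node1` are placeholders; assumes a VG already exists on the VM's disk):

```
# carve data and journal LVs out of an existing VG on the VM's disk
lvcreate -n osd-data -L 10G vg0
lvcreate -n osd-journal -L 1G vg0
# ceph-deploy 2.x: filestore OSD from logical volumes (no HOST:DIR form)
ceph-deploy osd create --filestore --data vg0/osd-data --journal vg0/osd-journal node1
```

This keeps the VMs on filestore to match the production servers while satisfying ceph-volume's requirement for block devices.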

Re: [ceph-users] all vms can not start up when boot all the ceph hosts.

2018-12-04 Thread linghucongsong
Thanks to all! I might have found the reason. It looks like it is related to the bug below: https://bugs.launchpad.net/nova/+bug/1773449 At 2018-12-04 23:42:15, "Ouyang Xu" wrote: Hi linghucongsong: I have got this issue before, you can try to fix it as below: 1. use rbd lock ls to

Re: [ceph-users] Need help related to authentication

2018-12-04 Thread Rishabh S
Hi Paul, Thank you. I was looking for suggestions on how my Ceph client should get the access and secret keys. Another thing where I need help is encryption: http://docs.ceph.com/docs/mimic/radosgw/encryption/# I am a little confused
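One of the options on that page, SSE-C, has the client supply its own key with each request: a base64-encoded 256-bit key plus the base64-encoded MD5 digest of the raw key (the `x-amz-server-side-encryption-customer-key` and `...-key-MD5` headers in S3 terms). A sketch of generating that key material with openssl (key handling and storage are up to the application):

```shell
# 256-bit random key, base64-encoded as the S3 SSE-C headers expect
key_b64=$(openssl rand -base64 32)
# MD5 digest of the *raw* key bytes, also base64-encoded
key_md5_b64=$(printf '%s' "$key_b64" | base64 -d | openssl dgst -md5 -binary | base64)
echo "key:     $key_b64"
echo "key-md5: $key_md5_b64"
```

With SSE-C, radosgw never stores the key; losing it means losing the objects, so the client side must manage it carefully.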

[ceph-users] [cephfs] cephfs hung when scp/rsync large files

2018-12-04 Thread NingLi
Hi all, We found that some processes writing to cephfs hang for a long time (> 120s) when uploading (scp/rsync) large files (50G ~ 300G in total) to the app node's cephfs mountpoint. This problem is not always reproducible, but when it occurs, the web (nginx) or some other services
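The preview ends here, but one mitigation frequently suggested for this symptom (an assumption that the stalls come from large bursts of dirty page cache being flushed at once, not a confirmed diagnosis for this report) is to lower the kernel's dirty thresholds so writeback starts earlier and in smaller chunks; the values below are illustrative starting points only:

```
# /etc/sysctl.d/90-writeback.conf (illustrative values, tune per workload)
vm.dirty_background_ratio = 5
vm.dirty_ratio = 10
```

Apply with `sysctl --system` and re-test the large scp/rsync transfer.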