Re: [ceph-users] Commercial support

2019-01-23 Thread Erik McCormick
SUSE as well: https://www.suse.com/products/suse-enterprise-storage/ On Wed, Jan 23, 2019, 6:01 PM Alex Gorbachev On Wed, Jan 23, 2019 at 5:29 PM Ketil Froyn wrote: > > > > Hi, > > > > How is the commercial support for Ceph? More specifically, I was > recently pointed in the direction of the

Re: [ceph-users] Commercial support

2019-01-23 Thread Alex Gorbachev
On Wed, Jan 23, 2019 at 5:29 PM Ketil Froyn wrote: > > Hi, > > How is the commercial support for Ceph? More specifically, I was recently > pointed in the direction of the very interesting combination of CephFS, Samba > and ctdb. Is anyone familiar with companies that provide commercial support

[ceph-users] Commercial support

2019-01-23 Thread Ketil Froyn
Hi, How is the commercial support for Ceph? More specifically, I was recently pointed in the direction of the very interesting combination of CephFS, Samba and ctdb. Is anyone familiar with companies that provide commercial support for in-house solutions like this? Regards, Ketil
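For reference, a minimal smb.conf sketch for exporting CephFS through Samba's vfs_ceph module (share name, cephx user and paths are placeholders; the ctdb cluster itself is configured separately):

    [global]
        clustering = yes                      # assumes a separate ctdb setup
    [cephfs-share]
        path = /                              # path relative to the CephFS root
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba                  # hypothetical cephx user for Samba
        kernel share modes = no               # commonly required, since vfs_ceph bypasses a kernel mount
        read only = no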

[ceph-users] Playbook Deployment - [TASK ceph-mon : test if rbd exists ]

2019-01-23 Thread Meysam Kamali
Hi Ceph Community, I am using Ansible 2.2 and the ceph-ansible stable-2.2 branch, on CentOS 7, to deploy the playbook. But the deployment hangs at the step "TASK [ceph-mon : test if rbd exists]"; it just sits there and does not move. I have all three Ceph nodes: ceph-admin, ceph-mon, ceph-osd. I
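While that task hangs, it may help to run by hand what it is presumably waiting for, to see whether the cluster itself responds (the exact check the ceph-ansible task performs is an assumption here):

    # on the ceph-mon node, as a user that can read the admin keyring
    ceph -s                   # does the cluster answer at all?
    ceph osd pool ls          # is there an 'rbd' pool yet?
    timeout 30 rbd ls rbd     # does an rbd command return, or hang like the task?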

Re: [ceph-users] Process stuck in D+ on cephfs mount

2019-01-23 Thread Marc Roos
Are there any others I need to grab, so I can do them all at once? I do not like having to restart this one so often. > > Yes sort of. I do have an inconsistent pg for a while, but it is on a > different pool. But I take it this is related to a networking issue I > currently have with rsync and
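In case it helps, the cephfs kernel client exposes its outstanding requests via debugfs; a sketch of what is commonly collected for a stuck mount (paths assume debugfs is mounted at /sys/kernel/debug, and the pid is a placeholder):

    # one directory per mounted cephfs client, named <fsid>.client<id>
    ls /sys/kernel/debug/ceph/
    cat /sys/kernel/debug/ceph/*/osdc    # requests outstanding against OSDs
    cat /sys/kernel/debug/ceph/*/mdsc    # requests outstanding against MDSs
    cat /sys/kernel/debug/ceph/*/monc    # monitor session state
    # plus the kernel stack of the process stuck in D+
    cat /proc/<pid>/stack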

Re: [ceph-users] migrate ceph-disk to ceph-volume fails with dmcrypt

2019-01-23 Thread Alfredo Deza
On Wed, Jan 23, 2019 at 11:03 AM Dietmar Rieder wrote: > > On 1/23/19 3:05 PM, Alfredo Deza wrote: > > On Wed, Jan 23, 2019 at 8:25 AM Jan Fajerski wrote: > >> > >> On Wed, Jan 23, 2019 at 10:01:05AM +0100, Manuel Lausch wrote: > >>> Hi, > >>> > >>> thats a bad news. > >>> > >>> round about 5000

Re: [ceph-users] migrate ceph-disk to ceph-volume fails with dmcrypt

2019-01-23 Thread Dietmar Rieder
On 1/23/19 3:05 PM, Alfredo Deza wrote: > On Wed, Jan 23, 2019 at 8:25 AM Jan Fajerski wrote: >> >> On Wed, Jan 23, 2019 at 10:01:05AM +0100, Manuel Lausch wrote: >>> Hi, >>> >>> thats a bad news. >>> >>> round about 5000 OSDs are affected from this issue. It's not realy a >>> solution to

Re: [ceph-users] migrate ceph-disk to ceph-volume fails with dmcrypt

2019-01-23 Thread Paul Emmerich
On Wed, Jan 23, 2019 at 4:15 PM Manuel Lausch wrote: > yes, you are right. The activate step disables ceph-disk system-wide. > This is done by symlinking /etc/systemd/system/ceph-disk@.service > to /dev/null. > After deleting this symlink my OSDs started again after reboot. > The startup processes
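For anyone hitting the same thing, a sketch of how to check for and undo that mask (the unit name is taken from the mail above; verify the symlink target before removing anything):

    # a symlink to /dev/null means the unit template is masked
    ls -l /etc/systemd/system/ceph-disk@.service
    # if it points to /dev/null and you still rely on ceph-disk activation:
    rm /etc/systemd/system/ceph-disk@.service
    systemctl daemon-reload
    # 'systemctl unmask ceph-disk@.service' may achieve the same, depending on systemd version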

Re: [ceph-users] migrate ceph-disk to ceph-volume fails with dmcrypt

2019-01-23 Thread Manuel Lausch
On Wed, 23 Jan 2019 08:11:31 -0500 Alfredo Deza wrote: > I don't know how that would look like, but I think it is worth a try > if re-deploying OSDs is not feasible for you. Yes, if there is a working way to migrate this, I will give it a try. > > The key api for encryption is *very* odd and a

Re: [ceph-users] migrate ceph-disk to ceph-volume fails with dmcrypt

2019-01-23 Thread Manuel Lausch
On Wed, 23 Jan 2019 14:25:00 +0100 Jan Fajerski wrote: > I might be wrong on this, since it's been a while since I played with > that. But iirc you can't migrate a subset of ceph-disk OSDs to > ceph-volume on one host. Once you run ceph-volume simple activate, > the ceph-disk systemd units and
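For context, the usual whole-host sequence looks roughly like this (a sketch; check `ceph-volume simple scan --help` for the exact arguments your release accepts):

    # capture the metadata of a running ceph-disk OSD into /etc/ceph/osd/<id>-<fsid>.json
    ceph-volume simple scan /var/lib/ceph/osd/ceph-<id>   # repeat per OSD; newer releases also accept a bare 'scan'
    # enable ceph-volume style activation for all scanned OSDs on this host;
    # note from this thread: this also masks ceph-disk@.service system-wide
    ceph-volume simple activate --all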

Re: [ceph-users] Process stuck in D+ on cephfs mount

2019-01-23 Thread Yan, Zheng
On Wed, Jan 23, 2019 at 6:07 PM Marc Roos wrote: > > Yes sort of. I do have an inconsistent pg for a while, but it is on a > different pool. But I take it this is related to a networking issue I > currently have with rsync and broken pipe. > > Where exactly does it go wrong? The cephfs kernel

Re: [ceph-users] migrate ceph-disk to ceph-volume fails with dmcrypt

2019-01-23 Thread Alfredo Deza
On Wed, Jan 23, 2019 at 8:25 AM Jan Fajerski wrote: > > On Wed, Jan 23, 2019 at 10:01:05AM +0100, Manuel Lausch wrote: > >Hi, > > > >thats a bad news. > > > >round about 5000 OSDs are affected from this issue. It's not realy a > >solution to redeploy this OSDs. > > > >Is it possible to migrate

Re: [ceph-users] migrate ceph-disk to ceph-volume fails with dmcrypt

2019-01-23 Thread Jan Fajerski
On Wed, Jan 23, 2019 at 10:01:05AM +0100, Manuel Lausch wrote: Hi, that's bad news. Roughly 5000 OSDs are affected by this issue. It's not really a solution to redeploy these OSDs. Is it possible to migrate the local keys to the monitors? I see that the OSDs with the "lockbox feature"

Re: [ceph-users] migrate ceph-disk to ceph-volume fails with dmcrypt

2019-01-23 Thread Alfredo Deza
On Wed, Jan 23, 2019 at 4:01 AM Manuel Lausch wrote: > > Hi, > > thats a bad news. > > round about 5000 OSDs are affected from this issue. It's not realy a > solution to redeploy this OSDs. > > Is it possible to migrate the local keys to the monitors? > I see that the OSDs with the "lockbox

Re: [ceph-users] Spec for Ceph Mon+Mgr?

2019-01-23 Thread Jan Kasprzak
jes...@krogh.cc wrote: : Hi. : : We're currently co-locating our mons with the head node of our Hadoop : installation. That may be giving us some problems, we don't know yet, but : thus I'm speculating about moving them to dedicated hardware. : : It is hard to get specifications "small" enough

[ceph-users] Cephfs snapshot create date

2019-01-23 Thread Marc Roos
How can I get the snapshot creation date on CephFS? When I do an ls on the .snap dir, it gives me the date of the snapshot's source directory, not the date the snapshot was created.
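I have not verified this across releases, but newer CephFS versions expose the snapshot birth time as a virtual xattr; a sketch, assuming the ceph.snap.btime vxattr is available in your version (the mount path and snapshot name are placeholders):

    # query the creation time of a snapshot directory; availability depends on the Ceph release
    getfattr -n ceph.snap.btime /mnt/cephfs/some/dir/.snap/mysnap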

Re: [ceph-users] Migrating to a dedicated cluster network

2019-01-23 Thread Jan Kasprzak
Jakub Jaszewski wrote: : Hi Yenya, : : Can I ask how your cluster looks and why you want to do the network : splitting? Jakub, we originally deployed the Ceph cluster as a proof of concept for a private cloud. We run OpenNebula and Ceph on about 30 old servers with old HDDs (2

[ceph-users] crush location hook with mimic

2019-01-23 Thread Mattia Belluco
Hi, we are having issues with the crush location hooks on Mimic: we deployed the same script we have been using since Hammer (and which has also been working fine in Jewel) that returns: root=fresh-install host=$(hostname -s)-fresh However, it seems the output of the script is completely disregarded.
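In case it is a configuration rather than a parsing issue, a sketch of the pieces that need to be in place on the OSD hosts (option names as documented; the script path is a placeholder):

    # ceph.conf on the OSD hosts
    [osd]
    crush location hook = /usr/local/bin/crush-hook.sh    # placeholder path
    # the hook only takes effect if OSDs may update their own location on start
    osd crush update on start = true

    # the hook must print a single line of key=value pairs, e.g.:
    #   root=fresh-install host=myhost-fresh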

Re: [ceph-users] Using Ceph central backup storage - Best practice creating pools

2019-01-23 Thread cmonty14
Hi, due to performance issues RGW is not an option. This statement may be wrong, but there's the following aspect to consider. If I write a backup that is typically a large file, this is normally a single IO stream. This causes massive performance issues on Ceph because this single IO stream is
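One option not discussed above: if the backups land on RBD images, custom striping at image creation time can spread a single sequential stream over more OSDs; a sketch with placeholder pool and image names (whether it helps depends on the workload):

    # stripe each 4M object range into 8 smaller units so one writer hits several OSDs in parallel
    rbd create backup/db-backup-img --size 2T \
        --object-size 4M --stripe-unit 512K --stripe-count 8
    rbd info backup/db-backup-img     # verify the stripe settings took effect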

Re: [ceph-users] Migrating to a dedicated cluster network

2019-01-23 Thread Jakub Jaszewski
Hi Yenya, Can I ask how your cluster looks and why you want to do the network splitting? We used to set up clusters of 9-12 OSD nodes (12-16 HDDs each) using 2x10Gb for access and 2x10Gb for the cluster network; however, I don't see a reason not to use just one network for the next cluster setup.

Re: [ceph-users] Process stuck in D+ on cephfs mount

2019-01-23 Thread Marc Roos
Yes sort of. I do have an inconsistent pg for a while, but it is on a different pool. But I take it this is related to a networking issue I currently have with rsync and broken pipe. Where exactly does it go wrong? The cephfs kernel client is sending a request to the osd, but the osd never
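To see whether the OSD ever received the request, the client's osdc output can be matched against the OSD's in-flight ops; a sketch, with the OSD id as a placeholder:

    # client side: note the tid and target osd of the stuck request
    cat /sys/kernel/debug/ceph/*/osdc
    # osd side (run on the host where that osd lives): is the op known, and what is it waiting for?
    ceph daemon osd.<id> dump_ops_in_flight
    ceph daemon osd.<id> dump_historic_ops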

[ceph-users] Migrating to a dedicated cluster network

2019-01-23 Thread Jan Kasprzak
Hello, Ceph users, is it possible to migrate an already deployed Ceph cluster, which uses the public network only, to split public/dedicated networks? If so, can this be done without service disruption? I have now got new hardware which makes this possible, but I am not sure how to do it.
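For reference, the end state is just two options in ceph.conf (the subnets below are placeholders); the open question in this thread is how to get there without disruption, which in practice means restarting daemons in a controlled order:

    [global]
    public network  = 192.0.2.0/24      # client/mon traffic (placeholder subnet)
    cluster network = 198.51.100.0/24   # OSD replication/backfill traffic (placeholder subnet)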

Re: [ceph-users] migrate ceph-disk to ceph-volume fails with dmcrypt

2019-01-23 Thread Manuel Lausch
Hi, that's bad news. Roughly 5000 OSDs are affected by this issue. It's not really a solution to redeploy these OSDs. Is it possible to migrate the local keys to the monitors? I see that the OSDs with the "lockbox feature" have only one key for the data and journal partition, and the older OSDs
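I have not tried this migration myself, but lockbox-style OSDs keep their dmcrypt secret in the monitor config-key store rather than locally; a sketch of where such a key would live, assuming the dm-crypt/osd/<osd-fsid>/luks naming used by ceph-volume (treat the exact path as an assumption and verify against your cluster):

    # list keys the monitors already hold for encrypted OSDs
    ceph config-key ls | grep dm-crypt
    # a lockbox/ceph-volume style entry looks roughly like dm-crypt/osd/<osd-fsid>/luks
    ceph config-key get dm-crypt/osd/<osd-fsid>/luks   # hypothetical path, adjust to what 'ls' shows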

Re: [ceph-users] read-only mounts of RBD images on multiple nodes for parallel reads

2019-01-23 Thread Mykola Golub
On Tue, Jan 22, 2019 at 01:26:29PM -0800, Void Star Nill wrote: > Regarding Mykola's suggestion to use Read-Only snapshots, what is the > overhead of creating these snapshots? I assume these are copy-on-write > snapshots, so there's no extra space consumed except for the metadata? Yes. --
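A minimal sketch of the snapshot-and-map-read-only flow being discussed (pool, image and snapshot names are placeholders):

    rbd snap create rbd/data-img@ro-snap     # copy-on-write snapshot; only metadata at creation time
    rbd snap protect rbd/data-img@ro-snap    # optional, only needed if clones will be made from it
    # map the snapshot read-only on each reader node
    rbd map --read-only rbd/data-img@ro-snap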

[ceph-users] osd bad crc cause whole cluster halt

2019-01-23 Thread lin yunfan
Hi list, I have encountered this problem on both a Jewel cluster and a Luminous cluster. The symptom is that some requests are blocked forever and the whole cluster is no longer able to receive any data. Further investigation shows the blocked requests happened on 2 OSDs (the pool size is 2, so I guess it
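When this happens, the blocked ops and any crc errors are usually visible like this (the OSD id and log path are placeholders):

    ceph health detail                                   # lists the OSDs with blocked/slow requests
    ceph daemon osd.<id> dump_ops_in_flight              # what the stuck requests are waiting for
    grep -i "bad crc" /var/log/ceph/ceph-osd.<id>.log    # messenger-level crc failures, if any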

Re: [ceph-users] The OSD can be “down” but still “in”.

2019-01-23 Thread Eugen Block
Hi, "If the OSD represents the primary one for a PG, then all IO will be stopped... which may lead to application failure." No, that's not how it works. You have an acting set of OSDs for a PG, typically 3 OSDs in a replicated pool. If the primary OSD goes down, the secondary becomes the
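To see this for a concrete PG, the up/acting sets and the pool's min_size can be checked directly (the pg id and pool name are placeholders):

    ceph pg map 1.2f                   # shows the up set and acting set for that PG
    ceph osd pool get rbd size         # number of replicas
    ceph osd pool get rbd min_size     # IO continues as long as this many replicas are available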