Re: [ceph-users] MON dedicated hosts

2018-12-17 Thread Sam Huracan
Thanks Mr Konstantin and Martin. So with a 200TB cluster, the most affordable choice is adding MONs to the OSD hosts, and preparing enough CPU and RAM for the MON services and storage for LevelDB. On Mon, 17 Dec 2018 at 16:55, Martin Verges < martin.ver...@croit.io> wrote: > Hello, > > we do
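As a rough way to watch how much space a co-located MON's store actually consumes over time, here is a minimal sketch that sums the size of the monitor data directory (the default /var/lib/ceph/mon path is an assumption; adjust it for your deployment and run it on the mon host with read access to that directory):

    # Sum the on-disk size of the monitor store (LevelDB/RocksDB) - illustrative only.
    import os

    MON_DIR = '/var/lib/ceph/mon'   # assumed default location of the mon data dir

    total = 0
    for root, _dirs, files in os.walk(MON_DIR):
        for name in files:
            path = os.path.join(root, name)
            if os.path.isfile(path):
                total += os.path.getsize(path)
    print('monitor store uses %.1f GiB' % (total / 1024.0 ** 3))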

Re: [ceph-users] [Warning: Forged Email] Ceph 10.2.11 - Status not working

2018-12-17 Thread Mike O'Connor
Hi Oliver, Peter, Thanks - about an hour after my second email I sat back, thought about it some more and realised this was the case. I've also fixed the Ceph issue: a simple set of issues compounding into the ceph-mons not working correctly. 1. We had a power failure 7 days ago, which for some reas

Re: [ceph-users] Ceph 10.2.11 - Status not working

2018-12-17 Thread Dyweni - Ceph-Users
On 2018-12-17 20:16, Brad Hubbard wrote: On Tue, Dec 18, 2018 at 10:23 AM Mike O'Connor wrote: Hi All I have a ceph cluster which has been working without issues for about 2 years now; it was upgraded about 6 months ago to 10.2.11 root@blade3:/var/lib/ceph/mon# ceph status 2018-12-18 10:4

[ceph-users] MDS failover very slow the first time, but very fast at second time

2018-12-17 Thread Ch Wan
Hi all, I have a ceph cluster running Luminous 12.2.5. In the cluster, we configured CephFS with two MDS servers: ceph-mds-test04 is active and ceph-mds-test05 is standby. Here is the MDS configuration: > [mds] mds_cache_size = 100 mds_cache_memory_limit = 42949672960 mds_standby_replay
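When reproducing a failover test like this, it helps to watch which MDS is active while the takeover happens. A minimal sketch using the python-rados binding (assuming python-rados is installed and an admin keyring is readable; the command mirrors what "ceph mds stat" issues):

    # Poll MDS state during a failover test - illustrative sketch.
    import json
    import time
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        for _ in range(30):                       # watch for roughly 30 seconds
            ret, out, errs = cluster.mon_command(
                json.dumps({"prefix": "mds stat", "format": "json"}), b'')
            print(time.strftime('%H:%M:%S'), json.loads(out))
            time.sleep(1)
    finally:
        cluster.shutdown()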

Re: [ceph-users] ceph remote disaster recovery plan

2018-12-17 Thread Zhenshi Zhou
Hi Alex, We are using Ceph mostly as a shared filesystem with CephFS, as well as some RBD images for Docker volumes and a little S3 data. I have been looking for a plan that can back up data at the RADOS level, but maybe I should back up the data separately. Thanks for the reply. Alex Gorbachev wrote on December 2018
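For the RBD part, one common approach (a sketch of the general technique, not necessarily what you need) is snapshot-based incremental export: take a snapshot through the python-rbd binding, then ship the delta with "rbd export-diff". Pool, image, snapshot and destination names below are placeholders:

    # Incremental RBD backup sketch: snapshot via python-rbd, export via the rbd CLI.
    import subprocess
    import rados
    import rbd

    POOL, IMAGE = 'rbd', 'docker-vol1'          # placeholder pool/image names
    SNAP = 'backup-20181217'                    # placeholder snapshot name

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx(POOL)
        with rbd.Image(ioctx, IMAGE) as image:
            image.create_snap(SNAP)             # freeze a point-in-time view
        # Full export up to that snapshot; add --from-snap <older> for incrementals.
        subprocess.check_call(['rbd', 'export-diff',
                               '%s/%s@%s' % (POOL, IMAGE, SNAP),
                               '/backups/%s_%s.diff' % (IMAGE, SNAP)])
        ioctx.close()
    finally:
        cluster.shutdown()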

Re: [ceph-users] [Warning: Forged Email] Ceph 10.2.11 - Status not working

2018-12-17 Thread Oliver Freyermuth
That's kind of unrelated to Ceph, but since you wrote two mails already, and I believe it is caused by the mailing list software for ceph-users... Your original mail distributed via the list ("[ceph-users] Ceph 10.2.11 - Status not working") did *not* have the forged-warning. Only the subseque

Re: [ceph-users] Ceph 10.2.11 - Status not working

2018-12-17 Thread Brad Hubbard
On Tue, Dec 18, 2018 at 10:23 AM Mike O'Connor wrote: > > Hi All > > I have a ceph cluster which has been working without issues for about 2 > years now; it was upgraded about 6 months ago to 10.2.11 > > root@blade3:/var/lib/ceph/mon# ceph status > 2018-12-18 10:42:39.242217 7ff770471700 0 -- 10.1

Re: [ceph-users] [Warning: Forged Email] Ceph 10.2.11 - Status not working

2018-12-17 Thread Mike O'Connor
Added DKIM to my server - will this help? On 18/12/18 11:04 am, Mike O'Connor wrote: > Hmm, wonder why the list is saying my email is forged; wonder what I have > wrong. > > My email is sent via an outbound spam filter, but I was sure I had the > SPF set correctly. > > Mike > > On 18/12/18 10:53 am, M

[ceph-users] why does the libcephfs API use "struct ceph_statx" instead of "struct stat"

2018-12-17 Thread wei.qiaomiao
Hi everyone, I found that the libcephfs API defines "struct ceph_statx" instead of "struct stat". Why not use "struct stat" directly? I think that would be easier to understand and more convenient for callers. struct ceph_statx { uint32_t stx_mask; uint32_
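The mask-driven interface is easiest to see from a caller's side. A minimal, hypothetical sketch using the python-cephfs binding (this assumes your build exposes statx() and the CEPH_STATX_* constants; the fallback mask values come from the C header and are an assumption about your version):

    # Request only the attributes we need - the stx_mask is the point of the API.
    import cephfs

    fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')
    fs.mount()
    try:
        # CEPH_STATX_* masks mirror Linux statx(2); header values used as fallback.
        want = (getattr(cephfs, 'CEPH_STATX_SIZE', 0x200) |
                getattr(cephfs, 'CEPH_STATX_MTIME', 0x040))
        stx = fs.statx('/some/path', want, 0)   # dict containing the filled-in fields
        print(stx)
    finally:
        fs.shutdown()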

Re: [ceph-users] ceph remote disaster recovery plan

2018-12-17 Thread Alex Gorbachev
On Thu, Dec 13, 2018 at 5:01 AM Zhenshi Zhou wrote: > > Hi all, > > I'm running a Luminous cluster with tens of OSDs and > the cluster runs well. As the data grows, Ceph becomes > more and more important. > > What worries me is that many services will go down if the > cluster is out, for instance, th

Re: [ceph-users] [Warning: Forged Email] Ceph 10.2.11 - Status not working

2018-12-17 Thread Mike O'Connor
Hmm, wonder why the list is saying my email is forged; wonder what I have wrong. My email is sent via an outbound spam filter, but I was sure I had the SPF set correctly. Mike On 18/12/18 10:53 am, Mike O'Connor wrote: > Hi All > > I have a ceph cluster which has been working without issues for
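To double-check what receivers actually see, you can pull the published SPF record for the sending domain. A small sketch with dnspython (the domain is a placeholder, and this assumes dnspython 2.x, where the call is dns.resolver.resolve):

    # Look up the SPF (TXT) record that receivers will evaluate.
    import dns.resolver   # pip install dnspython

    domain = 'example.com'   # placeholder: use the envelope-from domain
    for rr in dns.resolver.resolve(domain, 'TXT'):
        txt = b''.join(rr.strings).decode()
        if txt.startswith('v=spf1'):
            print(txt)       # the outbound spam filter's hosts must be allowed here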

[ceph-users] Ceph 10.2.11 - Status not working

2018-12-17 Thread Mike O'Connor
Hi All I have a ceph cluster which has been working without issues for about 2 years now; it was upgraded about 6 months ago to 10.2.11 root@blade3:/var/lib/ceph/mon# ceph status 2018-12-18 10:42:39.242217 7ff770471700  0 -- 10.1.5.203:0/1608630285 >> 10.1.5.207:6789/0 pipe(0x7ff768000c80 sd=4 :0
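When "ceph status" hangs with those pipe(...) lines, the first thing worth confirming is plain TCP reachability of each monitor from the client. A minimal sketch (the monitor address below is the one from the log and is only an example; add your other mons):

    # Quick reachability check of the monitors a stuck client is trying to use.
    import socket

    MONS = [('10.1.5.207', 6789)]   # example address from the log; add remaining mons

    for host, port in MONS:
        try:
            with socket.create_connection((host, port), timeout=5):
                print('%s:%d reachable' % (host, port))
        except OSError as e:
            print('%s:%d NOT reachable: %s' % (host, port, e))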

[ceph-users] Ceph on Azure ?

2018-12-17 Thread LuD j
Hello, We are working to integrate the S3 protocol into our web applications. The objective is to stop storing documents in the database or on the filesystem and to use S3 buckets instead. We already gave Ceph a try with the RADOS Gateway on physical nodes, and it is working well. But we are also on Azure, and we can'
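For the application side, talking to an RGW (or any S3-compatible) endpoint is an ordinary boto3 client. A minimal sketch, where the endpoint URL, credentials, bucket and key names are all placeholders:

    # Store a document in an S3 bucket instead of the database/filesystem.
    import boto3

    s3 = boto3.client(
        's3',
        endpoint_url='http://rgw.example.com:7480',   # placeholder RGW/S3 endpoint
        aws_access_key_id='ACCESS_KEY',               # placeholder credentials
        aws_secret_access_key='SECRET_KEY')

    s3.create_bucket(Bucket='app-documents')
    with open('invoice.pdf', 'rb') as fh:
        s3.put_object(Bucket='app-documents', Key='invoices/invoice.pdf', Body=fh)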

[ceph-users] Ceph Meetings Canceled for Holidays

2018-12-17 Thread Mike Perez
Hi all, Ceph meetings are canceled until January 7th to observe various upcoming holidays. You can stay up to date with meetings by subscribing to the Ceph community calendar via: Google calendar: https://calendar.google.com/calendar/b/1?cid=OXRzOWM3bHQ3dTF2aWMyaWp2dnFxbGZwbzBAZ3JvdXAuY2FsZW5kYX

[ceph-users] Omap issues - metadata creating too many

2018-12-17 Thread Josef Zelenka
Hi everyone, I'm running a Luminous 12.2.5 cluster with 6 hosts on Ubuntu 16.04 - 12 HDDs for data each, plus 2 SSD metadata OSDs (three nodes have an additional SSD I added to have more space to rebalance the metadata). Currently, the cluster is used mainly as radosgw storage, with 28 TB of data
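A quick way to see which bucket index objects are carrying the omap load is to count omap keys per object in the index pool. A rough sketch driving the rados CLI from Python (the pool name default.rgw.buckets.index is the Luminous default and an assumption about this setup):

    # Count omap keys per RGW bucket index object, largest first - illustrative.
    import subprocess

    POOL = 'default.rgw.buckets.index'   # assumed default RGW index pool

    objs = subprocess.check_output(['rados', '-p', POOL, 'ls']).decode().split()
    counts = []
    for obj in objs:
        keys = subprocess.check_output(['rados', '-p', POOL, 'listomapkeys', obj])
        counts.append((len(keys.splitlines()), obj))
    for n, obj in sorted(counts, reverse=True)[:20]:
        print('%8d  %s' % (n, obj))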

Re: [ceph-users] Luminous radosgw S3/Keystone integration issues

2018-12-17 Thread Burkhard Linke
Hi, On 12/17/18 11:42 AM, Dan van der Ster wrote: Hi all, Bringing up this old thread with a couple of questions: 1. Did anyone ever follow up on the 2nd part of this thread? -- is there any way to cache keystone EC2 credentials? I don't think this is possible. The AWS signature algorithms inv
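The reason the secret has to be available to whoever validates the request is visible in the SigV4 signing chain itself: the signature is an HMAC keyed (indirectly) with the secret key, so the gateway can only verify it by recomputing it. A short illustration of that derivation per the published AWS Signature Version 4 scheme (all values are placeholders):

    # AWS Signature Version 4 signing-key derivation - the secret is the HMAC root.
    import hashlib
    import hmac

    def _hmac(key, msg):
        return hmac.new(key, msg.encode(), hashlib.sha256).digest()

    secret = 'SECRET_KEY'                 # placeholder EC2/S3 secret key
    string_to_sign = '...'                # canonical-request digest etc. (elided)

    k_date = _hmac(('AWS4' + secret).encode(), '20181217')
    k_region = _hmac(k_date, 'default')   # placeholder region name
    k_service = _hmac(k_region, 's3')
    k_signing = _hmac(k_service, 'aws4_request')
    signature = hmac.new(k_signing, string_to_sign.encode(), hashlib.sha256).hexdigest()
    print(signature)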

Re: [ceph-users] Luminous radosgw S3/Keystone integration issues

2018-12-17 Thread Dan van der Ster
Hi all, Bringing up this old thread with a couple of questions: 1. Did anyone ever follow up on the 2nd part of this thread? -- is there any way to cache keystone EC2 credentials? 2. A question for Valery: could you please explain exactly how you added the EC2 credentials to the local backend (your
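On question 2, one way this is commonly done (a guess at the general approach, not necessarily what Valery did) is to create a local RGW user whose access/secret pair matches the Keystone EC2 credential, for example by driving radosgw-admin. A hedged sketch with placeholder values:

    # Mirror a Keystone EC2 credential into RGW's local auth backend - sketch only.
    import subprocess

    uid = 'openstack-project-1234'             # placeholder local uid
    access_key = 'EC2_ACCESS_FROM_KEYSTONE'    # placeholder EC2 access key
    secret_key = 'EC2_SECRET_FROM_KEYSTONE'    # placeholder EC2 secret key

    subprocess.check_call([
        'radosgw-admin', 'user', 'create',
        '--uid', uid,
        '--display-name', uid,
        '--access-key', access_key,
        '--secret', secret_key,
    ])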

Re: [ceph-users] MON dedicated hosts

2018-12-17 Thread Martin Verges
Hello, we do not see a problem with a small cluster having 3 MONs on OSD hosts. However, we do suggest using 5 MONs. Nearly every one of our customers does this without a problem! Please just make sure to have enough CPU/RAM/disk available. So: 1. No, not necessary - only if you want to spend more money tha

Re: [ceph-users] MON dedicated hosts

2018-12-17 Thread Konstantin Shalygin
We've run a 50TB cluster with 3 MON services on the same nodes as the OSDs. We are planning to upgrade to 200TB, and I have 2 questions: 1. Should we move the MON services to dedicated hosts? If you think about stability, simplicity and redundancy - yes. 2. From your experience, at what size of

[ceph-users] MON dedicated hosts

2018-12-17 Thread Sam Huracan
Hi everybody, We've run a 50TB cluster with 3 MON services on the same nodes as the OSDs. We are planning to upgrade to 200TB, and I have 2 questions: 1. Should we move the MON services to dedicated hosts? 2. From your experience, at what size of cluster should we consider putting the MONs on dedicated hos
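Before planning such a move it is worth confirming the current monitor membership and quorum from the client side. A minimal sketch using the python-rados binding (assuming python-rados is installed and /etc/ceph/ceph.conf plus an admin keyring are readable; this issues the same command as "ceph quorum_status"):

    # Query the monitor quorum via librados - illustrative sketch.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ret, outbuf, errs = cluster.mon_command(
            json.dumps({"prefix": "quorum_status", "format": "json"}), b'')
        status = json.loads(outbuf)
        print("monitors in quorum:", status.get("quorum_names"))
    finally:
        cluster.shutdown()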

Re: [ceph-users] ceph remote disaster recovery plan

2018-12-17 Thread Zhenshi Zhou
Any ideas would be appreciated :) Zhenshi Zhou wrote on Thursday, December 13, 2018 at 6:01 PM: > Hi all, > > I'm running a Luminous cluster with tens of OSDs and > the cluster runs well. As the data grows, Ceph becomes > more and more important. > > What worries me is that many services will go down if the > cluster is out