Re: [ceph-users] [rgw] civetweb behind haproxy doesn't work with absolute URI

2018-03-31 Thread Matt Benjamin
I think if you haven't defined it in the Ceph config, it's disabled? Matt On Sat, Mar 31, 2018 at 4:59 PM, Rudenko Aleksandr wrote: > Hi, Sean. > > Thank you for the reply. > > What does it mean: “We had to disable "rgw dns name" in the end”? > > "rgw_dns_name": “”, has no
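The thread turns on whether `rgw dns name` is set in the Ceph config; leaving it unset is what "disabling" it means here. A hypothetical ceph.conf sketch (section name and hostname are illustrative, not from the thread), showing the option left unset so radosgw does not expect virtual-hosted-style Host headers when haproxy forwards absolute URIs:

```ini
# Hypothetical fragment -- illustrative only, not taken from the thread.
[client.rgw.gateway1]
rgw frontends = civetweb port=7480
# Leaving "rgw dns name" unset (the default) disables the
# virtual-hosted-style bucket matching discussed in this thread:
# rgw dns name = s3.example.com
```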

Re: [ceph-users] [rgw] civetweb behind haproxy doesn't work with absolute URI

2018-03-31 Thread Rudenko Aleksandr
Hi, Sean. Thank you for the reply. What does it mean: “We had to disable "rgw dns name" in the end”? "rgw_dns_name": “”, has no effect for me. On 29 Mar 2018, at 11:23, Sean Purdy wrote: We had something similar recently. We had


Re: [ceph-users] 1 mon unable to join the quorum

2018-03-31 Thread Julien Lavesque
At first the cluster has been deployed using ceph-ansible in version infernalis. For some unknown reason the controller02 was out of the quorum and we were unable to add it in the quorum. We have updated the cluster to jewel version using the rolling-update playbook from ceph-ansible The
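For a monitor stuck outside the quorum as described above, the usual first step is to compare what the cluster and the stray mon each believe. A hedged diagnostic sketch using standard Ceph CLI commands (the mon name `controller02` is taken from the thread; run against a live cluster, so output will vary):

```shell
# Quorum as seen by the cluster
ceph -s
ceph quorum_status --format json-pretty

# State as seen by the stray monitor itself, via its admin socket
ceph daemon mon.controller02 mon_status

# Compare the monmaps: a mismatched monmap or fsid is a common
# reason a mon cannot rejoin after an upgrade
ceph mon dump
```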

[ceph-users] [Hammer][Simple Msg]Cluster can not work when Accepter::entry quit

2018-03-31 Thread yu2xiangyang
Hi cephers, Recently there has been a big problem in our production ceph cluster. It had been running very well for one and a half years. The RBD client network and the ceph public network are different, communicating through a router. Our ceph version is 0.94.5. Our IO transport uses the Simple Messenger.

Re: [ceph-users] Bluestore caching, flawed by design?

2018-03-31 Thread Jack
On 03/31/2018 03:24 PM, Mark Nelson wrote: >> 1. Completely new users may think that bluestore defaults are fine and >> waste all that RAM in their machines. > > What does "wasting" RAM mean in the context of a node running ceph? Are > you upset that other applications can't come in and evict
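The "wasted RAM" debate above is about the BlueStore cache defaults. A hypothetical ceph.conf fragment showing the knobs in question (the values shown are my understanding of the Luminous-era defaults, not figures from the thread; verify against your release):

```ini
# Hypothetical fragment -- illustrative only, not taken from the thread.
[osd]
# Per-OSD BlueStore cache; tune down if the node is memory-constrained
bluestore cache size hdd = 1073741824   # 1 GiB for HDD-backed OSDs
bluestore cache size ssd = 3221225472   # 3 GiB for SSD-backed OSDs
```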

Re: [ceph-users] Bluestore caching, flawed by design?

2018-03-31 Thread Mark Nelson
On 03/29/2018 08:59 PM, Christian Balzer wrote: Hello, my crappy test cluster was rendered inoperational by an IP renumbering that wasn't planned and forced on me during a DC move, so I decided to start from scratch and explore the fascinating world of Luminous/bluestore and all the assorted