We have multiple interfaces on our Rados gateway node, each of which is
assigned to one of our many VLANs with a unique IP address.
Is it possible to set multiple DNS names for a single Rados GW, so it can
handle requests to the DNS names of each of the VLAN-specific IP addresses?
eg.
rgw dns name =
... Trying to send again after reporting bounce backs to dreamhost ...
... Trying to send one more time after seeing mails come through the
list today ...
Hi all,
First off, I should point out that this is a 'small cluster' issue and
may well be due to the stretched resources. If I'm doomed to
I didn't have a need for this kind of setup, but since you already need an
HTTP server (Apache, nginx, etc.) to proxy requests to rgw, you could
set up all the domains on it and use only one when forwarding.
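To make the suggestion concrete, here is a minimal nginx sketch of that approach; all hostnames and the upstream port are hypothetical, and the idea is simply that the proxy accepts every VLAN-specific name but forwards a single canonical Host header, so rgw only needs one "rgw dns name":

```nginx
# Hypothetical names and port; rgw assumed listening on 127.0.0.1:7480
server {
    listen 80;
    # Accept every VLAN-specific DNS name here
    server_name rgw.vlan10.example.com rgw.vlan20.example.com rgw.vlan30.example.com;

    location / {
        proxy_pass http://127.0.0.1:7480;
        # Forward one canonical name so rgw sees a single configured DNS name
        proxy_set_header Host rgw.example.com;
    }
}
```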
On 2/21/15 1:58, Shinji Nakamoto wrote:
We have multiple interfaces on our Rados gateway node,
On Wed, Feb 18, 2015 at 9:19 PM, Florian Haas flor...@hastexo.com wrote:
Hey everyone,
I must confess I'm still not fully understanding this problem and
don't exactly know where to start digging deeper, but perhaps other
users have seen this and/or it rings a bell.
System info: Ceph giant
On 02/23/2015 12:21 PM, Florian Haas wrote:
On Wed, Feb 18, 2015 at 9:19 PM, Florian Haas flor...@hastexo.com wrote:
Hey everyone,
I must confess I'm still not fully understanding this problem and
don't exactly know where to start digging deeper, but perhaps other
users have seen this and/or
On 02/23/2015 01:41 PM, Nick Fisk wrote:
Hi Mark,
Thanks for publishing these results; they are very interesting. I was wondering
if you could spare a few minutes to answer a few questions.
1. Can you explain why, in the read graphs, the runs are of different lengths of
time? At first I
Hello,
Are librados write ops atomic? I mean, what happens if two different clients
try to write the same object with the same content?
Regards.
Italo Santos
http://italosantos.com.br/
___
ceph-users mailing list
ceph-users@lists.ceph.com
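On the question itself: RADOS applies each write to a single object atomically, so two clients writing the same object never produce an interleaved result; one complete write wins as a whole. A minimal python-rados sketch of such a write follows; the pool, object name, and conffile path are hypothetical, and the import is done lazily so the sketch parses without a cluster:

```python
def atomic_write(pool, obj, data, conffile="/etc/ceph/ceph.conf"):
    """Write `data` to `obj` in one atomic write_full() call.

    RADOS guarantees the whole-object replacement is applied atomically:
    concurrent writers cannot leave a torn/interleaved object behind.
    """
    import rados  # python-rados bindings; imported lazily (needs a live cluster to run)
    cluster = rados.Rados(conffile=conffile)
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx(pool)
        try:
            # write_full replaces the entire object in a single atomic op
            ioctx.write_full(obj, data)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
```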
Hi all,
I'm currently facing a strange problem when deploying giant on Ubuntu
14.04. Following the docs and running
ceph-deploy mon create-initial
leaves every single mon on the three mon hosts stuck with a companion
ceph-create-keys waiting forever.
ceph-deploy quits after waiting for
- Original Message -
From: Shinji Nakamoto shinji.nakam...@mgo.com
To: ceph-us...@ceph.com
Sent: Friday, February 20, 2015 3:58:39 PM
Subject: [ceph-users] RadosGW - multiple dns names
We have multiple interfaces on our Rados gateway node, each of which is
assigned to one of our
Hi Stephan,
Could you share the /etc/ceph/ceph.conf content? Maybe ceph-create-keys cannot
reach the monitor?
Cheers
On 23/02/2015 22:53, Stephan Seitz wrote:
Hi all,
I'm currently facing a strange problem when deploying giant on Ubuntu
14.04. Following the docs, and reaching to
Hi Stephan,
I've just had a very similar problem today. It turned out that the problem was
that the mgmt network I use to manage the nodes is different from the public
network.
I had created host entries resolving to the mgmt network IPs, so when the initial
create was running it was trying to
Hello everyone,
In my rather heterogeneous setup ...
-1 54.36  root default
-2 42.44      room 2.162
-4  6.09          host le09091
 3  2.03              osd.3  up  1
 1  2.03              osd.1  up  1
 9  2.03
Hello,
On Tue, 24 Feb 2015 06:28:25 + frank.zirkelb...@lew-verteilnetz.de
wrote:
Hello everyone,
In my rather heterogeneous setup ...
-1 54.36  root default
-2 42.44      room 2.162
-4  6.09          host le09091
 3  2.03
Running Calamari v1.2.3.1 and hit an oddity:
Cluster has registered successfully and all graphs display.
Only the KEY VALUE tables from
Manage Cluster Hosts (click the 'i' icon)
are all empty.
Manually running e.g. salt '*' grains.item num_cpus works.
salt '*' ceph.get_heartbeats works.
Sorry this is delayed; catching up. I believe this was talked about at
the last Ceph summit. I think this was the blueprint:
https://wiki.ceph.com/Planning/Blueprints/Hammer/Towards_Ceph_Cold_Storage
On Wed, Jan 14, 2015 at 9:35 AM, Martin Millnert mar...@millnert.se wrote:
Hello list,
I'm