Hi John
Why? Do the 'service' scripts not work? (Sorry, I don't have access to the
systems from my location.) I used Dumpling and ceph-deploy on Debian.
-gary
On Mon, Sep 23, 2013 at 11:25 PM, John Wilkins john.wilk...@inktank.com wrote:
I will update the Cephx docs. The usage in those
On 23.09.2013 21:56:56, Alfredo Deza wrote:
On Mon, Sep 23, 2013 at 11:23 AM, Bernhard Glomm
bernhard.gl...@ecologic.eu wrote:
Hi all,
something with ceph-deploy doesn't work at all anymore.
After an upgrade ceph-deploy failed to roll out a new monitor
with permission
Hi ceph-users,
I deployed a Ceph cluster (including RadosGW) with ceph-deploy on
RHEL6.4; during the deployment I ran into a couple of questions which need
your help.
1. I followed the steps http://ceph.com/docs/master/install/rpm/ to deploy the
RadosGW node, however, after the deployment,
John Wilkins wrote:
Clients use the public network. The cluster network is principally for
OSD-to-OSD communication--heartbeats, replication, backfill, etc.
Hmm, well, I'm aware of this, but the question is whether it is nevertheless
possible, i.e. is it actively prohibited or just not
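For context, both networks are declared in ceph.conf; a minimal sketch, with
made-up placeholder subnets:

[global]
    public network = 192.168.0.0/24
    cluster network = 10.0.0.0/24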
Hi there,
I want to set the flag hashpspool on an existing pool. ceph osd pool set
{pool-name} {field} {value} does not seem to work. So I wonder how I can
set/unset flags on pools?
Corin
On 09/24/2013 10:22 AM, Corin Langosch wrote:
Hi there,
I want to set the flag hashpspool on an existing pool. ceph osd pool
set {pool-name} {field} {value} does not seem to work. So I wonder how
I can set/unset flags on pools?
I believe that at the moment you'll only be able to have that
On 09/20/2013 10:27 AM, Maciej Gałkiewicz wrote:
Hi guys
Do you have any list of companies that use Ceph in production?
regards
Inktank has a list of customers up on the site:
http://www.inktank.com/customers/
-Joao
--
Joao Eduardo Luis
Software Engineer | http://inktank.com |
On 09/23/2013 10:10 AM, Fuchs, Andreas (SwissTXT) wrote:
I'm following different threads here, mainly the poor radosgw performance one.
And what I see there are often recommendations to put a certain config option
into ceph.conf, but often it's unclear to me where exactly to put them
- does it matter if
On Tue, Sep 24, 2013 at 3:27 AM, Bernhard Glomm
bernhard.gl...@ecologic.eu wrote:
On 23.09.2013 21:56:56, Alfredo Deza wrote:
On Mon, Sep 23, 2013 at 11:23 AM, Bernhard Glomm
bernhard.gl...@ecologic.eu wrote:
Hi all,
something with ceph-deploy doesn't work at all anymore.
After an
On Tue, Sep 24, 2013 at 6:44 AM, bernhard glomm
bernhard.gl...@ecologic.eu wrote:
From: bernhard glomm bernhard.gl...@ecologic.eu
Subject: Re: [ceph-users] ceph-deploy again
Date: September 24, 2013 11:47:00 AM GMT+02:00
To: Fuchs, Andreas (SwissTXT) andreas.fu...@swisstxt.ch
Authentication works. I was interested in trying it without authentication. I
didn't see the upstart link earlier.
Is the plan to only use upstart and not service for Dumpling and beyond?
Tim
From: Gary Mazzaferro [mailto:ga...@oedata.com]
Sent: Tuesday, September 24, 2013 1:16 AM
To: John
On Tue, Sep 24, 2013 at 12:46 AM, Guang yguan...@yahoo.com wrote:
Hi ceph-users,
I deployed a Ceph cluster (including RadosGW) with ceph-deploy on
RHEL6.4; during the deployment I ran into a couple of questions which need
your help.
1. I followed the steps
I did the same thing, restarted with upstart, and I still need to use
authentication. Not sure why yet. Maybe I didn't change the /etc/ceph
configs on all the nodes
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Snider, Tim
Sent:
On 24.09.2013 12:24, Joao Eduardo Luis wrote:
I believe that at the moment you'll only be able to have that flag set on a
pool at creation time, if 'osd pool default flag hashpspool = true' is in your conf.
I just updated my config like this:
[osd]
osd journal size = 100
filestore xattr
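For reference, a minimal sketch of the default Joao mentions; placing it
under [global] is an assumption based on common practice:

[global]
    osd pool default flag hashpspool = true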
Hi there,
do snapshots have an impact on write performance? I assume on each write all
snapshots have to get updated (COW), so the more snapshots exist, the worse
write performance will get?
Is there any way to see how much disk space a snapshot occupies? I assume
because of COW snapshots
From your pastie details, it looks like you are using auth supported
= none. That's pre-0.51, as noted in the documentation. Perhaps I
should mark the old usage as deprecated, or omit it entirely.
It should look like this:
auth cluster required = none
auth service required = none
auth client required = none
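Put together, a minimal ceph.conf sketch of the post-0.51 style (placing the
lines under [global] is an assumption based on common practice):

[global]
    auth cluster required = none
    auth service required = none
    auth client required = none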
On Tue, Sep 24, 2013 at 1:14 AM, Kurt Bauer kurt.ba...@univie.ac.at wrote:
John Wilkins wrote:
Clients use the public network. The cluster network is principally for
OSD-to-OSD communication--heartbeats, replication, backfill, etc.
Hmm, well, I'm aware of this, but the question is whether it
Is the form: auth cluster required = none or auth_cluster_required = none?
(underscores as word separators)
-----Original Message-----
From: John Wilkins [mailto:john.wilk...@inktank.com]
Sent: Tuesday, September 24, 2013 11:43 AM
To: Aronesty, Erik
Cc: Snider, Tim; Gary Mazzaferro;
Either one should work. For RHEL, CentOS, etc., use sysvinit.
I rewrote the ops doc, but it's in a wip branch right now. Here:
http://ceph.com/docs/wip-doc-quickstart/rados/operations/operating/
I still may make some edits to it, but follow the sysvinit section.
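As a rough sketch of the sysvinit usage that section covers (the daemon ID
here is a placeholder, and -a acting across the hosts in ceph.conf is my
reading of it):

sudo /etc/init.d/ceph start              # start all Ceph daemons on this host
sudo /etc/init.d/ceph restart osd.0      # restart a single daemon
sudo service ceph -a start               # same via service(8), -a for all hosts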
On Tue, Sep 24, 2013 at 10:08
Hi
I want to use Ceph and KVM with RBD to host MySQL and Oracle.
I have already used KVM with iSCSI, but with a DBMS it suffers from I/O limitations.
Are there people who have good or bad experience hosting a DBMS?
Thanks
This noshare option may have just helped me a ton -- I sure wish I would
have asked similar questions sooner, because I have seen the same failure
to scale. =)
One question -- when using the noshare option (or really, even without
it) are there any practical limits on the number of RBDs that can
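For later readers, a sketch of how noshare is passed when mapping kernel RBD
images; the -o syntax and the pool/image names are assumptions on my part:

# each mapping gets its own client instance instead of sharing one
sudo rbd map rbd/vol1 -o noshare
sudo rbd map rbd/vol2 -o noshare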
On Tue, 24 Sep 2013, Travis Rhoden wrote:
This noshare option may have just helped me a ton -- I sure wish I would
have asked similar questions sooner, because I have seen the same failure to
scale. =)
One question -- when using the noshare option (or really, even without it)
are there any
On Tue, Sep 24, 2013 at 5:16 PM, Sage Weil s...@inktank.com wrote:
On Tue, 24 Sep 2013, Travis Rhoden wrote:
This noshare option may have just helped me a ton -- I sure wish I would
have asked similar questions sooner, because I have seen the same failure to
scale. =)
One question -- when
Hi Sage,
We did quite a few experiments to see how Ceph read performance can scale up.
Here is the summary.
1.
First we tried to see how far a single-node cluster with one OSD can scale up.
We started with the Cuttlefish release, with the entire OSD file system on the
SSD. What we saw with 4K
Hi Somnath!
On Tue, 24 Sep 2013, Somnath Roy wrote:
Hi Sage,
We did quite a few experiments to see how Ceph read performance can scale up.
Here is the summary.
1.
First we tried to see how far a single-node cluster with one OSD can scale
up. We started with the Cuttlefish release
Hi Sage,
Thanks for your input. I will try those. Please see my response inline.
Thanks & Regards
Somnath
-Original Message-
From: Sage Weil [mailto:s...@inktank.com]
Sent: Tuesday, September 24, 2013 3:47 PM
To: Somnath Roy
Cc: Travis Rhoden; Josh Durgin; ceph-de...@vger.kernel.org;
You need to add a line to /etc/lvm/lvm.conf:
types = [ rbd, 1024 ]
It should be in the devices section of the file.
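A minimal sketch of that stanza; the quotes follow lvm.conf's usual string
syntax (the line above shows the type name unquoted):

devices {
    # let LVM consider /dev/rbd* block devices
    types = [ "rbd", 1024 ]
}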
On Tue, Sep 24, 2013 at 5:00 PM, John-Paul Robinson j...@uab.edu wrote:
Hi,
I'm exploring a configuration with multiple Ceph block devices used with
LVM. The goal is to