Hi,
I am new to Ceph. When the mon starts up, ceph -s shows "no monitors
specified to connect to. Error connecting to cluster: ObjectNotFound"
(even on the mon node itself). What could be the reason?
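If ceph.conf on that node contains no monitor addresses at all, the client
cannot build a monmap and fails exactly like this. A quick way to test that
theory, assuming a mon at 192.0.2.10 (placeholder address) and the default
admin keyring path:
# ceph -s -m 192.0.2.10:6789 --keyring /etc/ceph/ceph.client.admin.keyring
If that works, adding a mon host line (or [mon.X] sections with mon addr)
to /etc/ceph/ceph.conf usually resolves it.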
Hi,
In the last few days this PG (pool .rgw.buckets) has been in an error state
after running the scrub process.
After getting the error, and trying to see what the issue may be (and
finding none), I've just issued a ceph repair followed by a ceph
deep-scrub. However it doesn't seem to have fixed the
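For reference, a rough sequence for an inconsistent PG looks like the
following; the PG id 11.3f is only a placeholder taken from ceph health
detail:
# ceph health detail | grep inconsistent
# ceph pg repair 11.3f
# ceph pg deep-scrub 11.3f
# ceph -w
Keep in mind that repair copies from the primary OSD, so if the primary
holds the damaged replica this alone may not be enough.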
Hi all,
I have a small ceph cluster with just 2 OSDs, latest firefly.
Default data, metadata and rbd pools were created with size=3 and min_size=1.
An additional pool rbd2 was created with size=2 and min_size=1.
This would give me a warning status, saying that 64 pgs were
active+clean and 192
Hi all,
I fixed the issue with the following commands:
# ceph osd pool set data size 1
(wait a few seconds until 64 more pgs are active+clean)
# ceph osd pool set data size 2
# ceph osd pool set metadata size 1
(wait a few seconds until 64 more pgs are active+clean)
# ceph osd pool set metadata size 2
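To double-check that the sizes took and the PGs settled, something along
these lines can be used:
# ceph osd pool get data size
# ceph osd pool get metadata size
# ceph -s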
On 10/12/14 07:36, Vivek Varghese Cherian wrote:
Hi,
I am trying to integrate OpenStack Juno Keystone with the Ceph Object
Gateway (radosgw).
I want to use Keystone as the user authority. A user that Keystone
authorizes to access the gateway will also be created on the radosgw.
Tokens that
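For a firefly-era radosgw, the Keystone wiring usually ends up in the
gateway section of ceph.conf; a minimal sketch, where the URL, token and
nss path are placeholders to adapt:
[client.radosgw.gateway]
    rgw keystone url = http://keystone-host:35357
    rgw keystone admin token = ADMIN_TOKEN
    rgw keystone accepted roles = Member, admin
    rgw keystone token cache size = 500
    rgw s3 auth use keystone = true
    nss db path = /var/lib/ceph/nss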
Hi All,
This is a new release of ceph-deploy that defaults to installing the
Giant release of Ceph.
Additionally, there are a couple of bug fixes that make sure that
calls to 'gatherkeys' return non-zero upon failure, and that the EPEL
repo is properly enabled as a prerequisite to installation
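For anyone selecting the release by hand rather than taking the new
default, something like this should do (hostnames are placeholders):
$ ceph-deploy install --release giant node1 node2
$ ceph-deploy gatherkeys mon1 || echo "gatherkeys failed"
The second line now returns non-zero when key collection fails, which is
what the fix above is about.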
On Tue, Dec 9, 2014 at 3:11 PM, Christopher Armstrong
ch...@opdemand.com wrote:
Hi folks,
I think we have a bit of confusion around how initial members is used. I
understand that we can specify a single monitor (or a subset of monitors) so
that the cluster can form a quorum when it first
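For what it's worth, the two settings play different roles: mon initial
members only gates initial quorum formation, while clients are expected to
find the cluster through mon host (or per-mon mon addr). A hedged ceph.conf
sketch with placeholder names and addresses:
[global]
    mon initial members = mon1
    mon host = 10.0.0.1, 10.0.0.2, 10.0.0.3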
Thanks Greg - I thought the same thing, but confirmed with the user that it
appears the radosgw client is indeed using initial members - when he added
all of his hosts to initial members, things worked just fine. In either
event, all of the monitors were always fully enumerated later in the config
I'm a big fan of /etc/*.d/ configs. Basically if the package-maintained
/etc/ceph.conf includes all files in /etc/ceph.d/ then I can break up the
files however I'd like (mon, osd, mds, client, one per daemon, etc). Then
when upgrading, I don't have to worry about the new packages trying to
On Wed, 10 Dec 2014, Robert LeBlanc wrote:
I'm a big fan of /etc/*.d/ configs. Basically if the package-maintained
/etc/ceph.conf includes all files in /etc/ceph.d/ then I can break up the
files however I'd like (mon, osd, mds, client, one per daemon, etc). Then
when upgrading, I don't have to
On Wed, 10 Dec 2014, Robert LeBlanc wrote:
I guess you would have to specify the cluster name in /etc/ceph/ceph.conf?
That would be my only concern.
ceph.conf is $cluster.conf (default cluster name is 'ceph').
Unfortunately under systemd it's not possible to parameterize daemons with
two
On Wed, 10 Dec 2014, Robert LeBlanc wrote:
If the cluster is specified in /etc/default/ceph then I don't have any other
reservations about your proposal.
Great. I like it much better than putting something in /var/lib/ceph/*
(because it's bad policy) and this nicely sidesteps the 'include file'
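A sketch of what that could look like in a systemd template unit, assuming
the cluster name is exported from /etc/default/ceph (this is not a shipped
unit file):
[Service]
Environment=CLUSTER=ceph
EnvironmentFile=-/etc/default/ceph
ExecStart=/usr/bin/ceph-osd -f --cluster ${CLUSTER} --id %i
The single systemd instance parameter (%i) then only has to carry the
daemon id.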
What version is he running?
Joao, does this make any sense to you?
-Greg
On Wed, Dec 10, 2014 at 11:54 AM, Christopher Armstrong
ch...@opdemand.com wrote:
Thanks Greg - I thought the same thing, but confirmed with the user that it
appears the radosgw client is indeed using initial members -
I would suggest that it would be:
/etc/ceph/$cluster.conf
/etc/ceph/$cluster.d/00-a.conf
/etc/ceph/$cluster.d/01-b.conf
This makes sure there is no conflict with anything else in /etc/ and makes
it easy to always find the ceph configs instead of having them mixed in
with everything else. I also like the
Hi,
root@ppm-c240-ceph3:~# /usr/bin/radosgw -n client.radosgw.gateway -d
--log-to-stderr
2014-12-09 12:51:31.410944 7f073f6457c0 0 ceph version 0.80.7
(6c0127fcb58008793d3c8b62d925bc91963672a3), process radosgw, pid 5958
common/ceph_crypto.cc: In function 'void
Craig, Gregory,
my disks were a bit smaller than 10GB. I replaced them with 20GB disks and
the cluster's health went back to OK.
Thanks a lot
2014-12-10 0:08 GMT+01:00 Craig Lewis cle...@centraldesktop.com:
When I first created a test cluster, I used 1 GiB disks. That causes
problems.
Ceph has a
Does anyone know of any good web interface for ceph?
I tried Calamari but it looks like it's more for RHEL 6?
Hello Cephers
I am observing in .87 and .89 that rgw occupies a lot more disk space than
the objects and .rgw.buckets has thousands of _shadow and _multipart
objects.
After deleting the S3 objects, the rados objects still remain.
radosgw-admin gc list is empty
radosgw-admin gc process
doesn't
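In case it helps, the gc queue can also be listed with --include-all,
which shows entries whose deferral window has not expired yet, and the raw
shadow object count can be checked directly:
# radosgw-admin gc list --include-all
# radosgw-admin gc process
# rados -p .rgw.buckets ls | grep -c shadow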
On Wed, 10 Dec 2014, Robert LeBlanc wrote:
I would suggest that it would be:
/etc/ceph/$cluster.conf
/etc/ceph/$cluster.d/00-a.conf
/etc/ceph/$cluster.d/01-b.conf
Yeah
If it is like other config files, options later in the file override options
earlier in the file. That makes it easy to
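A hedged illustration of how the proposed layout would resolve a
conflicting option if fragments are read in lexical order (nothing in Ceph
reads a ceph.d/ directory today, this is only the proposal):
; /etc/ceph/ceph.d/00-a.conf
[osd]
osd journal size = 1024
; /etc/ceph/ceph.d/01-b.conf
[osd]
osd journal size = 4096    <- read later, so this value would win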
Hi,
On 08.12.2014 20:23, Sanders, Bill wrote:
I've just stood up a Ceph cluster for some experimentation. Unfortunately,
we're having some performance and stability problems I'm trying to pin down.
More unfortunately, I'm new to Ceph, so I'm not sure where to start looking
for the
Hi,
On 08.12.2014 20:23, Sanders, Bill wrote:
I've just stood up a Ceph cluster for some experimentation. Unfortunately,
we're having some performance and stability problems I'm trying to pin down.
More unfortunately, I'm new to Ceph, so I'm not sure where to start looking
for the
Thank you for the reply, Florian.
Yes, MON data is on the RAID drive. Is it recommended to get its own drive?
What do MON writes look like in terms of size/frequency?
So slow I/Os and monitor elections are a symptom of not enough disk
performance? Even though the disks don't appear to be
Hi,
On 11.12.2014 00:22, Sanders, Bill wrote:
Thank you for the reply, Florian.
Yes, MON data is on the RAID drive. Is it recommended to get its own drive?
What do MON writes look like in terms of size/frequency?
So slow I/Os and monitor elections are a symptom of not enough disk
On 12/10/2014 09:05 PM, Gregory Farnum wrote:
What version is he running?
Joao, does this make any sense to you?
From the MonMap code I'm pretty sure that the client should have built
the monmap from the [mon.X] sections, and solely based on 'mon addr'.
'mon_initial_members' is only useful
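In other words, a client config that only defines mons per-section would
look roughly like this (names and addresses are placeholders), and that is
the path the monmap-building code takes when there is no mon host entry in
[global]:
[mon.a]
    host = mon1
    mon addr = 10.0.0.1:6789
[mon.b]
    host = mon2
    mon addr = 10.0.0.2:6789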
Hello,
I think this might very well be my poor, unacknowledged bug report:
http://tracker.ceph.com/issues/10012
People with a mon_hosts entry in [global] (as created by ceph-deploy) will
be fine, people with mons specified outside of [global] will not.
Regards,
Christian
On Thu, 11 Dec 2014
Hi, I tried to download the firefly rpm package, but found two rpms in
different folders. What is the difference between 0.87.0 and 0.80.7?
http://ceph.com/rpm/el6/x86_64/ceph-0.87-0.el6.x86_64.rpm
http://ceph.com/rpm-firefly/el6/x86_64/ceph-0.80.7-0.el6.x86_64.rpm
Wei Cao (Buddy)
We're running Ceph entirely in Docker containers, so we couldn't use
ceph-deploy due to the requirement of having a process management daemon
(upstart, in Ubuntu's case). So, I wrote things out and templated them
myself following the documentation.
Thanks for linking the bug, Christian! You saved
Hi, Cao.
https://github.com/ceph/ceph/commits/firefly
2014-12-11 5:00 GMT+03:00 Cao, Buddy buddy@intel.com:
Hi, I tried to download the firefly rpm package, but found two rpms
in different folders. What is the difference between 0.87.0 and 0.80.7?
On 11/12/14 02:33, Vivek Varghese Cherian wrote:
Hi,
root@ppm-c240-ceph3:~# /usr/bin/radosgw -n client.radosgw.gateway -d
--log-to-stderr
2014-12-09 12:51:31.410944 7f073f6457c0 0 ceph version 0.80.7
(6c0127fcb58008793d3c8b62d925bc91963672a3), process