I'm getting the same issue with one of my OSDs.
Calculating dependencies... done!
[ebuild R ~] app-arch/snappy-1.1.0 USE=-static-libs 0 kB
[ebuild R ~] dev-libs/leveldb-1.9.0-r5 USE=snappy -static-libs 0 kB
[ebuild R ~] sys-cluster/ceph-0.60-r1 USE=-debug -fuse -gtk -libatomic
Hello and thanks again!
Giving some help back to the community:
1. Please correct this doc
http://ceph.com/docs/master/start/quick-rgw/#create-a-gateway-configuration-file
2. I have successfully tested these clients: DragonDisk
(http://www.dragondisk.com/), CrossFTP (http://www.crossftp.com/), and
S3Browser.
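For anyone else who wants to test S3 clients against RGW, the main
prerequisite on the gateway side is an S3 user; a minimal sketch (the uid
and display name are just placeholders):

    radosgw-admin user create --uid="testuser" --display-name="Test User"

The access_key and secret_key in the output go into the client's connection
settings, with the gateway's hostname as the endpoint.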
Hello everybody,
I'm sending this here in case someone from the list is interested.
Ganeti [1] is a mentoring organization in this year's Google Summer
of Code and one of the Ideas proposed is:
Better support for RADOS/Ceph in Ganeti
Please see here:
Hello,
Are there any best practices for making RadosGW highly available?
For example, is it the right approach to create two or three RadosGW instances
(keys for ceph-auth, directories and so on) and to have something like this in ceph.conf:
[client.radosgw.a]
host = ceph01
...options...
[client.radosgw.b]
host =
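To make the question concrete, here is a fuller sketch of what I have in mind
(hostnames, paths and values are only examples), with an external load
balancer such as haproxy in front of both instances:

    [client.radosgw.a]
        host = ceph01
        keyring = /etc/ceph/keyring.radosgw.a
        rgw socket path = /var/run/ceph/radosgw.a.sock
        log file = /var/log/ceph/radosgw.a.log

    [client.radosgw.b]
        host = ceph02
        keyring = /etc/ceph/keyring.radosgw.b
        rgw socket path = /var/run/ceph/radosgw.b.sock
        log file = /var/log/ceph/radosgw.b.log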
Hi,
I've been trying to use RBD block devices recently. I have a running cluster
with 2 machines and 3 OSDs.
On a client machine, let's say A, I created an rbd image using `rbd create`,
then formatted, mounted and wrote something to it; everything was working
fine.
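For reference, what I did was roughly the following (image name, size and
device node are just examples; the actual device node may differ):

    rbd create test-img --size 10240    # 10 GB image in the default 'rbd' pool
    rbd map test-img                    # appears as e.g. /dev/rbd0 on client A
    mkfs.ext4 /dev/rbd0
    mount /dev/rbd0 /mnt/test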
However, a problem occurred when I
That is the expected behavior. RBD emulates a real device; you wouldn't
expect good things to happen if you were to plug the same drive into two
different machines at once (perhaps with some soldering). There is no built-in
mechanism for two machines to access the same block device
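If you only need to move the image between machines (rather than use it from
both at once), the safe pattern is to release it completely on one host before
mapping it on the other; a rough sketch, reusing the hypothetical image and
mount point from the previous mail:

    # on machine A
    umount /mnt/test
    rbd unmap /dev/rbd0

    # on machine B
    rbd map test-img
    mount /dev/rbd0 /mnt/test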
Hi everyone,
I'm setting up a test Ceph cluster and am having trouble getting it running
(great for testing, huh?). I went through the installation on Debian
squeeze and had to modify the mkcephfs script a bit because it calls
monmaptool with too many parameters in the $args variable (mine had --add
Here is my ceph.conf. I just figured out that the second host = isn't
necessary, though it is like that in the 5-minute quick start guide...
(Perhaps I'll submit the couple of fixes I've had to implement so far.)
That fixes the redefined host issue, but none of the others.
[global]
# For
Why would you update the 'rgw usage max user shards' setting? I don't really
understand what it's for. Thank you.
Wyatt,
A few notes:
- Yes, the second host = ceph under mon.a is redundant and should be
deleted.
- The line auth client required = cephx [osd] should simply be
auth client required = cephx, with [osd] starting its own section on the next line.
- It looks like you only have one OSD. You need at least as many (and
hopefully more) OSDs as your highest
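Concretely, the relevant parts of the file should end up looking something
like this (just a sketch with placeholder values, not your full config):

    [global]
        auth client required = cephx

    [mon.a]
        host = ceph        # listed only once

    [osd]
        # OSD settings start in their own section here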
Well, those points solved the issue of the redefined host and the
unidentified protocol. The
HEALTH_WARN 384 pgs degraded; 384 pgs stuck unclean; recovery 21/42
degraded (50.000%)
error is still an issue, though. Is this something simple like some hard
drive corruption that I can clean up with a
On Wed, May 1, 2013 at 9:29 AM, Jeppesen, Nelson
nelson.jeppe...@disney.com wrote:
Why would you update the 'rgw usage max user shards' setting? I don’t really
understand what it’s for. Thank you.
This param specifies the number of objects that the usage data is
written to. A higher number means
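If you do decide to change it, it is just a gateway-side option in ceph.conf;
for example (the section name and value are only illustrative):

    [client.radosgw.gateway]
        rgw enable usage log = true
        rgw usage max user shards = 8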
Hi Wyatt,
This is almost certainly a configuration issue. If I recall, there is a
min_size setting in the CRUSH rules for each pool that defaults to two,
which you may also need to reduce to one. I don't have the documentation
in front of me, so that's just off the top of my head...
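If it does turn out to be the pool's min_size, lowering it should be a
one-liner per pool, something like (the pool name is only an example):

    ceph osd pool set rbd min_size 1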
Dino
On
On Wed, May 1, 2013 at 1:32 PM, Dino Yancey dino2...@gmail.com wrote:
Hi Wyatt,
This is almost certainly a configuration issue. If I recall, there is a
min_size setting in the CRUSH rules for each pool that defaults to two, which
you may also need to reduce to one. I don't have the
[ Please keep all discussions on the list. :) ]
Okay, so you've now got just 128 PGs that are sad. Those are all in pool
2, which I believe is rbd; you'll need to set your replication
level to 1 on all pools and that should fix it. :)
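With the default pools that would be something along these lines (assuming
you haven't created any extra pools):

    ceph osd pool set data size 1
    ceph osd pool set metadata size 1
    ceph osd pool set rbd size 1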
Keep in mind that with 1x replication you've only got 1 copy of
I added a blueprint for extending the CRUSH rule language. If there are
interesting or strange placement policies you'd like to do and aren't able
to currently express using CRUSH, please help us out by enumerating them
on that blueprint.
Thanks!
sage
On Wed, May 1, 2013 at 2:44 PM, Sage Weil s...@inktank.com wrote:
I added a blueprint for extending the CRUSH rule language. If there are
interesting or strange placement policies you'd like to do and aren't able
to currently express using CRUSH, please help us out by enumerating them
on that
On 05/01/2013 04:51 PM, Gregory Farnum wrote:
On Wed, May 1, 2013 at 2:44 PM, Sage Weil s...@inktank.com wrote:
I added a blueprint for extending the CRUSH rule language. If there are
interesting or strange placement policies you'd like to do and aren't able
to currently express using CRUSH,