Hi,
On 02/22/2012 07:08 PM, Gregory Farnum wrote:
Wido,
Sorry we lost track of this last week — we were all distracted by FAST 12! :)
No problem!
So it looks like they're both on the same map and osd.4 is sending
pings to osd.19, but osd.19 is just ignoring them? Or do you really
have on
On 02/21/2012 10:35 PM, Sage Weil wrote:
On Tue, 21 Feb 2012, Paul Pettigrew wrote:
G'day Greg, thanks for the fast response.
Yes, I forgot to explicitly state that the journal would go to the SATA journals in
CASE1, and it is easy to appreciate the performance impact of this case, as you
documented
On Wed, Feb 22, 2012 at 23:12, madhusudhana
madhusudhana.u.acha...@gmail.com wrote:
1. can you please let me know how I can make only 1 MDS active ?
You can see that in the ceph -s output: the mds line should have just
one entry, like 0=a=up:active, with the word active.
You can control that with the
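If it helps, here is a small sketch of checking that mds line programmatically. It assumes the 2012-era ceph -s format quoted above (e.g. "mds e12: 1/1/1 up {0=a=up:active}"); the helper name is mine, not part of any Ceph tooling:

```python
import re

def active_mds_count(mds_line):
    """Count up:active entries in a `ceph -s` mds status line.

    Assumes the format quoted above, e.g.
    "mds e12: 1/1/1 up {0=a=up:active}".
    """
    return len(re.findall(r"up:active", mds_line))

# A single active MDS, as desired:
print(active_mds_count("mds e12: 1/1/1 up {0=a=up:active}"))  # prints 1
```

With two active daemons, e.g. "{0=a=up:active,1=b=up:active}", it returns 2, which is the situation the question is trying to avoid.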
2012/2/13 Székelyi Szabolcs szeke...@niif.hu:
Okay, that sounds like a bug then. The two interesting things would be a
ceph-fuse log (--debug-client 10 --debug-ms 1 --log-file /path/to/log) and
an mds log (debug mds = 20, debug ms = 1 in [mds] section of ceph.conf).
Thanks, I've set it up,
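For reference, the suggested debug settings could look like this in ceph.conf (a sketch based only on the options named above; the log path is a placeholder, and the client options can equivalently be passed to ceph-fuse on the command line as shown):

```ini
; client side, equivalent to
; ceph-fuse --debug-client 10 --debug-ms 1 --log-file /path/to/log
[client]
    debug client = 10
    debug ms = 1
    log file = /path/to/log

; mds side, as suggested above
[mds]
    debug mds = 20
    debug ms = 1
```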
Hi,
Yes, that's a bug... Those ='s should be -, as with the straw bucket type.
Opened http://tracker.newdream.net/issues/2096
Thanks! This'll be fixed in v0.43.
sage
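For background on the function being discussed: a tree bucket keeps a weight for every internal node, so adjusting one item's weight has to propagate the delta up to the root. A minimal illustrative sketch in plain Python, using the usual implicit binary-heap layout where the children of node n are 2n and 2n+1 (this is not the exact CRUSH data structure, just the idea behind crush_adjust_tree_bucket_item_weight):

```python
def adjust_tree_item_weight(weights, node, new_weight):
    """Set `node`'s weight and add the delta to every ancestor.

    `weights` maps node id -> weight; node 1 is the root and the
    children of node n are 2n and 2n+1 (an illustrative heap layout,
    not the actual kernel structure).  Returns the weight delta.
    """
    delta = new_weight - weights[node]
    weights[node] = new_weight
    node //= 2
    while node >= 1:          # walk up to the root, adjusting sums
        weights[node] += delta
        node //= 2
    return delta

# Leaf 4 goes from 3 to 5; its ancestors 2 and 1 each gain 2.
w = {1: 10, 2: 6, 3: 4, 4: 3, 5: 3}
print(adjust_tree_item_weight(w, 4, 5), w[2], w[1])  # prints 2 8 12
```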
On Wed, 15 Feb 2012, ZhuRongze wrote:
Hi,
The function crush_adjust_tree_bucket_item_weight in
On Thu, Feb 23, 2012 at 11:00 AM, Tommi Virtanen
tommi.virta...@dreamhost.com wrote:
On Thu, Feb 23, 2012 at 01:15, Дениска-редиска s...@inbox.lv wrote:
hello here,
I have tried to set up ceph 0.41 in a simple configuration:
3 nodes, each running mon, mds and osd, with replication level 3 for the data
On Thu, 23 Feb 2012, Tommi Virtanen wrote:
On Thu, Feb 23, 2012 at 01:15, Дениска-редиска s...@inbox.lv
wrote:
hello here,
I have tried to set up ceph 0.41 in a simple configuration:
3 nodes, each running mon, mds and osd, with replication level 3 for the data and
metadata pools.
On Thu, Feb 23, 2012 at 11:07, Gregory Farnum
gregory.far...@dreamhost.com wrote:
3 nodes, each running mon, mds and osd, with replication level 3 for the data and
metadata pools.
...
Actually, the OSDs will happily (well, not happily; they will complain,
but they will run) run in degraded mode. However,
On Tue, 21 Feb 2012, Sage Weil wrote:
On Wed, 22 Feb 2012, Paul Pettigrew wrote:
G'day all
Today's testing was on having a client (i.e. an Ubuntu 12.04 server running
KVM as the virtualisation host, named server) connect to the 3x node v0.42
Ceph cluster (named ceph1, ceph2 and ceph3).
On Wed, 22 Feb 2012, Sage Weil wrote:
A patch to add this is in the wip-decoder branch, including a man page and
adding it to the .deb as well. Should be merged shortly.
Now in master.
Thanks!
sage
On Tue, 21 Feb 2012, Alexandre Oliva wrote:
Signed-off-by: Alexandre Oliva
On Wed, Feb 22, 2012 at 12:25 PM, Jens Rehpöhler
jens.rehpoeh...@filoo.de wrote:
Hi Gregory,
On 22.02.2012 18:12, Gregory Farnum wrote:
On Feb 22, 2012, at 1:53 AM, Jens Rehpöhler jens.rehpoeh...@filoo.de
wrote:
Some additions: meanwhile we are at the state:
2012-02-22 10:38:49.587403