cephx_sign_messages: true
Regards
Andreas
--
Andreas Bluemle mailto:andreas.blue...@itxperts.de
ITXperts GmbH http://www.itxperts.de
Balanstrasse 73, Geb. 08Phone: (+49) 89 89044917
D-81541 Muenchen (Germany) Fax: (+49) 89 89044910
Find attached a README and source code patch, which
describe a prototype for coalescing the OP_OMAP_SETKEYS
and OP_SETATTR operations and the performance impact of this change.
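The coalescing idea can be sketched as follows. This is a minimal illustration, not the actual patch: the op structure, field names, and the coalesce() helper are hypothetical stand-ins for Ceph's transaction types, showing only the merge rule (adjacent set-key ops on the same object collapse into one, later writes winning).

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical flattened transaction op: consecutive OMAP_SETKEYS-style
// ops on the same object can be merged into a single op whose key/value
// map is the union (later ops win on duplicate keys).
struct OmapSetOp {
    std::string oid;                          // target object
    std::map<std::string, std::string> kvs;   // keys/values to set
};

// Merge runs of ops that target the same object into one op each,
// cutting the per-op overhead that motivates the patch.
std::vector<OmapSetOp> coalesce(const std::vector<OmapSetOp>& ops) {
    std::vector<OmapSetOp> out;
    for (const auto& op : ops) {
        if (!out.empty() && out.back().oid == op.oid) {
            for (const auto& kv : op.kvs)
                out.back().kvs[kv.first] = kv.second;  // later write wins
        } else {
            out.push_back(op);
        }
    }
    return out;
}
```

The point of the prototype is that one merged op traverses the submit path once, instead of paying the per-op bookkeeping cost for every small set-key operation.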
Regards
Andreas Bluemle
is 1a0507a2940f6edcc2bf9533cfa6c210b0b41933.
Since my build environment is rpm-based, I had to modify the
invocation of the configure script in the spec file instead of
in the do_autogen.sh script.
Best Regards
Andreas Bluemle
On Tue, 27 Jan 2015 09:18:45 -0800 (PST)
Sage Weil sw...@redhat.com wrote:
I spent some
of placement groups
which causes the message count throttle to be hit. And
I was only lucky, in some sense, when modifying the number
of placement groups and achieving a well-behaved distribution.
Regards
Andreas Bluemle
On Thu, 15 Jan 2015 09:15:32 -0800 (PST)
Sage Weil s...@newdream.net wrote:
On Thu
achieved by the specific set of PGs; and vice versa, a different
pattern of I/O requests will change the behavior again.
Regards
Andreas Bluemle
On Wed, 14 Jan 2015 22:44:01 +
Somnath Roy somnath@sandisk.com wrote:
Stephen,
You may want to tweak the following parameter(s) in your
Hi Gregory,
On Tue, 2 Dec 2014 10:32:50 -0800
Gregory Farnum g...@gregs42.com wrote:
On Tue, Dec 2, 2014 at 10:17 AM, Andreas Bluemle
andreas.blue...@itxperts.de wrote:
Hi,
during code profiling using LTTng, I noticed that during the
processing of write requests to the cluster, the ceph
at?
Regards
Andreas Bluemle
Should the order of the log_operation and
the actual queue_operation be reversed in
ReplicatedBackend::submit_transaction?
Regards
Andreas Bluemle
--
Andreas Bluemle mailto:andreas.blue...@itxperts.de
Heinrich Boell Strasse 88 Phone: (+49) 89 4317582
D-81829 Muenchen (Germany) Mobil: (+49) 177 522 0151
timestamps-primary-write.pdf
threads reduced really significantly.
(And sorry for the late response)
Greets,
Stefan
Best Regards
Andreas Bluemle
On Wed, 8 Oct 2014 02:51:21 +0200
Mark Nelson mark.nel...@inktank.com wrote:
Hi All,
Just a reminder that the weekly
Hi Sage,
[embedded below]
On Tue, 14 Oct 2014 06:13:58 -0700 (PDT)
Sage Weil s...@newdream.net wrote:
On Tue, 14 Oct 2014, Andreas Bluemle wrote:
Hi,
On Wed, 8 Oct 2014 16:55:38 -0700
Paul Von-Stamwitz pvonstamw...@us.fujitsu.com wrote:
Hi,
as mentioned
*/cpufreq/scaling_governor
is set to performance
Using the above, we see a constant frequency at the maximum level
allowed by the CPU (except in Turbo mode).
Best Regards
Andreas Bluemle
,
outgoing and incoming (heartbeats?)
Regards
Andreas Bluemle
On Thu, 25 Sep 2014 21:27:28 +0200
Kasper Dieter dieter.kas...@ts.fujitsu.com wrote:
Hi Sage,
I'm definitely interested in joining this weekly call starting Oct
1st. Thanks for this initiative!
I'm especially interested
On Thu, 12 Sep 2013 12:20:03 +0200
Gandalf Corvotempesta gandalf.corvotempe...@gmail.com wrote:
2013/9/10 Andreas Bluemle andreas.blue...@itxperts.de:
Since I have added these workarounds to my version of the librdmacm
library, I can at least start up ceph using LD_PRELOAD and end up
cluster state.
I would not call these workarounds a real fix, but they should point
out the problems which I am trying to solve.
Regards
Andreas Bluemle
On Fri, 23 Aug 2013 00:35:22 +
Hefty, Sean sean.he...@intel.com wrote:
I tested out the patch and unfortunately had the same results
an offset of -34 lines. Which code base are you using?
Best Regards
Andreas Bluemle
On Tue, 20 Aug 2013 09:21:13 +0200
Andreas Bluemle andreas.blue...@itxperts.de wrote:
Hi Sean,
I will re-check until the end of the week; there is
some test scheduling issue with our test system, which
affects
.
But the POLLIN is not generated from the layer below
rsockets (ib_uverbs.ko?) as far as I can tell.
See also: http://www.greenend.org.uk/rjk/tech/poll.html
Best Regards
Andreas Bluemle
- Sean
value (ms tcp read timeout in our case), I would
embed the real poll() into a loop, splitting the user-specified timeout
into smaller portions and doing the rsockets-specific rs_poll_check()
on every timeout of the real poll().
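That loop can be sketched as below. rs_poll_check() here is a hypothetical stub standing in for the rsockets-internal check (the real one lives inside librdmacm/rsockets); sliced_poll and slice_ms are names invented for this sketch.

```cpp
#include <poll.h>

// Hypothetical stand-in for the rsockets-internal check that must run
// periodically; returns true when rsockets has an event that does not
// surface as POLLIN on the underlying fd.
static bool rs_poll_check() { return false; }

// Poll with the caller's full timeout (e.g. "ms tcp read timeout"),
// but wake up every slice_ms to give rsockets a chance to report
// pending events between slices.
int sliced_poll(struct pollfd* fds, nfds_t nfds,
                int timeout_ms, int slice_ms) {
    int remaining = timeout_ms;
    while (remaining > 0) {
        int slice = remaining < slice_ms ? remaining : slice_ms;
        int rc = poll(fds, nfds, slice);
        if (rc != 0)           // events ready, or error (rc < 0)
            return rc;
        if (rs_poll_check())   // rsockets-specific event pending
            return 1;
        remaining -= slice;
    }
    return 0;                  // overall timeout expired
}
```

The trade-off is the usual one for such workarounds: a smaller slice reduces the worst-case latency for events poll() cannot see, at the cost of more wakeups while idle.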
Best Regards
Andreas Bluemle
On Tue, 13 Aug 2013 07:53:12
list
with the log files attached to it because the size of
the attachments exceeded some limit; I hadn't been subscribed
to the list at that point. Is the use of pastebin.com the better
way to provide such lengthy information in general?
Best Regards
Andreas Bluemle
On Tue, 13 Aug 2013 11
Hi Sage,
On Thu, 8 Aug 2013 15:09:27 -0700 (PDT)
Sage Weil s...@inktank.com wrote:
On Thu, 8 Aug 2013, Andreas Bluemle wrote:
Hi,
maybe this is the wrong list, but I am looking for
logging support for the /usr/bin/ceph administration
command.
This is built from the same source
Hi Sage,
as this crash had been around for a while already: do you
know whether this had happened in ceph version 0.61.4 as well?
Best Regards
Andreas Bluemle
On Mon, 29 Jul 2013 08:47:00 -0700 (PDT)
Sage Weil s...@inktank.com wrote:
Hi Andreas,
Can you reproduce this (from mkcephfs
| grep pool | \
awk '{ print $1" "$2": "$3 }'
pool 0: 'data'
pool 1: 'metadata'
pool 2: 'rbd'
pool 3: 'SSD-group-2'
pool 4: 'SSD-group-3'
pool 5: 'SAS-group-2'
pool 6: 'SAS-group-3'
Is that a real problem?
Best Regards
Andreas Bluemle
be assigned for a directory and thus for a whole tree.
Kind regards
Andreas Bluemle
[root@rx37-2 ~]# cephfs /mnt/cephfs/pool3 set_layout -s 4194304 \
-c 1 -u 4194304 -p 6
Error setting layout: Invalid argument
[root@rx37-2 ~]# ceph mds
a new pool before it can be used by the mds.
Best Regards
Andreas Bluemle
On Tue, 16 Jul 2013 14:26:21 +0200
Andreas Bluemle andreas.blue...@itxperts.de wrote:
Hello Dieter,
Not an error, but a missing command.
A new pool must first be made known to the mds.
ceph mds
find attached some more details on this problem.
Is this a known issue already?
Best Regards
Andreas Bluemle
requests arrive at the OSD.
So it looks like the dispatching code itself is efficient, but
collapses on the locks when the request rate for a single OSD
hits some threshold.
Best Regards
Andreas Bluemle
Andreas Bluemle wrote:
Hi,
the size of the rbd data objects on the OSD is 4 MByte (default
Hi Sage,
Sage Weil wrote:
Hi Andreas,
On Thu, 16 Aug 2012, Andreas Bluemle wrote:
Hi,
I have been trying to migrate a ceph cluster (ceph-0.48argonaut)
to a high speed cluster network and encounter scalability problems:
the overall performance of the ceph cluster does not scale well
is causing the delays.
Best Regards
Andreas
On Thu, 16 Aug 2012 09:44:23 -0700
Yehuda Sadeh yeh...@inktank.com wrote:
On Thu, Aug 16, 2012 at 9:08 AM, Andreas Bluemle
andreas.blue...@itxperts.de wrote:
Hi,
I have been trying to migrate a ceph cluster (ceph-0.48argonaut)
to a high speed cluster network and encounter scalability problems:
the overall performance of the ceph cluster does not scale well
with an increase