ceph-osd: microbenchmark for 4k read operations

2015-05-20 Thread Andreas Bluemle
cephx_sign_messages: true Regards Andreas -- Andreas Bluemle mailto:andreas.blue...@itxperts.de ITXperts GmbH http://www.itxperts.de Balanstrasse 73, Geb. 08 Phone: (+49) 89 89044917 D-81541 Muenchen (Germany) Fax: (+49) 89 89044910

FileStore performance: coalescing operations

2015-02-26 Thread Andreas Bluemle
and OP_SETATTR operations. Find attached a README and source code patch, which describe a prototype for coalescing the OP_OMAP_SETKEYS operations and the performance impact of this change. Regards Andreas Bluemle

Re: wip-auth

2015-01-29 Thread Andreas Bluemle
is 1a0507a2940f6edcc2bf9533cfa6c210b0b41933. As my build environment is rpm, I had to modify the invocation of the configure script in the spec file instead of the do_autogen.sh script. Best Regards Andreas Bluemle On Tue, 27 Jan 2015 09:18:45 -0800 (PST) Sage Weil sw...@redhat.com wrote: I spent some

Re: Memstore performance improvements v0.90 vs v0.87

2015-01-19 Thread Andreas Bluemle
of placement groups which causes the message count throttle to hit. And I was only lucky in some sense when modifying the number of placement groups and achieving a well-behaved distribution. Regards Andreas Bluemle On Thu, 15 Jan 2015 09:15:32 -0800 (PST) Sage Weil s...@newdream.net wrote: On Thu

Re: Memstore performance improvements v0.90 vs v0.87

2015-01-15 Thread Andreas Bluemle
achieved by the specific set of pg's - and vice versa, a different pattern of I/O requests will change the behavior again. Regards Andreas Bluemle On Wed, 14 Jan 2015 22:44:01 + Somnath Roy somnath@sandisk.com wrote: Stephen, You may want to tweak the following parameter(s) in your

Re: LTTng tracing: ReplicatedPG::log_operation

2014-12-03 Thread Andreas Bluemle
Hi Gregory, On Tue, 2 Dec 2014 10:32:50 -0800 Gregory Farnum g...@gregs42.com wrote: On Tue, Dec 2, 2014 at 10:17 AM, Andreas Bluemle andreas.blue...@itxperts.de wrote: Hi, during code profiling using LTTng, I noticed that during processing of write requests to the cluster, the ceph

LTTng tracing: hitting the message throttle

2014-12-02 Thread Andreas Bluemle
at? Regards Andreas Bluemle

LTTng tracing: ReplicatedPG::log_operation

2014-12-02 Thread Andreas Bluemle
the log_operation and the actual queue_operation be reversed in ReplicatedBackend::submit_transaction? Regards Andreas Bluemle

Re: FIXED: 11/5/2014 Weekly Ceph Performance Meeting IS ON!

2014-11-05 Thread Andreas Bluemle
-- Andreas Bluemle mailto:andreas.blue...@itxperts.de Heinrich Boell Strasse 88 Phone: (+49) 89 4317582 D-81829 Muenchen (Germany) Mobil: (+49) 177 522 0151 timestamps-primary-write.pdf

Re: 10/7/2014 Weekly Ceph Performance Meeting: kernel boot params

2014-10-14 Thread Andreas Bluemle
threads reduced really significantly. (And sorry for the late response) Greets, Stefan Best Regards Andreas Bluemle On Wed, 8 Oct 2014 02:51:21 +0200 Mark Nelson mark.nel...@inktank.com wrote: Hi All, Just a reminder that the weekly

Re: 10/7/2014 Weekly Ceph Performance Meeting: kernel boot params

2014-10-14 Thread Andreas Bluemle
Hi Sage, [embedded below] On Tue, 14 Oct 2014 06:13:58 -0700 (PDT) Sage Weil s...@newdream.net wrote: On Tue, 14 Oct 2014, Andreas Bluemle wrote: Hi, On Wed, 8 Oct 2014 16:55:38 -0700 Paul Von-Stamwitz pvonstamw...@us.fujitsu.com wrote: Hi, as mentioned

Re: 10/7/2014 Weekly Ceph Performance Meeting: kernel boot params

2014-10-08 Thread Andreas Bluemle
*/cpufreq/scaling_governor is set to performance. Using the above, we see a constant frequency at the maximum level allowed by the CPU (except Turbo mode). Best Regards Andreas Bluemle On Wed, 8 Oct 2014 02:51:21 +0200 Mark Nelson mark.nel...@inktank.com wrote: Hi All, Just a reminder

Re: Weekly performance meeting

2014-10-01 Thread Andreas Bluemle
, outgoing and incoming (heartbeats?) Regards Andreas Bluemle On Thu, 25 Sep 2014 21:27:28 +0200 Kasper Dieter dieter.kas...@ts.fujitsu.com wrote: Hi Sage, I'm definitely interested in joining this weekly call starting Oct 1st. Thanks for this initiative! I'm especially interested

Re: [ceph-users] Help needed porting Ceph to RSockets

2013-09-12 Thread Andreas Bluemle
On Thu, 12 Sep 2013 12:20:03 +0200 Gandalf Corvotempesta gandalf.corvotempe...@gmail.com wrote: 2013/9/10 Andreas Bluemle andreas.blue...@itxperts.de: Since I have added these workarounds to my version of the librdmacm library, I can at least start up ceph using LD_PRELOAD and end up

Re: [ceph-users] Help needed porting Ceph to RSockets

2013-09-10 Thread Andreas Bluemle
cluster state. I would not call these workarounds a real fix, but they should point out the problems which I am trying to solve. Regards Andreas Bluemle On Fri, 23 Aug 2013 00:35:22 + Hefty, Sean sean.he...@intel.com wrote: I tested out the patch and unfortunately had the same results

Re: [ceph-users] Help needed porting Ceph to RSockets

2013-08-20 Thread Andreas Bluemle

Re: [ceph-users] Help needed porting Ceph to RSockets

2013-08-20 Thread Andreas Bluemle
an offset of -34 lines. Which code base are you using? Best Regards Andreas Bluemle On Tue, 20 Aug 2013 09:21:13 +0200 Andreas Bluemle andreas.blue...@itxperts.de wrote: Hi Sean, I will re-check until the end of the week; there is some test scheduling issue with our test system, which affects

Re: [ceph-users] Help needed porting Ceph to RSockets

2013-08-14 Thread Andreas Bluemle
. But the POLLIN is not generated from the layer below rsockets (ib_uverbs.ko?) as far as I can tell. See also: http://www.greenend.org.uk/rjk/tech/poll.html Best Regards Andreas Bluemle

Re: [ceph-users] Help needed porting Ceph to RSockets

2013-08-13 Thread Andreas Bluemle
value (ms tcp read timeout in our case), I would embed the real poll() into a loop, splitting the user specified timeout into smaller portions and doing the rsockets specific rs_poll_check() on every timeout of the real poll(). Best Regards Andreas Bluemle On Tue, 13 Aug 2013 07:53:12

Re: [ceph-users] Help needed porting Ceph to RSockets

2013-08-12 Thread Andreas Bluemle
list with the log files attached to it, because the size of the attachments exceeded some limit; I hadn't been subscribed to the list at that point. Is the use of pastebin.com a better way to provide such lengthy information in general? Best Regards Andreas Bluemle On Tue, 13 Aug 2013 11

Re: ceph admin command: debuggging

2013-08-11 Thread Andreas Bluemle
Hi Sage, On Thu, 8 Aug 2013 15:09:27 -0700 (PDT) Sage Weil s...@inktank.com wrote: On Thu, 8 Aug 2013, Andreas Bluemle wrote: Hi, maybe this is the wrong list - but I am looking for logging support for the /usr/bin/ceph administration command. This is built from the same source

Re: mds.0 crashed with 0.61.7

2013-07-29 Thread Andreas Bluemle
Hi Sage, as this crash had been around for a while already: do you know whether this had happened in ceph version 0.61.4 as well? Best Regards Andreas Bluemle On Mon, 29 Jul 2013 08:47:00 -0700 (PDT) Sage Weil s...@inktank.com wrote: Hi Andreas, Can you reproduce this (from mkcephfs

ceph file system: extended attributes differ between ceph.ko and ceph-fuse

2013-07-18 Thread Andreas Bluemle
| grep pool | \ awk '{ print $1 " " $2 ": " $3 }' pool 0: 'data' pool 1: 'metadata' pool 2: 'rbd' pool 3: 'SSD-group-2' pool 4: 'SSD-group-3' pool 5: 'SAS-group-2' pool 6: 'SAS-group-3' Is that a real problem? Best Regards Andreas Bluemle

Re: cephfs fs set_layout --pool_meta SSD --pool_data SAS

2013-07-16 Thread Andreas Bluemle
can be assigned for a directory and thus for an entire subtree. Kind regards Andreas Bluemle [root@rx37-2 ~]# cephfs /mnt/cephfs/pool3 set_layout -s 4194304 \ -c 1 -u 4194304 -p 6 Error setting layout: Invalid argument [root@rx37-2 ~]# ceph mds

Re: cephfs fs set_layout --pool_meta SSD --pool_data SAS

2013-07-16 Thread Andreas Bluemle
a new pool before it is usable by the mds. Best Regards Andreas Bluemle On Tue, 16 Jul 2013 14:26:21 +0200 Andreas Bluemle andreas.blue...@itxperts.de wrote: Hello Dieter, not an error, but a missing command. A new pool must first be made known to the mds. ceph mds

rbd: assertion failure in rbd_img_obj_callback()

2013-06-20 Thread Andreas Bluemle
find attached some more details on this problem. Is this a known issue already? Best Regards Andreas Bluemle

Re: SimpleMessenger dispatching: cause of performance problems?

2012-09-04 Thread Andreas Bluemle
requests arrive at the OSD. So it looks like the dispatching code itself is efficient, but collapses on the locks once the request rate for a single OSD hits some threshold. Best Regards Andreas Bluemle Andreas Bluemle wrote: Hi, the size of the rbd data objects on the OSD is 4 MByte (default

Re: SimpleMessenger dispatching: cause of performance problems?

2012-08-20 Thread Andreas Bluemle
Hi Sage, Sage Weil wrote: Hi Andreas, On Thu, 16 Aug 2012, Andreas Bluemle wrote: Hi, I have been trying to migrate a ceph cluster (ceph-0.48argonaut) to a high speed cluster network and encounter scalability problems: the overall performance of the ceph cluster does not scale well

Re: SimpleMessenger dispatching: cause of performance problems?

2012-08-17 Thread Andreas Bluemle
is causing the delays. Best Regards Andreas On Thu, 16 Aug 2012 09:44:23 -0700 Yehuda Sadeh yeh...@inktank.com wrote: On Thu, Aug 16, 2012 at 9:08 AM, Andreas Bluemle andreas.blue...@itxperts.de wrote: Hi, I have been trying to migrate a ceph cluster (ceph-0.48argonaut) to a high

Re: SimpleMessenger dispatching: cause of performance problems?

2012-08-17 Thread Andreas Bluemle
, Aug 16, 2012 at 9:08 AM, Andreas Bluemle andreas.blue...@itxperts.de wrote: Hi, I have been trying to migrate a ceph cluster (ceph-0.48argonaut) to a high speed cluster network and encounter scalability problems: the overall performance of the ceph cluster does not scale well with an increase