Just out of curiosity, does your system allow only one outstanding request
overall, or one outstanding request per ledger? Without overlapping network
transfer and request processing, it is hard to fully utilize the resources
(either the network or the disks).
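
For illustration, a minimal sketch of keeping many adds in flight on one
ledger with asyncAddEntry (the ZooKeeper address, entry count, and payload
size below are placeholders):

import java.util.concurrent.CountDownLatch;

import org.apache.bookkeeper.client.AsyncCallback.AddCallback;
import org.apache.bookkeeper.client.BKException;
import org.apache.bookkeeper.client.BookKeeper;
import org.apache.bookkeeper.client.LedgerHandle;

public class PipelinedWriter {
    public static void main(String[] args) throws Exception {
        BookKeeper bk = new BookKeeper("zkhost:2181");        // placeholder ZK address
        LedgerHandle lh = bk.createLedger(3, 2,               // ensemble 3, quorum 2
                BookKeeper.DigestType.MAC, "passwd".getBytes());

        final int numEntries = 1000;                          // placeholder count
        final byte[] payload = new byte[100 * 1024];          // 100 KB entry
        final CountDownLatch done = new CountDownLatch(numEntries);

        AddCallback cb = new AddCallback() {
            public void addComplete(int rc, LedgerHandle lh, long entryId, Object ctx) {
                if (rc != BKException.Code.OK) {
                    System.err.println("add failed, rc=" + rc);
                }
                done.countDown();
            }
        };

        // Issue the adds without waiting for each ack, so network transfer
        // and journal writes on the bookies overlap across entries.
        for (int i = 0; i < numEntries; i++) {
            lh.asyncAddEntry(payload, cb, null);
        }

        done.await();   // wait once, at the end, for all outstanding adds
        lh.close();
        bk.close();
    }
}

A real writer would probably bound the number of adds in flight, but even a
modest window should hide most of the per-add latency.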

- Sijie

On Wed, Jun 10, 2015 at 7:21 AM, Maciej Smoleński <[email protected]> wrote:

> Yes, I have only one request outstanding at a time.
>
> With 1 KB requests I get more than 1000 requests/sec.
>
> With 100 KB requests I get only 250 requests/sec.
> Only 1/8 of the network bandwidth is used.
> I tested it with physical disks (ext3) and with ramfs, and the performance
> was the same - 250 requests/sec.
>
>
>
>
> On Wed, Jun 10, 2015 at 4:06 PM, Robin Dhamankar <
> [email protected]> wrote:
>
>> Are you saying you have only one request outstanding at a time and the
>> previous request has to be acknowledged before the next request can be sent?
>>
>> If that is the case, then given that a durable write to the journal is
>> required before an add is acknowledged by the bookie, there isn't much
>> more room to improve beyond the 250 requests per second you are currently
>> getting.
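>>
>> Roughly, with the numbers from this thread: 250 requests/sec is about 4 ms
>> per acknowledged add. At the measured ~400 MB/s, pushing a 100 KB entry to
>> the two bookies in the write quorum accounts for only about 0.5 ms of that,
>> so most of the 4 ms is per-add latency (network round trip, journal
>> write/sync, processing). With a single add in flight, throughput is capped
>> at 1 / (per-add latency), regardless of the available bandwidth.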
>> On Jun 10, 2015 7:00 AM, "Maciej Smoleński" <[email protected]> wrote:
>>
>>> Thank you for your comment.
>>>
>>> Unfortunately, these options will not help in my case.
>>> In my case the BookKeeper client receives the next request only after the
>>> previous request has been confirmed.
>>> It is also expected that there will be only a single stream of such
>>> requests.
>>>
>>> I would like to understand how to achieve performance equal to the
>>> network bandwidth.
>>>
>>>
>>>
>>> On Wed, Jun 10, 2015 at 2:27 PM, Flavio Junqueira <[email protected]
>>> > wrote:
>>>
>>>> BK currently isn't wired to stream bytes to a ledger, so writing large
>>>> entries synchronously, as you're doing, is unlikely to get the best out
>>>> of its performance. A couple of things you could try to get higher
>>>> throughput are writing asynchronously and having multiple clients writing.
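>>>>
>>>> As a rough sketch (the writer count, ZooKeeper address, and loop sizes
>>>> are placeholders), several writers, each appending synchronously to its
>>>> own ledger, could look like this:
>>>>
>>>> import org.apache.bookkeeper.client.BookKeeper;
>>>> import org.apache.bookkeeper.client.LedgerHandle;
>>>>
>>>> public class MultiWriter {
>>>>     public static void main(String[] args) throws Exception {
>>>>         final BookKeeper bk = new BookKeeper("zkhost:2181"); // placeholder ZK address
>>>>         final byte[] payload = new byte[100 * 1024];         // 100 KB entry
>>>>         Thread[] writers = new Thread[4];                    // placeholder writer count
>>>>         for (int i = 0; i < writers.length; i++) {
>>>>             writers[i] = new Thread(new Runnable() {
>>>>                 public void run() {
>>>>                     try {
>>>>                         // One ledger per writer, so the synchronous adds
>>>>                         // of the different writers overlap with one another.
>>>>                         LedgerHandle lh = bk.createLedger(3, 2,
>>>>                                 BookKeeper.DigestType.MAC, "passwd".getBytes());
>>>>                         for (int j = 0; j < 1000; j++) {
>>>>                             lh.addEntry(payload);
>>>>                         }
>>>>                         lh.close();
>>>>                     } catch (Exception e) {
>>>>                         e.printStackTrace();
>>>>                     }
>>>>                 }
>>>>             });
>>>>             writers[i].start();
>>>>         }
>>>>         for (Thread t : writers) {
>>>>             t.join();
>>>>         }
>>>>         bk.close();
>>>>     }
>>>> }
>>>>
>>>> Combining this with asynchronous adds per writer gives even more overlap.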
>>>>
>>>> -Flavio
>>>>
>>>>
>>>>
>>>>
>>>>   On Wednesday, June 10, 2015 12:08 PM, Maciej Smoleński <
>>>> [email protected]> wrote:
>>>>
>>>>
>>>>
>>>> Hello,
>>>>
>>>> I'm testing BK performance when appending 100 KB entries synchronously
>>>> from one thread (using a single ledger).
>>>> The performance I get is 250 entries/sec.
>>>>
>>>> What performance should I expect?
>>>>
>>>> My setup:
>>>>
>>>> Ledger:
>>>> Ensemble size: 3
>>>> Quorum size: 2
>>>>
>>>> 1 client machine and 3 server machines.
>>>>
>>>> Network:
>>>> Each machine: 4 x 1000 Mbps links, bonded.
>>>> Manually tested bandwidth between client and server: 400 MB/s.
>>>>
>>>> Disk:
>>>> I tested two configurations:
>>>> dedicated disks with ext3 (different for zookeeper, journal, data,
>>>> index, log)
>>>> dedicated ramfs partitions (different for zookeeper, journal, data,
>>>> index, log)
>>>>
>>>> In both configurations the performance is the same: 250 entries/sec
>>>> (25 MB/s).
>>>> I confirmed this with the measured network bandwidth:
>>>> - on the client: 50 MB/s
>>>> - on the server: 17 MB/s
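>>>>
>>>> For reference, these numbers are consistent with each other, assuming the
>>>> client sends each entry to both bookies in its write quorum:
>>>>
>>>>   client:  250 entries/sec x 2 copies     x 100 KB  ~= 50 MB/s
>>>>   bookie:  250 entries/sec x 2/3 entries  x 100 KB  ~= 17 MB/s
>>>>
>>>> (with ensemble size 3 and quorum size 2, each bookie stores about 2/3 of
>>>> the entries)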
>>>>
>>>> I ran Java with a profiler enabled on the BK client and BK server, but
>>>> didn't find anything unexpected (though I don't know BookKeeper internals).
>>>>
>>>> I tested it with two BookKeeper versions:
>>>> - 4.3.0
>>>> - 4.2.2
>>>> The results were the same with both versions.
>>>>
>>>> What should be changed/checked to get better performance?
>>>>
>>>> Kind regards,
>>>> Maciej
>>>>
>>>
>
