Re: Problem of Prometheus Stats Providers

2018-07-08 Thread Sijie Guo
Interesting. I tried these settings on my laptop and I am able to curl
the metrics: `curl -s localhost:8000/metrics`.

Can you clarify a few things:

- what version of bookkeeper are you using?
- did you happen to enable the bookie HTTP server, i.e. set `httpServerEnabled`
to true?
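
For reference, a minimal sketch of what I ran locally, using only the provider
class and port already in your config (assuming the bookie was restarted after
the change; hostnames/ports on your side may differ):

    statsProviderClass=org.apache.bookkeeper.stats.prometheus.PrometheusMetricsProvider
    prometheusStatsHttpPort=8000

    # then verify the exporter is reachable:
    curl -s localhost:8000/metrics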

- Sijie

On Sun, Jul 8, 2018 at 8:53 PM li.peng...@zhaopin.com.cn <
li.peng...@zhaopin.com.cn> wrote:

> I set the configs below, but I can't reach port 8000.
>
>
>
> #
> ## Stats Providers
> #
>
> # Whether statistics are enabled
> enableStatistics=true
>
> # The flag to enable recording task execution stats.
> enableTaskExecutionStats=true
>
> # Stats Provider Class (if `enableStatistics` are enabled)
> # Options:
> #   - Prometheus: org.apache.bookkeeper.stats.prometheus.PrometheusMetricsProvider
> #   - Codahale  : org.apache.bookkeeper.stats.codahale.CodahaleMetricsProvider
> #   - Twitter Finagle   : org.apache.bookkeeper.stats.twitter.finagle.FinagleStatsProvider
> #   - Twitter Ostrich   : org.apache.bookkeeper.stats.twitter.ostrich.OstrichProvider
> #   - Twitter Science   : org.apache.bookkeeper.stats.twitter.science.TwitterStatsProvider
> # Default value is:
> #   org.apache.bookkeeper.stats.prometheus.PrometheusMetricsProvider
> #
> # For configuring corresponding stats provider, see details at each section below.
> #
> statsProviderClass=org.apache.bookkeeper.stats.prometheus.PrometheusMetricsProvider
>
>
> #
> ## Prometheus Metrics Provider
> #
>
> # These configs are used when using `PrometheusMetricsProvider`.
>
> statsProviderClass=org.apache.bookkeeper.stats.prometheus.PrometheusMetricsProvider
>
> # Default port for Prometheus metrics exporter
> prometheusStatsHttpPort=8000
>
> # latency stats rollover interval, in seconds
> prometheusStatsLatencyRolloverSeconds=60
>
>
>


Re: latency of bookkeeper

2018-07-08 Thread Sijie Guo
On Sun, Jul 8, 2018 at 8:36 PM li.penghui  wrote:

> Thanks for your reply.
>
>
> I tried different bookkeeper configs:
>
> 1. default config (I just set the zookeeper connection string)
>
> Calling addEntry synchronously with 8 threads, I got 5000 ops/s, and I can't
> improve throughput by adding threads: 16 threads also got 5000 ops/s.
>
> 2. journalSyncData = false, journalAdaptiveGroupWrites = false
>
> I thought that with this config I would get low latency with one thread, but
> when I tested it, I got 400 ops/s per thread.
>

`journalAdaptiveGroupWrites == false` doesn't mean disabling group commits.
Even if you set `journalSyncData` to false, group commit still happens at a 2ms
interval, which means each write has to wait up to 2ms. Since you are using the
synchronous api, ~400 ops/s per thread is about the throughput you can get.
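
A quick back-of-the-envelope check (assuming each blocking thread pays roughly
the 2ms commit interval on every add):

    1 thread :  1000 ms / 2 ms ~= 500 adds/s   (you measured ~400)
    8 threads:  8 x ~500       ~= 4000 adds/s  (you measured ~5000)

so the numbers you are seeing are roughly what group commit plus a blocking API
would predict.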

if you want to get low latency with the synchronous API:

- set `journalSyncData` to false, to disable fsync
- set `journalMaxGroupWaitMSec` to 0, to disable time-based group commit
- set `journalBufferedWritesThreshold` and `journalBufferedEntriesThreshold`
to 0, to disable bytes/entries-based group commit

you can also consider increasing `journalWriteBufferSizeKB` from 64 to 1024, to
leverage buffered writes and keep latency low for sync writes.
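
Putting those together, a minimal bk_server.conf sketch with only the settings
discussed above (treat it as a starting point rather than a tuned config, and
remember that with fsync disabled durability relies on replication):

    # disable journal fsync
    journalSyncData=false
    # disable time-based group commit
    journalMaxGroupWaitMSec=0
    # disable bytes/entries-based group commit
    journalBufferedWritesThreshold=0
    journalBufferedEntriesThreshold=0
    # larger journal write buffer (KB) for buffered sync writes
    journalWriteBufferSizeKB=1024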


>
> So I have two problems when using bookkeeper. I must block and wait until
> bookkeeper tells me the entry id. If I use the async apis, I can't return the add
> result to users, because the rpc framework in my company can't support async calls.
>
> 1. How can I get low latency with one thread?
>

see my comment above.


> 2. How can I get high throughput on one ledger with multiple threads and sync
> apis?
>

since you are using synchronous apis, the only way to get high throughput
is to reduce the write latency per operation (disable group commit and
disable fsync on the server side). Even if you have multiple threads
writing to one ledger, those threads effectively block each other, because bk
acknowledges entries in order, so the whole throughput is eventually bound by
the server write latency.
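
If your rpc layer ever allows it, the asynchronous client API avoids that
per-call blocking, since many adds can be in flight at once. A rough sketch
using the classic LedgerHandle client; the zookeeper address, ledger settings
and payloads here are made up for illustration, not a drop-in for your setup:

import java.util.concurrent.CompletableFuture;
import org.apache.bookkeeper.client.BKException;
import org.apache.bookkeeper.client.BookKeeper;
import org.apache.bookkeeper.client.LedgerHandle;

public class AddEntryExample {
    public static void main(String[] args) throws Exception {
        // assumption: zookeeper reachable at zk1:2181
        BookKeeper bk = new BookKeeper("zk1:2181");
        LedgerHandle lh = bk.createLedger(
                BookKeeper.DigestType.CRC32, "passwd".getBytes());

        // synchronous add: blocks until the entry is acknowledged, so
        // per-thread throughput is bounded by the server write latency
        long syncId = lh.addEntry("hello".getBytes());

        // asynchronous add: the entry id is delivered to the callback instead
        CompletableFuture<Long> asyncId = new CompletableFuture<>();
        lh.asyncAddEntry("world".getBytes(), (rc, handle, entryId, ctx) -> {
            if (rc == BKException.Code.OK) {
                asyncId.complete(entryId);
            } else {
                asyncId.completeExceptionally(BKException.create(rc));
            }
        }, null);

        System.out.println("sync entry " + syncId + ", async entry " + asyncId.get());
        lh.close();
        bk.close();
    }
}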

Try the settings I pointed out above.



>
>
>  Original Message
> *From:* li.penghui
> *To:* user
> *Sent:* 2018-07-06 (Fri) 17:56
> *Subject:* Re: latency of bookkeeper
>
> Can I disable the WAL if I can tolerate losing some entries? If the WAL can be
> disabled, I can use bookkeeper in more scenarios.
>
>  Original Message
> *From:* Sijie Guo
> *To:* user
> *Sent:* 2018-07-06 (Fri) 16:11
> *Subject:* Re: latency of bookkeeper
>
> I think your question is a bit unclear; latency and throughput are two
> different kinds of metrics. Your question seems to be asking about high
> throughput.
>
> Anyway, I will try to explain the performance tradeoff between latency and
> throughput and hope that helps.
>
> Bookkeeper by default fsyncs the data to disks. It does 1ms group commit by
> default to keep a good tradeoff between throughput and latency.
>
> 1) if you are using synchronous adds, since you are blocking while waiting for
> the write response, your single-thread throughput will be limited by the group
> commit interval.
> You can use multiple threads to improve throughput; since multiple writes
> will be grouped together when writing to disk, the latency of your writes will
> be roughly as low as your group commit interval.
>
> 2) if your application can leverage asynchronous adds, you should consider
> using the asynchronous apis. They will offer you the best latency while still
> being able to achieve high throughput.
>
> 3) the latency will eventually be dominated by your disk fsync latency.
> An SSD, or an HDD with a battery-backed cache, will have good fsync latency
> (about half a millisecond). However, if your disk cannot
> provide such low latency, you can consider disabling fsync and relying on
> replication to achieve durability.
> https://github.com/apache/bookkeeper/blob/master/conf/bk_server.conf#L309
>
> Hope this helps.
>
> On Thu, Jul 5, 2018 at 11:46 PM li.peng...@zhaopin.com.cn <
> li.peng...@zhaopin.com.cn> wrote:
>
>>
>> Hi
>>
>> I am trying out bookkeeper. I care about write latency, so I started
>> a test with a single thread and got 400 ops/s on dual SSDs.
>>
>> How can I improve performance to get low latency?
>>
>> Thanks.
>>
>> --
>> li.peng...@zhaopin.com.cn
>>

Problem of Prometheus Stats Providers

2018-07-08 Thread li.peng...@zhaopin.com.cn
I set the configs below, but I can't reach port 8000.


#
## Stats Providers
#

# Whether statistics are enabled
enableStatistics=true

# The flag to enable recording task execution stats.
enableTaskExecutionStats=true

# Stats Provider Class (if `enableStatistics` are enabled)
# Options:
#   - Prometheus: org.apache.bookkeeper.stats.prometheus.PrometheusMetricsProvider
#   - Codahale  : org.apache.bookkeeper.stats.codahale.CodahaleMetricsProvider
#   - Twitter Finagle   : org.apache.bookkeeper.stats.twitter.finagle.FinagleStatsProvider
#   - Twitter Ostrich   : org.apache.bookkeeper.stats.twitter.ostrich.OstrichProvider
#   - Twitter Science   : org.apache.bookkeeper.stats.twitter.science.TwitterStatsProvider
# Default value is:
#   org.apache.bookkeeper.stats.prometheus.PrometheusMetricsProvider
#
# For configuring corresponding stats provider, see details at each section below.
#
statsProviderClass=org.apache.bookkeeper.stats.prometheus.PrometheusMetricsProvider

#
## Prometheus Metrics Provider
#

# These configs are used when using `PrometheusMetricsProvider`.
statsProviderClass=org.apache.bookkeeper.stats.prometheus.PrometheusMetricsProvider

# Default port for Prometheus metrics exporter
prometheusStatsHttpPort=8000

# latency stats rollover interval, in seconds
prometheusStatsLatencyRolloverSeconds=60




Re: latency of bookkeeper

2018-07-08 Thread li.penghui
Thanks for your reply.


I tried different bookkeeper configs:


1. default config (I just set the zookeeper connection string)


Calling addEntry synchronously with 8 threads, I got 5000 ops/s, and I can't improve
throughput by adding threads: 16 threads also got 5000 ops/s.


2. journalSyncData = false, journalAdaptiveGroupWrites = false


I thought that with this config I would get low latency with one thread, but when I
tested it, I got 400 ops/s per thread.


So I have two problems when using bookkeeper. I must block and wait until
bookkeeper tells me the entry id. If I use the async apis, I can't return the add
result to users, because the rpc framework in my company can't support async calls.


1. How can I get low latency with one thread?
2. How can I get high throughput on one ledger with multiple threads and sync apis?




Original Message
From: li.penghui <li.peng...@zhaopin.com.cn>
To: user <u...@bookkeeper.apache.org>
Sent: 2018-07-06 (Fri) 17:56
Subject: Re: latency of bookkeeper


Can I disable the WAL if I can tolerate losing some entries? If the WAL can be
disabled, I can use bookkeeper in more scenarios.


Original Message
From: Sijie Guo <guosi...@gmail.com>
To: user <u...@bookkeeper.apache.org>
Sent: 2018-07-06 (Fri) 16:11
Subject: Re: latency of bookkeeper


I think your question is a bit unclear; latency and throughput are two different
kinds of metrics. Your question seems to be asking about high throughput.


Anyway, I will try to explain the performance tradeoff between latency and
throughput and hope that helps.


Bookkeeper by default fsyncs the data to disks. It does 1ms group commit by
default to keep a good tradeoff between throughput and latency.


1) if you are using synchronous adds, since you are blocking while waiting for the
write response, your single-thread throughput will be limited by the group commit
interval.
You can use multiple threads to improve throughput; since multiple writes will be
grouped together when writing to disk, the latency of your writes will be roughly
as low as your group commit interval.


2) if your application can leverage asynchronous adds, you should consider
using the asynchronous apis. They will offer you the best latency while still being
able to achieve high throughput.


3) the latency will eventually be dominated by your disk fsync latency. An SSD, or
an HDD with a battery-backed cache, will have good fsync latency (about half a
millisecond). However, if your disk cannot provide such low latency, you can
consider disabling fsync and relying on replication to achieve durability:
https://github.com/apache/bookkeeper/blob/master/conf/bk_server.conf#L309


Hope this helps.


On Thu, Jul 5, 2018 at 11:46 PM li.peng...@zhaopin.com.cn
<li.peng...@zhaopin.com.cn> wrote:


Hi,


I am trying out bookkeeper. I care about write latency, so I started a test with a
single thread and got 400 ops/s on dual SSDs.

How can I improve performance to get low latency?


Thanks.


li.peng...@zhaopin.com.cn