2019-03-01 09:34:20 UTC - Maarten Tielemans:
<https://github.com/apache/pulsar/issues/3727>
----
2019-03-01 10:08:57 UTC - bhagesharora: @Sijie Guo Thank you so much for your
response. Whenever you update the document, please let me know by e-mail or
tag me, so I can pick my exploration back up, test the same workflow, and get
back to you on whether it is working. Thank you :slightly_smiling_face:
----
2019-03-01 10:09:50 UTC - Sijie Guo: sure will do
----
2019-03-01 11:34:47 UTC - Shivji Kumar Jha: Hi guys, what's the best way to
batch in the synchronous send in Pulsar?
Basically, I want to:
1) hold until a threshold on the number of messages OR the elapsed time is reached
2) send this batch synchronously to Pulsar
3) ack to the client
----
2019-03-01 11:50:15 UTC - jia zhai: @Shivji Kumar Jha You can configure both the
elapsed time and the number of messages for a batch in ProducerBuilder
----
2019-03-01 11:50:43 UTC - jia zhai: ```
private long batchingMaxPublishDelayMicros = TimeUnit.MILLISECONDS.toMicros(1);
private int batchingMaxMessages = 1000;
private boolean batchingEnabled = true; // enabled by default
```
----
2019-03-01 11:51:56 UTC - jia zhai: ```
/**
 * Set the time period within which the messages sent will be batched <i>default: 1 ms</i> if batch messages are
 * enabled. If set to a non zero value, messages will be queued until either:
 * <ul>
 * <li>this time interval expires</li>
 * <li>the max number of messages in a batch is reached ({@link #batchingMaxMessages(int)})
 * <li>the max size of batch is reached
 * </ul>
 * <p>
 * All messages will be published as a single batch message. The consumer will be delivered individual messages in
 * the batch in the same order they were enqueued.
 *
 * @param batchDelay
 *            the batch delay
 * @param timeUnit
 *            the time unit of the {@code batchDelay}
 * @return the producer builder instance
 */
ProducerBuilder<T> batchingMaxPublishDelay(long batchDelay, TimeUnit timeUnit);

/**
 * Set the maximum number of messages permitted in a batch. <i>default: 1000</i> If set to a value greater than 1,
 * messages will be queued until this threshold is reached or batch interval has elapsed
 * <p>
 * All messages in batch will be published as a single batch message. The consumer will be delivered individual
 * messages in the batch in the same order they were enqueued
 *
 * @see #batchingMaxPublishDelay(long, TimeUnit)
 * @param batchMessagesMaxMessagesPerBatch
 *            maximum number of messages in a batch
 * @return the producer builder instance
 */
ProducerBuilder<T> batchingMaxMessages(int batchMessagesMaxMessagesPerBatch);
```
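To make the two thresholds concrete, here is a minimal sketch of the "count OR delay, whichever comes first" rule the builder settings express (plain Java, no broker needed; the class `BatchTrigger` and its fields are illustrative stand-ins, not part of the Pulsar API):

```java
// Illustrative model of the batching trigger: a pending batch is flushed
// when either the max message count is reached or the max publish delay
// has elapsed since the first message was enqueued.
class BatchTrigger {
    final long maxPublishDelayMicros;
    final int maxMessages;

    BatchTrigger(long maxPublishDelayMicros, int maxMessages) {
        this.maxPublishDelayMicros = maxPublishDelayMicros;
        this.maxMessages = maxMessages;
    }

    // Should the pending batch be sent now?
    boolean shouldFlush(int pendingMessages, long microsSinceFirstEnqueue) {
        return pendingMessages >= maxMessages
                || microsSinceFirstEnqueue >= maxPublishDelayMicros;
    }
}
```

With the defaults quoted above (1 ms delay, 1000 messages), a full batch of 1000 goes out immediately, and a smaller batch goes out once 1 ms has passed.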
+1 : Shivji Kumar Jha
----
2019-03-01 11:58:22 UTC - Sijie Guo: one more comment: if you want to
control the flush yourself, you can set batchingMaxMessages to a large enough
value and set the delay to 0 (disabling time-based batching).
then you can do the following:
```
producer.sendAsync(msg1);
producer.sendAsync(msg2);
producer.sendAsync(msg3);
producer.flush();
```
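A sketch of that manual-flush pattern without a broker (the `BufferingProducer` class here is a stand-in modeling the behavior, not the real `Producer` interface): `sendAsync` only enqueues, and `flush` ships everything buffered so far as one batch.

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in modeling the sendAsync/flush pattern: messages accumulate
// in a pending buffer until flush() pushes them out as a single batch.
class BufferingProducer {
    private final List<String> pending = new ArrayList<>();
    final List<List<String>> sentBatches = new ArrayList<>();

    void sendAsync(String msg) {
        pending.add(msg); // queued, not yet on the wire
    }

    void flush() {
        if (!pending.isEmpty()) {
            sentBatches.add(new ArrayList<>(pending)); // one batch per flush
            pending.clear();
        }
    }
}
```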
----
2019-03-01 12:12:39 UTC - Shivji Kumar Jha: That's a good idea.
----
2019-03-01 12:12:53 UTC - Shivji Kumar Jha: Thank you :grinning:
----
2019-03-01 12:14:32 UTC - Shivji Kumar Jha: Thank you @jia zhai. I will play
around with these :grinning:
----
2019-03-01 12:17:12 UTC - jia zhai: @Shivji Kumar Jha you are welcome
----
2019-03-01 13:47:00 UTC - Maarten Tielemans: Question (unclear from the
documentation): does Pulsar support a "broadcast" subscription mode, in which
every subscriber receives every message?
----
2019-03-01 13:50:15 UTC - Darragh: @Darragh has joined the channel
----
2019-03-01 13:51:07 UTC - Sijie Guo: can’t you just create many subscriptions?
----
2019-03-01 13:51:49 UTC - Darragh: From the documentation it seems that there
are 3 modes. Exclusive/Shared/Failover
----
2019-03-01 13:52:19 UTC - Darragh: In shared mode the messages seem to be
spread out over all the subscriptions, so adding more subscriptions won't help
2019-03-01 13:53:03 UTC - Sijie Guo: in a shared subscription, the messages are
spread out over the consumers
----
2019-03-01 13:53:18 UTC - Mark Marijnissen: @Maarten Tielemans :
What you want is called a "reader":
<https://pulsar.apache.org/docs/en/concepts-clients/>
The important difference is this: Who manages the offset?
If you want Pulsar to track the offset, you create a subscription. The
subscription distributes work to one or more consumers.
If you want to manage the offset in your client (i.e. just receive the latest
messages), then you create a reader. You can control at which offset you want
to start receiving messages.
EDIT: Of course, if you want your clients to remember their offset, you can
create more than one subscription. Clients will resume where they left off if
they crash/restart.
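In code, the distinction might look like this sketch (plain Java stand-ins, not the real client API; with a subscription the offset lives with the broker, while with a reader the caller supplies the start offset itself):

```java
import java.util.Arrays;
import java.util.List;

// Stand-ins contrasting broker-tracked vs client-tracked offsets.
class OffsetDemo {
    static final List<String> LOG = Arrays.asList("m0", "m1", "m2", "m3");

    // Subscription-style: the broker remembers where this subscription is.
    static int brokerOffset = 0;
    static String receiveViaSubscription() {
        return LOG.get(brokerOffset++); // broker advances the cursor
    }

    // Reader-style: the client passes the offset it wants to read from.
    static String readAt(int clientOffset) {
        return LOG.get(clientOffset); // nothing is remembered server-side
    }
}
```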
----
2019-03-01 13:54:36 UTC - Darragh: Ah sorry my bad, I was under the impression
the shared/exclusive etc applied directly on the topic, missed the extra
subscription layer
----
2019-03-01 13:55:17 UTC - Sijie Guo: you can have many subscriptions; each
subscription will receive all the messages.
for each subscription, you can have many consumers (a subscription is kind of a
consumer group), and you can choose the subscription type.
so to broadcast, you can just give each consumer its own subscription
:slightly_smiling_face:
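The broadcast pattern can be sketched in plain Java (no Pulsar dependency; `Topic` here is an illustrative model, not the broker implementation): each subscription keeps its own cursor over the same message log, so every subscription, and hence every consumer that owns one, receives every message.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative model: one shared log, one independent cursor per subscription.
class Topic {
    private final List<String> log = new ArrayList<>();
    private final Map<String, Integer> cursors = new HashMap<>();

    void publish(String msg) { log.add(msg); }

    void subscribe(String subscription) { cursors.putIfAbsent(subscription, 0); }

    // Each subscription reads from its own cursor, so all of them
    // see every message (broadcast semantics).
    List<String> receiveAll(String subscription) {
        int from = cursors.get(subscription);
        List<String> msgs = new ArrayList<>(log.subList(from, log.size()));
        cursors.put(subscription, log.size());
        return msgs;
    }
}
```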
----
2019-03-01 13:55:52 UTC - Darragh: yup, silly mistake on my end, thanks for the
quick response!
----
2019-03-01 14:00:13 UTC - Sijie Guo: :slightly_smiling_face:
----
2019-03-01 14:48:39 UTC - Darragh: Looking at the deployment-on-Kubernetes
guide, the default setup lists 2 BookKeeper instances, 3 ZooKeeper instances and
3 broker instances. Are these minimum values or recommended values? Could
someone explain what the impact would be if I changed any of those numbers
(lower / higher)? thanks in advance
----
2019-03-01 15:12:16 UTC - Daniel Root: Does the admin api support creating a
non-partitioned topic?
----
2019-03-01 15:18:52 UTC - Daniel Root: No worries. I tried the same thing but
with 1 as the number of partitions. I thought perhaps an unpartitioned topic
is a topic with a single partition. It throws an exception letting me know I
need more than one partition. :slightly_frowning_face:
----
2019-03-01 15:24:11 UTC - Darragh: and what happens if you just put/post to
`/admin/v2/persistent/:tenant/:namespace/:topic`
----
2019-03-01 15:24:27 UTC - Darragh: it's not documented but the delete is on
that url, so might be worth a shot :smile:
----
2019-03-01 15:34:42 UTC - Daniel Root: Yeah I noticed that about the delete.
Unfortunately, neither PUT nor POST is allowed.
----
2019-03-01 15:35:42 UTC - Daniel Root: The only way I can see to do this is by
creating a producer or consumer. Weird.
----
2019-03-01 16:51:54 UTC - Matteo Merli: The non-partitioned topics are
auto-created (if permissions are there). There is an ongoing effort to allow
disabling auto-creation and adding a new API to explicitly create topics
----
2019-03-01 17:22:41 UTC - Grant Wu: @Matteo Merli Do I need to use pulsar-admin
version 2.3.0 with a broker of version 2.2.1 ?
----
2019-03-01 17:22:49 UTC - Grant Wu: :sob:
----
2019-03-01 17:22:57 UTC - Matteo Merli: Not necessarily
----
2019-03-01 17:23:04 UTC - Grant Wu: Now I have a `pulsar-admin function create`
invocation hanging
----
2019-03-01 17:23:20 UTC - Grant Wu: `strace` says it’s just stuck in `wait4(-1,
`
----
2019-03-01 17:23:50 UTC - Matteo Merli: Uhm, it looks like the same Jetty
threads issue that was reported earlier
----
2019-03-01 17:24:09 UTC - Grant Wu: Hrm… I didn’t see that, can you link?
----
2019-03-01 17:24:12 UTC - Matteo Merli: though that was in 2.3 (brokers) , not
2.2
----
2019-03-01 17:25:31 UTC - Grant Wu: Oof, I’m getting a lot of
```
17:24:16.302 [BookKeeperClientWorker-OrderedExecutor-0-0] ERROR org.apache.bookkeeper.common.util.SafeRunnable - Unexpected throwable caught
java.lang.ArrayIndexOutOfBoundsException: -2
	at com.google.common.collect.RegularImmutableList.get(RegularImmutableList.java:60) ~[com.google.guava-guava-21.0.jar:?]
	at org.apache.bookkeeper.client.PendingReadOp$SequenceReadRequest.sendNextRead(PendingReadOp.java:400) ~[org.apache.bookkeeper-bookkeeper-server-4.9.0.jar:4.9.0]
	at org.apache.bookkeeper.client.PendingReadOp$SequenceReadRequest.read(PendingReadOp.java:382) ~[org.apache.bookkeeper-bookkeeper-server-4.9.0.jar:4.9.0]
	at org.apache.bookkeeper.client.PendingReadOp.initiate(PendingReadOp.java:526) ~[org.apache.bookkeeper-bookkeeper-server-4.9.0.jar:4.9.0]
	at org.apache.bookkeeper.client.PendingReadOp.safeRun(PendingReadOp.java:536) ~[org.apache.bookkeeper-bookkeeper-server-4.9.0.jar:4.9.0]
	at org.apache.bookkeeper.common.util.SafeRunnable.run(SafeRunnable.java:36) [org.apache.bookkeeper-bookkeeper-common-4.9.0.jar:4.9.0]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
```
too
----
2019-03-01 17:25:54 UTC - Matteo Merli: ehm.. not finding the issue..
<https://apache-pulsar.slack.com/archives/C5Z4T36F7/p1550857080062400?thread_ts=1550848664.058900&cid=C5Z4T36F7>
----
2019-03-01 17:26:06 UTC - Grant Wu: I see
----
2019-03-01 17:27:20 UTC - Matteo Merli: Creating an issue on BK for the last
stack trace
----
2019-03-01 17:28:03 UTC - Grant Wu: Also this:
```
17:23:24.734 [broker-topic-workers-OrderedScheduler-6-0] ERROR org.apache.pulsar.broker.service.persistent.PersistentDispatcherSingleActiveConsumer - [<persistent://public/functions/metadata> / -Consumer{subscription=PersistentSubscription{topic=<persistent://public/functions/metadata>, name=reader-949f7820f4}, consumerId=1, consumerName=a5f58, address=/10.244.1.22:53552}] Error reading entries at 1276:0 : Bookie operation timeout - Retrying to read in 59.54 seconds
```
----
2019-03-01 17:28:18 UTC - Grant Wu: I imagine this could explain the
`pulsar-admin functions create` timing out?
----
2019-03-01 17:28:58 UTC - Matteo Merli:
<https://github.com/apache/bookkeeper/issues/1970>
----
2019-03-01 17:29:50 UTC - Matteo Merli: > I imagine this could explain the
`pulsar-admin functions create` timing out?
Yes, that looks like the most probable explanation
----