2020-10-02 09:53:38 UTC - Enrico: Hi, in my test, if I send messages asynchronously 
to a non-persistent topic I lose about 40% of them: I send 50k messages and the 
consumer receives only 30k. If I use sync sends, or switch to a persistent topic, 
I don't lose any messages: I send 50k and I receive 50k. How can I fix this 
problem?
----
2020-10-02 09:54:31 UTC - Enrico: Ah, if I sleep 1 ns after every message I send 
from the producer, I don't lose messages. I only have the problem when I send 
messages asynchronously to a non-persistent topic without the sleep.
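For reference, here is a minimal Java sketch of a producer-side mitigation, assuming the drops come from the producer's pending-message queue overflowing during fast async sends, or from closing the producer before in-flight sends complete (the service URL and topic name are placeholders). A non-persistent topic's broker can still drop messages that consumers can't keep up with, so slowing the publish rate or switching to a persistent topic may still be needed:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import org.apache.pulsar.client.api.MessageId;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.Schema;

public class AsyncNonPersistentProducer {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")   // placeholder broker URL
                .build();

        Producer<String> producer = client.newProducer(Schema.STRING)
                .topic("non-persistent://public/default/my-topic") // placeholder topic
                // Block sendAsync() instead of failing it when the internal
                // pending-message queue is full.
                .blockIfQueueFull(true)
                .create();

        List<CompletableFuture<MessageId>> pending = new ArrayList<>();
        for (int i = 0; i < 50_000; i++) {
            pending.add(producer.sendAsync("message-" + i)
                    // Surface send failures that would otherwise be silent.
                    .exceptionally(ex -> {
                        System.err.println("send failed: " + ex);
                        return null;
                    }));
        }

        producer.flush();  // push out any batched messages
        // Wait for every in-flight send before closing the producer.
        CompletableFuture.allOf(pending.toArray(new CompletableFuture[0])).join();

        producer.close();
        client.close();
    }
}
```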
----
2020-10-02 10:16:27 UTC - Toktok Rambo: Hello all, I'm using Pulsar with the Go 
client 
(<http://github.com/apache/pulsar-client-go/pulsar|github.com/apache/pulsar-client-go/pulsar>);
 even though I have specified a topic, it's still receiving messages from other 
topics :confused:
Any idea when this would happen? I've confirmed the producer of the respective 
message has an appropriate Topic set as well
----
2020-10-02 11:02:56 UTC - Shivji Kumar Jha: @Toktok Rambo Hey, can you maybe share 
some sample code that does that?
----
2020-10-02 12:25:49 UTC - Alan Hoffmeister: That's great! Thx a lot!
----
2020-10-02 13:28:23 UTC - Łukasz Śnieżewski: @Łukasz Śnieżewski has joined the 
channel
----
2020-10-02 16:45:08 UTC - Jim M.: Any tuning guides? Currently, for a cluster of 
9 bookies and brokers and 5 ZK nodes, with any number of partitions, I'm stuck at 
100 msg per second.
----
2020-10-02 16:48:49 UTC - Claude: @Claude has joined the channel
----
2020-10-02 17:02:33 UTC - Addison Higham: from your producer? that seems quite 
low. What client and client config are you using?
----
2020-10-02 17:03:46 UTC - Jim M.: Yup. Was using the perf test tool in the 
broker
----
2020-10-02 17:03:56 UTC - Jim M.: And java client normally with default configs
----
2020-10-02 17:09:28 UTC - Addison Higham: can you share details of your cluster 
like JVM sizes and the commands you are using for pulsar-perf?
----
2020-10-02 17:11:32 UTC - Jim M.: Perf produce with just the topic name
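One thing to check, assuming the command really was just `pulsar-perf produce <topic>`: the produce command throttles itself to a default publish rate (100 msg/s in recent releases) unless `-r` is raised explicitly, which would match the observed 100 msg/s ceiling. A sketch (topic name is a placeholder):

```sh
# -r: target publish rate in msg/s (pulsar-perf defaults to 100)
# -s: message size in bytes
# -n: number of producers per topic
bin/pulsar-perf produce my-topic -r 100000 -s 1024 -n 1
```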
----
2020-10-02 17:14:13 UTC - Jim M.: Grabbing stats
----
2020-10-02 17:16:54 UTC - Jim M.: Xmx on broker is 4096
----
2020-10-02 17:20:16 UTC - Jim M.: 1024 for bk
----
2020-10-02 17:20:20 UTC - Jim M.: 512 for zk
----
2020-10-02 17:20:52 UTC - Jim M.: 1 cpu bk, 2 cpu for broker, .5 zk
----
2020-10-02 17:29:20 UTC - Addison Higham: what disks are you using for bookies? 
also, are you using multiple partitions?
----
2020-10-02 17:29:43 UTC - Addison Higham: because a single partition can only 
use a single broker and is limited by bookies
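As a rough sketch of spreading a perf test across brokers (tenant/namespace/topic names are placeholders):

```sh
# Create a topic with multiple partitions so the load can spread across
# brokers and bookie ensembles.
bin/pulsar-admin topics create-partitioned-topic \
  persistent://public/default/perf-test -p 36

# Point pulsar-perf at the partitioned topic with an explicit rate.
bin/pulsar-perf produce persistent://public/default/perf-test -r 100000
```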
----
2020-10-02 17:29:48 UTC - Jim M.: Rook Ceph, but the underlying disk is local
----
2020-10-02 17:30:24 UTC - Jim M.: So more brokers then? With more partitions?
----
2020-10-02 17:31:30 UTC - Addison Higham: just more partitions should work
----
2020-10-02 17:32:35 UTC - Jim M.: Did that. Went from 27 to 36 to 56 partitions. Same result
----
2020-10-02 17:36:41 UTC - Addison Higham: how large are your messages? Assuming 
1 kb messages, that performance is very low. Have you checked whether producer 
rate limits have been set somewhere? Or perhaps your pulsar-perf client process 
is being constrained somehow?
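If it helps, one way to check for namespace-level throttles (the namespace name is a placeholder, and the publish-rate command only exists in newer Pulsar versions):

```sh
# Show any configured publish or dispatch rate limits for the namespace.
bin/pulsar-admin namespaces get-publish-rate public/default
bin/pulsar-admin namespaces get-dispatch-rate public/default
```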
----
2020-10-02 17:37:08 UTC - Jim M.: 8 kb decompressed
----
2020-10-02 17:37:21 UTC - Jim M.: No rate limits im aware of.
----
2020-10-02 17:37:30 UTC - Jim M.: We are doing key shared strategy
----
2020-10-02 17:37:56 UTC - Jim M.: Network internally is 10 gb fiber
----
2020-10-02 18:05:28 UTC - Priyath Gregory: @Addison Higham same question posted 
here :)
----
2020-10-02 18:46:56 UTC - Addison Higham: key shared on consumers I assume? 
pulsar-perf doesn't really have key shared support?
----
2020-10-02 18:58:14 UTC - Addison Higham: Sorry for the delay, I keep going back 
and forth. I remember thinking about this extensively when I last worked on this 
code and concluding it was safe, but it did involve some small details, and now 
I can't remember the details and am questioning it again :stuck_out_tongue:

One thing to note, though, is that the call to checkpoint is blocking and only 
happens after messages have been enqueued, but it does seem like the queue would 
need to be drained before we checkpoint. I actually didn't implement all of this 
code initially, I just refactored it. Let me look at this more and see if I can 
get an answer. In the meantime, it might make sense to open an issue.
----
2020-10-02 18:58:52 UTC - Addison Higham: @Priyath Gregory ^^
----
2020-10-02 20:03:31 UTC - Addison Higham: @Jim M. It is hard to say where it is 
going wrong without knowing more. If you are running Prometheus/Grafana, the 
default dashboard might be illuminating. Also, I am happy to jump on a 30-minute 
call and see if we can diagnose it.
----
2020-10-02 21:18:16 UTC - Andy Papia: Just need a sanity check... I'm 
publishing messages to a topic from the Python client and trying to stream them 
out with the pulsar-client binary (also tried the Java client).  If I connect 
to the topic as a consumer while the publisher is publishing, I get events.  
But if I wait until after the producer has disconnected, I don't get any events 
despite specifying `Earliest` on my subscription.  At first I thought this was 
because I had persistence disabled on minikube (though we thought it should 
still be in broker memory), but even with persistence enabled, this is the 
behavior I see.
----
2020-10-02 21:18:21 UTC - Andy Papia: Am I missing something?
----
2020-10-02 21:28:03 UTC - Addison Higham: If you don't have any existing 
subscriptions or a retention policy, Pulsar discards messages.

You can either create a subscription, then publish and later consume, or set a 
retention policy for the namespace
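For example, either option could look roughly like this (topic, namespace, and subscription names are placeholders):

```sh
# Option 1: create the subscription before publishing so messages are retained
# for it until consumed.
bin/pulsar-admin topics create-subscription \
  persistent://public/default/my-topic -s my-sub

# Option 2: keep messages even with no subscriptions by setting retention
# on the namespace.
bin/pulsar-admin namespaces set-retention public/default --time 3d --size 1G
```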
----
2020-10-03 00:22:54 UTC - Andy Papia: excellent thanks!
----
2020-10-03 02:26:12 UTC - Priyath Gregory: @Addison Higham @David Kjerrumgaard 
Created an issue for this. I don't seem to have permission to assign it to a 
user, though. 
<https://issues.apache.org/jira/browse/PULSAR-7>
----
2020-10-03 02:26:54 UTC - Addison Higham: We use GitHub for issues, not Jira
+1 : Priyath Gregory
----
2020-10-03 02:32:27 UTC - Priyath Gregory: Apologies! my bad. 
<https://github.com/apache/pulsar/issues/8192>
----
