2019-02-25 10:27:11 UTC - Laurent Chriqui: Hi @Matteo Merli have you been able to look into it? The functions create call sometimes does go through, but it fails about 90% of the time with a timeout.
----
2019-02-25 10:28:59 UTC - Christophe Bornet: Bold question: is it possible to have brokers and bookies belonging to several clusters?
----
2019-02-25 10:32:53 UTC - bhagesharora: What will happen in this scenario: if we have multiple producers producing messages but no consumer consuming them?
----
2019-02-25 10:44:07 UTC - jia zhai: @bhagesharora If a topic is not subscribed to by any consumer, its messages are treated as acked by default and will not be kept.
You can set retention and backlog if you want to keep messages in this situation.
<http://pulsar.apache.org/docs/en/cookbooks-retention-expiry.html>
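If it helps, here is a minimal sketch of setting such a retention policy with the Java admin API (the admin URL and the 60 minute / 512 MB limits are placeholders; the namespace is the one that appears later in this thread):
```
import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.policies.data.RetentionPolicies;

public class SetRetentionExample {
    public static void main(String[] args) throws Exception {
        try (PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080")   // placeholder admin URL
                .build()) {
            // Keep acknowledged messages for up to 60 minutes or 512 MB,
            // whichever limit is reached first.
            admin.namespaces().setRetention("prop/use/ns-abc",
                    new RetentionPolicies(60, 512));
        }
    }
}
```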
----
2019-02-25 10:50:35 UTC - Christophe Bornet: I guess the Global ZooKeeper is always a SPOF, isn't it?
----
2019-02-25 11:20:27 UTC - bhagesharora: @jia zhai A consumer sent a message to the Pulsar broker and the message was successfully processed. But in this condition there is no consumer, so there will be no acknowledgement, and if there is no acknowledgement the message will be retained until it's processed. Is my understanding correct? And in this situation do I need to set retention and backlog?
----
2019-02-25 11:20:49 UTC - Sijie Guo: Currently the global ZooKeeper is only used for the configuration store (storing the replication configuration), so it is not in the critical path.
If you changed it to the way you suggested, it would be in the critical path :slightly_smiling_face:
----
2019-02-25 11:24:52 UTC - jia zhai: @bhagesharora You may mean the *producer* sent a message and there is no consumer.
So, in this situation, the topic has not been subscribed to by any consumer?
----
2019-02-25 11:31:11 UTC - jia zhai: If the topic is not subscribed to, nobody cares about the messages sent to it, so they are treated as acked and not kept.
If the topic has been subscribed to, un-acked messages will be retained until they are acked.
----
2019-02-25 11:31:49 UTC - jia zhai: Retention is set for acked messages.
Backlog is set for un-acked messages.
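For the backlog side, a similar sketch with the 2.x Java admin API (same placeholder admin URL and namespace as above; the 1 GB limit and the producer_request_hold policy are just examples):
```
import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.policies.data.BacklogQuota;

public class SetBacklogQuotaExample {
    public static void main(String[] args) throws Exception {
        try (PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080")   // placeholder admin URL
                .build()) {
            // Allow up to 1 GB of un-acked backlog per topic in the namespace;
            // once exceeded, hold producers until consumers catch up.
            admin.namespaces().setBacklogQuota("prop/use/ns-abc",
                    new BacklogQuota(1024L * 1024 * 1024,
                            BacklogQuota.RetentionPolicy.producer_request_hold));
        }
    }
}
```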
+1 : Yuvaraj Loganathan, durga
----
2019-02-25 11:32:25 UTC - bhagesharora: @jia zhai yes, the producer sent a message but there is no consumer
----
2019-02-25 11:32:57 UTC - jia zhai: Then, by default, the messages are not kept.
----
2019-02-25 11:33:39 UTC - bhagesharora: The message is not kept, so will any error come? How can we handle this situation?
----
2019-02-25 11:34:37 UTC - jia zhai: > so will any error come?
Do you mean you want to be able to consume already-sent messages?
----
2019-02-25 11:37:22 UTC - bhagesharora: Yes, can we consume those messages?
----
2019-02-25 11:37:28 UTC - jia zhai: You need to set retention, and when you start to subscribe and consume the messages, use the parameter `SubscriptionInitialPosition.Earliest`:
```
// Start the subscription from the earliest retained message instead of the latest.
Consumer<byte[]> earliestConsumer = pulsarClient.newConsumer()
        .topic(topicName)
        .subscriptionName("test-subscription-earliest")
        .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest)
        .subscribe();
```
----
2019-02-25 11:38:01 UTC - bhagesharora: OK, I'll try to implement the same scenario in Java and let you know if I face any errors.
----
2019-02-25 11:38:17 UTC - jia zhai: It can then consume from the earliest retained message (determined by the retention setting).
----
2019-02-25 12:28:31 UTC - Christophe Bornet: Indeed
----
2019-02-25 13:25:41 UTC - Yuvaraj Loganathan: After upgrading to Pulsar 2.3.0 our throughput dropped from 1500 rps to 15 rps for synchronous sends. Average publish latency increased from 8 ms to 454 ms. :disappointed: With async sends we see the same throughput as before. I have set journalSyncData=false in BookKeeper, but still no change in throughput. All resource usage, including CPU, RAM, and disk, is idle. @Matteo Merli Any help would be highly appreciated.
----
2019-02-25 13:55:11 UTC - Yuvaraj Loganathan: Also, what is the best strategy to increase synchronous send throughput? Will increasing `io_threads` on the producer help?
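One option to explore here is to pipeline `sendAsync()` calls and wait on the futures in bulk instead of blocking per message; a rough sketch (service URL, topic, and message count are placeholders):
```
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import org.apache.pulsar.client.api.MessageId;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;

public class PipelinedSendExample {
    public static void main(String[] args) throws Exception {
        try (PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")     // placeholder service URL
                .build()) {
            Producer<byte[]> producer = client.newProducer()
                    .topic("persistent://prop/use/ns-abc/test-topic")  // placeholder topic
                    .create();

            // Keep many messages in flight at once, then wait for all acks at the end.
            List<CompletableFuture<MessageId>> pending = new ArrayList<>();
            for (int i = 0; i < 10_000; i++) {
                pending.add(producer.sendAsync(("msg-" + i).getBytes()));
            }
            CompletableFuture.allOf(pending.toArray(new CompletableFuture[0])).join();
            producer.close();
        }
    }
}
```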
----
2019-02-25 14:02:09 UTC - naga: Can we say Kinesis is the same as Pulsar?
----
2019-02-25 14:05:30 UTC - Yuvaraj Loganathan: (uploaded bookkeeper and broker conf attachments)
----
2019-02-25 14:06:04 UTC - Yuvaraj Loganathan: I have uploaded the bookkeeper and broker conf here @Matteo Merli ^^
----
2019-02-25 14:10:16 UTC - Maarten Tielemans: AWS Kinesis? No, it is not the 
same technology as Apache Pulsar
----
2019-02-25 14:16:59 UTC - bhagesharora: @jia zhai [pulsar-timer-4-1] INFO org.apache.pulsar.client.impl.ProducerStatsRecorderImpl - [persistent://prop/use/ns-abc/Retention] [standalone-0-31] , This error comes up if we are only producing a message and there is no consumer.
----
2019-02-25 14:25:34 UTC - Christophe Bornet: > you can still use global zookeeper for all clusters with proper chroot.

Not sure what that means. Are the paths for a given cluster somehow prefixed by the cluster name so they can be namespaced? If that's the case, then I guess we can have multiple clusters managed by the same ZK quorum.
----
2019-02-25 14:35:05 UTC - jia zhai: @bhagesharora what is the error?
----
2019-02-25 14:46:56 UTC - Christophe Bornet: It seems that I can namespace in ZK just by using a connection string with a chroot, like `zk.mydomain.com:2181/dc1` and `zk.mydomain.com:2181/dc2`. Can you confirm that would work?
----
2019-02-25 14:52:06 UTC - Matteo Merli: Yes, it's a problem with the Jetty threads config that was introduced in the last release. Working on the fix. It will be released soon in 2.3.1.
----
2019-02-25 15:40:49 UTC - Enrico Olivelli: Hi guys, I'm trying to upgrade Pulsar to 2.3, only in unit (integration) tests. My test starts a Pulsar broker and runs the client application in the same VM. I am getting this error, which is new to me:
19-02-25-16-38-57       Error when trying to get active brokers
19-02-25-16-38-57       org.apache.zookeeper.KeeperException$NoNodeException: 
KeeperErrorCode = NoNode for /loadbalance/brokers
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode 
for /loadbalance/brokers
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:118)
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
        at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:2501)
        at 
org.apache.bookkeeper.zookeeper.ZooKeeperClient.access$3701(ZooKeeperClient.java:70)
        at 
org.apache.bookkeeper.zookeeper.ZooKeeperClient$27.call(ZooKeeperClient.java:1248)
        at 
org.apache.bookkeeper.zookeeper.ZooKeeperClient$27.call(ZooKeeperClient.java:1242)
        at 
org.apache.bookkeeper.zookeeper.ZooWorker.syncCallWithRetries(ZooWorker.java:140)
        at 
org.apache.bookkeeper.zookeeper.ZooKeeperClient.getChildren(ZooKeeperClient.java:1242)
        at 
org.apache.pulsar.zookeeper.ZooKeeperCache$3.call(ZooKeeperCache.java:388)
        at 
org.apache.pulsar.zookeeper.ZooKeeperCache$3.call(ZooKeeperCache.java:1)
        at 
com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4876)
        at 
com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3528)
        at 
com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2277)
        at 
com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2154)
        at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2044)
        at com.google.common.cache.LocalCache.get(LocalCache.java:3952)
        at 
com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4871)
        at 
org.apache.pulsar.zookeeper.ZooKeeperCache.getChildren(ZooKeeperCache.java:384)
        at 
org.apache.pulsar.zookeeper.ZooKeeperChildrenCache.get(ZooKeeperChildrenCache.java:52)
        at 
org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl.getAvailableBrokers(ModularLoadManagerImpl.java:341)
        at 
org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl.doLoadShedding(ModularLoadManagerImpl.java:581)
        at 
org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerWrapper.doLoadShedding(ModularLoadManagerWrapper.java:54)
        at 
org.apache.pulsar.broker.loadbalance.LoadSheddingTask.run(LoadSheddingTask.java:41)
        at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
----
2019-02-25 15:42:51 UTC - Enrico Olivelli: Full logs:
19-02-25-16-41-37       Successfully acquired ownership of 
/namespace/pulsar/magnews/localhost:9372/0x00000000_0xffffffff
19-02-25-16-41-37       added heartbeat namespace name in local cache: 
ns=pulsar/magnews/localhost:9372
19-02-25-16-41-37       Loading all topics on bundle: 
pulsar/magnews/localhost:9372/0x00000000_0xffffffff
Error when trying to get active brokers
19-02-25-16-42-36       org.apache.zookeeper.KeeperException$NoNodeException: 
KeeperErrorCode = NoNode for /loadbalance/brokers
----
2019-02-25 15:43:11 UTC - Enrico Olivelli: the main thread is stuck at:
 java.lang.Thread.State: TIMED_WAITING (parking)
        at jdk.internal.misc.Unsafe.park([email protected]/Native Method)
        - parking to wait for  <0x0000000088f7b938> (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        at 
java.util.concurrent.locks.LockSupport.parkNanos([email protected]/LockSupport.java:234)
        at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos([email protected]/AbstractQueuedSynchronizer.java:2123)
        at 
java.util.concurrent.ThreadPoolExecutor.awaitTermination([email protected]/ThreadPoolExecutor.java:1454)
        at 
org.eclipse.jetty.util.thread.ExecutorThreadPool.join(ExecutorThreadPool.java:182)
        at org.apache.pulsar.broker.web.WebService.close(WebService.java:200)
----
2019-02-25 15:48:30 UTC - Enrico Olivelli: Got it, this is the real error; maybe I have to adapt the dependencies:
java.lang.NoClassDefFoundError: org/apache/pulsar/common/schema/SchemaType
        at 
org.apache.pulsar.broker.service.schema.JsonSchemaCompatibilityCheck.getSchemaType(JsonSchemaCompatibilityCheck.java:39)
        at 
org.apache.pulsar.broker.service.schema.SchemaRegistryService.getCheckers(SchemaRegistryService.java:40)
        at 
org.apache.pulsar.broker.service.schema.SchemaRegistryService.create(SchemaRegistryService.java:55)
        at org.apache.pulsar.broker.PulsarService.start(PulsarService.java:425)
----
2019-02-25 15:49:46 UTC - David Kjerrumgaard: Not as they are currently 
designed.
----
2019-02-25 16:31:03 UTC - Matteo Merli: That class comes from `pulsar-client-api`, which should be pulled in automatically by `pulsar-broker`, which depends on `pulsar-client`.
----
2019-02-25 17:08:02 UTC - Yuvaraj Loganathan: I think I am really doing 
something wrong here
----
2019-02-25 18:28:12 UTC - bhagesharora: @jia zhai [pulsar-timer-4-1] INFO org.apache.pulsar.client.impl.ProducerStatsRecorderImpl - [persistent://prop/use/ns-abc/Retention] [standalone-0-31] ,
----
2019-02-25 18:45:35 UTC - bhagesharora: Is there any chance/situation of message failure in Apache Pulsar? For example, if the connection is lost or the consumer gets disconnected?
----
2019-02-25 20:15:06 UTC - durga: Hi Guys - Wondering if anyone can provide guidance on how to troubleshoot an issue we run into occasionally. We have Pulsar replicating messages between 3 data centers. Once in a while we see a couple of messages on a particular topic not reaching one of the other data centers. Sometimes it could be due to network latency, but we don't have an easy way to know. Other times we find it is certainly not a network latency issue, as messages on another topic reached the target data center.

To troubleshoot, we need to look into what happened to the message after publish and whether it made it to BookKeeper on the producer side. Similarly, whether that message made it to the destination; if not, then it could be due to a network issue. If it made it to the target data center, then we need to see whether it made it to BookKeeper there, and so on.

I am assuming this may be a common need, i.e. the ability to trace the path of a past message for debug/troubleshooting purposes. I am wondering how people do it currently and would appreciate any guidance on this.
----
2019-02-25 21:05:13 UTC - Grant Wu: ?
----
2019-02-25 21:06:37 UTC - David Kjerrumgaard: :smile_cat: :keyboard:
joy : Yadi Yang
----
2019-02-26 00:33:12 UTC - Emma Pollum: Is it possible to enable authentication 
via JSON tokens on JUST the proxies and not the brokers?
----
2019-02-26 00:42:02 UTC - Emma Pollum: How do I provide a token through the CLI when using `pulsar-client produce`?
----
2019-02-26 00:59:02 UTC - David Kjerrumgaard: 
<http://pulsar.apache.org/docs/en/reference-cli-tools/#pulsar-client>
----
2019-02-26 00:59:27 UTC - David Kjerrumgaard: The only option you have is 
`--auth-params`
----
2019-02-26 00:59:53 UTC - Emma Pollum: Thanks @David Kjerrumgaard. Should I use 
"token" as the key, and the token itself as the value?
----
2019-02-26 01:00:18 UTC - David Kjerrumgaard: What value are you using for 
`--auth-plugin` ?
----
2019-02-26 01:01:27 UTC - David Kjerrumgaard: I think you pass it like so ` 
tokenSecretKey=data:base64,FLFyW0oLJ2Fi22KKCm21J18mbAdztfSHN/l`
----
2019-02-26 01:01:36 UTC - Emma Pollum: 
org.apache.pulsar.broker.authentication.AuthenticationProviderToken
----
2019-02-26 01:01:40 UTC - Emma Pollum: okay thank you!
----
2019-02-26 01:01:59 UTC - David Kjerrumgaard: 
<http://pulsar.apache.org/docs/en/security-token-admin/>
----
2019-02-26 01:02:19 UTC - David Kjerrumgaard: That works for admin CLI. IDK 
about the client CLI
----
2019-02-26 01:02:48 UTC - Emma Pollum: Cool, I'll play with it 
:slightly_smiling_face:
----
2019-02-26 01:03:00 UTC - David Kjerrumgaard: I think this is what you are 
looking for
----
2019-02-26 01:03:00 UTC - David Kjerrumgaard: 
<http://pulsar.apache.org/docs/en/security-token-client/>
----
2019-02-26 01:03:10 UTC - David Kjerrumgaard:
```
authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken
authParams=token:eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJKb2UifQ.ipevRNuRP6HflG8cFKnmUPtypruRC4fb1DWtoLL62SY
```
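The same two settings translated to the Java client builder (a rough sketch; the service URL and token value are placeholders):
```
import org.apache.pulsar.client.api.PulsarClient;

public class TokenAuthClientExample {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")     // placeholder service URL
                .authentication(
                        "org.apache.pulsar.client.impl.auth.AuthenticationToken",
                        "token:eyJhbGciOiJIUzI1NiJ9...")   // placeholder JWT
                .build();

        // ... create producers/consumers as usual, then close.
        client.close();
    }
}
```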
----
2019-02-26 01:03:33 UTC - David Kjerrumgaard: good luck!!
----
2019-02-26 01:09:52 UTC - Emma Pollum: @David Kjerrumgaard that was exactly it, 
thank you so much!
+1 : David Kjerrumgaard
----
2019-02-26 01:15:57 UTC - David Kjerrumgaard: Glad to help!!
----
2019-02-26 01:54:03 UTC - bin ma: @bin ma has joined the channel
----
2019-02-26 02:47:18 UTC - bossbaby: @jia zhai this is my conf
broker.conf: <https://gist.github.com/tuan6956/946cfe7cf01694d97991197d6c6b82cb>
bookkeeper.conf: 
<https://gist.github.com/tuan6956/780116a8f0ad11794a85b685725617ee>
Can you check it?
----
2019-02-26 03:54:44 UTC - wdm: @wdm has joined the channel
----
2019-02-26 04:30:26 UTC - Yuvaraj Loganathan: Thanks @jia zhai
----
2019-02-26 04:35:49 UTC - Cao Chunh: @Cao Chunh has joined the channel
----
2019-02-26 04:40:11 UTC - bossbaby: I edited it. The reason for the above error is that the default for numHttpServerThreads is Runtime.getRuntime().availableProcessors(), which is smaller than the number of threads that Pulsar needs, so I just need to raise numHttpServerThreads to something like 100.
+1 : jia zhai
----
2019-02-26 04:42:49 UTC - bossbaby: In Pulsar 2.2.1, the number of threads used for HTTP request processing defaulted to 2 * Runtime.getRuntime().availableProcessors(), but in 2.3.0 it is only Runtime.getRuntime().availableProcessors(). Why did Pulsar change it?
----
2019-02-26 04:46:32 UTC - Yuvaraj Loganathan: We got our performance back to its original state on 2.3.0. It was a mistake on our side: we had set journalMaxGroupWaitMSec=500. Even though this is a flush interval, the producer is held until `journalMaxGroupWaitMSec` is reached before it gets an acknowledgement. Thanks @Sijie Guo for your help!
+1 : Sijie Guo, Ali Ahmed, Karthik Ramasamy
----
2019-02-26 05:33:00 UTC - Sijie Guo: Glad to hear you resolved the problem yourself (sorry I didn't get time to help you).
----
2019-02-26 06:04:56 UTC - rfyiamcool: @rfyiamcool has joined the channel
----
