2020-10-14 10:01:29 UTC - Seun: Guys,
How best can one really begin to look into what is going on within a Pulsar 
cluster from the perspective of the several micro-services that are interacting 
with Pulsar? Does Pulsar Manager get populated with events as they occur within 
the Pulsar cluster? So far, I have some clues from the microservice logs within 
my k8s cluster, but I think there should be a way to see the entirety of events 
happening within Pulsar from Pulsar itself rather than relying on logs from k8s.
----
2020-10-14 10:02:26 UTC - Seun: Or is this where some external 
logging/monitoring tools come into play?
----
2020-10-14 10:33:18 UTC - Shivam Arora: I think what you are looking for is an 
end-to-end tracing solution. I am not sure if Zipkin can intercept Pulsar 
messages, but here is another tool which does the same job:

<https://streamnative.io/blog/tech/2020-06-11-opentracing-instrumentation-for-pulsar>

In our case we added a request id as a message property and use ELK for 
monitoring the whole system.
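For illustration, a minimal sketch of attaching such a request id as a message property with the Java client; the service URL, topic name, and `request-id` property name are just placeholders:
```
import java.util.UUID;

import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.Schema;

public class TracedProducer {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")        // placeholder broker URL
                .build();
        Producer<String> producer = client.newProducer(Schema.STRING)
                .topic("persistent://public/default/orders")  // placeholder topic
                .create();

        // Attach a correlation id as a message property so every service that
        // touches the message can log/forward the same id (ELK, Grafana, ...).
        String requestId = UUID.randomUUID().toString();
        producer.newMessage()
                .property("request-id", requestId)
                .value("hello")
                .send();

        producer.close();
        client.close();
    }
}
```
A consumer can then read the same id with `message.getProperty("request-id")` and ship it to the logging/monitoring stack alongside its own logs.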
----
2020-10-14 10:36:29 UTC - Seun: Thanks @Shivam Arora. It basically comes down to 
integrating external tools. Thanks for the link. Do you think 
Grafana/Prometheus solves this?
----
2020-10-14 10:38:11 UTC - Shivam Arora: You can use Grafana/Prometheus instead 
of the ELK stack, but you need some kind of common attribute to trace, as I said 
earlier.
----
2020-10-14 10:56:18 UTC - Rajani Rahangdale: @Rajani Rahangdale has joined the 
channel
----
2020-10-14 10:58:03 UTC - Shivam Arora: 
<https://github.com/apache/pulsar/wiki/PIP-23:-Message-Tracing-By-Interceptors>
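For reference, a minimal sketch of such a tracing interceptor using the Java client's `ProducerInterceptor` API; the class name and the `request-id` property are assumptions, not part of the PIP:
```
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.MessageId;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.ProducerInterceptor;

// Logs every outgoing message and its acknowledgement; attach it with
// client.newProducer(...).intercept(new LoggingInterceptor()).
public class LoggingInterceptor implements ProducerInterceptor<byte[]> {

    @Override
    public Message<byte[]> beforeSend(Producer<byte[]> producer, Message<byte[]> message) {
        // Called before each send; return the (possibly modified) message.
        System.out.println("send topic=" + producer.getTopic()
                + " request-id=" + message.getProperty("request-id"));
        return message;
    }

    @Override
    public void onSendAcknowledgement(Producer<byte[]> producer, Message<byte[]> message,
                                      MessageId msgId, Throwable exception) {
        // Called once the broker acks the message (or the send fails).
        if (exception != null) {
            System.err.println("send failed: " + exception.getMessage());
        } else {
            System.out.println("acked " + msgId);
        }
    }

    @Override
    public void close() {
        // Nothing to clean up in this sketch.
    }
}
```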
----
2020-10-14 11:06:10 UTC - Seun: Thanks. Will check this
----
2020-10-14 12:11:24 UTC - Gilles Barbier: Hi all, I'm not sure if it's a dumb 
question or not, but what would be your solution to display the content of a 
topic in real-time on a web page? And what if you need to filter this content 
according to a parameter embedded in the messages (let's say a userId) but 
cannot do it front-side (due to high topic throughput)?
----
2020-10-14 14:58:07 UTC - Penghui Li: > but what would be your solution to 
display the content of a topic in real-time on a web page
Pulsar has WebSocket protocol support, would that work for you?

>  And what if you need to filter this content according to a parameter 
embedded in messages (let’s say a userId) but can not do it front-side (due to 
high topic throughput)
Right now, Pulsar does not have any filtering at the broker side. One way you 
can optimize this case is to add some partitions and keep data with the same 
userId written to the same partition, so that you only need to read the data 
from that partition.
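As an illustration of that approach, a minimal sketch with the Java client; the broker URL, the partitioned topic, and the `user-42` id are placeholders (messages sharing a key are hashed to the same partition by the default router):
```
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.Schema;

public class PartitionByUser {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")               // placeholder broker URL
                .build();

        // Producer side: key each message by userId. On a partitioned topic the
        // default router hashes the key, so one user always lands on one partition.
        Producer<String> producer = client.newProducer(Schema.STRING)
                .topic("persistent://public/default/events")         // placeholder partitioned topic
                .create();
        producer.newMessage()
                .key("user-42")                                       // hypothetical userId
                .value("{\"userId\":\"user-42\",\"event\":\"click\"}")
                .send();

        // Consumer side: subscribe to just the partition that holds this user.
        // You still filter by userId within that partition, but over far less data.
        Consumer<String> consumer = client.newConsumer(Schema.STRING)
                .topic("persistent://public/default/events-partition-0") // example partition
                .subscriptionName("web-feed")
                .subscribe();

        producer.close();
        consumer.close();
        client.close();
    }
}
```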
----
2020-10-14 15:22:58 UTC - Ebere Abanonu: Maybe this can act as a guide for you: 
<https://github.com/eaba/SharpPulsarSamples>
----
2020-10-14 15:36:31 UTC - Frederick Paine: @Frederick Paine has joined the 
channel
----
2020-10-14 15:53:02 UTC - Joshua Decosta: Has anyone else noticed the metrics 
don’t map correctly after a few topics/namespaces are added? Also, has anyone 
else integrated SignalFx with Pulsar?
----
2020-10-14 15:55:34 UTC - Gilles Barbier: Thx, I’ll look at it
----
2020-10-14 16:33:46 UTC - Gilles Barbier: @Penghui Li do you think that 
creating a non-persistent topic per userId and having a function dispatching 
messages to those topics could do the trick?
----
2020-10-14 17:38:13 UTC - Frank Kelly: I ended up creating this issue 
<https://github.com/apache/pulsar/issues/8264>
----
2020-10-14 17:39:07 UTC - Frank Kelly: Found a possible issue with the Java 
client `sendAsync` and message ordering - please see here 
<https://github.com/apache/pulsar/issues/8264>. It could very well be a 
misunderstanding of how async send and batching is supposed to work. Thanks!
----
2020-10-14 17:50:32 UTC - Addison Higham: hey @Frank Kelly apologies, I took a 
look at your example yesterday, but haven't had a chance to dig in, will look 
at the issue as well
----
2020-10-14 17:51:16 UTC - Frank Kelly: No worries at all - I'm sure you have a 
bunch to do
----
2020-10-14 18:13:49 UTC - Damien Roualen: Hi, I would like to share with you an 
error with a function-worker running in Kubernetes.
When creating a new function, a connection is created from the function-worker 
to ZooKeeper, but the IPs used are wrong. We don't know why the 
function-worker is not using the right IPs despite the fact that the 
configuration is right and the worker initialisation used the right ZooKeeper IPs.
```The wrong zookeeper IPs: 
10.242.144.42:2181,10.242.144.118:2181,10.242.145.186:2181```
Logs from the worker:
```17:51:31.104 [function-web-21-7] INFO  org.eclipse.jetty.util.TypeUtil - JVM Runtime does not support Modules
17:51:31.847 [function-web-21-7] INFO  org.apache.pulsar.functions.utils.Actions - Sucessfully completed action [ Creating authentication secret for function tenant/namespace/VoidFunction ]
...
17:51:32.029 [function-web-21-7] INFO  org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=10.242.144.42:2181,10.242.144.118:2181,10.242.145.186:2181 sessionTimeout=30000 watcher=org.apache.boo
17:51:32.029 [function-web-21-7] INFO  org.apache.zookeeper.ClientCnxnSocket - jute.maxbuffer value is 10485760 Bytes
...
17:52:02.370 [function-web-21-7] ERROR org.apache.distributedlog.bk.SimpleLedgerAllocator - Error creating ledger for allocating /pulsar/functions/tenant/namespace/VoidFunction/50711f54-bcbc-4e14-9a4e-9b645a40b467
org.apache.distributedlog.ZooKeeperClient$ZooKeeperConnectionException: Problem connecting to servers: 10.242.144.42:2181,10.242.144.118:2181,10.242.145.186:2181```
The error occurs only on our `testing` setup; `staging` is working well.
I checked the ZooKeeper IPs configured for the ZooKeepers themselves and the 
ZooKeeper IPs set on the bookies, but nothing is wrong.
Since we recently updated our Pulsar cluster by replacing all component 
instances and reconfiguring everything correctly, `is there a cache to clear in 
order to use the right defined IPs with functions?`
----
2020-10-14 19:37:44 UTC - Toktok Rambo: hello all, I’m seeing this log on my 
pulsar server
 `19:24:30.776 [pulsar-io-51-7] INFO 
org.apache.pulsar.broker.service.persistent.PersistentDispatcherMultipleConsumers
 - Removed consumer 
Consumer{subscription=PersistentSubscription{topic=<persistent://public/default/myapp>,
 name=myapp-subscriber}, consumerId=1, consumerName=myapp-subscriber, 
address=/172.18.0.1:55272} with pending 0 acks`

should I be giving unique subscriber names always? I already know unique 
publisher names are required  :thinking_face:
 I’m using the Go client, 
<http://github.com/apache/pulsar-client-go|github.com/apache/pulsar-client-go> 
v0.2.0
----
2020-10-14 20:09:05 UTC - Dmitry S.: @Dmitry S. has joined the channel
----
2020-10-14 21:54:18 UTC - Roger Johansson: @Roger Johansson has joined the 
channel
----
2020-10-15 00:57:40 UTC - Penghui Li: If there are not too many userIds, I 
think it can work.
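For what it's worth, a minimal sketch of such a dispatch function with the Java Functions API; the `userId` message property and the per-user topic naming scheme are assumptions:
```
import org.apache.pulsar.client.api.Schema;
import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

// Routes each incoming message to a per-user non-persistent topic, so a web
// page can subscribe (e.g. over WebSocket) to just one user's stream.
// Assumes the producer put the userId into a "userId" message property.
public class UserDispatchFunction implements Function<String, Void> {

    @Override
    public Void process(String input, Context context) throws Exception {
        String userId = context.getCurrentRecord().getProperties().get("userId");
        if (userId == null) {
            return null; // nothing to route
        }
        // Placeholder naming scheme for the per-user topics.
        String target = "non-persistent://public/default/user-" + userId;
        context.newOutputMessage(target, Schema.STRING)
                .value(input)
                .sendAsync();
        return null;
    }
}
```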
----
2020-10-15 04:24:43 UTC - Akshar Dave: we are running into this exact issue, 
what was the workaround?
----
2020-10-15 08:00:47 UTC - Emil: Hi everyone,
Has anyone deployed instances of Java and Golang functions simultaneously with 
the Kubernetes runtime?
We've noticed that the k8s runtime feature has not been implemented in the 
Golang SDK, and it throws an `UnsupportedOperationException`: 
<https://github.com/apache/pulsar/blob/59e0cfb580e4e6d22e1ea8284f5866c1c5a6fd07/pulsar-functions/runtime/src/main/java/org/apache/pulsar/functions/runtime/kubernetes/KubernetesRuntimeFactory.java#L251|KubernetesRuntimeFactory.java>
So currently, with that runtime enabled, Golang functions throw this exception.
Is there any workaround for that, at least to keep using Golang functions 
without accessing k8s features?
Thank you!
----
2020-10-15 09:04:10 UTC - xiaolong.ran: Yes, currently the k8s runtime does not 
yet support Go Functions.
----
