2020-05-14 09:39:03 UTC - Damien Roualen: Good morning,
Is there a working Pulsar connector for the latest Presto version, 333? Has
anyone tried?
When adding the connector plugin to Presto, I get an error.
```
PULSAR_VERSION=2.5.1
PRESTO_VERSION=333
...
mv apache-pulsar-${PULSAR_VERSION}/lib/presto/plugin/pulsar-presto-connector \
   presto-server-${PRESTO_VERSION}/plugin/pulsar
```
When starting the Presto server:
```
~/presto-server-333$ ./bin/launcher run
....
2020-05-14T09:30:38.974Z INFO main io.prestosql.server.PluginManager -- Finished loading plugin /home/damien.roualen/presto-server-333/plugin/postgresql --
2020-05-14T09:30:38.975Z INFO main io.prestosql.server.PluginManager -- Loading plugin /home/damien.roualen/presto-server-333/plugin/pulsar --
2020-05-14T09:30:38.994Z ERROR main io.prestosql.server.PrestoServer No service providers of type io.prestosql.spi.Plugin
java.lang.IllegalStateException: No service providers of type io.prestosql.spi.Plugin
        at com.google.common.base.Preconditions.checkState(Preconditions.java:589)
```
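For reference, Presto discovers plugins via Java's ServiceLoader, so the plugin
directory must contain a jar that declares an `io.prestosql.spi.Plugin`
implementation. A quick way to check (a hypothetical diagnostic; jar names will
differ per build):
```
# sketch: look for the ServiceLoader descriptor that Presto's PluginManager
# needs; if no jar under plugin/pulsar declares io.prestosql.spi.Plugin
# (e.g. the connector was built against a different Presto SPI), loading
# fails with exactly the error above
for jar in presto-server-333/plugin/pulsar/*.jar; do
  unzip -l "$jar" | grep -q 'META-INF/services/io.prestosql.spi.Plugin' &&
    echo "service descriptor found in $jar"
done
```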
----
2020-05-14 11:59:57 UTC - JG: in fact, it never reaches the function's topic
----
2020-05-14 12:00:25 UTC - JG: The producer is sending the message correctly and
there is a subscription, but my function is never called
----
2020-05-14 12:15:10 UTC - Steve Kim: Yes. I would like to learn about the
format of the entry log files and index files.
----
2020-05-14 12:58:25 UTC - rani: Greetings! I am currently setting up an
autoscaling mechanism for my AWS EC2 ASG-managed Pulsar cluster running on
version `2.5.1`!
*Strange Observation*
After starting with a baseline of 3 healthy brokers, I initiated a manual ASG
scale-up event to grow the number of broker nodes to 4.
After the 4th broker service starts, I encounter a `502` when querying broker
health using `{{PULSAR_PROXY_ENDPOINT}}/admin/v2/brokers/health`.
This goes away completely if I do a rolling restart of the `pulsar-proxy` nodes.
My only guess at this point is that `pulsar-proxy` doesn't refresh its cache of
"currently active brokers" from ZooKeeper often enough. If this assumption is
true, is there any parameter we can tune to "force-resync" or refresh the
cache? @Sijie Guo
*PS:* there are no useful logs in either pulsar-proxy or pulsar-broker
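For what it's worth, here is how I compare the proxy's view with a broker's own
view (a sketch; `my-cluster` and the broker host are placeholders):
```
# what the proxy currently reports as active brokers for the cluster
curl -s "${PULSAR_PROXY_ENDPOINT}/admin/v2/brokers/my-cluster"

# the same query sent to a broker directly, bypassing the proxy's cache
bin/pulsar-admin --admin-url http://broker-host:8080 brokers list my-cluster
```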
----
2020-05-14 15:06:14 UTC - Vincent LE MAITRE: Hi all,
I am running a Pulsar cluster on Kubernetes using the Pulsar Helm chart.
Pulsar version is 2.5.1.
I would like to enable the WebSocket service, which is not enabled by default
on the brokers.
As explained in the documentation, I have added
"webSocketServiceEnabled=true" to conf/broker.conf through the broker
configMap to enable the WebSocket service in embedded mode.
When connecting a producer using the WebSocket service, I get the following
message in the broker logs:
`WARN org.apache.pulsar.websocket.ProducerHandler - [127.0.0.1:34824] Failed
in creating producer on topic <persistent://test/test/test>: Param serviceUrl
must not be blank.`
I tried to add the "serviceUrl" and "brokerServiceUrl" properties to
broker.conf, without success.
I also tried to add "serviceUrl" to the conf/websocket.conf file, with no more
success...
Could you help me get the WebSocket service working?
Thanks
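PS: for completeness, the producer connects using the endpoint format from the
docs (a smoke-test sketch; the host is a placeholder and `wscat` is just one
possible WebSocket client):
```
# the embedded WebSocket service listens on the broker's HTTP port
# (8080 by default), under the /ws/v2 path
wscat -c "ws://broker-host:8080/ws/v2/producer/persistent/test/test/test"
```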
----
2020-05-14 15:08:28 UTC - Guilherme Perinazzo: I believe you need to set up the
websocket config file as well
----
2020-05-14 15:09:15 UTC - Guilherme Perinazzo: But I went with the separate
component, so maybe I'm wrong
----
2020-05-14 15:10:50 UTC - Vincent LE MAITRE: In the
<https://pulsar.apache.org/docs/en/client-libraries-websocket/|doc>, it says
that websocket.conf is not used in embedded mode...
Moreover, the Helm template does not fill in the parameters in this config file.
----
2020-05-14 15:12:14 UTC - Vincent LE MAITRE: But if it is required in "embedded
mode", I will try to improve the Helm package to fill in the websocket conf as
well.
----
2020-05-14 15:13:06 UTC - Vincent LE MAITRE: Has anyone tried this WebSocket
embedded mode inside the broker?
----
2020-05-14 15:38:55 UTC - Sijie Guo: Did you use the fully qualified name as
the output topic name when submitting a function?
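e.g. something like this (jar and class names are placeholders):
```
# sketch: the output topic given in its fully qualified form,
# i.e. <domain>://<tenant>/<namespace>/<topic>
bin/pulsar-admin functions create \
  --jar my-function.jar \
  --classname com.example.MyFunction \
  --inputs persistent://tenant/namespace/input \
  --output persistent://tenant/namespace/output
```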
----
2020-05-14 15:41:46 UTC - Rounak Jaggi: Now as per the bookkeeper conf, we set
these:
```
tlsProviderFactoryClass=org.apache.bookkeeper.tls.TLSContextFactory
tlsCertificatePath=<path>/bookie.cert.pem
tlsKeyStoreType=PEM
tlsKeyStore=<path>/bookie.key-pk8.pem
tlsTrustStoreType=PEM
tlsTrustStore=<path>/ca.cert.pem
```
And we are seeing this in the bookkeeper logs:
```
current state START_TLS :
javax.net.ssl.SSLHandshakeException: General OpenSslEngine problem
Caused by: sun.security.validator.ValidatorException: Extended key usage does not permit use for TLS server authentication
ERROR org.apache.bookkeeper.proto.PerChannelBookieClient - TLS handshake failed
```
----
2020-05-14 16:09:59 UTC - Sijie Guo: I think the certificates that you obtain
must allow for both clientAuth and serverAuth if the extended key usage
extension is present, or not include that extension at all.
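A quick way to check (assuming PEM certificates, as in your config):
```
# print the Extended Key Usage extension of the bookie certificate; for the
# handshake above it should list both "TLS Web Server Authentication" and
# "TLS Web Client Authentication", or be absent entirely
openssl x509 -in bookie.cert.pem -noout -text | grep -A1 'Extended Key Usage'
```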
----
2020-05-14 16:15:26 UTC - David Kjerrumgaard: Is the data you are sending
compressed?
----
2020-05-14 16:52:40 UTC - Alexander Ursu: Hello, I am wondering if Pulsar's
ZooKeeper is configured by default to purge log and snapshot files under the
`/data` directory
----
2020-05-14 16:53:32 UTC - Matteo Merli: Yes, it should keep the latest 3 of
each:
<https://github.com/apache/pulsar/blob/c65c8099e18236586edc0eafdff595e7ab376997/conf/zookeeper.conf#L47>
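i.e. ZooKeeper's standard autopurge settings, which in that file look like:
```
# conf/zookeeper.conf: retain the 3 most recent snapshots (and their
# transaction logs), purging on a 1-hour interval
autopurge.snapRetainCount=3
autopurge.purgeInterval=1
```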
+1 : Alexander Ursu
----
2020-05-14 17:29:58 UTC - Alexander Ursu: Hello again, I was wondering why the
image size on Docker Hub doesn't match the image size shown by `docker image
ls` on my machine. Docker Hub says a little under 2 GB for the `pulsar-all`
image, while on a couple of my machines it's reported as almost 3 GB
----
2020-05-14 18:33:58 UTC - JG: yes -->
`non-persistent://tenant/namespace/topic`
----
2020-05-14 18:51:48 UTC - Mark Huang: @Mark Huang has joined the channel
----
2020-05-14 19:19:01 UTC - Alexander Ursu: Hi, I restarted one of the 3
zookeeper servers I have running, trying to update the image. I am getting
these logs now and my brokers seem to fail to connect. I am not sure why. For
context, I am running a single cluster in Docker
----
2020-05-14 19:19:01 UTC - Alexander Ursu: ```
19:16:18.041 [NIOWorkerThread-2] INFO org.apache.zookeeper.server.quorum.Learner - Revalidating client: 0x1000f67f5cf0004
19:16:18.226 [NIOWorkerThread-4] WARN org.apache.zookeeper.server.NIOServerCnxn - Unable to read additional data from client sessionid 0x2000f6334c30001, likely client has closed socket
19:16:18.504 [NIOWorkerThread-1] INFO org.apache.zookeeper.server.quorum.Learner - Revalidating client: 0x3000f61e6c7001e
19:16:25.126 [NIOWorkerThread-3] INFO org.apache.zookeeper.server.ZooKeeperServer - Refusing session request for client /10.0.0.194:39760 as it has seen zxid 0x200000000 our last zxid is 0x100024899 client must try another server
19:16:34.722 [NIOWorkerThread-1] INFO org.apache.zookeeper.server.ZooKeeperServer - Refusing session request for client /10.0.0.175:36320 as it has seen zxid 0x200000000 our last zxid is 0x100024899 client must try another server
19:16:35.548 [NIOWorkerThread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Refusing session request for client /10.0.0.172:38850 as it has seen zxid 0x200000000 our last zxid is 0x100024899 client must try another server
19:16:38.044 [NIOWorkerThread-4] WARN org.apache.zookeeper.server.NIOServerCnxn - Unable to read additional data from client sessionid 0x1000f67f5cf0004, likely client has closed socket
19:16:38.518 [NIOWorkerThread-1] WARN org.apache.zookeeper.server.NIOServerCnxn - Unable to read additional data from client sessionid 0x3000f61e6c7001e, likely client has closed socket
19:16:47.540 [NIOWorkerThread-3] INFO org.apache.zookeeper.server.ZooKeeperServer - Refusing session request for client /10.0.0.194:39788 as it has seen zxid 0x200000000 our last zxid is 0x100024899 client must try another server
19:16:56.997 [NIOWorkerThread-1] INFO org.apache.zookeeper.server.ZooKeeperServer - Refusing session request for client /10.0.0.175:36340 as it has seen zxid 0x200000000 our last zxid is 0x100024899 client must try another server
19:16:58.600 [NIOWorkerThread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Refusing session request for client /10.0.0.172:38870 as it has seen zxid 0x200000000 our last zxid is 0x100024899 client must try another server
19:16:59.391 [NIOWorkerThread-3] INFO org.apache.zookeeper.server.quorum.Learner - Revalidating client: 0x2000f6334c30001
19:17:09.518 [NIOWorkerThread-4] INFO org.apache.zookeeper.server.ZooKeeperServer - Refusing session request for client /10.0.0.194:39808 as it has seen zxid 0x200000000 our last zxid is 0x100024899 client must try another server
19:17:19.394 [NIOWorkerThread-2] WARN org.apache.zookeeper.server.NIOServerCnxn - Unable to read additional data from client sessionid 0x2000f6334c30001, likely client has closed socket
19:17:19.724 [NIOWorkerThread-3] INFO org.apache.zookeeper.server.ZooKeeperServer - Refusing session request for client /10.0.0.175:36364 as it has seen zxid 0x200000000 our last zxid is 0x100024899 client must try another server
19:17:20.167 [NIOWorkerThread-4] INFO org.apache.zookeeper.server.quorum.Learner - Revalidating client: 0x1000f67f5cf0004
19:17:20.699 [NIOWorkerThread-2] INFO org.apache.zookeeper.server.quorum.Learner - Revalidating client: 0x3000f61e6c7001e
19:17:21.652 [NIOWorkerThread-4] INFO org.apache.zookeeper.server.ZooKeeperServer - Refusing session request for client /10.0.0.172:38896 as it has seen zxid 0x200000000 our last zxid is 0x100024899 client must try another server
19:17:32.203 [NIOWorkerThread-4] INFO org.apache.zookeeper.server.ZooKeeperServer - Refusing session request for client /10.0.0.194:39826 as it has seen zxid 0x200000000 our last zxid is 0x100024899 client must try another server
19:17:40.177 [NIOWorkerThread-2] WARN org.apache.zookeeper.server.NIOServerCnxn - Unable to read additional data from client sessionid 0x1000f67f5cf0004, likely client has closed socket
19:17:40.703 [NIOWorkerThread-4] WARN org.apache.zookeeper.server.NIOServerCnxn - Unable to read additional data from client sessionid 0x3000f61e6c7001e, likely client has closed socket
19:17:42.967 [NIOWorkerThread-1] INFO org.apache.zookeeper.server.ZooKeeperServer - Refusing session request for client /10.0.0.175:36390 as it has seen zxid 0x200000000 our last zxid is 0x100024899 client must try another server
```
----
2020-05-14 19:44:51 UTC - Alex Yaroslavsky: Is "pulsar-admin functions list"
supposed to work? I always get a 404. Running 2.5.1 (same on 2.5.0); this
happens whether going through the proxy or directly to the function worker
(workers run separately from brokers).
----
2020-05-14 19:47:01 UTC - Alex Yaroslavsky:
<https://github.com/apache/pulsar/issues/5947>
----
2020-05-14 19:47:22 UTC - Alex Yaroslavsky: It seems that `bin/pulsar-admin
functions list` is supposed to show the help text, as required parameters are
missing
----
2020-05-14 20:08:45 UTC - Ming: This is because images on Docker Hub are
compressed. The compression is done locally by `docker push`: the data is
compressed before it is actually pushed to Docker Hub.
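For comparison (a sketch; `docker manifest inspect` may need the experimental
CLI flag on older Docker versions):
```
# compressed layer sizes as stored in the registry, vs. the unpacked size
# that `docker image ls` reports locally
docker manifest inspect apachepulsar/pulsar-all:2.5.1 | grep '"size"'
```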
----
2020-05-14 23:59:41 UTC - Alexander Ursu: Hello, I was wondering if there is
any early discussion around ZooKeeper and a possible replacement. I've noticed
similar discussion surrounding Kafka, and am curious if anyone has thoughts on
the matter.
----