2019-06-06 14:55:30 UTC - Chris DiGiovanni: Curious if anyone is using HAProxy
in front of their Pulsar Proxies using TLS and using a health check? I'm
getting the following warnings every second when HAProxy does the health check:
```
2019-06-06 09:54:40.207 [pulsar-discovery-io-2-1] INFO  org.apache.pulsar.proxy.server.ProxyConnection - [/10.8.53.41:32972] New connection opened
2019-06-06 09:54:40.208 [pulsar-discovery-io-2-1] WARN  io.netty.channel.DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.channel.unix.Errors$NativeIoException: syscall:read(..) failed: Connection reset by peer
    at io.netty.channel.unix.FileDescriptor.readAddress(..)(Unknown Source) ~[io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]
2019-06-06 09:54:40.208 [pulsar-discovery-io-2-1] WARN  org.apache.pulsar.proxy.server.ProxyConnection - [/10.8.53.41:32972] Got exception NativeIoException : syscall:read(..) failed: Connection reset by peer
2019-06-06 09:54:40.209 [pulsar-discovery-io-2-1] INFO  org.apache.pulsar.proxy.server.ProxyConnection - [/10.8.53.41:32972] Connection closed
```
----
2019-06-06 14:56:26 UTC - Chris DiGiovanni: I also tried setting `option
ssl-hello-chk` on the backend and the Pulsar Proxy really didn't like that...
----
2019-06-06 14:58:48 UTC - Chris DiGiovanni: Really unsure what I would set for
tcp-check to make the Pulsar Proxy happy about these health checks.
----
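(Editor's note: the thread does not resolve this, but one common workaround is to point HAProxy's health check at the proxy's HTTP listener instead of the binary-protocol port, so the probe speaks a protocol the listener expects. A sketch only — the backend name is hypothetical, and the default binary port 6651 and web service port 8080 with its `/metrics` endpoint are assumptions about this deployment:)
```
backend pulsar_proxy
    mode tcp
    # Probe the proxy's HTTP port rather than the binary-protocol port,
    # which logs a connection-reset warning on every bare TCP probe.
    option httpchk GET /metrics
    server proxy1 10.8.53.41:6651 check port 8080
```
----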
2019-06-06 16:36:20 UTC - Devin G. Bost: @Jerry Peng Thanks! That worked!
----
2019-06-06 16:36:45 UTC - Devin G. Bost: Has anyone done performance testing on
Java vs Python using the Pulsar Functions API?
----
2019-06-06 16:37:07 UTC - Jerry Peng: Java is much faster
----
2019-06-06 17:25:15 UTC - Devin G. Bost: Thanks for the info.
----
2019-06-06 17:25:59 UTC - Devin G. Bost: Are there plans to support Pulsar
Functions with other languages, like Go?
----
2019-06-06 17:26:34 UTC - Jerry Peng: Support for Go functions has already
been added and will be in the 2.4 release
----
2019-06-06 18:25:07 UTC - Devin G. Bost: @Jerry Peng It gets all the way to:
```
fileServerThread = new Thread(() -> {
    try {
        fileServer = HttpServer.create(new InetSocketAddress(fileServerPort), 0);
        fileServer.createContext("./src/test/resources/pulsar-io-data-generator.nar",
            he -> { . . .
```
and then blows up with this:
```
java.lang.IllegalArgumentException: Illegal value for path or protocol
    at sun.net.httpserver.HttpContextImpl.<init>(HttpContextImpl.java:60)
    at sun.net.httpserver.ServerImpl.createContext(ServerImpl.java:216)
    at sun.net.httpserver.HttpServerImpl.createContext(HttpServerImpl.java:74)
    at sun.net.httpserver.HttpServerImpl.createContext(HttpServerImpl.java:39)
    at com.overstock.dataeng.pulsar.deployment.test.TopologyTests.lambda$test_zookeeper_locally$3(TopologyTests.java:483)
    at java.lang.Thread.run(Thread.java:748)
```
Do you think it's still not able to access the `.nar` file? I put it in my
resources directory.
----
2019-06-06 19:28:08 UTC - Devin G. Bost: Turns out that it was expecting an
absolute path to the file.
----
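(Editor's note: `HttpServer.createContext` takes a URL path that must begin with `/`, not a filesystem path — which is why an absolute path happened to satisfy it. A minimal sketch of serving a single local file under a proper context path; the class name is hypothetical:)

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.file.Files;
import java.nio.file.Path;

public class NarFileServer {
    // createContext's first argument is a URL path and must begin with "/";
    // passing "./src/test/resources/..." is what raises
    // IllegalArgumentException: Illegal value for path or protocol.
    public static HttpServer serve(Path file, int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/" + file.getFileName(), exchange -> {
            byte[] body = Files.readAllBytes(file);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }
}
```

Passing port 0 binds an ephemeral port, which can be read back with `server.getAddress().getPort()`.
----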
2019-06-06 21:14:14 UTC - Jerry Peng: That code isn’t really necessary
----
2019-06-06 21:19:06 UTC - Jerry Peng: It’s for testing creating functions via
URL
----
2019-06-07 05:38:49 UTC - Yuwei Jiang: Hi guys, I tried to deploy Pulsar
(v2.3.2) on a GKE K8s cluster, using an `n1-standard-4` instance (4 vCPUs, 15GB
memory) and the following JVM `PULSAR_MEM` config parameters, and it was running fine.
```
zookeeper: -Xms1g -Xmx1g -XX:MaxDirectMemorySize=1g
-Dcom.sun.management.jmxremote -Djute.maxbuffer=10485760
-XX:+ParallelRefProcEnabled -XX:+UnlockExperimentalVMOptions
-XX:+AggressiveOpts -XX:+DoEscapeAnalysis -XX:+DisableExplicitGC
-XX:+PerfDisableSharedMem -Dzookeeper.forceSync=no
bookkeeper: -Xms1g -Xmx1g -XX:MaxDirectMemorySize=1g
-Dio.netty.leakDetectionLevel=disabled -Dio.netty.recycler.linkCapacity=1024
-XX:+UseG1GC -XX:MaxGCPauseMillis=10 -XX:+ParallelRefProcEnabled
-XX:+UnlockExperimentalVMOptions -XX:+AggressiveOpts -XX:+DoEscapeAnalysis
-XX:ParallelGCThreads=32 -XX:ConcGCThreads=32 -XX:G1NewSizePercent=50
-XX:+DisableExplicitGC -XX:-ResizePLAB -XX:+ExitOnOutOfMemoryError
-XX:+PerfDisableSharedMem -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
-XX:+PrintGCApplicationStoppedTime -XX:+PrintHeapAtGC -verbosegc
-XX:G1LogLevel=finest
broker: -Xms2g -Xmx2g -XX:MaxDirectMemorySize=2g
-Dio.netty.leakDetectionLevel=disabled -Dio.netty.recycler.linkCapacity=1024
-XX:+ParallelRefProcEnabled -XX:+UnlockExperimentalVMOptions
-XX:+AggressiveOpts -XX:+DoEscapeAnalysis -XX:ParallelGCThreads=32
-XX:ConcGCThreads=32 -XX:G1NewSizePercent=50 -XX:+DisableExplicitGC
-XX:-ResizePLAB -XX:+ExitOnOutOfMemoryError -XX:+PerfDisableSharedMem
proxy: -Xms1g -Xmx1g -XX:MaxDirectMemorySize=1g
```
----
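(Editor's note: for context on how flags like these are applied — in the standard distribution they are injected through the `PULSAR_MEM`/`PULSAR_GC` environment variables that `conf/pulsar_env.sh` reads, and in the K8s deployment typically via a ConfigMap. A sketch mirroring the broker line above; the values are examples from this thread, not recommendations:)

```shell
# conf/pulsar_env.sh style: only set the defaults if the environment
# (e.g. a K8s ConfigMap) has not already provided them.
PULSAR_MEM=${PULSAR_MEM:-"-Xms2g -Xmx2g -XX:MaxDirectMemorySize=2g"}
PULSAR_GC=${PULSAR_GC:-"-XX:+UseG1GC -XX:+ParallelRefProcEnabled"}
```
----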
2019-06-07 05:39:21 UTC - Yuwei Jiang: But when I tried to use the same config
and deploy on an `n1-standard-16` instance (16 vCPUs, 60GB memory), I got the
following error message in the bookkeeper pod and it was crashing.
I’m puzzled as to why, with more memory on the node, the bookkeeper went into
the `CrashLoopBackOff` state. Any ideas? Thanks in advance!
----
2019-06-07 05:40:54 UTC - Ali Ahmed: why are you giving such low values?
----
2019-06-07 05:40:56 UTC - Ali Ahmed: ```broker: -Xms2g -Xmx2g
-XX:MaxDirectMemorySize=2g ```
----
2019-06-07 05:46:26 UTC - Yuwei Jiang: @Ali Ahmed: is it too low? I have other
pods running as well; on the 15GB node, this seemed reasonable to me. I’m doing
some load testing but did not encounter any issues with these memory settings.
----
2019-06-07 05:47:39 UTC - Ali Ahmed: it’s low if you want to load test the
system
----
2019-06-07 05:47:50 UTC - Ali Ahmed: what kind of load are you simulating?
----
2019-06-07 05:49:50 UTC - Yuwei Jiang: I’m creating 200 tenants, each with
roughly 10 namespaces. I’m puzzled as to why this config worked on a 15GB
instance but ran into issues on a 60GB instance; maybe I misconfigured some of
the JVM settings?
----
2019-06-07 07:42:16 UTC - Ali Ahmed: what’s the throughput?
----
2019-06-07 08:59:19 UTC - dba: Hi
When I set initialSequenceId on a producer, both CommandSend.sequence_id
and MessageMetadata.sequence_id are set to initialSequenceId+1. I was a little
surprised that CommandSend.sequence_id was also set, and that if I reproduce a
message with an existing sequenceId, it is just added to the topic. So now
I have two different messages with the same sequenceId. Could someone
explain how sequenceId is meant to work? Is it just pass-through info to the
consumer(s), or does it have some server-side logic?
----