2020-04-06 10:31:36 UTC - Rattanjot Singh: How can we deploy pulsar-manager on 
Kubernetes rather than pulsar-dashboard? Any wikis?
----
2020-04-06 12:15:09 UTC - Arthur: Does enabling TLS in the Pulsar standalone 
configuration have the same effect as running Pulsar on Kubernetes with TLS on 
the ingress? I mean, when a client connects to a "secured" Pulsar, does it work 
the same way with TLS enabled on the ingress as when TLS is configured directly 
in broker.conf?
----
2020-04-06 13:30:22 UTC - Esakkimuthu: How do we set up a Pulsar cluster using 
Pulsar 2.5.0?
----
2020-04-06 13:30:31 UTC - Esakkimuthu: Anyone have any ideas on this?
----
2020-04-06 14:57:37 UTC - Addison Higham: @Franck Schmidlin I don't think 
Fargate will work. You can't attach extra storage to Fargate tasks (either in 
EKS or ECS)
<https://docs.aws.amazon.com/AmazonECS/latest/developerguide/fargate-task-storage.html>
----
2020-04-06 15:02:35 UTC - Franck Schmidlin: I don't need extra storage for the 
brokers, do I? They are stateless, no?
Current thinking is zk and bk on EC2, that's my static infrastructure, and then 
brokers on fargate, scaling up and down to meet demand.
On paper it works... :thinking_face:
----
2020-04-06 16:22:53 UTC - Kanthi: @Kanthi has joined the channel
----
2020-04-06 16:34:28 UTC - Sijie Guo: > Can the bookies be deployed 
separately from the brokers
Yes. In most deployments, they are deployed separately.

<https://github.com/streamnative/tgip/blob/master/episodes/001/network-topology.png>
 This will give you some ideas about the network topology between the 
components within a Pulsar cluster.

<https://github.com/streamnative/tgip/blob/master/episodes/002/network-topology.png>
 This will give you an overview of how a typical Pulsar production cluster 
looks. All the components can be containerized and run in a containerized 
environment. Some of them can be co-run together. E.g. `auto-recovery` can run 
as part of the bookies, and `function-worker` can run as part of the brokers.

I recorded a session walking through the installation steps: 
<https://www.youtube.com/watch?v=1RQSot5tTuU>

Hope they are helpful to you.
+1 : Franck Schmidlin
----
2020-04-06 16:35:39 UTC - Sijie Guo: It is a national holiday for @Penghui Li 
today. He can probably take a look tomorrow when he is back online.
----
2020-04-06 16:38:12 UTC - Sijie Guo: Replied in the other thread where you 
asked the question.
----
2020-04-06 16:47:58 UTC - Sijie Guo: @Rattanjot Singh At a minimum, you can 
just launch pulsar-manager in a Deployment or StatefulSet without changing or 
setting any configuration. pulsar-manager can run with a local postgres db. 
When the container crashes, it will be restarted with a brand-new environment.

For configuring TLS, JWT, and customizing the database, you can refer to 
<https://github.com/streamnative/pulsar-manager/tree/master/src>

If you are looking for a helm chart as a production reference, you can check 
out <https://github.com/streamnative/charts>

It configures pulsar-manager to run with TLS and JWT and automates all the 
startup sequences. You can check the files prefixed with “pulsar-manager-” : 
<https://github.com/streamnative/charts/tree/master/charts/pulsar/templates>
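The minimal "no configuration" option described above could be sketched as a 
Kubernetes Deployment like the following. The image tag, labels, and UI port 
here are illustrative assumptions; with the container-local postgres, any 
state is lost whenever the pod restarts:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pulsar-manager
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pulsar-manager
  template:
    metadata:
      labels:
        app: pulsar-manager
    spec:
      containers:
        - name: pulsar-manager
          # Image tag and UI port are assumptions for illustration.
          image: apachepulsar/pulsar-manager:v0.1.0
          ports:
            - name: ui
              containerPort: 9527
```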
----
2020-04-06 16:51:41 UTC - Sijie Guo: > Does enabling TLS in the Pulsar 
standalone configuration have the same effect as Pulsar on Kubernetes with TLS 
on the ingress?
If you are running Pulsar proxies and expose the proxies via Ingress, then 
they are the same.
+1 : Arthur
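For comparison, terminating TLS on the broker itself (rather than on the 
ingress) is done with settings like these in broker.conf; the file paths are 
placeholders, and the port numbers are the conventional defaults:

```
# TLS listener ports for the binary protocol and the web service
brokerServicePortTls=6651
webServicePortTls=8443
tlsEnabled=true
# Broker certificate, private key (PKCS#8), and trusted CA — paths are placeholders
tlsCertificateFilePath=/path/to/broker.cert.pem
tlsKeyFilePath=/path/to/broker.key-pk8.pem
tlsTrustCertsFilePath=/path/to/ca.cert.pem
```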
----
2020-04-06 16:53:01 UTC - Sijie Guo: There is a section in Pulsar documentation 
“Deployment”. You can find the deployment method that works for you the best. 
Example: <http://pulsar.apache.org/docs/en/deploy-aws/>
----
2020-04-06 17:03:58 UTC - Franck Schmidlin: Thank you, I love a good diagram
----
2020-04-06 18:11:43 UTC - Tim Corbett: No worries, I know he has a fix PR in, 
which we are not running for those tests, so they may be of limited value.  
Just wanted to see if I could help illustrate the issue we were seeing better, 
and also now I have a baseline for when we do get the fix.
----
2020-04-06 18:44:57 UTC - apratapani: @apratapani has joined the channel
----
2020-04-06 18:46:49 UTC - apratapani: Can anyone please list the pros and cons 
of Apache Pulsar versus Apache Kafka? What are the major differences, stated in 
as unbiased a way as possible?
----
2020-04-06 19:14:16 UTC - Addison Higham: 
----
2020-04-06 19:15:30 UTC - Addison Higham: oh I missed that stuff being on EC2
----
2020-04-06 19:15:34 UTC - Addison Higham: yeah that should work
+1 : Franck Schmidlin
----
2020-04-06 19:16:09 UTC - Addison Higham: ^^ anyone ever seen an issue like 
that before? On a Pulsar function, I am getting errors about the max number of 
consumers per subscription (which I have set to 100). Looking at the stats, I 
see all 100 instances have the same consumer name, all appear disconnected, 
and all connected within a short period of time
----
2020-04-06 19:34:24 UTC - Alan Broddle: @Alan Broddle has joined the channel
----
2020-04-06 21:18:16 UTC - Greg: Hi, we are facing a reconnection issue and I 
really can't understand what is happening. If someone can help me understand 
what we are doing wrong :wink:
On the server I see this:
``` Close connection because received internal-server error 
java.lang.IllegalStateException: Namespace bundle 
infinity-qa/default/0x40000000_0x80000000 is being unloaded```
And then when the client tries to reconnect, we get this error:
```2020/04/06-21:39:33.518  org.apache.pulsar.client.impl.ConnectionHandler 
INFO [<non-persistent://infinity-qa/default/cluster>] [local-2-166] 
Reconnecting after connection was closed
2020/04/06-21:39:33.520  org.apache.pulsar.client.impl.ProducerImpl INFO 
[<non-persistent://infinity-qa/default/cluster>] [local-2-166] Creating 
producer on cnx [id: 0xd16272d4, L:/10.200.13.182:35438 - 
R:10.200.16.180/10.200.16.180:6650]
2020/04/06-21:39:33.522  org.apache.pulsar.client.impl.ClientCnx WARN [id: 
0xd16272d4, L:/10.200.13.182:35438 - R:10.200.16.180/10.200.16.180:6650] 
Received error from server: Producer is already present on the connection```
And on the broker :
```19:21:44.054 [pulsar-io-24-1] INFO  
org.apache.pulsar.broker.service.ServerCnx - 
[/10.200.16.205:49808][<non-persistent://infinity-qa/default/cluster>] Creating 
producer. producerId=0
broker-7664c47c4c-v84qm broker 19:21:44.056 
[BookKeeperClientWorker-OrderedExecutor-1-0] INFO  
org.apache.pulsar.broker.service.ServerCnx - [/10.200.16.205:49808] Created new 
producer: 
Producer{topic=NonPersistentTopic{topic=<non-persistent://infinity-qa/default/cluster>},
 client=/10.200.16.205:49808, producerName=local-2-160, producerId=0}
broker-7664c47c4c-v84qm broker 19:22:12.873 [pulsar-io-24-1] WARN  
org.apache.pulsar.broker.service.ServerCnx - [/10.200.16.1:56482][0] Producer 
with id <non-persistent://infinity-qa/default/cluster> is already present on 
the connection```
We are using client/server 2.5.0
----
2020-04-06 22:04:56 UTC - Dzmitry Kazimirchyk: Hi everyone, We are running 
pulsar 2.5.0 cluster in GKE and occasionally seeing this error when multiple 
broker and bookkeeper pods are restarted at the same time:
```15:47:13.954 [BookKeeperClientScheduler-OrderedScheduler-0-0] ERROR 
org.apache.bookkeeper.client.TopologyAwareEnsemblePlacementPolicy - Unexpected 
exception while handling joining bookie 
robot-pulsar-bookkeeper-1.robot-pulsar-bookkeeper.robot.svc.cluster.local:3181
java.lang.NullPointerException: null
        at 
org.apache.bookkeeper.net.NetUtils.resolveNetworkLocation(NetUtils.java:77)
 ~[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
        at 
org.apache.bookkeeper.client.TopologyAwareEnsemblePlacementPolicy.resolveNetworkLocation(TopologyAwareEnsemblePlacementPolicy.java:779)
 ~[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
        at 
org.apache.bookkeeper.client.TopologyAwareEnsemblePlacementPolicy.createBookieNode(TopologyAwareEnsemblePlacementPolicy.java:775)
 ~[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
        at 
org.apache.bookkeeper.client.TopologyAwareEnsemblePlacementPolicy.handleBookiesThatJoined(TopologyAwareEnsemblePlacementPolicy.java:707)
 ~[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
        at 
org.apache.bookkeeper.client.RackawareEnsemblePlacementPolicyImpl.handleBookiesThatJoined(RackawareEnsemblePlacementPolicyImpl.java:79)
 ~[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
        at 
org.apache.bookkeeper.client.RackawareEnsemblePlacementPolicy.handleBookiesThatJoined(RackawareEnsemblePlacementPolicy.java:246)
 ~[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
        at 
org.apache.bookkeeper.client.TopologyAwareEnsemblePlacementPolicy.onClusterChanged(TopologyAwareEnsemblePlacementPolicy.java:654)
 ~[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
        at 
org.apache.bookkeeper.client.RackawareEnsemblePlacementPolicyImpl.onClusterChanged(RackawareEnsemblePlacementPolicyImpl.java:79)
 ~[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
        at 
org.apache.bookkeeper.client.RackawareEnsemblePlacementPolicy.onClusterChanged(RackawareEnsemblePlacementPolicy.java:89)
 ~[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
        at 
org.apache.bookkeeper.client.BookieWatcherImpl.processReadOnlyBookiesChanged(BookieWatcherImpl.java:190)
 ~[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
        at 
org.apache.bookkeeper.client.BookieWatcherImpl.lambda$initialBlockingBookieRead$2(BookieWatcherImpl.java:209)
 ~[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
        at 
org.apache.bookkeeper.discover.ZKRegistrationClient$WatchTask.accept(ZKRegistrationClient.java:139)
 [org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
        at 
org.apache.bookkeeper.discover.ZKRegistrationClient$WatchTask.accept(ZKRegistrationClient.java:62)
 [org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
        at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
 [?:1.8.0_232]
        at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
 [?:1.8.0_232]
        at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
 [?:1.8.0_232]
        at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[?:1.8.0_232]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
[?:1.8.0_232]
        at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
 [?:1.8.0_232]
        at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
 [?:1.8.0_232]
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_232]
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_232]
        at 
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
 [io.netty-netty-common-4.1.43.Final.jar:4.1.43.Final]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_232]```
After this error occurs, the affected broker instance continues to run, but all 
consumers/producers that are trying to connect to that broker fail with a 
timeout error. The problem goes away only after the affected broker instance is 
manually restarted again. Wondering if this is a known issue and if anyone 
could suggest a workaround. (I was trying to come up with a liveness probe that 
could detect this state, but unfortunately REST API calls continue to function 
normally; only subscription-info requests for concrete topics fail.)
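The liveness-probe idea mentioned above could be sketched as follows, assuming 
the failure mode is exactly that per-topic stats requests fail while the rest 
of the admin API stays healthy. The topic path, port, and timings are 
placeholders:

```yaml
# Hypothetical livenessProbe for the broker container: hit a per-topic
# stats endpoint rather than the general admin API, since only the
# per-topic requests fail in this state.
livenessProbe:
  httpGet:
    path: /admin/v2/persistent/public/default/healthcheck/stats
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 3
```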
----
2020-04-07 02:03:14 UTC - Sijie Guo: “being unloaded” means a load-balancing 
operation is in progress.
----
2020-04-07 02:03:58 UTC - Sijie Guo: “Producer is already present on the 
connection” indicates that the client tried to connect with the same producer 
id.

Did you configure producer name in the producer?
----
2020-04-07 02:05:27 UTC - Sijie Guo: The exception is thrown when the bookie’s 
network address is unresolvable. I have a fix outstanding - 
<https://github.com/apache/bookkeeper/pull/2301>
----
2020-04-07 02:06:05 UTC - Sijie Guo: Although I don’t think this issue affects 
the broker, because it happens in bookie discovery, which runs in the 
background in the BookKeeper client.
----
2020-04-07 03:35:56 UTC - Dzmitry Kazimirchyk: thank you Sijie, yes, this 
definitely looks like a BookKeeper client issue, but from what I can tell it 
results in the broker permanently remaining in a bad state until it is 
manually restarted, even after BookKeeper is up and its network address is 
resolvable
----
2020-04-07 04:35:35 UTC - Binod Kumar Gaudel: @Binod Kumar Gaudel has joined 
the channel
----
2020-04-07 05:53:13 UTC - Greg: No, we tried without setting a producer name
----
2020-04-07 09:03:58 UTC - Sijie Guo: Interesting… @Penghui Li can you check 
this?
----
2020-04-07 09:08:06 UTC - Greg: looks like with producerName set, reconnection 
works well
----
2020-04-07 09:09:38 UTC - Greg: we have several producers in different JVMs for 
the same topic
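With several producers in different JVMs on the same topic, an explicit 
producer name has to be unique per process to avoid name collisions on 
reconnect. A minimal sketch of one way to build such a name (the naming scheme 
is an assumption; with the Python client the result would be passed as the 
producer_name argument to create_producer):

```python
import os
import socket
import uuid


def unique_producer_name(prefix="local"):
    """Build a producer name that is unique per process.

    Combines hostname, pid, and a random suffix so producers created
    in different processes on the same topic never share a name.
    """
    return f"{prefix}-{socket.gethostname()}-{os.getpid()}-{uuid.uuid4().hex[:8]}"
```

The name would then be used when creating the producer, e.g. 
`client.create_producer(topic, producer_name=unique_producer_name())`.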
----
2020-04-07 09:10:32 UTC - Penghui Li: Ok
----
