2020-03-16 09:52:33 UTC - Pavel Tishkevich: @Joe Francis For lookup we need to 
fetch bundles from Zookeeper, not topic names. I think we are talking about 
different z-nodes.
The z-node used for lookup has the following pattern:
`/namespace/<tenant>/<cluster>/<namespace>` - its children are 
bundles (hash ranges), each containing data about which broker it's assigned to.
They don't change that often, so reloading the zk cache for them doesn't create 
additional Zookeeper load.

But we have a problem with the z-node that has the following pattern:
`/managed-ledgers/<tenant>/<cluster>/<namespace>/persistent` 
- its children are topic names.

These are loaded for the first time here:
`NamespaceService.searchForCandidateBroker`:
```
// Schedule the task to pre-load topics
pulsar.loadNamespaceTopics(bundle);
```
This happens only for bundles that are not yet assigned to brokers - usually on 
the first clean start of Pulsar.
I think the main purpose of this line is to initialize the `BrokerService.topics` 
map, but as a side effect the ZookeeperCache for all topic names is also initialized.
It then looks like this zk cache (all topic names) isn't used, yet it is reloaded 
frequently (watchers triggered) - and that causes heavy load on Zookeeper, 
because in our case topics are created/deleted frequently.
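For anyone who wants to inspect the fan-out of these two z-nodes directly, here is a minimal sketch using the plain ZooKeeper Java client (the ensemble address and the tenant/cluster/namespace names are placeholders):
```
import java.util.List;
import org.apache.zookeeper.ZooKeeper;

public class ZNodeInspector {
    public static void main(String[] args) throws Exception {
        // Connect to the same ZooKeeper ensemble the brokers use (placeholder address)
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30_000, event -> {});

        // Children here are bundle hash ranges - these change rarely
        List<String> bundles =
            zk.getChildren("/namespace/my-tenant/my-cluster/my-namespace", false);
        System.out.println("bundles: " + bundles);

        // Children here are topic names - one child per topic, so this list
        // churns whenever topics are created or deleted
        List<String> topics = zk.getChildren(
            "/managed-ledgers/my-tenant/my-cluster/my-namespace/persistent", false);
        System.out.println("topic count: " + topics.size());

        zk.close();
    }
}
```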
----
2020-03-16 11:35:00 UTC - Dennis Yung: I am getting this error too.
While using the pulsar manager, most requests return 403 or 500,
e.g. GET /admin/v2/clusters 403.
In the log, such error messages are accompanied by errors about finding 
brokers:
```
[pulsar-external-web-4-5] WARN org.apache.pulsar.proxy.server.AdminProxyHandler - [10.128.0.113:33914] Failed to get next active broker No active broker is available
org.apache.pulsar.broker.PulsarServerException: No active broker is available
    at org.apache.pulsar.proxy.server.BrokerDiscoveryProvider.nextBroker(BrokerDiscoveryProvider.java:94) ~[org.apache.pulsar-pulsar-proxy-2.6.0-SNAPSHOT.jar:2.6.0-SNAPSHOT]
    at org.apache.pulsar.proxy.server.AdminProxyHandler.rewriteTarget(AdminProxyHandler.java:272) [org.apache.pulsar-pulsar-proxy-2.6.0-SNAPSHOT.jar:2.6.0-SNAPSHOT]
    at org.eclipse.jetty.proxy.ProxyServlet.service(ProxyServlet.java:62) [org.eclipse.jetty-jetty-proxy-9.4.20.v20190813.jar:9.4.20.v20190813]
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) [javax.servlet-javax.servlet-api-3.1.0.jar:3.1.0]
    at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:852) [org.eclipse.jetty-jetty-servlet-9.4.20.v20190813.jar:9.4.20.v20190813]
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:544) [org.eclipse.jetty-jetty-servlet-9.4.20.v20190813.jar:9.4.20.v20190813]
    at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233) [org.eclipse.jetty-jetty-server-9.4.20.v20190813.jar:9.4.20.v20190813]
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1581) [org.eclipse.jetty-jetty-server-9.4.20.v20190813.jar:9.4.20.v20190813]
    at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233) [org.eclipse.jetty-jetty-server-9.4.20.v20190813.jar:9.4.20.v20190813]
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1307) [org.eclipse.jetty-jetty-server-9.4.20.v20190813.jar:9.4.20.v20190813]
    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188) [org.eclipse.jetty-jetty-server-9.4.20.v20190813.jar:9.4.20.v20190813]
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:482) [org.eclipse.jetty-jetty-servlet-9.4.20.v20190813.jar:9.4.20.v20190813]
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1549) [org.eclipse.jetty-jetty-server-9.4.20.v20190813.jar:9.4.20.v20190813]
    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186) [org.eclipse.jetty-jetty-server-9.4.20.v20190813.jar:9.4.20.v20190813]
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1204) [org.eclipse.jetty-jetty-server-9.4.20.v20190813.jar:9.4.20.v20190813]
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) [org.eclipse.jetty-jetty-server-9.4.20.v20190813.jar:9.4.20.v20190813]
    at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221) [org.eclipse.jetty-jetty-server-9.4.20.v20190813.jar:9.4.20.v20190813]
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146) [org.eclipse.jetty-jetty-server-9.4.20.v20190813.jar:9.4.20.v20190813]
    at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:173) [org.eclipse.jetty-jetty-server-9.4.20.v20190813.jar:9.4.20.v20190813]
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) [org.eclipse.jetty-jetty-server-9.4.20.v20190813.jar:9.4.20.v20190813]
    at org.eclipse.jetty.server.Server.handle(Server.java:494) [org.eclipse.jetty-jetty-server-9.4.20.v20190813.jar:9.4.20.v20190813]
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:374) [org.eclipse.jetty-jetty-server-9.4.20.v20190813.jar:9.4.20.v20190813]
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:268) [org.eclipse.jetty-jetty-server-9.4.20.v20190813.jar:9.4.20.v20190813]
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311) [org.eclipse.jetty-jetty-io-9.4.20.v20190813.jar:9.4.20.v20190813]
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103) [org.eclipse.jetty-jetty-io-9.4.20.v20190813.jar:9.4.20.v20190813]
    at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117) [org.eclipse.jetty-jetty-io-9.4.20.v20190813.jar:9.4.20.v20190813]
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336) [org.eclipse.jetty-jetty-util-9.4.20.v20190813.jar:9.4.20.v20190813]
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313) [org.eclipse.jetty-jetty-util-9.4.20.v20190813.jar:9.4.20.v20190813]
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171) [org.eclipse.jetty-jetty-util-9.4.20.v20190813.jar:9.4.20.v20190813]
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129) [org.eclipse.jetty-jetty-util-9.4.20.v20190813.jar:9.4.20.v20190813]
    at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:367) [org.eclipse.jetty-jetty-util-9.4.20.v20190813.jar:9.4.20.v20190813]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_242]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_242]
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [io.netty-netty-common-4.1.43.Final.jar:4.1.43.Final]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_242]
```
----
2020-03-16 12:47:45 UTC - Carlos: @Carlos has joined the channel
----
2020-03-16 13:13:46 UTC - Prasad Reddy: @Sijie Guo any update on this would be 
much appreciated
----
2020-03-16 13:56:45 UTC - Chris: What are some values that I can tune to make 
pulsar consume long backlogs more efficiently? It seems to slow down 
dramatically when trying to replay backlogs of 100M+ messages. I've tried 
increasing the rocksdb block cache size to 3gb but it didn't seem to help 
much, or at all. It seems like the messages come in big batches of a couple 
hundred, then pause for a few seconds, then another batch arrives.
----
2020-03-16 15:10:59 UTC - Evan Furman: We've seen similar behavior. Let me know 
if you figure it out @Chris. I tried the same thing 
<https://apache-pulsar.slack.com/archives/CJ0FMGHSM/p1583868861007000>
----
2020-03-16 15:12:09 UTC - Roman Popenov: Is it possible to specify config for 
pulsar functions when running in the kubernetes runtime? I would like pulsar 
functions to mount an additional volume.
----
2020-03-16 15:12:44 UTC - Chris: Yeah, the suggestion seems to make sense, but 
I wasn't able to find any metrics for how full the rocksdb is.
----
2020-03-16 15:13:04 UTC - Chris: What did you set it to? I might try just 
giving it 10gb or something silly.
----
2020-03-16 15:23:41 UTC - Evan Furman: Yea, tried that. Didn’t seem to make a 
difference
```  dbStorage_rocksDB_blockCacheSize: "25769803776"```
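(For reference, 25769803776 bytes is 24 × 1024³, i.e. a 24 GiB RocksDB block cache.)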
----
2020-03-16 15:25:58 UTC - eilonk: is there a channel for pulsar on kubernetes, 
or more specifically using helm?
----
2020-03-16 15:26:44 UTC - Evan Furman: Wondering if @Sijie Guo might have any 
other recommendations?
----
2020-03-16 15:26:55 UTC - Chris: I feel like there's gotta be some lookup that 
grows with the length of the backlog and happens for every message
----
2020-03-16 15:27:22 UTC - Chris: I don't know quite enough about the pulsar 
source to go looking though.
----
2020-03-16 15:28:37 UTC - Evan Furman: Same here, but I think we’re in the 
right place to find out :smile:
----
2020-03-16 15:30:48 UTC - Chris: I'll try standing up a test standalone 
instance and give it a huge backlog.
----
2020-03-16 15:31:35 UTC - Chris: I feel like it's gotta be pretty obvious in a 
profiler, unless it's a storm of requests to bookies that take a while to make 
it across the network. But I feel like that would cause other issues.
----
2020-03-16 15:31:39 UTC - Evan Furman: Yea, the pulsar-perf tool can repro it 
pretty quickly. I have a cluster running on EKS
----
2020-03-16 15:32:14 UTC - Chris: #kubernetes
----
2020-03-16 15:33:10 UTC - eilonk: thanks!
----
2020-03-16 16:26:14 UTC - Ian: Is there a way to have geo-replication while 
also consuming in the same region as the message was produced in (when 
possible)?
For example, messages could be produced and consumed in us-east and us-west, 
and replicated to each other.
Under normal conditions, us-east would consume messages produced in us-east, 
but if us-west were to fail somehow, us-east could consume the us-west-produced 
messages.
----
2020-03-16 17:27:41 UTC - Chris: Have you been able to repro it on standalone? 
I just tried and maybe I didn't make my backlog big enough. Still getting 150k/s
----
2020-03-16 17:29:50 UTC - Evan Furman: I’m rebuilding my cluster now--will let 
you know
----
2020-03-16 18:53:05 UTC - Tobias Macey: @Sijie Guo I saw the upcoming webinar 
about the Kubernetes on Pulsar project. Does that mean that a release is 
imminent? Looking forward to it!
----
2020-03-16 18:58:00 UTC - Sijie Guo: You mean Kafka on Pulsar? Yes. It is 
coming.
----
2020-03-16 18:59:49 UTC - Sijie Guo: No, the connect still goes through the 
normal "lookup" logic. But the actual work is done by the "proxy".
----
2020-03-16 18:59:58 UTC - Tobias Macey: Yes, sorry. Wrong "K" project. Are you 
coordinating the release with the webinar, or do you think it will be made 
available ahead of that date? I'm sorry if I'm coming across as pushy; I'm just 
curious because I'm in the process of designing the topology for a data flow of 
our logs and determining whether new or additional systems will be needed, and 
the availability of the Kafka protocol as an interface would drastically 
simplify things.
----
2020-03-16 19:00:06 UTC - Tobias Macey: Thank you for all of the great work!
----
2020-03-16 19:00:32 UTC - Sijie Guo: what is 2.5.0-3?

Can you increase the memory that you assigned to the JVM?
----
2020-03-16 19:01:24 UTC - Vince Pergolizzi: Is the flow:
Connect to proxy -> Connected
Topic lookup -> Topic lookup response
Connect to proxy (with proxy broker URL set) -> Connected
----
2020-03-16 19:01:54 UTC - Vince Pergolizzi: So I have 2 physical connections 
open to the proxy, but one of them is set to route my commands through the proxy
----
2020-03-16 19:02:14 UTC - Sijie Guo: A few notes:

1. Make sure you have reserved a certain amount of memory for the filesystem cache.
2. Try increasing the following settings:
`dbStorage_readAheadCacheMaxSizeMb=`
`dbStorage_readAheadCacheBatchSize=1000`
3. How does your application consume the events?
----
2020-03-16 19:03:55 UTC - Sijie Guo: What type of EBS volumes are you using?
----
2020-03-16 19:04:05 UTC - Sijie Guo: Can you give me any insights?
----
2020-03-16 19:04:20 UTC - Sijie Guo: Because if it is EBS, there might be a 
bandwidth limitation.
----
2020-03-16 19:04:36 UTC - Sijie Guo: It would be interesting to see what the 
network bandwidth limit is.
----
2020-03-16 19:05:53 UTC - Evan Furman: `gp2` and `st1`
----
2020-03-16 19:06:22 UTC - Sijie Guo: 
<http://pulsar.apache.org/docs/en/functions-runtime/#kubernetes-customruntimeoptions>
----
2020-03-16 19:06:23 UTC - Evan Furman: ```# SSDs for bookie journal storage
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: bookie-ssd
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: xfs
#  To create encrypted ebs volume using kms
#  encrypted: "true"
#  kmsKeyId: <enter the key id here>
reclaimPolicy: Delete
---
# HDDs for bookie ledger storage
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: bookie-hdd
provisioner: kubernetes.io/aws-ebs
parameters:
  type: st1
  fsType: xfs
#  To create encrypted ebs volume using kms
#  encrypted: "true"
#  kmsKeyId: <enter the key id here>
reclaimPolicy: Delete```
----
2020-03-16 19:07:38 UTC - Chris: Internal k8s cluster. We've got a raid 5 of 
ssds hooked up as pulsar disks to each node and mount them directly. When it 
reads long backlogs, there's little disk activity as far as I can tell.
----
2020-03-16 19:08:30 UTC - Sijie Guo: Yes, it will be released before the 
webinar. Probably coming in a few days. Stay tuned.
----
2020-03-16 19:10:50 UTC - Evan Furman: I see about `14.75K` on average for 
consumption from disk
----
2020-03-16 19:10:56 UTC - Roman Popenov: Do those allow specifying volume 
mounts?
----
2020-03-16 19:11:25 UTC - Tobias Macey: Great, thanks :+1:
----
2020-03-16 19:11:59 UTC - Evan Furman: vs `20-40k/s`  on real-time
----
2020-03-16 19:14:58 UTC - Sijie Guo: Currently, the consumer in each region 
will consume all the messages. You can filter out the replicated messages (by 
checking `isReplicated` and `getReplicatedFrom` on the message) at the client 
side.
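A minimal sketch of that client-side filter (the service URL, topic, and subscription names are placeholders):
```
import org.apache.pulsar.client.api.*;

public class LocalOnlyConsumer {
    public static void main(String[] args) throws PulsarClientException {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")  // placeholder
                .build();

        Consumer<byte[]> consumer = client.newConsumer()
                .topic("persistent://public/default/my-topic")  // placeholder
                .subscriptionName("us-east-sub")                // placeholder
                .subscribe();

        while (true) {
            Message<byte[]> msg = consumer.receive();
            if (msg.isReplicated()) {
                // getReplicatedFrom() returns the source cluster name
                System.out.println("skipping message replicated from " + msg.getReplicatedFrom());
            } else {
                // ... process locally produced message ...
            }
            // Ack either way so the backlog doesn't grow
            consumer.acknowledge(msg);
        }
    }
}
```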
----
2020-03-16 19:18:05 UTC - Sijie Guo: @Evan Furman what is “K” here? “KB”?
----
2020-03-16 19:18:39 UTC - Sijie Guo: @Chris Can you give me more details about 
your setup? Bookie pod specification (cpu, memory, jvm settings, etc.)?
----
2020-03-16 19:20:02 UTC - Chris: Readahead cache size was already 6gb so I 
didn't touch it. Bumped up the batch size and have left 12 gb for filesystem 
caches on the bookies.
----
2020-03-16 19:20:43 UTC - Chris: I filter ~95% of the messages, then insert the 
remainder into another db. The problem is easily reproducible with pulsar-perf 
on the same subscription though
----
2020-03-16 19:21:09 UTC - Sijie Guo: I don't think so. You can extend 
`BasicKubernetesManifestCustomizer` to add the ability to mount volumes
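A rough sketch of such a customizer. Heavily hedged: the exact hook names and the kubernetes-client package layout depend on your Pulsar version, so treat this as an outline rather than the definitive API, and the extra volume here is a hypothetical example:
```
import io.kubernetes.client.openapi.models.*;
import org.apache.pulsar.functions.proto.Function;
import org.apache.pulsar.functions.runtime.kubernetes.BasicKubernetesManifestCustomizer;

// Assumption: the customizer exposes a customizeStatefulSet hook like the one
// below; verify against your Pulsar version's interface before using.
public class VolumeMountCustomizer extends BasicKubernetesManifestCustomizer {

    @Override
    public V1StatefulSet customizeStatefulSet(Function.FunctionDetails funcDetails,
                                              V1StatefulSet statefulSet) {
        statefulSet = super.customizeStatefulSet(funcDetails, statefulSet);
        V1PodSpec podSpec = statefulSet.getSpec().getTemplate().getSpec();

        // Hypothetical extra volume; swap in your PVC/ConfigMap/etc. as needed
        podSpec.addVolumesItem(new V1Volume()
                .name("extra-data")
                .emptyDir(new V1EmptyDirVolumeSource()));

        // Mount it into the function container
        podSpec.getContainers().get(0)
                .addVolumeMountsItem(new V1VolumeMount()
                        .name("extra-data")
                        .mountPath("/extra-data"));
        return statefulSet;
    }
}
```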
----
2020-03-16 19:21:34 UTC - Sijie Guo: yes
----
2020-03-16 19:23:00 UTC - Vince Pergolizzi: Thanks
----
2020-03-16 19:23:10 UTC - Chris: Sure thing. Gonna paste a bunch of stats 
below. We've got 20 bookies across a kubernetes cluster, each with stats like 
this.
```BOOKIE_MEM: '" -Xms16g -Xmx16g -XX:MaxDirectMemorySize=16g"'
dbStorage_writeCacheMaxSizeMb: "6000"  # Write cache size (direct memory)
dbStorage_readAheadCacheMaxSizeMb: "2000"  # Read cache size (direct memory)
dbStorage_readAheadCacheBatchSize: "8000"
dbStorage_rocksDB_blockCacheSize: "4000000000"

resources:
            requests:
              cpu: 10
            limits:
              memory: "42Gi"```
----
2020-03-16 19:23:18 UTC - Vince Pergolizzi: Do you know why I would get an 
authentication error on that 2nd connection?
----
2020-03-16 19:24:30 UTC - Sijie Guo: for setting up the proxy, I would 
recommend using the following settings instead of setting `zookeeperServers` 
(which makes the proxy use zookeeper for service discovery).

```# if Service Discovery is Disabled this url should point to the discovery 
service provider.
brokerServiceURL=
brokerServiceURLTLS=

# These settings are unnecessary if `zookeeperServers` is specified
brokerWebServiceURL=
brokerWebServiceURLTLS=```
----
2020-03-16 19:24:35 UTC - Chris: 7 brokers in front, similar cpu and memory.
----
2020-03-16 19:25:07 UTC - Sijie Guo: I am not sure. Can you provide more 
information about your setup?
----
2020-03-16 19:36:12 UTC - Sijie Guo: I would suggest swapping the settings 
between `dbStorage_writeCacheMaxSizeMb` and 
`dbStorage_readAheadCacheMaxSizeMb`. Also try reducing 
`dbStorage_readAheadCacheBatchSize` to 100.
+1 : Chris
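Against the bookie settings Chris posted earlier in the thread, that suggestion would look roughly like this (a starting point under the suggestion above, not a definitive tuning):
```dbStorage_writeCacheMaxSizeMb: "2000"      # was 6000; write cache (direct memory)
dbStorage_readAheadCacheMaxSizeMb: "6000"  # was 2000; read cache (direct memory)
dbStorage_readAheadCacheBatchSize: "100"   # was 8000```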
----
2020-03-16 19:36:55 UTC - Sijie Guo: okay I see.
----
2020-03-16 19:37:03 UTC - Sijie Guo: I replied to the other threads
+1 : Chris
----
2020-03-16 19:37:48 UTC - Evan Furman: ``` BOOKIE_MEM: 
"\"-Dio.netty.leakDetectionLevel=disabled -Dio.netty.recycler.linkCapacity=1024 
-XX:+ParallelRefProcEnabled -XX:+UnlockExperimentalVMOptions 
-XX:+AggressiveOpts -XX:+DoEscapeAnalysis -XX:ParallelGCThreads=4 
-XX:ConcGCThreads=4 -XX:G1NewSizePercent=50 -XX:+DisableExplicitGC 
-XX:-ResizePLAB -XX:+ExitOnOutOfMemoryError -XX:+PerfDisableSharedMem 
-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCApplicationStoppedTime 
-XX:+PrintHeapAtGC -verbosegc -XX:G1LogLevel=finest -Xms18g -Xmx18g 
-XX:MaxDirectMemorySize=28g\""
  dbStorage_writeCacheMaxSizeMb: "2048" # Write cache size (direct memory)
  dbStorage_readAheadCacheMaxSizeMb: "2048" # Read cache size (direct memory)
  dbStorage_rocksDB_blockCacheSize: "25769803776"```
----
2020-03-16 19:50:21 UTC - Chris: I did some profiling on a pulsar standalone 
instance with a heavy backlog vs no backlog and nothing stuck out to me. The 
same methods were at the top, and it looked like pulsar spent most of its time 
reading messages from the disk and sending them which is good. I only had disk 
space locally for ~50m messages though.
----
2020-03-16 19:53:23 UTC - Chris: Once the bookies finish restarting, perhaps 
I'll try to arrange profiling of a live broker. There are so many threads that 
it's hard to make sense of the profiling though.
----
2020-03-16 20:12:38 UTC - Roman Popenov: Yeah, that's what I looked into. I 
think it's a bit convoluted to use Java to populate all the specs of the pod
----
2020-03-16 20:13:43 UTC - Roman Popenov: I wonder if there is a library that 
maps manifest files to kubernetes java configs
----
2020-03-16 22:17:48 UTC - Alexander Ursu: Has anyone had any experience with 
visualizing the data published to pulsar topics in Grafana? Any helpful things 
to keep in mind or try out that can make this process as seamless and scalable 
as possible?
----
2020-03-16 23:45:09 UTC - Aaron Stockton: I'm using a partitioned topic with an 
exclusive consumer and not seeing the pulsar broker reflect any of my acks.

Via the pulsar consumer stats logging:
```[public/default/870|EXTERNID_NG|2020-03-16|subscription] [test-worker] 
Prefetched messages: 0 --- Consume throughput received: 704.70 msgs/s --- 1.32 
Mbit/s --- Ack sent rate: 704.70 ack/s```
and from the pulsar admin client:
```bin/pulsar-admin topics stats 
persistent://public/default/topic-03-16-partition-1 | jq 
'.subscriptions."public/default/topic-03-16-subscription".unackedMessages'
324800```
The last acked time is pretty old too, like my acks aren't being processed. Is 
there a config I'm missing somewhere?
----
2020-03-17 01:36:02 UTC - Vince Pergolizzi: I found the issue: I was passing 
the ProxyToBrokerUrl in `pulsar://host:port` format, but the code expects just 
`host:port`
----
2020-03-17 03:56:06 UTC - Sijie Guo: did your application call #acknowledge()?
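For reference, a minimal receive/ack loop (consumer construction omitted; `AckLoop` is just an illustrative wrapper):
```
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.PulsarClientException;

class AckLoop {
    // Assumes 'consumer' was built elsewhere with client.newConsumer()...subscribe()
    static void consume(Consumer<byte[]> consumer) throws PulsarClientException {
        while (true) {
            Message<byte[]> msg = consumer.receive();
            try {
                // ... process the message ...
                consumer.acknowledge(msg); // without this, unackedMessages keeps growing
            } catch (Exception e) {
                // let the broker redeliver later instead of silently dropping
                consumer.negativeAcknowledge(msg);
            }
        }
    }
}
```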
----
2020-03-17 04:57:36 UTC - Prasad Reddy: 2.5.0-3 is the pulsar chart version. We 
already increased direct memory to 4 GB and Xmx to 2 GB. Do you mean more 
memory is still required?
----
2020-03-17 05:12:30 UTC - xue: An error is reported by hot-deployed producers 
under the Karaf framework
----
2020-03-17 05:40:36 UTC - Luis Muniz: Hi, I am trying to evaluate apache 
pulsar, and I would like to know if there is a production-ready, recommended 
way of deploying it on AWS. I have gone through several of the deployment 
methods listed on the site, and have not found a working, recommended way.
• I have tried the terraform/ansible method, and it hangs on the "Initialize 
cluster metadata" task. This method does not use a k8s cluster but seemed like 
a quick way to start evaluating it.
• If I use k8s, should I go the kops way and the "install pulsar components" 
way, or use a helm chart?
Any advice appreciated
----
2020-03-17 07:41:38 UTC - Sijie Guo: it seems that you didn't add the pulsar 
client jar to the classpath.
----
2020-03-17 07:41:56 UTC - Sijie Guo: Can you please check how you reference 
the client dependency?
----
2020-03-17 07:43:01 UTC - Sijie Guo: terraform/ansible is used for VMs.
----
2020-03-17 07:43:12 UTC - Sijie Guo: If you are using a k8s cluster, try the 
helm approach.
----
2020-03-17 07:44:00 UTC - Sijie Guo: For a "production-ready" recommendation, 
it would depend on your requirements: throughput and the number of 
topics/producers/consumers.
----
2020-03-17 07:47:59 UTC - Luis Muniz: Thanks for replying. I was more worried 
about stability than performance
----
2020-03-17 07:49:49 UTC - xue: It's an OSGi classloader problem. My application doesn't run directly on the JVM; it runs inside an OSGi container with hot deployment.
----
2020-03-17 07:51:16 UTC - Sijie Guo: The software itself has been used in many 
companies, so you don't have to worry about that.

The helm chart in the open source repo provides the basic framework to run it 
on k8s. People might take that helm chart and tweak settings for their own 
production requirements.
----
2020-03-17 07:57:01 UTC - xue: You can reproduce it by running a producer inside an OSGi container. The OSGi framework I'm using is Karaf.
----
2020-03-17 07:59:23 UTC - Luis Muniz: thanks :+1:
----
2020-03-17 08:44:35 UTC - Prasad Reddy: @Sijie Guo I've created a GitHub issue 
and it is currently open. It would be great if you could pass along some 
information regarding this case
----
