2020-07-15 10:03:49 UTC - rohit keshwani: @rohit keshwani has joined the channel
----
2020-07-15 10:13:41 UTC - Prashanth YV: @Prashanth YV has joined the channel
----
2020-07-15 10:13:50 UTC - Mohit Manaskant: Hello, we are setting up a
production-grade Pulsar cluster on AWS. We'd like to know the best practice
regarding ZooKeeper: is it fine to keep one ZooKeeper cluster for both Pulsar
and BookKeeper, or should we keep them separate due to the different workloads?
----
2020-07-15 10:14:29 UTC - Ali Ahmed: it’s fine to have one instance; Pulsar and
BK do not put significant load on ZK.
+1 : Mohit Manaskant
----
2020-07-15 11:42:57 UTC - Ebere Abanonu: Ok, what will be the fix?
----
2020-07-15 14:04:58 UTC - Zhenhao Li: hi, if I deploy a bookie on a new node,
is it sufficient to run `initbookie` without running `bookieformat` ?
----
2020-07-15 14:18:16 UTC - Matteo Merli: Correct
----
2020-07-15 14:20:05 UTC - Zhenhao Li: awesome. thanks!
----
2020-07-15 14:21:44 UTC - Roman Ananyev: Hi all!
I installed the Debezium source connector with Pulsar 2.6 for testing - as far
as I know this is in fact Debezium version 1.0.
I noticed that it does not want to write to a single topic. I specified the
`topicName` parameter inside the connector configuration YAML file and added
the connector with the `--destination-topic-name` flag, but nothing changes:
a separate topic is still created for each table. Does anyone have any ideas
on how to do this?
In the Debezium version for Kafka, this is done with the
`io.debezium.transforms.ByLogicalTableRouter` Kafka SMT module, but that does
not work with Pulsar yet.
----
2020-07-15 14:33:25 UTC - Frank Kelly: I have a local Minikube dev env that I
can blow away, so I'm wondering how I can get past this issue. I have
`persistence: false`
```14:29:03.340 [LedgerDirsMonitorThread] WARN
org.apache.bookkeeper.bookie.LedgerDirsMonitor - LedgerDirsMonitor check
process: All ledger directories are non writable
14:29:03.342 [LedgerDirsMonitorThread] ERROR
org.apache.bookkeeper.util.DiskChecker - Space left on device
data/bookkeeper/ledgers/current : 1175425024, Used space fraction: 0.96614987
> threshold 0.95.```
When I look at the Pod volumes I see the following
```Volumes:
pulsar-bookkeeper-journal:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
pulsar-bookkeeper-ledgers:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>```
Perhaps some local Minikube / Docker setting?
----
2020-07-15 15:05:29 UTC - thinker0: The following seems to occur periodically in
ModularLoadManagerImpl:
```
23:20:49.799 [pulsar-ordered-OrderedExecutor-1-0-EventThread] INFO
org.apache.pulsar.zookeeper.ZooKeeperDataCache - [State:CONNECTED Timeout:30000
sessionid:0x17169d09f3004cc local:/10.120.70.26:49516
remoteserver:ZK1511/10.128.144.130:2181 lastZxid:12886059465 xid:75320
sent:75320 recv:77408 queuedpkts:0 pendingresp:0 queuedevents:0] Received
ZooKeeper watch event: WatchedEvent state:SyncConnected type:NodeDataChanged
path:/loadbalance/brokers/10.233.56.150:31070
23:20:49.799 [pulsar-ordered-OrderedExecutor-1-0-EventThread] INFO
org.apache.pulsar.zookeeper.ZooKeeperCache - [State:CONNECTED Timeout:30000
sessionid:0x17169d09f3004cc local:/10.120.70.26:49516
remoteserver:ZK1511/10.128.144.130:2181 lastZxid:12886059465 xid:75320
sent:75320 recv:77408 queuedpkts:1 pendingresp:0 queuedevents:0] Received
ZooKeeper watch event: WatchedEvent state:SyncConnected type:NodeDataChanged
path:/loadbalance/brokers/10.233.56.150:31070
23:20:50.033 [pulsar-load-manager-3-1] WARN
org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl - Error when
attempting to update local broker data
java.lang.NullPointerException: null
at
org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl.getBundleStats(ModularLoadManagerImpl.java:416)
~[org.apache.pulsar-pulsar-broker-2.6.0.jar:2.6.0]
at
org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl.updateLocalBrokerData(ModularLoadManagerImpl.java:939)
[org.apache.pulsar-pulsar-broker-2.6.0.jar:2.6.0]
at
org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl.writeBrokerDataOnZooKeeper(ModularLoadManagerImpl.java:975)
[org.apache.pulsar-pulsar-broker-2.6.0.jar:2.6.0]
at
org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerWrapper.writeLoadReportOnZookeeper(ModularLoadManagerWrapper.java:115)
[org.apache.pulsar-pulsar-broker-2.6.0.jar:2.6.0]
at
org.apache.pulsar.broker.loadbalance.LoadReportUpdaterTask.run(LoadReportUpdaterTask.java:41)
[org.apache.pulsar-pulsar-broker-2.6.0.jar:2.6.0]
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
[?:?]
at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
[?:?]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
[?:?]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
[?:?]
at
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
[io.netty-netty-common-4.1.48.Final.jar:4.1.48.Final]
at java.lang.Thread.run(Thread.java:830) [?:?]
23:20:55.033 [pulsar-load-manager-3-1] WARN
org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl - Error when
attempting to update local broker data
java.lang.NullPointerException: null
at
org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl.getBundleStats(ModularLoadManagerImpl.java:416)
~[org.apache.pulsar-pulsar-broker-2.6.0.jar:2.6.0]
at
org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl.updateLocalBrokerData(ModularLoadManagerImpl.java:939)
[org.apache.pulsar-pulsar-broker-2.6.0.jar:2.6.0]
at
org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl.writeBrokerDataOnZooKeeper(ModularLoadManagerImpl.java:975)
[org.apache.pulsar-pulsar-broker-2.6.0.jar:2.6.0]
at
org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerWrapper.writeLoadReportOnZookeeper(ModularLoadManagerWrapper.java:115)
[org.apache.pulsar-pulsar-broker-2.6.0.jar:2.6.0]
at
org.apache.pulsar.broker.loadbalance.LoadReportUpdaterTask.run(LoadReportUpdaterTask.java:41)
[org.apache.pulsar-pulsar-broker-2.6.0.jar:2.6.0]
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
[?:?]
at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
[?:?]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
[?:?]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
[?:?]
at
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
[io.netty-netty-common-4.1.48.Final.jar:4.1.48.Final]
at java.lang.Thread.run(Thread.java:830) [?:?]```
white_check_mark : thinker0
----
2020-07-15 15:31:15 UTC - charles: I have a question regarding offloading and
accessing messages on AWS S3.
I run a standalone Docker instance of "pulsar-all:2.6.0" and successfully
offloaded messages to an S3 bucket. I have a topic with 2 partitions. For
example, the following S3 objects are created:
```81378c52-e86b-467f-806b-f326f246f38d-ledger-250
81378c52-e86b-467f-806b-f326f246f38d-ledger-250-index ```
Getting the status of the ledger through the admin cli results in
"offloaded=true":
`$ ./bin/pulsar-admin topics stats-internal
<persistent://ACME1_DEV/erp1/SalesOrder-partition-0>`
...
```{
"ledgerId" : 250,
"entries" : 1000,
"size" : 47000,
"offloaded" : true
}```
...
Getting the content of that ledger works through the admin cli as well, e.g.
getting entryId=10 out of ledgerId=250 is done as follows:
`$ ./bin/pulsar-admin topics get-message-by-id
<persistent://ACME1_DEV/erp1/SalesOrder-partition-0> -l 250 -e 10`
``` +-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 46 6f 6f 62 61 72 |Foobar |
+--------+-------------------------------------------------+----------------+```
When I delete the S3 objects from AWS, I would expect this to stop working:
the content was offloaded, and the tiered-storage copy is now gone.
However, I get the same result when running the latter command.
Does an offload mean the data is still available on the Pulsar server? How can
I clear my server while still being able to access older offloaded messages
through Pulsar?
----
2020-07-15 15:49:29 UTC - Arthur: I created an ingress on pulsar-proxy ports
6650 and 6651, but I get an "unable to find valid certificate" error on the
client side (even if I provide the certificate with tlsTrustCertsFilePath).
If I ignore cert validation, I get "TooLongFrameException: Adjusted frame
length exceeds 5253120: 1213486164 - discarded".
I'm not sure whether creating an ingress works without additional configuration
in Pulsar?
----
2020-07-15 16:08:00 UTC - Addison Higham: What version is this @thinker0?
----
2020-07-15 16:12:33 UTC - Addison Higham: @VanderChen are all these topics in a
single namespace? If so, you will want to increase the number of bundles.
Namespaces by default have 4 bundles, which is basically just 4 "buckets" of
hash range. To support lots of topics, you need lots of bundles.
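For example, the default for new namespaces can be raised with
`defaultNumberOfNamespaceBundles` in broker.conf, or a namespace can be created
with more bundles up front, e.g. `bin/pulsar-admin namespaces create
my-tenant/my-ns --bundles 64` (placeholder names).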
Additionally, if you are trying to create and then produce/consume to all of
these topics really quickly, that is a different problem from reaching that
number of topics organically, but you can tune the server and client to be
more forgiving; I would raise most of the timeout settings on both the client
and the server.
Hope that helps :slightly_smiling_face:
----
2020-07-15 16:16:12 UTC - Addison Higham: @charles when brokers download an
offloaded segment, they do cache it. That may just be it serving it from the
cache. If you want to force the cache to be cleared you can unload the topic,
this will force it to be re-loaded to a different broker.
`pulsar-admin topics unload <topicname>` should do it.
Also, another thing to consider is that things can be offloaded but there is
still a lag before they get deleted from BookKeeper, controlled by
`pulsar-admin namespaces set-offload-deletion-lag`, so you may want to make
sure that time frame has passed.
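(For example, something like `pulsar-admin namespaces set-offload-deletion-lag
my-tenant/my-ns --lag 10m`, with placeholder names; `get-offload-deletion-lag`
shows the current value.)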
----
2020-07-15 17:19:22 UTC - Matt Mitchell: I’m running into this error when
running Pulsar standalone in Docker… (client app is external). I thought it was
related to initializing a tenant and namespace, but even after creating them
ahead of time, I still get the error… any ideas on how to resolve?
`java.util.concurrent.CompletionException:
org.apache.pulsar.client.api.PulsarClientException$BrokerMetadataException:
Policies not found for foo/bar namespace`
----
2020-07-15 17:24:12 UTC - Addison Higham: So just to understand, you were
trying to do some operation before the tenant and namespace were created, you
created them, and then still got this error?
----
2020-07-15 17:52:39 UTC - VanderChen: Thank you very much. I will try to
increase the number of bundles.
----
2020-07-15 17:52:46 UTC - Zhenhao Li: hi, what is the minimum version of
ZooKeeper that Pulsar can work with?
----
2020-07-15 17:53:27 UTC - Zhenhao Li: I'm using a zookeeper cluster with
version 3.4.13. I see some strange issues now
----
2020-07-15 18:17:08 UTC - Zhenhao Li: what should I use for
`broker-service-url` ? a broker's address or a bookie's?
----
2020-07-15 18:19:00 UTC - Matt Mitchell: I thought so, but I’m verifying that
now. Will let you know…
----
2020-07-15 18:19:20 UTC - Addison Higham: that is the pulsar protocol address
for the broker, like `pulsar://<hostname>:6650` if not using TLS or
`pulsar+ssl://<hostname>:6651` if you are using TLS
----
2020-07-15 18:20:22 UTC - Addison Higham: There are some zookeeper client API
changes in ZK 3.5, but Pulsar should be able to work with both versions. What
are you seeing exactly?
----
2020-07-15 18:21:53 UTC - Zhenhao Li: I've seen lots of
```Jul 15 19:40:25 compute1 java[789]: [myid:0] - WARN
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@383] - Exception
causing close of session 0x0: ZooKeeperServer not running
Jul 15 19:40:25 compute1 java[789]: [myid:0] - INFO
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1056] - Closed socket
connection for client /192.168.1.202:45462 (no session established for client)
Jul 15 19:40:25 compute1 java[789]: [myid:0] - INFO
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@215] - Accepted
socket connection from /192.168.1.203:60424
Jul 15 19:40:25 compute1 java[789]: [myid:0] - WARN
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@383] - Exception
causing close of session 0x0: ZooKeeperServer not running
Jul 15 19:40:25 compute1 java[789]: [myid:0] - INFO
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1056] - Closed socket
connection for client /192.168.1.203:60424 (no session established for client)
Jul 15 19:40:27 compute1 java[789]: [myid:0] - INFO
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@215] - Accepted
socket connection from /192.168.1.201:40618
Jul 15 19:40:27 compute1 java[789]: [myid:0] - WARN
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@383] - Exception
causing close of session 0x0: ZooKeeperServer not running
Jul 15 19:40:27 compute1 java[789]: [myid:0] - INFO
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1056] - Closed socket
connection for client /192.168.1.201:40618 (no session established for client)
Jul 15 19:40:27 compute1 java[789]: [myid:0] - INFO
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@215] - Accepted
socket connection from /192.168.1.203:60434
Jul 15 19:40:27 compute1 java[789]: [myid:0] - WARN
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@383] - Exception
causing close of session 0x0: ZooKeeperServer not running
Jul 15 19:40:27 compute1 java[789]: [myid:0] - INFO
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1056] - Closed socket
connection for client /192.168.1.203:60434 (no session established for client)
Jul 15 19:40:28 compute1 java[789]: [myid:0] - INFO
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@215] - Accepted
socket connection from /192.168.1.202:45468
Jul 15 19:40:28 compute1 java[789]: [myid:0] - WARN
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@383] - Exception
causing close of session 0x0: ZooKeeperServer not running
Jul 15 19:40:28 compute1 java[789]: [myid:0] - INFO
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1056] - Closed socket
connection for client /192.168.1.202:45468 (no session established for
client)```
----
2020-07-15 18:22:04 UTC - Zhenhao Li: but it's gone after re-running the deployment
----
2020-07-15 18:23:28 UTC - Addison Higham: that error message would be expected
if your zookeeper cluster was trying to form a quorum
----
2020-07-15 18:23:39 UTC - Zhenhao Li: does `pulsar initialize-cluster-metadata`
need to run on each broker and bookie node?
----
2020-07-15 18:24:06 UTC - Zhenhao Li: I got confused by the name. I thought I
would only need to run it once
----
2020-07-15 18:24:48 UTC - Zhenhao Li: I see. thanks!
----
2020-07-15 18:26:30 UTC - Addison Higham: only once globally
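For reference, the usual form is something like `bin/pulsar
initialize-cluster-metadata --cluster <cluster-name> --zookeeper <zk-host>:2181
--configuration-store <zk-host>:2181 --web-service-url http://<broker-host>:8080
--broker-service-url pulsar://<broker-host>:6650`, run a single time for the
whole cluster (hosts here are placeholders).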
----
2020-07-15 18:28:04 UTC - Yezen: I'm trying to incorporate end to end
encryption using Apache Pulsar. So far the examples I've seen look similar to
<http://pulsar.apache.org/docs/en/security-encryption/>
I saw that the encryption key (public / private key) is fetched every 4 hours
in the key rotation section. It seems to indicate that you can only have a
single key to encrypt the messages you send.
I’d like to be able to use different encryption keys for different
topics/tenants. Does pulsar support different encryption keys for different
topics?
Basically my client will pull a different key depending on which tenant the
data belongs to and encrypt any messages pertaining to that specific key using
pulsar's end to end encryption.
How would this look?
Is it as simple as creating a new producer any time I want to use a different
key for encryption?
```Producer producer = pulsarClient.newProducer()
.topic("<persistent://my-tenant/my-ns/my-topic>")
.addEncryptionKey("myTenant1Key")
.cryptoKeyReader(new RawFileKeyReader("tenant1_pubkey.pem",
"tenant1_privkey.pem"))
.create();```
So far I've used the pulsar client to send encrypted messages using a local
private/public key pair. Can I just switch out the key pair in the
`.cryptoKeyReader` and instantiate a new producer anytime I want a message to
be encrypted differently? I've tried asking around and can't find a solid
answer.
----
2020-07-15 18:29:50 UTC - Zhenhao Li: what's the meaning of pulsar protocol
address?
----
2020-07-15 18:30:14 UTC - Zhenhao Li: why is it one node's address?
----
2020-07-15 18:34:23 UTC - Joshua Eric: What do you think about this as a rough
example?
```from pulsar.schema import *

class Person(Record):
    name = String()
    phone = String()

class Schema:
    schema = None

    def __init__(self, *args):
        self.schema = args[0]

    def __call__(self, f):
        def wrapped(*args):
            return self.schema.encode(f(*(self.schema.decode(args[0]),)))
        return wrapped

@Schema(AvroSchema(Person))
def tester(input):
    # {'name': 'Josh Eric', 'phone': '<tel:616-402-0625|616-402-0625>'}
    return input

p = Person(name='Josh Eric', phone='<tel:616-402-0625|616-402-0625>')
msg = AvroSchema(Person).encode(p)
print(tester(msg))
# b'\x02\x12Josh Eric\x02\x18616-402-0625'```
----
2020-07-15 18:35:52 UTC - Joshua Eric: If you agree, I can write it up as a
solution to implementing schema in a Pulsar function with Python.
----
2020-07-15 18:36:05 UTC - Zhenhao Li: my deployment script currently runs it on
every bookie node and fills in the node's own address
----
2020-07-15 18:36:14 UTC - Zhenhao Li: this may not make sense at all...
----
2020-07-15 18:50:24 UTC - Addison Higham: ah, so it can actually be an array of
addresses, but that value isn't that important in most cases, it is only really
used if you are doing geo-replication with multiple clusters
----
2020-07-15 18:54:39 UTC - Addison Higham: I haven't ever tried that, but I
believe it should work. The encryption basically just encrypts the payload and
then adds metadata indicating which key was used. The consumer looks at this
metadata per message
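Untested, but roughly what I'd expect to work (reusing the `RawFileKeyReader`
from your snippet; topic, key, and file names are placeholders):
```// Producer for a second tenant, using that tenant's key pair
Producer<byte[]> tenant2Producer = pulsarClient.newProducer()
        .topic("persistent://tenant2/my-ns/my-topic")
        .addEncryptionKey("myTenant2Key")
        .cryptoKeyReader(new RawFileKeyReader("tenant2_pubkey.pem", "tenant2_privkey.pem"))
        .create();

// Consumer side: it only needs access to the matching private key; the
// per-message metadata tells the client which key name was used
Consumer<byte[]> tenant2Consumer = pulsarClient.newConsumer()
        .topic("persistent://tenant2/my-ns/my-topic")
        .subscriptionName("my-sub")
        .cryptoKeyReader(new RawFileKeyReader("tenant2_pubkey.pem", "tenant2_privkey.pem"))
        .subscribe();```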
----
2020-07-15 18:54:53 UTC - Zhenhao Li: ok. thanks!
----
2020-07-15 18:57:40 UTC - Zhenhao Li: how can I resolve this failure?
```Jul 15 20:09:43 server3 systemd[1]: Started Pulsar's Bookkeeper Daemon.
Jul 15 20:09:46 server3 pulsar-bookie-start[3908]:
/nix/store/lmdlcv80ci9fcvfqpshpxjxsbq3ap33p-unit-script-pulsar-bookie-start/bin/pulsar-bookie-start:
line 11: 1: command not found
Jul 15 20:09:46 server3 pulsar-bookie-start[3909]: JMX enabled by default
Jul 15 20:09:48 server3 pulsar-bookie-start[3909]: 20:09:48.320 [main] ERROR
org.apache.bookkeeper.client.BookKeeperAdmin - JournalDir:
/var/lib/pulsar-bookie/journal is existing and its not empty, try formatting
the bookie```
----
2020-07-15 19:43:45 UTC - VanderChen: I have changed the number of bundles to
*16* in broker.conf, but this problem still exists when I create 20k topics.
```org.apache.pulsar.client.api.PulsarClientException$TimeoutException: 34500
lookup request timedout after ms 30000```
What should I set to support this?
----
2020-07-15 19:59:18 UTC - Zhenhao Li: made some progress and now seeing this
error:
----
2020-07-15 19:59:18 UTC - Zhenhao Li: ```Jul 15 21:46:52 server1 systemd[1]:
Started Pulsar's Bookkeeper Daemon.
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: 21:47:06.249
[BookieStateManagerService-0] ERROR
org.apache.bookkeeper.discover.ZKRegistrationManager - ZK exception checking
and wait ephemeral znode /ledgers/available/127.0.0.1:3181 expired :
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]:
org.apache.zookeeper.KeeperException$NodeExistsException: KeeperErrorCode =
NodeExists for /ledgers/available/127.0.0.1:3181
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
org.apache.bookkeeper.discover.ZKRegistrationManager.checkRegNodeAndWaitExpired(ZKRegistrationManager.java:194)
[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
org.apache.bookkeeper.discover.ZKRegistrationManager.doRegisterBookie(ZKRegistrationManager.java:228)
[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
org.apache.bookkeeper.discover.ZKRegistrationManager.registerBookie(ZKRegistrationManager.java:219)
[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
org.apache.bookkeeper.bookie.BookieStateManager.doRegisterBookie(BookieStateManager.java:266)
[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
org.apache.bookkeeper.bookie.BookieStateManager.doRegisterBookie(BookieStateManager.java:254)
[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
org.apache.bookkeeper.bookie.BookieStateManager$2.call(BookieStateManager.java:217)
[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
org.apache.bookkeeper.bookie.BookieStateManager$2.call(BookieStateManager.java:213)
[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_242]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[?:1.8.0_242]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[?:1.8.0_242]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
java.lang.Thread.run(Thread.java:748) [?:1.8.0_242]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: 21:47:06.442 [main] ERROR
org.apache.bookkeeper.bookie.Bookie - Couldn't register bookie with zookeeper,
shutting down :
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]:
java.util.concurrent.ExecutionException: java.io.IOException:
org.apache.bookkeeper.bookie.BookieException$MetadataStoreException:
java.io.IOException: ZK exception checking and wait ephemeral znode
/ledgers/available/127.0.0.1:3181 expired
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:1.8.0_242]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
java.util.concurrent.FutureTask.get(FutureTask.java:192) ~[?:1.8.0_242]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
org.apache.bookkeeper.bookie.Bookie.start(Bookie.java:1007)
~[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
org.apache.bookkeeper.proto.BookieServer.start(BookieServer.java:140)
~[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
org.apache.bookkeeper.server.service.BookieService.doStart(BookieService.java:57)
~[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
org.apache.bookkeeper.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:83)
~[org.apache.bookkeeper-bookkeeper-common-4.10.0.jar:4.10.0]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
org.apache.bookkeeper.common.component.LifecycleComponentStack.lambda$start$2(LifecycleComponentStack.java:113)
~[org.apache.bookkeeper-bookkeeper-common-4.10.0.jar:4.10.0]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
com.google.common.collect.ImmutableList.forEach(ImmutableList.java:407)
[com.google.guava-guava-25.1-jre.jar:?]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
org.apache.bookkeeper.common.component.LifecycleComponentStack.start(LifecycleComponentStack.java:113)
[org.apache.bookkeeper-bookkeeper-common-4.10.0.jar:4.10.0]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
org.apache.bookkeeper.common.component.ComponentStarter.startComponent(ComponentStarter.java:83)
[org.apache.bookkeeper-bookkeeper-common-4.10.0.jar:4.10.0]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
org.apache.bookkeeper.server.Main.doMain(Main.java:229)
[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
org.apache.bookkeeper.server.Main.main(Main.java:203)
[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
org.apache.bookkeeper.proto.BookieServer.main(BookieServer.java:313)
[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: Caused by:
java.io.IOException:
org.apache.bookkeeper.bookie.BookieException$MetadataStoreException:
java.io.IOException: ZK exception checking and wait ephemeral znode
/ledgers/available/127.0.0.1:3181 expired
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
org.apache.bookkeeper.bookie.BookieStateManager.doRegisterBookie(BookieStateManager.java:269)
~[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
org.apache.bookkeeper.bookie.BookieStateManager.doRegisterBookie(BookieStateManager.java:254)
~[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
org.apache.bookkeeper.bookie.BookieStateManager$2.call(BookieStateManager.java:217)
~[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
org.apache.bookkeeper.bookie.BookieStateManager$2.call(BookieStateManager.java:213)
~[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_242]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
~[?:1.8.0_242]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
~[?:1.8.0_242]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_242]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: Caused by:
org.apache.bookkeeper.bookie.BookieException$MetadataStoreException:
java.io.IOException: ZK exception checking and wait ephemeral znode
/ledgers/available/127.0.0.1:3181 expired
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
org.apache.bookkeeper.discover.ZKRegistrationManager.doRegisterBookie(ZKRegistrationManager.java:247)
~[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
org.apache.bookkeeper.discover.ZKRegistrationManager.registerBookie(ZKRegistrationManager.java:219)
~[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
org.apache.bookkeeper.bookie.BookieStateManager.doRegisterBookie(BookieStateManager.java:266)
~[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
org.apache.bookkeeper.bookie.BookieStateManager.doRegisterBookie(BookieStateManager.java:254)
~[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
org.apache.bookkeeper.bookie.BookieStateManager$2.call(BookieStateManager.java:217)
~[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
org.apache.bookkeeper.bookie.BookieStateManager$2.call(BookieStateManager.java:213)
~[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_242]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
~[?:1.8.0_242]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
~[?:1.8.0_242]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_242]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: Caused by:
java.io.IOException: ZK exception checking and wait ephemeral znode
/ledgers/available/127.0.0.1:3181 expired
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
org.apache.bookkeeper.discover.ZKRegistrationManager.checkRegNodeAndWaitExpired(ZKRegistrationManager.java:205)
~[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
org.apache.bookkeeper.discover.ZKRegistrationManager.doRegisterBookie(ZKRegistrationManager.java:228)
~[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
org.apache.bookkeeper.discover.ZKRegistrationManager.registerBookie(ZKRegistrationManager.java:219)
~[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
org.apache.bookkeeper.bookie.BookieStateManager.doRegisterBookie(BookieStateManager.java:266)
~[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
org.apache.bookkeeper.bookie.BookieStateManager.doRegisterBookie(BookieStateManager.java:254)
~[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
org.apache.bookkeeper.bookie.BookieStateManager$2.call(BookieStateManager.java:217)
~[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
org.apache.bookkeeper.bookie.BookieStateManager$2.call(BookieStateManager.java:213)
~[org.apache.bookkeeper-bookkeeper-server-4.10.0.jar:4.10.0]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_242]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
~[?:1.8.0_242]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
~[?:1.8.0_242]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_242]
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: Caused by:
org.apache.zookeeper.KeeperException$NodeExistsException: KeeperErrorCode =
NodeExists for /ledgers/available/127.0.0.1:3181
Jul 15 21:47:06 server1 pulsar-bookie-start[6772]: at
org.apache.bookkeeper.discover.ZKRegistrationManager.checkRegNodeAndWaitExpired(ZKRegistrationManager.java:194)...```
----
2020-07-15 20:07:36 UTC - Matt Mitchell: @Addison Higham Ok false alarm.
Creating the tenant + namespace beforehand definitely resolves the issue. All
good. thanks!
----
2020-07-15 20:08:59 UTC - Zhenhao Li: it was stupid. all the nodes were adding
`bookieId: 127.0.0.1:3181` to zookeeper
----
2020-07-15 20:17:28 UTC - Ivy Rogatko: For the C/C++ pulsar client and derived
libraries why does the async produce return a messageId for the delivery
callback? If there’s a failure it seems likely that having the whole message
would be a lot more useful. I understand the space savings, but is there any way
to get the message from messageId if there’s a produce failure and you want to
retrieve the contents? The python example which uses bindings also seems to
indicate that the msg is returned in the callback from the
<https://pulsar.apache.org/api/python/#pulsar.Producer.send_async> example too
(it looks like it’s actually messageId from the source though). If I made a PR
to thread through the full message instead of the messageId for the callback
would that be accepted?
----
2020-07-15 20:37:06 UTC - Joshua Decosta: I’m trying to configure two
authenticationProviders, and I’ve noticed that the AuthenticationService’s
authenticateHttpRequest just runs through both configured providers. Has
anyone configured two AuthenticationProviders, and if so, how did you deal with
them both being run?
----
2020-07-15 20:38:30 UTC - Matteo Merli: You can always pass the message as a
closure on the callback function
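In the Java client, for example, the payload stays captured by the lambda
(rough sketch, assuming an existing `Producer<byte[]> producer`):
```byte[] payload = "hello".getBytes(java.nio.charset.StandardCharsets.UTF_8);
producer.sendAsync(payload).whenComplete((messageId, error) -> {
    if (error != null) {
        // the original payload is still in scope here, so there is no need
        // to look anything up by MessageId on failure
        System.err.println("send failed for payload: " + new String(payload));
    }
});```
The C++ client's sendAsync callback can capture the Message the same way via
the lambda capture list.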
----
2020-07-15 20:40:39 UTC - Ivy Rogatko: That doesn’t really work for libraries
built on top of the C/C++ client in other languages
----
2020-07-15 20:45:00 UTC - Addison Higham: did you re-create the namespace? If
the namespace already exists and you change the setting, it won't apply to
existing namespaces. Also, I might suggest something more like 64 or 128
bundles to start, perhaps even more.
Finally, as mentioned, if you are creating all these topics at once, that is a
different load profile. You will likely need to tune a number of things:
1. Zookeeper can be a bottleneck for creating thousands of topics; look for
large increases in request latency. If that is happening, you could probably
use more memory and faster disks.
2. Raise a number of broker settings like `zooKeeperOperationTimeoutSeconds`,
`bookkeeperClientTimeoutInSeconds`,
`managedLedgerMetadataOperationsTimeoutSeconds`, and
`maxConcurrentTopicLoadRequest`, maybe just starting by doubling them. Some of
those might not be ideal for production, but they are probably helpful for a
load test that creates so many topics so rapidly.
3. Finally, you will need to make some changes on the client: raise
`operationTimeout`, `ioThreads`, and `connectionsPerBroker`, for example along
the lines of the sketch below.
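A rough starting point (not tuned numbers; the service URL is a placeholder):
```import java.util.concurrent.TimeUnit;
import org.apache.pulsar.client.api.PulsarClient;

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://broker-host:6650")   // placeholder address
        .operationTimeout(120, TimeUnit.SECONDS)   // default is 30s
        .ioThreads(8)
        .connectionsPerBroker(4)
        .build();```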
----
2020-07-15 20:46:03 UTC - Addison Higham: hadn't had a chance to look, glad you
figured it out :slightly_smiling_face:
----
2020-07-15 20:52:21 UTC - Joshua Decosta: @Sijie Guo is the default behavior
when using multiple authenticators to just try to authenticate with all of
them? I’m confused about how to map an AuthenticationProvider to different
authData.
----
2020-07-15 21:04:15 UTC - Yezen: Thanks for the response @Addison Higham
So anytime I wanted to use a new key would I just create a new `Producer` and
then pass in my other private/public key pair into `.cryptoKeyReader`? Then on
the subscriber side I would pass in that same private/public key pair if I knew
which key was used to encrypt it?
Would Pulsar have any limit to the number of private/public key pairs in its
metadata?
----
2020-07-15 21:04:42 UTC - Addison Higham: the metadata is stored per message,
so no, I don't believe so
----
2020-07-15 21:20:34 UTC - Yezen: Awesome.
Does the first part of my earlier message make sense in how we would like to
encrypt our messages?
```So anytime I wanted to use a new key would I just create a new...```
Basically before moving forward on Pulsar I would need to make sure that we can
have end to end encryption per tenant on the messages that we send or else we
would have to look into another solution.
----
2020-07-16 00:36:06 UTC - thinker0: v2.6.0
----
2020-07-16 02:13:43 UTC - thinker0: sorry, OOM Issue
```ERROR org.apache.pulsar.PulsarBrokerStarter - -- Shutting down - Received
OOM exception: Cannot reserve 16777216 bytes of direct buffer memory
(allocated: 4280800255, limit: 4294967296)```
----
2020-07-16 02:14:38 UTC - Penghui Li: @Hiroyuki Yamada Could you please verify
on this branch?
<https://github.com/codelipenghui/incubator-pulsar/tree/penghui/fix-7455> ?
----
2020-07-16 02:16:27 UTC - Hiroyuki Yamada: @Penghui Li Sure.
----
2020-07-16 02:16:55 UTC - Penghui Li: Looks like the problem is related to a
race condition between adding a consumer and reading messages.
----
2020-07-16 02:17:41 UTC - Hiroyuki Yamada: Ok, interesting. Let me check. Get
back to you ASAP.
----
2020-07-16 02:18:19 UTC - Penghui Li: Ok
----
2020-07-16 02:21:34 UTC - Penghui Li: Oh sorry, it looks like the problem is still there.
----
2020-07-16 02:21:42 UTC - Penghui Li: Let me check it again
----
2020-07-16 02:36:54 UTC - Penghui Li: I have pushed a new commit on
<https://github.com/codelipenghui/incubator-pulsar/tree/penghui/fix-7455>.
----
2020-07-16 02:37:00 UTC - Penghui Li: You can try on this branch.
----
2020-07-16 02:56:18 UTC - Hiroyuki Yamada: Oh, OK. Just checked with the
previous one. It still happens. (It looks like it happens even more.)
----
2020-07-16 02:56:24 UTC - Hiroyuki Yamada: Let me check the new one.
----
2020-07-16 03:18:51 UTC - Hiroyuki Yamada: @Penghui Li Hmm, it still happens.
----
2020-07-16 03:24:21 UTC - Joe Francis: I am not clear on what you are trying to
do here. Why are tenants sharing producers? Producers and consumers run on the
tenant's hosts, not on Pulsar servers.
----
2020-07-16 04:15:50 UTC - Penghui Li: ok
----
2020-07-16 04:18:58 UTC - Matteo Merli: looks good, but we also need to declare
the schema of the return type
----
2020-07-16 04:20:54 UTC - Matteo Merli: btw: `return self.schema.encode(....)` -
we don't need to do the encoding directly; rather, we need to pass the schema
info to the Functions runtime so that the producers/consumers will get created
with that schema.
----
2020-07-16 05:24:41 UTC - Sijie Guo: It will try them one by one
----
2020-07-16 05:25:09 UTC - Sijie Guo: If one authentication provider can
authenticate it, it will return the authenticated data and pass it to the
authorization provider.
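(Concretely, that is the comma-separated `authenticationProviders` list in
broker.conf, e.g.
`authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderTls,org.apache.pulsar.broker.authentication.AuthenticationProviderToken`;
whichever provider succeeds for the presented auth data is the one whose
result gets used.)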
----
2020-07-16 05:59:50 UTC - Enrico Olivelli: Yes. You will see TLS traffic only
after starttls. There is no way to do it easily with openssl. You should check
how the binary protocol works. It won't be easy.
----
2020-07-16 06:41:00 UTC - Zhenhao Li: the first error was caused by
using `bookkeeper shell bookieformat -nonInteractive -deleteCookie` without
`-force`
----
2020-07-16 06:41:34 UTC - Zhenhao Li: I realized I had to use `bookkeeper shell
bookieformat -nonInteractive -force -deleteCookie` to clean up the previous
deployment
----
2020-07-16 09:04:43 UTC - charles: @Addison Higham: I replayed the scenario and
it indeed works now as suggested. As a follow-up, I noticed the "pulsar-admin
topics unload" feature has limited documentation on the
<http://pulsar.apache.org|pulsar.apache.org> site, and searching the web
returned only 7 results, all with very little context. Your answer gives a
practical use-case context: highly appreciated. Thank you very much!
----