2018-02-06 07:07:43 UTC - Matteo Merli: @CJJ The fix for this was already 
merged in master. <https://github.com/apache/incubator-pulsar/pull/1173> You 
would get a `409` HTTP response if a producer is already connected with the same name
----
2018-02-06 18:29:32 UTC - prabal nandi: @prabal nandi has joined the channel
----
2018-02-06 18:40:49 UTC - prabal nandi: I am facing an issue while trying to run 
Pulsar on my Windows machine. I followed the steps mentioned on the getting 
started page and executed "bin/pulsar standalone" in a Git Bash shell, but I get 
the following error: "Error: Could not find or load main class 
org.apache.pulsar.PulsarStandaloneStarter". I haven't modified any classpath, 
so I'm not sure why the jars are not getting picked up
----
2018-02-06 18:43:52 UTC - Jaebin Yoon: What happens when I unload a 
particular bundle? Is the bundle unloaded and moved to other brokers gracefully 
as far as producer and consumer communication is concerned? I would like to 
understand the impact of unloading bundles on producers and consumers.
----
2018-02-06 18:44:01 UTC - Matteo Merli: I have zero experience on Windows 
(after Win98) and I don’t have access to it :slightly_smiling_face:. If it’s 
running with bash, can you try using the bash `-x` option to have it print the 
exact commands and variables?
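For example (a minimal sketch; the `grep` filter is just a convenience and it assumes you run it from the Pulsar distribution directory):
```
# Re-run the launcher with shell tracing so every command and variable
# expansion is printed, then look at how the classpath is assembled.
bash -x bin/pulsar standalone 2>&1 | grep -i classpath
```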
----
2018-02-06 18:50:22 UTC - prabal nandi: I updated the shell script by 
overriding the $pulsar_classpath variable with the fully-qualified path in 
Windows style, and it worked. But now I've hit another issue: an error while 
starting Pulsar standalone. This is the stack trace:

2018-02-07 00:17:27,147 - INFO  - [main:DbLedgerStorage@135] -  - Read Ahead Batch size: : 100
Exception in thread "main" java.io.IOException: Error open RocksDB database
        at org.apache.bookkeeper.bookie.storage.ldb.KeyValueStorageRocksDB.<init>(KeyValueStorageRocksDB.java:159)
        at org.apache.bookkeeper.bookie.storage.ldb.KeyValueStorageRocksDB.<init>(KeyValueStorageRocksDB.java:73)
        at org.apache.bookkeeper.bookie.storage.ldb.KeyValueStorageRocksDB$1.newKeyValueStorage(KeyValueStorageRocksDB.java:44)
        at org.apache.bookkeeper.bookie.storage.ldb.EntryLocationIndex.<init>(EntryLocationIndex.java:47)
        at org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage.initialize(DbLedgerStorage.java:138)
        at org.apache.bookkeeper.bookie.Bookie.<init>(Bookie.java:508)
        at org.apache.bookkeeper.proto.BookieServer.newBookie(BookieServer.java:308)
        at org.apache.bookkeeper.proto.BookieServer.<init>(BookieServer.java:116)
        at org.apache.pulsar.zookeeper.LocalBookkeeperEnsemble.runBookies(LocalBookkeeperEnsemble.java:198)
        at org.apache.pulsar.zookeeper.LocalBookkeeperEnsemble.start(LocalBookkeeperEnsemble.java:218)
        at org.apache.pulsar.PulsarStandaloneStarter.start(PulsarStandaloneStarter.java:152)
        at org.apache.pulsar.PulsarStandaloneStarter.main(PulsarStandaloneStarter.java:203)
Caused by: org.rocksdb.RocksDBException: Compression type LZ4 is not linked with the binary.
        at org.rocksdb.RocksDB.open(Native Method)
        at org.rocksdb.RocksDB.open(RocksDB.java:231)
        at org.apache.bookkeeper.bookie.storage.ldb.KeyValueStorageRocksDB.<init>(KeyValueStorageRocksDB.java:155)
        ... 11 more
----
2018-02-06 18:52:12 UTC - Matteo Merli: > What happens when I unload a 
particular bundle? Is the bundle unloaded and moved to other brokers gracefully 
as far as producer and consumer communication is concerned? I would like to 
understand the impact of unloading bundles on producers and consumers.

Yes, there is a graceful close of all the topics for that bundle and then the 
broker releases ownership. The client retries will trigger the immediate 
reassignment to a new broker, based on the current traffic situation. 
The graceful close is done in order to reduce the “recovery time” when the 
topic is reloaded on a new broker. The procedure is roughly: 
 1. Mark the “bundle” as unavailable
 2. Mark all topics as closed, not accepting any new producer/consumer
 3. Disconnect all producers/consumers by sending a 
`CloseProducer`/`CloseConsumer` command from broker to clients, leaving the TCP 
connection untouched
 4. Start closing all managed ledgers (the storage abstraction) for the topics 
in the bundle, in parallel. Closing involves finalizing the topic state 
(last entry persisted, last messages acked, and so on)
 5. When all topics are closed, release ownership of the bundle

Since we try to keep the # of topics per bundle <= 1000, the overall failover 
time is bounded and should typically be within 300ms in most cases
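For reference, an unload like this can be triggered per bundle from the admin CLI; a minimal sketch, assuming the 1.x-style `property/cluster/namespace` naming and an illustrative bundle range:
```
# Unload one bundle of a namespace: the owning broker gracefully closes its
# topics, releases ownership, and clients reconnect to the newly assigned broker.
bin/pulsar-admin namespaces unload my-prop/us-west/my-ns --bundle 0x00000000_0x40000000

# The namespace policies (including its current bundle ranges) can be inspected with:
bin/pulsar-admin namespaces policies my-prop/us-west/my-ns
```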
+1 : Jaebin Yoon
----
2018-02-06 18:56:47 UTC - Matteo Merli: > Caused by: 
org.rocksdb.RocksDBException: Compression type LZ4 is not linked with the 
binary.

Uhm, it seems the RocksDB JNI bindings don’t come with LZ4 included in the 
Windows build
----
2018-02-06 19:01:32 UTC - prabal nandi: I am not using any different build; 
it's the same binary available on the Pulsar web page 
(pulsar-1.21.0-incubating-bin.tar.gz). Any fix or workarounds? Sorry, I started 
playing with Pulsar today and have already run into so many issues.
----
2018-02-06 19:07:17 UTC - Matteo Merli: Sorry about that, I don’t think many 
people have been testing it with Windows. I don’t have an immediate solution 
for the LZ4 library problem that doesn’t involve any code change
----
2018-02-06 19:56:59 UTC - Jaebin Yoon: How can I clean up all bookies and 
ledger data? I screwed up the metadata by bringing many bookies up and down 
(different machines), so lots of ledgers are just dead. Is there any way to 
clean up if I want a clean start without losing the current topics? For 
example, if I run "bookkeeper autorecovery" I see lots of errors since it tries 
to talk to the bookies that are gone.
----
2018-02-06 19:57:57 UTC - Jaebin Yoon: Will this bookkeeper autorecovery 
eventually clean this up?
----
2018-02-06 20:00:31 UTC - Matteo Merli: If the data is gone, autorecovery won’t 
help. The topic delete should take care of it (if it doesn’t fail for the same 
missing-data reason)
----
2018-02-06 20:01:58 UTC - Jaebin Yoon: I see. Deleting the topic failed to clean up 
for some reason. So I guess I just need to stop all bookies and brokers, 
remove the znodes, and start over.
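For what it's worth, a rough sketch of that "wipe and start over" path; the directory and znode paths below are assumptions based on default configs, so verify them against your bookkeeper.conf and ZooKeeper setup before deleting anything:
```
# 1. Stop all brokers and bookies first.

# 2. Wipe BookKeeper's ledger metadata in ZooKeeper (drops references to dead bookies):
bin/bookkeeper shell metaformat

# 3. On each bookie host, remove the on-disk bookie data
#    (paths come from journalDirectory / ledgerDirectories in bookkeeper.conf):
rm -rf /path/to/bookkeeper/journal /path/to/bookkeeper/ledgers

# 4. Clear Pulsar's managed-ledger metadata so brokers no longer reference deleted ledgers:
bin/pulsar zookeeper-shell -server localhost:2181 rmr /managed-ledgers
```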
----
2018-02-06 20:03:14 UTC - Matteo Merli: Ok, deleting should actually go through 
in these cases, so we need to fix that
----
2018-02-06 23:39:12 UTC - Matteo Merli: @Jaebin Yoon So far I wasn’t able to 
reproduce the delete topic problems. In my env, the delete is completing 
successfully even if no bookies are available. Tried both on simple topics and 
partitions
----
2018-02-06 23:48:24 UTC - Jaebin Yoon: @Matteo Merli ok. Thanks for trying. 
Maybe my zookeeper znodes are messed up for some reason.
----
2018-02-06 23:49:16 UTC - Matteo Merli: Can you share the errors that you get 
when deleting? Both on the client and the brokers
----
2018-02-06 23:50:16 UTC - Jaebin Yoon: I don't think I get errors when deleting, 
but there are leftovers after the delete. I'll try one more time to see if I can 
reproduce it in my current env.
----
2018-02-06 23:52:03 UTC - Matteo Merli: Ok, if some partitions are left over, 
can you also try to delete the individual partitions and see if that succeeds?
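A hedged example of deleting a leftover partition directly with the 1.x admin CLI (the topic names here are illustrative):
```
# Delete a single leftover partition of a partitioned topic:
bin/pulsar-admin persistent delete persistent://my-prop/us-west/my-ns/my-topic-partition-0

# For comparison, deleting the whole partitioned topic:
bin/pulsar-admin persistent delete-partitioned-topic persistent://my-prop/us-west/my-ns/my-topic
```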
----
2018-02-06 23:52:14 UTC - Jaebin Yoon: ok i will
----
