2020-04-18 11:48:55 UTC - Subash Kunjupillai: Hi,
 
We are evaluating Apache Pulsar for integration with our product, and we are 
currently trying to understand the life-cycle management of the Pulsar 
components. I would like to get the following points clarified:
 
• We see that BookKeeper is packaged and delivered with Pulsar. As part of 
Pulsar 2.5.0, the provided BookKeeper version is 4.10.0. May I please know the 
strategy on how the BookKeeper versions are planned to be upgraded for each 
release of Pulsar?
• From the Apache BookKeeper page, I understand that the stable version is 
4.9.2, but Pulsar 2.5.0 ships 4.10.0. Can I assume that Pulsar always delivers 
the latest version of Apache BookKeeper, irrespective of the stable build?
• As a product we always support upgrade and rollback for all components. I 
understand from the Apache BookKeeper documents that rollback is not supported 
out of the box. In that case, if we have to roll back Pulsar alone, can we 
deploy Apache BookKeeper separately and configure Pulsar to use that existing 
BookKeeper cluster instead of the one provided by Pulsar? I guess this would 
let us upgrade/roll back Pulsar independently of BookKeeper. Please let me 
know if this sounds like a good option.
• If we can deploy our own Apache BookKeeper & ZooKeeper, is there any 
supported compatibility matrix documented by Pulsar for BookKeeper and 
ZooKeeper?

----
2020-04-18 13:24:32 UTC - JG: Hello, does anybody know what the authentication 
parameters are for simple user/password credentials when accessing Pulsar?
----
2020-04-18 13:24:34 UTC - JG: ```PulsarAdmin admin = PulsarAdmin.builder()
        .authentication(authPluginClassName,authParams)
        .serviceHttpUrl(url)
        .tlsTrustCertsFilePath(tlsTrustCertsFilePath)
        .allowTlsInsecureConnection(tlsAllowInsecureConnection)
        .build();```

----
2020-04-18 13:24:50 UTC - JG: the name of the authPluginClassName; I could not 
find any example
----
2020-04-18 15:01:23 UTC - Matteo Merli: > May I please know the strategy on 
how the BookKeeper versions are planned to be upgraded for each release of 
Pulsar?
We generally track the latest release of BookKeeper.

> I understand that the stable version is 4.9.2, but Pulsar 2.5.0 has 
deployed 4.10.0
In this case I think 4.10.0 is already the stable version and the webpage was 
simply not updated.

> As a product we always support upgrade and rollback for all components. I 
understand from Apache BookKeeper documents that rollback is not supported out 
of the box.
Pulsar (with all its components) always guarantees rollback to the previous 
minor version (e.g. 2.5.1 -> 2.4.0).

For this to be feasible, BookKeeper needs to guarantee the same (and it does).

> If we can deploy our own Apache Bookkeeper & Zookeeper, is there any 
supported compatibility matrix documented by Pulsar for Bookkeeper and 
Zookeeper?
It's possible, though we don't have that matrix officially published. In 
general, it's very unlikely that we would depend on a new ZK or BK server-side 
feature that would make it incompatible with an older client.

Given the upgrade & rollback policy, we also need to ensure that we only 
switch ZK and BK between versions that guarantee that property.

The safe approach would be to check the ZK & BK versions used across minor 
Pulsar versions. That would be the "safe range" of versions you can use.

Of course, that doesn't mean other versions won't work.
----
2020-04-18 15:03:31 UTC - Matteo Merli: 
`org.apache.pulsar.client.impl.auth.AuthenticationBasic`

For the `authParams`:
• `userId`
• `password`
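
Putting that together with the builder snippet above, a minimal sketch (the 
service URL and credentials are placeholders, and this assumes basic 
authentication is enabled on the broker side):

```java
import org.apache.pulsar.client.admin.PulsarAdmin;

public class BasicAuthExample {
    public static void main(String[] args) throws Exception {
        // AuthenticationBasic takes the userId/password pair as a JSON
        // authParams string; values here are placeholders.
        String authPlugin = "org.apache.pulsar.client.impl.auth.AuthenticationBasic";
        String authParams = "{\"userId\":\"admin\",\"password\":\"secret\"}";

        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("https://broker.example.com:8443") // placeholder
                .authentication(authPlugin, authParams)
                .build();

        // Any admin call now carries the basic-auth credentials.
        System.out.println(admin.clusters().getClusters());
        admin.close();
    }
}
```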
----
2020-04-18 15:14:45 UTC - JG: thank you very much !
----
2020-04-18 15:14:53 UTC - JG: is there any documentation?
----
2020-04-18 15:25:41 UTC - JG: is it possible to have a working example with 
pulsar admin for <https://github.com/apache/pulsar-manager>?
----
2020-04-18 15:33:23 UTC - JG: I am asking because for the moment I always get 
this error: Caused by: 
org.apache.pulsar.shade.javax.ws.rs.ProcessingException: Remotely closed
----
2020-04-18 15:43:55 UTC - JG: Fixed the problem; I was pointing at a bad URL, 
but the exception should be clearer...
----
2020-04-18 15:44:33 UTC - JG: Does anybody know the request to get the data 
inside a certain topic? In fact I would like to retrieve the persisted messages
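
One way to read persisted messages back out (a sketch; the service URL and 
topic name below are placeholders) is the Reader API, which attaches without a 
subscription and replays from a chosen position:

```java
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.MessageId;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.Reader;

public class ReadPersisted {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650") // placeholder
                .build();

        // Replay every retained message from the start of the topic.
        Reader<byte[]> reader = client.newReader()
                .topic("persistent://public/default/my-topic") // placeholder
                .startMessageId(MessageId.earliest)
                .create();

        while (reader.hasMessageAvailable()) {
            Message<byte[]> msg = reader.readNext();
            System.out.println(msg.getMessageId() + " -> " + new String(msg.getData()));
        }
        reader.close();
        client.close();
    }
}
```

For a quick look from the CLI, `pulsar-admin topics peek-messages` with a 
subscription and a count can also show retained messages without consuming 
them.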
----
2020-04-18 17:04:59 UTC - Adelina Brask: SO... @Sijie Guo I got to the bottom 
of this. I can confirm that the offloader was not the issue. The problem was 
*the sink starting up in localrun mode but not in cluster mode*. I had tested 
localrun on 1 node. I then tested localrun on the other 2 nodes and got a "no 
certificate path" error. After I copied the ca.crt to 
/etc/pki/ca-trust/source/anchors, localrun worked on all nodes and the sink 
also started in cluster mode. Now my confusion is: *Why should we need a 
certificate anchor at /etc/pki/ca-trust/source/anchors for communication 
between servers in cluster mode, if we configure client and broker conf to use 
TLS and thus specify the cert path there?*
----
2020-04-18 17:08:39 UTC - Adelina Brask: I can see in the sink logs that the 
client has started with the default configuration:
```09:51:50.072 [pulsar-client-io-1-1] INFO  org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl - Pulsar client config: {
  "serviceUrl" : "pulsar://clstpulsar01.dmz23.local:6650",
  "authPluginClassName" : null,
  "authParams" : null,
  "operationTimeoutMs" : 30000,
  "statsIntervalSeconds" : 60,
  "numIoThreads" : 1,
  "numListenerThreads" : 1,
  "connectionsPerBroker" : 1,
  "useTcpNoDelay" : true,
  "useTls" : false,
  "tlsTrustCertsFilePath" : null,
  "tlsAllowInsecureConnection" : true,
  "tlsHostnameVerificationEnable" : false,
  "concurrentLookupRequest" : 5000,
  "maxLookupRequest" : 50000,
  "maxNumberOfRejectedRequestPerConnection" : 50,
  "keepAliveIntervalSeconds" : 30,
  "connectionTimeoutMs" : 10000,
  "requestTimeoutMs" : 60000,
  "initialBackoffIntervalNanos" : 100000000,
  "maxBackoffIntervalNanos" : 60000000000```
----
2020-04-18 17:10:04 UTC - Adelina Brask: But my client & broker conf is fully 
configured for TLS :slightly_smiling_face: Is Pulsar skipping my client.conf?
----
2020-04-18 18:52:06 UTC - Patrick Hemmer: @Patrick Hemmer has joined the channel
----
2020-04-18 18:58:16 UTC - Patrick Hemmer: Are there any lightweight 
producer-side queue solutions good for integrating with Pulsar?
For example, let's say I have a bunch of small client nodes with unreliable 
internet that send messages to Pulsar, so when they lose connection they need 
to buffer messages. I was thinking of using Redis Streams, but then I'd have 
to write another application which would read from Redis and write to Pulsar. 
I'm hoping for a cleaner solution that would allow direct integration from a 
lightweight queue into Pulsar without the need for a middleware app.
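
One thing worth noting here: the Pulsar Java client itself keeps a per-producer 
in-memory pending queue that can absorb short outages. A sketch with 
illustrative sizes (the service URL and topic are placeholders); it does not 
survive a process crash, so it is not a substitute for a durable local spool:

```java
import java.util.concurrent.TimeUnit;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;

public class BufferedProducer {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://broker.example.com:6650") // placeholder
                .build();

        Producer<byte[]> producer = client.newProducer()
                .topic("persistent://public/default/edge-events") // placeholder
                .maxPendingMessages(100_000)      // in-memory buffer while disconnected
                .blockIfQueueFull(true)           // backpressure instead of an exception
                .sendTimeout(0, TimeUnit.SECONDS) // never time out; retry until sent
                .create();

        // Queued locally and flushed once the connection comes back.
        producer.sendAsync("sensor-reading".getBytes());

        producer.flush();
        producer.close();
        client.close();
    }
}
```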
----
2020-04-18 20:05:45 UTC - Artur A: @Artur A has joined the channel
----
2020-04-18 20:07:49 UTC - JG: Is there any Java example of how to use Pulsar 
SQL (Presto) via the Java client? I could not find any unit test on GitHub
----
2020-04-18 20:21:23 UTC - JG: got it working via the SQL CLI (Presto), but 
when I execute a request there is a bug:
----
2020-04-18 20:21:39 UTC - JG: 
2020-04-18T20:20:38.074Z        WARN    statement-response-0    
com.facebook.presto.server.ThrowableMapper      Request failed for 
/v1/statement/20200418_202038_00004_n2dnt/1
java.lang.RuntimeException: Failed to get schemas from pulsar: Cannot cast 
org.glassfish.jersey.inject.hk2.Hk2InjectionManagerFactory to 
org.glassfish.jersey.internal.inject.InjectionManagerFactory
        at org.apache.pulsar.sql.presto.PulsarMetadata.listSchemaNames(PulsarMetadata.java:125)
        at com.facebook.presto.spi.connector.ConnectorMetadata.schemaExists(ConnectorMetadata.java:65)
        at com.facebook.presto.metadata.MetadataManager.lambda$schemaExists$0(MetadataManager.java:266)
        at java.util.stream.MatchOps$1MatchSink.accept(MatchOps.java:90)
        at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
        at java.util.Spliterators$ArraySpliterator.tryAdvance(Spliterators.java:958)
----
2020-04-18 20:34:04 UTC - JG: fixed if you use the latest docker image... it 
is a bug in 2.5.0
----
2020-04-18 21:00:10 UTC - JG: the Presto SQL client seems to be buggy as well; 
I executed the command: `presto:sample/groups> describe groupadded;`
I got this response: com.facebook.presto.spi.TableNotFoundException: Table 
'sample/groups.groupadded' not found
        at org.apache.pulsar.sql.presto.PulsarMetadata.getTableMetadata(PulsarMetadata.java:305)
        at org.apache.pulsar.sql.presto.PulsarMetadata.listTableColumns(PulsarMetadata.java:264)
----
2020-04-18 21:00:17 UTC - JG: But my table exists:
----
2020-04-18 21:00:54 UTC - JG: ```presto:sample/groups> show tables;
     Table
----------------
 accountadded
 accountremoved
 groupadded```
2020-04-18 21:04:02 UTC - JG: presto sql seems to be buggy...
----
2020-04-18 22:39:15 UTC - hello2888: @hello2888 has joined the channel
----
2020-04-18 22:58:34 UTC - hello2888: The command "mvn clean install -DskipTests 
-rf :pulsar-kafka-compat-client-test" was executed and ended in BUILD FAILURE. 
The error info is:
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.8.0:testCompile 
(default-testCompile) on project pulsar-kafka-compat-client-test: Compilation 
failure
[ERROR] 
/mnt/myshare/myGitLocal/pulsar/tests/pulsar-kafka-compat-client-test/src/test/java/org/apache/pulsar/tests/integration/compat/kafka/PulsarKafkaProducerThreadSafeTest.java:[24,41]
 symbol not found
----
2020-04-18 23:58:12 UTC - JG: Problem solved: topic names should always be 
lower case.
----
2020-04-19 00:49:27 UTC - JG: guys, how do you tell the difference between a 
positive ack and a negative ack in the event stream storage? (I enabled the 
option to retain acked messages as well.) I would like to know which ones have 
failed. If we cannot see it, should we add a custom value?
----
2020-04-19 00:53:57 UTC - JG: Also, is it mandatory to use Presto as the 
stream storage query layer? Isn't it possible to use another database type 
(with the same columns)?
----
2020-04-19 08:07:23 UTC - Subash Kunjupillai: @Adelina Brask, any thoughts?
----
2020-04-19 08:40:19 UTC - Subash Kunjupillai: > It's possible, though we 
don't have that matrix officially published
It would be great if this matrix could be documented, along with any 
restrictions or points to consider.

> In order for this to be feasible, BookKeeper needs to be guaranteeing that 
(and it does).
I understand from the BookKeeper community that rollback to the previous 
version is possible if the new features are not enabled after the upgrade. 
They are checking the possibility of updating the documentation accordingly.
----
