2020-05-12 12:00:30 UTC - Chris: @Chris has joined the channel
----
2020-05-12 12:03:26 UTC - Zhiyuan: @Zhiyuan has joined the channel
----
2020-05-12 12:30:55 UTC - Vladimir Shchur: Hi! A question regarding deduplication - it looks like it only works for a single producer. Is there a way to achieve deduplication when there are many producers? A real-world case is any web application with several (autoscaled) servers where every server produces messages to the same topic. If one such server fails right after a message is sent, but before it confirms to the client, the client can send the same message to another server, which will create a duplicate message in the Pulsar topic.
----
2020-05-12 13:20:37 UTC - Adelina Brask: hi guys, I am using Netty HTTP to send JSON messages to Pulsar. After a while I am getting a mapping parsing failure when using the ElasticSearch sink: `13:17:54.805 [public/default/elastic-elastic-0] WARN  org.apache.pulsar.functions.instance.JavaInstanceRunnable - Failed to process result of message PulsarRecord(topicName=Optional[persistent://public/default/elastic], partition=0, message=Optional[org.apache.pulsar.client.impl.MessageImpl@7956b81], failFunction=org.apache.pulsar.functions.source.PulsarSource$$Lambda$224/732095742@18640020, ackFunction=org.apache.pulsar.functions.source.PulsarSource$$Lambda$223/115152073@c3f2ee0)` `java.lang.RuntimeException: ElasticsearchStatusException[Elasticsearch exception [type=mapper_parsing_exception, reason=failed to parse]]; nested: ElasticsearchException[Elasticsearch exception [type=not_x_content_exception, reason=Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes]];` Can anyone help?
----
2020-05-12 13:27:17 UTC - Ankit Lohia: @Ankit Lohia has joined the channel
----
2020-05-12 15:49:46 UTC - David Kjerrumgaard: Based on the exception, it looks like ES was unable to parse the message payload; the `not_x_content_exception` usually means the bytes are not valid JSON (or any other x-content format ES recognizes).
----
2020-05-12 15:57:45 UTC - David Kjerrumgaard: The backlog quota and the time-to-live setting apply to messages that have not been acknowledged by all subscriptions, while the retention policy applies to acknowledged messages and is based on the volume of data to retain.
ok_hand : Konstantinos Papalias
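For reference, both policies are set per namespace with `pulsar-admin`; a minimal sketch (namespace name and limits are hypothetical):
```
# Retain acknowledged messages for up to 24 hours / 1 GB
pulsar-admin namespaces set-retention public/default --time 24h --size 1G

# Cap the backlog of unacknowledged messages at 2 GB,
# blocking producers once the quota is hit
pulsar-admin namespaces set-backlog-quota public/default --limit 2G --policy producer_request_hold
```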
----
2020-05-12 16:10:30 UTC - Kai Levy: what happens to messages where there are no 
subscriptions present on the topic?
----
2020-05-12 16:49:22 UTC - David Kjerrumgaard: Only the retention policy would 
apply. However, I believe that in the case where there are no subscriptions, 
the messages are not  retained by default.
----
2020-05-12 16:51:54 UTC - Kai Levy: Ok, thanks. But if I set the namespace's retention policy to 24hrs / 1GB, those messages without a subscription _would_ be retained, and then if I created a subscription later I could read and ack them?
----
2020-05-12 16:58:45 UTC - David Kjerrumgaard: Two things....a subscription 
NEVER reads old messages. It only starts consuming messages from the time it is 
first created.
----
2020-05-12 17:00:06 UTC - David Kjerrumgaard: If you want that behavior, you should create a subscription on the topic (which will force the messages to be held) but not consume from it. This will be an "admin" sub whose purpose is to force the topic to hold the messages.
----
2020-05-12 17:00:18 UTC - David Kjerrumgaard: Then produce the messages.
----
2020-05-12 17:00:48 UTC - David Kjerrumgaard: after 24 hours you can then use 
the `Reader` interface to go back and read all the messages in the topic from 
the beginning.
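A minimal Java sketch of that replay pattern (the service URL and topic name are assumptions):
```
import org.apache.pulsar.client.api.*;

public class ReplayWithReader {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")          // assumed local broker
                .build();

        // The Reader starts from the earliest message still retained in the topic
        Reader<byte[]> reader = client.newReader()
                .topic("persistent://public/default/my-topic")  // hypothetical topic
                .startMessageId(MessageId.earliest)
                .create();

        while (reader.hasMessageAvailable()) {
            Message<byte[]> msg = reader.readNext();
            System.out.println(new String(msg.getData()));
        }

        reader.close();
        client.close();
    }
}
```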
----
2020-05-12 17:16:06 UTC - Marcio Martins: I am running into a bit of a pickle here... I have a 3-broker Pulsar cluster running on minikube. I wrote a simple Java app that was meant to bombard Pulsar with requests, using 12 threads with no sleeps, and it's pulling ~250 msg/sec per thread, or ~2,300 msg/sec overall. Message size is 5 bytes. I did the same with a tiny driver I wrote in Dlang (native, similar to C++) with naive blocking sockets and am pulling ~60,000 msg/sec. The cluster is exactly the same, and the topic is not partitioned. My Dlang driver did nothing fancy, just open a socket and send a message, wait for the reply, send the next, etc... What am I doing wrong with the Java client?
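No answer appears in this excerpt, but one usual suspect is the blocking `send()` call, which waits for the broker's ack on every message. A minimal sketch of an async, batched producer loop (service URL, topic, and message count are hypothetical):
```
import java.util.concurrent.CompletableFuture;
import org.apache.pulsar.client.api.*;

public class ProducerBench {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")
                .build();
        Producer<byte[]> producer = client.newProducer()
                .topic("persistent://public/default/bench")  // hypothetical topic
                .enableBatching(true)                        // amortize broker round trips
                .blockIfQueueFull(true)                      // back-pressure instead of erroring
                .create();

        byte[] payload = "hello".getBytes();                 // ~5-byte message
        CompletableFuture<MessageId> last = null;
        for (int i = 0; i < 1_000_000; i++) {
            // sendAsync() queues the message without waiting for each ack
            last = producer.sendAsync(payload);
        }
        producer.flush();                                    // push out the final batch
        if (last != null) {
            last.join();                                     // wait for the last ack
        }
        producer.close();
        client.close();
    }
}
```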
----
2020-05-12 17:17:55 UTC - Matteo Merli: @Ken Huang Cluster A will be comprised 
of more than 1 machine.
----
2020-05-12 17:18:47 UTC - Matteo Merli: Even if the *entire* cluster is down, it just means that data will get replicated later, when the cluster comes back online.
----
2020-05-12 17:19:27 UTC - Matteo Merli: Now, if you lose N copies of the data 
in cluster A, then you will lose data, but that happens in any case.
----
2020-05-12 17:20:54 UTC - Addison Higham: err... but you can create a subscription back at the earliest message at any time; you just need to be aware that any backlog quotas will then apply?
----
2020-05-12 17:27:23 UTC - David Kjerrumgaard: Yes, @Addison Higham I forgot 
about the new 
<https://pulsar.apache.org/api/client/2.5.0-SNAPSHOT/org/apache/pulsar/client/api/ConsumerBuilder.html#subscriptionInitialPosition-org.apache.pulsar.client.api.SubscriptionInitialPosition->
 feature :smiley:
----
2020-05-12 17:29:47 UTC - Kai Levy: yeah, so how would that work if you use 
earliest position?
----
2020-05-12 17:33:38 UTC - David Kjerrumgaard: You would get messages from 
oldest to newest. Minus any that were lost due to message retention policies.
----
2020-05-12 17:33:59 UTC - Addison Higham: yeah, so it is what you originally said. You can write to a topic with a retention policy. At any point in time, you can create a subscription either at the earliest messages still remaining in the topic or at a specific position in the topic; from there it is just like normal.
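For completeness, a minimal Java sketch of the feature David linked (topic and subscription names are hypothetical):
```
import org.apache.pulsar.client.api.*;

public class EarliestSubscription {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")          // assumed local broker
                .build();

        // Earliest makes the new subscription start from the oldest message
        // still retained in the topic, instead of only messages published
        // after the subscription is created.
        Consumer<byte[]> consumer = client.newConsumer()
                .topic("persistent://public/default/my-topic")  // hypothetical topic
                .subscriptionName("replay-sub")                 // hypothetical name
                .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest)
                .subscribe();

        Message<byte[]> msg = consumer.receive();
        consumer.acknowledge(msg);

        consumer.close();
        client.close();
    }
}
```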
----
2020-05-12 17:34:12 UTC - Raman Gupta: Man, I can't wait until I can migrate my 
Kafka workloads to Pulsar. Kafka is stupid: just had an issue in which 
thousands of old events got reprocessed because a consumer had not committed 
any offsets for a while (`offsets.retention.minutes`) due to inactivity on the 
topic, and so, *even though the consumer was still actively connected and 
consuming until literally seconds before*, the consumer group reset to earliest 
on a broker restart (for a change to, ironically, increase the 
`offsets.retention.minutes` value). Argh! Migration to Pulsar waiting on full 
transaction support and proper ordering on `Key_Shared` subscriptions 
(<https://github.com/apache/pulsar/issues/6554>)!
----
2020-05-12 17:37:03 UTC - Kai Levy: Ok, thank you!
----
2020-05-12 17:38:06 UTC - David Kjerrumgaard: @Addison Higham Don't you still 
need at least one "active" subscription on the topic in order to avoid the 
automatic deletion of the topic (controlled by the 
`brokerDeleteInactiveTopicsEnabled` config setting)? Or does the existence of 
messages inside the topic negate that deletion?  Never really tried that.
----
2020-05-12 17:45:01 UTC - Sijie Guo: 
<https://github.com/apache/pulsar-manager/tree/master/src#enable-https-forward>
----
2020-05-12 17:46:54 UTC - Sijie Guo: Currently, deduplication works with sequence ids. It doesn't do content-based deduplication. It is hard to keep sequence ids in order across many producers with different producer names, hence it is a bit hard to achieve your requirement with the current deduplication implementation.
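For context, a sketch of how deduplication is switched on (`brokerDeduplicationEnabled` lives in `broker.conf`; the namespace name is hypothetical):
```
# broker.conf: enable the deduplication feature cluster-wide
brokerDeduplicationEnabled=true

# or toggle it per namespace with the admin CLI
pulsar-admin namespaces set-deduplication public/default --enable
```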
----
2020-05-12 17:48:23 UTC - Sijie Guo: Retention will keep the messages alive, so `brokerDeleteInactiveTopicsEnabled` will not kick in.
+1 : David Kjerrumgaard
----
2020-05-12 17:50:20 UTC - Sijie Guo: ^ @Penghui Li
----
2020-05-12 18:01:00 UTC - Kai Levy: what mechanism purges messages based on 
retention policy? I tried testing where I set retention policy to 0s / 0MB, as 
well as TTL of 0 and no backlog policy. I then produced messages to a new topic 
with no subscription. Since there is no subscription, I would expect the 
retention policy to remove those messages quickly. But when I connected a 
consumer (after a minute) and subscribed to the earliest position, it still saw 
all of the messages.
----
2020-05-12 18:02:48 UTC - Sijie Guo: the last active ledger will not be purged.
----
2020-05-12 18:03:07 UTC - Sijie Guo: Only closed ledgers will be purged.
----
2020-05-12 18:03:41 UTC - Kai Levy: Ah, and what are the circumstances that a 
ledger is closed?
----
2020-05-12 18:03:48 UTC - Sijie Guo: there are a few settings related to ledger 
rollover.
----
2020-05-12 18:04:00 UTC - Sijie Guo: min and max time, and the max number of 
entries.
----
2020-05-12 18:04:20 UTC - Kai Levy: Excellent, thank you
----
2020-05-12 18:04:54 UTC - Sijie Guo: If you run a version prior to 2.5.1, ledger rollover will only happen when there is traffic. 2.5.1 includes a fix to make ledger rollover stick to the rollover settings.
+1 : Kai Levy, Konstantinos Papalias
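The settings Sijie is referring to live in `broker.conf`; a sketch with what I believe are the default values:
```
# A ledger may only be rolled over after this minimum age...
managedLedgerMinLedgerRolloverTimeMinutes=10
# ...and is rolled over once it reaches this maximum age
managedLedgerMaxLedgerRolloverTimeMinutes=240
# Roll over a ledger once it holds this many entries
managedLedgerMaxEntriesPerLedger=50000
```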
----
2020-05-12 18:05:10 UTC - Patrik Kleindl: The default for this setting is one week and it is documented behaviour. I don't think Pulsar can keep you safe from such things.
----
2020-05-12 18:06:32 UTC - Raman Gupta: @Patrik Kleindl I realize it's documented (which is why I was increasing it), but the behavior is stupid. The reason for the setting is to GC consumers that no longer exist, but the mechanism is based on the last commit, not on the last time the consumer was connected. That is stupid.
----
2020-05-12 18:15:21 UTC - Patrik Kleindl: It can be counter-intuitive, but it is about offsets, not keepalives.
Kafka was probably not intended for topics that don't have any traffic for such a long time :wink:
----
2020-05-12 18:17:34 UTC - Sijie Guo: I had a slide talking about the lifecycle 
of a Pulsar message. This would give you an idea how a message was actually 
deleted. 
<https://www.slideshare.net/streamnative/tgipulsar-ep-006-lifecycle-of-a-pulsar-message>
----
2020-05-12 18:18:11 UTC - Raman Gupta: Agreed, but when you gain wide acceptance of your product, you have to realize it's going to be used for more than the few use cases you created it for (and Confluent sells it that way), so I don't see that as an excuse.
----
2020-05-12 18:23:29 UTC - Patrik Kleindl: I would guess someone else has faced 
this problem before and solved it, given the wide acceptance.
Considering where we are discussing, I am interested if Pulsar has any similar 
issues regarding offset management? Or are all subscription positions kept 
indefinitely?
----
2020-05-12 18:25:47 UTC - Sijie Guo: New case study blog post from 
<https://www.tuya.com/|TuyaSmart> about its adoption story of Pulsar. 
<https://streamnative.io/blog/tech/2020-05-08-tuya-tech-blog/>
+1 : David Kjerrumgaard, Ali Ahmed
----
2020-05-12 18:28:02 UTC - Sijie Guo: There is no offset retention in Pulsar
----
2020-05-12 18:32:16 UTC - Sijie Guo: Pulsar doesn't delete cursors for subscriptions. Applications have to explicitly unsubscribe the subscription. Pulsar does allow you to configure a TTL so the system can automatically move cursors (acknowledging messages) for a given subscription. This ensures that a subscription that goes down doesn't build up a huge backlog.
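For example, a sketch of setting a one-hour TTL on a namespace (namespace and value are hypothetical):
```
# Messages older than 3600 seconds are acknowledged automatically
# for every subscription in the namespace
pulsar-admin namespaces set-message-ttl public/default --messageTTL 3600
```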
----
2020-05-12 18:39:51 UTC - Patrik Kleindl: The TTL would be the equivalent of explicitly setting your offset before restarting the consumer in Kafka, I guess, e.g. to 'now' or '10 minutes ago'.
The problematic feature is the cleanup of supposedly inactive consumer groups, which is probably not a problem in Pulsar unless you have a lot of temporary subscriptions which never do anything afterwards.
----
2020-05-12 18:41:48 UTC - Fred George: In fairness Kafka actually retains 
offsets as long as the group is active (i.e. has active subscribers that are 
sending heartbeats to the coordinator) and it's configurable as you noted. I 
suspect it's not what you think.
----
2020-05-12 18:44:21 UTC - Raman Gupta: Not quite @Fred George -- the consumer 
group itself retains the offsets on the client side, which is fine for as long 
as the consumer is still running. The problem is that the broker has eliminated 
the server-side offsets (despite the consumer still being connected), and so 
when the consumer group restarts (or the broker restarts), when the consumer 
reconnects it says to the broker, "give me my offset please" and the broker 
says "you don't exist, your offset is 0".
----
2020-05-12 18:48:28 UTC - Raman Gupta: But your understandable confusion is exactly why it's a poor design.
----
2020-05-12 19:01:15 UTC - Fred George: Actually, the consumer group keeps its 
offsets in the coordinator (caching the offsets topic), and the group sends 
periodic heartbeats to the coordinator to ensure liveness. The coordinator 
prunes consumer groups only once they have all stopped heartbeating for the 
previously mentioned setting. If you're running some super old version you may 
be right. But more broadly, most of us are here because we love distributed 
systems and like the benefits Pulsar provides. But we also respect the 
technologies in this space, particularly ones that laid the grounding for what 
many of us do every day. If you want to vent over your personal gripes it might 
be better to go do it on reddit. And if you do insist on badmouthing other 
apache projects, try and get your facts straight.
+1 : Patrik Kleindl
----
2020-05-12 19:11:47 UTC - Raman Gupta: You're right, and I apologize for my wording. It was a product of frustration over what is not the first such issue I've had with Kafka, but still, that is no excuse.

Looking at the factual elements of your response, you appear to be right for 
Kafka versions greater than 2.1.0 -- see 
<https://cwiki.apache.org/confluence/display/KAFKA/KIP-186%3A+Increase+offsets+retention+default+to+7+days>
 and <https://issues.apache.org/jira/browse/KAFKA-4682>. So, apparently this 
issue was fixed (and for exactly the reasons I mentioned).

Now I'm back to square 1 -- I don't know why my consumer group offsets reset.
----
2020-05-12 19:17:05 UTC - Fred George: All good. I hope your problem resolves.
----
2020-05-12 19:44:40 UTC - Raman Gupta: I wouldn't be surprised if there are 
some corner cases in which this still occurs. Just ran into *another* prod 
issue which was supposed to be fixed, but apparently is not: 
<https://issues.apache.org/jira/browse/KAFKA-8803?filter=-2>.
----
2020-05-12 21:19:19 UTC - Jeff Schneller: @Jeff Schneller has joined the channel
----
2020-05-12 21:22:26 UTC - Jeff Schneller: Trying to set up a standalone server with TLS for authentication and authorization. When starting standalone, it errors out because of a 401 Unauthorized on <http://localhost:8080/admin/v2/persistent/public/functions/assignments>
Any ideas?
----
2020-05-12 21:26:26 UTC - David Kjerrumgaard: Did you associate the client cert 
with a specific role? If so, did you authorize that role to perform the action 
you are requesting?
----
2020-05-12 21:27:47 UTC - Jeff Schneller: How would I authorize that role if I can't start the standalone server? The action being requested is performed by the standalone server itself during startup.
----
2020-05-12 21:36:15 UTC - Vladimir Shchur: Thank you! My plan was to use a hash of the content and put it into the sequence id, so Pulsar won't need to generate any ids, only to check them.
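For reference, the Java client does let a producer attach its own sequence id to each message; a minimal sketch of that API (topic name and hash function are hypothetical). Note the caveat above, though: the broker's dedup check expects sequence ids to increase per producer, so a raw, non-monotonic content hash will not behave like a counter:
```
import java.nio.charset.StandardCharsets;
import org.apache.pulsar.client.api.*;

public class CustomSequenceId {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")
                .build();
        Producer<byte[]> producer = client.newProducer()
                .topic("persistent://public/default/events")  // hypothetical topic
                .create();

        byte[] payload = "some event".getBytes(StandardCharsets.UTF_8);
        // Attach a caller-chosen sequence id. The broker remembers the last
        // sequence id per producer and drops messages whose id is not greater,
        // which is why a non-monotonic content hash does not fit the current
        // dedup model.
        producer.newMessage()
                .value(payload)
                .sequenceId(hash64(payload))  // hypothetical content hash
                .send();

        producer.close();
        client.close();
    }

    // Hypothetical 64-bit FNV-1a content hash
    static long hash64(byte[] data) {
        long h = 0xcbf29ce484222325L;
        for (byte b : data) {
            h = (h ^ (b & 0xff)) * 0x100000001b3L;
        }
        return h & Long.MAX_VALUE;  // keep it non-negative
    }
}
```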
----
2020-05-12 21:36:40 UTC - Kai Levy: Since the retention policy has both a size and a time limit, does it delete messages that match EITHER of the criteria, or only those matching BOTH?
----
2020-05-12 21:40:29 UTC - David Kjerrumgaard: The short answer is that you 
can't, not without modifying the configuration of the standalone cluster via 
the `standalone.conf` file. Are you using docker to run the standalone instance?
----
2020-05-12 21:42:34 UTC - David Kjerrumgaard: I have an example of a secure 
Pulsar standalone docker image that I built, which you can find here: 
<https://github.com/david-streamlio/pulsar-in-action/tree/master/docker-images/pulsar-standalone>
 if you are interested.
----
2020-05-12 21:43:10 UTC - David Kjerrumgaard: Either
----
2020-05-12 21:46:57 UTC - Raphael Enns: Hi. I am seeing an exception in the 
pulsar logs (running version 2.5.1 in standalone mode) when creating a new 
reader in the Java client with a startMessageId that doesn't exist. To 
reproduce it, I run pulsar, create a reader that listens on a topic and send a 
few messages to the topic. I then stop pulsar and delete its data directory and 
start it up again. The client gets a TimeoutException and pulsar throws a 
NullPointerException:
----
2020-05-12 21:47:09 UTC - Raphael Enns: 
----
2020-05-12 21:47:36 UTC - Raphael Enns: The code I use to create the reader is (Scala code using the Java Pulsar client):
```    val reader =
      pulsarClient
        .newReader()
        .topic(mailboxQueueInfo.privateMailboxId.value)
        .startMessageId(initialMessageId)
        .readerName(UUID.randomUUID().toString)
        .create()```
If I change it to the code below, it is able to recover from Pulsar's data directory being deleted.
```    val reader =
      pulsarClient
        .newReader()
        .topic(mailboxQueueInfo.privateMailboxId.value)
        .startMessageId(MessageId.latest)
        .readerName(UUID.randomUUID().toString)
        .create()

    reader.seek(initialMessageId)```

----
2020-05-12 21:47:43 UTC - Raphael Enns: First, is my change the proper way of 
creating a reader? Second, is the NullPointerException / timeout the expected 
behavior in this scenario? It seems like it could be handled better.
----
2020-05-12 22:00:41 UTC - Jeff Schneller: @David Kjerrumgaard - I did modify 
the configuration file to support TLS encryption, Authentication, and 
Authorization.   I am not running docker.
----
2020-05-12 22:04:19 UTC - Jeff Schneller: @David Kjerrumgaard In your GitHub repo it looks like the standalone.conf has authorization disabled. How is authorization handled then?
----
2020-05-12 22:15:48 UTC - Sijie Guo: Did you configure the brokerClientTLS* settings? Pulsar Functions act as a client to the brokers, so you need to configure those. You also need to make sure the `role` (common name) associated with this cert is added as a super user.
----
2020-05-12 22:16:49 UTC - Sijie Guo: This seems to be a bug. Can you create a 
Github issue for us?
----
2020-05-12 22:17:10 UTC - Sijie Guo: ^ @Penghui Li
----
2020-05-12 22:31:58 UTC - Jeff Schneller: Yes, I did set the brokerClientTLS 
and set the role in the superuser list
----
2020-05-12 22:32:43 UTC - Raphael Enns: Yeah, I'll create an issue.
----
2020-05-12 22:37:40 UTC - Jeff Schneller: ```
# Role names that are treated as "super-user", meaning they will be able to do 
all admin
# operations and publish/consume from all topics
superUserRoles=admin,localhost

# Authentication settings of the broker itself. Used when the broker connects 
to other brokers,
# either in same or other clusters
brokerClientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls
brokerClientAuthenticationParameters=tlsCertFile:/root/my-ca/client-certs/admin.cert.pem,tlsKeyFile:/root/my-ca/client-certs/admin.key-pk8.pem
brokerClientTrustCertsFilePath=/root/my-ca/certs/ca.cert.pem```
Here are my settings. You reference brokerClientTLS*, but these settings do not have TLS in the name. Do they need to?
----
2020-05-12 23:26:05 UTC - David Kjerrumgaard: @Jeff Schneller The version I checked in had it disabled, but it can easily be re-enabled by removing the comment from the `# authorizationEnabled=true` property. Once that is uncommented, you have to authorize roles via the Pulsar admin API, as described here:
<https://pulsar.apache.org/docs/en/2.5.1/security-authorization/#administer-tenants>
----
2020-05-12 23:31:11 UTC - David Kjerrumgaard: And to grant permissions on the 
individual namespaces, you can use 
<https://pulsar.apache.org/docs/en/2.5.1/pulsar-admin/#permissions>
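For example, a sketch granting a role produce/consume rights on a namespace (role and namespace names are hypothetical):
```
pulsar-admin namespaces grant-permission public/default \
  --role my-client-role \
  --actions produce,consume
```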
----
2020-05-12 23:49:11 UTC - Jeff Schneller: I realize all that.  I haven’t been 
able to even start the server once I turn on authorization.   I have my role 
set in the super user list as well.    Something is clearly wrong as the server 
should be able to start with a super user role defined before any 
authorizations are given using the pulsar-admin.  Is it because I started the 
server without authorization and now I turned it on?   Does it have to be done 
on first start?
----
2020-05-13 01:37:11 UTC - David Kjerrumgaard: Sorry. I didn't mean to patronize 
you. I can experiment with my docker image again with that property uncommented 
and see what happens. Perhaps, that is why I have it commented out?
----
2020-05-13 01:43:41 UTC - David Kjerrumgaard: `01:40:49.763 [main] ERROR org.apache.pulsar.functions.worker.WorkerService - Error Starting up in worker`
`org.apache.pulsar.client.admin.PulsarAdminException$NotAuthorizedException: Subscriber anonymous is not authorized to access this operation`
----
2020-05-13 01:44:23 UTC - David Kjerrumgaard: This is what I get when enabling authorization. It is coming from the Functions Worker, so I am missing some configs for that.
----
2020-05-13 01:44:32 UTC - David Kjerrumgaard: Are you seeing a similar error?
----
2020-05-13 01:45:09 UTC - Jeff Schneller: Yes. It has to do with the Functions Worker
----
2020-05-13 01:45:24 UTC - Jeff Schneller: I am seeing a 401 Unauthorized 
----
2020-05-13 01:46:03 UTC - David Kjerrumgaard: ok, so I can finally replicate the issue :smiley:
----
2020-05-13 01:46:22 UTC - David Kjerrumgaard: I will work on this a bit and get 
back to you either way. Wish me luck!
----
2020-05-13 01:47:25 UTC - Jeff Schneller: Glad I am not crazy.  Thanks.  
----
2020-05-13 01:48:52 UTC - Jeff Schneller: Good luck.  
----
2020-05-13 01:54:56 UTC - David Kjerrumgaard: I have it working for JWT, but it 
should be easy to translate to TLS
----
2020-05-13 01:54:59 UTC - David Kjerrumgaard:
```
#### Authorization ####
superUserRoles=admin,root
anonymousUserRole=admin
authorizationEnabled=true
clientAuthenticationPlugin=org.apache.pulsar.broker.authentication.AuthenticationProviderToken
clientAuthenticationParameters=file:///pulsar/manning/security/authentication/jwt/admin-token.txt
```
----
2020-05-13 01:56:03 UTC - David Kjerrumgaard: Adding the two `client*` properties, along with changing the `anonymousUserRole` to the role associated with the JWT token I specified, worked. HTH!
----
2020-05-13 02:02:13 UTC - Jeff Schneller: Would this be the TLS version?

`clientAuthenticationPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls`
`clientAuthenticationParameters=tlsCertFile:/root/my-ca/client-certs/admin.cert.pem,tlsKeyFile:/root/my-ca/client-certs/admin.key-pk8.pem`
`clientTrustCertsFilePath=/root/my-ca/certs/ca.cert.pem`
+1 : David Kjerrumgaard
----
2020-05-13 02:06:07 UTC - Jeff Schneller: didn't work
----
2020-05-13 02:06:23 UTC - Jeff Schneller: It kept the server up for about 15 seconds and then crashed
----
2020-05-13 02:07:20 UTC - David Kjerrumgaard: Same error or different one?
----
2020-05-13 02:09:24 UTC - Jeff Schneller: A different one, but along the same lines.
----
2020-05-13 02:09:44 UTC - Jeff Schneller:
```
22:06:09.245 [pulsar-web-57-1] INFO  org.eclipse.jetty.server.RequestLog - 127.0.0.1 - - [12/May/2020:22:06:09 -0400] "PUT /admin/v2/persistent/public/functions/metadata HTTP/1.1" 204 0 "-" "Pulsar-Java-v2.5.0" 14
22:06:09.473 [pulsar-client-io-76-1] INFO  org.apache.pulsar.client.impl.ConnectionPool - [[id: 0x1e49a269, L:/127.0.0.1:47454 - R:localhost/127.0.0.1:6650]] Connected to server
22:06:09.493 [pulsar-io-50-1] INFO  org.apache.pulsar.broker.service.ServerCnx - New connection from /127.0.0.1:47454
22:06:09.503 [pulsar-io-50-1] WARN  org.apache.pulsar.broker.service.ServerCnx - [/127.0.0.1:47454] Unable to authenticate
javax.naming.AuthenticationException: Client unable to authenticate with TLS certificate
        at org.apache.pulsar.broker.authentication.AuthenticationProviderTls.authenticate(AuthenticationProviderTls.java:86) ~[org.apache.pulsar-pulsar-broker-common-2.5.0.jar:2.5.0]
        at org.apache.pulsar.broker.authentication.OneStageAuthenticationState.<init>(OneStageAuthenticationState.java:46) ~[org.apache.pulsar-pulsar-broker-common-2.5.0.jar:2.5.0]
        at org.apache.pulsar.broker.authentication.AuthenticationProvider.newAuthState(AuthenticationProvider.java:76) ~[org.apache.pulsar-pulsar-broker-common-2.5.0.jar:2.5.0]
        at org.apache.pulsar.broker.service.ServerCnx.handleConnect(ServerCnx.java:560) [org.apache.pulsar-pulsar-broker-2.5.0.jar:2.5.0]
        at org.apache.pulsar.common.protocol.PulsarDecoder.channelRead(PulsarDecoder.java:160) [org.apache.pulsar-pulsar-common-2.5.0.jar:2.5.0]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [io.netty-netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [io.netty-netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [io.netty-netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.handler.flow.FlowControlHandler.dequeue(FlowControlHandler.java:190) [io.netty-netty-handler-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.handler.flow.FlowControlHandler.channelRead(FlowControlHandler.java:152) [io.netty-netty-handler-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [io.netty-netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [io.netty-netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [io.netty-netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:326) [io.netty-netty-codec-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:300) [io.netty-netty-codec-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [io.netty-netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [io.netty-netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [io.netty-netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1422) [io.netty-netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [io.netty-netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [io.netty-netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:931) [io.netty-netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [io.netty-netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:700) [io.netty-netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:635) [io.netty-netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:552) [io.netty-netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:514) [io.netty-netty-transport-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1050) [io.netty-netty-common-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [io.netty-netty-common-4.1.43.Final.jar:4.1.43.Final]
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [io.netty-netty-common-4.1.43.Final.jar:4.1.43.Final]
        at java.lang.Thread.run(Thread.java:834) [?:?]
22:06:09.508 [pulsar-client-io-76-1] WARN  org.apache.pulsar.client.impl.ClientCnx - [id: 0x1e49a269, L:/127.0.0.1:47454 - R:localhost/127.0.0.1:6650] Received error from server: Unable to authenticate
22:06:09.508 [pulsar-client-io-76-1] WARN  org.apache.pulsar.client.impl.ClientCnx - [id: 0x1e49a269, L:/127.0.0.1:47454 - R:localhost/127.0.0.1:6650] Received unknown request id from server: -1
22:06:09.509 [pulsar-client-io-76-1] INFO  org.apache.pulsar.client.impl.ClientCnx - [id: 0x1e49a269, L:/127.0.0.1:47454 ! R:localhost/127.0.0.1:6650] Disconnected
22:06:09.510 [pulsar-client-io-76-1] WARN  org.apache.pulsar.client.impl.ConnectionPool - [[id: 0x1e49a269, L:/127.0.0.1:47454 ! R:localhost/127.0.0.1:6650]] Connection handshake failed: org.apache.pulsar.client.api.PulsarClientException: Connection already closed
22:06:09.513 [pulsar-io-50-1] INFO  org.apache.pulsar.broker.service.ServerCnx - Closed connection from /127.0.0.1:47454
22:06:09.612 [pulsar-external-listener-77-1] WARN  org.apache.pulsar.client.impl.PulsarClientImpl - [topic: persistent://public/functions/assignments] Could not get connection while getPartitionedTopicMetadata -- Will try again in 100 ms
```
----
2020-05-13 02:10:33 UTC - Jeff Schneller: looks like it is trying to connect to 
port 6650 instead of 6651
----
2020-05-13 02:11:18 UTC - Jeff Schneller: I have the following:
----
2020-05-13 02:11:39 UTC - Jeff Schneller:
```
brokerServicePort=6650

# Port to use to serve HTTP requests
webServicePort=8080

# BROKER TLS

# Broker data port for TLS - By default TLS is disabled
brokerServicePortTls=6651

# Port to use to serve HTTPS requests - By default TLS is disabled
webServicePortTls=8443

# Enable TLS
tlsEnabled=true
```
----
2020-05-13 02:18:19 UTC - David Kjerrumgaard: This is what I have in my standalone.conf w.r.t. TLS and ports, etc.
----
2020-05-13 02:31:33 UTC - Jeff Schneller: same issue
----
2020-05-13 02:34:57 UTC - Jeff Schneller: Here is the error with stack trace
----
2020-05-13 02:35:25 UTC - Jeff Schneller: here is the standalone.conf that I am 
using
----
2020-05-13 02:52:06 UTC - Jeff Schneller: RESOLVED --- it's a known, still-open issue on GitHub:
<https://github.com/apache/pulsar/issues/5568>
----
2020-05-13 02:53:14 UTC - David Kjerrumgaard: I will take a look in the morning 
and try to replicate my success using TLS instead of JWT.
----
2020-05-13 02:54:19 UTC - Jeff Schneller: Thanks. Calling it quitting time myself
----
2020-05-13 02:55:44 UTC - Rattanjot Singh: Can this be done if you're running it the Docker way? If so, where do I find the application.properties in the Docker container?
----
2020-05-13 06:04:42 UTC - Ruian: Hi, if I enable compression on the producers, are messages stored on the bookies compressed as well?
----
2020-05-13 07:22:30 UTC - Sijie Guo: yes
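For reference, compression is configured on the producer; the compressed payload is what gets stored, and consumers decompress transparently. A minimal Java sketch (service URL and topic name are hypothetical):
```
import org.apache.pulsar.client.api.*;

public class CompressedProducer {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")
                .build();
        // Payloads are compressed client-side before being sent; the broker
        // and bookies store the compressed bytes as-is.
        Producer<byte[]> producer = client.newProducer()
                .topic("persistent://public/default/my-topic")  // hypothetical topic
                .compressionType(CompressionType.LZ4)
                .create();
        producer.send(new byte[1024]);
        producer.close();
        client.close();
    }
}
```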
----
2020-05-13 07:59:41 UTC - Ruian: Hi, the current Golang client v0.1.0 has a memory leak when decompressing messages. Could someone help review the issue and PR? <https://github.com/apache/pulsar-client-go/issues/244>
----
2020-05-13 08:29:40 UTC - Ruian: thank you
----
