2019-10-08 09:58:29 UTC - Vladimir Shchur: Hi! I'm working on adding support for Key_Shared subscriptions in the dotnet client, but when testing I get a broker error on consumer creation:

```
19:49:45.396 [ForkJoinPool.commonPool-worker-0] INFO  
org.apache.pulsar.broker.service.ServerCnx - [/172.17.0.13:34168] Created new 
producer: 
Producer{topic=PersistentTopic{topic=<persistent://public/default/topic-de885a14d0d64f8cb1dc365213cdbda0>},
 client=/172.17.0.13:34168, producerName=PartitionedProducer, producerId=1}
19:49:45.436 [pulsar-io-23-4] WARN  org.apache.pulsar.broker.service.ServerCnx 
- [/172.17.0.13:34168] Got exception UninitializedMessageException : Message 
was missing required fields.  (Lite runtime could not determine which fields 
were missing).
org.apache.pulsar.shaded.com.google.protobuf.v241.UninitializedMessageException:
 Message was missing required fields.  (Lite runtime could not determine which 
fields were missing).
        at 
org.apache.pulsar.shaded.com.google.protobuf.v241.AbstractMessageLite$Builder.newUninitializedMessageException(AbstractMessageLite.java:298)
 ~[org.apache.pulsar-protobuf-shaded-2.1.0-incubating.jar:2.1.0-incubating]
        at 
org.apache.pulsar.common.api.proto.PulsarApi$BaseCommand$Builder.build(PulsarApi.java:26302)
 ~[org.apache.pulsar-pulsar-common-2.3.2.jar:2.3.2]
        at 
org.apache.pulsar.common.api.PulsarDecoder.channelRead(PulsarDecoder.java:86) 
~[org.apache.pulsar-pulsar-common-2.3.2.jar:2.3.2]
```

Do I need to enable it on the broker? I'm running Pulsar 2.4.1. Same story with the Java client.
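For context, a minimal sketch of how Key_Shared is requested from the Java client on 2.4.x; the service URL, topic, and subscription name are placeholders:
```
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.Schema;
import org.apache.pulsar.client.api.SubscriptionType;

public class KeySharedSketch {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650") // placeholder
                .build();

        // Key_Shared is requested on the consumer builder; both the client libs and
        // the broker need to be versions that know this subscription type.
        Consumer<byte[]> consumer = client.newConsumer(Schema.BYTES)
                .topic("persistent://public/default/topic") // placeholder
                .subscriptionName("my-sub")
                .subscriptionType(SubscriptionType.Key_Shared)
                .subscribe();

        consumer.close();
        client.close();
    }
}
```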
----
2019-10-08 11:20:00 UTC - Vladimir Shchur: Just realized that the logs show version 2.3.2 of the libs, will try to update them
----
2019-10-08 11:52:50 UTC - Joose Helle: @Joe Francis "Basically any message 
after the oldest unacked message is kept. The delete cursor will not move past 
the oldest unacked message."
Is there any workaround for this? It would be great to be able to let Pulsar delete all the new acked messages after acking them (even if there are some old unacked messages).
----
2019-10-08 12:06:13 UTC - Raman Gupta: My consumers are reporting this warning 
on Pulsar 2.4.1:
```
org.apa.pul.cli.imp.UnAckedMessageTracker : 
[ConsumerBase{subscription='my-sub', consumerName='my-consumer', 
topic='<persistent://dev/my-ns/q>'}] 3600 messages have timed-out
```

It looks like <https://github.com/apache/pulsar/issues/2221> -- I'm also consuming from multiple threads -- but that was supposed to have been fixed. The only other difference might be that I have 3 threads, *in 3 processes*, consuming the topic. I also note that all 3 processes report the same number of timed-out messages (3600) in all cases, and all of them output that log at least 5-6 times, one of them 13 times. BTW, all messages are acked (backlog size is 0) and I use the defaults for message TTL, so as far as I know it should be impossible for any unacked messages to time out.
----
2019-10-08 13:04:52 UTC - Jack: Hi all, I have a Pulsar bookie that has been running for some time. But today it stopped and I can't start it again. This is the log from when I try to start it again. Can someone tell me what the problem is? Thank you
----
2019-10-08 13:16:12 UTC - Raman Gupta: On the face of it, it seems like the bookie can't connect to ZooKeeper. Have you checked the network and/or the ZK cluster?
----
2019-10-08 13:23:43 UTC - Jack: Everything works well
----
2019-10-08 14:26:38 UTC - Ming: Hi, if I introduce a new endpoint, where is the Swagger (or whatever doc generator is used) that needs to be updated?
-1 : Kendall Magesh-Davis, Jack
no_bell : Robin Bartholdson, Jack
face_with_symbols_on_mouth : Jack
hankey : Vladimir Shchur
----
2019-10-08 14:30:40 UTC - Ming: Apologies! I am new to the community/channel. I shouldn't have used the channel notification.
----
2019-10-08 14:32:39 UTC - Addison Higham: There are annotations placed on the API endpoints, and then you can run `src/gen-swagger.sh`
+1 : Ming, David Kjerrumgaard
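For illustration, a minimal sketch of the annotation style the generator picks up; the class, path, and response messages below are hypothetical, not an actual Pulsar admin endpoint:
```
import io.swagger.annotations.ApiOperation;
import io.swagger.annotations.ApiResponse;
import io.swagger.annotations.ApiResponses;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Hypothetical resource class, shown only to illustrate where the Swagger
// annotations go; the generator script then turns them into the spec.
@Path("/example")
public class ExampleEndpoint {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    @ApiOperation(value = "Describe what this endpoint returns")
    @ApiResponses(value = {
            @ApiResponse(code = 200, message = "Operation successful"),
            @ApiResponse(code = 404, message = "Resource not found")
    })
    public String getExample() {
        return "{}";
    }
}
```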
----
2019-10-08 14:35:14 UTC - Retardust: -
----
2019-10-08 15:20:18 UTC - Raman Gupta: Hmm, is it possible these messages were 
present in the topic before the subscription was created? But in that case, if 
my subscription was created at "latest" position (which it was), then these 
messages should have remained unacked and therefore unexpired.
----
2019-10-08 15:22:20 UTC - Addison Higham: okay, got myself into a bit of a pickle on a testing cluster, but a good learning experience. I left something running that filled up the disk, and now I can't make any of the API calls work to delete the subscription and the ledgers because none of the bookies are healthy.
----
2019-10-08 15:23:39 UTC - Raman Gupta: Can you make the disk bigger?
----
2019-10-08 15:24:01 UTC - Addison Higham: it seems like I could manually delete some ledgers using the BK CLI... but that will leave things inconsistent
----
2019-10-08 15:24:05 UTC - Addison Higham: is there another way to do that?
----
2019-10-08 15:24:40 UTC - Addison Higham: I could... but mostly I just want to 
recover by deleting the data (it isn't important, just some stress test data 
from another system)
----
2019-10-08 15:25:15 UTC - Raman Gupta: Sure, but making the disk bigger maybe 
allows you to bring up the bookies, so you can delete the data. Then you can 
resize back down.
----
2019-10-08 15:28:36 UTC - Addison Higham: yeah, I know what you mean, was just trying to be lazy. Technically, the disks aren't *completely* full, they just hit their limit so they marked themselves unhealthy, so maybe I could just try adjusting that limit...
----
2019-10-08 15:29:35 UTC - Raman Gupta: Lazy is good :slightly_smiling_face:
----
2019-10-08 15:33:56 UTC - Addison Higham: that worked BTW, raised the 
`diskUsageThreshold`, restarted my bookies, made the calls
+1 : Raman Gupta
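For anyone hitting the same thing: the knob lives in `bookkeeper.conf`. A sketch with illustrative (not recommended) values, assuming the stock BookKeeper disk-checker settings:
```
# bookkeeper.conf -- illustrative values only
# Fraction of disk usage at which a bookie treats the disk as full and goes
# read-only/unhealthy; temporarily raising it lets the bookies come back up
# long enough to delete data.
diskUsageThreshold=0.99
# Fraction at which the bookie starts warning about disk usage.
diskUsageWarnThreshold=0.95
```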
----
2019-10-08 15:34:04 UTC - Addison Higham: now I just need to clear up the disk 
by deleting the topics
----
2019-10-08 15:36:15 UTC - Raman Gupta: Note to self: always reserve some free 
space on a FS for cases like this! `tune2fs -m` to the rescue.
----
2019-10-08 15:36:16 UTC - Matteo Merli: @Joose Helle The data is stored as an 
immutable log, so it’s not possible to selectively physically delete messages 
individually.

Though, if the messages are acked, they won’t be dispatched again.
----
2019-10-08 15:47:51 UTC - Addison Higham: okay... deleted the topics, but it 
seems deletion of old ledgers isn't eager
----
2019-10-08 16:37:57 UTC - Xiaolei Zhu: Hi all, quick questions about using schemas in Functions. Is there a way to let Pulsar do the schema conversion? I am using a set of generic functions that select fields to perform operations based on settings passed through user config, somewhat like in SQL. Currently I am using Avro to manually decode each message and had to keep a redundant copy of the schemas for each topic so that this can be done automatically. I also had to create each topic before the functions are deployed to make this work. I wonder if there are examples of how to utilize the built-in Schema Registry in functions? Thank you!
----
2019-10-08 16:43:43 UTC - Matteo Merli: When you specify a POJO as the in/out types for a Java function, the types will automatically be used to enforce the schema. The schema type (JSON, Avro, …) will either be inferred from the topic schema, specified in the function configuration, or defaulted to JSON
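For illustration, a minimal sketch of a Java function relying on this; the POJO types are hypothetical:
```
import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

// Hypothetical POJOs; Pulsar derives and enforces the schema from these types.
class SensorReading {
    public String sensorId;
    public double value;
}

class SensorAverage {
    public String sensorId;
    public double average;
}

public class AveragingFunction implements Function<SensorReading, SensorAverage> {
    @Override
    public SensorAverage process(SensorReading input, Context context) {
        // The input is deserialized against the input topic's schema and the
        // returned POJO is serialized against the output topic's schema.
        SensorAverage out = new SensorAverage();
        out.sensorId = input.sensorId;
        out.average = input.value;
        return out;
    }
}
```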
----
2019-10-08 19:48:42 UTC - Xiaolei Zhu: Thank you. Is there a way to access this in the Python API? I am trying to decouple the functions from the schema, so it would be preferable not to have to specify the schema in the function definition. For example, I wrote a function to compute mean and variance, and I can specify or modify on the fly which field it computes from. This allows the same function to be applied to any topic. Ideally it would be fantastic to directly receive an Avro record object that is adapted to the schema of the topic. I did this in a somewhat roundabout way, but I'm wondering if there is an existing mechanism to do it.
----
2019-10-08 19:50:23 UTC - Xiaolei Zhu: Or perhaps, if there is a way to receive specific field(s) as function inputs, that would be awesome
----
2019-10-08 20:45:21 UTC - Addison Higham: speaking of schema... if I have an Avro schema already, I don't see an easy way to convert that to a Pulsar Avro schema...
----
2019-10-08 20:45:24 UTC - Addison Higham: am I missing something?
----
2019-10-08 20:47:29 UTC - Sijie Guo: Use `SchemaDefinition` to specify the Avro schema JSON definition
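A minimal sketch of that approach, assuming the 2.4.x client API; the schema JSON and the `Example` class are placeholders:
```
import org.apache.pulsar.client.api.Schema;
import org.apache.pulsar.client.api.schema.SchemaDefinition;

public class JsonDefSketch {

    // Hypothetical class matching the record below, used only for typing.
    public static class Example {
        public String id;
    }

    public static void main(String[] args) {
        // Placeholder Avro schema JSON; in practice this would come from an
        // existing .avsc file or org.apache.avro.Schema#toString().
        String avroJson = "{\"type\":\"record\",\"name\":\"Example\","
                + "\"fields\":[{\"name\":\"id\",\"type\":\"string\"}]}";

        Schema<Example> schema = Schema.AVRO(
                SchemaDefinition.<Example>builder()
                        .withJsonDef(avroJson)
                        .build());

        System.out.println(new String(schema.getSchemaInfo().getSchema()));
    }
}
```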
----
2019-10-08 20:50:17 UTC - Addison Higham: okay, that makes sense... but I think I know where my problem is going to be... I am trying to use Scala case classes without having to set the `@BeanProperty` annotation everywhere...
----
2019-10-08 20:50:40 UTC - Sijie Guo: oh ok
----
2019-10-08 20:50:55 UTC - Raman Gupta: I had this question too... why wouldn't 
Pulsar just accept instances of `org.apache.avro.Schema` in `SchemaDefinition`?
----
2019-10-08 20:52:49 UTC - Addison Higham: oh, maybe this will work... if you have the JSON definition, it takes that and just parses it instead of using the POJO, but I'm trying to see where it does the actual serialization
----
2019-10-08 21:00:46 UTC - Dragos: @Dragos has joined the channel
----
2019-10-08 21:10:49 UTC - Matteo Merli: &gt; Is there a way to access this in the Python API

The Functions Python API is not yet integrated with the schema
----
2019-10-08 21:21:00 UTC - Sijie Guo: because we don’t want to depend on an external library in the API module
----
2019-10-08 21:22:09 UTC - Raman Gupta: You could make it an optional (in gradle 
terms, an implementation) dependency.
----
2019-10-08 21:22:46 UTC - Sijie Guo: well, if you use it in an interface, it is a compile dependency, not a provided dependency
----
2019-10-08 21:23:41 UTC - Sijie Guo: also, Pulsar shades its dependencies: the Avro library in the pulsar-client module is shaded as well.
----
2019-10-08 21:24:03 UTC - Raman Gupta: Yeah, that's causing problems for me 
too, as the shaded avro version in Pulsar is too old.
----
2019-10-08 21:24:58 UTC - Sijie Guo: You can use the unshaded version, pulsar-client-original, although you might have to deal with dependency conflicts with your other libraries.
----
2019-10-08 21:25:27 UTC - Raman Gupta: &gt; well, if you use it in an interface, it is a compile dependency, not a provided dependency

It wouldn't be part of the `SchemaDefinition` interface -- only the `AvroSchema` implementation.
----
2019-10-08 21:26:15 UTC - Raman Gupta: As an aside, I saw 
`pulsar-client-original`, yes, and that was my plan. Would be nice not to have 
to do that extra work though.
----
2019-10-08 21:26:41 UTC - Sijie Guo: if you are talking about AvroSchema, we 
can expose a constructor for AvroSchema with avro.Schema if needed.
----
2019-10-08 21:28:37 UTC - Sijie Guo: &gt; if you have the JSON definition, it takes that and just parses it instead of using the POJO, but I'm trying to see where it does the actual serialization

@Addison Higham: if you supply the json def, Pulsar schema will not use the POJO to generate the schema definition. If you don’t supply the json def, Pulsar schema will generate the schema definition from the POJO. Avro then serializes the POJO based on the json definition.
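A small illustration of the two paths described above; the `Event` POJO is hypothetical:
```
import org.apache.pulsar.client.api.Schema;
import org.apache.pulsar.client.api.schema.SchemaDefinition;

public class TwoPathsSketch {

    // Hypothetical POJO.
    public static class Event {
        public String id;
    }

    public static void main(String[] args) {
        // No JSON def supplied: the Avro schema definition is generated from the POJO.
        Schema<Event> fromPojo = Schema.AVRO(Event.class);
        String generatedJsonDef = new String(fromPojo.getSchemaInfo().getSchema());

        // JSON def supplied: the definition is parsed as-is, not derived from the POJO.
        Schema<Event> fromJsonDef = Schema.AVRO(
                SchemaDefinition.<Event>builder()
                        .withJsonDef(generatedJsonDef)
                        .build());

        System.out.println(new String(fromJsonDef.getSchemaInfo().getSchema()));
    }
}
```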
----
2019-10-08 21:29:06 UTC - Raman Gupta: Yeah, that would be nice. I've already got all the compiled Avro files lying around. Better yet, allow passing in a SpecificRecord class and extract the schema from it directly.
----
2019-10-08 21:29:28 UTC - Raman Gupta: Or even any `GenericContainer`
----
2019-10-08 21:30:21 UTC - Sijie Guo: Can you file a GitHub issue for that?
----
2019-10-08 21:30:31 UTC - Raman Gupta: Yep
----
2019-10-08 21:33:00 UTC - Addison Higham: @Sijie Guo I think the problem, though, is that the default Avro serializers don't know what to do with case classes (but maybe I am wrong?) since they aren't POJOs. So I could do generic records that way, but if I want it to just take case classes, I need to implement my own SchemaReader and SchemaWriter that are smart about case classes
----
2019-10-08 21:33:16 UTC - Addison Higham: avro4s does all that... just need to 
plumb it in
----
2019-10-08 21:36:19 UTC - Raman Gupta: Does avro4s generate case classes that 
implement `SpecificRecord`? Because if so, then the default Avro serializer 
should know how to deal with it, IIUC.
----
2019-10-08 21:36:59 UTC - Addison Higham: na, it is more "I have case classes that I want to go in and out of Avro" and less code generation from Avro defs
----
2019-10-08 21:37:11 UTC - Raman Gupta: Ah
----
2019-10-08 21:43:35 UTC - Sijie Guo: Ah I see 
----
2019-10-08 21:44:04 UTC - Sijie Guo: Maybe check out how pulsar4s handles the 
case classes for Avro schema 
----
2019-10-08 21:44:45 UTC - Addison Higham: it just uses `@BeanProperty` for 
everything
----
2019-10-08 21:44:57 UTC - Addison Higham: and I have a few hundred props... sorta annoying to add :stuck_out_tongue:
----
2019-10-08 22:04:39 UTC - kelv: @kelv has joined the channel
----
2019-10-09 00:50:24 UTC - Nicolas Ha: 
<https://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerInterceptor.html>
`onAcknowledge` - where can I find what kinds of exceptions can be passed, and how do I distinguish a “good” acknowledge from a timeout or a nack?
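A sketch of the interceptor callbacks, assuming the 2.4.x `ConsumerInterceptor` interface: `onAcknowledge`/`onAcknowledgeCumulative` carry a `Throwable` that is null on success, while ack timeouts and negative acks arrive through separate callbacks (this is my reading of the interface, not official documentation):
```
import java.util.Set;
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.ConsumerInterceptor;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.MessageId;

public class LoggingInterceptor<T> implements ConsumerInterceptor<T> {

    @Override
    public void close() {}

    @Override
    public Message<T> beforeConsume(Consumer<T> consumer, Message<T> message) {
        return message;
    }

    @Override
    public void onAcknowledge(Consumer<T> consumer, MessageId messageId, Throwable exception) {
        // exception == null means the ack was sent successfully.
        System.out.println("ack " + messageId + (exception == null ? " ok" : " failed: " + exception));
    }

    @Override
    public void onAcknowledgeCumulative(Consumer<T> consumer, MessageId messageId, Throwable exception) {
        // Only the boundary MessageId of the cumulative ack is visible here.
    }

    @Override
    public void onNegativeAcksSend(Consumer<T> consumer, Set<MessageId> messageIds) {
        System.out.println("nacked: " + messageIds);
    }

    @Override
    public void onAckTimeoutSend(Consumer<T> consumer, Set<MessageId> messageIds) {
        System.out.println("ack timeout: " + messageIds);
    }
}
```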
----
2019-10-09 00:59:08 UTC - Nicolas Ha: also, given an `onAcknowledgeCumulative` `MessageId` - is there a way to find the non-acked `MessageId`s that will be acked?
----
2019-10-09 02:46:32 UTC - yici: @yici has joined the channel
----
