Potential message incompatibility between versions 0.11 and 2.2.1 of Kafka?

2020-04-10 Thread Malcolm McFarland
Hey folks,

We're using Samza 0.14.1 for a project, with Kafka 0.11.0.2 as a client.
We're also using Amazon's MSK for our clusters, which is currently set to
Kafka broker version 2.2.1. Occasionally, when starting Samza containers,
we'll see the beginnings of an attempt to read Samza's checkpoint stream,
but this will halt after the first message with no errors:

2020-04-09T19:24:14.968Z Validating offset 0 for topic and partition
[__samza_checkpoint_ver_1_for__1,0]
2020-04-09T19:24:14.969Z Able to successfully read from offset 0 for topic
and partition [__samza_checkpoint_ver_1_for__1,0]. Using it to
instantiate consumer.
2020-04-09T19:24:14.974Z Reading checkpoint for taskName
SystemStreamPartition [kafka, , ]

During debugging, I accidentally used two different versions of kafkacat, and
noticed that librdkafka 1.3.0 *can* read the checkpoint stream, but
librdkafka 0.8.6 *cannot*. Could there be a version incompatibility with
how the data is being stored on the Kafka server?
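
In case it's useful for debugging, here is a minimal Java AdminClient sketch
to inspect the checkpoint topic's message.format.version (the bootstrap
address is a placeholder; the topic name is copied from the log above). My
working assumption is that librdkafka 0.8.6 predates the v2 record-batch
format introduced in 0.11, so a topic on the newer format would explain the
difference:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.Config;
    import org.apache.kafka.common.config.ConfigResource;

    public class CheckMessageFormat {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // placeholder bootstrap address
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                // topic name copied from the log above
                ConfigResource topic = new ConfigResource(
                    ConfigResource.Type.TOPIC, "__samza_checkpoint_ver_1_for__1");
                Config cfg = admin.describeConfigs(Collections.singleton(topic))
                                  .all().get().get(topic);
                // the topic-level message.format.version falls back to the
                // broker's log.message.format.version when not set explicitly
                System.out.println(cfg.get("message.format.version").value());
            }
        }
    }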

Cheers,
Malcolm McFarland
Cavulus


Re: [kafka-clients] Re: [VOTE] 2.5.0 RC3

2020-04-10 Thread Israel Ekpo
+1 (non-binding)

Used the following environment in my validation of the release artifacts:
Ubuntu 18.04, OpenJDK 11, Scala 2.13.1, Gradle 5.6.2

Verified GPG Signatures for all release artifacts
Verified md5, sha1, and sha512 checksums for each artifact
Checked Scala and Java Docs
Ran Unit and Integration Tests successfully

One comment: the following release artifacts were initially not directly
reachable for validation, unlike the other release artifacts.

* Documentation:
https://kafka.apache.org/25/documentation.html

* Protocol:
https://kafka.apache.org/25/protocol.html

If we can improve that in future releases that would be great.

Thanks for running the release, David.

On Fri, Apr 10, 2020 at 1:55 PM Manikumar  wrote:

> Hi David,
>
> +1 (binding)
>
> - Verified signatures, artifacts,  Release notes
> - Built from sources, ran tests
> - Ran core/streams quick start for Scala 2.13 binary, ran few manual tests
> - Verified docs
>
> As discussed offline, we need to add upgrade instructions to 2.5 docs.
>
> Thanks for driving the release.
>
> Thanks,
> Manikumar
>
> On Fri, Apr 10, 2020 at 7:53 PM Bill Bejeck  wrote:
>
>> Hi David,
>>
>> +1 (non-binding) Verified signatures, built jars from source, ran unit and
>> integration tests, all passed.
>>
>> Thanks for running the release.
>>
>> -Bill
>>
>> On Wed, Apr 8, 2020 at 10:10 AM David Arthur  wrote:
>>
>> > Passing Jenkins build on 2.5 branch:
>> > https://builds.apache.org/job/kafka-2.5-jdk8/90/
>> >
>> > On Wed, Apr 8, 2020 at 12:03 AM David Arthur  wrote:
>> >
>> >> Hello Kafka users, developers and client-developers,
>> >>
>> >> This is the fourth candidate for release of Apache Kafka 2.5.0.
>> >>
>> >> * TLS 1.3 support (1.2 is now the default)
>> >> * Co-groups for Kafka Streams
>> >> * Incremental rebalance for Kafka Consumer
>> >> * New metrics for better operational insight
>> >> * Upgrade Zookeeper to 3.5.7
>> >> * Deprecate support for Scala 2.11
>> >>
>> >> Release notes for the 2.5.0 release:
>> >>
>> https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/RELEASE_NOTES.html
>> >>
>> >> *** Please download, test and vote by Friday April 10th 5pm PT
>> >>
>> >> Kafka's KEYS file containing PGP keys we use to sign the release:
>> >> https://kafka.apache.org/KEYS
>> >>
>> >> * Release artifacts to be voted upon (source and binary):
>> >> https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/
>> >>
>> >> * Maven artifacts to be voted upon:
>> >> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>> >>
>> >> * Javadoc:
>> >> https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/javadoc/
>> >>
>> >> * Tag to be voted upon (off 2.5 branch) is the 2.5.0 tag:
>> >> https://github.com/apache/kafka/releases/tag/2.5.0-rc3
>> >>
>> >> * Documentation:
>> >> https://kafka.apache.org/25/documentation.html
>> >>
>> >> * Protocol:
>> >> https://kafka.apache.org/25/protocol.html
>> >>
>> >> Successful Jenkins builds to follow
>> >>
>> >> Thanks!
>> >> David
>> >>
>> >
>> >
>> > --
>> > David Arthur
>> >


Re: Kafka Streams endless rebalancing

2020-04-10 Thread John Roesler
Hey Alex,

Huh.

Unprefixed configs apply to all consumers, but in this case that doesn't
matter, because only the "main" consumer participates in group
management (so the setting effectively applies only to the main consumer).

So you actually have max.poll.interval.ms set to Integer.MAX_VALUE,
which amounts to 25 days? I agree, in that case it doesn't seem like
it could be a slow batch. In fact, it couldn't be anything related to
polling, since you see rebalances sooner than 25 days.

If you have the broker logs, they'll contain the reason for the rebalance.
The only other thing I can think of that causes rebalances is failing to 
heartbeat. What do you have for session.timeout.ms and
heartbeat.interval.ms?
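
For reference, a minimal sketch of how the config prefixes work (the
application id, bootstrap address, and the values themselves are only
placeholders):

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    public class StreamsConsumerConfigSketch {
        public static Properties streamsProps() {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");         // placeholder
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092"); // placeholder
            // unprefixed: applied to every consumer Streams creates
            props.put("session.timeout.ms", "30000");
            props.put("heartbeat.interval.ms", "10000");
            // prefixed: only the "main" consumer, the one doing group management
            props.put(StreamsConfig.mainConsumerPrefix("max.poll.interval.ms"), "600000");
            // prefixed: only the restore consumer (e.g. smaller restore batches)
            props.put(StreamsConfig.restoreConsumerPrefix("max.poll.records"), "200");
            return props;
        }
    }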

If anyone else has any ideas, please jump in.

Thanks,
-John

On Fri, Apr 10, 2020, at 14:55, Alex Craig wrote:
> Thanks John, I double-checked my configs and I've actually got the
> max.poll.interval.ms set to the max (not prefixed with anything so
> presumably that’s the “main” consumer).  So I think that means the problem
> isn’t due to a single batch of messages not getting processed/committed
> within the polling cycle right?  I guess what I’m wondering is, could the
> OVERALL length of time needed to fully restore the state stores (which
> could be multiple topics with multiple partitions) be exceeding some
> timeout or threshold?  Thanks again for any ideas,
> 
> 
> 
> Alex C
> 
> 
> On Thu, Apr 9, 2020 at 9:36 AM John Roesler  wrote:
> 
> > Hi Alex,
> >
> > It sounds like your theory is plausible. After a rebalance, Streams needs
> > to restore its stores from the changelog topics. Currently, Streams
> > performs this restore operation in the same loop that does processing and
> > polls the consumer for more records. If the restore batches (or the
> > processing) take too long, Streams won’t be able to call Consumer#poll (on
> > the “main” consumer) within the max.poll.interval, which causes the
> > Consumer’s heartbeat thread to assume the instance is unhealthy and stop
> > sending heartbeats, which in turn causes another rebalance.
> >
> > You could try either adjusting the max poll interval for the _main_
> > consumer or decreasing the batch size for the _restore_ consumer to make
> > sure Streams can call poll() frequently enough to stay in the group. There
> > are prefixes you can add to the consumer configuration portions to target
> > the main or restore consumer.
> >
> > Also worth noting, we’re planning to change this up pretty soon, so that
> > restoration happens in a separate thread and doesn’t block polling like
> > this.
> >
> > I hope this helps!
> > -John
> >
> > On Thu, Apr 9, 2020, at 08:33, Alex Craig wrote:
> > > Hi all, I’ve got a Kafka Streams application running in a Kubernetes
> > > environment.  The topology on this application has 2 aggregations (and
> > > therefore 2 Ktables), both of which can get fairly large – the first is
> > > around 200GB and the second around 500GB.  As with any K8s platform, pods
> > > can occasionally get rescheduled or go down, which of course will cause
> > my
> > > application to rebalance.  However, what I’m seeing is the application
> > will
> > > literally spend hours rebalancing, without any errors being thrown or
> > other
> > > obvious causes for the frequent rebalances – all I can see in the logs is
> > > an instance will be restoring a state store from the changelog topic,
> > then
> > > suddenly it will have its partitions revoked and begin the join-group
> > > process all over again.  (I’m running 10 pods/instances of my app, and I
> > > see this same pattern in each instance)  In some cases it never really
> > > recovers from this rebalancing cycle – even after 12 hours or more - and
> > > I’ve had to scale down the application completely and start over by
> > purging
> > > the application state and re-consuming from earliest on the source topic.
> > > Interestingly, after purging and starting from scratch the application
> > > seems to recover from rebalances pretty easily.
> > >
> > > The storage I’m using is a NAS device, which admittedly is not
> > particularly
> > > fast.  (it’s using spinning disks and is shared amongst other tenants) As
> > > an experiment, I’ve tried switching the k8s storage to an in-memory
> > option
> > > (this is at the k8s layer - the application is still using the same
> > RocksDB
> > > stores) to see if that helps.  As it turns out, I never have the
> > rebalance
> > > problem when using an in-memory persistence layer.  If a pod goes down,
> > the
> > > application spends around 10 - 15 minutes rebalancing and then is back to
> > > processing data again.
> > >
> > > At this point I guess my main question is: when I’m using the NAS storage
> > > and the state stores are fairly large, could I be hitting some timeout
> > > somewhere that isn’t allowing the restore process to complete, which then
> > > triggers another rebalance?  In other words, the restore process is
> > simply
> > taking too long given the amount of data needed to restore and the slow
> > storage?

Re: Kafka Streams endless rebalancing

2020-04-10 Thread Alex Craig
Thanks John, I double-checked my configs and I've actually got the
max.poll.interval.ms set to the max (not prefixed with anything so
presumably that’s the “main” consumer).  So I think that means the problem
isn’t due to a single batch of messages not getting processed/committed
within the polling cycle right?  I guess what I’m wondering is, could the
OVERALL length of time needed to fully restore the state stores (which
could be multiple topics with multiple partitions) be exceeding some
timeout or threshold?  Thanks again for any ideas,



Alex C


On Thu, Apr 9, 2020 at 9:36 AM John Roesler  wrote:

> Hi Alex,
>
> It sounds like your theory is plausible. After a rebalance, Streams needs
> to restore its stores from the changelog topics. Currently, Streams
> performs this restore operation in the same loop that does processing and
> polls the consumer for more records. If the restore batches (or the
> processing) take too long, Streams won’t be able to call Consumer#poll (on
> the “main” consumer) within the max.poll.interval, which causes the
> Consumer’s heartbeat thread to assume the instance is unhealthy and stop
> sending heartbeats, which in turn causes another rebalance.
>
> You could try either adjusting the max poll interval for the _main_
> consumer or decreasing the batch size for the _restore_ consumer to make
> sure Streams can call poll() frequently enough to stay in the group. There
> are prefixes you can add to the consumer configuration portions to target
> the main or restore consumer.
>
> Also worth noting, we’re planning to change this up pretty soon, so that
> restoration happens in a separate thread and doesn’t block polling like
> this.
>
> I hope this helps!
> -John
>
> On Thu, Apr 9, 2020, at 08:33, Alex Craig wrote:
> > Hi all, I’ve got a Kafka Streams application running in a Kubernetes
> > environment.  The topology on this application has 2 aggregations (and
> > therefore 2 Ktables), both of which can get fairly large – the first is
> > around 200GB and the second around 500GB.  As with any K8s platform, pods
> > can occasionally get rescheduled or go down, which of course will cause
> my
> > application to rebalance.  However, what I’m seeing is the application
> will
> > literally spend hours rebalancing, without any errors being thrown or
> other
> > obvious causes for the frequent rebalances – all I can see in the logs is
> > an instance will be restoring a state store from the changelog topic,
> then
> > suddenly it will have its partitions revoked and begin the join-group
> > process all over again.  (I’m running 10 pods/instances of my app, and I
> > see this same pattern in each instance)  In some cases it never really
> > recovers from this rebalancing cycle – even after 12 hours or more - and
> > I’ve had to scale down the application completely and start over by
> purging
> > the application state and re-consuming from earliest on the source topic.
> > Interestingly, after purging and starting from scratch the application
> > seems to recover from rebalances pretty easily.
> >
> > The storage I’m using is a NAS device, which admittedly is not
> particularly
> > fast.  (it’s using spinning disks and is shared amongst other tenants) As
> > an experiment, I’ve tried switching the k8s storage to an in-memory
> option
> > (this is at the k8s layer - the application is still using the same
> RocksDB
> > stores) to see if that helps.  As it turns out, I never have the
> rebalance
> > problem when using an in-memory persistence layer.  If a pod goes down,
> the
> > application spends around 10 - 15 minutes rebalancing and then is back to
> > processing data again.
> >
> > At this point I guess my main question is: when I’m using the NAS storage
> > and the state stores are fairly large, could I be hitting some timeout
> > somewhere that isn’t allowing the restore process to complete, which then
> > triggers another rebalance?  In other words, the restore process is
> simply
> > taking too long given the amount of data needed to restore and the slow
> > storage?   I’m currently using Kafka 2.4.1, but I saw this same behavior
> in
> > 2.3.  I am using a custom RocksDB config setter to limit off-heap memory,
> > but I’ve tried removing that and saw no difference in the rebalance
> > problem.  Again, no errors that I’m seeing or anything else in the logs
> > that seems to indicate why it can never finish rebalancing.  I’ve tried
> > turning on DEBUG logging but I’m having a tough time sifting through the
> > amount of log messages, though I’m still looking.
> >
> > If anyone has any ideas I would appreciate it, thanks!
> >
> > Alex C
> >
>


Re: Global state store: Lazy loading

2020-04-10 Thread Matthias J. Sax
Global state stores are designed to load data upfront from a topic. I
don't think you can "bend" them easily.

You could keep the topic empty before starting the application and put
data into the topic on demand; the store would pick it up afterwards.
Not sure whether this would be feasible for your use case.
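
To make the idea concrete, a sketch of loading on demand from inside the
application (all names here are hypothetical, and a configured producer is
assumed):

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

    public class OnDemandGlobalLoad {
        // on a miss, fetch from the external service and publish to the global
        // store's source topic; the global store will consume the record and
        // serve it on later lookups
        static String lookupOrLoad(ReadOnlyKeyValueStore<String, String> globalStore,
                                   KafkaProducer<String, String> producer,
                                   String id) {
            String value = globalStore.get(id);
            if (value == null) {
                String fetched = fetchFromExternalService(id); // hypothetical HTTP call
                producer.send(new ProducerRecord<>("global-store-source-topic", id, fetched));
                // the send is asynchronous, so an immediate store re-read may still miss
                return fetched;
            }
            return value;
        }

        static String fetchFromExternalService(String id) {
            return "..."; // placeholder for the external lookup
        }
    }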


-Matthias

On 4/9/20 11:51 PM, Navneeth Krishnan wrote:
> Hi All,
> 
> Any suggestions on how I can achieve this?
> 
> Thanks
> 
> On Fri, Apr 3, 2020 at 12:49 AM Navneeth Krishnan 
> wrote:
> 
>> Hi Boyang,
>>
>> Basically I don’t want to load all the states upfront. For the local KV
>> store, when the very first message arrives I do an HTTP request to an
>> external service and load the data.
>>
>> I can do the same for the global state store, but since the global stores
>> are read-only I couldn’t figure out a better way to do it.
>>
>> Thanks
>>
>> On Thu, Apr 2, 2020 at 1:38 PM Boyang Chen 
>> wrote:
>>
>>> Hey Navneeth,
>>>
>>> could you clarify a bit on what you mean by `lazy load`, specifically how
>>> you make it happen with local KV store?
>>>
>>> On Thu, Apr 2, 2020 at 12:09 PM Navneeth Krishnan <
>>> reachnavnee...@gmail.com>
>>> wrote:
>>>
 Hi All,

 Is there a recommended way to lazy-load the global state store? I'm using
 the PAPI, and I have the local KV state stores in a lazy-load fashion so
 that I don't end up loading unnecessary data. Similarly, I want the global
 state store to load data only when a request arrives with an id whose data
 is not present.

 Thanks

>>>
>>
> 





Re: [kafka-clients] Re: [VOTE] 2.5.0 RC3

2020-04-10 Thread Manikumar
Hi David,

+1 (binding)

- Verified signatures, artifacts,  Release notes
- Built from sources, ran tests
- Ran core/streams quick start for Scala 2.13 binary, ran few manual tests
- Verified docs

As discussed offline, we need to add upgrade instructions to 2.5 docs.

Thanks for driving the release.

Thanks,
Manikumar

On Fri, Apr 10, 2020 at 7:53 PM Bill Bejeck  wrote:

> Hi David,
>
> +1 (non-binding) Verified signatures, built jars from source, ran unit and
> integration tests, all passed.
>
> Thanks for running the release.
>
> -Bill
>
> On Wed, Apr 8, 2020 at 10:10 AM David Arthur  wrote:
>
> > Passing Jenkins build on 2.5 branch:
> > https://builds.apache.org/job/kafka-2.5-jdk8/90/
> >
> > On Wed, Apr 8, 2020 at 12:03 AM David Arthur  wrote:
> >
> >> Hello Kafka users, developers and client-developers,
> >>
> >> This is the fourth candidate for release of Apache Kafka 2.5.0.
> >>
> >> * TLS 1.3 support (1.2 is now the default)
> >> * Co-groups for Kafka Streams
> >> * Incremental rebalance for Kafka Consumer
> >> * New metrics for better operational insight
> >> * Upgrade Zookeeper to 3.5.7
> >> * Deprecate support for Scala 2.11
> >>
> >> Release notes for the 2.5.0 release:
> >> https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/RELEASE_NOTES.html
> >>
> >> *** Please download, test and vote by Friday April 10th 5pm PT
> >>
> >> Kafka's KEYS file containing PGP keys we use to sign the release:
> >> https://kafka.apache.org/KEYS
> >>
> >> * Release artifacts to be voted upon (source and binary):
> >> https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/
> >>
> >> * Maven artifacts to be voted upon:
> >> https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >>
> >> * Javadoc:
> >> https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/javadoc/
> >>
> >> * Tag to be voted upon (off 2.5 branch) is the 2.5.0 tag:
> >> https://github.com/apache/kafka/releases/tag/2.5.0-rc3
> >>
> >> * Documentation:
> >> https://kafka.apache.org/25/documentation.html
> >>
> >> * Protocol:
> >> https://kafka.apache.org/25/protocol.html
> >>
> >> Successful Jenkins builds to follow
> >>
> >> Thanks!
> >> David
> >>
> >
> >
> > --
> > David Arthur
> >


Re: How to set custom properties and message expiration for Kafka topic message.

2020-04-10 Thread Nicolas Carlot
Ok, then you may be looking for custom headers:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-82+-+Add+Record+Headers
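
A minimal sketch of attaching and reading back a header such as a trackingId
(topic, key, value, and header values are placeholders; the producer and
consumer are assumed to be configured elsewhere). Note that record headers
require message format 0.11 or newer on the brokers:

    import java.nio.charset.StandardCharsets;
    import java.time.Duration;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.header.Header;

    public class HeaderExample {
        // producer side: the header travels with the record but is not part
        // of the payload ("my-topic" is a placeholder)
        static void sendWithHeader(KafkaProducer<String, String> producer) {
            ProducerRecord<String, String> record =
                new ProducerRecord<>("my-topic", "key", "value");
            record.headers().add("trackingId",
                "abc-123".getBytes(StandardCharsets.UTF_8));
            producer.send(record);
        }

        // consumer side: read the header back
        static void readHeaders(KafkaConsumer<String, String> consumer) {
            for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofMillis(100))) {
                Header h = rec.headers().lastHeader("trackingId");
                String trackingId =
                    (h == null) ? null : new String(h.value(), StandardCharsets.UTF_8);
                System.out.println(rec.key() + " -> " + trackingId);
            }
        }
    }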


On Fri, Apr 10, 2020 at 4:08 PM Naveen Kumar M 
wrote:

> Hello Nicolas,
>
> Thanks a lot for the response. "Custom properties" are just additional
> message properties, specific to our requirements, that are not part of the
> message payload. For example, a trackingId that I need to attach as an
> additional property while producing a message and then read back on the
> consumer end.
>
> In a JMS messaging system, we can add custom JMS properties when sending
> messages, and the consumer can read them. I am looking for similar
> functionality in Kafka.
>
> Thanks and regards,
> Naveen
>
> On Fri, Apr 10, 2020, 7:22 PM Nicolas Carlot  >
> wrote:
>
> > Hello,
> >
> > Message expiration is based on topic configuration, not on your producer
> > configuration. Look at the Kafka server configuration: retention.ms is
> > one of the properties that drives message deletion.
> > I don't know what "customer properties" are; if you're talking about
> > "custom properties", what do you mean by reading them on the consumer end?
> >
> > On Fri, Apr 10, 2020 at 3:46 PM Naveen Kumar M <
> eai.naveenkuma...@gmail.com>
> > wrote:
> >
> > > Hello Team,
> > >
> > > Could you please help me on this?
> > >
> > > Thanks and regards,
> > > Naveen
> > >
> > > On Tue, Apr 7, 2020, 2:27 PM Naveen Kumar M <
> eai.naveenkuma...@gmail.com
> > >
> > > wrote:
> > >
> > > > Hello Friends,
> > > >
> > > > Hope you are all doing good!
> > > >
> > > > I am a bit new to the Kafka messaging system.
> > > >
> > > > We are using Kafka as a messaging broker.
> > > >
> > > > Could someone let me know how to set custom properties and message
> > > > expiration while sending a message to a Kafka topic, and how to read
> > > > custom properties on the consumer end?
> > > >
> > > > Thanks and regards,
> > > > Naveen
> > > >
> > >
> >
> >
> > --
> > *Nicolas Carlot*
> > Lead dev | nicolas.car...@chronopost.fr
>


-- 
*Nicolas Carlot*
Lead dev | nicolas.car...@chronopost.fr


Re: [kafka-clients] Re: [VOTE] 2.5.0 RC3

2020-04-10 Thread Bill Bejeck
Hi David,

+1 (non-binding) Verified signatures, built jars from source, ran unit and
integration tests, all passed.

Thanks for running the release.

-Bill

On Wed, Apr 8, 2020 at 10:10 AM David Arthur  wrote:

> Passing Jenkins build on 2.5 branch:
> https://builds.apache.org/job/kafka-2.5-jdk8/90/
>
> On Wed, Apr 8, 2020 at 12:03 AM David Arthur  wrote:
>
>> Hello Kafka users, developers and client-developers,
>>
>> This is the fourth candidate for release of Apache Kafka 2.5.0.
>>
>> * TLS 1.3 support (1.2 is now the default)
>> * Co-groups for Kafka Streams
>> * Incremental rebalance for Kafka Consumer
>> * New metrics for better operational insight
>> * Upgrade Zookeeper to 3.5.7
>> * Deprecate support for Scala 2.11
>>
>> Release notes for the 2.5.0 release:
>> https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/RELEASE_NOTES.html
>>
>> *** Please download, test and vote by Friday April 10th 5pm PT
>>
>> Kafka's KEYS file containing PGP keys we use to sign the release:
>> https://kafka.apache.org/KEYS
>>
>> * Release artifacts to be voted upon (source and binary):
>> https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/
>>
>> * Maven artifacts to be voted upon:
>> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>>
>> * Javadoc:
>> https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/javadoc/
>>
>> * Tag to be voted upon (off 2.5 branch) is the 2.5.0 tag:
>> https://github.com/apache/kafka/releases/tag/2.5.0-rc3
>>
>> * Documentation:
>> https://kafka.apache.org/25/documentation.html
>>
>> * Protocol:
>> https://kafka.apache.org/25/protocol.html
>>
>> Successful Jenkins builds to follow
>>
>> Thanks!
>> David
>>
>
>
> --
> David Arthur
>


Re: How to set custom properties and message expiration for Kafka topic message.

2020-04-10 Thread Naveen Kumar M
Hello Nicolas,

Thanks a lot for the response. "Custom properties" are just additional
message properties, specific to our requirements, that are not part of the
message payload. For example, a trackingId that I need to attach as an
additional property while producing a message and then read back on the
consumer end.

In a JMS messaging system, we can add custom JMS properties when sending
messages, and the consumer can read them. I am looking for similar
functionality in Kafka.

Thanks and regards,
Naveen

On Fri, Apr 10, 2020, 7:22 PM Nicolas Carlot 
wrote:

> Hello,
>
> Message expiration is based on topic configuration, not on your producer
> configuration. Look at the Kafka server configuration: retention.ms is one
> of the properties that drives message deletion.
> I don't know what "customer properties" are; if you're talking about
> "custom properties", what do you mean by reading them on the consumer end?
>
> On Fri, Apr 10, 2020 at 3:46 PM Naveen Kumar M 
> wrote:
>
> > Hello Team,
> >
> > Could you please help me on this?
> >
> > Thanks and regards,
> > Naveen
> >
> > On Tue, Apr 7, 2020, 2:27 PM Naveen Kumar M  >
> > wrote:
> >
> > > Hello Friends,
> > >
> > > Hope you are all doing good!
> > >
> > > I am a bit new to the Kafka messaging system.
> > >
> > > We are using Kafka as a messaging broker.
> > >
> > > Could someone let me know how to set custom properties and message
> > > expiration while sending a message to a Kafka topic, and how to read
> > > custom properties on the consumer end?
> > >
> > > Thanks and regards,
> > > Naveen
> > >
> >
>
>
> --
> *Nicolas Carlot*
> Lead dev | nicolas.car...@chronopost.fr


Re: How to set custom properties and message expiration for Kafka topic message.

2020-04-10 Thread Nicolas Carlot
Hello,

Message expiration is based on topic configuration, not on your producer
configuration. Look at the Kafka server configuration: retention.ms is one
of the properties that drives message deletion.
I don't know what "customer properties" are; if you're talking about
"custom properties", what do you mean by reading them on the consumer end?
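
If it helps, a minimal Java AdminClient sketch for setting per-topic
retention (topic name, retention value, and bootstrap address are
placeholders; incrementalAlterConfigs needs brokers at 2.3 or newer):

    import java.util.Collection;
    import java.util.Collections;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.AlterConfigOp;
    import org.apache.kafka.clients.admin.ConfigEntry;
    import org.apache.kafka.common.config.ConfigResource;

    public class SetTopicRetention {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092"); // placeholder
            try (AdminClient admin = AdminClient.create(props)) {
                ConfigResource topic =
                    new ConfigResource(ConfigResource.Type.TOPIC, "my-topic"); // placeholder
                // messages older than retention.ms become eligible for deletion
                AlterConfigOp setRetention = new AlterConfigOp(
                    new ConfigEntry("retention.ms", "86400000"), // 1 day, example value
                    AlterConfigOp.OpType.SET);
                Map<ConfigResource, Collection<AlterConfigOp>> updates =
                    Collections.singletonMap(topic, Collections.singletonList(setRetention));
                admin.incrementalAlterConfigs(updates).all().get();
            }
        }
    }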

On Fri, Apr 10, 2020 at 3:46 PM Naveen Kumar M 
wrote:

> Hello Team,
>
> Could you please help me on this?
>
> Thanks and regards,
> Naveen
>
> On Tue, Apr 7, 2020, 2:27 PM Naveen Kumar M 
> wrote:
>
> > Hello Friends,
> >
> > Hope you are all doing good!
> >
> > I am a bit new to the Kafka messaging system.
> >
> > We are using Kafka as a messaging broker.
> >
> > Could someone let me know how to set custom properties and message
> > expiration while sending a message to a Kafka topic, and how to read
> > custom properties on the consumer end?
> >
> > Thanks and regards,
> > Naveen
> >
>


-- 
*Nicolas Carlot*
Lead dev | nicolas.car...@chronopost.fr


Re: How to set custom properties and message expiration for Kafka topic message.

2020-04-10 Thread Naveen Kumar M
Hello Team,

Could you please help me on this?

Thanks and regards,
Naveen

On Tue, Apr 7, 2020, 2:27 PM Naveen Kumar M 
wrote:

> Hello Friends,
>
> Hope you are all doing good!
>
> I am a bit new to the Kafka messaging system.
>
> We are using Kafka as a messaging broker.
>
> Could someone let me know how to set custom properties and message
> expiration while sending a message to a Kafka topic, and how to read
> custom properties on the consumer end?
>
> Thanks and regards,
> Naveen
>


Re: how to start DEBUG level

2020-04-10 Thread Jacek Szewczyk
Please change 
log4j.logger.kafka=DEBUG, kafkaAppender

And it will work.
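
For reference, a minimal sketch assuming the stock log4j.properties that
ships with Kafka: the explicit kafka logger level overrides the root level,
which is why raising only log4j.rootLogger leaves server.log at INFO.

    # root stays at INFO; the per-logger settings below take precedence
    log4j.rootLogger=INFO, stdout, kafkaAppender
    # raise the kafka loggers to DEBUG so server.log gets DEBUG lines
    log4j.logger.kafka=DEBUG, kafkaAppender
    log4j.logger.org.apache.kafka=DEBUG
    # avoid duplicate lines via the root appender
    log4j.additivity.kafka=false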

Jacek

> On Apr 10, 2020, at 07:51, deadwind4  wrote:
> 
> Hi 
> 
> 
> I want to change the Kafka logging level to DEBUG.
> 
> 
> I changed the option log4j.rootLogger=DEBUG, stdout, kafkaAppender in the
> log4j.properties file,
> 
> 
> but in logs/server.log I still see no DEBUG messages. Why?
> 
> 
> Thank you.



Re: Global state store: Lazy loading

2020-04-10 Thread Navneeth Krishnan
Hi All,

Any suggestions on how I can achieve this?

Thanks

On Fri, Apr 3, 2020 at 12:49 AM Navneeth Krishnan 
wrote:

> Hi Boyang,
>
> Basically I don’t want to load all the states upfront. For the local KV
> store, when the very first message arrives I do an HTTP request to an
> external service and load the data.
>
> I can do the same for the global state store, but since the global stores
> are read-only I couldn’t figure out a better way to do it.
>
> Thanks
>
> On Thu, Apr 2, 2020 at 1:38 PM Boyang Chen 
> wrote:
>
>> Hey Navneeth,
>>
>> could you clarify a bit on what you mean by `lazy load`, specifically how
>> you make it happen with local KV store?
>>
>> On Thu, Apr 2, 2020 at 12:09 PM Navneeth Krishnan <
>> reachnavnee...@gmail.com>
>> wrote:
>>
>> > Hi All,
>> >
>> > Is there a recommended way to lazy-load the global state store? I'm
>> > using the PAPI, and I have the local KV state stores in a lazy-load
>> > fashion so that I don't end up loading unnecessary data. Similarly, I
>> > want the global state store to load data only when a request arrives
>> > with an id whose data is not present.
>> >
>> > Thanks
>> >
>>
>