Re: KTable Suppress not working

2020-01-20 Thread Sachin Mittal
Hi,
As far as my understanding goes, the aggregated result for a window is not
included in the next window.
The window stays in the state store until it gets deleted based on the
retention settings; however, the aggregated result for that window includes
only the records that occur within the window duration.

If you have a sliding window then there will be an overlap in the records
between two windows, so the aggregated results would be based on some common
records.

For example, if the records in your first window are (R1, R2, R3, R4) and in
the second they are (R3, R4, R5, R6), then the final data stored would be
[W1, Aggr(R1, R2, R3, R4)] and [W2, Aggr(R3, R4, R5, R6)].

As John pointed out, emitting data downstream may or may not happen after
each record is aggregated for that window; it depends on how frequently you
want to commit your data.
So the aggregated data will be built up in the following way:
Aggr( R1 )
Aggr( R1, R2 )
Aggr( R1, R2, R3 )
Aggr( R1, R2, R3, R4 )

But not all of these aggregated results may be emitted downstream.
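
For illustration, here is a rough sketch in the Java DSL of this kind of
windowed aggregation, building on the groupByKey snippet quoted further down
(Event, eventListSerde and the window sizes are just placeholders I made up).
The cache size and commit interval then determine how many of the intermediate
Aggr(...) updates actually reach downstream:

```java
// Sketch only: a windowed list aggregation; intermediate updates are emitted
// according to the record cache and commit interval (placeholder names).
Properties props = new Properties();
props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 10 * 1024 * 1024L);
props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 1000);

KTable<Windowed<String>, List<Event>> aggregated = eventStream
    .groupByKey(Grouped.with(Serdes.String(), eventSerde))
    .windowedBy(TimeWindows.of(Duration.ofMinutes(5)).grace(Duration.ofMinutes(1)))
    .aggregate(
        ArrayList::new,
        (key, event, list) -> { list.add(event); return list; },
        Materialized.<String, List<Event>, WindowStore<Bytes, byte[]>>with(Serdes.String(), eventListSerde));
// Adding .suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded())) to the
// result would instead hold each window back until it closes (grace period passed).
```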

Hope this helps.

Thanks
Sachin



On Tue, Jan 21, 2020 at 10:25 AM Sushrut Shivaswamy <
sushrut.shivasw...@gmail.com> wrote:

> Thanks John.
> That partially answers my question.
> I'm a little confused about when a window will expire.
> As you said, I will receive at most 20 events at T2 but as time goes on
> will the data from the first window always be included in the aggregated
> result?
>
> On Mon, Jan 20, 2020 at 7:55 AM John Roesler  wrote:
>
> > Hi Sushrut,
> >
> > I have to confess I don’t think I fully understand your last message, but
> > I will try to help.
> >
> > It sounds like maybe you’re thinking that streams would just repeatedly
> > emit everything every commit? That is certainly not the case. If there
> are
> > only 10 events in window 1 and 10 in window 2, you would see at most 20
> > output events, regardless of any caching or suppression. That is, if you
> > disable all caches, you get one output record (an updated aggregation
> > result) for each input record. Enabling caches only serves to reduce the
> > number.
> >
> > I hope this helps,
> > John
> >
> >
> > On Sat, Jan 18, 2020, at 08:36, Sushrut Shivaswamy wrote:
> > > Hey John,
> > >
> > > I tried following the docs here about the configs:
> > >
> `streamsConfiguration.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG,
> > > 10 * 1024 * 1024L);
> > > // Set commit interval to 1 second.
> > > streamsConfiguration.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG,
> 1000);`
> > >
> >
> https://kafka.apache.org/10/documentation/streams/developer-guide/memory-mgmt
> > >
> > > I'm trying to group events by id by accumulating them in a list and
> then
> > > split the aggregated list
> > > into smaller chunks for processing.
> > > I have a doubt about when windows expire and how aggregated values are
> > > flushed out.
> > > Lets assume in window 1(W1) 10 records arrived and in window 2(W2) 10
> > more
> > > records arrived for the same key.
> > > Assuming the cache can hold only 10 records in memory.
> > > Based on my understanding:
> > > At T1: 10 records from W1 are flushed
> > > At T2: 20 records from W1 + W2 are flushed.
> > > The records from W1 will be duplicated at the next commit time till
> that
> > > window expires.
> > > Is this accurate?
> > > If it is, can you share any way I can avoid/limit the number of times
> > > duplicate data is flushed?
> > >
> > > Thanks,
> > > Sushrut
> > >
> > >
> > >
> > >
> > >
> > >
> > > On Sat, Jan 18, 2020 at 12:00 PM Sushrut Shivaswamy <
> > > sushrut.shivasw...@gmail.com> wrote:
> > >
> > > > Thanks John,
> > > > I'll try increasing the "CACHE_MAX_BYTES_BUFFERING_CONFIG"
> > > > and "COMMIT_INTERVAL_MS_CONFIG" configurations.
> > > >
> > > > Thanks,
> > > > Sushrut
> > > >
> > > > On Sat, Jan 18, 2020 at 11:31 AM John Roesler 
> > wrote:
> > > >
> > > >> Ah, I should add, if you actually want to use suppression, or
> > > >> you need to resolve a similar error message in the future, you
> > > >> probably need to tweak the batch sizes and/or timeout configs
> > > >> of the various clients, and maybe the server as well.
> > > >>
> > > >> That error message kind of sounds like the server went silent
> > > >> long enough that the http session expired, or maybe it suffered
> > > >> a long pause of some kind (GC, de-scheduling, etc.) that caused
> > > >> the OS to hang up the socket.
> > > >>
> > > >> I'm not super familiar with diagnosing these issues; I'm just
> > > >> trying to point you in the right direction in case you wanted
> > > >> to directly solve the given error instead of trying something
> > > >> different.
> > > >>
> > > >> Thanks,
> > > >> -John
> > > >>
> > > >> On Fri, Jan 17, 2020, at 23:33, John Roesler wrote:
> > > >> > Hi Sushrut,
> > > >> >
> > > >> > That's frustrating... I haven't seen that before, but looking at
> the
> > > >> error
> > > >> > in combination with what you say happens without suppress makes
> > > >> > me think there's a large volume of data involved here. Probably,
> > > >> 

Re: KTable Suppress not working

2020-01-20 Thread Sushrut Shivaswamy
Thanks John.
That partially answers my question.
I'm a little confused about when a window will expire.
As you said, I will receive at most 20 events at T2 but as time goes on
will the data from the first window always be included in the aggregated
result?

On Mon, Jan 20, 2020 at 7:55 AM John Roesler  wrote:

> Hi Sushrut,
>
> I have to confess I don’t think I fully understand your last message, but
> I will try to help.
>
> It sounds like maybe you’re thinking that streams would just repeatedly
> emit everything every commit? That is certainly not the case. If there are
> only 10 events in window 1 and 10 in window 2, you would see at most 20
> output events, regardless of any caching or suppression. That is, if you
> disable all caches, you get one output record (an updated aggregation
> result) for each input record. Enabling caches only serves to reduce the
> number.
>
> I hope this helps,
> John
>
>
> On Sat, Jan 18, 2020, at 08:36, Sushrut Shivaswamy wrote:
> > Hey John,
> >
> > I tried following the docs here about the configs:
> > `streamsConfiguration.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG,
> > 10 * 1024 * 1024L);
> > // Set commit interval to 1 second.
> > streamsConfiguration.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 1000);`
> >
> https://kafka.apache.org/10/documentation/streams/developer-guide/memory-mgmt
> >
> > I'm trying to group events by id by accumulating them in a list and then
> > split the aggregated list
> > into smaller chunks for processing.
> > I have a doubt about when windows expire and how aggregated values are
> > flushed out.
> > Lets assume in window 1(W1) 10 records arrived and in window 2(W2) 10
> more
> > records arrived for the same key.
> > Assuming the cache can hold only 10 records in memory.
> > Based on my understanding:
> > At T1: 10 records from W1 are flushed
> > At T2: 20 records from W1 + W2 are flushed.
> > The records from W1 will be duplicated at the next commit time till that
> > window expires.
> > Is this accurate?
> > If it is, can you share any way I can avoid/limit the number of times
> > duplicate data is flushed?
> >
> > Thanks,
> > Sushrut
> >
> >
> >
> >
> >
> >
> > On Sat, Jan 18, 2020 at 12:00 PM Sushrut Shivaswamy <
> > sushrut.shivasw...@gmail.com> wrote:
> >
> > > Thanks John,
> > > I'll try increasing the "CACHE_MAX_BYTES_BUFFERING_CONFIG"
> > > and "COMMIT_INTERVAL_MS_CONFIG" configurations.
> > >
> > > Thanks,
> > > Sushrut
> > >
> > > On Sat, Jan 18, 2020 at 11:31 AM John Roesler 
> wrote:
> > >
> > >> Ah, I should add, if you actually want to use suppression, or
> > >> you need to resolve a similar error message in the future, you
> > >> probably need to tweak the batch sizes and/or timeout configs
> > >> of the various clients, and maybe the server as well.
> > >>
> > >> That error message kind of sounds like the server went silent
> > >> long enough that the http session expired, or maybe it suffered
> > >> a long pause of some kind (GC, de-scheduling, etc.) that caused
> > >> the OS to hang up the socket.
> > >>
> > >> I'm not super familiar with diagnosing these issues; I'm just
> > >> trying to point you in the right direction in case you wanted
> > >> to directly solve the given error instead of trying something
> > >> different.
> > >>
> > >> Thanks,
> > >> -John
> > >>
> > >> On Fri, Jan 17, 2020, at 23:33, John Roesler wrote:
> > >> > Hi Sushrut,
> > >> >
> > >> > That's frustrating... I haven't seen that before, but looking at the
> > >> error
> > >> > in combination with what you say happens without suppress makes
> > >> > me think there's a large volume of data involved here. Probably,
> > >> > the problem isn't specific to suppression, but it's just that the
> > >> > interactions on the suppression buffers are pushing the system over
> > >> > the edge.
> > >> >
> > >> > Counterintuitively, adding Suppression can actually increase your
> > >> > broker traffic because the Suppression buffer has to provide
> resiliency
> > >> > guarantees, so it needs its own changelog, even though the
> aggregation
> > >> > immediately before it _also_ has a changelog.
> > >> >
> > >> > Judging from your description, you were just trying to batch more,
> > >> rather
> > >> > than specifically trying to get "final results" semantics for the
> window
> > >> > results. In that case, you might want to try removing the
> suppression
> > >> > and instead increasing the "CACHE_MAX_BYTES_BUFFERING_CONFIG"
> > >> > and "COMMIT_INTERVAL_MS_CONFIG" configurations.
> > >> >
> > >> > Hope this helps,
> > >> > -John
> > >> >
> > >> > On Fri, Jan 17, 2020, at 22:02, Sushrut Shivaswamy wrote:
> > >> > > Hey,
> > >> > >
> > >> > > I'm building a streams application where I'm trying to aggregate a
> > >> stream
> > >> > > of events
> > >> > > and getting a list of events per key.
> > >> > > `eventStream
> > >> > > .groupByKey(Grouped.with(Serdes.String(), eventSerde))
> > >> > >
> > >>
> 

Kafka Debug Instructions Updated on Cwiki

2020-01-20 Thread M. Manna
Hey all,

I meant to do this a while back, so apologies for the delay.

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=145722808

The above has working instructions on how to start the debugger in Eclipse
Scala. I haven't proofread the text, but the solution works correctly. If
anyone wants to extend it by adding IntelliJ or other variants, please feel
free to do so.

Also, any feedback/comments are welcome.

Regards,


Re: Minor version upgrade instructions from 0.11.0.0 to 0.11.0.3

2020-01-20 Thread Ismael Juma
One note is that 0.11.0.3 is pretty old by now and no new releases in that
series are planned. I recommend planning an upgrade to the 2.x series
whenever possible.

Ismael

On Mon, Jan 20, 2020 at 12:47 AM Manikumar 
wrote:

> Hi,
>
> Your approach is correct.  For minor version upgrade (0.11.0.0 to
> 0.11.0.3), we can just update the brokers to new version.
>
> Thanks,
>
> On Mon, Jan 20, 2020 at 1:27 AM Sarath Babu
>  wrote:
>
> > Hi all,
> > Appreciate any help/pointers on how to upgrade. One thought is to start
> > the cluster brokers with 0.11.0.3 using the same old server.properties.
> > However, I am not completely sure if this is the correct approach, as the
> > major version upgrade documentation talks a lot about juggling the
> > protocol version and the Kafka version.
> > Thanks in advance.
> > Regards, Sarath
> >
> > On Friday, January 17, 2020, 18:05, Sarath Babu <
> > maddalasarathb...@yahoo.com.INVALID> wrote:
> >
> >  Hi,
> >
> > We are using Kafka 0.11.0.0 and I believe we are hitting this defect:
> > - [KAFKA-6003] Replication Fetcher thread for a partition with no data
> > fails to start - ASF JIRA
> >
> > The fix was shipped in 0.11.0.2, so I would like to patch to the highest
> > minor version available, which is 0.11.0.3.
> > I couldn't find the minor version upgrade instructions after searching for
> > some time. Could anyone share the wiki?
> > Ours is a production cluster and we don't want to create any issues with
> > this patch application.
> > Thanks in advance.
> > Regards.
> >
> >
> >
>


Offset commit failed when appending to log due to org.apache.kafka.common.errors.TimeoutException

2020-01-20 Thread Roman Nikolaevich
Hi folks,

We recently ran into a strange issue with a Kafka consumer:
the consumer thread was hanging indefinitely.

Can someone advise how to approach this?
The Kafka version is 1.0.2.

We found the following error on the broker side:
[2020-01-14 12:17:54,454] DEBUG [GroupMetadataManager brokerId=1002] Offset
commit Map(topic_name -> [OffsetMetadata[4387852,NO_METADATA],CommitTime
1579004269453,ExpirationTime 6763004269453]) from group group_name,
consumer consumer-1-3b77c87c-81d7-4d23-90c3-2f5243114b87 with generation
471 failed when appending to log due to
org.apache.kafka.common.errors.TimeoutException

Here is also the thread dump from the consumer side:
"com.playtech.crossteam.boot.kafkaacceptorpatternchannel-pool-16-1"
java.lang.Thread.State: RUNNABLE
    at java.base@11.0.2/sun.nio.ch.EPoll.wait(Native Method)
    at java.base@11.0.2/sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:120)
    at java.base@11.0.2/sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:124)
    at java.base@11.0.2/sun.nio.ch.SelectorImpl.select(SelectorImpl.java:136)
    at org.apache.kafka.common.network.Selector.select(Selector.java:674)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:396)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:460)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:258)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:230)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:206)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:601)
    at org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1250)

Thanks in advance,
Roman


Re: Does Merging two kafka-streams preserve co-partitioning

2020-01-20 Thread Yair Halberstadt
Thanks John!

I don't think transformValues will work here because I need to remove
records which already have manual data?

Either way it doesn't matter too much as I just write them straight to
kafka.

Thanks for your help!

On Mon, Jan 20, 2020 at 4:48 PM John Roesler  wrote:

> Hi Yair,
>
> You should be fine!
>
> Merging does preserve copartitioning.
>
> Also processing on that partition is single-threaded, so you don’t have to
> worry about races on the same key in your transformer.
>
> Actually, you might want to use transformValues to inform Streams that you
> haven’t changed the key. Otherwise, it would need to repartition the result
> before you could do further stateful processing.
>
> I hope this helps!
>
> Thanks,
> John
>
> On Mon, Jan 20, 2020, at 05:27, Yair Halberstadt wrote:
> > Hi
> > I asked this question on stack-overflow and was wondering if anyone here
> > could answer it:
> >
> https://stackoverflow.com/questions/59820243/does-merging-two-kafka-streams-preserve-co-partitioning
> >
> >
> > I have 2 co-partitioned kafka topics. One contains automatically
> generated
> > data, and the other manual overrides.
> >
> > I want to merge them and filter out any automatically generated data that
> > has already been manually overidden, and then forward everything to a
> > combined ChangeLog topic.
> >
> > To do so I create a stream from each topic, and [merge the streams](
> >
> https://kafka.apache.org/23/javadoc/org/apache/kafka/streams/kstream/KStream.html#merge-org.apache.kafka.streams.kstream.KStream-
> )
> > using the dsl API.
> >
> > I then apply the following transform, which stores any manual data, and
> > deletes any automatic data which has already been manually overidden:
> > (Scala but should be pretty easy to understand if you know java)
> >
> > ```scala
> > class FilterManuallyClassifiedTransformer(manualOverridesStoreName :
> String)
> >   extends Transformer[Long, Data, KeyValue[Long, Data]] {
> >
> >   // Array[Byte] used as dummy value since we don't use the value
> >   var store: KeyValueStore[Long, Array[Byte]] = _
> >
> >   override def init(context: ProcessorContext): Unit = {
> > store = context.getStateStore(manualOverridesStoreName
> > ).asInstanceOf[KeyValueStore[Long, Array[Byte]]]
> >   }
> >
> >   override def close(): Unit = {}
> >
> >   override def transform(key: Long, value: Data): KeyValue[Long, Data] =
> {
> > if (value.getIsManual) {
> >   store.put(key, Array.emptyByteArray)
> >   new KeyValue(key, value)
> > }
> > else if (store.get(key) == null) {
> >   new KeyValue(key, value)
> > }
> > else {
> >   null
> > }
> >   }
> > }
> > ```
> >
> > If I understand correctly, there is no guarantee this will work unless
> > manual data and automatic data with the same key are in the same
> partition.
> > Otherwise the manual override might be stored in a different state store
> to
> > the one that the automatic data checks.
> >
> > And even if they are stored in the same StateStore there might be race
> > conditions, where an automatic data checks the state store, then the
> manual
> > override is added to the state store, then the manual override is written
> > to the output topic, then the automatic data is written to the output
> > topic, leading to the automatic data overwriting the manual override.
> >
> > Is that correct?
> >
> > And if so will `merge` preserve the co-partitioning guarantee I need?
> >
> > Thanks for your help
> >
>


Re: Does Merging two kafka-streams preserve co-partitioning

2020-01-20 Thread John Roesler
Hi Yair,

You should be fine! 

Merging does preserve copartitioning.

Also processing on that partition is single-threaded, so you don’t have to 
worry about races on the same key in your transformer.

Actually, you might want to use transformValues to inform Streams that you 
haven’t changed the key. Otherwise, it would need to repartition the result 
before you could do further stateful processing. 
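
For illustration, here is a rough Java sketch of what I mean (the store name
"manual-overrides" and the stream names are placeholders; your Scala version
would be analogous). As far as I know, transformValues forwards a null value
rather than dropping the record, so I've chained a filter() afterwards;
filter is also key-preserving, so it still avoids the repartition:

```java
// Sketch only: key-preserving filtering via transformValues + filter.
KStream<Long, Data> combined = automaticStream.merge(manualStream)
    .transformValues(() -> new ValueTransformerWithKey<Long, Data, Data>() {
        private KeyValueStore<Long, byte[]> store;

        @SuppressWarnings("unchecked")
        @Override
        public void init(final ProcessorContext context) {
            store = (KeyValueStore<Long, byte[]>) context.getStateStore("manual-overrides");
        }

        @Override
        public Data transform(final Long key, final Data value) {
            if (value.getIsManual()) {
                store.put(key, new byte[0]);   // remember the manual override
                return value;
            }
            // Keep automatic data only if no manual override has been seen for this key.
            return store.get(key) == null ? value : null;
        }

        @Override
        public void close() { }
    }, "manual-overrides")
    .filter((key, value) -> value != null);   // drop the nulls, key unchanged
```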

I hope this helps!

Thanks,
John

On Mon, Jan 20, 2020, at 05:27, Yair Halberstadt wrote:
> Hi
> I asked this question on stack-overflow and was wondering if anyone here
> could answer it:
> https://stackoverflow.com/questions/59820243/does-merging-two-kafka-streams-preserve-co-partitioning
> 
> 
> I have 2 co-partitioned kafka topics. One contains automatically generated
> data, and the other manual overrides.
> 
> I want to merge them and filter out any automatically generated data that
> has already been manually overidden, and then forward everything to a
> combined ChangeLog topic.
> 
> To do so I create a stream from each topic, and [merge the streams](
> https://kafka.apache.org/23/javadoc/org/apache/kafka/streams/kstream/KStream.html#merge-org.apache.kafka.streams.kstream.KStream-)
> using the dsl API.
> 
> I then apply the following transform, which stores any manual data, and
> deletes any automatic data which has already been manually overidden:
> (Scala but should be pretty easy to understand if you know java)
> 
> ```scala
> class FilterManuallyClassifiedTransformer(manualOverridesStoreName : String)
>   extends Transformer[Long, Data, KeyValue[Long, Data]] {
> 
>   // Array[Byte] used as dummy value since we don't use the value
>   var store: KeyValueStore[Long, Array[Byte]] = _
> 
>   override def init(context: ProcessorContext): Unit = {
> store = context.getStateStore(manualOverridesStoreName
> ).asInstanceOf[KeyValueStore[Long, Array[Byte]]]
>   }
> 
>   override def close(): Unit = {}
> 
>   override def transform(key: Long, value: Data): KeyValue[Long, Data] = {
> if (value.getIsManual) {
>   store.put(key, Array.emptyByteArray)
>   new KeyValue(key, value)
> }
> else if (store.get(key) == null) {
>   new KeyValue(key, value)
> }
> else {
>   null
> }
>   }
> }
> ```
> 
> If I understand correctly, there is no guarantee this will work unless
> manual data and automatic data with the same key are in the same partition.
> Otherwise the manual override might be stored in a different state store to
> the one that the automatic data checks.
> 
> And even if they are stored in the same StateStore there might be race
> conditions, where an automatic data checks the state store, then the manual
> override is added to the state store, then the manual override is written
> to the output topic, then the automatic data is written to the output
> topic, leading to the automatic data overwriting the manual override.
> 
> Is that correct?
> 
> And if so will `merge` preserve the co-partitioning guarantee I need?
> 
> Thanks for your help
>


Does Merging two kafka-streams preserve co-partitioning

2020-01-20 Thread Yair Halberstadt
Hi
I asked this question on stack-overflow and was wondering if anyone here
could answer it:
https://stackoverflow.com/questions/59820243/does-merging-two-kafka-streams-preserve-co-partitioning


I have 2 co-partitioned Kafka topics. One contains automatically generated
data, and the other manual overrides.

I want to merge them and filter out any automatically generated data that
has already been manually overridden, and then forward everything to a
combined ChangeLog topic.

To do so I create a stream from each topic, and [merge the streams](
https://kafka.apache.org/23/javadoc/org/apache/kafka/streams/kstream/KStream.html#merge-org.apache.kafka.streams.kstream.KStream-)
using the DSL API.

I then apply the following transform, which stores any manual data, and
deletes any automatic data which has already been manually overridden
(Scala, but it should be pretty easy to understand if you know Java):

```scala
import org.apache.kafka.streams.KeyValue
import org.apache.kafka.streams.kstream.Transformer
import org.apache.kafka.streams.processor.ProcessorContext
import org.apache.kafka.streams.state.KeyValueStore

class FilterManuallyClassifiedTransformer(manualOverridesStoreName: String)
  extends Transformer[Long, Data, KeyValue[Long, Data]] {

  // Array[Byte] used as dummy value since we don't use the value
  var store: KeyValueStore[Long, Array[Byte]] = _

  override def init(context: ProcessorContext): Unit = {
    store = context.getStateStore(manualOverridesStoreName)
      .asInstanceOf[KeyValueStore[Long, Array[Byte]]]
  }

  override def close(): Unit = {}

  override def transform(key: Long, value: Data): KeyValue[Long, Data] = {
    if (value.getIsManual) {
      // Record that a manual override exists for this key, then forward it.
      store.put(key, Array.emptyByteArray)
      new KeyValue(key, value)
    } else if (store.get(key) == null) {
      // No manual override seen yet: forward the automatic record.
      new KeyValue(key, value)
    } else {
      // Automatic record whose key was already manually overridden: drop it.
      null
    }
  }
}
```
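
For context, the wiring of the store, the merge and the transformer looks
roughly like this (a sketch in Java syntax for brevity; topic names, the store
name and dataSerde are placeholders):

```java
// Sketch only: register the store, merge the two streams, apply the transformer.
StreamsBuilder builder = new StreamsBuilder();

builder.addStateStore(Stores.keyValueStoreBuilder(
    Stores.persistentKeyValueStore("manual-overrides"),
    Serdes.Long(),
    Serdes.ByteArray()));

KStream<Long, Data> automatic = builder.stream("automatic-data", Consumed.with(Serdes.Long(), dataSerde));
KStream<Long, Data> manual = builder.stream("manual-data", Consumed.with(Serdes.Long(), dataSerde));

automatic.merge(manual)
    .transform(() -> new FilterManuallyClassifiedTransformer("manual-overrides"), "manual-overrides")
    .to("combined-changelog", Produced.with(Serdes.Long(), dataSerde));
```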

If I understand correctly, there is no guarantee this will work unless
manual data and automatic data with the same key are in the same partition.
Otherwise the manual override might be stored in a different state store from
the one that the automatic data checks.

And even if they are stored in the same StateStore there might be race
conditions: an automatic record checks the state store, then the manual
override is added to the state store, then the manual override is written
to the output topic, and then the automatic record is written to the output
topic, leading to the automatic data overwriting the manual override.

Is that correct?

And if so will `merge` preserve the co-partitioning guarantee I need?

Thanks for your help


kafka with sasl plaintext auth

2020-01-20 Thread Хлебалов Степан Иванович
Hello.
Can anyone please explain what I'm doing wrong?

I'm trying to add SASL PLAINTEXT authentication to Kafka 2.2.2.

Configuration steps are below:

1. config/server.properties
sasl.enabled.mechanisms=PLAIN
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
listeners=SASL_PLAINTEXT://:9094
security.protocol=SASL_PLAINTEXT

2. config/kafka_server_jaas.conf
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="user-admin-secret"
user_admin="user-admin-secret"
user_alice="alice-secret";
};

3. /etc/systemd/system/kafka-2.2.2.service
[Unit]
Requires=zookeeper.service
After=zookeeper.service
[Service]
Type=simple
User=kafka
Group=kafka
Environment=KAFKA_OPTS=-Djava.security.auth.login.config=/opt/kafka_2.12-2.2.2/config/kafka_server_jaas.conf
ExecStart=/opt/kafka_2.12-2.2.2/bin/kafka-server-start.sh 
/opt/kafka_2.12-2.2.2/config/server.properties
ExecStop=/opt/kafka_2.12-2.2.2/bin/kafka-server-stop.sh
Restart=on-abnormal
[Install]
WantedBy=multi-user.target

4. config/kafka_client_jaas.conf
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="alice"
password="alice-secret";
};

5. bin/sasl-kafka-topics.sh
exec $(dirname $0)/kafka-run-class.sh 
-Djava.security.auth.login.config=$(dirname 
$0)/../config/kafka_client_jaas.conf kafka.admin.TopicCommand "$@"

After that, when I try to create a topic with:
bin/sasl-kafka-topics.sh --create --bootstrap-server localhost:9094 
--replication-factor 1 --partitions 1 --topic my-topic
I get the following error:
kafka-server-start.sh[19311]: [2020-01-20 13:47:09,404] INFO [SocketServer 
brokerId=0] Failed authentication with /127.0.0.1 (Unexpected Kafka request of 
type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
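
From what I can tell, a SASL client normally also needs client-side properties
along these lines (just a sketch; the file name is illustrative and I'm not
sure whether this is what's missing in my setup):

# config/client-sasl.properties (illustrative)
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN

which would then be passed to the topics tool, e.g. via
--command-config config/client-sasl.properties, if the tool supports that option.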

What did I miss?
Thanks.


ops questions MM2

2020-01-20 Thread Nils Hermansson
I have some operational questions about MM2.

1. What is the best-practice way to start MM2 after a reboot of the host?
Add connect-mirror-maker.sh config/connect-mirror-maker.properties to a
systemd unit that runs after Kafka starts (see the sketch after question 2)?

2. We will use MM2 to mirror the primary to a backup (secondary) cluster.
Today the primary cluster just has topic1, topic2... In case we have to set up
a new primary cluster and recover from the secondary, each disaster recovery
will generate a new prefix on the primary cluster: primary.primary.topic1,
primary.primary.topic2.
Am I getting this right, since it seems you must set a prefix? If yes, is
there any way to get around this?
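
For question 1, something along these lines is what I have in mind (just a
sketch; install paths and the kafka unit name are illustrative, modeled on how
the broker itself is commonly run under systemd):

[Unit]
Description=Kafka MirrorMaker 2
Requires=kafka.service
After=kafka.service

[Service]
Type=simple
User=kafka
Group=kafka
ExecStart=/opt/kafka/bin/connect-mirror-maker.sh /opt/kafka/config/connect-mirror-maker.properties
Restart=on-abnormal

[Install]
WantedBy=multi-user.target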

I should mention that the primary cluster is not hosted by us, so we have no
host access to those machines.


Re: Minor version upgrade instructions from 0.11.0.0 to 0.11.0.3

2020-01-20 Thread Manikumar
Hi,

Your approach is correct. For a minor version upgrade (0.11.0.0 to
0.11.0.3), you can just update the brokers to the new version.

Thanks,

On Mon, Jan 20, 2020 at 1:27 AM Sarath Babu
 wrote:

> Hi all,
> Appreciate any help/pointers on how to upgrade. One thought is to start
> the cluster brokers with 0.11.0.3 using the same old server.properties.
> However, I am not completely sure if this is the correct approach, as the
> major version upgrade documentation talks a lot about juggling the
> protocol version and the Kafka version.
> Thanks in advance.
> Regards, Sarath
>
> On Friday, January 17, 2020, 18:05, Sarath Babu <
> maddalasarathb...@yahoo.com.INVALID> wrote:
>
>  Hi,
>
> We are using Kafka 0.11.0.0 and I believe we are hitting this defect:
> - [KAFKA-6003] Replication Fetcher thread for a partition with no data
> fails to start - ASF JIRA
>
> The fix was shipped in 0.11.0.2, so I would like to patch to the highest
> minor version available, which is 0.11.0.3.
> I couldn't find the minor version upgrade instructions after searching for
> some time. Could anyone share the wiki?
> Ours is a production cluster and we don't want to create any issues with
> this patch application.
> Thanks in advance.
> Regards.
>
>
>