Re: [DISCUSS] KIP-253: Support in-order message delivery with partition expansion

2018-02-26 Thread Jan Filipiak

Hi,

Just want to throw my thoughts in. In general the functionality is very 
useful; we should, though, not try to bend the architecture too hard while 
implementing it.


The manual steps would be to:

create a new topic
then mirror-make from the old topic to the new topic
wait for the mirror-making to catch up
then put the consumers onto the new topic
(having MirrorMaker spit out a mapping from old offsets to new offsets:
if the partition count is increased by a factor of X there is going to be a clean 
mapping from 1 offset in the old topic to X offsets in the new topic;
if there is no integer factor then there is no chance to generate a 
mapping that can reasonably be used for continuing -- see the sketch after these steps)
make consumers stop at the appropriate points and continue consumption 
with offsets from the mapping

have the producers stop for a minimal time
wait for MirrorMaker to finish
let the producers produce with the new metadata
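
To make the mapping idea concrete, here is a tiny illustration (nothing like
this exists in MirrorMaker today; the class is made up). It is the checkpoint
such a mirroring job would have to emit so consumers know where to continue on
the new topic. The clean-factor requirement comes from the default hash-mod
partitioner: when the partition count goes from N to N*X, a key from old
partition p can only land on new partitions p, p+N, ..., p+(X-1)*N.

import java.util.Map;

/**
 * Hypothetical checkpoint a mirroring job could emit (illustration only, not an
 * existing MirrorMaker feature). For a consumer that had reached oldOffset in
 * oldPartition of the old topic, newStartOffsets holds, per destination
 * partition, the offset in the new topic at which consumption should resume
 * after switching topics.
 */
public final class OffsetCheckpoint {
    public final int oldPartition;
    public final long oldOffset;
    public final Map<Integer, Long> newStartOffsets; // new partition -> resume offset

    public OffsetCheckpoint(int oldPartition, long oldOffset, Map<Integer, Long> newStartOffsets) {
        this.oldPartition = oldPartition;
        this.oldOffset = oldOffset;
        this.newStartOffsets = newStartOffsets;
    }

    // With N old partitions expanded by a factor of X and the default hash-mod
    // partitioning on both topics, the keys of newStartOffsets are expected to be
    // exactly { oldPartition + k * N : k = 0 .. X-1 }.
}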


Instead of implementing the approach suggested in the KIP, which will leave 
log-compacted topics completely crumbled and unusable, I would much rather try 
to build infrastructure to support the operations mentioned above more smoothly.

Especially having producers stop and use another topic is difficult, and
it would be nice if one could trigger "invalid metadata" exceptions for them,
and if one could give topics aliases so that produce requests to the old 
topic arrive in the new topic.


The downsides are obvious, I guess (having the same data twice for the 
transition period, but Kafka tends to scale well with data size). Still, it is 
a nicer fit into the architecture.


I further want to argue that the functionality proposed by the KIP can be 
implemented completely in "userland" with a custom partitioner that 
handles the transition as needed. I would appreciate it if someone could 
point out what a custom partitioner couldn't handle in this case?
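
For what it's worth, a minimal sketch of the kind of "userland" partitioner I
mean (illustration only, not from the KIP; "pinned.partition.count" is a
made-up config name, not a real Kafka setting):

import java.util.Map;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.utils.Utils;

public class PinnedCountPartitioner implements Partitioner {

    private int pinnedCount;

    @Override
    public void configure(Map<String, ?> configs) {
        // Hypothetical config key: the partition count to hash over.
        pinnedCount = Integer.parseInt(String.valueOf(configs.get("pinned.partition.count")));
    }

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        if (keyBytes == null)
            throw new IllegalArgumentException("This sketch only handles keyed messages");
        int available = cluster.partitionsForTopic(topic).size();
        // Hash over the pinned count, so adding partitions does not move existing
        // keys until the operator raises pinned.partition.count explicitly.
        return Utils.toPositive(Utils.murmur2(keyBytes)) % Math.min(pinnedCount, available);
    }

    @Override
    public void close() {}
}

Producers would point partitioner.class at something like this; whether such a
scheme can give all the ordering guarantees the KIP aims for during the
switch-over is exactly the question above.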


With the above approach, shrinking a topic becomes the same set of steps, 
without losing keys from the discontinued partitions.


Would love to hear what everyone thinks.

Best Jan


On 11.02.2018 00:35, Dong Lin wrote:

Hi all,

I have created KIP-253: Support in-order message delivery with partition
expansion. See
https://cwiki.apache.org/confluence/display/KAFKA/KIP-253%3A+Support+in-order+message+delivery+with+partition+expansion
.

This KIP provides a way to allow messages of the same key from the same
producer to be consumed in the same order they are produced even if we
expand the partitions of the topic.

Thanks,
Dong





Re: [VOTE] 1.0.1 RC2

2018-02-26 Thread Manikumar
+1 (non-binding)
Built src and ran tests
Ran core quick start

On Sat, Feb 24, 2018 at 8:44 PM, Jakub Scholz  wrote:

> +1 (non-binding) ... I used the Scala 2.12 binaries and run my tests with
> producers / consumers.
>
> On Thu, Feb 22, 2018 at 1:06 AM, Ewen Cheslack-Postava 
> wrote:
>
> > Hello Kafka users, developers and client-developers,
> >
> > This is the third candidate for release of Apache Kafka 1.0.1.
> >
> > This is a bugfix release for the 1.0 branch that was first released with
> > 1.0.0 about 3 months ago. We've fixed 49 issues since that release. Most
> of
> > these are non-critical, but in aggregate these fixes will have
> significant
> > impact. A few of the more significant fixes include:
> >
> > * KAFKA-6277: Make loadClass thread-safe for class loaders of Connect
> > plugins
> > * KAFKA-6185: Selector memory leak with high likelihood of OOM in case of
> > down conversion
> > * KAFKA-6269: KTable state restore fails after rebalance
> > * KAFKA-6190: GlobalKTable never finishes restoring when consuming
> > transactional messages
> > * KAFKA-6529: Stop file descriptor leak when client disconnects with
> staged
> > receives
> > * KAFKA-6238: Issues with protocol version when applying a rolling
> upgrade
> > to 1.0.0
> >
> > Release notes for the 1.0.1 release:
> > http://home.apache.org/~ewencp/kafka-1.0.1-rc2/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Saturday Feb 24, 9pm PT ***
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > http://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > http://home.apache.org/~ewencp/kafka-1.0.1-rc2/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/
> >
> > * Javadoc:
> > http://home.apache.org/~ewencp/kafka-1.0.1-rc2/javadoc/
> >
> > * Tag to be voted upon (off 1.0 branch) is the 1.0.1 tag:
> > https://github.com/apache/kafka/tree/1.0.1-rc2
> >
> > * Documentation:
> > http://kafka.apache.org/10/documentation.html
> >
> > * Protocol:
> > http://kafka.apache.org/10/protocol.html
> >
> > /**
> >
> > Thanks,
> > Ewen Cheslack-Postava
> >
>


[STREAMS] KafkaStreams.toString() deprecated?!

2018-02-26 Thread Jacek Laskowski
Hi,

I've just found that KafkaStreams.toString [1] is deprecated, but I think
that it does not make sense.

The parameterless toString is simply part of the java.lang.Object contract
and I don't think it could ever get deprecated (unless it is by Object).

I think it'd be much better if the method used whatever is recommended
for toString-like functionality (which seems to be
KafkaStreams.localThreadsMetadata [2], or something based on it).

Thoughts?

p.s. I also think that since KafkaStreams is marked
as @InterfaceStability.Evolving, using @Deprecated markers does not add much,
if anything. I thought Evolving was meant to say that literally everything
could change without warning at any time, couldn't it?

[1]
https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java?utf8=%E2%9C%93#L900

[2]
https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java?utf8=%E2%9C%93#L1061

Pozdrawiam,
Jacek Laskowski

https://about.me/JacekLaskowski
Mastering Spark SQL https://bit.ly/mastering-spark-sql
Spark Structured Streaming https://bit.ly/spark-structured-streaming
Mastering Kafka Streams https://bit.ly/mastering-kafka-streams
Follow me at https://twitter.com/jaceklaskowski


Re: [STREAMS] KafkaStreams.toString() deprecated?!

2018-02-26 Thread Matthias J. Sax
Thanks for the feedback.

You are of course right that we cannot remove toString() -- we used the
@deprecated annotation to imply that the override will be removed and
toString() will fall back to Object.toString() in the future and thus
not provide any useful information (at least this was the plan).

Using localThreadsMetadata() within toString() and printing whatever it
returns might be an alternative. However, I am not sure we'd gain much.
Also, KafkaConsumer/Producer/AdminClient don't override toString() --
IMHO, not overriding toString() aligns better with other parts of the code.
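
Just to illustrate what I mean (a sketch only, not a code proposal): users can
already build such a description themselves from localThreadsMetadata():

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.processor.ThreadMetadata;

public class StreamsDescription {
    // `streams` is assumed to be an already-started KafkaStreams instance.
    public static String describe(final KafkaStreams streams) {
        final StringBuilder sb = new StringBuilder("KafkaStreams threads:\n");
        for (final ThreadMetadata thread : streams.localThreadsMetadata()) {
            sb.append('\t').append(thread.threadName())
              .append(" state=").append(thread.threadState())
              .append(" activeTasks=").append(thread.activeTasks())
              .append(" standbyTasks=").append(thread.standbyTasks())
              .append('\n');
        }
        return sb.toString();
    }
}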

About @Evolving: that is correct, too. We just try to be nice and
provide backward compatibility even if we never promised :)


-Matthias

On 2/26/18 1:38 AM, Jacek Laskowski wrote:
> Hi,
> 
> I've just found that KafkaStreams.toString [1] is deprecated, but I think
> that it does not make sense.
> 
> The parameterless toString is simply part of the java.lang.Object contract
> and I don't think it could ever get deprecated (unless it is by Object).
> 
> I think it'd be much better if the method used whatever it's recommended
> for a toString-like functionality (which seems that
> KafkaStreams.localThreadsMetadata [2] is or something based on that).
> 
> Thoughts?
> 
> p.s. I also think that since KafkaStreams is marked
> as @InterfaceStability.Evolving using @Deprecated markers does not add much
> if anything. I thought that Evolving was to say that literally everything
> could change without warning at any time, couldn't it?
> 
> [1]
> https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java?utf8=%E2%9C%93#L900
> 
> [2]
> https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java?utf8=%E2%9C%93#L1061
> 
> Pozdrawiam,
> Jacek Laskowski
> 
> https://about.me/JacekLaskowski
> Mastering Spark SQL https://bit.ly/mastering-spark-sql
> Spark Structured Streaming https://bit.ly/spark-structured-streaming
> Mastering Kafka Streams https://bit.ly/mastering-kafka-streams
> Follow me at https://twitter.com/jaceklaskowski
> 



signature.asc
Description: OpenPGP digital signature


Re: [VOTE] KIP-171 - Extend Consumer Group Reset Offset for Stream Application

2018-02-26 Thread Guozhang Wang
Hi Jorge,

I agree on being consistent across our tools.

Besides kafka-consumer-groups and kafka-streams-application-reset, here are a
couple of other classes to consider adding the --execute option to for the
next major release:

1. kafka-preferred-replica-elections
2. kafka-reassign-partitions
3. kafka-delete-records
4. kafka-topics
5. kafka-acls
6. kafka-configs
7. kafka-delegation-tokens


Guozhang

On Mon, Feb 26, 2018 at 3:03 AM, Jorge Esteban Quilcate Otoya <
quilcate.jo...@gmail.com> wrote:

> Hi all,
>
> Thanks for the feedback.
>
> I have updated the "Compatibility, Deprecation, and Migration Plan" section
> to document this to support the rollback. I probably should have handled
> this change, as small as it looks, as a new KIP to avoid this issue.
>
> I like Colin's idea about asking for confirmation, although I'm not sure if
> another tool has already this behavior and could create more confusion
> (e.g. why this command ask for confirmation and others don't). Maybe we
> will require a more broad looks at the CLI tools to agree on this?
>
> Jorge.
>
> El jue., 22 feb. 2018 a las 21:09, Guozhang Wang ()
> escribió:
>
> > Yup, agreed.
> >
> > On Thu, Feb 22, 2018 at 11:46 AM, Ismael Juma  wrote:
> >
> > > Hi Guozhang,
> > >
> > > To clarify my comment: any change with a backwards compatibility impact
> > > should be mentioned in the "Compatibility, Deprecation, and Migration
> > Plan"
> > > section (in addition to the deprecation period and only happening in a
> > > major release as you said).
> > >
> > > Ismael
> > >
> > > On Thu, Feb 22, 2018 at 11:10 AM, Guozhang Wang 
> > > wrote:
> > >
> > > > Just to clarify, the KIP itself has mentioned about the change so the
> > PR
> > > > was not un-intentional:
> > > >
> > > > "
> > > >
> > > > 3. Keep execution parameters uniform between both tools: It will
> > execute
> > > by
> > > > default, and have a `dry-run` parameter just show the results. This
> > will
> > > > involve change current `ConsumerGroupCommand` to change execution
> > > options.
> > > >
> > > > "
> > > >
> > > > We were agreed that the proposed change is better than the current
> > > status,
> > > > since may people not using "--execute" on consumer reset tool were
> > > actually
> > > > surprised that nothing gets executed. What we were concerning as a
> > > > hind-sight is that instead of doing such change in a minor release
> like
> > > > 1.1, we should consider only doing that in the next major release as
> it
> > > > breaks compatibility. In the past when we are going to remove /
> replace
> > > > certain option we would first add a going-to-be-deprecated warning in
> > the
> > > > previous releases until it was finally removed. So Jason's suggestion
> > is
> > > to
> > > > do the same: we are not reverting this change forever, but trying to
> > > delay
> > > > it after 1.1.
> > > >
> > > >
> > > > Guozhang
> > > >
> > > >
> > > > On Thu, Feb 22, 2018 at 10:56 AM, Colin McCabe 
> > > wrote:
> > > >
> > > > > Perhaps, if the user doesn't pass the --execute flag, the tool
> should
> > > > > print a prompt like "would you like to perform this reset?" and
> wait
> > > for
> > > > a
> > > > > Y / N (or yes or no) input from the command-line.  Then, if the
> > > --execute
> > > > > flag is passed, we skip this.  That seems 99% compatible, and also
> > > > > accomplishes the goal of making the tool less confusing.
> > > > >
> > > > > best,
> > > > > Colin
> > > > >
> > > > >
> > > > > On Thu, Feb 22, 2018, at 10:23, Ismael Juma wrote:
> > > > > > Yes, let's revert the incompatible changes. There was no mention
> of
> > > > > > compatibility impact on the KIP and we should ensure that is the
> > case
> > > > for
> > > > > > 1.1.0.
> > > > > >
> > > > > > Ismael
> > > > > >
> > > > > > On Thu, Feb 22, 2018 at 9:55 AM, Jason Gustafson <
> > ja...@confluent.io
> > > >
> > > > > wrote:
> > > > > >
> > > > > > > I know it's a been a while since this vote passed, but I think
> we
> > > > need
> > > > > to
> > > > > > > reconsider the incompatible changes to the consumer reset tool.
> > > > > > > Specifically, we have removed the --execute option without
> > > > deprecating
> > > > > it
> > > > > > > first, and we have changed the default behavior to execute
> rather
> > > > than
> > > > > do a
> > > > > > > dry run. The latter in particular seems dangerous since users
> who
> > > > were
> > > > > > > previously using the default behavior to view offsets will now
> > > > suddenly
> > > > > > > find the offsets already committed. As far as I can tell, this
> > > change
> > > > > was
> > > > > > > done mostly for cosmetic reasons. Without a compelling reason,
> I
> > > > think
> > > > > we
> > > > > > > should err on the side of maintaining compatibility. At a
> > minimum,
> > > if
> > > > > we
> > > > > > > really want to break compatibility, we should wait for the next
> > > major
> > > > > > > release.
> > > > > > >

about consumer offset lost

2018-02-26 Thread zho...@cnsuning.com
Hi, I am a Kafka user and I have a question.
Question description:
  We are continuously consuming a topic and invoking commitOffseSynz(). After 
a period of time (the time is not fixed, about one hour),
the consumer reprocesses from the earliest offset of the target topic. Why?

consumer config:
   enable.auto.commit=false
   auto.offset.reset=earliest



Zhou Hu (12091706)
Suning Commerce Group, Data Cloud Company, Database R&D Center
Phone: 18651662319
Email: zho...@cnsuning.com
Company address: Headquarters, No. 1 Suning Avenue, Xuzhuang Software Park, Xuanwu District, Nanjing
Postal code: 210042


[jira] [Resolved] (KAFKA-4921) AssignedPartition should implement equals

2018-02-26 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-4921.
--
   Resolution: Fixed
Fix Version/s: 0.11.0.0

> AssignedPartition should implement equals
> -
>
> Key: KAFKA-4921
> URL: https://issues.apache.org/jira/browse/KAFKA-4921
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Marc Juchli
>Priority: Minor
> Fix For: 0.11.0.0
>
>
> We ran FindBugs, which resulted in the "Bad practice warning": 
>   
> Bug type EQ_COMPARETO_USE_OBJECT_EQUALS (click for details) 
> In class 
> org.apache.kafka.streams.processor.internals.StreamPartitionAssignor$AssignedPartition
> In method 
> org.apache.kafka.streams.processor.internals.StreamPartitionAssignor$AssignedPartition.compareTo(StreamPartitionAssignor$AssignedPartition)
> At StreamPartitionAssignor.java:[line 75]
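
For reference, a minimal illustration of the pattern this FindBugs rule flags
(hypothetical class, not the actual AssignedPartition code): a class that
defines compareTo() should also define equals() and hashCode() so that
compareTo() returning 0 agrees with equals().

{code:java}
import java.util.Objects;

// Hypothetical example of fixing EQ_COMPARETO_USE_OBJECT_EQUALS.
final class RankedItem implements Comparable<RankedItem> {
    final int rank;

    RankedItem(final int rank) {
        this.rank = rank;
    }

    @Override
    public int compareTo(final RankedItem other) {
        return Integer.compare(rank, other.rank);
    }

    @Override
    public boolean equals(final Object o) {
        // Keep equals consistent with compareTo: equal iff compareTo returns 0.
        return this == o || (o instanceof RankedItem && rank == ((RankedItem) o).rank);
    }

    @Override
    public int hashCode() {
        return Objects.hash(rank);
    }
}
{code}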



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Jenkins build is back to normal : kafka-trunk-jdk7 #3214

2018-02-26 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-5327) Console Consumer should only poll for up to max messages

2018-02-26 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-5327.

Resolution: Fixed

> Console Consumer should only poll for up to max messages
> 
>
> Key: KAFKA-5327
> URL: https://issues.apache.org/jira/browse/KAFKA-5327
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Dustin Cote
>Assignee: huxihx
>Priority: Minor
> Fix For: 1.2.0
>
>
> The ConsoleConsumer has a --max-messages flag that can be used to limit the 
> number of messages consumed. However, the number of records actually consumed 
> is governed by max.poll.records. This means you see one message on the 
> console, but your offset has moved forward a default of 500, which is kind of 
> counterintuitive. It would be good to only commit offsets for messages we 
> have printed to the console.
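
For illustration, a stripped-down consumer loop showing the intended behavior
(not the ConsoleConsumer implementation; broker address, group id, and topic
name are placeholders): even if poll() returns up to max.poll.records records,
offsets are only committed for the records actually printed.

{code:java}
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class PrintAtMostN {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "console-like-group");      // placeholder
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        final int maxMessages = 1;
        int printed = 0;
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test-topic")); // placeholder
            while (printed < maxMessages) {
                // poll(Duration) is the newer-client overload; older clients use poll(long).
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.value());
                    // Commit only up to (and including) the record we just printed.
                    consumer.commitSync(Collections.singletonMap(
                            new TopicPartition(record.topic(), record.partition()),
                            new OffsetAndMetadata(record.offset() + 1)));
                    if (++printed >= maxMessages) {
                        break;
                    }
                }
            }
        }
    }
}
{code}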



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Jenkins build is back to normal : kafka-trunk-jdk8 #2439

2018-02-26 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk9 #435

2018-02-26 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-5327; Console Consumer should not commit messages not printed

--
[...truncated 1.89 MB...]

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
shouldForwardToCorrectProcessorNodeWhenMultiCacheEvictions STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
shouldForwardToCorrectProcessorNodeWhenMultiCacheEvictions PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > testCount 
STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > testCount 
PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testCountWithInternalStore STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testCountWithInternalStore PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testAggCoalesced STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testAggCoalesced PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testAggRepartition STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testAggRepartition PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testRemoveOldBeforeAddNew STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testRemoveOldBeforeAddNew PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testCountCoalesced STARTED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testCountCoalesced PASSED

org.apache.kafka.streams.kstream.internals.KTableForeachTest > testForeach 
STARTED

org.apache.kafka.streams.kstream.internals.KTableForeachTest > testForeach 
PASSED

org.apache.kafka.streams.kstream.internals.KTableForeachTest > testTypeVariance 
STARTED

org.apache.kafka.streams.kstream.internals.KTableForeachTest > testTypeVariance 
PASSED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
retentionTimeShouldBeGapIfGapIsLargerThanDefaultRetentionTime STARTED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
retentionTimeShouldBeGapIfGapIsLargerThanDefaultRetentionTime PASSED

org.apache.kafka.streams.kstream.SessionWindowsTest > shouldSetWindowGap STARTED

org.apache.kafka.streams.kstream.SessionWindowsTest > shouldSetWindowGap PASSED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
shouldBeEqualWhenGapAndMaintainMsAreTheSame STARTED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
shouldBeEqualWhenGapAndMaintainMsAreTheSame PASSED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
retentionTimeMustNotBeNegative STARTED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
retentionTimeMustNotBeNegative PASSED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
shouldNotBeEqualWhenMaintainMsDifferent STARTED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
shouldNotBeEqualWhenMaintainMsDifferent PASSED

org.apache.kafka.streams.kstream.SessionWindowsTest > windowSizeMustNotBeZero 
STARTED

org.apache.kafka.streams.kstream.SessionWindowsTest > windowSizeMustNotBeZero 
PASSED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
windowSizeMustNotBeNegative STARTED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
windowSizeMustNotBeNegative PASSED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
shouldSetWindowRetentionTime STARTED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
shouldSetWindowRetentionTime PASSED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
shouldNotBeEqualWhenGapIsDifferent STARTED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
shouldNotBeEqualWhenGapIsDifferent PASSED

org.apache.kafka.streams.kstream.WindowTest > 
shouldThrowIfEndIsSmallerThanStart STARTED

org.apache.kafka.streams.kstream.WindowTest > 
shouldThrowIfEndIsSmallerThanStart PASSED

org.apache.kafka.streams.kstream.WindowTest > 
shouldNotBeEqualIfDifferentWindowType STARTED

org.apache.kafka.streams.kstream.WindowTest > 
shouldNotBeEqualIfDifferentWindowType PASSED

org.apache.kafka.streams.kstream.WindowTest > shouldBeEqualIfStartAndEndSame 
STARTED

org.apache.kafka.streams.kstream.WindowTest > shouldBeEqualIfStartAndEndSame 
PASSED

org.apache.kafka.streams.kstream.WindowTest > shouldNotBeEqualIfNull STARTED

org.apache.kafka.streams.kstream.WindowTest > shouldNotBeEqualIfNull PASSED

org.apache.kafka.streams.kstream.WindowTest > shouldThrowIfStartIsNegative 
STARTED

org.apache.kafka.streams.kstream.WindowTest > shouldThrowIfStartIsNegative 
PASSED

org.apache.kafka.streams.kstream.WindowTest > 
shouldNotBeEqualIfStartOrEndIsDifferent STARTED

org.apache.kafka.streams.kstream.WindowTest > 
shouldNotBeEqualIfStartOrEndIsDifferent PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 
shouldAddRegexTopicToLatestAutoOffsetResetList STARTED

org.apache.kafka.streams.kstream.KStreamBuilderTest > 

Re: [VOTE] KIP-171 - Extend Consumer Group Reset Offset for Stream Application

2018-02-26 Thread Matthias J. Sax
I agree on consistency, too.

However, I am not sure if we should introduce an explicit --execute
option. Anybody familiar with Linux tools will expect a command to
execute by default.

Thus, I would suggest removing --execute from all tools that use this
option atm.

Btw: there is a related Jira:
https://issues.apache.org/jira/browse/KAFKA-1299

Furthermore, this also affects arguments like

--bootstrap-servers
vs
--broker-list

and maybe others.

IMHO, all tools should use the same names. Thus, it's a larger change...
But totally worth doing it.


-Matthias

On 2/26/18 10:09 AM, Guozhang Wang wrote:
> Hi Jorge,
> 
> I agree on being consistent across our tools.
> 
> Besides the kafka-consumer-groups and kafka-streams-application-reset, a
> couple of other classes to consider adding the --execute options for the
> next major release:
> 
> 1. kafka-preferred-replica-elections
> 2. kafka-reassign-partitions
> 3. kafka-delete-records
> 4. kafka-topics
> 5. kafka-acls
> 6. kafka-configs
> 7. kafka-delegation-tokens
> 
> 
> Guozhang
> 
> On Mon, Feb 26, 2018 at 3:03 AM, Jorge Esteban Quilcate Otoya <
> quilcate.jo...@gmail.com> wrote:
> 
>> Hi all,
>>
>> Thanks for the feedback.
>>
>> I have updated the "Compatibility, Deprecation, and Migration Plan" section
>> to document this to support the rollback. I probably should have handled
>> this change, as small as it looks, as a new KIP to avoid this issue.
>>
>> I like Colin's idea about asking for confirmation, although I'm not sure if
>> another tool has already this behavior and could create more confusion
>> (e.g. why this command ask for confirmation and others don't). Maybe we
>> will require a more broad looks at the CLI tools to agree on this?
>>
>> Jorge.
>>
>> El jue., 22 feb. 2018 a las 21:09, Guozhang Wang ()
>> escribió:
>>
>>> Yup, agreed.
>>>
>>> On Thu, Feb 22, 2018 at 11:46 AM, Ismael Juma  wrote:
>>>
 Hi Guozhang,

 To clarify my comment: any change with a backwards compatibility impact
 should be mentioned in the "Compatibility, Deprecation, and Migration
>>> Plan"
 section (in addition to the deprecation period and only happening in a
 major release as you said).

 Ismael

 On Thu, Feb 22, 2018 at 11:10 AM, Guozhang Wang 
 wrote:

> Just to clarify, the KIP itself has mentioned about the change so the
>>> PR
> was not un-intentional:
>
> "
>
> 3. Keep execution parameters uniform between both tools: It will
>>> execute
 by
> default, and have a `dry-run` parameter just show the results. This
>>> will
> involve change current `ConsumerGroupCommand` to change execution
 options.
>
> "
>
> We were agreed that the proposed change is better than the current
 status,
> since may people not using "--execute" on consumer reset tool were
 actually
> surprised that nothing gets executed. What we were concerning as a
> hind-sight is that instead of doing such change in a minor release
>> like
> 1.1, we should consider only doing that in the next major release as
>> it
> breaks compatibility. In the past when we are going to remove /
>> replace
> certain option we would first add a going-to-be-deprecated warning in
>>> the
> previous releases until it was finally removed. So Jason's suggestion
>>> is
 to
> do the same: we are not reverting this change forever, but trying to
 delay
> it after 1.1.
>
>
> Guozhang
>
>
> On Thu, Feb 22, 2018 at 10:56 AM, Colin McCabe 
 wrote:
>
>> Perhaps, if the user doesn't pass the --execute flag, the tool
>> should
>> print a prompt like "would you like to perform this reset?" and
>> wait
 for
> a
>> Y / N (or yes or no) input from the command-line.  Then, if the
 --execute
>> flag is passed, we skip this.  That seems 99% compatible, and also
>> accomplishes the goal of making the tool less confusing.
>>
>> best,
>> Colin
>>
>>
>> On Thu, Feb 22, 2018, at 10:23, Ismael Juma wrote:
>>> Yes, let's revert the incompatible changes. There was no mention
>> of
>>> compatibility impact on the KIP and we should ensure that is the
>>> case
> for
>>> 1.1.0.
>>>
>>> Ismael
>>>
>>> On Thu, Feb 22, 2018 at 9:55 AM, Jason Gustafson <
>>> ja...@confluent.io
>
>> wrote:
>>>
 I know it's a been a while since this vote passed, but I think
>> we
> need
>> to
 reconsider the incompatible changes to the consumer reset tool.
 Specifically, we have removed the --execute option without
> deprecating
>> it
 first, and we have changed the default behavior to execute
>> rather
> than
>> do a
 dry run. The latter in particular seems dangerous since users
>> who
> were
 previously using the default behavior to view 

[jira] [Resolved] (KAFKA-4920) Stamped should implement equals

2018-02-26 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-4920.
--
Resolution: Won't Fix

> Stamped should implement equals
> ---
>
> Key: KAFKA-4920
> URL: https://issues.apache.org/jira/browse/KAFKA-4920
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Marc Juchli
>Priority: Minor
>
> We ran FindBugs, which resulted in the "Bad practice warning": 
> Bug type EQ_COMPARETO_USE_OBJECT_EQUALS (click for details) 
> In class org.apache.kafka.streams.processor.internals.Stamped
> In method 
> org.apache.kafka.streams.processor.internals.Stamped.compareTo(Object)
> At Stamped.java:[lines 31-35]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [STREAMS] KafkaStreams.toString() deprecated?!

2018-02-26 Thread Jacek Laskowski
Hi Matthias,

That makes things so much clearer. Thanks for your help understanding the
codebase better.

Speaking of the Kafka Streams code base, I've got typo fixes and docs
improvements scattered locally here and there throughout the code of Kafka
Streams. What are the proper steps to get them merged? Just a pull request?
Or should I file a JIRA issue and... please guide. Appreciated.

Pozdrawiam,
Jacek Laskowski

https://about.me/JacekLaskowski
Mastering Spark SQL https://bit.ly/mastering-spark-sql
Spark Structured Streaming https://bit.ly/spark-structured-streaming
Mastering Kafka Streams https://bit.ly/mastering-kafka-streams
Follow me at https://twitter.com/jaceklaskowski

On Mon, Feb 26, 2018 at 6:37 PM, Matthias J. Sax 
wrote:

> Thanks for the feedback.
>
> You are of course right, that we cannot remove toString() -- we used the
> @deprecated annotation to imply that the overwrite will be removed and
> toString() will fall back to Object.toString() in the future and thus
> not provide any useful information (at least this was the plan).
>
> Using localThreadMetadata() within toString() and print whatever it
> return might be an alternative. However, I am not sure if we gain much.
> Also, KafkaConsumer/Producer/AdminClient don't overwrite toString() --
> IMHO, not overwriting toString() aligns better with other parts of the
> code.
>
> About @Evolving: that is correct, too. We just try to be nice and
> provide backward compatibility even if we never promised :)
>
>
> -Matthias
>
> On 2/26/18 1:38 AM, Jacek Laskowski wrote:
> > Hi,
> >
> > I've just found that KafkaStreams.toString [1] is deprecated, but I think
> > that it does not make sense.
> >
> > The parameterless toString is simply part of the java.lang.Object
> contract
> > and I don't think it could ever get deprecated (unless it is by Object).
> >
> > I think it'd be much better if the method used whatever it's recommended
> > for a toString-like functionality (which seems that
> > KafkaStreams.localThreadsMetadata [2] is or something based on that).
> >
> > Thoughts?
> >
> > p.s. I also think that since KafkaStreams is marked
> > as @InterfaceStability.Evolving using @Deprecated markers does not add
> much
> > if anything. I thought that Evolving was to say that literally everything
> > could change without warning at any time, couldn't it?
> >
> > [1]
> > https://github.com/apache/kafka/blob/trunk/streams/src/
> main/java/org/apache/kafka/streams/KafkaStreams.java?utf8=%E2%9C%93#L900
> >
> > [2]
> > https://github.com/apache/kafka/blob/trunk/streams/src/
> main/java/org/apache/kafka/streams/KafkaStreams.java?utf8=%E2%9C%93#L1061
> >
> > Pozdrawiam,
> > Jacek Laskowski
> > 
> > https://about.me/JacekLaskowski
> > Mastering Spark SQL https://bit.ly/mastering-spark-sql
> > Spark Structured Streaming https://bit.ly/spark-structured-streaming
> > Mastering Kafka Streams https://bit.ly/mastering-kafka-streams
> > Follow me at https://twitter.com/jaceklaskowski
> >
>
>


[jira] [Created] (KAFKA-6593) Coordinator disconnect in heartbeat thread can cause commitSync to block indefinitely

2018-02-26 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-6593:
--

 Summary: Coordinator disconnect in heartbeat thread can cause 
commitSync to block indefinitely
 Key: KAFKA-6593
 URL: https://issues.apache.org/jira/browse/KAFKA-6593
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 0.11.0.2, 1.0.0
Reporter: Jason Gustafson
Assignee: Jason Gustafson


If a coordinator disconnect is observed in the heartbeat thread, it can cause a 
pending offset commit to be cancelled just before the foreground thread begins 
waiting on its response in poll(). Since the poll timeout is Long.MAX_VALUE, 
this will cause the consumer to effectively hang until some other network event 
causes the poll() to return. We try to protect this case with a poll condition 
on the future, but this isn't bulletproof since the future can be completed 
outside of the lock.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [STREAMS] KafkaStreams.toString() deprecated?!

2018-02-26 Thread Guozhang Wang
Hello Jacek,

Please see this guidance on submitting a PR; usually for typo fixes and
minor doc changes, one does not need to create a JIRA but can list the PR
title as "MINOR:..." or "HOTFIX: .."

https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes#ContributingCodeChanges-PullRequest

Guozhang

On Mon, Feb 26, 2018 at 1:02 PM, Jacek Laskowski  wrote:

> Hi Matthias,
>
> That makes things so much clearer. Thanks for your help understanding the
> codebase better.
>
> Speaking of the code base of Kafka Streams, I've got typo fixes and docs
> improvements locally here and there scattered throughout the code of Kafka
> Streams, what are the proper steps to get them merged? Just a pull request?
> Or should I file an JIRA issue and...please guide. Appreciated.
>
> Pozdrawiam,
> Jacek Laskowski
> 
> https://about.me/JacekLaskowski
> Mastering Spark SQL https://bit.ly/mastering-spark-sql
> Spark Structured Streaming https://bit.ly/spark-structured-streaming
> Mastering Kafka Streams https://bit.ly/mastering-kafka-streams
> Follow me at https://twitter.com/jaceklaskowski
>
> On Mon, Feb 26, 2018 at 6:37 PM, Matthias J. Sax 
> wrote:
>
> > Thanks for the feedback.
> >
> > You are of course right, that we cannot remove toString() -- we used the
> > @deprecated annotation to imply that the overwrite will be removed and
> > toString() will fall back to Object.toString() in the future and thus
> > not provide any useful information (at least this was the plan).
> >
> > Using localThreadMetadata() within toString() and print whatever it
> > return might be an alternative. However, I am not sure if we gain much.
> > Also, KafkaConsumer/Producer/AdminClient don't overwrite toString() --
> > IMHO, not overwriting toString() aligns better with other parts of the
> > code.
> >
> > About @Evolving: that is correct, too. We just try to be nice and
> > provide backward compatibility even if we never promised :)
> >
> >
> > -Matthias
> >
> > On 2/26/18 1:38 AM, Jacek Laskowski wrote:
> > > Hi,
> > >
> > > I've just found that KafkaStreams.toString [1] is deprecated, but I
> think
> > > that it does not make sense.
> > >
> > > The parameterless toString is simply part of the java.lang.Object
> > contract
> > > and I don't think it could ever get deprecated (unless it is by
> Object).
> > >
> > > I think it'd be much better if the method used whatever it's
> recommended
> > > for a toString-like functionality (which seems that
> > > KafkaStreams.localThreadsMetadata [2] is or something based on that).
> > >
> > > Thoughts?
> > >
> > > p.s. I also think that since KafkaStreams is marked
> > > as @InterfaceStability.Evolving using @Deprecated markers does not add
> > much
> > > if anything. I thought that Evolving was to say that literally
> everything
> > > could change without warning at any time, couldn't it?
> > >
> > > [1]
> > > https://github.com/apache/kafka/blob/trunk/streams/src/
> > main/java/org/apache/kafka/streams/KafkaStreams.java?utf8=%E2%9C%93#L900
> > >
> > > [2]
> > > https://github.com/apache/kafka/blob/trunk/streams/src/
> > main/java/org/apache/kafka/streams/KafkaStreams.java?
> utf8=%E2%9C%93#L1061
> > >
> > > Pozdrawiam,
> > > Jacek Laskowski
> > > 
> > > https://about.me/JacekLaskowski
> > > Mastering Spark SQL https://bit.ly/mastering-spark-sql
> > > Spark Structured Streaming https://bit.ly/spark-structured-streaming
> > > Mastering Kafka Streams https://bit.ly/mastering-kafka-streams
> > > Follow me at https://twitter.com/jaceklaskowski
> > >
> >
> >
>



-- 
-- Guozhang


KAFKA-5609

2018-02-26 Thread Asutosh Pandya
Hello Team,

I have created the pull request below for KAFKA-5609. I am also attaching the
patch for reference.

Jira: https://issues.apache.org/jira/browse/KAFKA-5609
PR: https://github.com/apache/kafka/pull/4619

Let me know if any more information is needed in order to merge the Pull
Request.

Best Regards,
Asutosh


Wiki permission

2018-02-26 Thread Panuwat Anawatmongkhon
Hi all,
Can I have permission to create and edit KIPs?
My profile name is Panuwat Anawatmongkhon.
Cheers,
Benz


[jira] [Created] (KAFKA-6595) Kafka connect commit offset incorrectly.

2018-02-26 Thread Hanlin Liu (JIRA)
Hanlin Liu created KAFKA-6595:
-

 Summary: Kafka connect commit offset incorrectly.
 Key: KAFKA-6595
 URL: https://issues.apache.org/jira/browse/KAFKA-6595
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 0.10.2.0
Reporter: Hanlin Liu


SourceTaskOffsetCommitter calls commitOffset() and waits for all incomplete 
records to be sent. While the task is being stopped, commitOffset() is called again 
by the finally block in WorkerSourceTask.execute(); it then throws an {{Invalid call 
to OffsetStorageWriter flush() while already flushing, the framework should not 
allow this}} exception. This triggers closing the producer without waiting for the 
flush timeout.

After 30 seconds, all incomplete records have been forcefully aborted. If 
{{offset.flush.timeout.ms}} is configured to be larger than 30 seconds, 
WorkerSourceTask will consider those aborted records as sent within the flush 
timeout, which results in incorrectly flushing the source offset.

 
{code:java}
// code placeholder

2018-02-27 02:59:33,134 INFO  [] Stopping connector 
dp-sqlserver-connector-dptask_455   [pool-3-thread-6][Worker.java:254]

2018-02-27 02:59:33,134 INFO  [] Stopped connector 
dp-sqlserver-connector-dptask_455   [pool-3-thread-6][Worker.java:264]



2018-02-27 02:59:34,121 ERROR [] Invalid call to OffsetStorageWriter flush() 
while already flushing, the framework should not allow this   
[pool-1-thread-13][OffsetStorageWriter.java:110]

2018-02-27 02:59:34,121 ERROR [] Task dp-sqlserver-connector-dptask_455-0 threw 
an uncaught and unrecoverable exception   
[pool-1-thread-13][WorkerTask.java:141]

org.apache.kafka.connect.errors.ConnectException: OffsetStorageWriter is 
already flushing

        at 
org.apache.kafka.connect.storage.OffsetStorageWriter.beginFlush(OffsetStorageWriter.java:112)

        at 
org.apache.kafka.connect.runtime.WorkerSourceTask.commitOffsets(WorkerSourceTask.java:294)

        at 
org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:177)

        at 
org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:139)

        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:182)

        at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)

        at java.util.concurrent.FutureTask.run(FutureTask.java:266)

        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

        at java.lang.Thread.run(Thread.java:745)

2018-02-27 03:00:00,734 ERROR [] Graceful stop of task 
dp-sqlserver-connector-dptask_455-0 failed.   [pool-3-thread-1][Worker.java:405]

2018-02-27 03:00:04,126 INFO  [] Proceeding to force close the producer since 
pending requests could not be completed within timeout 30 ms.   
[pool-1-thread-13][KafkaProducer.java:713]

2018-02-27 03:00:04,127 ERROR [] dp-sqlserver-connector-dptask_455-0 failed to 
send record to dptask_455.JF_TEST_11.jf_test_tab_8: {}   
[kafka-producer-network-thread | producer-31][WorkerSourceTask.java:228]

java.lang.IllegalStateException: Producer is closed forcefully.

        at 
org.apache.kafka.clients.producer.internals.RecordAccumulator.abortBatches(RecordAccumulator.java:522)

        at 
org.apache.kafka.clients.producer.internals.RecordAccumulator.abortIncompleteBatches(RecordAccumulator.java:502)

        at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:147)

        at java.lang.Thread.run(Thread.java:745)

2018-02-27 03:00:09,920 INFO  [] Finished 
WorkerSourceTask{id=dp-sqlserver-connector-dptask_455-0} commitOffsets 
successfully in 47088 ms   [pool-4-thread-1][WorkerSourceTask.java:371]
{code}
 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Jenkins build is back to normal : kafka-trunk-jdk9 #436

2018-02-26 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk8 #2441

2018-02-26 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Rename stream partition assignor to streams partition assignor

--
[...truncated 415.19 KB...]

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards PASSED


Jenkins build is back to normal : kafka-trunk-jdk7 #3216

2018-02-26 Thread Apache Jenkins Server
See 




Re: [VOTE] 1.1.0 RC0

2018-02-26 Thread Vahid S Hashemian
+1 (non-binding)

Built the source and ran quickstart (including streams) successfully on 
Ubuntu (with both Java 8 and Java 9).

I understand the Windows platform is not officially supported, but I ran 
the same on Windows 10, and except for Step 7 (Connect) everything else 
worked fine.

There are a number of warnings and errors (including 
java.lang.ClassNotFoundException). Here's the final error message:

> bin\windows\connect-standalone.bat config\connect-standalone.properties 
config\connect-file-source.properties config\connect-file-sink.properties
...
[2018-02-26 14:55:56,529] ERROR Stopping after connector error 
(org.apache.kafka.connect.cli.ConnectStandalone)
java.lang.NoClassDefFoundError: 
org/apache/kafka/connect/transforms/util/RegexValidator
at 
org.apache.kafka.connect.runtime.SinkConnectorConfig.(SinkConnectorConfig.java:46)
at 
org.apache.kafka.connect.runtime.AbstractHerder.validateConnectorConfig(AbstractHerder.java:263)
at 
org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:164)
at 
org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:107)
Caused by: java.lang.ClassNotFoundException: 
org.apache.kafka.connect.transforms.util.RegexValidator
at 
java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:582)
at 
java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:185)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:496)
... 4 more

Thanks for running the release.
--Vahid




From:   Damian Guy 
To: dev@kafka.apache.org, us...@kafka.apache.org, 
kafka-clie...@googlegroups.com
Date:   02/24/2018 08:16 AM
Subject:[VOTE] 1.1.0 RC0



Hello Kafka users, developers and client-developers,

This is the first candidate for release of Apache Kafka 1.1.0.

This is a minor version release of Apache Kafka. It includes 29 new KIPs.
Please see the release plan for more details:

https://urldefense.proofpoint.com/v2/url?u=https-3A__cwiki.apache.org_confluence_pages_viewpage.action-3FpageId-3D71764913=DwIBaQ=jf_iaSHvJObTbx-siA1ZOg=Q_itwloTQj3_xUKl7Nzswo6KE4Nj-kjJc7uSVcviKUc=K9Iz2hWA2pj4QGxW6fleW20K0M7oEeWCbqs5nbbUY0c=M1liORvtcIt7pZ8e5GnLr9a1i6SOUY4bvjHYOrY_zcE=


A few highlights:

* Significant Controller improvements (much faster and session expiration
edge cases fixed)
* Data balancing across log directories (JBOD)
* More efficient replication when the number of partitions is large
* Dynamic Broker Configs
* Delegation tokens (KIP-48)
* Kafka Streams API improvements (KIP-205 / 210 / 220 / 224 / 239)

Release notes for the 1.1.0 release:
https://urldefense.proofpoint.com/v2/url?u=http-3A__home.apache.org_-7Edamianguy_kafka-2D1.1.0-2Drc0_RELEASE-5FNOTES.html=DwIBaQ=jf_iaSHvJObTbx-siA1ZOg=Q_itwloTQj3_xUKl7Nzswo6KE4Nj-kjJc7uSVcviKUc=K9Iz2hWA2pj4QGxW6fleW20K0M7oEeWCbqs5nbbUY0c=H6-O0mkXk2tT_7RlN4W9bJd_lpoOt5ranhTx28WdRnQ=


*** Please download, test and vote by Wednesday, February 28th, 5pm PT

Kafka's KEYS file containing PGP keys we use to sign the release:
https://urldefense.proofpoint.com/v2/url?u=http-3A__kafka.apache.org_KEYS=DwIBaQ=jf_iaSHvJObTbx-siA1ZOg=Q_itwloTQj3_xUKl7Nzswo6KE4Nj-kjJc7uSVcviKUc=K9Iz2hWA2pj4QGxW6fleW20K0M7oEeWCbqs5nbbUY0c=Eo5JrktOPUlA2-7W11222zSVYfR6oqzd9uiaUEod2D4=


* Release artifacts to be voted upon (source and binary):
https://urldefense.proofpoint.com/v2/url?u=http-3A__home.apache.org_-7Edamianguy_kafka-2D1.1.0-2Drc0_=DwIBaQ=jf_iaSHvJObTbx-siA1ZOg=Q_itwloTQj3_xUKl7Nzswo6KE4Nj-kjJc7uSVcviKUc=K9Iz2hWA2pj4QGxW6fleW20K0M7oEeWCbqs5nbbUY0c=LkMdsPX_jln_lIgxbKUbnElAiqkNdAWJCkA5kuIRU64=


* Maven artifacts to be voted upon:
https://urldefense.proofpoint.com/v2/url?u=https-3A__repository.apache.org_content_groups_staging_=DwIBaQ=jf_iaSHvJObTbx-siA1ZOg=Q_itwloTQj3_xUKl7Nzswo6KE4Nj-kjJc7uSVcviKUc=K9Iz2hWA2pj4QGxW6fleW20K0M7oEeWCbqs5nbbUY0c=E-Tj8DN83xkbvX8b6Vcel0z7v3AiRIusBmNtOIAUt_c=


* Javadoc:
https://urldefense.proofpoint.com/v2/url?u=http-3A__home.apache.org_-7Edamianguy_kafka-2D1.1.0-2Drc0_javadoc_=DwIBaQ=jf_iaSHvJObTbx-siA1ZOg=Q_itwloTQj3_xUKl7Nzswo6KE4Nj-kjJc7uSVcviKUc=K9Iz2hWA2pj4QGxW6fleW20K0M7oEeWCbqs5nbbUY0c=kfh4ovj2d15WkXcWajx2-ugtcAVvjOTklZhtF9jWDY8=


* Tag to be voted upon (off 1.1 branch) is the 1.1.0 tag:
https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_apache_kafka_tree_1.1.0-2Drc0=DwIBaQ=jf_iaSHvJObTbx-siA1ZOg=Q_itwloTQj3_xUKl7Nzswo6KE4Nj-kjJc7uSVcviKUc=K9Iz2hWA2pj4QGxW6fleW20K0M7oEeWCbqs5nbbUY0c=7maGMa53WObbHkEL4GVMLf8RppBtSTPH9z3dtfRc8Pk=



* Documentation:
https://urldefense.proofpoint.com/v2/url?u=http-3A__kafka.apache.org_11_documentation.html=DwIBaQ=jf_iaSHvJObTbx-siA1ZOg=Q_itwloTQj3_xUKl7Nzswo6KE4Nj-kjJc7uSVcviKUc=K9Iz2hWA2pj4QGxW6fleW20K0M7oEeWCbqs5nbbUY0c=JZ7LIhElx6wuu-F9awErsFSpunY5IhcZBxM2lWTR_eI=


* Protocol:

Re: about consumer offset lost

2018-02-26 Thread naresh Goud
Does your commitOffseSynz() call actually succeed?

If you're controlling the offset commit yourself, I believe you shouldn't
require auto.offset.reset=earliest.
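
For example, something along these lines (just a sketch, not your code) makes a
commit failure visible instead of silently losing the committed position:

import java.time.Duration;
import org.apache.kafka.clients.consumer.CommitFailedException;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class CommitCheck {
    // `consumer` is assumed to be an already-configured KafkaConsumer with
    // enable.auto.commit=false, as in your setup.
    static <K, V> void pollAndCommit(final KafkaConsumer<K, V> consumer) {
        // poll(Duration) is the newer-client overload; older clients use poll(long).
        final ConsumerRecords<K, V> records = consumer.poll(Duration.ofSeconds(1));
        // ... process `records` here ...
        try {
            // Synchronously commits the consumer's current positions; throws on failure.
            consumer.commitSync();
        } catch (final CommitFailedException e) {
            // Not harmless: the group rebalanced (e.g. max.poll.interval.ms exceeded)
            // and the offsets were NOT committed. Surface it rather than swallowing it.
            throw e;
        }
    }
}

If commits fail (or expire), auto.offset.reset=earliest is what would send the
group back to the earliest offset once there is no valid committed offset for a
partition.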

Thank you,
Naresh

On Mon, Feb 26, 2018 at 10:26 AM, zho...@cnsuning.com 
wrote:

> hi,i am a kafka User,i meet a question:
> Qustion desc:
>   continuously consuming a topic,and invoke
> commitOffseSynz(),after a period of time(Time is not fixed,about one hour)
> it will reprocess from earliest of target topic. why?
>
> consumer config:
>enable.auto.commit=false
>auto.offset.reset=earliest
>
>
>
> 周虎(12091706)
> 苏宁云商集团 数据云公司 数据库研发中心
> 电话:18651662319
> 邮箱地址:zho...@cnsuning.com
> 公司地址:南京市玄武区徐庄软件园苏宁大道1号总部
> 邮编:210042
>


Build failed in Jenkins: kafka-trunk-jdk7 #3215

2018-02-26 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-5327; Console Consumer should not commit messages not printed

--
[...truncated 243.71 KB...]
kafka.api.AuthorizerIntegrationTest > 
testSendOffsetsWithNoConsumerGroupDescribeAccess STARTED

kafka.api.AuthorizerIntegrationTest > 
testSendOffsetsWithNoConsumerGroupDescribeAccess PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicAndGroupRead STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicAndGroupRead PASSED

kafka.api.AuthorizerIntegrationTest > 
testIdempotentProducerNoIdempotentWriteAclInInitProducerId STARTED

kafka.api.AuthorizerIntegrationTest > 
testIdempotentProducerNoIdempotentWriteAclInInitProducerId PASSED

kafka.api.AuthorizerIntegrationTest > 
testSimpleConsumeWithExplicitSeekAndNoGroupAccess STARTED

kafka.api.AuthorizerIntegrationTest > 
testSimpleConsumeWithExplicitSeekAndNoGroupAccess PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > testProduceConsumeViaAssign 
PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > testNoGroupAcl STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl 
STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl 
PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoProduceWithoutDescribeAcl STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoProduceWithoutDescribeAcl PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDataChange 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDataChange PASSED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperSessionStateMetric STARTED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperSessionStateMetric PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testConnection STARTED

kafka.zookeeper.ZooKeeperClientTest > testConnection PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForCreation STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForCreation PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetAclExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetAclExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetAclNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetAclNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testConnectionLossRequestTermination 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testConnectionLossRequestTermination 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testExistsNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testExistsNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetDataNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetDataNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testConnectionTimeout STARTED

kafka.zookeeper.ZooKeeperClientTest > testConnectionTimeout PASSED

kafka.zookeeper.ZooKeeperClientTest > testUnresolvableConnectString STARTED

kafka.zookeeper.ZooKeeperClientTest > testUnresolvableConnectString PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > 

Re: [VOTE] KIP-171 - Extend Consumer Group Reset Offset for Stream Application

2018-02-26 Thread Jorge Esteban Quilcate Otoya
Hi all,

Thanks for the feedback.

I have updated the "Compatibility, Deprecation, and Migration Plan" section
to document this and support the rollback. I probably should have handled
this change, as small as it looks, as a new KIP to avoid this issue.

I like Colin's idea about asking for confirmation, although I'm not sure if
any other tool already has this behavior, and it could create more confusion
(e.g. why does this command ask for confirmation when others don't). Maybe we
will require a broader look at the CLI tools to agree on this?
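
To make Colin's idea concrete, a rough sketch of the prompt behavior
(illustration only, not actual ConsumerGroupCommand code):

import java.util.Scanner;

public class ConfirmPrompt {
    // Returns true if the action should be carried out.
    static boolean confirmed(final boolean executeFlagPassed) {
        if (executeFlagPassed) {
            return true; // --execute given explicitly: skip the prompt
        }
        System.out.print("Would you like to perform this reset? (y/n): ");
        final String answer = new Scanner(System.in).nextLine().trim().toLowerCase();
        return answer.equals("y") || answer.equals("yes");
    }
}

One concern with that approach: non-interactive runs (scripts, CI) would block
on the prompt unless they always pass --execute, which is part of why I wonder
whether we need that broader look at the tools.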

Jorge.

On Thu, Feb 22, 2018 at 9:09 PM, Guozhang Wang ()
wrote:

> Yup, agreed.
>
> On Thu, Feb 22, 2018 at 11:46 AM, Ismael Juma  wrote:
>
> > Hi Guozhang,
> >
> > To clarify my comment: any change with a backwards compatibility impact
> > should be mentioned in the "Compatibility, Deprecation, and Migration
> Plan"
> > section (in addition to the deprecation period and only happening in a
> > major release as you said).
> >
> > Ismael
> >
> > On Thu, Feb 22, 2018 at 11:10 AM, Guozhang Wang 
> > wrote:
> >
> > > Just to clarify, the KIP itself has mentioned about the change so the
> PR
> > > was not un-intentional:
> > >
> > > "
> > >
> > > 3. Keep execution parameters uniform between both tools: It will
> execute
> > by
> > > default, and have a `dry-run` parameter just show the results. This
> will
> > > involve change current `ConsumerGroupCommand` to change execution
> > options.
> > >
> > > "
> > >
> > > We agreed that the proposed change is better than the current status,
> > > since many people not using "--execute" on the consumer reset tool were
> > > actually surprised that nothing gets executed. What we were concerned
> > > about in hindsight is that instead of making such a change in a minor
> > > release like 1.1, we should consider only doing it in the next major
> > > release, as it breaks compatibility. In the past, when we were going to
> > > remove / replace a certain option, we would first add a
> > > going-to-be-deprecated warning in the previous releases until it was
> > > finally removed. So Jason's suggestion is to do the same: we are not
> > > reverting this change forever, but delaying it until after 1.1.
> > >
> > >
> > > Guozhang
> > >
> > >
> > > On Thu, Feb 22, 2018 at 10:56 AM, Colin McCabe 
> > wrote:
> > >
> > > > Perhaps, if the user doesn't pass the --execute flag, the tool should
> > > > print a prompt like "would you like to perform this reset?" and wait
> > > > for a Y / N (or yes or no) input from the command-line.  Then, if the
> > > > --execute flag is passed, we skip this.  That seems 99% compatible,
> > > > and also accomplishes the goal of making the tool less confusing.
> > > >
> > > > best,
> > > > Colin
> > > >
> > > >
> > > > On Thu, Feb 22, 2018, at 10:23, Ismael Juma wrote:
> > > > > Yes, let's revert the incompatible changes. There was no mention of
> > > > > compatibility impact in the KIP, and we should ensure that is the
> > > > > case for 1.1.0.
> > > > >
> > > > > Ismael
> > > > >
> > > > > On Thu, Feb 22, 2018 at 9:55 AM, Jason Gustafson <
> ja...@confluent.io
> > >
> > > > wrote:
> > > > >
> > > > > > I know it's a been a while since this vote passed, but I think we
> > > need
> > > > to
> > > > > > reconsider the incompatible changes to the consumer reset tool.
> > > > > > Specifically, we have removed the --execute option without
> > > deprecating
> > > > it
> > > > > > first, and we have changed the default behavior to execute rather
> > > than
> > > > do a
> > > > > > dry run. The latter in particular seems dangerous since users who
> > > were
> > > > > > previously using the default behavior to view offsets will now
> > > suddenly
> > > > > > find the offsets already committed. As far as I can tell, this
> > change
> > > > was
> > > > > > done mostly for cosmetic reasons. Without a compelling reason, I
> > > think
> > > > we
> > > > > > should err on the side of maintaining compatibility. At a
> minimum,
> > if
> > > > we
> > > > > > really want to break compatibility, we should wait for the next
> > major
> > > > > > release.
> > > > > >
> > > > > > Note that I have submitted a patch to revert this change here:
> > > > > > https://github.com/apache/kafka/pull/4611.
> > > > > >
> > > > > > Thoughts?
> > > > > >
> > > > > > Thanks,
> > > > > > Jason
> > > > > >
> > > > > >
> > > > > >
> > > > > > On Tue, Nov 14, 2017 at 3:26 AM, Jorge Esteban Quilcate Otoya <
> > > > > > quilcate.jo...@gmail.com> wrote:
> > > > > >
> > > > > > > Thanks to everyone for your feedback.
> > > > > > >
> > > > > > > KIP has been accepted and discussion is moved to PR.
> > > > > > >
> > > > > > > Cheers,
> > > > > > > Jorge.
> > > > > > >
> > > > > > > > On Mon, Nov 6, 2017 at 17:31, Rajini Sivaram (<
> > > > > > > > rajinisiva...@gmail.com>) wrote:
> > > > > > >
> > > > > > > > +1 (binding)

Re: [DISCUSS] KIP-253: Support in-order message delivery with partition expansion

2018-02-26 Thread Allen Wang
Hi Dong,

Please see my comments inline.

Thanks,
Allen

On Sun, Feb 25, 2018 at 3:33 PM, Dong Lin  wrote:

> Hey Allen,
>
> Thanks for your comment. I will comment inline.
>
> On Thu, Feb 22, 2018 at 3:05 PM, Allen Wang 
> wrote:
>
> > Overall this is a very useful feature. With this we can finally scale
> > keyed messages.
> >
> > +1 on the ability to remove partitions. This will greatly increase
> > Kafka's scalability in the cloud.
> >
> > For example, when there is a traffic increase, we can add brokers and
> > assign new partitions to the new brokers. When traffic decreases, we can
> > mark these new partitions as read only and remove them afterwards,
> > together with the brokers that host these partitions. This will be a
> > lightweight approach to scaling a Kafka cluster compared to partition
> > reassignment, where you always have to move data.
> >
> > I have some suggestions:
> >
> > - The KIP describes each step in detail, which is great. However, it
> > lacks the "why" part to explain the high-level goal we want to achieve
> > with each step. For example, the purpose of step 5 may be described as
> > "Make sure consumers always first finish consuming all data prior to
> > partition expansion to enforce message ordering".
> >
>
> Yeah I think this is useful. This is a non-trivial KIP and it is useful to
> explain the motivation of each change to help the reader. I will add the
> motivation for each change in the KIP. Please let me know if there is
> anything else that can make the KIP more readable.
>
>
> >
> > - The rejection of produce requests at partition expansion should be
> > configurable because it does not matter for non-keyed messages. Same with
> > the consumer behavior for step 5. This will ensure that for non-keyed
> > messages, partition expansion does not add the cost of possible message
> > drops on the producer or message latency on the consumer.
> >
>
> Ideally we would like to avoid adding extra configs to keep the interface
> simple. I think the current overhead in the producer is actually very
> small. Partition expansion or deletion should happen very infrequently.
> Note that our producer today needs to refresh metadata whenever there is
> leadership movement, i.e. the producer will receive
> NotLeaderForPartitionException from the old leader and keep refreshing
> metadata until it gets the new leader of the partition, which happens much
> more frequently than partition expansion or deletion. So I am not sure we
> should add a config to optimize this.
>

I was concerned that at a high message rate, rejecting requests could fill
the producer-side buffer and lead to unnecessary message drops for non-keyed
messages.

What about the delay on the consumer? It could be significant when one
consumer is lagging on certain partitions and all consumers in the same group
have to wait. That delay is again unnecessary for messages where order does
not matter.


>
>
>
> > - Since we now allow adding partitions for keyed messages while
> preserving
> > the message ordering on the consumer side, the default producer
> partitioner
> > seems to be inadequate as it rehashes all keys. As part of this KIP,
> should
> > we also include a partitioner that better handles partition changes, for
> > example, with consistent hashing?
> >

> I am not sure I understand the problem with the default partitioner. Can
> you explain a bit more why the default producer partitioner is inadequate
> with this KIP? And why consistent hashing can be helpful?
>
>
>
The default partitioner uses this algorithm for keyed messages:

Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions

As you can see, when the number of partitions is changed, there will be a
complete rehash of keys to partitions. This causes a lot of changes to the
key -> partition mapping and could be costly for stateful consumers. This was
not an issue before, since we could not change the number of partitions for
keyed messages anyway (to avoid impacting message order).
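
To give a rough sense of the scale, here is an illustrative toy snippet (not
from the KIP; the class name and the sample partition counts are made up) that
counts how many of a sample of keys would move under the modulo scheme when
the partition count changes:

import org.apache.kafka.common.utils.Utils;

public class RehashDemo {
    public static void main(String[] args) {
        int before = 8;   // example partition count before expansion
        int after = 12;   // example partition count after expansion
        int total = 100_000;
        int moved = 0;
        for (int i = 0; i < total; i++) {
            int hash = Utils.toPositive(Utils.murmur2(("key-" + i).getBytes()));
            if (hash % before != hash % after) {
                moved++;
            }
        }
        // Prints how many of the sample keys map to a different partition after the change.
        System.out.printf("%d of %d keys changed partition%n", moved, total);
    }
}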

If we introduce a partitioner with consistent hashing, the changes to the
key-to-partition mapping will be minimized, and I think that will help
consumers.
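
As an illustration only (not a proposed implementation and not part of the
KIP), a consistent-hashing Partitioner could look roughly like the sketch
below. The class name is made up, the virtual-node count is arbitrary, and the
ring is rebuilt on every call purely to keep the sketch short:

import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;
import java.util.concurrent.ThreadLocalRandom;

import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.utils.Utils;

// Toy consistent-hashing partitioner: each partition owns several "virtual
// nodes" on a hash ring, and a key is routed to the first partition at or
// after the key's point on the ring.
public class ConsistentHashingPartitioner implements Partitioner {

    private static final int VIRTUAL_NODES_PER_PARTITION = 100; // arbitrary

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionCountForTopic(topic);
        if (keyBytes == null) {
            // Non-keyed messages: order does not matter, pick any partition.
            return ThreadLocalRandom.current().nextInt(numPartitions);
        }
        SortedMap<Integer, Integer> ring = buildRing(numPartitions);
        int point = Utils.toPositive(Utils.murmur2(keyBytes));
        SortedMap<Integer, Integer> tail = ring.tailMap(point);
        return tail.isEmpty() ? ring.get(ring.firstKey()) : tail.get(tail.firstKey());
    }

    // Rebuilt on every call only for brevity; a real implementation would
    // cache the ring per topic and partition count.
    private SortedMap<Integer, Integer> buildRing(int numPartitions) {
        SortedMap<Integer, Integer> ring = new TreeMap<>();
        for (int p = 0; p < numPartitions; p++) {
            for (int v = 0; v < VIRTUAL_NODES_PER_PARTITION; v++) {
                byte[] virtualNode = (p + "-" + v).getBytes();
                ring.put(Utils.toPositive(Utils.murmur2(virtualNode)), p);
            }
        }
        return ring;
    }

    @Override
    public void close() {}

    @Override
    public void configure(Map<String, ?> configs) {}
}

With a ring like this, adding partitions only remaps the keys that land in the
segments claimed by the new partitions' virtual nodes, instead of rehashing
most keys as the modulo scheme does. Whether that alone preserves the KIP's
ordering guarantees during expansion is a separate question, since in-flight
and already-written messages still have to be handled on the broker and
consumer side.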

>
> > Thanks,
> > Allen
> >
> >
> > On Thu, Feb 22, 2018 at 11:52 AM, Jun Rao  wrote:
> >
> > > Hi, Dong,
> > >
> > > Regarding deleting partitions, Gwen's point is right on. In some usages
> > > of Kafka, the traffic can be bursty. When the traffic goes up, adding
> > > partitions is a quick way of shifting some traffic to the newly added
> > > brokers. Once the traffic goes down, the newly added brokers will be
> > > reclaimed (potentially by moving replicas off those brokers). However,
> > > if one can only add partitions without removing them, one will
> > > eventually hit the limit.
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Wed, Feb 21, 2018 at 12:23 PM, Dong Lin 
> wrote:
> > >
> > > > Hey Jun,
> > > 

[jira] [Created] (KAFKA-6594) Starting Kafka a second time fails with an error: 00000000000000000000.timeindex: The process cannot access the file because it is being used by another process.

2018-02-26 Thread JIRA
徐兴强 created KAFKA-6594:
--

 Summary: Starting Kafka a second time fails with an error: .timeindex: The process cannot access the file because it is being used by another process.
 Key: KAFKA-6594
 URL: https://issues.apache.org/jira/browse/KAFKA-6594
 Project: Kafka
  Issue Type: Bug
  Components: config
Affects Versions: 1.0.0
 Environment: win10-1607 X64

kafka:1.0.0(kafka_2.12-1.0.0)

zookeeper:3.5.2
Reporter: 徐兴强
 Attachments: kafka报错.png

 

The first time I run Kafka there is no problem, but after I shut Kafka down (Ctrl+C) and start it a second time, it fails with an error: .timeindex: The process cannot access the file because it is being used by another process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6196) Kafka Transactional producer with broker on windows

2018-02-26 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-6196.

Resolution: Duplicate

Thanks for the patch. Marking this as a dup since it seems to be the same issue.

> Kafka Transactional producer with broker on windows
> ---
>
> Key: KAFKA-6196
> URL: https://issues.apache.org/jira/browse/KAFKA-6196
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, producer 
>Affects Versions: 0.11.0.0, 1.0.0
> Environment: OS : Windows 10 Pro 64-bit,
> RAM : 10GB
> Processor : Intel Core i3-3240, x64-based processor
>Reporter: Abhishek Verma
>Priority: Blocker
>  Labels: windows
> Attachments: consumer logs.txt, consumer.java, producer logs.txt, 
> producer.java, topic dump log.txt, transaction state dump log.txt
>
>
> While using the Kafka transactional producer and consumer, the transactional
> state in the "__transaction_state" topic is not getting updated to
> "CompleteCommit" or "CompleteAbort", and the destination topic (which I named
> "topic") is not getting either ABORT or COMMIT in the "endTxnMarker" field
> (meaning the end marker message is not generated).
> I have attached producer and consumer console logs and log dump segments for
> the "__transaction_state" and "topic" topics, along with the producer and
> consumer code I am using.
> If my brokers are on a Windows machine, the Kafka transaction is not getting
> committed, but on a Linux machine it works properly.
> The producer console log says the transaction completed successfully, but
> this is not reflected in the transaction state topic.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6153) Kafka Transactional Messaging does not work on windows but on linux

2018-02-26 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-6153.

Resolution: Duplicate

Seems like the same underlying issue as KAFKA-6052. Marking as dup for now, but 
we can reopen if we find reason to think it's different.

> Kafka Transactional Messaging does not work on windows but on linux
> ---
>
> Key: KAFKA-6153
> URL: https://issues.apache.org/jira/browse/KAFKA-6153
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, producer 
>Affects Versions: 0.11.0.1
>Reporter: Changhai Han
>Priority: Critical
>  Labels: windows
> Attachments: TransactionalProducer_Notworking.txt
>
>
> As mentioned in the title, Kafka transactional messaging does not work on
> Windows, but it does on Linux.
> The code is like below:
> {code:java}
> stringProducer.initTransactions();
> while (true) {
>     ConsumerRecords<String, String> records = stringConsumer.poll(2000);
>     if (!records.isEmpty()) {
>         stringProducer.beginTransaction();
>         try {
>             for (ConsumerRecord<String, String> record : records) {
>                 LOGGER.info(record.value().toString());
>                 stringProducer.send(new ProducerRecord<String, String>("kafka-test-out", record.value().toString()));
>             }
>             stringProducer.commitTransaction();
>         } catch (ProducerFencedException e) {
>             LOGGER.warn(e.getMessage());
>             stringProducer.close();
>             stringConsumer.close();
>         } catch (KafkaException e) {
>             LOGGER.warn(e.getMessage());
>             stringProducer.abortTransaction();
>         }
>     }
> }
> {code}
> When I debug it, it seems to get stuck on committing the transaction. Does
> anyone else experience the same thing? Are there any specific configs that I
> need to add to the producer config? Thanks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6076) Using new producer api of transaction twice failed when server run on Windows OS

2018-02-26 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-6076.

Resolution: Duplicate

Marking as dup of KAFKA-6052 since it is likely the same underlying cause. We 
can reopen later if we find reason to think it's different.

> Using new producer api of transaction twice failed when server run on Windows 
> OS
> 
>
> Key: KAFKA-6076
> URL: https://issues.apache.org/jira/browse/KAFKA-6076
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.11.0.1
> Environment: OS: Windows 10 64bit
> Kafka:  
> kafka_2.11-0.11.0.1(https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.1/kafka_2.11-0.11.0.1.tgz)
> JDK: 1.8.0_144  64bit
> Client: kafka-clients 0.11.0.1
>Reporter: Orwen Xiang
>Priority: Major
>  Labels: windows
>
> Can't invoke (begin, commit transaction) twice on the same Kafka producer
> instance when it is connected to a Kafka server running on Windows 10.
> But the same code runs successfully when the Kafka server runs on CentOS 7.3
> 64-bit with the same Kafka server code base and config.
> Producer code looks like:
> Map<String, Object> props = new HashMap<>();
> props.put("bootstrap.servers", "localhost:9092");
> props.put("transactional.id", "my-transactional-id");
> Producer<String, String> producer = new KafkaProducer<>(props,
>     new StringSerializer(), new StringSerializer());
> producer.initTransactions();
> try {
>   producer.beginTransaction();
>   for (int i = 0; i < 100; i++)
>     producer.send(new ProducerRecord<>("test-2", Integer.toString(i), Integer.toString(i)));
>   producer.commitTransaction();
>   System.out.println("sent one time done");
>   producer.beginTransaction();
>   for (int i = 0; i < 100; i++)
>     producer.send(new ProducerRecord<>("test-2", Integer.toString(i), Integer.toString(i)));
>   producer.commitTransaction();
>   System.out.println("sent two time done");
> } catch (ProducerFencedException | OutOfOrderSequenceException | AuthorizationException e) {
>   producer.close();
> } catch (KafkaException e) {
>   producer.abortTransaction();
> }
> producer.close();



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)