Build failed in Jenkins: kafka-trunk-jdk10 #558

2018-10-02 Thread Apache Jenkins Server
See 


Changes:

[mjsax] MINOR KAFKA-7406: Follow up and address final comments (#5730)

--
[...truncated 2.24 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED


Re: Edit permissions for https://cwiki.apache.org/confluence/display/KAFKA/Ecosystem

2018-10-02 Thread Bill Mclane
User ID should be wmclane

Thanks Matthias...

Bill—

William P. McLane
Messaging Evangelist
TIBCO Software

> On Oct 2, 2018, at 8:31 PM, Matthias J. Sax  wrote:
>
> Could not find your user -- please provide your wiki user ID so we can
> grant permissions to edit wiki pages.
>
> If you don't have an account yet, you can just create one.
>
>
> -Matthias
>
>> On 10/2/18 1:33 PM, Bill Mclane wrote:
>> Hi can you either enable edit permissions for wmcl...@tibco.com 
>>  for the Kafka Project cwiki or modify the 
>> Ecosystem Page to include the below under the Distributions & Packaging 
>> section:
>>
>> TIBCO Platform  - https://www.tibco.com/products/apache-kafka 
>>  Downloads - 
>> https://www.tibco.com/products/tibco-messaging/downloads 
>> 
>>
>> Thank you,
>>
>> Bill—
>>
>>
>


Re: Edit permissions for https://cwiki.apache.org/confluence/display/KAFKA/Ecosystem

2018-10-02 Thread Matthias J. Sax
Could not find your user -- please provide your wiki user ID so we can
grant permissions to edit wiki pages.

If you don't have an account yet, you can just create one.


-Matthias

On 10/2/18 1:33 PM, Bill Mclane wrote:
> Hi can you either enable edit permissions for wmcl...@tibco.com 
>  for the Kafka Project cwiki or modify the 
> Ecosystem Page to include the below under the Distributions & Packaging 
> section:
> 
> TIBCO Platform  - https://www.tibco.com/products/apache-kafka 
>  Downloads - 
> https://www.tibco.com/products/tibco-messaging/downloads 
>  
> 
> Thank you,
> 
> Bill—
> 
> 



signature.asc
Description: OpenPGP digital signature


[jira] [Created] (KAFKA-7476) SchemaProjector is not properly handling Date-based logical types

2018-10-02 Thread Robert Yokota (JIRA)
Robert Yokota created KAFKA-7476:


 Summary: SchemaProjector is not properly handling Date-based 
logical types
 Key: KAFKA-7476
 URL: https://issues.apache.org/jira/browse/KAFKA-7476
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Reporter: Robert Yokota
Assignee: Robert Yokota


SchemaProjector is not properly handling Date-based logical types.  An 
exception of the following form is thrown:  `Caused by: 
java.lang.ClassCastException: java.util.Date cannot be cast to java.lang.Number`
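The failure mode can be reproduced outside Connect with nothing but the JDK: the value of a Date-based logical type travels as `java.util.Date`, while a numeric projection path assumes it is a `Number`. The sketch below illustrates that cast; it is not the actual SchemaProjector code:

```java
import java.util.Date;

// Self-contained illustration of the reported failure mode: the value of a
// Date-based logical type is a java.util.Date, but a numeric projection path
// casts it to Number, which fails at runtime.
public class DateCastRepro {
    public static void main(String[] args) {
        Object logicalDateValue = new Date(0L); // how a DATE logical value is carried
        try {
            Number n = (Number) logicalDateValue; // what a numeric path would attempt
            System.out.println(n);
        } catch (ClassCastException e) {
            System.out.println("cast failed: java.util.Date is not a Number");
        }
    }
}
```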



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Incremental Cooperative Rebalancing

2018-10-02 Thread Konstantine Karantasis
Hey everyone,

I'd like to bring to your attention a general design document that was just
published in Apache Kafka's wiki space:

https://cwiki.apache.org/confluence/display/KAFKA/Incremental+Cooperative+Rebalancing%3A+Support+and+Policies

It deals with the subject of rebalancing of groups in Kafka and proposes
basic infrastructure to support improvements to the current rebalancing
protocol, as well as a set of policies that can be implemented to optimize
rebalancing in a number of real-world scenarios.

Currently, this wiki page is meant to serve as a reference for the
proposal of Incremental Cooperative Rebalancing overall. Specific KIPs
will follow to describe in more detail, using the standard KIP format,
the basic infrastructure and the first policies that will be proposed
for implementation in components such as Connect, the Kafka Consumer,
and Streams.

Stay tuned!
Konstantine


Build failed in Jenkins: kafka-trunk-jdk10 #557

2018-10-02 Thread Apache Jenkins Server
See 


Changes:

[colin] KAFKA-7429: Enable key/truststore update with same filename/password

--
[...truncated 2.24 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED


Re: [DISCUSSION] KIP-376: Implement AutoClosable on appropriate classes that has close()

2018-10-02 Thread Yishun Guan
Hi Matthias, thank you for pointing this out! I have changed the KIP
to `RecordCollector extends AutoCloseable`.

As for your first concern regarding incompatibility, can you explain why
this is a breaking change? Although `AutoCloseable#close()` declares
`throws Exception`, I think the overridden `close()` doesn't have to throw
an exception (either throwing a subclass of the parent exception, or not
throwing one at all, is fine; I could be totally wrong.)
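For what it's worth, the Java language does allow an overriding method to declare fewer (or no) checked exceptions than the method it overrides, so a `close()` implementation need not declare `throws Exception`. A minimal self-contained check of the point above:

```java
// An implementer of AutoCloseable may narrow close() to throw no checked
// exception; try-with-resources then needs no catch block at the call site.
public class NarrowedClose {
    static class QuietResource implements AutoCloseable {
        @Override
        public void close() { // narrowed: drops `throws Exception`
            System.out.println("closed");
        }
    }

    public static void main(String[] args) {
        // Compiles and runs without any try/catch around close().
        try (QuietResource r = new QuietResource()) {
            System.out.println("using");
        }
    }
}
```

Running it prints `using` followed by `closed`, with no checked-exception handling required by the caller.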

Chia-Ping, I am not quite sure I understand your idea. With the same
reasoning as above, `AutoCloseable#close()` is broad enough to cover both
unchecked and checked exceptions, right? Could you elaborate?

Thanks,
Yishun
On Sun, Sep 30, 2018 at 1:20 PM Matthias J. Sax  wrote:
>
> Closeable is part of `java.io` while AutoClosable is part of
> `java.lang`. Thus, the second one is more generic. Also, JavaDoc points
> out that `Closable#close()` must be idempotent while
> `AutoClosable#close()` can have side effects.
>
> Thus, I am not sure atm which one suits better.
>
> However, it's a good hint, that `AutoClosable#close()` declares `throws
> Exception` and thus, it seems to be a backward incompatible change.
> Hence, I am not sure if we can actually move forward easily with this KIP.
>
> Nit: `RecordCollectorImpl` is an internal class that implements
> `RecordCollector` -- should `RecordCollector extends AutoCloseable`?
>
>
> -Matthias
>
>
> On 9/27/18 7:46 PM, Chia-Ping Tsai wrote:
> >> (Although I am not quite sure
> >> when one is more desirable than the other)
> >
> > Most of Kafka's classes implementing Closeable/AutoCloseable don't throw a 
> > checked exception in the close() method. Perhaps we should have a 
> > "KafkaCloseable" interface which has a close() method without throwing any 
> > checked exception...
> >
> > On 2018/09/27 19:11:20, Yishun Guan  wrote:
> >> Hi All,
> >>
> >> Chia-Ping, I agree, similar to VarifiableConsumer, VarifiableProducer
> >> should be implementing Closeable as well (Although I am not quite sure
> >> when one is more desirable than the other), also I just looked through
> >> your list - these are some great additions, I will add them to the
> >> list.
> >>
> >> Thanks,
> >> Yishun
> >> On Thu, Sep 27, 2018 at 3:26 AM Dongjin Lee  wrote:
> >>>
> >>> Hi Yishun,
> >>>
> >>> Thank you for your great KIP. In fact, I have also encountered the cases
> >>> where Autoclosable is so desired several times! Let me inspect more
> >>> candidate classes as well.
> >>>
> >>> +1. I also refined your KIP a little bit.
> >>>
> >>> Best,
> >>> Dongjin
> >>>
> >>> On Thu, Sep 27, 2018 at 12:21 PM Chia-Ping Tsai  
> >>> wrote:
> >>>
>  hi Yishun
> 
>  Thanks for nice KIP!
> 
>  Q1)
>  Why VerifiableProducer extend Closeable rather than AutoCloseable?
> 
>  Q2)
>  I grep project and then noticed there are other close methods but do not
>  implement AutoCloseable.
>  For example:
>  1) WorkerConnector
>  2) MemoryRecordsBuilder
>  3) MetricsReporter
>  4) ExpiringCredentialRefreshingLogin
>  5) KafkaChannel
>  6) ConsumerInterceptor
>  7) SelectorMetrics
>  8) HeartbeatThread
> 
>  Cheers,
>  Chia-Ping
> 
> 
>  On 2018/09/26 23:44:31, Yishun Guan  wrote:
> > Hi All,
> >
> > Here is a trivial KIP:
> >
>  https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=93325308
> >
> > Suggestions are welcome.
> >
> > Thanks,
> > Yishun
> >
> 
> >>>
> >>>
> >>> --
> >>> *Dongjin Lee*
> >>>
> >>> *A hitchhiker in the mathematical world.*
> >>>
> >>> *github:  github.com/dongjinleekr
> >>> linkedin: kr.linkedin.com/in/dongjinleekr
> >>> slideshare:
> >>> www.slideshare.net/dongjinleekr
> >>> *
> >>
>


Edit permissions for https://cwiki.apache.org/confluence/display/KAFKA/Ecosystem

2018-10-02 Thread Bill Mclane
Hi can you either enable edit permissions for wmcl...@tibco.com 
 for the Kafka Project cwiki or modify the Ecosystem 
Page to include the below under the Distributions & Packaging section:

TIBCO Platform  - https://www.tibco.com/products/apache-kafka 
 Downloads - 
https://www.tibco.com/products/tibco-messaging/downloads 
 

Thank you,

Bill—



[jira] [Created] (KAFKA-7475) print the actual cluster bootstrap address on authentication failures

2018-10-02 Thread radai rosenblatt (JIRA)
radai rosenblatt created KAFKA-7475:
---

 Summary: print the actual cluster bootstrap address on 
authentication failures
 Key: KAFKA-7475
 URL: https://issues.apache.org/jira/browse/KAFKA-7475
 Project: Kafka
  Issue Type: Improvement
Reporter: radai rosenblatt


Currently, when a Kafka client fails to connect to a cluster, users see 
something like this:
{code}
Connection to node -1 terminated during authentication. This may indicate that 
authentication failed due to invalid credentials. 
{code}

That log line is mostly useless for identifying which (of potentially many) 
Kafka clients is having issues and which Kafka cluster it is having issues with.

It would be nice to record the remote host/port.
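As a sketch of what the report is asking for (names and message text here are illustrative, not Kafka's actual logging code), the disconnect message could carry the remote endpoint:

```java
// Hypothetical helper: build a disconnect message that includes the remote
// host/port so multi-cluster clients can tell which connection failed.
public class AuthLogSketch {
    static String disconnectMessage(String nodeId, String host, int port) {
        return String.format(
            "Connection to node %s (%s:%d) terminated during authentication. "
                + "This may indicate that authentication failed due to invalid credentials.",
            nodeId, host, port);
    }

    public static void main(String[] args) {
        System.out.println(disconnectMessage("-1", "broker1.example.com", 9093));
    }
}
```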



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7474) Conflicting statements in the replication docs

2018-10-02 Thread Joel Hamill (JIRA)
Joel Hamill created KAFKA-7474:
--

 Summary: Conflicting statements in the replication docs
 Key: KAFKA-7474
 URL: https://issues.apache.org/jira/browse/KAFKA-7474
 Project: Kafka
  Issue Type: Bug
Reporter: Joel Hamill
 Attachments: image-2018-10-02-12-12-22-902.png

http://kafka.apache.org/documentation.html#replication

In our replication documentation we say that messages can be consumed even if 
they are not committed to the minimum number of ISRs, provided the producer 
requests less stringent acknowledgement. Earlier in the same paragraph we say 
that only committed messages can ever be consumed.

 

!image-2018-10-02-12-12-22-902.png!

 

cc: [~kiril_p]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-291: Have separate queues for control requests and data requests

2018-10-02 Thread Lucas Wang
Thanks for the further comments, Jun.

200. Currently in the code base, we have the term "ControlBatch" related to
idempotent/transactional producing. Do you think it's a concern to reuse
the term "control"?

201. It's not clear to me how it would work by following the same strategy
for "controller.listener.name".
Say the new controller has its "controller.listener.name" set to the value
"CONTROLLER", and broker 1
has picked up this KIP by announcing
"endpoints": [
"CONTROLLER://broker1.example.com:9091",
"INTERNAL://broker1.example.com:9092",
"EXTERNAL://host1.example.com:9093"
],

while broker2 has not picked up the change, and is announcing
"endpoints": [
"INTERNAL://broker2.example.com:9092",
"EXTERNAL://host2.example.com:9093"
],
to support both broker 1 for the new behavior and broker 2 for the old
behavior, it seems the controller must
check their published endpoints. Am I missing something?

Thanks!
Lucas

On Mon, Oct 1, 2018 at 6:29 PM Jun Rao  wrote:

> Hi, Lucas,
>
> Sorry for the delay. The updated wiki looks good to me overall. Just a
> couple more minor comments.
>
> 200. kafka.network:name=ControllerRequestQueueSize,type=RequestChannel: The
> name ControllerRequestQueueSize gives the impression that it's only for the
> controller broker. Perhaps we can just rename all metrics and configs from
> controller to control. This indicates that the threads and the queues are
> for the control requests (as oppose to data requests).
>
> 201. ": In this scenario, the controller will
> have the "controller.listener.name" config set to a value like
> "CONTROLLER", however the broker's exposed endpoints do not have an entry
> corresponding to the new listener name. Hence the controller should
> preserve the existing behavior by determining the endpoint using
> *inter-broker-listener-name *value. The end result should be the same
> behavior as today." Currently, the controller makes connections based on
> its local inter.broker.listener.name config without checking the target
> broker's ZK registration. For consistency, perhaps we can just follow the
> same strategy for controller.listener.name. This existing behavior seems
> simpler to understand and has the benefit of catching inconsistent configs
> across brokers.
>
> Thanks,
>
> Jun
>
> On Mon, Oct 1, 2018 at 8:43 AM, Lucas Wang  wrote:
>
> > Hi Jun,
> >
> > Sorry to bother you again. Can you please take a look at the wiki again
> > when you have time?
> >
> > Thanks a lot!
> > Lucas
> >
> > On Wed, Sep 19, 2018 at 3:57 PM Lucas Wang 
> wrote:
> >
> > > Hi Jun,
> > >
> > > Thanks a lot for the detailed explanation.
> > > I've restored the wiki to a previous version that does not require
> config
> > > changes,
> > > and keeps the current behavior with the proposed changes turned off by
> > > default.
> > > I'd appreciate it if you can review it again.
> > >
> > > Thanks!
> > > Lucas
> > >
> > > On Tue, Sep 18, 2018 at 1:48 PM Jun Rao  wrote:
> > >
> > >> Hi, Lucas,
> > >>
> > >> When upgrading to a minor release, I think the expectation is that a
> > user
> > >> wouldn't need to make any config changes, other than the usual
> > >> inter.broker.protocol. If we require other config changes during an
> > >> upgrade, then it's probably better to do that in a major release.
> > >>
> > >> Regarding your proposal, I think removing host/advertised_host in
> favor
> > of
> > >> listeners:advertised_listeners seems useful regardless of this KIP.
> > >> However, that can probably wait until a major release.
> > >>
> > >> As for the controller listener, I am not sure if one has to set it. To
> > >> make
> > >> a cluster healthy, one sort of have to make sure that the request
> queue
> > is
> > >> never full and no request will be sitting in the request queue for
> long.
> > >> If
> > >> one does that, setting the controller listener may not be necessary.
> On
> > >> the
> > >> flip side, even if one sets the controller listener, but the request
> > queue
> > >> and the request time for the data part are still high, the cluster may
> > >> still not be healthy. Given that we have already started the 2.1
> release
> > >> planning, perhaps we can start with not requiring the controller
> > listener.
> > >> If this is indeed something that everyone wants to set, we can make
> it a
> > >> required config in a major release.
> > >>
> > >> Thanks,
> > >>
> > >> Jun
> > >>
> > >> On Tue, Sep 11, 2018 at 3:46 PM, Lucas Wang 
> > >> wrote:
> > >>
> > >> > @Jun Rao 
> > >> >
> > >> > I made the recent config changes after thinking about the default
> > >> behavior
> > >> > for adopting this KIP.
> > >> > I think there are basically two options:
> > >> > 1. By default, the behavior proposed in this KIP is turned off, and
> > >> > operators can turn it
> > >> > on by adding the "controller.listener.name" config and entries in
> the
> > >> > "listeners" and "advertised.listeners" list.
> > >> > If no 

[jira] [Resolved] (KAFKA-7355) Topic Configuration Changes are not applied until reboot

2018-10-02 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7355.
--
Resolution: Duplicate

Resolving as duplicate of KAFKA-7366. Raised a PR for KAFKA-7366. 

> Topic Configuration Changes are not applied until reboot
> 
>
> Key: KAFKA-7355
> URL: https://issues.apache.org/jira/browse/KAFKA-7355
> Project: Kafka
>  Issue Type: Bug
>  Components: config, core
>Affects Versions: 2.0.0
>Reporter: Stephane Maarek
>Assignee: kevin.chen
>Priority: Critical
>
> Steps to reproduce:
> {code}
> kafka-topics --zookeeper 127.0.0.1:2181 --create --topic employee-salary 
> --partitions 1 --replication-factor 1
> kafka-configs --zookeeper 127.0.0.1:2181 --alter --entity-type topics 
> --entity-name employee-salary --add-config 
> cleanup.policy=compact,min.cleanable.dirty.ratio=0.001,segment.ms=5000
> kafka-configs --zookeeper 127.0.0.1:2181 --alter --entity-type topics 
> --entity-name employee-salary
> kafka-console-producer --broker-list 127.0.0.1:9092 --topic employee-salary 
> --property parse.key=true --property key.separator=,
> {code}
> Try publishing a bunch of data, and no segment roll over will happen (even 
> though segment.ms=5000). I looked at the kafka directory and the kafka logs 
> to ensure 
> I noticed the broker processed the notification of config changes, but the 
> behaviour was not updated to use the new config values nonetheless. 
> After restarting the broker, the expected behaviour is observed



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7473) allow configuring kafka client configs to not warn for unknown config properties

2018-10-02 Thread radai rosenblatt (JIRA)
radai rosenblatt created KAFKA-7473:
---

 Summary: allow configuring kafka client configs to not warn for 
unknown config properties
 Key: KAFKA-7473
 URL: https://issues.apache.org/jira/browse/KAFKA-7473
 Project: Kafka
  Issue Type: Improvement
Reporter: radai rosenblatt


Since the config handed to a client may contain config keys for use by either 
modular code in the client (serializers, deserializers, interceptors) or 
subclasses of the client class, having "unknown" (to the vanilla client) 
configs logged as a warning is an annoyance.

It would be nice to have a constructor parameter that controls this behavior 
(just like there's already a flag for `boolean doLog`).
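A sketch of the requested behavior (this is not Kafka's actual config code; class and method names are made up for illustration): a flag decides whether keys unknown to the vanilla client are reported at all.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Illustrative config holder: unknown keys are collected for warning only
// when the caller asks for it, mirroring the proposed constructor flag.
public class ConfigSketch {
    private static final Set<String> KNOWN = Set.of("bootstrap.servers", "client.id");

    static List<String> unknownKeys(Map<String, String> props, boolean warnUnknown) {
        List<String> unknown = new ArrayList<>();
        if (!warnUnknown) {
            return unknown; // suppression flag set: report nothing
        }
        for (String key : props.keySet()) {
            if (!KNOWN.contains(key)) {
                unknown.add(key);
            }
        }
        return unknown;
    }

    public static void main(String[] args) {
        Map<String, String> props = Map.of(
            "bootstrap.servers", "localhost:9092",
            "my.interceptor.custom.setting", "x"); // known only to an interceptor
        System.out.println(unknownKeys(props, true));  // reported
        System.out.println(unknownKeys(props, false)); // suppressed
    }
}
```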



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


KIP-358: Merge required

2018-10-02 Thread Nikolay Izhikov
Hello, Kafka committers.

I've implemented KIP-358 [1].

My PR [2] was accepted by John Roesler and Bill Bejeck.
Tests passed.

Can you merge it to trunk?

[1] 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-358%3A+Migrate+Streams+API+to+Duration+instead+of+long+ms+times
[2] https://github.com/apache/kafka/pull/5682


signature.asc
Description: This is a digitally signed message part


Jenkins build is back to normal : kafka-trunk-jdk8 #3048

2018-10-02 Thread Apache Jenkins Server
See 




Re: [DISCUSS] Replacing EasyMock with Mockito in Kafka

2018-10-02 Thread Rajini Sivaram
Hi Ismael,

Thanks for starting this discussion. I had a quick look at the PR and I
agree that the updated tests are more readable. So +1 from me.

On Tue, Oct 2, 2018 at 12:06 AM Colin McCabe  wrote:

> +1.  It would be good to standardize on one (well-maintained) mocking
> library in the project :)
>
> best,
> Colin
>
>
> On Sun, Sep 30, 2018, at 20:32, Ismael Juma wrote:
> > Hi all,
> >
> > As described in KAFKA-7438
> > , EasyMock's
> development
> > has stagnated. This presents a number of issues:
> >
> > 1. Blocks us from running tests with newer Java versions, which is a
> > frequent occurrence given the new Java release cadence. It is the main
> > blocker in switching Jenkins from Java 10 to Java 11 at the moment.
> > 2. Integration with newer testing libraries like JUnit 5 is slow to
> appear
> > (if it appears at all).
> > 3. No API improvements. Mockito started as an EasyMock fork, but has
> > continued to evolve and, in my opinion, it's more intuitive now.
> >
> > I think we should switch to Mockito for new tests and to incrementally
> > migrate the existing ones as time allows. To make the proposal concrete,
> I
> > went ahead and converted all the tests in the `clients` module:
> >
> > https://github.com/apache/kafka/pull/5691
> >
> > I think the updated tests are nicely readable. I also removed PowerMock
> > from the `clients` tests as we didn't really need it and its development
> > has also stagnated a few months ago. I think we can easily remove
> PowerMock
> > elsewhere with the exception of `Connect` where we may need to keep it
> for
> > a while.
> >
> > Let me know your thoughts. Aside from the general future direction, I'd
> > like to get the PR for KAFKA-7439 reviewed and merged soonish as merge
> > conflicts will creep in quickly.
> >
> > Ismael
>


[jira] [Resolved] (KAFKA-5018) LogCleaner tests to verify behaviour of message format v2

2018-10-02 Thread Dong Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Lin resolved KAFKA-5018.
-
Resolution: Won't Do

> LogCleaner tests to verify behaviour of message format v2
> -
>
> Key: KAFKA-5018
> URL: https://issues.apache.org/jira/browse/KAFKA-5018
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Ismael Juma
>Priority: Major
> Fix For: 2.1.0
>
>
> It would be good to add LogCleaner tests to verify the behaviour of fields 
> like baseOffset after compaction.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (KAFKA-5018) LogCleaner tests to verify behaviour of message format v2

2018-10-02 Thread Dong Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Lin reopened KAFKA-5018:
-

> LogCleaner tests to verify behaviour of message format v2
> -
>
> Key: KAFKA-5018
> URL: https://issues.apache.org/jira/browse/KAFKA-5018
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Ismael Juma
>Priority: Major
> Fix For: 2.2.0
>
>
> It would be good to add LogCleaner tests to verify the behaviour of fields 
> like baseOffset after compaction.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6761) Reduce Kafka Streams Footprint

2018-10-02 Thread Dong Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Lin resolved KAFKA-6761.
-
Resolution: Fixed

> Reduce Kafka Streams Footprint
> --
>
> Key: KAFKA-6761
> URL: https://issues.apache.org/jira/browse/KAFKA-6761
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
>Priority: Major
> Fix For: 2.1.0
>
>
> The persistent storage footprint of a Kafka Streams application contains the 
> following aspects:
>  # The internal topics created on the Kafka cluster side.
>  # The materialized state stores on the Kafka Streams application instances 
> side.
> There have been some questions about reducing these footprints, especially 
> since many of them are not necessary. For example, there are redundant 
> internal topics, as well as unnecessary state stores that take up space and 
> also hurt performance. When people push Streams to production with 
> high traffic, this issue becomes more common and severe. Reducing the 
> footprint of Streams has clear benefits: lower resource utilization for 
> Kafka Streams applications and less pressure on brokers' capacity.



--


[jira] [Resolved] (KAFKA-6438) NSEE while concurrently creating and deleting a topic

2018-10-02 Thread Dong Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Lin resolved KAFKA-6438.
-
Resolution: Won't Fix

> NSEE while concurrently creating and deleting a topic
> -
>
> Key: KAFKA-6438
> URL: https://issues.apache.org/jira/browse/KAFKA-6438
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 1.0.0
> Environment: kafka_2.11-1.0.0.jar
> OpenJDK Runtime Environment (build 1.8.0_102-b14), OpenJDK 64-Bit Server VM 
> (build 25.102-b14, mixed mode)
> CentOS Linux release 7.3.1611 (Core)
>Reporter: Adam Kotwasinski
>Priority: Major
>  Labels: reliability
>
> It appears that deleting a topic and creating it at the same time can cause an 
> NSEE, which later results in a forced controller shutdown.
> Most probably the topics are being re-created because consumers/producers are 
> still active (yes, this means the deletion is happening blindly).
> The main problem here (for me) is the controller switch; the data loss and 
> subsequent unclean election are acceptable (as we admit to deleting blindly).
> Environment description:
> 20 kafka brokers
> 80k partitions (20k topics 4partitions each)
> 3 node ZK
> Incident:
> {code:java}
> [2018-01-09 11:19:05,912] INFO [Topic Deletion Manager 6], Partition deletion 
> callback for mytopic-2,mytopic-0,mytopic-1,mytopic-3 
> (kafka.controller.TopicDeletionManager)
> [2018-01-09 11:19:06,237] INFO [Controller id=6] New leader and ISR for 
> partition mytopic-0 is {"leader":-1,"leader_epoch":1,"isr":[]} 
> (kafka.controller.KafkaController)
> [2018-01-09 11:19:06,412] INFO [Topic Deletion Manager 6], Deletion for 
> replicas 12,9,10,11 for partition mytopic-3,mytopic-0,mytopic-1,mytopic-2 of 
> topic mytopic in progress (kafka.controller.TopicDeletionManager)
> [2018-01-09 11:19:07,218] INFO [Topic Deletion Manager 6], Deletion for 
> replicas 12,9,10,11 for partition mytopic-3,mytopic-0,mytopic-1,mytopic-2 of 
> topic mytopic in progress (kafka.controller.TopicDeletionManager)
> [2018-01-09 11:19:07,304] INFO [Topic Deletion Manager 6], Deletion for 
> replicas 12,9,10,11 for partition mytopic-3,mytopic-0,mytopic-1,mytopic-2 of 
> topic mytopic in progress (kafka.controller.TopicDeletionManager)
> [2018-01-09 11:19:07,383] INFO [Topic Deletion Manager 6], Deletion for 
> replicas 12,9,10,11 for partition mytopic-3,mytopic-0,mytopic-1,mytopic-2 of 
> topic mytopic in progress (kafka.controller.TopicDeletionManager)
> [2018-01-09 11:19:07,510] INFO [Topic Deletion Manager 6], Deletion for 
> replicas 12,9,10,11 for partition mytopic-3,mytopic-0,mytopic-1,mytopic-2 of 
> topic mytopic in progress (kafka.controller.TopicDeletionManager)
> [2018-01-09 11:19:07,661] INFO [Topic Deletion Manager 6], Deletion for 
> replicas 12,9,10,11 for partition mytopic-3,mytopic-0,mytopic-1,mytopic-2 of 
> topic mytopic in progress (kafka.controller.TopicDeletionManager)
> [2018-01-09 11:19:07,728] INFO [Topic Deletion Manager 6], Deletion for 
> replicas 9,10,11 for partition mytopic-0,mytopic-1,mytopic-2 of topic mytopic 
> in progress (kafka.controller.TopicDeletionManager)
> [2018-01-09 11:19:07,924] INFO [PartitionStateMachine controllerId=6] 
> Invoking state change to OfflinePartition for partitions 
> mytopic-2,mytopic-0,mytopic-1,mytopic-3 
> (kafka.controller.PartitionStateMachine)
> [2018-01-09 11:19:07,924] INFO [PartitionStateMachine controllerId=6] 
> Invoking state change to NonExistentPartition for partitions 
> mytopic-2,mytopic-0,mytopic-1,mytopic-3 
> (kafka.controller.PartitionStateMachine)
> [2018-01-09 11:19:08,592] INFO [Controller id=6] New topics: [Set(mytopic, 
> other, other2)], deleted topics: [Set()], new partition replica assignment 
> [Map(other-0 -> Vector(8), mytopic-2 -> Vector(6), mytopic-0 -> Vector(4), 
> other-2 -> Vector(10), mytopic-1 -> Vector(5), mytopic-3 -> Vector(7), 
> other-1 -> Vector(9), other-3 -> Vector(11))] 
> (kafka.controller.KafkaController)
> [2018-01-09 11:19:08,593] INFO [Controller id=6] New topic creation callback 
> for other-0,mytopic-2,mytopic-0,other-2,mytopic-1,mytopic-3,other-1,other-3 
> (kafka.controller.KafkaController)
> [2018-01-09 11:19:08,596] INFO [Controller id=6] New partition creation 
> callback for 
> other-0,mytopic-2,mytopic-0,other-2,mytopic-1,mytopic-3,other-1,other-3 
> (kafka.controller.KafkaController)
> [2018-01-09 11:19:08,596] INFO [PartitionStateMachine controllerId=6] 
> Invoking state change to NewPartition for partitions 
> other-0,mytopic-2,mytopic-0,other-2,mytopic-1,mytopic-3,other-1,other-3 
> (kafka.controller.PartitionStateMachine)
> [2018-01-09 11:19:08,642] INFO [PartitionStateMachine controllerId=6] 
> Invoking state change to OnlinePartition for partitions 
> 

[jira] [Resolved] (KAFKA-6415) KafkaLog4jAppender deadlocks when logging from producer network thread

2018-10-02 Thread Dong Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Lin resolved KAFKA-6415.
-
Resolution: Fixed

> KafkaLog4jAppender deadlocks when logging from producer network thread
> --
>
> Key: KAFKA-6415
> URL: https://issues.apache.org/jira/browse/KAFKA-6415
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 2.1.0
>
>
> When a log entry is appended to a Kafka topic using KafkaLog4jAppender, the 
> producer.send operation may block waiting for metadata. This can result in 
> deadlocks in a couple of scenarios if a log entry from the producer network 
> thread is also at a log level that results in the entry being appended to a 
> Kafka topic.
> 1. Producer's network thread will attempt to send data to a Kafka topic and 
> this is unsafe since producer.send may block waiting for metadata, causing a 
> deadlock since the thread will not process the metadata request/response.
> 2. KafkaLog4jAppender#append is invoked while holding the lock of the logger. 
> So the thread waiting for metadata in the initial send will be holding the 
> logger lock. If the producer network thread has a log entry that needs to be 
> appended, it will attempt to acquire the logger lock and deadlock.
> This was probably the case right from the beginning when KafkaLog4jAppender 
> was introduced, but did not cause any issues so far since there were only 
> debug log entries in that path which were not logged to a Kafka topic by any 
> of the tests. A recent info level log entry introduced by the commit 
> https://github.com/apache/kafka/commit/a3aea3cf4dbedb293f2d7859e0298bebc8e2185f
>  is causing system test failures in log4j_appender_test.py due to the 
> deadlock.
> The asynchronous append case can be fixed by moving all send operations to a 
> separate thread. But KafkaLog4jAppender also has a syncSend option which 
> blocks append while holding the logger lock until the send completes. Not 
> sure how this can be fixed if we want to support log appends from the 
> producer network thread.
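The asynchronous fix described above (moving the blocking send onto a separate thread so `append` never blocks while holding the logger lock) can be illustrated outside of log4j and Kafka. This is a minimal sketch, not the actual KafkaLog4jAppender code: `AsyncAppender` is a hypothetical name, and a plain callable stands in for the blocking `producer.send`:

```python
import queue
import threading

class AsyncAppender:
    """Enqueue log entries and send them from a dedicated worker thread.

    append() never blocks on the send path, so a thread that logs while
    holding the logger lock cannot deadlock with the sender.
    """

    def __init__(self, send):
        # `send` stands in for the blocking call (e.g. a producer send
        # waiting on metadata); here it is just any callable.
        self._queue = queue.Queue()
        self._send = send
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def append(self, entry):
        # Called from the logging path: only enqueues, never sends.
        self._queue.put(entry)

    def _drain(self):
        # Runs on the worker thread; this is the only place that sends.
        while True:
            entry = self._queue.get()
            if entry is None:  # sentinel: shut down
                return
            self._send(entry)

    def close(self):
        self._queue.put(None)
        self._worker.join()

# Usage: a list stands in for the Kafka topic.
sent = []
appender = AsyncAppender(sent.append)
appender.append("log line from any thread")
appender.close()  # waits until the worker has drained the queue
# sent == ["log line from any thread"]
```

Note that this only addresses the asynchronous case; as the report says, a `syncSend` mode that blocks until completion while holding the logger lock cannot be rescued this way.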



--


[jira] [Created] (KAFKA-7472) Implement KIP-145 transformations

2018-10-02 Thread Randall Hauch (JIRA)
Randall Hauch created KAFKA-7472:


 Summary: Implement KIP-145 transformations 
 Key: KAFKA-7472
 URL: https://issues.apache.org/jira/browse/KAFKA-7472
 Project: Kafka
  Issue Type: New Feature
  Components: KafkaConnect
Affects Versions: 1.1.0
Reporter: Randall Hauch
Assignee: Randall Hauch


As part of 
[KIP-145|https://cwiki.apache.org/confluence/display/KAFKA/KIP-145+-+Expose+Record+Headers+in+Kafka+Connect],
 several SMTs were described and approved. However, they were never implemented.



--


[jira] [Created] (KAFKA-7471) Multiple Consumer Group Management (Describe, Reset, Delete)

2018-10-02 Thread Alex Dunayevsky (JIRA)
Alex Dunayevsky created KAFKA-7471:
--

 Summary: Multiple Consumer Group Management (Describe, Reset, 
Delete)
 Key: KAFKA-7471
 URL: https://issues.apache.org/jira/browse/KAFKA-7471
 Project: Kafka
  Issue Type: New Feature
  Components: tools
Affects Versions: 2.0.0, 1.0.0
Reporter: Alex Dunayevsky
Assignee: Alex Dunayevsky
 Fix For: 2.0.1


Functionality needed:
 * Describe/Delete/Reset offsets on multiple consumer groups at a time 
(including each group by repeating `--group` parameter)
 * Describe/Delete/Reset offsets on ALL consumer groups at a time (add key 
`--groups-all`, similar to `--topics-all`)
 * Generate CSV for multiple consumer groups

What are the benefits? 
 * No need to start a new JVM to perform each query on every single consumer 
group
 * Ability to query groups by their status (for instance, inverse-grepping 
(`grep -v`) for `Stable` to spot problematic/dead/empty groups)
 * Ability to export offsets to reset for multiple consumer groups to a CSV 
file (needs CSV generation export/import format rework)

 



--


[jira] [Resolved] (KAFKA-7406) Naming Join and Grouping Repartition Topics

2018-10-02 Thread Dong Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Lin resolved KAFKA-7406.
-
Resolution: Fixed

> Naming Join and Grouping Repartition Topics
> ---
>
> Key: KAFKA-7406
> URL: https://issues.apache.org/jira/browse/KAFKA-7406
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
>Priority: Major
>  Labels: needs-kip
> Fix For: 2.1.0
>
>
> To help make Streams compatible with topology changes, we will need to give 
> users the ability to name some operators so that, after adjusting the 
> topology, a rolling upgrade is possible.
> This Jira is the first in this effort to allow for giving operators 
> deterministic names.



--


Build failed in Jenkins: kafka-trunk-jdk10 #556

2018-10-02 Thread Apache Jenkins Server
See 


Changes:

[mjsax] KAFKA-7406: Name join group repartition topics (#5709)

[mjsax] KAFKA-7223: In-Memory Suppression Buffering (#5693)

--
[...truncated 2.17 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > shouldCloseProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldCloseProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = 

Build failed in Jenkins: kafka-trunk-jdk8 #3047

2018-10-02 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (ubuntu trusty) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 6e79e5da0308783ba646378efc44f018cb4f39ac
error: Could not read bb745c0f9142717ddf68dc83bbd940dfe0c59b9a
remote: Enumerating objects: 2897, done.
remote: Counting objects: 0% (1/2897) ... 55% [carriage-return progress output condensed; log truncated here]

Build failed in Jenkins: kafka-trunk-jdk10 #555

2018-10-02 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (ubuntu trusty) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 42f07849917fadb444802add590e7bb9ca4f6ba2
error: Could not read a9692ff66fccc96ccf95526682136cddb5af0627
remote: Enumerating objects: 2934, done.
remote: Counting objects: 0% (1/2934) ... 55% [carriage-return progress output condensed; log truncated here]

Build failed in Jenkins: kafka-trunk-jdk8 #3046

2018-10-02 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (ubuntu trusty) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 6e79e5da0308783ba646378efc44f018cb4f39ac
error: Could not read bb745c0f9142717ddf68dc83bbd940dfe0c59b9a
remote: Enumerating objects: 2897, done.
remote: Counting objects: 0% (1/2897) ... 55% [carriage-return progress output condensed; log truncated here]

Build failed in Jenkins: kafka-trunk-jdk10 #554

2018-10-02 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (ubuntu trusty) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 42f07849917fadb444802add590e7bb9ca4f6ba2
error: Could not read a9692ff66fccc96ccf95526682136cddb5af0627
remote: Enumerating objects: 2934, done.
remote: Counting objects: 0% (1/2934) ... 55% [carriage-return progress output condensed; log truncated here]

[jira] [Created] (KAFKA-7470) Thread safe accumulator across all instances

2018-10-02 Thread sam (JIRA)
sam created KAFKA-7470:
--

 Summary: Thread safe accumulator across all instances
 Key: KAFKA-7470
 URL: https://issues.apache.org/jira/browse/KAFKA-7470
 Project: Kafka
  Issue Type: New Feature
  Components: streams
Reporter: sam


For many Big Data workloads it is preferable to work with a small buffer of 
records at a time, rather than one record at a time.

The natural example is calling some external API that supports batching for 
efficiency.

How can we do this in Kafka Streams? I cannot find anything in the API that 
looks like what I want.

So far I have:

{{builder.stream[String, String]("my-input-topic").mapValues(externalApiCall).to("my-output-topic")}}

What I want is:

{{builder.stream[String, String]("my-input-topic").batched(chunkSize = 2000).map(externalBatchedApiCall).to("my-output-topic")}}

In Scala and Akka Streams the function is called {{grouped}} or {{batch}}. In 
Spark Structured Streaming we can do 
{{mapPartitions.map(_.grouped(2000).map(externalBatchedApiCall))}}.
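To make the requested semantics concrete, the {{grouped}}/{{batch}} behavior described above can be sketched outside of any streaming framework. This is a plain-Python illustration of the desired chunking, not a Kafka Streams API; {{external_batched_api_call}} is a hypothetical stand-in for a real batching API:

```python
def grouped(records, chunk_size):
    """Yield consecutive chunks of at most chunk_size records,
    mirroring the semantics of Scala's Iterator.grouped."""
    if chunk_size <= 0:
        raise ValueError("chunk_size must be positive")
    buf = []
    for record in records:
        buf.append(record)
        if len(buf) == chunk_size:
            yield buf
            buf = []
    if buf:
        yield buf  # flush the final, possibly smaller, chunk


def external_batched_api_call(batch):
    # Hypothetical stand-in for an external API that accepts batches.
    return [v.upper() for v in batch]


# Process a stream of records two at a time, then flatten the results:
out = [r for batch in grouped(["a", "b", "c"], 2)
       for r in external_batched_api_call(batch)]
# out == ["A", "B", "C"]
```

In Kafka Streams itself, the closest approximation today would be buffering records in a state store inside the Processor API and flushing on size or on a punctuation, since the DSL has no built-in equivalent of {{grouped}}.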

 

 

https://stackoverflow.com/questions/52366623/how-to-process-data-in-chunks-batches-with-kafka-streams



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #3045

2018-10-02 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (ubuntu trusty) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags --progress https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 6e79e5da0308783ba646378efc44f018cb4f39ac
error: Could not read bb745c0f9142717ddf68dc83bbd940dfe0c59b9a
remote: Enumerating objects: 2897, done.
remote: Counting objects:  55% (1565/2897) [remaining progress output truncated]

Build failed in Jenkins: kafka-trunk-jdk10 #553

2018-10-02 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (ubuntu trusty) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags --progress https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 42f07849917fadb444802add590e7bb9ca4f6ba2
error: Could not read a9692ff66fccc96ccf95526682136cddb5af0627
remote: Enumerating objects: 2934, done.
remote: Counting objects:  55% (1585/2934) [remaining progress output truncated]

Re: [ANNOUNCE] New committer: Colin McCabe

2018-10-02 Thread James Cheng
Congrats, Colin!

-James

> On Sep 25, 2018, at 1:39 AM, Ismael Juma  wrote:
> 
> Hi all,
> 
> The PMC for Apache Kafka has invited Colin McCabe as a committer and we are
> pleased to announce that he has accepted!
> 
> Colin has contributed 101 commits and 8 KIPs including significant
> improvements to replication, clients, code quality and testing. A few
> highlights were KIP-97 (Improved Clients Compatibility Policy), KIP-117
> (AdminClient), KIP-227 (Incremental FetchRequests to Increase Partition
> Scalability), the introduction of findBugs and adding Trogdor (fault
> injection and benchmarking tool).
> 
> In addition, Colin has reviewed 38 pull requests and participated in more
> than 50 KIP discussions.
> 
> Thank you for your contributions, Colin! Looking forward to many more. :)
> 
> Ismael, for the Apache Kafka PMC



Build failed in Jenkins: kafka-trunk-jdk8 #3044

2018-10-02 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (ubuntu trusty) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags --progress https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 6e79e5da0308783ba646378efc44f018cb4f39ac
error: Could not read bb745c0f9142717ddf68dc83bbd940dfe0c59b9a
remote: Enumerating objects: 2897, done.
remote: Counting objects:  55% (1565/2897) [remaining progress output truncated]