Build failed in Jenkins: kafka-trunk-jdk7 #1455

2016-08-01 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-4008: Module "tools" should not be dependent on "core"

--
[...truncated 7543 lines...]

org.apache.kafka.clients.consumer.internals.ConsumerNetworkClientTest > wakeup 
PASSED

org.apache.kafka.clients.consumer.internals.ConsumerNetworkClientTest > 
schedule STARTED

org.apache.kafka.clients.consumer.internals.ConsumerNetworkClientTest > 
schedule PASSED

org.apache.kafka.clients.consumer.internals.ConsumerNetworkClientTest > send 
STARTED

org.apache.kafka.clients.consumer.internals.ConsumerNetworkClientTest > send 
PASSED

org.apache.kafka.clients.consumer.internals.ConsumerNetworkClientTest > 
sendExpiry STARTED

org.apache.kafka.clients.consumer.internals.ConsumerNetworkClientTest > 
sendExpiry PASSED

org.apache.kafka.clients.consumer.internals.ConsumerInterceptorsTest > 
testOnCommitChain STARTED

org.apache.kafka.clients.consumer.internals.ConsumerInterceptorsTest > 
testOnCommitChain PASSED

org.apache.kafka.clients.consumer.internals.ConsumerInterceptorsTest > 
testOnConsumeChain STARTED

org.apache.kafka.clients.consumer.internals.ConsumerInterceptorsTest > 
testOnConsumeChain PASSED

org.apache.kafka.clients.consumer.MockConsumerTest > testSimpleMock STARTED

org.apache.kafka.clients.consumer.MockConsumerTest > testSimpleMock PASSED

org.apache.kafka.clients.consumer.RangeAssignorTest > testOneConsumerNoTopic 
STARTED

org.apache.kafka.clients.consumer.RangeAssignorTest > testOneConsumerNoTopic 
PASSED

org.apache.kafka.clients.consumer.RangeAssignorTest > 
testTwoConsumersTwoTopicsSixPartitions STARTED

org.apache.kafka.clients.consumer.RangeAssignorTest > 
testTwoConsumersTwoTopicsSixPartitions PASSED

org.apache.kafka.clients.consumer.RangeAssignorTest > testOneConsumerOneTopic 
STARTED

org.apache.kafka.clients.consumer.RangeAssignorTest > testOneConsumerOneTopic 
PASSED

org.apache.kafka.clients.consumer.RangeAssignorTest > 
testMultipleConsumersMixedTopics STARTED

org.apache.kafka.clients.consumer.RangeAssignorTest > 
testMultipleConsumersMixedTopics PASSED

org.apache.kafka.clients.consumer.RangeAssignorTest > 
testTwoConsumersOneTopicOnePartition STARTED

org.apache.kafka.clients.consumer.RangeAssignorTest > 
testTwoConsumersOneTopicOnePartition PASSED

org.apache.kafka.clients.consumer.RangeAssignorTest > 
testOneConsumerMultipleTopics STARTED

org.apache.kafka.clients.consumer.RangeAssignorTest > 
testOneConsumerMultipleTopics PASSED

org.apache.kafka.clients.consumer.RangeAssignorTest > 
testOnlyAssignsPartitionsFromSubscribedTopics STARTED

org.apache.kafka.clients.consumer.RangeAssignorTest > 
testOnlyAssignsPartitionsFromSubscribedTopics PASSED

org.apache.kafka.clients.consumer.RangeAssignorTest > 
testTwoConsumersOneTopicTwoPartitions STARTED

org.apache.kafka.clients.consumer.RangeAssignorTest > 
testTwoConsumersOneTopicTwoPartitions PASSED

org.apache.kafka.clients.consumer.RangeAssignorTest > 
testOneConsumerNonexistentTopic STARTED

org.apache.kafka.clients.consumer.RangeAssignorTest > 
testOneConsumerNonexistentTopic PASSED

org.apache.kafka.clients.consumer.ConsumerRecordTest > testOldConstructor 
STARTED

org.apache.kafka.clients.consumer.ConsumerRecordTest > testOldConstructor PASSED

org.apache.kafka.clients.consumer.ConsumerRecordsTest > iterator STARTED

org.apache.kafka.clients.consumer.ConsumerRecordsTest > iterator PASSED

org.apache.kafka.clients.consumer.KafkaConsumerTest > testConstructorClose 
STARTED

org.apache.kafka.clients.consumer.KafkaConsumerTest > testConstructorClose 
PASSED

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testAssignOnNullTopicInPartition STARTED

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testAssignOnNullTopicInPartition PASSED

org.apache.kafka.clients.consumer.KafkaConsumerTest > testPause STARTED

org.apache.kafka.clients.consumer.KafkaConsumerTest > testPause PASSED

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testInvalidSocketSendBufferSize STARTED

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testInvalidSocketSendBufferSize PASSED

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testCommitsFetchedDuringAssign STARTED

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testCommitsFetchedDuringAssign PASSED

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testOsDefaultSocketBufferSizes STARTED

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testOsDefaultSocketBufferSizes PASSED

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testAssignOnNullTopicPartition STARTED

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testAssignOnNullTopicPartition PASSED

org.apache.kafka.clients.consumer.KafkaConsumerTest > testSeekNegative STARTED

org.apache.kafka.clients.consumer.KafkaConsumerTest > testSeekNegative PASSED

org.apache.kafka.clients.consumer.KafkaConsumerTest > 
testAssignOnEmptyTopicInPartition STARTED


[jira] [Commented] (KAFKA-4008) Module "tools" should not be dependent on "core"

2016-08-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15403282#comment-15403282
 ] 

ASF GitHub Bot commented on KAFKA-4008:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1685


> Module "tools" should not be dependent on "core"
> 
>
> Key: KAFKA-4008
> URL: https://issues.apache.org/jira/browse/KAFKA-4008
> Project: Kafka
>  Issue Type: Bug
>  Components: core, tools
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Blocker
> Fix For: 0.10.1.0, 0.10.0.1
>
>
> The newly introduced "Stream Application Reset Tool" added a dependency on 
> {{core}} to module {{tools}}. We want to get rid of this dependency.
> Solution: move {{StreamsResetter}} into module {{core}}.
> Remark: actually, {{StreamsResetter}} should live in module {{streams}}; 
> however, this change is blocked by KIP-4.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-4008) Module "tools" should not be dependent on "core"

2016-08-01 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-4008.
--
   Resolution: Fixed
Fix Version/s: 0.10.1.0

Issue resolved by pull request 1685
[https://github.com/apache/kafka/pull/1685]

> Module "tools" should not be dependent on "core"
> 
>
> Key: KAFKA-4008
> URL: https://issues.apache.org/jira/browse/KAFKA-4008
> Project: Kafka
>  Issue Type: Bug
>  Components: core, tools
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Blocker
> Fix For: 0.10.1.0, 0.10.0.1
>
>
> The newly introduced "Stream Application Reset Tool" added a dependency on 
> {{core}} to module {{tools}}. We want to get rid of this dependency.
> Solution: move {{StreamsResetter}} into module {{core}}.
> Remark: actually, {{StreamsResetter}} should live in module {{streams}}; 
> however, this change is blocked by KIP-4.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1685: KAFKA-4008: Module "tools" should not be dependent...

2016-08-01 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1685


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3042) updateIsr should stop after failed several times due to zkVersion issue

2016-08-01 Thread Kane Kim (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15402962#comment-15402962
 ] 

Kane Kim commented on KAFKA-3042:
-

We've found it's also triggered by packet loss from the broker to the ZK node; 
the controller doesn't have to be killed.

> updateIsr should stop after failed several times due to zkVersion issue
> ---
>
> Key: KAFKA-3042
> URL: https://issues.apache.org/jira/browse/KAFKA-3042
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
> Environment: jdk 1.7
> centos 6.4
>Reporter: Jiahongchao
> Fix For: 0.10.1.0
>
> Attachments: controller.log, server.log.2016-03-23-01, 
> state-change.log
>
>
> Sometimes one broker may repeatedly log
> "Cached zkVersion 54 not equal to that in zookeeper, skip updating ISR"
> I think this is because the broker considers itself the leader when in fact 
> it's a follower.
> So after several failed tries, it needs to find out who the leader is.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2729) Cached zkVersion not equal to that in zookeeper, broker not recovering.

2016-08-01 Thread Joshua Dickerson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15402900#comment-15402900
 ] 

Joshua Dickerson commented on KAFKA-2729:
-

This has bitten us twice in our live environment (0.9.0.1).
Restarting the affected broker(s) is the only thing that seems to fix it.

> Cached zkVersion not equal to that in zookeeper, broker not recovering.
> ---
>
> Key: KAFKA-2729
> URL: https://issues.apache.org/jira/browse/KAFKA-2729
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
>Reporter: Danil Serdyuchenko
>
> After a small network wobble where the zookeeper nodes couldn't reach each 
> other, we started seeing a large number of under-replicated partitions. The 
> zookeeper cluster recovered; however, we continued to see a large number of 
> under-replicated partitions. Two brokers in the Kafka cluster were showing 
> this in the logs:
> {code}
> [2015-10-27 11:36:00,888] INFO Partition 
> [__samza_checkpoint_event-creation_1,3] on broker 5: Shrinking ISR for 
> partition [__samza_checkpoint_event-creation_1,3] from 6,5 to 5 
> (kafka.cluster.Partition)
> [2015-10-27 11:36:00,891] INFO Partition 
> [__samza_checkpoint_event-creation_1,3] on broker 5: Cached zkVersion [66] 
> not equal to that in zookeeper, skip updating ISR (kafka.cluster.Partition)
> {code}
> This happened for all of the topics on the affected brokers. Both brokers 
> only recovered after a restart. Our own investigation yielded nothing; I was 
> hoping you could shed some light on this issue, possibly whether it's related 
> to https://issues.apache.org/jira/browse/KAFKA-1382 (however, we're using 
> 0.8.2.1).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4011) allow sizing RequestQueue in bytes

2016-08-01 Thread radai rosenblatt (JIRA)
radai rosenblatt created KAFKA-4011:
---

 Summary: allow sizing RequestQueue in bytes
 Key: KAFKA-4011
 URL: https://issues.apache.org/jira/browse/KAFKA-4011
 Project: Kafka
  Issue Type: Improvement
  Components: core
Affects Versions: 0.10.0.0
Reporter: radai rosenblatt
 Fix For: 0.10.1.0


Currently, RequestChannel's requestQueue is sized by the number of requests:

{code:title=RequestChannel.scala|borderStyle=solid}
private val requestQueue = new 
ArrayBlockingQueue[RequestChannel.Request](queueSize)
{code}

Under the assumption that the end goal is a bound on server memory consumption, 
this requires the admin to know the average request size.

I would like to propose sizing the requestQueue not by the number of requests, 
but by their accumulated size (Request.buffer.capacity). This would probably 
make configuring and sizing an instance easier.

There would need to be a new configuration setting for this (queued.max.bytes?), 
which could be either in addition to or instead of the current 
queued.max.requests setting.
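
As a rough illustration of the idea (not the proposed patch), here is a minimal 
sketch of a queue bounded by the accumulated byte size of queued requests rather 
than their count; the RequestSketch class, the queuedMaxBytes parameter name and 
the blocking semantics are illustrative assumptions.

{code}
import java.util.ArrayDeque

// Illustrative stand-in for RequestChannel.Request; only the buffer size matters here.
final case class RequestSketch(sizeInBytes: Int)

// A queue bounded by the accumulated size of queued requests, in the spirit of
// the proposed (hypothetical) queued.max.bytes setting.
final class ByteBoundedRequestQueue(queuedMaxBytes: Long) {
  private val queue = new ArrayDeque[RequestSketch]()
  private var bytesInQueue = 0L

  // Blocks until the request fits under the byte budget, then enqueues it.
  // An oversized request is still admitted when the queue is empty, to avoid deadlock.
  def put(request: RequestSketch): Unit = synchronized {
    while (bytesInQueue + request.sizeInBytes > queuedMaxBytes && !queue.isEmpty)
      wait()
    queue.addLast(request)
    bytesInQueue += request.sizeInBytes
    notifyAll()
  }

  // Blocks until a request is available, then dequeues it and releases its bytes.
  def take(): RequestSketch = synchronized {
    while (queue.isEmpty) wait()
    val request = queue.pollFirst()
    bytesInQueue -= request.sizeInBytes
    notifyAll()
    request
  }
}

object ByteBoundedRequestQueueDemo extends App {
  val queue = new ByteBoundedRequestQueue(queuedMaxBytes = 1024 * 1024)
  queue.put(RequestSketch(sizeInBytes = 4096))
  println(s"dequeued ${queue.take().sizeInBytes} bytes")
}
{code}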



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4010) ConfigDef.toRst() should create sections for each group

2016-08-01 Thread Shikhar Bhushan (JIRA)
Shikhar Bhushan created KAFKA-4010:
--

 Summary: ConfigDef.toRst() should create sections for each group
 Key: KAFKA-4010
 URL: https://issues.apache.org/jira/browse/KAFKA-4010
 Project: Kafka
  Issue Type: Improvement
Reporter: Shikhar Bhushan
Priority: Minor


Currently the ordering seems a bit arbitrary. There is a logical grouping that 
connectors are now able to specify with the 'group' field, which we should use 
as section headers. Also, it would be good to generate a {{:ref:}} label for 
each section.
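
As a rough sketch of the grouping idea (my own illustration, not the actual 
ConfigDef.toRst() code), the following groups config entries by their group and 
emits one RST section per group with a {{:ref:}} label; the ConfigEntrySketch 
shape and the label/heading conventions are assumptions.

{code}
// Hypothetical, simplified view of a config entry; the real ConfigDef keys carry more fields.
final case class ConfigEntrySketch(name: String, group: String, documentation: String)

object RstSectionsSketch {
  // Emits one RST section per group, each preceded by a :ref: label derived from the group name.
  def toRst(entries: Seq[ConfigEntrySketch]): String = {
    val sections = entries.groupBy(_.group).toSeq.sortBy(_._1).map { case (group, configs) =>
      val label = s".. _${group.toLowerCase.replace(' ', '-')}-configs:"
      val header = s"$group\n${"-" * group.length}"
      val body = configs.sortBy(_.name)
        .map(c => s"``${c.name}``\n  ${c.documentation}")
        .mkString("\n\n")
      s"$label\n\n$header\n\n$body"
    }
    sections.mkString("\n\n")
  }

  def main(args: Array[String]): Unit = {
    val entries = Seq(
      ConfigEntrySketch("connection.url", "Connection", "URL of the external system."),
      ConfigEntrySketch("batch.size", "Writes", "Maximum number of records per write."))
    println(toRst(entries))
  }
}
{code}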



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4009) Data corruption or EIO leads to data loss

2016-08-01 Thread Aishwarya Ganesan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15402818#comment-15402818
 ] 

Aishwarya Ganesan commented on KAFKA-4009:
--

Yes, I used ack=all in the producer.

N1 did detect the corruption. N1's log shows the following message:

WARN Found invalid messages in log segment 
/data/corrupt-ds-apps/example/kafka/workload_dir1.mp/my-topic1-0/.log
 at byte offset 0: Message is corrupt (stored crc = 3276854168, computed crc = 
124471979). (kafka.log.LogSegment)

> Data corruption or EIO leads to data loss
> -
>
> Key: KAFKA-4009
> URL: https://issues.apache.org/jira/browse/KAFKA-4009
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.9.0.0
>Reporter: Aishwarya Ganesan
>
> I have a 3-node Kafka cluster (N1, N2 and N3) with 
> log.flush.interval.messages=1, min.insync.replicas=3 and 
> unclean.leader.election.enable=false, and a single Zookeeper node. My 
> workload inserts a few messages, and on completion of the workload the 
> recovery-point-offset-checkpoint reflects the latest offset of the committed 
> messages.
> I have a small testing tool that drives distributed applications into corner 
> cases by simulating possible error conditions like EIO, ENOSPC and EDQUOT 
> that can be encountered in all modern file systems such as ext4. The tool 
> also simulates on-disk silent data corruption.
> When I introduce silent data corruption in a node (say N1) in the ISR, Kafka 
> is able to detect the corruption using checksums and ignores the log entries 
> from that point onwards. Even though N1 has lost log entries and the 
> recovery-point-offset-checkpoint file in N1 indicates the latest offsets, N1 
> is allowed to become the leader because it is in the ISR. Also, the other 
> nodes N2 and N3 crash with the following log message:
> FATAL [ReplicaFetcherThread-0-1], Halting because log truncation is not 
> allowed for topic my-topic1, Current leader 1's latest offset 0 is less than 
> replica 3's latest offset 1 (kafka.server.ReplicaFetcherThread)
> The end result is that a silent data corruption leads to data loss, because 
> querying the cluster returns only messages before the corrupted entry. Note 
> that the cluster at this point has only N1. This situation could have been 
> avoided if the node N1, which had to ignore the log entry, had not been 
> allowed to become the leader. This scenario wouldn't happen in a 
> majority-based leader election, as the other nodes (N2 or N3) would have 
> refused to vote for N1 because N1's log is not complete compared to N2's or 
> N3's log.
> If this scenario happens in any of the followers, the follower ignores the 
> log entry, copies data from the leader, and there is no data loss.
> Encountering an EIO thrown by the file system for a particular block results 
> in the same consequence: data loss on querying the cluster, and the remaining 
> two nodes crash. An EIO on read could be thrown for a variety of reasons, 
> including a latent sector error of one or more sectors on disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4009) Data corruption or EIO leads to data loss

2016-08-01 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15402740#comment-15402740
 ] 

Jun Rao commented on KAFKA-4009:


[~aganesan], thanks for reporting this. In your test, did the producer use 
ack=all? Also, did N1 detect the corruption during log recovery when restarting 
the broker or during appending to the log? 

> Data corruption or EIO leads to data loss
> -
>
> Key: KAFKA-4009
> URL: https://issues.apache.org/jira/browse/KAFKA-4009
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.9.0.0
>Reporter: Aishwarya Ganesan
>
> I have a 3-node Kafka cluster (N1, N2 and N3) with 
> log.flush.interval.messages=1, min.insync.replicas=3 and 
> unclean.leader.election.enable=false, and a single Zookeeper node. My 
> workload inserts a few messages, and on completion of the workload the 
> recovery-point-offset-checkpoint reflects the latest offset of the committed 
> messages.
> I have a small testing tool that drives distributed applications into corner 
> cases by simulating possible error conditions like EIO, ENOSPC and EDQUOT 
> that can be encountered in all modern file systems such as ext4. The tool 
> also simulates on-disk silent data corruption.
> When I introduce silent data corruption in a node (say N1) in the ISR, Kafka 
> is able to detect the corruption using checksums and ignores the log entries 
> from that point onwards. Even though N1 has lost log entries and the 
> recovery-point-offset-checkpoint file in N1 indicates the latest offsets, N1 
> is allowed to become the leader because it is in the ISR. Also, the other 
> nodes N2 and N3 crash with the following log message:
> FATAL [ReplicaFetcherThread-0-1], Halting because log truncation is not 
> allowed for topic my-topic1, Current leader 1's latest offset 0 is less than 
> replica 3's latest offset 1 (kafka.server.ReplicaFetcherThread)
> The end result is that a silent data corruption leads to data loss, because 
> querying the cluster returns only messages before the corrupted entry. Note 
> that the cluster at this point has only N1. This situation could have been 
> avoided if the node N1, which had to ignore the log entry, had not been 
> allowed to become the leader. This scenario wouldn't happen in a 
> majority-based leader election, as the other nodes (N2 or N3) would have 
> refused to vote for N1 because N1's log is not complete compared to N2's or 
> N3's log.
> If this scenario happens in any of the followers, the follower ignores the 
> log entry, copies data from the leader, and there is no data loss.
> Encountering an EIO thrown by the file system for a particular block results 
> in the same consequence: data loss on querying the cluster, and the remaining 
> two nodes crash. An EIO on read could be thrown for a variety of reasons, 
> including a latent sector error of one or more sectors on disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4009) Data corruption or EIO leads to data loss

2016-08-01 Thread Aishwarya Ganesan (JIRA)
Aishwarya Ganesan created KAFKA-4009:


 Summary: Data corruption or EIO leads to data loss
 Key: KAFKA-4009
 URL: https://issues.apache.org/jira/browse/KAFKA-4009
 Project: Kafka
  Issue Type: Bug
  Components: log
Affects Versions: 0.9.0.0
Reporter: Aishwarya Ganesan


I have a 3-node Kafka cluster (N1, N2 and N3) with 
log.flush.interval.messages=1, min.insync.replicas=3 and 
unclean.leader.election.enable=false, and a single Zookeeper node. My workload 
inserts a few messages, and on completion of the workload the 
recovery-point-offset-checkpoint reflects the latest offset of the committed 
messages.

I have a small testing tool that drives distributed applications into corner 
cases by simulating possible error conditions like EIO, ENOSPC and EDQUOT that 
can be encountered in all modern file systems such as ext4. The tool also 
simulates on-disk silent data corruption. 

When I introduce silent data corruption in a node (say N1) in the ISR, Kafka is 
able to detect the corruption using checksums and ignores the log entries from 
that point onwards. Even though N1 has lost log entries and the 
recovery-point-offset-checkpoint file in N1 indicates the latest offsets, N1 is 
allowed to become the leader because it is in the ISR. Also, the other nodes 
N2 and N3 crash with the following log message:

FATAL [ReplicaFetcherThread-0-1], Halting because log truncation is not allowed 
for topic my-topic1, Current leader 1's latest offset 0 is less than replica 
3's latest offset 1 (kafka.server.ReplicaFetcherThread)

The end result is that a silent data corruption leads to data loss, because 
querying the cluster returns only messages before the corrupted entry. Note 
that the cluster at this point has only N1. This situation could have been 
avoided if the node N1, which had to ignore the log entry, had not been allowed 
to become the leader. This scenario wouldn't happen in a majority-based leader 
election, as the other nodes (N2 or N3) would have refused to vote for N1 
because N1's log is not complete compared to N2's or N3's log.

If this scenario happens in any of the followers, the follower ignores the log 
entry, copies data from the leader, and there is no data loss.

Encountering an EIO thrown by the file system for a particular block results in 
the same consequence: data loss on querying the cluster, and the remaining two 
nodes crash. An EIO on read could be thrown for a variety of reasons, including 
a latent sector error of one or more sectors on disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1211) Hold the produce request with ack > 1 in purgatory until replicas' HW has larger than the produce offset

2016-08-01 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15402622#comment-15402622
 ] 

Jun Rao commented on KAFKA-1211:


The following is a draft proposal. [~fpj], does that look reasonable to you?

1. In every log directory, we create a new leader-generation-checkpoint file, 
where we store the sequence (LGS) of leader generations and the start offset of 
messages produced in each generation.
2. When a replica becomes a leader, it first adds the leader generation and the 
log end offset of the replica to the end of the leader-generation-checkpoint 
file and flushes the file. It then remembers its last leader generation (LLG) 
and becomes the leader.
3. When a replica becomes a follower, it does the following steps.
  3.1 Send a new RetrieveLeaderGeneration request for the partition to the 
leader.
  3.2 The leader responds with its LGS in the RetrieveLeaderGeneration response.
  3.3 The follower finds the first leader generation whose start offset differs 
between its local LGS and the leader's LGS. It then truncates its local log to 
the smaller of the two start offsets (its own and the leader's) for that 
generation, if needed.
  3.4 The follower flushes the LGS from the leader to its local 
leader-generation-checkpoint file and also remembers the expected LLG from the 
leader's LGS.
  3.5 The follower starts fetching from the leader from its log end offset.
  3.5.1 During fetching, we extend the FetchResponse to add a new field per 
partition for the LLG in the leader.
  3.5.2 If the follower sees that the returned LLG in the FetchResponse doesn't 
match its expected LLG, it goes back to 3.1. (This can only happen if the 
leader changes more than once between 2 consecutive fetch requests and should 
be rare. We could also just stop the follower and wait for the next 
become-follower request from the controller.)
  3.5.3 Otherwise, the follower proceeds to append the fetched data to its 
local log in the normal way.

Implementation-wise, we probably need to extend ReplicaFetchThread to maintain 
additional state per partition. When a partition is added to a 
ReplicaFetchThread, it needs to go through steps 3.1 to 3.4 before it starts 
fetching data.
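
To make step 3.3 concrete, below is a small illustrative sketch (my own sketch, 
not code from any patch) of how a follower could compute its truncation offset 
by comparing the two leader-generation sequences; the GenerationEntry shape and 
names are assumptions, and the two sequences are assumed to be aligned 
generation-by-generation.

{code}
// One entry of the hypothetical leader-generation-checkpoint file:
// the generation number and the start offset of messages produced in that generation.
final case class GenerationEntry(generation: Int, startOffset: Long)

object TruncationSketch {
  // Step 3.3: find the first generation whose start offset differs between the local
  // and the leader's sequence, and truncate to the smaller of the two start offsets.
  // Returns None if no truncation is needed.
  def truncationOffset(local: Seq[GenerationEntry], leader: Seq[GenerationEntry]): Option[Long] =
    local.zip(leader).collectFirst {
      case (l, r) if l.generation == r.generation && l.startOffset != r.startOffset =>
        math.min(l.startOffset, r.startOffset)
    }

  def main(args: Array[String]): Unit = {
    val localLgs  = Seq(GenerationEntry(1, 0L), GenerationEntry(2, 120L), GenerationEntry(3, 200L))
    val leaderLgs = Seq(GenerationEntry(1, 0L), GenerationEntry(2, 120L), GenerationEntry(3, 180L))
    println(truncationOffset(localLgs, leaderLgs)) // prints Some(180)
  }
}
{code}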

> Hold the produce request with ack > 1 in purgatory until replicas' HW has 
> larger than the produce offset
> 
>
> Key: KAFKA-1211
> URL: https://issues.apache.org/jira/browse/KAFKA-1211
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.11.0.0
>
>
> Today, during leader failover there is a window of weakness while the 
> followers truncate their data before fetching from the new leader, i.e., the 
> number of in-sync replicas is just 1. If during this time the leader also 
> fails, then produce requests with ack > 1 that have already been responded to 
> can still be lost. To avoid this scenario we would prefer to hold the produce 
> request in purgatory until the replicas' HW is larger than the produce 
> offset, instead of just their end-of-log offsets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1454

2016-08-01 Thread Apache Jenkins Server
See 

Changes:

[cshapi] MINOR: lower logging severity for offset reset

--
[...truncated 5624 lines...]

kafka.common.TopicTest > testTopicHasCollision STARTED

kafka.common.TopicTest > testTopicHasCollision PASSED

kafka.common.TopicTest > testTopicHasCollisionChars STARTED

kafka.common.TopicTest > testTopicHasCollisionChars PASSED

kafka.common.ConfigTest > testInvalidGroupIds STARTED

kafka.common.ConfigTest > testInvalidGroupIds PASSED

kafka.common.ConfigTest > testInvalidClientIds STARTED

kafka.common.ConfigTest > testInvalidClientIds PASSED

kafka.zk.ZKEphemeralTest > testOverlappingSessions[0] STARTED

kafka.zk.ZKEphemeralTest > testOverlappingSessions[0] PASSED

kafka.zk.ZKEphemeralTest > testEphemeralNodeCleanup[0] STARTED

kafka.zk.ZKEphemeralTest > testEphemeralNodeCleanup[0] PASSED

kafka.zk.ZKEphemeralTest > testZkWatchedEphemeral[0] STARTED

kafka.zk.ZKEphemeralTest > testZkWatchedEphemeral[0] PASSED

kafka.zk.ZKEphemeralTest > testSameSession[0] STARTED

kafka.zk.ZKEphemeralTest > testSameSession[0] PASSED

kafka.zk.ZKEphemeralTest > testOverlappingSessions[1] STARTED

kafka.zk.ZKEphemeralTest > testOverlappingSessions[1] PASSED

kafka.zk.ZKEphemeralTest > testEphemeralNodeCleanup[1] STARTED

kafka.zk.ZKEphemeralTest > testEphemeralNodeCleanup[1] PASSED

kafka.zk.ZKEphemeralTest > testZkWatchedEphemeral[1] STARTED

kafka.zk.ZKEphemeralTest > testZkWatchedEphemeral[1] PASSED

kafka.zk.ZKEphemeralTest > testSameSession[1] STARTED

kafka.zk.ZKEphemeralTest > testSameSession[1] PASSED

kafka.zk.ZKPathTest > testCreatePersistentSequentialThrowsException STARTED

kafka.zk.ZKPathTest > testCreatePersistentSequentialThrowsException PASSED

kafka.zk.ZKPathTest > testCreatePersistentSequentialExists STARTED

kafka.zk.ZKPathTest > testCreatePersistentSequentialExists PASSED

kafka.zk.ZKPathTest > testCreateEphemeralPathExists STARTED

kafka.zk.ZKPathTest > testCreateEphemeralPathExists PASSED

kafka.zk.ZKPathTest > testCreatePersistentPath STARTED

kafka.zk.ZKPathTest > testCreatePersistentPath PASSED

kafka.zk.ZKPathTest > testMakeSurePersistsPathExistsThrowsException STARTED

kafka.zk.ZKPathTest > testMakeSurePersistsPathExistsThrowsException PASSED

kafka.zk.ZKPathTest > testCreateEphemeralPathThrowsException STARTED

kafka.zk.ZKPathTest > testCreateEphemeralPathThrowsException PASSED

kafka.zk.ZKPathTest > testCreatePersistentPathThrowsException STARTED

kafka.zk.ZKPathTest > testCreatePersistentPathThrowsException PASSED

kafka.zk.ZKPathTest > testMakeSurePersistsPathExists STARTED

kafka.zk.ZKPathTest > testMakeSurePersistsPathExists PASSED

kafka.api.ProducerFailureHandlingTest > testCannotSendToInternalTopic STARTED

kafka.api.ProducerFailureHandlingTest > testCannotSendToInternalTopic PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne STARTED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne PASSED

kafka.api.ProducerFailureHandlingTest > testWrongBrokerList STARTED

kafka.api.ProducerFailureHandlingTest > testWrongBrokerList PASSED

kafka.api.ProducerFailureHandlingTest > testNotEnoughReplicas STARTED

kafka.api.ProducerFailureHandlingTest > testNotEnoughReplicas PASSED

kafka.api.ProducerFailureHandlingTest > testNonExistentTopic STARTED

kafka.api.ProducerFailureHandlingTest > testNonExistentTopic PASSED

kafka.api.ProducerFailureHandlingTest > testInvalidPartition STARTED

kafka.api.ProducerFailureHandlingTest > testInvalidPartition PASSED

kafka.api.ProducerFailureHandlingTest > testSendAfterClosed STARTED

kafka.api.ProducerFailureHandlingTest > testSendAfterClosed PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckZero STARTED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckZero PASSED

kafka.api.ProducerFailureHandlingTest > 
testNotEnoughReplicasAfterBrokerShutdown STARTED

kafka.api.ProducerFailureHandlingTest > 
testNotEnoughReplicasAfterBrokerShutdown PASSED

kafka.api.AdminClientTest > testDescribeGroup STARTED

kafka.api.AdminClientTest > testDescribeGroup PASSED

kafka.api.AdminClientTest > testDescribeConsumerGroup STARTED

kafka.api.AdminClientTest > testDescribeConsumerGroup PASSED

kafka.api.AdminClientTest > testListGroups STARTED

kafka.api.AdminClientTest > testListGroups PASSED

kafka.api.AdminClientTest > testDescribeConsumerGroupForNonExistentGroup STARTED

kafka.api.AdminClientTest > testDescribeConsumerGroupForNonExistentGroup PASSED

kafka.api.test.ProducerCompressionTest > testCompression[0] STARTED

kafka.api.test.ProducerCompressionTest > testCompression[0] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[1] STARTED

kafka.api.test.ProducerCompressionTest > testCompression[1] PASSED

kafka.api.test.ProducerCompressionTest > testCompression[2] STARTED

kafka.api.test.ProducerCompressionTest > testCompression[2] PASSED

kafka.api.test.ProducerCompressionTest > 

[jira] [Updated] (KAFKA-3847) Connect tasks should not share a producer

2016-08-01 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3847:
-
Assignee: Liquan Pei  (was: Ewen Cheslack-Postava)

> Connect tasks should not share a producer
> -
>
> Key: KAFKA-3847
> URL: https://issues.apache.org/jira/browse/KAFKA-3847
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 0.10.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Liquan Pei
>Priority: Critical
> Fix For: 0.10.1.0
>
>
> Currently the tasks share a producer. This is nice in terms of potentially 
> coalescing requests to the same broker, keeping port usage reasonable, 
> minimizing the # of connections to brokers (which is nice for brokers, not so 
> important for connect itself). But it also means we unnecessarily tie tasks 
> to each other in other ways -- e.g. when one needs to flush, we effectively 
> block it on other connectors' data being produced and acked.
> Given that we allocate a consumer per sink, a lot of the arguments for 
> sharing a producer effectively go away. We should decouple the tasks by using 
> a separate producer for each task (or, at a minimum, for each connector's 
> tasks).
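
A minimal sketch of the decoupling being proposed, using the public 
KafkaProducer client; the task-id keying, client-id naming and configuration 
shown here are illustrative assumptions, not the Connect worker's actual 
internals.

{code}
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig}
import org.apache.kafka.common.serialization.ByteArraySerializer

object PerTaskProducersSketch {
  // Instead of one shared producer for all tasks, build one producer per task id,
  // so a flush in one task no longer waits on another connector's in-flight records.
  def producersForTasks(taskIds: Seq[String],
                        bootstrapServers: String): Map[String, KafkaProducer[Array[Byte], Array[Byte]]] =
    taskIds.map { taskId =>
      val props = new Properties()
      props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers)
      props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[ByteArraySerializer].getName)
      props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[ByteArraySerializer].getName)
      props.put(ProducerConfig.CLIENT_ID_CONFIG, s"connect-task-$taskId") // illustrative client id
      taskId -> new KafkaProducer[Array[Byte], Array[Byte]](props)
    }.toMap
}
{code}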



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (KAFKA-4000) Consumer per-topic metrics do not aggregate partitions from the same topic

2016-08-01 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-4000 started by Vahid Hashemian.
--
> Consumer per-topic metrics do not aggregate partitions from the same topic
> --
>
> Key: KAFKA-4000
> URL: https://issues.apache.org/jira/browse/KAFKA-4000
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.1, 0.10.0.0
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
>Priority: Minor
>
> In the Consumer Fetcher code, we have per-topic fetch metrics, but they seem 
> to be computed from each partition separately. It seems like we should 
> aggregate them by topic.
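
A small sketch of the aggregation being asked for, independent of the actual 
Fetcher/Sensor code; the PartitionFetchSketch record shape and names here are 
assumptions, used only to show per-partition figures rolled up per topic.

{code}
// Hypothetical view of what the fetcher records today: bytes and records fetched per topic-partition.
final case class PartitionFetchSketch(topic: String, partition: Int, bytes: Long, records: Long)

object PerTopicMetricsSketch {
  // Aggregate per-partition fetch stats into a single per-topic figure,
  // rather than reporting each partition of the same topic separately.
  def perTopic(fetches: Seq[PartitionFetchSketch]): Map[String, (Long, Long)] =
    fetches.groupBy(_.topic).map { case (topic, parts) =>
      topic -> (parts.map(_.bytes).sum, parts.map(_.records).sum)
    }

  def main(args: Array[String]): Unit = {
    val fetches = Seq(
      PartitionFetchSketch("orders", 0, 1024L, 10L),
      PartitionFetchSketch("orders", 1, 2048L, 20L),
      PartitionFetchSketch("audit",  0,  512L,  5L))
    println(perTopic(fetches)) // orders -> (3072,30), audit -> (512,5)
  }
}
{code}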



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1691: MINOR: lower logging severity for offset reset

2016-08-01 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1691


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2063) Bound fetch response size

2016-08-01 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15402263#comment-15402263
 ] 

Jun Rao commented on KAFKA-2063:


[~nepal], thanks for the proposal.

For a), which server side setting are you referring to? Is it 
replica.fetch.max.bytes? If we want to remove the per partition limit in the 
fetch request, we probably just want to deprecate replica.fetch.max.bytes as 
well.

For b), it seems that the only goal of reordering is for every partition to 
make progress. If this is the case, we probably don't need to do anything more 
than just simple randomization/round robin of the partitions.

Could you clarify the difference between c) and d)? They seem to be in 
conflict. Doing round robin on the client side is probably a bit better than 
randomization since it's more deterministic, but it requires every client to 
implement it. We will have to document this clearly in the request protocol.

> Bound fetch response size
> -
>
> Key: KAFKA-2063
> URL: https://issues.apache.org/jira/browse/KAFKA-2063
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jay Kreps
>
> Currently the only bound on the fetch response size is 
> max.partition.fetch.bytes * num_partitions. There are two problems:
> 1. First, this bound is often large. You may choose 
> max.partition.fetch.bytes=1MB to enable messages of up to 1MB. However, if 
> you also need to consume 1k partitions, this means you may receive a 1GB 
> response in the worst case!
> 2. The actual memory usage is unpredictable. Partition assignment changes, 
> and you only actually get the full fetch amount when you are behind and there 
> is a full chunk of data ready. This means an application that seems to work 
> fine will suddenly OOM when partitions shift or when the application falls 
> behind.
> We need to decouple the fetch response size from the number of partitions.
> The proposal for doing this would be to add a new field to the fetch request, 
> max_bytes which would control the maximum data bytes we would include in the 
> response.
> The implementation on the server side would grab data from each partition in 
> the fetch request until it hit this limit, then send back just the data for 
> the partitions that fit in the response. The implementation would need to 
> start from a random position in the list of topics included in the fetch 
> request to ensure that in a case of backlog we fairly balance between 
> partitions (to avoid first giving just the first partition until that is 
> exhausted, then the next partition, etc).
> This setting will make the max.partition.fetch.bytes field in the fetch 
> request much less useful, and we should discuss just getting rid of it.
> I believe this also solves the same thing we were trying to address in 
> KAFKA-598. The max_bytes setting now becomes the new limit that would need to 
> be compared to max_message size. This can be much larger--e.g. setting a 50MB 
> max_bytes setting would be okay, whereas now if you set 50MB you may need to 
> allocate 50MB*num_partitions.
> This will require evolving the fetch request protocol version to add the new 
> field and we should do a KIP for it.
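
To illustrate the server-side behavior described above (grab data per partition 
from a random starting point until the response-level max_bytes budget is hit), 
here is a simplified sketch; the PartitionDataSketch data model and names are 
assumptions, not the broker's actual fetch path.

{code}
import scala.util.Random

// Hypothetical per-partition payload available for a fetch: topic-partition name -> bytes ready.
final case class PartitionDataSketch(topicPartition: String, availableBytes: Int)

object BoundedFetchSketch {
  // Walk the partitions starting at a random index (for fairness under backlog) and
  // include each partition's data until the response-level maxBytes budget is exhausted.
  def selectForResponse(partitions: IndexedSeq[PartitionDataSketch],
                        maxBytes: Int): Seq[PartitionDataSketch] = {
    if (partitions.isEmpty) return Seq.empty
    val start = Random.nextInt(partitions.size)
    val rotated = partitions.drop(start) ++ partitions.take(start)
    var budget = maxBytes
    rotated.takeWhile { p =>
      val fits = p.availableBytes <= budget
      if (fits) budget -= p.availableBytes
      fits
    }
  }

  def main(args: Array[String]): Unit = {
    val partitions = IndexedSeq(
      PartitionDataSketch("t-0", 400), PartitionDataSketch("t-1", 700), PartitionDataSketch("t-2", 300))
    println(selectForResponse(partitions, maxBytes = 1000))
  }
}
{code}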



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1692: MINOR: Fixed documentation for KStream left join K...

2016-08-01 Thread jpzk
GitHub user jpzk opened a pull request:

https://github.com/apache/kafka/pull/1692

MINOR: Fixed documentation for KStream left join KStream-KTable

We are not joining in a window here. 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jpzk/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1692.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1692


commit 78e866ed2e3edefe2bac7199f91c4a7803facadb
Author: Jendrik Poloczek 
Date:   2016-08-01T15:16:59Z

MINOR: Fixed documentation for KStream left join KStream-KTable




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #1691: MINOR: lower logging severity for offset reset

2016-08-01 Thread cotedm
GitHub user cotedm opened a pull request:

https://github.com/apache/kafka/pull/1691

MINOR: lower logging severity for offset reset

When resetting the first dirty offset to the log start offset, we currently 
log an ERROR, which makes users think the log cleaner has a problem and may 
have exited. We should log a WARN instead to avoid alarming users.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cotedm/kafka minorlogcleanerlogging

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1691.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1691


commit 54d611ce10ec28946a439e41bd28eaf9996f292f
Author: Dustin Cote 
Date:   2016-08-01T15:11:59Z

MINOR: lower logging severity for offset reset

When resetting the first dirty offset to the log start offset, we currently 
log an ERROR, which makes users think the log cleaner has a problem and may 
have exited. We should log a WARN instead to avoid alarming users.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [kafka-clients] [VOTE] 0.10.0.1 RC0

2016-08-01 Thread Harsha Ch
Thanks Ismael.

On Sat, Jul 30, 2016 at 7:43 PM Ismael Juma  wrote:

> Hi Dana,
>
> Thanks for testing releases so promptly. Very much appreciated!
>
> It's funny, Ewen had suggested something similar with regards to the
> release notes a couple of days ago. We now have a Python script for
> generating the release notes:
>
> https://github.com/apache/kafka/blob/trunk/release_notes.py
>
> It should be straightforward to change it to do the grouping. Contributions
> encouraged. :)
>
> Ismael
>
> On Fri, Jul 29, 2016 at 5:02 PM, Dana Powers 
> wrote:
>
> > +1
> >
> > tested against kafka-python integration test suite = pass.
> >
> > Aside: as the scope of kafka gets bigger, it may be useful to organize
> > release notes into functional groups like core, brokers, clients,
> > kafka-streams, etc. I've found this useful when organizing
> > kafka-python release notes.
> >
> > -Dana
> >
> > On Fri, Jul 29, 2016 at 7:46 AM, Ismael Juma  wrote:
> > > Hello Kafka users, developers and client-developers,
> > >
> > > This is the first candidate for the release of Apache Kafka 0.10.0.1.
> > This
> > > is a bug fix release and it includes fixes and improvements from 50
> JIRAs
> > > (including a few critical bugs). See the release notes for more
> details:
> > >
> > > http://home.apache.org/~ijuma/kafka-0.10.0.1-rc0/RELEASE_NOTES.html
> > >
> > > *** Please download, test and vote by Monday, 1 August, 8am PT ***
> > >
> > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > http://kafka.apache.org/KEYS
> > >
> > > * Release artifacts to be voted upon (source and binary):
> > > http://home.apache.org/~ijuma/kafka-0.10.0.1-rc0/
> > >
> > > * Maven artifacts to be voted upon:
> > > https://repository.apache.org/content/groups/staging
> > >
> > > * Javadoc:
> > > http://home.apache.org/~ijuma/kafka-0.10.0.1-rc0/javadoc/
> > >
> > > * Tag to be voted upon (off 0.10.0 branch) is the 0.10.0.1-rc0 tag:
> > >
> >
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=0c2322c2cf7ab7909cfd8b834d1d2fffc34db109
> > >
> > > * Documentation:
> > > http://kafka.apache.org/0100/documentation.html
> > >
> > > * Protocol:
> > > http://kafka.apache.org/0100/protocol.html
> > >
> > > * Successful Jenkins builds for the 0.10.0 branch:
> > > Unit/integration tests:
> > https://builds.apache.org/job/kafka-0.10.0-jdk7/170/
> > > System tests:
> > https://jenkins.confluent.io/job/system-test-kafka-0.10.0/130/
> > >
> > > Thanks,
> > > Ismael
> > >
> > > --
> > > You received this message because you are subscribed to the Google
> Groups
> > > "kafka-clients" group.
> > > To unsubscribe from this group and stop receiving emails from it, send
> an
> > > email to kafka-clients+unsubscr...@googlegroups.com.
> > > To post to this group, send email to kafka-clie...@googlegroups.com.
> > > Visit this group at https://groups.google.com/group/kafka-clients.
> > > To view this discussion on the web visit
> > >
> >
> https://groups.google.com/d/msgid/kafka-clients/CAD5tkZYz8fbLAodpqKg5eRiCsm4ze9QK3ufTz3Q4U%3DGs0CRb1A%40mail.gmail.com
> > .
> > > For more options, visit https://groups.google.com/d/optout.
> >
>


Request: Please add me to contributor list.

2016-08-01 Thread chetan singh
Hello,

I am interested in contributing to the Kafka project. Could I please be added
to the contributor list so that I can assign newbie tickets to myself and
start working on them?

Thank you
Chetan Singh


Build failed in Jenkins: kafka-0.10.0-jdk7 #174

2016-08-01 Thread Apache Jenkins Server
See 

--
[...truncated 6390 lines...]
org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testGetSetNull 
PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testSetFailure 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testWriteFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullValueFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullKeyFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testNoOffsetsToFlush 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testFlushFailureReplacesOffsets PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testAlreadyFlushing 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelBeforeAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelAfterAwaitFlush PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testSaveRestore 
PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testStartStop PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testReloadOnStart PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testSendAndReadToEnd PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testConsumerError PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testProducerError PASSED

org.apache.kafka.connect.util.ShutdownableThreadTest > testGracefulShutdown 
PASSED

org.apache.kafka.connect.util.ShutdownableThreadTest > testForcibleShutdown 
PASSED

org.apache.kafka.connect.util.TableTest > basicOperations PASSED

org.apache.kafka.connect.runtime.AbstractHerderTest > connectorStatus PASSED

org.apache.kafka.connect.runtime.AbstractHerderTest > taskStatus PASSED

org.apache.kafka.connect.runtime.WorkerTaskTest > stopBeforeStarting PASSED

org.apache.kafka.connect.runtime.WorkerTaskTest > standardStartup PASSED

org.apache.kafka.connect.runtime.WorkerTaskTest > cancelBeforeStopping PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > testStartPaused PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > testPause PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > testPollRedelivery PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > 
testErrorInRebalancePartitionRevocation PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > 
testErrorInRebalancePartitionAssignment PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskTest > 
testWakeupInCommitSyncCausesRetry PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testStartPaused PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testPause PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testCommit PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testCommitFailure PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > 
testSendRecordsConvertsData PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testSendRecordsRetries 
PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > 
testSendRecordsTaskCommitRecordFail PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testSlowTaskStart PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testPollsInBackground 
PASSED

org.apache.kafka.connect.runtime.WorkerSourceTaskTest > testFailureInPoll PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorAlreadyExists PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testDestroyConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartUnknownConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartConnectorRedirectToLeader PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartConnectorRedirectToOwner PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartTask PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartUnknownTask PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartTaskRedirectToLeader PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartTaskRedirectToOwner PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 

Build failed in Jenkins: kafka-trunk-jdk8 #790

2016-08-01 Thread Apache Jenkins Server
See 

--
[...truncated 11729 lines...]
org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldThrowInvalidStoreExceptionIfNoStoresExistOnRange PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldSupportAllAcrossMultipleStores STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldSupportAllAcrossMultipleStores PASSED

org.apache.kafka.streams.state.internals.StoreChangeLoggerTest > testAddRemove 
STARTED

org.apache.kafka.streams.state.internals.StoreChangeLoggerTest > testAddRemove 
PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldUseCustomRocksDbConfigSetter STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
shouldUseCustomRocksDbConfigSetter PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > testSize 
STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > testSize 
PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutIfAbsent STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutIfAbsent PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testRestoreWithDefaultSerdes STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testRestoreWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > testRestore 
STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > testRestore 
PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutGetRange STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutGetRange PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutGetRangeWithDefaultSerdes STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutGetRangeWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > testEvict 
STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > testEvict 
PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > testSize 
STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > testSize 
PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutIfAbsent STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutIfAbsent PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testRestoreWithDefaultSerdes STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testRestoreWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testRestore STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testRestore PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRange STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRange PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRangeWithDefaultSerdes STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRangeWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.WrappingStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfNoStoreOfTypeFound STARTED

org.apache.kafka.streams.state.internals.WrappingStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfNoStoreOfTypeFound PASSED

org.apache.kafka.streams.state.internals.WrappingStoreProviderTest > 
shouldFindWindowStores STARTED

org.apache.kafka.streams.state.internals.WrappingStoreProviderTest > 
shouldFindWindowStores PASSED

org.apache.kafka.streams.state.internals.WrappingStoreProviderTest > 
shouldFindKeyValueStores STARTED

org.apache.kafka.streams.state.internals.WrappingStoreProviderTest > 
shouldFindKeyValueStores PASSED

org.apache.kafka.streams.state.internals.WindowStoreUtilsTest > 
testSerialization STARTED

org.apache.kafka.streams.state.internals.WindowStoreUtilsTest > 
testSerialization PASSED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldNotReturnKVStoreWhenIsWindowStore STARTED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldNotReturnKVStoreWhenIsWindowStore PASSED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldReturnNullIfKVStoreDoesntExist STARTED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldReturnNullIfKVStoreDoesntExist PASSED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldReturnNullIfWindowStoreDoesntExist STARTED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldReturnNullIfWindowStoreDoesntExist 

Build failed in Jenkins: kafka-trunk-jdk7 #1453

2016-08-01 Thread Apache Jenkins Server
See 

--
[...truncated 6769 lines...]

kafka.message.ByteBufferMessageSetTest > testValidBytesWithCompression STARTED

kafka.message.ByteBufferMessageSetTest > testValidBytesWithCompression PASSED

kafka.message.ByteBufferMessageSetTest > 
testWriteToChannelThatConsumesPartially STARTED

kafka.message.ByteBufferMessageSetTest > 
testWriteToChannelThatConsumesPartially PASSED

kafka.message.ByteBufferMessageSetTest > 
testOffsetAssignmentAfterMessageFormatConversion STARTED

kafka.message.ByteBufferMessageSetTest > 
testOffsetAssignmentAfterMessageFormatConversion PASSED

kafka.message.ByteBufferMessageSetTest > testIteratorIsConsistent STARTED

kafka.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.message.ByteBufferMessageSetTest > testAbsoluteOffsetAssignment STARTED

kafka.message.ByteBufferMessageSetTest > testAbsoluteOffsetAssignment PASSED

kafka.message.ByteBufferMessageSetTest > testCreateTime STARTED

kafka.message.ByteBufferMessageSetTest > testCreateTime PASSED

kafka.message.ByteBufferMessageSetTest > testInvalidCreateTime STARTED

kafka.message.ByteBufferMessageSetTest > testInvalidCreateTime PASSED

kafka.message.ByteBufferMessageSetTest > testWrittenEqualsRead STARTED

kafka.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.message.ByteBufferMessageSetTest > testLogAppendTime STARTED

kafka.message.ByteBufferMessageSetTest > testLogAppendTime PASSED

kafka.message.ByteBufferMessageSetTest > testWriteTo STARTED

kafka.message.ByteBufferMessageSetTest > testWriteTo PASSED

kafka.message.ByteBufferMessageSetTest > testEquals STARTED

kafka.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.message.ByteBufferMessageSetTest > testSizeInBytes STARTED

kafka.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.message.ByteBufferMessageSetTest > testIterator STARTED

kafka.message.ByteBufferMessageSetTest > testIterator PASSED

kafka.message.ByteBufferMessageSetTest > testRelativeOffsetAssignment STARTED

kafka.message.ByteBufferMessageSetTest > testRelativeOffsetAssignment PASSED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit STARTED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidNewConsumerValidConfig STARTED

kafka.tools.ConsoleConsumerTest > shouldParseValidNewConsumerValidConfig PASSED

kafka.tools.ConsoleConsumerTest > shouldStopWhenOutputCheckErrorFails STARTED

kafka.tools.ConsoleConsumerTest > shouldStopWhenOutputCheckErrorFails PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithStringOffset STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithStringOffset PASSED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile STARTED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithNumericOffset STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithNumericOffset PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidOldConsumerValidConfig STARTED

kafka.tools.ConsoleConsumerTest > shouldParseValidOldConsumerValidConfig PASSED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandler STARTED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandler PASSED

kafka.tools.ConsoleProducerTest > testParseKeyProp STARTED

kafka.tools.ConsoleProducerTest > testParseKeyProp PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsOldProducer STARTED

kafka.tools.ConsoleProducerTest > testValidConfigsOldProducer PASSED

kafka.tools.ConsoleProducerTest > testInvalidConfigs STARTED

kafka.tools.ConsoleProducerTest > testInvalidConfigs PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsNewProducer STARTED

kafka.tools.ConsoleProducerTest > testValidConfigsNewProducer PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
STARTED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics STARTED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics PASSED

kafka.network.SocketServerTest > simpleRequest STARTED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > testSessionPrincipal STARTED

kafka.network.SocketServerTest > testSessionPrincipal PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIpOverrides STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIpOverrides PASSED

kafka.network.SocketServerTest > testSocketsCloseOnShutdown 

JDK configuration for Kafka jobs in Jenkins

2016-08-01 Thread Ismael Juma
Hi all,

Just a quick update regarding the JDK configuration for Kafka jobs in
Jenkins. The Infra team has made some changes to how the JDK is installed
in Jenkins slaves and how it should be configured in Jenkins jobs. See the
following for details:

https://mail-archives.apache.org/mod_mbox/www-builds/201608.mbox/%3CCAN0Gg1eNFn9FP_mdyQBB_9gWHg87B9sjwQ82JbWtkGob42%2B5%2Bw%40mail.gmail.com%3E

I have updated the Kafka Jenkins jobs to use the new configuration options.
JDK 7 jobs now use "JDK 1.7 (latest)" (jdk1.7.0_80) and JDK 8 jobs now use "JDK
1.8 (latest)" (jdk1.8.0_102). Updates within the same major JDK version
will be automatic.

Ismael