Build failed in Jenkins: kafka_system_tests #121

2015-10-26 Thread ewen
See 

--
[...truncated 1671 lines...]


test_id:
2015-10-26--001.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.num_producers=3.acks=1

status: PASS

run time:   1 minute 47.903 seconds

{"records_per_sec": 171342.8457578, "mb_per_sec": 16.34}



test_id:
2015-10-26--001.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.acks=1.message_size=10

status: PASS

run time:   2 minutes 18.435 seconds

{"records_per_sec": 219615.020862, "mb_per_sec": 2.09}



test_id:
2015-10-26--001.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=SSL.acks=1.message_size=10

status: PASS

run time:   3 minutes 6.862 seconds

{"records_per_sec": 134709.409344, "mb_per_sec": 1.28}



test_id:
2015-10-26--001.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.acks=1.message_size=100

status: PASS

run time:   1 minute 41.427 seconds

{"records_per_sec": 67486.775945, "mb_per_sec": 6.44}



test_id:
2015-10-26--001.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=SSL.acks=1.message_size=100

status: PASS

run time:   2 minutes 6.295 seconds

{"records_per_sec": 42687.39266, "mb_per_sec": 4.07}



test_id:
2015-10-26--001.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.acks=1.message_size=1000

status: PASS

run time:   1 minute 36.038 seconds

{"records_per_sec": 8326.633166, "mb_per_sec": 7.94}



test_id:
2015-10-26--001.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=SSL.acks=1.message_size=1000

status: PASS

run time:   1 minute 50.890 seconds

{"records_per_sec": 5316.366949, "mb_per_sec": 5.07}



test_id:
2015-10-26--001.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.acks=1.message_size=1

status: PASS

run time:   1 minute 32.030 seconds

{"records_per_sec": 967.070183, "mb_per_sec": 9.22}



test_id:
2015-10-26--001.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=SSL.acks=1.message_size=1

status: PASS

run time:   1 minute 52.680 seconds

{"records_per_sec": 609.990001, "mb_per_sec": 5.82}



test_id:
2015-10-26--001.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=PLAINTEXT.acks=1.message_size=10

status: PASS

run time:   1 minute 26.415 seconds

{"records_per_sec": 234.615385, "mb_per_sec": 22.37}



test_id:
2015-10-26--001.kafkatest.tests.benchmark_test.Benchmark.test_producer_throughput.topic=topic-replication-factor-three.security_protocol=SSL.acks=1.message_size=10

status: PASS

run time:   1 minute 47.156 seconds

{"records_per_sec": 87.075006, "mb_per_sec": 8.3}



test_id:
2015-10-26--001.kafkatest.sanity_checks.test_console_consumer.ConsoleConsumerTest.test_lifecycle.security_protocol=SSL.new_consumer=True

status: PASS

run time:   1 minute 4.236 seconds



test_id:
2015-10-26--001.kafkatest.sanity_checks.test_console_consumer.ConsoleConsumerTest.test_lifecycle.security_protocol=PLAINTEXT.new_consumer=False

status: PASS

run time:   57.163 seconds



test_id:

[jira] [Created] (KAFKA-2691) Improve handling of authorization failure during metadata refresh

2015-10-26 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-2691:
--

 Summary: Improve handling of authorization failure during metadata 
refresh
 Key: KAFKA-2691
 URL: https://issues.apache.org/jira/browse/KAFKA-2691
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Ismael Juma
 Fix For: 0.9.0.0


There are two problems, one more severe than the other:

1. The consumer blocks indefinitely if there is a non-transient authorization 
failure during metadata refresh, due to KAFKA-2391.
2. We get a TimeoutException instead of an AuthorizationException in the 
producer for the same case.

If the fix for KAFKA-2391 is to add a timeout, then we will have issue `2` in 
both producer and consumer.
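A minimal sketch of why a bare timeout produces issue `2` (hypothetical helper names, not the actual client code): once the metadata wait is bounded only by a deadline, any failure the poll loop cannot surface directly, including a non-transient authorization failure, degenerates into a generic TimeoutException.

```python
import time

class AuthorizationException(Exception):
    pass

class TimeoutException(Exception):
    pass

def await_metadata(poll, timeout_ms):
    """Wait for metadata with a deadline. poll() returns the metadata when
    ready, or None when not yet available. If the underlying cause of
    'not yet available' is a permanent authorization failure that poll()
    cannot distinguish, the caller only ever sees a TimeoutException."""
    deadline = time.monotonic() + timeout_ms / 1000.0
    while time.monotonic() < deadline:
        md = poll()
        if md is not None:
            return md
        time.sleep(0.01)
    raise TimeoutException("metadata not available within %d ms" % timeout_ms)
```

With this shape, the fix discussed in the ticket would require poll() to propagate the authorization error explicitly instead of reporting "not ready".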



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2644: Run relevant ducktape tests with S...

2015-10-26 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/358

KAFKA-2644: Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL

Run sanity check, replication tests and benchmarks with SASL/Kerberos using 
MiniKdc.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-2644

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/358.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #358


commit 65c001a719c26da325b5a1154a61ec4e095afd70
Author: Rajini Sivaram 
Date:   2015-10-25T16:44:43Z

KAFKA-2644: Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2658) Implement SASL/PLAIN

2015-10-26 Thread Rajini Sivaram (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14973909#comment-14973909
 ] 

Rajini Sivaram commented on KAFKA-2658:
---

[~junrao] Can we include this in 0.9.0.0? I can submit ducktape tests for 
SASL/PLAIN later today if the implementation can be included in the release.

> Implement SASL/PLAIN
> 
>
> Key: KAFKA-2658
> URL: https://issues.apache.org/jira/browse/KAFKA-2658
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> KAFKA-1686 supports SASL/Kerberos using GSSAPI. We should enable more SASL 
> mechanisms. SASL/PLAIN would enable a simpler use of SASL, which along with 
> SSL provides a secure Kafka that uses username/password for client 
> authentication.
> SASL/PLAIN protocol and its uses are described in 
> [https://tools.ietf.org/html/rfc4616]. It is supported in Java.
> This should be implemented after KAFKA-1686. This task should also hopefully 
> enable simpler unit testing of the SASL code.





[jira] [Commented] (KAFKA-2644) Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL

2015-10-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14973887#comment-14973887
 ] 

ASF GitHub Bot commented on KAFKA-2644:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/358

KAFKA-2644: Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL

Run sanity check, replication tests and benchmarks with SASL/Kerberos using 
MiniKdc.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-2644

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/358.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #358


commit 65c001a719c26da325b5a1154a61ec4e095afd70
Author: Rajini Sivaram 
Date:   2015-10-25T16:44:43Z

KAFKA-2644: Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL




> Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL
> 
>
> Key: KAFKA-2644
> URL: https://issues.apache.org/jira/browse/KAFKA-2644
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> We need to define which of the existing ducktape tests are relevant. cc 
> [~rsivaram]





[jira] [Updated] (KAFKA-2644) Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL

2015-10-26 Thread Rajini Sivaram (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram updated KAFKA-2644:
--
Status: Patch Available  (was: Open)

I have run the tests using the IBM JDK. It would be good if someone could run 
them using the Oracle JDK prior to commit.

The jars for running MiniKdc are copied using a new target 
copyDependantTestLibs. I have updated tests/README.md to reflect this, but 
automated test runs need to build this new target. Please let me know if this 
is an issue and if it could be done better.

> Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL
> 
>
> Key: KAFKA-2644
> URL: https://issues.apache.org/jira/browse/KAFKA-2644
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> We need to define which of the existing ducktape tests are relevant. cc 
> [~rsivaram]





Re: [DISCUSS] KIP-36 - Rack aware replica assignment

2015-10-26 Thread Aditya Auradkar
Hi Allen,

For TopicMetadataResponse to understand versions, you can bump up the
request version itself. Based on the version of the request, the response
can be appropriately serialized. It shouldn't be a huge change; for
example, we went through something similar for ProduceRequest recently (
https://reviews.apache.org/r/33378/).
I guess the reason protocol information is not included in the TMR is
that the topic itself is independent of any particular protocol (SSL vs
Plaintext). Having said that, I'm not sure we even need rack information in
TMR. What use case were you thinking of initially?

For 1 - I'd be fine with adding an option to the command-line tools that
checks rack assignment, e.g. "--strict-assignment" or something similar.

Aditya

On Thu, Oct 22, 2015 at 6:44 PM, Allen Wang  wrote:

> For 2 and 3, I have updated the KIP. Please take a look. One thing I have
> changed is removing the proposal to add rack to TopicMetadataResponse. The
> reason is that unlike UpdateMetadataRequest, TopicMetadataResponse does not
> understand version. I don't see a way to include rack without breaking old
> version of clients. That's probably why secure protocol is not included in
> the TopicMetadataResponse either. I think it will be a much bigger change
> to include rack in TopicMetadataResponse.
>
> For 1, my concern is that doing rack-aware assignment without a complete
> broker-to-rack mapping will result in an assignment that is not rack aware
> and fails to provide fault tolerance in the event of a rack outage. This kind
> of problem will be difficult to surface. And the cost of this problem is high:
> you have to do partition reassignment if you are lucky enough to spot the
> problem early on, or face the consequence of data loss during a real rack outage.
>
> I do see the concern of fail-fast as it might also cause data loss if
> producer is not able produce the message due to topic creation failure. Is
> it feasible to treat dynamic topic creation and command tools differently?
> We allow dynamic topic creation with incomplete broker-rack mapping and
> fail fast in command line. Another option is to let user determine the
> behavior for command line. For example, by default fail fast in command
> line but allow incomplete broker-rack mapping if another switch is
> provided.
>
>
>
>
> On Tue, Oct 20, 2015 at 10:05 AM, Aditya Auradkar <
> aaurad...@linkedin.com.invalid> wrote:
>
> > Hey Allen,
> >
> > 1. If we choose fail fast topic creation, we will have topic creation
> > failures while upgrading the cluster. I really doubt we want this
> behavior.
> > Ideally, this should be invisible to clients of a cluster. Currently,
> each
> > broker is effectively its own rack. So we probably can use the rack
> > information whenever possible but not make it a hard requirement. To
> extend
> > Gwen's example, one badly configured broker should not degrade topic
> > creation for the entire cluster.
> >
> > 2. Upgrade scenario - Can you add a section on the upgrade piece to
> confirm
> > that old clients will not see errors? I believe
> ZookeeperConsumerConnector
> > reads the Broker objects from ZK. I wanted to confirm that this will not
> > cause any problems.
> >
> > 3. Could you elaborate your proposed changes to the UpdateMetadataRequest
> > in the "Public Interfaces" section? Personally, I find this format easy
> to
> > read in terms of wire protocol changes:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations#KIP-4-Commandlineandcentralizedadministrativeoperations-CreateTopicRequest
> >
> > Aditya
> >
> > On Fri, Oct 16, 2015 at 3:45 PM, Allen Wang 
> wrote:
> >
> > > KIP is updated include rack as an optional property for broker. Please
> > take
> > > a look and let me know if more details are needed.
> > >
> > > For the case where some brokers have rack and some do not, the current
> > KIP
> > > uses the fail-fast behavior. If there are concerns, we can further
> > discuss
> > > this in the email thread or next hangout.
> > >
> > >
> > >
> > > On Thu, Oct 15, 2015 at 10:42 AM, Allen Wang 
> > wrote:
> > >
> > > > That's a good question. I can think of three actions if the rack
> > > > information is incomplete:
> > > >
> > > > 1. Treat the node without rack as if it is on its unique rack
> > > > 2. Disregard all rack information and fallback to current algorithm
> > > > 3. Fail-fast
> > > >
> > > > Now that I think about it, options one and three make more sense. The
> > > > reason for fail-fast is that a user's mistake of not providing the rack
> > > > may never be found if we tolerate it; the assignment may not be rack
> > > > aware as the user expected, and this creates debugging problems when
> > > > things fail.
> > > >
> > > > What do you think? If not fail-fast, is there any way we can make the
> > > > user error stand out?
> > > >
> > > >
> > > > On Thu, Oct 15, 2015 at 10:17 AM, 
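The three options under discussion could be sketched roughly as follows (a hypothetical illustration with invented names, not the KIP's implementation; `brokers` maps broker id to an optional rack):

```python
def effective_racks(brokers, on_missing="unique"):
    """Resolve broker -> rack for rack-aware assignment when some racks are missing.

    brokers: dict of broker_id -> rack string, or None if not configured.
    on_missing:
      'unique'    - option 1: treat a rack-less broker as being on its own rack
      'disregard' - option 2: ignore all rack info, fall back to current algorithm
      'fail'      - option 3: fail fast on an incomplete mapping
    """
    missing = sorted(b for b, rack in brokers.items() if rack is None)
    if not missing:
        return dict(brokers)
    if on_missing == "fail":
        raise ValueError("brokers without rack: %s" % missing)
    if on_missing == "disregard":
        return None  # caller falls back to the non-rack-aware assignment
    # option 1: give each rack-less broker a unique synthetic rack
    return {b: (rack if rack is not None else "__rack_broker_%s" % b)
            for b, rack in brokers.items()}

racks = effective_racks({1: "r1", 2: None, 3: "r2"}, on_missing="unique")
assert racks[2] == "__rack_broker_2"
```

Option 1 keeps the assignment algorithm uniform at the cost of silently weaker fault tolerance; option 3 surfaces the misconfiguration immediately, which is the trade-off the thread is weighing for dynamic topic creation versus command-line tools.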

Re: [DISCUSS] KIP-32 - Add CreateTime and LogAppendTime to Kafka message

2015-10-26 Thread Jiangjie Qin
Hi Jay,

Thanks for such a detailed explanation. I think we are both trying to make
CreateTime work for us if possible. To me, "work" means clear guarantees on:
1. Log retention time enforcement.
2. Log rolling time enforcement (this might be less of a concern, as you
pointed out).
3. Applications searching for messages by time.

WRT (1), I agree the expectation for log retention might be different
depending on whom we ask. But my concern is about the level of guarantee we
give to the user. My observation is that a clear guarantee to the user is
critical regardless of the mechanism we choose, and this is the subtle but
important difference between using LogAppendTime and CreateTime.

Let's say a user asks this question: how long will my message stay in Kafka?

If we use LogAppendTime for log retention, the answer is that the message will
stay in Kafka for the retention time after it is produced (more precisely,
upper bounded by log.rolling.ms + log.retention.ms). The user has a clear
guarantee and can decide whether or not to put the message into Kafka, or how
to adjust the retention time according to their requirements.

If we use create time for log retention, the answer would be: it depends. The
best answer we can give is "at least retention.ms", because there is no
guarantee when the messages will be deleted after that. If a message sits
somewhere behind a larger create time, the message might stay longer than
expected, but we don't know how much longer, because it depends on the create
time. In this case, it is hard for the user to decide what to do.

I am worried about this because a blurry guarantee has bitten us before,
e.g. topic creation. We have received many questions like "why is my topic
not there after I created it?". I can imagine we would receive similar
questions asking "why is my message still there after the retention time has
passed?". So my understanding is that a clear and solid guarantee is better
than a mechanism that works in most cases but occasionally does not.

If we think of the retention guarantee we provide with LogAppendTime, it is
not broken, as you said, because we are telling the user that log retention is
NOT based on create time in the first place.

WRT (3), no matter whether we index on LogAppendTime or CreateTime, the best
guarantee we can provide the user is "no missing messages after a certain
timestamp". Therefore I would actually really like to index on CreateTime,
because that is the timestamp we provide to the user, and we can offer a solid
guarantee.
On the other hand, indexing on LogAppendTime while giving the user CreateTime
does not provide a solid guarantee when users search by timestamp. It only
works when LogAppendTime is always no earlier than CreateTime. This is a
reasonable assumption, and we can easily enforce it.

With the above, I am not sure we can avoid a server timestamp to make log
retention work with a clear guarantee. For the search-by-timestamp use case, I
really want the index built on CreateTime, but with a reasonable assumption
and timestamp enforcement, a LogAppendTime index would also work.

Thanks,

Jiangjie (Becket) Qin



On Thu, Oct 22, 2015 at 10:48 AM, Jay Kreps  wrote:

> Hey Becket,
>
> Let me see if I can address your concerns:
>
> 1. Let's say we have two source clusters that are mirrored to the same
> > target cluster. For some reason one of the mirror maker from a cluster
> dies
> > and after fix the issue we want to resume mirroring. In this case it is
> > possible that when the mirror maker resumes mirroring, the timestamp of
> the
> > messages have already gone beyond the acceptable timestamp range on
> broker.
> > In order to let those messages go through, we have to bump up the
> > *max.append.delay
> > *for all the topics on the target broker. This could be painful.
>
>
> Actually what I was suggesting was different. Here is my observation:
> clusters/topics directly produced to by applications have a valid assertion
> that log append time and create time are similar (let's call these
> "unbuffered"); other cluster/topic such as those that receive data from a
> database, a log file, or another kafka cluster don't have that assertion,
> for these "buffered" clusters data can be arbitrarily late. This means any
> use of log append time on these buffered clusters is not very meaningful,
> and create time and log append time "should" be similar on unbuffered
> clusters so you can probably use either.
>
> Using log append time on buffered clusters actually results in bad things.
> If you request the offset for a given time, you don't end up getting data
> for that time but rather data that showed up at that time. If you try to
> retain 7 days of data, it may mostly work, but any kind of bootstrapping
> will result in retaining much more (potentially the whole database
> contents!).
>
> So what I am suggesting in terms of the use of the max.append.delay is that
> unbuffered clusters would have this set and buffered clusters would not. 
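The retention concern in this thread can be illustrated with a toy deletion check, under the common assumption (illustrative names, not broker code) that a log segment is only eligible for deletion once its newest timestamp falls outside the retention window:

```python
def segment_expired(segment_timestamps, now_ms, retention_ms):
    """A segment expires only when its newest timestamp is older than the
    retention window. With create-time-based retention, a single message
    carrying a large create time keeps every older message in the same
    segment alive past the expected retention."""
    return max(segment_timestamps) < now_ms - retention_ms

now = 1_000_000
retention = 100_000
# All messages old: the segment is deleted on schedule.
assert segment_expired([850_000, 860_000], now, retention)
# One recent timestamp in the segment delays deletion of the old ones too.
assert not segment_expired([850_000, 950_000], now, retention)
```

The same check with log-append timestamps cannot be delayed by late-arriving data, which is why the thread frames this as a trade-off between a precise retention guarantee and a user-meaningful timestamp.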

Build failed in Jenkins: kafka-trunk-jdk7 #720

2015-10-26 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2652: integrate new group protocol into partition grouping

--
[...truncated 1734 lines...]

kafka.server.OffsetCommitTest > testUpdateOffsets PASSED

kafka.server.OffsetCommitTest > testLargeMetadataPayload PASSED

kafka.server.OffsetCommitTest > testCommitAndFetchOffsets PASSED

kafka.server.OffsetCommitTest > testOffsetExpiration PASSED

kafka.server.OffsetCommitTest > testNonExistingTopicOffsetCommit PASSED

kafka.server.PlaintextReplicaFetchTest > testReplicaFetcherThread PASSED

kafka.server.SaslPlaintextReplicaFetchTest > testReplicaFetcherThread PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeEarliestTime PASSED

kafka.server.LogOffsetTest > testGetOffsetsForUnknownTopic PASSED

kafka.server.LogOffsetTest > testEmptyLogsGetOffsets PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeLatestTime PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeNow PASSED

kafka.server.AdvertiseBrokerTest > testBrokerAdvertiseToZK PASSED

kafka.server.ServerStartupTest > testBrokerCreatesZKChroot PASSED

kafka.server.ServerStartupTest > testConflictBrokerRegistration PASSED

kafka.server.DelayedOperationTest > testRequestPurge PASSED

kafka.server.DelayedOperationTest > testRequestExpiry PASSED

kafka.server.DelayedOperationTest > testRequestSatisfaction PASSED

kafka.server.LeaderElectionTest > testLeaderElectionWithStaleControllerEpoch 
PASSED

kafka.server.LeaderElectionTest > testLeaderElectionAndEpoch PASSED

kafka.server.DynamicConfigChangeTest > testProcessNotification PASSED

kafka.server.DynamicConfigChangeTest > testClientQuotaConfigChange PASSED

kafka.server.DynamicConfigChangeTest > testConfigChangeOnNonExistingTopic PASSED

kafka.server.DynamicConfigChangeTest > testConfigChange PASSED

kafka.server.HighwatermarkPersistenceTest > 
testHighWatermarkPersistenceMultiplePartitions PASSED

kafka.server.HighwatermarkPersistenceTest > 
testHighWatermarkPersistenceSinglePartition PASSED

kafka.server.LogRecoveryTest > testHWCheckpointNoFailuresMultipleLogSegments 
PASSED

kafka.server.LogRecoveryTest > testHWCheckpointWithFailuresMultipleLogSegments 
PASSED

kafka.server.LogRecoveryTest > testHWCheckpointNoFailuresSingleLogSegment PASSED

kafka.server.LogRecoveryTest > testHWCheckpointWithFailuresSingleLogSegment 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testTopicMetadataRequest 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopicWithCollision 
PASSED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokerListWithNoTopics 
PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.PlaintextTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.PlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > testTopicMetadataRequest PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.SslTopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
PASSED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.SslTopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopic PASSED


[jira] [Commented] (KAFKA-2689) Expose select gauges and metrics programmatically (not just through JMX)

2015-10-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974897#comment-14974897
 ] 

ASF GitHub Bot commented on KAFKA-2689:
---

GitHub user enothereska opened a pull request:

https://github.com/apache/kafka/pull/363

KAFKA-2689: Expose select gauges and metrics programmatically (not just 
through JMX)

For now just exposing the replica manager gauges.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/enothereska/kafka kafka-2689

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/363.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #363


commit 90c0085a76374fafe6fa62c18e3d24504852e687
Author: Eno Thereska 
Date:   2015-10-07T00:06:49Z

Commits to fix timing issues in three JIRAs

commit ee66491fb36d55527d156afda90c3addc3eb3175
Author: Eno Thereska 
Date:   2015-10-07T00:07:21Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit 17a373733e414456475217248cbc7b0bc98fda40
Author: Eno Thereska 
Date:   2015-10-07T15:15:19Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit eb5fbf458a5b455ae8b3c8b3ebf32524f5a3ab3e
Author: Eno Thereska 
Date:   2015-10-07T16:20:45Z

Removed debug messages

commit 041baae45012cf8f99afd2c8b5d9a8099a8a928b
Author: Eno Thereska 
Date:   2015-10-07T17:35:12Z

Pick a node, but not one that is blacked out

commit 69679d7e61d36f76d2ea1dd1fcc0a1192c9b50d6
Author: Eno Thereska 
Date:   2015-10-08T17:18:02Z

Removed unneeded checks

commit 3ce5e151396575f45d1f022720f454ac36653d0d
Author: Eno Thereska 
Date:   2015-10-08T17:18:18Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit 76e6a0d8ab3fe847b28edde2e0072e7fe06484ff
Author: Eno Thereska 
Date:   2015-10-08T23:35:41Z

More efficient implementation of nodesEverSeen

commit 6576f372e0cddcc54b6fcb19b9d471cff16bcd77
Author: Eno Thereska 
Date:   2015-10-10T19:04:54Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit 0f9507310812740d1a8304c6350f434b5a661c63
Author: Eno Thereska 
Date:   2015-10-12T21:33:52Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit b7c4c3c1600a6e21884dbcb39588a0681d351d60
Author: Eno Thereska 
Date:   2015-10-16T08:47:35Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit f6bd8788f0e6088ad81fd2847b999e3b0d4ecd2c
Author: Eno Thereska 
Date:   2015-10-16T10:39:25Z

Fixed handling of now. Added unit test for leastLoadedNode. Fixed 
disconnected method in MockSelector

commit b5f4c1796894de5b0c4cc31b7de98eb4536c0ccf
Author: Eno Thereska 
Date:   2015-10-17T19:53:15Z

Check case when node with same broker id has different host or port

commit bee1d583fa67d944e40ec700d0212c1bac314703
Author: Eno Thereska 
Date:   2015-10-17T19:53:30Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit bdf2fcf29d5396b97b9a24bf962a7c40b6a795c6
Author: Eno Thereska 
Date:   2015-10-19T21:26:46Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit d612afe2fd7ce63b054d2406e8d82419b3b39841
Author: Eno Thereska 
Date:   2015-10-19T21:30:33Z

Removed unnecessary Map remove

commit ba5eafcfeb006c403e7047c45442eca0d9ec763a
Author: Eno Thereska 
Date:   2015-10-20T07:59:03Z

Cleaned up parts of code. Minor.

commit 65e3aee2c9491b0411672eaf568034160b331074
Author: Eno Thereska 
Date:   2015-10-20T07:59:19Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit 4ab54e061a1708d086f7720dc40778cdaf0d0362
Author: Eno Thereska 
Date:   2015-10-20T10:03:14Z

More cleanup. Minor

commit 570c15ff8032248018cc8c5a7f0df75d840a898f
Author: Eno Thereska 
Date:   2015-10-21T08:35:24Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit 2a1f1a6cd350d2e655e5a0b41d66fca8f0af5782
Author: Eno Thereska 
Date:   2015-10-21T20:02:38Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit 285bd1c0d8830e0e89ec49716b639156d291ace6
Author: Eno Thereska 
Date:   2015-10-23T17:40:14Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit d33199d5d32e7fc2f22e4fa64b505f15427d5be0
Author: Eno Thereska 
Date:   

[jira] [Created] (KAFKA-2694) Make a task id be a composite id of a task group id and a partition id

2015-10-26 Thread Yasuhiro Matsuda (JIRA)
Yasuhiro Matsuda created KAFKA-2694:
---

 Summary: Make a task id be a composite id of a task group id and a 
partition id
 Key: KAFKA-2694
 URL: https://issues.apache.org/jira/browse/KAFKA-2694
 Project: Kafka
  Issue Type: Task
  Components: kafka streams
Reporter: Yasuhiro Matsuda
 Fix For: 0.9.0.0








[jira] [Updated] (KAFKA-2694) Make a task id be a composite id of a topic group id and a partition id

2015-10-26 Thread Yasuhiro Matsuda (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yasuhiro Matsuda updated KAFKA-2694:

Summary: Make a task id be a composite id of a topic group id and a 
partition id  (was: Make a task id be a composite id of a task group id and a 
partition id)

> Make a task id be a composite id of a topic group id and a partition id
> ---
>
> Key: KAFKA-2694
> URL: https://issues.apache.org/jira/browse/KAFKA-2694
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Reporter: Yasuhiro Matsuda
> Fix For: 0.9.0.0
>
>






[jira] [Assigned] (KAFKA-2694) Make a task id be a composite id of a topic group id and a partition id

2015-10-26 Thread Yasuhiro Matsuda (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yasuhiro Matsuda reassigned KAFKA-2694:
---

Assignee: Yasuhiro Matsuda

> Make a task id be a composite id of a topic group id and a partition id
> ---
>
> Key: KAFKA-2694
> URL: https://issues.apache.org/jira/browse/KAFKA-2694
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
> Fix For: 0.9.0.0
>
>






[jira] [Commented] (KAFKA-2644) Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL

2015-10-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974859#comment-14974859
 ] 

ASF GitHub Bot commented on KAFKA-2644:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/361

MINOR: Add new build target for system test libs

KAFKA-2644 adds MiniKdc for system tests and hence needs a target to 
collect all MiniKdc jars. At the moment, system tests run `gradlew jar`. 
Replacing that with `gradlew systemTestLibs` will enable kafka jars and test 
dependency jars to be built and copied into appropriate locations. Submitting 
this as a separate PR so that the new target can be added to the build scripts 
that run system tests before KAFKA-2644 is committed. A separate target for 
system test artifacts will allow dependency changes to be made in future 
without breaking test runs.
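A target like the one described might look roughly as follows in build.gradle. The task names echo the ones mentioned in this thread (systemTestLibs, copyDependantTestLibs), but the body is a sketch under assumptions, not the committed build script:

```groovy
// Hypothetical sketch: copy test dependency jars (e.g. MiniKdc) to a known
// location and aggregate that with the jar build into a single target that
// system-test scripts can invoke via `gradlew systemTestLibs`.
task copyDependantTestLibs(type: Copy) {
    from(configurations.testRuntime) {
        include '*minikdc*'   // MiniKdc jars needed by the SASL system tests
    }
    into "$buildDir/dependant-testlibs"
}

task systemTestLibs(dependsOn: ['jar', 'copyDependantTestLibs'])
```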

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka kafka-systemTestLibs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/361.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #361


commit c4925d71bcc248c566a04a6348a216870f56243a
Author: Rajini Sivaram 
Date:   2015-10-26T19:07:21Z

MINOR: Add new build target for system test libs




> Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL
> 
>
> Key: KAFKA-2644
> URL: https://issues.apache.org/jira/browse/KAFKA-2644
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> We need to define which of the existing ducktape tests are relevant. cc 
> [~rsivaram]





[jira] [Commented] (KAFKA-2694) Make a task id be a composite id of a task group id and a partition id

2015-10-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975146#comment-14975146
 ] 

ASF GitHub Bot commented on KAFKA-2694:
---

GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/365

KAFKA-2694: Task Id

@guozhangwang 

* A task id is now a class, ```TaskId```, with task group id and 
partition id fields.
* ```TopologyBuilder``` assigns a task group id to a topic group. Related 
methods are changed accordingly.
* A state store uses the partition id part of the task id as the change log 
partition id.
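The points above can be pictured with a minimal sketch; field and method names here are illustrative, not the committed Kafka Streams API:

```java
// Sketch of a composite task id as described in the PR: a topic group id plus
// a partition id, with the partition id doubling as the change log partition.
public class TaskId implements Comparable<TaskId> {
    public final int topicGroupId;
    public final int partition;

    public TaskId(int topicGroupId, int partition) {
        this.topicGroupId = topicGroupId;
        this.partition = partition;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof TaskId)) return false;
        TaskId other = (TaskId) o;
        return other.topicGroupId == topicGroupId && other.partition == partition;
    }

    @Override
    public int hashCode() {
        return 31 * topicGroupId + partition;
    }

    @Override
    public int compareTo(TaskId other) {
        int cmp = Integer.compare(topicGroupId, other.topicGroupId);
        return cmp != 0 ? cmp : Integer.compare(partition, other.partition);
    }

    @Override
    public String toString() {
        return topicGroupId + "_" + partition;
    }
}
```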

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka task_id

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/365.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #365


commit 31375d79ad666c6a38a7566e729a9062c9a97563
Author: Yasuhiro Matsuda 
Date:   2015-10-26T21:08:56Z

TaskId class




> Make a task id be a composite id of a task group id and a partition id
> --
>
> Key: KAFKA-2694
> URL: https://issues.apache.org/jira/browse/KAFKA-2694
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Reporter: Yasuhiro Matsuda
> Fix For: 0.9.0.0
>
>






[GitHub] kafka pull request: KAFKA-2689: Expose select gauges and metrics p...

2015-10-26 Thread enothereska
GitHub user enothereska opened a pull request:

https://github.com/apache/kafka/pull/363

KAFKA-2689: Expose select gauges and metrics programmatically (not just 
through JMX)

For now just exposing the replica manager gauges.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/enothereska/kafka kafka-2689

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/363.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #363


commit 90c0085a76374fafe6fa62c18e3d24504852e687
Author: Eno Thereska 
Date:   2015-10-07T00:06:49Z

Commits to fix timing issues in three JIRAs

commit ee66491fb36d55527d156afda90c3addc3eb3175
Author: Eno Thereska 
Date:   2015-10-07T00:07:21Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit 17a373733e414456475217248cbc7b0bc98fda40
Author: Eno Thereska 
Date:   2015-10-07T15:15:19Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit eb5fbf458a5b455ae8b3c8b3ebf32524f5a3ab3e
Author: Eno Thereska 
Date:   2015-10-07T16:20:45Z

Removed debug messages

commit 041baae45012cf8f99afd2c8b5d9a8099a8a928b
Author: Eno Thereska 
Date:   2015-10-07T17:35:12Z

Pick a node, but not one that is blacked out

commit 69679d7e61d36f76d2ea1dd1fcc0a1192c9b50d6
Author: Eno Thereska 
Date:   2015-10-08T17:18:02Z

Removed unneeded checks

commit 3ce5e151396575f45d1f022720f454ac36653d0d
Author: Eno Thereska 
Date:   2015-10-08T17:18:18Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit 76e6a0d8ab3fe847b28edde2e0072e7fe06484ff
Author: Eno Thereska 
Date:   2015-10-08T23:35:41Z

More efficient implementation of nodesEverSeen

commit 6576f372e0cddcc54b6fcb19b9d471cff16bcd77
Author: Eno Thereska 
Date:   2015-10-10T19:04:54Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit 0f9507310812740d1a8304c6350f434b5a661c63
Author: Eno Thereska 
Date:   2015-10-12T21:33:52Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit b7c4c3c1600a6e21884dbcb39588a0681d351d60
Author: Eno Thereska 
Date:   2015-10-16T08:47:35Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit f6bd8788f0e6088ad81fd2847b999e3b0d4ecd2c
Author: Eno Thereska 
Date:   2015-10-16T10:39:25Z

Fixed handling of now. Added unit test for leastLoadedNode. Fixed 
disconnected method in MockSelector

commit b5f4c1796894de5b0c4cc31b7de98eb4536c0ccf
Author: Eno Thereska 
Date:   2015-10-17T19:53:15Z

Check case when node with same broker id has different host or port

commit bee1d583fa67d944e40ec700d0212c1bac314703
Author: Eno Thereska 
Date:   2015-10-17T19:53:30Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit bdf2fcf29d5396b97b9a24bf962a7c40b6a795c6
Author: Eno Thereska 
Date:   2015-10-19T21:26:46Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit d612afe2fd7ce63b054d2406e8d82419b3b39841
Author: Eno Thereska 
Date:   2015-10-19T21:30:33Z

Removed unnecessary Map remove

commit ba5eafcfeb006c403e7047c45442eca0d9ec763a
Author: Eno Thereska 
Date:   2015-10-20T07:59:03Z

Cleaned up parts of code. Minor.

commit 65e3aee2c9491b0411672eaf568034160b331074
Author: Eno Thereska 
Date:   2015-10-20T07:59:19Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit 4ab54e061a1708d086f7720dc40778cdaf0d0362
Author: Eno Thereska 
Date:   2015-10-20T10:03:14Z

More cleanup. Minor

commit 570c15ff8032248018cc8c5a7f0df75d840a898f
Author: Eno Thereska 
Date:   2015-10-21T08:35:24Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit 2a1f1a6cd350d2e655e5a0b41d66fca8f0af5782
Author: Eno Thereska 
Date:   2015-10-21T20:02:38Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit 285bd1c0d8830e0e89ec49716b639156d291ace6
Author: Eno Thereska 
Date:   2015-10-23T17:40:14Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit d33199d5d32e7fc2f22e4fa64b505f15427d5be0
Author: Eno Thereska 
Date:   2015-10-26T12:44:21Z

Merge remote-tracking branch 'apache-kafka/trunk' into trunk

commit ff6c0cf771f4deab83ba603e447840cfa4a87a29
Author: Eno Thereska 
Date:   2015-10-26T19:46:46Z

Exporting metrics.


[jira] [Commented] (KAFKA-2674) ConsumerRebalanceListener.onPartitionsRevoked() is not called on consumer close

2015-10-26 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975020#comment-14975020
 ] 

Guozhang Wang commented on KAFKA-2674:
--

Personally I prefer the old function names, since the proposed new names do not 
seem closely related to the partitions they take as a parameter, so whatever 
confusion they resolve is outweighed by the new confusion they introduce. I 
would suggest we only document clearly that the callback will not be triggered 
on consumer close.

> ConsumerRebalanceListener.onPartitionsRevoked() is not called on consumer 
> close
> ---
>
> Key: KAFKA-2674
> URL: https://issues.apache.org/jira/browse/KAFKA-2674
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Michal Turek
>Assignee: Jason Gustafson
>
> Hi, I'm investigating and testing behavior of new consumer from the planned 
> release 0.9 and found an inconsistency in calling of rebalance callbacks.
> I noticed that ConsumerRebalanceListener.onPartitionsRevoked() is NOT called 
> during consumer close and application shutdown. Its JavaDoc contract says:
> - "This method will be called before a rebalance operation starts and after 
> the consumer stops fetching data."
> - "It is recommended that offsets should be committed in this callback to 
> either Kafka or a custom offset store to prevent duplicate data."
> I believe calling consumer.close() is the start of a rebalance operation, and even 
> the local consumer that is actually closing should be notified to be able to 
> process any rebalance logic including offsets commit (e.g. if auto-commit is 
> disabled).
> Below are commented logs of the current and expected behaviors.
> {noformat}
> // Application start
> 2015-10-20 15:14:02.208 INFO  o.a.k.common.utils.AppInfoParser
> [TestConsumer-worker-0]: Kafka version : 0.9.0.0-SNAPSHOT 
> (AppInfoParser.java:82)
> 2015-10-20 15:14:02.208 INFO  o.a.k.common.utils.AppInfoParser
> [TestConsumer-worker-0]: Kafka commitId : 241b9ab58dcbde0c 
> (AppInfoParser.java:83)
> // Consumer started (the first one in group), rebalance callbacks are called 
> including empty onPartitionsRevoked()
> 2015-10-20 15:14:02.333 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, revoked: [] 
> (TestConsumer.java:95)
> 2015-10-20 15:14:02.343 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, assigned: [testB-1, testA-0, 
> testB-0, testB-3, testA-2, testB-2, testA-1, testA-4, testB-4, testA-3] 
> (TestConsumer.java:100)
> // Another consumer joined the group, rebalancing
> 2015-10-20 15:14:17.345 INFO  o.a.k.c.c.internals.Coordinator 
> [TestConsumer-worker-0]: Attempt to heart beat failed since the group is 
> rebalancing, try to re-join group. (Coordinator.java:714)
> 2015-10-20 15:14:17.346 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, revoked: [testB-1, testA-0, 
> testB-0, testB-3, testA-2, testB-2, testA-1, testA-4, testB-4, testA-3] 
> (TestConsumer.java:95)
> 2015-10-20 15:14:17.349 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, assigned: [testB-3, testA-4, 
> testB-4, testA-3] (TestConsumer.java:100)
> // Consumer started closing, there SHOULD be onPartitionsRevoked() callback 
> to commit offsets like during standard rebalance, but it is missing
> 2015-10-20 15:14:39.280 INFO  c.a.e.kafka.newapi.TestConsumer [main]: 
> Closing instance (TestConsumer.java:42)
> 2015-10-20 15:14:40.264 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Worker thread stopped (TestConsumer.java:89)
> {noformat}
> Workaround is to call onPartitionsRevoked() explicitly and manually just 
> before calling consumer.close(), but it seems dirty and error-prone to me. It 
> can easily be forgotten by someone without such experience.
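The manual workaround described in the report can be sketched as below. The listener interface is stubbed so the snippet is self-contained; with the real client it would be org.apache.kafka.clients.consumer.ConsumerRebalanceListener and a KafkaConsumer:

```java
import java.util.Collection;
import java.util.List;

// Sketch of the workaround: invoke the revocation callback explicitly before
// close(), since close() does not trigger it. Types here are stand-ins for the
// real consumer API.
public class CloseWorkaround {
    interface RebalanceListener {
        void onPartitionsRevoked(Collection<String> partitions);
    }

    static class CommittingListener implements RebalanceListener {
        int commits = 0;
        @Override
        public void onPartitionsRevoked(Collection<String> partitions) {
            // stand-in for committing offsets to Kafka or a custom offset store
            commits++;
        }
    }

    static void shutdown(RebalanceListener listener, List<String> assigned) {
        // call the callback manually, because close() will not
        listener.onPartitionsRevoked(assigned);
        // consumer.close() would follow here with a real KafkaConsumer
    }
}
```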





[GitHub] kafka pull request: KAFKA-2689: Expose select gauges and metrics p...

2015-10-26 Thread enothereska
Github user enothereska closed the pull request at:

https://github.com/apache/kafka/pull/363


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-2689) Expose select gauges and metrics programmatically (not just through JMX)

2015-10-26 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eno Thereska resolved KAFKA-2689.
-
Resolution: Done

Will resolve incrementally through MINOR PRs. 

> Expose select gauges and metrics programmatically (not just through JMX)
> 
>
> Key: KAFKA-2689
> URL: https://issues.apache.org/jira/browse/KAFKA-2689
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Eno Thereska
>Assignee: Eno Thereska
>  Labels: newbie
> Fix For: 0.9.0.0
>
>
> There are several gauges in core that are registered but cannot be accessed 
> programmatically. For example, gauges "LeaderCount", "PartitionCount", 
> "UnderReplicatedPartitions" are all registered in ReplicaManager.scala but 
> there is no way to access them programmatically if one has access to the 
> kafka.server object. Other metrics, such as isrExpandRate (also in 
> ReplicaManager.scala) can be accessed. The solution here is trivial: add a 
> val in front of newGauge, as shown below
> val partitionCount = newGauge(
> "PartitionCount",
> new Gauge[Int] {
>   def value = allPartitions.size
> }
> )





[jira] [Commented] (KAFKA-2644) Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL

2015-10-26 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974964#comment-14974964
 ] 

Ewen Cheslack-Postava commented on KAFKA-2644:
--

[~rsivaram] Yes, that makes sense. I started the job again with the current PR 
so we can verify it works ok, but we can make the tweaks you mentioned when the 
other PR is merged: 
http://jenkins.confluent.io/job/kafka_system_tests_branch_builder/124/ 

> Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL
> 
>
> Key: KAFKA-2644
> URL: https://issues.apache.org/jira/browse/KAFKA-2644
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> We need to define which of the existing ducktape tests are relevant. cc 
> [~rsivaram]





[jira] [Commented] (KAFKA-2652) Incorporate the new consumer protocol with partition-group interface

2015-10-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974977#comment-14974977
 ] 

ASF GitHub Bot commented on KAFKA-2652:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/353


> Incorporate the new consumer protocol with partition-group interface
> 
>
> Key: KAFKA-2652
> URL: https://issues.apache.org/jira/browse/KAFKA-2652
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Yasuhiro Matsuda
> Fix For: 0.9.0.0
>
>
> After KAFKA-2464 is checked in, we need to incorporate the new protocol along 
> with a partition-group interface.
> The first step may be a couple of pre-defined partitioning schemes that can be 
> chosen by the user from some configs.





[jira] [Commented] (KAFKA-2648) Coordinator should not allow empty groupIds

2015-10-26 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975075#comment-14975075
 ] 

Guozhang Wang commented on KAFKA-2648:
--

The new consumer consolidates the old high-level and simple consumers, and with the 
old simple consumer one does not need to specify a group id since it does not 
require any coordination at all. That said, I feel we do not need to be 
concerned about compatibility, since the new consumer does not need to be 
compatible with the old consumer's APIs.

The concern I have, though, is with enforcing the group id on the consumer side 
as the submitted PR does: consumer instances may be used by tooling / admin / 
high-level services like streaming etc., which do not necessarily need to 
specify group ids since they do not need coordination. An alternative is to do 
exactly what this ticket's title suggests: let the coordinator check whether the group id is 
empty, but not enforce non-empty group ids on the consumer side, 
and document "if you EVER want to subscribe during the lifetime of your 
consumer, you need to specify the group id, or otherwise subscribe() can throw a 
runtime exception".
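That coordinator-side alternative could be sketched roughly as follows. Only the INVALID_GROUP_ID name comes from the ticket; the numeric code and method name are illustrative:

```java
// Hedged sketch of coordinator-side group id validation: return an error code
// for an empty group id instead of forcing every consumer to configure one.
public class GroupIdValidation {
    public static final short NONE = 0;
    public static final short INVALID_GROUP_ID = 24; // illustrative value, not the real code

    // Returns the error code the coordinator would put in its join-group response.
    public static short validateJoinGroup(String groupId) {
        if (groupId == null || groupId.isEmpty())
            return INVALID_GROUP_ID;
        return NONE;
    }
}
```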

> Coordinator should not allow empty groupIds
> ---
>
> Key: KAFKA-2648
> URL: https://issues.apache.org/jira/browse/KAFKA-2648
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> The coordinator currently allows consumer groups with empty groupIds, but 
> there probably aren't any cases where this is actually a good idea and it 
> tends to mask problems where different groups have simply not configured a 
> groupId. To address this, we can add a new error code, say INVALID_GROUP_ID, 
> which the coordinator can return when it encounters an empty groupId. We 
> should also make groupId a required property in consumer configuration and 
> enforce that it is non-empty. 
> It's a little unclear whether this change would have compatibility concerns. 
> The old consumer will fail with an empty groupId (because it cannot create 
> the zookeeper paths), but other clients may allow it.





[jira] [Updated] (KAFKA-2694) Make a task id be a composite id of a task group id and a partition id

2015-10-26 Thread Yasuhiro Matsuda (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yasuhiro Matsuda updated KAFKA-2694:

Issue Type: Sub-task  (was: Task)
Parent: KAFKA-2590

> Make a task id be a composite id of a task group id and a partition id
> --
>
> Key: KAFKA-2694
> URL: https://issues.apache.org/jira/browse/KAFKA-2694
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Reporter: Yasuhiro Matsuda
> Fix For: 0.9.0.0
>
>






[GitHub] kafka pull request: MINOR: Add new build target for system test li...

2015-10-26 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/361

MINOR: Add new build target for system test libs

KAFKA-2644 adds MiniKdc for system tests and hence needs a target to 
collect all MiniKdc jars. At the moment, system tests run `gradlew jar`. 
Replacing that with `gradlew systemTestLibs` will enable kafka jars and test 
dependency jars to be built and copied into appropriate locations. Submitting 
this as a separate PR so that the new target can be added to the build scripts 
that run system tests before KAFKA-2644 is committed. A separate target for 
system test artifacts will allow dependency changes to be made in future 
without breaking test runs.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka kafka-systemTestLibs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/361.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #361


commit c4925d71bcc248c566a04a6348a216870f56243a
Author: Rajini Sivaram 
Date:   2015-10-26T19:07:21Z

MINOR: Add new build target for system test libs






[GitHub] kafka pull request: KAFKA-2652: integrate new group protocol into ...

2015-10-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/353




[jira] [Resolved] (KAFKA-2652) Incorporate the new consumer protocol with partition-group interface

2015-10-26 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2652.
--
   Resolution: Fixed
Fix Version/s: (was: 0.9.0.1)
   0.9.0.0

Issue resolved by pull request 353
[https://github.com/apache/kafka/pull/353]

> Incorporate the new consumer protocol with partition-group interface
> 
>
> Key: KAFKA-2652
> URL: https://issues.apache.org/jira/browse/KAFKA-2652
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Yasuhiro Matsuda
> Fix For: 0.9.0.0
>
>
> After KAFKA-2464 is checked in, we need to incorporate the new protocol along 
> with a partition-group interface.
> The first step may be a couple of pre-defined partitioning schemes that can be 
> chosen by the user from some configs.





[jira] [Commented] (KAFKA-2689) Expose select gauges and metrics programmatically (not just through JMX)

2015-10-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974903#comment-14974903
 ] 

ASF GitHub Bot commented on KAFKA-2689:
---

Github user enothereska closed the pull request at:

https://github.com/apache/kafka/pull/363


> Expose select gauges and metrics programmatically (not just through JMX)
> 
>
> Key: KAFKA-2689
> URL: https://issues.apache.org/jira/browse/KAFKA-2689
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Eno Thereska
>Assignee: Eno Thereska
>  Labels: newbie
> Fix For: 0.9.0.0
>
>
> There are several gauges in core that are registered but cannot be accessed 
> programmatically. For example, gauges "LeaderCount", "PartitionCount", 
> "UnderReplicatedPartitions" are all registered in ReplicaManager.scala but 
> there is no way to access them programmatically if one has access to the 
> kafka.server object. Other metrics, such as isrExpandRate (also in 
> ReplicaManager.scala) can be accessed. The solution here is trivial: add a 
> val in front of newGauge, as shown below
> val partitionCount = newGauge(
> "PartitionCount",
> new Gauge[Int] {
>   def value = allPartitions.size
> }
> )





[GitHub] kafka pull request: MINOR: Expose ReplicaManager gauges

2015-10-26 Thread enothereska
GitHub user enothereska opened a pull request:

https://github.com/apache/kafka/pull/364

MINOR: Expose ReplicaManager gauges

There are several gauges in core that are registered but cannot be accessed 
programmatically. For example, gauges "LeaderCount", "PartitionCount", 
"UnderReplicatedPartitions" are all registered in ReplicaManager.scala but 
there is no way to access them programmatically if one has access to the 
kafka.server object. Other metrics, such as isrExpandRate (also in 
ReplicaManager.scala) can be accessed. The solution here is trivial: add a val 
in front of newGauge, as shown below
val partitionCount = newGauge(
 "PartitionCount",
 new Gauge[Int] {
   def value = allPartitions.size
 }
)
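The underlying idea, keeping a programmatic handle to each registered gauge instead of reaching it only via JMX, can be pictured in plain Java. The registry type below is illustrative; Kafka core registers these gauges through Yammer metrics in Scala:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Minimal sketch of programmatically readable gauges: registration returns the
// gauge, so callers can keep a field reference to it, mirroring
// `val partitionCount = newGauge(...)` in the PR.
public class GaugeRegistry {
    private final Map<String, Supplier<Integer>> gauges = new HashMap<>();

    // Register a gauge and return it for direct access.
    public Supplier<Integer> newGauge(String name, Supplier<Integer> value) {
        gauges.put(name, value);
        return value;
    }

    // Read a registered gauge by name; null if it was never registered.
    public Integer read(String name) {
        Supplier<Integer> gauge = gauges.get(name);
        return gauge == null ? null : gauge.get();
    }
}
```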

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/enothereska/kafka gauges

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/364.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #364


commit bcd2cab37874e5ae1dad20a40d416ec06c5781dc
Author: Eno Thereska 
Date:   2015-10-26T20:22:32Z

Expose ReplicaManager gauges






[jira] [Commented] (KAFKA-2644) Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL

2015-10-26 Thread Rajini Sivaram (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974865#comment-14974865
 ] 

Rajini Sivaram commented on KAFKA-2644:
---

[~ewencp] [~ijuma] Thank you, I have added a new PR to update the system test 
target (sorry, I didn't expect it to appear here causing confusion). Once that 
is committed, can the build script used by the Jenkins build be updated to 
build _systemTestLibs_ instead of _jar_? Once that is done, I can change the 
original KAFKA-2644 PR to remove _copyDependantTestLibs_ from the jar target.

> Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL
> 
>
> Key: KAFKA-2644
> URL: https://issues.apache.org/jira/browse/KAFKA-2644
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> We need to define which of the existing ducktape tests are relevant. cc 
> [~rsivaram]





[jira] [Commented] (KAFKA-2648) Coordinator should not allow empty groupIds

2015-10-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974878#comment-14974878
 ] 

ASF GitHub Bot commented on KAFKA-2648:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/362

KAFKA-2648: group.id is required for new consumer and cannot be empty



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2648

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/362.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #362


commit 5a419664f36fcbb955e250ebfe8c531e50fff981
Author: Jason Gustafson 
Date:   2015-10-26T19:40:02Z

KAFKA-2648: group.id is required for new consumer and cannot be empty




> Coordinator should not allow empty groupIds
> ---
>
> Key: KAFKA-2648
> URL: https://issues.apache.org/jira/browse/KAFKA-2648
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> The coordinator currently allows consumer groups with empty groupIds, but 
> there probably aren't any cases where this is actually a good idea and it 
> tends to mask problems where different groups have simply not configured a 
> groupId. To address this, we can add a new error code, say INVALID_GROUP_ID, 
> which the coordinator can return when it encounters an empty groupId. We 
> should also make groupId a required property in consumer configuration and 
> enforce that it is non-empty. 
> It's a little unclear whether this change would have compatibility concerns. 
> The old consumer will fail with an empty groupId (because it cannot create 
> the zookeeper paths), but other clients may allow it.





[GitHub] kafka pull request: KAFKA-2648: group.id is required for new consu...

2015-10-26 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/362

KAFKA-2648: group.id is required for new consumer and cannot be empty



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2648

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/362.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #362


commit 5a419664f36fcbb955e250ebfe8c531e50fff981
Author: Jason Gustafson 
Date:   2015-10-26T19:40:02Z

KAFKA-2648: group.id is required for new consumer and cannot be empty






[jira] [Commented] (KAFKA-2644) Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL

2015-10-26 Thread Rajini Sivaram (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975393#comment-14975393
 ] 

Rajini Sivaram commented on KAFKA-2644:
---

[~ewencp] Thank you once again for running the tests. MiniKdc started up this 
time, but Kafka was still failing to start with SASL. I have pushed a fix, but 
since I don't have an Oracle JVM, I am not sure whether it fixes the issue. Tests 
don't hit this issue with OpenJDK.

[~ijuma] [~harsha_ch] Kafka was failing to start with the exception below. I 
think it may be because doNotPrompt option was not specified in jaas.conf. I 
took the Jaas options for Sun Krb5LoginModule from the tests in kafka/core. 
Were these tests run with the JDK used for system tests? I have pushed a change 
to include doNotPrompt=true, but just wanted to check. If this option is 
required for the Oracle JVM, we may want to include it in the jaas.conf used 
for the tests in kafka/core as well.
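For reference, a Krb5LoginModule server entry with doNotPrompt set might look like the sketch below; the keytab path and principal are placeholders, not values from this thread:

```
// Illustrative jaas.conf entry; keytab path and principal are placeholders.
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    doNotPrompt=true
    keyTab="/etc/security/keytabs/kafka.keytab"
    principal="kafka/broker1.example.com@EXAMPLE.COM";
};
```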

{quote}
org.apache.kafka.common.KafkaException: 
javax.security.auth.login.LoginException: No password provided
at 
org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:76)
at 
org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:56)
at kafka.network.Processor.<init>(SocketServer.scala:379)
at 
kafka.network.SocketServer$$anonfun$startup$1$$anonfun$apply$1.apply$mcVI$sp(SocketServer.scala:96)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at 
kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:95)
at 
kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:91)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at 
scala.collection.MapLike$DefaultValuesIterable.foreach(MapLike.scala:206)
at kafka.network.SocketServer.startup(SocketServer.scala:91)
at kafka.server.KafkaServer.startup(KafkaServer.scala:178)
at 
kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37)
at kafka.Kafka$.main(Kafka.scala:67)
at kafka.Kafka.main(Kafka.scala)
Caused by: javax.security.auth.login.LoginException: No password provided
at 
com.sun.security.auth.module.Krb5LoginModule.promptForPass(Krb5LoginModule.java:878)
at 
com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:719)
at 
com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:584)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:762)
at 
javax.security.auth.login.LoginContext.access$000(LoginContext.java:203)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:690)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:688)
at java.security.AccessController.doPrivileged(Native Method)
at 
javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:687)
at javax.security.auth.login.LoginContext.login(LoginContext.java:595)
at org.apache.kafka.common.security.kerberos.Login.login(Login.java:308)
at 
org.apache.kafka.common.security.kerberos.Login.<init>(Login.java:104)
at 
org.apache.kafka.common.security.kerberos.LoginManager.<init>(LoginManager.java:43)
at 
org.apache.kafka.common.security.kerberos.LoginManager.acquireLoginManager(LoginManager.java:84)
at 
org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:59)
{quote}
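For reference, a jaas.conf entry along the lines being discussed might look like the sketch below (the principal and keytab path are placeholders, not taken from the actual test setup). With doNotPrompt=true, the Krb5LoginModule fails instead of prompting for a password when credentials cannot be obtained from the keytab or ticket cache, which is the "No password provided" failure mode shown above.

```
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/path/to/kafka.keytab"
    principal="kafka/host.example.com@EXAMPLE.COM"
    doNotPrompt=true;
};
```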

> Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL
> 
>
> Key: KAFKA-2644
> URL: https://issues.apache.org/jira/browse/KAFKA-2644
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> We need to define which of the existing ducktape tests are relevant. cc 
> [~rsivaram]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2464) Client-side assignment and group generalization

2015-10-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975483#comment-14975483
 ] 

ASF GitHub Bot commented on KAFKA-2464:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/354


> Client-side assignment and group generalization
> ---
>
> Key: KAFKA-2464
> URL: https://issues.apache.org/jira/browse/KAFKA-2464
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Add support for client-side assignment and generalization of join group 
> protocol as documented here: 
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Client-side+Assignment+Proposal.





[GitHub] kafka pull request: MINOR: follow-up to KAFKA-2464 for renaming/cl...

2015-10-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/354


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk8 #57

2015-10-26 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2652: integrate new group protocol into partition grouping

--
[...truncated 6900 lines...]

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaAndMapToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > stringToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timestampToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > 
testCopycatSchemaMetadataTranslation PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timestampToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > decimalToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToCopycatStringKeys PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToJsonNonStringKeys PASSED

org.apache.kafka.copycat.json.JsonConverterTest > longToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mismatchSchemaJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > 
testCacheSchemaToCopycatConversion PASSED

org.apache.kafka.copycat.json.JsonConverterTest > 
testJsonSchemaMetadataTranslation PASSED

org.apache.kafka.copycat.json.JsonConverterTest > bytesToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > shortToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > intToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > structToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > stringToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaAndArrayToJson 
PASSED

org.apache.kafka.copycat.json.JsonConverterTest > byteToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaPrimitiveToCopycat 
PASSED

org.apache.kafka.copycat.json.JsonConverterTest > byteToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > intToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > dateToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > noSchemaToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > noSchemaToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaAndPrimitiveToJson 
PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToJsonStringKeys PASSED

org.apache.kafka.copycat.json.JsonConverterTest > arrayToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timeToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > structToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > shortToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > dateToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > doubleToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timeToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > floatToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > decimalToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > arrayToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > booleanToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToCopycatNonStringKeys 
PASSED

org.apache.kafka.copycat.json.JsonConverterTest > bytesToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > doubleToCopycat PASSED
:copycat:runtime:checkstyleMain
:copycat:runtime:compileTestJavawarning: [options] bootstrap class path not set 
in conjunction with -source 1.7
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:copycat:runtime:processTestResources
:copycat:runtime:testClasses
:copycat:runtime:checkstyleTest
:copycat:runtime:test

org.apache.kafka.copycat.runtime.WorkerSinkTaskTest > testPollsInBackground 
PASSED

org.apache.kafka.copycat.runtime.WorkerSinkTaskTest > testDeliverConvertsData 
PASSED

org.apache.kafka.copycat.runtime.WorkerSinkTaskTest > testCommit PASSED

org.apache.kafka.copycat.runtime.WorkerSinkTaskTest > 
testCommitTaskFlushFailure PASSED

org.apache.kafka.copycat.runtime.WorkerSinkTaskTest > testCommitConsumerFailure 
PASSED

org.apache.kafka.copycat.runtime.WorkerSinkTaskTest > testCommitTimeout PASSED

org.apache.kafka.copycat.runtime.WorkerSinkTaskTest > testAssignmentPauseResume 
PASSED

org.apache.kafka.copycat.runtime.WorkerSourceTaskTest > testPollsInBackground 
PASSED

org.apache.kafka.copycat.runtime.WorkerSourceTaskTest > testCommit PASSED

org.apache.kafka.copycat.runtime.WorkerSourceTaskTest > testCommitFailure PASSED

org.apache.kafka.copycat.runtime.WorkerSourceTaskTest > 
testSendRecordsConvertsData PASSED

org.apache.kafka.copycat.runtime.WorkerTest > testAddRemoveTask PASSED

org.apache.kafka.copycat.runtime.WorkerTest > testStopInvalidTask PASSED

org.apache.kafka.copycat.runtime.WorkerTest > testCleanupTasksOnStop PASSED

org.apache.kafka.copycat.runtime.WorkerTest > 

Build failed in Jenkins: kafka-trunk-jdk7 #721

2015-10-26 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: follow-up to KAFKA-2464 for renaming/cleanup

--
[...truncated 3094 lines...]
kafka.server.OffsetCommitTest > testOffsetExpiration PASSED

kafka.server.OffsetCommitTest > testNonExistingTopicOffsetCommit PASSED

kafka.server.PlaintextReplicaFetchTest > testReplicaFetcherThread PASSED

kafka.server.SaslPlaintextReplicaFetchTest > testReplicaFetcherThread PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeEarliestTime PASSED

kafka.server.LogOffsetTest > testGetOffsetsForUnknownTopic PASSED

kafka.server.LogOffsetTest > testEmptyLogsGetOffsets PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeLatestTime PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeNow PASSED

kafka.server.AdvertiseBrokerTest > testBrokerAdvertiseToZK PASSED

kafka.server.ServerStartupTest > testBrokerCreatesZKChroot PASSED

kafka.server.ServerStartupTest > testConflictBrokerRegistration PASSED

kafka.server.DelayedOperationTest > testRequestPurge PASSED

kafka.server.DelayedOperationTest > testRequestExpiry PASSED

kafka.server.DelayedOperationTest > testRequestSatisfaction PASSED

kafka.server.LeaderElectionTest > testLeaderElectionWithStaleControllerEpoch 
PASSED

kafka.server.LeaderElectionTest > testLeaderElectionAndEpoch PASSED

kafka.server.DynamicConfigChangeTest > testProcessNotification PASSED

kafka.server.DynamicConfigChangeTest > testClientQuotaConfigChange PASSED

kafka.server.DynamicConfigChangeTest > testConfigChangeOnNonExistingTopic PASSED

kafka.server.DynamicConfigChangeTest > testConfigChange PASSED

kafka.server.HighwatermarkPersistenceTest > 
testHighWatermarkPersistenceMultiplePartitions PASSED

kafka.server.HighwatermarkPersistenceTest > 
testHighWatermarkPersistenceSinglePartition PASSED

kafka.server.LogRecoveryTest > testHWCheckpointNoFailuresMultipleLogSegments 
PASSED

kafka.server.LogRecoveryTest > testHWCheckpointWithFailuresMultipleLogSegments 
PASSED

kafka.server.LogRecoveryTest > testHWCheckpointNoFailuresSingleLogSegment PASSED

kafka.server.LogRecoveryTest > testHWCheckpointWithFailuresSingleLogSegment 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testTopicMetadataRequest 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown FAILED
java.net.BindException: Address already in use

java.lang.NullPointerException
at 
kafka.integration.BaseTopicMetadataTest.tearDown(BaseTopicMetadataTest.scala:63)
at 
kafka.integration.SaslPlaintextTopicMetadataTest.kafka$api$SaslTestHarness$$super$tearDown(SaslPlaintextTopicMetadataTest.scala:23)
at kafka.api.SaslTestHarness$class.tearDown(SaslTestHarness.scala:58)
at 
kafka.integration.SaslPlaintextTopicMetadataTest.tearDown(SaslPlaintextTopicMetadataTest.scala:23)

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopicWithCollision 
PASSED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokerListWithNoTopics 
PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.PlaintextTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.PlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > testTopicMetadataRequest PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED


[jira] [Created] (KAFKA-2695) Handle null string/bytes protocol primitives

2015-10-26 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-2695:
--

 Summary: Handle null string/bytes protocol primitives
 Key: KAFKA-2695
 URL: https://issues.apache.org/jira/browse/KAFKA-2695
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
Assignee: Jason Gustafson


The Kafka protocol supports null bytes and string primitives by passing -1 as 
the size, but the current deserializers implemented in 
o.a.k.common.protocol.types.Type do not handle them.
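The fix amounts to treating a negative length as null on the wire. A minimal sketch of the decoding side (simplified and illustrative; the real implementation lives in o.a.k.common.protocol.types.Type):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class NullableStringDecode {
    // Reads a Kafka protocol string: int16 length followed by UTF-8 bytes.
    // A length of -1 encodes a null string.
    static String readNullableString(ByteBuffer buffer) {
        short length = buffer.getShort();
        if (length < 0) {
            return null;               // -1 means null
        }
        byte[] bytes = new byte[length];
        buffer.get(bytes);
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        ByteBuffer nullCase = ByteBuffer.allocate(2).putShort((short) -1);
        nullCase.flip();
        System.out.println(readNullableString(nullCase));   // null

        byte[] payload = "ok".getBytes(StandardCharsets.UTF_8);
        ByteBuffer strCase = ByteBuffer.allocate(2 + payload.length);
        strCase.putShort((short) payload.length).put(payload);
        strCase.flip();
        System.out.println(readNullableString(strCase));    // ok
    }
}
```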





Build failed in Jenkins: kafka-trunk-jdk8 #59

2015-10-26 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2694: Reformat task id as group id and partition id

--
[...truncated 361 lines...]
:kafka-trunk-jdk8:log4j-appender:compileJava UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:processResources UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:classes UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:jar UP-TO-DATE
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScala UP-TO-DATE
:kafka-trunk-jdk8:core:processResources UP-TO-DATE
:kafka-trunk-jdk8:core:classes UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:javadoc
:kafka-trunk-jdk8:core:javadoc
:kafka-trunk-jdk8:core:javadocJar
:kafka-trunk-jdk8:core:scaladoc
[ant:scaladoc] Element 
' 
does not exist.
[ant:scaladoc] 
:293:
 warning: a pure expression does nothing in statement position; you may be 
omitting necessary parentheses
[ant:scaladoc] ControllerStats.uncleanLeaderElectionRate
[ant:scaladoc] ^
[ant:scaladoc] 
:294:
 warning: a pure expression does nothing in statement position; you may be 
omitting necessary parentheses
[ant:scaladoc] ControllerStats.leaderElectionTimer
[ant:scaladoc] ^
[ant:scaladoc] warning: there were 15 feature warning(s); re-run with -feature 
for details
[ant:scaladoc] 
:72:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#offer".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:32:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#offer".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:137:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#poll".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:120:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#poll".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:97:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#put".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:152:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#take".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 9 warnings found
:kafka-trunk-jdk8:core:scaladocJar
:kafka-trunk-jdk8:core:docsJar
:docsJar_2_11_7
Building project 'core' with Scala version 2.11.7
:kafka-trunk-jdk8:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes UP-TO-DATE
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar UP-TO-DATE
:kafka-trunk-jdk8:clients:javadoc UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:compileJava UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:processResources UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:classes UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:jar UP-TO-DATE
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScalaJava HotSpot(TM) 64-Bit Server VM warning: 
ignoring option MaxPermSize=512m; support was removed in 8.0

:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^

Jenkins build is back to normal : kafka-trunk-jdk7 #722

2015-10-26 Thread Apache Jenkins Server
See 



[jira] [Commented] (KAFKA-2683) Ensure wakeup exceptions are propagated to user in new consumer

2015-10-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975768#comment-14975768
 ] 

ASF GitHub Bot commented on KAFKA-2683:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/366

KAFKA-2683: ensure wakeup exceptions raised to user



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2683

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/366.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #366


commit 383d0cea274d6a7819b69cdba7b7768002822ae1
Author: Jason Gustafson 
Date:   2015-10-27T05:42:10Z

KAFKA-2683: ensure wakeup exceptions raised to user




> Ensure wakeup exceptions are propagated to user in new consumer
> ---
>
> Key: KAFKA-2683
> URL: https://issues.apache.org/jira/browse/KAFKA-2683
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> KafkaConsumer.wakeup() can be used to interrupt blocking operations (e.g. in 
> order to shutdown), so wakeup exceptions must get propagated to the user. 
> Currently, there are several locations in the code where a wakeup exception 
> could be caught and silently discarded. For example, when the rebalance 
> callback is invoked, we just catch and log all exceptions. In this case, we 
> also need to be careful that wakeup exceptions do not affect rebalance 
> callback semantics. In particular, it is possible currently for a wakeup to 
> cause onPartitionsRevoked to be invoked multiple times.
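The hazard described above can be sketched in a self-contained way. This is not the real consumer code: the stand-in WakeupException here models org.apache.kafka.common.errors.WakeupException, and the two callback invokers contrast the swallow-everything pattern with one that lets the wakeup escape to the user.

```java
public class WakeupPropagation {
    // Stand-in for org.apache.kafka.common.errors.WakeupException.
    static class WakeupException extends RuntimeException {}

    // Buggy pattern: the callback runner catches every exception,
    // including the wakeup, so the caller never sees it.
    static boolean invokeSwallowing(Runnable callback) {
        try {
            callback.run();
            return true;
        } catch (Exception e) {
            return false;              // "log and continue" -- the wakeup is lost
        }
    }

    // Fixed pattern: other callback failures are still contained,
    // but the wakeup propagates to the caller.
    static boolean invokePropagating(Runnable callback) {
        try {
            callback.run();
            return true;
        } catch (WakeupException e) {
            throw e;                   // must reach the user
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        Runnable wakingCallback = () -> { throw new WakeupException(); };
        System.out.println(invokeSwallowing(wakingCallback));   // false: wakeup silently discarded
        try {
            invokePropagating(wakingCallback);
        } catch (WakeupException e) {
            System.out.println("wakeup propagated");
        }
    }
}
```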





[GitHub] kafka pull request: KAFKA-2683: ensure wakeup exceptions raised to...

2015-10-26 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/366

KAFKA-2683: ensure wakeup exceptions raised to user



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2683

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/366.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #366


commit 383d0cea274d6a7819b69cdba7b7768002822ae1
Author: Jason Gustafson 
Date:   2015-10-27T05:42:10Z

KAFKA-2683: ensure wakeup exceptions raised to user






[jira] [Resolved] (KAFKA-2694) Make a task id be a composite id of a topic group id and a partition id

2015-10-26 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2694.
--
Resolution: Fixed

Issue resolved by pull request 365
[https://github.com/apache/kafka/pull/365]

> Make a task id be a composite id of a topic group id and a partition id
> ---
>
> Key: KAFKA-2694
> URL: https://issues.apache.org/jira/browse/KAFKA-2694
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
> Fix For: 0.9.0.0
>
>






[jira] [Commented] (KAFKA-2694) Make a task id be a composite id of a topic group id and a partition id

2015-10-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975594#comment-14975594
 ] 

ASF GitHub Bot commented on KAFKA-2694:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/365


> Make a task id be a composite id of a topic group id and a partition id
> ---
>
> Key: KAFKA-2694
> URL: https://issues.apache.org/jira/browse/KAFKA-2694
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
> Fix For: 0.9.0.0
>
>






[GitHub] kafka pull request: KAFKA-2694: Task Id

2015-10-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/365




Build failed in Jenkins: kafka-trunk-jdk8 #56

2015-10-26 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2449: Update mirror maker docs

--
[...truncated 6890 lines...]

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaAndMapToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > stringToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timestampToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > 
testCopycatSchemaMetadataTranslation PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timestampToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > decimalToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToCopycatStringKeys PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToJsonNonStringKeys PASSED

org.apache.kafka.copycat.json.JsonConverterTest > longToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mismatchSchemaJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > 
testCacheSchemaToCopycatConversion PASSED

org.apache.kafka.copycat.json.JsonConverterTest > 
testJsonSchemaMetadataTranslation PASSED

org.apache.kafka.copycat.json.JsonConverterTest > bytesToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > shortToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > intToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > structToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > stringToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaAndArrayToJson 
PASSED

org.apache.kafka.copycat.json.JsonConverterTest > byteToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaPrimitiveToCopycat 
PASSED

org.apache.kafka.copycat.json.JsonConverterTest > byteToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > intToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > dateToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > noSchemaToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > noSchemaToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaAndPrimitiveToJson 
PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToJsonStringKeys PASSED

org.apache.kafka.copycat.json.JsonConverterTest > arrayToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timeToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > structToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > shortToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > dateToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > doubleToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timeToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > floatToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > decimalToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > arrayToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > booleanToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToCopycatNonStringKeys 
PASSED

org.apache.kafka.copycat.json.JsonConverterTest > bytesToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > doubleToCopycat PASSED
:copycat:runtime:checkstyleMain
:copycat:runtime:compileTestJavawarning: [options] bootstrap class path not set 
in conjunction with -source 1.7
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:copycat:runtime:processTestResources
:copycat:runtime:testClasses
:copycat:runtime:checkstyleTest
:copycat:runtime:test

org.apache.kafka.copycat.runtime.WorkerSinkTaskTest > testPollsInBackground 
PASSED

org.apache.kafka.copycat.runtime.WorkerSinkTaskTest > testDeliverConvertsData 
PASSED

org.apache.kafka.copycat.runtime.WorkerSinkTaskTest > testCommit PASSED

org.apache.kafka.copycat.runtime.WorkerSinkTaskTest > 
testCommitTaskFlushFailure PASSED

org.apache.kafka.copycat.runtime.WorkerSinkTaskTest > testCommitConsumerFailure 
PASSED

org.apache.kafka.copycat.runtime.WorkerSinkTaskTest > testCommitTimeout PASSED

org.apache.kafka.copycat.runtime.WorkerSinkTaskTest > testAssignmentPauseResume 
PASSED

org.apache.kafka.copycat.runtime.WorkerSourceTaskTest > testPollsInBackground 
PASSED

org.apache.kafka.copycat.runtime.WorkerSourceTaskTest > testCommit PASSED

org.apache.kafka.copycat.runtime.WorkerSourceTaskTest > testCommitFailure PASSED

org.apache.kafka.copycat.runtime.WorkerSourceTaskTest > 
testSendRecordsConvertsData PASSED

org.apache.kafka.copycat.runtime.WorkerTest > testStopInvalidConnector PASSED

org.apache.kafka.copycat.runtime.WorkerTest > testReconfigureConnectorTasks 
PASSED

org.apache.kafka.copycat.runtime.WorkerTest > testAddRemoveTask PASSED

org.apache.kafka.copycat.runtime.WorkerTest > 

[jira] [Commented] (KAFKA-2644) Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL

2015-10-26 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974305#comment-14974305
 ] 

Jun Rao commented on KAFKA-2644:


[~rsivaram], we have a jenkins job for system tests 
(http://jenkins.confluent.io/job/kafka_system_tests_branch_builder/). Could you 
try your branch there?

> Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL
> 
>
> Key: KAFKA-2644
> URL: https://issues.apache.org/jira/browse/KAFKA-2644
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> We need to define which of the existing ducktape tests are relevant. cc 
> [~rsivaram]





[jira] [Commented] (KAFKA-2688) Avoid forcing reload of `Configuration`

2015-10-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974166#comment-14974166
 ] 

ASF GitHub Bot commented on KAFKA-2688:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/359

KAFKA-2688; Avoid forcing reload of `Configuration`



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-2688-avoid-forcing-reload-of-configuration

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/359.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #359


commit ab829485798b818840c4f37eb888a22c4eed59e9
Author: Ismael Juma 
Date:   2015-10-26T12:59:38Z

Remove forced reload of `Configuration` from `Login` and set JAAS property 
before starting `MiniKdc`




> Avoid forcing reload of `Configuration`
> ---
>
> Key: KAFKA-2688
> URL: https://issues.apache.org/jira/browse/KAFKA-2688
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Reporter: Ismael Juma
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> We currently call `Configuration.setConfiguration(null)` from a couple of 
> places in our codebase (`Login` and `JaasUtils`) to force `Configuration` to 
> be reloaded. If this code is removed, some tests can fail depending on the 
> test execution order.
> Ideally we would not need to call `setConfiguration(null)` outside of tests. 
> Investigate if this is possible. If not, we should at least ensure that 
> reloads are done in a safe way within our codebase (perhaps using a lock).





[GitHub] kafka pull request: KAFKA-2688; Avoid forcing reload of `Configura...

2015-10-26 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/359

KAFKA-2688; Avoid forcing reload of `Configuration`



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-2688-avoid-forcing-reload-of-configuration

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/359.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #359


commit ab829485798b818840c4f37eb888a22c4eed59e9
Author: Ismael Juma 
Date:   2015-10-26T12:59:38Z

Remove forced reload of `Configuration` from `Login` and set JAAS property 
before starting `MiniKdc`






[jira] [Created] (KAFKA-2693) Run relevant ducktape tests with SASL/PLAIN

2015-10-26 Thread Rajini Sivaram (JIRA)
Rajini Sivaram created KAFKA-2693:
-

 Summary: Run relevant ducktape tests with SASL/PLAIN
 Key: KAFKA-2693
 URL: https://issues.apache.org/jira/browse/KAFKA-2693
 Project: Kafka
  Issue Type: Sub-task
Reporter: Rajini Sivaram
Assignee: Rajini Sivaram
Priority: Critical


KAFKA-2644 runs the sanity test, replication tests and benchmarks with SASL using 
the GSSAPI mechanism. For SASL/PLAIN, run the sanity test and replication tests.





[jira] [Commented] (KAFKA-2658) Implement SASL/PLAIN

2015-10-26 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974374#comment-14974374
 ] 

Jun Rao commented on KAFKA-2658:


[~rsivaram], thanks for the patch. Will take a look today.

> Implement SASL/PLAIN
> 
>
> Key: KAFKA-2658
> URL: https://issues.apache.org/jira/browse/KAFKA-2658
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> KAFKA-1686 supports SASL/Kerberos using GSSAPI. We should enable more SASL 
> mechanisms. SASL/PLAIN would enable a simpler use of SASL, which along with 
> SSL provides a secure Kafka that uses username/password for client 
> authentication.
> SASL/PLAIN protocol and its uses are described in 
> [https://tools.ietf.org/html/rfc4616]. It is supported in Java.
> This should be implemented after KAFKA-1686. This task should also hopefully 
> enable simpler unit testing of the SASL code.





[jira] [Commented] (KAFKA-1686) Implement SASL/Kerberos

2015-10-26 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974372#comment-14974372
 ] 

Sriharsha Chintalapani commented on KAFKA-1686:
---

[~junrao] Pretty much all the services using a KDC work like this. Although our 
socket connections are long-lived, in reality they don't stay connected 
forever. Removing someone from the KDC is possible, but that doesn't happen often. 
Even then, it would be good practice to remove the ACLs of that principal.

> Implement SASL/Kerberos
> ---
>
> Key: KAFKA-1686
> URL: https://issues.apache.org/jira/browse/KAFKA-1686
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 0.8.2.1
>Reporter: Jay Kreps
>Assignee: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Implement SASL/Kerberos authentication.
> To do this we will need to introduce a new SASLRequest and SASLResponse pair 
> to the client protocol. This request and response will each have only a 
> single byte[] field and will be used to handle the SASL challenge/response 
> cycle. Doing this will initialize the SaslServer instance and associate it 
> with the session in a manner similar to KAFKA-1684.
> When using integrity or encryption mechanisms with SASL we will need to wrap 
> and unwrap bytes as in KAFKA-1684 so the same interface that covers the 
> SSLEngine will need to also cover the SaslServer instance.





[jira] [Commented] (KAFKA-2644) Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL

2015-10-26 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974470#comment-14974470
 ] 

Ewen Cheslack-Postava commented on KAFKA-2644:
--

[~junrao] [~rsivaram] Auth for Confluent's Jenkins is currently limited to 
members of the confluentinc organization on GitHub. We haven't figured out yet 
how we want to manage giving others access. I think we're happy to give access, 
we just need to sort out the logistics.

In the meantime, I kicked off a job for this PR here: 
http://jenkins.confluent.io/job/kafka_system_tests_branch_builder/123/


> Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL
> 
>
> Key: KAFKA-2644
> URL: https://issues.apache.org/jira/browse/KAFKA-2644
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> We need to define which of the existing ducktape tests are relevant. cc 
> [~rsivaram]





[jira] [Assigned] (KAFKA-2674) ConsumerRebalanceListener.onPartitionsRevoked() is not called on consumer close

2015-10-26 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson reassigned KAFKA-2674:
--

Assignee: Jason Gustafson  (was: Neha Narkhede)

> ConsumerRebalanceListener.onPartitionsRevoked() is not called on consumer 
> close
> ---
>
> Key: KAFKA-2674
> URL: https://issues.apache.org/jira/browse/KAFKA-2674
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Michal Turek
>Assignee: Jason Gustafson
>
> Hi, I'm investigating and testing behavior of new consumer from the planned 
> release 0.9 and found an inconsistency in calling of rebalance callbacks.
> I noticed that ConsumerRebalanceListener.onPartitionsRevoked() is NOT called 
> during consumer close and application shutdown. Its JavaDoc contract says:
> - "This method will be called before a rebalance operation starts and after 
> the consumer stops fetching data."
> - "It is recommended that offsets should be committed in this callback to 
> either Kafka or a custom offset store to prevent duplicate data."
> I believe calling consumer.close() is the start of a rebalance operation, and even 
> the local consumer that is actually closing should be notified so that it can 
> process any rebalance logic, including offset commits (e.g. if auto-commit is 
> disabled).
> There are commented logs of current and expected behaviors.
> {noformat}
> // Application start
> 2015-10-20 15:14:02.208 INFO  o.a.k.common.utils.AppInfoParser
> [TestConsumer-worker-0]: Kafka version : 0.9.0.0-SNAPSHOT 
> (AppInfoParser.java:82)
> 2015-10-20 15:14:02.208 INFO  o.a.k.common.utils.AppInfoParser
> [TestConsumer-worker-0]: Kafka commitId : 241b9ab58dcbde0c 
> (AppInfoParser.java:83)
> // Consumer started (the first one in group), rebalance callbacks are called 
> including empty onPartitionsRevoked()
> 2015-10-20 15:14:02.333 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, revoked: [] 
> (TestConsumer.java:95)
> 2015-10-20 15:14:02.343 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, assigned: [testB-1, testA-0, 
> testB-0, testB-3, testA-2, testB-2, testA-1, testA-4, testB-4, testA-3] 
> (TestConsumer.java:100)
> // Another consumer joined the group, rebalancing
> 2015-10-20 15:14:17.345 INFO  o.a.k.c.c.internals.Coordinator 
> [TestConsumer-worker-0]: Attempt to heart beat failed since the group is 
> rebalancing, try to re-join group. (Coordinator.java:714)
> 2015-10-20 15:14:17.346 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, revoked: [testB-1, testA-0, 
> testB-0, testB-3, testA-2, testB-2, testA-1, testA-4, testB-4, testA-3] 
> (TestConsumer.java:95)
> 2015-10-20 15:14:17.349 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, assigned: [testB-3, testA-4, 
> testB-4, testA-3] (TestConsumer.java:100)
> // Consumer started closing, there SHOULD be onPartitionsRevoked() callback 
> to commit offsets like during standard rebalance, but it is missing
> 2015-10-20 15:14:39.280 INFO  c.a.e.kafka.newapi.TestConsumer [main]: 
> Closing instance (TestConsumer.java:42)
> 2015-10-20 15:14:40.264 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Worker thread stopped (TestConsumer.java:89)
> {noformat}
> A workaround is to call onPartitionsRevoked() explicitly and manually just 
> before calling consumer.close(), but it seems dirty and error-prone to me. It 
> can easily be forgotten by someone without such experience.
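The manual workaround described in the report can be sketched as follows. This is an illustrative Python simulation, not Kafka client code; `RecordingListener` and `close_with_revoke` are hypothetical stand-ins used only to show the call ordering:

```python
# Sketch of the workaround: invoke the revocation callback manually
# before close(), since the consumer does not do it itself on shutdown.
# These names are hypothetical stand-ins, not the Kafka client API.

class RecordingListener:
    def __init__(self):
        self.revoked = []

    def on_partitions_revoked(self, partitions):
        # In a real listener, offsets for these partitions would be
        # committed here to avoid reprocessing after shutdown.
        self.revoked.extend(partitions)

def close_with_revoke(listener, assignment, close_fn):
    """Apply the manual workaround: revoke first, then close."""
    listener.on_partitions_revoked(assignment)
    close_fn()

closed = []
listener = RecordingListener()
close_with_revoke(listener, ["testB-3", "testA-4"], lambda: closed.append(True))
```

As the report notes, pushing this ordering onto every application is easy to get wrong, which is the argument for doing it inside close() itself.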





[jira] [Updated] (KAFKA-2391) Blocking call such as position(), partitionsFor(), committed() and listTopics() should have a timeout

2015-10-26 Thread Onur Karaman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Onur Karaman updated KAFKA-2391:

Assignee: Jason Gustafson  (was: Onur Karaman)

> Blocking call such as position(), partitionsFor(), committed() and 
> listTopics() should have a timeout
> -
>
> Key: KAFKA-2391
> URL: https://issues.apache.org/jira/browse/KAFKA-2391
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jiangjie Qin
>Assignee: Jason Gustafson
>Priority: Blocker
>
> The blocking calls should have a timeout from either configuration or 
> parameter. So far we have position(), partitionsFor(), committed() and 
> listTopics().





[jira] [Commented] (KAFKA-2371) Add distributed coordinator implementation for Copycat

2015-10-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974647#comment-14974647
 ] 

ASF GitHub Bot commented on KAFKA-2371:
---

GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/360

MINOR: KAFKA-2371 follow-up, DistributedHerder should wakeup 
WorkerGroupMember after assignment to ensure work is started immediately



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka 
minor-kafka-2371-follow-up-wakeup-after-rebalance

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/360.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #360


commit 7cf8ea072910e6365470294e715d48baccc54d90
Author: Ewen Cheslack-Postava 
Date:   2015-10-26T17:39:21Z

MINOR: KAFKA-2371 follow-up, DistributedHerder should wakeup 
WorkerGroupMember after assignment to ensure work is started immediately




> Add distributed coordinator implementation for Copycat
> --
>
> Key: KAFKA-2371
> URL: https://issues.apache.org/jira/browse/KAFKA-2371
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> Copycat needs a Coordinator implementation that handles multiple Workers that 
> automatically manage the distribution of connectors and tasks across them. To 
> start, this implementation should handle any connectors that have been 
> registered via either a CLI or REST interface for starting/stopping 
> connectors.





[GitHub] kafka pull request: MINOR: KAFKA-2371 follow-up, DistributedHerder...

2015-10-26 Thread ewencp
GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/360

MINOR: KAFKA-2371 follow-up, DistributedHerder should wakeup 
WorkerGroupMember after assignment to ensure work is started immediately



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka 
minor-kafka-2371-follow-up-wakeup-after-rebalance

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/360.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #360


commit 7cf8ea072910e6365470294e715d48baccc54d90
Author: Ewen Cheslack-Postava 
Date:   2015-10-26T17:39:21Z

MINOR: KAFKA-2371 follow-up, DistributedHerder should wakeup 
WorkerGroupMember after assignment to ensure work is started immediately






[jira] [Commented] (KAFKA-2235) LogCleaner offset map overflow

2015-10-26 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974667#comment-14974667
 ] 

Jun Rao commented on KAFKA-2235:


One way to get around your problem now is to set 
log.cleaner.io.buffer.load.factor to a larger value. Currently, it doesn't seem 
that we check the range of the value. So you can set it to a value larger than 
1.0, which will allow you to build a bigger offset map with the same buffer 
size.
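As a rough sketch of why a load factor above 1.0 helps: the offset map's capacity is roughly (buffer size / entry size) * load factor, so raising the factor raises the number of entries accepted with the same buffer. This is illustrative Python, and the 24-byte entry size (a 16-byte MD5 key hash plus an 8-byte offset) is an assumption modeled on SkimpyOffsetMap, not a quoted constant:

```python
def offset_map_capacity(buffer_bytes, load_factor, entry_bytes=24):
    """Approximate entries accepted before the offset map reports full.

    entry_bytes assumes a 16-byte MD5 key hash plus an 8-byte offset,
    modeled on SkimpyOffsetMap; treat it as illustrative, not exact.
    """
    slots = buffer_bytes // entry_bytes
    return int(slots * load_factor)

buffer = 128 * 1024 * 1024  # example cleaner dedupe buffer size
default_cap = offset_map_capacity(buffer, 0.9)
boosted_cap = offset_map_capacity(buffer, 1.2)  # range currently unchecked
```

With the same buffer, the 1.2 load factor yields proportionally more usable entries than 0.9, which is the workaround's effect.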

> LogCleaner offset map overflow
> --
>
> Key: KAFKA-2235
> URL: https://issues.apache.org/jira/browse/KAFKA-2235
> Project: Kafka
>  Issue Type: Bug
>  Components: core, log
>Affects Versions: 0.8.1, 0.8.2.0
>Reporter: Ivan Simoneko
>Assignee: Ivan Simoneko
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-2235_v1.patch, KAFKA-2235_v2.patch
>
>
> We've seen log cleaning generate an error for a topic with lots of small 
> messages. It seems that an offset map overflow is possible if a log segment 
> contains more unique keys than there are empty slots in the offsetMap. Checking 
> baseOffset and map utilization before processing a segment seems not to be 
> enough, because it doesn't take the segment size (the number of unique 
> messages in the segment) into account.
> I suggest estimating the upper bound of keys in a segment as the number of 
> messages in the segment and comparing it with the number of available slots in 
> the map (keeping the desired load factor in mind). That should work in cases where 
> an empty map is capable of holding all the keys for a single segment. If even a 
> single segment is not able to fit into an empty map, the cleanup process will still 
> fail. Should there perhaps be a limit on the log segment entry count?
> Here is the stack trace for this error:
> 2015-05-19 16:52:48,758 ERROR [kafka-log-cleaner-thread-0] 
> kafka.log.LogCleaner - [kafka-log-cleaner-thread-0], Error due to
> java.lang.IllegalArgumentException: requirement failed: Attempt to add a new 
> entry to a full offset map.
>at scala.Predef$.require(Predef.scala:233)
>at kafka.log.SkimpyOffsetMap.put(OffsetMap.scala:79)
>at 
> kafka.log.Cleaner$$anonfun$kafka$log$Cleaner$$buildOffsetMapForSegment$1.apply(LogCleaner.scala:543)
>at 
> kafka.log.Cleaner$$anonfun$kafka$log$Cleaner$$buildOffsetMapForSegment$1.apply(LogCleaner.scala:538)
>at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>at kafka.utils.IteratorTemplate.foreach(IteratorTemplate.scala:32)
>at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>at kafka.message.MessageSet.foreach(MessageSet.scala:67)
>at 
> kafka.log.Cleaner.kafka$log$Cleaner$$buildOffsetMapForSegment(LogCleaner.scala:538)
>at 
> kafka.log.Cleaner$$anonfun$buildOffsetMap$3.apply(LogCleaner.scala:515)
>at 
> kafka.log.Cleaner$$anonfun$buildOffsetMap$3.apply(LogCleaner.scala:512)
>at scala.collection.immutable.Stream.foreach(Stream.scala:547)
>at kafka.log.Cleaner.buildOffsetMap(LogCleaner.scala:512)
>at kafka.log.Cleaner.clean(LogCleaner.scala:307)
>at 
> kafka.log.LogCleaner$CleanerThread.cleanOrSleep(LogCleaner.scala:221)
>at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:199)
>at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
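A minimal sketch of the proposed guard, in Python rather than the cleaner's Scala and with hypothetical names: treat every message in the segment as a potentially unique key, and compare that upper bound against the slots still free at the desired load factor before processing the segment:

```python
def can_fit_segment(message_count, map_slots, map_entries, load_factor):
    """Return True if the segment's worst case (all keys unique) still fits.

    Hypothetical names: map_slots is the offset map's total slot count,
    map_entries the slots already used, load_factor the desired fill limit.
    """
    usable = int(map_slots * load_factor)
    free = usable - map_entries
    return message_count <= free

# A segment with more messages than free slots would end the cleaning pass
# at the previous segment instead of overflowing the map mid-segment.
fits = can_fit_segment(message_count=5_000, map_slots=10_000,
                       map_entries=2_000, load_factor=0.75)
```

This is the "estimate the upper bound of keys" check from the description; it is conservative, since repeated keys make the real unique-key count lower than the message count.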





[jira] [Commented] (KAFKA-2235) LogCleaner offset map overflow

2015-10-26 Thread Joel Koshy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974502#comment-14974502
 ] 

Joel Koshy commented on KAFKA-2235:
---

Agreed with Todd - btw, this is very similar to the proposal in 
https://issues.apache.org/jira/browse/KAFKA-1755?focusedCommentId=14216486

> LogCleaner offset map overflow
> --
>
> Key: KAFKA-2235
> URL: https://issues.apache.org/jira/browse/KAFKA-2235
> Project: Kafka
>  Issue Type: Bug
>  Components: core, log
>Affects Versions: 0.8.1, 0.8.2.0
>Reporter: Ivan Simoneko
>Assignee: Ivan Simoneko
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-2235_v1.patch, KAFKA-2235_v2.patch
>
>
> We've seen log cleaning generate an error for a topic with lots of small 
> messages. It seems that an offset map overflow is possible if a log segment 
> contains more unique keys than there are empty slots in the offsetMap. Checking 
> baseOffset and map utilization before processing a segment seems not to be 
> enough, because it doesn't take the segment size (the number of unique 
> messages in the segment) into account.
> I suggest estimating the upper bound of keys in a segment as the number of 
> messages in the segment and comparing it with the number of available slots in 
> the map (keeping the desired load factor in mind). That should work in cases where 
> an empty map is capable of holding all the keys for a single segment. If even a 
> single segment is not able to fit into an empty map, the cleanup process will still 
> fail. Should there perhaps be a limit on the log segment entry count?
> Here is the stack trace for this error:
> 2015-05-19 16:52:48,758 ERROR [kafka-log-cleaner-thread-0] 
> kafka.log.LogCleaner - [kafka-log-cleaner-thread-0], Error due to
> java.lang.IllegalArgumentException: requirement failed: Attempt to add a new 
> entry to a full offset map.
>at scala.Predef$.require(Predef.scala:233)
>at kafka.log.SkimpyOffsetMap.put(OffsetMap.scala:79)
>at 
> kafka.log.Cleaner$$anonfun$kafka$log$Cleaner$$buildOffsetMapForSegment$1.apply(LogCleaner.scala:543)
>at 
> kafka.log.Cleaner$$anonfun$kafka$log$Cleaner$$buildOffsetMapForSegment$1.apply(LogCleaner.scala:538)
>at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>at kafka.utils.IteratorTemplate.foreach(IteratorTemplate.scala:32)
>at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>at kafka.message.MessageSet.foreach(MessageSet.scala:67)
>at 
> kafka.log.Cleaner.kafka$log$Cleaner$$buildOffsetMapForSegment(LogCleaner.scala:538)
>at 
> kafka.log.Cleaner$$anonfun$buildOffsetMap$3.apply(LogCleaner.scala:515)
>at 
> kafka.log.Cleaner$$anonfun$buildOffsetMap$3.apply(LogCleaner.scala:512)
>at scala.collection.immutable.Stream.foreach(Stream.scala:547)
>at kafka.log.Cleaner.buildOffsetMap(LogCleaner.scala:512)
>at kafka.log.Cleaner.clean(LogCleaner.scala:307)
>at 
> kafka.log.LogCleaner$CleanerThread.cleanOrSleep(LogCleaner.scala:221)
>at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:199)
>at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)





[jira] [Work started] (KAFKA-2369) Add Copycat REST API

2015-10-26 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-2369 started by Ewen Cheslack-Postava.

> Add Copycat REST API
> 
>
> Key: KAFKA-2369
> URL: https://issues.apache.org/jira/browse/KAFKA-2369
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> Add a REST API for Copycat. At a minimum, for a single worker this should 
> support:
> * add/remove connector
> * connector status
> * task status
> * worker status
> In distributed mode this should handle forwarding if necessary, but it may 
> make sense to defer the distributed support for a later JIRA.
> This will require the addition of new dependencies to support implementing 
> the REST API.





[jira] [Commented] (KAFKA-2644) Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL

2015-10-26 Thread Rajini Sivaram (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974530#comment-14974530
 ] 

Rajini Sivaram commented on KAFKA-2644:
---

[~ewencp] Thank you for running the tests. The build target used to collect 
MiniKdc jars was a new target (described in my comment above) which needed to 
be added to the script that starts the system test run. Since that is not being 
built, SASL tests will fail in the current run. I have updated the patch to 
make _copyTestDependantLibs_ a dependency of _jar_ so that it works without 
modifying the script that runs the tests. Do you mind running the tests again 
with the updated PR? Sorry about that.

Do you have any suggestion on a better way to get the jars in place for the 
test? 

> Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL
> 
>
> Key: KAFKA-2644
> URL: https://issues.apache.org/jira/browse/KAFKA-2644
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> We need to define which of the existing ducktape tests are relevant. cc 
> [~rsivaram]





[jira] [Updated] (KAFKA-2449) Update mirror maker (MirrorMaker) docs

2015-10-26 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2449:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 356
[https://github.com/apache/kafka/pull/356]

> Update mirror maker (MirrorMaker) docs
> --
>
> Key: KAFKA-2449
> URL: https://issues.apache.org/jira/browse/KAFKA-2449
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Gwen Shapira
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> The Kafka docs on Mirror Maker state that it mirrors from N source clusters 
> to 1 destination, but this is no longer the case. The docs should be updated to 
> reflect that it mirrors from a single source cluster to a single target cluster.
> Docs I've found where this should be updated:
> http://kafka.apache.org/documentation.html#basic_ops_mirror_maker
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+mirroring+(MirrorMaker)





[GitHub] kafka pull request: KAFKA-2449: Update mirror maker (MirrorMaker) ...

2015-10-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/356




[jira] [Commented] (KAFKA-2644) Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL

2015-10-26 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974540#comment-14974540
 ] 

Ismael Juma commented on KAFKA-2644:


Maybe we need a build target for generating the system test artifacts? It 
seems weird for `copyTestDependantLibs` to be a dependency of `jar`.

> Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL
> 
>
> Key: KAFKA-2644
> URL: https://issues.apache.org/jira/browse/KAFKA-2644
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> We need to define which of the existing ducktape tests are relevant. cc 
> [~rsivaram]





[jira] [Commented] (KAFKA-2644) Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL

2015-10-26 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974544#comment-14974544
 ] 

Ewen Cheslack-Postava commented on KAFKA-2644:
--

Yeah, that seems like a reasonable solution. So far we just haven't needed any 
tools we don't publish as part of a normal release, although I suppose things 
like VerifiableProducer are a bit weird to include in a release.

> Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL
> 
>
> Key: KAFKA-2644
> URL: https://issues.apache.org/jira/browse/KAFKA-2644
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> We need to define which of the existing ducktape tests are relevant. cc 
> [~rsivaram]





[jira] [Commented] (KAFKA-2449) Update mirror maker (MirrorMaker) docs

2015-10-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974564#comment-14974564
 ] 

ASF GitHub Bot commented on KAFKA-2449:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/356


> Update mirror maker (MirrorMaker) docs
> --
>
> Key: KAFKA-2449
> URL: https://issues.apache.org/jira/browse/KAFKA-2449
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Gwen Shapira
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> The Kafka docs on Mirror Maker state that it mirrors from N source clusters 
> to 1 destination, but this is no longer the case. The docs should be updated to 
> reflect that it mirrors from a single source cluster to a single target cluster.
> Docs I've found where this should be updated:
> http://kafka.apache.org/documentation.html#basic_ops_mirror_maker
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+mirroring+(MirrorMaker)





[jira] [Assigned] (KAFKA-2369) Add Copycat REST API

2015-10-26 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava reassigned KAFKA-2369:


Assignee: Ewen Cheslack-Postava  (was: Liquan Pei)

> Add Copycat REST API
> 
>
> Key: KAFKA-2369
> URL: https://issues.apache.org/jira/browse/KAFKA-2369
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> Add a REST API for Copycat. At a minimum, for a single worker this should 
> support:
> * add/remove connector
> * connector status
> * task status
> * worker status
> In distributed mode this should handle forwarding if necessary, but it may 
> make sense to defer the distributed support for a later JIRA.
> This will require the addition of new dependencies to support implementing 
> the REST API.





[jira] [Commented] (KAFKA-2674) ConsumerRebalanceListener.onPartitionsRevoked() is not called on consumer close

2015-10-26 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974770#comment-14974770
 ] 

Jason Gustafson commented on KAFKA-2674:


[~becket_qin] [~guozhang] What would you guys think of the following change to 
ConsumerRebalanceListener? Basically the objective is to make the calling order 
clear.

{code}
interface RebalanceListener {
  /* Invoked prior to rebalance, offsets committed here */
  void onPrepare(List<TopicPartition> oldAssignment);

  /* Invoked after rebalance, set initial offsets here */
  void onComplete(List<TopicPartition> newAssignment);
}
{code}

Might also be nice to get feedback from [~jkreps].
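To make the intended calling order concrete, here is an illustrative Python simulation (not Kafka code; the method names mirror the proposal above): onPrepare always sees the old assignment, onComplete always sees the new one, and close is modeled as a rebalance to an empty assignment:

```python
class RecordingListener:
    """Records callback invocations to show the proposed calling order."""
    def __init__(self):
        self.events = []

    def on_prepare(self, old_assignment):
        # Offsets for the old assignment would be committed here.
        self.events.append(("prepare", list(old_assignment)))

    def on_complete(self, new_assignment):
        # Initial offsets for the new assignment would be set here.
        self.events.append(("complete", list(new_assignment)))

def rebalance(listener, old, new):
    listener.on_prepare(old)
    listener.on_complete(new)

listener = RecordingListener()
rebalance(listener, old=["testA-0", "testB-0"], new=["testA-0"])
rebalance(listener, old=["testA-0"], new=[])  # close: everything revoked
```

Under this contract, the close case in KAFKA-2674 stops being special: it is just one more prepare/complete pair whose new assignment is empty.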

> ConsumerRebalanceListener.onPartitionsRevoked() is not called on consumer 
> close
> ---
>
> Key: KAFKA-2674
> URL: https://issues.apache.org/jira/browse/KAFKA-2674
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Michal Turek
>Assignee: Jason Gustafson
>
> Hi, I'm investigating and testing behavior of new consumer from the planned 
> release 0.9 and found an inconsistency in calling of rebalance callbacks.
> I noticed that ConsumerRebalanceListener.onPartitionsRevoked() is NOT called 
> during consumer close and application shutdown. Its JavaDoc contract says:
> - "This method will be called before a rebalance operation starts and after 
> the consumer stops fetching data."
> - "It is recommended that offsets should be committed in this callback to 
> either Kafka or a custom offset store to prevent duplicate data."
> I believe calling consumer.close() is the start of a rebalance operation, and even 
> the local consumer that is actually closing should be notified so that it can 
> process any rebalance logic, including offset commits (e.g. if auto-commit is 
> disabled).
> There are commented logs of current and expected behaviors.
> {noformat}
> // Application start
> 2015-10-20 15:14:02.208 INFO  o.a.k.common.utils.AppInfoParser
> [TestConsumer-worker-0]: Kafka version : 0.9.0.0-SNAPSHOT 
> (AppInfoParser.java:82)
> 2015-10-20 15:14:02.208 INFO  o.a.k.common.utils.AppInfoParser
> [TestConsumer-worker-0]: Kafka commitId : 241b9ab58dcbde0c 
> (AppInfoParser.java:83)
> // Consumer started (the first one in group), rebalance callbacks are called 
> including empty onPartitionsRevoked()
> 2015-10-20 15:14:02.333 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, revoked: [] 
> (TestConsumer.java:95)
> 2015-10-20 15:14:02.343 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, assigned: [testB-1, testA-0, 
> testB-0, testB-3, testA-2, testB-2, testA-1, testA-4, testB-4, testA-3] 
> (TestConsumer.java:100)
> // Another consumer joined the group, rebalancing
> 2015-10-20 15:14:17.345 INFO  o.a.k.c.c.internals.Coordinator 
> [TestConsumer-worker-0]: Attempt to heart beat failed since the group is 
> rebalancing, try to re-join group. (Coordinator.java:714)
> 2015-10-20 15:14:17.346 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, revoked: [testB-1, testA-0, 
> testB-0, testB-3, testA-2, testB-2, testA-1, testA-4, testB-4, testA-3] 
> (TestConsumer.java:95)
> 2015-10-20 15:14:17.349 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, assigned: [testB-3, testA-4, 
> testB-4, testA-3] (TestConsumer.java:100)
> // Consumer started closing, there SHOULD be onPartitionsRevoked() callback 
> to commit offsets like during standard rebalance, but it is missing
> 2015-10-20 15:14:39.280 INFO  c.a.e.kafka.newapi.TestConsumer [main]: 
> Closing instance (TestConsumer.java:42)
> 2015-10-20 15:14:40.264 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Worker thread stopped (TestConsumer.java:89)
> {noformat}
> A workaround is to call onPartitionsRevoked() explicitly and manually just 
> before calling consumer.close(), but it seems dirty and error-prone to me. It 
> can easily be forgotten by someone without such experience.


