Re: [DISCUSSION] KIP-266: Add TimeoutException to KafkaConsumer#position()

2018-03-11 Thread Richard Yu
Hi all,

I updated the KIP: overloading position() is now the favored approach.
Bounding position() using requestTimeoutMs has been listed as a rejected alternative.

Any thoughts?
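
For illustration, here is a rough sketch of what the overloaded call could look like. The exact signature (a long plus TimeUnit versus a Duration) is still open, so the shape below is an assumption rather than the final API:

    import java.util.concurrent.TimeUnit;
    import org.apache.kafka.common.TopicPartition;

    // Sketch only: an overloaded position() with a caller-supplied bound.
    interface BoundedPositionSketch {

        // Existing behavior: may block indefinitely while the position is looked up.
        long position(TopicPartition partition);

        // Proposed overload: gives up and throws Kafka's TimeoutException once the
        // caller-supplied bound elapses, instead of blocking forever.
        long position(TopicPartition partition, long timeout, TimeUnit unit);
    }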

On Tue, Mar 6, 2018 at 6:00 PM, Guozhang Wang  wrote:

> I agree that adding the overloads is the most flexible option. But going in that
> direction, we'd do that for all the blocking calls I've listed above,
> with this timeout value covering the end-to-end waiting time.
>
>
> Guozhang
>
> On Tue, Mar 6, 2018 at 10:02 AM, Ted Yu  wrote:
>
> > bq. The most flexible option is to add overloads to the consumer
> >
> > This option is flexible.
> >
> > Looking at the tail of SPARK-18057, Spark dev voiced the same choice.
> >
> > +1 for adding overload with timeout parameter.
> >
> > Cheers
> >
> > On Mon, Mar 5, 2018 at 2:42 PM, Jason Gustafson  wrote:
> >
> > > @Guozhang I probably have suggested all options at some point or another,
> > > including most recently, the current KIP! I was thinking that practically
> > > speaking, the request timeout defines how long the user is willing to wait
> > > for a response. The consumer doesn't really have a complex send process
> > > like the producer for any of these APIs, so I wasn't sure how much benefit
> > > there would be from having more granular control over timeouts (in the end,
> > > KIP-91 just adds a single timeout to control the whole send). That said, it
> > > might indeed be better to avoid overloading the config as you suggest since
> > > at least it avoids inconsistency with the producer's usage.
> > >
> > > The most flexible option is to add overloads to the consumer so that users
> > > can pass the timeout directly. I'm not sure if that is more or less
> > > annoying than a new config, but I've found config timeouts a little
> > > constraining in practice. For example, I could imagine users wanting to
> > > wait longer for an offset commit operation than a position lookup; if the
> > > latter isn't timely, users can just pause the partition and continue
> > > fetching on others. If you cannot commit offsets, however, it might be
> > > safer for an application to wait for the coordinator to become available
> > > than to continue.
> > >
> > > -Jason
> > >
> > > On Sun, Mar 4, 2018 at 10:14 PM, Guozhang Wang  wrote:
> > >
> > > > Hello Richard,
> > > >
> > > > Thanks for the proposed KIP. I have a couple of general comments:
> > > >
> > > > 1. I'm not sure if piggy-backing the timeout exception on the existing
> > > > requestTimeoutMs configured in "request.timeout.ms" is a good idea,
> > > > since a) it is a general config that applies to all types of requests,
> > > > and b) using it to cover all the phases of an API call, including the
> > > > network round trip and potential metadata refresh, has been shown to be
> > > > a bad idea, as illustrated in KIP-91:
> > > >
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-91+Provide+Intuitive+User+Timeouts+in+The+Producer
> > > >
> > > > In fact, I think KAFKA-4879, which is aimed at the same issue as
> > > > KAFKA-6608, is where Jason suggested we use a new config for the API.
> > > > Maybe this would be a more intuitive approach than reusing the
> > > > request.timeout.ms config.
> > > >
> > > >
> > > > 2. Besides the Consumer.position() call, there are a couple more
> > > > blocking calls today that could result in infinite blocking:
> > > > Consumer.commitSync() and Consumer.committed(). Should they be
> > > > considered in this KIP as well?
> > > >
> > > > 3. There are a few other APIs that today already rely on
> > > > request.timeout.ms for breaking the infinite blocking, namely
> > > > Consumer.partitionsFor(), Consumer.offsetsForTimes() and
> > > > Consumer.listTopics(). If we are making the other blocking calls rely
> > > > on a new config as suggested in 1) above, should we also change the
> > > > semantics of these API functions for consistency?
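
For reference, a minimal sketch of what a separate config (point 1 above) could look like from the user's side. The name "default.api.timeout.ms" is only an illustrative placeholder, not something this discussion has settled on:

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;

    // Sketch only: a hypothetical per-API timeout next to the existing
    // network-level request timeout.
    class TimeoutConfigSketch {
        static Properties consumerProps() {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG, "30000"); // per-request network bound
            props.put("default.api.timeout.ms", "60000");                 // hypothetical end-to-end API bound
            return props;
        }
    }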
> > > >
> > > >
> > > > Guozhang
> > > >
> > > >
> > > >
> > > >
> > > > On Sun, Mar 4, 2018 at 11:13 AM, Richard Yu <yohan.richard...@gmail.com> wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > I would like to discuss a potential change which would be made to
> > > > > KafkaConsumer:
> > > > > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75974886
> > > > >
> > > > > Thanks,
> > > > > Richard Yu
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > -- Guozhang
> > > >
> > >
> >
>
>
>
> --
> -- Guozhang
>


Re: [DISCUSS] KIP-253: Support in-order message delivery with partition expansion

2018-03-11 Thread Dong Lin
Hey Jason,

This is a good solution on the server side for log compacted topics.
Thinking about this more, there may be another, probably simpler, solution on
the client side for log compacted topics. This solution is now specified in
the section "Changes in how producer produced keyed messages to log
compacted topics" of the KIP. The client-side solution seems simpler, with
less performance overhead, than the server-side solution. What do you think?

Thanks,
Dong


On Sat, Mar 10, 2018 at 10:55 AM, Jason Gustafson  wrote:

> Hey Dong,
>
> I was thinking a bit about log compaction after a partition split. I think
> the best you could hope for in terms of efficiency is that the network
> overhead would be proportional to the number of remapped keys that need
> cleaning. One thought I had which gets close to this is to propagate a
> bloom filter covering the keys in the log prior to the split to all
> partitions that might contain some of the remapped keys. As a simple
> example, suppose we have a single partition which is split into two at
> offset N. Say that broker 0 owns partition 0 and broker 1 owns partition 1.
> Some subset of the keys prior to N will move to partition 1 and the rest
> will remain on partition 0. The idea is something like this:
>
> 1. Every time we clean partition 0 on broker 0, we compute a bloom filter
> for the keys in the log prior to offset N.
> 2. The bloom filter is propagated to broker 1 and cached.
> 3. The next time broker 1 cleans the log, it uses the bloom filter to
> collect a set of possible matches.
> 4. When the cleaning completes, the matching keys are propagated to broker
> 0, where they are cached until the next cleaning.
> 5. The next time broker 0 cleans the log, it can remove all keys that have
> been cached from the region prior to the split.
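
For illustration, a rough sketch of steps 1-5 using Guava's BloomFilter; the propagation and caching plumbing (how the filter and the matched keys travel between brokers) is purely hypothetical and not actual broker code:

    import com.google.common.hash.BloomFilter;
    import com.google.common.hash.Funnels;
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    class SplitCleaningSketch {

        // Step 1 (broker 0): build a filter over all keys below the split offset N.
        // Step 2 is then to ship this filter to broker 1 and cache it there.
        static BloomFilter<byte[]> buildFilter(Iterable<byte[]> keysBeforeSplit) {
            BloomFilter<byte[]> filter =
                    BloomFilter.create(Funnels.byteArrayFunnel(), 1_000_000, 0.01);
            for (byte[] key : keysBeforeSplit) {
                filter.put(key);
            }
            return filter;
        }

        // Step 3 (broker 1): while cleaning, collect keys that might also live in the
        // pre-split region on broker 0. Bloom filters can return false positives but
        // never false negatives, so no remapped key is missed.
        // Step 4 is to send the collected keys back to broker 0.
        static List<byte[]> possibleMatches(BloomFilter<byte[]> filter,
                                            Iterable<byte[]> keysSeenWhileCleaning) {
            List<byte[]> matches = new ArrayList<>();
            for (byte[] key : keysSeenWhileCleaning) {
                if (filter.mightContain(key)) {
                    matches.add(key);
                }
            }
            return matches;
        }

        // Step 5 (broker 0): on the next cleaning pass, a record below the split offset
        // can be removed if its key is among the keys returned by broker 1.
        static boolean removable(List<byte[]> matchedKeys, byte[] key,
                                 long offset, long splitOffset) {
            return offset < splitOffset
                    && matchedKeys.stream().anyMatch(k -> Arrays.equals(k, key));
        }
    }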
>
> This incremental approach allows us to trade off cleaning latency to reduce
> network traffic and memory overhead. A few points:
>
> - The accuracy of bloom filters decreases as you add more elements to them.
> We would probably choose to propagate the bloom filter for a subset of the
> keys once it had reached a certain capacity to avoid having too many false
> positives.
> - We can limit the number of bloom filter matches that we will collect and
> return in a single round of cleaning. These keys have to be cached in the
> broker for a little while (until the next cleaning), so this lets us keep
> the memory usage bounded.
>
> There is probably some room for cleverness as well to avoid repeating work.
> For example, the broker matching the bloom filter can also send the offset
> of the last key that was matched against the filter. The next time we send
> a bloom filter for a certain range of keys, we can send the starting offset
> for matching. It's kind of like our "dirty offset" notion.
>
> Needs a bit of investigation to work out the details (e.g. handling
> multiple splits), but seems like it could work. What do you think?
>
> -Jason
>
>
>
> On Fri, Mar 9, 2018 at 1:23 PM, Matthias J. Sax  wrote:
>
> > Thanks for your comment Clemens. What you are saying makes sense.
> > However, the pattern you describe is to split partitions and use linear
> > hashing to avoid random key distribution. But this is what Jan thinks we
> > should not do...
> >
> > Also, I just picked an example with 2 -> 3 partitions, but if you don't
> > use linear hashing, I think the same issue occurs if you double the
> > number of partitions.
> >
> > I am in favor of using linear hashing. I still think it is also useful to
> > split individual partitions in case load is not balanced and some
> > partitions are hot spots while others are "idle".
> >
> > -Matthias
> >
> >
> > On 3/9/18 5:41 AM, Clemens Valiente wrote:
> > > I think it's fair to assume that topics will always be increased by an
> > > integer factor - e.g. from 2 partitions to 4 partitions. Then the mapping
> > > is much easier.
> > >
> > > Why anyone would increase partitions by less than x2 is a mystery to me.
> > > If your two partitions cannot handle the load, then with three partitions
> > > each one will still get 67% of that load, which is still way too dangerous.
> > >
> > >
> > > So in your case we go from
> > >
> > > part1: A B C D
> > >
> > > part2: E F G H
> > >
> > >
> > > to
> > >
> > >
> > > part1: A C
> > >
> > > part2: B D
> > >
> > > part3: E F
> > >
> > > part4: G H
> > >
> > >
> > > 
> > > From: Matthias J. Sax 
> > > Sent: 09 March 2018 07:53
> > > To: dev@kafka.apache.org
> > > Subject: Re: [DISCUSS] KIP-253: Support in-order message delivery with partition expansion
> > >
> > > @Jan: You suggest copying the data from one topic to a new topic, and
> > > providing an "offset mapping" from the old to the new topic for the
> > > consumers. I don't quite understand how this would work.
> > >
> > > Let's say there are 2 partitions in the original topic and 3 partitions
> > > 

Re: [DISCUSS] KIP-258: Allow to Store Record Timestamps in RocksDB

2018-03-11 Thread Matthias J. Sax
@John, Guozhang,

thanks a lot for your comments. Very long reply...


About upgrading the rebalance metadata:

Another possibility to do this would be to register multiple assignment
strategies for the 1.2 applications. In this case, new instances would
be configured to support both, and the broker would pick the version that
all instances understand. The disadvantage would be that we send much
more data (i.e., two subscriptions) in each rebalance as long as no second
rebalance is done to disable the old protocol. Thus, using this approach
would allow us to avoid a second rebalance, trading it off against an increased
rebalance network footprint (I also assume that this would increase the
message size that is written into the __consumer_offsets topic?). Overall, I
am not sure if this would be a good tradeoff, but it could avoid a
second rebalance (I have some more thoughts about stores below that are
relevant for single-rebalance upgrade).
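
For reference, the consumer-level mechanism this would build on is the ability to list several assignors under "partition.assignment.strategy", with the group coordinator picking one that every member supports. A rough sketch, where the two Streams assignor class names are made up purely for illustration:

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;

    // Sketch only: a 1.2 instance advertising both the new and the old assignor, so
    // the coordinator can keep using the old protocol while older instances are still
    // in the group. Both class names below are hypothetical placeholders.
    class DualAssignorSketch {
        static Properties consumerProps() {
            Properties props = new Properties();
            props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                    "org.apache.kafka.streams.processor.internals.StreamsPartitionAssignorV2,"
                  + "org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor");
            return props;
        }
    }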

For future upgrades we might be able to fix this though. I was thinking
about the following:

In the current implementation, the leader fails if it gets a
subscription it does not understand (i.e., a newer version). We could change
this behavior and let the leader send an empty assignment plus an error
code (including its supported version) back to the instance sending the
"bad" subscription. This would allow the following logic for an
application instance:

 - on startup, always send the latest subscription format
 - if the leader understands it, we get an assignment back and start processing
 - if the leader does not understand it, we get an empty assignment and the
supported version back
 - the application calls unsubscribe()/subscribe()/poll() again and sends a
subscription using the leader's supported version
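
A compact sketch of that probing loop from the instance's point of view; every type and callback here is a hypothetical placeholder, since no such protocol exists yet:

    // Sketch of the "version probing" step described above; not real Streams internals.
    class VersionProbingSketch {
        static final int LATEST_SUBSCRIPTION_VERSION = 3; // placeholder value

        interface ProbeResult {
            boolean emptyAssignmentWithVersionError();
            int leaderSupportedVersion();
        }

        int probe(java.util.function.IntFunction<ProbeResult> subscribeAndPoll,
                  Runnable unsubscribe) {
            int usedVersion = LATEST_SUBSCRIPTION_VERSION;    // always send the newest format first
            while (true) {
                ProbeResult result = subscribeAndPoll.apply(usedVersion);
                if (!result.emptyAssignmentWithVersionError()) {
                    return usedVersion;                       // leader understood us; start processing
                }
                // Leader is older: unsubscribe and retry with the version it advertised.
                usedVersion = result.leaderSupportedVersion();
                unsubscribe.run();
            }
        }
    }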

This protocol would allow us to do a single rolling bounce, and it implements
a "version probing" step that might result in two executed rebalances.
The advantage would be that the user does not need to set any configs
or do multiple rolling bounces, as Streams takes care of this automatically.

One disadvantage would be that two rebalances happen and that, for an
error case during a rebalance, we lose the information about the leader's
supported version, so the "probing step" would happen a second time.

If the leader is eventually updated, it will include its own supported
version in all assignments, to allow a "downgraded" application to
upgrade its version later. Also, if an application fails, the first
probing would always be successful and only a single rebalance happens.
If we use this protocol, I think we don't need any configuration
parameter for future upgrades.


About "upgrade.from" vs "internal.protocol.version":

Users would set "upgrade.from" to the release version the current/old
application is using. I think this is simpler, as users know this
version. If we use "internal.protocol.version" instead, we expose
implementation details and users need to know the protocol version (i.e.,
they need to map from the release version to the protocol version, as in
"I am running 0.11.0, which uses metadata protocol version 2").

Also, the KIP states that for the second rolling bounce, the
"upgrade.mode" config should be set back to `null` -- and thus
"upgrade.from" would not have any effect and would be ignored (I will update
the KIP to point out this dependency).
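
To make the two bounces concrete, a sketch of the configs as described here; the exact config names and accepted values are still under discussion, so treat them as placeholders:

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    // Sketch only: the two rolling bounces, with placeholder config names.
    class UpgradeConfigSketch {
        static Properties firstBounce() {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");
            props.put("upgrade.from", "0.11.0");   // release version the old instances are running
            props.put("upgrade.mode", "in_place"); // placeholder; enables the upgrade path
            return props;
        }

        static Properties secondBounce() {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");
            // "upgrade.mode" is set back to null (i.e. removed); "upgrade.from" is then ignored.
            return props;
        }
    }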



About your second point: I'll update the KIP accordingly to describe
future upgrades as well. Both will be different.



One more point about upgrading the store format. I was thinking about
avoiding the second rolling bounce altogether in the future: (1) the
goal is to achieve an upgrade with zero downtime; (2) this requires preparing
the stores as "hot standbys" before we do the switch and delete the old
stores; (3) the current proposal does the switch "globally" -- this is
simpler and, due to the required second rebalance, not a disadvantage.
However, a globally consistent switch-over might actually not be required.
For "in_place" upgrade, following the protocol from above, we could
decouple the store switch, and each instance could switch its store
independently of all other instances. After the rolling bounce, it
seems to be ok to switch from the old store to the new store "under the
hood" whenever the new store is ready (this could even be done before
we switch to the new metadata version). Each time we update the "hot
standby" we check if it has reached the "endOffset" (or maybe X%, which could
either be hardcoded or configurable). If we detect this situation, the
Streams application closes the corresponding active tasks as well as the "hot
standby" tasks, and re-creates the new active tasks using the new store.
(I need to go through the details once again, but it seems to be feasible.)
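
A small sketch of the readiness check hinted at above; the endOffsets() lookup is the regular consumer API, while the threshold handling and the decision hook are hypothetical:

    import java.util.Collections;
    import java.util.Map;
    import org.apache.kafka.clients.consumer.Consumer;
    import org.apache.kafka.common.TopicPartition;

    // Sketch only: decide whether a "hot standby" store has caught up far enough to
    // take over from the old store. The 0.95 threshold stands in for the hardcoded or
    // configurable "X%" mentioned above.
    class StoreSwitchSketch {
        static boolean readyToSwitch(Consumer<?, ?> restoreConsumer,
                                     TopicPartition changelogPartition,
                                     long restoredOffset) {
            Map<TopicPartition, Long> end =
                    restoreConsumer.endOffsets(Collections.singleton(changelogPartition));
            long endOffset = end.get(changelogPartition);
            return endOffset == 0 || (double) restoredOffset / endOffset >= 0.95;
        }
    }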

Combining this strategy with the "multiple assignment" idea might even
enable us to do a single rolling bounce upgrade from 1.1 -> 1.2.
Applications would just keep using the old store as long as the new store is
not ready, even if the new metadata version is already in use.

For future 

Build failed in Jenkins: kafka-trunk-jdk8 #2465

2018-03-11 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: Remove code duplication + excessive space (#4683)

--
[...truncated 419.48 KB...]

kafka.zookeeper.ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChange STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChange PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetDataExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetDataExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testMixedPipeline STARTED

kafka.zookeeper.ZooKeeperClientTest > testMixedPipeline PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetDataExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetDataExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testDeleteExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testDeleteExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testSessionExpiry STARTED

kafka.zookeeper.ZooKeeperClientTest > testSessionExpiry PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetDataNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetDataNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testDeleteNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testDeleteNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testExistsExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testExistsExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperStateChangeRateMetrics STARTED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperStateChangeRateMetrics PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDeletion STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDeletion PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetAclNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetAclNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testStateChangeHandlerForAuthFailure STARTED

kafka.zookeeper.ZooKeeperClientTest > testStateChangeHandlerForAuthFailure PASSED

kafka.network.SocketServerTest > testGracefulClose STARTED

kafka.network.SocketServerTest > testGracefulClose PASSED

kafka.network.SocketServerTest > controlThrowable STARTED

kafka.network.SocketServerTest > controlThrowable PASSED

kafka.network.SocketServerTest > testRequestMetricsAfterStop STARTED

kafka.network.SocketServerTest > testRequestMetricsAfterStop PASSED

kafka.network.SocketServerTest > testConnectionIdReuse STARTED

kafka.network.SocketServerTest > testConnectionIdReuse PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics STARTED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics PASSED

kafka.network.SocketServerTest > testProcessorMetricsTags STARTED

kafka.network.SocketServerTest > testProcessorMetricsTags PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > testConnectionId STARTED

kafka.network.SocketServerTest > testConnectionId PASSED

kafka.network.SocketServerTest > testBrokerSendAfterChannelClosedUpdatesRequestMetrics STARTED

kafka.network.SocketServerTest > testBrokerSendAfterChannelClosedUpdatesRequestMetrics PASSED

kafka.network.SocketServerTest > testNoOpAction STARTED

kafka.network.SocketServerTest > testNoOpAction PASSED

kafka.network.SocketServerTest > simpleRequest STARTED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > closingChannelException STARTED

kafka.network.SocketServerTest > closingChannelException PASSED

kafka.network.SocketServerTest > testIdleConnection STARTED

kafka.network.SocketServerTest > testIdleConnection PASSED

kafka.network.SocketServerTest > testClientDisconnectionWithStagedReceivesFullyProcessed STARTED

kafka.network.SocketServerTest > testClientDisconnectionWithStagedReceivesFullyProcessed PASSED

kafka.network.SocketServerTest > testMetricCollectionAfterShutdown STARTED

kafka.network.SocketServerTest > testMetricCollectionAfterShutdown PASSED

kafka.network.SocketServerTest > testSessionPrincipal STARTED

kafka.network.SocketServerTest > testSessionPrincipal PASSED

kafka.network.SocketServerTest > configureNewConnectionException STARTED

kafka.network.SocketServerTest > configureNewConnectionException PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIpOverrides STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIpOverrides PASSED

kafka.network.SocketServerTest > processNewResponseException STARTED

kafka.network.SocketServerTest > processNewResponseException PASSED


Build failed in Jenkins: kafka-trunk-jdk7 #3242

2018-03-11 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: Remove code duplication + excessive space (#4683)

--
[...truncated 418.73 KB...]
kafka.zk.KafkaZkClientTest > testDeleteRecursive PASSED

kafka.zk.KafkaZkClientTest > testGetTopicPartitionStates STARTED

kafka.zk.KafkaZkClientTest > testGetTopicPartitionStates PASSED

kafka.zk.KafkaZkClientTest > testCreateConfigChangeNotification STARTED

kafka.zk.KafkaZkClientTest > testCreateConfigChangeNotification PASSED

kafka.zk.KafkaZkClientTest > testDelegationTokenMethods STARTED

kafka.zk.KafkaZkClientTest > testDelegationTokenMethods PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistentWithCompression STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistentWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic STARTED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testSessionExpireListenerMetrics STARTED

kafka.metrics.MetricsTest > testSessionExpireListenerMetrics PASSED

kafka.metrics.MetricsTest > testBrokerTopicMetricsUnregisteredAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testBrokerTopicMetricsUnregisteredAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testClusterIdMetric STARTED

kafka.metrics.MetricsTest > testClusterIdMetric PASSED

kafka.metrics.MetricsTest > testControllerMetrics STARTED

kafka.metrics.MetricsTest > testControllerMetrics PASSED

kafka.metrics.MetricsTest > testWindowsStyleTagNames STARTED

kafka.metrics.MetricsTest > testWindowsStyleTagNames PASSED

kafka.metrics.MetricsTest > testMetricsLeak STARTED

kafka.metrics.MetricsTest > testMetricsLeak PASSED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut STARTED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer STARTED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.security.auth.ResourceTypeTest > testJavaConversions STARTED

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testJavaConversions STARTED

kafka.security.auth.OperationTest > testJavaConversions PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled STARTED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils STARTED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot STARTED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete STARTED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive STARTED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.AclTest > testAclJsonConversion STARTED

kafka.security.auth.AclTest > testAclJsonConversion PASSED

kafka.security.auth.PermissionTypeTest > testJavaConversions STARTED

kafka.security.auth.PermissionTypeTest > testJavaConversions PASSED

kafka.security.auth.PermissionTypeTest > testFromString STARTED

kafka.security.auth.PermissionTypeTest > testFromString PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess STARTED


Build failed in Jenkins: kafka-trunk-jdk9 #463

2018-03-11 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: Remove code duplication + excessive space (#4683)

--
[...truncated 1.48 MB...]
kafka.message.ByteBufferMessageSetTest > testWriteToChannelThatConsumesPartially PASSED

kafka.message.ByteBufferMessageSetTest > testIteratorIsConsistent STARTED

kafka.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.message.ByteBufferMessageSetTest > testWrittenEqualsRead STARTED

kafka.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.message.ByteBufferMessageSetTest > testWriteTo STARTED

kafka.message.ByteBufferMessageSetTest > testWriteTo PASSED

kafka.message.ByteBufferMessageSetTest > testEquals STARTED

kafka.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.message.ByteBufferMessageSetTest > testSizeInBytes STARTED

kafka.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.message.ByteBufferMessageSetTest > testIterator STARTED

kafka.message.ByteBufferMessageSetTest > testIterator PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer STARTED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testSessionExpireListenerMetrics STARTED

kafka.metrics.MetricsTest > testSessionExpireListenerMetrics PASSED

kafka.metrics.MetricsTest > testBrokerTopicMetricsUnregisteredAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testBrokerTopicMetricsUnregisteredAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testClusterIdMetric STARTED

kafka.metrics.MetricsTest > testClusterIdMetric PASSED

kafka.metrics.MetricsTest > testControllerMetrics STARTED

kafka.metrics.MetricsTest > testControllerMetrics PASSED

kafka.metrics.MetricsTest > testWindowsStyleTagNames STARTED

kafka.metrics.MetricsTest > testWindowsStyleTagNames PASSED

kafka.metrics.MetricsTest > testMetricsLeak STARTED

kafka.metrics.MetricsTest > testMetricsLeak PASSED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut STARTED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut PASSED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic STARTED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistentWithCompression STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistentWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.security.token.delegation.DelegationTokenManagerTest > testPeriodicTokenExpiry STARTED

kafka.security.token.delegation.DelegationTokenManagerTest > testPeriodicTokenExpiry PASSED

kafka.security.token.delegation.DelegationTokenManagerTest > testTokenRequestsWithDelegationTokenDisabled STARTED

kafka.security.token.delegation.DelegationTokenManagerTest > testTokenRequestsWithDelegationTokenDisabled PASSED

kafka.security.token.delegation.DelegationTokenManagerTest > testDescribeToken STARTED

kafka.security.token.delegation.DelegationTokenManagerTest > testDescribeToken PASSED

kafka.security.token.delegation.DelegationTokenManagerTest > testCreateToken STARTED

kafka.security.token.delegation.DelegationTokenManagerTest > testCreateToken PASSED

kafka.security.token.delegation.DelegationTokenManagerTest > testExpireToken STARTED

kafka.security.token.delegation.DelegationTokenManagerTest > testExpireToken PASSED

kafka.security.token.delegation.DelegationTokenManagerTest > testRenewToken STARTED

kafka.security.token.delegation.DelegationTokenManagerTest > testRenewToken PASSED

kafka.security.auth.ResourceTypeTest > testJavaConversions STARTED

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED