[jira] [Created] (KAFKA-8270) Kafka retention hour is not working

2019-04-20 Thread Jiangtao Liu (JIRA)
Jiangtao Liu created KAFKA-8270:
---

 Summary: Kafka retention hour is not working
 Key: KAFKA-8270
 URL: https://issues.apache.org/jira/browse/KAFKA-8270
 Project: Kafka
  Issue Type: Improvement
  Components: consumer
Reporter: Jiangtao Liu
Assignee: Richard Yu


Currently, when a consumer falls out of its consumer group, it restarts 
processing from the last checkpointed offset. This design can introduce a lag 
that some users cannot afford. For example, suppose a consumer crashes at 
offset 100 with its last checkpointed offset at 70. By the time it recovers, 
the log has advanced to offset 120, so restarting at 70 leaves it 50 offsets 
behind (120 - 70) and forces it to reprocess old data. To prevent this, one 
option would be to let the recovered consumer start processing from offset 120 
(where it rejoins) rather than from the last checkpointed offset (70 in the 
example). Meanwhile, a new KafkaConsumer would be instantiated to read from 
offset 70 concurrently with the main consumer, and would be terminated once it 
reaches 120. In this manner a considerable amount of lag can be avoided, since 
the main consumer proceeds as if nothing had happened. 
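
As a rough illustration only, here is a minimal sketch of the backfill idea 
described above, using a second, manually assigned KafkaConsumer to replay the 
gap between the last checkpointed offset and the recovery offset. The topic, 
partition, offsets, and configuration values are placeholders, not part of any 
proposal.

{code:java}
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class BackfillSketch {
    public static void main(String[] args) {
        TopicPartition tp = new TopicPartition("some-topic", 0); // illustrative
        long checkpointedOffset = 70L;  // last committed offset in the example
        long recoveryOffset = 120L;     // where the recovered consumer resumes

        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");        // illustrative
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        // Short-lived backfill consumer: manual assignment, no group membership,
        // so it does not interfere with the recovered consumer's group.
        try (KafkaConsumer<byte[], byte[]> backfill = new KafkaConsumer<>(props)) {
            backfill.assign(Collections.singletonList(tp));
            backfill.seek(tp, checkpointedOffset);
            long position = checkpointedOffset;
            while (position < recoveryOffset) {
                ConsumerRecords<byte[], byte[]> records = backfill.poll(Duration.ofMillis(500));
                for (ConsumerRecord<byte[], byte[]> record : records) {
                    if (record.offset() >= recoveryOffset) {
                        return; // gap fully replayed; terminate the backfill consumer
                    }
                    // reprocess the missed record here
                    position = record.offset() + 1;
                }
            }
        }
    }
}
{code}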



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8269) Flaky Test TopicCommandWithAdminClientTest#testDescribeUnderMinIsrPartitionsMixed

2019-04-20 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8269:
--

 Summary: Flaky Test 
TopicCommandWithAdminClientTest#testDescribeUnderMinIsrPartitionsMixed
 Key: KAFKA-8269
 URL: https://issues.apache.org/jira/browse/KAFKA-8269
 Project: Kafka
  Issue Type: Bug
  Components: admin, unit tests
Affects Versions: 2.3.0
Reporter: Matthias J. Sax
 Fix For: 2.3.0


[https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3573/tests]
{quote}java.lang.AssertionError
at org.junit.Assert.fail(Assert.java:87)
at org.junit.Assert.assertTrue(Assert.java:42)
at org.junit.Assert.assertTrue(Assert.java:53)
at 
kafka.admin.TopicCommandWithAdminClientTest.testDescribeUnderMinIsrPartitionsMixed(TopicCommandWithAdminClientTest.scala:659){quote}
The log is long; this part might be interesting:
{quote}[2019-04-20 21:30:37,936] ERROR [ReplicaFetcher replicaId=4, leaderId=5, 
fetcherId=0] Error for partition testCreateWithReplicaAssignment-0cpsXnG35w-0 
at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-04-20 21:30:48,600] WARN Unable to read additional data from client 
sessionid 0x10510a59d3c0004, likely client has closed socket 
(org.apache.zookeeper.server.NIOServerCnxn:376)
[2019-04-20 21:30:48,908] WARN Unable to read additional data from client 
sessionid 0x10510a59d3c0003, likely client has closed socket 
(org.apache.zookeeper.server.NIOServerCnxn:376)
[2019-04-20 21:30:48,919] ERROR [RequestSendThread controllerId=0] Controller 0 
fails to send a request to broker localhost:43520 (id: 5 rack: rack3) 
(kafka.controller.RequestSendThread:76)
java.lang.InterruptedException
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at kafka.utils.ShutdownableThread.pause(ShutdownableThread.scala:75)
at 
kafka.controller.RequestSendThread.backoff$1(ControllerChannelManager.scala:224)
at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:252)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:89)
[2019-04-20 21:30:48,920] ERROR [RequestSendThread controllerId=0] Controller 0 
fails to send a request to broker localhost:33570 (id: 4 rack: rack3) 
(kafka.controller.RequestSendThread:76)
java.lang.InterruptedException
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at kafka.utils.ShutdownableThread.pause(ShutdownableThread.scala:75)
at 
kafka.controller.RequestSendThread.backoff$1(ControllerChannelManager.scala:224)
at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:252)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:89)
[2019-04-20 21:31:28,942] ERROR [ReplicaFetcher replicaId=3, leaderId=1, 
fetcherId=0] Error for partition under-min-isr-topic-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-04-20 21:31:28,973] ERROR [ReplicaFetcher replicaId=0, leaderId=1, 
fetcherId=0] Error for partition under-min-isr-topic-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #3573

2019-04-20 Thread Apache Jenkins Server
See 


Changes:

[colin] MINOR: Remove errant lock.unlock() call from RoundTripWorker (#6612)

[mjsax] [KAFKA-3729] Auto-configure non-default SerDes passed alongside the

--
[...truncated 2.38 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 

Re: [VOTE] KIP-431: Support of printing additional ConsumerRecord fields in DefaultMessageFormatter

2019-04-20 Thread Mateusz Zakarczemny
Hi Ismael,

Thanks for your questions. Answers below.

1. It would be helpful to see an example of the output with everything
enabled.

For the consumer record:

new ConsumerRecord[Array[Byte], Array[Byte]](
  "someTopic",
  partition = 9,
  offset = 9876,
  timestamp = 123,
  timestampType = TimestampType.CREATE_TIME,
  checksum = 0L,
  serializedKeySize = 0,
  serializedValueSize = 0,
  key = "someKey",
  value = "someValue",
  new RecordHeaders(Seq(header("h1", "v1"), header("h2", "v2")).asJava)
)

and with everything enabled:

Map("print.key" -> "true",
  "print.timestamp" -> "true",
  "print.partition" -> "true",
  "print.offset" -> "true",
  "print.headers" -> "true",
  "print.value" -> "true")

The output would be:

"CreateTime:1234 someKey 9876 9 h1:v1,h2:v2 someValue
"


2. What are the default values for the properties (eg what's the default
header separator).

printTimestamp = false

printKey = false

printOffset = false

printPartition = false

printHeaders = false

printValue = true

keySeparator = "\t"

headersSeparator = ","

lineSeparator = "\n"


3. What is the separator used between key/value and the new fields?

There is no additional separator; keySeparator is used to separate the key, the
value, and any new fields. This keeps the behavior backward compatible:
keySeparator still separates the key from everything else.
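
To make the composition concrete, here is a small standalone sketch (not the
DefaultMessageFormatter code itself) that joins the enabled fields of the
example record with keySeparator and the headers with headersSeparator, in the
order shown in the sample output above. A single space is used as the key
separator here only so the printed line matches the readable sample; the
default is "\t".

import java.util.ArrayList;
import java.util.List;

public class FormatSketch {
    public static void main(String[] args) {
        String keySeparator = " ";      // "\t" by default; space for readability
        String headersSeparator = ",";  // default headers separator
        String lineSeparator = "\n";    // default line separator

        // Enabled fields of the example record, in the order of the sample output.
        List<String> fields = new ArrayList<>();
        fields.add("CreateTime:123");   // print.timestamp
        fields.add("someKey");          // print.key
        fields.add("9876");             // print.offset
        fields.add("9");                // print.partition
        fields.add(String.join(headersSeparator, "h1:v1", "h2:v2")); // print.headers
        fields.add("someValue");        // print.value

        // Every enabled field is joined with keySeparator; the line ends
        // with lineSeparator.
        System.out.print(String.join(keySeparator, fields) + lineSeparator);
    }
}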


Regards,
Mateusz Zakarczemny


On Fri, Apr 12, 2019 at 22:47 Ismael Juma wrote:

> Hi Mateusz,
>
> The KIP looks good. Just a few questions/suggestions:
>
> 1. It would be helpful to see an example of the output with everything
> enabled.
> 2. What are the default values for the properties (eg what's the default
> header separator).
> 3. What is the separator used between key/value and the new fields?
>
> Ismael
>
> On Fri, Apr 12, 2019 at 9:43 PM Mateusz Zakarczemny <
> m.zakarcze...@gmail.com>
> wrote:
>
> > Hi All,
> > This KIP has been under discussion for more than a month. The feedback is
> > positive, with no objections. Therefore, I would like to start voting.
> >
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-431%3A+Support+of+printing+additional+ConsumerRecord+fields+in+DefaultMessageFormatter
> >
> > Regards,
> > Mateusz Zakarczemny
> >
>


Build failed in Jenkins: kafka-trunk-jdk8 #3572

2019-04-20 Thread Apache Jenkins Server
See 


Changes:

[bbejeck] KAFKA-7895: fix Suppress changelog restore (#6536)

--
[...truncated 4.76 MB...]
> Task :streams:upgrade-system-tests-20:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-20:test
> Task :streams:upgrade-system-tests-21:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-21:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-21:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-21:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-21:compileTestJava
> Task :streams:upgrade-system-tests-21:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-21:testClasses
> Task :streams:upgrade-system-tests-21:checkstyleTest
> Task :streams:upgrade-system-tests-21:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-21:test

> Task :streams:streams-scala:test

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionWithNamedRepartitionTopic STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionWithNamedRepartitionTopic PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionJava STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionJava PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegion STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegion PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaJoin STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaJoin PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaSimple STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaSimple PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaAggregate STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaAggregate PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaProperties STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaProperties PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaTransform STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaTransform PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsMaterialized 
STARTED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsMaterialized 
PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsJava STARTED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsJava PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWords STARTED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWords PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialized 
should create a Materialized with Serdes STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialized 
should create a Materialized with Serdes PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a store name should create a Materialized with Serdes and a store name 
STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a store name should create a Materialized with Serdes and a store name 
PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a window store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a window store supplier should create a Materialized with Serdes and a 
store supplier PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a key value store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a key value store supplier should create a Materialized with Serdes and a 
store supplier PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a session store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a session store supplier should create a Materialized with Serdes and a 
store supplier PASSED


[jira] [Resolved] (KAFKA-7895) KTable suppress operator emitting more than one record for the same key per window

2019-04-20 Thread Bill Bejeck (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-7895.

Resolution: Fixed

> KTable suppress operator emitting more than one record for the same key per
> window
> -
>
> Key: KAFKA-7895
> URL: https://issues.apache.org/jira/browse/KAFKA-7895
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0, 2.2.0, 2.1.1
>Reporter: prasanthi
>Assignee: John Roesler
>Priority: Blocker
> Fix For: 2.1.2, 2.2.1
>
>
> Hi, we are using Kafka Streams to get the aggregated counts per vendor (key)
> within a specified window.
> Here's how we configured the suppress operator to emit one final record per
> key/window.
> {code:java}
> KTable<Windowed<Integer>, Long> windowedCount = groupedStream
>  .windowedBy(TimeWindows.of(Duration.ofMinutes(1)).grace(ofMillis(5L)))
>  .count(Materialized.with(Serdes.Integer(),Serdes.Long()))
>  .suppress(Suppressed.untilWindowCloses(unbounded()));
> {code}
> But we are getting more than one record for the same key/window as shown 
> below.
> {code:java}
> [KTABLE-TOSTREAM-10]: [131@154906704/154906710], 1039
> [KTABLE-TOSTREAM-10]: [131@154906704/154906710], 1162
> [KTABLE-TOSTREAM-10]: [9@154906704/154906710], 6584
> [KTABLE-TOSTREAM-10]: [88@154906704/154906710], 107
> [KTABLE-TOSTREAM-10]: [108@154906704/154906710], 315
> [KTABLE-TOSTREAM-10]: [119@154906704/154906710], 119
> [KTABLE-TOSTREAM-10]: [154@154906704/154906710], 746
> [KTABLE-TOSTREAM-10]: [154@154906704/154906710], 809{code}
> Could you please take a look?
> Thanks
>  
>  
> Added by John:
> Acceptance Criteria:
>  * add suppress to system tests, such that it's exercised with crash/shutdown 
> recovery, rebalance, etc.
>  ** [https://github.com/apache/kafka/pull/6278]
>  * make sure that there's some system test coverage with caching disabled.
>  ** Follow-on ticket: https://issues.apache.org/jira/browse/KAFKA-7943
>  * test with tighter time bounds with windows of say 30 seconds and use 
> system time without adding any extra time for verification
>  ** Follow-on ticket: https://issues.apache.org/jira/browse/KAFKA-7944
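
For context, the snippet quoted above can be placed into a complete (if
simplified) application as in the sketch below; the topic name, application id,
and bootstrap servers are illustrative and not taken from this report. Printing
the suppressed stream is what produces [KTABLE-TOSTREAM-...] output lines like
those shown, and with untilWindowCloses the expectation is a single final
record per key and window.

{code:java}
import java.time.Duration;
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Printed;
import org.apache.kafka.streams.kstream.Suppressed;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.Windowed;

public class SuppressedCountSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Hypothetical input topic keyed by vendor id.
        KStream<Integer, String> events =
                builder.stream("vendor-events", Consumed.with(Serdes.Integer(), Serdes.String()));

        KTable<Windowed<Integer>, Long> windowedCount = events
                .groupByKey()
                .windowedBy(TimeWindows.of(Duration.ofMinutes(1)).grace(Duration.ofMillis(5L)))
                .count(Materialized.with(Serdes.Integer(), Serdes.Long()))
                .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()));

        // One final count per key/window is expected downstream of suppress.
        windowedCount.toStream().print(Printed.toSysOut());

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "suppress-sketch");    // illustrative
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // illustrative
        new KafkaStreams(builder.build(), props).start();
    }
}
{code}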



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-4332) kafka.api.UserQuotaTest.testThrottledProducerConsumer transient unit test failure

2019-04-20 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-4332.

Resolution: Duplicate

> kafka.api.UserQuotaTest.testThrottledProducerConsumer transient unit test 
> failure
> -
>
> Key: KAFKA-4332
> URL: https://issues.apache.org/jira/browse/KAFKA-4332
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core, unit tests
>Affects Versions: 0.10.1.0, 2.3.0
>Reporter: Jun Rao
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> kafka.api.UserQuotaTest > testThrottledProducerConsumer FAILED
> java.lang.AssertionError: Should have been throttled



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8268) Flaky Test SaslSslAdminClientIntegrationTest#testSeekAfterDeleteRecords

2019-04-20 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8268:
--

 Summary: Flaky Test 
SaslSslAdminClientIntegrationTest#testSeekAfterDeleteRecords
 Key: KAFKA-8268
 URL: https://issues.apache.org/jira/browse/KAFKA-8268
 Project: Kafka
  Issue Type: Bug
  Components: core, unit tests
Affects Versions: 2.3.0
Reporter: Matthias J. Sax
 Fix For: 2.3.0


[https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3570/tests]
{quote}java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.TimeoutException: Aborted due to timeout.
at 
org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
 
at 
org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at 
org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
at 
org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
at 
kafka.api.AdminClientIntegrationTest.testSeekAfterDeleteRecords(AdminClientIntegrationTest.scala:775){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Permission to add KIP

2019-04-20 Thread Guozhang Wang
Hello Cyrus,

You're good to go.

Guozhang


On Sat, Apr 20, 2019 at 9:12 AM Cyrus Vafadari  wrote:

> Hello,
>
> I'd like to request permission to create a new KIP in the Apache Kafka
> project!
>
> Thanks,
>
> Cyrus Vafadari
>


-- 
-- Guozhang


Permission to add KIP

2019-04-20 Thread Cyrus Vafadari
Hello,

I'd like to request permission to create a new KIP in the Apache Kafka
project!

Thanks,

Cyrus Vafadari


Re: [VOTE] KIP-421: Automatically resolve external configurations.

2019-04-20 Thread Randall Hauch
+1

Thanks, Tejal.

Randall

On Thu, Apr 18, 2019 at 3:02 PM TEJAL ADSUL  wrote:

> Hi All,
>
> As we have reached a consensus on the design, I would like to start a vote
> for KIP-421. Below are the links for this proposal:
>
> KIP Link:
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=100829515
> DiscussionThread:
> https://lists.apache.org/thread.html/a2f834d876e9f8fb3977db794bf161818c97f7f481edd1b10449d89f@%3Cdev.kafka.apache.org%3E
>
> Thanks,
> Tejal
>


Jenkins build is back to normal : kafka-2.1-jdk8 #167

2019-04-20 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk8 #3571

2019-04-20 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Java8 cleanup (#6599)

[github] MINOR: Java8 cleanup (#6598)

--
[...truncated 2.37 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 

Jenkins build is back to normal : kafka-2.2-jdk8 #87

2019-04-20 Thread Apache Jenkins Server
See