Jenkins build is back to normal : kafka-trunk-jdk8 #4309

2020-03-10 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-2.3-jdk8 #184

2020-03-10 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-9701) Consumer could catch InconsistentGroupProtocolException during rebalance

2020-03-10 Thread Boyang Chen (Jira)
Boyang Chen created KAFKA-9701:
--

 Summary: Consumer could catch InconsistentGroupProtocolException 
during rebalance
 Key: KAFKA-9701
 URL: https://issues.apache.org/jira/browse/KAFKA-9701
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Boyang Chen


The INFO log shows that we unexpectedly hit an InconsistentGroupProtocolException:

[2020-03-10T17:16:53-07:00] 
(streams-soak-2-5-eos-broker-2-5_soak_i-00067445452c82fe8_streamslog) 
[2020-03-11 *00:16:53,382*] INFO 
[stream-soak-test-d3da8597-c371-450e-81d9-72aea6a26949-StreamThread-1] 
stream-client [stream-soak-test-d3da8597-c371-450e-81d9-72aea6a26949] State 
transition from REBALANCING to RUNNING (org.apache.kafka.streams.KafkaStreams)

 

[2020-03-10T17:16:53-07:00] 
(streams-soak-2-5-eos-broker-2-5_soak_i-00067445452c82fe8_streamslog) 
[2020-03-11 *00:16:53,384*] WARN [kafka-producer-network-thread | 
stream-soak-test-d3da8597-c371-450e-81d9-72aea6a26949-StreamThread-1-0_1-producer]
 stream-thread 
[stream-soak-test-d3da8597-c371-450e-81d9-72aea6a26949-StreamThread-1] task 
[0_1] Error sending record to topic node-name-repartition due to Producer 
attempted an operation with an old epoch. Either there is a newer producer with 
the same transactionalId, or the producer's transaction has been expired by the 
broker.; No more records will be sent and no more offsets will be recorded for 
this task.

 

 

[2020-03-10T17:16:53-07:00] 
(streams-soak-2-5-eos-broker-2-5_soak_i-00067445452c82fe8_streamslog) 
[2020-03-11 *00:16:53,521*] INFO 
[stream-soak-test-d3da8597-c371-450e-81d9-72aea6a26949-StreamThread-1] 
[Consumer 
clientId=stream-soak-test-d3da8597-c371-450e-81d9-72aea6a26949-StreamThread-1-consumer,
 groupId=stream-soak-test] Member 
stream-soak-test-d3da8597-c371-450e-81d9-72aea6a26949-StreamThread-1-consumer-d1c3c796-0bfb-4c1c-9fb4-5a807d8b53a2
 sending LeaveGroup request to coordinator 
ip-172-31-20-215.us-west-2.compute.internal:9092 (id: 2147482646 rack: null) 
due to the consumer unsubscribed from all topics 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator)

 

[2020-03-10T17:16:54-07:00] 
(streams-soak-2-5-eos-broker-2-5_soak_i-00067445452c82fe8_streamslog) 
[2020-03-11 *00:16:53,798*] ERROR 
[stream-soak-test-d3da8597-c371-450e-81d9-72aea6a26949-StreamThread-1] 
stream-thread 
[stream-soak-test-d3da8597-c371-450e-81d9-72aea6a26949-StreamThread-1] 
Encountered the following unexpected Kafka exception during processing, this 
usually indicate Streams internal errors: 
(org.apache.kafka.streams.processor.internals.StreamThread)

[2020-03-10T17:16:54-07:00] 
(streams-soak-2-5-eos-broker-2-5_soak_i-00067445452c82fe8_streamslog) 
org.apache.kafka.common.errors.InconsistentGroupProtocolException: The group 
member's supported protocols are incompatible with those of existing members or 
first group member tried to join with empty protocol type or empty protocol 
list.

 

We potentially need further logging to understand this.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9700) Negative estimatedCompressionRatio leads to misjudgment about if there is no room

2020-03-10 Thread jiamei xie (Jira)
jiamei xie created KAFKA-9700:
-

 Summary: Negative estimatedCompressionRatio leads to misjudgment 
about if there is no room
 Key: KAFKA-9700
 URL: https://issues.apache.org/jira/browse/KAFKA-9700
 Project: Kafka
  Issue Type: Bug
  Components: clients
Reporter: jiamei xie


* When I ran the following command:
bin/kafka-producer-perf-test.sh --topic test --num-records 5000 --throughput -1 --record-size 5000 --producer-props bootstrap.servers=server04:9092 acks=1 buffer.memory=67108864 batch.size 65536 compression.type=zstd
there was a warning:
[2020-03-06 17:36:50,216] WARN [Producer clientId=producer-1] Got error produce 
response in correlation id 3261 on topic-partition test-1, splitting and 
retrying (2147483647 attempts left). Error: MESSAGE_TOO_LARGE 
(org.apache.kafka.clients.producer.internals.Sender)

* The batch size (65536) is smaller than max.message.bytes (1048588), so that is 
not the root cause.


* I added some logging in CompressionRatioEstimator.updateEstimation and found 
there were negative currentEstimation values. The following is the method with 
the logging I added:

public static float updateEstimation(String topic, CompressionType type, float observedRatio) {
    float[] compressionRatioForTopic = getAndCreateEstimationIfAbsent(topic);
    float currentEstimation = compressionRatioForTopic[type.id];
    synchronized (compressionRatioForTopic) {
        if (observedRatio > currentEstimation) {
            compressionRatioForTopic[type.id] = Math.max(currentEstimation + COMPRESSION_RATIO_DETERIORATE_STEP, observedRatio);
        } else if (observedRatio < currentEstimation) {
            compressionRatioForTopic[type.id] = currentEstimation - COMPRESSION_RATIO_IMPROVING_STEP;
            log.warn("currentEstimation is {}, COMPRESSION_RATIO_IMPROVING_STEP is {}, compressionRatioForTopic[type.id] is {}, type.id is {}",
                    currentEstimation, COMPRESSION_RATIO_IMPROVING_STEP, compressionRatioForTopic[type.id], type.id);
        }
    }
    return compressionRatioForTopic[type.id];
}


The observedRatio is smaller than COMPRESSION_RATIO_IMPROVING_STEP in some 
cases, so subtracting the full step can drive the estimate below zero. So I 
think the else if block should be changed to:

else if (observedRatio < currentEstimation) {
    compressionRatioForTopic[type.id] = Math.max(currentEstimation - COMPRESSION_RATIO_IMPROVING_STEP, observedRatio);
}
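To make the failure mode concrete, here is a standalone sketch contrasting the unclamped update with the proposed clamped one. The class name and the step value are illustrative assumptions, not the actual CompressionRatioEstimator constants:

```java
// Standalone sketch of the estimator update; names and the step value are
// assumptions for illustration, not the real CompressionRatioEstimator.
public class RatioSketch {
    static final float COMPRESSION_RATIO_IMPROVING_STEP = 0.005f; // assumed value

    // Current behavior: subtract the full step whenever the observed ratio
    // is below the estimate, even if that overshoots past the observation.
    static float updateUnclamped(float current, float observed) {
        return observed < current ? current - COMPRESSION_RATIO_IMPROVING_STEP : current;
    }

    // Proposed fix: never let the estimate drop below the observed ratio.
    static float updateClamped(float current, float observed) {
        return observed < current
                ? Math.max(current - COMPRESSION_RATIO_IMPROVING_STEP, observed)
                : current;
    }

    public static void main(String[] args) {
        float current = 0.004f;   // estimate already smaller than the step
        float observed = 0.001f;  // observed ratio below the current estimate
        System.out.println(updateUnclamped(current, observed) < 0f);       // prints true
        System.out.println(updateClamped(current, observed) == observed);  // prints true
    }
}
```

With an estimate of 0.004 and a step of 0.005, a single unclamped update already goes negative, while the clamped version bottoms out at the observed ratio.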







[jira] [Created] (KAFKA-9699) kafka server fail to start due to old version of sed not support -E option when detect java version in kafka-run-class.sh

2020-03-10 Thread qiang Liu (Jira)
qiang Liu created KAFKA-9699:


 Summary: kafka server fail to start due to old version of sed not 
support -E option when detect java version in kafka-run-class.sh
 Key: KAFKA-9699
 URL: https://issues.apache.org/jira/browse/KAFKA-9699
 Project: Kafka
  Issue Type: Improvement
  Components: admin
Affects Versions: 2.4.0
 Environment: Red Hat Enterprise Linux Server release 5.6 (Tikanga)
GNU sed version 4.1.5
Reporter: qiang Liu


The Kafka server fails to start because old versions of sed do not support the 
-E option, which kafka-run-class.sh uses to detect the Java version.

Details follow:

[~]$ sed --version
GNU sed version 4.1.5
Copyright (C) 2003 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE,
to the extent permitted by law.


[~]$ sed -E
sed: invalid option -- E





Build failed in Jenkins: kafka-trunk-jdk8 #4308

2020-03-10 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9344: Override default values inside ConsumerConfigs (#7876)


--
[...truncated 5.85 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE

[jira] [Created] (KAFKA-9698) Wrong default max.message.bytes in document

2020-03-10 Thread jiamei xie (Jira)
jiamei xie created KAFKA-9698:
-

 Summary: Wrong default max.message.bytes in document
 Key: KAFKA-9698
 URL: https://issues.apache.org/jira/browse/KAFKA-9698
 Project: Kafka
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.4.0
Reporter: jiamei xie


The broker default for max.message.bytes was changed to 1048588 in 
https://issues.apache.org/jira/browse/KAFKA-4203, but the default value shown 
at http://kafka.apache.org/documentation/ is still 112.





Jenkins build is back to normal : kafka-trunk-jdk11 #1226

2020-03-10 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-2.5-jdk8 #60

2020-03-10 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-9658; Fix user quota removal (#8232)


--
[...truncated 2.89 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

[RESULTS] [VOTE] Release Kafka version 2.4.1

2020-03-10 Thread Bill Bejeck
This vote passes with 10 +1 votes (4 binding) and no 0 or -1 votes.

+1 votes
PMC Members (in voting order):
* David Arthur
* Colin McCabe
* Vahid Hashemian
* Gwen Shapira

Committers:
* Mickael Maison

Community (in voting order):
* Eric Lalonde
* Eno Thereska
* Tom Bentley
* Sean Glover
* Levani Kokhreidze

Vote thread:
https://www.mail-archive.com/dev@kafka.apache.org/msg105496.html

I'll continue with the release process, and the release announcement will
follow in the next few days.

Bill Bejeck


Re: Subject: [VOTE] 2.4.1 RC0

2020-03-10 Thread Bill Bejeck
Ismael,

Good point. Since KAFKA-9675 doesn't meet the bar of being a regression in
this release or something more severe (i.e., data loss), we'll move forward.

Thanks,
Bill

On Mon, Mar 9, 2020 at 5:02 PM Ismael Juma  wrote:

> Is this a blocker given that it's been like this for months and no-one
> noticed? 2.4.1 seemingly has all the votes needed for the release. Why not
> go ahead with it. When KAFKA-9675 is merged, it can be included in the next
> release.
>
> Ismael
>
> On Mon, Mar 9, 2020 at 8:43 PM Bill Bejeck  wrote:
>
> > Thanks to everyone for voting.
> >
> > A new blocker has surfaced
> > https://issues.apache.org/jira/browse/KAFKA-9675,
> > so I'll do another RC soon.
> >
> > Thanks again.
> > Bill
> >
> > On Mon, Mar 9, 2020 at 1:35 PM Levani Kokhreidze  >
> > wrote:
> >
> > > +1 non-binding.
> > >
> > > - Built from source
> > > - Ran unit tests. All passed.
> > > - Quickstart passed.
> > >
> > > Looking forward upgrading to 2.4.1
> > >
> > > Regards,
> > > Levani
> > >
> > > On Mon, 9 Mar 2020, 17:11 Sean Glover, 
> > wrote:
> > >
> > > > +1 (non-binding).  I built from source and ran the unit test suite
> > > > successfully.
> > > >
> > > > Thanks for running this release.  I'm looking forward to upgrading to
> > > > 2.4.1.
> > > >
> > > > Sean
> > > >
> > > > On Mon, Mar 9, 2020 at 8:07 AM Mickael Maison <
> > mickael.mai...@gmail.com>
> > > > wrote:
> > > >
> > > > > Thanks for running the release!
> > > > > +1 (binding)
> > > > >
> > > > > - Verified signatures
> > > > > - Built from source
> > > > > - Ran unit tests, all passed
> > > > > - Ran through quickstart steps, all worked
> > > > >
> > > > > On Mon, Mar 9, 2020 at 11:04 AM Tom Bentley 
> > > wrote:
> > > > > >
> > > > > > +1 (non-binding)
> > > > > >
> > > > > > Built from source, all unit tests passed.
> > > > > >
> > > > > > Thanks Bill.
> > > > > >
> > > > > > On Mon, Mar 9, 2020 at 3:44 AM Gwen Shapira 
> > > wrote:
> > > > > >
> > > > > > > +1 (binding)
> > > > > > >
> > > > > > > Verified signatures, built jars from source, quickstart passed
> > and
> > > > > local
> > > > > > > unit tests all passed.
> > > > > > >
> > > > > > > Thank you for the release Bill!
> > > > > > >
> > > > > > > Gwen Shapira
> > > > > > > Engineering Manager | Confluent
> > > > > > > 650.450.2760 | @gwenshap
> > > > > > > Follow us: Twitter | blog
> > > > > > >
> > > > > > > On Sat, Mar 07, 2020 at 8:15 PM, Vahid Hashemian <
> > > > > > > vahid.hashem...@gmail.com > wrote:
> > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > +1 (binding)
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > Verified signature, built from source, and ran quickstart
> > > > > successfully
> > > > > > > > (using openjdk version "11.0.6"). I also ran unit tests
> locally
> > > > which
> > > > > > > > resulted in a few flaky tests for which there are already
> open
> > > > Jiras:
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > ReassignPartitionsClusterTest.shouldMoveSinglePartitionWithinBroker
> > > > > > > > ConsumerBounceTest.testCloseDuringRebalance
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > >
> > > >
> > >
> >
> ConsumerBounceTest.testConsumerReceivesFatalExceptionWhenGroupPassesMaxSize
> > > > > > > >
> > > > >
> > >
> PlaintextEndToEndAuthorizationTest.testNoConsumeWithDescribeAclViaAssign
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > >
> > > >
> > >
> >
> SaslClientsWithInvalidCredentialsTest.testManualAssignmentConsumerWithAuthenticationFailure
> > > > > > > > SaslMultiMechanismConsumerTest.testCoordinatorFailover
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > Thanks for running the release Bill.
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > Regards,
> > > > > > > > --Vahid
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > On Fri, Mar 6, 2020 at 9:20 AM Colin McCabe < cmccabe@
> apache.
> > > > org (
> > > > > > > > cmcc...@apache.org ) > wrote:
> > > > > > > >
> > > > > > > >
> > > > > > > >>
> > > > > > > >>
> > > > > > > >> +1 (binding)
> > > > > > > >>
> > > > > > > >>
> > > > > > > >>
> > > > > > > >> Checked the git hash and branch, looked at the docs a bit.
> Ran
> > > > > > > quickstart
> > > > > > > >> (although not the connect or streams parts). Looks good.
> > > > > > > >>
> > > > > > > >>
> > > > > > > >>
> > > > > > > >> best,
> > > > > > > >> Colin
> > > > > > > >>
> > > > > > > >>
> > > > > > > >>
> > > > > > > >> On Fri, Mar 6, 2020, at 07:31, David Arthur wrote:
> > > > > > > >>
> > > > > > > >>
> > > > > > > >>>
> > > > > > > >>>
> > > > > > > >>> +1 (binding)
> > > > > > > >>>
> > > > > > > >>>
> > > > > > > >>>
> > > > > > > >>> Download kafka_2.13-2.4.1 and verified signature, ran
> > > quickstart,
> > > > > > > >>> everything looks good.
> > > > > > > >>>
> > > > > > > >>>
> > > > > > > >>>
> > > > > > > >>> Thanks for running this release, Bill!
> > > 

[jira] [Resolved] (KAFKA-9658) Removing default user quota doesn't take effect until broker restart

2020-03-10 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-9658.

Fix Version/s: 2.4.2
   2.3.2
   2.5.1
   Resolution: Fixed

> Removing default user quota doesn't take effect until broker restart
> 
>
> Key: KAFKA-9658
> URL: https://issues.apache.org/jira/browse/KAFKA-9658
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.0.1, 2.1.1, 2.2.2, 2.4.0, 2.3.1
>Reporter: Anna Povzner
>Assignee: Anna Povzner
>Priority: Major
> Fix For: 2.5.1, 2.3.2, 2.4.2
>
>
> To reproduce (for any quota type: produce, consume, and request):
> Example with consumer quota, assuming no user/client quotas are set initially.
> 1. Set default user consumer quotas:
> ./kafka-configs.sh --zookeeper  --alter --add-config 'consumer_byte_rate=1' --entity-type users --entity-default
> 2. Send some consume load for some user, say user1.
> 3. Remove the default user consumer quota using:
> ./kafka-configs.sh --zookeeper  --alter --delete-config 'consumer_byte_rate' --entity-type users --entity-default
> Result: --describe (as below) returns the correct result that there is no quota, 
> but the quota bound in ClientQuotaManager.metrics does not get updated for users 
> that were sending load, which causes the broker to continue throttling 
> requests with the previously set quota.
> /opt/confluent/bin/kafka-configs.sh --zookeeper   --describe --entity-type users --entity-default





[jira] [Created] (KAFKA-9697) ControlPlaneNetworkProcessorAvgIdlePercent is always NaN

2020-03-10 Thread James Cheng (Jira)
James Cheng created KAFKA-9697:
--

 Summary: ControlPlaneNetworkProcessorAvgIdlePercent is always NaN
 Key: KAFKA-9697
 URL: https://issues.apache.org/jira/browse/KAFKA-9697
 Project: Kafka
  Issue Type: Improvement
  Components: metrics
Affects Versions: 2.3.0
Reporter: James Cheng


I have a broker running Kafka 2.3.0. The value of 
kafka.network:type=SocketServer,name=ControlPlaneNetworkProcessorAvgIdlePercent 
is always "NaN".

Is that normal, or is there a problem with the metric?

I am running Kafka 2.3.0. I have not checked this in newer/older versions.

 

 





[jira] [Created] (KAFKA-9696) Document the control plane metrics that were added in KIP-402

2020-03-10 Thread James Cheng (Jira)
James Cheng created KAFKA-9696:
--

 Summary: Document the control plane metrics that were added in 
KIP-402
 Key: KAFKA-9696
 URL: https://issues.apache.org/jira/browse/KAFKA-9696
 Project: Kafka
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.4.0, 2.3.0, 2.2.0
Reporter: James Cheng


KIP-402 (in https://issues.apache.org/jira/browse/KAFKA-7719) added new metrics 
of

 

kafka.network:type=SocketServer,name=ControlPlaneNetworkProcessorAvgIdlePercent

kafka.network:type=SocketServer,name=ControlPlaneExpiredConnectionsKilledCount

 

There is no documentation on these metrics on 
http://kafka.apache.org/documentation/. We should update the documentation to 
describe these new metrics.

 

I'm not 100% familiar with them, but it appears they are measuring the same 
thing as 

kafka.network:type=SocketServer,name=NetworkProcessorAvgIdlePercent

kafka.network:type=SocketServer,name=ExpiredConnectionsKilledCount

 

except for the control plane, instead of the data plane.

 

 





[jira] [Created] (KAFKA-9695) AdminClient allows null topic configs, but broker throws NPE

2020-03-10 Thread Rajini Sivaram (Jira)
Rajini Sivaram created KAFKA-9695:
-

 Summary: AdminClient allows null topic configs, but broker throws 
NPE
 Key: KAFKA-9695
 URL: https://issues.apache.org/jira/browse/KAFKA-9695
 Project: Kafka
  Issue Type: Bug
Reporter: Rajini Sivaram
Assignee: Rajini Sivaram
 Fix For: 2.6.0


Config entries may contain null values, but the broker's AdminManager throws a 
NullPointerException, which surfaces to the client as an UnknownServerException. 
We should handle null values in configs.
{code:java}
[2020-03-10 21:56:07,904] ERROR [Admin Manager on Broker 0]: Error processing 
create topic request CreatableTopic(name='topic', numPartitions=2, 
replicationFactor=3, assignments=[], 
configs=[CreateableTopicConfig(name='message.format.version', value=null), 
CreateableTopicConfig(name='compression.type', value='producer')]) 
(kafka.server.AdminManager:76)
java.lang.NullPointerException
at java.util.Hashtable.put(Hashtable.java:460)
at java.util.Properties.setProperty(Properties.java:166)
at 
kafka.server.AdminManager.$anonfun$createTopics$3(AdminManager.scala:99)
at scala.collection.Iterator.foreach(Iterator.scala:941)
at scala.collection.Iterator.foreach$(Iterator.scala:941)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
at scala.collection.IterableLike.foreach(IterableLike.scala:74)
at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
at 
kafka.server.AdminManager.$anonfun$createTopics$2(AdminManager.scala:98)
at 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
at 
scala.collection.mutable.HashMap$$anon$2.$anonfun$foreach$3(HashMap.scala:158)
at scala.collection.mutable.HashTable.foreachEntry(HashTable.scala:237)
at scala.collection.mutable.HashTable.foreachEntry$(HashTable.scala:230)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:44)
at scala.collection.mutable.HashMap$$anon$2.foreach(HashMap.scala:158)
at scala.collection.TraversableLike.map(TraversableLike.scala:238)
at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
at scala.collection.AbstractTraversable.map(Traversable.scala:108)
at kafka.server.AdminManager.createTopics(AdminManager.scala:91)
at 
kafka.server.KafkaApis.handleCreateTopicsRequest(KafkaApis.scala:1701)
at kafka.server.KafkaApis.handle(KafkaApis.scala:147)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:70)
 {code}
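The root cause visible in the stack trace is that java.util.Properties extends Hashtable, whose put() rejects null keys and values. A minimal standalone sketch (class and method names are hypothetical, not Kafka code):

```java
// Minimal sketch of the failure: copying a config entry with a null value
// into java.util.Properties throws, because Properties.setProperty delegates
// to Hashtable.put, which rejects nulls. Names here are hypothetical.
import java.util.Properties;

public class NullConfigSketch {
    static boolean setPropertyThrows(String key, String value) {
        Properties props = new Properties();
        try {
            props.setProperty(key, value); // delegates to Hashtable.put
            return false;
        } catch (NullPointerException e) {
            return true; // a null value (or key) triggers the NPE seen above
        }
    }

    public static void main(String[] args) {
        System.out.println(setPropertyThrows("message.format.version", null)); // prints true
        System.out.println(setPropertyThrows("compression.type", "producer")); // prints false
    }
}
```

Checking for null before the copy, and rejecting the entry with an explicit, descriptive error instead, would keep the failure from surfacing as an opaque UnknownServerException.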





Build failed in Jenkins: kafka-trunk-jdk8 #4307

2020-03-10 Thread Apache Jenkins Server
See 


Changes:

[github] HOTFIX: Task#dirtyClose should not throw (#8258)

[github] KAFKA-9658; Fix user quota removal (#8232)


--
[...truncated 2.89 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task 

Build failed in Jenkins: kafka-trunk-jdk11 #1225

2020-03-10 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9686: MockConsumer#endOffsets should be idempotent (#8255)


--
[...truncated 5.88 MB...]

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task 

Re: [VOTE] 2.5.0 RC1

2020-03-10 Thread David Arthur
Thanks for the test failure reports, Tom. Tracking (and fixing) these is
important and will make future release managers have an easier time :)

-David

On Tue, Mar 10, 2020 at 10:16 AM Tom Bentley  wrote:

> Hi David,
>
> I verified signatures, built the tagged branch and ran unit and integration
> tests. I found some flaky tests, as follows:
>
> https://issues.apache.org/jira/browse/KAFKA-9691 (new)
> https://issues.apache.org/jira/browse/KAFKA-9692 (new)
> https://issues.apache.org/jira/browse/KAFKA-9283 (already reported)
>
> Many thanks,
>
> Tom
>
> On Tue, Mar 10, 2020 at 3:28 AM David Arthur  wrote:
>
> > Hello Kafka users, developers and client-developers,
> >
> > This is the second candidate for release of Apache Kafka 2.5.0. The first
> > release candidate included an erroneous NOTICE file, so another RC was
> > needed to fix that.
> >
> > This is a major release of Kafka which includes many new features,
> > improvements, and bug fixes including:
> >
> > * TLS 1.3 support (1.2 is now the default)
> > * Co-groups for Kafka Streams
> > * Incremental rebalance for Kafka Consumer
> > * New metrics for better operational insight
> > * Upgrade Zookeeper to 3.5.7
> > * Deprecate support for Scala 2.11
> >
> > Release notes for the 2.5.0 release:
> > https://home.apache.org/~davidarthur/kafka-2.5.0-rc1/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Monday, March 16th 2020 5pm PT
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > https://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > https://home.apache.org/~davidarthur/kafka-2.5.0-rc1/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >
> > * Javadoc:
> > https://home.apache.org/~davidarthur/kafka-2.5.0-rc1/javadoc/
> >
> > * Tag to be voted upon (off 2.5 branch) is the 2.5.0 tag:
> > https://github.com/apache/kafka/releases/tag/2.5.0-rc1
> >
> > * Documentation:
> > https://kafka.apache.org/25/documentation.html
> >
> > * Protocol:
> > https://kafka.apache.org/25/protocol.html
> >
> > * Links to successful Jenkins builds for the 2.5 branch to follow
> >
> > Thanks,
> > David Arthur
> >
>


-- 
-David


Re: [DISCUSS] KIP-574: CLI Dynamic Configuration with file input

2020-03-10 Thread Aneel Nazareth
After reading a bit more about it in the Kubernetes case, I think it's
reasonable to do this and be explicit that we're ignoring the values,
just deleting all keys that appear in the file.
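A minimal sketch of those semantics (hypothetical helper, not the actual ConfigCommand code): parse the file contents as Java properties, keep only the key names, and deliberately discard the values.

```java
import java.io.IOException;
import java.io.StringReader;
import java.io.UncheckedIOException;
import java.util.Properties;
import java.util.Set;

// Hypothetical helper, not the actual ConfigCommand code: parse the file
// contents as Java properties and keep only the key names; the values
// ("1", "2", ...) are deliberately ignored.
public class DeleteConfigFileSketch {
    static Set<String> keysToDelete(String fileContents) {
        Properties props = new Properties();
        try {
            props.load(new StringReader(fileContents));
        } catch (IOException e) { // cannot happen for an in-memory reader
            throw new UncheckedIOException(e);
        }
        return props.stringPropertyNames();
    }

    public static void main(String[] args) {
        // Even if the broker currently holds a=3, the key "a" is deleted,
        // because only the key names in the file matter.
        System.out.println(keysToDelete("a=1\nb=2"));
    }
}
```

Under these semantics, the interleaving question from the thread below resolves as: a=3 is deleted even though the file says a=1.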

I've updated the KIP wiki page to reflect that:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-574%3A+CLI+Dynamic+Configuration+with+file+input

And updated my sample PR:
https://github.com/apache/kafka/pull/8184

If there are no further comments, I'll request a vote in a few days.

Thanks for the feedback!

On Mon, Mar 9, 2020 at 1:24 PM Aneel Nazareth  wrote:
>
> Hi David,
>
> Is the expected behavior that the keys are deleted without checking the 
> values?
>
> Let's say I had this file new.properties:
> a=1
> b=2
>
> And ran:
>
> bin/kafka-configs --bootstrap-server localhost:9092 \
>   --entity-type brokers --entity-default \
>   --alter --add-config-file new.properties
>
> It seems clear what should happen if I run this immediately:
>
> bin/kafka-configs --bootstrap-server localhost:9092 \
>   --entity-type brokers --entity-default \
>   --alter --delete-config-file new.properties
>
> (Namely that both a and b would now have no values in the config)
>
> But what if this were run in-between:
>
> bin/kafka-configs --bootstrap-server localhost:9092 \
>   --entity-type brokers --entity-default \
>   --alter --add-config a=3
>
> Would it be surprising if the key/value pair a=3 was deleted, even
> though the config that is in the file is a=1? Or would that be
> expected?
>
> On Mon, Mar 9, 2020 at 1:02 PM David Jacot  wrote:
> >
> > Hi Colin,
> >
> > Yes, you're right. This is weird but convenient because you don't have to
> > duplicate
> > the "keys". I was thinking about the kubernetes API which allows to create
> > a Pod
> > based on a file and allows to delete it as well with the same file. I have
> > always found
> > this convenient, especially when doing local tests.
> >
> > Best,
> > David
> >
> > On Mon, Mar 9, 2020 at 6:35 PM Colin McCabe  wrote:
> >
> > > Hi Aneel,
> > >
> > > Thanks for the KIP.  I like the idea.
> > >
> > > You mention that "input from STDIN can be used instead of a file on
> > > disk."  The example given in the KIP seems to suggest that the command
> > > defaults to reading from STDIN if no argument is given to 
> > > --add-config-file.
> > >
> > > I would argue against this particular command-line pattern.  From the
> > > user's point of view, if they mess up and forget to supply an argument, or
> > > for some reason the parser doesn't treat something as an argument, the
> > > program will appear to hang in a confusing way.
> > >
> > > Instead, it would be better to follow the traditional UNIX pattern where a
> > > dash indicates that STDIN should be read.  So "--add-config-file -" would
> > > indicate that the program should read from STDIN.  This would be difficult
> > > to trigger accidentally, and more in line with the traditional 
> > > conventions.
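As an illustration of that convention (hypothetical sketch, not the real ConfigCommand parser): a lone "-" explicitly opts in to STDIN, and anything else is treated as a file path.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.Reader;

// Illustrative sketch of the UNIX "-" convention; these are not the real
// ConfigCommand names. A lone "-" explicitly opts in to STDIN; anything
// else is treated as a file path.
public class DashStdinSketch {
    static boolean isStdinRequested(String arg) {
        return "-".equals(arg);
    }

    static Reader openConfigSource(String arg) throws IOException {
        return isStdinRequested(arg)
            ? new InputStreamReader(System.in)          // --add-config-file -
            : new BufferedReader(new FileReader(arg));  // --add-config-file new.properties
    }
}
```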
> > >
> > > On Mon, Mar 9, 2020, at 08:47, David Jacot wrote:
> > > > I wonder if we should also add a `--delete-config-file` as a counterpart
> > > of
> > > > `--add-config-file`. It would be a bit weird to use a properties file in
> > > > this case as the values are not necessary but it may be handy to have 
> > > > the
> > > > possibility to remove the configurations which have been set. Have you
> > > > considered this?
> > >
> > > Hi David,
> > >
> > > That's an interesting idea.  However, I think it might be confusing to
> > > users to supply a file, and then have the values supplied in that file be
> > > ignored.  Is there really a case where we need to do this (as opposed to
> creating a file with blank values, or just passing the keys to
> --delete-config)?
> > >
> > > best,
> > > Colin
> > >
> > > >
> > > > David
> > > >
> > > > On Thu, Feb 27, 2020 at 11:15 PM Aneel Nazareth 
> > > wrote:
> > > >
> > > > > I've created a PR for a potential implementation of this:
> > > > > https://github.com/apache/kafka/pull/8184 if we decide to go ahead
> > > with
> > > > > this KIP.
> > > > >
> > > > > On Wed, Feb 26, 2020 at 12:36 PM Aneel Nazareth 
> > > > > wrote:
> > > > >
> > > > > > Hi,
> > > > > >
> > > > > > I'd like to discuss adding a new argument to kafka-configs.sh
> > > > > > (ConfigCommand.scala).
> > > > > >
> > > > > > Recently I've been working on some things that require complex
> > > > > > configurations. I've chosen to represent them as JSON strings in my
> > > > > > server.properties. This works well, and I'm able to update the
> > > > > > configurations by editing server.properties and restarting the
> > > broker.
> > > > > I've
> > > > > > added the ability to dynamically configure them, and that works well
> > > > > using
> > > > > > the AdminClient. However, when I try to update these configurations
> > > using
> > > > > > kafka-configs.sh, I run into a problem. My configurations contain
> > > commas,
> > > > > > and kafka-configs.sh tries to break them up into key/value pairs at
> > > the
> > > > > > 

Jenkins build is back to normal : kafka-trunk-jdk8 #4306

2020-03-10 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-9562) Streams not making progress under heavy failures with EOS enabled on 2.5 branch

2020-03-10 Thread Boyang Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boyang Chen resolved KAFKA-9562.

Resolution: Fixed

> Streams not making progress under heavy failures with EOS enabled on 2.5 
> branch
> ---
>
> Key: KAFKA-9562
> URL: https://issues.apache.org/jira/browse/KAFKA-9562
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.5.0
>Reporter: John Roesler
>Assignee: Boyang Chen
>Priority: Blocker
>
> During soak testing in preparation for the 2.5.0 release, we have discovered 
> a case in which Streams appears to stop making progress. Specifically, this 
> is a failure-resilience test in which we inject network faults separating the 
> instances from the brokers roughly every twenty minutes.
> On 2.4, Streams would obviously spend a lot of time rebalancing under this 
> scenario, but would still make progress. However, on the current 2.5 branch, 
> Streams effectively stops making progress except rarely.
> This appears to be a severe regression, so I'm filing this ticket as a 2.5.0 
> release blocker.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9674) Task corruption should also close the producer if necessary

2020-03-10 Thread Boyang Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boyang Chen resolved KAFKA-9674.

Resolution: Fixed

> Task corruption should also close the producer if necessary
> ---
>
> Key: KAFKA-9674
> URL: https://issues.apache.org/jira/browse/KAFKA-9674
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
>
> The task revive call only transitions the task to CREATED state. It should 
> handle re-creating the task producer as well.
> Sequence is like:
>  # Task hits out of range exception and throws CorruptedException
>  # Task producer closed along with the task
>  # Task revived and rebalance triggered
>  # Task was assigned back to the same thread
>  # Trying to use task producer will throw as it has already been closed.
> The full log:
>  
> [2020-03-03T21:56:29-08:00] 
> (streams-soak-trunk-eos_soak_i-0eaa3f3a6a197f876_streamslog) [2020-03-04 
> 05:56:29,070] WARN 
> [stream-soak-test-93df69e6-1d85-4b6a-81a1-c6d554693e3f-StreamThread-3] 
> stream-thread 
> [stream-soak-test-93df69e6-1d85-4b6a-81a1-c6d554693e3f-StreamThread-3] 
> Encountered org.apache.kafka.clients.consumer.OffsetOutOfRangeException 
> fetching records from restore consumer for partitions 
> [stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-19-changelog-0], it 
> is likely that the consumer's position has fallen out of the topic partition 
> offset range because the topic was truncated or compacted on the broker, 
> marking the corresponding tasks as corrupted and re-initializing it later. 
> (org.apache.kafka.streams.processor.internals.StoreChangelogReader)
> [2020-03-03T21:56:29-08:00] 
> (streams-soak-trunk-eos_soak_i-0eaa3f3a6a197f876_streamslog) [2020-03-04 
> 05:56:29,071] WARN 
> [stream-soak-test-93df69e6-1d85-4b6a-81a1-c6d554693e3f-StreamThread-3] 
> stream-thread 
> [stream-soak-test-93df69e6-1d85-4b6a-81a1-c6d554693e3f-StreamThread-3] 
> Detected the states of tasks 
> \{1_0=[stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-19-changelog-0]}
>  are corrupted. Will close the task as dirty and re-create and bootstrap from 
> scratch. (org.apache.kafka.streams.processor.internals.StreamThread)
>  
> [2020-03-03T21:56:30-08:00] 
> (streams-soak-trunk-eos_soak_i-0eaa3f3a6a197f876_streamslog) [2020-03-04 
> 05:56:30,010] INFO 
> [stream-soak-test-93df69e6-1d85-4b6a-81a1-c6d554693e3f-StreamThread-3] 
> [Producer 
> clientId=stream-soak-test-93df69e6-1d85-4b6a-81a1-c6d554693e3f-StreamThread-3-1_0-producer,
>  transactionalId=stream-soak-test-1_0] Closing the Kafka producer with 
> timeoutMillis = 9223372036854775807 ms. 
> (org.apache.kafka.clients.producer.KafkaProducer)
>  
>  
> [2020-03-03T21:56:30-08:00] 
> (streams-soak-trunk-eos_soak_i-0eaa3f3a6a197f876_streamslog) [2020-03-04 
> 05:56:30,017] INFO 
> [stream-soak-test-93df69e6-1d85-4b6a-81a1-c6d554693e3f-StreamThread-3] 
> stream-thread 
> [stream-soak-test-93df69e6-1d85-4b6a-81a1-c6d554693e3f-StreamThread-3] task 
> [1_0] Closed clean (org.apache.kafka.streams.processor.internals.StreamTask)
>  
>  
> [2020-03-03T21:56:22-08:00] 
> (streams-soak-trunk-eos_soak_i-0eaa3f3a6a197f876_streamslog) [2020-03-04 
> 05:56:22,827] INFO 
> [stream-soak-test-93df69e6-1d85-4b6a-81a1-c6d554693e3f-StreamThread-3] 
> [Producer 
> clientId=stream-soak-test-93df69e6-1d85-4b6a-81a1-c6d554693e3f-StreamThread-3-1_0-producer,
>  transactionalId=stream-soak-test-1_0] Closing the Kafka producer with 
> timeoutMillis = 9223372036854775807 ms. 
> (org.apache.kafka.clients.producer.KafkaProducer)
> [2020-03-03T21:56:22-08:00] 
> (streams-soak-trunk-eos_soak_i-0eaa3f3a6a197f876_streamslog) [2020-03-04 
> 05:56:22,829] INFO 
> [stream-soak-test-93df69e6-1d85-4b6a-81a1-c6d554693e3f-StreamThread-3] 
> stream-thread 
> [stream-soak-test-93df69e6-1d85-4b6a-81a1-c6d554693e3f-StreamThread-3] task 
> [1_0] Closed dirty (org.apache.kafka.streams.processor.internals.StreamTask)
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9618) Failed state store deletion could lead to task file not found

2020-03-10 Thread Boyang Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boyang Chen resolved KAFKA-9618.

Resolution: Fixed

> Failed state store deletion could lead to task file not found
> -
>
> Key: KAFKA-9618
> URL: https://issues.apache.org/jira/browse/KAFKA-9618
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
>
> A failed deletion of a stream task directory could later give the impression 
> that the task state is still there, causing a file-not-found exception because 
> the directory was only partially deleted.
> {code:java}
> [2020-02-26T22:08:05-08:00] 
> (streams-soak-trunk-eos_soak_i-04ebd21fd0e0da9bf_streamslog) [2020-02-27 
> 06:08:04,394] WARN 
> [stream-soak-test-b26adb53-07e2-4013-933a-0f4bcac84c04-StreamThread-2] 
> stream-thread 
> [stream-soak-test-b26adb53-07e2-4013-933a-0f4bcac84c04-StreamThread-2] task 
> [2_2] Failed to wiping state stores for task 2_2 
> (org.apache.kafka.streams.processor.internals.StreamTask) 
> [2020-02-26T22:08:05-08:00] 
> (streams-soak-trunk-eos_soak_i-04ebd21fd0e0da9bf_streamslog) [2020-02-27 
> 06:08:04,394] INFO 
> [stream-soak-test-b26adb53-07e2-4013-933a-0f4bcac84c04-StreamThread-2] 
> [Producer 
> clientId=stream-soak-test-b26adb53-07e2-4013-933a-0f4bcac84c04-StreamThread-2-2_2-producer,
>  transactionalId=stream-soak-test-2_2] Closing the Kafka producer with 
> timeoutMillis = 9223372036854775807 ms. 
> (org.apache.kafka.clients.producer.KafkaProducer)
> [2020-02-26T22:08:05-08:00] 
> (streams-soak-trunk-eos_soak_i-04ebd21fd0e0da9bf_streamslog) [2020-02-27 
> 06:08:04,411] ERROR 
> [stream-soak-test-b26adb53-07e2-4013-933a-0f4bcac84c04-StreamThread-1] 
> stream-thread 
> [stream-soak-test-b26adb53-07e2-4013-933a-0f4bcac84c04-StreamThread-1] 
> Encountered the following exception during processing and the thread is going 
> to shut down:  (org.apache.kafka.streams.processor.internals.StreamThread) 
> [2020-02-26T22:08:05-08:00] 
> (streams-soak-trunk-eos_soak_i-04ebd21fd0e0da9bf_streamslog) 
> org.apache.kafka.streams.errors.ProcessorStateException: Error opening store 
> KSTREAM-AGGREGATE-STATE-STORE-40 at location 
> /mnt/run/streams/state/stream-soak-test/2_2/rocksdb/KSTREAM-AGGREGATE-STATE-STORE-40
>          at 
> org.apache.kafka.streams.state.internals.RocksDBTimestampedStore.openRocksDB(RocksDBTimestampedStore.java:87)
>          at 
> org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:191)
>          at 
> org.apache.kafka.streams.state.internals.RocksDBStore.init(RocksDBStore.java:230)
>          at 
> org.apache.kafka.streams.state.internals.WrappedStateStore.init(WrappedStateStore.java:48)
>          at 
> org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStore.init(ChangeLoggingKeyValueBytesStore.java:44)
>          at 
> org.apache.kafka.streams.state.internals.WrappedStateStore.init(WrappedStateStore.java:48)
>          at 
> org.apache.kafka.streams.state.internals.CachingKeyValueStore.init(CachingKeyValueStore.java:58)
>          at 
> org.apache.kafka.streams.state.internals.WrappedStateStore.init(WrappedStateStore.java:48)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9682) Flaky Test KafkaBasedLogTest#testSendAndReadToEnd

2020-03-10 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-9682.

Resolution: Fixed

Closing this as "fixed by" KAFKA-9686

> Flaky Test KafkaBasedLogTest#testSendAndReadToEnd
> -
>
> Key: KAFKA-9682
> URL: https://issues.apache.org/jira/browse/KAFKA-9682
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect, unit tests
>Reporter: Matthias J. Sax
>Priority: Critical
>
> [https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/1048/testReport/org.apache.kafka.connect.util/KafkaBasedLogTest/testSendAndReadToEnd/]
> {quote}java.lang.AssertionError: expected:<2> but was:<0> at 
> org.junit.Assert.fail(Assert.java:89) at 
> org.junit.Assert.failNotEquals(Assert.java:835) at 
> org.junit.Assert.assertEquals(Assert.java:647) at 
> org.junit.Assert.assertEquals(Assert.java:633) at 
> org.apache.kafka.connect.util.KafkaBasedLogTest.testSendAndReadToEnd(KafkaBasedLogTest.java:355){quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9686) MockConsumer#endOffsets should be idempotent

2020-03-10 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-9686.

Resolution: Fixed

> MockConsumer#endOffsets should be idempotent
> 
>
> Key: KAFKA-9686
> URL: https://issues.apache.org/jira/browse/KAFKA-9686
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
>
> {code:java}
> private Long getEndOffset(List<Long> offsets) {
>     if (offsets == null || offsets.isEmpty()) {
>         return null;
>     }
>     return offsets.size() > 1 ? offsets.remove(0) : offsets.get(0);
> }
> {code}
> The above code has two issues:
> 1. It does not return the latest offset, since the latest offset is at the end 
> of the list
> 2. It removes an element from the list, so MockConsumer#endOffsets becomes 
> non-idempotent
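A sketch of a fix consistent with the description above (assumed shape, not necessarily the committed patch): return the element at the end of the list, and never mutate the list, so repeated calls agree.

```java
import java.util.Arrays;
import java.util.List;

// Sketch of a fix consistent with the ticket (assumed shape, not
// necessarily the committed patch): return the element at the end of the
// list, and never mutate it, so repeated calls give the same answer.
public class EndOffsetFix {
    static Long getEndOffset(List<Long> offsets) {
        if (offsets == null || offsets.isEmpty()) {
            return null;
        }
        return offsets.get(offsets.size() - 1); // latest offset, no removal
    }

    public static void main(String[] args) {
        List<Long> offsets = Arrays.asList(1L, 5L, 9L);
        System.out.println(getEndOffset(offsets)); // 9
        System.out.println(getEndOffset(offsets)); // 9 again: idempotent
    }
}
```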



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9694) Reduce String Operations during WindowStore Operations

2020-03-10 Thread Michael Viamari (Jira)
Michael Viamari created KAFKA-9694:
--

 Summary: Reduce String Operations during WindowStore Operations
 Key: KAFKA-9694
 URL: https://issues.apache.org/jira/browse/KAFKA-9694
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Affects Versions: 2.4.0
Reporter: Michael Viamari


During most (all?) window store operations, whenever a timestamp is required a 
call to {{ApiUtils.validateMillisecond}} is used to validate the inputs. 
This involves a call to {{ApiUtils.prepareMillisCheckFailMsgPrefix}}, which 
builds part of a string that is used for any necessary error messages. The 
string is constructed whether or not it is used, which incurs overhead 
penalties for the WindowStore operation. This has a nominally minimal impact, 
but can add up in scenarios that involve a lot of WindowStore operations, where 
performance is at a premium.

To reduce this overhead, {{ApiUtils.prepareMillisCheckFailMsgPrefix}} could 
return a {{Supplier}} instead, so that the string operations only occur 
when the string is needed.
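A sketch of the proposed change (method and class names here are illustrative, not the exact ApiUtils signatures): the eager concatenation moves behind a Supplier, so the string is only assembled on the failure path.

```java
import java.util.function.Supplier;

// Names are illustrative, not the exact ApiUtils signatures. The message
// prefix is only assembled when the supplier is invoked, i.e. on failure.
public class LazyMsgSketch {
    // Current style: the prefix string is built on every call,
    // even though it is only needed when validation fails.
    static String eagerPrefix(Object value, String name) {
        return "Invalid value " + value + " for parameter \"" + name + "\"";
    }

    // Proposed style: defer the concatenation behind a Supplier.
    static Supplier<String> lazyPrefix(Object value, String name) {
        return () -> "Invalid value " + value + " for parameter \"" + name + "\"";
    }

    static long validateMillis(long millis, Supplier<String> prefix) {
        if (millis < 0) {
            // Only here does the string actually get built.
            throw new IllegalArgumentException(prefix.get() + ": value must not be negative");
        }
        return millis;
    }

    public static void main(String[] args) {
        // Happy path: no string concatenation happens at all.
        System.out.println(validateMillis(5L, lazyPrefix(5L, "windowSize")));
    }
}
```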



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[DISCUSS] (KAFKA-9693) Kafka latency spikes caused by log segment flush on roll

2020-03-10 Thread Paolo Moriello
Hi,

I've just created a Jira ticket to summarize the results of my analysis and
propose a mitigation to the latency spikes:
https://issues.apache.org/jira/browse/KAFKA-9693
Please have a look at the ticket.
Do you see any important implication/risk in doing this change?

Thanks,
Paolo

On Tue, 18 Feb 2020 at 14:42, Paolo Moriello 
wrote:

> Hello,
>
>
> I'm performing an investigation on Kafka latency. During my analysis I was
> able to reproduce a scenario in which Kafka latency repeatedly spikes at
> constant frequency, for small amounts of time.
>
> In my tests, in particular, latency could spike every ~2 minutes
> (dependently on the throughput and input...) from an avg of ~3ms up to a
> max of +500ms (p95-p99).
>
> See image: https://imagizer.imageshack.com/img922/5308/glhkO4.png
>
>
> Further investigations showed that this is most likely caused by log
> segments being rolled over.
>
>
> Did anybody ever noticed anything like that? Do you know if it is possible
> to tune p99 performance in order to reduce/eliminate the latency spikes?
>
>
> Thanks,
>
> Paolo
>
>
> Test configuration:
>
>- 15 brokers
>- 6 producers, ack=1, no compression
>- 1 topic, 90 partitions
>- Kafka 2.2.1
>
>


[jira] [Created] (KAFKA-9693) Kafka latency spikes caused by log segment flush on roll

2020-03-10 Thread Paolo Moriello (Jira)
Paolo Moriello created KAFKA-9693:
-

 Summary: Kafka latency spikes caused by log segment flush on roll
 Key: KAFKA-9693
 URL: https://issues.apache.org/jira/browse/KAFKA-9693
 Project: Kafka
  Issue Type: Improvement
  Components: core
 Environment: OS: Amazon Linux 2
Kafka version: 2.2.1
Reporter: Paolo Moriello
Assignee: Paolo Moriello
 Attachments: image-2020-03-10-13-17-34-618.png, 
image-2020-03-10-14-36-21-807.png, image-2020-03-10-15-00-23-020.png, 
image-2020-03-10-15-00-54-204.png, latency_plot.png

h1. 1. Phenomenon

Response time of produce requests (99th ~ 99.9th %ile) repeatedly spikes to 
~50x-200x the usual value. For instance, the 99th %ile is normally lower than 
5ms, but when this issue occurs it reaches 100ms to 200ms. The 99.9th and 
99.99th %iles even jump to 500-700ms.

Latency spikes happen at constant frequency (depending on the input 
throughput), for small amounts of time. All the producers experience a latency 
increase at the same time.
!image-2020-03-10-13-17-34-618.png|width=513,height=171!

{{Example of response time plot observed on a single producer.}}

URPs (under-replicated partitions) occasionally appear in correspondence with 
the latency spikes too. This is harder to reproduce, but from time to time it 
is possible to see a few partitions going out of sync at the time of a spike.
h1. 2. Experiment
h2. 2.1 Setup

Kafka cluster hosted on AWS EC2 instances.
h4. Cluster
 * 15 Kafka brokers: (EC2 m5.4xlarge)
 ** Disk: 1100Gb EBS volumes (4750Mbps)
 ** Network: 10 Gbps
 ** CPU: 16 Intel Xeon Platinum 8000
 ** Memory: 64Gb
 * 3 Zookeeper nodes: m5.large
 * 6 producers on 6 EC2 instances in the same region
 * 1 topic, 90 partitions - replication factor=3

h4. Broker config

Relevant configurations:
{quote}num.io.threads=8
num.replica.fetchers=2
offsets.topic.replication.factor=3
num.network.threads=5
num.recovery.threads.per.data.dir=2
min.insync.replicas=2
num.partitions=1
{quote}
h4. Perf Test
 * Throughput ~6000-8000 (~40-70Mb/s input + replication = ~120-210Mb/s per 
broker)
 * record size = 2
 * Acks = 1, linger.ms = 1, compression.type = none
 * Test duration: ~20/30min

h2. 2.2 Analysis

Our analysis showed a high +correlation between log segment flush count/rate 
and the latency spikes+. This indicates that the spikes in max latency are 
related to Kafka's behavior when rolling over new segments.

The other metrics did not show any relevant impact on any hardware component of 
the cluster, e.g. CPU, memory, network traffic, disk throughput...

 
!image-2020-03-10-14-14-49-131.png|width=514,height=274!
{{Correlation between latency spikes and log segment flush count. p50, p95, 
p99 and p999 latencies (left axis, ns) and the flush count (right axis, 
stepping blue line in plot).}}

Kafka schedules log flushing (this includes flushing the file record 
containing log entries, the offset index, the timestamp index and the 
transaction index) during _roll_ operations. A log is rolled over onto a new 
empty log when:
 * the log segment is full
 * the max time has elapsed since the timestamp of the first message in the segment 
(or, in its absence, since the create time)
 * the index is full
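The three triggers above can be restated as a simple predicate (illustrative only; this is not Kafka's actual LogSegment/Log code, just the decision logic):

```java
// Illustrative restatement of the three roll triggers; not Kafka's actual
// LogSegment/Log code, just the decision logic described above.
public class RollSketch {
    static boolean shouldRoll(long segmentSizeBytes, long maxSegmentBytes,
                              long msSinceFirstAppend, long maxSegmentMs,
                              boolean indexFull) {
        return segmentSizeBytes >= maxSegmentBytes  // segment is full
            || msSinceFirstAppend >= maxSegmentMs   // max time elapsed
            || indexFull;                           // offset/time index is full
    }

    public static void main(String[] args) {
        // A 1 GiB segment that has just filled up must roll.
        System.out.println(shouldRoll(1L << 30, 1L << 30, 0L, 604_800_000L, false)); // true
    }
}
```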

In this case, the increase in latency happens on _append_ of a new message set 
to the active segment of the log. This is a synchronous operation which 
therefore blocks producer requests, causing the latency increase.

To confirm this, I instrumented Kafka to measure the duration of the 
FileRecords.append(MemoryRecords) method, which is responsible for writing 
memory records to file. As a result, I observed the same spiky pattern as in 
the producer latency, with a one-to-one correspondence with the append duration.
!image-2020-03-10-14-36-21-807.png|width=513,height=273!
{{FileRecords.append(MemoryRecords) duration during test run.}}

Therefore, every time a new log segment is rolled (log.segment.bytes is set to 
the default value of 1GB), Kafka forces a flush of the completed segment, which 
appears to slow down the subsequent append requests on the active segment.
h2. 2.3 Solution

I managed to completely mitigate the problem by disabling the flush that 
happens on log segment roll. Latency spikes and append duration flattened out.
!image-2020-03-10-15-00-23-020.png|width=513,height=171!
!image-2020-03-10-15-00-54-204.png|width=513,height=171!{{Producer response 
time before and after disabling log flush.}}
 
Generally, it is possible to control Kafka's flush behavior by setting the 
log.flush.* configurations. This flush policy can be used to force data to 
disk after a period of time or after a certain number of messages have been 
written.
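For reference, the time- and count-based flush knobs referred to above look like this in the broker config (values are purely illustrative, not recommendations):

```properties
# Force an fsync after this many messages accumulate on a partition
# (default is effectively unlimited, leaving flushing to the OS)
log.flush.interval.messages=50000
# Force an fsync when a message has been waiting longer than this (ms);
# if unset, log.flush.scheduler.interval.ms is used instead
log.flush.interval.ms=1000
```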
 
However, these configurations don't have any impact on the flush of "rolled 
segments", which is scheduled and executed regardless.
 
Therefore, the suggested solution is to add a new configuration to potentially 

Re: [VOTE] 2.5.0 RC1

2020-03-10 Thread Tom Bentley
Hi David,

I verified signatures, built the tagged branch and ran unit and integration
tests. I found some flaky tests, as follows:

https://issues.apache.org/jira/browse/KAFKA-9691 (new)
https://issues.apache.org/jira/browse/KAFKA-9692 (new)
https://issues.apache.org/jira/browse/KAFKA-9283 (already reported)

Many thanks,

Tom

On Tue, Mar 10, 2020 at 3:28 AM David Arthur  wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the second candidate for release of Apache Kafka 2.5.0. The first
> release candidate included an erroneous NOTICE file, so another RC was
> needed to fix that.
>
> This is a major release of Kafka which includes many new features,
> improvements, and bug fixes including:
>
> * TLS 1.3 support (1.2 is now the default)
> * Co-groups for Kafka Streams
> * Incremental rebalance for Kafka Consumer
> * New metrics for better operational insight
> * Upgrade Zookeeper to 3.5.7
> * Deprecate support for Scala 2.11
>
> Release notes for the 2.5.0 release:
> https://home.apache.org/~davidarthur/kafka-2.5.0-rc1/RELEASE_NOTES.html
>
> *** Please download, test and vote by Monday, March 16th 2020 5pm PT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> https://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> https://home.apache.org/~davidarthur/kafka-2.5.0-rc1/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> * Javadoc:
> https://home.apache.org/~davidarthur/kafka-2.5.0-rc1/javadoc/
>
> * Tag to be voted upon (off 2.5 branch) is the 2.5.0 tag:
> https://github.com/apache/kafka/releases/tag/2.5.0-rc1
>
> * Documentation:
> https://kafka.apache.org/25/documentation.html
>
> * Protocol:
> https://kafka.apache.org/25/protocol.html
>
> * Links to successful Jenkins builds for the 2.5 branch to follow
>
> Thanks,
> David Arthur
>


[jira] [Created] (KAFKA-9692) Flaky test - kafka.admin.ReassignPartitionsClusterTest#znodeReassignmentShouldOverrideApiTriggeredReassignment

2020-03-10 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-9692:
--

 Summary: Flaky test - 
kafka.admin.ReassignPartitionsClusterTest#znodeReassignmentShouldOverrideApiTriggeredReassignment
 Key: KAFKA-9692
 URL: https://issues.apache.org/jira/browse/KAFKA-9692
 Project: Kafka
  Issue Type: Bug
  Components: unit tests
Affects Versions: 2.5.0
Reporter: Tom Bentley


{noformat}
java.lang.AssertionError: expected: but was:
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.failNotEquals(Assert.java:835)
at org.junit.Assert.assertEquals(Assert.java:120)
at org.junit.Assert.assertEquals(Assert.java:146)
at 
kafka.admin.ReassignPartitionsClusterTest.assertReplicas(ReassignPartitionsClusterTest.scala:1220)
at 
kafka.admin.ReassignPartitionsClusterTest.assertIsReassigning(ReassignPartitionsClusterTest.scala:1191)
at 
kafka.admin.ReassignPartitionsClusterTest.znodeReassignmentShouldOverrideApiTriggeredReassignment(ReassignPartitionsClusterTest.scala:897)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
at 
org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at jdk.internal.reflect.GeneratedMethodAccessor34.invoke(Unknown Source)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:118)
at jdk.internal.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 

[jira] [Created] (KAFKA-9691) Flaky test kafka.admin.TopicCommandWithAdminClientTest#testDescribeUnderReplicatedPartitionsWhenReassignmentIsInProgress

2020-03-10 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-9691:
--

 Summary: Flaky test 
kafka.admin.TopicCommandWithAdminClientTest#testDescribeUnderReplicatedPartitionsWhenReassignmentIsInProgress
 Key: KAFKA-9691
 URL: https://issues.apache.org/jira/browse/KAFKA-9691
 Project: Kafka
  Issue Type: Bug
  Components: unit tests
Affects Versions: 2.5.0
Reporter: Tom Bentley


Stacktrace:

{noformat}
java.lang.NullPointerException
at 
kafka.admin.TopicCommandWithAdminClientTest.$anonfun$testDescribeUnderReplicatedPartitionsWhenReassignmentIsInProgress$3(TopicCommandWithAdminClientTest.scala:673)
at 
kafka.admin.TopicCommandWithAdminClientTest.testDescribeUnderReplicatedPartitionsWhenReassignmentIsInProgress(TopicCommandWithAdminClientTest.scala:671)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
at 
org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:118)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at 

Pristine Zookeeper and Kafka (via Docker) producing OFFSET_OUT_OF_RANGE for Fetch

2020-03-10 Thread kafka
Hi all,


I'm implementing a custom client.

I was wondering whether anyone could explain the OFFSET_OUT_OF_RANGE
error in this scenario.

My test suite tears down and spins up a fresh zookeeper and kafka every
time inside a pristine docker container.

The test suite runs as:

1. Producer runs first and finishes.
2. Consumer group members then runs later in 3 separate threads.

I write key/value pairs of "fruit", "animal" and "vegetable" with a 
round-robin algorithm for each partition.

The consumer group process runs an OffsetCommit with offset=0 for each
partition to kick off. (I found that if I just started with OffsetFetch I
would get UNKNOWN_TOPIC_OR_PARTITION, and couldn't find docs about
whether this was "normal" or not. But that's a tangent.)

This page shows writing to three partitions within a topic, and
each Produce request succeeding:

https://chrisdone.com/consumer-groups-sink-out-of-range.html [fine]

Each column is a thread in my test suite.

However, when trying to fetch those three partitions, for some reason on
partition 2, I get OFFSET_OUT_OF_RANGE. The other two partitions consume
successfully. This can be seen in the consumer side shown in this page:

https://chrisdone.com/consumer-groups-source-out-of-range.html [problem]

(Scroll to about halfway through: the first half is three threads trying to 
join the group. Once they have joined, three new threads spin up to the right, 
one for each consumer in the consumer group.)

Yet, this is a nondeterministic error that seems to depend on timing. I 
intentionally place a random 1-500ms delay in every message log so that the 
program might exhibit real-world cases like this.

If I remove the random timeouts, this process works every time
(demonstrated here:
https://chrisdone.com/consumer-groups-source-working.html). So there is
some kind of timing issue that I cannot identify.

Upon receiving an OFFSET_OUT_OF_RANGE error, you can see in the log that I 
wait, refresh metadata, and retry the request (as I read elsewhere[1] that 
this "typically implies a leader change"), only to get another 
OFFSET_OUT_OF_RANGE.
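For comparison (this is not the author's client, and the names here are illustrative): the Java consumer treats OFFSET_OUT_OF_RANGE as a signal to reset its position via a ListOffsets lookup, per its auto.offset.reset policy, rather than retrying the same fetch offset. The logic amounts to roughly:

```java
// Illustrative sketch: OFFSET_OUT_OF_RANGE handling by resetting the position,
// instead of retrying the same offset (which just fails again, as above).
public class OffsetResetSketch {

    // Conceptual broker-side range check: a fetch offset outside
    // [logStartOffset, logEndOffset] yields OFFSET_OUT_OF_RANGE.
    static boolean outOfRange(long fetchOffset, long logStartOffset, long logEndOffset) {
        return fetchOffset < logStartOffset || fetchOffset > logEndOffset;
    }

    // Client-side reset: pick the earliest or latest valid offset
    // (the Java consumer obtains these via a ListOffsets request).
    static long resolveFetchOffset(long fetchOffset, long logStartOffset, long logEndOffset,
                                   boolean resetToEarliest) {
        if (!outOfRange(fetchOffset, logStartOffset, logEndOffset)) return fetchOffset;
        return resetToEarliest ? logStartOffset : logEndOffset;
    }

    public static void main(String[] args) {
        System.out.println(resolveFetchOffset(0, 5, 100, true));    // 5: reset to earliest
        System.out.println(resolveFetchOffset(42, 5, 100, true));   // 42: already in range
        System.out.println(resolveFetchOffset(200, 5, 100, false)); // 100: reset to latest
    }
}
```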

I'm receiving an OffsetFetch response of partitionIndex = 2 and 
committedOffset = 0,

( ThreadId 46
, SourceRequestMsg
 (ReceivedResponse
 0.006022722
 s
 (OffsetFetchResponseV0
 { topicsArray =
 ARRAY
 [ OffsetFetchResponseV0Topics
 { name = STRING "355b26d6-ccab-4b28-bd05-a44ac6326cb7"
 , partitionsArray =
 ARRAY
 [ OffsetFetchResponseV0TopicsPartitions
 { partitionIndex = 2
 , committedOffset = 0
 , metadata = NULLABLE_STRING (Just "")
 , errorCode = NONE
 }
 ]
 }
 ]
 })))

I sent a fetch request,

( ThreadId 49
, ConsumerGroupConsumerFor
 "myclientid-e79931cc-d6d4-479b-90d6-1b61aab85198"
 [PartitionId 2]
 (KafkaSourceMsg
 (SourceRequestMsg
 (SendingRequest
 (FetchRequestV4
 { replicaId = -1
 , maxWaitTime = 200
 , minBytes = 5
 , maxBytes = 1048576
 , isolationLevel = 0
 , topicsArray =
 ARRAY
 [ FetchRequestV4Topics
 { topic =
 STRING "355b26d6-ccab-4b28-bd05-a44ac6326cb7"
 , partitionsArray =
 ARRAY
 [ FetchRequestV4TopicsPartitions
 { partition = 2
 , fetchOffset = 0
 , partitionMaxBytes = 1048576
 }
 ]
 }
 ]
 })

And yet it returns

FetchResponseV4Responses
 { topic = STRING "355b26d6-ccab-4b28-bd05-a44ac6326cb7"
 , partitionResponsesArray =
 ARRAY
 [ FetchResponseV4ResponsesPartitionResponses
 { partitionHeader =
 FetchResponseV4ResponsesPartitionResponsesPartitionHeader
 { partition = 2
 , errorCode = OFFSET_OUT_OF_RANGE
 , highWatermark = -1
 , lastStableOffset = -1
 , abortedTransactionsArray = ARRAY []
 }
 , recordSet = RecordBatchV2Sequence {recordBatchV2Sequence = []}
 }
 ]
 }

So I am very confused.

Can someone who is more familiar with this process hazard a guess as to
what's going on?

Cheers,

Chris

[1]: 
https://issues.apache.org/jira/browse/KAFKA-7395?focusedCommentId=16640313&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16640313

[jira] [Created] (KAFKA-9690) MemoryLeak in JMX Reporter

2020-03-10 Thread Kaare Nilsen (Jira)
Kaare Nilsen created KAFKA-9690:
---

 Summary: MemoryLeak in JMX Reporter
 Key: KAFKA-9690
 URL: https://issues.apache.org/jira/browse/KAFKA-9690
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 2.4.0
Reporter: Kaare Nilsen
 Attachments: image-2020-03-10-12-37-49-259.png, 
image-2020-03-10-12-44-11-688.png

We use Kafka in a streaming HTTP application, creating a new consumer for each 
incoming request. In version 2.4.0 we experience that memory builds up for 
each new consumer. After a memory dump revealed the issue was in the JMX 
subsystem, we found that one of the JMX beans (kafka.consumer) builds up the 
consumer-metrics without releasing them on closing the consumer.

What we found is this, in the {{metricRemoval}} method:
{code:java}
public void metricRemoval(KafkaMetric metric) {
synchronized (LOCK) {
MetricName metricName = metric.metricName();
String mBeanName = getMBeanName(prefix, metricName);
KafkaMbean mbean = removeAttribute(metric, mBeanName);
if (mbean != null) {
if (mbean.metrics.isEmpty()) {
unregister(mbean);
mbeans.remove(mBeanName);
} else
reregister(mbean);
}
}
}
{code}
The check {{mbean.metrics.isEmpty()}} never yielded true for this particular 
metric, so the mbean was never removed, building up the mbeans HashMap.

The metrics that are not released are:
{code:java}
last-poll-seconds-ago
poll-idle-ratio-avg
time-between-poll-avg
time-between-poll-max
{code}
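One way to observe this buildup from inside the JVM is to count the MBeans in the kafka.consumer domain over time, using plain platform JMX ({{MBeanCountSketch}}/{{countMBeans}} are illustrative names; no Kafka dependency is needed for the query itself):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class MBeanCountSketch {

    // Count registered MBeans under a JMX domain, e.g. "kafka.consumer".
    // A count that keeps growing as consumers are created and closed
    // indicates the leak described above.
    static int countMBeans(String domain) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        return server.queryNames(new ObjectName(domain + ":*"), null).size();
    }

    public static void main(String[] args) throws Exception {
        // The java.lang domain always has platform MBeans registered.
        System.out.println(countMBeans("java.lang") > 0); // true
    }
}
```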
I have a workaround in my code now by having a modified JMXReporter in my pwn 
project with the following close method
{code:java}
public void close() {
synchronized (LOCK) {
for (KafkaMbean mbean : this.mbeans.values()) {
mbean.removeAttribute("last-poll-seconds-ago");
mbean.removeAttribute("poll-idle-ratio-avg");
mbean.removeAttribute("time-between-poll-avg");
mbean.removeAttribute("time-between-poll-max");
unregister(mbean);
}
}
}
{code}
This will remove the attributes that are not cleaned up and prevent the memory 
leak, but I have not found the root cause.
Another workaround is to use Kafka client 2.3.1.

 

This is how it looks in the JMX console after a couple of clients have 
connected and disconnected. You can see that the one metric builds up, and the 
old ones have the four attributes that make the unregister fail.

 

!image-2020-03-10-12-37-49-259.png!

 

This is how it looks after a while with Kafka client 2.3.1:
!image-2020-03-10-12-44-11-688.png!

As you can see no leakage here.

I suspect this change (KIP-517) to be the one that introduced the leak:
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-517%3A+Add+consumer+metrics+to+observe+user+poll+behavior]

https://issues.apache.org/jira/browse/KAFKA-8874



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is back to normal : kafka-2.5-jdk8 #59

2020-03-10 Thread Apache Jenkins Server
See 



[jira] [Resolved] (KAFKA-9122) Externalizing DB password is not working

2020-03-10 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis resolved KAFKA-9122.
---
Resolution: Not A Bug

Closing, given that this was a configuration issue. 

> Externalizing DB password is not working
> 
>
> Key: KAFKA-9122
> URL: https://issues.apache.org/jira/browse/KAFKA-9122
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.2.1
> Environment: CentOS 6.7
>Reporter: Dwijadas
>Priority: Trivial
> Attachments: Screenshot_1.png
>
>
> Hi
> I am trying to externalize the user name and password for an Oracle DB using 
> the {{FileConfigProvider}} provider.
> For that I have created a properties file that contains the user name and 
> password.
>  
> {{$ cat /home/kfk/data/ora_credentials.properties
> ora.username="apps"
> ora.password="Passw0rd!"}}
> Added the config providers as file and also the config.providers.file.class 
> as FileConfigProvider in the worker config:
>  
> {{$ cat /home/kfk/etc/kafka/connect-distributed.properties
> ...
> ...
> config.providers=file
> config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider
> ...
> ...}}
> Restarted the worker and submitted a task using REST with the following config
>  
> {{"config": \{
>"connector.class": 
> "io.confluent.connect.jdbc.JdbcSourceConnector",
>"tasks.max": "1",
>  "connection.user": 
> "${file:/home/kfk/data/ora_credentials.properties:ora.username}",
>"connection.password": 
> "${file:/home/kfk/data/ora_credentials.properties:ora.password}",
>...
>...
> }}}
> Submitting the above task results in the following error:
>  
> {{{
>   "error_code": 400,
>   "message": "Connector configuration is invalid and contains the following 2 
> error(s):\nInvalid value java.sql.SQLException: ORA-01017: invalid 
> username/password; logon denied\n for configuration Couldn't open connection 
> to jdbc:oracle:thin:@oebsr122.infodetics.com:1521:VIS\nInvalid value 
> java.sql.SQLException: ORA-01017: invalid username/password; logon denied\n 
> for configuration Couldn't open connection to 
> jdbc:oracle:thin:@oebsr122.infodetics.com:1521:VIS\nYou can also find the 
> above list of errors at the endpoint `/\{connectorType}/config/validate`"
> }}}
> It seems the above config does not replace the user name and password at 
> all; rather, the entire literal values for connection.user and 
> connection.password are used to connect to the DB, resulting in the 
> ORA-01017: invalid username/password error.
> Is it a bug?


