Re: [DISCUSS] KIP-587 Suppress detailed responses for handled exceptions in security-sensitive environments

2020-04-08 Thread Christopher Egerton
Hi Connor,

Just a few more remarks!

I noticed that you said "Kafka Connect was passing these exceptions without
authentication." For what it's worth, the Connect REST API can be secured
with TLS out of the box by configuring the worker with the various ssl.*
properties, but that doesn't provide any kind of authorization layer that
could offer different levels of security depending on who the user is. Just
pointing this out in case it helps with your use case.
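
For reference, securing the REST API that way looks roughly like the
following worker properties (host, port, and paths purely illustrative):

  listeners=https://0.0.0.0:8443
  ssl.keystore.location=/var/private/ssl/connect.keystore.jks
  ssl.keystore.password=<keystore-password>
  ssl.key.password=<key-password>
  ssl.truststore.location=/var/private/ssl/connect.truststore.jks
  ssl.truststore.password=<truststore-password>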

As far as editing the KIP based on discussion goes--it's not only
acceptable, it's expected :) Ideally, the KIP should be kept up-to-date to
the point where, were it to be accepted at any moment, it would accurately
reflect the changes that would then be made to Kafka. This can be relaxed
if there's rapid iteration or items that are still up for discussion, but
as soon as things settle down it should be updated.

As far as item 4 goes, my question was about exceptions that aren't handled
by the ExceptionMapper, but which are returned as part of the response body
when querying the status of a failed connector or task via the
/connectors/{name}/status or /connectors/{name}/tasks/{taskId}/status
endpoints. Even if the request is successful and results in an HTTP 200
response, the body might contain a stack trace if the connector or any of
its tasks have failed.

For example, I ran an instance of the FileStreamSource connector named
"file-source" locally and instructed it to consume from a file that it
lacked permissions to read. When I queried the status of that connector by
issuing a request to /connectors/file-source/status, I got back the
following response:

{
  "name": "file-source",
  "connector": {
    "state": "RUNNING",
    "worker_id": "192.168.86.21:8083"
  },
  "tasks": [
    {
      "id": 0,
      "state": "FAILED",
      "worker_id": "192.168.86.21:8083",
      "trace": "org.apache.kafka.connect.errors.ConnectException: java.nio.file.AccessDeniedException: test.txt\n\tat org.apache.kafka.connect.file.FileStreamSourceTask.poll(FileStreamSourceTask.java:116)\n\tat org.apache.kafka.connect.runtime.WorkerSourceTask.poll(WorkerSourceTask.java:265)\n\tat org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:232)\n\tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)\n\tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:748)\nCaused by: java.nio.file.AccessDeniedException: test.txt\n\tat sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)\n\tat sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)\n\tat sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)\n\tat sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)\n\tat java.nio.file.Files.newByteChannel(Files.java:361)\n\tat java.nio.file.Files.newByteChannel(Files.java:407)\n\tat java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:384)\n\tat java.nio.file.Files.newInputStream(Files.java:152)\n\tat org.apache.kafka.connect.file.FileStreamSourceTask.poll(FileStreamSourceTask.java:82)\n\t... 9 more\n"
    }
  ],
  "type": "source"
}

Note the "trace" field in the first element of the "tasks" field of the
response: this was the stack trace for the exception that caused the task
to fail during execution, which has nothing to do with the success or
failure of the REST request I issued to the /connectors/file-source/status
endpoint.

I was wondering if you wanted to include these kinds of stack traces as
part of the KIP, as opposed to uncaught exceptions that result in a 500
error from the REST API.
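
To be concrete about the contrast: the ExceptionMapper path covers uncaught
exceptions that surface as 5xx responses, and tuning it could look roughly
like this (a sketch only; the class name and response body here are
hypothetical, not something the KIP has specified):

  import javax.ws.rs.core.MediaType;
  import javax.ws.rs.core.Response;
  import javax.ws.rs.ext.ExceptionMapper;

  // Hypothetical mapper that returns a fixed, generic body instead of the
  // exception message and stack trace.
  public class TerseExceptionMapper implements ExceptionMapper<Exception> {
      @Override
      public Response toResponse(Exception exception) {
          return Response.status(Response.Status.INTERNAL_SERVER_ERROR)
              .type(MediaType.APPLICATION_JSON)
              .entity("{\"error_code\":500,\"message\":\"Request failed\"}")
              .build();
      }
  }

The "trace" fields above, by contrast, are produced when the status of a
connector or task is serialized, and would need separate handling.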

Cheers,

Chris

On Wed, Apr 8, 2020 at 9:51 AM Connor Penhale  wrote:

> Hi All!
>
> Is there any additional feedback that the community can provide me on the
> KIP? Has anyone else run into requirements like this, or maybe my customer
> is the only one :)? If the scope looks good, is it time to call a vote? Or
> should I start porting my 2.0 code to 2.6 to show examples?
>
> Thanks!
> Connor
>
> On 4/6/20, 9:03 AM, "Connor Penhale"  wrote:
>
> Hi Colin,
>
> We did not find a specific security vulnerability. Our customer had
> auditors in their environment, and they identified Kafka Connect as out of
> compliance with their particular standards, something that happens all the
> time for REST-based applications. What these security auditors expected
> Kafka Connect to be able to do was tune the response. As Kafka Connect
> could not provide this functionality, I'm proposing this KIP. Does that
> make sense? Should I still go through the process of a security disclosure?
>
> Our 

[jira] [Created] (KAFKA-9840) Consumer should not use OffsetForLeaderEpoch without current epoch validation

2020-04-08 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-9840:
--

 Summary: Consumer should not use OffsetForLeaderEpoch without 
current epoch validation
 Key: KAFKA-9840
 URL: https://issues.apache.org/jira/browse/KAFKA-9840
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Reporter: Jason Gustafson


We have observed a case where the consumer attempted to detect truncation with 
the OffsetsForLeaderEpoch API against a broker which had become a zombie. In 
this case, the last epoch known to the consumer was higher than the last epoch 
known to the zombie broker, so the broker returned -1 as the offset and epoch 
in the response. The consumer did not check for this in the response, which 
resulted in the following message:

{code}
Truncation detected for partition topic-1 at offset FetchPosition{offset=11859, 
offsetEpoch=Optional[46], currentLeader=LeaderAndEpoch{leader=broker-host (id: 
3 rack: null), epoch=-1}}, resetting offset to the first offset known to 
diverge FetchPosition{offset=-1, offsetEpoch=Optional[-1], 
currentLeader=LeaderAndEpoch{broker-host (id: 3 rack: null), epoch=-1}} 
(org.apache.kafka.clients.consumer.internals.SubscriptionState:414)
{code}

There are a couple of ways the consumer can handle this situation better. 
First, the reason we did not detect the zombie broker is that we did not 
include the current leader epoch in the OffsetForLeaderEpoch request. This was 
likely because of KAFKA-9212. Following this patch, we would not initialize the 
current leader epoch from metadata responses because there are cases in which we 
cannot rely on it. But if the client cannot rely on being able to detect 
zombies, then the epoch validation is less useful anyway. So the simple 
solution is to not bother with the validation unless we have a reliable current 
leader epoch.

Second, the consumer needs to check for the case when the returned offset and 
epoch are not defined. In this case, we have to treat this as a normal 
OffsetOutOfRange case and invoke the reset policy. 
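
A rough sketch of both fixes together (class and method names here are 
illustrative, not the actual consumer internals):

{code}
class OffsetForLeaderEpochHandling {
    void onEpochEndOffset(long endOffset, int leaderEpoch,
                          boolean haveReliableCurrentEpoch) {
        if (!haveReliableCurrentEpoch) {
            // Fix 1: skip truncation detection entirely when no trustworthy
            // current leader epoch was attached to the request; zombie
            // brokers cannot be fenced in that case anyway.
            return;
        }
        if (endOffset < 0 || leaderEpoch < 0) {
            // Fix 2: the broker found no divergence point (-1/-1), so treat
            // this like OffsetOutOfRange and invoke the reset policy rather
            // than "resetting" to offset -1.
            requestOffsetReset();
            return;
        }
        // Otherwise, compare against the current fetch position to detect
        // real log truncation.
    }

    void requestOffsetReset() { /* apply the auto.offset.reset policy */ }
}
{code}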





--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9839) IllegalStateException on metadata update when broker learns about its new epoch after the controller

2020-04-08 Thread Anna Povzner (Jira)
Anna Povzner created KAFKA-9839:
---

 Summary: IllegalStateException on metadata update when broker 
learns about its new epoch after the controller
 Key: KAFKA-9839
 URL: https://issues.apache.org/jira/browse/KAFKA-9839
 Project: Kafka
  Issue Type: Bug
  Components: controller, core
Affects Versions: 2.3.1
Reporter: Anna Povzner


The broker throws "java.lang.IllegalStateException: Epoch XXX larger than 
current broker epoch YYY" on UPDATE_METADATA when the controller learns about 
the broker's new epoch and sends UPDATE_METADATA before 
KafkaZkClient.registerBroker completes (i.e., before the broker itself learns 
about its new epoch).

Here is the scenario we observed in more detail:
1. ZK session expires on broker 1
2. Broker 1 establishes new session to ZK and creates znode
3. Controller learns about broker 1 and assigns epoch
4. Broker 1 receives UPDATE_METADATA from controller, but it does not know 
about its new epoch yet, so we get an exception:

ERROR [KafkaApi-3] Error when handling request: clientId=1, correlationId=0, 
api=UPDATE_METADATA, body={...}
java.lang.IllegalStateException: Epoch XXX larger than current broker epoch YYY
    at kafka.server.KafkaApis.isBrokerEpochStale(KafkaApis.scala:2725)
    at kafka.server.KafkaApis.handleUpdateMetadataRequest(KafkaApis.scala:320)
    at kafka.server.KafkaApis.handle(KafkaApis.scala:139)
    at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)
    at java.lang.Thread.run(Thread.java:748)

5. KafkaZkClient.registerBroker completes on broker 1: "INFO Stat of the 
created znode at /brokers/ids/1"

The result is that the broker has stale metadata for some time.

Possible solutions:
1. The broker returns a more specific (retriable) error and the controller retries UPDATE_METADATA.
2. The broker accepts UPDATE_METADATA with a larger broker epoch.
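
A sketch of option 1 (Java-flavored and purely illustrative; the real check 
lives in KafkaApis.scala, and Errors here refers to 
org.apache.kafka.common.protocol.Errors):

{code}
// Instead of throwing IllegalStateException when the request carries a
// broker epoch *newer* than the one the broker knows, surface a retriable
// error so the controller can retry the UPDATE_METADATA request.
static Errors checkBrokerEpoch(long requestEpoch, long currentEpoch) {
    if (requestEpoch > currentEpoch) {
        // The controller learned the new epoch before registerBroker()
        // completed locally; this is transient, not fatal.
        return Errors.STALE_BROKER_EPOCH; // or a new, dedicated error code
    }
    return Errors.NONE;
}
{code}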



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk11 #1337

2020-04-08 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-6145: KIP-441 Move tasks with caught-up destination clients right


--
[...truncated 3.01 MB...]
org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

Build failed in Jenkins: kafka-trunk-jdk8 #4416

2020-04-08 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-6145: KIP-441 Move tasks with caught-up destination clients right


--
[...truncated 3.00 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain 

Build failed in Jenkins: kafka-2.4-jdk8 #185

2020-04-08 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-9835; Protect `FileRecords.slice` from concurrent write (#8451)


--
[...truncated 7.05 MB...]
org.apache.kafka.connect.transforms.CastTest > castFieldsSchemaless PASSED

org.apache.kafka.connect.transforms.CastTest > testUnsupportedTargetType STARTED

org.apache.kafka.connect.transforms.CastTest > testUnsupportedTargetType PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt16 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt16 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt32 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt32 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt64 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt64 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeBigDecimalRecordValueWithSchemaString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeBigDecimalRecordValueWithSchemaString PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanFalse STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanFalse PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidMap STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidMap PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanTrue STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanTrue PASSED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordDefaultValue 
STARTED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordDefaultValue 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt8 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt8 PASSED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeyWithSchema 
STARTED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeyWithSchema 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt8 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt8 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanFalse STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanFalse PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidSchemaType 
STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidSchemaType 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessString PASSED

org.apache.kafka.connect.transforms.CastTest > castFieldsWithSchema STARTED

org.apache.kafka.connect.transforms.CastTest > castFieldsWithSchema PASSED

org.apache.kafka.connect.transforms.CastTest > castLogicalToString STARTED

org.apache.kafka.connect.transforms.CastTest > castLogicalToString PASSED

org.apache.kafka.connect.transforms.TimestampRouterTest > defaultConfiguration 
STARTED

org.apache.kafka.connect.transforms.TimestampRouterTest > defaultConfiguration 
PASSED

org.apache.kafka.connect.transforms.FlattenTest > 
tombstoneEventWithoutSchemaShouldPassThrough STARTED

org.apache.kafka.connect.transforms.FlattenTest > 
tombstoneEventWithoutSchemaShouldPassThrough PASSED

org.apache.kafka.connect.transforms.FlattenTest > testKey STARTED

org.apache.kafka.connect.transforms.FlattenTest > testKey PASSED

org.apache.kafka.connect.transforms.FlattenTest > 
testOptionalAndDefaultValuesNested STARTED

org.apache.kafka.connect.transforms.FlattenTest > 
testOptionalAndDefaultValuesNested PASSED

org.apache.kafka.connect.transforms.FlattenTest > topLevelMapRequired STARTED

org.apache.kafka.connect.transforms.FlattenTest > topLevelMapRequired PASSED

org.apache.kafka.connect.transforms.FlattenTest > topLevelStructRequired STARTED

org.apache.kafka.connect.transforms.FlattenTest > topLevelStructRequired PASSED

org.apache.kafka.connect.transforms.FlattenTest > testOptionalFieldStruct 
STARTED

org.apache.kafka.connect.transforms.FlattenTest > testOptionalFieldStruct PASSED

org.apache.kafka.connect.transforms.FlattenTest > 
tombstoneEventWithSchemaShouldPassThrough STARTED

org.apache.kafka.connect.transforms.FlattenTest > 
tombstoneEventWithSchemaShouldPassThrough PASSED

org.apache.kafka.connect.transforms.FlattenTest > testOptionalNestedStruct 
STARTED

org.apache.kafka.connect.transforms.FlattenTest > testOptionalNestedStruct 
PASSED

org.apache.kafka.connect.transforms.FlattenTest > 

Build failed in Jenkins: kafka-trunk-jdk11 #1336

2020-04-08 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-6145: KIP-441 Pt. 6 Trigger probing rebalances until group is

[github] KAFKA-9835; Protect `FileRecords.slice` from concurrent write (#8451)


--
[...truncated 3.02 MB...]
org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED


Build failed in Jenkins: kafka-trunk-jdk8 #4415

2020-04-08 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9835; Protect `FileRecords.slice` from concurrent write (#8451)


--
[...truncated 3.00 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain 

Re: [DISCUSS] KIP-590: Redirect Zookeeper Mutation Protocols to The Controller

2020-04-08 Thread Boyang Chen
Thanks for the info Agam! Will add to the KIP.

On Wed, Apr 8, 2020 at 4:26 PM Agam Brahma  wrote:

> Hi Boyang,
>
> The KIP already talks about incorporating changes for FindCoordinator
> request routing; I wanted to point out one additional case where internal
> topics are created "as a side effect":
>
> As part of handling metadata requests, if we are looking for metadata for
> an internal topic and auto-topic-creation is enabled [1], the broker
> currently goes ahead and creates the internal topic in the same way [2] as
> it would for the FindCoordinator request.
>
> -Agam
>
> [1]
>
> https://github.com/apache/kafka/blob/cd1e46c8bb46f1e5303c51f476c74e33b522fce8/core/src/main/scala/kafka/server/KafkaApis.scala#L1096
> [2]
>
> https://github.com/apache/kafka/blob/cd1e46c8bb46f1e5303c51f476c74e33b522fce8/core/src/main/scala/kafka/server/KafkaApis.scala#L1041
>
>
>
> On Mon, Apr 6, 2020 at 8:25 PM Boyang Chen 
> wrote:
>
> > Thanks for the various inputs everyone!
> >
> > I think Sonke and Colin's suggestions make sense. The tagged field also
> > avoids the unnecessary protocol changes for affected requests. Will add
> it
> > to the header. As for the verification, I'm not sure whether it's
> necessary
> > to require a higher permission level, as it is just an ignorable field?
> >
> > Guozhang's suggestions about metrics also sound great, I will think
> through
> > the use cases and make some changes to the KIP.
> >
> > Best,
> > Boyang
> >
> > On Mon, Apr 6, 2020 at 4:28 PM Guozhang Wang  wrote:
> >
> > > Thanks for the KIP Boyang, this looks good to me. Some minor comments:
> > >
> > > 1) I think in order to implement the forwarding mechanism the brokers
> > needs
> > > some purgatory to keep the forwarded requests; if that's true, should
> we
> > > add some broker-side metrics for those purgatories for debugging
> > purposes?
> > >
> > > 2) Should we also consider adding some extra metric counting old
> > versioned
> > > admin client request rates (this goes beyond
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-511%3A+Collect+and+Expose+Client%27s+Name+and+Version+in+the+Brokers
> > > since
> > > old versioned client would not report its Kafka version anyways); one
> use
> > > case I can think of besides debugging purposes, is that if we ever
> > decide
> > > to break compatibility in future versions way after the bridge
> releases,
> > to
> > > reject any v1 requests and hence can totally remove this forwarding
> logic
> > > on brokers, we can leverage on this metric to find a safe time to
> > upgrade.
> > >
> > >
> > > Guozhang
> > >
> > >
> > >
> > > On Mon, Apr 6, 2020 at 3:50 PM Colin McCabe 
> wrote:
> > >
> > > > Hi Sönke,
> > > >
> > > > Yeah, that was my thought too.  The request has already been
> validated
> > on
> > > > the forwarding broker, so we don't need to validate it again.
> However,
> > > you
> > > > make a good point that it's unfortunate that the audit log would lose
> > the
> > > > principal information if we didn't forward it as well.
> > > >
> > > > Perhaps we could add a tagged field to the request header for all
> > > > messages.  This field would contain the principal name.  Of course,
> > this
> > > > field should only be allowed if the request arrives with the highest
> > > > permission levels (Probably ClusterAction on Cluster, since that's
> what
> > > all
> > > > the brokers have.)
> > > >
> > > > regards,
> > > > Colin
> > > >
> > > >
> > > > On Mon, Apr 6, 2020, at 14:37, Sönke Liebau wrote:
> > > > > Hi Boyang,
> > > > >
> > > > > thanks for the KIP. Sounds good overall.
> > > > >
> > > > > @Tom: I thought about your remark a little and think that in
> > principle
> > > we
> > > > > can get away without forwarding the principal at all. Brokers
> > currently
> > > > > authenticate and authorize requests before performing writes to
> > > > Zookeeper -
> > > > > as long as we don't change that it shouldn't matter, whether the
> > write
> > > > goes
> > > > > to ZK or the controller, as long as that request is properly
> > > > authenticated.
> > > > > So the broker would simply authorize and authenticate the original
> > > > request
> > > > > and then forward it to the controller using its own credentials.
> And
> > > the
> > > > > controller could simply trust that this is a bona-fide request,
> > because
> > > > it
> > > > > came from a trusted peer.
> > > > >
> > > > > I can see two issues here, one is a bit academic I think..
> > > > >
> > > > > 1. The controller would be unable to write a proper audit log,
> > because
> > > it
> > > > > cannot know who sent the original request.
> > > > >
> > > > > 2. In theory, clusters could use Plaintext Listeners for inter
> broker
> > > > > traffic because that is on a separate, secure network or similar
> > > reasons.
> > > > > In that case, the forwarded request would be unauthenticated - then
> > > > again,
> > > > > so are all other requests between brokers, so nothing lost really.
> > > > >
> > > > 
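
A tagged header field along the lines Colin describes might look roughly like
this in the RequestHeader schema JSON (the field name and tag number are
purely illustrative, not settled by the KIP):

  { "name": "InitialPrincipalName", "type": "string", "tag": 0,
    "taggedVersions": "2+", "nullableVersions": "2+", "default": "null",
    "ignorable": true,
    "about": "Principal name forwarded by the originating broker." }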

Re: [DISCUSS] KIP-590: Redirect Zookeeper Mutation Protocols to The Controller

2020-04-08 Thread Agam Brahma
Hi Boyang,

The KIP already talks about incorporating changes for FindCoordinator
request routing; I wanted to point out one additional case where internal
topics are created "as a side effect":

As part of handling metadata requests, if we are looking for metadata for
an internal topic and auto-topic-creation is enabled [1], the broker
currently goes ahead and creates the internal topic in the same way [2] as
it would for the FindCoordinator request.

-Agam

[1]
https://github.com/apache/kafka/blob/cd1e46c8bb46f1e5303c51f476c74e33b522fce8/core/src/main/scala/kafka/server/KafkaApis.scala#L1096
[2]
https://github.com/apache/kafka/blob/cd1e46c8bb46f1e5303c51f476c74e33b522fce8/core/src/main/scala/kafka/server/KafkaApis.scala#L1041
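
In rough, Java-flavored pseudocode (the actual logic is Scala in
KafkaApis.scala; names here are illustrative), the behavior Agam describes
amounts to:

  // A metadata request for a topic that doesn't exist yet may create it as
  // a side effect; internal topics reuse the FindCoordinator creation path
  // with their dedicated replication/partition settings.
  if (!topicExists && autoCreateTopicsEnable) {
      if (isInternal(topic)) {
          createInternalTopic(topic);   // same path as FindCoordinator
      } else {
          createTopic(topic);           // broker-default settings
      }
  }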



On Mon, Apr 6, 2020 at 8:25 PM Boyang Chen 
wrote:

> Thanks for the various inputs everyone!
>
> I think Sonke and Colin's suggestions make sense. The tagged field also
> avoids the unnecessary protocol changes for affected requests. Will add it
> to the header. As for the verification, I'm not sure whether it's necessary
> to require a higher permission level, as it is just an ignorable field?
>
> Guozhang's suggestions about metrics also sound great, I will think through
> the use cases and make some changes to the KIP.
>
> Best,
> Boyang
>
> On Mon, Apr 6, 2020 at 4:28 PM Guozhang Wang  wrote:
>
> > Thanks for the KIP Boyang, this looks good to me. Some minor comments:
> >
> > 1) I think in order to implement the forwarding mechanism the brokers
> needs
> > some purgatory to keep the forwarded requests; if that's true, should we
> > add some broker-side metrics for those purgatories for debugging
> purposes?
> >
> > 2) Should we also consider adding some extra metric counting old
> versioned
> > admin client request rates (this goes beyond
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-511%3A+Collect+and+Expose+Client%27s+Name+and+Version+in+the+Brokers
> > since
> > old versioned client would not report its Kafka version anyways); one use
> > case I can think of besides debugging purposes, is that if we ever
> decide
> > to break compatibility in future versions way after the bridge releases,
> to
> > reject any v1 requests and hence can totally remove this forwarding logic
> > on brokers, we can leverage on this metric to find a safe time to
> upgrade.
> >
> >
> > Guozhang
> >
> >
> >
> > On Mon, Apr 6, 2020 at 3:50 PM Colin McCabe  wrote:
> >
> > > Hi Sönke,
> > >
> > > Yeah, that was my thought too.  The request has already been validated
> on
> > > the forwarding broker, so we don't need to validate it again.  However,
> > you
> > > make a good point that it's unfortunate that the audit log would lose
> the
> > > principal information if we didn't forward it as well.
> > >
> > > Perhaps we could add a tagged field to the request header for all
> > > messages.  This field would contain the principal name.  Of course,
> this
> > > field should only be allowed if the request arrives with the highest
> > > permission levels (Probably ClusterAction on Cluster, since that's what
> > all
> > > the brokers have.)
> > >
> > > regards,
> > > Colin
> > >
> > >
> > > On Mon, Apr 6, 2020, at 14:37, Sönke Liebau wrote:
> > > > Hi Boyang,
> > > >
> > > > thanks for the KIP. Sounds good overall.
> > > >
> > > > @Tom: I thought about your remark a little and think that in
> principle
> > we
> > > > can get away without forwarding the principal at all. Brokers
> currently
> > > > authenticate and authorize requests before performing writes to
> > > Zookeeper -
> > > > as long as we don't change that it shouldn't matter, whether the
> write
> > > goes
> > > > to ZK or the controller, as long as that request is properly
> > > authenticated.
> > > > So the broker would simply authorize and authenticate the original
> > > request
> > > > and then forward it to the controller using its own credentials. And
> > the
> > > > controller could simply trust that this is a bona-fide request,
> because
> > > it
> > > > came from a trusted peer.
> > > >
> > > > I can see two issues here, one is a bit academic I think..
> > > >
> > > > 1. The controller would be unable to write a proper audit log,
> because
> > it
> > > > cannot know who sent the original request.
> > > >
> > > > 2. In theory, clusters could use Plaintext Listeners for inter broker
> > > > traffic because that is on a separate, secure network or similar
> > reasons.
> > > > In that case, the forwarded request would be unauthenticated - then
> > > again,
> > > > so are all other requests between brokers, so nothing lost really.
> > > >
> > > > Overall though, I think that sending the principal along with the
> > request
> > > > shouldn't be a large issue though, it is just two Strings and a
> > boolean.
> > > > And the controller could bypass the PrincipalBuilder and just pass
> the
> > > > Principal that was built and sent by the remote broker straight to
> the
> > > > Authorizer. Since PrincipalBuilders are the same on 

Re: [DISCUSS] KIP-578: Add configuration to limit number of partitions

2020-04-08 Thread Gokul Ramanan Subramanian
Hi. Could you please take a look at this KIP and provide feedback?

Thanks. Regards.

On Wed, Apr 1, 2020 at 4:28 PM Gokul Ramanan Subramanian <
gokul24...@gmail.com> wrote:

> Hi.
>
> I have opened KIP-578, intended to provide a mechanism to limit the number
> of partitions in a Kafka cluster. Kindly provide feedback on the KIP which
> you can find at
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-578%3A+Add+configuration+to+limit+number+of+partitions
>
> I want to specially thank Stanislav Kozlovski who helped in formulating
> some aspects of the KIP.
>
> Many thanks,
>
> Gokul.
>


Re: Want to be Part of Kafka Committer - Siva

2020-04-08 Thread siva deva
Sure. Here is my user-id `3854148`

Thanks,
Siva


On Wed, Apr 8, 2020 at 3:58 PM Matthias J. Sax  wrote:

> Yes, I meant JIRA. You can just create a JIRA account (self service) and
> share your account ID here -- then we can add you to the JIRA contributor
> list -- this allows you to self-assign tickets.
>
>
> -Matthias
>
> On 4/8/20 3:29 PM, siva deva wrote:
> > Sorry, which account ID you mean? can you please guide here. Where
> should I
> > create an account (in JIRA) ?
> >
> > On Wed, Apr 8, 2020 at 2:27 PM Matthias J. Sax  wrote:
> >
> >> Did you create an account already? What is your account ID (we need it
> >> to add you to the list of contributors).
> >>
> >> -Matthias
> >>
> >>
> >>
> >> On 4/8/20 9:41 AM, siva deva wrote:
> >>> It’s mentioned that I need to add to the contributor list for
> assigning a
> >>> JIRA.
> >>>
> >>> Please contact us to be added the contributor list. After that you can
> >>> assign yourself to the JIRA ticket you have started working on so
> others
> >>> will notice.
> >>>
> >>> On Wed, Apr 8, 2020 at 8:58 AM Boyang Chen  >
> >>> wrote:
> >>>
>  Have you checked out https://kafka.apache.org/contributing.html?
> 
>  On Wed, Apr 8, 2020 at 8:14 AM siva deva 
> wrote:
> 
> > Hi,
> >
> > I am a newbie to Kafka Project. Can you please provide JIRA access to
> > starter/newbie tickets, so that I can start working on it.
> >
> > --
> > Best,
> > Siva
> >
> 
> >>
> >>
> >
>
>

-- 
Best,
Siva


Re: Want to be Part of Kafka Committer - Siva

2020-04-08 Thread Matthias J. Sax
Yes, I meant JIRA. You can just create a JIRA account (self service) and
share your account ID here -- then we can add you to the JIRA contributor
list -- this allows you to self-assign tickets.


-Matthias

On 4/8/20 3:29 PM, siva deva wrote:
> Sorry, which account ID do you mean? Can you please guide me here? Where should I
> create an account (in JIRA)?
> 
> On Wed, Apr 8, 2020 at 2:27 PM Matthias J. Sax  wrote:
> 
>> Did you create an account already? What is your account ID (we need it
>> to add you to the list of contributors).
>>
>> -Matthias
>>
>>
>>
>> On 4/8/20 9:41 AM, siva deva wrote:
>>> It’s mentioned that I need to add to the contributor list for assigning a
>>> JIRA.
>>>
>>> Please contact us to be added the contributor list. After that you can
>>> assign yourself to the JIRA ticket you have started working on so others
>>> will notice.
>>>
>>> On Wed, Apr 8, 2020 at 8:58 AM Boyang Chen 
>>> wrote:
>>>
 Have you checked out https://kafka.apache.org/contributing.html?

 On Wed, Apr 8, 2020 at 8:14 AM siva deva  wrote:

> Hi,
>
> I am a newbie to Kafka Project. Can you please provide JIRA access to
> starter/newbie tickets, so that I can start working on it.
>
> --
> Best,
> Siva
>

>>
>>
> 





Re: Want to be Part of Kafka Committer - Siva

2020-04-08 Thread siva deva
Sorry, which account ID do you mean? Can you please guide me here? Where should I
create an account (in JIRA)?

On Wed, Apr 8, 2020 at 2:27 PM Matthias J. Sax  wrote:

> Did you create an account already? What is your account ID (we need it
> to add you to the list of contributors).
>
> -Matthias
>
>
>
> On 4/8/20 9:41 AM, siva deva wrote:
> > It’s mentioned that I need to add to the contributor list for assigning a
> > JIRA.
> >
> > Please contact us to be added the contributor list. After that you can
> > assign yourself to the JIRA ticket you have started working on so others
> > will notice.
> >
> > On Wed, Apr 8, 2020 at 8:58 AM Boyang Chen 
> > wrote:
> >
> >> Have you checked out https://kafka.apache.org/contributing.html?
> >>
> >> On Wed, Apr 8, 2020 at 8:14 AM siva deva  wrote:
> >>
> >>> Hi,
> >>>
> >>> I am a newbie to Kafka Project. Can you please provide JIRA access to
> >>> starter/newbie tickets, so that I can start working on it.
> >>>
> >>> --
> >>> Best,
> >>> Siva
> >>>
> >>
>
>

-- 
Best,
Siva


Jira Cleanup - Kafka on Windows

2020-04-08 Thread Sönke Liebau
All,

I stumbled across a recent issue the other day about Kafka crashing on
Windows, a problem that keeps coming up on Jira.
So I went and had a look and found 16 unresolved tickets about this - some
of them are definitely duplicates, and some might be variations.

KAFKA-1194 - The kafka broker cannot delete the old log files after the
configured time
KAFKA-2170 - 10 LogTest cases failed for file.renameTo failed under windows
KAFKA-5377 - Kafka server process crashing due to access violation (caused
by log cleaner)
KAFKA-6059 - Kafka cant delete old log files on windows
KAFKA-6200 - 0015.timeindex: The process cannot access the
file because it is being used by another process.
KAFKA-6203 - Kafka crash when deleting a topic
KAFKA-6406 - Topic deletion fails and kafka shuts down (on windows only)
KAFKA-6689 - Kafka not release .deleted file.
KAFKA-6983 - Error while deleting segments - The process cannot access the
file because it is being used by another process
KAFKA-7020 - Error when deleting topic with access denied exception
KAFKA-7086 - Kafka server process dies after try deleting old log files
under Windows 10
KAFKA-7575 - 'Error while writing to checkpoint file' Issue
KAFKA-8097 - Kafka broker crashes with java.nio.file.FileSystemException
Exception
KAFKA-8145 - Broker fails with FATAL Shutdown on Windows, log or index
renaming fail
KAFKA-8811 - Can not delete topics in Windows OS
KAFKA-9458 - Kafka crashed in windows environment

KAFKA-1194 seems to me to have seen the most activity and to have been there
first, so I'd propose tracking all work related to file deletion and renames
on Windows under this ticket. Objections?

Also, is anybody aware of any activity around this still going on? Reading
through the comments for a little while, it looked like a fix was close, but
the most current state I could find was a comment from Colin [1] saying that
the proposed fix fell short of a proper solution and that we'd need a
Windows-specific implementation of the Log class. But I was unable to find
anybody picking up this piece of work.
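
For anyone wanting to reproduce the core failure mode: most of these tickets
boil down to Windows refusing to rename or delete a file that is still
memory-mapped, which Kafka's index files are. A minimal sketch (this only
fails on Windows; it succeeds on POSIX filesystems):

  import java.io.RandomAccessFile;
  import java.nio.MappedByteBuffer;
  import java.nio.channels.FileChannel;
  import java.nio.file.Files;
  import java.nio.file.Path;

  public class WindowsRenameRepro {
      public static void main(String[] args) throws Exception {
          Path file = Files.createTempFile("segment", ".index");
          try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "rw")) {
              raf.setLength(1024);
              // Map the file, as Kafka does for offset/time indexes.
              MappedByteBuffer mmap =
                  raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, 1024);
              mmap.put((byte) 1);
              // While the mapping is live, this throws
              // java.nio.file.FileSystemException (access denied) on Windows.
              Files.move(file, file.resolveSibling("segment.index.deleted"));
          }
      }
  }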

Best,
Sönke


[1] https://github.com/apache/kafka/pull/6329#issuecomment-505209147


Build failed in Jenkins: kafka-trunk-jdk8 #4414

2020-04-08 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-6145: KIP-441 Pt. 6 Trigger probing rebalances until group is


--
[...truncated 6.00 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED


Re: [DISCUSS] KIP-585: Conditional SMT

2020-04-08 Thread Christopher Egerton
Hi Tom,

With regards to the potential Transformation::validate method, I don't
quite follow your objections. The AbstractHerder class, ConnectorConfig
class, and embedding of ConfigDefs that happens with the two is all
internal logic and we're allowed to modify it however we want, as long as
it doesn't alter any public-facing APIs (and if it does, we just have to
document those changes in a KIP). We don't have to embed the ConfigDef for
a transformation chain inside another ConfigDef if the API we want to
present to our users doesn't play well with that part of the code base.
Additionally, beyond aligning with the existing API provided for
connectors, another advantage is that it becomes possible to validate
properties for a configuration in the context of all other properties. So,
to be clear, it's not just about preserving what may be perceived as a
superficial similarity; the approach comes with real functional benefits
that can't be provided (at least, not as easily) by dynamic construction
of a ConfigDef object.
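
To make that concrete, here is a minimal sketch of the kind of
cross-property check such a hook would enable. Illustrative only:
Transformation has no validate() method today, and the "mode" and
"pattern" properties are invented for the example.

{code}
import java.util.List;
import java.util.Map;

import org.apache.kafka.common.config.Config;
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.common.config.ConfigValue;

public class CrossPropertyValidationExample {

    private static final ConfigDef CONFIG_DEF = new ConfigDef()
        .define("mode", ConfigDef.Type.STRING, "literal",
                ConfigDef.Importance.HIGH, "Matching mode: literal or regex")
        .define("pattern", ConfigDef.Type.STRING, null,
                ConfigDef.Importance.HIGH, "Pattern; required when mode=regex");

    // The proposed hook: run per-property ConfigDef validation first, then a
    // check that needs the whole configuration in view, which ConfigDef's
    // single-property-at-a-time validators cannot express.
    public Config validate(Map<String, String> props) {
        List<ConfigValue> values = CONFIG_DEF.validate(props);
        if ("regex".equals(props.get("mode")) && props.get("pattern") == null) {
            for (ConfigValue value : values) {
                if (value.name().equals("pattern"))
                    value.addErrorMessage("'pattern' must be set when mode=regex");
            }
        }
        return new Config(values);
    }
}
{code}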

As far as the new proposal goes, I hate to say it, but I think we're stuck
with the worst of both worlds here. Adding the new RecordPredicate
interface seems like it defeats the whole purpose of SMTs, which is to
allow manipulation of a connector's event stream by users who don't
necessarily know how or have the time to write Java code of their own. This
is also why I'm in favor of adding a lightweight DSL for the condition;
emphasizing readability for people who aren't very familiar with Connect
and just want to get something going quickly should be a priority for SMTs.
But if that's not going to happen with this KIP, I'd still choose the
simpler, less-flexible approach initially outlined, in order to keep things
simple for people creating connectors and to let them accomplish what
they want via configuration, not code.

With regards to the question about where the line should be drawn and how
much is too much and comparisons to other stream processing frameworks, I
think the nature of SMTs draws the line quite nicely: you can only process
one message at a time. There's plenty of use cases out there for
heavier-duty processing frameworks like Kafka Streams, with aggregate
operations, joining of streams, expanding a single message into multiple
messages, etc. With SMTs, none of this is possible; the general use case is
to filter and clean a stream of data. If any of the heavier-weight logic
provided by, e.g., Streams, isn't required for a project, it should be
possible to get along with just a collection of sink connectors, source
connectors, a converter or two, and SMTs that smooth over any differences
in data format between what source connectors produce and what sink
connectors expect. This is why I'm comfortable suggesting heavy expansion
of the out-of-the-box SMTs that we provide with Connect; as long as they're
reasonable to configure, they can greatly reduce the operational burden on
anyone running and/or using a Connect cluster since they might entirely
replace additional services that would otherwise be required.

All that said--this is a huge ask of someone who just wants to support the
SMT equivalent of an "if" statement, and it's totally understandable if
that's too much to ask. The one concern I have left is that if we expand
the SMT in the future, there will be compatibility concerns, since SMTs are
pretty tightly coupled with the worker on which they run (although
technically, with some classpath/plugin path shuffling, you can run
different versions of the out-of-the-box SMTs from the ones bundled with
the Connect worker). Someone might write a connector config with the
If/Conditional/Whatever SMT with a condition type that works on one worker,
but that doesn't work on another worker that's running an earlier version
of Connect. This is why I'm in favor of adding extra predicates now instead
of later; if we're going to implement what has the potential to be a bit of
a Swiss army knife SMT with room for future expansion of configuration, and
we can think of a reasonable way to add that functionality now, it seems
better for users and administrators of Connect to try to do that now
instead of later.

Cheers,

Chris

On Wed, Apr 8, 2020 at 9:45 AM Tom Bentley  wrote:

> Since no one objected I've updated the KIP with the revised way to
> configure this transformation.
>
> On Mon, Apr 6, 2020 at 2:52 PM Tom Bentley  wrote:
>
> > To come back about a point Chris made:
> >
> > 1. Instead of the new "ConfigDef config(Map<String, String> props)"
> method,
> >> what would you think about adopting a similar approach as the framework
> >> uses with connectors, by adding a "Config validate(Map<String, String>
> >> props)" method that can perform custom validation outside of what can be
> >> performed by the ConfigDef's single-property-at-a-time validation? It
> may
> >> be a little heavyweight for use with this particular SMT, but it'd
> provide
> >> more flexibility for other SMT implementations and would mirror an API
> >> 

Re: Want to be Part of Kafka Committer - Siva

2020-04-08 Thread Matthias J. Sax
Did you create an account already? What is your account ID? (We need it
to add you to the list of contributors.)

-Matthias



On 4/8/20 9:41 AM, siva deva wrote:
> It’s mentioned that I need to add to the contributor list for assigning a
> JIRA.
> 
> Please contact us to be added to the contributor list. After that you can
> assign yourself to the JIRA ticket you have started working on so others
> will notice.
> 
> On Wed, Apr 8, 2020 at 8:58 AM Boyang Chen 
> wrote:
> 
>> Have you checked out https://kafka.apache.org/contributing.html?
>>
>> On Wed, Apr 8, 2020 at 8:14 AM siva deva  wrote:
>>
>>> Hi,
>>>
>>> I am a newbie to Kafka Project. Can you please provide JIRA access to
>>> starter/newbie tickets, so that I can start working on it.
>>>
>>> --
>>> Best,
>>> Siva
>>>
>>





Re: [DISCUSS] KIP-588: Allow producers to recover gracefully from transaction timeouts

2020-04-08 Thread Boyang Chen
That's a good suggestion Jason. Adding a dedicated PRODUCER_FENCED error
should help distinguish exceptions and could safely mark
INVALID_PRODUCER_EPOCH exception as non-fatal in the new code. Updated the
KIP.

Boyang

On Wed, Apr 8, 2020 at 12:18 PM Jason Gustafson  wrote:

> Hey Boyang,
>
> Thanks for the KIP. I think the main problem we've identified here is that
> the current errors conflate transaction timeouts with producer fencing. The
> first of these ought to be recoverable, but we cannot distinguish it. The
> suggestion to add a new error code makes sense to me, but it leaves this
> bit of awkwardness:
>
> > One extra issue that needs to be addressed is how to handle
> `ProducerFenced` from Produce requests.
>
> In fact, the underlying error code here is INVALID_PRODUCER_EPOCH. It's
> just that the code treats this as equivalent to `ProducerFenced`. One
> thought I had is maybe PRODUCER_FENCED needs to be a separate error code as
> well. After all, only the transaction coordinator knows whether a producer
> has been fenced or not. So maybe the handling could be something like the
> following:
>
> 1. Produce requests may return INVALID_PRODUCER_EPOCH. The producer
> recovers by following KIP-360 logic to see whether the epoch can be bumped.
> If it cannot because the broker version is too old, we fail.
> 2. Transactional APIs may return either TRANSACTION_TIMEOUT or
> PRODUCER_FENCED. In the first case, we do the same as above. We try to
> recover by bumping the epoch. If the error is PRODUCER_FENCED, it is fatal.
> 3. Older brokers may return INVALID_PRODUCER_EPOCH as well from
> transactional APIs. We treat this the same as 1.
>
> What do you think?
>
> Thanks,
> Jason
>
>
>
>
>
>
>
>
>
>
> On Mon, Apr 6, 2020 at 3:41 PM Boyang Chen 
> wrote:
>
> > Yep, updated the KIP, thanks!
> >
> > On Mon, Apr 6, 2020 at 3:11 PM Guozhang Wang  wrote:
> >
> > > Regarding 2), sounds good, I saw UNKNOWN_PRODUCER_ID is properly
> handled
> > > today in produce / add-partitions-to-txn / add-offsets-to-txn / end-txn
> > > responses, so that should be well covered.
> > >
> > > Could you reflect this in the wiki page that the broker should be
> > > responsible for using different error codes given client request
> versions
> > > as well?
> > >
> > >
> > >
> > > Guozhang
> > >
> > > On Mon, Apr 6, 2020 at 9:20 AM Boyang Chen  >
> > > wrote:
> > >
> > > > Thanks Guozhang for the review!
> > > >
> > > > On Sun, Apr 5, 2020 at 5:47 PM Guozhang Wang 
> > wrote:
> > > >
> > > > > Hello Boyang,
> > > > >
> > > > > Thank you for the proposed KIP. Just some minor comments below:
> > > > >
> > > > > 1. Could you also describe which producer APIs could potentially
> > throw
> > > > the
> > > > > new TransactionTimedOutException, and also how should callers
> handle
> > > them
> > > > > differently (i.e. just to make your description more concrete as
> > > > javadocs).
> > > > >
> > > > > Good point, I will add example java doc changes.
> > > >
> > > >
> > > > > 2. It's straight-forward if client is on newer version while
> broker's
> > > on
> > > > > older version; however If the client is on older version while
> > broker's
> > > > on
> > > > > newer version, today would the internal module of producers treat
> it
> > > as a
> > > > > general fatal error or not? If not, should the broker set a
> different
> > > > error
> > > > > code upon detecting older request versions?
> > > > >
> > > > > That's a good suggestion, my understanding is that the prerequisite
> > of
> > > > this change is the new KIP-360 API which is going out with 2.5,
> > > > so we could just return UNKNOWN_PRODUCER_ID instead of
> PRODUCER_FENCED
> > as
> > > > it could be interpreted as abortable error
> > > > in 2.5 producer and retry. Producers older than 2.5 will not be
> > covered.
> > > > WDYT?
> > > >
> > > > >
> > > > > Guozhang
> > > > >
> > > > > On Thu, Apr 2, 2020 at 1:40 PM Boyang Chen <
> > reluctanthero...@gmail.com
> > > >
> > > > > wrote:
> > > > >
> > > > > > Hey there,
> > > > > >
> > > > > > I would like to start discussion for KIP-588:
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-588%3A+Allow+producers+to+recover+gracefully+from+transaction+timeouts
> > > > > >
> > > > > > which aims to improve Producer resilience to transaction timeout
> > due
> > > to
> > > > > > transient system gaps.
> > > > > >
> > > > > > Best,
> > > > > > Boyang
> > > > > >
> > > > >
> > > > >
> > > > > --
> > > > > -- Guozhang
> > > > >
> > > >
> > >
> > >
> > > --
> > > -- Guozhang
> > >
> >
>


[jira] [Resolved] (KAFKA-2493) Add ability to fetch only keys in consumer

2020-04-08 Thread Jira


 [ 
https://issues.apache.org/jira/browse/KAFKA-2493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau resolved KAFKA-2493.
-
Resolution: Won't Fix

As this has been dormant for a long time and no one reacted to my comment, I'll 
close this for now.

> Add ability to fetch only keys in consumer
> --
>
> Key: KAFKA-2493
> URL: https://issues.apache.org/jira/browse/KAFKA-2493
> Project: Kafka
>  Issue Type: Wish
>  Components: consumer
>Reporter: Ivan Balashov
>Assignee: Neha Narkhede
>Priority: Minor
>
> Often clients need to find out which offsets contain necessary data. One of 
> the possible solutions would be to iterate with small fetch size. However, 
> this still leads to unnecessary data being transmitted in case keys already 
> reference searched data. The ability to fetch keys only would simplify search 
> for the necessary offset.
> Of course, there can be other scenarios where consumer needs keys only, 
> without message part.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-976) Order-Preserving Mirror Maker Testcase

2020-04-08 Thread Jira


 [ 
https://issues.apache.org/jira/browse/KAFKA-976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau resolved KAFKA-976.

Resolution: Won't Fix

As this has been dormant for a long time and no one reacted to my comment, I'll 
close this for now.

> Order-Preserving Mirror Maker Testcase
> --
>
> Key: KAFKA-976
> URL: https://issues.apache.org/jira/browse/KAFKA-976
> Project: Kafka
>  Issue Type: Test
>Reporter: Guozhang Wang
>Assignee: John Fung
>Priority: Major
> Attachments: kafka-976-v1.patch
>
>
> A new testcase (5007) for mirror_maker_testsuite is needed for the 
> key-dependent order-preserving mirror maker, this is related to KAFKA-957.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-2333) Add rename topic support

2020-04-08 Thread Jira


 [ 
https://issues.apache.org/jira/browse/KAFKA-2333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau resolved KAFKA-2333.
-
Resolution: Won't Fix

As this has been dormant for a long time and no one reacted to my comment, I'll 
close this for now.

> Add rename topic support
> 
>
> Key: KAFKA-2333
> URL: https://issues.apache.org/jira/browse/KAFKA-2333
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Grant Henke
>Assignee: Grant Henke
>Priority: Major
>
> Add the ability to change the name of existing topics. 
> This likely needs an associated KIP. This Jira will be updated when one is 
> created.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-3925) Default log.dir=/tmp/kafka-logs is unsafe

2020-04-08 Thread Jira


 [ 
https://issues.apache.org/jira/browse/KAFKA-3925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sönke Liebau resolved KAFKA-3925.
-
Resolution: Won't Fix

As this has been dormant for a long time and no one reacted to my comment, I'll 
close this for now.

> Default log.dir=/tmp/kafka-logs is unsafe
> -
>
> Key: KAFKA-3925
> URL: https://issues.apache.org/jira/browse/KAFKA-3925
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 0.10.0.0
> Environment: Various, depends on OS and configuration
>Reporter: Peter Davis
>Priority: Major
>
> Many operating systems are configured to delete files under /tmp.  For 
> example Ubuntu has 
> [tmpreaper|http://manpages.ubuntu.com/manpages/wily/man8/tmpreaper.8.html], 
> others use tmpfs, others delete /tmp on startup. 
> Defaults are OK to make getting started easier but should not be unsafe 
> (risking data loss). 
> Something under /var would be a better default log.dir under *nix.  Or 
> relative to the Kafka bin directory to avoid needing root.  
> If the default cannot be changed, I would suggest a special warning printed to 
> the console on broker startup if log.dir is under /tmp. 
> See [users list 
> thread|http://mail-archives.apache.org/mod_mbox/kafka-users/201607.mbox/%3cCAD5tkZb-0MMuWJqHNUJ3i1+xuNPZ4tnQt-RPm65grxE0=0o...@mail.gmail.com%3e].
>   I've also been bitten by this. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-588: Allow producers to recover gracefully from transaction timeouts

2020-04-08 Thread Jason Gustafson
Hey Boyang,

Thanks for the KIP. I think the main problem we've identified here is that
the current errors conflate transaction timeouts with producer fencing. The
first of these ought to be recoverable, but we cannot distinguish it. The
suggestion to add a new error code makes sense to me, but it leaves this
bit of awkwardness:

> One extra issue that needs to be addressed is how to handle
`ProducerFenced` from Produce requests.

In fact, the underlying error code here is INVALID_PRODUCER_EPOCH. It's
just that the code treats this as equivalent to `ProducerFenced`. One
thought I had is maybe PRODUCER_FENCED needs to be a separate error code as
well. After all, only the transaction coordinator knows whether a producer
has been fenced or not. So maybe the handling could be something like the
following:

1. Produce requests may return INVALID_PRODUCER_EPOCH. The producer
recovers by following KIP-360 logic to see whether the epoch can be bumped.
If it cannot because the broker version is too old, we fail.
2. Transactional APIs may return either TRANSACTION_TIMEOUT or
PRODUCER_FENCED. In the first case, we do the same as above. We try to
recover by bumping the epoch. If the error is PRODUCER_FENCED, it is fatal.
3. Older brokers may return INVALID_PRODUCER_EPOCH as well from
transactional APIs. We treat this the same as 1.
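
To make the proposed handling concrete, here is a rough sketch of the
client-side decision logic. Sketch only: TRANSACTION_TIMEOUT and a
standalone PRODUCER_FENCED code are part of the proposal, not the existing
protocol, and the enums and outcomes below are placeholders.

{code}
public class ProposedErrorHandlingSketch {

    enum ProposedError { INVALID_PRODUCER_EPOCH, TRANSACTION_TIMEOUT, PRODUCER_FENCED }
    enum Outcome { BUMP_EPOCH_AND_RETRY, FATAL }

    static Outcome handle(ProposedError error, boolean epochBumpSupported) {
        switch (error) {
            case INVALID_PRODUCER_EPOCH: // produce responses, and older brokers (cases 1 and 3)
            case TRANSACTION_TIMEOUT:    // recoverable timeout from transactional APIs (case 2)
                // KIP-360: re-initialize the producer with a bumped epoch if
                // the broker is new enough; otherwise the error is fatal.
                return epochBumpSupported ? Outcome.BUMP_EPOCH_AND_RETRY : Outcome.FATAL;
            case PRODUCER_FENCED:        // only the coordinator knows the producer was fenced
            default:
                return Outcome.FATAL;
        }
    }
}
{code}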

What do you think?

Thanks,
Jason










On Mon, Apr 6, 2020 at 3:41 PM Boyang Chen 
wrote:

> Yep, updated the KIP, thanks!
>
> On Mon, Apr 6, 2020 at 3:11 PM Guozhang Wang  wrote:
>
> > Regarding 2), sounds good, I saw UNKNOWN_PRODUCER_ID is properly handled
> > today in produce / add-partitions-to-txn / add-offsets-to-txn / end-txn
> > responses, so that should be well covered.
> >
> > Could you reflect this in the wiki page that the broker should be
> > responsible for using different error codes given client request versions
> > as well?
> >
> >
> >
> > Guozhang
> >
> > On Mon, Apr 6, 2020 at 9:20 AM Boyang Chen 
> > wrote:
> >
> > > Thanks Guozhang for the review!
> > >
> > > On Sun, Apr 5, 2020 at 5:47 PM Guozhang Wang 
> wrote:
> > >
> > > > Hello Boyang,
> > > >
> > > > Thank you for the proposed KIP. Just some minor comments below:
> > > >
> > > > 1. Could you also describe which producer APIs could potentially
> throw
> > > the
> > > > new TransactionTimedOutException, and also how should callers handle
> > them
> > > > differently (i.e. just to make your description more concrete as
> > > javadocs).
> > > >
> > > > Good point, I will add example java doc changes.
> > >
> > >
> > > > 2. It's straight-forward if client is on newer version while broker's
> > on
> > > > older version; however If the client is on older version while
> broker's
> > > on
> > > > newer version, today would the internal module of producers treat it
> > as a
> > > > general fatal error or not? If not, should the broker set a different
> > > error
> > > > code upon detecting older request versions?
> > > >
> > > > That's a good suggestion, my understanding is that the prerequisite
> of
> > > this change is the new KIP-360 API which is going out with 2.5,
> > > so we could just return UNKNOWN_PRODUCER_ID instead of PRODUCER_FENCED
> as
> > > it could be interpreted as abortable error
> > > in 2.5 producer and retry. Producers older than 2.5 will not be
> covered.
> > > WDYT?
> > >
> > > >
> > > > Guozhang
> > > >
> > > > On Thu, Apr 2, 2020 at 1:40 PM Boyang Chen <
> reluctanthero...@gmail.com
> > >
> > > > wrote:
> > > >
> > > > > Hey there,
> > > > >
> > > > > I would like to start discussion for KIP-588:
> > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-588%3A+Allow+producers+to+recover+gracefully+from+transaction+timeouts
> > > > >
> > > > > which aims to improve Producer resilience to transaction timeout
> due
> > to
> > > > > transient system gaps.
> > > > >
> > > > > Best,
> > > > > Boyang
> > > > >
> > > >
> > > >
> > > > --
> > > > -- Guozhang
> > > >
> > >
> >
> >
> > --
> > -- Guozhang
> >
>


[jira] [Created] (KAFKA-9838) Add additional log concurrency test cases

2020-04-08 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-9838:
--

 Summary: Add additional log concurrency test cases
 Key: KAFKA-9838
 URL: https://issues.apache.org/jira/browse/KAFKA-9838
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson
Assignee: Jason Gustafson


A couple of recent bug fixes affecting log read semantics were due to race 
conditions with concurrent operations: see KAFKA-9807 and KAFKA-9835. We need 
better testing of concurrent operations on the log to know if there are 
additional problems and to prevent future regressions.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9835) Race condition with concurrent write allows reads above high watermark

2020-04-08 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-9835.

Resolution: Fixed

> Race condition with concurrent write allows reads above high watermark
> --
>
> Key: KAFKA-9835
> URL: https://issues.apache.org/jira/browse/KAFKA-9835
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.2.2, 2.3.1, 2.4.1
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
> Fix For: 2.4.2, 2.5.1
>
>
> Kafka's log implementation serializes all writes using a lock, but allows 
> multiple concurrent reads while that lock is held. The `FileRecords` class 
> contains the core implementation. Reads to the log create logical slices of 
> `FileRecords` which are then passed to the network layer for sending. An 
> abridged version of the implementation of `slice` is provided below:
> {code}
> public FileRecords slice(int position, int size) throws IOException {
>     int end = this.start + position + size;
>     // handle integer overflow or if end is beyond the end of the file
>     if (end < 0 || end >= start + sizeInBytes())
>         end = start + sizeInBytes();
>     return new FileRecords(file, channel, this.start + position, end, true);
> }
> {code}
> The `size` parameter here is typically derived from the fetch size, but is 
> upper-bounded with respect to the high watermark. The two calls to 
> `sizeInBytes` here are problematic because the size of the file may change in 
> between them. Specifically a concurrent write may increase the size of the 
> file after the first call to `sizeInBytes` but before the second one. In the 
> worst case, when `size` defines the limit of the high watermark, this can 
> lead to a slice containing uncommitted data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk8 #4413

2020-04-08 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-8890: Make SSL context/engine configuration extensible (KIP-519)


--
[...truncated 2.99 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] 

Re: [DISCUSS] KIP-587 Suppress detailed responses for handled exceptions in security-sensitive environments

2020-04-08 Thread Connor Penhale
Hi All!

Is there any additional feedback that the community can provide me on the KIP? 
Has anyone else run into requirements like this, or maybe my customer is the 
only one :)? If the scope looks good, is it time to call a vote? Or should I 
start porting my 2.0 code to 2.6 to show examples?

Thanks!
Connor

On 4/6/20, 9:03 AM, "Connor Penhale"  wrote:

Hi Colin,

We did not find a specific security vulnerability. Our customer had 
auditors in their environment, and they identified Kafka Connect as out of 
compliance with their particular standards, something that happens all the time 
for REST-based applications. What these security auditors expected Kafka 
Connect to be able to do was tune the response. As Kafka Connect could not 
provide this functionality, I'm proposing this KIP. Does that make sense? 
Should I still go through the process of a security disclosure?

Our particular need was around suppressing exceptions in the "public" 
response. Because Kafka Connect was returning these exceptions without 
authentication, they became a public endpoint that the auditors could fuzz 
and show to be out of compliance. Keeping these exceptions in the logs, as 
proposed in the KIP, makes sense to me as an operator.
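
As a concrete illustration of the behavior being proposed (a sketch only,
not Connect's actual ConnectExceptionMapper, and the suppression flag here
is a hypothetical worker config):

{code}
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.ExceptionMapper;
import javax.ws.rs.ext.Provider;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@Provider
public class SuppressingExceptionMapper implements ExceptionMapper<Exception> {

    private static final Logger log = LoggerFactory.getLogger(SuppressingExceptionMapper.class);

    // Hypothetical flag; in the KIP this would come from the worker config.
    private final boolean suppressDetails = true;

    @Override
    public Response toResponse(Exception e) {
        // The full stack trace stays in the worker logs either way.
        log.error("Exception thrown while handling REST request", e);
        String body = suppressDetails
            ? "Detailed exception information has been suppressed. Please see logs."
            : e.getMessage();
        return Response.status(Response.Status.INTERNAL_SERVER_ERROR).entity(body).build();
    }
}
{code}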

I only mention PCI-DSS as this was the kind of environment my customer had 
that was making the request for being able to tune the response.

Thanks!
Connor

---
Connor Penhale | Enterprise Architect, OpenLogic (https://openlogic.com/)
Perforce (https://www.perforce.com/)
Support: +1 866.399.6736


On 4/3/20, 3:24 PM, "Colin McCabe"  wrote:

Also, if you do find a security issue, the process to follow is here: 
https://kafka.apache.org/project-security.html .

best,
Colin


On Fri, Apr 3, 2020, at 14:20, Colin McCabe wrote:
> Hi Connor,
>
> If we are putting security-sensitive information into REST responses,
> that is a bug that needs to be fixed, not worked around with a
> configuration option.  Do you have an example of security-sensitive
> information appearing in the exception text?  Why do you feel that
> PCI-DSS requires this change?
>
> By the way, the same concern applies to log messages.  We do not log
> sensitive information such as passwords to the log4j output.  If you
> know of that happening somewhere, please file a bug so it can be 
fixed.
>
> best,
> Colin
>
>
> On Fri, Apr 3, 2020, at 12:56, Connor Penhale wrote:
> > Hi Chris!
> >
> > Thanks for your feedback! I'll number my responses to your 
questions / thoughts.
> >
> > 1. Apologies on that lack of clarity! I settled on "Detailed 
exception
> > information has been suppressed. Please see logs."
> > 
(https://github.com/apache/kafka/pull/8355/files#diff-64c265986e7bbe40cdd79f831e961907R34).
 Should I update the KIP to reflect what I've already thought about? It's my 
first one, so I'm not sure what the process should be for editing.
> >
> > 2. I was unaware of the REST extensions! I'll see if I can implement
> > the same behavior as a REST extension. I agree that the KIP still 
has
> > merit, regardless of the feasibility of the extension, but in 
regards
> > to the 5th thought, this might make that decision easier.
> >
> > 3. I agree with your suggestion here. Absolutely ready to take the
> > community feedback on what makes sense here.
> >
> > 4. I should note that while I emphasized uncaught exceptions, I mean
> > all exceptions handled by the ExceptionMapper, including
> > ConnectRestExceptions. An example of this is here:
> > 
https://github.com/apache/kafka/pull/8355/files#diff-64c265986e7bbe40cdd79f831e961907R46
> >
> > 5. I didn't know how specific I should get if I had already taken a
> > stab at implementing! I'm happy to edit this in whatever way we 
want to
> > go about it.
> >
> > Let me know if anyone has any other questions or feedback!
> >
> >
> > Thanks!
> > Connor
> >
> > On 4/2/20, 3:58 PM, "Christopher Egerton"  
wrote:
> >
> > Hi Connor,
> >
> > Great stuff! I generally like being able to see the stack trace 
of an
> > exception directly via the REST API but can definitely 
understand the
> > security concerns here. I've got a few questions/remarks about 
the KIP and
> > would be interested in your thoughts:
> >
> > 1. The KIP mentions a SUPRESSED_EXCEPTION_MESSAGE, but doesn't 
actually
> > outline what this message would actually be. It'd be great to 
see the
> > actual message in the KIP since people may have thoughts on 

Jenkins build is back to normal : kafka-trunk-jdk11 #1334

2020-04-08 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-585: Conditional SMT

2020-04-08 Thread Tom Bentley
Since no one objected I've updated the KIP with the revised way to
configure this transformation.

On Mon, Apr 6, 2020 at 2:52 PM Tom Bentley  wrote:

> To come back about a point Chris made:
>
> 1. Instead of the new "ConfigDef config(Map<String, String> props)" method,
>> what would you think about adopting a similar approach as the framework
>> uses with connectors, by adding a "Config validate(Map<String, String>
>> props)" method that can perform custom validation outside of what can be
>> performed by the ConfigDef's single-property-at-a-time validation? It may
>> be a little heavyweight for use with this particular SMT, but it'd provide
>> more flexibility for other SMT implementations and would mirror an API
>> that
>> developers targeting the framework are likely already familiar with.
>>
>
> The validate() + config() approach taken for Connectors doesn't quite work
> for Transformations.
>
> The default Connector.validate() basically just calls
> `config().validate(connectorConfigs)` and returns a Config. Separately, the
> AbstractHerder also calls `config()` and uses the Config and ConfigDef to
> build the ConfigInfos. The contract is not really defined well enough in
> the javadoc, but this means a connector can use validate() to build the
> ConfigDef which it will return from config().
>
> For Transformations we don't really want validate() to perform validation
> at all since ultimately the transformation's config will be embedded in the
> connector's and will be validated by the connector itself. It wouldn't be
> harmful if it _did_ perform validation, just unnecessary. But having a
> validate() method that ought not to validate() seems like a recipe for
> confusion.
>
> Also problematic is that there's no use for the Config returned from
> Transformations.validate().
>
> So I'm not convinced that the superficial similarity really gains anything.
>
> Kind regards,
>
> Tom
>
> On Mon, Apr 6, 2020 at 2:29 PM Tom Bentley  wrote:
>
>> Hi,
>>
>> Hi all,
>>
>> Thanks for the discussion so far.
>>
>> It seems a bit weird to me that when configuring the Conditional SMT with
>> a DSL you would use a concise, intuitive DSL for expressing the condition,
>> but not for the transforms that it's guarding. It also seems natural, if
>> you support this for conditionally applying SMTs, that you'd soon want to
>> support the same thing for a generic filter transformer. Then, if you can
>> configure a filter transformation using this DSL, it becomes odd that you
>> can't do this for mapping transformations. I think it would be a mistake to
>> go specifying an expression language for conditions when really that might
>> just be part of a language for transformations.
>>
>> I think it would be possible today to write an SMT which allowed you to
>> express the transformation in a DSL. Concretely, it's possible to imagine a
>> single transformation taking a DSL something like this:
>>
>>   compose(
>> Flatten.Key(delimiter: ':'),
>> If(condition: TopicMatches(pattern: "blah-.*"),
>>then: Flatten.Value(delimiter: '/')))
>>
>> All I've really done here, beyond what was already proposed, is rename
>> Conditional to If, imagine a couple more bits of syntax
>> (SMTs being constructed by class name and named argument invocation
>> syntax to support SMT constructors) and add a higher level compose()
>> function for chaining SMTs (which could easily be replaced with brace
>> delimited blocks).
>>
>> That may be a discussion we should have, but I think in _this_ KIP we
>> should focus on the conditional part, since the above example hopefully
>> shows that it would be possible to reuse it in a DSL if there was appetite
>> for that.
>>
>> With that in mind, and since my original suggestion for an abbreviated
>> config syntax didn't appeal to people, I propose that we stick with the
>> existing norms for configuring this.
>> The previous example would look like this:
>>
>> transformation: flattenKey,if
>> transformation.flattenKey.type: Flatten.Key
>> transformation.flattenKey.delimiter: :
>> transformation.if.type: If
>> transformation.if.condition.type: TopicMatches
>> transformation.if.condition.pattern: blah-.*
>> transformation.if.then: flattenValue
>> transformation.if.then.flattenValue.type: Flatten.Value
>> transformation.if.then.flattenValue.delimiter: /
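>>
>> As a rough sketch (illustrative only; the configuration plumbing is
>> elided and the final KIP may differ), the If transformation guarding
>> another transformation could look something like this in Java:
>>
>> {code}
>> import java.util.regex.Pattern;
>>
>> import org.apache.kafka.connect.connector.ConnectRecord;
>> import org.apache.kafka.connect.transforms.Transformation;
>>
>> public abstract class If<R extends ConnectRecord<R>> implements Transformation<R> {
>>
>>     private Pattern topicPattern;    // transformation.if.condition.pattern
>>     private Transformation<R> then;  // transformation.if.then
>>
>>     // configure(), config() and close() omitted for brevity.
>>
>>     @Override
>>     public R apply(R record) {
>>         // Apply the guarded transformation only when the condition holds;
>>         // otherwise pass the record through unchanged.
>>         if (topicPattern.matcher(record.topic()).matches())
>>             return then.apply(record);
>>         return record;
>>     }
>> }
>> {code}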
>>
>> Also, I'm inclined to think we should stick to just supporting
>> TopicMatches and Not, since we've not identified an actual need for
>> 'has-header', 'or' and 'and'.
>>
>> An example of the usage of Not would look like this:
>>
>> transformation: flattenKey,if
>> transformation.flattenKey.type: Flatten.Key
>> transformation.flattenKey.delimiter: :
>> transformation.if.type: If
>> transformation.if.condition.type: Not
>> transformation.if.condition.operand: startsWithBlah
>> transformation.if.condition.operand.startsWithBlah.type: TopicMatches
>> transformation.if.condition.operand.startsWithBlah.pattern: blah-.*
>> transformation.if.then: flattenValue
>> 

Re: Want to be Part of Kafka Committer - Siva

2020-04-08 Thread siva deva
It’s mentioned that I need to add to the contributor list for assigning a
JIRA.

Please contact us to be added to the contributor list. After that you can
assign yourself to the JIRA ticket you have started working on so others
will notice.

On Wed, Apr 8, 2020 at 8:58 AM Boyang Chen 
wrote:

> Have you checked out https://kafka.apache.org/contributing.html?
>
> On Wed, Apr 8, 2020 at 8:14 AM siva deva  wrote:
>
> > Hi,
> >
> > I am a newbie to Kafka Project. Can you please provide JIRA access to
> > starter/newbie tickets, so that I can start working on it.
> >
> > --
> > Best,
> > Siva
> >
>
-- 
Best,
Siva


Re: KIP Access

2020-04-08 Thread Jun Rao
Hi, Jordan,

Thanks for your interest. Just gave you the wiki permissions.

Jun

On Wed, Apr 8, 2020 at 8:22 AM Jordan Moore  wrote:

> To whom it may concern,
>
> I (Jordan Moore) would like to request access to create a KIP for
> KAFKA-9774.
>
> *wiki id: *cricket007
>
> Regards, and stay well,
> Jordan M.
>


Re: Want to be Part of Kafka Committer - Siva

2020-04-08 Thread Boyang Chen
Have you checked out https://kafka.apache.org/contributing.html?

On Wed, Apr 8, 2020 at 8:14 AM siva deva  wrote:

> Hi,
>
> I am a newbie to Kafka Project. Can you please provide JIRA access to
> starter/newbie tickets, so that I can start working on it.
>
> --
> Best,
> Siva
>


Build failed in Jenkins: kafka-trunk-jdk8 #4412

2020-04-08 Thread Apache Jenkins Server
See 


Changes:

[github] HOTFIX: exclude ConsumerCoordinator from NPathComplexity check (#8447)


--
[...truncated 5.99 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain 

Want to be Part of Kafka Committer - Siva

2020-04-08 Thread siva deva
Hi,

I am a newbie to Kafka Project. Can you please provide JIRA access to
starter/newbie tickets, so that I can start working on it.

-- 
Best,
Siva


KIP Access

2020-04-08 Thread Jordan Moore
To whom it may concern,

I (Jordan Moore) would like to request access to create a KIP for KAFKA-9774.

*wiki id: *cricket007

Regards, and stay well,
Jordan M.


[jira] [Resolved] (KAFKA-8890) KIP-519: Make SSL context/engine configuration extensible

2020-04-08 Thread Rajini Sivaram (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-8890.
---
Fix Version/s: 2.6.0
   Resolution: Fixed

> KIP-519: Make SSL context/engine configuration extensible
> -
>
> Key: KAFKA-8890
> URL: https://issues.apache.org/jira/browse/KAFKA-8890
> Project: Kafka
>  Issue Type: New Feature
>  Components: security
>Reporter: Maulin Vasavada
>Priority: Minor
> Fix For: 2.6.0
>
>
> This is to track changes for KIP-519: Make SSL context/engine configuration 
> extensible 
> ([https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=128650952])



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9837) New RPC for notifying controller of failed replica

2020-04-08 Thread David Arthur (Jira)
David Arthur created KAFKA-9837:
---

 Summary: New RPC for notifying controller of failed replica
 Key: KAFKA-9837
 URL: https://issues.apache.org/jira/browse/KAFKA-9837
 Project: Kafka
  Issue Type: Sub-task
  Components: controller, core
Reporter: David Arthur
 Fix For: 2.6.0


This is the tracking ticket for KIP-589. For the bridge release, brokers should 
no longer use ZooKeeper to notify the controller that a log dir has failed. It 
should instead use an RPC mechanism.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] 2.5.0 RC3

2020-04-08 Thread David Arthur
Passing Jenkins build on 2.5 branch:
https://builds.apache.org/job/kafka-2.5-jdk8/90/

On Wed, Apr 8, 2020 at 12:03 AM David Arthur  wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the fourth candidate for release of Apache Kafka 2.5.0.
>
> * TLS 1.3 support (1.2 is now the default)
> * Co-groups for Kafka Streams
> * Incremental rebalance for Kafka Consumer
> * New metrics for better operational insight
> * Upgrade Zookeeper to 3.5.7
> * Deprecate support for Scala 2.11
>
> Release notes for the 2.5.0 release:
> https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/RELEASE_NOTES.html
>
> *** Please download, test and vote by Friday April 10th 5pm PT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> https://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> * Javadoc:
> https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/javadoc/
>
> * Tag to be voted upon (off 2.5 branch) is the 2.5.0 tag:
> https://github.com/apache/kafka/releases/tag/2.5.0-rc3
>
> * Documentation:
> https://kafka.apache.org/25/documentation.html
>
> * Protocol:
> https://kafka.apache.org/25/protocol.html
>
> Successful Jenkins builds to follow
>
> Thanks!
> David
>


-- 
David Arthur


[jira] [Created] (KAFKA-9836) org.apache.zookeeper.KeeperException$SessionMovedException: KeeperErrorCode = Session moved for /controller_epoch

2020-04-08 Thread Jagadish (Jira)
Jagadish created KAFKA-9836:
---

 Summary: 
org.apache.zookeeper.KeeperException$SessionMovedException: KeeperErrorCode = 
Session moved for /controller_epoch
 Key: KAFKA-9836
 URL: https://issues.apache.org/jira/browse/KAFKA-9836
 Project: Kafka
  Issue Type: Bug
  Components: admin
Affects Versions: 2.3.0
Reporter: Jagadish
 Fix For: 2.3.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9835) Race condition with concurrent write allows reads above high watermark

2020-04-08 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-9835:
--

 Summary: Race condition with concurrent write allows reads above 
high watermark
 Key: KAFKA-9835
 URL: https://issues.apache.org/jira/browse/KAFKA-9835
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
Assignee: Jason Gustafson


Kafka's log implementation serializes all writes using a lock, but allows 
multiple concurrent reads while that lock is held. The `FileRecords` class 
contains the core implementation. Reads to the log create logical slices of 
`FileRecords` which are then passed to the network layer for sending. An 
abridged version of the implementation of `slice` is provided below:

{code}
public FileRecords slice(int position, int size) throws IOException {
    int end = this.start + position + size;
    // handle integer overflow or if end is beyond the end of the file
    if (end < 0 || end >= start + sizeInBytes())
        end = start + sizeInBytes();
    return new FileRecords(file, channel, this.start + position, end, true);
}
{code}

The `size` parameter here is typically derived from the fetch size, but is 
upper-bounded with respect to the high watermark. The two calls to 
`sizeInBytes` here are problematic because the size of the file may change in 
between them. Specifically a concurrent write may increase the size of the file 
after the first call to `sizeInBytes` but before the second one. In the worst 
case, when `size` defines the limit of the high watermark, this can lead to a 
slice containing uncommitted data.
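
One way to close the race, shown here purely as an illustration (the actual 
fix may differ), is to read the size once and derive the bound from that 
single snapshot:

{code}
public FileRecords slice(int position, int size) throws IOException {
    // Read the size once; a concurrent append after this point cannot
    // move the end of this slice past the snapshotted value.
    int currentSizeInBytes = sizeInBytes();
    int end = this.start + position + size;
    // handle integer overflow or if end is beyond the snapshotted end
    if (end < 0 || end >= start + currentSizeInBytes)
        end = start + currentSizeInBytes;
    return new FileRecords(file, channel, this.start + position, end, true);
}
{code}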



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk11 #1333

2020-04-08 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-9801: Still trigger rebalance when static member joins in


--
[...truncated 1.62 MB...]

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 

Build failed in Jenkins: kafka-trunk-jdk8 #4411

2020-04-08 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-9801: Still trigger rebalance when static member joins in


--
[...truncated 1.62 MB...]
org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

> Task

Re: Re: Re: Re: Re: Re: [Vote] KIP-571: Add option to force remove members in StreamsResetter

2020-04-08 Thread feyman2009
Hi Boyang,
Thanks for reminding me of that!
I'm not sure about the convention; I thought we would need to re-collect 
votes if the KIP has changed.
Let's leave the vote thread open for two more days; if there are no objections, 
I will take it as approved and update the PR accordingly.

Thanks!
Feyman



--
From: Boyang Chen 
Sent: Wednesday, April 8, 2020, 12:42
To: dev ; feyman2009 
Subject: Re: Re: Re: Re: Re: Re: [Vote] KIP-571: Add option to force remove members in 
StreamsResetter

You should already have enough votes if I'm counting correctly (Guozhang, John, 
Matthias).
On Tue, Apr 7, 2020 at 6:59 PM feyman2009  wrote:
Hi, Boyang
 I think Matthias's proposal makes sense, but we can use the admin tool for 
this scenario, as Boyang mentioned, or follow up later, so I prefer to keep this 
KIP unchanged to minimize its scope.
 Calling for votes!

 Thanks!
 Feyman

 --
 From: Boyang Chen 
 Sent: Wednesday, April 8, 2020, 02:15
 To: dev 
 Subject: Re: Re: Re: Re: Re: Re: [Vote] KIP-571: Add option to force remove members in 
StreamsResetter

 Hey Feyman,

 I think Matthias' suggestion is optional, and we could just use the admin tool
 to remove single static members as well.

 Boyang

 On Tue, Apr 7, 2020 at 11:00 AM Matthias J. Sax  wrote:

 > > Would you mind elaborating on why we still need that
 >
 > Sure.
 >
 > For static membership, the session timeout is usually set quite high.
 > This makes scaling in an application tricky: if you shut down one
 > instance, no rebalance would happen until `session.timeout.ms` hits.
 > This is specific to Kafka Streams, because when a Kafka Streams client is
 > closed, it does _not_ send a `LeaveGroupRequest`. Hence, the
 > corresponding partitions would not be processed for a long time and
 > would thus fall behind.
 >
 > Given that each instance will have a unique `instance.id` provided by
 > the user, we could allow users to remove the instance they want to
 > decommission from the consumer group without the need to wait for
 > `session.timeout.ms`.
 >
 > Hence, it's not an application reset scenario for which one wants to
 > remove all members, but a scaling-in scenario. For dynamic membership,
 > this issue usually does not occur because the `session.timeout.ms` is
 > set to a fairly low value and a rebalance would happen quickly after an
 > instance is decommissioned.
 >
 > Does this make sense?
 >
 > As said before, we may or may not include this in this KIP. It's up to
 > you if you want to address it or not.
 >
 >
 > -Matthias
 >
 >
 >
 > On 4/7/20 7:12 AM, feyman2009 wrote:
 > > Hi, Matthias
 > > Thanks a lot!
 > > So you do not plan to support removing a _single static_ member via
 > `StreamsResetter`?
 > > =>
 > > Would you mind elaborating on why we still need that if we are
 > able to batch-remove active members with the AdminClient?
 > >
 > > Thanks!
 > >
 > > Feyman
 > >  --
 > > From: Matthias J. Sax 
 > > Sent: Tuesday, April 7, 2020, 08:25
 > > To: dev 
 > > Subject: Re: Re: Re: Re: Re: [Vote] KIP-571: Add option to force remove members
 > in StreamsResetter
 > >
 > > Overall LGTM.
 > >
 > > +1 (binding)
 > >
 > > So you do not plan to support removing a _single static_ member via
 > > `StreamsResetter`? We can of course still add this as a follow-up, but it
 > > might be nice to just add it to this KIP right away. Up to you if you
 > > want to include it or not.
 > >
 > >
 > > -Matthias
 > >
 > >
 > >
 > > On 3/30/20 8:16 AM, feyman2009 wrote:
 > >> Hi, Boyang
 > >> Thanks a lot, that makes sense; we should not expose member.id in
 > MemberToRemove anymore. I have fixed it in the KIP.
 > >>
 > >>
 > >> Feyman
 > >> --
 > >> From: Boyang Chen 
 > >> Sent: Sunday, March 29, 2020, 01:45
 > >> To: dev ; feyman2009 
 > >> Subject: Re: Re: Re: Re: [Vote] KIP-571: Add option to force remove members in
 > StreamsResetter
 > >>
 > >> Hey Feyman,
 > >>
 > >> Thanks for the update. I assume we would rely entirely on the internal
 > changes for `removeMemberFromGroup` by sending a DescribeGroup request
 > first. With that in mind, I don't think we need to add member.id to
 > MemberToRemove anymore, as it is public-facing and users will only
 > configure group.instance.id?
 > >> On Fri, Mar 27, 2020 at 5:04 PM feyman2009
 >  wrote:
 > >> Bump, can anyone kindly take a look at the updated KIP-571? Thanks!
 > >>
 > >>
 > >>  --
 > >>  From: feyman2009 
 > >>  Sent: Monday, March 23, 2020, 08:51
 > >>  To: dev 
 > >>  Subject: Re: Re: Re: Re: [Vote] KIP-571: Add option to force remove members in
 > StreamsResetter
 > >>
 > >>  Hi, team
 > >>  I have updated KIP-571 according to our latest discussion
 > results; would you mind taking a look? Thanks!
 > >>
 > >>  Feyman
 > >>
 > >>
 > >>  
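
For readers following this thread: below is a minimal sketch, assuming the Kafka 2.4+ admin client APIs, of what removing a single static member looks like today, per the scaling-in scenario Matthias describes above. The bootstrap server, the group id "my-streams-app", and the group.instance.id "instance-1" are placeholder values.

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.MemberToRemove;
import org.apache.kafka.clients.admin.RemoveMembersFromConsumerGroupOptions;

public class RemoveStaticMember {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Identify the member by its group.instance.id only; no member.id
            // is exposed here, matching the discussion above.
            MemberToRemove member = new MemberToRemove("instance-1");
            admin.removeMembersFromConsumerGroup(
                    "my-streams-app",
                    new RemoveMembersFromConsumerGroupOptions(Collections.singleton(member)))
                 .all()
                 .get(); // block until the removal has been acknowledged
        }
    }
}

This lets an operator decommission one instance immediately instead of waiting for `session.timeout.ms` to expire.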

[jira] [Created] (KAFKA-9834) Add config to set ZSTD compression level

2020-04-08 Thread jiamei xie (Jira)
jiamei xie created KAFKA-9834:
-

 Summary: Add config to set ZSTD compression level
 Key: KAFKA-9834
 URL: https://issues.apache.org/jira/browse/KAFKA-9834
 Project: Kafka
  Issue Type: Bug
Reporter: jiamei xie
Assignee: jiamei xie


It seems Kafka uses zstd's default compression level (3) and does not support 
setting the zstd compression level.
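
For context, a producer can currently only choose the codec, not the level. A minimal sketch, assuming the stock producer APIs; the bootstrap server and topic name are placeholders:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ZstdProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Only the codec is configurable today; the zstd level itself stays at
        // the library default (3), which is what this issue proposes to change.
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "zstd");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test-topic", "key", "value"));
        }
    }
}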


