[GitHub] kafka pull request #2465: KAFKA-4710: Interpolate log4j's logging source int...
Github user kawamuray closed the pull request at: https://github.com/apache/kafka/pull/2465 --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[GitHub] kafka pull request #2465: KAFKA-4710: Interpolate log4j's logging source int...
GitHub user kawamuray opened a pull request: https://github.com/apache/kafka/pull/2465

KAFKA-4710: Interpolate log4j's logging source interpretation to correct location info of logs written through trait methods

Issue: https://issues.apache.org/jira/browse/KAFKA-4710

This PR fixes the location information of log4j `LoggingEvent`, which is currently reported wrongly because logger methods are invoked indirectly through Logging trait methods. By introducing a custom `LoggingEvent` that manually traverses the stack trace to find the place where the log call was originally written, an appender can now obtain the correct location instead of somewhere inside `Logging.scala`.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/kawamuray/kafka KAFKA-4710-correct-log4j-location

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/kafka/pull/2465.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2465

commit 75575dea7f4b4e0b42f5f67cf0160d28fd88e14a
Author: Yuto Kawamura
Date: 2017-01-30T06:07:30Z

    KAFKA-4710: Add unit tests for Logging trait

commit a9b27e6cb2d61addbd0f4c538799bf9d92ee9bd4
Author: Yuto Kawamura
Date: 2017-01-30T06:14:33Z

    KAFKA-4710: Interpolate log4j's logging source interpretation to correct location info of logs written through trait methods

    Also,
    - Make all logging methods `final` to avoid unexpected call stack corruption
    - Refactor some field modifiers to make their scope clearer

commit be79c4afb73db86440d04769c2cd988905fcd7e9
Author: Yuto Kawamura
Date: 2017-01-30T06:42:35Z

    MINOR: Code cleanup for Logging trait
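The stack-trace traversal the PR describes can be sketched roughly as follows. This is a minimal illustration, not the actual Kafka patch: the class and method names (`CallSiteLocator`, `Caller`) are invented. The idea is to walk the current stack and return the first frame outside the logging wrapper class, which is where the log call was originally written.

```java
// Sketch of the idea behind the fix (not the actual Kafka code): walk the
// current stack trace and return the first frame that lies outside the
// logging wrapper class, so an appender can report the real call site.
public class CallSiteLocator {
    // Hypothetical wrapper class name, standing in for kafka.utils.Logging.
    static final String WRAPPER = CallSiteLocator.class.getName();

    static StackTraceElement findCallSite() {
        for (StackTraceElement frame : new Throwable().getStackTrace()) {
            // Skip frames belonging to the wrapper itself (the trait
            // methods); the first frame past them is the real call site.
            if (!frame.getClassName().equals(WRAPPER)) {
                return frame;
            }
        }
        return null; // entire stack inside the wrapper (shouldn't happen)
    }

    // Simulates a Logging trait method delegating to the underlying logger.
    static StackTraceElement info() {
        return findCallSite();
    }

    public static void main(String[] args) {
        StackTraceElement site = Caller.log();
        System.out.println(site.getClassName() + "#" + site.getMethodName());
    }
}

class Caller {
    static StackTraceElement log() {
        return CallSiteLocator.info(); // the "real" call site is this line
    }
}
```

Running this reports the frame of `Caller.log` rather than a frame inside `CallSiteLocator`, which is exactly the difference between the broken and fixed location info.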
[GitHub] kafka pull request #2352: KAFKA-4614 Forcefully unmap mmap of OffsetIndex to...
GitHub user kawamuray opened a pull request: https://github.com/apache/kafka/pull/2352

KAFKA-4614 Forcefully unmap mmap of OffsetIndex to prevent long GC pause

Issue: https://issues.apache.org/jira/browse/KAFKA-4614

Fixes a problem where broker threads suffer long GC pauses. When a GC thread collects mmap objects that were created for index files, it unmaps the memory mapping, and the kernel then deletes the file physically. This work may transparently read the file's metadata from the physical disk if it is not available in the cache. This seems to happen typically with G1GC, due to its strategy of leaving garbage uncollected for a long time if other objects in the same region are still alive. See the link for the details.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/kawamuray/kafka KAFKA-4614-force-munmap-for-index

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/kafka/pull/2352.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2352

commit 8c3a4c9f5a6188c641a45cb3e111094f545f7e66
Author: Yuto Kawamura
Date: 2017-01-11T08:10:05Z

    KAFKA-4614 Forcefully unmap mmap of OffsetIndex to prevent long GC pause
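The eager-unmap technique can be illustrated as below. This is a sketch, not the Kafka patch itself; it uses `sun.misc.Unsafe#invokeCleaner` (available on JDK 9+) via reflection, whereas the actual patch targeted the internal cleaner API of its era. The point is that the expensive munmap happens on a thread the caller controls instead of during GC.

```java
import java.io.RandomAccessFile;
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative sketch: explicitly unmap a MappedByteBuffer instead of
// waiting for GC, so the munmap (and any physical file deletion it
// triggers) runs eagerly on a controlled thread.
public class ForcedUnmap {
    static void unmap(MappedByteBuffer buffer) throws Exception {
        // JDK 9+: sun.misc.Unsafe#invokeCleaner(ByteBuffer). Reached via
        // reflection here; older JDKs exposed a different internal API.
        Class<?> unsafeClass = Class.forName("sun.misc.Unsafe");
        Field f = unsafeClass.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Object unsafe = f.get(null);
        Method invokeCleaner =
            unsafeClass.getMethod("invokeCleaner", java.nio.ByteBuffer.class);
        invokeCleaner.invoke(unsafe, buffer); // munmap happens here, eagerly
    }

    public static void main(String[] args) throws Exception {
        Path file = Files.createTempFile("offset-index", ".idx");
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "rw")) {
            raf.setLength(4096);
            MappedByteBuffer mmap = raf.getChannel()
                .map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            mmap.putInt(0, 42);
            unmap(mmap); // do NOT touch mmap after this point: the mapping is gone
        }
        Files.deleteIfExists(file);
        System.out.println("unmapped");
    }
}
```

Touching the buffer after `unmap` is undefined behavior (it can crash the JVM), which is why the real fix must guarantee no reader still holds the mapping.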
[GitHub] kafka pull request #1816: KAFKA-4116: Handle 0.0.0.0 as a special case when ...
GitHub user kawamuray opened a pull request: https://github.com/apache/kafka/pull/1816

KAFKA-4116: Handle 0.0.0.0 as a special case when using advertised.listeners

Issue: https://issues.apache.org/jira/browse/KAFKA-4116

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/kawamuray/kafka KAFKA-4116-listeners

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/kafka/pull/1816.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1816

commit 0eec0393b41e4e75d942b3431e328d5acc18ca7f
Author: Yuto Kawamura
Date: 2016-09-02T07:11:04Z

    KAFKA-4116: Handle 0.0.0.0 as a special case when using advertised.listeners
[GitHub] kafka pull request #1707: KAFKA-4024 KafkaProducer should not initialize met...
GitHub user kawamuray opened a pull request: https://github.com/apache/kafka/pull/1707

KAFKA-4024 KafkaProducer should not initialize metadata with current timestamp

Issue: https://issues.apache.org/jira/browse/KAFKA-4024

Solves the problem that the first metadata update of KafkaProducer takes `retry.backoff.ms` to complete.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/kawamuray/kafka KAFKA-4024-metadata-backoff

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/kafka/pull/1707.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1707

commit fc02b2381c9eb90a75fd0da338596679bd764a31
Author: Yuto Kawamura
Date: 2016-08-06T09:25:29Z

    KAFKA-4024 KafkaProducer should not initialize metadata with current timestamp
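A minimal model of the backoff arithmetic behind this fix (the class and field names below are invented; the real logic lives in the client's metadata bookkeeping): if the last-refresh timestamp is seeded with the construction time, the very first update is pushed out by the full `retry.backoff.ms`, whereas seeding it with 0 lets the first update run immediately.

```java
// Toy model of the metadata refresh backoff (illustrative names only).
public class MetadataBackoff {
    final long refreshBackoffMs;
    final long lastRefreshMs;

    MetadataBackoff(long refreshBackoffMs, long lastRefreshMs) {
        this.refreshBackoffMs = refreshBackoffMs;
        this.lastRefreshMs = lastRefreshMs;
    }

    // Milliseconds to wait before the next metadata refresh may start.
    long timeToNextUpdate(long nowMs) {
        return Math.max(lastRefreshMs + refreshBackoffMs - nowMs, 0);
    }

    public static void main(String[] args) {
        long now = 1_000_000L;
        // Problem: seeded with "now" -> first refresh waits the full backoff.
        System.out.println(new MetadataBackoff(100, now).timeToNextUpdate(now));
        // Fix: seeded with 0 -> first refresh can go immediately.
        System.out.println(new MetadataBackoff(100, 0).timeToNextUpdate(now));
    }
}
```

The first call returns the full 100 ms backoff; the second returns 0, which is the behavior the PR restores for a freshly constructed producer.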
[GitHub] kafka pull request #1606: KAFKA-3947: Add dumping current assignment capabil...
GitHub user kawamuray reopened a pull request: https://github.com/apache/kafka/pull/1606

KAFKA-3947: Add dumping current assignment capability to kafka-reassign-partitions.sh

Issue: https://issues.apache.org/jira/browse/KAFKA-3947

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/kawamuray/kafka KAFKA-3947-dump-support

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/kafka/pull/1606.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1606

commit 929c2007d581fa12548e7b7bcedf8edd34295772
Author: Yuto Kawamura
Date: 2016-07-11T10:03:41Z

    MINOR: Remove unused import

commit 54c99f1c48b147e8c3616cf7da7dcd5e6adc8c34
Author: Yuto Kawamura
Date: 2016-07-11T10:04:30Z

    MINOR: kafka-reassign-partitions.sh no longer stands only to perform reassignment

commit 5054918a818cb901a24284ab300619e000490a98
Author: Yuto Kawamura
Date: 2016-07-11T10:08:33Z

    KAFKA-3947: Add dumping current assignment capability to kafka-reassign-partitions.sh
[GitHub] kafka pull request #1607: MINOR: Doc of 'retries' config should mention abou...
Github user kawamuray closed the pull request at: https://github.com/apache/kafka/pull/1607
[GitHub] kafka pull request #1606: KAFKA-3947: Add dumping current assignment capabil...
Github user kawamuray closed the pull request at: https://github.com/apache/kafka/pull/1606
[GitHub] kafka pull request #1607: MINOR: Doc of 'retries' config should mention abou...
GitHub user kawamuray reopened a pull request: https://github.com/apache/kafka/pull/1607

MINOR: Doc of 'retries' config should mention max.in.flight.requests.per.connection to avoid confusion

When I first read the doc of the producer config `retries`, it was quite confusing because it sounds as if there were no way to guarantee in-order message production while keeping it durable against broker failure. If my understanding is correct, it would be helpful if the doc of `retries` mentioned `max.in.flight.requests.per.connection`.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/kawamuray/kafka MINOR-retries-doc

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/kafka/pull/1607.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1607

commit f4f0b00da6cf8daf01dea102f9442cc940661b0f
Author: Yuto Kawamura
Date: 2016-07-11T13:47:14Z

    MINOR: Doc of 'retries' config should mention max.in.flight.requests.per.connection to avoid confusion
[GitHub] kafka pull request #1607: MINOR: Doc of 'retries' config should mention abou...
GitHub user kawamuray opened a pull request: https://github.com/apache/kafka/pull/1607

MINOR: Doc of 'retries' config should mention max.in.flight.requests.per.connection to avoid confusion

When I first read the doc of the producer config `retries`, it was quite confusing because it sounds as if there were no way to guarantee in-order message production while keeping it durable against broker failure. If my understanding is correct, it would be helpful if the doc of `retries` mentioned `max.in.flight.requests.per.connection`.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/kawamuray/kafka MINOR-retries-doc

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/kafka/pull/1607.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1607

commit f4f0b00da6cf8daf01dea102f9442cc940661b0f
Author: Yuto Kawamura
Date: 2016-07-11T13:47:14Z

    MINOR: Doc of 'retries' config should mention max.in.flight.requests.per.connection to avoid confusion
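The configuration interplay the doc change points at can be shown as a hedged example (the broker address is a placeholder): with `retries` enabled, two in-flight batches can complete out of order when the first fails and is retried, so strict ordering additionally requires capping in-flight requests per connection at 1.

```java
import java.util.Properties;

// Producer configuration sketch for ordered, retrying delivery.
public class OrderedProducerConfig {
    static Properties orderedRetryingConfig() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        // Retry failed sends instead of dropping records.
        props.put("retries", "2147483647");
        // Without this, batch B can succeed while the earlier batch A is
        // being retried, reordering records within a partition.
        props.put("max.in.flight.requests.per.connection", "1");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(orderedRetryingConfig());
    }
}
```

The trade-off is throughput: a single in-flight request serializes requests to each broker connection.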
[GitHub] kafka pull request #1606: KAFKA-3947: Add dumping current assignment capabil...
GitHub user kawamuray opened a pull request: https://github.com/apache/kafka/pull/1606

KAFKA-3947: Add dumping current assignment capability to kafka-reassign-partitions.sh

Issue: https://issues.apache.org/jira/browse/KAFKA-3947

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/kawamuray/kafka KAFKA-3947-dump-support

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/kafka/pull/1606.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1606

commit 929c2007d581fa12548e7b7bcedf8edd34295772
Author: Yuto Kawamura
Date: 2016-07-11T10:03:41Z

    MINOR: Remove unused import

commit 54c99f1c48b147e8c3616cf7da7dcd5e6adc8c34
Author: Yuto Kawamura
Date: 2016-07-11T10:04:30Z

    MINOR: kafka-reassign-partitions.sh no longer stands only to perform reassignment

commit 5054918a818cb901a24284ab300619e000490a98
Author: Yuto Kawamura
Date: 2016-07-11T10:08:33Z

    KAFKA-3947: Add dumping current assignment capability to kafka-reassign-partitions.sh
[GitHub] kafka pull request #1555: MINOR: Fix ambiguous log message in RecordCollecto...
GitHub user kawamuray reopened a pull request: https://github.com/apache/kafka/pull/1555

MINOR: Fix ambiguous log message in RecordCollector

When producing fails in Kafka Streams, it gives an error like below:

```
Error sending record: null
```

by this line: https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/processor/internals/RecordCollector.java#L59

This message doesn't make sense because:
- Practically, metadata is always null when exception != null: https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordBatch.java#L107-L109
- It's quite misleading, as it reads like "Kafka Streams attempted to send 'null' as a record", which isn't actually the case.

Since PR #873 is the origin of the line above, I changed it to instantiate the callback on each send so that at least the destination topic is logged.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/kawamuray/kafka MINOR-record-collector-log-message

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/kafka/pull/1555.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1555

commit 753d0836e586f283bbbe54b354695c7fdbca6ef7
Author: Yuto Kawamura
Date: 2016-06-26T07:32:08Z

    MINOR: Fix ambiguous log message in RecordCollector

    To avoid "Error sending record: null".
    - Practically, metadata is always null when exception != null
    - With respect to #873, create the callback instance on each send to log the destination topic
[GitHub] kafka pull request #1555: MINOR: Fix ambiguous log message in RecordCollecto...
Github user kawamuray closed the pull request at: https://github.com/apache/kafka/pull/1555
[GitHub] kafka pull request #1555: MINOR: Fix ambiguous log message in RecordCollecto...
GitHub user kawamuray opened a pull request: https://github.com/apache/kafka/pull/1555

MINOR: Fix ambiguous log message in RecordCollector

When producing fails in Kafka Streams, it gives an error like below:

```
Error sending record: null
```

by this line: https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/processor/internals/RecordCollector.java#L59

This message doesn't make sense because:
- Practically, metadata is always null when exception != null: https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordBatch.java#L107-L109
- It's quite misleading, as it reads like "Kafka Streams attempted to send 'null' as a record", which isn't actually the case.

Since PR #873 is the origin of the line above, I changed it to instantiate the callback on each send so that at least the destination topic is logged.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/kawamuray/kafka MINOR-record-collector-log-message

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/kafka/pull/1555.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1555

commit 753d0836e586f283bbbe54b354695c7fdbca6ef7
Author: Yuto Kawamura
Date: 2016-06-26T07:32:08Z

    MINOR: Fix ambiguous log message in RecordCollector

    To avoid "Error sending record: null".
    - Practically, metadata is always null when exception != null
    - With respect to #873, create the callback instance on each send to log the destination topic
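The idea of the fix can be sketched without any Kafka dependency (the interface and message text below are simplified stand-ins, not the actual patch): a callback created per send captures the destination topic, which stays loggable even though metadata is null on failure.

```java
// Sketch: per-send callback that captures the topic at send time, so a
// failure can be logged meaningfully even when metadata is null.
public class TopicAwareCallback {
    interface Callback {
        void onCompletion(Object metadata, Exception exception);
    }

    static String lastLog; // stands in for the logger

    static Callback callbackFor(final String topic) {
        return (metadata, exception) -> {
            if (exception != null) {
                // metadata is (practically) always null here, so log the
                // captured topic instead of "Error sending record: null".
                lastLog = "Error sending record to topic " + topic + ": " + exception;
            }
        };
    }

    public static void main(String[] args) {
        // Simulate a failed send: metadata null, exception set.
        callbackFor("orders").onCompletion(null, new RuntimeException("boom"));
        System.out.println(lastLog);
    }
}
```

The cost is one small allocation per send, traded for an error message that actually identifies where the record was headed.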
[GitHub] kafka pull request #1460: KAFKA-3775: Throttle maximum number of tasks assig...
Github user kawamuray closed the pull request at: https://github.com/apache/kafka/pull/1460
[GitHub] kafka pull request #1460: KAFKA-3775: Throttle maximum number of tasks assig...
GitHub user kawamuray opened a pull request: https://github.com/apache/kafka/pull/1460

KAFKA-3775: Throttle maximum number of tasks assigned to a single KafkaStreams

Issue: https://issues.apache.org/jira/browse/KAFKA-3775

POC. Discussion in progress.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/kawamuray/kafka KAFKA-3775-throttle-tasks

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/kafka/pull/1460.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1460

commit fefe259b2c97bb1bbf14b572533ca74348651c0d
Author: Yuto Kawamura
Date: 2016-06-02T03:46:51Z

    MINOR: Add toString() to ClientState for debugging

commit c4f363d32d9a496c0f4b4e66ee846429a2a2eda5
Author: Yuto Kawamura
Date: 2016-06-02T03:51:34Z

    MINOR: Remove meaninglessly repeated assertions in unit test

commit 3c173fa5d029277e5d1974c104d7e66939b5cd17
Author: Yuto Kawamura
Date: 2016-06-02T03:55:10Z

    KAFKA-3775: Introduce new streams configuration max.tasks.assigned

    This configuration limits the maximum number of tasks assigned to a single KafkaStreams instance. Since a task consists of a single partition from each of one or more topics, setting this value lower helps prevent a huge number of partitions from being assigned to the instance that started first.
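A toy version of the proposed throttle (only the config name max.tasks.assigned comes from the PR; the assignor below is invented for illustration): hand out at most a configured number of task ids to an instance and leave the remainder for instances that join later.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative capped task assignor, not the actual PR code.
public class CappedAssignor {
    // Assign at most maxTasks tasks to one instance; the rest stay
    // available for instances that join the group later.
    static List<Integer> assign(List<Integer> tasks, int maxTasks) {
        List<Integer> assigned = new ArrayList<>();
        for (Integer task : tasks) {
            if (assigned.size() >= maxTasks) break; // throttle kicks in
            assigned.add(task);
        }
        return assigned;
    }

    public static void main(String[] args) {
        List<Integer> all = List.of(0, 1, 2, 3, 4, 5);
        // Without the cap, the first instance to start would take all six.
        System.out.println(assign(all, 4));
    }
}
```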
[GitHub] kafka pull request: KAFKA-3642: Fix NPE from ProcessorStateManager...
GitHub user kawamuray opened a pull request: https://github.com/apache/kafka/pull/1289

KAFKA-3642: Fix NPE from ProcessorStateManager when the changelog topic does not exist

Issue: https://issues.apache.org/jira/browse/KAFKA-3642

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/kawamuray/kafka KAFKA-3642-streams-NPE

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/kafka/pull/1289.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1289

commit 189571f9ab6555cc420190a9fb38ab2064ce42ab
Author: Yuto Kawamura
Date: 2016-04-29T16:53:38Z

    KAFKA-3642: Fix MockConsumer#partitionsFor to behave the same as KafkaConsumer

    KafkaConsumer#partitionsFor returns null when the topic does not exist.

commit f8d96209c97eef4328f6255f6a43ae0c2c70543b
Author: Yuto Kawamura
Date: 2016-04-29T16:22:00Z

    KAFKA-3642: Make ProcessorStateManager throw a meaningful exception instead of an NPE when the topic does not exist

commit f1cae8eb977965ec82a60ea45bdbe5c1ecee869a
Author: Yuto Kawamura
Date: 2016-04-29T16:23:50Z

    KAFKA-3642: Warn if an expected internal topic does not exist when zookeeper.connect isn't supplied

commit 4f7c6dc9becb547368f5dac6d508bd071bdfec91
Author: Yuto Kawamura
Date: 2016-04-29T16:26:39Z

    MINOR: Remove meaningless branching argument

    - It doesn't hurt anything to always return a filled list
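The defensive check the commits describe can be sketched as follows (class name and message are illustrative, not the actual patch): since `partitionsFor` returns null for a nonexistent topic, failing fast with a descriptive exception beats letting a bare NPE surface later.

```java
import java.util.List;

// Sketch: turn the null returned for a missing topic into a meaningful
// error instead of a downstream NullPointerException.
public class ChangelogTopicCheck {
    static List<String> partitionsForOrThrow(String topic, List<String> partitions) {
        if (partitions == null) {
            // partitionsFor yields null when the topic does not exist.
            throw new IllegalStateException(
                "Changelog topic " + topic + " does not exist; "
                + "create it before starting the application");
        }
        return partitions;
    }

    public static void main(String[] args) {
        try {
            partitionsForOrThrow("my-changelog", null);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

This also shows why the MockConsumer fix matters: if the mock returned an empty list where the real consumer returns null, tests would never exercise this branch.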
[GitHub] kafka pull request: KAFKA-3616: Make kafka producers/consumers inj...
GitHub user kawamuray opened a pull request: https://github.com/apache/kafka/pull/1264

KAFKA-3616: Make kafka producers/consumers injectable for KafkaStreams

Ticket: https://issues.apache.org/jira/browse/KAFKA-3616

WIP. Just to show my idea; will follow up after positive feedback.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/kawamuray/kafka kafka-3616-inject-clients

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/kafka/pull/1264.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1264

commit 8e608baa6670097d1bee896c9f82459a9aee4612
Author: Yuto Kawamura
Date: 2016-04-24T14:54:53Z

    KAFKA-3616: Make kafka producers/consumers injectable for KafkaStreams
[GitHub] kafka pull request: KAFKA-3471: min.insync.replicas isn't respecte...
Github user kawamuray closed the pull request at: https://github.com/apache/kafka/pull/1146
[GitHub] kafka pull request: KAFKA-3471: min.insync.replicas isn't respecte...
GitHub user kawamuray opened a pull request: https://github.com/apache/kafka/pull/1146

KAFKA-3471: min.insync.replicas isn't respected when there's a lagging follower that is still in the ISR

Ticket: https://issues.apache.org/jira/browse/KAFKA-3471

The number of followers that have already caught up to requiredOffset should be used, instead of the high watermark, to decide whether enough replicas have the data for a produce request. Please see the ticket for details.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/kawamuray/kafka issue/KAFKA-3471-minISR

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/kafka/pull/1146.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1146

commit a784340b3876377894db25987659408779fec7dd
Author: Yuto Kawamura
Date: 2016-03-26T17:14:36Z

    KAFKA-3471: Add tests for Partition.checkEnoughReplicasReachOffset

    At the moment of this commit, some of the test cases fail, but that is expected. The next commit follows up to fix checkEnoughReplicasReachOffset.

commit cc96ab952165afc4652ae628e5489c911b755ab6
Author: Yuto Kawamura
Date: 2016-03-26T17:18:32Z

    KAFKA-3471: Fix checkEnoughReplicasReachOffset to respect min.insync.replicas

    The number of followers that have already caught up to requiredOffset should be used, instead of the high watermark, to decide whether enough replicas have the data for a produce request. Please see the ticket for details.
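A simplified model of the corrected check (names are illustrative, not Kafka's actual code): count the replicas whose log end offset has reached requiredOffset and compare that count with min.insync.replicas, rather than comparing against the high watermark, which a lagging in-ISR follower holds back.

```java
// Toy model of the corrected min.insync.replicas check.
public class MinIsrCheck {
    // True if at least minIsr replicas hold the record at requiredOffset.
    static boolean enoughReplicasReachOffset(long[] replicaLogEndOffsets,
                                             long requiredOffset,
                                             int minIsr) {
        int caughtUp = 0;
        for (long leo : replicaLogEndOffsets) {
            if (leo >= requiredOffset) caughtUp++;
        }
        return caughtUp >= minIsr;
    }

    public static void main(String[] args) {
        // Leader at offset 100, one caught-up follower, and one lagging
        // follower (offset 40) that is still in the ISR.
        long[] leos = {100, 100, 40};
        // min.insync.replicas = 2: the leader plus one follower suffice.
        System.out.println(enoughReplicasReachOffset(leos, 100, 2));
        // min.insync.replicas = 3: the lagging follower blocks the ack.
        System.out.println(enoughReplicasReachOffset(leos, 100, 3));
    }
}
```

With the high-watermark comparison, the lagging follower's presence in the ISR could let an acks=all produce succeed before enough replicas actually held the record; counting caught-up replicas closes that gap.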