[jira] [Commented] (KAFKA-2476) Define logical types for Copycat data API

2015-10-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945434#comment-14945434
 ] 

ASF GitHub Bot commented on KAFKA-2476:
---

GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/281

KAFKA-2476: Add Decimal, Date, and Timestamp logical types.

To support Decimal, this also adds support for schema parameters, which is an
extra set of String key-value pairs that provide extra information about the
schema. For Decimal, this is used to encode the scale parameter, which is part
of the schema instead of being passed with every value.
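The idea above — carrying the scale once in the schema so each value only ships its unscaled integer — can be sketched as follows. This is a hypothetical illustration in Python, not the Copycat/Connect API itself; the `schema` dict shape and the `encode`/`decode` helpers are assumptions.

```python
from decimal import Decimal

# Hypothetical schema: the "scale" parameter lives in the schema's
# String key-value parameters, not alongside every value.
schema = {"type": "bytes", "name": "Decimal", "parameters": {"scale": "2"}}

def encode(value: Decimal, schema) -> int:
    scale = int(schema["parameters"]["scale"])
    # Shift the decimal point right by `scale` digits to get the unscaled integer.
    return int(value.scaleb(scale))

def decode(unscaled: int, schema) -> Decimal:
    scale = int(schema["parameters"]["scale"])
    # Reapply the schema-level scale to recover the logical value.
    return Decimal(unscaled).scaleb(-scale)

print(encode(Decimal("12.34"), schema))  # 1234
print(decode(1234, schema))              # 12.34
```

Because the scale is fixed per schema, every serialized value is just an integer, and two values with different scales necessarily have different schemas.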

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka kafka-2476-copycat-logical-types

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/281.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #281


commit a97ff2f3d9ccce878d34036a8ce4e6ca35cbe08c
Author: Ewen Cheslack-Postava 
Date:   2015-10-05T23:23:52Z

KAFKA-2476: Add Decimal, Date, and Timestamp logical types.

To support Decimal, this also adds support for schema parameters, which is an
extra set of String key-value pairs that provide extra information about the
schema. For Decimal, this is used to encode the scale parameter, which is part
of the schema instead of being passed with every value.




> Define logical types for Copycat data API
> -
>
> Key: KAFKA-2476
> URL: https://issues.apache.org/jira/browse/KAFKA-2476
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> We need some common types like datetime and decimal. This boils down to 
> defining the schemas for these types, along with documenting their semantics.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1997) Refactor Mirror Maker

2015-12-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15046499#comment-15046499
 ] 

ASF GitHub Bot commented on KAFKA-1997:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/638


> Refactor Mirror Maker
> -
>
> Key: KAFKA-1997
> URL: https://issues.apache.org/jira/browse/KAFKA-1997
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-1997.patch, KAFKA-1997.patch, 
> KAFKA-1997_2015-03-03_16:28:46.patch, KAFKA-1997_2015-03-04_15:07:46.patch, 
> KAFKA-1997_2015-03-04_15:42:45.patch, KAFKA-1997_2015-03-05_20:14:58.patch, 
> KAFKA-1997_2015-03-09_18:55:54.patch, KAFKA-1997_2015-03-10_18:31:34.patch, 
> KAFKA-1997_2015-03-11_15:20:18.patch, KAFKA-1997_2015-03-11_19:10:53.patch, 
> KAFKA-1997_2015-03-13_14:43:34.patch, KAFKA-1997_2015-03-17_13:47:01.patch, 
> KAFKA-1997_2015-03-18_12:47:32.patch
>
>
> Refactor mirror maker based on KIP-3





[jira] [Commented] (KAFKA-2061) Offer a --version flag to print the kafka version

2015-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15046793#comment-15046793
 ] 

ASF GitHub Bot commented on KAFKA-2061:
---

GitHub user sasakitoa opened a pull request:

https://github.com/apache/kafka/pull/639

KAFKA-2061: Offer a --version flag to print the kafka version

Add version option to command line tools to print Kafka version

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sasakitoa/kafka version_option

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/639.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #639


commit 08a87f1201fd6650057a60ec45a206ab612c271e
Author: Sasaki Toru 
Date:   2015-12-08T11:35:33Z

Add version option to command line tools to print Kafka version




> Offer a --version flag to print the kafka version
> -
>
> Key: KAFKA-2061
> URL: https://issues.apache.org/jira/browse/KAFKA-2061
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Andrew Pennebaker
>Priority: Minor
>
> As a newbie, I want kafka command line tools to offer a --version flag to 
> print the kafka version, so that it's easier to work with the community to 
> troubleshoot things.
> As a mitigation, users can query the package management system. But that's A) 
> Not necessarily a newbie's first instinct and B) Not always possible when 
> kafka is installed manually from tarballs.
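The requested flag is straightforward with a standard argument parser. A minimal sketch, assuming a hypothetical Python CLI wrapper (the real Kafka tools are shell scripts around Java classes; `KAFKA_VERSION` here is illustrative):

```python
import argparse

KAFKA_VERSION = "0.9.0.0"  # illustrative; a real tool would read this from the build

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Hypothetical Kafka CLI tool")
    # argparse's built-in "version" action prints the string and exits with status 0.
    parser.add_argument("--version", action="version", version=KAFKA_VERSION)
    return parser
```

With this, `tool --version` prints the version and exits immediately, which is exactly the troubleshooting affordance the issue asks for.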





[jira] [Commented] (KAFKA-2507) Replace ControlledShutdown{Request,Response} with org.apache.kafka.common.requests equivalent

2015-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047189#comment-15047189
 ] 

ASF GitHub Bot commented on KAFKA-2507:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/640

KAFKA-2507: Replace ControlledShutdown{Request,Response} with o.a.k.c.requests equivalent

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka controlled-shutdown

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/640.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #640


commit 7474779b584815da5592554366ad910bc70ca17a
Author: Grant Henke 
Date:   2015-12-08T18:14:32Z

KAFKA-2507: Replace ControlledShutdown{Request,Response} with 
o.a.k.c.requests equivalent




> Replace ControlledShutdown{Request,Response} with 
> org.apache.kafka.common.requests equivalent
> -
>
> Key: KAFKA-2507
> URL: https://issues.apache.org/jira/browse/KAFKA-2507
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Grant Henke
>






[jira] [Commented] (KAFKA-2958) Remove duplicate API key mapping functionality

2015-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047153#comment-15047153
 ] 

ASF GitHub Bot commented on KAFKA-2958:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/637


> Remove duplicate API key mapping functionality
> --
>
> Key: KAFKA-2958
> URL: https://issues.apache.org/jira/browse/KAFKA-2958
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>
> Kafka common and core both have a class that maps request API keys and names. 
> To prevent errors and issues with consistency we should remove 
> RequestKeys.scala in core in favor of ApiKeys.java in common.
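The single-source-of-truth idea behind removing the duplicate mapping can be sketched like this. A hypothetical Python illustration with an invented subset of keys — the real class is `ApiKeys.java`, and only the concept (one shared table, names derived from it) carries over:

```python
from enum import IntEnum

class ApiKeys(IntEnum):
    # Illustrative subset: one definition shared by all callers avoids
    # the drift that two parallel mappings (core and common) can suffer.
    PRODUCE = 0
    FETCH = 1
    METADATA = 3

def name_for(key: int) -> str:
    # The reverse lookup is derived from the same table, so key and name
    # can never fall out of sync.
    return ApiKeys(key).name

print(name_for(1))  # FETCH
```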





[jira] [Commented] (KAFKA-2957) Fix typos in Kafka documentation

2015-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047357#comment-15047357
 ] 

ASF GitHub Bot commented on KAFKA-2957:
---

GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/641

KAFKA-2957: Fix typos in Kafka documentation



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-2957

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/641.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #641


commit 3d2410946d3cd675de0ab4a45ee57bed18c3e4ca
Author: Vahid Hashemian 
Date:   2015-12-08T19:29:50Z

Fix some typos in documentation (resolves KAFKA-2957)




> Fix typos in Kafka documentation
> 
>
> Key: KAFKA-2957
> URL: https://issues.apache.org/jira/browse/KAFKA-2957
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Vahid Hashemian
>Assignee: Vahid Hashemian
>Priority: Trivial
>  Labels: documentation, easyfix
> Fix For: 0.9.0.1
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> There are some minor typos in Kafka documentation. Example:
> - Supporting these uses led use to a design with a number of unique elements, 
> more akin to a database log then a traditional messaging system.
> This should read as:
> - Supporting these uses led *us* to a design with a number of unique 
> elements, more akin to a database log *than* a traditional messaging system.





[jira] [Commented] (KAFKA-2957) Fix typos in Kafka documentation

2015-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047447#comment-15047447
 ] 

ASF GitHub Bot commented on KAFKA-2957:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/641


> Fix typos in Kafka documentation
> 
>
> Key: KAFKA-2957
> URL: https://issues.apache.org/jira/browse/KAFKA-2957
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Vahid Hashemian
>Assignee: Vahid Hashemian
>Priority: Trivial
>  Labels: documentation, easyfix
> Fix For: 0.9.0.1
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> There are some minor typos in Kafka documentation. Example:
> - Supporting these uses led use to a design with a number of unique elements, 
> more akin to a database log then a traditional messaging system.
> This should read as:
> - Supporting these uses led *us* to a design with a number of unique 
> elements, more akin to a database log *than* a traditional messaging system.





[jira] [Commented] (KAFKA-2930) Update references to ZooKeeper in the docs

2015-12-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15036238#comment-15036238
 ] 

ASF GitHub Bot commented on KAFKA-2930:
---

GitHub user fpj opened a pull request:

https://github.com/apache/kafka/pull/615

KAFKA-2930: Update references to ZooKeeper in the docs.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/fpj/kafka KAFKA-2930

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/615.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #615


commit 312e885390bd665cd349408df8f28fad20cca872
Author: Flavio Junqueira 
Date:   2015-12-02T17:44:13Z

KAFKA-2930: Updated doc.




> Update references to ZooKeeper in the docs
> --
>
> Key: KAFKA-2930
> URL: https://issues.apache.org/jira/browse/KAFKA-2930
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
> Fix For: 0.9.0.1
>
>
> Information about ZooKeeper in the ops doc is stale; it refers to branch 3.3 
> while Kafka is already using branch 3.4.





[jira] [Commented] (KAFKA-2851) system tests: error copying keytab file

2015-12-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15036304#comment-15036304
 ] 

ASF GitHub Bot commented on KAFKA-2851:
---

Github user apovzner closed the pull request at:

https://github.com/apache/kafka/pull/609


> system tests: error copying keytab file
> ---
>
> Key: KAFKA-2851
> URL: https://issues.apache.org/jira/browse/KAFKA-2851
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Anna Povzner
>Priority: Minor
>
> It is best to use unique paths for temporary files on the test driver machine 
> so that multiple test jobs don't conflict. 
> If the test driver machine is running multiple ducktape jobs concurrently, as 
> is the case with Confluent nightly test runs, conflicts can occur if the same 
> canonical path is always used.
> In this case, security_config.py copies a file to /tmp/keytab on the test 
> driver machine, while other jobs may remove this from the driver machine. 
> Then you can get errors like this:
> {code}
> 
> test_id:
> 2015-11-17--001.kafkatest.tests.replication_test.ReplicationTest.test_replication_with_broker_failure.security_protocol=SASL_PLAINTEXT.failure_mode=clean_bounce
> status: FAIL
> run time:   1 minute 33.395 seconds
> 
> Traceback (most recent call last):
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.3.8-py2.7.egg/ducktape/tests/runner.py",
>  line 101, in run_all_tests
> result.data = self.run_single_test()
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.3.8-py2.7.egg/ducktape/tests/runner.py",
>  line 151, in run_single_test
> return self.current_test_context.function(self.current_test)
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.3.8-py2.7.egg/ducktape/mark/_mark.py",
>  line 331, in wrapper
> return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/tests/kafkatest/tests/replication_test.py",
>  line 132, in test_replication_with_broker_failure
> self.run_produce_consume_validate(core_test_action=lambda: 
> failures[failure_mode](self))
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/tests/kafkatest/tests/produce_consume_validate.py",
>  line 66, in run_produce_consume_validate
> core_test_action()
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/tests/kafkatest/tests/replication_test.py",
>  line 132, in <lambda>
> self.run_produce_consume_validate(core_test_action=lambda: 
> failures[failure_mode](self))
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/tests/kafkatest/tests/replication_test.py",
>  line 43, in clean_bounce
> test.kafka.restart_node(prev_leader_node, clean_shutdown=True)
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/tests/kafkatest/services/kafka/kafka.py",
>  line 275, in restart_node
> self.start_node(node)
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/tests/kafkatest/services/kafka/kafka.py",
>  line 123, in start_node
> self.security_config.setup_node(node)
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/tests/kafkatest/services/security/security_config.py",
>  line 130, in setup_node
> node.account.scp_to(MiniKdc.LOCAL_KEYTAB_FILE, SecurityConfig.KEYTAB_PATH)
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.3.8-py2.7.egg/ducktape/cluster/remoteaccount.py",
>  line 174, in scp_to
> return self._ssh_quiet(self.scp_to_command(src, dest, recursive))
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.3.8-py2.7.egg/ducktape/cluster/remoteaccount.py",
>  line 219, in _ssh_quiet
> raise e
> CalledProcessError: Command 'scp -o 'HostName 52.33.250.202' -o 'Port 22' -o 
> 'UserKnownHostsFile /dev/null' -o 'StrictHostKeyChecking no' -o 
> 'PasswordAuthentication no' -o 'IdentityFile /var/lib/jenkins/muckrake.pem' 
> -o 'IdentitiesOnly yes' -o 'LogLevel FATAL'  /tmp/keytab 
> ubuntu@worker2:/mnt/security/keytab' returned non-zero exit status 1
> {code}
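The fix the issue suggests — unique temporary paths per job instead of a shared /tmp/keytab — can be sketched with the standard library. A hypothetical helper, not the actual `security_config.py` code; the prefix and file name are assumptions:

```python
import os
import tempfile

def keytab_path_for_job() -> str:
    # mkdtemp creates a fresh, uniquely named directory, so concurrent
    # test jobs on the same driver machine can never race over one path
    # the way a fixed /tmp/keytab does.
    job_dir = tempfile.mkdtemp(prefix="kafka-systest-")
    return os.path.join(job_dir, "keytab")

a, b = keytab_path_for_job(), keytab_path_for_job()
print(a != b)  # True: two jobs never share a path
```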





[jira] [Commented] (KAFKA-2906) Kafka Connect javadocs not built properly

2015-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15031149#comment-15031149
 ] 

ASF GitHub Bot commented on KAFKA-2906:
---

GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/599

KAFKA-2906: Fix Connect javadocs, restrict only to api subproject, and 
clean up javadoc warnings.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka kafka-2906-connect-javadocs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/599.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #599


commit 1568235f9a8e9b986b5ef36f23ecd956f341d206
Author: Ewen Cheslack-Postava 
Date:   2015-11-29T19:40:13Z

KAFKA-2906: Fix Connect javadocs, restrict only to api subproject, and 
clean up javadoc warnings.




> Kafka Connect javadocs not built properly
> -
>
> Key: KAFKA-2906
> URL: https://issues.apache.org/jira/browse/KAFKA-2906
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Affects Versions: 0.9.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>
> It looks like the filters used in other projects aren't working properly for 
> the nested connect projects, resulting in javadocs not being generated.
> We also probably only want the javadocs for connect:api to be generated. The 
> rest of the packages are all private implementation and any docs are better 
> handled in the regular docs instead of as javadocs (e.g. config options).





[jira] [Commented] (KAFKA-2906) Kafka Connect javadocs not built properly

2015-11-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15031178#comment-15031178
 ] 

ASF GitHub Bot commented on KAFKA-2906:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/599


> Kafka Connect javadocs not built properly
> -
>
> Key: KAFKA-2906
> URL: https://issues.apache.org/jira/browse/KAFKA-2906
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Affects Versions: 0.9.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>
> It looks like the filters used in other projects aren't working properly for 
> the nested connect projects, resulting in javadocs not being generated.
> We also probably only want the javadocs for connect:api to be generated. The 
> rest of the packages are all private implementation and any docs are better 
> handled in the regular docs instead of as javadocs (e.g. config options).





[jira] [Commented] (KAFKA-2771) Add Rolling Upgrade to Secured Cluster to System Tests

2015-11-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032598#comment-15032598
 ] 

ASF GitHub Bot commented on KAFKA-2771:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/496


> Add Rolling Upgrade to Secured Cluster to System Tests
> --
>
> Key: KAFKA-2771
> URL: https://issues.apache.org/jira/browse/KAFKA-2771
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Ben Stopford
>Assignee: Ben Stopford
>
> Ensure we can perform a rolling upgrade to enable SSL, SASL_PLAINTEXT &  on a 
> running cluster
> *Method*
> - Start with 0.9.0 cluster with security disabled
> - Upgrade the Client and Inter-Broker ports to SSL (this will take two rounds 
> of bounces: one to open the SSL port and one to close the PLAINTEXT port)
> - Ensure you can produce (acks = -1) and consume during the process. 
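The two-round bounce described above can be sketched as broker configuration. The property names match Kafka's listener settings, but the host/port values and the exact rollout sequence here are illustrative assumptions, not the test's actual config:

```properties
# Round 1: open the SSL port alongside the existing PLAINTEXT port,
# switch inter-broker traffic to SSL, then rolling-bounce every broker.
listeners=PLAINTEXT://host:9092,SSL://host:9093
security.inter.broker.protocol=SSL

# Round 2 (after all brokers and clients speak SSL): drop PLAINTEXT
# and rolling-bounce again.
# listeners=SSL://host:9093
```

Producing with acks = -1 throughout verifies that no acknowledged writes are lost across either round of bounces.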





[jira] [Commented] (KAFKA-2880) Fetcher.getTopicMetadata NullPointerException when broker cannot be reached

2015-12-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15036424#comment-15036424
 ] 

ASF GitHub Bot commented on KAFKA-2880:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/581


> Fetcher.getTopicMetadata NullPointerException when broker cannot be reached
> ---
>
> Key: KAFKA-2880
> URL: https://issues.apache.org/jira/browse/KAFKA-2880
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Jason Gustafson
>
> The Fetcher class will throw a NullPointerException if a broker cannot be 
> reached:
> {quote}
> Exception in thread "main" java.lang.NullPointerException
> at 
> org.apache.kafka.common.requests.MetadataResponse.<init>(MetadataResponse.java:130)
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.getTopicMetadata(Fetcher.java:203)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor(KafkaConsumer.java:1143)
> at 
> org.apache.kafka.connect.util.KafkaBasedLog.start(KafkaBasedLog.java:126)
> at 
> org.apache.kafka.connect.storage.KafkaOffsetBackingStore.start(KafkaOffsetBackingStore.java:85)
> at org.apache.kafka.connect.runtime.Worker.start(Worker.java:108)
> at org.apache.kafka.connect.runtime.Connect.start(Connect.java:56)
> at 
> org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:62)
> {quote}
> This is trivially reproduced by trying to start Kafka Connect in distributed 
> mode (i.e. connect-distributed.sh config/connect-distributed.properties) with 
> no broker running. However, it's not specific to Kafka Connect; Connect just 
> happens to use the consumer in a way that triggers it reliably.
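The general shape of the fix — fail fast with a descriptive error instead of dereferencing a missing response — can be sketched as follows. This is a hypothetical Python illustration; `NoBrokersAvailableError` and the dict-shaped `response` are invented stand-ins, not the Java client's actual types:

```python
class NoBrokersAvailableError(Exception):
    """Raised instead of letting a missing response surface as an opaque NPE."""

def parse_metadata(response):
    # Guard first: constructing the result from a None/null response is what
    # produces the NullPointerException above; checking turns it into an
    # actionable error the caller can retry on.
    if response is None:
        raise NoBrokersAvailableError("no broker responded to the metadata request")
    return response["topics"]

print(parse_metadata({"topics": ["connect-offsets"]}))
```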





[jira] [Commented] (KAFKA-2929) Remove duplicate error mapping functionality

2015-12-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15036447#comment-15036447
 ] 

ASF GitHub Bot commented on KAFKA-2929:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/616

KAFKA-2929: Remove duplicate error mapping functionality

Removes ErrorMapping.scala in core in favor of Errors.java in common. 
Duplicated exceptions in core are removed as well, to ensure the mapping is 
correct.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka error-mapping

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/616.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #616


commit 10003b33140144f5bd97ba37654e8396db724d92
Author: Grant Henke 
Date:   2015-12-02T16:04:59Z

KAFKA-2929: Remove duplicate error mapping functionality




> Remove duplicate error mapping functionality
> 
>
> Key: KAFKA-2929
> URL: https://issues.apache.org/jira/browse/KAFKA-2929
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Kafka common and core both have a class that maps error codes and exceptions. 
> To prevent errors and issues with consistency we should remove 
> ErrorMapping.scala in core in favor of Errors.java in common. Any duplicated 
> exceptions in core should be removed as well to ensure the mapping is correct.
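Why a single table matters here: the mapping must round-trip between error codes and exceptions in both directions. A hypothetical Python sketch (the exception names and codes are invented, not Kafka's real ones) showing the reverse index derived from the forward one so the two can never disagree:

```python
class OffsetOutOfRangeError(Exception):
    pass

class UnknownTopicError(Exception):
    pass

# One authoritative table: code -> exception class...
ERRORS = {1: OffsetOutOfRangeError, 3: UnknownTopicError}
# ...and the reverse direction is computed from it, never hand-maintained.
CODES = {exc: code for code, exc in ERRORS.items()}

def exception_for(code):
    return ERRORS[code]

def code_for(exc_class):
    return CODES[exc_class]

print(code_for(exception_for(3)))  # 3
```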





[jira] [Commented] (KAFKA-2851) system tests: error copying keytab file

2015-12-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15036413#comment-15036413
 ] 

ASF GitHub Bot commented on KAFKA-2851:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/610


> system tests: error copying keytab file
> ---
>
> Key: KAFKA-2851
> URL: https://issues.apache.org/jira/browse/KAFKA-2851
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Anna Povzner
>Priority: Minor
>
> It is best to use unique paths for temporary files on the test driver machine 
> so that multiple test jobs don't conflict. 
> If the test driver machine is running multiple ducktape jobs concurrently, as 
> is the case with Confluent nightly test runs, conflicts can occur if the same 
> canonical path is always used.
> In this case, security_config.py copies a file to /tmp/keytab on the test 
> driver machine, while other jobs may remove this from the driver machine. 
> Then you can get errors like this:
> {code}
> 
> test_id:
> 2015-11-17--001.kafkatest.tests.replication_test.ReplicationTest.test_replication_with_broker_failure.security_protocol=SASL_PLAINTEXT.failure_mode=clean_bounce
> status: FAIL
> run time:   1 minute 33.395 seconds
> 
> Traceback (most recent call last):
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.3.8-py2.7.egg/ducktape/tests/runner.py",
>  line 101, in run_all_tests
> result.data = self.run_single_test()
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.3.8-py2.7.egg/ducktape/tests/runner.py",
>  line 151, in run_single_test
> return self.current_test_context.function(self.current_test)
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.3.8-py2.7.egg/ducktape/mark/_mark.py",
>  line 331, in wrapper
> return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/tests/kafkatest/tests/replication_test.py",
>  line 132, in test_replication_with_broker_failure
> self.run_produce_consume_validate(core_test_action=lambda: 
> failures[failure_mode](self))
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/tests/kafkatest/tests/produce_consume_validate.py",
>  line 66, in run_produce_consume_validate
> core_test_action()
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/tests/kafkatest/tests/replication_test.py",
>  line 132, in <lambda>
> self.run_produce_consume_validate(core_test_action=lambda: 
> failures[failure_mode](self))
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/tests/kafkatest/tests/replication_test.py",
>  line 43, in clean_bounce
> test.kafka.restart_node(prev_leader_node, clean_shutdown=True)
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/tests/kafkatest/services/kafka/kafka.py",
>  line 275, in restart_node
> self.start_node(node)
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/tests/kafkatest/services/kafka/kafka.py",
>  line 123, in start_node
> self.security_config.setup_node(node)
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/tests/kafkatest/services/security/security_config.py",
>  line 130, in setup_node
> node.account.scp_to(MiniKdc.LOCAL_KEYTAB_FILE, SecurityConfig.KEYTAB_PATH)
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.3.8-py2.7.egg/ducktape/cluster/remoteaccount.py",
>  line 174, in scp_to
> return self._ssh_quiet(self.scp_to_command(src, dest, recursive))
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.3.8-py2.7.egg/ducktape/cluster/remoteaccount.py",
>  line 219, in _ssh_quiet
> raise e
> CalledProcessError: Command 'scp -o 'HostName 52.33.250.202' -o 'Port 22' -o 
> 'UserKnownHostsFile /dev/null' -o 'StrictHostKeyChecking no' -o 
> 'PasswordAuthentication no' -o 'IdentityFile /var/lib/jenkins/muckrake.pem' 
> -o 'IdentitiesOnly yes' -o 'LogLevel FATAL'  /tmp/keytab 
> ubuntu@worker2:/mnt/security/keytab' returned non-zero exit status 1
> {code}





[jira] [Commented] (KAFKA-2718) Reuse of temporary directories leading to transient unit test failures

2015-12-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15035477#comment-15035477
 ] 

ASF GitHub Bot commented on KAFKA-2718:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/613

KAFKA-2718: Add logging to investigate intermittent unit test failures

Print the port and directories used by ZooKeeper in unit tests to figure out 
which may be causing conflicts.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-2718-logging

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/613.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #613


commit 543990b2a5ab31b6c3335ef19de1ceb022e88365
Author: Rajini Sivaram 
Date:   2015-12-02T08:43:17Z

KAFKA-2718: Add logging to investigate intermittent unit test failures




> Reuse of temporary directories leading to transient unit test failures
> --
>
> Key: KAFKA-2718
> URL: https://issues.apache.org/jira/browse/KAFKA-2718
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Affects Versions: 0.9.1.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.0.1
>
>
> Stack traces in some of the transient unit test failures indicate that 
> temporary directories used for Zookeeper are being reused.
> {quote}
> kafka.common.TopicExistsException: Topic "topic" already exists.
>   at 
> kafka.admin.AdminUtils$.createOrUpdateTopicPartitionAssignmentPathInZK(AdminUtils.scala:253)
>   at kafka.admin.AdminUtils$.createTopic(AdminUtils.scala:237)
>   at kafka.utils.TestUtils$.createTopic(TestUtils.scala:231)
>   at kafka.api.BaseConsumerTest.setUp(BaseConsumerTest.scala:63)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> {quote}





[jira] [Commented] (KAFKA-1851) OffsetFetchRequest returns extra partitions when input only contains unknown partitions

2015-12-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15034929#comment-15034929
 ] 

ASF GitHub Bot commented on KAFKA-1851:
---

GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/609

KAFKA-1851 Using random dir under /temp for local kdc files to avoid 
conflicts when multiple test jobs are running.

I manually separated changes for KAFKA-2851 from this PR:  
https://github.com/apache/kafka/pull/570 which also had KAFKA-2825 changes.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka kafka-2851

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/609.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #609


commit d19a8533243a77f026a4f547bad40fd10dd68745
Author: Anna Povzner 
Date:   2015-12-02T00:02:13Z

KAFKA-1851 Using random dir under /temp for local kdc files to avoid 
conflicts when multiple test jobs are running.




> OffsetFetchRequest returns extra partitions when input only contains unknown 
> partitions
> ---
>
> Key: KAFKA-1851
> URL: https://issues.apache.org/jira/browse/KAFKA-1851
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.0
>Reporter: Jun Rao
>Assignee: Jun Rao
>Priority: Blocker
> Fix For: 0.8.2.0
>
> Attachments: kafka-1851.patch
>
>
> When issuing an OffsetFetchRequest with an unknown topic partition, the 
> OffsetFetchResponse unexpectedly returns all partitions in the same consumer 
> group, in addition to the unknown partition.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2825) Add controller failover to existing replication tests

2015-12-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15036808#comment-15036808
 ] 

ASF GitHub Bot commented on KAFKA-2825:
---

GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/618

KAFKA-2825: Add controller failover to existing replication tests



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka kafka_2825_01

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/618.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #618


commit fa0b4156d209522b1fe7656f73bb2792d8c932b3
Author: Anna Povzner 
Date:   2015-12-02T22:38:20Z

KAFKA-2825: Add controller failover to existing replication tests




> Add controller failover to existing replication tests
> -
>
> Key: KAFKA-2825
> URL: https://issues.apache.org/jira/browse/KAFKA-2825
> Project: Kafka
>  Issue Type: Test
>Reporter: Anna Povzner
>Assignee: Anna Povzner
>
> Extend existing replication tests to include controller failover:
> * clean/hard shutdown
> * clean/hard bounce



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2931) Consumer rolling upgrade test case

2015-12-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15036817#comment-15036817
 ] 

ASF GitHub Bot commented on KAFKA-2931:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/619

KAFKA-2931: add system test for consumer rolling upgrades



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2931

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/619.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #619


commit 41e4ac43ed008b1043292cfc879992de3b5098ac
Author: Jason Gustafson 
Date:   2015-12-02T22:48:14Z

KAFKA-2931: add system test for consumer rolling upgrades




> Consumer rolling upgrade test case
> --
>
> Key: KAFKA-2931
> URL: https://issues.apache.org/jira/browse/KAFKA-2931
> Project: Kafka
>  Issue Type: Test
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> We need a system test which covers the rolling upgrade process for the new 
> consumer. The idea is to start the consumers with a "range" assignment 
> strategy and then upgrade to "round-robin" without any down-time. This 
> validates the coordinator's protocol selection process.
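The selection step being validated can be sketched as follows — a simplified model, not the broker's actual vote-based implementation (all names are illustrative): pick a strategy that every member supports, so a group whose upgraded members advertise both "range" and "roundrobin" never loses a common strategy mid-upgrade.

```java
import java.util.*;

public class StrategySelection {
    // Simplified model of protocol selection: choose the first strategy, in
    // the first member's preference order, that every member supports.
    static Optional<String> select(List<List<String>> memberPreferences) {
        if (memberPreferences.isEmpty()) return Optional.empty();
        for (String candidate : memberPreferences.get(0)) {
            boolean supportedByAll = true;
            for (List<String> prefs : memberPreferences) {
                if (!prefs.contains(candidate)) { supportedByAll = false; break; }
            }
            if (supportedByAll) return Optional.of(candidate);
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        // Mid-upgrade: one member still supports only "range".
        List<List<String>> mixed = Arrays.asList(
                Arrays.asList("roundrobin", "range"),
                Arrays.asList("range"));
        if (!StrategySelection.select(mixed).equals(Optional.of("range")))
            throw new AssertionError("expected range while old members remain");

        // Fully upgraded: everyone advertises roundrobin first.
        List<List<String>> upgraded = Arrays.asList(
                Arrays.asList("roundrobin", "range"),
                Arrays.asList("roundrobin", "range"));
        if (!StrategySelection.select(upgraded).equals(Optional.of("roundrobin")))
            throw new AssertionError("expected roundrobin after upgrade");
        System.out.println("ok");
    }
}
```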



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2804) Create / Update changelog topics upon state store initialization

2015-12-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15037120#comment-15037120
 ] 

ASF GitHub Bot commented on KAFKA-2804:
---

GitHub user guozhangwang reopened a pull request:

https://github.com/apache/kafka/pull/579

KAFKA-2804: manage changelog topics through ZK in PartitionAssignor



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K2804

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/579.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #579


commit b2aad7cb73431b923170ea3cc2dd193c4f10
Author: Guozhang Wang 
Date:   2015-11-22T02:42:49Z

comment links

commit d35a1599718b831deefcb47d5d11a3e59b0c31a1
Author: wangg...@gmail.com 
Date:   2015-11-22T02:51:08Z

Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/kafka into 
K2804

commit d23db8fd8d7810dfcf7b1be2daa25cd074127eb4
Author: Guozhang Wang 
Date:   2015-11-23T23:23:16Z

auto create topic in partition assignor and block wait on topic partition 
existence

commit cf263fdd23eaa268c29f94cd0c1ac9455add9a0f
Author: Guozhang Wang 
Date:   2015-11-24T00:10:08Z

fix unit tests

commit dd571904bd1bb834215c51806ee1a23d6b082670
Author: Guozhang Wang 
Date:   2015-11-24T00:10:23Z

Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/kafka into 
K2804

commit 3f5c1c34cc52c93758b58c7ad6f018402e06d31c
Author: Guozhang Wang 
Date:   2015-12-02T00:02:32Z

Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/kafka into 
K2804

commit 67ef0321ae7398114c0c6bd5b71df263cfe2f4bc
Author: Guozhang Wang 
Date:   2015-12-02T00:22:11Z

incoporate comments

commit 0e5c7c3be8db56366895f833a0092ca177fdbda5
Author: Guozhang Wang 
Date:   2015-12-02T01:37:53Z

refactor PartitionGrouper

commit f76ee8b94da66104b21534cf1c75c9314d995acc
Author: Guozhang Wang 
Date:   2015-12-03T00:58:07Z

add Job-Id into StreamingConfig

commit 0aa06ed3e992f4dd7ea7aa72707a8f53f5b52d67
Author: Guozhang Wang 
Date:   2015-12-03T00:58:15Z

Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/kafka into 
K2804

commit 1c9827a62dcdc8206bd1309b6bc474c68bf56952
Author: Guozhang Wang 
Date:   2015-12-03T02:31:58Z

some minor fixes




> Create / Update changelog topics upon state store initialization
> 
>
> Key: KAFKA-2804
> URL: https://issues.apache.org/jira/browse/KAFKA-2804
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>
> When state store instances that are logging-backed are initialized, we need 
> to check if the corresponding change log topics have been created with the 
> right number of partitions:
> 1) If not exist, create topic
> 2) If expected #.partitions < actual #.partitions, delete and re-create topic.
> 3) If expected #.partitions > actual #.partitions, add partitions.
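The three rules above reduce to a small decision function. This is a sketch of the reconciliation logic only, not the Kafka Streams API; an actual partition count of -1 stands in for "topic does not exist":

```java
public class ChangelogReconcile {
    enum Action { CREATE, DELETE_AND_RECREATE, ADD_PARTITIONS, NONE }

    // Sketch of the three rules quoted above; names are illustrative.
    static Action reconcile(int expectedPartitions, int actualPartitions) {
        if (actualPartitions < 0)
            return Action.CREATE;                   // 1) topic does not exist
        if (expectedPartitions < actualPartitions)
            return Action.DELETE_AND_RECREATE;      // 2) too many partitions
        if (expectedPartitions > actualPartitions)
            return Action.ADD_PARTITIONS;           // 3) too few partitions
        return Action.NONE;
    }

    public static void main(String[] args) {
        if (reconcile(4, -1) != Action.CREATE) throw new AssertionError();
        if (reconcile(2, 4) != Action.DELETE_AND_RECREATE) throw new AssertionError();
        if (reconcile(4, 2) != Action.ADD_PARTITIONS) throw new AssertionError();
        if (reconcile(4, 4) != Action.NONE) throw new AssertionError();
        System.out.println("ok");
    }
}
```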



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2942) Inadvertent auto-commit when pre-fetching can cause message loss

2015-12-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15037341#comment-15037341
 ] 

ASF GitHub Bot commented on KAFKA-2942:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/623

KAFKA-2942: inadvertent auto-commit when pre-fetching can cause message loss



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2942

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/623.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #623


commit 01872110576a82f851e791422f5fec3f797711e7
Author: Jason Gustafson 
Date:   2015-12-03T06:18:01Z

KAFKA-2942: inadvertent auto-commit when pre-fetching can cause message loss




> Inadvertent auto-commit when pre-fetching can cause message loss
> 
>
> Key: KAFKA-2942
> URL: https://issues.apache.org/jira/browse/KAFKA-2942
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> Before returning from KafkaConsumer.poll(), we update the consumed position 
> and invoke poll(0) to send new fetches. In doing so, it is possible that an 
> auto-commit is triggered, which would commit the updated offsets for messages 
> that haven't yet been returned. If the process then crashes before consuming 
> the messages, there would be a gap in the delivery.
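A minimal model of the hazard (all names are illustrative, not the KafkaConsumer internals): the fetch position advances when records are pre-fetched, before they are handed to the application, so auto-committing the fetch position can commit offsets the application has never seen.

```java
public class AutoCommitGap {
    long fetchPosition = 0;     // where the next fetch will start
    long deliveredPosition = 0; // offsets actually returned to the application

    // Pre-fetch advances the position before any records are delivered.
    void prefetch(int records) { fetchPosition += records; }

    void deliverToApplication() { deliveredPosition = fetchPosition; }

    long unsafeAutoCommit() { return fetchPosition; }     // may include undelivered records
    long safeAutoCommit()   { return deliveredPosition; } // only delivered records

    public static void main(String[] args) {
        AutoCommitGap consumer = new AutoCommitGap();
        consumer.prefetch(100); // poll(0) sends new fetches
        // A crash here with an unsafe commit loses 100 records: they are
        // committed but were never delivered to the application.
        if (consumer.unsafeAutoCommit() != 100) throw new AssertionError();
        if (consumer.safeAutoCommit() != 0) throw new AssertionError();
        consumer.deliverToApplication();
        if (consumer.safeAutoCommit() != 100) throw new AssertionError();
        System.out.println("ok");
    }
}
```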



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2804) Create / Update changelog topics upon state store initialization

2015-12-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15037119#comment-15037119
 ] 

ASF GitHub Bot commented on KAFKA-2804:
---

Github user guozhangwang closed the pull request at:

https://github.com/apache/kafka/pull/579


> Create / Update changelog topics upon state store initialization
> 
>
> Key: KAFKA-2804
> URL: https://issues.apache.org/jira/browse/KAFKA-2804
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>
> When state store instances that are logging-backed are initialized, we need 
> to check if the corresponding change log topics have been created with the 
> right number of partitions:
> 1) If not exist, create topic
> 2) If expected #.partitions < actual #.partitions, delete and re-create topic.
> 3) If expected #.partitions > actual #.partitions, add partitions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2940) Make available to use any Java options at startup scripts

2015-12-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15037136#comment-15037136
 ] 

ASF GitHub Bot commented on KAFKA-2940:
---

GitHub user sasakitoa opened a pull request:

https://github.com/apache/kafka/pull/621

KAFKA-2940: Make available to use any Java options at startup scripts

We cannot specify arbitrary Java options (e.g. options for remote debugging) in 
startup scripts such as kafka-server-start.sh .
This ticket makes it possible to pass extra Java options through the "JAVA_OPTS" 
environment variable.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sasakitoa/kafka java_opt

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/621.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #621


commit aaf58f1de02b56468911e416fd3053d2d4acf320
Author: Sasaki Toru 
Date:   2015-12-03T02:39:08Z

Add JAVA_OPTS to specify any Java options to use start scripts.

commit 06f688e386b5a12301e0980fda794b21248b0d64
Author: Sasaki Toru 
Date:   2015-12-03T02:41:15Z

Merge branch 'trunk' of https://github.com/apache/kafka into java_opt




> Make available to use any Java options at startup scripts
> -
>
> Key: KAFKA-2940
> URL: https://issues.apache.org/jira/browse/KAFKA-2940
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Sasaki Toru
>Priority: Minor
> Fix For: 0.9.0.1
>
>
> We cannot specify arbitrary Java options (e.g. options for remote debugging) in 
> startup scripts such as kafka-server-start.sh .
> This ticket makes it possible to pass extra Java options through the 
> "JAVA_OPTS" environment variable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2308) New producer + Snappy face un-compression errors after broker restart

2015-12-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15037210#comment-15037210
 ] 

ASF GitHub Bot commented on KAFKA-2308:
---

Github user darionyaphet commented on the pull request:

https://github.com/apache/storm/pull/801#issuecomment-161509467
  
Hi @knusbaum @revans2, I read the `Kafka Release Notes Version 0.8.2.2` and 
found a fixed bug 
([KAFKA-2308](https://issues.apache.org/jira/browse/KAFKA-2308)) where the new 
producer hits Snappy decompression errors after a Kafka broker restart, so I 
think this may be useful here.


> New producer + Snappy face un-compression errors after broker restart
> -
>
> Key: KAFKA-2308
> URL: https://issues.apache.org/jira/browse/KAFKA-2308
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
> Fix For: 0.9.0.0, 0.8.2.2
>
> Attachments: KAFKA-2308.patch
>
>
> Looks like the new producer, when used with Snappy, sends messages the brokers 
> can't decompress after a broker restart. This issue was discussed in a few 
> mailing list threads, but I don't think we ever resolved it.
> I can reproduce with trunk and Snappy 1.1.1.7.
> To reproduce:
> 1. Start 3 brokers
> 2. Create a topic with 3 partitions and 3 replicas each.
> 3. Start the performance producer with --new-producer --compression-codec 2 (and 
> set the number of messages fairly high, to give you time; I went with 10M)
> 4. Bounce one of the brokers
> 5. The log of one of the surviving nodes should contain errors like:
> {code}
> 2015-07-02 13:45:59,300 ERROR kafka.server.ReplicaManager: [Replica Manager 
> on Broker 66]: Error processing append operation on partition [t3,0]
> kafka.common.KafkaException:
> at 
> kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:94)
> at 
> kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:64)
> at 
> kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
> at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
> at 
> kafka.message.ByteBufferMessageSet$$anon$2.innerDone(ByteBufferMessageSet.scala:177)
> at 
> kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:218)
> at 
> kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:173)
> at 
> kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
> at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
> at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
> at scala.collection.Iterator$class.foreach(Iterator.scala:727)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> at 
> scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
> at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
> at 
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
> at 
> scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
> at scala.collection.AbstractIterator.to(Iterator.scala:1157)
> at 
> scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
> at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
> at 
> kafka.message.ByteBufferMessageSet.validateMessagesAndAssignOffsets(ByteBufferMessageSet.scala:267)
> at kafka.log.Log.liftedTree1$1(Log.scala:327)
> at kafka.log.Log.append(Log.scala:326)
> at 
> kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:423)
> at 
> kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:409)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:268)
> at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:409)
> at 
> kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:365)
> at 
> kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:350)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
> at 

[jira] [Commented] (KAFKA-2924) Add offsets/group metadata decoder so that DumpLogSegments can be used with the offsets topic

2015-12-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15037230#comment-15037230
 ] 

ASF GitHub Bot commented on KAFKA-2924:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/622

KAFKA-2924: support offsets topic in DumpLogSegments



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2924

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/622.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #622


commit 2b2bb8af1f2e46ec8a36f23e297b2aeea497f648
Author: Jason Gustafson 
Date:   2015-12-03T01:56:16Z

KAFKA-2924: support offsets topic in DumpLogSegments




> Add offsets/group metadata decoder so that DumpLogSegments can be used with 
> the offsets topic
> -
>
> Key: KAFKA-2924
> URL: https://issues.apache.org/jira/browse/KAFKA-2924
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> We've only implemented a MessageFormatter for use with the ConsoleConsumer, 
> but it would be helpful to be able to pull offsets/metadata from log files 
> directly in testing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2931) Consumer rolling upgrade test case

2015-12-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15039764#comment-15039764
 ] 

ASF GitHub Bot commented on KAFKA-2931:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/619


> Consumer rolling upgrade test case
> --
>
> Key: KAFKA-2931
> URL: https://issues.apache.org/jira/browse/KAFKA-2931
> Project: Kafka
>  Issue Type: Test
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> We need a system test which covers the rolling upgrade process for the new 
> consumer. The idea is to start the consumers with a "range" assignment 
> strategy and then upgrade to "round-robin" without any down-time. This 
> validates the coordinator's protocol selection process.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2870) Support configuring operationRetryTimeout of underlying ZkClient through ZkUtils constructor

2015-12-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15037752#comment-15037752
 ] 

ASF GitHub Bot commented on KAFKA-2870:
---

GitHub user Mszak opened a pull request:

https://github.com/apache/kafka/pull/624

KAFKA-2870: add optional operationRetryTimeout parameter to apply method in 
ZKUtils.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Mszak/kafka kafka-2870

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/624.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #624


commit 0432a6c307e0acec4424fbd84aa13ad61566e0b3
Author: Jakub Nowak 
Date:   2015-12-03T12:44:54Z

Add optional operationRetryTimeoutInMillis field in ZKUtils apply method.




> Support configuring operationRetryTimeout of underlying ZkClient through 
> ZkUtils constructor
> 
>
> Key: KAFKA-2870
> URL: https://issues.apache.org/jira/browse/KAFKA-2870
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Stevo Slavic
>Assignee: Jakub Nowak
>Priority: Minor
>
> Currently (Kafka 0.9.0.0 RC3) it's not possible to both configure the 
> underlying {{ZkClient}} {{operationRetryTimeout}} and use Kafka's 
> {{ZKStringSerializer}} in a {{ZkUtils}} instance.
> Please support configuring {{operationRetryTimeout}} via another 
> {{ZkUtils.apply}} factory method.
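The requested API shape can be sketched as a factory overload — a hypothetical mirror of the ticket, not the real ZkUtils code: the existing factory keeps the client's default, and a new overload threads an explicit operationRetryTimeout through.

```java
import java.util.Optional;

public class ZkUtilsSketch {
    final int sessionTimeoutMs;
    final Optional<Long> operationRetryTimeoutMs; // empty = client default

    private ZkUtilsSketch(int sessionTimeoutMs, Optional<Long> retryMs) {
        this.sessionTimeoutMs = sessionTimeoutMs;
        this.operationRetryTimeoutMs = retryMs;
    }

    // Existing factory: no retry timeout, underlying client default applies.
    static ZkUtilsSketch apply(int sessionTimeoutMs) {
        return new ZkUtilsSketch(sessionTimeoutMs, Optional.empty());
    }

    // New overload from the ticket: explicit operationRetryTimeout.
    static ZkUtilsSketch apply(int sessionTimeoutMs, long operationRetryTimeoutMs) {
        return new ZkUtilsSketch(sessionTimeoutMs, Optional.of(operationRetryTimeoutMs));
    }

    public static void main(String[] args) {
        if (ZkUtilsSketch.apply(6000).operationRetryTimeoutMs.isPresent())
            throw new AssertionError("default factory should leave timeout unset");
        if (ZkUtilsSketch.apply(6000, 500L).operationRetryTimeoutMs.get() != 500L)
            throw new AssertionError("overload should carry the timeout through");
        System.out.println("ok");
    }
}
```

Overloading keeps existing callers source-compatible, which is the constraint the ticket implies.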



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2893) Add Negative Partition Seek Check

2015-12-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15041588#comment-15041588
 ] 

ASF GitHub Bot commented on KAFKA-2893:
---

GitHub user ZoneMayor opened a pull request:

https://github.com/apache/kafka/pull/628

KAFKA-2893: Add a simple non-negative partition seek check



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka trunk-KAFKA-2893

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/628.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #628


commit 95374147a28208d4850f6e73f714bf418935fc2d
Author: ZoneMayor 
Date:   2015-11-27T03:49:34Z

Merge pull request #1 from apache/trunk

merge

commit cec5b48b651a7efd3900cfa3c1fd0ab1eeeaa3ec
Author: ZoneMayor 
Date:   2015-12-01T10:44:02Z

Merge pull request #2 from apache/trunk

2015-12-1

commit a119d547bf1741625ce0627073c7909992a20f15
Author: ZoneMayor 
Date:   2015-12-04T13:42:27Z

Merge pull request #3 from apache/trunk

2015-12-04#KAFKA-2893

commit 253769e9441bc0634fd27d00375c5381daf03202
Author: jinxing 
Date:   2015-12-04T14:01:42Z

KAFKA-2893: Add Negative Partition Seek Check




> Add Negative Partition Seek Check
> -
>
> Key: KAFKA-2893
> URL: https://issues.apache.org/jira/browse/KAFKA-2893
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jesse Anderson
>
> When seeking to an offset that is a negative number, there isn't a check. When 
> you do give a negative number, you get the following output:
> {{2015-11-25 13:54:16 INFO  Fetcher:567 - Fetch offset null is out of range, 
> resetting offset}}
> Code to replicate:
> KafkaConsumer<String, String> consumer = new KafkaConsumer<String, 
> String>(props);
> TopicPartition partition = new TopicPartition(topic, 0);
> consumer.assign(Arrays.asList(partition));
> consumer.seek(partition, -1);
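The fix being asked for amounts to a fail-fast guard. A hypothetical sketch (not the actual KafkaConsumer patch): reject the negative offset at the call site instead of letting the fetcher later log "Fetch offset null is out of range" and silently reset.

```java
public class SeekValidation {
    // Hypothetical guard for the missing check; names are illustrative.
    static long validatedSeek(long offset) {
        if (offset < 0)
            throw new IllegalArgumentException(
                    "seek offset must be non-negative, got " + offset);
        return offset;
    }

    public static void main(String[] args) {
        if (validatedSeek(42) != 42) throw new AssertionError();
        try {
            validatedSeek(-1);
            throw new AssertionError("negative seek should be rejected");
        } catch (IllegalArgumentException expected) {
            // Rejected up front, with a message naming the bad offset.
            System.out.println("ok");
        }
    }
}
```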



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1148) Delayed fetch/producer requests should be satisfied on a leader change

2015-12-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15043841#comment-15043841
 ] 

ASF GitHub Bot commented on KAFKA-1148:
---

GitHub user iBuddha opened a pull request:

https://github.com/apache/kafka/pull/633

KAFKA-1148 check leader epoch for DelayedProduce

KAFKA-1148: check leader epoch for DelayedProduce

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/iBuddha/kafka KAFKA-1148

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/633.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #633


commit 873b555906a773e19bdc3fc54fe9b3f5c3f8a6dd
Author: xhuang 
Date:   2015-12-06T12:02:49Z

KAFKA-1148 check leader epoch for DelayedProduce




> Delayed fetch/producer requests should be satisfied on a leader change
> --
>
> Key: KAFKA-1148
> URL: https://issues.apache.org/jira/browse/KAFKA-1148
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joel Koshy
>
> Somewhat related to KAFKA-1016.
> This would be an issue only if max.wait is set to a very high value. When a 
> leader change occurs we should remove the delayed request from the purgatory 
> - either satisfy with error/expire - whichever makes more sense.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2949) Make EndToEndAuthorizationTest replicated

2015-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15043350#comment-15043350
 ] 

ASF GitHub Bot commented on KAFKA-2949:
---

GitHub user fpj opened a pull request:

https://github.com/apache/kafka/pull/631

KAFKA-2949: Make EndToEndAuthorizationTest replicated.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/fpj/kafka KAFKA-2949

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/631.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #631


commit cc9757e347858b06fc4afe442adbc82fd0ce841a
Author: Flavio Junqueira 
Date:   2015-12-05T15:31:06Z

KAFKA-2949: Making topic replicated.




> Make EndToEndAuthorizationTest replicated
> -
>
> Key: KAFKA-2949
> URL: https://issues.apache.org/jira/browse/KAFKA-2949
> Project: Kafka
>  Issue Type: Test
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
>
> The call to create a topic in the setup method sets the degree of 
> replication to 1; we should make it 3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2962) Add Simple Join API

2015-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048211#comment-15048211
 ] 

ASF GitHub Bot commented on KAFKA-2962:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/644


> Add Simple Join API
> ---
>
> Key: KAFKA-2962
> URL: https://issues.apache.org/jira/browse/KAFKA-2962
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.9.1.0
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
> Fix For: 0.9.1.0
>
>
> Stream-Table and Table-Table joins



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2965) Two variables should be exchanged.

2015-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047898#comment-15047898
 ] 

ASF GitHub Bot commented on KAFKA-2965:
---

GitHub user boweite opened a pull request:

https://github.com/apache/kafka/pull/646

[KAFKA-2965]Two variables should be exchanged.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/boweite/kafka kafka-2965

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/646.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #646


commit ad71fb59dc5e9db1a9ceea20a9b320e0885ba146
Author: unknown 
Date:   2015-12-09T02:28:50Z

change variables




> Two variables should be exchanged.
> --
>
> Key: KAFKA-2965
> URL: https://issues.apache.org/jira/browse/KAFKA-2965
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.9.0.0
> Environment: NA
>Reporter: Bo Wang
>Priority: Minor
>  Labels: bug
> Attachments: Kafka-2965.patch
>
>
> Two variables should be exchanged in KafkaController.scala as follows:
> val topicsForWhichPartitionReassignmentIsInProgress = 
> controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic)
> val topicsForWhichPreferredReplicaElectionIsInProgress = 
> controllerContext.partitionsBeingReassigned.keySet.map(_.topic)
> val topicsIneligibleForDeletion = topicsWithReplicasOnDeadBrokers | 
> topicsForWhichPartitionReassignmentIsInProgress |
>   
> topicsForWhichPreferredReplicaElectionIsInProgress
> Should change to:
> val topicsForWhichPreferredReplicaElectionIsInProgress = 
> controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic)
> val topicsForWhichPartitionReassignmentIsInProgress = 
> controllerContext.partitionsBeingReassigned.keySet.map(_.topic)
> val topicsIneligibleForDeletion = topicsWithReplicasOnDeadBrokers | 
> topicsForWhichPartitionReassignmentIsInProgress |
>   
> topicsForWhichPreferredReplicaElectionIsInProgress



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2733) Distinguish metric names inside the sensor registry

2015-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047676#comment-15047676
 ] 

ASF GitHub Bot commented on KAFKA-2733:
---

GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/643

KAFKA-2733: Standardize metric name for Kafka Streams



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K2733

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/643.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #643


commit c437400f28c711a89fbab0c9fd179aa817f8c1fb
Author: Guozhang Wang 
Date:   2015-12-08T23:20:25Z

v1




> Distinguish metric names inside the sensor registry
> ---
>
> Key: KAFKA-2733
> URL: https://issues.apache.org/jira/browse/KAFKA-2733
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
> Fix For: 0.9.0.1
>
>
> Since stream tasks can share the same StreamingMetrics object, and a 
> MetricName is distinguishable only by its group name (the same for the same 
> type of state, and for other streaming metrics) and its tags (currently only 
> the client-id of the StreamThread), having multiple tasks within a single 
> stream thread can lead to an IllegalStateException when those tasks try to 
> register the same metric.
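The collision and one way out can be sketched with a toy registry — illustrative names only, not the real Kafka Metrics API: keys are (group, name, tags), so two tasks registering the same metric with identical tags collide, while an extra per-task tag such as "task-id" makes the keys distinct.

```java
import java.util.*;

public class MetricRegistrySketch {
    // Toy registry keyed by group, name, and sorted tags.
    private final Set<String> registered = new HashSet<>();

    void register(String group, String name, Map<String, String> tags) {
        String key = group + "/" + name + "/" + new TreeMap<>(tags);
        if (!registered.add(key))
            throw new IllegalStateException("metric already registered: " + key);
    }

    public static void main(String[] args) {
        MetricRegistrySketch metrics = new MetricRegistrySketch();
        Map<String, String> tags = new HashMap<>();
        tags.put("client-id", "thread-1");
        metrics.register("stream-metrics", "process-latency", tags);
        try {
            // Second task, same group/name/tags: collision.
            metrics.register("stream-metrics", "process-latency", tags);
            throw new AssertionError("expected a collision");
        } catch (IllegalStateException expected) { }
        // Distinguish the tasks with an extra tag and both register cleanly.
        Map<String, String> task0 = new HashMap<>(tags);
        task0.put("task-id", "0_0");
        Map<String, String> task1 = new HashMap<>(tags);
        task1.put("task-id", "0_1");
        metrics.register("stream-metrics", "process-latency", task0);
        metrics.register("stream-metrics", "process-latency", task1);
        System.out.println("ok");
    }
}
```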



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2948) Kafka producer does not cope well with topic deletions

2015-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047782#comment-15047782
 ] 

ASF GitHub Bot commented on KAFKA-2948:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/645

KAFKA-2948: Remove unused topics from producer metadata set

If no messages are sent to a topic during the last refresh interval or if 
UNKNOWN_TOPIC_OR_PARTITION error is received, remove the topic from the 
metadata list. Topics are added to the list on the next attempt to send a 
message to the topic.
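The expiry rule described can be sketched as follows — a simplified model of the PR's approach with illustrative names, not the actual producer Metadata class: each topic records when it was last used, a metadata refresh drops topics idle past the interval, an UNKNOWN_TOPIC_OR_PARTITION error drops the topic immediately, and the next send re-adds it.

```java
import java.util.*;

public class ProducerTopicSet {
    private final long expiryMs;
    private final Map<String, Long> lastUsed = new HashMap<>();

    ProducerTopicSet(long expiryMs) { this.expiryMs = expiryMs; }

    // Sending to a topic (re-)adds it and refreshes its timestamp.
    void onSend(String topic, long nowMs) { lastUsed.put(topic, nowMs); }

    // A deleted topic reported as unknown is dropped immediately.
    void onUnknownTopicError(String topic) { lastUsed.remove(topic); }

    // On each metadata refresh, drop topics idle past the interval.
    void onMetadataRefresh(long nowMs) {
        lastUsed.values().removeIf(last -> nowMs - last > expiryMs);
    }

    Set<String> topics() { return lastUsed.keySet(); }

    public static void main(String[] args) {
        ProducerTopicSet set = new ProducerTopicSet(1000);
        set.onSend("keep", 0);
        set.onSend("idle", 0);
        set.onSend("keep", 900);          // "keep" stays fresh
        set.onMetadataRefresh(1500);      // "idle" exceeded the interval
        if (!set.topics().equals(Collections.singleton("keep")))
            throw new AssertionError("idle topic should have been expired");
        set.onUnknownTopicError("keep");  // deleted topic reported unknown
        if (!set.topics().isEmpty()) throw new AssertionError();
        set.onSend("keep", 1600);         // next send re-adds it
        if (!set.topics().contains("keep")) throw new AssertionError();
        System.out.println("ok");
    }
}
```

This keeps the metadata set bounded by recent activity, addressing the "grows but never shrinks" leak described in the issue.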

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-2948

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/645.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #645


commit f7e40e5ce515d700e8cc7ab02a0f16141fa14f67
Author: rsivaram 
Date:   2015-12-09T00:16:18Z

KAFKA-2948: Remove unused topics from producer metadata set




> Kafka producer does not cope well with topic deletions
> --
>
> Key: KAFKA-2948
> URL: https://issues.apache.org/jira/browse/KAFKA-2948
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>
> Kafka producer gets metadata for topics when send is invoked and thereafter 
> it attempts to keep the metadata up-to-date without any explicit requests 
> from the client. This works well in static environments, but when topics are 
> added or deleted, the list of topics in Metadata grows but never shrinks. 
> Apart from being a memory leak, this results in constant requests for 
> metadata for deleted topics.
> We are running into this issue with the Confluent REST server, where topic 
> deletions from tests are filling up logs with warnings about unknown topics. 
> Auto-create is turned off in our Kafka cluster.
> I am happy to provide a fix, but am not sure what the right fix is. Does it 
> make sense to remove topics from the metadata list when 
> UNKNOWN_TOPIC_OR_PARTITION response is received if there are no outstanding 
> sends? It doesn't look very straightforward to do this, so any alternative 
> suggestions are welcome.
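The expiry idea proposed in the PR can be sketched as follows (a minimal sketch with assumed names, not the producer's actual Metadata class): remember when each topic was last used, drop topics unused for longer than the refresh interval, and let a later send simply re-add the topic.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Sketch of the expiry idea: track the last time each topic was used and
// remove topics that have not been used within the expiry interval. The
// class and method names here are illustrative assumptions.
class TopicExpirySketch {
    private final Map<String, Long> lastUsed = new HashMap<>();
    private final long expiryMs;

    TopicExpirySketch(long expiryMs) { this.expiryMs = expiryMs; }

    // Called on every send attempt; re-adds the topic if it was dropped.
    void recordSend(String topic, long nowMs) { lastUsed.put(topic, nowMs); }

    // Topics still considered active; stale ones are removed as a side effect.
    Set<String> activeTopics(long nowMs) {
        lastUsed.entrySet().removeIf(e -> nowMs - e.getValue() > expiryMs);
        return lastUsed.keySet();
    }
}
```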





[jira] [Commented] (KAFKA-2667) Copycat KafkaBasedLogTest.testSendAndReadToEnd transient failure

2015-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047852#comment-15047852
 ] 

ASF GitHub Bot commented on KAFKA-2667:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/642


> Copycat KafkaBasedLogTest.testSendAndReadToEnd transient failure
> 
>
> Key: KAFKA-2667
> URL: https://issues.apache.org/jira/browse/KAFKA-2667
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Jason Gustafson
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.1.0
>
>
> Seen in recent builds:
> {code}
> org.apache.kafka.copycat.util.KafkaBasedLogTest > testSendAndReadToEnd FAILED
> java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.kafka.copycat.util.KafkaBasedLogTest.testSendAndReadToEnd(KafkaBasedLogTest.java:335)
> {code}





[jira] [Commented] (KAFKA-2962) Add Simple Join API

2015-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047694#comment-15047694
 ] 

ASF GitHub Bot commented on KAFKA-2962:
---

GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/644

KAFKA-2962: stream-table table-table joins

@guozhangwang 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka join_methods

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/644.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #644


commit 15804dc1b8a8d9cfeee685d66b64d5fb9f77989f
Author: Yasuhiro Matsuda 
Date:   2015-12-08T23:39:15Z

stream-table table-table joins




> Add Simple Join API
> ---
>
> Key: KAFKA-2962
> URL: https://issues.apache.org/jira/browse/KAFKA-2962
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.9.1.0
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
>
> Stream-Table and Table-Table joins





[jira] [Commented] (KAFKA-2399) Replace Stream.continually with Iterator.continually

2015-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047833#comment-15047833
 ] 

ASF GitHub Bot commented on KAFKA-2399:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/106


> Replace Stream.continually with Iterator.continually
> 
>
> Key: KAFKA-2399
> URL: https://issues.apache.org/jira/browse/KAFKA-2399
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Minor
> Fix For: 0.9.1.0
>
>
> There are two usages of `Stream.continually` and neither of them seems to 
> need the extra functionality it provides over `Iterator.continually` 
> (`Stream.continually` allocates `Cons` instances to save the computation 
> instead of recomputing it if needed more than once).





[jira] [Commented] (KAFKA-2667) Copycat KafkaBasedLogTest.testSendAndReadToEnd transient failure

2015-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047597#comment-15047597
 ] 

ASF GitHub Bot commented on KAFKA-2667:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/642

KAFKA-2667: fix assertion depending on hash map order in 
KafkaBasedLogTest.testSendAndReadToEnd



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2667

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/642.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #642


commit 791ce85204e5c3be9afcfdf18bb066001503c347
Author: Jason Gustafson 
Date:   2015-12-08T22:16:53Z

KAFKA-2667: fix assertion depending on hash map order in 
KafkaBasedLogTest.testSendAndReadToEnd




> Copycat KafkaBasedLogTest.testSendAndReadToEnd transient failure
> 
>
> Key: KAFKA-2667
> URL: https://issues.apache.org/jira/browse/KAFKA-2667
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Jason Gustafson
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.1.0
>
>
> Seen in recent builds:
> {code}
> org.apache.kafka.copycat.util.KafkaBasedLogTest > testSendAndReadToEnd FAILED
> java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.kafka.copycat.util.KafkaBasedLogTest.testSendAndReadToEnd(KafkaBasedLogTest.java:335)
> {code}





[jira] [Commented] (KAFKA-2509) Replace LeaderAndIsr{Request,Response} with org.apache.kafka.common.network.requests equivalent

2015-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047950#comment-15047950
 ] 

ASF GitHub Bot commented on KAFKA-2509:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/647

KAFKA-2509: Replace LeaderAndIsr{Request,Response} with o.a.k.c requests equivalent

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka isr-request

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/647.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #647


commit 7292291832b688c77ab0f27ba68f20e71ca2e81b
Author: Grant Henke 
Date:   2015-12-09T03:13:27Z

KAFKA-2509: Replace LeaderAndIsr{Request,Response} with o.a.k.c requests 
equivalent




> Replace LeaderAndIsr{Request,Response} with 
> org.apache.kafka.common.network.requests equivalent
> ---
>
> Key: KAFKA-2509
> URL: https://issues.apache.org/jira/browse/KAFKA-2509
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Grant Henke
>






[jira] [Commented] (KAFKA-2668) Add a metric that records the total number of metrics

2015-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047974#comment-15047974
 ] 

ASF GitHub Bot commented on KAFKA-2668:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/328


> Add a metric that records the total number of metrics
> -
>
> Key: KAFKA-2668
> URL: https://issues.apache.org/jira/browse/KAFKA-2668
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Joel Koshy
>Assignee: Dong Lin
> Fix For: 0.9.1.0
>
>
> Sounds recursive and weird, but this would have been useful while debugging 
> KAFKA-2664.





[jira] [Commented] (KAFKA-2924) Add offsets/group metadata decoder so that DumpLogSegments can be used with the offsets topic

2015-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048089#comment-15048089
 ] 

ASF GitHub Bot commented on KAFKA-2924:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/622


> Add offsets/group metadata decoder so that DumpLogSegments can be used with 
> the offsets topic
> -
>
> Key: KAFKA-2924
> URL: https://issues.apache.org/jira/browse/KAFKA-2924
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.9.0.1
>
>
> We've only implemented a MessageFormatter for use with the ConsoleConsumer, 
> but it would be helpful to be able to pull offsets/metadata from log files 
> directly in testing.





[jira] [Commented] (KAFKA-2837) FAILING TEST: kafka.api.ProducerBounceTest > testBrokerFailure

2015-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048704#comment-15048704
 ] 

ASF GitHub Bot commented on KAFKA-2837:
---

GitHub user ZoneMayor opened a pull request:

https://github.com/apache/kafka/pull/648

KAFKA-2837: fix transient failure of kafka.api.ProducerBounceTest > 
testBrokerFailure

I can reproduce this transient failure; it seldom happens.
The code is like below:
{code}
// rolling bounce brokers
for (i <- 0 until numServers) {
  for (server <- servers) {
    server.shutdown()
    server.awaitShutdown()
    server.startup()
    Thread.sleep(2000)
  }

  // Make sure the producer does not see any exception
  // in returned metadata due to broker failures
  assertTrue(scheduler.failed == false)

  // Make sure the leader still exists after bouncing brokers
  (0 until numPartitions).foreach(partition =>
    TestUtils.waitUntilLeaderIsElectedOrChanged(zkUtils, topic1, partition))
}
{code}
Brokers keep rolling-restarting while the producer keeps sending messages.
In every loop, the test waits for the election of the partition leader.
But if the election is slow, more messages are buffered in the 
RecordAccumulator's BufferPool.
The limit for the buffer is set to be 3;
a TimeoutException("Failed to allocate memory within the configured max 
blocking time") shows up when it runs out of memory.
Since the test sleeps for 2000 ms after every restart of a broker, this 
transient failure seldom happens.
But if I reduce the sleeping period, the failure happens with a bigger chance;
for example, if the broker with the controller role is restarted, it takes 
time to elect a controller first and then the leader, which leads to more 
messages blocked in KafkaProducer's RecordAccumulator BufferPool.
In this fix, I just enlarge the producer's buffer size to 1MB.
@guozhangwang, could you give some comments?

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka trunk-KAFKA-2837

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/648.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #648


commit 95374147a28208d4850f6e73f714bf418935fc2d
Author: ZoneMayor 
Date:   2015-11-27T03:49:34Z

Merge pull request #1 from apache/trunk

merge

commit cec5b48b651a7efd3900cfa3c1fd0ab1eeeaa3ec
Author: ZoneMayor 
Date:   2015-12-01T10:44:02Z

Merge pull request #2 from apache/trunk

2015-12-1

commit a119d547bf1741625ce0627073c7909992a20f15
Author: ZoneMayor 
Date:   2015-12-04T13:42:27Z

Merge pull request #3 from apache/trunk

2015-12-04#KAFKA-2893

commit b767a8dff85fc71c75d4cf5178c3f6f03ff81bfc
Author: ZoneMayor 
Date:   2015-12-09T10:42:30Z

Merge pull request #5 from apache/trunk

2015-12-9

commit cd5e6f4700a4387f9383b84aca0ee9c4639b1033
Author: jinxing 
Date:   2015-12-09T13:49:07Z

KAFKA-2837: fix transient failure kafka.api.ProducerBounceTest > 
testBrokerFailure




> FAILING TEST: kafka.api.ProducerBounceTest > testBrokerFailure 
> ---
>
> Key: KAFKA-2837
> URL: https://issues.apache.org/jira/browse/KAFKA-2837
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>  Labels: newbie
>
> {code}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> kafka.api.ProducerBounceTest.testBrokerFailure(ProducerBounceTest.scala:117)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 

[jira] [Commented] (KAFKA-2972) ControlledShutdownResponse always deserialises `partitionsRemaining` as empty

2015-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048976#comment-15048976
 ] 

ASF GitHub Bot commented on KAFKA-2972:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/649

KAFKA-2972; Add missing `partitionsRemaingList.add` in 
`ControlledShutdownResponse` constructor



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
KAFKA-2972-controlled-shutdown-response-bug

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/649.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #649


commit 82eb116122637e05221a8afbceae12d97cc1463d
Author: Ismael Juma 
Date:   2015-12-09T16:57:56Z

Add missing `partitionsRemaingList.add` in `ControlledShutdownResponse` 
constructor




> ControlledShutdownResponse always deserialises `partitionsRemaining` as empty
> -
>
> Key: KAFKA-2972
> URL: https://issues.apache.org/jira/browse/KAFKA-2972
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.0.1
>
>
> This was a regression introduced when moving to Java request/response classes.





[jira] [Commented] (KAFKA-2965) Two variables should be exchanged.

2015-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049090#comment-15049090
 ] 

ASF GitHub Bot commented on KAFKA-2965:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/646


> Two variables should be exchanged.
> --
>
> Key: KAFKA-2965
> URL: https://issues.apache.org/jira/browse/KAFKA-2965
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.9.0.0
> Environment: NA
>Reporter: Bo Wang
>Priority: Minor
>  Labels: bug
> Fix For: 0.9.1.0
>
> Attachments: Kafka-2965.patch
>
>
> Two variables should be exchanged in KafkaController.scala as follows:
> val topicsForWhichPartitionReassignmentIsInProgress = 
> controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic)
> val topicsForWhichPreferredReplicaElectionIsInProgress = 
> controllerContext.partitionsBeingReassigned.keySet.map(_.topic)
> val topicsIneligibleForDeletion = topicsWithReplicasOnDeadBrokers | 
> topicsForWhichPartitionReassignmentIsInProgress |
>   
> topicsForWhichPreferredReplicaElectionIsInProgress
> Should change to:
> val topicsForWhichPreferredReplicaElectionIsInProgress = 
> controllerContext.partitionsUndergoingPreferredReplicaElection.map(_.topic)
> val topicsForWhichPartitionReassignmentIsInProgress = 
> controllerContext.partitionsBeingReassigned.keySet.map(_.topic)
> val topicsIneligibleForDeletion = topicsWithReplicasOnDeadBrokers | 
> topicsForWhichPartitionReassignmentIsInProgress |
>   
> topicsForWhichPreferredReplicaElectionIsInProgress





[jira] [Commented] (KAFKA-2945) CreateTopic - protocol and server side implementation

2015-12-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15042308#comment-15042308
 ] 

ASF GitHub Bot commented on KAFKA-2945:
---

Github user granthenke closed the pull request at:

https://github.com/apache/kafka/pull/626


> CreateTopic - protocol and server side implementation
> -
>
> Key: KAFKA-2945
> URL: https://issues.apache.org/jira/browse/KAFKA-2945
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Grant Henke
>






[jira] [Commented] (KAFKA-2945) CreateTopic - protocol and server side implementation

2015-12-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15042309#comment-15042309
 ] 

ASF GitHub Bot commented on KAFKA-2945:
---

GitHub user granthenke reopened a pull request:

https://github.com/apache/kafka/pull/626

KAFKA-2945: CreateTopic - protocol and server side implementation



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka create-wire

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/626.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #626


commit d1fe53ecb9ccdc94457efcb61332cd54ca7b8095
Author: Grant Henke 
Date:   2015-12-02T03:23:45Z

KAFKA-2945: CreateTopic - protocol and server side implementation

commit 5dce80683ebd2fe6a30c8c0aa8dfe8b2233602a8
Author: Grant Henke 
Date:   2015-12-04T21:57:57Z

Address reviews: possible codes, comments, invalid config exception




> CreateTopic - protocol and server side implementation
> -
>
> Key: KAFKA-2945
> URL: https://issues.apache.org/jira/browse/KAFKA-2945
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Grant Henke
>






[jira] [Commented] (KAFKA-2856) add KTable

2015-12-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15042375#comment-15042375
 ] 

ASF GitHub Bot commented on KAFKA-2856:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/604


> add KTable
> --
>
> Key: KAFKA-2856
> URL: https://issues.apache.org/jira/browse/KAFKA-2856
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Reporter: Yasuhiro Matsuda
> Fix For: 0.9.1.0
>
>
> KTable is a special type of the stream that represents a changelog of a 
> database table (or a key-value store).
> A changelog has to meet the following requirements.
> * Key-value mapping is surjective in the database table (the key must be the 
> primary key).
> * All insert/update/delete events are delivered in order for the same key
> * An update event has the whole data (not just delta).
> * A delete event is represented by the null value.
> A KTable is not necessarily materialized as a local store; it may be 
> materialized when necessary (see below).
> KTable supports look-up by key, and is materialized implicitly when a 
> look-up is necessary.
> * KTable may be created from a topic. (Base KTable)
> * KTable may be created from another KTable by filter(), filterOut(), 
> mapValues(). (Derived KTable)
> * A call to the user-supplied function is skipped when the value is null, 
> since such an event represents a deletion. 
> * Instead of dropping, events filtered out by filter() or filterOut() are 
> converted to delete events. (Can we avoid this?)
> * map(), flatMap() and flatMapValues() are not supported since they may 
> violate the changelog requirements.
> A derived KTable may be persisted to a topic by to() or through(). through() 
> creates another base KTable. 
> KTable can be converted to KStream by the toStream() method.
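Under the changelog requirements listed above, materializing such a stream into a key-value store reduces to last-write-wins per key, with a null value treated as a delete. A generic sketch (illustrative only, not the Kafka Streams implementation):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the changelog semantics described above: events arrive in order
// per key, an update carries the whole value (not a delta), and a null value
// means delete. Materialization is therefore last-write-wins per key.
class ChangelogMaterializer<K, V> {
    private final Map<K, V> store = new HashMap<>();

    void apply(K key, V value) {
        if (value == null)
            store.remove(key);     // delete event
        else
            store.put(key, value); // insert, or full-value update
    }

    V get(K key) { return store.get(key); }
}
```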





[jira] [Commented] (KAFKA-2950) Performance regression in producer

2015-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15043582#comment-15043582
 ] 

ASF GitHub Bot commented on KAFKA-2950:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/632


> Performance regression in producer
> --
>
> Key: KAFKA-2950
> URL: https://issues.apache.org/jira/browse/KAFKA-2950
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jay Kreps
> Fix For: 0.9.0.1
>
>
> For small messages the producer has gotten slower since the 0.8 release. E.g. 
> for a single thread on Linux sending 100-byte messages, the decrease seems to 
> be about 30%. The root cause seems to be that the new timeout we added for 
> max.block.ms ends up doing about 4 more system calls to check the elapsed 
> time.
> The reason for these calls is to detect slow serializers or partitioners. But 
> I think this is not worth the performance hit. I think we can say the config 
> is only for blocking due to metadata or memory allocation.
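The fix direction can be sketched as computing the remaining max.block.ms budget from timestamps the caller already holds, instead of each stage calling System.currentTimeMillis() itself (a minimal sketch; the class and method names are assumptions, not the producer's actual code):

```java
// Illustrative sketch: the caller captures System.currentTimeMillis() once
// per send and threads timestamps through, so checking the remaining
// max.block.ms budget costs no extra system call per stage.
class BlockBudgetSketch {
    // Remaining blocking budget, clamped at zero once the deadline passes.
    static long remainingMs(long maxBlockMs, long startMs, long nowMs) {
        return Math.max(0, maxBlockMs - (nowMs - startMs));
    }
}
```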





[jira] [Commented] (KAFKA-2950) Performance regression in producer

2015-12-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15043568#comment-15043568
 ] 

ASF GitHub Bot commented on KAFKA-2950:
---

GitHub user jkreps opened a pull request:

https://github.com/apache/kafka/pull/632

KAFKA-2950: Fix performance regression in the producer

Removes all the System.currentTimeMillis calls to help with performance on 
small messages.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jkreps/kafka producer-perf-regression

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/632.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #632


commit 69d94701d4de3283f3783826edb9321dc6c800ec
Author: Jay Kreps 
Date:   2015-12-05T23:08:04Z

Fix performance regression in the producer for small messages due to too 
many System.currentTimeMillis() calls.




> Performance regression in producer
> --
>
> Key: KAFKA-2950
> URL: https://issues.apache.org/jira/browse/KAFKA-2950
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jay Kreps
>
> For small messages the producer has gotten slower since the 0.8 release. E.g. 
> for a single thread on Linux sending 100-byte messages, the decrease seems to 
> be about 30%. The root cause seems to be that the new timeout we added for 
> max.block.ms ends up doing about 4 more system calls to check the elapsed 
> time.
> The reason for these calls is to detect slow serializers or partitioners. But 
> I think this is not worth the performance hit. I think we can say the config 
> is only for blocking due to metadata or memory allocation.





[jira] [Commented] (KAFKA-2942) Inadvertent auto-commit when pre-fetching can cause message loss

2015-12-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15038335#comment-15038335
 ] 

ASF GitHub Bot commented on KAFKA-2942:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/623


> Inadvertent auto-commit when pre-fetching can cause message loss
> 
>
> Key: KAFKA-2942
> URL: https://issues.apache.org/jira/browse/KAFKA-2942
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.9.0.1
>
>
> Before returning from KafkaConsumer.poll(), we update the consumed position 
> and invoke poll(0) to send new fetches. In doing so, it is possible that an 
> auto-commit is triggered, which would commit updated offsets for messages 
> that haven't yet been returned. If the process then crashes before consuming 
> the messages, there would be a gap in the delivery.





[jira] [Commented] (KAFKA-2905) System test for rolling upgrade to enable ZooKeeper ACLs with SASL

2015-12-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15039567#comment-15039567
 ] 

ASF GitHub Bot commented on KAFKA-2905:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/598


> System test for rolling upgrade to enable ZooKeeper ACLs with SASL
> --
>
> Key: KAFKA-2905
> URL: https://issues.apache.org/jira/browse/KAFKA-2905
> Project: Kafka
>  Issue Type: Test
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
>
> Write a ducktape test to verify the ability of performing a rolling upgrade 
> to enable the use of secure ACLs and SASL with ZooKeeper.





[jira] [Commented] (KAFKA-2945) CreateTopic - protocol and server side implementation

2015-12-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15039619#comment-15039619
 ] 

ASF GitHub Bot commented on KAFKA-2945:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/626

KAFKA-2945: CreateTopic - protocol and server side implementation



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka create-wire

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/626.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #626


commit d1fe53ecb9ccdc94457efcb61332cd54ca7b8095
Author: Grant Henke 
Date:   2015-12-02T03:23:45Z

KAFKA-2945: CreateTopic - protocol and server side implementation




> CreateTopic - protocol and server side implementation
> -
>
> Key: KAFKA-2945
> URL: https://issues.apache.org/jira/browse/KAFKA-2945
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Grant Henke
>






[jira] [Commented] (KAFKA-1851) OffsetFetchRequest returns extra partitions when input only contains unknown partitions

2015-12-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15035112#comment-15035112
 ] 

ASF GitHub Bot commented on KAFKA-1851:
---

GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/610

KAFKA-1851 Using random file names for local kdc files to avoid conflicts.

I originally tried to solve the problem by using tempfile, creating and 
using an scp() utility method that created a random local temp file every time 
it was called. However, it required passing the miniKdc object to 
SecurityConfig setup_node, which looked very invasive since many tests use 
this method. Here is the PR for that, which I think we will close: 
https://github.com/apache/kafka/pull/609

This change is the least invasive way to solve conflicts between 
multiple test jobs. 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka kafka_2851_01

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/610.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #610


commit 4c9c76825b9dff5fb509eac01592d37f357b5775
Author: Anna Povzner 
Date:   2015-12-02T01:55:08Z

KAFKA-2851:  Using random file names for local kdc files to avoid conflicts




> OffsetFetchRequest returns extra partitions when input only contains unknown 
> partitions
> ---
>
> Key: KAFKA-1851
> URL: https://issues.apache.org/jira/browse/KAFKA-1851
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.0
>Reporter: Jun Rao
>Assignee: Jun Rao
>Priority: Blocker
> Fix For: 0.8.2.0
>
> Attachments: kafka-1851.patch
>
>
> When issuing an OffsetFetchRequest with an unknown topic partition, the 
> OffsetFetchResponse unexpectedly returns all partitions in the same consumer 
> group, in addition to the unknown partition.





[jira] [Commented] (KAFKA-2926) [MirrorMaker] InternalRebalancer calls wrong method of external rebalancer

2015-12-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15035135#comment-15035135
 ] 

ASF GitHub Bot commented on KAFKA-2926:
---

GitHub user gwenshap opened a pull request:

https://github.com/apache/kafka/pull/611

KAFKA-2926: [MirrorMaker] InternalRebalancer calls wrong method of external rebalancer

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gwenshap/kafka KAFKA-2926

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/611.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #611


commit a5a4db994c0facd09716ec2c51f9c8ea868e2e87
Author: Gwen Shapira 
Date:   2015-12-02T02:13:04Z

KAFKA-2926: [MirrorMaker] InternalRebalancer calls wrong method of external 
rebalancer




> [MirrorMaker] InternalRebalancer calls wrong method of external rebalancer
> --
>
> Key: KAFKA-2926
> URL: https://issues.apache.org/jira/browse/KAFKA-2926
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Gwen Shapira
>
> MirrorMaker has an internal rebalance listener that will invoke an external 
> (pluggable) listener if one exists. It looks like the internal listener calls 
> the wrong method of the external listener.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2915) System Tests that use bootstrap.servers embedded in jinja files are not working

2015-12-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15034245#comment-15034245
 ] 

ASF GitHub Bot commented on KAFKA-2915:
---

GitHub user benstopford opened a pull request:

https://github.com/apache/kafka/pull/608

KAFKA-2915: Fix problem with System Tests that use bootstrap.servers 
embedded in jinja files

Fixes problems in mirror maker and consumer tests
http://jenkins.confluent.io/job/kafka_system_tests_branch_builder/290/
http://jenkins.confluent.io/job/kafka_system_tests_branch_builder/289/

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/benstopford/kafka KAFKA-2915-jinja-bug

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/608.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #608


commit 640532b7ca10298d545e523e291e6f6fe82843c6
Author: Ben Stopford 
Date:   2015-12-01T15:27:46Z

KAFKA-2915: Added security protocol to bootstrap servers call in jinja file

commit 192d96c6a53481db5b8dc428f0a2eb6d401862ea
Author: Ben Stopford 
Date:   2015-12-01T16:40:56Z

KAFKA-2915: fixed string formatting




> System Tests that use bootstrap.servers embedded in jinja files are not 
> working
> ---
>
> Key: KAFKA-2915
> URL: https://issues.apache.org/jira/browse/KAFKA-2915
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ben Stopford
>
> Regression due to changes in the way the tests handle security. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2915) System Tests that use bootstrap.servers embedded in jinja files are not working

2015-12-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15034462#comment-15034462
 ] 

ASF GitHub Bot commented on KAFKA-2915:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/608


> System Tests that use bootstrap.servers embedded in jinja files are not 
> working
> ---
>
> Key: KAFKA-2915
> URL: https://issues.apache.org/jira/browse/KAFKA-2915
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ben Stopford
> Fix For: 0.9.1.0
>
>
> Regression due to changes in the way the tests handle security. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2421) Upgrade LZ4 to version 1.3 to avoid crashing with IBM Java 7

2015-12-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15034476#comment-15034476
 ] 

ASF GitHub Bot commented on KAFKA-2421:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/552


> Upgrade LZ4 to version 1.3 to avoid crashing with IBM Java 7
> 
>
> Key: KAFKA-2421
> URL: https://issues.apache.org/jira/browse/KAFKA-2421
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
> Environment: IBM Java 7
>Reporter: Rajini Sivaram
>Assignee: Grant Henke
> Attachments: KAFKA-2421.patch, KAFKA-2421_2015-08-11_18:54:26.patch, 
> kafka-2421_2015-09-08_11:38:03.patch
>
>
> Upgrade LZ4 to version 1.3 to avoid crashing with IBM Java 7.
> LZ4 version 1.2 crashes with 64-bit IBM Java 7. This has been fixed in LZ4 
> version 1.3 (https://github.com/jpountz/lz4-java/blob/master/CHANGES.md, 
> https://github.com/jpountz/lz4-java/pull/46).
> The unit test org.apache.kafka.common.record.MemoryRecordsTest crashes when 
> run with 64-bit IBM Java7 with the error:
> {quote}
> 023EB900: Native Method 0263CE10 
> (net/jpountz/lz4/LZ4JNI.LZ4_compress_limitedOutput([BII[BII)I)
> 023EB900: Invalid JNI call of function void 
> ReleasePrimitiveArrayCritical(JNIEnv *env, jarray array, void *carray, jint 
> mode): For array FFF7EAB8 parameter carray passed FFF85998, 
> expected to be FFF7EAC0
> 14:08:42.763 0x23eb900j9mm.632*   ** ASSERTION FAILED ** at 
> StandardAccessBarrier.cpp:335: ((false))
> JVMDUMP039I Processing dump event "traceassert", detail "" at 2015/08/11 
> 15:08:42 - please wait.
> {quote}
> Stack trace from javacore:
> 3XMTHREADINFO3   Java callstack:
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4JNI.LZ4_compress_limitedOutput(Native Method)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4JNICompressor.compress(LZ4JNICompressor.java:31)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4Factory.<init>(LZ4Factory.java:163)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4Factory.instance(LZ4Factory.java:46)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4Factory.nativeInstance(LZ4Factory.java:76)
> 5XESTACKTRACE   (entered lock: 
> net/jpountz/lz4/LZ4Factory@0xE02F0BE8, entry count: 1)
> 4XESTACKTRACEat 
> net/jpountz/lz4/LZ4Factory.fastestInstance(LZ4Factory.java:129)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/KafkaLZ4BlockOutputStream.<init>(KafkaLZ4BlockOutputStream.java:72)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/KafkaLZ4BlockOutputStream.<init>(KafkaLZ4BlockOutputStream.java:93)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/KafkaLZ4BlockOutputStream.<init>(KafkaLZ4BlockOutputStream.java:103)
> 4XESTACKTRACEat 
> sun/reflect/NativeConstructorAccessorImpl.newInstance0(Native Method)
> 4XESTACKTRACEat 
> sun/reflect/NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:86)
> 4XESTACKTRACEat 
> sun/reflect/DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:58)
> 4XESTACKTRACEat 
> java/lang/reflect/Constructor.newInstance(Constructor.java:542)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/Compressor.wrapForOutput(Compressor.java:222)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/Compressor.<init>(Compressor.java:72)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/Compressor.<init>(Compressor.java:76)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/MemoryRecords.<init>(MemoryRecords.java:43)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/MemoryRecords.emptyRecords(MemoryRecords.java:51)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/MemoryRecords.emptyRecords(MemoryRecords.java:55)
> 4XESTACKTRACEat 
> org/apache/kafka/common/record/MemoryRecordsTest.testIterator(MemoryRecordsTest.java:42)
> java -version
> java version "1.7.0"
> Java(TM) SE Runtime Environment (build pxa6470_27sr3fp1-20150605_01(SR3 FP1))
> IBM J9 VM (build 2.7, JRE 1.7.0 Linux amd64-64 Compressed References 
> 20150407_243189 (JIT enabled, AOT enabled)
> J9VM - R27_Java727_SR3_20150407_1831_B243189
> JIT  - tr.r13.java_20150406_89182
> GC   - R27_Java727_SR3_20150407_1831_B243189_CMPRSS
> J9CL - 20150407_243189)
> JCL - 20150601_01 based on Oracle 7u79-b14



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2958) Remove duplicate API key mapping functionality

2015-12-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15046186#comment-15046186
 ] 

ASF GitHub Bot commented on KAFKA-2958:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/637

KAFKA-2958: Remove duplicate API key mapping functionality



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka api-keys

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/637.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #637


commit a6a6c3c449ab84cee9178f0997c2c16356d4b391
Author: Grant Henke 
Date:   2015-12-08T01:57:29Z

KAFKA-2958: Remove duplicate API key mapping functionality




> Remove duplicate API key mapping functionality
> --
>
> Key: KAFKA-2958
> URL: https://issues.apache.org/jira/browse/KAFKA-2958
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Kafka common and core both have a class that maps request API keys and names. 
> To prevent errors and issues with consistency we should remove 
> RequestKeys.scala in core in favor of ApiKeys.java in common.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1911) Log deletion on stopping replicas should be async

2015-12-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15045855#comment-15045855
 ] 

ASF GitHub Bot commented on KAFKA-1911:
---

GitHub user MayureshGharat opened a pull request:

https://github.com/apache/kafka/pull/636

KAFKA-1911

Made delete topic on brokers async

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/MayureshGharat/kafka kafka-1911

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/636.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #636


commit 86a432c21eb2b206ffe120a4a4172a087fb109d4
Author: Mayuresh Gharat 
Date:   2015-12-07T22:01:22Z

Made Delete topic on the brokers Async




> Log deletion on stopping replicas should be async
> -
>
> Key: KAFKA-1911
> URL: https://issues.apache.org/jira/browse/KAFKA-1911
> Project: Kafka
>  Issue Type: Bug
>  Components: log, replication
>Reporter: Joel Koshy
>Assignee: Mayuresh Gharat
>  Labels: newbie++, newbiee
>
> If a StopReplicaRequest sets delete=true then we do a file.delete on the file 
> message sets. I was under the impression that this is fast but it does not 
> seem to be the case.
> On a partition reassignment in our cluster the local time for stop replica 
> took nearly 30 seconds.
> {noformat}
> Completed request:Name: StopReplicaRequest; Version: 0; CorrelationId: 467; 
> ClientId: ;DeletePartitions: true; ControllerId: 1212; ControllerEpoch: 
> 53 from 
> client/...:45964;totalTime:29191,requestQueueTime:1,localTime:29190,remoteTime:0,responseQueueTime:0,sendTime:0
> {noformat}
> This ties up one API thread for the duration of the request.
> Specifically in our case, the queue times for other requests also went up and 
> producers to the partition that was just deleted on the old leader took a 
> while to refresh their metadata (see KAFKA-1303) and eventually ran out of 
> retries on some messages leading to data loss.
> I think the log deletion in this case should be fully asynchronous although 
> we need to handle the case when a broker may respond immediately to the 
> stop-replica-request but then go down after deleting only some of the log 
> segments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2804) Create / Update changelog topics upon state store initialization

2015-12-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15045965#comment-15045965
 ] 

ASF GitHub Bot commented on KAFKA-2804:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/579


> Create / Update changelog topics upon state store initialization
> 
>
> Key: KAFKA-2804
> URL: https://issues.apache.org/jira/browse/KAFKA-2804
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>
> When logging-backed state store instances are initialized, we need to check 
> that the corresponding changelog topics have been created with the right 
> number of partitions:
> 1) If the topic does not exist, create it.
> 2) If expected #.partitions < actual #.partitions, delete and re-create the 
> topic.
> 3) If expected #.partitions > actual #.partitions, add partitions.
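The three cases above can be sketched as a small decision function (an illustrative Python sketch; `reconcile_changelog` is a hypothetical name, and `actual_partitions=None` stands for a missing topic):

```python
def reconcile_changelog(expected_partitions, actual_partitions):
    """Return the admin action needed for a state store's changelog topic.

    actual_partitions is None when the topic does not exist yet.
    """
    if actual_partitions is None:
        return "create"                  # case 1: topic missing
    if expected_partitions < actual_partitions:
        return "delete-and-recreate"     # case 2: too many partitions
    if expected_partitions > actual_partitions:
        return "add-partitions"          # case 3: too few partitions
    return "no-op"                       # counts already match
```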



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1997) Refactor Mirror Maker

2015-12-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15046296#comment-15046296
 ] 

ASF GitHub Bot commented on KAFKA-1997:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/638

MINOR: Remove unused DoublyLinkedList

It used to be used by MirrorMaker but its usage was removed in KAFKA-1997.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka remove-dll

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/638.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #638


commit 7fa9407e5971cdb8bd2e4902f94b200f8b6088a6
Author: Grant Henke 
Date:   2015-12-08T03:47:52Z

MINOR: Remove unused DoublyLinkedList

It used to be used by MirrorMaker but its usage was removed in KAFKA-1997.




> Refactor Mirror Maker
> -
>
> Key: KAFKA-1997
> URL: https://issues.apache.org/jira/browse/KAFKA-1997
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-1997.patch, KAFKA-1997.patch, 
> KAFKA-1997_2015-03-03_16:28:46.patch, KAFKA-1997_2015-03-04_15:07:46.patch, 
> KAFKA-1997_2015-03-04_15:42:45.patch, KAFKA-1997_2015-03-05_20:14:58.patch, 
> KAFKA-1997_2015-03-09_18:55:54.patch, KAFKA-1997_2015-03-10_18:31:34.patch, 
> KAFKA-1997_2015-03-11_15:20:18.patch, KAFKA-1997_2015-03-11_19:10:53.patch, 
> KAFKA-1997_2015-03-13_14:43:34.patch, KAFKA-1997_2015-03-17_13:47:01.patch, 
> KAFKA-1997_2015-03-18_12:47:32.patch
>
>
> Refactor mirror maker based on KIP-3



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2642) Run replication tests in ducktape with SSL for clients

2015-12-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15035835#comment-15035835
 ] 

ASF GitHub Bot commented on KAFKA-2642:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/563


> Run replication tests in ducktape with SSL for clients
> --
>
> Key: KAFKA-2642
> URL: https://issues.apache.org/jira/browse/KAFKA-2642
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.1.0
>
>
> Under KAFKA-2581, replication tests were parametrized to run with SSL for 
> interbroker communication, but not for clients. When KAFKA-2603 is committed, 
> the tests should be able to use SSL for clients as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2825) Add controller failover to existing replication tests

2015-12-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15038285#comment-15038285
 ] 

ASF GitHub Bot commented on KAFKA-2825:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/618


> Add controller failover to existing replication tests
> -
>
> Key: KAFKA-2825
> URL: https://issues.apache.org/jira/browse/KAFKA-2825
> Project: Kafka
>  Issue Type: Test
>Reporter: Anna Povzner
>Assignee: Anna Povzner
>
> Extend existing replication tests to include controller failover:
> * clean/hard shutdown
> * clean/hard bounce



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2718) Reuse of temporary directories leading to transient unit test failures

2015-12-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15038298#comment-15038298
 ] 

ASF GitHub Bot commented on KAFKA-2718:
---

Github user rajinisivaram closed the pull request at:

https://github.com/apache/kafka/pull/613


> Reuse of temporary directories leading to transient unit test failures
> --
>
> Key: KAFKA-2718
> URL: https://issues.apache.org/jira/browse/KAFKA-2718
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Affects Versions: 0.9.1.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.0.1
>
>
> Stack traces in some of the transient unit test failures indicate that 
> temporary directories used for Zookeeper are being reused.
> {quote}
> kafka.common.TopicExistsException: Topic "topic" already exists.
>   at 
> kafka.admin.AdminUtils$.createOrUpdateTopicPartitionAssignmentPathInZK(AdminUtils.scala:253)
>   at kafka.admin.AdminUtils$.createTopic(AdminUtils.scala:237)
>   at kafka.utils.TestUtils$.createTopic(TestUtils.scala:231)
>   at kafka.api.BaseConsumerTest.setUp(BaseConsumerTest.scala:63)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> {quote}
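One way to avoid directory reuse is to mint a uniquely named directory per test, similar in spirit to the random file names used for KAFKA-2851; a minimal sketch (hypothetical helper name):

```python
import tempfile

def fresh_zookeeper_dir(prefix="zk-snapshot-"):
    # tempfile.mkdtemp creates a brand-new, uniquely named directory on every
    # call, so successive or parallel test runs can never observe stale state
    return tempfile.mkdtemp(prefix=prefix)
```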



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2964) Split Security Rolling Upgrade Test By Client and Broker Protocols

2015-12-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15052509#comment-15052509
 ] 

ASF GitHub Bot commented on KAFKA-2964:
---

GitHub user benstopford opened a pull request:

https://github.com/apache/kafka/pull/667

KAFKA-2964: Split Security Rolling Upgrade Test by Client and Broker 
Protocols

The core of this test is to ensure we evaluate enabling security in a 
running cluster where we have different broker and client protocols. 
Also in this PR are some improvements to the validation process in 
produce_consume_validate.py:
- Fail fast if the producer or consumer stops running. 
- If messages go missing, check the data files to see whether the cause was 
data loss or the consumer missing messages. 
- Remove unnecessary sleeps which hide problems with consumer disconnection. 
- Make it possible for the ConsoleConsumer to optionally log both what it 
consumed and when it consumed it. 
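The data-file check described above can be sketched as set arithmetic (illustrative Python; `classify_missing` is a hypothetical name, not the actual produce_consume_validate.py code):

```python
def classify_missing(acked, consumed, produced):
    """Split missing messages into broker-side data loss vs consumer misses.

    acked:    messages the producer received acks for
    consumed: messages the consumer reported
    produced: messages found in the broker's data files
    """
    missing = set(acked) - set(consumed)
    data_loss = sorted(missing - set(produced))      # acked but absent from logs
    consumer_miss = sorted(missing & set(produced))  # in logs, but not consumed
    return data_loss, consumer_miss
```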

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/benstopford/kafka security-rolling_upgrade-additions

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/667.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #667


commit c0ae7f29b8381d83870bb56d4312a4897c77944e
Author: Ben Stopford 
Date:   2015-12-08T21:27:01Z

KAFKA-2964: Parameterise security rolling upgrade so it runs independently 
for broker-broker and broker-client protocols.

commit 03740f9fd21c05eb8eb3492ac3ce35efa1f9428a
Author: Ben Stopford 
Date:   2015-12-10T19:38:44Z

KAFKA-2964: Check for data loss if messages go missing

commit 9640d3efaeb0a0ef549b3a8c5a758bfde60ace30
Author: Ben Stopford 
Date:   2015-12-10T22:34:31Z

KAFKA-2964: Refactored produce_consume_validate

commit bd53895892d7938f17975298f1380739a2f45504
Author: Ben Stopford 
Date:   2015-12-11T09:21:12Z

KAFKA-2964: clean up




> Split Security Rolling Upgrade Test By Client and Broker Protocols
> --
>
> Key: KAFKA-2964
> URL: https://issues.apache.org/jira/browse/KAFKA-2964
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ben Stopford
>Priority: Minor
>
> We should ensure the security rolling upgrade test runs with different 
> client-broker and broker-broker protocols (previously it just ran with 
> protocol pairs) 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2981) Fix javadoc in KafkaConsumer

2015-12-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15052585#comment-15052585
 ] 

ASF GitHub Bot commented on KAFKA-2981:
---

GitHub user vesense opened a pull request:

https://github.com/apache/kafka/pull/668

KAFKA-2981: Fix javadoc in KafkaConsumer

https://issues.apache.org/jira/browse/KAFKA-2981

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vesense/kafka patch-2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/668.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #668


commit ab23830a67d3e02c4ba0cada87dc2e98e09ceb44
Author: Xin Wang 
Date:   2015-12-11T10:11:09Z

fix javadoc in KafkaConsumer




> Fix javadoc in KafkaConsumer
> 
>
> Key: KAFKA-2981
> URL: https://issues.apache.org/jira/browse/KAFKA-2981
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Xin Wang
>Priority: Minor
>
> Erroneous javadoc:
> {code}consumer.subscribe("topic");{code}
> Fix:
> {code}consumer.subscribe(Arrays.asList("topic"));{code}
> Since the KafkaConsumer.subscribe() method takes a List as its input type, 
> passing the string "topic" results in an error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2928) system tests: failures in version-related sanity checks

2015-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049839#comment-15049839
 ] 

ASF GitHub Bot commented on KAFKA-2928:
---

GitHub user granders opened a pull request:

https://github.com/apache/kafka/pull/656

KAFKA-2928: system test: fix version sanity checks

Fixed version sanity checks by updating the kafkatest version to match the 
kafka version

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka KAFKA-2928-fix-version-sanity-checks

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/656.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #656


commit 1196d5aefa32e338881e7b3b50682e082733c625
Author: Geoff Anderson 
Date:   2015-12-10T01:43:04Z

Fixed version sanity checks by updating the kafkatest version to match the 
kafka version




> system tests: failures in version-related sanity checks
> ---
>
> Key: KAFKA-2928
> URL: https://issues.apache.org/jira/browse/KAFKA-2928
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>
> There have been a few consecutive failures of version-related sanity checks 
> in nightly system test runs:
> kafkatest.sanity_checks.test_verifiable_producer
> kafkatest.sanity_checks.test_kafka_version
> assert is_version(...) is failing
> utils.util.is_version is a fairly rough heuristic, so most likely this needs 
> to be updated.
> E.g., see
> http://testing.confluent.io/kafka/2015-12-01--001/
> (if this is broken, use 
> http://testing.confluent.io/kafka/2015-12-01--001.tar.gz)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2927) System tests: reduce storage footprint of collected logs

2015-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049845#comment-15049845
 ] 

ASF GitHub Bot commented on KAFKA-2927:
---

GitHub user granders opened a pull request:

https://github.com/apache/kafka/pull/657

KAFKA-2927: reduce system test storage footprint

Split kafka logging into two levels - DEBUG and INFO, and do not collect 
DEBUG by default.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka KAFKA-2927-reduce-log-footprint

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/657.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #657


commit 0dc3a1a367083f57f3cb6d8e1cd82571598d7108
Author: Geoff Anderson 
Date:   2015-12-10T01:09:59Z

Split kafka logging into two levels - DEBUG and INFO, and do not collect 
DEBUG by default




> System tests: reduce storage footprint of collected logs
> 
>
> Key: KAFKA-2927
> URL: https://issues.apache.org/jira/browse/KAFKA-2927
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>
> Looking at recent nightly test runs (testing.confluent.io/kafka), the storage 
> requirements for log output from the various services have increased 
> significantly, up to 7-10G for a single test run, up from hundreds of MB.
> Current breakdown:
> 23M   Benchmark
> 3.2M  ClientCompatibilityTest
> 613M  ConnectDistributedTest
> 1.1M  ConnectRestApiTest
> 1.5M  ConnectStandaloneFileTest
> 2.0M  ConsoleConsumerTest
> 440K  KafkaVersionTest
> 744K  Log4jAppenderTest
> 49M   QuotaTest
> 3.0G  ReplicationTest
> 1.2G  TestMirrorMakerService
> 185M  TestUpgrade
> 372K  TestVerifiableProducer
> 2.3G  VerifiableConsumerTest
> The biggest contributors in these test suites:
> ReplicationTest:
> verifiable_producer.log (currently TRACE level)
> VerifiableConsumerTest:
> kafka server.log
> TestMirrorMakerService:
> verifiable_producer.log
> ConnectDistributedTest:
> kafka server.log
> The worst offenders are therefore verifiable_producer.log, which logs at 
> TRACE level, and kafka server.log, which logs at DEBUG level.
> One solution is to:
> 1) Update the log4j configs to log separately to both an INFO level file, and 
> another file for DEBUG at least for the worst offenders.
> 2) Don't collect these DEBUG (and below) logs by default; only mark for 
> collection during failure
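A log4j configuration along the lines of (1) might look like this (a sketch only; appender names and file paths are illustrative, not the actual test configs):

```properties
# Route everything to two appenders; each appender filters by its own threshold
log4j.rootLogger=DEBUG, infoFile, debugFile

# INFO and above to the file that is always collected
log4j.appender.infoFile=org.apache.log4j.FileAppender
log4j.appender.infoFile.File=logs/server-info.log
log4j.appender.infoFile.Threshold=INFO
log4j.appender.infoFile.layout=org.apache.log4j.PatternLayout
log4j.appender.infoFile.layout.ConversionPattern=[%d] %p %m (%c)%n

# Full DEBUG output to a separate file, collected only on failure
log4j.appender.debugFile=org.apache.log4j.FileAppender
log4j.appender.debugFile.File=logs/server-debug.log
log4j.appender.debugFile.Threshold=DEBUG
log4j.appender.debugFile.layout=org.apache.log4j.PatternLayout
log4j.appender.debugFile.layout.ConversionPattern=[%d] %p %m (%c)%n
```

The test harness can then grab only server-info.log by default and fetch server-debug.log when a test fails.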



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2896) System test for partition re-assignment

2015-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049793#comment-15049793
 ] 

ASF GitHub Bot commented on KAFKA-2896:
---

GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/655

KAFKA-2896 Added system test for partition re-assignment

Partition re-assignment tests with and without broker failure.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka kafka_2896

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/655.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #655


commit bddca8055a70ccc4385e7898fd6ff2eb38db
Author: Anna Povzner 
Date:   2015-12-10T01:06:11Z

KAFKA-2896 Added system test for partition re-assignment




> System test for partition re-assignment
> ---
>
> Key: KAFKA-2896
> URL: https://issues.apache.org/jira/browse/KAFKA-2896
> Project: Kafka
>  Issue Type: Task
>Reporter: Gwen Shapira
>Assignee: Anna Povzner
>
> Lots of users depend on the partition re-assignment tool to manage their 
> cluster. It would be nice to have a simple system test that creates a topic 
> with a few partitions and a few replicas, reassigns everything, and validates 
> the ISR afterwards. 
> Just to make sure we are not breaking anything, especially since we have 
> plans to improve (read: modify) this area.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2578) Client Metadata internal state should be synchronized

2015-12-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15051375#comment-15051375
 ] 

ASF GitHub Bot commented on KAFKA-2578:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/659


> Client Metadata internal state should be synchronized
> -
>
> Key: KAFKA-2578
> URL: https://issues.apache.org/jira/browse/KAFKA-2578
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Edward Ribeiro
>Priority: Trivial
>
> Some recent patches introduced a couple new fields in o.a.k.clients.Metadata: 
> 'listeners' and 'needMetadataForAllTopics'. Accessor methods for these fields 
> should be synchronized like the rest of the internal Metadata state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2980) ZookeeperConsumerConnector may enter deadlock if a rebalance occurs during a stream creation.

2015-12-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15051512#comment-15051512
 ] 

ASF GitHub Bot commented on KAFKA-2980:
---

GitHub user becketqin reopened a pull request:

https://github.com/apache/kafka/pull/660

KAFKA-2980 Fix deadlock when ZookeeperConsumerConnector create messag…



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/becketqin/kafka KAFKA-2980

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/660.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #660


commit 6ad40206f354512b1f2db1e3784754ea29415ce7
Author: Jiangjie Qin 
Date:   2015-12-10T19:08:15Z

KAFKA-2980 Fix deadlock when ZookeeperConsumerConnector creates message 
streams.




> ZookeeperConsumerConnector may enter deadlock if a rebalance occurs during a 
> stream creation.
> -
>
> Key: KAFKA-2980
> URL: https://issues.apache.org/jira/browse/KAFKA-2980
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>
> The following sequence caused problems:
> 1. Multiple ZookeeperConsumerConnector in the same group start at the same 
> time.
> 2. The user consumer thread called createMessageStreamsByFilter()
> 3. Right before the user consumer thread enters syncedRebalance(), a 
> rebalance was triggered by another consumer joining the group.
> 4. Because the watcher executor has been up and running at this point, the 
> executor watcher will start to rebalance. Now both the user consumer thread 
> and the executor watcher are trying to rebalance.
> 5. The executor watcher wins this time. It finishes the rebalance, so the 
> fetchers started to run.
> 6. After that the user consumer thread tries to rebalance again, but it 
> blocks when trying to stop the fetchers, since the fetcher threads are 
> blocked putting data chunks into the data chunk queue.
> 7. Because there is no thread taking messages out of the data chunk queue, 
> the fetcher threads cannot make progress, and neither can the user consumer 
> thread, so we have a deadlock.
> The current code works if there is no fetcher thread running when 
> createMessageStreams/createMessageStreamsByFilter is called. The simple fix 
> is to let those two methods acquire the rebalance lock.
> Although this is a fix to the old consumer, it is quite small and important 
> for people who are still using it, so I think it is still worth doing.
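Steps 6 and 7 reduce to a bounded hand-off queue with no consumer draining it; a minimal Python illustration (not the actual consumer code; the timeout stands in for an indefinite block):

```python
import queue

# The fetcher hands data chunks to user threads through a bounded queue
chunk_queue = queue.Queue(maxsize=1)
chunk_queue.put("chunk-0")  # fetcher fills the only slot

def fetcher_put(chunk, timeout=0.1):
    # With no thread draining the queue (step 7), the fetcher's put can never
    # complete; the short timeout here substitutes for "blocks forever"
    try:
        chunk_queue.put(chunk, timeout=timeout)
        return True
    except queue.Full:
        return False
```

Taking the rebalance lock in createMessageStreams/createMessageStreamsByFilter prevents the fetchers from ever reaching this state mid-setup.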



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2980) ZookeeperConsumerConnector may enter deadlock if a rebalance occurs during a stream creation.

2015-12-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15051511#comment-15051511
 ] 

ASF GitHub Bot commented on KAFKA-2980:
---

Github user becketqin closed the pull request at:

https://github.com/apache/kafka/pull/660


> ZookeeperConsumerConnector may enter deadlock if a rebalance occurs during a 
> stream creation.
> -
>
> Key: KAFKA-2980
> URL: https://issues.apache.org/jira/browse/KAFKA-2980
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>
> The following sequence caused problems:
> 1. Multiple ZookeeperConsumerConnector instances in the same group start at 
> the same time.
> 2. The user consumer thread calls createMessageStreamsByFilter().
> 3. Right before the user consumer thread enters syncedRebalance(), a 
> rebalance is triggered by another consumer joining the group.
> 4. Because the watcher executor is already up and running at this point, it 
> starts to rebalance. Now both the user consumer thread and the watcher 
> executor are trying to rebalance.
> 5. The watcher executor wins this time. It finishes the rebalance, so the 
> fetchers start to run.
> 6. After that the user consumer thread tries to rebalance again, but it 
> blocks when trying to stop the fetchers, because the fetcher threads are 
> blocked on putting data chunks into the data chunk queue.
> 7. Since there is no thread taking messages out of the data chunk queue, 
> the fetcher threads cannot make progress, and neither can the user consumer 
> thread. So we have a deadlock here.
> The current code works if there is no fetcher thread running when 
> createMessageStreams/createMessageStreamsByFilter is called. The simple fix 
> is to let those two methods acquire the rebalance lock.
> Although this is a fix to the old consumer, the change is quite small and 
> important for people who are still using the old consumer, so I think it is 
> still worth doing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2507) Replace ControlledShutdown{Request,Response} with org.apache.kafka.common.requests equivalent

2015-12-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15051491#comment-15051491
 ] 

ASF GitHub Bot commented on KAFKA-2507:
---

Github user granthenke closed the pull request at:

https://github.com/apache/kafka/pull/640


> Replace ControlledShutdown{Request,Response} with 
> org.apache.kafka.common.requests equivalent
> -
>
> Key: KAFKA-2507
> URL: https://issues.apache.org/jira/browse/KAFKA-2507
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Grant Henke
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2875) Class path contains multiple SLF4J bindings warnings when using scripts under bin

2015-12-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15052845#comment-15052845
 ] 

ASF GitHub Bot commented on KAFKA-2875:
---

Github user ZoneMayor closed the pull request at:

https://github.com/apache/kafka/pull/595


> Class path contains multiple SLF4J bindings warnings when using scripts under 
> bin
> -
>
> Key: KAFKA-2875
> URL: https://issues.apache.org/jira/browse/KAFKA-2875
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: jin xing
>Priority: Minor
>  Labels: patch
> Fix For: 0.9.0.1
>
>
> This adds a lot of noise when running the scripts, see example when running 
> kafka-console-producer.sh:
> {code}
> ~/D/s/kafka-0.9.0.0-src ❯❯❯ ./bin/kafka-console-producer.sh --topic topic 
> --broker-list localhost:9092 ⏎
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/tools/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/api/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/runtime/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/file/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/json/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2974) `==` is used incorrectly in a few places in Java code

2015-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15050039#comment-15050039
 ] 

ASF GitHub Bot commented on KAFKA-2974:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/652


> `==` is used incorrectly in a few places in Java code
> -
>
> Key: KAFKA-2974
> URL: https://issues.apache.org/jira/browse/KAFKA-2974
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>
> Unlike Scala, `==` is reference equality in Java and one normally wants to 
> use `equals`. We should fix the cases where `==` is used incorrectly.
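As a minimal standalone illustration of the pitfall (this is a demo, not code from the patch): two equal-valued objects compare `false` under `==` but `true` under `equals`.

```java
public class EqualsDemo {
    public static void main(String[] args) {
        // Force two distinct String objects with the same contents.
        String a = new String("broker-1");
        String b = new String("broker-1");

        System.out.println(a == b);      // reference comparison: false
        System.out.println(a.equals(b)); // value comparison: true
    }
}
```

The same trap applies to boxed primitives (`Integer`, `Long`) outside the small cached range, which is why such comparisons in Java code should default to `equals`.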



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2733) Distinguish metric names inside the sensor registry

2015-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15050151#comment-15050151
 ] 

ASF GitHub Bot commented on KAFKA-2733:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/643


> Distinguish metric names inside the sensor registry
> ---
>
> Key: KAFKA-2733
> URL: https://issues.apache.org/jira/browse/KAFKA-2733
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.9.1.0
>
>
> Since stream tasks can share the same StreamingMetrics object, and the 
> MetricName is distinguishable only by the group name (same for the same type 
> of states, and for other streaming metrics) and the tags (currently only the 
> client-ids of the StreamThread), when we have multiple tasks within a single 
> stream thread, it could lead to IllegalStateException upon trying to register 
> the same metric from those tasks.
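The collision can be sketched as follows. The registry below is a stand-in for Kafka's Metrics/MetricName machinery, not the real API: registering an indistinguishable (group, name, tags) triple twice throws, and adding a per-task tag is one way to keep the names unique.

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for a metric registry keyed by (group, name, tags).
public class MetricRegistrySketch {
    private final Map<String, Double> metrics = new HashMap<>();

    public void register(String group, String name, String tags) {
        String key = group + "/" + name + "/" + tags;
        if (metrics.containsKey(key)) {
            // This is the IllegalStateException two tasks in one stream
            // thread hit when their metric names are indistinguishable.
            throw new IllegalStateException("Metric already exists: " + key);
        }
        metrics.put(key, 0.0);
    }
}
```

With only `client-id=t1` as the tag, two tasks in the same thread collide; a hypothetical extra tag such as `task-id=0_0` vs `task-id=0_1` distinguishes them.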



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2975) The networkClient should request a metadata update after it gets an error in the handleResponse()

2015-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049939#comment-15049939
 ] 

ASF GitHub Bot commented on KAFKA-2975:
---

GitHub user MayureshGharat opened a pull request:

https://github.com/apache/kafka/pull/658

KAFKA-2975

The networkClient should request a metadata update after it gets an error 
in the handleResponse().

Currently in the data pipeline:
1) Let's say the Mirror Maker requestTimeout is set to 2 min and 
metadataExpiry is set to 5 min.
2) We delete a topic; the Mirror Maker gets UNKNOWN_TOPIC_PARTITION and 
tries to refresh its metadata.
3) It gets LeaderNotAvailableException, maybe because the topic is not 
created yet.
4) Now its metadata does not have any information about that topic.
5) It will wait for 5 min to do the next refresh.
6) In the meantime the batches sitting in the accumulator will expire and 
the mirror makers die to avoid data loss.

To overcome this we need to refresh the metadata after 3).

There is an alternative solution: set metadataExpiry to be less than 
requestTimeout, but this would mean we make more metadata requests over 
the wire in the normal scenario as well.
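The proposed behavior can be sketched as follows. The classes and names here are stand-ins, not the real org.apache.kafka.clients API: on any error in a response, flag the metadata as stale immediately so the next poll refreshes it, instead of waiting out the metadata expiry while batches in the accumulator expire.

```java
// Hypothetical sketch: an error response (e.g. LEADER_NOT_AVAILABLE)
// requests a metadata update right away rather than after metadataExpiry.
public class StaleMetadataSketch {
    public static final short NONE = 0;
    public static final short LEADER_NOT_AVAILABLE = 5; // real protocol code

    private boolean updateRequested = false;

    public void handleResponse(short errorCode) {
        if (errorCode != NONE) {
            // Refresh on the next poll, not after the 5-minute expiry.
            requestUpdate();
        }
    }

    public void requestUpdate() { updateRequested = true; }
    public boolean updateRequested() { return updateRequested; }
}
```

This keeps the common case cheap (no extra metadata requests while healthy) while bounding how long the client runs on stale metadata after an error, which is the advantage over shrinking metadataExpiry globally.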


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/MayureshGharat/kafka kafka-2975

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/658.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #658


commit 8c7534b5ff89960e26db22a27e183d10002aeb01
Author: Mayuresh Gharat 
Date:   2015-12-10T03:04:37Z

The networkClient should request a metadata update after it gets an error 
in response




> The networkClient should request a metadata update after it gets an error in 
> the handleResponse()
> -
>
> Key: KAFKA-2975
> URL: https://issues.apache.org/jira/browse/KAFKA-2975
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Mayuresh Gharat
>Assignee: Mayuresh Gharat
>
> Currently in the data pipeline:
> 1) Let's say the Mirror Maker requestTimeout is set to 2 min and 
> metadataExpiry is set to 5 min.
> 2) We delete a topic; the Mirror Maker gets UNKNOWN_TOPIC_PARTITION and 
> tries to refresh its metadata.
> 3) It gets LeaderNotAvailableException, maybe because the topic is not 
> created yet.
> 4) Now its metadata does not have any information about that topic.
> 5) It will wait for 5 min to do the next refresh.
> 6) In the meantime the batches sitting in the accumulator will expire and 
> the mirror makers die to avoid data loss.
> To overcome this we need to refresh the metadata after 3).
> There is an alternative solution: set metadataExpiry to be less than 
> requestTimeout, but this would mean we make more metadata requests over 
> the wire in the normal scenario as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2070) Replace OffsetRequest/response with ListOffsetRequest/response from org.apache.kafka.common.requests

2015-12-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15051583#comment-15051583
 ] 

ASF GitHub Bot commented on KAFKA-2070:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/663

KAFKA-2070: Replace Offset{Request,Response} with o.a.k.c requests 
equivalent

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka offset-list

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/663.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #663


commit 26cd605239950ed209461b87b23ed58a3e749987
Author: Grant Henke 
Date:   2015-12-10T20:00:40Z

KAFKA-2070: Replace Offset{Request,Response} with o.a.k.c requests 
equivalent




> Replace OffsetRequest/response with ListOffsetRequest/response from 
> org.apache.kafka.common.requests
> 
>
> Key: KAFKA-2070
> URL: https://issues.apache.org/jira/browse/KAFKA-2070
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>Assignee: Grant Henke
>
> Replace OffsetRequest/response with ListOffsetRequest/response from 
> org.apache.kafka.common.requests



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2926) [MirrorMaker] InternalRebalancer calls wrong method of external rebalancer

2015-12-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15051655#comment-15051655
 ] 

ASF GitHub Bot commented on KAFKA-2926:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/611


> [MirrorMaker] InternalRebalancer calls wrong method of external rebalancer
> --
>
> Key: KAFKA-2926
> URL: https://issues.apache.org/jira/browse/KAFKA-2926
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
>
> MirrorMaker has an internal rebalance listener that will invoke an external 
> (pluggable) listener if such exists. Looks like the internal listener calls 
> the wrong method of the external listener.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2875) Class path contains multiple SLF4J bindings warnings when using scripts under bin

2015-12-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15052864#comment-15052864
 ] 

ASF GitHub Bot commented on KAFKA-2875:
---

GitHub user ZoneMayor opened a pull request:

https://github.com/apache/kafka/pull/669

KAFKA-2875:

Hi @ijuma 
I reopened this PR;
I made the slf4j-log4j12 dependency version 1.7.13; this way, the 1.7.6 
transitive dependency version will be overridden, I think;
From my point of view, a proper way to fix the multiple bindings of slf4j is 
to specify the classpath inside the parent scripts of kafka-run-class.sh, 
like kafka-topics.sh and kafka-console-consumer.sh;
As a result, kafka-run-class.sh will be told which command to run and where 
to find the jars containing the class for the command;
Am I right?

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka trunk-KAFKA-2875

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/669.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #669


commit 95374147a28208d4850f6e73f714bf418935fc2d
Author: ZoneMayor 
Date:   2015-11-27T03:49:34Z

Merge pull request #1 from apache/trunk

merge

commit cec5b48b651a7efd3900cfa3c1fd0ab1eeeaa3ec
Author: ZoneMayor 
Date:   2015-12-01T10:44:02Z

Merge pull request #2 from apache/trunk

2015-12-1

commit a119d547bf1741625ce0627073c7909992a20f15
Author: ZoneMayor 
Date:   2015-12-04T13:42:27Z

Merge pull request #3 from apache/trunk

2015-12-04#KAFKA-2893

commit b767a8dff85fc71c75d4cf5178c3f6f03ff81bfc
Author: ZoneMayor 
Date:   2015-12-09T10:42:30Z

Merge pull request #5 from apache/trunk

2015-12-9

commit 0070c2d71d06ee8baa1cddb3451cd5af6c6b1d4a
Author: ZoneMayor 
Date:   2015-12-11T14:50:30Z

Merge pull request #8 from apache/trunk

2015-12-11

commit fe1f7fdb73c75dea5247bc5c4c9e78fdad6fea37
Author: jinxing 
Date:   2015-12-11T14:52:24Z

KAFKA-2875: Class path contains multiple SLF4J bindings warnings when using 
scripts under bin

commit 5047602438c2e6d66514ebfed07002404cc01578
Author: jinxing 
Date:   2015-12-11T14:54:43Z

KAFKA-2875: WIP




> Class path contains multiple SLF4J bindings warnings when using scripts under 
> bin
> -
>
> Key: KAFKA-2875
> URL: https://issues.apache.org/jira/browse/KAFKA-2875
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: jin xing
>Priority: Minor
>  Labels: patch
> Fix For: 0.9.0.1
>
>
> This adds a lot of noise when running the scripts, see example when running 
> kafka-console-producer.sh:
> {code}
> ~/D/s/kafka-0.9.0.0-src ❯❯❯ ./bin/kafka-console-producer.sh --topic topic 
> --broker-list localhost:9092 ⏎
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/tools/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/api/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/runtime/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/file/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/json/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> {code}



--
This message was sent by Atlassian JIRA

[jira] [Commented] (KAFKA-2837) FAILING TEST: kafka.api.ProducerBounceTest > testBrokerFailure

2015-12-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15052873#comment-15052873
 ] 

ASF GitHub Bot commented on KAFKA-2837:
---

GitHub user ZoneMayor reopened a pull request:

https://github.com/apache/kafka/pull/648

KAFKA-2837: fix transient failure of kafka.api.ProducerBounceTest > 
testBrokerFailure

I can reproduce this transient failure; it seldom happens;
The code is like below:
// rolling bounce brokers
for (i <- 0 until numServers) {
  for (server <- servers) {
    server.shutdown()
    server.awaitShutdown()
    server.startup()
    Thread.sleep(2000)
  }

  // Make sure the producer does not see any exception
  // in returned metadata due to broker failures
  assertTrue(scheduler.failed == false)

  // Make sure the leader still exists after bouncing brokers
  (0 until numPartitions).foreach(partition =>
    TestUtils.waitUntilLeaderIsElectedOrChanged(zkUtils, topic1, partition))
}
Brokers keep doing rolling restarts, and the producer keeps sending messages;
In every loop, it waits for the election of the partition leader;
But if the election is slow, more messages will be buffered in the 
RecordAccumulator's BufferPool;
The limit for the buffer is set to be 3;
TimeoutException("Failed to allocate memory within the configured max 
blocking time") will show up when out of memory;
Since for every restart of the broker it sleeps for 2000 ms, this 
transient failure seldom happens;
But if I reduce the sleeping period, there is a bigger chance the failure 
happens; 
for example, if the broker with the role of controller suffers a restart, it 
will take time to elect the controller first, then elect the leader, which 
will lead to more messages blocked in the KafkaProducer's 
RecordAccumulator BufferPool;
In this fix, I just enlarge the producer's buffer size to be 1MB;
@guozhangwang , Could you give some comments?
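The exhaustion mechanism described above can be sketched as follows. This is a minimal stand-in, not the real o.a.k.clients.producer.internals.BufferPool: a bounded pool of chunks where an allocation that cannot be satisfied within the max blocking time fails with the same message the test observes.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Minimal sketch: while leader election is slow, sends back up, the
// bounded pool runs dry, and allocation times out.
public class BufferPoolSketch {
    private final Semaphore free;
    private final long maxBlockMs;

    public BufferPoolSketch(int totalChunks, long maxBlockMs) {
        this.free = new Semaphore(totalChunks);
        this.maxBlockMs = maxBlockMs;
    }

    public void allocate() throws Exception {
        if (!free.tryAcquire(maxBlockMs, TimeUnit.MILLISECONDS)) {
            throw new Exception(
                "Failed to allocate memory within the configured max blocking time");
        }
    }

    public void deallocate() {
        free.release();
    }
}
```

Enlarging the pool, as the PR does by bumping the producer's buffer to 1MB, gives the sender more headroom while controller and leader elections complete, which is why the transient failure disappears.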

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka trunk-KAFKA-2837

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/648.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #648


commit 95374147a28208d4850f6e73f714bf418935fc2d
Author: ZoneMayor 
Date:   2015-11-27T03:49:34Z

Merge pull request #1 from apache/trunk

merge

commit cec5b48b651a7efd3900cfa3c1fd0ab1eeeaa3ec
Author: ZoneMayor 
Date:   2015-12-01T10:44:02Z

Merge pull request #2 from apache/trunk

2015-12-1

commit a119d547bf1741625ce0627073c7909992a20f15
Author: ZoneMayor 
Date:   2015-12-04T13:42:27Z

Merge pull request #3 from apache/trunk

2015-12-04#KAFKA-2893

commit b767a8dff85fc71c75d4cf5178c3f6f03ff81bfc
Author: ZoneMayor 
Date:   2015-12-09T10:42:30Z

Merge pull request #5 from apache/trunk

2015-12-9

commit cd5e6f4700a4387f9383b84aca0ee9c4639b1033
Author: jinxing 
Date:   2015-12-09T13:49:07Z

KAFKA-2837: fix transient failure kafka.api.ProducerBounceTest > 
testBrokerFailure

commit 8ded9104a04861f789a7a990c2ddd4fc38a899cd
Author: ZoneMayor 
Date:   2015-12-10T04:47:06Z

Merge pull request #6 from apache/trunk

2015-12-10

commit 2bcf010c73923bb24bbd9cece7e39983b2bdce0c
Author: jinxing 
Date:   2015-12-10T04:47:39Z

KAFKA-2837: WIP

commit dae4a3cc0b564bb25121d54e65b5ad363c3e866d
Author: jinxing 
Date:   2015-12-10T04:48:21Z

Merge branch 'trunk-KAFKA-2837' of https://github.com/ZoneMayor/kafka into 
trunk-KAFKA-2837

commit 7118e11813e445bca3eab65a23028e76138b136a
Author: jinxing 
Date:   2015-12-10T04:51:43Z

KAFKA-2837: WIP

commit 310dd6b34547b52aad21a35dcf631bda3e15ab64
Author: jinxing 
Date:   2015-12-11T03:43:32Z

KAFKA-2837: WIP




> FAILING TEST: kafka.api.ProducerBounceTest > testBrokerFailure 
> ---
>
> Key: KAFKA-2837
> URL: https://issues.apache.org/jira/browse/KAFKA-2837
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Gwen Shapira
>Assignee: jin xing
>  Labels: newbie
> Fix For: 0.9.0.1, 0.9.1.0
>
>
> {code}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> kafka.api.ProducerBounceTest.testBrokerFailure(ProducerBounceTest.scala:117)
>   

[jira] [Commented] (KAFKA-2837) FAILING TEST: kafka.api.ProducerBounceTest > testBrokerFailure

2015-12-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15052872#comment-15052872
 ] 

ASF GitHub Bot commented on KAFKA-2837:
---

Github user ZoneMayor closed the pull request at:

https://github.com/apache/kafka/pull/648


> FAILING TEST: kafka.api.ProducerBounceTest > testBrokerFailure 
> ---
>
> Key: KAFKA-2837
> URL: https://issues.apache.org/jira/browse/KAFKA-2837
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Gwen Shapira
>Assignee: jin xing
>  Labels: newbie
> Fix For: 0.9.0.1, 0.9.1.0
>
>
> {code}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> kafka.api.ProducerBounceTest.testBrokerFailure(ProducerBounceTest.scala:117)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)

[jira] [Commented] (KAFKA-2929) Migrate server side error mapping functionality

2015-12-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060416#comment-15060416
 ] 

ASF GitHub Bot commented on KAFKA-2929:
---

GitHub user granthenke reopened a pull request:

https://github.com/apache/kafka/pull/616

KAFKA-2929: Migrate duplicate error mapping functionality

Deprecates ErrorMapping.scala in core in favor of Errors.java in common. 
Duplicated exceptions in core are deprecated as well, to ensure the mapping 
is correct.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka error-mapping

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/616.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #616


commit 631f38af04f2f944d9a31506ad6290603cc4641e
Author: Grant Henke 
Date:   2015-12-16T17:55:33Z

KAFKA-2929: Migrate duplicate error mapping functionality




> Migrate server side error mapping functionality
> ---
>
> Key: KAFKA-2929
> URL: https://issues.apache.org/jira/browse/KAFKA-2929
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Kafka common and core both have a class that maps error codes and exceptions. 
> To prevent errors and consistency issues we should migrate from 
> ErrorMapping.scala in core to Errors.java in common.
> When the old clients are removed, ErrorMapping.scala and the old exceptions 
> should be removed.
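The single-source-of-truth pattern that Errors.java embodies can be sketched as follows. The enum here is a stand-in (though the two non-zero codes match the real Kafka protocol): each error code maps to exactly one exception, so the server and client sides cannot drift apart the way two parallel mapping classes can.

```java
// Minimal sketch of an Errors-style enum: one canonical code-to-exception
// mapping shared by everything that needs it.
public enum ErrorsSketch {
    NONE((short) 0, null),
    OFFSET_OUT_OF_RANGE((short) 1,
        new RuntimeException("offset out of range")),
    UNKNOWN_TOPIC_OR_PARTITION((short) 3,
        new RuntimeException("unknown topic or partition"));

    private final short code;
    private final RuntimeException exception;

    ErrorsSketch(short code, RuntimeException exception) {
        this.code = code;
        this.exception = exception;
    }

    public short code() { return code; }

    // Decode a wire error code back to its canonical constant.
    public static ErrorsSketch forCode(short code) {
        for (ErrorsSketch e : values()) {
            if (e.code == code) return e;
        }
        throw new IllegalArgumentException("Unknown error code: " + code);
    }
}
```

Deprecating ErrorMapping.scala then reduces the migration to replacing lookups in core with calls like `ErrorsSketch.forCode(code)` against the one shared table.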



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2929) Migrate server side error mapping functionality

2015-12-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060415#comment-15060415
 ] 

ASF GitHub Bot commented on KAFKA-2929:
---

Github user granthenke closed the pull request at:

https://github.com/apache/kafka/pull/616


> Migrate server side error mapping functionality
> ---
>
> Key: KAFKA-2929
> URL: https://issues.apache.org/jira/browse/KAFKA-2929
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Kafka common and core both have a class that maps error codes and exceptions. 
> To prevent errors and consistency issues we should migrate from 
> ErrorMapping.scala in core to Errors.java in common.
> When the old clients are removed, ErrorMapping.scala and the old exceptions 
> should be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2422) Allow copycat connector plugins to be aliased to simpler names

2015-12-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061450#comment-15061450
 ] 

ASF GitHub Bot commented on KAFKA-2422:
---

GitHub user gwenshap opened a pull request:

https://github.com/apache/kafka/pull/687

KAFKA-2422: Allow copycat connector plugins to be aliased to simpler names

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gwenshap/kafka KAFKA-2422

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/687.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #687


commit b00939902c58b98cbcb187c754e2bd1dc6463c14
Author: Gwen Shapira 
Date:   2015-12-17T04:32:09Z

KAFKA-2422: Allow copycat connector plugins to be aliased to simpler names




> Allow copycat connector plugins to be aliased to simpler names
> --
>
> Key: KAFKA-2422
> URL: https://issues.apache.org/jira/browse/KAFKA-2422
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Affects Versions: 0.9.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Minor
>
> Configurations of connectors can get quite verbose when you have to specify 
> the full class name, e.g. 
> connector.class=org.apache.kafka.copycat.file.FileStreamSinkConnector
> It would be nice to allow connector classes to provide shorter aliases, e.g. 
> something like "file-sink", to make this config less verbose. Flume does 
> this, so we can use it as an example.
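Alias resolution as described can be sketched as follows. This is a hypothetical illustration; the actual KAFKA-2422 patch may register and resolve aliases differently, and the alias string "file-sink" is the example from the issue, not a confirmed name.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a short alias resolves to a fully qualified
// connector class name; anything unrecognized is assumed to already be
// a full class name.
public class ConnectorAliasSketch {
    private final Map<String, String> aliases = new HashMap<>();

    public ConnectorAliasSketch() {
        aliases.put("file-sink",
            "org.apache.kafka.copycat.file.FileStreamSinkConnector");
    }

    public String resolve(String nameOrClass) {
        return aliases.getOrDefault(nameOrClass, nameOrClass);
    }
}
```

With this in place, `connector.class=file-sink` and the full class name configure the same connector, which is the Flume-style ergonomics the issue asks for.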



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3003) The fetch.wait.max.ms is not honored when new log segment rolled for low volume topics.

2015-12-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061611#comment-15061611
 ] 

ASF GitHub Bot commented on KAFKA-3003:
---

GitHub user becketqin opened a pull request:

https://github.com/apache/kafka/pull/688

KAFKA-3003 Update the replica.highWatermark correctly



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/becketqin/kafka KAFKA-3003

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/688.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #688


commit d3f9edf89ac32f44413edc3d58e227fa2d859ca2
Author: Jiangjie Qin 
Date:   2015-12-17T06:52:11Z

KAFKA-3003 Update the replica.highWatermark correctly




> The fetch.wait.max.ms is not honored when new log segment rolled for low 
> volume topics.
> ---
>
> Key: KAFKA-3003
> URL: https://issues.apache.org/jira/browse/KAFKA-3003
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.0.1
>
>
> The problem we saw can be explained by the example below:
> 1. Message offset 100 is appended to partition p0, log segment .log, 
> at time T. After that no message is appended. 
> 2. This message is replicated, and the leader replica updates its 
> highWatermark.messageOffset=100, highWatermark.segmentBaseOffset=0.
> 3. At time T + retention.ms, because no message has been appended to the 
> current active log segment for retention.ms, the last modified time of the 
> current log segment reaches the retention time. 
> 4. The broker rolls a new log segment 0001.log and deletes the old log 
> segment .log. The new log segment in this case is empty because no 
> message has been appended. 
> 5. In Log, nextOffsetMetadata.segmentBaseOffset will be updated to the 
> new log segment's base offset, but nextOffsetMetadata.messageOffset does not 
> change, so nextOffsetMetadata.messageOffset=1, 
> nextOffsetMetadata.segmentBaseOffset=1.
> 6. Now a FetchRequest arrives and tries to fetch from offset 1, with 
> fetch.wait.max.ms=1000.
> 7. In ReplicaManager, because there is no data to return, the fetch request 
> will be put into purgatory. When delayedFetchPurgatory.tryCompleteElseWatch() 
> is called, DelayedFetch.tryComplete() compares replica.highWatermark with 
> the fetchOffset returned by log.read(); it sees 
> replica.highWatermark.segmentBaseOffset=0 and 
> fetchOffset.segmentBaseOffset=1, so it assumes the fetch occurs on a 
> later segment and completes the delayed fetch immediately.
> In this case, replica.highWatermark was not updated because 
> LogOffsetMetadata.precedes() only checks the messageOffset and ignores the 
> segmentBaseOffset. The fix is to let LogOffsetMetadata first check the 
> messageOffset and then the segmentBaseOffset, so that replica.highWatermark 
> gets updated after the follower fetches from the leader.
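The ordering fix described above can be sketched in Java. This is a hedged illustration only: the real LogOffsetMetadata is Scala code inside Kafka core, and the method names `precedesOld`/`precedesFixed` are invented here to contrast the two behaviors.

```java
// Sketch of the comparison change: after a segment roll, the message offset
// can stay the same while the segment base offset advances.
class LogOffsetMetadataSketch {
    final long messageOffset;
    final long segmentBaseOffset;

    LogOffsetMetadataSketch(long messageOffset, long segmentBaseOffset) {
        this.messageOffset = messageOffset;
        this.segmentBaseOffset = segmentBaseOffset;
    }

    // Buggy behavior: only messageOffset is compared, so a high watermark on
    // the old segment never "precedes" the same offset on the new segment,
    // and the watermark never advances after the roll.
    boolean precedesOld(LogOffsetMetadataSketch that) {
        return this.messageOffset < that.messageOffset;
    }

    // Fixed behavior: check messageOffset first, then fall back to
    // segmentBaseOffset, so the watermark moves onto the newly rolled segment.
    boolean precedesFixed(LogOffsetMetadataSketch that) {
        return this.messageOffset < that.messageOffset
                || (this.messageOffset == that.messageOffset
                    && this.segmentBaseOffset < that.segmentBaseOffset);
    }
}
```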



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3002) Make available to specify hostname with Uppercase at broker list

2015-12-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061247#comment-15061247
 ] 

ASF GitHub Bot commented on KAFKA-3002:
---

GitHub user sasakitoa opened a pull request:

https://github.com/apache/kafka/pull/685

KAFKA-3002: Make available to specify hostname with Uppercase at broker list

Make available to specify hostname with Uppercase at broker list

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sasakitoa/kafka hostname_uppercase

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/685.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #685


commit 337b75eeb8450daf994cd055b1ba0b2f79bbe676
Author: Sasaki Toru 
Date:   2015-12-16T15:55:45Z

make available to specify hostname with Uppercase letter




> Make available to specify hostname with Uppercase at broker list
> 
>
> Key: KAFKA-3002
> URL: https://issues.apache.org/jira/browse/KAFKA-3002
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.9.0.1
>Reporter: Sasaki Toru
>Priority: Minor
> Fix For: 0.9.0.1
>
>
> Currently we cannot specify a hostname with uppercase letters in the broker 
> list (e.g. the --broker-list option for kafka-console-producer.sh):
> OK: kafka-console-producer.sh --broker-list kafkaserver:9092 --topic test
> NG: kafka-console-producer.sh --broker-list KafkaServer:9092 --topic test
> (an exception occurs because DNS resolution fails)
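The symptom above is consistent with broker-address validation that only matches lowercase letters. The pair of patterns below is a hedged illustration of that failure mode and its fix, not the actual Kafka parsing code; `HostPortCheck` and both patterns are hypothetical.

```java
import java.util.regex.Pattern;

class HostPortCheck {
    // Hypothetical lowercase-only pattern that reproduces the reported symptom.
    static final Pattern LOWERCASE_ONLY =
            Pattern.compile("^([a-z0-9.\\-]+):([0-9]+)$");

    // Case-insensitive variant: DNS hostnames are case-insensitive (RFC 1035),
    // so "KafkaServer" should be just as acceptable as "kafkaserver".
    static final Pattern CASE_INSENSITIVE =
            Pattern.compile("^([a-zA-Z0-9.\\-]+):([0-9]+)$");

    static boolean accepts(Pattern p, String brokerAddress) {
        return p.matcher(brokerAddress).matches();
    }
}
```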



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2988) Change default value of log.cleaner.enable

2015-12-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061408#comment-15061408
 ] 

ASF GitHub Bot commented on KAFKA-2988:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/686

KAFKA-2988: Change default configuration of the log cleaner



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka compaction

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/686.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #686


commit 9d5a56ec164d3113762ffda5e83dd9c278a9d29a
Author: Grant Henke 
Date:   2015-12-17T03:44:42Z

KAFKA-2988: Change default value of log.cleaner.enable




> Change default value of log.cleaner.enable 
> ---
>
> Key: KAFKA-2988
> URL: https://issues.apache.org/jira/browse/KAFKA-2988
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Since 0.9.0 the internal "__consumer_offsets" topic is used more heavily. 
> Because it is a compacted topic, "log.cleaner.enable" needs to be "true" in 
> order for it to be compacted. 
> Since this is critical for core Kafka functionality, we should change the 
> default to true and potentially consider removing the option to disable it 
> altogether. 
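Until the default changes, the proposal above is equivalent to operators setting this explicitly in the broker's server.properties (a configuration sketch, not output of the patch):

```properties
# Required for compacted topics (such as the internal __consumer_offsets
# topic) to actually be cleaned; this ticket proposes making it the default.
log.cleaner.enable=true
```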



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2990) NoSuchMethodError when Kafka is compiled with 1.8 and run on 1.7

2015-12-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15056988#comment-15056988
 ] 

ASF GitHub Bot commented on KAFKA-2990:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/675

KAFKA-2990: fix NoSuchMethodError in Pool with cast to ConcurrentMap



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2990

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/675.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #675


commit 19a12ca16b4330de90096f1fafffdc5ae7709ead
Author: Jason Gustafson 
Date:   2015-12-14T23:36:24Z

KAFKA-2990: fix NoSuchMethodError in Pool with cast to ConcurrentMap




> NoSuchMethodError when Kafka is compiled with 1.8 and run on 1.7
> 
>
> Key: KAFKA-2990
> URL: https://issues.apache.org/jira/browse/KAFKA-2990
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> I saw the following exception in the server logs when Kafka is compiled with 
> the 1.8 JDK and run on 1.7.
> {code}
> java.lang.NoSuchMethodError: 
> java.util.concurrent.ConcurrentHashMap.keySet()Ljava/util/concurrent/ConcurrentHashMap$KeySetView;
> at kafka.utils.Pool.keys(Pool.scala:79)
> at 
> kafka.coordinator.GroupMetadataManager.kafka$coordinator$GroupMetadataManager$$removeGroupsAndOffsets$1(GroupMetadataManager.scala:483)
> at 
> kafka.coordinator.GroupMetadataManager$$anonfun$removeGroupsForPartition$1.apply$mcV$sp(GroupMetadataManager.scala:465)
> at 
> kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110)
> at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:60)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The keySet() method for ConcurrentHashMap was changed in 1.8 to refer to 
> KeySetView, which didn't exist in 1.7. To fix it, we just need to make sure 
> that Pool calls ConcurrentMap.keySet() instead.
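The fix described above can be sketched as follows. This is a hedged illustration of the binding issue, not the actual kafka.utils.Pool (which is Scala): when `keySet()` is invoked through the `ConcurrentMap` interface, the compiled bytecode references the Java 7-compatible `Set` return type rather than JDK 8's `KeySetView`.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class PoolSketch<K, V> {
    // Declaring the field as ConcurrentMap (not ConcurrentHashMap) is the fix:
    // the call below binds to ConcurrentMap.keySet()Ljava/util/Set;, which
    // exists on both 1.7 and 1.8 runtimes. A ConcurrentHashMap-typed receiver
    // compiled under JDK 8 would bind to the KeySetView signature instead.
    private final ConcurrentMap<K, V> pool = new ConcurrentHashMap<>();

    void put(K key, V value) {
        pool.put(key, value);
    }

    Set<K> keys() {
        return pool.keySet();
    }
}
```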



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2990) NoSuchMethodError when Kafka is compiled with 1.8 and run on 1.7

2015-12-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15057007#comment-15057007
 ] 

ASF GitHub Bot commented on KAFKA-2990:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/675


> NoSuchMethodError when Kafka is compiled with 1.8 and run on 1.7
> 
>
> Key: KAFKA-2990
> URL: https://issues.apache.org/jira/browse/KAFKA-2990
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> I saw the following exception in the server logs when Kafka is compiled with 
> the 1.8 JDK and run on 1.7.
> {code}
> java.lang.NoSuchMethodError: 
> java.util.concurrent.ConcurrentHashMap.keySet()Ljava/util/concurrent/ConcurrentHashMap$KeySetView;
> at kafka.utils.Pool.keys(Pool.scala:79)
> at 
> kafka.coordinator.GroupMetadataManager.kafka$coordinator$GroupMetadataManager$$removeGroupsAndOffsets$1(GroupMetadataManager.scala:483)
> at 
> kafka.coordinator.GroupMetadataManager$$anonfun$removeGroupsForPartition$1.apply$mcV$sp(GroupMetadataManager.scala:465)
> at 
> kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110)
> at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:60)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The keySet() method for ConcurrentHashMap was changed in 1.8 to refer to 
> KeySetView, which didn't exist in 1.7. To fix it, we just need to make sure 
> that Pool calls ConcurrentMap.keySet() instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2837) FAILING TEST: kafka.api.ProducerBounceTest > testBrokerFailure

2015-12-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15057164#comment-15057164
 ] 

ASF GitHub Bot commented on KAFKA-2837:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/674


> FAILING TEST: kafka.api.ProducerBounceTest > testBrokerFailure 
> ---
>
> Key: KAFKA-2837
> URL: https://issues.apache.org/jira/browse/KAFKA-2837
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Gwen Shapira
>Assignee: jin xing
>  Labels: newbie
> Fix For: 0.9.1.0
>
>
> {code}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> kafka.api.ProducerBounceTest.testBrokerFailure(ProducerBounceTest.scala:117)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at 

[jira] [Commented] (KAFKA-2977) Transient Failure in kafka.log.LogCleanerIntegrationTest.cleanerTest

2015-12-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15055005#comment-15055005
 ] 

ASF GitHub Bot commented on KAFKA-2977:
---

GitHub user ZoneMayor opened a pull request:

https://github.com/apache/kafka/pull/671

KAFKA-2977: Transient Failure in 
kafka.log.LogCleanerIntegrationTest.cleanerTest

Hi @guozhangwang 
-
Code is as below:
val appends = writeDups(numKeys = 100, numDups = 3, log, 
CompressionCodec.getCompressionCodec(compressionCodec))
cleaner.startup()
val firstDirty = log.activeSegment.baseOffset
cleaner.awaitCleaned("log", 0, firstDirty)

val appends2 = appends ++ writeDups(numKeys = 100, numDups = 3, log, 
CompressionCodec.getCompressionCodec(compressionCodec))
val firstDirty2 = log.activeSegment.baseOffset
cleaner.awaitCleaned("log", 0, firstDirty2)
--
The log cleaner and writeDups run on two different threads;
the log cleaner runs every 15s, and the timeout in "cleaner.awaitCleaned" is 60s.
There is a filtering condition for a log to be chosen as a cleaning 
target: cleanableRatio > 0.5 (configured via log.cleaner.min.cleanable.ratio) 
by default.
It may happen that, during "val appends2 = appends ++ writeDups(numKeys = 
100, numDups = 3, log, 
CompressionCodec.getCompressionCodec(compressionCodec))", the log is also 
undergoing a cleaning pass.
Since the segment size configured in this test is quite small (100), it is 
possible that before the end of 'writeDups', some 'dirty segments' of the 
log have already been cleaned.
With only a tiny dirty part left, cleanableRatio > 0.5 cannot be satisfied;
thus firstDirty2 > lastCleaned2, which makes the test fail.

Does that make sense?
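The ratio gate described above can be sketched with simple arithmetic. This is an illustration of the selection condition only; `CleanableRatio` and its byte-count parameters are hypothetical names, not Kafka's internals.

```java
class CleanableRatio {
    // Returns whether a log would be re-selected for cleaning under a
    // min.cleanable.ratio threshold (0.5 by default, per the comment above):
    // once a pass has cleaned most of the log, the remaining dirty fraction
    // can stay below the threshold, so the log is never chosen again.
    static boolean selected(long cleanBytes, long dirtyBytes, double minRatio) {
        double cleanableRatio = (double) dirtyBytes / (cleanBytes + dirtyBytes);
        return cleanableRatio > minRatio;
    }
}
```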

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka trunk-KAFKA-2977

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/671.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #671


commit 95374147a28208d4850f6e73f714bf418935fc2d
Author: ZoneMayor 
Date:   2015-11-27T03:49:34Z

Merge pull request #1 from apache/trunk

merge

commit cec5b48b651a7efd3900cfa3c1fd0ab1eeeaa3ec
Author: ZoneMayor 
Date:   2015-12-01T10:44:02Z

Merge pull request #2 from apache/trunk

2015-12-1

commit a119d547bf1741625ce0627073c7909992a20f15
Author: ZoneMayor 
Date:   2015-12-04T13:42:27Z

Merge pull request #3 from apache/trunk

2015-12-04#KAFKA-2893

commit b767a8dff85fc71c75d4cf5178c3f6f03ff81bfc
Author: ZoneMayor 
Date:   2015-12-09T10:42:30Z

Merge pull request #5 from apache/trunk

2015-12-9

commit 0070c2d71d06ee8baa1cddb3451cd5af6c6b1d4a
Author: ZoneMayor 
Date:   2015-12-11T14:50:30Z

Merge pull request #8 from apache/trunk

2015-12-11

commit 09908ac646d4c84f854dad63b8c99213b74a7063
Author: ZoneMayor 
Date:   2015-12-13T14:17:19Z

Merge pull request #9 from apache/trunk

2015-12-13

commit ff1e68bb7101d12624c189174ef1dceb21ed9798
Author: jinxing 
Date:   2015-12-13T14:31:34Z

KAFKA-2054: Transient Failure in 
kafka.log.LogCleanerIntegrationTest.cleanerTest

commit 6321ab6599cb7a981fac2a4eea64a5f2ea805dd6
Author: jinxing 
Date:   2015-12-13T14:36:11Z

removed unnecessary maven repo




> Transient Failure in kafka.log.LogCleanerIntegrationTest.cleanerTest
> 
>
> Key: KAFKA-2977
> URL: https://issues.apache.org/jira/browse/KAFKA-2977
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: jin xing
>
> {code}
> java.lang.AssertionError: log cleaner should have processed up to offset 599
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> kafka.log.LogCleanerIntegrationTest.cleanerTest(LogCleanerIntegrationTest.scala:76)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> 

[jira] [Commented] (KAFKA-2837) FAILING TEST: kafka.api.ProducerBounceTest > testBrokerFailure

2015-12-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15055111#comment-15055111
 ] 

ASF GitHub Bot commented on KAFKA-2837:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/648


> FAILING TEST: kafka.api.ProducerBounceTest > testBrokerFailure 
> ---
>
> Key: KAFKA-2837
> URL: https://issues.apache.org/jira/browse/KAFKA-2837
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Gwen Shapira
>Assignee: jin xing
>  Labels: newbie
> Fix For: 0.9.1.0
>
>
> {code}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> kafka.api.ProducerBounceTest.testBrokerFailure(ProducerBounceTest.scala:117)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at 

[jira] [Commented] (KAFKA-2058) ProducerTest.testSendWithDeadBroker transient failure

2015-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062083#comment-15062083
 ] 

ASF GitHub Bot commented on KAFKA-2058:
---

GitHub user ZoneMayor opened a pull request:

https://github.com/apache/kafka/pull/689

KAFKA-2058: ProducerTest.testSendWithDeadBroker transient failure

I reproduced this transient failure;
it turns out that waitUntilMetadataIsPropagated is not enough.
In "onBrokerStartup", the methods below send both LeaderAndIsrRequest 
and UpdateMetadataRequest to KafkaApis:
replicaStateMachine.handleStateChanges(allReplicasOnNewBrokers, 
OnlineReplica)
partitionStateMachine.triggerOnlinePartitionStateChange()
The two kinds of request are handled separately, and their relative order 
is not guaranteed.
If the UpdateMetadataRequest is handled first, the metadataCache of KafkaApis 
is updated, so TestUtils.waitUntilMetadataIsPropagated is satisfied and the 
consumer will start fetching data.
But if the LeaderAndIsrRequest has not been handled at that moment, 
"becomeLeaderOrFollower" cannot be called, so structures like 
"leaderReplicaOpt" are not updated, which makes the consumer's fetch fail.
To fix this, the consumer should start fetching data only after the partition's 
leaderReplica is refreshed, not merely after the leader is elected;
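The stricter readiness check described above amounts to polling a second condition before consuming. The helper below is a hedged, generic sketch of such a poll loop; `WaitUtil` is an illustrative stand-in, not Kafka's TestUtils.

```java
import java.util.function.BooleanSupplier;

class WaitUtil {
    // Poll a condition (e.g. "the partition's leader replica is populated")
    // until it holds or the timeout elapses; returns the final state.
    static boolean waitUntil(BooleanSupplier condition, long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            try {
                Thread.sleep(5);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return condition.getAsBoolean();
            }
        }
        return condition.getAsBoolean();
    }
}
```

A test would chain two such waits: first on metadata propagation, then on leader-replica availability, before starting the consumer.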

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka trunk-KAFKA-2058

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/689.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #689


commit 95374147a28208d4850f6e73f714bf418935fc2d
Author: ZoneMayor 
Date:   2015-11-27T03:49:34Z

Merge pull request #1 from apache/trunk

merge

commit cec5b48b651a7efd3900cfa3c1fd0ab1eeeaa3ec
Author: ZoneMayor 
Date:   2015-12-01T10:44:02Z

Merge pull request #2 from apache/trunk

2015-12-1

commit a119d547bf1741625ce0627073c7909992a20f15
Author: ZoneMayor 
Date:   2015-12-04T13:42:27Z

Merge pull request #3 from apache/trunk

2015-12-04#KAFKA-2893

commit b767a8dff85fc71c75d4cf5178c3f6f03ff81bfc
Author: ZoneMayor 
Date:   2015-12-09T10:42:30Z

Merge pull request #5 from apache/trunk

2015-12-9

commit 0070c2d71d06ee8baa1cddb3451cd5af6c6b1d4a
Author: ZoneMayor 
Date:   2015-12-11T14:50:30Z

Merge pull request #8 from apache/trunk

2015-12-11

commit 09908ac646d4c84f854dad63b8c99213b74a7063
Author: ZoneMayor 
Date:   2015-12-13T14:17:19Z

Merge pull request #9 from apache/trunk

2015-12-13

commit 30b26b2d3c714bff11f4c58f00f5d1b075a592e9
Author: ZoneMayor 
Date:   2015-12-17T12:27:27Z

Merge pull request #10 from apache/trunk

2015-12-17

commit 6b1790b2742fa1244d3ba44aef459d8d5a6d3b55
Author: jinxing 
Date:   2015-12-17T12:30:38Z

KAFKA-2058: 30b26b2d3c714bff11f4c58f00f5d1b075a592e9




> ProducerTest.testSendWithDeadBroker transient failure
> -
>
> Key: KAFKA-2058
> URL: https://issues.apache.org/jira/browse/KAFKA-2058
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Bill Bejeck
>  Labels: newbie
>
> {code}
> kafka.producer.ProducerTest > testSendWithDeadBroker FAILED
> java.lang.AssertionError: Message set should have 1 message
> at org.junit.Assert.fail(Assert.java:92)
> at org.junit.Assert.assertTrue(Assert.java:44)
> at 
> kafka.producer.ProducerTest.testSendWithDeadBroker(ProducerTest.scala:260)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3006) kafka client should offer Collection alternative to Array call signatures

2015-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062224#comment-15062224
 ] 

ASF GitHub Bot commented on KAFKA-3006:
---

GitHub user pyr opened a pull request:

https://github.com/apache/kafka/pull/690

consumer: Collection alternatives to array calls. Fixes KAFKA-3006

This makes the library much easier to use from some JVM languages.
In Clojure, for instance, you can go from

```clojure
(.pause consumer (into-array TopicPartition partitions))
```
to simply:

```clojure
(.pause consumer partitions)
```

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pyr/kafka feature/collections

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/690.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #690


commit 3948e87f82273b3c0694f6a911d327d67c5432b4
Author: Pierre-Yves Ritschard 
Date:   2015-12-17T15:44:30Z

consumer: Collection alternatives to array calls. Fixes KAFKA-3006




> kafka client should offer Collection alternative to Array call signatures
> -
>
> Key: KAFKA-3006
> URL: https://issues.apache.org/jira/browse/KAFKA-3006
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Pierre-Yves Ritschard
>  Labels: patch
>
> Some languages (in my case, Clojure) make it a bit cumbersome to deal with 
> Java arrays. 
> In the consumer, these four signatures only accept arrays:
> seekToBeginning, seekToEnd, pause, resume.
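The requested API shape can be sketched as an overload pair in which the array form delegates to a Collection form. This is an illustration of the idea only; `PausableConsumer` is a hypothetical interface, not the actual KafkaConsumer signature.

```java
import java.util.Arrays;
import java.util.Collection;

// Functional interface so a lambda or method reference can stand in for a
// consumer in tests.
interface PausableConsumer<TP> {
    // Proposed Collection-based signature: JVM languages can pass their
    // native sequence types directly, with no array conversion.
    void pause(Collection<TP> partitions);

    // The existing array-style call can simply delegate to it.
    @SuppressWarnings("unchecked")
    default void pause(TP... partitions) {
        pause(Arrays.asList(partitions));
    }
}
```

With this in place, the Clojure call in the PR description no longer needs `into-array`.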



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2653) Stateful operations in the KStream DSL layer

2015-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062651#comment-15062651
 ] 

ASF GitHub Bot commented on KAFKA-2653:
---

Github user guozhangwang closed the pull request at:

https://github.com/apache/kafka/pull/665


> Stateful operations in the KStream DSL layer
> 
>
> Key: KAFKA-2653
> URL: https://issues.apache.org/jira/browse/KAFKA-2653
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>
> This includes the interface design and the implementation for stateful 
> operations, including:
> 0. table representation in KStream.
> 1. stream-stream join.
> 2. stream-table join.
> 3. table-table join.
> 4. stream / table aggregations.
> With 0 and 3 being tackled in KAFKA-2856 and KAFKA-2962 separately, this 
> ticket will focus only on the windowing definition and items 1 / 2 / 4 above.
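Of the operations listed, a stream-table join (item 2) is conceptually the simplest: the table side is materialized as latest-value-per-key state, and each arriving stream record is enriched against that snapshot. An API-free conceptual sketch (the class and method names here are illustrative only, not the Kafka Streams DSL under design):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Conceptual sketch of a stream-table join: the table side keeps only the
// latest value per key; each stream record is joined against that snapshot.
// Illustrative only -- not the Kafka Streams API.
class StreamTableJoin {
    private final Map<String, String> table = new HashMap<>();
    private final List<String> output = new ArrayList<>();

    // table-side update: overwrite the previous value for the key
    void onTableRecord(String key, String value) {
        table.put(key, value);
    }

    // stream-side record: emit an enriched result only if the key is known
    void onStreamRecord(String key, String value) {
        String tableValue = table.get(key);
        if (tableValue != null) {
            output.add(key + ":" + value + "|" + tableValue);
        }
    }

    List<String> results() {
        return output;
    }
}
```

Stream-stream joins and aggregations differ mainly in that both sides are buffered within a window, which is why the windowing definition is in scope for this ticket.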





[jira] [Commented] (KAFKA-2653) Stateful operations in the KStream DSL layer

2015-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15062655#comment-15062655
 ] 

ASF GitHub Bot commented on KAFKA-2653:
---

GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/691

KAFKA-2653: Kafka Streams Stateful API Design



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K2653

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/691.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #691


commit e46d649c2e40078ed161c83fdc1690456f09f43a
Author: Guozhang Wang 
Date:   2015-12-10T04:31:25Z

v1

commit 2167f29ff630577fe63abc93fd8a58aa6c7d3c1c
Author: Guozhang Wang 
Date:   2015-12-10T19:32:34Z

option 1 of windowing operations

commit fb92b2b20f7be6f17c006de6e48cb04065808477
Author: Guozhang Wang 
Date:   2015-12-11T05:47:51Z

v1

commit 0862ec2b4ecb151ea1b3395c74787e4de99891fe
Author: Guozhang Wang 
Date:   2015-12-11T22:15:02Z

v1

commit 9558891bdaccc0b8861f882b957b5131556f896c
Author: Guozhang Wang 
Date:   2015-12-15T00:30:20Z

address Yasu's comments

commit e6373cbc4229637100c97bbb440555c2f0719d03
Author: Guozhang Wang 
Date:   2015-12-15T01:50:17Z

add built-in aggregates

commit 66e122adc8911334e924921bc7fa67275445bd71
Author: Guozhang Wang 
Date:   2015-12-15T03:17:59Z

add built-in aggregates in KTable

commit 13c15ada1edbff51e34022484bcde3955cdf99cd
Author: Guozhang Wang 
Date:   2015-12-15T19:28:12Z

Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/kafka into 
K2653r

commit 1f360a25022d0286f6ebbf1a6735201ba8fdab53
Author: Guozhang Wang 
Date:   2015-12-15T19:43:53Z

address Yasu's comments

commit 2b027bf8614026cbec05404dffd5e9c2598db6f4
Author: Guozhang Wang 
Date:   2015-12-15T20:58:11Z

add missing files

commit 5214b12fcd66eb4cfa9af4258ca2146c11aa2e89
Author: Guozhang Wang 
Date:   2015-12-15T23:11:27Z

address Yasu's comments

commit a603a9afde8a86906d085b6cf942df67d2082fb9
Author: Guozhang Wang 
Date:   2015-12-15T23:15:29Z

rename aggregateSupplier to aggregatorSupplier

commit e186710bc3b66e88148ab81087276cedffa2bad3
Author: Guozhang Wang 
Date:   2015-12-16T22:20:59Z

modify built-in aggregates

commit 5bb1e8c95e0c1ab131d5212d1a7d793ce8b49414
Author: Guozhang Wang 
Date:   2015-12-16T22:24:10Z

add missing files

commit 4570dd0d98526f8388c13ef5fe4af12d372f73c6
Author: Guozhang Wang 
Date:   2015-12-17T00:01:58Z

further comments addressed






[jira] [Commented] (KAFKA-2875) Class path contains multiple SLF4J bindings warnings when using scripts under bin

2015-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15063614#comment-15063614
 ] 

ASF GitHub Bot commented on KAFKA-2875:
---

Github user ZoneMayor closed the pull request at:

https://github.com/apache/kafka/pull/669


> Class path contains multiple SLF4J bindings warnings when using scripts under 
> bin
> -
>
> Key: KAFKA-2875
> URL: https://issues.apache.org/jira/browse/KAFKA-2875
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: jin xing
>Priority: Minor
>  Labels: patch
> Fix For: 0.9.0.1
>
>
> This adds a lot of noise when running the scripts, see example when running 
> kafka-console-producer.sh:
> {code}
> ~/D/s/kafka-0.9.0.0-src ❯❯❯ ./bin/kafka-console-producer.sh --topic topic 
> --broker-list localhost:9092
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/tools/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/api/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/runtime/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/file/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/json/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> {code}
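The warning above is produced by SLF4J itself: at initialization it enumerates every `org/slf4j/impl/StaticLoggerBinder.class` resource on the classpath and complains when it finds more than one. That lookup can be reproduced directly as a diagnostic (a small sketch for inspection, not part of the Kafka scripts):

```java
import java.io.IOException;
import java.net.URL;
import java.util.Collections;
import java.util.List;

// Diagnostic sketch: list every SLF4J binding visible on the classpath,
// the same resource lookup SLF4J 1.x performs before printing the
// "multiple bindings" warning.
class Slf4jBindingScan {
    static List<URL> findBindings() throws IOException {
        return Collections.list(
                Slf4jBindingScan.class.getClassLoader()
                        .getResources("org/slf4j/impl/StaticLoggerBinder.class"));
    }

    public static void main(String[] args) throws IOException {
        // more than one entry here is exactly the case the warning reports
        for (URL binding : findBindings()) {
            System.out.println(binding);
        }
    }
}
```

The fix in the scripts is therefore to ensure only one slf4j-log4j12 jar ends up on the constructed CLASSPATH.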





[jira] [Commented] (KAFKA-2875) Class path contains multiple SLF4J bindings warnings when using scripts under bin

2015-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15063638#comment-15063638
 ] 

ASF GitHub Bot commented on KAFKA-2875:
---

GitHub user ZoneMayor opened a pull request:

https://github.com/apache/kafka/pull/693

KAFKA-2875: remove SLF4J multiple-binding warnings when running from the 
source distribution

hi @ijuma I reopened this PR (sorry for my inexperience using GitHub);
I think I deduplicated much of the classpath logic in the script;
please have a look when you have time : - )

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka trunk-KAFKA-2875

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/693.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #693


commit 34240b52e1b70aa172b65155f6042243d838b420
Author: ZoneMayor 
Date:   2015-12-18T07:22:20Z

Merge pull request #12 from apache/trunk

2015-12-18

commit ffedf6fd04280e89978531fd73e7fe37a4d9bbed
Author: jinxing 
Date:   2015-12-18T07:24:14Z

KAFKA-2875 Class path contains multiple SLF4J bindings warnings when using 
scripts under bin






[jira] [Commented] (KAFKA-2940) Make available to use any Java options at startup scripts

2015-12-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15063123#comment-15063123
 ] 

ASF GitHub Bot commented on KAFKA-2940:
---

Github user sasakitoa closed the pull request at:

https://github.com/apache/kafka/pull/621


> Make available to use any Java options at startup scripts
> -
>
> Key: KAFKA-2940
> URL: https://issues.apache.org/jira/browse/KAFKA-2940
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Sasaki Toru
>Assignee: Sasaki Toru
>Priority: Minor
> Fix For: 0.9.0.1
>
>
> We cannot specify arbitrary Java options (e.g. options for remote debugging) in 
> startup scripts such as kafka-server-start.sh.
> This ticket makes it possible to pass them via the "JAVA_OPTS" environment variable.




