Jenkins build is back to normal : kafka-trunk-jdk8 #1950

2017-08-26 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-5798) Couldn't connect to MySQL using the mysql.jdbc driver

2017-08-26 Thread Bubesh Shankar Govindarajan (JIRA)
Bubesh Shankar Govindarajan created KAFKA-5798:
--

 Summary: Couldn't connect to MySQL using the mysql.jdbc driver
 Key: KAFKA-5798
 URL: https://issues.apache.org/jira/browse/KAFKA-5798
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 0.10.0.1
 Environment: Ubuntu
Reporter: Bubesh Shankar Govindarajan


I am a beginner to both Java and Kafka. I am trying to connect Kafka and MySQL 
to stream data from a MySQL database and consume it via Kafka consumers. 

I have downloaded Confluent 3.3.0 from the link below: 
https://www.confluent.io/download/

I am using *confluent-3.3.0*
*+Java version :+*
$ java -version
openjdk version "9-Ubuntu"
OpenJDK Runtime Environment (build 9-Ubuntu+0-9b161-1)
OpenJDK Server VM (build 9-Ubuntu+0-9b161-1, mixed mode)

MySQL JDBC driver: *com.mysql.jdbc_5.1.5.jar*

I have started ZooKeeper and the Kafka server using the commands below:

* zookeeper-server-start 
/home/bubesh/Kafka/confluent-3.3.0/etc/kafka/zookeeper.properties
* kafka-server-start 
/home/bubesh/Kafka/confluent-3.3.0/etc/kafka/server.properties

*+I have used the command below to invoke Kafka Connect and connect to MySQL 
via the JDBC driver:+*

connect-standalone 
/home/bubesh/Kafka/confluent-3.3.0/etc/schema-registry/connect-avro-standalone.properties
 
/home/bubesh/Kafka/confluent-3.3.0/etc/kafka-connect-jdbc/source-quickstart-mysql.properties

*+My CLASSPATH variable contains:+*

/home/bubesh/JDBCDriver/com.mysql.jdbc_5.1.5.jar::/home/bubesh/Kafka/confluent-3.3.0/share/java/kafka/*:/home/bubesh/Kafka/confluent-3.3.0/share/java/confluent-common/*:/home/bubesh/Kafka/confluent-3.3.0/share/java/kafka-serde-tools/*:/home/bubesh/Kafka/confluent-3.3.0/share/java/kafka-connect-elasticsearch/*:/home/bubesh/Kafka/confluent-3.3.0/share/java/kafka-connect-hdfs/*:/home/bubesh/Kafka/confluent-3.3.0/share/java/kafka-connect-jdbc/*:/home/bubesh/Kafka/confluent-3.3.0/share/java/kafka-connect-s3/*:/home/bubesh/Kafka/confluent-3.3.0/share/java/kafka-connect-storage-common/*


*+Error while running the command:+*
Error: Config file not found: 
/usr/lib/jvm/java-9-openjdk-i386/conf/management/management.properties

*+connect-avro-standalone.properties:+*

bootstrap.servers=localhost:9092
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.storage.file.filename=/tmp/connect.offsets

I use a VMware VM with Ubuntu installed in it, and my MySQL server is installed 
on my host machine. I have also connected to this 

*+ipconfig of the host:+*

 Connection-specific DNS Suffix  . : home
   Link-local IPv6 Address . . . . . : fe80::c81:5928:4ee6:5a2d%9
   IPv4 Address. . . . . . . . . . . : 192.168.1.10
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . : 192.168.1.1

*+source-quickstart-mysql.properties:+*

name=mysql-whitelist-timestamp-source
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1

connection.url=jdbc:mysql://192.168.1.10:3306/sandbox?user=bubesh&password=bubesh21
query=SELECT * from temp1
mode=incrementing
incrementing.column.name=c1

topic.prefix=mysql-test-topic1
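
For reference, a minimal standalone JDBC check can confirm the VM reaches the 
host's MySQL before involving Connect at all. This is a hedged sketch: the 
class name is illustrative, and the URL reuses the host and credentials above 
with the '&' parameter separator, which the mail archive appears to have eaten.

{code}
import java.sql.Connection;
import java.sql.DriverManager;

public class MySqlConnectionCheck {
    public static void main(final String[] args) throws Exception {
        // Load the driver explicitly; Connector/J 5.1.5 may predate JDBC 4
        // auto-loading via META-INF/services.
        Class.forName("com.mysql.jdbc.Driver");
        final String url =
            "jdbc:mysql://192.168.1.10:3306/sandbox?user=bubesh&password=bubesh21";
        try (Connection conn = DriverManager.getConnection(url)) {
            System.out.println("Connected: "
                + conn.getMetaData().getDatabaseProductVersion());
        }
    }
}
{code}

Compile and run with the driver jar on the classpath, e.g. 
java -cp /home/bubesh/JDBCDriver/com.mysql.jdbc_5.1.5.jar:. MySqlConnectionCheck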




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5797) StoreChangelogReader should be resilient to broker-side metadata not available

2017-08-26 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-5797:


 Summary: StoreChangelogReader should be resilient to broker-side 
metadata not available
 Key: KAFKA-5797
 URL: https://issues.apache.org/jira/browse/KAFKA-5797
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Guozhang Wang
Assignee: Guozhang Wang


In {{StoreChangelogReader#validatePartitionExists}}, if the metadata for the 
required partition is not available, or a timeout exception is thrown, the 
function today throws the exception all the way up to the user's exception 
handlers.

Since we have now extracted the restoration out of the consumer callback, a 
better way to handle this is to validate the partition only during restoration; 
if it does not exist yet, we can simply proceed and retry in the next loop.
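
A minimal sketch of that retry behavior (hypothetical class and method names, 
not the actual StoreChangelogReader code): rather than surfacing a 
{{TimeoutException}} to the user's exception handler, remember the partition 
and let the next restore loop retry once the broker-side metadata is available.

{code}
import java.util.HashSet;
import java.util.List;
import java.util.Set;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.TimeoutException;

public class DeferringPartitionValidator {
    private final Set<TopicPartition> needsRetry = new HashSet<>();

    /** Returns true once the changelog partition is known to exist; defers otherwise. */
    public boolean isValidated(final Consumer<byte[], byte[]> consumer,
                               final TopicPartition partition) {
        try {
            final List<PartitionInfo> infos = consumer.partitionsFor(partition.topic());
            if (infos != null) {
                for (final PartitionInfo info : infos) {
                    if (info.partition() == partition.partition()) {
                        needsRetry.remove(partition);
                        return true;
                    }
                }
            }
            needsRetry.add(partition);   // metadata not available yet; retry next loop
            return false;
        } catch (final TimeoutException e) {
            needsRetry.add(partition);   // do not throw to the user's exception handler
            return false;
        }
    }

    public Set<TopicPartition> pending() {
        return needsRetry;
    }
}
{code}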





Build failed in Jenkins: kafka-0.11.0-jdk7 #286

2017-08-26 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-3989; MINOR: follow-up: update script to run from kafka root

--
[...truncated 2.44 MB...]

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldNotWriteToChangeLogOnPutIfAbsentWhenValueForKeyExists PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnNullOnPutIfAbsentWhenNoPreviousValue STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnNullOnPutIfAbsentWhenNoPreviousValue PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldLogKeyNullOnDelete STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldLogKeyNullOnDelete PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnNullOnGetWhenDoesntExist STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnNullOnGetWhenDoesntExist PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnOldValueOnDelete STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnOldValueOnDelete PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldNotWriteToInnerOnPutIfAbsentWhenValueForKeyExists STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldNotWriteToInnerOnPutIfAbsentWhenValueForKeyExists PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnValueOnGetWhenExists STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnValueOnGetWhenExists PASSED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldRemove STARTED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldRemove PASSED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldPutAndFetch STARTED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldPutAndFetch PASSED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldRollSegments STARTED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldRollSegments PASSED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldFindValuesWithinRange STARTED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldFindValuesWithinRange PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
STARTED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerMaxInFlightRequestPerConnectionsWhenEosDisabled 
STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerMaxInFlightRequestPerConnectionsWhenEosDisabled 

Build failed in Jenkins: kafka-trunk-jdk8 #1949

2017-08-26 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-5620: Expose the ClassCastException as the cause for the

--
[...truncated 921.67 KB...]

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest STARTED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete STARTED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
STARTED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics STARTED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithInvalidReplication 
STARTED


[GitHub] kafka pull request #3746: MINOR: Fix doc typos and grammar

2017-08-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3746


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #3746: MINOR: Fix doc typos and grammar

2017-08-26 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/3746

MINOR: Fix doc typos and grammar

This picks up doc fixes contributed by mihbor, including:

https://github.com/apache/kafka/pull/3224
https://github.com/apache/kafka/pull/3226
https://github.com/apache/kafka/pull/3229

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka KMinor-doc-typos

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3746.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3746


commit 0afd8f01e33c7d5454582d5aeb0224cf456d0175
Author: Guozhang Wang 
Date:   2017-08-26T23:29:55Z

fix doc typos and grammar






[GitHub] kafka pull request #2654: MINOR: KAFKA-3989_follow_up_PR: update script to r...

2017-08-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2654




[GitHub] kafka pull request #3556: KAFKA-5620

2017-08-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3556




[jira] [Resolved] (KAFKA-5620) SerializationException in doSend() masks class cast exception

2017-08-26 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-5620.
--
   Resolution: Fixed
Fix Version/s: 1.0.0

Issue resolved by pull request 3556
[https://github.com/apache/kafka/pull/3556]

> SerializationException in doSend() masks class cast exception
> -
>
> Key: KAFKA-5620
> URL: https://issues.apache.org/jira/browse/KAFKA-5620
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.0
>Reporter: Jeremy Custenborder
>Assignee: Jeremy Custenborder
> Fix For: 1.0.0
>
>
> I misconfigured my Serializer and passed a byte array to BytesSerializer. 
> This caused the following exception to be thrown. 
> {code}
> org.apache.kafka.common.errors.SerializationException: Can't convert value of 
> class [B to class org.apache.kafka.common.serialization.BytesSerializer 
> specified in value.serializer
> {code}
> This doesn't provide much detail because it strips the ClassCastException, 
> which made figuring this out much more difficult. The real value was in the 
> inner exception, which was:
> {code}
> [B cannot be cast to org.apache.kafka.common.utils.Bytes
> {code}
> We should include the ClassCastException.
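
The fix merged via pull request 3556 exposes the ClassCastException as the 
cause (per the commit message earlier in this digest). A minimal standalone 
sketch of that idea, with a hypothetical helper name rather than the producer's 
actual code: keeping the ClassCastException as the cause lets the "[B cannot be 
cast to ...Bytes" detail survive in the stack trace.

{code}
import org.apache.kafka.common.errors.SerializationException;
import org.apache.kafka.common.serialization.Serializer;

public class SerializeWithCause {
    public static <V> byte[] serializeValue(final Serializer<V> serializer,
                                            final String topic,
                                            final V value) {
        try {
            return serializer.serialize(topic, value);
        } catch (final ClassCastException cce) {
            throw new SerializationException(
                "Can't convert value of class " + value.getClass().getName()
                    + " to class " + serializer.getClass().getName()
                    + " specified in value.serializer",
                cce);   // previously the cause was dropped here
        }
    }
}
{code}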





[GitHub] kafka-site issue #63: Do not use hyphens in wrapping property names in table...

2017-08-26 Thread guozhangwang
Github user guozhangwang commented on the issue:

https://github.com/apache/kafka-site/pull/63
  
Merged to asf-site.




[GitHub] kafka-site pull request #63: Do not use hyphens in wrapping property names i...

2017-08-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka-site/pull/63




[GitHub] kafka pull request #3745: [KAFKA-4468] Correctly calculate the window end ti...

2017-08-26 Thread ConcurrencyPractitioner
GitHub user ConcurrencyPractitioner opened a pull request:

https://github.com/apache/kafka/pull/3745

[KAFKA-4468] Correctly calculate the window end timestamp after read from 
state stores

I have decided to take the following approach to fixing this bug:

1) Since the window size in WindowedDeserializer was originally unknown, I 
have added a _windowSize_ field and created a constructor that allows it to 
be set.

2) The default value of _windowSize_ is _Long.MAX_VALUE_; in that case the 
deserialize method returns an unlimited window, otherwise it returns a timed 
one. (See the sketch after this list.)

3) The temperature demo was modified to demonstrate how to use this new 
constructor, given that the window size is known.
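
A minimal sketch of the idea in 2), with hypothetical names rather than the 
PR's exact code: the deserializer keeps a configurable window size and uses it 
to reconstruct the window end, with Long.MAX_VALUE marking an unlimited window.

    // Hypothetical helper illustrating the windowSize-based end calculation;
    // not the actual WindowedDeserializer code.
    public class WindowEndCalculator {
        private final long windowSize;

        public WindowEndCalculator() {
            this(Long.MAX_VALUE);             // size unknown => treat as unlimited
        }

        public WindowEndCalculator(final long windowSize) {
            this.windowSize = windowSize;
        }

        public long windowEnd(final long windowStart) {
            return windowSize == Long.MAX_VALUE
                ? Long.MAX_VALUE              // unlimited window has no end
                : windowStart + windowSize;   // timed window ends at start + size
        }
    }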


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ConcurrencyPractitioner/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3745.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3745


commit 7dcf42e110dadd3f257c7ea1a0d10adc60bd0eea
Author: Richard Yu 
Date:   2017-08-26T20:18:48Z

KAFKA-4468 Correctly calculate the window end timestamp after read from 
state stores






[GitHub] kafka-site pull request #56: Update commiter page to indicate that Gwen is a...

2017-08-26 Thread wushujames
Github user wushujames closed the pull request at:

https://github.com/apache/kafka-site/pull/56




[GitHub] kafka-site issue #56: Update commiter page to indicate that Gwen is a PMC me...

2017-08-26 Thread guozhangwang
Github user guozhangwang commented on the issue:

https://github.com/apache/kafka-site/pull/56
  
@wushujames This is already resolved, could you close this PR?




[GitHub] kafka pull request #3736: KAFKA-5787: StoreChangelogReader needs to restore ...

2017-08-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3736




[jira] [Resolved] (KAFKA-4906) Support 0.9 brokers with a newer Producer or Consumer version

2017-08-26 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke resolved KAFKA-4906.

Resolution: Won't Fix

> Support 0.9 brokers with a newer Producer or Consumer version
> -
>
> Key: KAFKA-4906
> URL: https://issues.apache.org/jira/browse/KAFKA-4906
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.10.2.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> KAFKA-4507 added the ability for newer Kafka clients to talk to older Kafka 
> brokers if a new feature supported by a newer wire protocol was not 
> used/required. 
> We currently support brokers as old as 0.10.0.0 because that's when the 
> ApiVersionsRequest/Response was added to the broker (KAFKA-3307).
> However, there are relatively few changes between 0.9.0.0 and 0.10.0.0 on the 
> wire, making it possible to support another major broker version set by 
> assuming that any disconnect resulting from an ApiVersionsRequest is from a 
> 0.9 broker and defaulting to legacy protocol versions. 
> Supporting 0.9 with newer clients can drastically simplify upgrades, allow 
> for libraries and frameworks to easily support a wider set of environments, 
> and let developers take advantage of client side improvements without 
> requiring cluster upgrades first. 
> Below is a list of the wire protocol versions by release for reference: 
> {noformat}
> 0.10.x
>   Produce(0): 0 to 2
>   Fetch(1): 0 to 2 
>   Offsets(2): 0
>   Metadata(3): 0 to 1
>   OffsetCommit(8): 0 to 2
>   OffsetFetch(9): 0 to 1
>   GroupCoordinator(10): 0
>   JoinGroup(11): 0
>   Heartbeat(12): 0
>   LeaveGroup(13): 0
>   SyncGroup(14): 0
>   DescribeGroups(15): 0
>   ListGroups(16): 0
>   SaslHandshake(17): 0
>   ApiVersions(18): 0
> 0.9.x:
>   Produce(0): 0 to 1 (no response timestamp from v2)
>   Fetch(1): 0 to 1 (no response timestamp from v2)
>   Offsets(2): 0
>   Metadata(3): 0 (no cluster id or rack info from v1)
>   OffsetCommit(8): 0 to 2
>   OffsetFetch(9): 0 to 1
>   GroupCoordinator(10): 0
>   JoinGroup(11): 0
>   Heartbeat(12): 0
>   LeaveGroup(13): 0
>   SyncGroup(14): 0
>   DescribeGroups(15): 0
>   ListGroups(16): 0
>   SaslHandshake(17): UNSUPPORTED
>   ApiVersions(18): UNSUPPORTED
> 0.8.2.x:
>   Produce(0): 0 (no quotas from v1)
>   Fetch(1): 0 (no quotas from v1)
>   Offsets(2): 0
>   Metadata(3): 0
>   OffsetCommit(8): 0 to 1 (no global retention time from v2)
>   OffsetFetch(9): 0 to 1
>   GroupCoordinator(10): 0
>   JoinGroup(11): UNSUPPORTED
>   Heartbeat(12): UNSUPPORTED
>   LeaveGroup(13): UNSUPPORTED
>   SyncGroup(14): UNSUPPORTED
>   DescribeGroups(15): UNSUPPORTED
>   ListGroups(16): UNSUPPORTED
>   SaslHandshake(17): UNSUPPORTED
>   ApiVersions(18): UNSUPPORTED
> {noformat}
> Note: Due to KAFKA-3088 it may take up to request.timeout.ms to fail an 
> ApiVersionsRequest and fail over to legacy protocol versions unless we handle 
> that scenario specifically in this patch. The workaround would be to reduce 
> request.timeout.ms if needed.
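
A minimal sketch of the proposed fallback, with a hypothetical interface rather 
than the actual client networking code: if the broker disconnects on the 
ApiVersionsRequest, assume a 0.9 broker and pin the legacy versions from the 
table above.

{code}
import java.util.HashMap;
import java.util.Map;

public class LegacyVersionFallback {

    interface BrokerApi {
        /** Returns the broker's supported max version per api key. */
        Map<Short, Short> apiVersions() throws DisconnectException;
    }

    static class DisconnectException extends Exception { }

    /** Highest request versions a 0.9 broker understands (api key -> max version). */
    static Map<Short, Short> legacyVersions() {
        final Map<Short, Short> v = new HashMap<>();
        v.put((short) 0, (short) 1);   // Produce
        v.put((short) 1, (short) 1);   // Fetch
        v.put((short) 2, (short) 0);   // Offsets
        v.put((short) 3, (short) 0);   // Metadata
        v.put((short) 8, (short) 2);   // OffsetCommit
        v.put((short) 9, (short) 1);   // OffsetFetch
        v.put((short) 10, (short) 0);  // GroupCoordinator
        // ... remaining api keys per the 0.9.x table above
        return v;
    }

    static Map<Short, Short> negotiate(final BrokerApi broker) {
        try {
            return broker.apiVersions();   // a 0.10+ broker answers this request
        } catch (final DisconnectException e) {
            return legacyVersions();       // assume a 0.9 broker; default to legacy
        }
    }
}
{code}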





Jenkins build is back to normal : kafka-trunk-jdk8 #1947

2017-08-26 Thread Apache Jenkins Server
See 




[GitHub] kafka pull request #3729: KAFKA-5749: Add MeteredSessionStore and Changelogg...

2017-08-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3729

