[jira] [Commented] (KAFKA-9334) Add more unit tests for Materialized class

2019-12-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17003439#comment-17003439
 ] 

ASF GitHub Bot commented on KAFKA-9334:
---

SainathB commented on pull request #7871: KAFKA-9334: Added more unit tests for 
Materialized class
URL: https://github.com/apache/kafka/pull/7871
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add more unit tests for Materialized class
> --
>
> Key: KAFKA-9334
> URL: https://issues.apache.org/jira/browse/KAFKA-9334
> Project: Kafka
>  Issue Type: Test
>  Components: unit tests
>Reporter: Sainath Batthala
>Priority: Minor
>  Labels: newbie
>
> Add more unit tests for the org.apache.kafka.streams.kstream.Materialized class.
> For example:
> There is a unit test case for max allowed store length validation,
> but there is no unit test case for negative retention.
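A negative-retention test of the kind requested here could target a guard like the following. This is a minimal sketch in plain Java: `MaterializedSketch` and its error message are hypothetical stand-ins for illustration, not Kafka's actual `Materialized` implementation.

```java
import java.time.Duration;

// Hypothetical sketch of the guard such a unit test would exercise; the
// class name and message are stand-ins, not Kafka's real code.
public class MaterializedSketch {
    private Duration retention;

    public MaterializedSketch withRetention(Duration retention) {
        // A negative-retention test would assert that this throws.
        if (retention.isNegative()) {
            throw new IllegalArgumentException("Retention must not be negative.");
        }
        this.retention = retention;
        return this;
    }

    public static void main(String[] args) {
        try {
            new MaterializedSketch().withRetention(Duration.ofMillis(-1));
            System.out.println("accepted negative retention");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected negative retention");
        }
    }
}
```

A JUnit test would simply call `withRetention(Duration.ofMillis(-1))` and assert that `IllegalArgumentException` is thrown.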



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9236) Confused log after using CLI scripts to produce messages

2019-12-25 Thread Xiang Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Zhang resolved KAFKA-9236.

Resolution: Fixed

> Confused log after using CLI scripts to produce messages
> 
>
> Key: KAFKA-9236
> URL: https://issues.apache.org/jira/browse/KAFKA-9236
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Xiang Zhang
>Priority: Major
>






[jira] [Commented] (KAFKA-9236) Confused log after using CLI scripts to produce messages

2019-12-25 Thread huxihx (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17003415#comment-17003415
 ] 

huxihx commented on KAFKA-9236:
---

[~iamabug] Can we close this Jira now?

> Confused log after using CLI scripts to produce messages
> 
>
> Key: KAFKA-9236
> URL: https://issues.apache.org/jira/browse/KAFKA-9236
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Xiang Zhang
>Priority: Major
>






[jira] [Assigned] (KAFKA-9277) move all group state transition rules into their states

2019-12-25 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx reassigned KAFKA-9277:
-

Assignee: dengziming  (was: huxihx)

> move all group state transition rules into their states
> ---
>
> Key: KAFKA-9277
> URL: https://issues.apache.org/jira/browse/KAFKA-9277
> Project: Kafka
>  Issue Type: Improvement
>Reporter: dengziming
>Assignee: dengziming
>Priority: Minor
> Fix For: 2.5.0
>
>
> Today, `GroupMetadata` maintains a validPreviousStates map covering every 
> GroupState:
> ```
> private val validPreviousStates: Map[GroupState, Set[GroupState]] =
>  Map(Dead -> Set(Stable, PreparingRebalance, CompletingRebalance, Empty, 
> Dead),
>  CompletingRebalance -> Set(PreparingRebalance),
>  Stable -> Set(CompletingRebalance),
>  PreparingRebalance -> Set(Stable, CompletingRebalance, Empty),
>  Empty -> Set(PreparingRebalance))
> ```
> It would be cleaner to move all state transition rules into their states:
> ```
> private[group] sealed trait GroupState {
>  val validPreviousStates: Set[GroupState]
> }
> ```
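The shape being proposed can be illustrated outside Scala as well. The following is a minimal Java analogue (illustrative only; the real change would live in the Scala `GroupMetadata` code) where each state owns its set of valid previous states instead of a central map:

```java
import java.util.EnumSet;
import java.util.Set;

// Illustrative Java analogue of the proposal: each state carries its own
// set of valid previous states, rather than GroupMetadata holding one map.
public enum GroupState {
    EMPTY, PREPARING_REBALANCE, COMPLETING_REBALANCE, STABLE, DEAD;

    private Set<GroupState> validPreviousStates;

    // These sets mirror the validPreviousStates map quoted in the issue.
    static {
        DEAD.validPreviousStates =
            EnumSet.of(STABLE, PREPARING_REBALANCE, COMPLETING_REBALANCE, EMPTY, DEAD);
        COMPLETING_REBALANCE.validPreviousStates = EnumSet.of(PREPARING_REBALANCE);
        STABLE.validPreviousStates = EnumSet.of(COMPLETING_REBALANCE);
        PREPARING_REBALANCE.validPreviousStates =
            EnumSet.of(STABLE, COMPLETING_REBALANCE, EMPTY);
        EMPTY.validPreviousStates = EnumSet.of(PREPARING_REBALANCE);
    }

    public boolean canTransitionFrom(GroupState previous) {
        return validPreviousStates.contains(previous);
    }
}
```

With this layout, a transition check becomes a method call on the target state, e.g. `STABLE.canTransitionFrom(COMPLETING_REBALANCE)`.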





[jira] [Assigned] (KAFKA-9277) move all group state transition rules into their states

2019-12-25 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx reassigned KAFKA-9277:
-

Assignee: huxihx  (was: dengziming)

> move all group state transition rules into their states
> ---
>
> Key: KAFKA-9277
> URL: https://issues.apache.org/jira/browse/KAFKA-9277
> Project: Kafka
>  Issue Type: Improvement
>Reporter: dengziming
>Assignee: huxihx
>Priority: Minor
> Fix For: 2.5.0
>
>
> Today, `GroupMetadata` maintains a validPreviousStates map covering every 
> GroupState:
> ```
> private val validPreviousStates: Map[GroupState, Set[GroupState]] =
>  Map(Dead -> Set(Stable, PreparingRebalance, CompletingRebalance, Empty, 
> Dead),
>  CompletingRebalance -> Set(PreparingRebalance),
>  Stable -> Set(CompletingRebalance),
>  PreparingRebalance -> Set(Stable, CompletingRebalance, Empty),
>  Empty -> Set(PreparingRebalance))
> ```
> It would be cleaner to move all state transition rules into their states:
> ```
> private[group] sealed trait GroupState {
>  val validPreviousStates: Set[GroupState]
> }
> ```





[jira] [Updated] (KAFKA-9334) Add more unit tests for Materialized class

2019-12-25 Thread Sainath Batthala (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sainath Batthala updated KAFKA-9334:

Labels: newbie  (was: )

> Add more unit tests for Materialized class
> --
>
> Key: KAFKA-9334
> URL: https://issues.apache.org/jira/browse/KAFKA-9334
> Project: Kafka
>  Issue Type: Test
>  Components: unit tests
>Reporter: Sainath Batthala
>Priority: Minor
>  Labels: newbie
>
> Add more unit tests for the org.apache.kafka.streams.kstream.Materialized class.
> For example:
> There is a unit test case for max allowed store length validation,
> but there is no unit test case for negative retention.





[jira] [Created] (KAFKA-9334) Add more unit tests for Materialized class

2019-12-25 Thread Sainath Batthala (Jira)
Sainath Batthala created KAFKA-9334:
---

 Summary: Add more unit tests for Materialized class
 Key: KAFKA-9334
 URL: https://issues.apache.org/jira/browse/KAFKA-9334
 Project: Kafka
  Issue Type: Test
  Components: unit tests
Reporter: Sainath Batthala


Add more unit tests for the org.apache.kafka.streams.kstream.Materialized class.
For example:
There is a unit test case for max allowed store length validation,
but there is no unit test case for negative retention.





[jira] [Commented] (KAFKA-4149) java.lang.NoSuchMethodError when running streams tests

2019-12-25 Thread John Roesler (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-4149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17003327#comment-17003327
 ] 

John Roesler commented on KAFKA-4149:
-

Hi @sdhalex,

Have you observed this happening? Ismael said we stopped seeing it. I’ve never 
seen it. 

The root cause must have been a failure in building the right class path when 
running the tests. 

> java.lang.NoSuchMethodError when running streams tests
> --
>
> Key: KAFKA-4149
> URL: https://issues.apache.org/jira/browse/KAFKA-4149
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Priority: Major
>
> This started happening recently, may be related to upgrading to Gradle 3:
> {code}
> java.lang.NoSuchMethodError: 
> scala.Predef$.$conforms()Lscala/Predef$$less$colon$less;
>   at kafka.utils.MockScheduler.<init>(MockScheduler.scala:38)
>   at kafka.utils.MockTime.<init>(MockTime.scala:35)
>   at kafka.utils.MockTime.<init>(MockTime.scala:37)
>   at 
> org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster.<init>(EmbeddedKafkaCluster.java:44)
>   at 
> org.apache.kafka.streams.KafkaStreamsTest.<init>(KafkaStreamsTest.java:42)
> {code}
> https://builds.apache.org/job/kafka-trunk-jdk7/1530/testReport/junit/org.apache.kafka.streams/KafkaStreamsTest/classMethod/
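When chasing a NoSuchMethodError suspected to come from a bad test classpath, one quick diagnostic is to ask the JVM where it actually loaded the suspect class from. The sketch below is a generic probe, not Kafka tooling; it is demonstrated with a JDK class here, but on a Kafka test classpath one would pass a name like "scala.Predef$" to see which scala-library jar won.

```java
import java.security.CodeSource;

// Generic probe: report where a class was loaded from, which helps spot a
// stale or conflicting jar behind a NoSuchMethodError.
public class ClasspathProbe {
    static String locationOf(String className) {
        try {
            Class<?> c = Class.forName(className);
            CodeSource src = c.getProtectionDomain().getCodeSource();
            // Bootstrap/platform classes report a null code source.
            return src == null ? "bootstrap/platform loader"
                               : src.getLocation().toString();
        } catch (ClassNotFoundException e) {
            return "not on classpath";
        }
    }

    public static void main(String[] args) {
        System.out.println(locationOf("java.lang.String"));
        System.out.println(locationOf("scala.Predef$"));
    }
}
```

If two different scala-library versions appear on the classpath, the location printed for `scala.Predef$` reveals which jar the loader picked.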





[jira] [Created] (KAFKA-9333) Shim `core` module that targets default scala version (KIP-531)

2019-12-25 Thread Ismael Juma (Jira)
Ismael Juma created KAFKA-9333:
--

 Summary: Shim `core` module that targets default scala version 
(KIP-531)
 Key: KAFKA-9333
 URL: https://issues.apache.org/jira/browse/KAFKA-9333
 Project: Kafka
  Issue Type: Improvement
Reporter: Ismael Juma
Assignee: Ismael Juma
 Fix For: 2.5.0


Introduce a shim `core` module that targets the default Scala version. This is 
useful for applications that do not require a specific Scala version. Java 
applications that shade their Scala dependencies, or Java applications with a 
single Scala dependency, fall into this category. We will target Scala 2.13 in 
the initial version of this module.





[jira] [Assigned] (KAFKA-9312) KafkaProducer flush behavior does not guarantee completed sends under record batch splitting

2019-12-25 Thread Jonathan Santilli (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Santilli reassigned KAFKA-9312:


Assignee: Jonathan Santilli

> KafkaProducer flush behavior does not guarantee completed sends under record 
> batch splitting
> 
>
> Key: KAFKA-9312
> URL: https://issues.apache.org/jira/browse/KAFKA-9312
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 1.0.0, 1.1.0, 2.0.0, 2.1.0, 2.2.0, 2.3.0, 2.4.0
>Reporter: Lucas Bradstreet
>Assignee: Jonathan Santilli
>Priority: Major
>
> The KafkaProducer flush call guarantees that all records that have been sent 
> at time of the flush call will be either sent successfully or will result in 
> an error.
> The KafkaProducer will split record batches upon receiving a 
> MESSAGE_TOO_LARGE error from the broker. However, the flush behavior relies on 
> the accumulator checking the incomplete sends that exist at the time of the 
> flush call.
> {code:java}
> public void awaitFlushCompletion() throws InterruptedException {
> try {
> for (ProducerBatch batch : this.incomplete.copyAll())
> batch.produceFuture.await();
> } finally {
> this.flushesInProgress.decrementAndGet();
> }
> }{code}
> When large record batches are split, the producerFuture of the batch in 
> question is completed, and new batches are added to the incomplete list of 
> record batches. This breaks the flush guarantee, as awaitFlushCompletion will 
> return without awaiting the new split batches, while any pre-split batches 
> being awaited on above will already have been completed.
> This is demonstrated in a test case that can be found at 
> [https://github.com/lbradstreet/kafka/commit/733a683273c31823df354d0a785cb2c24365735a#diff-0b8da0c7ceecaa1f00486dadb53208b1R2339]
> This problem is likely present since record batch splitting was added as of 
> KAFKA-3995; KIP-126; 0.11.
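The race described above can be modeled in a few lines. This is a toy sketch, not Kafka's accumulator code: it only reproduces the ordering of "snapshot incomplete batches, then split after the snapshot" that lets flush return early.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Toy model of the reported race: flush() snapshots the incomplete batches,
// but a MESSAGE_TOO_LARGE split re-enqueues fresh batches after the
// snapshot, so flush returns with data still in flight.
public class FlushSketch {
    static class Batch {
        final CompletableFuture<Void> produceFuture = new CompletableFuture<>();
    }

    public static void main(String[] args) {
        List<Batch> incomplete = new ArrayList<>();
        Batch big = new Batch();
        incomplete.add(big);

        // flush() begins: mirrors incomplete.copyAll() in awaitFlushCompletion.
        List<Batch> snapshot = new ArrayList<>(incomplete);

        // Broker answers MESSAGE_TOO_LARGE: the big batch is split. Its
        // future completes and two fresh batches join the incomplete list.
        Batch half1 = new Batch(), half2 = new Batch();
        incomplete.remove(big);
        incomplete.add(half1);
        incomplete.add(half2);
        big.produceFuture.complete(null);

        // Awaiting the snapshot returns immediately (big is "done")...
        for (Batch b : snapshot)
            b.produceFuture.join();

        // ...even though the split halves have not been sent.
        System.out.println("flush returned; halves done? "
            + half1.produceFuture.isDone()); // prints: false
    }
}
```

In the real producer the split happens on the sender thread while flush waits on the caller thread, but the effect is the same: the awaited snapshot no longer covers all in-flight data.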





[jira] [Commented] (KAFKA-9254) Topic level configuration failed

2019-12-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17003140#comment-17003140
 ] 

ASF GitHub Bot commented on KAFKA-9254:
---

huxihx commented on pull request #7870: KAFKA-9254: Topic level configuration 
failed
URL: https://github.com/apache/kafka/pull/7870
 
 
   https://issues.apache.org/jira/browse/KAFKA-9254
   
   Currently, when a dynamic broker config is updated, the log config is 
recreated with an empty overridden-configs map. As a result, when dynamic 
broker configs are updated a second time, the topic-level configs are lost.
   
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 



> Topic level configuration failed
> 
>
> Key: KAFKA-9254
> URL: https://issues.apache.org/jira/browse/KAFKA-9254
> Project: Kafka
>  Issue Type: Bug
>  Components: config, log, replication
>Affects Versions: 2.0.1
>Reporter: fenghong
>Assignee: huxihx
>Priority: Critical
>
> We are engineers at Huobi and have encountered a Kafka bug: modifying 
> DynamicBrokerConfig more than two times invalidates unrelated topic-level 
> configuration.
> The bug can be reproduced as follows:
>  # Set the broker config min.insync.replicas=3 in server.properties
>  # Create topic test-1 and set the topic-level config min.insync.replicas=2
>  # Dynamically modify the configuration twice as shown below
> {code:java}
> bin/kafka-configs.sh --bootstrap-server xxx:9092 --entity-type brokers 
> --entity-default --alter --add-config log.message.timestamp.type=LogAppendTime
> bin/kafka-configs.sh --bootstrap-server xxx:9092 --entity-type brokers 
> --entity-default --alter --add-config log.retention.ms=60480
> {code}
>  # Stop a Kafka server and observe the exception shown below:
>  org.apache.kafka.common.errors.NotEnoughReplicasException: Number of insync 
> replicas for partition test-1-0 is [2], below required minimum [3]
>  
>  
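The failure mode described in this report can be reduced to a small model. The sketch below is not Kafka's code; it only shows why a config rebuilt from broker defaults keeps topic-level settings solely when the overrides are re-applied on every rebuild, which is the step the bug effectively skips on the second update.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal model of the reported bug: rebuilding the log config from broker
// defaults with an empty override map silently drops topic-level settings.
public class ConfigSketch {
    static Map<String, String> rebuild(Map<String, String> brokerDefaults,
                                       Map<String, String> topicOverrides) {
        Map<String, String> merged = new HashMap<>(brokerDefaults);
        merged.putAll(topicOverrides); // must happen on EVERY rebuild
        return merged;
    }

    public static void main(String[] args) {
        Map<String, String> defaults = Map.of("min.insync.replicas", "3");
        Map<String, String> topic = Map.of("min.insync.replicas", "2");

        // First dynamic update: overrides re-applied, topic keeps its value.
        System.out.println(rebuild(defaults, topic)
            .get("min.insync.replicas")); // prints: 2

        // Buggy second update: rebuilt with an empty override map, the
        // topic-level value reverts to the broker default.
        System.out.println(rebuild(defaults, Map.of())
            .get("min.insync.replicas")); // prints: 3
    }
}
```

That reversion from 2 to 3 is exactly what surfaces later as the NotEnoughReplicasException quoted in the reproduction steps.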



--
This message was sent by Atlassian Jira
(v8.3.4#803005)