[jira] [Created] (KAFKA-8308) Update jetty for security vulnerability CVE-2019-10241

2019-04-30 Thread Di Shang (JIRA)
Di Shang created KAFKA-8308:
---

 Summary: Update jetty for security vulnerability CVE-2019-10241
 Key: KAFKA-8308
 URL: https://issues.apache.org/jira/browse/KAFKA-8308
 Project: Kafka
  Issue Type: Task
  Components: core
Affects Versions: 2.2.0
Reporter: Di Shang


Kafka 2.2 uses jetty-*-9.4.14.v20181114, which is marked as vulnerable:

[https://github.com/apache/kafka/blob/2.2/gradle/dependencies.gradle#L58]

 

[https://nvd.nist.gov/vuln/detail/CVE-2019-10241]
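
A minimal sketch of the requested change in gradle/dependencies.gradle (the target version is an assumption: per NVD, CVE-2019-10241 is fixed in Jetty 9.4.16.v20190411 and later; the exact release Kafka picks may differ):

{code}
// gradle/dependencies.gradle -- sketch only; target version is an assumption
versions = [
  // ...
  jetty: "9.4.16.v20190411", // was "9.4.14.v20181114", flagged for CVE-2019-10241
  // ...
]
{code}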



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-5117) Kafka Connect REST endpoints reveal Password typed values

2019-04-30 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16830783#comment-16830783
 ] 

ASF GitHub Bot commented on KAFKA-5117:
---

rhauch commented on pull request #4441: [KAFKA-5117]: Password Mask to Kafka 
Connect REST Endpoint
URL: https://github.com/apache/kafka/pull/4441
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Kafka Connect REST endpoints reveal Password typed values
> -
>
> Key: KAFKA-5117
> URL: https://issues.apache.org/jira/browse/KAFKA-5117
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.2.0
>Reporter: Thomas Holmes
>Assignee: Chris Egerton
>Priority: Major
>  Labels: needs-kip
> Fix For: 2.2.0, 2.1.1, 2.0.2
>
>
> A Kafka Connect connector can define ConfigDef keys with the Password type, 
> which was added to prevent logging the values (instead, "[hidden]" is 
> logged).
> This change does not apply to the values returned by executing a GET on 
> {{connectors/\{connector-name\}}} and 
> {{connectors/\{connector-name\}/config}}. This creates an easily accessible 
> way for an attacker who has infiltrated your network to gain access to 
> potential secrets that should not be available.
> I have started on a code change that addresses this issue by parsing the 
> config values through the ConfigDef for the connector and returning their 
> output instead (which leads to the masking of Password typed configs as 
> [hidden]).
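
A minimal sketch of the masking behavior this approach relies on, using the public ConfigDef/Password types (the REST endpoint wiring in the actual PR is not shown; the config key and class name here are illustrative):

{code:java}
import org.apache.kafka.common.config.ConfigDef;
import java.util.Map;

public class PasswordMaskSketch {
    public static void main(String[] args) {
        ConfigDef def = new ConfigDef()
                .define("db.password", ConfigDef.Type.PASSWORD,
                        ConfigDef.Importance.HIGH, "Database password");
        // Parsing through the ConfigDef wraps the raw string in a Password,
        // whose toString() renders as "[hidden]".
        Map<String, Object> parsed = def.parse(Map.of("db.password", "s3cr3t"));
        System.out.println(parsed.get("db.password")); // prints [hidden]
    }
}
{code}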



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8307) Kafka Streams should provide some mechanism to determine topology equality and compatibility

2019-04-30 Thread Guozhang Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-8307:
-
Labels: user-experience  (was: )

> Kafka Streams should provide some mechanism to determine topology equality 
> and compatibility
> 
>
> Key: KAFKA-8307
> URL: https://issues.apache.org/jira/browse/KAFKA-8307
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: John Roesler
>Priority: Major
>  Labels: user-experience
>
> Currently, Streams provides no mechanism to compare two topologies. This is a 
> common operation when users want to have tests verifying that they don't 
> accidentally alter their topology. They would save the known-good topology 
> and then add a unit test verifying the current code against that known-good 
> state.
> However, because there's no way to do this comparison properly, everyone is 
> reduced to using the string format of the topology (from 
> `Topology#describe().toString()`). The major drawback is that the string 
> format is meant for human consumption. It is neither machine-parseable nor 
> stable. So, these compatibility tests are doomed to fail when any minor, 
> non-breaking, change is made either to the application, or to the library. 
> This trains everyone to update the test whenever it fails, undermining its 
> utility.
> We should fix this problem, and provide both a mechanism to serialize the 
> topology and to compare two topologies for compatibility. All in all, I think 
> we need:
> # a way to serialize/deserialize topology structure in a machine-parseable 
> format that is future-compatible. Offhand, I'd recommend serializing the 
> topology structure as JSON, and establishing a policy that attributes should 
> only be added to the object graph, never removed. Note, it's out of scope to 
> be able to actually run a deserialized topology; we only want to save and 
> load the structure (not the logic) to facilitate comparisons.
> # a method to verify the *equality* of two topologies... This method tells 
> you that the two topologies are structurally identical. We can't know if the 
> logic of any operator has changed, only if the structure of the graph is 
> changed. We can consider whether other graph properties, like serdes, should 
> be included.
> # a method to verify the *compatibility* of two topologies... This method 
> tells you that moving from topology A to topology B does not require an 
> application reset. Note that this operation is not commutative: 
> `A.compatibleWith(B)` does not imply `B.compatibleWith(A)`. We can discuss 
> whether `A.compatibleWith(B) && B.compatibleWith(A)` implies `A.equals(B)` (I 
> think not necessarily, because we may want "equality" to be stricter than 
> "compatibility").



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7885) Streams: TopologyDescription violates equals-hashCode contract.

2019-04-30 Thread John Roesler (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16830736#comment-16830736
 ] 

John Roesler commented on KAFKA-7885:
-

Hey [~MonCalamari], I've just created 
https://issues.apache.org/jira/browse/KAFKA-8307 . You might be interested.

> Streams: TopologyDescription violates equals-hashCode contract.
> ---
>
> Key: KAFKA-7885
> URL: https://issues.apache.org/jira/browse/KAFKA-7885
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Piotr Fras
>Assignee: Piotr Fras
>Priority: Minor
>  Labels: user-experience
>
> As per JavaSE documentation:
> > If two objects are *equal* according to the *equals*(Object) method, then 
> >calling the *hashCode* method on each of the two objects must produce the 
> >same integer result.
>  
> This is not the case for TopologyDescription.
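
For reference, the contract in code form (a sketch; topology1/topology2 are hypothetical, identically-built topologies):

{code:java}
TopologyDescription a = topology1.describe();
TopologyDescription b = topology2.describe();
// The equals-hashCode contract this ticket says is violated:
// equal objects must produce equal hash codes.
if (a.equals(b)) {
    assert a.hashCode() == b.hashCode(); // must hold, but did not
}
{code}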
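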



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8307) Kafka Streams should provide some mechanism to determine topology equality and compatibility

2019-04-30 Thread John Roesler (JIRA)
John Roesler created KAFKA-8307:
---

 Summary: Kafka Streams should provide some mechanism to determine 
topology equality and compatibility
 Key: KAFKA-8307
 URL: https://issues.apache.org/jira/browse/KAFKA-8307
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: John Roesler


Currently, Streams provides no mechanism to compare two topologies. This is a 
common operation when users want to have tests verifying that they don't 
accidentally alter their topology. They would save the known-good topology and 
then add a unit test verifying the current code against that known-good state.

However, because there's no way to do this comparison properly, everyone is 
reduced to using the string format of the topology (from 
`Topology#describe().toString()`). The major drawback is that the string format 
is meant for human consumption. It is neither machine-parseable nor stable. So, 
these compatibility tests are doomed to fail when any minor, non-breaking, 
change is made either to the application, or to the library. This trains 
everyone to update the test whenever it fails, undermining its utility.

We should fix this problem, and provide both a mechanism to serialize the 
topology and to compare two topologies for compatibility. All in all, I think 
we need:
# a way to serialize/deserialize topology structure in a machine-parseable 
format that is future-compatible. Offhand, I'd recommend serializing the 
topology structure as JSON, and establishing a policy that attributes should 
only be added to the object graph, never removed. Note, it's out of scope to be 
able to actually run a deserialized topology; we only want to save and load the 
structure (not the logic) to facilitate comparisons.
# a method to verify the *equality* of two topologies... This method tells you 
that the two topologies are structurally identical. We can't know if the logic 
of any operator has changed, only if the structure of the graph is changed. We 
can consider whether other graph properties, like serdes, should be included.
# a method to verify the *compatibility* of two topologies... This method tells 
you that moving from topology A to topology B does not require an application 
reset. Note that this operation is not commutative: `A.compatibleWith(B)` does 
not imply `B.compatibleWith(A)`. We can discuss whether `A.compatibleWith(B) && 
B.compatibleWith(A)` implies `A.equals(B)` (I think not necessarily, because we 
may want "equality" to be stricter than "compatibility").




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8289) KTable&lt;Windowed&lt;String&gt;, Long&gt; can't be suppressed

2019-04-30 Thread John Roesler (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16830587#comment-16830587
 ] 

John Roesler commented on KAFKA-8289:
-

Ok, I've submitted a bugfix PR: https://github.com/apache/kafka/pull/6654

[~xiaoxiaoliner], if you have a chance, can you verify that that PR fixes the 
issue for you? Also, if you're available for a code review, I'd appreciate your 
feedback.

Thanks,
-John

> KTable&lt;Windowed&lt;String&gt;, Long&gt; can't be suppressed
> ---
>
> Key: KAFKA-8289
> URL: https://issues.apache.org/jira/browse/KAFKA-8289
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0, 2.2.0, 2.1.1
> Environment: Broker on a Linux, stream app on my win10 laptop. 
> I add one row log.message.timestamp.type=LogAppendTime to my broker's 
> server.properties. stream app all default config.
>Reporter: Xiaolin Jia
>Assignee: John Roesler
>Priority: Blocker
> Fix For: 2.3.0, 2.1.2, 2.2.1
>
>
> I wrote a simple stream app following the official developer guide [Stream 
> DSL|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#window-final-results],
> but I got more than one [Window Final 
> Results|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31]
> from a session time window.
> Time ticker A sends (4, A) every 25s and time ticker B sends (4, B) every 
> 25s, both to the same topic.
> Below is my stream app code:
> {code:java}
> kstreams[0]
> .peek((k, v) -> log.info("--> ping, k={},v={}", k, v))
> .groupBy((k, v) -> v, Grouped.with(Serdes.String(), Serdes.String()))
> .windowedBy(SessionWindows.with(Duration.ofSeconds(100)).grace(Duration.ofMillis(20)))
> .count()
> .suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
> .toStream().peek((k, v) -> log.info("window={},k={},v={}", k.window(), 
> k.key(), v));
> {code}
> {{Here is my log output:}}
> {noformat}
> 2019-04-24 20:00:26.142  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:00:47.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556106587744, 
> endMs=1556107129191},k=A,v=20
> 2019-04-24 20:00:51.071  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:16.065  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:41.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:06.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:31.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:56.208  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:21.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:46.078  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:04.684  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:11.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:19.371  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107426409},k=B,v=9
> 2019-04-24 20:04:19.372  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107445012},k=A,v=1
> 2019-04-24 20:04:29.604  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:36.067  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:49.715  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107451397},k=B,v=10
> 2019-04-24 20:04:49.716  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107469935},k=A,v=2
> 2019-04-24 20:04:54.593  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:01.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:05:19.599  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:20.045  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107476398},k=B,v=11
> 

[jira] [Commented] (KAFKA-8289) KTable&lt;Windowed&lt;String&gt;, Long&gt; can't be suppressed

2019-04-30 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16830577#comment-16830577
 ] 

ASF GitHub Bot commented on KAFKA-8289:
---

vvcephei commented on pull request #6654: KAFKA-8289: Fix Session Expiration 
and Suppression
URL: https://github.com/apache/kafka/pull/6654
 
 
   Fix two problems in Streams:
   * Session windows expired prematurely (off-by-one error), since the window 
end is inclusive, unlike other windows
   * Suppress duration for sessions incorrectly waited only the grace period, 
but session windows aren't closed until `gracePeriod + sessionGap`
   
   Update the tests accordingly
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
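
A sketch of the corrected close-time arithmetic described above (illustrative variable names and values, not the PR's actual code):

{code:java}
long sessionGapMs = 100_000L;          // SessionWindows.with(Duration.ofSeconds(100))
long graceMs = 20L;                    // .grace(Duration.ofMillis(20))
long windowEndMs = 1_556_107_129_191L; // inclusive session end, unlike other windows
// Bug: suppression emitted final results at windowEndMs + graceMs.
// Fix: a session can still be extended until the gap elapses, so it only
// closes at windowEndMs + sessionGapMs + graceMs.
long closeTimeMs = windowEndMs + sessionGapMs + graceMs;
{code}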
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> KTable&lt;Windowed&lt;String&gt;, Long&gt; can't be suppressed
> ---
>
> Key: KAFKA-8289
> URL: https://issues.apache.org/jira/browse/KAFKA-8289
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0, 2.2.0, 2.1.1
> Environment: Broker on a Linux, stream app on my win10 laptop. 
> I add one row log.message.timestamp.type=LogAppendTime to my broker's 
> server.properties. stream app all default config.
>Reporter: Xiaolin Jia
>Assignee: John Roesler
>Priority: Blocker
> Fix For: 2.3.0, 2.1.2, 2.2.1
>
>
> I wrote a simple stream app following the official developer guide [Stream 
> DSL|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#window-final-results],
> but I got more than one [Window Final 
> Results|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31]
> from a session time window.
> Time ticker A sends (4, A) every 25s and time ticker B sends (4, B) every 
> 25s, both to the same topic.
> Below is my stream app code:
> {code:java}
> kstreams[0]
> .peek((k, v) -> log.info("--> ping, k={},v={}", k, v))
> .groupBy((k, v) -> v, Grouped.with(Serdes.String(), Serdes.String()))
> .windowedBy(SessionWindows.with(Duration.ofSeconds(100)).grace(Duration.ofMillis(20)))
> .count()
> .suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
> .toStream().peek((k, v) -> log.info("window={},k={},v={}", k.window(), 
> k.key(), v));
> {code}
> {{Here is my log output:}}
> {noformat}
> 2019-04-24 20:00:26.142  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:00:47.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556106587744, 
> endMs=1556107129191},k=A,v=20
> 2019-04-24 20:00:51.071  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:16.065  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:41.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:06.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:31.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:56.208  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:21.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:46.078  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:04.684  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:11.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:19.371  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107426409},k=B,v=9
> 2019-04-24 20:04:19.372  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107445012},k=A,v=1
> 2019-04-24 20:04:29.604  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:36.067  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:49.715  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : 

[jira] [Assigned] (KAFKA-8306) Ensure consistency of checkpointed log start offset and current log end offset

2019-04-30 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson reassigned KAFKA-8306:
--

Assignee: Dhruvil Shah

> Ensure consistency of checkpointed log start offset and current log end offset
> --
>
> Key: KAFKA-8306
> URL: https://issues.apache.org/jira/browse/KAFKA-8306
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Dhruvil Shah
>Priority: Major
>
> When initializing a log, we may use the checkpointed log start offset. We 
> need to ensure that the log end offset is set consistently with this value 
> (i.e. it must be greater than or equal to it). This may not always be true if 
> the log data is removed or has become corrupted. As a simple experiment, you 
> can try the following steps to reproduce the problem:
>  # Write some data to the partition
>  # Use DeleteRecords to advance log start
>  # Shutdown the broker
>  # Delete the log directory
>  # Restart the broker
> You will see something like this in the logs:
> {code:java}
> [2019-04-29 11:55:21,259] INFO [Log partition=foo-0, dir=/tmp/kafka-logs] 
> Completed load of log with 1 segments, log start offset 10 and log end offset 
> 0 in 36 ms (kafka.log.Log){code}
> This may be the cause of KAFKA-8255, but I am not sure yet.
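
For step 2, a minimal sketch using the admin client's DeleteRecords API (broker address, topic, and offset are illustrative, chosen to match the log line above):

{code:java}
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.RecordsToDelete;
import org.apache.kafka.common.TopicPartition;
import java.util.Collections;
import java.util.Properties;

public class AdvanceLogStart {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Step 2 of the repro: advance foo-0's log start offset to 10,
            // matching the "log start offset 10" in the log line above.
            admin.deleteRecords(Collections.singletonMap(
                    new TopicPartition("foo", 0),
                    RecordsToDelete.beforeOffset(10L))).all().get();
        }
    }
}
{code}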



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-8134) ProducerConfig.LINGER_MS_CONFIG undocumented breaking change in kafka-clients 2.1

2019-04-30 Thread Colin P. McCabe (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin P. McCabe resolved KAFKA-8134.

Resolution: Fixed

> ProducerConfig.LINGER_MS_CONFIG undocumented breaking change in kafka-clients 
> 2.1
> -
>
> Key: KAFKA-8134
> URL: https://issues.apache.org/jira/browse/KAFKA-8134
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.1.0
>Reporter: Sam Lendle
>Assignee: Dhruvil Shah
>Priority: Major
> Fix For: 2.3.0, 2.1.2, 2.2.1
>
>
> Prior to 2.1, the type of the "linger.ms" config was Long, but was changed to 
> Integer in 2.1.0 ([https://github.com/apache/kafka/pull/5270]) A config using 
> a Long value for that parameter which works with kafka-clients < 2.1 will 
> cause a ConfigException to be thrown when constructing a KafkaProducer if 
> kafka-clients is upgraded to >= 2.1.
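
A sketch of the breakage described above (broker address and serializers are illustrative):

{code:java}
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import java.util.Properties;

public class LingerMsBreakage {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.LINGER_MS_CONFIG, 5L); // Long: fine on kafka-clients < 2.1
        // On kafka-clients >= 2.1.0, "linger.ms" is parsed as an Integer, so
        // this constructor throws org.apache.kafka.common.config.ConfigException.
        new KafkaProducer<>(props, new StringSerializer(), new StringSerializer());
    }
}
{code}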



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8289) KTable&lt;Windowed&lt;String&gt;, Long&gt; can't be suppressed

2019-04-30 Thread Xiaolin Jia (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16830365#comment-16830365
 ] 

Xiaolin Jia commented on KAFKA-8289:


[~vvcephei] Thank you for your reply. Based on your suggestion and my 
recording rate, I temporarily set the gap and the grace period to the same 
number of seconds. It's imperfect but effective: I now get exactly one window 
final result.
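
A sketch of that workaround against the code in the issue description (only the grace value changes; whether it fully masks the bug depends on the recording rate, as the commenter notes):

{code:java}
// Workaround: set the grace period equal to the session gap, so the
// grace-only wait in suppress() still spans the whole gap.
.windowedBy(SessionWindows.with(Duration.ofSeconds(100))
        .grace(Duration.ofSeconds(100)))
{code}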

> KTable&lt;Windowed&lt;String&gt;, Long&gt; can't be suppressed
> ---
>
> Key: KAFKA-8289
> URL: https://issues.apache.org/jira/browse/KAFKA-8289
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0, 2.2.0, 2.1.1
> Environment: Broker on a Linux, stream app on my win10 laptop. 
> I add one row log.message.timestamp.type=LogAppendTime to my broker's 
> server.properties. stream app all default config.
>Reporter: Xiaolin Jia
>Assignee: John Roesler
>Priority: Blocker
> Fix For: 2.3.0, 2.1.2, 2.2.1
>
>
> I wrote a simple stream app following the official developer guide [Stream 
> DSL|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#window-final-results],
> but I got more than one [Window Final 
> Results|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31]
> from a session time window.
> Time ticker A sends (4, A) every 25s and time ticker B sends (4, B) every 
> 25s, both to the same topic.
> Below is my stream app code:
> {code:java}
> kstreams[0]
> .peek((k, v) -> log.info("--> ping, k={},v={}", k, v))
> .groupBy((k, v) -> v, Grouped.with(Serdes.String(), Serdes.String()))
> .windowedBy(SessionWindows.with(Duration.ofSeconds(100)).grace(Duration.ofMillis(20)))
> .count()
> .suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
> .toStream().peek((k, v) -> log.info("window={},k={},v={}", k.window(), 
> k.key(), v));
> {code}
> {{Here is my log output:}}
> {noformat}
> 2019-04-24 20:00:26.142  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:00:47.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556106587744, 
> endMs=1556107129191},k=A,v=20
> 2019-04-24 20:00:51.071  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:16.065  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:41.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:06.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:31.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:56.208  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:21.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:46.078  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:04.684  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:11.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:19.371  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107426409},k=B,v=9
> 2019-04-24 20:04:19.372  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107445012},k=A,v=1
> 2019-04-24 20:04:29.604  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:36.067  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:49.715  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107451397},k=B,v=10
> 2019-04-24 20:04:49.716  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107469935},k=A,v=2
> 2019-04-24 20:04:54.593  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:01.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:05:19.599  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:20.045  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107476398},k=B,v=11
> 2019-04-24 20:05:20.047  INFO --- 

[jira] [Commented] (KAFKA-7656) ReplicaManager fetch fails on leader due to long/integer overflow

2019-04-30 Thread Ricardo Boccato Alves (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16830321#comment-16830321
 ] 

Ricardo Boccato Alves commented on KAFKA-7656:
--

Hi,

I am having the same problem with Kafka 2.1.1 (using confluent docker image 
5.1.2). We have 3 brokers with min.insync.replicas = 2.

Is there a workaround for this issue? I am hitting it in production.

Here is the error log, in case it is of any help:

 

[2019-04-29 15:36:19,452] ERROR [ReplicaManager broker=1] Error processing 
fetch operation on partition __consumer_offsets-13, offset 0 
(kafka.server.ReplicaManager)
java.lang.IllegalArgumentException: Invalid max size -2147483648 for log read 
from segment FileRecords(file= 
/var/lib/kafka/data/__consumer_offsets-13/.log, start=0, 
end=2147483647)
at kafka.log.LogSegment.read(LogSegment.scala:274)
at kafka.log.Log$$anonfun$read$2.apply(Log.scala:1190)
at kafka.log.Log$$anonfun$read$2.apply(Log.scala:1145)
at kafka.log.Log.maybeHandleIOException(Log.scala:1927)
at kafka.log.Log.read(Log.scala:1145)
at kafka.cluster.Partition$$anonfun$readRecords$1.apply(Partition.scala:790)
at kafka.cluster.Partition$$anonfun$readRecords$1.apply(Partition.scala:767)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:257)
at kafka.cluster.Partition.readRecords(Partition.scala:767)
at 
kafka.server.ReplicaManager.kafka$server$ReplicaManager$$read$1(ReplicaManager.scala:898)
at 
kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:965)
at 
kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:964)
at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at kafka.server.ReplicaManager.readFromLocalLog(ReplicaManager.scala:964)
at kafka.server.ReplicaManager.readFromLog$1(ReplicaManager.scala:817)
at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:829)
at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:693)
at kafka.server.KafkaApis.handle(KafkaApis.scala:114)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)
at java.lang.Thread.run(Thread.java:748)
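
For reference, a sketch of the arithmetic behind that number (not the broker's actual code): -2147483648 is Integer.MIN_VALUE, which is what an int wraps to when it overflows past Integer.MAX_VALUE.

{code:java}
public class OverflowSketch {
    public static void main(String[] args) {
        // end=2147483647 is Integer.MAX_VALUE; adding 1 wraps around to
        // Integer.MIN_VALUE, the "Invalid max size -2147483648" seen above.
        int maxSize = Integer.MAX_VALUE + 1;
        System.out.println(maxSize); // -2147483648
    }
}
{code}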

> ReplicaManager fetch fails on leader due to long/integer overflow
> -
>
> Key: KAFKA-7656
> URL: https://issues.apache.org/jira/browse/KAFKA-7656
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.0.1
> Environment: Linux 3.10.0-693.el7.x86_64 #1 SMP Thu Jul 6 19:56:57 
> EDT 2017 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Patrick Haas
>Assignee: Jose Armando Garcia Sancio
>Priority: Major
>
> (Note: From 2.0.1-cp1 from confluent distribution)
> {{[2018-11-19 21:13:13,687] ERROR [ReplicaManager broker=103] Error 
> processing fetch operation on partition __consumer_offsets-20, offset 0 
> (kafka.server.ReplicaManager)}}
> {{java.lang.IllegalArgumentException: Invalid max size -2147483648 for log 
> read from segment FileRecords(file= 
> /prod/kafka/data/kafka-logs/__consumer_offsets-20/.log, 
> start=0, end=2147483647)}}
> {{ at kafka.log.LogSegment.read(LogSegment.scala:274)}}
> {{ at kafka.log.Log$$anonfun$read$2.apply(Log.scala:1159)}}
> {{ at kafka.log.Log$$anonfun$read$2.apply(Log.scala:1114)}}
> {{ at kafka.log.Log.maybeHandleIOException(Log.scala:1842)}}
> {{ at kafka.log.Log.read(Log.scala:1114)}}
> {{ at 
> kafka.server.ReplicaManager.kafka$server$ReplicaManager$$read$1(ReplicaManager.scala:912)}}
> {{ at 
> kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:974)}}
> {{ at 
> kafka.server.ReplicaManager$$anonfun$readFromLocalLog$1.apply(ReplicaManager.scala:973)}}
> {{ at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)}}
> {{ at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)}}
> {{ at kafka.server.ReplicaManager.readFromLocalLog(ReplicaManager.scala:973)}}
> {{ at kafka.server.ReplicaManager.readFromLog$1(ReplicaManager.scala:802)}}
> {{ at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:815)}}
> {{ at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:685)}}
> {{ at kafka.server.KafkaApis.handle(KafkaApis.scala:114)}}
> {{ at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)}}
> {{ at java.lang.Thread.run(Thread.java:748)}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-8030) Flaky Test TopicCommandWithAdminClientTest#testDescribeUnderMinIsrPartitionsMixed

2019-04-30 Thread Viktor Somogyi-Vass (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16830302#comment-16830302
 ] 

Viktor Somogyi-Vass edited comment on KAFKA-8030 at 4/30/19 1:35 PM:
-

Managed to reproduce a failure again, this time with logs. Now it's better: 
just 1 hour and 100-something iterations :D.

Attached the info level logs. Will look into it later this week.

[^testDescribeUnderMinIsrPartitionsMixed.log]


was (Author: viktorsomogyi):
Managed to reproduce a failure again with logs. Now it's better, just 1 hour 
and 100 something tests :D.

Attached the info level logs. Will look into it later this week.

[^testDescribeUnderMinIsrPartitionsMixed.log]

> Flaky Test 
> TopicCommandWithAdminClientTest#testDescribeUnderMinIsrPartitionsMixed
> -
>
> Key: KAFKA-8030
> URL: https://issues.apache.org/jira/browse/KAFKA-8030
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Viktor Somogyi-Vass
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
> Attachments: testDescribeUnderMinIsrPartitionsMixed.log
>
>
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/2830/testReport/junit/kafka.admin/TopicCommandWithAdminClientTest/testDescribeUnderMinIsrPartitionsMixed/]
> {quote}java.lang.AssertionError at org.junit.Assert.fail(Assert.java:87) at 
> org.junit.Assert.assertTrue(Assert.java:42) at 
> org.junit.Assert.assertTrue(Assert.java:53) at 
> kafka.admin.TopicCommandWithAdminClientTest.testDescribeUnderMinIsrPartitionsMixed(TopicCommandWithAdminClientTest.scala:602){quote}
> STDERR
> {quote}Option "[replica-assignment]" can't be used with option 
> "[partitions]"{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8030) Flaky Test TopicCommandWithAdminClientTest#testDescribeUnderMinIsrPartitionsMixed

2019-04-30 Thread Viktor Somogyi-Vass (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viktor Somogyi-Vass updated KAFKA-8030:
---
Attachment: testDescribeUnderMinIsrPartitionsMixed.log

> Flaky Test 
> TopicCommandWithAdminClientTest#testDescribeUnderMinIsrPartitionsMixed
> -
>
> Key: KAFKA-8030
> URL: https://issues.apache.org/jira/browse/KAFKA-8030
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Viktor Somogyi-Vass
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
> Attachments: testDescribeUnderMinIsrPartitionsMixed.log
>
>
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/2830/testReport/junit/kafka.admin/TopicCommandWithAdminClientTest/testDescribeUnderMinIsrPartitionsMixed/]
> {quote}java.lang.AssertionError at org.junit.Assert.fail(Assert.java:87) at 
> org.junit.Assert.assertTrue(Assert.java:42) at 
> org.junit.Assert.assertTrue(Assert.java:53) at 
> kafka.admin.TopicCommandWithAdminClientTest.testDescribeUnderMinIsrPartitionsMixed(TopicCommandWithAdminClientTest.scala:602){quote}
> STDERR
> {quote}Option "[replica-assignment]" can't be used with option 
> "[partitions]"{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8030) Flaky Test TopicCommandWithAdminClientTest#testDescribeUnderMinIsrPartitionsMixed

2019-04-30 Thread Viktor Somogyi-Vass (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16830302#comment-16830302
 ] 

Viktor Somogyi-Vass commented on KAFKA-8030:


Managed to reproduce a failure again, this time with logs. Now it's better: 
just 1 hour and 100-something tests :D.

Attached the info level logs. Will look into it later this week.

[^testDescribeUnderMinIsrPartitionsMixed.log]

> Flaky Test 
> TopicCommandWithAdminClientTest#testDescribeUnderMinIsrPartitionsMixed
> -
>
> Key: KAFKA-8030
> URL: https://issues.apache.org/jira/browse/KAFKA-8030
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Viktor Somogyi-Vass
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
> Attachments: testDescribeUnderMinIsrPartitionsMixed.log
>
>
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/2830/testReport/junit/kafka.admin/TopicCommandWithAdminClientTest/testDescribeUnderMinIsrPartitionsMixed/]
> {quote}java.lang.AssertionError at org.junit.Assert.fail(Assert.java:87) at 
> org.junit.Assert.assertTrue(Assert.java:42) at 
> org.junit.Assert.assertTrue(Assert.java:53) at 
> kafka.admin.TopicCommandWithAdminClientTest.testDescribeUnderMinIsrPartitionsMixed(TopicCommandWithAdminClientTest.scala:602){quote}
> STDERR
> {quote}Option "[replica-assignment]" can't be used with option 
> "[partitions]"{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7697) Possible deadlock in kafka.cluster.Partition

2019-04-30 Thread Rajini Sivaram (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16830251#comment-16830251
 ] 

Rajini Sivaram commented on KAFKA-7697:
---

[~little brother ma] [~boge] [~yanrui] Can you provide full thread dumps of a 
broker that encountered this issue with 2.1.1? Thank you!

> Possible deadlock in kafka.cluster.Partition
> 
>
> Key: KAFKA-7697
> URL: https://issues.apache.org/jira/browse/KAFKA-7697
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Gian Merlino
>Assignee: Rajini Sivaram
>Priority: Blocker
> Fix For: 2.2.0, 2.1.1
>
> Attachments: threaddump.txt
>
>
> After upgrading a fairly busy broker from 0.10.2.0 to 2.1.0, it locked up 
> within a few minutes (by "locked up" I mean that all request handler threads 
> were busy, and other brokers reported that they couldn't communicate with 
> it). I restarted it a few times and it did the same thing each time. After 
> downgrading to 0.10.2.0, the broker was stable. I attached a thread dump from 
> the last attempt on 2.1.0 that shows lots of kafka-request-handler- threads 
> trying to acquire the leaderIsrUpdateLock lock in kafka.cluster.Partition.
> It jumps out that there are two threads that already have some read lock 
> (can't tell which one) and are trying to acquire a second one (on two 
> different read locks: 0x000708184b88 and 0x00070821f188): 
> kafka-request-handler-1 and kafka-request-handler-4. Both are handling a 
> produce request, and in the process of doing so, are calling 
> Partition.fetchOffsetSnapshot while trying to complete a DelayedFetch. At the 
> same time, both of those locks have writers from other threads waiting on 
> them (kafka-request-handler-2 and kafka-scheduler-6). Neither of those locks 
> appear to have writers that hold them (if only because no threads in the dump 
> are deep enough in inWriteLock to indicate that).
> ReentrantReadWriteLock in nonfair mode prioritizes waiting writers over 
> readers. Is it possible that kafka-request-handler-1 and 
> kafka-request-handler-4 are each trying to read-lock the partition that is 
> currently locked by the other one, and they're both parked waiting for 
> kafka-request-handler-2 and kafka-scheduler-6 to get write locks, which they 
> never will, because the former two threads own read locks and aren't giving 
> them up?
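
A minimal runnable sketch of the hazard described above (illustrative only, not the broker's code; thread names in the comments map to the handler/scheduler threads from the dump, and the sleeps are sketch-level synchronization):

{code:java}
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwLockDeadlockSketch {
    // Stand-ins for two partitions' leaderIsrUpdateLock (non-fair by default).
    static final ReentrantReadWriteLock lockA = new ReentrantReadWriteLock();
    static final ReentrantReadWriteLock lockB = new ReentrantReadWriteLock();

    public static void main(String[] args) {
        spawnReader(lockA, lockB); // like kafka-request-handler-1
        spawnReader(lockB, lockA); // like kafka-request-handler-4
        spawnWriter(lockA);        // like kafka-request-handler-2
        spawnWriter(lockB);        // like kafka-scheduler-6
        // A non-fair RRWL parks new readers behind a queued writer, so both
        // reader threads block on their second read lock while both writers
        // wait on the read locks already held: deadlock, with no thread ever
        // holding a write lock.
    }

    static void spawnReader(ReentrantReadWriteLock first, ReentrantReadWriteLock second) {
        new Thread(() -> {
            first.readLock().lock();
            sleep(400); // let the writers enqueue before the second acquisition
            second.readLock().lock(); // parks behind the queued writer forever
        }).start();
    }

    static void spawnWriter(ReentrantReadWriteLock lock) {
        new Thread(() -> {
            sleep(200); // let the readers take their first read locks
            lock.writeLock().lock(); // waits on the never-released read lock
        }).start();
    }

    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) { }
    }
}
{code}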



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8030) Flaky Test TopicCommandWithAdminClientTest#testDescribeUnderMinIsrPartitionsMixed

2019-04-30 Thread Viktor Somogyi-Vass (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16830086#comment-16830086
 ] 

Viktor Somogyi-Vass commented on KAFKA-8030:


I think I managed to reproduce it, or at least something like it. Running it 
for more than 17 hours, the 204th iteration failed.

> Flaky Test 
> TopicCommandWithAdminClientTest#testDescribeUnderMinIsrPartitionsMixed
> -
>
> Key: KAFKA-8030
> URL: https://issues.apache.org/jira/browse/KAFKA-8030
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Viktor Somogyi-Vass
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/2830/testReport/junit/kafka.admin/TopicCommandWithAdminClientTest/testDescribeUnderMinIsrPartitionsMixed/]
> {quote}java.lang.AssertionError at org.junit.Assert.fail(Assert.java:87) at 
> org.junit.Assert.assertTrue(Assert.java:42) at 
> org.junit.Assert.assertTrue(Assert.java:53) at 
> kafka.admin.TopicCommandWithAdminClientTest.testDescribeUnderMinIsrPartitionsMixed(TopicCommandWithAdminClientTest.scala:602){quote}
> STDERR
> {quote}Option "[replica-assignment]" can't be used with option 
> "[partitions]"{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)