[jira] [Commented] (KAFKA-6987) Reimplement KafkaFuture with CompletableFuture

2021-01-13 Thread Andras Beni (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-6987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17264038#comment-17264038
 ] 

Andras Beni commented on KAFKA-6987:


[~tombentley] feel free to solve this. I don't plan to work on it.

> Reimplement KafkaFuture with CompletableFuture
> --
>
> Key: KAFKA-6987
> URL: https://issues.apache.org/jira/browse/KAFKA-6987
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 2.0.0
>Reporter: Andras Beni
>Priority: Minor
>
> KafkaFuture documentation states:
> {{This will eventually become a thin shim on top of Java 8's 
> CompletableFuture.}}
> With Java 7 support dropped in 2.0, it is time to get rid of custom code.
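
For illustration only, a minimal sketch of what such a shim could look like, with KafkaFuture-style methods delegating to an internal CompletableFuture. The class name ShimKafkaFuture and the choice of methods are assumptions, not the actual Kafka implementation:

{code:java}
// Hypothetical sketch, not Kafka code: a future whose state and composition
// are delegated entirely to java.util.concurrent.CompletableFuture.
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.function.Function;

public class ShimKafkaFuture<T> {
    private final CompletableFuture<T> inner = new CompletableFuture<>();

    public boolean complete(T value) {
        return inner.complete(value);
    }

    public boolean completeExceptionally(Throwable t) {
        return inner.completeExceptionally(t);
    }

    public T get() throws InterruptedException, ExecutionException {
        return inner.get();
    }

    public <R> ShimKafkaFuture<R> thenApply(Function<T, R> fn) {
        ShimKafkaFuture<R> result = new ShimKafkaFuture<>();
        inner.thenApply(fn).whenComplete((value, error) -> {
            if (error != null) {
                result.inner.completeExceptionally(error);
            } else {
                result.inner.complete(value);
            }
        });
        return result;
    }
}
{code}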



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-7631) NullPointerException when SCRAM is allowed but ScramLoginModule is not in broker's jaas.conf

2018-12-06 Thread Andras Beni (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16711771#comment-16711771
 ] 

Andras Beni commented on KAFKA-7631:


[~mrsrinivas] none that I know of

> NullPointerException when SCRAM is allowed but ScramLoginModule is not in 
> broker's jaas.conf
> ---
>
> Key: KAFKA-7631
> URL: https://issues.apache.org/jira/browse/KAFKA-7631
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.0.0
>Reporter: Andras Beni
>Assignee: Attila Sasvari
>Priority: Minor
>
> When a user wants to use delegation tokens and lists {{SCRAM}} in 
> {{sasl.enabled.mechanisms}}, but does not add {{ScramLoginModule}} to the 
> broker's JAAS configuration, a null pointer exception is thrown on the 
> broker side and the connection is closed.
> A meaningful error message should be logged and sent back to the client.
> {code}
> java.lang.NullPointerException
> at 
> org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.handleSaslToken(SaslServerAuthenticator.java:376)
> at 
> org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.authenticate(SaslServerAuthenticator.java:262)
> at 
> org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:127)
> at 
> org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:489)
> at org.apache.kafka.common.network.Selector.poll(Selector.java:427)
> at kafka.network.Processor.poll(SocketServer.scala:679)
> at kafka.network.Processor.run(SocketServer.scala:584)
> at java.lang.Thread.run(Thread.java:748)
> {code}
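
For illustration, a hedged sketch of the broker JAAS entry whose absence triggers the NPE. The section name follows the standard broker configuration, and the credentials are placeholders; as noted in KAFKA-7630 below, they may be unnecessary when SCRAM is used only for delegation tokens:

{code}
// Illustrative sketch only; "admin"/"admin-secret" are placeholder credentials.
KafkaServer {
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="admin"
    password="admin-secret";
};
{code}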



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7062) Simplify MirrorMaker loop after removal of old consumer support

2018-11-18 Thread Andras Beni (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16690883#comment-16690883
 ] 

Andras Beni commented on KAFKA-7062:


[~Junyu Chen] Feel free to implement this change.
Note that there is a discussion going on about completely rewriting MirrorMaker 
at KAFKA-7500 
([KIP-382|https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0]).

> Simplify MirrorMaker loop after removal of old consumer support
> ---
>
> Key: KAFKA-7062
> URL: https://issues.apache.org/jira/browse/KAFKA-7062
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Andras Beni
>Priority: Minor
>  Labels: newbie
>
> Once KAFKA-2983 is merged, we can simplify the MirrorMaker loop to be a 
> single loop instead of two nested loops. In the old consumer, offsets would 
> still be committed even if there were no messages, so receive() could block. 
> The new consumer doesn't have this issue.
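
As a rough illustration of the single-loop shape being described (not MirrorMaker's actual code; the class, method, and topic parameter below are hypothetical):

{code:java}
// Hypothetical sketch of a single consume-produce loop; with the new consumer,
// poll() returns promptly even when there are no messages, so no nested
// receive() loop is needed to keep offsets moving.
import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

class SingleLoopMirror {
    void run(KafkaConsumer<byte[], byte[]> consumer,
             KafkaProducer<byte[], byte[]> producer,
             String targetTopic) {
        while (true) {
            for (ConsumerRecord<byte[], byte[]> record : consumer.poll(Duration.ofSeconds(1))) {
                producer.send(new ProducerRecord<>(targetTopic, record.key(), record.value()));
            }
            consumer.commitSync();
        }
    }
}
{code}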



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-3362) Update protocol schema and field doc strings

2018-11-18 Thread Andras Beni (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Beni reassigned KAFKA-3362:
--

Assignee: (was: Andras Beni)

> Update protocol schema and field doc strings
> 
>
> Key: KAFKA-3362
> URL: https://issues.apache.org/jira/browse/KAFKA-3362
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Priority: Major
>
> In KAFKA-3361, auto generation of docs based on the definitions in 
> Protocol.java was added. There are some inconsistencies and missing 
> information in the docs strings in the code vs. the [wiki 
> page|https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol].
>  
> The code documentation strings should be reviewed and updated to be complete 
> and accurate. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-6987) Reimplement KafkaFuture with CompletableFuture

2018-11-18 Thread Andras Beni (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Beni reassigned KAFKA-6987:
--

Assignee: (was: Andras Beni)

> Reimplement KafkaFuture with CompletableFuture
> -
>
> Key: KAFKA-6987
> URL: https://issues.apache.org/jira/browse/KAFKA-6987
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 2.0.0
>Reporter: Andras Beni
>Priority: Minor
>
> KafkaFuture documentation states:
> {{This will eventually become a thin shim on top of Java 8's 
> CompletableFuture.}}
> With Java 7 support dropped in 2.0, it is time to get rid of custom code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-7062) Simplify MirrorMaker loop after removal of old consumer support

2018-11-18 Thread Andras Beni (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Beni reassigned KAFKA-7062:
--

Assignee: (was: Andras Beni)

> Simplify MirrorMaker loop after removal of old consumer support
> ---
>
> Key: KAFKA-7062
> URL: https://issues.apache.org/jira/browse/KAFKA-7062
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Priority: Minor
>  Labels: newbie
>
> Once KAFKA-2983 is merged, we can simplify the MirrorMaker loop to be a 
> single loop instead of two nested loops. In the old consumer, offsets would 
> still be committed even if there were no messages, so receive() could block. 
> The new consumer doesn't have this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7631) NullPointerException when SCRAM is allowed but ScramLoginModule is not in broker's jaas.conf

2018-11-15 Thread Andras Beni (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687631#comment-16687631
 ] 

Andras Beni commented on KAFKA-7631:


[~asasvari], [~viktorsomogyi] you might want to look at this issue.

> NullPointerException when SCRAM is allowed but ScramLoginModule is not in 
> broker's jaas.conf
> ---
>
> Key: KAFKA-7631
> URL: https://issues.apache.org/jira/browse/KAFKA-7631
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.0.0
>Reporter: Andras Beni
>Priority: Minor
>
> When a user wants to use delegation tokens and lists {{SCRAM}} in 
> {{sasl.enabled.mechanisms}}, but does not add {{ScramLoginModule}} to the 
> broker's JAAS configuration, a null pointer exception is thrown on the 
> broker side and the connection is closed.
> A meaningful error message should be logged and sent back to the client.
> {code}
> java.lang.NullPointerException
> at 
> org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.handleSaslToken(SaslServerAuthenticator.java:376)
> at 
> org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.authenticate(SaslServerAuthenticator.java:262)
> at 
> org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:127)
> at 
> org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:489)
> at org.apache.kafka.common.network.Selector.poll(Selector.java:427)
> at kafka.network.Processor.poll(SocketServer.scala:679)
> at kafka.network.Processor.run(SocketServer.scala:584)
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7631) NullPointerException when SCRAM is allowed but ScramLoginModule is not in broker's jaas.conf

2018-11-15 Thread Andras Beni (JIRA)
Andras Beni created KAFKA-7631:
--

 Summary: NullPointerException when SCRAM is allowed but 
ScramLoginModule is not in broker's jaas.conf
 Key: KAFKA-7631
 URL: https://issues.apache.org/jira/browse/KAFKA-7631
 Project: Kafka
  Issue Type: Improvement
  Components: security
Affects Versions: 2.0.0
Reporter: Andras Beni


When a user wants to use delegation tokens and lists {{SCRAM}} in 
{{sasl.enabled.mechanisms}}, but does not add {{ScramLoginModule}} to the 
broker's JAAS configuration, a null pointer exception is thrown on the broker 
side and the connection is closed.

A meaningful error message should be logged and sent back to the client.
{code}
java.lang.NullPointerException
at 
org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.handleSaslToken(SaslServerAuthenticator.java:376)
at 
org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.authenticate(SaslServerAuthenticator.java:262)
at 
org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:127)
at 
org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:489)
at org.apache.kafka.common.network.Selector.poll(Selector.java:427)
at kafka.network.Processor.poll(SocketServer.scala:679)
at kafka.network.Processor.run(SocketServer.scala:584)
at java.lang.Thread.run(Thread.java:748)
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7630) Clarify that broker doesn't need SCRAM username/password for delegation tokens

2018-11-14 Thread Andras Beni (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16687615#comment-16687615
 ] 

Andras Beni commented on KAFKA-7630:


[~asasvari], [~viktorsomogyi] you might want to take a look at this.

> Clarify that broker doesn't need SCRAM username/password for delegation tokens
> --
>
> Key: KAFKA-7630
> URL: https://issues.apache.org/jira/browse/KAFKA-7630
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation, security
>Affects Versions: 2.0.0
>Reporter: Andras Beni
>Priority: Minor
>
> The [documentation|https://kafka.apache.org/documentation/#security_token_authentication]
>  on delegation tokens refers to the SCRAM 
> [configuration|https://kafka.apache.org/documentation/#security_sasl_scram_brokerconfig]
>  section. However, in a setup where only delegation tokens use SCRAM and all 
> other authentication goes via Kerberos, {{ScramLoginModule}} does not need 
> {{username}} and {{password}}.
> This is not obvious from the documentation.
> I believe the same is true for setups where SCRAM is used by clients but 
> inter-broker communication is GSSAPI or PLAIN, but I have not tested it.
>  
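
For illustration, a hedged sketch of such a setup: Kerberos for regular authentication plus a bare ScramLoginModule so delegation tokens work, with no SCRAM username/password. The keytab path, principal, and section name are placeholders, not taken from the documentation:

{code}
// Illustrative sketch only; paths and principal are placeholders.
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka.keytab"
    principal="kafka/broker1.example.com@EXAMPLE.COM";

    // No username/password here: SCRAM is enabled only for delegation tokens.
    org.apache.kafka.common.security.scram.ScramLoginModule required;
};
{code}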



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7630) Clarify that broker doesn't need SCRAM username/password for delegation tokens

2018-11-14 Thread Andras Beni (JIRA)
Andras Beni created KAFKA-7630:
--

 Summary: Clarify that broker doesn't need SCRAM username/password 
for delegation tokens
 Key: KAFKA-7630
 URL: https://issues.apache.org/jira/browse/KAFKA-7630
 Project: Kafka
  Issue Type: Improvement
  Components: documentation, security
Affects Versions: 2.0.0
Reporter: Andras Beni


The [documentation|https://kafka.apache.org/documentation/#security_token_authentication]
 on delegation tokens refers to the SCRAM 
[configuration|https://kafka.apache.org/documentation/#security_sasl_scram_brokerconfig]
 section. However, in a setup where only delegation tokens use SCRAM and all 
other authentication goes via Kerberos, {{ScramLoginModule}} does not need 
{{username}} and {{password}}.

This is not obvious from the documentation.

I believe the same is true for setups where SCRAM is used by clients but 
inter-broker communication is GSSAPI or PLAIN, but I have not tested it.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7145) Consumer thread getting stuck in hasNext() method

2018-07-11 Thread Andras Beni (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16539982#comment-16539982
 ] 

Andras Beni commented on KAFKA-7145:


[~lovenishgoyal],
as far as I can see, you are using the so-called 'old client' (the one residing 
in artifact org.apache.kafka:kafka_). It has been deprecated for 
quite some time and has not received bugfixes lately. It will even be removed in 
version 2.0.0 (ETA: in a few days), so I would not expect the community to put 
much effort into debugging and fixing bugs in this client.
I recommend migrating to the 'new client' 
(org.apache.kafka.clients.consumer.KafkaConsumer in artifact 
org.apache.kafka:kafka-clients) and trying to reproduce the issue with that. 
This behavior has probably been fixed there already.
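
For illustration, a minimal sketch of the suggested migration to the new client. The topic, group id, and bootstrap servers are placeholders, and byte[] deserializers are assumed:

{code:java}
// Hypothetical migration sketch using org.apache.kafka:kafka-clients.
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class NewConsumerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "example-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));
            while (true) {
                // poll() returns after the timeout even when no messages are available,
                // instead of blocking indefinitely like ConsumerIterator.hasNext().
                ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<byte[], byte[]> record : records) {
                    System.out.printf("offset=%d, %d bytes%n", record.offset(), record.value().length);
                }
            }
        }
    }
}
{code}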


> Consumer thread getting stuck in hasNext() method
> -
>
> Key: KAFKA-7145
> URL: https://issues.apache.org/jira/browse/KAFKA-7145
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.1, 0.10.0.1
>Reporter: Lovenish goyal
>Priority: Blocker
>
> The consumer thread is getting stuck at the *hasNext()* method.
> We are using ConsumerIterator for this; below is the code snippet: 
>  
> {code:java}
> ConsumerIterator<byte[], byte[]> mIterator;
> List<KafkaStream<byte[], byte[]>> streams = 
> mConsumerConnector.createMessageStreamsByFilter(topicFilter);
> KafkaStream<byte[], byte[]> stream = streams.get(0);
> mIterator = stream.iterator();
> {code}
>  
> When I manually check via [Kafdrop|https://github.com/HomeAdvisor/Kafdrop], I 
> see a 'No message found' message. I have tried the same with both Kafka 
> versions 0.9 and 0.10 and get the same issue. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6968) Call RebalanceListener in MockConsumer

2018-06-28 Thread Andras Beni (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16526492#comment-16526492
 ] 

Andras Beni commented on KAFKA-6968:


[~bgummalla] 
As far as I can see in the [current 
code|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java#L86],
 it is.


> Call RebalanceListener in MockConsumer
> --
>
> Key: KAFKA-6968
> URL: https://issues.apache.org/jira/browse/KAFKA-6968
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Affects Versions: 1.1.0
>Reporter: Andras Beni
>Priority: Minor
>
> {{org.apache.kafka.clients.consumer.MockConsumer}} simulates a rebalance with 
> the method {{public synchronized void rebalance(Collection<TopicPartition> 
> newAssignment)}}. This method does not call the {{ConsumerRebalanceListener}} 
> methods. Calls to {{onPartitionsRevoked(...)}} and 
> {{onPartitionsAssigned(...)}} should be added in the appropriate order.
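
For illustration, a rough sketch of the intended callback ordering (not the actual MockConsumer code; the helper class below is hypothetical):

{code:java}
// Hypothetical sketch showing revoke-then-assign ordering for a simulated rebalance.
import java.util.Collection;
import java.util.HashSet;
import java.util.Set;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.common.TopicPartition;

class RebalanceSimulator {
    private final Set<TopicPartition> assignment = new HashSet<>();

    synchronized void rebalance(Collection<TopicPartition> newAssignment,
                                ConsumerRebalanceListener listener) {
        // Revoke the old assignment first, then announce the new one,
        // mirroring the order a real consumer uses during a rebalance.
        listener.onPartitionsRevoked(new HashSet<>(assignment));
        assignment.clear();
        assignment.addAll(newAssignment);
        listener.onPartitionsAssigned(new HashSet<>(assignment));
    }
}
{code}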



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6812) Async ConsoleProducer exits with 0 status even after data loss

2018-06-26 Thread Andras Beni (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524080#comment-16524080
 ] 

Andras Beni commented on KAFKA-6812:


[~enether] 
In sync mode, when an exception occurs, it is logged and the tool immediately 
exits with an error status. In async mode, the same situation is handled by 
logging and continuing operation, and the exit code will still be 0. I see how 
this behavior can be derived from the current implementation. I was questioning 
whether it is correct and by design, because losing data should be unexpected 
and the difference in error handling is not obvious from the sync-async 
distinction. If this difference is intended, I propose to either
* document that in async mode dropping records is expected and accepted, or
* add a flag (e.g. --error-handler-strategy) so users can choose how they want 
to handle errors (see the sketch below).

If the latter option is acceptable to you, I am happy to write the KIP and 
implement it. 
If neither option is acceptable, feel free to close this issue.
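
For illustration, a hedged sketch of the second option: a send callback that records failures so the tool could exit with a non-zero status. The class and the --error-handler-strategy flag are hypothetical, not existing Kafka code:

{code:java}
// Hypothetical sketch: remembers whether any send failed so the exit code can reflect data loss.
import java.util.concurrent.atomic.AtomicBoolean;
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.RecordMetadata;

public class ExitStatusCallback implements Callback {
    private static final AtomicBoolean sendFailed = new AtomicBoolean(false);

    @Override
    public void onCompletion(RecordMetadata metadata, Exception exception) {
        if (exception != null) {
            exception.printStackTrace();  // keep the existing error logging behavior
            sendFailed.set(true);         // remember that a record was dropped
        }
    }

    public static int exitCode() {
        return sendFailed.get() ? 1 : 0;  // non-zero when any record was lost
    }
}
{code}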

> Async ConsoleProducer exits with 0 status even after data loss
> --
>
> Key: KAFKA-6812
> URL: https://issues.apache.org/jira/browse/KAFKA-6812
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 1.1.0
>Reporter: Andras Beni
>Assignee: Stanislav Kozlovski
>Priority: Minor
>
> When {{ConsoleProducer}} is run without {{--sync}} flag and one of the 
> batches times out, {{ErrorLoggingCallback}} logs the error:
> {code:java}
>  18/04/21 04:23:01 WARN clients.NetworkClient: [Producer 
> clientId=console-producer] Connection to node 10 could not be established. 
> Broker may not be available.
>  18/04/21 04:23:02 ERROR internals.ErrorLoggingCallback: Error when sending 
> message to topic my-topic with key: null, value: 8 bytes with error:
>  org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for 
> my-topic-0: 1530 ms has passed since batch creation plus linger time{code}
>  However, the tool exits with status code 0. 
>  In my opinion the tool should indicate in the exit status that data was 
> lost. Maybe it's reasonable to exit after the first error.
>   



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-7062) Simplify MirrorMaker loop after removal of old consumer support

2018-06-15 Thread Andras Beni (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Beni reassigned KAFKA-7062:
--

Assignee: Andras Beni

> Simplify MirrorMaker loop after removal of old consumer support
> ---
>
> Key: KAFKA-7062
> URL: https://issues.apache.org/jira/browse/KAFKA-7062
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Andras Beni
>Priority: Minor
>  Labels: newbie
>
> Once KAFKA-2983 is merged, we can simplify the MirrorMaker loop to be a 
> single loop instead of two nested loops. In the old consumer, offsets would 
> still be committed even if there were no messages, so receive() could block. 
> The new consumer doesn't have this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6987) Reimplement KafkaFuture with CompletableFuture

2018-06-04 Thread Andras Beni (JIRA)
Andras Beni created KAFKA-6987:
--

 Summary: Reimplement KafkaFuture with CompletableFuture
 Key: KAFKA-6987
 URL: https://issues.apache.org/jira/browse/KAFKA-6987
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Affects Versions: 2.0.0
Reporter: Andras Beni
Assignee: Andras Beni


KafkaFuture documentation states:
{{This will eventually become a thin shim on top of Java 8's 
CompletableFuture.}}
With Java 7 support dropped in 2.0, it is time to get rid of custom code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6968) Call RebalanceListener in MockConsumer

2018-05-30 Thread Andras Beni (JIRA)
Andras Beni created KAFKA-6968:
--

 Summary: Call RebalanceListener in MockConsumer
 Key: KAFKA-6968
 URL: https://issues.apache.org/jira/browse/KAFKA-6968
 Project: Kafka
  Issue Type: Improvement
  Components: consumer
Affects Versions: 1.1.0
Reporter: Andras Beni


{{org.apache.kafka.clients.consumer.MockConsumer}} simulates a rebalance with 
the method {{public synchronized void rebalance(Collection<TopicPartition> 
newAssignment)}}. This method does not call the {{ConsumerRebalanceListener}} 
methods. Calls to {{onPartitionsRevoked(...)}} and 
{{onPartitionsAssigned(...)}} should be added in the appropriate order.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-2983) Remove old Scala consumer and all related code, tests, and tools

2018-05-22 Thread Andras Beni (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Beni reassigned KAFKA-2983:
--

Assignee: Ismael Juma  (was: Andras Beni)

[~ijuma], I have not made significant progress. Assigning this issue to you. 
Thanks for notifying me.

> Remove old Scala consumer and all related code, tests, and tools
> 
>
> Key: KAFKA-2983
> URL: https://issues.apache.org/jira/browse/KAFKA-2983
> Project: Kafka
>  Issue Type: Task
>Reporter: Grant Henke
>Assignee: Ismael Juma
>Priority: Major
> Fix For: 2.0.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-2983) Remove old Scala consumer and all related code, tests, and tools

2018-05-22 Thread Andras Beni (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Beni reassigned KAFKA-2983:
--

Assignee: Andras Beni  (was: Grant Henke)

> Remove old Scala consumer and all related code, tests, and tools
> 
>
> Key: KAFKA-2983
> URL: https://issues.apache.org/jira/browse/KAFKA-2983
> Project: Kafka
>  Issue Type: Task
>Reporter: Grant Henke
>Assignee: Andras Beni
>Priority: Major
> Fix For: 2.0.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6812) Async ConsoleProducer exits with 0 status even after data loss

2018-04-22 Thread Andras Beni (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Beni updated KAFKA-6812:
---
Summary: Async ConsoleProducer exits with 0 status even after data loss  
(was: Async ConsoleProducer exists with 0 status even after data loss)

> Async ConsoleProducer exits with 0 status even after data loss
> --
>
> Key: KAFKA-6812
> URL: https://issues.apache.org/jira/browse/KAFKA-6812
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 1.1.0
>Reporter: Andras Beni
>Priority: Minor
>
> When {{ConsoleProducer}} is run without {{--sync}} flag and one of the 
> batches times out, {{ErrorLoggingCallback}} logs the error:
> {code:java}
>  18/04/21 04:23:01 WARN clients.NetworkClient: [Producer 
> clientId=console-producer] Connection to node 10 could not be established. 
> Broker may not be available.
>  18/04/21 04:23:02 ERROR internals.ErrorLoggingCallback: Error when sending 
> message to topic my-topic with key: null, value: 8 bytes with error:
>  org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for 
> my-topic-0: 1530 ms has passed since batch creation plus linger time{code}
>  However, the tool exits with status code 0. 
>  In my opinion the tool should indicate in the exit status that data was 
> lost. Maybe it's reasonable to exit after the first error.
>   



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6812) Async ConsoleProducer exists with 0 status even after data loss

2018-04-21 Thread Andras Beni (JIRA)
Andras Beni created KAFKA-6812:
--

 Summary: Async ConsoleProducer exists with 0 status even after 
data loss
 Key: KAFKA-6812
 URL: https://issues.apache.org/jira/browse/KAFKA-6812
 Project: Kafka
  Issue Type: Bug
  Components: tools
Affects Versions: 1.1.0
Reporter: Andras Beni


When {{ConsoleProducer}} is run without {{--sync}} flag and one of the batches 
times out, {{ErrorLoggingCallback}} logs the error:
{code:java}
 18/04/21 04:23:01 WARN clients.NetworkClient: [Producer 
clientId=console-producer] Connection to node 10 could not be established. 
Broker may not be available.
 18/04/21 04:23:02 ERROR internals.ErrorLoggingCallback: Error when sending 
message to topic my-topic with key: null, value: 8 bytes with error:
 org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for 
my-topic-0: 1530 ms has passed since batch creation plus linger time{code}
 However, the tool exits with status code 0. 
 In my opinion the tool should indicate in the exit status that data was 
lost. Maybe it's reasonable to exit after the first error.
  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-3365) Add a documentation field for types and update doc generation

2018-03-19 Thread Andras Beni (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Beni reassigned KAFKA-3365:
--

Assignee: Andras Beni

> Add a documentation field for types and update doc generation
> -
>
> Key: KAFKA-3365
> URL: https://issues.apache.org/jira/browse/KAFKA-3365
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Andras Beni
>Priority: Major
>
> Currently the type class does not allow a documentation field. This means we 
> can't auto generate a high level documentation summary for each type in the 
> protocol. Adding this field and updating the generated output would be 
> valuable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-3368) Add the Message/Record set protocol to the protocol docs

2017-06-22 Thread Andras Beni (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Beni reassigned KAFKA-3368:
--

Assignee: Andras Beni

> Add the Message/Record set protocol to the protocol docs
> 
>
> Key: KAFKA-3368
> URL: https://issues.apache.org/jira/browse/KAFKA-3368
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Andras Beni
>
> The message/Record format is not a part of the standard Protocol.java class. 
> This should be added to the protocol or manually added to the doc.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)