[jira] [Created] (KAFKA-13389) Add to kafka shell scripts checks about server state

2021-10-21 Thread Seweryn Habdank-Wojewodzki (Jira)
Seweryn Habdank-Wojewodzki created KAFKA-13389:
--

 Summary: Add to kafka shell scripts checks about server state
 Key: KAFKA-13389
 URL: https://issues.apache.org/jira/browse/KAFKA-13389
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.8.0
Reporter: Seweryn Habdank-Wojewodzki


Hello,

Within the discussion with Confluent in Confluent Support Ticket 
[#71907|https://support.confluent.io/hc/requests/71907], we found out that, due 
to eventual consistency, Kafka shell scripts may deliver wrong information. For 
example, when listing topics, the result might be empty even though topics 
exist, because the server state is not in sync (e.g. when URP > 0).

To be concrete: the call below may return an empty list if the server is not in 
sync.

{code}
$ ./bin/kafka-topics.sh --bootstrap-server= --list
{code}

 

The remark from the Confluent engineers is that before getting those results, 
one has to check the server status; in particular URP shall be 0, otherwise the 
results might be wrong.

So in fact the Kafka shell scripts contain a bug: they deliver possibly broken 
results instead of reporting an error.

The proposal here is to add to all Kafka shell scripts a check whether the 
server state is proper (e.g. URP is 0). If the server is not in a good state 
then, instead of returning possibly wrong values, the script shall return a 
proper error code with a message that the server is not in a proper state.
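
To illustrate the kind of check meant here, below is a minimal sketch using the 
Java Admin API (method names as in the 2.8 client); the class name and the 
exact failure message are made up for illustration. It refuses to list topics 
while any partition is under-replicated:

{code:java}
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.TopicDescription;

import java.util.Map;
import java.util.Properties;
import java.util.Set;

public class ListTopicsIfInSync {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", args[0]);
        try (Admin admin = Admin.create(props)) {
            Set<String> names = admin.listTopics().names().get();
            Map<String, TopicDescription> topics = admin.describeTopics(names).all().get();
            // URP: partitions whose in-sync replica set is smaller than the replica set
            long urp = topics.values().stream()
                    .flatMap(d -> d.partitions().stream())
                    .filter(p -> p.isr().size() < p.replicas().size())
                    .count();
            if (urp > 0) {
                System.err.println("Server is not in a proper state: " + urp
                        + " under-replicated partition(s)");
                System.exit(1); // proper error code instead of possibly wrong output
            }
            names.forEach(System.out::println);
        }
    }
}
{code}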

Why in the Kafka shell scripts and not on the user side?

Because the Kafka team knows all server conditions and can describe the server 
state much better than any user, and the checks would be done centrally for all 
users, who then do not need to implement the same thing over and over. Also, 
when Kafka changes its own API, the updates would be done synchronously.

 

Thanks in advance for adding those checks and best regards,

Seweryn.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-7214) Mystic FATAL error

2021-10-06 Thread Seweryn Habdank-Wojewodzki (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seweryn Habdank-Wojewodzki resolved KAFKA-7214.
---
Resolution: Workaround

The solution is to avoid low values of {{max.block.ms}}.

> Mystic FATAL error
> --
>
> Key: KAFKA-7214
> URL: https://issues.apache.org/jira/browse/KAFKA-7214
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.3, 1.1.1, 2.3.0, 2.2.1
>Reporter: Seweryn Habdank-Wojewodzki
>Priority: Critical
> Attachments: qns-1.1.zip, qns-1.zip
>
>
> Dears,
> Very often at startup of the streaming application I get the following exception:
> {code}
> Exception caught in process. taskId=0_1, processor=KSTREAM-SOURCE-0000000000, 
> topic=my_instance_medium_topic, partition=1, offset=198900203; 
> [org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:212),
>  
> org.apache.kafka.streams.processor.internals.AssignedTasks$2.apply(AssignedTasks.java:347),
>  
> org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:420),
>  
> org.apache.kafka.streams.processor.internals.AssignedTasks.process(AssignedTasks.java:339),
>  
> org.apache.kafka.streams.processor.internals.StreamThread.processAndPunctuate(StreamThread.java:648),
>  
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:513),
>  
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:482),
>  
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:459)]
>  in thread 
> my_application-my_instance-my_instance_medium-72ee1819-edeb-4d85-9d65-f67f7c321618-StreamThread-62
> {code}
> and then (without shutdown request from my side):
> {code}
> 2018-07-30 07:45:02 [ar313] [INFO ] StreamThread:912 - stream-thread 
> [my_application-my_instance-my_instance-72ee1819-edeb-4d85-9d65-f67f7c321618-StreamThread-62]
>  State transition from PENDING_SHUTDOWN to DEAD.
> {code}
> What is this?
> How can it be handled correctly?
> Thanks in advance for help.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-13351) Add possibility to write kafka headers in Kafka Console Producer

2021-10-06 Thread Seweryn Habdank-Wojewodzki (Jira)
Seweryn Habdank-Wojewodzki created KAFKA-13351:
--

 Summary: Add possibility to write kafka headers in Kafka Console 
Producer
 Key: KAFKA-13351
 URL: https://issues.apache.org/jira/browse/KAFKA-13351
 Project: Kafka
  Issue Type: Wish
Affects Versions: 2.8.1
Reporter: Seweryn Habdank-Wojewodzki


Dears,

Currently there is an asymmetry between the Kafka Console Consumer and the 
Kafka Console Producer.
The Kafka Console Consumer can display headers (KAFKA-6733), but the Kafka 
Console Producer cannot produce them.

It would be good to unify this and add the possibility for the Kafka Console 
Producer to write headers.
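
For reference, the Java producer API already supports headers; a minimal sketch 
(topic, key, value and header names are made up for illustration):

{code:java}
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.header.Header;
import org.apache.kafka.common.header.internals.RecordHeader;

import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.Properties;

public class ProduceWithHeader {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            List<Header> headers = List.of(new RecordHeader("trace-id",
                    "abc-123".getBytes(StandardCharsets.UTF_8)));
            // ProducerRecord(topic, partition, key, value, headers)
            producer.send(new ProducerRecord<>("my_topic", null, "my_key", "my_value", headers));
        }
    }
}
{code}

The Console Producer would only need to expose this, e.g. via a command-line 
switch or an extra field separator on the input line.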

A similar ticket is KAFKA-6574, but it is very old and does not represent the 
current state of the software.

Please consider this.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9221) Kafka REST Proxy wrongly converts quotes in message when sending json

2019-11-21 Thread Seweryn Habdank-Wojewodzki (Jira)
Seweryn Habdank-Wojewodzki created KAFKA-9221:
-

 Summary: Kafka REST Proxy wrongly converts quotes in message when 
sending json
 Key: KAFKA-9221
 URL: https://issues.apache.org/jira/browse/KAFKA-9221
 Project: Kafka
  Issue Type: Bug
  Components: tools
Affects Versions: 2.3.0
 Environment: Linux redhat
Reporter: Seweryn Habdank-Wojewodzki


The Kafka REST Proxy has a problem when sending/converting JSON files (e.g. 
json.new) into the Kafka protocol. For example, the JSON file:
{code:java}
{"records":[{"value":"rest.kafka.testmetric,host=server.com,partition=8,topic=my_topic,url=http:--localhost:7071-metrics
 1337 1572276922"}]}
{code}
is sent with a call to the Kafka REST Proxy on localhost:8073:
{code:java}
curl -X POST -H "Content-Type: application/vnd.kafka.json.v2+json" \
  -H "Accept: application/vnd.kafka.v2+json" \
  --data @json.new http://localhost:8073/topics/some_topic -i
{code}
in Kafka, in some_topic, we get:
{code:java}
"rest.kafka.testmetric,host=server.com,partition=8,topic=my_topic,url=http:--localhost:7071-metrics
 1337 1572276922"
{code}
but the expected message has no quotes:
{code:java}
rest.kafka.testmetric,host=server.com,partition=8,topic=my_topic,url=http:--localhost:7071-metrics
 1337 1572276922
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-6882) Wrong producer settings may lead to DoS on Kafka Server

2019-07-12 Thread Seweryn Habdank-Wojewodzki (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seweryn Habdank-Wojewodzki resolved KAFKA-6882.
---
Resolution: Won't Fix

As there are no improvement proposals, I am closing it. :-)

> Wrong producer settings may lead to DoS on Kafka Server
> ---
>
> Key: KAFKA-6882
> URL: https://issues.apache.org/jira/browse/KAFKA-6882
> Project: Kafka
>  Issue Type: Bug
>  Components: core, producer 
>Affects Versions: 1.0.1, 1.1.0
>Reporter: Seweryn Habdank-Wojewodzki
>Priority: Major
>
> The documentation of the parameters “linger.ms” and “batch.size” is a bit 
> confusing. In fact, those parameters, wrongly set on the producer side, might 
> completely destroy BROKER throughput.
> I see that smart developers read the documentation of those parameters.
> Then they want to have super performance and super safety, so they set 
> something like this:
> {code}
> kafkaProps.put(ProducerConfig.LINGER_MS_CONFIG, 1);
> kafkaProps.put(ProducerConfig.BATCH_SIZE_CONFIG, 0);
> {code}
> Then we have a situation where each and every message is sent separately. 
> The TCP/IP stack is really busy in that case, and where high throughput was 
> needed much less is achieved, as every message goes separately and the 
> network and TCP/IP overhead becomes significant.
> Those settings are good only if someone sends critical messages once in a 
> while (e.g. one message per minute), not when throughput matters and 
> thousands of messages are sent per second.
> The situation is even worse when smart developers read that, for safety, 
> they need acknowledgements from all cluster nodes. So they add:
> {code}
> kafkaProps.put(ProducerConfig.ACKS_CONFIG, "all");
> {code}
> And this is the end of Kafka performance! 
> Even worse, it is not a problem for the Kafka producer. The problem remains 
> on the server (cluster, broker) side. The server is so busy acknowledging 
> *each and every* message on all nodes that other work is NOT performed, so 
> the end-to-end performance is almost none.
> I would like to ask you to improve the documentation of these parameters, 
> and to cover the corner cases by providing detailed information on how 
> extreme values of the parameters - namely the lowest and highest - may 
> influence the work of the cluster.
> That was the documentation issue. 
> On the other hand, there is a security/safety matter.
> Technically the problem is that the __consumer_offsets topic is loaded with 
> an enormous amount of messages. This leads to a situation where the Kafka 
> Broker is exposed to *DoS* due to the Producer settings: three lines of 
> code, a bit of load, and the Kafka cluster is dead.
> I suppose there are ways to prevent such a situation on the cluster side, 
> but it requires some logic to be implemented to detect such a simple but 
> efficient DoS.
> BTW. Do the Kafka Admin Tools provide any kind of "kill connection", for 
> when one or the other producer makes problems?



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (KAFKA-8548) Inconsistency in Kafka Documentation

2019-06-17 Thread Seweryn Habdank-Wojewodzki (JIRA)
Seweryn Habdank-Wojewodzki created KAFKA-8548:
-

 Summary: Inconsistency in Kafka Documentation
 Key: KAFKA-8548
 URL: https://issues.apache.org/jira/browse/KAFKA-8548
 Project: Kafka
  Issue Type: Task
  Components: documentation
Affects Versions: 2.2.1
Reporter: Seweryn Habdank-Wojewodzki


Dears,

Two parts (referenced below) of the 
[documentation|http://kafka.apache.org/documentation/] are not quite consistent.

In one text we can read that max.poll.interval.ms has the default value 
Integer.MAX_VALUE; in the other it is 300 000.

Part 1.

{quote}
The default values for two configurations of the StreamsConfig class were 
changed to improve the resiliency of Kafka Streams applications. The internal 
Kafka Streams producer retries default value was changed from 0 to 10. The 
internal Kafka Streams consumer max.poll.interval.ms default value was changed 
from 300000 to {color:#FF0000}Integer.MAX_VALUE{color}.
{quote}
 
Part 2. - Table

|max.poll.interval.ms|The maximum delay between invocations of poll() when 
using consumer group management. This places an upper bound on the amount of 
time that the consumer can be idle before fetching more records. If poll() is 
not called before expiration of this timeout, then the consumer is considered 
failed and the group will rebalance in order to reassign the partitions to 
another member.|int|{color:#FF0000}300000{color}|[1,...]|medium|

Which value is then the default? :-)




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6777) Wrong reaction on Out Of Memory situation

2019-01-22 Thread Seweryn Habdank-Wojewodzki (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seweryn Habdank-Wojewodzki resolved KAFKA-6777.
---
Resolution: Won't Fix

The last comment is accepted. We have to prepare other measures to mitigate 
this situation.
I am resolving the ticket :-).


> Wrong reaction on Out Of Memory situation
> -
>
> Key: KAFKA-6777
> URL: https://issues.apache.org/jira/browse/KAFKA-6777
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.0.0
>Reporter: Seweryn Habdank-Wojewodzki
>Priority: Critical
> Attachments: screenshot-1.png
>
>
> Dears,
> We have already encountered problems related to Out Of Memory situations in 
> the Kafka Broker and streaming clients many times.
> The scenario is the following.
> When the Kafka Broker (or Streaming Client) is under load and has too little 
> memory, there are no errors in the server logs. One can see some cryptic 
> entries in the GC logs, but they are definitely not self-explanatory.
> The Kafka Broker (and Streaming Clients) keeps working. Later we see in JMX 
> monitoring that the JVM spends more and more time in GC. In our case it 
> grows from e.g. 1% to 80%-90% of CPU time used by GC.
> Next, the software collapses into zombie mode – the process is not ending. 
> In such a case I would expect the process to crash (e.g. get SIGSEGV). Even 
> worse, Kafka treats such a zombie process as normal and still somewhat sends 
> messages, which are in fact getting lost; also the cluster does not exclude 
> the broken nodes. The question is how to configure Kafka to really terminate 
> the JVM and not remain in zombie mode, to give the other nodes a chance to 
> know that something is dead.
> I would expect that in an Out Of Memory situation the JVM ends, if not 
> gracefully then at least by crashing the process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7375) Improve error messages verbosity

2018-09-04 Thread Seweryn Habdank-Wojewodzki (JIRA)
Seweryn Habdank-Wojewodzki created KAFKA-7375:
-

 Summary: Improve error messages verbosity
 Key: KAFKA-7375
 URL: https://issues.apache.org/jira/browse/KAFKA-7375
 Project: Kafka
  Issue Type: Task
Affects Versions: 1.1.1
Reporter: Seweryn Habdank-Wojewodzki


Dears,

Very often, when clients are trying to connect, we see in the Kafka logs:

{code}
“org.apache.kafka.common.network.SslTransportLayer  - Failed to send SSL Close 
message“
{code}

The problem here is the following: there is no word about who. No IP, no 
client address, nothing.

It would be great to have, in all error or warning reports like this one, very 
precise information about which client has the problem, in order to be able to 
solve it. When the number of clients is more than 10, this message is 
completely useless, and when there are even more clients it really spams the 
logs.

Thanks in advance for help.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7363) How does num.stream.threads in a streaming application influence memory consumption?

2018-08-31 Thread Seweryn Habdank-Wojewodzki (JIRA)
Seweryn Habdank-Wojewodzki created KAFKA-7363:
-

 Summary: How does num.stream.threads in a streaming application 
influence memory consumption?
 Key: KAFKA-7363
 URL: https://issues.apache.org/jira/browse/KAFKA-7363
 Project: Kafka
  Issue Type: Task
Reporter: Seweryn Habdank-Wojewodzki


Dears,

How does the option _num.stream.threads_ in a streaming application influence 
memory consumption?
I see that by increasing num.stream.threads my application needs more memory.
This is obvious, but it is not obvious how much I need to give it. The 
trial-and-error method does not work, as it seems to be highly dependent on 
the forced throughput.
I mean: the higher the load, the more memory is needed.

Thanks for help and regards,
Seweryn.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7214) Mystic FATAL error

2018-07-29 Thread Seweryn Habdank-Wojewodzki (JIRA)
Seweryn Habdank-Wojewodzki created KAFKA-7214:
-

 Summary: Mystic FATAL error
 Key: KAFKA-7214
 URL: https://issues.apache.org/jira/browse/KAFKA-7214
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 0.11.0.3
Reporter: Seweryn Habdank-Wojewodzki


Dears,

Very often at startup of the streaming application I get the following exception:

{code}
Exception caught in process. taskId=0_1, processor=KSTREAM-SOURCE-0000000000, 
topic=my_instance_medium_topic, partition=1, offset=198900203; 
[org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:212),
 
org.apache.kafka.streams.processor.internals.AssignedTasks$2.apply(AssignedTasks.java:347),
 
org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:420),
 
org.apache.kafka.streams.processor.internals.AssignedTasks.process(AssignedTasks.java:339),
 
org.apache.kafka.streams.processor.internals.StreamThread.processAndPunctuate(StreamThread.java:648),
 
org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:513),
 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:482),
 
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:459)]
 in thread 
my_application-my_instance-my_instance_medium-72ee1819-edeb-4d85-9d65-f67f7c321618-StreamThread-62
{code}

and then (without shutdown request from my side):

{code}
2018-07-30 07:45:02 [ar313] [INFO ] StreamThread:912 - stream-thread 
[my_application-my_instance-my_instance-72ee1819-edeb-4d85-9d65-f67f7c321618-StreamThread-62]
 State transition from PENDING_SHUTDOWN to DEAD.
{code}

What is this?
How can it be handled correctly?

Thanks in advance for help.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6882) Wrong producer settings may lead to DoS on Kafka Server

2018-05-08 Thread Seweryn Habdank-Wojewodzki (JIRA)
Seweryn Habdank-Wojewodzki created KAFKA-6882:
-

 Summary: Wrong producer settings may lead to DoS on Kafka Server
 Key: KAFKA-6882
 URL: https://issues.apache.org/jira/browse/KAFKA-6882
 Project: Kafka
  Issue Type: Bug
  Components: core, producer 
Affects Versions: 1.0.1, 1.1.0
Reporter: Seweryn Habdank-Wojewodzki


The documentation of the parameters “linger.ms” and “batch.size” is a bit 
confusing. In fact, those parameters, wrongly set on the producer side, might 
completely destroy BROKER throughput.

I see that smart developers read the documentation of those parameters.
Then they want to have super performance and super safety, so they set 
something like this:

{code}
kafkaProps.put(ProducerConfig.LINGER_MS_CONFIG, 1);
kafkaProps.put(ProducerConfig.BATCH_SIZE_CONFIG, 0);
{code}

Then we have a situation where each and every message is sent separately. The 
TCP/IP stack is really busy in that case, and where high throughput was needed 
much less is achieved, as every message goes separately and the network and 
TCP/IP overhead becomes significant.

Those settings are good only if someone sends critical messages once in a 
while (e.g. one message per minute), not when throughput matters and thousands 
of messages are sent per second.
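
For contrast, a sketch of settings that favour throughput (the concrete 
numbers are illustrative only, not a recommendation):

{code}
kafkaProps.put(ProducerConfig.LINGER_MS_CONFIG, 20);         // wait up to 20 ms to fill a batch
kafkaProps.put(ProducerConfig.BATCH_SIZE_CONFIG, 64 * 1024); // 64 KiB batches
kafkaProps.put(ProducerConfig.ACKS_CONFIG, "all");           // still durable, cost amortized per batch
{code}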

The situation is even worse when smart developers read that for safety they 
need acknowledgements from all cluster nodes. So they add:

{code}
kafkaProps.put(ProducerConfig.ACKS_CONFIG, "all");
{code}

And this is the end of Kafka performance! 

Even worse, it is not a problem for the Kafka producer. The problem remains on 
the server (cluster, broker) side. The server is so busy acknowledging *each 
and every* message on all nodes that other work is NOT performed, so the 
end-to-end performance is almost none.

I would like to ask you to improve the documentation of these parameters, and 
to cover the corner cases by providing detailed information on how extreme 
values of the parameters - namely the lowest and highest - may influence the 
work of the cluster.
That was the documentation issue. 

On the other hand, there is a security/safety matter.

Technically the problem is that the __consumer_offsets topic is loaded with an 
enormous amount of messages. This leads to a situation where the Kafka Broker 
is exposed to *DoS* due to the Producer settings: three lines of code, a bit 
of load, and the Kafka cluster is dead.
I suppose there are ways to prevent such a situation on the cluster side, but 
it requires some logic to be implemented to detect such a simple but efficient 
DoS.

BTW. Do the Kafka Admin Tools provide any kind of "kill connection", for when 
one or the other producer makes problems?




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6777) Wrong reaction on Out Of Memory situation

2018-04-11 Thread Seweryn Habdank-Wojewodzki (JIRA)
Seweryn Habdank-Wojewodzki created KAFKA-6777:
-

 Summary: Wrong reaction on Out Of Memory situation
 Key: KAFKA-6777
 URL: https://issues.apache.org/jira/browse/KAFKA-6777
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 1.0.0
Reporter: Seweryn Habdank-Wojewodzki


Dears,

We have already encountered problems related to Out Of Memory situations in 
the Kafka Broker and streaming clients many times.

The scenario is the following.

When the Kafka Broker (or Streaming Client) is under load and has too little 
memory, there are no errors in the server logs. One can see some cryptic 
entries in the GC logs, but they are definitely not self-explanatory.

The Kafka Broker (and Streaming Clients) keeps working. Later we see in JMX 
monitoring that the JVM spends more and more time in GC. In our case it grows 
from e.g. 1% to 80%-90% of CPU time used by GC.

Next, the software collapses into zombie mode – the process is not ending. In 
such a case I would expect the process to crash (e.g. get SIGSEGV). Even 
worse, Kafka treats such a zombie process as normal and still somewhat sends 
messages, which are in fact getting lost; also the cluster does not exclude 
the broken nodes. The question is how to configure Kafka to really terminate 
the JVM and not remain in zombie mode, to give the other nodes a chance to 
know that something is dead.

I would expect that in an Out Of Memory situation the JVM ends, if not 
gracefully then at least by crashing the process.
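
As a possible mitigation (an assumption on my side, not something the Kafka 
documentation describes): HotSpot JVMs since 8u92 provide flags that do 
exactly this, and they could be passed to the broker via the KAFKA_OPTS 
environment variable used by the standard start scripts, e.g.:

{code}
KAFKA_OPTS="-XX:+ExitOnOutOfMemoryError"   # or -XX:+CrashOnOutOfMemoryError for a core dump
{code}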



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6699) When one of two Kafka nodes is dead, the streaming API cannot handle messaging

2018-03-21 Thread Seweryn Habdank-Wojewodzki (JIRA)
Seweryn Habdank-Wojewodzki created KAFKA-6699:
-

 Summary: When one of two Kafka nodes is dead, the streaming API 
cannot handle messaging
 Key: KAFKA-6699
 URL: https://issues.apache.org/jira/browse/KAFKA-6699
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 0.11.0.2
Reporter: Seweryn Habdank-Wojewodzki


Dears,

I am observing quite often that when the Kafka Broker is partly dead(*), 
applications which use the streaming API do nothing.

(*) Partly dead in my case means that one of the two Kafka nodes is out of 
order.

Especially when the disk is full on one machine, the Broker goes into some 
strange state in which the streaming API goes on vacation. It seems like the 
regular producer/consumer API has no problem in such a case.

Can you have a look at this matter?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6457) Error: NOT_LEADER_FOR_PARTITION leads to NPE

2018-01-17 Thread Seweryn Habdank-Wojewodzki (JIRA)
Seweryn Habdank-Wojewodzki created KAFKA-6457:
-

 Summary: Error: NOT_LEADER_FOR_PARTITION leads to NPE
 Key: KAFKA-6457
 URL: https://issues.apache.org/jira/browse/KAFKA-6457
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 1.0.0
Reporter: Seweryn Habdank-Wojewodzki


One of our nodes was dead. Then the second one took over all responsibility.

But the streaming application in the meanwhile crashed due to an NPE caused by 
{{Error: NOT_LEADER_FOR_PARTITION}}.

The stack trace is below.

Is this something expected?

 

{code}

2018-01-17 11:47:21 [my] [WARN ] Sender:251 - [Producer 
clientId=restreamer-my-fef07ca9-b067-45c0-a5af-68b5a1730dac-StreamThread-1-producer]
 Got error produce response with correlation id 768962 on topic-partition 
my_internal_topic-5, retrying (9 attempts left). Error: NOT_LEADER_FOR_PARTITION
2018-01-17 11:47:21 [my] [WARN ] Sender:251 - [Producer 
clientId=restreamer-my-fef07ca9-b067-45c0-a5af-68b5a1730dac-StreamThread-1-producer]
 Got error produce response with correlation id 768962 on topic-partition 
my_internal_topic-7, retrying (9 attempts left). Error: NOT_LEADER_FOR_PARTITION
2018-01-17 11:47:21 [my] [ERROR] AbstractCoordinator:296 - [Consumer 
clientId=restreamer-my-fef07ca9-b067-45c0-a5af-68b5a1730dac-StreamThread-1-consumer,
 groupId=restreamer-my] Heartbeat thread for group restreamer-my failed due to 
unexpected error
java.lang.NullPointerException: null
    at 
org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:436) 
~[my-restreamer.jar:?]
    at org.apache.kafka.common.network.Selector.poll(Selector.java:395) 
~[my-restreamer.jar:?]
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:460) 
~[my-restreamer.jar:?]
    at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:238)
 ~[my-restreamer.jar:?]
    at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:275)
 ~[my-restreamer.jar:?]
    at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:934)
 [my-restreamer.jar:?]

{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-4315) Kafka Connect documentation problems

2018-01-08 Thread Seweryn Habdank-Wojewodzki (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seweryn Habdank-Wojewodzki resolved KAFKA-4315.
---
Resolution: Done

I do not care anymore about this matter.

> Kafka Connect documentation problems
> 
>
> Key: KAFKA-4315
> URL: https://issues.apache.org/jira/browse/KAFKA-4315
> Project: Kafka
>  Issue Type: Bug
>Reporter: Seweryn Habdank-Wojewodzki
>
> On the basis of the documentation of Kafka Connect - 
> http://kafka.apache.org/documentation#connect - I had tried to build an 
> example in Java. It was not possible. 
> The code pieces available on the webpage are taken out of any context and 
> they do not compile. 
> Also it seems they are taken from completely different parts of the 
> software, so even putting them together shows that they do not build any 
> reasonable example. And they tend to be very complex, where I would expect 
> the API examples to be "Hello World"-like code.
> Also there are only weak connections between the examples from the Kafka 
> documentation and the Kafka Connect tool code available in the Kafka source.
> Finally, it would be nice to have a kind of statement in the Kafka 
> documentation about which parts of the API are stable and which are unstable 
> or experimental.
> I saw many (~20) such remarks in the Kafka code - I mean that an API is 
> unstable. This note is very important, as we then plan additional effort to 
> prepare facades for unstable code.
> In my opinion there is nothing wrong with an experimental API, but all those 
> matters shall be well documented. The current state of the main Kafka 
> documentation gives the impression that Kafka Connect is a well-tested, 
> consistent and stable feature set, but it is not. This leads to confusion in 
> effort management.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4908) consumer.properties logging warnings

2018-01-08 Thread Seweryn Habdank-Wojewodzki (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seweryn Habdank-Wojewodzki resolved KAFKA-4908.
---
Resolution: Done

Not an issue for me anymore.

> consumer.properties logging warnings
> 
>
> Key: KAFKA-4908
> URL: https://issues.apache.org/jira/browse/KAFKA-4908
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.2.0
>Reporter: Seweryn Habdank-Wojewodzki
>Priority: Minor
>
> The default consumer.properties delivered with the Kafka package logs 
> warnings at startup of the console consumer:
> [2017-03-15 16:36:57,439] WARN The configuration 
> 'zookeeper.connection.timeout.ms' was supplied but isn't a known config. 
> (org.apache.kafka.clients.consumer.ConsumerConfig)
> [2017-03-15 16:36:57,455] WARN The configuration 'zookeeper.connect' was 
> supplied but isn't a known config. 
> (org.apache.kafka.clients.consumer.ConsumerConfig)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6260) KafkaStreams 1.0.0 does not start correctly with Broker 0.11.0.1

2017-11-22 Thread Seweryn Habdank-Wojewodzki (JIRA)
Seweryn Habdank-Wojewodzki created KAFKA-6260:
-

 Summary: KafkaStreams 1.0.0 does not start correctly with Broker 
0.11.0.1
 Key: KAFKA-6260
 URL: https://issues.apache.org/jira/browse/KAFKA-6260
 Project: Kafka
  Issue Type: Bug
Reporter: Seweryn Habdank-Wojewodzki


The new KafkaStreams API, which uses the StreamsBuilder class, has trouble 
connecting to Broker v. 0.11.0.1. Logs are below.
The same app using the old (v. 0.11.0.1) KStreamBuilder API has no issues 
connecting to Broker 0.11.0.1.

Can you check that?

{code}
2017-11-22 16:27:58 DEBUG NetworkClient:183 - [Consumer 
clientId=kafka-endpoint-1a02a283-47e0-40b6-8da4-6e338da0814f-StreamThread-1-consumer,
 groupId=kafka-endpoint] Disconnecting from node 1 due to request timeout.
2017-11-22 16:27:58 DEBUG ConsumerNetworkClient:195 - [Consumer 
clientId=kafka-endpoint-1a02a283-47e0-40b6-8da4-6e338da0814f-StreamThread-1-consumer,
 groupId=kafka-endpoint] Cancelled FETCH request RequestHeader(apiKey=FETCH, 
apiVersion=5, 
clientId=kafka-endpoint-1a02a283-47e0-40b6-8da4-6e338da0814f-StreamThread-1-consumer,
 correlationId=74) with correlation id 74 due to node 1 being disconnected
2017-11-22 16:27:58 DEBUG Fetcher:195 - [Consumer 
clientId=kafka-endpoint-1a02a283-47e0-40b6-8da4-6e338da0814f-StreamThread-1-consumer,
 groupId=kafka-endpoint] Fetch request {my_internal_topic-4=(offset=199966200, 
logStartOffset=-1, maxBytes=1048576), my_internal_topic-0=(offset=199987332, 
logStartOffset=-1, maxBytes=1048576)} to myp01.eb.lan.at:9093 (id: 1 rack: 
DC-1) failed
org.apache.kafka.common.errors.DisconnectException: null
2017-11-22 16:27:58 DEBUG NetworkClient:183 - [Consumer 
clientId=kafka-endpoint-1a02a283-47e0-40b6-8da4-6e338da0814f-StreamThread-1-consumer,
 groupId=kafka-endpoint] Initialize connection to node myp01.eb.lan.at:9093 
(id: 1 rack: DC-1) for sending metadata request
2017-11-22 16:27:58 DEBUG NetworkClient:183 - [Consumer 
clientId=kafka-endpoint-1a02a283-47e0-40b6-8da4-6e338da0814f-StreamThread-1-consumer,
 groupId=kafka-endpoint] Initiating connection to node myp01.eb.lan.at:9093 
(id: 1 rack: DC-1)
2017-11-22 16:27:58 ERROR AbstractCoordinator:296 - [Consumer 
clientId=kafka-endpoint-1a02a283-47e0-40b6-8da4-6e338da0814f-StreamThread-1-consumer,
 groupId=kafka-endpoint] Heartbeat thread for group kafka-endpoint failed due 
to unexpected error
java.lang.NullPointerException: null
at 
org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:436) 
~[my-kafka-endpoint.jar:?]
at org.apache.kafka.common.network.Selector.poll(Selector.java:395) 
~[my-kafka-endpoint.jar:?]
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:460) 
~[my-kafka-endpoint.jar:?]
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:238)
 ~[my-kafka-endpoint.jar:?]
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:275)
 ~[my-kafka-endpoint.jar:?]
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:934)
 [my-kafka-endpoint.jar:?]
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-6202) Classes OffsetsMessageFormatter and GroupMetadataMessageFormatter shall be used by kafka tools, but in the last releases lost visibility

2017-11-13 Thread Seweryn Habdank-Wojewodzki (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seweryn Habdank-Wojewodzki resolved KAFKA-6202.
---
Resolution: Invalid

> Classes OffsetsMessageFormatter and GroupMetadataMessageFormatter shall be 
> used by kafka tools, but in the last releases lost visibility
> 
>
> Key: KAFKA-6202
> URL: https://issues.apache.org/jira/browse/KAFKA-6202
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.0, 1.0.0, 0.11.0.2, 0.11.0.3
>Reporter: Seweryn Habdank-Wojewodzki
>
> The classes OffsetsMessageFormatter and GroupMetadataMessageFormatter shall 
> be visible.
> Also, they shall/might be used by external tools like the console-consumer, 
> which is documented in the code!
> But in the last releases those two classes lost their visibility.
> Currently they have no access modifier, which outside the package is 
> interpreted as private.
> The proposal is to follow the comments and the use cases driven by the 
> console-consumer, and change the visibility from none/default to public.
> The issue was found during a discussion on SO: 
> https://stackoverflow.com/questions/47218277/what-may-cause-huge-load-in-kafka-consumer-offsets-topic



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6202) Classes OffsetsMessageFormatter and GroupMetadataMessageFormatter shall be used by kafka tools, but in the last releases lost visibility

2017-11-10 Thread Seweryn Habdank-Wojewodzki (JIRA)
Seweryn Habdank-Wojewodzki created KAFKA-6202:
-

 Summary: Classes OffsetsMessageFormatter and 
GroupMetadataMessageFormatter shall be used by kafka tools, but in the last 
releases lost visibility
 Key: KAFKA-6202
 URL: https://issues.apache.org/jira/browse/KAFKA-6202
 Project: Kafka
  Issue Type: Bug
Affects Versions: 1.0.0, 0.11.0.0, 0.11.0.2, 0.11.0.3
Reporter: Seweryn Habdank-Wojewodzki


The classes OffsetsMessageFormatter and GroupMetadataMessageFormatter shall be 
visible.

Also, they shall/might be used by external tools like the console-consumer, 
which is documented in the code!

But in the last releases those two classes lost their visibility.

Currently they have no access modifier, which outside the package is 
interpreted as private.
The proposal is to follow the comments and the use cases driven by the 
console-consumer, and change the visibility from none/default to public.

The issue was found during a discussion on SO: 
https://stackoverflow.com/questions/47218277/what-may-cause-huge-load-in-kafka-consumer-offsets-topic
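
For context, this is the kind of invocation the visibility is needed for 
(class path as of 0.11+, where the coordinator classes moved to 
kafka.coordinator.group; the broker address is made up for illustration):

{code}
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic __consumer_offsets --from-beginning \
  --consumer-property exclude.internal.topics=false \
  --formatter "kafka.coordinator.group.GroupMetadataManager\$OffsetsMessageFormatter"
{code}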




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5882) NullPointerException in ConsumerCoordinator

2017-09-13 Thread Seweryn Habdank-Wojewodzki (JIRA)
Seweryn Habdank-Wojewodzki created KAFKA-5882:
-

 Summary: NullPointerException in ConsumerCoordinator
 Key: KAFKA-5882
 URL: https://issues.apache.org/jira/browse/KAFKA-5882
 Project: Kafka
  Issue Type: Bug
Reporter: Seweryn Habdank-Wojewodzki


It seems the bugfix [KAFKA-5073|https://issues.apache.org/jira/browse/KAFKA-5073] 
was made, but it introduced some other issues.
In some cases (I am not sure which ones) I get an NPE (below).

I would expect that even in case of a FATAL error anything other than an NPE 
is thrown.

{code}
2017-09-12 23:34:54 ERROR ConsumerCoordinator:269 - User provided listener 
org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener for 
group streamer failed on partition assignment
java.lang.NullPointerException: null
at 
org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:123)
 ~[myapp-streamer.jar:?]
at 
org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:1234)
 ~[myapp-streamer.jar:?]
at 
org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:294)
 ~[myapp-streamer.jar:?]
at 
org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:254)
 ~[myapp-streamer.jar:?]
at 
org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:1313)
 ~[myapp-streamer.jar:?]
at 
org.apache.kafka.streams.processor.internals.StreamThread.access$1100(StreamThread.java:73)
 ~[myapp-streamer.jar:?]
at 
org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener.onPartitionsAssigned(StreamThread.java:183)
 ~[myapp-streamer.jar:?]
at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:265)
 [myapp-streamer.jar:?]
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:363)
 [myapp-streamer.jar:?]
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:310)
 [myapp-streamer.jar:?]
at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:297)
 [myapp-streamer.jar:?]
at 
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1078)
 [myapp-streamer.jar:?]
at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043) 
[myapp-streamer.jar:?]
at 
org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:582)
 [myapp-streamer.jar:?]
at 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:553)
 [myapp-streamer.jar:?]
at 
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:527)
 [myapp-streamer.jar:?]
2017-09-12 23:34:54 INFO  StreamThread:1040 - stream-thread 
[streamer-3a44578b-faa8-4b5b-bbeb-7a7f04639563-StreamThread-1] Shutting down
2017-09-12 23:34:54 INFO  KafkaProducer:972 - Closing the Kafka producer with 
timeoutMillis = 9223372036854775807 ms.
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5861) KStream close( withTimeout ) - does not work under load conditions in the multi-threaded KStream application

2017-09-12 Thread Seweryn Habdank-Wojewodzki (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seweryn Habdank-Wojewodzki resolved KAFKA-5861.
---
Resolution: Workaround

> KStream close( withTimeout ) - does not work under load conditions in the 
> multi-threaded KStream application
> 
>
> Key: KAFKA-5861
> URL: https://issues.apache.org/jira/browse/KAFKA-5861
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: Seweryn Habdank-Wojewodzki
>
> The recently implemented close( withTimeout ) for streams does not work 
> under load conditions in a multi-threaded KStream application.
> When there are several consuming threads and many messages in the stream, 
> then close( withTimeout ) does not work: 
> 1. the timeout is not respected at all, and
> 2. the application hangs in some streaming chaos. Theoretically the threads 
> are working - they are busy with themselves, so the app cannot end, but they 
> are not processing any further messages.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5861) KStream stop( withTimeout ) - does not work under load conditions in the multithreaded KStream application

2017-09-08 Thread Seweryn Habdank-Wojewodzki (JIRA)
Seweryn Habdank-Wojewodzki created KAFKA-5861:
-

 Summary: KStream stop( withTimeout ) - does not work under load 
conditions in the multithreaded KStream application
 Key: KAFKA-5861
 URL: https://issues.apache.org/jira/browse/KAFKA-5861
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 0.11.0.0
Reporter: Seweryn Habdank-Wojewodzki


The recently implemented stop( withTimeout ) for streams does not work under 
load conditions in a multi-threaded KStream application.

When there are several consuming threads and many messages in the stream, then 
stop( withTimeout ) does not work: 
1. the timeout is not respected at all, and
2. the application hangs in some streaming chaos. Theoretically the threads 
are working - they are busy with themselves, so the app cannot end, but they 
are not processing any further messages.
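
For reference, a minimal sketch of the call in question (the method is 
close(timeout, unit) in the 0.11.0 API, returning whether shutdown completed 
in time):

{code:java}
// streams is a started KafkaStreams instance; TimeUnit is java.util.concurrent.TimeUnit
boolean closed = streams.close(30, TimeUnit.SECONDS);
// expected: returns false after 30 seconds if threads are still busy;
// observed under load: the call itself hangs and the timeout is not respected
{code}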




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5786) Yet another exception is causing the streaming app to be a zombie

2017-08-25 Thread Seweryn Habdank-Wojewodzki (JIRA)
Seweryn Habdank-Wojewodzki created KAFKA-5786:
-

 Summary: Yet another exception is causing the streaming app to be a 
zombie
 Key: KAFKA-5786
 URL: https://issues.apache.org/jira/browse/KAFKA-5786
 Project: Kafka
  Issue Type: Bug
Reporter: Seweryn Habdank-Wojewodzki
Priority: Critical


An unhandled exception in the streaming app causes a zombie state of the process.

{code}
2017-08-24 15:17:40 WARN  StreamThread:978 - stream-thread 
[kafka-endpoint-1236e6d5-75f0-4c14-b025-78e632484a26-StreamThread-3] Unexpected 
state transition from RUNNING to DEAD.
2017-08-24 15:17:40 FATAL StreamProcessor:67 - Caught unhandled exception: 
stream-thread 
[kafka-endpoint-1236e6d5-75f0-4c14-b025-78e632484a26-StreamThread-3] Failed to 
rebalance.; 
[org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:589),
 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:553),
 
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:527)]
 in thread kafka-endpoint-1236e6d5-75f0-4c14-b025-78e632484a26-StreamThread-3
{code}

The final state of the app is similar to KAFKA-5779, but the exception and its 
location are in a different place.

The exception shall be handled in such a way that the application either tries 
to continue working or quits completely if the error is not recoverable.

The current situation, where the application is a zombie, is not good.
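
Until then, a possible way to at least fail fast instead of lingering as a 
zombie is the handler hook of the Streams API (a sketch, assuming the 
0.10/0.11 API; the reaction itself is up to the application):

{code:java}
// must be registered before streams.start()
streams.setUncaughtExceptionHandler((thread, throwable) -> {
    System.err.println("Stream thread " + thread.getName() + " died: " + throwable);
    System.exit(1); // or trigger a clean close/restart instead
});
{code}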



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5779) Single message may exploit application based on KStream

2017-08-24 Thread Seweryn Habdank-Wojewodzki (JIRA)
Seweryn Habdank-Wojewodzki created KAFKA-5779:
-

 Summary: Single message may exploit application based on KStream
 Key: KAFKA-5779
 URL: https://issues.apache.org/jira/browse/KAFKA-5779
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.11.0.0, 0.10.2.1
Reporter: Seweryn Habdank-Wojewodzki
Priority: Critical


The context: in Kafka streaming I am *defining* a simple KStream processing:

{code}
stringInput // line 54 of the SingleTopicStreamer class
.filter( streamFilter::passOrFilterMessages )
.map( normalizer )
.to( outTopicName );
{code}

For some reason I got a wrong message (I am still investigating what the 
problem is), but anyhow my service was taken down by a FATAL error:

{code}
2017-08-22 17:08:44 FATAL SingleTopicStreamer:54 - Caught unhandled exception: 
Input record ConsumerRecord(topic = XXX_topic, partition = 8, offset = 15, 
CreateTime = -1, serialized key size = -1, serialized value size = 255, headers 
= RecordHeaders(headers = [], isReadOnly = false), key = null, value = 
{"recordTimestamp":"2017-08-22T17:07:40:619+02:00","logLevel":"INFO","sourceApplication":"WPT","message":"Kafka-Init","businessError":false,"normalizedStatus":"green","logger":"CoreLogger"})
 has invalid (negative) timestamp. Possibly because a pre-0.10 producer client 
was used to write this record to Kafka without embedding a timestamp, or 
because the input topic was created before upgrading the Kafka cluster to 
0.10+. Use a different TimestampExtractor to process this data.; 
[org.apache.kafka.streams.processor.FailOnInvalidTimestamp.onInvalidTimestamp(FailOnInvalidTimestamp.java:63),
 
org.apache.kafka.streams.processor.ExtractRecordMetadataTimestamp.extract(ExtractRecordMetadataTimestamp.java:61),
 
org.apache.kafka.streams.processor.FailOnInvalidTimestamp.extract(FailOnInvalidTimestamp.java:46),
 
org.apache.kafka.streams.processor.internals.RecordQueue.addRawRecords(RecordQueue.java:85),
 
org.apache.kafka.streams.processor.internals.PartitionGroup.addRawRecords(PartitionGroup.java:117),
 
org.apache.kafka.streams.processor.internals.StreamTask.addRecords(StreamTask.java:464),
 
org.apache.kafka.streams.processor.internals.StreamThread.addRecordsToTasks(StreamThread.java:650),
 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:556),
 
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:527)]
 in thread restreamer-d4e77d18-6e7b-4708-8436-7fea0d4b1cdf-StreamThread-3
{code}

The suggested reason - that a pre-0.10 producer wrote the message - is false, 
as we are using Kafka 0.10.2.1 and 0.11.0.0 and the topics had been created 
within these versions of Kafka. 
The sender application is the .NET client from Confluent.

The whole matter is a bit problematic with this exception: it was suggested it 
is thrown in the scope of the initialization of the stream, but effectively it 
happened during processing, so adding try{} catch {} around the stringInput 
statement does not help. The stream was correctly defined; a single message 
sent later took down the whole app.

In my opinion KStream shall be robust enough to catch all such exceptions and 
shall protect the application from death due to a single corrupted message, 
especially when the timestamp is not embedded. In such a case one can patch 
the message with the current timestamp without loss of overall performance.
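
Patching with the current timestamp is in fact available as configuration; a 
sketch (the config key is default.timestamp.extractor in 0.11+, while older 
releases use timestamp.extractor):

{code:java}
// use wall-clock time instead of failing on broken record timestamps
props.put(StreamsConfig.DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG,
          WallclockTimestampExtractor.class.getName());
{code}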

I would expect Kafka Streams to handle this.

I will continue to investigate what the problem with the message is, but it is 
quite hard for me, as it happens internally in Kafka Streams combined with the 
.NET producer.

And I have already tested that this problem does not occur when I receive 
these concrete messages in an old-fashioned Kafka Consumer :-).




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4314) Kafka Streams documentation needs definitive rework and improvement

2017-08-01 Thread Seweryn Habdank-Wojewodzki (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seweryn Habdank-Wojewodzki resolved KAFKA-4314.
---
Resolution: Done

> Kafka Streams documentation needs definitive rework and improvement
> ---
>
> Key: KAFKA-4314
> URL: https://issues.apache.org/jira/browse/KAFKA-4314
> Project: Kafka
>  Issue Type: Bug
>Reporter: Seweryn Habdank-Wojewodzki
>
> On the basis of the Kafka Streams documentation, I had tried to build an 
> example in Java. It was not possible. The code pieces available on the 
> webpage http://kafka.apache.org/documentation#streams are taken out of any 
> context and they do not compile. 
> Also it seems they are taken from completely different parts of the 
> software, so even putting them together shows that they do not build any 
> reasonable example.
> I took the code of Kafka itself, and there are some examples of Kafka 
> Streams in it which are at least consistent. That is a very good basis for 
> repairing the main documentation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5686) Documentation inconsistency on the "Compression"

2017-08-01 Thread Seweryn Habdank-Wojewodzki (JIRA)
Seweryn Habdank-Wojewodzki created KAFKA-5686:
-

 Summary: Documentation inconsistency on the "Compression"
 Key: KAFKA-5686
 URL: https://issues.apache.org/jira/browse/KAFKA-5686
 Project: Kafka
  Issue Type: Bug
  Components: documentation
Affects Versions: 0.11.0.0
Reporter: Seweryn Habdank-Wojewodzki
Priority: Minor


On the page:

https://kafka.apache.org/documentation/

there is the sentence:

_Kafka supports GZIP, Snappy and LZ4 compression protocols. More details on 
compression can be found here._

In particular, the link under the word *here* describes very old compression 
settings, which are wrong for version 0.11.x.y.

Java API:
Also it would be nice to clearly state whether *compression.type* takes only a 
case-sensitive String as a value, or whether it is recommended to use e.g. 
{{CompressionType.GZIP.name()}} in the Java API.
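
For illustration, the string form that the documentation lists (a sketch; 
whether the enum constant is equivalent is exactly the open question above):

{code:java}
// compression.type takes a lowercase string value
props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "gzip"); // "none", "gzip", "snappy" or "lz4"
{code}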




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5530) Balancer is dancing with KStream all the time, and due to that Kafka cannot work :-)

2017-07-03 Thread Seweryn Habdank-Wojewodzki (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seweryn Habdank-Wojewodzki resolved KAFKA-5530.
---
Resolution: Not A Bug

The main problem, at least from what we observed in the end, was that our 
max.poll.interval.ms was simply *_too_* small.

Currently we set max.poll.interval.ms=1000000, and the Kafka Streams app (the 
consuming one) starts properly.

Perhaps it would be good to have a hint in the documentation that 
max.poll.interval.ms should not be too small, as it will cause endless 
rebalancing. 

The implicit explanation is here:
If poll() is not called before expiration of this timeout, then the consumer is 
considered failed and the group will rebalance in order to reassign the 
partitions to another member. 

But it is not stated explicitly that max.poll.interval.ms shall be somewhat 
big :-).

> Balancer is dancing with KStream all the time, and due to that Kafka cannot 
> work :-)
> 
>
> Key: KAFKA-5530
> URL: https://issues.apache.org/jira/browse/KAFKA-5530
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.2.0, 0.10.2.1, 0.11.0.0
> Environment: Linux, Windows
>Reporter: Seweryn Habdank-Wojewodzki
> Attachments: streamer_20-topics_1-thread-K-0.11.0.0.log.zip, 
> streamer_20-topics_4-threads-K-0.11.0.0.log.zip, 
> streamer_2-topics_1-thread-K-0.11.0.0.log.zip, 
> streamer_2-topics_4_threads-K-0.11.0.0.log.zip
>
>
> Dears,
> There are problems with the balancer in KStreams (v. 0.10.2.x) when 
> _num.stream.threads_ is bigger than 1 and the number of input topics is 
> bigger than 1.
> I am doing more or less such a setup in the code:
> {code:java}
> // loop over the inTopicName(s) {
> KStream<String, String> stringInput = kBuilder.stream( STRING_SERDE, 
> STRING_SERDE, inTopicName );
> stringInput.filter( streamFilter::passOrFilterMessages ).map( ndmNormalizer 
> ).to( outTopicName );
> // } end of loop
> streams = new KafkaStreams( kBuilder, streamsConfig );
> streams.cleanUp();
> streams.start();
> {code}
> And if *_num.stream.threads=4_* but there are 2 or more, yet fewer than 
> num.stream.threads, inTopicNames, then the complete application startup is 
> totally self-blocked: it writes endless strange things in the log and does 
> not start.
> Even more problematic is when the number of topics is higher than 
> _num.stream.threads_, which I had commented on in *KAFKA-5167 streams task 
> gets stuck after re-balance due to LockException*.
> I am attaching logs for two scenarios:
>  * when: 1 < num.stream.threads < number of topics (KAFKA-5167)
>  * when: 1 < number of topics < num.stream.threads (this ticket).
> I can fully reproduce the behaviour. I even found a workaround for it, but 
> it is not desired: when _num.stream.threads=1_ then all works fine :-( (for 
> K v. 0.10.2.x; v. 0.11.0.0 does not work at all).
> {code:bash}
> 2017-06-27 19:45:00 INFO StreamPartitionAssignor:466 - stream-thread 
> [StreamThread-3] Assigned tasks to clients as 
> {de0ead97-89d8-49b0-be84-876ca5b41cd8=[activeTasks: ([]) assignedTasks: ([]) 
> prevActiveTasks: ([]) prevAssignedTasks: ([]) capacity: 2.0 cost: 0.0]}.
> 2017-06-27 19:45:00 INFO AbstractCoordinator:375 - Successfully joined group 
> stream with generation 2701
> 2017-06-27 19:45:00 INFO AbstractCoordinator:375 - Successfully joined group 
> stream with generation 2701
> 2017-06-27 19:45:00 INFO ConsumerCoordinator:252 - Setting newly assigned 
> partitions [] for group stream
> 2017-06-27 19:45:00 INFO ConsumerCoordinator:252 - Setting newly assigned 
> partitions [] for group stream
> 2017-06-27 19:45:00 INFO StreamThread:228 - stream-thread [StreamThread-3] 
> New partitions [[]] assigned at the end of consumer rebalance.
> 2017-06-27 19:45:00 INFO StreamThread:228 - stream-thread [StreamThread-1] 
> New partitions [[]] assigned at the end of consumer rebalance.
> 2017-06-27 19:45:00 INFO ConsumerCoordinator:393 - Revoking previously 
> assigned partitions [] for group stream
> 2017-06-27 19:45:00 INFO StreamThread:254 - stream-thread [StreamThread-1] 
> partitions [[]] revoked at the beginning of consumer rebalance.
> 2017-06-27 19:45:00 INFO StreamThread:1012 - stream-thread [StreamThread-1] 
> Updating suspended tasks to contain active tasks [[]]
> 2017-06-27 19:45:00 INFO StreamThread:1019 - stream-thread [StreamThread-1] 
> Removing all active tasks [[]]
> 2017-06-27 19:45:00 INFO StreamThread:1034 - stream-thread [StreamThread-1] 
> Removing all standby tasks [[]]
> 2017-06-27 19:45:00 INFO AbstractCoordinator:407 - (Re-)joining group stream
> 2017-06-27 19:45:00 INFO StreamPartitionAssignor:290 - stream-thread 
> [StreamThread-1] Constructed client metadata 
> {de0ead97-89d8-49b0-be84-876ca5b41cd8=ClientMetadata{hostInfo=null, 
> 

[jira] [Created] (KAFKA-5530) Balancer is dancing with KStream all the time, and due to that Kafka cannot work :-)

2017-06-28 Thread Seweryn Habdank-Wojewodzki (JIRA)
Seweryn Habdank-Wojewodzki created KAFKA-5530:
-

 Summary: Balancer is dancing with KStream all the time, and due to 
that Kafka cannot work :-)
 Key: KAFKA-5530
 URL: https://issues.apache.org/jira/browse/KAFKA-5530
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.10.2.1
 Environment: Linux, Windows
Reporter: Seweryn Habdank-Wojewodzki
 Attachments: streamer-2.zip, streamer.zip

Hi,

I think I found a much easier way to reproduce the same behaviour.
I am doing more or less such a setup in the code:

{code:java}
// loop over the inTopicName(s) {

KStream<String, String> stringInput = kBuilder.stream( STRING_SERDE, 
STRING_SERDE, inTopicName );
stringInput.filter( streamFilter::passOrFilterMessages ).map( ndmNormalizer 
).to( outTopicName );

// } end of loop

streams = new KafkaStreams( kBuilder, streamsConfig );
streams.cleanUp();
streams.start();
{code}

And if *_num.stream.threads=4_* but there are 2 or more, yet fewer than 
num.stream.threads, inTopicNames, then the complete application startup is 
totally self-blocked, writing endlessly:



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-4908) consumer.properties logging warnings

2017-03-21 Thread Seweryn Habdank-Wojewodzki (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935288#comment-15935288
 ] 

Seweryn Habdank-Wojewodzki commented on KAFKA-4908:
---

So even more: deprecated configs shall be removed from the release, or? :-)

> consumer.properties logging warnings
> 
>
> Key: KAFKA-4908
> URL: https://issues.apache.org/jira/browse/KAFKA-4908
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.2.0
>Reporter: Seweryn Habdank-Wojewodzki
>Priority: Minor
>
> The default consumer.properties delivered with the Kafka package logs 
> warnings at startup of the console consumer:
> [2017-03-15 16:36:57,439] WARN The configuration 
> 'zookeeper.connection.timeout.ms' was supplied but isn't a known config. 
> (org.apache.kafka.clients.consumer.ConsumerConfig)
> [2017-03-15 16:36:57,455] WARN The configuration 'zookeeper.connect' was 
> supplied but isn't a known config. 
> (org.apache.kafka.clients.consumer.ConsumerConfig)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-4908) consumer.properties logging warnings

2017-03-16 Thread Seweryn Habdank-Wojewodzki (JIRA)
Seweryn Habdank-Wojewodzki created KAFKA-4908:
-

 Summary: consumer.properties logging warnings
 Key: KAFKA-4908
 URL: https://issues.apache.org/jira/browse/KAFKA-4908
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.10.2.0
Reporter: Seweryn Habdank-Wojewodzki
Priority: Minor


The default consumer.properties delivered with the Kafka package logs warnings 
at startup of the console consumer:

[2017-03-15 16:36:57,439] WARN The configuration 
'zookeeper.connection.timeout.ms' was supplied but isn't a known config. 
(org.apache.kafka.clients.consumer.ConsumerConfig)
[2017-03-15 16:36:57,455] WARN The configuration 'zookeeper.connect' was 
supplied but isn't a known config. 
(org.apache.kafka.clients.consumer.ConsumerConfig)





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4849) Bug in KafkaStreams documentation

2017-03-06 Thread Seweryn Habdank-Wojewodzki (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15898851#comment-15898851
 ] 

Seweryn Habdank-Wojewodzki commented on KAFKA-4849:
---

@Matthias: I just found a DOC inconsistency. My bug report is related to 
Apache Kafka. I do not follow your comment.
@ASF GitHubBot: Thanks!

> Bug in KafkaStreams documentation
> -
>
> Key: KAFKA-4849
> URL: https://issues.apache.org/jira/browse/KAFKA-4849
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Seweryn Habdank-Wojewodzki
>Assignee: Matthias J. Sax
>Priority: Minor
>
> On the page https://kafka.apache.org/documentation/streams, in the chapter 
> titled Application Configuration and Execution, the example contains the line:
>  
> settings.put(StreamsConfig.ZOOKEEPER_CONNECT_CONFIG, "zookeeper1:2181");
>  
> but ZOOKEEPER_CONNECT_CONFIG is deprecated in Kafka 0.10.2.0.
>  
> Also, the table on the page 
> https://kafka.apache.org/0102/documentation/#streamsconfigs is a bit 
> misleading:
> 1. Again, zookeeper.connect is deprecated.
> 2. client.id and zookeeper.connect are marked as high importance, but 
> according to http://docs.confluent.io/3.2.0/streams/developer-guide.html 
> neither is required to initialize the stream.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (KAFKA-4849) Bug in KafkaStreams documentation

2017-03-06 Thread Seweryn Habdank-Wojewodzki (JIRA)
Seweryn Habdank-Wojewodzki created KAFKA-4849:
-

 Summary: Bug in KafkaStreams documentation
 Key: KAFKA-4849
 URL: https://issues.apache.org/jira/browse/KAFKA-4849
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 0.10.2.0
Reporter: Seweryn Habdank-Wojewodzki
Priority: Minor


On the page https://kafka.apache.org/documentation/streams, in the chapter 
titled Application Configuration and Execution, the example contains the line:
 
settings.put(StreamsConfig.ZOOKEEPER_CONNECT_CONFIG, "zookeeper1:2181");
 
but ZOOKEEPER_CONNECT_CONFIG is deprecated in Kafka 0.10.2.0.
 
Also, the table on the page 
https://kafka.apache.org/0102/documentation/#streamsconfigs is a bit misleading:
1. Again, zookeeper.connect is deprecated.
2. client.id and zookeeper.connect are marked as high importance, but 
according to http://docs.confluent.io/3.2.0/streams/developer-guide.html 
neither is required to initialize the stream.
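
For illustration, a minimal 0.10.2.0 configuration sketch that avoids the 
deprecated key; the application id and broker address are placeholders:

{code:java}
Properties settings = new Properties();
settings.put( StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app" );
settings.put( StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka1:9092" );
// no StreamsConfig.ZOOKEEPER_CONNECT_CONFIG -- the key is deprecated
StreamsConfig config = new StreamsConfig( settings );
{code}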




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (KAFKA-4316) Kafka Streams 0.10.0.1 does not run on Windows x64

2016-10-19 Thread Seweryn Habdank-Wojewodzki (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15589756#comment-15589756
 ] 

Seweryn Habdank-Wojewodzki commented on KAFKA-4316:
---

Thanks a lot for the very quick response!

We are using bleeding-edge features of Kafka (especially Streams), so we are 
waiting for every new release and all the bug fixes :-).
I will talk with our team and we will schedule this update when Kafka 0.10.1 
comes out.

> Kafka Streams 0.10.0.1 does not run on Windows x64
> --
>
> Key: KAFKA-4316
> URL: https://issues.apache.org/jira/browse/KAFKA-4316
> Project: Kafka
>  Issue Type: Bug
>Reporter: Seweryn Habdank-Wojewodzki
>
> We encountered the problem that starting an application with Kafka Streams 
> 0.10.0.1 leads to a runtime exception: the RocksDB DLL is missing on 
> Windows x64 machines. 
> Part of the stacktrace:
> {code}
> Caused by: java.lang.RuntimeException: librocksdbjni-win64.dll was not found 
> inside JAR.
> at 
> org.rocksdb.NativeLibraryLoader.loadLibraryFromJarToTemp(NativeLibraryLoader.java:106)
> {code}
> This is expected, as Kafka 0.10.0.1 uses RocksDB 4.8.0. This RocksDB release 
> has a broken Java API. 
> See: 
> https://github.com/facebook/rocksdb/issues/1177
> https://github.com/facebook/rocksdb/issues/1302
> This critical (for Windows) bug was fixed in RocksDB 4.9.0.
> Please update Kafka's gradle\dependencies.gradle to use at least 4.9.0, 
> i.e. the line should read rocksDB: "4.9.0".
> I tested some basic Kafka Streams functionality with RocksDB 4.11.2; it was 
> promising and the bug was definitely gone.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4316) Kafka Streams 0.10.0.1 does not run on Windows x64

2016-10-19 Thread Seweryn Habdank-Wojewodzki (JIRA)
Seweryn Habdank-Wojewodzki created KAFKA-4316:
-

 Summary: Kafka Streams 0.10.0.1 does not run on Windows x64
 Key: KAFKA-4316
 URL: https://issues.apache.org/jira/browse/KAFKA-4316
 Project: Kafka
  Issue Type: Bug
Reporter: Seweryn Habdank-Wojewodzki


We encountered the problem that starting an application with Kafka Streams 
0.10.0.1 leads to a runtime exception: the RocksDB DLL is missing on Windows 
x64 machines. 

Part of the stacktrace:
Caused by: java.lang.RuntimeException: librocksdbjni-win64.dll was not found 
inside JAR.
at 
org.rocksdb.NativeLibraryLoader.loadLibraryFromJarToTemp(NativeLibraryLoader.java:106)

This is expected, as Kafka 0.10.0.1 uses RocksDB 4.8.0. This RocksDB release 
has a broken Java API. 
See: 
https://github.com/facebook/rocksdb/issues/1177
https://github.com/facebook/rocksdb/issues/1302

This critical (for Windows) bug was fixed in RocksDB 4.9.0.

Please update Kafka's gradle\dependencies.gradle to use at least 4.9.0, 
i.e. the line should read rocksDB: "4.9.0".

I tested some basic Kafka Streams functionality with RocksDB 4.11.2; it was 
promising and the bug was definitely gone.
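
A sketch of the proposed change, assuming gradle\dependencies.gradle keeps its 
versions in a map as above (surrounding entries elided):

{code}
versions += [
  rocksDB: "4.9.0"  // bumped from 4.8.0
]
{code}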



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4315) Kafka Connect documentation problems

2016-10-19 Thread Seweryn Habdank-Wojewodzki (JIRA)
Seweryn Habdank-Wojewodzki created KAFKA-4315:
-

 Summary: Kafka Connect documentation problems
 Key: KAFKA-4315
 URL: https://issues.apache.org/jira/browse/KAFKA-4315
 Project: Kafka
  Issue Type: Bug
Reporter: Seweryn Habdank-Wojewodzki


Based on the documentation of Kafka Connect - 
http://kafka.apache.org/documentation#connect - I tried to build an example in 
Java. It was not possible. 

The code pieces available on the webpage are taken out of context and do not 
compile. 

They also seem to be taken from entirely different software components, so even 
putting them together shows that they do not form any reasonable example. They 
also tend to be very complex, where I would expect API examples to drive 
"Hello World"-like code.

There are also only weak connections between the examples in the Kafka 
documentation and the Kafka Connect tool code available in the Kafka sources.

Finally, it would be nice to have a statement in the Kafka documentation about 
which parts of the API are stable and which are unstable or experimental.
I saw many (~20) such remarks in the Kafka code - I mean remarks that an API is 
unstable. This note is very important, as we plan additional effort to prepare 
facades around unstable code.

In my opinion there is nothing wrong with an experimental API, but all of this 
shall be well documented. The current state of the main Kafka documentation 
gives the impression that Kafka Connect is a well-tested, consistent and stable 
feature set, but it is not, which leads to confusion in effort management.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4314) Kafka Streams documentation needs definitive rework and improvement

2016-10-19 Thread Seweryn Habdank-Wojewodzki (JIRA)
Seweryn Habdank-Wojewodzki created KAFKA-4314:
-

 Summary: Kafka Streams documentation needs definitive rework and 
improvement
 Key: KAFKA-4314
 URL: https://issues.apache.org/jira/browse/KAFKA-4314
 Project: Kafka
  Issue Type: Bug
Reporter: Seweryn Habdank-Wojewodzki


Based on the documentation of Kafka Streams, I tried to build an example in 
Java. It was not possible. The code pieces available on the webpage 
http://kafka.apache.org/documentation#streams are taken out of context and do 
not compile. 
They also seem to be taken from entirely different software components, so even 
putting them together shows that they do not form any reasonable example.

I took the code of Kafka itself, and there are some Kafka Streams examples in 
it which are at least consistent. They are a very good basis for repairing the 
main documentation.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4312) void KTableImpl.writeAsText(String filePath) throws NullPointerException when filePath is empty String

2016-10-18 Thread Seweryn Habdank-Wojewodzki (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seweryn Habdank-Wojewodzki updated KAFKA-4312:
--
Summary: void KTableImpl.writeAsText(String filePath) throws 
NullPointerException when filePath is empty String  (was: void 
writeAsText(String filePath) throws NullPointerException when filePath is empty 
String)

> void KTableImpl.writeAsText(String filePath) throws NullPointerException when 
> filePath is empty String
> --
>
> Key: KAFKA-4312
> URL: https://issues.apache.org/jira/browse/KAFKA-4312
> Project: Kafka
>  Issue Type: Bug
>Reporter: Seweryn Habdank-Wojewodzki
>Priority: Minor
>
> The KTable method 
> void writeAsText(String filePath) 
> throws NullPointerException when filePath is empty String = "".
> It is pretty uninformative.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4312) void writeAsText(String filePath) throws NullPointerException when filePath is empty String

2016-10-18 Thread Seweryn Habdank-Wojewodzki (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seweryn Habdank-Wojewodzki updated KAFKA-4312:
--
Description: 
The KTable method 
void writeAsText(String filePath) 
throws NullPointerException when filePath is empty String = "".

It is pretty uninformative.

  was:
The KTable method 
void writeAsText(String filePath) 
throws NullPointerException when filePath is empty.

It is pretty uninformative.


> void writeAsText(String filePath) throws NullPointerException when filePath 
> is empty String
> ---
>
> Key: KAFKA-4312
> URL: https://issues.apache.org/jira/browse/KAFKA-4312
> Project: Kafka
>  Issue Type: Bug
>Reporter: Seweryn Habdank-Wojewodzki
>Priority: Minor
>
> The KTable method 
> void writeAsText(String filePath) 
> throws NullPointerException when filePath is empty String = "".
> It is pretty uninformative.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4312) void writeAsText(String filePath) throws NullPointerException when filePath is empty String

2016-10-18 Thread Seweryn Habdank-Wojewodzki (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seweryn Habdank-Wojewodzki updated KAFKA-4312:
--
Summary: void writeAsText(String filePath) throws NullPointerException when 
filePath is empty String  (was: void writeAsText(String filePath) throws 
NullPointerException when filePath is empty)

> void writeAsText(String filePath) throws NullPointerException when filePath 
> is empty String
> ---
>
> Key: KAFKA-4312
> URL: https://issues.apache.org/jira/browse/KAFKA-4312
> Project: Kafka
>  Issue Type: Bug
>Reporter: Seweryn Habdank-Wojewodzki
>Priority: Minor
>
> The KTable method 
> void writeAsText(String filePath) 
> throws NullPointerException when filePath is empty.
> It is pretty uninformative.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4312) void writeAsText(String filePath) throws NullPointerException when filePath is empty

2016-10-18 Thread Seweryn Habdank-Wojewodzki (JIRA)
Seweryn Habdank-Wojewodzki created KAFKA-4312:
-

 Summary: void writeAsText(String filePath) throws 
NullPointerException when filePath is empty
 Key: KAFKA-4312
 URL: https://issues.apache.org/jira/browse/KAFKA-4312
 Project: Kafka
  Issue Type: Bug
Reporter: Seweryn Habdank-Wojewodzki
Priority: Minor


The KTable method 
void writeAsText(String filePath) 
throws a NullPointerException when filePath is empty.

The resulting exception is pretty uninformative.
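
A hypothetical reproduction sketch; the builder setup and topic name are 
placeholders, not from the original report:

{code:java}
KStreamBuilder builder = new KStreamBuilder();
KTable<String, String> table = builder.table( "some-topic" );
table.writeAsText( "" );  // empty path -> bare NullPointerException
{code}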



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)