[jira] [Commented] (SPARK-20036) impossible to read a whole kafka topic using kafka 0.10 and spark 2.0.0

2017-04-18 Thread Daniel Nuriyev (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15973544#comment-15973544
 ] 

Daniel Nuriyev commented on SPARK-20036:


I placed the explicit dependencies in my pom because spark-streaming-kafka's 
internal dependency is on kafka 0.10.0.0.
I was able to overcome the originally reported issue only after I explicitly 
specified 0.10.2.0 in my pom.
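
The version pin described above might look like the following pom fragment. This is a sketch based only on the versions named in the comment, not the reporter's actual pom:

```xml
<!-- Hypothetical override: declaring the kafka client jar directly so it
     takes precedence over the 0.10.0.0 version pulled in transitively by
     spark-streaming-kafka-0-10 (Maven's "nearest wins" mediation) -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.2.0</version>
</dependency>
```

Because Maven resolves a version conflict in favor of the dependency nearest the root of the tree, a direct declaration like this wins over the transitive one.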

> impossible to read a whole kafka topic using kafka 0.10 and spark 2.0.0 
> 
>
> Key: SPARK-20036
> URL: https://issues.apache.org/jira/browse/SPARK-20036
> Project: Spark
>  Issue Type: Bug
>  Components: Input/Output
>Affects Versions: 2.0.0
>Reporter: Daniel Nuriyev
> Attachments: Main.java, pom.xml
>
>
> I use kafka 0.10.1 and java code with the following dependencies:
> <dependency>
>     <groupId>org.apache.kafka</groupId>
>     <artifactId>kafka_2.11</artifactId>
>     <version>0.10.1.1</version>
> </dependency>
> <dependency>
>     <groupId>org.apache.kafka</groupId>
>     <artifactId>kafka-clients</artifactId>
>     <version>0.10.1.1</version>
> </dependency>
> <dependency>
>     <groupId>org.apache.spark</groupId>
>     <artifactId>spark-streaming_2.11</artifactId>
>     <version>2.0.0</version>
> </dependency>
> <dependency>
>     <groupId>org.apache.spark</groupId>
>     <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
>     <version>2.0.0</version>
> </dependency>
> The code tries to read the whole topic using:
> kafkaParams.put("auto.offset.reset", "earliest");
> Using 5 second batches:
> jssc = new JavaStreamingContext(conf, Durations.seconds(5));
> Each batch returns empty.
> While debugging the code I noticed that KafkaUtils.fixKafkaParams is called, 
> which overrides "earliest" with "none".
> Whether this is related or not, when I used kafka 0.8 on the client with 
> kafka 0.10.1 on the server, I could read the whole topic.
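
The override described in the report can be illustrated with a minimal standalone sketch. This is hypothetical code mimicking the reported behaviour, not Spark's actual implementation (which lives in KafkaUtils.scala):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the reported behaviour: the user's "earliest" setting is
// replaced with "none" before the params reach the Kafka consumer.
public class FixKafkaParamsSketch {
    static Map<String, Object> fixKafkaParams(Map<String, Object> userParams) {
        Map<String, Object> fixed = new HashMap<>(userParams);
        // With "none", the consumer throws instead of seeking to the
        // beginning, so a mismatch elsewhere can surface as empty batches.
        fixed.put("auto.offset.reset", "none");
        fixed.put("enable.auto.commit", false);
        return fixed;
    }

    public static void main(String[] args) {
        Map<String, Object> userParams = new HashMap<>();
        userParams.put("auto.offset.reset", "earliest"); // what the reporter set
        Map<String, Object> fixed = fixKafkaParams(userParams);
        System.out.println(fixed.get("auto.offset.reset")); // prints "none"
    }
}
```

Spark reportedly performs this override so that executors do not decide offsets on their own; the point of the sketch is only that the user-facing "earliest" setting does not survive.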



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-20036) impossible to read a whole kafka topic using kafka 0.10 and spark 2.0.0

2017-04-12 Thread Daniel Nuriyev (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15966939#comment-15966939
 ] 

Daniel Nuriyev commented on SPARK-20036:


Thank you, Cody. I will do as you say and report what happens.




[jira] [Commented] (SPARK-20037) impossible to set kafka offsets using kafka 0.10 and spark 2.0.0

2017-03-27 Thread Daniel Nuriyev (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15943654#comment-15943654
 ] 

Daniel Nuriyev commented on SPARK-20037:


Thank you for your feedback. This problem started when I upgraded the kafka 
client jars. But since you can't reproduce it, I'll dig in myself.

> impossible to set kafka offsets using kafka 0.10 and spark 2.0.0
> 
>
> Key: SPARK-20037
> URL: https://issues.apache.org/jira/browse/SPARK-20037
> Project: Spark
>  Issue Type: Bug
>  Components: Input/Output
>Affects Versions: 2.0.0
>Reporter: Daniel Nuriyev
> Attachments: Main.java, offsets.png
>
>
> I use kafka 0.10.1 and java code with the following dependencies:
> <dependency>
>     <groupId>org.apache.kafka</groupId>
>     <artifactId>kafka_2.11</artifactId>
>     <version>0.10.1.1</version>
> </dependency>
> <dependency>
>     <groupId>org.apache.kafka</groupId>
>     <artifactId>kafka-clients</artifactId>
>     <version>0.10.1.1</version>
> </dependency>
> <dependency>
>     <groupId>org.apache.spark</groupId>
>     <artifactId>spark-streaming_2.11</artifactId>
>     <version>2.0.0</version>
> </dependency>
> <dependency>
>     <groupId>org.apache.spark</groupId>
>     <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
>     <version>2.0.0</version>
> </dependency>
> The code tries to read a topic starting from given offsets.
> The topic has 4 partitions that start somewhere before 585000 and end after 
> 674000, so I wanted to read all partitions starting at 585000:
> fromOffsets.put(new TopicPartition(topic, 0), 585000L);
> fromOffsets.put(new TopicPartition(topic, 1), 585000L);
> fromOffsets.put(new TopicPartition(topic, 2), 585000L);
> fromOffsets.put(new TopicPartition(topic, 3), 585000L);
> Using 5 second batches:
> jssc = new JavaStreamingContext(conf, Durations.seconds(5));
> The code immediately throws:
> Beginning offset 585000 is after the ending offset 584464 for topic 
> commerce_item_expectation partition 1
> This does not make sense, because 584464 is where this topic/partition starts, not where it ends.
> I use this as a base: 
> https://spark.apache.org/docs/latest/streaming-kafka-0-10-integration.html
> But I use a direct stream:
> KafkaUtils.createDirectStream(jssc, LocationStrategies.PreferConsistent(),
>     ConsumerStrategies.Subscribe(topics, kafkaParams, fromOffsets));
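
The exception quoted above has the shape of a simple offset-range sanity check failing. A minimal standalone sketch of such a check (hypothetical, not Spark's code; offsets taken from the report):

```java
// Sketch: computing a batch's record count from an offset range and
// rejecting ranges whose reported ending offset precedes the requested
// beginning offset -- the situation described in the report.
public class OffsetRangeCheckSketch {
    static long recordCount(long beginningOffset, long endingOffset) {
        long count = endingOffset - beginningOffset;
        if (count < 0) {
            throw new IllegalArgumentException(
                "Beginning offset " + beginningOffset
                    + " is after the ending offset " + endingOffset);
        }
        return count;
    }

    public static void main(String[] args) {
        // Healthy range: the partition ends after the requested start.
        System.out.println(recordCount(585000L, 674000L)); // prints 89000
        // Range from the report: 584464 is actually the partition's
        // *starting* offset, so the check fires spuriously.
        try {
            recordCount(585000L, 584464L);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

If the consumer returns a partition's beginning offset where its ending offset is expected (for example because of a client/broker version mismatch), exactly this error appears even though the requested offset is valid.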






[jira] [Updated] (SPARK-20037) impossible to set kafka offsets using kafka 0.10 and spark 2.0.0

2017-03-27 Thread Daniel Nuriyev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Nuriyev updated SPARK-20037:
---
Attachment: Main.java




[jira] [Commented] (SPARK-20037) impossible to set kafka offsets using kafka 0.10 and spark 2.0.0

2017-03-27 Thread Daniel Nuriyev (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15943642#comment-15943642
 ] 

Daniel Nuriyev commented on SPARK-20037:


My setup is absolutely simple: a topic whose offsets start at X, and a single 
Java method that opens a streaming context and reads the topic starting from an 
existing offset. The only dependencies are listed above.
I do not think this is a spark problem; I think it is a problem in one of the 
kafka jars.
I will attach the Java method.
Have you tried reproducing it? For me it's consistent.




[jira] [Updated] (SPARK-20037) impossible to set kafka offsets using kafka 0.10 and spark 2.0.0

2017-03-23 Thread Daniel Nuriyev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Nuriyev updated SPARK-20037:
---
Attachment: offsets.png




[jira] [Commented] (SPARK-20037) impossible to set kafka offsets using kafka 0.10 and spark 2.0.0

2017-03-23 Thread Daniel Nuriyev (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15938600#comment-15938600
 ] 

Daniel Nuriyev commented on SPARK-20037:


This is an exception from partition 1 of another topic:
Beginning offset 290 is after the ending offset 2806790 for topic 
cimba_raw_inbox partition 1.
I am attaching a screenshot from Kafka Tool that shows the offsets of that 
topic/partition.




[jira] [Commented] (SPARK-20036) impossible to read a whole kafka topic using kafka 0.10 and spark 2.0.0

2017-03-23 Thread Daniel Nuriyev (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15938472#comment-15938472
 ] 

Daniel Nuriyev commented on SPARK-20036:


To provide more info I am attaching the pom.xml and the code, with the comments 
I used to narrow down the issue.
Debugging led me to KafkaUtils.fixKafkaParams, which replaces "earliest" with 
"none":
https://github.com/apache/spark/blob/master/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaUtils.scala
KafkaUtils.fixKafkaParams is called from the package-private class 
DirectKafkaInputDStream.
I do not know if this is the reason.

The way to reproduce is to run the attached code against a topic that already 
has entries with offsets > 0. The problem is that no existing entries are read; 
only new entries are read.
I could consistently reproduce the problem.
The problem appeared when we upgraded the kafka client from 0.8 to 0.10.0.




[jira] [Updated] (SPARK-20036) impossible to read a whole kafka topic using kafka 0.10 and spark 2.0.0

2017-03-23 Thread Daniel Nuriyev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Nuriyev updated SPARK-20036:
---
Attachment: Main.java
pom.xml




[jira] [Updated] (SPARK-20037) impossible to set kafka offsets using kafka 0.10 and spark 2.0.0

2017-03-20 Thread Daniel Nuriyev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Nuriyev updated SPARK-20037:
---
Description: 
I use kafka 0.10.1 and java code with the following dependencies:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>0.10.1.1</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.1.1</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.11</artifactId>
    <version>2.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
    <version>2.0.0</version>
</dependency>


The code tries to read a topic starting from given offsets.
The topic has 4 partitions that start somewhere before 585000 and end after 
674000, so I wanted to read all partitions starting at 585000:

fromOffsets.put(new TopicPartition(topic, 0), 585000L);
fromOffsets.put(new TopicPartition(topic, 1), 585000L);
fromOffsets.put(new TopicPartition(topic, 2), 585000L);
fromOffsets.put(new TopicPartition(topic, 3), 585000L);
Using 5 second batches:
jssc = new JavaStreamingContext(conf, Durations.seconds(5));

The code immediately throws:
Beginning offset 585000 is after the ending offset 584464 for topic 
commerce_item_expectation partition 1

This does not make sense, because 584464 is where this topic/partition starts, not where it ends.

I use this as a base: 
https://spark.apache.org/docs/latest/streaming-kafka-0-10-integration.html
But I use a direct stream:
KafkaUtils.createDirectStream(jssc, LocationStrategies.PreferConsistent(),
    ConsumerStrategies.Subscribe(topics, kafkaParams, fromOffsets));



  was:
I use kafka 0.10.1 and java code with the following dependencies:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>0.10.1.1</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.1.1</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.11</artifactId>
    <version>2.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
    <version>2.0.0</version>
</dependency>


The code tries to read the a topic starting with offsets. 
The topic has 4 partitions that start somewhere before 300 and end after 
300. So I wanted to read all partitions starting with 300

fromOffsets.put(new TopicPartition(topic, 0), 300L);
fromOffsets.put(new TopicPartition(topic, 1), 300L);
fromOffsets.put(new TopicPartition(topic, 2), 300L);
fromOffsets.put(new TopicPartition(topic, 3), 300L);

Using 5 second batches:
jssc = new JavaStreamingContext(conf, Durations.seconds(5));

The code immediately throws:
numRecords must not be negative

I use this as a base: 
https://spark.apache.org/docs/latest/streaming-kafka-0-10-integration.html
But I use direct stream:
KafkaUtils.createDirectStream(jssc,LocationStrategies.PreferConsistent(),
ConsumerStrategies.Subscribe(
topics, kafkaParams, fromOffsets
)
)







[jira] [Updated] (SPARK-20036) impossible to read a whole kafka topic using kafka 0.10 and spark 2.0.0

2017-03-20 Thread Daniel Nuriyev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Nuriyev updated SPARK-20036:
---
Description: 
I use kafka 0.10.1 and java code with the following dependencies:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>0.10.1.1</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.1.1</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.11</artifactId>
    <version>2.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
    <version>2.0.0</version>
</dependency>

The code tries to read the whole topic using:
kafkaParams.put("auto.offset.reset", "earliest");
Using 5 second batches:
jssc = new JavaStreamingContext(conf, Durations.seconds(5));
Each batch returns empty.
While debugging the code I noticed that KafkaUtils.fixKafkaParams is called, 
which overrides "earliest" with "none".
Whether this is related or not, when I used kafka 0.8 on the client with kafka 
0.10.1 on the server, I could read the whole topic.


  was:
I use kafka 0.10.1 and java code with the following dependencies:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>0.10.1.1</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.1.1</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.11</artifactId>
    <version>2.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
    <version>2.0.0</version>
</dependency>

The code tries to read the whole topic using:
kafkaParams.put("auto.offset.reset", "earliest");
Using 5 second batches:
jssc = new JavaStreamingContext(conf, Durations.seconds(5));
Each batch returns empty.
I debugged the code I noticed that KafkaUtils.fixKafkaParams is called that 
overrides "earliest" with "none".
Whether this is related or not, when I used kafka 0.8 on the client with kafka 
0.10.1 on the server, I could read the whole topic.

I use this as a base: 
https://spark.apache.org/docs/latest/streaming-kafka-0-10-integration.html
But I use direct stream:
KafkaUtils.createDirectStream(jssc,LocationStrategies.PreferConsistent(),
ConsumerStrategies.Subscribe(
topics, kafkaParams, fromOffsets
)
)






[jira] [Updated] (SPARK-20036) impossible to read a whole kafka topic using kafka 0.10 and spark 2.0.0

2017-03-20 Thread Daniel Nuriyev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Nuriyev updated SPARK-20036:
---
Description: 
I use kafka 0.10.1 and java code with the following dependencies:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>0.10.1.1</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.1.1</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.11</artifactId>
    <version>2.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
    <version>2.0.0</version>
</dependency>

The code tries to read the whole topic using:
kafkaParams.put("auto.offset.reset", "earliest");
Using 5 second batches:
jssc = new JavaStreamingContext(conf, Durations.seconds(5));
Each batch returns empty.
While debugging the code I noticed that KafkaUtils.fixKafkaParams is called, 
which overrides "earliest" with "none".
Whether this is related or not, when I used kafka 0.8 on the client with kafka 
0.10.1 on the server, I could read the whole topic.

I use this as a base: 
https://spark.apache.org/docs/latest/streaming-kafka-0-10-integration.html
But I use a direct stream:
KafkaUtils.createDirectStream(jssc, LocationStrategies.PreferConsistent(),
    ConsumerStrategies.Subscribe(topics, kafkaParams, fromOffsets));


  was:
I use kafka 0.10.1 and java code with the following dependencies:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>0.10.1.1</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.1.1</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.11</artifactId>
    <version>2.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
    <version>2.0.0</version>
</dependency>

The code tries to read the whole topic using:
kafkaParams.put("auto.offset.reset", "earliest");
Using 5 second batches:
jssc = new JavaStreamingContext(conf, Durations.seconds(5));
Each batch returns empty.
I debugged the code I noticed that KafkaUtils.fixKafkaParams is called that 
overrides "earliest" with "none".
Whether this is related or not, when I used kafka 0.8 on the client with kafka 
0.10.1 on the server, I could read the whole topic.






[jira] [Created] (SPARK-20037) impossible to set kafka offsets using kafka 0.10 and spark 2.0.0

2017-03-20 Thread Daniel Nuriyev (JIRA)
Daniel Nuriyev created SPARK-20037:
--

 Summary: impossible to set kafka offsets using kafka 0.10 and 
spark 2.0.0
 Key: SPARK-20037
 URL: https://issues.apache.org/jira/browse/SPARK-20037
 Project: Spark
  Issue Type: Bug
  Components: Input/Output
Affects Versions: 2.0.0
Reporter: Daniel Nuriyev
Priority: Critical
 Fix For: 2.0.3


I use kafka 0.10.1 and java code with the following dependencies:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>0.10.1.1</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.1.1</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.11</artifactId>
    <version>2.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
    <version>2.0.0</version>
</dependency>


The code tries to read a topic starting from given offsets.
The topic has 4 partitions that start somewhere before 300 and end after 
300. So I wanted to read all partitions starting with 300:

fromOffsets.put(new TopicPartition(topic, 0), 300L);
fromOffsets.put(new TopicPartition(topic, 1), 300L);
fromOffsets.put(new TopicPartition(topic, 2), 300L);
fromOffsets.put(new TopicPartition(topic, 3), 300L);

Using 5 second batches:
jssc = new JavaStreamingContext(conf, Durations.seconds(5));

The code immediately throws:
numRecords must not be negative

I use this as a base: 
https://spark.apache.org/docs/latest/streaming-kafka-0-10-integration.html
But I use a direct stream:
KafkaUtils.createDirectStream(jssc, LocationStrategies.PreferConsistent(),
    ConsumerStrategies.Subscribe(topics, kafkaParams, fromOffsets));








[jira] [Updated] (SPARK-20036) impossible to read a whole kafka topic using kafka 0.10 and spark 2.0.0

2017-03-20 Thread Daniel Nuriyev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Nuriyev updated SPARK-20036:
---
Description: 
I use kafka 0.10.1 and java code with the following dependencies:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>0.10.1.1</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.1.1</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.11</artifactId>
    <version>2.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
    <version>2.0.0</version>
</dependency>

The code tries to read the whole topic using:
kafkaParams.put("auto.offset.reset", "earliest");
Using 5 second batches:
jssc = new JavaStreamingContext(conf, Durations.seconds(5));
Each batch returns empty.
While debugging the code I noticed that KafkaUtils.fixKafkaParams is called, 
which overrides "earliest" with "none".
Whether this is related or not, when I used kafka 0.8 on the client with kafka 
0.10.1 on the server, I could read the whole topic.


  was:
I use kafka 0.10.1 and java code with the following dependencies:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>0.10.1.1</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.1.1</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.11</artifactId>
    <version>2.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
    <version>2.0.0</version>
</dependency>

The code tries to read the whole topic using:
kafkaParams.put("auto.offset.reset", "earliest");
Using 5 second batches:
jssc = new JavaStreamingContext(conf, Durations.seconds(5));
Each batch returns empty.
While debugging the code I noticed that KafkaUtils.fixKafkaParams is called, 
which overrides "earliest" with "none".
Whether this is related or not, when I used kafka 0.8 on the client with kafka 
0.10.1 on the server, I could read the whole topic.



> impossible to read a whole kafka topic using kafka 0.10 and spark 2.0.0 
> 
>
> Key: SPARK-20036
> URL: https://issues.apache.org/jira/browse/SPARK-20036
> Project: Spark
>  Issue Type: Bug
>  Components: Input/Output
>Affects Versions: 2.0.0
>Reporter: Daniel Nuriyev
>Priority: Critical
> Fix For: 2.0.3
>
>
> I use kafka 0.10.1 and java code with the following dependencies:
> 
> <dependency>
>     <groupId>org.apache.kafka</groupId>
>     <artifactId>kafka_2.11</artifactId>
>     <version>0.10.1.1</version>
> </dependency>
> <dependency>
>     <groupId>org.apache.kafka</groupId>
>     <artifactId>kafka-clients</artifactId>
>     <version>0.10.1.1</version>
> </dependency>
> <dependency>
>     <groupId>org.apache.spark</groupId>
>     <artifactId>spark-streaming_2.11</artifactId>
>     <version>2.0.0</version>
> </dependency>
> <dependency>
>     <groupId>org.apache.spark</groupId>
>     <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
>     <version>2.0.0</version>
> </dependency>
> 
> The code tries to read the whole topic using:
> kafkaParams.put("auto.offset.reset", "earliest");
> Using 5 second batches:
> jssc = new JavaStreamingContext(conf, Durations.seconds(5));
> Each batch returns empty.
> While debugging the code I noticed that KafkaUtils.fixKafkaParams is called, 
> which overrides "earliest" with "none".
> Whether this is related or not, when I used kafka 0.8 on the client with 
> kafka 0.10.1 on the server, I could read the whole topic.






[jira] [Updated] (SPARK-20036) impossible to read a whole kafka topic using kafka 0.10 and spark 2.0.0

2017-03-20 Thread Daniel Nuriyev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-20036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Nuriyev updated SPARK-20036:
---
Component/s: (was: Spark Core)
 Input/Output

> impossible to read a whole kafka topic using kafka 0.10 and spark 2.0.0 
> 
>
> Key: SPARK-20036
> URL: https://issues.apache.org/jira/browse/SPARK-20036
> Project: Spark
>  Issue Type: Bug
>  Components: Input/Output
>Affects Versions: 2.0.0
>Reporter: Daniel Nuriyev
>Priority: Critical
> Fix For: 2.0.3
>
>
> I use kafka 0.10.1 and java code with the following dependencies:
> 
> <dependency>
>     <groupId>org.apache.kafka</groupId>
>     <artifactId>kafka_2.11</artifactId>
>     <version>0.10.1.1</version>
> </dependency>
> <dependency>
>     <groupId>org.apache.kafka</groupId>
>     <artifactId>kafka-clients</artifactId>
>     <version>0.10.1.1</version>
> </dependency>
> <dependency>
>     <groupId>org.apache.spark</groupId>
>     <artifactId>spark-streaming_2.11</artifactId>
>     <version>2.0.0</version>
> </dependency>
> <dependency>
>     <groupId>org.apache.spark</groupId>
>     <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
>     <version>2.0.0</version>
> </dependency>
> 
> The code tries to read the whole topic using:
> kafkaParams.put("auto.offset.reset", "earliest");
> Using 5 second batches:
> jssc = new JavaStreamingContext(conf, Durations.seconds(5));
> Each batch returns empty.
> While debugging the code I noticed that KafkaUtils.fixKafkaParams is called, 
> which overrides "earliest" with "none".
> Whether this is related or not, when I used kafka 0.8 on the client with 
> kafka 0.10.1 on the server, I could read the whole topic.






[jira] [Created] (SPARK-20036) impossible to read a whole kafka topic using kafka 0.10 and spark 2.0.0

2017-03-20 Thread Daniel Nuriyev (JIRA)
Daniel Nuriyev created SPARK-20036:
--

 Summary: impossible to read a whole kafka topic using kafka 0.10 
and spark 2.0.0 
 Key: SPARK-20036
 URL: https://issues.apache.org/jira/browse/SPARK-20036
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 2.0.0
Reporter: Daniel Nuriyev
Priority: Critical
 Fix For: 2.0.3


I use kafka 0.10.1 and java code with the following dependencies:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>0.10.1.1</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.1.1</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.11</artifactId>
    <version>2.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
    <version>2.0.0</version>
</dependency>

The code tries to read the whole topic using:
kafkaParams.put("auto.offset.reset", "earliest");
Using 5 second batches:
jssc = new JavaStreamingContext(conf, Durations.seconds(5));
Each batch returns empty.
While debugging the code I noticed that KafkaUtils.fixKafkaParams is called, 
which overrides "earliest" with "none".
Whether this is related or not, when I used kafka 0.8 on the client with kafka 
0.10.1 on the server, I could read the whole topic.
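A later comment on this thread reports that spark-streaming transitively pulls 
in Kafka 0.10.0.0, and that the empty batches were overcome only by explicitly 
pinning the Kafka client version in the pom. A hedged sketch of that override 
(version 0.10.2.0 per the reporter's follow-up; the other dependencies stay as 
listed above):

```xml
<!-- Explicitly pin kafka-clients so it wins over the 0.10.0.0 version
     pulled in transitively by spark-streaming-kafka-0-10; per the
     reporter, 0.10.2.0 resolved the empty-batch behavior. -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.2.0</version>
</dependency>
```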



