Github user YuvalItzchakov commented on a diff in the pull request:
https://github.com/apache/spark/pull/21997#discussion_r207716764
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaSourceRDD.scala
---
@@ -124,8 +124,6 @@ private[kafka010] class
Github user YuvalItzchakov commented on the issue:
https://github.com/apache/spark/pull/21997
This is the same as https://github.com/apache/spark/pull/21983, only merged
against master (after @felixcheung's comment). Should be merged to branch-2.3
Github user YuvalItzchakov closed the pull request at:
https://github.com/apache/spark/pull/21983
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h
GitHub user YuvalItzchakov opened a pull request:
https://github.com/apache/spark/pull/21997
[SPARK-24987][SS] - Fix Kafka consumer leak when no new offsets for
TopicPartition
## What changes were proposed in this pull request?
This small fix adds a `consumer.release()` call to
Github user YuvalItzchakov commented on the issue:
https://github.com/apache/spark/pull/21983
@felixcheung Thanks, will do.
Github user YuvalItzchakov commented on the issue:
https://github.com/apache/spark/pull/21983
Should I create a separate PR for the master branch?
GitHub user YuvalItzchakov opened a pull request:
https://github.com/apache/spark/pull/21983
SPARK-24987 - Fix Kafka consumer leak when no new offsets for TopicPartition
## What changes were proposed in this pull request?
This small fix adds a `consumer.release()` call to
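The PR description is truncated here, but the pattern it describes (a code path that skips releasing a pooled Kafka consumer when a TopicPartition has no new offsets) can be sketched in a few lines. This is a simplified, self-contained illustration of the leak and the fix, not Spark's actual `KafkaSourceRDD` code; `ConsumerPool` and `PooledConsumer` are hypothetical stand-ins:

```scala
// Minimal sketch of the resource leak SPARK-24987 describes.
// Names are stand-ins, not Spark's real classes.
object ConsumerLeakSketch {
  final class ConsumerPool {
    var acquired = 0
    var released = 0
    def acquire(): PooledConsumer = { acquired += 1; new PooledConsumer(this) }
  }
  final class PooledConsumer(pool: ConsumerPool) {
    def release(): Unit = pool.released += 1
  }

  // Before the fix: an empty offset range returns early and the
  // consumer is never released back to the pool.
  def computeLeaky(pool: ConsumerPool, from: Long, until: Long): Iterator[Long] = {
    val consumer = pool.acquire()
    if (from == until) return Iterator.empty // consumer leaked here
    consumer.release()
    (from until until).iterator
  }

  // After the fix: release the consumer on the empty-range path too.
  def computeFixed(pool: ConsumerPool, from: Long, until: Long): Iterator[Long] = {
    val consumer = pool.acquire()
    if (from == until) {
      consumer.release() // the added release() call
      return Iterator.empty
    }
    consumer.release()
    (from until until).iterator
  }

  def main(args: Array[String]): Unit = {
    val leaky = new ConsumerPool
    computeLeaky(leaky, 5L, 5L)
    assert(leaky.acquired == 1 && leaky.released == 0) // leaked

    val fixed = new ConsumerPool
    computeFixed(fixed, 5L, 5L)
    assert(fixed.acquired == 1 && fixed.released == 1) // returned to pool
    println("ok")
  }
}
```

The point of the one-line fix is that every `acquire` must be paired with a `release` on *every* exit path, including the degenerate "nothing to read" path.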
Github user YuvalItzchakov commented on the issue:
https://github.com/apache/spark/pull/19059
https://issues.apache.org/jira/browse/SPARK-21873
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not
Github user YuvalItzchakov commented on a diff in the pull request:
https://github.com/apache/spark/pull/19059#discussion_r135405801
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/CachedKafkaConsumer.scala
---
@@ -125,8 +131,11 @@ private[kafka010
Github user YuvalItzchakov commented on the issue:
https://github.com/apache/spark/pull/18928
Right. We've had some problems with reading snapshots after executors died
from OOM; I hope this does the trick :)
Thanks.
Github user YuvalItzchakov commented on a diff in the pull request:
https://github.com/apache/spark/pull/19059#discussion_r135397171
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/CachedKafkaConsumer.scala
---
@@ -112,9 +112,15 @@ private[kafka010
Github user YuvalItzchakov commented on a diff in the pull request:
https://github.com/apache/spark/pull/19059#discussion_r135397155
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/CachedKafkaConsumer.scala
---
@@ -112,9 +112,15 @@ private[kafka010
Github user YuvalItzchakov commented on the issue:
https://github.com/apache/spark/pull/18928
@zsxwing Can you elaborate on why writing to a file directly may generate a
partial file? Is this about an exception terminating the write to the
underlying file system while the maintenance
GitHub user YuvalItzchakov opened a pull request:
https://github.com/apache/spark/pull/19059
[SS] - Avoid using `return` inside `CachedKafkaConsumer.get`
During profiling of a structured streaming application with Kafka as the
source, I came across this exception
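The quoted description is cut off before it names the exception, but in Scala 2 a `return` inside a lambda (such as the body of a `foreach`) is compiled into a thrown `scala.runtime.NonLocalReturnControl`, which the enclosing method catches. That makes early returns both show up in profiles as exception traffic and vulnerable to being swallowed by a broad `catch`. A minimal, self-contained illustration (not Spark code):

```scala
object ReturnInClosureDemo {
  // `return` inside the foreach lambda does not return from the lambda;
  // the compiler throws NonLocalReturnControl to unwind to the enclosing
  // method. A catch-all in between intercepts it and breaks the return.
  def findFirstEven(xs: Seq[Int]): Option[Int] =
    try {
      xs.foreach { x => if (x % 2 == 0) return Some(x) }
      None
    } catch {
      case _: Throwable => None // accidentally swallows the early return
    }

  // Idiomatic rewrite: no `return`, no hidden control-flow exception.
  def findFirstEvenSafe(xs: Seq[Int]): Option[Int] = xs.find(_ % 2 == 0)

  def main(args: Array[String]): Unit = {
    assert(findFirstEven(Seq(1, 2, 3)).isEmpty)        // early return lost
    assert(findFirstEvenSafe(Seq(1, 2, 3)) == Some(2)) // works as intended
    println("ok")
  }
}
```

This is why avoiding `return` inside methods like `CachedKafkaConsumer.get`, which may sit under broad exception handlers, is more than a style preference.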
Github user YuvalItzchakov commented on the pull request:
https://github.com/apache/spark/pull/11081#issuecomment-180041624
Yes, that's how I've worked around it in the meanwhile. Thx @zsxwing
Github user YuvalItzchakov commented on the pull request:
https://github.com/apache/spark/pull/11081#issuecomment-180038981
Thank you guys for the quick fix. Will the updated binary be available via
the spark download page or will I need to compile from source?