Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/22639
@mccheah and @skonto Do you have suggestions on how to go forward from here? I
wanted to write more tests, e.g. how to recover from checkpoints, etc.
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/22639#discussion_r229218791
--- Diff:
resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/StreamingCompatibilitySuite.scala
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/22639
Jenkins, retest this please.
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/22639
Jenkins, retest this please.
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/22639
Jenkins, retest this please.
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/22639
Jenkins, retest this please.
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/22639#discussion_r225105051
--- Diff:
resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/StreamingCompatibilitySuite.scala
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/22639
Jenkins, retest this please
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/22639
@mccheah Thanks for taking a look. Overall a nice suggestion; I am okay with
the idea of having a pod, but I am struggling with creating a pod for the socket
server. I can only think of non-trivial options
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/22564
I am sorry for the trouble, @liyinan926 and @srowen.
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/22564
Looks like this is working without making a release. It is not clear what
change could have fixed the problem. Closing the PR for now.
Github user ScrapCodes closed the pull request at:
https://github.com/apache/spark/pull/22564
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/22639
Jenkins, retest this please
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/22639#discussion_r222942209
--- Diff:
resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/KubernetesTestComponents.scala
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/22639
[SPARK-25647][k8s] Add spark streaming compatibility suite for kubernetes.
## What changes were proposed in this pull request?
Adds integration tests for Spark Streaming compatibility
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/22564
@liyinan926 Do you have any comments on whether this will be helpful?
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/22339
Thank you @srowen and @steveloughran.
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/22339
Hi @srowen, would you like to take a look? Is there anything I can do if
this patch is missing something? I have tested it thoroughly against an object
store
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/22565
@dongjoon-hyun Thanks for looking.
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/22565
[MINOR][SPARK-25543][K8s] Confusing log messages at DEBUG level, in K8s
mode.
## What changes were proposed in this pull request?
Spurious logs like /sec.
2018-09-26 09:33:57
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/22564
[SPARK-25282][K8s][DOC] Improved docs to avoid running into
InvalidClassException.
## What changes were proposed in this pull request?
Documentation changes, on client mode. A user
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/22339
For numbers: while testing with an object store having 50 files/dirs, without
this patch it took 130 REST requests for 2 batches to complete, and with this
patch it took 56 REST requests. So number
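As a sanity check on the figures quoted above (130 vs. 56 REST requests come from the comment; the snippet below is only illustrative arithmetic):

```python
# Reported REST request counts for 2 batches over 50 files/dirs,
# before and after the patch (figures taken from the comment above).
before, after = 130, 56
saved = before - after
print(f"{saved} fewer requests ({saved / before:.0%} reduction)")
```

That is, the patch cuts the number of REST calls by more than half for this workload.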
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/22339
[SPARK-17159] Significant speed-up for running Spark Streaming against an
object store.
## What changes were proposed in this pull request?
Original work by Steve Loughran
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/17745
It appears there are more people using object stores now than ever. For
those who are attached to older versions of Spark Streaming, having this would
be good.
Hi @steveloughran, are you
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/17745
Can you please reopen this? I would like to discuss whether we can merge it
into Spark itself.
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/17745#discussion_r212571984
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/dstream/FileInputDStream.scala
---
@@ -196,29 +191,29 @@ class FileInputDStream[K, V, F
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/17745#discussion_r212267619
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/dstream/FileInputDStream.scala
---
@@ -196,29 +191,29 @@ class FileInputDStream[K, V, F
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/17745#discussion_r212251757
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/dstream/FileInputDStream.scala
---
@@ -196,29 +191,29 @@ class FileInputDStream[K, V, F
Github user ScrapCodes closed the pull request at:
https://github.com/apache/spark/pull/19096
Github user ScrapCodes closed the pull request at:
https://github.com/apache/spark/pull/18143
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/18143
@brkyvz and @zsxwing No comments so far on whether this will be useful.
Should I consider closing
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/18143#discussion_r162045340
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/CachedKafkaConsumer.scala
---
@@ -45,9 +46,6 @@ private[kafka010] case
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/19096#discussion_r162022825
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaContinuousWriter.scala
---
@@ -112,8 +112,8 @@ class
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/19096
@zsxwing, please take another look.
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/14151
retest this please
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/14151#discussion_r155706338
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -313,11 +313,16 @@ def text(self, paths):
Each line in the text file is a new row
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/14151
This Python pydoc-style test is failing at `[Row(value=u'hello\nthis')]`. I
could not find a way to fix it; any help will be appreciated. It does not like
the literal `'\n
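For context, the usual doctest workaround when the expected output contains a literal backslash-n is a raw docstring, so the `\n` in the expected value is not turned into a real newline during parsing. A minimal standalone sketch of why that works (not the actual `readwriter.py` code; the function name is made up):

```python
import doctest

def read_text():
    r"""Return a string containing a newline.

    The raw docstring keeps the expected output's backslash-n as two
    characters, which matches what repr() prints for the real newline
    in the returned value.

    >>> read_text()
    'hello\nthis'
    """
    return 'hello\nthis'

# Run only this function's doctest, without relying on module lookup.
finder = doctest.DocTestFinder()
runner = doctest.DocTestRunner()
for test in finder.find(read_text, name="read_text", module=False,
                        globs={"read_text": read_text}):
    runner.run(test)
print(runner.failures)  # 0 when the docstring is raw
```

With a non-raw docstring, `\n` would be parsed into an actual newline and the doctest would no longer match the escaped output of `repr()`.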
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/14151
@viirya Can you please take another look?
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/19785
[MINOR][doc] The left navigation bar should be fixed with respect to
scrolling.
## What changes were proposed in this pull request?
A minor CSS style change to make the left navigation bar
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/14151
@gatorsmile Ping !
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/19096
Hi @zsxwing, are you okay with the changes?
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/17357#discussion_r143407147
--- Diff:
core/src/main/scala/org/apache/spark/deploy/worker/DriverWrapper.scala ---
@@ -23,14 +23,15 @@ import org.apache.commons.lang3.StringUtils
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/19096#discussion_r137778385
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaWriteTask.scala
---
@@ -43,8 +43,10 @@ private[kafka010] class
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/19096
[SPARK-21869][SS] A cached Kafka producer should not be closed if any task
is using it.
## What changes were proposed in this pull request?
By updating the access time for the producer
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/17357#discussion_r135699350
--- Diff:
core/src/main/scala/org/apache/spark/deploy/rest/StandaloneRestServer.scala ---
@@ -139,7 +139,9 @@ private[rest] class
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/17357#discussion_r135485503
--- Diff:
core/src/main/scala/org/apache/spark/deploy/rest/StandaloneRestServer.scala ---
@@ -139,7 +139,9 @@ private[rest] class
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/17357#discussion_r135187410
--- Diff:
core/src/main/scala/org/apache/spark/deploy/rest/StandaloneRestServer.scala ---
@@ -139,7 +139,9 @@ private[rest] class
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/17357#discussion_r134986472
--- Diff:
core/src/main/scala/org/apache/spark/deploy/worker/DriverWrapper.scala ---
@@ -38,8 +39,10 @@ object DriverWrapper
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/17357#discussion_r134985214
--- Diff:
core/src/main/scala/org/apache/spark/deploy/rest/StandaloneRestServer.scala ---
@@ -139,7 +139,9 @@ private[rest] class
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/14151
@gatorsmile @jiangxb1987 ping!
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/17357
I will update it soon.
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/14151
@jiangxb1987 @viirya your feedback has been incorporated, please take
another look.
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/14151#discussion_r131100257
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/HadoopFileWholeTextReader.scala
---
@@ -0,0 +1,57
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/18143
I am currently, trying to run some performance tests and see how this
change impacts performance in any case. Meanwhile, if I could get an idea if
things are moving in the right direction
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/14151
@sameeragarwal Do you think this change still makes sense? Can I improve it
somehow?
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/18143
@zsxwing, can you please take a look.
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/18562
[SPARK-21069][SS][DOCS] Add rate source to programming guide.
Tested by running jekyll locally.
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/14151
Jenkins, retest this please.
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/18143
@brkyvz and @zsxwing, do you think this object pool should be bounded? Or
could the entries become weak-reference values, in case the object pool is
unbounded?
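The two alternatives in the question can be sketched generically. The following is an illustrative Python analogue (not the Spark/Kafka consumer code): a bounded pool evicts the oldest entry on overflow, while an unbounded pool with weak values lets the garbage collector reclaim entries nobody holds.

```python
import gc
import weakref
from collections import OrderedDict

class Consumer:
    """Stand-in for a pooled resource such as a Kafka consumer."""
    def __init__(self, topic):
        self.topic = topic

# Option 1: a bounded pool that evicts the oldest entry on overflow.
class BoundedPool:
    def __init__(self, max_size):
        self.max_size = max_size
        self.entries = OrderedDict()

    def get(self, topic):
        if topic not in self.entries:
            if len(self.entries) >= self.max_size:
                self.entries.popitem(last=False)  # evict oldest entry
            self.entries[topic] = Consumer(topic)
        return self.entries[topic]

bounded = BoundedPool(max_size=2)
for t in ("t0", "t1", "t2"):
    bounded.get(t)
print(sorted(bounded.entries))  # ['t1', 't2'] -- "t0" was evicted

# Option 2: an unbounded pool with weak values; an entry vanishes once
# no caller holds a strong reference to its consumer.
weak_pool = weakref.WeakValueDictionary()
c = Consumer("t3")
weak_pool["t3"] = c
del c
gc.collect()
print("t3" in weak_pool)  # False: the collector reclaimed the entry
```

The trade-off mirrors the question: the bounded pool may evict a consumer that is still in use, while the weak-valued pool can grow arbitrarily between collections.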
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/18417
15258 can also be taken out
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/15258
@gatorsmile Thanks for the ping; I am closing it myself for now. Once it is
ready, I will open it again. I am planning to update it soon.
Github user ScrapCodes closed the pull request at:
https://github.com/apache/spark/pull/15258
Github user ScrapCodes closed the pull request at:
https://github.com/apache/spark/pull/17983
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/18143
Thanks @koeninger for helping me.
Thanks @brkyvz, please take a look again and see if what I have done is
along the lines the change you wanted.
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/18143#discussion_r119582631
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/CachedKafkaConsumer.scala
---
@@ -45,9 +46,6 @@ private[kafka010] case
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/18143
Thanks @koeninger for taking a look. I did some testing to see if
performance was impacted, and made corrections.
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/18143#discussion_r119277567
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/CachedKafkaConsumer.scala
---
@@ -310,62 +308,45 @@ private[kafka010
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/18143
Thanks @srowen for taking a look. I have tried to address your comments.
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/18143#discussion_r119088849
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/CachedKafkaConsumer.scala
---
@@ -18,19 +18,19 @@
package
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/18143#discussion_r119085683
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/CachedKafkaConsumer.scala
---
@@ -383,19 +362,16 @@ private[kafka010
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/18143#discussion_r119084772
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/CachedKafkaConsumer.scala
---
@@ -18,19 +18,19 @@
package
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/17357
In essence, the environment variables SPARK_LOCAL* and the configs
spark.driver.host/port will not be picked up correctly without this fix. This
would cause driver failover to fail
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/18143
Hi @brkyvz, as we discussed on the PR for CachedKafkaProducer, a Guava cache
can be used for the consumer as well. Maybe you could take a look at this as
well?
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/18143
Simplification of CachedKafkaConsumer using Guava cache.
## What changes were proposed in this pull request?
Along the lines of SPARK-19968, a Guava cache can be used to simplify the code
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/17308
Thanks @viirya and @zsxwing. I have tried to address your comments.
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/17308
@marmbrus Thank you for taking a look again. Surely, a shutdown hook is not
ideal for closing Kafka producers. In fact, for the case of the Kafka sink, it
might be correct to skip the cleanup step. I
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/17308
Jenkins, retest this please !
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/17308
Build is failing due to "Our attempt to download sbt locally to
build/sbt-launch-0.13.13.jar failed. Please install sbt manually from
http://www.scala-sbt.org/"
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/17308
Jenkins, retest this please !
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/17308#discussion_r118200287
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/CachedKafkaProducer.scala
---
@@ -36,7 +36,7 @@ private[kafka010] object
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/17308#discussion_r118194379
--- Diff:
external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/CanonicalizeKafkaParamsSuite.scala
---
@@ -0,0 +1,61
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/17308
Jenkins, retest this please.
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/17308#discussion_r117717831
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaWriter.scala
---
@@ -94,4 +94,10 @@ private[kafka010] object
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/17308
@viirya Thank you for taking a look.
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/17308
@brkyvz I am toying with a Guava cache; the only problem is that it does not
have a built-in mechanism to do any cleanup on shutdown.
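A common workaround is to pair the cache's removal callback with an explicit shutdown hook that invalidates all entries (in Guava terms, a `removalListener` plus a JVM shutdown hook calling `invalidateAll()`). An illustrative Python analogue of that pattern, with made-up names, not the Spark code:

```python
import atexit

class Producer:
    """Stand-in for an expensive resource such as a Kafka producer."""
    def __init__(self, key):
        self.key = key
        self.closed = False

    def close(self):
        self.closed = True

cache = {}

def get_producer(key):
    # Create-on-miss, like a loading cache keyed by producer params.
    if key not in cache:
        cache[key] = Producer(key)
    return cache[key]

def shutdown():
    # The "removal listener" work: close each entry, then drop the cache.
    for p in cache.values():
        p.close()
    cache.clear()

# Register the cleanup at interpreter exit, mirroring a JVM shutdown
# hook that would call invalidateAll() on a Guava cache.
atexit.register(shutdown)

p = get_producer("broker-1:9092")
shutdown()  # shown explicitly here; normally runs automatically at exit
print(p.closed, len(cache))
```

The cleanup is idempotent, so the registered hook firing again at exit is harmless.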
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/17308
@marmbrus and @brkyvz Please take another look, and let me know how it can
be improved further. Thank you for the help so far.
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/17308#discussion_r116958096
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaWriter.scala
---
@@ -94,4 +94,10 @@ private[kafka010] object
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/17983
@srowen Can you please take a look !
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/17982
If I am not able to come up with a better alternative, then the following
not-so-ideal options exist:
1) we can do a Scala version check and have two different code paths based
on Scala
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/17982
Actually, now I am wondering how it would work without createInterpreter
being executed. Your constraint makes things a bit tricky. I will think more
before I reply.
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/17982
Hi Sean,
Taking a quick look at the changes that removed loadFiles.
[changes](https://github.com/scala/scala/commit/99dad60d984d3f72338f3bad4c4fe905090edd51#diff
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/17983
[SPARK-20738][BUILD] Hide building of docs in sbt behind an option.
sbt publish-local tries to build the docs along with the other artifacts,
and as the codebase is being updated with no build
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/17308
SPARK-20737 is created to look into cleanup mechanism in a separate JIRA.
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/17308#discussion_r116158469
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaSink.scala
---
@@ -30,14 +30,19 @@ private[kafka010] class
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/17308
Hi @marmbrus and @brkyvz, thanks a lot for taking a look.
@marmbrus You are right, we should have another way to canonicalize Kafka
params. I can only think of appending a unique id
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/17308#discussion_r110625538
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/CachedKafkaProducer.scala
---
@@ -0,0 +1,70 @@
+/*
+ * Licensed
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/17357
Hi @JoshRosen, can you please review this PR ? I have tested it, by running
and verifying the applied configuration on the workers.
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/17308
I can further confirm that, in the logs, a KafkaProducer instance is created
almost every instant.
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/17308
@tdas ping !
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/17357
Hi @rxin, I was not sure whom to ping for this PR. Would you like to take a
look?
1 - 100 of 696 matches