Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/2886#issuecomment-60313702
Thanks a lot @tsliwowicz I'll do that shortly.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/2864#issuecomment-60314190
I see... when the JVM's `UncaughtExceptionHandler` catches an OOM exception,
it doesn't actually kill the JVM. However, we do want to log the fact that we
are facing
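The JVM behavior described above can be mimicked in miniature: an uncaught exception in a worker thread does not kill the process, but a process-wide hook can still log it. A minimal Python sketch of that mechanism (illustrative only; Spark's real handler is the JVM-level `SparkUncaughtExceptionHandler`):

```python
import threading

# Record uncaught worker-thread exceptions instead of letting them vanish.
# threading.excepthook (Python 3.8+) plays the role of the JVM's
# default UncaughtExceptionHandler in this sketch.
logged = []

def log_uncaught(args):
    # args carries .exc_type, .exc_value, .exc_traceback, and .thread
    logged.append(f"{args.thread.name}: {args.exc_type.__name__}")

threading.excepthook = log_uncaught

def worker():
    raise MemoryError("simulated OOM")

t = threading.Thread(target=worker, name="executor-task")
t.start()
t.join()

print(logged)            # ['executor-task: MemoryError']
print("process alive")   # the uncaught exception did not terminate us
```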
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/2746#issuecomment-60314429
@vanzin Yes, apparently so. I suppose it's fine to let `YarnAllocator` take
care of this for us for now since we're only targeting Yarn at the moment.
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/2851#issuecomment-60314638
Hi @CodingCat - This is a great, handy feature, but what I meant was that
we should NOT have custom code that tracks broadcast blocks. We can have
special UIs for reporting
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/2912#issuecomment-60315168
Interesting question. For uncaching, I think it makes the most sense to
refer to the tables by name even if two cached tables might be the same RDD.
Thus you should not
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/2851#issuecomment-60315631
@rxin, I see
then I will try to refactor the reporting mechanism (currently piggybacked on
the heartbeat) to make it more general
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2913#issuecomment-60315673
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22088/consoleFull)
for PR 2913 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2913#issuecomment-60315701
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2913#issuecomment-60315698
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22088/consoleFull)
for PR 2913 at commit
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/2913#discussion_r19310826
--- Diff:
core/src/main/scala/org/apache/spark/util/SparkUncaughtExceptionHandler.scala
---
@@ -15,17 +15,17 @@
* limitations under the License.
*/
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/2913#discussion_r19310864
--- Diff:
core/src/main/scala/org/apache/spark/util/SparkUncaughtExceptionHandler.scala
---
@@ -15,17 +15,17 @@
* limitations under the License.
*/
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/2703#issuecomment-60316196
Jenkins, this is ok to test.
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2851#issuecomment-60316298
@rxin - Here is a simpler design -- What if we report all broadcast blocks
to the master when they are added to a block manager as well (tellMaster = true
instead of
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/2735#issuecomment-60316389
Jenkins, this is ok to test.
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/2851#issuecomment-60316582
Sorry that link should have been
https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/broadcast/TorrentBroadcast.scala#L181
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/2864#issuecomment-60316623
@andrewor14, (just hit a LiveListenerBus uncaught exception this
afternoon)
personally, I feel that we should stop the driver when such things happen
...
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2900#issuecomment-60316676
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/413/consoleFull)
for PR 2900 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2703#issuecomment-60316893
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22090/consoleFull)
for PR 2703 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2735#issuecomment-60316955
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22089/consoleFull)
for PR 2735 at commit
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2912#issuecomment-60318114
I agree with matei on the desired semantics here. However, if this only
applies to cases where a table is _identical_ to another cached table, I don't
think it's a
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2866#issuecomment-60318489
Based on some discussion, we're going to stick with
`RoaringBitmap.contains` for now. Fancier solutions which use non-compressed
bitmaps for lookups may be faster for
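For context on the tradeoff being weighed here, a toy run-length-compressed bitmap shows why compressed structures answer `contains` with a binary search over runs rather than a direct bit lookup (a deliberately simplified model, not RoaringBitmap's actual container scheme):

```python
import bisect

def compress(sorted_ids):
    """Collapse a sorted list of ints into (start, end) runs, end exclusive."""
    runs = []
    for x in sorted_ids:
        if runs and runs[-1][1] == x:
            runs[-1] = (runs[-1][0], x + 1)  # extend the current run
        else:
            runs.append((x, x + 1))          # start a new run
    return runs

def contains(runs, x):
    # Binary-search for the run that could cover x, then check its bounds.
    i = bisect.bisect_right(runs, (x, float("inf"))) - 1
    return i >= 0 and runs[i][0] <= x < runs[i][1]

runs = compress([1, 2, 3, 10, 11, 50])
print(runs)                                              # [(1, 4), (10, 12), (50, 51)]
print(contains(runs, 2), contains(runs, 4))              # True False
```

Three runs stand in for six set bits; the lookup cost grows with the number of runs, not the universe size, which is the memory-vs-speed tradeoff the comment alludes to.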
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2866#issuecomment-60319091
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22091/consoleFull)
for PR 2866 at commit
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/2851#issuecomment-60319128
Hi, @shivaram, do you mean we send the report with tell instead of
askDriverWithReply? Hmmm...
what's the original motivation to send BlockInfo
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/2851#issuecomment-60319425
I mean Akka's tell, not
```
private def tell(message: Any) {
if (!askDriverWithReply[Boolean](message)) {
throw new
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/2901#discussion_r19312590
--- Diff: python/pyspark/sql.py ---
@@ -1065,7 +1074,9 @@ def applySchema(self, rdd, schema):
[Row(field1=1, field2=u'row1'),..., Row(field1=3,
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2913#issuecomment-60320561
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22092/consoleFull)
for PR 2913 at commit
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/2901#discussion_r19313312
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/json/JsonRDD.scala
---
@@ -372,13 +372,20 @@ private[sql] object JsonRDD extends Logging {
}
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2911#issuecomment-60320940
It looks like these test failures might be due to missing classes in the
snappy-java 1.1.1.4 JAR: https://github.com/xerial/snappy-java/issues/90
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2914#issuecomment-60320994
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22086/consoleFull)
for PR 2914 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2914#issuecomment-60321003
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/2866#issuecomment-60321093
LGTM!
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/2901#discussion_r19313556
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/json/JsonRDD.scala
---
@@ -372,13 +372,20 @@ private[sql] object JsonRDD extends Logging {
}
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2852#discussion_r19313715
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/HistoryPage.scala ---
@@ -21,12 +21,28 @@ import javax.servlet.http.HttpServletRequest
Github user davies commented on the pull request:
https://github.com/apache/spark/pull/2901#issuecomment-60321798
Thanks for fixing so many typos!
It would be awesome to recognize all Date/Timestamp values in JsonRDD. If
it's not easy to do in this PR, we could do it in
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2900#issuecomment-60322017
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/413/consoleFull)
for PR 2900 at commit
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/2882#issuecomment-60322270
@JoshRosen whenever you get a chance. :)
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2900#issuecomment-60322368
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/414/consoleFull)
for PR 2900 at commit
Github user davies commented on the pull request:
https://github.com/apache/spark/pull/2433#issuecomment-60322408
@holdenk Could you add some examples of how the logging levels should
be used? Also list all the valid names in the docstring.
@tdas We could use this in the Streaming
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2911#issuecomment-60322887
**[Tests timed
out](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22084/consoleFull)**
for PR 2911 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2911#issuecomment-60322891
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/2701#discussion_r19314368
--- Diff: python/pyspark/sql.py ---
@@ -305,12 +305,15 @@ class StructField(DataType):
-def __init__(self, name, dataType,
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2703#issuecomment-60323993
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22090/consoleFull)
for PR 2703 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2703#issuecomment-60324001
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2735#issuecomment-60324050
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22089/consoleFull)
for PR 2735 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2735#issuecomment-60324055
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
GitHub user davies opened a pull request:
https://github.com/apache/spark/pull/2916
[SPARK-2652] [PySpark] do not use KryoSerializer as default serializer
KryoSerializer cannot serialize customized classes without their being
registered explicitly; using it as the default serializer in PySpark will
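Python's standard library has an analogous split to the one this PR describes: `marshal`, like Kryo without registration, only accepts a fixed set of built-in types and rejects arbitrary user classes outright. A small sketch of that failure mode:

```python
import marshal

class Custom:
    def __init__(self, v):
        self.v = v

# marshal happily serializes built-in types...
data = marshal.dumps({"a": 1, "b": [2, 3]})
print(len(data) > 0)  # True

# ...but raises ValueError on an unregistered/unsupported user class,
# which is the kind of surprise a default serializer should not spring.
try:
    marshal.dumps(Custom(42))
    restricted = False
except ValueError:
    restricted = True
print(restricted)  # True
```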
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2916#issuecomment-60324288
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22093/consoleFull)
for PR 2916 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2913#issuecomment-60324309
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2913#issuecomment-60324304
**[Tests timed
out](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22085/consoleFull)**
for PR 2913 at commit
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2884#issuecomment-60324639
retest this please
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2852#issuecomment-60324752
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22094/consoleFull)
for PR 2852 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2852#issuecomment-60324848
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22094/consoleFull)
for PR 2852 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2852#issuecomment-60324852
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2781#issuecomment-60324935
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/415/consoleFull)
for PR 2781 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2913#issuecomment-60325166
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22095/consoleFull)
for PR 2913 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2884#issuecomment-60325180
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22096/consoleFull)
for PR 2884 at commit
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2673#issuecomment-60325425
Thanks - I reverted this patch in master.
On Thu, Oct 23, 2014 at 2:39 AM, Guoqiang Li notificati...@github.com
wrote:
@ScrapCodes
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2915#issuecomment-60325586
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2915#issuecomment-60325584
**[Tests timed
out](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22087/consoleFull)**
after a configured wait of `120m`.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2866#issuecomment-60325676
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22091/consoleFull)
for PR 2866 at commit
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/2755#issuecomment-60325693
LGTM. Let me try jenkins for the last time on someone else's PR... Jenkins,
test this please.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2866#issuecomment-60325682
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2866#issuecomment-60325904
Okay I'm gonna pull this in - thanks Josh!
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/2866
GitHub user andrewor14 opened a pull request:
https://github.com/apache/spark/pull/2917
[SPARK-4071] Unroll fails silently if BlockManager is small
@tdas
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/andrewor14/spark
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/2917#issuecomment-60326459
Ideally we would throw a warning when creating a BlockManager if the memory
store size * unrolling fraction <= initial memory threshold. Because in that
case, any insertion
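The suggested sanity check is simple arithmetic; a sketch with assumed values (0.2 and 1 MB are believed to be the defaults for `spark.storage.unrollFraction` and `spark.storage.unrollMemoryThreshold` in this era of Spark, but treat all numbers here as illustrative):

```python
# Hypothetical BlockManager configuration for illustration only.
memory_store_size = 10 * 1024 * 1024  # a small 10 MB memory store
unroll_fraction = 0.2                 # assumed spark.storage.unrollFraction
initial_threshold = 1 * 1024 * 1024   # assumed spark.storage.unrollMemoryThreshold

# The check proposed above: if the unroll budget never exceeds the initial
# threshold, every unroll attempt is doomed, so warn at creation time.
unroll_budget = memory_store_size * unroll_fraction
if unroll_budget <= initial_threshold:
    print("WARN: unroll budget too small; block unrolling will always fail")
else:
    print("ok")  # ok
```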
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2913#issuecomment-60326659
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22092/consoleFull)
for PR 2913 at commit
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/2703#discussion_r19316316
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/StreamingContext.scala ---
@@ -331,8 +331,8 @@ class StreamingContext private[streaming] (
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2917#issuecomment-60326693
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22097/consoleFull)
for PR 2917 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2913#issuecomment-6032
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2900#issuecomment-60326696
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/414/consoleFull)
for PR 2900 at commit
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/2703#discussion_r19316400
--- Diff:
streaming/src/test/java/org/apache/spark/streaming/JavaAPISuite.java ---
@@ -1703,6 +1710,65 @@ public void testTextFileStream() {
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/2703#issuecomment-60327025
Thanks @holdenk for doing this. This is a great fix.
But since this is an API change which breaks binary compatibility (I am
okay with it, as no one is probably using
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/2126#discussion_r19316607
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackend.scala
---
@@ -19,19 +19,16 @@ package
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/2703#discussion_r19316615
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/StreamingContext.scala ---
@@ -331,8 +331,8 @@ class StreamingContext private[streaming] (
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/2126#issuecomment-60327161
This looks good to me based on my understanding of Mesos. @tnachen will
this still work okay if Mesos is not running as root (and can't switch user)?
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/1358#issuecomment-60327322
@tsliwowicz your fix seems good -- thanks for getting to the bottom of this!
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2781#issuecomment-60327453
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/415/consoleFull)
for PR 2781 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2701#issuecomment-60327497
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22098/consoleFull)
for PR 2701 at commit
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/2433#issuecomment-60328324
Great! But how does this interact with situations where the downstream
applications have explicitly removed Log4j from their path because they use
other logging libraries
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2838#issuecomment-60328732
Since this approach seems to be technically correct given our current code,
and since it fixes an urgent blocker, I'm going to merge this. We can revisit
other
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2884#issuecomment-60329177
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22096/consoleFull)
for PR 2884 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/2838
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2884#issuecomment-60329180
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2916#issuecomment-60329607
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22093/consoleFull)
for PR 2916 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2916#issuecomment-60329611
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/2918
[SPARK-4068][SQL] NPE in jsonRDD schema inference
Please refer to added tests for cases that can trigger the bug.
JIRA: https://issues.apache.org/jira/browse/SPARK-4068
You can merge this
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2882#discussion_r19318044
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/util/HdfsUtils.scala ---
@@ -0,0 +1,71 @@
+/*
+ * Licensed to the Apache Software
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2882#discussion_r19318068
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/util/HdfsUtils.scala ---
@@ -0,0 +1,71 @@
+/*
+ * Licensed to the Apache Software
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2882#discussion_r19318091
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/util/WriteAheadLogManager.scala
---
@@ -0,0 +1,224 @@
+/*
+ * Licensed to the Apache
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2882#discussion_r19318155
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/util/WriteAheadLogManager.scala
---
@@ -0,0 +1,224 @@
+/*
+ * Licensed to the Apache
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2918#issuecomment-60330427
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22099/consoleFull)
for PR 2918 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2913#issuecomment-60330567
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2913#issuecomment-60330564
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/22095/consoleFull)
for PR 2913 at commit
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2882#discussion_r19318248
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/util/WriteAheadLogManager.scala
---
@@ -0,0 +1,223 @@
+/*
+ * Licensed to the Apache
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/2882#discussion_r19318338
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/util/WriteAheadLogManager.scala
---
@@ -0,0 +1,224 @@
+/*
+ * Licensed to the Apache
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/2126#issuecomment-60330890
@mateiz Mesos will throw a TASK_FAILED whenever it can't chown the work
directory; when it was launched with the default Mesos containerizer it will
just fail with a
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/2882#discussion_r19318524
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/util/HdfsUtils.scala ---
@@ -0,0 +1,71 @@
+/*
+ * Licensed to the Apache Software
Github user xerial commented on the pull request:
https://github.com/apache/spark/pull/2911#issuecomment-60331324
Please use snappy-java-1.1.1.5, which fixes the broken build.
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/2882#discussion_r19318631
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/util/HdfsUtils.scala ---
@@ -0,0 +1,71 @@
+/*
+ * Licensed to the Apache Software