GitHub user manishamde opened a pull request:
https://github.com/apache/spark/pull/79
MLI-1 Decision Trees
Joint work with @hirakendu, @etrain, @atalwalkar and @harsha2010.
Key features:
+ Supports binary classification and regression
+ Supports gini, entropy and variance
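The feature list above names gini, entropy, and variance as split criteria. As an illustrative sketch (standard textbook formulas in Python, not the PR's actual Scala code), the three impurity measures look like:

```python
from math import log2

def gini(counts):
    """Gini impurity: 1 - sum(p_k^2) over class proportions."""
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

def entropy(counts):
    """Shannon entropy: -sum(p_k * log2(p_k)), skipping empty classes."""
    total = sum(counts)
    return -sum((c / total) * log2(c / total) for c in counts if c)

def variance(values):
    """Variance impurity, used for regression targets."""
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)
```

A pure node (all samples in one class) scores 0 under gini and entropy; a 50/50 binary split scores 0.5 and 1.0 respectively.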
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/75#issuecomment-36716624
Merged build started.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have th
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/76#issuecomment-36716619
Merged build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/75#issuecomment-36716623
Merged build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/76#issuecomment-36716620
Merged build started.
Github user prabinb commented on a diff in the pull request:
https://github.com/apache/spark/pull/76#discussion_r10289574
--- Diff: python/pyspark/rdd.py ---
@@ -1057,6 +1057,24 @@ def coalesce(self, numPartitions, shuffle=False):
jrdd = self._jrdd.coalesce(numPartition
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/75#issuecomment-36715060
Jenkins, this is ok to test
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/76#discussion_r10289240
--- Diff: python/pyspark/rdd.py ---
@@ -1057,6 +1057,24 @@ def coalesce(self, numPartitions, shuffle=False):
jrdd = self._jrdd.coalesce(numPartitions
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/76#issuecomment-36714999
Jenkins, this is ok to test
Github user ScrapCodes closed the pull request at:
https://github.com/apache/spark/pull/71
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/42#issuecomment-36708976
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13000/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/42#issuecomment-36708975
Build finished.
Just subscribing to this list, so apologies for quoting weirdly and any other
etiquette offenses.
DB Tsai wrote:
> Hi Deb,
>
> I had tried breeze L-BFGS algorithm, and when I tried it couple weeks
> ago, it's not as stable as the fortran implementation. I guessed the
> problem is in the line sear
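The thread above attributes the breeze L-BFGS instability to line-search behavior. As an illustrative sketch of what a line search guards against (plain gradient descent with a backtracking Armijo rule; this is a toy, not breeze's or RISO's actual implementation, and it assumes a smooth objective):

```python
def armijo_line_search(f, grad_f, x, d, alpha0=1.0, c1=1e-4, shrink=0.5):
    """Backtracking (Armijo) line search: shrink the step a until
    f(x + a*d) <= f(x) + c1 * a * <grad f(x), d>, so each step
    actually decreases the objective."""
    fx = f(x)
    g = grad_f(x)
    slope = sum(gi * di for gi, di in zip(g, d))
    a = alpha0
    while f([xi + a * di for xi, di in zip(x, d)]) > fx + c1 * a * slope:
        a *= shrink
    return a

def gradient_descent(f, grad_f, x, iters=100):
    """Steepest descent with the Armijo rule choosing each step size."""
    for _ in range(iters):
        g = grad_f(x)
        d = [-gi for gi in g]
        a = armijo_line_search(f, grad_f, x, d)
        x = [xi + a * di for xi, di in zip(x, d)]
    return x
```

L-BFGS replaces the steepest-descent direction with a quasi-Newton one, but the same sufficient-decrease test is what keeps the iteration stable.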
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/42#issuecomment-36706595
Build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/42#issuecomment-36706594
Build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/44#issuecomment-36706568
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/12999/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/44#issuecomment-36706567
Merged build finished.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/44#issuecomment-36703974
Merged build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/44#issuecomment-36703975
Merged build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/44#issuecomment-36703844
Merged build finished.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/44#issuecomment-36703845
One or more automated tests failed
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/12998/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/44#issuecomment-36703761
Merged build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/44#issuecomment-36703762
Merged build started.
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/12#issuecomment-36702525
Is this ready to merge?
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/63#issuecomment-36702509
How about this?
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/44#discussion_r10284698
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -847,6 +847,8 @@ class SparkContext(
partitions: Seq[Int],
allowLoca
Thanks for the clarification; so now I understand that only failed tasks
will be re-scheduled, and only the input partitions of these tasks will be
re-computed.
Another confusing point from the paper is:
"Because of these properties, D-Streams can parallelize recovery over
hundreds of cores and recover
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/42#discussion_r10284630
--- Diff: core/src/main/scala/org/apache/spark/ui/UIReloader.scala ---
@@ -0,0 +1,46 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/42#discussion_r10284454
--- Diff: core/src/main/scala/org/apache/spark/ui/SparkUI.scala ---
@@ -17,38 +17,80 @@
package org.apache.spark.ui
+import java.io.{File
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/42#discussion_r10284428
--- Diff: core/src/main/scala/org/apache/spark/ui/SparkUI.scala ---
@@ -17,38 +17,80 @@
package org.apache.spark.ui
+import java.io.{File
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/spark/pull/44#discussion_r10283513
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -847,6 +847,8 @@ class SparkContext(
partitions: Seq[Int],
allow
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/42#discussion_r10283514
--- Diff: core/src/main/scala/org/apache/spark/ui/SparkUI.scala ---
@@ -17,38 +17,80 @@
package org.apache.spark.ui
+import java.io.{File
BlockManager is only responsible for in-memory/on-disk storage. It has
nothing to do with re-computation.
All the recomputation / retry code are done in the DAGScheduler. Note that
when a node crashes, due to lazy evaluation, there is no task that needs to
be re-run. Those tasks are re-run only wh
Hello, developers,
I am just curious about the following two things, which seem to be
contradictory to each other; please help me find the mistakes in my
understanding:
1) Excerpted from sosp 2013 paper, "Then, when a node fails, the system
detects all missing RDD partitions and launches tasks to re
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/35#issuecomment-36698403
Personally, I feel that
https://spark-project.atlassian.net/browse/SPARK-1175 is also related to this
issue.
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/58#issuecomment-36698085
fixed that line as well as others with the same issue
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/44#discussion_r10283043
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -847,6 +847,8 @@ class SparkContext(
partitions: Seq[Int],
allowL
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/73
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/73#issuecomment-36692956
Thanks, merged in 0.9 and master
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/78#issuecomment-36692137
Can one of the admins verify this patch?
Hi Xiangrui,
It seems that Robert is busy recently. I setup the org.riso in
maven central for him, and I was waiting for his response for
a while without any news. So, I decided to maintain myself.
I'm more in favor of using breeze as the core math library for both sparse
support and optimization. When I tried L-B
GitHub user markgrover opened a pull request:
https://github.com/apache/spark/pull/78
SPARK-1184: Update the distribution tar.gz to include spark-assembly jar
See JIRA for details.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/m
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/77#issuecomment-36686522
OK, that works: package first and then test. In the canonical Maven lifecycle,
packaging comes after test, so test would not depend on packaging. In practice
this is at worst a
Github user srowen closed the pull request at:
https://github.com/apache/spark/pull/77
Hey All,
Just a heads up that there are a bunch of updated developer docs on
the wiki including posting the dates around the current merge window.
Some of the new docs might be useful for developers/committers:
https://cwiki.apache.org/confluence/display/SPARK/Wiki+Homepage
Cheers,
- Patrick
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/42#discussion_r10273889
--- Diff: core/src/main/scala/org/apache/spark/ui/UIReloader.scala ---
@@ -0,0 +1,46 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under o
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/42#discussion_r10273534
--- Diff: core/src/main/scala/org/apache/spark/ui/UIReloader.scala ---
@@ -0,0 +1,46 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under o
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/42#discussion_r10272892
--- Diff: docs/configuration.md ---
@@ -444,7 +444,21 @@ Apart from these, the following properties are also
available, and may be useful
spark.logConf
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/77#issuecomment-36669615
One or more automated tests failed
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/12997/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/77#issuecomment-36669614
Merged build finished.
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/42#discussion_r10272860
--- Diff: core/src/main/scala/org/apache/spark/ui/UISparkListener.scala ---
@@ -0,0 +1,231 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) u
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/42#discussion_r10272735
--- Diff: core/src/main/scala/org/apache/spark/ui/SparkUI.scala ---
@@ -17,38 +17,80 @@
package org.apache.spark.ui
+import java.io.{FileIn
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/42#discussion_r10272659
--- Diff: core/src/main/scala/org/apache/spark/ui/SparkUI.scala ---
@@ -17,38 +17,80 @@
package org.apache.spark.ui
+import java.io.{FileIn
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/42#discussion_r10272629
--- Diff: core/src/main/scala/org/apache/spark/ui/SparkUI.scala ---
@@ -17,38 +17,80 @@
package org.apache.spark.ui
+import java.io.{FileIn
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/42#discussion_r10272195
--- Diff: core/src/main/scala/org/apache/spark/util/FileLogger.scala ---
@@ -0,0 +1,158 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) unde
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/42#discussion_r10272169
--- Diff: core/src/main/scala/org/apache/spark/util/FileLogger.scala ---
@@ -0,0 +1,158 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) unde
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/77#issuecomment-36665859
In Maven, you can run tests that depend on packages/assemblies during
Maven's `integration-test` phase, which automatically runs after the Maven
`package` phase. I'm not
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/77#issuecomment-36663340
Merged build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/77#issuecomment-36663341
Merged build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/63#issuecomment-36663027
Merged build finished.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/63#issuecomment-36663030
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/12996/
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/72
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/74
Hi Lars,
Unfortunately, due to some incompatible changes we pulled in to be closer
to YARN trunk, Spark-on-YARN does not work against CDH 4.4+ (but does work
against CDH5)
-Sandy
On Tue, Mar 4, 2014 at 6:33 AM, Tom Graves wrote:
> What is your question about? Any hints?
> The maven build worke
Hi DB,
I saw you released the L-BFGS code under com.dbtsai.lbfgs on maven
central, so I assume that Robert (the author of RISO) is not going to
maintain it. Is it correct?
For the breeze implementation, do you mind sharing more details about
the issues you have?
I saw the hack you did to get reg
Github user markhamstra commented on the pull request:
https://github.com/apache/spark/pull/77#issuecomment-36659947
The standard maven build procedure should be to run `mvn -DskipTests
package` first (which builds the assembly) and then `mvn test`. The "Building
Spark with Maven" pa
GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/77
SPARK-1181. 'mvn test' fails out of the box since sbt assembly does not
necessarily exist
The test suite requires that "sbt assembly" has been run in order for some
tests (like DriverSuite) to pass. T
Hi Deb,
I've been working with David to add or enhance some features to breeze
to make its performance comparable to bare-bones implementations. I'm
going to update that PR this week with sparse support to KMeans. You
are certainly welcome to update the GLM part. Make sure you are using
the master
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/74#issuecomment-36656962
Thanks. Merged.
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/72#issuecomment-36656843
Thanks. Merged.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/63#issuecomment-36656693
Merged build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/63#issuecomment-36656695
Merged build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/63#issuecomment-36656411
Merged build finished.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/63#issuecomment-36656412
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/12995/
Github user kayousterhout commented on the pull request:
https://github.com/apache/spark/pull/63#issuecomment-36653243
This new version of the change doesn't look any simpler to me than the
current version of the code, and I think it is a slightly confusing way of
using worker offers to s
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/63#issuecomment-36649969
Merged build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/63#issuecomment-36649970
Merged build started.
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/spark/pull/63#discussion_r10263675
--- Diff: core/src/main/scala/org/apache/spark/scheduler/WorkerOffer.scala
---
@@ -21,4 +21,4 @@ package org.apache.spark.scheduler
* Represents free re
Github user markhamstra commented on a diff in the pull request:
https://github.com/apache/spark/pull/63#discussion_r10262224
--- Diff: core/src/main/scala/org/apache/spark/scheduler/WorkerOffer.scala
---
@@ -21,4 +21,4 @@ package org.apache.spark.scheduler
* Represents free
Yeah, we should move f2j L-BFGS and L-BFGS-B to breeze... they already have 2
line searches... also the OWL-QN outline...
Hi Xiangrui,
What's the plan for the PR?
https://github.com/apache/incubator-spark/pull/575
Will you add breeze as a dependency for the sparse support?
I looked at your branch
h
What is your question about? Any hints?
The maven build worked fine for me again yesterday.
You should create a JIRA for any pull request, as the documentation states.
The JIRA requirement is new, so I think people are still getting used to it.
Tom
On Tuesday, March 4, 2014 2:51 AM, Lars Francke
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/73#issuecomment-36628043
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/12992/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/73#issuecomment-36628039
Merged build finished.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/74#issuecomment-36628021
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/12991/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/72#issuecomment-36628046
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/12993/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/63#issuecomment-36628040
Merged build finished.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/63#issuecomment-36628044
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/12994/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/72#issuecomment-36628041
Merged build finished.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/74#issuecomment-36628018
Merged build finished.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/76#issuecomment-36623694
Can one of the admins verify this patch?
GitHub user prabinb opened a pull request:
https://github.com/apache/spark/pull/76
SPARK-977 Added Python RDD.zip function
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/prabinb/spark python-api-zip
Alternatively you can review
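SPARK-977 adds a Python `RDD.zip` that pairs the i-th element of one RDD with the i-th element of the other. As a rough model of the required invariants (illustrative pure Python, not the PR's actual code), both sides need the same number of partitions and the same number of elements per partition:

```python
def rdd_zip(left_parts, right_parts):
    """Toy model of RDD.zip: zip corresponding partitions pairwise.
    left_parts and right_parts are lists of partitions (lists)."""
    if len(left_parts) != len(right_parts):
        raise ValueError("Can only zip RDDs with the same number of partitions")
    out = []
    for lp, rp in zip(left_parts, right_parts):
        if len(lp) != len(rp):
            raise ValueError("Partitions must have the same number of elements")
        out.append(list(zip(lp, rp)))
    return out
```

Because the pairing is positional per partition, no shuffle is needed, which is why Spark insists on matching partitioning rather than silently realigning the data.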
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/63#issuecomment-36623050
Merged build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/73#issuecomment-36623039
Merged build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/72#issuecomment-36623042
Merged build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/72#issuecomment-36623044
Merged build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/74#issuecomment-36623037
Merged build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/74#issuecomment-36623036
Merged build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/75#issuecomment-36623034
Can one of the admins verify this patch?
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/73#issuecomment-36623041
Merged build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/63#issuecomment-36623049
Merged build triggered.