Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/2092#discussion_r16634474
--- Diff: python/pyspark/rdd.py ---
@@ -1715,6 +1715,52 @@ def batch_as(rdd, batchSize):
other._jrdd_deserializer)
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/2092#discussion_r16634476
--- Diff: python/pyspark/rdd.py ---
@@ -1715,6 +1715,52 @@ def batch_as(rdd, batchSize):
other._jrdd_deserializer)
Github user davies commented on the pull request:
https://github.com/apache/spark/pull/2092#issuecomment-53180206
Jenkins, test this please.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/1959#issuecomment-53180216
LGTM, thanks.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2092#issuecomment-53180259
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19121/consoleFull)
for PR 2092 at commit
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/1886#issuecomment-53180409
LGTM.
@sarutak Did you merge my branch to yours? 'Cause I saw my commits in the
history of this branch. Actually you can first update your local master branch,
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2093#issuecomment-53180512
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19122/consoleFull)
for PR 2093 at commit
Github user davies commented on the pull request:
https://github.com/apache/spark/pull/2093#issuecomment-53180510
Jenkins, test this please.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2093#issuecomment-53180657
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19123/consoleFull)
for PR 2093 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1856#issuecomment-53180967
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19119/consoleFull)
for PR 1856 at commit
Github user davies commented on the pull request:
https://github.com/apache/spark/pull/2091#issuecomment-53180978
@mateiz @JoshRosen I would like to change `evenBuckets` to `even`; the
latter is meaningful enough and much shorter.
One concern is that we will have
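The trade-off behind an even-buckets flag can be sketched in plain Python (a hypothetical illustration, not PySpark's actual `histogram` code): evenly spaced buckets allow an O(1) arithmetic bucket lookup, while arbitrary buckets need a binary search.

```python
import bisect

def histogram(values, buckets):
    """Count values into the given bucket boundaries.

    If the buckets are evenly spaced, the bucket index can be computed
    in O(1) with arithmetic; otherwise a binary search is needed.
    (A sketch of the trade-off behind the even-buckets flag discussed
    above -- not PySpark's actual implementation.)
    """
    counts = [0] * (len(buckets) - 1)
    lo, hi = buckets[0], buckets[-1]
    width = (hi - lo) / (len(buckets) - 1)
    even = all(abs(buckets[i + 1] - buckets[i] - width) < 1e-9
               for i in range(len(buckets) - 1))
    for v in values:
        if v < lo or v > hi:
            continue  # values outside the range are ignored
        if even:
            # arithmetic lookup; clamp so v == hi lands in the last bucket
            i = min(int((v - lo) / width), len(counts) - 1)
        else:
            # general case: binary search over the boundaries
            i = min(bisect.bisect_right(buckets, v) - 1, len(counts) - 1)
        counts[i] += 1
    return counts
```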
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2052#issuecomment-53181130
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19120/consoleFull)
for PR 2052 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2092#issuecomment-53181636
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19121/consoleFull)
for PR 2092 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2093#issuecomment-53181857
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19122/consoleFull)
for PR 2093 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2093#issuecomment-53182020
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19123/consoleFull)
for PR 2093 at commit
Github user sarutak commented on the pull request:
https://github.com/apache/spark/pull/2078#issuecomment-53184688
Thanks @mridulm , @JoshRosen !
So, how about the following solution?
* Leave the try/catch for CancelledKeyException, which can be thrown from
Github user mridulm commented on the pull request:
https://github.com/apache/spark/pull/2078#issuecomment-53184900
Sounds good, tux!
Github user sarutak commented on the pull request:
https://github.com/apache/spark/pull/1886#issuecomment-53185942
@liancheng Yes, I merged your branch to mine. I'll update my branch if your
#1994 is updated, thanks!
Github user chuxi commented on the pull request:
https://github.com/apache/spark/pull/2084#issuecomment-53189498
I think CAST is the better choice (compared with the no-CAST method). It is
implemented in the
case class Cast(child: Expression, dataType: DataType) extends
Github user mattf commented on the pull request:
https://github.com/apache/spark/pull/2093#issuecomment-53190051
I'm never a fan of code reformatting, whitespace changes, or refactoring at
the same time as functional changes, e.g. if (k, v) = if k, v; v if k not in m
else func(m[k],
GitHub user SpyderRiverA opened a pull request:
https://github.com/apache/spark/pull/2107
update fork
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/SpyderRiverA/spark master
Alternatively you can review and apply these
Github user SpyderRiverA closed the pull request at:
https://github.com/apache/spark/pull/2107
Github user sarutak commented on the pull request:
https://github.com/apache/spark/pull/2019#issuecomment-53192139
One of the issues I'd like to resolve in this PR is mis-detection when a
SendingConnection is closed by its corresponding ReceivingConnection in
removeConnection.
If
GitHub user scwf opened a pull request:
https://github.com/apache/spark/pull/2108
[SPARK-3193] Output error info when Process exit code is not zero in test
suite
https://issues.apache.org/jira/browse/SPARK-3193
I noticed that sometimes PR tests failed due to the Process exit code
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2108#issuecomment-53194216
Can one of the admins verify this patch?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2078#issuecomment-53197438
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19125/consoleFull)
for PR 2078 at commit
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/1670#issuecomment-53198943
Yea I can close it - thanks Josh.
Github user pwendell closed the pull request at:
https://github.com/apache/spark/pull/1670
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2078#issuecomment-53199027
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19125/consoleFull)
for PR 2078 at commit
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2104#issuecomment-53199130
Thanks - this seems straightforward. I merged this into master.
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/2104
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/918#issuecomment-53199274
Can one of the admins verify this patch?
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/1940#issuecomment-53199778
I commented on the JIRA - but we already have code that handles the fact
that cancellation is not supported in Mesos. It's likely this is related to
some other type of
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/1860#issuecomment-53200124
Hey Martin,
I'm having a bit of trouble seeing how this works around the issue. From
what I can tell the issue is that if someone creates Executors that consume
Github user davies commented on the pull request:
https://github.com/apache/spark/pull/2093#issuecomment-53200221
@mattf While I was scanning down the whole file line by line in order to
find all the issues related to preservesPartitioning, reformatting them at
the same time, if
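For context, preservesPartitioning matters because a transformation that leaves keys untouched keeps an existing hash partitioning valid; a toy plain-Python model of the idea (not the RDD API):

```python
def partition_of(key, num_partitions):
    """Toy stand-in for a hash partitioner."""
    return hash(key) % num_partitions

def map_values(pairs, f):
    """Keys are untouched, so any partitioning by key remains valid --
    the situation where preservesPartitioning=True is safe to pass."""
    return [(k, f(v)) for k, v in pairs]

def map_pairs(pairs, f):
    """f may change the keys, so an existing partitioning can no
    longer be trusted afterwards."""
    return [f(k, v) for k, v in pairs]
```

Passing the flag incorrectly (claiming preservation when keys change) would silently break operations that rely on co-partitioning, which is why each call site has to be audited.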
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2093#issuecomment-53201526
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19126/consoleFull)
for PR 2093 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2093#issuecomment-53203062
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19126/consoleFull)
for PR 2093 at commit
Github user MartinWeindel commented on the pull request:
https://github.com/apache/spark/pull/1860#issuecomment-53203084
Hey Patrick,
first of all let me emphasize again that this is only a work-around. The
real problem is that Mesos only makes offers if there are at
GitHub user marmbrus opened a pull request:
https://github.com/apache/spark/pull/2109
[WIP][SPARK-3194][SQL] Add AttributeSet to fix bugs with invalid
comparisons of AttributeReferences
It is common to want to describe sets of attributes that are in various
parts of a query plan.
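The idea of identifying attributes by a stable expression id rather than by full object equality can be sketched in Python (a toy analogue, not the actual Catalyst AttributeSet):

```python
class Attribute:
    """Toy analogue of an AttributeReference: the same logical column
    can appear with different cosmetic fields (e.g. a qualifier), but
    is identified by a stable expression id."""
    def __init__(self, name, expr_id, qualifier=None):
        self.name = name
        self.expr_id = expr_id
        self.qualifier = qualifier

class AttributeSet:
    """Membership by expr_id only, so cosmetically different references
    to the same attribute compare equal -- the invalid comparisons the
    PR title refers to would come from comparing full objects instead."""
    def __init__(self, attrs=()):
        self._ids = {a.expr_id: a for a in attrs}

    def __contains__(self, attr):
        return attr.expr_id in self._ids

    def __len__(self):
        return len(self._ids)
```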
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2109#issuecomment-53206312
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19127/consoleFull)
for PR 2109 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2109#issuecomment-53206395
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19127/consoleFull)
for PR 2109 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2109#issuecomment-53209466
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19128/consoleFull)
for PR 2109 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2109#issuecomment-53209493
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19128/consoleFull)
for PR 2109 at commit
GitHub user fjiang6 opened a pull request:
https://github.com/apache/spark/pull/2110
[SPARK-3188][MLLIB]: Add Robust Regression Algorithm with Tukey bisquare
(biweight) function
Biweight Robust Regression including the test case and an example.
Passed the style checks
You can
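For reference, Tukey's bisquare (biweight) weight function downweights large residuals all the way to zero; a minimal sketch (the tuning constant c = 4.685 is the conventional default for ~95% Gaussian efficiency, not a value taken from this PR):

```python
def tukey_biweight(u, c=4.685):
    """Tukey's bisquare (biweight) weight for a scaled residual u:
    w(u) = (1 - (u/c)^2)^2 for |u| < c, and 0 otherwise, so outliers
    beyond c are ignored entirely in the reweighted least squares."""
    if abs(u) >= c:
        return 0.0
    t = 1.0 - (u / c) ** 2
    return t * t
```

In iteratively reweighted least squares, each observation's contribution is multiplied by this weight, recomputed from the previous iteration's residuals.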
Github user chesterxgchen closed the pull request at:
https://github.com/apache/spark/pull/2090
Github user chesterxgchen commented on the pull request:
https://github.com/apache/spark/pull/2090#issuecomment-53211490
Patrick
The release version is incorrect in the YARN pom intentionally (it's a
weird artifact of the way we publish builds).
It would be nice if someone
Github user chesterxgchen commented on the pull request:
https://github.com/apache/spark/pull/2090#issuecomment-53212454
BTW, I have to change the pom.xml in order to fix the unit test
Github user GrahamDennis commented on the pull request:
https://github.com/apache/spark/pull/1890#issuecomment-53213573
@rxin: I haven't modified the Mesos code, and it seems that wouldn't be too
hard to do, but I have no way of testing it. Suggestions welcomed.
As for YARN,
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/1717#issuecomment-53214500
Jenkins, this is ok to test
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1717#issuecomment-53214781
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19129/consoleFull)
for PR 1717 at commit
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/1726#discussion_r16639309
--- Diff: pom.xml ---
@@ -125,6 +125,7 @@
protobuf.version2.4.1/protobuf.version
yarn.version${hadoop.version}/yarn.version
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/2068#issuecomment-53217627
LGTM. Merged into master and branch-1.1! Thanks for helping on the
documentation!!
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/2068
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/2070#issuecomment-53217715
I've merged this into master and branch-1.1. Thanks!!
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/2070
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/1717#issuecomment-53217871
@sjbrunst This is a great addition! Thanks for the effort. However, from the
patch, I can see that this changes the signature of a few methods, which
required the examples to
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/1717#discussion_r16639408
--- Diff:
external/twitter/src/main/scala/org/apache/spark/streaming/twitter/TwitterInputDStream.scala
---
@@ -85,9 +89,14 @@ class TwitterReceiver(
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/1717#discussion_r16639417
--- Diff:
external/twitter/src/main/scala/org/apache/spark/streaming/twitter/TwitterUtils.scala
---
@@ -75,16 +80,44 @@ object TwitterUtils {
* OAuth
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/1717#discussion_r16639413
--- Diff:
external/twitter/src/main/scala/org/apache/spark/streaming/twitter/TwitterUtils.scala
---
@@ -33,15 +33,20 @@ object TwitterUtils {
*
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1717#issuecomment-53217956
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19129/consoleFull)
for PR 1717 at commit
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/1717#discussion_r16639433
--- Diff:
external/twitter/src/main/scala/org/apache/spark/streaming/twitter/TwitterUtils.scala
---
@@ -115,17 +148,45 @@ object TwitterUtils {
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/1717#discussion_r16639523
--- Diff:
external/twitter/src/test/java/org/apache/spark/streaming/twitter/JavaTwitterStreamSuite.java
---
@@ -31,16 +31,19 @@
@Test
public void
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/1717#issuecomment-53218083
The unit tests failed because these new functions are not binary
compatible with previous versions of Spark.
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/1726#issuecomment-53218467
LGTM. Also spoke to @pwendell offline that it's cool to add the Flume version
(that is, the versions of libraries used in external projects) in the root
pom.xml. Test this once more and
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/1726#issuecomment-53218469
Jenkins, test this please.
GitHub user chesterxgchen opened a pull request:
https://github.com/apache/spark/pull/2111
SPARK-3177: Yarn-alpha ClientBaseSuite unit test failed
This is a second try to fix the Yarn-alpha unit test failure due to YARN API
changes.
I have to include SPARK-3175 (pom.xml)
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1726#issuecomment-53218619
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19130/consoleFull)
for PR 1726 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2111#issuecomment-53218717
Can one of the admins verify this patch?
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/1846#discussion_r16639969
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/CreateTableAsSelect.scala
---
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1846#issuecomment-53219649
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19131/consoleFull)
for PR 1846 at commit
Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/1846#issuecomment-53219646
Thank you @yhuai I've updated the code style issue.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1726#issuecomment-53220264
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19130/consoleFull)
for PR 1726 at commit
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/1846#discussion_r16640131
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicOperators.scala
---
@@ -132,6 +132,7 @@ case class
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1846#issuecomment-53220384
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19132/consoleFull)
for PR 1846 at commit
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/1846#discussion_r16640276
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicOperators.scala
---
@@ -132,6 +132,7 @@ case class
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1846#issuecomment-53222714
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19131/consoleFull)
for PR 1846 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1846#issuecomment-53223497
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19132/consoleFull)
for PR 1846 at commit
Github user chuxi commented on the pull request:
https://github.com/apache/spark/pull/2109#issuecomment-53225175
I think `references` is used for transformExpression in QueryPlan to make
sure the same attribute is traversed just once when the Analyzer is explaining the
Attributes in a
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2092#issuecomment-53225461
LGTM, so I've merged this into `master`. Thanks!
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/2092
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/1860#issuecomment-53226637
From my knowledge of Mesos, this seems like a good fix. I think we should
do this until MESOS-1688 is fixed.
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/1860#issuecomment-53226713
Jenkins, test this please
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/1612#discussion_r16642061
--- Diff: core/src/main/scala/org/apache/spark/rdd/JdbcRDD.scala ---
@@ -57,6 +61,8 @@ class JdbcRDD[T: ClassTag](
mapRow: (ResultSet) = T =
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/1860#issuecomment-53226740
BTW @MartinWeindel one small request -- can you update the
docs/running-on-mesos.md page to explain that each task will consume 32 MB?
Otherwise people might set Spark's
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/1612#discussion_r16642093
--- Diff: core/src/main/scala/org/apache/spark/rdd/JdbcRDD.scala ---
@@ -81,8 +113,14 @@ class JdbcRDD[T: ClassTag](
logInfo(statement fetch
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/1612#discussion_r16642111
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcResultSetRDD.scala ---
@@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1860#issuecomment-53226938
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19133/consoleFull)
for PR 1860 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1860#issuecomment-53226981
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19133/consoleFull)
for PR 1860 at commit
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/1612#discussion_r16642168
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/jdbc/JdbcResultSetRDDSuite.scala
---
@@ -0,0 +1,75 @@
+/*
+ * Licensed to the Apache
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/1860#issuecomment-53227103
BTW this failure is due to a style check -- you can run sbt scalastyle
locally to find all style issues (the Jenkins log also lists the problem).
Github user iven commented on the pull request:
https://github.com/apache/spark/pull/1860#issuecomment-53228456
@MartinWeindel I think you should check if there's enough memory in the
offer first.
GitHub user mengxr opened a pull request:
https://github.com/apache/spark/pull/2112
[SPARK-2495][MLLIB] make KMeans constructor public
to re-construct k-means models
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/mengxr/spark
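The use case, reconstructing a k-means model from saved centers and predicting without re-training, can be sketched in plain Python (an illustration of the idea, not MLlib's API):

```python
import math

def nearest_center(point, centers):
    """Assign a point to its nearest center by squared Euclidean
    distance -- all that is needed to rebuild a usable k-means model
    from saved centers, which is why a public constructor helps."""
    best, best_dist = -1, math.inf
    for i, c in enumerate(centers):
        d = sum((p - q) ** 2 for p, q in zip(point, c))
        if d < best_dist:
            best, best_dist = i, d
    return best
```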
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2112#issuecomment-53228709
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/19134/consoleFull)
for PR 2112 at commit
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/1860#issuecomment-53229242
That's true; now that we take 32 MB extra, you need to change the logic
about how many tasks we can allocate. That will make it trickier.
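The allocation arithmetic being discussed can be sketched as follows; the function name and the per-task memory figure are illustrative only, not Spark's scheduler code:

```python
def max_tasks(offer_mem_mb, offer_cpus, task_cpus=1, task_mem_mb=32):
    """How many tasks fit in a resource offer once each task carries
    its own memory overhead (the 32 MB discussed above): the binding
    constraint is whichever resource runs out first."""
    by_cpu = offer_cpus // task_cpus
    by_mem = offer_mem_mb // task_mem_mb
    return min(by_cpu, by_mem)
```

With per-task memory in the picture, an offer can become memory-bound even when CPUs remain, which is the extra logic the comment says the scheduler would need.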
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/1860#issuecomment-53229289
Hey @MartinWeindel - I'm curious, which of the following cases are you in:
Case 1. You have individual executors that attempt to acquire all the
memory on the
Github user mubarak commented on the pull request:
https://github.com/apache/spark/pull/1723#issuecomment-53229308
@tdas
I don't think the new (proposed) regex in `Utils.getCallSite` works for the
test suite. For instance,
96 matches
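A call-site regex of the kind under discussion can be illustrated in Python; the pattern below is a hypothetical stand-in, not the actual Scala regex in `Utils.getCallSite`:

```python
import re

# Hypothetical pattern for deciding whether a stack-trace class name
# belongs to Spark internals (and should be skipped when reporting the
# user's call site).  Spark's real regex differs.
SPARK_CLASS = re.compile(r"^org\.apache\.spark(\.api\.java|\.rdd|\.streaming)?\.\w+")

def is_internal(class_name):
    return SPARK_CLASS.match(class_name) is not None
```

An over-broad pattern matches too many frames (the "96 matches" above suggests exactly that), so the reported call site ends up deep inside test scaffolding instead of user code.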