Github user jerryshao commented on the pull request:
https://github.com/apache/spark/pull/1631#issuecomment-50437417
Cool, much cleaner than the previous code, looks good to me :)
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1536#discussion_r15507202
--- Diff:
examples/src/main/scala/org/apache/spark/examples/pythonconverters/AvroGenericConverter.scala
---
@@ -0,0 +1,80 @@
+/*
+ * Licensed to
Github user ueshin commented on the pull request:
https://github.com/apache/spark/pull/1586#issuecomment-50437674
I'm sorry, but now I'm confused.
`Length` and `Strlen` look like they are becoming almost the same implementation.
What do you intend the difference between them to be?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1624#issuecomment-50438015
QA results for PR 1624:
- This patch PASSES unit tests.
- This patch merges cleanly.
- This patch adds no public classes.
For more information see test
Github user javadba commented on the pull request:
https://github.com/apache/spark/pull/1586#issuecomment-50439423
@ueshin The length applies to any datatype, as I described in a prior
comment. As for getBytes, I am following the recommendation of @chenghao-intel:
I
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1490#issuecomment-50439564
I looked at this (with some confusion). Yes, I agree it would be great to
just signal failure using the promise when an error occurs.
@sarutak do you think you can
Github user avati commented on the pull request:
https://github.com/apache/spark/pull/996#issuecomment-50439713
@ScrapCodes @mateiz it looks like there are some parallel efforts here
(github.com/avati/spark/commits/scala-2.11). It is true some upstream artifacts
are pending (from other
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/478#issuecomment-50440084
Hi @prabinb,
Thanks for submitting this PR. This issue has been fixed by #1606, so do
you mind closing this? Thanks!
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/515#issuecomment-50440531
If you can figure out a way to retain backwards-compatibility with IPython
2, I'd be happy to merge this. Maybe you can do something like parsing
`ipython --version`
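The check JoshRosen suggests boils down to reading the major version out of the `ipython --version` output. A minimal Scala sketch of that parsing (the helper name and standalone form are my own; the real launcher logic lives in a shell script, and `ipythonMajorVersion` is purely hypothetical):

```scala
// Hypothetical helper: extract the major version from the output of
// `ipython --version` (e.g. "2.1.0") so IPython 2 can be special-cased.
// In practice the string would come from running the command, e.g. via
// scala.sys.process: Seq("ipython", "--version").!!
def ipythonMajorVersion(versionOutput: String): Int =
  versionOutput.trim.split("\\.")(0).toInt

// A caller could then branch on the detected major version:
val isIPython2 = ipythonMajorVersion("2.1.0\n") == 2
```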
GitHub user sarutak opened a pull request:
https://github.com/apache/spark/pull/1632
[SPARK-2677] BasicBlockFetchIterator#next can wait forever
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/sarutak/spark SPARK-2677
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1630#issuecomment-50440788
Jenkins, test this please.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1632#issuecomment-50440763
Can one of the admins verify this patch?
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/1614#issuecomment-50440880
@XuTingjun mind creating a JIRA issue on
https://issues.apache.org/jira/browse/SPARK so we can track this? When you do,
update the pull request's title with the JIRA
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/554#issuecomment-50440910
Hi @kalpit,
Since this PR has been superseded by #644, do you mind closing it? Thanks!
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/1499#discussion_r15508024
--- Diff:
core/src/main/scala/org/apache/spark/rdd/OrderedRDDFunctions.scala ---
@@ -43,10 +44,10 @@ import org.apache.spark.{Logging, RangePartitioner}
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1631#issuecomment-50441055
QA results for PR 1631:
- This patch FAILED unit tests.
- This patch merges cleanly.
- This patch adds the following public classes (experimental):
class
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/681#issuecomment-50441094
Libcloud looks good actually, and it's nice that it's another Apache
project. Would be worth a try if you guys want to investigate it. It would be
awesome if we also get
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1630#issuecomment-50441147
QA tests have started for PR 1630. This patch merges cleanly.
View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17341/consoleFull
GitHub user hzw19900416 opened a pull request:
https://github.com/apache/spark/pull/1633
fix a mistaken type of if in description of trait Partitioning
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/hzw19900416/spark
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1631#issuecomment-50441277
Jenkins, retest this please.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1110#discussion_r15508170
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/rdd/RDDFunctions.scala ---
@@ -44,6 +47,65 @@ class RDDFunctions[T: ClassTag](self: RDD[T]) {
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/1518#issuecomment-50441485
@dbtsai I thought of another way to do this and want to know your opinion. We
can add an optional argument to `appendBias`: `appendBias(bias: Double = 1.0)`.
If this is used
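mengxr's idea above is a Scala default argument. A minimal standalone sketch, assuming an `Array[Double]` representation (the real `appendBias` lives in MLlib and works on its `Vector` type, so this form is only illustrative):

```scala
// Sketch of appendBias with an optional bias argument, as suggested.
// Existing call sites keep compiling because the argument defaults to 1.0.
def appendBias(features: Array[Double], bias: Double = 1.0): Array[Double] =
  features :+ bias

val v1 = appendBias(Array(0.5, 2.0))       // uses the default bias of 1.0
val v2 = appendBias(Array(0.5, 2.0), 0.0)  // caller overrides the bias
```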
Github user hzw19900416 commented on the pull request:
https://github.com/apache/spark/pull/1529#issuecomment-50441495
This error is due to my environment, so I'm closing it.
In addition, using `mvn package` to run the unit tests while compiling is
better than using `mvn test`,
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/1313#discussion_r15508220
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala ---
@@ -246,28 +246,36 @@ private[spark] class TaskSchedulerImpl(
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/1632#discussion_r15508224
--- Diff:
core/src/main/scala/org/apache/spark/storage/BlockFetcherIterator.scala ---
@@ -117,31 +121,45 @@ object BlockFetcherIterator {
})
Github user hzw19900416 closed the pull request at:
https://github.com/apache/spark/pull/1529
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1625#issuecomment-50441519
@JoshRosen
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/1313#discussion_r15508258
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -113,6 +114,10 @@ private[spark] class TaskSetManager(
// but at
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1536#discussion_r15508267
--- Diff:
examples/src/main/scala/org/apache/spark/examples/pythonconverters/AvroGenericConverter.scala
---
@@ -0,0 +1,80 @@
+/*
+ * Licensed to
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1110#discussion_r15508295
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/rdd/RDDFunctions.scala ---
@@ -44,6 +47,65 @@ class RDDFunctions[T: ClassTag](self: RDD[T]) {
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/1313#discussion_r15508307
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -341,20 +346,31 @@ private[spark] class TaskSetManager(
*
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/1313#discussion_r15508323
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -363,38 +379,44 @@ private[spark] class TaskSetManager(
}
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/1313#discussion_r15508338
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -435,6 +460,13 @@ private[spark] class TaskSetManager(
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/1313#discussion_r15508347
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -751,20 +787,7 @@ private[spark] class TaskSetManager(
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1110#discussion_r15508334
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/rdd/RDDFunctions.scala ---
@@ -44,6 +47,65 @@ class RDDFunctions[T: ClassTag](self: RDD[T]) {
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1110#discussion_r15508331
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/rdd/RDDFunctions.scala ---
@@ -44,6 +47,65 @@ class RDDFunctions[T: ClassTag](self: RDD[T]) {
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1536#discussion_r15508369
--- Diff:
examples/src/main/scala/org/apache/spark/examples/pythonconverters/AvroGenericConverter.scala
---
@@ -0,0 +1,80 @@
+/*
+ * Licensed to
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/1313#discussion_r15508357
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/TaskSchedulerImplSuite.scala ---
@@ -80,7 +80,7 @@ class FakeTaskSetManager(
override def
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/1313#discussion_r15508376
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -341,20 +346,31 @@ private[spark] class TaskSetManager(
*
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/1313#issuecomment-50442021
This looks a lot better, thanks. I still made a few comments throughout. I
think we can get rid of the fine-grained tracking of which nodes have node-only
tasks, that is
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1110#discussion_r15508428
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/rdd/RDDFunctions.scala ---
@@ -44,6 +47,65 @@ class RDDFunctions[T: ClassTag](self: RDD[T]) {
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/1480#issuecomment-50442106
But why is that? The JVM should always call shutdown hooks when it exits.
Is Mesos killing the process?
I'm curious because we might have other behavior that
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/1338#issuecomment-50442216
Looks good to me as well. @JoshRosen any comments?
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1110#discussion_r15508491
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/rdd/RDDFunctions.scala ---
@@ -44,6 +47,65 @@ class RDDFunctions[T: ClassTag](self: RDD[T]) {
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1110#discussion_r15508543
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/rdd/RDDFunctions.scala ---
@@ -44,6 +47,65 @@ class RDDFunctions[T: ClassTag](self: RDD[T]) {
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/1551#issuecomment-50442362
We aren't passing completely arbitrary iterators of Java objects to
writeIteratorToStream; instead, we only handle iterators of strings and byte
arrays. Nulls in data
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1110#discussion_r15508571
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/linalg/distributed/RowMatrix.scala
---
@@ -104,13 +105,11 @@ class RowMatrix(
val nt: Int = n *
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1110#discussion_r15508559
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/rdd/RDDFunctions.scala ---
@@ -44,6 +47,65 @@ class RDDFunctions[T: ClassTag](self: RDD[T]) {
Github user ueshin commented on the pull request:
https://github.com/apache/spark/pull/1586#issuecomment-50442618
First, I would like to confirm, but which do you want to add to HQL,
`Length` or `Strlen`?
The title of this PR says to add `Length` to HQL, but the implementation
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/1110#discussion_r15508694
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/rdd/RDDFunctions.scala ---
@@ -44,6 +47,65 @@ class RDDFunctions[T: ClassTag](self: RDD[T]) {
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/1110#discussion_r15508695
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/rdd/RDDFunctions.scala ---
@@ -44,6 +47,65 @@ class RDDFunctions[T: ClassTag](self: RDD[T]) {
Github user sarutak commented on the pull request:
https://github.com/apache/spark/pull/1619#issuecomment-50442727
@witgo @pwendell I have already noticed that there is no timeout
configuration for ConnectionManager, but a ConnectionManager timeout does not
resolve this issue
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1338#discussion_r15508664
--- Diff: core/src/main/scala/org/apache/spark/api/python/SerDeUtil.scala
---
@@ -65,20 +66,49 @@ private[python] object SerDeUtil extends Logging {
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/1110#discussion_r15508718
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/rdd/RDDFunctions.scala ---
@@ -44,6 +47,65 @@ class RDDFunctions[T: ClassTag](self: RDD[T]) {
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/1110#discussion_r15508725
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/rdd/RDDFunctions.scala ---
@@ -44,6 +47,65 @@ class RDDFunctions[T: ClassTag](self: RDD[T]) {
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1110#discussion_r15508785
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/rdd/RDDFunctions.scala ---
@@ -44,6 +47,65 @@ class RDDFunctions[T: ClassTag](self: RDD[T]) {
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/1110#discussion_r15508790
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/rdd/RDDFunctions.scala ---
@@ -44,6 +47,65 @@ class RDDFunctions[T: ClassTag](self: RDD[T]) {
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/1110#discussion_r15508808
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/rdd/RDDFunctions.scala ---
@@ -44,6 +47,65 @@ class RDDFunctions[T: ClassTag](self: RDD[T]) {
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/1110#discussion_r15508803
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/rdd/RDDFunctions.scala ---
@@ -44,6 +47,65 @@ class RDDFunctions[T: ClassTag](self: RDD[T]) {
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1383#issuecomment-50443081
Sorry to come back to this after a while. Disk faults can be transient as
well right? I'm not sure if we'd want to exit the executor simply because of
one disk fault.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1625#issuecomment-50443121
QA tests have started for PR 1625. This patch merges cleanly.
View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17343/consoleFull
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1346#issuecomment-50443467
QA tests have started for PR 1346. This patch merges cleanly.
View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17344/consoleFull
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1110#issuecomment-50443472
QA tests have started for PR 1110. This patch merges cleanly.
View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17345/consoleFull
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/1625#issuecomment-50443446
I've merged this into `master` and `branch-1.0`. Thanks Davies!
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/1631#issuecomment-50443804
Looks good to me too, though it might be better to use Java's Arrays.sort
instead of Scala's quickSort since Java has fancier algorithms in new versions.
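The swap mateiz suggests is a one-liner from Scala, since `java.util.Arrays.sort` accepts primitive arrays directly. A small sketch (the sample arrays are made up, not the PR's actual data):

```scala
import java.util.Arrays
import scala.util.Sorting

val a = Array(5, 1, 4, 2)
Sorting.quickSort(a)   // Scala's in-place quickSort

val b = Array(5, 1, 4, 2)
Arrays.sort(b)         // Java's sort; newer JDKs use a tuned dual-pivot
                       // quicksort for primitive arrays
```

Both sort in place; since `Array[Int]` is a JVM `int[]` at runtime, the Java call involves no boxing or copying.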
Github user davies commented on the pull request:
https://github.com/apache/spark/pull/1625#issuecomment-50443830
@JoshRosen did it include the last commit? I didn't find it in master or
branch-1.0. Is it delayed?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1309#issuecomment-50443836
QA results for PR 1309:
- This patch FAILED unit tests.
For more information see test
Github user javadba commented on the pull request:
https://github.com/apache/spark/pull/1586#issuecomment-50443929
@ueshin That is not what the title reads. Here is the title:
Add Length support to Spark SQL and HQL and Strlen support to SQ
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/1625#issuecomment-50443990
@davies Yeah, it included both commits. If you check the [Apache
repo](https://git-wip-us.apache.org/repos/asf/spark/repo?p=spark.git;a=summary),
you should see the
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/1434#issuecomment-50443974
@pdeyhim can you take a look over this too when you have a chance?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1110#issuecomment-50444114
QA tests have started for PR 1110. This patch merges cleanly.
View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17346/consoleFull
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/1612#discussion_r15509316
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcResultSetRDD.scala ---
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/1612#discussion_r15509342
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcTypes.scala
---
@@ -0,0 +1,56 @@
+/*
+* Licensed to the Apache Software
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/1612#discussion_r15509361
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcTypes.scala
---
@@ -0,0 +1,56 @@
+/*
+* Licensed to the Apache Software
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1633#issuecomment-50444578
Do you mind closing the pull request? Thanks.
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1631#issuecomment-50444740
I tried that, but had some issues with types between Scala and Java and
resorted to the current implementation. In any case, because this code will
likely be replaced soon by
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1110#issuecomment-50445699
LGTM.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1625#issuecomment-50446807
QA results for PR 1625:
- This patch PASSES unit tests.
- This patch merges cleanly.
- This patch adds no public classes.
For more information see test
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1110#issuecomment-50446850
QA results for PR 1110:
- This patch FAILED unit tests.
- This patch merges cleanly.
- This patch adds no public classes.
For more information see test
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1631#issuecomment-50447222
QA results for PR 1631:
- This patch PASSES unit tests.
- This patch merges cleanly.
- This patch adds the following public classes (experimental):
class
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1631#issuecomment-50447480
Ok I'm merging this. Thanks for reviewing.
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1110#issuecomment-50447573
Jenkins, retest this please.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1110#issuecomment-50447677
QA results for PR 1110:
- This patch PASSES unit tests.
- This patch merges cleanly.
- This patch adds no public classes.
For more information see test
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1110#issuecomment-50447729
Merging this in master. Thanks!
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/1627#issuecomment-50447804
Merged into master, branch-1.0, and branch-0.9. Thanks!
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1110#issuecomment-50447939
QA tests have started for PR 1110. This patch merges cleanly.
View progress:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17347/consoleFull
Github user rxin closed the pull request at:
https://github.com/apache/spark/pull/1631
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1346#discussion_r15510908
--- Diff: python/pyspark/sql.py ---
@@ -20,8 +20,457 @@
from py4j.protocol import Py4JError
-__all__ = [SQLContext, HiveContext,
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1346#discussion_r15510935
--- Diff: python/pyspark/sql.py ---
@@ -20,8 +20,457 @@
from py4j.protocol import Py4JError
-__all__ = [SQLContext, HiveContext,
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/1547#discussion_r15510944
--- Diff: core/src/main/scala/org/apache/spark/Logging.scala ---
@@ -110,23 +110,26 @@ trait Logging {
}
private def initializeLogging()
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/1547#discussion_r15510992
--- Diff: core/src/main/scala/org/apache/spark/Logging.scala ---
@@ -110,23 +110,26 @@ trait Logging {
}
private def initializeLogging()
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1346#discussion_r15510995
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/WrapDynamic.scala
---
@@ -21,7 +21,9 @@ import scala.language.dynamics
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/1547#issuecomment-50448651
Thanks for catching this and digging into the fix. Some small questions in
the PR, but generally looks good!
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1346#discussion_r15511092
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/types/dataTypes.scala
---
@@ -201,47 +231,139 @@ object FractionalType {
}
}
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1630#issuecomment-50448836
QA results for PR 1630:
- This patch PASSES unit tests.
- This patch merges cleanly.
- This patch adds no public classes.
For more information see test
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1346#discussion_r15511210
--- Diff:
sql/core/src/main/java/org/apache/spark/sql/api/java/types/DataType.java ---
@@ -0,0 +1,212 @@
+/*
+ * Licensed to the Apache Software
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1547#discussion_r15511218
--- Diff: core/src/main/scala/org/apache/spark/Logging.scala ---
@@ -110,23 +110,26 @@ trait Logging {
}
private def initializeLogging() {
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1346#discussion_r15511199
--- Diff:
sql/core/src/main/java/org/apache/spark/sql/api/java/types/DataType.java ---
@@ -0,0 +1,212 @@
+/*
+ * Licensed to the Apache Software
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1346#discussion_r15511259
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala ---
@@ -89,6 +90,45 @@ class SQLContext(@transient val sparkContext:
SparkContext)
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1547#discussion_r15511267
--- Diff: core/src/main/scala/org/apache/spark/Logging.scala ---
@@ -110,23 +110,26 @@ trait Logging {
}
private def initializeLogging() {