Github user sbcd90 commented on the pull request:
https://github.com/apache/spark/pull/13257#issuecomment-222180929
@HyukjinKwon , closing this PR for now.
---
If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastructure@apache.org or file a JIRA ticket with INFRA.
Github user sbcd90 closed the pull request at:
https://github.com/apache/spark/pull/13257
---
Github user sbcd90 commented on the pull request:
https://github.com/apache/spark/pull/13257#issuecomment-221452898
Hello @HyukjinKwon, I think it is an ORC issue. There is a final call to the ORC API, and I feel the issue should be fixed in ORC.
```
new OrcOutputFo
```
Github user sbcd90 commented on the pull request:
https://github.com/apache/spark/pull/13257#issuecomment-220959617
@HyukjinKwon, this PR fixes exactly that issue. If you test this PR with the sample code you provided in the JIRA ticket
```
val conf = new SparkConf
```
Github user sbcd90 commented on the pull request:
https://github.com/apache/spark/pull/13257#issuecomment-220885344
Hello @HyukjinKwon, I have added a test case now. Please review.
---
GitHub user sbcd90 opened a pull request:
https://github.com/apache/spark/pull/13257
[SPARK-15474][SQL] ORC data source fails to write and read back empty dataframe
## What changes were proposed in this pull request?
Currently, the ORC data source fails to write and read empty
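The failure this PR addresses can be reproduced with a minimal sketch along these lines. This is not the exact code from the JIRA ticket; the session setup and the `/tmp/empty_orc` path are illustrative assumptions:

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch of the reported failure: writing an empty DataFrame
// as ORC and reading it back. Path and session setup are illustrative.
object EmptyOrcRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[1]")
      .appName("empty-orc-repro")
      .getOrCreate()
    import spark.implicits._

    val empty = Seq.empty[(Int, String)].toDF("id", "name")
    val path = "/tmp/empty_orc"

    empty.write.mode("overwrite").orc(path) // writing an empty DataFrame
    val back = spark.read.orc(path)         // reading it back was reported to fail before the fix
    assert(back.count() == 0)
    spark.stop()
  }
}
```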
Github user sbcd90 closed the pull request at:
https://github.com/apache/spark/pull/13012
---
Github user sbcd90 commented on the pull request:
https://github.com/apache/spark/pull/13012#issuecomment-218054997
Hello @rxin , @koertkuipers ,
I have been following your conversation and feel that the right fix would be
```
override def nullable: Boo
```
Github user sbcd90 commented on the pull request:
https://github.com/apache/spark/pull/12458#issuecomment-217992684
Hello @rxin , @zsxwing ,
I made the suggested changes.
---
Github user sbcd90 commented on the pull request:
https://github.com/apache/spark/pull/13012#issuecomment-217985550
Jenkins retest this please
---
Github user sbcd90 commented on the pull request:
https://github.com/apache/spark/pull/13012#issuecomment-217982471
Jenkins test this please
---
GitHub user sbcd90 opened a pull request:
https://github.com/apache/spark/pull/13012
[SPARK-15204][SQL] Nullable is not correct for Aggregator
## What changes were proposed in this pull request?
```
object SimpleSum extends Aggregator[Row, Int, Int] {
  def zero
```
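The snippet above is cut off after `zero`. A hedged completion matching the `Aggregator[Row, Int, Int]` signature shown in the PR description might look like this; everything beyond `zero` is an assumption written to make the sketch self-contained:

```scala
import org.apache.spark.sql.{Encoder, Encoders, Row}
import org.apache.spark.sql.expressions.Aggregator

// Hedged completion of the truncated SimpleSum above. The body beyond
// `zero` is an assumption; it sums the first Int column of each Row.
object SimpleSum extends Aggregator[Row, Int, Int] {
  def zero: Int = 0
  def reduce(b: Int, a: Row): Int = b + a.getInt(0)
  def merge(b1: Int, b2: Int): Int = b1 + b2
  def finish(reduction: Int): Int = reduction
  def bufferEncoder: Encoder[Int] = Encoders.scalaInt
  def outputEncoder: Encoder[Int] = Encoders.scalaInt
}
```

The nullability question in SPARK-15204 concerns the schema Spark infers for the aggregator's output column, not the aggregation logic itself.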
Github user sbcd90 commented on the pull request:
https://github.com/apache/spark/pull/12458#issuecomment-217732899
Hello @zsxwing ,
Can this be merged now? I fixed the scala style tests.
---
Github user sbcd90 commented on the pull request:
https://github.com/apache/spark/pull/12458#issuecomment-217726669
jenkins test this please
---
Github user sbcd90 commented on the pull request:
https://github.com/apache/spark/pull/12458#issuecomment-217604922
Hello @zsxwing ,
I have resolved the conflicts. Please have a look.
---
Github user sbcd90 commented on the pull request:
https://github.com/apache/spark/pull/12774#issuecomment-215927369
Hello @HyukjinKwon, I am able to reproduce the same issue even in Spark 1.6.1. I had two files like this:
```
/test_spark/join1.json
{"a":
```
GitHub user sbcd90 opened a pull request:
https://github.com/apache/spark/pull/12774
[SPARK-14997] Files in subdirectories are incorrectly considered in
sqlContext.read.json()
## What changes were proposed in this pull request?
This PR fixes the issue of "Fil
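The reported behavior can be sketched as follows. The directory layout is an assumption based on the `/test_spark/join1.json` path mentioned in the comment above:

```scala
import org.apache.spark.sql.SQLContext
import org.apache.spark.{SparkConf, SparkContext}

// Sketch of the reported behavior (directory layout is an assumption):
//   /test_spark/join1.json       <- expected to be read
//   /test_spark/sub/join2.json   <- reportedly read as well, incorrectly
object JsonSubdirRepro {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setMaster("local[1]").setAppName("json-subdir"))
    val sqlContext = new SQLContext(sc)

    // Before the fix, rows from files in subdirectories were reported
    // to be incorrectly included when reading the parent directory.
    val df = sqlContext.read.json("/test_spark")
    df.show()
    sc.stop()
  }
}
```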
Github user sbcd90 commented on the pull request:
https://github.com/apache/spark/pull/12666#issuecomment-214586830
Hi @srowen ,
changed to `IllegalArgumentException`.
---
Github user sbcd90 commented on the pull request:
https://github.com/apache/spark/pull/12666#issuecomment-214548968
Hi @jaceklaskowski ,
Thanks for your review. I think it is safe to make the changes in both places.
I also checked for occurrences of scheduler
GitHub user sbcd90 opened a pull request:
https://github.com/apache/spark/pull/12666
[SPARK-14889][Spark Core] scala.MatchError: NONE (of class
scala.Enumeration) when spark.scheduler.mode=NONE
## What changes were proposed in this pull request?
Handling exception for the
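The configuration that triggers the reported `scala.MatchError` can be sketched like this. `NONE` is a value of the `SchedulingMode` enumeration, but pattern matches on the mode reportedly did not handle it before the fix; per the review comment above, the fix replaces the `MatchError` with an `IllegalArgumentException`:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch of the configuration reported to trigger
// scala.MatchError: NONE (of class scala.Enumeration) before the fix.
object SchedulerModeRepro {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setMaster("local[1]")
      .setAppName("scheduler-mode-repro")
      .set("spark.scheduler.mode", "NONE") // enum value, but unhandled in matches
    val sc = new SparkContext(conf) // reported to throw scala.MatchError: NONE
    sc.stop()
  }
}
```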
Github user sbcd90 commented on the pull request:
https://github.com/apache/spark/pull/12532#issuecomment-212607935
Hi @rxin ,
Can this be merged?
---
Github user sbcd90 commented on the pull request:
https://github.com/apache/spark/pull/12532#issuecomment-212572446
Jenkins test this please
---
GitHub user sbcd90 opened a pull request:
https://github.com/apache/spark/pull/12532
[SPARK-14749][SQL, Tests] PlannerSuite failed when it run individually
## What changes were proposed in this pull request?
Three test cases, namely
```
"count is part
```
Github user sbcd90 commented on the pull request:
https://github.com/apache/spark/pull/12458#issuecomment-211207173
Hello @yhuai ,
I tested the scenario you mentioned, and it works fine for me without any
errors. However, as you rightly mentioned, the following state
Github user sbcd90 commented on the pull request:
https://github.com/apache/spark/pull/12458#issuecomment-211140805
Hi @rxin , I updated the title & description. Please have a look.
---
GitHub user sbcd90 opened a pull request:
https://github.com/apache/spark/pull/12458
[SPARK-14642][SQL] import org.apache.spark.sql.expressions._ breaks u…
## What changes were proposed in this pull request?
This PR fixes the import issue that breaks udf functions
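A sketch of the reported clash is below. The mechanism described in the comments is an inference from the issue title: the `expressions` package exposed a member named `scala`, so a wildcard import could shadow the root `scala` package and break udf definitions referencing Scala types:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.udf
// Before the fix, this wildcard import reportedly brought a member named
// `scala` into scope, shadowing the root scala package.
import org.apache.spark.sql.expressions._

object UdfImportRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[1]").appName("udf-import").getOrCreate()
    // With the shadowed `scala` package, udf definitions like this one
    // were reported to break.
    val up = udf((s: String) => s.toUpperCase)
    spark.range(1).select(up(org.apache.spark.sql.functions.lit("ok"))).show()
    spark.stop()
  }
}
```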
Github user sbcd90 commented on the pull request:
https://github.com/apache/spark/pull/12438#issuecomment-211007083
@rxin @sameeragarwal Looks like the 2 tests which failed are not because of
the fix. Please comment.
---
Github user sbcd90 commented on the pull request:
https://github.com/apache/spark/pull/12438#issuecomment-210936324
Hello @rxin , please have a look at the fix now. The fix now solves the
randomsplit issue without making maps comparable.
---
GitHub user sbcd90 opened a pull request:
https://github.com/apache/spark/pull/12438
SPARK-14632 randomSplit method fails on dataframes with maps in schema
## What changes were proposed in this pull request?
The patch fixes the issue with the randomSplit method which is not
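The failure mode can be sketched as follows. Per the comment above, the fix makes this work without making map values comparable; the data and split weights here are illustrative assumptions:

```scala
import org.apache.spark.sql.SparkSession

// Sketch of the reported failure: randomSplit on a DataFrame whose schema
// contains a MapType column. Map columns are not orderable, which was
// reported to break the split before the fix.
object RandomSplitMapRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[1]").appName("randomsplit-map").getOrCreate()
    import spark.implicits._

    val df = Seq((1, Map("a" -> 1)), (2, Map("b" -> 2))).toDF("id", "props")
    val Array(train, test) = df.randomSplit(Array(0.7, 0.3), seed = 42L)
    println(train.count() + test.count()) // all rows land in one split or the other
    spark.stop()
  }
}
```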