Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/4243
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/4243#issuecomment-72960213
Thanks! Merged to master and branch-1.3
---
Github user kul commented on the pull request:
https://github.com/apache/spark/pull/4243#issuecomment-72786838
@marmbrus Thanks for review!
Rebased against master and squashed into a new commit, renaming
`schemaRDDOperations` to the now more aptly named `dataFrameRDDOperations`.
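The renamed helper follows the familiar wrapper pattern: a class that enriches an underlying RDD-like collection with DataFrame-style convenience operations. A minimal sketch of that pattern in plain Java follows; it has no Spark dependency, and the `where` helper and the use of `List` as a stand-in for an RDD are illustrative assumptions, not the actual Spark API.

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Hypothetical stand-in for the helper class the PR introduces: it wraps an
// underlying collection (here a List, standing in for an RDD) and exposes
// DataFrame-style convenience methods on top of it.
public class DataFrameRDDOperations<T> {
    private final List<T> rdd;  // stand-in for the wrapped RDD

    public DataFrameRDDOperations(List<T> rdd) {
        this.rdd = rdd;
    }

    // Escape hatch to the raw underlying collection, analogous to `.rdd`.
    public List<T> rdd() {
        return rdd;
    }

    // A convenience helper that delegates to the underlying data.
    public List<T> where(Predicate<T> p) {
        return rdd.stream().filter(p).collect(Collectors.toList());
    }
}
```

Usage would look like `new DataFrameRDDOperations<>(data).where(x -> x > 1)`, with `rdd()` available whenever the raw collection is needed.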
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4243#issuecomment-72787085
[Test build #26718 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26718/consoleFull)
for PR 4243 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4243#issuecomment-72792070
[Test build #26718 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26718/consoleFull)
for PR 4243 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4243#issuecomment-72792072
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4243#issuecomment-72614630
Can one of the admins verify this patch?
---
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/4243#issuecomment-72723360
Can you please merge with master? It would be awesome to include this in
1.3! (We just cut a branch for it.)
---
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/4243#issuecomment-72723284
ok to test
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4243#issuecomment-72724110
[Test build #26670 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26670/consoleFull)
for PR 4243 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4243#issuecomment-72742147
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4243#issuecomment-72742131
[Test build #26670 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26670/consoleFull)
for PR 4243 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4243#issuecomment-72603227
Can one of the admins verify this patch?
---
GitHub user kul opened a pull request:
https://github.com/apache/spark/pull/4243
[SPARK-5426][SQL] Add SparkSQL Java API helper methods.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/kul/spark master
Alternatively you can
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4243#issuecomment-71801054
Can one of the admins verify this patch?
---
Github user kul commented on the pull request:
https://github.com/apache/spark/pull/4243#issuecomment-71807950
Looking into it further, it seems that even in Scala one will have to fall
back to `.rdd` for normal Spark operations, as functions like `filter` etc.
are being overridden for the
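The point above can be sketched in plain Java: a DataFrame-style wrapper whose `filter` takes a column expression rather than a predicate, so element-wise functional operations require dropping back to the underlying collection via `.rdd()`. `FakeDataFrame` and its `"value > n"` expression syntax are illustrative assumptions, not the real Spark API.

```java
import java.util.List;
import java.util.stream.Collectors;

// Illustrative DataFrame-like wrapper: its filter method accepts a string
// expression (as DataFrame-style APIs do), shadowing the predicate-based
// filter you would get on the raw collection.
public class FakeDataFrame {
    private final List<Integer> rows;

    public FakeDataFrame(List<Integer> rows) {
        this.rows = rows;
    }

    // DataFrame-style filter: parses a toy "value > n" expression.
    public FakeDataFrame filter(String expr) {
        int n = Integer.parseInt(expr.replace("value > ", "").trim());
        return new FakeDataFrame(
            rows.stream().filter(v -> v > n).collect(Collectors.toList()));
    }

    // Escape hatch back to the raw data for predicate-based operations.
    public List<Integer> rdd() {
        return rows;
    }
}
```

With this shape, `df.filter("value > 1")` works, but an arbitrary lambda like "keep odd values" has to go through `df.rdd().stream().filter(...)`, mirroring the `.rdd` fallback described above.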