[
https://issues.apache.org/jira/browse/PHOENIX-1071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14485434#comment-14485434
]
ASF GitHub Bot commented on PHOENIX-1071:
-----------------------------------------
GitHub user jmahonin opened a pull request:
https://github.com/apache/phoenix/pull/65
PHOENIX-1071 Get the phoenix-spark integration tests running.
Uses the BaseHBaseManagedTimeIT framework now for creating the
test cluster and setup/teardown.
Tested with Java 7u75 i386 on Ubuntu, and 7u40 x64 on OS X.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/FileTrek/phoenix PHOENIX-1071
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/phoenix/pull/65.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #65
----
commit a4f40723048d84cd865d425393de29d00e98919a
Author: Josh Mahonin <[email protected]>
Date: 2015-04-08T02:33:17Z
PHOENIX-1071 Get the phoenix-spark integration tests running.
Uses the BaseHBaseManagedTimeIT framework now for creating the
test cluster and setup/teardown.
Tested with Java 7u75 i386 on Ubuntu, and 7u40 x64 on OS X.
----
> Provide integration for exposing Phoenix tables as Spark RDDs
> -------------------------------------------------------------
>
> Key: PHOENIX-1071
> URL: https://issues.apache.org/jira/browse/PHOENIX-1071
> Project: Phoenix
> Issue Type: New Feature
> Reporter: Andrew Purtell
> Assignee: Josh Mahonin
>
> A core concept of Apache Spark is the resilient distributed dataset (RDD), a
> "fault-tolerant collection of elements that can be operated on in parallel".
> One can create RDDs referencing a dataset in any external storage system
> offering a Hadoop InputFormat, like PhoenixInputFormat and
> PhoenixOutputFormat. There could be opportunities for additional interesting
> and deep integration.
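> As a sketch of the InputFormat-based approach, an RDD over a Phoenix table
> could be built with Spark's {{newAPIHadoopRDD}} and the Phoenix MapReduce
> classes. The class and configuration-utility names below follow the Phoenix
> MapReduce integration, but {{CoffeeWritable}} is a hypothetical
> {{DBWritable}} record class, so treat this as illustrative rather than a
> working recipe:
> {code}
> import org.apache.hadoop.conf.Configuration
> import org.apache.hadoop.io.NullWritable
> import org.apache.phoenix.mapreduce.PhoenixInputFormat
> import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil
> import org.apache.spark.{SparkConf, SparkContext}
>
> // Configure the Phoenix input side: table name and the DBWritable
> // record class used to materialize each row (CoffeeWritable is assumed).
> val conf = new Configuration()
> PhoenixConfigurationUtil.setInputTableName(conf, "COFFEES")
> PhoenixConfigurationUtil.setInputClass(conf, classOf[CoffeeWritable])
>
> // Build the RDD; PhoenixInputFormat keys records by NullWritable.
> val sc = new SparkContext(new SparkConf().setAppName("phoenix-rdd"))
> val rdd = sc.newAPIHadoopRDD(conf,
>   classOf[PhoenixInputFormat[CoffeeWritable]],
>   classOf[NullWritable],
>   classOf[CoffeeWritable])
> {code}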
> Add the ability to save RDDs back to Phoenix with a {{saveAsPhoenixTable}}
> action, implicitly creating necessary schema on demand.
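> The proposed {{saveAsPhoenixTable}} action might be exposed through an
> implicit enrichment of RDDs, along these lines. The signature, class name,
> and parameters here are illustrative only, since the API does not yet exist:
> {code}
> import org.apache.spark.rdd.RDD
>
> // Hypothetical enrichment adding saveAsPhoenixTable to RDDs of row tuples.
> implicit class PhoenixRDDFunctions[A <: Product](rdd: RDD[A]) {
>   def saveAsPhoenixTable(tableName: String,
>                          columns: Seq[String],
>                          zkUrl: String): Unit = {
>     // 1. Issue CREATE TABLE IF NOT EXISTS derived from `columns`
>     //    (the "implicitly creating necessary schema on demand" step).
>     // 2. UPSERT each partition, e.g. via PhoenixOutputFormat.
>   }
> }
>
> // Usage (hypothetical):
> // rdd.saveAsPhoenixTable("COFFEES", Seq("ID", "ORIGIN"), "localhost:2181")
> {code}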
> Add support for {{filter}} transformations that push predicates to the server.
> Add a new {{select}} transformation supporting a LINQ-like DSL, for example:
> {code}
> // Count the number of different coffee varieties offered by each
> // supplier from Guatemala
> phoenixTable("coffees")
>   .select(c =>
>     where(c.origin == "GT"))
>   .countByKey()
>   .foreach(r => println(r._1 + "=" + r._2))
> {code}
> Support conversions between Scala and Java types and Phoenix table data.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)