[ https://issues.apache.org/jira/browse/PHOENIX-1071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Purtell updated PHOENIX-1071:
------------------------------------

    Description: 
A core concept of Apache Spark is the resilient distributed dataset (RDD), a 
"fault-tolerant collection of elements that can be operated on in parallel". 
One can create RDDs referencing a dataset in any external storage system 
offering a Hadoop InputFormat, such as Phoenix's PhoenixInputFormat (with 
PhoenixOutputFormat as its counterpart for writes). There could be 
opportunities for further interesting and deep integration. 
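
For illustration, here is a minimal sketch of the read side through Spark's 
generic {{newAPIHadoopRDD}} hook. The Phoenix class and method names used here 
({{PhoenixInputFormat}}, {{PhoenixConfigurationUtil}}) are assumptions based on 
the MapReduce integration and may differ by release; {{CoffeeWritable}} is a 
hypothetical record type:
{code}
import java.sql.{PreparedStatement, ResultSet}

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.io.NullWritable
import org.apache.hadoop.mapreduce.lib.db.DBWritable
import org.apache.phoenix.mapreduce.PhoenixInputFormat
import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical record type: the InputFormat materializes each row into a
// user-supplied DBWritable.
class CoffeeWritable extends DBWritable {
  var variety: String = _
  var origin: String = _
  override def readFields(rs: ResultSet): Unit = {
    variety = rs.getString("VARIETY")
    origin = rs.getString("ORIGIN")
  }
  override def write(ps: PreparedStatement): Unit = {
    ps.setString(1, variety)
    ps.setString(2, origin)
  }
}

val conf = new Configuration()
PhoenixConfigurationUtil.setInputTableName(conf, "COFFEES")
PhoenixConfigurationUtil.setInputClass(conf, classOf[CoffeeWritable])

val sc = new SparkContext(new SparkConf().setAppName("phoenix-rdd"))
// Each split of the Phoenix table becomes one partition of the RDD.
val rdd = sc.newAPIHadoopRDD(conf,
  classOf[PhoenixInputFormat[CoffeeWritable]],
  classOf[NullWritable],
  classOf[CoffeeWritable])
println("rows: " + rdd.count())
{code}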

Add the ability to save RDDs back to Phoenix with a {{saveAsPhoenixTable}} 
action, implicitly creating necessary schema on demand.
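
No such action exists yet; as a rough sketch of the intended shape, an RDD can 
already be written back with plain JDBC upserts per partition. The helper name, 
JDBC URL, table, and columns below are illustrative only:
{code}
import java.sql.DriverManager
import org.apache.spark.rdd.RDD

// Hypothetical helper, not an existing API: write an RDD of
// (variety, origin) pairs back to Phoenix.
def saveAsPhoenixTable(rdd: RDD[(String, String)], url: String): Unit =
  rdd.foreachPartition { rows =>
    val conn = DriverManager.getConnection(url)
    // "Schema on demand": Phoenix supports CREATE TABLE IF NOT EXISTS.
    conn.createStatement().execute(
      "CREATE TABLE IF NOT EXISTS coffees (variety VARCHAR PRIMARY KEY, origin VARCHAR)")
    val stmt = conn.prepareStatement(
      "UPSERT INTO coffees (variety, origin) VALUES (?, ?)")
    rows.foreach { case (variety, origin) =>
      stmt.setString(1, variety)
      stmt.setString(2, origin)
      stmt.executeUpdate()
    }
    conn.commit() // Phoenix buffers upserts client-side until commit
    conn.close()
  }
{code}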

Add support for {{filter}} transformations that push predicates to the server.
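
The target behavior is that a {{filter}} compiles to a Phoenix WHERE clause 
evaluated server-side rather than a full scan filtered in Spark. Plain JDBC 
against Phoenix shows the equivalent pushed-down query (URL, table, and column 
names are illustrative):
{code}
import java.sql.DriverManager

// The WHERE clause is evaluated by Phoenix on the server side; a pushed-down
// filter() should compile to the same query instead of shipping every row
// to Spark and discarding most of them there.
val conn = DriverManager.getConnection("jdbc:phoenix:localhost")
val rs = conn.createStatement().executeQuery(
  "SELECT variety FROM coffees WHERE origin = 'GT'")
while (rs.next()) println(rs.getString(1))
conn.close()
{code}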

Add a new {{select}} transformation supporting a LINQ-like DSL, for example:
{code}
// Count the number of different coffee varieties offered by each
// supplier from Guatemala
phoenixTable("coffees")
    .select(c =>
        where(c.origin == "GT"))
    .countByKey()
    .foreach(r => println(r._1 + "=" + r._2))
{code} 

Support conversions between Scala and Java types and Phoenix table data.
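
As a sketch of one direction of that mapping, assuming rows surface as JDBC 
values (the function and the exact type choices are illustrative):
{code}
import java.sql.{Date, Timestamp}

// Convert JDBC values produced by the Phoenix driver into idiomatic Scala
// types, representing SQL NULL as None.
def toScala(v: AnyRef): Any = v match {
  case null                    => None
  case i: java.lang.Integer    => i.intValue
  case l: java.lang.Long       => l.longValue
  case d: java.lang.Double     => d.doubleValue
  case b: java.math.BigDecimal => BigDecimal(b)
  case d: Date                 => d.toLocalDate
  case t: Timestamp            => t.toInstant
  case other                   => other
}
{code}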

  was:
A core concept of Apache Spark is the resilient distributed dataset (RDD), a 
"fault-tolerant collection of elements that can be operated on in parallel". 
One can create RDDs referencing a dataset in any external storage system 
offering a Hadoop InputFormat, like HBase's TableInputFormat and 
TableSnapshotInputFormat. Phoenix, as a JDBC driver supporting a SQL dialect, 
can provide interesting and deep integration. 

Add the ability to save RDDs back to Phoenix with a {{saveAsPhoenixTable}} 
action, implicitly creating necessary schema on demand.

Add support for {{filter}} transformations that push predicates to the server.

Add a new {{select}} transformation supporting a LINQ-like DSL, for example:
{code}
// Count the number of different coffee varieties offered by each
// supplier from Guatemala
phoenixTable("coffees")
    .select(c =>
        where(c.origin == "GT"))
    .countByKey()
    .foreach(r => println(r._1 + "=" + r._2))
{code} 

Support conversions between Scala and Java types and Phoenix table data.


> Provide integration for exposing Phoenix tables as Spark RDDs
> -------------------------------------------------------------
>
>                 Key: PHOENIX-1071
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-1071
>             Project: Phoenix
>          Issue Type: New Feature
>            Reporter: Andrew Purtell
>
> A core concept of Apache Spark is the resilient distributed dataset (RDD), a 
> "fault-tolerant collection of elements that can be operated on in parallel". 
> One can create RDDs referencing a dataset in any external storage system 
> offering a Hadoop InputFormat, such as Phoenix's PhoenixInputFormat (with 
> PhoenixOutputFormat as its counterpart for writes). There could be 
> opportunities for further interesting and deep integration. 
> Add the ability to save RDDs back to Phoenix with a {{saveAsPhoenixTable}} 
> action, implicitly creating necessary schema on demand.
> Add support for {{filter}} transformations that push predicates to the server.
> Add a new {{select}} transformation supporting a LINQ-like DSL, for example:
> {code}
> // Count the number of different coffee varieties offered by each
> // supplier from Guatemala
> phoenixTable("coffees")
>     .select(c =>
>         where(c.origin == "GT"))
>     .countByKey()
>     .foreach(r => println(r._1 + "=" + r._2))
> {code} 
> Support conversions between Scala and Java types and Phoenix table data.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
