Github user jmahonin commented on the pull request:

    https://github.com/apache/phoenix/pull/63#issuecomment-92343659
  
    Thanks for the review @mravi 
    
    That HBaseConfiguration.create() step is a great idea; I'll make that change ASAP.
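
    For reference, a minimal sketch of roughly what I have in mind (the ConfigHelper/mergeConfig names are just placeholders, not the actual code in the patch):

        import org.apache.hadoop.conf.Configuration
        import org.apache.hadoop.hbase.HBaseConfiguration

        object ConfigHelper {
          // Start from HBaseConfiguration.create() so that hbase-site.xml on the
          // classpath (ZooKeeper quorum, etc.) is picked up, then layer any
          // caller-supplied settings on top of it.
          def mergeConfig(overrides: Configuration): Configuration = {
            val conf = HBaseConfiguration.create()
            HBaseConfiguration.merge(conf, overrides)
            conf
          }
        }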
    
    Re: the naming scheme, I'd attempted to follow the Cassandra-Spark connector, since there isn't much reference code available yet and the feature sets are relatively closely aligned:
    
https://github.com/datastax/spark-cassandra-connector/tree/master/spark-cassandra-connector/src/main/scala/com/datastax/spark/connector
    
    Although I'm not completely married to the idea, both Datastax (Cassandra) and Databricks (Spark) seem to follow a _Functions.scala scheme, where _ is the class to which the implicit helper methods are being attached. In this case, the new 'ProductRDDFunctions' adds the implicit helper method 'saveToPhoenix' to objects of type RDD[Product], i.e. an RDD of tuples.
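
    To make the pattern concrete, here's a rough sketch of how the implicit conversion works (names and signatures are illustrative, not the exact ones in this PR):

        import org.apache.spark.rdd.RDD
        import scala.language.implicitConversions

        // Wrapper class carrying the extra helper method for any RDD whose
        // element type is a Product (tuples, case classes).
        class ProductRDDFunctions[A <: Product](data: RDD[A]) extends Serializable {
          def saveToPhoenix(tableName: String, columns: Seq[String]): Unit = {
            // ... map each tuple to a Phoenix upsert and write it out ...
          }
        }

        // Importing this brings rdd.saveToPhoenix(...) into scope for RDD[Product].
        object PhoenixSparkImplicits {
          implicit def toProductRDDFunctions[A <: Product](rdd: RDD[A]): ProductRDDFunctions[A] =
            new ProductRDDFunctions(rdd)
        }

    With that import in scope, something like sc.parallelize(Seq((1L, "foo"))).saveToPhoenix("OUTPUT_TABLE", Seq("ID", "COL1")) picks up the helper method via the implicit conversion.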


