[ https://issues.apache.org/jira/browse/SPARK-7591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Cheng Lian updated SPARK-7591:
------------------------------
    Description: 
# Renaming {{FSBasedRelation}} to {{HadoopFsRelation}}

  Since it's all coupled with the Hadoop {{FileSystem}} and job APIs.

# {{HadoopFsRelation}} should have a no-arg constructor

  {{paths}} and {{partitionColumns}} should just be methods to be overridden 
rather than constructor arguments. A no-arg constructor makes data source 
developers' lives easier and keeps relations serialization friendly (see the 
first sketch at the end of this description).

# Renaming {{HadoopFsRelation.prepareForWrite}} to 
{{HadoopFsRelation.prepareJobForWrite}}

  The new name explicitly suggests that developers should only touch the 
{{Job}} instance for preparation work (this is also documented in the 
Scaladoc); the first sketch below illustrates this as well.

# Allowing serialization while creating {{OutputWriter}}s

  To be more precise, {{OutputWriter}}s themselves are never created on the 
driver side and serialized to the executor side. Rather, the factory that 
creates {{OutputWriter}}s should be created on the driver side and serialized 
(see the second sketch below).

  The reason is that passing all the materials an {{OutputWriter}} needs via 
the Hadoop {{Configuration}} is doable, but sometimes neither intuitive nor 
convenient; resorting to serialization makes data source developers' lives 
easier. This actually came up while migrating the Parquet data source: I 
wanted to pass the final output path (instead of the temporary work path) to 
the output writer (see 
[here|https://github.com/liancheng/spark/commit/ec9950c591e5b981ce20fab96562db28488e0035#diff-53521d336f7259e859fea4d3ca4dc888R74]), 
and had to put a property into the {{Configuration}} object.
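
Here is a minimal sketch of items 2 and 3 (not the actual interface; 
{{MyJsonRelation}} and everything other than {{paths}}, {{partitionColumns}}, 
and {{prepareJobForWrite}} is made up for illustration):

{code:scala}
import org.apache.hadoop.mapreduce.Job
import org.apache.spark.sql.types.{StructField, StructType}

// Simplified stand-in for the proposed HadoopFsRelation shape: a no-arg
// constructor, with paths and partitionColumns as overridable methods.
abstract class HadoopFsRelationSketch extends Serializable {
  // Overridden by concrete data sources instead of being passed as
  // constructor arguments, which keeps relations serialization friendly.
  def paths: Array[String]
  def partitionColumns: StructType

  // Renamed from prepareForWrite: implementations should only touch the
  // passed-in Job (e.g. its Configuration) for preparation work.
  def prepareJobForWrite(job: Job): Unit
}

// Hypothetical concrete data source relying on the no-arg constructor.
class MyJsonRelation extends HadoopFsRelationSketch {
  override def paths: Array[String] = Array("hdfs:///data/events")
  override def partitionColumns: StructType = StructType(Seq.empty[StructField])

  override def prepareJobForWrite(job: Job): Unit = {
    // Only the Job instance is touched here, per the new contract.
    job.getConfiguration.set("my.source.some.option", "some-value")
  }
}
{code}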
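
And a sketch of item 4, assuming a hypothetical {{OutputWriterFactory}}-style 
abstraction (names are illustrative, not a final API). The factory is created 
on the driver side, carries whatever state it needs as plain fields, and is 
serialized to executors, where it creates the actual {{OutputWriter}}s:

{code:scala}
import org.apache.hadoop.mapreduce.TaskAttemptContext
import org.apache.spark.sql.Row

// Created on the driver and serialized to executors. State the writers need
// (e.g. the final output path) travels as plain fields instead of being
// stuffed into the Hadoop Configuration.
abstract class OutputWriterFactorySketch extends Serializable {
  def newInstance(path: String, context: TaskAttemptContext): OutputWriterSketch
}

// Executor-side writer: created once per task, never serialized itself.
abstract class OutputWriterSketch {
  def write(row: Row): Unit
  def close(): Unit
}

// Illustrative factory carrying the final output path, the exact piece of
// state that was awkward to pass through the Configuration when migrating
// the Parquet data source.
class ParquetLikeWriterFactory(finalOutputPath: String)
    extends OutputWriterFactorySketch {

  override def newInstance(path: String, context: TaskAttemptContext): OutputWriterSketch =
    new OutputWriterSketch {
      override def write(row: Row): Unit = {
        // Write row to a file under finalOutputPath (elided).
      }
      override def close(): Unit = ()
    }
}
{code}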

> FSBasedRelation interface tweaks
> --------------------------------
>
>                 Key: SPARK-7591
>                 URL: https://issues.apache.org/jira/browse/SPARK-7591
>             Project: Spark
>          Issue Type: Sub-task
>          Components: SQL
>            Reporter: Reynold Xin
>            Assignee: Cheng Lian
>            Priority: Blocker
>



