[ 
https://issues.apache.org/jira/browse/PIG-652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12671224#action_12671224
 ] 

Alan Gates commented on PIG-652:
--------------------------------

In response to Ben's question in comment 
https://issues.apache.org/jira/browse/PIG-652?focusedCommentId=12671009#action_12671009
 about the motivating scenario: the issue is that right now we pass an already 
opened output stream to the store function.  This, together with the fact that 
a fair amount of setup is done in PigRecordWriter, forces all stores to go to 
an HDFS text file.  If a user wants to store to a different type of HDFS file 
(like Table) or to a non-HDFS store (such as a database, HBase, a socket, 
whatever), there's no option for that.  We don't want to push all of that 
setup into the StoreFunc.  The RecordWriter is the right place to do that 
setup.
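To make the contrast concrete, here is a minimal, self-contained Java sketch. The interface and class names (StreamBoundStore, WriterBoundStore, RecordWriterLike, StoreSketch) are hypothetical stand-ins, not the real Pig or Hadoop APIs: in the current model the framework opens the stream and the store function can only serialize into it, while in the proposed direction the store function supplies its own RecordWriter-style object that owns all setup and can target any kind of store.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical sketch of the current contract: Pig opens the stream and
// hands it over, so the store function can only serialize -- it cannot
// choose where or how the output is created.
interface StreamBoundStore {
    void bindTo(OutputStream os) throws IOException; // stream already opened by the framework
    void putNext(String tuple) throws IOException;   // only serialization is left to the user
}

// Hypothetical sketch of the proposed direction: the store function
// produces a writer (playing the role of Hadoop's RecordWriter), which
// owns all setup and may target HDFS, a database, a socket, etc.
interface RecordWriterLike {
    void write(String tuple) throws IOException;
    void close() throws IOException;
}

interface WriterBoundStore {
    RecordWriterLike createWriter() throws IOException; // setup lives here, not in the framework
}

public class StoreSketch {
    public static void main(String[] args) throws IOException {
        // Current model: the caller controls the destination.
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        StreamBoundStore current = new StreamBoundStore() {
            private OutputStream os;
            public void bindTo(OutputStream os) { this.os = os; }
            public void putNext(String t) throws IOException {
                os.write((t + "\n").getBytes("UTF-8"));
            }
        };
        current.bindTo(sink);
        current.putNext("(1,hello)");

        // Proposed model: the store function controls the destination.
        WriterBoundStore proposed = () -> new RecordWriterLike() {
            private final StringBuilder db = new StringBuilder(); // stand-in for any non-HDFS store
            public void write(String t) { db.append(t); }
            public void close() { System.out.println("stored: " + db); }
        };
        RecordWriterLike w = proposed.createWriter();
        w.write("(1,hello)");
        w.close();

        System.out.println(sink.toString("UTF-8").trim());
    }
}
```

In the first model any non-stream destination has to be faked behind an OutputStream; in the second, the writer the store function creates does its own setup, which is the role this issue proposes giving to a user-supplied OutputFormat/RecordWriter.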

> Need to give user control of OutputFormat
> -----------------------------------------
>
>                 Key: PIG-652
>                 URL: https://issues.apache.org/jira/browse/PIG-652
>             Project: Pig
>          Issue Type: New Feature
>          Components: impl
>            Reporter: Alan Gates
>            Assignee: Alan Gates
>
> Pig currently allows users some control over InputFormat via the Slicer and 
> Slice interfaces.  It does not allow any control over OutputFormat and 
> RecordWriter interfaces.  It just allows the user to implement a storage 
> function that controls how the data is serialized.  For Hadoop tables, we 
> will need to allow custom OutputFormats that prepare output information and 
> objects needed by a Table store function.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
