[ https://issues.apache.org/jira/browse/HIVE-1620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12908854#action_12908854 ]

Namit Jain commented on HIVE-1620:
----------------------------------

The approach looks good, but can you move all the checks to compile time 
instead?

I mean, while generating the plan, create an S3FileSinkOperator instead of a 
FileSinkOperator if the destination under consideration is on the S3 
filesystem - there will then be no move task, etc.
The explain output will show the correct plan.
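A rough sketch of what that compile-time branch could look like - purely 
illustrative, not actual Hive code; isS3Destination() and the operator names 
chosen below are assumptions for the example:

import java.net.URI;

class PlanSketch {

  // Hypothetical check: treat s3/s3n URIs as S3 destinations.
  static boolean isS3Destination(URI dest) {
    String scheme = dest.getScheme();
    return "s3".equalsIgnoreCase(scheme) || "s3n".equalsIgnoreCase(scheme);
  }

  // Illustrative plan-generation step: pick the sink operator by destination.
  static String chooseFileSink(URI dest) {
    if (isS3Destination(dest)) {
      // Write directly to S3; no follow-up move task is added to the plan,
      // so EXPLAIN shows the S3 sink rather than a move step.
      return "S3FileSinkOperator";
    }
    // Default path: write to a scratch location, then a move task publishes it.
    return "FileSinkOperator";
  }

  public static void main(String[] args) {
    System.out.println(chooseFileSink(URI.create("s3://bucket/warehouse/t1")));
    System.out.println(chooseFileSink(URI.create("hdfs://nn/warehouse/t1")));
  }
}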


The commit for S3FileSystem would be a no-op. That way, FileSinkOperator does 
not change much.
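To show what a no-op commit means here, a minimal sketch under the same 
assumptions (these are not existing Hive classes, just an illustration of the 
split between the default path and the direct-to-S3 path):

interface SinkCommit {
  void commit();
}

class DefaultSinkCommit implements SinkCommit {
  public void commit() {
    // Default behaviour: the written files are later published by a move task.
    System.out.println("commit: hand off to move task");
  }
}

class S3SinkCommit implements SinkCommit {
  public void commit() {
    // No-op: data was written directly to its final S3 location,
    // so there is nothing left to move or rename.
  }
}

class CommitDemo {
  public static void main(String[] args) {
    SinkCommit s3 = new S3SinkCommit();
    s3.commit(); // nothing to do: files are already at the S3 target
  }
}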

> Patch to write directly to S3 from Hive
> ---------------------------------------
>
>                 Key: HIVE-1620
>                 URL: https://issues.apache.org/jira/browse/HIVE-1620
>             Project: Hadoop Hive
>          Issue Type: New Feature
>            Reporter: Vaibhav Aggarwal
>            Assignee: Vaibhav Aggarwal
>         Attachments: HIVE-1620.patch
>
>
> We want to submit a patch to Hive which allows users to write files directly 
> to S3.
> This patch allows users to specify an S3 location as the table output location 
> and hence eliminates the need to copy data from HDFS to S3.
> Users can run Hive queries directly over the data stored in S3.
> This patch helps integrate Hive with S3 better and faster.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
