[ https://issues.apache.org/jira/browse/HIVE-14373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15435905#comment-15435905 ]

Thomas Poepping commented on HIVE-14373:
----------------------------------------

Abdullah, what is wrong with:
* at the beginning of the test run, do a mkdir in S3 under a unique test run id
* at the end of the test run, do a rmdir on that directory

That would remove all leftover data. Maybe we could add a setting to skip the 
cleanup step, to allow for more targeted debugging; then it would be the user's 
responsibility to delete those files after the fact.
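A minimal sketch of that setup/teardown pattern, for illustration only (the bucket name, the environment-variable switch, and the dry-run wrapper are all hypothetical; a real harness would execute the `hadoop fs` commands instead of printing them):

```python
import os
import subprocess
import uuid

# Hypothetical bucket/prefix; real values would come from the credentials file.
TEST_ROOT = f"s3a://example-bucket/hive-it-{uuid.uuid4().hex}"

def hadoop_fs(*args):
    # Shell out to `hadoop fs`; shown here as a dry run that only prints.
    cmd = ["hadoop", "fs", *args]
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment for a real run

def setup_run():
    # Create the per-run directory before any test writes data.
    hadoop_fs("-mkdir", "-p", TEST_ROOT)

def teardown_run():
    # Hypothetical switch to keep the data around for debugging.
    if os.environ.get("HIVE_S3_KEEP_TEST_DATA") == "1":
        print(f"keeping {TEST_ROOT} for debugging")
        return
    hadoop_fs("-rm", "-r", "-skipTrash", TEST_ROOT)
```

Keying everything under one run id means a single recursive delete cleans up all leftovers, no matter which individual tests failed midway.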

Are you planning on updating this patch again?

> Add integration tests for hive on S3
> ------------------------------------
>
>                 Key: HIVE-14373
>                 URL: https://issues.apache.org/jira/browse/HIVE-14373
>             Project: Hive
>          Issue Type: Sub-task
>            Reporter: Sergio Peña
>            Assignee: Abdullah Yousufi
>         Attachments: HIVE-14373.02.patch, HIVE-14373.03.patch, 
> HIVE-14373.04.patch, HIVE-14373.patch
>
>
> With Hive doing improvements to run on S3, it would be ideal to have better 
> integration testing on S3.
> These S3 tests cannot be executed by HiveQA because they will need Amazon 
> credentials. We need to write a test suite based on ideas from the Hadoop 
> project, where:
> - an xml file is provided with S3 credentials
> - a committer must run these tests manually to verify it works
> - the xml file should not be part of the commit, and hiveqa should not run 
> these tests.
> https://wiki.apache.org/hadoop/HowToContribute#Submitting_patches_against_object_stores_such_as_Amazon_S3.2C_OpenStack_Swift_and_Microsoft_Azure
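Following Hadoop's convention, such a credentials file might look like the sketch below. The property names are the standard Hadoop s3a connector keys; the filename and values are placeholders, and the file itself would be excluded from commits:

```xml
<configuration>
  <property>
    <name>fs.s3a.access.key</name>
    <value>YOUR_ACCESS_KEY</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <value>YOUR_SECRET_KEY</value>
  </property>
</configuration>
```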



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
