[ https://issues.apache.org/jira/browse/HADOOP-5123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12667425#action_12667425 ]

Steve Loughran commented on HADOOP-5123:
----------------------------------------

File operations

* Touch, copy in, copy out. Not using distcp, so this is for small data.
* Rename
* Add a condition for a file existing, maybe with a minimum size.
* DfsMkDir: create a directory (a rough sketch of this task follows below).
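
As a sketch of what one of these tasks could look like (class name, attribute name and error handling here are purely illustrative, not a committed design), a minimal DfsMkDir built on the standard Ant {{Task}} and Hadoop {{FileSystem}} APIs might be:

{code:java}
// Illustrative sketch only: class and attribute names are not final.
package org.apache.hadoop.ant;

import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.tools.ant.BuildException;
import org.apache.tools.ant.Task;

/** Create a directory in a Hadoop filesystem: <dfsmkdir dir="hdfs://namenode/path"/> */
public class DfsMkDir extends Task {
  private String dir;

  public void setDir(String dir) {
    this.dir = dir;
  }

  public void execute() throws BuildException {
    if (dir == null) {
      throw new BuildException("No dir attribute specified");
    }
    try {
      FileSystem fs = FileSystem.get(URI.create(dir), new Configuration());
      Path path = new Path(dir);
      if (!fs.mkdirs(path)) {
        throw new BuildException("Failed to create directory " + path);
      }
    } catch (IOException e) {
      throw new BuildException("Cannot create " + dir + ": " + e, e);
    }
  }
}
{code}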

A first pass would use resources
[[http://ant.apache.org/manual/CoreTypes/resources.html#resource]], which can
be used in existing Ant tasks; they extend the Resource class
[[https://svn.apache.org/viewvc/ant/core/trunk/src/main/org/apache/tools/ant/types/Resource.java?view=markup]]
and so can be used in the existing {{<copy>}} and {{<touch>}} tasks, and the like.
The resource would need to implement the getOutputStream() and getInputStream()
operations and, ideally, also {{Touchable}} for the touch() operation.
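
To make that concrete, here is a rough, non-binding sketch of such a resource, assuming the Ant 1.7 {{Resource}} and {{Touchable}} APIs and Hadoop's {{FileSystem}}; the use of FileSystem.setTimes() for touch() is an assumption about what the target Hadoop version exposes:

{code:java}
// Illustrative sketch only; a real class would also need the Ant-side wiring
// (datatype declaration, filesystem/conf configuration, etc.).
package org.apache.hadoop.ant;

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.tools.ant.BuildException;
import org.apache.tools.ant.types.Resource;
import org.apache.tools.ant.types.resources.Touchable;

/** An Ant Resource backed by a path in a Hadoop filesystem. */
public class HdfsResource extends Resource implements Touchable {
  private final FileSystem fs;
  private final Path path;

  public HdfsResource(FileSystem fs, Path path) {
    this.fs = fs;
    this.path = path;
    setName(path.toString());
  }

  public boolean isExists() {
    try {
      return fs.exists(path);
    } catch (IOException e) {
      throw new BuildException(e);
    }
  }

  public long getSize() {
    try {
      return fs.getFileStatus(path).getLen();
    } catch (IOException e) {
      return UNKNOWN_SIZE;
    }
  }

  /** Lets <copy> and friends read from hdfs:// sources. */
  public InputStream getInputStream() throws IOException {
    return fs.open(path);
  }

  /** Lets <copy> and friends write to hdfs:// destinations. */
  public OutputStream getOutputStream() throws IOException {
    return fs.create(path);
  }

  /** Touchable: create the file if needed, then set its modification time. */
  public void touch(long modTime) {
    try {
      if (!fs.exists(path)) {
        fs.create(path).close();
      }
      fs.setTimes(path, modTime, -1);
    } catch (IOException e) {
      throw new BuildException("Cannot touch " + path, e);
    }
  }
}
{code}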

Tests without a cluster
* Some meaningful failure if the hdfs:// URLs don't work (see the test sketch below)
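
A cluster-free test could just point the hypothetical DfsMkDir task sketched above at a namenode that does not exist and assert that the failure is a BuildException with a useful message rather than an NPE; something along these lines:

{code:java}
// Illustrative sketch; DfsMkDir is the hypothetical task sketched earlier.
import junit.framework.TestCase;

import org.apache.tools.ant.BuildException;
import org.apache.tools.ant.Project;

public class TestNoCluster extends TestCase {

  public void testBadHdfsUrlFailsMeaningfully() {
    DfsMkDir task = new DfsMkDir();
    task.setProject(new Project());
    task.setDir("hdfs://no-such-host:8020/tmp/dir");
    try {
      task.execute();
      fail("Expected a failure against an unreachable namenode");
    } catch (BuildException expected) {
      // the message should say what went wrong, not just wrap an NPE
      assertNotNull(expected.getMessage());
    }
  }
}
{code}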

Tests with a cluster
* Copy in, copy out, copy inside
* Touch
* Delete
* Test for a resource existing
* Some of the resource selection operations (a cluster-backed test sketch follows below)
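
These could be driven by something like MiniDFSCluster from the Hadoop test jar (again only a sketch; it exercises the hypothetical DfsMkDir task from above, and the real tests would presumably run through AntUnit as the issue notes):

{code:java}
// Illustrative sketch of a cluster-backed test using MiniDFSCluster.
import junit.framework.TestCase;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.tools.ant.Project;

public class TestDfsTasks extends TestCase {
  private MiniDFSCluster cluster;
  private FileSystem fs;

  protected void setUp() throws Exception {
    cluster = new MiniDFSCluster(new Configuration(), 1, true, null);
    fs = cluster.getFileSystem();
  }

  protected void tearDown() throws Exception {
    if (cluster != null) {
      cluster.shutdown();
    }
  }

  public void testMkDir() throws Exception {
    Path dir = new Path("/test/dir");
    DfsMkDir task = new DfsMkDir();
    task.setProject(new Project());
    task.setDir(fs.getUri() + "/test/dir");
    task.execute();
    assertTrue("directory was not created", fs.exists(dir));
  }
}
{code}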

Tests against other file systems
* s3:// URLs? Test that they work, but then assume that they stay working.
* Test that s3:// URLs fail gracefully if the URL is missing/forbidden.

> Ant tasks for job submission
> ----------------------------
>
>                 Key: HADOOP-5123
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5123
>             Project: Hadoop Core
>          Issue Type: New Feature
>    Affects Versions: 0.21.0
>         Environment: Both platforms, Linux and Windows
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Minor
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Ant tasks to make it easy to work with hadoop filesystem and submit jobs. 
> <submit>: uploads JAR, submits job as user, with various settings
> filesystem operations: mkdir, copyin, copyout, delete
> We could maybe use Ant 1.7 "resources" here, and so use hdfs as a source or
> dest in Ant's own tasks
> # security. Need to specify user; pick up user.name from JVM as default?
> # cluster binding: namenode/job tracker (hostname,port) or url are all that 
> is needed?
> # job conf: how to configure the job that is submitted? Support a list of
> <property name="name" value="something"> children
> # testing. AntUnit to generate <junitreport> compatible XML files
> # Documentation. With an example using Ivy to fetch the JARs for the tasks 
> and hadoop client.
> # Polling: ant task to block for a job finished? 
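
For the <submit> side of the issue, here is a rough sketch of how the pieces above (user picked up from user.name, namenode/jobtracker binding, nested <property> children, optional blocking on completion) could hang together, assuming the org.apache.hadoop.mapred JobClient/JobConf API; attribute names and defaults are purely illustrative:

{code:java}
// Illustrative sketch only; names, attributes and defaults are not final.
package org.apache.hadoop.ant;

import java.io.IOException;

import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RunningJob;
import org.apache.tools.ant.BuildException;
import org.apache.tools.ant.Task;

/** Submit a job: <submit jar="job.jar" jobtracker="host:port" waitforcompletion="true">... */
public class SubmitTask extends Task {
  private String jar;
  private String jobTracker;
  private String fileSystem;
  private String user = System.getProperty("user.name");   // default user
  private boolean waitForCompletion = false;
  private final JobConf conf = new JobConf();

  public void setJar(String jar) { this.jar = jar; }
  public void setJobtracker(String jobTracker) { this.jobTracker = jobTracker; }
  public void setFilesystem(String fileSystem) { this.fileSystem = fileSystem; }
  public void setUser(String user) { this.user = user; }
  public void setWaitforcompletion(boolean wait) { this.waitForCompletion = wait; }

  /** Nested <property name="..." value="..."/> children feed the job configuration. */
  public static class Property {
    String name, value;
    public void setName(String name) { this.name = name; }
    public void setValue(String value) { this.value = value; }
  }

  public void addConfiguredProperty(Property p) {
    conf.set(p.name, p.value);
  }

  public void execute() throws BuildException {
    try {
      if (jar != null) {
        conf.setJar(jar);
      }
      conf.setUser(user);
      if (jobTracker != null) {
        conf.set("mapred.job.tracker", jobTracker);
      }
      if (fileSystem != null) {
        conf.set("fs.default.name", fileSystem);
      }
      RunningJob job = new JobClient(conf).submitJob(conf);
      log("Submitted job " + job.getID());
      if (waitForCompletion) {
        job.waitForCompletion();
        if (!job.isSuccessful()) {
          throw new BuildException("Job " + job.getID() + " failed");
        }
      }
    } catch (IOException e) {
      throw new BuildException(e);
    }
  }
}
{code}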
