[ https://issues.apache.org/jira/browse/HDFS-2178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13132691#comment-13132691 ]

Alejandro Abdelnur commented on HDFS-2178:
------------------------------------------

@Sanjay,

* *100 Continue issue*: the problem I see with Arpit's solution is that if 
somebody creates/appends an empty stream, it will go into an infinite loop: 
the server never sees any content, so it responds with a 307 & Location over 
and over. IMO we need getCreateHandle & getAppendHandle calls.

* *Proxy and webhdfs API the same or almost the same*: for operations that do 
not make sense in one of them, an 'unsupported operation' error should be 
returned; for operations supported in both, the call should be identical. This 
would allow applications built against the proxy to work with webhdfs and vice 
versa (assuming they account for the unsupported operations in each case). Note 
that redirection in the proxy makes sense to ensure authentication: you don't 
want to PUT/POST (create/append) data only to find the request rejected because 
of missing authentication credentials.

* *API*: see the comment to @Nicholas in HDFS-2316.

* *Pure proxy vs hdfs proxy*: I don't understand the question 'Would it make 
sense ...'
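To make the empty-stream concern in the first point concrete, here is a minimal, purely illustrative Python sketch (this is not Hoop or webhdfs code; all names and status handling are made up) of why a create that redirects with 307 until it sees content can loop forever on an empty stream, while an explicit get-handle step terminates:

```python
def redirect_until_content_server(has_body):
    # Hypothetical server behavior: keep answering 307 + Location until
    # the request carries some content, then accept with 201.
    return 201 if has_body else 307

def create_single_call(stream, max_rounds=5):
    # Client that resends the create on every 307. For an empty stream the
    # server never sees content, so the redirect loop never terminates;
    # max_rounds only exists here so the demo itself halts.
    rounds = 0
    while rounds < max_rounds:
        if redirect_until_content_server(has_body=bool(stream)) == 201:
            return ("created", rounds)
        rounds += 1
    return ("gave-up", rounds)

def create_two_step(stream):
    # Sketch of a getCreateHandle-style flow: step 1 asks for the write
    # location without needing a body; step 2 sends the (possibly empty)
    # body to that location exactly once. No loop is possible.
    location = "http://datanode:50075/file?op=CREATE"  # hypothetical URL
    return ("created", location)

print(create_single_call(b"data"))  # terminates immediately
print(create_single_call(b""))      # would loop forever without the cap
print(create_two_step(b""))         # empty stream is fine
```

The two-step flow also matches the authentication point above: the client learns where (and whether) it may write before it ships any data.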

                
> Contributing Hoop to HDFS, replacement for HDFS proxy with read/write 
> capabilities
> ----------------------------------------------------------------------------------
>
>                 Key: HDFS-2178
>                 URL: https://issues.apache.org/jira/browse/HDFS-2178
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 0.23.0
>            Reporter: Alejandro Abdelnur
>            Assignee: Alejandro Abdelnur
>             Fix For: 0.23.0
>
>         Attachments: HDFS-2178.patch, HDFSoverHTTP-API.html, HdfsHttpAPI.pdf
>
>
> We'd like to contribute Hoop to Hadoop HDFS as a replacement (an improvement) 
> for HDFS Proxy.
> Hoop provides access to all Hadoop Distributed File System (HDFS) operations 
> (read and write) over HTTP/S.
> The Hoop server component is a REST HTTP gateway to HDFS supporting all file 
> system operations. It can be accessed using standard HTTP tools (e.g. curl 
> and wget), HTTP libraries from different programming languages (e.g. Perl, 
> JavaScript), as well as using the Hoop client. The Hoop server component is a 
> standard Java web application implemented using Jersey (JAX-RS).
> The Hoop client component is an implementation of Hadoop FileSystem client 
> that allows using the familiar Hadoop filesystem API to access HDFS data 
> through a Hoop server.
>   Repo: https://github.com/cloudera/hoop
>   Docs: http://cloudera.github.com/hoop
>   Blog: http://www.cloudera.com/blog/2011/07/hoop-hadoop-hdfs-over-http/
> Hoop is a Maven-based project that depends on Hadoop HDFS and Alfredo (for 
> Kerberos HTTP SPNEGO authentication). 
> To make the integration easy, HDFS Mavenization (HDFS-2096) would have to be 
> done first, as well as the Alfredo contribution (HADOOP-7119).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
