[ https://issues.apache.org/jira/browse/HADOOP-496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12515742 ]
Enis Soztutar edited comment on HADOOP-496 at 7/26/07 8:35 AM:
---------------------------------------------------------------

This patch is an improvement on the previous one. It adds file creation and content-length support. Litmus test results are as follows:

{code}
-> running `basic':
 0. init..................  pass
 1. begin.................  pass
 2. options...............  WARNING: server does not claim Class 2 compliance
    ......................  pass (with 1 warning)
 3. put_get...............  WARNING: length mismatch: 0 vs 41
    ......................  pass (with 1 warning)
 4. put_get_utf8_segment..  WARNING: length mismatch: 0 vs 41
    ......................  pass (with 1 warning)
 5. mkcol_over_plain......  pass
 6. delete................  pass
 7. delete_null...........  pass
 8. delete_fragment.......  pass
 9. mkcol.................  pass
10. mkcol_again...........  pass
11. delete_coll...........  pass
12. mkcol_no_parent.......  pass
13. mkcol_with_body.......  FAIL (MKCOL with weird body must fail (RFC2518:8.3.1))
14. finish................  pass
{code}

The content-length warnings are due to avoiding a costly du operation on folders. The patch depends on HADOOP-1654.
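The content-length behaviour can be pictured as a GET handler over the FileSystem API that reports an exact Content-Length for plain files but declines to compute one for directories, since that would mean a recursive du over the subtree on every request. The sketch below is illustrative only: the servlet class and its wiring are assumptions, not the patch's actual code, and FileSystem method names vary somewhat across Hadoop versions.

{code}
// Illustrative sketch, not the patch: serve HDFS content over HTTP,
// setting Content-Length from namespace metadata for plain files only.
import java.io.IOException;
import java.io.OutputStream;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsGetServlet extends HttpServlet {
  @Override
  protected void doGet(HttpServletRequest req, HttpServletResponse resp)
      throws IOException {
    FileSystem fs = FileSystem.get(new Configuration());
    Path path = new Path(req.getPathInfo() == null ? "/" : req.getPathInfo());
    FileStatus status = fs.getFileStatus(path);

    if (status.isDirectory()) {
      // An exact size for a directory would require a recursive du over its
      // subtree, so no Content-Length is advertised for collections.
      resp.setContentType("httpd/unix-directory");
      resp.setStatus(HttpServletResponse.SC_OK);
      return;
    }

    // For a plain file the length is already in the file status metadata.
    resp.setContentLength((int) status.getLen());
    FSDataInputStream in = fs.open(path);
    try {
      OutputStream out = resp.getOutputStream();
      byte[] buf = new byte[4096];
      int n;
      while ((n = in.read(buf)) > 0) {
        out.write(buf, 0, n);
      }
    } finally {
      in.close();
    }
  }
}
{code}

Trading an exact directory size for a cheap response is the design choice alluded to above; litmus flags it only as a warning, not a failure.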
> Expose HDFS as a WebDAV store
> -----------------------------
>
>                 Key: HADOOP-496
>                 URL: https://issues.apache.org/jira/browse/HADOOP-496
>             Project: Hadoop
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: Michel Tourn
>            Assignee: Enis Soztutar
>         Attachments: hadoop-webdav.zip, jetty-slide.xml, lib.webdav.tar.gz, slideusers.properties, webdav_wip1.patch, webdav_wip2.patch
>
>
> WebDAV stands for Distributed Authoring and Versioning. It is a set of extensions to the HTTP protocol that lets users collaboratively edit and manage files on a remote web server. It is often considered a replacement for NFS or Samba.
> HDFS (Hadoop Distributed File System) needs a friendly file system interface. DFSShell commands are unfamiliar; it is more convenient for Hadoop users to use a mountable network drive. A friendly interface to HDFS would be used both for casual browsing of data and for bulk import/export.
> The FUSE provider for HDFS is already available (http://issues.apache.org/jira/browse/HADOOP-17) but it had scalability problems. WebDAV is a popular alternative.
> The typical licensing terms for WebDAV tools are also attractive: GPL for Linux client tools that Hadoop would not redistribute anyway, and, more importantly, the Apache license for Java tools and for server components. This allows for a tighter integration with the HDFS code base.
> There are some interesting Apache projects that support WebDAV, but these are probably too heavyweight for the needs of Hadoop:
> Tomcat servlet: http://tomcat.apache.org/tomcat-4.1-doc/catalina/docs/api/org/apache/catalina/servlets/WebdavServlet.html
> Slide: http://jakarta.apache.org/slide/
> Being HTTP-based and "backwards-compatible" with web browser clients, the WebDAV server protocol could even be piggy-backed on the existing web UI ports of the Hadoop name node / data nodes. WebDAV can be hosted as (Jetty) servlets (see the embedding sketch after this description). This minimizes server code bloat and avoids additional network traffic between HDFS and the WebDAV server.
> General clients (read-only):
> Any web browser
> Linux clients:
> Mountable GPL davfs2 http://dav.sourceforge.net/
> FTP-like GPL Cadaver http://www.webdav.org/cadaver/
> Server protocol compliance tests:
> http://www.webdav.org/neon/litmus/
> A goal is for Hadoop HDFS to pass this test (minus support for Properties).
> Pure Java clients:
> DAV Explorer Apache lic. http://www.ics.uci.edu/~webdav/
> WebDAV also makes it convenient to add advanced features in an incremental fashion: file locking, access control lists, hard links, symbolic links.
> New WebDAV standards continue to be accepted, and more or less featured WebDAV clients exist:
> core http://www.webdav.org/specs/rfc2518.html
> ACLs http://www.webdav.org/specs/rfc3744.html
> redirects "soft links" http://greenbytes.de/tech/webdav/rfc4437.html
> BIND "hard links" http://www.webdav.org/bind/
> quota http://tools.ietf.org/html/rfc4331
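As a rough illustration of hosting the WebDAV endpoint as a (Jetty) servlet, the standalone embedding below registers the hypothetical HdfsGetServlet from the earlier sketch. This is not the patch's actual wiring (the attached jetty-slide.xml points at a Jetty + Slide configuration); the class names, port, and path mapping are assumptions, written against the Jetty 6-style org.mortbay embedding API. In a truly piggy-backed deployment the servlet would instead be added to the name node's or data node's existing web UI server.

{code}
// Illustrative sketch, not the patch: embed a WebDAV-ish servlet in Jetty.
import org.mortbay.jetty.Server;
import org.mortbay.jetty.servlet.Context;
import org.mortbay.jetty.servlet.ServletHolder;

public class WebdavServerSketch {
  public static void main(String[] args) throws Exception {
    // Hypothetical dedicated port; piggy-backing would reuse the existing
    // web UI server (the name node's default web UI port is 50070) instead.
    Server server = new Server(8085);
    Context root = new Context(server, "/", Context.SESSIONS);
    // A full WebDAV servlet would also handle OPTIONS, PROPFIND, MKCOL,
    // PUT, DELETE, COPY and MOVE; HdfsGetServlet only sketches GET.
    root.addServlet(new ServletHolder(new HdfsGetServlet()), "/*");
    server.start();
    server.join();
  }
}
{code}

With something like this running, a web browser or a client such as Cadaver could browse the tree read-only, matching the "General clients (read-only): any web browser" point in the description.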