Dear Huy Phan,
Thanks a lot!
It seems the diff in the patch you sent me should be the other way
around, like the following:
diff --git b/src/webdav.c a/src/webdav.c
index 8ec7a2d..4bdaece 100644
--- b/src/webdav.c
+++ a/src/webdav.c
@@ -472,7 +472,7 @@ dav_init_connection(const char *path)
     if (!ret) {
         initialized = 1;
-        if (!caps.dav_class1 && !ignore_dav_header) {
+        if (!caps.dav_class1 && !caps.dav_class2 && !ignore_dav_header) {
             if (have_terminal) {
                 error(EXIT_FAILURE, 0,
                       _("mounting failed; the server does not support WebDAV"));
After applying this patch, the error "the server does not support WebDAV" is
gone. However, after a simple test of the WebDAV + davfs2 mix, I also
experienced very poor performance. I think I have to go back to fuse-dfs,
performance-wise.
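One more note, in case others hit the same error: judging from the
ignore_dav_header variable in the check above, davfs2 should also have an
ignore_dav_header option in davfs2.conf that skips this capability test
without patching (please verify the option name and syntax against the
davfs2.conf man page of your version). Something along these lines:

    # /etc/davfs2/davfs2.conf -- assumed option, named after the variable in webdav.c
    ignore_dav_header 1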
Thanks a lot for your quick help!
Best regards,
Zhang Bingjun (Eddy)
E-mail: [email protected], [email protected], [email protected]
Tel No: +65-96188110 (M)
On Wed, Oct 28, 2009 at 9:22 AM, Huy Phan <[email protected]> wrote:
> Hi Zhang,
> I applied my patch to davfs2-1.4.0 and it's working fine with Hadoop 0.20.1.
> If you didn't define any access restriction in the account.properties file, you
> can simply ignore the authentication prompts when mounting with davfs2.
>
>
> Best,
> Huy Phan
>
>
> Zhang Bingjun (Eddy) wrote:
>
>> Dear Huy Phan,
>>
>> I downloaded davfs2-1.4.3, and in this version the patch you sent me seems
>> to be applied already. I compiled and installed this version. However, the
>> error message still shows up, as below...
>>
>> had...@hdfs2:/mnt$ sudo mount.davfs http://192.168.0.131:9800 hdfs-webdav/
>> Please enter the username to authenticate with server
>> http://192.168.0.131:9800 or hit enter for none.
>> Username: hadoop
>> Please enter the password to authenticate user hadoop with server
>> http://192.168.0.131:9800 or hit enter for none.
>> Password:
>> mount.davfs: mounting failed; the server does not support WebDAV
>>
>> Which username and password should I enter? Any user from the
>> account.properties file, or the OS user on the WebDAV server?
>>
>> Regarding the memory leak in fuse-dfs and libhdfs, I posted one patch in the
>> Apache JIRA. However, when used in a production environment, the memory leak
>> still exists and renders the mount point unusable after a number of
>> write/read operations. The memory leak there is really annoying...
>>
>> I hope I can set up the davfs2 + WebDAV mix to try out its
>> performance. Any ideas on how to get around the error "mounting failed;
>> the server does not support WebDAV"?
>>
>> Thank you so much for your help!
>>
>> Best regards,
>> Zhang Bingjun (Eddy)
>>
>> E-mail: [email protected], [email protected], [email protected]
>> Tel No: +65-96188110 (M)
>>
>>
>> On Tue, Oct 27, 2009 at 7:19 PM, Huy Phan <[email protected]> wrote:
>>
>> Hi Zhang,
>> I didn't play much with fuse-dfs. In my opinion, the memory leak is
>> something solvable, and I can see Apache has made some fixes for
>> this issue in libhdfs.
>> If you encounter these problems with an older version of Hadoop, I
>> think you should give the latest stable version a try.
>> Since I haven't had much fun so far with fuse-dfs, I cannot say
>> whether it's the best or not, but it's definitely better than mixing
>> davfs2 and WebDAV together.
>>
>>
>> Best,
>> Huy Phan
>>
>> Zhang Bingjun (Eddy) wrote:
>>
>> Dear Huy Phan,
>>
>>
>> Thanks for your quick reply.
>> I was using fuse-dfs before. But I found a serious memory leak
>> with fuse-dfs: about 10MB of leakage per 10k file read/write
>> operations. When the occupied memory size reached about 150MB,
>> the read/write performance dropped dramatically. Did you encounter
>> these problems?
>>
>> What I am trying to do is to mount HDFS as a local directory
>> in Ubuntu. Do you think fuse-dfs is the best option so far?
>>
>> Thank you so much for your input!
>>
>> Best regards,
>> Zhang Bingjun (Eddy)
>>
>> E-mail: [email protected], [email protected], [email protected]
>> Tel No: +65-96188110 (M)
>>
>>
>> On Tue, Oct 27, 2009 at 6:55 PM, Huy Phan <[email protected]> wrote:
>>
>> Hi Zhang,
>>
>> Here is the patch for davfs2 to solve the "server does not support
>> WebDAV" issue:
>>
>> diff --git a/src/webdav.c b/src/webdav.c
>> index 8ec7a2d..4bdaece 100644
>> --- a/src/webdav.c
>> +++ b/src/webdav.c
>> @@ -472,7 +472,7 @@ dav_init_connection(const char *path)
>>      if (!ret) {
>>          initialized = 1;
>> -        if (!caps.dav_class1 && !caps.dav_class2 && !ignore_dav_header) {
>> +        if (!caps.dav_class1 && !ignore_dav_header) {
>>              if (have_terminal) {
>>                  error(EXIT_FAILURE, 0,
>>                        _("mounting failed; the server does not support WebDAV"));
>>
>>
>> davfs2 and WebDAV are not a good mix, actually. I had tried to mix
>> them together and the performance was really bad. With a load
>> test of 10 requests/s, the load average on my namenode was always
>> above 15, and it took me about 5 minutes to `ls` the root directory
>> of HDFS during the test.
>>
>> Since you're using Hadoop 0.20.1, it's better to use the fusedfs
>> library provided in the Hadoop package. You have to do some tricks
>> to compile fusedfs with Hadoop, otherwise it would take you a lot
>> of time compiling redundant things.
>>
>> Best,
>> Huy Phan
>>
>> Zhang Bingjun (Eddy) wrote:
>>
>> Dear Huy Phan and others,
>>
>> Thanks a lot for your efforts in customizing the WebDAV server
>> <http://github.com/huyphan/HDFS-over-Webdav> and making it work
>> with Hadoop-0.20.1.
>> After setting up the WebDAV server, I could access it using the
>> Cadaver client in Ubuntu without any username or password.
>> Operations like deleting files, etc., were working. The command
>> is: cadaver http://server:9800
>>
>> However, when I was trying to mount the WebDAV server using
>> davfs2 in Ubuntu, I always got the following error:
>> "mount.davfs: mounting failed; the server does not support
>> WebDAV".
>>
>> I was prompted to input a username and password, as below:
>> had...@hdfs2:/mnt$ sudo mount.davfs
>> http://192.168.0.131:9800/test hdfs-webdav/
>> Please enter the username to authenticate with server
>> http://192.168.0.131:9800/test or hit enter for none.
>> Username: hadoop
>> Please enter the password to authenticate user hadoop
>> with server http://192.168.0.131:9800/test or hit enter for none.
>> Password:
>> mount.davfs: mounting failed; the server does not support WebDAV
>>
>> Even though I have tried all possible usernames and passwords,
>> either from the WebDAV accounts.properties file or from the
>> Ubuntu system of the WebDAV server, I still got this error
>> message.
>> Could you or anyone else give me some hints on this problem? How
>> could I solve it? I would very much appreciate your help!
>>
>> Best regards,
>> Zhang Bingjun (Eddy)
>>
>> E-mail: [email protected], [email protected], [email protected]
>> Tel No: +65-96188110 (M)
>>