[ https://issues.apache.org/jira/browse/HDFS-420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13005982#comment-13005982 ]
Eli Collins commented on HDFS-420:
----------------------------------

Hey Brian,

Good catch. The patch does two things which are probably better implemented as separate patches; let me know what you think.

# Removes the ability to compile out perms with the libhdfs.noperms option. This should be orthogonal to #2; I think we should do it, but in a separate patch. IIUC the code is currently optional (though compiled in by default) so people can disable it for performance. I think we can always have perms enabled in fuse-dfs, but a better way to do this is to use the reentrant versions of the relevant functions (and beware of HADOOP-7156) rather than introduce new locks (see the sketch below). Btw, if we're removing PERM we also need to remove the code that sets it in src/contrib/fuse-dfs/build.xml.
# Makes each fuse-dfs operation get (a new) and release FS handles. While the change introduces some additional overhead (each operation now needs a new client), we need to do it for correctness (we can make FS handle caching work in another jira later); a sketch of the pattern also follows below.

How about this jira just covers the patch for #2?

Some additional comments from looking at the code:
* In fuse_impls_getattr.c line 37 it looks like this change is still in progress, e.g. why does it connect unconditionally? You can remove the commented-out code on the following line.
* In fuse_impls_chown.c line 60 we should not reconnect here, right?
* Nit: on line 76 of the same file, you don't need to assign null here.
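For #1, here's a rough sketch of what I mean by using the reentrant versions, e.g. replacing a getpwuid() call with getpwuid_r() so we don't need a lock around the lookup. The helper name and error handling below are just illustrative, not the actual fuse-dfs code:

{code}
#include <pwd.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

/* Illustrative uid -> username lookup using the reentrant getpwuid_r()
 * instead of getpwuid() plus a new lock. Caller must free() the result;
 * returns NULL on error or if the uid is unknown. */
static char *lookupUsername(uid_t uid)
{
  long bufsize = sysconf(_SC_GETPW_R_SIZE_MAX);
  if (bufsize < 0)
    bufsize = 16384; /* no limit reported; fall back to a generous default */

  char *buf = malloc(bufsize);
  if (buf == NULL)
    return NULL;

  struct passwd pwd;
  struct passwd *result = NULL;
  int rc = getpwuid_r(uid, &pwd, buf, (size_t)bufsize, &result);
  if (rc != 0 || result == NULL) { /* error, or no entry for this uid */
    free(buf);
    return NULL;
  }

  char *username = strdup(result->pw_name);
  free(buf);
  return username;
}
{code}

(Per HADOOP-7156, getpwuid_r itself misbehaves on some platforms, so whatever we do here needs to keep that in mind.)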
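And for #2, the shape I have in mind is each operation doing its own connect/disconnect. The choice of op (unlink) and the errno mapping below are just for the sketch; hdfsConnect("default", 0) picks up the default FS from the config:

{code}
#include <errno.h>
#include <hdfs.h>

/* Illustrative connect-per-operation pattern: grab a fresh FS handle on
 * entry and release it on every exit path, rather than reusing a cached
 * handle across operations. */
static int dfs_unlink(const char *path)
{
  hdfsFS fs = hdfsConnect("default", 0); /* fresh client for this call */
  if (fs == NULL)
    return -EIO; /* could not connect to dfs */

  int ret = 0;
  if (hdfsDelete(fs, path) != 0) /* 0.20-era two-arg signature */
    ret = -EIO;

  /* Always release the handle, even when the operation itself failed. */
  if (hdfsDisconnect(fs) != 0 && ret == 0)
    ret = -EIO;

  return ret;
}
{code}

The per-call client overhead is real, but correctness first; handle caching can come back in the follow-up jira.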
Thanks,
Eli

> fuse_dfs is unable to connect to the dfs after copying a large number of files into the dfs over fuse
> -----------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-420
>                 URL: https://issues.apache.org/jira/browse/HDFS-420
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: contrib/fuse-dfs
>    Affects Versions: 0.20.2
>         Environment: Fedora core 10, x86_64, 2.6.27.7-134.fc10.x86_64 #1 SMP (AMD 64), gcc 4.3.2, java 1.6.0 (IcedTea6 1.4 (fedora-7.b12.fc10-x86_64) Runtime Environment (build 1.6.0_0-b12) OpenJDK 64-Bit Server VM (build 10.0-b19, mixed mode)
>            Reporter: Dima Brodsky
>            Assignee: Brian Bockelman
>             Fix For: 0.20.3
>
>         Attachments: fuse_dfs_020_memleaks.patch
>
>
> I run the following test:
> 1. Run hadoop DFS in single node mode
> 2. start up fuse_dfs
> 3. copy my source tree, about 250 megs, into the DFS
>      cp -av * /mnt/hdfs/
>
> in /var/log/messages I keep seeing:
> Dec 22 09:02:08 bodum fuse_dfs: ERROR: hdfs trying to utime /bar/backend-trunk2/src/machinery/hadoop/output/2008/11/19 to 1229385138/1229963739
> and then eventually
> Dec 22 09:03:49 bodum fuse_dfs: ERROR: could not connect to dfs fuse_dfs.c:1333
> Dec 22 09:03:49 bodum fuse_dfs: ERROR: could not connect to dfs fuse_dfs.c:1333
> Dec 22 09:03:49 bodum fuse_dfs: ERROR: could not connect to dfs fuse_dfs.c:1037
> Dec 22 09:03:49 bodum fuse_dfs: ERROR: could not connect to dfs fuse_dfs.c:1333
> Dec 22 09:03:49 bodum fuse_dfs: ERROR: could not connect to dfs fuse_dfs.c:1037
> Dec 22 09:03:49 bodum fuse_dfs: ERROR: could not connect to dfs fuse_dfs.c:1333
> Dec 22 09:03:49 bodum fuse_dfs: ERROR: could not connect to dfs fuse_dfs.c:1209
> Dec 22 09:03:49 bodum fuse_dfs: ERROR: could not connect to dfs fuse_dfs.c:1037
> Dec 22 09:03:49 bodum fuse_dfs: ERROR: could not connect to dfs fuse_dfs.c:1037
> Dec 22 09:03:49 bodum fuse_dfs: ERROR: could not connect to dfs fuse_dfs.c:1037
> Dec 22 09:03:49 bodum fuse_dfs: ERROR: could not connect to dfs fuse_dfs.c:1037
> Dec 22 09:03:49 bodum fuse_dfs: ERROR: could not connect to dfs fuse_dfs.c:1037
> Dec 22 09:03:49 bodum fuse_dfs: ERROR: could not connect to dfs fuse_dfs.c:1037
> Dec 22 09:03:49 bodum fuse_dfs: ERROR: could not connect to dfs fuse_dfs.c:1037
> Dec 22 09:03:49 bodum fuse_dfs: ERROR: could not connect to dfs fuse_dfs.c:1037
> Dec 22 09:03:49 bodum fuse_dfs: ERROR: could not connect to dfs fuse_dfs.c:1037
> Dec 22 09:03:49 bodum fuse_dfs: ERROR: could not connect to dfs fuse_dfs.c:1209
> Dec 22 09:03:49 bodum fuse_dfs: ERROR: could not connect to dfs fuse_dfs.c:1037
> Dec 22 09:03:49 bodum fuse_dfs: ERROR: could not connect to dfs fuse_dfs.c:1037
> Dec 22 09:03:49 bodum fuse_dfs: ERROR: could not connect to dfs fuse_dfs.c:1037
> Dec 22 09:03:49 bodum fuse_dfs: ERROR: could not connect to dfs fuse_dfs.c:1333
> Dec 22 09:03:49 bodum fuse_dfs: ERROR: could not connect to dfs fuse_dfs.c:1209
> Dec 22 09:03:49 bodum fuse_dfs: ERROR: could not connect to dfs fuse_dfs.c:1037
> and the file system hangs. hadoop is still running and I don't see any errors in its logs. I have to unmount the dfs and restart fuse_dfs and then everything is fine again. At some point I see the following messages in /var/log/messages:
> ERROR: dfs problem - could not close file_handle(139677114350528) for /bar/backend-trunk2/src/machinery/hadoop/input/2008/12/14/actionrecordlog-8339-93825052368848-1229278807.log fuse_dfs.c:1464
> Dec 22 09:04:49 bodum fuse_dfs: ERROR: dfs problem - could not close file_handle(139676770220176) for /bar/backend-trunk2/src/machinery/hadoop/input/2008/12/14/actionrecordlog-8140-93825025883216-1229278759.log fuse_dfs.c:1464
> Dec 22 09:05:13 bodum fuse_dfs: ERROR: dfs problem - could not close file_handle(139677114812832) for /bar/backend-trunk2/src/machinery/hadoop/input/2008/12/14/actionrecordlog-8138-93825070138960-1229251587.log fuse_dfs.c:1464
>
> Is this a known issue? Am I just flooding the system too much? All of this is being performed on a single, dual-core machine.
> Thanks!
> ttyl
> Dima

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira