[ https://issues.apache.org/jira/browse/HADOOP-4952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12754728#action_12754728 ]
Sanjay Radia commented on HADOOP-4952:
--------------------------------------

Propose that we change the create and mkdirs APIs as follows:
* create - will not create missing parent directories
* createRecursive - will create missing parent directories (i.e. like FileSystem#create)
* mkdir - will not create missing parent directories
* mkdirRecursive (or mkdirs) - will create missing parent directories (i.e. like FileSystem#mkdirs)

create and mkdir are atomic; the recursive versions are not guaranteed to be atomic. (Separate discussion on whether or not HDFS will continue to support the recursive versions atomically.)

There is a jira (my search attempts failed to find it) that argues that a non-recursive create is important for MR. IMHO the recursive create and recursive mkdirs of FileSystem were a big mistake; we should have stuck to the Unix spec; every divergence from the Unix spec should be a very conscious decision.

> Improved file system interface for the application writer.
> -----------------------------------------------------------
>
>                 Key: HADOOP-4952
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4952
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 0.21.0
>            Reporter: Sanjay Radia
>            Assignee: Sanjay Radia
>         Attachments: FileContext-common10.patch, FileContext-common11.patch,
> FileContext-common12.patch, FileContext-common13.patch,
> FileContext-hdfs10.patch, FileContext-hdfs11.patch, FileContext3.patch,
> FileContext5.patch, FileContext6.patch, FileContext7.patch,
> FileContext9.patch, Files.java, Files.java, FilesContext1.patch,
> FilesContext2.patch
>
>
> Currently the FileSystem interface serves two purposes:
> - an application writer's interface for using the Hadoop file system
> - a file system implementer's interface (e.g. hdfs, local file system, kfs, etc.)
> This Jira proposes that we provide a simpler interface for the application
> writer and leave the FileSystem interface for the implementer of a
> filesystem.
> - The FileSystem interface has a confusing set of methods for the application
> writer.
> - We could make it easier to take advantage of URI file naming.
> ** The current approach is to get a FileSystem instance by supplying the URI and
> then access that namespace. It is consistent for the FileSystem instance to
> not accept URIs for other schemes, but we can do better.
> ** The special copyFromLocalFile can be generalized as a copyFile where the
> src or target can be any URI, including the local one.
> ** The proposed scheme (below) simplifies this.
> - The client-side config can be simplified.
> ** The new config() by default uses the default config. Since this is the common
> usage pattern, one should not need to always pass the config as a parameter
> when accessing the file system.
> ** It does not handle multiple file systems too well. Today a site.xml is
> derived from a single Hadoop cluster. This does not make sense for multiple
> Hadoop clusters which may have different defaults.
> ** Further, one should need very little to configure the client side:
> *** Default file system
> *** Block size
> *** Replication factor
> *** Scheme-to-class mapping
> ** It should be possible to take block size and replication factor defaults
> from the target file system, rather than the client-side config. I am not
> suggesting we don't allow setting client-side defaults, but most clients do
> not care and would find it simpler to take the defaults for their systems
> from the target file system.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
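The non-recursive vs. recursive distinction the comment advocates is the same one java.nio.file makes, following the Unix model: createDirectory fails if the parent is missing, while createDirectories creates all missing ancestors. A minimal sketch of that contrast (the class name MkdirSemantics and the temp-directory layout are illustrative, not from the proposed Hadoop API):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;

public class MkdirSemantics {
    public static void main(String[] args) throws IOException {
        Path base = Files.createTempDirectory("hadoop4952-demo");
        Path child = base.resolve("a").resolve("b").resolve("c");

        // Non-recursive mkdir (analogous to the proposed mkdir, and to
        // POSIX mkdir(2)): fails because parents a/ and a/b/ do not exist.
        boolean nonRecursiveFailed = false;
        try {
            Files.createDirectory(child);
        } catch (NoSuchFileException e) {
            nonRecursiveFailed = true;
        }
        System.out.println("non-recursive failed: " + nonRecursiveFailed);

        // Recursive mkdir (analogous to the proposed mkdirRecursive /
        // the old FileSystem#mkdirs): creates every missing parent.
        // Creating multiple directories is not a single atomic operation.
        Files.createDirectories(child);
        System.out.println("exists after recursive: " + Files.isDirectory(child));
    }
}
```

The single-directory variant can reasonably be atomic, which is why the comment proposes keeping it as the primitive and making the multi-level form an explicitly named, non-atomic convenience.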