[
https://issues.apache.org/jira/browse/HADOOP-6537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12831044#action_12831044
]
Sanjay Radia commented on HADOOP-6537:
--------------------------------------
* HadoopIllegalArgumentException - parameters should be final.
* InvalidPathException - the package should be fs, not io (same as Path).
Make InvalidPathException a subclass of HadoopIllegalArgumentException, i.e. an
unchecked exception.
** create and mkdir should document this exception in their javadoc
** Q. do the other methods throw InvalidPathException or FileNotFoundException
in the case of an illegal path?
* checkPath - add @throws HadoopIllegalArgumentException to the javadoc
* Spelling typos: "acess"
* FileContext - the internal private methods should also throw more specific
exceptions
* FileContext#create:
FileNotFoundException - javadoc typo: change "parent of dir does not exist"
and .. to "parent of f does not exist" and ...
Missing exception: ParentNotDirectoryException.
Remove InvalidPathException, as it is an unchecked exception - see my comment
above.
* FileContext#mkdir: Missing exception: ParentNotDirectoryException
* FileContext#rename
Javadoc: @throws FileAlreadyExistsException if dst already exists ... ADD
"and OVERWRITE is false".
* FileContext#setVerifyChecksum
Shouldn't it throw NotSupportedException? This is a subtle issue: for the
convenience of testing one does not want the exception.
* While reading the patch I came up with some further cleanup of the
FileContext APIs - some of it is related to exceptions, some is not. We should
file a separate Jira on this if we decide to change the spec.
** isFile, isDir - should they throw FileNotFoundException or return false if
the path does not exist?
** util#getFileStatus - we currently return a partial list if some of the paths
are invalid or not accessible. Should we instead throw an exception if any of
the paths are invalid or not accessible?
** deleteOnExit should return void instead of boolean.
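The unchecked-exception suggestion above could be sketched as follows. This is a minimal illustration, not the actual Hadoop classes; the constructors and message text are assumptions.

```java
// Sketch of the suggested arrangement: HadoopIllegalArgumentException is
// unchecked because it extends IllegalArgumentException (a RuntimeException).
class HadoopIllegalArgumentException extends IllegalArgumentException {
  public HadoopIllegalArgumentException(String message) {
    super(message);
  }
}

// Making InvalidPathException a subclass (and moving it to the fs package,
// alongside Path) treats an invalid path as a programming error that callers
// of create()/mkdir() are not forced to catch or declare.
class InvalidPathException extends HadoopIllegalArgumentException {
  public InvalidPathException(String path) {
    super("Invalid path name " + path);
  }
}
```

Because the exception is unchecked, create and mkdir need only mention it in their javadoc, not in their throws clauses.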
> Proposal for exceptions thrown by FileContext and Abstract File System
> ----------------------------------------------------------------------
>
> Key: HADOOP-6537
> URL: https://issues.apache.org/jira/browse/HADOOP-6537
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Jitendra Nath Pandey
> Assignee: Suresh Srinivas
> Fix For: 0.22.0
>
> Attachments: hdfs-717.1.patch, hdfs-717.patch, hdfs-717.patch
>
>
> Currently the APIs in FileContext throw only IOException. Going forward these
> APIs will throw more specific exceptions.
> This jira proposes following hierarchy of exceptions to be thrown by
> FileContext and AFS (Abstract File System) classes.
> InterruptedException (java.lang.InterruptedException)
> IOException
>     /* The following exceptions extend IOException */
>     FileNotFoundException
>     FileAlreadyExistsException
>     DirectoryNotEmptyException
>     NotDirectoryException
>     AccessDeniedException
>     IsDirectoryException
>     InvalidPathNameException
>     FileSystemException
>         /* The following exceptions extend FileSystemException */
>         FileSystemNotReadyException
>         ReadOnlyFileSystemException
>         QuotaExceededException
>         OutOfSpaceException
>     RemoteException (java.rmi.RemoteException)
> Most of the IOExceptions above are caused by invalid user input, while
> FileSystemException is thrown when FS is in such a state that the requested
> operation cannot proceed.
> Please note that the proposed RemoteException is from standard java rmi
> package, which also extends IOException.
>
> HDFS throws many exceptions which are not in the above list. The DFSClient
> will unwrap the exceptions thrown by HDFS, and any exception not in the above
> list will be thrown as IOException or FileSystemException.
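The hierarchy proposed in the quoted description could be declared roughly as below. This is a sketch: only the parent/child relationships come from the text, while the empty class bodies and the choice to extend java.io.IOException directly are assumptions. FileNotFoundException already exists in java.io, so it is not redeclared.

```java
import java.io.IOException;

// Caused by invalid user input:
class FileAlreadyExistsException extends IOException {}
class DirectoryNotEmptyException extends IOException {}
class NotDirectoryException extends IOException {}
class AccessDeniedException extends IOException {}
class IsDirectoryException extends IOException {}
class InvalidPathNameException extends IOException {}

// Thrown when the file system is in a state where the requested
// operation cannot proceed:
class FileSystemException extends IOException {}
class FileSystemNotReadyException extends FileSystemException {}
class ReadOnlyFileSystemException extends FileSystemException {}
class QuotaExceededException extends FileSystemException {}
class OutOfSpaceException extends FileSystemException {}
```

Since every class ultimately extends IOException, client code that catches IOException today keeps working, while callers that care can catch the specific subclasses.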