[
https://issues.apache.org/jira/browse/HDDS-4838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17758989#comment-17758989
]
Sadanand Shenoy commented on HDDS-4838:
---------------------------------------
The problem here is specific to the HDFS client. To use Ozone Trash, one
must set fs.trash.classname to *org.apache.hadoop.ozone.om.TrashPolicyOzone* on
the client.
If this is not set on the client, the default TrashPolicy, i.e.
TrashPolicyDefault, is used. This works in some cases but is not recommended,
because the renamed trash paths differ (see HDDS-5866, where we changed the
renamed paths for OFS; those changes are missing in TrashPolicyDefault and
will lead to wrong paths).
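For reference, the client-side setting described above would look like this. The property name and class come from this comment; putting it in the client's core-site.xml is the usual Hadoop convention, not something mandated here:

{noformat}
<!-- core-site.xml on the HDFS client: route trash moves through Ozone's policy -->
<property>
  <name>fs.trash.classname</name>
  <value>org.apache.hadoop.ozone.om.TrashPolicyOzone</value>
</property>
{noformat}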
The particular issue in this jira is due to a difference in behaviour between
the getTrashRoot method in DistributedFileSystem and in OzoneFileSystem. In
DistributedFileSystem, getTrashRoot always returns a fully qualified path, but
that doesn't happen in OFS. If we change OFS so that getTrashRoot returns a
qualified path, the NPE here would be resolved. We can make this change if it
doesn't break many tests and doesn't get complicated; otherwise, leave it as
is.
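To illustrate what "fully qualified" means here: a qualified path carries the filesystem's scheme and authority, while an unqualified one is just the bare path. A minimal stdlib-only sketch of the qualification step, using java.net.URI resolution (QualifyDemo and makeQualified are hypothetical names for illustration, not Ozone code):

```java
import java.net.URI;

public class QualifyDemo {
    // Hypothetical helper: fill in the scheme and authority from the
    // filesystem URI when the raw path lacks them, similar in spirit to
    // what a qualified getTrashRoot would return.
    static URI makeQualified(URI fsUri, String rawPath) {
        // Resolving an absolute path against the fs URI keeps the fs
        // scheme/authority and substitutes the given path component.
        return fsUri.resolve(rawPath);
    }

    public static void main(String[] args) {
        URI fsUri = URI.create("o3fs://tpcds100gb.sparksqldata.ozone1/");
        // Unqualified trash root -> qualified trash root
        System.out.println(makeQualified(fsUri, "/user/hdfs/.Trash"));
        // prints o3fs://tpcds100gb.sparksqldata.ozone1/user/hdfs/.Trash
    }
}
```

With a qualified trash root, TrashPolicyDefault's path arithmetic operates on a complete URI instead of a scheme-less one, which is why the NPE turns into an ordinary error.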
Either way, using TrashPolicyDefault for Ozone Trash is not recommended, due
to the difference quoted above.
In summary,
# Setting the trash policy to TrashPolicyOzone on the client avoids this
problem entirely.
# Changing getTrashRoot to return a qualified path will fix this particular
NPE even when TrashPolicyDefault is used. The impact is only cosmetic, since
the root cannot be deleted anyway; instead of an NPE we would get a clean
error message.
> Deleting at bucket root throws NPE
> ----------------------------------
>
> Key: HDDS-4838
> URL: https://issues.apache.org/jira/browse/HDDS-4838
> Project: Apache Ozone
> Issue Type: Bug
> Components: Ozone Client
> Affects Versions: 1.0.0
> Reporter: Wei-Chiu Chuang
> Assignee: Himanshi Darvekar
> Priority: Major
>
> {noformat}
> $ sudo -u hdfs hdfs dfs -rm -r o3fs://tpcds100gb.sparksqldata.ozone1/
> 21/02/18 10:29:53 INFO Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
> -rm: Fatal internal error
> java.lang.NullPointerException
>     at org.apache.hadoop.fs.Path.mergePaths(Path.java:273)
>     at org.apache.hadoop.fs.TrashPolicyDefault.makeTrashRelativePath(TrashPolicyDefault.java:113)
>     at org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:146)
>     at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:110)
>     at org.apache.hadoop.fs.Trash.moveToAppropriateTrash(Trash.java:96)
>     at org.apache.hadoop.fs.shell.Delete$Rm.moveToTrash(Delete.java:153)
>     at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:118)
>     at org.apache.hadoop.fs.shell.Command.processPathInternal(Command.java:367)
>     at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
>     at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:304)
>     at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:286)
>     at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:270)
>     at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:120)
>     at org.apache.hadoop.fs.shell.Command.run(Command.java:177)
>     at org.apache.hadoop.fs.FsShell.run(FsShell.java:328)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>     at org.apache.hadoop.fs.FsShell.main(FsShell.java:391)
> {noformat}
> Expected: it should throw a more graceful error. If the same command is run
> with the -skipTrash option, it errors out with this:
> {noformat}
> $ sudo -u hdfs hdfs dfs -rm -r -skipTrash o3fs://tpcds100gb.sparksqldata.ozone1/
> 21/02/18 10:31:24 WARN ozone.BasicOzoneFileSystem: Cannot delete root directory.
> rm: `o3fs://tpcds100gb.sparksqldata.ozone1/': Input/output error
> {noformat}
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]