[ https://issues.apache.org/jira/browse/HADOOP-14418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HADOOP-14418:
-------------------------------------
    Labels: hdfs-ec-3.0-nice-to-have  (was: )

> Confusing failure stack trace when codec fallback happens
> ---------------------------------------------------------
>
>                 Key: HADOOP-14418
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14418
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Kai Sasaki
>            Assignee: Kai Sasaki
>            Priority: Minor
>              Labels: hdfs-ec-3.0-nice-to-have
>         Attachments: HADOOP-14418.01.patch, HADOOP-14418.02.patch
>
>
> When the erasure codec falls back, the full stack trace is shown to the client.
> {code}
> root@990705591ccc:/usr/local/hadoop# bin/hadoop fs -put README.txt /ec
> 17/05/13 08:23:46 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 17/05/13 08:23:47 WARN erasurecode.CodecUtil: Failed to create raw erasure encoder rs_native, fallback to next codec if possible
> java.lang.ExceptionInInitializerError
>       at org.apache.hadoop.io.erasurecode.rawcoder.NativeRSRawErasureCoderFactory.createEncoder(NativeRSRawErasureCoderFactory.java:35)
>       at org.apache.hadoop.io.erasurecode.CodecUtil.createRawEncoderWithFallback(CodecUtil.java:173)
>       at org.apache.hadoop.io.erasurecode.CodecUtil.createRawEncoder(CodecUtil.java:129)
>       at org.apache.hadoop.hdfs.DFSStripedOutputStream.<init>(DFSStripedOutputStream.java:302)
>       at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:309)
>       at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1214)
>       at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1193)
>       at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1131)
>       at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:449)
>       at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:446)
>       at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>       at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:460)
>       at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:387)
>       at org.apache.hadoop.fs.FilterFileSystem.create(FilterFileSystem.java:181)
>       at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1074)
>       at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1054)
>       at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:943)
>       at org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.create(CommandWithDestination.java:509)
>       at org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:484)
>       at org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:407)
>       at org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:342)
>       at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:277)
>       at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:262)
>       at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
>       at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:303)
>       at org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:257)
>       at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:285)
>       at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:269)
>       at org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:228)
>       at org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:286)
>       at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119)
>       at org.apache.hadoop.fs.shell.Command.run(Command.java:176)
>       at org.apache.hadoop.fs.FsShell.run(FsShell.java:326)
>       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>       at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
> Caused by: java.lang.RuntimeException: hadoop native library cannot be loaded.
>       at org.apache.hadoop.io.erasurecode.ErasureCodeNative.checkNativeCodeLoaded(ErasureCodeNative.java:69)
>       at org.apache.hadoop.io.erasurecode.rawcoder.NativeRSRawEncoder.<clinit>(NativeRSRawEncoder.java:33)
>       ... 36 more
> root@990705591ccc:/usr/local/hadoop#
> {code}
> This message is confusing to users because it looks as if writing the EC file has failed. It would be better to print only the error message. The warning is currently logged with the exception as the last argument, which emits the full stack trace:
> {code}
>         LOG.warn("Failed to create raw erasure encoder " + rawCoderName +
>             ", fallback to next codec if possible", e);
> {code}
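> One possible approach (a rough sketch only, not necessarily what the attached patches do) is to log a one-line warning for the fallback and defer the full stack trace to DEBUG level:
> {code}
> // Sketch: keep the WARN message short; the detailed trace is still
> // available when DEBUG logging is enabled for CodecUtil.
> LOG.warn("Failed to create raw erasure encoder " + rawCoderName +
>     ", fallback to next codec if possible: " + e);
> if (LOG.isDebugEnabled()) {
>   LOG.debug("Exception while creating raw erasure encoder " + rawCoderName, e);
> }
> {code}
> This way the fallback is still visible to the client, while the detailed trace stays out of normal shell output.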


