[ https://issues.apache.org/jira/browse/HDFS-10806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Brahma Reddy Battula resolved HDFS-10806.
-----------------------------------------
    Resolution: Duplicate

> Mapreduce jobs fail when StoragePolicy is set
> ---------------------------------------------
>
>                 Key: HDFS-10806
>                 URL: https://issues.apache.org/jira/browse/HDFS-10806
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs
>    Affects Versions: 2.7.1
>            Reporter: Dennis Lattka
>            Priority: Critical
>
> Before any StoragePolicy is applied, MapReduce jobs complete as expected. As 
> soon as a StoragePolicy is set (tested with HOT and ONE_SSD), any MapReduce 
> job fails.
> NOTE: I also tested this with Hadoop Streaming using two Python scripts, one 
> for the mapper and one for the reducer, and the error is identical.
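> For completeness, the policy was applied at the root directory before the run. An 
> illustrative Java equivalent of that step (the policy name and path match the 
> output below; this is a sketch, not necessarily how the policy was actually set):
> // Illustrative sketch only: applying the ONE_SSD policy to / through the
> // DistributedFileSystem API, roughly equivalent to
> // `hdfs storagepolicies -setStoragePolicy -path / -policy ONE_SSD`.
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.hdfs.DistributedFileSystem;
>
> public class ApplyOneSsdPolicy {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     Path root = new Path("/");
>     DistributedFileSystem dfs = (DistributedFileSystem) root.getFileSystem(conf);
>     // After this call, any MapReduce job submission fails as shown below.
>     dfs.setStoragePolicy(root, "ONE_SSD");
>   }
> }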
> STORAGE POLICY:
> [hdfs@hadoop-vm-client 12:58:41] hdfs storagepolicies -getStoragePolicy -path /
> The storage policy of /:
> BlockStoragePolicy{ONE_SSD:10, storageTypes=[SSD, DISK], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}
> VERSION:
> [hdfs@hadoop-vm-client 13:02:59] hdfs version
> Hadoop 2.7.1.2.4.0.0-169
> Subversion g...@github.com:hortonworks/hadoop.git -r 26104d8ac833884c8776473823007f176854f2eb
> Compiled by jenkins on 2016-02-10T06:18Z
> Compiled with protoc 2.5.0
> From source with checksum cf48a4c63aaec76a714c1897e2ba8be6
> This command was run using /usr/hdp/2.4.0.0-169/hadoop/hadoop-common-2.7.1.2.4.0.0-169.jar
> ERROR:
> [hdfs@hadoop-vm-client 16:25:53] /usr/hdp/current/hadoop-client/bin/yarn --config /usr/hdp/current/hadoop-client/conf jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar randomtextwriter -D mapreduce.randomtextwriter.totalbytes=2147483648000 /benchmarks/Wordcount/Input.11358
> 16/08/26 12:58:38 INFO impl.TimelineClientImpl: Timeline service address: http://hadoop-vm-rm.aae.lcl:8188/ws/v1/timeline/
> 16/08/26 12:58:38 INFO client.RMProxy: Connecting to ResourceManager at hadoop-vm-rm.aae.lcl/172.16.4.12:8050
> Running 2000 maps.
> Job started: Fri Aug 26 12:58:39 CDT 2016
> 16/08/26 12:58:39 INFO impl.TimelineClientImpl: Timeline service address: http://hadoop-vm-rm.aae.lcl:8188/ws/v1/timeline/
> 16/08/26 12:58:39 INFO client.RMProxy: Connecting to ResourceManager at hadoop-vm-rm.aae.lcl/172.16.4.12:8050
> 16/08/26 12:58:40 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/hdfs/.staging/job_1472151637713_0002
> org.apache.hadoop.ipc.RemoteException(java.lang.IllegalArgumentException): java.lang.IllegalArgumentException
>       at com.google.common.base.Preconditions.checkArgument(Preconditions.java:72)
>       at org.apache.hadoop.hdfs.server.namenode.FSDirectory.getStorageTypeDeltas(FSDirectory.java:789)
>       at org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:711)
>       at org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.unprotectedSetReplication(FSDirAttrOp.java:397)
>       at org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setReplication(FSDirAttrOp.java:151)
>       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setReplication(FSNamesystem.java:1968)
>       at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setReplication(NameNodeRpcServer.java:740)
>       at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setReplication(ClientNamenodeProtocolServerSideTranslatorPB.java:440)
>       at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>       at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:422)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)
>       at org.apache.hadoop.ipc.Client.call(Client.java:1427)
>       at org.apache.hadoop.ipc.Client.call(Client.java:1358)
>       at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>       at com.sun.proxy.$Proxy22.setReplication(Unknown Source)
>       at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setReplication(ClientNamenodeProtocolTranslatorPB.java:349)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:497)
>       at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:252)
>       at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
>       at com.sun.proxy.$Proxy23.setReplication(Unknown Source)
>       at org.apache.hadoop.hdfs.DFSClient.setReplication(DFSClient.java:1902)
>       at org.apache.hadoop.hdfs.DistributedFileSystem$9.doCall(DistributedFileSystem.java:517)
>       at org.apache.hadoop.hdfs.DistributedFileSystem$9.doCall(DistributedFileSystem.java:513)
>       at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>       at org.apache.hadoop.hdfs.DistributedFileSystem.setReplication(DistributedFileSystem.java:513)
>       at org.apache.hadoop.mapreduce.split.JobSplitWriter.createFile(JobSplitWriter.java:104)
>       at org.apache.hadoop.mapreduce.split.JobSplitWriter.createSplitFiles(JobSplitWriter.java:77)
>       at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:307)
>       at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318)
>       at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
>       at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
>       at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:422)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>       at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
>       at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
>       at org.apache.hadoop.examples.RandomTextWriter.run(RandomTextWriter.java:237)
>       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>       at org.apache.hadoop.examples.RandomTextWriter.main(RandomTextWriter.java:248)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:497)
>       at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
>       at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
>       at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:497)
>       at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>       at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> 16/08/26 12:58:40 ERROR hdfs.DFSClient: Failed to close inode 278124
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /user/hdfs/.staging/job_1472151637713_0002/job.split (inode 278124): File does not exist. Holder DFSClient_NONMAPREDUCE_2142218507_1 does not have any open files.
>       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3439)
>       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:3529)
>       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:3496)
>       at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:851)
>       at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:536)
>       at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>       at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:422)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145)
>       at org.apache.hadoop.ipc.Client.call(Client.java:1427)
>       at org.apache.hadoop.ipc.Client.call(Client.java:1358)
>       at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>       at com.sun.proxy.$Proxy22.complete(Unknown Source)
>       at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.complete(ClientNamenodeProtocolTranslatorPB.java:462)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:497)
>       at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:252)
>       at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
>       at com.sun.proxy.$Proxy23.complete(Unknown Source)
>       at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2358)
>       at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2340)
>       at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2304)
>       at org.apache.hadoop.hdfs.DFSClient.closeAllFilesBeingWritten(DFSClient.java:951)
>       at org.apache.hadoop.hdfs.DFSClient.closeOutputStreams(DFSClient.java:983)
>       at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1086)
>       at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2744)
>       at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2761)
>       at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
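> The first trace shows the failure happens during job submission itself: JobSplitWriter 
> creates job.split in the staging directory and then raises its replication to the 
> configured submit replication (mapreduce.client.submit.file.replication, default 10), 
> and it is that setReplication RPC that trips the Preconditions.checkArgument in 
> FSDirectory.getStorageTypeDeltas once a block storage policy is in effect. The second 
> trace appears to be only a follow-on error: the shutdown hook tries to close the same 
> job.split after the staging area has already been cleaned up. Below is a minimal sketch 
> of the equivalent client-side sequence (illustrative only; the test path is hypothetical, 
> and replication 10 matches the submit default from the trace):
> // Minimal sketch, not the actual MapReduce client code: mirrors the
> // create + setReplication sequence seen in JobSplitWriter.createFile.
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FSDataOutputStream;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class SetReplicationUnderPolicy {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     FileSystem fs = FileSystem.get(conf);
>     Path f = new Path("/tmp/setrepl-test");  // hypothetical path under the ONE_SSD policy
>     FSDataOutputStream out = fs.create(f);
>     // With a storage policy set on an ancestor directory, this RPC reaches
>     // FSDirectory.getStorageTypeDeltas on the NameNode and fails as above.
>     fs.setReplication(f, (short) 10);
>     out.close();
>   }
> }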


