Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-06-02 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/333/

[Jun 1, 2017 2:29:29 PM] (brahma) HDFS-11893. Fix 
TestDFSShell.testMoveWithTargetPortEmpty failure.
[Jun 1, 2017 4:28:33 PM] (brahma) HDFS-11905. Fix license header inconsistency 
in hdfs. Contributed by
[Jun 1, 2017 6:52:11 PM] (liuml07) HADOOP-14460. Azure: update doc for live and 
contract tests. Contributed
[Jun 1, 2017 9:05:37 PM] (xiao) HDFS-11741. Long running balancer may fail due 
to expired
[Jun 1, 2017 9:13:57 PM] (xiao) HDFS-11904. Reuse iip in 
unprotectedRemoveXAttrs calls.
[Jun 1, 2017 10:20:18 PM] (wang) HDFS-11383. Intern strings in BlockLocation 
and ExtendedBlock.
[Jun 2, 2017 1:30:23 AM] (vrushali) YARN-6316 Provide help information and 
documentation for
[Jun 2, 2017 4:48:30 AM] (yqlin) HDFS-11359. DFSAdmin report command supports 
displaying maintenance




-1 overall


The following subsystems voted -1:
compile mvninstall unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.fs.sftp.TestSFTPFileSystem 
   hadoop.hdfs.TestBlockStoragePolicy 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy 
   hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.hdfs.server.diskbalancer.TestDiskBalancer 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.datanode.TestDataNodeUUID 
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService 
   hadoop.yarn.server.nodemanager.TestNodeManagerShutdown 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   
hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.resourcemanager.ahs.TestRMApplicationHistoryWriter 
   hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 
   hadoop.mapreduce.TestMRJobClient 

Timed out junit tests :

   org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean 
   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
   org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands 
   
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA 
   
org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA 
   
org.apache.hadoop.yarn.client.api.impl.TestOpportunisticContainerAllocation 
  

   mvninstall:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/333/artifact/out/patch-mvninstall-root.txt
  [496K]

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/333/artifact/out/patch-compile-root.txt
  [20K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/333/artifact/out/patch-compile-root.txt
  [20K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/333/artifact/out/patch-compile-root.txt
  [20K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/333/artifact/out/patch-unit-hadoop-assemblies.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/333/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [144K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/333/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [400K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/333/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [56K]
   

[jira] [Created] (HDFS-11923) Stress test of DFSNetworkTopology

2017-06-02 Thread Chen Liang (JIRA)
Chen Liang created HDFS-11923:
-

 Summary: Stress test of DFSNetworkTopology
 Key: HDFS-11923
 URL: https://issues.apache.org/jira/browse/HDFS-11923
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Chen Liang
Assignee: Chen Liang


I wrote a stress test with {{DFSNetworkTopology}} to verify its correctness under a huge number of datanode changes, e.g. datanode insert/delete, storage addition/removal, etc. The goal is to show that the topology maintains the correct counters at all times. The test is written so that, unless manually terminated, it keeps randomly performing these operations nonstop (because of this, the test is ignored in the patch).

My local run lasted 40 minutes before I stopped it; it involved more than one million datanode changes and no errors occurred. We believe this should be sufficient to show the correctness of {{DFSNetworkTopology}}.
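
To give a feel for what such a test looks like, here is a minimal sketch of the randomized loop under the assumption of a hypothetical Topology interface; the real test exercises DFSNetworkTopology, whose API is not reproduced here.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

/**
 * Minimal sketch of the randomized stress loop described above. The
 * Topology interface is a hypothetical stand-in, not the real
 * DFSNetworkTopology API.
 */
public class TopologyStressSketch {

  /** Hypothetical stand-in for the topology under test. */
  interface Topology {
    void addNode(String node);
    void removeNode(String node);
    int nodeCount();               // counter maintained by the topology itself
  }

  static void run(Topology topo, long iterations, long seed) {
    Random rnd = new Random(seed);
    List<String> live = new ArrayList<>();
    int expected = 0;              // independently tracked reference counter

    for (long i = 0; i < iterations; i++) {
      if (live.isEmpty() || rnd.nextBoolean()) {
        String node = "dn-" + i;   // insert a new datanode
        topo.addNode(node);
        live.add(node);
        expected++;
      } else {                     // remove a random live datanode
        String victim = live.remove(rnd.nextInt(live.size()));
        topo.removeNode(victim);
        expected--;
      }
      // Invariant: the topology's counter must match the reference count.
      if (topo.nodeCount() != expected) {
        throw new AssertionError("counter mismatch at iteration " + i);
      }
    }
  }
}
{code}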






[jira] [Created] (HDFS-11922) Ozone: KSM: Garbage collect deleted blocks

2017-06-02 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-11922:
---

 Summary: Ozone: KSM: Garbage collect deleted blocks
 Key: HDFS-11922
 URL: https://issues.apache.org/jira/browse/HDFS-11922
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Anu Engineer
Priority: Critical


We need to garbage collect deleted blocks from the datanodes. There are two cases where we will have orphaned blocks. One is like classical HDFS, where someone deletes a key and we need to delete the corresponding blocks.

The other case is when someone overwrites a key: an overwrite can be treated as a delete plus a new put, which means the older blocks need to be GC-ed at some point in time (a rough sketch of this deferred-GC idea appears below).

A couple of JIRAs have discussed this in one form or another, so this JIRA consolidates those discussions.

HDFS-11796 -- needs this issue fixed for some tests to pass.
HDFS-11780 -- changed the old overwrite behavior to not support this feature for the time being.
HDFS-11920 -- runs into this issue again when a user tries to put an existing key.
HDFS-11781 -- the delete key API in KSM only deletes the metadata and relies on GC for the datanodes.

When we solve this issue, we should also consider two more aspects.

One, we support versioning in buckets, and tracking which blocks are really orphaned is something KSM will do. So delete and overwrite at some point need to decide how to handle versioning of buckets.

Two, if a key exists in a closed container, then it is immutable; hence the strategy for removing the key might be more complex than just talking to an open container.
cc : [~xyao], [~cheersyang], [~vagarychen], [~msingh], [~yuanbo], [~szetszwo], 
[~nandakumar131]
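
The deferred-GC idea mentioned above could look roughly like the sketch below. The types here are hypothetical and this is not the KSM code; it only illustrates that a delete or an overwrite queues the old blocks, and a background pass later hands them to the datanode delete path.

{code:java}
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.function.Consumer;

/** Rough sketch of deferred block GC; hypothetical types, not KSM code. */
class DeferredBlockGcSketch {
  private final Queue<Long> gcQueue = new ConcurrentLinkedQueue<>();

  /** Called when a key is deleted or overwritten: remember its old blocks. */
  void markForGc(List<Long> oldBlockIds) {
    gcQueue.addAll(oldBlockIds);
  }

  /** Background pass: drain the queue into whatever deletes blocks on datanodes. */
  void gcPass(Consumer<Long> deleteBlock) {
    Long id;
    while ((id = gcQueue.poll()) != null) {
      deleteBlock.accept(id);
    }
  }
}
{code}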

 






[jira] [Created] (HDFS-11921) Ozone: KSM: Unable to put keys with zero length

2017-06-02 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-11921:
---

 Summary: Ozone: KSM: Unable to put keys with zero length
 Key: HDFS-11921
 URL: https://issues.apache.org/jira/browse/HDFS-11921
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Anu Engineer
Priority: Minor


As part of working on HDFS-11909, I was trying to put zero-length keys and found that put key refuses to do that. Here is the call trace:

bq. at ScmBlockLocationProtocolClientSideTranslatorPB.allocateBlock

Here we check whether the block size is greater than 0, which makes sense since we should not call into SCM to allocate a block of zero size.

However, the following two calls are invoked to create the key, so that the metadata for the key can be created; we should probably take care of this behavior there.
bq. ksm.KeyManagerImpl.allocateKey
bq. ksm.KeySpaceManager.allocateKey(KeySpaceManager.java:428)

Another way to fix this might be to always allocate a block of at least 1 byte, which might be easier than special-casing the code.

[~vagarychen] Would you like to fix this in the next patch you are working on?
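
A minimal sketch of the special-casing option, assuming a hypothetical BlockAllocator interface rather than the actual KeyManagerImpl/KeySpaceManager code:

{code:java}
import java.util.Collections;
import java.util.List;

/** Sketch of special-casing zero-length keys; hypothetical helper, not KSM code. */
class ZeroLengthKeySketch {

  /** Hypothetical stand-in for the SCM block allocation client. */
  interface BlockAllocator {
    long allocateBlock(long size);   // assumed to reject size <= 0
  }

  static List<Long> allocateBlocksForKey(BlockAllocator scm, long keySize) {
    if (keySize <= 0) {
      // Zero-length key: create metadata only, never call into SCM.
      return Collections.emptyList();
    }
    return Collections.singletonList(scm.allocateBlock(keySize));
  }
}
{code}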








[jira] [Created] (HDFS-11920) Ozone : add key partition

2017-06-02 Thread Chen Liang (JIRA)
Chen Liang created HDFS-11920:
-

 Summary: Ozone : add key partition
 Key: HDFS-11920
 URL: https://issues.apache.org/jira/browse/HDFS-11920
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Chen Liang
Assignee: Chen Liang


Currently, each key corresponds to one single SCM block, and putKey/getKey writes/reads to this single SCM block. This works fine for keys with reasonably small data sizes. However, if the data is too large (e.g. it does not even fit into a single container), we need to be able to partition the key data into multiple blocks, each in its own container. This JIRA changes the key-related classes to support this.
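
A minimal sketch of the partitioning idea, using hypothetical types rather than the Ozone classes: key data larger than one block is split into block-sized pieces, each of which would be written as its own block in its own container.

{code:java}
import java.util.ArrayList;
import java.util.List;

/** Sketch of splitting key data into block-sized partitions; hypothetical types. */
class KeyPartitionSketch {

  /** One partition: offset into the key data plus the length of this piece. */
  static final class Partition {
    final long offset;
    final long length;
    Partition(long offset, long length) {
      this.offset = offset;
      this.length = length;
    }
  }

  /** Split keySize bytes into chunks of at most blockSize bytes each. */
  static List<Partition> partition(long keySize, long blockSize) {
    List<Partition> parts = new ArrayList<>();
    for (long off = 0; off < keySize; off += blockSize) {
      parts.add(new Partition(off, Math.min(blockSize, keySize - off)));
    }
    return parts;
  }
}
{code}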






[jira] [Created] (HDFS-11919) Ozone: SCM: TestNodeManager takes too long to execute

2017-06-02 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-11919:
---

 Summary: Ozone: SCM: TestNodeManager takes too long to execute
 Key: HDFS-11919
 URL: https://issues.apache.org/jira/browse/HDFS-11919
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Anu Engineer
Priority: Trivial


On my laptop it takes 97.645 seconds to execute this test. We should explore if 
we can make this test run faster. 







[jira] [Resolved] (HDFS-11913) Ozone: TestKeySpaceManager#testDeleteVolume fails

2017-06-02 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDFS-11913.
-
Resolution: Fixed

> Ozone: TestKeySpaceManager#testDeleteVolume fails
> -
>
> Key: HDFS-11913
> URL: https://issues.apache.org/jira/browse/HDFS-11913
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Fix For: HDFS-7240
>
> Attachments: HDFS-11913-HDFS-7240.001.patch
>
>
> HDFS-11774 introduced a UT failure in {{TestKeySpaceManager#testDeleteVolume}};
> the error is as below:
> {noformat}
> java.util.NoSuchElementException
>  at 
> org.fusesource.leveldbjni.internal.JniDBIterator.peekNext(JniDBIterator.java:84)
>  at 
> org.fusesource.leveldbjni.internal.JniDBIterator.next(JniDBIterator.java:98)
>  at 
> org.fusesource.leveldbjni.internal.JniDBIterator.next(JniDBIterator.java:45)
>  at 
> org.apache.hadoop.ozone.ksm.MetadataManagerImpl.isVolumeEmpty(MetadataManagerImpl.java:221)
>  at 
> org.apache.hadoop.ozone.ksm.VolumeManagerImpl.deleteVolume(VolumeManagerImpl.java:294)
>  at 
> org.apache.hadoop.ozone.ksm.KeySpaceManager.deleteVolume(KeySpaceManager.java:340)
>  at 
> org.apache.hadoop.ozone.protocolPB.KeySpaceManagerProtocolServerSideTranslatorPB.deleteVolume(KeySpaceManagerProtocolServerSideTranslatorPB.java:200)
>  at 
> org.apache.hadoop.ozone.protocol.proto.KeySpaceManagerProtocolProtos$KeySpaceManagerService$2.callBlockingMethod(KeySpaceManagerProtocolProtos.java:22742)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:522)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:867)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:813)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2659)
> {noformat}
> This is caused by buggy code in {{MetadataManagerImpl#isVolumeEmpty}};
> there are two issues that need to be fixed:
> # Iterating to the next element throws this exception if there is no next
> element. This always fails when a volume is empty.
> # The code checked whether the first bucket name starts with "/volume_name".
> This returns a wrong value if there are several empty volumes with the same
> prefix, e.g. "/volA/" and "/volAA/". In that case {{isVolumeEmpty}} returns
> false because the next element after "/volA/" is not a bucket; it is another
> volume, "/volAA/", that happens to match the prefix.
> For now an empty volume named "/volA/" is probably not valid, but making
> sure our bucket key starts with "/volA/" instead of just "/volA" is a good
> idea to keep us away from weird problems.
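
A sketch of the two fixes described above, written against a plain SortedMap instead of the real leveldb-backed MetadataManagerImpl: guard the iterator with hasNext() before calling next(), and match the prefix with the trailing separator so "/volAA/..." keys are not mistaken for buckets of "/volA".

{code:java}
import java.util.Iterator;
import java.util.Map;
import java.util.SortedMap;

/** Sketch only; not the actual MetadataManagerImpl code. */
class IsVolumeEmptySketch {

  static boolean isVolumeEmpty(SortedMap<String, byte[]> db, String volumeName) {
    String prefix = "/" + volumeName + "/";            // note the trailing "/"
    Iterator<Map.Entry<String, byte[]>> it =
        db.tailMap(prefix).entrySet().iterator();      // first key >= prefix
    if (!it.hasNext()) {
      return true;                                     // nothing at or after the prefix
    }
    // The volume is empty unless the next key really is a bucket under it.
    return !it.next().getKey().startsWith(prefix);
  }
}
{code}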






[jira] [Reopened] (HDFS-11913) Ozone: TestKeySpaceManager#testDeleteVolume fails

2017-06-02 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reopened HDFS-11913:
-

> Ozone: TestKeySpaceManager#testDeleteVolume fails
> -
>
> Key: HDFS-11913
> URL: https://issues.apache.org/jira/browse/HDFS-11913
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Fix For: HDFS-7240
>
> Attachments: HDFS-11913-HDFS-7240.001.patch
>
>
> HDFS-11774 introduced a UT failure in {{TestKeySpaceManager#testDeleteVolume}};
> the error is as below:
> {noformat}
> java.util.NoSuchElementException
>  at 
> org.fusesource.leveldbjni.internal.JniDBIterator.peekNext(JniDBIterator.java:84)
>  at 
> org.fusesource.leveldbjni.internal.JniDBIterator.next(JniDBIterator.java:98)
>  at 
> org.fusesource.leveldbjni.internal.JniDBIterator.next(JniDBIterator.java:45)
>  at 
> org.apache.hadoop.ozone.ksm.MetadataManagerImpl.isVolumeEmpty(MetadataManagerImpl.java:221)
>  at 
> org.apache.hadoop.ozone.ksm.VolumeManagerImpl.deleteVolume(VolumeManagerImpl.java:294)
>  at 
> org.apache.hadoop.ozone.ksm.KeySpaceManager.deleteVolume(KeySpaceManager.java:340)
>  at 
> org.apache.hadoop.ozone.protocolPB.KeySpaceManagerProtocolServerSideTranslatorPB.deleteVolume(KeySpaceManagerProtocolServerSideTranslatorPB.java:200)
>  at 
> org.apache.hadoop.ozone.protocol.proto.KeySpaceManagerProtocolProtos$KeySpaceManagerService$2.callBlockingMethod(KeySpaceManagerProtocolProtos.java:22742)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:522)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:867)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:813)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2659)
> {noformat}
> This is caused by buggy code in {{MetadataManagerImpl#isVolumeEmpty}};
> there are two issues that need to be fixed:
> # Iterating to the next element throws this exception if there is no next
> element. This always fails when a volume is empty.
> # The code checked whether the first bucket name starts with "/volume_name".
> This returns a wrong value if there are several empty volumes with the same
> prefix, e.g. "/volA/" and "/volAA/". In that case {{isVolumeEmpty}} returns
> false because the next element after "/volA/" is not a bucket; it is another
> volume, "/volAA/", that happens to match the prefix.
> For now an empty volume named "/volA/" is probably not valid, but making
> sure our bucket key starts with "/volA/" instead of just "/volA" is a good
> idea to keep us away from weird problems.






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-06-02 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/422/

[Jun 1, 2017 2:29:29 PM] (brahma) HDFS-11893. Fix 
TestDFSShell.testMoveWithTargetPortEmpty failure.
[Jun 1, 2017 4:28:33 PM] (brahma) HDFS-11905. Fix license header inconsistency 
in hdfs. Contributed by
[Jun 1, 2017 6:52:11 PM] (liuml07) HADOOP-14460. Azure: update doc for live and 
contract tests. Contributed
[Jun 1, 2017 9:05:37 PM] (xiao) HDFS-11741. Long running balancer may fail due 
to expired
[Jun 1, 2017 9:13:57 PM] (xiao) HDFS-11904. Reuse iip in 
unprotectedRemoveXAttrs calls.
[Jun 1, 2017 10:20:18 PM] (wang) HDFS-11383. Intern strings in BlockLocation 
and ExtendedBlock.
[Jun 2, 2017 1:30:23 AM] (vrushali) YARN-6316 Provide help information and 
documentation for
[Jun 2, 2017 4:48:30 AM] (yqlin) HDFS-11359. DFSAdmin report command supports 
displaying maintenance




-1 overall


The following subsystems voted -1:
findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-common-project/hadoop-minikdc 
   Possible null pointer dereference in 
org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called 
method Dereferenced at 
MiniKdc.java:org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value 
of called method Dereferenced at MiniKdc.java:[line 368] 

FindBugs :

   module:hadoop-common-project/hadoop-auth 
   
org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(HttpServletRequest,
 HttpServletResponse) makes inefficient use of keySet iterator instead of 
entrySet iterator At MultiSchemeAuthenticationHandler.java:of keySet iterator 
instead of entrySet iterator At MultiSchemeAuthenticationHandler.java:[line 
192] 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   org.apache.hadoop.crypto.CipherSuite.setUnknownValue(int) 
unconditionally sets the field unknownValue At CipherSuite.java:unknownValue At 
CipherSuite.java:[line 44] 
   org.apache.hadoop.crypto.CryptoProtocolVersion.setUnknownValue(int) 
unconditionally sets the field unknownValue At 
CryptoProtocolVersion.java:unknownValue At CryptoProtocolVersion.java:[line 67] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of 
called method Dereferenced at 
FileUtil.java:org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to 
return value of called method Dereferenced at FileUtil.java:[line 118] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, 
File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path,
 File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:[line 387] 
   Return value of org.apache.hadoop.fs.permission.FsAction.or(FsAction) 
ignored, but method has no side effect At FTPFileSystem.java:but method has no 
side effect At FTPFileSystem.java:[line 421] 
   Useless condition:lazyPersist == true at this point At 
CommandWithDestination.java:[line 502] 
   org.apache.hadoop.io.DoubleWritable.compareTo(DoubleWritable) 
incorrectly handles double value At DoubleWritable.java: At 
DoubleWritable.java:[line 78] 
   org.apache.hadoop.io.DoubleWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles double value At DoubleWritable.java:int) 
incorrectly handles double value At DoubleWritable.java:[line 97] 
   org.apache.hadoop.io.FloatWritable.compareTo(FloatWritable) incorrectly 
handles float value At FloatWritable.java: At FloatWritable.java:[line 71] 
   org.apache.hadoop.io.FloatWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles float value At FloatWritable.java:int) 
incorrectly handles float value At FloatWritable.java:[line 89] 
   Possible null pointer dereference in 
org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return 
value of called method Dereferenced at 
IOUtils.java:org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) 
due to return value of called method Dereferenced at IOUtils.java:[line 351] 
   org.apache.hadoop.io.erasurecode.ECSchema.toString() makes inefficient 
use of keySet iterator instead of entrySet iterator At ECSchema.java:keySet 
iterator instead of entrySet iterator At ECSchema.java:[line 193] 
   Possible bad parsing of shift operation in 
org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At 
Utils.java:operation in 
org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At Utils.java:[line 
398] 
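
Two of the patterns above (the possible null pointer dereference on a called method's return value, and the keySet-instead-of-entrySet iteration) recur across several modules. The following snippet is hypothetical demo code, not taken from the flagged classes; it only shows the shape of the usual fixes.

{code:java}
import java.io.File;
import java.io.IOException;
import java.util.Map;

/** Illustrative examples of two recurring FindBugs patterns; demo code only. */
class FindbugsPatternExamples {

  /** File.list() returns null on I/O error, so check it before dereferencing. */
  static String[] listOrFail(File dir) throws IOException {
    String[] names = dir.list();
    if (names == null) {
      throw new IOException("Could not list " + dir);
    }
    return names;
  }

  /** Iterate entrySet() rather than keySet() when the values are needed too. */
  static void printAll(Map<String, String> map) {
    for (Map.Entry<String, String> e : map.entrySet()) {   // avoids a get() per key
      System.out.println(e.getKey() + "=" + e.getValue());
    }
  }
}
{code}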
 

[jira] [Resolved] (HDFS-11917) Why when using the hdfs nfs gateway, a file which is smaller than one block size required a block

2017-06-02 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang resolved HDFS-11917.

Resolution: Not A Problem
  Assignee: Weiwei Yang

> Why when using the hdfs nfs gateway, a file which is smaller than one block 
> size required a block
> -
>
> Key: HDFS-11917
> URL: https://issues.apache.org/jira/browse/HDFS-11917
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.8.0
>Reporter: BINGHUI WANG
>Assignee: Weiwei Yang
>
> I use the Linux shell to put a file into HDFS through the HDFS NFS gateway.
> I found that if the file is smaller than one block (128M), it still takes
> one block (128M) of HDFS storage this way, but after a few minutes the
> excess storage is released.
> e.g. if I put a 60M file into HDFS through the HDFS NFS gateway, it takes
> one block (128M) at first. After a few minutes the excess storage (68M) is
> released, and the file only uses 60M of HDFS storage in the end.
> Why does this happen?






[jira] [Created] (HDFS-11918) Ozone: Encapsulate KSM metadata key into protobuf messages for better (de)serialization

2017-06-02 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-11918:
--

 Summary: Ozone: Encapsulate KSM metadata key into protobuf 
messages for better (de)serialization
 Key: HDFS-11918
 URL: https://issues.apache.org/jira/browse/HDFS-11918
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Weiwei Yang
Assignee: Weiwei Yang
Priority: Critical


There are multiple types of keys stored in the KSM database:
# Volume Key
# Bucket Key
# Object Key
# User Key

Currently they are represented as plain strings with different conventions, such as:
# /volume
# /volume/bucket
# /volume/bucket/key
# $user

This approach makes it difficult to parse volumes/buckets/keys from the KSM database. I propose to encapsulate these types of keys into protobuf messages and take advantage of protobuf to serialize the classes to byte arrays (and vice versa).
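
As a plain-Java sketch of the idea only (the actual proposal is to use protobuf-generated messages, not a hand-written class like this), a bucket key would be kept as structured fields instead of the string "/volume/bucket", so reading it back does not depend on splitting on "/".

{code:java}
import java.nio.charset.StandardCharsets;

/** Sketch of a structured bucket key; protobuf would generate the real messages. */
class BucketKeySketch {
  final String volume;
  final String bucket;

  BucketKeySketch(String volume, String bucket) {
    this.volume = volume;
    this.bucket = bucket;
  }

  /** With protobuf this would be the generated toByteArray(). */
  byte[] toBytes() {
    return (volume + '\0' + bucket).getBytes(StandardCharsets.UTF_8);
  }

  /** With protobuf this would be the generated parseFrom(byte[]). */
  static BucketKeySketch fromBytes(byte[] raw) {
    String[] parts = new String(raw, StandardCharsets.UTF_8).split("\0", 2);
    return new BucketKeySketch(parts[0], parts[1]);
  }
}
{code}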








[jira] [Created] (HDFS-11917) Why when using the hdfs nfs gateway, a file which is smaller than one block size required a block

2017-06-02 Thread BINGHUI WANG (JIRA)
BINGHUI WANG created HDFS-11917:
---

 Summary: Why when using the hdfs nfs gateway, a file which is 
smaller than one block size required a block
 Key: HDFS-11917
 URL: https://issues.apache.org/jira/browse/HDFS-11917
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.8.0
Reporter: BINGHUI WANG


I use the Linux shell to put a file into HDFS through the HDFS NFS gateway. I found that if the file is smaller than one block (128M), it still takes one block (128M) of HDFS storage this way, but after a few minutes the excess storage is released.
e.g. if I put a 60M file into HDFS through the HDFS NFS gateway, it takes one block (128M) at first. After a few minutes the excess storage (68M) is released, and the file only uses 60M of HDFS storage in the end.
Why does this happen?






[jira] [Created] (HDFS-11916) Extend TestErasureCodingPolicies/TestErasureCodingPolicyWithSnapshot with a random EC policy

2017-06-02 Thread Takanobu Asanuma (JIRA)
Takanobu Asanuma created HDFS-11916:
---

 Summary: Extend 
TestErasureCodingPolicies/TestErasureCodingPolicyWithSnapshot with a random EC 
policy
 Key: HDFS-11916
 URL: https://issues.apache.org/jira/browse/HDFS-11916
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma









[jira] [Created] (HDFS-11915) Sync rbw dir on the first hsync() to avoid file lost on power failure

2017-06-02 Thread Kanaka Kumar Avvaru (JIRA)
Kanaka Kumar Avvaru created HDFS-11915:
--

 Summary: Sync rbw dir on the first hsync() to avoid file lost on 
power failure
 Key: HDFS-11915
 URL: https://issues.apache.org/jira/browse/HDFS-11915
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kanaka Kumar Avvaru
Priority: Critical


As discussed in HDFS-5042, there is a chance of losing blocks on a power failure if the rbw file creation entry has not yet been synced to the device. The created block then exists nowhere on disk, neither in rbw nor in finalized.

As suggested by [~kihwal], we will discuss and track it in this JIRA.

As suggested by [~vinayrpet], maybe the first hsync() request on a block file can call fsync on its parent (rbw) directory.
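
A minimal sketch of that suggestion, not the DataNode code: on the first hsync() of a block file, also fsync its parent (rbw) directory so the directory entry for the newly created file survives a power failure. Opening a directory as a FileChannel and calling force() works on Linux; other platforms may reject it.

{code:java}
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

/** Sketch of fsync-ing the parent directory of a newly created block file. */
class RbwDirSyncSketch {

  static void syncParentDir(Path blockFile) throws IOException {
    Path dir = blockFile.getParent();
    try (FileChannel ch = FileChannel.open(dir, StandardOpenOption.READ)) {
      ch.force(true);   // flush the directory metadata (the new file's entry)
    }
  }
}
{code}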








[jira] [Created] (HDFS-11914) Add more diagnosis info for fsimage transfer failure.

2017-06-02 Thread Yongjun Zhang (JIRA)
Yongjun Zhang created HDFS-11914:


 Summary: Add more diagnosis info for fsimage transfer failure.
 Key: HDFS-11914
 URL: https://issues.apache.org/jira/browse/HDFS-11914
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang


We hit an fsimage download problem.

The client tries to download the fsimage and gets:

 WARN org.apache.hadoop.security.UserGroupInformation: 
PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.IOException: 
File http://x.y.z:50070/imagetransfer?getimage=1=latest received length 
xyz is not of the advertised size abc.

Basically the client does not get enough fsimage data and finishes prematurely without any exception being thrown, until it finds that the size of the data received is smaller than expected. The client then closes the connection to the NN, which causes the NN to report:

INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Connection closed 
by client

This JIRA is to add more information to the logs to help debug the situation: specifically, report the stack trace when the connection is closed, how much data has been sent at that point, etc.
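
A sketch of the kind of check and diagnostics described above; this is a hypothetical helper, not TransferFsImage itself. It tracks how many bytes were actually received and reports that number when the stream ends short of the advertised length.

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

/** Sketch of verifying a download against its advertised length; hypothetical helper. */
class ImageTransferCheckSketch {

  static long copyAndVerify(InputStream in, OutputStream out, long advertisedLength)
      throws IOException {
    byte[] buf = new byte[64 * 1024];
    long received = 0;
    int n;
    while ((n = in.read(buf)) != -1) {
      out.write(buf, 0, n);
      received += n;
    }
    if (advertisedLength >= 0 && received != advertisedLength) {
      // Include both numbers so the failure can be diagnosed from either side.
      throw new IOException("Received " + received
          + " bytes but the advertised size was " + advertisedLength);
    }
    return received;
  }
}
{code}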
 


