[jira] [Commented] (HDFS-7942) NFS: support regexp grouping in nfs.exports.allowed.hosts

2015-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14377658#comment-14377658
 ] 

Hudson commented on HDFS-7942:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #876 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/876/])
HDFS-7942. NFS: support regexp grouping in nfs.exports.allowed.hosts. 
Contributed by Brandon Li (brandonli: rev 
36af4a913c97113bd0486c48e1cb864c5cba46fd)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsNfsGateway.md
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/NfsExports.java
* 
hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/nfs/TestNfsExports.java


 NFS: support regexp grouping in nfs.exports.allowed.hosts
 -

 Key: HDFS-7942
 URL: https://issues.apache.org/jira/browse/HDFS-7942
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.6.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 2.7.0

 Attachments: HDFS-7942.001.patch, HDFS-7942.002.patch


 Thanks, [~yeshavora], for reporting this problem.
 Set regex value in nfs.exports.allowed.hosts property.
 {noformat}
 <property><name>nfs.exports.allowed.hosts</name><value>206.190.52.[26|23] rw</value></property>
 {noformat}
 With this value, neither 206.190.52.26 nor 206.190.52.23 can mount NFS and 
 act as an NFS client. In fact, no host can mount NFS with this regex value; 
 every mount attempt fails with an access-denied error.
 {noformat}
 $ sudo su - -c "mount -o soft,proto=tcp,vers=3,rsize=1048576,wsize=1048576,nolock 206.190.52.23:/ /tmp/tmp_mnt" root
 mount.nfs: access denied by server while mounting 206.190.52.23:/
 {noformat}
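The failure is consistent with `[26|23]` being parsed as a regex character class rather than the intended alternation. A minimal sketch of the difference using plain `java.util.regex` (the actual NfsExports matching logic may differ; the unescaped dots also match any character, but that does not change the outcome here):

```java
public class RegexGroupDemo {
    public static void main(String[] args) {
        // As written, "[26|23]" is a character class matching a single
        // '2', '6', '|' or '3' -- not the alternation "(26|23)".
        String charClass = "206.190.52.[26|23]";
        System.out.println("206.190.52.26".matches(charClass)); // false: one char too many
        System.out.println("206.190.52.2".matches(charClass));  // true

        // Grouping expresses the intent (the feature added by HDFS-7942):
        String grouped = "206.190.52.(26|23)";
        System.out.println("206.190.52.26".matches(grouped)); // true
        System.out.println("206.190.52.23".matches(grouped)); // true
    }
}
```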



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7917) Use file to replace data dirs in test to simulate a disk failure.

2015-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14377651#comment-14377651
 ] 

Hudson commented on HDFS-7917:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #876 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/876/])
HDFS-7917. Use file to replace data dirs in test to simulate a disk failure. 
Contributed by Lei (Eddy) Xu. (cnauroth: rev 
2c238ae4e00371ef76582b007bb0e20ac8455d9c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureToleration.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeHotSwapVolumes.java


 Use file to replace data dirs in test to simulate a disk failure. 
 --

 Key: HDFS-7917
 URL: https://issues.apache.org/jira/browse/HDFS-7917
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7917.000.patch, HDFS-7917.001.patch, 
 HDFS-7917.002.patch


 Currently, in several tests, e.g., {{TestDataNodeVolumeFailureXXX}} and 
 {{TestDataNodeHotSwapVolumes}}, we simulate a disk failure by setting a 
 directory's executable permission to false. However, this carries the risk 
 that, if the cleanup code is not executed, the directory cannot easily be 
 removed by the Jenkins job.
 Since in {{DiskChecker#checkDirAccess}}:
 {code}
 private static void checkDirAccess(File dir) throws DiskErrorException {
   if (!dir.isDirectory()) {
     throw new DiskErrorException("Not a directory: " + dir.toString());
   }
   checkAccessByFileMethods(dir);
 }
 {code}
 We can replace the DN data directory with a plain file to achieve the same 
 fault-injection goal, while being safer to clean up in any circumstance.
 Additionally, as [~cnauroth] suggested: 
 bq. That might even let us enable some of these tests that are skipped on 
 Windows, because Windows allows access for the owner even after permissions 
 have been stripped.
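To make the idea concrete, here is a minimal, self-contained sketch of the file-for-directory fault injection; the class and helper names are illustrative, not the HDFS-7917 patch itself:

```java
import java.io.File;
import java.io.IOException;

public class DiskFailureSim {
    // Replace the DN data dir with a plain file so that
    // DiskChecker#checkDirAccess fails its isDirectory() test.
    static void simulateDiskFailure(File dataDir) throws IOException {
        deleteRecursively(dataDir);
        if (!dataDir.createNewFile()) {
            throw new IOException("could not create file " + dataDir);
        }
    }

    // Cleanup is a trivial delete, with no permission bits to restore.
    static void restore(File dataDir) {
        dataDir.delete();
        dataDir.mkdirs();
    }

    private static void deleteRecursively(File f) {
        File[] children = f.listFiles();
        if (children != null) {
            for (File c : children) deleteRecursively(c);
        }
        f.delete();
    }

    public static void main(String[] args) throws IOException {
        File dir = new File(System.getProperty("java.io.tmpdir"), "dn-data-demo");
        dir.mkdirs();
        simulateDiskFailure(dir);
        System.out.println(dir.isFile()); // the "disk" now fails isDirectory()
        restore(dir);
        dir.delete();
    }
}
```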



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7982) huge non dfs space used

2015-03-24 Thread regis le bretonnic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

regis le bretonnic updated HDFS-7982:
-
Description: 
Hi...

I'm trying to load an external textfile table into an internal ORC table using 
hive. My process failed with the following error:
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
/tmp/hive/blablabla could only be replicated to 0 nodes instead of 
minReplication (=1).  There are 3 datanode(s) running and no node(s) are 
excluded in this operation.

After investigation, I saw that the quantity of non-DFS space grows more and 
more until the job fails.
Just before failing, the non-DFS used space reaches 54 GB on each datanode. I 
still have remaining DFS space.

Here is the dfsadmin report just before the issue:

[hdfs@hadoop-01 data]$ hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Configured Capacity: 475193597952 (442.56 GB)
Present Capacity: 290358095182 (270.42 GB)
DFS Remaining: 228619903369 (212.92 GB)
DFS Used: 61738191813 (57.50 GB)
DFS Used%: 21.26%
Under replicated blocks: 38
Blocks with corrupt replicas: 0
Missing blocks: 0

-
Live datanodes (3):

Name: 192.168.3.36:50010 (hadoop-04.X.local)
Hostname: hadoop-04.X.local
Decommission Status : Normal
Configured Capacity: 158397865984 (147.52 GB)
DFS Used: 20591481196 (19.18 GB)
Non DFS Used: 61522602976 (57.30 GB)
DFS Remaining: 76283781812 (71.04 GB)
DFS Used%: 13.00%
DFS Remaining%: 48.16%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 182
Last contact: Tue Mar 24 10:56:05 CET 2015


Name: 192.168.3.35:50010 (hadoop-03.X.local)
Hostname: hadoop-03.X.local
Decommission Status : Normal
Configured Capacity: 158397865984 (147.52 GB)
DFS Used: 20555853589 (19.14 GB)
Non DFS Used: 61790296136 (57.55 GB)
DFS Remaining: 76051716259 (70.83 GB)
DFS Used%: 12.98%
DFS Remaining%: 48.01%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 184
Last contact: Tue Mar 24 10:56:05 CET 2015


Name: 192.168.3.37:50010 (hadoop-05.X.local)
Hostname: hadoop-05.X.local
Decommission Status : Normal
Configured Capacity: 158397865984 (147.52 GB)
DFS Used: 20590857028 (19.18 GB)
Non DFS Used: 61522603658 (57.30 GB)
DFS Remaining: 76284405298 (71.05 GB)
DFS Used%: 13.00%
DFS Remaining%: 48.16%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 182
Last contact: Tue Mar 24 10:56:05 CET 2015

I expected to find the temporary space usage within my filesystem (i.e. /data).
I found the DFS usage under /data/hadoop/hdfs/data (19 GB) but no trace of the 
57 GB of non-DFS usage...

[root@hadoop-05 hadoop]# df -h /data
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1 148G   20G  121G  14% /data

I also checked dfs.datanode.du.reserved, which is set to zero.
[root@hadoop-05 hadoop]# hdfs getconf -confkey dfs.datanode.du.reserved
0

Did I miss something? Where is the non-DFS space on Linux? Why did I get the 
message "could only be replicated to 0 nodes instead of minReplication (=1). 
There are 3 datanode(s) running and no node(s) are excluded in this operation." 
when the datanodes were up and running with DFS space still remaining?

This error is blocking us.
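One thing worth noting: the "Non DFS Used" figure in the dfsadmin report is derived rather than measured on disk, i.e. configured capacity minus DFS used minus DFS remaining. A quick check against the hadoop-05 numbers above (constants copied from the report):

```java
public class NonDfsDemo {
    public static void main(String[] args) {
        // Figures for hadoop-05.X.local, copied from the report above.
        long configured = 158397865984L; // 147.52 GB
        long dfsUsed    = 20590857028L;  // 19.18 GB
        long remaining  = 76284405298L;  // 71.05 GB

        // "Non DFS Used" is not read from the filesystem; it is derived:
        long nonDfs = configured - dfsUsed - remaining;
        System.out.println(nonDfs); // 61522603658, exactly the reported value
    }
}
```

So a shrinking "DFS Remaining" (for example, due to other writers on the partition or reservations) shows up as growing non-DFS usage, without any file you can find with `du`.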


[jira] [Commented] (HDFS-7956) Improve logging for DatanodeRegistration.

2015-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14377923#comment-14377923
 ] 

Hudson commented on HDFS-7956:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2074 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2074/])
HDFS-7956. Improve logging for DatanodeRegistration. Contributed by Plamen 
Jeliazkov. (shv: rev 970ee3fc56a68afade98017296cf9d057f225a46)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/DatanodeRegistration.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Improve logging for DatanodeRegistration.
 -

 Key: HDFS-7956
 URL: https://issues.apache.org/jira/browse/HDFS-7956
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: Konstantin Shvachko
Assignee: Plamen Jeliazkov
 Fix For: 2.7.0

 Attachments: HDFS-7956.1.patch


 {{DatanodeRegistration.toString()}} currently prints only its address, 
 without the port; it should print its full address, similar to 
 {{NamenodeRegistration}}.
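A minimal sketch of what the improved output could look like, printing host:port the way {{NamenodeRegistration}} does; the field names and format below are illustrative, not the actual patch:

```java
public class DatanodeRegistrationSketch {
    // Illustrative fields; the real DatanodeRegistration carries more state.
    String hostName = "dn1.example.com";
    String ipAddr   = "192.168.0.10";
    int xferPort    = 50010;

    @Override
    public String toString() {
        // Full address including the port, plus the hostname.
        return "DatanodeRegistration(" + ipAddr + ":" + xferPort
                + ", hostname=" + hostName + ")";
    }

    public static void main(String[] args) {
        System.out.println(new DatanodeRegistrationSketch());
    }
}
```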



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7884) NullPointerException in BlockSender

2015-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14377925#comment-14377925
 ] 

Hudson commented on HDFS-7884:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2074 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2074/])
HDFS-7884. Fix NullPointerException in BlockSender when the generation stamp 
provided by the client is larger than the one stored in the datanode.  
Contributed by Brahma Reddy Battula (szetszwo: rev 
d7e3c3364eb904f55a878bc14c331952f9dadab2)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 NullPointerException in BlockSender
 ---

 Key: HDFS-7884
 URL: https://issues.apache.org/jira/browse/HDFS-7884
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Tsz Wo Nicholas Sze
Assignee: Brahma Reddy Battula
Priority: Blocker
 Fix For: 2.7.0

 Attachments: HDFS-7884-002.patch, HDFS-7884.patch, 
 h7884_20150313.patch, 
 org.apache.hadoop.hdfs.TestAppendSnapshotTruncate-output.txt


 {noformat}
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hdfs.server.datanode.BlockSender.init(BlockSender.java:264)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:506)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:249)
   at java.lang.Thread.run(Thread.java:745)
 {noformat}
 BlockSender.java:264 is shown below
 {code}
   this.volumeRef = datanode.data.getVolume(block).obtainReference();
 {code}
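A self-contained sketch of the failure mode and one defensive option: when {{getVolume()}} returns null (e.g. the client's generation stamp is larger than the one stored on the datanode), {{obtainReference()}} becomes the NPE site. The stub below is illustrative, not the actual datanode API or the committed fix:

```java
import java.io.IOException;

public class BlockSenderNpeDemo {
    interface Volume { AutoCloseable obtainReference(); }

    // Stub standing in for datanode.data.getVolume(block): returns null
    // when the requested generation stamp is newer than the stored one.
    static Volume getVolume(long genStamp) {
        if (genStamp > 100) return null; // stored replica genstamp is 100
        return () -> () -> { };
    }

    static AutoCloseable open(long genStamp) throws IOException {
        Volume v = getVolume(genStamp);
        // Without a null check, v.obtainReference() is the NPE at
        // BlockSender.java:264; failing with a descriptive exception
        // is one defensive option.
        if (v == null) {
            throw new IOException("Replica not found, genstamp=" + genStamp);
        }
        return v.obtainReference();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(open(100) != null); // true
        try {
            open(999); // client genstamp larger than the stored one
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```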



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3325) When configuring dfs.namenode.safemode.threshold-pct to a value greater or equal to 1 there is mismatch in the UI report

2015-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14377929#comment-14377929
 ] 

Hudson commented on HDFS-3325:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2074 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2074/])
HDFS-3325. When configuring 'dfs.namenode.safemode.threshold-pct' to a value 
greater or equal to 1 there is mismatch in the UI report (Contributed by 
J.Andreina) (vinayakumarb: rev c6c396fcd69514ba93583268b2633557c3d74a47)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHASafeMode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java


 When configuring dfs.namenode.safemode.threshold-pct to a value greater or 
 equal to 1 there is mismatch in the UI report
 --

 Key: HDFS-3325
 URL: https://issues.apache.org/jira/browse/HDFS-3325
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: J.Andreina
Assignee: J.Andreina
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-3325.1.patch, HDFS-3325.2.patch


 When dfs.namenode.safemode.threshold-pct is configured to n, the Namenode 
 stays in safemode until n percent of the blocks satisfying the minimal 
 replication requirement defined by dfs.namenode.replication.min have been 
 reported to the Namenode.
 But the UI displays that (n percent of total blocks) + 1 blocks are 
 additionally needed to come out of safemode.
 Scenario 1:
 
 Configurations:
 dfs.namenode.safemode.threshold-pct = 2
 dfs.replication = 2
 dfs.namenode.replication.min =2
 Step 1: Start NN,DN1,DN2
 Step 2: Write a file a.txt which has got 167 blocks
 step 3: Stop NN,DN1,DN2
 Step 4: start NN
 In UI report the Number of blocks needed to come out of safemode and number 
 of blocks actually present is different.
 {noformat}
 Cluster Summary
 Security is OFF 
 Safe mode is ON. The reported blocks 0 needs additional 335 blocks to reach 
 the threshold 2. of total blocks 167. Safe mode will be turned off 
 automatically.
 2 files and directories, 167 blocks = 169 total.
 Heap Memory used 57.05 MB is 2% of Commited Heap Memory 2 GB. Max Heap Memory 
 is 2 GB. 
 Non Heap Memory used 23.37 MB is 17% of Commited Non Heap Memory 130.44 MB. 
 Max Non Heap Memory is 176 MB.{noformat}
 Scenario 2:
 ===
 Configurations:
 dfs.namenode.safemode.threshold-pct = 1
 dfs.replication = 2
 dfs.namenode.replication.min =2
 Step 1: Start NN,DN1,DN2
 Step 2: Write a file a.txt which has got 167 blocks
 step 3: Stop NN,DN1,DN2
 Step 4: start NN
 In UI report the Number of blocks needed to come out of safemode and number 
 of blocks actually present is different
 {noformat}
 Cluster Summary
 Security is OFF 
 Safe mode is ON. The reported blocks 0 needs additional 168 blocks to reach 
 the threshold 1. of total blocks 167. Safe mode will be turned off 
 automatically.
 2 files and directories, 167 blocks = 169 total.
 Heap Memory used 56.2 MB is 2% of Commited Heap Memory 2 GB. Max Heap Memory 
 is 2 GB. 
 Non Heap Memory used 23.37 MB is 17% of Commited Non Heap Memory 130.44 MB. 
 Max Non Heap Memory is 176 MB.{noformat}
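The two scenarios suggest the UI computes needed = (threshold-pct x total blocks) + 1 - reported, which gives 335 for pct 2 and 168 for pct 1 with 167 total blocks. A sketch of that apparent arithmetic, inferred from the figures above rather than taken from the actual FSNamesystem code:

```java
public class SafemodeThresholdDemo {
    // Apparent computation behind the UI message, inferred from the two
    // scenarios: blockThreshold = (long) (total * pct), plus an extra 1.
    static long needed(double pct, long total, long reported) {
        long blockThreshold = (long) (total * pct);
        return blockThreshold + 1 - reported;
    }

    public static void main(String[] args) {
        System.out.println(needed(2.0, 167, 0)); // 335, as in scenario 1
        System.out.println(needed(1.0, 167, 0)); // 168, as in scenario 2
    }
}
```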



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7960) The full block report should prune zombie storages even if they're not empty

2015-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14377927#comment-14377927
 ] 

Hudson commented on HDFS-7960:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2074 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2074/])
HDFS-7960. The full block report should prune zombie storages even if they're 
not empty. Contributed by Colin McCabe and Eddy Xu. (wang: rev 
50ee8f4e67a66aa77c5359182f61f3e951844db6)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDeadDatanode.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/DatanodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/BlockReportTestBase.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBPOfferService.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/BlockReportContext.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolServerSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencing.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestTriggerBlockReport.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDnRespectsBlockReportSplitThreshold.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestNNHandlesBlockReportPerStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockHasMultipleReplicasOnSameDN.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDatanodeProtocolRetryPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestNNHandlesCombinedBlockReport.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestNameNodePrunesMissingStorages.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/DatanodeProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/TestBlockListAsLongs.java


 The full block report should prune zombie storages even if they're not empty
 

 Key: HDFS-7960
 URL: https://issues.apache.org/jira/browse/HDFS-7960
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Colin Patrick McCabe
Priority: Critical
 Fix For: 2.7.0

 Attachments: HDFS-7960.002.patch, HDFS-7960.003.patch, 
 HDFS-7960.004.patch, HDFS-7960.005.patch, HDFS-7960.006.patch, 
 HDFS-7960.007.patch, HDFS-7960.008.patch


 The full block report should prune zombie storages even if they're not empty. 
  We have seen cases in production where zombie storages have not been pruned 
 subsequent to HDFS-7575.  This could arise any time the NameNode thinks there 
 is a block in some old storage which is actually not there.  In this case, 
 the block will not show up in the new storage (once old is renamed to new) 
 and the old storage will linger forever as a zombie, even with the HDFS-7596 
 fix applied.  This also happens with datanode hotplug, when a drive is 
 removed.  In this case, an entire storage (volume) goes away but the blocks 
 do not show up in another storage on the same datanode.
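The intended post-fix behavior can be sketched as a set intersection: after a full block report, any known storage the datanode did not report is treated as a zombie and pruned, whether or not the NameNode still associates blocks with it. The names below are illustrative, not the HDFS-7960 patch:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.Set;
import java.util.TreeSet;

public class ZombieStoragePruneDemo {
    // After a full block report, keep only storages the DN reported;
    // anything else is a zombie and is dropped even if non-empty.
    static Set<String> prune(Set<String> knownStorages, Set<String> reportedStorages) {
        Set<String> kept = new TreeSet<>(knownStorages);
        kept.retainAll(reportedStorages);
        return kept;
    }

    public static void main(String[] args) {
        Set<String> known = new TreeSet<>(Arrays.asList("DS-new", "DS-old"));
        Set<String> reported = Collections.singleton("DS-new");
        System.out.println(prune(known, reported)); // [DS-new]
    }
}
```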



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7942) NFS: support regexp grouping in nfs.exports.allowed.hosts

2015-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14377933#comment-14377933
 ] 

Hudson commented on HDFS-7942:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2074 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2074/])
HDFS-7942. NFS: support regexp grouping in nfs.exports.allowed.hosts. 
Contributed by Brandon Li (brandonli: rev 
36af4a913c97113bd0486c48e1cb864c5cba46fd)
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/NfsExports.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/nfs/TestNfsExports.java
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsNfsGateway.md


 NFS: support regexp grouping in nfs.exports.allowed.hosts
 -

 Key: HDFS-7942
 URL: https://issues.apache.org/jira/browse/HDFS-7942
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.6.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 2.7.0

 Attachments: HDFS-7942.001.patch, HDFS-7942.002.patch


 Thanks, [~yeshavora], for reporting this problem.
 Set regex value in nfs.exports.allowed.hosts property.
 {noformat}
 <property><name>nfs.exports.allowed.hosts</name><value>206.190.52.[26|23] rw</value></property>
 {noformat}
 With this value, neither 206.190.52.26 nor 206.190.52.23 can mount NFS and 
 act as an NFS client. In fact, no host can mount NFS with this regex value; 
 every mount attempt fails with an access-denied error.
 {noformat}
 $ sudo su - -c "mount -o soft,proto=tcp,vers=3,rsize=1048576,wsize=1048576,nolock 206.190.52.23:/ /tmp/tmp_mnt" root
 mount.nfs: access denied by server while mounting 206.190.52.23:/
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7917) Use file to replace data dirs in test to simulate a disk failure.

2015-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14377926#comment-14377926
 ] 

Hudson commented on HDFS-7917:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2074 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2074/])
HDFS-7917. Use file to replace data dirs in test to simulate a disk failure. 
Contributed by Lei (Eddy) Xu. (cnauroth: rev 
2c238ae4e00371ef76582b007bb0e20ac8455d9c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureToleration.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeHotSwapVolumes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java


 Use file to replace data dirs in test to simulate a disk failure. 
 --

 Key: HDFS-7917
 URL: https://issues.apache.org/jira/browse/HDFS-7917
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7917.000.patch, HDFS-7917.001.patch, 
 HDFS-7917.002.patch


 Currently, in several tests, e.g., {{TestDataNodeVolumeFailureXXX}} and 
 {{TestDataNodeHotSwapVolumes}}, we simulate a disk failure by setting a 
 directory's executable permission to false. However, this carries the risk 
 that, if the cleanup code is not executed, the directory cannot easily be 
 removed by the Jenkins job.
 Since in {{DiskChecker#checkDirAccess}}:
 {code}
 private static void checkDirAccess(File dir) throws DiskErrorException {
   if (!dir.isDirectory()) {
     throw new DiskErrorException("Not a directory: " + dir.toString());
   }
   checkAccessByFileMethods(dir);
 }
 {code}
 We can replace the DN data directory with a plain file to achieve the same 
 fault-injection goal, while being safer to clean up in any circumstance.
 Additionally, as [~cnauroth] suggested: 
 bq. That might even let us enable some of these tests that are skipped on 
 Windows, because Windows allows access for the owner even after permissions 
 have been stripped.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7875) Improve log message when wrong value configured for dfs.datanode.failed.volumes.tolerated

2015-03-24 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated HDFS-7875:

Attachment: 0004-HDFS-7875.patch

Thanks, Harsh, for the comments.
Updated the patch with the changes.

 Improve log message when wrong value configured for 
 dfs.datanode.failed.volumes.tolerated 
 --

 Key: HDFS-7875
 URL: https://issues.apache.org/jira/browse/HDFS-7875
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: nijel
Assignee: nijel
Priority: Trivial
 Attachments: 0001-HDFS-7875.patch, 0002-HDFS-7875.patch, 
 0003-HDFS-7875.patch, 0004-HDFS-7875.patch


 By mistake I configured dfs.datanode.failed.volumes.tolerated equal to the 
 number of volumes configured. I got stuck for some time debugging, since the 
 log message didn't give much detail.
 The log message could be more detailed. I've added a patch with a change to 
 the message. Please have a look.
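For illustration, a self-contained sketch of a more detailed validation message; the actual DataNode check and wording differ, and the names here are assumptions:

```java
public class VolumesToleratedCheck {
    // Validation sketch: tolerated failures must lie in [0, volumes - 1],
    // since tolerating all volumes failing leaves the DN with no storage.
    static void validate(int volsConfigured, int volFailuresTolerated) {
        if (volFailuresTolerated < 0 || volFailuresTolerated >= volsConfigured) {
            throw new IllegalArgumentException(
                    "Invalid value " + volFailuresTolerated
                    + " for dfs.datanode.failed.volumes.tolerated: must be in"
                    + " the range [0, " + (volsConfigured - 1) + "] for "
                    + volsConfigured + " configured volume(s)");
        }
    }

    public static void main(String[] args) {
        try {
            validate(3, 3); // the misconfiguration described above
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```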



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7982) huge non dfs space used

2015-03-24 Thread regis le bretonnic (JIRA)
regis le bretonnic created HDFS-7982:


 Summary: huge non dfs space used
 Key: HDFS-7982
 URL: https://issues.apache.org/jira/browse/HDFS-7982
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.6.0
Reporter: regis le bretonnic






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7917) Use file to replace data dirs in test to simulate a disk failure.

2015-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14377864#comment-14377864
 ] 

Hudson commented on HDFS-7917:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2092 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2092/])
HDFS-7917. Use file to replace data dirs in test to simulate a disk failure. 
Contributed by Lei (Eddy) Xu. (cnauroth: rev 
2c238ae4e00371ef76582b007bb0e20ac8455d9c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeHotSwapVolumes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureToleration.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java


 Use file to replace data dirs in test to simulate a disk failure. 
 --

 Key: HDFS-7917
 URL: https://issues.apache.org/jira/browse/HDFS-7917
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7917.000.patch, HDFS-7917.001.patch, 
 HDFS-7917.002.patch


 Currently, in several tests, e.g., {{TestDataNodeVolumeFailureXXX}} and 
 {{TestDataNodeHotSwapVolumes}}, we simulate a disk failure by setting a 
 directory's executable permission to false. However, this carries the risk 
 that, if the cleanup code is not executed, the directory cannot easily be 
 removed by the Jenkins job.
 Since in {{DiskChecker#checkDirAccess}}:
 {code}
 private static void checkDirAccess(File dir) throws DiskErrorException {
   if (!dir.isDirectory()) {
     throw new DiskErrorException("Not a directory: " + dir.toString());
   }
   checkAccessByFileMethods(dir);
 }
 {code}
 We can replace the DN data directory with a plain file to achieve the same 
 fault-injection goal, while being safer to clean up in any circumstance.
 Additionally, as [~cnauroth] suggested: 
 bq. That might even let us enable some of these tests that are skipped on 
 Windows, because Windows allows access for the owner even after permissions 
 have been stripped.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7960) The full block report should prune zombie storages even if they're not empty

2015-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14377866#comment-14377866
 ] 

Hudson commented on HDFS-7960:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2092 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2092/])
HDFS-7960. The full block report should prune zombie storages even if they're 
not empty. Contributed by Colin McCabe and Eddy Xu. (wang: rev 
50ee8f4e67a66aa77c5359182f61f3e951844db6)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/TestBlockListAsLongs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestNNHandlesBlockReportPerStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestNNHandlesCombinedBlockReport.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/BlockReportContext.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolServerSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDatanodeProtocolRetryPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDeadDatanode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestTriggerBlockReport.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDnRespectsBlockReportSplitThreshold.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestNameNodePrunesMissingStorages.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/DatanodeProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/BlockReportTestBase.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/DatanodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencing.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockHasMultipleReplicasOnSameDN.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBPOfferService.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java


 The full block report should prune zombie storages even if they're not empty
 

 Key: HDFS-7960
 URL: https://issues.apache.org/jira/browse/HDFS-7960
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Colin Patrick McCabe
Priority: Critical
 Fix For: 2.7.0

 Attachments: HDFS-7960.002.patch, HDFS-7960.003.patch, 
 HDFS-7960.004.patch, HDFS-7960.005.patch, HDFS-7960.006.patch, 
 HDFS-7960.007.patch, HDFS-7960.008.patch


 The full block report should prune zombie storages even if they're not empty. 
  We have seen cases in production where zombie storages have not been pruned 
 subsequent to HDFS-7575.  This could arise any time the NameNode thinks there 
 is a block in some old storage which is actually not there.  In this case, 
 the block will not show up in the new storage (once old is renamed to new) 
 and the old storage will linger forever as a zombie, even with the HDFS-7596 
 fix applied.  This also happens with datanode hotplug, when a drive is 
 removed.  In this case, an entire storage (volume) goes away but the blocks 
 do not show up in another storage on the same datanode.
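
 The intended pruning semantics can be sketched in a few lines (illustrative
 names only; the real logic lives in BlockManager and DatanodeDescriptor):
 every storage missing from the full block report is treated as a zombie and
 removed, regardless of the block count the NameNode still charges to it.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class ZombieStoragePruneDemo {
    public static void main(String[] args) {
        // NameNode's view: storage ID -> blocks it still thinks are there
        Map<String, Integer> nnStorages = new HashMap<>();
        nnStorages.put("DS-old", 3);    // stale: renamed or hot-unplugged on the DN
        nnStorages.put("DS-new", 120);
        // Storage IDs actually present in the DN's full block report:
        Set<String> reported = Set.of("DS-new");
        // Prune every unreported storage, even if its block count is non-zero:
        nnStorages.keySet().removeIf(id -> !reported.contains(id));
        System.out.println(nnStorages.keySet()); // prints [DS-new]
    }
}
```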



--
This message was sent by Atlassian JIRA

[jira] [Commented] (HDFS-3325) When configuring dfs.namenode.safemode.threshold-pct to a value greater or equal to 1 there is mismatch in the UI report

2015-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377872#comment-14377872
 ] 

Hudson commented on HDFS-3325:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2092 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2092/])
HDFS-3325. When configuring 'dfs.namenode.safemode.threshold-pct' to a value 
greater or equal to 1 there is mismatch in the UI report (Contributed by 
J.Andreina) (vinayakumarb: rev c6c396fcd69514ba93583268b2633557c3d74a47)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHASafeMode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java


 When configuring dfs.namenode.safemode.threshold-pct to a value greater or 
 equal to 1 there is mismatch in the UI report
 --

 Key: HDFS-3325
 URL: https://issues.apache.org/jira/browse/HDFS-3325
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: J.Andreina
Assignee: J.Andreina
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-3325.1.patch, HDFS-3325.2.patch


 When dfs.namenode.safemode.threshold-pct is configured to n, the 
 Namenode stays in safemode until n percent of the blocks satisfying the 
 minimal replication requirement defined by dfs.namenode.replication.min 
 have been reported to the namenode.
 But the UI displays that n percent of total blocks + 1 additional blocks 
 are needed to come out of safemode.
 Scenario 1:
 
 Configurations:
 dfs.namenode.safemode.threshold-pct = 2
 dfs.replication = 2
 dfs.namenode.replication.min =2
 Step 1: Start NN,DN1,DN2
 Step 2: Write a file a.txt which has got 167 blocks
 step 3: Stop NN,DN1,DN2
 Step 4: start NN
 In the UI report, the number of blocks needed to come out of safemode and the 
 number of blocks actually present differ.
 {noformat}
 Cluster Summary
 Security is OFF 
 Safe mode is ON. The reported blocks 0 needs additional 335 blocks to reach 
 the threshold 2. of total blocks 167. Safe mode will be turned off 
 automatically.
 2 files and directories, 167 blocks = 169 total.
 Heap Memory used 57.05 MB is 2% of Commited Heap Memory 2 GB. Max Heap Memory 
 is 2 GB. 
 Non Heap Memory used 23.37 MB is 17% of Commited Non Heap Memory 130.44 MB. 
 Max Non Heap Memory is 176 MB.{noformat}
 Scenario 2:
 ===
 Configurations:
 dfs.namenode.safemode.threshold-pct = 1
 dfs.replication = 2
 dfs.namenode.replication.min =2
 Step 1: Start NN,DN1,DN2
 Step 2: Write a file a.txt which has got 167 blocks
 step 3: Stop NN,DN1,DN2
 Step 4: start NN
 In the UI report, the number of blocks needed to come out of safemode and the 
 number of blocks actually present differ.
 {noformat}
 Cluster Summary
 Security is OFF 
 Safe mode is ON. The reported blocks 0 needs additional 168 blocks to reach 
 the threshold 1. of total blocks 167. Safe mode will be turned off 
 automatically.
 2 files and directories, 167 blocks = 169 total.
 Heap Memory used 56.2 MB is 2% of Commited Heap Memory 2 GB. Max Heap Memory 
 is 2 GB. 
 Non Heap Memory used 23.37 MB is 17% of Commited Non Heap Memory 130.44 MB. 
 Max Non Heap Memory is 176 MB.{noformat}
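
 The off-by-one reported in both scenarios can be reproduced with simple
 arithmetic (illustrative only; the actual computation is in FSNamesystem's
 safe-mode logic): with 167 blocks, the true thresholds are 334 and 167
 blocks, while the UI reports one more in each case (335 and 168).

```java
public class SafeModeThresholdDemo {
    public static void main(String[] args) {
        long totalBlocks = 167;

        // Scenario 1: threshold-pct = 2 (i.e. 200% of blocks required)
        long threshold1 = (long) (totalBlocks * 2.0);
        System.out.println(threshold1);     // 334 blocks actually required
        System.out.println(threshold1 + 1); // 335, the value the UI reports

        // Scenario 2: threshold-pct = 1 (100% of blocks required)
        long threshold2 = (long) (totalBlocks * 1.0);
        System.out.println(threshold2);     // 167 blocks actually required
        System.out.println(threshold2 + 1); // 168, the value the UI reports
    }
}
```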



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7884) NullPointerException in BlockSender

2015-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377865#comment-14377865
 ] 

Hudson commented on HDFS-7884:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2092 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2092/])
HDFS-7884. Fix NullPointerException in BlockSender when the generation stamp 
provided by the client is larger than the one stored in the datanode.  
Contributed by Brahma Reddy Battula (szetszwo: rev 
d7e3c3364eb904f55a878bc14c331952f9dadab2)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 NullPointerException in BlockSender
 ---

 Key: HDFS-7884
 URL: https://issues.apache.org/jira/browse/HDFS-7884
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Tsz Wo Nicholas Sze
Assignee: Brahma Reddy Battula
Priority: Blocker
 Fix For: 2.7.0

 Attachments: HDFS-7884-002.patch, HDFS-7884.patch, 
 h7884_20150313.patch, 
 org.apache.hadoop.hdfs.TestAppendSnapshotTruncate-output.txt


 {noformat}
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:264)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:506)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:249)
   at java.lang.Thread.run(Thread.java:745)
 {noformat}
 BlockSender.java:264 is shown below
 {code}
   this.volumeRef = datanode.data.getVolume(block).obtainReference();
 {code}
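
 The NPE arises because the volume lookup can return null when no stored
 replica matches the block the client asked for (e.g. a newer generation
 stamp), and the result is dereferenced without a check. A self-contained
 sketch of the failure mode and a defensive fix (hypothetical names; not
 the committed patch):

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class NullVolumeDemo {
    // Stand-in for FsDatasetSpi#getVolume: returns null when the requested
    // block (including its generation stamp) is unknown to the datanode.
    static String getVolume(Map<String, String> replicas, String blockKey) {
        return replicas.get(blockKey);
    }

    public static void main(String[] args) {
        Map<String, String> replicas = new HashMap<>();
        replicas.put("blk_100_gs1", "/data/vol1");
        // Client supplies a larger generation stamp than the stored one:
        String volume = getVolume(replicas, "blk_100_gs2");
        // Dereferencing volume here would throw the NPE seen at
        // BlockSender.<init>. A null check turns it into a clear error:
        try {
            if (volume == null) {
                throw new IOException("No volume found for blk_100_gs2");
            }
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```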



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7956) Improve logging for DatanodeRegistration.

2015-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377863#comment-14377863
 ] 

Hudson commented on HDFS-7956:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2092 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2092/])
HDFS-7956. Improve logging for DatanodeRegistration. Contributed by Plamen 
Jeliazkov. (shv: rev 970ee3fc56a68afade98017296cf9d057f225a46)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/DatanodeRegistration.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Improve logging for DatanodeRegistration.
 -

 Key: HDFS-7956
 URL: https://issues.apache.org/jira/browse/HDFS-7956
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: Konstantin Shvachko
Assignee: Plamen Jeliazkov
 Fix For: 2.7.0

 Attachments: HDFS-7956.1.patch


 {{DatanodeRegistration.toString()}} 
 prints only its address without the port; it should print its full address, 
 similar to {{NamenodeRegistration}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7942) NFS: support regexp grouping in nfs.exports.allowed.hosts

2015-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377870#comment-14377870
 ] 

Hudson commented on HDFS-7942:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2092 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2092/])
HDFS-7942. NFS: support regexp grouping in nfs.exports.allowed.hosts. 
Contributed by Brandon Li (brandonli: rev 
36af4a913c97113bd0486c48e1cb864c5cba46fd)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/nfs/TestNfsExports.java
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsNfsGateway.md
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/NfsExports.java


 NFS: support regexp grouping in nfs.exports.allowed.hosts
 -

 Key: HDFS-7942
 URL: https://issues.apache.org/jira/browse/HDFS-7942
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.6.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 2.7.0

 Attachments: HDFS-7942.001.patch, HDFS-7942.002.patch


 Thanks, [~yeshavora], for reporting this problem.
 Set a regex value in the nfs.exports.allowed.hosts property.
 {noformat}
 <property><name>nfs.exports.allowed.hosts</name><value>206.190.52.[26|23] 
 rw</value></property>
 {noformat}
 With this value, neither 206.190.52.26 nor 206.190.52.23 can mount NFS and 
 act as an NFS client. In short, no host can mount NFS with this regex value; 
 every attempt fails with an access-denied error.
 {noformat}
 $ sudo su - -c "mount -o 
 soft,proto=tcp,vers=3,rsize=1048576,wsize=1048576,nolock 206.190.52.23:/ 
 /tmp/tmp_mnt" root
 mount.nfs: access denied by server while mounting 206.190.52.23:/
 {noformat}
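
 The access-denied behavior follows from regex semantics: in
 {{206.190.52.[26|23]}} the brackets form a character class matching a single
 character ('2', '6', '|' or '3'), not an alternation, so neither full
 address matches. A grouped pattern {{206.190.52.(26|23)}} matches both. A
 quick check (assuming the exports matcher uses java.util.regex full-match
 semantics):

```java
import java.util.regex.Pattern;

public class ExportsRegexDemo {
    public static void main(String[] args) {
        // Character class: matches exactly ONE of '2', '6', '|', '3' after
        // the last dot, so the full host address never matches.
        System.out.println(
            Pattern.matches("206\\.190\\.52\\.[26|23]", "206.190.52.26")); // false
        // Grouping with alternation: matches both intended hosts.
        System.out.println(
            Pattern.matches("206\\.190\\.52\\.(26|23)", "206.190.52.26")); // true
        System.out.println(
            Pattern.matches("206\\.190\\.52\\.(26|23)", "206.190.52.23")); // true
    }
}
```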



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7982) huge non dfs space used

2015-03-24 Thread regis le bretonnic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

regis le bretonnic updated HDFS-7982:
-
Description: 
Hi...

I'm trying to load an external textfile table into an internal ORC table using 
Hive. My process failed with the following error:
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
/tmp/hive/blablabla could only be replicated to 0 nodes instead of 
minReplication (=1).  There are 3 datanode(s) running and no node(s) are 
excluded in this operation.

After investigation, I saw that the quantity of non-DFS space grows more and 
more, until the job fails.
Just before failing, the non-DFS used space reaches 54.GB on each datanode. I 
still have DFS space remaining.

Here is the dfsadmin report just before the issue:

[hdfs@hadoop-01 data]$ hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Configured Capacity: 475193597952 (442.56 GB)
Present Capacity: 290358095182 (270.42 GB)
DFS Remaining: 228619903369 (212.92 GB)
DFS Used: 61738191813 (57.50 GB)
DFS Used%: 21.26%
Under replicated blocks: 38
Blocks with corrupt replicas: 0
Missing blocks: 0

-
Live datanodes (3):

Name: 192.168.3.36:50010 (hadoop-04.X.local)
Hostname: hadoop-04.X.local
Decommission Status : Normal
Configured Capacity: 158397865984 (147.52 GB)
DFS Used: 20591481196 (19.18 GB)
Non DFS Used: 61522602976 (57.30 GB)
DFS Remaining: 76283781812 (71.04 GB)
DFS Used%: 13.00%
DFS Remaining%: 48.16%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 182
Last contact: Tue Mar 24 10:56:05 CET 2015


Name: 192.168.3.35:50010 (hadoop-03.X.local)
Hostname: hadoop-03.X.local
Decommission Status : Normal
Configured Capacity: 158397865984 (147.52 GB)
DFS Used: 20555853589 (19.14 GB)
Non DFS Used: 61790296136 (57.55 GB)
DFS Remaining: 76051716259 (70.83 GB)
DFS Used%: 12.98%
DFS Remaining%: 48.01%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 184
Last contact: Tue Mar 24 10:56:05 CET 2015


Name: 192.168.3.37:50010 (hadoop-05.X.local)
Hostname: hadoop-05.X.local
Decommission Status : Normal
Configured Capacity: 158397865984 (147.52 GB)
DFS Used: 20590857028 (19.18 GB)
Non DFS Used: 61522603658 (57.30 GB)
DFS Remaining: 76284405298 (71.05 GB)
DFS Used%: 13.00%
DFS Remaining%: 48.16%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 182
Last contact: Tue Mar 24 10:56:05 CET 2015

I expected to find the temporary space used within my filesystem (i.e. /data).
I found the DFS usage under /data/hadoop/hdfs/data (19GB) but no trace of 57GB 
of non-DFS usage...

[root@hadoop-05 hadoop]# df -h /data
FilesystemSize  Used Avail Use% Mounted on
/dev/sdb1 148G   20G  121G  14% /data

I also checked dfs.datanode.du.reserved that is set to zero.
[root@hadoop-05 hadoop]# hdfs getconf -confkey dfs.datanode.du.reserved
0

Did I miss something? Where is the non-DFS space on Linux? Why did I get the 
message "could only be replicated to 0 nodes instead of minReplication (=1).  
There are 3 datanode(s) running and no node(s) are excluded in this operation." 
knowing that the datanodes were up and running with DFS space still remaining?

This error is blocking us.
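
One detail worth knowing when reading the report: "Non DFS Used" is not
measured on disk; it is derived per datanode as configured capacity minus
DFS used minus DFS remaining. Anything that shrinks "DFS Remaining" (such
as space the datanode reserves for replicas currently being written)
therefore shows up as non-DFS usage even though no ordinary files occupy
it. The figures above check out (numbers taken from the hadoop-04 entry):

```java
public class NonDfsUsedDemo {
    public static void main(String[] args) {
        // Values from the dfsadmin report above for hadoop-04.X.local
        long configured = 158397865984L; // Configured Capacity
        long dfsUsed    = 20591481196L;  // DFS Used
        long remaining  = 76283781812L;  // DFS Remaining
        // Non DFS Used is derived, not measured:
        long nonDfs = configured - dfsUsed - remaining;
        System.out.println(nonDfs); // 61522602976, i.e. the reported 57.30 GB
    }
}
```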

 huge non dfs space used
 ---

 Key: HDFS-7982
 URL: https://issues.apache.org/jira/browse/HDFS-7982
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.6.0
Reporter: regis le bretonnic

 Hi...
 I'm trying to load an external textfile table into an internal ORC table 
 using Hive. My process failed with the following error:
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
 org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
 /tmp/hive/blablabla could only be replicated to 0 nodes instead of 
 minReplication (=1).  There are 3 datanode(s) running and no node(s) are 
 excluded in this operation.
 After investigation, I saw that the quantity of non-DFS space grows more 
 and more, until the job fails.
 Just before failing, the non-DFS used space reaches 54.GB on each datanode. 
 I still have DFS space remaining.
 Here is the dfsadmin report just before the issue:
 [hdfs@hadoop-01 data]$ hadoop dfsadmin -report
 DEPRECATED: Use of this script to execute hdfs command is deprecated.
 Instead use the hdfs command for it.
 Configured Capacity: 475193597952 (442.56 GB)
 Present Capacity: 290358095182 (270.42 GB)
 DFS Remaining: 228619903369 (212.92 GB)
 DFS 

[jira] [Created] (HDFS-7981) getStoragePolicy() regards HOT policy as EC policy

2015-03-24 Thread Takanobu Asanuma (JIRA)
Takanobu Asanuma created HDFS-7981:
--

 Summary: getStoragePolicy() regards HOT policy as EC policy
 Key: HDFS-7981
 URL: https://issues.apache.org/jira/browse/HDFS-7981
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma


Now, {{testStoragePoliciesCK()}} in {{TestFsck}} fails in the EC branch.

A part of the test result is below:
{noformat}
A part of the test result is odd:
Blocks NOT satisfying the specified storage policy:
Storage Policy  Specified Storage Policy  # of blocks  % of blocks
DISK:3(EC)      HOT                       1            33.%
{noformat}

I found that {{getStoragePolicy(StorageType[] storageTypes)}} in 
{{StoragePolicySummary}} regards HOT policy as EC policy. We should fix this 
problem.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-7980) Incremental BlockReport will dramatically slow down the startup of a namenode

2015-03-24 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su reassigned HDFS-7980:
---

Assignee: Walter Su

 Incremental BlockReport will dramatically slow down the startup of  a namenode
 --

 Key: HDFS-7980
 URL: https://issues.apache.org/jira/browse/HDFS-7980
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Hui Zheng
Assignee: Walter Su

 In the current implementation the datanode calls the 
 reportReceivedDeletedBlocks() method (an incremental block report) before 
 calling the bpNamenode.blockReport() method. So in a large (several thousands 
 of datanodes) and busy cluster it will slow down the startup of the namenode 
 by more than one hour. 
 {code}
 List<DatanodeCommand> blockReport() throws IOException {
   // send block report if timer has expired.
   final long startTime = now();
   if (startTime - lastBlockReport <= dnConf.blockReportInterval) {
     return null;
   }
   final ArrayList<DatanodeCommand> cmds = new ArrayList<DatanodeCommand>();
   // Flush any block information that precedes the block report. Otherwise
   // we have a chance that we will miss the delHint information
   // or we will report an RBW replica after the BlockReport already reports
   // a FINALIZED one.
   reportReceivedDeletedBlocks();
   lastDeletedReport = startTime;
   ...
   // Send the reports to the NN.
   int numReportsSent = 0;
   int numRPCs = 0;
   boolean success = false;
   long brSendStartTime = now();
   try {
     if (totalBlockCount < dnConf.blockReportSplitThreshold) {
       // Below split threshold, send all reports in a single message.
       DatanodeCommand cmd = bpNamenode.blockReport(
           bpRegistration, bpos.getBlockPoolId(), reports);
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7956) Improve logging for DatanodeRegistration.

2015-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377632#comment-14377632
 ] 

Hudson commented on HDFS-7956:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #142 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/142/])
HDFS-7956. Improve logging for DatanodeRegistration. Contributed by Plamen 
Jeliazkov. (shv: rev 970ee3fc56a68afade98017296cf9d057f225a46)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/DatanodeRegistration.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Improve logging for DatanodeRegistration.
 -

 Key: HDFS-7956
 URL: https://issues.apache.org/jira/browse/HDFS-7956
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: Konstantin Shvachko
Assignee: Plamen Jeliazkov
 Fix For: 2.7.0

 Attachments: HDFS-7956.1.patch


 {{DatanodeRegistration.toString()}} 
 prints only its address without the port; it should print its full address, 
 similar to {{NamenodeRegistration}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7960) The full block report should prune zombie storages even if they're not empty

2015-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377636#comment-14377636
 ] 

Hudson commented on HDFS-7960:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #142 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/142/])
HDFS-7960. The full block report should prune zombie storages even if they're 
not empty. Contributed by Colin McCabe and Eddy Xu. (wang: rev 
50ee8f4e67a66aa77c5359182f61f3e951844db6)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestNameNodePrunesMissingStorages.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDatanodeProtocolRetryPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolServerSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBPOfferService.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/DatanodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestNNHandlesBlockReportPerStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockHasMultipleReplicasOnSameDN.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/BlockReportTestBase.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/TestBlockListAsLongs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestTriggerBlockReport.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestNNHandlesCombinedBlockReport.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/DatanodeProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/BlockReportContext.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencing.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDeadDatanode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDnRespectsBlockReportSplitThreshold.java


 The full block report should prune zombie storages even if they're not empty
 

 Key: HDFS-7960
 URL: https://issues.apache.org/jira/browse/HDFS-7960
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Colin Patrick McCabe
Priority: Critical
 Fix For: 2.7.0

 Attachments: HDFS-7960.002.patch, HDFS-7960.003.patch, 
 HDFS-7960.004.patch, HDFS-7960.005.patch, HDFS-7960.006.patch, 
 HDFS-7960.007.patch, HDFS-7960.008.patch


 The full block report should prune zombie storages even if they're not empty. 
  We have seen cases in production where zombie storages have not been pruned 
 subsequent to HDFS-7575.  This could arise any time the NameNode thinks there 
 is a block in some old storage which is actually not there.  In this case, 
 the block will not show up in the new storage (once old is renamed to new) 
 and the old storage will linger forever as a zombie, even with the HDFS-7596 
 fix applied.  This also happens with datanode hotplug, when a drive is 
 removed.  In this case, an entire storage (volume) goes away but the blocks 
 do not show up in another storage on the same datanode.



--
This message was sent by Atlassian JIRA

[jira] [Commented] (HDFS-7884) NullPointerException in BlockSender

2015-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377634#comment-14377634
 ] 

Hudson commented on HDFS-7884:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #142 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/142/])
HDFS-7884. Fix NullPointerException in BlockSender when the generation stamp 
provided by the client is larger than the one stored in the datanode.  
Contributed by Brahma Reddy Battula (szetszwo: rev 
d7e3c3364eb904f55a878bc14c331952f9dadab2)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 NullPointerException in BlockSender
 ---

 Key: HDFS-7884
 URL: https://issues.apache.org/jira/browse/HDFS-7884
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Tsz Wo Nicholas Sze
Assignee: Brahma Reddy Battula
Priority: Blocker
 Fix For: 2.7.0

 Attachments: HDFS-7884-002.patch, HDFS-7884.patch, 
 h7884_20150313.patch, 
 org.apache.hadoop.hdfs.TestAppendSnapshotTruncate-output.txt


 {noformat}
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:264)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:506)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:249)
   at java.lang.Thread.run(Thread.java:745)
 {noformat}
 BlockSender.java:264 is shown below
 {code}
   this.volumeRef = datanode.data.getVolume(block).obtainReference();
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3325) When configuring dfs.namenode.safemode.threshold-pct to a value greater or equal to 1 there is mismatch in the UI report

2015-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377638#comment-14377638
 ] 

Hudson commented on HDFS-3325:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #142 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/142/])
HDFS-3325. When configuring 'dfs.namenode.safemode.threshold-pct' to a value 
greater or equal to 1 there is mismatch in the UI report (Contributed by 
J.Andreina) (vinayakumarb: rev c6c396fcd69514ba93583268b2633557c3d74a47)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHASafeMode.java


 When configuring dfs.namenode.safemode.threshold-pct to a value greater or 
 equal to 1 there is mismatch in the UI report
 --

 Key: HDFS-3325
 URL: https://issues.apache.org/jira/browse/HDFS-3325
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: J.Andreina
Assignee: J.Andreina
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-3325.1.patch, HDFS-3325.2.patch


 When dfs.namenode.safemode.threshold-pct is configured to n, the 
 Namenode stays in safemode until n percent of the blocks satisfying the 
 minimal replication requirement defined by dfs.namenode.replication.min 
 have been reported to the namenode.
 But the UI displays that n percent of total blocks + 1 additional blocks 
 are needed to come out of safemode.
 Scenario 1:
 
 Configurations:
 dfs.namenode.safemode.threshold-pct = 2
 dfs.replication = 2
 dfs.namenode.replication.min =2
 Step 1: Start NN,DN1,DN2
 Step 2: Write a file a.txt which has got 167 blocks
 step 3: Stop NN,DN1,DN2
 Step 4: start NN
 In the UI report, the number of blocks needed to come out of safemode and the 
 number of blocks actually present differ.
 {noformat}
 Cluster Summary
 Security is OFF 
 Safe mode is ON. The reported blocks 0 needs additional 335 blocks to reach 
 the threshold 2. of total blocks 167. Safe mode will be turned off 
 automatically.
 2 files and directories, 167 blocks = 169 total.
 Heap Memory used 57.05 MB is 2% of Commited Heap Memory 2 GB. Max Heap Memory 
 is 2 GB. 
 Non Heap Memory used 23.37 MB is 17% of Commited Non Heap Memory 130.44 MB. 
 Max Non Heap Memory is 176 MB.{noformat}
 Scenario 2:
 ===
 Configurations:
 dfs.namenode.safemode.threshold-pct = 1
 dfs.replication = 2
 dfs.namenode.replication.min =2
 Step 1: Start NN,DN1,DN2
 Step 2: Write a file a.txt which has got 167 blocks
 step 3: Stop NN,DN1,DN2
 Step 4: start NN
 In the UI report, the number of blocks needed to come out of safemode and the 
 number of blocks actually present differ.
 {noformat}
 Cluster Summary
 Security is OFF 
 Safe mode is ON. The reported blocks 0 needs additional 168 blocks to reach 
 the threshold 1. of total blocks 167. Safe mode will be turned off 
 automatically.
 2 files and directories, 167 blocks = 169 total.
 Heap Memory used 56.2 MB is 2% of Commited Heap Memory 2 GB. Max Heap Memory 
 is 2 GB. 
 Non Heap Memory used 23.37 MB is 17% of Commited Non Heap Memory 130.44 MB. 
 Max Non Heap Memory is 176 MB.{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7917) Use file to replace data dirs in test to simulate a disk failure.

2015-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377635#comment-14377635
 ] 

Hudson commented on HDFS-7917:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #142 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/142/])
HDFS-7917. Use file to replace data dirs in test to simulate a disk failure. 
Contributed by Lei (Eddy) Xu. (cnauroth: rev 
2c238ae4e00371ef76582b007bb0e20ac8455d9c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeHotSwapVolumes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureToleration.java


 Use file to replace data dirs in test to simulate a disk failure. 
 --

 Key: HDFS-7917
 URL: https://issues.apache.org/jira/browse/HDFS-7917
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7917.000.patch, HDFS-7917.001.patch, 
 HDFS-7917.002.patch


 Currently, in several tests, e.g., {{TestDataNodeVolumeFailureXXX}} and 
 {{TestDataNodeHotSwapVolumes}}, we simulate a disk failure by setting a 
 directory's executable permission to false. However, this raises the risk that, 
 if the cleanup code is not executed, the directory cannot easily be 
 removed by the Jenkins job. 
 Since in {{DiskChecker#checkDirAccess}}:
 {code}
 private static void checkDirAccess(File dir) throws DiskErrorException {
   if (!dir.isDirectory()) {
     throw new DiskErrorException("Not a directory: " + dir.toString());
   }
   checkAccessByFileMethods(dir);
 }
 {code}
 We can replace the DN data directory with a file to achieve the same fault 
 injection goal, while being safer to clean up in any circumstance. 
 Additionally, as [~cnauroth] suggested: 
 bq. That might even let us enable some of these tests that are skipped on 
 Windows, because Windows allows access for the owner even after permissions 
 have been stripped.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7942) NFS: support regexp grouping in nfs.exports.allowed.hosts

2015-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14377642#comment-14377642
 ] 

Hudson commented on HDFS-7942:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #142 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/142/])
HDFS-7942. NFS: support regexp grouping in nfs.exports.allowed.hosts. 
Contributed by Brandon Li (brandonli: rev 
36af4a913c97113bd0486c48e1cb864c5cba46fd)
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/NfsExports.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/nfs/TestNfsExports.java
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsNfsGateway.md


 NFS: support regexp grouping in nfs.exports.allowed.hosts
 -

 Key: HDFS-7942
 URL: https://issues.apache.org/jira/browse/HDFS-7942
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.6.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 2.7.0

 Attachments: HDFS-7942.001.patch, HDFS-7942.002.patch


 Thanks, [~yeshavora], for reporting this problem.
 Set regex value in nfs.exports.allowed.hosts property.
 {noformat}
 <property><name>nfs.exports.allowed.hosts</name><value>206.190.52.[26|23] 
 rw</value></property>
 {noformat}
 With this value, neither 206.190.52.26 nor 206.190.52.23 can mount NFS and 
 act as an NFS client. In short, no host can mount NFS with this regex value; 
 the mount fails with an access-denied error.
 {noformat}
 $ sudo su - -c "mount -o 
 soft,proto=tcp,vers=3,rsize=1048576,wsize=1048576,nolock 206.190.52.23:/ 
 /tmp/tmp_mnt" root
 mount.nfs: access denied by server while mounting 206.190.52.23:/
 {noformat}
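The failure is consistent with `[26|23]` being a regular-expression character class rather than a group: a character class matches exactly one character, so it can never match the two-character suffixes 26 or 23. A minimal sketch (plain `java.util.regex`, not the actual NfsExports matcher, and the anchored-match assumption is mine) illustrating the difference grouping support makes:

```java
import java.util.regex.Pattern;

public class RegexGroupingDemo {
    // Returns true if the host matches the pattern anchored at both ends,
    // mirroring (as an assumption) how an exports matcher would test a host.
    static boolean matches(String pattern, String host) {
        return Pattern.compile(pattern, Pattern.CASE_INSENSITIVE).matcher(host).matches();
    }

    public static void main(String[] args) {
        // "[26|23]" is a character class: it matches exactly ONE of '2','6','|','3',
        // so it cannot match the two-character suffix "26".
        System.out.println(matches("206.190.52.[26|23]", "206.190.52.26")); // false
        // "(26|23)" is a group with alternation, the form HDFS-7942 enables.
        System.out.println(matches("206.190.52.(26|23)", "206.190.52.26")); // true
        System.out.println(matches("206.190.52.(26|23)", "206.190.52.23")); // true
    }
}
```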



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7980) Incremental BlockReport will dramatically slow down the startup of a namenode

2015-03-24 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14377730#comment-14377730
 ] 

Walter Su commented on HDFS-7980:
-

Hi, [~huizane]. It looks like a block report storm, which may cause NameNode 
full GC. Have you tried {{dfs.blockreport.initialDelay}}?
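For reference, staggering initial full block reports is configured with the {{dfs.blockreport.initialDelay}} property (value in seconds) in hdfs-site.xml; the delay value below is only illustrative:

```xml
<property>
  <name>dfs.blockreport.initialDelay</name>
  <!-- Each DataNode delays its first full block report by a random
       interval in [0, 600) seconds, spreading out the report storm. -->
  <value>600</value>
</property>
```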

 Incremental BlockReport will dramatically slow down the startup of  a namenode
 --

 Key: HDFS-7980
 URL: https://issues.apache.org/jira/browse/HDFS-7980
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Hui Zheng
Assignee: Walter Su

 In the current implementation the datanode calls the 
 reportReceivedDeletedBlocks() method, which is an IncrementalBlockReport, 
 before calling the bpNamenode.blockReport() method. So in a large (several 
 thousand datanodes) and busy cluster it will slow down (by more than one hour) 
 the startup of the namenode. 
 {code}
 List<DatanodeCommand> blockReport() throws IOException {
   // send block report if timer has expired.
   final long startTime = now();
   if (startTime - lastBlockReport <= dnConf.blockReportInterval) {
     return null;
   }
   final ArrayList<DatanodeCommand> cmds = new ArrayList<DatanodeCommand>();
   // Flush any block information that precedes the block report. Otherwise
   // we have a chance that we will miss the delHint information
   // or we will report an RBW replica after the BlockReport already reports
   // a FINALIZED one.
   reportReceivedDeletedBlocks();
   lastDeletedReport = startTime;
   ...
   // Send the reports to the NN.
   int numReportsSent = 0;
   int numRPCs = 0;
   boolean success = false;
   long brSendStartTime = now();
   try {
     if (totalBlockCount < dnConf.blockReportSplitThreshold) {
       // Below split threshold, send all reports in a single message.
       DatanodeCommand cmd = bpNamenode.blockReport(
           bpRegistration, bpos.getBlockPoolId(), reports);
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7980) Incremental BlockReport will dramatically slow down the startup of a namenode

2015-03-24 Thread Hui Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14377756#comment-14377756
 ] 

Hui Zheng commented on HDFS-7980:
-

Hi [~walter.k.su]
dfs.blockreport.initialDelay is 600 (10 minutes).
dfs.blockreport.intervalMsec is 36000000 (10 hours).

 Incremental BlockReport will dramatically slow down the startup of  a namenode
 --

 Key: HDFS-7980
 URL: https://issues.apache.org/jira/browse/HDFS-7980
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Hui Zheng
Assignee: Walter Su

 In the current implementation the datanode calls the 
 reportReceivedDeletedBlocks() method, which is an IncrementalBlockReport, 
 before calling the bpNamenode.blockReport() method. So in a large (several 
 thousand datanodes) and busy cluster it will slow down (by more than one hour) 
 the startup of the namenode. 
 {code}
 List<DatanodeCommand> blockReport() throws IOException {
   // send block report if timer has expired.
   final long startTime = now();
   if (startTime - lastBlockReport <= dnConf.blockReportInterval) {
     return null;
   }
   final ArrayList<DatanodeCommand> cmds = new ArrayList<DatanodeCommand>();
   // Flush any block information that precedes the block report. Otherwise
   // we have a chance that we will miss the delHint information
   // or we will report an RBW replica after the BlockReport already reports
   // a FINALIZED one.
   reportReceivedDeletedBlocks();
   lastDeletedReport = startTime;
   ...
   // Send the reports to the NN.
   int numReportsSent = 0;
   int numRPCs = 0;
   boolean success = false;
   long brSendStartTime = now();
   try {
     if (totalBlockCount < dnConf.blockReportSplitThreshold) {
       // Below split threshold, send all reports in a single message.
       DatanodeCommand cmd = bpNamenode.blockReport(
           bpRegistration, bpos.getBlockPoolId(), reports);
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7980) Incremental BlockReport will dramatically slow down the startup of a namenode

2015-03-24 Thread Hui Zheng (JIRA)
Hui Zheng created HDFS-7980:
---

 Summary: Incremental BlockReport will dramatically slow down the 
startup of  a namenode
 Key: HDFS-7980
 URL: https://issues.apache.org/jira/browse/HDFS-7980
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Hui Zheng


In the current implementation the datanode calls the 
reportReceivedDeletedBlocks() method, which is an IncrementalBlockReport, before 
calling the bpNamenode.blockReport() method. So in a large (several thousand 
datanodes) and busy cluster it will slow down (by more than one hour) the startup 
of the namenode. 

{code}
List<DatanodeCommand> blockReport() throws IOException {
  // send block report if timer has expired.
  final long startTime = now();
  if (startTime - lastBlockReport <= dnConf.blockReportInterval) {
    return null;
  }

  final ArrayList<DatanodeCommand> cmds = new ArrayList<DatanodeCommand>();

  // Flush any block information that precedes the block report. Otherwise
  // we have a chance that we will miss the delHint information
  // or we will report an RBW replica after the BlockReport already reports
  // a FINALIZED one.
  reportReceivedDeletedBlocks();
  lastDeletedReport = startTime;
  ...
  // Send the reports to the NN.
  int numReportsSent = 0;
  int numRPCs = 0;
  boolean success = false;
  long brSendStartTime = now();
  try {
    if (totalBlockCount < dnConf.blockReportSplitThreshold) {
      // Below split threshold, send all reports in a single message.
      DatanodeCommand cmd = bpNamenode.blockReport(
          bpRegistration, bpos.getBlockPoolId(), reports);
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7956) Improve logging for DatanodeRegistration.

2015-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14377648#comment-14377648
 ] 

Hudson commented on HDFS-7956:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #876 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/876/])
HDFS-7956. Improve logging for DatanodeRegistration. Contributed by Plamen 
Jeliazkov. (shv: rev 970ee3fc56a68afade98017296cf9d057f225a46)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/DatanodeRegistration.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Improve logging for DatanodeRegistration.
 -

 Key: HDFS-7956
 URL: https://issues.apache.org/jira/browse/HDFS-7956
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: Konstantin Shvachko
Assignee: Plamen Jeliazkov
 Fix For: 2.7.0

 Attachments: HDFS-7956.1.patch


 {{DatanodeRegistration.toString()}} 
 prints only its address without the port; it should print its full address, 
 similar to {{NamenodeRegistration}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7884) NullPointerException in BlockSender

2015-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14377650#comment-14377650
 ] 

Hudson commented on HDFS-7884:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #876 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/876/])
HDFS-7884. Fix NullPointerException in BlockSender when the generation stamp 
provided by the client is larger than the one stored in the datanode.  
Contributed by Brahma Reddy Battula (szetszwo: rev 
d7e3c3364eb904f55a878bc14c331952f9dadab2)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java


 NullPointerException in BlockSender
 ---

 Key: HDFS-7884
 URL: https://issues.apache.org/jira/browse/HDFS-7884
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Tsz Wo Nicholas Sze
Assignee: Brahma Reddy Battula
Priority: Blocker
 Fix For: 2.7.0

 Attachments: HDFS-7884-002.patch, HDFS-7884.patch, 
 h7884_20150313.patch, 
 org.apache.hadoop.hdfs.TestAppendSnapshotTruncate-output.txt


 {noformat}
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:264)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:506)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:249)
   at java.lang.Thread.run(Thread.java:745)
 {noformat}
 BlockSender.java:264 is shown below
 {code}
   this.volumeRef = datanode.data.getVolume(block).obtainReference();
 {code}
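A hedged sketch of the fix direction named in the commit message (fail with a descriptive IOException instead of an NPE when the volume lookup returns null); all names below are illustrative, not the actual HDFS code:

```java
import java.io.IOException;

public class VolumeLookupGuard {
    interface Volume { Object obtainReference(); }

    // Hypothetical lookup: returns null when the replica is not found,
    // e.g. when the client-supplied generation stamp is newer than the
    // one stored on the datanode.
    static Volume getVolume(String block) { return null; }

    static Object obtainVolumeReference(String block) throws IOException {
        Volume v = getVolume(block);
        if (v == null) {
            // Descriptive error instead of the NPE at BlockSender.<init>
            throw new IOException("No volume found for block " + block);
        }
        return v.obtainReference();
    }

    // True when the lookup fails with a clean IOException rather than an NPE.
    static boolean failsCleanly(String block) {
        try {
            obtainVolumeReference(block);
            return false;
        } catch (IOException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(failsCleanly("blk_1073741825")); // true
    }
}
```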



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3325) When configuring dfs.namenode.safemode.threshold-pct to a value greater or equal to 1 there is mismatch in the UI report

2015-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14377654#comment-14377654
 ] 

Hudson commented on HDFS-3325:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #876 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/876/])
HDFS-3325. When configuring 'dfs.namenode.safemode.threshold-pct' to a value 
greater or equal to 1 there is mismatch in the UI report (Contributed by 
J.Andreina) (vinayakumarb: rev c6c396fcd69514ba93583268b2633557c3d74a47)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHASafeMode.java


 When configuring dfs.namenode.safemode.threshold-pct to a value greater or 
 equal to 1 there is mismatch in the UI report
 --

 Key: HDFS-3325
 URL: https://issues.apache.org/jira/browse/HDFS-3325
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: J.Andreina
Assignee: J.Andreina
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-3325.1.patch, HDFS-3325.2.patch


 When dfs.namenode.safemode.threshold-pct is configured to n,
 the Namenode will stay in safe mode until n percent of the blocks that satisfy 
 the minimal replication requirement defined by 
 dfs.namenode.replication.min have been reported to the namenode.
 But the UI displays that n percent of total blocks + 1 blocks are additionally 
 needed to come out of safe mode.
 Scenario 1:
 
 Configurations:
 dfs.namenode.safemode.threshold-pct = 2
 dfs.replication = 2
 dfs.namenode.replication.min =2
 Step 1: Start NN,DN1,DN2
 Step 2: Write a file a.txt which has got 167 blocks
 step 3: Stop NN,DN1,DN2
 Step 4: start NN
 In the UI report, the number of blocks needed to come out of safe mode and the 
 number of blocks actually present differ.
 {noformat}
 Cluster Summary
 Security is OFF 
 Safe mode is ON. The reported blocks 0 needs additional 335 blocks to reach 
 the threshold 2.0000 of total blocks 167. Safe mode will be turned off 
 automatically.
 2 files and directories, 167 blocks = 169 total.
 Heap Memory used 57.05 MB is 2% of Commited Heap Memory 2 GB. Max Heap Memory 
 is 2 GB. 
 Non Heap Memory used 23.37 MB is 17% of Commited Non Heap Memory 130.44 MB. 
 Max Non Heap Memory is 176 MB.{noformat}
 Scenario 2:
 ===
 Configurations:
 dfs.namenode.safemode.threshold-pct = 1
 dfs.replication = 2
 dfs.namenode.replication.min =2
 Step 1: Start NN,DN1,DN2
 Step 2: Write a file a.txt which has got 167 blocks
 step 3: Stop NN,DN1,DN2
 Step 4: start NN
 In the UI report, the number of blocks needed to come out of safe mode and the 
 number of blocks actually present differ.
 {noformat}
 Cluster Summary
 Security is OFF 
 Safe mode is ON. The reported blocks 0 needs additional 168 blocks to reach 
 the threshold 1.0000 of total blocks 167. Safe mode will be turned off 
 automatically.
 2 files and directories, 167 blocks = 169 total.
 Heap Memory used 56.2 MB is 2% of Commited Heap Memory 2 GB. Max Heap Memory 
 is 2 GB. 
 Non Heap Memory used 23.37 MB is 17% of Commited Non Heap Memory 130.44 MB. 
 Max Non Heap Memory is 176 MB.{noformat}
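The mismatch in both scenarios is the same off-by-one: the UI reports one more block than threshold-pct × total. A small arithmetic sketch (hypothetical helper names, assuming the "+ 1" behavior described above) reproducing the numbers from the two scenarios:

```java
public class SafeModeMath {
    // Blocks required by the configured threshold (what the NN should need).
    static long required(double thresholdPct, long totalBlocks) {
        return (long) (thresholdPct * totalBlocks);
    }

    // What the UI reported as "additional blocks needed" per this issue:
    // one more than the computed threshold, minus the blocks already reported.
    static long reportedByUi(double thresholdPct, long totalBlocks, long reported) {
        return required(thresholdPct, totalBlocks) + 1 - reported;
    }

    public static void main(String[] args) {
        System.out.println(reportedByUi(2.0, 167, 0)); // 335, matching scenario 1
        System.out.println(reportedByUi(1.0, 167, 0)); // 168, matching scenario 2
    }
}
```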



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7960) The full block report should prune zombie storages even if they're not empty

2015-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14377652#comment-14377652
 ] 

Hudson commented on HDFS-7960:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #876 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/876/])
HDFS-7960. The full block report should prune zombie storages even if they're 
not empty. Contributed by Colin McCabe and Eddy Xu. (wang: rev 
50ee8f4e67a66aa77c5359182f61f3e951844db6)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolServerSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/BlockReportContext.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDatanodeProtocolRetryPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/TestBlockListAsLongs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestNameNodePrunesMissingStorages.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/DatanodeProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestNNHandlesCombinedBlockReport.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/DatanodeProtocol.proto
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestTriggerBlockReport.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockHasMultipleReplicasOnSameDN.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestNNHandlesBlockReportPerStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDnRespectsBlockReportSplitThreshold.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/BlockReportTestBase.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDeadDatanode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBPOfferService.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencing.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java


 The full block report should prune zombie storages even if they're not empty
 

 Key: HDFS-7960
 URL: https://issues.apache.org/jira/browse/HDFS-7960
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Colin Patrick McCabe
Priority: Critical
 Fix For: 2.7.0

 Attachments: HDFS-7960.002.patch, HDFS-7960.003.patch, 
 HDFS-7960.004.patch, HDFS-7960.005.patch, HDFS-7960.006.patch, 
 HDFS-7960.007.patch, HDFS-7960.008.patch


 The full block report should prune zombie storages even if they're not empty. 
  We have seen cases in production where zombie storages have not been pruned 
 subsequent to HDFS-7575.  This could arise any time the NameNode thinks there 
 is a block in some old storage which is actually not there.  In this case, 
 the block will not show up in the new storage (once old is renamed to new) 
 and the old storage will linger forever as a zombie, even with the HDFS-7596 
 fix applied.  This also happens with datanode hotplug, when a drive is 
 removed.  In this case, an entire storage (volume) goes away but the blocks 
 do not show up in another storage on the same datanode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7980) Incremental BlockReport will dramatically slow down the startup of a namenode

2015-03-24 Thread Hui Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14377689#comment-14377689
 ] 

Hui Zheng commented on HDFS-7980:
-

In our environment there are over 3000 datanodes, 100 million files and 150 
million blocks.
When all the datanodes are running, it takes more than one hour to restart 
the namenode (the time is almost entirely spent on block reports).
But if we 
first stop all the datanodes,
then restart the namenode,
and finally start all the datanodes after the namenode finishes loading the editlog,
it takes only around 20 minutes. 

 Incremental BlockReport will dramatically slow down the startup of  a namenode
 --

 Key: HDFS-7980
 URL: https://issues.apache.org/jira/browse/HDFS-7980
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Hui Zheng
Assignee: Walter Su

 In the current implementation the datanode calls the 
 reportReceivedDeletedBlocks() method, which is an IncrementalBlockReport, 
 before calling the bpNamenode.blockReport() method. So in a large (several 
 thousand datanodes) and busy cluster it will slow down (by more than one hour) 
 the startup of the namenode. 
 {code}
 List<DatanodeCommand> blockReport() throws IOException {
   // send block report if timer has expired.
   final long startTime = now();
   if (startTime - lastBlockReport <= dnConf.blockReportInterval) {
     return null;
   }
   final ArrayList<DatanodeCommand> cmds = new ArrayList<DatanodeCommand>();
   // Flush any block information that precedes the block report. Otherwise
   // we have a chance that we will miss the delHint information
   // or we will report an RBW replica after the BlockReport already reports
   // a FINALIZED one.
   reportReceivedDeletedBlocks();
   lastDeletedReport = startTime;
   ...
   // Send the reports to the NN.
   int numReportsSent = 0;
   int numRPCs = 0;
   boolean success = false;
   long brSendStartTime = now();
   try {
     if (totalBlockCount < dnConf.blockReportSplitThreshold) {
       // Below split threshold, send all reports in a single message.
       DatanodeCommand cmd = bpNamenode.blockReport(
           bpRegistration, bpos.getBlockPoolId(), reports);
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7687) Change fsck to support EC files

2015-03-24 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14377686#comment-14377686
 ] 

Takanobu Asanuma commented on HDFS-7687:


I created a JIRA about this problem in HDFS-7981.

 Change fsck to support EC files
 ---

 Key: HDFS-7687
 URL: https://issues.apache.org/jira/browse/HDFS-7687
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Tsz Wo Nicholas Sze
Assignee: Takanobu Asanuma

 We need to change fsck so that it can detect under-replicated and corrupted 
 EC files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7981) getStoragePolicy() regards HOT policy as EC policy

2015-03-24 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-7981:

Fix Version/s: HDFS-7285

 getStoragePolicy() regards HOT policy as EC policy
 --

 Key: HDFS-7981
 URL: https://issues.apache.org/jira/browse/HDFS-7981
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma
 Fix For: HDFS-7285


 Now, {{testStoragePoliciesCK()}} in {{TestFsck}} fails in the EC branch.
 A part of the test result is below:
 {noformat}
 Blocks NOT satisfying the specified storage policy:
 Storage Policy            Specified Storage Policy    # of blocks    % of blocks
 DISK:3(EC)                HOT                         1              33.3333%
 {noformat}
 I found that {{getStoragePolicy(StorageType[] storageTypes)}} in 
 {{StoragePolicySummary}} regards the HOT policy as an EC policy. We should fix 
 this problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7715) Implement the Hitchhiker erasure coding algorithm

2015-03-24 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14377598#comment-14377598
 ] 

Kai Zheng commented on HDFS-7715:
-

Took a quick look at the large patch; my comments so far are:
1. Why do we need to change the hadoop-common/pom file?
2. Please clean up and refine your code considering: 1) public or 
protected variables and methods; 2) coding style; 3) comments.
3. Maybe we can have a utility class for the piggyback stuff to simplify the HH 
coders.

My major concern is that we're implementing the algorithm and the 3 modes from 
the bottom up, which might be avoided: since the underlying math uses XOR and 
Reed-Solomon calculation, the existing XOR and RS raw coders may possibly be 
reused. In this way the HH coders can be much simplified and, more importantly, 
the native XOR and RS raw coders can be utilized for their performance benefits. 

To make it much easier to review, would you:
1. Attach a patch with only the HH basics plus the simplest mode coder, so that 
it's minimized to ease understanding.
2. Please don't use zip format; attach the patch directly.

 Implement the Hitchhiker erasure coding algorithm
 -

 Key: HDFS-7715
 URL: https://issues.apache.org/jira/browse/HDFS-7715
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: jack liuquan
 Attachments: HDFS-7715.zip


 [Hitchhiker | 
 http://www.eecs.berkeley.edu/~nihar/publications/Hitchhiker_SIGCOMM14.pdf] is 
 a new erasure coding algorithm developed as a research project at UC 
 Berkeley. It has been shown to reduce network traffic and disk I/O by 25%-45% 
 during data reconstruction. This JIRA aims to introduce Hitchhiker to the 
 HDFS-EC framework, as one of the pluggable codec algorithms.
 The existing implementation is based on HDFS-RAID. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7960) The full block report should prune zombie storages even if they're not empty

2015-03-24 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14377728#comment-14377728
 ] 

Yi Liu commented on HDFS-7960:
--

Thanks Colin and Eddy for the patch, and Andrew for reviewing.

 The full block report should prune zombie storages even if they're not empty
 

 Key: HDFS-7960
 URL: https://issues.apache.org/jira/browse/HDFS-7960
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Colin Patrick McCabe
Priority: Critical
 Fix For: 2.7.0

 Attachments: HDFS-7960.002.patch, HDFS-7960.003.patch, 
 HDFS-7960.004.patch, HDFS-7960.005.patch, HDFS-7960.006.patch, 
 HDFS-7960.007.patch, HDFS-7960.008.patch


 The full block report should prune zombie storages even if they're not empty. 
  We have seen cases in production where zombie storages have not been pruned 
 subsequent to HDFS-7575.  This could arise any time the NameNode thinks there 
 is a block in some old storage which is actually not there.  In this case, 
 the block will not show up in the new storage (once old is renamed to new) 
 and the old storage will linger forever as a zombie, even with the HDFS-7596 
 fix applied.  This also happens with datanode hotplug, when a drive is 
 removed.  In this case, an entire storage (volume) goes away but the blocks 
 do not show up in another storage on the same datanode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7616) Change FSImage to support BlockGroup

2015-03-24 Thread Takuya Fukudome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takuya Fukudome updated HDFS-7616:
--
Attachment: HDFS-7616.2.patch

Hi [~zhz]

I attached a new patch. I added a test which checks the information of the 
striped block. The test passes with the latest HDFS-7285 branch. I think that 
FSImage already supports BlockGroup as of HDFS-7749. Could you review it? Thank 
you.

 Change FSImage to support BlockGroup
 

 Key: HDFS-7616
 URL: https://issues.apache.org/jira/browse/HDFS-7616
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Tsz Wo Nicholas Sze
Assignee: Takuya Fukudome
 Attachments: HDFS-7616.1.patch, HDFS-7616.2.patch


 We need to change FSImage to support BlockGroup and other new structures 
 introduced for EC.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7985) WebHDFS should be always enabled

2015-03-24 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14378746#comment-14378746
 ] 

Andrew Wang commented on HDFS-7985:
---

Removing a configuration option sounds incompatible. This probably shouldn't go 
into 2.x; I noticed no target version is set right now. The default is also 
already true, so this should be a pretty uncommon issue.

In terms of the patch, the DEFAULT value in DFSConfigKeys should also be 
removed.

 WebHDFS should be always enabled
 

 Key: HDFS-7985
 URL: https://issues.apache.org/jira/browse/HDFS-7985
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
 Attachments: HDFS-7985-032415.patch


 Since 2.7 the HTML 5 UI depends on WebHDFS. Disabling WebHDFS will break the 
 UI, thus WebHDFS should be always enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7986) Allow files / directories to be deleted

2015-03-24 Thread Ravi Prakash (JIRA)
Ravi Prakash created HDFS-7986:
--

 Summary: Allow files / directories to be deleted
 Key: HDFS-7986
 URL: https://issues.apache.org/jira/browse/HDFS-7986
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ravi Prakash


Users should be able to delete files or directories using the Namenode UI.

I'm thinking there ought to be a confirmation dialog. For directories, recursive 
should be set to true. Initially there should be no option to skipTrash.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7779) Improve the HDFS Web UI browser to allow chowning / chmoding and setting replication

2015-03-24 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HDFS-7779:
---
Attachment: HDFS-7779.01.patch

Here's a patch which adds the features to 
# chmod
# chown
# chgrp
# change replication

 Improve the HDFS Web UI browser to allow chowning / chmoding and setting 
 replication
 

 Key: HDFS-7779
 URL: https://issues.apache.org/jira/browse/HDFS-7779
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ravi Prakash
 Attachments: Chmod.png, Chown.png, HDFS-7779.01.patch


 This JIRA converts the owner, group and replication fields into 
 contenteditable fields which can be modified by the user from the browser 
 itself. It too uses the WebHDFS to affect these changes



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7988) Replace usage of ExactSizeInputStream with LimitInputStream.

2015-03-24 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-7988:
---

 Summary: Replace usage of ExactSizeInputStream with 
LimitInputStream.
 Key: HDFS-7988
 URL: https://issues.apache.org/jira/browse/HDFS-7988
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Chris Nauroth
Priority: Minor


HDFS has a class named {{ExactSizeInputStream}} used in the protobuf 
translation layer.  This class wraps another {{InputStream}}, but constrains 
the readable bytes to a specified length.  The functionality is nearly 
identical to {{LimitInputStream}} in Hadoop Common, with some differences in 
semantics regarding premature EOF.  This issue proposes to eliminate 
{{ExactSizeInputStream}} in favor of {{LimitInputStream}} to reduce the size of 
the codebase.
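The shared idea behind both classes — wrap a stream and cap the bytes it exposes — can be sketched as follows. This is a simplified illustration, not the actual Hadoop {{ExactSizeInputStream}} or {{LimitInputStream}} code, and it does not model their differing premature-EOF semantics:

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

public class LimitedStreamSketch {
    /** Minimal stand-in for a limit-style stream: caps readable bytes at 'limit'. */
    static class Limited extends FilterInputStream {
        private long left;
        Limited(InputStream in, long limit) { super(in); this.left = limit; }
        @Override public int read() throws IOException {
            if (left <= 0) return -1;   // report EOF once the limit is reached
            int b = in.read();
            if (b != -1) left--;
            return b;
        }
    }

    /** Counts how many bytes are visible through a wrapper with the given limit. */
    static int drain(InputStream in, long limit) {
        try {
            Limited s = new Limited(in, limit);
            int count = 0;
            while (s.read() != -1) count++;
            return count;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Only 4 of the 10 underlying bytes are readable through the wrapper.
        System.out.println(drain(new ByteArrayInputStream(new byte[10]), 4)); // prints 4
    }
}
```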



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7985) WebHDFS should be always enabled

2015-03-24 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14378820#comment-14378820
 ] 

Haohui Mai commented on HDFS-7985:
--

bq. Removing a configuration option sounds incompatible.

In general yes, but (1) it enables a feature, which means that it is backward 
compatible, and (2) setting {{dfs.webhdfs.enabled}} to false becomes an invalid 
configuration now. The code should no longer allow this configuration.

Another thing is that it has little impact on rolling upgrades. Maybe we can 
document it in the release notes?


 WebHDFS should be always enabled
 

 Key: HDFS-7985
 URL: https://issues.apache.org/jira/browse/HDFS-7985
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
 Attachments: HDFS-7985-032415.patch


 Since 2.7 the HTML 5 UI depends on WebHDFS. Disabling WebHDFS will break the 
 UI, thus WebHDFS should be always enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7713) Implement mkdirs in the HDFS Web UI

2015-03-24 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-7713:
-
   Resolution: Fixed
Fix Version/s: 2.8.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~raviprak] for the 
contribution.

 Implement mkdirs in the HDFS Web UI
 ---

 Key: HDFS-7713
 URL: https://issues.apache.org/jira/browse/HDFS-7713
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Fix For: 2.8.0

 Attachments: HDFS-7713.01.patch, HDFS-7713.02.patch, 
 HDFS-7713.03.patch, HDFS-7713.04.patch, HDFS-7713.05.patch, 
 HDFS-7713.06.patch, HDFS-7713.07.patch, HDFS-7713.08.patch


 This sub-task JIRA is for improving the NN HTML5 UI to allow the user to 
 create directories. It uses WebHDFS and adds to the great work done in 
 HDFS-6252



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-7988) Replace usage of ExactSizeInputStream with LimitInputStream.

2015-03-24 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su reassigned HDFS-7988:
---

Assignee: Walter Su

 Replace usage of ExactSizeInputStream with LimitInputStream.
 

 Key: HDFS-7988
 URL: https://issues.apache.org/jira/browse/HDFS-7988
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Chris Nauroth
Assignee: Walter Su
Priority: Minor

 HDFS has a class named {{ExactSizeInputStream}} used in the protobuf 
 translation layer.  This class wraps another {{InputStream}}, but constrains 
 the readable bytes to a specified length.  The functionality is nearly 
 identical to {{LimitInputStream}} in Hadoop Common, with some differences in 
 semantics regarding premature EOF.  This issue proposes to eliminate 
 {{ExactSizeInputStream}} in favor of {{LimitInputStream}} to reduce the size 
 of the codebase.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6826) Plugin interface to enable delegation of HDFS authorization assertions

2015-03-24 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14378924#comment-14378924
 ] 

Jitendra Nath Pandey commented on HDFS-6826:


I have committed this to trunk, branch-2 and branch-2.7. Thanks to [~asuresh] 
for driving this to completion through several iterations on the patch, and 
thanks to [~tucu00] for initial versions of the patch.

 Plugin interface to enable delegation of HDFS authorization assertions
 --

 Key: HDFS-6826
 URL: https://issues.apache.org/jira/browse/HDFS-6826
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: security
Affects Versions: 2.4.1
Reporter: Alejandro Abdelnur
Assignee: Arun Suresh
 Attachments: HDFS-6826-idea.patch, HDFS-6826-idea2.patch, 
 HDFS-6826-permchecker.patch, HDFS-6826.10.patch, HDFS-6826.11.patch, 
 HDFS-6826.12.patch, HDFS-6826.13.patch, HDFS-6826.14.patch, 
 HDFS-6826.15.patch, HDFS-6826.16.patch, HDFS-6826v3.patch, HDFS-6826v4.patch, 
 HDFS-6826v5.patch, HDFS-6826v6.patch, HDFS-6826v7.1.patch, 
 HDFS-6826v7.2.patch, HDFS-6826v7.3.patch, HDFS-6826v7.4.patch, 
 HDFS-6826v7.5.patch, HDFS-6826v7.6.patch, HDFS-6826v7.patch, 
 HDFS-6826v8.patch, HDFS-6826v9.patch, 
 HDFSPluggableAuthorizationProposal-v2.pdf, 
 HDFSPluggableAuthorizationProposal.pdf


 When Hbase data, HiveMetaStore data or Search data is accessed via services 
 (Hbase region servers, HiveServer2, Impala, Solr) the services can enforce 
 permissions on corresponding entities (databases, tables, views, columns, 
 search collections, documents). It is desirable, when the data is accessed 
 directly by users accessing the underlying data files (i.e. from a MapReduce 
 job), that the permission of the data files map to the permissions of the 
 corresponding data entity (i.e. table, column family or search collection).
 To enable this we need to have the necessary hooks in place in the NameNode 
 to delegate authorization to an external system that can map HDFS 
 files/directories to data entities and resolve their permissions based on the 
 data entities permissions.
 I’ll be posting a design proposal in the next few days.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7987) Allow files / directories to be moved

2015-03-24 Thread Ravi Prakash (JIRA)
Ravi Prakash created HDFS-7987:
--

 Summary: Allow files / directories to be moved
 Key: HDFS-7987
 URL: https://issues.apache.org/jira/browse/HDFS-7987
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ravi Prakash


Users should be able to move files / directories using the Namenode UI. WebHDFS 
supports a rename operation that can be used for this purpose.
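WebHDFS exposes rename as {{PUT /webhdfs/v1/<src>?op=RENAME&destination=<dst>}}. A sketch of building that request URL (the host, port, and paths below are hypothetical):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class WebHdfsRenameSketch {
    /** Builds a WebHDFS RENAME request URL; the request is sent as an HTTP PUT. */
    static String renameUrl(String host, int port, String src, String dst) {
        return String.format("http://%s:%d/webhdfs/v1%s?op=RENAME&destination=%s",
                host, port, src,
                URLEncoder.encode(dst, StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        // Hypothetical NameNode HTTP address, source path, and destination path.
        System.out.println(renameUrl("nn.example.com", 9870, "/user/alice/a.txt",
                "/user/alice/b.txt"));
    }
}
```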



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7985) WebHDFS should be always enabled

2015-03-24 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14378873#comment-14378873
 ] 

Andrew Wang commented on HDFS-7985:
---

It disables the webui, but you can still use HDFS without the webui. Granted, 
it's less useful, but that doesn't mean it's an invalid configuration.

Since it smacks of incompatibility, isn't it better to not remove this config 
in 2.x? Since it defaults to true, I don't see a false value biting us very 
often. Another more friendly (and compatible) approach would be improving the 
error message when WebHDFS is disabled.

 WebHDFS should be always enabled
 

 Key: HDFS-7985
 URL: https://issues.apache.org/jira/browse/HDFS-7985
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
 Attachments: HDFS-7985-032415.patch


 Since 2.7 the HTML 5 UI depends on WebHDFS. Disabling WebHDFS will break the 
 UI, thus WebHDFS should be always enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6826) Plugin interface to enable delegation of HDFS authorization assertions

2015-03-24 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDFS-6826:
---
   Resolution: Fixed
Fix Version/s: 2.7.0
   Status: Resolved  (was: Patch Available)

 Plugin interface to enable delegation of HDFS authorization assertions
 --

 Key: HDFS-6826
 URL: https://issues.apache.org/jira/browse/HDFS-6826
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: security
Affects Versions: 2.4.1
Reporter: Alejandro Abdelnur
Assignee: Arun Suresh
 Fix For: 2.7.0

 Attachments: HDFS-6826-idea.patch, HDFS-6826-idea2.patch, 
 HDFS-6826-permchecker.patch, HDFS-6826.10.patch, HDFS-6826.11.patch, 
 HDFS-6826.12.patch, HDFS-6826.13.patch, HDFS-6826.14.patch, 
 HDFS-6826.15.patch, HDFS-6826.16.patch, HDFS-6826v3.patch, HDFS-6826v4.patch, 
 HDFS-6826v5.patch, HDFS-6826v6.patch, HDFS-6826v7.1.patch, 
 HDFS-6826v7.2.patch, HDFS-6826v7.3.patch, HDFS-6826v7.4.patch, 
 HDFS-6826v7.5.patch, HDFS-6826v7.6.patch, HDFS-6826v7.patch, 
 HDFS-6826v8.patch, HDFS-6826v9.patch, 
 HDFSPluggableAuthorizationProposal-v2.pdf, 
 HDFSPluggableAuthorizationProposal.pdf


 When Hbase data, HiveMetaStore data or Search data is accessed via services 
 (Hbase region servers, HiveServer2, Impala, Solr) the services can enforce 
 permissions on corresponding entities (databases, tables, views, columns, 
 search collections, documents). It is desirable, when the data is accessed 
 directly by users accessing the underlying data files (i.e. from a MapReduce 
 job), that the permission of the data files map to the permissions of the 
 corresponding data entity (i.e. table, column family or search collection).
 To enable this we need to have the necessary hooks in place in the NameNode 
 to delegate authorization to an external system that can map HDFS 
 files/directories to data entities and resolve their permissions based on the 
 data entities permissions.
 I’ll be posting a design proposal in the next few days.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7748) Separate ECN flags from the Status in the DataTransferPipelineAck

2015-03-24 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-7748:
---
Attachment: hdfs-7748.005.patch

Changes:
1) Based on [~wheat9]'s suggestion, added a long that binds the status and ECN 
(flags) values together in the Java code. This will hopefully help in debugging, 
since we don't have to look for these values in separate places.

2) Added a new test case to verify the above functionality.

3) Made the flags a bit mask and a repeated uint64 in the protobuf space.
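The idea of binding the status and the ECN flags into one long can be sketched as below. The field widths are illustrative only, not the actual layout used by the patch:

```java
public class AckHeaderSketch {
    // Illustrative layout: low 32 bits carry the pipeline ack status code,
    // high 32 bits carry the ECN flag bits.
    static long pack(int status, int ecnFlags) {
        return ((long) ecnFlags << 32) | (status & 0xFFFFFFFFL);
    }

    static int status(long header)   { return (int) (header & 0xFFFFFFFFL); }
    static int ecnFlags(long header) { return (int) (header >>> 32); }

    public static void main(String[] args) {
        long h = pack(0, 1); // e.g. a success status (0) with a congestion bit set
        System.out.println(status(h) + " " + ecnFlags(h)); // prints 0 1
    }
}
```

Keeping both fields in one value means a single number in a log line is enough to recover both the status and the congestion state.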



 Separate ECN flags from the Status in the DataTransferPipelineAck
 -

 Key: HDFS-7748
 URL: https://issues.apache.org/jira/browse/HDFS-7748
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Anu Engineer
Priority: Blocker
 Attachments: hdfs-7748.001.patch, hdfs-7748.002.patch, 
 hdfs-7748.003.patch, hdfs-7748.004.patch, hdfs-7748.005.patch


 Prior to the discussions on HDFS-7270, the old clients might fail to talk to 
 the newer server when ECN is turned on. This jira proposes to separate the 
 ECN flags in a separate protobuf field to make the ack compatible on both 
 versions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7748) Separate ECN flags from the Status in the DataTransferPipelineAck

2015-03-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14378904#comment-14378904
 ] 

Hadoop QA commented on HDFS-7748:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12707052/hdfs-7748.005.patch
  against trunk revision 53a28af.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10059//console

This message is automatically generated.

 Separate ECN flags from the Status in the DataTransferPipelineAck
 -

 Key: HDFS-7748
 URL: https://issues.apache.org/jira/browse/HDFS-7748
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Anu Engineer
Priority: Blocker
 Attachments: hdfs-7748.001.patch, hdfs-7748.002.patch, 
 hdfs-7748.003.patch, hdfs-7748.004.patch, hdfs-7748.005.patch


 Prior to the discussions on HDFS-7270, the old clients might fail to talk to 
 the newer server when ECN is turned on. This jira proposes to separate the 
 ECN flags in a separate protobuf field to make the ack compatible on both 
 versions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7713) Improve the HDFS Web UI browser to allow creating dirs

2015-03-24 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14378823#comment-14378823
 ] 

Haohui Mai commented on HDFS-7713:
--

+1. I'll commit it shortly.

 Improve the HDFS Web UI browser to allow creating dirs
 --

 Key: HDFS-7713
 URL: https://issues.apache.org/jira/browse/HDFS-7713
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Attachments: HDFS-7713.01.patch, HDFS-7713.02.patch, 
 HDFS-7713.03.patch, HDFS-7713.04.patch, HDFS-7713.05.patch, 
 HDFS-7713.06.patch, HDFS-7713.07.patch, HDFS-7713.08.patch


 This sub-task JIRA is for improving the NN HTML5 UI to allow the user to 
 create directories. It uses WebHDFS and adds to the great work done in 
 HDFS-6252



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7713) Implement mkdirs in the HDFS Web UI

2015-03-24 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-7713:
-
Summary: Implement mkdirs in the HDFS Web UI  (was: Improve the HDFS Web UI 
browser to allow creating dirs)

 Implement mkdirs in the HDFS Web UI
 ---

 Key: HDFS-7713
 URL: https://issues.apache.org/jira/browse/HDFS-7713
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Attachments: HDFS-7713.01.patch, HDFS-7713.02.patch, 
 HDFS-7713.03.patch, HDFS-7713.04.patch, HDFS-7713.05.patch, 
 HDFS-7713.06.patch, HDFS-7713.07.patch, HDFS-7713.08.patch


 This sub-task JIRA is for improving the NN HTML5 UI to allow the user to 
 create directories. It uses WebHDFS and adds to the great work done in 
 HDFS-6252



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7713) Implement mkdirs in the HDFS Web UI

2015-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14378846#comment-14378846
 ] 

Hudson commented on HDFS-7713:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7423 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7423/])
HDFS-7713. Implement mkdirs in the HDFS Web UI. Contributed by Ravi Prakash. 
(wheat9: rev e38ef70fbc60f062992c834b1cca6e9ba4baef6e)
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Implement mkdirs in the HDFS Web UI
 ---

 Key: HDFS-7713
 URL: https://issues.apache.org/jira/browse/HDFS-7713
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Fix For: 2.8.0

 Attachments: HDFS-7713.01.patch, HDFS-7713.02.patch, 
 HDFS-7713.03.patch, HDFS-7713.04.patch, HDFS-7713.05.patch, 
 HDFS-7713.06.patch, HDFS-7713.07.patch, HDFS-7713.08.patch


 This sub-task JIRA is for improving the NN HTML5 UI to allow the user to 
 create directories. It uses WebHDFS and adds to the great work done in 
 HDFS-6252



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7501) TransactionsSinceLastCheckpoint can be negative on SBNs

2015-03-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14378845#comment-14378845
 ] 

Hadoop QA commented on HDFS-7501:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12706977/HDFS-7501-3.patch
  against trunk revision a16bfff.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestDFSClientRetries
  org.apache.hadoop.tracing.TestTracing

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10054//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10054//console

This message is automatically generated.

 TransactionsSinceLastCheckpoint can be negative on SBNs
 ---

 Key: HDFS-7501
 URL: https://issues.apache.org/jira/browse/HDFS-7501
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.5.0
Reporter: Harsh J
Assignee: Gautam Gopalakrishnan
Priority: Trivial
 Attachments: HDFS-7501-2.patch, HDFS-7501-3.patch, HDFS-7501.patch


 The metric TransactionsSinceLastCheckpoint is derived as FSEditLog.txid minus 
 NNStorage.mostRecentCheckpointTxId.
 In Standby mode, the former does not increment beyond the loaded or 
 last-when-active value, but the latter does change due to checkpoints done 
 regularly in this mode. Thereby, the SBN will eventually end up showing 
 negative values for TransactionsSinceLastCheckpoint.
 This is not an issue as the metric only makes sense to be monitored on the 
 Active NameNode, but we should perhaps just show the value 0 by detecting if 
 the NN is in SBN form, as allowing a negative number is confusing to view 
 within a chart that tracks it.
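 The clamp-to-zero behavior proposed above can be sketched as follows; this is a hypothetical helper, not the actual NameNode metrics code:

```java
public class CheckpointMetricSketch {
    /**
     * Transactions since the last checkpoint, clamped to 0 on a standby NN,
     * where mostRecentCheckpointTxId can run ahead of the last written txid.
     */
    static long txnsSinceLastCheckpoint(long lastWrittenTxId,
                                        long mostRecentCheckpointTxId,
                                        boolean isStandby) {
        long diff = lastWrittenTxId - mostRecentCheckpointTxId;
        return isStandby ? Math.max(0L, diff) : diff;
    }

    public static void main(String[] args) {
        // Standby: checkpoints keep advancing while the loaded txid stays put.
        System.out.println(txnsSinceLastCheckpoint(100, 150, true));  // prints 0
        // Active: the metric is a normal positive count.
        System.out.println(txnsSinceLastCheckpoint(200, 150, false)); // prints 50
    }
}
```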



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7985) WebHDFS should be always enabled

2015-03-24 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HDFS-7985:

Attachment: HDFS-7985-032415-1.patch

Removed the default value (true) of DFS_PERMISSIONS_ENABLED_KEY in the latest 
patch. If there are special concerns about compatibility we may want to just 
put it in trunk? 

 WebHDFS should be always enabled
 

 Key: HDFS-7985
 URL: https://issues.apache.org/jira/browse/HDFS-7985
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
 Attachments: HDFS-7985-032415-1.patch, HDFS-7985-032415.patch


 Since 2.7 the HTML 5 UI depends on WebHDFS. Disabling WebHDFS will break the 
 UI, thus WebHDFS should be always enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7986) Allow files / directories to be deleted from the NameNode UI

2015-03-24 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-7986:
--
Summary: Allow files / directories to be deleted from the NameNode UI  
(was: Allow files / directories to be deleted)

 Allow files / directories to be deleted from the NameNode UI
 

 Key: HDFS-7986
 URL: https://issues.apache.org/jira/browse/HDFS-7986
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ravi Prakash

 Users should be able to delete files or directories using the Namenode UI.
 I'm thinking there ought to be a confirmation dialog. For directories 
 recursive should be set to true. Initially there should be no option to 
 skipTrash.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6826) Plugin interface to enable delegation of HDFS authorization assertions

2015-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14378901#comment-14378901
 ] 

Hudson commented on HDFS-6826:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7424 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7424/])
HDFS-6826. Plugin interface to enable delegation of HDFS authorization 
assertions. Contributed by Arun Suresh. (jitendra: rev 
53a28afe293e5bf185c8d4f2c7aea212e66015c2)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSPermissionChecker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFileAttributes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeAttributeProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodesInPath.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributeProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/DefaultINodeAttributesProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectoryAttributes.java


 Plugin interface to enable delegation of HDFS authorization assertions
 --

 Key: HDFS-6826
 URL: https://issues.apache.org/jira/browse/HDFS-6826
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: security
Affects Versions: 2.4.1
Reporter: Alejandro Abdelnur
Assignee: Arun Suresh
 Attachments: HDFS-6826-idea.patch, HDFS-6826-idea2.patch, 
 HDFS-6826-permchecker.patch, HDFS-6826.10.patch, HDFS-6826.11.patch, 
 HDFS-6826.12.patch, HDFS-6826.13.patch, HDFS-6826.14.patch, 
 HDFS-6826.15.patch, HDFS-6826.16.patch, HDFS-6826v3.patch, HDFS-6826v4.patch, 
 HDFS-6826v5.patch, HDFS-6826v6.patch, HDFS-6826v7.1.patch, 
 HDFS-6826v7.2.patch, HDFS-6826v7.3.patch, HDFS-6826v7.4.patch, 
 HDFS-6826v7.5.patch, HDFS-6826v7.6.patch, HDFS-6826v7.patch, 
 HDFS-6826v8.patch, HDFS-6826v9.patch, 
 HDFSPluggableAuthorizationProposal-v2.pdf, 
 HDFSPluggableAuthorizationProposal.pdf


 When Hbase data, HiveMetaStore data or Search data is accessed via services 
 (Hbase region servers, HiveServer2, Impala, Solr) the services can enforce 
 permissions on corresponding entities (databases, tables, views, columns, 
 search collections, documents). It is desirable, when the data is accessed 
 directly by users accessing the underlying data files (i.e. from a MapReduce 
 job), that the permission of the data files map to the permissions of the 
 corresponding data entity (i.e. table, column family or search collection).
 To enable this we need to have the necessary hooks in place in the NameNode 
 to delegate authorization to an external system that can map HDFS 
 files/directories to data entities and resolve their permissions based on the 
 data entities permissions.
 I’ll be posting a design proposal in the next few days.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7985) WebHDFS should be always enabled

2015-03-24 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-7985:
-
Target Version/s: 3.0.0

 WebHDFS should be always enabled
 

 Key: HDFS-7985
 URL: https://issues.apache.org/jira/browse/HDFS-7985
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
 Attachments: HDFS-7985-032415-1.patch, HDFS-7985-032415.patch


 Since 2.7 the HTML 5 UI depends on WebHDFS. Disabling WebHDFS will break the 
 UI, thus WebHDFS should be always enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7960) The full block report should prune zombie storages even if they're not empty

2015-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377997#comment-14377997
 ] 

Hudson commented on HDFS-7960:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #142 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/142/])
HDFS-7960. The full block report should prune zombie storages even if they're 
not empty. Contributed by Colin McCabe and Eddy Xu. (wang: rev 
50ee8f4e67a66aa77c5359182f61f3e951844db6)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/DatanodeProtocol.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/DatanodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolServerSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestNameNodePrunesMissingStorages.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDatanodeProtocolRetryPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/TestBlockListAsLongs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestTriggerBlockReport.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/BlockReportContext.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDnRespectsBlockReportSplitThreshold.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestNNHandlesCombinedBlockReport.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBPOfferService.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencing.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestNNHandlesBlockReportPerStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/BlockReportTestBase.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockHasMultipleReplicasOnSameDN.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDeadDatanode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java


 The full block report should prune zombie storages even if they're not empty
 

 Key: HDFS-7960
 URL: https://issues.apache.org/jira/browse/HDFS-7960
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Colin Patrick McCabe
Priority: Critical
 Fix For: 2.7.0

 Attachments: HDFS-7960.002.patch, HDFS-7960.003.patch, 
 HDFS-7960.004.patch, HDFS-7960.005.patch, HDFS-7960.006.patch, 
 HDFS-7960.007.patch, HDFS-7960.008.patch


 The full block report should prune zombie storages even if they're not empty. 
  We have seen cases in production where zombie storages have not been pruned 
 subsequent to HDFS-7575.  This could arise any time the NameNode thinks there 
 is a block in some old storage which is actually not there.  In this case, 
 the block will not show up in the new storage (once old is renamed to new) 
 and the old storage will linger forever as a zombie, even with the HDFS-7596 
 fix applied.  This also happens with datanode hotplug, when a drive is 
 removed.  In this case, an entire storage (volume) goes away but the blocks 
 do not show up in another storage on the same datanode.



--
This message was sent by Atlassian JIRA

[jira] [Resolved] (HDFS-7968) Properly encode WebHDFS requests coming from the NN UI

2015-03-24 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved HDFS-7968.

Resolution: Duplicate

Lol! The patch was independently an exact duplicate of HDFS-7953.

 Properly encode WebHDFS requests coming from the NN UI
 --

 Key: HDFS-7968
 URL: https://issues.apache.org/jira/browse/HDFS-7968
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Attachments: HDFS-7968.01.patch


 Thanks to [~wheat9] for pointing out this 
 [issue|https://issues.apache.org/jira/browse/HDFS-7713?focusedCommentId=14371788&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14371788]
  e.g. you cannot descend into a directory named {{asdf#df+1}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7884) NullPointerException in BlockSender

2015-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14377945#comment-14377945
 ] 

Hudson commented on HDFS-7884:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #133 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/133/])
HDFS-7884. Fix NullPointerException in BlockSender when the generation stamp 
provided by the client is larger than the one stored in the datanode.  
Contributed by Brahma Reddy Battula (szetszwo: rev 
d7e3c3364eb904f55a878bc14c331952f9dadab2)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 NullPointerException in BlockSender
 ---

 Key: HDFS-7884
 URL: https://issues.apache.org/jira/browse/HDFS-7884
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Tsz Wo Nicholas Sze
Assignee: Brahma Reddy Battula
Priority: Blocker
 Fix For: 2.7.0

 Attachments: HDFS-7884-002.patch, HDFS-7884.patch, 
 h7884_20150313.patch, 
 org.apache.hadoop.hdfs.TestAppendSnapshotTruncate-output.txt


 {noformat}
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hdfs.server.datanode.BlockSender.init(BlockSender.java:264)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:506)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:249)
   at java.lang.Thread.run(Thread.java:745)
 {noformat}
 BlockSender.java:264 is shown below
 {code}
   this.volumeRef = datanode.data.getVolume(block).obtainReference();
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7942) NFS: support regexp grouping in nfs.exports.allowed.hosts

2015-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14377953#comment-14377953
 ] 

Hudson commented on HDFS-7942:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #133 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/133/])
HDFS-7942. NFS: support regexp grouping in nfs.exports.allowed.hosts. 
Contributed by Brandon Li (brandonli: rev 
36af4a913c97113bd0486c48e1cb864c5cba46fd)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/nfs/TestNfsExports.java
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsNfsGateway.md
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/NfsExports.java


 NFS: support regexp grouping in nfs.exports.allowed.hosts
 -

 Key: HDFS-7942
 URL: https://issues.apache.org/jira/browse/HDFS-7942
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.6.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 2.7.0

 Attachments: HDFS-7942.001.patch, HDFS-7942.002.patch


 Thanks, [~yeshavora], for reporting this problem.
 Set regex value in nfs.exports.allowed.hosts property.
 {noformat}
 <property><name>nfs.exports.allowed.hosts</name><value>206.190.52.[26|23] rw</value></property>
 {noformat}
 With this value, neither 206.190.52.26 nor 206.190.52.23 can mount NFS and 
 act as an NFS client. In short, no host can mount NFS with this regex value; 
 every attempt fails with an access-denied error.
 {noformat}
 $ sudo su - -c "mount -o 
 soft,proto=tcp,vers=3,rsize=1048576,wsize=1048576,nolock 206.190.52.23:/ 
 /tmp/tmp_mnt" root
 mount.nfs: access denied by server while mounting 206.190.52.23:/
 {noformat}
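The blanket denial above is what plain regex character-class semantics would produce: `[26|23]` matches exactly one character, never the two-digit suffix "26" or "23". A standalone sketch of that behavior (`hostMatches` is an invented helper, not the NfsExports API):

```java
// Illustrates why "206.190.52.[26|23]" denies every host when interpreted
// as a full-string regex, and why grouping "(26|23)" behaves as intended.
public class NfsRegexDemo {
    // Hypothetical helper: match a host against an exports pattern
    // as a full-string regular expression.
    static boolean hostMatches(String host, String regex) {
        return host.matches(regex);
    }

    public static void main(String[] args) {
        // "[26|23]" is a character class: it matches exactly ONE of the
        // characters '2', '6', '|', '3' — never the two-digit suffix "26".
        System.out.println(hostMatches("206.190.52.26", "206\\.190\\.52\\.[26|23]")); // false
        // Regexp grouping with alternation matches the intended hosts.
        System.out.println(hostMatches("206.190.52.26", "206\\.190\\.52\\.(26|23)")); // true
        System.out.println(hostMatches("206.190.52.23", "206\\.190\\.52\\.(26|23)")); // true
        System.out.println(hostMatches("206.190.52.24", "206\\.190\\.52\\.(26|23)")); // false
    }
}
```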



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3325) When configuring dfs.namenode.safemode.threshold-pct to a value greater or equal to 1 there is mismatch in the UI report

2015-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14377949#comment-14377949
 ] 

Hudson commented on HDFS-3325:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #133 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/133/])
HDFS-3325. When configuring 'dfs.namenode.safemode.threshold-pct' to a value 
greater or equal to 1 there is mismatch in the UI report (Contributed by 
J.Andreina) (vinayakumarb: rev c6c396fcd69514ba93583268b2633557c3d74a47)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHASafeMode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 When configuring dfs.namenode.safemode.threshold-pct to a value greater or 
 equal to 1 there is mismatch in the UI report
 --

 Key: HDFS-3325
 URL: https://issues.apache.org/jira/browse/HDFS-3325
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: J.Andreina
Assignee: J.Andreina
Priority: Minor
 Fix For: 2.8.0

 Attachments: HDFS-3325.1.patch, HDFS-3325.2.patch


 When dfs.namenode.safemode.threshold-pct is configured to n, the Namenode 
 will stay in safemode until n percent of the blocks that satisfy the minimal 
 replication requirement defined by dfs.namenode.replication.min have been 
 reported to the namenode.
 But the UI displays that (n * total blocks) + 1 blocks are additionally 
 needed to come out of safemode.
 Scenario 1:
 
 Configurations:
 dfs.namenode.safemode.threshold-pct = 2
 dfs.replication = 2
 dfs.namenode.replication.min =2
 Step 1: Start NN,DN1,DN2
 Step 2: Write a file a.txt which has got 167 blocks
 step 3: Stop NN,DN1,DN2
 Step 4: start NN
 In UI report the Number of blocks needed to come out of safemode and number 
 of blocks actually present is different.
 {noformat}
 Cluster Summary
 Security is OFF 
 Safe mode is ON. The reported blocks 0 needs additional 335 blocks to reach 
 the threshold 2. of total blocks 167. Safe mode will be turned off 
 automatically.
 2 files and directories, 167 blocks = 169 total.
 Heap Memory used 57.05 MB is 2% of Commited Heap Memory 2 GB. Max Heap Memory 
 is 2 GB. 
 Non Heap Memory used 23.37 MB is 17% of Commited Non Heap Memory 130.44 MB. 
 Max Non Heap Memory is 176 MB.{noformat}
 Scenario 2:
 ===
 Configurations:
 dfs.namenode.safemode.threshold-pct = 1
 dfs.replication = 2
 dfs.namenode.replication.min =2
 Step 1: Start NN,DN1,DN2
 Step 2: Write a file a.txt which has got 167 blocks
 step 3: Stop NN,DN1,DN2
 Step 4: start NN
 In UI report the Number of blocks needed to come out of safemode and number 
 of blocks actually present is different
 {noformat}
 Cluster Summary
 Security is OFF 
 Safe mode is ON. The reported blocks 0 needs additional 168 blocks to reach 
 the threshold 1. of total blocks 167. Safe mode will be turned off 
 automatically.
 2 files and directories, 167 blocks = 169 total.
 Heap Memory used 56.2 MB is 2% of Commited Heap Memory 2 GB. Max Heap Memory 
 is 2 GB. 
 Non Heap Memory used 23.37 MB is 17% of Commited Non Heap Memory 130.44 MB. 
 Max Non Heap Memory is 176 MB.{noformat}
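Both scenarios are consistent with the UI applying the threshold percentage as a raw multiplier plus one (2 × 167 + 1 = 335 and 1 × 167 + 1 = 168). A hypothetical arithmetic reconstruction, not the actual FSNamesystem code:

```java
public class SafemodeThresholdMismatch {
    // Hypothetical reconstruction of the buggy UI arithmetic described in
    // this report: the threshold percentage is applied as a raw multiplier,
    // plus one, instead of being handled sensibly for pct >= 1.
    static long blocksReportedAsNeeded(double thresholdPct, long totalBlocks) {
        return (long) (thresholdPct * totalBlocks) + 1;
    }

    public static void main(String[] args) {
        System.out.println(blocksReportedAsNeeded(2.0, 167)); // 335 (Scenario 1)
        System.out.println(blocksReportedAsNeeded(1.0, 167)); // 168 (Scenario 2)
    }
}
```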



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7960) The full block report should prune zombie storages even if they're not empty

2015-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14377947#comment-14377947
 ] 

Hudson commented on HDFS-7960:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #133 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/133/])
HDFS-7960. The full block report should prune zombie storages even if they're 
not empty. Contributed by Colin McCabe and Eddy Xu. (wang: rev 
50ee8f4e67a66aa77c5359182f61f3e951844db6)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestNameNodePrunesMissingStorages.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolServerSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDatanodeProtocolRetryPolicy.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/DatanodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBPOfferService.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencing.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestNNHandlesCombinedBlockReport.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDeadDatanode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/DatanodeProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestTriggerBlockReport.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/BlockReportTestBase.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDnRespectsBlockReportSplitThreshold.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockHasMultipleReplicasOnSameDN.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/BlockReportContext.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestNNHandlesBlockReportPerStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/TestBlockListAsLongs.java


 The full block report should prune zombie storages even if they're not empty
 

 Key: HDFS-7960
 URL: https://issues.apache.org/jira/browse/HDFS-7960
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Colin Patrick McCabe
Priority: Critical
 Fix For: 2.7.0

 Attachments: HDFS-7960.002.patch, HDFS-7960.003.patch, 
 HDFS-7960.004.patch, HDFS-7960.005.patch, HDFS-7960.006.patch, 
 HDFS-7960.007.patch, HDFS-7960.008.patch


 The full block report should prune zombie storages even if they're not empty. 
  We have seen cases in production where zombie storages have not been pruned 
 subsequent to HDFS-7575.  This could arise any time the NameNode thinks there 
 is a block in some old storage which is actually not there.  In this case, 
 the block will not show up in the new storage (once old is renamed to new) 
 and the old storage will linger forever as a zombie, even with the HDFS-7596 
 fix applied.  This also happens with datanode hotplug, when a drive is 
 removed.  In this case, an entire storage (volume) goes away but the blocks 
 do not show up in another storage on the same datanode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Commented] (HDFS-7917) Use file to replace data dirs in test to simulate a disk failure.

2015-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14377946#comment-14377946
 ] 

Hudson commented on HDFS-7917:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #133 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/133/])
HDFS-7917. Use file to replace data dirs in test to simulate a disk failure. 
Contributed by Lei (Eddy) Xu. (cnauroth: rev 
2c238ae4e00371ef76582b007bb0e20ac8455d9c)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeHotSwapVolumes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureToleration.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java


 Use file to replace data dirs in test to simulate a disk failure. 
 --

 Key: HDFS-7917
 URL: https://issues.apache.org/jira/browse/HDFS-7917
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7917.000.patch, HDFS-7917.001.patch, 
 HDFS-7917.002.patch


 Currently, in several tests, e.g., {{TestDataNodeVolumeFailureXXX}} and 
 {{TestDataNodeHotSwapVolumes}}, we simulate a disk failure by setting a 
 directory's executable permission to false. However, this raises the risk 
 that, if the cleanup code is not executed, the directory cannot easily be 
 removed by the Jenkins job. 
 Since in {{DiskChecker#checkDirAccess}}:
 {code}
 private static void checkDirAccess(File dir) throws DiskErrorException {
   if (!dir.isDirectory()) {
     throw new DiskErrorException("Not a directory: " + dir.toString());
   }
   checkAccessByFileMethods(dir);
 }
 {code}
 We can replace the DN data directory with a file to achieve the same fault 
 injection goal, while being safer to clean up in any circumstance. 
 Additionally, as [~cnauroth] suggested: 
 bq. That might even let us enable some of these tests that are skipped on 
 Windows, because Windows allows access for the owner even after permissions 
 have been stripped.
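The replace-directory-with-a-file idea can be sketched in a few lines. This is a simplified illustration (names invented, and it assumes the directory is empty; a real test helper would remove contents recursively before replacing the directory):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of the fault-injection idea: replace the DataNode data directory
// with a regular file so a dir.isDirectory() check fails, then restore it.
// Unlike stripping permissions, cleanup always succeeds.
public class DiskFailureSimulator {
    static void injectFailure(Path dataDir) throws IOException {
        Files.delete(dataDir);      // assumes empty dir for this sketch
        Files.createFile(dataDir);  // now "Not a directory" for any checker
    }

    static void restore(Path dataDir) throws IOException {
        Files.delete(dataDir);
        Files.createDirectory(dataDir);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("dn-data");
        injectFailure(dir);
        System.out.println(Files.isRegularFile(dir)); // true: dir check now fails
        restore(dir);
        System.out.println(Files.isDirectory(dir));   // true: cleanup is trivial
        Files.delete(dir);
    }
}
```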



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7501) TransactionsSinceLastCheckpoint can be negative on SBNs

2015-03-24 Thread Gautam Gopalakrishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14379024#comment-14379024
 ] 

Gautam Gopalakrishnan commented on HDFS-7501:
-

Thanks Harsh, sorry I couldn't get to this earlier.




 TransactionsSinceLastCheckpoint can be negative on SBNs
 ---

 Key: HDFS-7501
 URL: https://issues.apache.org/jira/browse/HDFS-7501
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.5.0
Reporter: Harsh J
Assignee: Gautam Gopalakrishnan
Priority: Trivial
 Attachments: HDFS-7501-2.patch, HDFS-7501-3.patch, HDFS-7501.patch


 The metric TransactionsSinceLastCheckpoint is derived as FSEditLog.txid minus 
 NNStorage.mostRecentCheckpointTxId.
 In Standby mode, the former does not increment beyond the loaded or 
 last-when-active value, but the latter does change due to checkpoints done 
 regularly in this mode. Thereby, the SBN will eventually end up showing 
 negative values for TransactionsSinceLastCheckpoint.
 This is not an issue as the metric only makes sense to monitor on the Active 
 NameNode, but we should perhaps just show the value 0 by detecting whether 
 the NN is in standby mode, as a negative number is confusing to view within 
 a chart that tracks it.
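The derivation described above can be written out numerically; a toy sketch with invented example values (not the FSNamesystem metric code):

```java
public class SbnMetricDemo {
    // Sketch of the described derivation: FSEditLog.txid minus
    // NNStorage.mostRecentCheckpointTxId. On a standby NN the edit-log txid
    // stays at its loaded value while checkpoints keep advancing the
    // most-recent-checkpoint txid, driving the metric negative.
    static long transactionsSinceLastCheckpoint(long editLogTxid,
                                                long lastCheckpointTxid) {
        return editLogTxid - lastCheckpointTxid;
    }

    public static void main(String[] args) {
        long loadedTxid = 1000;     // hypothetical: txid frozen at SBN load time
        long checkpointTxid = 1500; // hypothetical: checkpoint done while standby
        System.out.println(transactionsSinceLastCheckpoint(loadedTxid, checkpointTxid)); // -500
        // Proposed behavior: report 0 on a standby NN instead of a negative value.
        System.out.println(Math.max(0, transactionsSinceLastCheckpoint(loadedTxid, checkpointTxid))); // 0
    }
}
```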



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7907) Update DecommissionManager to support striped blocks

2015-03-24 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-7907:

Attachment: HDFS-7907.000.patch

Initial patch. The patch also tracks invalid and corrupt striped blocks.

 Update DecommissionManager to support striped blocks
 

 Key: HDFS-7907
 URL: https://issues.apache.org/jira/browse/HDFS-7907
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HDFS-7907.000.patch


 With recent changes from HDFS-7411, we need to update DecommissionManager to 
 support striped blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7796) Include X-editable for slick contenteditable fields in the web UI

2015-03-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14379030#comment-14379030
 ] 

Hadoop QA commented on HDFS-7796:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12698922/HDFS-7796.01.patch
  against trunk revision a16bfff.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.tracing.TestTracing

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10057//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10057//console

This message is automatically generated.

 Include X-editable for slick contenteditable fields in the web UI
 -

 Key: HDFS-7796
 URL: https://issues.apache.org/jira/browse/HDFS-7796
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Attachments: HDFS-7796.01.patch


 This JIRA is for including X-editable (https://vitalets.github.io/x-editable/) 
 in the Hadoop UI. It is released under the MIT license, so it's fine. We need 
 it to make the owner / group / replication and possibly other fields in the 
 UI easily editable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7985) WebHDFS should be always enabled

2015-03-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14379109#comment-14379109
 ] 

Hadoop QA commented on HDFS-7985:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12707013/HDFS-7985-032415.patch
  against trunk revision a16bfff.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 14 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The following test timeouts occurred in 
hadoop-hdfs-project/hadoop-hdfs:

org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10058//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10058//console

This message is automatically generated.

 WebHDFS should be always enabled
 

 Key: HDFS-7985
 URL: https://issues.apache.org/jira/browse/HDFS-7985
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
 Attachments: HDFS-7985-032415-1.patch, HDFS-7985-032415.patch


 Since 2.7 the HTML 5 UI depends on WebHDFS. Disabling WebHDFS will break the 
 UI, thus WebHDFS should always be enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7978) Add LOG.isDebugEnabled() guard for some LOG.debug(..)

2015-03-24 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14379171#comment-14379171
 ] 

Walter Su commented on HDFS-7978:
-

PerformanceAdvisory.LOG is slf4j, so I rewrote the message arguments in 
template style.

 Add LOG.isDebugEnabled() guard for some LOG.debug(..)
 -

 Key: HDFS-7978
 URL: https://issues.apache.org/jira/browse/HDFS-7978
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Walter Su
Assignee: Walter Su
 Attachments: HDFS-7978.001.patch, HDFS-7978.002.patch


 {{isDebugEnabled()}} is optional. But when the arguments contain:
 1. lots of String concatenation
 2. complicated function calls
 {{LOG.debug(..)}} should be guarded with {{LOG.isDebugEnabled()}} to avoid 
 unnecessary argument evaluation and improve performance.
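The cost being avoided is argument evaluation, not the log call itself. A self-contained toy (the `Logger` here is a stand-in, not commons-logging or slf4j):

```java
// Demonstrates why the guard matters: without it, the expensive argument
// expression is evaluated even when debug logging is disabled.
public class DebugGuardDemo {
    static boolean debugEnabled = false; // debug logging is off
    static int evaluations = 0;          // counts expensive-argument evaluations

    static String expensive() { evaluations++; return "details"; }

    static void debug(String msg) { if (debugEnabled) System.out.println(msg); }

    public static void main(String[] args) {
        // Unguarded: expensive() runs even though nothing is logged.
        debug("state: " + expensive());
        // Guarded: the argument is never evaluated while debug is off.
        if (debugEnabled) {
            debug("state: " + expensive());
        }
        System.out.println(evaluations); // 1 — only the unguarded call paid the cost
    }
}
```

With slf4j's `{}` templates the same effect comes for free, since the parameters are only formatted when the level is enabled.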



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7988) Replace usage of ExactSizeInputStream with LimitInputStream.

2015-03-24 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14379237#comment-14379237
 ] 

Walter Su commented on HDFS-7988:
-

Currently, hadoop heavily depends on Guava. Guava provides {{LimitInputStream}} 
(ver=14) and {{ByteStreams.limit(InputStream in, long limit)}}. 
{{ByteStreams.limit}} is stable, while {{LimitInputStream}} has always been 
@Beta; that's why we copied {{LimitInputStream}} to the {{hadoop.util}} package 
in HADOOP-11286.
Right now, we can remove {{ExactSizeInputStream}}. 
In the future, when we stop supporting Guava (ver=14), we can also remove 
{{hadoop.util.LimitInputStream}} and replace it with {{ByteStreams.limit(..)}}
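The limiting behavior being discussed is small enough to sketch. This is an illustrative stand-in, not Guava's or Hadoop's implementation (and it caps only the single-byte `read()` for brevity; a real implementation also overrides `read(byte[], int, int)` and `skip`):

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Minimal sketch of a "limit" stream: reads are capped at `limit` bytes,
// after which the stream reports EOF even if the underlying stream has more.
public class LimitDemo {
    static InputStream limit(InputStream in, long limit) {
        return new FilterInputStream(in) {
            long left = limit;
            @Override public int read() throws IOException {
                if (left <= 0) return -1;   // budget exhausted: synthetic EOF
                int b = super.read();
                if (b != -1) left--;
                return b;
            }
        };
    }

    public static void main(String[] args) throws IOException {
        InputStream in = limit(new ByteArrayInputStream("abcdef".getBytes()), 3);
        StringBuilder sb = new StringBuilder();
        int b;
        while ((b = in.read()) != -1) sb.append((char) b);
        System.out.println(sb); // abc
    }
}
```

The "premature EOF" semantic difference mentioned for {{ExactSizeInputStream}} is whether hitting EOF before consuming the full length is an error or just a short read.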

 Replace usage of ExactSizeInputStream with LimitInputStream.
 

 Key: HDFS-7988
 URL: https://issues.apache.org/jira/browse/HDFS-7988
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Chris Nauroth
Assignee: Walter Su
Priority: Minor

 HDFS has a class named {{ExactSizeInputStream}} used in the protobuf 
 translation layer.  This class wraps another {{InputStream}}, but constrains 
 the readable bytes to a specified length.  The functionality is nearly 
 identical to {{LimitInputStream}} in Hadoop Common, with some differences in 
 semantics regarding premature EOF.  This issue proposes to eliminate 
 {{ExactSizeInputStream}} in favor of {{LimitInputStream}} to reduce the size 
 of the codebase.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7978) Add LOG.isDebugEnabled() guard for some LOG.debug(..)

2015-03-24 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14379126#comment-14379126
 ] 

Walter Su commented on HDFS-7978:
-

I just found out that commons-logging doesn't support {} placeholders or 
variable-length argument lists. Sorry about that. You are right, we can switch 
over to slf4j, but a lot of classes still use commons-logging, so it's a long 
way to go.
I'd appreciate it if you could apply the patch so we guard them now. Slf4j also 
supports isDebugEnabled(), so I think that's OK. Or we can wait for another 
patch that switches to slf4j. Both work for me. Thanks for introducing me to 
slf4j.

 Add LOG.isDebugEnabled() guard for some LOG.debug(..)
 -

 Key: HDFS-7978
 URL: https://issues.apache.org/jira/browse/HDFS-7978
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Walter Su
Assignee: Walter Su
 Attachments: HDFS-7978.001.patch


 {{isDebugEnabled()}} is optional. But when the arguments contain:
 1. lots of String concatenation
 2. complicated function calls
 {{LOG.debug(..)}} should be guarded with {{LOG.isDebugEnabled()}} to avoid 
 unnecessary argument evaluation and improve performance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7978) Add LOG.isDebugEnabled() guard for some LOG.debug(..)

2015-03-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14379189#comment-14379189
 ] 

Hadoop QA commented on HDFS-7978:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12707113/HDFS-7978.002.patch
  against trunk revision 53a28af.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 javac{color:red}.  The patch appears to cause the build to 
fail.

Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10063//console

This message is automatically generated.

 Add LOG.isDebugEnabled() guard for some LOG.debug(..)
 -

 Key: HDFS-7978
 URL: https://issues.apache.org/jira/browse/HDFS-7978
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Walter Su
Assignee: Walter Su
 Attachments: HDFS-7978.001.patch, HDFS-7978.002.patch


 {{isDebugEnabled()}} is optional. But when the arguments contain:
 1. lots of String concatenation
 2. complicated function calls
 {{LOG.debug(..)}} should be guarded with {{LOG.isDebugEnabled()}} to avoid 
 unnecessary argument evaluation and improve performance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7985) WebHDFS should be always enabled

2015-03-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14379245#comment-14379245
 ] 

Hadoop QA commented on HDFS-7985:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12707068/HDFS-7985-032415-1.patch
  against trunk revision 53a28af.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 14 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.tracing.TestTracing

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10061//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10061//console

This message is automatically generated.

 WebHDFS should be always enabled
 

 Key: HDFS-7985
 URL: https://issues.apache.org/jira/browse/HDFS-7985
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
 Attachments: HDFS-7985-032415-1.patch, HDFS-7985-032415.patch


 Since 2.7 the HTML 5 UI depends on WebHDFS. Disabling WebHDFS will break the 
 UI, thus WebHDFS should always be enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7748) Separate ECN flags from the Status in the DataTransferPipelineAck

2015-03-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14378997#comment-14378997
 ] 

Hadoop QA commented on HDFS-7748:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12707052/hdfs-7748.005.patch
  against trunk revision 53a28af.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10060//console

This message is automatically generated.

 Separate ECN flags from the Status in the DataTransferPipelineAck
 -

 Key: HDFS-7748
 URL: https://issues.apache.org/jira/browse/HDFS-7748
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Anu Engineer
Priority: Blocker
 Attachments: hdfs-7748.001.patch, hdfs-7748.002.patch, 
 hdfs-7748.003.patch, hdfs-7748.004.patch, hdfs-7748.005.patch


 Prior to the discussions on HDFS-7270, old clients might fail to talk to 
 newer servers when ECN is turned on. This jira proposes to move the ECN 
 flags into a separate protobuf field to make the ack compatible across both 
 versions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7985) WebHDFS should be always enabled

2015-03-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14379115#comment-14379115
 ] 

Hadoop QA commented on HDFS-7985:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12707068/HDFS-7985-032415-1.patch
  against trunk revision 53a28af.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 14 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:red}-1 eclipse:eclipse{color}.  The patch failed to build with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10062//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10062//console

This message is automatically generated.

 WebHDFS should be always enabled
 

 Key: HDFS-7985
 URL: https://issues.apache.org/jira/browse/HDFS-7985
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
 Attachments: HDFS-7985-032415-1.patch, HDFS-7985-032415.patch


 Since 2.7 the HTML5 UI depends on WebHDFS. Disabling WebHDFS breaks the 
 UI, so WebHDFS should always be enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7978) Add LOG.isDebugEnabled() guard for some LOG.debug(..)

2015-03-24 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-7978:

Attachment: HDFS-7978.002.patch

 Add LOG.isDebugEnabled() guard for some LOG.debug(..)
 -

 Key: HDFS-7978
 URL: https://issues.apache.org/jira/browse/HDFS-7978
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Walter Su
Assignee: Walter Su
 Attachments: HDFS-7978.001.patch, HDFS-7978.002.patch


 {{isDebugEnabled()}} is optional. But when there are:
 1. lots of String concatenation
 2. complicated function calls
 in the arguments, {{LOG.debug(..)}} should be guarded with 
 {{LOG.isDebugEnabled()}} to avoid unnecessary argument evaluation and improve 
 performance.
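As an illustrative sketch of the guard pattern (using java.util.logging rather than the logging API in the Hadoop codebase; the class, the counter, and the method names are all hypothetical, not part of any patch):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class GuardedDebug {
    static int evaluations = 0;

    // Stands in for an expensive argument (String concatenation, method calls).
    static String expensive() {
        evaluations++;
        return "detail";
    }

    // Unguarded: expensive() runs even when debug output is disabled.
    static void logUnguarded(Logger log) {
        log.fine("state: " + expensive());
    }

    // Guarded: expensive() is evaluated only when debug output is enabled.
    static void logGuarded(Logger log) {
        if (log.isLoggable(Level.FINE)) {
            log.fine("state: " + expensive());
        }
    }
}
```

With the logger level set above FINE, the unguarded call still pays for building the message string, while the guarded call skips the argument evaluation entirely.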



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7988) Replace usage of ExactSizeInputStream with LimitInputStream.

2015-03-24 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-7988:

Attachment: (was: HDFS-7988.001.patch)

 Replace usage of ExactSizeInputStream with LimitInputStream.
 

 Key: HDFS-7988
 URL: https://issues.apache.org/jira/browse/HDFS-7988
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Chris Nauroth
Assignee: Walter Su
Priority: Minor

 HDFS has a class named {{ExactSizeInputStream}} used in the protobuf 
 translation layer.  This class wraps another {{InputStream}}, but constrains 
 the readable bytes to a specified length.  The functionality is nearly 
 identical to {{LimitInputStream}} in Hadoop Common, with some differences in 
 semantics regarding premature EOF.  This issue proposes to eliminate 
 {{ExactSizeInputStream}} in favor of {{LimitInputStream}} to reduce the size 
 of the codebase.
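A minimal sketch of the byte-limiting idea that both classes share, assuming nothing about either class's exact premature-EOF semantics; {{BoundedStream}} and its behavior at the limit are illustrative only, not the Hadoop or Guava implementation:

```java
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Wraps another InputStream but constrains reads to a fixed number of bytes;
// once the limit is consumed, further reads report EOF.
public class BoundedStream extends FilterInputStream {
    private long left;

    public BoundedStream(InputStream in, long limit) {
        super(in);
        this.left = limit;
    }

    @Override
    public int read() throws IOException {
        if (left <= 0) {
            return -1;  // limit reached: report EOF
        }
        int b = in.read();
        if (b != -1) {
            left--;
        }
        return b;
    }

    @Override
    public int read(byte[] buf, int off, int len) throws IOException {
        if (left <= 0) {
            return -1;
        }
        // Never request more than the remaining budget from the wrapped stream.
        int n = in.read(buf, off, (int) Math.min(len, left));
        if (n != -1) {
            left -= n;
        }
        return n;
    }
}
```

The two real classes differ mainly in what happens when the wrapped stream ends before the limit is reached, which is exactly the semantic gap this issue would need to reconcile.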



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7875) Improve log message when wrong value configured for dfs.datanode.failed.volumes.tolerated

2015-03-24 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14379153#comment-14379153
 ] 

nijel commented on HDFS-7875:
-

Thanks Harsh :)

 Improve log message when wrong value configured for 
 dfs.datanode.failed.volumes.tolerated 
 --

 Key: HDFS-7875
 URL: https://issues.apache.org/jira/browse/HDFS-7875
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.6.0
Reporter: nijel
Assignee: nijel
Priority: Trivial
 Fix For: 2.8.0

 Attachments: 0001-HDFS-7875.patch, 0002-HDFS-7875.patch, 
 0003-HDFS-7875.patch, 0004-HDFS-7875.patch


 By mistake I configured dfs.datanode.failed.volumes.tolerated equal to the 
 number of volumes configured, and got stuck debugging for some time since the 
 log message didn't give many details.
 The log message could be more detailed. Attached a patch that changes the 
 message; please have a look.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7988) Replace usage of ExactSizeInputStream with LimitInputStream.

2015-03-24 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-7988:

Attachment: HDFS-7988.001.patch

 Replace usage of ExactSizeInputStream with LimitInputStream.
 

 Key: HDFS-7988
 URL: https://issues.apache.org/jira/browse/HDFS-7988
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Chris Nauroth
Assignee: Walter Su
Priority: Minor
 Attachments: HDFS-7988.001.patch


 HDFS has a class named {{ExactSizeInputStream}} used in the protobuf 
 translation layer.  This class wraps another {{InputStream}}, but constrains 
 the readable bytes to a specified length.  The functionality is nearly 
 identical to {{LimitInputStream}} in Hadoop Common, with some differences in 
 semantics regarding premature EOF.  This issue proposes to eliminate 
 {{ExactSizeInputStream}} in favor of {{LimitInputStream}} to reduce the size 
 of the codebase.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-7344) Erasure Coding worker and support in DataNode

2015-03-24 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14379277#comment-14379277
 ] 

Tsz Wo Nicholas Sze edited comment on HDFS-7344 at 3/25/15 4:40 AM:


 In most recovery cases, each ECWorker only generates 1 block. ...

For 1 missing block, we may not need to recover it at all since 
(6,3)-Reed-Solomon can tolerate 3 missing blocks.  Also, recovery is more 
efficient for 2 or 3 missing blocks.


was (Author: szetszwo):
 In most recovery cases, each ECWorker only generates 1 block. ...

For 1 missing block, we may not need to recover it at all since 
(6,3)-Reed-Solomon can tolerate 3 missing blocks.  Also recovery is more 
efficient for 2- or  3- missing blocks.

 Erasure Coding worker and support in DataNode
 -

 Key: HDFS-7344
 URL: https://issues.apache.org/jira/browse/HDFS-7344
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Kai Zheng
Assignee: Li Bo
 Attachments: HDFS ECWorker Design.pdf, hdfs-ec-datanode.0108.zip, 
 hdfs-ec-datanode.0108.zip


 According to HDFS-7285 and the design, this handles the DataNode-side extension 
 and related support for Erasure Coding, and implements ECWorker. It mainly 
 covers the following aspects; separate tasks may be opened to handle each 
 of them.
 * Process encoding work, calculating parity blocks as specified in block 
 groups and the codec schema;
 * Process decoding work, recovering data blocks according to block groups and 
 the codec schema;
 * Handle client requests for passively recovered block data, serving data on 
 demand while reconstructing;
 * Write parity blocks according to storage policy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7350) WebHDFS: Support EC commands through webhdfs

2015-03-24 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14379280#comment-14379280
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7350:
---

Which commands do you mean?

 WebHDFS: Support EC commands through webhdfs
 

 Key: HDFS-7350
 URL: https://issues.apache.org/jira/browse/HDFS-7350
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7985) WebHDFS should be always enabled

2015-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14379304#comment-14379304
 ] 

Hudson commented on HDFS-7985:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7425 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7425/])
HDFS-7985. WebHDFS should be always enabled. Contributed by Li Lu. (wheat9: rev 
80278a5f85a91b3e02e700e0b3c0a433c15e0565)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsWithMultipleNameNodes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLogs.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/native_mini_dfs.c
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/TestDelegationTokenForProxyUser.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestSymlinkHdfs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestHttpsFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRespectsBindHostKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsTokens.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsFileSystemContract.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/TestDelegationToken.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/WebHdfsTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestFSMainOperationsWebHdfs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java


 WebHDFS should be always enabled
 

 Key: HDFS-7985
 URL: https://issues.apache.org/jira/browse/HDFS-7985
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
 Fix For: 3.0.0

 Attachments: HDFS-7985-032415-1.patch, HDFS-7985-032415.patch


 Since 2.7 the HTML5 UI depends on WebHDFS. Disabling WebHDFS breaks the 
 UI, so WebHDFS should always be enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7344) Erasure Coding worker and support in DataNode

2015-03-24 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14379281#comment-14379281
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7344:
---

BTW, any progress on this JIRA and the related tasks?

 Erasure Coding worker and support in DataNode
 -

 Key: HDFS-7344
 URL: https://issues.apache.org/jira/browse/HDFS-7344
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Kai Zheng
Assignee: Li Bo
 Attachments: HDFS ECWorker Design.pdf, hdfs-ec-datanode.0108.zip, 
 hdfs-ec-datanode.0108.zip


 According to HDFS-7285 and the design, this handles the DataNode-side extension 
 and related support for Erasure Coding, and implements ECWorker. It mainly 
 covers the following aspects; separate tasks may be opened to handle each 
 of them.
 * Process encoding work, calculating parity blocks as specified in block 
 groups and the codec schema;
 * Process decoding work, recovering data blocks according to block groups and 
 the codec schema;
 * Handle client requests for passively recovered block data, serving data on 
 demand while reconstructing;
 * Write parity blocks according to storage policy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6649) Documentation for setrep is wrong

2015-03-24 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6649:

   Resolution: Fixed
Fix Version/s: 1.3.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed this to branch-1. Thanks [~qwertymaniac] for your review!

 Documentation for setrep is wrong
 -

 Key: HDFS-6649
 URL: https://issues.apache.org/jira/browse/HDFS-6649
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 1.0.4
Reporter: Alexander Fahlke
Assignee: Akira AJISAKA
Priority: Trivial
  Labels: newbie
 Fix For: 1.3.0

 Attachments: HDFS-6649.branch-1.patch


 The documentation in: 
 http://hadoop.apache.org/docs/r1.0.4/file_system_shell.html#setrep states 
 that one must use the command as follows:
 - {{Usage: hdfs dfs -setrep [-R] path}}
 - {{Example: hdfs dfs -setrep -w 3 -R /user/hadoop/dir1}}
 Correct would be to state that setrep needs the replication factor and the 
 replication factor needs to be right before the DFS path.
 Must look like this:
 - {{Usage: hdfs dfs -setrep [-R] [-w] rep path/file}}
 - {{Example: hdfs dfs -setrep -w -R 3 /user/hadoop/dir1}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7344) Erasure Coding worker and support in DataNode

2015-03-24 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14379277#comment-14379277
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7344:
---

 In most recovery cases, each ECWorker only generates 1 block. ...

For 1 missing block, we may not need to recover it at all since 
(6,3)-Reed-Solomon can tolerate 3 missing blocks.  Also, recovery is more 
efficient for 2 or 3 missing blocks.

 Erasure Coding worker and support in DataNode
 -

 Key: HDFS-7344
 URL: https://issues.apache.org/jira/browse/HDFS-7344
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Kai Zheng
Assignee: Li Bo
 Attachments: HDFS ECWorker Design.pdf, hdfs-ec-datanode.0108.zip, 
 hdfs-ec-datanode.0108.zip


 According to HDFS-7285 and the design, this handles the DataNode-side extension 
 and related support for Erasure Coding, and implements ECWorker. It mainly 
 covers the following aspects; separate tasks may be opened to handle each 
 of them.
 * Process encoding work, calculating parity blocks as specified in block 
 groups and the codec schema;
 * Process decoding work, recovering data blocks according to block groups and 
 the codec schema;
 * Handle client requests for passively recovered block data, serving data on 
 demand while reconstructing;
 * Write parity blocks according to storage policy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7985) WebHDFS should be always enabled

2015-03-24 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-7985:
-
Release Note: WebHDFS is mandatory and cannot be disabled.
Hadoop Flags: Incompatible change,Reviewed  (was: Reviewed)

 WebHDFS should be always enabled
 

 Key: HDFS-7985
 URL: https://issues.apache.org/jira/browse/HDFS-7985
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
 Fix For: 3.0.0

 Attachments: HDFS-7985-032415-1.patch, HDFS-7985-032415.patch


 Since 2.7 the HTML5 UI depends on WebHDFS. Disabling WebHDFS breaks the 
 UI, so WebHDFS should always be enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6729) Support maintenance mode for DN

2015-03-24 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-6729:

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

Resolving as a duplicate of HDFS-7877, which is a more comprehensive solution.

 Support maintenance mode for DN
 ---

 Key: HDFS-6729
 URL: https://issues.apache.org/jira/browse/HDFS-6729
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.5.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
 Attachments: HDFS-6729.000.patch, HDFS-6729.001.patch, 
 HDFS-6729.002.patch, HDFS-6729.003.patch, HDFS-6729.004.patch, 
 HDFS-6729.005.patch


 Some maintenance work (e.g., upgrading RAM or adding disks) on a DataNode 
 takes only a short amount of time (e.g., 10 minutes). In these cases, users do 
 not want missing blocks reported for this DN because it will be back online 
 shortly without data loss. Thus, we need a maintenance mode for a DN so that 
 maintenance work can be carried out without having to decommission the DN or 
 have it marked as dead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7966) New Data Transfer Protocol via HTTP/2

2015-03-24 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14379298#comment-14379298
 ] 

Haohui Mai commented on HDFS-7966:
--

Hi students, please take a look at the GSoC 2015 FAQ and submit a proposal. 
Thanks.

 New Data Transfer Protocol via HTTP/2
 -

 Key: HDFS-7966
 URL: https://issues.apache.org/jira/browse/HDFS-7966
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Haohui Mai
Assignee: Qianqian Shi
  Labels: gsoc, gsoc2015, mentor

 The current Data Transfer Protocol (DTP) implements a rich set of features 
 that span multiple layers, including:
 * Connection pooling and authentication (session layer)
 * Encryption (presentation layer)
 * Data writing pipeline (application layer)
 All these features are HDFS-specific and defined by the implementation. As a 
 result, it requires a non-trivial amount of work to implement HDFS clients and 
 servers.
 This jira explores delegating the responsibilities of the session and 
 presentation layers to the HTTP/2 protocol. In particular, HTTP/2 handles 
 connection multiplexing, QoS, authentication and encryption, reducing the 
 scope of DTP to the application layer only. By leveraging an existing HTTP/2 
 library, it should simplify the implementation of both HDFS clients and 
 servers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7985) WebHDFS should be always enabled

2015-03-24 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14379294#comment-14379294
 ] 

Haohui Mai commented on HDFS-7985:
--

+1

 WebHDFS should be always enabled
 

 Key: HDFS-7985
 URL: https://issues.apache.org/jira/browse/HDFS-7985
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
 Fix For: 3.0.0

 Attachments: HDFS-7985-032415-1.patch, HDFS-7985-032415.patch


 Since 2.7 the HTML5 UI depends on WebHDFS. Disabling WebHDFS breaks the 
 UI, so WebHDFS should always be enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7985) WebHDFS should be always enabled

2015-03-24 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-7985:
-
   Resolution: Fixed
Fix Version/s: 3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~gtCarrera9] for the 
contribution and Andrew for the review.

 WebHDFS should be always enabled
 

 Key: HDFS-7985
 URL: https://issues.apache.org/jira/browse/HDFS-7985
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
 Fix For: 3.0.0

 Attachments: HDFS-7985-032415-1.patch, HDFS-7985-032415.patch


 Since 2.7 the HTML5 UI depends on WebHDFS. Disabling WebHDFS breaks the 
 UI, so WebHDFS should always be enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7354) Support parity blocks in block management

2015-03-24 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14379271#comment-14379271
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7354:
---

 Parity blocks still need special treatment though, because they should be 
 recovered with lower priority than data blocks. ...

I think the priority should depend on the number of missing blocks, since 
all the missing blocks in a block group can be reconstructed at the same time.  
For (6,3)-Reed-Solomon, the priority should be:
- 1 missing: lowest priority (or do not reconstruct it at all)
- 2 missing: low priority (the same as 1 missing in 3-replication)
- 3 missing: high priority (data loss if one more block is missing)
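The ordering above can be sketched as a simple mapping from missing-block count to recovery priority; the class and constant names here are illustrative only, not the HDFS block-recovery API:

```java
// Illustrative priority mapping for a (6,3)-Reed-Solomon block group,
// following the ordering proposed in the comment. Lower value = more urgent.
public class EcRecoveryPriority {
    public static final int HIGH = 0;   // 3 missing: one more loss means data loss
    public static final int LOW = 1;    // 2 missing: comparable to 1 missing in 3-replication
    public static final int LOWEST = 2; // 1 missing: may be skipped entirely

    public static int priority(int missingBlocks) {
        if (missingBlocks >= 3) {
            return HIGH;
        }
        if (missingBlocks == 2) {
            return LOW;
        }
        return LOWEST;
    }
}
```

The key property is that urgency is a function of how many blocks in the group are missing, since all missing blocks of one group can be reconstructed in a single pass.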

 Support parity blocks in block management
 -

 Key: HDFS-7354
 URL: https://issues.apache.org/jira/browse/HDFS-7354
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang

 Parity blocks are not accessed during normal I/O operations. They should 
 therefore be treated with lower priority in the block recovery framework. 
 This JIRA tracks this effort as well as other special treatments which might 
 be needed for parity blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7713) Improve the HDFS Web UI browser to allow creating dirs

2015-03-24 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HDFS-7713:
---
Attachment: HDFS-7713.08.patch

HDFS-7968 turned out to be a duplicate of HDFS-7953.

Here's a patch which correctly encodes the path using encode_path.

 Improve the HDFS Web UI browser to allow creating dirs
 --

 Key: HDFS-7713
 URL: https://issues.apache.org/jira/browse/HDFS-7713
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Attachments: HDFS-7713.01.patch, HDFS-7713.02.patch, 
 HDFS-7713.03.patch, HDFS-7713.04.patch, HDFS-7713.05.patch, 
 HDFS-7713.06.patch, HDFS-7713.07.patch, HDFS-7713.08.patch


 This sub-task JIRA is for improving the NN HTML5 UI to allow the user to 
 create directories. It uses WebHDFS and adds to the great work done in 
 HDFS-6252



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7726) Parse and check the configuration settings of edit log to prevent runtime errors

2015-03-24 Thread Tianyin Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14378180#comment-14378180
 ] 

Tianyin Xu commented on HDFS-7726:
--

Thank you so much, Zhe!
I will do that ASAP this week.

 Parse and check the configuration settings of edit log to prevent runtime 
 errors
 

 Key: HDFS-7726
 URL: https://issues.apache.org/jira/browse/HDFS-7726
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: Tianyin Xu
Priority: Minor
 Attachments: check_config_EditLogTailer.patch, 
 check_config_val_EditLogTailer.patch.1


 
 Problem
 -
 Similar as the following two issues addressed in 2.7.0,
 https://issues.apache.org/jira/browse/YARN-2165
 https://issues.apache.org/jira/browse/YARN-2166
 The edit-log-related configuration settings should be checked in the 
 constructor rather than being applied directly at runtime; otherwise wrong 
 values cause runtime failures.
 Take dfs.ha.tail-edits.period as an example: currently in 
 EditLogTailer.java, its value is not checked but used directly in doWork(), 
 as in the following code snippet. Any negative value would cause an 
 IllegalArgumentException (which is not caught) and impair the component. 
 {code:title=EditLogTailer.java|borderStyle=solid}
 private void doWork() {
 {
 .
 Thread.sleep(sleepTimeMs);
 
 }
 {code}
 Another example is dfs.ha.log-roll.rpc.timeout. Right now, we use getInt() 
 to parse the value at runtime in the getActiveNodeProxy() function, which is 
 called by doWork(), as shown below. Any erroneous setting (e.g., an 
 ill-formatted integer) would cause an exception.
 {code:title=EditLogTailer.java|borderStyle=solid}
 private NamenodeProtocol getActiveNodeProxy() throws IOException {
 {
 .
 int rpcTimeout = conf.getInt(
   DFSConfigKeys.DFS_HA_LOGROLL_RPC_TIMEOUT_KEY,
   DFSConfigKeys.DFS_HA_LOGROLL_RPC_TIMEOUT_DEFAULT);
 
 }
 {code}
 
 Solution (the attached patch)
 -
 Basically, the idea of the attached patch is to move the parsing and checking 
 logic into the constructor to expose errors at initialization, so that 
 they won't be latent at runtime (same as YARN-2165 and YARN-2166).
 I'm not aware of the implementation in 2.7.0. It seems there are checking 
 utilities such as the validatePositiveNonZero function in YARN-2165. If so, 
 we can use that one to make the checking more systematic.
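A minimal sketch of the move-validation-to-the-constructor idea, assuming a plain key/value map in place of Hadoop's Configuration; the class name, the checkPositive helper, and the default of 60 seconds are illustrative assumptions, not the actual patch:

```java
import java.util.Map;

// Sketch: parse and validate an edit-log setting in the constructor so that a
// bad value fails at initialization instead of surfacing later in doWork().
public class EditLogSettings {
    private final long tailPeriodMs;

    public EditLogSettings(Map<String, String> conf) {
        // Assumed default of 60 seconds for dfs.ha.tail-edits.period.
        long periodSec = Long.parseLong(
                conf.getOrDefault("dfs.ha.tail-edits.period", "60"));
        this.tailPeriodMs = checkPositive("dfs.ha.tail-edits.period",
                periodSec * 1000);
    }

    private static long checkPositive(String key, long value) {
        if (value <= 0) {
            throw new IllegalArgumentException(
                    key + " must be positive, got " + value);
        }
        return value;
    }

    public long tailPeriodMs() {
        return tailPeriodMs;
    }
}
```

With this shape, a negative or ill-formatted value throws at construction time, so the operator sees the misconfiguration at startup rather than as a latent runtime failure.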



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7984) webhdfs:// needs to support provided delegation tokens

2015-03-24 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-7984:
---
Summary: webhdfs:// needs to support provided delegation tokens  (was: 
WebHDFS needs to support )

 webhdfs:// needs to support provided delegation tokens
 --

 Key: HDFS-7984
 URL: https://issues.apache.org/jira/browse/HDFS-7984
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Priority: Blocker

 When using the webhdfs:// filesystem (especially from distcp), we need the 
 ability to inject a delegation token rather than have webhdfs initialize its 
 own. This would allow for cross-authentication-zone file system accesses.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7977) NFS couldn't take percentile intervals

2015-03-24 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14378235#comment-14378235
 ] 

Haohui Mai commented on HDFS-7977:
--

+1

 NFS couldn't take percentile intervals
 --

 Key: HDFS-7977
 URL: https://issues.apache.org/jira/browse/HDFS-7977
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.7.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-7977.001.patch


 The configuration nfs.metrics.percentiles.intervals is not recognized by 
 the NFS gateway.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7875) Improve log message when wrong value configured for dfs.datanode.failed.volumes.tolerated

2015-03-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14378241#comment-14378241
 ] 

Hudson commented on HDFS-7875:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7420 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7420/])
HDFS-7875. Improve log message when wrong value configured for 
dfs.datanode.failed.volumes.tolerated. Contributed by Nijel. (harsh: rev 
eda02540ce53732585b3f31411b2e65db569eb25)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Improve log message when wrong value configured for 
 dfs.datanode.failed.volumes.tolerated 
 --

 Key: HDFS-7875
 URL: https://issues.apache.org/jira/browse/HDFS-7875
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.6.0
Reporter: nijel
Assignee: nijel
Priority: Trivial
 Fix For: 2.8.0

 Attachments: 0001-HDFS-7875.patch, 0002-HDFS-7875.patch, 
 0003-HDFS-7875.patch, 0004-HDFS-7875.patch


 By mistake I configured dfs.datanode.failed.volumes.tolerated equal to the 
 number of volumes configured, and got stuck debugging for some time since the 
 log message didn't give many details.
 The log message could be more detailed. Attached a patch that changes the 
 message; please have a look.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

