[jira] [Created] (HDFS-11213) FilterFileSystem should override rename(.., options) to take effect of Rename options called via FilterFileSystem implementations

2016-12-05 Thread Vinayakumar B (JIRA)
Vinayakumar B created HDFS-11213:


 Summary: FilterFileSystem should override rename(.., options) to 
take effect of Rename options called via FilterFileSystem implementations
 Key: HDFS-11213
 URL: https://issues.apache.org/jira/browse/HDFS-11213
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Vinayakumar B
Assignee: Vinayakumar B


HDFS-8312 added the Rename.TO_TRASH option to perform a security check before 
moving a file to trash.

However, FilterFileSystem implementations do not override this rename(.., options), 
so the default FileSystem implementation is used and the Rename.TO_TRASH option is 
never delegated to the NameNode.
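A minimal sketch of the missing override (illustrative, not the committed patch), assuming the usual protected {{fs}} field in FilterFileSystem that holds the wrapped FileSystem; {{Rename}} here is {{org.apache.hadoop.fs.Options.Rename}}:
{code:title=FilterFileSystem.java (sketch)}
  /**
   * Delegate rename-with-options to the wrapped FileSystem so that options
   * such as Rename.TO_TRASH reach the underlying implementation (e.g. the
   * DFSClient, which forwards them to the NameNode) instead of falling back
   * to the generic FileSystem default.
   */
  @Override
  protected void rename(Path src, Path dst, Rename... options)
      throws IOException {
    fs.rename(src, dst, options);
  }
{code}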



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-11212) FilterFileSystem should override rename(.., options) to take effect of Rename options called via FilterFileSystem implementations

2016-12-05 Thread Vinayakumar B (JIRA)
Vinayakumar B created HDFS-11212:


 Summary: FilterFileSystem should override rename(.., options) to 
take effect of Rename options called via FilterFileSystem implementations
 Key: HDFS-11212
 URL: https://issues.apache.org/jira/browse/HDFS-11212
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Vinayakumar B
Assignee: Vinayakumar B


HDFS-8312 added the Rename.TO_TRASH option to perform a security check before 
moving a file to trash.

However, FilterFileSystem implementations do not override this rename(.., options), 
so the default FileSystem implementation is used and the Rename.TO_TRASH option is 
never delegated to the NameNode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-11211) Add a time unit to the DataNode client trace format

2016-12-05 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HDFS-11211:


 Summary: Add a time unit to the DataNode client trace format 
 Key: HDFS-11211
 URL: https://issues.apache.org/jira/browse/HDFS-11211
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: Akira Ajisaka
Priority: Minor


{code:title=DataNode.java}
  public static final String DN_CLIENTTRACE_FORMAT =
"src: %s" +  // src IP
", dest: %s" +   // dst IP
", bytes: %s" +  // byte count
", op: %s" + // operation
", cliID: %s" +  // DFSClient id
", offset: %s" + // offset
", srvID: %s" +  // DatanodeRegistration
", blockid: %s" + // block id
", duration: %s";  // duration time
{code}
The time unit of the duration is nanoseconds, but the format string does not document it.
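One possible way to make the unit explicit (illustrative only, not necessarily the final patch) is to include it in the field label:
{code:title=DataNode.java (sketch)}
    // ...
    ", blockid: %s" +       // block id
    ", duration(ns): %s";   // duration, with the nanosecond unit spelled out
{code}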



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-11156) Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API

2016-12-05 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reopened HDFS-11156:


> Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API
> 
>
> Key: HDFS-11156
> URL: https://issues.apache.org/jira/browse/HDFS-11156
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.7.3
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11156.01.patch, HDFS-11156.02.patch, 
> HDFS-11156.03.patch, HDFS-11156.04.patch, HDFS-11156.05.patch, 
> HDFS-11156.06.patch
>
>
> Following webhdfs REST API
> {code}
> http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=GET_BLOCK_LOCATIONS&offset=0&length=1
> {code}
> will get a response like
> {code}
> {
>   "LocatedBlocks" : {
> "fileLength" : 1073741824,
> "isLastBlockComplete" : true,
> "isUnderConstruction" : false,
> "lastLocatedBlock" : { ... },
> "locatedBlocks" : [ {...} ]
>   }
> }
> {code}
> This is the JSON representation of *o.a.h.h.p.LocatedBlocks*. However, according 
> to the *FileSystem* API,
> {code}
> public BlockLocation[] getFileBlockLocations(Path p, long start, long len)
> {code}
> clients would expect an array of BlockLocation. This mismatch should be fixed. 
> Marked as an incompatible change because it will change the output of the 
> GET_BLOCK_LOCATIONS API.
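For reference, the client-side expectation comes from the standard FileSystem API; a minimal, self-contained sketch (class name and path argument are illustrative):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocationExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path p = new Path(args[0]);
    // The FileSystem API hands back BlockLocation[], not a LocatedBlocks object.
    BlockLocation[] locations = fs.getFileBlockLocations(p, 0, Long.MAX_VALUE);
    for (BlockLocation loc : locations) {
      System.out.println(loc.getOffset() + "," + loc.getLength() + ","
          + String.join(",", loc.getHosts()));
    }
  }
}
{code}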



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-11210) Enhance key rolling to be atomic

2016-12-05 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-11210:


 Summary: Enhance key rolling to be atomic
 Key: HDFS-11210
 URL: https://issues.apache.org/jira/browse/HDFS-11210
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: encryption, kms
Affects Versions: 2.6.5
Reporter: Xiao Chen
Assignee: Xiao Chen


To support re-encrypting EDEKs, we need to make sure that after a key is rolled, no 
EDEKs based on the old key version are handed out anymore. This includes the various 
caches used when generating EDEKs.
This is not guaranteed currently, simply because there was no such requirement 
before.

The affected caches include:
- Client provider(s), and corresponding cache(s).
When LoadBalancingKMSCP is used, we need to clear all underlying KMSCPs.
- KMS server instance(s), and corresponding cache(s).
When KMS HA is configured with multiple KMS instances, only one receives the 
{{rollNewVersion}} request, so we need to make sure the other instances are rolled too.
- The client provider instance inside the NN(s), and corresponding cache(s).
When {{hadoop key roll}} succeeds, the provider cache inside the NN should be 
drained too.
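For context, a self-contained sketch of the roll-then-generate sequence whose caching behavior this issue targets (class name and key name are illustrative; the cache clearing itself is exactly what is missing today):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.key.KeyProvider;
import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;
import org.apache.hadoop.crypto.key.KeyProviderFactory;

public class RollKeyExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    KeyProvider provider = KeyProviderFactory.getProviders(conf).get(0);
    KeyProviderCryptoExtension kpce =
        KeyProviderCryptoExtension.createKeyProviderCryptoExtension(provider);

    kpce.rollNewVersion("mykey");  // creates a new key version on the KMS
    // Without the cache clearing proposed here, the EDEK below can still be
    // served from a warm cache and thus be backed by the *old* key version.
    KeyProviderCryptoExtension.EncryptedKeyVersion edek =
        kpce.generateEncryptedKey("mykey");
    System.out.println(edek.getEncryptionKeyVersionName());
  }
}
{code}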



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-11209) SNN can't checkpoint when rolling upgrade is not finalized

2016-12-05 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-11209:
-

 Summary: SNN can't checkpoint when rolling upgrade is not finalized
 Key: HDFS-11209
 URL: https://issues.apache.org/jira/browse/HDFS-11209
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


A similar problem was fixed by HDFS-7185. A recent change in HDFS-8432 brings it 
back.

With HDFS-8432, the primary NN does not update the VERSION file to the new layout 
version when running with the "rollingUpgrade" option until the upgrade is 
finalized. This is to support more downgrade use cases.

However, the checkpoint on the SNN incorrectly updates the VERSION file while the 
rolling upgrade is not yet finalized. As a result, the SNN checkpoints successfully 
but fails to push the image to the primary NN because its version is higher than 
the primary NN's, as shown below.

{code}
2016-12-02 05:25:31,918 ERROR namenode.SecondaryNameNode 
(SecondaryNameNode.java:doWork(399)) - Exception in doCheckpoint
org.apache.hadoop.hdfs.server.namenode.TransferFsImage$HttpPutFailedException: 
Image uploading failed, status: 403, url: 
http://NN:50070/imagetransfer?txid=345404754&imageFile=IMAGE&File-Le..., 
message: This namenode has storage info -60:221856466:1444080250181:clusterX 
but the secondary expected -63:221856466:1444080250181:clusterX
{code} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-11205) Fix findbugs issue with BlockPoolSlice#validateIntegrityAndSetLength

2016-12-05 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HDFS-11205.
---
Resolution: Not A Problem

Will track the fix along with the recommit of HDFS-10930, which will fix both 
the commit message and the findbugs issue.

> Fix findbugs issue with BlockPoolSlice#validateIntegrityAndSetLength
> 
>
> Key: HDFS-11205
> URL: https://issues.apache.org/jira/browse/HDFS-11205
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HDFS-11205.001.patch
>
>
> This ticket is opened to fix the follow-up findbugs issue introduced by 
> HDFS-10930.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-11208) Deadlock in WebHDFS on shutdown

2016-12-05 Thread Erik Krogen (JIRA)
Erik Krogen created HDFS-11208:
--

 Summary: Deadlock in WebHDFS on shutdown
 Key: HDFS-11208
 URL: https://issues.apache.org/jira/browse/HDFS-11208
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 3.0.0-alpha1, 2.6.5, 2.7.3, 2.8.0
Reporter: Erik Krogen
Assignee: Erik Krogen


Currently on the client side, if the {{DelegationTokenRenewer}} attempts to renew a 
WebHdfs delegation token while the client system is shutting down (i.e. 
{{FileSystem.Cache.ClientFinalizer}} is running), a deadlock may occur. This happens 
because {{ClientFinalizer}} calls {{FileSystem.Cache.closeAll()}}, which first takes 
a lock on the {{FileSystem.Cache}} object and then locks each file system in the 
cache as it iterates over them. {{DelegationTokenRenewer}}, on the other hand, takes 
a lock on a FileSystem object while renewing that filesystem's token, and within 
{{TokenAspect.TokenManager.renew()}} (used for renewal of WebHdfs tokens) it calls 
{{FileSystem.get}}, which in turn takes a lock on the FileSystem cache object. The 
two threads therefore acquire the same two locks in opposite order, so a deadlock 
can occur whenever {{ClientFinalizer}} is running concurrently.
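In essence this is a classic lock-ordering inversion. A minimal, Hadoop-independent sketch of the two code paths (all names illustrative):
{code}
public class LockOrderDeadlock {
  // Stand-ins for FileSystem.Cache and a cached WebHdfsFileSystem instance.
  private static final Object CACHE_LOCK = new Object();
  private static final Object FS_LOCK = new Object();

  public static void main(String[] args) {
    Thread finalizer = new Thread(() -> {
      synchronized (CACHE_LOCK) {        // closeAll(): lock the cache first...
        pause();
        synchronized (FS_LOCK) {         // ...then each cached filesystem
          System.out.println("closed filesystem");
        }
      }
    }, "ClientFinalizer");

    Thread renewer = new Thread(() -> {
      synchronized (FS_LOCK) {           // RenewAction.renew(): lock the filesystem first...
        pause();
        synchronized (CACHE_LOCK) {      // ...then FileSystem.get() locks the cache
          System.out.println("renewed token");
        }
      }
    }, "DelegationTokenRenewer");

    finalizer.start();
    renewer.start();                     // with the pauses, the two threads deadlock
  }

  private static void pause() {
    try { Thread.sleep(100); } catch (InterruptedException ignored) { }
  }
}
{code}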

See below for example deadlock output:
{code}
Found one Java-level deadlock:
=
"Thread-8572":
waiting to lock monitor 0x7eff401f9878 (object 0x00051ec3f930, a
dali.hdfs.web.WebHdfsFileSystem),
which is held by "FileSystem-DelegationTokenRenewer"
"FileSystem-DelegationTokenRenewer":
waiting to lock monitor 0x7f005c08f5c8 (object 0x00050389c8b8, a
dali.fs.FileSystem$Cache),
which is held by "Thread-8572"

Java stack information for the threads listed above:
===
"Thread-8572":
at dali.hdfs.web.WebHdfsFileSystem.close(WebHdfsFileSystem.java:864)

   - waiting to lock <0x00051ec3f930> (a
   dali.hdfs.web.WebHdfsFileSystem)
   at dali.fs.FilterFileSystem.close(FilterFileSystem.java:449)
   at dali.fs.FileSystem$Cache.closeAll(FileSystem.java:2407)
   - locked <0x00050389c8b8> (a dali.fs.FileSystem$Cache)
   at dali.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2424)
   - locked <0x00050389c8d0> (a
   dali.fs.FileSystem$Cache$ClientFinalizer)
   at dali.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
   "FileSystem-DelegationTokenRenewer":
   at dali.fs.FileSystem$Cache.getInternal(FileSystem.java:2343)
   - waiting to lock <0x00050389c8b8> (a dali.fs.FileSystem$Cache)
   at dali.fs.FileSystem$Cache.get(FileSystem.java:2332)
   at dali.fs.FileSystem.get(FileSystem.java:369)
   at
   dali.hdfs.web.TokenAspect$TokenManager.getInstance(TokenAspect.java:92)
   at dali.hdfs.web.TokenAspect$TokenManager.renew(TokenAspect.java:72)
   at dali.security.token.Token.renew(Token.java:373)
   at

   
dali.fs.DelegationTokenRenewer$RenewAction.renew(DelegationTokenRenewer.java:127)
   - locked <0x00051ec3f930> (a dali.hdfs.web.WebHdfsFileSystem)
   at

   
dali.fs.DelegationTokenRenewer$RenewAction.access$300(DelegationTokenRenewer.java:57)
   at dali.fs.DelegationTokenRenewer.run(DelegationTokenRenewer.java:258)

Found 1 deadlock.
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-11207) Unnecessary incompatible change of NNHAStatusHeartbeat.state in DatanodeProtocolProtos

2016-12-05 Thread Eric Badger (JIRA)
Eric Badger created HDFS-11207:
--

 Summary: Unnecessary incompatible change of 
NNHAStatusHeartbeat.state in DatanodeProtocolProtos
 Key: HDFS-11207
 URL: https://issues.apache.org/jira/browse/HDFS-11207
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Eric Badger
Assignee: Eric Badger


HDFS-5079 changed the meaning of state in {{NNHAStatusHeartbeat}} when it added 
the {{INITIALIZING}} state via {{HAServiceStateProto}}.

Before change:
{noformat}
enum State {
   ACTIVE = 0;
   STANDBY = 1;
}
{noformat}

After change:
{noformat}
enum HAServiceStateProto {
  INITIALIZING = 0;
  ACTIVE = 1;
  STANDBY = 2;
}
{noformat}

So an old DataNode will interpret the new {{INITIALIZING}} state as {{ACTIVE}}, the 
new {{ACTIVE}} as {{STANDBY}}, and the new {{STANDBY}} as unknown. Any rolling 
upgrade to 3.0.0 will break because DataNodes that haven't been updated will 
misinterpret the NN state.
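A self-contained sketch of the skew, using plain Java enums as a stand-in for the protobuf numbering (illustrative only):
{code}
public class EnumSkewExample {
  enum OldState { ACTIVE, STANDBY }                 // old wire numbers 0, 1
  enum NewState { INITIALIZING, ACTIVE, STANDBY }   // new wire numbers 0, 1, 2

  public static void main(String[] args) {
    for (NewState sent : NewState.values()) {
      int wire = sent.ordinal();  // stands in for the protobuf enum number
      OldState seen = wire < OldState.values().length
          ? OldState.values()[wire]
          : null;                 // value 2 is unknown to an old DataNode
      System.out.println("NN sends " + sent + " -> old DN sees " + seen);
    }
  }
}
{code}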



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-11206) libhdfs++: FileSystem doesn't handle directory paths with a trailing "/"

2016-12-05 Thread James Clampffer (JIRA)
James Clampffer created HDFS-11206:
--

 Summary: libhdfs++: FileSystem doesn't handle directory paths with 
a trailing "/"
 Key: HDFS-11206
 URL: https://issues.apache.org/jira/browse/HDFS-11206
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: James Clampffer
Assignee: James Clampffer
Priority: Trivial


FileSystem methods that expect directories fail when they receive a path with a 
trailing slash. The Java Hadoop CLI tool handles such paths without any issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-11205) Fix findbugs issue with BlockPoolSlice#validateIntegrityAndSetLength

2016-12-05 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-11205:
-

 Summary: Fix findbugs issue with 
BlockPoolSlice#validateIntegrityAndSetLength
 Key: HDFS-11205
 URL: https://issues.apache.org/jira/browse/HDFS-11205
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


This ticket is opened to fix the follow-up findbugs issue introduced by 
HDFS-10930.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2016-12-05 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/246/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-hdfs-project/hadoop-hdfs

   org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.validateIntegrityAndSetLength(File, long) may fail to clean up java.io.InputStream on checked exception; obligation to clean up resource created at BlockPoolSlice.java:[line 720] is not discharged

Failed junit tests :

   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.checker.TestThrottledAsyncChecker 
   hadoop.yarn.server.timeline.webapp.TestTimelineWebServices 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.client.api.impl.TestAMRMClient 
   hadoop.mapred.gridmix.TestResourceUsageEmulators 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/246/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/246/artifact/out/diff-compile-javac-root.txt
  [168K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/246/artifact/out/diff-checkstyle-root.txt
  [16M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/246/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/246/artifact/out/diff-patch-shellcheck.txt
  [28K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/246/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/246/artifact/out/whitespace-eol.txt
  [11M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/246/artifact/out/whitespace-tabs.txt
  [1.3M]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/246/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/246/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [4.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/246/artifact/out/diff-javadoc-javadoc-root.txt
  [2.2M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/246/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [152K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/246/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/246/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [72K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/246/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [316K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/246/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/246/artifact/out/patch-unit-hadoop-tools_hadoop-gridmix.txt
  [16K]

   asflicense:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/246/artifact/out/patch-asflicense-problems.txt
  [4.0K]

Powered by Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org



-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org