[jira] [Resolved] (HDFS-13205) Incorrect path is passed to checkPermission during authorization of file under a snapshot (specifically under a subdir) after original subdir is deleted

2018-08-10 Thread Tsz Wo Nicholas Sze (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze resolved HDFS-13205.

Resolution: Not A Problem

Resolving as Not A Problem.

> Incorrect path is passed to checkPermission during authorization of file 
> under a snapshot (specifically under a subdir) after original subdir is 
> deleted
> 
>
> Key: HDFS-13205
> URL: https://issues.apache.org/jira/browse/HDFS-13205
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 2.7.4
>Reporter: Raghavender Rao Guruvannagari
>Assignee: Shashikant Banerjee
>Priority: Major
>
> Steps to reproduce the issue.
> +As the 'hdfs' superuser:+
>  -- Create a folder (/hdptest/test) with 700 permissions and 
> (/hdptest/test/mydir) with 755.
>  -- An HDFS Ranger policy is defined with RWX for user "test" on 
> /hdptest/test/ recursively.
>  -- Allow snapshot on the directory /hdptest/test/mydir:
> {code:java}
> #su - test
> [test@node1 ~]$ hdfs dfs -ls /hdptest/test/mydir
> [test@node1 ~]$ hdfs dfs -mkdir /hdptest/test/mydir/test
> [test@node1 ~]$ hdfs dfs -put /etc/passwd /hdptest/test/mydir/test
> [test@node1 ~]$ hdfs lsSnapshottableDir
> drwxr-xr-x 0 test hdfs 0 2018-01-25 14:22 1 65536 /hdptest/test/mydir
>  
> {code}
>  
> --> Create a snapshot:
> {code:java}
> [test@node1 ~]$ hdfs dfs -createSnapshot /hdptest/test/mydir
> Created snapshot /hdptest/test/mydir/.snapshot/s20180125-135430.953
> {code}
>  --> Verify that the snapshot directory contains the current files and that 
> the file is accessible via the .snapshot path:
> {code:java}
> [test@node1 ~]$ hdfs dfs -ls -R 
> /hdptest/test/mydir/.snapshot/s20180125-135430.953
> drwxr-xr-x   - test hdfs  0 2018-01-25 13:53 
> /hdptest/test/mydir/.snapshot/s20180125-135430.953/test
> -rw-r--r--   3 test hdfs   3227 2018-01-25 13:53 
> /hdptest/test/mydir/.snapshot/s20180125-135430.953/test/passwd
> [test@node1 ~]$ hdfs dfs -cat 
> /hdptest/test/mydir/.snapshot/s20180125-135430.953/test/passwd | tail
> livytest:x:1015:496::/home/livytest:/bin/bash
> ehdpzepp:x:1016:496::/home/ehdpzepp:/bin/bash
> zepptest:x:1017:496::/home/zepptest:/bin/bash
> {code}
>  --> Remove the file from the main directory and verify that it is still 
> accessible through the snapshot:
> {code:java}
> [test@node1 ~]$ hdfs dfs -rm /hdptest/test/mydir/test/passwd
> 18/01/25 13:55:06 INFO fs.TrashPolicyDefault: Moved: 
> 'hdfs://rangerSME/hdptest/test/mydir/test/passwd' to trash at: 
> hdfs://rangerSME/user/test/.Trash/Current/hdptest/test/mydir/test/passwd
> [test@node1 ~]$ hdfs dfs -cat 
> /hdptest/test/mydir/.snapshot/s20180125-135430.953/test/passwd | tail
> livytest:x:1015:496::/home/livytest:/bin/bash
> {code}
>  --> Remove the parent directory of the deleted file; accessing the same 
> file under the .snapshot directory now fails with a permission denied error:
> {code:java}
> [test@node1 ~]$ hdfs dfs -rm -r /hdptest/test/mydir/test
> 18/01/25 13:55:25 INFO fs.TrashPolicyDefault: Moved: 
> 'hdfs://rangerSME/hdptest/test/mydir/test' to trash at: 
> hdfs://rangerSME/user/test/.Trash/Current/hdptest/test/mydir/test1516888525269
> [test@node1 ~]$ hdfs dfs -cat 
> /hdptest/test/mydir/.snapshot/s20180125-135430.953/test/passwd | tail
> cat: Permission denied: user=test, access=EXECUTE, 
> inode="/hdptest/test/mydir/.snapshot/s20180125-135430.953/test/passwd":hdfs:hdfs:drwxr-x---
>  
> {code}
>  Ranger policies are not honored in this case for .snapshot directories/files 
> after the original directory is deleted under the snapshottable directory.
>  The workaround is to grant execute permission at the HDFS level on the 
> parent folder:
> {code:java}
> #su - hdfs
> #hdfs dfs -chmod 701 /hdptest/test
> {code}
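> A minimal sketch (not part of the original report) of why mode 701 works as 
> a workaround: 701 grants EXECUTE to "other", which is enough to traverse 
> /hdptest/test without being able to list or modify it. Illustrated with 
> Hadoop's FsPermission:
> {code:java}
> import org.apache.hadoop.fs.permission.FsPermission;
> 
> public class Mode701Demo {
>   public static void main(String[] args) {
>     // 0701 = rwx for the owner, nothing for the group, execute-only for others.
>     FsPermission p = new FsPermission((short) 0701);
>     System.out.println(p);                         // prints: rwx------x
>     System.out.println(p.getOtherAction().SYMBOL); // prints: --x (traverse only)
>   }
> }
> {code}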






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-08-10 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/864/

[Aug 9, 2018 5:57:24 AM] (stevel) HADOOP-15583. Stabilize S3A Assumed Role 
support. Contributed by Steve
[Aug 9, 2018 6:48:32 AM] (sunilg) YARN-8633. Update DataTables version in 
yarn-common in line with JQuery
[Aug 9, 2018 9:06:03 AM] (elek) HDDS-219. Genearate version-info.properties for 
hadoop and ozone.
[Aug 9, 2018 12:26:37 PM] (elek) HDDS-344. Remove multibyte characters from 
OzoneAcl. Contributed by
[Aug 9, 2018 3:17:34 PM] (jlowe) YARN-8331. Race condition in NM container 
launched after done.
[Aug 9, 2018 3:46:53 PM] (wwei) YARN-8559. Expose mutable-conf scheduler's 
configuration in RM
[Aug 9, 2018 5:11:47 PM] (cliang) HDFS-13735. Make QJM HTTP URL connection 
timeout configurable.
[Aug 9, 2018 6:04:02 PM] (wangda) YARN-8588. Logging improvements for better 
debuggability. (Suma
[Aug 9, 2018 6:04:02 PM] (wangda) YARN-8136. Add version attribute to site doc 
examples and quickstart.
[Aug 9, 2018 9:58:04 PM] (rkanter) YARN-4946. RM should not consider an 
application as COMPLETED when log
[Aug 9, 2018 11:55:39 PM] (xyao) HDDS-245. Handle ContainerReports in the SCM. 
Contributed by Elek
[Aug 10, 2018 12:32:02 AM] (wwei) YARN-8521. NPE in AllocationTagsManager when 
a container is removed more




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_libhdfs_threaded_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.util.TestBasicDiskValidator 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestMRTimelineEventHandling 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/864/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/864/artifact/out/diff-compile-javac-root.txt
  [328K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/864/artifact/out/diff-checkstyle-root.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/864/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/864/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/864/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/864/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/864/artifact/out/whitespace-eol.txt
  [9.4M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/864/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/864/artifact/out/xml.txt
  [4.0K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/864/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [60K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/864/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/864/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/864/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/864/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/864/artifact/out/branch-findbugs-hadoop-ozone_client.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/864/artifact/out/branch-findbugs-hadoop-ozone_common.txt
  [28K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/864/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/864/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/864/artifact/out/branch-findbugs-hadoop-ozone_ozonefs.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/864/artifact/out/branch-findbugs-hadoop-ozone_tools.txt

Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-08-10 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/554/

[Aug 9, 2018 3:46:53 PM] (wwei) YARN-8559. Expose mutable-conf scheduler's 
configuration in RM
[Aug 9, 2018 5:11:47 PM] (cliang) HDFS-13735. Make QJM HTTP URL connection 
timeout configurable.
[Aug 9, 2018 6:04:02 PM] (wangda) YARN-8588. Logging improvements for better 
debuggability. (Suma
[Aug 9, 2018 6:04:02 PM] (wangda) YARN-8136. Add version attribute to site doc 
examples and quickstart.
[Aug 9, 2018 9:58:04 PM] (rkanter) YARN-4946. RM should not consider an 
application as COMPLETED when log
[Aug 9, 2018 11:55:39 PM] (xyao) HDDS-245. Handle ContainerReports in the SCM. 
Contributed by Elek
[Aug 10, 2018 12:32:02 AM] (wwei) YARN-8521. NPE in AllocationTagsManager when 
a container is removed more
[Aug 10, 2018 6:37:45 AM] (wwei) YARN-8575. Avoid committing allocation 
proposal to unavailable nodes in


ERROR: File 'out/email-report.txt' does not exist


[jira] [Created] (HDFS-13818) Extend OIV to detect offline FSImage corruption

2018-08-10 Thread Adam Antal (JIRA)
Adam Antal created HDFS-13818:
-

 Summary: Extend OIV to detect offline FSImage corruption
 Key: HDFS-13818
 URL: https://issues.apache.org/jira/browse/HDFS-13818
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Reporter: Adam Antal
Assignee: Adam Antal


A follow-up Jira for HDFS-13031: an improvement to the OIV is suggested for 
detecting corruptions like HDFS-13101 in an offline way.

The reasoning is the following. Apart from an NN startup throwing the error, 
there is nothing in the customer's hands that could confirm whether an 
FSImage is good or corrupted.

Although a real, full check of the FSImage is only possible in the NN, 
standing up a tertiary NN just to reproduce the stack traces associated with 
the observed corruption cases is a bit of overkill.

The OIV would be a handy choice: it already has functionality for loading the 
fsimage and constructing the folder structure, so we just have to add an 
option for detecting the null INodes.

For example, the Delimited OIV processor can already use an on-disk 
MetadataMap, which reduces memory consumption. There may also be room for 
parallelization: iterating through the INodes could be distributed, 
increasing efficiency, so we wouldn't need a high-memory, high-CPU setup just 
to check the FSImage.
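
A minimal sketch of the kind of check intended (the parsing helpers are 
hypothetical; the real OIV processors read the fsimage protobuf sections):
{code:java}
import java.util.Set;

/**
 * Hypothetical corruption check: every child id referenced by the fsimage
 * directory section must resolve to an INode seen in the INode section;
 * a dangling reference is exactly the "null INode" case to be detected.
 */
public class NullINodeCheck {
  // inodeIds: ids parsed from the INode section.
  // childRefs: child ids referenced by the directory (parent->children) section.
  public static boolean isCorrupt(Set<Long> inodeIds, Iterable<Long> childRefs) {
    boolean corrupt = false;
    for (long child : childRefs) {
      if (!inodeIds.contains(child)) {
        System.err.println("Dangling reference to missing INode id " + child);
        corrupt = true;  // keep scanning to report every dangling reference
      }
    }
    return corrupt;
  }
}
{code}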






[jira] [Created] (HDDS-347) Ozone Integration Tests : testCloseContainerViaStandaAlone fails sometimes

2018-08-10 Thread LiXin Ge (JIRA)
LiXin Ge created HDDS-347:
-

 Summary: Ozone Integration Tests : 
testCloseContainerViaStandaAlone fails sometimes
 Key: HDDS-347
 URL: https://issues.apache.org/jira/browse/HDDS-347
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: LiXin Ge


This issue was found in the automatic Jenkins unit test run of HDDS-265.


 The container life cycle goes Open -> Closing -> Closed. This test submits 
the container close command and waits for the container state to change to 
*not equal to open*; however, even when that condition (not equal to open) is 
satisfied, the container may still be in the process of closing, so the LOG 
message printed after the container is closed sometimes cannot be found and 
the test fails.
{code:java|title=KeyValueContainer.java|borderStyle=solid}
try {
  writeLock();

  containerData.closeContainer();
  File containerFile = getContainerFile();
  // update the new container data to .container File
  updateContainerFile(containerFile);

} catch (StorageContainerException ex) {
{code}
Looking at the code above, the container state changes from CLOSING to CLOSED 
in the first step, but the remaining *updateContainerFile* call may take 
hundreds of milliseconds, so even modifying the test logic to wait for the 
*CLOSED* state would not guarantee that the test succeeds.


 There are two ways to fix this:
 1. Remove the part of the double check that depends on the LOG.
 2. If we have to preserve the double check, wait for the *CLOSED* state and 
then sleep for a while until the LOG message appears (see the sketch below).


 Patch 000 is based on the second way.
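
A minimal sketch of the second way, assuming the test can reach the 
container's life-cycle state and a log capturer (those accessors are 
assumptions here; GenericTestUtils.waitFor is the real Hadoop test utility):
{code:java}
import org.apache.hadoop.test.GenericTestUtils;

// ...inside the test, after submitting the close-container command:

// 1) Wait until the state has actually reached CLOSED (not merely
//    "not OPEN", which can still mean CLOSING).
GenericTestUtils.waitFor(
    () -> container.getContainerState() == LifeCycleState.CLOSED,
    100 /* poll every 100 ms */, 30000 /* give up after 30 s */);

// 2) The state flips to CLOSED before updateContainerFile() finishes,
//    so also wait until the "container closed" LOG line appears.
GenericTestUtils.waitFor(
    () -> logCapturer.getOutput().contains("Container closed"),
    100, 30000);
{code}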






[jira] [Created] (HDFS-13817) HDFSRouterFederation : when we create Mount point with RANDOM policy and with 2 Nameservices, it won't work properly

2018-08-10 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-13817:


 Summary: HDFSRouterFederation : when we create Mount point with 
RANDOM policy and with 2 Nameservices, it won't work properly 
 Key: HDFS-13817
 URL: https://issues.apache.org/jira/browse/HDFS-13817
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: federation
Reporter: Harshakiran Reddy


{{Scenario:}}

# Create a mount point with the RANDOM policy and with 2 nameservices.
# List the target mount path of the global path.

Actual Output: 
=== 
{{ls: `/apps5': No such file or directory}}

Expected Output: 
=

{{If files are available, list them; if the path is empty, display nothing.}}

{noformat} 
bin> ./hdfs dfsrouteradmin -add /apps5 hacluster,ns2 /tmp10 -order RANDOM 
-owner securedn -group hadoop
Successfully added mount point /apps5
bin> ./hdfs dfs -ls /apps5
ls: `/apps5': No such file or directory
bin> ./hdfs dfs -ls /apps3
Found 2 items
drwxrwxrwx   - user group 0 2018-08-09 19:55 /apps3/apps1
-rw-r--r--   3   - user group  4 2018-08-10 11:55 /apps3/ttt
 {noformat}

{{Please refer to the mount information below.}}

{{/apps3 is tagged with the HASH policy}}
{{/apps5 is tagged with the RANDOM policy}}

{noformat}
/bin> ./hdfs dfsrouteradmin -ls

Mount Table Entries:
Source    Destinations                   Owner     Group  Mode       Quota/Usage

/apps3    hacluster->/tmp3,ns2->/tmp4    securedn  users  rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]

/apps5    hacluster->/tmp5,ns2->/tmp5    securedn  users  rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]

{noformat}






[jira] [Created] (HDDS-346) ozoneShell show the new volume info after updateVolume command like updateBucket command.

2018-08-10 Thread chencan (JIRA)
chencan created HDDS-346:


 Summary: ozoneShell show the new volume info after updateVolume 
command like updateBucket command.
 Key: HDDS-346
 URL: https://issues.apache.org/jira/browse/HDDS-346
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: chencan


ozoneShell shows nothing after updateVolume; we could list the new volume 
info after the update command, as updateBucket does.

Like this:

{noformat}
[root@localhost bin]# ./ozone oz -updateVolume /volume -quota 10GB
2018-08-10 09:40:02,241 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
{
  "owner" : {
    "name" : "root"
  },
  "quota" : {
    "unit" : "GB",
    "size" : 10
  },
  "volumeName" : "volume",
  "createdOn" : "Tue, 01 Jun +50573 08:11:18 GMT",
  "createdBy" : "root"
}
{noformat}
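
A hypothetical sketch of the suggested change (the handler shape and client 
method names are assumptions, not the actual Ozone shell code): after the 
update succeeds, re-fetch the volume and print it.
{code:java}
// Hypothetical UpdateVolumeHandler fragment -- names are assumptions:
client.setVolumeQuota(volumeName, quota);                  // the update itself
OzoneVolume updated = client.getVolumeDetails(volumeName); // re-read new state
// Pretty-print the refreshed volume info, mirroring updateBucket's output.
System.out.println(JsonUtils.toJsonStringWithDefaultPrettyPrinter(
    JsonUtils.toJsonString(updated)));
{code}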






[jira] [Resolved] (HDDS-322) Restructure ChunkGroupOutputStream and ChunkOutputStream

2018-08-10 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee resolved HDDS-322.
--
Resolution: Won't Do

> Restructure ChunkGroupOutputStream and ChunkOutputStream
> 
>
> Key: HDDS-322
> URL: https://issues.apache.org/jira/browse/HDDS-322
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
>
> Currently, ChunkOutputStream allocates a chunk-size buffer to cache client 
> data. The idea here is to allocate the buffer in ChunkGroupOutputStream and 
> pass it to the underlying ChunkOutputStream, so that reclaiming the 
> uncommitted leftover data in the buffer and reallocating it to the next 
> block while handling CLOSE_CONTAINER_EXCEPTION in the ozone client becomes 
> simpler. This Jira will also add code to close the underlying 
> ChunkOutputStream as soon as the complete block data is written.
> This Jira will also modify the PutKey response to return the committed block 
> length to the ozone client for validation checks.
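> A minimal sketch of the proposed buffer ownership (class shapes are 
> assumptions, not the actual Ozone code):
> {code:java}
> import java.nio.ByteBuffer;
> 
> // The group stream owns the buffer and hands it down, so on a
> // CLOSE_CONTAINER_EXCEPTION the uncommitted bytes are still in hand
> // and can simply be replayed into the stream opened for the next block.
> class ChunkGroupOutputStream {
>   private static final int CHUNK_SIZE = 4 * 1024 * 1024; // example size
>   private final ByteBuffer buffer = ByteBuffer.allocate(CHUNK_SIZE);
> 
>   ChunkOutputStream nextBlockStream() {
>     // The same buffer instance is reused; leftover data stays available.
>     return new ChunkOutputStream(buffer);
>   }
> }
> 
> class ChunkOutputStream {
>   private final ByteBuffer buffer; // shared, owned by the group stream
>   ChunkOutputStream(ByteBuffer buffer) { this.buffer = buffer; }
> }
> {code}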






[jira] [Created] (HDFS-13816) dfs.getQuotaUsage() throws NPE on non-existent dir instead of FileNotFoundException

2018-08-10 Thread Vinayakumar B (JIRA)
Vinayakumar B created HDFS-13816:


 Summary: dfs.getQuotaUsage() throws NPE on non-existent dir 
instead of FileNotFoundException
 Key: HDFS-13816
 URL: https://issues.apache.org/jira/browse/HDFS-13816
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Vinayakumar B


{{dfs.getQuotaUsage()}} on a non-existent path should throw 
FileNotFoundException, but it currently throws an NPE:

{noformat}
java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getQuotaUsageInt(FSDirStatAndListingOp.java:573)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getQuotaUsage(FSDirStatAndListingOp.java:554)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getQuotaUsage(FSNamesystem.java:3221)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getQuotaUsage(NameNodeRpcServer.java:1404)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getQuotaUsage(ClientNamenodeProtocolServerSideTranslatorPB.java:1861)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
{noformat}
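
A hedged sketch of the kind of fix the trace implies (this follows trunk's 
FSDirStatAndListingOp style but is not the actual patch):
{code:java}
// Inside getQuotaUsageInt(): fail fast with FileNotFoundException when the
// path does not resolve, instead of dereferencing a null INode later on.
final INodesInPath iip = fsd.resolvePath(pc, src, DirOp.READ_LINK);
final INode lastINode = iip.getLastINode();
if (lastINode == null) {
  throw new FileNotFoundException("File/Directory does not exist: " + src);
}
{code}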


