Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-11-08 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/952/

[Nov 8, 2018 4:23:00 AM] (wwei) YARN-8880. Add configurations for pluggable 
plugin framework.
[Nov 8, 2018 9:47:18 AM] (wwei) YARN-8988. Reduce the verbose log on RM 
heartbeat path when distributed
[Nov 8, 2018 1:03:38 PM] (nanda) HDDS-737. Introduce Incremental Container 
Report. Contributed by Nanda
[Nov 8, 2018 3:41:43 PM] (yqlin) HDDS-802. Container State Manager should get 
open pipelines for
[Nov 8, 2018 5:21:40 PM] (stevel) HADOOP-15846. ABFS: fix mask related bugs in 
setAcl, modifyAclEntries
[Nov 8, 2018 6:01:19 PM] (xiao) HDFS-14039. ec -listPolicies doesn't show 
correct state for the default
[Nov 8, 2018 6:35:45 PM] (shashikant) HDDS-806. Update Ratis to latest snapshot 
version in ozone. Contributed
[Nov 8, 2018 10:52:24 PM] (gifuma) HADOOP-15903. Allow HttpServer2 to discover 
resources in /static when
[Nov 9, 2018 12:02:48 AM] (haibochen) YARN-8990. Fix fair scheduler race 
condition in app submit and queue




-1 overall


The following subsystems voted -1:
findbugs hadolint pathlen shadedclient unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/952/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/952/artifact/out/diff-compile-javac-root.txt
  [324K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/952/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/952/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/952/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/952/artifact/out/diff-patch-pylint.txt
  [40K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/952/artifact/out/diff-patch-shellcheck.txt
  [68K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/952/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/952/artifact/out/whitespace-eol.txt
  [9.3M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/952/artifact/out/whitespace-tabs.txt
  [1.1M]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/952/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [24K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/952/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/952/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/952/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/952/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/952/artifact/out/branch-findbugs-hadoop-ozone_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/952/artifact/out/branch-findbugs-hadoop-ozone_common.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/952/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/952/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/952/artifact/out/branch-findbugs-hadoop-ozone_ozonefs.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/952/artifact/out/branch-findbugs-hadoop-ozone_s3gateway.txt
  [44K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/952/artifact/out/branch-findbugs-hadoop-ozone_tools.txt
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/952/artifact/out/diff-javadoc-javadoc-root.txt
  [752K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/952/artifact/out/patch-unit-hadoop-common-project_hadoop-minikdc.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/952/artifact/out/patch-unit-hadoop-common-project_hadoop-auth.txt
  [40K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/952/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [720K]
   

[jira] [Created] (HDDS-825) Code cleanup based on messages from ErrorProne

2018-11-08 Thread Anu Engineer (JIRA)
Anu Engineer created HDDS-825:
-

 Summary: Code cleanup based on messages from ErrorProne
 Key: HDDS-825
 URL: https://issues.apache.org/jira/browse/HDDS-825
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Affects Versions: 0.3.0
Reporter: Anu Engineer
Assignee: Anu Engineer


I ran ErrorProne (http://errorprone.info/) on the Ozone/HDDS code base and it 
flagged a large number of issues. This patch fixes many of the issues pointed out by ErrorProne.

The main classes of errors fixed in this patch are:
* http://errorprone.info/bugpattern/DefaultCharset
* http://errorprone.info/bugpattern/ComparableType
* http://errorprone.info/bugpattern/StringSplitter
* http://errorprone.info/bugpattern/IntLongMath
* http://errorprone.info/bugpattern/JavaLangClash
* http://errorprone.info/bugpattern/CatchFail
* http://errorprone.info/bugpattern/JdkObsolete
* http://errorprone.info/bugpattern/AssertEqualsArgumentOrderChecker
* http://errorprone.info/bugpattern/CatchAndPrintStackTrace

It is quite instructive to read through these errors and see the mistakes we 
made.
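For illustration, the first two categories above amount to fixes like the following (a hypothetical sketch, not code from the actual patch; class and method names are invented):

```java
import java.io.ByteArrayInputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;

public class ErrorProneFixes {

  // DefaultCharset: name the charset explicitly instead of relying on the
  // JVM's platform default, which varies between environments.
  static InputStreamReader openReader(byte[] data) {
    return new InputStreamReader(
        new ByteArrayInputStream(data), StandardCharsets.UTF_8);
  }

  // StringSplitter: String.split(",") silently drops trailing empty
  // strings; passing limit -1 keeps them, making the result predictable.
  static List<String> splitCsv(String line) {
    return Arrays.asList(line.split(",", -1));
  }
}
```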





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-822) Adding SCM audit log

2018-11-08 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin resolved HDDS-822.

Resolution: Duplicate

> Adding SCM audit log
> 
>
> Key: HDDS-822
> URL: https://issues.apache.org/jira/browse/HDDS-822
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
>
> As OM audit log has been added in HDDS-98, this ticket is opened to add SCM's 
> audit log.






[jira] [Reopened] (HDDS-822) Adding SCM audit log

2018-11-08 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin reopened HDDS-822:


> Adding SCM audit log
> 
>
> Key: HDDS-822
> URL: https://issues.apache.org/jira/browse/HDDS-822
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
>
> As OM audit log has been added in HDDS-98, this ticket is opened to add SCM's 
> audit log.






[jira] [Resolved] (HDDS-822) Adding SCM audit log

2018-11-08 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin resolved HDDS-822.

Resolution: Fixed

> Adding SCM audit log
> 
>
> Key: HDDS-822
> URL: https://issues.apache.org/jira/browse/HDDS-822
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Major
>
> As OM audit log has been added in HDDS-98, this ticket is opened to add SCM's 
> audit log.






[jira] [Created] (HDFS-14059) Test reads from standby on a secure cluster with Configured failover

2018-11-08 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-14059:
--

 Summary: Test reads from standby on a secure cluster with 
Configured failover
 Key: HDFS-14059
 URL: https://issues.apache.org/jira/browse/HDFS-14059
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Konstantin Shvachko
Assignee: Plamen Jeliazkov


Run standard HDFS tests to verify reading from ObserverNode on a secure HA 
cluster with {{ConfiguredFailoverProxyProvider}}.






[jira] [Created] (HDFS-14058) Test reads from standby on a secure cluster with IP failover

2018-11-08 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-14058:
--

 Summary: Test reads from standby on a secure cluster with IP 
failover
 Key: HDFS-14058
 URL: https://issues.apache.org/jira/browse/HDFS-14058
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Konstantin Shvachko
Assignee: Chen Liang


Run standard HDFS tests to verify reading from ObserverNode on a secure HA 
cluster with {{IPFailoverProxyProvider}}.






[jira] [Created] (HDFS-14055) Over-eager allocation in ByteBufferUtil.fallbackRead

2018-11-08 Thread Vanco Buca (JIRA)
Vanco Buca created HDFS-14055:
-

 Summary: Over-eager allocation in ByteBufferUtil.fallbackRead
 Key: HDFS-14055
 URL: https://issues.apache.org/jira/browse/HDFS-14055
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: fs
Reporter: Vanco Buca


The heap-memory path of ByteBufferUtil.fallbackRead ([see master branch code 
here|https://github.com/apache/hadoop/blob/a0da1ec01051108b77f86799dd5e97563b2a3962/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ByteBufferUtil.java#L95])
 massively overallocates memory when the underlying input stream returns data 
in smaller chunks. This happens on a regular basis when using the S3 input 
stream as input.

The behavior is roughly O(N^2). In a recent debug session, we were trying to 
read 6MB, but getting 16K at a time. The code would:
 * allocate 16M, use the first 16K
 * allocate 16M - 16K, use the first 16K of that
 * allocate 16M - 32K, use the first 16K of that
 * (etc)

The patch is simple. Here's the text version of the patch:
{code}
@@ -88,10 +88,17 @@ public final class ByteBufferUtil {
 buffer.flip();
   } else {
 buffer.clear();
-int nRead = stream.read(buffer.array(),
-  buffer.arrayOffset(), maxLength);
-if (nRead >= 0) {
-  buffer.limit(nRead);
+int totalRead = 0;
+while (totalRead < maxLength) {
+  final int nRead = stream.read(buffer.array(),
+buffer.arrayOffset() + totalRead, maxLength - totalRead);
+  if (nRead <= 0) {
+break;
+  }
+  totalRead += nRead;
+}
+if (totalRead >= 0) {
+  buffer.limit(totalRead);
   success = true;
 }
   }
{code}

So, essentially, this does the same thing that the code in the direct-memory path is 
already doing.
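As a standalone sketch of the same loop (illustrative only, not the actual Hadoop code; `ChunkedStream` is an invented stand-in for a stream, like the S3 input stream, that returns small chunks):

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadLoopSketch {

  // Returns at most 16 bytes per read() call, mimicking a stream that
  // hands back data in small chunks.
  static class ChunkedStream extends FilterInputStream {
    ChunkedStream(byte[] data) {
      super(new ByteArrayInputStream(data));
    }
    @Override
    public int read(byte[] b, int off, int len) throws IOException {
      return super.read(b, off, Math.min(len, 16));
    }
  }

  // The patched loop: keep reading into the same pre-allocated array until
  // maxLength bytes have arrived or the stream hits EOF. One allocation,
  // no matter how small the chunks are.
  static int readFully(InputStream in, byte[] buf, int maxLength)
      throws IOException {
    int totalRead = 0;
    while (totalRead < maxLength) {
      int nRead = in.read(buf, totalRead, maxLength - totalRead);
      if (nRead <= 0) {
        break;  // EOF
      }
      totalRead += nRead;
    }
    return totalRead;
  }
}
```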






HDFS/HDDS unit tests running failed in Jenkins precommit building

2018-11-08 Thread Lin,Yiqun(vip.com)
Hi developers,

Recently, I have seen the following error appear frequently in HDFS/HDDS Jenkins 
builds.
The link: 
https://builds.apache.org/job/PreCommit-HDDS-Build/1632/artifact/out/patch-unit-hadoop-ozone_ozone-manager.txt

[ERROR] ExecutionException The forked VM terminated without properly saying 
goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd /testptch/hadoop/hadoop-ozone/ozone-manager 
&& /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xmx2048m 
-XX:+HeapDumpOnOutOfMemoryError -DminiClusterDedicatedDirs=true -jar 
/testptch/hadoop/hadoop-ozone/ozone-manager/target/surefire/surefirebooter6481080145571841952.jar
 /testptch/hadoop/hadoop-ozone/ozone-manager/target/surefire 
2018-11-07T22-53-35_334-jvmRun1 surefire2897373403289443808tmp 
surefire_42678601136131093095tmp

This error means the unit tests are not really being run in Jenkins precommit 
builds. Does anyone know the root cause of this?

Thanks
Yiqun


[jira] [Created] (HDFS-14057) Improve ErasureCodingPolicyManager's data structures

2018-11-08 Thread Kitti Nanasi (JIRA)
Kitti Nanasi created HDFS-14057:
---

 Summary: Improve ErasureCodingPolicyManager's data structures
 Key: HDFS-14057
 URL: https://issues.apache.org/jira/browse/HDFS-14057
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: erasure-coding, hdfs
Reporter: Kitti Nanasi
Assignee: Kitti Nanasi


In the review of [HDFS-14039|https://issues.apache.org/jira/browse/HDFS-14039] 
we saw that ErasureCodingPolicyManager keeps multiple data structures for storing 
the EC policies, for performance reasons. We should check whether 
all of them are really needed and refactor if not.






[jira] [Created] (HDFS-14056) Fix error messages in HDFS-12716

2018-11-08 Thread Adam Antal (JIRA)
Adam Antal created HDFS-14056:
-

 Summary: Fix error messages in HDFS-12716
 Key: HDFS-14056
 URL: https://issues.apache.org/jira/browse/HDFS-14056
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 2.10.0, 3.2.0, 3.0.4, 3.1.2
Reporter: Adam Antal


There are misleading error messages in the committed HDFS-12716 patch.

As I saw in the code in DataNode.java:startDataNode
{code:java}
throw new DiskErrorException("Invalid value configured for "
+ "dfs.datanode.failed.volumes.tolerated - " + volFailuresTolerated
+ ". Value configured is either greater than -1 or >= "
+ "to the number of configured volumes (" + volsConfigured + ").");
  }
{code}
Here the error message seems a bit misleading. The error comes up when the 
configured value (volFailuresTolerated) is set lower than -1, 
but in that case the error should say something like "Value configured is 
either _less_ than -1 or >= ...".

Also the general error message in DataNode.java
{code:java}
public static final String MAX_VOLUME_FAILURES_TOLERATED_MSG = "should be 
greater than -1";
{code}
might be better changed to "should be greater than _or equal to_ -1" to be 
precise, as -1 is a valid choice.

In hdfs-default.xml I couldn't understand the phrase "The range of the value is 
-1 now, -1 represents the minimum of volume valids is 1." It might be better to 
write something clearer, like "The minimum is -1, representing 1 valid remaining 
volume".






[jira] [Created] (HDDS-824) WriteStateMachineData Times Out leading to Datanode crash

2018-11-08 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-824:


 Summary: WriteStateMachineData Times Out leading to Datanode crash
 Key: HDDS-824
 URL: https://issues.apache.org/jira/browse/HDDS-824
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Affects Versions: 0.3.0
Reporter: Nilotpal Nandi
Assignee: Tsz Wo Nicholas Sze
 Fix For: 0.3.0
 Attachments: HDDS-806.001.patch, HDDS-806.002.patch, 
HDDS-806_20181107.patch, all-node-ozone-logs-1540979056.tar.gz

datanode stopped due to following error :

datanode.log
{noformat}
2018-10-31 09:12:04,517 INFO org.apache.ratis.server.impl.RaftServerImpl: 
9fab9937-fbcd-4196-8014-cb165045724b: set configuration 169: 
[9fab9937-fbcd-4196-8014-cb165045724b:172.27.15.131:9858, 
ce0084c2-97cd-4c97-9378-e5175daad18b:172.27.15.139:9858, 
f0291cb4-7a48-456a-847f-9f91a12aa850:172.27.38.9:9858], old=null at 169
2018-10-31 09:12:22,187 ERROR org.apache.ratis.server.storage.RaftLogWorker: 
Terminating with exit status 1: 
9fab9937-fbcd-4196-8014-cb165045724b-RaftLogWorker failed.
org.apache.ratis.protocol.TimeoutIOException: Timeout: WriteLog:182: (t:10, 
i:182), STATEMACHINELOGENTRY, client-611073BBFA46, cid=127-writeStateMachineData
 at org.apache.ratis.util.IOUtils.getFromFuture(IOUtils.java:87)
 at 
org.apache.ratis.server.storage.RaftLogWorker$WriteLog.execute(RaftLogWorker.java:310)
 at org.apache.ratis.server.storage.RaftLogWorker.run(RaftLogWorker.java:182)
 at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.TimeoutException
 at java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1771)
 at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1915)
 at org.apache.ratis.util.IOUtils.getFromFuture(IOUtils.java:79)
 ... 3 more{noformat}






[jira] [Created] (HDDS-820) Use more strict data format for the Last-Modified headers of s3 gateway

2018-11-08 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-820:
-

 Summary: Use more strict data format for the Last-Modified headers 
of s3 gateway
 Key: HDDS-820
 URL: https://issues.apache.org/jira/browse/HDDS-820
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: S3
Reporter: Elek, Marton
Assignee: Elek, Marton
 Fix For: 0.4.0


The format of the HTTP Last-Modified header is defined by RFC 1123 (which updates 
the earlier RFC 822).

From https://tools.ietf.org/html/rfc1123

{code}
   5.2.14  RFC-822 Date and Time Specification: RFC-822 Section 5

 The syntax for the date is hereby changed to:

date = 1*2DIGIT month 2*4DIGIT
{code}

From https://tools.ietf.org/html/rfc822
{code}
 2.4.  *RULE:  REPETITION
 
  The character "*" preceding an element indicates repetition.
 The full form is:
 
  *element
 
 indicating at least <m> and at most <n> occurrences of element.
 Default values are 0 and infinity so that "*(element)" allows any
 number, including zero; "1*element" requires at  least  one;  and
 "1*2element" allows one or two.
{code}

It means that both of the following dates are good:

* Wed, 07 Nov 2018 10:31:05 GMT (two-digit day)
* Wed, 7 Nov 2018 10:31:05 GMT (one-digit day)

Java implements this correctly in DateTimeFormatter.RFC_1123_DATE_TIME, which 
sets the minimum and maximum width of the day field to 1-2.

Golang defines the date format differently: a fixed reference date is used as 
an example of the format to be followed.

http.TimeFormat (in golang) defines the format of the HTTP date:

From https://golang.org/src/time/format.go

{code}
RFC1123 = "Mon, 02 Jan 2006 15:04:05 MST"
{code}

Based on this definition, the day should also be two digits.

*Summary*: As RFC 1123 allows both formats, I propose to use two-digit 
days all the time, to make it possible to use s3g from golang.

Note: this is required because the CTrox/csi-s3 driver uses the golang-based minio s3 
client to create/get/list buckets before mounting them with fuse drivers.
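The difference is easy to see with java.time (an illustrative sketch; the class and pattern below are not from the s3g code):

```java
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

public class HttpDateFormat {

  // Equivalent of Go's http.TimeFormat ("Mon, 02 Jan 2006 15:04:05 MST"):
  // the "dd" field always emits a two-digit day.
  static final DateTimeFormatter STRICT_TWO_DIGIT_DAY =
      DateTimeFormatter.ofPattern("EEE, dd MMM yyyy HH:mm:ss 'GMT'",
          Locale.ENGLISH).withZone(ZoneOffset.UTC);

  static String format(ZonedDateTime t) {
    return STRICT_TWO_DIGIT_DAY.format(t);
  }
}
```

RFC_1123_DATE_TIME formats day 7 as a single digit, while the strict pattern pads it, which is what a golang client expects.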






[jira] [Created] (HDDS-823) OzoneRestClient is failing with NPE on getKeyDetails call

2018-11-08 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-823:


 Summary: OzoneRestClient is failing with NPE on getKeyDetails call
 Key: HDDS-823
 URL: https://issues.apache.org/jira/browse/HDDS-823
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Affects Versions: 0.3.0
Reporter: Nanda kumar


{{RestClient#getKeyDetails}} is failing with {{NullPointerException}}, which is 
causing a lot of unit tests and smoke tests to fail.
Exception trace:
{code:java}
Error while calling command 
(org.apache.hadoop.ozone.web.ozShell.keys.InfoKeyHandler@13713486): 
java.lang.NullPointerException
at picocli.CommandLine.execute(CommandLine.java:926)
at picocli.CommandLine.access$700(CommandLine.java:104)
at picocli.CommandLine$RunLast.handle(CommandLine.java:1083)
at picocli.CommandLine$RunLast.handle(CommandLine.java:1051)
at 
picocli.CommandLine$AbstractParseResultHandler.handleParseResult(CommandLine.java:959)
at picocli.CommandLine.parseWithHandlers(CommandLine.java:1242)
at 
org.apache.hadoop.ozone.ozShell.TestOzoneShell.execute(TestOzoneShell.java:259)
at 
org.apache.hadoop.ozone.ozShell.TestOzoneShell.testInfoDirKey(TestOzoneShell.java:1013)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.ozone.client.rest.RestClient.getKeyDetails(RestClient.java:817)
at 
org.apache.hadoop.ozone.client.OzoneBucket.getKey(OzoneBucket.java:282)
at 
org.apache.hadoop.ozone.web.ozShell.keys.InfoKeyHandler.call(InfoKeyHandler.java:65)
at 
org.apache.hadoop.ozone.web.ozShell.keys.InfoKeyHandler.call(InfoKeyHandler.java:37)
at picocli.CommandLine.execute(CommandLine.java:919)
... 18 more
{code}






[jira] [Created] (HDDS-821) Handle empty x-amz-storage-class header in Ozone S3 gateway

2018-11-08 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-821:
-

 Summary: Handle empty x-amz-storage-class header in Ozone S3 
gateway
 Key: HDDS-821
 URL: https://issues.apache.org/jira/browse/HDDS-821
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: S3
Reporter: Elek, Marton
Assignee: Elek, Marton


Ozone replication type is set based on the x-amz-storage-class HTTP header in 
s3g thanks to HDDS-712.

If header is not set the default replication type will be used (RATIS/3).

Unfortunately some tricky clients (such as the goofys FUSE driver) send an 
empty header.

This patch fixes the behaviour to use the default replication type in the case of 
a header that is present but empty.
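The fix boils down to treating a missing header and an empty header the same way; a minimal sketch (invented helper, not the actual s3g code):

```java
public class StorageClassHeader {

  // Assumed default replication type when no usable header is present
  // (RATIS/3 per the description above).
  static final String DEFAULT_TYPE = "RATIS";

  // null means the x-amz-storage-class header was absent; "" means a client
  // (such as goofys) sent the header with an empty value. Both fall back to
  // the default.
  static String replicationTypeFor(String headerValue) {
    if (headerValue == null || headerValue.isEmpty()) {
      return DEFAULT_TYPE;
    }
    return headerValue;
  }
}
```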






[jira] [Created] (HDDS-819) Match OzoneFileSystem behavior with S3AFileSystem

2018-11-08 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-819:
---

 Summary: Match OzoneFileSystem behavior with S3AFileSystem
 Key: HDDS-819
 URL: https://issues.apache.org/jira/browse/HDDS-819
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


To match the behavior of o3fs with that of the S3AFileSystem, following changes 
need to be made to OzoneFileSystem.
 1. When creating files, we should add only 1 key. Keys corresponding to the 
parent directories should not be created.
 2. {{GetFileStatus}} should return the status for fake directories 
(directories which do not actually exist as a key but there exists a key which 
is a child of this directory). For example, if there exists a key 
_/dir1/dir2/file2_, {{GetFileStatus("/dir1/")}} should return _/dir1/_ as a 
directory.
3. {{ListStatus}} on a directory should list fake sub-directories.
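Point 2 can be sketched as a prefix check over the key space (a hypothetical helper for illustration, not the actual OzoneFileSystem code):

```java
import java.util.List;

public class FakeDirCheck {

  // A path is a "fake directory" when no key with that exact name exists
  // but some key is a descendant of it; e.g. the key "dir1/dir2/file2"
  // makes "dir1/" a directory.
  static boolean isFakeDirectory(String path, List<String> keys) {
    String prefix = path.endsWith("/") ? path : path + "/";
    for (String key : keys) {
      if (key.startsWith(prefix)) {
        return true;
      }
    }
    return false;
  }
}
```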






[jira] [Created] (HDDS-822) Adding SCM audit log

2018-11-08 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDDS-822:
--

 Summary: Adding SCM audit log
 Key: HDDS-822
 URL: https://issues.apache.org/jira/browse/HDDS-822
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Yiqun Lin
Assignee: Yiqun Lin


As OM audit log has been added in HDDS-98, this ticket is opened to add SCM's 
audit log.






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-11-08 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/951/

[Nov 7, 2018 2:17:35 AM] (aajisaka) YARN-8233. NPE in 
CapacityScheduler#tryCommit when handling
[Nov 7, 2018 2:48:07 AM] (wwei) YARN-8976. Remove redundant modifiers in 
interface ApplicationConstants.
[Nov 7, 2018 5:54:08 AM] (yqlin) HDDS-809. Refactor SCMChillModeManager.
[Nov 7, 2018 8:45:16 AM] (wwei) HADOOP-15907. Add missing maven modules in 
BUILDING.txt. Contributed
[Nov 7, 2018 9:26:07 AM] (tasanuma) YARN-8866. Fix a parsing error for 
crossdomain.xml.
[Nov 7, 2018 2:20:49 PM] (jlowe) MAPREDUCE-7148. Fast fail jobs when exceeds 
dfs quota limitation.
[Nov 7, 2018 2:42:22 PM] (wwei) YARN-8977. Remove unnecessary type casting when 
calling




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.util.TestReadWriteDiskValidator 
   hadoop.util.TestBasicDiskValidator 
   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
   
hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/951/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/951/artifact/out/diff-compile-javac-root.txt
  [324K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/951/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/951/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/951/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/951/artifact/out/diff-patch-pylint.txt
  [40K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/951/artifact/out/diff-patch-shellcheck.txt
  [68K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/951/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/951/artifact/out/whitespace-eol.txt
  [9.3M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/951/artifact/out/whitespace-tabs.txt
  [1.1M]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/951/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/951/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/951/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/951/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/951/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/951/artifact/out/branch-findbugs-hadoop-ozone_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/951/artifact/out/branch-findbugs-hadoop-ozone_common.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/951/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/951/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/951/artifact/out/branch-findbugs-hadoop-ozone_ozonefs.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/951/artifact/out/branch-findbugs-hadoop-ozone_s3gateway.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/951/artifact/out/branch-findbugs-hadoop-ozone_tools.txt
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/951/artifact/out/diff-javadoc-javadoc-root.txt
  [752K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/951/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [196K]