[jira] [Created] (HADOOP-16401) ABFS: port Azure doc to 3.2 branch

2019-06-27 Thread Da Zhou (JIRA)
Da Zhou created HADOOP-16401:


 Summary: ABFS: port Azure doc to 3.2 branch
 Key: HADOOP-16401
 URL: https://issues.apache.org/jira/browse/HADOOP-16401
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.2.0
Reporter: Da Zhou


Need to port the latest Azure markdown docs from trunk to 3.2.0.






Hadoop Community Sync Up Schedule

2019-06-27 Thread Wangda Tan
Hi folks,

Here's the Hadoop Community Sync Up proposal/schedule:
https://docs.google.com/document/d/1GfNpYKhNUERAEH7m3yx6OfleoF3MqoQk3nJ7xqHD9nY/edit#heading=h.xh4zfwj8ppmn

And here's the calendar file:

https://calendar.google.com/calendar/ical/hadoop.community.sync.up%40gmail.com/public/basic.ics

We gave it a try this week for the YARN+MR and Submarine syncs. Feedback from
participants seems pretty good: lots of new information was shared during the
sync-ups, and companies using or developing Hadoop can get to know each other better.

Next week there are 4 community sync-ups (two Submarine sessions for different
timezones, one YARN+MR, one storage); please join whichever you're
interested in:

[Image: sync-up schedule]

Zoom info and notes can be found in the Google calendar invitation.

Thanks,
Wangda


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-06-27 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1180/

[Jun 26, 2019 3:53:14 PM] (ztang) YARN-9477. Implement VE discovery using 
libudev. Contributed by Peter
[Jun 26, 2019 6:44:49 PM] (bharat) HDDS-1691 : RDBTable#isExist should use 
Rocksdb#keyMayExist (#1013)
[Jun 26, 2019 9:01:31 PM] (gifuma) YARN-6055. ContainersMonitorImpl need be 
adjusted when NM resource
[Jun 27, 2019 1:04:12 AM] (github) HDDS-1638.  Implement Key Write Requests to 
use Cache and DoubleBuffer.




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore
 
   Unread field:TimelineEventSubDoc.java:[line 56] 
   Unread field:TimelineMetricSubDoc.java:[line 44] 
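For readers unfamiliar with this FindBugs category: an "unread field" is one that is written but never read, i.e. dead state. A minimal sketch of the pattern (the class and field names here are hypothetical, not the actual timelineservice code):

{code:java}
// Hypothetical illustration of an "Unread field" finding: `valid` is
// assigned but never read anywhere, so FindBugs flags it as dead state.
public class SubDoc {
    private boolean valid;          // written below, read nowhere

    public void setValid(boolean valid) {
        this.valid = valid;
    }
    // Fix: add code that actually reads the field, or remove it outright.
}
{code}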

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 
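The usual fix for the two equals() findings above is the standard instanceof guard, which handles the wrong-type and null cases at once. A minimal sketch (the hostname field is hypothetical, not the actual mawo code):

{code:java}
// Sketch of a null-safe, type-checked equals/hashCode pair.
public final class WorkerId {
    private final String hostname;

    public WorkerId(String hostname) {
        this.hostname = hostname;
    }

    @Override
    public boolean equals(Object other) {
        if (this == other) {
            return true;
        }
        // instanceof is false for null, so this also rejects null arguments.
        if (!(other instanceof WorkerId)) {
            return false;
        }
        return hostname.equals(((WorkerId) other).hostname);
    }

    @Override
    public int hashCode() {
        return hostname.hashCode();
    }
}
{code}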

FindBugs :

   module:hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-blockgen 
   Self assignment of field BlockInfo.replication in new 
org.apache.hadoop.tools.dynamometer.blockgenerator.BlockInfo(BlockInfo) At 
BlockInfo.java:in new 
org.apache.hadoop.tools.dynamometer.blockgenerator.BlockInfo(BlockInfo) At 
BlockInfo.java:[line 78] 
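Self-assignment in a copy constructor is an easy bug to write, because an unqualified field name silently refers to this. A sketch of the pattern (only the replication field name is taken from the report; the rest is hypothetical):

{code:java}
// Sketch of the self-assignment bug FindBugs flags in a copy constructor.
public class BlockInfo {
    private short replication;

    public BlockInfo(BlockInfo other) {
        // The flagged form, `replication = replication;`, assigns the field
        // to itself and leaves the copy at its default value. The fix is to
        // qualify the right-hand side with the source object:
        this.replication = other.replication;
    }
}
{code}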

FindBugs :

   module:hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra 
   org.apache.hadoop.tools.dynamometer.Client.addFileToZipRecursively(File, 
File, ZipOutputStream) may fail to clean up java.io.InputStream on checked 
exception Obligation to clean up resource created at Client.java:to clean up 
java.io.InputStream on checked exception Obligation to clean up resource 
created at Client.java:[line 859] is not discharged 
   Exceptional return value of java.io.File.mkdirs() ignored in 
org.apache.hadoop.tools.dynamometer.DynoInfraUtils.fetchHadoopTarball(File, 
String, Configuration, Logger) At DynoInfraUtils.java:ignored in 
org.apache.hadoop.tools.dynamometer.DynoInfraUtils.fetchHadoopTarball(File, 
String, Configuration, Logger) At DynoInfraUtils.java:[line 138] 
   Found reliance on default encoding in 
org.apache.hadoop.tools.dynamometer.SimulatedDataNodes.run(String[]):in 
org.apache.hadoop.tools.dynamometer.SimulatedDataNodes.run(String[]): new 
java.io.InputStreamReader(InputStream) At SimulatedDataNodes.java:[line 149] 
   org.apache.hadoop.tools.dynamometer.SimulatedDataNodes.run(String[]) 
invokes System.exit(...), which shuts down the entire virtual machine At 
SimulatedDataNodes.java:down the entire virtual machine At 
SimulatedDataNodes.java:[line 123] 
   org.apache.hadoop.tools.dynamometer.SimulatedDataNodes.run(String[]) may 
fail to close stream At SimulatedDataNodes.java:stream At 
SimulatedDataNodes.java:[line 149] 
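Three of the findings above (streams not cleaned up on exception, the ignored mkdirs() return value, and reliance on the default encoding) share one shape of fix. A minimal sketch under hypothetical names, not the actual Dynamometer code:

{code:java}
import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class CleanupDemo {
    static void prepare(File dir, File input) throws IOException {
        // mkdirs() reports failure via its return value, not an exception.
        if (!dir.mkdirs() && !dir.isDirectory()) {
            throw new IOException("Could not create " + dir);
        }
        // try-with-resources closes the stream on both normal and
        // exceptional exit; the charset is explicit, not the JVM default.
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(new FileInputStream(input),
                    StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
{code}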

Failed junit tests :

   hadoop.hdfs.server.diskbalancer.TestDiskBalancer 
   hadoop.hdfs.TestDFSClientRetries 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup 
   hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken 
   hadoop.yarn.server.resourcemanager.TestLeaderElectorService 
   
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerCommandHandler
 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1180/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1180/artifact/out/diff-compile-javac-root.txt
  [336K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1180/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1180/artifact/out/diff-patch-hadolint.txt
  [8.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1180/artifact/out/pathlen.txt
  [12K]

   pylint:

   

How to test the hadoop-aws module

2019-06-27 Thread lqjacklee
Dear sir,


 I need to test functionality in the hadoop-aws module, but I don't own
the credentials needed to do that, so I wonder whether a test account could
be provided.
Besides, I noticed that it may be possible to do this in the education
region, but I'm not sure. How can I do that? Thanks a lot.
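For anyone else looking for this: as far as I know there is no shared Apache-provided AWS account; per the hadoop-aws module's testing documentation, the S3A integration tests run against a bucket and credentials each developer supplies in a git-ignored auth-keys.xml. A minimal sketch, with placeholder bucket and key values:

{code:xml}
<!-- hadoop-tools/hadoop-aws/src/test/resources/auth-keys.xml (git-ignored).
     All values below are placeholders for your own account. -->
<configuration>
  <property>
    <name>test.fs.s3a.name</name>
    <value>s3a://your-test-bucket/</value>
  </property>
  <property>
    <name>fs.contract.test.fs.s3a</name>
    <value>s3a://your-test-bucket/</value>
  </property>
  <property>
    <name>fs.s3a.access.key</name>
    <value>YOUR_ACCESS_KEY</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <value>YOUR_SECRET_KEY</value>
  </property>
</configuration>
{code}

With that file in place, running mvn verify under hadoop-tools/hadoop-aws executes the integration tests against your own bucket.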


[jira] [Resolved] (HADOOP-12941) abort in Unsafe_GetLong when running IA64 HPUX 64bit mode

2019-06-27 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-12941.
-
Resolution: Won't Fix

There is no IA64 any more, sorry

> abort in Unsafe_GetLong when running IA64 HPUX 64bit mode 
> --
>
> Key: HADOOP-12941
> URL: https://issues.apache.org/jira/browse/HADOOP-12941
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: hpux IA64  running 64bit mode 
>Reporter: gene bradley
>Priority: Major
>
> Now that we have a core to look at we can sorta see what is going on
> #14 0x9fffaf000dd0 in Java native_call_stub frame
> #15 0x9fffaf014470 in JNI frame: sun.misc.Unsafe::getLong (java.lang.Object, long) ->long
> #16 0x9fffaf0067a0 in interpreted frame: org.apache.hadoop.hbase.util.Bytes$LexicographicalComparerHolder$UnsafeComparer::compareTo (byte[], int, int, byte[], int, int) ->int bci: 74
> #17 0x9fffaf0066e0 in interpreted frame: org.apache.hadoop.hbase.util.Bytes$LexicographicalComparerHolder$UnsafeComparer::compareTo (java.lang.Object, int, int, java.lang.Object, int, int) ->int bci: 16
> #18 0x9fffaf006720 in interpreted frame: org.apache.hadoop.hbase.util.Bytes::compareTo (byte[], int, int, byte[], int, int) ->int bci: 11
> #19 0x9fffaf0066e0 in interpreted frame: org.apache.hadoop.hbase.KeyValue$KVComparator::compareRowKey (org.apache.hadoop.hbase.Cell, org.apache.hadoop.hbase.Cell) ->int bci: 36
> #20 0x9fffaf0066e0 in interpreted frame: org.apache.hadoop.hbase.KeyValue$KVComparator::compare (org.apache.hadoop.hbase.Cell, org.apache.hadoop.hbase.Cell) ->int bci: 3
> #21 0x9fffaf0066e0 in interpreted frame: org.apache.hadoop.hbase.KeyValue$KVComparator::compare (java.lang.Object, java.lang.Object) ->int bci: 9
>
> ;; Line: 400
> 0xc0003ad84d30:0 :(p1)  ld8       r45=[r34]
> 0xc0003ad84d30:1 :      adds      r34=16,r32
> 0xc0003ad84d30:2 :      adds      ret0=8,r32;;
> 0xc0003ad84d40:0 :      add       ret1=r35,r45 < r35 is off
> 0xc0003ad84d40:1 :      ld8       r35=[r34],24
> 0xc0003ad84d40:2 :      nop.i     0x0
> 0xc0003ad84d50:0 :      ld8       r41=[ret0];;
> 0xc0003ad84d50:1 :      ld8.s     r49=[r34],-24
> 0xc0003ad84d50:2 :      nop.i     0x0
> 0xc0003ad84d60:0 :      ld8       r39=[ret1];; <=== abort
> 0xc0003ad84d60:1 :      ld8       ret0=[r35]
> 0xc0003ad84d60:2 :      nop.i     0x0;;
> 0xc0003ad84d70:0 :      cmp.ne.unc  p1=r0,ret0;;   M,MI
> 0xc0003ad84d70:1 :(p1)  mov       r48=r41
> 0xc0003ad84d70:2 :(p1)  chk.s.i   r49,Unsafe_GetLong+0x290
>
> (gdb) x /10i $pc-48*2
> 0x9fffaf000d70:   flushrs   MMI
> 0x9fffaf000d71:   mov  r44=r32
> 0x9fffaf000d72:   mov  r45=r33
> 0x9fffaf000d80:   mov  r46=r34   MMI
> 0x9fffaf000d81:   mov  r47=r35
> 0x9fffaf000d82:   mov  r48=r36
> 0x9fffaf000d90:   mov  r49=r37   MMI
> 0x9fffaf000d91:   mov  r50=r38
> 0x9fffaf000d92:   mov  r51=r39
> 0x9fffaf000da0:   adds r14=0x270,r4   MMI
>
> (gdb) p /x $r35
> $9 = 0x22
> (gdb) x /x $ret1
> 0x9ffe1d0d2bda: 0x677a68676c78743a
> (gdb) x /x $r45+0x22
> 0x9ffe1d0d2bda: 0x677a68676c78743a
>
> So here is the problem, this is a 64bit JVM:
> 0 : /opt/java8/bin/IA64W/java
> 1 : -Djava.util.logging.config.file=/test28/gzh/tomcat/conf/logging.properties
> 2 : -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
> 3 : -Dorg.apache.catalina.security.SecurityListener.UMASK=022
> 4 : -server
> 5 : -XX:PermSize=128m
> 6 : -XX:MaxPermSize=256m
> 7 : -Djava.endorsed.dirs=/test28/gzh/tomcat/endorsed
> 8 : -classpath
> 9 : /test28/gzh/tomcat/bin/bootstrap.jar:/test28/gzh/tomcat/bin/tomcat-juli.jar
> 10 : -Dcatalina.base=/test28/gzh/tomcat
> 11 : -Dcatalina.home=/test28/gzh/tomcat
> 12 : -Djava.io.tmpdir=/test28/gzh/tomcat/temp
> 13 : org.apache.catalina.startup.Bootstrap
> 14 : start
>
> Since they are not passing -Xms and -Xmx values we are taking defaults, which look at the system resources. So what is happening here is a 32 bit word aligned address is being used to index into a byte array:
>
> (gdb) jo 0x9ffe1d0d2bb8
> _mark = 0x0001, _klass = 0x9fffa8c00768, instance of type [B
> length of the array: 118
> 0 0 0 102 0 0 0 8 0 70 103 122 104 103 108 120 116 58

Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-06-27 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/365/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   Class org.apache.hadoop.fs.GlobalStorageStatistics defines non-transient 
non-serializable instance field map In GlobalStorageStatistics.java:instance 
field map In GlobalStorageStatistics.java 
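This finding means a Serializable class holds a field whose type isn't guaranteed serializable; the common remedies are marking the field transient or making the contained type serializable. A hypothetical sketch of the transient route (not the actual GlobalStorageStatistics code):

{code:java}
import java.io.Serializable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Without `transient`, serializing an instance would fail whenever the map
// holds non-serializable values, which is what FindBugs is warning about.
public class StatsRegistry implements Serializable {
    private static final long serialVersionUID = 1L;

    // transient: skipped during serialization, silencing the warning; the
    // map is null after deserialization and must be rebuilt by the class.
    private transient Map<String, Object> map = new ConcurrentHashMap<>();
}
{code}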

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 
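"Boxed value is unboxed and then immediately reboxed" usually means a wrapper value passes through a primitive variable on its way to another wrapper, costing a pointless unbox/allocate cycle. A hypothetical sketch of the flagged shape and its fix (not the actual ColumnRWHelper code):

{code:java}
import java.util.Map;
import java.util.TreeMap;

public class ReboxDemo {
    public static void main(String[] args) {
        Map<Long, String> in = new TreeMap<>();
        in.put(42L, "value");
        Map<Long, String> out = new TreeMap<>();

        for (Map.Entry<Long, String> cell : in.entrySet()) {
            // Flagged pattern: getKey() is already a Long; assigning it to a
            // primitive unboxes it, and out.put(...) reboxes it.
            long ts = cell.getKey();
            out.put(ts, cell.getValue());
            // Fix: pass the boxed key straight through.
            // out.put(cell.getKey(), cell.getValue());
        }
        System.out.println(out);
    }
}
{code}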

Failed junit tests :

   hadoop.net.TestClusterTopology 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.balancer.TestBalancerRPCDelay 
   hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.mapreduce.v2.TestMRJobs 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/365/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/365/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/365/artifact/out/diff-compile-cc-root-jdk1.8.0_212.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/365/artifact/out/diff-compile-javac-root-jdk1.8.0_212.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/365/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/365/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/365/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/365/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/365/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/365/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/365/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/365/artifact/out/whitespace-tabs.txt
  [1.2M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/365/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/365/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/365/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/365/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/365/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_212.txt
  [1.1M]

   unit:

   

[jira] [Created] (HADOOP-16400) clover task failed

2019-06-27 Thread luhuachao (JIRA)
luhuachao created HADOOP-16400:
--

 Summary: clover task failed
 Key: HADOOP-16400
 URL: https://issues.apache.org/jira/browse/HADOOP-16400
 Project: Hadoop Common
  Issue Type: Bug
Reporter: luhuachao


Executing 'mvn clover2:setup test clover2:aggregate clover2:clover' in
hadoop-common causes the task to fail; below is the error output.

 
{code:java}
[WARNING] Some messages have been simplified; recompile with -Xdiags:verbose to 
get full output
[INFO] 42 warnings
[INFO] -
[INFO] -
[ERROR] COMPILATION ERROR :
[INFO] -
[ERROR] 
/data4/luhuachao/opensource/hadoop-git/hadoop/hadoop-common-project/hadoop-common/target/clover/src-instrumented/org/apache/hadoop/fs/Options.java:[349,52]
 reference to resolve is ambiguous
both method 
resolve(org.apache.hadoop.fs.FileSystem,org.apache.hadoop.fs.Options.HandleOpt...)
 in org.apache.hadoop.fs.Options.HandleOpt and method 
resolve(java.util.function.BiFunction,org.apache.hadoop.fs.Options.HandleOpt...)
 in org.apache.hadoop.fs.Options.HandleOpt match
[ERROR] 
/data4/luhuachao/opensource/hadoop-git/hadoop/hadoop-common-project/hadoop-common/target/clover/src-instrumented/org/apache/hadoop/fs/Options.java:[349,94]
 incompatible types: cannot infer type-variable(s) I,T
(argument mismatch; java.lang.Object is not a functional interface)
[ERROR] 
/data4/luhuachao/opensource/hadoop-git/hadoop/hadoop-common-project/hadoop-common/target/clover/src-instrumented/org/apache/hadoop/security/RuleBasedLdapGroupsMapping.java:[80,157]
 incompatible types: inference variable T has incompatible bounds
equality constraints: java.lang.String
lower bounds: java.lang.Object
[ERROR] 
/data4/luhuachao/opensource/hadoop-git/hadoop/hadoop-common-project/hadoop-common/target/clover/src-instrumented/org/apache/hadoop/security/RuleBasedLdapGroupsMapping.java:[83,157]
 incompatible types: inference variable T has incompatible bounds
equality constraints: java.lang.String
lower bounds: java.lang.Object
[ERROR] 
/data4/luhuachao/opensource/hadoop-git/hadoop/hadoop-common-project/hadoop-common/target/clover/src-instrumented/org/apache/hadoop/io/erasurecode/CodecRegistry.java:[123,11]
 no suitable method found for 
collect(java.util.stream.Collector)
method 
java.util.stream.Stream.collect(java.util.function.Supplier,java.util.function.BiConsumer,java.util.function.BiConsumer) is not applicable
(cannot infer type-variable(s) R
(actual and formal argument lists differ in length))
method java.util.stream.Stream.collect(java.util.stream.Collector) is not applicable
(cannot infer type-variable(s) R,A
(argument mismatch; java.util.stream.Collector cannot be converted to java.util.stream.Collector))
[INFO] 5 errors

{code}
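For reference, the first error is javac's overload-ambiguity diagnostic: Options.HandleOpt has two resolve overloads, and the Clover-instrumented copy of Options.java apparently rewrites a lambda argument in a way that leaves both overloads equally applicable. A standalone sketch that reproduces the same diagnostic (names are hypothetical, not the Hadoop code):

{code:java}
import java.util.function.Function;

public class AmbiguousResolve {
    // A second functional interface with the same String -> String shape.
    interface Renamer {
        String rename(String s);
    }

    static String resolve(Function<String, String> f, String... rest) {
        return "function: " + f.apply(rest[0]);
    }

    static String resolve(Renamer r, String... rest) {
        return "renamer: " + r.rename(rest[0]);
    }

    public static void main(String[] args) {
        // An explicitly typed argument is unambiguous and compiles fine.
        Function<String, String> upper = s -> s.toUpperCase();
        System.out.println(resolve(upper, "opt"));

        // A bare lambda fits both overloads equally well; uncommenting this
        // line reproduces javac's "reference to resolve is ambiguous" error.
        // System.out.println(resolve(s -> s.toUpperCase(), "opt"));
    }
}
{code}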
 


