[jira] [Resolved] (HADOOP-13872) KMS JMX exception

2017-06-27 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen resolved HADOOP-13872.

Resolution: Duplicate

Seems there's also HADOOP-14024 for branch-2, which I don't remember at 
all. Re-resolving this as a duplicate then. Thanks.

> KMS JMX exception
> -
>
> Key: HADOOP-13872
> URL: https://issues.apache.org/jira/browse/HADOOP-13872
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Critical
>
> Run KMS in pseudo-distributed mode and point a browser to 
> http://localhost:16000/kms/jmx?user.name=kms; it returns "HTTP Status 500 - Servlet 
> execution threw an exception":
> {noformat}
> HTTP Status 500 - Servlet execution threw an exception
> type Exception report
> message Servlet execution threw an exception
> description The server encountered an internal error that prevented it from 
> fulfilling this request.
> exception
> javax.servlet.ServletException: Servlet execution threw an exception
>   
> org.apache.hadoop.crypto.key.kms.server.KMSMDCFilter.doFilter(KMSMDCFilter.java:84)
>   
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:636)
>   
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:304)
>   
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:588)
>   
> org.apache.hadoop.crypto.key.kms.server.KMSAuthenticationFilter.doFilter(KMSAuthenticationFilter.java:141)
> root cause
> java.lang.NoClassDefFoundError: 
> org/eclipse/jetty/server/handler/ContextHandler
>   java.lang.ClassLoader.defineClass1(Native Method)
>   java.lang.ClassLoader.defineClass(ClassLoader.java:760)
>   java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   
> org.apache.catalina.loader.WebappClassLoader.findClassInternal(WebappClassLoader.java:2946)
>   
> org.apache.catalina.loader.WebappClassLoader.findClass(WebappClassLoader.java:1177)
>   
> org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1665)
>   
> org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1544)
>   org.apache.hadoop.jmx.JMXJsonServlet.doGet(JMXJsonServlet.java:176)
>   javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
>   javax.servlet.http.HttpServlet.service(HttpServlet.java:723)
>   
> org.apache.hadoop.crypto.key.kms.server.KMSMDCFilter.doFilter(KMSMDCFilter.java:84)
>   
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:636)
>   
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:304)
>   
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:588)
>   
> org.apache.hadoop.crypto.key.kms.server.KMSAuthenticationFilter.doFilter(KMSAuthenticationFilter.java:141)
> root cause
> java.lang.ClassNotFoundException: 
> org.eclipse.jetty.server.handler.ContextHandler
>   
> org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1698)
>   
> org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1544)
>   java.lang.ClassLoader.defineClass1(Native Method)
>   java.lang.ClassLoader.defineClass(ClassLoader.java:760)
>   java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   
> org.apache.catalina.loader.WebappClassLoader.findClassInternal(WebappClassLoader.java:2946)
>   
> org.apache.catalina.loader.WebappClassLoader.findClass(WebappClassLoader.java:1177)
>   
> org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1665)
>   
> org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1544)
>   org.apache.hadoop.jmx.JMXJsonServlet.doGet(JMXJsonServlet.java:176)
>   javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
>   javax.servlet.http.HttpServlet.service(HttpServlet.java:723)
>   
> org.apache.hadoop.crypto.key.kms.server.KMSMDCFilter.doFilter(KMSMDCFilter.java:84)
>   
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:636)
>   
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:304)
>   
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:588)
>   
> org.apache.hadoop.crypto.key.kms.server.KMSAuthenticationFilter.doFilt

[jira] [Reopened] (HADOOP-14515) Specifically configure zookeeper-related log levels in KMS log4j

2017-06-27 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen reopened HADOOP-14515:


It seems I missed the main log4j file here; attaching an addendum patch.

> Specifically configure zookeeper-related log levels in KMS log4j
> 
>
> Key: HADOOP-14515
> URL: https://issues.apache.org/jira/browse/HADOOP-14515
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: HADOOP-14515.01.patch, HADOOP-14515.addendum.patch
>
>
> When investigating a case, we tried to turn on KMS DEBUG by setting the root 
> logger in the log4j to DEBUG. This ended up making 
> {{org.apache.zookeeper.ClientCnxn}} generate 199.2M of a 200M log 
> file, which made kms.log rotate very quickly.
> We should keep ZooKeeper's logging unaffected by the root logger, and only turn 
> it on when interested.
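
A minimal log4j.properties sketch of the idea (logger and appender names are 
illustrative, not the exact addendum patch):
{code}
# The root logger can be raised to DEBUG while investigating KMS issues.
log4j.rootLogger=INFO, kms

# Pin ZooKeeper's client logging so it does not follow the root level;
# raise it explicitly only when ZooKeeper traffic is actually of interest.
log4j.logger.org.apache.zookeeper=WARN
{code}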



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14602) allow custom release notes/changelog during create-release

2017-06-27 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-14602:
-

 Summary: allow custom release notes/changelog during create-release
 Key: HADOOP-14602
 URL: https://issues.apache.org/jira/browse/HADOOP-14602
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, scripts
Affects Versions: 3.0.0-alpha3
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Minor


When doing a security release, we may not want to change JIRA to reflect that 
such a release is coming.  Therefore, it would be nice to be able to supply 
custom-made release notes and changelog instead of letting releasedocs run.






[jira] [Reopened] (HADOOP-13872) KMS JMX exception

2017-06-27 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen reopened HADOOP-13872:


> KMS JMX exception
> -
>
> Key: HADOOP-13872
> URL: https://issues.apache.org/jira/browse/HADOOP-13872
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
>
> Run KMS in pseudo-distributed mode and point a browser to 
> http://localhost:16000/kms/jmx?user.name=kms; it returns "HTTP Status 500 - Servlet 
> execution threw an exception":
> {noformat}
> HTTP Status 500 - Servlet execution threw an exception
> type Exception report
> message Servlet execution threw an exception
> description The server encountered an internal error that prevented it from 
> fulfilling this request.
> exception
> javax.servlet.ServletException: Servlet execution threw an exception
>   
> org.apache.hadoop.crypto.key.kms.server.KMSMDCFilter.doFilter(KMSMDCFilter.java:84)
>   
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:636)
>   
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:304)
>   
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:588)
>   
> org.apache.hadoop.crypto.key.kms.server.KMSAuthenticationFilter.doFilter(KMSAuthenticationFilter.java:141)
> root cause
> java.lang.NoClassDefFoundError: 
> org/eclipse/jetty/server/handler/ContextHandler
>   java.lang.ClassLoader.defineClass1(Native Method)
>   java.lang.ClassLoader.defineClass(ClassLoader.java:760)
>   java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   
> org.apache.catalina.loader.WebappClassLoader.findClassInternal(WebappClassLoader.java:2946)
>   
> org.apache.catalina.loader.WebappClassLoader.findClass(WebappClassLoader.java:1177)
>   
> org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1665)
>   
> org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1544)
>   org.apache.hadoop.jmx.JMXJsonServlet.doGet(JMXJsonServlet.java:176)
>   javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
>   javax.servlet.http.HttpServlet.service(HttpServlet.java:723)
>   
> org.apache.hadoop.crypto.key.kms.server.KMSMDCFilter.doFilter(KMSMDCFilter.java:84)
>   
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:636)
>   
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:304)
>   
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:588)
>   
> org.apache.hadoop.crypto.key.kms.server.KMSAuthenticationFilter.doFilter(KMSAuthenticationFilter.java:141)
> root cause
> java.lang.ClassNotFoundException: 
> org.eclipse.jetty.server.handler.ContextHandler
>   
> org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1698)
>   
> org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1544)
>   java.lang.ClassLoader.defineClass1(Native Method)
>   java.lang.ClassLoader.defineClass(ClassLoader.java:760)
>   java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   
> org.apache.catalina.loader.WebappClassLoader.findClassInternal(WebappClassLoader.java:2946)
>   
> org.apache.catalina.loader.WebappClassLoader.findClass(WebappClassLoader.java:1177)
>   
> org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1665)
>   
> org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1544)
>   org.apache.hadoop.jmx.JMXJsonServlet.doGet(JMXJsonServlet.java:176)
>   javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
>   javax.servlet.http.HttpServlet.service(HttpServlet.java:723)
>   
> org.apache.hadoop.crypto.key.kms.server.KMSMDCFilter.doFilter(KMSMDCFilter.java:84)
>   
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:636)
>   
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:304)
>   
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:588)
>   
> org.apache.hadoop.crypto.key.kms.server.KMSAuthenticationFilter.doFilter(KMSAuthenticationFilter.java:141)
> note The full stack trace of the root cause is available in the Apache 

[jira] [Created] (HADOOP-14601) Azure: Reuse ObjectMapper

2017-06-27 Thread Mingliang Liu (JIRA)
Mingliang Liu created HADOOP-14601:
--

 Summary: Azure: Reuse ObjectMapper
 Key: HADOOP-14601
 URL: https://issues.apache.org/jira/browse/HADOOP-14601
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Reporter: Mingliang Liu
Assignee: Mingliang Liu


Currently there are a few places in the {{hadoop-azure}} module that create an 
{{ObjectMapper}} for each request/call. We should reuse the object mapper for 
performance.

The general caveat is thread safety; I think the change will be safe 
though.
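
A minimal sketch of the reuse pattern (class and field names are illustrative, 
not the actual hadoop-azure change; assumes Jackson 2.x): an {{ObjectMapper}} is 
thread-safe once it is fully configured, so a single shared instance, or a 
pre-built {{ObjectReader}}, can replace per-call construction.
{code}
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.ObjectReader;

import java.io.IOException;

// Illustrative holder class; the real module would keep this in its own utils.
final class JsonUtils {
  // Shared, fully configured ObjectMapper: safe for concurrent use as long as
  // no further configuration happens after initialization.
  private static final ObjectMapper MAPPER = new ObjectMapper();

  // ObjectReader instances are immutable and cheap to reuse per target type.
  private static final ObjectReader NODE_READER = MAPPER.readerFor(JsonNode.class);

  private JsonUtils() {
  }

  static JsonNode parse(String json) throws IOException {
    // No per-call ObjectMapper allocation.
    return NODE_READER.readValue(json);
  }
}
{code}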






Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-06-27 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/358/

[Jun 26, 2017 5:54:01 PM] (wang) HDFS-12032. Inaccurate comment on
[Jun 26, 2017 6:20:07 PM] (wang) HDFS-11956. Do not require a storage ID or 
target storage IDs when
[Jun 26, 2017 8:24:27 PM] (raviprak) HDFS-11993. Add log info when connect to 
datanode socket address failed.
[Jun 26, 2017 10:43:50 PM] (lei) HDFS-12033. DatanodeManager picking EC 
recovery tasks should also
[Jun 27, 2017 12:35:55 AM] (rkanter) MAPREDUCE-6904. HADOOP_JOB_HISTORY_OPTS 
should be
[Jun 27, 2017 7:39:47 AM] (aengineer) HDFS-12045. Add log when Diskbalancer 
volume is transient storage type.
[Jun 27, 2017 11:49:26 AM] (aajisaka) HDFS-12040. 
TestFsDatasetImpl.testCleanShutdownOfVolume fails.




-1 overall


The following subsystems voted -1:
compile mvninstall unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.security.TestRaceWhenRelogin 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService 
   hadoop.yarn.server.nodemanager.TestNodeManagerShutdown 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   
hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 

Timed out junit tests :

   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
   org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands 
   
org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore 
   
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA 
   
org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA 
  

   mvninstall:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/358/artifact/out/patch-mvninstall-root.txt
  [500K]

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/358/artifact/out/patch-compile-root.txt
  [20K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/358/artifact/out/patch-compile-root.txt
  [20K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/358/artifact/out/patch-compile-root.txt
  [20K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/358/artifact/out/patch-unit-hadoop-assemblies.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/358/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [144K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/358/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [488K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/358/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [56K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/358/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/358/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [76K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/358/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [324K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/358/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timeline-pluginstorage.txt
  [28K]
   
https://builds.apache.org/job/hadoop-

[jira] [Created] (HADOOP-14600) LocatedFileStatus constructor forces RawLocalFS to exec a process to get the permissions

2017-06-27 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14600:
---

 Summary: LocatedFileStatus constructor forces RawLocalFS to exec a 
process to get the permissions
 Key: HADOOP-14600
 URL: https://issues.apache.org/jira/browse/HADOOP-14600
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.7.3
 Environment: file:// in a dir with many files
Reporter: Steve Loughran


Reported in SPARK-21137. A {{FileSystem.listStatus}} call really crawls against 
the local FS, because the {{FileStatus.getPermission}} call forces 
{{DeprecatedRawLocalFileStatus}} to spawn a process to read the real UGI values.

That is: what is a field lookup, or even a no-op, on every other FS is a 
process exec/spawn on the local FS, with all the attendant costs. This gets 
expensive if you have many files.
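
A minimal sketch of the pattern that gets slow (the directory path is made up 
for illustration); on {{file://}} each {{getPermission()}} can end up forking a 
process under the covers:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.net.URI;

public class LocalListingSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(URI.create("file:///"), new Configuration());
    // One listStatus call, but each status may lazily shell out for permissions.
    for (FileStatus status : fs.listStatus(new Path("/tmp/many-small-files"))) {
      // On the raw local FS this line can trigger a process exec per file.
      System.out.println(status.getPath() + " " + status.getPermission());
    }
  }
}
{code}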






[jira] [Created] (HADOOP-14599) RPC queue time metrics omit timed out clients

2017-06-27 Thread Ashwin Ramesh (JIRA)
Ashwin Ramesh created HADOOP-14599:
--

 Summary: RPC queue time metrics omit timed out clients
 Key: HADOOP-14599
 URL: https://issues.apache.org/jira/browse/HADOOP-14599
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics, rpc-server
Affects Versions: 2.7.0
Reporter: Ashwin Ramesh


RPC average queue time metrics will now update even if the client who made the 
call timed out while the call was in the call queue.






[jira] [Created] (HADOOP-14598) azure wasb failing: FsUrlConnection cannot be cast to HttpURLConnection

2017-06-27 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14598:
---

 Summary: azure wasb failing: FsUrlConnection cannot be cast to 
HttpURLConnection
 Key: HADOOP-14598
 URL: https://issues.apache.org/jira/browse/HADOOP-14598
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/azure, test
Affects Versions: 3.0.0-beta1
Reporter: Steve Loughran


My downstream-of-Spark cloud integration tests (where I haven't been running 
the Azure ones for a while) now have a few tests failing:

{code}
 org.apache.hadoop.fs.azure.AzureException: 
com.microsoft.azure.storage.StorageException: 
org.apache.hadoop.fs.FsUrlConnection cannot be cast to 
java.net.HttpURLConnection
{code}

There is no obvious cause, and it apparently only happens in some of the 
(ScalaTest) tests.






[jira] [Created] (HADOOP-14597) Native compilation broken with OpenSSL-1.1.0 because EVP_CIPHER_CTX has been made opaque

2017-06-27 Thread Ravi Prakash (JIRA)
Ravi Prakash created HADOOP-14597:
-

 Summary: Native compilation broken with OpenSSL-1.1.0 because 
EVP_CIPHER_CTX has been made opaque
 Key: HADOOP-14597
 URL: https://issues.apache.org/jira/browse/HADOOP-14597
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0-alpha4
 Environment: openssl-1.1.0
Reporter: Ravi Prakash


Trying to build Hadoop trunk on Fedora 26, which has openssl-devel-1.1.0, fails 
with this error:
{code}[WARNING] 
/home/raviprak/Code/hadoop/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/OpensslCipher.c:
 In function ‘check_update_max_output_len’:
[WARNING] 
/home/raviprak/Code/hadoop/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/OpensslCipher.c:256:14:
 error: dereferencing pointer to incomplete type ‘EVP_CIPHER_CTX {aka struct 
evp_cipher_ctx_st}’
[WARNING]if (context->flags & EVP_CIPH_NO_PADDING) {
[WARNING]   ^~
{code}

In https://github.com/openssl/openssl/issues/962, mattcaswell says:
{quote}
One of the primary differences between master (OpenSSL 1.1.0) and the 1.0.2 
version is that many types have been made opaque, i.e. applications are no 
longer allowed to look inside the internals of the structures
{quote}
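
One way to make that line build against both 1.0.x and 1.1.0 is to go through 
the flag accessor instead of poking the struct; a sketch only (the helper name 
is illustrative, and the actual patch may do something different):
{code}
#include <openssl/evp.h>

/* Opaque-safe check: EVP_CIPHER_CTX_test_flags() is available in both
 * OpenSSL 1.0.x and 1.1.0, so it can replace the direct
 * context->flags dereference without an OpenSSL-version #ifdef. */
static int no_padding_enabled(const EVP_CIPHER_CTX *context)
{
  return EVP_CIPHER_CTX_test_flags(context, EVP_CIPH_NO_PADDING) != 0;
}
{code}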






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-06-27 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/447/

[Jun 26, 2017 7:41:00 AM] (aajisaka) HADOOP-14549. Use 
GenericTestUtils.setLogLevel when available in
[Jun 26, 2017 8:26:09 AM] (kai.zheng) HDFS-11943. [Erasure coding] Warn log 
frequently print to screen in
[Jun 26, 2017 12:39:47 PM] (stevel) HADOOP-14461 Azure: handle failure 
gracefully in case of missing account
[Jun 26, 2017 5:54:01 PM] (wang) HDFS-12032. Inaccurate comment on
[Jun 26, 2017 6:20:07 PM] (wang) HDFS-11956. Do not require a storage ID or 
target storage IDs when
[Jun 26, 2017 8:24:27 PM] (raviprak) HDFS-11993. Add log info when connect to 
datanode socket address failed.
[Jun 26, 2017 10:43:50 PM] (lei) HDFS-12033. DatanodeManager picking EC 
recovery tasks should also
[Jun 27, 2017 12:35:55 AM] (rkanter) MAPREDUCE-6904. HADOOP_JOB_HISTORY_OPTS 
should be




-1 overall


The following subsystems voted -1:
findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 
   Useless object stored in variable removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
 At NodeStatusUpdaterImpl.java:removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
 At NodeStatusUpdaterImpl.java:[line 642] 
   
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache()
 makes inefficient use of keySet iterator instead of entrySet iterator At 
NodeStatusUpdaterImpl.java:keySet iterator instead of entrySet iterator At 
NodeStatusUpdaterImpl.java:[line 719] 
   Hard coded reference to an absolute pathname in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext)
 At DockerLinuxContainerRuntime.java:absolute pathname in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext)
 At DockerLinuxContainerRuntime.java:[line 455] 
   
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus()
 makes inefficient use of keySet iterator instead of entrySet iterator At 
ContainerLocalizer.java:keySet iterator instead of entrySet iterator At 
ContainerLocalizer.java:[line 334] 
   
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics
 is a mutable collection which should be package protected At 
ContainerMetrics.java:which should be package protected At 
ContainerMetrics.java:[line 134] 

Failed junit tests :

   hadoop.ipc.TestProtoBufRpcServerHandoff 
   hadoop.ipc.TestRPC 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl 
   hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport 
   hadoop.hdfs.TestFileAppend 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/447/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/447/artifact/out/diff-compile-javac-root.txt
  [192K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/447/artifact/out/diff-checkstyle-root.txt
  [17M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/447/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/447/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/447/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/447/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/447/artifact/out/whitespace-tabs.txt
  [1.2M]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-

[jira] [Created] (HADOOP-14596) latest SDK now telling us off on seeks

2017-06-27 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14596:
---

 Summary: latest SDK now telling us off on seeks
 Key: HADOOP-14596
 URL: https://issues.apache.org/jira/browse/HADOOP-14596
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Steve Loughran
Priority: Minor


The latest SDK now tells us off when we do a seek() by aborting the TCP stream
{code}
- Not all bytes were read from the S3ObjectInputStream, aborting HTTP 
connection. This is likely an error and may result in sub-optimal behavior. 
Request only the bytes you need via a ranged GET or drain the input stream 
after use.
2017-06-27 15:47:35,789 [ScalaTest-main-running-S3ACSVReadSuite] WARN  
internal.S3AbortableInputStream (S3AbortableInputStream.java:close(163)) - Not 
all bytes were read from the S3ObjectInputStream, aborting HTTP connection. 
This is likely an error and may result in sub-optimal behavior. Request only 
the bytes you need via a ranged GET or drain the input stream after use.
2017-06-27 15:47:37,409 [ScalaTest-main-running-S3ACSVReadSuite] WARN  
internal.S3AbortableInputStream (S3AbortableInputStream.java:close(163)) - Not 
all bytes were read from the S3ObjectInputStream, aborting HTTP connection. 
This is likely an error and may result in sub-optimal behavior. Request only 
the bytes you need via a ranged GET or drain the input stream after use.
2017-06-27 15:47:39,003 [ScalaTest-main-running-S3ACSVReadSuite] WARN  
internal.S3AbortableInputStream (S3AbortableInputStream.java:close(163)) - Not 
all bytes were read from the S3ObjectInputStream, aborting HTTP connection. 
This is likely an error and may result in sub-optimal behavior. Request only 
the bytes you need via a ranged GET or drain the input stream after use.
2017-06-27 15:47:40,627 [ScalaTest-main-running-S3ACSVReadSuite] WARN  
internal.S3AbortableInputStream (S3AbortableInputStream.java:close(163)) - Not 
all bytes were read from the S3ObjectInputStream, aborting HTTP connection. 
This is likely an error and may result in sub-optimal behavior. Request only 
the bytes you need via a ranged GET or drain the input stream after use.
{code}
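
For reference, a minimal sketch (not S3A's actual close logic) of the trade-off 
the warning is about: drain the stream when little is left so the HTTP 
connection can be reused, and abort it when draining would mean reading a large 
remainder. The threshold and method names are illustrative.
{code}
import com.amazonaws.services.s3.model.S3ObjectInputStream;

import java.io.IOException;

final class SeekCloseSketch {
  // Illustrative threshold; a real value would be tuned or configurable.
  private static final long DRAIN_THRESHOLD = 64 * 1024;

  static void closeOrAbort(S3ObjectInputStream in, long bytesRemaining)
      throws IOException {
    if (bytesRemaining <= DRAIN_THRESHOLD) {
      byte[] buffer = new byte[8192];
      // Drain the leftover bytes; the SDK can then reuse the connection.
      while (in.read(buffer) >= 0) {
        // discard
      }
      in.close();
    } else {
      // abort() is what provokes the SDK warning, but it avoids reading a
      // potentially large remainder just to reposition the stream.
      in.abort();
    }
  }

  private SeekCloseSketch() {
  }
}
{code}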






[jira] [Created] (HADOOP-14595) Internal method has not been used, should be deleted or marked as unused

2017-06-27 Thread Yasen Liu (JIRA)
Yasen Liu created HADOOP-14595:
--

 Summary: Internal method has not been used, should be deleted or 
marked as unused
 Key: HADOOP-14595
 URL: https://issues.apache.org/jira/browse/HADOOP-14595
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Yasen Liu
Priority: Trivial


I found that this method is unused and is an internal implementation. In order to 
keep the code clean, I think it should be deleted or marked as unused.

{code}
/**
 * internal implementation of directory creation.
 *
 * @param path path to file
 * @return boolean file is created; false: no need to create
 * @throws IOException if specified path is file instead of directory
 */
private boolean mkdir(Path path) throws IOException {
  Path directory = makeAbsolute(path);
  boolean shouldCreate = shouldCreate(directory);
  if (shouldCreate) {
    forceMkdir(directory);
  }
  return shouldCreate;
}
{code}


