[jira] [Created] (HDFS-11940) NoSuchMethodError thrown when running TestDFSPacket

2017-06-06 Thread legend (JIRA)
legend created HDFS-11940:
-

 Summary: NoSuchMethodError thrown when running TestDFSPacket 
 Key: HDFS-11940
 URL: https://issues.apache.org/jira/browse/HDFS-11940
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 3.0.0-alpha3
 Environment: org.apache.maven.surefire 2.17
jdk 1.8
Reporter: legend


An exception is thrown when I run TestDFSPacket. Details are listed below.

[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs-client: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/hadoop/GitHub/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs-client: There are test failures.

Please refer to 
/home/hadoop/GitHub/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/target/surefire-reports
 for the individual test results.
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:212)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
at 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
at 
org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:863)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:288)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:199)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
Caused by: org.apache.maven.plugin.MojoFailureException: There are test 
failures.

Please refer to 
/home/hadoop/GitHub/hadoop/hadoop-hdfs-project/hadoop-hdfs-client/target/surefire-reports
 for the individual test results.
at 
org.apache.maven.plugin.surefire.SurefireHelper.reportExecution(SurefireHelper.java:82)
at 
org.apache.maven.plugin.surefire.SurefirePlugin.handleSummary(SurefirePlugin.java:195)
at 
org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:861)
at 
org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:729)
at 
org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:207)
... 20 more



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-11939) Ozone : add read/write random access to Chunks of a key

2017-06-06 Thread Chen Liang (JIRA)
Chen Liang created HDFS-11939:
-

 Summary: Ozone : add read/write random access to Chunks of a key
 Key: HDFS-11939
 URL: https://issues.apache.org/jira/browse/HDFS-11939
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Chen Liang
Assignee: Chen Liang


In Ozone, the value of a key is a sequence of container chunks. Currently, the 
only way to read/write the chunks is through ChunkInputStream and 
ChunkOutputStream. By their nature as streams, however, these classes currently 
allow only sequential read/write. 

Ideally we would like to support random access of the chunks. For example, we 
want to be able to seek to a specific offset and read/write some data. This 
will be critical for key range read/write feature, and potentially important 
for supporting parallel read/write.

This JIRA tracks adding that support by implementing a FileChannel-style class on top of Chunks.
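To sketch what random access over a chunk sequence involves, here is a minimal, self-contained illustration of the offset arithmetic (the class is hypothetical and assumes fixed-size in-memory chunks; it is not the proposed Ozone implementation):

```java
import java.util.List;

// Hypothetical sketch: maps a logical key offset onto (chunk index, offset
// within chunk), the core of any seek() over a sequence of fixed-size chunks.
public class ChunkedKeyReader {
    private final List<byte[]> chunks;   // value of the key, as chunk payloads
    private final int chunkSize;         // assumed fixed for all but the last chunk
    private long position = 0;

    public ChunkedKeyReader(List<byte[]> chunks, int chunkSize) {
        this.chunks = chunks;
        this.chunkSize = chunkSize;
    }

    // seek() only moves the logical position; no chunk I/O happens here.
    public void seek(long offset) {
        this.position = offset;
    }

    // Read one byte at the current position, crossing chunk boundaries as needed.
    public int read() {
        int chunkIndex = (int) (position / chunkSize);
        int offsetInChunk = (int) (position % chunkSize);
        if (chunkIndex >= chunks.size()
            || offsetInChunk >= chunks.get(chunkIndex).length) {
            return -1;  // past end of key
        }
        position++;
        return chunks.get(chunkIndex)[offsetInChunk] & 0xFF;
    }
}
```

A real implementation would issue a ranged read against the container chunk instead of indexing a byte array, but the (chunk index, offset within chunk) mapping behind seek() is the same.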






[jira] [Created] (HDFS-11938) Logs for KMS delegation token lifecycle

2017-06-06 Thread Yongjun Zhang (JIRA)
Yongjun Zhang created HDFS-11938:


 Summary: Logs for KMS delegation token lifecycle
 Key: HDFS-11938
 URL: https://issues.apache.org/jira/browse/HDFS-11938
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Yongjun Zhang


We run into quite a few customer cases involving authentication failures related 
to the KMS delegation token. It would be nice to see a log entry for each stage 
of the token's lifecycle:
1. creation
2. renewal
3. removal upon cancellation
4. removal upon expiration
That way, when we correlate the logs for the same DT, we get a good picture of 
what is going on and what could have caused the authentication failure.

The same is applicable to other delegation tokens.

NOTE: When logging info about a delegation token, we must not leak the user's 
secret info.
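As a sketch of the NOTE above, lifecycle logs can carry a masked token identifier that is stable enough to correlate the four stages without exposing the secret. The helper below is hypothetical, not an existing KMS API:

```java
// Hypothetical helper: keeps enough of the token identifier to correlate
// log lines for the same DT, while hiding most of it.
public class TokenLogUtil {
    public static String mask(String tokenIdent) {
        if (tokenIdent == null || tokenIdent.length() <= 8) {
            return "****";
        }
        // Show only the first and last four characters.
        return tokenIdent.substring(0, 4) + "..."
            + tokenIdent.substring(tokenIdent.length() - 4);
    }
}
```

Each lifecycle event (created / renewed / cancelled / expired) would then log mask(ident) plus a timestamp, which is enough to line up the stages for one token.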







Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-06-06 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/337/

[Jun 5, 2017 4:21:03 PM] (brahma) HADOOP-14440. Add metrics for connections 
dropped. Contributed by Eric
[Jun 5, 2017 6:26:56 PM] (liuml07) HADOOP-14428. s3a: mkdir appears to be 
broken. Contributed by Mingliang
[Jun 5, 2017 8:16:57 PM] (jianhe) YARN-6683. Invalid event: COLLECTOR_UPDATE at 
KILLED.  Contributed by
[Jun 5, 2017 8:18:27 PM] (kihwal) HDFS-10816. 
TestComputeInvalidateWork#testDatanodeReRegistration fails
[Jun 5, 2017 8:21:22 PM] (arp) HDFS-11928. Segment overflow in 
FileDistributionCalculator. Contributed
[Jun 5, 2017 10:56:43 PM] (liuml07) HADOOP-14478. Optimize 
NativeAzureFsInputStream for positional reads.
[Jun 5, 2017 11:31:03 PM] (yzhang) HDFS-11914. Add more diagnosis info for 
fsimage transfer failure.
[Jun 6, 2017 4:31:40 AM] (brahma) HADOOP-14431. ModifyTime of FileStatus 
returned by SFTPFileSystem's


[Error replacing 'FILE' - Workspace is not accessible]


[jira] [Created] (HDFS-11937) Ozone: Support range in getKey operation

2017-06-06 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-11937:
---

 Summary: Ozone: Support range in getKey operation
 Key: HDFS-11937
 URL: https://issues.apache.org/jira/browse/HDFS-11937
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Anu Engineer
Assignee: Anu Engineer


We need to support HTTP range requests so that users can read a key by byte range.
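To illustrate, a ranged request would carry a header like {{Range: bytes=100-199}}, which the key handler has to turn into an offset and length. The sketch below is a hedged, standalone illustration (single inclusive range only; suffix and multi-range forms omitted; it is not the actual Ozone handler):

```java
// Hypothetical sketch of parsing a single-range "Range: bytes=start-end"
// header value into an (offset, length) pair.
public class ByteRange {
    public final long offset;
    public final long length;

    private ByteRange(long offset, long length) {
        this.offset = offset;
        this.length = length;
    }

    public static ByteRange parse(String headerValue) {
        if (!headerValue.startsWith("bytes=")) {
            throw new IllegalArgumentException("unsupported range unit");
        }
        String[] parts = headerValue.substring("bytes=".length()).split("-", 2);
        long start = Long.parseLong(parts[0]);
        long end = Long.parseLong(parts[1]);   // end is inclusive per RFC 7233
        return new ByteRange(start, end - start + 1);
    }
}
```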






[jira] [Created] (HDFS-11936) Ozone: TestNodeManager times out before it is able to find all nodes

2017-06-06 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-11936:
---

 Summary: Ozone: TestNodeManager times out before it is able to 
find all nodes
 Key: HDFS-11936
 URL: https://issues.apache.org/jira/browse/HDFS-11936
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Anu Engineer


During the pre-commit build at 
https://builds.apache.org/job/PreCommit-HDFS-Build/19795/testReport/
we detected that a test in TestNodeManager is failing, probably because the 
test needs more time to execute on Jenkins. This might be related to 
HDFS-11919.

The test failure report follows.
==
{noformat}
Regression

org.apache.hadoop.ozone.scm.node.TestNodeManager.testScmStatsFromNodeReport

Failing for the past 1 build (Since Failed#19795 )
Took 0.51 sec.
Error Message

expected:<2> but was:<18000>
Stacktrace

java.lang.AssertionError: expected:<2> but was:<18000>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.ozone.scm.node.TestNodeManager.testScmStatsFromNodeReport(TestNodeManager.java:972)
Standard Output

2017-06-06 13:45:30,909 [main] INFO   - Data node with ID: 
732ebd32-a926-44c5-afbb-c9f87513a67c Registered.
2017-06-06 13:45:30,937 [main] INFO   - Data node with ID: 
6860fd5d-94dc-4ba8-acd0-41cc3fa7232d Registered.
2017-06-06 13:45:30,971 [main] INFO   - Data node with ID: 
cad7174c-204c-4806-b3af-c874706d4bd9 Registered.
2017-06-06 13:45:30,996 [main] INFO   - Data node with ID: 
0130a672-719d-4b68-9a1e-13046f4281ff Registered.
2017-06-06 13:45:31,021 [main] INFO   - Data node with ID: 
8d9ea5d4-6752-48d4-9bf0-adb0bd1a651a Registered.
2017-06-06 13:45:31,046 [main] INFO   - Data node with ID: 
f122e372-5a38-476b-97dc-5ae449190485 Registered.
2017-06-06 13:45:31,071 [main] INFO   - Data node with ID: 
5750eb03-c1ac-4b3a-bc59-c4d9481e245b Registered.
2017-06-06 13:45:31,097 [main] INFO   - Data node with ID: 
aa2d90a1-9e85-41f8-a4e5-35c7d2ed7299 Registered.
2017-06-06 13:45:31,122 [main] INFO   - Data node with ID: 
5e52bf5c-7050-4fc9-bf10-0e52650229ee Registered.
2017-06-06 13:45:31,147 [main] INFO   - Data node with ID: 
eaac7b8f-a556-4afc-9163-7309f7ccea18 Registered.
2017-06-06 13:45:31,224 [SCM Heartbeat Processing Thread - 0] INFO   - 
Current Thread is interrupted, shutting down HB processing thread for Node 
Manager.
{noformat}
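If the root cause is indeed timing, the usual remedy is for the test to poll for the expected SCM stats with a deadline instead of asserting immediately. Hadoop's GenericTestUtils.waitFor provides this; the standalone sketch below illustrates the pattern without the Hadoop dependency:

```java
import java.util.function.Supplier;

// Standalone sketch of a poll-until-true helper with a deadline, the usual
// remedy for tests that assert before slow background work completes.
public class WaitUtil {
    public static boolean waitFor(Supplier<Boolean> condition,
                                  long intervalMillis,
                                  long timeoutMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.get()) {
                return true;
            }
            Thread.sleep(intervalMillis);
        }
        return condition.get();  // one last check at the deadline
    }
}
```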






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-06-06 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/426/

[Jun 5, 2017 4:21:03 PM] (brahma) HADOOP-14440. Add metrics for connections 
dropped. Contributed by Eric
[Jun 5, 2017 6:26:56 PM] (liuml07) HADOOP-14428. s3a: mkdir appears to be 
broken. Contributed by Mingliang
[Jun 5, 2017 8:16:57 PM] (jianhe) YARN-6683. Invalid event: COLLECTOR_UPDATE at 
KILLED.  Contributed by
[Jun 5, 2017 8:18:27 PM] (kihwal) HDFS-10816. 
TestComputeInvalidateWork#testDatanodeReRegistration fails
[Jun 5, 2017 8:21:22 PM] (arp) HDFS-11928. Segment overflow in 
FileDistributionCalculator. Contributed
[Jun 5, 2017 10:56:43 PM] (liuml07) HADOOP-14478. Optimize 
NativeAzureFsInputStream for positional reads.
[Jun 5, 2017 11:31:03 PM] (yzhang) HDFS-11914. Add more diagnosis info for 
fsimage transfer failure.
[Jun 6, 2017 4:31:40 AM] (brahma) HADOOP-14431. ModifyTime of FileStatus 
returned by SFTPFileSystem's




-1 overall


The following subsystems voted -1:
findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-common-project/hadoop-minikdc 
   Possible null pointer dereference in 
org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called 
method Dereferenced at 
MiniKdc.java:org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value 
of called method Dereferenced at MiniKdc.java:[line 368] 

FindBugs :

   module:hadoop-common-project/hadoop-auth 
   
org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(HttpServletRequest,
 HttpServletResponse) makes inefficient use of keySet iterator instead of 
entrySet iterator At MultiSchemeAuthenticationHandler.java:of keySet iterator 
instead of entrySet iterator At MultiSchemeAuthenticationHandler.java:[line 
192] 
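The keySet-iterator finding above refers to a common pattern in which every key costs an extra map lookup. A generic illustration of the flagged form and the preferred one (not the actual MultiSchemeAuthenticationHandler code):

```java
import java.util.Map;

public class IterDemo {
    // Inefficient form flagged by FindBugs: one extra hash lookup per key
    // via map.get(key).
    public static int sumViaKeySet(Map<String, Integer> map) {
        int sum = 0;
        for (String key : map.keySet()) {
            sum += map.get(key);
        }
        return sum;
    }

    // Preferred: entrySet yields key and value together, no extra lookup.
    public static int sumViaEntrySet(Map<String, Integer> map) {
        int sum = 0;
        for (Map.Entry<String, Integer> e : map.entrySet()) {
            sum += e.getValue();
        }
        return sum;
    }
}
```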

FindBugs :

   module:hadoop-common-project/hadoop-common 
   org.apache.hadoop.crypto.CipherSuite.setUnknownValue(int) 
unconditionally sets the field unknownValue At CipherSuite.java:unknownValue At 
CipherSuite.java:[line 44] 
   org.apache.hadoop.crypto.CryptoProtocolVersion.setUnknownValue(int) 
unconditionally sets the field unknownValue At 
CryptoProtocolVersion.java:unknownValue At CryptoProtocolVersion.java:[line 67] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of 
called method Dereferenced at 
FileUtil.java:org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to 
return value of called method Dereferenced at FileUtil.java:[line 118] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, 
File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path,
 File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:[line 387] 
   Return value of org.apache.hadoop.fs.permission.FsAction.or(FsAction) 
ignored, but method has no side effect At FTPFileSystem.java:but method has no 
side effect At FTPFileSystem.java:[line 421] 
   Useless condition:lazyPersist == true at this point At 
CommandWithDestination.java:[line 502] 
   org.apache.hadoop.io.DoubleWritable.compareTo(DoubleWritable) 
incorrectly handles double value At DoubleWritable.java: At 
DoubleWritable.java:[line 78] 
   org.apache.hadoop.io.DoubleWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles double value At DoubleWritable.java:int) 
incorrectly handles double value At DoubleWritable.java:[line 97] 
   org.apache.hadoop.io.FloatWritable.compareTo(FloatWritable) incorrectly 
handles float value At FloatWritable.java: At FloatWritable.java:[line 71] 
   org.apache.hadoop.io.FloatWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles float value At FloatWritable.java:int) 
incorrectly handles float value At FloatWritable.java:[line 89] 
   Possible null pointer dereference in 
org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return 
value of called method Dereferenced at 
IOUtils.java:org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) 
due to return value of called method Dereferenced at IOUtils.java:[line 351] 
   org.apache.hadoop.io.erasurecode.ECSchema.toString() makes inefficient 
use of keySet iterator instead of entrySet iterator At ECSchema.java:keySet 
iterator instead of entrySet iterator At ECSchema.java:[line 193] 
   Possible bad parsing of shift operation in 
org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At 
Utils.java:operation in 
org.apache.hadoop.io.file.tfile

[jira] [Resolved] (HDFS-11086) DataNode disk check improvements

2017-06-06 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDFS-11086.
--
Resolution: Done

Resolving as all the sub-tasks are fixed.

> DataNode disk check improvements
> 
>
> Key: HDFS-11086
> URL: https://issues.apache.org/jira/browse/HDFS-11086
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> This Jira tracks a few improvements to DataNode’s usage of DiskChecker to 
> address the following problems:
> # Checks are serialized so a single slow disk can indefinitely delay checking 
> the rest.
> # Related to 1, no detection of stalled checks.
> # Lack of granularity. A single IO error initiates checking all disks.
> # Inconsistent activation. Some DataNode IO failures trigger disk checks but 
> not all.
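Problems 1 and 2 above are typically addressed by running each disk check on its own thread and bounding how long we wait for it. The sketch below is a generic illustration of that pattern, not the actual DiskChecker changes:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

// Generic sketch: run per-disk checks in parallel and treat any check that
// exceeds the deadline as a failure, so one stalled disk cannot delay
// checking the rest.
public class ParallelDiskCheck {
    public static List<Boolean> checkAll(List<Callable<Boolean>> checks,
                                         long timeoutMillis)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(checks.size());
        try {
            List<Future<Boolean>> futures = new ArrayList<>();
            for (Callable<Boolean> check : checks) {
                futures.add(pool.submit(check));
            }
            List<Boolean> results = new ArrayList<>();
            for (Future<Boolean> f : futures) {
                try {
                    results.add(f.get(timeoutMillis, TimeUnit.MILLISECONDS));
                } catch (ExecutionException | TimeoutException e) {
                    results.add(false);  // failing or stalled disk
                }
            }
            return results;
        } finally {
            pool.shutdownNow();
        }
    }
}
```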






[jira] [Created] (HDFS-11935) Ozone: TestStorageContainerManager#testRpcPermission fails when ipv6 address used

2017-06-06 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-11935:


 Summary: Ozone: TestStorageContainerManager#testRpcPermission 
fails when ipv6 address used
 Key: HDFS-11935
 URL: https://issues.apache.org/jira/browse/HDFS-11935
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: HDFS-7240
Reporter: Yiqun Lin
Assignee: Yiqun Lin


 TestStorageContainerManager#testRpcPermission fails when an IPv6 address is 
used in my local environment. The stack trace:
{noformat}
java.lang.IllegalArgumentException: Does not contain a valid host:port 
authority: 0:0:0:0:0:0:0:0:54846:9863
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:213)
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:153)
at 
org.apache.hadoop.ozone.OzoneClientUtils.getScmAddressForBlockClients(OzoneClientUtils.java:193)
at 
org.apache.hadoop.ozone.ksm.KeySpaceManager.getScmBlockClient(KeySpaceManager.java:117)
at 
org.apache.hadoop.ozone.ksm.KeySpaceManager.(KeySpaceManager.java:100)
at 
org.apache.hadoop.ozone.MiniOzoneCluster$Builder.build(MiniOzoneCluster.java:373)
at 
org.apache.hadoop.ozone.TestStorageContainerManager.testRpcPermissionWithConf(TestStorageContainerManager.java:95)
at 
org.apache.hadoop.ozone.TestStorageContainerManager.testRpcPermission(TestStorageContainerManager.java:79)
{noformat}
{{OzoneClientUtils#getHostName}} will return a wrong host name when the input 
value is an IPv6 address.
The root cause is that we use the listener address (which can be an 
{{Inet6Address}} or {{Inet4Address}} instance) to update the address in 
{{OzoneClientUtils#updateListenAddress}}.
{code}
  public static InetSocketAddress updateListenAddress(
      OzoneConfiguration conf, String rpcAddressKey,
      InetSocketAddress addr, RPC.Server rpcServer) {
    InetSocketAddress listenAddr = rpcServer.getListenerAddress();
    InetSocketAddress updatedAddr = new InetSocketAddress(
        addr.getHostString(), listenAddr.getPort());
    conf.set(rpcAddressKey,
        listenAddr.getHostString() + ":" + listenAddr.getPort());
    return updatedAddr;
  }
{code}
We can use {{updatedAddr.getHostString() + ":" + listenAddr.getPort()}} to 
replace {{listenAddr.getHostString() + ":" + listenAddr.getPort()}}.
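For context, the parse fails because joining a raw IPv6 literal and a port with ':' is ambiguous: {{0:0:0:0:0:0:0:0:54846}} cannot be split back into host and port. A generic illustration of the safe, bracketed join (assumed helper, not the Ozone fix itself):

```java
// Generic illustration: an IPv6 literal must be bracketed before a port is
// appended, otherwise the colons of the address and of the port separator
// are indistinguishable.
public class HostPort {
    public static String join(String host, int port) {
        if (host.contains(":") && !host.startsWith("[")) {
            return "[" + host + "]:" + port;  // IPv6 literal
        }
        return host + ":" + port;             // hostname or IPv4
    }
}
```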






[jira] [Created] (HDFS-11934) Add assertion to TestDefaultNameNodePort#testGetAddressFromConf

2017-06-06 Thread legend (JIRA)
legend created HDFS-11934:
-

 Summary: Add assertion to 
TestDefaultNameNodePort#testGetAddressFromConf
 Key: HDFS-11934
 URL: https://issues.apache.org/jira/browse/HDFS-11934
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Affects Versions: 3.0.0-alpha3
Reporter: legend
 Fix For: 3.0.0-alpha3


Add an additional assertion to TestDefaultNameNodePort#testGetAddressFromConf, 
verifying that the address resolved after setDefaultUri(conf, "foo:555") has 
port 555.
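The expected behavior can be illustrated with plain java.net.URI parsing of the authority; the snippet below is a standalone illustration, not the actual test code:

```java
import java.net.URI;

public class PortCheck {
    // An authority of "foo:555" parses to host "foo" and port 555, which is
    // what the extra assertion would verify against the configured default URI.
    public static int portOf(String authority) {
        return URI.create("hdfs://" + authority).getPort();
    }
}
```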


