[jira] [Created] (HADOOP-14549) Use GenericTestUtils.setLogLevel when available

2017-06-19 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-14549:
--

 Summary: Use GenericTestUtils.setLogLevel when available
 Key: HADOOP-14549
 URL: https://issues.apache.org/jira/browse/HADOOP-14549
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Akira Ajisaka


Based on Brahma's 
[comment|https://issues.apache.org/jira/browse/HADOOP-14296?focusedCommentId=16054390&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16054390]
 in HADOOP-14296, it's better to use GenericTestUtils.setLogLevel where possible 
to make the migration easier.
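
For illustration, a minimal sketch of the idea, assuming the log4j overload of 
the utility (the available overloads vary by Hadoop version, and the logger 
name shown is just an example):

{code}
import org.apache.hadoop.test.GenericTestUtils;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

// Instead of calling the log4j API directly in each test, e.g.
//   Logger.getLogger(SomeClass.class).setLevel(Level.DEBUG);
// route the call through the shared test utility, so only
// GenericTestUtils has to change when the logging API migrates:
GenericTestUtils.setLogLevel(Logger.getLogger("org.apache.hadoop.fs"), Level.DEBUG);
{code}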






[jira] [Created] (HADOOP-14548) S3Guard: issues running parallel tests w/ S3N

2017-06-19 Thread Aaron Fabbri (JIRA)
Aaron Fabbri created HADOOP-14548:
-

 Summary: S3Guard: issues running parallel tests w/ S3N 
 Key: HADOOP-14548
 URL: https://issues.apache.org/jira/browse/HADOOP-14548
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Aaron Fabbri


In general, running S3Guard and parallel tests with both the S3A and S3N 
contract tests enabled is asking for trouble: S3Guard code assumes there are no 
other non-S3Guard clients modifying the bucket.

The goal of this JIRA is to:

- Discuss current failures running `mvn verify -Dparallel-tests -Ds3guard 
-Ddynamo` with both the S3A and S3N contract tests configured (see the config 
sketch after this list).
- Identify any failures here that are worth looking into.
- Document (or enforce) that people should not do this, or should expect 
failures if they do.
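
For context, a sketch of the kind of auth-keys.xml entries that enable both 
contract suites at once (property names follow the hadoop-aws test setup of 
this era; the bucket name is a placeholder):

{code}
<!-- auth-keys.xml (test resources); example-bucket is a placeholder -->
<property>
  <name>fs.contract.test.fs.s3a</name>
  <value>s3a://example-bucket</value>
</property>
<property>
  <name>fs.contract.test.fs.s3n</name>
  <value>s3n://example-bucket</value>
</property>
{code}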







[jira] [Created] (HADOOP-14547) [WASB] the configured retry policy is not used for all storage operations.

2017-06-19 Thread Thomas (JIRA)
Thomas created HADOOP-14547:
---

 Summary: [WASB] the configured retry policy is not used for all 
storage operations.
 Key: HADOOP-14547
 URL: https://issues.apache.org/jira/browse/HADOOP-14547
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/azure
Affects Versions: 2.9.0, 3.0.0-alpha4
Reporter: Thomas
Assignee: Thomas
 Fix For: 2.9.0, 3.0.0-alpha4


There are a few places where the WASB retry policy is not used and the Azure 
Storage SDK default retry policy is used instead. These include some calls to 
blob.exists(), container.exists(), blob.delete(), and operations with secure 
mode enabled (fs.azure.secure.mode = true).

You can set a breakpoint on 
com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry in the Azure 
Storage SDK and observe that the WASB-configured retry policy 
(DEFAULT_MAX_RETRY_ATTEMPTS, etc.) is not used.
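
A hedged sketch of the direction of the fix, assuming the Azure Storage SDK's 
BlobRequestOptions API: pass the WASB-configured policy explicitly on each 
call instead of relying on the SDK default. The container variable and the 
backoff/attempt values are illustrative, not the actual WASB defaults.

{code}
import com.microsoft.azure.storage.RetryExponentialRetry;
import com.microsoft.azure.storage.blob.BlobRequestOptions;
import com.microsoft.azure.storage.blob.CloudBlockBlob;

// Build request options carrying the WASB-configured retry policy:
BlobRequestOptions options = new BlobRequestOptions();
options.setRetryPolicyFactory(new RetryExponentialRetry(
    3 * 1000,   // delta backoff in ms (illustrative)
    30));       // max attempts, e.g. DEFAULT_MAX_RETRY_ATTEMPTS

// Pass the options on each storage call, e.g. exists():
CloudBlockBlob blob = container.getBlockBlobReference("key");
boolean present = blob.exists(null /* access condition */, options,
    null /* operation context */);
{code}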






[jira] [Created] (HADOOP-14546) [WASB] Concurrent I/O does not work when secure.mode is enabled.

2017-06-19 Thread Thomas (JIRA)
Thomas created HADOOP-14546:
---

 Summary: [WASB] Concurrent I/O does not work when secure.mode is 
enabled.
 Key: HADOOP-14546
 URL: https://issues.apache.org/jira/browse/HADOOP-14546
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/azure
Affects Versions: 2.9.0, 3.0.0-alpha4
Reporter: Thomas
Assignee: Thomas
 Fix For: 2.9.0, 3.0.0-alpha4


This change allows the concurrent I/O feature 
(fs.azure.io.read.tolerate.concurrent.append = true) to work when secure mode 
is enabled (fs.azure.secure.mode = true).

While running the test TestAzureConcurrentOutOfBandIo.testReadOOBWrites, I 
discovered that it fails when fs.azure.secure.mode = true with the error below:

com.microsoft.azure.storage.StorageException: The condition specified using HTTP conditional header(s) is not met.
  at com.microsoft.azure.storage.core.Utility.initIOException(Utility.java:733)
  at com.microsoft.azure.storage.blob.BlobInputStream.dispatchRead(BlobInputStream.java:264)
  at com.microsoft.azure.storage.blob.BlobInputStream.readInternal(BlobInputStream.java:448)
  at com.microsoft.azure.storage.blob.BlobInputStream.read(BlobInputStream.java:420)
  at java.io.BufferedInputStream.read1(BufferedInputStream.java:284)
  at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
  at java.io.DataInputStream.read(DataInputStream.java:149)
  at org.apache.hadoop.fs.azure.TestAzureConcurrentOutOfBandIo.testReadOOBWrites(TestAzureConcurrentOutOfBandIo.java:167)

There were a couple of problems causing this failure:

1) AzureNativeFileSystemStore.connectToAzureStorageInSecureMode was disabling 
concurrent I/O by setting fs.azure.io.read.tolerate.concurrent.append to false.

2) SendRequestIntercept was unnecessarily updating the SAS for the request. 
Since this intercept only sets the request header "If-Match: *" to override the 
existing precondition, it is not necessary to update the SAS.

The above issues have been fixed and a new test case has been added so that 
testReadOOBWrites now runs both with and without secure mode enabled.
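
For reference, a hedged sketch of what the header-only intercept amounts to, 
using the Azure Storage SDK's sending-request event (no SAS manipulation 
needed); this is an illustration, not the patch itself:

{code}
import java.net.HttpURLConnection;
import com.microsoft.azure.storage.OperationContext;
import com.microsoft.azure.storage.SendingRequestEvent;
import com.microsoft.azure.storage.StorageEvent;

OperationContext opContext = new OperationContext();
opContext.getSendingRequestEventHandler().addListener(
    new StorageEvent<SendingRequestEvent>() {
      @Override
      public void eventOccurred(SendingRequestEvent eventArg) {
        // Override the read precondition so concurrent appends do not
        // fail the read; nothing else about the request is modified.
        HttpURLConnection conn =
            (HttpURLConnection) eventArg.getConnectionObject();
        conn.setRequestProperty("If-Match", "*");
      }
    });
{code}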






Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-06-19 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/350/

[Jun 18, 2017 2:23:42 PM] (naganarasimha_gr) YARN-6517. Fix warnings from Spotbugs in hadoop-yarn-common (addendum).
[Jun 19, 2017 12:16:45 AM] (iwasakims) HADOOP-14424. Add CRC32C performance test. Contributed by LiXin Ge.
[Jun 19, 2017 10:09:18 AM] (aajisaka) HADOOP-14538. Fix TestFilterFileSystem and TestHarFileSystem failures
[Jun 19, 2017 10:39:36 AM] (aajisaka) HADOOP-14540. Replace MRv1 specific terms in HostsFileReader.




-1 overall


The following subsystems voted -1:
compile mvninstall unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.TestMaintenanceState 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService 
   hadoop.yarn.server.nodemanager.TestNodeManagerShutdown 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.client.api.impl.TestNMClient 
   hadoop.yarn.client.api.impl.TestDistributedScheduling 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 

Timed out junit tests :

   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
   org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands 
   org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore 
   org.apache.hadoop.yarn.server.resourcemanager.TestReservationSystemWithRMHA 
   org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA 
   org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA 
   org.apache.hadoop.yarn.server.resourcemanager.TestRMHAForNodeLabels 
  

   mvninstall:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/350/artifact/out/patch-mvninstall-root.txt
  [504K]

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/350/artifact/out/patch-compile-root.txt
  [20K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/350/artifact/out/patch-compile-root.txt
  [20K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/350/artifact/out/patch-compile-root.txt
  [20K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/350/artifact/out/patch-unit-hadoop-assemblies.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/350/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [576K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/350/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [56K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/350/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/350/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [76K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/350/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [324K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/350/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
  [16K]
   

[jira] [Resolved] (HADOOP-14450) ADLS Python client inconsistent when used in tandem with AdlFileSystem

2017-06-19 Thread Atul Sikaria (JIRA)

 [ https://issues.apache.org/jira/browse/HADOOP-14450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Atul Sikaria resolved HADOOP-14450.
---
Resolution: Fixed

> ADLS Python client inconsistent when used in tandem with AdlFileSystem
> --
>
> Key: HADOOP-14450
> URL: https://issues.apache.org/jira/browse/HADOOP-14450
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Reporter: Sailesh Mukil
>Assignee: Atul Sikaria
>  Labels: infrastructure
>
> Impala uses the AdlFileSystem connector to talk to ADLS. As a part of the 
> Impala tests, we drop tables and verify that the files belonging to that 
> table have been dropped, for all filesystems that Impala supports. These 
> tests, however, fail with ADLS.
> If I use the Hadoop ADLS connector to delete a file, and then list the 
> parent directory of that file using the Python client linked below within a 
> second, the client still says that the file is available in ADLS.
> This is the Python client from Microsoft that we're using in our testing:
> https://github.com/Azure/azure-data-lake-store-python
> Their release notes say that it's still a "pre-release preview":
> https://github.com/Azure/azure-data-lake-store-python/releases
> Questions for the ADLS folks:
> Is this a known issue? If so, will it be fixed soon? Or is this expected 
> behavior?
> I'm able to deterministically reproduce it in my tests, with Impala on ADLS.
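
For reference, a minimal sketch of the Hadoop side of the repro described 
above (the URI and paths are placeholders; the follow-up listing is done by 
the separate Python client, not shown here):

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Delete a file through the AdlFileSystem connector...
FileSystem fs = FileSystem.get(
    URI.create("adl://example.azuredatalakestore.net"), new Configuration());
fs.delete(new Path("/tmp/table/part-00000"), false);
// ...then list /tmp/table from the Python client within the next
// second; per this report, the deleted file may still be shown.
{code}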






[jira] [Created] (HADOOP-14545) Uninitialized S3A instance NPEs on toString()

2017-06-19 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14545:
---

 Summary: Uninitialized S3A instance NPEs on toString()
 Key: HADOOP-14545
 URL: https://issues.apache.org/jira/browse/HADOOP-14545
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.8.1
Reporter: Steve Loughran
Priority: Minor


You can't log an uninitialized S3AFileSystem instance without getting a stack trace:
{code}
java.lang.NullPointerException
at org.apache.hadoop.fs.s3a.S3AFileSystem.getDefaultBlockSize(S3AFileSystem.java:2131)
at org.apache.hadoop.fs.s3a.S3AFileSystem.toString(S3AFileSystem.java:2148)
{code}
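
A minimal trigger sketch (assuming the 2.8.x code paths named in the trace): 
the NPE comes from logging the instance before initialize() has run.

{code}
import org.apache.hadoop.fs.s3a.S3AFileSystem;

S3AFileSystem fs = new S3AFileSystem();
// initialize(uri, conf) has not been called, so internal state is
// still null and toString() dereferences it:
String s = fs.toString();   // throws NullPointerException
{code}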






[jira] [Created] (HADOOP-14544) DistCp documentation for command line options is misaligned.

2017-06-19 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-14544:
--

 Summary: DistCp documentation for command line options is 
misaligned.
 Key: HADOOP-14544
 URL: https://issues.apache.org/jira/browse/HADOOP-14544
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.3
Reporter: Chris Nauroth
Priority: Minor


In the DistCp documentation, the Command Line Options section appears to be 
misaligned/incorrect in some of the Notes for release 2.7.3. This is the 
current stable release, so it's likely that users will land on this version of 
the document.






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-06-19 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/439/

[Jun 18, 2017 2:23:42 PM] (naganarasimha_gr) YARN-6517. Fix warnings from Spotbugs in hadoop-yarn-common (addendum).
[Jun 19, 2017 12:16:45 AM] (iwasakims) HADOOP-14424. Add CRC32C performance test. Contributed by LiXin Ge.




-1 overall


The following subsystems voted -1:
findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-common-project/hadoop-minikdc 
   Possible null pointer dereference in org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called method Dereferenced at MiniKdc.java:org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called method Dereferenced at MiniKdc.java:[line 368] 

FindBugs :

   module:hadoop-common-project/hadoop-auth 
   org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(HttpServletRequest, HttpServletResponse) makes inefficient use of keySet iterator instead of entrySet iterator At MultiSchemeAuthenticationHandler.java:of keySet iterator instead of entrySet iterator At MultiSchemeAuthenticationHandler.java:[line 192] 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   org.apache.hadoop.crypto.CipherSuite.setUnknownValue(int) unconditionally sets the field unknownValue At CipherSuite.java:unknownValue At CipherSuite.java:[line 44] 
   org.apache.hadoop.crypto.CryptoProtocolVersion.setUnknownValue(int) unconditionally sets the field unknownValue At CryptoProtocolVersion.java:unknownValue At CryptoProtocolVersion.java:[line 67] 
   Possible null pointer dereference in org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of called method Dereferenced at FileUtil.java:org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of called method Dereferenced at FileUtil.java:[line 118] 
   Possible null pointer dereference in org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, File, Path, File) due to return value of called method Dereferenced at RawLocalFileSystem.java:org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, File, Path, File) due to return value of called method Dereferenced at RawLocalFileSystem.java:[line 387] 
   Return value of org.apache.hadoop.fs.permission.FsAction.or(FsAction) ignored, but method has no side effect At FTPFileSystem.java:but method has no side effect At FTPFileSystem.java:[line 421] 
   Useless condition:lazyPersist == true at this point At CommandWithDestination.java:[line 502] 
   org.apache.hadoop.io.DoubleWritable.compareTo(DoubleWritable) incorrectly handles double value At DoubleWritable.java: At DoubleWritable.java:[line 78] 
   org.apache.hadoop.io.DoubleWritable$Comparator.compare(byte[], int, int, byte[], int, int) incorrectly handles double value At DoubleWritable.java:int) incorrectly handles double value At DoubleWritable.java:[line 97] 
   org.apache.hadoop.io.FloatWritable.compareTo(FloatWritable) incorrectly handles float value At FloatWritable.java: At FloatWritable.java:[line 71] 
   org.apache.hadoop.io.FloatWritable$Comparator.compare(byte[], int, int, byte[], int, int) incorrectly handles float value At FloatWritable.java:int) incorrectly handles float value At FloatWritable.java:[line 89] 
   Possible null pointer dereference in org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return value of called method Dereferenced at IOUtils.java:org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return value of called method Dereferenced at IOUtils.java:[line 351] 
   org.apache.hadoop.io.erasurecode.ECSchema.toString() makes inefficient use of keySet iterator instead of entrySet iterator At ECSchema.java:keySet iterator instead of entrySet iterator At ECSchema.java:[line 193] 
   Possible bad parsing of shift operation in org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At Utils.java:operation in org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At Utils.java:[line 398] 
   org.apache.hadoop.metrics2.lib.DefaultMetricsFactory.setInstance(MutableMetricsFactory) unconditionally sets the field mmfImpl At DefaultMetricsFactory.java:mmfImpl At DefaultMetricsFactory.java:[line 49] 
   org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.setMiniClusterMode(boolean) unconditionally sets the field miniClusterMode At DefaultMetricsSystem.java:miniClusterMode At DefaultMetricsSystem.java:[line 100] 
   Useless object stored in variable seqOs of method 

[jira] [Created] (HADOOP-14543) Should use getAversion() while setting the zkacl

2017-06-19 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-14543:
-

 Summary: Should use getAversion() while setting the zkacl
 Key: HADOOP-14543
 URL: https://issues.apache.org/jira/browse/HADOOP-14543
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


While setting the zkacl we use {{getVersion()}}, which is the dataVersion; 
ideally we should use {{getAversion()}}. If there is any ACL change (i.e. a 
realm change, etc.), we set the ACL with the dataVersion, which causes a 
BadVersion error and *the process will not start*. See 
[here|https://issues.apache.org/jira/browse/HDFS-11403?focusedCommentId=16051804&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16051804]

The offending call:

{{zkClient.setACL(path, zkAcl, stat.getVersion());}}
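
A sketch of the proposed change, using the ZooKeeper client API (the setACL 
version parameter is the ACL version, so Stat.getAversion() is the matching 
value; zkClient, path, and zkAcl are as in the existing code):

{code}
import org.apache.zookeeper.data.Stat;

Stat stat = new Stat();
zkClient.getACL(path, stat);  // populates stat, including the aversion
zkClient.setACL(path, zkAcl, stat.getAversion());  // not stat.getVersion()
{code}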








[jira] [Created] (HADOOP-14542) Add IOUtils.cleanup or something that accepts slf4j logger API

2017-06-19 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-14542:
--

 Summary: Add IOUtils.cleanup or something that accepts slf4j 
logger API
 Key: HADOOP-14542
 URL: https://issues.apache.org/jira/browse/HADOOP-14542
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Akira Ajisaka


Split from HADOOP-14539.
Currently IOUtils.cleanup only accepts the commons-logging logger API. Since we 
are migrating the APIs to slf4j, the slf4j logger API should be accepted as 
well. Adding an overload {{IOUtils.cleanup(Logger, Closeable...)}} would cause 
existing calls like {{IOUtils.cleanup(null, Closeable)}} to fail (an 
incompatible change, since the null argument becomes ambiguous between the two 
overloads), so it's better to use a different method name to avoid the conflict.
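
A minimal sketch of a separately-named helper accepting the slf4j API (the 
name cleanupWithLogger is one possibility, not settled by this issue):

{code}
import java.io.Closeable;
import org.slf4j.Logger;

public static void cleanupWithLogger(Logger log, Closeable... closeables) {
  for (Closeable c : closeables) {
    if (c != null) {
      try {
        c.close();
      } catch (Throwable e) {
        // Swallow the exception, as cleanup() does today, but log it
        // through slf4j when a logger is supplied:
        if (log != null) {
          log.debug("Exception in closing {}", c, e);
        }
      }
    }
  }
}
{code}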


