[jira] [Created] (HADOOP-14131) kms.sh create erroneous dir

2017-02-27 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14131:
---

 Summary: kms.sh create erroneous dir
 Key: HADOOP-14131
 URL: https://issues.apache.org/jira/browse/HADOOP-14131
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms
Affects Versions: 2.9.0
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Minor


{{kms.sh start}} creates a dir named {{$\{kms.log.dir\}}} in the current dir. 
Obviously the system property {{kms.log.dir}} is not set correctly, so log4j 
fails to substitute the variable.

HADOOP-14083 introduced the issue by mistakenly moving {{kms.log.dir}} from the 
{{-D}} option to the file {{catalina.properties}}. The same goes for other 
properties that are not used only by Tomcat. They should still be set via the 
{{-D}} option.
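
For illustration, the shape of the fix: Tomcat-only settings can live in 
{{catalina.properties}}, but properties that log4j must substitute (like 
{{kms.log.dir}}) have to reach the JVM as system properties. A hypothetical 
sketch, not the actual kms.sh contents (the variable names are assumptions):

```
# Sketch: pass kms.log.dir to the JVM as a -D option so log4j.properties
# can substitute ${kms.log.dir}; KMS_LOG is a placeholder name.
catalina_opts="${catalina_opts} -Dkms.log.dir=${KMS_LOG}"
```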



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14130) Simplify DynamoDBClientFactory for creating Amazon DynamoDB clients

2017-02-27 Thread Mingliang Liu (JIRA)
Mingliang Liu created HADOOP-14130:
--

 Summary: Simplify DynamoDBClientFactory for creating Amazon 
DynamoDB clients
 Key: HADOOP-14130
 URL: https://issues.apache.org/jira/browse/HADOOP-14130
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Mingliang Liu
Assignee: Mingliang Liu


We are using the deprecated {{AmazonDynamoDBClient}} class to create a DynamoDB 
client instead of the recommended builder. In [HADOOP-13345] we discussed 
preferring a region over an endpoint for users to specify the DynamoDB region 
(if the associated S3 region is unknown or different). In [HADOOP-14027] we 
reported inconsistent behavior when the endpoint and the S3 region differ. We 
also noticed that {{DynamoDBMetadataStore}} sometimes logs a nonsensical 
region. And in [HADOOP-13252] we concluded that a file system URI is not needed 
to create an {{AWSCredentialProvider}}; consequently we don't need to pass the 
file system URI down to create a DynamoDB client.

This JIRA is to change this, best effort.
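
A minimal sketch of the builder-based approach (using the AWS SDK v1 
{{AmazonDynamoDBClientBuilder}}; the region and the credentials-provider wiring 
below are illustrative, not the committed change):

```java
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;

// Sketch: build the client from an explicit region instead of the
// deprecated AmazonDynamoDBClient constructor plus setEndpoint() calls.
// Note that no file system URI is needed anywhere in this path.
AmazonDynamoDB ddb = AmazonDynamoDBClientBuilder.standard()
    .withRegion(Regions.US_WEST_2)          // region chosen for illustration
    .withCredentials(credentialsProvider)   // AWSCredentialsProvider built elsewhere
    .build();
```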






[jira] [Created] (HADOOP-14129) ITestS3ACredentialsInURL sometimes fails

2017-02-27 Thread Sean Mackrory (JIRA)
Sean Mackrory created HADOOP-14129:
--

 Summary: ITestS3ACredentialsInURL sometimes fails
 Key: HADOOP-14129
 URL: https://issues.apache.org/jira/browse/HADOOP-14129
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sean Mackrory
Assignee: Sean Mackrory


This test sometimes fails. I believe it's expected that DynamoDB doesn't have 
access to the credentials if they're embedded in the URL instead of the 
configuration (and IMO that's fine, since the functionality hasn't been in 
previous releases and since we want to discourage this practice, especially now 
that there are better alternatives). Weirdly, I only sometimes get this failure 
on the HADOOP-13345 branch. But if the problem turns out to be what I think it 
is, a simple Assume should fix it.
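
For reference, the kind of guard meant by "a simple Assume" (JUnit 4; the 
condition shown is a hypothetical placeholder for however the test detects that 
the metadata store is in use):

```java
import org.junit.Assume;

// Skip the test rather than fail it when S3Guard cannot see
// URL-embedded credentials (s3GuardEnabled is a hypothetical flag):
Assume.assumeTrue("Skipping: credentials in the URL are not visible to S3Guard",
    !s3GuardEnabled);
```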






[jira] [Created] (HADOOP-14128) ChecksumFs should override rename with overwrite flag

2017-02-27 Thread Mathieu Chataigner (JIRA)
Mathieu Chataigner created HADOOP-14128:
---

 Summary: ChecksumFs should override rename with overwrite flag
 Key: HADOOP-14128
 URL: https://issues.apache.org/jira/browse/HADOOP-14128
 Project: Hadoop Common
  Issue Type: Bug
  Components: common, fs
Reporter: Mathieu Chataigner


When I call FileContext.rename(src, dst, Options.Rename.OVERWRITE) on a LocalFs 
(which extends ChecksumFs), it does not update the crc files.
Every subsequent read of the moved files will fail due to a crc mismatch.
One solution is to override rename(src, dst, overwrite) the same way it's done 
for rename(src, dst), moving the crc files accordingly.
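
The underlying pattern can be sketched with plain java.nio.file (the 
{{.<name>.crc}} sibling convention mirrors ChecksumFs, but the helper below is 
illustrative only; the real fix belongs in the ChecksumFs rename override):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class CrcRenameSketch {
    // Illustrative helper: rename a file AND its checksum sibling,
    // honoring an overwrite flag, so reads after the move still find
    // a crc that matches the data.
    static void renameWithCrc(Path src, Path dst, boolean overwrite)
            throws IOException {
        StandardCopyOption[] opts = overwrite
            ? new StandardCopyOption[] { StandardCopyOption.REPLACE_EXISTING }
            : new StandardCopyOption[0];
        Files.move(src, dst, opts);
        Path srcCrc = src.resolveSibling("." + src.getFileName() + ".crc");
        Path dstCrc = dst.resolveSibling("." + dst.getFileName() + ".crc");
        if (Files.exists(srcCrc)) {
            // Keep the checksum file in sync with the renamed data file.
            Files.move(srcCrc, dstCrc, opts);
        }
    }
}
```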






[jira] [Created] (HADOOP-14127) Add log4j configuration to enable logging in hadoop-distcp's tests

2017-02-27 Thread Xiao Chen (JIRA)
Xiao Chen created HADOOP-14127:
--

 Summary: Add log4j configuration to enable logging in 
hadoop-distcp's tests
 Key: HADOOP-14127
 URL: https://issues.apache.org/jira/browse/HADOOP-14127
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Reporter: Xiao Chen
Assignee: Xiao Chen
Priority: Minor
 Attachments: HADOOP-14127.01.patch

From a review comment on HDFS-9868.
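
A typical test-scope log4j.properties of the kind such a patch adds (the 
fragment below is a generic sketch, not the contents of HADOOP-14127.01.patch):

```
# Sketch of src/test/resources/log4j.properties enabling console logging
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} [%t] %-5p %c{2} - %m%n
```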






Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-02-27 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/242/

[Feb 27, 2017 4:16:36 AM] (kasha) YARN-6215. FairScheduler preemption and 
update should not run
[Feb 27, 2017 4:36:33 AM] (kasha) YARN-6172. FSLeafQueue demand update needs to 
be atomic. (Miklos Szegedi
[Feb 27, 2017 10:39:14 AM] (yqlin) HADOOP-14119. Remove unused imports from 
GzipCodec.java. Contributed by
[Feb 27, 2017 10:46:37 AM] (aajisaka) MAPREDUCE-6841. Fix dead link in 
MapReduce tutorial document.




-1 overall


The following subsystems voted -1:
compile unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.webapp.TestTimelineWebServices 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   
hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 

Timed out junit tests :

   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
  

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/242/artifact/out/patch-compile-root.txt
  [140K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/242/artifact/out/patch-compile-root.txt
  [140K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/242/artifact/out/patch-compile-root.txt
  [140K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/242/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [404K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/242/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/242/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/242/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [72K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/242/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [324K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/242/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timeline-pluginstorage.txt
  [28K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/242/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/242/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-ui.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/242/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-shuffle.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/242/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-hs.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/242/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt
  [52K]

Powered by Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org




Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-02-27 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/330/

[Feb 27, 2017 4:16:36 AM] (kasha) YARN-6215. FairScheduler preemption and 
update should not run
[Feb 27, 2017 4:36:33 AM] (kasha) YARN-6172. FSLeafQueue demand update needs to 
be atomic. (Miklos Szegedi




-1 overall


The following subsystems voted -1:
asflicense unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration 
   hadoop.hdfs.server.datanode.checker.TestThrottledAsyncChecker 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation 
   hadoop.hdfs.server.datanode.TestDataNodeUUID 
   hadoop.yarn.server.nodemanager.containermanager.TestContainerManager 
   hadoop.yarn.server.timeline.webapp.TestTimelineWebServices 
   hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler 
   hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.client.api.impl.TestAMRMClient 

Timed out junit tests :

   org.apache.hadoop.hdfs.server.namenode.TestEditLog 
   org.apache.hadoop.hdfs.TestDFSStripedOutputStream 
   org.apache.hadoop.hdfs.TestBlockStoragePolicy 
   org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/330/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/330/artifact/out/diff-compile-javac-root.txt
  [184K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/330/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/330/artifact/out/diff-patch-shellcheck.txt
  [24K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/330/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/330/artifact/out/whitespace-eol.txt
  [11M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/330/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/330/artifact/out/diff-javadoc-javadoc-root.txt
  [2.2M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/330/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [660K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/330/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [36K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/330/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/330/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [60K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/330/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [324K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/330/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
  [12K]

   asflicense:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/330/artifact/out/patch-asflicense-problems.txt
  [4.0K]

Powered by Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org




[jira] [Created] (HADOOP-14126) remove jackson, joda and other transient aws SDK dependencies from hadoop-aws

2017-02-27 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14126:
---

 Summary: remove jackson, joda and other transient aws SDK 
dependencies from hadoop-aws
 Key: HADOOP-14126
 URL: https://issues.apache.org/jira/browse/HADOOP-14126
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build, fs/s3
Affects Versions: 2.9.0
Reporter: Steve Loughran


With HADOOP-14040 in, we can cut out all declarations of dependencies on 
jackson, joda-time  from the hadoop-aws module, so avoiding it confusing 
downstream projects.






[jira] [Created] (HADOOP-14125) s3guard tool tests aren't isolated; can't run in parallel

2017-02-27 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14125:
---

 Summary: s3guard tool tests aren't isolated; can't run in parallel
 Key: HADOOP-14125
 URL: https://issues.apache.org/jira/browse/HADOOP-14125
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Affects Versions: HADOOP-13345
Reporter: Steve Loughran


The {{S3GuardToolTestBase}} tests don't parallelize and break other tests. This 
can surface if you do a full run with -Ds3guard and -Ddynamo.

# Many of the test paths they create are requested with absolute paths, 
e.g. {{"/test-diff"}}.
# The base class doesn't set up a per-forked-JUnit-test-runner path in the 
bucket.
# There's no cleanup at the end of each test case; teardown() is empty.

Ideally, the tests should be made child classes of {{AbstractS3ATestBase}}, 
with its post-run cleanup. If that can't be done, then the tests must 
# use {{S3ATestUtils.createTestPath(super.getTestPath())}} to create the base 
test path
# clean up that dir in teardown if the FS instance is non-null. 
{{ContractTestUtils.cleanup()}} can do this.

If it happens that the tests cannot run in parallel with others, then the build 
must be changed to exclude them from the parallel test phase and include them 
in the serialized section.
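
The teardown requirement could look roughly like this (the utility names are 
taken from the text above; the exact signature and wiring are a sketch, not the 
committed fix):

```java
@Override
public void teardown() throws Exception {
  // Only attempt cleanup if the file system was actually initialized.
  if (getFileSystem() != null) {
    // ContractTestUtils.cleanup logs deletion problems rather than failing.
    ContractTestUtils.cleanup("TEARDOWN", getFileSystem(), getTestPath());
  }
  super.teardown();
}
```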






[jira] [Resolved] (HADOOP-13834) use JUnit categories for s3a scale tests and others

2017-02-27 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-13834.
-
Resolution: Won't Fix

> use JUnit categories for s3a scale tests and others
> ---
>
> Key: HADOOP-13834
> URL: https://issues.apache.org/jira/browse/HADOOP-13834
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> I did the -Pscale thing in the wrong way. What I should have done is add a 
> junit4 category instead. This is cleaner, more elegant, and more flexible in 
> the future (for example, we could make the s3guard tests their own category).
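
For context, the JUnit 4 category mechanism referred to (the marker interface 
and test names here are hypothetical):

```java
import org.junit.Test;
import org.junit.experimental.categories.Category;

// Hypothetical marker interface identifying scale tests.
interface ScaleTest {}

public class ITestExampleScale {
  @Category(ScaleTest.class)  // selectable via surefire/failsafe <groups>
  @Test
  public void testAtScale() { /* ... */ }
}
```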


