[jira] [Updated] (HADOOP-14194) Aliyun OSS should not use empty endpoint as default

2017-08-20 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-14194:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-beta1
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks [~uncleGen] for the contribution!

> Aliyun OSS should not use empty endpoint as default
> ---
>
> Key: HADOOP-14194
> URL: https://issues.apache.org/jira/browse/HADOOP-14194
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Reporter: Mingliang Liu
>Assignee: Genmao Yu
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14194.000.patch, HADOOP-14194.001.patch
>
>
> In {{AliyunOSSFileSystemStore::initialize()}}, it retrieves the endPoint, 
> using an empty string as the default value.
> {code}
> String endPoint = conf.getTrimmed(ENDPOINT_KEY, "");
> {code}
> The plain value is passed to OSSClient without validation. If the endPoint is 
> not provided (empty string) or is not valid, users will get an 
> exception from the Aliyun OSS SDK with a raw exception message like:
> {code}
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Expected 
> authority at index 8: https://
>   at com.aliyun.oss.OSSClient.toURI(OSSClient.java:359)
>   at com.aliyun.oss.OSSClient.setEndpoint(OSSClient.java:313)
>   at com.aliyun.oss.OSSClient.<init>(OSSClient.java:297)
>   at 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystemStore.initialize(AliyunOSSFileSystemStore.java:134)
>   at 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem.initialize(AliyunOSSFileSystem.java:272)
>   at 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSTestUtils.createTestFileSystem(AliyunOSSTestUtils.java:63)
>   at 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract.setUp(TestAliyunOSSFileSystemContract.java:47)
>   at junit.framework.TestCase.runBare(TestCase.java:139)
>   at junit.framework.TestResult$1.protect(TestResult.java:122)
>   at junit.framework.TestResult.runProtected(TestResult.java:142)
>   at junit.framework.TestResult.run(TestResult.java:125)
>   at junit.framework.TestCase.run(TestCase.java:129)
>   at junit.framework.TestSuite.runTest(TestSuite.java:255)
>   at junit.framework.TestSuite.run(TestSuite.java:250)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
>   at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>   at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:51)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:237)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
> Caused by: java.net.URISyntaxException: Expected authority at index 8: 
> https://
>   at java.net.URI$Parser.fail(URI.java:2848)
>   at java.net.URI$Parser.failExpecting(URI.java:2854)
>   at java.net.URI$Parser.parseHierarchical(URI.java:3102)
>   at java.net.URI$Parser.parse(URI.java:3053)
>   at java.net.URI.<init>(URI.java:588)
>   at com.aliyun.oss.OSSClient.toURI(OSSClient.java:357)
> {code}
> Let's check that endPoint is not null or empty, catch the IllegalArgumentException 
> and log it, wrapping the exception with a clearer message stating the 
> misconfiguration in the endpoint or credentials.
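
A minimal sketch of the proposed validation and wrapping (illustrative only, not 
the committed patch; {{ENDPOINT_KEY}} and {{conf}} come from the snippet above, 
while {{LOG}}, {{ossClient}} and the credential variables are assumptions):

{code:java}
// Reject a missing endpoint up front instead of letting the SDK fail later.
String endPoint = conf.getTrimmed(ENDPOINT_KEY, "");
if (endPoint.isEmpty()) {
  throw new IllegalArgumentException("Aliyun OSS endpoint should not be null "
      + "or empty. Please set a proper endpoint with 'fs.oss.endpoint'.");
}
try {
  ossClient = new OSSClient(endPoint, accessKeyId, accessKeySecret);
} catch (IllegalArgumentException iae) {
  // Surface a clearer message than the raw SDK/URI parsing error.
  LOG.error("Invalid endpoint or credentials configured for Aliyun OSS", iae);
  throw new IllegalArgumentException(
      "Aliyun OSS endpoint or credentials are misconfigured: " + endPoint, iae);
}
{code}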



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14729) Upgrade JUnit 3 TestCase to JUnit 4

2017-08-20 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134718#comment-16134718
 ] 

Akira Ajisaka commented on HADOOP-14729:


Thanks [~ajayydv] for providing the patches! My comments:

* TestConfiguration#testSetPattern and #testGetClassByNameOrNull - missing 
@Test annotations (a minimal migration sketch follows this list).
* TestDU#testDUGetUsedWillNotReturnNegative - missing @Test annotation.
* TestGetFileBlockLocations#setup and #teardown - javadoc can be removed.
* TestTrash#testPluggableTrash and #performanceTestDeleteSameFile - missing 
@Test annotation.
* TestTrash#teardown - javadoc can be removed.
* Would you undo the changes in the following classes, since they are already 
migrated to the JUnit 4 style?
** ViewFileSystemBaseTest
** TestActiveStandbyElectorRealZK
** TestArrayWritable
** TestWritable
** TestCodec
** TestTFileJClassComparatorByteArrays
** TestTFileLzoCodecsByteArrays
** TestTFileLzoCodecsStreams
** TestTFileNoneCodecsByteArrays
** TestTFileNoneCodecsJClassComparatorByteArrays
** TestTFileNoneCodecsStreams
** TestIPC
** TestDelegationToken
** TestTaskAttempt
** TestMapFileOutputFormat (Nice catch! The missing @After is a bug and should 
be addressed in a separate JIRA.)
** TestJobInfo
** TestInputPath
** TestMiniMRClasspath
** TestAdlFileContextMainOperationsLive

* org.apache.hadoop.mapred.TestFileOutputCommitter#testRecoveryV1 - missing 
@Test annotation.
* 
org.apache.hadoop.mapreduce.lib.output.TestFileOutputCommitter#testCommitterWithDuplicatedCommitV1
 and testMapFileOutputCommitterV2 - missing @Test annotations.
* TestIndexCache#testCreateRace - missing @Test annotation.
* TestJobEndNotifier#testLocalJobRunnerUriSubstitution, 
testLocalJobRunnerRetryCount, and testNotificationTimeout - missing @Test 
annotations.
* TestMRCJCFileOutputCommitter#testFailAbort - missing @Test annotation.
* Please do not modify the ASF license header in TestLongLong.
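
For reference, a minimal sketch of the JUnit 3 to JUnit 4 migration pattern 
behind the missing-annotation comments above (the class and method names here 
are hypothetical):

{code:java}
import static org.junit.Assert.assertEquals;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

// JUnit 4 style: no "extends TestCase"; lifecycle and test methods are
// annotated instead of being discovered by their names.
public class TestExample {
  private int value;

  @Before        // replaces JUnit 3 setUp()
  public void setUp() {
    value = 42;
  }

  @Test          // required in JUnit 4; the "test" name prefix alone is ignored
  public void testValue() {
    assertEquals(42, value);
  }

  @After         // replaces JUnit 3 tearDown()
  public void tearDown() {
    value = 0;
  }
}
{code}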

> Upgrade JUnit 3 TestCase to JUnit 4
> ---
>
> Key: HADOOP-14729
> URL: https://issues.apache.org/jira/browse/HADOOP-14729
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Akira Ajisaka
>Assignee: Ajay Kumar
>  Labels: newbie
> Attachments: HADOOP-14729.001.patch, HADOOP-14729.002.patch, 
> HADOOP-14729.003.patch, HADOOP-14729.004.patch, HADOOP-14729.005.patch, 
> HADOOP-14729.006.patch, HADOOP-14729.007.patch
>
>
> There are still test classes that extend from junit.framework.TestCase in 
> hadoop-common. Upgrade them to JUnit4.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14519) Client$Connection#waitForWork may suffer from spurious wakeups

2017-08-20 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134712#comment-16134712
 ] 

John Zhuge commented on HADOOP-14519:
-

Sorry for the delay [~djp]. I will take a look shortly to make a decision.

> Client$Connection#waitForWork may suffer from spurious wakeups
> --
>
> Key: HADOOP-14519
> URL: https://issues.apache.org/jira/browse/HADOOP-14519
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Critical
> Attachments: HADOOP-14519.001.patch
>
>
> {{Client$Connection#waitForWork}} may suffer from spurious wakeups because the 
> {{wait}} is not surrounded by a loop. See 
> [https://docs.oracle.com/javase/7/docs/api/java/lang/Object.html#wait()].
> {code:title=Client$Connection#waitForWork}
>   if (calls.isEmpty() && !shouldCloseConnection.get() && running.get())  {
> long timeout = maxIdleTime-
>   (Time.now()-lastActivity.get());
> if (timeout>0) {
>   try {
> wait(timeout);  << spurious wakeup
>   } catch (InterruptedException e) {}
> }
>   }
> {code}
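
For context, a minimal sketch of the standard guarded-wait idiom; it reuses the 
names from the snippet above but is illustrative only, not the proposed patch:

{code:java}
// Re-check the wait condition in a loop so that a spurious wakeup (or a notify
// issued for another reason) simply falls through to another timed wait.
synchronized (this) {
  long deadline = lastActivity.get() + maxIdleTime;
  while (calls.isEmpty() && !shouldCloseConnection.get() && running.get()) {
    long remaining = deadline - Time.now();
    if (remaining <= 0) {
      break;
    }
    try {
      wait(remaining);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      break;
    }
  }
}
{code}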



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14519) Client$Connection#waitForWork may suffer from spurious wakeups

2017-08-20 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134702#comment-16134702
 ] 

Junping Du commented on HADOOP-14519:
-

Hi [~jzhuge], any update on this? I am kicking off the 2.8.2 RC0 soon. If we 
cannot make progress on it in the short term, can we move the target to 2.8.3?

> Client$Connection#waitForWork may suffer from spurious wakeups
> --
>
> Key: HADOOP-14519
> URL: https://issues.apache.org/jira/browse/HADOOP-14519
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Critical
> Attachments: HADOOP-14519.001.patch
>
>
> {{Client$Connection#waitForWork}} may suffer from spurious wakeups because the 
> {{wait}} is not surrounded by a loop. See 
> [https://docs.oracle.com/javase/7/docs/api/java/lang/Object.html#wait()].
> {code:title=Client$Connection#waitForWork}
>   if (calls.isEmpty() && !shouldCloseConnection.get() && running.get())  {
> long timeout = maxIdleTime-
>   (Time.now()-lastActivity.get());
> if (timeout>0) {
>   try {
> wait(timeout);  << spurious wakeup
>   } catch (InterruptedException e) {}
> }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14194) Aliyun OSS should not use empty endpoint as default

2017-08-20 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134678#comment-16134678
 ] 

Genmao Yu commented on HADOOP-14194:


[~drankye] Done. I did not add any new unit test as this is a tiny change.

> Aliyun OSS should not use empty endpoint as default
> ---
>
> Key: HADOOP-14194
> URL: https://issues.apache.org/jira/browse/HADOOP-14194
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Reporter: Mingliang Liu
>Assignee: Genmao Yu
> Attachments: HADOOP-14194.000.patch, HADOOP-14194.001.patch
>
>
> In {{AliyunOSSFileSystemStore::initialize()}}, it retrieves the endPoint, 
> using an empty string as the default value.
> {code}
> String endPoint = conf.getTrimmed(ENDPOINT_KEY, "");
> {code}
> The plain value is passed to OSSClient without validation. If the endPoint is 
> not provided (empty string) or is not valid, users will get an 
> exception from the Aliyun OSS SDK with a raw exception message like:
> {code}
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Expected 
> authority at index 8: https://
>   at com.aliyun.oss.OSSClient.toURI(OSSClient.java:359)
>   at com.aliyun.oss.OSSClient.setEndpoint(OSSClient.java:313)
>   at com.aliyun.oss.OSSClient.<init>(OSSClient.java:297)
>   at 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystemStore.initialize(AliyunOSSFileSystemStore.java:134)
>   at 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem.initialize(AliyunOSSFileSystem.java:272)
>   at 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSTestUtils.createTestFileSystem(AliyunOSSTestUtils.java:63)
>   at 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract.setUp(TestAliyunOSSFileSystemContract.java:47)
>   at junit.framework.TestCase.runBare(TestCase.java:139)
>   at junit.framework.TestResult$1.protect(TestResult.java:122)
>   at junit.framework.TestResult.runProtected(TestResult.java:142)
>   at junit.framework.TestResult.run(TestResult.java:125)
>   at junit.framework.TestCase.run(TestCase.java:129)
>   at junit.framework.TestSuite.runTest(TestSuite.java:255)
>   at junit.framework.TestSuite.run(TestSuite.java:250)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
>   at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>   at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:51)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:237)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
> Caused by: java.net.URISyntaxException: Expected authority at index 8: 
> https://
>   at java.net.URI$Parser.fail(URI.java:2848)
>   at java.net.URI$Parser.failExpecting(URI.java:2854)
>   at java.net.URI$Parser.parseHierarchical(URI.java:3102)
>   at java.net.URI$Parser.parse(URI.java:3053)
>   at java.net.URI.<init>(URI.java:588)
>   at com.aliyun.oss.OSSClient.toURI(OSSClient.java:357)
> {code}
> Let's check that endPoint is not null or empty, catch the IllegalArgumentException 
> and log it, wrapping the exception with a clearer message stating the 
> misconfiguration in the endpoint or credentials.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14194) Aliyun OSS should not use empty endpoint as default

2017-08-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134659#comment-16134659
 ] 

Hadoop QA commented on HADOOP-14194:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
13s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14194 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882803/HADOOP-14194.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3d0b127c193c 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7a82d7b |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13078/testReport/ |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13078/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Aliyun OSS should not use empty endpoint as default
> ---
>
> Key: HADOOP-14194
> URL: https://issues.apache.org/jira/browse/HADOOP-14194
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Reporter: Mingliang Liu
>Assignee: Genmao Yu
> Attachments: HADOOP-14194.000.patch, HADOOP-14194.001.patch
>
>
> In {{AliyunOSSFileSystemStore::initialize()}}, it retrieves the endPoint, 
> using an empty string as a 

[jira] [Commented] (HADOOP-14194) Aliyun OSS should not use empty endpoint as default

2017-08-20 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134646#comment-16134646
 ] 

Kai Zheng commented on HADOOP-14194:


The change looks good. It looks like the two minor checkstyle issues should be 
addressed. Genmao, could you update? Thanks!

> Aliyun OSS should not use empty endpoint as default
> ---
>
> Key: HADOOP-14194
> URL: https://issues.apache.org/jira/browse/HADOOP-14194
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Reporter: Mingliang Liu
>Assignee: Genmao Yu
> Attachments: HADOOP-14194.000.patch, HADOOP-14194.001.patch
>
>
> In {{AliyunOSSFileSystemStore::initialize()}}, it retrieves the endPoint, 
> using an empty string as the default value.
> {code}
> String endPoint = conf.getTrimmed(ENDPOINT_KEY, "");
> {code}
> The plain value is passed to OSSClient without validation. If the endPoint is 
> not provided (empty string) or is not valid, users will get an 
> exception from the Aliyun OSS SDK with a raw exception message like:
> {code}
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Expected 
> authority at index 8: https://
>   at com.aliyun.oss.OSSClient.toURI(OSSClient.java:359)
>   at com.aliyun.oss.OSSClient.setEndpoint(OSSClient.java:313)
>   at com.aliyun.oss.OSSClient.<init>(OSSClient.java:297)
>   at 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystemStore.initialize(AliyunOSSFileSystemStore.java:134)
>   at 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem.initialize(AliyunOSSFileSystem.java:272)
>   at 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSTestUtils.createTestFileSystem(AliyunOSSTestUtils.java:63)
>   at 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract.setUp(TestAliyunOSSFileSystemContract.java:47)
>   at junit.framework.TestCase.runBare(TestCase.java:139)
>   at junit.framework.TestResult$1.protect(TestResult.java:122)
>   at junit.framework.TestResult.runProtected(TestResult.java:142)
>   at junit.framework.TestResult.run(TestResult.java:125)
>   at junit.framework.TestCase.run(TestCase.java:129)
>   at junit.framework.TestSuite.runTest(TestSuite.java:255)
>   at junit.framework.TestSuite.run(TestSuite.java:250)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
>   at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>   at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:51)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:237)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
> Caused by: java.net.URISyntaxException: Expected authority at index 8: 
> https://
>   at java.net.URI$Parser.fail(URI.java:2848)
>   at java.net.URI$Parser.failExpecting(URI.java:2854)
>   at java.net.URI$Parser.parseHierarchical(URI.java:3102)
>   at java.net.URI$Parser.parse(URI.java:3053)
>   at java.net.URI.<init>(URI.java:588)
>   at com.aliyun.oss.OSSClient.toURI(OSSClient.java:357)
> {code}
> Let's check that endPoint is not null or empty, catch the IllegalArgumentException 
> and log it, wrapping the exception with a clearer message stating the 
> misconfiguration in the endpoint or credentials.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14649) Update aliyun-sdk-oss version to 2.8.1

2017-08-20 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134621#comment-16134621
 ] 

Genmao Yu commented on HADOOP-14649:


[~rchiang] Yeah, I have been testing it on OSS SDK v2.8.1, but I need to fix 
HADOOP-14787 before this work.

> Update aliyun-sdk-oss version to 2.8.1
> --
>
> Key: HADOOP-14649
> URL: https://issues.apache.org/jira/browse/HADOOP-14649
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ray Chiang
>Assignee: Genmao Yu
>
> Update the dependency
> com.aliyun.oss:aliyun-sdk-oss:2.4.1
> to the latest (2.8.1).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14194) Aliyun OSS should not use empty endpoint as default

2017-08-20 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-14194:
---
Attachment: HADOOP-14194.001.patch

> Aliyun OSS should not use empty endpoint as default
> ---
>
> Key: HADOOP-14194
> URL: https://issues.apache.org/jira/browse/HADOOP-14194
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Reporter: Mingliang Liu
>Assignee: Genmao Yu
> Attachments: HADOOP-14194.000.patch, HADOOP-14194.001.patch
>
>
> In {{AliyunOSSFileSystemStore::initialize()}}, it retrieves the endPoint, 
> using an empty string as the default value.
> {code}
> String endPoint = conf.getTrimmed(ENDPOINT_KEY, "");
> {code}
> The plain value is passed to OSSClient without validation. If the endPoint is 
> not provided (empty string) or is not valid, users will get an 
> exception from the Aliyun OSS SDK with a raw exception message like:
> {code}
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Expected 
> authority at index 8: https://
>   at com.aliyun.oss.OSSClient.toURI(OSSClient.java:359)
>   at com.aliyun.oss.OSSClient.setEndpoint(OSSClient.java:313)
>   at com.aliyun.oss.OSSClient.<init>(OSSClient.java:297)
>   at 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystemStore.initialize(AliyunOSSFileSystemStore.java:134)
>   at 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem.initialize(AliyunOSSFileSystem.java:272)
>   at 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSTestUtils.createTestFileSystem(AliyunOSSTestUtils.java:63)
>   at 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract.setUp(TestAliyunOSSFileSystemContract.java:47)
>   at junit.framework.TestCase.runBare(TestCase.java:139)
>   at junit.framework.TestResult$1.protect(TestResult.java:122)
>   at junit.framework.TestResult.runProtected(TestResult.java:142)
>   at junit.framework.TestResult.run(TestResult.java:125)
>   at junit.framework.TestCase.run(TestCase.java:129)
>   at junit.framework.TestSuite.runTest(TestSuite.java:255)
>   at junit.framework.TestSuite.run(TestSuite.java:250)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
>   at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>   at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:51)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:237)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
> Caused by: java.net.URISyntaxException: Expected authority at index 8: 
> https://
>   at java.net.URI$Parser.fail(URI.java:2848)
>   at java.net.URI$Parser.failExpecting(URI.java:2854)
>   at java.net.URI$Parser.parseHierarchical(URI.java:3102)
>   at java.net.URI$Parser.parse(URI.java:3053)
>   at java.net.URI.<init>(URI.java:588)
>   at com.aliyun.oss.OSSClient.toURI(OSSClient.java:357)
> {code}
> Let's check that endPoint is not null or empty, catch the IllegalArgumentException 
> and log it, wrapping the exception with a clearer message stating the 
> misconfiguration in the endpoint or credentials.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14705) Add batched reencryptEncryptedKey interface to KMS

2017-08-20 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134586#comment-16134586
 ] 

Wei-Chiu Chuang commented on HADOOP-14705:
--

Did a final review before I sign off --

Found a few very minor log messages that are worth improving:

{code:title=KeyProviderCryptoExtension#reencryptEncryptedKeys}
"encryptedKey version name must be '%s', is '%s'"
...
"All keys must be with same key name. found '%s', '%s'"
{code}
can be updated with 
{code}
"encryptedKey version name must be '%s', but found '%s'"
and
"All keys must have the same key name. Expected '%s' but found '%s'"
{code}
(KMSClientProvider#reencryptEncryptedKeys has the same Preconditions check that 
can also be updated)

Would it make sense to move the following encryptor/decryptor initialization
{code}
  if (decryptor == null) {
    decryptor = cc.createDecryptor();
  }
  if (encryptor == null) {
    encryptor = cc.createEncryptor();
  }
{code}
to right after 
{code}
try (CryptoCodec cc = CryptoCodec.getInstance(keyProvider.getConf())) {
{code}? (i.e. before the while loop)
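
A sketch of the suggested ordering (the loop body is elided and {{ekvs}} is a 
placeholder for the batch of EncryptedKeyVersions; the local names come from 
the snippets above, everything else is illustrative only):

{code:java}
try (CryptoCodec cc = CryptoCodec.getInstance(keyProvider.getConf())) {
  Decryptor decryptor = cc.createDecryptor();   // created once, before the loop
  Encryptor encryptor = cc.createEncryptor();
  for (EncryptedKeyVersion ekv : ekvs) {
    // re-encrypt each EDEK with the shared encryptor/decryptor
  }
}
{code}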

Thanks for the work. I am +1 after these nits are addressed.

> Add batched reencryptEncryptedKey interface to KMS
> --
>
> Key: HADOOP-14705
> URL: https://issues.apache.org/jira/browse/HADOOP-14705
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14705.01.patch, HADOOP-14705.02.patch, 
> HADOOP-14705.03.patch, HADOOP-14705.04.patch, HADOOP-14705.05.patch, 
> HADOOP-14705.06.patch, HADOOP-14705.07.patch, HADOOP-14705.08.patch, 
> HADOOP-14705.09.patch
>
>
> HADOOP-13827 already enabled the KMS to re-encrypt an {{EncryptedKeyVersion}}.
> As the performance results of HDFS-10899 show, communication overhead with 
> the KMS accounts for the majority of the time, so this jira proposes to add 
> a batched interface to re-encrypt multiple EDEKs in one call.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14586) StringIndexOutOfBoundsException breaks org.apache.hadoop.util.Shell on 2.7.x with Java 9

2017-08-20 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134581#comment-16134581
 ] 

Uwe Schindler commented on HADOOP-14586:


FYI, I updated Solr to use Hadoop 2.7.4. Thanks!

> StringIndexOutOfBoundsException breaks org.apache.hadoop.util.Shell on 2.7.x 
> with Java 9
> 
>
> Key: HADOOP-14586
> URL: https://issues.apache.org/jira/browse/HADOOP-14586
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.2
> Environment: Java 9, build 175 (Java 9 release candidate as of June 
> 25th, 2017)
>Reporter: Uwe Schindler
>Assignee: Akira Ajisaka
>Priority: Minor
>  Labels: Java9
> Fix For: 2.7.4
>
> Attachments: HADOOP-14586-branch-2.7-01.patch, 
> HADOOP-14586-branch-2.7-02.patch, HADOOP-14586-branch-2.7-03.patch
>
>
> You cannot use any pre-Hadoop 2.8 component anymore with the latest release 
> candidate build of Java 9, because it fails with a 
> StringIndexOutOfBoundsException in {{org.apache.hadoop.util.Shell#<clinit>}}. 
> This leads to a whole cascade of failing classes (next in the chain is 
> StringUtils).
> The reason is that the release candidate build of Java 9 no longer has "-ea" 
> in the version string and the system property "java.version" is now simply 
> "9". This causes the following line to fail fatally:
> {code:java}
>   private static boolean IS_JAVA7_OR_ABOVE =
>   System.getProperty("java.version").substring(0, 3).compareTo("1.7") >= 
> 0;
> {code}
> Analysis:
> - This code looks wrong, as comparing a version this way is incorrect.
> - The {{substring(0, 3)}} is not needed, {{compareTo}} also works without it, 
> although it is still an invalid way to compare a version.
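
For illustration, a hedged sketch of a version check that does not depend on 
the old "1.x" naming scheme (this is not the committed fix, just one robust 
alternative):

{code:java}
// Parse the numeric major version: "1.8.0_144" -> 8, "9" -> 9, "9-ea" -> 9.
private static int majorJavaVersion() {
  String version = System.getProperty("java.version");
  if (version.startsWith("1.")) {
    version = version.substring(2);
  }
  int end = 0;
  while (end < version.length() && Character.isDigit(version.charAt(end))) {
    end++;
  }
  return end == 0 ? 0 : Integer.parseInt(version.substring(0, end));
}

private static final boolean IS_JAVA7_OR_ABOVE = majorJavaVersion() >= 7;
{code}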



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12502) SetReplication OutOfMemoryError

2017-08-20 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134525#comment-16134525
 ] 

Wei-Chiu Chuang commented on HADOOP-12502:
--

Hi [~vinayrpet] thanks for the patch!

I saw a similar OOM error when running hdfs dfs -chmod, so I assume many other 
commands are prone to this bug too. I haven't reviewed the patch yet, but should 
we make sure the fix covers other commands as well?


> SetReplication OutOfMemoryError
> ---
>
> Key: HADOOP-12502
> URL: https://issues.apache.org/jira/browse/HADOOP-12502
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Philipp Schuegerl
>Assignee: Vinayakumar B
> Attachments: HADOOP-12502-01.patch, HADOOP-12502-02.patch, 
> HADOOP-12502-03.patch, HADOOP-12502-04.patch, HADOOP-12502-05.patch, 
> HADOOP-12502-06.patch
>
>
> Setting the replication of an HDFS folder recursively can run out of memory. 
> E.g. with a large /var/log directory:
> hdfs dfs -setrep -R -w 1 /var/log
> Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit 
> exceeded
>   at java.util.Arrays.copyOfRange(Arrays.java:2694)
>   at java.lang.String.<init>(String.java:203)
>   at java.lang.String.substring(String.java:1913)
>   at java.net.URI$Parser.substring(URI.java:2850)
>   at java.net.URI$Parser.parse(URI.java:3046)
>   at java.net.URI.<init>(URI.java:753)
>   at org.apache.hadoop.fs.Path.initialize(Path.java:203)
>   at org.apache.hadoop.fs.Path.<init>(Path.java:116)
>   at org.apache.hadoop.fs.Path.<init>(Path.java:94)
>   at 
> org.apache.hadoop.hdfs.protocol.HdfsFileStatus.getFullPath(HdfsFileStatus.java:222)
>   at 
> org.apache.hadoop.hdfs.protocol.HdfsFileStatus.makeQualified(HdfsFileStatus.java:246)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:689)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:102)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:712)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:708)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:708)
>   at 
> org.apache.hadoop.fs.shell.PathData.getDirectoryContents(PathData.java:268)
>   at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
>   at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
>   at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
>   at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
>   at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
>   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
>   at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:278)
>   at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:260)
>   at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:244)
>   at 
> org.apache.hadoop.fs.shell.SetReplication.processArguments(SetReplication.java:76)
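
For illustration only (not necessarily the patch's approach): the trace above 
runs out of memory while materializing full directory listings during the 
recursive walk, and one common remedy is to stream the listing through the 
iterator API instead, e.g. with a hypothetical helper like:

{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

// Walks the tree without holding a whole directory listing in memory at once.
static void setReplicationStreaming(FileSystem fs, Path dir, short replication)
    throws IOException {
  RemoteIterator<FileStatus> it = fs.listStatusIterator(dir);
  while (it.hasNext()) {
    FileStatus stat = it.next();
    if (stat.isDirectory()) {
      setReplicationStreaming(fs, stat.getPath(), replication);
    } else {
      fs.setReplication(stat.getPath(), replication);
    }
  }
}
{code}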



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14194) Aliyun OSS should not use empty endpoint as default

2017-08-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134436#comment-16134436
 ] 

Hadoop QA commented on HADOOP-14194:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m  9s{color} | {color:orange} hadoop-tools/hadoop-aliyun: The patch generated 
2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
13s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14194 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882783/HADOOP-14194.000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux fa70b084ba3a 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 436c263 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13077/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aliyun.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13077/testReport/ |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13077/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Aliyun OSS should not use empty endpoint as default
> ---
>
> Key: HADOOP-14194
> URL: https://issues.apache.org/jira/browse/HADOOP-14194
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>

[jira] [Commented] (HADOOP-14787) AliyunOSS: Implement the `createNonRecursive` operator

2017-08-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134433#comment-16134433
 ] 

Hadoop QA commented on HADOOP-14787:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
13s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14787 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882781/HADOOP-14787.000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 01b17be38acb 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 436c263 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13076/testReport/ |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13076/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> AliyunOSS: Implement the `createNonRecursive` operator
> --
>
> Key: HADOOP-14787
> URL: https://issues.apache.org/jira/browse/HADOOP-14787
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/oss
>Affects Versions: 3.0.0-beta1
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Attachments: HADOOP-14787.000.patch
>
>
> {code}
> 

[jira] [Commented] (HADOOP-14194) Aliyun OSS should not use empty endpoint as default

2017-08-20 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16134432#comment-16134432
 ] 

Genmao Yu commented on HADOOP-14194:


[~drankye] Please take a look.

> Aliyun OSS should not use empty endpoint as default
> ---
>
> Key: HADOOP-14194
> URL: https://issues.apache.org/jira/browse/HADOOP-14194
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Reporter: Mingliang Liu
>Assignee: Genmao Yu
> Attachments: HADOOP-14194.000.patch
>
>
> In {{AliyunOSSFileSystemStore::initialize()}}, it retrieves the endPoint, 
> using an empty string as the default value.
> {code}
> String endPoint = conf.getTrimmed(ENDPOINT_KEY, "");
> {code}
> The plain value is passed to OSSClient without validation. If the endPoint is 
> not provided (empty string) or is not valid, users will get an 
> exception from the Aliyun OSS SDK with a raw exception message like:
> {code}
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Expected 
> authority at index 8: https://
>   at com.aliyun.oss.OSSClient.toURI(OSSClient.java:359)
>   at com.aliyun.oss.OSSClient.setEndpoint(OSSClient.java:313)
>   at com.aliyun.oss.OSSClient.<init>(OSSClient.java:297)
>   at 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystemStore.initialize(AliyunOSSFileSystemStore.java:134)
>   at 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem.initialize(AliyunOSSFileSystem.java:272)
>   at 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSTestUtils.createTestFileSystem(AliyunOSSTestUtils.java:63)
>   at 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract.setUp(TestAliyunOSSFileSystemContract.java:47)
>   at junit.framework.TestCase.runBare(TestCase.java:139)
>   at junit.framework.TestResult$1.protect(TestResult.java:122)
>   at junit.framework.TestResult.runProtected(TestResult.java:142)
>   at junit.framework.TestResult.run(TestResult.java:125)
>   at junit.framework.TestCase.run(TestCase.java:129)
>   at junit.framework.TestSuite.runTest(TestSuite.java:255)
>   at junit.framework.TestSuite.run(TestSuite.java:250)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
>   at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>   at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:51)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:237)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
> Caused by: java.net.URISyntaxException: Expected authority at index 8: 
> https://
>   at java.net.URI$Parser.fail(URI.java:2848)
>   at java.net.URI$Parser.failExpecting(URI.java:2854)
>   at java.net.URI$Parser.parseHierarchical(URI.java:3102)
>   at java.net.URI$Parser.parse(URI.java:3053)
>   at java.net.URI.<init>(URI.java:588)
>   at com.aliyun.oss.OSSClient.toURI(OSSClient.java:357)
> {code}
> Let's check that endPoint is not null or empty, catch the IllegalArgumentException 
> and log it, wrapping the exception with a clearer message stating the 
> misconfiguration in the endpoint or credentials.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14194) Aliyun OSS should not use empty endpoint as default

2017-08-20 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-14194:
---
Status: Patch Available  (was: Open)

> Aliyun OSS should not use empty endpoint as default
> ---
>
> Key: HADOOP-14194
> URL: https://issues.apache.org/jira/browse/HADOOP-14194
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Reporter: Mingliang Liu
>Assignee: Genmao Yu
> Attachments: HADOOP-14194.000.patch
>
>
> In {{AliyunOSSFileSystemStore::initialize()}}, it retrieves the endPoint, 
> using an empty string as the default value.
> {code}
> String endPoint = conf.getTrimmed(ENDPOINT_KEY, "");
> {code}
> The plain value is passed to OSSClient without validation. If the endPoint is 
> not provided (empty string) or is not valid, users will get an 
> exception from the Aliyun OSS SDK with a raw exception message like:
> {code}
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Expected 
> authority at index 8: https://
>   at com.aliyun.oss.OSSClient.toURI(OSSClient.java:359)
>   at com.aliyun.oss.OSSClient.setEndpoint(OSSClient.java:313)
>   at com.aliyun.oss.OSSClient.<init>(OSSClient.java:297)
>   at 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystemStore.initialize(AliyunOSSFileSystemStore.java:134)
>   at 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem.initialize(AliyunOSSFileSystem.java:272)
>   at 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSTestUtils.createTestFileSystem(AliyunOSSTestUtils.java:63)
>   at 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract.setUp(TestAliyunOSSFileSystemContract.java:47)
>   at junit.framework.TestCase.runBare(TestCase.java:139)
>   at junit.framework.TestResult$1.protect(TestResult.java:122)
>   at junit.framework.TestResult.runProtected(TestResult.java:142)
>   at junit.framework.TestResult.run(TestResult.java:125)
>   at junit.framework.TestCase.run(TestCase.java:129)
>   at junit.framework.TestSuite.runTest(TestSuite.java:255)
>   at junit.framework.TestSuite.run(TestSuite.java:250)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
>   at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>   at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:51)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:237)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
> Caused by: java.net.URISyntaxException: Expected authority at index 8: 
> https://
>   at java.net.URI$Parser.fail(URI.java:2848)
>   at java.net.URI$Parser.failExpecting(URI.java:2854)
>   at java.net.URI$Parser.parseHierarchical(URI.java:3102)
>   at java.net.URI$Parser.parse(URI.java:3053)
>   at java.net.URI.<init>(URI.java:588)
>   at com.aliyun.oss.OSSClient.toURI(OSSClient.java:357)
> {code}
> Let's check that endPoint is not null or empty, catch the IllegalArgumentException 
> and log it, wrapping the exception with a clearer message stating the 
> misconfiguration in the endpoint or credentials.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14194) Aliyun OSS should not use empty endpoint as default

2017-08-20 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-14194:
---
Attachment: HADOOP-14194.000.patch

> Aliyun OSS should not use empty endpoint as default
> ---
>
> Key: HADOOP-14194
> URL: https://issues.apache.org/jira/browse/HADOOP-14194
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Reporter: Mingliang Liu
>Assignee: Genmao Yu
> Attachments: HADOOP-14194.000.patch
>
>
> In {{AliyunOSSFileSystemStore::initialize()}}, it retrieves the endPoint, 
> using an empty string as the default value.
> {code}
> String endPoint = conf.getTrimmed(ENDPOINT_KEY, "");
> {code}
> The plain value is passed to OSSClient without validation. If the endPoint is 
> not provided (empty string) or is not valid, users will get an 
> exception from the Aliyun OSS SDK with a raw exception message like:
> {code}
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Expected 
> authority at index 8: https://
>   at com.aliyun.oss.OSSClient.toURI(OSSClient.java:359)
>   at com.aliyun.oss.OSSClient.setEndpoint(OSSClient.java:313)
>   at com.aliyun.oss.OSSClient.<init>(OSSClient.java:297)
>   at 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystemStore.initialize(AliyunOSSFileSystemStore.java:134)
>   at 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem.initialize(AliyunOSSFileSystem.java:272)
>   at 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSTestUtils.createTestFileSystem(AliyunOSSTestUtils.java:63)
>   at 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract.setUp(TestAliyunOSSFileSystemContract.java:47)
>   at junit.framework.TestCase.runBare(TestCase.java:139)
>   at junit.framework.TestResult$1.protect(TestResult.java:122)
>   at junit.framework.TestResult.runProtected(TestResult.java:142)
>   at junit.framework.TestResult.run(TestResult.java:125)
>   at junit.framework.TestCase.run(TestCase.java:129)
>   at junit.framework.TestSuite.runTest(TestSuite.java:255)
>   at junit.framework.TestSuite.run(TestSuite.java:250)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
>   at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>   at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:51)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:237)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
> Caused by: java.net.URISyntaxException: Expected authority at index 8: 
> https://
>   at java.net.URI$Parser.fail(URI.java:2848)
>   at java.net.URI$Parser.failExpecting(URI.java:2854)
>   at java.net.URI$Parser.parseHierarchical(URI.java:3102)
>   at java.net.URI$Parser.parse(URI.java:3053)
>   at java.net.URI.<init>(URI.java:588)
>   at com.aliyun.oss.OSSClient.toURI(OSSClient.java:357)
> {code}
> Let's check that the endPoint is neither null nor empty, catch the 
> IllegalArgumentException and log it, wrapping the exception with a clearer 
> message stating the misconfiguration of the endpoint or credentials.
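
A minimal sketch of that validation, assuming the existing {{ENDPOINT_KEY}} constant and the {{OSSClient(endpoint, accessKeyId, accessKeySecret)}} constructor; the helper name {{validateEndpoint}} is illustrative, not necessarily what the attached patch does:
{code}
// Illustrative sketch: fail fast on a missing endpoint and wrap the raw SDK error.
static String validateEndpoint(Configuration conf) {
  String endPoint = conf.getTrimmed(ENDPOINT_KEY, "");
  if (endPoint.isEmpty()) {
    throw new IllegalArgumentException("Aliyun OSS endpoint should not be null or "
        + "empty. Please set a proper endpoint with the key: " + ENDPOINT_KEY);
  }
  return endPoint;
}

// At store initialization time, wrap the SDK's IllegalArgumentException with a clearer hint:
try {
  ossClient = new OSSClient(validateEndpoint(conf), accessKeyId, accessKeySecret);
} catch (IllegalArgumentException e) {
  throw new IllegalArgumentException(
      "Invalid Aliyun OSS endpoint or credentials, please check the fs.oss.* settings", e);
}
{code}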



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14787) AliyunOSS: Implement the `createNonRecursive` operator

2017-08-20 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16134423#comment-16134423
 ] 

Genmao Yu commented on HADOOP-14787:


All existing unit tests passed; no new unit tests were added.

[~drankye] [~ste...@apache.org]  Please take a look and review.

{code}
---
 T E S T S
---
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate
Tests run: 11, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 10.4 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.915 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDelete
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDistCp
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 192.85 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDistCp
Running 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractGetFileStatus
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.97 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractGetFileStatus
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractMkdir
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.96 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractMkdir
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.688 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractOpen
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.5 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRename
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRootDir
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.127 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRootDir
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractSeek
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.232 sec - 
in org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractSeek
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunCredentials
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.257 sec - in 
org.apache.hadoop.fs.aliyun.oss.TestAliyunCredentials
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.172 sec - 
in org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemStore
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 128.397 sec - 
in org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemStore
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSInputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.102 sec - in 
org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSInputStream
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSOutputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.838 sec - in 
org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSOutputStream

Results :

Tests run: 144, Failures: 0, Errors: 0, Skipped: 2

{code}

> AliyunOSS: Implement the `createNonRecursive` operator
> --
>
> Key: HADOOP-14787
> URL: https://issues.apache.org/jira/browse/HADOOP-14787
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/oss
>Affects Versions: 3.0.0-beta1
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Attachments: HADOOP-14787.000.patch
>
>
> {code}
> testOverwriteNonEmptyDirectory(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate)
>   Time elapsed: 1.146 sec  <<< ERROR!
> java.io.IOException: createNonRecursive unsupported for this filesystem class 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem
>   at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1304)
>   at 
> org.apache.hadoop.fs.FileSystem$FileSystemDataOutputStreamBuilder.build(FileSystem.java:4163)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset(ContractTestUtils.java:179)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteNonEmptyDirectory(AbstractContractCreateTest.java:178)
>   at 
> 
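
A rough sketch of what implementing the operator might look like in {{AliyunOSSFileSystem}}, assuming it only needs to require an existing parent directory before delegating to the regular {{create()}} path; this is illustrative only, not necessarily the attached patch:
{code}
// Illustrative sketch: honour createNonRecursive semantics by requiring an existing
// parent directory, then delegate to the regular create() implementation.
@Override
public FSDataOutputStream createNonRecursive(Path path, FsPermission permission,
    EnumSet<CreateFlag> flags, int bufferSize, short replication, long blockSize,
    Progressable progress) throws IOException {
  Path parent = path.getParent();
  // getFileStatus throws FileNotFoundException when the parent is missing, which is
  // the expected failure mode here; a parent that is not a directory is rejected below.
  if (parent != null && !getFileStatus(parent).isDirectory()) {
    throw new FileAlreadyExistsException("Not a directory: " + parent);
  }
  return create(path, permission, flags.contains(CreateFlag.OVERWRITE),
      bufferSize, replication, blockSize, progress);
}
{code}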

[jira] [Updated] (HADOOP-14787) AliyunOSS: Implement the `createNonRecursive` operator

2017-08-20 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-14787:
---
Attachment: HADOOP-14787.000.patch

> AliyunOSS: Implement the `createNonRecursive` operator
> --
>
> Key: HADOOP-14787
> URL: https://issues.apache.org/jira/browse/HADOOP-14787
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/oss
>Affects Versions: 3.0.0-beta1
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Attachments: HADOOP-14787.000.patch
>
>
> {code}
> testOverwriteNonEmptyDirectory(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate)
>   Time elapsed: 1.146 sec  <<< ERROR!
> java.io.IOException: createNonRecursive unsupported for this filesystem class 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem
>   at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1304)
>   at 
> org.apache.hadoop.fs.FileSystem$FileSystemDataOutputStreamBuilder.build(FileSystem.java:4163)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset(ContractTestUtils.java:179)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteNonEmptyDirectory(AbstractContractCreateTest.java:178)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteNonEmptyDirectory(AbstractContractCreateTest.java:208)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> testOverwriteEmptyDirectory(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate)
>   Time elapsed: 0.145 sec  <<< ERROR!
> java.io.IOException: createNonRecursive unsupported for this filesystem class 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem
>   at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1304)
>   at 
> org.apache.hadoop.fs.FileSystem$FileSystemDataOutputStreamBuilder.build(FileSystem.java:4163)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset(ContractTestUtils.java:179)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteEmptyDirectory(AbstractContractCreateTest.java:133)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteEmptyDirectory(AbstractContractCreateTest.java:155)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> testCreateFileOverExistingFileNoOverwrite(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate)
>   Time elapsed: 0.147 sec  <<< ERROR!
> java.io.IOException: createNonRecursive unsupported for this filesystem class 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem
>   at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1304)
>   at 
> 

[jira] [Updated] (HADOOP-14787) AliyunOSS: Implement the `createNonRecursive` operator

2017-08-20 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-14787:
---
Status: Patch Available  (was: Open)

> AliyunOSS: Implement the `createNonRecursive` operator
> --
>
> Key: HADOOP-14787
> URL: https://issues.apache.org/jira/browse/HADOOP-14787
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/oss
>Affects Versions: 3.0.0-beta1
>Reporter: Genmao Yu
>Assignee: Genmao Yu
>
> {code}
> testOverwriteNonEmptyDirectory(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate)
>   Time elapsed: 1.146 sec  <<< ERROR!
> java.io.IOException: createNonRecursive unsupported for this filesystem class 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem
>   at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1304)
>   at 
> org.apache.hadoop.fs.FileSystem$FileSystemDataOutputStreamBuilder.build(FileSystem.java:4163)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset(ContractTestUtils.java:179)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteNonEmptyDirectory(AbstractContractCreateTest.java:178)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteNonEmptyDirectory(AbstractContractCreateTest.java:208)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> testOverwriteEmptyDirectory(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate)
>   Time elapsed: 0.145 sec  <<< ERROR!
> java.io.IOException: createNonRecursive unsupported for this filesystem class 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem
>   at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1304)
>   at 
> org.apache.hadoop.fs.FileSystem$FileSystemDataOutputStreamBuilder.build(FileSystem.java:4163)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset(ContractTestUtils.java:179)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteEmptyDirectory(AbstractContractCreateTest.java:133)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteEmptyDirectory(AbstractContractCreateTest.java:155)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> testCreateFileOverExistingFileNoOverwrite(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate)
>   Time elapsed: 0.147 sec  <<< ERROR!
> java.io.IOException: createNonRecursive unsupported for this filesystem class 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem
>   at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1304)
>   at 
> 

[jira] [Commented] (HADOOP-14791) SimpleKdcServer: Fail to delete krb5 conf

2017-08-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16134334#comment-16134334
 ] 

Hadoop QA commented on HADOOP-14791:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-common-project/hadoop-minikdc generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-minikdc in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-minikdc |
|  |  Exceptional return value of java.io.File.delete() ignored in 
org.apache.hadoop.minikdc.MiniKdc.main(String[])  At MiniKdc.java:ignored in 
org.apache.hadoop.minikdc.MiniKdc.main(String[])  At MiniKdc.java:[line 113] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14791 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882765/HADOOP-14791.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 218329f252c3 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 436c263 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13075/artifact/patchprocess/new-findbugs-hadoop-common-project_hadoop-minikdc.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13075/testReport/ |
| modules | C: hadoop-common-project/hadoop-minikdc U: 
hadoop-common-project/hadoop-minikdc |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13075/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.




[jira] [Updated] (HADOOP-14791) SimpleKdcServer: Fail to delete krb5 conf

2017-08-20 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14791:

Status: Patch Available  (was: Open)

> SimpleKdcServer: Fail to delete krb5 conf
> -
>
> Key: HADOOP-14791
> URL: https://issues.apache.org/jira/browse/HADOOP-14791
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: minikdc
>Affects Versions: 3.0.0-beta1
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14791.001.patch
>
>
> Run MiniKdc in a terminal and then press Ctrl-C:
> {noformat}
> Do <CTRL-C> or kill <pid> to stop it
> ---
> ^C2017-08-19 22:52:23,607 INFO impl.DefaultInternalKdcServerImpl: Default 
> Internal kdc server stopped.
> 2017-08-19 22:53:21,358 INFO server.SimpleKdcServer: Fail to delete krb5 
> conf. java.io.IOException
> 2017-08-19 22:53:22,363 INFO minikdc.MiniKdc: MiniKdc stopped.
> {noformat}
> The reason for "Fail to delete krb5 conf" is that MiniKdc renames 
> SimpleKdcServer's krb5 conf file. During shutdown, SimpleKdcServer attempts 
> to delete its krb5 conf file and cannot find it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14791) SimpleKdcServer: Fail to delete krb5 conf

2017-08-20 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14791:

Attachment: HADOOP-14791.001.patch

Patch 001
* Instead of renaming the krb5 conf file, just copy it.
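
A minimal sketch of that approach; the file locations and names below are purely hypothetical, only the copy-instead-of-rename idea is what the patch describes:
{code}
// Illustrative only: copy SimpleKdcServer's generated krb5 conf into MiniKdc's working
// directory instead of renaming (moving) it, so SimpleKdcServer can still find and
// delete its own file during shutdown. Paths here are hypothetical.
File kdcKrb5Conf = new File(kdcWorkDir, "krb5.conf");    // file owned by SimpleKdcServer
File miniKdcKrb5Conf = new File(workDir, "krb5.conf");   // copy exposed by MiniKdc
java.nio.file.Files.copy(kdcKrb5Conf.toPath(), miniKdcKrb5Conf.toPath(),
    java.nio.file.StandardCopyOption.REPLACE_EXISTING);
{code}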

> SimpleKdcServer: Fail to delete krb5 conf
> -
>
> Key: HADOOP-14791
> URL: https://issues.apache.org/jira/browse/HADOOP-14791
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: minikdc
>Affects Versions: 3.0.0-beta1
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14791.001.patch
>
>
> Run MiniKdc in a terminal and then press Ctrl-C:
> {noformat}
> Do <CTRL-C> or kill <pid> to stop it
> ---
> ^C2017-08-19 22:52:23,607 INFO impl.DefaultInternalKdcServerImpl: Default 
> Internal kdc server stopped.
> 2017-08-19 22:53:21,358 INFO server.SimpleKdcServer: Fail to delete krb5 
> conf. java.io.IOException
> 2017-08-19 22:53:22,363 INFO minikdc.MiniKdc: MiniKdc stopped.
> {noformat}
> The reason for "Fail to delete krb5 conf" is that MiniKdc renames 
> SimpleKdcServer's krb5 conf file. During shutdown, SimpleKdcServer attempts 
> to delete its krb5 conf file and cannot find it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14791) SimpleKdcServer: Fail to delete krb5 conf

2017-08-20 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14791:
---

 Summary: SimpleKdcServer: Fail to delete krb5 conf
 Key: HADOOP-14791
 URL: https://issues.apache.org/jira/browse/HADOOP-14791
 Project: Hadoop Common
  Issue Type: Bug
  Components: minikdc
Affects Versions: 3.0.0-beta1
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Minor


Run MiniKdc in a terminal and then press Ctrl-C:
{noformat}
Do <CTRL-C> or kill <pid> to stop it
---

^C2017-08-19 22:52:23,607 INFO impl.DefaultInternalKdcServerImpl: Default 
Internal kdc server stopped.
2017-08-19 22:53:21,358 INFO server.SimpleKdcServer: Fail to delete krb5 conf. 
java.io.IOException
2017-08-19 22:53:22,363 INFO minikdc.MiniKdc: MiniKdc stopped.
{noformat}

The reason for "Fail to delete krb5 conf" is that MiniKdc renames 
SimpleKdcServer's krb5 conf file. During shutdown, SimpleKdcServer attempts to 
delete its krb5 conf file and cannot find it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org