[jira] [Commented] (HADOOP-14964) AliyunOSS: backport Aliyun OSS module to branch-2 and 2.8+ branches

2017-11-24 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16265592#comment-16265592
 ] 

Chris Douglas commented on HADOOP-14964:


It looks like this is now committed to branch-2 
{{30ab9b6aef2e3d31f2a8fc9211b5324b3d42f18e}} and branch-2.9 
{{32a88442d0f9e9860b1f179da586894cea6a9e10}} (in the future, please use 
cherry-pick -x when backporting a patch). Would it make sense to change the 
title to "Backport Aliyun OSS module to branch-2" and resolve this issue as 
fixed in 2.9.1? If it is released in a 2.8.x release, we can update the fix 
version. Please also add a release note.

Is someone ready to RM the 2.9.1 release? If we release it quickly, downstream 
users are less likely to be affected.

> AliyunOSS: backport Aliyun OSS module to branch-2 and 2.8+ branches
> ---
>
> Key: HADOOP-14964
> URL: https://issues.apache.org/jira/browse/HADOOP-14964
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Reporter: Genmao Yu
>Assignee: SammiChen
> Attachments: HADOOP-14964-branch-2.000.patch, 
> HADOOP-14964-branch-2.8.000.patch, HADOOP-14964-branch-2.8.001.patch, 
> HADOOP-14964-branch-2.9.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15069) support git-secrets commit hook to keep AWS secrets out of git

2017-11-24 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16265553#comment-16265553
 ] 

Chris Douglas commented on HADOOP-15069:


This is a copy/paste artifact?
{noformat}
+# Provides tab completion for the main hadoop script.
+#
+# On debian-based systems, place in /etc/bash_completion.d/ and either restart
+# Bash or source the script manually (. /etc/bash_completion.d/hadoop.sh).
{noformat}

Haven't tested it, but this looks helpful to include. How do we ensure it 
stays current?

> support git-secrets commit hook to keep AWS secrets out of git
> --
>
> Key: HADOOP-15069
> URL: https://issues.apache.org/jira/browse/HADOOP-15069
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15069-001.patch, HADOOP-15069-002.patch
>
>
> The latest Uber breach looks like it involved AWS keys in git repos.
> Nobody wants that, which is why Amazon provides 
> [git-secrets|https://github.com/awslabs/git-secrets]: a script you can use to 
> scan a repo and its history, *and* add as an automated check.
> Anyone can set this up, but there are a few false positives in the scan, 
> mostly from longs and a few all-upper-case constants. These can all be added 
> to a .gitignore file.
> Also: mention git-secrets in the AWS testing docs; say "use it".






[jira] [Commented] (HADOOP-14600) LocatedFileStatus constructor forces RawLocalFS to exec a process to get the permissions

2017-11-24 Thread Ohad Raviv (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16265476#comment-16265476
 ] 

Ohad Raviv commented on HADOOP-14600:
-

I'm not sure I was clear enough in the above comment.
I totally agree with your current patch, but I'm saying that the initiator of 
all this mess from SPARK-21137 is org.apache.hadoop.mapred.FileInputFormat, 
with the following code:
{code}
for (FileStatus globStat: matches) {
  if (globStat.isDirectory()) {
    RemoteIterator<LocatedFileStatus> iter =
        fs.listLocatedStatus(globStat.getPath());
    while (iter.hasNext()) {
      LocatedFileStatus stat = iter.next();
      if (inputFilter.accept(stat.getPath())) {
        if (recursive && stat.isDirectory()) {
          addInputPathRecursively(result, fs, stat.getPath(),
              inputFilter);
        } else {
          result.add(stat);
        }
      }
    }
  } else {
    result.add(globStat);
  }
}
{code}
All I'm suggesting is to replace the `listLocatedStatus` call with 
`listStatusIterator`, because it returns FileStatus rather than 
LocatedFileStatus and so never triggers the getPermission() mess at all.
Testing that locally (on a Mac) caused the test to run in less than a second 
instead of about 30 seconds for about 1 files.
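As a plain-java illustration of the difference (a hypothetical java.nio sketch, not the Hadoop code — class and method names here are invented): enumerating directory entries lazily, without asking for per-entry permissions, avoids exactly the expensive per-file work that listLocatedStatus triggers on the local FS.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical java.nio analogue, not Hadoop code: walk a directory
// lazily and never request per-entry permissions, which is the
// operation that forces a process exec on the local FS.
public class LazyListing {
  public static int countEntries(Path dir) throws IOException {
    int count = 0;
    try (DirectoryStream<Path> entries = Files.newDirectoryStream(dir)) {
      for (Path p : entries) {
        count++; // no permission lookup per entry
      }
    }
    return count;
  }
}
```

The same pattern is what the `listStatusIterator` suggestion buys: iterate, filter on path, and only pay for extra metadata when a caller actually asks for it.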

> LocatedFileStatus constructor forces RawLocalFS to exec a process to get the 
> permissions
> 
>
> Key: HADOOP-14600
> URL: https://issues.apache.org/jira/browse/HADOOP-14600
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.3
> Environment: file:// in a dir with many files
>Reporter: Steve Loughran
>Assignee: Ping Liu
> Attachments: HADOOP-14600.001.patch, HADOOP-14600.002.patch, 
> HADOOP-14600.003.patch, HADOOP-14600.004.patch, HADOOP-14600.005.patch, 
> HADOOP-14600.006.patch, HADOOP-14600.007.patch, HADOOP-14600.008.patch, 
> HADOOP-14600.009.patch, TestRawLocalFileSystemContract.java
>
>
> Reported in SPARK-21137. A {{FileSystem.listStatus}} call really crawls 
> against the local FS, because the {{FileStatus.getPermission}} call forces 
> {{DeprecatedRawLocalFileStatus}} to spawn a process to read the real UGI 
> values.
> That is: for every other FS, what's a field lookup or even a no-op, on the 
> local FS it's a process exec/spawn, with all the costs. This gets expensive 
> if you have many files.






[jira] [Resolved] (HADOOP-14714) handle InternalError in bulk object delete through retries

2017-11-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-14714.
-
   Resolution: Fixed
Fix Version/s: 3.1.0

Done in HADOOP-13786: we *do* consider delete to be idempotent. This is 
declared in a boolean constant, DELETE_CONSIDERED_IDEMPOTENT, so if it were 
ever changed (or made a config option), the change would be straightforward.
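A minimal sketch of that idea (a hypothetical helper, not the HADOOP-13786 code; names are illustrative): a retry loop gated on an idempotency flag, so flipping the constant to false immediately stops deletes from being retried.

```java
import java.io.IOException;
import java.util.concurrent.Callable;

// Hypothetical sketch, not the s3a retry code: retry an operation only
// when the caller declares it idempotent, mirroring the role of the
// DELETE_CONSIDERED_IDEMPOTENT constant described above.
public class RetryPolicy {
  public static final boolean DELETE_CONSIDERED_IDEMPOTENT = true;

  public static <T> T retryIdempotent(boolean idempotent, int attempts,
                                      Callable<T> op) throws Exception {
    if (attempts <= 0) {
      throw new IllegalArgumentException("attempts must be > 0");
    }
    Exception last = null;
    for (int i = 0; i < attempts; i++) {
      try {
        return op.call();
      } catch (IOException e) {
        last = e;
        if (!idempotent) {
          throw e; // unsafe to retry a non-idempotent operation
        }
      }
    }
    throw last; // all attempts failed
  }
}
```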

> handle InternalError in bulk object delete through retries
> --
>
> Key: HADOOP-14714
> URL: https://issues.apache.org/jira/browse/HADOOP-14714
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 3.1.0
>
>
> There's some more detail appearing on HADOOP-11572 about the errors seen 
> here; sounds like it's large-fileset related (or just probability working 
> against you). Most importantly: retries may make it go away. 
> Proposed: implement a retry policy.
> Issue: delete is not idempotent, not if someone else adds things.






[jira] [Assigned] (HADOOP-14714) handle InternalError in bulk object delete through retries

2017-11-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-14714:
---

Assignee: Steve Loughran

> handle InternalError in bulk object delete through retries
> --
>
> Key: HADOOP-14714
> URL: https://issues.apache.org/jira/browse/HADOOP-14714
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> There's some more detail appearing on HADOOP-11572 about the errors seen 
> here; sounds like it's large-fileset related (or just probability working 
> against you). Most importantly: retries may make it go away. 
> Proposed: implement a retry policy.
> Issue: delete is not idempotent, not if someone else adds things.






[jira] [Commented] (HADOOP-13551) hook up AwsSdkMetrics to hadoop metrics

2017-11-24 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16265456#comment-16265456
 ] 

Steve Loughran commented on HADOOP-13551:
-

Looks like the http pool stats could be interesting: available count and 
time-to-acquire lease. These can identify undercapacity in the connection pool 
before things fail (HADOOP-14621) or block (SPARK-22526):

https://aws.amazon.com/blogs/developer/tuning-the-aws-sdk-for-java-to-improve-resiliency/

> hook up AwsSdkMetrics to hadoop metrics
> ---
>
> Key: HADOOP-13551
> URL: https://issues.apache.org/jira/browse/HADOOP-13551
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Sean Mackrory
>Priority: Minor
>
> There's an API in {{com.amazonaws.metrics.AwsSdkMetrics}} to give access to 
> the internal metrics of the AWS libraries. We might want to get at those.






[jira] [Commented] (HADOOP-14621) S3A client raising ConnectionPoolTimeoutException

2017-11-24 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16265452#comment-16265452
 ] 

Steve Loughran commented on HADOOP-14621:
-

This is possibly now remapped to ConnectionTimeoutException and so 
retried... we'd need to experiment.

ConnectionPoolTimeoutException is a sign of a serious problem, as it can be 
raised if the pool of http threads has been used up by files not being closed 
in user code (SPARK-22526) or elsewhere (HIVE-13216). At the same time, it is 
completely spurious if raised early and often.

Proposed:
# Make sure translation preserves it as its own exception (interruptedIOE with 
text)
# add commentary in the troubleshooting doc
# review timeout value and pool size; consider recommending values (2 per CPU?)
# could we make queue size a gauge in metrics?
# have a test which creates a very small pool and timeout and triggers a 
failure (a GET on a small file and a one-byte read() should be enough to use 
up a connection)

Look at other object store code and see if the same policy applies.
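The small-pool test scenario can be sketched with a semaphore standing in for the http connection pool (all names here are illustrative; this is not the AWS SDK pool): with the single lease held, a second acquire times out, which is exactly the condition ConnectionPoolTimeoutException signals, and the free-lease count doubles as the gauge metric suggested above.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Hypothetical stand-in for an http connection pool: a fixed number of
// leases guarded by a semaphore, with a bounded wait on acquisition.
public class TinyPool {
  private final Semaphore leases;
  private final long timeoutMs;

  public TinyPool(int size, long timeoutMs) {
    this.leases = new Semaphore(size);
    this.timeoutMs = timeoutMs;
  }

  // True if a lease was acquired before the timeout; false is the
  // "pool timeout" condition under test.
  public boolean acquire() throws InterruptedException {
    return leases.tryAcquire(timeoutMs, TimeUnit.MILLISECONDS);
  }

  public void release() {
    leases.release();
  }

  // Gauge-style metric: how many leases are currently free.
  public int available() {
    return leases.availablePermits();
  }
}
```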

> S3A client raising ConnectionPoolTimeoutException
> -
>
> Key: HADOOP-14621
> URL: https://issues.apache.org/jira/browse/HADOOP-14621
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.0.0-beta1
> Environment: Home network with 2+ other users on high bandwidth 
> activities
>Reporter: Steve Loughran
>Priority: Minor
>
> Parallel test with threads = 12 triggering connection pool timeout. 
> Hypothesis? Congested network triggering pool timeout.
> Fix? For tests, could increase pool size
> For retry logic, this should be considered retriable, even on idempotent 
> calls (as it's a failure to acquire a connection).






[jira] [Updated] (HADOOP-14630) Contract Tests to verify create, mkdirs and rename under a file is forbidden

2017-11-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14630:

Status: Open  (was: Patch Available)

> Contract Tests to verify create, mkdirs and rename under a file is forbidden
> 
>
> Key: HADOOP-14630
> URL: https://issues.apache.org/jira/browse/HADOOP-14630
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/azure, fs/s3, fs/swift
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-14630-001.patch, HADOOP-14630-002.patch
>
>
> Object stores can get into trouble in ways an FS never would, ways so 
> obvious we've never written tests for them. We know what the problems are: 
> test for file and dir creation directly/indirectly under other files:
> * mkdir(file/file)
> * mkdir(file/subdir)
> * dir under file/subdir/subdir
> * dir/dir2/file, verify dir & dir2 exist
> * dir/dir2/dir3, verify dir & dir2 exist 
> * rename(src, file/dest)
> * rename(src, file/dir/dest)
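The first invariant in that list can be illustrated against a real local FS with plain java.nio (a hypothetical sketch, not the contract-test code; the helper name is invented): creating a directory underneath an existing regular file must fail.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical local-FS illustration of mkdir(file/subdir): a real file
// system rejects a directory created under a regular file, while an
// object store may silently allow it.
public class MkdirUnderFile {
  public static boolean mkdirUnderFileFails(Path file) throws IOException {
    try {
      Files.createDirectories(file.resolve("subdir"));
      return false; // the forbidden state was created
    } catch (IOException expected) {
      return true; // correct behaviour: creation rejected
    }
  }
}
```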






[jira] [Updated] (HADOOP-14630) Contract Tests to verify create, mkdirs and rename under a file is forbidden

2017-11-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14630:

Attachment: HADOOP-14630-002.patch

Patch 002; rebased to trunk. Not retested

> Contract Tests to verify create, mkdirs and rename under a file is forbidden
> 
>
> Key: HADOOP-14630
> URL: https://issues.apache.org/jira/browse/HADOOP-14630
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/azure, fs/s3, fs/swift
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-14630-001.patch, HADOOP-14630-002.patch
>
>
> Object stores can get into trouble in ways an FS never would, ways so 
> obvious we've never written tests for them. We know what the problems are: 
> test for file and dir creation directly/indirectly under other files:
> * mkdir(file/file)
> * mkdir(file/subdir)
> * dir under file/subdir/subdir
> * dir/dir2/file, verify dir & dir2 exist
> * dir/dir2/dir3, verify dir & dir2 exist 
> * rename(src, file/dest)
> * rename(src, file/dir/dest)






[jira] [Updated] (HADOOP-14630) Contract Tests to verify create, mkdirs and rename under a file is forbidden

2017-11-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14630:

Target Version/s: 3.1.0
  Status: Patch Available  (was: Open)

> Contract Tests to verify create, mkdirs and rename under a file is forbidden
> 
>
> Key: HADOOP-14630
> URL: https://issues.apache.org/jira/browse/HADOOP-14630
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/azure, fs/s3, fs/swift
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-14630-001.patch, HADOOP-14630-002.patch
>
>
> Object stores can get into trouble in ways an FS never would, ways so 
> obvious we've never written tests for them. We know what the problems are: 
> test for file and dir creation directly/indirectly under other files:
> * mkdir(file/file)
> * mkdir(file/subdir)
> * dir under file/subdir/subdir
> * dir/dir2/file, verify dir & dir2 exist
> * dir/dir2/dir3, verify dir & dir2 exist 
> * rename(src, file/dest)
> * rename(src, file/dir/dest)






[jira] [Resolved] (HADOOP-8195) Backport FileContext to branch-1

2017-11-24 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-8195.
--
Resolution: Won't Fix

Branch-1 is EoL.

> Backport FileContext to branch-1
> 
>
> Key: HADOOP-8195
> URL: https://issues.apache.org/jira/browse/HADOOP-8195
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Eli Collins
>
> It's hard to get users to migrate to FileContext because they (rightly!) want 
> their software to work against Hadoop 1.x which only has FileSystem. 
> Backporting FileContext to branch-1 would allow people to migrate to 
> FileContext and still work against both Hadoop 1.x and 2.x (or whatever we 
> call it). It probably isn't that much work since FileContext is mostly net 
> new code with lots of tests. It's a pain to support the same code in two 
> places, but we already have to do that with FileSystem, and that's the cost 
> of introducing a new API instead of improving FileSystem in place.






[jira] [Resolved] (HADOOP-8590) Backport HADOOP-7318 (MD5Hash factory should reset the digester it returns) to branch-1

2017-11-24 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-8590.
--
Resolution: Won't Fix

> Backport HADOOP-7318 (MD5Hash factory should reset the digester it returns) 
> to branch-1
> ---
>
> Key: HADOOP-8590
> URL: https://issues.apache.org/jira/browse/HADOOP-8590
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 1.0.3
>Reporter: Todd Lipcon
>
> I ran into this bug on branch-1 today; it seems like we should backport it.






[jira] [Updated] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2017-11-24 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HADOOP-12756:
---
Fix Version/s: 2.9.1

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/oss
>Affects Versions: HADOOP-12756
>Reporter: shimingfei
>Assignee: mingfei.shi
> Fix For: HADOOP-12756, 3.0.0-alpha2, 2.9.1
>
> Attachments: Aliyun-OSS-integration-v2.pdf, 
> Aliyun-OSS-integration.pdf, HADOOP-12756-v02.patch, HADOOP-12756.003.patch, 
> HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, 
> HADOOP-12756.007.patch, HADOOP-12756.008.patch, HADOOP-12756.009.patch, 
> HADOOP-12756.010.patch, HCFS User manual.md, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China's cloud users, but currently it is not 
> easy to access data stored on OSS from a user's Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple 
> configuration, Spark/Hadoop applications can read/write data from OSS 
> without any code change, narrowing the gap between the user's application 
> and data storage, as has been done for S3 in Hadoop.






[jira] [Updated] (HADOOP-13591) Unit test failure in TestOSSContractGetFileStatus and TestOSSContractRootDir

2017-11-24 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HADOOP-13591:
---
Fix Version/s: 2.9.1

> Unit test failure in TestOSSContractGetFileStatus and TestOSSContractRootDir 
> -
>
> Key: HADOOP-13591
> URL: https://issues.apache.org/jira/browse/HADOOP-13591
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/oss
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756, 2.9.1
>
> Attachments: HADOOP-13591-HADOOP-12756.001.patch, 
> HADOOP-13591-HADOOP-12756.002.patch, HADOOP-13591-HADOOP-12756.003.patch, 
> HADOOP-13591-HADOOP-12756.004.patch, HADOOP-13591-HADOOP-12756.005.patch
>
>







[jira] [Updated] (HADOOP-13481) User end documents for Aliyun OSS FileSystem

2017-11-24 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HADOOP-13481:
---
Fix Version/s: 2.9.1

> User end documents for Aliyun OSS FileSystem
> 
>
> Key: HADOOP-13481
> URL: https://issues.apache.org/jira/browse/HADOOP-13481
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/oss
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
>Priority: Minor
> Fix For: HADOOP-12756, 2.9.1
>
> Attachments: HADOOP-13481-HADOOP-12756.001.patch, 
> HADOOP-13481-HADOOP-12756.002.patch, HADOOP-13481-HADOOP-12756.003.patch, 
> HADOOP-13481-HADOOP-12756.004.patch
>
>







[jira] [Updated] (HADOOP-13723) AliyunOSSInputStream#read() should update read bytes stat correctly

2017-11-24 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HADOOP-13723:
---
Fix Version/s: 2.9.1

> AliyunOSSInputStream#read() should update read bytes stat correctly
> ---
>
> Key: HADOOP-13723
> URL: https://issues.apache.org/jira/browse/HADOOP-13723
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 3.0.0-alpha2, 2.9.1
>
> Attachments: HDFS-11007.000.patch
>
>
> {code}
>   @Override
>   public synchronized int read() throws IOException {
>     ..
>     if (statistics != null && byteRead >= 0) {
>       statistics.incrementBytesRead(1);
>     }
>     return byteRead;
>   }
> {code}
> I believe it should be {{statistics.incrementBytesRead(byteRead);}}?
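For reference, a plain-java sketch of byte-count accounting in a stream wrapper (hypothetical, not the AliyunOSS code; the class name is invented). Note that the single-byte read() returns the byte *value* (0-255) or -1, so that overload counts one byte per successful call, while the array overload must count the returned length:

```java
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical counting wrapper showing consistent read-bytes stats:
// read() adds 1 per byte consumed (its return value is the byte value,
// not a length); read(byte[],int,int) adds the number of bytes returned.
public class CountingStream extends FilterInputStream {
  private long bytesRead = 0;

  public CountingStream(InputStream in) {
    super(in);
  }

  @Override
  public int read() throws IOException {
    int b = super.read();
    if (b >= 0) {
      bytesRead += 1; // one byte consumed, not the value b
    }
    return b;
  }

  @Override
  public int read(byte[] buf, int off, int len) throws IOException {
    int n = super.read(buf, off, len);
    if (n > 0) {
      bytesRead += n; // count bytes, not calls
    }
    return n;
  }

  public long getBytesRead() {
    return bytesRead;
  }
}
```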






[jira] [Updated] (HADOOP-13624) Rename TestAliyunOSSContractDispCp

2017-11-24 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HADOOP-13624:
---
Fix Version/s: 2.9.1

> Rename TestAliyunOSSContractDispCp
> --
>
> Key: HADOOP-13624
> URL: https://issues.apache.org/jira/browse/HADOOP-13624
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/oss
>Reporter: Kai Zheng
>Assignee: Genmao Yu
> Fix For: HADOOP-12756, 2.9.1
>
> Attachments: HADOOP-13624-HADOOP-12756.001.patch
>
>
> It should be TestAliyunOSSContractDistCp.java instead.






[jira] [Updated] (HADOOP-14045) Aliyun OSS documentation missing from website

2017-11-24 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HADOOP-14045:
---
Fix Version/s: 2.9.1

> Aliyun OSS documentation missing from website
> -
>
> Key: HADOOP-14045
> URL: https://issues.apache.org/jira/browse/HADOOP-14045
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Andrew Wang
>Assignee: Yiqun Lin
> Fix For: 3.0.0-alpha4, 2.9.1
>
> Attachments: HADOOP-14045.001.patch
>
>
> I'm looking at the alpha2 website, and can't find a link to the Aliyun OSS 
> documentation. Under the "Hadoop Compatible File Systems" header there are 
> links to S3, Azure blob, ADLS, and Swift, but not Aliyun OSS.






[jira] [Updated] (HADOOP-13768) AliyunOSS: handle the failure in the batch delete operation `deleteDirs`.

2017-11-24 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HADOOP-13768:
---
Fix Version/s: 2.9.1

> AliyunOSS: handle the failure in the batch delete operation `deleteDirs`.
> -
>
> Key: HADOOP-13768
> URL: https://issues.apache.org/jira/browse/HADOOP-13768
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: 3.0.0-alpha4, 2.9.1
>
> Attachments: HADOOP-13768.001.patch, HADOOP-13768.002.patch, 
> HADOOP-13768.003.patch, HADOOP-13768.004.patch
>
>
> Note that in the Aliyun OSS SDK, DeleteObjectsRequest has a 1000-object 
> limit. The {{deleteDirs}} operation needs to be improved so that it still 
> works when there are more objects to delete than the limit.
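The batching idea can be sketched as follows (a hypothetical helper, not the patch; names are illustrative): split the key list into chunks no larger than the per-request limit, so each delete request stays within the SDK's 1000-object cap.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the batching needed by deleteDirs: partition a
// key list into chunks no larger than the service's per-request limit.
public class BatchDelete {
  public static final int MAX_KEYS_PER_REQUEST = 1000;

  public static List<List<String>> partition(List<String> keys, int limit) {
    List<List<String>> batches = new ArrayList<>();
    for (int i = 0; i < keys.size(); i += limit) {
      // copy the view so each batch is independent of the source list
      batches.add(new ArrayList<>(
          keys.subList(i, Math.min(i + limit, keys.size()))));
    }
    return batches;
  }
}
```

Each batch would then be sent as its own delete request, with per-batch failures handled individually.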






[jira] [Updated] (HADOOP-14069) AliyunOSS: listStatus returns wrong file info

2017-11-24 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HADOOP-14069:
---
Fix Version/s: 2.9.1

> AliyunOSS: listStatus returns wrong file info
> -
>
> Key: HADOOP-14069
> URL: https://issues.apache.org/jira/browse/HADOOP-14069
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Fei Hui
>Assignee: Fei Hui
> Fix For: 3.0.0-alpha4, 2.9.1
>
> Attachments: HADOOP-14069.001.patch
>
>
> When I use the command 'hadoop fs -ls oss://oss-for-hadoop-sh/', I find that 
> the listing info is wrong:
> {quote}
> $bin/hadoop fs -ls oss://oss-for-hadoop-sh/ 
> Found 1 items
> drwxrwxrwx   -  0 1970-01-01 08:00 oss://oss-for-hadoop-sh/test00
> {quote}
> The modified time is wrong; it should not be 1970-01-01 08:00.






[jira] [Updated] (HADOOP-14065) AliyunOSS: oss directory filestatus should use meta time

2017-11-24 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HADOOP-14065:
---
Fix Version/s: 2.9.1

> AliyunOSS: oss directory filestatus should use meta time
> 
>
> Key: HADOOP-14065
> URL: https://issues.apache.org/jira/browse/HADOOP-14065
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Fei Hui
>Assignee: Fei Hui
> Fix For: 3.0.0-alpha4, 2.9.1
>
> Attachments: HADOOP-14065.001.patch, HADOOP-14065.002.patch, 
> HADOOP-14065.003.patch, HADOOP-14065.004.patch, HADOOP-14065.patch
>
>
> code in getFileStatus function
> {code:title=AliyunOSSFileSystem.java|borderStyle=solid}
> else if (objectRepresentsDirectory(key, meta.getContentLength())) {
>   return new FileStatus(0, true, 1, 0, 0, qualifiedPath);
> }
> {code}
> When the object is a directory, we should set the correct modified time 
> rather than 0 in the FileStatus.






[jira] [Updated] (HADOOP-13769) AliyunOSS: update oss sdk version

2017-11-24 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HADOOP-13769:
---
Fix Version/s: 2.9.1

> AliyunOSS: update oss sdk version
> -
>
> Key: HADOOP-13769
> URL: https://issues.apache.org/jira/browse/HADOOP-13769
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: 3.0.0-alpha4, 2.9.1
>
> Attachments: HADOOP-13769.001.patch, HADOOP-13769.002.patch
>
>
>  -AliyunOSS object inputstream.close() will read the remaining bytes of the 
> OSS object, potentially transferring a lot of bytes from OSS that are 
> discarded.-
> Just update the OSS SDK version. It fixes many bugs, including this 
> "inputstream.close()" performance issue.






[jira] [Updated] (HADOOP-14458) Add missing imports to TestAliyunOSSFileSystemContract.java

2017-11-24 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HADOOP-14458:
---
Fix Version/s: 2.9.1

> Add missing imports to TestAliyunOSSFileSystemContract.java
> ---
>
> Key: HADOOP-14458
> URL: https://issues.apache.org/jira/browse/HADOOP-14458
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss, test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Trivial
> Fix For: 3.0.0-alpha4, 2.9.1
>
> Attachments: HADOOP-14458.000.patch
>
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-aliyun: Compilation failure: 
> Compilation failure:
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[71,5]
>  cannot find symbol
> [ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
> [ERROR]   location: class 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[90,5]
>  cannot find symbol
> [ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
> [ERROR]   location: class 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[91,5]
>  cannot find symbol
> [ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
> [ERROR]   location: class 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[92,5]
>  cannot find symbol
> [ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
> [ERROR]   location: class 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[93,5]
>  cannot find symbol
> [ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
> [ERROR]   location: class 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[95,5]
>  cannot find symbol
> [ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
> [ERROR]   location: class 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[96,5]
>  cannot find symbol
> [ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
> [ERROR]   location: class 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[98,5]
>  cannot find symbol
> [ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
> [ERROR]   location: class 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[99,5]
>  cannot find symbol
> [ERROR]   symbol:   method assertTrue(java.lang.String,boolean)
> [ERROR]   location: class 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[115,7]
>  cannot find symbol
> [ERROR]   symbol:   method fail(java.lang.String)
> [ERROR]   location: class 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[129,7]
>  cannot find symbol
> [ERROR]   symbol:   method fail(java.lang.String)
> [ERROR]   location: class 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
> [ERROR] 
> /Users/mliu/Workspace/hadoop/hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java:[143,7]
>  cannot find symbol
> [ERROR]   symbol:   method fail(java.lang.String)
> [ERROR]   location: class 
> 
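The missing symbols above are JUnit assertion helpers that the class previously inherited from {{TestCase}}. A hedged sketch of the conventional fix (the committed patch may differ) is to add JUnit 4 static imports to TestAliyunOSSFileSystemContract.java:

```java
// Illustrative only: these static imports resolve assertTrue(String, boolean)
// and fail(String) once the class no longer extends junit.framework.TestCase.
import static org.junit.Assert.assertTrue;
import static org.junit.Assert.fail;
```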

[jira] [Updated] (HADOOP-14072) AliyunOSS: Failed to read from stream when seek beyond the download size

2017-11-24 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HADOOP-14072:
---
Fix Version/s: 2.9.1

> AliyunOSS: Failed to read from stream when seek beyond the download size
> 
>
> Key: HADOOP-14072
> URL: https://issues.apache.org/jira/browse/HADOOP-14072
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: 3.0.0-alpha4, 2.9.1
>
> Attachments: HADOOP-14072.001.patch
>
>
> {code}
> public synchronized void seek(long pos) throws IOException {
> checkNotClosed();
> if (position == pos) {
>   return;
> } else if (pos > position && pos < position + partRemaining) {
>   AliyunOSSUtils.skipFully(wrappedStream, pos - position);
>   position = pos;
> } else {
>   reopen(pos);
> }
>   }
> {code}
> In the seek function, we need to update partRemaining when the seek 
> position is located in the already-downloaded part.
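The fix described above can be sketched in plain Java. This is a minimal model, not the committed patch: the class name, fields, and the reopen behavior are illustrative stand-ins for the real stream state.

```java
// Minimal model of the seek() fix: when the target position lies inside the
// part already downloaded, the skipped bytes must also be consumed from
// partRemaining, otherwise later reads run past the end of the part.
public class SeekSketch {
    long position;       // current stream position
    long partRemaining;  // bytes left in the currently downloaded part

    SeekSketch(long position, long partRemaining) {
        this.position = position;
        this.partRemaining = partRemaining;
    }

    void seek(long pos) {
        if (pos == position) {
            return;
        } else if (pos > position && pos < position + partRemaining) {
            // In the real code: AliyunOSSUtils.skipFully(wrappedStream, pos - position).
            // The key fix: shrink partRemaining by the same amount.
            partRemaining -= (pos - position);
            position = pos;
        } else {
            // In the real code: reopen(pos), which resets the part state.
            position = pos;
            partRemaining = 0;
        }
    }
}
```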



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14192) Aliyun OSS FileSystem contract test should implement getTestBaseDir()

2017-11-24 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HADOOP-14192:
---
Fix Version/s: 2.9.1

> Aliyun OSS FileSystem contract test should implement getTestBaseDir()
> -
>
> Key: HADOOP-14192
> URL: https://issues.apache.org/jira/browse/HADOOP-14192
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 3.0.0-alpha4, 2.9.1
>
> Attachments: HADOOP-14192.000.patch
>
>
> [HADOOP-14170] is the recent effort to improve the file system contract 
> tests in {{FileSystemContractBaseTest}}: it makes the {{path()}} method final and 
> adds a new method {{getTestBaseDir()}} for subclasses to implement. Aliyun OSS 
> should override that, as it uses a unique directory (named with the fork id) to 
> support parallel tests. Also, the current {{testWorkingDirectory}} override is 
> no longer needed per the changes in {{FileSystemContractBaseTest}}.
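The idea behind the override can be sketched as follows. This is a hedged model: the property name "test.unique.fork.id" and the paths are assumed from Hadoop's parallel-test convention, not copied from the committed patch.

```java
// Minimal sketch: each surefire fork derives its own base directory so
// parallel contract tests against the same bucket do not collide.
public class TestBaseDirSketch {
    static String testBaseDir() {
        String forkId = System.getProperty("test.unique.fork.id");
        return forkId == null ? "/test" : "/test/" + forkId;
    }
}
```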






[jira] [Updated] (HADOOP-14466) Remove useless document from TestAliyunOSSFileSystemContract.java

2017-11-24 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HADOOP-14466:
---
Fix Version/s: 2.9.1

> Remove useless document from TestAliyunOSSFileSystemContract.java
> -
>
> Key: HADOOP-14466
> URL: https://issues.apache.org/jira/browse/HADOOP-14466
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Chen Liang
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0-alpha4, 2.9.1
>
> Attachments: HADOOP-14466.001.patch
>
>
> The following document is not valid.
> {code:title=TestAliyunOSSFileSystemContract.java}
>  * This uses BlockJUnit4ClassRunner because FileSystemContractBaseTest from
>  * TestCase which uses the old Junit3 runner that doesn't ignore assumptions
>  * properly making it impossible to skip the tests if we don't have a valid
>  * bucket.
> {code}
> HADOOP-14180 updated FileSystemContractBaseTest to use JUnit 4, so this 
> sentence is no longer valid.






[jira] [Updated] (HADOOP-14194) Aliyun OSS should not use empty endpoint as default

2017-11-24 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HADOOP-14194:
---
Fix Version/s: 2.9.1

> Aliyun OSS should not use empty endpoint as default
> ---
>
> Key: HADOOP-14194
> URL: https://issues.apache.org/jira/browse/HADOOP-14194
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Reporter: Mingliang Liu
>Assignee: Genmao Yu
> Fix For: 3.0.0-beta1, 2.9.1
>
> Attachments: HADOOP-14194.000.patch, HADOOP-14194.001.patch
>
>
> In {{AliyunOSSFileSystemStore::initialize()}}, it retrieves the endPoint, 
> using the empty string as a default value.
> {code}
> String endPoint = conf.getTrimmed(ENDPOINT_KEY, "");
> {code}
> The plain value is passed to OSSClient without validation. If the endPoint is 
> not provided (empty string) or is not valid, users will get an exception from 
> the Aliyun OSS SDK with a raw exception message like:
> {code}
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Expected 
> authority at index 8: https://
>   at com.aliyun.oss.OSSClient.toURI(OSSClient.java:359)
>   at com.aliyun.oss.OSSClient.setEndpoint(OSSClient.java:313)
>   at com.aliyun.oss.OSSClient.<init>(OSSClient.java:297)
>   at 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystemStore.initialize(AliyunOSSFileSystemStore.java:134)
>   at 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem.initialize(AliyunOSSFileSystem.java:272)
>   at 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSTestUtils.createTestFileSystem(AliyunOSSTestUtils.java:63)
>   at 
> org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract.setUp(TestAliyunOSSFileSystemContract.java:47)
>   at junit.framework.TestCase.runBare(TestCase.java:139)
>   at junit.framework.TestResult$1.protect(TestResult.java:122)
>   at junit.framework.TestResult.runProtected(TestResult.java:142)
>   at junit.framework.TestResult.run(TestResult.java:125)
>   at junit.framework.TestCase.run(TestCase.java:129)
>   at junit.framework.TestSuite.runTest(TestSuite.java:255)
>   at junit.framework.TestSuite.run(TestSuite.java:250)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
>   at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>   at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:51)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:237)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
> Caused by: java.net.URISyntaxException: Expected authority at index 8: 
> https://
>   at java.net.URI$Parser.fail(URI.java:2848)
>   at java.net.URI$Parser.failExpecting(URI.java:2854)
>   at java.net.URI$Parser.parseHierarchical(URI.java:3102)
>   at java.net.URI$Parser.parse(URI.java:3053)
>   at java.net.URI.<init>(URI.java:588)
>   at com.aliyun.oss.OSSClient.toURI(OSSClient.java:357)
> {code}
> Let's check that endPoint is not null or empty, catch the IllegalArgumentException 
> and log it, wrapping the exception with a clearer message stating the 
> misconfiguration in the endpoint or credentials.
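A minimal sketch of such validation follows. The method name and message wording are illustrative, not the committed patch; the scheme-prefixing mirrors what the SDK does internally so that bad values fail here with a clear message.

```java
// Hedged sketch: validate the configured endpoint up front instead of
// letting OSSClient fail deep inside with a raw URISyntaxException.
import java.net.URI;
import java.net.URISyntaxException;

public class EndpointCheck {
    static String checkEndpoint(String endPoint) {
        if (endPoint == null || endPoint.trim().isEmpty()) {
            throw new IllegalArgumentException(
                "fs.oss.endpoint is missing or empty; please configure a valid "
                + "Aliyun OSS endpoint");
        }
        try {
            // The SDK prefixes bare hostnames with a scheme before parsing;
            // do the same here so invalid endpoints are caught early.
            new URI(endPoint.contains("://") ? endPoint : "https://" + endPoint);
        } catch (URISyntaxException e) {
            throw new IllegalArgumentException(
                "Invalid Aliyun OSS endpoint: " + endPoint, e);
        }
        return endPoint;
    }
}
```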






[jira] [Updated] (HADOOP-14649) Update aliyun-sdk-oss version to 2.8.1

2017-11-24 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HADOOP-14649:
---
Fix Version/s: 2.9.1

> Update aliyun-sdk-oss version to 2.8.1
> --
>
> Key: HADOOP-14649
> URL: https://issues.apache.org/jira/browse/HADOOP-14649
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Reporter: Ray Chiang
>Assignee: Genmao Yu
> Fix For: 3.0.0-beta1, 2.9.1
>
> Attachments: HADOOP-14649.000.patch
>
>
> Update the dependency
> com.aliyun.oss:aliyun-sdk-oss:2.4.1
> to the latest (2.8.1).
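In Maven terms, the bump would look roughly like this (coordinates taken from the dependency string above; the exact pom location within the Hadoop build is assumed):

```xml
<!-- Hedged sketch: update the managed Aliyun OSS SDK version. -->
<dependency>
  <groupId>com.aliyun.oss</groupId>
  <artifactId>aliyun-sdk-oss</artifactId>
  <version>2.8.1</version>
</dependency>
```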






[jira] [Updated] (HADOOP-14787) AliyunOSS: Implement the `createNonRecursive` operator

2017-11-24 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HADOOP-14787:
---
Fix Version/s: 2.9.1

> AliyunOSS: Implement the `createNonRecursive` operator
> --
>
> Key: HADOOP-14787
> URL: https://issues.apache.org/jira/browse/HADOOP-14787
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/oss
>Affects Versions: 3.0.0-beta1
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: 3.0.0-beta1, 2.9.1
>
> Attachments: HADOOP-14787.000.patch
>
>
> {code}
> testOverwriteNonEmptyDirectory(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate)
>   Time elapsed: 1.146 sec  <<< ERROR!
> java.io.IOException: createNonRecursive unsupported for this filesystem class 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem
>   at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1304)
>   at 
> org.apache.hadoop.fs.FileSystem$FileSystemDataOutputStreamBuilder.build(FileSystem.java:4163)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset(ContractTestUtils.java:179)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteNonEmptyDirectory(AbstractContractCreateTest.java:178)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteNonEmptyDirectory(AbstractContractCreateTest.java:208)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> testOverwriteEmptyDirectory(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate)
>   Time elapsed: 0.145 sec  <<< ERROR!
> java.io.IOException: createNonRecursive unsupported for this filesystem class 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem
>   at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1304)
>   at 
> org.apache.hadoop.fs.FileSystem$FileSystemDataOutputStreamBuilder.build(FileSystem.java:4163)
>   at 
> org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset(ContractTestUtils.java:179)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteEmptyDirectory(AbstractContractCreateTest.java:133)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testOverwriteEmptyDirectory(AbstractContractCreateTest.java:155)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> testCreateFileOverExistingFileNoOverwrite(org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate)
>   Time elapsed: 0.147 sec  <<< ERROR!
> java.io.IOException: createNonRecursive unsupported for this filesystem class 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem
>   at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1304)
>   at 
>