[jira] [Commented] (HADOOP-12663) Remove Hard-Coded Values From FileSystem.java

2015-12-22 Thread BELUGA BEHR (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15068276#comment-15068276
 ] 

BELUGA BEHR commented on HADOOP-12663:
--

I have attached another patch.  I apologize; you'll have to apply the patches 
in order to get the final effect.

How about this?  Once I understand exactly what the formatting should look like 
(i.e., once this patch is accepted), I'll create a new ticket and work on the rest.

Thanks!

> Remove Hard-Coded Values From FileSystem.java
> -
>
> Key: HADOOP-12663
> URL: https://issues.apache.org/jira/browse/HADOOP-12663
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.1
>Reporter: BELUGA BEHR
>Priority: Trivial
> Attachments: FileSystem.HADOOP-12663.0002.patch, 
> FileSystem.HADOOP-12663.patch
>
>
> Within FileSystem.java, there is one instance where the global variables 
> "CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_KEY" and 
> "CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_DEFAULT" were being used, 
> but in all other instances, their literal values were being used.
> Please find attached a patch to remove use of literal values and instead 
> replace them with references to the global variables.
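For illustration, a minimal sketch of the kind of change proposed; the two 
constants are real members of {{CommonConfigurationKeysPublic}}, while the 
surrounding line is illustrative rather than quoted from FileSystem.java:

{code}
// Before: literal key and default value repeated inline
// int bufferSize = conf.getInt("io.file.buffer.size", 4096);

// After: reference the shared constants from
// org.apache.hadoop.fs.CommonConfigurationKeysPublic
int bufferSize = conf.getInt(
    CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_KEY,
    CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_DEFAULT);
{code}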



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12663) Remove Hard-Coded Values From FileSystem.java

2015-12-22 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-12663:
-
Attachment: FileSystem.HADOOP-12663.0002.patch

> Remove Hard-Coded Values From FileSystem.java
> -
>
> Key: HADOOP-12663
> URL: https://issues.apache.org/jira/browse/HADOOP-12663
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.1
>Reporter: BELUGA BEHR
>Priority: Trivial
> Attachments: FileSystem.HADOOP-12663.0002.patch, 
> FileSystem.HADOOP-12663.patch
>
>
> Within FileSystem.java, there is one instance where the global variables 
> "CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_KEY" and 
> "CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_DEFAULT" were being used, 
> but in all other instances, their literal values were being used.
> Please find attached a patch to remove use of literal values and instead 
> replace them with references to the global variables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Moved] (HADOOP-12669) clean up temp dirs in hadoop-project-dist/pom.xml

2015-12-22 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran moved HDFS-9592 to HADOOP-12669:
---

Affects Version/s: (was: 2.8.0)
   2.8.0
  Component/s: (was: build)
   build
  Key: HADOOP-12669  (was: HDFS-9592)
  Project: Hadoop Common  (was: Hadoop HDFS)

> clean up temp dirs in hadoop-project-dist/pom.xml
> --
>
> Key: HADOOP-12669
> URL: https://issues.apache.org/jira/browse/HADOOP-12669
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> Andrew Wang noted in HDFS-9263 that there are various tmp dir definitions in 
> {{hadoop-project-dist/pom.xml}} which are creating data in the wrong place: 
> clean them up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12473) distcp's ignoring failures option should be mutually exclusive with the atomic option

2015-12-22 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-12473:
-
Target Version/s: 2.8.0
   Fix Version/s: (was: 2.8.0)

[~liuml07], please don't set fix-version. Committers set that to appropriate 
values at commit time. You should instead use target-version to express your 
intention. Fixing it myself now, but FYI.


> distcp's ignoring failures option should be mutually exclusive with the 
> atomic option
> -
>
> Key: HADOOP-12473
> URL: https://issues.apache.org/jira/browse/HADOOP-12473
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.7.1
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>
> In {{CopyMapper::handleFailure}}, the mapper handles a failure and will ignore 
> it only if its config key is on. The ignore-failures option {{-i}} should be 
> mutually exclusive with the {{-atomic}} option; otherwise an incomplete dir is 
> eligible for commit, defeating the purpose.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12667) s3a: Support createNonRecursive API

2015-12-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15068649#comment-15068649
 ] 

Steve Loughran commented on HADOOP-12667:
-

Hadoop already has some atomicity problems with create-no-overwrite, given that 
the write doesn't even happen until the close() call. But createNonRecursive is 
potentially useful, as it will fail fast if the parent doesn't exist.

Looking at the patch, I can see we need the filesystem spec updated with 
coverage of the reinstated method, ideally recommending the IOE subclass 
{{FileNotFoundException}} when the parent isn't there...then we can add the test 
here to the contract tests and roll it out across all the filesystems.

Sean: fancy taking that on?
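For illustration, a minimal sketch of the fail-fast semantics under discussion, 
assuming a non-atomic check-then-create (this is not the attached patch; the 
signature and helper calls come from the FileSystem API):

{code}
public FSDataOutputStream createNonRecursive(Path f, FsPermission permission,
    EnumSet<CreateFlag> flags, int bufferSize, short replication,
    long blockSize, Progressable progress) throws IOException {
  Path parent = f.getParent();
  if (parent != null && !exists(parent)) {
    // Fail fast with the suggested IOE subclass instead of creating parents.
    throw new FileNotFoundException("Parent directory not found: " + parent);
  }
  // Note the check-then-create race: another client may delete the parent
  // between exists() and create() -- the atomicity caveat discussed above.
  return create(f, permission, flags, bufferSize, replication, blockSize,
      progress);
}
{code}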


> s3a: Support createNonRecursive API
> ---
>
> Key: HADOOP-12667
> URL: https://issues.apache.org/jira/browse/HADOOP-12667
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-12667.001.patch
>
>
> HBase and other clients rely on the createNonRecursive API, which was 
> recently un-deprecated. S3A currently does not support it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12667) s3a: Support createNonRecursive API

2015-12-22 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15068697#comment-15068697
 ] 

Sean Mackrory commented on HADOOP-12667:


HBase's use of this API is indeed what brought the lack of support to my 
attention. I wouldn't say I'm specifically trying to support HBase-on-S3, 
though - just trying to improve support in general where we can. If there's no 
reasonable solution, there's no reasonable solution. I certainly don't want to 
be increasing the potential for inconsistency if that's what it would take.

On top of what Steve said, though, I think it's well understood that when using 
Hadoop-on-S3 in general there's been the potential for consistency issues. 
That's improved dramatically on S3's end over time, of course. In this case, all 
nodes currently fail because of an unsupported API. Due to the potential for 
inconsistency, if this API were supported, it would be possible for some nodes 
to fail with a FileAlreadyExistsException, and the operation could be retried. 
That's certainly an improvement, and well within the expectations set by the S3 
filesystems, IMO. I would still vote to move in this direction.

{quote}Sean: fancy taking that on?{quote}

Yeah I think that'd be good to do. I'll wait until we reach some consensus on 
the consistency issue, though.

> s3a: Support createNonRecursive API
> ---
>
> Key: HADOOP-12667
> URL: https://issues.apache.org/jira/browse/HADOOP-12667
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-12667.001.patch
>
>
> HBase and other clients rely on the createNonRecursive API, which was 
> recently un-deprecated. S3A currently does not support it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12496) Update AWS SDK version (1.7.4)

2015-12-22 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-12496:
-
Fix Version/s: (was: 2.8.0)

> Update AWS SDK version (1.7.4)
> --
>
> Key: HADOOP-12496
> URL: https://issues.apache.org/jira/browse/HADOOP-12496
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.1
>Reporter: Yongjia Wang
>
> hadoop-aws jar still depends on the very old 1.7.4 version of aws-java-sdk.
> In newer versions of the SDK, there are incompatible API changes that lead to 
> the following error when trying to use the S3A class while a newer version of 
> the SDK is present.
> This is because S3A calls the method with "int" as the parameter type while 
> the new SDK expects "long". This makes it impossible to use Kinesis + S3A in 
> the same process.
> It would be very helpful to upgrade hadoop-aws's aws-sdk version.
> java.lang.NoSuchMethodError: 
> com.amazonaws.services.s3.transfer.TransferManagerConfiguration.setMultipartUploadThreshold(I)V
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:285)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
>   at 
> org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:130)
>   at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:114)
>   at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:104)
>   at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:29)
>   at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:34)
>   at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:36)
>   at $iwC$$iwC$$iwC$$iwC$$iwC.(:38)
>   at $iwC$$iwC$$iwC$$iwC.(:40)
>   at $iwC$$iwC$$iwC.(:42)
>   at $iwC$$iwC.(:44)
>   at $iwC.(:46)
>   at (:48)
>   at .(:52)
>   at .()
>   at .(:7)
>   at .()
>   at $print()
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
>   at 
> org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1340)
>   at 
> org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
>   at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
>   at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
>   at 
> org.apache.zeppelin.spark.SparkInterpreter.interpretInput(SparkInterpreter.java:655)
>   at 
> org.apache.zeppelin.spark.SparkInterpreter.interpret(SparkInterpreter.java:620)
>   at 
> org.apache.zeppelin.spark.SparkInterpreter.interpret(SparkInterpreter.java:613)
>   at 
> org.apache.zeppelin.interpreter.ClassloaderInterpreter.interpret(ClassloaderInterpreter.java:57)
>   at 
> org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93)
>   at 
> org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:276)
>   at org.apache.zeppelin.scheduler.Job.run(Job.java:170)
>   at 
> org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:118)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
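For readers hitting this, a hedged sketch of the binary incompatibility (the 
class and method names are real; the call site is illustrative, not quoted from 
S3AFileSystem):

{code}
import com.amazonaws.services.s3.transfer.TransferManagerConfiguration;

// aws-java-sdk 1.7.x declares: setMultipartUploadThreshold(int)  -> (I)V
// later SDK versions declare:  setMultipartUploadThreshold(long) -> (J)V
TransferManagerConfiguration cfg = new TransferManagerConfiguration();
// S3A was compiled against 1.7.4, so its bytecode invokes the (I)V variant;
// run against a newer SDK, the JVM cannot resolve that descriptor and throws
// the NoSuchMethodError shown in the stack trace above.
cfg.setMultipartUploadThreshold(5 * 1024 * 1024);
{code}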



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12608) Fix error message in WASB in connecting through Anonymous Credential codepath

2015-12-22 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-12608:
-
Fix Version/s: (was: 2.8.0)

[~dchickabasapa], please don't set fix-version. Committers set that to 
appropriate values at commit time. Unsetting it myself now, FYI.

> Fix error message in WASB in connecting through Anonymous Credential codepath
> -
>
> Key: HADOOP-12608
> URL: https://issues.apache.org/jira/browse/HADOOP-12608
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.8.0
>Reporter: Dushyanth
>Assignee: Dushyanth
> Attachments: HADOOP-12608.001.patch, HADOOP-12608.002.patch, 
> HADOOP-12608.003.patch, HADOOP-12608.004.patch, HADOOP-12608.005.patch
>
>
> Users of WASB have raised complaints about the error message returned when 
> they try to connect to Azure storage with anonymous credentials. The current 
> implementation returns the correct message when we encounter a 
> StorageException. However, in scenarios where a query to check whether a 
> container exists returns false instead of throwing a StorageException (as 
> happens when the URI is specified directly for anonymous access), the error 
> message does not clearly state that credentials for the storage account were 
> not provided. This JIRA tracks fixing the error message to match what is 
> returned when a storage exception is hit, and also corrects spelling mistakes 
> in the message.
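A minimal sketch of the shape such a fix could take, assuming the Azure Storage 
SDK's {{CloudBlobContainer}} API; the variable names and message text are 
illustrative, not quoted from the attached patches:

{code}
// Anonymous-access lookups return false instead of throwing StorageException,
// so surface the same credential hint the exception path would give.
CloudBlobContainer container = blobClient.getContainerReference(containerName);
if (!container.exists()) {
  throw new AzureException("Container " + containerName + " in account "
      + accountName + " not found, and we can't create it using anonymous "
      + "credentials, and no credentials found for it in the configuration.");
}
{code}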



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12557) Add information in BUILDING.txt about the need for the FINDBUGS_HOME environment variable for docs builds.

2015-12-22 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-12557:
-
Fix Version/s: (was: 2.8.0)

[~1106944...@qq.com], please don't set fix-version. Committers set that to 
appropriate values at commit time. Unsetting it myself now, FYI.

> Add information in BUILDING.txt about the need for the FINDBUGS_HOME 
> environment variable for docs builds.
> --
>
> Key: HADOOP-12557
> URL: https://issues.apache.org/jira/browse/HADOOP-12557
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Chris Nauroth
>Assignee: huangzheng
>Priority: Minor
> Attachments: HADOOP-12557.patch
>
>
> BUILDING.txt mentions Findbugs 1.3.9 as a requirement, but it doesn't mention 
> that the {{FINDBUGS_HOME}} environment variable must point to the base 
> directory of the Findbugs installation when running builds with {{-Pdocs}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12663) Remove Hard-Coded Values From FileSystem.java

2015-12-22 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-12663:
-
Attachment: (was: FileSystem.HADOOP-12663.patch)

> Remove Hard-Coded Values From FileSystem.java
> -
>
> Key: HADOOP-12663
> URL: https://issues.apache.org/jira/browse/HADOOP-12663
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.1
>Reporter: BELUGA BEHR
>Priority: Trivial
> Attachments: FileSystem.HADOOP-12663.0002.patch
>
>
> Within FileSystem.java, there is one instance where the global variables 
> "CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_KEY" and 
> "CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_DEFAULT" were being used, 
> but in all other instances, their literal values were being used.
> Please find attached a patch to remove use of literal values and instead 
> replace them with references to the global variables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12667) s3a: Support createNonRecursive API

2015-12-22 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15068665#comment-15068665
 ] 

Chris Nauroth commented on HADOOP-12667:


[~ste...@apache.org], just to make sure I understand, are you suggesting that 
we go ahead with the non-atomic {{createNonRecursive}} implementation shown in 
the patch here, and it won't be useful to HBase, but perhaps other applications 
would find it useful?

[~mackrorysd], are you specifically trying to enable HBase running on S3?  (I'm 
asking because you mentioned HBase in the issue description.)

> s3a: Support createNonRecursive API
> ---
>
> Key: HADOOP-12667
> URL: https://issues.apache.org/jira/browse/HADOOP-12667
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-12667.001.patch
>
>
> HBase and other clients rely on the createNonRecursive API, which was 
> recently un-deprecated. S3A currently does not support it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12667) s3a: Support createNonRecursive API

2015-12-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15068703#comment-15068703
 ] 

Steve Loughran commented on HADOOP-12667:
-

yes, though we'll need to update that bit on object stores in the FS spec to 
add it to the list of things not to trust

> s3a: Support createNonRecursive API
> ---
>
> Key: HADOOP-12667
> URL: https://issues.apache.org/jira/browse/HADOOP-12667
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-12667.001.patch
>
>
> HBase and other clients rely on the createNonRecursive API, which was 
> recently un-deprecated. S3A currently does not support it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12473) distcp's ignoring failures option should be mutually exclusive with the atomic option

2015-12-22 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15068743#comment-15068743
 ] 

Mingliang Liu commented on HADOOP-12473:


When I cloned the issue, I didn't notice that this field should be unset. 
Thanks for the kind comment.

> distcp's ignoring failures option should be mutually exclusive with the 
> atomic option
> -
>
> Key: HADOOP-12473
> URL: https://issues.apache.org/jira/browse/HADOOP-12473
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.7.1
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>
> In {{CopyMapper::handleFailure}}, the mapper handles a failure and will ignore 
> it only if its config key is on. The ignore-failures option {{-i}} should be 
> mutually exclusive with the {{-atomic}} option; otherwise an incomplete dir is 
> eligible for commit, defeating the purpose.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12663) Remove Hard-Coded Values From FileSystem.java

2015-12-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15068660#comment-15068660
 ] 

Hadoop QA commented on HADOOP-12663:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 10s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 37s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
57s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 5s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 14m 49s 
{color} | {color:red} root-jdk1.8.0_66 with JDK v1.8.0_66 generated 1 new 
issues (was 730, now 730). {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 37s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 37s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 17s 
{color} | {color:red} Patch generated 1 new checkstyle issues in 
hadoop-common-project/hadoop-common (total was 140, now 139). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 0s {color} | 
{color:red} hadoop-common in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 58s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
25s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 40s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
|   | hadoop.fs.shell.TestCopyPreserveFlag |
| JDK v1.7.0_91 Failed junit tests | hadoop.ipc.TestIPC |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 

[jira] [Updated] (HADOOP-12469) distcp should not ignore the ignoreFailures option

2015-12-22 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-12469:
---
Fix Version/s: (was: 2.8.0)

> distcp should not ignore the ignoreFailures option
> --
>
> Key: HADOOP-12469
> URL: https://issues.apache.org/jira/browse/HADOOP-12469
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.7.1
>Reporter: Gera Shegalov
>Assignee: Mingliang Liu
>Priority: Critical
> Attachments: HADOOP-12469.000.patch, HADOOP-12469.001.patch
>
>
> {{RetriableFileCopyCommand.CopyReadException}} is double-wrapped:
> # via {{RetriableCommand::execute}}
> # via {{CopyMapper#copyFileWithRetry}}
> before {{CopyMapper::handleFailure}} tests 
> {code}
> if (ignoreFailures && exception.getCause() instanceof
> RetriableFileCopyCommand.CopyReadException
> {code}
> which is always false.
> Orthogonally, ignoring failures should be mutually exclusive with the atomic 
> option; otherwise an incomplete dir is eligible for commit, defeating the 
> purpose.
>  
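To make the double wrapping concrete, a hedged sketch (the wrapper exception 
types are placeholders; only {{CopyReadException}} is from distcp):

{code}
IOException readFailure = new RetriableFileCopyCommand.CopyReadException(
    new IOException("read failed"));
Exception once = new IOException(readFailure);  // wrap 1: RetriableCommand::execute
Exception twice = new IOException(once);        // wrap 2: CopyMapper#copyFileWithRetry

// The guard in handleFailure inspects only one level of cause:
boolean matchesOneLevel = twice.getCause()
    instanceof RetriableFileCopyCommand.CopyReadException;      // false
boolean matchesTwoLevels = twice.getCause().getCause()
    instanceof RetriableFileCopyCommand.CopyReadException;      // true
{code}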



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12663) Remove Hard-Coded Values From FileSystem.java

2015-12-22 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-12663:
-
Status: Patch Available  (was: Open)

> Remove Hard-Coded Values From FileSystem.java
> -
>
> Key: HADOOP-12663
> URL: https://issues.apache.org/jira/browse/HADOOP-12663
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.1
>Reporter: BELUGA BEHR
>Priority: Trivial
> Attachments: FileSystem.HADOOP-12663.0002.patch
>
>
> Within FileSystem.java, there is one instance where the global variables 
> "CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_KEY" and 
> "CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_DEFAULT" were being used, 
> but in all other instances, their literal values were being used.
> Please find attached a patch to remove use of literal values and instead 
> replace them with references to the global variables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12670) Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 only

2015-12-22 Thread Elliott Clark (JIRA)
Elliott Clark created HADOOP-12670:
--

 Summary: Fix TestNetUtils and TestSecurityUtil when localhost is 
ipv6 only
 Key: HADOOP-12670
 URL: https://issues.apache.org/jira/browse/HADOOP-12670
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Elliott Clark
Assignee: Elliott Clark


{code}
  TestSecurityUtil.testBuildTokenServiceSockAddr:165 expected:<[127.0.0.]1:123> 
but was:<[0:0:0:0:0:0:0:]1:123>
  TestSecurityUtil.testBuildDTServiceName:148 expected:<[127.0.0.]1:123> but 
was:<[0:0:0:0:0:0:0:]1:123>
  
TestSecurityUtil.testSocketAddrWithName:326->verifyServiceAddr:304->verifyAddress:284->verifyValues:251
 expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
  
TestSecurityUtil.testSocketAddrWithIP:333->verifyServiceAddr:304->verifyAddress:284->verifyValues:251
 expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
  
TestSecurityUtil.testSocketAddrWithNameToStaticName:340->verifyServiceAddr:304->verifyAddress:284->verifyValues:251
 expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
  TestNetUtils.testNormalizeHostName:639 expected:<[0:0:0:0:0:0:0:]1> but 
was:<[127.0.0.]1>
  TestNetUtils.testResolverLoopback:533->verifyInetAddress:496 
expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12671) Inconsistent configuration values and incorrect comments

2015-12-22 Thread Tianyin Xu (JIRA)
Tianyin Xu created HADOOP-12671:
---

 Summary: Inconsistent configuration values and incorrect comments
 Key: HADOOP-12671
 URL: https://issues.apache.org/jira/browse/HADOOP-12671
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf, documentation, fs/s3
Affects Versions: 2.6.2, 2.7.1
Reporter: Tianyin Xu


The following values in [core-default.xml | 
https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml]
 are wrong:
{{fs.s3a.multipart.purge.age}}
{{fs.s3a.connection.timeout}}
{{fs.s3a.connection.establish.timeout}}
\\
\\

*1. {{fs.s3a.multipart.purge.age}}*
(in both {{2.6.2}} and {{2.7.1}})
In [core-default.xml | 
https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
 the value is {{86400}} ({{24}} hours), while in the code it is {{14400}} 
({{4}} hours).
\\
\\

*2. {{fs.s3a.connection.timeout}}*
(only appears in {{2.6.2}})
In [core-default.xml (2.6.2) | 
https://hadoop.apache.org/docs/r2.6.2/hadoop-project-dist/hadoop-common/core-default.xml],
 the value is {{5000}}, while in the code it is {{5}}.
{code}
  // seconds until we give up on a connection to s3
  public static final String SOCKET_TIMEOUT = "fs.s3a.connection.timeout";
  public static final int DEFAULT_SOCKET_TIMEOUT = 5;
{code}
\\

*3. {{fs.s3a.connection.establish.timeout}}*
(only appears in {{2.7.1}})
In [core-default.xml (2.7.1)| 
https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
 the value is {{5000}}, while in the code it is {{5}}.
{code}
  // seconds until we give up trying to establish a connection to s3
  public static final String ESTABLISH_TIMEOUT = 
"fs.s3a.connection.establish.timeout";
  public static final int DEFAULT_ESTABLISH_TIMEOUT = 5;
{code}
\\

btw, the code comments are wrong! The two parameters are in units of 
*milliseconds*, not *seconds*:
{code}
-  // seconds until we give up on a connection to s3
+  // milliseconds until we give up on a connection to s3
...
-  // seconds until we give up trying to establish a connection to s3
+  // milliseconds until we give up trying to establish a connection to s3
{code}
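To make the unit mismatch concrete, a minimal sketch, assuming (as S3A does) 
that these values are handed to the AWS SDK's {{ClientConfiguration}}, whose 
setters take milliseconds:

{code}
import com.amazonaws.ClientConfiguration;

ClientConfiguration awsConf = new ClientConfiguration();
// setConnectionTimeout/setSocketTimeout take *milliseconds*, so the in-code
// default of 5 means a 5 ms timeout (near-instant failure), while the 5000
// documented in core-default.xml means 5 seconds.
awsConf.setConnectionTimeout(DEFAULT_ESTABLISH_TIMEOUT);  // 5 -> 5 ms
awsConf.setSocketTimeout(DEFAULT_SOCKET_TIMEOUT);         // 5 -> 5 ms
{code}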




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12671) Inconsistent configuration values and incorrect comments

2015-12-22 Thread Tianyin Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15068942#comment-15068942
 ] 

Tianyin Xu commented on HADOOP-12671:
-

[~liuml07], I attached the patch for trunk, but I don't know how to port the 
changes to all the 2.6.* and 2.7.* branches. Could you advise me on this? (The 
values are quite different across branches...)

> Inconsistent configuration values and incorrect comments
> 
>
> Key: HADOOP-12671
> URL: https://issues.apache.org/jira/browse/HADOOP-12671
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf, documentation, fs/s3
>Affects Versions: 2.7.1, 2.6.2
>Reporter: Tianyin Xu
> Attachments: 0001-fix-the-errors-in-default-configs.patch
>
>
> The following values in [core-default.xml | 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml]
>  are wrong:
> {{fs.s3a.multipart.purge.age}}
> {{fs.s3a.connection.timeout}}
> {{fs.s3a.connection.establish.timeout}}
> \\
> \\
> *1. {{fs.s3a.multipart.purge.age}}*
> (in both {{2.6.2}} and {{2.7.1}})
> In [core-default.xml | 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{86400}} ({{24}} hours), while in the code it is {{14400}} 
> ({{4}} hours).
> \\
> \\
> *2. {{fs.s3a.connection.timeout}}*
> (only appears in {{2.6.2}})
> In [core-default.xml (2.6.2) | 
> https://hadoop.apache.org/docs/r2.6.2/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{5000}}, while in the code it is {{5}}.
> {code}
>   // seconds until we give up on a connection to s3
>   public static final String SOCKET_TIMEOUT = "fs.s3a.connection.timeout";
>   public static final int DEFAULT_SOCKET_TIMEOUT = 5;
> {code}
> \\
> *3. {{fs.s3a.connection.establish.timeout}}*
> (only appears in {{2.7.1}})
> In [core-default.xml (2.7.1)| 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{5000}}, while in the code it is {{5}}.
> {code}
>   // seconds until we give up trying to establish a connection to s3
>   public static final String ESTABLISH_TIMEOUT = 
> "fs.s3a.connection.establish.timeout";
>   public static final int DEFAULT_ESTABLISH_TIMEOUT = 5;
> {code}
> \\
> btw, the code comments are wrong! The two parameters are in units of 
> *milliseconds*, not *seconds*:
> {code}
> -  // seconds until we give up on a connection to s3
> +  // milliseconds until we give up on a connection to s3
> ...
> -  // seconds until we give up trying to establish a connection to s3
> +  // milliseconds until we give up trying to establish a connection to s3
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11521) Make connection timeout configurable in s3a

2015-12-22 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15068983#comment-15068983
 ] 

Mingliang Liu commented on HADOOP-11521:


I mean millisecond or second. Sorry for the typo.

> Make connection timeout configurable in s3a 
> 
>
> Key: HADOOP-11521
> URL: https://issues.apache.org/jira/browse/HADOOP-11521
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.6.0
>Reporter: Thomas Demoor
>Assignee: Thomas Demoor
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-11521-002.patch, HADOOP-11521.001.patch
>
>
> Currently in s3a, only the socket timeout is configurable, i.e. how long to 
> wait before an existing connection is declared dead. The aws sdk has a 
> separate timeout for establishing a connection. This patch introduces a 
> config option in s3a to pass this on. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12671) Inconsistent configuration values and incorrect comments

2015-12-22 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-12671:
---
Assignee: Tianyin Xu

> Inconsistent configuration values and incorrect comments
> 
>
> Key: HADOOP-12671
> URL: https://issues.apache.org/jira/browse/HADOOP-12671
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf, documentation, fs/s3
>Affects Versions: 2.7.1, 2.6.2
>Reporter: Tianyin Xu
>Assignee: Tianyin Xu
> Attachments: 0001-fix-the-errors-in-default-configs.patch
>
>
> The following values in [core-default.xml | 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml]
>  are wrong:
> {{fs.s3a.multipart.purge.age}}
> {{fs.s3a.connection.timeout}}
> {{fs.s3a.connection.establish.timeout}}
> \\
> \\
> *1. {{fs.s3a.multipart.purge.age}}*
> (in both {{2.6.2}} and {{2.7.1}})
> In [core-default.xml | 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{86400}} ({{24}} hours), while in the code it is {{14400}} 
> ({{4}} hours).
> \\
> \\
> *2. {{fs.s3a.connection.timeout}}*
> (only appears in {{2.6.2}})
> In [core-default.xml (2.6.2) | 
> https://hadoop.apache.org/docs/r2.6.2/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{5000}}, while in the code it is {{5}}.
> {code}
>   // seconds until we give up on a connection to s3
>   public static final String SOCKET_TIMEOUT = "fs.s3a.connection.timeout";
>   public static final int DEFAULT_SOCKET_TIMEOUT = 5;
> {code}
> \\
> *3. {{fs.s3a.connection.establish.timeout}}*
> (only appears in {{2.7.1}})
> In [core-default.xml (2.7.1)| 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{5000}}, while in the code it is {{5}}.
> {code}
>   // seconds until we give up trying to establish a connection to s3
>   public static final String ESTABLISH_TIMEOUT = 
> "fs.s3a.connection.establish.timeout";
>   public static final int DEFAULT_ESTABLISH_TIMEOUT = 5;
> {code}
> \\
> btw, the code comments are wrong! The two parameters are in units of 
> *milliseconds*, not *seconds*:
> {code}
> -  // seconds until we give up on a connection to s3
> +  // milliseconds until we give up on a connection to s3
> ...
> -  // seconds until we give up trying to establish a connection to s3
> +  // milliseconds until we give up trying to establish a connection to s3
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11828) Implement the Hitchhiker erasure coding algorithm

2015-12-22 Thread Rashmi Vinayak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rashmi Vinayak updated HADOOP-11828:

Description: 
[Hitchhiker | 
http://www.eecs.berkeley.edu/~nihar/publications/Hitchhiker_SIGCOMM14.pdf] is a 
new erasure coding algorithm developed as a research project at UC Berkeley. It 
has been shown to reduce network traffic and disk I/O by 25%-45% during data 
reconstruction while retaining the same storage capacity and failure tolerance 
capability as RS codes. This JIRA aims to introduce Hitchhiker to the HDFS-EC 
framework, as one of the pluggable codec algorithms.

The existing implementation is based on HDFS-RAID. 

  was:
[Hitchhiker | 
http://www.eecs.berkeley.edu/~nihar/publications/Hitchhiker_SIGCOMM14.pdf] is a 
new erasure coding algorithm developed as a research project at UC Berkeley. It 
has been shown to reduce network traffic and disk I/O by 25%-45% during data 
reconstruction. This JIRA aims to introduce Hitchhiker to the HDFS-EC 
framework, as one of the pluggable codec algorithms.

The existing implementation is based on HDFS-RAID. 


> Implement the Hitchhiker erasure coding algorithm
> -
>
> Key: HADOOP-11828
> URL: https://issues.apache.org/jira/browse/HADOOP-11828
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: jack liuquan
> Attachments: 7715-hitchhikerXOR-v2-testcode.patch, 
> 7715-hitchhikerXOR-v2.patch, HADOOP-11828-hitchhikerXOR-V3.patch, 
> HADOOP-11828-hitchhikerXOR-V4.patch, HADOOP-11828-hitchhikerXOR-V5.patch, 
> HDFS-7715-hhxor-decoder.patch, HDFS-7715-hhxor-encoder.patch
>
>
> [Hitchhiker | 
> http://www.eecs.berkeley.edu/~nihar/publications/Hitchhiker_SIGCOMM14.pdf] is 
> a new erasure coding algorithm developed as a research project at UC 
> Berkeley. It has been shown to reduce network traffic and disk I/O by 25%-45% 
> during data reconstruction while retaining the same storage capacity and 
> failure tolerance capability as RS codes. This JIRA aims to introduce 
> Hitchhiker to the HDFS-EC framework, as one of the pluggable codec algorithms.
> The existing implementation is based on HDFS-RAID. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12670) Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 only

2015-12-22 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HADOOP-12670:
---
Attachment: HADOOP-12670-HADOOP-11890.0.patch

Make sure to use the new NetUtils functions.

> Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 only
> -
>
> Key: HADOOP-12670
> URL: https://issues.apache.org/jira/browse/HADOOP-12670
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: net
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12670-HADOOP-11890.0.patch
>
>
> {code}
>   TestSecurityUtil.testBuildTokenServiceSockAddr:165 
> expected:<[127.0.0.]1:123> but was:<[0:0:0:0:0:0:0:]1:123>
>   TestSecurityUtil.testBuildDTServiceName:148 expected:<[127.0.0.]1:123> but 
> was:<[0:0:0:0:0:0:0:]1:123>
>   
> TestSecurityUtil.testSocketAddrWithName:326->verifyServiceAddr:304->verifyAddress:284->verifyValues:251
>  expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
>   
> TestSecurityUtil.testSocketAddrWithIP:333->verifyServiceAddr:304->verifyAddress:284->verifyValues:251
>  expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
>   
> TestSecurityUtil.testSocketAddrWithNameToStaticName:340->verifyServiceAddr:304->verifyAddress:284->verifyValues:251
>  expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
>   TestNetUtils.testNormalizeHostName:639 expected:<[0:0:0:0:0:0:0:]1> but 
> was:<[127.0.0.]1>
>   TestNetUtils.testResolverLoopback:533->verifyInetAddress:496 
> expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12670) Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 only

2015-12-22 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HADOOP-12670:
---
Affects Version/s: HADOOP-11890
   Status: Patch Available  (was: Open)

> Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 only
> -
>
> Key: HADOOP-12670
> URL: https://issues.apache.org/jira/browse/HADOOP-12670
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: net
>Affects Versions: HADOOP-11890
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12670-HADOOP-11890.0.patch
>
>
> {code}
>   TestSecurityUtil.testBuildTokenServiceSockAddr:165 
> expected:<[127.0.0.]1:123> but was:<[0:0:0:0:0:0:0:]1:123>
>   TestSecurityUtil.testBuildDTServiceName:148 expected:<[127.0.0.]1:123> but 
> was:<[0:0:0:0:0:0:0:]1:123>
>   
> TestSecurityUtil.testSocketAddrWithName:326->verifyServiceAddr:304->verifyAddress:284->verifyValues:251
>  expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
>   
> TestSecurityUtil.testSocketAddrWithIP:333->verifyServiceAddr:304->verifyAddress:284->verifyValues:251
>  expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
>   
> TestSecurityUtil.testSocketAddrWithNameToStaticName:340->verifyServiceAddr:304->verifyAddress:284->verifyValues:251
>  expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
>   TestNetUtils.testNormalizeHostName:639 expected:<[0:0:0:0:0:0:0:]1> but 
> was:<[127.0.0.]1>
>   TestNetUtils.testResolverLoopback:533->verifyInetAddress:496 
> expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12662) The build should fail if a -Dbundle option fails

2015-12-22 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15068993#comment-15068993
 ] 

Kai Zheng commented on HADOOP-12662:


bq. dist is used by anyone using the tar ball install.
To clarify (please correct me if I'm wrong): the script only runs during the 
tarball-building phase, when someone runs {{mvn package}} with the {{-Pdist 
-Dtar}} options. So it seems reasonable to require {{bash}} in the build 
environment; it's not needed by end users. Even in a Hadoop deployment 
environment, I checked the relevant daemon/client scripts, and they already 
rely on bash.

bq. I think it's reasonable to split this script out into a separate file.
With the script split out, it's a good chance to rewrite it (ideally in bash, 
if agreed). For example, a function to bundle a native library would be nice to 
have; then adding a new native library would only require a single line calling 
it, instead of duplicating a block of code.

> The build should fail if a -Dbundle option fails
> 
>
> Key: HADOOP-12662
> URL: https://issues.apache.org/jira/browse/HADOOP-12662
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12662-v1.patch, HADOOP-12662-v2.patch
>
>
> Per some discussion with [~cmccabe], it would be good to refine and make it 
> consistent the behaviors in bundling native libraries when building dist 
> package.
> For all native libraries to bundle, if the bundling option like 
> {{-Dbundle.snappy}} is specified, then the lib option like {{-Dsnappy.lib}} 
> will be checked and ensured to be there, but if not, it will then report 
> error and fail the building explicitly.
> {{BUILDING.txt}} would also be updated to explicitly state this behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12671) Inconsistent configuration values and incorrect comments

2015-12-22 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15069010#comment-15069010
 ] 

Mingliang Liu commented on HADOOP-12671:


Thanks for your work. You're right to file them independently; that makes them 
easier to review, test, and commit. Sorry, I'm not a committer and I'm not in 
charge of anything. Just an innocent contributor.

> Inconsistent configuration values and incorrect comments
> 
>
> Key: HADOOP-12671
> URL: https://issues.apache.org/jira/browse/HADOOP-12671
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf, documentation, fs/s3
>Affects Versions: 2.7.1, 2.6.2
>Reporter: Tianyin Xu
>Assignee: Tianyin Xu
> Attachments: HADOOP-12671.000.patch
>
>
> The following values in [core-default.xml | 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml]
>  are wrong:
> {{fs.s3a.multipart.purge.age}}
> {{fs.s3a.connection.timeout}}
> {{fs.s3a.connection.establish.timeout}}
> \\
> \\
> *1. {{fs.s3a.multipart.purge.age}}*
> (in both {{2.6.2}} and {{2.7.1}})
> In [core-default.xml | 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{86400}} ({{24}} hours), while in the code it is {{14400}} 
> ({{4}} hours).
> \\
> \\
> *2. {{fs.s3a.connection.timeout}}*
> (only appears in {{2.6.2}})
> In [core-default.xml (2.6.2) | 
> https://hadoop.apache.org/docs/r2.6.2/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{5000}}, while in the code it is {{5}}.
> {code}
>   // seconds until we give up on a connection to s3
>   public static final String SOCKET_TIMEOUT = "fs.s3a.connection.timeout";
>   public static final int DEFAULT_SOCKET_TIMEOUT = 5;
> {code}
> \\
> *3. {{fs.s3a.connection.establish.timeout}}*
> (only appears in {{2.7.1}})
> In [core-default.xml (2.7.1)| 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{5000}}, while in the code it is {{5}}.
> {code}
>   // seconds until we give up trying to establish a connection to s3
>   public static final String ESTABLISH_TIMEOUT = 
> "fs.s3a.connection.establish.timeout";
>   public static final int DEFAULT_ESTABLISH_TIMEOUT = 5;
> {code}
> \\
> btw, the code comments are wrong! The two parameters are in units of 
> *milliseconds*, not *seconds*:
> {code}
> -  // seconds until we give up on a connection to s3
> +  // milliseconds until we give up on a connection to s3
> ...
> -  // seconds until we give up trying to establish a connection to s3
> +  // milliseconds until we give up trying to establish a connection to s3
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12671) Inconsistent configuration values and incorrect comments

2015-12-22 Thread Tianyin Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tianyin Xu updated HADOOP-12671:

Attachment: 0001-fix-the-errors-in-default-configs.patch

patch for trunk

> Inconsistent configuration values and incorrect comments
> 
>
> Key: HADOOP-12671
> URL: https://issues.apache.org/jira/browse/HADOOP-12671
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf, documentation, fs/s3
>Affects Versions: 2.7.1, 2.6.2
>Reporter: Tianyin Xu
> Attachments: 0001-fix-the-errors-in-default-configs.patch
>
>
> The following values in [core-default.xml | 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml]
>  are wrong:
> {{fs.s3a.multipart.purge.age}}
> {{fs.s3a.connection.timeout}}
> {{fs.s3a.connection.establish.timeout}}
> \\
> \\
> *1. {{fs.s3a.multipart.purge.age}}*
> (in both {{2.6.2}} and {{2.7.1}})
> In [core-default.xml | 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{86400}} ({{24}} hours), while in the code it is {{14400}} 
> ({{4}} hours).
> \\
> \\
> *2. {{fs.s3a.connection.timeout}}*
> (only appears in {{2.6.2}})
> In [core-default.xml (2.6.2) | 
> https://hadoop.apache.org/docs/r2.6.2/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{5000}}, while in the code it is {{5}}.
> {code}
>   // seconds until we give up on a connection to s3
>   public static final String SOCKET_TIMEOUT = "fs.s3a.connection.timeout";
>   public static final int DEFAULT_SOCKET_TIMEOUT = 5;
> {code}
> \\
> *3. {{fs.s3a.connection.establish.timeout}}*
> (only appears in {{2.7.1}})
> In [core-default.xml (2.7.1)| 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{5000}}, while in the code it is {{5}}.
> {code}
>   // seconds until we give up trying to establish a connection to s3
>   public static final String ESTABLISH_TIMEOUT = 
> "fs.s3a.connection.establish.timeout";
>   public static final int DEFAULT_ESTABLISH_TIMEOUT = 5;
> {code}
> \\
> btw, the code comments are wrong! The two parameters are in units of 
> *milliseconds*, not *seconds*:
> {code}
> -  // seconds until we give up on a connection to s3
> +  // milliseconds until we give up on a connection to s3
> ...
> -  // seconds until we give up trying to establish a connection to s3
> +  // milliseconds until we give up trying to establish a connection to s3
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12559) KMS connection failures should trigger TGT renewal

2015-12-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15068975#comment-15068975
 ] 

Hadoop QA commented on HADOOP-12559:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 2s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 43s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 49s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 44s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 41s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 58s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 15s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12779152/HADOOP-12559.05.patch 
|
| JIRA Issue | HADOOP-12559 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1e3fca6a225c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HADOOP-12671) Inconsistent configuration values and incorrect comments

2015-12-22 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15068976#comment-15068976
 ] 

Mingliang Liu commented on HADOOP-12671:


# You can rename the patch file for the {{trunk}} branch to 
_HADOOP-12671.000.patch_. A patch is for the {{trunk}} branch by default.
# Meanwhile, if the patch cannot be applied to {{branch-2}} directly, you 
should upload a new patch file; the suggested name is 
_HADOOP-12671-branch-2.000.patch_. Generally we don't work on released branches 
like 2.6.3 or 2.7.1.

More information can be found at: https://wiki.apache.org/hadoop/HowToContribute

> Inconsistent configuration values and incorrect comments
> 
>
> Key: HADOOP-12671
> URL: https://issues.apache.org/jira/browse/HADOOP-12671
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf, documentation, fs/s3
>Affects Versions: 2.7.1, 2.6.2
>Reporter: Tianyin Xu
> Attachments: 0001-fix-the-errors-in-default-configs.patch
>
>
> The following values in [core-default.xml | 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml]
>  are wrong. 
> {{fs.s3a.multipart.purge.age}}
> {{fs.s3a.connection.timeout}}
> {{fs.s3a.connection.establish.timeout}}
> \\
> \\
> *1. {{fs.s3a.multipart.purge.age}}*
> (in both {{2.6.2}} and {{2.7.1}})
> In [core-default.xml | 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{86400}} ({{24}} hours), while in the code it is {{14400}} 
> ({{4}} hours).
> \\
> \\
> *2. {{fs.s3a.connection.timeout}}*
> (only appears in {{2.6.2}})
> In [core-default.xml (2.6.2) | 
> https://hadoop.apache.org/docs/r2.6.2/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{5000}}, while in the code it is {{5}}.
> {code}
>   // seconds until we give up on a connection to s3
>   public static final String SOCKET_TIMEOUT = "fs.s3a.connection.timeout";
>   public static final int DEFAULT_SOCKET_TIMEOUT = 5;
> {code}
> \\
> *3. {{fs.s3a.connection.establish.timeout}}*
> (only appears in {{2.7.1}})
> In [core-default.xml (2.7.1)| 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{5000}}, while in the code it is {{5}}.
> {code}
>   // seconds until we give up trying to establish a connection to s3
>   public static final String ESTABLISH_TIMEOUT = 
> "fs.s3a.connection.establish.timeout";
>   public static final int DEFAULT_ESTABLISH_TIMEOUT = 5;
> {code}
> \\
> btw, the code comments are wrong! The two parameters are in the unit of 
> *milliseconds* instead of *seconds*...
> {code}
> -  // seconds until we give up on a connection to s3
> +  // milliseconds until we give up on a connection to s3
> ...
> -  // seconds until we give up trying to establish a connection to s3
> +  // milliseconds until we give up trying to establish a connection to s3
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12670) Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 only

2015-12-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068984#comment-15068984
 ] 

Hadoop QA commented on HADOOP-12670:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 15m 24s 
{color} | {color:red} Docker failed to build yetus/hadoop:a890a31. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12779159/HADOOP-12670-HADOOP-11890.0.patch
 |
| JIRA Issue | HADOOP-12670 |
| Powered by | Apache Yetus 0.2.0-SNAPSHOT   http://yetus.apache.org |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8293/console |


This message was automatically generated.



> Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 only
> -
>
> Key: HADOOP-12670
> URL: https://issues.apache.org/jira/browse/HADOOP-12670
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: net
>Affects Versions: HADOOP-11890
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12670-HADOOP-11890.0.patch
>
>
> {code}
>   TestSecurityUtil.testBuildTokenServiceSockAddr:165 
> expected:<[127.0.0.]1:123> but was:<[0:0:0:0:0:0:0:]1:123>
>   TestSecurityUtil.testBuildDTServiceName:148 expected:<[127.0.0.]1:123> but 
> was:<[0:0:0:0:0:0:0:]1:123>
>   
> TestSecurityUtil.testSocketAddrWithName:326->verifyServiceAddr:304->verifyAddress:284->verifyValues:251
>  expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
>   
> TestSecurityUtil.testSocketAddrWithIP:333->verifyServiceAddr:304->verifyAddress:284->verifyValues:251
>  expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
>   
> TestSecurityUtil.testSocketAddrWithNameToStaticName:340->verifyServiceAddr:304->verifyAddress:284->verifyValues:251
>  expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
>   TestNetUtils.testNormalizeHostName:639 expected:<[0:0:0:0:0:0:0:]1> but 
> was:<[127.0.0.]1>
>   TestNetUtils.testResolverLoopback:533->verifyInetAddress:496 
> expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
> {code}
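
The asymmetry in the failures quoted above comes straight from name resolution: 
on an IPv6-only host, {{localhost}} resolves to {{::1}} rather than 
{{127.0.0.1}}. A minimal probe of this behavior (plain JDK code, not part of 
the patch; the class name is illustrative):

{code}
import java.net.InetAddress;

public class LoopbackProbe {
  public static void main(String[] args) throws Exception {
    // On an IPv4 or dual-stack host this typically prints 127.0.0.1; on an
    // IPv6-only host it prints 0:0:0:0:0:0:0:1, which is exactly the
    // mismatch the failing assertions above complain about.
    InetAddress lo = InetAddress.getByName("localhost");
    System.out.println(lo.getHostAddress());
  }
}
{code}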



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12671) Inconsistent configuration values and incorrect comments

2015-12-22 Thread Tianyin Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tianyin Xu updated HADOOP-12671:

Attachment: HADOOP-12671.000.patch

> Inconsistent configuration values and incorrect comments
> 
>
> Key: HADOOP-12671
> URL: https://issues.apache.org/jira/browse/HADOOP-12671
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf, documentation, fs/s3
>Affects Versions: 2.7.1, 2.6.2
>Reporter: Tianyin Xu
>Assignee: Tianyin Xu
> Attachments: HADOOP-12671.000.patch
>
>
> The following values in [core-default.xml | 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml]
>  are wrong. 
> {{fs.s3a.multipart.purge.age}}
> {{fs.s3a.connection.timeout}}
> {{fs.s3a.connection.establish.timeout}}
> \\
> \\
> *1. {{fs.s3a.multipart.purge.age}}*
> (in both {{2.6.2}} and {{2.7.1}})
> In [core-default.xml | 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{86400}} ({{24}} hours), while in the code it is {{14400}} 
> ({{4}} hours).
> \\
> \\
> *2. {{fs.s3a.connection.timeout}}*
> (only appears in {{2.6.2}})
> In [core-default.xml (2.6.2) | 
> https://hadoop.apache.org/docs/r2.6.2/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{5000}}, while in the code it is {{5}}.
> {code}
>   // seconds until we give up on a connection to s3
>   public static final String SOCKET_TIMEOUT = "fs.s3a.connection.timeout";
>   public static final int DEFAULT_SOCKET_TIMEOUT = 5;
> {code}
> \\
> *3. {{fs.s3a.connection.establish.timeout}}*
> (only appears in {{2.7.1}})
> In [core-default.xml (2.7.1)| 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{5000}}, while in the code it is {{5}}.
> {code}
>   // seconds until we give up trying to establish a connection to s3
>   public static final String ESTABLISH_TIMEOUT = 
> "fs.s3a.connection.establish.timeout";
>   public static final int DEFAULT_ESTABLISH_TIMEOUT = 5;
> {code}
> \\
> btw, the code comments are wrong! The two parameters are in the unit of 
> *milliseconds* instead of *seconds*...
> {code}
> -  // seconds until we give up on a connection to s3
> +  // milliseconds until we give up on a connection to s3
> ...
> -  // seconds until we give up trying to establish a connection to s3
> +  // milliseconds until we give up trying to establish a connection to s3
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12671) Inconsistent configuration values and incorrect comments

2015-12-22 Thread Tianyin Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tianyin Xu updated HADOOP-12671:

Attachment: (was: HADOOP-12671.000.patch)

> Inconsistent configuration values and incorrect comments
> 
>
> Key: HADOOP-12671
> URL: https://issues.apache.org/jira/browse/HADOOP-12671
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf, documentation, fs/s3
>Affects Versions: 2.7.1, 2.6.2
>Reporter: Tianyin Xu
>Assignee: Tianyin Xu
> Attachments: HADOOP-12671.000.patch
>
>
> The following values in [core-default.xml | 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml]
>  are wrong. 
> {{fs.s3a.multipart.purge.age}}
> {{fs.s3a.connection.timeout}}
> {{fs.s3a.connection.establish.timeout}}
> \\
> \\
> *1. {{fs.s3a.multipart.purge.age}}*
> (in both {{2.6.2}} and {{2.7.1}})
> In [core-default.xml | 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{86400}} ({{24}} hours), while in the code it is {{14400}} 
> ({{4}} hours).
> \\
> \\
> *2. {{fs.s3a.connection.timeout}}*
> (only appears in {{2.6.2}})
> In [core-default.xml (2.6.2) | 
> https://hadoop.apache.org/docs/r2.6.2/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{5000}}, while in the code it is {{5}}.
> {code}
>   // seconds until we give up on a connection to s3
>   public static final String SOCKET_TIMEOUT = "fs.s3a.connection.timeout";
>   public static final int DEFAULT_SOCKET_TIMEOUT = 5;
> {code}
> \\
> *3. {{fs.s3a.connection.establish.timeout}}*
> (only appears in {{2.7.1}})
> In [core-default.xml (2.7.1)| 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{5000}}, while in the code it is {{5}}.
> {code}
>   // seconds until we give up trying to establish a connection to s3
>   public static final String ESTABLISH_TIMEOUT = 
> "fs.s3a.connection.establish.timeout";
>   public static final int DEFAULT_ESTABLISH_TIMEOUT = 5;
> {code}
> \\
> btw, the code comments are wrong! The two parameters are in the unit of 
> *milliseconds* instead of *seconds*...
> {code}
> -  // seconds until we give up on a connection to s3
> +  // milliseconds until we give up on a connection to s3
> ...
> -  // seconds until we give up trying to establish a connection to s3
> +  // milliseconds until we give up trying to establish a connection to s3
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12671) Inconsistent configuration values and incorrect comments

2015-12-22 Thread Tianyin Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tianyin Xu updated HADOOP-12671:

Attachment: (was: 0001-fix-the-errors-in-default-configs.patch)

> Inconsistent configuration values and incorrect comments
> 
>
> Key: HADOOP-12671
> URL: https://issues.apache.org/jira/browse/HADOOP-12671
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf, documentation, fs/s3
>Affects Versions: 2.7.1, 2.6.2
>Reporter: Tianyin Xu
>Assignee: Tianyin Xu
> Attachments: HADOOP-12671.000.patch
>
>
> The following values in [core-default.xml | 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml]
>  are wrong. 
> {{fs.s3a.multipart.purge.age}}
> {{fs.s3a.connection.timeout}}
> {{fs.s3a.connection.establish.timeout}}
> \\
> \\
> *1. {{fs.s3a.multipart.purge.age}}*
> (in both {{2.6.2}} and {{2.7.1}})
> In [core-default.xml | 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{86400}} ({{24}} hours), while in the code it is {{14400}} 
> ({{4}} hours).
> \\
> \\
> *2. {{fs.s3a.connection.timeout}}*
> (only appears in {{2.6.2}})
> In [core-default.xml (2.6.2) | 
> https://hadoop.apache.org/docs/r2.6.2/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{5000}}, while in the code it is {{5}}.
> {code}
>   // seconds until we give up on a connection to s3
>   public static final String SOCKET_TIMEOUT = "fs.s3a.connection.timeout";
>   public static final int DEFAULT_SOCKET_TIMEOUT = 5;
> {code}
> \\
> *3. {{fs.s3a.connection.establish.timeout}}*
> (only appears in {{2.7.1}})
> In [core-default.xml (2.7.1)| 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{5000}}, while in the code it is {{5}}.
> {code}
>   // seconds until we give up trying to establish a connection to s3
>   public static final String ESTABLISH_TIMEOUT = 
> "fs.s3a.connection.establish.timeout";
>   public static final int DEFAULT_ESTABLISH_TIMEOUT = 5;
> {code}
> \\
> btw, the code comments are wrong! The two parameters are in the unit of 
> *milliseconds* instead of *seconds*...
> {code}
> -  // seconds until we give up on a connection to s3
> +  // milliseconds until we give up on a connection to s3
> ...
> -  // seconds until we give up trying to establish a connection to s3
> +  // milliseconds until we give up trying to establish a connection to s3
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12671) Inconsistent configuration values and incorrect comments

2015-12-22 Thread Tianyin Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tianyin Xu updated HADOOP-12671:

Attachment: HADOOP-12671.000.patch

> Inconsistent configuration values and incorrect comments
> 
>
> Key: HADOOP-12671
> URL: https://issues.apache.org/jira/browse/HADOOP-12671
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf, documentation, fs/s3
>Affects Versions: 2.7.1, 2.6.2
>Reporter: Tianyin Xu
>Assignee: Tianyin Xu
> Attachments: HADOOP-12671.000.patch
>
>
> The following values in [core-default.xml | 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml]
>  are wrong. 
> {{fs.s3a.multipart.purge.age}}
> {{fs.s3a.connection.timeout}}
> {{fs.s3a.connection.establish.timeout}}
> \\
> \\
> *1. {{fs.s3a.multipart.purge.age}}*
> (in both {{2.6.2}} and {{2.7.1}})
> In [core-default.xml | 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{86400}} ({{24}} hours), while in the code it is {{14400}} 
> ({{4}} hours).
> \\
> \\
> *2. {{fs.s3a.connection.timeout}}*
> (only appears in {{2.6.2}})
> In [core-default.xml (2.6.2) | 
> https://hadoop.apache.org/docs/r2.6.2/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{5000}}, while in the code it is {{5}}.
> {code}
>   // seconds until we give up on a connection to s3
>   public static final String SOCKET_TIMEOUT = "fs.s3a.connection.timeout";
>   public static final int DEFAULT_SOCKET_TIMEOUT = 5;
> {code}
> \\
> *3. {{fs.s3a.connection.establish.timeout}}*
> (only appears in {{2.7.1}})
> In [core-default.xml (2.7.1)| 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{5000}}, while in the code it is {{5}}.
> {code}
>   // seconds until we give up trying to establish a connection to s3
>   public static final String ESTABLISH_TIMEOUT = 
> "fs.s3a.connection.establish.timeout";
>   public static final int DEFAULT_ESTABLISH_TIMEOUT = 5;
> {code}
> \\
> btw, the code comments are wrong! The two parameters are in the unit of 
> *milliseconds* instead of *seconds*...
> {code}
> -  // seconds until we give up on a connection to s3
> +  // milliseconds until we give up on a connection to s3
> ...
> -  // seconds until we give up trying to establish a connection to s3
> +  // milliseconds until we give up trying to establish a connection to s3
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11828) Implement the Hitchhiker erasure coding algorithm

2015-12-22 Thread Rashmi Vinayak (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069031#comment-15069031
 ] 

Rashmi Vinayak commented on HADOOP-11828:
-

Hi [~jack_liuquan],
Thanks for the great work! I went through the code very carefully for the 
algorithm review. Everything looks fine in terms of correctness. 
A few comments:
1. The name ‘doDecodeMulti’ for the method in HHXORErasureDecodingStep is 
slightly confusing, since it handles both the case of multiple erasures and 
the case of a single parity erasure. Perhaps something along the lines of 
‘doDecodeMultiAndParity’ would reflect the actions of this method more 
accurately? 
2. It seems that there is no need to pass ‘erasedIndexes’ as input to the 
methods in the HHXORErasureDecodingStep class, since it is a class variable. 
(You might have used these additional inputs for clarity; I just thought of 
bringing this to your attention.) 
3. On a minor note, I think it would be helpful for future readers to include a 
reference to the paper in case they want to understand the algorithm. What do 
you think? (We can have something along the lines of: “A "Hitchhiker's" Guide 
to Fast and Efficient Data Reconstruction in Erasure-coded Data Centers”, in 
ACM SIGCOMM 2014.) Also, just to make the context completely clear, could you 
please change the description in the comments to “It has been shown to reduce 
network traffic and disk I/O by 25%-45% during data reconstruction while 
retaining the same storage capacity and failure tolerance capability of RS 
codes.” (the last phrase is added to the existing comment).

Thanks,
Rashmi

> Implement the Hitchhiker erasure coding algorithm
> -
>
> Key: HADOOP-11828
> URL: https://issues.apache.org/jira/browse/HADOOP-11828
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: jack liuquan
> Attachments: 7715-hitchhikerXOR-v2-testcode.patch, 
> 7715-hitchhikerXOR-v2.patch, HADOOP-11828-hitchhikerXOR-V3.patch, 
> HADOOP-11828-hitchhikerXOR-V4.patch, HADOOP-11828-hitchhikerXOR-V5.patch, 
> HDFS-7715-hhxor-decoder.patch, HDFS-7715-hhxor-encoder.patch
>
>
> [Hitchhiker | 
> http://www.eecs.berkeley.edu/~nihar/publications/Hitchhiker_SIGCOMM14.pdf] is 
> a new erasure coding algorithm developed as a research project at UC 
> Berkeley. It has been shown to reduce network traffic and disk I/O by 25%-45% 
> during data reconstruction while retaining the same storage capacity and 
> failure tolerance capability as RS codes. This JIRA aims to introduce 
> Hitchhiker to the HDFS-EC framework, as one of the pluggable codec algorithms.
> The existing implementation is based on HDFS-RAID. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12671) Inconsistent configuration values and incorrect comments

2015-12-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069069#comment-15069069
 ] 

Hadoop QA commented on HADOOP-12671:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
9s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 14s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
51s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 39s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 30s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
11s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
33s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 29s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 40s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 39s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
25s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m 49s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | 

[jira] [Commented] (HADOOP-12670) Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 only

2015-12-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069079#comment-15069079
 ] 

Hadoop QA commented on HADOOP-12670:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 11m 1s 
{color} | {color:red} Docker failed to build yetus/hadoop:a890a31. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12779184/HADOOP-12670-HADOOP-11890.2.patch
 |
| JIRA Issue | HADOOP-12670 |
| Powered by | Apache Yetus 0.2.0-SNAPSHOT   http://yetus.apache.org |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8295/console |


This message was automatically generated.



> Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 only
> -
>
> Key: HADOOP-12670
> URL: https://issues.apache.org/jira/browse/HADOOP-12670
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: net
>Affects Versions: HADOOP-11890
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12670-HADOOP-11890.0.patch, 
> HADOOP-12670-HADOOP-11890.2.patch
>
>
> {code}
>   TestSecurityUtil.testBuildTokenServiceSockAddr:165 
> expected:<[127.0.0.]1:123> but was:<[0:0:0:0:0:0:0:]1:123>
>   TestSecurityUtil.testBuildDTServiceName:148 expected:<[127.0.0.]1:123> but 
> was:<[0:0:0:0:0:0:0:]1:123>
>   
> TestSecurityUtil.testSocketAddrWithName:326->verifyServiceAddr:304->verifyAddress:284->verifyValues:251
>  expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
>   
> TestSecurityUtil.testSocketAddrWithIP:333->verifyServiceAddr:304->verifyAddress:284->verifyValues:251
>  expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
>   
> TestSecurityUtil.testSocketAddrWithNameToStaticName:340->verifyServiceAddr:304->verifyAddress:284->verifyValues:251
>  expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
>   TestNetUtils.testNormalizeHostName:639 expected:<[0:0:0:0:0:0:0:]1> but 
> was:<[127.0.0.]1>
>   TestNetUtils.testResolverLoopback:533->verifyInetAddress:496 
> expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11990) DNS#reverseDns fails with a NumberFormatException when using an IPv6 DNS server

2015-12-22 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068958#comment-15068958
 ] 

Elliott Clark commented on HADOOP-11990:


8u60 has this and the fix seems to work well for me.
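
For context, the parse failure in the stack trace below stems from naive 
{{host:port}} handling of the nameserver string: everything after the first 
colon is treated as a port number, which cannot work for a bare IPv6 literal. 
A standalone sketch of that failure mode (illustrative code, not the JDK 
source):

{code}
public class NameserverParse {
  // Naive host:port split, sketching what the JNDI DNS client effectively
  // did: everything after the first ':' is parsed as a port number.
  static int portOf(String server) {
    int colon = server.indexOf(':');
    return colon < 0 ? 53 : Integer.parseInt(server.substring(colon + 1));
  }

  public static void main(String[] args) {
    portOf("192.168.1.1");    // no colon: default DNS port 53
    portOf("2604:5500:3::3"); // NumberFormatException: For input string: "5500:3::3"
  }
}
{code}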

> DNS#reverseDns fails with a NumberFormatException when using an IPv6 DNS 
> server
> ---
>
> Key: HADOOP-11990
> URL: https://issues.apache.org/jira/browse/HADOOP-11990
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: net
>Affects Versions: 2.5.1
> Environment: java version "1.7.0_45"
> Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
> Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)
>Reporter: Benoit Sigoure
>  Labels: ipv6
>
> With this resolv.conf:
> {code}
> nameserver 192.168.1.1
> nameserver 2604:5500:3::3
> nameserver 2604:5500:3:3::3
> {code}
> Starting HBase yields the following:
> {code}
> Caused by: java.lang.NumberFormatException: For input string: "5500:3::3"
> at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> at java.lang.Integer.parseInt(Integer.java:492)
> at java.lang.Integer.parseInt(Integer.java:527)
> at com.sun.jndi.dns.DnsClient.<init>(DnsClient.java:122)
> at com.sun.jndi.dns.Resolver.<init>(Resolver.java:61)
> at com.sun.jndi.dns.DnsContext.getResolver(DnsContext.java:570)
> at com.sun.jndi.dns.DnsContext.c_getAttributes(DnsContext.java:430)
> at 
> com.sun.jndi.toolkit.ctx.ComponentDirContext.p_getAttributes(ComponentDirContext.java:231)
> at 
> com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.getAttributes(PartialCompositeDirContext.java:139)
> at 
> com.sun.jndi.toolkit.url.GenericURLDirContext.getAttributes(GenericURLDirContext.java:103)
> at 
> javax.naming.directory.InitialDirContext.getAttributes(InitialDirContext.java:142)
> at org.apache.hadoop.net.DNS.reverseDns(DNS.java:84)
> at org.apache.hadoop.net.DNS.getHosts(DNS.java:241)
> at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:344)
> at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:362)
> at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:341)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.getHostname(RSRpcServices.java:825)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:782)
> at 
> org.apache.hadoop.hbase.master.MasterRpcServices.<init>(MasterRpcServices.java:195)
> at 
> org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:477)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:492)
> at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:333)
> at 
> org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMaster.<init>(HMasterCommandLine.java:276)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at 
> org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:139)
> ... 7 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11990) DNS#reverseDns fails with a NumberFormatException when using an IPv6 DNS server

2015-12-22 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark resolved HADOOP-11990.

  Resolution: Not A Problem
Release Note: 
If your resolvers are IPv6 addresses, make sure that you use one of the Java 
versions listed in https://bugs.openjdk.java.net/browse/JDK-6991580:

jdk8u60, jdk8u65, or jdk9+

> DNS#reverseDns fails with a NumberFormatException when using an IPv6 DNS 
> server
> ---
>
> Key: HADOOP-11990
> URL: https://issues.apache.org/jira/browse/HADOOP-11990
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: net
>Affects Versions: 2.5.1
> Environment: java version "1.7.0_45"
> Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
> Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)
>Reporter: Benoit Sigoure
>  Labels: ipv6
>
> With this resolv.conf:
> {code}
> nameserver 192.168.1.1
> nameserver 2604:5500:3::3
> nameserver 2604:5500:3:3::3
> {code}
> Starting HBase yields the following:
> {code}
> Caused by: java.lang.NumberFormatException: For input string: "5500:3::3"
> at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> at java.lang.Integer.parseInt(Integer.java:492)
> at java.lang.Integer.parseInt(Integer.java:527)
> at com.sun.jndi.dns.DnsClient.<init>(DnsClient.java:122)
> at com.sun.jndi.dns.Resolver.<init>(Resolver.java:61)
> at com.sun.jndi.dns.DnsContext.getResolver(DnsContext.java:570)
> at com.sun.jndi.dns.DnsContext.c_getAttributes(DnsContext.java:430)
> at 
> com.sun.jndi.toolkit.ctx.ComponentDirContext.p_getAttributes(ComponentDirContext.java:231)
> at 
> com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.getAttributes(PartialCompositeDirContext.java:139)
> at 
> com.sun.jndi.toolkit.url.GenericURLDirContext.getAttributes(GenericURLDirContext.java:103)
> at 
> javax.naming.directory.InitialDirContext.getAttributes(InitialDirContext.java:142)
> at org.apache.hadoop.net.DNS.reverseDns(DNS.java:84)
> at org.apache.hadoop.net.DNS.getHosts(DNS.java:241)
> at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:344)
> at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:362)
> at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:341)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.getHostname(RSRpcServices.java:825)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:782)
> at 
> org.apache.hadoop.hbase.master.MasterRpcServices.<init>(MasterRpcServices.java:195)
> at 
> org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:477)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:492)
> at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:333)
> at 
> org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMaster.<init>(HMasterCommandLine.java:276)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at 
> org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:139)
> ... 7 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-12431) NameNode should bind on both IPv6 and IPv4 if running on dual-stack machine and IPv6 enabled

2015-12-22 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark resolved HADOOP-12431.

Resolution: Won't Fix
  Assignee: (was: Nemanja Matkovic)

Going to resolve this one as Won't Fix. We don't want to bind to IPv6 by 
default. Instead I'm going to open a documentation JIRA about how to set up a 
cluster with dual stack.
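
For anyone following the dual-stack discussion, the JDK-level behavior such 
documentation would cover is roughly this: with 
{{java.net.preferIPv4Stack=false}} (the default), a socket bound to the IPv6 
wildcard address accepts both IPv4 and IPv6 clients on a dual-stack kernel. A 
sketch in plain {{java.net}} (port 8020 is used only as an illustration; this 
is not NameNode code):

{code}
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class DualStackBind {
  public static void main(String[] args) throws Exception {
    // The JDK leaves IPV6_V6ONLY off by default, so a socket bound to "::"
    // also accepts IPv4 clients, which arrive as v4-mapped IPv6 addresses.
    ServerSocket server = new ServerSocket();
    server.bind(new InetSocketAddress("::", 8020));
    System.out.println("listening on " + server.getLocalSocketAddress());
  }
}
{code}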

> NameNode should bind on both IPv6 and IPv4 if running on dual-stack machine 
> and IPv6 enabled
> 
>
> Key: HADOOP-12431
> URL: https://issues.apache.org/jira/browse/HADOOP-12431
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.7.1
>Reporter: Nate Edel
>  Labels: ipv6
>
> NameNode works properly on IPv4 or IPv6 single stack (assuming in the latter 
> case that scripts have been changed to disable preferIPv4Stack, and dependent 
> on the client/data node fix in HDFS-8078).  On dual-stack machines, NameNode 
> listens only on IPv4 (even ignoring preferIPv6Addresses being set.)
> Our initial use case for IPv6 is IPv6-only clusters, but ideally we'd support 
> binding to both the IPv4 and IPv6 machine addresses so that we can support 
> heterogenous clusters (some dual-stack and some IPv6-only machines.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11521) Make connection timeout configurable in s3a

2015-12-22 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068977#comment-15068977
 ] 

Mingliang Liu commented on HADOOP-11521:


Is {{fs.s3a.connection.establish.timeout}} in milliseconds or seconds? Is the 
default value 5000 or 5?

Would you kindly comment on [HADOOP-12671]? Thanks.
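
For reference, the AWS SDK side of these settings takes milliseconds: both 
{{ClientConfiguration}} setters are millisecond-based. A sketch of how an 
s3a-style client wires the keys through (key names as quoted in HADOOP-12671; 
the default values below are placeholders, since the correct defaults are the 
open question):

{code}
import com.amazonaws.ClientConfiguration;
import org.apache.hadoop.conf.Configuration;

public class S3ATimeoutWiring {
  static final String ESTABLISH_TIMEOUT = "fs.s3a.connection.establish.timeout";
  static final String SOCKET_TIMEOUT = "fs.s3a.connection.timeout";
  // Placeholder defaults, in milliseconds.
  static final int DEFAULT_ESTABLISH_TIMEOUT = 5000;
  static final int DEFAULT_SOCKET_TIMEOUT = 5000;

  static ClientConfiguration awsClientConf(Configuration conf) {
    ClientConfiguration aws = new ClientConfiguration();
    // Both SDK setters take milliseconds, which is why the "seconds"
    // comments quoted in the JIRA are misleading.
    aws.setConnectionTimeout(conf.getInt(ESTABLISH_TIMEOUT, DEFAULT_ESTABLISH_TIMEOUT));
    aws.setSocketTimeout(conf.getInt(SOCKET_TIMEOUT, DEFAULT_SOCKET_TIMEOUT));
    return aws;
  }
}
{code}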

> Make connection timeout configurable in s3a 
> 
>
> Key: HADOOP-11521
> URL: https://issues.apache.org/jira/browse/HADOOP-11521
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.6.0
>Reporter: Thomas Demoor
>Assignee: Thomas Demoor
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-11521-002.patch, HADOOP-11521.001.patch
>
>
> Currently in s3a, only the socket timeout is configurable, i.e. how long to 
> wait before an existing connection is declared dead. The aws sdk has a 
> separate timeout for establishing a connection. This patch introduces a 
> config option in s3a to pass this on. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12671) Inconsistent configuration values and incorrect comments

2015-12-22 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068978#comment-15068978
 ] 

Mingliang Liu commented on HADOOP-12671:


I saw your previous academic work. Do you have any tool to find these 
misconfigurations automatically? Just curious...

> Inconsistent configuration values and incorrect comments
> 
>
> Key: HADOOP-12671
> URL: https://issues.apache.org/jira/browse/HADOOP-12671
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf, documentation, fs/s3
>Affects Versions: 2.7.1, 2.6.2
>Reporter: Tianyin Xu
> Attachments: 0001-fix-the-errors-in-default-configs.patch
>
>
> The following values in [core-default.xml | 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml]
>  are wrong. 
> {{fs.s3a.multipart.purge.age}}
> {{fs.s3a.connection.timeout}}
> {{fs.s3a.connection.establish.timeout}}
> \\
> \\
> *1. {{fs.s3a.multipart.purge.age}}*
> (in both {{2.6.2}} and {{2.7.1}})
> In [core-default.xml | 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{86400}} ({{24}} hours), while in the code it is {{14400}} 
> ({{4}} hours).
> \\
> \\
> *2. {{fs.s3a.connection.timeout}}*
> (only appears in {{2.6.2}})
> In [core-default.xml (2.6.2) | 
> https://hadoop.apache.org/docs/r2.6.2/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{5000}}, while in the code it is {{5}}.
> {code}
>   // seconds until we give up on a connection to s3
>   public static final String SOCKET_TIMEOUT = "fs.s3a.connection.timeout";
>   public static final int DEFAULT_SOCKET_TIMEOUT = 5;
> {code}
> \\
> *3. {{fs.s3a.connection.establish.timeout}}*
> (only appears in {{2.7.1}})
> In [core-default.xml (2.7.1)| 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{5000}}, while in the code it is {{5}}.
> {code}
>   // seconds until we give up trying to establish a connection to s3
>   public static final String ESTABLISH_TIMEOUT = 
> "fs.s3a.connection.establish.timeout";
>   public static final int DEFAULT_ESTABLISH_TIMEOUT = 5;
> {code}
> \\
> btw, the code comments are wrong! The two parameters are in the unit of 
> *milliseconds* instead of *seconds*...
> {code}
> -  // seconds until we give up on a connection to s3
> +  // milliseconds until we give up on a connection to s3
> ...
> -  // seconds until we give up trying to establish a connection to s3
> +  // milliseconds until we give up trying to establish a connection to s3
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12671) Inconsistent configuration values and incorrect comments

2015-12-22 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-12671:
---
Status: Patch Available  (was: Open)

> Inconsistent configuration values and incorrect comments
> 
>
> Key: HADOOP-12671
> URL: https://issues.apache.org/jira/browse/HADOOP-12671
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf, documentation, fs/s3
>Affects Versions: 2.6.2, 2.7.1
>Reporter: Tianyin Xu
>Assignee: Tianyin Xu
> Attachments: HADOOP-12671.000.patch
>
>
> The following values in [core-default.xml | 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml]
>  are wrong. 
> {{fs.s3a.multipart.purge.age}}
> {{fs.s3a.connection.timeout}}
> {{fs.s3a.connection.establish.timeout}}
> \\
> \\
> *1. {{fs.s3a.multipart.purge.age}}*
> (in both {{2.6.2}} and {{2.7.1}})
> In [core-default.xml | 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{86400}} ({{24}} hours), while in the code it is {{14400}} 
> ({{4}} hours).
> \\
> \\
> *2. {{fs.s3a.connection.timeout}}*
> (only appears in {{2.6.2}})
> In [core-default.xml (2.6.2) | 
> https://hadoop.apache.org/docs/r2.6.2/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{5000}}, while in the code it is {{5}}.
> {code}
>   // seconds until we give up on a connection to s3
>   public static final String SOCKET_TIMEOUT = "fs.s3a.connection.timeout";
>   public static final int DEFAULT_SOCKET_TIMEOUT = 5;
> {code}
> \\
> *3. {{fs.s3a.connection.establish.timeout}}*
> (only appears in {{2.7.1}})
> In [core-default.xml (2.7.1)| 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{5000}}, while in the code it is {{5}}.
> {code}
>   // seconds until we give up trying to establish a connection to s3
>   public static final String ESTABLISH_TIMEOUT = 
> "fs.s3a.connection.establish.timeout";
>   public static final int DEFAULT_ESTABLISH_TIMEOUT = 5;
> {code}
> \\
> btw, the code comments are wrong! The two parameters are in the unit of 
> *milliseconds* instead of *seconds*...
> {code}
> -  // seconds until we give up on a connection to s3
> +  // milliseconds until we give up on a connection to s3
> ...
> -  // seconds until we give up trying to establish a connection to s3
> +  // milliseconds until we give up trying to establish a connection to s3
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12671) Inconsistent configuration values and incorrect comments

2015-12-22 Thread Tianyin Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069005#comment-15069005
 ] 

Tianyin Xu commented on HADOOP-12671:
-

Thank you so much, [~liuml07]! I attached the new patch, which works for both 
trunk and branch-2. 
Yes... these are found automatically (but not by my previous work :-S)... I 
was bitten by an issue caused by such an inconsistency in my cluster (as I 
usually only read the docs)... then I wrote scripts and found a tremendous 
number of configs whose documented values are not the values really used (so 
surprised)... I actually have more to report and fix... but I don't know how 
best to report them. I guess if I report too much at once, nobody is going to 
look at it. So I decided to report them sub-component by sub-component. Are you 
in charge of all the configs of hadoop-common, Mingliang? 
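
As an illustration of the kind of script described above, one can load the 
bundled {{core-default.xml}} on its own and diff it against the in-code 
defaults. A minimal sketch (the key and the in-code value are the ones from 
this JIRA; everything else is illustrative):

{code}
import org.apache.hadoop.conf.Configuration;

public class DefaultsAudit {
  public static void main(String[] args) {
    // Load only the bundled defaults, without any site overrides.
    Configuration docs = new Configuration(false);
    docs.addResource("core-default.xml");

    // In-code default as quoted in this JIRA (14400, documented as 86400).
    check(docs, "fs.s3a.multipart.purge.age", 14400L);
  }

  static void check(Configuration docs, String key, long codeDefault) {
    String documented = docs.get(key);
    if (documented == null) {
      System.out.printf("UNDOCUMENTED %s (code default %d)%n", key, codeDefault);
    } else if (Long.parseLong(documented.trim()) != codeDefault) {
      System.out.printf("MISMATCH %s: docs=%s code=%d%n", key, documented, codeDefault);
    }
  }
}
{code}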


> Inconsistent configuration values and incorrect comments
> 
>
> Key: HADOOP-12671
> URL: https://issues.apache.org/jira/browse/HADOOP-12671
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf, documentation, fs/s3
>Affects Versions: 2.7.1, 2.6.2
>Reporter: Tianyin Xu
>Assignee: Tianyin Xu
> Attachments: HADOOP-12671.000.patch
>
>
> The following values in [core-default.xml | 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml]
>  are wrong. 
> {{fs.s3a.multipart.purge.age}}
> {{fs.s3a.connection.timeout}}
> {{fs.s3a.connection.establish.timeout}}
> \\
> \\
> *1. {{fs.s3a.multipart.purge.age}}*
> (in both {{2.6.2}} and {{2.7.1}})
> In [core-default.xml | 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{86400}} ({{24}} hours), while in the code it is {{14400}} 
> ({{4}} hours).
> \\
> \\
> *2. {{fs.s3a.connection.timeout}}*
> (only appears in {{2.6.2}})
> In [core-default.xml (2.6.2) | 
> https://hadoop.apache.org/docs/r2.6.2/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{5000}}, while in the code it is {{5}}.
> {code}
>   // seconds until we give up on a connection to s3
>   public static final String SOCKET_TIMEOUT = "fs.s3a.connection.timeout";
>   public static final int DEFAULT_SOCKET_TIMEOUT = 5;
> {code}
> \\
> *3. {{fs.s3a.connection.establish.timeout}}*
> (only appears in {{2.7.1}})
> In [core-default.xml (2.7.1)| 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{5000}}, while in the code it is {{5}}.
> {code}
>   // seconds until we give up trying to establish a connection to s3
>   public static final String ESTABLISH_TIMEOUT = 
> "fs.s3a.connection.establish.timeout";
>   public static final int DEFAULT_ESTABLISH_TIMEOUT = 5;
> {code}
> \\
> btw, the code comments are wrong! The two parameters are in the unit of 
> *milliseconds* instead of *seconds*...
> {code}
> -  // seconds until we give up on a connection to s3
> +  // milliseconds until we give up on a connection to s3
> ...
> -  // seconds until we give up trying to establish a connection to s3
> +  // milliseconds until we give up trying to establish a connection to s3
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12671) Inconsistent configuration values and incorrect comments

2015-12-22 Thread Tianyin Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069015#comment-15069015
 ] 

Tianyin Xu commented on HADOOP-12671:
-

Thanks a lot for the help, [~liuml07]! Truly respect "innocent contributors" :P

> Inconsistent configuration values and incorrect comments
> 
>
> Key: HADOOP-12671
> URL: https://issues.apache.org/jira/browse/HADOOP-12671
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf, documentation, fs/s3
>Affects Versions: 2.7.1, 2.6.2
>Reporter: Tianyin Xu
>Assignee: Tianyin Xu
> Attachments: HADOOP-12671.000.patch
>
>
> The following values in [core-default.xml | 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml]
>  are wrong. 
> {{fs.s3a.multipart.purge.age}}
> {{fs.s3a.connection.timeout}}
> {{fs.s3a.connection.establish.timeout}}
> \\
> \\
> *1. {{fs.s3a.multipart.purge.age}}*
> (in both {{2.6.2}} and {{2.7.1}})
> In [core-default.xml | 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{86400}} ({{24}} hours), while in the code it is {{14400}} 
> ({{4}} hours).
> \\
> \\
> *2. {{fs.s3a.connection.timeout}}*
> (only appears in {{2.6.2}})
> In [core-default.xml (2.6.2) | 
> https://hadoop.apache.org/docs/r2.6.2/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{5000}}, while in the code it is {{5}}.
> {code}
>   // seconds until we give up on a connection to s3
>   public static final String SOCKET_TIMEOUT = "fs.s3a.connection.timeout";
>   public static final int DEFAULT_SOCKET_TIMEOUT = 5;
> {code}
> \\
> *3. {{fs.s3a.connection.establish.timeout}}*
> (only appears in {{2.7.1}})
> In [core-default.xml (2.7.1)| 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{5000}}, while in the code it is {{5}}.
> {code}
>   // seconds until we give up trying to establish a connection to s3
>   public static final String ESTABLISH_TIMEOUT = 
> "fs.s3a.connection.establish.timeout";
>   public static final int DEFAULT_ESTABLISH_TIMEOUT = 5;
> {code}
> \\
> btw, the code comments are wrong! The two parameters are in the unit of 
> *milliseconds* instead of *seconds*...
> {code}
> -  // seconds until we give up on a connection to s3
> +  // milliseconds until we give up on a connection to s3
> ...
> -  // seconds until we give up trying to establish a connection to s3
> +  // milliseconds until we give up trying to establish a connection to s3
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12670) Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 only

2015-12-22 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HADOOP-12670:
---
Attachment: HADOOP-12670-HADOOP-11890.2.patch

Does docker not work on branches?

> Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 only
> -
>
> Key: HADOOP-12670
> URL: https://issues.apache.org/jira/browse/HADOOP-12670
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: net
>Affects Versions: HADOOP-11890
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12670-HADOOP-11890.0.patch, 
> HADOOP-12670-HADOOP-11890.2.patch
>
>
> {code}
>   TestSecurityUtil.testBuildTokenServiceSockAddr:165 
> expected:<[127.0.0.]1:123> but was:<[0:0:0:0:0:0:0:]1:123>
>   TestSecurityUtil.testBuildDTServiceName:148 expected:<[127.0.0.]1:123> but 
> was:<[0:0:0:0:0:0:0:]1:123>
>   
> TestSecurityUtil.testSocketAddrWithName:326->verifyServiceAddr:304->verifyAddress:284->verifyValues:251
>  expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
>   
> TestSecurityUtil.testSocketAddrWithIP:333->verifyServiceAddr:304->verifyAddress:284->verifyValues:251
>  expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
>   
> TestSecurityUtil.testSocketAddrWithNameToStaticName:340->verifyServiceAddr:304->verifyAddress:284->verifyValues:251
>  expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
>   TestNetUtils.testNormalizeHostName:639 expected:<[0:0:0:0:0:0:0:]1> but 
> was:<[127.0.0.]1>
>   TestNetUtils.testResolverLoopback:533->verifyInetAddress:496 
> expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12658) Javadoc needs minor update in DomainSocket

2015-12-22 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069077#comment-15069077
 ] 

Kai Zheng commented on HADOOP-12658:


Sorry, but I would probably handle this together with HDFS-8562. If that goes 
in, this will be marked as a duplicate; otherwise I will move on with this one.

> Javadoc needs minor update in DomainSocket
> --
>
> Key: HADOOP-12658
> URL: https://issues.apache.org/jira/browse/HADOOP-12658
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>Priority: Trivial
> Attachments: HADOOP-12658-v1.patch, HADOOP-12658-v2.patch
>
>
> It was noticed that the Javadoc needs a minor update in {{DomainSocket}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9680) Extend S3FS and S3NativeFS to work with AWS IAM Temporary Security Credentials

2015-12-22 Thread Brendan Maguire (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068615#comment-15068615
 ] 

Brendan Maguire commented on HADOOP-9680:
-

I see this is still open. Is there still no way to use S3 from Hadoop using 
temporary credentials?
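
For reference while this remains open, wiring the session properties listed in 
the description below into a job configuration would look roughly like the 
following (property names are taken from the patch description; stock releases 
without this patch ignore the token properties):

{code}
import org.apache.hadoop.conf.Configuration;

public class S3nTempCreds {
  static Configuration withSessionCredentials(Configuration conf,
      String accessKeyId, String secretAccessKey, String sessionToken) {
    // Temporary credentials issued by AWS IAM (e.g. via STS); only honored
    // by builds that carry the HADOOP-9680 patch.
    conf.set("fs.s3n.awsAccessKeyId", accessKeyId);
    conf.set("fs.s3n.awsSecretAccessKey", secretAccessKey);
    conf.set("fs.s3n.awsSessionToken", sessionToken);
    conf.set("fs.s3n.awsTokenFriendlyName", "temporary-session");
    return conf;
  }
}
{code}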

> Extend S3FS and S3NativeFS to work with AWS IAM Temporary Security Credentials
> --
>
> Key: HADOOP-9680
> URL: https://issues.apache.org/jira/browse/HADOOP-9680
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.1.0-beta, 3.0.0
>Reporter: Robert Gibbon
>Priority: Minor
> Attachments: s3fs-temp-iam-creds.diff.patch
>
>
> Here is a patch in unified diff format to enable Amazon Web Services IAM 
> Temporary Security Credentials secured interactions with S3 from Hadoop.
> It bumps the JetS3t release version up to 0.9.0.
> To use a temporary security credential set, you need to provide the following 
> properties, depending on the implementation (s3 or s3native):
> fs.s3.awsAccessKeyId or fs.s3n.awsAccessKeyId - the temporary access key id 
> issued by AWS IAM
> fs.s3.awsSecretAccessKey or fs.s3n.awsSecretAccessKey - the temporary secret 
> access key issued by AWS IAM
> fs.s3.awsSessionToken or fs.s3n.awsSessionToken - the session ticket issued 
> by AWS IAM along with the temporary key
> fs.s3.awsTokenFriendlyName or fs.s3n.awsTokenFriendlyName - any string



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11582) org.apache.hadoop.net.TestDNS failing with NumberFormatException -IPv6 related?

2015-12-22 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark resolved HADOOP-11582.

Resolution: Not A Problem

This was fixed in a different jira.

> org.apache.hadoop.net.TestDNS failing with NumberFormatException -IPv6 
> related?
> ---
>
> Key: HADOOP-11582
> URL: https://issues.apache.org/jira/browse/HADOOP-11582
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: net
>Affects Versions: 3.0.0
> Environment: OSX yosemite
>Reporter: Steve Loughran
>Assignee: Varun Saxena
>  Labels: ipv6
>
> {{org.apache.hadoop.net.TestDNS}} failing {{java.lang.NumberFormatException: 
> For input string: ":3246:9aff:fe80:438f"}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12671) Inconsistent configuration values and incorrect comments

2015-12-22 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068913#comment-15068913
 ] 

Mingliang Liu commented on HADOOP-12671:


Thanks for reporting this. Do you mind preparing a patch and assigning this 
JIRA to yourself? I can do some review work after that.

> Inconsistent configuration values and incorrect comments
> 
>
> Key: HADOOP-12671
> URL: https://issues.apache.org/jira/browse/HADOOP-12671
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf, documentation, fs/s3
>Affects Versions: 2.7.1, 2.6.2
>Reporter: Tianyin Xu
>
> The following values in [core-default.xml | 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml]
>  are wrong. 
> {{fs.s3a.multipart.purge.age}}
> {{fs.s3a.connection.timeout}}
> {{fs.s3a.connection.establish.timeout}}
> \\
> \\
> *1. {{fs.s3a.multipart.purge.age}}*
> (in both {{2.6.2}} and {{2.7.1}})
> In [core-default.xml | 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{86400}} ({{24}} hours), while in the code it is {{14400}} 
> ({{4}} hours).
> \\
> \\
> *2. {{fs.s3a.connection.timeout}}*
> (only appears in {{2.6.2}})
> In [core-default.xml (2.6.2) | 
> https://hadoop.apache.org/docs/r2.6.2/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{5000}}, while in the code it is {{5}}.
> {code}
>   // seconds until we give up on a connection to s3
>   public static final String SOCKET_TIMEOUT = "fs.s3a.connection.timeout";
>   public static final int DEFAULT_SOCKET_TIMEOUT = 5;
> {code}
> \\
> *3. {{fs.s3a.connection.establish.timeout}}*
> (only appears in {{2.7.1}})
> In [core-default.xml (2.7.1)| 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{5000}}, while in the code it is {{5}}.
> {code}
>   // seconds until we give up trying to establish a connection to s3
>   public static final String ESTABLISH_TIMEOUT = 
> "fs.s3a.connection.establish.timeout";
>   public static final int DEFAULT_ESTABLISH_TIMEOUT = 5;
> {code}
> \\
> btw, the code comments are wrong! The two parameters are in the unit of 
> *milliseconds* instead of *seconds*...
> {code}
> -  // seconds until we give up on a connection to s3
> +  // milliseconds until we give up on a connection to s3
> ...
> -  // seconds until we give up trying to establish a connection to s3
> +  // milliseconds until we give up trying to establish a connection to s3
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12671) Inconsistent configuration values and incorrect comments

2015-12-22 Thread Tianyin Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068918#comment-15068918
 ] 

Tianyin Xu commented on HADOOP-12671:
-

Sure. Thanks, [~liuml07]. So the patch should target the main branch?

> Inconsistent configuration values and incorrect comments
> 
>
> Key: HADOOP-12671
> URL: https://issues.apache.org/jira/browse/HADOOP-12671
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf, documentation, fs/s3
>Affects Versions: 2.7.1, 2.6.2
>Reporter: Tianyin Xu
>
> The following values in [core-default.xml | 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml]
>  are wrong. 
> {{fs.s3a.multipart.purge.age}}
> {{fs.s3a.connection.timeout}}
> {{fs.s3a.connection.establish.timeout}}
> \\
> \\
> *1. {{fs.s3a.multipart.purge.age}}*
> (in both {{2.6.2}} and {{2.7.1}})
> In [core-default.xml | 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{86400}} ({{24}} hours), while in the code it is {{14400}} 
> ({{4}} hours).
> \\
> \\
> *2. {{fs.s3a.connection.timeout}}*
> (only appears in {{2.6.2}})
> In [core-default.xml (2.6.2) | 
> https://hadoop.apache.org/docs/r2.6.2/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{5000}}, while in the code it is {{5}}.
> {code}
>   // seconds until we give up on a connection to s3
>   public static final String SOCKET_TIMEOUT = "fs.s3a.connection.timeout";
>   public static final int DEFAULT_SOCKET_TIMEOUT = 5;
> {code}
> \\
> *3. {{fs.s3a.connection.establish.timeout}}*
> (only appears in {{2.7.1}})
> In [core-default.xml (2.7.1)| 
> https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/core-default.xml],
>  the value is {{5000}}, while in the code it is {{5}}.
> {code}
>   // seconds until we give up trying to establish a connection to s3
>   public static final String ESTABLISH_TIMEOUT = 
> "fs.s3a.connection.establish.timeout";
>   public static final int DEFAULT_ESTABLISH_TIMEOUT = 50000;
> {code}
> \\
> By the way, the code comments are wrong: the two parameters are in units of 
> *milliseconds*, not *seconds*.
> {code}
> -  // seconds until we give up on a connection to s3
> +  // milliseconds until we give up on a connection to s3
> ...
> -  // seconds until we give up trying to establish a connection to s3
> +  // milliseconds until we give up trying to establish a connection to s3
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12559) KMS connection failures should trigger TGT renewal

2015-12-22 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HADOOP-12559:
---
Attachment: HADOOP-12559.05.patch

Thanks for the helpful discussion, Xiaoyu. I don't think it's easy to bypass 
the KDC limitation and efficiently emulate a short TGT lifetime, so, following 
the suggestion, I have removed the unit test in the v05 patch. Good catch that 
we should use {{actualUgi}} when renewing the TGT.

I've verified with the following test code (in the context of {{TestKMS}}):
{code}
  @Test
  public void testTGTRenewal() throws Exception {
    tearDownMiniKdc();
    Properties kdcConf = MiniKdc.createConf();
    kdcConf.setProperty(MiniKdc.MAX_TICKET_LIFETIME, "36");
    setUpMiniKdc(kdcConf);

    Configuration conf = new Configuration();
    conf.set("hadoop.security.authentication", "kerberos");
    UserGroupInformation.setConfiguration(conf);
    final File testDir = getTestDir();
    conf = createBaseKMSConf(testDir);
    conf.set("hadoop.kms.authentication.type", "kerberos");
    conf.set("hadoop.kms.authentication.kerberos.keytab",
        keytab.getAbsolutePath());
    conf.set("hadoop.kms.authentication.kerberos.principal", "HTTP/localhost");
    conf.set("hadoop.kms.authentication.kerberos.name.rules", "DEFAULT");

    final String keyA = "key_a";
    final String keyD = "key_d";
    conf.set(KeyAuthorizationKeyProvider.KEY_ACL + keyA + ".ALL", "*");
    conf.set(KeyAuthorizationKeyProvider.KEY_ACL + keyD + ".ALL", "*");

    writeConf(testDir, conf);

    runServer(null, null, testDir, new KMSCallable() {
      @Override
      public Void call() throws Exception {
        final Configuration conf = new Configuration();
        conf.setInt(KeyProvider.DEFAULT_BITLENGTH_NAME, 64);
        final URI uri = createKMSUri(getKMSUrl());
        UserGroupInformation.
            loginUserFromKeytab("client", keytab.getAbsolutePath());
        try {
          KeyProvider kp = createProvider(uri, conf);
          Thread.sleep(36);
          kp.getKeys();
        } catch (Exception ex) {
          String errMsg = ex.getMessage();
          System.out.println(errMsg);
          if (errMsg.contains("Failed to find any Kerberos tgt")) {
            Assert.fail("TGT expired");
          }
        }
        return null;
      }
    });
  }
{code}

The test passes with the patch, but fails without it, with the same complaint 
that Harsh commented above:
{code}
org.apache.hadoop.security.authentication.client.AuthenticationException: 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)
{code}
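
For reference, the shape of the fix, as I understand it from the discussion above, is roughly the following (a minimal sketch, not the actual patch; the retry wrapper is hypothetical, while {{actualUgi}} and {{checkTGTAndReloginFromKeytab}} are the names already discussed):

{code}
// Minimal sketch (hypothetical wrapper, not the actual patch): on an
// authentication failure, force a TGT check/relogin from the keytab on
// the UGI that actually holds the Kerberos credentials (actualUgi, per
// the proxy-user discussion), then retry the KMS operation once.
// Assumes java.util.concurrent.Callable.
private <T> T callWithTGTRetry(Callable<T> op) throws Exception {
  try {
    return op.call();
  } catch (IOException e) {
    actualUgi.checkTGTAndReloginFromKeytab();
    return op.call();
  }
}
{code}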

> KMS connection failures should trigger TGT renewal
> --
>
> Key: HADOOP-12559
> URL: https://issues.apache.org/jira/browse/HADOOP-12559
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HADOOP-12559.00.patch, HADOOP-12559.01.patch, 
> HADOOP-12559.02.patch, HADOOP-12559.03.patch, HADOOP-12559.04.patch, 
> HADOOP-12559.05.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12665) Document hadoop.security.token.service.use_ip

2015-12-22 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068774#comment-15068774
 ] 

Kihwal Lee commented on HADOOP-12665:
-

[~daryn], you might want to give some input.

> Document hadoop.security.token.service.use_ip
> -
>
> Key: HADOOP-12665
> URL: https://issues.apache.org/jira/browse/HADOOP-12665
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Anu Engineer
>
> {{hadoop.security.token.service.use_ip}} is not documented in 2.x/trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12667) s3a: Support createNonRecursive API

2015-12-22 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068790#comment-15068790
 ] 

Chris Nauroth commented on HADOOP-12667:


I'm OK with proceeding.  I just wanted to make sure you were aware that a 
non-atomic implementation will not be sufficient to protect against data loss 
in HBase.

> s3a: Support createNonRecursive API
> ---
>
> Key: HADOOP-12667
> URL: https://issues.apache.org/jira/browse/HADOOP-12667
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-12667.001.patch
>
>
> HBase and other clients rely on the createNonRecursive API, which was 
> recently un-deprecated. S3A currently does not support it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12667) s3a: Support createNonRecursive API

2015-12-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068706#comment-15068706
 ] 

Steve Loughran commented on HADOOP-12667:
-

it's not S3 consistency semantics; it's the S3 API vs. HDFS's subset of POSIX: 
whether things like rename() and rm() are atomic and O(1).

see the object store bit under: 
[http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/introduction.html]


> s3a: Support createNonRecursive API
> ---
>
> Key: HADOOP-12667
> URL: https://issues.apache.org/jira/browse/HADOOP-12667
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-12667.001.patch
>
>
> HBase and other clients rely on the createNonRecursive API, which was 
> recently un-deprecated. S3A currently does not support it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12663) Remove Hard-Coded Values From FileSystem.java

2015-12-22 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068466#comment-15068466
 ] 

Arpit Agarwal commented on HADOOP-12663:


Hi [~belugabehr], thanks for the updated patch. We can replace the wildcard 
{{import static}} with individual imports. The changes look fine otherwise. 

Please consider submitting a single patch so we can get a Jenkins run. If you 
want to fix usages in other files, that would be great too.
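
For illustration, the individual static imports suggested above would look something like this (a sketch; the two constants are the ones named in the issue description):

{code}
import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_DEFAULT;
import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_KEY;

// ... call sites can then drop the class prefix:
int bufferSize = conf.getInt(IO_FILE_BUFFER_SIZE_KEY, IO_FILE_BUFFER_SIZE_DEFAULT);
{code}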

> Remove Hard-Coded Values From FileSystem.java
> -
>
> Key: HADOOP-12663
> URL: https://issues.apache.org/jira/browse/HADOOP-12663
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.1
>Reporter: BELUGA BEHR
>Priority: Trivial
> Attachments: FileSystem.HADOOP-12663.0002.patch, 
> FileSystem.HADOOP-12663.patch
>
>
> Within FileSystem.java, there is one instance where the global variables 
> "CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_KEY" and 
> "CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_DEFAULT" were being used, 
> but in all other instances, their literal values were being used.
> Please find attached a patch to remove use of literal values and instead 
> replace them with references to the global variables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12663) Remove Hard-Coded Values From FileSystem.java

2015-12-22 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-12663:
-
Attachment: (was: FileSystem.HADOOP-12663.0002.patch)

> Remove Hard-Coded Values From FileSystem.java
> -
>
> Key: HADOOP-12663
> URL: https://issues.apache.org/jira/browse/HADOOP-12663
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.1
>Reporter: BELUGA BEHR
>Priority: Trivial
> Attachments: FileSystem.HADOOP-12663.0002.patch, 
> FileSystem.HADOOP-12663.patch
>
>
> Within FileSystem.java, there is one instance where the global variables 
> "CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_KEY" and 
> "CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_DEFAULT" were being used, 
> but in all other instances, their literal values were being used.
> Please find attached a patch to remove use of literal values and instead 
> replace them with references to the global variables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12663) Remove Hard-Coded Values From FileSystem.java

2015-12-22 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-12663:
-
Attachment: FileSystem.HADOOP-12663.0002.patch

> Remove Hard-Coded Values From FileSystem.java
> -
>
> Key: HADOOP-12663
> URL: https://issues.apache.org/jira/browse/HADOOP-12663
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.1
>Reporter: BELUGA BEHR
>Priority: Trivial
> Attachments: FileSystem.HADOOP-12663.0002.patch, 
> FileSystem.HADOOP-12663.patch
>
>
> Within FileSystem.java, there is one instance where the global variables 
> "CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_KEY" and 
> "CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_DEFAULT" were being used, 
> but in all other instances, their literal values were being used.
> Please find attached a patch to remove use of literal values and instead 
> replace them with references to the global variables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12663) Remove Hard-Coded Values From FileSystem.java

2015-12-22 Thread BELUGA BEHR (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068492#comment-15068492
 ] 

BELUGA BEHR commented on HADOOP-12663:
--

Thank you for your interest.  I have attached a new patch.  It still has the 
wildcard import; going forward, I will remove it.

I will perform the work for the rest of the instances in another, more 
exhaustive ticket.  Thanks.

> Remove Hard-Coded Values From FileSystem.java
> -
>
> Key: HADOOP-12663
> URL: https://issues.apache.org/jira/browse/HADOOP-12663
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.1
>Reporter: BELUGA BEHR
>Priority: Trivial
> Attachments: FileSystem.HADOOP-12663.0002.patch, 
> FileSystem.HADOOP-12663.patch
>
>
> Within FileSystem.java, there is one instance where the global variables 
> "CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_KEY" and 
> "CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_DEFAULT" were being used, 
> but in all other instances, their literal values were being used.
> Please find attached a patch to remove use of literal values and instead 
> replace them with references to the global variables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11252) RPC client does not time out by default

2015-12-22 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-11252:
--
Attachment: HADOOP-11252.003.patch

Thanks, [~andrew.wang]. I updated the patch.

> RPC client does not time out by default
> ---
>
> Key: HADOOP-11252
> URL: https://issues.apache.org/jira/browse/HADOOP-11252
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.5.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Critical
> Attachments: HADOOP-11252.002.patch, HADOOP-11252.003.patch, 
> HADOOP-11252.patch
>
>
> The RPC client has a default timeout set to 0 when no timeout is passed in. 
> This means that the network connection created will not time out when used to 
> write data. The issue has shown up in YARN-2578 and HDFS-4858. Timeouts for 
> writes then fall back to the TCP-level retry (configured via tcp_retries2) 
> and time out after 15-30 minutes, which is too long for a default 
> behaviour.
> Using 0 as the default value for timeout is incorrect. We should use a sane 
> value for the timeout and the "ipc.ping.interval" configuration value is a 
> logical choice for it. The default behaviour should be changed from 0 to the 
> value read for the ping interval from the Configuration.
> Fixing it in common makes more sense than finding and changing all other 
> points in the code that do not pass in a timeout.
> Offending code lines:
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java#L488
> and 
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java#L350
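
For context, the two existing keys involved look roughly like this when read on the client side (a sketch; the defaults shown are my understanding of the long-standing values, and the proposal is that the ping interval also serve as the default RPC timeout when callers pass none):

{code}
// Sketch: the existing IPC knobs this issue ties together.
Configuration conf = new Configuration();
// Whether to send a ping when the socket read times out (default: true).
boolean doPing = conf.getBoolean("ipc.client.ping", true);
// Milliseconds between pings; proposed to double as the default RPC
// timeout when no explicit timeout is passed in (assumed default 60000).
int pingIntervalMs = conf.getInt("ipc.ping.interval", 60000);
{code}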



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-12672) Split RPC timeout from IPC ping

2015-12-22 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki reassigned HADOOP-12672:
-

Assignee: Masatake Iwasaki

> Split RPC timeout from IPC ping
> ---
>
> Key: HADOOP-12672
> URL: https://issues.apache.org/jira/browse/HADOOP-12672
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11252) RPC client does not time out by default

2015-12-22 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069131#comment-15069131
 ] 

Masatake Iwasaki commented on HADOOP-11252:
---

bq. I think we should track this in separate jira. 

I agree; I filed HADOOP-12672 as a follow-up.

> RPC client does not time out by default
> ---
>
> Key: HADOOP-11252
> URL: https://issues.apache.org/jira/browse/HADOOP-11252
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.5.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Critical
> Attachments: HADOOP-11252.002.patch, HADOOP-11252.003.patch, 
> HADOOP-11252.patch
>
>
> The RPC client has a default timeout set to 0 when no timeout is passed in. 
> This means that the network connection created will not time out when used to 
> write data. The issue has shown up in YARN-2578 and HDFS-4858. Timeouts for 
> writes then fall back to the TCP-level retry (configured via tcp_retries2) 
> and time out after 15-30 minutes, which is too long for a default 
> behaviour.
> Using 0 as the default value for timeout is incorrect. We should use a sane 
> value for the timeout and the "ipc.ping.interval" configuration value is a 
> logical choice for it. The default behaviour should be changed from 0 to the 
> value read for the ping interval from the Configuration.
> Fixing it in common makes more sense than finding and changing all other 
> points in the code that do not pass in a timeout.
> Offending code lines:
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java#L488
> and 
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java#L350



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12672) Split RPC timeout from IPC ping

2015-12-22 Thread Masatake Iwasaki (JIRA)
Masatake Iwasaki created HADOOP-12672:
-

 Summary: Split RPC timeout from IPC ping
 Key: HADOOP-12672
 URL: https://issues.apache.org/jira/browse/HADOOP-12672
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Masatake Iwasaki






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11996) Native erasure coder facilities based on ISA-L

2015-12-22 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11996:
---
Attachment: HADOOP-11996-v6.patch

Rebased the patch. 

> Native erasure coder facilities based on ISA-L
> --
>
> Key: HADOOP-11996
> URL: https://issues.apache.org/jira/browse/HADOOP-11996
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11996-initial.patch, HADOOP-11996-v2.patch, 
> HADOOP-11996-v3.patch, HADOOP-11996-v4.patch, HADOOP-11996-v5.patch, 
> HADOOP-11996-v6.patch
>
>
> While working on HADOOP-11540 etc., it was found useful to write the basic 
> facilities based on the Intel ISA-L library separately from the JNI stuff. 
> It's also easier to debug and troubleshoot, as no JNI or Java pieces are 
> involved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12666) Support Windows Azure Data Lake - as a file system in Hadoop

2015-12-22 Thread vishwajeet dusane (JIRA)
vishwajeet dusane created HADOOP-12666:
--

 Summary: Support Windows Azure Data Lake - as a file system in 
Hadoop
 Key: HADOOP-12666
 URL: https://issues.apache.org/jira/browse/HADOOP-12666
 Project: Hadoop Common
  Issue Type: New Feature
  Components: tools
Reporter: vishwajeet dusane


h2. Description
This JIRA describes a new file system implementation for accessing Windows 
Azure Data Lake Store (ADL) from within Hadoop. This would enable existing 
Hadoop applications such as MR, Hive, HBase, etc., to use the ADL store as 
input or output.
 
ADL is an ultra-high-capacity store, optimized for massive throughput, with 
rich management and security features. More details are available at 
https://azure.microsoft.com/en-us/services/data-lake-store/

h2. High level design
ADL file system exposes RESTful interfaces compatible with WebHdfs 
specification 2.7.1.
At a high level, the code here extends the SWebHdfsFileSystem class to provide 
an implementation for accessing ADL storage; the scheme ADL is used for 
accessing it over HTTPS. We use the URI scheme:
{code}adl:///path/to/file{code} 
to address individual Files/Folders. Tests are implemented mostly using a 
Contract implementation for the ADL functionality, with an option to test 
against a real ADL storage if configured.

h2. Credits and history
This has been ongoing work for a while, and the early version of this work can 
be seen in. Credit for this work goes to the team: [~vishwajeet.dusane], 
[~snayak], [~srevanka], [~kiranch], [~chakrab], [~omkarksa], [~snvijaya], 
[~ansaiprasanna]  [~jsangwan]

h2. Test
Besides Contract tests, we have used ADL as the additional file system in the 
current public preview release. Various different customer and test workloads 
have been run against clusters with such configurations for quite some time. 
The current version reflects the code tested and used in our production 
environment.
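
For context, client-side usage would presumably look like any other Hadoop file system once the scheme is registered (a hypothetical sketch; the account host in the URI is a placeholder):

{code}
// Hypothetical usage sketch: ADL addressed through the standard
// FileSystem API, assuming the adl:// scheme maps to the new impl.
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(URI.create("adl://my-account-host/"), conf);
for (FileStatus status : fs.listStatus(new Path("/path/to/dir"))) {
  System.out.println(status.getPath());
}
{code}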



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12559) KMS connection failures should trigger TGT renewal

2015-12-22 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067770#comment-15067770
 ] 

Xiaoyu Yao commented on HADOOP-12559:
-

Thanks [~zhz] for updating the patch with the additional analysis. I agree 
that this patch can handle the case where the current user is authenticated by 
KERBEROS with its Kerberos principal available in the keytab but not in the 
TGT cache (not logged in, or expired). However, I think the currentUgi below 
should be actualUgi to handle the proxy-user case.

{code}
currentUgi.checkTGTAndReloginFromKeytab();
{code}

The original comment I made is on a different use case, where the currentUser 
is authenticated by TOKEN, e.g., a user token passed from distcp mappers to an 
HDFS datanode when using webhdfs + KMS. When the DN talks to KMS with the user 
token, it won't be able to do SPNEGO-based authentication. The additional 
UGI#checkTGTAndReloginFromKeytab in KMSClientProvider will be a no-op in this 
case, as the token-based user won't have its Kerberos principal in the local 
keytab or TGT cache, and it fails later in doSpnego with a similar stack 
trace. I will open a separate JIRA for that.

Regarding simulating Kerberos ticket timeout, I can do that with 'kinit -l' on 
an MIT KDC as shown below. The issue seems to be a limitation of 
org.apache.directory.server.kerberos.kdc.KdcServer used by MiniKdc. If there 
is no obvious solution for that, I'm fine without a unit test, as long as we 
comment on this JIRA about the validation that has been done before commit.

{code}
[ambari-qa@c6402 vagrant]$ kinit -l 1m -kt 
/etc/security/keytabs/smokeuser.headless.keytab ambari-qa-hd...@example.com
[ambari-qa@c6402 vagrant]$ klist
Ticket cache: FILE:/tmp/krb5cc_1001
Default principal: ambari-qa-hd...@example.com

Valid starting     Expires            Service principal
12/22/15 08:41:04  12/22/15 08:42:04  krbtgt/example@example.com
renew until 12/22/15 08:41:04
{code}

> KMS connection failures should trigger TGT renewal
> --
>
> Key: HADOOP-12559
> URL: https://issues.apache.org/jira/browse/HADOOP-12559
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HADOOP-12559.00.patch, HADOOP-12559.01.patch, 
> HADOOP-12559.02.patch, HADOOP-12559.03.patch, HADOOP-12559.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12662) The build should fail if a -Dbundle option fails

2015-12-22 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068056#comment-15068056
 ] 

Kai Zheng commented on HADOOP-12662:


bq. we should pull all of this shell code out and put it into a script in 
dev-support ...
I will try it and see the effect. It would be much cleaner for the 
{{pom.xml}}; these script snippets look bad in my IDE. :)

> The build should fail if a -Dbundle option fails
> 
>
> Key: HADOOP-12662
> URL: https://issues.apache.org/jira/browse/HADOOP-12662
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12662-v1.patch, HADOOP-12662-v2.patch
>
>
> Per some discussion with [~cmccabe], it would be good to refine and make it 
> consistent the behaviors in bundling native libraries when building dist 
> package.
> For all native libraries to bundle, if the bundling option like 
> {{-Dbundle.snappy}} is specified, then the lib option like {{-Dsnappy.lib}} 
> will be checked and ensured to be there, but if not, it will then report 
> error and fail the building explicitly.
> {{BUILDING.txt}} would also be updated to explicitly state this behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12649) Improve UGI diagnostics and failure handling

2015-12-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068030#comment-15068030
 ] 

Steve Loughran commented on HADOOP-12649:
-

diagnostics should look up {{hadoop.kerberos.kinit.command}}, verify it is 
present, and list its details
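
Something along these lines, perhaps (a hypothetical sketch of such a diagnostic; only {{hadoop.kerberos.kinit.command}} is an existing key, everything else is illustrative):

{code}
// Hypothetical diagnostic sketch: resolve and report the kinit command.
String kinit = conf.get("hadoop.kerberos.kinit.command", "kinit");
File kinitFile = new File(kinit);
System.out.println("kinit command: " + kinit
    + (kinitFile.isAbsolute()
        ? (kinitFile.exists() ? " (exists)" : " (MISSING)")
        : " (relative; resolved via PATH)"));
{code}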

> Improve UGI diagnostics and failure handling
> 
>
> Key: HADOOP-12649
> URL: https://issues.apache.org/jira/browse/HADOOP-12649
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.1
> Environment: Kerberos
>Reporter: Steve Loughran
>
> Sometimes, apparently, some people cannot get Kerberos to work.
> The ability to diagnose problems here is hampered by some aspects of UGI
> # the only way to turn on JAAS debug information is through an env var, not 
> within the JVM
> # failures are potentially underlogged
> # exceptions raised are generic IOEs, so can't be trapped and filtered
> # failure handling on the TGT renewer thread is nonexistent
> # the code is a barely-readable, underdocumented mess.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11828) Implement the Hitchhiker erasure coding algorithm

2015-12-22 Thread jack liuquan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067995#comment-15067995
 ] 

jack liuquan commented on HADOOP-11828:
---

Hi Kai,
Thank you for your review.
I will fix the code per your comments.

> Implement the Hitchhiker erasure coding algorithm
> -
>
> Key: HADOOP-11828
> URL: https://issues.apache.org/jira/browse/HADOOP-11828
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: jack liuquan
> Attachments: 7715-hitchhikerXOR-v2-testcode.patch, 
> 7715-hitchhikerXOR-v2.patch, HADOOP-11828-hitchhikerXOR-V3.patch, 
> HADOOP-11828-hitchhikerXOR-V4.patch, HADOOP-11828-hitchhikerXOR-V5.patch, 
> HDFS-7715-hhxor-decoder.patch, HDFS-7715-hhxor-encoder.patch
>
>
> [Hitchhiker | 
> http://www.eecs.berkeley.edu/~nihar/publications/Hitchhiker_SIGCOMM14.pdf] is 
> a new erasure coding algorithm developed as a research project at UC 
> Berkeley. It has been shown to reduce network traffic and disk I/O by 25%-45% 
> during data reconstruction. This JIRA aims to introduce Hitchhiker to the 
> HDFS-EC framework, as one of the pluggable codec algorithms.
> The existing implementation is based on HDFS-RAID. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12663) Remove Hard-Coded Values From FileSystem.java

2015-12-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068024#comment-15068024
 ] 

Steve Loughran commented on HADOOP-12663:
-

consider static-importing the keys and then referring to them without the prefix

> Remove Hard-Coded Values From FileSystem.java
> -
>
> Key: HADOOP-12663
> URL: https://issues.apache.org/jira/browse/HADOOP-12663
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.1
>Reporter: BELUGA BEHR
>Priority: Trivial
> Attachments: FileSystem.HADOOP-12663.patch
>
>
> Within FileSystem.java, there is one instance where the global variables 
> "CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_KEY" and 
> "CommonConfigurationKeysPublic.IO_FILE_BUFFER_SIZE_DEFAULT" were being used, 
> but in all other instances, their literal values were being used.
> Please find attached a patch to remove use of literal values and instead 
> replace them with references to the global variables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12666) Support Windows Azure Data Lake - as a file system in Hadoop

2015-12-22 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12666:

Target Version/s: 2.9.0  (was: 2.8.0)
 Component/s: fs
  fs/azure

> Support Windows Azure Data Lake - as a file system in Hadoop
> 
>
> Key: HADOOP-12666
> URL: https://issues.apache.org/jira/browse/HADOOP-12666
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> h2. Description
> This JIRA describes a new file system implementation for accessing Windows 
> Azure Data Lake Store (ADL) from within Hadoop. This would enable existing 
> Hadoop applications such as MR, Hive, HBase, etc., to use the ADL store as 
> input or output.
>  
> ADL is an ultra-high-capacity store, optimized for massive throughput, with 
> rich management and security features. More details are available at 
> https://azure.microsoft.com/en-us/services/data-lake-store/
> h2. High level design
> ADL file system exposes RESTful interfaces compatible with WebHdfs 
> specification 2.7.1.
> At a high level, the code here extends the SWebHdfsFileSystem class to 
> provide an implementation for accessing ADL storage; the scheme ADL is used 
> for accessing it over HTTPS. We use the URI scheme:
> {code}adl:///path/to/file{code} 
> to address individual Files/Folders. Tests are implemented mostly using a 
> Contract implementation for the ADL functionality, with an option to test 
> against a real ADL storage if configured.
> h2. Credits and history
> This has been ongoing work for a while, and the early version of this work 
> can be seen in. Credit for this work goes to the team: [~vishwajeet.dusane], 
> [~snayak], [~srevanka], [~kiranch], [~chakrab], [~omkarksa], [~snvijaya], 
> [~ansaiprasanna]  [~jsangwan]
> h2. Test
> Besides Contract tests, we have used ADL as the additional file system in the 
> current public preview release. Various different customer and test workloads 
> have been run against clusters with such configurations for quite some time. 
> The current version reflects the code tested and used in our production 
> environment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12667) s3a: Support createNonRecursive API

2015-12-22 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-12667:
---
Attachment: HADOOP-12667.001.patch

Attaching a patch with tests I've run against multiple endpoints. s3a's 
create() is already not recursive because it just converts the path into a 
key, and I suspect this API is only used directly when you WANT it to fail if 
the parent directory doesn't exist (as opposed to using it because you don't 
think recursion is necessary), so I explicitly fail if the parent of the 
supplied path does not exist.
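
Conceptually the check described above is something like this (a sketch of the approach, not the exact patch; the signature is the un-deprecated {{FileSystem#createNonRecursive}} overload):

{code}
// Sketch: fail fast when the parent is absent, then delegate to the
// ordinary create(), which in s3a maps the path straight to a key.
// Note: as discussed below, exists-then-create is NOT atomic.
public FSDataOutputStream createNonRecursive(Path f, FsPermission permission,
    EnumSet<CreateFlag> flags, int bufferSize, short replication,
    long blockSize, Progressable progress) throws IOException {
  Path parent = f.getParent();
  if (parent != null && !exists(parent)) {
    throw new FileNotFoundException("Parent directory does not exist: " + parent);
  }
  return create(f, permission, flags.contains(CreateFlag.OVERWRITE),
      bufferSize, replication, blockSize, progress);
}
{code}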

> s3a: Support createNonRecursive API
> ---
>
> Key: HADOOP-12667
> URL: https://issues.apache.org/jira/browse/HADOOP-12667
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-12667.001.patch
>
>
> HBase and other clients rely on the createNonRecursive API, which was 
> recently un-deprecated. S3A currently does not support it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11252) RPC client does not time out by default

2015-12-22 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068352#comment-15068352
 ] 

Andrew Wang commented on HADOOP-11252:
--

LGTM besides a few nits, +1 pending:

* Unnecessary whitespace change in CommonConfigurationKeys
* ipc.client.ping description in core-default.xml, I think you meant "a byte" 
rather than "byte"

Thanks [~iwasakims]!

> RPC client does not time out by default
> ---
>
> Key: HADOOP-11252
> URL: https://issues.apache.org/jira/browse/HADOOP-11252
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.5.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Critical
> Attachments: HADOOP-11252.002.patch, HADOOP-11252.patch
>
>
> The RPC client has a default timeout set to 0 when no timeout is passed in. 
> This means that the network connection created will not time out when used to 
> write data. The issue has shown up in YARN-2578 and HDFS-4858. Timeouts for 
> writes then fall back to the TCP-level retry (configured via tcp_retries2) 
> and time out after 15-30 minutes, which is too long for a default 
> behaviour.
> Using 0 as the default value for timeout is incorrect. We should use a sane 
> value for the timeout and the "ipc.ping.interval" configuration value is a 
> logical choice for it. The default behaviour should be changed from 0 to the 
> value read for the ping interval from the Configuration.
> Fixing it in common makes more sense than finding and changing all other 
> points in the code that do not pass in a timeout.
> Offending code lines:
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java#L488
> and 
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java#L350



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12667) s3a: Support createNonRecursive API

2015-12-22 Thread Sean Mackrory (JIRA)
Sean Mackrory created HADOOP-12667:
--

 Summary: s3a: Support createNonRecursive API
 Key: HADOOP-12667
 URL: https://issues.apache.org/jira/browse/HADOOP-12667
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Reporter: Sean Mackrory
Assignee: Sean Mackrory


HBase and other clients rely on the createNonRecursive API, which was recently 
un-deprecated. S3A currently does not support it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12662) The build should fail if a -Dbundle option fails

2015-12-22 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068430#comment-15068430
 ] 

Colin Patrick McCabe commented on HADOOP-12662:
---

Yeah, I think it's reasonable to split this script out into a separate file.

> The build should fail if a -Dbundle option fails
> 
>
> Key: HADOOP-12662
> URL: https://issues.apache.org/jira/browse/HADOOP-12662
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12662-v1.patch, HADOOP-12662-v2.patch
>
>
> Per some discussion with [~cmccabe], it would be good to refine and make it 
> consistent the behaviors in bundling native libraries when building dist 
> package.
> For all native libraries to bundle, if the bundling option like 
> {{-Dbundle.snappy}} is specified, then the lib option like {{-Dsnappy.lib}} 
> will be checked and ensured to be there, but if not, it will then report 
> error and fail the building explicitly.
> {{BUILDING.txt}} would also be updated to explicitly state this behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12668) Modify HDFS embedded jetty server logic in HttpServer2.java to exclude weak Ciphers through ssl-server.conf

2015-12-22 Thread Vijay Singh (JIRA)
Vijay Singh created HADOOP-12668:


 Summary: Modify HDFS embedded jetty server logic in 
HttpServer2.java to exclude weak Ciphers through ssl-server.conf
 Key: HADOOP-12668
 URL: https://issues.apache.org/jira/browse/HADOOP-12668
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.7.1
Reporter: Vijay Singh
Assignee: Vijay Singh
Priority: Critical
 Fix For: 2.7.2


Currently the embedded Jetty server used across all Hadoop services is 
configured through the ssl-server.xml file from each service's respective 
configuration section. However, the SSL/TLS protocol used by these Jetty 
servers can be downgraded to weak cipher suites. This change aims to add the 
following functionality:
1) Add logic in hadoop-common (HttpServer2.java and associated interfaces) to 
spawn Jetty servers with the ability to exclude weak cipher suites. I propose 
we make this configurable through ssl-server.xml, so each service can choose 
to disable specific ciphers.
2) Modify DFSUtil.java, used by HDFS code, to supply the new parameter 
ssl.server.exclude.cipher.list to the hadoop-common code, so it can exclude 
the ciphers supplied through this key.
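
A sketch of what the hadoop-common side could look like (hypothetical; {{ssl.server.exclude.cipher.list}} is the key proposed here, and {{sslConnector}} stands for the Jetty SSL connector that HttpServer2 builds):

{code}
// Hypothetical sketch: read the proposed key from ssl-server.xml and
// hand the excluded suites to the Jetty SSL connector.
Configuration sslConf = new Configuration(false);
sslConf.addResource("ssl-server.xml");
String excluded = sslConf.get("ssl.server.exclude.cipher.list", "");
if (!excluded.isEmpty()) {
  sslConnector.setExcludeCipherSuites(excluded.split(","));
}
{code}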



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12667) s3a: Support createNonRecursive API

2015-12-22 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068449#comment-15068449
 ] 

Chris Nauroth commented on HADOOP-12667:


If the goal is to support HBase's WAL management, then another aspect is that 
HBase expects atomicity from this call.  Atomicity would guarantee that for 2 
concurrent threads/processes calling {{createNonRecursive}}, one of the callers 
succeeds and the other fails.  HBASE-11045 has some great discussion of the 
expected semantics, especially this comment:

https://issues.apache.org/jira/browse/HBASE-11045?focusedCommentId=13977198=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13977198

This means that an implementation that just checks for existence before 
allowing the create to proceed is insufficient.  There would be a race 
condition allowing 2 concurrent callers to observe that the path does not 
exist and both proceed with their creates.

In hadoop-azure, this problem is solved by making concurrent callers acquire a 
mutually exclusive lease on the blob before doing the create.  Blob leases are 
a feature of Azure Storage.  I'm not aware of any lease functionality like this 
available in S3.

> s3a: Support createNonRecursive API
> ---
>
> Key: HADOOP-12667
> URL: https://issues.apache.org/jira/browse/HADOOP-12667
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-12667.001.patch
>
>
> HBase and other clients rely on the createNonRecursive API, which was 
> recently un-deprecated. S3A currently does not support it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-12665) Document hadoop.security.token.service.use_ip

2015-12-22 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reassigned HADOOP-12665:
-

Assignee: Anu Engineer

> Document hadoop.security.token.service.use_ip
> -
>
> Key: HADOOP-12665
> URL: https://issues.apache.org/jira/browse/HADOOP-12665
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Anu Engineer
>
> {{hadoop.security.token.service.use_ip}} is not documented in 2.x/trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12473) distcp's ignoring failures option should be mutually exclusive with the atomic option

2015-12-22 Thread Mithun Radhakrishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068624#comment-15068624
 ] 

Mithun Radhakrishnan commented on HADOOP-12473:
---

[~jira.shegalov], that is an interesting take. Hmm.

Between you and me, I think no one should be using {{-i}} at all, in atomic 
copies or otherwise. It was included to be backward compatible with DistCpV1, 
for those with an inexplicable tolerance for bad data. :]

{{-atomic}} was added so that users have the choice of staging their copies to 
a temp-location, before atomically moving them to the target location. I 
guessed there might be users who'd want to stage data before moving them, but 
could also tolerate bad copies. But I do see your point of view.

{{-i}} could be useful to work around annoying copy errors. For instance, there 
was a time when {{-skipCrc}} wouldn't work correctly, and copying files with 
different block-sizes (or empty files) would result in CRC failures. {{-i}} 
would let workflows complete while DistCp was being fixed. Removing it makes 
that workaround unavailable when {{-atomic}} is used.

I'm on the fence here, but leaning in your direction. I'd be happy to go along 
if you could get another "Aye!" from a committer. Paging [~jlowe] and [~daryn].



> distcp's ignoring failures option should be mutually exclusive with the 
> atomic option
> -
>
> Key: HADOOP-12473
> URL: https://issues.apache.org/jira/browse/HADOOP-12473
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.7.1
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
>
> In {{CopyMapper::handleFailure}}, the mapper handles a failure and will 
> ignore it if its config key is on. The ignore-failures option {{-i}} should 
> be mutually exclusive with the {{-atomic}} option; otherwise an incomplete 
> dir is eligible for commit, defeating the purpose.
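
If the options are made mutually exclusive, the guard would be a one-liner in option validation, something like this (a hypothetical sketch; the method names on the options object are illustrative):

{code}
// Hypothetical sketch of the validation this issue asks for.
if (options.shouldIgnoreFailures() && options.shouldAtomicCommit()) {
  throw new IllegalArgumentException(
      "-i (ignore failures) cannot be used with -atomic: an incomplete "
      + "directory could otherwise be committed.");
}
{code}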



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)