[jira] [Created] (HADOOP-13740) Add to MapFile.Merger Options for target file

2016-10-19 Thread VITALIY SAVCHENKO (JIRA)
VITALIY SAVCHENKO created HADOOP-13740:
--

 Summary: Add to MapFile.Merger Options for target file
 Key: HADOOP-13740
 URL: https://issues.apache.org/jira/browse/HADOOP-13740
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: VITALIY SAVCHENKO


MapFile.Merger should accept Options for the target file, for example a 
compression option (or others) for the output file. Currently this is not 
possible.
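For illustration, a minimal sketch against the current API (assuming 
MapFile.Merger as shipped in branch-2); the options overload shown in the 
comments is hypothetical, not an existing method:

{code}
// Today the merge target is given only as a Path, so writer options such as
// compression cannot be passed for the output file.
Configuration conf = new Configuration();
MapFile.Merger merger = new MapFile.Merger(conf);
Path[] inputs = { new Path("/maps/part-0"), new Path("/maps/part-1") };
merger.merge(inputs, false, new Path("/maps/merged"));

// Desired (hypothetical) overload: accept SequenceFile.Writer options for
// the target, e.g.:
// merger.merge(inputs, false, new Path("/maps/merged"),
//     SequenceFile.Writer.compression(SequenceFile.CompressionType.BLOCK));
{code}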



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Created] (HADOOP-13739) Add the ability to check - is a file opened for writing

2016-10-19 Thread VITALIY SAVCHENKO (JIRA)
VITALIY SAVCHENKO created HADOOP-13739:
--

 Summary: Add the ability to check - is a file opened for writing
 Key: HADOOP-13739
 URL: https://issues.apache.org/jira/browse/HADOOP-13739
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: VITALIY SAVCHENKO


I want to merge several MapFiles into one via MapFile.Merger, but first I 
want to be sure that none of the files is currently open for writing.
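For illustration, a hedged sketch of one possible pre-merge check on HDFS 
(assumptions: the files live on HDFS, and isFileClosed from 
org.apache.hadoop.hdfs.DistributedFileSystem is available; it is not part of 
the generic FileSystem API, and the path is illustrative):

{code}
// Verify that a MapFile's data file is no longer open for writing before
// merging it.
FileSystem fs = FileSystem.get(new Configuration());
if (fs instanceof DistributedFileSystem) {
    DistributedFileSystem dfs = (DistributedFileSystem) fs;
    Path data = new Path("/20161020/map1/data");
    if (!dfs.isFileClosed(data)) {
        throw new IOException(data + " is still open for writing");
    }
}
{code}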



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Commented] (HADOOP-13725) Open MapFile for append

2016-10-19 Thread VITALIY SAVCHENKO (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590692#comment-15590692
 ] 

VITALIY SAVCHENKO commented on HADOOP-13725:


{code}
// 1. Writing a new MapFile: appends must be in key order.
MapFile.Writer w = null;
try {
    w = new MapFile.Writer(
            new Configuration(),
            new Path("hdfs://192.168.56.101:9000/20161020/data"),
            MapFile.Writer.keyClass(IntWritable.class),
            MapFile.Writer.valueClass(IntWritable.class));
    w.append(new IntWritable(0), new IntWritable(100));
    w.append(new IntWritable(10), new IntWritable(200));
    w.append(new IntWritable(5), new IntWritable(400));
    // java.io.IOException: key out of order: 5 after 10
} catch (Exception ex) {
    ex.printStackTrace();
} finally {
    if (w != null) {
        w.close();
    }
}
MapFile.Reader reader = new MapFile.Reader(
        new Path("hdfs://192.168.56.101:9000/20161020/data"),
        new Configuration());
System.out.println(reader.get(new IntWritable(0), new IntWritable()));
// prints 100
System.out.println(reader.get(new IntWritable(10), new IntWritable()));
// prints 200 - MapFile correct
reader.close();

// 2. Open MapFile for append.
MapFile.Writer w = null;
try {
    w = new MapFile.Writer(
            new Configuration(),
            new Path("hdfs://192.168.56.101:9000/20161020/data"),
            MapFile.Writer.keyClass(IntWritable.class),
            MapFile.Writer.valueClass(IntWritable.class));
    w.append(new IntWritable(0), new IntWritable(100));
    w.append(new IntWritable(10), new IntWritable(200));
    w.close();

    w = new MapFile.Writer(
            new Configuration(),
            new Path("hdfs://192.168.56.101:9000/20161020/data"),
            SequenceFile.Writer.appendIfExists(true), // append to existing MapFile
            SequenceFile.Writer.replication((short) 2),
            MapFile.Writer.keyClass(IntWritable.class),
            MapFile.Writer.valueClass(IntWritable.class));
    w.append(new IntWritable(20), new IntWritable(300));
    w.append(new IntWritable(30), new IntWritable(400));
} catch (Exception ex) {
    ex.printStackTrace();
} finally {
    if (w != null) {
        w.close();
    }
}
MapFile.Reader reader = new MapFile.Reader(
        new Path("hdfs://192.168.56.101:9000/20161020/data"),
        new Configuration());
System.out.println(reader.get(new IntWritable(10), new IntWritable()));
// prints 200
System.out.println(reader.get(new IntWritable(20), new IntWritable()));
// prints 300 - MapFile correct
reader.close();

// 3. Append to an existing MapFile, but with an out-of-order key.
MapFile.Writer w = null;
try {
    w = new MapFile.Writer(
            new Configuration(),
            new Path("hdfs://192.168.56.101:9000/20161020/data"),
            MapFile.Writer.keyClass(IntWritable.class),
            MapFile.Writer.valueClass(IntWritable.class));
    w.append(new IntWritable(10), new IntWritable(100));
    w.append(new IntWritable(20), new IntWritable(200));
    w.close();

    w = new MapFile.Writer(
            new Configuration(),
            new Path("hdfs://192.168.56.101:9000/20161020/data"),
            SequenceFile.Writer.appendIfExists(true), // append to MapFile
            MapFile.Writer.keyClass(IntWritable.class),
            MapFile.Writer.valueClass(IntWritable.class));
    w.append(new IntWritable(5), new IntWritable(300)); // no exception here
    w.append(new IntWritable(10), new IntWritable(400));
} catch (Exception ex) {
    ex.printStackTrace();
} finally {
    if (w != null) {
        w.close();
    }
}
MapFile.Reader reader = new MapFile.Reader(
        new Path("hdfs://192.168.56.101:9000/20161020/data"),
        new Configuration());
System.out.println(reader.get(new IntWritable(5), new IntWritable()));
// java.io.IOException: key out of order: 5 after 10 - MapFile corrupted
System.out.println(reader.get(new IntWritable(10), new IntWritable()));
System.out.println(reader.get(new IntWritable(20), new IntWritable()));
reader.close();
{code}

Reason: when a MapFile is opened with the option 
SequenceFile.Writer.appendIfExists(true), the writer does not read the last 
key from the existing MapFile, so it cannot enforce key ordering across the 
append boundary.
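For illustration, a hedged client-side guard until this is fixed (a sketch 
only, assuming MapFile.Reader#finalKey; paths and key values follow the 
snippets above):

{code}
// Read the last key of the existing MapFile and refuse out-of-order appends,
// since the appending writer itself does not perform this check.
Configuration conf = new Configuration();
Path dir = new Path("hdfs://192.168.56.101:9000/20161020/data");

IntWritable lastKey = new IntWritable();
MapFile.Reader r = new MapFile.Reader(dir, conf);
r.finalKey(lastKey); // reads the final (largest) key recorded in the file
r.close();

MapFile.Writer w = new MapFile.Writer(conf, dir,
        SequenceFile.Writer.appendIfExists(true),
        MapFile.Writer.keyClass(IntWritable.class),
        MapFile.Writer.valueClass(IntWritable.class));
IntWritable newKey = new IntWritable(5);
if (newKey.compareTo(lastKey) <= 0) {
    w.close();
    throw new IOException("key out of order: " + newKey + " after " + lastKey);
}
w.append(newKey, new IntWritable(300));
w.close();
{code}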


> Open MapFile for append
> ---
>
> Key: HADOOP-13725
> URL: https://issues.apache.org/jira/browse/HADOOP-13725
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: VITALIY SAVCHENKO
>
> I think it possible 

[jira] [Commented] (HADOOP-12082) Support multiple authentication schemes via AuthenticationFilter

2016-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590673#comment-15590673
 ] 

Hadoop QA commented on HADOOP-12082:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 2s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
9s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
38s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
32s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
41s{color} | {color:green} branch-2 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
8s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
43s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
30s{color} | {color:green} root: The patch generated 0 new + 151 unchanged - 6 
fixed = 151 total (was 157) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 47 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
15s{color} | {color:green} hadoop-project in the patch passed with JDK 
v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
23s{color} | {color:green} hadoop-auth in the patch passed with 

[jira] [Commented] (HADOOP-12082) Support multiple authentication schemes via AuthenticationFilter

2016-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590666#comment-15590666
 ] 

Hadoop QA commented on HADOOP-12082:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
30s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
13s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
17s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
10s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
41s{color} | {color:green} branch-2.8 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
21s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
28s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
28s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 11s{color} | {color:orange} root: The patch generated 2 new + 138 unchanged 
- 5 fixed = 140 total (was 143) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 47 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
12s{color} | {color:green} hadoop-project in the patch passed with JDK 
v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
16s{color} | {color:green} hadoop-auth in 

[jira] [Commented] (HADOOP-10075) Update jetty dependency to version 9

2016-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590653#comment-15590653
 ] 

Hadoop QA commented on HADOOP-10075:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 75 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  9m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 5s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-mapreduce-project/hadoop-mapreduce-client hadoop-client . 
hadoop-mapreduce-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-common-project/hadoop-kms in trunk has 2 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
34s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
57s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  7s{color} | {color:orange} root: The patch generated 25 new + 2508 
unchanged - 71 fixed = 2533 total (was 2579) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 582 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m 
16s{color} | {color:red} The patch has 4278 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
30s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-mapreduce-project/hadoop-mapreduce-client hadoop-client . 
hadoop-mapreduce-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
37s{color} | {color:red} hadoop-maven-plugins generated 1 new + 0 unchanged - 0 
fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
24s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m 38s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 15m 
30s{color} | {color:red} The patch generated 6 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}256m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-maven-plugins |
|  |  Exceptional return value of java.io.File.mkdirs() ignored in 
org.apache.hadoop.maven.plugin.resourcegz.ResourceGzMojo$GZConsumer.accept(Path)
  At ResourceGzMojo.java:ignored in 

[jira] [Work stopped] (HADOOP-13364) Variable HADOOP_LIBEXEC_DIR must be quoted in bin/hadoop line 26

2016-10-19 Thread Yulei Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-13364 stopped by Yulei Li.
-
> Variable HADOOP_LIBEXEC_DIR must be quoted in bin/hadoop line 26
> 
>
> Key: HADOOP-13364
> URL: https://issues.apache.org/jira/browse/HADOOP-13364
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.6.4
> Environment: Linux, Unix, Mac machines with spaces in file paths
>Reporter: Jeffrey McAteer
>Assignee: Yulei Li
>  Labels: script
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> Upon a standard download, untarring, and execution of 
> './hadoop-2.6.4/bin/hadoop version', I received: './hadoop-2.6.4/bin/hadoop: 
> line 26: /Users/jeffrey/Projects/Hadoop: No such file or directory'
> My project directory was called 'Hadoop Playground', with a space in it. Upon 
> investigating, I found that line 26 held:
> . $HADOOP_LIBEXEC_DIR/hadoop-config.sh
> This means the unquoted variable $HADOOP_LIBEXEC_DIR is split into multiple 
> arguments when the path contains a space. The solution is to quote the 
> variable, like so:
> . "$HADOOP_LIBEXEC_DIR/hadoop-config.sh"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Work started] (HADOOP-13364) Variable HADOOP_LIBEXEC_DIR must be quoted in bin/hadoop line 26

2016-10-19 Thread Yulei Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-13364 started by Yulei Li.
-
> Variable HADOOP_LIBEXEC_DIR must be quoted in bin/hadoop line 26
> 
>
> Key: HADOOP-13364
> URL: https://issues.apache.org/jira/browse/HADOOP-13364
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.6.4
> Environment: Linux, Unix, Mac machines with spaces in file paths
>Reporter: Jeffrey McAteer
>Assignee: Yulei Li
>  Labels: script
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> Upon a standard download, untarring, and execution of 
> './hadoop-2.6.4/bin/hadoop version', I received: './hadoop-2.6.4/bin/hadoop: 
> line 26: /Users/jeffrey/Projects/Hadoop: No such file or directory'
> My project directory was called 'Hadoop Playground', with a space in it. Upon 
> investigating, I found that line 26 held:
> . $HADOOP_LIBEXEC_DIR/hadoop-config.sh
> This means the unquoted variable $HADOOP_LIBEXEC_DIR is split into multiple 
> arguments when the path contains a space. The solution is to quote the 
> variable, like so:
> . "$HADOOP_LIBEXEC_DIR/hadoop-config.sh"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Updated] (HADOOP-13734) ListStatus Returns Incorrect Result for Empty File on Swift

2016-10-19 Thread Kevin Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Huang updated HADOOP-13734:
-
Summary: ListStatus Returns Incorrect Result for Empty File on Swift  (was: 
ListStatus Returns Incorrect Result for Blank File on swift)

> ListStatus Returns Incorrect Result for Empty File on Swift
> ---
>
> Key: HADOOP-13734
> URL: https://issues.apache.org/jira/browse/HADOOP-13734
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/swift
>Reporter: Kevin Huang
>
> Steps to reproduce:
> 1. Create an empty file on Swift via a Swift client (e.g. Cyberduck).
> 2. Use the Hadoop Swift API to get the file status, for example:
> {code}
> Configuration hadoopConf = new Configuration();
> hadoopConf.addResource("swift-site.xml"); // Set Swift configurations
> FileSystem fs = FileSystem.get(
>     new URI("swift://containername.myprovider/"), hadoopConf);
> FileStatus[] statuses = fs.listStatus(new Path("/mydir"));
> for (FileStatus status : statuses) {
>     System.out.println(status);
> }
> {code}
> Result:
> {code}
> SwiftFileStatus{ path=swift://bdd-edp.bddcs/mydir/emptyfile; 
> isDirectory=true; length=0; blocksize=33554432; 
> modification_time=1476875293230}
> {code}
> The API treated the empty file as a directory, which is incorrect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Updated] (HADOOP-13734) ListStatus Returns Incorrect Result for Blank File on swift

2016-10-19 Thread Kevin Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Huang updated HADOOP-13734:
-
Description: 
Steps to reproduce:
1. Create an empty file on Swift via a Swift client (e.g. Cyberduck).
2. Use the Hadoop Swift API to get the file status, for example:
{code}
Configuration hadoopConf = new Configuration();
hadoopConf.addResource("swift-site.xml"); // Set Swift configurations

FileSystem fs = FileSystem.get(
    new URI("swift://containername.myprovider/"), hadoopConf);
FileStatus[] statuses = fs.listStatus(new Path("/mydir"));
for (FileStatus status : statuses) {
    System.out.println(status);
}
{code}
Result:
{code}
SwiftFileStatus{ path=swift://bdd-edp.bddcs/mydir/emptyfile; isDirectory=true; 
length=0; blocksize=33554432; modification_time=1476875293230}
{code}

The API treated the empty file as a directory, which is incorrect.

  was:
Steps to reproduce:
1. Create a blank file on Swift via a Swift client (e.g. Cyberduck).
2. Use the Hadoop Swift API to get the file status, for example:
{code}
Configuration hadoopConf = new Configuration();
hadoopConf.addResource("swift-site.xml"); // Set Swift configurations

FileSystem fs = FileSystem.get(
    new URI("swift://containername.myprovider/"), hadoopConf);
FileStatus[] statuses = fs.listStatus(new Path("/mydir"));
for (FileStatus status : statuses) {
    System.out.println(status);
}
{code}
Result:
{code}
SwiftFileStatus{ path=swift://bdd-edp.bddcs/mydir/blankfile; isDirectory=true; 
length=0; blocksize=33554432; modification_time=1476875293230}
{code}

The API treated blankfile as a directory, which is incorrect.


> ListStatus Returns Incorrect Result for Blank File on swift
> ---
>
> Key: HADOOP-13734
> URL: https://issues.apache.org/jira/browse/HADOOP-13734
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/swift
>Reporter: Kevin Huang
>
> Steps to reproduce:
> 1. Create an empty file on Swift via a Swift client (e.g. Cyberduck).
> 2. Use the Hadoop Swift API to get the file status, for example:
> {code}
> Configuration hadoopConf = new Configuration();
> hadoopConf.addResource("swift-site.xml"); // Set Swift configurations
> FileSystem fs = FileSystem.get(
>     new URI("swift://containername.myprovider/"), hadoopConf);
> FileStatus[] statuses = fs.listStatus(new Path("/mydir"));
> for (FileStatus status : statuses) {
>     System.out.println(status);
> }
> {code}
> Result:
> {code}
> SwiftFileStatus{ path=swift://bdd-edp.bddcs/mydir/emptyfile; 
> isDirectory=true; length=0; blocksize=33554432; 
> modification_time=1476875293230}
> {code}
> The API treated the empty file as a directory, which is incorrect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Updated] (HADOOP-12082) Support multiple authentication schemes via AuthenticationFilter

2016-10-19 Thread Hrishikesh Gadre (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre updated HADOOP-12082:
--
Attachment: HADOOP-12082-branch-2-001.patch

Here is a patch against branch-2 fixing the whitespace errors.

> Support multiple authentication schemes via AuthenticationFilter
> 
>
> Key: HADOOP-12082
> URL: https://issues.apache.org/jira/browse/HADOOP-12082
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
> Attachments: HADOOP-12082-001.patch, HADOOP-12082-002.patch, 
> HADOOP-12082-003.patch, HADOOP-12082-004.patch, HADOOP-12082-005.patch, 
> HADOOP-12082-006.patch, HADOOP-12082-branch-2-001.patch, 
> HADOOP-12082-branch-2.8-001.patch, HADOOP-12082-branch-2.8.patch, 
> HADOOP-12082-branch-2.patch, HADOOP-12082.patch, hadoop-ldap-auth-v2.patch, 
> hadoop-ldap-auth-v3.patch, hadoop-ldap-auth-v4.patch, 
> hadoop-ldap-auth-v5.patch, hadoop-ldap-auth-v6.patch, hadoop-ldap.patch, 
> multi-scheme-auth-support-poc.patch
>
>
> The requirement is to support an LDAP-based authentication scheme via the 
> Hadoop AuthenticationFilter. HADOOP-9054 added support for plugging in a 
> custom authentication scheme (in addition to Kerberos) via the 
> AltKerberosAuthenticationHandler class, but it selects the authentication 
> mechanism based on the User-Agent HTTP header, which does not conform to 
> HTTP protocol semantics.
> As per [RFC-2616|http://www.w3.org/Protocols/rfc2616/rfc2616.html]:
> - The HTTP protocol provides a simple challenge-response authentication 
> mechanism that can be used by a server to challenge a client request and by 
> a client to provide the necessary authentication information.
> - This mechanism is initiated by the server sending a 401 (Unauthorized) 
> response with a ‘WWW-Authenticate’ header which includes at least one 
> challenge indicating the authentication scheme(s) and parameters applicable 
> to the Request-URI.
> - If the server supports multiple authentication schemes, it may return 
> multiple challenges with a 401 (Unauthorized) response, and each challenge 
> may use a different auth-scheme.
> - A user agent MUST choose to use the strongest auth-scheme it understands 
> and request credentials from the user based upon that challenge.
> The existing Hadoop authentication filter implementation supports the 
> Kerberos authentication scheme and uses ‘Negotiate’ as the challenge in the 
> ‘WWW-Authenticate’ response header. As per the following documentation, the 
> ‘Negotiate’ challenge scheme is only applicable to the Kerberos (and Windows 
> NTLM) authentication schemes.
> [SPNEGO-based Kerberos and NTLM HTTP 
> Authentication|http://tools.ietf.org/html/rfc4559]
> [Understanding HTTP 
> Authentication|https://msdn.microsoft.com/en-us/library/ms789031%28v=vs.110%29.aspx]
> For LDAP authentication, on the other hand, the ‘Basic’ authentication 
> scheme is typically used (note that TLS is mandatory with the Basic scheme).
> http://httpd.apache.org/docs/trunk/mod/mod_authnz_ldap.html
> Hence, for this feature, the idea would be to provide a custom 
> implementation of the Hadoop AuthenticationHandler and Authenticator 
> interfaces that supports both schemes: Kerberos (via the Negotiate auth 
> challenge) and LDAP (via the Basic auth challenge). During the 
> authentication phase, it would send both challenges and let the client pick 
> the appropriate one. If the client responds with an ‘Authorization’ header 
> tagged ‘Negotiate’, it will use Kerberos authentication; if the client 
> responds with an ‘Authorization’ header tagged ‘Basic’, it will use LDAP 
> authentication.
> Note: some HTTP clients (e.g. curl or the Apache HttpClient Java library) 
> need to be configured to use one scheme over the other, e.g.:
> - the curl tool supports options to use either Kerberos (via the --negotiate 
> flag) or username/password based authentication (via the --basic and -u 
> flags);
> - the Apache HttpClient library can be configured to use a specific 
> authentication scheme.
> http://hc.apache.org/httpcomponents-client-ga/tutorial/html/authentication.html
> Web browsers typically choose an authentication scheme automatically, based 
> on a notion of security “strength”; see, e.g., the [design of the Chrome 
> browser for HTTP 
> authentication|https://www.chromium.org/developers/design-documents/http-authentication]
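For illustration, a hedged sketch of the multi-challenge idea described above 
(this is not the attached patch; the class and helper method are hypothetical, 
and only the standard servlet API calls are assumed):

{code}
import javax.servlet.http.HttpServletResponse;

public class MultiSchemeChallenge {
  // Send a single 401 carrying both challenges; per RFC 2616 the client then
  // picks the strongest scheme it understands.
  static void sendChallenges(HttpServletResponse resp) {
    resp.setStatus(HttpServletResponse.SC_UNAUTHORIZED);        // 401
    resp.addHeader("WWW-Authenticate", "Negotiate");            // Kerberos (SPNEGO)
    resp.addHeader("WWW-Authenticate", "Basic realm=\"ldap\""); // LDAP via Basic
  }
}
{code}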



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Comment Edited] (HADOOP-12082) Support multiple authentication schemes via AuthenticationFilter

2016-10-19 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590483#comment-15590483
 ] 

Hrishikesh Gadre edited comment on HADOOP-12082 at 10/20/16 1:48 AM:
-

[~benoyantony] Here is the updated patch against branch-2.8 which fixes the 
checkstyle errors.


was (Author: hgadre):
[~benoyantony] Here is the updated patch which fixes the checkstyle errors.

> Support multiple authentication schemes via AuthenticationFilter
> 
>
> Key: HADOOP-12082
> URL: https://issues.apache.org/jira/browse/HADOOP-12082
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
> Attachments: HADOOP-12082-001.patch, HADOOP-12082-002.patch, 
> HADOOP-12082-003.patch, HADOOP-12082-004.patch, HADOOP-12082-005.patch, 
> HADOOP-12082-006.patch, HADOOP-12082-branch-2.8-001.patch, 
> HADOOP-12082-branch-2.8.patch, HADOOP-12082-branch-2.patch, 
> HADOOP-12082.patch, hadoop-ldap-auth-v2.patch, hadoop-ldap-auth-v3.patch, 
> hadoop-ldap-auth-v4.patch, hadoop-ldap-auth-v5.patch, 
> hadoop-ldap-auth-v6.patch, hadoop-ldap.patch, 
> multi-scheme-auth-support-poc.patch
>
>
> The requirement is to support an LDAP-based authentication scheme via the 
> Hadoop AuthenticationFilter. HADOOP-9054 added support for plugging in a 
> custom authentication scheme (in addition to Kerberos) via the 
> AltKerberosAuthenticationHandler class, but it selects the authentication 
> mechanism based on the User-Agent HTTP header, which does not conform to 
> HTTP protocol semantics.
> As per [RFC-2616|http://www.w3.org/Protocols/rfc2616/rfc2616.html]:
> - The HTTP protocol provides a simple challenge-response authentication 
> mechanism that can be used by a server to challenge a client request and by 
> a client to provide the necessary authentication information.
> - This mechanism is initiated by the server sending a 401 (Unauthorized) 
> response with a ‘WWW-Authenticate’ header which includes at least one 
> challenge indicating the authentication scheme(s) and parameters applicable 
> to the Request-URI.
> - If the server supports multiple authentication schemes, it may return 
> multiple challenges with a 401 (Unauthorized) response, and each challenge 
> may use a different auth-scheme.
> - A user agent MUST choose to use the strongest auth-scheme it understands 
> and request credentials from the user based upon that challenge.
> The existing Hadoop authentication filter implementation supports the 
> Kerberos authentication scheme and uses ‘Negotiate’ as the challenge in the 
> ‘WWW-Authenticate’ response header. As per the following documentation, the 
> ‘Negotiate’ challenge scheme is only applicable to the Kerberos (and Windows 
> NTLM) authentication schemes.
> [SPNEGO-based Kerberos and NTLM HTTP 
> Authentication|http://tools.ietf.org/html/rfc4559]
> [Understanding HTTP 
> Authentication|https://msdn.microsoft.com/en-us/library/ms789031%28v=vs.110%29.aspx]
> For LDAP authentication, on the other hand, the ‘Basic’ authentication 
> scheme is typically used (note that TLS is mandatory with the Basic scheme).
> http://httpd.apache.org/docs/trunk/mod/mod_authnz_ldap.html
> Hence, for this feature, the idea would be to provide a custom 
> implementation of the Hadoop AuthenticationHandler and Authenticator 
> interfaces that supports both schemes: Kerberos (via the Negotiate auth 
> challenge) and LDAP (via the Basic auth challenge). During the 
> authentication phase, it would send both challenges and let the client pick 
> the appropriate one. If the client responds with an ‘Authorization’ header 
> tagged ‘Negotiate’, it will use Kerberos authentication; if the client 
> responds with an ‘Authorization’ header tagged ‘Basic’, it will use LDAP 
> authentication.
> Note: some HTTP clients (e.g. curl or the Apache HttpClient Java library) 
> need to be configured to use one scheme over the other, e.g.:
> - the curl tool supports options to use either Kerberos (via the --negotiate 
> flag) or username/password based authentication (via the --basic and -u 
> flags);
> - the Apache HttpClient library can be configured to use a specific 
> authentication scheme.
> http://hc.apache.org/httpcomponents-client-ga/tutorial/html/authentication.html
> Web browsers typically choose an authentication scheme automatically, based 
> on a notion of security “strength”; see, e.g., the [design of the Chrome 
> browser for HTTP 
> authentication|https://www.chromium.org/developers/design-documents/http-authentication]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12082) Support multiple authentication schemes via AuthenticationFilter

2016-10-19 Thread Hrishikesh Gadre (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre updated HADOOP-12082:
--
Attachment: HADOOP-12082-branch-2.8-001.patch

[~benoyantony] Here is the updated patch which fixes the checkstyle errors.

> Support multiple authentication schemes via AuthenticationFilter
> 
>
> Key: HADOOP-12082
> URL: https://issues.apache.org/jira/browse/HADOOP-12082
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
> Attachments: HADOOP-12082-001.patch, HADOOP-12082-002.patch, 
> HADOOP-12082-003.patch, HADOOP-12082-004.patch, HADOOP-12082-005.patch, 
> HADOOP-12082-006.patch, HADOOP-12082-branch-2.8-001.patch, 
> HADOOP-12082-branch-2.8.patch, HADOOP-12082-branch-2.patch, 
> HADOOP-12082.patch, hadoop-ldap-auth-v2.patch, hadoop-ldap-auth-v3.patch, 
> hadoop-ldap-auth-v4.patch, hadoop-ldap-auth-v5.patch, 
> hadoop-ldap-auth-v6.patch, hadoop-ldap.patch, 
> multi-scheme-auth-support-poc.patch
>
>
> The requirement is to support an LDAP-based authentication scheme via the 
> Hadoop AuthenticationFilter. HADOOP-9054 added support for plugging in a 
> custom authentication scheme (in addition to Kerberos) via the 
> AltKerberosAuthenticationHandler class, but it selects the authentication 
> mechanism based on the User-Agent HTTP header, which does not conform to 
> HTTP protocol semantics.
> As per [RFC-2616|http://www.w3.org/Protocols/rfc2616/rfc2616.html]:
> - The HTTP protocol provides a simple challenge-response authentication 
> mechanism that can be used by a server to challenge a client request and by 
> a client to provide the necessary authentication information.
> - This mechanism is initiated by the server sending a 401 (Unauthorized) 
> response with a ‘WWW-Authenticate’ header which includes at least one 
> challenge indicating the authentication scheme(s) and parameters applicable 
> to the Request-URI.
> - If the server supports multiple authentication schemes, it may return 
> multiple challenges with a 401 (Unauthorized) response, and each challenge 
> may use a different auth-scheme.
> - A user agent MUST choose to use the strongest auth-scheme it understands 
> and request credentials from the user based upon that challenge.
> The existing Hadoop authentication filter implementation supports the 
> Kerberos authentication scheme and uses ‘Negotiate’ as the challenge in the 
> ‘WWW-Authenticate’ response header. As per the following documentation, the 
> ‘Negotiate’ challenge scheme is only applicable to the Kerberos (and Windows 
> NTLM) authentication schemes.
> [SPNEGO-based Kerberos and NTLM HTTP 
> Authentication|http://tools.ietf.org/html/rfc4559]
> [Understanding HTTP 
> Authentication|https://msdn.microsoft.com/en-us/library/ms789031%28v=vs.110%29.aspx]
> For LDAP authentication, on the other hand, the ‘Basic’ authentication 
> scheme is typically used (note that TLS is mandatory with the Basic scheme).
> http://httpd.apache.org/docs/trunk/mod/mod_authnz_ldap.html
> Hence, for this feature, the idea would be to provide a custom 
> implementation of the Hadoop AuthenticationHandler and Authenticator 
> interfaces that supports both schemes: Kerberos (via the Negotiate auth 
> challenge) and LDAP (via the Basic auth challenge). During the 
> authentication phase, it would send both challenges and let the client pick 
> the appropriate one. If the client responds with an ‘Authorization’ header 
> tagged ‘Negotiate’, it will use Kerberos authentication; if the client 
> responds with an ‘Authorization’ header tagged ‘Basic’, it will use LDAP 
> authentication.
> Note: some HTTP clients (e.g. curl or the Apache HttpClient Java library) 
> need to be configured to use one scheme over the other, e.g.:
> - the curl tool supports options to use either Kerberos (via the --negotiate 
> flag) or username/password based authentication (via the --basic and -u 
> flags);
> - the Apache HttpClient library can be configured to use a specific 
> authentication scheme.
> http://hc.apache.org/httpcomponents-client-ga/tutorial/html/authentication.html
> Web browsers typically choose an authentication scheme automatically, based 
> on a notion of security “strength”; see, e.g., the [design of the Chrome 
> browser for HTTP 
> authentication|https://www.chromium.org/developers/design-documents/http-authentication]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Commented] (HADOOP-13737) Cleanup DiskChecker interface

2016-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590462#comment-15590462
 ] 

Hadoop QA commented on HADOOP-13737:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
23s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13737 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12834292/HADOOP-13737.02.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 707e6b2a3668 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8650cc8 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10835/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10835/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Cleanup DiskChecker interface
> -
>
> Key: HADOOP-13737
> URL: https://issues.apache.org/jira/browse/HADOOP-13737
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-13737.01.patch, HADOOP-13737.02.patch
>
>
> The DiskChecker class has a few unused public methods. We can remove them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12082) Support multiple authentication schemes via AuthenticationFilter

2016-10-19 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590460#comment-15590460
 ] 

Hrishikesh Gadre commented on HADOOP-12082:
---

[~benoyantony] Regarding the patch for branch-2.8:

The following checkstyle errors seem bogus to me; they were also not reported 
for the patch I submitted against trunk.

./hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/LdapAuthenticationHandler.java:118:
  public void setEnableStartTls(Boolean enableStartTls) {:41: 'enableStartTls' 
hides a field.
./hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/LdapAuthenticationHandler.java:131:
  Boolean disableHostNameVerification) {:15: 'disableHostNameVerification' 
hides a field.

Also, the following reported whitespace errors are for a file which I can't 
find in branch-2.8:

https://builds.apache.org/job/PreCommit-HADOOP-Build/10830/artifact/patchprocess/whitespace-eol.txt



> Support multiple authentication schemes via AuthenticationFilter
> 
>
> Key: HADOOP-12082
> URL: https://issues.apache.org/jira/browse/HADOOP-12082
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
> Attachments: HADOOP-12082-001.patch, HADOOP-12082-002.patch, 
> HADOOP-12082-003.patch, HADOOP-12082-004.patch, HADOOP-12082-005.patch, 
> HADOOP-12082-006.patch, HADOOP-12082-branch-2.8.patch, 
> HADOOP-12082-branch-2.patch, HADOOP-12082.patch, hadoop-ldap-auth-v2.patch, 
> hadoop-ldap-auth-v3.patch, hadoop-ldap-auth-v4.patch, 
> hadoop-ldap-auth-v5.patch, hadoop-ldap-auth-v6.patch, hadoop-ldap.patch, 
> multi-scheme-auth-support-poc.patch
>
>
> The requirement is to support an LDAP-based authentication scheme via the 
> Hadoop AuthenticationFilter. HADOOP-9054 added support for plugging in a 
> custom authentication scheme (in addition to Kerberos) via the 
> AltKerberosAuthenticationHandler class, but it selects the authentication 
> mechanism based on the User-Agent HTTP header, which does not conform to 
> HTTP protocol semantics.
> As per [RFC-2616|http://www.w3.org/Protocols/rfc2616/rfc2616.html]:
> - The HTTP protocol provides a simple challenge-response authentication 
> mechanism that can be used by a server to challenge a client request and by 
> a client to provide the necessary authentication information.
> - This mechanism is initiated by the server sending a 401 (Unauthorized) 
> response with a ‘WWW-Authenticate’ header which includes at least one 
> challenge indicating the authentication scheme(s) and parameters applicable 
> to the Request-URI.
> - If the server supports multiple authentication schemes, it may return 
> multiple challenges with a 401 (Unauthorized) response, and each challenge 
> may use a different auth-scheme.
> - A user agent MUST choose to use the strongest auth-scheme it understands 
> and request credentials from the user based upon that challenge.
> The existing Hadoop authentication filter implementation supports the 
> Kerberos authentication scheme and uses ‘Negotiate’ as the challenge in the 
> ‘WWW-Authenticate’ response header. As per the following documentation, the 
> ‘Negotiate’ challenge scheme is only applicable to the Kerberos (and Windows 
> NTLM) authentication schemes.
> [SPNEGO-based Kerberos and NTLM HTTP 
> Authentication|http://tools.ietf.org/html/rfc4559]
> [Understanding HTTP 
> Authentication|https://msdn.microsoft.com/en-us/library/ms789031%28v=vs.110%29.aspx]
> For LDAP authentication, on the other hand, the ‘Basic’ authentication 
> scheme is typically used (note that TLS is mandatory with the Basic scheme).
> http://httpd.apache.org/docs/trunk/mod/mod_authnz_ldap.html
> Hence, for this feature, the idea would be to provide a custom 
> implementation of the Hadoop AuthenticationHandler and Authenticator 
> interfaces that supports both schemes: Kerberos (via the Negotiate auth 
> challenge) and LDAP (via the Basic auth challenge). During the 
> authentication phase, it would send both challenges and let the client pick 
> the appropriate one. If the client responds with an ‘Authorization’ header 
> tagged ‘Negotiate’, it will use Kerberos authentication; if the client 
> responds with an ‘Authorization’ header tagged ‘Basic’, it will use LDAP 
> authentication.
> Note: some HTTP clients (e.g. curl or the Apache HttpClient Java library) 
> need to be configured to use one scheme over the other, e.g.:
> - the curl tool supports options to use either Kerberos (via the --negotiate 
> flag) or username/password based authentication (via the --basic and -u 
> flags);
> - the Apache HttpClient library can be configured to use a specific 
> authentication scheme.
> 

[jira] [Commented] (HADOOP-13050) Upgrade to AWS SDK 10.10+ for Java 8u60+

2016-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590426#comment-15590426
 ] 

Hadoop QA commented on HADOOP-13050:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
54s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
36s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
38s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
41s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} branch-2 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
37s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
48s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
16s{color} | {color:green} hadoop-project in the patch passed with JDK 
v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_111. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch 

[jira] [Updated] (HADOOP-13737) Cleanup DiskChecker interface

2016-10-19 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-13737:
---
Status: Patch Available  (was: Reopened)

> Cleanup DiskChecker interface
> -
>
> Key: HADOOP-13737
> URL: https://issues.apache.org/jira/browse/HADOOP-13737
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-13737.01.patch, HADOOP-13737.02.patch
>
>
> The DiskChecker class has a few unused public methods. We can remove them.






[jira] [Updated] (HADOOP-13737) Cleanup DiskChecker interface

2016-10-19 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-13737:
---
Attachment: HADOOP-13737.02.patch

v02 patch addresses the checkstyle warning.

> Cleanup DiskChecker interface
> -
>
> Key: HADOOP-13737
> URL: https://issues.apache.org/jira/browse/HADOOP-13737
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-13737.01.patch, HADOOP-13737.02.patch
>
>
> The DiskChecker class has a few unused public methods. We can remove them.






[jira] [Commented] (HADOOP-13737) Cleanup DiskChecker interface

2016-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590382#comment-15590382
 ] 

Hadoop QA commented on HADOOP-13737:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 32 unchanged - 0 fixed = 33 total (was 32) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
29s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13737 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12834278/HADOOP-13737.01.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5571a249a5cc 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e9c4616 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10834/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10834/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10834/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Cleanup DiskChecker interface
> -
>
> Key: HADOOP-13737
> URL: https://issues.apache.org/jira/browse/HADOOP-13737
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: 

[jira] [Updated] (HADOOP-13737) Cleanup DiskChecker interface

2016-10-19 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-13737:
---
Target Version/s: 2.9.0

> Cleanup DiskChecker interface
> -
>
> Key: HADOOP-13737
> URL: https://issues.apache.org/jira/browse/HADOOP-13737
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-13737.01.patch
>
>
> The DiskChecker class has a few unused public methods. We can remove them.






[jira] [Updated] (HADOOP-13737) Cleanup DiskChecker interface

2016-10-19 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-13737:
---
Fix Version/s: (was: 3.0.0-alpha2)
   (was: 2.8.0)

> Cleanup DiskChecker interface
> -
>
> Key: HADOOP-13737
> URL: https://issues.apache.org/jira/browse/HADOOP-13737
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-13737.01.patch
>
>
> The DiskChecker class has a few unused public methods. We can remove them.






[jira] [Issue Comment Deleted] (HADOOP-13737) Cleanup DiskChecker interface

2016-10-19 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-13737:
---
Comment: was deleted

(was: Committed for 2.8.0. Thanks for the contribution [~hanishakoneru]!)

> Cleanup DiskChecker interface
> -
>
> Key: HADOOP-13737
> URL: https://issues.apache.org/jira/browse/HADOOP-13737
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13737.01.patch
>
>
> The DiskChecker class has a few unused public methods. We can remove them.






[jira] [Updated] (HADOOP-13737) Cleanup DiskChecker interface

2016-10-19 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-13737:
---
Hadoop Flags:   (was: Reviewed)

> Cleanup DiskChecker interface
> -
>
> Key: HADOOP-13737
> URL: https://issues.apache.org/jira/browse/HADOOP-13737
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13737.01.patch
>
>
> The DiskChecker class has a few unused public methods. We can remove them.






[jira] [Issue Comment Deleted] (HADOOP-13737) Cleanup DiskChecker interface

2016-10-19 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-13737:
---
Comment: was deleted

(was: Resolved the wrong issue!)

> Cleanup DiskChecker interface
> -
>
> Key: HADOOP-13737
> URL: https://issues.apache.org/jira/browse/HADOOP-13737
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13737.01.patch
>
>
> The DiskChecker class has a few unused public methods. We can remove them.






[jira] [Reopened] (HADOOP-13737) Cleanup DiskChecker interface

2016-10-19 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reopened HADOOP-13737:

  Assignee: Arpit Agarwal

Resolved the wrong issue!

> Cleanup DiskChecker interface
> -
>
> Key: HADOOP-13737
> URL: https://issues.apache.org/jira/browse/HADOOP-13737
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13737.01.patch
>
>
> The DiskChecker class has a few unused public methods. We can remove them.






[jira] [Updated] (HADOOP-13737) Cleanup DiskChecker interface

2016-10-19 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-13737:
---
   Resolution: Fixed
 Assignee: (was: Arpit Agarwal)
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed for 2.8.0. Thanks for the contribution [~hanishakoneru]!

> Cleanup DiskChecker interface
> -
>
> Key: HADOOP-13737
> URL: https://issues.apache.org/jira/browse/HADOOP-13737
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Arpit Agarwal
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13737.01.patch
>
>
> The DiskChecker class has a few unused public methods. We can remove them.






[jira] [Commented] (HADOOP-12082) Support multiple authentication schemes via AuthenticationFilter

2016-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590298#comment-15590298
 ] 

Hadoop QA commented on HADOOP-12082:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
47s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
55s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
41s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
17s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
14s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
48s{color} | {color:green} branch-2.8 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
39s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
29s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
14s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
14s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 20s{color} | {color:orange} root: The patch generated 4 new + 138 unchanged 
- 5 fixed = 142 total (was 143) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 47 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
16s{color} | {color:green} hadoop-project in the patch passed with JDK 
v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
29s{color} | {color:green} hadoop-auth in 

[jira] [Commented] (HADOOP-13736) Change PathMetadata to hold S3AFileStatus instead of FileStatus.

2016-10-19 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590296#comment-15590296
 ] 

Lei (Eddy) Xu commented on HADOOP-13736:


Thanks for the great summary, [~fabbri].

As [~fabbri] mentioned, the motivation for this change is that while I was 
working on HADOOP-13650 and HADOOP-13449, {{PathMetadata}} and a few other 
abstractions that hide {{S3AFileStatus}} made implementing 
{{DynamoDBMetadataStore}} and the {{CLI}}, and integrating with 
{{S3AFileSystem}}, harder than they should be. I lean toward a simpler, 
S3A-specific implementation that stays within the {{S3Guard}} scope. After we 
gain more experience here, we may be able to bring a better solution to other 
projects (i.e., HADOOP-12876). Additionally, IIUC, only the 
{{InMemoryMetadataStore}} is useful to other projects.

However, I am not aware of the timeline for HADOOP-12876, so it would be 
appreciated if [~ste...@apache.org] and [~vishwajeet.dusane] could chime in.

Thanks!

> Change PathMetadata to hold S3AFileStatus instead of FileStatus.
> 
>
> Key: HADOOP-13736
> URL: https://issues.apache.org/jira/browse/HADOOP-13736
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>
> {{S3AFileStatus}} is implemented differently from {{FileStatus}}; for 
> instance, {{S3AFileStatus#isEmptyDirectory()}} is not implemented in 
> {{FileStatus}}. And {{access_time}}, {{block_replication}}, {{owner}}, 
> {{group}} and a few other fields are not meaningful in {{S3AFileStatus}}. 
> So in the scope of {{S3Guard}}, it should use {{S3AFileStatus}} instead of 
> {{FileStatus}} in {{PathMetadata}} to avoid casting the types back and 
> forth in S3A.






[jira] [Updated] (HADOOP-13738) DiskChecker should perform some file IO

2016-10-19 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-13738:
---
Attachment: HADOOP-13738.01.patch

v01 patch attempts to create a file in the target directory, write 1 byte to it 
and flush the file data to disk.

Not hitting "Submit Patch" yet as this depends on HADOOP-13737.
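
For readers following the thread, the approach boils down to something like 
the sketch below (a rough illustration, not the attached patch; the probe 
file name, method name, and message text are made up):

{code}
// A minimal sketch of the check described above, not the attached patch.
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import org.apache.hadoop.util.DiskChecker.DiskErrorException;

static void checkDirWithDiskIo(File dir) throws DiskErrorException {
  // Hypothetical probe file name; a real check would pick a collision-safe name.
  File probe = new File(dir, ".diskcheck." + System.nanoTime());
  try (FileOutputStream out = new FileOutputStream(probe)) {
    out.write(1);         // write one byte of data
    out.flush();          // push it out of the stream buffer
    out.getFD().sync();   // force it through to the storage device
  } catch (IOException e) {
    throw new DiskErrorException("Disk IO check failed on " + dir + ": " + e);
  } finally {
    probe.delete();       // best-effort cleanup of the probe file
  }
}
{code}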

> DiskChecker should perform some file IO
> ---
>
> Key: HADOOP-13738
> URL: https://issues.apache.org/jira/browse/HADOOP-13738
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-13738.01.patch
>
>
> DiskChecker can fail to detect total disk/controller failures indefinitely. 
> We have seen this in real clusters. DiskChecker performs simple 
> permissions-based checks on directories which do not guarantee that any disk 
> IO will be attempted.
> A simple improvement is to write some data and flush it to the disk.






[jira] [Updated] (HADOOP-13738) DiskChecker should perform some disk IO

2016-10-19 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-13738:
---
Summary: DiskChecker should perform some disk IO  (was: DiskChecker should 
perform some file IO)

> DiskChecker should perform some disk IO
> ---
>
> Key: HADOOP-13738
> URL: https://issues.apache.org/jira/browse/HADOOP-13738
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-13738.01.patch
>
>
> DiskChecker can fail to detect total disk/controller failures indefinitely. 
> We have seen this in real clusters. DiskChecker performs simple 
> permissions-based checks on directories which do not guarantee that any disk 
> IO will be attempted.
> A simple improvement is to write some data and flush it to the disk.






[jira] [Updated] (HADOOP-13737) Cleanup DiskChecker interface

2016-10-19 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-13737:
---
Status: Patch Available  (was: Open)

> Cleanup DiskChecker interface
> -
>
> Key: HADOOP-13737
> URL: https://issues.apache.org/jira/browse/HADOOP-13737
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-13737.01.patch
>
>
> The DiskChecker class has a few unused public methods. We can remove them.






[jira] [Updated] (HADOOP-13737) Cleanup DiskChecker interface

2016-10-19 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-13737:
---
Attachment: HADOOP-13737.01.patch

The v01 patch removes a number of unused methods and minimizes the public 
interface.

> Cleanup DiskChecker interface
> -
>
> Key: HADOOP-13737
> URL: https://issues.apache.org/jira/browse/HADOOP-13737
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-13737.01.patch
>
>
> The DiskChecker class has a few unused public methods. We can remove them.






[jira] [Updated] (HADOOP-13452) S3Guard: Implement access policy for intra-client consistency with in-memory metadata store.

2016-10-19 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13452:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HADOOP-13345
   Status: Resolved  (was: Patch Available)

+1 for revision 003.  I have committed this to the HADOOP-13345 feature branch. 
 Aaron, thank you for the patch.  Mingliang, thank you for the code review.

> S3Guard: Implement access policy for intra-client consistency with in-memory 
> metadata store.
> 
>
> Key: HADOOP-13452
> URL: https://issues.apache.org/jira/browse/HADOOP-13452
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Aaron Fabbri
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-13452-HADOOP-13345.002.patch, 
> HADOOP-13452-HADOOP-13345.003.patch, HADOOP-13452.001.patch
>
>
> Implement an S3A access policy based on an in-memory metadata store.  This 
> can provide consistency within the same client without needing to integrate 
> with an external system.






[jira] [Created] (HADOOP-13738) DiskChecker should perform some file IO

2016-10-19 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-13738:
--

 Summary: DiskChecker should perform some file IO
 Key: HADOOP-13738
 URL: https://issues.apache.org/jira/browse/HADOOP-13738
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


DiskChecker can fail to detect total disk/controller failures indefinitely. We 
have seen this in real clusters. DiskChecker performs simple permissions-based 
checks on directories which do not guarantee that any disk IO will be attempted.

A simple improvement is to write some data and flush it to the disk.






[jira] [Updated] (HADOOP-13050) Upgrade to AWS SDK 10.10+ for Java 8u60+

2016-10-19 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13050:
---
Attachment: HADOOP-13050-branch-2.003.patch

I'm uploading revision 003, which fixes new deprecation warnings triggered by 
the SDK upgrade.

> Upgrade to AWS SDK 10.10+ for Java 8u60+
> 
>
> Key: HADOOP-13050
> URL: https://issues.apache.org/jira/browse/HADOOP-13050
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
> Attachments: HADOOP-13050-001.patch, HADOOP-13050-branch-2.002.patch, 
> HADOOP-13050-branch-2.003.patch
>
>
> HADOOP-13044 highlights that AWS SDK 10.6 (shipping in Hadoop 2.7+) doesn't 
> work on OpenJDK >= 8u60, because a change in the JDK broke the version of 
> Joda Time that AWS uses.
> Fix: update the SDK, though that implies updating the HTTP components: 
> HADOOP-12767.






[jira] [Created] (HADOOP-13737) Cleanup DiskChecker interface

2016-10-19 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-13737:
--

 Summary: Cleanup DiskChecker interface
 Key: HADOOP-13737
 URL: https://issues.apache.org/jira/browse/HADOOP-13737
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


The DiskChecker class has a few unused public methods. We can remove them.






[jira] [Commented] (HADOOP-12082) Support multiple authentication schemes via AuthenticationFilter

2016-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590201#comment-15590201
 ] 

Hadoop QA commented on HADOOP-12082:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 
54s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
57s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
41s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
41s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
27s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
27s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
43s{color} | {color:green} branch-2 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
26s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
33s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
31s{color} | {color:green} root: The patch generated 0 new + 151 unchanged - 6 
fixed = 151 total (was 157) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 48 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
14s{color} | {color:green} hadoop-project in the patch passed with JDK 
v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
23s{color} | {color:green} hadoop-auth in the patch passed with 

[jira] [Commented] (HADOOP-13736) Change PathMetadata to hold S3AFileStatus instead of FileStatus.

2016-10-19 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590167#comment-15590167
 ] 

Aaron Fabbri commented on HADOOP-13736:
---

Chatted w/ [~eddyxu] a bit offline; paraphrasing here: he mentioned that 
dealing with FileStatus creates extra work for DynamoDB and maybe the CLI as 
well. We agreed that:

- DynamoDBMetadataStore can only support S3A. It can assert at initialize() 
that the fs is S3AFileSystem (a rough sketch follows this list).
- It can ignore FileStatus fields that are not used by S3A. We'll modify the 
MetadataStore contract tests to take this into account.
- In general, it should only do minimal work to support S3A. No other fs 
clients are supported.
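
A rough sketch of that first bullet, assuming the store is handed the owning 
FileSystem at {{initialize()}} (names and message text are illustrative, not 
the branch code):

{code}
// Hedged sketch: reject any FileSystem other than S3A at initialization.
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.s3a.S3AFileSystem;

public void initialize(FileSystem fs) throws IOException {
  if (!(fs instanceof S3AFileSystem)) {
    throw new IOException("DynamoDBMetadataStore only supports S3A, got "
        + fs.getClass().getName());
  }
  // Safe to downcast from here on; no other fs clients are supported.
  S3AFileSystem s3afs = (S3AFileSystem) fs;
  // ... set up the DynamoDB table for s3afs ...
}
{code}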

I was thinking the only impact of keeping FileStatus is just a cast to 
S3AFileStatus at the beginning of the public MetadataStore functions (let me 
know if there are more difficulties). We agreed that casting is not that 
great, but current S3A code should not have any non-S3A FileStatus in its code 
paths. I originally tried to make the subtype of FileStatus a type parameter, 
but as [~cnauroth] mentioned, that doesn't play nicely with reflection-based 
instantiation. (Open to ideas on this so we can get better type checking.)

It sounds like there will be a little extra work in the CLI as well to keep 
FileStatus, i.e. optionally showing the extra fields that subclasses of 
FileStatus might add (e.g. S3AFileStatus.isEmptyDirectory).

Overall I like the idea of keeping the MetadataStore interface FS-agnostic. I 
was hoping we could easily move it to its own package and use it for other FS 
clients in the future. [~ste...@apache.org] and [~vishwajeet.dusane] also 
indicated they were interested in using LocalMetadataStore for an ephemeral 
listing cache in HADOOP-12876. Maybe they can chime in on whether that is 
still of interest.

I want to be sensitive to any pain you are feeling on the DynamoDB and/or CLI 
implementations, so I'm hoping we can maintain the FS separation, but I want 
to be flexible if needed. Thanks!

> Change PathMetadata to hold S3AFileStatus instead of FileStatus.
> 
>
> Key: HADOOP-13736
> URL: https://issues.apache.org/jira/browse/HADOOP-13736
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>
> {{S3AFileStatus}} is implemented differently from {{FileStatus}}; for 
> instance, {{S3AFileStatus#isEmptyDirectory()}} is not implemented in 
> {{FileStatus}}. And {{access_time}}, {{block_replication}}, {{owner}}, 
> {{group}} and a few other fields are not meaningful in {{S3AFileStatus}}. 
> So in the scope of {{S3Guard}}, it should use {{S3AFileStatus}} instead of 
> {{FileStatus}} in {{PathMetadata}} to avoid casting the types back and 
> forth in S3A.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-10075) Update jetty dependency to version 9

2016-10-19 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-10075:
---
Attachment: HADOOP-10075.010.patch

The 010 patch:
- Addresses [~raviprak]'s comment about the -2 check
- Fixes the relevant CheckStyle warnings. Also, I think CheckStyle is confused 
about indentation levels for lambda expressions.
- Replaces {{1024 * 64}} with a constant
- Uses constants in {{TestHttpServer}} for the content types. However, for 
some reason, the type keeps coming back without a space, even though I'm 
directly setting the type in the servlet with a space. Even stranger, if I 
directly set the type in the servlet to a different charset (e.g. 
{{utf-16}}), it does keep the space. So I'm not really sure what's going on 
here...
- Moved {{+}} signs as per [~templedf]'s comments
- Rebased on latest trunk

Here are the changes on GitHub if that's easier to look at:
https://github.com/rkanter/hadoop/commit/546a7e36a7eeacf4b68fe31a1a09b9badc6b7560

I didn't remove catching {{Throwable}} in {{ResourceGzMojo}}. The other Hadoop 
Maven plugins also do this, and I think the idea is that Maven knows how to 
nicely handle a {{MojoExecutionException}} and report it properly to the user 
(a rough sketch of the pattern follows).
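
For context, the pattern being kept is roughly the following (a hedged 
sketch, not the actual {{ResourceGzMojo}} source; the message text is made 
up):

{code}
// Catch everything, including Errors, and surface it as a
// MojoExecutionException so Maven can report it cleanly to the user.
import org.apache.maven.plugin.MojoExecutionException;

public void execute() throws MojoExecutionException {
  try {
    // ... gzip the configured resource files ...
  } catch (Throwable t) {
    throw new MojoExecutionException("Failed to gzip resources", t);
  }
}
{code}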

> Update jetty dependency to version 9
> 
>
> Key: HADOOP-10075
> URL: https://issues.apache.org/jira/browse/HADOOP-10075
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Robert Rati
>Assignee: Robert Kanter
>Priority: Critical
> Attachments: HADOOP-10075-002-wip.patch, HADOOP-10075.003.patch, 
> HADOOP-10075.004.patch, HADOOP-10075.005.patch, HADOOP-10075.006.patch, 
> HADOOP-10075.007.patch, HADOOP-10075.008.patch, HADOOP-10075.009.patch, 
> HADOOP-10075.010.patch, HADOOP-10075.patch
>
>
> Jetty6 is no longer maintained.  Update the dependency to jetty9.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13050) Upgrade to AWS SDK 10.10+ for Java 8u60+

2016-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590150#comment-15590150
 ] 

Hadoop QA commented on HADOOP-13050:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
53s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
45s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
36s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
30s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
23s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
51s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  5m 51s{color} 
| {color:red} root-jdk1.8.0_101 with JDK v1.8.0_101 generated 7 new + 857 
unchanged - 0 fixed = 864 total (was 857) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
52s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  6m 52s{color} 
| {color:red} root-jdk1.7.0_111 with JDK v1.7.0_111 generated 7 new + 950 
unchanged - 0 fixed = 957 total (was 950) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
0s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m  
9s{color} | {color:green} hadoop-project in the patch passed with JDK 
v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
24s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_111. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HADOOP-13050 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12834265/HADOOP-13050-branch-2.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 78bacacd5bd3 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| 

[jira] [Commented] (HADOOP-13659) Upgrade jaxb-api version

2016-10-19 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590144#comment-15590144
 ] 

Sean Mackrory commented on HADOOP-13659:


Pinging the mailing list to see if I can clear this up: 
https://java.net/projects/jaxb/lists/users/archive/2016-10/message/1. It is 
concerning how hard it is to find any information about jaxb-api itself outside 
of maven repositories and sites that index them...

> Upgrade jaxb-api version
> 
>
> Key: HADOOP-13659
> URL: https://issues.apache.org/jira/browse/HADOOP-13659
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-13659.001.patch
>
>
> We're currently pulling in version 2.2.2 - I think we should upgrade to the 
> latest 2.2.12.






[jira] [Comment Edited] (HADOOP-13659) Upgrade jaxb-api version

2016-10-19 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590066#comment-15590066
 ] 

Wei-Chiu Chuang edited comment on HADOOP-13659 at 10/19/16 10:51 PM:
-

So... I checked out the code (fortunately they use git), and looking at 
pom.xml, the version history went 2.1.11-SNAPSHOT --> 2.1.11 --> 
2.1.12-SNAPSHOT --> 2.3.0-SNAPSHOT.
It seems the last release was 2.1.11; they then planned to do 2.1.12 but 
moved up to 2.3.0.

Also, the repo has a jaxb-2_2_11-branch but no jaxb-2_2_12-branch.

EDIT: I was referring to the jaxb-ri repo, so maybe it's not what you're 
looking for. Sorry for the confusion.


was (Author: jojochuang):
So... I checked out the code (fortunately they use git), and looking at 
pom.xml, the version history went 2.1.11-SNAPSHOT --> 2.1.11 --> 
2.1.12-SNAPSHOT --> 2.3.0-SNAPSHOT.
It seems the last release was 2.1.11; they then planned to do 2.1.12 but 
moved up to 2.3.0.

Also, the repo has a jaxb-2_2_11-branch but no jaxb-2_2_12-branch.

> Upgrade jaxb-api version
> 
>
> Key: HADOOP-13659
> URL: https://issues.apache.org/jira/browse/HADOOP-13659
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-13659.001.patch
>
>
> We're currently pulling in version 2.2.2 - I think we should upgrade to the 
> latest 2.2.12.






[jira] [Commented] (HADOOP-13659) Upgrade jaxb-api version

2016-10-19 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590066#comment-15590066
 ] 

Wei-Chiu Chuang commented on HADOOP-13659:
--

So... I checked out the code (fortunately they use git), and looking at 
pom.xml, the version history went 2.1.11-SNAPSHOT --> 2.1.11 --> 
2.1.12-SNAPSHOT --> 2.3.0-SNAPSHOT.
It seems the last release was 2.1.11; they then planned to do 2.1.12 but 
moved up to 2.3.0.

Also, the repo has a jaxb-2_2_11-branch but no jaxb-2_2_12-branch.

> Upgrade jaxb-api version
> 
>
> Key: HADOOP-13659
> URL: https://issues.apache.org/jira/browse/HADOOP-13659
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-13659.001.patch
>
>
> We're currently pulling in version 2.2.2 - I think we should upgrade to the 
> latest 2.2.12.






[jira] [Commented] (HADOOP-13736) Change PathMetadata to hold S3AFileStatus instead of FileStatus.

2016-10-19 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590058#comment-15590058
 ] 

Aaron Fabbri commented on HADOOP-13736:
---

Example code:

In init code:
{code}
if (fs instanceof S3AFileSystem) {
  isS3A = true;
}
{code}

Example from put()
{code}
// S3A-specific logic to maintain S3AFileStatus#isEmptyDirectory()
if (isS3A) {
  setS3AIsEmpty(parentPath, false);
}
{code}

Implementation:
{code}
private void setS3AIsEmpty(Path path, boolean isEmpty) {
// Update any file statuses in fileHash
PathMetadata meta = fileHash.get(path);
if (meta != null) {
  S3AFileStatus s3aStatus =  (S3AFileStatus)meta.getFileStatus();
  s3aStatus.setIsEmptyDirectory(isEmpty);
}
...
{code}



> Change PathMetadata to hold S3AFileStatus instead of FileStatus.
> 
>
> Key: HADOOP-13736
> URL: https://issues.apache.org/jira/browse/HADOOP-13736
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>
> {{S3AFileStatus}} is implemented differently from {{FileStatus}}; for 
> instance, {{S3AFileStatus#isEmptyDirectory()}} is not implemented in 
> {{FileStatus}}. And {{access_time}}, {{block_replication}}, {{owner}}, 
> {{group}} and a few other fields are not meaningful in {{S3AFileStatus}}. 
> So in the scope of {{S3Guard}}, it should use {{S3AFileStatus}} instead of 
> {{FileStatus}} in {{PathMetadata}} to avoid casting the types back and 
> forth in S3A.






[jira] [Updated] (HADOOP-13050) Upgrade to AWS SDK 10.10+ for Java 8u60+

2016-10-19 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13050:
---
Target Version/s: 2.9.0  (was: 2.8.0)

> Upgrade to AWS SDK 10.10+ for Java 8u60+
> 
>
> Key: HADOOP-13050
> URL: https://issues.apache.org/jira/browse/HADOOP-13050
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
> Attachments: HADOOP-13050-001.patch, HADOOP-13050-branch-2.002.patch
>
>
> HADOOP-13044 highlights that AWS SDK 10.6 (shipping in Hadoop 2.7+) doesn't 
> work on OpenJDK >= 8u60, because a change in the JDK broke the version of 
> Joda Time that AWS uses.
> Fix: update the SDK, though that implies updating the HTTP components: 
> HADOOP-12767.






[jira] [Updated] (HADOOP-13050) Upgrade to AWS SDK 10.10+ for Java 8u60+

2016-10-19 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13050:
---
Target Version/s: 2.9.0, 3.0.0-alpha2  (was: 2.9.0)

> Upgrade to AWS SDK 10.10+ for Java 8u60+
> 
>
> Key: HADOOP-13050
> URL: https://issues.apache.org/jira/browse/HADOOP-13050
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
> Attachments: HADOOP-13050-001.patch, HADOOP-13050-branch-2.002.patch
>
>
> HADOOP-13044 highlights that AWS SDK 10.6 (shipping in Hadoop 2.7+) doesn't 
> work on OpenJDK >= 8u60, because a change in the JDK broke the version of 
> Joda Time that AWS uses.
> Fix: update the SDK, though that implies updating the HTTP components: 
> HADOOP-12767.






[jira] [Commented] (HADOOP-13736) Change PathMetadata to hold S3AFileStatus instead of FileStatus.

2016-10-19 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590048#comment-15590048
 ] 

Aaron Fabbri commented on HADOOP-13736:
---

Why not cast as needed and keep the base type?

The ADLS folks indicated they were interested in using MetadataStore, but this 
change would break that.

In LocalMetadataStore, I have a single special case for s3a due to the 
isEmptyDirectory flag in S3AFileStatus.  I have a flag {{isS3A}} in the 
LocalMetadataStore and I cast as needed when it is true.

> Change PathMetadata to hold S3AFileStatus instead of FileStatus.
> 
>
> Key: HADOOP-13736
> URL: https://issues.apache.org/jira/browse/HADOOP-13736
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>
> {{S3AFileStatus}} is implemented differently from {{FileStatus}}; for 
> instance, {{S3AFileStatus#isEmptyDirectory()}} is not implemented in 
> {{FileStatus}}. And {{access_time}}, {{block_replication}}, {{owner}}, 
> {{group}} and a few other fields are not meaningful in {{S3AFileStatus}}. 
> So in the scope of {{S3Guard}}, it should use {{S3AFileStatus}} instead of 
> {{FileStatus}} in {{PathMetadata}} to avoid casting the types back and 
> forth in S3A.






[jira] [Commented] (HADOOP-13050) Upgrade to AWS SDK 10.10+ for Java 8u60+

2016-10-19 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590045#comment-15590045
 ] 

Chris Nauroth commented on HADOOP-13050:


Also, AWS SDK 1.11.45 depends on Jackson 2.6.6.

> Upgrade to AWS SDK 10.10+ for Java 8u60+
> 
>
> Key: HADOOP-13050
> URL: https://issues.apache.org/jira/browse/HADOOP-13050
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
> Attachments: HADOOP-13050-001.patch, HADOOP-13050-branch-2.002.patch
>
>
> HADOOP-13044 highlights that AWS SDK 10.6 (shipping in Hadoop 2.7+) doesn't 
> work on OpenJDK >= 8u60, because a change in the JDK broke the version of 
> Joda Time that AWS uses.
> Fix: update the SDK, though that implies updating the HTTP components: 
> HADOOP-12767.






[jira] [Updated] (HADOOP-13050) Upgrade to AWS SDK 10.10+ for Java 8u60+

2016-10-19 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13050:
---
Attachment: HADOOP-13050-branch-2.002.patch

This needed a rebase after one of my patches, so I went ahead and updated it.  
I'm attaching revision 002.  This time, I went to AWS SDK 1.11.45.  All tests 
passed against US-west-2.

Here is the updated dependency tree:
{code}
[INFO] --- maven-dependency-plugin:2.2:tree (default-cli) @ hadoop-aws ---
[INFO] org.apache.hadoop:hadoop-aws:jar:2.9.0-SNAPSHOT
[INFO] +- org.apache.hadoop:hadoop-common:jar:2.9.0-SNAPSHOT:compile
[INFO] |  +- org.apache.hadoop:hadoop-annotations:jar:2.9.0-SNAPSHOT:compile
[INFO] |  |  \- jdk.tools:jdk.tools:jar:1.7:system
[INFO] |  +- com.google.guava:guava:jar:11.0.2:compile
[INFO] |  +- commons-cli:commons-cli:jar:1.2:compile
[INFO] |  +- org.apache.commons:commons-math3:jar:3.1.1:compile
[INFO] |  +- xmlenc:xmlenc:jar:0.52:compile
[INFO] |  +- org.apache.httpcomponents:httpclient:jar:4.5.2:compile
[INFO] |  |  \- org.apache.httpcomponents:httpcore:jar:4.4.4:compile
[INFO] |  +- commons-codec:commons-codec:jar:1.4:compile
[INFO] |  +- commons-io:commons-io:jar:2.4:compile
[INFO] |  +- commons-net:commons-net:jar:3.1:compile
[INFO] |  +- commons-collections:commons-collections:jar:3.2.2:compile
[INFO] |  +- javax.servlet:servlet-api:jar:2.5:compile
[INFO] |  +- org.mortbay.jetty:jetty:jar:6.1.26:compile
[INFO] |  +- org.mortbay.jetty:jetty-util:jar:6.1.26:compile
[INFO] |  +- org.mortbay.jetty:jetty-sslengine:jar:6.1.26:compile
[INFO] |  +- javax.servlet.jsp:jsp-api:jar:2.1:runtime
[INFO] |  +- com.sun.jersey:jersey-core:jar:1.9:compile
[INFO] |  +- com.sun.jersey:jersey-json:jar:1.9:compile
[INFO] |  |  +- org.codehaus.jettison:jettison:jar:1.1:compile
[INFO] |  |  +- com.sun.xml.bind:jaxb-impl:jar:2.2.3-1:compile
[INFO] |  |  |  \- javax.xml.bind:jaxb-api:jar:2.2.2:compile
[INFO] |  |  | +- javax.xml.stream:stax-api:jar:1.0-2:compile
[INFO] |  |  | \- javax.activation:activation:jar:1.1:compile
[INFO] |  |  +- org.codehaus.jackson:jackson-jaxrs:jar:1.9.13:compile (version 
managed from 1.8.3)
[INFO] |  |  \- org.codehaus.jackson:jackson-xc:jar:1.9.13:compile (version 
managed from 1.8.3)
[INFO] |  +- com.sun.jersey:jersey-server:jar:1.9:compile
[INFO] |  |  \- asm:asm:jar:3.2:compile (version managed from 3.1)
[INFO] |  +- commons-logging:commons-logging:jar:1.1.3:compile
[INFO] |  +- log4j:log4j:jar:1.2.17:compile
[INFO] |  +- net.java.dev.jets3t:jets3t:jar:0.9.0:compile
[INFO] |  |  \- com.jamesmurty.utils:java-xmlbuilder:jar:0.4:compile
[INFO] |  +- commons-lang:commons-lang:jar:2.6:compile
[INFO] |  +- commons-configuration:commons-configuration:jar:1.6:compile
[INFO] |  |  +- commons-digester:commons-digester:jar:1.8:compile
[INFO] |  |  |  \- commons-beanutils:commons-beanutils:jar:1.7.0:compile
[INFO] |  |  \- commons-beanutils:commons-beanutils-core:jar:1.8.0:compile
[INFO] |  +- org.slf4j:slf4j-api:jar:1.7.10:compile
[INFO] |  +- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
[INFO] |  +- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
[INFO] |  +- org.apache.avro:avro:jar:1.7.4:compile
[INFO] |  |  +- com.thoughtworks.paranamer:paranamer:jar:2.3:compile
[INFO] |  |  \- org.xerial.snappy:snappy-java:jar:1.0.4.1:compile
[INFO] |  +- com.google.protobuf:protobuf-java:jar:2.5.0:compile
[INFO] |  +- com.google.code.gson:gson:jar:2.2.4:compile
[INFO] |  +- org.apache.hadoop:hadoop-auth:jar:2.9.0-SNAPSHOT:compile
[INFO] |  |  +- com.nimbusds:nimbus-jose-jwt:jar:3.9:compile
[INFO] |  |  |  +- net.jcip:jcip-annotations:jar:1.0:compile
[INFO] |  |  |  \- net.minidev:json-smart:jar:1.1.1:compile
[INFO] |  |  +- 
org.apache.directory.server:apacheds-kerberos-codec:jar:2.0.0-M15:compile
[INFO] |  |  |  +- 
org.apache.directory.server:apacheds-i18n:jar:2.0.0-M15:compile
[INFO] |  |  |  +- org.apache.directory.api:api-asn1-api:jar:1.0.0-M20:compile
[INFO] |  |  |  \- org.apache.directory.api:api-util:jar:1.0.0-M20:compile
[INFO] |  |  \- org.apache.curator:curator-framework:jar:2.7.1:compile
[INFO] |  +- com.jcraft:jsch:jar:0.1.51:compile
[INFO] |  +- org.apache.curator:curator-client:jar:2.7.1:compile
[INFO] |  +- org.apache.curator:curator-recipes:jar:2.7.1:compile
[INFO] |  +- com.google.code.findbugs:jsr305:jar:3.0.0:compile
[INFO] |  +- org.apache.htrace:htrace-core4:jar:4.0.1-incubating:compile
[INFO] |  +- org.apache.zookeeper:zookeeper:jar:3.4.6:compile
[INFO] |  |  +- org.slf4j:slf4j-log4j12:jar:1.7.10:compile (version managed 
from 1.6.1)
[INFO] |  |  \- io.netty:netty:jar:3.6.2.Final:compile (version managed from 
3.7.0.Final)
[INFO] |  \- org.apache.commons:commons-compress:jar:1.4.1:compile
[INFO] | \- org.tukaani:xz:jar:1.0:compile
[INFO] +- org.apache.hadoop:hadoop-common:test-jar:tests:2.9.0-SNAPSHOT:test
[INFO] +- com.amazonaws:aws-java-sdk-s3:jar:1.11.45:compile
[INFO] |  +- 

[jira] [Commented] (HADOOP-13050) Upgrade to AWS SDK 10.10+ for Java 8u60+

2016-10-19 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590039#comment-15590039
 ] 

Chris Nauroth commented on HADOOP-13050:


Linking to HADOOP-13727, which is a workaround within hadoop-aws that we might 
be able to remove after upgrading the AWS SDK.

> Upgrade to AWS SDK 10.10+ for Java 8u60+
> 
>
> Key: HADOOP-13050
> URL: https://issues.apache.org/jira/browse/HADOOP-13050
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
> Attachments: HADOOP-13050-001.patch
>
>
> HADOOP-13044 highlights that AWS SDK 10.6, shipping in Hadoop 2.7+, doesn't 
> work on OpenJDK >= 8u60, because a change in the JDK broke the version of 
> Joda-Time that AWS uses.
> Fix: update the SDK. Though that implies updating httpcomponents: 
> HADOOP-12767.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13659) Upgrade jaxb-api version

2016-10-19 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15589987#comment-15589987
 ] 

Sean Mackrory edited comment on HADOOP-13659 at 10/19/16 10:08 PM:
---

Is the website you're looking at this one: https://jaxb.java.net/? This gets really 
confusing because they have pages with 2.2.11 in the URL (and the page doesn't 
exist if you replace it with 2.2.12), but the page content says 2.2.12: 
https://jaxb.java.net/2.2.11/docs/api/javax/xml/bind/JAXB.html.

I think what's happened is jaxb-ri, the reference implementation, is on 2.2.11. 
jaxb-api is a distinct artifact and is on version 2.2.12. The mailing list 
archives show a 2.2.11 release of jaxb-ri happening 
(https://java.net/projects/jaxb/lists/commits/archive/2014-10/message/5), and 
then an update to jaxb-api 2.2.12 
(https://java.net/projects/jaxb/lists/commits/archive/2014-10/message/12), and 
then I can't find a record of a release of jaxb-ri 2.2.12. 

So I think the correct version for us to target is 2.2.12, but I'm struggling 
to find any official-looking source of information about jaxb-api releases, so 
I could be wrong.


was (Author: mackrorysd):
Is the website you're looking at this one: https://jaxb.java.net/? This gets really 
confusing because they have pages with 2.2.11 in the URL (and the page doesn't 
exist if you replace it with 2.2.12), but the page content says 2.2.12: 
https://jaxb.java.net/2.2.11/docs/api/javax/xml/bind/JAXB.html.

I think what's happened is jaxb-ri, the reference implementation, is on 2.2.11. 
jaxb-api is a distinct artifact and is on version 2.2.12. The mailing list 
archives show a 2.2.11 release of jaxb-ri happening 
(https://java.net/projects/jaxb/lists/commits/archive/2014-10/message/5), and 
then an update to jaxb-api 2.2.12 
(https://java.net/projects/jaxb/lists/commits/archive/2014-10/message/12), and 
then I can't find a record of a release of jaxb-ri 2.2.11. 

So I think the correct version for us to target is 2.2.12, but I'm struggling 
to find any official-looking source of information about jaxb-api releases, so 
I could be wrong.

> Upgrade jaxb-api version
> 
>
> Key: HADOOP-13659
> URL: https://issues.apache.org/jira/browse/HADOOP-13659
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-13659.001.patch
>
>
> We're currently pulling in version 2.2.2 - I think we should upgrade to the 
> latest 2.2.12.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12082) Support multiple authentication schemes via AuthenticationFilter

2016-10-19 Thread Hrishikesh Gadre (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre updated HADOOP-12082:
--
Attachment: HADOOP-12082-branch-2.8.patch

Here is the patch for branch-2.8. It is identical to the branch-2 patch except 
for a small change in the import statements in 
DelegationTokenAuthenticationFilter.java.

> Support multiple authentication schemes via AuthenticationFilter
> 
>
> Key: HADOOP-12082
> URL: https://issues.apache.org/jira/browse/HADOOP-12082
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
> Attachments: HADOOP-12082-001.patch, HADOOP-12082-002.patch, 
> HADOOP-12082-003.patch, HADOOP-12082-004.patch, HADOOP-12082-005.patch, 
> HADOOP-12082-006.patch, HADOOP-12082-branch-2.8.patch, 
> HADOOP-12082-branch-2.patch, HADOOP-12082.patch, hadoop-ldap-auth-v2.patch, 
> hadoop-ldap-auth-v3.patch, hadoop-ldap-auth-v4.patch, 
> hadoop-ldap-auth-v5.patch, hadoop-ldap-auth-v6.patch, hadoop-ldap.patch, 
> multi-scheme-auth-support-poc.patch
>
>
> The requirement is to support an LDAP-based authentication scheme via the 
> Hadoop AuthenticationFilter. HADOOP-9054 added support for plugging in a 
> custom authentication scheme (in addition to Kerberos) via the 
> AltKerberosAuthenticationHandler class. But it selects the authentication 
> mechanism based on the User-Agent HTTP header, which does not conform to 
> HTTP protocol semantics.
> As per [RFC-2616|http://www.w3.org/Protocols/rfc2616/rfc2616.html]
> - HTTP protocol provides a simple challenge-response authentication mechanism 
> that can be used by a server to challenge a client request and by a client to 
> provide the necessary authentication information. 
> - This mechanism is initiated by server sending the 401 (Authenticate) 
> response with ‘WWW-Authenticate’ header which includes at least one challenge 
> that indicates the authentication scheme(s) and parameters applicable to the 
> Request-URI. 
> - In case server supports multiple authentication schemes, it may return 
> multiple challenges with a 401 (Authenticate) response, and each challenge 
> may use a different auth-scheme. 
> - A user agent MUST choose to use the strongest auth-scheme it understands 
> and request credentials from the user based upon that challenge.
> The existing Hadoop authentication filter implementation supports Kerberos 
> authentication scheme and uses ‘Negotiate’ as the challenge as part of 
> ‘WWW-Authenticate’ response header. As per the following documentation, 
> ‘Negotiate’ challenge scheme is only applicable to Kerberos (and Windows 
> NTLM) authentication schemes.
> [SPNEGO-based Kerberos and NTLM HTTP 
> Authentication|http://tools.ietf.org/html/rfc4559]
> [Understanding HTTP 
> Authentication|https://msdn.microsoft.com/en-us/library/ms789031%28v=vs.110%29.aspx]
> On the other hand for LDAP authentication, typically ‘Basic’ authentication 
> scheme is used (Note TLS is mandatory with Basic authentication scheme).
> http://httpd.apache.org/docs/trunk/mod/mod_authnz_ldap.html
> Hence for this feature, the idea would be to provide a custom implementation 
> of Hadoop AuthenticationHandler and Authenticator interfaces which would 
> support both schemes - Kerberos (via Negotiate auth challenge) and LDAP (via 
> Basic auth challenge). During the authentication phase, it would send both 
> the challenges and let client pick the appropriate one. If client responds 
> with an ‘Authorization’ header tagged with ‘Negotiate’ - it will use Kerberos 
> authentication. If client responds with an ‘Authorization’ header tagged with 
> ‘Basic’ - it will use LDAP authentication.
> Note - some HTTP clients (e.g. curl or Apache Http Java client) need to be 
> configured to use one scheme over the other e.g.
> - curl tool supports option to use either Kerberos (via --negotiate flag) or 
> username/password based authentication (via --basic and -u flags). 
> - Apache HttpClient library can be configured to use specific authentication 
> scheme.
> http://hc.apache.org/httpcomponents-client-ga/tutorial/html/authentication.html
> Typically web browsers automatically choose an authentication scheme based on 
> a notion of “strength” of security. e.g. take a look at the [design of Chrome 
> browser for HTTP 
> authentication|https://www.chromium.org/developers/design-documents/http-authentication]
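
A minimal sketch of the multi-challenge handshake described above (hedged: 
this is the idea, not the patch, and {{response}} is an 
{{HttpServletResponse}}; the realm string is illustrative):

{code}
// Issue both challenges and let the client pick the strongest scheme it
// understands, per RFC 2616.
response.setStatus(HttpServletResponse.SC_UNAUTHORIZED);
response.addHeader("WWW-Authenticate", "Negotiate");
response.addHeader("WWW-Authenticate", "Basic realm=\"ldap\"");
{code}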



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13659) Upgrade jaxb-api version

2016-10-19 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15589987#comment-15589987
 ] 

Sean Mackrory commented on HADOOP-13659:


Is the website you're looking at this one: https://jaxb.java.net/? This gets really 
confusing because they have pages with 2.2.11 in the URL (and the page doesn't 
exist if you replace it with 2.2.12), but the page content says 2.2.12: 
https://jaxb.java.net/2.2.11/docs/api/javax/xml/bind/JAXB.html.

I think what's happened is jaxb-ri, the reference implementation, is on 2.2.11. 
jaxb-api is a distinct artifact and is on version 2.2.12. The mailing list 
archives show a 2.2.11 release of jaxb-ri happening 
(https://java.net/projects/jaxb/lists/commits/archive/2014-10/message/5), and 
then an update to jaxb-api 2.2.12 
(https://java.net/projects/jaxb/lists/commits/archive/2014-10/message/12), and 
then I can't find a record of a release of jaxb-ri 2.2.11. 

So I think the correct version for us to target is 2.2.12, but I'm struggling 
to find any official-looking source of information about jaxb-api releases, so 
I could be wrong.

> Upgrade jaxb-api version
> 
>
> Key: HADOOP-13659
> URL: https://issues.apache.org/jira/browse/HADOOP-13659
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-13659.001.patch
>
>
> We're currently pulling in version 2.2.2 - I think we should upgrade to the 
> latest 2.2.12.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13736) Change PathMetadata to hold S3AFileStatus instead of FileStatus.

2016-10-19 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15589957#comment-15589957
 ] 

Chris Nauroth commented on HADOOP-13736:


+1 from me too.  Thanks!

> Change PathMetadata to hold S3AFileStatus instead of FileStatus.
> 
>
> Key: HADOOP-13736
> URL: https://issues.apache.org/jira/browse/HADOOP-13736
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>
> {{S3AFileStatus}} is implemented differently from {{FileStatus}}; for 
> instance, {{S3AFileStatus#isEmptyDirectory()}} is not implemented in 
> {{FileStatus}}. And {{access_time}}, {{block_replication}}, {{owner}}, 
> {{group}} and a few other fields are not meaningful in {{S3AFileStatus}}.
> So in the scope of {{S3Guard}}, it should use {{S3AFileStatus}} instead 
> of {{FileStatus}} in {{PathMetadata}} to avoid casting the types back and 
> forth in S3A.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13736) Change PathMetadata to hold S3AFileStatus instead of FileStatus.

2016-10-19 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15589945#comment-15589945
 ] 

Lei (Eddy) Xu commented on HADOOP-13736:


Thanks, [~liuml07]. I typed slowly.

> Change PathMetadata to hold S3AFileStatus instead of FileStatus.
> 
>
> Key: HADOOP-13736
> URL: https://issues.apache.org/jira/browse/HADOOP-13736
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>
> {{S3AFileStatus}} is implemented differently from {{FileStatus}}; for 
> instance, {{S3AFileStatus#isEmptyDirectory()}} is not implemented in 
> {{FileStatus}}. And {{access_time}}, {{block_replication}}, {{owner}}, 
> {{group}} and a few other fields are not meaningful in {{S3AFileStatus}}.
> So in the scope of {{S3Guard}}, it should use {{S3AFileStatus}} instead 
> of {{FileStatus}} in {{PathMetadata}} to avoid casting the types back and 
> forth in S3A.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13736) Change PathMetadata to hold S3AFileStatus instead of FileStatus.

2016-10-19 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15589944#comment-15589944
 ] 

Lei (Eddy) Xu commented on HADOOP-13736:


Ping [~cnauroth], [~liuml07] and [~fabbri]. What are your opinions on this?

I am going to provide a patch very soon if you guys are OK with the proposal.

Thanks.

> Change PathMetadata to hold S3AFileStatus instead of FileStatus.
> 
>
> Key: HADOOP-13736
> URL: https://issues.apache.org/jira/browse/HADOOP-13736
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>
> {{S3AFileStatus}} is implemented differently from {{FileStatus}}; for 
> instance, {{S3AFileStatus#isEmptyDirectory()}} is not implemented in 
> {{FileStatus}}. And {{access_time}}, {{block_replication}}, {{owner}}, 
> {{group}} and a few other fields are not meaningful in {{S3AFileStatus}}.
> So in the scope of {{S3Guard}}, it should use {{S3AFileStatus}} instead 
> of {{FileStatus}} in {{PathMetadata}} to avoid casting the types back and 
> forth in S3A.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13736) Change PathMetadata to hold S3AFileStatus instead of FileStatus.

2016-10-19 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15589939#comment-15589939
 ] 

Mingliang Liu commented on HADOOP-13736:


+1 for the proposal. Thanks.

> Change PathMetadata to hold S3AFileStatus instead of FileStatus.
> 
>
> Key: HADOOP-13736
> URL: https://issues.apache.org/jira/browse/HADOOP-13736
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>
> {{S3AFileStatus}} is implemented differently from {{FileStatus}}; for 
> instance, {{S3AFileStatus#isEmptyDirectory()}} is not implemented in 
> {{FileStatus}}. And {{access_time}}, {{block_replication}}, {{owner}}, 
> {{group}} and a few other fields are not meaningful in {{S3AFileStatus}}.
> So in the scope of {{S3Guard}}, it should use {{S3AFileStatus}} instead 
> of {{FileStatus}} in {{PathMetadata}} to avoid casting the types back and 
> forth in S3A.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13736) Change PathMetadata to hold S3AFileStatus instead of FileStatus.

2016-10-19 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HADOOP-13736:
--

 Summary: Change PathMetadata to hold S3AFileStatus instead of 
FileStatus.
 Key: HADOOP-13736
 URL: https://issues.apache.org/jira/browse/HADOOP-13736
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu


{{S3AFileStatus}} is implemented differently from {{FileStatus}}; for instance, 
{{S3AFileStatus#isEmptyDirectory()}} is not implemented in {{FileStatus}}. 
And {{access_time}}, {{block_replication}}, {{owner}}, {{group}} and a few 
other fields are not meaningful in {{S3AFileStatus}}.

So in the scope of {{S3Guard}}, it should use {{S3AFileStatus}} instead of 
{{FileStatus}} in {{PathMetadata}} to avoid casting the types back and forth in 
S3A.
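
A minimal sketch of the proposed change (hedged: field and accessor names are 
illustrative, not a final signature):

{code}
public class PathMetadata {
  // Hold the concrete s3a type rather than the base FileStatus.
  private final S3AFileStatus fileStatus;

  public PathMetadata(S3AFileStatus fileStatus) {
    this.fileStatus = fileStatus;
  }

  // Callers get S3AFileStatus back directly, so no casts in S3A code.
  public S3AFileStatus getFileStatus() {
    return fileStatus;
  }
}
{code}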



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13727) S3A: Reduce high number of connections to EC2 Instance Metadata Service caused by InstanceProfileCredentialsProvider.

2016-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15589914#comment-15589914
 ] 

Hadoop QA commented on HADOOP-13727:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 
39s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
55s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
16s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
26s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
25s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
31s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
26s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
30s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} root: The patch generated 0 new + 4 unchanged - 3 
fixed = 4 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 47 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
23s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_111. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}104m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  

[jira] [Updated] (HADOOP-12082) Support multiple authentication schemes via AuthenticationFilter

2016-10-19 Thread Hrishikesh Gadre (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre updated HADOOP-12082:
--
Attachment: HADOOP-12082-branch-2.patch

Here is the patch for branch-2. The main difference is that we need different 
dependencies for the ApacheDS libraries compared to trunk (since the fix for 
HADOOP-12911 is not available in branch-2).

> Support multiple authentication schemes via AuthenticationFilter
> 
>
> Key: HADOOP-12082
> URL: https://issues.apache.org/jira/browse/HADOOP-12082
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
> Attachments: HADOOP-12082-001.patch, HADOOP-12082-002.patch, 
> HADOOP-12082-003.patch, HADOOP-12082-004.patch, HADOOP-12082-005.patch, 
> HADOOP-12082-006.patch, HADOOP-12082-branch-2.patch, HADOOP-12082.patch, 
> hadoop-ldap-auth-v2.patch, hadoop-ldap-auth-v3.patch, 
> hadoop-ldap-auth-v4.patch, hadoop-ldap-auth-v5.patch, 
> hadoop-ldap-auth-v6.patch, hadoop-ldap.patch, 
> multi-scheme-auth-support-poc.patch
>
>
> The requirement is to support an LDAP-based authentication scheme via the 
> Hadoop AuthenticationFilter. HADOOP-9054 added support for plugging in a 
> custom authentication scheme (in addition to Kerberos) via the 
> AltKerberosAuthenticationHandler class. But it selects the authentication 
> mechanism based on the User-Agent HTTP header, which does not conform to 
> HTTP protocol semantics.
> As per [RFC-2616|http://www.w3.org/Protocols/rfc2616/rfc2616.html]
> - HTTP protocol provides a simple challenge-response authentication mechanism 
> that can be used by a server to challenge a client request and by a client to 
> provide the necessary authentication information. 
> - This mechanism is initiated by server sending the 401 (Authenticate) 
> response with ‘WWW-Authenticate’ header which includes at least one challenge 
> that indicates the authentication scheme(s) and parameters applicable to the 
> Request-URI. 
> - In case server supports multiple authentication schemes, it may return 
> multiple challenges with a 401 (Authenticate) response, and each challenge 
> may use a different auth-scheme. 
> - A user agent MUST choose to use the strongest auth-scheme it understands 
> and request credentials from the user based upon that challenge.
> The existing Hadoop authentication filter implementation supports Kerberos 
> authentication scheme and uses ‘Negotiate’ as the challenge as part of 
> ‘WWW-Authenticate’ response header. As per the following documentation, 
> ‘Negotiate’ challenge scheme is only applicable to Kerberos (and Windows 
> NTLM) authentication schemes.
> [SPNEGO-based Kerberos and NTLM HTTP 
> Authentication|http://tools.ietf.org/html/rfc4559]
> [Understanding HTTP 
> Authentication|https://msdn.microsoft.com/en-us/library/ms789031%28v=vs.110%29.aspx]
> On the other hand for LDAP authentication, typically ‘Basic’ authentication 
> scheme is used (Note TLS is mandatory with Basic authentication scheme).
> http://httpd.apache.org/docs/trunk/mod/mod_authnz_ldap.html
> Hence for this feature, the idea would be to provide a custom implementation 
> of Hadoop AuthenticationHandler and Authenticator interfaces which would 
> support both schemes - Kerberos (via Negotiate auth challenge) and LDAP (via 
> Basic auth challenge). During the authentication phase, it would send both 
> the challenges and let client pick the appropriate one. If client responds 
> with an ‘Authorization’ header tagged with ‘Negotiate’ - it will use Kerberos 
> authentication. If client responds with an ‘Authorization’ header tagged with 
> ‘Basic’ - it will use LDAP authentication.
> Note - some HTTP clients (e.g. curl or Apache Http Java client) need to be 
> configured to use one scheme over the other e.g.
> - curl tool supports option to use either Kerberos (via --negotiate flag) or 
> username/password based authentication (via --basic and -u flags). 
> - Apache HttpClient library can be configured to use specific authentication 
> scheme.
> http://hc.apache.org/httpcomponents-client-ga/tutorial/html/authentication.html
> Typically web browsers automatically choose an authentication scheme based on 
> a notion of “strength” of security. e.g. take a look at the [design of Chrome 
> browser for HTTP 
> authentication|https://www.chromium.org/developers/design-documents/http-authentication]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13727) S3A: Reduce high number of connections to EC2 Instance Metadata Service caused by InstanceProfileCredentialsProvider.

2016-10-19 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13727:
---
Status: Patch Available  (was: Open)

> S3A: Reduce high number of connections to EC2 Instance Metadata Service 
> caused by InstanceProfileCredentialsProvider.
> -
>
> Key: HADOOP-13727
> URL: https://issues.apache.org/jira/browse/HADOOP-13727
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HADOOP-13727-branch-2.001.patch, 
> HADOOP-13727-branch-2.002.patch, HADOOP-13727-branch-2.003.patch, 
> HADOOP-13727-branch-2.004.patch
>
>
> When running in an EC2 VM, S3A can make use of 
> {{InstanceProfileCredentialsProvider}} from the AWS SDK to obtain credentials 
> from the EC2 Instance Metadata Service.  We have observed that for a highly 
> multi-threaded application, this may generate a high number of calls to the 
> Instance Metadata Service.  The service may throttle the client by replying 
> with an HTTP 429 response or forcibly closing connections.  We can greatly 
> reduce the number of calls to the service by enforcing that all threads use a 
> single shared instance of {{InstanceProfileCredentialsProvider}}.
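
A minimal sketch of the shared-instance idea (hedged: not the committed 
patch, and the wrapper class name is illustrative):

{code}
// Every S3A client in the process reuses one provider, so credential
// refreshes against the Instance Metadata Service are no longer multiplied
// by the number of threads or filesystem instances.
public final class SharedInstanceProfileCredentialsProvider {
  private static final InstanceProfileCredentialsProvider INSTANCE =
      new InstanceProfileCredentialsProvider();

  private SharedInstanceProfileCredentialsProvider() {
  }

  public static InstanceProfileCredentialsProvider getInstance() {
    return INSTANCE;
  }
}
{code}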



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13727) S3A: Reduce high number of connections to EC2 Instance Metadata Service caused by InstanceProfileCredentialsProvider.

2016-10-19 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13727:
---
Attachment: HADOOP-13727-branch-2.004.patch

I'm attaching revision 004, which is the same thing rebased against current 
branch-2.

> S3A: Reduce high number of connections to EC2 Instance Metadata Service 
> caused by InstanceProfileCredentialsProvider.
> -
>
> Key: HADOOP-13727
> URL: https://issues.apache.org/jira/browse/HADOOP-13727
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HADOOP-13727-branch-2.001.patch, 
> HADOOP-13727-branch-2.002.patch, HADOOP-13727-branch-2.003.patch, 
> HADOOP-13727-branch-2.004.patch
>
>
> When running in an EC2 VM, S3A can make use of 
> {{InstanceProfileCredentialsProvider}} from the AWS SDK to obtain credentials 
> from the EC2 Instance Metadata Service.  We have observed that for a highly 
> multi-threaded application, this may generate a high number of calls to the 
> Instance Metadata Service.  The service may throttle the client by replying 
> with an HTTP 429 response or forcibly closing connections.  We can greatly 
> reduce the number of calls to the service by enforcing that all threads use a 
> single shared instance of {{InstanceProfileCredentialsProvider}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13727) S3A: Reduce high number of connections to EC2 Instance Metadata Service caused by InstanceProfileCredentialsProvider.

2016-10-19 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13727:
---
Target Version/s: 2.8.0, 3.0.0-alpha2  (was: 2.8.0)

> S3A: Reduce high number of connections to EC2 Instance Metadata Service 
> caused by InstanceProfileCredentialsProvider.
> -
>
> Key: HADOOP-13727
> URL: https://issues.apache.org/jira/browse/HADOOP-13727
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HADOOP-13727-branch-2.001.patch, 
> HADOOP-13727-branch-2.002.patch, HADOOP-13727-branch-2.003.patch, 
> HADOOP-13727-branch-2.004.patch
>
>
> When running in an EC2 VM, S3A can make use of 
> {{InstanceProfileCredentialsProvider}} from the AWS SDK to obtain credentials 
> from the EC2 Instance Metadata Service.  We have observed that for a highly 
> multi-threaded application, this may generate a high number of calls to the 
> Instance Metadata Service.  The service may throttle the client by replying 
> with an HTTP 429 response or forcibly closing connections.  We can greatly 
> reduce the number of calls to the service by enforcing that all threads use a 
> single shared instance of {{InstanceProfileCredentialsProvider}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10075) Update jetty dependency to version 9

2016-10-19 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15589483#comment-15589483
 ] 

Daniel Templeton commented on HADOOP-10075:
---

Latest patch looks great.  One more round of quibbles:

* Seems like maybe {{1024 * 64}} should be a constant since it appears several 
times.  It's also still in there as {{1024*64}} a few times.
* In {{TestHttpServer}}, it would be better to use the constants instead of 
{{"text/plain;charset=utf-8"}}.  (That, by the way, is what I meant with you 
stripping the space after the semicolon.  You stripped it here, but then added 
the space everywhere else.)
* In {{MiniKMS}}, you have {code}  
((ServerConnector)server.getConnectors()[0]).getHost() + ":" +
  ((ServerConnector)server.getConnectors()[0]).getLocalPort());{code}  
I believe the convention is to have the {{+}} at the start of the next line.  
Same thing in {{JobEndNotifier}}.
* In {{ResourceGzMojo}} you should probably catch {{Exception}} instead of 
{{Throwable}}: {code}} catch (Throwable t) {
  throw new MojoExecutionException(t.toString(), t);
}{code} and {code}} catch (Throwable t) {
  this.throwable = t;
}
  } catch (Throwable t) {
this.throwable = t;
  }{code}
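
A minimal sketch of the first and third quibbles (hedged: the constant name 
is illustrative):

{code}
// Name the magic number once instead of repeating 1024 * 64.
private static final int RESPONSE_BUFFER_SIZE = 1024 * 64;

// Wrap long concatenations with the + at the start of the next line.
String address = ((ServerConnector) server.getConnectors()[0]).getHost()
    + ":" + ((ServerConnector) server.getConnectors()[0]).getLocalPort();
{code}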


> Update jetty dependency to version 9
> 
>
> Key: HADOOP-10075
> URL: https://issues.apache.org/jira/browse/HADOOP-10075
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Robert Rati
>Assignee: Robert Kanter
>Priority: Critical
> Attachments: HADOOP-10075-002-wip.patch, HADOOP-10075.003.patch, 
> HADOOP-10075.004.patch, HADOOP-10075.005.patch, HADOOP-10075.006.patch, 
> HADOOP-10075.007.patch, HADOOP-10075.008.patch, HADOOP-10075.009.patch, 
> HADOOP-10075.patch
>
>
> Jetty6 is no longer maintained.  Update the dependency to jetty9.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10075) Update jetty dependency to version 9

2016-10-19 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15589256#comment-15589256
 ] 

Ravi Prakash commented on HADOOP-10075:
---

Thanks Robert! I think I am done with my feedback. If all the tests that 
passed earlier would pass with the new patch, I'm happy to +1 it.


> Update jetty dependency to version 9
> 
>
> Key: HADOOP-10075
> URL: https://issues.apache.org/jira/browse/HADOOP-10075
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Robert Rati
>Assignee: Robert Kanter
>Priority: Critical
> Attachments: HADOOP-10075-002-wip.patch, HADOOP-10075.003.patch, 
> HADOOP-10075.004.patch, HADOOP-10075.005.patch, HADOOP-10075.006.patch, 
> HADOOP-10075.007.patch, HADOOP-10075.008.patch, HADOOP-10075.009.patch, 
> HADOOP-10075.patch
>
>
> Jetty6 is no longer maintained.  Update the dependency to jetty9.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13734) ListStatus Returns Incorrect Result for Blank File on swift

2016-10-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13734:

Summary: ListStatus Returns Incorrect Result for Blank File on swift  (was: 
ListStatus Returns Incorrect Result for Blank File)

> ListStatus Returns Incorrect Result for Blank File on swift
> ---
>
> Key: HADOOP-13734
> URL: https://issues.apache.org/jira/browse/HADOOP-13734
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/swift
>Reporter: Kevin Huang
>
> Reproduce steps:
> 1. Create a blank file on Swift via a Swift client (e.g. Cyberduck)
> 2. Use Hadoop Swift API to get file status. The following is the code example:
> {code}
> Configuration hadoopConf = new Configuration();
> hadoopConf.addResource("swift-site.xml"); // Set Swift configurations
> FileSystem fs = FileSystem.get(new 
> URI("swift://containername.myprovider/"), hadoopConf);
> FileStatus[] statuses = fs.listStatus(new Path("/mydir"));
> for(FileStatus status : statuses) {
> System.out.println(status);
> }
> {code}
> Result:
> {code}
> SwiftFileStatus{ path=swift://bdd-edp.bddcs/mydir/blankfile; 
> isDirectory=true; length=0; blocksize=33554432; 
> modification_time=1476875293230}
> {code}
> The API treated blankfile as a directory, which is incorrect.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13716) Add LambdaTestUtils class for tests; fix eventual consistency problem in contract test setup

2016-10-19 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15589081#comment-15589081
 ] 

Steve Loughran commented on HADOOP-13716:
-

[~any]: have you had a chance to look at this later patch?

> Add LambdaTestUtils class for tests; fix eventual consistency problem in 
> contract test setup
> 
>
> Key: HADOOP-13716
> URL: https://issues.apache.org/jira/browse/HADOOP-13716
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13716-001.patch, HADOOP-13716-002.patch, 
> HADOOP-13716-003.patch, HADOOP-13716-005.patch, HADOOP-13716-006.patch, 
> HADOOP-13716-branch-2-004.patch
>
>
> To make our tests robust against timing problems and eventually consistent 
> stores, we need to do more spin & wait for state.
> We have some code in {{GenericTestUtils.waitFor}} to await a condition being 
> met, but the predicate it calls doesn't throw exceptions, so there's no way 
> for a probe to raise one, and all you get is the eventual "timed out" 
> message.
> We can do better, and in closure-ready languages (scala & scalatest, groovy 
> and some slider code) we've examples to follow. Some of that work has been 
> reimplemented slightly in {{S3ATestUtils.eventually}}
> I propose adding a class in the test tree, {{Eventually}} to be a 
> successor/replacement for these.
> # has an eventually/waitfor operation taking a predicate that throws an 
> exception
> # has an "evaluate" exception which tries to evaluate an answer until the 
> operation stops raising an exception. (again, from scalatest)
> # plugin backoff strategies (from Scalatest; lets you do exponential as well 
> as linear)
> # option of adding a special handler to generate the failure exception (e.g. 
> run more detailed diagnostics for the exception text, etc).
> # be Java 8 lambda expression friendly
> # be testable and tested itself.
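
A minimal usage sketch of what such a helper could look like (hedged: the 
{{LambdaTestUtils.eventually()}} name comes from this issue, but the 
parameter order, units and return convention here are assumptions, not the 
final API):

{code}
// Retry the probe every 500 ms for up to 30 s; the lambda may throw, and the
// last exception becomes the test failure instead of a bare "timed out".
LambdaTestUtils.eventually(30000, 500, () -> {
  FileStatus[] listing = fs.listStatus(path);
  assertEquals("directory listing not yet consistent", 1, listing.length);
  return listing;
});
{code}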



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13731) Cant compile Hadoop 2.7.2 on Ubuntu Xenial (16.04) with JDK 7/8

2016-10-19 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15588801#comment-15588801
 ] 

Kihwal Lee commented on HADOOP-13731:
-

I presume the Java was packaged by Ubuntu. You could file a bug with Ubuntu; 
they do work with upstream to fix bugs.

> Cant compile Hadoop 2.7.2 on Ubuntu Xenial (16.04) with JDK 7/8
> ---
>
> Key: HADOOP-13731
> URL: https://issues.apache.org/jira/browse/HADOOP-13731
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.7.2
> Environment: OS : Ubuntu 16.04 (Xenial)
> JDK: OpenJDK 7 and OpenJDK 8
>Reporter: Anant Sharma
>Priority: Critical
>  Labels: build
>
> I am trying to build Hadoop 2.7.2 (directly from upstream with no 
> modifications) using OpenJDK 7 on Ubuntu 16.04 (Xenial) but I get the 
> following errors. The result is the same with OpenJDK 8, but I switched back 
> to OpenJDK 7 since it's the recommended version. This is a critical issue 
> since I am unable to move beyond building Hadoop.
> Other configuration details:
> Protobuf: 2.5.0 (Built from source, backported aarch64 dependencies from 2.6)
> Maven: 3.3.9
> Command Line:
>  mvn package -Pdist -DskipTests -Dtar
> Build log:
> [INFO] Building jar: 
> /home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-auth-examples/target/hadoop-auth-examples-2.7.2-javadoc.jar
> [INFO]
> [INFO] 
> 
> [INFO] Building Apache Hadoop Common 2.7.2
> [INFO] 
> 
> [INFO]
> [INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-common ---
> [INFO] Executing tasks
> main:
> [mkdir] Created dir: 
> /home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-common/target/test-dir
> [mkdir] Created dir: 
> /home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-common/target/test/data
> [INFO] Executed tasks
> [INFO]
> [INFO] --- hadoop-maven-plugins:2.7.2:protoc (compile-protoc) @ hadoop-common 
> ---
> [INFO]
> [INFO] --- hadoop-maven-plugins:2.7.2:version-info (version-info) @ 
> hadoop-common ---
> [WARNING] [svn, info] failed with error code 1
> [WARNING] [git, branch] failed with error code 128
> [INFO] SCM: NONE
> [INFO] Computed MD5: d0fda26633fa762bff87ec759ebe689c
> [INFO]
> [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
> hadoop-common ---
> [INFO] Using 'UTF-8' encoding to copy filtered resources.
> [INFO] Copying 7 resources
> [INFO] Copying 1 resource
> [INFO]
> [INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ 
> hadoop-common ---
> [INFO] Changes detected - recompiling the module!
> [INFO] Compiling 852 source files to 
> /home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-common/target/classes
> An exception has occurred in the compiler (1.7.0_95). Please file a bug at 
> the Java Developer Connection (http://java.sun.com/webapps/bugreport)  after 
> checking the Bug Parade for duplicates. Include your program and the 
> following diagnostic in your report.  Thank you.
> java.lang.NullPointerException
> at com.sun.tools.javac.tree.TreeInfo.skipParens(TreeInfo.java:571)
> at com.sun.tools.javac.jvm.Gen.visitIf(Gen.java:1613)
> at com.sun.tools.javac.tree.JCTree$JCIf.accept(JCTree.java:1140)
> at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:719)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:705)
> at com.sun.tools.javac.jvm.Gen.genStats(Gen.java:756)
> at com.sun.tools.javac.jvm.Gen.visitBlock(Gen.java:1031)
> at com.sun.tools.javac.tree.JCTree$JCBlock.accept(JCTree.java:781)
> at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:719)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:705)
> at com.sun.tools.javac.jvm.Gen.genLoop(Gen.java:1080)
> at com.sun.tools.javac.jvm.Gen.visitForLoop(Gen.java:1051)
> at com.sun.tools.javac.tree.JCTree$JCForLoop.accept(JCTree.java:872)
> at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:719)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:705)
> at com.sun.tools.javac.jvm.Gen.genStats(Gen.java:756)
> at com.sun.tools.javac.jvm.Gen.visitBlock(Gen.java:1031)
> at com.sun.tools.javac.tree.JCTree$JCBlock.accept(JCTree.java:781)
> at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:719)
> at com.sun.tools.javac.jvm.Gen.genMethod(Gen.java:912)
> at 

[jira] [Commented] (HADOOP-13735) ITestS3AFileContextStatistics.testStatistics()

2016-10-19 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15588742#comment-15588742
 ] 

Steve Loughran commented on HADOOP-13735:
-


{code}
Running org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextStatistics
Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 5.823 sec <<< 
FAILURE! - in org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextStatistics
testStatistics(org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextStatistics)
  Time elapsed: 3.356 sec  <<< FAILURE!
java.lang.AssertionError: expected:<512> but was:<0>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextStatistics.verifyWrittenBytes(ITestS3AFileContextStatistics.java:54)
{code}

> ITestS3AFileContextStatistics.testStatistics()
> --
>
> Key: HADOOP-13735
> URL: https://issues.apache.org/jira/browse/HADOOP-13735
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Priority: Minor
>
> The test {{ITestS3AFileContextStatistics.testStatistics()}} seems to fail 
> pretty reliably these days...I'd assumed it was some race condition, but 
> maybe not. 
> Fixing this will probably entail adding more diagnostics to the base test 
> case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13735) ITestS3AFileContextStatistics.testStatistics()

2016-10-19 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13735:
---

 Summary: ITestS3AFileContextStatistics.testStatistics()
 Key: HADOOP-13735
 URL: https://issues.apache.org/jira/browse/HADOOP-13735
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.8.0
Reporter: Steve Loughran
Priority: Minor


The test {{ITestS3AFileContextStatistics.testStatistics()}} seems to fail 
pretty reliably these days...I'd assumed it was some race condition, but maybe 
not. 

Fixing this will probably entail adding more diagnostics to the base test case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13614) Purge some superfluous/obsolete S3 FS tests that are slowing test runs down

2016-10-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13614:

Attachment: HADOOP-13614-branch-2-007.patch

patch 007

* apply to branch-2
* verify that all appears well. 
* Tune more tests, including ITestS3AMiniYarnCluster.

A big change here is fixing up the scale tests to work as subclasses of 
S3AScaleTestBase, itself an indirect subclass of AbstractFSContractTestBase, 
because that sets up the test timeout rule. Rather than have a field of the 
same name and hope that its timeout gets picked up, I've tuned how timeouts 
get set up, so the subclasses do it. All well and good, except those subclass 
methods are being called during the initialization of the base class, that 
is: before the subclasses are fully initialized. I don't ever like doing 
that, though it is working here.

Tested S3A Ireland; two (unrelated) failures:
{code}
  
ITestS3AContractRootDir>AbstractContractRootDirectoryTest.testRmEmptyRootDirNonRecursive:97->Assert.fail:88
 Deletion of child entries failed, still have1
  s3a://hwdev-steve-ireland-new/fork-5

  
ITestS3AFileContextStatistics>FCStatisticsBaseTest.testStatistics:102->verifyWrittenBytes:54
 expected:<512> but was:<0>
{code}

Time to run without scale tests, ~4 minutes, going to 6:30 with the scale tests 
@ default size in.

> Purge some superfluous/obsolete S3 FS tests that are slowing test runs down
> ---
>
> Key: HADOOP-13614
> URL: https://issues.apache.org/jira/browse/HADOOP-13614
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13614-branch-2-001.patch, 
> HADOOP-13614-branch-2-002.patch, HADOOP-13614-branch-2-002.patch, 
> HADOOP-13614-branch-2-004.patch, HADOOP-13614-branch-2-005.patch, 
> HADOOP-13614-branch-2-006.patch, HADOOP-13614-branch-2-007.patch, testrun.txt
>
>
> Some of the slow test cases contain tests that are now obsoleted by newer 
> ones. For example, {{ITestS3ADeleteManyFiles}} has the test case 
> {{testOpenCreate()}} which writes then reads files of up to 25 MB.
> Have a look at which of the s3a tests are taking time, review them to see if 
> newer tests have superseded the slow ones, and cut them where appropriate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13514) Upgrade surefire to 2.19.1

2016-10-19 Thread Ewan Higgs (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15588707#comment-15588707
 ] 

Ewan Higgs commented on HADOOP-13514:
-

It should be obvious that a minor version bump will not require a new unit test.

> Upgrade surefire to 2.19.1
> --
>
> Key: HADOOP-13514
> URL: https://issues.apache.org/jira/browse/HADOOP-13514
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Ewan Higgs
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: surefire-2.19.patch
>
>
> A lot of people working on Hadoop don't want to run all the tests when they 
> develop; only the bits they're working on. Surefire 2.19 introduced more 
> useful test filters which let us run a subset of the tests that brings the 
> build time down from 'come back tomorrow' to 'grab a coffee'.
> For instance, if I only care about the S3 adaptor, I might run:
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
> \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
> org.apache.hadoop.fs.s3a.*\"
> {code}
> We can work around this by specifying the surefire version on the command 
> line but it would be better, imo, to just update the default surefire used.
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
> \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
> org.apache.hadoop.fs.s3a.*\" -Dmaven-surefire-plugin.version=2.19.1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13733) Support WASB connections through an HTTP proxy server.

2016-10-19 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15588520#comment-15588520
 ] 

Steve Loughran commented on HADOOP-13733:
-

I'd look at the unfinished patch from Larry on HADOOP-12804, which grabs the 
proxy password from a jceks file. There are also tests in 
{{ITestS3AConfiguration}} which verify proxy propagation.

> Support WASB connections through an HTTP proxy server.
> --
>
> Key: HADOOP-13733
> URL: https://issues.apache.org/jira/browse/HADOOP-13733
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Chris Nauroth
>
> WASB currently does not support use of an HTTP proxy server to connect to the 
> Azure Storage back-end.  The Azure Storage SDK does support use of a proxy, 
> so we can enhance WASB to read proxy settings from configuration and pass 
> them along in the Azure Storage SDK calls.
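A rough sketch of what that could look like; the configuration key names below 
are purely illustrative (nothing has been settled yet), and the hand-off to the 
SDK is only indicated in a comment:

{code}
// Hypothetical sketch only: the fs.azure.proxy.* key names are made up.
Configuration conf = new Configuration();
String proxyHost = conf.getTrimmed("fs.azure.proxy.host", "");
int proxyPort = conf.getInt("fs.azure.proxy.port", 8080);
if (!proxyHost.isEmpty()) {
  // Build a java.net.Proxy from the configured host and port...
  Proxy proxy = new Proxy(Proxy.Type.HTTP,
      new InetSocketAddress(proxyHost, proxyPort));
  // ...then hand it to the Azure Storage SDK calls (e.g. via its
  // OperationContext) when WASB talks to the storage back-end.
}
{code}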



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13731) Cant compile Hadoop 2.7.2 on Ubuntu Xenial (16.04) with JDK 7/8

2016-10-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-13731.
-
Resolution: Won't Fix

Going to have to close this as a wontfix, I'm afraid: it's a bug in javac. 
Nobody else has reported this, so file it with the javac team, as the stack 
trace says, then try with the Oracle JDK:
{code}
Please file a bug at the Java Developer Connection 
(http://java.sun.com/webapps/bugreport) after checking the Bug Parade for 
duplicates. Include your program and the following diagnostic in your report. 
Thank you.
{code}

> Cant compile Hadoop 2.7.2 on Ubuntu Xenial (16.04) with JDK 7/8
> ---
>
> Key: HADOOP-13731
> URL: https://issues.apache.org/jira/browse/HADOOP-13731
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.7.2
> Environment: OS : Ubuntu 16.04 (Xenial)
> JDK: OpenJDK 7 and OpenJDK 8
>Reporter: Anant Sharma
>Priority: Critical
>  Labels: build
>
> I am trying to build Hadoop 2.7.2 (direct from upstream with no 
> modifications) using OpenJDK 7 on Ubuntu 16.04 (Xenial) but I get the 
> following errors. The result is the same with OpenJDK 8, but I switched back 
> to OpenJDK 7 since it's the recommended version. This is a critical issue 
> since I am unable to move beyond building Hadoop.
> Other configuration details:
> Protobuf: 2.5.0 (Built from source, backported aarch64 dependencies from 2.6)
> Maven: 3.3.9
> Command Line:
>  mvn package -Pdist -DskipTests -Dtar
> Build log:
> [INFO] Building jar: 
> /home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-auth-examples/target/hadoop-auth-examples-2.7.2-javadoc.jar
> [INFO]
> [INFO] 
> 
> [INFO] Building Apache Hadoop Common 2.7.2
> [INFO] 
> 
> [INFO]
> [INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-common ---
> [INFO] Executing tasks
> main:
> [mkdir] Created dir: 
> /home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-common/target/test-dir
> [mkdir] Created dir: 
> /home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-common/target/test/data
> [INFO] Executed tasks
> [INFO]
> [INFO] --- hadoop-maven-plugins:2.7.2:protoc (compile-protoc) @ hadoop-common 
> ---
> [INFO]
> [INFO] --- hadoop-maven-plugins:2.7.2:version-info (version-info) @ 
> hadoop-common ---
> [WARNING] [svn, info] failed with error code 1
> [WARNING] [git, branch] failed with error code 128
> [INFO] SCM: NONE
> [INFO] Computed MD5: d0fda26633fa762bff87ec759ebe689c
> [INFO]
> [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
> hadoop-common ---
> [INFO] Using 'UTF-8' encoding to copy filtered resources.
> [INFO] Copying 7 resources
> [INFO] Copying 1 resource
> [INFO]
> [INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ 
> hadoop-common ---
> [INFO] Changes detected - recompiling the module!
> [INFO] Compiling 852 source files to 
> /home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-common/target/classes
> An exception has occurred in the compiler (1.7.0_95). Please file a bug at 
> the Java Developer Connection (http://java.sun.com/webapps/bugreport)  after 
> checking the Bug Parade for duplicates. Include your program and the 
> following diagnostic in your report.  Thank you.
> java.lang.NullPointerException
> at com.sun.tools.javac.tree.TreeInfo.skipParens(TreeInfo.java:571)
> at com.sun.tools.javac.jvm.Gen.visitIf(Gen.java:1613)
> at com.sun.tools.javac.tree.JCTree$JCIf.accept(JCTree.java:1140)
> at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:719)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:705)
> at com.sun.tools.javac.jvm.Gen.genStats(Gen.java:756)
> at com.sun.tools.javac.jvm.Gen.visitBlock(Gen.java:1031)
> at com.sun.tools.javac.tree.JCTree$JCBlock.accept(JCTree.java:781)
> at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:719)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:705)
> at com.sun.tools.javac.jvm.Gen.genLoop(Gen.java:1080)
> at com.sun.tools.javac.jvm.Gen.visitForLoop(Gen.java:1051)
> at com.sun.tools.javac.tree.JCTree$JCForLoop.accept(JCTree.java:872)
> at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:719)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:705)
> at com.sun.tools.javac.jvm.Gen.genStats(Gen.java:756)
> at com.sun.tools.javac.jvm.Gen.visitBlock(Gen.java:1031)
> at 

[jira] [Commented] (HADOOP-13730) After 5 connection failures, yarn stops sending metrics graphite until restarted

2016-10-19 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15588509#comment-15588509
 ] 

Steve Loughran commented on HADOOP-13730:
-

It's good to have retries to deal with transient outages and race conditions 
in cluster launch/teardown, but at the same time, retrying forever can have 
adverse consequences (including, if the stack is logged, log overflow; nothing 
like coming in to find a 20GB error log).

I'd think about using the retry policies in 
{{org.apache.hadoop.io.retry.RetryPolicies}}; they are there for exactly this 
kind of failure.
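As a rough illustration of that suggestion (a sketch only, not the patch: 
{{writeToGraphite()}} is a made-up stand-in for the sink's write, and the loop 
is assumed to live in a method declared {{throws Exception}}):

{code}
// Sketch: bound the retries with a Hadoop retry policy instead of
// disabling the sink permanently after N failures.
// Uses org.apache.hadoop.io.retry.{RetryPolicy, RetryPolicies}.
RetryPolicy policy = RetryPolicies.retryUpToMaximumCountWithFixedSleep(
    5, 30, TimeUnit.SECONDS);
int failures = 0;
while (true) {
  try {
    writeToGraphite(metrics);  // hypothetical stand-in for the sink's write
    break;
  } catch (IOException e) {
    RetryPolicy.RetryAction action =
        policy.shouldRetry(e, failures++, 0, true);
    if (action.action != RetryPolicy.RetryAction.RetryDecision.RETRY) {
      throw e;  // the policy says stop retrying
    }
    Thread.sleep(action.delayMillis);
  }
}
{code}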

> After 5 connection failures, yarn stops sending metrics graphite until 
> restarted
> 
>
> Key: HADOOP-13730
> URL: https://issues.apache.org/jira/browse/HADOOP-13730
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.2
>Reporter: Sean Young
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: 
> 0001-Graphite-can-be-unreachable-for-some-time-and-come-b.patch
>
>
> We've had issues in production where metrics stopped. We found the following 
> in the log files:
> 2016-09-02 21:44:32,493 WARN org.apache.hadoop.metrics2.sink.GraphiteSink: 
> Error sending metrics to Graphite
> java.net.SocketException: Broken pipe
> at java.net.SocketOutputStream.socketWrite0(Native Method)
> at 
> java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:120)
> at java.net.SocketOutputStream.write(SocketOutputStream.java:164)
> at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:233)
> at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:294)
> at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:137)
> at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:147)
> at java.io.OutputStreamWriter.write(OutputStreamWriter.java:270)
> at java.io.Writer.write(Writer.java:154)
> at 
> org.apache.hadoop.metrics2.sink.GraphiteSink$Graphite.write(GraphiteSink.java:170)
> at 
> org.apache.hadoop.metrics2.sink.GraphiteSink.putMetrics(GraphiteSink.java:98)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:186)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:43)
> at 
> org.apache.hadoop.metrics2.impl.SinkQueue.consumeAll(SinkQueue.java:87)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.publishMetricsFromQueue(MetricsSinkAdapter.java:134)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter$1.run(MetricsSinkAdapter.java:88)
> 2016-09-03 00:03:04,335 WARN org.apache.hadoop.metrics2.sink.GraphiteSink: 
> Error sending metrics to Graphite
> java.net.SocketException: Broken pipe
> at java.net.SocketOutputStream.socketWrite0(Native Method)
> at 
> java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:120)
> at java.net.SocketOutputStream.write(SocketOutputStream.java:164)
> at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:233)
> at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:294)
> at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:137)
> at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:147)
> at java.io.OutputStreamWriter.write(OutputStreamWriter.java:270)
> at java.io.Writer.write(Writer.java:154)
> at 
> org.apache.hadoop.metrics2.sink.GraphiteSink$Graphite.write(GraphiteSink.java:170)
> at 
> org.apache.hadoop.metrics2.sink.GraphiteSink.putMetrics(GraphiteSink.java:98)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:186)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:43)
> at 
> org.apache.hadoop.metrics2.impl.SinkQueue.consumeAll(SinkQueue.java:87)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.publishMetricsFromQueue(MetricsSinkAdapter.java:134)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter$1.run(MetricsSinkAdapter.java:88)
> 2016-09-03 00:20:35,436 WARN org.apache.hadoop.metrics2.sink.GraphiteSink: 
> Error sending metrics to Graphite
> java.net.SocketException: Connection timed out
> at java.net.SocketOutputStream.socketWrite0(Native Method)
> at 
> java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:120)
> at java.net.SocketOutputStream.write(SocketOutputStream.java:164)
> at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:233)
> at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:294)
> at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:137)
> at 

[jira] [Commented] (HADOOP-13734) ListStatus Returns Incorrect Result for Blank File

2016-10-19 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15588501#comment-15588501
 ] 

Steve Loughran commented on HADOOP-13734:
-

There's a problem here in that the Swift client uses an empty file as a marker 
for "a directory"; there's no way of distinguishing the two. This is something 
that S3N initiated, and which SwiftFS copies.

If we pull it, stuff which looks for (empty) directories is going to break, e.g.

{code}
Path path = new Path("dir");
fs.mkdirs(path);
FileStatus stat = fs.getFileStatus(path);
assert stat.isDirectory();
{code}

There's no easy answer here. S3A, for S3, uses files ending in "/" for 
directory markers, which does provide more differentiation, but complicates 
other things (a call to getFileStatus() takes 3 HTTP requests).

> ListStatus Returns Incorrect Result for Blank File
> --
>
> Key: HADOOP-13734
> URL: https://issues.apache.org/jira/browse/HADOOP-13734
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/swift
>Reporter: Kevin Huang
>
> Reproduce steps:
> 1. Create a blank file on Swift via a Swift client (e.g. Cyberduck)
> 2. Use the Hadoop Swift API to get the file status. The following is the code example:
> {code}
> Configuration hadoopConf = new Configuration();
> hadoopConf.addResource("swift-site.xml"); // Set Swift configurations
> FileSystem fs = FileSystem.get(new 
> URI("swift://containername.myprovider/"), hadoopConf);
> FileStatus[] statuses = fs.listStatus(new Path("/mydir"));
> for(FileStatus status : statuses) {
> System.out.println(status);
> }
> {code}
> Result:
> {code}
> SwiftFileStatus{ path=swift://bdd-edp.bddcs/mydir/blankfile; 
> isDirectory=true; length=0; blocksize=33554432; 
> modification_time=1476875293230}
> {code}
> The API treated blankfile as a directory. That is incorrect.






[jira] [Commented] (HADOOP-13734) ListStatus Returns Incorrect Result for Blank File

2016-10-19 Thread Kevin Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15588461#comment-15588461
 ] 

Kevin Huang commented on HADOOP-13734:
--

I have found where the bug happens in the source code; see 
[code|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeFileSystemStore.java#L247].
However, I have no idea how to fix it so far.

> ListStatus Returns Incorrect Result for Blank File
> --
>
> Key: HADOOP-13734
> URL: https://issues.apache.org/jira/browse/HADOOP-13734
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/swift
>Reporter: Kevin Huang
>
> Reproduce steps:
> 1. Create a blank file on Swift via a Swift client (e.g. Cyberduck)
> 2. Use the Hadoop Swift API to get the file status. The following is the code example:
> {code}
> Configuration hadoopConf = new Configuration();
> hadoopConf.addResource("swift-site.xml"); // Set Swift configurations
> FileSystem fs = FileSystem.get(new 
> URI("swift://containername.myprovider/"), hadoopConf);
> FileStatus[] statuses = fs.listStatus(new Path("/mydir"));
> for(FileStatus status : statuses) {
> System.out.println(status);
> }
> {code}
> Result:
> {code}
> SwiftFileStatus{ path=swift://bdd-edp.bddcs/mydir/blankfile; 
> isDirectory=true; length=0; blocksize=33554432; 
> modification_time=1476875293230}
> {code}
> The API treated blankfile as a directory. That is incorrect.






[jira] [Commented] (HADOOP-13222) s3a.mkdirs() to delete empty fake parent directories

2016-10-19 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15588449#comment-15588449
 ] 

Steve Loughran commented on HADOOP-13222:
-

There's a test case in {{org.apache.hadoop.fs.s3a.ITestS3AFileOperationCost}} 
awaiting this patch.

> s3a.mkdirs() to delete empty fake parent directories
> 
>
> Key: HADOOP-13222
> URL: https://issues.apache.org/jira/browse/HADOOP-13222
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Priority: Minor
>
> {{S3AFileSystem.mkdirs()}} has a TODO comment: what to do about fake parent 
> directories.
> The answer is: as with files, they should be deleted. This can be done 
> asynchronously.
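A minimal sketch of the idea, assuming a helper that removes a fake parent 
directory object when appropriate ({{deleteFakeParentIfPresent()}} is a made-up 
name, and the walk could be handed to an executor to run asynchronously):

{code}
// Sketch only: after mkdirs(path), remove any now-redundant fake
// parent directory markers by walking up the path.
Path parent = path.getParent();
while (parent != null && !parent.isRoot()) {
  deleteFakeParentIfPresent(parent);  // hypothetical helper
  parent = parent.getParent();
}
{code}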






[jira] [Updated] (HADOOP-13734) ListStatus Returns Incorrect Result for Blank File

2016-10-19 Thread Kevin Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Huang updated HADOOP-13734:
-
Description: 
Reproduce steps:
1. Create a blank file on Swift via a Swift client (e.g. Cyberduck)
2. Use the Hadoop Swift API to get the file status. The following is the code example:
{code}
Configuration hadoopConf = new Configuration();
hadoopConf.addResource("swift-site.xml"); // Set Swift configurations

FileSystem fs = FileSystem.get(new 
URI("swift://containername.myprovider/"), hadoopConf);
FileStatus[] statuses = fs.listStatus(new Path("/mydir"));
for(FileStatus status : statuses) {
System.out.println(status);
}
{code}
Result:
{code}
SwiftFileStatus{ path=swift://bdd-edp.bddcs/mydir/blankfile; isDirectory=true; 
length=0; blocksize=33554432; modification_time=1476875293230}
{code}

The API treated blankfile as a directory. That is incorrect.

  was:
Reproduce steps:
1. Create a blank file on Swift via a Swift client (e.g. Cyberduck)
2. Use the Hadoop Swift API to get the file status. The following is the code example:
{code}
Configuration hadoopConf = new Configuration();
hadoopConf.addResource("swift-site.xml"); // Set Swift configurations

FileSystem fs = FileSystem.get(new 
URI("swift://containername.myprovider/"), hadoopConf);
FileStatus[] statuses = fs.listStatus(new Path("/mydir"));
for(FileStatus status : statuses) {
System.out.println(status);
}
{code}
Result:
{code}
SwiftFileStatus{ path=swift://bdd-edp.bddcs/mydir/blankfile; isDirectory=true; 
length=0; blocksize=33554432; modification_time=1476875293230}
{code}



> ListStatus Returns Incorrect Result for Blank File
> --
>
> Key: HADOOP-13734
> URL: https://issues.apache.org/jira/browse/HADOOP-13734
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/swift
>Reporter: Kevin Huang
>
> Reproduce steps:
> 1. Create a blank file on Swift via a Swift client (e.g. Cyberduck)
> 2. Use the Hadoop Swift API to get the file status. The following is the code example:
> {code}
> Configuration hadoopConf = new Configuration();
> hadoopConf.addResource("swift-site.xml"); // Set Swift configurations
> FileSystem fs = FileSystem.get(new 
> URI("swift://containername.myprovider/"), hadoopConf);
> FileStatus[] statuses = fs.listStatus(new Path("/mydir"));
> for(FileStatus status : statuses) {
> System.out.println(status);
> }
> {code}
> Result:
> {code}
> SwiftFileStatus{ path=swift://bdd-edp.bddcs/mydir/blankfile; 
> isDirectory=true; length=0; blocksize=33554432; 
> modification_time=1476875293230}
> {code}
> The API treated blankfile as a directory. That is incorrect.






[jira] [Created] (HADOOP-13734) ListStatus Returns Incorrect Result for Blank File

2016-10-19 Thread Kevin Huang (JIRA)
Kevin Huang created HADOOP-13734:


 Summary: ListStatus Returns Incorrect Result for Blank File
 Key: HADOOP-13734
 URL: https://issues.apache.org/jira/browse/HADOOP-13734
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/swift
Reporter: Kevin Huang


Reproduce steps:
1. Create a blank file on Swift via a Swift client (e.g. Cyberduck)
2. Use the Hadoop Swift API to get the file status. The following is the code example:
{code}
Configuration hadoopConf = new Configuration();
hadoopConf.addResource("swift-site.xml"); // Set Swift configurations

FileSystem fs = FileSystem.get(new 
URI("swift://containername.myprovider/"), hadoopConf);
FileStatus[] statuses = fs.listStatus(new Path("/mydir"));
for(FileStatus status : statuses) {
System.out.println(status);
}
{code}
Result:
{code}
SwiftFileStatus{ path=swift://bdd-edp.bddcs/mydir/blankfile; isDirectory=true; 
length=0; blocksize=33554432; modification_time=1476875293230}
{code}







[jira] [Updated] (HADOOP-13560) S3ABlockOutputStream to support huge (many GB) file writes

2016-10-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13560:

Description: 
There are two output stream mechanisms in Hadoop 2.7.x, neither of which 
handles massive multi-GB files that well.

"classic": buffer everything to HDD until the close() operation; time to 
close becomes O(data), as does the disk space needed. It fails to exploit 
idle bandwidth, and on EC2 VMs with not much HDD capacity (especially when 
competing with HDFS storage), it can fill up the disk.

{{S3AFastOutputStream}} uploads data in partition-sized blocks, buffering via 
byte arrays. It avoids the disk problems, and as it writes as soon as the first 
partition is ready, close() time is O(outstanding-data). However, it needs 
tuning to reduce the amount of data buffered: get it wrong, and the first clue 
you get may be that the process goes OOM or is killed by YARN. Which is a 
shame, as get it right and operations which generate lots of data, including 
distcp, complete much faster.

This patch proposes a new output stream, a successor to both: 
{{S3ABlockOutputStream}}.

# uses the block upload model of S3AFastOutputStream
# supports buffering via HDD, heap and (recycled) byte buffers, offering a 
choice between memory and HDD use. HDD: no OOM problems on small JVMs, no need 
to tune.
# uses the fast output stream mechanism of limiting queue size for data to 
upload. Even when buffering via HDD, you may need to limit that use.
# lots of instrumentation to see what's being written.
# good defaults out of the box (e.g. buffer to HDD, partition size chosen to 
strike a good balance of early upload and scalability)
# robust against transient failures. The AWS SDK retries a PUT on failure; the 
entire block may need to be replayed, so HDD input cannot be buffered via 
{{java.io.BufferedInputStream}}. It has also surfaced in testing that if the 
final commit of a multipart upload fails, it isn't retried, at least in the 
current SDK in use. Do that ourselves.
# use round-robin directory allocation, for the most effective disk use
# take an AWS SDK {{com.amazonaws.event.ProgressListener}} for progress 
callbacks, giving more detail on the operation. (It actually takes a 
{{org.apache.hadoop.util.Progressable}}, but if that also implements the AWS 
interface, that is used instead.)

All of this is to come with scale tests:

* generate large files using all the buffer mechanisms
* do a large copy/rename and verify that the copy really works, including 
metadata
* be configurable with sizes up to multi-GB, which also means that the test 
timeouts need to be configurable to match the time the tests can take.
* as they are slow, make them optional, using the {{-Dscale}} switch to enable 
them.

Verifying large file rename is important on its own, as it is needed for very 
large commit operations for committers using rename.

The goal here is to implement a single object stream which can be used for all 
outputs, tunable as to whether to use disk or memory and as to queue sizes, but 
otherwise be all that's needed. We can do future development on this, remove 
its predecessor {{S3AFastOutputStream}} (keeping docs and testing down), and 
leave the original {{S3AOutputStream}} alone for regression testing/fallback.
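As an illustration of the buffering choice this offers (a sketch; treat the 
exact key names and values as provisional at this stage):

{code}
// Illustrative only: select the buffer mechanism and block size.
Configuration conf = new Configuration();
// buffer blocks on disk ("disk"), on-heap ("array"), or off-heap ("bytebuffer")
conf.set("fs.s3a.fast.upload.buffer", "disk");
// partition/block size at which queued blocks are uploaded
conf.setLong("fs.s3a.multipart.size", 100 * 1024 * 1024);
{code}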

  was:
An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights 
that metadata isn't copied on large copies.

1. Add a test to do that large copy/rename and verify that the copy really works
2. Verify that metadata makes it over.

Verifying large file rename is important on its own, as it is needed for very 
large commit operations for committers using rename


> S3ABlockOutputStream to support huge (many GB) file writes
> --
>
> Key: HADOOP-13560
> URL: https://issues.apache.org/jira/browse/HADOOP-13560
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13560-branch-2-001.patch, 
> HADOOP-13560-branch-2-002.patch, HADOOP-13560-branch-2-003.patch, 
> HADOOP-13560-branch-2-004.patch
>
>
> There are two output stream mechanisms in Hadoop 2.7.x, neither of which 
> handles massive multi-GB files that well.
> "classic": buffer everything to HDD until the close() operation; time to 
> close becomes O(data), as does the disk space needed. It fails to exploit 
> idle bandwidth, and on EC2 VMs with not much HDD capacity (especially when 
> competing with HDFS storage), it can fill up the disk.
> {{S3AFastOutputStream}} uploads data in partition-sized blocks, buffering via 
> byte arrays. It avoids the disk problems, and as it writes as soon as the 
> first partition is ready, close() time is O(outstanding-data). However, it 
> needs tuning to 

[jira] [Updated] (HADOOP-13560) S3ABlockOutputStream to support huge (many GB) file writes

2016-10-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13560:

Parent Issue: HADOOP-11694  (was: HADOOP-13204)

> S3ABlockOutputStream to support huge (many GB) file writes
> --
>
> Key: HADOOP-13560
> URL: https://issues.apache.org/jira/browse/HADOOP-13560
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13560-branch-2-001.patch, 
> HADOOP-13560-branch-2-002.patch, HADOOP-13560-branch-2-003.patch, 
> HADOOP-13560-branch-2-004.patch
>
>
> An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights 
> that metadata isn't copied on large copies.
> 1. Add a test to do that large copy/rename and verify that the copy really 
> works
> 2. Verify that metadata makes it over.
> Verifying large file rename is important on its own, as it is needed for very 
> large commit operations for committers using rename






[jira] [Updated] (HADOOP-13703) S3ABlockOutputStream to pass Yetus & Jenkins

2016-10-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13703:

Fix Version/s: 2.8.0

> S3ABlockOutputStream to pass Yetus & Jenkins
> 
>
> Key: HADOOP-13703
> URL: https://issues.apache.org/jira/browse/HADOOP-13703
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.8.0
>
> Attachments: HADOOP-13560-012.patch, HADOOP-13560-branch-2-010.patch, 
> HADOOP-13560-branch-2-011.patch, HADOOP-13560-branch-2-013.patch, 
> HADOOP-13560-branch-2-014.patch, HADOOP-13560-branch-2-015.patch, 
> HADOOP-13560-branch-2-016.patch
>
>
> The HADOOP-13560 patches and PR have got yetus confused. This patch is purely 
> to do test runs.
> h1. All discourse must continue to take place in HADOOP-13560 and/or the Pull 
> Request.






[jira] [Updated] (HADOOP-13703) S3ABlockOutputStream to pass Yetus & Jenkins

2016-10-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13703:

Parent Issue: HADOOP-11694  (was: HADOOP-13204)

> S3ABlockOutputStream to pass Yetus & Jenkins
> 
>
> Key: HADOOP-13703
> URL: https://issues.apache.org/jira/browse/HADOOP-13703
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.8.0
>
> Attachments: HADOOP-13560-012.patch, HADOOP-13560-branch-2-010.patch, 
> HADOOP-13560-branch-2-011.patch, HADOOP-13560-branch-2-013.patch, 
> HADOOP-13560-branch-2-014.patch, HADOOP-13560-branch-2-015.patch, 
> HADOOP-13560-branch-2-016.patch
>
>
> The HADOOP-13560 patches and PR have got yetus confused. This patch is purely 
> to do test runs.
> h1. All discourse must continue to take place in HADOOP-13560 and/or the Pull 
> Request.






[jira] [Updated] (HADOOP-13569) S3AFastOutputStream to take ProgressListener in file create()

2016-10-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13569:

Parent Issue: HADOOP-11694  (was: HADOOP-13204)

> S3AFastOutputStream to take ProgressListener in file create()
> -
>
> Key: HADOOP-13569
> URL: https://issues.apache.org/jira/browse/HADOOP-13569
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
>
> For scale testing I'd like more meaningful progress than the Hadoop 
> {{Progressable}} offers. 
> Proposed: having {{S3AFastOutputStream}} check to see if the progressable 
> passed in is also an instance of {{com.amazonaws.event.ProgressListener}} 
> —and if so, wire it up directly.
> This allows tests to directly track the state of the upload, log it, and 
> perhaps even assert on it.
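A minimal sketch of such a dual-interface callback (the class name is made up; 
it assumes {{org.apache.hadoop.util.Progressable}} and the AWS SDK's 
{{com.amazonaws.event.ProgressListener}}/{{ProgressEvent}}):

{code}
// Hypothetical test helper: a Progressable that is also an AWS
// ProgressListener, so the stream can wire it straight into the SDK upload.
class UploadProgress implements Progressable, ProgressListener {
  @Override
  public void progress() {
    // Hadoop-side heartbeat callback
  }
  @Override
  public void progressChanged(ProgressEvent event) {
    // AWS SDK-side callback: the event carries its type and byte counts,
    // which a test can log or assert on.
  }
}
{code}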






[jira] [Updated] (HADOOP-13569) S3AFastOutputStream to take ProgressListener in file create()

2016-10-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13569:

Fix Version/s: (was: 2.9.0)
   2.8.0

> S3AFastOutputStream to take ProgressListener in file create()
> -
>
> Key: HADOOP-13569
> URL: https://issues.apache.org/jira/browse/HADOOP-13569
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
>
> For scale testing I'd like more meaningful progress than the Hadoop 
> {{Progressable}} offers. 
> Proposed: having {{S3AFastOutputStream}} check to see if the progressable 
> passed in is also an instance of {{com.amazonaws.event.ProgressListener}} 
> —and if so, wire it up directly.
> This allows tests to directly track the state of the upload, log it, and 
> perhaps even assert on it.






[jira] [Commented] (HADOOP-13440) FileContext does not react on changing umask via configuration

2016-10-19 Thread Tao Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15588259#comment-15588259
 ] 

Tao Yang commented on HADOOP-13440:
---

The patch is fine to solve the problem in YARN-5749. I will resolve that issue 
and mark it as a duplicate. Thanks.

> FileContext does not react on changing umask via configuration
> --
>
> Key: HADOOP-13440
> URL: https://issues.apache.org/jira/browse/HADOOP-13440
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yufei Gu
>Assignee: Akira Ajisaka
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5425.00.patch
>
>
> After HADOOP-13073, RawLocalFileSystem does react on changing umask but 
> FileContext does not react on changing umask via configuration. 
> TestDirectoryCollection fails because of the inconsistent behavior.
> {code}
> java.lang.AssertionError: local dir parent not created with proper 
> permissions expected: but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.TestDirectoryCollection.testCreateDirectories(TestDirectoryCollection.java:113)
> {code}






[jira] [Commented] (HADOOP-13440) FileContext does not react on changing umask via configuration

2016-10-19 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15588188#comment-15588188
 ] 

Akira Ajisaka commented on HADOOP-13440:


Hi [~Tao Yang], I saw the same problem in YARN-5679 and provided a patch to fix 
it. Would you check the patch in YARN-5679?

> FileContext does not react on changing umask via configuration
> --
>
> Key: HADOOP-13440
> URL: https://issues.apache.org/jira/browse/HADOOP-13440
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yufei Gu
>Assignee: Akira Ajisaka
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5425.00.patch
>
>
> After HADOOP-13073, RawLocalFileSystem does react on changing umask but 
> FileContext does not react on changing umask via configuration. 
> TestDirectoryCollection fails because of the inconsistent behavior.
> {code}
> java.lang.AssertionError: local dir parent not created with proper 
> permissions expected: but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.TestDirectoryCollection.testCreateDirectories(TestDirectoryCollection.java:113)
> {code}






[jira] [Commented] (HADOOP-13440) FileContext does not react on changing umask via configuration

2016-10-19 Thread Tao Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15588160#comment-15588160
 ] 

Tao Yang commented on HADOOP-13440:
---

Hi [~ajisakaa]. There is a problem in YARN-5749 caused by this solution: 
multiple services use FileContext objects with different umask values, so they 
may affect each other by changing the umask in some scenarios. Could you give 
some suggestions on how to solve this problem? Thank you.
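To illustrate the kind of interference described (a sketch with made-up service 
roles; "fs.permissions.umask-mode" is the real configuration key):

{code}
// Sketch: two services in one process sharing a FileContext.
Configuration conf = new Configuration();
conf.set("fs.permissions.umask-mode", "022");   // service A expects 022
FileContext fc = FileContext.getFileContext(conf);

// Service B later tightens the umask on the shared context...
fc.setUMask(new FsPermission((short) 0077));
// ...so anything service A creates through fc now gets stricter permissions.
{code}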

> FileContext does not react on changing umask via configuration
> --
>
> Key: HADOOP-13440
> URL: https://issues.apache.org/jira/browse/HADOOP-13440
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yufei Gu
>Assignee: Akira Ajisaka
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5425.00.patch
>
>
> After HADOOP-13073, RawLocalFileSystem does react on changing umask but 
> FileContext does not react on changing umask via configuration. 
> TestDirectoryCollection fails because of the inconsistent behavior.
> {code}
> java.lang.AssertionError: local dir parent not created with proper 
> permissions expected: but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.TestDirectoryCollection.testCreateDirectories(TestDirectoryCollection.java:113)
> {code}






[jira] [Commented] (HADOOP-11798) Native raw erasure coder in XOR codes

2016-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15588145#comment-15588145
 ] 

Hadoop QA commented on HADOOP-11798:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
12s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-11798 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12834115/HADOOP-11798-v5.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  cc  findbugs  checkstyle  |
| uname | Linux 12d97e29d2d8 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c5573e6 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10827/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10827/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Native raw erasure coder in XOR codes
> -
>
> Key: HADOOP-11798
> URL: https://issues.apache.org/jira/browse/HADOOP-11798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>  

[jira] [Commented] (HADOOP-13560) S3ABlockOutputStream to support huge (many GB) file writes

2016-10-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15588071#comment-15588071
 ] 

ASF GitHub Bot commented on HADOOP-13560:
-

Github user steveloughran closed the pull request at:

https://github.com/apache/hadoop/pull/130


> S3ABlockOutputStream to support huge (many GB) file writes
> --
>
> Key: HADOOP-13560
> URL: https://issues.apache.org/jira/browse/HADOOP-13560
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13560-branch-2-001.patch, 
> HADOOP-13560-branch-2-002.patch, HADOOP-13560-branch-2-003.patch, 
> HADOOP-13560-branch-2-004.patch
>
>
> An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights 
> that metadata isn't copied on large copies.
> 1. Add a test to do that large copy/rename and verify that the copy really 
> works
> 2. Verify that metadata makes it over.
> Verifying large file rename is important on its own, as it is needed for very 
> large commit operations for committers using rename






[jira] [Updated] (HADOOP-11798) Native raw erasure coder in XOR codes

2016-10-19 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HADOOP-11798:
---
Attachment: HADOOP-11798-v5.patch

Remove unnecessary #include in .c files. 

> Native raw erasure coder in XOR codes
> -
>
> Key: HADOOP-11798
> URL: https://issues.apache.org/jira/browse/HADOOP-11798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Fix For: HDFS-7285
>
> Attachments: HADOOP-11798-v1.patch, HADOOP-11798-v2.patch, 
> HADOOP-11798-v3.patch, HADOOP-11798-v4.patch, HADOOP-11798-v5.patch
>
>
> The raw XOR coder is used by the Reed-Solomon erasure coder as an 
> optimization to recover a single erased block, which is the most common case. 
> It can also be used in the HitchHiker coder. Therefore a native 
> implementation of it would be worthwhile for the performance gain.
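For reference, the reason XOR can recover a single erased block (a toy 
illustration only, not the native coder's API):

{code}
// parity = d0 ^ d1 ^ d2, so any one lost block is the XOR of the others.
byte[] d0 = {1, 2}, d1 = {3, 4}, d2 = {5, 6};
byte[] parity = new byte[2];
for (int i = 0; i < parity.length; i++) {
  parity[i] = (byte) (d0[i] ^ d1[i] ^ d2[i]);
}
// Suppose d1 is erased: rebuild it from the surviving blocks and parity.
byte[] recovered = new byte[2];
for (int i = 0; i < recovered.length; i++) {
  recovered[i] = (byte) (d0[i] ^ d2[i] ^ parity[i]);
}
{code}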






[jira] [Issue Comment Deleted] (HADOOP-11798) Native raw erasure coder in XOR codes

2016-10-19 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HADOOP-11798:
---
Comment: was deleted

(was: Hi, Wei-Chiu and Kai, thanks so much for reviewing the patch! I realize 
there may be more comments on the documentation than on the code, so I created 
a new JIRA, HDFS-11033, to track the documentation.)

> Native raw erasure coder in XOR codes
> -
>
> Key: HADOOP-11798
> URL: https://issues.apache.org/jira/browse/HADOOP-11798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Fix For: HDFS-7285
>
> Attachments: HADOOP-11798-v1.patch, HADOOP-11798-v2.patch, 
> HADOOP-11798-v3.patch, HADOOP-11798-v4.patch
>
>
> The raw XOR coder is used by the Reed-Solomon erasure coder as an 
> optimization to recover a single erased block, which is the most common case. 
> It can also be used in the HitchHiker coder. Therefore a native 
> implementation of it would be worthwhile for the performance gain.






[jira] [Commented] (HADOOP-11798) Native raw erasure coder in XOR codes

2016-10-19 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15588025#comment-15588025
 ] 

SammiChen commented on HADOOP-11798:


Hi, Wei-Chiu and Kai, thanks so much for reviewing the patch! I realize there 
may be more comments on the documentation than on the code, so I created a new 
JIRA, HDFS-11033, to track the documentation. Let's focus on the code in this 
JIRA.

> Native raw erasure coder in XOR codes
> -
>
> Key: HADOOP-11798
> URL: https://issues.apache.org/jira/browse/HADOOP-11798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Fix For: HDFS-7285
>
> Attachments: HADOOP-11798-v1.patch, HADOOP-11798-v2.patch, 
> HADOOP-11798-v3.patch, HADOOP-11798-v4.patch
>
>
> The raw XOR coder is used by the Reed-Solomon erasure coder as an 
> optimization to recover a single erased block, which is the most common case. 
> It can also be used in the HitchHiker coder. Therefore a native 
> implementation of it would be worthwhile for the performance gain.






[jira] [Commented] (HADOOP-11798) Native raw erasure coder in XOR codes

2016-10-19 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15588019#comment-15588019
 ] 

SammiChen commented on HADOOP-11798:


Hi, Wei-Chiu and Kai, thanks so much for reviewing the patch! I realize there 
may be more comments on the documentation than on the code, so I created a new 
JIRA, HDFS-11033, to track the documentation.

> Native raw erasure coder in XOR codes
> -
>
> Key: HADOOP-11798
> URL: https://issues.apache.org/jira/browse/HADOOP-11798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Fix For: HDFS-7285
>
> Attachments: HADOOP-11798-v1.patch, HADOOP-11798-v2.patch, 
> HADOOP-11798-v3.patch, HADOOP-11798-v4.patch
>
>
> The raw XOR coder is used by the Reed-Solomon erasure coder as an 
> optimization to recover a single erased block, which is the most common case. 
> It can also be used in the HitchHiker coder. Therefore a native 
> implementation of it would be worthwhile for the performance gain.






[jira] [Assigned] (HADOOP-13364) Variable HADOOP_LIBEXEC_DIR must be quoted in bin/hadoop line 26

2016-10-19 Thread Yulei Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yulei Li reassigned HADOOP-13364:
-

Assignee: Yulei Li

> Variable HADOOP_LIBEXEC_DIR must be quoted in bin/hadoop line 26
> 
>
> Key: HADOOP-13364
> URL: https://issues.apache.org/jira/browse/HADOOP-13364
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.6.4
> Environment: Linux, Unix, Mac machines with spaces in file paths
>Reporter: Jeffrey McAteer
>Assignee: Yulei Li
>  Labels: script
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> Upon a standard download, untarring, and execution of 
> './hadoop-2.6.4/bin/hadoop version', I received: './hadoop-2.6.4/bin/hadoop: 
> line 26: /Users/jeffrey/Projects/Hadoop: No such file or directory'
> My project directory was called 'Hadoop Playground', with a space in it. Upon 
> investigating, I found line 26 held:
> . $HADOOP_LIBEXEC_DIR/hadoop-config.sh
> This means the variable $HADOOP_LIBEXEC_DIR will be handled as multiple 
> arguments if there is a space. The solution is to quote the variable, like so:
> . "$HADOOP_LIBEXEC_DIR/hadoop-config.sh"






[jira] [Commented] (HADOOP-13731) Cant compile Hadoop 2.7.2 on Ubuntu Xenial (16.04) with JDK 7/8

2016-10-19 Thread Yulei Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15587931#comment-15587931
 ] 

Yulei Li commented on HADOOP-13731:
---

I think you should try the Oracle JDK; maybe it's an issue with OpenJDK.

> Cant compile Hadoop 2.7.2 on Ubuntu Xenial (16.04) with JDK 7/8
> ---
>
> Key: HADOOP-13731
> URL: https://issues.apache.org/jira/browse/HADOOP-13731
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.7.2
> Environment: OS : Ubuntu 16.04 (Xenial)
> JDK: OpenJDK 7 and OpenJDK 8
>Reporter: Anant Sharma
>Priority: Critical
>  Labels: build
>
> I am trying to build Hadoop 2.7.2 (direct from upstream with no 
> modifications) using OpenJDK 7 on Ubuntu 16.04 (Xenial) but I get the 
> following errors. The result is the same with OpenJDK 8, but I switched back 
> to OpenJDK 7 since it's the recommended version. This is a critical issue 
> since I am unable to move beyond building Hadoop.
> Other configuration details:
> Protobuf: 2.5.0 (Built from source, backported aarch64 dependencies from 2.6)
> Maven: 3.3.9
> Command Line:
>  mvn package -Pdist -DskipTests -Dtar
> Build log:
> [INFO] Building jar: 
> /home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-auth-examples/target/hadoop-auth-examples-2.7.2-javadoc.jar
> [INFO]
> [INFO] 
> 
> [INFO] Building Apache Hadoop Common 2.7.2
> [INFO] 
> 
> [INFO]
> [INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-common ---
> [INFO] Executing tasks
> main:
> [mkdir] Created dir: 
> /home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-common/target/test-dir
> [mkdir] Created dir: 
> /home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-common/target/test/data
> [INFO] Executed tasks
> [INFO]
> [INFO] --- hadoop-maven-plugins:2.7.2:protoc (compile-protoc) @ hadoop-common 
> ---
> [INFO]
> [INFO] --- hadoop-maven-plugins:2.7.2:version-info (version-info) @ 
> hadoop-common ---
> [WARNING] [svn, info] failed with error code 1
> [WARNING] [git, branch] failed with error code 128
> [INFO] SCM: NONE
> [INFO] Computed MD5: d0fda26633fa762bff87ec759ebe689c
> [INFO]
> [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
> hadoop-common ---
> [INFO] Using 'UTF-8' encoding to copy filtered resources.
> [INFO] Copying 7 resources
> [INFO] Copying 1 resource
> [INFO]
> [INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ 
> hadoop-common ---
> [INFO] Changes detected - recompiling the module!
> [INFO] Compiling 852 source files to 
> /home/ubuntu/hadoop-2.7.2-src/hadoop-common-project/hadoop-common/target/classes
> An exception has occurred in the compiler (1.7.0_95). Please file a bug at 
> the Java Developer Connection (http://java.sun.com/webapps/bugreport)  after 
> checking the Bug Parade for duplicates. Include your program and the 
> following diagnostic in your report.  Thank you.
> java.lang.NullPointerException
> at com.sun.tools.javac.tree.TreeInfo.skipParens(TreeInfo.java:571)
> at com.sun.tools.javac.jvm.Gen.visitIf(Gen.java:1613)
> at com.sun.tools.javac.tree.JCTree$JCIf.accept(JCTree.java:1140)
> at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:719)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:705)
> at com.sun.tools.javac.jvm.Gen.genStats(Gen.java:756)
> at com.sun.tools.javac.jvm.Gen.visitBlock(Gen.java:1031)
> at com.sun.tools.javac.tree.JCTree$JCBlock.accept(JCTree.java:781)
> at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:719)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:705)
> at com.sun.tools.javac.jvm.Gen.genLoop(Gen.java:1080)
> at com.sun.tools.javac.jvm.Gen.visitForLoop(Gen.java:1051)
> at com.sun.tools.javac.tree.JCTree$JCForLoop.accept(JCTree.java:872)
> at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:719)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:705)
> at com.sun.tools.javac.jvm.Gen.genStats(Gen.java:756)
> at com.sun.tools.javac.jvm.Gen.visitBlock(Gen.java:1031)
> at com.sun.tools.javac.tree.JCTree$JCBlock.accept(JCTree.java:781)
> at com.sun.tools.javac.jvm.Gen.genDef(Gen.java:684)
> at com.sun.tools.javac.jvm.Gen.genStat(Gen.java:719)
> at com.sun.tools.javac.jvm.Gen.genMethod(Gen.java:912)
> at com.sun.tools.javac.jvm.Gen.visitMethodDef(Gen.java:885)
> at 
> 

[jira] [Commented] (HADOOP-13725) Open MapFile for append

2016-10-19 Thread Yulei Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15587885#comment-15587885
 ] 

Yulei Li commented on HADOOP-13725:
---

Could you describe the problem in more detail? In the MapFile source code, I 
see it has an append method, and before appending it actually checks the new 
key. So how can I reproduce your problem? Thanks

> Open MapFile for append
> ---
>
> Key: HADOOP-13725
> URL: https://issues.apache.org/jira/browse/HADOOP-13725
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: VITALIY SAVCHENKO
>
> I think it should be possible to open a MapFile for appending.
> SequenceFile supports it (option SequenceFile.Writer.appendIfExists(true), 
> HADOOP-7139).
> Now it almost works. But if SequenceFile.Writer.appendIfExists(true) is used 
> with MapFile.Writer, it does not read the last key and does not check new 
> keys. That's why the MapFile can be corrupted.


