[jira] [Created] (HADOOP-9440) Unit Test: hadoop-common2.0.3 TestIPC unit test fails on protobuf2.5.0+

2013-03-28 Thread Tian Hong Wang (JIRA)
Tian Hong Wang created HADOOP-9440:
--

 Summary: Unit Test: hadoop-common2.0.3 TestIPC unit test fails on 
protobuf2.5.0+
 Key: HADOOP-9440
 URL: https://issues.apache.org/jira/browse/HADOOP-9440
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Tian Hong Wang
 Fix For: 2.0.3-alpha


TestIPC runs normally with protobuf 2.4.1 or below, but fails with protobuf 2.5.0.

java.io.IOException: Failed on local exception: 
com.google.protobuf.InvalidProtocolBufferException: 500 millis timeout while 
waiting for channel to be ready for read. ch : 
java.nio.channels.SocketChannel[connected local=/127.0.0.1:50850 
remote=louis-ThinkPad-T410/127.0.0.1:50353]; Host Details : local host is: 
louis-ThinkPad-T410/127.0.0.1; destination host is: 
louis-ThinkPad-T410:50353;

TestIPC fails because it catches a com.google.protobuf.InvalidProtocolBufferException 
instead of the expected SocketTimeoutException.



[jira] [Updated] (HADOOP-9440) Unit Test: hadoop-common2.0.3 TestIPC fails on protobuf2.5.0+

2013-03-28 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HADOOP-9440:
---

Summary: Unit Test: hadoop-common2.0.3 TestIPC fails on protobuf2.5.0+  
(was: Unit Test: hadoop-common2.0.3 TestIPC unit test fails on protobuf2.5.0+)

 Unit Test: hadoop-common2.0.3 TestIPC fails on protobuf2.5.0+
 -

 Key: HADOOP-9440
 URL: https://issues.apache.org/jira/browse/HADOOP-9440
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.3-alpha


 TestIPC runs normally with protobuf 2.4.1 or below, but fails with protobuf 2.5.0.
 java.io.IOException: Failed on local exception: 
 com.google.protobuf.InvalidProtocolBufferException: 500 millis timeout while 
 waiting for channel to be ready for read. ch : 
 java.nio.channels.SocketChannel[connected local=/127.0.0.1:50850 
 remote=louis-ThinkPad-T410/127.0.0.1:50353]; Host Details : local host is: 
 louis-ThinkPad-T410/127.0.0.1; destination host is: 
 louis-ThinkPad-T410:50353;
 TestIPC fails because it catches a com.google.protobuf.InvalidProtocolBufferException 
 instead of the expected SocketTimeoutException.



[jira] [Updated] (HADOOP-9440) Unit Test: hadoop-common2.0.3 TestIPC fails on protobuf2.5.0+

2013-03-28 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HADOOP-9440:
---

Attachment: HADOOP-9440.patch

 Unit Test: hadoop-common2.0.3 TestIPC fails on protobuf2.5.0+
 -

 Key: HADOOP-9440
 URL: https://issues.apache.org/jira/browse/HADOOP-9440
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-9440.patch


 TestIPC runs normally with protobuf 2.4.1 or below, but fails with protobuf 2.5.0.
 java.io.IOException: Failed on local exception: 
 com.google.protobuf.InvalidProtocolBufferException: 500 millis timeout while 
 waiting for channel to be ready for read. ch : 
 java.nio.channels.SocketChannel[connected local=/127.0.0.1:50850 
 remote=louis-ThinkPad-T410/127.0.0.1:50353]; Host Details : local host is: 
 louis-ThinkPad-T410/127.0.0.1; destination host is: 
 louis-ThinkPad-T410:50353;
 TestIPC fails because it catches a com.google.protobuf.InvalidProtocolBufferException 
 instead of the expected SocketTimeoutException.



[jira] [Updated] (HADOOP-9440) Unit Test: hadoop-common2.0.3 TestIPC fails on protobuf2.5.0+

2013-03-28 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HADOOP-9440:
---

Affects Version/s: 2.0.3-alpha
   Status: Patch Available  (was: Open)

To keep the change to the original code minimal and to preserve the existing 
SocketTimeoutException handling, the patch adds an additional catch clause that 
handles any other IOException besides SocketTimeoutException.
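
A minimal, self-contained sketch of that catch ordering (hypothetical method 
name, not the actual TestIPC code): since SocketTimeoutException is itself an 
IOException subclass, the specific catch must come first, and the added general 
catch absorbs the InvalidProtocolBufferException that protobuf 2.5.0 surfaces 
through the IOException path.

{code}
import java.io.IOException;
import java.net.SocketTimeoutException;

public class TimeoutCatchSketch {
    // Stand-in for the RPC call under test: with protobuf 2.5.0 the read
    // timeout surfaces as an InvalidProtocolBufferException (an IOException
    // subclass), not as a SocketTimeoutException.
    static void callThatTimesOut() throws IOException {
        throw new IOException("InvalidProtocolBufferException: 500 millis timeout");
    }

    public static void main(String[] args) {
        try {
            callThatTimesOut();
        } catch (SocketTimeoutException e) {
            // expected with protobuf 2.4.1 and below
            System.out.println("timed out with SocketTimeoutException");
        } catch (IOException e) {
            // added catch: any other IOException, e.g. the protobuf 2.5.0 case
            System.out.println("timed out with: " + e.getMessage());
        }
    }
}
{code}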

 Unit Test: hadoop-common2.0.3 TestIPC fails on protobuf2.5.0+
 -

 Key: HADOOP-9440
 URL: https://issues.apache.org/jira/browse/HADOOP-9440
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.3-alpha
Reporter: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-9440.patch


 TestIPC runs normally with protobuf 2.4.1 or below, but fails with protobuf 2.5.0.
 java.io.IOException: Failed on local exception: 
 com.google.protobuf.InvalidProtocolBufferException: 500 millis timeout while 
 waiting for channel to be ready for read. ch : 
 java.nio.channels.SocketChannel[connected local=/127.0.0.1:50850 
 remote=louis-ThinkPad-T410/127.0.0.1:50353]; Host Details : local host is: 
 louis-ThinkPad-T410/127.0.0.1; destination host is: 
 louis-ThinkPad-T410:50353;
 TestIPC fails because it catches a com.google.protobuf.InvalidProtocolBufferException 
 instead of the expected SocketTimeoutException.



[jira] [Updated] (HADOOP-9435) Native build hadoop-common-project fails on $JAVA_HOME/include/jni_md.h using ibm java

2013-03-28 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HADOOP-9435:
---

Affects Version/s: 2.0.3-alpha
   Status: Patch Available  (was: Open)

 Native build hadoop-common-project fails on $JAVA_HOME/include/jni_md.h using 
 ibm java
 --

 Key: HADOOP-9435
 URL: https://issues.apache.org/jira/browse/HADOOP-9435
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.3-alpha
Reporter: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-9435.patch


 When building the native parts of hadoop-common-project with IBM Java, using a 
 command like:
 mvn package -Pnative
 the build fails with the following errors.
  [exec] CMake Error at JNIFlags.cmake:113 (MESSAGE):
  [exec]   Failed to find a viable JVM installation under JAVA_HOME.
  [exec] Call Stack (most recent call first):
  [exec]   CMakeLists.txt:24 (include)
  [exec] 
  [exec] 
  [exec] -- Configuring incomplete, errors occurred!
 The reason is that IBM Java provides $JAVA_HOME/include/jniport.h instead of 
 the $JAVA_HOME/include/jni_md.h found in Oracle Java.



[jira] [Updated] (HADOOP-9440) Unit Test: hadoop-common2.0.3 TestIPC fails on protobuf2.5.0+

2013-03-28 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HADOOP-9440:
---

Description: 
TestIPC runs normally with protobuf 2.4.1 or below, but fails with protobuf 2.5.0.

java.io.IOException: Failed on local exception: 
com.google.protobuf.InvalidProtocolBufferException: 500 millis timeout while 
waiting for channel to be ready for read. ch : 
java.nio.channels.SocketChannel[connected local=/127.0.0.1:50850 
remote=louis-ThinkPad-T410/127.0.0.1:50353]; Host Details : local host is: 
louis-ThinkPad-T410/127.0.0.1; destination host is: 
louis-ThinkPad-T410:50353;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:761)
at org.apache.hadoop.ipc.Client.call(Client.java:1239)
at org.apache.hadoop.ipc.Client.call(Client.java:1163)
at org.apache.hadoop.ipc.TestIPC.testIpcTimeout(TestIPC.java:492)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at 
org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
at 
org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
at 
org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
Caused by: com.google.protobuf.InvalidProtocolBufferException: 500 millis 
timeout while waiting for channel to be ready for read. ch : 
java.nio.channels.SocketChannel[connected local=/127.0.0.1:50850 
remote=louis-ThinkPad-T410/127.0.0.1:50353]
at 
com.google.protobuf.AbstractParser.parsePartialDelimitedFrom(AbstractParser.java:238)
at 
com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:253)
at 
com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:259)
at 
com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:49)
at 
org.apache.hadoop.ipc.protobuf.RpcPayloadHeaderProtos$RpcResponseHeaderProto.parseDelimitedFrom(RpcPayloadHeaderProtos.java:1434)
at 
org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:946)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:844)

TestIPC fails because it catches a com.google.protobuf.InvalidProtocolBufferException 
instead of the expected SocketTimeoutException.

  was:
TestIPC runs normally with protobuf 2.4.1 or below, but fails with protobuf 2.5.0.

java.io.IOException: Failed on local exception: 
com.google.protobuf.InvalidProtocolBufferException: 500 millis timeout while 
waiting for channel to be ready for read. ch : 
java.nio.channels.SocketChannel[connected local=/127.0.0.1:50850 
remote=louis-ThinkPad-T410/127.0.0.1:50353]; Host Details : local host is: 

[jira] [Updated] (HADOOP-9440) Unit Test: hadoop-common2.0.3 TestIPC fails on protobuf2.5.0+

2013-03-28 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HADOOP-9440:
---

Description: 
TestIPC runs normally with protobuf 2.4.1 or below, but with protobuf 2.5.0 
TestIPC.testIpcTimeout and TestIPC.testIpcConnectTimeout fail.

java.io.IOException: Failed on local exception: 
com.google.protobuf.InvalidProtocolBufferException: 500 millis timeout while 
waiting for channel to be ready for read. ch : 
java.nio.channels.SocketChannel[connected local=/127.0.0.1:50850 
remote=louis-ThinkPad-T410/127.0.0.1:50353]; Host Details : local host is: 
louis-ThinkPad-T410/127.0.0.1; destination host is: 
louis-ThinkPad-T410:50353; 
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:761)
at org.apache.hadoop.ipc.Client.call(Client.java:1239)
at org.apache.hadoop.ipc.Client.call(Client.java:1163)
at org.apache.hadoop.ipc.TestIPC.testIpcTimeout(TestIPC.java:492)

testIpcConnectTimeout(org.apache.hadoop.ipc.TestIPC)  Time elapsed: 2009 sec  
 ERROR!
java.io.IOException: Failed on local exception: 
com.google.protobuf.InvalidProtocolBufferException: 2000 millis timeout while 
waiting for channel to be ready for read. ch : 
java.nio.channels.SocketChannel[connected local=/127.0.0.1:51304 
remote=louis-ThinkPad-T410/127.0.0.1:39525]; Host Details : local host is: 
louis-ThinkPad-T410/127.0.0.1; destination host is: 
louis-ThinkPad-T410:39525; 
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:761)
at org.apache.hadoop.ipc.Client.call(Client.java:1239)
at org.apache.hadoop.ipc.Client.call(Client.java:1163)
at org.apache.hadoop.ipc.TestIPC.testIpcConnectTimeout(TestIPC.java:515)

TestIPC.testIpcTimeout and TestIPC.testIpcConnectTimeout fail because they 
catch a com.google.protobuf.InvalidProtocolBufferException instead of the 
expected SocketTimeoutException.

  was:
TestIPC runs normally with protobuf 2.4.1 or below, but fails with protobuf 2.5.0.

java.io.IOException: Failed on local exception: 
com.google.protobuf.InvalidProtocolBufferException: 500 millis timeout while 
waiting for channel to be ready for read. ch : 
java.nio.channels.SocketChannel[connected local=/127.0.0.1:50850 
remote=louis-ThinkPad-T410/127.0.0.1:50353]; Host Details : local host is: 
louis-ThinkPad-T410/127.0.0.1; destination host is: 
louis-ThinkPad-T410:50353;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:761)
at org.apache.hadoop.ipc.Client.call(Client.java:1239)
at org.apache.hadoop.ipc.Client.call(Client.java:1163)
at org.apache.hadoop.ipc.TestIPC.testIpcTimeout(TestIPC.java:492)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at 
org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
at 

[jira] [Updated] (HADOOP-9440) Unit Test: hadoop-common2.0.3 TestIPC fails on protobuf2.5.0

2013-03-28 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HADOOP-9440:
---

Summary: Unit Test: hadoop-common2.0.3 TestIPC fails on protobuf2.5.0  
(was: Unit Test: hadoop-common2.0.3 TestIPC fails on protobuf2.5.0+)

 Unit Test: hadoop-common2.0.3 TestIPC fails on protobuf2.5.0
 

 Key: HADOOP-9440
 URL: https://issues.apache.org/jira/browse/HADOOP-9440
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.3-alpha
Reporter: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-9440.patch


 TestIPC runs normally with protobuf 2.4.1 or below, but with protobuf 2.5.0 
 TestIPC.testIpcTimeout and TestIPC.testIpcConnectTimeout fail.
 java.io.IOException: Failed on local exception: 
 com.google.protobuf.InvalidProtocolBufferException: 500 millis timeout while 
 waiting for channel to be ready for read. ch : 
 java.nio.channels.SocketChannel[connected local=/127.0.0.1:50850 
 remote=louis-ThinkPad-T410/127.0.0.1:50353]; Host Details : local host is: 
 louis-ThinkPad-T410/127.0.0.1; destination host is: 
 louis-ThinkPad-T410:50353; 
   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:761)
   at org.apache.hadoop.ipc.Client.call(Client.java:1239)
   at org.apache.hadoop.ipc.Client.call(Client.java:1163)
   at org.apache.hadoop.ipc.TestIPC.testIpcTimeout(TestIPC.java:492)
 testIpcConnectTimeout(org.apache.hadoop.ipc.TestIPC)  Time elapsed: 2009 sec  
  ERROR!
 java.io.IOException: Failed on local exception: 
 com.google.protobuf.InvalidProtocolBufferException: 2000 millis timeout while 
 waiting for channel to be ready for read. ch : 
 java.nio.channels.SocketChannel[connected local=/127.0.0.1:51304 
 remote=louis-ThinkPad-T410/127.0.0.1:39525]; Host Details : local host is: 
 louis-ThinkPad-T410/127.0.0.1; destination host is: 
 louis-ThinkPad-T410:39525; 
   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:761)
   at org.apache.hadoop.ipc.Client.call(Client.java:1239)
   at org.apache.hadoop.ipc.Client.call(Client.java:1163)
   at org.apache.hadoop.ipc.TestIPC.testIpcConnectTimeout(TestIPC.java:515)
 TestIPC.testIpcTimeout and TestIPC.testIpcConnectTimeout fail because they 
 catch a com.google.protobuf.InvalidProtocolBufferException instead of the 
 expected SocketTimeoutException.



[jira] [Commented] (HADOOP-9425) Add error codes to rpc-response

2013-03-28 Thread Luke Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13616152#comment-13616152
 ] 

Luke Lu commented on HADOOP-9425:
-

A well-defined set of error codes makes sense. A couple of nits:

# RpcDetailErrorProto might be better named simply RpcErrorCodeProto.
# Add a getter for the errorCode for a unit test, which would obviate the need 
for the findbugs suppression.
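
A tiny sketch of nit #2 (hypothetical class and field names, not from the 
patch): once a getter exists, a unit test can read the field, so findbugs no 
longer flags it as unread and no suppression is required.

{code}
// Hypothetical illustration only; the real generated/wrapper classes differ.
public class RpcErrorInfo {
    private final int errorCode;

    public RpcErrorInfo(int errorCode) {
        this.errorCode = errorCode;
    }

    // Getter exercised by a unit test; without any reader, findbugs would
    // report errorCode as an unread field and require a suppression entry.
    public int getErrorCode() {
        return errorCode;
    }
}
{code}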

 Add error codes to rpc-response
 ---

 Key: HADOOP-9425
 URL: https://issues.apache.org/jira/browse/HADOOP-9425
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Attachments: HADOOP-9425-1.patch, HADOOP-9425-2.patch






[jira] [Updated] (HADOOP-9200) enhance unit-test coverage of class org.apache.hadoop.security.NetgroupCache

2013-03-28 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HADOOP-9200:
---

Attachment: HADOOP-9200-trunk--N2.patch

Hi Kihwal, 
I'm attaching a new version of the patch in which I have rewritten the 
implementation of the NetgroupCache class. Please review it.
I believe the data access problems are now solved without any blocking.
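
For illustration, a minimal sketch of one way such a cache can avoid blocking 
(an assumed approach with hypothetical names, not the actual patch): keep the 
mapping in a ConcurrentHashMap and publish each group's membership as an 
immutable snapshot, so readers never take a lock.

{code}
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class NonBlockingGroupCache {
    private final ConcurrentMap<String, Set<String>> netgroupToUsers =
            new ConcurrentHashMap<String, Set<String>>();

    // Writers publish an immutable snapshot; concurrent readers see either
    // the old set or the new one, never a partially built set.
    public void add(String group, Set<String> users) {
        netgroupToUsers.put(group,
                Collections.unmodifiableSet(new HashSet<String>(users)));
    }

    public boolean isCached(String group) {
        return netgroupToUsers.containsKey(group);
    }
}
{code}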

 enhance unit-test coverage of class org.apache.hadoop.security.NetgroupCache
 

 Key: HADOOP-9200
 URL: https://issues.apache.org/jira/browse/HADOOP-9200
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
 Attachments: HADOOP-9200-trunk--N2.patch, HADOOP-9200-trunk.patch


 The class org.apache.hadoop.security.NetgroupCache has poor unit-test 
 coverage. Enhance it.



[jira] [Commented] (HADOOP-9200) enhance unit-test coverage of class org.apache.hadoop.security.NetgroupCache

2013-03-28 Thread Ivan A. Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13616171#comment-13616171
 ] 

Ivan A. Veselovsky commented on HADOOP-9200:


I see that you already fixed H-9436, so now we probably need to choose which 
fix is better or merge them somehow.

 enhance unit-test coverage of class org.apache.hadoop.security.NetgroupCache
 

 Key: HADOOP-9200
 URL: https://issues.apache.org/jira/browse/HADOOP-9200
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
 Attachments: HADOOP-9200-trunk--N2.patch, HADOOP-9200-trunk.patch


 The class org.apache.hadoop.security.NetgroupCache has poor unit-test 
 coverage. Enhance it.



[jira] [Updated] (HADOOP-9078) enhance unit-test coverage of class org.apache.hadoop.fs.FileContext

2013-03-28 Thread Dennis Y (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Y updated HADOOP-9078:
-

Attachment: HADOOP-9078-trunk--N1.patch
HADOOP-9078-branch-2--N1.patch

resolved merge conflicts on branch-2 and trunk (one conflict in imports)

 enhance unit-test coverage of class org.apache.hadoop.fs.FileContext
 

 Key: HADOOP-9078
 URL: https://issues.apache.org/jira/browse/HADOOP-9078
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
 Attachments: HADOOP-9078--b.patch, HADOOP-9078-branch-0.23.patch, 
 HADOOP-9078-branch-2--b.patch, HADOOP-9078-branch-2--c.patch, 
 HADOOP-9078-branch-2--N1.patch, HADOOP-9078-branch-2.patch, 
 HADOOP-9078.patch, 
 HADOOP-9078-patch-from-[trunk-gd]-to-[fb-HADOOP-9078-trunk-gd]-N1.patch, 
 HADOOP-9078-trunk--N1.patch






[jira] [Commented] (HADOOP-9432) Add support for markdown .md files in site documentation

2013-03-28 Thread Luke Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13616188#comment-13616188
 ] 

Luke Lu commented on HADOOP-9432:
-

+1 for md and the patch.

 Add support for markdown .md files in site documentation
 

 Key: HADOOP-9432
 URL: https://issues.apache.org/jira/browse/HADOOP-9432
 Project: Hadoop Common
  Issue Type: New Feature
  Components: build, documentation
Affects Versions: 3.0.0
Reporter: Steve Loughran
Priority: Minor
 Attachments: HADOOP-9432.patch

   Original Estimate: 0.5h
  Remaining Estimate: 0.5h

 The markdown syntax for marking up text is something which the {{mvn site}} 
 build can be set up to support alongside the existing APT formatted text.
 Markdown offers many advantages:
  # It's more widely understood.
  # There's tooling support in various text editors (TextMate, an IDEA plugin 
 and others).
  # It can be directly rendered in github.
  # The {{.md}} files can be named {{.md.vm}} to trigger velocity 
 preprocessing, at the expense of direct viewing in github.
 Feature #3 is good as it means that you can point people directly at a doc 
 via a github mirror, and have it rendered. 
 I propose adding the options to Maven to enable content to be written as 
 {{.md}} and {{.md.vm}} files in the directory {{src/site/markdown}}. This does 
 not require any changes to the existing {{.apt}} files, which can co-exist and 
 cross-reference each other.



[jira] [Created] (HADOOP-9441) Denial of Service in IPC Server.java

2013-03-28 Thread Wouter de Bie (JIRA)
Wouter de Bie created HADOOP-9441:
-

 Summary: Denial of Service in IPC Server.java
 Key: HADOOP-9441
 URL: https://issues.apache.org/jira/browse/HADOOP-9441
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 1.1.2
Reporter: Wouter de Bie
Priority: Minor


While experimenting with a pure Python client for HDFS, I noticed that there is 
a DoS vulnerability in the IPC Server. The IPC packet specifies the size (a 
32-bit int) of the protobuf payload, and that size is used directly to create 
the buffer used to parse the protobuf message. This means that with malformed 
packets, clients can force the allocation of up to 4G of memory on the heap 
(which in my case blew the heap on my test cluster).

I haven't looked at a good way of solving this, but just wanted to raise the 
issue.
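
For illustration, a minimal sketch (not the actual Server.java code) of the 
unsafe pattern described above, with the obvious mitigation of validating the 
attacker-controlled length prefix before allocating; the 64 MB cap is an 
assumed value:

{code}
import java.io.DataInputStream;
import java.io.IOException;

public class LengthPrefixedRead {
    private static final int MAX_PAYLOAD = 64 * 1024 * 1024; // assumed cap

    static byte[] readPayload(DataInputStream in) throws IOException {
        int len = in.readInt(); // attacker-controlled 32-bit size
        // Without this check, a malformed packet can force a multi-gigabyte
        // allocation before any protobuf parsing happens.
        if (len < 0 || len > MAX_PAYLOAD) {
            throw new IOException("unreasonable payload length: " + len);
        }
        byte[] buf = new byte[len];
        in.readFully(buf);
        return buf;
    }
}
{code}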



[jira] [Created] (HADOOP-9442) Splitting issue when using NLineInputFormat with compression

2013-03-28 Thread Qiming He (JIRA)
Qiming He created HADOOP-9442:
-

 Summary: Splitting issue when using NLineInputFormat with 
compression
 Key: HADOOP-9442
 URL: https://issues.apache.org/jira/browse/HADOOP-9442
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.1.2
 Environment: Tried in Apache Hadoop 1.1.1, CDH4, and Amazon EMR, with the same 
result.
Reporter: Qiming He
Priority: Minor



$ cat abook.txt | base64 -w 0 > onelinetext.b64
$ hadoop fs -put onelinetext.b64 /input/onelinetext.b64
$ hadoop jar hadoop-streaming.jar \
-input /input/onelinetext.b64 \
-output /output \
-inputformat org.apache.hadoop.mapred.lib.NLineInputFormat \
-mapper wc
Num tasks: 1, and the output has one line:
Line 1: 1 2 202699
which makes sense because one line per mapper is intended.

Then, using compression with NLineInputFormat:
$ bzip2 onelinetext.b64
$ hadoop fs -put onelinetext.b64.bz2 /input/onelinetext.b64.bz2
$ hadoop jar hadoop-streaming.jar \
  -Dmapred.input.compress=true \
  -Dmapred.input.compression.codec=org.apache.hadoop.io.compress.GzipCodec \
  -input /input/onelinetext.b64.bz2 \
  -output /output \
  -inputformat org.apache.hadoop.mapred.lib.NLineInputFormat \
  -mapper wc
I was expecting the same results as above, because decompression should occur 
before the one-line text is processed (i.e. by wc); however, I am getting:

Num tasks: 397 (or another large number depending on the environment), and the 
output has 397 lines:
Lines 1-396: 0 0 0
Line 397: 1 2 202699

Any idea why mapred.map.tasks > 1? Is it incorrect splitting? I purposely chose 
gzip because I believe it is NOT splittable. I got similar results when using 
bzip2 and lzop codecs.
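
As an aside, a minimal sketch of how client code can ask Hadoop whether a codec 
is splittable (this assumes the SplittableCompressionCodec interface, present 
in 2.x-era releases; an input format that computes splits without such a check 
will cut a gzip stream into unreadable pieces):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.hadoop.io.compress.SplittableCompressionCodec;

public class SplittabilityCheck {
    public static void main(String[] args) {
        // Infer the codec from the file suffix, as the line-based
        // input formats do.
        CompressionCodecFactory factory =
                new CompressionCodecFactory(new Configuration());
        CompressionCodec codec = factory.getCodec(new Path(args[0]));
        System.out.println("codec: "
                + (codec == null ? "none" : codec.getClass().getName())
                + ", splittable: "
                + (codec instanceof SplittableCompressionCodec));
    }
}
{code}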



[jira] [Updated] (HADOOP-9442) Splitting issue when using NLineInputFormat with compression

2013-03-28 Thread Qiming He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qiming He updated HADOOP-9442:
--

Description: 
$ cat abook.txt | base64 -w 0 > onelinetext.b64
$ hadoop fs -put onelinetext.b64 /input/onelinetext.b64
$ hadoop jar hadoop-streaming.jar \
-input /input/onelinetext.b64 \
-output /output \
-inputformat org.apache.hadoop.mapred.lib.NLineInputFormat \
-mapper wc
Num tasks: 1, and the output has one line:
Line 1: 1 2 202699
which makes sense because one line per mapper is intended.

Then, using compression with NLineInputFormat:
$ bzip2 onelinetext.b64
$ hadoop fs -put onelinetext.b64.bz2 /input/onelinetext.b64.bz2
$ hadoop jar hadoop-streaming.jar \
  -Dmapred.input.compress=true \
  -Dmapred.input.compression.codec=org.apache.hadoop.io.compress.GzipCodec \
  -input /input/onelinetext.b64.gz \
  -output /output \
  -inputformat org.apache.hadoop.mapred.lib.NLineInputFormat \
  -mapper wc
I was expecting the same results as above, because decompression should occur 
before the one-line text is processed (i.e. by wc); however, I am getting:

Num tasks: 397 (or another large number depending on the environment), and the 
output has 397 lines:
Lines 1-396: 0 0 0
Line 397: 1 2 202699

Any idea why mapred.map.tasks > 1? Is it incorrect splitting? I purposely chose 
gzip because I believe it is NOT splittable. I got similar results when using 
bzip2 and lzop codecs.

  was:

$ cat abook.txt | base64 -w 0 > onelinetext.b64
$ hadoop fs -put onelinetext.b64 /input/onelinetext.b64
$ hadoop jar hadoop-streaming.jar \
-input /input/onelinetext.b64 \
-output /output \
-inputformat org.apache.hadoop.mapred.lib.NLineInputFormat \
-mapper wc
Num tasks: 1, and the output has one line:
Line 1: 1 2 202699
which makes sense because one line per mapper is intended.

Then, using compression with NLineInputFormat:
$ bzip2 onelinetext.b64
$ hadoop fs -put onelinetext.b64.bz2 /input/onelinetext.b64.bz2
$ hadoop jar hadoop-streaming.jar \
  -Dmapred.input.compress=true \
  -Dmapred.input.compression.codec=org.apache.hadoop.io.compress.GzipCodec \
  -input /input/onelinetext.b64.bz2 \
  -output /output \
  -inputformat org.apache.hadoop.mapred.lib.NLineInputFormat \
  -mapper wc
I was expecting the same results as above, because decompression should occur 
before the one-line text is processed (i.e. by wc); however, I am getting:

Num tasks: 397 (or another large number depending on the environment), and the 
output has 397 lines:
Lines 1-396: 0 0 0
Line 397: 1 2 202699

Any idea why mapred.map.tasks > 1? Is it incorrect splitting? I purposely chose 
gzip because I believe it is NOT splittable. I got similar results when using 
bzip2 and lzop codecs.


 Splitting issue when using NLineInputFormat with compression
 

 Key: HADOOP-9442
 URL: https://issues.apache.org/jira/browse/HADOOP-9442
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.1.2
 Environment: Tried in Apache Hadoop 1.1.1, CDH4, and Amazon EMR, with the same 
 result.
Reporter: Qiming He
Priority: Minor

 $ cat abook.txt | base64 -w 0 > onelinetext.b64
 $ hadoop fs -put onelinetext.b64 /input/onelinetext.b64
 $ hadoop jar hadoop-streaming.jar \
 -input /input/onelinetext.b64 \
 -output /output \
 -inputformat org.apache.hadoop.mapred.lib.NLineInputFormat \
 -mapper wc
 Num tasks: 1, and the output has one line:
 Line 1: 1 2 202699
 which makes sense because one line per mapper is intended.
 Then, using compression with NLineInputFormat:
 $ bzip2 onelinetext.b64
 $ hadoop fs -put onelinetext.b64.bz2 /input/onelinetext.b64.bz2
 $ hadoop jar hadoop-streaming.jar \
   -Dmapred.input.compress=true \
   -Dmapred.input.compression.codec=org.apache.hadoop.io.compress.GzipCodec \
   -input /input/onelinetext.b64.gz \
   -output /output \
   -inputformat org.apache.hadoop.mapred.lib.NLineInputFormat \
   -mapper wc
 I was expecting the same results as above, because decompression should occur 
 before the one-line text is processed (i.e. by wc); however, I am getting:
 Num tasks: 397 (or another large number depending on the environment), and the 
 output has 397 lines:
 Lines 1-396: 0 0 0
 Line 397: 1 2 202699
 Any idea why mapred.map.tasks > 1? Is it incorrect splitting? I purposely chose 
 gzip because I believe it is NOT splittable. I got similar results when using 
 bzip2 and lzop codecs.



[jira] [Updated] (HADOOP-9442) Splitting issue when using NLineInputFormat with compression

2013-03-28 Thread Qiming He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qiming He updated HADOOP-9442:
--

Description: 
# Make a long text line; the issue seems to show up only with long single-line text.
$ cat abook.txt | base64 -w 0 > onelinetext.b64   # 200KB+ long
$ hadoop fs -put onelinetext.b64 /input/onelinetext.b64
$ hadoop jar hadoop-streaming.jar \
-input /input/onelinetext.b64 \
-output /output \
-inputformat org.apache.hadoop.mapred.lib.NLineInputFormat \
-mapper wc
Num tasks: 1, and the output has one line:
Line 1: 1 2 202699
which makes sense because one line per mapper is intended.

Then, using compression with NLineInputFormat:
$ bzip2 onelinetext.b64
$ hadoop fs -put onelinetext.b64.bz2 /input/onelinetext.b64.bz2
$ hadoop jar hadoop-streaming.jar \
  -Dmapred.input.compress=true \
  -Dmapred.input.compression.codec=org.apache.hadoop.io.compress.GzipCodec \
  -input /input/onelinetext.b64.gz \
  -output /output \
  -inputformat org.apache.hadoop.mapred.lib.NLineInputFormat \
  -mapper wc
I was expecting the same results as above, because decompression should occur 
before the one-line text is processed (i.e. by wc); however, I am getting:

Num tasks: 397 (or another large number depending on the environment), and the 
output has 397 lines:
Lines 1-396: 0 0 0
Line 397: 1 2 202699

Any idea why mapred.map.tasks > 1? Is it incorrect splitting? I purposely chose 
gzip because I believe it is NOT splittable. I got similar results when using 
bzip2 and lzop codecs.

  was:
$ cat abook.txt | base64 -w 0 > onelinetext.b64
$ hadoop fs -put onelinetext.b64 /input/onelinetext.b64
$ hadoop jar hadoop-streaming.jar \
-input /input/onelinetext.b64 \
-output /output \
-inputformat org.apache.hadoop.mapred.lib.NLineInputFormat \
-mapper wc
Num tasks: 1, and the output has one line:
Line 1: 1 2 202699
which makes sense because one line per mapper is intended.

Then, using compression with NLineInputFormat:
$ bzip2 onelinetext.b64
$ hadoop fs -put onelinetext.b64.bz2 /input/onelinetext.b64.bz2
$ hadoop jar hadoop-streaming.jar \
  -Dmapred.input.compress=true \
  -Dmapred.input.compression.codec=org.apache.hadoop.io.compress.GzipCodec \
  -input /input/onelinetext.b64.gz \
  -output /output \
  -inputformat org.apache.hadoop.mapred.lib.NLineInputFormat \
  -mapper wc
I was expecting the same results as above, because decompression should occur 
before the one-line text is processed (i.e. by wc); however, I am getting:

Num tasks: 397 (or another large number depending on the environment), and the 
output has 397 lines:
Lines 1-396: 0 0 0
Line 397: 1 2 202699

Any idea why mapred.map.tasks > 1? Is it incorrect splitting? I purposely chose 
gzip because I believe it is NOT splittable. I got similar results when using 
bzip2 and lzop codecs.


 Splitting issue when using NLineInputFormat with compression
 

 Key: HADOOP-9442
 URL: https://issues.apache.org/jira/browse/HADOOP-9442
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.1.2
 Environment: Tried in Apache Hadoop 1.1.1, CDH4, and Amazon EMR, with the same 
 result.
Reporter: Qiming He
Priority: Minor

 # Make a long text line; the issue seems to show up only with long single-line text.
 $ cat abook.txt | base64 -w 0 > onelinetext.b64   # 200KB+ long
 $ hadoop fs -put onelinetext.b64 /input/onelinetext.b64
 $ hadoop jar hadoop-streaming.jar \
 -input /input/onelinetext.b64 \
 -output /output \
 -inputformat org.apache.hadoop.mapred.lib.NLineInputFormat \
 -mapper wc
 Num tasks: 1, and the output has one line:
 Line 1: 1 2 202699
 which makes sense because one line per mapper is intended.
 Then, using compression with NLineInputFormat:
 $ bzip2 onelinetext.b64
 $ hadoop fs -put onelinetext.b64.bz2 /input/onelinetext.b64.bz2
 $ hadoop jar hadoop-streaming.jar \
   -Dmapred.input.compress=true \
   -Dmapred.input.compression.codec=org.apache.hadoop.io.compress.GzipCodec \
   -input /input/onelinetext.b64.gz \
   -output /output \
   -inputformat org.apache.hadoop.mapred.lib.NLineInputFormat \
   -mapper wc
 I was expecting the same results as above, because decompression should occur 
 before the one-line text is processed (i.e. by wc); however, I am getting:
 Num tasks: 397 (or another large number depending on the environment), and the 
 output has 397 lines:
 Lines 1-396: 0 0 0
 Line 397: 1 2 202699
 Any idea why mapred.map.tasks > 1? Is it incorrect splitting? I purposely chose 
 gzip because I believe it is NOT splittable. I got similar results when using 
 bzip2 and lzop codecs.



[jira] [Updated] (HADOOP-9442) Splitting issue when using NLineInputFormat with compression

2013-03-28 Thread Qiming He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qiming He updated HADOOP-9442:
--

Description: 
# Make a long text line; the issue seems to show up only with long single-line text.
$ cat abook.txt | base64 -w 0 > onelinetext.b64   # 200KB+ long
$ hadoop fs -put onelinetext.b64 /input/onelinetext.b64
$ hadoop jar hadoop-streaming.jar \
-input /input/onelinetext.b64 \
-output /output \
-inputformat org.apache.hadoop.mapred.lib.NLineInputFormat \
-mapper wc
Num tasks: 1, and the output has one line:
Line 1: 1 2 202699
which makes sense because one line per mapper is intended.

Then, using compression with NLineInputFormat:
$ bzip2 onelinetext.b64
$ hadoop fs -put onelinetext.b64.bz2 /input/onelinetext.b64.bz2
$ hadoop jar hadoop-streaming.jar \
  -Dmapred.input.compress=true \
  -Dmapred.input.compression.codec=org.apache.hadoop.io.compress.GzipCodec \
  -input /input/onelinetext.b64.gz \
  -output /output \
  -inputformat org.apache.hadoop.mapred.lib.NLineInputFormat \
  -mapper wc
I was expecting the same results as above, because decompression should occur 
before the one-line text is processed (i.e. by wc); however, I am getting:

Num tasks: 397 (or another large number depending on the environment), and the 
output has 397 lines:
Lines 1-396: 0 0 0
Line 397: 1 2 202699

Any idea why mapred.map.tasks > 1? Is it incorrect splitting? I purposely chose 
gzip because I believe it is NOT splittable. I got similar results when using 
bzip2 and lzop codecs.

  was:
# Make a long text line; the issue seems to show up only with long single-line text.
$ cat abook.txt | base64 -w 0 > onelinetext.b64   # 200KB+ long
$ hadoop fs -put onelinetext.b64 /input/onelinetext.b64
$ hadoop jar hadoop-streaming.jar \
-input /input/onelinetext.b64 \
-output /output \
-inputformat org.apache.hadoop.mapred.lib.NLineInputFormat \
-mapper wc
Num tasks: 1, and the output has one line:
Line 1: 1 2 202699
which makes sense because one line per mapper is intended.

Then, using compression with NLineInputFormat:
$ bzip2 onelinetext.b64
$ hadoop fs -put onelinetext.b64.bz2 /input/onelinetext.b64.bz2
$ hadoop jar hadoop-streaming.jar \
  -Dmapred.input.compress=true \
  -Dmapred.input.compression.codec=org.apache.hadoop.io.compress.GzipCodec \
  -input /input/onelinetext.b64.gz \
  -output /output \
  -inputformat org.apache.hadoop.mapred.lib.NLineInputFormat \
  -mapper wc
I was expecting the same results as above, because decompression should occur 
before the one-line text is processed (i.e. by wc); however, I am getting:

Num tasks: 397 (or another large number depending on the environment), and the 
output has 397 lines:
Lines 1-396: 0 0 0
Line 397: 1 2 202699

Any idea why mapred.map.tasks > 1? Is it incorrect splitting? I purposely chose 
gzip because I believe it is NOT splittable. I got similar results when using 
bzip2 and lzop codecs.


 Splitting issue when using NLineInputFormat with compression
 

 Key: HADOOP-9442
 URL: https://issues.apache.org/jira/browse/HADOOP-9442
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.1.2
 Environment: Tried in Apache Hadoop 1.1.1, CDH4, and Amazon EMR, with the same 
 result.
Reporter: Qiming He
Priority: Minor

 # Make a long text line; the issue seems to show up only with long single-line text.
 $ cat abook.txt | base64 -w 0 > onelinetext.b64   # 200KB+ long
 $ hadoop fs -put onelinetext.b64 /input/onelinetext.b64
 $ hadoop jar hadoop-streaming.jar \
 -input /input/onelinetext.b64 \
 -output /output \
 -inputformat org.apache.hadoop.mapred.lib.NLineInputFormat \
 -mapper wc
 Num tasks: 1, and the output has one line:
 Line 1: 1 2 202699
 which makes sense because one line per mapper is intended.
 Then, using compression with NLineInputFormat:
 $ bzip2 onelinetext.b64
 $ hadoop fs -put onelinetext.b64.bz2 /input/onelinetext.b64.bz2
 $ hadoop jar hadoop-streaming.jar \
   -Dmapred.input.compress=true \
   -Dmapred.input.compression.codec=org.apache.hadoop.io.compress.GzipCodec \
   -input /input/onelinetext.b64.gz \
   -output /output \
   -inputformat org.apache.hadoop.mapred.lib.NLineInputFormat \
   -mapper wc
 I was expecting the same results as above, because decompression should occur 
 before the one-line text is processed (i.e. by wc); however, I am getting:
 Num tasks: 397 (or another large number depending on the environment), and the 
 output has 397 lines:
 Lines 1-396: 0 0 0
 Line 397: 1 2 202699
 Any idea why mapred.map.tasks > 1? Is it incorrect splitting? I purposely chose 
 gzip because I believe it is NOT splittable. I got similar results when using 
 bzip2 and lzop codecs.


[jira] [Commented] (HADOOP-8040) Add symlink support to FileSystem

2013-03-28 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13616278#comment-13616278
 ] 

Brock Noland commented on HADOOP-8040:
--

I took a quick look at this and I did notice there is at least one sysout which 
is not in a test.

 Add symlink support to FileSystem
 -

 Key: HADOOP-8040
 URL: https://issues.apache.org/jira/browse/HADOOP-8040
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs
Affects Versions: 0.23.0, 3.0.0, 2.0.3-alpha
Reporter: Eli Collins
Assignee: Andrew Wang
 Attachments: hadoop-8040-1.patch, hadoop-8040-2.patch, 
 hadoop-8040-3.patch


 HADOOP-6421 added symbolic links to FileContext. Resolving symlinks is done 
 on the client-side, and therefore requires client support. An HDFS symlink 
 (created by FileContext) when accessed by FileSystem will result in an 
 unhandled UnresolvedLinkException. Because not all users will migrate from 
 FileSystem to FileContext in lock step, and we want users of FileSystem to be 
 able to access all paths created by FileContext, we need to support symlink 
 resolution in FileSystem as well, to facilitate migration to FileContext.



[jira] [Commented] (HADOOP-9438) LocalFileContext does not throw an exception on mkdir for already existing directory

2013-03-28 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13616330#comment-13616330
 ] 

Robert Joseph Evans commented on HADOOP-9438:
-

Thanks Omkar,

This and HDFS-4619 look like they may be dupes.

 LocalFileContext does not throw an exception on mkdir for already existing 
 directory
 

 Key: HADOOP-9438
 URL: https://issues.apache.org/jira/browse/HADOOP-9438
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Robert Joseph Evans
Priority: Critical

 According to 
 http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileContext.html#mkdir%28org.apache.hadoop.fs.Path,%20org.apache.hadoop.fs.permission.FsPermission,%20boolean%29
 mkdir should throw a FileAlreadyExistsException if the directory already exists.
 I tested this and 
 {code}
 FileContext lfc = FileContext.getLocalFSFileContext(new Configuration());
 Path p = new Path("/tmp/bobby.12345");
 FsPermission cachePerms = new FsPermission((short) 0755);
 lfc.mkdir(p, cachePerms, false);
 lfc.mkdir(p, cachePerms, false);
 {code}
 never throws an exception.
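 For contrast, a sketch of what the documented behaviour would look like in a 
 test (same API as the snippet above; the fail() assertion is the usual JUnit 
 one):
 {code}
 FileContext lfc = FileContext.getLocalFSFileContext(new Configuration());
 Path p = new Path("/tmp/bobby.12345");
 FsPermission cachePerms = new FsPermission((short) 0755);
 lfc.mkdir(p, cachePerms, false);
 try {
   lfc.mkdir(p, cachePerms, false);
   fail("expected FileAlreadyExistsException on the second mkdir");
 } catch (FileAlreadyExistsException expected) {
   // the behaviour the javadoc promises
 }
 {code}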



[jira] [Commented] (HADOOP-9435) Native build hadoop-common-project fails on $JAVA_HOME/include/jni_md.h using ibm java

2013-03-28 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13616398#comment-13616398
 ] 

Colin Patrick McCabe commented on HADOOP-9435:
--

I'm not sure.  You could try re-uploading the patch.  Sometimes Jenkins misses 
something, it seems.

 Native build hadoop-common-project fails on $JAVA_HOME/include/jni_md.h using 
 ibm java
 --

 Key: HADOOP-9435
 URL: https://issues.apache.org/jira/browse/HADOOP-9435
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.3-alpha
Reporter: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-9435.patch


 When building the native parts of hadoop-common-project with IBM Java, using a 
 command like:
 mvn package -Pnative
 the build fails with the following errors.
  [exec] CMake Error at JNIFlags.cmake:113 (MESSAGE):
  [exec]   Failed to find a viable JVM installation under JAVA_HOME.
  [exec] Call Stack (most recent call first):
  [exec]   CMakeLists.txt:24 (include)
  [exec] 
  [exec] 
  [exec] -- Configuring incomplete, errors occurred!
 The reason is that IBM Java provides $JAVA_HOME/include/jniport.h instead of 
 the $JAVA_HOME/include/jni_md.h found in Oracle Java.



[jira] [Commented] (HADOOP-9150) Unnecessary DNS resolution attempts for logical URIs

2013-03-28 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13616399#comment-13616399
 ] 

Todd Lipcon commented on HADOOP-9150:
-

Suresh, any further comments or can I go ahead and commit?

 Unnecessary DNS resolution attempts for logical URIs
 

 Key: HADOOP-9150
 URL: https://issues.apache.org/jira/browse/HADOOP-9150
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3, ha, performance, viewfs
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Critical
 Attachments: hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, 
 hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, 
 hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, log.txt, 
 tracing-resolver.tgz


 In the FileSystem code, we accidentally try to DNS-resolve the logical name 
 before it is converted to an actual domain name. In some DNS setups, this can 
 cause a big slowdown - e.g. in one misconfigured cluster we saw a 2-3x drop in 
 terasort throughput, since every task wasted a lot of time waiting for slow 
 "not found" responses from DNS.
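 A minimal sketch (hypothetical helper, not the actual patch) of the guard this 
 implies: skip InetAddress resolution when the URI authority is a logical name 
 that will be mapped to real hosts later.
 {code}
 import java.net.InetAddress;
 import java.net.URI;
 import java.net.UnknownHostException;
 import java.util.Set;

 public class LogicalUriGuard {
     static InetAddress resolveIfReal(URI uri, Set<String> logicalNames)
             throws UnknownHostException {
         String host = uri.getHost();
         if (host == null || logicalNames.contains(host)) {
             return null; // logical URI: no DNS lookup should happen here
         }
         return InetAddress.getByName(host); // real host: resolution is fine
     }
 }
 {code}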



[jira] [Commented] (HADOOP-9429) TestConfiguration fails with IBM JAVA

2013-03-28 Thread Amir Sanjar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13616401#comment-13616401
 ] 

Amir Sanjar commented on HADOOP-9429:
-

will do, thanks Suresh 


 TestConfiguration fails with IBM JAVA 
 --

 Key: HADOOP-9429
 URL: https://issues.apache.org/jira/browse/HADOOP-9429
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.3-alpha
Reporter: Amir Sanjar
 Attachments: HADOOP-9429-branch2.patch, HADOOP-9429-branch2.patch, 
 HADOOP-9429.patch, HADOOP-9429.patch


 failure in assertTrue("Result has proper header", result.startsWith(
   "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"no\"?><configuration>"));



[jira] [Reopened] (HADOOP-8545) Filesystem Implementation for OpenStack Swift

2013-03-28 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HADOOP-8545:



 Filesystem Implementation for OpenStack Swift
 -

 Key: HADOOP-8545
 URL: https://issues.apache.org/jira/browse/HADOOP-8545
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs
Affects Versions: 2.0.3-alpha, 1.1.2
Reporter: Tim Miller
Assignee: Dmitry Mezhensky
  Labels: hadoop, patch
 Attachments: HADOOP-8545-10.patch, HADOOP-8545-11.patch, 
 HADOOP-8545-12.patch, HADOOP-8545-13.patch, HADOOP-8545-14.patch, 
 HADOOP-8545-15.patch, HADOOP-8545-16.patch, HADOOP-8545-17.patch, 
 HADOOP-8545-18.patch, HADOOP-8545-1.patch, HADOOP-8545-2.patch, 
 HADOOP-8545-3.patch, HADOOP-8545-4.patch, HADOOP-8545-5.patch, 
 HADOOP-8545-6.patch, HADOOP-8545-7.patch, HADOOP-8545-8.patch, 
 HADOOP-8545-9.patch, HADOOP-8545-javaclouds-2.patch, HADOOP-8545.patch, 
 HADOOP-8545.patch


 Add a filesystem implementation for OpenStack Swift object store, similar to 
 the one which exists today for S3.



[jira] [Resolved] (HADOOP-9442) Splitting issue when using NLineInputFormat with compression

2013-03-28 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved HADOOP-9442.
-

Resolution: Invalid

JIRA is for reporting bugs, not for asking questions. Please use the user 
mailing lists for questions such as this.

 Splitting issue when using NLineInputFormat with compression
 

 Key: HADOOP-9442
 URL: https://issues.apache.org/jira/browse/HADOOP-9442
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.1.2
 Environment: Tried in Apache Hadoop 1.1.1, CDH4, and Amazon EMR, with the same 
 result.
Reporter: Qiming He
Priority: Minor

 # Make a long text line; the issue seems to show up only with long single-line text.
 $ cat abook.txt | base64 -w 0 > onelinetext.b64   # 200KB+ long
 $ hadoop fs -put onelinetext.b64 /input/onelinetext.b64
 $ hadoop jar hadoop-streaming.jar \
 -input /input/onelinetext.b64 \
 -output /output \
 -inputformat org.apache.hadoop.mapred.lib.NLineInputFormat \
 -mapper wc
 Num tasks: 1, and the output has one line:
 Line 1: 1 2 202699
 which makes sense because one line per mapper is intended.
 Then, using compression with NLineInputFormat:
 $ bzip2 onelinetext.b64
 $ hadoop fs -put onelinetext.b64.bz2 /input/onelinetext.b64.bz2
 $ hadoop jar hadoop-streaming.jar \
   -Dmapred.input.compress=true \
   -Dmapred.input.compression.codec=org.apache.hadoop.io.compress.GzipCodec \
   -input /input/onelinetext.b64.gz \
   -output /output \
   -inputformat org.apache.hadoop.mapred.lib.NLineInputFormat \
   -mapper wc
 I was expecting the same results as above, because decompression should occur 
 before the one-line text is processed (i.e. by wc); however, I am getting:
 Num tasks: 397 (or another large number depending on the environment), and the 
 output has 397 lines:
 Lines 1-396: 0 0 0
 Line 397: 1 2 202699
 Any idea why mapred.map.tasks > 1? Is it incorrect splitting? I purposely chose 
 gzip because I believe it is NOT splittable. I got similar results when using 
 bzip2 and lzop codecs.



[jira] [Commented] (HADOOP-9150) Unnecessary DNS resolution attempts for logical URIs

2013-03-28 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13616484#comment-13616484
 ] 

Suresh Srinivas commented on HADOOP-9150:
-

+1 for the patch.

 Unnecessary DNS resolution attempts for logical URIs
 

 Key: HADOOP-9150
 URL: https://issues.apache.org/jira/browse/HADOOP-9150
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3, ha, performance, viewfs
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Critical
 Attachments: hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, 
 hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, 
 hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, log.txt, 
 tracing-resolver.tgz


 In the FileSystem code, we accidentally try to DNS-resolve the logical name 
 before it is converted to an actual domain name. In some DNS setups, this can 
 cause a big slowdown - e.g. in one misconfigured cluster we saw a 2-3x drop in 
 terasort throughput, since every task wasted a lot of time waiting for slow 
 "not found" responses from DNS.



[jira] [Commented] (HADOOP-9441) Denial of Service in IPC Server.java

2013-03-28 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13616487#comment-13616487
 ] 

Suresh Srinivas commented on HADOOP-9441:
-

We could place a size limit using CodedInputStream#setSizeLimit(). 
[~sanjay.radia]?
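
A minimal sketch of that suggestion (the 64 MB cap is an assumed value): 
CodedInputStream#setSizeLimit makes the parser fail with an 
InvalidProtocolBufferException instead of allocating an oversized buffer.

{code}
import com.google.protobuf.CodedInputStream;
import java.io.InputStream;

public class BoundedProtoRead {
    // Wrap an RPC input stream so a single message cannot exceed the cap.
    static CodedInputStream bounded(InputStream in) {
        CodedInputStream cis = CodedInputStream.newInstance(in);
        cis.setSizeLimit(64 * 1024 * 1024); // parsing beyond this throws
        return cis;
    }
}
{code}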

 Denial of Service in IPC Server.java
 

 Key: HADOOP-9441
 URL: https://issues.apache.org/jira/browse/HADOOP-9441
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 1.1.2
Reporter: Wouter de Bie
Priority: Minor

 While experimenting with a pure Python client for HDFS, I noticed that there 
 is a DoS vulnerability in the IPC Server. The IPC packet specifies the size (a 
 32-bit int) of the protobuf payload, and that size is used directly to create 
 the buffer used to parse the protobuf message. This means that with malformed 
 packets, clients can force the allocation of up to 4G of memory on the heap 
 (which in my case blew the heap on my test cluster).
 I haven't looked at a good way of solving this, but just wanted to raise the 
 issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9442) Splitting issue when using NLineInputFormat with compression

2013-03-28 Thread Qiming He (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13616524#comment-13616524
 ] 

Qiming He commented on HADOOP-9442:
---

It could be a bug: Hadoop does not split compressed data correctly when using 
NLineInputFormat. 
The same question has been posted to the hadoop user mailing list with no answer. 

If you do not think it is a bug, can you explain why it is NOT?

 Splitting issue when using NLineInputFormat with compression
 

 Key: HADOOP-9442
 URL: https://issues.apache.org/jira/browse/HADOOP-9442
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.1.2
 Environment: Try in Apache Hadoop 1.1.1, CDH4, and Amazon EMR. Same 
 result.
Reporter: Qiming He
Priority: Minor

 # make a long text line; it seems only long lines trigger the issue.
 $ cat abook.txt | base64 -w 0 > onelinetext.b64  # 200KB+ long
 $ hadoop fs -put onelinetext.b64 /input/onelinetext.b64
 $ hadoop jar hadoop-streaming.jar \
 -input /input/onelinetext.b64 \
 -output /output \
 -inputformat org.apache.hadoop.mapred.lib.NLineInputFormat \
 -mapper wc
 Num tasks: 1, and the output has one line:
 Line 1: 1 2 202699
 which makes sense because one line per mapper is intended.
 Then, using compression with NLineInputFormat:
 $ gzip onelinetext.b64
 $ hadoop fs -put onelinetext.b64.gz /input/onelinetext.b64.gz
 $ hadoop jar hadoop-streaming.jar \
   -Dmapred.input.compress=true \
   -Dmapred.input.compression.codec=org.apache.hadoop.io.compress.GzipCodec \
   -input /input/onelinetext.b64.gz \
   -output /output \
   -inputformat org.apache.hadoop.mapred.lib.NLineInputFormat \
   -mapper wc
 I expected the same results as above, because decompression should occur 
 before the one-line text is processed (i.e. by wc); however, I am getting:
 Num tasks: 397 (or other large numbers, depending on the environment), and the 
 output has 397 lines:
 Lines 1-396: 0 0 0
 Line 397: 1 2 202699
 Any idea why mapred.map.tasks > 1? Is the splitting incorrect? I purposely 
 chose gzip because I believe it is NOT splittable. I got similar results with 
 the bzip2 and lzop codecs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Reopened] (HADOOP-9442) Splitting issue when using NLineInputFormat with compression

2013-03-28 Thread Qiming He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qiming He reopened HADOOP-9442:
---


see my comments

 Splitting issue when using NLineInputFormat with compression
 

 Key: HADOOP-9442
 URL: https://issues.apache.org/jira/browse/HADOOP-9442
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.1.2
 Environment: Try in Apache Hadoop 1.1.1, CDH4, and Amazon EMR. Same 
 result.
Reporter: Qiming He
Priority: Minor

 # make a long text line; it seems only long lines trigger the issue.
 $ cat abook.txt | base64 -w 0 > onelinetext.b64  # 200KB+ long
 $ hadoop fs -put onelinetext.b64 /input/onelinetext.b64
 $ hadoop jar hadoop-streaming.jar \
 -input /input/onelinetext.b64 \
 -output /output \
 -inputformat org.apache.hadoop.mapred.lib.NLineInputFormat \
 -mapper wc
 Num tasks: 1, and the output has one line:
 Line 1: 1 2 202699
 which makes sense because one line per mapper is intended.
 Then, using compression with NLineInputFormat:
 $ gzip onelinetext.b64
 $ hadoop fs -put onelinetext.b64.gz /input/onelinetext.b64.gz
 $ hadoop jar hadoop-streaming.jar \
   -Dmapred.input.compress=true \
   -Dmapred.input.compression.codec=org.apache.hadoop.io.compress.GzipCodec \
   -input /input/onelinetext.b64.gz \
   -output /output \
   -inputformat org.apache.hadoop.mapred.lib.NLineInputFormat \
   -mapper wc
 I expected the same results as above, because decompression should occur 
 before the one-line text is processed (i.e. by wc); however, I am getting:
 Num tasks: 397 (or other large numbers, depending on the environment), and the 
 output has 397 lines:
 Lines 1-396: 0 0 0
 Line 397: 1 2 202699
 Any idea why mapred.map.tasks > 1? Is the splitting incorrect? I purposely 
 chose gzip because I believe it is NOT splittable. I got similar results with 
 the bzip2 and lzop codecs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9150) Unnecessary DNS resolution attempts for logical URIs

2013-03-28 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-9150:


  Resolution: Fixed
   Fix Version/s: 2.0.5-beta
  3.0.0
Target Version/s: 2.0.3-alpha, 3.0.0  (was: 3.0.0, 2.0.3-alpha)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2. Thanks for reviewing, everyone.

 Unnecessary DNS resolution attempts for logical URIs
 

 Key: HADOOP-9150
 URL: https://issues.apache.org/jira/browse/HADOOP-9150
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3, ha, performance, viewfs
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Critical
 Fix For: 3.0.0, 2.0.5-beta

 Attachments: hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, 
 hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, 
 hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, log.txt, 
 tracing-resolver.tgz


 In the FileSystem code, we accidentally try to DNS-resolve the logical name 
 before it is converted to an actual domain name. In some DNS setups, this can 
 cause a big slowdown - e.g. in one misconfigured cluster we saw a 2-3x drop in 
 terasort throughput, since every task wasted a lot of time waiting for slow 
 "not found" responses from DNS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9044) add FindClass main class to provide classpath checking of installations

2013-03-28 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13616647#comment-13616647
 ] 

Sanjay Radia commented on HADOOP-9044:
--

Given that 'resource' prints the resource URL from the config, how about a more 
general feature along the lines of 'printConfig name'?

 add FindClass main class to provide classpath checking of installations
 ---

 Key: HADOOP-9044
 URL: https://issues.apache.org/jira/browse/HADOOP-9044
 Project: Hadoop Common
  Issue Type: New Feature
  Components: util
Affects Versions: 1.1.0, 2.0.3-alpha
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor
 Attachments: HADOOP-9044.patch, HADOOP-9044.patch


 It's useful in postflight checking of a Hadoop installation to verify that 
 classes load, especially code with external JARs and native codecs. 
 An entry point designed to load a named class and create an instance of that 
 class can do this, and can be invoked from any shell script or tool that does 
 the installation.
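 A minimal sketch of such an entry point (illustrative only, not the attached 
 patch):
 {code}
 public class FindClass {
   public static void main(String[] args) {
     if (args.length != 1) {
       System.err.println("usage: FindClass <classname>");
       System.exit(2);
     }
     try {
       // Load the named class through the current classpath, then instantiate
       // it so that missing dependencies and native links surface immediately.
       Class<?> clazz = Class.forName(args[0]);
       clazz.newInstance();
       System.out.println("OK: loaded and instantiated " + clazz.getName());
     } catch (Throwable t) {
       System.err.println("FAIL: " + args[0] + ": " + t);
       System.exit(1);
     }
   }
 }
 {code}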

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9125) LdapGroupsMapping threw CommunicationException after some idle time

2013-03-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13616788#comment-13616788
 ] 

Hudson commented on HADOOP-9125:


Integrated in Hadoop-trunk-Commit #3537 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3537/])
HADOOP-9125. LdapGroupsMapping threw CommunicationException after some idle 
time. Contributed by Kai Zheng. (Revision 1461863)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1461863
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestLdapGroupsMapping.java


 LdapGroupsMapping threw CommunicationException after some idle time
 ---

 Key: HADOOP-9125
 URL: https://issues.apache.org/jira/browse/HADOOP-9125
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 0.23.3, 2.0.0-alpha
Reporter: Kai Zheng
Assignee: Kai Zheng
 Fix For: 2.0.5-beta

 Attachments: HADOOP-9125.patch, HADOOP-9125.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 LdapGroupsMapping threw the exception below after some idle time. To repeat 
 it, no call to the group mapping provider should be made during the idle time.
 2012-12-07 02:20:59,738 WARN org.apache.hadoop.security.LdapGroupsMapping: 
 Exception trying to get groups for user aduser2
 javax.naming.CommunicationException: connection closed [Root exception is 
 java.io.IOException: connection closed]; remaining name 
 'CN=Users,DC=EXAMPLE,DC=COM'
 at com.sun.jndi.ldap.LdapCtx.doSearch(LdapCtx.java:1983)
 at com.sun.jndi.ldap.LdapCtx.searchAux(LdapCtx.java:1827)
 at com.sun.jndi.ldap.LdapCtx.c_search(LdapCtx.java:1752)
 at com.sun.jndi.ldap.LdapCtx.c_search(LdapCtx.java:1769)
 at 
 com.sun.jndi.toolkit.ctx.ComponentDirContext.p_search(ComponentDirContext.java:394)
 at 
 com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:376)
 at 
 com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:358)
 at 
 javax.naming.directory.InitialDirContext.search(InitialDirContext.java:267)
 at 
 org.apache.hadoop.security.LdapGroupsMapping.getGroups(LdapGroupsMapping.java:187)
 at 
 org.apache.hadoop.security.CompositeGroupsMapping.getGroups(CompositeGroupsMapping.java:97)
 at org.apache.hadoop.security.Groups.doGetGroups(Groups.java:103)
 at org.apache.hadoop.security.Groups.getGroups(Groups.java:70)
 at 
 org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1035)
 at org.apache.hadoop.hbase.security.User.getGroupNames(User.java:90)
 at 
 org.apache.hadoop.hbase.security.access.TableAuthManager.authorize(TableAuthManager.java:355)
 at 
 org.apache.hadoop.hbase.security.access.AccessController.requirePermission(AccessController.java:379)
 at 
 org.apache.hadoop.hbase.security.access.AccessController.getUserPermissions(AccessController.java:1051)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.exec(HRegion.java:4914)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.execCoprocessor(HRegionServer.java:3546)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:372)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1399)
 Caused by: java.io.IOException: connection closed
 at com.sun.jndi.ldap.LdapClient.ensureOpen(LdapClient.java:1558)
 at com.sun.jndi.ldap.LdapClient.search(LdapClient.java:503)
 at com.sun.jndi.ldap.LdapCtx.doSearch(LdapCtx.java:1965)
 ... 28 more
 2012-12-07 02:20:59,739 WARN org.apache.hadoop.security.UserGroupInformation: 
 No groups available for user aduser2
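 For reference, a hedged sketch of the reconnect-and-retry shape such a fix 
 typically takes (doGetGroups and ctx are hypothetical stand-ins, not the 
 committed patch):
 {code}
 try {
   return doGetGroups(user);            // uses the cached DirContext
 } catch (CommunicationException e) {
   // The pooled LDAP connection went stale while idle: drop the cached
   // context and retry once on a fresh connection.
   ctx = null;
   return doGetGroups(user);
 }
 {code}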

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Commented] (HADOOP-9357) Fallback to default authority if not specified in FileContext

2013-03-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13616789#comment-13616789
 ] 

Hudson commented on HADOOP-9357:


Integrated in Hadoop-trunk-Commit #3537 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3537/])
HADOOP-9357. Fallback to default authority if not specified in FileContext. 
Contributed by Andrew Wang (Revision 1461898)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1461898
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Path.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextMainOperationsBaseTest.java


 Fallback to default authority if not specified in FileContext
 -

 Key: HADOOP-9357
 URL: https://issues.apache.org/jira/browse/HADOOP-9357
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor
 Fix For: 2.0.4-alpha

 Attachments: hadoop-9357-1.patch, hadoop-9357-2.patch, 
 hadoop-9357-3.patch


 Currently, FileContext adheres rather strictly to RFC2396 when it comes to 
 parsing absolute URIs (URIs with a scheme). If a user asks for a URI like 
 hdfs:///tmp, FileContext will error while FileSystem will add the authority 
 of the default FS (e.g. turn it into hdfs://defaultNN:port/tmp). 
 This is technically correct, but FileSystem's behavior is nicer for users and 
 okay based on 5.2.3 in the RFC, so let's do it in FileContext too:
 {noformat}
 For backwards
 compatibility, an implementation may work around such references
 by removing the scheme if it matches that of the base URI and the
 scheme is known to always use the <hier_part> syntax.  The parser
 can then continue with the steps below for the remainder of the
 reference components.  Validating parsers should mark such a
 misformed relative reference as an error.
 {noformat}
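 A minimal sketch of the proposed fallback (illustrative, not the attached 
 patch; URISyntaxException handling omitted):
 {code}
 // If the URI names a scheme but no authority, borrow the authority from the
 // default filesystem when the schemes match: hdfs:///tmp -> hdfs://nn:port/tmp
 URI defaultUri = FileSystem.getDefaultUri(conf);
 if (uri.getScheme() != null && uri.getAuthority() == null
     && uri.getScheme().equals(defaultUri.getScheme())) {
   uri = new URI(uri.getScheme(), defaultUri.getAuthority(),
       uri.getPath(), uri.getQuery(), uri.getFragment());
 }
 {code}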

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9358) Auth failed log should include exception string

2013-03-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13616791#comment-13616791
 ] 

Hudson commented on HADOOP-9358:


Integrated in Hadoop-trunk-Commit #3537 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3537/])
HADOOP-9358. Auth failed log should include exception string. Contributed 
by Todd Lipcon. (Revision 1461788)

 Result = SUCCESS
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1461788
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java


 Auth failed log should include exception string
 -

 Key: HADOOP-9358
 URL: https://issues.apache.org/jira/browse/HADOOP-9358
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc, security
Affects Versions: 3.0.0, 2.0.5-beta
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 3.0.0, 2.0.5-beta

 Attachments: hadoop-9385.txt


 Currently, when authentication fails, we see a WARN message like:
 {code}
 2013-02-28 22:49:03,152 WARN  ipc.Server 
 (Server.java:saslReadAndProcess(1056)) - Auth failed for 1.2.3.4:12345:null
 {code}
 This is not enough to understand the underlying cause. The WARN entry should 
 additionally include the exception text, e.g.:
 {code}
 2013-02-28 22:49:03,152 WARN  ipc.Server 
 (Server.java:saslReadAndProcess(1056)) - Auth failed for 1.2.3.4:12345:null 
 (GSS initiate failed [Caused by GSSException: Failure unspecified at GSS-API 
 level (Mechanism level: Request is a replay (34))])
 {code}
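 The change is roughly of this shape (hypothetical variable names; not the 
 exact committed diff):
 {code}
 LOG.warn("Auth failed for " + remoteAddress + ":" + remotePort + ":"
     + attemptingUser + " (" + e.getLocalizedMessage() + ")");
 {code}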

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9150) Unnecessary DNS resolution attempts for logical URIs

2013-03-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13616796#comment-13616796
 ] 

Hudson commented on HADOOP-9150:


Integrated in Hadoop-trunk-Commit #3537 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3537/])
HADOOP-9150. Avoid unnecessary DNS resolution attempts for logical URIs. 
Contributed by Todd Lipcon. (Revision 1462303)

 Result = SUCCESS
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1462303
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemCanonicalization.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HftpFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientFailover.java


 Unnecessary DNS resolution attempts for logical URIs
 

 Key: HADOOP-9150
 URL: https://issues.apache.org/jira/browse/HADOOP-9150
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3, ha, performance, viewfs
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Critical
 Fix For: 3.0.0, 2.0.5-beta

 Attachments: hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, 
 hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, 
 hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, log.txt, 
 tracing-resolver.tgz


 In the FileSystem code, we accidentally try to DNS-resolve the logical name 
 before it is converted to an actual domain name. In some DNS setups, this can 
 cause a big slowdown - e.g. in one misconfigured cluster we saw a 2-3x drop in 
 terasort throughput, since every task wasted a lot of time waiting for slow 
 "not found" responses from DNS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9440) Unit Test: hadoop-common2.0.3 TestIPC fails on protobuf2.5.0

2013-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13616929#comment-13616929
 ] 

Hadoop QA commented on HADOOP-9440:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12575838/HADOOP-9440.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2373//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2373//console

This message is automatically generated.

 Unit Test: hadoop-common2.0.3 TestIPC fails on protobuf2.5.0
 

 Key: HADOOP-9440
 URL: https://issues.apache.org/jira/browse/HADOOP-9440
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.3-alpha
Reporter: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-9440.patch


 TestIPC runs normally with protobuf 2.4.1 or below. But with protobuf 2.5.0, 
 TestIPC.testIpcTimeout and TestIPC.testIpcConnectTimeout will fail.
 java.io.IOException: Failed on local exception: 
 com.google.protobuf.InvalidProtocolBufferException: 500 millis timeout while 
 waiting for channel to be ready for read. ch : 
 java.nio.channels.SocketChannel[connected local=/127.0.0.1:50850 
 remote=louis-ThinkPad-T410/127.0.0.1:50353]; Host Details : local host is: 
 louis-ThinkPad-T410/127.0.0.1; destination host is: 
 louis-ThinkPad-T410:50353; 
   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:761)
   at org.apache.hadoop.ipc.Client.call(Client.java:1239)
   at org.apache.hadoop.ipc.Client.call(Client.java:1163)
   at org.apache.hadoop.ipc.TestIPC.testIpcTimeout(TestIPC.java:492)
 testIpcConnectTimeout(org.apache.hadoop.ipc.TestIPC)  Time elapsed: 2009 sec  
  ERROR!
 java.io.IOException: Failed on local exception: 
 com.google.protobuf.InvalidProtocolBufferException: 2000 millis timeout while 
 waiting for channel to be ready for read. ch : 
 java.nio.channels.SocketChannel[connected local=/127.0.0.1:51304 
 remote=louis-ThinkPad-T410/127.0.0.1:39525]; Host Details : local host is: 
 louis-ThinkPad-T410/127.0.0.1; destination host is: 
 louis-ThinkPad-T410:39525; 
   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:761)
   at org.apache.hadoop.ipc.Client.call(Client.java:1239)
   at org.apache.hadoop.ipc.Client.call(Client.java:1163)
   at org.apache.hadoop.ipc.TestIPC.testIpcConnectTimeout(TestIPC.java:515)
 TestIPC.testIpcTimeout and TestIPC.testIpcConnectTimeout fail because they 
 catch com.google.protobuf.InvalidProtocolBufferException, not 
 SocketTimeoutException.
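 One way the test's assertion could tolerate both protobuf versions, sketched 
 with a hypothetical helper (not the attached patch):
 {code}
 try {
   makeTimedOutRpcCall(client, addr);  // hypothetical stand-in for the test's RPC call
   fail("Expected the call to time out");
 } catch (IOException e) {
   // protobuf 2.4.x surfaces the SocketTimeoutException directly as the cause;
   // protobuf 2.5.x wraps it in an InvalidProtocolBufferException.
   Throwable cause = e.getCause();
   assertTrue("expected a timeout-related cause, got " + cause,
       cause instanceof SocketTimeoutException
           || cause instanceof InvalidProtocolBufferException);
 }
 {code}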

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9437) TestNativeIO#testRenameTo fails on Windows due to assumption that POSIX errno is embedded in NativeIOException

2013-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13616946#comment-13616946
 ] 

Hadoop QA commented on HADOOP-9437:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12575807/HADOOP-9437.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2374//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2374//console

This message is automatically generated.

 TestNativeIO#testRenameTo fails on Windows due to assumption that POSIX errno 
 is embedded in NativeIOException
 --

 Key: HADOOP-9437
 URL: https://issues.apache.org/jira/browse/HADOOP-9437
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-9437.1.patch, HADOOP-9437.2.patch


 HDFS-4428 added a detailed error message for failures to rename files by 
 embedding the POSIX errno in the {{NativeIOException}}.  On Windows, the 
 mapping of errno is not performed, so the errno enum value will not be 
 present in the {{NativeIOException}}.
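 A sketch of the kind of platform-aware assertion this implies (assumed shapes; 
 not the attached patch):
 {code}
 try {
   NativeIO.renameTo(nonExistentFile, targetFile);  // assumed call under test
   fail("Expected rename to fail");
 } catch (NativeIOException e) {
   if (Shell.WINDOWS) {
     // Windows does not populate the POSIX errno enum, so only the exception
     // type can be checked there.
     assertNotNull(e);
   } else {
     assertEquals(Errno.ENOENT, e.getErrno());
   }
 }
 {code}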

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9436) NetgroupCache does not refresh membership correctly

2013-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13616955#comment-13616955
 ] 

Hadoop QA commented on HADOOP-9436:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12575748/HADOOP-9436.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

  {color:red}-1 javac{color}.  The applied patch generated 1372 javac 
compiler warnings (more than the trunk's current 1371 warnings).

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2375//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2375//artifact/trunk/patchprocess/diffJavacWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2375//console

This message is automatically generated.

 NetgroupCache does not refresh membership correctly
 ---

 Key: HADOOP-9436
 URL: https://issues.apache.org/jira/browse/HADOOP-9436
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.7
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: HADOOP-9436.patch


 NetgroupCache is used to work around the inability to obtain a single 
 user-to-groups mapping from a netgroup. For example, the ACL code 
 pre-populates this cache, so that any user-group mapping can be resolved for 
 all groups defined in the service.
 However, the current refresh code only adds users to existing groups, so a 
 loss of group membership won't take effect. This is because the internal 
 user-groups mapping cache is never invalidated. If this is simply invalidated 
 on clear(), the cache entries will build up correctly, but user-group 
 resolution may fail during refresh, resulting in incorrectly denying accesses.
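 One common way to satisfy that constraint, sketched with hypothetical names 
 (not the attached patch): rebuild the user-to-groups mapping off to the side 
 and swap it in atomically, so lookups keep hitting the old mapping until the 
 refresh completes:
 {code}
 Map<String, Set<String>> fresh = new HashMap<String, Set<String>>();
 for (String group : groupsToCache) {
   for (String user : lookupNetgroupMembers(group)) {  // hypothetical lookup
     Set<String> userGroups = fresh.get(user);
     if (userGroups == null) {
       userGroups = new HashSet<String>();
       fresh.put(user, userGroups);
     }
     userGroups.add(group);
   }
 }
 // Single-reference swap: stale entries disappear, and there is no window in
 // which a user resolves to an empty group set mid-refresh.
 userToNetgroupsMap = fresh;
 {code}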

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9443) Port winutils static code analysis change to trunk

2013-03-28 Thread Chuan Liu (JIRA)
Chuan Liu created HADOOP-9443:
-

 Summary: Port winutils static code analysis change to trunk
 Key: HADOOP-9443
 URL: https://issues.apache.org/jira/browse/HADOOP-9443
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Chuan Liu
Assignee: Chuan Liu


We hit a problem in winutils when running tests on Windows, and the static code 
analysis change fixes it. More specifically, the old code always assumes the 
security descriptor obtained from GetSecurityDescriptorControl() is relative, 
and makes an absolute security descriptor out of it. The new absolute security 
descriptor is then passed to SetSecurityDescriptorDacl() to set permissions on 
the file. If the original security descriptor is already absolute, the new 
absolute security descriptor will be NULL, and we run into the problem. This is 
exactly what happened in our case.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9443) Port winutils static code analysis change to trunk

2013-03-28 Thread Chuan Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chuan Liu updated HADOOP-9443:
--

Attachment: HADOOP-9443-trunk.patch

Attaching a patch for trunk.

 Port winutils static code analysis change to trunk
 --

 Key: HADOOP-9443
 URL: https://issues.apache.org/jira/browse/HADOOP-9443
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Chuan Liu
Assignee: Chuan Liu
 Attachments: HADOOP-9443-trunk.patch


 We hit a problem in winutils when running tests on Windows, and the static code 
 analysis change fixes it. More specifically, the old code always assumes the 
 security descriptor obtained from GetSecurityDescriptorControl() is relative, 
 and makes an absolute security descriptor out of it. The new absolute security 
 descriptor is then passed to SetSecurityDescriptorDacl() to set permissions on 
 the file. If the original security descriptor is already absolute, the new 
 absolute security descriptor will be NULL, and we run into the problem. This is 
 exactly what happened in our case.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9443) Port winutils static code analysis change to trunk

2013-03-28 Thread Chuan Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chuan Liu updated HADOOP-9443:
--

Status: Patch Available  (was: Open)

 Port winutils static code analysis change to trunk
 --

 Key: HADOOP-9443
 URL: https://issues.apache.org/jira/browse/HADOOP-9443
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Chuan Liu
Assignee: Chuan Liu
 Attachments: HADOOP-9443-trunk.patch


 We hit a problem in winutils when running tests on Windows, and the static code 
 analysis change fixes it. More specifically, the old code always assumes the 
 security descriptor obtained from GetSecurityDescriptorControl() is relative, 
 and makes an absolute security descriptor out of it. The new absolute security 
 descriptor is then passed to SetSecurityDescriptorDacl() to set permissions on 
 the file. If the original security descriptor is already absolute, the new 
 absolute security descriptor will be NULL, and we run into the problem. This is 
 exactly what happened in our case.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9435) Native build hadoop-common-project fails on $JAVA_HOME/include/jni_md.h using ibm java

2013-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13616968#comment-13616968
 ] 

Hadoop QA commented on HADOOP-9435:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12575456/HADOOP-9435.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2376//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2376//console

This message is automatically generated.

 Native build hadoop-common-project fails on $JAVA_HOME/include/jni_md.h using 
 ibm java
 --

 Key: HADOOP-9435
 URL: https://issues.apache.org/jira/browse/HADOOP-9435
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.3-alpha
Reporter: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-9435.patch


 When building the hadoop-common-project native code with IBM Java using a 
 command like: 
 mvn package -Pnative
 the build fails with the following errors.
  [exec] CMake Error at JNIFlags.cmake:113 (MESSAGE):
  [exec]   Failed to find a viable JVM installation under JAVA_HOME.
  [exec] Call Stack (most recent call first):
  [exec]   CMakeLists.txt:24 (include)
  [exec] 
  [exec] 
  [exec] -- Configuring incomplete, errors occurred!
 The reason is that IBM Java provides $JAVA_HOME/include/jniport.h instead of 
 the $JAVA_HOME/include/jni_md.h found in Oracle Java.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9200) enhance unit-test coverage of class org.apache.hadoop.security.NetgroupCache

2013-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13616978#comment-13616978
 ] 

Hadoop QA commented on HADOOP-9200:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12575855/HADOOP-9200-trunk--N2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2377//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2377//console

This message is automatically generated.

 enhance unit-test coverage of class org.apache.hadoop.security.NetgroupCache
 

 Key: HADOOP-9200
 URL: https://issues.apache.org/jira/browse/HADOOP-9200
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
 Attachments: HADOOP-9200-trunk--N2.patch, HADOOP-9200-trunk.patch


 The class org.apache.hadoop.security.NetgroupCache has poor unit-test 
 coverage. Enhance it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9443) Port winutils static code analysis change to trunk

2013-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13616997#comment-13616997
 ] 

Hadoop QA commented on HADOOP-9443:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12576002/HADOOP-9443-trunk.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2379//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2379//console

This message is automatically generated.

 Port winutils static code analysis change to trunk
 --

 Key: HADOOP-9443
 URL: https://issues.apache.org/jira/browse/HADOOP-9443
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Chuan Liu
Assignee: Chuan Liu
 Attachments: HADOOP-9443-trunk.patch


 We hit a problem in winutils when running tests on Windows, and the static code 
 analysis change fixes it. More specifically, the old code always assumes the 
 security descriptor obtained from GetSecurityDescriptorControl() is relative, 
 and makes an absolute security descriptor out of it. The new absolute security 
 descriptor is then passed to SetSecurityDescriptorDacl() to set permissions on 
 the file. If the original security descriptor is already absolute, the new 
 absolute security descriptor will be NULL, and we run into the problem. This is 
 exactly what happened in our case.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9078) enhance unit-test coverage of class org.apache.hadoop.fs.FileContext

2013-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13617035#comment-13617035
 ] 

Hadoop QA commented on HADOOP-9078:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12575857/HADOOP-9078-trunk--N1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.fs.TestFcHdfsSymlink

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2378//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2378//console

This message is automatically generated.

 enhance unit-test coverage of class org.apache.hadoop.fs.FileContext
 

 Key: HADOOP-9078
 URL: https://issues.apache.org/jira/browse/HADOOP-9078
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
 Attachments: HADOOP-9078--b.patch, HADOOP-9078-branch-0.23.patch, 
 HADOOP-9078-branch-2--b.patch, HADOOP-9078-branch-2--c.patch, 
 HADOOP-9078-branch-2--N1.patch, HADOOP-9078-branch-2.patch, 
 HADOOP-9078.patch, 
 HADOOP-9078-patch-from-[trunk-gd]-to-[fb-HADOOP-9078-trunk-gd]-N1.patch, 
 HADOOP-9078-trunk--N1.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9357) Fallback to default authority if not specified in FileContext

2013-03-28 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13617062#comment-13617062
 ] 

Chris Nauroth commented on HADOOP-9357:
---

Hi, Andrew.  I've just started to see a test failure in YARN nodemanager: 
{{TestContainerLocalizer#testContainerLocalizerMain}}.  git bisect indicates 
that the problem was introduced with this patch for HADOOP-9357.  I first 
noticed the problem because of a test failure during pre-commit checks on my 
patch for YARN-493.  Here is the output from the pre-commit test failure:

https://builds.apache.org/job/PreCommit-YARN-Build/619//testReport/org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer/TestContainerLocalizer/testContainerLocalizerMain/

It looks like a mock was strictly expecting to see file authority, and now 
it's no longer receiving it.  Are you seeing this test fail too?  Thanks!

 Fallback to default authority if not specified in FileContext
 -

 Key: HADOOP-9357
 URL: https://issues.apache.org/jira/browse/HADOOP-9357
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor
 Fix For: 2.0.4-alpha

 Attachments: hadoop-9357-1.patch, hadoop-9357-2.patch, 
 hadoop-9357-3.patch


 Currently, FileContext adheres rather strictly to RFC2396 when it comes to 
 parsing absolute URIs (URIs with a scheme). If a user asks for a URI like 
 hdfs:///tmp, FileContext will error while FileSystem will add the authority 
 of the default FS (e.g. turn it into hdfs://defaultNN:port/tmp). 
 This is technically correct, but FileSystem's behavior is nicer for users and 
 okay based on 5.2.3 in the RFC, so let's do it in FileContext too:
 {noformat}
 For backwards
 compatibility, an implementation may work around such references
 by removing the scheme if it matches that of the base URI and the
 scheme is known to always use the <hier_part> syntax.  The parser
 can then continue with the steps below for the remainder of the
 reference components.  Validating parsers should mark such a
 misformed relative reference as an error.
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9357) Fallback to default authority if not specified in FileContext

2013-03-28 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13617106#comment-13617106
 ] 

Vinod Kumar Vavilapalli commented on HADOOP-9357:
-

[~cnauroth], we are seeing this consistently; filed YARN-516.

[~andrew.wang], [~eli], it seems like an incompatible change. Should we mark it so?

 Fallback to default authority if not specified in FileContext
 -

 Key: HADOOP-9357
 URL: https://issues.apache.org/jira/browse/HADOOP-9357
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor
 Fix For: 2.0.4-alpha

 Attachments: hadoop-9357-1.patch, hadoop-9357-2.patch, 
 hadoop-9357-3.patch


 Currently, FileContext adheres rather strictly to RFC2396 when it comes to 
 parsing absolute URIs (URIs with a scheme). If a user asks for a URI like 
 hdfs:///tmp, FileContext will error while FileSystem will add the authority 
 of the default FS (e.g. turn it into hdfs://defaultNN:port/tmp). 
 This is technically correct, but FileSystem's behavior is nicer for users and 
 okay based on 5.2.3 in the RFC, so let's do it in FileContext too:
 {noformat}
 For backwards
 compatibility, an implementation may work around such references
 by removing the scheme if it matches that of the base URI and the
 scheme is known to always use the <hier_part> syntax.  The parser
 can then continue with the steps below for the remainder of the
 reference components.  Validating parsers should mark such a
 misformed relative reference as an error.
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9357) Fallback to default authority if not specified in FileContext

2013-03-28 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13617107#comment-13617107
 ] 

Chris Nauroth commented on HADOOP-9357:
---

I think this patch also has caused a problem with {{TestFcHdfsSymlink}} in 
HDFS.  I saw a failure during the pre-commit checks on my patch for HDFS-4372:

https://builds.apache.org/job/PreCommit-HDFS-Build/4164//testReport/

Are you also seeing this one fail?  Thanks again!


 Fallback to default authority if not specified in FileContext
 -

 Key: HADOOP-9357
 URL: https://issues.apache.org/jira/browse/HADOOP-9357
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor
 Fix For: 2.0.4-alpha

 Attachments: hadoop-9357-1.patch, hadoop-9357-2.patch, 
 hadoop-9357-3.patch


 Currently, FileContext adheres rather strictly to RFC2396 when it comes to 
 parsing absolute URIs (URIs with a scheme). If a user asks for a URI like 
 hdfs:///tmp, FileContext will error while FileSystem will add the authority 
 of the default FS (e.g. turn it into hdfs://defaultNN:port/tmp). 
 This is technically correct, but FileSystem's behavior is nicer for users and 
 okay based on 5.2.3 in the RFC, so let's do it in FileContext too:
 {noformat}
 For backwards
 compatibility, an implementation may work around such references
 by removing the scheme if it matches that of the base URI and the
 scheme is known to always use the <hier_part> syntax.  The parser
 can then continue with the steps below for the remainder of the
 reference components.  Validating parsers should mark such a
 misformed relative reference as an error.
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9357) Fallback to default authority if not specified in FileContext

2013-03-28 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13617112#comment-13617112
 ] 

Vinod Kumar Vavilapalli commented on HADOOP-9357:
-

Yes, the 4 test cases are all failing for me too, with the same exception message.

 Fallback to default authority if not specified in FileContext
 -

 Key: HADOOP-9357
 URL: https://issues.apache.org/jira/browse/HADOOP-9357
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor
 Fix For: 2.0.4-alpha

 Attachments: hadoop-9357-1.patch, hadoop-9357-2.patch, 
 hadoop-9357-3.patch


 Currently, FileContext adheres rather strictly to RFC2396 when it comes to 
 parsing absolute URIs (URIs with a scheme). If a user asks for a URI like 
 hdfs:///tmp, FileContext will error while FileSystem will add the authority 
 of the default FS (e.g. turn it into hdfs://defaultNN:port/tmp). 
 This is technically correct, but FileSystem's behavior is nicer for users and 
 okay based on 5.2.3 in the RFC, so let's do it in FileContext too:
 {noformat}
 For backwards
 compatibility, an implementation may work around such references
 by removing the scheme if it matches that of the base URI and the
 scheme is known to always use the <hier_part> syntax.  The parser
 can then continue with the steps below for the remainder of the
 reference components.  Validating parsers should mark such a
 misformed relative reference as an error.
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9357) Fallback to default authority if not specified in FileContext

2013-03-28 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13617114#comment-13617114
 ] 

Vinod Kumar Vavilapalli commented on HADOOP-9357:
-

The fix version was set to 2.0.4-alpha; should it be 2.0.5-beta? Also, it was 
left unresolved?

 Fallback to default authority if not specified in FileContext
 -

 Key: HADOOP-9357
 URL: https://issues.apache.org/jira/browse/HADOOP-9357
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor
 Fix For: 2.0.4-alpha

 Attachments: hadoop-9357-1.patch, hadoop-9357-2.patch, 
 hadoop-9357-3.patch


 Currently, FileContext adheres rather strictly to RFC2396 when it comes to 
 parsing absolute URIs (URIs with a scheme). If a user asks for a URI like 
 hdfs:///tmp, FileContext will error while FileSystem will add the authority 
 of the default FS (e.g. turn it into hdfs://defaultNN:port/tmp). 
 This is technically correct, but FileSystem's behavior is nicer for users and 
 okay based on 5.2.3 in the RFC, so let's do it in FileContext too:
 {noformat}
 For backwards
 compatibility, an implementation may work around such references
 by removing the scheme if it matches that of the base URI and the
 scheme is known to always use the <hier_part> syntax.  The parser
 can then continue with the steps below for the remainder of the
 reference components.  Validating parsers should mark such a
 misformed relative reference as an error.
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira