[jira] [Updated] (HDFS-8374) Remove chunkSize from ECSchema as its not required for coders

2015-05-25 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8374:

Status: Patch Available  (was: Open)

As HDFS-8382 is committed, submitting the patch now.

 Remove chunkSize from ECSchema as its not required for coders
 -------------------------------------------------------------

 Key: HDFS-8374
 URL: https://issues.apache.org/jira/browse/HDFS-8374
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-8374-HDFS-7285-01.patch


 Remove {{chunkSize}} from ECSchema as discussed 
 [here|https://issues.apache.org/jira/browse/HDFS-8347?focusedCommentId=14539108&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14539108]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8374) Remove chunkSize from ECSchema as its not required for coders

2015-05-25 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8374:

Issue Type: Bug  (was: Sub-task)
Parent: (was: HDFS-7285)

 Remove chunkSize from ECSchema as its not required for coders
 -------------------------------------------------------------

 Key: HDFS-8374
 URL: https://issues.apache.org/jira/browse/HDFS-8374
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-8374-HDFS-7285-01.patch


 Remove {{chunkSize}} from ECSchema as discussed 
 [here|https://issues.apache.org/jira/browse/HDFS-8347?focusedCommentId=14539108&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14539108]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8408) Revisit and refactor ErasureCodingInfo

2015-05-25 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-8408:

   Resolution: Fixed
Fix Version/s: HDFS-7285
   Status: Resolved  (was: Patch Available)

Committed to branch. Thanks [~szetszwo] for review.

 Revisit and refactor ErasureCodingInfo
 --------------------------------------

 Key: HDFS-8408
 URL: https://issues.apache.org/jira/browse/HDFS-8408
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Minor
 Fix For: HDFS-7285

 Attachments: HDFS-8408-HDFS-7285-01.patch, 
 HDFS-8408-HDFS-7285-02.patch


 As mentioned in HDFS-8375 
 [here|https://issues.apache.org/jira/browse/HDFS-8375?focusedCommentId=14544618&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14544618]
  
 {{ErasureCodingInfo}} needs a revisit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8408) Revisit and refactor ErasureCodingInfo

2015-05-25 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558003#comment-14558003
 ] 

Vinayakumar B commented on HDFS-8408:
-------------------------------------

Thanks [~szetszwo] for the review. I will commit it.

bq. Question: Are the command -createZone and -getZone for ec zone only but not 
encryption zone? If yes, we should rename them to -createEcZone and -getEcZone. 
We could do it in a separated JIRA.
These commands are present in a separate 'erasurecode' shell, which relates to 
erasure coding only. So IMO the naming should be fine.

 Revisit and refactor ErasureCodingInfo
 --------------------------------------

 Key: HDFS-8408
 URL: https://issues.apache.org/jira/browse/HDFS-8408
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Minor
 Attachments: HDFS-8408-HDFS-7285-01.patch, 
 HDFS-8408-HDFS-7285-02.patch


 As mentioned in HDFS-8375 
 [here|https://issues.apache.org/jira/browse/HDFS-8375?focusedCommentId=14544618&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14544618]
  
 {{ErasureCodingInfo}} needs a revisit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8463) Calling DFSInputStream.seekToNewSource just after stream creation causes NullPointerException

2015-05-25 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-8463:
---
Attachment: HDFS-8463.001.patch

I think there should be a guard because {{seekToNewSource}} is exposed to the 
public via FSDataInputStream.

I attached 001. It keeps the current behaviour and just throws an IOException 
instead of a NullPointerException when {{seekToNewSource}} is called before its 
pre-condition is satisfied.
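The guard pattern being proposed could look roughly like the following. This is a minimal, self-contained sketch with hypothetical names: {{currentNode}} merely stands in for whatever internal state is still null right after stream creation in the real DFSInputStream; it is not the actual field or method body.

```java
import java.io.IOException;

// Hypothetical sketch of the guard described above: fail with a descriptive
// IOException rather than an NPE when seekToNewSource is called before any
// source has been chosen. Names here are illustrative, not Hadoop's.
public class SeekGuardDemo {
    private Object currentNode; // null until the first read selects a datanode

    public boolean seekToNewSource(long targetPos) throws IOException {
        if (currentNode == null) {
            // Guard: nothing has been read yet, so there is no "current"
            // source to switch away from.
            throw new IOException(
                "Attempted to seek to a new source before any source was chosen");
        }
        return true; // the real implementation would pick another datanode here
    }

    public static void main(String[] args) {
        SeekGuardDemo stream = new SeekGuardDemo();
        try {
            stream.seekToNewSource(0L);
        } catch (IOException e) {
            System.out.println("Caught: " + e.getMessage());
        }
    }
}
```

The point of the sketch is only the ordering: the pre-condition check runs before any dereference, so callers coming through FSDataInputStream see a meaningful IOException instead of a NullPointerException.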

 Calling DFSInputStream.seekToNewSource just after stream creation causes  
 NullPointerException
 ---------------------------------------------------------------------------

 Key: HDFS-8463
 URL: https://issues.apache.org/jira/browse/HDFS-8463
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Attachments: HDFS-8463.001.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8474) Impala compilation breaks with libhdfs in 2.7 as getJNIEnv is not visible

2015-05-25 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558077#comment-14558077
 ] 

Varun Saxena commented on HDFS-8474:


This fails because symbol visibility is now hidden by default, and default 
visibility has not been specified for {{getJNIEnv}}.

 Impala compilation breaks with libhdfs in 2.7 as getJNIEnv is not visible
 --------------------------------------------------------------------------

 Key: HDFS-8474
 URL: https://issues.apache.org/jira/browse/HDFS-8474
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build, libhdfs
Affects Versions: 2.7.0
 Environment: Red Hat Enterprise Linux Server release 6.4 and gcc 4.3.4
Reporter: Varun Saxena
Assignee: Varun Saxena
Priority: Critical
 Attachments: HDFS-8474.01.patch


 Impala in CDH 5.2.0 is not compiling with libhdfs.so in 2.7.0 on RedHat 6.4.
 This is because getJNIEnv is not visible in the so file.
 Compilation fails with below error message :
 ../../build/release/exec/libExec.a(hbase-table-scanner.cc.o): In function 
 `impala::HBaseTableScanner::Init()':
 /usr1/code/Impala/code/current/impala/be/src/exec/hbase-table-scanner.cc:113: 
 undefined reference to `getJNIEnv'
 ../../build/release/exprs/libExprs.a(hive-udf-call.cc.o):/usr1/code/Impala/code/current/impala/be/src/exprs/hive-udf-call.cc:227:
  more undefined references to `getJNIEnv' follow
 collect2: ld returned 1 exit status
 make[3]: *** [be/build/release/service/impalad] Error 1
 make[2]: *** [be/src/service/CMakeFiles/impalad.dir/all] Error 2
 make[1]: *** [be/src/service/CMakeFiles/impalad.dir/rule] Error 2
 make: *** [impalad] Error 2
 Compiler Impala Failed, exit
 libhdfs.so.0.0.0 returns nothing when following command is run.
 nm -D libhdfs.so.0.0.0  | grep getJNIEnv



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8474) Impala compilation breaks with libhdfs in 2.7 as getJNIEnv is not visible

2015-05-25 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated HDFS-8474:
---
Status: Patch Available  (was: Open)

 Impala compilation breaks with libhdfs in 2.7 as getJNIEnv is not visible
 --------------------------------------------------------------------------

 Key: HDFS-8474
 URL: https://issues.apache.org/jira/browse/HDFS-8474
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build, libhdfs
Affects Versions: 2.7.0
 Environment: Red Hat Enterprise Linux Server release 6.4 and gcc 4.3.4
Reporter: Varun Saxena
Assignee: Varun Saxena
Priority: Critical
 Attachments: HDFS-8474.01.patch


 Impala in CDH 5.2.0 is not compiling with libhdfs.so in 2.7.0 on RedHat 6.4.
 This is because getJNIEnv is not visible in the so file.
 Compilation fails with below error message :
 ../../build/release/exec/libExec.a(hbase-table-scanner.cc.o): In function 
 `impala::HBaseTableScanner::Init()':
 /usr1/code/Impala/code/current/impala/be/src/exec/hbase-table-scanner.cc:113: 
 undefined reference to `getJNIEnv'
 ../../build/release/exprs/libExprs.a(hive-udf-call.cc.o):/usr1/code/Impala/code/current/impala/be/src/exprs/hive-udf-call.cc:227:
  more undefined references to `getJNIEnv' follow
 collect2: ld returned 1 exit status
 make[3]: *** [be/build/release/service/impalad] Error 1
 make[2]: *** [be/src/service/CMakeFiles/impalad.dir/all] Error 2
 make[1]: *** [be/src/service/CMakeFiles/impalad.dir/rule] Error 2
 make: *** [impalad] Error 2
 Compiler Impala Failed, exit
 libhdfs.so.0.0.0 returns nothing when following command is run.
 nm -D libhdfs.so.0.0.0  | grep getJNIEnv



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8474) Impala compilation breaks with libhdfs in 2.7 as getJNIEnv is not visible

2015-05-25 Thread Varun Saxena (JIRA)
Varun Saxena created HDFS-8474:
--

 Summary: Impala compilation breaks with libhdfs in 2.7 as 
getJNIEnv is not visible
 Key: HDFS-8474
 URL: https://issues.apache.org/jira/browse/HDFS-8474
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build, libhdfs
Affects Versions: 2.7.0
 Environment: Red Hat Enterprise Linux Server release 6.4 and gcc 4.3.4
Reporter: Varun Saxena
Assignee: Varun Saxena
Priority: Critical


Impala in CDH 5.2.0 is not compiling with libhdfs.so in 2.7.0 on RedHat 6.4.
This is because getJNIEnv is not visible in the so file.

Compilation fails with below error message :
../../build/release/exec/libExec.a(hbase-table-scanner.cc.o): In function 
`impala::HBaseTableScanner::Init()':
/usr1/code/Impala/code/current/impala/be/src/exec/hbase-table-scanner.cc:113: 
undefined reference to `getJNIEnv'
../../build/release/exprs/libExprs.a(hive-udf-call.cc.o):/usr1/code/Impala/code/current/impala/be/src/exprs/hive-udf-call.cc:227:
 more undefined references to `getJNIEnv' follow
collect2: ld returned 1 exit status
make[3]: *** [be/build/release/service/impalad] Error 1
make[2]: *** [be/src/service/CMakeFiles/impalad.dir/all] Error 2
make[1]: *** [be/src/service/CMakeFiles/impalad.dir/rule] Error 2
make: *** [impalad] Error 2
Compiler Impala Failed, exit


libhdfs.so.0.0.0 returns nothing when following command is run.
nm -D libhdfs.so.0.0.0  | grep getJNIEnv



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8474) Impala compilation breaks with libhdfs in 2.7 as getJNIEnv is not visible

2015-05-25 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated HDFS-8474:
---
Attachment: HDFS-8474.01.patch

 Impala compilation breaks with libhdfs in 2.7 as getJNIEnv is not visible
 --------------------------------------------------------------------------

 Key: HDFS-8474
 URL: https://issues.apache.org/jira/browse/HDFS-8474
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build, libhdfs
Affects Versions: 2.7.0
 Environment: Red Hat Enterprise Linux Server release 6.4 and gcc 4.3.4
Reporter: Varun Saxena
Assignee: Varun Saxena
Priority: Critical
 Attachments: HDFS-8474.01.patch


 Impala in CDH 5.2.0 is not compiling with libhdfs.so in 2.7.0 on RedHat 6.4.
 This is because getJNIEnv is not visible in the so file.
 Compilation fails with below error message :
 ../../build/release/exec/libExec.a(hbase-table-scanner.cc.o): In function 
 `impala::HBaseTableScanner::Init()':
 /usr1/code/Impala/code/current/impala/be/src/exec/hbase-table-scanner.cc:113: 
 undefined reference to `getJNIEnv'
 ../../build/release/exprs/libExprs.a(hive-udf-call.cc.o):/usr1/code/Impala/code/current/impala/be/src/exprs/hive-udf-call.cc:227:
  more undefined references to `getJNIEnv' follow
 collect2: ld returned 1 exit status
 make[3]: *** [be/build/release/service/impalad] Error 1
 make[2]: *** [be/src/service/CMakeFiles/impalad.dir/all] Error 2
 make[1]: *** [be/src/service/CMakeFiles/impalad.dir/rule] Error 2
 make: *** [impalad] Error 2
 Compiler Impala Failed, exit
 libhdfs.so.0.0.0 returns nothing when following command is run.
 nm -D libhdfs.so.0.0.0  | grep getJNIEnv



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8473) createErasureCodingZone should check whether cellSize is available

2015-05-25 Thread Yong Zhang (JIRA)
Yong Zhang created HDFS-8473:


 Summary: createErasureCodingZone should check whether cellSize is 
available
 Key: HDFS-8473
 URL: https://issues.apache.org/jira/browse/HDFS-8473
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Yong Zhang
Assignee: Yong Zhang


createErasureCodingZone should check whether the cellSize is available; 
otherwise, creating a file under this EC zone may throw a 
HadoopIllegalArgumentException.
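A sketch of the proposed check, under loud assumptions: the set of supported cell sizes, the class name, and the method shape below are all illustrative, not the actual FSNamesystem/ErasureCodingZoneManager API.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of the validation proposed above: reject an unavailable
// cellSize at zone-creation time, instead of surprising the user with a
// HadoopIllegalArgumentException when a file is later created under the zone.
// The supported sizes listed here are illustrative, not Hadoop's actual list.
public class EcZoneValidator {
    private static final Set<Integer> AVAILABLE_CELL_SIZES =
        new HashSet<>(Arrays.asList(64 * 1024, 128 * 1024, 256 * 1024));

    public static void createErasureCodingZone(String path, int cellSize) {
        if (!AVAILABLE_CELL_SIZES.contains(cellSize)) {
            // Fail fast with a clear message naming the offending parameter.
            throw new IllegalArgumentException(
                "cellSize " + cellSize + " is not available for EC zone " + path);
        }
        // ... proceed with zone creation ...
    }
}
```

Failing fast here moves the error from an arbitrary later file-create call to the one administrative command that actually supplied the bad value.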



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8474) Impala compilation breaks with libhdfs in 2.7 as getJNIEnv is not visible

2015-05-25 Thread Kiran Kumar M R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558089#comment-14558089
 ] 

Kiran Kumar M R commented on HDFS-8474:
---

On second thought,
I checked the getJNIEnv() usage in libhdfs; it's used internally to invoke the 
HDFS Java APIs. There is no reason for libhdfs to export this API.

I checked the Impala file which fails to compile: 
https://github.com/cloudera/Impala/blob/cdh5-trunk/be/src/exec/hbase-table-scanner.cc
Here JNIEnv is used to invoke the HBase API. It looks like Impala is using the 
jni_helper from HDFS instead of writing its own.

I think Impala is better off writing its own helper. Otherwise jni_helper may 
need to move to hadoop-common and provide JNIEnv for all Hadoop ecosystem 
services.





 Impala compilation breaks with libhdfs in 2.7 as getJNIEnv is not visible
 --------------------------------------------------------------------------

 Key: HDFS-8474
 URL: https://issues.apache.org/jira/browse/HDFS-8474
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build, libhdfs
Affects Versions: 2.7.0
 Environment: Red Hat Enterprise Linux Server release 6.4 and gcc 4.3.4
Reporter: Varun Saxena
Assignee: Varun Saxena
Priority: Critical
 Attachments: HDFS-8474.01.patch


 Impala in CDH 5.2.0 is not compiling with libhdfs.so in 2.7.0 on RedHat 6.4.
 This is because getJNIEnv is not visible in the so file.
 Compilation fails with below error message :
 ../../build/release/exec/libExec.a(hbase-table-scanner.cc.o): In function 
 `impala::HBaseTableScanner::Init()':
 /usr1/code/Impala/code/current/impala/be/src/exec/hbase-table-scanner.cc:113: 
 undefined reference to `getJNIEnv'
 ../../build/release/exprs/libExprs.a(hive-udf-call.cc.o):/usr1/code/Impala/code/current/impala/be/src/exprs/hive-udf-call.cc:227:
  more undefined references to `getJNIEnv' follow
 collect2: ld returned 1 exit status
 make[3]: *** [be/build/release/service/impalad] Error 1
 make[2]: *** [be/src/service/CMakeFiles/impalad.dir/all] Error 2
 make[1]: *** [be/src/service/CMakeFiles/impalad.dir/rule] Error 2
 make: *** [impalad] Error 2
 Compiler Impala Failed, exit
 libhdfs.so.0.0.0 returns nothing when following command is run.
 nm -D libhdfs.so.0.0.0  | grep getJNIEnv



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8474) Impala compilation breaks with libhdfs in 2.7 as getJNIEnv is not visible

2015-05-25 Thread Kiran Kumar M R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558099#comment-14558099
 ] 

Kiran Kumar M R commented on HDFS-8474:
---

Link to Impala JIRA https://issues.cloudera.org/browse/IMPALA-2029


 Impala compilation breaks with libhdfs in 2.7 as getJNIEnv is not visible
 --------------------------------------------------------------------------

 Key: HDFS-8474
 URL: https://issues.apache.org/jira/browse/HDFS-8474
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build, libhdfs
Affects Versions: 2.7.0
 Environment: Red Hat Enterprise Linux Server release 6.4 and gcc 4.3.4
Reporter: Varun Saxena
Assignee: Varun Saxena
Priority: Critical
 Attachments: HDFS-8474.01.patch


 Impala in CDH 5.2.0 is not compiling with libhdfs.so in 2.7.0 on RedHat 6.4.
 This is because getJNIEnv is not visible in the so file.
 Compilation fails with below error message :
 ../../build/release/exec/libExec.a(hbase-table-scanner.cc.o): In function 
 `impala::HBaseTableScanner::Init()':
 /usr1/code/Impala/code/current/impala/be/src/exec/hbase-table-scanner.cc:113: 
 undefined reference to `getJNIEnv'
 ../../build/release/exprs/libExprs.a(hive-udf-call.cc.o):/usr1/code/Impala/code/current/impala/be/src/exprs/hive-udf-call.cc:227:
  more undefined references to `getJNIEnv' follow
 collect2: ld returned 1 exit status
 make[3]: *** [be/build/release/service/impalad] Error 1
 make[2]: *** [be/src/service/CMakeFiles/impalad.dir/all] Error 2
 make[1]: *** [be/src/service/CMakeFiles/impalad.dir/rule] Error 2
 make: *** [impalad] Error 2
 Compiler Impala Failed, exit
 libhdfs.so.0.0.0 returns nothing when following command is run.
 nm -D libhdfs.so.0.0.0  | grep getJNIEnv



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8475) Exception in createBlockOutputStream java.io.EOFException: Premature EOF: no length prefix available

2015-05-25 Thread Vinod Valecha (JIRA)
Vinod Valecha created HDFS-8475:
---

 Summary: Exception in createBlockOutputStream 
java.io.EOFException: Premature EOF: no length prefix available
 Key: HDFS-8475
 URL: https://issues.apache.org/jira/browse/HDFS-8475
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Vinod Valecha
Priority: Blocker


Scenario:
=========
write a file
corrupt a block manually

Exception stack trace:

2015-05-24 02:31:55.291 INFO [T-33716795] 
[org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer] Exception in 
createBlockOutputStream
java.io.EOFException: Premature EOF: no length prefix available
at 
org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1492)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1155)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
[5/24/15 2:31:55:291 UTC] 02027a3b DFSClient I 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer createBlockOutputStream 
Exception in createBlockOutputStream
 java.io.EOFException: Premature EOF: no length 
prefix available
at 
org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1492)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1155)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)

2015-05-24 02:31:55.291 INFO [T-33716795] 
[org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer] Abandoning 
BP-176676314-10.108.106.59-1402620296713:blk_1404621403_330880579
[5/24/15 2:31:55:291 UTC] 02027a3b DFSClient I 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer nextBlockOutputStream 
Abandoning BP-176676314-10.108.106.59-1402620296713:blk_1404621403_330880579
2015-05-24 02:31:55.299 INFO [T-33716795] 
[org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer] Excluding datanode 
10.108.106.59:50010
[5/24/15 2:31:55:299 UTC] 02027a3b DFSClient I 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer nextBlockOutputStream 
Excluding datanode 10.108.106.59:50010
2015-05-24 02:31:55.300 WARNING [T-33716795] 
[org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer] DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
/var/db/opera/files/B4889CCDA75F9751DDBB488E5AAB433E/BE4DAEF290B7136ED6EF3D4B157441A2/BE4DAEF290B7136ED6EF3D4B157441A2-4.pag
 could only be replicated to 0 nodes instead of minReplication (=1).  There are 
1 datanode(s) running and 1 node(s) are excluded in this operation.
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2477)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)


[5/24/15 2:31:55:300 UTC] 02027a3b DFSClient W 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer run DataStreamer Exception
 
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
/var/db/opera/files/B4889CCDA75F9751DDBB488E5AAB433E/BE4DAEF290B7136ED6EF3D4B157441A2/BE4DAEF290B7136ED6EF3D4B157441A2-4.pag
 could only be replicated to 0 nodes instead of minReplication (=1).  There are 
1 datanode(s) running and 1 node(s) are excluded in this operation.
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2477)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)


2015-05-24 02:31:55.301 WARNING [T-880] [E-AA380B730CF751508DC9163BAC8E4D1D] 
[job:B94FEC9411E2C8563C842833D78142CF] [org.apache.hadoop.hdfs.DFSOutputStream] 
Error while syncing
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
/var/db/opera/files/B4889CCDA75F9751DDBB488E5AAB433E/BE4DAEF290B7136ED6EF3D4B157441A2/BE4DAEF290B7136ED6EF3D4B157441A2-4.pag
 could only be replicated to 0 nodes instead of minReplication (=1).  There are 
1 datanode(s) running and 1 node(s) are excluded in this operation.
at 

[jira] [Commented] (HDFS-8474) Impala compilation breaks with libhdfs in 2.7 as getJNIEnv is not visible

2015-05-25 Thread Kiran Kumar M R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558082#comment-14558082
 ] 

Kiran Kumar M R commented on HDFS-8474:
---

LGTM.
{{LIBHDFS_EXTERNAL}} is already defined in hdfs.h, but the undef is done at the 
end of the file.
Maybe that can be reused or moved to a common file instead of being defined 
again in jni_helper.h.


 Impala compilation breaks with libhdfs in 2.7 as getJNIEnv is not visible
 --------------------------------------------------------------------------

 Key: HDFS-8474
 URL: https://issues.apache.org/jira/browse/HDFS-8474
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build, libhdfs
Affects Versions: 2.7.0
 Environment: Red Hat Enterprise Linux Server release 6.4 and gcc 4.3.4
Reporter: Varun Saxena
Assignee: Varun Saxena
Priority: Critical
 Attachments: HDFS-8474.01.patch


 Impala in CDH 5.2.0 is not compiling with libhdfs.so in 2.7.0 on RedHat 6.4.
 This is because getJNIEnv is not visible in the so file.
 Compilation fails with below error message :
 ../../build/release/exec/libExec.a(hbase-table-scanner.cc.o): In function 
 `impala::HBaseTableScanner::Init()':
 /usr1/code/Impala/code/current/impala/be/src/exec/hbase-table-scanner.cc:113: 
 undefined reference to `getJNIEnv'
 ../../build/release/exprs/libExprs.a(hive-udf-call.cc.o):/usr1/code/Impala/code/current/impala/be/src/exprs/hive-udf-call.cc:227:
  more undefined references to `getJNIEnv' follow
 collect2: ld returned 1 exit status
 make[3]: *** [be/build/release/service/impalad] Error 1
 make[2]: *** [be/src/service/CMakeFiles/impalad.dir/all] Error 2
 make[1]: *** [be/src/service/CMakeFiles/impalad.dir/rule] Error 2
 make: *** [impalad] Error 2
 Compiler Impala Failed, exit
 libhdfs.so.0.0.0 returns nothing when following command is run.
 nm -D libhdfs.so.0.0.0  | grep getJNIEnv



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8474) Impala compilation breaks with libhdfs in 2.7 as getJNIEnv is not visible

2015-05-25 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558096#comment-14558096
 ] 

Varun Saxena commented on HDFS-8474:


Yeah, I have raised an issue in the Impala JIRA too, making the same point. 
Let's see what the community's opinion is there.

 Impala compilation breaks with libhdfs in 2.7 as getJNIEnv is not visible
 --------------------------------------------------------------------------

 Key: HDFS-8474
 URL: https://issues.apache.org/jira/browse/HDFS-8474
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build, libhdfs
Affects Versions: 2.7.0
 Environment: Red Hat Enterprise Linux Server release 6.4 and gcc 4.3.4
Reporter: Varun Saxena
Assignee: Varun Saxena
Priority: Critical
 Attachments: HDFS-8474.01.patch


 Impala in CDH 5.2.0 is not compiling with libhdfs.so in 2.7.0 on RedHat 6.4.
 This is because getJNIEnv is not visible in the so file.
 Compilation fails with below error message :
 ../../build/release/exec/libExec.a(hbase-table-scanner.cc.o): In function 
 `impala::HBaseTableScanner::Init()':
 /usr1/code/Impala/code/current/impala/be/src/exec/hbase-table-scanner.cc:113: 
 undefined reference to `getJNIEnv'
 ../../build/release/exprs/libExprs.a(hive-udf-call.cc.o):/usr1/code/Impala/code/current/impala/be/src/exprs/hive-udf-call.cc:227:
  more undefined references to `getJNIEnv' follow
 collect2: ld returned 1 exit status
 make[3]: *** [be/build/release/service/impalad] Error 1
 make[2]: *** [be/src/service/CMakeFiles/impalad.dir/all] Error 2
 make[1]: *** [be/src/service/CMakeFiles/impalad.dir/rule] Error 2
 make: *** [impalad] Error 2
 Compiler Impala Failed, exit
 libhdfs.so.0.0.0 returns nothing when following command is run.
 nm -D libhdfs.so.0.0.0  | grep getJNIEnv



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8474) Impala compilation breaks with libhdfs in 2.7 as getJNIEnv is not visible

2015-05-25 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated HDFS-8474:
---
Description: 
Impala in CDH 5.2.0 is not compiling with libhdfs.so in 2.7.0 on RedHat 6.4.
This is because getJNIEnv is not visible in the so file.

Compilation fails with below error message :
../../build/release/exec/libExec.a(hbase-table-scanner.cc.o): In function 
`impala::HBaseTableScanner::Init()':
/usr1/code/Impala/code/current/impala/be/src/exec/hbase-table-scanner.cc:113: 
undefined reference to `getJNIEnv'
../../build/release/exprs/libExprs.a(hive-udf-call.cc.o):/usr1/code/Impala/code/current/impala/be/src/exprs/hive-udf-call.cc:227:
 more undefined references to `getJNIEnv' follow
collect2: ld returned 1 exit status
make[3]: *** [be/build/release/service/impalad] Error 1
make[2]: *** [be/src/service/CMakeFiles/impalad.dir/all] Error 2
make[1]: *** [be/src/service/CMakeFiles/impalad.dir/rule] Error 2
make: *** [impalad] Error 2
Compiler Impala Failed, exit


libhdfs.so.0.0.0 returns nothing when following command is run.
nm -D libhdfs.so.0.0.0  | grep getJNIEnv

The change in HDFS-7879 breaks the backward compatibility of libhdfs, although 
it can be argued that Impala shouldn't be using the above API.

  was:
Impala in CDH 5.2.0 is not compiling with libhdfs.so in 2.7.0 on RedHat 6.4.
This is because getJNIEnv is not visible in the so file.

Compilation fails with below error message :
../../build/release/exec/libExec.a(hbase-table-scanner.cc.o): In function 
`impala::HBaseTableScanner::Init()':
/usr1/code/Impala/code/current/impala/be/src/exec/hbase-table-scanner.cc:113: 
undefined reference to `getJNIEnv'
../../build/release/exprs/libExprs.a(hive-udf-call.cc.o):/usr1/code/Impala/code/current/impala/be/src/exprs/hive-udf-call.cc:227:
 more undefined references to `getJNIEnv' follow
collect2: ld returned 1 exit status
make[3]: *** [be/build/release/service/impalad] Error 1
make[2]: *** [be/src/service/CMakeFiles/impalad.dir/all] Error 2
make[1]: *** [be/src/service/CMakeFiles/impalad.dir/rule] Error 2
make: *** [impalad] Error 2
Compiler Impala Failed, exit


libhdfs.so.0.0.0 returns nothing when following command is run.
nm -D libhdfs.so.0.0.0  | grep getJNIEnv


 Impala compilation breaks with libhdfs in 2.7 as getJNIEnv is not visible
 --------------------------------------------------------------------------

 Key: HDFS-8474
 URL: https://issues.apache.org/jira/browse/HDFS-8474
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build, libhdfs
Affects Versions: 2.7.0
 Environment: Red Hat Enterprise Linux Server release 6.4 and gcc 4.3.4
Reporter: Varun Saxena
Assignee: Varun Saxena
Priority: Critical
 Attachments: HDFS-8474.01.patch


 Impala in CDH 5.2.0 is not compiling with libhdfs.so in 2.7.0 on RedHat 6.4.
 This is because getJNIEnv is not visible in the so file.
 Compilation fails with below error message :
 ../../build/release/exec/libExec.a(hbase-table-scanner.cc.o): In function 
 `impala::HBaseTableScanner::Init()':
 /usr1/code/Impala/code/current/impala/be/src/exec/hbase-table-scanner.cc:113: 
 undefined reference to `getJNIEnv'
 ../../build/release/exprs/libExprs.a(hive-udf-call.cc.o):/usr1/code/Impala/code/current/impala/be/src/exprs/hive-udf-call.cc:227:
  more undefined references to `getJNIEnv' follow
 collect2: ld returned 1 exit status
 make[3]: *** [be/build/release/service/impalad] Error 1
 make[2]: *** [be/src/service/CMakeFiles/impalad.dir/all] Error 2
 make[1]: *** [be/src/service/CMakeFiles/impalad.dir/rule] Error 2
 make: *** [impalad] Error 2
 Compiler Impala Failed, exit
 libhdfs.so.0.0.0 returns nothing when following command is run.
 nm -D libhdfs.so.0.0.0  | grep getJNIEnv
 The change in HDFS-7879 breaks the backward compatibility of libhdfs although 
 it can be argued that Impala shouldn't be using above API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8377) Support HTTP/2 in datanode

2015-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558147#comment-14558147
 ] 

Hudson commented on HDFS-8377:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #207 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/207/])
HDFS-8377. Support HTTP/2 in datanode. Contributed by Duo Zhang. (wheat9: rev 
ada233b7cd7db39e609bb57e487fee8cec59cd48)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/SimpleHttpProxyHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/dtp/DtpHttp2FrameListener.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/DatanodeHttpServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ExceptionHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web/dtp/Http2ResponseHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageHandler.java
* hadoop-project/pom.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/dtp/DtpHttp2Handler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/URLDispatcher.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/HdfsWriter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web/dtp/TestDtpHttp2.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/PortUnificationServerHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java


 Support HTTP/2 in datanode
 --

 Key: HDFS-8377
 URL: https://issues.apache.org/jira/browse/HDFS-8377
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Duo Zhang
Assignee: Duo Zhang
 Fix For: 2.8.0

 Attachments: HDFS-8377.1.patch, HDFS-8377.2.patch, HDFS-8377.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8377) Support HTTP/2 in datanode

2015-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558151#comment-14558151
 ] 

Hudson commented on HDFS-8377:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #938 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/938/])
HDFS-8377. Support HTTP/2 in datanode. Contributed by Duo Zhang. (wheat9: rev 
ada233b7cd7db39e609bb57e487fee8cec59cd48)
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/DatanodeHttpServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/SimpleHttpProxyHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ExceptionHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web/dtp/TestDtpHttp2.java
* hadoop-project/pom.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/dtp/DtpHttp2Handler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web/dtp/Http2ResponseHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/PortUnificationServerHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/dtp/DtpHttp2FrameListener.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/HdfsWriter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/URLDispatcher.java


 Support HTTP/2 in datanode
 --

 Key: HDFS-8377
 URL: https://issues.apache.org/jira/browse/HDFS-8377
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Duo Zhang
Assignee: Duo Zhang
 Fix For: 2.8.0

 Attachments: HDFS-8377.1.patch, HDFS-8377.2.patch, HDFS-8377.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-8473) createErasureCodingZone should check whether cellSize is available

2015-05-25 Thread Yong Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yong Zhang resolved HDFS-8473.
--
Resolution: Not A Problem

Not an issue.

 createErasureCodingZone should check whether cellSize is available
 --

 Key: HDFS-8473
 URL: https://issues.apache.org/jira/browse/HDFS-8473
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Yong Zhang
Assignee: Yong Zhang

 createErasureCodingZone should check whether the cellSize is valid; 
 otherwise, creating a file under the EC zone may throw a 
 HadoopIllegalArgumentException.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8474) Impala compilation breaks with libhdfs in 2.7 as getJNIEnv is not visible

2015-05-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558172#comment-14558172
 ] 

Hadoop QA commented on HDFS-8474:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   5m 12s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 26s | There were no new javac warning 
messages. |
| {color:green}+1{color} | release audit |   0m 19s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 32s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | native |   1m 10s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 125m 13s | Tests failed in hadoop-hdfs. |
| | | 141m 31s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestFileAppend4 |
|   | hadoop.hdfs.TestRead |
|   | hadoop.hdfs.TestHdfsAdmin |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.datanode.TestBlockHasMultipleReplicasOnSameDN |
|   | hadoop.hdfs.TestClientReportBadBlock |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy |
|   | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
|   | hadoop.hdfs.TestAppendSnapshotTruncate |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles |
|   | hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup |
|   | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement |
|   | hadoop.hdfs.TestFileAppendRestart |
|   | hadoop.TestRefreshCallQueue |
|   | hadoop.hdfs.TestListFilesInDFS |
|   | hadoop.hdfs.server.datanode.TestDnRespectsBlockReportSplitThreshold |
|   | hadoop.hdfs.server.mover.TestMover |
|   | hadoop.security.TestPermissionSymlinks |
|   | hadoop.hdfs.TestDFSRollback |
|   | hadoop.hdfs.TestFileConcurrentReader |
|   | hadoop.hdfs.TestFileAppend2 |
|   | hadoop.hdfs.TestGetFileChecksum |
|   | hadoop.hdfs.crypto.TestHdfsCryptoStreams |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestDatanodeRestart |
|   | hadoop.hdfs.server.datanode.TestTriggerBlockReport |
|   | hadoop.hdfs.server.blockmanagement.TestOverReplicatedBlocks |
|   | hadoop.hdfs.TestBlockStoragePolicy |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.datanode.TestReadOnlySharedStorage |
|   | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.hdfs.TestDFSUpgrade |
|   | hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages |
|   | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
|   | hadoop.hdfs.server.datanode.TestIncrementalBrVariations |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.cli.TestXAttrCLI |
|   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.server.blockmanagement.TestComputeInvalidateWork |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.TestDFSRename |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
|   | hadoop.hdfs.TestDFSShell |
|   | hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer |
|   | hadoop.hdfs.server.datanode.TestDiskError |
|   | hadoop.security.TestPermission |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory |
|   | hadoop.hdfs.TestFileCreationDelete |
|   | hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
|   | hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport |
|   | hadoop.hdfs.TestDFSStorageStateRecovery |
|   | hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.hdfs.TestAppendDifferentChecksum |
|   | hadoop.hdfs.TestRemoteBlockReader |
|   | hadoop.hdfs.TestRestartDFS |
|   | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
|   | hadoop.cli.TestHDFSCLI |
|   | hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestScrLazyPersistFiles |
|   | hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.server.mover.TestStorageMover |
|   | hadoop.hdfs.TestHDFSFileSystemContract |

[jira] [Commented] (HDFS-8474) Impala compilation breaks with libhdfs in 2.7 as getJNIEnv is not visible

2015-05-25 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558253#comment-14558253
 ] 

Varun Saxena commented on HDFS-8474:


Weird result from Jenkins: it gives a NoSuchMethodError for a method unrelated 
to the code change. I tried several of the failed tests; they pass locally.

 Impala compilation breaks with libhdfs in 2.7 as getJNIEnv is not visible
 -

 Key: HDFS-8474
 URL: https://issues.apache.org/jira/browse/HDFS-8474
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build, libhdfs
Affects Versions: 2.7.0
 Environment: Red Hat Enterprise Linux Server release 6.4 and gcc 4.3.4
Reporter: Varun Saxena
Assignee: Varun Saxena
Priority: Critical
 Attachments: HDFS-8474.01.patch


 Impala in CDH 5.2.0 is not compiling with libhdfs.so in 2.7.0 on RedHat 6.4.
 This is because getJNIEnv is not visible in the so file.
 Compilation fails with below error message :
 ../../build/release/exec/libExec.a(hbase-table-scanner.cc.o): In function 
 `impala::HBaseTableScanner::Init()':
 /usr1/code/Impala/code/current/impala/be/src/exec/hbase-table-scanner.cc:113: 
 undefined reference to `getJNIEnv'
 ../../build/release/exprs/libExprs.a(hive-udf-call.cc.o):/usr1/code/Impala/code/current/impala/be/src/exprs/hive-udf-call.cc:227:
  more undefined references to `getJNIEnv' follow
 collect2: ld returned 1 exit status
 make[3]: *** [be/build/release/service/impalad] Error 1
 make[2]: *** [be/src/service/CMakeFiles/impalad.dir/all] Error 2
 make[1]: *** [be/src/service/CMakeFiles/impalad.dir/rule] Error 2
 make: *** [impalad] Error 2
 Compiler Impala Failed, exit
 libhdfs.so.0.0.0 returns nothing when following command is run.
 nm -D libhdfs.so.0.0.0  | grep getJNIEnv
 The change in HDFS-7879 breaks the backward compatibility of libhdfs although 
 it can be argued that Impala shouldn't be using above API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)



[jira] [Commented] (HDFS-8377) Support HTTP/2 in datanode

2015-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558284#comment-14558284
 ] 

Hudson commented on HDFS-8377:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2136 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2136/])
HDFS-8377. Support HTTP/2 in datanode. Contributed by Duo Zhang. (wheat9: rev 
ada233b7cd7db39e609bb57e487fee8cec59cd48)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/dtp/DtpHttp2Handler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/dtp/DtpHttp2FrameListener.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/HdfsWriter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/DatanodeHttpServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web/dtp/Http2ResponseHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ExceptionHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/URLDispatcher.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/SimpleHttpProxyHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageHandler.java
* hadoop-project/pom.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/PortUnificationServerHandler.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web/dtp/TestDtpHttp2.java


 Support HTTP/2 in datanode
 --

 Key: HDFS-8377
 URL: https://issues.apache.org/jira/browse/HDFS-8377
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Duo Zhang
Assignee: Duo Zhang
 Fix For: 2.8.0

 Attachments: HDFS-8377.1.patch, HDFS-8377.2.patch, HDFS-8377.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8377) Support HTTP/2 in datanode

2015-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558333#comment-14558333
 ] 

Hudson commented on HDFS-8377:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #206 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/206/])
HDFS-8377. Support HTTP/2 in datanode. Contributed by Duo Zhang. (wheat9: rev 
ada233b7cd7db39e609bb57e487fee8cec59cd48)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/DatanodeHttpServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/SimpleHttpProxyHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/dtp/DtpHttp2Handler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/HdfsWriter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web/dtp/Http2ResponseHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web/dtp/TestDtpHttp2.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/URLDispatcher.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/PortUnificationServerHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ExceptionHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/dtp/DtpHttp2FrameListener.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageHandler.java
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
* hadoop-project/pom.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Support HTTP/2 in datanode
 --

 Key: HDFS-8377
 URL: https://issues.apache.org/jira/browse/HDFS-8377
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Duo Zhang
Assignee: Duo Zhang
 Fix For: 2.8.0

 Attachments: HDFS-8377.1.patch, HDFS-8377.2.patch, HDFS-8377.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8377) Support HTTP/2 in datanode

2015-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558293#comment-14558293
 ] 

Hudson commented on HDFS-8377:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #196 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/196/])
HDFS-8377. Support HTTP/2 in datanode. Contributed by Duo Zhang. (wheat9: rev 
ada233b7cd7db39e609bb57e487fee8cec59cd48)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/PortUnificationServerHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web/dtp/Http2ResponseHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web/dtp/TestDtpHttp2.java
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ExceptionHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/HdfsWriter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/SimpleHttpProxyHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/dtp/DtpHttp2Handler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/URLDispatcher.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java
* hadoop-project/pom.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/dtp/DtpHttp2FrameListener.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/DatanodeHttpServer.java


 Support HTTP/2 in datanode
 --

 Key: HDFS-8377
 URL: https://issues.apache.org/jira/browse/HDFS-8377
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Duo Zhang
Assignee: Duo Zhang
 Fix For: 2.8.0

 Attachments: HDFS-8377.1.patch, HDFS-8377.2.patch, HDFS-8377.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8452) In WebHDFS, duplicate directory creation is not throwing exception.

2015-05-25 Thread Jagadesh Kiran N (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558315#comment-14558315
 ] 

Jagadesh Kiran N commented on HDFS-8452:


Please clarify the above comments.

 In WebHDFS, duplicate directory creation is not throwing exception.
 ---

 Key: HDFS-8452
 URL: https://issues.apache.org/jira/browse/HDFS-8452
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Jagadesh Kiran N
Priority: Minor
 Fix For: 3.0.0


 *Case 1 (CLI):*
 a. In HDFS, create a new directory:
 {code}./hdfs dfs -mkdir /new{code}
 A new directory will be created.
 b. Now execute the same command again:
 {code}mkdir: `/new': File exists{code}
 An error message will be shown.
 *Case 2 (REST API):*
 a. In HDFS, create a new directory:
 {code}curl -i -X PUT -L 
 http://host1:50070/webhdfs/v1/new1?op=MKDIRS&overwrite=false{code}
 A new directory will be created.
 b. Now execute the same WebHDFS command again.
 No exception is thrown back to the client:
{code}
 HTTP/1.1 200 OK
 Cache-Control: no-cache
 Expires: Thu, 21 May 2015 15:11:57 GMT
 Date: Thu, 21 May 2015 15:11:57 GMT
 Pragma: no-cache
 Content-Type: application/json
 Transfer-Encoding: chunked
{code}
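The CLI behavior in Case 1 follows POSIX mkdir semantics, which can be sketched locally for contrast with the idempotent WebHDFS MKDIRS response above (local filesystem stand-in only, not an HDFS call):

```python
import os
import tempfile

# Local sketch of the CLI case: a second mkdir on the same path fails with
# "File exists", unlike WebHDFS MKDIRS, which returns 200 OK both times.
base = tempfile.mkdtemp()
target = os.path.join(base, "new")

os.mkdir(target)            # first creation succeeds
try:
    os.mkdir(target)        # second attempt raises, like `hdfs dfs -mkdir`
    raised = False
except FileExistsError:
    raised = True
assert raised
```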
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8433) blockToken is not set in constructInternalBlock and parseStripedBlockGroup in StripedBlockUtil

2015-05-25 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8433:

Attachment: HDFS-8433.00.patch

00 initial patch:
1. Tokens support a block ID range, so the inner blocks of a block group share one token.
2. Set the accessToken for inner blocks in StripedBlockUtil.
3. Fix error handling in the InputStream.
I'll fix error handling in the OutputStream in the next patch.

 blockToken is not set in constructInternalBlock and parseStripedBlockGroup in 
 StripedBlockUtil
 --

 Key: HDFS-8433
 URL: https://issues.apache.org/jira/browse/HDFS-8433
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Tsz Wo Nicholas Sze
Assignee: Walter Su
 Attachments: HDFS-8433.00.patch


 The blockToken provided in LocatedStripedBlock is not used to create 
 LocatedBlock in constructInternalBlock and parseStripedBlockGroup in 
 StripedBlockUtil.
 We should also add ec tests with security on.
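The range-based token idea from the patch notes above can be sketched as follows; the helper names and the ID arithmetic are illustrative assumptions, not the actual HDFS implementation:

```python
# Illustrative sketch: internal blocks of a striped block group get
# consecutive IDs, so a single token covering that ID range can authorize
# all of them instead of issuing one token per internal block.
def internal_block_id(group_id, index):
    return group_id + index

def token_covers(token_lo, token_hi, block_id):
    return token_lo <= block_id < token_hi

GROUP_ID = 1 << 20          # hypothetical block-group ID
NUM_INTERNAL = 9            # e.g. 6 data + 3 parity blocks
lo, hi = GROUP_ID, GROUP_ID + NUM_INTERNAL

assert all(token_covers(lo, hi, internal_block_id(GROUP_ID, i))
           for i in range(NUM_INTERNAL))
assert not token_covers(lo, hi, GROUP_ID + NUM_INTERNAL)
```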



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8377) Support HTTP/2 in datanode

2015-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558339#comment-14558339
 ] 

Hudson commented on HDFS-8377:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2154 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2154/])
HDFS-8377. Support HTTP/2 in datanode. Contributed by Duo Zhang. (wheat9: rev 
ada233b7cd7db39e609bb57e487fee8cec59cd48)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/dtp/DtpHttp2Handler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java
* hadoop-project/pom.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web/dtp/Http2ResponseHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/DatanodeHttpServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ExceptionHandler.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/dtp/DtpHttp2FrameListener.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/SimpleHttpProxyHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/HdfsWriter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web/dtp/TestDtpHttp2.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/URLDispatcher.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/PortUnificationServerHandler.java
* hadoop-hdfs-project/hadoop-hdfs/pom.xml


 Support HTTP/2 in datanode
 --

 Key: HDFS-8377
 URL: https://issues.apache.org/jira/browse/HDFS-8377
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Duo Zhang
Assignee: Duo Zhang
 Fix For: 2.8.0

 Attachments: HDFS-8377.1.patch, HDFS-8377.2.patch, HDFS-8377.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8474) Impala compilation breaks with libhdfs in 2.7 as getJNIEnv is not visible

2015-05-25 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558426#comment-14558426
 ] 

Allen Wittenauer commented on HDFS-8474:


bq.  There is no reason for libhdfs to export this API.

Exactly this. -1 on this change.

 Impala compilation breaks with libhdfs in 2.7 as getJNIEnv is not visible
 -

 Key: HDFS-8474
 URL: https://issues.apache.org/jira/browse/HDFS-8474
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build, libhdfs
Affects Versions: 2.7.0
 Environment: Red Hat Enterprise Linux Server release 6.4 and gcc 4.3.4
Reporter: Varun Saxena
Assignee: Varun Saxena
Priority: Critical
 Attachments: HDFS-8474.01.patch


 Impala in CDH 5.2.0 is not compiling with libhdfs.so in 2.7.0 on RedHat 6.4.
 This is because getJNIEnv is not visible in the so file.
 Compilation fails with below error message :
 ../../build/release/exec/libExec.a(hbase-table-scanner.cc.o): In function 
 `impala::HBaseTableScanner::Init()':
 /usr1/code/Impala/code/current/impala/be/src/exec/hbase-table-scanner.cc:113: 
 undefined reference to `getJNIEnv'
 ../../build/release/exprs/libExprs.a(hive-udf-call.cc.o):/usr1/code/Impala/code/current/impala/be/src/exprs/hive-udf-call.cc:227:
  more undefined references to `getJNIEnv' follow
 collect2: ld returned 1 exit status
 make[3]: *** [be/build/release/service/impalad] Error 1
 make[2]: *** [be/src/service/CMakeFiles/impalad.dir/all] Error 2
 make[1]: *** [be/src/service/CMakeFiles/impalad.dir/rule] Error 2
 make: *** [impalad] Error 2
 Compiler Impala Failed, exit
 libhdfs.so.0.0.0 returns nothing when following command is run.
 nm -D libhdfs.so.0.0.0  | grep getJNIEnv
 The change in HDFS-7879 breaks the backward compatibility of libhdfs although 
 it can be argued that Impala shouldn't be using above API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8319) Erasure Coding: support decoding for stateful read

2015-05-25 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-8319:

Attachment: HDFS-8319.001.patch

Initial patch. The main change is to unify the stateful read code and the pread 
code. I will add tests for the decoding functionality later.

 Erasure Coding: support decoding for stateful read
 --

 Key: HDFS-8319
 URL: https://issues.apache.org/jira/browse/HDFS-8319
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HDFS-8319.001.patch


 HDFS-7678 adds the decoding functionality for pread. This jira plans to add 
 decoding to stateful read.
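What "decoding" means on a read path can be shown with a toy XOR parity. HDFS actually uses a Reed-Solomon coder; this is only the simplest possible stand-in for illustration:

```python
# Toy stand-in for the real Reed-Solomon coder: recover one lost data cell
# from the surviving cell plus an XOR parity cell.
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

data0, data1 = b"abcd", b"efgh"
parity = xor_bytes(data0, data1)        # computed at write time

# read-time decode: data1's replica is unreadable, so reconstruct it from
# the remaining data cell and the parity cell
recovered = xor_bytes(data0, parity)
assert recovered == data1
```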



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8465) Mover is success even when space exceeds storage quota.

2015-05-25 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558554#comment-14558554
 ] 

Tsz Wo Nicholas Sze commented on HDFS-8465:
---

Mover is similar to replication -- it should not care about quota. Even if the 
quota is exceeded, under-replicated blocks will still be replicated. Similarly, 
Mover should keep moving blocks.

 Mover is success even when space exceeds storage quota.
 ---

 Key: HDFS-8465
 URL: https://issues.apache.org/jira/browse/HDFS-8465
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer & mover, namenode
Affects Versions: 2.7.0
Reporter: Archana T
Assignee: surendra singh lilhore

 *Steps :*
 1. Create directory /dir 
 2. Set its storage policy to HOT --
 hdfs storagepolicies -setStoragePolicy -path /dir -policy HOT
 3. Insert files of total size 10,000B  into /dir.
 4. Set above path /dir ARCHIVE type quota to 5,000B --
 hdfs dfsadmin -setSpaceQuota 5000 -storageType ARCHIVE /dir
 {code}
 hdfs dfs -count -v -q -h -t  /dir
DISK_QUOTAREM_DISK_QUOTA SSD_QUOTA REM_SSD_QUOTA ARCHIVE_QUOTA 
 REM_ARCHIVE_QUOTA PATHNAME
  none   inf  none   inf 4.9 K 
 4.9 K /dir
 {code}
 5. Now change policy of '/dir' to COLD
 6. Execute Mover command
 *Observations:*
 1. Mover is successful moving all 10,000B to ARCHIVE datapath.
 2. Count command displays negative value '-59.4K'--
 {code}
 hdfs dfs -count -v -q -h -t  /dir
DISK_QUOTAREM_DISK_QUOTA SSD_QUOTA REM_SSD_QUOTA ARCHIVE_QUOTA 
 REM_ARCHIVE_QUOTA PATHNAME
  none   inf  none   inf 4.9 K 
   -59.4 K /dir
 {code}
 *Expected:*
 Mover should not succeed, as the ARCHIVE quota is only 5,000B.
 A negative value should not be displayed in the quota output.
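The negative remaining value follows directly from subtracting consumed space from the quota without a guard. The numbers below are illustrative (the exact -59.4K in the report also involves replication, which this sketch ignores):

```python
# Illustrative accounting: once Mover places more bytes on ARCHIVE storage
# than the quota allows, "remaining = quota - used" goes negative, which is
# what the count command then displays.
ARCHIVE_QUOTA = 5_000       # bytes, as set by -setSpaceQuota
used = 10_000               # bytes actually moved to ARCHIVE

remaining = ARCHIVE_QUOTA - used
assert remaining == -5_000
```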



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8366) Erasure Coding: Make the timeout parameter of polling blocking queue configurable in DFSStripedOutputStream

2015-05-25 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558551#comment-14558551
 ] 

Tsz Wo Nicholas Sze commented on HDFS-8366:
---

FYI, the blocking queue timeout (and the conf) will be removed by HDFS-8254.

 Erasure Coding: Make the timeout parameter of polling blocking queue 
 configurable in DFSStripedOutputStream
 ---

 Key: HDFS-8366
 URL: https://issues.apache.org/jira/browse/HDFS-8366
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Attachments: HDFS-8366-001.patch, HDFS-8366-HDFS-7285-02.patch


 The timeout for getting a striped or ended block in 
 {{DFSStripedOutputStream#Coordinator}} should be configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8254) In StripedDataStreamer, it is hard to tolerate datanode failure in the leading streamer

2015-05-25 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-8254:
--
Attachment: h8254_20150526.patch

h8254_20150526.patch: removes leading streamer code and removes BlockingQueue 
timeout.

 In StripedDataStreamer, it is hard to tolerate datanode failure in the 
 leading streamer
 ---

 Key: HDFS-8254
 URL: https://issues.apache.org/jira/browse/HDFS-8254
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Attachments: h8254_20150526.patch


 StripedDataStreamer javadoc is shown below.
 {code}
  * The StripedDataStreamer class is used by {@link DFSStripedOutputStream}.
  * There are two kinds of StripedDataStreamer, leading streamer and ordinary
  * stream. Leading streamer requests a block group from NameNode, unwraps
  * it to located blocks and transfers each located block to its corresponding
  * ordinary streamer via a blocking queue.
 {code}
 Leading streamer is the streamer with index 0.  When the datanode of the 
 leading streamer fails, the other streamers cannot continue since no one will 
 request a block group from NameNode anymore.
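The fan-out pattern the javadoc describes can be sketched in plain Java. This is a hypothetical toy model (the class and block names below are illustrative, not the real HDFS classes): a "leading" streamer hands each located block to its "ordinary" streamer through a per-streamer blocking queue, and ordinary streamers poll with a bounded timeout so that a dead leader surfaces as a null result rather than an infinite wait.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Toy model of the leading/ordinary streamer hand-off (hypothetical names,
// not the actual DFSStripedOutputStream code).
public class StreamerQueueSketch {

  // The leading streamer fans one block per streamer out through blocking
  // queues; an ordinary streamer then polls its own queue with a timeout.
  static String fanOutAndPoll(int streamerIndex) throws InterruptedException {
    int numStreamers = 3;
    List<BlockingQueue<String>> queues = new ArrayList<>();
    for (int i = 0; i < numStreamers; i++) {
      queues.add(new ArrayBlockingQueue<>(1));
    }
    // Leading streamer: request a block group, unwrap it, hand out blocks.
    for (int i = 0; i < numStreamers; i++) {
      queues.get(i).put("blk_bg0_idx" + i);
    }
    // Ordinary streamer: bounded wait, so a dead leader yields null
    // instead of blocking forever.
    return queues.get(streamerIndex).poll(100, TimeUnit.MILLISECONDS);
  }

  public static void main(String[] args) throws InterruptedException {
    System.out.println(fanOutAndPoll(1)); // blk_bg0_idx1
  }
}
```

If the leading streamer never runs (its datanode failed), every `poll` in this sketch times out and returns null, which is the stall this issue and HDFS-8254's removal of the leading-streamer design address.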



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8254) In StripedDataStreamer, it is hard to tolerate datanode failure in the leading streamer

2015-05-25 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-8254:
--
Attachment: h8254_20150526b.patch

h8254_20150526b.patch: revises javadoc.

 In StripedDataStreamer, it is hard to tolerate datanode failure in the 
 leading streamer
 ---

 Key: HDFS-8254
 URL: https://issues.apache.org/jira/browse/HDFS-8254
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Attachments: h8254_20150526.patch, h8254_20150526b.patch


 StripedDataStreamer javadoc is shown below.
 {code}
  * The StripedDataStreamer class is used by {@link DFSStripedOutputStream}.
  * There are two kinds of StripedDataStreamer, leading streamer and ordinary
  * stream. Leading streamer requests a block group from NameNode, unwraps
  * it to located blocks and transfers each located block to its corresponding
  * ordinary streamer via a blocking queue.
 {code}
 Leading streamer is the streamer with index 0.  When the datanode of the 
 leading streamer fails, the other streamers cannot continue since no one will 
 request a block group from NameNode anymore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8452) In WebHDFS, duplicate directory creation is not throwing exception.

2015-05-25 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-8452:
-
Fix Version/s: (was: 3.0.0)

 In WebHDFS, duplicate directory creation is not throwing exception.
 ---

 Key: HDFS-8452
 URL: https://issues.apache.org/jira/browse/HDFS-8452
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Jagadesh Kiran N
Priority: Minor

 *Case 1 (CLI):*
 a. In HDFS Create a new Directory 
   {code}./hdfs dfs -mkdir /new  , A New directory will be 
 created{code}
b. Now Execute the same Command again 
 {code}   mkdir: `/new': File exists  , Error message will be shown  {code}
 *Case 2 (RestAPI) :*
 a. In HDFS Create a new Directory
  {code}curl -i -X PUT -L 
 http://host1:50070/webhdfs/v1/new1?op=MKDIRS&overwrite=false{code}
   A New Directory will be created 
  b. Now Execute the same webhdfs  command again 
 No exception will be thrown back to the client.
{code}
 HTTP/1.1 200 OK
 Cache-Control: no-cache
 Expires: Thu, 21 May 2015 15:11:57 GMT
 Date: Thu, 21 May 2015 15:11:57 GMT
 Pragma: no-cache
 Content-Type: application/json
 Transfer-Encoding: chunked
{code}
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8463) Calling DFSInputStream.seekToNewSource just after stream creation causes NullPointerException

2015-05-25 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-8463:
---
Status: Patch Available  (was: Open)

 Calling DFSInputStream.seekToNewSource just after stream creation causes  
 NullPointerException
 --

 Key: HDFS-8463
 URL: https://issues.apache.org/jira/browse/HDFS-8463
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Attachments: HDFS-8463.001.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8452) In WebHDFS, duplicate directory creation is not throwing exception.

2015-05-25 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558638#comment-14558638
 ] 

Haohui Mai commented on HDFS-8452:
--

{quote}
1. If this is an idempotent operation, then when the same file name is given it 
returns 1; ideally it should return 0.
2. Check the documentation: 
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/FileSystemShell.html#mkdir
Exit Code: Returns 0 on success and -1 on error.
{quote}

Please do not confuse the {{mkdir}} utility in the FSShell with the 
{{mkdirs()}} operation in HDFS. The utility checks whether the file exists to 
make its behavior closer to the POSIX one. The {{mkdirs()}} operation, however, 
is designed to be idempotent to simplify failure handling. The {{mkdirs()}} 
operation in WebHDFS has the same semantics as the one in HDFS.
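The same distinction exists in plain {{java.nio}}, which makes for a convenient illustrative analogy (this is standard-library Java, not HDFS code): {{Files.createDirectory}} behaves like the POSIX-flavored shell utility and fails on an existing path, while {{Files.createDirectories}} behaves like the idempotent {{mkdirs()}} that WebHDFS exposes.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Analogy in java.nio (not HDFS itself): strict vs. idempotent mkdir.
public class MkdirSemantics {

  // Idempotent, like HDFS/WebHDFS mkdirs(): succeeds even if the path exists.
  public static boolean idempotentMkdirs(Path p) throws IOException {
    Files.createDirectories(p); // no error when p already exists
    return true;
  }

  // Strict, like the POSIX-flavored shell mkdir: fails if the path exists.
  public static boolean strictMkdir(Path p) {
    try {
      Files.createDirectory(p); // throws FileAlreadyExistsException on repeat
      return true;
    } catch (IOException e) {
      return false;
    }
  }

  public static void main(String[] args) throws IOException {
    Path p = Files.createTempDirectory("demo").resolve("new");
    System.out.println(strictMkdir(p));      // true  (created)
    System.out.println(strictMkdir(p));      // false (already exists)
    System.out.println(idempotentMkdirs(p)); // true  (idempotent repeat)
  }
}
```

Calling the strict variant twice reproduces the CLI behavior from Case 1, while the idempotent variant reproduces the WebHDFS 200 OK from Case 2.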

 In WebHDFS, duplicate directory creation is not throwing exception.
 ---

 Key: HDFS-8452
 URL: https://issues.apache.org/jira/browse/HDFS-8452
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Jagadesh Kiran N
Priority: Minor

 *Case 1 (CLI):*
 a. In HDFS Create a new Directory 
   {code}./hdfs dfs -mkdir /new  , A New directory will be 
 created{code}
b. Now Execute the same Command again 
 {code}   mkdir: `/new': File exists  , Error message will be shown  {code}
 *Case 2 (RestAPI) :*
 a. In HDFS Create a new Directory
  {code}curl -i -X PUT -L 
 http://host1:50070/webhdfs/v1/new1?op=MKDIRS&overwrite=false{code}
   A New Directory will be created 
  b. Now Execute the same webhdfs  command again 
 No exception will be thrown back to the client.
{code}
 HTTP/1.1 200 OK
 Cache-Control: no-cache
 Expires: Thu, 21 May 2015 15:11:57 GMT
 Date: Thu, 21 May 2015 15:11:57 GMT
 Pragma: no-cache
 Content-Type: application/json
 Transfer-Encoding: chunked
{code}
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8453) Erasure coding: properly assign start offset for internal blocks in a block group

2015-05-25 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8453:

Attachment: HDFS-8453-HDFS-7285.00.patch

 Erasure coding: properly assign start offset for internal blocks in a block 
 group
 -

 Key: HDFS-8453
 URL: https://issues.apache.org/jira/browse/HDFS-8453
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8453-HDFS-7285.00.patch


 {{LocatedBlock#offset}} should indicate the offset of the first byte of the 
 block in the file. In a striped block group, we should properly assign this 
 {{offset}} for internal blocks, so each internal block can be identified from 
 a given offset.
 My current plan is to keep using {{bg.getStartOffset() + idxInBlockGroup * 
 cellSize}} as the start offset for data blocks. For parity blocks, use {{-1 * 
 (bg.getStartOffset() + idxInBlockGroup * cellSize)}}. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8453) Erasure coding: properly assign start offset for internal blocks in a block group

2015-05-25 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8453:

Status: Patch Available  (was: In Progress)

Actually it's not possible to assign meaningful start offset values to all 
internal blocks, especially parity ones. Consider a block group with 1 byte of 
data: no matter how we set the start offsets for parity blocks (negative 
values, etc.), they will overlap with the next block group in the file. 

So this patch takes another approach: refactor {{DFSInputStream}} to call a new 
{{refreshLocatedBlock}} method whenever a located block needs to be refreshed, 
instead of calling {{getBlockAt}} directly. The refresh method can then be 
overridden in {{DFSStripedInputStream}} to handle internal block indices.
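The overlap argument above can be checked with toy arithmetic (the cell size and RS(6,3) layout below are assumed illustrative values, not taken from this issue): under the originally proposed formula {{bg.getStartOffset() + idxInBlockGroup * cellSize}}, any offset derived for a parity index of a 1-byte block group lands well past where the next block group begins.

```java
// Toy arithmetic for striped block start offsets (assumed values, not HDFS
// code): shows why parity offsets from the data formula collide with the
// next block group when the group holds almost no data.
public class StripedOffsetSketch {

  // Proposed formula for data blocks: bgStart + idxInBlockGroup * cellSize.
  static long dataOffset(long bgStart, int idxInBlockGroup, long cellSize) {
    return bgStart + idxInBlockGroup * cellSize;
  }

  public static void main(String[] args) {
    long cellSize = 64 * 1024; // 64 KB cells (assumed)
    int numData = 6;           // RS(6,3): parity indices start at 6 (assumed)
    long bgStart = 0;
    long bgLength = 1;         // block group holding a single byte

    // The next block group starts right after this one's logical length:
    long nextBgStart = bgStart + bgLength; // = 1

    // First parity index fed through the data formula:
    long parityOffset = dataOffset(bgStart, numData, cellSize); // 393216

    // The parity "offset" falls inside the next block group's range.
    System.out.println(parityOffset >= nextBgStart); // true: overlaps
  }
}
```

Since no per-block offset assignment avoids this collision, deriving the index at refresh time (the {{refreshLocatedBlock}} approach) is the cleaner fix.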

 Erasure coding: properly assign start offset for internal blocks in a block 
 group
 -

 Key: HDFS-8453
 URL: https://issues.apache.org/jira/browse/HDFS-8453
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8453-HDFS-7285.00.patch


 {{LocatedBlock#offset}} should indicate the offset of the first byte of the 
 block in the file. In a striped block group, we should properly assign this 
 {{offset}} for internal blocks, so each internal block can be identified from 
 a given offset.
 My current plan is to keep using {{bg.getStartOffset() + idxInBlockGroup * 
 cellSize}} as the start offset for data blocks. For parity blocks, use {{-1 * 
 (bg.getStartOffset() + idxInBlockGroup * cellSize)}}. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8453) Erasure coding: properly assign start offset for internal blocks in a block group

2015-05-25 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8453:

Description: (was: {{LocatedBlock#offset}} should indicate the offset 
of the first byte of the block in the file. In a striped block group, we 
should properly assign this {{offset}} for internal blocks, so each internal 
block can be identified from a given offset.

My current plan is to keep using {{bg.getStartOffset() + idxInBlockGroup * 
cellSize}} as the start offset for data blocks. For parity blocks, use {{-1 * 
(bg.getStartOffset() + idxInBlockGroup * cellSize)}}. )

 Erasure coding: properly assign start offset for internal blocks in a block 
 group
 -

 Key: HDFS-8453
 URL: https://issues.apache.org/jira/browse/HDFS-8453
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8453-HDFS-7285.00.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8452) In WebHDFS, duplicate directory creation is not throwing exception.

2015-05-25 Thread Jagadesh Kiran N (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558674#comment-14558674
 ] 

Jagadesh Kiran N commented on HDFS-8452:


Ok, got it, Haohui Mai. Thanks for the clarification.

 In WebHDFS, duplicate directory creation is not throwing exception.
 ---

 Key: HDFS-8452
 URL: https://issues.apache.org/jira/browse/HDFS-8452
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Jagadesh Kiran N
Priority: Minor

 *Case 1 (CLI):*
 a. In HDFS Create a new Directory 
   {code}./hdfs dfs -mkdir /new  , A New directory will be 
 created{code}
b. Now Execute the same Command again 
 {code}   mkdir: `/new': File exists  , Error message will be shown  {code}
 *Case 2 (RestAPI) :*
 a. In HDFS Create a new Directory
  {code}curl -i -X PUT -L 
 http://host1:50070/webhdfs/v1/new1?op=MKDIRS&overwrite=false{code}
   A New Directory will be created 
  b. Now Execute the same webhdfs  command again 
 No exception will be thrown back to the client.
{code}
 HTTP/1.1 200 OK
 Cache-Control: no-cache
 Expires: Thu, 21 May 2015 15:11:57 GMT
 Date: Thu, 21 May 2015 15:11:57 GMT
 Pragma: no-cache
 Content-Type: application/json
 Transfer-Encoding: chunked
{code}
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8062) Remove hard-coded values in favor of EC schema

2015-05-25 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558663#comment-14558663
 ] 

Kai Zheng commented on HDFS-8062:
-

A quick look found this. Please note that chunkSize has been removed from the 
schema, so this needs another rebase.
{code}
+this.schema.getChunkSize(),
+this.schema.getNumDataUnits(),
+this.schema.getNumParityUnits());
{code}

 Remove hard-coded values in favor of EC schema
 --

 Key: HDFS-8062
 URL: https://issues.apache.org/jira/browse/HDFS-8062
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Sasaki
 Attachments: HDFS-8062-HDFS-7285-07.patch, 
 HDFS-8062-HDFS-7285-08.patch, HDFS-8062.1.patch, HDFS-8062.2.patch, 
 HDFS-8062.3.patch, HDFS-8062.4.patch, HDFS-8062.5.patch, HDFS-8062.6.patch


 Related issues about EC schema in NameNode side:
 HDFS-7859 is to change fsimage and editlog in NameNode to persist EC schemas;
 HDFS-7866 is to manage EC schemas in NameNode, loading, syncing between 
 persisted ones in image and predefined ones in XML.
 This is to revisit all the places in NameNode that uses hard-coded values in 
 favor of {{ECSchema}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8453) Erasure coding: properly assign start offset for internal blocks in a block group

2015-05-25 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558671#comment-14558671
 ] 

Zhe Zhang commented on HDFS-8453:
-

Moving initial description as a comment to avoid confusion:
{quote}
{{LocatedBlock#offset}} should indicate the offset of the first byte of the 
block in the file. In a striped block group, we should properly assign this 
{{offset}} for internal blocks, so each internal block can be identified from a 
given offset.

My current plan is to keep using {{bg.getStartOffset() + idxInBlockGroup * 
cellSize}} as the start offset for data blocks. For parity blocks, use {{-1 * 
(bg.getStartOffset() + idxInBlockGroup * cellSize)}}. 
{quote}

 Erasure coding: properly assign start offset for internal blocks in a block 
 group
 -

 Key: HDFS-8453
 URL: https://issues.apache.org/jira/browse/HDFS-8453
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-8453-HDFS-7285.00.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8463) Calling DFSInputStream.seekToNewSource just after stream creation causes NullPointerException

2015-05-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558712#comment-14558712
 ] 

Hadoop QA commented on HDFS-8463:
-

\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 37s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 28s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 38s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 20s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  3s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 15s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests | 162m  5s | Tests passed in hadoop-hdfs. 
|
| | | 204m 59s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12735128/HDFS-8463.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / ada233b |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11125/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11125/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11125/console |


This message was automatically generated.

 Calling DFSInputStream.seekToNewSource just after stream creation causes  
 NullPointerException
 --

 Key: HDFS-8463
 URL: https://issues.apache.org/jira/browse/HDFS-8463
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Attachments: HDFS-8463.001.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)