[jira] [Updated] (HDFS-6565) Use jackson instead jetty json in hdfs-client

2014-06-23 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6565:


Attachment: HDFS-6565.patch

 Use jackson instead jetty json in hdfs-client
 ---------------------------------------------

 Key: HDFS-6565
 URL: https://issues.apache.org/jira/browse/HDFS-6565
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Akira AJISAKA
 Attachments: HDFS-6565.patch


 hdfs-client should use Jackson instead of jetty to parse JSON.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6565) Use jackson instead jetty json in hdfs-client

2014-06-23 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14041409#comment-14041409
 ] 

Akira AJISAKA commented on HDFS-6565:
-------------------------------------

Attaching a patch to remove the jetty JSON library from JsonUtil and 
WebHdfsFileSystem.
The way Jackson parses a JSON number differs from jetty json:
* Jackson: number -> Integer, Long, or BigInteger (smallest applicable)
* jetty json: number -> Long

so I changed the code for parsing a JSON number from
{code}
  (Long) m.get(blockId) // doesn't work if m.get(blockId) is an Integer
{code}
to
{code}
  ((Number) m.get(blockId)).longValue() // works for all subclasses of Number
{code}
In addition, the way Jackson parses a JSON array differs from jetty json:
* Jackson: array -> ArrayList<Object>
* jetty json: array -> Object[]

so I changed the code for parsing a JSON array from
{code}
  (Object[]) m.get(locatedBlocks)
{code}
to
{code}
  (List<Object>) m.get(locatedBlocks)
{code}
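The defensive {{(Number)}} cast can be illustrated with a self-contained sketch 
(plain Java with a HashMap standing in for a parsed JSON object; no Jackson 
dependency, and the field names are made up for the demo):

```java
import java.util.HashMap;
import java.util.Map;

public class NumberCastDemo {
    // Reads a numeric field regardless of whether the parser boxed it
    // as an Integer or a Long.
    static long getLong(Map<String, Object> m, String key) {
        return ((Number) m.get(key)).longValue();
    }

    public static void main(String[] args) {
        Map<String, Object> m = new HashMap<>();
        m.put("small", 42);       // Jackson would box a small number as Integer
        m.put("large", 1L << 40); // and a large one as Long
        // (Long) m.get("small") would throw ClassCastException here,
        // which is exactly the failure mode described above.
        System.out.println(getLong(m, "small")); // 42
        System.out.println(getLong(m, "large")); // 1099511627776
    }
}
```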







[jira] [Updated] (HDFS-6565) Use jackson instead jetty json in hdfs-client

2014-06-23 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6565:


Status: Patch Available  (was: Open)






[jira] [Commented] (HDFS-6591) while loop is executed tens of thousands of times in Hedged Read

2014-06-25 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14044172#comment-14044172
 ] 

Akira AJISAKA commented on HDFS-6591:
-------------------------------------

Looks like a nice fix to me. In TestPread.java,
{code}
}
isHedgedRead = true;
  }
{code}
Would you please create an {{@Before}} method and initialize the variables 
there, instead of setting them at the end of the {{@Test}} method as above?

Minor nits:
1. In DFSInputStream.java:1107,
{code}
  Future<ByteBuffer> future = null;
{code}
Now that {{future}} is not used in the else clause, would you move the 
declaration into the try-catch clause?
2. There is a trailing white space in
{code}
+CompletionService<ByteBuffer> hedgedService = 
{code}
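The {{@Before}} suggestion can be sketched as follows (plain Java that imitates 
the JUnit lifecycle so it runs standalone; {{isHedgedRead}} is the flag from 
TestPread, everything else is a hypothetical name for the demo):

```java
public class BeforeDemo {
    static boolean isHedgedRead;

    // In JUnit 4 this would be an @Before method: it runs before every
    // test, so each test starts from a known state.
    static void setUp() {
        isHedgedRead = false;
    }

    static void testHedgedRead() {
        setUp();
        isHedgedRead = true; // the test body flips the flag...
        // ...and no longer needs to reset it at the end of the test.
    }

    static void testNormalRead() {
        setUp();
        // setUp() re-initialized the flag, even though the previous
        // test left it set to true.
        System.out.println(isHedgedRead); // false
    }

    public static void main(String[] args) {
        testHedgedRead();
        testNormalRead();
    }
}
```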

 while loop is executed tens of thousands of times  in Hedged  Read
 -------------------------------------------------------------------

 Key: HDFS-6591
 URL: https://issues.apache.org/jira/browse/HDFS-6591
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.4.0
Reporter: LiuLei
Assignee: Liang Xie
 Attachments: HDFS-6591.txt, LoopTooManyTimesTestCase.patch


 I download hadoop-2.4.1-rc1 code from 
 http://people.apache.org/~acmurthy/hadoop-2.4.1-rc1/,  I test the  Hedged  
 Read. I find the while loop in hedgedFetchBlockByteRange method is executed 
 tens of thousands of times.





[jira] [Commented] (HDFS-6645) Add test for successive Snapshots between XAttr modifications

2014-07-08 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14055877#comment-14055877
 ] 

Akira AJISAKA commented on HDFS-6645:
-------------------------------------

LGTM, +1(non-binding) pending Jenkins.

 Add test for successive Snapshots between XAttr modifications
 -------------------------------------------------------------

 Key: HDFS-6645
 URL: https://issues.apache.org/jira/browse/HDFS-6645
 Project: Hadoop HDFS
  Issue Type: Test
  Components: snapshots, test
Affects Versions: 3.0.0, 2.6.0
Reporter: Stephen Chu
Assignee: Stephen Chu
Priority: Minor
 Attachments: HDFS-6645.001.patch


 In the current TestXAttrWithSnapshot unit tests, we create a single snapshot 
 per test.
 We should test taking multiple snapshots on a path in between XAttr 
 modifications of that path. We should also verify that deletion of a snapshot 
 does not somehow alter the XAttrs of the other snapshots of the same path.





[jira] [Assigned] (HDFS-6649) Documentation for setrep is wrong

2014-07-09 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA reassigned HDFS-6649:
-----------------------------------

Assignee: Akira AJISAKA

 Documentation for setrep is wrong
 ---------------------------------

 Key: HDFS-6649
 URL: https://issues.apache.org/jira/browse/HDFS-6649
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 1.0.4
Reporter: Alexander Fahlke
Assignee: Akira AJISAKA
Priority: Trivial

 The documentation in: 
 http://hadoop.apache.org/docs/r1.0.4/file_system_shell.html#setrep states 
 that one must use the command as follows:
 - {{Usage: hdfs dfs -setrep [-R] path}}
 - {{Example: hdfs dfs -setrep -w 3 -R /user/hadoop/dir1}}
 Correct would be to state that setrep needs the replication factor and the 
 replication factor needs to be right before the DFS path.
 Must look like this:
 - {{Usage: hdfs dfs -setrep [-R] [-w] rep path/file}}
 - {{Example: hdfs dfs -setrep -w -R 3 /user/hadoop/dir1}}





[jira] [Updated] (HDFS-6649) Documentation for setrep is wrong

2014-07-09 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6649:


  Labels: newbie  (was: )
Target Version/s: 1.3.0
  Status: Patch Available  (was: Open)






[jira] [Updated] (HDFS-6649) Documentation for setrep is wrong

2014-07-09 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6649:


Attachment: HDFS-6649.branch-1.patch

This issue has been fixed in branch-2 and trunk, but not fixed in branch-1.
Attaching a patch for branch-1.






[jira] [Commented] (HDFS-6640) [ Web HDFS ] Syntax for MKDIRS, CREATESYMLINK, and SETXATTR are given wrongly(missed webhdfs/v1).).

2014-07-10 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14058167#comment-14058167
 ] 

Akira AJISAKA commented on HDFS-6640:
-------------------------------------

Looks good, +1 (non-binding).

 [ Web HDFS ] Syntax for MKDIRS, CREATESYMLINK, and SETXATTR are given 
 wrongly(missed webhdfs/v1).).
 ----------------------------------------------------------------------------------------------

 Key: HDFS-6640
 URL: https://issues.apache.org/jira/browse/HDFS-6640
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation, webhdfs
Affects Versions: 2.4.1
Reporter: Brahma Reddy Battula
Assignee: Stephen Chu
 Attachments: HDFS-6640.001.patch, HDFS-6640.002.patch


 Need to correct the following :
 Make a Directory
 Submit a HTTP PUT request.
 curl -i -X PUT http://HOST:PORT/PATH?op=MKDIRS[permission=OCTAL]
 Create a Symbolic Link
 Submit a HTTP PUT request.
 curl -i -X PUT http://HOST:PORT/PATH?op=CREATESYMLINK
   destination=PATH[createParent=true|false]
 webhdfs/v1 is missed.





[jira] [Commented] (HDFS-6654) Setting Extended ACLs recursively for another user belonging to the same group is not working

2014-07-10 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14058213#comment-14058213
 ] 

Akira AJISAKA commented on HDFS-6654:
-------------------------------------

bq. Step 4: Now unable to write a File to Dir1 from User2
This is the specified behavior: User2 needs EXECUTE permission on Dir1 to 
write a file into it.

bq. Fetching filesystem name , when one of the disk configured for NN dir 
becomes full returns a value null.
I suppose it has been fixed by HADOOP-10462. You will see the right value in 
the next release.
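The check that fails in Step 4 can be sketched with a minimal model of 
POSIX-style permission bits (a deliberate simplification; HDFS's real 
FSPermissionChecker also evaluates the ACL entries and the mask):

```java
public class TraversalCheckDemo {
    // rwx bits: 4 = READ, 2 = WRITE, 1 = EXECUTE
    static final int WRITE = 2;
    static final int EXECUTE = 1;

    // Creating a file inside a directory requires WRITE on the directory
    // plus EXECUTE to traverse into it.
    static boolean canCreateFileIn(int dirPerm) {
        return (dirPerm & WRITE) != 0 && (dirPerm & EXECUTE) != 0;
    }

    public static void main(String[] args) {
        // User2's effective permission on /Dir1 is rw- (6): write but no
        // execute, so the create in Step 4 is denied with access=EXECUTE.
        System.out.println(canCreateFileIn(6)); // false
        // With rwx (7) the same operation would succeed.
        System.out.println(canCreateFileIn(7)); // true
    }
}
```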

 Setting Extended ACLs recursively for  another user belonging to the same 
 group  is not working
 ------------------------------------------------------------------------------------------

 Key: HDFS-6654
 URL: https://issues.apache.org/jira/browse/HDFS-6654
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.1
Reporter: J.Andreina

 {noformat}
 1.Setting Extended ACL recursively for  a user belonging to the same group  
 is not working
 {noformat}
 Step 1: Created a Dir1 with User1
 ./hdfs dfs -rm -R /Dir1
 Step 2: Changed the permission (600) for Dir1 recursively
./hdfs dfs -chmod -R 600 /Dir1
 Step 3: setfacls is executed to give read and write permissions to User2 
 which belongs to the same group as User1
./hdfs dfs -setfacl -R -m user:User2:rw- /Dir1
./hdfs dfs -getfacl -R /Dir1
  No GC_PROFILE is given. Defaults to medium.
# file: /Dir1
# owner: User1
# group: supergroup
user::rw-
user:User2:rw-
group::---
mask::rw-
other::---
 Step 4: Now unable to write a File to Dir1 from User2
./hdfs dfs -put hadoop /Dir1/1
 No GC_PROFILE is given. Defaults to medium.
 put: Permission denied: user=User2, access=EXECUTE, 
 inode=/Dir1:User1:supergroup:drw--
 {noformat}
2. Fetching filesystem name , when one of the disk configured for NN dir 
 becomes full returns a value null.
 {noformat}
 2014-07-08 09:23:43,020 WARN 
 org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker: Space 
 available on volume 'null' is 101060608, which is below the configured 
 reserved amount 104857600
 2014-07-08 09:23:43,020 WARN 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem: NameNode low on 
 available disk space. Already in safe mode.
 2014-07-08 09:23:43,166 WARN 
 org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker: Space 
 available on volume 'null' is 101060608, which is below the configured 
 reserved amount 104857600
  





[jira] [Commented] (HDFS-6654) Setting Extended ACLs recursively for another user belonging to the same group is not working

2014-07-10 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14058225#comment-14058225
 ] 

Akira AJISAKA commented on HDFS-6654:
-------------------------------------

{quote}
I was confused by looking at Test-Plan-for-Extended-Acls-2.pdf attached in 
HDFS-4685 . First scenairo mentioned in the issue works fine by giving 
executable permissions to User1.
It would be helpful , if the following scenario is been updated in the Testplan.
{quote}
I don't think the test plan should be updated, because it is only meant to 
confirm whether a permission is applied recursively.






[jira] [Resolved] (HDFS-6654) Setting Extended ACLs recursively for another user belonging to the same group is not working

2014-07-10 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA resolved HDFS-6654.
---------------------------------

Resolution: Not a Problem

Closing this issue. [~andreina], please feel free to reopen this if you 
disagree.






[jira] [Commented] (HDFS-6675) NFS: Fix javadoc warning in RpcProgram.java

2014-07-14 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14060451#comment-14060451
 ] 

Akira AJISAKA commented on HDFS-6675:
-------------------------------------

LGTM, +1 (non-binding).

 NFS: Fix javadoc warning in RpcProgram.java
 -------------------------------------------

 Key: HDFS-6675
 URL: https://issues.apache.org/jira/browse/HDFS-6675
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation, nfs
Reporter: Abhiraj Butala
Assignee: Abhiraj Butala
Priority: Trivial
 Attachments: HDFS-6675.patch


 Fix following javadoc warning during hadoop-nfs compilation:
 {code}
 :
 :
 [WARNING] Javadoc Warnings
 [WARNING] 
 /home/abutala/work/hadoop/hadoop-trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcProgram.java:73:
  warning - @param argument DatagramSocket is not a parameter name.
 {code}





[jira] [Created] (HDFS-6682) Add a metric to expose the timestamp of the oldest under-replicated block

2014-07-15 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HDFS-6682:
--------------------------------

 Summary: Add a metric to expose the timestamp of the oldest 
under-replicated block
 Key: HDFS-6682
 URL: https://issues.apache.org/jira/browse/HDFS-6682
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA


In the following case, data in HDFS is lost and a client needs to put the 
same file again:
# A client puts a file to HDFS
# A DataNode crashes before replicating a block of the file to other DataNodes

I propose a metric to expose the timestamp of the oldest 
under-replicated/corrupt block. That way, a client can know which file to 
retain for the retry.
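A minimal sketch of the proposed metric (hypothetical names, not the actual 
HDFS-6682 implementation; the real code would hook into the NameNode's 
replication queues, which are priority-ordered rather than the simple FIFO 
used here):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class OldestUnderReplicatedDemo {
    // Each entry: {blockId, timestamp when it became under-replicated}.
    private final Deque<long[]> queue = new ArrayDeque<>();

    void addUnderReplicated(long blockId, long timestamp) {
        queue.addLast(new long[] {blockId, timestamp});
    }

    void markOldestReplicated() {
        queue.pollFirst();
    }

    // The proposed metric: timestamp of the oldest block still waiting
    // for replication, or 0 when nothing is under-replicated.
    long timeOfOldestUnderReplicatedBlock() {
        long[] head = queue.peekFirst();
        return head == null ? 0L : head[1];
    }

    public static void main(String[] args) {
        OldestUnderReplicatedDemo metric = new OldestUnderReplicatedDemo();
        metric.addUnderReplicated(1L, 1000L);
        metric.addUnderReplicated(2L, 2000L);
        System.out.println(metric.timeOfOldestUnderReplicatedBlock()); // 1000
        metric.markOldestReplicated();
        System.out.println(metric.timeOfOldestUnderReplicatedBlock()); // 2000
    }
}
```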





[jira] [Updated] (HDFS-2538) option to disable fsck dots

2014-07-17 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-2538:


Release Note: fsck no longer prints out dots for progress reporting by 
default. To print out dots, specify the '-showprogress' option.
Hadoop Flags: Incompatible change, Reviewed  (was: Reviewed)

Since this change is an incompatible change, adding a release note.

 option to disable fsck dots 
 

 Key: HDFS-2538
 URL: https://issues.apache.org/jira/browse/HDFS-2538
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Allen Wittenauer
Assignee: Mohammad Kamrul Islam
Priority: Minor
  Labels: newbie
 Fix For: 3.0.0

 Attachments: HDFS-2538-branch-0.20-security-204.patch, 
 HDFS-2538-branch-0.20-security-204.patch, HDFS-2538-branch-1.0.patch, 
 HDFS-2538.1.patch, HDFS-2538.2.patch, HDFS-2538.3.patch


 this patch turns the dots during fsck off by default and provides an option 
 to turn them back on if you have a fetish for millions and millions of dots 
 on your terminal.  i haven't done any benchmarks, but i suspect fsck is now 
 300% faster to boot.





[jira] [Created] (HDFS-6704) Fix the command to launch JournalNode in HDFS-HA document

2014-07-17 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HDFS-6704:
--------------------------------

 Summary: Fix the command to launch JournalNode in HDFS-HA document
 Key: HDFS-6704
 URL: https://issues.apache.org/jira/browse/HDFS-6704
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor


In HDFSHighAvailabilityWithQJM.html,
{code}
After all of the necessary configuration options have been set, you must start 
the JournalNode daemons on the set of machines where they will run. This can be 
done by running the command hdfs-daemon.sh journalnode and waiting for the 
daemon to start on each of the relevant machines.
{code}
hdfs-daemon.sh should be hadoop-daemon.sh since hdfs-daemon.sh does not exist.





[jira] [Updated] (HDFS-6704) Fix the command to launch JournalNode in HDFS-HA document

2014-07-17 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6704:


Attachment: HDFS-6704.patch






[jira] [Updated] (HDFS-6704) Fix the command to launch JournalNode in HDFS-HA document

2014-07-17 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6704:


Target Version/s: 2.6.0
  Status: Patch Available  (was: Open)






[jira] [Updated] (HDFS-6682) Add a metric to expose the timestamp of the oldest under-replicated block

2014-07-18 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6682:


Attachment: HDFS-6682.patch






[jira] [Updated] (HDFS-6682) Add a metric to expose the timestamp of the oldest under-replicated block

2014-07-18 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6682:


Attachment: (was: HDFS-6682.patch)






[jira] [Updated] (HDFS-6682) Add a metric to expose the timestamp of the oldest under-replicated block

2014-07-18 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6682:


Attachment: HDFS-6682.patch






[jira] [Updated] (HDFS-6682) Add a metric to expose the timestamp of the oldest under-replicated block

2014-07-18 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6682:


Target Version/s: 2.6.0
  Status: Patch Available  (was: Open)






[jira] [Commented] (HDFS-6682) Add a metric to expose the timestamp of the oldest under-replicated block

2014-07-18 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14066161#comment-14066161
 ] 

Akira AJISAKA commented on HDFS-6682:
-------------------------------------

Attached a patch to expose a 'TimeOfTheOldestBlockToBeReplicated' metric, 
which shows the timestamp of the oldest under-replicated/corrupt block.
I built with the patch and verified that the metric was obtained via JMX and 
FileSink.






[jira] [Commented] (HDFS-6704) Fix the command to launch JournalNode in HDFS-HA document

2014-07-18 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14066163#comment-14066163
 ] 

Akira AJISAKA commented on HDFS-6704:
-------------------------------------

The failed test looks unrelated to the patch; the failure was reported in HDFS-6694.






[jira] [Commented] (HDFS-6694) TestPipelinesFailover.testPipelineRecoveryStress tests fail intermittently with various symptoms

2014-07-18 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14066547#comment-14066547
 ] 

Akira AJISAKA commented on HDFS-6694:
-------------------------------------

bq. Do openfiles limit needs to be increased in these new machines?
Yes, I think the limit should be increased. I configured the limit to 512 and 
reproduced the same error.

 TestPipelinesFailover.testPipelineRecoveryStress tests fail intermittently 
 with various symptoms
 

 Key: HDFS-6694
 URL: https://issues.apache.org/jira/browse/HDFS-6694
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Yongjun Zhang

 TestPipelinesFailover.testPipelineRecoveryStress tests fail intermittently 
 with various symptoms. Typical failures are described in first comment.





[jira] [Updated] (HDFS-6694) TestPipelinesFailover.testPipelineRecoveryStress tests fail intermittently with various symptoms

2014-07-18 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6694:


Attachment: 
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover-output.txt

org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover.txt

Attaching the logs.

 TestPipelinesFailover.testPipelineRecoveryStress tests fail intermittently 
 with various symptoms
 

 Key: HDFS-6694
 URL: https://issues.apache.org/jira/browse/HDFS-6694
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Yongjun Zhang
 Attachments: 
 org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover-output.txt, 
 org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover.txt


 TestPipelinesFailover.testPipelineRecoveryStress tests fail intermittently 
 with various symptoms. Typical failures are described in first comment.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6704) Fix the command to launch JournalNode in HDFS-HA document

2014-07-18 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14067336#comment-14067336
 ] 

Akira AJISAKA commented on HDFS-6704:
-

[~jingzhao], you are right. I'll update the patch shortly.

 Fix the command to launch JournalNode in HDFS-HA document
 -

 Key: HDFS-6704
 URL: https://issues.apache.org/jira/browse/HDFS-6704
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HDFS-6704.patch


 In HDFSHighAvailabilityWithQJM.html,
 {code}
 After all of the necessary configuration options have been set, you must 
 start the JournalNode daemons on the set of machines where they will run. 
 This can be done by running the command hdfs-daemon.sh journalnode and 
 waiting for the daemon to start on each of the relevant machines.
 {code}
 hdfs-daemon.sh should be hadoop-daemon.sh since hdfs-daemon.sh does not exist.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6704) Fix the command to launch JournalNode in HDFS-HA document

2014-07-18 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6704:


Attachment: HDFS-6704.2.patch

 Fix the command to launch JournalNode in HDFS-HA document
 -

 Key: HDFS-6704
 URL: https://issues.apache.org/jira/browse/HDFS-6704
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HDFS-6704.2.patch, HDFS-6704.patch


 In HDFSHighAvailabilityWithQJM.html,
 {code}
 After all of the necessary configuration options have been set, you must 
 start the JournalNode daemons on the set of machines where they will run. 
 This can be done by running the command hdfs-daemon.sh journalnode and 
 waiting for the daemon to start on each of the relevant machines.
 {code}
 hdfs-daemon.sh should be hadoop-daemon.sh since hdfs-daemon.sh does not exist.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6712) Document HDFS Multihoming Settings

2014-07-22 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14069976#comment-14069976
 ] 

Akira AJISAKA commented on HDFS-6712:
-

Looks good to me, +1 (non-binding).

 Document HDFS Multihoming Settings
 --

 Key: HDFS-6712
 URL: https://issues.apache.org/jira/browse/HDFS-6712
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.4.1
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Attachments: HDFS-6712.02.patch


 A few HDFS settings can be changed to enable better support in multi-homed 
 environments. This task is to write a short guide to these settings.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6731) Run hdfs zkfc -formatZK on a server in a non-namenode will cause a null pointer exception.

2014-07-22 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6731:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

Since the patch has already been committed to trunk and branch-2, closing this issue.

 Run hdfs zkfc -formatZK on a server in a non-namenode will cause a null 
 pointer exception.
 

 Key: HDFS-6731
 URL: https://issues.apache.org/jira/browse/HDFS-6731
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: auto-failover, ha
Affects Versions: 2.0.4-alpha, 2.4.0
Reporter: WenJin Ma
Assignee: Masatake Iwasaki
 Fix For: 2.6.0

 Attachments: HADOOP-9603-0.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 Run hdfs zkfc -formatZK on a server in a non-namenode will cause a null 
 pointer exception.
 {code}
 [hadoop@test bin]$ ./hdfs zkfc -formatZK
 Exception in thread "main" java.lang.NullPointerException
 at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:187)
 at 
 org.apache.hadoop.hdfs.tools.NNHAServiceTarget.&lt;init&gt;(NNHAServiceTarget.java:57)
 at 
 org.apache.hadoop.hdfs.tools.DFSZKFailoverController.create(DFSZKFailoverController.java:128)
 at 
 org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:172)
 {code}
 I looked at the code and found that the 
 org.apache.hadoop.hdfs.DFSUtil.getSuffixIDs() method does not guard against this case.
 {code}
 static String[] getSuffixIDs(final Configuration conf,
   final String addressKey, String knownNsId, String knownNNId,
   final AddressMatcher matcher) {
 String nameserviceId = null;
 String namenodeId = null;
 int found = 0;
 // ..do something
 if (found > 1) { // Only one address must match the local address
   String msg = "Configuration has multiple addresses that match "
       + "local node's address. Please configure the system with "
       + DFS_NAMESERVICE_ID + " and "
       + DFS_HA_NAMENODE_ID_KEY;
   throw new HadoopIllegalArgumentException(msg);
 }
 // If the IP is not a local address, found will be less than 1.
 // An exception with a clear message should be thrown here rather than
 // causing a null pointer exception later.
 return new String[] { nameserviceId, namenodeId };
 {code}
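A minimal sketch of the guard the reporter is asking for (class and message text are hypothetical illustrations, not the actual DFSUtil code): fail fast with a descriptive message when no configured address matches the local node, instead of returning nulls that blow up later as a NullPointerException.

```java
// Hypothetical stand-in for the found < 1 case in getSuffixIDs().
public class SuffixIdCheck {
    static String[] getSuffixIds(String nameserviceId, String namenodeId, int found) {
        if (found > 1) {
            throw new IllegalArgumentException(
                "Configuration has multiple addresses that match the local node's address.");
        }
        if (found < 1) {
            // The reporter's point: surface the misconfiguration here, explicitly.
            throw new IllegalArgumentException(
                "No configured NameNode address matches this host; "
                + "run zkfc on a NameNode machine or check the HA configuration.");
        }
        return new String[] { nameserviceId, namenodeId };
    }

    public static void main(String[] args) {
        try {
            getSuffixIds(null, null, 0); // simulates running on a non-NameNode host
        } catch (IllegalArgumentException e) {
            System.out.println("clear error instead of NPE");
        }
    }
}
```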



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6761) supergroup permission

2014-07-28 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14076007#comment-14076007
 ] 

Akira AJISAKA commented on HDFS-6761:
-

JIRA is for tracking bug reports, not for end-user questions.
{quote}
When i use 'testuser' account, I try to 'hadoop fs mkdir /user/data' command.

I think 'hadoop fs mkdir /user/data' command not working. 
{quote}
Probably the old user-to-group mapping had been cached, so the command failed. 
You can execute hdfs dfsadmin -refreshUserToGroupsMappings to clean up the 
cache.
If you have a question, please send an e-mail to u...@hadoop.apache.org.

 supergroup permission 
 --

 Key: HDFS-6761
 URL: https://issues.apache.org/jira/browse/HDFS-6761
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: whitejane
Priority: Minor

 hdfs default directory information
 Permission   Owner Group   Size Name
 drwxr-wr-x  hdfs supergroup  0/user
 I created 'testuser' account and added 'testuser' to the supergroup.
 When i use 'testuser' account, I try to 'hadoop fs mkdir /user/data' command.
 I think 'hadoop fs mkdir /user/data' command not working. 
 but I can make 'data' directory.
 Is this correct?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HDFS-6761) supergroup permission

2014-07-28 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA resolved HDFS-6761.
-

Resolution: Invalid

 supergroup permission 
 --

 Key: HDFS-6761
 URL: https://issues.apache.org/jira/browse/HDFS-6761
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: whitejane
Priority: Minor

 hdfs default directory information
 Permission   Owner Group   Size Name
 drwxr-wr-x  hdfs supergroup  0/user
 I created 'testuser' account and added 'testuser' to the supergroup.
 When i use 'testuser' account, I try to 'hadoop fs mkdir /user/data' command.
 I think 'hadoop fs mkdir /user/data' command not working. 
 but I can make 'data' directory.
 Is this correct?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6761) supergroup permission

2014-07-28 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14076155#comment-14076155
 ] 

Akira AJISAKA commented on HDFS-6761:
-

Oh, I misunderstood the description.
A user in supergroup is a super-user, so permission checks never fail.
See 
http://hadoop.apache.org/docs/r2.4.1/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html#The_Super-User
 for the detail.

 supergroup permission 
 --

 Key: HDFS-6761
 URL: https://issues.apache.org/jira/browse/HDFS-6761
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: newskyblue
Priority: Minor

 hdfs default directory information
 Permission   Owner Group   Size Name
 drwxr-wr-x  hdfs supergroup  0/user
 I created 'testuser' account and added 'testuser' to the supergroup.
 When i use 'testuser' account, I try to 'hadoop fs mkdir /user/data' command.
 I thought when i run 'hadoop fs mkdir /user/data' command, this result would 
 be shown 'permission denied'
 but I can make 'data' directory.
 Is this correct?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6781) Separate HDFS commands from CommandsManual.apt.vm

2014-07-30 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HDFS-6781:
---

 Summary: Separate HDFS commands from CommandsManual.apt.vm
 Key: HDFS-6781
 URL: https://issues.apache.org/jira/browse/HDFS-6781
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA


HDFS-side of HADOOP-10899.
The CommandsManual lists very old information about running HDFS subcommands 
from the 'hadoop' shell CLI. These are deprecated and should be removed. If 
necessary, the HDFS subcommands should be added to the HDFS documentation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HDFS-6789) TestDFSClientFailover.testFileContextDoesntDnsResolveLogicalURI and TestDFSClientFailover.testDoesntDnsResolveLogicalURI failing on jdk7

2014-07-31 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA reassigned HDFS-6789:
---

Assignee: Akira AJISAKA

 TestDFSClientFailover.testFileContextDoesntDnsResolveLogicalURI and 
 TestDFSClientFailover.testDoesntDnsResolveLogicalURI failing on jdk7
 

 Key: HDFS-6789
 URL: https://issues.apache.org/jira/browse/HDFS-6789
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.5.0
 Environment: jdk7
Reporter: Rushabh S Shah
Assignee: Akira AJISAKA

 The following two tests are failing on jdk7.
 org.apache.hadoop.hdfs.TestDFSClientFailover.testFileContextDoesntDnsResolveLogicalURI
 org.apache.hadoop.hdfs.TestDFSClientFailover.testDoesntDnsResolveLogicalURI
 On jdk6 it just skips the tests.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6789) TestDFSClientFailover.testFileContextDoesntDnsResolveLogicalURI and TestDFSClientFailover.testDoesntDnsResolveLogicalURI failing on jdk7

2014-07-31 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14081837#comment-14081837
 ] 

Akira AJISAKA commented on HDFS-6789:
-

The tests fail because
{code}
FileSystem fs = HATestUtil.configureFailoverFs(cluster, conf);
{code}
calls {{NameNode.getAddress(nameNodeUri)}} to get {{InetSocketAddress}} for 
initializing {{ProxyAndInfo}} after HDFS-6507.
Since the tests are to ensure {{FileSystem}} and {{FileContext}} do not 
resolve the logical hostname, I think it's fine to spy NameService after 
initializing {{FileSystem}}.

 TestDFSClientFailover.testFileContextDoesntDnsResolveLogicalURI and 
 TestDFSClientFailover.testDoesntDnsResolveLogicalURI failing on jdk7
 

 Key: HDFS-6789
 URL: https://issues.apache.org/jira/browse/HDFS-6789
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.5.0
 Environment: jdk7
Reporter: Rushabh S Shah
Assignee: Akira AJISAKA

 The following two tests are failing on jdk7.
 org.apache.hadoop.hdfs.TestDFSClientFailover.testFileContextDoesntDnsResolveLogicalURI
 org.apache.hadoop.hdfs.TestDFSClientFailover.testDoesntDnsResolveLogicalURI
 On jdk6 it just skips the tests.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6789) TestDFSClientFailover.testFileContextDoesntDnsResolveLogicalURI and TestDFSClientFailover.testDoesntDnsResolveLogicalURI failing on jdk7

2014-07-31 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6789:


Attachment: HDFS-6789.patch

Attaching a patch to spy NameService after initializing FileSystem.

 TestDFSClientFailover.testFileContextDoesntDnsResolveLogicalURI and 
 TestDFSClientFailover.testDoesntDnsResolveLogicalURI failing on jdk7
 

 Key: HDFS-6789
 URL: https://issues.apache.org/jira/browse/HDFS-6789
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.5.0
 Environment: jdk7
Reporter: Rushabh S Shah
Assignee: Akira AJISAKA
 Attachments: HDFS-6789.patch


 The following two tests are failing on jdk7.
 org.apache.hadoop.hdfs.TestDFSClientFailover.testFileContextDoesntDnsResolveLogicalURI
 org.apache.hadoop.hdfs.TestDFSClientFailover.testDoesntDnsResolveLogicalURI
 On jdk6 it just skips the tests.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6789) TestDFSClientFailover.testFileContextDoesntDnsResolveLogicalURI and TestDFSClientFailover.testDoesntDnsResolveLogicalURI failing on jdk7

2014-07-31 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6789:


Component/s: test

 TestDFSClientFailover.testFileContextDoesntDnsResolveLogicalURI and 
 TestDFSClientFailover.testDoesntDnsResolveLogicalURI failing on jdk7
 

 Key: HDFS-6789
 URL: https://issues.apache.org/jira/browse/HDFS-6789
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.5.0
 Environment: jdk7
Reporter: Rushabh S Shah
Assignee: Akira AJISAKA
 Attachments: HDFS-6789.patch


 The following two tests are failing on jdk7.
 org.apache.hadoop.hdfs.TestDFSClientFailover.testFileContextDoesntDnsResolveLogicalURI
 org.apache.hadoop.hdfs.TestDFSClientFailover.testDoesntDnsResolveLogicalURI
 On jdk6 it just skips the tests.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6789) TestDFSClientFailover.testFileContextDoesntDnsResolveLogicalURI and TestDFSClientFailover.testDoesntDnsResolveLogicalURI failing on jdk7

2014-07-31 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6789:


Target Version/s: 2.5.0
  Status: Patch Available  (was: Open)

 TestDFSClientFailover.testFileContextDoesntDnsResolveLogicalURI and 
 TestDFSClientFailover.testDoesntDnsResolveLogicalURI failing on jdk7
 

 Key: HDFS-6789
 URL: https://issues.apache.org/jira/browse/HDFS-6789
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.5.0
 Environment: jdk7
Reporter: Rushabh S Shah
Assignee: Akira AJISAKA
 Attachments: HDFS-6789.patch


 The following two tests are failing on jdk7.
 org.apache.hadoop.hdfs.TestDFSClientFailover.testFileContextDoesntDnsResolveLogicalURI
 org.apache.hadoop.hdfs.TestDFSClientFailover.testDoesntDnsResolveLogicalURI
 On jdk6 it just skips the tests.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6789) TestDFSClientFailover.testFileContextDoesntDnsResolveLogicalURI and TestDFSClientFailover.testDoesntDnsResolveLogicalURI failing on jdk7

2014-07-31 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14081849#comment-14081849
 ] 

Akira AJISAKA commented on HDFS-6789:
-

I applied the patch and confirmed the tests passed in two environments:
* Oracle JDK7u40 on Mac OS X 10.9
* Oracle JDK7u65 on CentOS 6.4

 TestDFSClientFailover.testFileContextDoesntDnsResolveLogicalURI and 
 TestDFSClientFailover.testDoesntDnsResolveLogicalURI failing on jdk7
 

 Key: HDFS-6789
 URL: https://issues.apache.org/jira/browse/HDFS-6789
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.5.0
 Environment: jdk7
Reporter: Rushabh S Shah
Assignee: Akira AJISAKA
 Attachments: HDFS-6789.patch


 The following two tests are failing on jdk7.
 org.apache.hadoop.hdfs.TestDFSClientFailover.testFileContextDoesntDnsResolveLogicalURI
 org.apache.hadoop.hdfs.TestDFSClientFailover.testDoesntDnsResolveLogicalURI
 On jdk6 it just skips the tests.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6802) Some tests in TestDFSClientFailover are missing @Test annotation

2014-07-31 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HDFS-6802:
---

 Summary: Some tests in TestDFSClientFailover are missing @Test 
annotation
 Key: HDFS-6802
 URL: https://issues.apache.org/jira/browse/HDFS-6802
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.5.0
Reporter: Akira AJISAKA


HDFS-6334 added new tests in TestDFSClientFailover, but they are not executed by 
the JUnit framework because they don't have the {{@Test}} annotation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6802) Some tests in TestDFSClientFailover are missing @Test annotation

2014-07-31 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6802:


Assignee: Akira AJISAKA
Target Version/s: 2.5.0
  Status: Patch Available  (was: Open)

 Some tests in TestDFSClientFailover are missing @Test annotation
 

 Key: HDFS-6802
 URL: https://issues.apache.org/jira/browse/HDFS-6802
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HDFS-6802.patch


 HDFS-6334 added new tests in TestDFSClientFailover, but they are not executed 
 by the JUnit framework because they don't have the {{@Test}} annotation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6802) Some tests in TestDFSClientFailover are missing @Test annotation

2014-07-31 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14081921#comment-14081921
 ] 

Akira AJISAKA commented on HDFS-6802:
-

Attached a patch to
# add {{@Test}} annotation
# fix {{testWrappedFailoverProxyProvider()}} failure by setting 
{{SecurityUtil}} not to use IP address for token service. 
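To illustrate why the new tests were silently skipped, here is a self-contained sketch (using a stand-in @Test annotation, not JUnit itself): an annotation-driven runner only invokes methods that carry the annotation, so an unannotated test method is never executed and never reported.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

public class MissingAnnotationDemo {
    // Stand-in for org.junit.Test; runtime retention so reflection can see it.
    @Retention(RetentionPolicy.RUNTIME)
    @interface Test {}

    public static class Suite {
        @Test public void annotated() { System.out.println("ran annotated"); }
        // Missing @Test: a JUnit-style runner silently skips this method.
        public void notAnnotated() { System.out.println("ran notAnnotated"); }
    }

    public static void main(String[] args) throws Exception {
        Suite s = new Suite();
        for (Method m : Suite.class.getDeclaredMethods()) {
            if (m.isAnnotationPresent(Test.class)) {
                m.invoke(s);
            }
        }
    }
}
```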

 Some tests in TestDFSClientFailover are missing @Test annotation
 

 Key: HDFS-6802
 URL: https://issues.apache.org/jira/browse/HDFS-6802
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
  Labels: newbie
 Attachments: HDFS-6802.patch


 HDFS-6334 added new tests in TestDFSClientFailover, but they are not executed 
 by the JUnit framework because they don't have the {{@Test}} annotation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6802) Some tests in TestDFSClientFailover are missing @Test annotation

2014-08-01 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14082407#comment-14082407
 ] 

Akira AJISAKA commented on HDFS-6802:
-

The test failure is not related to the patch. HDFS-6694 tracks the failure.

 Some tests in TestDFSClientFailover are missing @Test annotation
 

 Key: HDFS-6802
 URL: https://issues.apache.org/jira/browse/HDFS-6802
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HDFS-6802.patch


 HDFS-6334 added new tests in TestDFSClientFailover, but they are not executed 
 by the JUnit framework because they don't have the {{@Test}} annotation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6789) TestDFSClientFailover.testFileContextDoesntDnsResolveLogicalURI and TestDFSClientFailover.testDoesntDnsResolveLogicalURI failing on jdk7

2014-08-01 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14082409#comment-14082409
 ] 

Akira AJISAKA commented on HDFS-6789:
-

The test failure is not related to the patch. HDFS-6694 tracks the failure.

 TestDFSClientFailover.testFileContextDoesntDnsResolveLogicalURI and 
 TestDFSClientFailover.testDoesntDnsResolveLogicalURI failing on jdk7
 

 Key: HDFS-6789
 URL: https://issues.apache.org/jira/browse/HDFS-6789
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.5.0
 Environment: jdk7
Reporter: Rushabh S Shah
Assignee: Akira AJISAKA
 Attachments: HDFS-6789.patch


 The following two tests are failing on jdk7.
 org.apache.hadoop.hdfs.TestDFSClientFailover.testFileContextDoesntDnsResolveLogicalURI
 org.apache.hadoop.hdfs.TestDFSClientFailover.testDoesntDnsResolveLogicalURI
 On jdk6 it just skips the tests.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6806) Rolling upgrades document should mention the version available

2014-08-01 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HDFS-6806:
---

 Summary: Rolling upgrades document should mention the version 
available
 Key: HDFS-6806
 URL: https://issues.apache.org/jira/browse/HDFS-6806
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Priority: Minor


We should document that rolling upgrades are not supported from 2.3 or earlier to 
2.4+. This has been asked on the user mailing list many times.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6781) Separate HDFS commands from CommandsManual.apt.vm

2014-08-06 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6781:


Attachment: HDFS-6781.patch

Attaching a patch to
# move HDFS commands from CommandsManual.apt.vm to HDFSCommands.apt.vm
# add missing options in HDFS commands (dfsadmin, namenode, datanode, ...)
# modify some links to point to the new document

 Separate HDFS commands from CommandsManual.apt.vm
 -

 Key: HDFS-6781
 URL: https://issues.apache.org/jira/browse/HDFS-6781
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HDFS-6781.patch


 HDFS-side of HADOOP-10899.
 The CommandsManual lists very old information about running HDFS subcommands 
 from the 'hadoop' shell CLI. These are deprecated and should be removed. If 
 necessary, the HDFS subcommands should be added to the HDFS documentation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6781) Separate HDFS commands from CommandsManual.apt.vm

2014-08-06 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6781:


Attachment: HDFS-6781-branch-2.patch

Since hdfs fsck does not support the -showprogress option in branch-2, attaching a 
patch for branch-2.

 Separate HDFS commands from CommandsManual.apt.vm
 -

 Key: HDFS-6781
 URL: https://issues.apache.org/jira/browse/HDFS-6781
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HDFS-6781-branch-2.patch, HDFS-6781.patch


 HDFS-side of HADOOP-10899.
 The CommandsManual lists very old information about running HDFS subcommands 
 from the 'hadoop' shell CLI. These are deprecated and should be removed. If 
 necessary, the HDFS subcommands should be added to the HDFS documentation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6781) Separate HDFS commands from CommandsManual.apt.vm

2014-08-06 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6781:


Target Version/s: 3.0.0, 2.6.0
  Status: Patch Available  (was: Open)

 Separate HDFS commands from CommandsManual.apt.vm
 -

 Key: HDFS-6781
 URL: https://issues.apache.org/jira/browse/HDFS-6781
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HDFS-6781-branch-2.patch, HDFS-6781.patch


 HDFS-side of HADOOP-10899.
 The CommandsManual lists very old information about running HDFS subcommands 
 from the 'hadoop' shell CLI. These are deprecated and should be removed. If 
 necessary, the HDFS subcommands should be added to the HDFS documentation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6792) hadoop-metrics2.properties exists in hdfs subproject

2014-08-06 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14087469#comment-14087469
 ] 

Akira AJISAKA commented on HDFS-6792:
-

Looks like this issue duplicates HDFS-6517.
[~aw], could you close this issue and review the patch in HDFS-6517?

 hadoop-metrics2.properties exists in hdfs subproject
 

 Key: HDFS-6792
 URL: https://issues.apache.org/jira/browse/HDFS-6792
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Allen Wittenauer

 This file is overwriting the one that ships in common.  It should be removed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6517) Remove hadoop-metrics2.properties from hdfs project

2014-08-06 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14087880#comment-14087880
 ] 

Akira AJISAKA commented on HDFS-6517:
-

Thanks Allen for the review!

 Remove hadoop-metrics2.properties from hdfs project
 ---

 Key: HDFS-6517
 URL: https://issues.apache.org/jira/browse/HDFS-6517
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
  Labels: newbie
 Fix For: 3.0.0, 2.6.0

 Attachments: HDFS-6517.patch


 HDFS-side of HADOOP-9919.
 HADOOP-9919 updated hadoop-metrics2.properties examples to YARN, however, the 
 examples are still old because hadoop-metrics2.properties in HDFS project is 
 actually packaged.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6682) Add a metric to expose the timestamp of the oldest under-replicated block

2014-08-06 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14088691#comment-14088691
 ] 

Akira AJISAKA commented on HDFS-6682:
-

These failed tests are not related to the patch. They were fixed by INFRA-8097 
and HADOOP-10866.

 Add a metric to expose the timestamp of the oldest under-replicated block
 -

 Key: HDFS-6682
 URL: https://issues.apache.org/jira/browse/HDFS-6682
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Attachments: HDFS-6682.patch


 In the following case, the data in the HDFS is lost and a client needs to put 
 the same file again.
 # A Client puts a file to HDFS
 # A DataNode crashes before replicating a block of the file to other DataNodes
 I propose a metric to expose the timestamp of the oldest 
 under-replicated/corrupt block. That way the client knows which file to retain 
 for the retry.
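A toy sketch of how such a metric could be computed (illustrative only; the class and method names are made up, not from the attached patch): if under-replicated blocks are queued in detection order, the metric is simply the timestamp at the head of the queue.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class OldestUnderReplicatedDemo {
    static final class Entry {
        final long blockId;
        final long sinceMillis;
        Entry(long id, long t) { blockId = id; sinceMillis = t; }
    }

    // Blocks are appended as they are detected, so the deque stays time-ordered.
    private final Deque<Entry> underReplicated = new ArrayDeque<>();

    void add(long blockId, long nowMillis) {
        underReplicated.addLast(new Entry(blockId, nowMillis));
    }

    /** Timestamp of the oldest under-replicated block, or 0 if there is none. */
    long oldestTimestamp() {
        Entry head = underReplicated.peekFirst();
        return head == null ? 0L : head.sinceMillis;
    }

    public static void main(String[] args) {
        OldestUnderReplicatedDemo m = new OldestUnderReplicatedDemo();
        m.add(1L, 1000L);
        m.add(2L, 2000L);
        System.out.println(m.oldestTimestamp()); // oldest entry wins
    }
}
```

A client comparing this timestamp against the time it wrote a file can decide whether its data might be among the at-risk blocks.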



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6781) Separate HDFS commands from CommandsManual.apt.vm

2014-08-06 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6781:


Attachment: HDFS-6781.patch

Attaching the same trunk patch again to trigger a Jenkins run.

 Separate HDFS commands from CommandsManual.apt.vm
 -

 Key: HDFS-6781
 URL: https://issues.apache.org/jira/browse/HDFS-6781
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HDFS-6781-branch-2.patch, HDFS-6781.patch, 
 HDFS-6781.patch


 HDFS-side of HADOOP-10899.
 The CommandsManual lists very old information about running HDFS subcommands 
 from the 'hadoop' shell CLI. These are deprecated and should be removed. If 
 necessary, the HDFS subcommands should be added to the HDFS documentation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Moved] (HDFS-6831) Inconsistency between 'hdfs dfsadmin' and 'hdfs dfsadmin -help'

2014-08-06 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA moved HADOOP-10943 to HDFS-6831:
--

Affects Version/s: (was: 2.4.0)
   2.4.0
  Key: HDFS-6831  (was: HADOOP-10943)
  Project: Hadoop HDFS  (was: Hadoop Common)

 Inconsistency between 'hdfs dfsadmin' and 'hdfs dfsadmin -help'
 ---

 Key: HDFS-6831
 URL: https://issues.apache.org/jira/browse/HDFS-6831
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Priority: Minor
  Labels: newbie

 There is an inconsistency between the console outputs of 'hdfs dfsadmin' 
 command and 'hdfs dfsadmin -help' command.
 {code}
 [root@trunk ~]# hdfs dfsadmin
 Usage: java DFSAdmin
 Note: Administrative commands can only be run as the HDFS superuser.
[-report]
[-safemode enter | leave | get | wait]
[-allowSnapshot <snapshotDir>]
[-disallowSnapshot <snapshotDir>]
[-saveNamespace]
[-rollEdits]
[-restoreFailedStorage true|false|check]
[-refreshNodes]
[-finalizeUpgrade]
[-rollingUpgrade [query|prepare|finalize]]
[-metasave filename]
[-refreshServiceAcl]
[-refreshUserToGroupsMappings]
[-refreshSuperUserGroupsConfiguration]
[-refreshCallQueue]
[-refresh]
[-printTopology]
[-refreshNamenodes datanodehost:port]
[-deleteBlockPool datanode-host:port blockpoolId [force]]
[-setQuota <quota> <dirname>...<dirname>]
[-clrQuota <dirname>...<dirname>]
[-setSpaceQuota <quota> <dirname>...<dirname>]
[-clrSpaceQuota <dirname>...<dirname>]
[-setBalancerBandwidth <bandwidth in bytes per second>]
[-fetchImage <local directory>]
[-shutdownDatanode <datanode_host:ipc_port> [upgrade]]
[-getDatanodeInfo <datanode_host:ipc_port>]
[-help [cmd]]
 {code}
 {code}
 [root@trunk ~]# hdfs dfsadmin -help
 hadoop dfsadmin performs DFS administrative commands.
 The full syntax is: 
 hadoop dfsadmin
   [-report [-live] [-dead] [-decommissioning]]
   [-safemode enter | leave | get | wait]
   [-saveNamespace]
   [-rollEdits]
   [-restoreFailedStorage true|false|check]
   [-refreshNodes]
   [-setQuota <quota> <dirname>...<dirname>]
   [-clrQuota <dirname>...<dirname>]
   [-setSpaceQuota <quota> <dirname>...<dirname>]
   [-clrSpaceQuota <dirname>...<dirname>]
   [-finalizeUpgrade]
   [-rollingUpgrade [query|prepare|finalize]]
   [-refreshServiceAcl]
   [-refreshUserToGroupsMappings]
   [-refreshSuperUserGroupsConfiguration]
   [-refreshCallQueue]
   [-refresh <host:ipc_port> <key> [arg1..argn]
   [-printTopology]
   [-refreshNamenodes datanodehost:port]
   [-deleteBlockPool datanodehost:port blockpoolId [force]]
   [-setBalancerBandwidth <bandwidth>]
   [-fetchImage <local directory>]
   [-allowSnapshot <snapshotDir>]
   [-disallowSnapshot <snapshotDir>]
   [-shutdownDatanode <datanode_host:ipc_port> [upgrade]]
   [-getDatanodeInfo <datanode_host:ipc_port>
   [-help [cmd]
 {code}
 These two outputs should be the same.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6832) Fix the usage of 'hdfs namenode' command

2014-08-06 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HDFS-6832:
---

 Summary: Fix the usage of 'hdfs namenode' command
 Key: HDFS-6832
 URL: https://issues.apache.org/jira/browse/HDFS-6832
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.1
Reporter: Akira AJISAKA
Priority: Minor


{code}
[root@trunk ~]# hdfs namenode -help
Usage: java NameNode [-backup] | 
[-checkpoint] | 
[-format [-clusterid cid ] [-force] [-nonInteractive] ] | 
[-upgrade [-clusterid cid] [-renameReserved<k-v pairs>] ] | 
[-upgradeOnly [-clusterid cid] [-renameReserved<k-v pairs>] ] | 
[-rollback] | 
[-rollingUpgrade <downgrade|rollback> ] | 
[-finalize] | 
[-importCheckpoint] | 
[-initializeSharedEdits] | 
[-bootstrapStandby] | 
[-recover [ -force] ] | 
[-metadataVersion ]  ]
{code}
There're some issues in the usage to be fixed.
# Usage: java NameNode should be Usage: hdfs namenode
# -rollingUpgrade started option should be added
# The last ']' should be removed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6781) Separate HDFS commands from CommandsManual.apt.vm

2014-08-07 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6781:


Attachment: HDFS-6781.2.patch

Thanks Arpit for the review.
Modified site.xml to use 'HDFS Commands Reference' instead of 'HDFS Commands 
Manual'.

 Separate HDFS commands from CommandsManual.apt.vm
 -

 Key: HDFS-6781
 URL: https://issues.apache.org/jira/browse/HDFS-6781
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HDFS-6781-branch-2.patch, HDFS-6781.2.patch, 
 HDFS-6781.patch, HDFS-6781.patch


 HDFS-side of HADOOP-10899.
 The CommandsManual lists very old information about running HDFS subcommands 
 from the 'hadoop' shell CLI. These are deprecated and should be removed. If 
 necessary, the HDFS subcommands should be added to the HDFS documentation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6781) Separate HDFS commands from CommandsManual.apt.vm

2014-08-07 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6781:


Attachment: (was: HDFS-6781.2.patch)

 Separate HDFS commands from CommandsManual.apt.vm
 -

 Key: HDFS-6781
 URL: https://issues.apache.org/jira/browse/HDFS-6781
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HDFS-6781-branch-2.patch, HDFS-6781.2.patch, 
 HDFS-6781.patch, HDFS-6781.patch


 HDFS-side of HADOOP-10899.
 The CommandsManual lists very old information about running HDFS subcommands 
 from the 'hadoop' shell CLI. These are deprecated and should be removed. If 
 necessary, the HDFS subcommands should be added to the HDFS documentation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6781) Separate HDFS commands from CommandsManual.apt.vm

2014-08-07 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6781:


Attachment: HDFS-6781.2.patch

 Separate HDFS commands from CommandsManual.apt.vm
 -

 Key: HDFS-6781
 URL: https://issues.apache.org/jira/browse/HDFS-6781
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HDFS-6781-branch-2.patch, HDFS-6781.2.patch, 
 HDFS-6781.patch, HDFS-6781.patch


 HDFS-side of HADOOP-10899.
 The CommandsManual lists very old information about running HDFS subcommands 
 from the 'hadoop' shell CLI. These are deprecated and should be removed. If 
 necessary, the HDFS subcommands should be added to the HDFS documentation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6781) Separate HDFS commands from CommandsManual.apt.vm

2014-08-07 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6781:


Attachment: HDFS-6781-branch-2.2.patch

Updated the patch for branch-2 also.

 Separate HDFS commands from CommandsManual.apt.vm
 -

 Key: HDFS-6781
 URL: https://issues.apache.org/jira/browse/HDFS-6781
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HDFS-6781-branch-2.2.patch, HDFS-6781-branch-2.patch, 
 HDFS-6781.2.patch, HDFS-6781.patch, HDFS-6781.patch


 HDFS-side of HADOOP-10899.
 The CommandsManual lists very old information about running HDFS subcommands 
 from the 'hadoop' shell CLI. These are deprecated and should be removed. If 
 necessary, the HDFS subcommands should be added to the HDFS documentation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6682) Add a metric to expose the timestamp of the oldest under-replicated block

2014-08-07 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14089060#comment-14089060
 ] 

Akira AJISAKA commented on HDFS-6682:
-

[~atm], would you please review this patch?

 Add a metric to expose the timestamp of the oldest under-replicated block
 -

 Key: HDFS-6682
 URL: https://issues.apache.org/jira/browse/HDFS-6682
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Attachments: HDFS-6682.patch


 In the following case, the data in HDFS is lost and a client needs to put 
 the same file again:
 # A client puts a file to HDFS
 # A DataNode crashes before replicating a block of the file to other DataNodes
 I propose a metric to expose the timestamp of the oldest 
 under-replicated/corrupt block. That way, a client can know which file to 
 retain for the retry.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6664) HDFS permissions guide documentation states incorrect default group mapping class.

2014-08-10 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14092368#comment-14092368
 ] 

Akira AJISAKA commented on HDFS-6664:
-

Thanks [~rchiang] for the patch! Some comments:
1.
{code}
+   the <code>bash -c groups</code> command to resolve a list of
{code}
{{<code>}} and {{</code>}} should be {{<<<}} and {{>>>}}. Most Hadoop 
documents are now written in APT format (*.apt.vm). The format is described at 
http://maven.apache.org/doxia/references/apt-format.html

2.
{code}
+   This implementation shells out to the Linux/Unix environment with
{code}
{{ShellBasedUnixGroupsMapping}} now supports Windows as well.

3 (minor).
{code}
+   JNI is available the implementation will use the API within hadoop
{code}
I think it's better to add a comma (,) between 'available' and 'the'.
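Regarding the APT format mentioned in comment 1: APT marks up monospaced text with triple angle brackets. A small illustrative fragment of what the documentation line could look like (assumed wording, not the actual patch):

```text
  This implementation shells out to the operating system with
  the <<<bash -c groups>>> command to resolve a list of groups
  for a user.
```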

 HDFS permissions guide documentation states incorrect default group mapping 
 class.
 --

 Key: HDFS-6664
 URL: https://issues.apache.org/jira/browse/HDFS-6664
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0, 2.5.0
Reporter: Chris Nauroth
Priority: Trivial
  Labels: newbie
 Attachments: HDFS6664-01.patch


 The HDFS permissions guide states that our default group mapping class is 
 {{org.apache.hadoop.security.ShellBasedUnixGroupsMapping}}.  This is no 
 longer true.  The default has been changed to 
 {{org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback}}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6664) HDFS permissions guide documentation states incorrect default group mapping class.

2014-08-10 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14092419#comment-14092419
 ] 

Akira AJISAKA commented on HDFS-6664:
-

Thanks for the update!
{code}
+   This implementation shells out with the the bash -c groups
{code}
Would you please remove the duplicated 'the'? Other than that, the patch looks 
good to me.

 HDFS permissions guide documentation states incorrect default group mapping 
 class.
 --

 Key: HDFS-6664
 URL: https://issues.apache.org/jira/browse/HDFS-6664
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0, 2.5.0
Reporter: Chris Nauroth
Priority: Trivial
  Labels: newbie
 Attachments: HDFS6664-01.patch, HDFS6664-02.patch


 The HDFS permissions guide states that our default group mapping class is 
 {{org.apache.hadoop.security.ShellBasedUnixGroupsMapping}}.  This is no 
 longer true.  The default has been changed to 
 {{org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback}}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HDFS-3655) Datanode recoverRbw could hang sometime

2014-08-10 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA resolved HDFS-3655.
-

  Resolution: Duplicate
Assignee: (was: Xiaobo Peng)
Target Version/s:   (was: 0.22.1)

Closing this issue as a duplicate. Please feel free to reopen if you disagree.

 Datanode recoverRbw could hang sometime
 ---

 Key: HDFS-3655
 URL: https://issues.apache.org/jira/browse/HDFS-3655
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 0.22.0, 1.0.3, 2.0.0-alpha
Reporter: Ming Ma
 Attachments: HDFS-3655-0.22-use-join-instead-of-wait.patch, 
 HDFS-3655-0.22.patch


 This bug seems to apply to 0.22 and Hadoop 2.0. I will upload the initial fix 
 done by my colleague Xiaobo Peng shortly (there is a logistics issue being 
 worked on so that he can upload the patch himself later).
 recoverRbw tries to kill the old writer thread, but it takes the lock (the 
 FSDataset monitor object) that the old writer thread is waiting on (for 
 example, in the call to data.getTmpInputStreams).
 DataXceiver for client /10.110.3.43:40193 [Receiving block 
 blk_-3037542385914640638_57111747 
 client=DFSClient_attempt_201206021424_0001_m_000401_0] daemon prio=10 
 tid=0x7facf8111800 nid=0x6b64 in Object.wait() [0x7facd1ddb000]
 java.lang.Thread.State: WAITING (on object monitor)
 at java.lang.Object.wait(Native Method)
 at java.lang.Thread.join(Thread.java:1186)
 - locked <0x0007856c1200> (a org.apache.hadoop.util.Daemon)
 at java.lang.Thread.join(Thread.java:1239)
 at 
 org.apache.hadoop.hdfs.server.datanode.ReplicaInPipeline.stopWriter(ReplicaInPipeline.java:158)
 at 
 org.apache.hadoop.hdfs.server.datanode.FSDataset.recoverRbw(FSDataset.java:1347)
 - locked <0x0007838398c0> (a 
 org.apache.hadoop.hdfs.server.datanode.FSDataset)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:119)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlockInternal(DataXceiver.java:391)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:327)
 at 
 org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:405)
 at 
 org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:344)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:183)
 at java.lang.Thread.run(Thread.java:662)
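One way to avoid this kind of hang is to stop the old writer before taking the lock, so a writer blocked on that same lock can still be interrupted and exit. A rough Java sketch with purely hypothetical names ({{datasetLock}} stands in for the FSDataset monitor; this is not the real DataNode API):

```java
// Hypothetical simplification of the hang described above: if recovery joins
// the old writer while holding the dataset monitor, and the old writer is
// blocked waiting for that same monitor, both threads hang forever.
// Interrupting and joining the writer BEFORE taking the lock avoids this.
public class StopWriterSketch {
    private final Object datasetLock = new Object();  // stand-in for the FSDataset monitor

    public void recoverRbw(Thread oldWriter) throws InterruptedException {
        // Stop the old writer outside the lock.
        oldWriter.interrupt();
        oldWriter.join();
        synchronized (datasetLock) {
            // ... replica recovery work that needs the dataset lock ...
        }
    }

    public static void main(String[] args) throws InterruptedException {
        StopWriterSketch ds = new StopWriterSketch();
        Thread oldWriter = new Thread(() -> {
            try {
                Thread.sleep(60_000);  // stand-in for a writer stuck in a long operation
            } catch (InterruptedException e) {
                // interrupted by recovery; clean up and exit
            }
        });
        oldWriter.start();
        ds.recoverRbw(oldWriter);  // returns promptly instead of deadlocking
        System.out.println("old writer alive: " + oldWriter.isAlive());  // prints false
    }
}
```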



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6664) HDFS permissions guide documentation states incorrect default group mapping class.

2014-08-10 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14092473#comment-14092473
 ] 

Akira AJISAKA commented on HDFS-6664:
-

+1 (non-binding), pending Jenkins.

 HDFS permissions guide documentation states incorrect default group mapping 
 class.
 --

 Key: HDFS-6664
 URL: https://issues.apache.org/jira/browse/HDFS-6664
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0, 2.5.0
Reporter: Chris Nauroth
Priority: Trivial
  Labels: newbie
 Attachments: HDFS6664-01.patch, HDFS6664-02.patch, HDFS6664-03.patch


 The HDFS permissions guide states that our default group mapping class is 
 {{org.apache.hadoop.security.ShellBasedUnixGroupsMapping}}.  This is no 
 longer true.  The default has been changed to 
 {{org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback}}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6832) Fix the usage of 'hdfs namenode' command

2014-08-13 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14095938#comment-14095938
 ] 

Akira AJISAKA commented on HDFS-6832:
-

[~Pooja.Gupta], feel free to create a patch and attach it to this JIRA.
Unfortunately, I don't have permission to assign the issue to you. A committer 
will assign it to you when the patch is committed.

 Fix the usage of 'hdfs namenode' command
 

 Key: HDFS-6832
 URL: https://issues.apache.org/jira/browse/HDFS-6832
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.1
Reporter: Akira AJISAKA
Priority: Minor
  Labels: newbie

 {code}
 [root@trunk ~]# hdfs namenode -help
 Usage: java NameNode [-backup] | 
   [-checkpoint] | 
   [-format [-clusterid cid ] [-force] [-nonInteractive] ] | 
   [-upgrade [-clusterid cid] [-renameReserved<k-v pairs>] ] | 
   [-upgradeOnly [-clusterid cid] [-renameReserved<k-v pairs>] ] | 
   [-rollback] | 
   [-rollingUpgrade <downgrade|rollback> ] | 
   [-finalize] | 
   [-importCheckpoint] | 
   [-initializeSharedEdits] | 
   [-bootstrapStandby] | 
   [-recover [ -force] ] | 
   [-metadataVersion ]  ]
 {code}
 There're some issues in the usage to be fixed.
 # Usage: java NameNode should be Usage: hdfs namenode
 # -rollingUpgrade started option should be added
 # The last ']' should be removed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6832) Fix the usage of 'hdfs namenode' command

2014-08-19 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14103273#comment-14103273
 ] 

Akira AJISAKA commented on HDFS-6832:
-

Thanks [~rsk13th] for the trunk patch.
{code}
  +  ];
{code}
Would you remove the last ']'?

 Fix the usage of 'hdfs namenode' command
 

 Key: HDFS-6832
 URL: https://issues.apache.org/jira/browse/HDFS-6832
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.1
Reporter: Akira AJISAKA
Assignee: skrho
Priority: Minor
  Labels: newbie
 Attachments: hdfs-6832.txt, hdfs-6832_001.txt


 {code}
 [root@trunk ~]# hdfs namenode -help
 Usage: java NameNode [-backup] | 
   [-checkpoint] | 
   [-format [-clusterid cid ] [-force] [-nonInteractive] ] | 
   [-upgrade [-clusterid cid] [-renameReserved<k-v pairs>] ] | 
   [-upgradeOnly [-clusterid cid] [-renameReserved<k-v pairs>] ] | 
   [-rollback] | 
   [-rollingUpgrade <downgrade|rollback> ] | 
   [-finalize] | 
   [-importCheckpoint] | 
   [-initializeSharedEdits] | 
   [-bootstrapStandby] | 
   [-recover [ -force] ] | 
   [-metadataVersion ]  ]
 {code}
 There're some issues in the usage to be fixed.
 # Usage: java NameNode should be Usage: hdfs namenode
 # -rollingUpgrade started option should be added
 # The last ']' should be removed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6945) ExcessBlocks metric may not be decremented if there are no over replicated blocks

2014-08-26 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HDFS-6945:
---

 Summary: ExcessBlocks metric may not be decremented if there are 
no over replicated blocks
 Key: HDFS-6945
 URL: https://issues.apache.org/jira/browse/HDFS-6945
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Akira AJISAKA


I'm seeing ExcessBlocks metric increases to more than 300K in some clusters, 
however, there are no over-replicated blocks (confirmed by fsck).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6832) Fix the usage of 'hdfs namenode' command

2014-08-26 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6832:


Target Version/s: 2.6.0  (was: 2.5.0)

 Fix the usage of 'hdfs namenode' command
 

 Key: HDFS-6832
 URL: https://issues.apache.org/jira/browse/HDFS-6832
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.1
Reporter: Akira AJISAKA
Assignee: skrho
Priority: Minor
  Labels: newbie
 Attachments: hdfs-6832.txt, hdfs-6832_001.txt


 {code}
 [root@trunk ~]# hdfs namenode -help
 Usage: java NameNode [-backup] | 
   [-checkpoint] | 
   [-format [-clusterid cid ] [-force] [-nonInteractive] ] | 
   [-upgrade [-clusterid cid] [-renameReserved<k-v pairs>] ] | 
   [-upgradeOnly [-clusterid cid] [-renameReserved<k-v pairs>] ] | 
   [-rollback] | 
   [-rollingUpgrade <downgrade|rollback> ] | 
   [-finalize] | 
   [-importCheckpoint] | 
   [-initializeSharedEdits] | 
   [-bootstrapStandby] | 
   [-recover [ -force] ] | 
   [-metadataVersion ]  ]
 {code}
 There're some issues in the usage to be fixed.
 # Usage: java NameNode should be Usage: hdfs namenode
 # -rollingUpgrade started option should be added
 # The last ']' should be removed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6945) ExcessBlocks metric may not be decremented if there are no over replicated blocks

2014-08-27 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14111913#comment-14111913
 ] 

Akira AJISAKA commented on HDFS-6945:
-

The number of excess blocks is incremented but not decremented in the following 
sequence:
# A block becomes over-replicated
# The NN asks a DN to delete an excess block
# The DN deletes the block
# The file that includes the block is deleted before the block report is 
received from the DN

If the block has already been deleted, the counter is not decremented while 
processing the block report.

 ExcessBlocks metric may not be decremented if there are no over replicated 
 blocks
 -

 Key: HDFS-6945
 URL: https://issues.apache.org/jira/browse/HDFS-6945
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
  Labels: metrics

 I'm seeing ExcessBlocks metric increases to more than 300K in some clusters, 
 however, there are no over-replicated blocks (confirmed by fsck).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6945) ExcessBlocks metric may not be decremented if there are no over replicated blocks

2014-08-27 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14111920#comment-14111920
 ] 

Akira AJISAKA commented on HDFS-6945:
-

I propose adding logic to remove the block from {{excessReplicateMap}} and 
decrement the counter in the {{BlockManager#removeBlock(Block)}} and 
{{BlockManager#removeBlockFromMap(Block)}} methods.
Currently {{excessReplicateMap}} can grow without bound, which amounts to a 
memory leak.
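The idea can be sketched with a toy model. The names below mirror {{excessReplicateMap}} and {{excessBlocksCount}}, but none of this is the actual BlockManager code; it only illustrates that removing a block must also clean the map and decrement the counter:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Simplified sketch (not the real BlockManager API): a per-DataNode map of
// excess replicas plus a counter, where removeBlock() also cleans the map so
// the counter cannot drift when a file is deleted before the block report.
public class ExcessMapSketch {
    // datanode id -> set of excess block IDs (stand-in for excessReplicateMap)
    private final Map<String, Set<Long>> excessReplicateMap = new HashMap<>();
    private long excessBlocksCount = 0;

    public void addExcessReplica(String datanode, long blockId) {
        if (excessReplicateMap.computeIfAbsent(datanode, k -> new HashSet<>()).add(blockId)) {
            excessBlocksCount++;
        }
    }

    // Proposed behavior: when a block is removed (e.g. its file is deleted),
    // drop it from every per-node set and decrement the counter.
    public void removeBlock(long blockId) {
        for (Set<Long> blocks : excessReplicateMap.values()) {
            if (blocks.remove(blockId)) {
                excessBlocksCount--;
            }
        }
    }

    public long getExcessBlocksCount() {
        return excessBlocksCount;
    }

    public static void main(String[] args) {
        ExcessMapSketch bm = new ExcessMapSketch();
        bm.addExcessReplica("dn1", 42L);
        bm.removeBlock(42L);  // file deleted before the block report arrives
        System.out.println(bm.getExcessBlocksCount());  // prints 0, no leak
    }
}
```

With this shape, deleting a file before the DN's block report arrives can no longer leave a stale entry (and a stale count) behind.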

 ExcessBlocks metric may not be decremented if there are no over replicated 
 blocks
 -

 Key: HDFS-6945
 URL: https://issues.apache.org/jira/browse/HDFS-6945
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
  Labels: metrics

 I'm seeing ExcessBlocks metric increases to more than 300K in some clusters, 
 however, there are no over-replicated blocks (confirmed by fsck).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HDFS-6945) ExcessBlocks metric may not be decremented if there are no over replicated blocks

2014-08-27 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA reassigned HDFS-6945:
---

Assignee: Akira AJISAKA

 ExcessBlocks metric may not be decremented if there are no over replicated 
 blocks
 -

 Key: HDFS-6945
 URL: https://issues.apache.org/jira/browse/HDFS-6945
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
  Labels: metrics

 I'm seeing ExcessBlocks metric increases to more than 300K in some clusters, 
 however, there are no over-replicated blocks (confirmed by fsck).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6945) ExcessBlocks metric may not be decremented if there are no over replicated blocks

2014-08-27 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6945:


Attachment: HDFS-6945.patch

Attaching a patch.

 ExcessBlocks metric may not be decremented if there are no over replicated 
 blocks
 -

 Key: HDFS-6945
 URL: https://issues.apache.org/jira/browse/HDFS-6945
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
  Labels: metrics
 Attachments: HDFS-6945.patch


 I'm seeing ExcessBlocks metric increases to more than 300K in some clusters, 
 however, there are no over-replicated blocks (confirmed by fsck).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6945) ExcessBlocks metric may not be decremented if there are no over replicated blocks

2014-08-27 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6945:


Target Version/s: 2.6.0
  Status: Patch Available  (was: Open)

 ExcessBlocks metric may not be decremented if there are no over replicated 
 blocks
 -

 Key: HDFS-6945
 URL: https://issues.apache.org/jira/browse/HDFS-6945
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
  Labels: metrics
 Attachments: HDFS-6945.patch


 I'm seeing ExcessBlocks metric increases to more than 300K in some clusters, 
 however, there are no over-replicated blocks (confirmed by fsck).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6945) excessReplicateMap can increase infinitely

2014-08-27 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6945:


Description: 
I'm seeing the ExcessBlocks metric increase to more than 300K in some clusters; 
however, there are no over-replicated blocks (confirmed by fsck).

After further research, I noticed that when deleting a block, BlockManager does 
not remove the block from excessReplicateMap or decrement excessBlocksCount.
Usually the metric is decremented when a block report is processed; however, if 
the block has already been deleted, BlockManager does not remove the block from 
excessReplicateMap or decrement the metric.
As a result, the metric and excessReplicateMap can grow indefinitely (i.e. a 
memory leak can occur).

  was:I'm seeing ExcessBlocks metric increases to more than 300K in some 
clusters, however, there are no over-replicated blocks (confirmed by fsck).

   Priority: Critical  (was: Major)
Summary: excessReplicateMap can increase infinitely  (was: ExcessBlocks 
metric may not be decremented if there are no over replicated blocks)

Updated the summary and the description.

 excessReplicateMap can increase infinitely
 --

 Key: HDFS-6945
 URL: https://issues.apache.org/jira/browse/HDFS-6945
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Critical
  Labels: metrics
 Attachments: HDFS-6945.patch


 I'm seeing the ExcessBlocks metric increase to more than 300K in some 
 clusters; however, there are no over-replicated blocks (confirmed by fsck).
 After further research, I noticed that when deleting a block, BlockManager 
 does not remove the block from excessReplicateMap or decrement 
 excessBlocksCount.
 Usually the metric is decremented when a block report is processed; however, 
 if the block has already been deleted, BlockManager does not remove the block 
 from excessReplicateMap or decrement the metric.
 As a result, the metric and excessReplicateMap can grow indefinitely (i.e. a 
 memory leak can occur).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6831) Inconsistency between 'hdfs dfsadmin' and 'hdfs dfsadmin -help'

2014-08-28 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14113490#comment-14113490
 ] 

Akira AJISAKA commented on HDFS-6831:
-

Thanks [~xiaoyuyao] for the patch!
Would you change {{hdfs DFSAdmin}} to {{hdfs dfsadmin}}?

 Inconsistency between 'hdfs dfsadmin' and 'hdfs dfsadmin -help'
 ---

 Key: HDFS-6831
 URL: https://issues.apache.org/jira/browse/HDFS-6831
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HDFS-6831.0.patch


 There is an inconsistency between the console outputs of 'hdfs dfsadmin' 
 command and 'hdfs dfsadmin -help' command.
 {code}
 [root@trunk ~]# hdfs dfsadmin
 Usage: java DFSAdmin
 Note: Administrative commands can only be run as the HDFS superuser.
[-report]
[-safemode enter | leave | get | wait]
[-allowSnapshot <snapshotDir>]
[-disallowSnapshot <snapshotDir>]
[-saveNamespace]
[-rollEdits]
[-restoreFailedStorage true|false|check]
[-refreshNodes]
[-finalizeUpgrade]
[-rollingUpgrade [query|prepare|finalize]]
[-metasave filename]
[-refreshServiceAcl]
[-refreshUserToGroupsMappings]
[-refreshSuperUserGroupsConfiguration]
[-refreshCallQueue]
[-refresh]
[-printTopology]
[-refreshNamenodes datanodehost:port]
[-deleteBlockPool datanode-host:port blockpoolId [force]]
[-setQuota <quota> <dirname>...<dirname>]
[-clrQuota <dirname>...<dirname>]
[-setSpaceQuota <quota> <dirname>...<dirname>]
[-clrSpaceQuota <dirname>...<dirname>]
[-setBalancerBandwidth <bandwidth in bytes per second>]
[-fetchImage <local directory>]
[-shutdownDatanode <datanode_host:ipc_port> [upgrade]]
[-getDatanodeInfo <datanode_host:ipc_port>]
[-help [cmd]]
 {code}
 {code}
 [root@trunk ~]# hdfs dfsadmin -help
 hadoop dfsadmin performs DFS administrative commands.
 The full syntax is: 
 hadoop dfsadmin
   [-report [-live] [-dead] [-decommissioning]]
   [-safemode enter | leave | get | wait]
   [-saveNamespace]
   [-rollEdits]
   [-restoreFailedStorage true|false|check]
   [-refreshNodes]
   [-setQuota <quota> <dirname>...<dirname>]
   [-clrQuota <dirname>...<dirname>]
   [-setSpaceQuota <quota> <dirname>...<dirname>]
   [-clrSpaceQuota <dirname>...<dirname>]
   [-finalizeUpgrade]
   [-rollingUpgrade [query|prepare|finalize]]
   [-refreshServiceAcl]
   [-refreshUserToGroupsMappings]
   [-refreshSuperUserGroupsConfiguration]
   [-refreshCallQueue]
   [-refresh <host:ipc_port> <key> [arg1..argn]
   [-printTopology]
   [-refreshNamenodes datanodehost:port]
   [-deleteBlockPool datanodehost:port blockpoolId [force]]
   [-setBalancerBandwidth <bandwidth>]
   [-fetchImage <local directory>]
   [-allowSnapshot <snapshotDir>]
   [-disallowSnapshot <snapshotDir>]
   [-shutdownDatanode <datanode_host:ipc_port> [upgrade]]
   [-getDatanodeInfo <datanode_host:ipc_port>
   [-help [cmd]
 {code}
 These two outputs should be the same.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6945) excessReplicateMap can increase infinitely

2014-08-31 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6945:

Attachment: HDFS-6945.2.patch

Updated the patch to avoid a ConcurrentModificationException when removing a 
value from a TreeMap.
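As background for this update: removing entries through a {{TreeMap}} itself while iterating over one of its views throws {{ConcurrentModificationException}}; removing through the iterator is the safe pattern. A minimal illustrative sketch (not the patch itself):

```java
import java.util.Iterator;
import java.util.Map;
import java.util.TreeMap;

// Demonstrates iterator-safe removal from a TreeMap. Calling map.remove(key)
// inside the loop would throw ConcurrentModificationException on the next
// it.next(); it.remove() mutates the map through the iterator and stays safe.
public class TreeMapRemoval {
    public static int removeMatching(TreeMap<String, Long> map, long victim) {
        int removed = 0;
        Iterator<Map.Entry<String, Long>> it = map.entrySet().iterator();
        while (it.hasNext()) {
            if (it.next().getValue() == victim) {
                it.remove();  // safe; map.remove(key) here would throw CME
                removed++;
            }
        }
        return removed;
    }

    public static void main(String[] args) {
        TreeMap<String, Long> m = new TreeMap<>();
        m.put("dn1", 42L);
        m.put("dn2", 7L);
        m.put("dn3", 42L);
        System.out.println(removeMatching(m, 42L) + " " + m.size());  // prints 2 1
    }
}
```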

 excessReplicateMap can increase infinitely
 --

 Key: HDFS-6945
 URL: https://issues.apache.org/jira/browse/HDFS-6945
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Critical
  Labels: metrics
 Attachments: HDFS-6945.2.patch, HDFS-6945.patch


 I'm seeing the ExcessBlocks metric increase to more than 300K in some 
 clusters; however, there are no over-replicated blocks (confirmed by fsck).
 After further research, I noticed that when deleting a block, BlockManager 
 does not remove the block from excessReplicateMap or decrement 
 excessBlocksCount.
 Usually the metric is decremented when a block report is processed; however, 
 if the block has already been deleted, BlockManager does not remove the block 
 from excessReplicateMap or decrement the metric.
 As a result, the metric and excessReplicateMap can grow indefinitely (i.e. a 
 memory leak can occur).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6832) Fix the usage of 'hdfs namenode' command

2014-08-31 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14117035#comment-14117035
 ] 

Akira AJISAKA commented on HDFS-6832:
-

Thanks [~skrho] for the update. A trivial comment: would you remove the line 
below, which contains only whitespace?
{code}
+
{code}

 Fix the usage of 'hdfs namenode' command
 

 Key: HDFS-6832
 URL: https://issues.apache.org/jira/browse/HDFS-6832
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.1
Reporter: Akira AJISAKA
Assignee: skrho
Priority: Minor
  Labels: newbie
 Attachments: hdfs-6832.txt, hdfs-6832_001.txt, hdfs-6832_002.txt


 {code}
 [root@trunk ~]# hdfs namenode -help
 Usage: java NameNode [-backup] | 
   [-checkpoint] | 
   [-format [-clusterid cid ] [-force] [-nonInteractive] ] | 
   [-upgrade [-clusterid cid] [-renameReserved<k-v pairs>] ] | 
   [-upgradeOnly [-clusterid cid] [-renameReserved<k-v pairs>] ] | 
   [-rollback] | 
   [-rollingUpgrade <downgrade|rollback> ] | 
   [-finalize] | 
   [-importCheckpoint] | 
   [-initializeSharedEdits] | 
   [-bootstrapStandby] | 
   [-recover [ -force] ] | 
   [-metadataVersion ]  ]
 {code}
 There are some issues in the usage to be fixed:
 # "Usage: java NameNode" should be "Usage: hdfs namenode"
 # The "-rollingUpgrade started" option should be added
 # The last ']' should be removed.





[jira] [Created] (HDFS-6980) TestWebHdfsFileSystemContract fails in trunk

2014-09-01 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HDFS-6980:
---

 Summary: TestWebHdfsFileSystemContract fails in trunk
 Key: HDFS-6980
 URL: https://issues.apache.org/jira/browse/HDFS-6980
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Akira AJISAKA


Many tests in TestWebHdfsFileSystemContract fail with a "too many open files" error.





[jira] [Commented] (HDFS-6942) Fix typos in log messages

2014-09-02 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14117924#comment-14117924
 ] 

Akira AJISAKA commented on HDFS-6942:
-

Thanks [~rchiang] for the patch. Looks good to me.
By the way, I found another typo, 'targests', in DataNode.java:
{code}
  if (DataTransferProtocol.LOG.isDebugEnabled()) {
    DataTransferProtocol.LOG.debug(getClass().getSimpleName() + ": "
        + b + " (numBytes=" + b.getNumBytes() + ")"
        + ", stage=" + stage
        + ", clientname=" + clientname
        + ", targests=" + Arrays.asList(targets));
  }
{code}
Would you include fixing the typo in the patch?

 Fix typos in log messages
 -

 Key: HDFS-6942
 URL: https://issues.apache.org/jira/browse/HDFS-6942
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Trivial
  Labels: newbie
 Attachments: HDFS-6942-01.patch


 There are a bunch of typos in log messages. HADOOP-10946 was initially 
 created, but may have failed due to being in multiple components. Try fixing 
 typos on a per-component basis.





[jira] [Commented] (HDFS-6831) Inconsistency between 'hdfs dfsadmin' and 'hdfs dfsadmin -help'

2014-09-04 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14122282#comment-14122282
 ] 

Akira AJISAKA commented on HDFS-6831:
-

One more comment:
{code}
  "The full syntax is: \n\n" +
  "hadoop dfsadmin\n" +
  commonUsageSummary;
{code}
Would you change "hadoop dfsadmin" to "hdfs dfsadmin"? I'm +1 (non-binding) if 
that change is included.
{code}
String summary = "hadoop dfsadmin performs DFS administrative commands.\n" +
{code}
I'm thinking the above can be changed to "hdfs dfsadmin" as well. 
[~arpitagarwal] and [~xyao], what do you think?
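One way to avoid this kind of drift between the two outputs is to build both from a single command list. A minimal sketch under that assumption, with hypothetical names (not the actual DFSAdmin code):

```java
// Sketch: derive both the no-args usage and the -help text from one shared
// command list, so the two outputs cannot disagree. Names are hypothetical.
public class UsageSketch {
    static final String[] COMMANDS = {
        "[-report [-live] [-dead] [-decommissioning]]",
        "[-safemode enter | leave | get | wait]",
        "[-help [cmd]]",
    };

    static String usage(String header) {
        StringBuilder sb = new StringBuilder(header).append('\n');
        for (String c : COMMANDS) {
            sb.append('\t').append(c).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Both entry points share the same command summary.
        System.out.print(usage("Usage: hdfs dfsadmin"));
        System.out.print(usage("hdfs dfsadmin performs DFS administrative commands.\n"
                + "The full syntax is:\n\nhdfs dfsadmin"));
    }
}
```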


 Inconsistency between 'hdfs dfsadmin' and 'hdfs dfsadmin -help'
 ---

 Key: HDFS-6831
 URL: https://issues.apache.org/jira/browse/HDFS-6831
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: Xiaoyu Yao
Priority: Minor
  Labels: newbie
 Attachments: HDFS-6831.0.patch, HDFS-6831.1.patch


 There is an inconsistency between the console outputs of 'hdfs dfsadmin' 
 command and 'hdfs dfsadmin -help' command.
 {code}
 [root@trunk ~]# hdfs dfsadmin
 Usage: java DFSAdmin
 Note: Administrative commands can only be run as the HDFS superuser.
[-report]
[-safemode enter | leave | get | wait]
[-allowSnapshot snapshotDir]
[-disallowSnapshot snapshotDir]
[-saveNamespace]
[-rollEdits]
[-restoreFailedStorage true|false|check]
[-refreshNodes]
[-finalizeUpgrade]
[-rollingUpgrade [query|prepare|finalize]]
[-metasave filename]
[-refreshServiceAcl]
[-refreshUserToGroupsMappings]
[-refreshSuperUserGroupsConfiguration]
[-refreshCallQueue]
[-refresh]
[-printTopology]
[-refreshNamenodes datanodehost:port]
[-deleteBlockPool datanode-host:port blockpoolId [force]]
[-setQuota quota dirname...dirname]
[-clrQuota dirname...dirname]
[-setSpaceQuota quota dirname...dirname]
[-clrSpaceQuota dirname...dirname]
[-setBalancerBandwidth bandwidth in bytes per second]
[-fetchImage local directory]
[-shutdownDatanode datanode_host:ipc_port [upgrade]]
[-getDatanodeInfo datanode_host:ipc_port]
[-help [cmd]]
 {code}
 {code}
 [root@trunk ~]# hdfs dfsadmin -help
 hadoop dfsadmin performs DFS administrative commands.
 The full syntax is: 
 hadoop dfsadmin
   [-report [-live] [-dead] [-decommissioning]]
   [-safemode enter | leave | get | wait]
   [-saveNamespace]
   [-rollEdits]
   [-restoreFailedStorage true|false|check]
   [-refreshNodes]
   [-setQuota quota dirname...dirname]
   [-clrQuota dirname...dirname]
   [-setSpaceQuota quota dirname...dirname]
   [-clrSpaceQuota dirname...dirname]
   [-finalizeUpgrade]
   [-rollingUpgrade [query|prepare|finalize]]
   [-refreshServiceAcl]
   [-refreshUserToGroupsMappings]
   [-refreshSuperUserGroupsConfiguration]
   [-refreshCallQueue]
   [-refresh host:ipc_port key [arg1..argn]
   [-printTopology]
   [-refreshNamenodes datanodehost:port]
   [-deleteBlockPool datanodehost:port blockpoolId [force]]
   [-setBalancerBandwidth bandwidth]
   [-fetchImage local directory]
   [-allowSnapshot snapshotDir]
   [-disallowSnapshot snapshotDir]
   [-shutdownDatanode datanode_host:ipc_port [upgrade]]
   [-getDatanodeInfo datanode_host:ipc_port
   [-help [cmd]
 {code}
 These two outputs should be the same.





[jira] [Commented] (HDFS-6945) excessReplicateMap can increase infinitely

2014-09-04 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14122312#comment-14122312
 ] 

Akira AJISAKA commented on HDFS-6945:
-

The test failures look unrelated to the patch; they are tracked by HDFS-6980 
and HDFS-6694.

 excessReplicateMap can increase infinitely
 --

 Key: HDFS-6945
 URL: https://issues.apache.org/jira/browse/HDFS-6945
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Critical
  Labels: metrics
 Attachments: HDFS-6945.2.patch, HDFS-6945.patch


 I'm seeing the ExcessBlocks metric increase to more than 300K in some 
 clusters; however, there are no over-replicated blocks (confirmed by fsck).
 After further research, I noticed that when a block is deleted, BlockManager 
 does not remove it from excessReplicateMap or decrement excessBlocksCount.
 The metric is usually decremented when a block report is processed; however, 
 if the block has already been deleted, BlockManager does not remove it from 
 excessReplicateMap or decrement the metric.
 As a result, the metric and excessReplicateMap can grow indefinitely (i.e. a 
 memory leak can occur).





[jira] [Updated] (HDFS-6945) BlockManager should remove a block from excessReplicateMap and decrement ExcessBlocks metric when the block is removed

2014-09-04 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6945:

Summary: BlockManager should remove a block from excessReplicateMap and 
decrement ExcessBlocks metric when the block is removed  (was: 
excessReplicateMap can increase infinitely)

I think the patch is ready for review.

 BlockManager should remove a block from excessReplicateMap and decrement 
 ExcessBlocks metric when the block is removed
 --

 Key: HDFS-6945
 URL: https://issues.apache.org/jira/browse/HDFS-6945
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Critical
  Labels: metrics
 Attachments: HDFS-6945.2.patch, HDFS-6945.patch


 I'm seeing the ExcessBlocks metric increase to more than 300K in some 
 clusters; however, there are no over-replicated blocks (confirmed by fsck).
 After further research, I noticed that when a block is deleted, BlockManager 
 does not remove it from excessReplicateMap or decrement excessBlocksCount.
 The metric is usually decremented when a block report is processed; however, 
 if the block has already been deleted, BlockManager does not remove it from 
 excessReplicateMap or decrement the metric.
 As a result, the metric and excessReplicateMap can grow indefinitely (i.e. a 
 memory leak can occur).





[jira] [Commented] (HDFS-6831) Inconsistency between 'hdfs dfsadmin' and 'hdfs dfsadmin -help'

2014-09-04 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14122349#comment-14122349
 ] 

Akira AJISAKA commented on HDFS-6831:
-

Thanks [~xyao] for updating the patch. Would you please remove the import 
grouping changes from the patch?

 Inconsistency between 'hdfs dfsadmin' and 'hdfs dfsadmin -help'
 ---

 Key: HDFS-6831
 URL: https://issues.apache.org/jira/browse/HDFS-6831
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: Xiaoyu Yao
Priority: Minor
  Labels: newbie
 Attachments: HDFS-6831.0.patch, HDFS-6831.1.patch, HDFS-6831.2.patch


 There is an inconsistency between the console outputs of 'hdfs dfsadmin' 
 command and 'hdfs dfsadmin -help' command.
 {code}
 [root@trunk ~]# hdfs dfsadmin
 Usage: java DFSAdmin
 Note: Administrative commands can only be run as the HDFS superuser.
[-report]
[-safemode enter | leave | get | wait]
[-allowSnapshot snapshotDir]
[-disallowSnapshot snapshotDir]
[-saveNamespace]
[-rollEdits]
[-restoreFailedStorage true|false|check]
[-refreshNodes]
[-finalizeUpgrade]
[-rollingUpgrade [query|prepare|finalize]]
[-metasave filename]
[-refreshServiceAcl]
[-refreshUserToGroupsMappings]
[-refreshSuperUserGroupsConfiguration]
[-refreshCallQueue]
[-refresh]
[-printTopology]
[-refreshNamenodes datanodehost:port]
[-deleteBlockPool datanode-host:port blockpoolId [force]]
[-setQuota quota dirname...dirname]
[-clrQuota dirname...dirname]
[-setSpaceQuota quota dirname...dirname]
[-clrSpaceQuota dirname...dirname]
[-setBalancerBandwidth bandwidth in bytes per second]
[-fetchImage local directory]
[-shutdownDatanode datanode_host:ipc_port [upgrade]]
[-getDatanodeInfo datanode_host:ipc_port]
[-help [cmd]]
 {code}
 {code}
 [root@trunk ~]# hdfs dfsadmin -help
 hadoop dfsadmin performs DFS administrative commands.
 The full syntax is: 
 hadoop dfsadmin
   [-report [-live] [-dead] [-decommissioning]]
   [-safemode enter | leave | get | wait]
   [-saveNamespace]
   [-rollEdits]
   [-restoreFailedStorage true|false|check]
   [-refreshNodes]
   [-setQuota quota dirname...dirname]
   [-clrQuota dirname...dirname]
   [-setSpaceQuota quota dirname...dirname]
   [-clrSpaceQuota dirname...dirname]
   [-finalizeUpgrade]
   [-rollingUpgrade [query|prepare|finalize]]
   [-refreshServiceAcl]
   [-refreshUserToGroupsMappings]
   [-refreshSuperUserGroupsConfiguration]
   [-refreshCallQueue]
   [-refresh host:ipc_port key [arg1..argn]
   [-printTopology]
   [-refreshNamenodes datanodehost:port]
   [-deleteBlockPool datanodehost:port blockpoolId [force]]
   [-setBalancerBandwidth bandwidth]
   [-fetchImage local directory]
   [-allowSnapshot snapshotDir]
   [-disallowSnapshot snapshotDir]
   [-shutdownDatanode datanode_host:ipc_port [upgrade]]
   [-getDatanodeInfo datanode_host:ipc_port
   [-help [cmd]
 {code}
 These two outputs should be the same.





[jira] [Commented] (HDFS-6831) Inconsistency between 'hdfs dfsadmin' and 'hdfs dfsadmin -help'

2014-09-05 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14122489#comment-14122489
 ] 

Akira AJISAKA commented on HDFS-6831:
-

+1 (non-binding), pending Jenkins to run tests with the v3 patch.

 Inconsistency between 'hdfs dfsadmin' and 'hdfs dfsadmin -help'
 ---

 Key: HDFS-6831
 URL: https://issues.apache.org/jira/browse/HDFS-6831
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: Xiaoyu Yao
Priority: Minor
  Labels: newbie
 Attachments: HDFS-6831.0.patch, HDFS-6831.1.patch, HDFS-6831.2.patch, 
 HDFS-6831.3.patch


 There is an inconsistency between the console outputs of 'hdfs dfsadmin' 
 command and 'hdfs dfsadmin -help' command.
 {code}
 [root@trunk ~]# hdfs dfsadmin
 Usage: java DFSAdmin
 Note: Administrative commands can only be run as the HDFS superuser.
[-report]
[-safemode enter | leave | get | wait]
[-allowSnapshot snapshotDir]
[-disallowSnapshot snapshotDir]
[-saveNamespace]
[-rollEdits]
[-restoreFailedStorage true|false|check]
[-refreshNodes]
[-finalizeUpgrade]
[-rollingUpgrade [query|prepare|finalize]]
[-metasave filename]
[-refreshServiceAcl]
[-refreshUserToGroupsMappings]
[-refreshSuperUserGroupsConfiguration]
[-refreshCallQueue]
[-refresh]
[-printTopology]
[-refreshNamenodes datanodehost:port]
[-deleteBlockPool datanode-host:port blockpoolId [force]]
[-setQuota quota dirname...dirname]
[-clrQuota dirname...dirname]
[-setSpaceQuota quota dirname...dirname]
[-clrSpaceQuota dirname...dirname]
[-setBalancerBandwidth bandwidth in bytes per second]
[-fetchImage local directory]
[-shutdownDatanode datanode_host:ipc_port [upgrade]]
[-getDatanodeInfo datanode_host:ipc_port]
[-help [cmd]]
 {code}
 {code}
 [root@trunk ~]# hdfs dfsadmin -help
 hadoop dfsadmin performs DFS administrative commands.
 The full syntax is: 
 hadoop dfsadmin
   [-report [-live] [-dead] [-decommissioning]]
   [-safemode enter | leave | get | wait]
   [-saveNamespace]
   [-rollEdits]
   [-restoreFailedStorage true|false|check]
   [-refreshNodes]
   [-setQuota quota dirname...dirname]
   [-clrQuota dirname...dirname]
   [-setSpaceQuota quota dirname...dirname]
   [-clrSpaceQuota dirname...dirname]
   [-finalizeUpgrade]
   [-rollingUpgrade [query|prepare|finalize]]
   [-refreshServiceAcl]
   [-refreshUserToGroupsMappings]
   [-refreshSuperUserGroupsConfiguration]
   [-refreshCallQueue]
   [-refresh host:ipc_port key [arg1..argn]
   [-printTopology]
   [-refreshNamenodes datanodehost:port]
   [-deleteBlockPool datanodehost:port blockpoolId [force]]
   [-setBalancerBandwidth bandwidth]
   [-fetchImage local directory]
   [-allowSnapshot snapshotDir]
   [-disallowSnapshot snapshotDir]
   [-shutdownDatanode datanode_host:ipc_port [upgrade]]
   [-getDatanodeInfo datanode_host:ipc_port
   [-help [cmd]
 {code}
 These two outputs should be the same.





[jira] [Commented] (HDFS-6831) Inconsistency between 'hdfs dfsadmin' and 'hdfs dfsadmin -help'

2014-09-05 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14122580#comment-14122580
 ] 

Akira AJISAKA commented on HDFS-6831:
-

TestTools fails with the v3 patch. Would you fix the test?
{code}
String pattern = "Usage: java DFSAdmin";
checkOutput(new String[] { "-cancel", "-renew" }, pattern, System.err,
    DFSAdmin.class);
{code}
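If the usage header changes, the test's expected pattern has to track it. A self-contained sketch of the idea, with a hypothetical stand-in for DFSAdmin's usage output:

```java
// Stand-in for DFSAdmin's usage output after the header is fixed to
// "Usage: hdfs dfsadmin" (hypothetical; today's header is "Usage: java DFSAdmin").
public class UsagePatternDemo {
    static String printUsage() {
        return "Usage: hdfs dfsadmin\n\t[-report]\n\t[-help [cmd]]\n";
    }

    public static void main(String[] args) {
        // The TestTools-style check must use the new header as its pattern.
        String pattern = "Usage: hdfs dfsadmin";
        System.out.println(printUsage().contains(pattern)); // prints true
    }
}
```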

 Inconsistency between 'hdfs dfsadmin' and 'hdfs dfsadmin -help'
 ---

 Key: HDFS-6831
 URL: https://issues.apache.org/jira/browse/HDFS-6831
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: Xiaoyu Yao
Priority: Minor
  Labels: newbie
 Attachments: HDFS-6831.0.patch, HDFS-6831.1.patch, HDFS-6831.2.patch, 
 HDFS-6831.3.patch


 There is an inconsistency between the console outputs of 'hdfs dfsadmin' 
 command and 'hdfs dfsadmin -help' command.
 {code}
 [root@trunk ~]# hdfs dfsadmin
 Usage: java DFSAdmin
 Note: Administrative commands can only be run as the HDFS superuser.
[-report]
[-safemode enter | leave | get | wait]
[-allowSnapshot snapshotDir]
[-disallowSnapshot snapshotDir]
[-saveNamespace]
[-rollEdits]
[-restoreFailedStorage true|false|check]
[-refreshNodes]
[-finalizeUpgrade]
[-rollingUpgrade [query|prepare|finalize]]
[-metasave filename]
[-refreshServiceAcl]
[-refreshUserToGroupsMappings]
[-refreshSuperUserGroupsConfiguration]
[-refreshCallQueue]
[-refresh]
[-printTopology]
[-refreshNamenodes datanodehost:port]
[-deleteBlockPool datanode-host:port blockpoolId [force]]
[-setQuota quota dirname...dirname]
[-clrQuota dirname...dirname]
[-setSpaceQuota quota dirname...dirname]
[-clrSpaceQuota dirname...dirname]
[-setBalancerBandwidth bandwidth in bytes per second]
[-fetchImage local directory]
[-shutdownDatanode datanode_host:ipc_port [upgrade]]
[-getDatanodeInfo datanode_host:ipc_port]
[-help [cmd]]
 {code}
 {code}
 [root@trunk ~]# hdfs dfsadmin -help
 hadoop dfsadmin performs DFS administrative commands.
 The full syntax is: 
 hadoop dfsadmin
   [-report [-live] [-dead] [-decommissioning]]
   [-safemode enter | leave | get | wait]
   [-saveNamespace]
   [-rollEdits]
   [-restoreFailedStorage true|false|check]
   [-refreshNodes]
   [-setQuota quota dirname...dirname]
   [-clrQuota dirname...dirname]
   [-setSpaceQuota quota dirname...dirname]
   [-clrSpaceQuota dirname...dirname]
   [-finalizeUpgrade]
   [-rollingUpgrade [query|prepare|finalize]]
   [-refreshServiceAcl]
   [-refreshUserToGroupsMappings]
   [-refreshSuperUserGroupsConfiguration]
   [-refreshCallQueue]
   [-refresh host:ipc_port key [arg1..argn]
   [-printTopology]
   [-refreshNamenodes datanodehost:port]
   [-deleteBlockPool datanodehost:port blockpoolId [force]]
   [-setBalancerBandwidth bandwidth]
   [-fetchImage local directory]
   [-allowSnapshot snapshotDir]
   [-disallowSnapshot snapshotDir]
   [-shutdownDatanode datanode_host:ipc_port [upgrade]]
   [-getDatanodeInfo datanode_host:ipc_port
   [-help [cmd]
 {code}
 These two outputs should be the same.





[jira] [Resolved] (HDFS-7002) Failed to rolling upgrade hdfs from 2.2.0 to 2.4.1

2014-09-05 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA resolved HDFS-7002.
-
Resolution: Invalid

Rolling upgrade is available only when upgrading from 2.4 or later. Rolling 
upgrade from 2.3 or earlier to 2.4+ is not supported.

 Failed to rolling upgrade hdfs from 2.2.0 to 2.4.1
 --

 Key: HDFS-7002
 URL: https://issues.apache.org/jira/browse/HDFS-7002
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: journal-node, namenode, qjm
Affects Versions: 2.2.0, 2.4.1
Reporter: sam liu
Priority: Blocker







[jira] [Commented] (HDFS-6980) TestWebHdfsFileSystemContract fails in trunk

2014-09-08 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14125359#comment-14125359
 ] 

Akira AJISAKA commented on HDFS-6980:
-

I'm not sure the patch will fix the test failure in Jenkins, but it looks good 
to me. We should fix resource leaks.

 TestWebHdfsFileSystemContract fails in trunk
 

 Key: HDFS-6980
 URL: https://issues.apache.org/jira/browse/HDFS-6980
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Akira AJISAKA
Assignee: Tsuyoshi OZAWA
 Attachments: HDFS-6980.1-2.patch, HDFS-6980.1.patch


 Many tests in TestWebHdfsFileSystemContract fail with a "too many open 
 files" error.





[jira] [Commented] (HDFS-6194) Create new tests for ByteRangeInputStream

2014-09-11 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14130288#comment-14130288
 ] 

Akira AJISAKA commented on HDFS-6194:
-

Thanks [~j...@cloudera.com] for the report. I think the patch should be 
committed to trunk only.
[~wheat9], would you please revert the commit in branch-2 and modify the fix 
version?

 Create new tests for ByteRangeInputStream
 -

 Key: HDFS-6194
 URL: https://issues.apache.org/jira/browse/HDFS-6194
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Akira AJISAKA
 Fix For: 2.5.0

 Attachments: HDFS-6194.2.patch, HDFS-6194.3.patch, HDFS-6194.4.patch, 
 HDFS-6194.5.patch, HDFS-6194.patch


 HDFS-5570 removed the old tests for {{ByteRangeInputStream}} because they 
 were tightly coupled with hftp / hsftp. New tests need to be written 
 because the same class is also used by {{WebHdfsFileSystem}}.





[jira] [Updated] (HDFS-6779) hdfs version subcommand is missing

2014-09-11 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6779:

Labels: newbie scripts  (was: scripts)

 hdfs version subcommand is missing
 --

 Key: HDFS-6779
 URL: https://issues.apache.org/jira/browse/HDFS-6779
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: scripts
Reporter: Allen Wittenauer
  Labels: newbie, scripts

 'hdfs version' is missing





[jira] [Commented] (HDFS-7001) Tests in TestTracing depends on the order of execution

2014-09-11 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14131037#comment-14131037
 ] 

Akira AJISAKA commented on HDFS-7001:
-

Hi [~iwasakims],
{code}
+Assert.assertEquals(s, spanReceiverHost);
{code}
would you correct the order of the arguments to match 
{{assertEquals(expected, actual)}}, to improve the error message? The rest 
looks good to me.
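The reason the order matters: JUnit builds the failure message from the first argument as the expected value, so swapping the arguments blames the wrong value. A self-contained illustration (JUnit's assertEquals is stubbed here so the snippet runs standalone):

```java
// Minimal stand-in for JUnit's assertEquals(expected, actual): the failure
// message is formatted as "expected:<first arg> but was:<second arg>".
public class AssertOrderDemo {
    static void assertEquals(Object expected, Object actual) {
        if (!expected.equals(actual)) {
            throw new AssertionError(
                "expected:<" + expected + "> but was:<" + actual + ">");
        }
    }

    public static void main(String[] args) {
        String expectedHost = "span-receiver-host"; // what the test wants
        String actualHost = "wrong-host";           // what the code produced
        try {
            // Swapped order: the message now claims "wrong-host" was expected.
            assertEquals(actualHost, expectedHost);
        } catch (AssertionError e) {
            System.out.println(e.getMessage());
            // prints: expected:<wrong-host> but was:<span-receiver-host>
        }
    }
}
```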

 Tests in TestTracing depends on the order of execution
 --

 Key: HDFS-7001
 URL: https://issues.apache.org/jira/browse/HDFS-7001
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Attachments: HDFS-7001-0.patch


 o.a.h.tracing.TestTracing#testSpanReceiverHost is assumed to be executed 
 first. It should be done in BeforeClass.





[jira] [Created] (HDFS-6048) DFSClient fails if native library doesn't exist

2014-03-03 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HDFS-6048:
---

 Summary: DFSClient fails if native library doesn't exist
 Key: HDFS-6048
 URL: https://issues.apache.org/jira/browse/HDFS-6048
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.4.0
Reporter: Akira AJISAKA
Priority: Blocker


When I executed FSShell commands (such as hdfs dfs -ls, -mkdir, -cat) in trunk, 
{{UnsupportedOperationException}} occurred in 
{{o.a.h.net.unix.DomainSocketWatcher}} and the commands failed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6048) DFSClient fails if native library doesn't exist

2014-03-03 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13918896#comment-13918896
 ] 

Akira AJISAKA commented on HDFS-6048:
-

The stacktrace is as follows: 
{code}
[root@trunk ~]# hdfs dfs -ls
14/03/04 10:28:59 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
-ls: Fatal internal error
java.lang.UnsupportedOperationException: libhadoop cannot be loaded.
at 
org.apache.hadoop.net.unix.DomainSocketWatcher.<init>(DomainSocketWatcher.java:229)
at 
org.apache.hadoop.hdfs.client.DfsClientShmManager.<init>(DfsClientShmManager.java:404)
at 
org.apache.hadoop.hdfs.client.ShortCircuitCache.<init>(ShortCircuitCache.java:380)
at org.apache.hadoop.hdfs.ClientContext.<init>(ClientContext.java:96)
at org.apache.hadoop.hdfs.ClientContext.get(ClientContext.java:145)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:587)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:507)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:144)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2396)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2430)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2412)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:167)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:352)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:228)
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:211)
at 
org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:194)
at org.apache.hadoop.fs.shell.Command.run(Command.java:155)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:255)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:308)
{code}

 DFSClient fails if native library doesn't exist
 ---

 Key: HDFS-6048
 URL: https://issues.apache.org/jira/browse/HDFS-6048
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.4.0
Reporter: Akira AJISAKA
Priority: Blocker

 When I executed FSShell commands (such as hdfs dfs -ls, -mkdir, -cat) in 
 trunk, {{UnsupportedOperationException}} occurred in 
 {{o.a.h.net.unix.DomainSocketWatcher}} and the commands failed.





[jira] [Commented] (HDFS-6048) DFSClient fails if native library doesn't exist

2014-03-03 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13918901#comment-13918901
 ] 

Akira AJISAKA commented on HDFS-6048:
-

Thank you for notifying me, [~atm].

 DFSClient fails if native library doesn't exist
 ---

 Key: HDFS-6048
 URL: https://issues.apache.org/jira/browse/HDFS-6048
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.4.0
Reporter: Akira AJISAKA
Priority: Blocker

 When I executed FSShell commands (such as hdfs dfs -ls, -mkdir, -cat) in 
 trunk, {{UnsupportedOperationException}} occurred in 
 {{o.a.h.net.unix.DomainSocketWatcher}} and the commands failed.





[jira] [Created] (HDFS-6059) TestBlockReaderLocal fails if native library is not available

2014-03-05 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HDFS-6059:
---

 Summary: TestBlockReaderLocal fails if native library is not 
available
 Key: HDFS-6059
 URL: https://issues.apache.org/jira/browse/HDFS-6059
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Akira AJISAKA


I ran TestBlockReaderLocal locally and it failed.
{code}
---
 T E S T S
---
Running org.apache.hadoop.hdfs.TestBlockReaderLocal
Tests run: 37, Failures: 0, Errors: 35, Skipped: 2, Time elapsed: 77.261 sec 
<<< FAILURE! - in org.apache.hadoop.hdfs.TestBlockReaderLocal
testBlockReaderLocalImmediateClose(org.apache.hadoop.hdfs.TestBlockReaderLocal) 
 Time elapsed: 51.562 sec  <<< ERROR!
java.lang.UnsupportedOperationException: NativeIO is not available.
at 
org.apache.hadoop.hdfs.ShortCircuitShm.<init>(ShortCircuitShm.java:461)
at 
org.apache.hadoop.hdfs.TestBlockReaderLocal.runBlockReaderLocalTest(TestBlockReaderLocal.java:181)
at 
org.apache.hadoop.hdfs.TestBlockReaderLocal.testBlockReaderLocalImmediateClose(TestBlockReaderLocal.java:218)
{code}





[jira] [Commented] (HDFS-6059) TestBlockReaderLocal fails if native library is not available

2014-03-05 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13921171#comment-13921171
 ] 

Akira AJISAKA commented on HDFS-6059:
-

I'll create a patch shortly to skip the test if the native library is not available.
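The usual JUnit 4 pattern for this is an assumption check in setup, e.g. {{Assume.assumeTrue(NativeCodeLoader.isNativeCodeLoaded())}}, which marks the test as skipped rather than failed. A self-contained sketch of that skip behavior (JUnit's Assume and the native-library check are stubbed here):

```java
// Sketch of assumption-based test skipping. In the real patch this would be
// Assume.assumeTrue(NativeCodeLoader.isNativeCodeLoaded()) in a @Before method;
// here both JUnit's Assume and the native check are stubbed so the snippet runs.
public class SkipWithoutNativeDemo {
    static class AssumptionViolatedException extends RuntimeException {
        AssumptionViolatedException(String m) { super(m); }
    }

    static void assumeTrue(boolean condition) {
        if (!condition) {
            throw new AssumptionViolatedException("assumption failed: test skipped");
        }
    }

    static boolean nativeCodeLoaded() {
        // Stand-in for NativeCodeLoader.isNativeCodeLoaded(); pretend it's absent.
        return false;
    }

    public static void main(String[] args) {
        try {
            assumeTrue(nativeCodeLoaded());
            System.out.println("running native-only test");
        } catch (AssumptionViolatedException e) {
            // JUnit would report the test as skipped, not failed.
            System.out.println("skipped: " + e.getMessage());
        }
    }
}
```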

 TestBlockReaderLocal fails if native library is not available
 -

 Key: HDFS-6059
 URL: https://issues.apache.org/jira/browse/HDFS-6059
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA

 I ran TestBlockReaderLocal locally and it failed.
 {code}
 ---
  T E S T S
 ---
 Running org.apache.hadoop.hdfs.TestBlockReaderLocal
 Tests run: 37, Failures: 0, Errors: 35, Skipped: 2, Time elapsed: 77.261 sec 
 <<< FAILURE! - in org.apache.hadoop.hdfs.TestBlockReaderLocal
 testBlockReaderLocalImmediateClose(org.apache.hadoop.hdfs.TestBlockReaderLocal)
   Time elapsed: 51.562 sec <<< ERROR!
 java.lang.UnsupportedOperationException: NativeIO is not available.
 at 
 org.apache.hadoop.hdfs.ShortCircuitShm.<init>(ShortCircuitShm.java:461)
 at 
 org.apache.hadoop.hdfs.TestBlockReaderLocal.runBlockReaderLocalTest(TestBlockReaderLocal.java:181)
 at 
 org.apache.hadoop.hdfs.TestBlockReaderLocal.testBlockReaderLocalImmediateClose(TestBlockReaderLocal.java:218)
 {code}





[jira] [Assigned] (HDFS-6059) TestBlockReaderLocal fails if native library is not available

2014-03-05 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA reassigned HDFS-6059:
---

Assignee: Akira AJISAKA

 TestBlockReaderLocal fails if native library is not available
 -

 Key: HDFS-6059
 URL: https://issues.apache.org/jira/browse/HDFS-6059
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA

 I ran TestBlockReaderLocal locally and it failed.
 {code}
 ---
  T E S T S
 ---
 Running org.apache.hadoop.hdfs.TestBlockReaderLocal
 Tests run: 37, Failures: 0, Errors: 35, Skipped: 2, Time elapsed: 77.261 sec 
 <<< FAILURE! - in org.apache.hadoop.hdfs.TestBlockReaderLocal
 testBlockReaderLocalImmediateClose(org.apache.hadoop.hdfs.TestBlockReaderLocal)
   Time elapsed: 51.562 sec <<< ERROR!
 java.lang.UnsupportedOperationException: NativeIO is not available.
 at 
 org.apache.hadoop.hdfs.ShortCircuitShm.<init>(ShortCircuitShm.java:461)
 at 
 org.apache.hadoop.hdfs.TestBlockReaderLocal.runBlockReaderLocalTest(TestBlockReaderLocal.java:181)
 at 
 org.apache.hadoop.hdfs.TestBlockReaderLocal.testBlockReaderLocalImmediateClose(TestBlockReaderLocal.java:218)
 {code}





[jira] [Updated] (HDFS-6059) TestBlockReaderLocal fails if native library is not available

2014-03-05 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6059:


Attachment: HDFS-6059.patch

Attaching a patch.

 TestBlockReaderLocal fails if native library is not available
 -

 Key: HDFS-6059
 URL: https://issues.apache.org/jira/browse/HDFS-6059
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HDFS-6059.patch


 I ran TestBlockReaderLocal locally and it failed.
 {code}
 ---
  T E S T S
 ---
 Running org.apache.hadoop.hdfs.TestBlockReaderLocal
 Tests run: 37, Failures: 0, Errors: 35, Skipped: 2, Time elapsed: 77.261 sec 
 <<< FAILURE! - in org.apache.hadoop.hdfs.TestBlockReaderLocal
 testBlockReaderLocalImmediateClose(org.apache.hadoop.hdfs.TestBlockReaderLocal)
   Time elapsed: 51.562 sec <<< ERROR!
 java.lang.UnsupportedOperationException: NativeIO is not available.
 at 
 org.apache.hadoop.hdfs.ShortCircuitShm.<init>(ShortCircuitShm.java:461)
 at 
 org.apache.hadoop.hdfs.TestBlockReaderLocal.runBlockReaderLocalTest(TestBlockReaderLocal.java:181)
 at 
 org.apache.hadoop.hdfs.TestBlockReaderLocal.testBlockReaderLocalImmediateClose(TestBlockReaderLocal.java:218)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6059) TestBlockReaderLocal fails if native library is not available

2014-03-05 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6059:


Labels: newbie  (was: )
Status: Patch Available  (was: Open)

 TestBlockReaderLocal fails if native library is not available
 -------------------------------------------------------------

 Key: HDFS-6059
 URL: https://issues.apache.org/jira/browse/HDFS-6059
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HDFS-6059.patch


 I ran TestBlockReaderLocal locally and it failed.
 {code}
 -------------------------------------------------------
  T E S T S
 -------------------------------------------------------
 Running org.apache.hadoop.hdfs.TestBlockReaderLocal
 Tests run: 37, Failures: 0, Errors: 35, Skipped: 2, Time elapsed: 77.261 sec <<< FAILURE! - in org.apache.hadoop.hdfs.TestBlockReaderLocal
 testBlockReaderLocalImmediateClose(org.apache.hadoop.hdfs.TestBlockReaderLocal)  Time elapsed: 51.562 sec <<< ERROR!
 java.lang.UnsupportedOperationException: NativeIO is not available.
 	at org.apache.hadoop.hdfs.ShortCircuitShm.<init>(ShortCircuitShm.java:461)
 	at org.apache.hadoop.hdfs.TestBlockReaderLocal.runBlockReaderLocalTest(TestBlockReaderLocal.java:181)
 	at org.apache.hadoop.hdfs.TestBlockReaderLocal.testBlockReaderLocalImmediateClose(TestBlockReaderLocal.java:218)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HDFS-6058) Fix TestHDFSCLI failures after HADOOP-8691 change

2014-03-05 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA reassigned HDFS-6058:
---

Assignee: Akira AJISAKA

 Fix TestHDFSCLI failures after HADOOP-8691 change
 -------------------------------------------------

 Key: HDFS-6058
 URL: https://issues.apache.org/jira/browse/HDFS-6058
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Vinayakumar B
Assignee: Akira AJISAKA

 HADOOP-8691 changed the ls command output.
 TestHDFSCLI needs to be updated after this change.
 Latest precommit builds are failing because of this.
 https://builds.apache.org/job/PreCommit-HDFS-Build/6305//testReport/



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6058) Fix TestHDFSCLI failures after HADOOP-8691 change

2014-03-05 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6058:


Attachment: HDFS-6058.patch

Attaching a patch to remove the comparators which assert "Found 1 items" in the output.
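TestHDFSCLI drives shell commands and checks their output against a list of expected-pattern comparators, so once HADOOP-8691 changed what `ls` prints, any comparator still expecting the old "Found 1 items" header fails. A small, dependency-free sketch of that comparator idea (class and method names are illustrative, not the actual test framework, and the sample output line is made up):

```java
import java.util.List;
import java.util.regex.Pattern;

public class ComparatorSketch {

    // Each expected entry is a regex that must match somewhere in the output,
    // mimicking how a CLI test compares command output against comparators.
    static boolean allMatch(String output, List<String> expectedRegexes) {
        for (String regex : expectedRegexes) {
            if (!Pattern.compile(regex, Pattern.MULTILINE).matcher(output).find()) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Hypothetical ls output without the "Found 1 items" header line.
        String newOutput = "-rw-r--r--   1 user group  0 2014-03-05 00:00 /file";

        // A stale comparator expecting the old header no longer matches...
        System.out.println(allMatch(newOutput, List.of("Found 1 items")));
        // ...while a comparator on the listed path still does.
        System.out.println(allMatch(newOutput, List.of("/file")));
    }
}
```

Removing the stale header comparators, as the patch does, leaves the remaining pattern checks intact.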

 Fix TestHDFSCLI failures after HADOOP-8691 change
 -------------------------------------------------

 Key: HDFS-6058
 URL: https://issues.apache.org/jira/browse/HDFS-6058
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Vinayakumar B
Assignee: Akira AJISAKA
 Attachments: HDFS-6058.patch


 HADOOP-8691 changed the ls command output.
 TestHDFSCLI needs to be updated after this change.
 Latest precommit builds are failing because of this.
 https://builds.apache.org/job/PreCommit-HDFS-Build/6305//testReport/



--
This message was sent by Atlassian JIRA
(v6.2#6252)

