[jira] [Updated] (HDFS-2740) Enable the trash feature by default

2012-04-19 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2740:
--

Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

The decision here is to not change this default behavior.

However, we can still improve the docs, for which the JIRA is now available at 
HDFS-3302

 Enable the trash feature by default
 ---

 Key: HDFS-2740
 URL: https://issues.apache.org/jira/browse/HDFS-2740
 Project: Hadoop HDFS
  Issue Type: Wish
  Components: hdfs client, name-node
Affects Versions: 0.23.0
Reporter: Harsh J
  Labels: newbie
 Attachments: hdfs-2740.patch, hdfs-2740.patch


 Currently trash is disabled out of the box. I do not think it'd be much of a 
 surprise to anyone (but surely a relief when *hit happens) to have trash 
 enabled by default, with the usually recommended period of 1 day.
 Thoughts?
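
For reference: trash is a client-side feature governed by the fs.trash.interval 
property (in minutes; 0, the default, disables it). A minimal sketch of enabling 
it programmatically with the 1-day period suggested above (illustrative only, 
not part of any patch on this issue):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.Trash;

public class TrashExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Minutes to retain trashed files; 1440 = the 1-day period above.
    conf.setLong("fs.trash.interval", 1440);

    FileSystem fs = FileSystem.get(conf);
    // Moves the path into the user's .Trash directory instead of
    // deleting it; returns false if trash is disabled.
    boolean moved = new Trash(fs, conf).moveToTrash(new Path("/tmp/some-file"));
    System.out.println("Moved to trash: " + moved);
  }
}
{code}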





[jira] [Updated] (HDFS-2413) Add public APIs for safemode

2012-03-27 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2413:
--

Target Version/s:   (was: 0.23.2, 0.24.0)

 Add public APIs for safemode
 

 Key: HDFS-2413
 URL: https://issues.apache.org/jira/browse/HDFS-2413
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Harsh J
 Fix For: 0.24.0, 0.23.3

 Attachments: HDFS-2413.patch, HDFS-2413.patch, HDFS-2413.patch, 
 HDFS-2413.patch


 Currently the APIs for safe-mode are part of DistributedFileSystem, which is 
 supposed to be a private interface. However, dependent software often wants 
 to wait until the NN is out of safemode. Though it could poll by trying to 
 create a file and catching SafeModeException, we should consider making some 
 of these APIs public.
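
For context, the sort of polling loop dependent software ends up writing against 
the private API today; a sketch, assuming the 0.23-era 
DistributedFileSystem#setSafeMode and HdfsConstants.SafeModeAction names:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction;

public class WaitForSafeModeExit {
  public static void main(String[] args) throws Exception {
    // Assumes fs.defaultFS points at an HDFS cluster, so the cast holds.
    DistributedFileSystem dfs =
        (DistributedFileSystem) FileSystem.get(new Configuration());
    // SAFEMODE_GET only queries the state; it does not enter/leave safemode.
    while (dfs.setSafeMode(SafeModeAction.SAFEMODE_GET)) {
      Thread.sleep(1000); // poll once a second until the NN leaves safemode
    }
  }
}
{code}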





[jira] [Updated] (HDFS-2413) Add public APIs for safemode

2012-03-26 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2413:
--

Attachment: HDFS-2413.patch

Silly me (I applied the changes onto the wrong class). Here's a new patch that 
cleans up the additions and applies Nicholas' comments as well, properly this time.

 Add public APIs for safemode
 

 Key: HDFS-2413
 URL: https://issues.apache.org/jira/browse/HDFS-2413
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Harsh J
 Attachments: HDFS-2413.patch, HDFS-2413.patch, HDFS-2413.patch, 
 HDFS-2413.patch


 Currently the APIs for safe-mode are part of DistributedFileSystem, which is 
 supposed to be a private interface. However, dependent software often wants 
 to wait until the NN is out of safemode. Though it could poll by trying to 
 create a file and catching SafeModeException, we should consider making some 
 of these APIs public.





[jira] [Updated] (HDFS-2413) Add public APIs for safemode

2012-03-25 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2413:
--

Attachment: HDFS-2413.patch

Sounds good Nicholas.

I updated the patch to add that in.

 Add public APIs for safemode
 

 Key: HDFS-2413
 URL: https://issues.apache.org/jira/browse/HDFS-2413
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Harsh J
 Attachments: HDFS-2413.patch, HDFS-2413.patch, HDFS-2413.patch


 Currently the APIs for safe-mode are part of DistributedFileSystem, which is 
 supposed to be a private interface. However, dependent software often wants 
 to wait until the NN is out of safemode. Though it could poll by trying to 
 create a file and catching SafeModeException, we should consider making some 
 of these APIs public.





[jira] [Updated] (HDFS-3074) HDFS ignores group of a user when creating a file or a directory, and instead inherits

2012-03-10 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-3074:
--

Description: 
When creating a file or making a directory on HDFS, the namesystem calls pass 
{{null}} for the group name, thereby having the parent directory permissions 
inherited onto the file.

This is not how the Linux FS works at least.

For instance, if I have today a user 'foo' with default group 'foo', and I have 
my HDFS home dir created as foo:foo by the HDFS admin, all files I create 
under my directory too will have foo as group unless I chgrp them myself. 
This makes sense.

Now, if my admin were to change my local account's default/primary group to 
'bar' (but did not change it on my home dir on HDFS), and I were to continue 
writing files to my home directory or any subdirectory that has 'foo' as its 
group, all files would still get created with group 'foo' - as if the NN has 
not realized the primary group of the mapped shell account has already changed.

On Linux this is the opposite. My login session's current primary group is what 
determines the default group on my created files and directories, not the 
parent dir's owner.

If the create and mkdirs calls passed the UGI's group info 
(UserGroupInformation.getCurrentUser().getGroupNames()[0] should give the 
primary group?) along in their calls instead of a null in the PermissionStatus 
object, perhaps this could be avoided.

Or should we leave this as-is, and instead state that if admins wish their 
users' default groups to change, they'd have to chgrp all the directories 
themselves?

  was:
When creating a file or making a directory on HDFS, the namesystem calls pass 
{{null}} for the group name, thereby having the parent directory permissions 
inherited onto the file.

This is not how the Linux FS works at least.

For instance, if I have today a user 'foo' with default group 'foo', and I have 
my HDFS home dir created as foo:foo by the HDFS admin, all files I create 
under my directory too will have foo as group unless I chgrp them myself. 
This makes sense.

Now, if my admin were to change my local account's default/primary group to 
'bar' (but did not change it), and I were to continue writing files to my home 
directory (or any subdirectory that has 'foo' as group), all files still get 
created with group 'foo' - as if the NN has not realized the primary group has 
changed.

On Linux this is the opposite. My login session's current primary group is what 
determines the default group on my created files and directories, not the 
parent dir's owner.

If the create and mkdirs calls passed the UGI's group info 
(UserGroupInformation.getCurrentUser().getGroupNames()[0] should give the 
primary group?) along in their calls instead of a null in the PermissionStatus 
object, perhaps this could be avoided.

Or should we leave this as-is, and instead state that if admins wish their 
users' default groups to change, they'd have to chgrp all the directories 
themselves?


 HDFS ignores group of a user when creating a file or a directory, and instead 
 inherits
 --

 Key: HDFS-3074
 URL: https://issues.apache.org/jira/browse/HDFS-3074
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.23.1
Reporter: Harsh J
Priority: Minor

 When creating a file or making a directory on HDFS, the namesystem calls pass 
 {{null}} for the group name, thereby having the parent directory permissions 
 inherited onto the file.
 This is not how the Linux FS works at least.
 For instance, if I have today a user 'foo' with default group 'foo', and I 
 have my HDFS home dir created as foo:foo by the HDFS admin, all files I 
 create under my directory too will have foo as group unless I chgrp them 
 myself. This makes sense.
 Now, if my admin were to change my local account's default/primary group to 
 'bar' (but did not change it on my home dir on HDFS), and I were to continue 
 writing files to my home directory or any subdirectory that has 'foo' as its 
 group, all files would still get created with group 'foo' - as if the NN has 
 not realized the primary group of the mapped shell account has already 
 changed.
 On Linux this is the opposite. My login session's current primary group is 
 what determines the default group on my created files and directories, not 
 the parent dir's owner.
 If the create and mkdirs calls passed the UGI's group info 
 (UserGroupInformation.getCurrentUser().getGroupNames()[0] should give the 
 primary group?) along in their calls instead of a null in the PermissionStatus 
 object, perhaps this could be avoided.
 Or should we leave this as-is, and instead state that if admins wish their 
 users' default groups to change, they'd have to chgrp all the directories 
 themselves?
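
To illustrate the lookup proposed in the description (a hedged sketch of the 
UGI call, not the namesystem change itself):

{code}
import org.apache.hadoop.security.UserGroupInformation;

public class PrimaryGroup {
  public static void main(String[] args) throws Exception {
    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
    // getGroupNames() lists the user's groups; by convention the first
    // entry is the primary group, which is what create/mkdirs would pass
    // instead of null.
    String[] groups = ugi.getGroupNames();
    String primary = groups.length > 0 ? groups[0] : null;
    System.out.println(ugi.getShortUserName() + " : " + primary);
  }
}
{code}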


[jira] [Updated] (HDFS-2413) Add public APIs for safemode

2012-02-11 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2413:
--

Target Version/s: 0.24.0, 0.23.2
   Fix Version/s: (was: 0.24.0)

 Add public APIs for safemode
 

 Key: HDFS-2413
 URL: https://issues.apache.org/jira/browse/HDFS-2413
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Harsh J
 Attachments: HDFS-2413.patch


 Currently the APIs for safe-mode are part of DistributedFileSystem, which is 
 supposed to be a private interface. However, dependent software often wants 
 to wait until the NN is out of safemode. Though it could poll by trying to 
 create a file and catching SafeModeException, we should consider making some 
 of these APIs public.





[jira] [Updated] (HDFS-2413) Add public APIs for safemode

2012-02-11 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2413:
--

Attachment: HDFS-2413.patch

Patch addresses Eli's comments.

 Add public APIs for safemode
 

 Key: HDFS-2413
 URL: https://issues.apache.org/jira/browse/HDFS-2413
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Harsh J
 Attachments: HDFS-2413.patch, HDFS-2413.patch


 Currently the APIs for safe-mode are part of DistributedFileSystem, which is 
 supposed to be a private interface. However, dependent software often wants 
 to wait until the NN is out of safemode. Though it could poll by trying to 
 create a file and catching SafeModeException, we should consider making some 
 of these APIs public.





[jira] [Updated] (HDFS-2869) Error in Webhdfs documentation for mkdir

2012-02-11 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2869:
--

  Resolution: Fixed
   Fix Version/s: 0.23.2
  1.1.0
  0.24.0
Target Version/s:   (was: 1.1.0, 0.23.1)
  Status: Resolved  (was: Patch Available)

This was a trivial fix to bring the mkdir URL in sync with the others. Went 
ahead and committed the docfix into branch-1, branch-0.23, and trunk.

 Error in Webhdfs documentation for mkdir
 

 Key: HDFS-2869
 URL: https://issues.apache.org/jira/browse/HDFS-2869
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 0.23.1, 1.0.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
 Fix For: 0.24.0, 1.1.0, 0.23.2

 Attachments: HDFS-2869-branch-1.patch, HDFS-2869.patch


 Reported over the lists by user Stuti Awasthi:
 {quote}
 I have tried the webhdfs functionality of Hadoop-1.0.0 and it is working fine.
 Just a small change is required in the documentation.
 The Make a Directory declaration in the documentation,
 curl -i -X PUT http://HOST:PORT/PATH?op=MKDIRS[permission=OCTAL]
 gives the following error:
 HTTP/1.1 405 HTTP method PUT is not supported by this URL
 Content-Length: 0
 Server: Jetty(6.1.26)
 Correction required, as this works for me:
 curl -i -X PUT http://host:port/*webhdfs/v1/*PATH?op=MKDIRS
 {quote}





[jira] [Updated] (HDFS-2931) Switch the DataNode's BlockVolumeChoosingPolicy to be a private-audience interface

2012-02-09 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2931:
--

Attachment: HDFS-2391.patch

 Switch the DataNode's BlockVolumeChoosingPolicy to be a private-audience 
 interface
 --

 Key: HDFS-2931
 URL: https://issues.apache.org/jira/browse/HDFS-2931
 Project: Hadoop HDFS
  Issue Type: Task
  Components: data-node
Affects Versions: 0.23.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
  Labels: api
 Attachments: HDFS-2391.patch


 From Nicholas, at the tail of HDFS-1120:
 {quote}
 However, if we annotate it as public, all the classes associated with it 
 should also be annotated as public. Also, whenever we change the interface or 
 any of the associated classes, it is an incompatible change.
 In our case, BlockVolumeChoosingPolicy uses FSVolumeInterface, which is a 
 part of FSDatasetInterface. In FSDatasetInterface, there are many classes 
 that should not be exposed. One way to solve it is to make FSVolumeInterface 
 independent of FSDatasetInterface. However, FSVolumeInterface is not yet a 
 well-designed interface for the public.
 For these reasons, it is justified to annotate it as private, the same as 
 BlockPlacementPolicy.
 {quote}
 We should switch BlockVolumeChoosingPolicy to a private audience.
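
The mechanical change here is just the audience annotation; a sketch, assuming 
the usual org.apache.hadoop.classification annotations (the stability level is 
my assumption):

{code}
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

// Marks the interface as internal to Hadoop: downstream implementors get
// no compatibility guarantees, the same treatment as BlockPlacementPolicy.
@InterfaceAudience.Private
@InterfaceStability.Evolving
public interface BlockVolumeChoosingPolicy {
  // ... existing methods unchanged ...
}
{code}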





[jira] [Updated] (HDFS-2931) Switch the DataNode's BlockVolumeChoosingPolicy to be a private-audience interface

2012-02-09 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2931:
--

Target Version/s: 0.24.0, 0.23.2  (was: 0.23.2, 0.24.0)
  Status: Patch Available  (was: Open)

 Switch the DataNode's BlockVolumeChoosingPolicy to be a private-audience 
 interface
 --

 Key: HDFS-2931
 URL: https://issues.apache.org/jira/browse/HDFS-2931
 Project: Hadoop HDFS
  Issue Type: Task
  Components: data-node
Affects Versions: 0.23.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
  Labels: api
 Attachments: HDFS-2391.patch


 From Nicholas, at the tail of HDFS-1120:
 {quote}
 However, if we annotate it as public, all the classes associated with it 
 should also be annotated as public. Also, whenever we change the interface or 
 any of the associated classes, it is an incompatible change.
 In our case, BlockVolumeChoosingPolicy uses FSVolumeInterface, which is a 
 part of FSDatasetInterface. In FSDatasetInterface, there are many classes 
 that should not be exposed. One way to solve it is to make FSVolumeInterface 
 independent of FSDatasetInterface. However, FSVolumeInterface is not yet a 
 well-designed interface for the public.
 For these reasons, it is justified to annotate it as private, the same as 
 BlockPlacementPolicy.
 {quote}
 We should switch BlockVolumeChoosingPolicy to a private audience.





[jira] [Updated] (HDFS-2931) Switch the DataNode's BlockVolumeChoosingPolicy to be a private-audience interface

2012-02-09 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2931:
--

Attachment: (was: HDFS-2391.patch)

 Switch the DataNode's BlockVolumeChoosingPolicy to be a private-audience 
 interface
 --

 Key: HDFS-2931
 URL: https://issues.apache.org/jira/browse/HDFS-2931
 Project: Hadoop HDFS
  Issue Type: Task
  Components: data-node
Affects Versions: 0.23.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
  Labels: api
 Attachments: HDFS-2931.patch


 From Nicholas, at the tail of HDFS-1120:
 {quote}
 However, if we annotate it as public, all the classes associated with it 
 should also be annotated as public. Also, whenever we change the interface or 
 any of the associated classes, it is an incompatible change.
 In our case, BlockVolumeChoosingPolicy uses FSVolumeInterface, which is a 
 part of FSDatasetInterface. In FSDatasetInterface, there are many classes 
 that should not be exposed. One way to solve it is to make FSVolumeInterface 
 independent of FSDatasetInterface. However, FSVolumeInterface is not yet a 
 well-designed interface for the public.
 For these reasons, it is justified to annotate it as private, the same as 
 BlockPlacementPolicy.
 {quote}
 We should switch BlockVolumeChoosingPolicy to a private audience.





[jira] [Updated] (HDFS-2931) Switch the DataNode's BlockVolumeChoosingPolicy to be a private-audience interface

2012-02-09 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2931:
--

Attachment: HDFS-2931.patch

 Switch the DataNode's BlockVolumeChoosingPolicy to be a private-audience 
 interface
 --

 Key: HDFS-2931
 URL: https://issues.apache.org/jira/browse/HDFS-2931
 Project: Hadoop HDFS
  Issue Type: Task
  Components: data-node
Affects Versions: 0.23.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
  Labels: api
 Attachments: HDFS-2931.patch


 From Nicholas, at the tail of HDFS-1120:
 {quote}
 However, if we annotate it as public, all the classes associated with it 
 should also be annotated as public. Also, whenever we change the interface or 
 any of the associated classes, it is an incompatible change.
 In our case, BlockVolumeChoosingPolicy uses FSVolumeInterface, which is a 
 part of FSDatasetInterface. In FSDatasetInterface, there are many classes 
 that should not be exposed. One way to solve it is to make FSVolumeInterface 
 independent of FSDatasetInterface. However, FSVolumeInterface is not yet a 
 well-designed interface for the public.
 For these reasons, it is justified to annotate it as private, the same as 
 BlockPlacementPolicy.
 {quote}
 We should switch BlockVolumeChoosingPolicy to a private audience.





[jira] [Updated] (HDFS-2784) Update hftp and hdfs for host-based token support

2012-02-06 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2784:
--

 Target Version/s:   (was: 0.23.1, 0.24.0)
Affects Version/s: (was: 0.24.0)
Fix Version/s: 0.23.1

 Update hftp and hdfs for host-based token support
 -

 Key: HDFS-2784
 URL: https://issues.apache.org/jira/browse/HDFS-2784
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs client, name-node, security
Affects Versions: 0.23.1
Reporter: Daryn Sharp
Assignee: Kihwal Lee
 Fix For: 0.23.1

 Attachments: add_new_file.sh, hdfs-2784.patch.txt, 
 hdfs-2784.patch.txt, hdfs-2784.patch.txt, hdfs-2784.patch.txt, 
 hdfs-2784.patch.txt


 Need to port 205 token changes and update any new related code dealing with 
 tokens in these filesystems.





[jira] [Updated] (HDFS-2868) Add number of active transfer threads to the DataNode status

2012-02-05 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2868:
--

Attachment: HDFS-2868.patch

Patch that addresses all of Eli's comments. Thanks for the quick review Eli!

 Add number of active transfer threads to the DataNode status
 

 Key: HDFS-2868
 URL: https://issues.apache.org/jira/browse/HDFS-2868
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.23.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
 Attachments: HDFS-2868.patch, HDFS-2868.patch


 Presently, we do not provide any stats from the DN that specifically 
 indicate the total number of active transfer threads (xceivers). Having such 
 a metric can be more helpful than the plain num-ops(type) form of metrics, 
 which already exist.





[jira] [Updated] (HDFS-2869) Error in Webhdfs documentation for mkdir

2012-02-04 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2869:
--

Attachment: HDFS-2869-branch-1.patch
HDFS-2869.patch

 Error in Webhdfs documentation for mkdir
 

 Key: HDFS-2869
 URL: https://issues.apache.org/jira/browse/HDFS-2869
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 0.23.1, 1.0.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
 Attachments: HDFS-2869-branch-1.patch, HDFS-2869.patch


 Reported over the lists by user Stuti Awasthi:
 {quote}
 I have tried the webhdfs functionality of Hadoop-1.0.0 and it is working fine.
 Just a small change is required in the documentation.
 The Make a Directory declaration in the documentation,
 curl -i -X PUT http://HOST:PORT/PATH?op=MKDIRS[permission=OCTAL]
 gives the following error:
 HTTP/1.1 405 HTTP method PUT is not supported by this URL
 Content-Length: 0
 Server: Jetty(6.1.26)
 Correction required, as this works for me:
 curl -i -X PUT http://host:port/*webhdfs/v1/*PATH?op=MKDIRS
 {quote}





[jira] [Updated] (HDFS-2869) Error in Webhdfs documentation for mkdir

2012-02-04 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2869:
--

Target Version/s: 0.23.1, 1.1.0  (was: 1.1.0, 0.23.1)
  Status: Patch Available  (was: Open)

 Error in Webhdfs documentation for mkdir
 

 Key: HDFS-2869
 URL: https://issues.apache.org/jira/browse/HDFS-2869
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 1.0.0, 0.23.1
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
 Attachments: HDFS-2869-branch-1.patch, HDFS-2869.patch


 Reported over the lists by user Stuti Awasthi:
 {quote}
 I have tried the webhdfs functionality of Hadoop-1.0.0 and it is working fine.
 Just a small change is required in the documentation.
 The Make a Directory declaration in the documentation,
 curl -i -X PUT http://HOST:PORT/PATH?op=MKDIRS[permission=OCTAL]
 gives the following error:
 HTTP/1.1 405 HTTP method PUT is not supported by this URL
 Content-Length: 0
 Server: Jetty(6.1.26)
 Correction required, as this works for me:
 curl -i -X PUT http://host:port/*webhdfs/v1/*PATH?op=MKDIRS
 {quote}





[jira] [Updated] (HDFS-2868) Add number of active transfer threads to the DataNode status

2012-02-04 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2868:
--

 Description: Presently, we do not provide any stats from the DN that 
specifically indicate the total number of active transfer threads (xceivers). 
Having such a metric can be more helpful than the plain num-ops(type) form of 
metrics, which already exist.  (was: Presently, we do not provide any metrics 
from the DN that specifically indicate the total number of active transfer 
threads (xceivers). Having such a metric can be more helpful than the plain 
num-ops(type) form of metrics, which already exist.)
Target Version/s: 0.24.0, 0.23.1  (was: 0.23.1, 0.24.0)
 Summary: Add number of active transfer threads to the DataNode 
status  (was: Add number of active transfer threads to the DataNode metrics)

Tweaked the description and summary since we're adding an MXBean method, not a 
metric.
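
For illustration, a hedged sketch of what such an MXBean accessor can look like 
(the interface and method names below are assumptions, not the committed patch):

{code}
// A status bean exposes a plain getter that JMX reads on demand; unlike a
// metrics counter, it reports the instantaneous value at query time.
public interface DataNodeStatusMXBean {
  /** @return the number of currently active transfer (xceiver) threads */
  int getXceiverCount();
}
{code}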

 Add number of active transfer threads to the DataNode status
 

 Key: HDFS-2868
 URL: https://issues.apache.org/jira/browse/HDFS-2868
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.23.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor

 Presently, we do not provide any stats from the DN that specifically 
 indicate the total number of active transfer threads (xceivers). Having such 
 a metric can be more helpful than the plain num-ops(type) form of metrics, 
 which already exist.





[jira] [Updated] (HDFS-2868) Add number of active transfer threads to the DataNode status

2012-02-04 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2868:
--

Target Version/s: 0.24.0, 0.23.1  (was: 0.23.1, 0.24.0)
  Status: Patch Available  (was: Open)

 Add number of active transfer threads to the DataNode status
 

 Key: HDFS-2868
 URL: https://issues.apache.org/jira/browse/HDFS-2868
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.23.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
 Attachments: HDFS-2868.patch


 Presently, we do not provide any stats from the DN that specifically 
 indicate the total number of active transfer threads (xceivers). Having such 
 a metric can be more helpful than the plain num-ops(type) form of metrics, 
 which already exist.





[jira] [Updated] (HDFS-2316) [umbrella] WebHDFS: a complete FileSystem implementation for accessing HDFS over HTTP

2012-01-25 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2316:
--

Target Version/s: 1.0.0, 0.22.0, 0.23.1  (was: 1.0.0, 0.22.0, 0.23.0)

 [umbrella] WebHDFS: a complete FileSystem implementation for accessing HDFS 
 over HTTP
 -

 Key: HDFS-2316
 URL: https://issues.apache.org/jira/browse/HDFS-2316
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
  Labels: critical-0.22.0
 Fix For: 0.23.1, 1.0.0

 Attachments: WebHdfsAPI20111020.pdf, WebHdfsAPI2003.pdf, 
 WebHdfsAPI2011.pdf, test-webhdfs, test-webhdfs-0.20s


 We currently have hftp for accessing HDFS over HTTP.  However, hftp is a 
 read-only FileSystem and does not provide write access.
 In HDFS-2284, we propose WebHDFS, a complete FileSystem implementation for 
 accessing HDFS over HTTP.  This is the umbrella JIRA for the tasks.





[jira] [Updated] (HDFS-2316) [umbrella] WebHDFS: a complete FileSystem implementation for accessing HDFS over HTTP

2012-01-25 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2316:
--

Target Version/s: 1.0.0, 0.23.1, 0.22.1  (was: 0.23.1, 0.22.0, 1.0.0)

 [umbrella] WebHDFS: a complete FileSystem implementation for accessing HDFS 
 over HTTP
 -

 Key: HDFS-2316
 URL: https://issues.apache.org/jira/browse/HDFS-2316
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
  Labels: critical-0.22.0
 Fix For: 0.23.1, 1.0.0

 Attachments: WebHdfsAPI20111020.pdf, WebHdfsAPI2003.pdf, 
 WebHdfsAPI2011.pdf, test-webhdfs, test-webhdfs-0.20s


 We currently have hftp for accessing HDFS over HTTP.  However, hftp is a 
 read-only FileSystem and does not provide write access.
 In HDFS-2284, we propose WebHDFS, a complete FileSystem implementation for 
 accessing HDFS over HTTP.  This is the umbrella JIRA for the tasks.





[jira] [Updated] (HDFS-442) dfsthroughput in test.jar throws NPE

2012-01-23 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-442:
-

  Resolution: Fixed
   Fix Version/s: 0.23.1
Target Version/s:   (was: 0.23.1, 0.24.0)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks for the reviews Hitesh and Mahadev! :)

Committed to trunk and branch-0.23.

 dfsthroughput in test.jar throws NPE
 

 Key: HDFS-442
 URL: https://issues.apache.org/jira/browse/HDFS-442
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.20.1
Reporter: Ramya Sunil
Assignee: Harsh J
Priority: Minor
 Fix For: 0.23.1

 Attachments: HDFS-442.patch


 On running {{hadoop jar hadoop-test.jar dfsthroughput}} or {{hadoop 
 org.apache.hadoop.hdfs.BenchmarkThroughput}}, we get a NullPointerException. 
 Below is the stacktrace:
 {noformat}
 Exception in thread "main" java.lang.NullPointerException
 at java.util.Hashtable.put(Hashtable.java:394)
 at java.util.Properties.setProperty(Properties.java:143)
 at java.lang.System.setProperty(System.java:731)
 at 
 org.apache.hadoop.hdfs.BenchmarkThroughput.run(BenchmarkThroughput.java:198)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
 at 
 org.apache.hadoop.hdfs.BenchmarkThroughput.main(BenchmarkThroughput.java:229)
 {noformat}





[jira] [Updated] (HDFS-2075) Add Number of Reporting Nodes to namenode web UI

2012-01-22 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2075:
--

Target Version/s: 0.24.0, 0.23.1

 Add Number of Reporting Nodes to namenode web UI
 --

 Key: HDFS-2075
 URL: https://issues.apache.org/jira/browse/HDFS-2075
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node, tools
Affects Versions: 0.20.1, 0.20.2
Reporter: Xing Jin
Priority: Minor
  Labels: newbie
 Fix For: 0.20.3

 Attachments: HDFS-2075.patch.txt

   Original Estimate: 1h
  Remaining Estimate: 1h

 The namenode web UI misses some information when safemode is on (e.g., the 
 number of reporting nodes). This information will help us understand the 
 system status.





[jira] [Updated] (HDFS-2818) dfshealth.jsp missing space between role and node name

2012-01-22 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2818:
--

  Resolution: Fixed
   Fix Version/s: 0.23.1
Target Version/s: 0.24.0, 0.23.1  (was: 0.23.1, 0.24.0)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to branch-0.23 and trunk. Thanks for your contribution Devaraj!

 dfshealth.jsp missing space between role and node name
 --

 Key: HDFS-2818
 URL: https://issues.apache.org/jira/browse/HDFS-2818
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Priority: Trivial
  Labels: newbie
 Fix For: 0.23.1

 Attachments: HDFS-2818.patch


 There seems to be a missing space in the titles of our webpages. E.g.: 
 {{<title>Hadoop NameNodestyx01.sf.cloudera.com:8021</title>}}. It seems like 
 the JSP compiler is doing something to the space which is in the .jsp. 
 Probably a simple fix if you know something about JSP :)





[jira] [Updated] (HDFS-2263) Make DFSClient report bad blocks more quickly

2012-01-18 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2263:
--

Status: Open  (was: Patch Available)

Patch was invalid. Cancelling.

 Make DFSClient report bad blocks more quickly
 -

 Key: HDFS-2263
 URL: https://issues.apache.org/jira/browse/HDFS-2263
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client
Affects Versions: 0.20.2
Reporter: Aaron T. Myers
Assignee: Harsh J
 Attachments: HDFS-2263.patch


 In certain circumstances the DFSClient may detect a block as being bad 
 without reporting it promptly to the NN.
 If when reading a file a client finds an invalid checksum of a block, it 
 immediately reports that bad block to the NN. If when serving up a block a DN 
 finds a truncated block, it reports this to the client, but the client merely 
 adds that DN to the list of dead nodes and moves on to trying another DN, 
 without reporting this to the NN.





[jira] [Updated] (HDFS-69) Improve dfsadmin command line help

2012-01-11 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-69?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-69:


  Resolution: Fixed
   Fix Version/s: 0.23.1
Target Version/s: 0.24.0, 0.23.1  (was: 0.23.1, 0.24.0)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to branch-0.23 and trunk. Thanks Ravi and Jakob!

 Improve dfsadmin command line help 
 ---

 Key: HDFS-69
 URL: https://issues.apache.org/jira/browse/HDFS-69
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Ravi Phulari
Assignee: Harsh J
Priority: Minor
 Fix For: 0.23.1

 Attachments: HDFS-69.patch


 Enhance the dfsadmin command line help to inform that "A quota of one forces 
 a directory to remain empty".





[jira] [Updated] (HDFS-891) DataNode.instantiateDataNode calls System.exit(-1) if conf.get("dfs.network.script") != null

2012-01-07 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-891:
-

Attachment: HDFS-891.patch

I think we can do away with this check. It is a wrong property name today, and 
even if it did exist in the configuration, it's not an issue since we already 
ignore it.

Patch that gets rid of this legacy check.

 DataNode.instantiateDataNode calls System.exit(-1) if 
 conf.get("dfs.network.script") != null
 

 Key: HDFS-891
 URL: https://issues.apache.org/jira/browse/HDFS-891
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 0.22.0
Reporter: Steve Loughran
Priority: Minor
 Attachments: HDFS-891.patch


 Looking at the code for {{DataNode.instantiateDataNode()}}, I see that it 
 calls {{System.exit(-1)}} if it is not happy with the configuration:
 {code}
 if (conf.get("dfs.network.script") != null) {
   LOG.error("This configuration for rack identification is not supported " +
       "anymore. RackID resolution is handled by the NameNode.");
   System.exit(-1);
 }
 {code}
 This is excessive. It should throw an exception and let whoever called the 
 method decide how to handle it. The {{DataNode.main()}} method will log the 
 exception and exit with a -1 value, but other callers (such as anything using 
 {{MiniDFSCluster}}) will now see a meaningful message rather than some "JUnit 
 tests exited without completing" warning. 
 Easy to write a test for the correct behaviour: start a {{MiniDFSCluster}} 
 with this configuration set and see what happens.
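
For reference, a hedged sketch of the throw-instead-of-exit variant suggested 
above (the attached patch takes the other route and drops the check entirely):

{code}
if (conf.get("dfs.network.script") != null) {
  // Let the caller (DataNode.main(), MiniDFSCluster, ...) decide how to
  // fail, instead of killing the whole JVM.
  throw new IOException("This configuration for rack identification is not "
      + "supported anymore. RackID resolution is handled by the NameNode.");
}
{code}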





[jira] [Updated] (HDFS-583) HDFS should enforce a max block size

2012-01-07 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-583:
-

Component/s: (was: data-node)
 name-node
Summary: HDFS should enforce a max block size  (was: DataNode should 
enforce a max block size)

 HDFS should enforce a max block size
 

 Key: HDFS-583
 URL: https://issues.apache.org/jira/browse/HDFS-583
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Hairong Kuang

 When a DataNode creates a replica, it should enforce a max block size, so 
 clients can't go crazy. One way of enforcing this is to make the 
 BlockWriteStreams be filter streams that check the block size.
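
A standalone sketch of such a size-checking filter stream (names are 
assumptions for illustration, not the HDFS internals):

{code}
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Rejects writes once the replica would exceed the configured max size.
public class MaxBlockSizeOutputStream extends FilterOutputStream {
  private final long maxBytes;
  private long written;

  public MaxBlockSizeOutputStream(OutputStream out, long maxBytes) {
    super(out);
    this.maxBytes = maxBytes;
  }

  @Override
  public void write(byte[] b, int off, int len) throws IOException {
    if (written + len > maxBytes) {
      throw new IOException("Block would exceed max block size " + maxBytes);
    }
    out.write(b, off, len);
    written += len;
  }

  @Override
  public void write(int b) throws IOException {
    write(new byte[] { (byte) b }, 0, 1);
  }
}
{code}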





[jira] [Updated] (HDFS-664) Add a way to efficiently replace a disk in a live datanode

2012-01-07 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-664:
-

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

Dupe of HDFS-1362, per comments above.

 Add a way to efficiently replace a disk in a live datanode
 --

 Key: HDFS-664
 URL: https://issues.apache.org/jira/browse/HDFS-664
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: data-node
Affects Versions: 0.22.0
Reporter: Steve Loughran
 Attachments: HDFS-664.0-20-3-rc2.patch.1, HDFS-664.patch


 In clusters where the datanode disks are hot swappable, you need to be able 
 to swap out a disk on a live datanode without taking down the datanode. You 
 don't want to decommission the whole node as that is overkill. on a system 
 with 4 1TB HDDs, giving 3 TB of datanode storage, a decommissioning and 
 restart will consume up to 6 TB of bandwidth. If a single disk were swapped 
 in then there would only be 1TB of data to recover over the network. More 
 importantly, if that data could be moved to free space on the same machine, 
 the recommissioning could take place at disk rates, not network speeds. 
 # Maybe have a way of decommissioning a single disk on the DN; the files 
 could be moved to space on the other disks or the other machines in the rack.
 # There may not be time to use that option, in which case pulling out the 
 disk would be done with no warning, a new disk inserted.
 # The DN needs to see that a disk has been replaced (or react to some ops 
 request telling it this), and start using the new disk again, pushing back 
 data and rebuilding the balance. 
 To complicate the process, assume there is a live TT on the system, running 
 jobs against the data. The TT would probably need to be paused while the work 
 takes place, any ongoing work handled somehow. Halting the TT and then 
 restarting it after the replacement disk went in is probably simplest. 
 The more disks you add to a node, the more this scenario becomes a need.





[jira] [Updated] (HDFS-1273) Handle disk failure when writing new blocks on datanode

2012-01-07 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-1273:
--

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

Resolving based on Uma's comment above.

 Handle disk failure when writing new blocks on datanode
 ---

 Key: HDFS-1273
 URL: https://issues.apache.org/jira/browse/HDFS-1273
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 0.21.0
Reporter: Jeff Zhang
Assignee: Jeff Zhang
 Attachments: HDFS_1273.patch


 This issue relates to HDFS-457; in the patch for HDFS-457, only disk failure 
 when reading is handled. This JIRA is to handle the disk failure when writing 
 new blocks on the data node.





[jira] [Updated] (HDFS-2349) DN should log a WARN, not an INFO when it detects a corruption during block transfer

2012-01-06 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2349:
--

Target Version/s:   (was: 0.23.1)
   Fix Version/s: (was: 0.24.0)
  0.23.1

Backported to 0.23.

 DN should log a WARN, not an INFO when it detects a corruption during block 
 transfer
 

 Key: HDFS-2349
 URL: https://issues.apache.org/jira/browse/HDFS-2349
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.20.204.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Fix For: 0.23.1

 Attachments: HDFS-2349.diff


 Currently, in DataNode.java, we have:
 {code}
   LOG.info("Can't replicate block " + block
       + " because on-disk length " + onDiskLength
       + " is shorter than NameNode recorded length "
       + block.getNumBytes());
 {code}
 This log is better off as a WARN as it indicates (and also reports) a 
 corruption.
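
That is, something like:

{code}
LOG.warn("Can't replicate block " + block
    + " because on-disk length " + onDiskLength
    + " is shorter than NameNode recorded length " + block.getNumBytes());
{code}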





[jira] [Updated] (HDFS-2729) Update BlockManager's comments regarding the invalid block set

2012-01-06 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2729:
--

Target Version/s:   (was: 0.23.1)
   Fix Version/s: (was: 0.24.0)
  0.23.1

Backported to 0.23.

 Update BlockManager's comments regarding the invalid block set
 --

 Key: HDFS-2729
 URL: https://issues.apache.org/jira/browse/HDFS-2729
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.23.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
 Fix For: 0.23.1

 Attachments: HDFS-2729.patch


 Looks like after HDFS-82 was covered at some point, the comments and logs 
 still refer to the presence of two sets when there really is just one set.
 This patch changes the logs and comments to be more accurate about that.





[jira] [Updated] (HDFS-2726) Exception in createBlockOutputStream shouldn't delete exception stack trace

2012-01-06 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2726:
--

 Target Version/s:   (was: 0.23.1)
Affects Version/s: 0.23.0
Fix Version/s: (was: 0.24.0)
   0.23.1

Backported to 0.23.

 Exception in createBlockOutputStream shouldn't delete exception stack trace
 -

 Key: HDFS-2726
 URL: https://issues.apache.org/jira/browse/HDFS-2726
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 0.23.0
Reporter: Michael Bieniosek
Assignee: Harsh J
 Fix For: 0.23.1

 Attachments: HDFS-2726.patch


 I'm occasionally (1/5000 times) getting this error after upgrading everything 
 to hadoop-0.18:
 08/09/09 03:28:36 INFO dfs.DFSClient: Exception in createBlockOutputStream 
 java.io.IOException: Could not read from stream
 08/09/09 03:28:36 INFO dfs.DFSClient: Abandoning block 
 blk_624229997631234952_8205908
 DFSClient contains the logging code:
 LOG.info("Exception in createBlockOutputStream " + ie);
 This would be better written with ie as the second argument to LOG.info, so 
 that the stack trace could be preserved.  As it is, I don't know how to start 
 debugging.
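
In other words, using the two-argument logging overload so the full stack trace 
is attached (a sketch, commons-logging style):

{code}
// Passing the exception as the second argument makes the logger print the
// stack trace instead of just ie.toString().
LOG.info("Exception in createBlockOutputStream", ie);
{code}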





[jira] [Updated] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2012-01-06 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-554:
-

Target Version/s:   (was: 0.23.1)
   Fix Version/s: (was: 0.24.0)
  0.23.1

Backported to 0.23.

 BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
 --

 Key: HDFS-554
 URL: https://issues.apache.org/jira/browse/HDFS-554
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.21.0
Reporter: Steve Loughran
Assignee: Harsh J
Priority: Minor
 Fix For: 0.23.1

 Attachments: HDFS-554.patch, HDFS-554.txt, HDFS-554.txt


 BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
 the expanded array. {{System.arraycopy()}} is generally much faster for this, 
 as it can do a bulk memory copy. There is also the type-safe Java 6 
 {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.
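
A sketch of the bulk-copy variant (standalone illustration; the triplets field 
name is an assumption about BlockInfo's internals):

{code}
// Grow the backing array with one bulk copy instead of an
// element-by-element for() loop.
private void ensureCapacity(int increment) {
  Object[] old = triplets;
  triplets = new Object[old.length + increment];
  System.arraycopy(old, 0, triplets, 0, old.length);
  // Java 6 alternative, equivalent here:
  // triplets = Arrays.copyOf(old, old.length + increment);
}
{code}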





[jira] [Updated] (HDFS-1314) dfs.blocksize accepts only absolute value

2012-01-06 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-1314:
--

 Target Version/s:   (was: 0.23.1)
Affects Version/s: 0.23.0
Fix Version/s: (was: 0.24.0)
   0.23.1

Backported to 0.23.

 dfs.blocksize accepts only absolute value
 -

 Key: HDFS-1314
 URL: https://issues.apache.org/jira/browse/HDFS-1314
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Karim Saadah
Assignee: Sho Shimauchi
Priority: Minor
  Labels: newbie
 Fix For: 0.23.1

 Attachments: hdfs-1314.txt, hdfs-1314.txt, hdfs-1314.txt, 
 hdfs-1314.txt, hdfs-1314.txt, hdfs-1314.txt


 Using dfs.block.size=8388608 works 
 but dfs.block.size=8mb does not.
 Using dfs.block.size=8mb should throw some WARNING on NumberFormatException.
 (http://pastebin.corp.yahoo.com/56129)





[jira] [Updated] (HDFS-1314) dfs.blocksize accepts only absolute value

2012-01-04 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-1314:
--

   Resolution: Fixed
Fix Version/s: 0.24.0
 Release Note: The default block size property 'dfs.blocksize' now accepts 
size prefixes in place of a plain byte count. Values such as 10k, 128m, and 
1g may now be provided instead of just a number of bytes, as before.
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks for your contribution Sho, much appreciated!
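
A standalone sketch of the kind of suffix handling now accepted (illustration 
only, not the committed patch):

{code}
// Parses values like "8388608", "10k", "128m" or "1g" into bytes.
public static long parseSize(String value) {
  String v = value.trim().toLowerCase();
  long multiplier = 1;
  switch (v.charAt(v.length() - 1)) {
    case 'k': multiplier = 1L << 10; break;
    case 'm': multiplier = 1L << 20; break;
    case 'g': multiplier = 1L << 30; break;
  }
  if (multiplier != 1) {
    v = v.substring(0, v.length() - 1);
  }
  return Long.parseLong(v) * multiplier;
}
{code}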

 dfs.blocksize accepts only absolute value
 -

 Key: HDFS-1314
 URL: https://issues.apache.org/jira/browse/HDFS-1314
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Karim Saadah
Assignee: Sho Shimauchi
Priority: Minor
  Labels: newbie
 Fix For: 0.24.0

 Attachments: hdfs-1314.txt, hdfs-1314.txt, hdfs-1314.txt, 
 hdfs-1314.txt, hdfs-1314.txt, hdfs-1314.txt


 Using dfs.block.size=8388608 works 
 but dfs.block.size=8mb does not.
 Using dfs.block.size=8mb should throw some WARNING on NumberFormatException.
 (http://pastebin.corp.yahoo.com/56129)





[jira] [Updated] (HDFS-1314) dfs.blocksize accepts only absolute value

2012-01-04 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-1314:
--

Attachment: hdfs-1314.txt

Attaching committed patch since we had to strip docs to pass tests.

 dfs.blocksize accepts only absolute value
 -

 Key: HDFS-1314
 URL: https://issues.apache.org/jira/browse/HDFS-1314
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Karim Saadah
Assignee: Sho Shimauchi
Priority: Minor
  Labels: newbie
 Fix For: 0.24.0

 Attachments: hdfs-1314.txt, hdfs-1314.txt, hdfs-1314.txt, 
 hdfs-1314.txt, hdfs-1314.txt, hdfs-1314.txt


 Using dfs.block.size=8388608 works 
 but dfs.block.size=8mb does not.
 Using dfs.block.size=8mb should throw some WARNING on NumberFormatException.
 (http://pastebin.corp.yahoo.com/56129)





[jira] [Updated] (HDFS-2750) Humanize AccessControlException messages thrown from HDFS

2012-01-04 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2750:
--

Description: 
Right now, we get messages like:

bq. {{org.apache.hadoop.security.AccessControlException: Permission denied: 
user=admin, access=WRITE, inode=/user/beeswax/warehouse:hue:hive:drwxrwxr-x)}}

I feel we can further humanize such strings, to perhaps read like (just an 
example, looking for your comments as well):

bq. {{org.apache.hadoop.security.AccessControlException: Permission denied for 
WRITE access for user 'admin' on path /user/beeswax/warehouse (hue:hive 
drwxrwxr-x).}}

  was:
Right now, we get messages like:
{noformat}
org.apache.hadoop.security.AccessControlException: Permission denied: 
user=admin, access=WRITE, inode=/user/beeswax/warehouse:hue:hive:drwxrwxr-x)
{noformat}

I feel we can further humanize such strings, to perhaps read like (just an 
example, looking for your comments as well):
{noformat}
org.apache.hadoop.security.AccessControlException: Permission denied for WRITE 
access for user 'admin' on path /user/beeswax/warehouse (hue:hive drwxrwxr-x).
{noformat}


Or even:

bq. {{Permission denied to user 'admin' on path /user/beeswax/warehouse 
(hue:hive drwxrwxr-x) for WRITE access.}}
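
For illustration, a minimal sketch of how that last proposed wording could be 
assembled (the helper name and its parameters are assumptions for the sketch, 
not part of any patch):

{code}
import org.apache.hadoop.security.AccessControlException;

public class AccessDeniedMessage {
  static AccessControlException humanized(String user, String access,
      String path, String owner, String group, String perm) {
    // Mirrors the second proposed wording above.
    return new AccessControlException(String.format(
        "Permission denied to user '%s' on path %s (%s:%s %s) for %s access.",
        user, path, owner, group, perm, access));
  }

  public static void main(String[] args) throws AccessControlException {
    throw humanized("admin", "WRITE", "/user/beeswax/warehouse",
        "hue", "hive", "drwxrwxr-x");
  }
}
{code}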

 Humanize AccessControlException messages thrown from HDFS
 -

 Key: HDFS-2750
 URL: https://issues.apache.org/jira/browse/HDFS-2750
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Harsh J
Priority: Minor

 Right now, we get messages like:
 bq. {{org.apache.hadoop.security.AccessControlException: Permission denied: 
 user=admin, access=WRITE, 
 inode=/user/beeswax/warehouse:hue:hive:drwxrwxr-x)}}
 I feel we can further humanize such strings, to perhaps read like (just an 
 example, looking for your comments as well):
 bq. {{org.apache.hadoop.security.AccessControlException: Permission denied 
 for WRITE access for user 'admin' on path /user/beeswax/warehouse (hue:hive 
 drwxrwxr-x).}}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1314) dfs.blocksize accepts only absolute value

2012-01-03 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-1314:
--

Status: Patch Available  (was: Open)

Dunno if QA bot is back, but here's another try for you.

 dfs.blocksize accepts only absolute value
 -

 Key: HDFS-1314
 URL: https://issues.apache.org/jira/browse/HDFS-1314
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Karim Saadah
Assignee: Sho Shimauchi
Priority: Minor
  Labels: newbie
 Attachments: hdfs-1314.txt, hdfs-1314.txt, hdfs-1314.txt


 Using dfs.block.size=8388608 works 
 but dfs.block.size=8mb does not.
 Using dfs.block.size=8mb should throw some WARNING on NumberFormatException.
 (http://pastebin.corp.yahoo.com/56129)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1314) dfs.blocksize accepts only absolute value

2012-01-03 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-1314:
--

Status: Open  (was: Patch Available)

 dfs.blocksize accepts only absolute value
 -

 Key: HDFS-1314
 URL: https://issues.apache.org/jira/browse/HDFS-1314
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Karim Saadah
Assignee: Sho Shimauchi
Priority: Minor
  Labels: newbie
 Attachments: hdfs-1314.txt, hdfs-1314.txt, hdfs-1314.txt


 Using dfs.block.size=8388608 works 
 but dfs.block.size=8mb does not.
 Using dfs.block.size=8mb should throw some WARNING on NumberFormatException.
 (http://pastebin.corp.yahoo.com/56129)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-929) DFSClient#getBlockSize is unused

2012-01-03 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-929:
-

Attachment: HDFS-929.patch

Patch that deprecates the ClientProtocol calls.

For tests, INodeFile already has a test for its use internally; should we add 
a test with ClientProtocol use in mind?
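
For reference, a minimal sketch of the deprecation pattern being applied (the 
class, body, and javadoc text below are illustrative; the actual patch 
annotates the ClientProtocol methods):

{code}
public class DeprecationExample {
  /** @deprecated Unused within HDFS; slated for removal. */
  @Deprecated
  public long getBlockSize(String path) {
    return 67108864L; // placeholder body for the sketch
  }
}
{code}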

 DFSClient#getBlockSize is unused
 

 Key: HDFS-929
 URL: https://issues.apache.org/jira/browse/HDFS-929
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 0.21.0
Reporter: Eli Collins
Assignee: Jim Plush
Priority: Minor
 Fix For: 0.24.0

 Attachments: HDFS-929-take1.txt, HDFS-929.patch


 DFSClient#getBlockSize is unused. Since it's a public class internal to HDFS, 
 can we just remove it? If not, we should add a unit test.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2741) dfs.datanode.max.xcievers missing in 0.20.205.0

2012-01-03 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2741:
--

Hadoop Flags: Reviewed
  Status: Patch Available  (was: Open)

Markus,

I think it's mostly only HBase that needs this raised anyway. Thanks for the 
patch, it looks great; committing in a few :)

 dfs.datanode.max.xcievers missing in 0.20.205.0
 ---

 Key: HDFS-2741
 URL: https://issues.apache.org/jira/browse/HDFS-2741
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.20.205.0
Reporter: Markus Jelsma
Priority: Minor
 Attachments: HDFS-2741-branch1-1.patch, HDFS-2741-branch1-2.patch


 The dfs.datanode.max.xcievers configuration directive is missing in the 
 hdfs-default.xml and documentation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2741) dfs.datanode.max.xcievers missing in 0.20.205.0

2012-01-03 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2741:
--

   Resolution: Fixed
Fix Version/s: 1.1.0
   Status: Resolved  (was: Patch Available)

Committed to branch-1. Thanks for your time and contributions Markus! Looking 
forward to more improvements and reports from you :)

 dfs.datanode.max.xcievers missing in 0.20.205.0
 ---

 Key: HDFS-2741
 URL: https://issues.apache.org/jira/browse/HDFS-2741
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.20.205.0
Reporter: Markus Jelsma
Priority: Minor
 Fix For: 1.1.0

 Attachments: HDFS-2741-branch1-1.patch, HDFS-2741-branch1-2.patch


 The dfs.datanode.max.xcievers configuration directive is missing in the 
 hdfs-default.xml and documentation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2741) dfs.datanode.max.xcievers missing in 0.20.205.0

2012-01-03 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2741:
--

Release Note: Document and raise the maximum allowed transfer threads on a 
DataNode to 4096. This helps Apache HBase in particular.
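
For operators, the corresponding hdfs-site.xml entry would look like the 
following sketch (4096 is the value this change documents and raises the 
setting to; the description text is illustrative):

{code}
<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
  <description>Maximum number of transfer threads on a DataNode.</description>
</property>
{code}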

 dfs.datanode.max.xcievers missing in 0.20.205.0
 ---

 Key: HDFS-2741
 URL: https://issues.apache.org/jira/browse/HDFS-2741
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.20.205.0
Reporter: Markus Jelsma
Priority: Minor
 Fix For: 1.1.0

 Attachments: HDFS-2741-branch1-1.patch, HDFS-2741-branch1-2.patch


 The dfs.datanode.max.xcievers configuration directive is missing in the 
 hdfs-default.xml and documentation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2722) HttpFs shouldn't be using an int for block size

2012-01-03 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2722:
--

  Resolution: Fixed
   Fix Version/s: 0.23.1
Target Version/s: 0.24.0, 0.23.1  (was: 0.23.1, 0.24.0)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to branch-0.23 and trunk. Thanks for the review Alejandro!

 HttpFs shouldn't be using an int for block size
 ---

 Key: HDFS-2722
 URL: https://issues.apache.org/jira/browse/HDFS-2722
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Reporter: Harsh J
Assignee: Harsh J
 Fix For: 0.23.1

 Attachments: HDFS-2722.patch


 {{./hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java:
  blockSize = fs.getConf().getInt("dfs.block.size", 67108864);}}
 Should instead be using "dfs.blocksize", and the value should be a long.
 I'll post a patch for this after HDFS-1314 is resolved -- which changes the 
 internal behavior a bit (should be getLongBytes, and not just getLong, to 
 gain formatting advantages).
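
As a sketch of the direction described above (the FSOperations context is 
omitted and the method below is illustrative, not the committed lines):

{code}
import org.apache.hadoop.conf.Configuration;

public class HttpFsBlockSize {
  // Before: blockSize = fs.getConf().getInt("dfs.block.size", 67108864),
  // which truncates to int and uses the deprecated key.
  static long blockSize(Configuration conf) {
    // Long-valued, current key, and k/m/g suffixes are understood.
    return conf.getLongBytes("dfs.blocksize", 67108864L);
  }
}
{code}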

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2208) Add tests for -h option of FSshell -ls

2012-01-02 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2208:
--

Resolution: Later
Status: Resolved  (was: Patch Available)

This was not needed.

 Add tests for  -h option of FSshell -ls
 -

 Key: HDFS-2208
 URL: https://issues.apache.org/jira/browse/HDFS-2208
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Affects Versions: 0.23.0
Reporter: XieXianshan
Priority: Trivial
 Attachments: HDFS-2208.patch


 This is test code for HADOOP-7485.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2741) dfs.datanode.max.xcievers missing in 0.20.205.0

2012-01-02 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2741:
--

Target Version/s: 1.0.0

 dfs.datanode.max.xcievers missing in 0.20.205.0
 ---

 Key: HDFS-2741
 URL: https://issues.apache.org/jira/browse/HDFS-2741
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.20.205.0
Reporter: Markus Jelsma
Priority: Minor

 The dfs.datanode.max.xcievers configuration directive is missing in the 
 hdfs-default.xml and documentation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2319) Add test cases for FSshell -stat

2012-01-01 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2319:
--

   Fix Version/s: (was: 0.24.0)
Target Version/s: 0.24.0
Hadoop Flags: Reviewed
  Status: Patch Available  (was: Open)

 Add test cases for FSshell -stat
 

 Key: HDFS-2319
 URL: https://issues.apache.org/jira/browse/HDFS-2319
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Affects Versions: 0.24.0
Reporter: XieXianshan
Priority: Trivial
 Attachments: HDFS-2319.patch


 Add test cases for HADOOP-7574.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2319) Add test cases for FSshell -stat

2012-01-01 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2319:
--

Status: Open  (was: Patch Available)

 Add test cases for FSshell -stat
 

 Key: HDFS-2319
 URL: https://issues.apache.org/jira/browse/HDFS-2319
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Affects Versions: 0.24.0
Reporter: XieXianshan
Priority: Trivial
 Fix For: 0.24.0

 Attachments: HDFS-2319.patch


 Add test cases for HADOOP-7574.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2012-01-01 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-554:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

 BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
 --

 Key: HDFS-554
 URL: https://issues.apache.org/jira/browse/HDFS-554
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.21.0
Reporter: Steve Loughran
Assignee: Harsh J
Priority: Minor
 Fix For: 0.24.0

 Attachments: HDFS-554.patch, HDFS-554.txt, HDFS-554.txt


 BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
 the expanded array.  {{System.arraycopy()}} is generally much faster for 
 this, as it can do a bulk memory copy. There is also the typesafe Java6 
 {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.
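
For illustration, a minimal sketch of the change described above (the field 
and method below are stand-ins, not the actual BlockInfo internals):

{code}
public class EnsureCapacityExample {
  private Object[] triplets = new Object[3];

  void ensureCapacity(int extra) {
    Object[] old = triplets;
    triplets = new Object[old.length + extra];
    // Bulk memory copy instead of an element-by-element for() loop.
    System.arraycopy(old, 0, triplets, 0, old.length);
  }
}
{code}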

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2574) remove references to deprecated properties in hdfs-site.xml template and hdfs-default.xml

2011-12-31 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2574:
--

Target Version/s: 0.24.0, 0.23.1

 remove references to deprecated properties in hdfs-site.xml template and 
 hdfs-default.xml
 -

 Key: HDFS-2574
 URL: https://issues.apache.org/jira/browse/HDFS-2574
 Project: Hadoop HDFS
  Issue Type: Task
  Components: documentation
Affects Versions: 0.23.0
Reporter: Joe Crobak
Assignee: Joe Crobak
Priority: Trivial
 Attachments: HDFS-2574.patch, HDFS-2574.patch, HDFS-2574.patch


 Some examples: hadoop-hdfs/src/main/packages/templates/conf/hdfs-site.xml 
 contains an entry for dfs.name.dir rather than dfs.namenode.name.dir and 
 hdfs-default.xml references dfs.name.dir twice in description tags rather 
 than using dfs.namenode.name.dir.
 List of deprecated properties is here: 
 http://hadoop.apache.org/common/docs/r0.23.0/hadoop-project-dist/hadoop-common/DeprecatedProperties.html
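
For reference, a sketch of the renaming such cleanups point to, with the 
current key in place of the deprecated dfs.name.dir (the path value is 
illustrative):

{code}
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/path/to/name/dir</value>
</property>
{code}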

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2574) remove references to deprecated properties in hdfs-site.xml template and hdfs-default.xml

2011-12-31 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2574:
--

  Resolution: Fixed
   Fix Version/s: 0.23.1
Target Version/s: 0.24.0, 0.23.1  (was: 0.23.1, 0.24.0)
  Status: Resolved  (was: Patch Available)

Committed to branch-0.23 and trunk. Thanks very much for your patches Joe!

 remove references to deprecated properties in hdfs-site.xml template and 
 hdfs-default.xml
 -

 Key: HDFS-2574
 URL: https://issues.apache.org/jira/browse/HDFS-2574
 Project: Hadoop HDFS
  Issue Type: Task
  Components: documentation
Affects Versions: 0.23.0
Reporter: Joe Crobak
Assignee: Joe Crobak
Priority: Trivial
 Fix For: 0.23.1

 Attachments: HDFS-2574.patch, HDFS-2574.patch, HDFS-2574.patch


 Some examples: hadoop-hdfs/src/main/packages/templates/conf/hdfs-site.xml 
 contains an entry for dfs.name.dir rather than dfs.namenode.name.dir and 
 hdfs-default.xml references dfs.name.dir twice in description tags rather 
 than using dfs.namenode.name.dir.
 List of deprecated properties is here: 
 http://hadoop.apache.org/common/docs/r0.23.0/hadoop-project-dist/hadoop-common/DeprecatedProperties.html

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2574) remove references to deprecated properties in hdfs-site.xml template and hdfs-default.xml

2011-12-31 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2574:
--

Target Version/s: 0.24.0, 0.23.1  (was: 0.23.1, 0.24.0)
Hadoop Flags: Reviewed

 remove references to deprecated properties in hdfs-site.xml template and 
 hdfs-default.xml
 -

 Key: HDFS-2574
 URL: https://issues.apache.org/jira/browse/HDFS-2574
 Project: Hadoop HDFS
  Issue Type: Task
  Components: documentation
Affects Versions: 0.23.0
Reporter: Joe Crobak
Assignee: Joe Crobak
Priority: Trivial
 Fix For: 0.23.1

 Attachments: HDFS-2574.patch, HDFS-2574.patch, HDFS-2574.patch


 Some examples: hadoop-hdfs/src/main/packages/templates/conf/hdfs-site.xml 
 contains an entry for dfs.name.dir rather than dfs.namenode.name.dir and 
 hdfs-default.xml references dfs.name.dir twice in description tags rather 
 than using dfs.namenode.name.dir.
 List of deprecated properties is here: 
 http://hadoop.apache.org/common/docs/r0.23.0/hadoop-project-dist/hadoop-common/DeprecatedProperties.html

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1339) NameNodeMetrics should use MetricsTimeVaryingLong

2011-12-31 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-1339:
--

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

This should be resolved with HDFS-1117 on 0.23+. Please reopen if not.

 NameNodeMetrics should use MetricsTimeVaryingLong 
 --

 Key: HDFS-1339
 URL: https://issues.apache.org/jira/browse/HDFS-1339
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Reporter: Scott Chen
Assignee: Scott Chen
Priority: Minor
 Attachments: HDFS-1339.txt


 NameNodeMetrics uses MetricsTimeVaryingInt. We see that FileInfoOps and 
 GetBlockLocations overflow in our cluster.
 Using MetricsTimeVaryingLong will easily solve this problem.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2739) SecondaryNameNode doesn't start up

2011-12-30 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2739:
--

Affects Version/s: 0.24.0

 SecondaryNameNode doesn't start up
 --

 Key: HDFS-2739
 URL: https://issues.apache.org/jira/browse/HDFS-2739
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.24.0
Reporter: Sho Shimauchi
Priority: Critical

 Built a 0.24-SNAPSHOT tar from today, used a general config, started NN/DN, 
 but SNN won't come up with the following error:
 {code}
 11/12/31 12:13:14 ERROR namenode.SecondaryNameNode: Throwable Exception in 
 doCheckpoint
 java.lang.RuntimeException: java.lang.NoSuchFieldException: versionID
   at org.apache.hadoop.ipc.RPC.getProtocolVersion(RPC.java:154)
   at 
 org.apache.hadoop.ipc.WritableRpcEngine$Invocation.<init>(WritableRpcEngine.java:112)
   at 
 org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:226)
   at $Proxy9.getTransationId(Unknown Source)
   at 
 org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.getTransactionID(NamenodeProtocolTranslatorPB.java:185)
   at 
 org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.countUncheckpointedTxns(SecondaryNameNode.java:625)
   at 
 org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.shouldCheckpointBasedOnCount(SecondaryNameNode.java:633)
   at 
 org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:386)
   at 
 org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:356)
   at java.lang.Thread.run(Thread.java:680)
 Caused by: java.lang.NoSuchFieldException: versionID
   at java.lang.Class.getField(Class.java:1520)
   at org.apache.hadoop.ipc.RPC.getProtocolVersion(RPC.java:150)
   ... 9 more
 java.lang.RuntimeException: java.lang.NoSuchFieldException: versionID
   at org.apache.hadoop.ipc.RPC.getProtocolVersion(RPC.java:154)
   at 
 org.apache.hadoop.ipc.WritableRpcEngine$Invocation.<init>(WritableRpcEngine.java:112)
   at 
 org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:226)
   at $Proxy9.getTransationId(Unknown Source)
   at 
 org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.getTransactionID(NamenodeProtocolTranslatorPB.java:185)
   at 
 org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.countUncheckpointedTxns(SecondaryNameNode.java:625)
   at 
 org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.shouldCheckpointBasedOnCount(SecondaryNameNode.java:633)
   at 
 org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:386)
   at 
 org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:356)
   at java.lang.Thread.run(Thread.java:680)
 Caused by: java.lang.NoSuchFieldException: versionID
   at java.lang.Class.getField(Class.java:1520)
   at org.apache.hadoop.ipc.RPC.getProtocolVersion(RPC.java:150)
   ... 9 more
 11/12/31 12:13:14 INFO namenode.SecondaryNameNode: SHUTDOWN_MSG: 
 /
 SHUTDOWN_MSG: Shutting down SecondaryNameNode at sho-mba.local/192.168.11.2
 /
 {code}
 full error log: http://pastebin.com/mSaVbS34

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2580) NameNode#main(...) can make use of GenericOptionsParser.

2011-12-29 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2580:
--

Status: Open  (was: Patch Available)

 NameNode#main(...) can make use of GenericOptionsParser.
 

 Key: HDFS-2580
 URL: https://issues.apache.org/jira/browse/HDFS-2580
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.23.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
 Fix For: 0.24.0

 Attachments: HDFS-2580.patch


 DataNode supports passing generic opts when calling via {{hdfs datanode}}. 
 NameNode can support the same thing as well, but doesn't right now.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2728) Remove dfsadmin -printTopology from branch-1 docs since it does not exist

2011-12-29 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2728:
--

Status: Patch Available  (was: Open)

 Remove dfsadmin -printTopology from branch-1 docs since it does not exist
 -

 Key: HDFS-2728
 URL: https://issues.apache.org/jira/browse/HDFS-2728
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 1.0.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
 Attachments: HDFS-2728.patch


 It is documented we have -printTopology but we do not really have it in this 
 branch. Possible docs mixup from somewhere in security branch pre-merge?
 {code}
 ➜  branch-1  grep printTopology -R .
 ./src/docs/src/documentation/content/xdocs/.svn/text-base/hdfs_user_guide.xml.svn-base:
   <code>-printTopology</code>
 ./src/docs/src/documentation/content/xdocs/hdfs_user_guide.xml:
   <code>-printTopology</code>
 {code}
 Let's remove the reference.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2729) Update BlockManager's comments regarding the invalid block set

2011-12-29 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2729:
--

   Resolution: Fixed
Fix Version/s: 0.24.0
   Status: Resolved  (was: Patch Available)

Committed revision 1225591. Thanks Eli!

 Update BlockManager's comments regarding the invalid block set
 --

 Key: HDFS-2729
 URL: https://issues.apache.org/jira/browse/HDFS-2729
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.23.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
 Fix For: 0.24.0

 Attachments: HDFS-2729.patch


 Looks like after HDFS-82 was covered at some point, the comments and logs 
 still refer to two sets when there really is just one set.
 This patch changes the logs and comments to be more accurate about that.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2728) Remove dfsadmin -printTopology from branch-1 docs since it does not exist

2011-12-29 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2728:
--

Attachment: HDFS-2728.patch

 Remove dfsadmin -printTopology from branch-1 docs since it does not exist
 -

 Key: HDFS-2728
 URL: https://issues.apache.org/jira/browse/HDFS-2728
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 1.0.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
 Attachments: HDFS-2728.patch


 It is documented we have -printTopology but we do not really have it in this 
 branch. Possible docs mixup from somewhere in security branch pre-merge?
 {code}
 ➜  branch-1  grep printTopology -R .
 ./src/docs/src/documentation/content/xdocs/.svn/text-base/hdfs_user_guide.xml.svn-base:
   <code>-printTopology</code>
 ./src/docs/src/documentation/content/xdocs/hdfs_user_guide.xml:
   <code>-printTopology</code>
 {code}
 Let's remove the reference.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1314) dfs.block.size accepts only absolute value

2011-12-29 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-1314:
--

Target Version/s: 0.24.0
  Status: Patch Available  (was: Open)

+1. Will commit once Hudson reports its build.

 dfs.block.size accepts only absolute value
 --

 Key: HDFS-1314
 URL: https://issues.apache.org/jira/browse/HDFS-1314
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Karim Saadah
Assignee: Sho Shimauchi
Priority: Minor
  Labels: newbie
 Attachments: hdfs-1314.txt, hdfs-1314.txt, hdfs-1314.txt


 Using dfs.block.size=8388608 works 
 but dfs.block.size=8mb does not.
 Using dfs.block.size=8mb should throw some WARNING on NumberFormatException.
 (http://pastebin.corp.yahoo.com/56129)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2728) Remove dfsadmin -printTopology from branch-1 docs since it does not exist

2011-12-29 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2728:
--

Target Version/s: 1.1.0  (was: 0.24.0)

 Remove dfsadmin -printTopology from branch-1 docs since it does not exist
 -

 Key: HDFS-2728
 URL: https://issues.apache.org/jira/browse/HDFS-2728
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 1.0.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
 Attachments: HDFS-2728.patch


 It is documented we have -printTopology but we do not really have it in this 
 branch. Possible docs mixup from somewhere in security branch pre-merge?
 {code}
 ➜  branch-1  grep printTopology -R .
 ./src/docs/src/documentation/content/xdocs/.svn/text-base/hdfs_user_guide.xml.svn-base:
   <code>-printTopology</code>
 ./src/docs/src/documentation/content/xdocs/hdfs_user_guide.xml:
   <code>-printTopology</code>
 {code}
 Let's remove the reference.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2729) Update BlockManager's comments regarding the invalid block set

2011-12-29 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2729:
--

Attachment: HDFS-2729.patch

 Update BlockManager's comments regarding the invalid block set
 --

 Key: HDFS-2729
 URL: https://issues.apache.org/jira/browse/HDFS-2729
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.23.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
 Attachments: HDFS-2729.patch


 Looks like after HDFS-82 was covered at some point, the comments and logs 
 still refer to two sets when there really is just one set.
 This patch changes the logs and comments to be more accurate about that.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2580) NameNode#main(...) can make use of GenericOptionsParser.

2011-12-29 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2580:
--

Status: Patch Available  (was: Open)

Resubmitting for tests.

I don't see an elegant way to use the Tool interface, given the 
createNamenode(…) static call required to initialize 'this'. This should 
suffice.
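
For reference, a minimal sketch of the pattern in question (the class below 
is a stand-in, not NameNode#main itself): strip generic options such as 
-D key=value or -conf file before handling the remaining arguments.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.GenericOptionsParser;

public class MainWithGenericOptions {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Consumes the generic options into 'conf' and hands back the rest.
    String[] remaining = new GenericOptionsParser(conf, args)
        .getRemainingArgs();
    // ... dispatch on 'remaining' as the existing main() already does,
    // with 'conf' now reflecting any generic options passed in.
  }
}
{code}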

 NameNode#main(...) can make use of GenericOptionsParser.
 

 Key: HDFS-2580
 URL: https://issues.apache.org/jira/browse/HDFS-2580
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.23.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
 Fix For: 0.24.0

 Attachments: HDFS-2580.patch


 DataNode supports passing generic opts when calling via {{hdfs datanode}}. 
 NameNode can support the same thing as well, but doesn't right now.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2729) Update BlockManager's comments regarding the invalid block set

2011-12-29 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2729:
--

Status: Patch Available  (was: Open)

Trivial patch that changes comments and log statements. No tests required.

 Update BlockManager's comments regarding the invalid block set
 --

 Key: HDFS-2729
 URL: https://issues.apache.org/jira/browse/HDFS-2729
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.23.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
 Attachments: HDFS-2729.patch


 Looks like after HDFS-82 was covered at some point, the comments and logs 
 still refer to two sets when there really is just one set.
 This patch changes the logs and comments to be more accurate about that.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-1314) dfs.block.size accepts only absolute value

2011-12-29 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-1314:
--

Status: Open  (was: Patch Available)

 dfs.block.size accepts only absolute value
 --

 Key: HDFS-1314
 URL: https://issues.apache.org/jira/browse/HDFS-1314
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Karim Saadah
Assignee: Sho Shimauchi
Priority: Minor
  Labels: newbie
 Attachments: hdfs-1314.txt, hdfs-1314.txt, hdfs-1314.txt


 Using dfs.block.size=8388608 works 
 but dfs.block.size=8mb does not.
 Using dfs.block.size=8mb should throw some WARNING on NumberFormatException.
 (http://pastebin.corp.yahoo.com/56129)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2726) Exception in createBlockOutputStream shouldn't delete exception stack trace

2011-12-28 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2726:
--

Attachment: HDFS-2726.patch

 Exception in createBlockOutputStream shouldn't delete exception stack trace
 -

 Key: HDFS-2726
 URL: https://issues.apache.org/jira/browse/HDFS-2726
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Michael Bieniosek
Assignee: Harsh J
 Attachments: HDFS-2726.patch


 I'm occasionally (1/5000 times) getting this error after upgrading everything 
 to hadoop-0.18:
 08/09/09 03:28:36 INFO dfs.DFSClient: Exception in createBlockOutputStream 
 java.io.IOException: Could not read from stream
 08/09/09 03:28:36 INFO dfs.DFSClient: Abandoning block 
 blk_624229997631234952_8205908
 DFSClient contains the logging code:
 LOG.info("Exception in createBlockOutputStream " + ie);
 This would be better written with ie as the second argument to LOG.info, so 
 that the stack trace could be preserved.  As it is, I don't know how to start 
 debugging.
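
For clarity, a minimal sketch of the fix being asked for (the class below is 
a stand-in for the client code, not the committed patch):

{code}
import java.io.IOException;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class LoggingExample {
  private static final Log LOG = LogFactory.getLog(LoggingExample.class);

  void handle(IOException ie) {
    // Passing the throwable as the second argument preserves the stack
    // trace instead of flattening it into the message string.
    LOG.info("Exception in createBlockOutputStream", ie);
  }
}
{code}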

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2726) Exception in createBlockOutputStream shouldn't delete exception stack trace

2011-12-28 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2726:
--

Issue Type: Improvement  (was: Bug)

This is a trivial change to the logger statement that's still a problem on 
trunk's DFSOutputStream, so I am going ahead and committing it.

 Exception in createBlockOutputStream shouldn't delete exception stack trace
 -

 Key: HDFS-2726
 URL: https://issues.apache.org/jira/browse/HDFS-2726
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Michael Bieniosek
Assignee: Harsh J
 Fix For: 0.24.0

 Attachments: HDFS-2726.patch


 I'm occasionally (1/5000 times) getting this error after upgrading everything 
 to hadoop-0.18:
 08/09/09 03:28:36 INFO dfs.DFSClient: Exception in createBlockOutputStream 
 java.io.IOException: Could not read from stream
 08/09/09 03:28:36 INFO dfs.DFSClient: Abandoning block 
 blk_624229997631234952_8205908
 DFSClient contains the logging code:
 LOG.info("Exception in createBlockOutputStream " + ie);
 This would be better written with ie as the second argument to LOG.info, so 
 that the stack trace could be preserved.  As it is, I don't know how to start 
 debugging.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2011-12-27 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-554:
-

Status: Patch Available  (was: Open)

 BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
 --

 Key: HDFS-554
 URL: https://issues.apache.org/jira/browse/HDFS-554
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.21.0
Reporter: Steve Loughran
Assignee: Harsh J
Priority: Minor
 Fix For: 0.24.0

 Attachments: HDFS-554.patch, HDFS-554.txt, HDFS-554.txt


 BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
 the expanded array.  {{System.arraycopy()}} is generally much faster for 
 this, as it can do a bulk memory copy. There is also the typesafe Java6 
 {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2011-12-27 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-554:
-

Status: Open  (was: Patch Available)

 BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
 --

 Key: HDFS-554
 URL: https://issues.apache.org/jira/browse/HDFS-554
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.21.0
Reporter: Steve Loughran
Assignee: Harsh J
Priority: Minor
 Fix For: 0.24.0

 Attachments: HDFS-554.patch, HDFS-554.txt, HDFS-554.txt


 BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
 the expanded array.  {{System.arraycopy()}} is generally much faster for 
 this, as it can do a bulk memory copy. There is also the typesafe Java6 
 {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2011-12-27 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-554:
-

Attachment: HDFS-554.txt

For some reason the patch didn't trigger a Jenkins build. Re-upping the same 
thing.

 BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
 --

 Key: HDFS-554
 URL: https://issues.apache.org/jira/browse/HDFS-554
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.21.0
Reporter: Steve Loughran
Assignee: Harsh J
Priority: Minor
 Fix For: 0.24.0

 Attachments: HDFS-554.patch, HDFS-554.txt, HDFS-554.txt


 BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
 the expanded array.  {{System.arraycopy()}} is generally much faster for 
 this, as it can do a bulk memory copy. There is also the typesafe Java6 
 {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2547) ReplicationTargetChooser has incorrect block placement comments

2011-12-27 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2547:
--

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks Aaron for your persistence on this one. Committed to branch-1.

 ReplicationTargetChooser has incorrect block placement comments
 ---

 Key: HDFS-2547
 URL: https://issues.apache.org/jira/browse/HDFS-2547
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.20.1
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Fix For: 1.1.0

 Attachments: HDFS-2547.patch, HDFS-2547.patch


 {code}
 /** The class is responsible for choosing the desired number of targets
  * for placing block replicas.
  * The replica placement strategy is that if the writer is on a datanode,
  * the 1st replica is placed on the local machine, 
  * otherwise a random datanode. The 2nd replica is placed on a datanode
  * that is on a different rack. The 3rd replica is placed on a datanode
  * which is on the same rack as the **first replca**.
  */
 {code}
 That should read second replica. The test cases confirm that this is the 
 behavior, as well as the docs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2413) Add public APIs for safemode

2011-12-26 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2413:
--

Attachment: HDFS-2413.patch

Patch that makes DFSAdmin carry two utility functions that let its users 
check safemode and wait on it.
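
Illustratively, a dependent program could then do something like the 
following; the method names isInSafeMode()/waitExitSafeMode() are 
hypothetical placeholders here, and the actual signatures are whatever the 
attached patch introduces:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.tools.DFSAdmin;

public class SafeModeWait {
  public static void main(String[] args) throws Exception {
    DFSAdmin admin = new DFSAdmin(new Configuration());
    if (admin.isInSafeMode()) {   // hypothetical name, see note above
      admin.waitExitSafeMode();   // hypothetical: block until the NN exits
    }
  }
}
{code}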

 Add public APIs for safemode
 

 Key: HDFS-2413
 URL: https://issues.apache.org/jira/browse/HDFS-2413
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Harsh J
 Fix For: 0.24.0

 Attachments: HDFS-2413.patch


 Currently the APIs for safe-mode are part of DistributedFileSystem, which is 
 supposed to be a private interface. However, dependent software often wants 
 to wait until the NN is out of safemode. Though it could poll trying to 
 create a file and catching SafeModeException, we should consider making some 
 of these APIs public.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2413) Add public APIs for safemode

2011-12-26 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2413:
--

Status: Patch Available  (was: Open)

 Add public APIs for safemode
 

 Key: HDFS-2413
 URL: https://issues.apache.org/jira/browse/HDFS-2413
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Harsh J
 Fix For: 0.24.0

 Attachments: HDFS-2413.patch


 Currently the APIs for safe-mode are part of DistributedFileSystem, which is 
 supposed to be a private interface. However, dependent software often wants 
 to wait until the NN is out of safemode. Though it could poll trying to 
 create a file and catching SafeModeException, we should consider making some 
 of these APIs public.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2263) Make DFSClient report bad blocks more quickly

2011-12-26 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2263:
--

Attachment: HDFS-2263.patch

(Issue affects trunk, and attached patch is against that.)

Aaron/Arpit,

An error in the OP_READ_BLOCK operation can also arise out of xceiver load, 
apart from truncation of block files and missing / bad-permission block 
files.

The attached patch reports every error encountered, and not just the final 
tried LocatedBlock. I do know this is wrong, as it'd spark a replication 
storm for a reason as simple as filled-up xceiver loads causing the read 
error -- but let me know if I am wrong, and I'll tweak the patches and the 
tests a bit to accommodate final-retry corrupt marking.
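
For orientation, a rough sketch of the reporting step under discussion (the 
helper below is illustrative; the actual call site in DFSClient and the 
read-retry loop around it are omitted):

{code}
import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;

public class BadBlockReporter {
  static void reportCorrupt(ClientProtocol namenode, LocatedBlock blk)
      throws IOException {
    // Tell the NN which block (and replica locations) looked bad, so it
    // can act promptly rather than wait for the next block report.
    namenode.reportBadBlocks(new LocatedBlock[] { blk });
  }
}
{code}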

 Make DFSClient report bad blocks more quickly
 -

 Key: HDFS-2263
 URL: https://issues.apache.org/jira/browse/HDFS-2263
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client
Affects Versions: 0.20.2
Reporter: Aaron T. Myers
Assignee: Harsh J
 Attachments: HDFS-2263.patch


 In certain circumstances the DFSClient may detect a block as being bad 
 without reporting it promptly to the NN.
 If when reading a file a client finds an invalid checksum of a block, it 
 immediately reports that bad block to the NN. If when serving up a block a DN 
 finds a truncated block, it reports this to the client, but the client merely 
 adds that DN to the list of dead nodes and moves on to trying another DN, 
 without reporting this to the NN.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2263) Make DFSClient report bad blocks more quickly

2011-12-26 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2263:
--

Target Version/s: 0.24.0

 Make DFSClient report bad blocks more quickly
 -

 Key: HDFS-2263
 URL: https://issues.apache.org/jira/browse/HDFS-2263
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client
Affects Versions: 0.20.2
Reporter: Aaron T. Myers
Assignee: Harsh J
 Attachments: HDFS-2263.patch


 In certain circumstances the DFSClient may detect a block as being bad 
 without reporting it promptly to the NN.
 If when reading a file a client finds an invalid checksum of a block, it 
 immediately reports that bad block to the NN. If when serving up a block a DN 
 finds a truncated block, it reports this to the client, but the client merely 
 adds that DN to the list of dead nodes and moves on to trying another DN, 
 without reporting this to the NN.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2263) Make DFSClient report bad blocks more quickly

2011-12-26 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2263:
--

Status: Patch Available  (was: Open)

 Make DFSClient report bad blocks more quickly
 -

 Key: HDFS-2263
 URL: https://issues.apache.org/jira/browse/HDFS-2263
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client
Affects Versions: 0.20.2
Reporter: Aaron T. Myers
Assignee: Harsh J
 Attachments: HDFS-2263.patch


 In certain circumstances the DFSClient may detect a block as being bad 
 without reporting it promptly to the NN.
 If when reading a file a client finds an invalid checksum of a block, it 
 immediately reports that bad block to the NN. If when serving up a block a DN 
 finds a truncated block, it reports this to the client, but the client merely 
 adds that DN to the list of dead nodes and moves on to trying another DN, 
 without reporting this to the NN.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2722) HttpFs shouldn't be using an int for block size

2011-12-24 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2722:
--

Target Version/s: 0.24.0, 0.23.1  (was: 1.1.0, 0.23.1, 0.24.0)

 HttpFs shouldn't be using an int for block size
 ---

 Key: HDFS-2722
 URL: https://issues.apache.org/jira/browse/HDFS-2722
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Reporter: Harsh J
Assignee: Harsh J
 Attachments: HDFS-2722.patch


 {{./hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java:
  blockSize = fs.getConf().getInt("dfs.block.size", 67108864);}}
 Should instead be using "dfs.blocksize", and the value should be a long.
 I'll post a patch for this after HDFS-1314 is resolved -- which changes the 
 internal behavior a bit (should be getLongBytes, and not just getLong, to 
 gain formatting advantages).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2722) HttpFs shouldn't be using an int for block size

2011-12-24 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2722:
--

Attachment: HDFS-2722.patch

 HttpFs shouldn't be using an int for block size
 ---

 Key: HDFS-2722
 URL: https://issues.apache.org/jira/browse/HDFS-2722
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Reporter: Harsh J
Assignee: Harsh J
 Attachments: HDFS-2722.patch


 {{./hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java:
  blockSize = fs.getConf().getInt("dfs.block.size", 67108864);}}
 Should instead be using "dfs.blocksize", and the value should be a long.
 I'll post a patch for this after HDFS-1314 is resolved -- which changes the 
 internal behavior a bit (should be getLongBytes, and not just getLong, to 
 gain formatting advantages).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2722) HttpFs shouldn't be using an int for block size

2011-12-24 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2722:
--

Target Version/s: 0.24.0, 0.23.1, 1.1.0
  Status: Patch Available  (was: Open)

 HttpFs shouldn't be using an int for block size
 ---

 Key: HDFS-2722
 URL: https://issues.apache.org/jira/browse/HDFS-2722
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Reporter: Harsh J
Assignee: Harsh J
 Attachments: HDFS-2722.patch


 {{./hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java:
  blockSize = fs.getConf().getInt("dfs.block.size", 67108864);}}
 Should instead be using "dfs.blocksize", and the value should be a long.
 I'll post a patch for this after HDFS-1314 is resolved -- which changes the 
 internal behavior a bit (should be getLongBytes, and not just getLong, to 
 gain formatting advantages).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2722) HttpFs shouldn't be using an int for block size

2011-12-23 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2722:
--

Description: 
{{./hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java:
 blockSize = fs.getConf().getInt("dfs.block.size", 67108864);}}

Should instead be using "dfs.blocksize", and the value should be a long.

I'll post a patch for this after HDFS-1314 is resolved -- which changes the 
internal behavior a bit (should be getLongBytes, and not just getLong, to gain 
formatting advantages).

  was:
{{./hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java:
 blockSize = fs.getConf().getInt("dfs.block.size", 67108864);}}

It should instead use the "dfs.blocksize" key.

I'll post a patch for this after HDFS-1314 is resolved -- which changes the 
internal behavior a bit (it should be getLongBytes, and not just getLong, to gain 
formatting advantages).


 HttpFs shouldn't be using an int for block size
 ---

 Key: HDFS-2722
 URL: https://issues.apache.org/jira/browse/HDFS-2722
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Reporter: Harsh J

 {{./hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java:
  blockSize = fs.getConf().getInt("dfs.block.size", 67108864);}}
 It should instead use the "dfs.blocksize" key, and the value should be a long.
 I'll post a patch for this after HDFS-1314 is resolved -- which changes the 
 internal behavior a bit (it should be getLongBytes, and not just getLong, to 
 gain formatting advantages).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-69) Improve dfsadmin command line help

2011-12-18 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-69?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-69:


 Target Version/s: 0.24.0, 0.23.1
Affects Version/s: 1.0.0
   Status: Patch Available  (was: Open)

 Improve dfsadmin command line help 
 ---

 Key: HDFS-69
 URL: https://issues.apache.org/jira/browse/HDFS-69
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Ravi Phulari
Assignee: Harsh J
Priority: Minor
 Attachments: HDFS-69.patch


 Enhance the dfsadmin command line help to inform users that "A quota of one 
 forces a directory to remain empty". 
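
As a hypothetical sketch only, the clarification could be appended to the -setQuota usage text along these lines (the exact wording and its location in DFSAdmin are assumptions, not the attached patch):

{code}
public class DfsAdminHelpSketch {
  // Illustrative help-text addition; wording and placement are assumed.
  static final String SET_QUOTA_HELP =
      "-setQuota <quota> <dirname>...<dirname>:\tSet the quota for each directory.\n"
      + "\t\tNote: A quota of one forces a directory to remain empty.\n";

  public static void main(String[] args) {
    System.out.print(SET_QUOTA_HELP);
  }
}
{code}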

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-442) dfsthroughput in test.jar throws NPE

2011-12-18 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-442:
-

Target Version/s: 0.24.0, 0.23.1
   Fix Version/s: (was: 0.24.0)

Can someone take a look at the trivial patch and review it? Ramya?

It should be good for 0.23 as well.

 dfsthroughput in test.jar throws NPE
 

 Key: HDFS-442
 URL: https://issues.apache.org/jira/browse/HDFS-442
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.20.1
Reporter: Ramya Sunil
Assignee: Harsh J
Priority: Minor
 Attachments: HDFS-442.patch


 On running {{hadoop jar hadoop-test.jar dfsthroughput}} OR {{hadoop 
 org.apache.hadoop.hdfs.BenchmarkThroughput}}, we get a NullPointerException. 
 Below is the stacktrace:
 {noformat}
 Exception in thread "main" java.lang.NullPointerException
 at java.util.Hashtable.put(Hashtable.java:394)
 at java.util.Properties.setProperty(Properties.java:143)
 at java.lang.System.setProperty(System.java:731)
 at 
 org.apache.hadoop.hdfs.BenchmarkThroughput.run(BenchmarkThroughput.java:198)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
 at 
 org.apache.hadoop.hdfs.BenchmarkThroughput.main(BenchmarkThroughput.java:229)
 {noformat}
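
The trace points at {{System.setProperty}} receiving a null value (java.util.Properties rejects nulls). A hedged sketch of the kind of guard that avoids the NPE, assuming the null comes from a config lookup such as {{mapred.temp.dir}} (not necessarily the attached patch):

{code}
import org.apache.hadoop.conf.Configuration;

public class NpeGuardSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // java.util.Properties (backing the System properties) rejects null
    // values, so guard the lookup before calling System.setProperty.
    String localDir = conf.get("mapred.temp.dir");
    System.setProperty("test.build.data",
        localDir != null ? localDir : "/tmp");
  }
}
{code}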

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2011-12-17 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-554:
-

Attachment: HDFS-554.txt

Thanks for that catch, Todd; you're right :)

 BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
 --

 Key: HDFS-554
 URL: https://issues.apache.org/jira/browse/HDFS-554
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.21.0
Reporter: Steve Loughran
Assignee: Harsh J
Priority: Minor
 Fix For: 0.24.0

 Attachments: HDFS-554.patch, HDFS-554.txt


 BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
 the expanded array.  {{System.arraycopy()}} is generally much faster for 
 this, as it can do a bulk memory copy. There is also the typesafe Java6 
 {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2011-12-17 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-554:
-

Status: Patch Available  (was: Open)

 BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
 --

 Key: HDFS-554
 URL: https://issues.apache.org/jira/browse/HDFS-554
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.21.0
Reporter: Steve Loughran
Assignee: Harsh J
Priority: Minor
 Fix For: 0.24.0

 Attachments: HDFS-554.patch, HDFS-554.txt


 BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
 the expanded array.  {{System.arraycopy()}} is generally much faster for 
 this, as it can do a bulk memory copy. There is also the typesafe Java6 
 {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2011-12-17 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-554:
-

Status: Open  (was: Patch Available)

 BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
 --

 Key: HDFS-554
 URL: https://issues.apache.org/jira/browse/HDFS-554
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.21.0
Reporter: Steve Loughran
Assignee: Harsh J
Priority: Minor
 Fix For: 0.24.0

 Attachments: HDFS-554.patch, HDFS-554.txt


 BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
 the expanded array.  {{System.arraycopy()}} is generally much faster for 
 this, as it can do a bulk memory copy. There is also the typesafe Java6 
 {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2011-11-21 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-554:
-

Attachment: HDFS-554.patch

The speed difference is quite clear, even in a silly test:

{code}
public class TestSpeed {
  public static void main(String[] args) {
    // Load about a million Integers.
    Object[] arr = new Object[1000000];
    for (Integer i = 0; i < 1000000; i++) {
      arr[i] = i;
    }
    long now = System.currentTimeMillis();
    // Copy iteratively into a new, larger array.
    Object[] arr2 = new Object[3000000];
    for (Integer i = 0; i < arr.length; i++) {
      arr2[i] = arr[i];
    }
    System.out.println(System.currentTimeMillis() - now);
    now = System.currentTimeMillis();
    // arraycopy into a new, larger array.
    Object[] arr3 = new Object[3000000];
    System.arraycopy(arr, 0, arr3, 0, arr.length);
    System.out.println(System.currentTimeMillis() - now);
  }
}
{code}

A few runs yield, for example (times in ms):
||Loop||System.arraycopy||
|59|17|
|54|14|
|52|14|
|52|15|
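
And a minimal sketch of the corresponding change in {{BlockInfo.ensureCapacity()}} itself; the field name and method shape here are assumptions for illustration, not the committed patch:

{code}
public class EnsureCapacitySketch {
  private Object[] triplets = new Object[8];

  // Grow the backing array with one bulk memory copy instead of an
  // element-by-element for() loop.
  void ensureCapacity(int extra) {
    Object[] old = triplets;
    triplets = new Object[old.length + extra];
    System.arraycopy(old, 0, triplets, 0, old.length);
  }
}
{code}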

 BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
 --

 Key: HDFS-554
 URL: https://issues.apache.org/jira/browse/HDFS-554
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.21.0
Reporter: Steve Loughran
Priority: Minor
 Attachments: HDFS-554.patch


 BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
 the expanded array.  {{System.arraycopy()}} is generally much faster for 
 this, as it can do a bulk memory copy. There is also the typesafe Java6 
 {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2011-11-21 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-554:
-

Fix Version/s: 0.24.0
   Status: Patch Available  (was: Open)

 BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
 --

 Key: HDFS-554
 URL: https://issues.apache.org/jira/browse/HDFS-554
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.21.0
Reporter: Steve Loughran
Priority: Minor
 Fix For: 0.24.0

 Attachments: HDFS-554.patch


 BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
 the expanded array.  {{System.arraycopy()}} is generally much faster for 
 this, as it can do a bulk memory copy. There is also the typesafe Java6 
 {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2574) remove references to deprecated properties in hdfs-site.xml template and hdfs-default.xml

2011-11-21 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2574:
--

Attachment: HDFS-2574.patch

Done and done. Please let me know if this covers it!

 remove references to deprecated properties in hdfs-site.xml template and 
 hdfs-default.xml
 -

 Key: HDFS-2574
 URL: https://issues.apache.org/jira/browse/HDFS-2574
 Project: Hadoop HDFS
  Issue Type: Task
  Components: documentation
Affects Versions: 0.23.0
Reporter: Joe Crobak
Priority: Trivial
 Attachments: HDFS-2574.patch


 Some examples: hadoop-hdfs/src/main/packages/templates/conf/hdfs-site.xml 
 contains an entry for dfs.name.dir rather than dfs.namenode.name.dir, and 
 hdfs-default.xml references dfs.name.dir twice in description tags rather 
 than using dfs.namenode.name.dir.
 List of deprecated properties is here: 
 http://hadoop.apache.org/common/docs/r0.23.0/hadoop-project-dist/hadoop-common/DeprecatedProperties.html
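
For example, the template entry would move to the non-deprecated key along these lines (the value shown is purely illustrative):

{code}
<!-- Before (deprecated key): -->
<property>
  <name>dfs.name.dir</name>
  <value>/var/lib/hadoop-hdfs/cache/hdfs/dfs/name</value>
</property>

<!-- After (current key): -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/var/lib/hadoop-hdfs/cache/hdfs/dfs/name</value>
</property>
{code}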

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2574) remove references to deprecated properties in hdfs-site.xml template and hdfs-default.xml

2011-11-21 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2574:
--

Attachment: HDFS-2574.patch

Missed adding the hdfs-default.xml changeset.

 remove references to deprecated properties in hdfs-site.xml template and 
 hdfs-default.xml
 -

 Key: HDFS-2574
 URL: https://issues.apache.org/jira/browse/HDFS-2574
 Project: Hadoop HDFS
  Issue Type: Task
  Components: documentation
Affects Versions: 0.23.0
Reporter: Joe Crobak
Assignee: Harsh J
Priority: Trivial
 Attachments: HDFS-2574.patch, HDFS-2574.patch


 Some examples: hadoop-hdfs/src/main/packages/templates/conf/hdfs-site.xml 
 contains an entry for dfs.name.dir rather than dfs.namenode.name.dir, and 
 hdfs-default.xml references dfs.name.dir twice in description tags rather 
 than using dfs.namenode.name.dir.
 List of deprecated properties is here: 
 http://hadoop.apache.org/common/docs/r0.23.0/hadoop-project-dist/hadoop-common/DeprecatedProperties.html

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2536) Remove unused imports

2011-11-21 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2536:
--

Attachment: 0.23-HDFS-2536.patch

Patch for 0.23. Thanks for the review and commit of the previous one, Eli!

Verified that {{mvn clean compile}} passes.

 Remove unused imports
 -

 Key: HDFS-2536
 URL: https://issues.apache.org/jira/browse/HDFS-2536
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.24.0
Reporter: Aaron T. Myers
Assignee: Harsh J
Priority: Trivial
  Labels: newbie
 Attachments: 0.23-HDFS-2536.patch, 
 HDFS-2536.FSImageTransactionalStorageInspector.patch, HDFS-2536.patch


 Looks like it has 11 unused imports by my count.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2580) NameNode#main(...) can make use of GenericOptionsParser.

2011-11-21 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2580:
--

Status: Patch Available  (was: Open)

 NameNode#main(...) can make use of GenericOptionsParser.
 

 Key: HDFS-2580
 URL: https://issues.apache.org/jira/browse/HDFS-2580
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.23.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
 Fix For: 0.24.0

 Attachments: HDFS-2580.patch


 DataNode supports passing generic opts when calling via {{hdfs datanode}}. 
 NameNode can support the same thing as well, but doesn't right now.
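
For reference, a hedged sketch of what the change could look like; the parser calls are the standard GenericOptionsParser API, but the overall method shape is an assumption rather than the committed patch:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.GenericOptionsParser;

public class NameNodeMainSketch {
  public static void main(String[] argv) throws Exception {
    // Strip generic options (-D, -conf, -fs, ...) before handling
    // NameNode-specific arguments such as -format.
    GenericOptionsParser parser =
        new GenericOptionsParser(new Configuration(), argv);
    Configuration conf = parser.getConfiguration();
    String[] remaining = parser.getRemainingArgs();
    // ...hand 'remaining' and 'conf' to the NameNode startup path here.
    System.out.println("Remaining args: " + remaining.length);
  }
}
{code}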

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2580) NameNode#main(...) can make use of GenericOptionsParser.

2011-11-21 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2580:
--

Attachment: HDFS-2580.patch

 NameNode#main(...) can make use of GenericOptionsParser.
 

 Key: HDFS-2580
 URL: https://issues.apache.org/jira/browse/HDFS-2580
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.23.0
Reporter: Harsh J
Priority: Minor
 Fix For: 0.24.0

 Attachments: HDFS-2580.patch


 DataNode supports passing generic opts when calling via {{hdfs datanode}}. 
 NameNode can support the same thing as well, but doesn't right now.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2572) Unnecessary double-check in DN#getHostName

2011-11-20 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2572:
--

Attachment: HDFS-2572.patch

Nit fixed. Committing to trunk.

 Unnecessary double-check in DN#getHostName
 --

 Key: HDFS-2572
 URL: https://issues.apache.org/jira/browse/HDFS-2572
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.24.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Fix For: 0.24.0

 Attachments: HDFS-2572.patch, HDFS-2572.patch


 We unnecessarily call config.get twice inside DN#getHostName(...). This patch 
 removes the duplicate lookup.
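
A sketch of the de-duplicated lookup, assuming the method falls back to DNS when no hostname is configured; the key names are era-appropriate assumptions, not a quote of the patch:

{code}
import java.net.UnknownHostException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.net.DNS;

public class GetHostNameSketch {
  // Cache the single config.get(...) result instead of calling it twice.
  static String getHostName(Configuration config) throws UnknownHostException {
    String name = config.get("dfs.datanode.hostname");
    if (name == null) {
      name = DNS.getDefaultHost(
          config.get("dfs.datanode.dns.interface", "default"),
          config.get("dfs.datanode.dns.nameserver", "default"));
    }
    return name;
  }
}
{code}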

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2572) Unnecessary double-check in DN#getHostName

2011-11-20 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2572:
--

Resolution: Fixed
Status: Resolved  (was: Patch Available)

 Unnecessary double-check in DN#getHostName
 --

 Key: HDFS-2572
 URL: https://issues.apache.org/jira/browse/HDFS-2572
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.24.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Fix For: 0.24.0

 Attachments: HDFS-2572.patch, HDFS-2572.patch


 We unnecessarily call config.get twice inside DN#getHostName(...). This patch 
 removes the duplicate lookup.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2567) When 0 DNs are available, show a proper error when trying to browse DFS via web UI

2011-11-19 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2567:
--

Summary: When 0 DNs are available, show a proper error when trying to 
browse DFS via web UI  (was: Can't browse HDFS on a fresh NN instance)

 When 0 DNs are available, show a proper error when trying to browse DFS via 
 web UI
 --

 Key: HDFS-2567
 URL: https://issues.apache.org/jira/browse/HDFS-2567
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.23.0
Reporter: Harsh J

 Trace:
 {code}
 HTTP ERROR 500
 Problem accessing /nn_browsedfscontent.jsp. Reason:
 n must be positive
 Caused by:
 java.lang.IllegalArgumentException: n must be positive
   at java.util.Random.nextInt(Random.java:250)
   at 
 org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:556)
   at 
 org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:524)
   at 
 org.apache.hadoop.hdfs.server.namenode.NamenodeJspHelper.getRandomDatanode(NamenodeJspHelper.java:372)
   at 
 org.apache.hadoop.hdfs.server.namenode.NamenodeJspHelper.redirectToRandomDataNode(NamenodeJspHelper.java:383)
   at 
 org.apache.hadoop.hdfs.server.namenode.nn_005fbrowsedfscontent_jsp._jspService(nn_005fbrowsedfscontent_jsp.java:70)
   at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:98)
   at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
   at 
 org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
   at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
   at 
 org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
   at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
   at 
 org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:940)
   at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
   at 
 org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
   at 
 org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
   at 
 org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
   at 
 org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
   at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
   at 
 org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
   at 
 org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
   at org.mortbay.jetty.Server.handle(Server.java:326)
   at 
 org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
   at 
 org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
   at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
   at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
   at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
   at 
 org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
   at 
 org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
 {code}
 Steps I did to run into this:
 1. Start a new NN, freshly formatted.
 2. No DNs yet.
 3. Visit the DFS browser link 
 {{http://localhost:50070/nn_browsedfscontent.jsp}}
 4. The above error shows itself.
 5. {{hdfs dfs -touchz afile}}
 6. Re-visit; it still shows the same issue.
 Perhaps it's because no DN has been added so far.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2567) When 0 DNs are available, show a proper error when trying to browse DFS via web UI

2011-11-19 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2567:
--

Fix Version/s: 0.24.0
   Status: Patch Available  (was: Open)

 When 0 DNs are available, show a proper error when trying to browse DFS via 
 web UI
 --

 Key: HDFS-2567
 URL: https://issues.apache.org/jira/browse/HDFS-2567
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.23.0
Reporter: Harsh J
 Fix For: 0.24.0

 Attachments: HDFS-2567.patch


 Trace:
 {code}
 HTTP ERROR 500
 Problem accessing /nn_browsedfscontent.jsp. Reason:
 n must be positive
 Caused by:
 java.lang.IllegalArgumentException: n must be positive
   at java.util.Random.nextInt(Random.java:250)
   at 
 org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:556)
   at 
 org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:524)
   at 
 org.apache.hadoop.hdfs.server.namenode.NamenodeJspHelper.getRandomDatanode(NamenodeJspHelper.java:372)
   at 
 org.apache.hadoop.hdfs.server.namenode.NamenodeJspHelper.redirectToRandomDataNode(NamenodeJspHelper.java:383)
   at 
 org.apache.hadoop.hdfs.server.namenode.nn_005fbrowsedfscontent_jsp._jspService(nn_005fbrowsedfscontent_jsp.java:70)
   at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:98)
   at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
   at 
 org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
   at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
   at 
 org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
   at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
   at 
 org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:940)
   at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
   at 
 org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
   at 
 org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
   at 
 org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
   at 
 org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
   at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
   at 
 org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
   at 
 org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
   at org.mortbay.jetty.Server.handle(Server.java:326)
   at 
 org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
   at 
 org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
   at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
   at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
   at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
   at 
 org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
   at 
 org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
 {code}
 Steps I did to run into this:
 1. Start a new NN, freshly formatted.
 2. No DNs yet.
 3. Visit the DFS browser link 
 {{http://localhost:50070/nn_browsedfscontent.jsp}}
 4. The above error shows itself.
 5. {{hdfs dfs -touchz afile}}
 6. Re-visit; it still shows the same issue.
 Perhaps it's because no DN has been added so far.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2567) When 0 DNs are available, show a proper error when trying to browse DFS via web UI

2011-11-19 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2567:
--

Attachment: HDFS-2567.patch

New message:

{code}
HTTP ERROR 500

Problem accessing /nn_browsedfscontent.jsp. Reason:

Can't browse the DFS since there are no live nodes available to redirect to.
Caused by:

java.io.IOException: Can't browse the DFS since there are no live nodes 
available to redirect to.
at 
org.apache.hadoop.hdfs.server.namenode.NamenodeJspHelper.redirectToRandomDataNode(NamenodeJspHelper.java:388)
at 
org.apache.hadoop.hdfs.server.namenode.nn_005fbrowsedfscontent_jsp._jspService(nn_005fbrowsedfscontent_jsp.java:70)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:98)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at 
org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
at 
org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:988)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
at 
org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at 
org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at 
org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
at 
org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at 
org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at 
org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at 
org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at 
org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
at 
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
{code}

Manually tested on a 0.24-snapshot instance.
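
The guard producing that message could look roughly like this; a sketch only, with the helper shape in NamenodeJspHelper assumed:

{code}
import java.io.IOException;
import java.util.List;
import java.util.Random;

public class RandomDatanodeSketch {
  // Fail clearly when no live datanodes exist, instead of letting
  // Random#nextInt(0) throw "n must be positive".
  static <T> T pickRandomLiveNode(List<T> liveNodes, Random r)
      throws IOException {
    if (liveNodes.isEmpty()) {
      throw new IOException("Can't browse the DFS since there are no live "
          + "nodes available to redirect to.");
    }
    return liveNodes.get(r.nextInt(liveNodes.size()));
  }
}
{code}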

 When 0 DNs are available, show a proper error when trying to browse DFS via 
 web UI
 --

 Key: HDFS-2567
 URL: https://issues.apache.org/jira/browse/HDFS-2567
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.23.0
Reporter: Harsh J
 Fix For: 0.24.0

 Attachments: HDFS-2567.patch


 Trace:
 {code}
 HTTP ERROR 500
 Problem accessing /nn_browsedfscontent.jsp. Reason:
 n must be positive
 Caused by:
 java.lang.IllegalArgumentException: n must be positive
   at java.util.Random.nextInt(Random.java:250)
   at 
 org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:556)
   at 
 org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:524)
   at 
 org.apache.hadoop.hdfs.server.namenode.NamenodeJspHelper.getRandomDatanode(NamenodeJspHelper.java:372)
   at 
 org.apache.hadoop.hdfs.server.namenode.NamenodeJspHelper.redirectToRandomDataNode(NamenodeJspHelper.java:383)
   at 
 org.apache.hadoop.hdfs.server.namenode.nn_005fbrowsedfscontent_jsp._jspService(nn_005fbrowsedfscontent_jsp.java:70)
   at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:98)
   at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
   at 
 org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
   at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
   at 
 org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
   at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
   at 
 org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:940)
   at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
   at 
 org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)

[jira] [Updated] (HDFS-2536) FSImageTransactionalStorageInspector has a bunch of unused imports

2011-11-19 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2536:
--

Attachment: HDFS-2536.patch

Patch that cleans the 'hadoop-hdfs' project of all unused imports.

Note: I've not reorganized imports; I've just removed unused ones and the blank 
lines in between.

 FSImageTransactionalStorageInspector has a bunch of unused imports
 --

 Key: HDFS-2536
 URL: https://issues.apache.org/jira/browse/HDFS-2536
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.24.0
Reporter: Aaron T. Myers
Priority: Trivial
  Labels: newbie
 Attachments: HDFS-2536.patch


 Looks like it has 11 unused imports by my count.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2536) FSImageTransactionalStorageInspector has a bunch of unused imports

2011-11-19 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2536:
--

Fix Version/s: 0.24.0
 Assignee: Harsh J
   Status: Patch Available  (was: Open)

 FSImageTransactionalStorageInspector has a bunch of unused imports
 --

 Key: HDFS-2536
 URL: https://issues.apache.org/jira/browse/HDFS-2536
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.24.0
Reporter: Aaron T. Myers
Assignee: Harsh J
Priority: Trivial
  Labels: newbie
 Fix For: 0.24.0

 Attachments: HDFS-2536.patch


 Looks like it has 11 unused imports by my count.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2536) FSImageTransactionalStorageInspector has a bunch of unused imports

2011-11-19 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2536:
--

Attachment: HDFS-2536.FSImageTransactionalStorageInspector.patch

Or, an alternative patch just for the mentioned file.

 FSImageTransactionalStorageInspector has a bunch of unused imports
 --

 Key: HDFS-2536
 URL: https://issues.apache.org/jira/browse/HDFS-2536
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.24.0
Reporter: Aaron T. Myers
Assignee: Harsh J
Priority: Trivial
  Labels: newbie
 Fix For: 0.24.0

 Attachments: HDFS-2536.FSImageTransactionalStorageInspector.patch, 
 HDFS-2536.patch


 Looks like it has 11 unused imports by my count.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2568) Use a set to manage child sockets in XceiverServer

2011-11-19 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2568:
--

Attachment: HDFS-2568.patch

 Use a set to manage child sockets in XceiverServer
 --

 Key: HDFS-2568
 URL: https://issues.apache.org/jira/browse/HDFS-2568
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.24.0
Reporter: Harsh J
Priority: Trivial
 Fix For: 0.24.0

 Attachments: HDFS-2568.patch


 Found while reading up for HDFS-2454: currently we maintain childSockets in 
 DataXceiverServer as a {{Map<Socket, Socket>}}. This can very well be a 
 {{Set<Socket>}} data structure -- since the goal is easy removals.
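
A sketch of the proposed swap; the synchronized wrapper is an assumption about the thread-safety needs, not a quote of the attached patch:

{code}
import java.net.Socket;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class ChildSocketsSketch {
  // A Set expresses the intent directly: membership plus easy removal,
  // without the redundant key==value pairing of a Map<Socket, Socket>.
  private final Set<Socket> childSockets =
      Collections.synchronizedSet(new HashSet<Socket>());

  void register(Socket s)   { childSockets.add(s); }
  void unregister(Socket s) { childSockets.remove(s); }
}
{code}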

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



