[jira] [Commented] (HBASE-4191) hbase load balancer needs locality awareness

2019-04-22 Thread Uma Maheswara Rao G (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-4191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16823307#comment-16823307
 ] 

Uma Maheswara Rao G commented on HBASE-4191:


I am travelling to India from 8th to 26th April. Please expect delayed 
responses during this period.


> hbase load balancer needs locality awareness
> 
>
> Key: HBASE-4191
> URL: https://issues.apache.org/jira/browse/HBASE-4191
> Project: HBase
>  Issue Type: New Feature
>  Components: Balancer
>Reporter: Ted Yu
>Assignee: Liyin Tang
>Priority: Major
>  Labels: balancer
>
> Previously, HBASE-4114 implemented the metrics for HFile HDFS block locality, 
> which provide HFile-level locality information.
> But in order to work with the load balancer and region assignment, we need 
> region-level locality information.
> Let's define the region locality information first; it is almost the same 
> as the HFile locality index.
> HRegion locality index (HRegion A, RegionServer B) = 
> (Total number of HDFS blocks that can be retrieved locally by 
> RegionServer B for HRegion A) / (Total number of HDFS blocks for 
> HRegion A)
> So the HRegion locality index tells us how much locality we can get if 
> the HMaster assigns the HRegion A to the RegionServer B.
> So there will be 2 steps involved in assigning regions based on locality.
> 1) During cluster start-up, the master will scan HDFS to calculate the 
> "HRegion locality index" for each pair of HRegion and Region Server. It is 
> pretty expensive to scan the dfs, so we only need to do this once, at 
> start-up.
> 2) During the cluster run time, each region server will update the "HRegion 
> locality index" as metrics periodically as HBASE-4114 did. The Region Server 
> can expose them to the Master through ZK, meta table, or just RPC messages. 
> Based on the "HRegion locality index", the assignment manager in the master 
> would have global knowledge of the region locality distribution and can 
> run a MIN COST MAXIMUM FLOW solver to reach a global optimum.
> Let's construct the graph first:
> [Graph]
> Imagine a bipartite graph where the left side is the set of regions 
> and the right side is the set of region servers.
> There is a source node which links itself to each node in the region set. 
> There is a sink node which is linked from each node in the region server set.
> [Capacity]
> The capacity between the source node and region nodes is 1.
> And the capacity between the region nodes and region server nodes is also 1.
> (The purpose is that each region can ONLY be assigned to one region server at 
> a time.)
> The capacity between the region server nodes and the sink node is the average 
> number of regions that should be assigned to each region server.
> (The purpose is to balance the load across region servers.)
> [Cost]
> The cost between each region and region server is the negative of the 
> locality index: the higher the locality of region A on region server B, the 
> lower the cost of assigning A to B.
> The cost function could be made more sophisticated as we take more metrics 
> into account.
> So after running the min-cost max flow solver, the master could assign the 
> regions based on the global locality optimization.
> Also, the master should share this global view with the secondary master in 
> case a master failover happens.
> In addition, HBASE-4491 (Locality Checker) is a tool, based on the same 
> metrics, that proactively scans the dfs to calculate global locality 
> information for the cluster. It will help us verify data locality 
> information at run time.
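The min-cost max-flow formulation quoted above can be sketched as a small, self-contained Python program. This is an illustrative toy solver, not HBase code; the node numbering, the `assign_regions` helper, and the single `regions_per_server` capacity are my own simplifications of the proposal.

```python
from collections import deque

class MinCostMaxFlow:
    """Tiny successive-shortest-path min-cost max-flow solver (SPFA based).

    Illustrative only; a real balancer would use a production solver.
    """
    def __init__(self, n):
        self.n = n
        # Each edge is [to, remaining_capacity, cost, index_of_reverse_edge].
        self.graph = [[] for _ in range(n)]

    def add_edge(self, u, v, cap, cost):
        self.graph[u].append([v, cap, cost, len(self.graph[v])])
        self.graph[v].append([u, 0, -cost, len(self.graph[u]) - 1])

    def min_cost_flow(self, s, t):
        total_flow, total_cost = 0, 0.0
        while True:
            # Bellman-Ford (SPFA) shortest path on residual costs; costs can
            # be negative (negated locality), so Dijkstra is not used here.
            dist = [float("inf")] * self.n
            prev = [None] * self.n          # (node, edge index) on best path
            in_queue = [False] * self.n
            dist[s] = 0.0
            queue = deque([s])
            while queue:
                u = queue.popleft()
                in_queue[u] = False
                for i, (v, cap, cost, _) in enumerate(self.graph[u]):
                    if cap > 0 and dist[u] + cost < dist[v] - 1e-12:
                        dist[v] = dist[u] + cost
                        prev[v] = (u, i)
                        if not in_queue[v]:
                            in_queue[v] = True
                            queue.append(v)
            if prev[t] is None:             # no augmenting path left
                return total_flow, total_cost
            # Walk back from sink to source, then augment along the path.
            path, v = [], t
            while v != s:
                u, i = prev[v]
                path.append((u, i))
                v = u
            bottleneck = min(self.graph[u][i][1] for u, i in path)
            for u, i in path:
                edge = self.graph[u][i]
                edge[1] -= bottleneck
                self.graph[edge[0]][edge[3]][1] += bottleneck
            total_flow += bottleneck
            total_cost += bottleneck * dist[t]

def assign_regions(locality, regions_per_server):
    """locality[r][s]: HRegion locality index of region r on server s (0..1)."""
    n_regions, n_servers = len(locality), len(locality[0])
    src, sink = 0, 1 + n_regions + n_servers
    mcmf = MinCostMaxFlow(sink + 1)
    for r in range(n_regions):
        mcmf.add_edge(src, 1 + r, 1, 0.0)   # cap 1: one server per region
        for s in range(n_servers):
            # Cost is the negative locality index: higher locality, lower cost.
            mcmf.add_edge(1 + r, 1 + n_regions + s, 1, -locality[r][s])
    for s in range(n_servers):
        # Server -> sink capacity bounds regions per server (load balance).
        mcmf.add_edge(1 + n_regions + s, sink, regions_per_server, 0.0)
    mcmf.min_cost_flow(src, sink)
    # A region->server edge whose capacity was consumed carries the assignment.
    assignment = {}
    for r in range(n_regions):
        for v, cap, _, _ in mcmf.graph[1 + r]:
            if 1 + n_regions <= v <= n_regions + n_servers and cap == 0:
                assignment[r] = v - 1 - n_regions
    return assignment
```

On a toy instance with two regions and two servers, the solver picks the assignment that maximizes total locality while respecting the one-region-per-server capacity.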



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-7878) recoverFileLease does not check return value of recoverLease

2013-02-26 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13588013#comment-13588013
 ] 

Uma Maheswara Rao G commented on HBASE-7878:


+1 for committing this. 
This test ensures recoverLease is exercised together with the append call. But 
practically I don't see any scenario where the recoverLease call fails while 
the append call passes. Since we don't want to remove the append call, though, 
I think this is the way to test it.
Another negative test could be to call recoverLease on a deleted, non-existent 
WAL file. There the recoverLease call will fail, and the subsequent append will 
also fail because the file does not exist; it should not hang for a long time 
in such a case.

I found one case: if there are no blocks allocated and the file is just 
created, then the recoverLease call will actually finalize the inode and return 
true internally. But that value is not propagated out, so recoverLease will 
return false to the caller. That should not be a problem, because the next 
recoverLease call will return true since the inode is already finalized. 
If possible maybe we can have one test for this, but I am not forcing it here. 

Thanks Ted, you can go ahead and commit if Lars is also ok with it.

 recoverFileLease does not check return value of recoverLease
 

 Key: HBASE-7878
 URL: https://issues.apache.org/jira/browse/HBASE-7878
 Project: HBase
  Issue Type: Bug
  Components: util
Affects Versions: 0.95.0, 0.94.6
Reporter: Eric Newton
Assignee: Ted Yu
Priority: Critical
 Fix For: 0.95.0, 0.98.0, 0.94.6

 Attachments: 7878-trunk-v2.txt, 7878-trunk-v3.txt, 7878-trunk-v4.txt, 
 7878-trunk-v5.txt


 I think this is a problem, so I'm opening a ticket so an HBase person takes a 
 look.
 Apache Accumulo has moved its write-ahead log to HDFS. I modeled the lease 
 recovery for Accumulo after HBase's lease recovery.  During testing, we 
 experienced data loss.  I found it is necessary to wait until recoverLease 
 returns true to know that the file has been truly closed.  In FSHDFSUtils, 
 the return result of recoverLease is not checked. In the unit tests created 
 to check lease recovery in HBASE-2645, the return result of recoverLease is 
 always checked.
 I think FSHDFSUtils should be modified to check the return result, and wait 
 until it returns true.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7878) recoverFileLease does not check return value of recoverLease

2013-02-22 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13585030#comment-13585030
 ] 

Uma Maheswara Rao G commented on HBASE-7878:


+1 for committing this patch. Later, once an HDFS API exists to tell whether a 
file is completely closed, we may be able to limit the retries with a config. 
But I am not sure why the append and close are required in the code [not part 
of this patch]. Looping one more time by invoking recoverLease itself will get 
a true return value once the file is closed. If the file is not closed, the 
append call will fail anyway.

 recoverFileLease does not check return value of recoverLease
 

 Key: HBASE-7878
 URL: https://issues.apache.org/jira/browse/HBASE-7878
 Project: HBase
  Issue Type: Bug
  Components: util
Reporter: Eric Newton
Assignee: Ted Yu
Priority: Critical
 Fix For: 0.96.0, 0.94.6

 Attachments: 7878-trunk-v2.txt, 7878-trunk-v3.txt





[jira] [Commented] (HBASE-7878) recoverFileLease does not check return value of recoverLease

2013-02-22 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13585031#comment-13585031
 ] 

Uma Maheswara Rao G commented on HBASE-7878:


I will raise an HDFS JIRA for adding such API support.

 recoverFileLease does not check return value of recoverLease
 

 Key: HBASE-7878
 URL: https://issues.apache.org/jira/browse/HBASE-7878
 Project: HBase
  Issue Type: Bug
  Components: util
Reporter: Eric Newton
Assignee: Ted Yu
Priority: Critical
 Fix For: 0.96.0, 0.94.6

 Attachments: 7878-trunk-v2.txt, 7878-trunk-v3.txt





[jira] [Commented] (HBASE-7878) recoverFileLease does not check return value of recoverLease

2013-02-22 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13585037#comment-13585037
 ] 

Uma Maheswara Rao G commented on HBASE-7878:


{quote}
It works if file is closed else we will be stuck forever because recovered flag 
will be forever false – least, that is how I read it. Is that right Uma 
Maheswara Rao G?
{quote}
Yes, you are right. Later, once the file is closed, that API will return true 
after many retries. 
@Ted, when you verified this patch, what behaviour did you observe?

 recoverFileLease does not check return value of recoverLease
 

 Key: HBASE-7878
 URL: https://issues.apache.org/jira/browse/HBASE-7878
 Project: HBase
  Issue Type: Bug
  Components: util
Reporter: Eric Newton
Assignee: Ted Yu
Priority: Critical
 Fix For: 0.96.0, 0.94.6

 Attachments: 7878-trunk-v2.txt, 7878-trunk-v3.txt





[jira] [Commented] (HBASE-7878) recoverFileLease does not check return value of recoverLease

2013-02-22 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13585047#comment-13585047
 ] 

Uma Maheswara Rao G commented on HBASE-7878:


Raised the HDFS JIRA: HDFS-4525

 recoverFileLease does not check return value of recoverLease
 

 Key: HBASE-7878
 URL: https://issues.apache.org/jira/browse/HBASE-7878
 Project: HBase
  Issue Type: Bug
  Components: util
Reporter: Eric Newton
Assignee: Ted Yu
Priority: Critical
 Fix For: 0.96.0, 0.94.6

 Attachments: 7878-trunk-v2.txt, 7878-trunk-v3.txt





[jira] [Commented] (HBASE-7878) recoverFileLease does not check return value of recoverLease

2013-02-21 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13584046#comment-13584046
 ] 

Uma Maheswara Rao G commented on HBASE-7878:


Currently the recoverLease API does not guarantee that lease recovery has 
completely finished for the file. It just initiates the recovery and returns 
false. The recoverLease API returns true if the file is already closed; 
otherwise it always returns false.

Lease recovery steps:
 1) The client requests lease recovery.
 2) The NN checks the inode state. If it is not under construction, it returns 
true, as the file is already closed. Otherwise it proceeds with recovery.
 3) If the last block state is UNDER_CONSTRUCTION/UNDER_RECOVERY, recovery is 
initiated. This is nothing but choosing a primary DN from the locations and 
adding that block to the node's recoverBlocks queue.
 4) The recoverLease API call returns with a false result.
 5) Now the NN sends the recover-block details to the primary DN as part of 
the heartbeat response.
 6) The primary DN recovers the block on all DNs and calls 
commitBlockSynchronization on the NN.
 7) The NN updates the block with the recovered genstamp in the blocks map, 
and the file inode is finalized.

Here steps 5, 6 and 7 happen asynchronously. We have to be careful that 
multiple recovery calls will go out, since we are calling in a loop until it 
returns true. Unfortunately we don't have such cases checked in branch-1, and 
I don't remember any such exception either. If a new recovery request comes 
before the NN hands the block to the DN, that block will not be added for 
recovery again anyway.
I am not sure about the data loss; what is the exact scenario? But the current 
problem with lease recovery is that there is no way to ensure the recovery is 
also complete at the DNs. If we simply call recoverLease and proceed with the 
operations, assuming the file has been recovered, we can hit the problem that 
the file is still in the recovery-in-progress stage. The client may then see 
the block with an older genstamp, and when it tries to read it may get the 
wrong length, since the blocks were not recovered yet. This is the issue filed 
as HDFS-2296.

Maybe one option I am thinking of: how about HDFS exposing an API like 
fs.isFileClosed(src)? If recovery completed successfully, the file should have 
been closed. So HBase can loop on this API for some period.
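The retry pattern discussed in this thread (initiate recovery, then poll until the file is really closed) can be simulated in a few lines of Python. `StubDFS` is a stand-in for a DistributedFileSystem in which recovery completes asynchronously over a few heartbeats; `is_file_closed` models the API proposed here (what became HDFS-4525). All names are illustrative, not real HDFS client calls.

```python
import time

class StubDFS:
    """Stand-in for a DFS client; lease recovery completes asynchronously."""
    def __init__(self, heartbeats_until_closed):
        self._remaining = heartbeats_until_closed
        self._closed = False

    def recover_lease(self, path):
        # Returns True only if the file is already closed; otherwise it just
        # *initiates* recovery (steps 5-7 happen later, asynchronously) and
        # returns False -- matching the semantics described above.
        if self._closed:
            return True
        self._tick()
        return False

    def is_file_closed(self, path):
        # The proposed API: True once the inode has been finalized.
        self._tick()
        return self._closed

    def _tick(self):
        # Simulate one heartbeat's worth of asynchronous recovery progress.
        if self._remaining > 0:
            self._remaining -= 1
        if self._remaining == 0:
            self._closed = True

def wait_until_closed(dfs, path, attempts=10, pause_s=0.0):
    """Initiate lease recovery, then poll until the file is truly closed."""
    for _ in range(attempts):
        if dfs.recover_lease(path) or dfs.is_file_closed(path):
            return True
        time.sleep(pause_s)
    return False
```

The point of the loop is exactly the one made in this thread: a single `recover_lease` call returning false tells you nothing about completion, so the caller must keep polling (with a bounded number of attempts) before trusting the file's length.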




 recoverFileLease does not check return value of recoverLease
 

 Key: HBASE-7878
 URL: https://issues.apache.org/jira/browse/HBASE-7878
 Project: HBase
  Issue Type: Bug
  Components: util
Reporter: Eric Newton
Assignee: Ted Yu
Priority: Critical
 Fix For: 0.96.0, 0.94.6

 Attachments: 7878-trunk-v1.txt, 7878-trunk-v2.txt, 7878-trunk-v3.txt





[jira] [Commented] (HBASE-7715) FSUtils#waitOnSafeMode can incorrectly loop on standby NN

2013-01-29 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13566161#comment-13566161
 ] 

Uma Maheswara Rao G commented on HBASE-7715:


In DistributedFileSystem there is another API that can be used as a util API; 
it defaults that flag to true.
{code}
  public boolean isInSafeMode() throws IOException {
    return setSafeMode(SafeModeAction.SAFEMODE_GET, true);
  }
{code}

Yes, we found this issue some time back. Here is the JIRA HDFS-3507.
This was committed in 2.0.3.
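The difference between the two calls can be sketched with a small Python simulation: `is_in_safe_mode` stands in for `DistributedFileSystem#isInSafeMode` (which, with the `true` flag, resolves against the active NN via the failover proxy), while `raw_safe_mode_get` models the older path whose answer depends on whichever NN the client happens to reach. The class and method names are illustrative, not Hadoop APIs.

```python
class StubNameNodePair:
    """Two namenodes; only the active one's safe-mode state should matter."""
    def __init__(self, active_safe_mode, standby_safe_mode):
        self.active_safe_mode = active_safe_mode
        self.standby_safe_mode = standby_safe_mode

    def is_in_safe_mode(self):
        # Models setSafeMode(SAFEMODE_GET, true): the client resolves the
        # call against the *active* NN through the failover proxy.
        return self.active_safe_mode

    def raw_safe_mode_get(self, talk_to_standby):
        # Models the old setSafeMode(SAFEMODE_GET) path that skips failover:
        # the answer depends on which NN the client happened to contact.
        return self.standby_safe_mode if talk_to_standby else self.active_safe_mode

def wait_on_safe_mode(dfs, attempts=10):
    """Loop until the (active) namenode reports it has left safe mode."""
    for _ in range(attempts):
        if not dfs.is_in_safe_mode():
            return True
    return False
```

With an active NN out of safe mode and a standby still in it, the failover-aware call returns immediately, while the raw call can keep reporting safe mode forever if it happens to talk to the standby first, which is exactly the HMaster hang described in this issue.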

 FSUtils#waitOnSafeMode can incorrectly loop on standby NN
 -

 Key: HBASE-7715
 URL: https://issues.apache.org/jira/browse/HBASE-7715
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.4
Reporter: Andrew Wang
Assignee: Ted Yu
 Fix For: 0.96.0

 Attachments: 7715-trunk-v2.txt


 We encountered an issue where HMaster failed to start with an active NN not 
 in safe mode and a standby NN in safemode. The relevant lines in 
 {{FSUtils.java}} show the issue:
 {noformat}
 while 
 (dfs.setSafeMode(org.apache.hadoop.hdfs.protocol.FSConstants.SafeModeAction.SAFEMODE_GET))
  {
 {noformat}
 This call skips the normal client failover from the standby to active NN, so 
 it will loop polling the standby NN if it unfortunately talks to the standby 
 first.



[jira] [Commented] (HBASE-5876) TestImportExport has been failing against hadoop 0.23 profile

2012-04-27 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13263875#comment-13263875
 ] 

Uma Maheswara Rao G commented on HBASE-5876:


Thanks a lot, Jon, for signing up for this. I had actually thought of taking a 
look this weekend, but unfortunately I may not be able to, as I am out of 
station. Thanks for taking this :-)

 TestImportExport has been failing against hadoop 0.23 profile
 -

 Key: HBASE-5876
 URL: https://issues.apache.org/jira/browse/HBASE-5876
 Project: HBase
  Issue Type: Bug
Reporter: Zhihong Yu
Assignee: Jonathan Hsieh

 TestImportExport has been failing against hadoop 0.23 profile

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5745) SequenceFileLogReader#getPos may get wrong length on DFS restart

2012-04-26 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13263130#comment-13263130
 ] 

Uma Maheswara Rao G commented on HBASE-5745:


I have just committed the HDFS issue in trunk and branch-2.
For supporting older released versions, we may have to implement similar logic 
in the HBase SFLR to ensure the correct length. Otherwise this is a data loss 
case from the HBase perspective.

 SequenceFileLogReader#getPos may get wrong length on DFS restart
 

 Key: HBASE-5745
 URL: https://issues.apache.org/jira/browse/HBASE-5745
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.96.0
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
Priority: Critical

 This is actually a kind of integration bug from the HBase perspective.
 Currently HDFS will count the partial block length as 0 if no locations are 
 found for the partial block. This can happen on DFS restart, before the DNs 
 have completely reported to the NN.
 The scenario is explained in HDFS-3222. This is actually a bug in HDFS; we 
 may solve it in the latest versions.
 So whatever version HBase is using may have this bug. The HMaster may not be 
 able to replay the complete edits if there is an HMaster switch at the same 
 time.

--




[jira] [Created] (HBASE-5878) Use getVisibleLength public api from HdfsDataInputStream from Hadoop-2.

2012-04-25 Thread Uma Maheswara Rao G (JIRA)
Uma Maheswara Rao G created HBASE-5878:
--

 Summary: Use getVisibleLength public api from HdfsDataInputStream 
from Hadoop-2.
 Key: HBASE-5878
 URL: https://issues.apache.org/jira/browse/HBASE-5878
 Project: HBase
  Issue Type: Bug
  Components: wal
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G


SequenceFileLogReader: 

Currently HBase uses the getFileLength api from the DFSInputStream class via 
reflection. DFSInputStream is not exposed as public, so this may change in the 
future. HDFS now exposes HdfsDataInputStream as a public API.
We can make use of it in the else condition, when we are not able to find the 
getFileLength api on DFSInputStream, so that we will not have any sudden 
surprise like the one we are facing today.

Also, currently it just logs one warn message and proceeds if any exception is 
thrown while getting the length. I think we should re-throw the exception, 
because there is no point in continuing with data loss.


{code}
long adjust = 0;

try {
  Field fIn = FilterInputStream.class.getDeclaredField("in");
  fIn.setAccessible(true);
  Object realIn = fIn.get(this.in);
  // In hadoop 0.22, DFSInputStream is a standalone class.  Before this,
  // it was an inner class of DFSClient.
  if (realIn.getClass().getName().endsWith("DFSInputStream")) {
    Method getFileLength = realIn.getClass().
        getDeclaredMethod("getFileLength", new Class<?>[] {});
    getFileLength.setAccessible(true);
    long realLength = ((Long) getFileLength.
        invoke(realIn, new Object[] {})).longValue();
    assert (realLength >= this.length);
    adjust = realLength - this.length;
  } else {
    LOG.info("Input stream class: " + realIn.getClass().getName() +
        ", not adjusting length");
  }
} catch (Exception e) {
  SequenceFileLogReader.LOG.warn(
      "Error while trying to get accurate file length.  " +
      "Truncation / data loss may occur if RegionServers die.", e);
}

return adjust + super.getPos();
{code}
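The proposed structure (prefer the public accessor, fall back to the internal one, and fail instead of merely warning) can be sketched in Python, with `getattr` standing in for Java reflection. All class and method names below are illustrative stand-ins, not actual HDFS APIs.

```python
def visible_length(stream):
    """Return the readable length of a WAL stream.

    Try the public accessor first; fall back to the internal one; if neither
    exists, raise instead of only logging, since continuing risks data loss.
    """
    for accessor in ("get_visible_length", "get_file_length"):
        method = getattr(stream, accessor, None)   # reflection-style lookup
        if callable(method):
            return method()
    raise IOError("cannot determine accurate file length for %r" % (stream,))

class NewStyleStream:
    """Models a stream exposing the public HdfsDataInputStream-like API."""
    def get_visible_length(self):
        return 128

class OldStyleStream:
    """Models a stream exposing only the internal DFSInputStream-like API."""
    def get_file_length(self):
        return 64
```

The key design point from the comment is the last line of `visible_length`: when no accessor is found, the caller gets an exception rather than a silently wrong position.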






[jira] [Assigned] (HBASE-5876) TestImportExport has been failing against hadoop 0.23 profile

2012-04-25 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G reassigned HBASE-5876:
--

Assignee: Uma Maheswara Rao G

 TestImportExport has been failing against hadoop 0.23 profile
 -

 Key: HBASE-5876
 URL: https://issues.apache.org/jira/browse/HBASE-5876
 Project: HBase
  Issue Type: Bug
Reporter: Zhihong Yu
Assignee: Uma Maheswara Rao G

 TestImportExport has been failing against hadoop 0.23 profile





[jira] [Commented] (HBASE-5876) TestImportExport has been failing against hadoop 0.23 profile

2012-04-25 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13262006#comment-13262006
 ] 

Uma Maheswara Rao G commented on HBASE-5876:


{quote}
2012-04-26 01:23:47,790 ERROR [main] common.Util(58): Syntax error in URI 
E:\Repoitories\Hbase\target\test-data\79fa72c8-f019-4c29-be3d-cc67230f70cd\dfscluster_177a4b0f-5ebc-4dfc-9d89-867fefec2c6a\dfs\name2.
 Please check hdfs configuration.
java.net.URISyntaxException: Illegal character in opaque part at index 2: 
E:\Repoitories\Hbase\target\test-data\79fa72c8-f019-4c29-be3d-cc67230f70cd\dfscluster_177a4b0f-5ebc-4dfc-9d89-867fefec2c6a\dfs\name2
at java.net.URI$Parser.fail(Unknown Source)
at java.net.URI$Parser.checkChars(Unknown Source)
at java.net.URI$Parser.parse(Unknown Source)
at java.net.URI.&lt;init&gt;(Unknown Source)
at org.apache.hadoop.hdfs.server.common.Util.stringAsURI(Util.java:56)
at 
org.apache.hadoop.hdfs.server.common.Util.stringCollectionAsURIs(Util.java:106)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getStorageDirs(FSNamesystem.java:761)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNamespaceEditsDirs(FSNamesystem.java:806)
{quote}
URI creation is failing with the passed directory here. Let me take a look.

 TestImportExport has been failing against hadoop 0.23 profile
 -

 Key: HBASE-5876
 URL: https://issues.apache.org/jira/browse/HBASE-5876
 Project: HBase
  Issue Type: Bug
Reporter: Zhihong Yu
Assignee: Uma Maheswara Rao G

 TestImportExport has been failing against hadoop 0.23 profile





[jira] [Commented] (HBASE-5876) TestImportExport has been failing against hadoop 0.23 profile

2012-04-25 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13262020#comment-13262020
 ] 

Uma Maheswara Rao G commented on HBASE-5876:


@Ted, is it the same issue you are facing?
This failure is on my Windows box. I think the problem is due to the 
backslashes in Windows paths; simple URI creation also fails for the same 
reason. Once I changed them to forward slashes '/', it started working.
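The parsing failure is easy to reproduce: a raw Windows path is not a valid URI, because the drive letter reads as a URI scheme and the backslashes are illegal in the remainder. Python's `urlparse` does not throw like `java.net.URI`, but it mis-parses the path in the same way, and the forward-slash form comes through intact (the path below is made up):

```python
from urllib.parse import urlparse

# An illustrative Windows-style path (raw string, so backslashes survive).
win_path = r"E:\target\test-data\dfs\name2"

# The drive letter is taken for a URI scheme; the backslash-ridden remainder
# is what java.net.URI rejects as an "illegal character in opaque part".
parsed = urlparse(win_path)
assert parsed.scheme == "e"

# With forward slashes the same location parses as a normal hierarchical path.
fixed = urlparse(win_path.replace("\\", "/"))
assert fixed.path.endswith("/name2")
```

This matches the observation in the comment: converting the backslashes to forward slashes is enough to make URI construction succeed.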


 TestImportExport has been failing against hadoop 0.23 profile
 -

 Key: HBASE-5876
 URL: https://issues.apache.org/jira/browse/HBASE-5876
 Project: HBase
  Issue Type: Bug
Reporter: Zhihong Yu
Assignee: Uma Maheswara Rao G

 TestImportExport has been failing against hadoop 0.23 profile





[jira] [Assigned] (HBASE-5855) [findbugs] address remaining findbugs warnings

2012-04-23 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G reassigned HBASE-5855:
--

Assignee: Uma Maheswara Rao G

 [findbugs] address remaining findbugs warnings 
 ---

 Key: HBASE-5855
 URL: https://issues.apache.org/jira/browse/HBASE-5855
 Project: HBase
  Issue Type: Sub-task
Reporter: Jonathan Hsieh
Assignee: Uma Maheswara Rao G

 As we've been cleaning up the code related to findbugs warnings, new patches 
 are coming in that introduce new warnings.  This will be the last sub-issue 
 to clean up any recently introduced warnings.





[jira] [Updated] (HBASE-5830) Cleanup SequenceFileLogWriter to use syncFs api from SequenceFile#Writer directly in trunk.

2012-04-23 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HBASE-5830:
---

Attachment: HBASE-5830.patch

 Cleanup SequenceFileLogWriter to use syncFs api from SequenceFile#Writer 
 directly in trunk.
 ---

 Key: HBASE-5830
 URL: https://issues.apache.org/jira/browse/HBASE-5830
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.96.0
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
 Attachments: HBASE-5830.patch








[jira] [Commented] (HBASE-5830) Cleanup SequenceFileLogWriter to use syncFs api from SequenceFile#Writer directly in trunk.

2012-04-23 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13259767#comment-13259767
 ] 

Uma Maheswara Rao G commented on HBASE-5830:


Attached the patch. SequenceFileLogWriter will use the syncFs api from the 
SequenceFile writer directly.

 Cleanup SequenceFileLogWriter to use syncFs api from SequenceFile#Writer 
 directly in trunk.
 ---

 Key: HBASE-5830
 URL: https://issues.apache.org/jira/browse/HBASE-5830
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.96.0
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
 Attachments: HBASE-5830.patch








[jira] [Updated] (HBASE-5830) Cleanup SequenceFileLogWriter to use syncFs api from SequenceFile#Writer directly in trunk.

2012-04-23 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HBASE-5830:
---

Status: Patch Available  (was: Open)

 Cleanup SequenceFileLogWriter to use syncFs api from SequenceFile#Writer 
 directly in trunk.
 ---

 Key: HBASE-5830
 URL: https://issues.apache.org/jira/browse/HBASE-5830
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.96.0
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
 Attachments: HBASE-5830.patch








[jira] [Commented] (HBASE-5861) Hadoop 23 compile broken due to tests introduced in HBASE-5064

2012-04-23 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13259785#comment-13259785
 ] 

Uma Maheswara Rao G commented on HBASE-5861:


In branch-1, JobContext is a class.

{code}
public class JobContext {
{code}

Now in hadoop trunk (hadoop-2), that has been changed to an interface.

{code}
@InterfaceAudience.Public
@InterfaceStability.Evolving
public interface JobContext extends MRJobConfig {
{code}
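The compile break quoted below stems from exactly this change: `new JobContext(...)` compiles only while `JobContext` is a concrete class. A minimal sketch of the check that decides whether a type can be instantiated directly, using `java.util.List`/`ArrayList` as stand-ins (pulling real Hadoop classes in here would be overkill):

```java
// Sketch: "new X(...)" is only legal when X is a concrete class. Once
// JobContext became an interface in hadoop-2, call sites instantiating
// it stopped compiling. java.util.List / ArrayList are stand-ins here.
public class InterfaceCheck {
    // True if the type could be instantiated directly with "new".
    public static boolean directlyInstantiable(Class<?> type) {
        return !type.isInterface()
            && !java.lang.reflect.Modifier.isAbstract(type.getModifiers());
    }

    public static void main(String[] args) {
        // Plays the role of hadoop-2's JobContext interface:
        System.out.println(directlyInstantiable(java.util.List.class));      // false
        // Plays the role of a concrete implementation class:
        System.out.println(directlyInstantiable(java.util.ArrayList.class)); // true
    }
}
```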

 Hadoop 23 compile broken due to tests introduced in HBASE-5064 
 ---

 Key: HBASE-5861
 URL: https://issues.apache.org/jira/browse/HBASE-5861
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.94.0, 0.96.0
Reporter: Jonathan Hsieh
Priority: Blocker
 Fix For: 0.94.0, 0.96.0


 When attempting to compile HBase 0.94rc1 against hadoop 23, I got this set of 
 compilation error messages:
 {code}
 jon@swoop:~/proj/hbase-0.94$ mvn clean test -Dhadoop.profile=23 -DskipTests
 ...
 [INFO] 
 
 [INFO] BUILD FAILURE
 [INFO] 
 
 [INFO] Total time: 18.926s
 [INFO] Finished at: Mon Apr 23 10:38:47 PDT 2012
 [INFO] Final Memory: 55M/555M
 [INFO] 
 
 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-compiler-plugin:2.0.2:testCompile 
 (default-testCompile) on project hbase: Compilation failure: Compilation 
 failure:
 [ERROR] 
 /home/jon/proj/hbase-0.94/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHLogRecordReader.java:[147,46]
  org.apache.hadoop.mapreduce.JobContext is abstract; cannot be instantiated
 [ERROR] 
 [ERROR] 
 /home/jon/proj/hbase-0.94/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHLogRecordReader.java:[153,29]
  org.apache.hadoop.mapreduce.JobContext is abstract; cannot be instantiated
 [ERROR] 
 [ERROR] 
 /home/jon/proj/hbase-0.94/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHLogRecordReader.java:[194,46]
  org.apache.hadoop.mapreduce.JobContext is abstract; cannot be instantiated
 [ERROR] 
 [ERROR] 
 /home/jon/proj/hbase-0.94/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHLogRecordReader.java:[206,29]
  org.apache.hadoop.mapreduce.JobContext is abstract; cannot be instantiated
 [ERROR] 
 [ERROR] 
 /home/jon/proj/hbase-0.94/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHLogRecordReader.java:[213,29]
  org.apache.hadoop.mapreduce.JobContext is abstract; cannot be instantiated
 [ERROR] 
 [ERROR] 
 /home/jon/proj/hbase-0.94/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHLogRecordReader.java:[226,29]
  org.apache.hadoop.mapreduce.TaskAttemptContext is abstract; cannot be 
 instantiated
 [ERROR] - [Help 1]
 {code}
 Upon further investigation, this issue is due to code introduced in HBASE-5064 
 and is also present in trunk.





[jira] [Commented] (HBASE-5830) Cleanup SequenceFileLogWriter to use syncFs api from SequenceFile#Writer directly in trunk.

2012-04-23 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13259800#comment-13259800
 ] 

Uma Maheswara Rao G commented on HBASE-5830:


Yes, both have the public API:

Hadoop-2 or trunk code:

SequenceFile#Writer:
{code}
/** flush all currently written data to the file system */
public void syncFs() throws IOException {
  if (out != null) {
    out.hflush();  // flush contents to file system
  }
}
{code}

Branch-1:

  SequenceFile#Writer:
{code}
/** flush all currently written data to the file system */
public void syncFs() throws IOException {
  if (out != null) {
    out.sync();   // flush contents to file system
  }
}
{code}
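Since the method is public in both branches, the log writer can call it directly rather than looking it up reflectively. A rough sketch of the difference, with a `FakeWriter` stand-in for `SequenceFile.Writer` (the real SequenceFileLogWriter code differs):

```java
import java.lang.reflect.Method;

public class SyncFsSketch {
    // Stand-in for SequenceFile.Writer; the real class lives in Hadoop.
    public static class FakeWriter {
        public boolean synced = false;
        public void syncFs() { synced = true; }  // stand-in for the real flush
    }

    // Old style: reflective lookup, needed only when the method's presence
    // could not be assumed at compile time.
    public static void syncViaReflection(FakeWriter w) throws Exception {
        Method m = w.getClass().getMethod("syncFs");
        m.invoke(w);
    }

    public static void main(String[] args) throws Exception {
        FakeWriter reflective = new FakeWriter();
        syncViaReflection(reflective);   // works, but unchecked at compile time

        FakeWriter direct = new FakeWriter();
        direct.syncFs();                 // the cleanup: a plain, compile-checked call

        System.out.println(reflective.synced && direct.synced);  // true
    }
}
```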
   

 Cleanup SequenceFileLogWriter to use syncFs api from SequenceFile#Writer 
 directly in trunk.
 ---

 Key: HBASE-5830
 URL: https://issues.apache.org/jira/browse/HBASE-5830
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.96.0
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
 Attachments: HBASE-5830.patch








[jira] [Commented] (HBASE-5857) RIT map in RS not getting cleared while region opening

2012-04-23 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13259813#comment-13259813
 ] 

Uma Maheswara Rao G commented on HBASE-5857:


Chinna, if you still retain this test code in your next patch, please take care 
of these comments.
1) {code}
 try {
+  regionServer.openRegion(REGIONINFO);
+} catch (RegionAlreadyInTransitionException e) {
+  fail("It should not throw this exception " + e);
+}
{code}

You need not catch the exception explicitly; let the test fail with this 
exception. The Javadoc of your test should state what the test expects.

2) {code}
   try {
+  regionServer.openRegion(REGIONINFO);
+  fail("It should throw IOException");
+} catch (Exception e) {
+}
{code}
It would be great if we caught IOException instead of Exception.

3) Also, there is an unnecessary empty line in the patch.

 RIT map in RS not getting cleared while region opening
 --

 Key: HBASE-5857
 URL: https://issues.apache.org/jira/browse/HBASE-5857
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.6
Reporter: Chinna Rao Lalam
Assignee: Chinna Rao Lalam
 Fix For: 0.92.2, 0.96.0, 0.94.1

 Attachments: HBASE-5857_0.92.patch, HBASE-5857_94.patch, 
 HBASE-5857_trunk.patch


 While opening the region in the RS, after adding the region to 
 regionsInTransitionInRS, if tableDescriptors.get() throws an exception, the 
 region won't be cleared from regionsInTransitionInRS. So the next time it 
 tries to open the region on the same RS, it will throw 
 RegionAlreadyInTransitionException.
 If we swap the two statements below, this issue won't occur.
 {code}
 this.regionsInTransitionInRS.putIfAbsent(region.getEncodedNameAsBytes(),true);
 HTableDescriptor htd = this.tableDescriptors.get(region.getTableName());
 {code}
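The effect of the ordering can be sketched with plain stand-ins (the names below mirror the report but are not the real HRegionServer code): marking the region in transition before the fallible call leaks the map entry, whereas making the fallible call first leaves the map clean.

```java
import java.util.concurrent.ConcurrentHashMap;

public class RitOrderSketch {
    public static final ConcurrentHashMap<String, Boolean> regionsInTransition =
        new ConcurrentHashMap<>();

    // Stand-in for tableDescriptors.get() failing with an exception.
    static void fetchDescriptorOrThrow() {
        throw new RuntimeException("simulated descriptor failure");
    }

    // Buggy order: mark in-transition first, then make the call that throws.
    public static void openRegionBuggy(String region) {
        regionsInTransition.putIfAbsent(region, true);
        fetchDescriptorOrThrow();            // throws -> entry is never removed
    }

    // Swapped order: the fallible call happens before any state change.
    public static void openRegionFixed(String region) {
        fetchDescriptorOrThrow();            // throws before the map is touched
        regionsInTransition.putIfAbsent(region, true);
    }

    public static void main(String[] args) {
        try { openRegionBuggy("regionA"); } catch (RuntimeException ignored) { }
        System.out.println(regionsInTransition.containsKey("regionA")); // true: leaked

        try { openRegionFixed("regionB"); } catch (RuntimeException ignored) { }
        System.out.println(regionsInTransition.containsKey("regionB")); // false
    }
}
```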





[jira] [Commented] (HBASE-5830) Cleanup SequenceFileLogWriter to use syncFs api from SequenceFile#Writer directly in trunk.

2012-04-23 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13259855#comment-13259855
 ] 

Uma Maheswara Rao G commented on HBASE-5830:


{quote} -1 tests included.  The patch doesn't appear to include any new or 
modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.
{quote}

No tests are included, as this patch just switches to calling the 
SequenceFile#Writer API directly instead of using reflection.

Thanks a lot, Stack for the review!

 Cleanup SequenceFileLogWriter to use syncFs api from SequenceFile#Writer 
 directly in trunk.
 ---

 Key: HBASE-5830
 URL: https://issues.apache.org/jira/browse/HBASE-5830
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.96.0
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
 Attachments: HBASE-5830.patch








[jira] [Commented] (HBASE-5652) [findbugs] Fix lock release on all paths

2012-04-22 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13259177#comment-13259177
 ] 

Uma Maheswara Rao G commented on HBASE-5652:


@Gregory, also please update test-patch.properties to reflect the current 
findbugs OK count with your patch.

 [findbugs] Fix lock release on all paths 
 -

 Key: HBASE-5652
 URL: https://issues.apache.org/jira/browse/HBASE-5652
 Project: HBase
  Issue Type: Sub-task
  Components: scripts
Reporter: Jonathan Hsieh
Assignee: Gregory Chanan
 Attachments: HBASE-5652-v0.patch


 See 
 https://builds.apache.org/job/PreCommit-HBASE-Build/1313//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html#Warnings_MT_CORRECTNESS
 Category UL





[jira] [Commented] (HBASE-5652) [findbugs] Fix lock release on all paths

2012-04-21 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258791#comment-13258791
 ] 

Uma Maheswara Rao G commented on HBASE-5652:


Agreed with Ram: the variable assignment in the finally block before unlocking 
would not cause any exception here. But the standard pattern I have seen for 
read/write locks is that, after acquiring the lock, every statement should be 
inside the try, and the lock should be released in the finally block. That is 
probably what findbugs is worried about. In this case, though, a variable 
assignment cannot throw, so I feel we can just skip it. Let's see Jon's opinion 
on this.

Here the try/finally serves almost no purpose:
{code}
try {
+this.logRollRunning = false;
+  } finally {
+this.cacheFlushLock.unlock();
+  }
{code}
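For reference, the canonical pattern findbugs checks for is: nothing between lock() and try, and the unlock only in the finally block. A small self-contained sketch (the field names are borrowed from the snippet above, not the actual HLog code):

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockPatternSketch {
    public static final ReentrantLock cacheFlushLock = new ReentrantLock();
    public static boolean logRollRunning = true;

    public static void completeCacheFlush() {
        cacheFlushLock.lock();
        try {
            // Every statement after lock() goes inside the try, even ones
            // that (like this assignment) cannot actually throw.
            logRollRunning = false;
        } finally {
            cacheFlushLock.unlock();   // released on every exit path
        }
    }

    public static void main(String[] args) {
        completeCacheFlush();
        System.out.println(cacheFlushLock.isLocked());  // false: lock released
        System.out.println(logRollRunning);             // false
    }
}
```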

The other problem I see with adding entries to the exclude list is that we 
cannot pinpoint the exact location of the code; we can only give the 
package/class/method/fields and the bug pattern, type, etc. Unfortunately, if 
the same bug is later introduced in the same area of code and is valid to fix, 
it may get skipped because an almost-matching exclude entry is already present 
in the file. So we should also keep the exclude filter entries to a minimum.

 [findbugs] Fix lock release on all paths 
 -

 Key: HBASE-5652
 URL: https://issues.apache.org/jira/browse/HBASE-5652
 Project: HBase
  Issue Type: Sub-task
  Components: scripts
Reporter: Jonathan Hsieh
Assignee: Gregory Chanan
 Attachments: HBASE-5652-v0.patch


 See 
 https://builds.apache.org/job/PreCommit-HBASE-Build/1313//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html#Warnings_MT_CORRECTNESS
 Category UL
