Re: read HBase regions in MPP way?

2015-02-04 Thread Nick Dimiduk
Sounds like you're wanting to do a lot of what the TableInputFormat
facilitates for mapreduce programs. Probably you can use code from that
package to turn a Scan into input splits, which contain region name
and RegionServer location, and consume those from your custom coordinator.

-n
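
A minimal sketch (not part of this mail) of the TableInputFormat approach: build
splits for a table and read off each split's row range and RegionServer location.
The table name is hypothetical and the job setup is an illustrative assumption;
a real coordinator would hand each split to a worker co-located with that server.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.hadoop.hbase.mapreduce.TableSplit;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.Job;

public class SplitLister {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.set(TableInputFormat.INPUT_TABLE, "my_table");

    // TableInputFormat produces one split per region of the configured table.
    TableInputFormat tif = new TableInputFormat();
    tif.setConf(conf);
    for (InputSplit split : tif.getSplits(Job.getInstance(conf))) {
      TableSplit ts = (TableSplit) split;
      // Each split carries the region's row range and its RegionServer location.
      System.out.println(ts.getRegionLocation() + " ["
          + Bytes.toStringBinary(ts.getStartRow()) + ", "
          + Bytes.toStringBinary(ts.getEndRow()) + ")");
    }
  }
}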

On Tuesday, February 3, 2015, Demai Ni nid...@gmail.com wrote:

 Hi guys,

 I am looking for a way to read an HBase table through MPP (Postgres-XC), and I
 am hoping to get some suggestions to either validate or invalidate the
 approach.

 Kind of like Apache Drill, but through PostgreSQL. Long story about why
 Postgres, and how C/C++ would give me headaches for months to come. :-) I
 will leave it at that for now.

 The design is to have distributed Postgres-XC installed on the same HBase
 cluster, so Postgres' datanodes are on the same physical nodes as HBase's
 RegionServers, and to connect to HBase from PostgreSQL through the existing
 HBase client code.

 Step 1: At the Postgres coordinator node (like the HBase Master), use
 HTable.getRegionLocations to get all regions of a particular table as a
 NavigableMap<HRegionInfo, ServerName>.
 Step 2: Iterate through the above NavigableMap to map each HBase ServerName to a
 PG-XC datanode. The goal is to let each Postgres datanode handle the
 regions on its own physical machine.
 Step 3: The Postgres coordinator node sends the execution plan to the Postgres
 datanodes through an existing framework called the foreign data wrapper.
 Step 4: Each Postgres datanode iterates through its assigned regions and opens an
 HBase client Scan with .setStartRow and .setStopRow so it will only read
 the assigned region. I was hoping to use HRegionInfo.regionId directly, but
 can't find such an API in the client Scan.
 Step 5: The Postgres datanode further analyzes the retrieved data.

 So in short, the architectural design is to leverage the Postgres optimizer to
 parse the SQL query, and use the Postgres datanodes as HBase clients to read HBase
 regions directly in parallel, with the hope of 1) reading HRegions locally and 2)
 leveraging existing HBase filters.
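
 A minimal sketch (not part of the original mail) of Steps 1 and 4 above, using
 the 0.98-era client API assumed in this thread; the table name is hypothetical:

import java.util.Map;
import java.util.NavigableMap;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class RegionAwareScanSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "my_table");
    try {
      // Step 1 (coordinator side): one entry per region with its hosting server.
      NavigableMap<HRegionInfo, ServerName> regions = table.getRegionLocations();
      for (Map.Entry<HRegionInfo, ServerName> e : regions.entrySet()) {
        HRegionInfo region = e.getKey();
        // Step 4 (datanode side): bound the Scan by the region's start/end keys,
        // since there is no scan-by-regionId API in the client.
        Scan scan = new Scan();
        scan.setStartRow(region.getStartKey());
        scan.setStopRow(region.getEndKey());
        ResultScanner scanner = table.getScanner(scan);
        try {
          for (Result r : scanner) {
            // hand the row to the local Postgres datanode for further processing
          }
        } finally {
          scanner.close();
        }
      }
    } finally {
      table.close();
    }
  }
}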

 On Step 4 above, is there a way to talk to the RegionServer directly without
 communicating with HMaster?

 Similar ideas (Drill for one; how about HP Vertica?) have been brought up and
 discussed before.  So before I head down the same road, can I pick your
 brain? Please shed some light, or prevent me from doing something
 stupid.

 Many thanks

 Demai



Re: Re: Re: Wrong Configuration lead to a failure when enabling table

2015-02-04 Thread Weichen YE
Hi Ted, Ram,

 Thank you for your attention to this bug.

 I met this bug in a production environment and the table contains
important data. If we are not able to enable this table in the current cluster,
do you have any idea how to get the table data back in some other way? Maybe
export, snapshot, copytable, or distcp all table files to another cluster?

2015-02-04 13:17 GMT+08:00 Ted Yu yuzhih...@gmail.com:

 Looks like the NPE was caused by the following method in BaseLoadBalancer
 returning null:

   protected Map<ServerName, List<HRegionInfo>> assignMasterRegions(
       Collection<HRegionInfo> regions, List<ServerName> servers) {
     if (servers == null || regions == null || regions.isEmpty()) {
       return null;

 Since bulkPlan is null, calling BulkAssigner seems unnecessary.
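
 A rough sketch of the kind of guard this suggests at the call site; the variable
 and class names here follow the snippet and stack trace above rather than the
 actual EnableTableHandler source, which may differ:

   Map<ServerName, List<HRegionInfo>> bulkPlan =
       balancer.retainAssignment(regionsToAssign, onlineServers);
   if (bulkPlan == null || bulkPlan.isEmpty()) {
     LOG.info("No regions to assign; skipping bulk assignment");
     return;
   }
   // Only build and run a BulkAssigner when there is a non-null, non-empty plan.
   new GeneralBulkAssigner(server, bulkPlan, assignmentManager, true).bulkAssign();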



 On Tue, Feb 3, 2015 at 9:01 PM, ramkrishna vasudevan 
 ramkrishna.s.vasude...@gmail.com wrote:

 It is not only about the state in the table descriptor but also the in-memory
 state in the AM.  I remember some time back Rajeshbabu worked on an
 HBCK-like tool which would forcefully change the state of these tables in
 such cases; I don't remember the JIRA now. I thought of restarting the
 master, thinking the in-memory state would change, and I got this:

 java.lang.NullPointerException
 at

 org.apache.hadoop.hbase.master.handler.EnableTableHandler.handleEnableTable(EnableTableHandler.java:210)
 at

 org.apache.hadoop.hbase.master.handler.EnableTableHandler.process(EnableTableHandler.java:142)
 at

 org.apache.hadoop.hbase.master.AssignmentManager.recoverTableInEnablingState(AssignmentManager.java:1695)
 at

 org.apache.hadoop.hbase.master.AssignmentManager.joinCluster(AssignmentManager.java:416)
 at

 org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:720)
 at
 org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:170)
 at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1459)
 at java.lang.Thread.run(Thread.java:745)
 2015-02-04 16:11:45,932 FATAL [stobdtserver3:16040.activeMasterManager]
 master.HMaster: Master server abort: loaded coprocessors are:
 [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint]
 2015-02-04 16:11:45,933 FATAL [stobdtserver3:16040.activeMasterManager]
 master.HMaster: Unhandled exception. Starting shutdown.
 java.lang.NullPointerException
 at

 org.apache.hadoop.hbase.master.handler.EnableTableHandler.handleEnableTable(EnableTableHandler.java:210)
 at

 org.apache.hadoop.hbase.master.handler.EnableTableHandler.process(EnableTableHandler.java:142)
 at

 org.apache.hadoop.hbase.master.AssignmentManager.recoverTableInEnablingState(AssignmentManager.java:1695)
 at

 org.apache.hadoop.hbase.master.AssignmentManager.joinCluster(AssignmentManager.java:416)
 at

 org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:720)
 at
 org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:170)
 at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1459)


 Regards
 Ram

 On Wed, Feb 4, 2015 at 10:25 AM, Ted Yu yuzhih...@gmail.com wrote:

  What about creating an offline tool which can modify the table
 descriptor
  so that table goes to designated state ?
 
  Cheers
 
  On Tue, Feb 3, 2015 at 8:51 PM, ramkrishna vasudevan 
  ramkrishna.s.vasude...@gmail.com wrote:
 
   I tried reproducing this scenario on trunk. The same problem exists.
   Currently in the master the table state is noted in the Table
 descriptor
   and not on the ZK.  In 0.98.XX version it should be on the zk.
  
   When we tried to enable the table the region assignment failed due to
   ClassNotFound and already the state is in ENABLING.  But doing a
 describe
   table still shows it in DISABLED.
  
    We thought we could alter in the correct configuration by specifying
  another
    alter table command, but we are still not able to enable the table.
  
   Moving this to dev to see if there is any workaround for this issue.
 If
   not we may have to solve this issue across branches until we have the
    Procedure V2 implementation ready on trunk.
  
   Any suggestions?
  
   Regards
   Ram
  
   On Wed, Feb 4, 2015 at 4:05 AM, 叶炜晨 yeweic...@qiyi.com wrote:
  
  My version is 0.98.6-cdh5.2.0; the problem is in my production
   environment.
    
 So should I first delete the znode? And then how do I disable this table? My
 goal is to fix the wrong table configuration to get my data back.
   
   
 from my mobile phone.
   
  On 2015-2-4 at 12:46 AM, ramkrishna vasudevan 
    ramkrishna.s.vasude...@gmail.com
 wrote:
   

  I think the only way out here is to clear the zookeeper node. But I am
 not sure of the ramifications of that.

   
 Which version are you using?  The newer versions are
 'protobuf'fed.

   
 Are you running this in production?

   
 Regards
 Ram

   
 On Tue, Feb 3, 2015 at 5:00 

Re: [VOTE] The 3rd HBase 0.98.10 release candidate (RC2) is available, vote closing 2/4/2015

2015-02-04 Thread Andrew Purtell
Gentle reminder that our abbreviated voting closes today. I plan to +1
later today, am just waiting for the usual unit test runs to finish. We
will need one more vote as of now. If we can get one today that will be
great, otherwise I can certainly extend the vote until the end of the week.




On Sat, Jan 31, 2015 at 10:07 PM, Andrew Purtell apurt...@apache.org
wrote:

 ​The 3rd HBase 0.98.10 release candidate (RC2) is available for
 download at http://people.apache.org/~apurtell/0.98.10RC2/ and Maven
 artifacts are also available in the temporary repository
 https://repository.apache.org/content/repositories/orgapachehbase-1060/

 Signed with my code signing key D5365CCD.

 The issues resolved in this release can be found at
 http://s.apache.org/7hO

 Please try out the candidate and vote +1/-1 by midnight Pacific Time
 (00:00 -0800 GMT) on ​February 4​ ​on whether or not we should release this
 as​ ​0.98.10. Three +1 votes from PMC will be required to release.

 --
 Best regards,

- Andy

 Problems worthy of attack prove their worth by hitting back. - Piet Hein
 (via Tom White)




-- 
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)


Re: [VOTE] The 3rd HBase 0.98.10 release candidate (RC2) is available, vote closing 2/4/2015

2015-02-04 Thread Elliott Clark
I'm currently testing this on a cluster. I'll have results before the vote
should close.

On Wed, Feb 4, 2015 at 8:23 AM, Andrew Purtell apurt...@apache.org wrote:

 Gentle reminder that our abbreviated voting closes today. I plan to +1
 later today, am just waiting for the usual unit test runs to finish. We
 will need one more vote as of now. If we can get one today that will be
 great, otherwise I can certainly extend the vote until the end of the week.




 On Sat, Jan 31, 2015 at 10:07 PM, Andrew Purtell apurt...@apache.org
 wrote:

  ​The 3rd HBase 0.98.10 release candidate (RC2) is available for
  download at http://people.apache.org/~apurtell/0.98.10RC2/ and Maven
  artifacts are also available in the temporary repository
  https://repository.apache.org/content/repositories/orgapachehbase-1060/
 
  Signed with my code signing key D5365CCD.
 
  The issues resolved in this release can be found at
  http://s.apache.org/7hO
 
  Please try out the candidate and vote +1/-1 by midnight Pacific Time
  (00:00 -0800 GMT) on ​February 4​ ​on whether or not we should release
 this
  as​ ​0.98.10. Three +1 votes from PMC will be required to release.
 
  --
  Best regards,
 
 - Andy
 
  Problems worthy of attack prove their worth by hitting back. - Piet Hein
  (via Tom White)
 



 --
 Best regards,

- Andy

 Problems worthy of attack prove their worth by hitting back. - Piet Hein
 (via Tom White)



[jira] [Created] (HBASE-12966) NPE in HMaster while recovering tables in Enabling state

2015-02-04 Thread ramkrishna.s.vasudevan (JIRA)
ramkrishna.s.vasudevan created HBASE-12966:
--

 Summary: NPE in HMaster while recovering tables in Enabling state
 Key: HBASE-12966
 URL: https://issues.apache.org/jira/browse/HBASE-12966
 Project: HBase
  Issue Type: Bug
  Components: master
Reporter: ramkrishna.s.vasudevan
 Fix For: 2.0.0


{code}
java.lang.NullPointerException
at 
org.apache.hadoop.hbase.master.handler.EnableTableHandler.handleEnableTable(EnableTableHandler.java:210)
at 
org.apache.hadoop.hbase.master.handler.EnableTableHandler.process(EnableTableHandler.java:142)
at 
org.apache.hadoop.hbase.master.AssignmentManager.recoverTableInEnablingState(AssignmentManager.java:1695)
at 
org.apache.hadoop.hbase.master.AssignmentManager.joinCluster(AssignmentManager.java:416)
at 
org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:720)
at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:170)
at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1459)
at java.lang.Thread.run(Thread.java:745)
2015-02-04 16:11:45,932 FATAL [stobdtserver3:16040.activeMasterManager] 
master.HMaster: Master server abort: loaded coprocessors are: 
[org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint]
2015-02-04 16:11:45,933 FATAL [stobdtserver3:16040.activeMasterManager] 
master.HMaster: Unhandled exception. Starting shutdown.
java.lang.NullPointerException
at 
org.apache.hadoop.hbase.master.handler.EnableTableHandler.handleEnableTable(EnableTableHandler.java:210)
at 
org.apache.hadoop.hbase.master.handler.EnableTableHandler.process(EnableTableHandler.java:142)
at 
org.apache.hadoop.hbase.master.AssignmentManager.recoverTableInEnablingState(AssignmentManager.java:1695)
at 
org.apache.hadoop.hbase.master.AssignmentManager.joinCluster(AssignmentManager.java:416)
at 
org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:720)
at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:170)
at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1459)
at java.lang.Thread.run(Thread.java:745)

{code}

A table was trying to recover from the ENABLING state and the master got the above 
exception. Note that the setup was 2 masters with 1 RS (3 machines in total).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12967) Invalid FQCNs in alter table command leaves the table unusable

2015-02-04 Thread ramkrishna.s.vasudevan (JIRA)
ramkrishna.s.vasudevan created HBASE-12967:
--

 Summary: Invalid FQCNs in alter table command leaves the table 
unusable
 Key: HBASE-12967
 URL: https://issues.apache.org/jira/browse/HBASE-12967
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Priority: Critical
 Fix For: 1.0.0, 2.0.0, 1.1.0, 0.98.11


Refer to this thread
http://osdir.com/ml/general/2015-02/msg03547.html

A user tries to alter a table with a new split policy.  Due to an invalid 
class name the table does not get enabled and becomes unusable.  I 
think Procedure V2 is a long-term solution for this, but we at least need to 
provide a workaround or a set of steps to come out of this.  Any fix before 
Procedure V2 is in place would be useful for the already released versions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Re: Re: Wrong Configuration lead to a failure when enabling table

2015-02-04 Thread ramkrishna vasudevan
distcp should work.  Try the online snapshot case - I think it might work. But
here the table is in the DISABLING state, so it is again tricky.

Take a raw backup of the table data in HDFS - and put it back again
after restoring the table?

On Wed, Feb 4, 2015 at 12:41 PM, Weichen YE yeweichen2...@gmail.com wrote:

 Hi,Ted, Ram,

  Thank you for your attention to this bug.

  I meet this bug in production environment and the table contains
 important data. If we are not able to enable this table in current cluster,
 do you have any idea to get the table data back in some other way? Maybe
 export, snapshot, copytable, distcp all table files to another cluster ?

 2015-02-04 13:17 GMT+08:00 Ted Yu yuzhih...@gmail.com:

  Looks like the NPE was caused by the following method in BaseLoadBalancer
  returning null:
 
    protected Map<ServerName, List<HRegionInfo>> assignMasterRegions(
        Collection<HRegionInfo> regions, List<ServerName> servers) {
      if (servers == null || regions == null || regions.isEmpty()) {
        return null;
 
  Since bulkPlan is null, calling BulkAssigner seems unnecessary.
 
 
 
  On Tue, Feb 3, 2015 at 9:01 PM, ramkrishna vasudevan 
  ramkrishna.s.vasude...@gmail.com wrote:
 
  It is not only about the state on the table descriptor but also the in
  memory state in the AM.  I remember some time back Rajeshbabu worked on
 a
  HBCK like tool which will forcefully change the state of these tables in
  such cases. I don't remember the JIRA now.I thought of restarting the
  master thinking the in memory state would change and I got this
 
  java.lang.NullPointerException
  at
 
 
 org.apache.hadoop.hbase.master.handler.EnableTableHandler.handleEnableTable(EnableTableHandler.java:210)
  at
 
 
 org.apache.hadoop.hbase.master.handler.EnableTableHandler.process(EnableTableHandler.java:142)
  at
 
 
 org.apache.hadoop.hbase.master.AssignmentManager.recoverTableInEnablingState(AssignmentManager.java:1695)
  at
 
 
 org.apache.hadoop.hbase.master.AssignmentManager.joinCluster(AssignmentManager.java:416)
  at
 
 
 org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:720)
  at
  org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:170)
  at
 org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1459)
  at java.lang.Thread.run(Thread.java:745)
  2015-02-04 16:11:45,932 FATAL [stobdtserver3:16040.activeMasterManager]
  master.HMaster: Master server abort: loaded coprocessors are:
  [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint]
  2015-02-04 16:11:45,933 FATAL [stobdtserver3:16040.activeMasterManager]
  master.HMaster: Unhandled exception. Starting shutdown.
  java.lang.NullPointerException
  at
 
 
 org.apache.hadoop.hbase.master.handler.EnableTableHandler.handleEnableTable(EnableTableHandler.java:210)
  at
 
 
 org.apache.hadoop.hbase.master.handler.EnableTableHandler.process(EnableTableHandler.java:142)
  at
 
 
 org.apache.hadoop.hbase.master.AssignmentManager.recoverTableInEnablingState(AssignmentManager.java:1695)
  at
 
 
 org.apache.hadoop.hbase.master.AssignmentManager.joinCluster(AssignmentManager.java:416)
  at
 
 
 org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:720)
  at
  org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:170)
  at
 org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1459)
 
 
  Regards
  Ram
 
  On Wed, Feb 4, 2015 at 10:25 AM, Ted Yu yuzhih...@gmail.com wrote:
 
   What about creating an offline tool which can modify the table
  descriptor
   so that table goes to designated state ?
  
   Cheers
  
   On Tue, Feb 3, 2015 at 8:51 PM, ramkrishna vasudevan 
   ramkrishna.s.vasude...@gmail.com wrote:
  
I tried reproducing this scenario on trunk. The same problem exists.
Currently in the master the table state is noted in the Table
  descriptor
and not on the ZK.  In 0.98.XX version it should be on the zk.
   
When we tried to enable the table the region assignment failed due
 to
ClassNotFound and already the state is in ENABLING.  But doing a
  describe
table still shows it in DISABLED.
   
Thought we could alter the correct Configuration but specifying
  another
alter Table command we are still not able to enable the table.
   
Moving this to dev to see if there is any workaround for this issue.
  If
not we may have to solve this issue across branches until we have
 the
 Procedure V2 implementation ready on trunk.
   
Any suggestions?
   
Regards
Ram
   
On Wed, Feb 4, 2015 at 4:05 AM, 叶炜晨 yeweic...@qiyi.com wrote:
   
  my version is 0.98.6-cdh5.2.0, the problem in my production
   environment.

 So should I first delete znode? And then how to distable this
  table?my
 goal is to fix the wrong table configuration to get my data.


Re: Re: Re: Wrong Configuration lead to a failure when enabling table

2015-02-04 Thread ramkrishna vasudevan
Raised this https://issues.apache.org/jira/browse/HBASE-12967.

On Wed, Feb 4, 2015 at 1:36 PM, ramkrishna vasudevan 
ramkrishna.s.vasude...@gmail.com wrote:

 distcp should work.  Try the online snapshot case - I think it might work. But
 here the table is in the DISABLING state, so it is again tricky.

 Take a raw backup of the table data in HDFS - and put it back again
 after restoring the table?

 On Wed, Feb 4, 2015 at 12:41 PM, Weichen YE yeweichen2...@gmail.com
 wrote:

 Hi,Ted, Ram,

  Thank you for your attention to this bug.

  I meet this bug in production environment and the table contains
 important data. If we are not able to enable this table in current
 cluster,
 do you have any idea to get the table data back in some other way? Maybe
 export, snapshot, copytable, distcp all table files to another cluster ?

 2015-02-04 13:17 GMT+08:00 Ted Yu yuzhih...@gmail.com:

  Looks like the NPE was caused by the following method in
 BaseLoadBalancer
  returning null:
 
    protected Map<ServerName, List<HRegionInfo>> assignMasterRegions(
        Collection<HRegionInfo> regions, List<ServerName> servers) {
      if (servers == null || regions == null || regions.isEmpty()) {
        return null;
 
  Since bulkPlan is null, calling BulkAssigner seems unnecessary.
 
 
 
  On Tue, Feb 3, 2015 at 9:01 PM, ramkrishna vasudevan 
  ramkrishna.s.vasude...@gmail.com wrote:
 
  It is not only about the state on the table descriptor but also the in
  memory state in the AM.  I remember some time back Rajeshbabu worked
 on a
  HBCK like tool which will forcefully change the state of these tables
 in
  such cases. I don't remember the JIRA now.I thought of restarting the
  master thinking the in memory state would change and I got this
 
  java.lang.NullPointerException
  at
 
 
 org.apache.hadoop.hbase.master.handler.EnableTableHandler.handleEnableTable(EnableTableHandler.java:210)
  at
 
 
 org.apache.hadoop.hbase.master.handler.EnableTableHandler.process(EnableTableHandler.java:142)
  at
 
 
 org.apache.hadoop.hbase.master.AssignmentManager.recoverTableInEnablingState(AssignmentManager.java:1695)
  at
 
 
 org.apache.hadoop.hbase.master.AssignmentManager.joinCluster(AssignmentManager.java:416)
  at
 
 
 org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:720)
  at
  org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:170)
  at
 org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1459)
  at java.lang.Thread.run(Thread.java:745)
  2015-02-04 16:11:45,932 FATAL [stobdtserver3:16040.activeMasterManager]
  master.HMaster: Master server abort: loaded coprocessors are:
  [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint]
  2015-02-04 16:11:45,933 FATAL [stobdtserver3:16040.activeMasterManager]
  master.HMaster: Unhandled exception. Starting shutdown.
  java.lang.NullPointerException
  at
 
 
 org.apache.hadoop.hbase.master.handler.EnableTableHandler.handleEnableTable(EnableTableHandler.java:210)
  at
 
 
 org.apache.hadoop.hbase.master.handler.EnableTableHandler.process(EnableTableHandler.java:142)
  at
 
 
 org.apache.hadoop.hbase.master.AssignmentManager.recoverTableInEnablingState(AssignmentManager.java:1695)
  at
 
 
 org.apache.hadoop.hbase.master.AssignmentManager.joinCluster(AssignmentManager.java:416)
  at
 
 
 org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:720)
  at
  org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:170)
  at
 org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1459)
 
 
  Regards
  Ram
 
  On Wed, Feb 4, 2015 at 10:25 AM, Ted Yu yuzhih...@gmail.com wrote:
 
   What about creating an offline tool which can modify the table
  descriptor
   so that table goes to designated state ?
  
   Cheers
  
   On Tue, Feb 3, 2015 at 8:51 PM, ramkrishna vasudevan 
   ramkrishna.s.vasude...@gmail.com wrote:
  
I tried reproducing this scenario on trunk. The same problem
 exists.
Currently in the master the table state is noted in the Table
  descriptor
and not on the ZK.  In 0.98.XX version it should be on the zk.
   
When we tried to enable the table the region assignment failed due
 to
ClassNotFound and already the state is in ENABLING.  But doing a
  describe
table still shows it in DISABLED.
   
Thought we could alter the correct Configuration but specifying
  another
alter Table command we are still not able to enable the table.
   
Moving this to dev to see if there is any workaround for this
 issue.
  If
not we may have to solve this issue across branches until we have
 the
 Procedure V2 implementation ready on trunk.
   
Any suggestions?
   
Regards
Ram
   
On Wed, Feb 4, 2015 at 4:05 AM, 叶炜晨 yeweic...@qiyi.com wrote:
   
  my version is 0.98.6-cdh5.2.0, the problem in my production
   

[jira] [Created] (HBASE-12968) SecureServer should not ignore CallQueueSize

2015-02-04 Thread hongyu bi (JIRA)
hongyu bi created HBASE-12968:
-

 Summary: SecureServer should not ignore CallQueueSize
 Key: HBASE-12968
 URL: https://issues.apache.org/jira/browse/HBASE-12968
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.15
Reporter: hongyu bi
Assignee: hongyu bi


Per HBASE-5190, HBaseServer will reject requests if callQueueSize exceeds 
ipc.server.max.callqueue.length, but SecureServer ignores this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12971) Replication stuck due to large default value for replication.source.maxretriesmultiplier

2015-02-04 Thread Adrian Muraru (JIRA)
Adrian Muraru created HBASE-12971:
-

 Summary: Replication stuck due to large default value for 
replication.source.maxretriesmultiplier
 Key: HBASE-12971
 URL: https://issues.apache.org/jira/browse/HBASE-12971
 Project: HBase
  Issue Type: Bug
  Components: hbase
Affects Versions: 0.98.10, 1.0.0
Reporter: Adrian Muraru


We are setting in hbase-site the value of 300 for 
{{replication.source.maxretriesmultiplier}}, introduced in HBASE-11964.

While this value works fine to recover from transient errors with the remote ZK 
quorum of the peer HBase cluster, it proved to have side effects in the code 
introduced in HBASE-11367 (Pluggable replication endpoint), where the default is 
much lower (10).
See:
1. 
https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java#L169
2. 
https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HBaseInterClusterReplicationEndpoint.java#L79

The two default values are definitely conflicting - when 
{{replication.source.maxretriesmultiplier}} is set in hbase-site to 300, 
this will lead to a sleep time of 300*300 seconds (25h!) when a 
SocketTimeoutException is thrown.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12973) RegionCoprocessorEnvironment should provide HRegionInfo directly

2015-02-04 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-12973:
--

 Summary: RegionCoprocessorEnvironment should provide HRegionInfo 
directly
 Key: HBASE-12973
 URL: https://issues.apache.org/jira/browse/HBASE-12973
 Project: HBase
  Issue Type: Improvement
Reporter: Andrew Purtell
Priority: Minor


A coprocessor must go through RegionCoprocessorEnvironment#getRegion in order 
to retrieve the HRegionInfo for its associated region. It should be possible 
to get HRegionInfo directly from RegionCoprocessorEnvironment. (Or Region, see 
HBASE-12972)
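
For illustration, given a RegionCoprocessorEnvironment {{env}} inside a 
coprocessor hook, the current indirection and the proposed (hypothetical, not yet 
in the API) direct accessor would look like:

{code}
// Current: reach through the HRegion handle to get its HRegionInfo.
HRegionInfo info = env.getRegion().getRegionInfo();

// Proposed (hypothetical accessor): direct access from the environment.
// HRegionInfo info = env.getRegionInfo();
{code}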



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] The 3rd HBase 0.98.10 release candidate (RC2) is available, vote closing 2/4/2015

2015-02-04 Thread Enis Söztutar
+1.

Checked sigs, checksums
Checked dir layout
Checked the book
Checked hadoop and hbase jars, versions
Run local mode (both h1 and h2)
Simple shell tests
Ran LTT in local mode
Compiled with h1 and h2

Enis


On Wed, Feb 4, 2015 at 9:30 AM, Elliott Clark ecl...@apache.org wrote:

 I'm currently testing this on a cluster. I'll have results before the vote
 should close.

 On Wed, Feb 4, 2015 at 8:23 AM, Andrew Purtell apurt...@apache.org
 wrote:

  Gentle reminder that our abbreviated voting closes today. I plan to +1
  later today, am just waiting for the usual unit test runs to finish. We
  will need one more vote as of now. If we can get one today that will be
  great, otherwise I can certainly extend the vote until the end of the
 week.
 
 
 
 
  On Sat, Jan 31, 2015 at 10:07 PM, Andrew Purtell apurt...@apache.org
  wrote:
 
   ​The 3rd HBase 0.98.10 release candidate (RC2) is available for
   download at http://people.apache.org/~apurtell/0.98.10RC2/ and Maven
   artifacts are also available in the temporary repository
  
 https://repository.apache.org/content/repositories/orgapachehbase-1060/
  
   Signed with my code signing key D5365CCD.
  
   The issues resolved in this release can be found at
   http://s.apache.org/7hO
  
   Please try out the candidate and vote +1/-1 by midnight Pacific Time
   (00:00 -0800 GMT) on ​February 4​ ​on whether or not we should release
  this
   as​ ​0.98.10. Three +1 votes from PMC will be required to release.
  
   --
   Best regards,
  
  - Andy
  
   Problems worthy of attack prove their worth by hitting back. - Piet
 Hein
   (via Tom White)
  
 
 
 
  --
  Best regards,
 
 - Andy
 
  Problems worthy of attack prove their worth by hitting back. - Piet Hein
  (via Tom White)
 



Re: read HBase regions in MPP way?

2015-02-04 Thread Demai Ni
Nick,

Many thanks for the pointer. Yeah, the TableInputFormat looks like it fits my needs.
I will dig into it. Appreciate the help

Demai

On Wed, Feb 4, 2015 at 8:13 AM, Nick Dimiduk ndimi...@gmail.com wrote:

 Sounds like you're wanting to do a lot of what the TableInputFormat
 facilitates for mapreduce programs. Probably you can use code from that
 package to turn a Scan into input splits, which contain region name
 and RegionServer location, and consume those from your custom coordinator.

 -n

 On Tuesday, February 3, 2015, Demai Ni nid...@gmail.com wrote:

  hi, Guys,
 
  I am looking for a way to Read HBase table through MPP(Postgres-XC). And
  hoping to get some suggestions to either validate or invalidate the
  approach.
 
  Kind of like Apache Drill, but through PostgresSQL. Long story about why
  Postgres, and how c/c++ will give me headache for months to come. :-) I
  will leave it as is for now.
 
  The design is to have distributed Postgres-XC installed on the same HBase
  cluster, so Postgres' datanodes are on the same physical node as HBase's
  regionServers. connect HBase from PostgresSQL through existing HBase
 client
  code.
 
  Step 1: At the Postgres coordinator node (like the HBase Master), use
  HTable.getRegionLocations to get all regions of a particular table as a
  NavigableMap<HRegionInfo, ServerName>.
  Step 2: Iterate through the above NavigableMap to map each HBase ServerName to
  a PG-XC datanode. The goal is to let each Postgres datanode handle the
  regions on its own physical machine.
  Step 3: The Postgres coordinator node sends the execution plan to the Postgres
  datanodes through an existing framework called the foreign data wrapper.
  Step 4: Each Postgres datanode iterates through its assigned regions and opens
  an HBase client Scan with .setStartRow and .setStopRow so it will only read
  the assigned region.  I was hoping to use HRegionInfo.regionId directly,
  but can't find such an API in the client Scan.
  Step 5: The Postgres datanode further analyzes the retrieved data.
 
  So in short, the architectural design is to leverage the Postgres optimizer to
  parse the SQL query, and use the Postgres datanodes as HBase clients to read HBase
  regions directly in parallel. With the hope to 1) read HRegion locally;
 2)
  leverage existing HBase filters.
 
  On Step 4 above, is there a way to talk to the RegionServer directly without
  communicating with HMaster?
 
  Similar ideas(Drill for one, how about HP vertica?) are brought up
 before,
  and discussed.  So before I am heading down the same road, Can I pick
 your
  brain, please shed me some light? or prevent me from doing something
  stupid?
 
  Many thanks
 
  Demai
 



[jira] [Created] (HBASE-12972) Region, a supportable public/evolving subset of HRegion

2015-02-04 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-12972:
--

 Summary: Region, a supportable public/evolving subset of HRegion
 Key: HBASE-12972
 URL: https://issues.apache.org/jira/browse/HBASE-12972
 Project: HBase
  Issue Type: New Feature
Reporter: Andrew Purtell


On HBASE-12566, [~lhofhansl] proposed:
{quote}
Maybe we can have a {{Region}} interface that is to {{HRegion}} what 
{{Store}} is to {{HStore}}. Store marked with {{@InterfaceAudience.Private}} 
but used in some coprocessor hooks.
{quote}

For example, coprocessors currently have to reach into HRegion in order to participate 
in row and region locking protocols. This is one area where the functionality 
is legitimate for coprocessors but not for users, so an in-between interface 
makes sense.

In addition we should promote {{Store}}'s interface audience to 
LimitedPrivate(COPROC).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] The 3rd HBase 0.98.10 release candidate (RC2) is available, vote closing 2/4/2015

2015-02-04 Thread Elliott Clark
+1
ran on a test cluster
checked sig
checked tar layout and contents
ran ITBLL

On Wed, Feb 4, 2015 at 12:17 PM, Enis Söztutar e...@apache.org wrote:

 +1.

 Checked sigs, checksums
 Checked dir layout
 Checked the book
 Checked hadoop and hbase jars, versions
 Run local mode (both h1 and h2)
 Simple shell tests
 Ran LTT in local mode
 Compiled with h1 and h2

 Enis


 On Wed, Feb 4, 2015 at 9:30 AM, Elliott Clark ecl...@apache.org wrote:

  I'm currently testing this on a cluster. I'll have results before the
 vote
  should close.
 
  On Wed, Feb 4, 2015 at 8:23 AM, Andrew Purtell apurt...@apache.org
  wrote:
 
   Gentle reminder that our abbreviated voting closes today. I plan to +1
   later today, am just waiting for the usual unit test runs to finish. We
   will need one more vote as of now. If we can get one today that will be
   great, otherwise I can certainly extend the vote until the end of the
  week.
  
  
  
  
   On Sat, Jan 31, 2015 at 10:07 PM, Andrew Purtell apurt...@apache.org
   wrote:
  
​The 3rd HBase 0.98.10 release candidate (RC2) is available for
download at http://people.apache.org/~apurtell/0.98.10RC2/ and Maven
artifacts are also available in the temporary repository
   
  https://repository.apache.org/content/repositories/orgapachehbase-1060/
   
Signed with my code signing key D5365CCD.
   
The issues resolved in this release can be found at
http://s.apache.org/7hO
   
Please try out the candidate and vote +1/-1 by midnight Pacific Time
(00:00 -0800 GMT) on ​February 4​ ​on whether or not we should
 release
   this
as​ ​0.98.10. Three +1 votes from PMC will be required to release.
   
--
Best regards,
   
   - Andy
   
Problems worthy of attack prove their worth by hitting back. - Piet
  Hein
(via Tom White)
   
  
  
  
   --
   Best regards,
  
  - Andy
  
   Problems worthy of attack prove their worth by hitting back. - Piet
 Hein
   (via Tom White)
  
 



[jira] [Reopened] (HBASE-12961) Negative values in read and write region server metrics

2015-02-04 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell reopened HBASE-12961:


 Negative values in read and write region server metrics 
 

 Key: HBASE-12961
 URL: https://issues.apache.org/jira/browse/HBASE-12961
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 2.0.0
Reporter: Victoria
Assignee: Victoria
Priority: Minor
 Fix For: 2.0.0, 1.1.0, 0.98.11

 Attachments: HBASE-12961-2.0.0-v1.patch, HBASE-12961-v1.patch


 The HMaster web UI shows the read/write requests per region server. They are 
 currently displayed using 32-bit integers. Hence, if the servers are up 
 for a long time, the values can be shown as negative.
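
 For illustration, the wraparound looks like this (assuming the underlying 
 counters are 64-bit and only the display narrows them to int):

{code}
// A request count just past Integer.MAX_VALUE turns negative when narrowed to int.
long totalRequests = Integer.MAX_VALUE + 1L;   // 2147483648
System.out.println((int) totalRequests);       // prints -2147483648
{code}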



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-12961) Negative values in read and write region server metrics

2015-02-04 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-12961.

Resolution: Fixed

Ok. Let me track that down

 Negative values in read and write region server metrics 
 

 Key: HBASE-12961
 URL: https://issues.apache.org/jira/browse/HBASE-12961
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 2.0.0
Reporter: Victoria
Assignee: Victoria
Priority: Minor
 Fix For: 2.0.0, 1.1.0, 0.98.11

 Attachments: HBASE-12961-2.0.0-v1.patch, HBASE-12961-v1.patch


 The HMaster web UI shows the read/write requests per region server. They are 
 currently displayed using 32-bit integers. Hence, if the servers are up 
 for a long time, the values can be shown as negative.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-12530) TestStatusResource can fail if run in parallel with other tests

2015-02-04 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-12530.

   Resolution: Cannot Reproduce
Fix Version/s: (was: 0.98.11)
   (was: 2.0.0)

Haven't seen this in a while

 TestStatusResource can fail if run in parallel with other tests
 ---

 Key: HBASE-12530
 URL: https://issues.apache.org/jira/browse/HBASE-12530
 Project: HBase
  Issue Type: Bug
  Components: test
Reporter: Andrew Purtell
Priority: Trivial
  Labels: newbie

 TestStatusResource can fail if run in parallel with other tests, fix this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 4884: Backport HBASE-12533: staging directories are not deleted after secure bulk load

2015-02-04 Thread Srikanth Srungarapu

---
This is an automatically generated e-mail. To reply, visit:
https://review.cloudera.org/r/4884/
---

Review request for hbase.


Repository: hbase


Description
---

Summarizing discussion on HBASE-12533:
It looks like prepareBulkLoad is fired against all data regions, so it will 
create the same number of staging folders as the number of regions of the 
bulk-loaded table, while we only use the first one. That's why you can see that 
many staging folders are left behind.


Diffs
-

  
hbase-client/src/main/java/org/apache/hadoop/hbase/client/coprocessor/SecureBulkLoadClient.java
 48986b1eb6a45ca57f40cfe0e99ed302a024d2d1 
  
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java
 fcb9270e3d33f48ae6cfed3ea4cbe0874f0a37ef 
  
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java
 27c809a3b71bfcb465703cf66f896098935e2da0 

Diff: https://review.cloudera.org/r/4884/diff/


Testing
---

Local testing done.


Thanks,

Srikanth Srungarapu



Re: Review Request 4884: Backport HBASE-12533: staging directories are not deleted after secure bulk load

2015-02-04 Thread Jonathan Hsieh

---
This is an automatically generated e-mail. To reply, visit:
https://review.cloudera.org/r/4884/#review8003
---

Ship it!


I don't see a delta from upstream except for maybe spacing nits. Take care 
of those upstream. Otherwise LGTM.


hbase-client/src/main/java/org/apache/hadoop/hbase/client/coprocessor/SecureBulkLoadClient.java
https://review.cloudera.org/r/4884/#comment12936

nit (if upstream, fix there too)



hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java
https://review.cloudera.org/r/4884/#comment12937

nit (fix upstream if there)


- Jonathan Hsieh


On Feb. 4, 2015, 9:55 p.m., Srikanth Srungarapu wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://review.cloudera.org/r/4884/
 ---
 
 (Updated Feb. 4, 2015, 9:55 p.m.)
 
 
 Review request for hbase.
 
 
 Repository: hbase
 
 
 Description
 ---
 
 Summarizing discussion on HBASE-12533:
 Looks like the prepareBulkLoad is fired up to hit all data regions so it will 
 create same number of staging folders as the number of regions of the 
 bulkloaded table while we only use the first one. That's why you can see many 
 staging folders are left.
 
 
 Diffs
 -
 
   
 hbase-client/src/main/java/org/apache/hadoop/hbase/client/coprocessor/SecureBulkLoadClient.java
  48986b1eb6a45ca57f40cfe0e99ed302a024d2d1 
   
 hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java
  fcb9270e3d33f48ae6cfed3ea4cbe0874f0a37ef 
   
 hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java
  27c809a3b71bfcb465703cf66f896098935e2da0 
 
 Diff: https://review.cloudera.org/r/4884/diff/
 
 
 Testing
 ---
 
 Local testing done.
 
 
 Thanks,
 
 Srikanth Srungarapu
 




[jira] [Reopened] (HBASE-11568) Async WAL replication for region replicas

2015-02-04 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das reopened HBASE-11568:
-

Reopening for branch-1 commit

 Async WAL replication for region replicas
 -

 Key: HBASE-11568
 URL: https://issues.apache.org/jira/browse/HBASE-11568
 Project: HBase
  Issue Type: Sub-task
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 2.0.0

 Attachments: hbase-11568_v2.patch, hbase-11568_v3.patch


 As mentioned in parent issue, and design docs for phase-1 (HBASE-10070) and 
 Phase-2 (HBASE-11183), implement asynchronous WAL replication from the WAL 
 files of the primary region to the secondary region replicas. 
 The WAL replication will build upon the pluggable replication framework 
 introduced in HBASE-11367, and the distributed WAL replay. 
 Upon having some experience with the patch, we changed the design so that 
 there is only one replication queue for doing the async wal replication to 
 secondary replicas rather than having a queue per region replica. This is due 
 to the fact that, we do not want to tail the logs of every region server for 
 a single region replica. 
 Handling of flushes/compactions and memstore accounting will be handled in 
 other subtasks. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12974) Opaque AsyncProcess failure: RetriesExhaustedWithDetailsException but no detail

2015-02-04 Thread stack (JIRA)
stack created HBASE-12974:
-

 Summary: Opaque AsyncProcess failure: 
RetriesExhaustedWithDetailsException but no detail
 Key: HBASE-12974
 URL: https://issues.apache.org/jira/browse/HBASE-12974
 Project: HBase
  Issue Type: Bug
Reporter: stack


I'm trying to do longer running tests but when I up the numbers for a task I 
run into this:

{code}
2015-02-04 15:35:10,267 FATAL [IPC Server handler 17 on 43975] 
org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: 
attempt_1419986015214_0204_m_02_3 - exited : 
org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
action: IOException: 1 time,
at 
org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:227)
at 
org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$1700(AsyncProcess.java:207)
at 
org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1658)
at 
org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:208)
at 
org.apache.hadoop.hbase.client.BufferedMutatorImpl.doMutate(BufferedMutatorImpl.java:141)
at 
org.apache.hadoop.hbase.client.BufferedMutatorImpl.mutate(BufferedMutatorImpl.java:98)
at 
org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Generator$GeneratorMapper.persist(IntegrationTestBigLinkedList.java:449)
at 
org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Generator$GeneratorMapper.map(IntegrationTestBigLinkedList.java:407)
at 
org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList$Generator$GeneratorMapper.map(IntegrationTestBigLinkedList.java:355)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
{code}

It's telling me an action failed, but only 1 time, and with an empty IOE?

I'm kinda stumped.

Starting up this issue to see if I can get to the bottom of it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[VOTE] The 3rd HBase 0.98.10 release candidate (RC2) is available, vote closing 2/4/2015

2015-02-04 Thread Andrew Purtell
+1

- Release passed RAT checks
- Unit test suite passes 20/20 times on 7u75, 5/5 with 6u45
- Bootstrapped hadoop1 and hadoop2 test clusters from the tarballs, no
errors, warnings, or failures
- Checked Phoenix compilation with master and 4.0 branches, no compile
issues, Phoenix unit tests pass
- Ran LTT on an all-localhost cluster, 1M keys, no unusual warnings or
errors in the logs
- Ran ITI and ITBLL in an all-localhost configuration
- Tested replication between two small clusters in DR and active-active
peer configurations
- Ran Dima's cool new compatibility checker, see results at
http://people.apache.org/~apurtell/0.98.9_0.98.10RC2_compat_report.html .
There are no issues of concern affecting interfaces or classes meant for
user extension, except VisibilityLabelService (from HBASE-12745), but per
the discussion at the tail of the issue the changes are acceptable.


On Sat, Jan 31, 2015 at 10:07 PM, Andrew Purtell apurt...@apache.org
 wrote:

 ​The 3rd HBase 0.98.10 release candidate (RC2) is available for
 download at http://people.apache.org/~apurtell/0.98.10RC2/ and Maven
 artifacts are also available in the temporary repository
 https://repository.apache.org/content/repositories/orgapachehbase-1060/

 Signed with my code signing key D5365CCD.

 The issues resolved in this release can be found at
 http://s.apache.org/7hO

 Please try out the candidate and vote +1/-1 by midnight Pacific Time
 (00:00 -0800 GMT) on ​February 4​ ​on whether or not we should release this
 as​ ​0.98.10. Three +1 votes from PMC will be required to release.



-- 
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)


[jira] [Created] (HBASE-12970) Annotate methods and parameters with @Nullable/@Nonnull

2015-02-04 Thread Andrey Stepachev (JIRA)
Andrey Stepachev created HBASE-12970:


 Summary: Annotate methods and parameters with @Nullable/@Nonnull
 Key: HBASE-12970
 URL: https://issues.apache.org/jira/browse/HBASE-12970
 Project: HBase
  Issue Type: Improvement
Reporter: Andrey Stepachev
Priority: Minor


We have many bugs with NPEs, but current IDEs can show where null is 
possible as a return value, and findbugs can handle that too.
So we need to:
1. annotate methods with @Nullable/@Nonnull
2. force annotation of interfaces (and even better, classes) with 
@Nullable/@Nonnull annotations
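
A minimal sketch of the proposed style, assuming the JSR-305 annotations 
(javax.annotation.Nullable / javax.annotation.Nonnull); the issue does not name a 
specific annotation library, and the interface below is hypothetical:

{code}
import javax.annotation.Nonnull;
import javax.annotation.Nullable;

import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.ServerName;

public interface RegionLookup {
  @Nullable                                        // callers must handle a null return
  ServerName findServerFor(@Nonnull HRegionInfo region);
}
{code}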



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12969) Parameter Validation is not there for shell script, local-master-backup.sh and local-regionservers.sh

2015-02-04 Thread Y. SREENIVASULU REDDY (JIRA)
Y. SREENIVASULU REDDY created HBASE-12969:
-

 Summary: Parameter Validation is not there for shell script, 
local-master-backup.sh and local-regionservers.sh
 Key: HBASE-12969
 URL: https://issues.apache.org/jira/browse/HBASE-12969
 Project: HBase
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.98.9
Reporter: Y. SREENIVASULU REDDY
Priority: Minor
 Fix For: 1.0.0, 2.0.0, 0.98.11


While executing local-regionservers.sh or local-master-backup.sh in 
$HBASE_HOME/bin, 
if a parameter is a non-numeric value then the scripts throw failures.

We need to handle validation in those scripts as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12976) Default hbase.client.scanner.max.result.size

2015-02-04 Thread Lars Hofhansl (JIRA)
Lars Hofhansl created HBASE-12976:
-

 Summary: Default hbase.client.scanner.max.result.size
 Key: HBASE-12976
 URL: https://issues.apache.org/jira/browse/HBASE-12976
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl


Setting scanner caching is somewhat of a black art. It's hard to estimate ahead 
of time how large the result set will be.

I propose we default hbase.client.scanner.max.result.size to 2 MB. That is a good 
compromise between performance and buffer usage on typical networks (avoiding 
OOMs when the caching was chosen too high).

To an HTable client this is completely transparent.
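
For reference, a minimal sketch of the client-side knobs involved; the 2 MB 
figure is the value proposed here, not the current default:

{code}
Configuration conf = HBaseConfiguration.create();
conf.setLong("hbase.client.scanner.max.result.size", 2L * 1024 * 1024);

Scan scan = new Scan();
scan.setCaching(100);                      // rows per RPC: the hard-to-guess knob
scan.setMaxResultSize(2L * 1024 * 1024);   // bytes per RPC: the proposed safety cap
{code}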




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12975) SplitTransaction, RegionMergeTransaction should have InterfaceAudience of LimitedPrivate(Coproc,Phoenix)

2015-02-04 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created HBASE-12975:
---

 Summary: SplitTransaction, RegionMergeTransaction should have 
InterfaceAudience of LimitedPrivate(Coproc,Phoenix)
 Key: HBASE-12975
 URL: https://issues.apache.org/jira/browse/HBASE-12975
 Project: HBase
  Issue Type: Improvement
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla


Making SplitTransaction and RegionMergeTransaction LimitedPrivate is required to 
support the local indexing feature in Phoenix and to ensure region colocation. 

We can drive a region split or a regions merge from coprocessors in a few method 
calls without touching internals like znode creation, file layout changes or 
assignments:
1) with stepsBeforePONR and stepsAfterPONR we can ensure the split,
2) meta entries can be passed through coprocessors to update atomically with the 
normal split/merge,
3) rollback on failure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)