Re: ANN: HBase 0.94.2 is available for download

2012-10-17 Thread Stack
On Tue, Oct 16, 2012 at 7:00 AM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
 Hi St.Ack,

 Is the roll-out upgrade process documented anywhere? I looked at the
 book but only found the upgrade from 0.90 to 0.92. Can you point me to
 something? If there is no documentation yet, can someone draft the
 steps here so I can propose an update to the online book?


Thanks Jean-Marc.

You should be able to do a rolling restart from 0.92.x to 0.94.x.  It's
a bug if you can't.  There is no entry in the reference guide but
there should be, if only to say this... You might want to also call out
https://issues.apache.org/jira/browse/HBASE-6710.  Folks should be
conscious of its implications when upgrading.

Thanks boss,
St.Ack


Re: Slow scanning for PrefixFilter on EncodedBlocks

2012-10-17 Thread J Mohamed Zahoor
Sorry for the delay.

It looks like the problem is because of the PrefixFilter...
I assumed that it does a seek...

If I use startRow instead, it works fine. But is that the correct approach?
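
For concreteness, a minimal untested sketch of the two approaches being compared here (table name, prefix, and class name are placeholders; standard 0.94 client API assumed):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.PrefixFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class PrefixScanExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable");   // placeholder table name
    byte[] prefix = Bytes.toBytes("AA");          // placeholder row-key prefix

    // PrefixFilter alone: the scan still starts at the first row of the table
    // and evaluates rows one by one, which is what makes it slow.
    Scan filterOnly = new Scan();
    filterOnly.setFilter(new PrefixFilter(prefix));

    // startRow seeded with the prefix: the region server seeks straight to the
    // first matching row; the filter (or a stop row) ends the scan after it.
    Scan seeded = new Scan();
    seeded.setStartRow(prefix);
    seeded.setFilter(new PrefixFilter(prefix));

    ResultScanner scanner = table.getScanner(seeded);
    try {
      for (Result r : scanner) {
        System.out.println(Bytes.toString(r.getRow()));
      }
    } finally {
      scanner.close();
      table.close();
    }
  }
}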

./zahoor


On Wed, Oct 17, 2012 at 3:38 AM, lars hofhansl lhofha...@yahoo.com wrote:

 I reopened HBASE-6577



 - Original Message -
 From: lars hofhansl lhofha...@yahoo.com
 To: user@hbase.apache.org user@hbase.apache.org; lars hofhansl 
 lhofha...@yahoo.com
 Cc:
 Sent: Tuesday, October 16, 2012 2:39 PM
 Subject: Re: Slow scanning for PrefixFilter on EncodedBlocks

 Looks like this is exactly the scenario I was trying to optimize with
 HBASE-6577. Hmm...
 
 From: lars hofhansl lhofha...@yahoo.com
 To: user@hbase.apache.org user@hbase.apache.org
 Sent: Tuesday, October 16, 2012 12:21 AM
 Subject: Re: Slow scanning for PrefixFilter on EncodedBlocks

 PrefixFilter does not do any seeking by itself, so I doubt this is related
 to HBASE-6757.
 Does this only happen with FAST_DIFF compression?


 If you can create an isolated test program (that sets up the scenario and
 then runs a scan with the filter such that it is very slow), I'm happy to
 take a look.

 -- Lars



 - Original Message -
 From: J Mohamed Zahoor jmo...@gmail.com
 To: user@hbase.apache.org user@hbase.apache.org
 Cc:
 Sent: Monday, October 15, 2012 10:27 AM
 Subject: Re: Slow scanning for PrefixFilter on EncodedBlocks

 Is this related to HBASE-6757 ?
 I use a filter list with
   - prefix filter
   - filter list of column filters

 /zahoor

 On Monday, October 15, 2012, J Mohamed Zahoor wrote:

  Hi
 
  My scanner performance is very slow when using a PrefixFilter on an
  **Encoded Column** (encoded using FAST_DIFF both in memory and on disk).
  I am using HBase 0.94.1.

  jstack shows that much time is spent seeking to the row.
  Even if I give an exact row key match in the prefix filter, it takes about
  two minutes to return a single row.
  Running this multiple times also seems to be redirecting things to disk
  (loadBlock).
 
 
  at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.loadBlockAndSeekToKey(HFileReaderV2.java:1027)
  at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:461)
  at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:493)
  at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:242)
  at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:167)
  at org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:54)
  at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:521)
  - locked 0x00059584fab8 (a org.apache.hadoop.hbase.regionserver.StoreScanner)
  at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:402)
  - locked 0x00059584fab8 (a org.apache.hadoop.hbase.regionserver.StoreScanner)
  at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRow(HRegion.java:3507)
  at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3455)
  at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3406)
  - locked 0x00059589bb30 (a org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl)
  at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3423)
 
   If I set the start and end row to the same row in the scan... it comes back
   very quickly.
 
  Saw this link
 
 http://search-hadoop.com/m/9f0JH1Kz24U1subj=Re+HBase+0+94+2+SNAPSHOT+Scanning+Bug
  But it looks like things are fine in 94.1.
 
  Any pointers on why this is slow?
 
 
   Note: the row does not have many columns (5, each under a KB) but has lots
   of versions (1500+)
 
  ./zahoor
 
 
 




Re: Slow scanning for PrefixFilter on EncodedBlocks

2012-10-17 Thread J Mohamed Zahoor
First I upgraded my cluster to 0.94.2, but even then the problem persisted.
Then I moved to using startRow instead of the prefix filter.


./zahoor

On Wed, Oct 17, 2012 at 2:12 PM, J Mohamed Zahoor jmo...@gmail.com wrote:

 Sorry for the delay.

 It looks like the problem is because of PrefixFilter...
 I assumed that i does a seek...

 If i use startRow instead.. it works fine.. But is it the correct approach?

 ./zahoor


 On Wed, Oct 17, 2012 at 3:38 AM, lars hofhansl lhofha...@yahoo.comwrote:

 I reopened HBASE-6577



 - Original Message -
 From: lars hofhansl lhofha...@yahoo.com
 To: user@hbase.apache.org user@hbase.apache.org; lars hofhansl 
 lhofha...@yahoo.com
 Cc:
 Sent: Tuesday, October 16, 2012 2:39 PM
 Subject: Re: Slow scanning for PrefixFilter on EncodedBlocks

 Looks like this is exactly the scenario I was trying to optimize with
 HBASE-6577. Hmm...
 
 From: lars hofhansl lhofha...@yahoo.com
 To: user@hbase.apache.org user@hbase.apache.org
 Sent: Tuesday, October 16, 2012 12:21 AM
 Subject: Re: Slow scanning for PrefixFilter on EncodedBlocks

 PrefixFilter does not do any seeking by itself, so I doubt this is
 related to HBASE-6757.
 Does this only happen with FAST_DIFF compression?


 If you can create an isolated test program (that sets up the scenario and
 then runs a scan with the filter such that it is very slow), I'm happy to
 take a look.

 -- Lars



 - Original Message -
 From: J Mohamed Zahoor jmo...@gmail.com
 To: user@hbase.apache.org user@hbase.apache.org
 Cc:
 Sent: Monday, October 15, 2012 10:27 AM
 Subject: Re: Slow scanning for PrefixFilter on EncodedBlocks

 Is this related to HBASE-6757 ?
 I use a filter list with
   - prefix filter
   - filter list of column filters

 /zahoor

 On Monday, October 15, 2012, J Mohamed Zahoor wrote:

  Hi
 
  My scanner performance is very slow when using a Prefix filter on a
  **Encoded Column** ( encoded using FAST_DIFF on both memory and disk).
  I am using 94.1 hbase.
 
  jstack shows that much time is spent on seeking the row.
  Even if i give a exact row key match in the prefix filter it takes about
  two minutes to return a single row.
  Running this multiple times also seems to be redirecting things to disk
  (loadBlock).
 
 
  at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.loadBlockAndSeekToKey(HFileReaderV2.java:1027)
  at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:461)
  at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:493)
  at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:242)
  at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:167)
  at org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:54)
  at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:521)
  - locked 0x00059584fab8 (a org.apache.hadoop.hbase.regionserver.StoreScanner)
  at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:402)
  - locked 0x00059584fab8 (a org.apache.hadoop.hbase.regionserver.StoreScanner)
  at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRow(HRegion.java:3507)
  at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3455)
  at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3406)
  - locked 0x00059589bb30 (a org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl)
  at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3423)
 
  If is set the start and end row as same row in scan ... it come in very
  quick.
 
  Saw this link
 
 http://search-hadoop.com/m/9f0JH1Kz24U1subj=Re+HBase+0+94+2+SNAPSHOT+Scanning+Bug
  But it looks like things are fine in 94.1.
 
  Any pointers on why this is slow?
 
 
  Note: the row has not many columns(5 and less than a kb) and lots of
  versions (1500+)
 
  ./zahoor
 
 
 





hdfs upgrade timing

2012-10-17 Thread Amit Sela
Hi all,

Does anyone have a clue how long it takes to upgrade HDFS from 0.20.3 to
1.0.3?

And to upgrade HBase from 0.90.2 to 0.94 ?

I guess it depends on cluster size, so let's say 1TB.

Thanks,

Amit.


Re: ANN: HBase 0.94.2 is available for download

2012-10-17 Thread Jean-Marc Spaggiari
Thanks. I tried running some MR jobs using the 0.94.2 jar on a 0.94.0
cluster and it's working fine.
To install it on the cluster I did a full install on the nodes
and restarted them one by one. It seems to have worked fine.

The only issue was with the master, since I don't have a secondary
master. I was on 0.94.0, so HBASE-6710 had no impact on me.

2012/10/17, Stack st...@duboce.net:
 On Tue, Oct 16, 2012 at 7:00 AM, Jean-Marc Spaggiari
 jean-m...@spaggiari.org wrote:
 Hi St.Atck,

 Is the foll out upgrade process documented anywhere? I looked at the
 book by only find upgrade from 0.90 to 0.92. Can you point me to
 something? If there is no documentation yet, can someone draft the
 steps here so I can propose an update to the online book?


 Thanks Jean-Marc.

 You should be able to do a rolling restart from 0.92.x to 0.94.x.  Its
 a bug if you can't.  There is no entry in the reference guide but
 there should be if only to say this... You might want to also call out
 https://issues.apache.org/jira/browse/HBASE-6710.  Folks should be
 conscious of its implications upgrading.

 Thanks boss,
 St.Ack



RE: ANN: HBase 0.94.2 is available for download

2012-10-17 Thread Ramkrishna.S.Vasudevan
Thanks Jean for your update.

Regards
Ram

 -Original Message-
 From: Jean-Marc Spaggiari [mailto:jean-m...@spaggiari.org]
 Sent: Wednesday, October 17, 2012 5:14 PM
 To: user@hbase.apache.org
 Subject: Re: ANN: HBase 0.94.2 is available for download
 
 Thanks. I tried to call some MR using the 0.94.2 jar on a 0.94.0
 cluster and it's working fine.
 To install it on the cluster I have done a full install on the nodes
 and re-started them one by one. Seems it worked fine.
 
 The only issue was with the master since I don't have a secondary
 master. I was on 0.94.0 so HBASE-6710 had no impact to me.
 
 2012/10/17, Stack st...@duboce.net:
  On Tue, Oct 16, 2012 at 7:00 AM, Jean-Marc Spaggiari
  jean-m...@spaggiari.org wrote:
  Hi St.Atck,
 
  Is the foll out upgrade process documented anywhere? I looked at the
  book by only find upgrade from 0.90 to 0.92. Can you point me to
  something? If there is no documentation yet, can someone draft the
  steps here so I can propose an update to the online book?
 
 
  Thanks Jean-Marc.
 
  You should be able to do a rolling restart from 0.92.x to 0.94.x.
 Its
  a bug if you can't.  There is no entry in the reference guide but
  there should be if only to say this... You might want to also call
 out
  https://issues.apache.org/jira/browse/HBASE-6710.  Folks should be
  conscious of its implications upgrading.
 
  Thanks boss,
  St.Ack
 



Re: how to get the timestamp of hfile, like when it's generated.

2012-10-17 Thread yun peng
Hi, Ram and Stack,
It works for me, after an initial try-out. Though the name seems a bit
confusing, as an hfile in HBase is immutable and not supposed to be modified
afterwards. :-)
Regards,
Yun

On Wed, Oct 17, 2012 at 1:09 AM, Stack st...@duboce.net wrote:

 On Tue, Oct 16, 2012 at 9:06 PM, Ramkrishna.S.Vasudevan
 ramkrishna.vasude...@huawei.com wrote:
  Hi Yun Peng
 
  You want to know the creation time? I could see the getModificationTime()
  api.  Internally it is used to get a store file with minimum timestamp.
  I have not tried it out.  Let me know if it solves your purpose.
  Just try it out.
 

  To add to Ram's response, if you have a reader on the file, you can
  get some of its attributes (which I believe come from the metadata block
  -- you should check the code) and you can iterate the metadata too,
  which has stuff like the first and last keys, average key size, etc.  See here:

  http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/io/hfile/HFile.Reader.html
  To look at a particular file's metadata, see the HFile tool at
  9.7.5.2.2. HFile Tool in the reference guide.
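
As a rough, untested sketch of reading those attributes programmatically (method names should be double-checked against the HFile.Reader javadoc linked above; the path argument is a placeholder):

import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.io.hfile.CacheConfig;
import org.apache.hadoop.hbase.io.hfile.HFile;
import org.apache.hadoop.hbase.util.Bytes;

public class HFileInfoDump {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    FileSystem fs = FileSystem.get(conf);
    Path hfilePath = new Path(args[0]);   // e.g. an hfile under /hbase/<table>/<region>/<cf>/

    HFile.Reader reader = HFile.createReader(fs, hfilePath, new CacheConfig(conf));
    try {
      // The file-info block holds metadata written at flush/compaction time.
      Map<byte[], byte[]> fileInfo = reader.loadFileInfo();
      for (Map.Entry<byte[], byte[]> entry : fileInfo.entrySet()) {
        System.out.println(Bytes.toStringBinary(entry.getKey()) + " = "
            + Bytes.toStringBinary(entry.getValue()));
      }
      System.out.println("first row: " + Bytes.toStringBinary(reader.getFirstRowKey()));
      System.out.println("last row:  " + Bytes.toStringBinary(reader.getLastRowKey()));
      System.out.println("entries:   " + reader.getEntries());
    } finally {
      reader.close();
    }
  }
}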

 St.Ack



Where is code in hbase that physically delete a record?

2012-10-17 Thread yun peng
Hi, All,
I want to find the internal code in HBase where the physical deletion of a
record occurs.

-some of my understanding.
Correct me if I am wrong. (It is largely based on my experience and even
speculation.) Logically deleting a KeyValue in HBase is performed by
writing a tombstone marker (via a Delete() per record) or by setting TTL/max_versions
(per Store). After these actions, however, the physical data is still
there, somewhere in the system. Physically deleting a record in HBase is
realised by *a scanner discarding a KeyValue record* during a
major_compact.

-what I need
I want to extend HBase to associate some actions with physically deleting a
record. Does HBase provide such a hook (or coprocessor API) to inject code
for each KV record that is skipped by the HBase StoreScanner during major_compact?
If not, does anyone know where I should look in HBase (0.94.2) for such a
code modification?

Thanks.
Yun


RE: Where is code in hbase that physically delete a record?

2012-10-17 Thread Anoop Sam John
You can see the code in ScanQueryMatcher.
Basically, during a major compaction a scan happens over all the files...
As per the delete markers, the deleted KVs won't come out of the scanner and
thus get eliminated.  Also, in the case of a major compaction the delete markers
themselves will get deleted (though there are still some more complicated
conditions around this, like keep-deleted-cells and the time to purge deletes, etc.)
I would say check the code in that class...

-Anoop-

From: yun peng [pengyunm...@gmail.com]
Sent: Wednesday, October 17, 2012 5:54 PM
To: user@hbase.apache.org
Subject: Where is code in hbase that physically delete a record?

Hi, All,
I want to find internal code in hbase where physical deleting a record
occurs.

-some of my understanding.
Correct me if I am wrong. (It is largely based on my experience and even
speculation.) Logically deleting a KeyValue data in hbase is performed by
marking tombmarker (by Delete() per records) or setting TTL/max_version
(per Store). After these actions, however, the physical data are still
there, somewhere in the system. Physically deleting a record in hbase is
realised by *a scanner to discard a keyvalue data record* during the
major_compact.

-what I need
I want to extend hbase to associate some actions with physically deleting a
record. Does hbase provide such hook (or coprocessor API) to inject code
for each KV record that is skipped by hbase storescanner in major_compact.
If not, anyone knows where should I look into in hbase (-0.94.2) for such
code modification?

Thanks.
Yun

RE: Where is code in hbase that physically delete a record?

2012-10-17 Thread Ramkrishna.S.Vasudevan
Hi Yun
Logically deleting a KeyValue data in hbase is performed
 by
 marking tombmarker (by Delete() per records) or setting TTL/max_version
 (per Store). After these actions, however, the physical data are still
 there, somewhere in the system. Physically deleting a record in hbase
 is
 realised by *a scanner to discard a keyvalue data record* during the
 major_compact.
Yes, correct.  As you understood, the major compaction will try to
avoid the deleted records when the KVs are copied
into a new file.

In 0.94.2 there are some new hooks added, like preCompactScannerOpen, where
you can plug in your own scanner implementation.
This will help you write custom logic for which KVs to skip during
compaction.  For example, say you don't want any KV where a specific column c1 =
'a'.
Then you can write your own scanner and pass it through preCompactScannerOpen.

But if it is the system itself that is skipping the KVs that got deleted,
then currently there is no hook provided in CPs to get those values
specifically.
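
As a rough, untested sketch of the kind of thing you can do: this one uses the closely related preCompact hook, which wraps the compaction scanner, rather than preCompactScannerOpen, which replaces how the scanner is built. The c1='a' rule, the class name, and the exact set of InternalScanner methods should be checked against your 0.94.2 source.

import java.io.IOException;
import java.util.Iterator;
import java.util.List;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.InternalScanner;
import org.apache.hadoop.hbase.regionserver.Store;
import org.apache.hadoop.hbase.util.Bytes;

// Wraps the compaction scanner so custom logic can drop KVs as they are rewritten.
public class CompactionFilterObserver extends BaseRegionObserver {

  @Override
  public InternalScanner preCompact(ObserverContext<RegionCoprocessorEnvironment> e,
      Store store, final InternalScanner scanner) {
    // KVs removed from 'results' never make it into the compacted file.
    // Deleted cells never reach this point -- ScanQueryMatcher has already
    // dropped them inside the wrapped scanner.
    return new InternalScanner() {
      public boolean next(List<KeyValue> results) throws IOException {
        boolean more = scanner.next(results);
        dropUnwanted(results);
        return more;
      }
      public boolean next(List<KeyValue> results, String metric) throws IOException {
        return next(results);                 // metric ignored in this sketch
      }
      public boolean next(List<KeyValue> results, int limit) throws IOException {
        boolean more = scanner.next(results, limit);
        dropUnwanted(results);
        return more;
      }
      public boolean next(List<KeyValue> results, int limit, String metric) throws IOException {
        return next(results, limit);          // metric ignored in this sketch
      }
      public void close() throws IOException {
        scanner.close();
      }
    };
  }

  // Example rule from the mail above: drop any KV where column c1 = 'a'.
  private static void dropUnwanted(List<KeyValue> results) {
    for (Iterator<KeyValue> it = results.iterator(); it.hasNext();) {
      KeyValue kv = it.next();
      if (Bytes.equals(kv.getQualifier(), Bytes.toBytes("c1"))
          && Bytes.equals(kv.getValue(), Bytes.toBytes("a"))) {
        it.remove();
      }
    }
  }
}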

Hope this helps.

Regards
Ram

 -Original Message-
 From: yun peng [mailto:pengyunm...@gmail.com]
 Sent: Wednesday, October 17, 2012 5:54 PM
 To: user@hbase.apache.org
 Subject: Where is code in hbase that physically delete a record?
 
 Hi, All,
 I want to find internal code in hbase where physical deleting a record
 occurs.
 
 -some of my understanding.
 Correct me if I am wrong. (It is largely based on my experience and
 even
 speculation.) Logically deleting a KeyValue data in hbase is performed
 by
 marking tombmarker (by Delete() per records) or setting TTL/max_version
 (per Store). After these actions, however, the physical data are still
 there, somewhere in the system. Physically deleting a record in hbase
 is
 realised by *a scanner to discard a keyvalue data record* during the
 major_compact.
 
 -what I need
 I want to extend hbase to associate some actions with physically
 deleting a
 record. Does hbase provide such hook (or coprocessor API) to inject
 code
 for each KV record that is skipped by hbase storescanner in
 major_compact.
 If not, anyone knows where should I look into in hbase (-0.94.2) for
 such
 code modification?
 
 Thanks.
 Yun



RE: Where is code in hbase that physically delete a record?

2012-10-17 Thread Ramkrishna.S.Vasudevan
Also, to see the code for how the delete happens, please refer to StoreScanner.java
and to how ScanQueryMatcher.match() works.

That is where we decide if any KV has to be skipped due to an already-written
tombstone marker.

Forgot to tell you about this.

Regards
Ram

 -Original Message-
 From: yun peng [mailto:pengyunm...@gmail.com]
 Sent: Wednesday, October 17, 2012 5:54 PM
 To: user@hbase.apache.org
 Subject: Where is code in hbase that physically delete a record?
 
 Hi, All,
 I want to find internal code in hbase where physical deleting a record
 occurs.
 
 -some of my understanding.
 Correct me if I am wrong. (It is largely based on my experience and
 even
 speculation.) Logically deleting a KeyValue data in hbase is performed
 by
 marking tombmarker (by Delete() per records) or setting TTL/max_version
 (per Store). After these actions, however, the physical data are still
 there, somewhere in the system. Physically deleting a record in hbase
 is
 realised by *a scanner to discard a keyvalue data record* during the
 major_compact.
 
 -what I need
 I want to extend hbase to associate some actions with physically
 deleting a
 record. Does hbase provide such hook (or coprocessor API) to inject
 code
 for each KV record that is skipped by hbase storescanner in
 major_compact.
 If not, anyone knows where should I look into in hbase (-0.94.2) for
 such
 code modification?
 
 Thanks.
 Yun



RE: Regionservers not connecting to master

2012-10-17 Thread Ramkrishna.S.Vasudevan
Can you try starting some of the regionservers that are not connecting at
all?  Maybe start 2 of them.
Observe the master logs.  See whether it says
'Waiting for RegionServers to checkin'.

Just to confirm, are your ZK IP and port correct throughout the cluster? If
it is a multitenant cluster, then maybe the other regionservers are connecting
to some other ZK cluster?
Wild guess :)

Regards
Ram
 -Original Message-
 From: Dan Brodsky [mailto:danbrod...@gmail.com]
 Sent: Wednesday, October 17, 2012 6:31 PM
 To: user@hbase.apache.org
 Subject: Regionservers not connecting to master
 
 Good morning,
 
 I have a 10 node Hadoop/Hbase cluster, plus a namenode VM, plus three
 Zookeeper quorum peers (one on the namenode, one on a dedicated ZK
 peer VM, and one on a third box). All 10 HDFS datanodes are also Hbase
 regionservers.
 
 Several weeks ago, we had six HDFS datanodes go offline suddenly (with
 no meaningful error messages), and since then, I have been unable to
 get all 10 regionservers to connect to the Hbase master. I've tried
 bringing the cluster down and rebooting all the boxes, but no joy. The
 machines are all running, and hbase-regionserver appears to start
 normally on each one.
 
 Right now, my master status page (http://namenode:60010) shows 3
 regionservers online. There are also dozens of regions in transition
 listed on the status page (in the PENDING_OPEN state), but each of
 those are on one of the regionservers already online.
 
 The 7 other regionservers' log files show a successful connection to
 one ZK peer, followed by a regular trail of these messages:
 
 2012-10-17 12:36:08,394 DEBUG
 org.apache.hadoop.hbase.io.hfile.LruBlockCache: LRU Stats: total=8.17
 MB, free=987.67 MB, max=995.84 MB, blocks=0, accesses=0, hits=0,
 hitRatio=0cachingAccesses=0, cachingHits=0,
 cachingHitsRatio=0evictions=0, evicted=0, evictedPerRun=NaN
 
 If I had to wager a guess, it seems like the 7 offline regionservers
 are not connecting to other ZK peers, but there isn't anything in the
 ZK logs to indicate why.
 
 Thoughts?
 
 Dan



Re: Merge large number of regions

2012-10-17 Thread Kevin O'dell
Shrijeet,

  Here is a thread on doing a proper incremental with Import:

http://hadoop-hbase.blogspot.com/2012/04/timestamp-consistent-backups-in-hbase.html
I am a fan of this one as it is well laid out.  Breaking this up for your
use case should be pretty easy.

CopyTable should work just as easily -
http://www.cloudera.com/blog/2012/06/online-hbase-backups-with-copytable-2/


If you follow the above it is really going to be a matter of preference.
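
For the pre-split step, a minimal untested sketch (table name, family, and split points are placeholders; pick splits that match the old table's key distribution):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class CreatePresplitTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);

    HTableDescriptor desc = new HTableDescriptor("mytable_new"); // placeholder name
    desc.addFamily(new HColumnDescriptor("cf"));                 // placeholder family

    // Split points chosen up front: here, 16 regions over a hex-prefixed key space.
    byte[][] splits = new byte[15][];
    for (int i = 1; i <= 15; i++) {
      splits[i - 1] = Bytes.toBytes(Integer.toHexString(i));
    }

    admin.createTable(desc, splits);
    admin.close();
  }
}

The destination table can then be filled with CopyTable or Export/Import as described in the links above, and the regions come out close to the size you want without any manual merging.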

On Tue, Oct 16, 2012 at 1:16 PM, Shrijeet Paliwal
shrij...@rocketfuel.comwrote:

 Hi Kevin,

 Thanks for answering. What are your thoughts on copyTable vs export-import
 considering my use case. Will one tool have lesser chance of copying
 inconsistent data over another?

 I wish to do increment copy of a live cluster to minimize downtime.

 On Tue, Oct 16, 2012 at 8:47 AM, Kevin O'dell kevin.od...@cloudera.com
 wrote:

  Shrijeet,
 
   I think a better approach would be a pre-split table and then do the
  export/import.  This will save you from having to script the merges,
 which
  can be end badly for META if done wrong.
 
  On Mon, Oct 15, 2012 at 5:31 PM, Shrijeet Paliwal
  shrij...@rocketfuel.comwrote:
 
   We moved to 0.92.2 some time ago and with that, increased the max file
  size
   setting to 4GB (from 2GB). Also an application triggered cleanup
  operation
   deleted lots of unwanted rows.
   These two combined have gotten us to a state where lots of regions are
   smaller than desired size.
  
   Merging regions two at a time seems time consuming and will be hard to
   automate. https://issues.apache.org/jira/browse/HBASE-1621 automates
   merging, but it is not stable.
  
   I am interested in knowing about other possible approaches folks have
   tried. What do you guys think about copyTable based approach ? (old
   ---copyTable--- new and then rename new to old)
  
   -Shrijeet
  
 
 
 
  --
  Kevin O'Dell
  Customer Operations Engineer, Cloudera
 




-- 
Kevin O'Dell
Customer Operations Engineer, Cloudera


Re: crafting your key - scan vs. get

2012-10-17 Thread Michael Segel
Neil, 


Since you asked 
Actually your question is kind of a boring question. ;-) [Note I will probably 
get flamed for saying it, even if it is the truth!]

Having said that...
Boring as it is, it's an important topic that many still seem to trivialize in
terms of its impact on performance.

Before answering your question, let's take a step back and ask a more important
question...
What data do you want to capture and store in HBase?
and then ask yourself...
How do I plan on accessing the data?

From what I can tell, you want to track certain events made by a user. 
So you're recording at Time X, user A did something. 

Then the question is how do you want to access the data.

Do you primarily say "Show me all the events in the past 15 minutes and
organize them by user"?
Or do you say "Show me the most recent events by user A"?

Here's the issue. 

If you are more interested in, and will frequently ask, the question "Show me the
most recent events by user A",

Then you would want to do the following:
Key = User ID (hashed if necessary) 
Column Family: Data (For lack of a better name) 

Then store each event in a separate column where the column name is something 
like event + (max Long - Time Stamp) .

This will place the most recent event first.

The reason I say "event" + the long is that you may want to place user-specific
information in a column and you would want to make sure it was in
front of the event data.
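
A minimal, untested sketch of that layout (table name, family name, and values are placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class UserEventExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "user_events");   // placeholder table name
    byte[] cf = Bytes.toBytes("data");                // the "Data" family from above

    byte[] rowKey = Bytes.toBytes("userA");           // hash it in practice if keys are skewed
    long ts = System.currentTimeMillis();

    // Qualifier = "event" + (Long.MAX_VALUE - timestamp): newest events sort first
    // within the row, after any plain user-info columns that precede "event...".
    byte[] qualifier = Bytes.add(Bytes.toBytes("event"), Bytes.toBytes(Long.MAX_VALUE - ts));
    Put put = new Put(rowKey);
    put.add(cf, qualifier, Bytes.toBytes("clicked-something"));
    table.put(put);

    // Fetch the whole row (all events for the user) with a single Get.
    Result r = table.get(new Get(rowKey));
    System.out.println("columns for userA: " + r.size());

    table.close();
  }
}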

Now if your access pattern were more along the lines of "show me the events that
occurred in the past 15 minutes", then you would use the time stamp and then
have to worry about hot spotting and region splits. But then you could get your
data from a simple start/stop row scan.

In the first case, you can use get(); while it is still a scan under the covers,
it's a very efficient fetch.
In the second, you will always need to do a scan.

Having said that, there are other things to think about including frequency and 
how wide your rows will get over time. 
(Mainly in terms of the first example I gave.) 

The reason I said that your question is boring is that it's been asked numerous
times, and every time it's asked, the initial question doesn't provide enough
information to actually give a good answer...

HTH

-Mike



On Oct 16, 2012, at 4:53 PM, Neil Yalowitz neilyalow...@gmail.com wrote:

 Hopefully this is a fun question.  :)
 
 Assume you could architect an HBase table from scratch and you were
 choosing between the following two key structures.
 
 1)
 
 The first structure creates a unique row key for each PUT.  The rows are
 events related to a user ID.  There may be up to several hundred events for
 each user ID (probably not thousands, an average of perhaps ~100 events per
 user).  Each key would be made unique with a reverse-order-timestamp or
 perhaps just random characters (we don't particularly care about using ROT
 for sorting newest here).
 
 key
 
 AA + some-unique-chars
 
 The table will look like this:
 
 key   vals  cf:mycfts
 ---
 AA... myval1 1350345600
 AA... myval2 1350259200
 AA... myval3 1350172800
 
 
 Retrieving these values will use a Scan with startRow and stopRow.  In
 hbase shell, it would look like:
 
 $ scan 'mytable',{STARTROW='AA', ENDROW='AA_'}
 
 
 2)
 
 The second structure choice uses only the user ID as the key and relies on
 row versions to store all the events.  For example:
 
 key   vals   cf:mycf ts
 -
 AAmyval1   1350345600
 AAmyval2   1350259200
 AAmyval3   1350172800
 
 Retrieving these values will use a Get with VERSIONS = somebignumber.  In
 hbase shell, it would look like:
 
 $ get 'mytable','AA',{COLUMN='cf:mycf', VERSIONS=999}
 
 ...although this probably violates a comment in the HBase documentation:
 
 It is not recommended setting the number of max versions to an exceedingly
 high level (e.g., hundreds or more) unless those old values are very dear
 to you because this will greatly increase StoreFile size.
 
 ...found here: http://hbase.apache.org/book/schema.versions.html
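
For reference, a rough Java equivalent of the option-2 shell get above (untested sketch; names copied from the example):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class VersionsAsEvents {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable");

    // Option 2: one row per user, one column, many versions.
    Get get = new Get(Bytes.toBytes("AA"));
    get.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("mycf"));
    get.setMaxVersions(999);                  // pull back all stored versions

    Result r = table.get(get);
    for (KeyValue kv : r.raw()) {
      System.out.println(kv.getTimestamp() + " -> " + Bytes.toString(kv.getValue()));
    }
    table.close();
  }
}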
 
 
 So, are there any performance considerations between Scan vs. Get in this
 use case?  Which choice would you go for?
 
 
 
 Neil Yalowitz
 neilyalow...@gmail.com



Re: Where is code in hbase that physically delete a record?

2012-10-17 Thread yun peng
Hi, Ram and Anoop, Thanks for the nice reference on the java file, which I
will check through.

It is interesting to know about the recent feature on
preCompactScannerOpen() hook. Ram, it would be nice if I can know how to
specify conditions like c1 = 'a'.  I have also checked the example code in
hbase 6496 link https://issues.apache.org/jira/browse/HBASE-6496. which
show how to delete data before time as in a on-demand specification...
Cheers,
Yun

On Wed, Oct 17, 2012 at 8:46 AM, Ramkrishna.S.Vasudevan 
ramkrishna.vasude...@huawei.com wrote:

 Also to see the code how the delete happens pls refer to StoreScanner.java
 and how the ScanQueryMatcher.match() works.

 That is where we decide if any kv has to be avoided due to already deleted
 tombstone marker.

 Forgot to tell you about this.

 Regards
 Ram

  -Original Message-
  From: yun peng [mailto:pengyunm...@gmail.com]
  Sent: Wednesday, October 17, 2012 5:54 PM
  To: user@hbase.apache.org
  Subject: Where is code in hbase that physically delete a record?
 
  Hi, All,
  I want to find internal code in hbase where physical deleting a record
  occurs.
 
  -some of my understanding.
  Correct me if I am wrong. (It is largely based on my experience and
  even
  speculation.) Logically deleting a KeyValue data in hbase is performed
  by
  marking tombmarker (by Delete() per records) or setting TTL/max_version
  (per Store). After these actions, however, the physical data are still
  there, somewhere in the system. Physically deleting a record in hbase
  is
  realised by *a scanner to discard a keyvalue data record* during the
  major_compact.
 
  -what I need
  I want to extend hbase to associate some actions with physically
  deleting a
  record. Does hbase provide such hook (or coprocessor API) to inject
  code
  for each KV record that is skipped by hbase storescanner in
  major_compact.
  If not, anyone knows where should I look into in hbase (-0.94.2) for
  such
  code modification?
 
  Thanks.
  Yun




HBase Issue after Namenode Formating

2012-10-17 Thread Pankaj Misra
Hi All,

I have been using HBase 0.94.1 with Hadoop 0.23.1. I had the complete setup in
pseudo-distributed mode, and it was running fine.

Today I had to reset and reformat the namenode and clear all the data as well,
to have a fresh run, but since then HBase has not been working as expected. For
instance, I am able to list the tables but cannot create a new one; it reports
that the Master is initializing, as seen from the exception below:

hbase(main):004:0> create 'testrun','samplecf'

ERROR: org.apache.hadoop.hbase.PleaseHoldException: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:1626)
at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1154)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1400)

In order to resolve this I have even tried to delete the root so that HBase can
recreate it fresh. HBase is able to recreate the ROOT, but this issue persists.
I am also not able to connect via the native Java client, as it does not get a
reference to the Master.

Calling for help, as I think I am missing something here.

Thanks and Regards
Pankaj Misra





RE: HBase Issue after Namenode Formating

2012-10-17 Thread Pankaj Misra

Additionally I see the following error in the HBase Master logs

2012-10-17 21:15:33,043 ERROR 
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node /hbase/master 
already exists and this is not a retry
2012-10-17 21:15:33,045 INFO 
org.apache.hadoop.hbase.master.ActiveMasterManager: Adding ZNode for 
/hbase/backup-masters/localhost,6,1350488732582 in backup master directory
2012-10-17 21:15:33,083 INFO 
org.apache.hadoop.hbase.master.ActiveMasterManager: Current master has this 
master's address, localhost,6,1350487401859; master was restarted? Deleting 
node.
2012-10-17 21:15:33,084 DEBUG 
org.apache.hadoop.hbase.master.ActiveMasterManager: No master available. 
Notifying waiting threads
2012-10-17 21:15:33,084 INFO org.apache.hadoop.hbase.master.HMaster: Cluster 
went down before this master became active
2012-10-17 21:15:33,084 DEBUG org.apache.hadoop.hbase.master.HMaster: Stopping 
service threads

Thanks and Regards
Pankaj Misra

From: Pankaj Misra
Sent: Wednesday, October 17, 2012 9:05 PM
To: user@hbase.apache.org
Subject: HBase Issue after Namenode Formating

Hi All,

I have been using HBase 0.94.1 with Hadoop 0.23.1. I had the complete setup in 
a pseudo-distributed mode, and was running fine.

Today I had to reset and reformat the namenode and clear all the data as well 
to have a fresh run, but since then Hbase is not working as expected. For 
instance I am able to list the tables but cannot create new one, it reports 
that Master is initializing, as seen from the exception below

hbase(main):004:0 create 'testrun','samplecf'

ERROR: org.apache.hadoop.hbase.PleaseHoldException: 
org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
at 
org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:1626)
at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1154)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
at 
org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1400)

In order to resolve this I have even tried to delete the root so that hbase can 
recreate it fresh. HBase is able to recreate the ROOT but this issue persists. 
I am also not able to connect via native java client, as it does not get 
reference of the Master.

Calling for help, as I think I am missing something here.

Thanks and Regards
Pankaj Misra





columnprefix filter in rest java api

2012-10-17 Thread Erman Pattuk
I'm using HBase 0.94.1. I have to use the REST Java API to connect to a
remote cluster, and I have to use filters (especially
ColumnPrefixFilter) on scan and get operations. But it always says the Scan
(or Get) operation does not support filters.


The question is: is that really the case? Or is there any way to use
ColumnPrefixFilter with the REST Java API?


Thanks,

Erman


Re: Slow scanning for PrefixFilter on EncodedBlocks

2012-10-17 Thread anil gupta
Hi Zahoor,

I heavily use the prefix filter. Every time, I have to explicitly define the
startRow, so that's the current behavior. However, initially this behavior
was confusing to me too.
I think that when a PrefixFilter is defined, the startRow could internally be
set to the prefix, with a user-defined startRow taking precedence over the
one derived from the PrefixFilter. If the current PrefixFilter were modified in
that way, it would eradicate this confusion regarding the performance of the
prefix filter.
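
The workaround being described, as a minimal untested sketch (table, row prefix, and column prefix are placeholders); the explicit setStartRow is the part that currently has to be done by hand:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.ColumnPrefixFilter;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.PrefixFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class PrefixPlusStartRow {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable");          // placeholder table name
    byte[] rowPrefix = Bytes.toBytes("row-prefix");      // placeholder row prefix

    FilterList filters = new FilterList(FilterList.Operator.MUST_PASS_ALL);
    filters.addFilter(new PrefixFilter(rowPrefix));
    filters.addFilter(new ColumnPrefixFilter(Bytes.toBytes("col-prefix")));

    Scan scan = new Scan();
    scan.setStartRow(rowPrefix);   // seek to the prefix instead of scanning from the table start
    scan.setFilter(filters);

    ResultScanner rs = table.getScanner(scan);
    try {
      for (Result r : rs) {
        System.out.println(Bytes.toString(r.getRow()));
      }
    } finally {
      rs.close();
      table.close();
    }
  }
}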

Thanks,
Anil Gupta

On Wed, Oct 17, 2012 at 3:44 AM, J Mohamed Zahoor jmo...@gmail.com wrote:

 First i upgraded my cluster to 94.2.. even then the problem persisted..
 Then i moved to using startRow instead of prefix filter..


 ,/zahoor

 On Wed, Oct 17, 2012 at 2:12 PM, J Mohamed Zahoor jmo...@gmail.com
 wrote:

  Sorry for the delay.
 
  It looks like the problem is because of PrefixFilter...
  I assumed that i does a seek...
 
  If i use startRow instead.. it works fine.. But is it the correct
 approach?
 
  ./zahoor
 
 
  On Wed, Oct 17, 2012 at 3:38 AM, lars hofhansl lhofha...@yahoo.com
 wrote:
 
  I reopened HBASE-6577
 
 
 
  - Original Message -
  From: lars hofhansl lhofha...@yahoo.com
  To: user@hbase.apache.org user@hbase.apache.org; lars hofhansl 
  lhofha...@yahoo.com
  Cc:
  Sent: Tuesday, October 16, 2012 2:39 PM
  Subject: Re: Slow scanning for PrefixFilter on EncodedBlocks
 
  Looks like this is exactly the scenario I was trying to optimize with
  HBASE-6577. Hmm...
  
  From: lars hofhansl lhofha...@yahoo.com
  To: user@hbase.apache.org user@hbase.apache.org
  Sent: Tuesday, October 16, 2012 12:21 AM
  Subject: Re: Slow scanning for PrefixFilter on EncodedBlocks
 
  PrefixFilter does not do any seeking by itself, so I doubt this is
  related to HBASE-6757.
  Does this only happen with FAST_DIFF compression?
 
 
  If you can create an isolated test program (that sets up the scenario
 and
  then runs a scan with the filter such that it is very slow), I'm happy
 to
  take a look.
 
  -- Lars
 
 
 
  - Original Message -
  From: J Mohamed Zahoor jmo...@gmail.com
  To: user@hbase.apache.org user@hbase.apache.org
  Cc:
  Sent: Monday, October 15, 2012 10:27 AM
  Subject: Re: Slow scanning for PrefixFilter on EncodedBlocks
 
  Is this related to HBASE-6757 ?
  I use a filter list with
- prefix filter
- filter list of column filters
 
  /zahoor
 
  On Monday, October 15, 2012, J Mohamed Zahoor wrote:
 
   Hi
  
   My scanner performance is very slow when using a Prefix filter on a
   **Encoded Column** ( encoded using FAST_DIFF on both memory and disk).
   I am using 94.1 hbase.
  
   jstack shows that much time is spent on seeking the row.
   Even if i give a exact row key match in the prefix filter it takes
 about
   two minutes to return a single row.
   Running this multiple times also seems to be redirecting things to
 disk
   (loadBlock).
  
  
   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.loadBlockAndSeekToKey(HFileReaderV2.java:1027)
   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:461)
   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:493)
   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:242)
   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:167)
   at org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:54)
   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:521)
   - locked 0x00059584fab8 (a org.apache.hadoop.hbase.regionserver.StoreScanner)
   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:402)
   - locked 0x00059584fab8 (a org.apache.hadoop.hbase.regionserver.StoreScanner)
   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRow(HRegion.java:3507)
   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3455)
   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3406)
   - locked 0x00059589bb30 (a org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl)
   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3423)
  
   If is set the start and end row as same row in scan ... it come in
 very
   quick.
  
   Saw this link
  
 
 http://search-hadoop.com/m/9f0JH1Kz24U1subj=Re+HBase+0+94+2+SNAPSHOT+Scanning+Bug
   But it looks like things are fine in 94.1.
  
   Any pointers on why this is slow?
  
  
   Note: the row has not many columns(5 and less than a kb) and lots of
   versions (1500+)
  
   ./zahoor
  
  
  
 
 
 




-- 
Thanks  Regards,
Anil Gupta


RE: HBase Issue after Namenode Formating

2012-10-17 Thread Pankaj Misra


Hi All,

I have been using HBase 0.94.1 with Hadoop 0.23.1. I had the complete setup in 
a pseudo-distributed mode, and was running fine.

Today I had to reset and reformat the namenode and clear all the data as well 
to have a fresh run, but since then Hbase is not working as expected. For 
instance I am able to list the tables but cannot create new one, it reports 
that Master is initializing, as seen from the exception below

hbase(main):004:0 create 'testrun','samplecf'

ERROR: org.apache.hadoop.hbase.PleaseHoldException: 
org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
at 
org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:1626)
at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1154)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
at 
org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1400)

In order to resolve this I have even tried to delete the root so that hbase can 
recreate it fresh. HBase is able to recreate the ROOT but this issue persists. 
I am also not able to connect via native java client, as it does not get 
reference of the Master.

Additionally I see the following error in the HBase Master logs

2012-10-17 21:15:33,043 ERROR 
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node /hbase/master 
already exists and this is not a retry
2012-10-17 21:15:33,045 INFO 
org.apache.hadoop.hbase.master.ActiveMasterManager: Adding ZNode for 
/hbase/backup-masters/localhost,6,1350488732582 in backup master directory
2012-10-17 21:15:33,083 INFO 
org.apache.hadoop.hbase.master.ActiveMasterManager: Current master has this 
master's address, localhost,6,1350487401859; master was restarted? Deleting 
node.
2012-10-17 21:15:33,084 DEBUG 
org.apache.hadoop.hbase.master.ActiveMasterManager: No master available. 
Notifying waiting threads
2012-10-17 21:15:33,084 INFO org.apache.hadoop.hbase.master.HMaster: Cluster 
went down before this master became active
2012-10-17 21:15:33,084 DEBUG org.apache.hadoop.hbase.master.HMaster: Stopping 
service threads


Calling for help, as I think I am missing something here.

Thanks and Regards
Pankaj Misra





Re: Regionservers not connecting to master

2012-10-17 Thread Dan Brodsky
Ram,

Thanks for your suggestions.

The datanodes are all built using the same image, so I know they're
all pointed to the same ZK nodes.

I monitored all three ZK logs, the master log, and the regionserver
log for each RS I was trying to bring back online. I'm glad I have a
big screen. :-) Here is what I found:

Whenever a regionserver connects to one particular ZK peer *first*, it
never goes online. The ZK log shows a successful connection
negotiating a timeout value, and the RS's log shows a successful ZK
connection, but then it just sits there.

When a regionserver starts up and connects to one of the other two ZK
peers first, it connects to a second one successfully, then contacts
the master, and it comes up and all is happy.

So the problem of regionservers not connecting to master only happens
when the RS tries one particular ZK node as its first ZK connection.
But the logs aren't helpful for diagnosing further than that.

Additional thoughts?


On Wed, Oct 17, 2012 at 9:12 AM, Ramkrishna.S.Vasudevan
ramkrishna.vasude...@huawei.com wrote:
 Can you try like start any of the regionservers that are not connecting at
 all.  May be start 2 of them.
 Observer master logs.  See whether it says
 'Waiting for RegionServers to checkin'?.

 Just to confirm your ZK ip and port is correct thro out the cluster? If
 multitenant cluster then you may be the other regionservers are connecting
 to someother ZK cluster?
 Wild guess :)

 Regards
 Ram
 -Original Message-
 From: Dan Brodsky [mailto:danbrod...@gmail.com]
 Sent: Wednesday, October 17, 2012 6:31 PM
 To: user@hbase.apache.org
 Subject: Regionservers not connecting to master

 Good morning,

 I have a 10 node Hadoop/Hbase cluster, plus a namenode VM, plus three
 Zookeeper quorum peers (one on the namenode, one on a dedicated ZK
 peer VM, and one on a third box). All 10 HDFS datanodes are also Hbase
 regionservers.

 Several weeks ago, we had six HDFS datanodes go offline suddenly (with
 no meaningful error messages), and since then, I have been unable to
 get all 10 regionservers to connect to the Hbase master. I've tried
 bringing the cluster down and rebooting all the boxes, but no joy. The
 machines are all running, and hbase-regionserver appears to start
 normally on each one.

 Right now, my master status page (http://namenode:60010) shows 3
 regionservers online. There are also dozens of regions in transition
 listed on the status page (in the PENDING_OPEN state), but each of
 those are on one of the regionservers already online.

 The 7 other regionservers' log files show a successful connection to
 one ZK peer, followed by a regular trail of these messages:

 2012-10-17 12:36:08,394 DEBUG
 org.apache.hadoop.hbase.io.hfile.LruBlockCache: LRU Stats: total=8.17
 MB, free=987.67 MB, max=995.84 MB, blocks=0, accesses=0, hits=0,
 hitRatio=0cachingAccesses=0, cachingHits=0,
 cachingHitsRatio=0evictions=0, evicted=0, evictedPerRun=NaN

 If I had to wager a guess, it seems like the 7 offline regionservers
 are not connecting to other ZK peers, but there isn't anything in the
 ZK logs to indicate why.

 Thoughts?

 Dan



Re: Regionservers not connecting to master

2012-10-17 Thread Dan Brodsky
Well, slight change: only 1 of the ZK peers happens to work. When a RS
connects to the other 2, it doesn't go further than that. The 1 ZK
node that happens to work is the one that runs on the same VM as the
master.

Sounds like it could be network connectivity issues, so I'm going to
investigate that a bit further, but other suggestions are welcome.


On Wed, Oct 17, 2012 at 1:29 PM, Dan Brodsky danbrod...@gmail.com wrote:
 Ram,

 Thanks for your suggestions.

 The datanodes are all built using the same image, so I know they're
 all pointed to the same ZK nodes.

 I monitored all three ZK logs, the master log, and the regionserver
 log for each RS I was trying to bring back online. I'm glad I have a
 big screen. :-) Here is what I found:

 Whenever a regionserver connects to one particular ZK peer *first*, it
 never goes online. The ZK log shows a successful connection
 negotiating a timeout value, and the RS's log shows a successful ZK
 connection, but then it just sits there.

 When a regionserver starts up and connects to one of the other two ZK
 peers first, it connects to a second one successfully, then contacts
 the master, and it comes up and all is happy.

 So the problem of regionservers not connecting to master only happens
 when the RS tries one particular ZK node as its first ZK connection.
 But the logs aren't helpful for diagnosing further than that.

 Additional thoughts?


 On Wed, Oct 17, 2012 at 9:12 AM, Ramkrishna.S.Vasudevan
 ramkrishna.vasude...@huawei.com wrote:
 Can you try like start any of the regionservers that are not connecting at
 all.  May be start 2 of them.
 Observer master logs.  See whether it says
 'Waiting for RegionServers to checkin'?.

 Just to confirm your ZK ip and port is correct thro out the cluster? If
 multitenant cluster then you may be the other regionservers are connecting
 to someother ZK cluster?
 Wild guess :)

 Regards
 Ram
 -Original Message-
 From: Dan Brodsky [mailto:danbrod...@gmail.com]
 Sent: Wednesday, October 17, 2012 6:31 PM
 To: user@hbase.apache.org
 Subject: Regionservers not connecting to master

 Good morning,

 I have a 10 node Hadoop/Hbase cluster, plus a namenode VM, plus three
 Zookeeper quorum peers (one on the namenode, one on a dedicated ZK
 peer VM, and one on a third box). All 10 HDFS datanodes are also Hbase
 regionservers.

 Several weeks ago, we had six HDFS datanodes go offline suddenly (with
 no meaningful error messages), and since then, I have been unable to
 get all 10 regionservers to connect to the Hbase master. I've tried
 bringing the cluster down and rebooting all the boxes, but no joy. The
 machines are all running, and hbase-regionserver appears to start
 normally on each one.

 Right now, my master status page (http://namenode:60010) shows 3
 regionservers online. There are also dozens of regions in transition
 listed on the status page (in the PENDING_OPEN state), but each of
 those are on one of the regionservers already online.

 The 7 other regionservers' log files show a successful connection to
 one ZK peer, followed by a regular trail of these messages:

 2012-10-17 12:36:08,394 DEBUG
 org.apache.hadoop.hbase.io.hfile.LruBlockCache: LRU Stats: total=8.17
 MB, free=987.67 MB, max=995.84 MB, blocks=0, accesses=0, hits=0,
 hitRatio=0cachingAccesses=0, cachingHits=0,
 cachingHitsRatio=0evictions=0, evicted=0, evictedPerRun=NaN

 If I had to wager a guess, it seems like the 7 offline regionservers
 are not connecting to other ZK peers, but there isn't anything in the
 ZK logs to indicate why.

 Thoughts?

 Dan



RE: HBase Issue after Namenode Formating

2012-10-17 Thread Pankaj Misra
Hi All,

I did some more fact finding around this issue, and it seems to me that the
Master is not getting initialized and hence I am not able to create a table. I
tried connecting via the Java client as well, which gave me the following
exception. I am not sure what is causing the Master to not initialize at all. I am
not able to figure out where I am going wrong, and would request help. Thanks.

java.lang.RuntimeException: java.io.IOException: HRegionInfo was null or empty in -ROOT-, row=keyvalues={.META.,,1/info:server/1350496120102/Put/vlen=15/ts=0, .META.,,1/info:serverstartcode/1350496120102/Put/vlen=8/ts=0}
at com.benchmark.uds.hbase.NativeJavaHBaseLoader.createSchema(NativeJavaHBaseLoader.java:56)
at com.benchmark.uds.hbase.RequestGenerator.main(RequestGenerator.java:72)
Caused by: java.io.IOException: HRegionInfo was null or empty in -ROOT-, row=keyvalues={.META.,,1/info:server/1350496120102/Put/vlen=15/ts=0, .META.,,1/info:serverstartcode/1350496120102/Put/vlen=8/ts=0}
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:985)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:841)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:810)
at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:232)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:172)
at org.apache.hadoop.hbase.catalog.MetaReader.getHTable(MetaReader.java:200)
at org.apache.hadoop.hbase.catalog.MetaReader.getMetaHTable(MetaReader.java:226)
at org.apache.hadoop.hbase.catalog.MetaReader.fullScan(MetaReader.java:705)
at org.apache.hadoop.hbase.catalog.MetaReader.fullScan(MetaReader.java:183)
at org.apache.hadoop.hbase.catalog.MetaReader.tableExists(MetaReader.java:448)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:233)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:247)
at com.benchmark.uds.hbase.NativeJavaHBaseLoader.createSchema(NativeJavaHBaseLoader.java:44)

Also, the error as seen in zookeeper log for the above session is
2012-10-17 23:32:04,274 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed 
socket connection for client /127.0.0.1:51602 which had sessionid 
0x13a6fd74b5c
2012-10-17 23:32:05,105 WARN org.apache.zookeeper.server.NIOServerCnxn: caught 
end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 
0x13a6fd74b59, likely client has closed socket
at 
org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
at 
org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:224)
at java.lang.Thread.run(Thread.java:662)
2012-10-17 23:32:05,105 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed 
socket connection for client /127.0.0.1:51515 which had sessionid 
0x13a6fd74b59

Thanks and Regards
Pankaj Misra



From: Pankaj Misra
Sent: Wednesday, October 17, 2012 10:30 PM
To: user@hbase.apache.org
Subject: RE: HBase Issue after Namenode Formating

Hi All,

I have been using HBase 0.94.1 with Hadoop 0.23.1. I had the complete setup in 
a pseudo-distributed mode, and was running fine.

Today I had to reset and reformat the namenode and clear all the data as well 
to have a fresh run, but since then Hbase is not working as expected. For 
instance I am able to list the tables but cannot create new one, it reports 
that Master is initializing, as seen from the exception below

hbase(main):004:0 create 'testrun','samplecf'

ERROR: org.apache.hadoop.hbase.PleaseHoldException: 
org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
at 
org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:1626)
at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1154)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
at 
org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1400)

In order to resolve this I have even tried to delete the root so that hbase can 
recreate it fresh. HBase is able to recreate the ROOT but this issue persists. 
I am also not able to connect via native java client, as it does not get 
reference of the Master.

Additionally I see the following error in the HBase Master logs

2012-10-17 21:15:33,043 ERROR 

Re: Slow scanning for PrefixFilter on EncodedBlocks

2012-10-17 Thread lars hofhansl
That is a good point. There is no reason why the prefix filter cannot issue a seek
to the first KV for that prefix.
Although it would lead to a practice where people use the prefix filter when they
in fact should just set the start row.
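
As an untested sketch of what such a seeking prefix filter could look like on 0.94 (this is not how HBASE-6577 is actually implemented; the class name is made up, and the Writable plumbing should be checked against the shipped filters):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.filter.FilterBase;
import org.apache.hadoop.hbase.util.Bytes;

// Prefix filter variant that hints the scanner to seek straight to the first
// KV of the prefix instead of filtering row by row from the start of the table.
public class SeekingPrefixFilter extends FilterBase {
  private byte[] prefix;
  private boolean pastPrefix = false;

  public SeekingPrefixFilter() {}                          // needed for Writable deserialization
  public SeekingPrefixFilter(byte[] prefix) { this.prefix = prefix; }

  @Override
  public ReturnCode filterKeyValue(KeyValue kv) {
    int len = Math.min(prefix.length, kv.getRowLength());
    int cmp = Bytes.compareTo(kv.getBuffer(), kv.getRowOffset(), len, prefix, 0, len);
    if (cmp > 0) {
      pastPrefix = true;                                   // sorted past the prefix: stop
      return ReturnCode.NEXT_ROW;
    }
    if (cmp == 0 && kv.getRowLength() >= prefix.length) {
      return ReturnCode.INCLUDE;                           // row starts with the prefix
    }
    // Row still sorts before the prefix: ask the scanner to seek to it.
    return ReturnCode.SEEK_NEXT_USING_HINT;
  }

  @Override
  public KeyValue getNextKeyHint(KeyValue currentKV) {
    return KeyValue.createFirstOnRow(prefix);
  }

  @Override
  public boolean filterAllRemaining() {
    return pastPrefix;
  }

  public void write(DataOutput out) throws IOException {
    Bytes.writeByteArray(out, prefix);
  }

  public void readFields(DataInput in) throws IOException {
    this.prefix = Bytes.readByteArray(in);
  }
}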




- Original Message -
From: anil gupta anilgupt...@gmail.com
To: user@hbase.apache.org
Cc: 
Sent: Wednesday, October 17, 2012 9:41 AM
Subject: Re: Slow scanning for PrefixFilter on EncodedBlocks

Hi Zahoor,

I heavily use prefix filter. Every time i have to explicitly define the
startRow. So, that's the current behavior. However, initially this behavior
was confusing to me also.
I think that when a Prefix filter is defined then internally the
startRow=prefix can be set. User defined StartRow takes precedence over the
prefixFilter startRow. If the current prefixFilter can be modified in that
way then it will eradicate this confusion regarding performance of prefix
filter.

Thanks,
Anil Gupta

On Wed, Oct 17, 2012 at 3:44 AM, J Mohamed Zahoor jmo...@gmail.com wrote:

 First i upgraded my cluster to 94.2.. even then the problem persisted..
 Then i moved to using startRow instead of prefix filter..


 ,/zahoor

 On Wed, Oct 17, 2012 at 2:12 PM, J Mohamed Zahoor jmo...@gmail.com
 wrote:

  Sorry for the delay.
 
  It looks like the problem is because of PrefixFilter...
  I assumed that i does a seek...
 
  If i use startRow instead.. it works fine.. But is it the correct
 approach?
 
  ./zahoor
 
 
  On Wed, Oct 17, 2012 at 3:38 AM, lars hofhansl lhofha...@yahoo.com
 wrote:
 
  I reopened HBASE-6577
 
 
 
  - Original Message -
  From: lars hofhansl lhofha...@yahoo.com
  To: user@hbase.apache.org user@hbase.apache.org; lars hofhansl 
  lhofha...@yahoo.com
  Cc:
  Sent: Tuesday, October 16, 2012 2:39 PM
  Subject: Re: Slow scanning for PrefixFilter on EncodedBlocks
 
  Looks like this is exactly the scenario I was trying to optimize with
  HBASE-6577. Hmm...
  
  From: lars hofhansl lhofha...@yahoo.com
  To: user@hbase.apache.org user@hbase.apache.org
  Sent: Tuesday, October 16, 2012 12:21 AM
  Subject: Re: Slow scanning for PrefixFilter on EncodedBlocks
 
  PrefixFilter does not do any seeking by itself, so I doubt this is
  related to HBASE-6757.
  Does this only happen with FAST_DIFF compression?
 
 
  If you can create an isolated test program (that sets up the scenario
 and
  then runs a scan with the filter such that it is very slow), I'm happy
 to
  take a look.
 
  -- Lars
 
 
 
  - Original Message -
  From: J Mohamed Zahoor jmo...@gmail.com
  To: user@hbase.apache.org user@hbase.apache.org
  Cc:
  Sent: Monday, October 15, 2012 10:27 AM
  Subject: Re: Slow scanning for PrefixFilter on EncodedBlocks
 
  Is this related to HBASE-6757 ?
  I use a filter list with
    - prefix filter
    - filter list of column filters
 
  /zahoor
 
  On Monday, October 15, 2012, J Mohamed Zahoor wrote:
 
   Hi
  
   My scanner performance is very slow when using a Prefix filter on a
   **Encoded Column** ( encoded using FAST_DIFF on both memory and disk).
   I am using 94.1 hbase.
  
   jstack shows that much time is spent on seeking the row.
   Even if i give a exact row key match in the prefix filter it takes
 about
   two minutes to return a single row.
   Running this multiple times also seems to be redirecting things to
 disk
   (loadBlock).
  
  
   at
  
 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.loadBlockAndSeekToKey(HFileReaderV2.java:1027)
   at
  
 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:461)
    at
  
 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:493)
   at
  
 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:242)
    at
  
 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:167)
   at
  
 
 org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:54)
    at
  
 
 org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:521)
   - locked 0x00059584fab8 (a
   org.apache.hadoop.hbase.regionserver.StoreScanner)
    at
  
 
 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:402)
   - locked 0x00059584fab8 (a
   org.apache.hadoop.hbase.regionserver.StoreScanner)
    at
  
 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRow(HRegion.java:3507)
   at
  
 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3455)
    at
  
 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3406)
   - locked 0x00059589bb30 (a
   org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl)
    at
  
 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3423)
  
   If is set the start and end row as same row in scan ... it come in
 

Re: HBase Issue after Namenode Formating

2012-10-17 Thread lars hofhansl
Can you try clearing all the state in ZK as well? (stop ZK, wipe the ZK data 
directory, restart ZK)



- Original Message -
From: Pankaj Misra pankaj.mi...@impetus.co.in
To: user@hbase.apache.org user@hbase.apache.org
Cc: 
Sent: Wednesday, October 17, 2012 11:09 AM
Subject: RE: HBase Issue after Namenode Formating

Hi All,

I did some more fact finding around this issue, and it seems to me that the 
Master is not getting initialized and hence I am not able to create a table. I 
tried connecting via the Java client as well, which gave me the following 
exception. I am not sure what is causing the Master to not initialize at all. I am 
not able to figure out where I am going wrong, and would request help. Thanks.

java.lang.RuntimeException: java.io.IOException: HRegionInfo was null or empty 
in -ROOT-, row=keyvalues={.META.,,1/info:server/1350496120102/Put/vlen=15/ts=0, 
.META.,,1/info:serverstartcode/1350496120102/Put/vlen=8/ts=0}
        at 
com.benchmark.uds.hbase.NativeJavaHBaseLoader.createSchema(NativeJavaHBaseLoader.java:56)
        at 
com.benchmark.uds.hbase.RequestGenerator.main(RequestGenerator.java:72)
Caused by: java.io.IOException: HRegionInfo was null or empty in -ROOT-, 
row=keyvalues={.META.,,1/info:server/1350496120102/Put/vlen=15/ts=0, 
.META.,,1/info:serverstartcode/1350496120102/Put/vlen=8/ts=0}
        at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:985)
        at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:841)
        at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:810)
        at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:232)
        at org.apache.hadoop.hbase.client.HTable.init(HTable.java:172)
        at 
org.apache.hadoop.hbase.catalog.MetaReader.getHTable(MetaReader.java:200)
        at 
org.apache.hadoop.hbase.catalog.MetaReader.getMetaHTable(MetaReader.java:226)
        at 
org.apache.hadoop.hbase.catalog.MetaReader.fullScan(MetaReader.java:705)
        at 
org.apache.hadoop.hbase.catalog.MetaReader.fullScan(MetaReader.java:183)
        at 
org.apache.hadoop.hbase.catalog.MetaReader.tableExists(MetaReader.java:448)
        at 
org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:233)
        at 
org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:247)
        at 
com.benchmark.uds.hbase.NativeJavaHBaseLoader.createSchema(NativeJavaHBaseLoader.java:44)

Also, the error as seen in zookeeper log for the above session is
2012-10-17 23:32:04,274 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed 
socket connection for client /127.0.0.1:51602 which had sessionid 
0x13a6fd74b5c
2012-10-17 23:32:05,105 WARN org.apache.zookeeper.server.NIOServerCnxn: caught 
end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 
0x13a6fd74b59, likely client has closed socket
        at 
org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
        at 
org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:224)
        at java.lang.Thread.run(Thread.java:662)
2012-10-17 23:32:05,105 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed 
socket connection for client /127.0.0.1:51515 which had sessionid 
0x13a6fd74b59

Thanks and Regards
Pankaj Misra



From: Pankaj Misra
Sent: Wednesday, October 17, 2012 10:30 PM
To: user@hbase.apache.org
Subject: RE: HBase Issue after Namenode Formating

Hi All,

I have been using HBase 0.94.1 with Hadoop 0.23.1. I had the complete setup in 
a pseudo-distributed mode, and was running fine.

Today I had to reset and reformat the namenode and clear all the data as well 
to have a fresh run, but since then Hbase is not working as expected. For 
instance I am able to list the tables but cannot create new one, it reports 
that Master is initializing, as seen from the exception below

hbase(main):004:0 create 'testrun','samplecf'

ERROR: org.apache.hadoop.hbase.PleaseHoldException: 
org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
        at 
org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:1626)
        at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1154)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at 
org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
        at 
org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1400)

In order to resolve this I have even tried to delete the 

Re: Merge large number of regions

2012-10-17 Thread Shrijeet Paliwal
Thanks Kevin! Very useful pointers.

On Wed, Oct 17, 2012 at 7:39 AM, Kevin O'dell kevin.od...@cloudera.comwrote:

 Shrijeet,

   Here is a thread on doing a proper incremental with Import:


 http://hadoop-hbase.blogspot.com/2012/04/timestamp-consistent-backups-in-hbase.html
 I am a fan of this one as it is well laid out.  Breaking this up for your
 use case should be pretty easy.

 CopyTable should work just as easily -
 http://www.cloudera.com/blog/2012/06/online-hbase-backups-with-copytable-2/


 If you follow the above it is really going to be a matter of preference.

 On Tue, Oct 16, 2012 at 1:16 PM, Shrijeet Paliwal
 shrij...@rocketfuel.comwrote:

  Hi Kevin,
 
  Thanks for answering. What are your thoughts on copyTable vs export-import,
  considering my use case? Will one tool have a lesser chance of copying
  inconsistent data than the other?
 
  I wish to do an incremental copy of a live cluster to minimize downtime.
 
  On Tue, Oct 16, 2012 at 8:47 AM, Kevin O'dell kevin.od...@cloudera.com
  wrote:
 
   Shrijeet,
  
 I think a better approach would be a pre-split table and then do the
    export/import.  This will save you from having to script the merges,
   which
    can end badly for META if done wrong.
  
   On Mon, Oct 15, 2012 at 5:31 PM, Shrijeet Paliwal
   shrij...@rocketfuel.comwrote:
  
We moved to 0.92.2 some time ago and with that, increased the max
 file
   size
setting to 4GB (from 2GB). Also an application triggered cleanup
   operation
deleted lots of unwanted rows.
These two combined have gotten us to a state where lots of regions
 are
smaller than desired size.
   
Merging regions two at a time seems time consuming and will be hard
 to
automate. https://issues.apache.org/jira/browse/HBASE-1621 automates
merging, but it is not stable.
   
I am interested in knowing about other possible approaches folks have
tried. What do you guys think about copyTable based approach ? (old
---copyTable--- new and then rename new to old)
   
-Shrijeet
   
  
  
  
   --
   Kevin O'Dell
   Customer Operations Engineer, Cloudera
  
 



 --
 Kevin O'Dell
 Customer Operations Engineer, Cloudera



Re: Problems using unqualified hostname on hbase

2012-10-17 Thread Doug Meil

Hi there.  You generally don't want to run with 2 clusters like that (HBase on
one, HDFS on the other) because your regions have 0% locality.

For more information on this topic, see:

http://hbase.apache.org/book.html#regions.arch.locality




On 10/17/12 12:19 PM, Richard Tang tristartom.t...@gmail.com wrote:

Hello, everyone,
I have problems using hbase based on unqualified hostnames. My ``hbase``
runs in one cluster and ``hdfs`` on another cluster. While using fully
qualified names on ``hbase``, for properties like ``hbase.rootdir`` and
``hbase.zookeeper.quorum``, there is no problem. But when I change them to
shorter unqualified names, like ``node4`` and ``node2`` (which are
resolved to local IP addresses by ``/etc/hosts``, like ``10.0.0.8``), the
hbase cluster begins to throw ``Connect refused`` messages. Has anyone
encountered the same problem here? What is the possible reason behind all
this? Thanks.
Regards,
Richard




Re: Slow scanning for PrefixFilter on EncodedBlocks

2012-10-17 Thread anil gupta
Hi Lars,

There is a specific use case for this:

Table: Suppose I have a rowkey: <customer_id><event_timestamp><uid>

Use case: I would like to get all the events of customer_id=123.
Case 1: If I only use startRow=123 then I will get events of other
customers having customer_id > 123, since the scanner will keep on
fetching rows until the end of the table.
Case 2: If I use prefixFilter=123 and startRow=123 then I will get the
correct result.
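
A minimal sketch of Case 2 against the 0.94 client API; the open HTable handle,
the literal prefix "123", and the names CustomerEventScan/scanCustomerEvents are
placeholders for illustration:

import java.io.IOException;

import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.PrefixFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class CustomerEventScan {
  static void scanCustomerEvents(HTable table) throws IOException {
    byte[] prefix = Bytes.toBytes("123");      // the customer_id prefix
    Scan scan = new Scan();
    scan.setStartRow(prefix);                  // start at the prefix instead of the first row
    scan.setFilter(new PrefixFilter(prefix));  // keep only rows that actually begin with it
    ResultScanner rs = table.getScanner(scan);
    try {
      for (Result r : rs) {
        // each Result is one event row for customer_id 123
      }
    } finally {
      rs.close();
    }
  }
}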

IMHO, adding the feature of smartly setting the startRow in PrefixFilter
won't hurt any existing functionality. Use of StartRow and PrefixFilter will
still be different.

Thanks,
Anil Gupta


On Wed, Oct 17, 2012 at 1:11 PM, lars hofhansl lhofha...@yahoo.com wrote:

 That is a good point. There is no reason why the prefix filter cannot issue a
 seek to the first KV for that prefix.
 Although it would lead to a practice where people use the prefix filter when
 they in fact should just set the start row.




 - Original Message -
 From: anil gupta anilgupt...@gmail.com
 To: user@hbase.apache.org
 Cc:
 Sent: Wednesday, October 17, 2012 9:41 AM
 Subject: Re: Slow scanning for PrefixFilter on EncodedBlocks

 Hi Zahoor,

 I heavily use prefix filter. Every time i have to explicitly define the
 startRow. So, that's the current behavior. However, initially this behavior
 was confusing to me also.
 I think that when a Prefix filter is defined then internally the
 startRow=prefix can be set. User defined StartRow takes precedence over the
 prefixFilter startRow. If the current prefixFilter can be modified in that
 way then it will eradicate this confusion regarding performance of prefix
 filter.

 Thanks,
 Anil Gupta

 On Wed, Oct 17, 2012 at 3:44 AM, J Mohamed Zahoor jmo...@gmail.com
 wrote:

  First i upgraded my cluster to 94.2.. even then the problem persisted..
  Then i moved to using startRow instead of prefix filter..
 
 
  ,/zahoor
 
  On Wed, Oct 17, 2012 at 2:12 PM, J Mohamed Zahoor jmo...@gmail.com
  wrote:
 
   Sorry for the delay.
  
   It looks like the problem is because of PrefixFilter...
   I assumed that i does a seek...
  
   If i use startRow instead.. it works fine.. But is it the correct
  approach?
  
   ./zahoor
  
  
   On Wed, Oct 17, 2012 at 3:38 AM, lars hofhansl lhofha...@yahoo.com
  wrote:
  
   I reopened HBASE-6577
  
  
  
   - Original Message -
   From: lars hofhansl lhofha...@yahoo.com
   To: user@hbase.apache.org user@hbase.apache.org; lars hofhansl 
   lhofha...@yahoo.com
   Cc:
   Sent: Tuesday, October 16, 2012 2:39 PM
   Subject: Re: Slow scanning for PrefixFilter on EncodedBlocks
  
   Looks like this is exactly the scenario I was trying to optimize with
   HBASE-6577. Hmm...
   
   From: lars hofhansl lhofha...@yahoo.com
   To: user@hbase.apache.org user@hbase.apache.org
   Sent: Tuesday, October 16, 2012 12:21 AM
   Subject: Re: Slow scanning for PrefixFilter on EncodedBlocks
  
   PrefixFilter does not do any seeking by itself, so I doubt this is
   related to HBASE-6757.
   Does this only happen with FAST_DIFF compression?
  
  
   If you can create an isolated test program (that sets up the scenario
  and
   then runs a scan with the filter such that it is very slow), I'm happy
  to
   take a look.
  
   -- Lars
  
  
  
   - Original Message -
   From: J Mohamed Zahoor jmo...@gmail.com
   To: user@hbase.apache.org user@hbase.apache.org
   Cc:
   Sent: Monday, October 15, 2012 10:27 AM
   Subject: Re: Slow scanning for PrefixFilter on EncodedBlocks
  
   Is this related to HBASE-6757 ?
   I use a filter list with
 - prefix filter
 - filter list of column filters
  
   /zahoor
  
   On Monday, October 15, 2012, J Mohamed Zahoor wrote:
  
Hi
   
My scanner performance is very slow when using a Prefix filter on a
**Encoded Column** ( encoded using FAST_DIFF on both memory and
 disk).
I am using 94.1 hbase.
   
jstack shows that much time is spent on seeking the row.
Even if i give a exact row key match in the prefix filter it takes
  about
two minutes to return a single row.
Running this multiple times also seems to be redirecting things to
  disk
(loadBlock).
   
   
at
   
  
 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.loadBlockAndSeekToKey(HFileReaderV2.java:1027)
at
   
  
 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:461)
 at
   
  
 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:493)
at
   
  
 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:242)
 at
   
  
 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:167)
at
   
  
 
 org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:54)
 at
   
  
 
 org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:521)

RE: HBase Issue after Namenode Formating

2012-10-17 Thread Pankaj Misra
Hi Lars,

Thank you so much Lars for your perfect solution, it worked, you have been a 
life saver here, worked like a charm. Thanks again Lars.

Regards
Pankaj Misra



From: lars hofhansl [lhofha...@yahoo.com]
Sent: Wednesday, October 17, 2012 11:43 PM
To: user@hbase.apache.org
Subject: Re: HBase Issue after Namenode Formating

Can you try clearing all the state in ZK as well? (stop ZK, wipe the ZK data 
directory, restart ZK)



- Original Message -
From: Pankaj Misra pankaj.mi...@impetus.co.in
To: user@hbase.apache.org user@hbase.apache.org
Cc:
Sent: Wednesday, October 17, 2012 11:09 AM
Subject: RE: HBase Issue after Namenode Formating

Hi All,

I did some more fact finding around this issue, and it seems to me that the 
Master is not getting initialized and hence I am not able to create table. I 
tried connecting via the Java client as well, which gave me the following 
exception. I am not sure what is causing Master to not initialize at all. I am 
not able to figure out, where am I going wrong, and would request help. Thanks.

java.lang.RuntimeException: java.io.IOException: HRegionInfo was null or empty 
in -ROOT-, row=keyvalues={.META.,,1/info:server/1350496120102/Put/vlen=15/ts=0, 
.META.,,1/info:serverstartcode/1350496120102/Put/vlen=8/ts=0}
at 
com.benchmark.uds.hbase.NativeJavaHBaseLoader.createSchema(NativeJavaHBaseLoader.java:56)
at 
com.benchmark.uds.hbase.RequestGenerator.main(RequestGenerator.java:72)
Caused by: java.io.IOException: HRegionInfo was null or empty in -ROOT-, 
row=keyvalues={.META.,,1/info:server/1350496120102/Put/vlen=15/ts=0, 
.META.,,1/info:serverstartcode/1350496120102/Put/vlen=8/ts=0}
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:985)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:841)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:810)
at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:232)
at org.apache.hadoop.hbase.client.HTable.init(HTable.java:172)
at 
org.apache.hadoop.hbase.catalog.MetaReader.getHTable(MetaReader.java:200)
at 
org.apache.hadoop.hbase.catalog.MetaReader.getMetaHTable(MetaReader.java:226)
at 
org.apache.hadoop.hbase.catalog.MetaReader.fullScan(MetaReader.java:705)
at 
org.apache.hadoop.hbase.catalog.MetaReader.fullScan(MetaReader.java:183)
at 
org.apache.hadoop.hbase.catalog.MetaReader.tableExists(MetaReader.java:448)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:233)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:247)
at 
com.benchmark.uds.hbase.NativeJavaHBaseLoader.createSchema(NativeJavaHBaseLoader.java:44)

Also, the error as seen in zookeeper log for the above session is
2012-10-17 23:32:04,274 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed 
socket connection for client /127.0.0.1:51602 which had sessionid 
0x13a6fd74b5c
2012-10-17 23:32:05,105 WARN org.apache.zookeeper.server.NIOServerCnxn: caught 
end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 
0x13a6fd74b59, likely client has closed socket
at 
org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
at 
org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:224)
at java.lang.Thread.run(Thread.java:662)
2012-10-17 23:32:05,105 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed 
socket connection for client /127.0.0.1:51515 which had sessionid 
0x13a6fd74b59

Thanks and Regards
Pankaj Misra



From: Pankaj Misra
Sent: Wednesday, October 17, 2012 10:30 PM
To: user@hbase.apache.org
Subject: RE: HBase Issue after Namenode Formating

Hi All,

I have been using HBase 0.94.1 with Hadoop 0.23.1. I had the complete setup in 
a pseudo-distributed mode, and was running fine.

Today I had to reset and reformat the namenode and clear all the data as well 
to have a fresh run, but since then Hbase is not working as expected. For 
instance I am able to list the tables but cannot create new one, it reports 
that Master is initializing, as seen from the exception below

hbase(main):004:0 create 'testrun','samplecf'

ERROR: org.apache.hadoop.hbase.PleaseHoldException: 
org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
at 
org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:1626)
at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1154)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 

Re: HBase Issue after Namenode Formating

2012-10-17 Thread lars hofhansl
No problem...

We need to fix this in HBase too. There's HBASE-6294, but no patch for it yet.

-- Lars




 From: Pankaj Misra pankaj.mi...@impetus.co.in
To: user@hbase.apache.org user@hbase.apache.org; lars hofhansl 
lhofha...@yahoo.com 
Sent: Wednesday, October 17, 2012 3:18 PM
Subject: RE: HBase Issue after Namenode Formating
 
Hi Lars,

Thank you so much Lars for your perfect solution, it worked, you have been a 
life saver here, worked like a charm. Thanks again Lars.

Regards
Pankaj Misra



From: lars hofhansl [lhofha...@yahoo.com]
Sent: Wednesday, October 17, 2012 11:43 PM
To: user@hbase.apache.org
Subject: Re: HBase Issue after Namenode Formating

Can you try clearing all the state in ZK as well? (stop ZK, wipe the ZK data 
directory, restart ZK)



- Original Message -
From: Pankaj Misra pankaj.mi...@impetus.co.in
To: user@hbase.apache.org user@hbase.apache.org
Cc:
Sent: Wednesday, October 17, 2012 11:09 AM
Subject: RE: HBase Issue after Namenode Formating

Hi All,

I did some more fact finding around this issue, and it seems to me that the 
Master is not getting initialized and hence I am not able to create table. I 
tried connecting via the Java client as well, which gave me the following 
exception. I am not sure what is causing Master to not initialize at all. I am 
not able to figure out, where am I going wrong, and would request help. Thanks.

java.lang.RuntimeException: java.io.IOException: HRegionInfo was null or empty 
in -ROOT-, row=keyvalues={.META.,,1/info:server/1350496120102/Put/vlen=15/ts=0, 
.META.,,1/info:serverstartcode/1350496120102/Put/vlen=8/ts=0}
        at 
com.benchmark.uds.hbase.NativeJavaHBaseLoader.createSchema(NativeJavaHBaseLoader.java:56)
        at 
com.benchmark.uds.hbase.RequestGenerator.main(RequestGenerator.java:72)
Caused by: java.io.IOException: HRegionInfo was null or empty in -ROOT-, 
row=keyvalues={.META.,,1/info:server/1350496120102/Put/vlen=15/ts=0, 
.META.,,1/info:serverstartcode/1350496120102/Put/vlen=8/ts=0}
        at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:985)
        at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:841)
        at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:810)
        at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:232)
        at org.apache.hadoop.hbase.client.HTable.init(HTable.java:172)
        at 
org.apache.hadoop.hbase.catalog.MetaReader.getHTable(MetaReader.java:200)
        at 
org.apache.hadoop.hbase.catalog.MetaReader.getMetaHTable(MetaReader.java:226)
        at 
org.apache.hadoop.hbase.catalog.MetaReader.fullScan(MetaReader.java:705)
        at 
org.apache.hadoop.hbase.catalog.MetaReader.fullScan(MetaReader.java:183)
        at 
org.apache.hadoop.hbase.catalog.MetaReader.tableExists(MetaReader.java:448)
        at 
org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:233)
        at 
org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:247)
        at 
com.benchmark.uds.hbase.NativeJavaHBaseLoader.createSchema(NativeJavaHBaseLoader.java:44)

Also, the error as seen in zookeeper log for the above session is
2012-10-17 23:32:04,274 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed 
socket connection for client /127.0.0.1:51602 which had sessionid 
0x13a6fd74b5c
2012-10-17 23:32:05,105 WARN org.apache.zookeeper.server.NIOServerCnxn: caught 
end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 
0x13a6fd74b59, likely client has closed socket
        at 
org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
        at 
org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:224)
        at java.lang.Thread.run(Thread.java:662)
2012-10-17 23:32:05,105 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed 
socket connection for client /127.0.0.1:51515 which had sessionid 
0x13a6fd74b59

Thanks and Regards
Pankaj Misra



From: Pankaj Misra
Sent: Wednesday, October 17, 2012 10:30 PM
To: user@hbase.apache.org
Subject: RE: HBase Issue after Namenode Formating

Hi All,

I have been using HBase 0.94.1 with Hadoop 0.23.1. I had the complete setup in 
a pseudo-distributed mode, and was running fine.

Today I had to reset and reformat the namenode and clear all the data as well 
to have a fresh run, but since then Hbase is not working as expected. For 
instance I am able to list the tables but cannot create new one, it reports 
that Master is initializing, as seen from the exception below

hbase(main):004:0 create 'testrun','samplecf'

ERROR: org.apache.hadoop.hbase.PleaseHoldException: 
org.apache.hadoop.hbase.PleaseHoldException: Master is 

Coprocessor end point vs MapReduce?

2012-10-17 Thread Jean-Marc Spaggiari
Hi,

Can someone please help me to understand the pros and cons between
those 2 options for the following usecase?

I need to transfer all the rows between 2 timestamps to another table.

My first idea was to run a MapReduce to map the rows and store them on
another table, and then delete them using an end point coprocessor.
But the more I look into it, the more I think the MapReduce is not a
good idea and I should use a coprocessor instead.

BUT... The MapReduce framework guarantees me that it will run against
all the regions. I tried to stop a regionserver while the job was
running. The region moved, and the MapReduce restarted the job from
the new location. Will the coprocessor do the same thing?

Also, I found the webconsole for the MapReduce with the number of
jobs, the status, etc. Is there the same thing with the coprocessors?

Are all coprocessors running at the same time on all regions, which
means we can have 100 of them running on a regionserver at a time? Or
do they run like the MapReduce jobs, based on some configured
values?

Thanks,

JM
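
A map-only sketch of the MapReduce route for this use case, assuming the 0.94
mapreduce API; the table names source_table and archive_table and the class name
MoveRowsByTimeRange are placeholders, and MultiTableOutputFormat is assumed so
that one job can write Puts to the destination table and Deletes back to the
source:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.MultiTableOutputFormat;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.Job;

public class MoveRowsByTimeRange {

  static class MoveMapper extends TableMapper<ImmutableBytesWritable, Writable> {
    private static final ImmutableBytesWritable ARCHIVE =
        new ImmutableBytesWritable(Bytes.toBytes("archive_table"));
    private static final ImmutableBytesWritable SOURCE =
        new ImmutableBytesWritable(Bytes.toBytes("source_table"));

    @Override
    protected void map(ImmutableBytesWritable key, Result value, Context context)
        throws IOException, InterruptedException {
      // Copy every cell of the matched row into the archive table...
      Put put = new Put(value.getRow());
      for (KeyValue kv : value.raw()) {
        put.add(kv);
      }
      context.write(ARCHIVE, put);
      // ...and delete the row from the source table (one Delete per matched row).
      context.write(SOURCE, new Delete(value.getRow()));
    }
  }

  public static void main(String[] args) throws Exception {
    long minTs = Long.parseLong(args[0]);
    long maxTs = Long.parseLong(args[1]);

    Configuration conf = HBaseConfiguration.create();
    Job job = new Job(conf, "move-rows-" + minTs + "-" + maxTs);
    job.setJarByClass(MoveRowsByTimeRange.class);

    Scan scan = new Scan();
    scan.setCaching(500);
    scan.setCacheBlocks(false);        // recommended for MapReduce scans
    scan.setTimeRange(minTs, maxTs);   // only cells written in [minTs, maxTs)

    TableMapReduceUtil.initTableMapperJob("source_table", scan, MoveMapper.class,
        ImmutableBytesWritable.class, Writable.class, job);
    job.setOutputFormatClass(MultiTableOutputFormat.class);
    job.setNumReduceTasks(0);          // map-only
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Note that Scan.setTimeRange() filters at the cell level, so a whole-row Delete may
also remove cells outside the range; a safer variant copies in one pass and
deletes in a second pass after verifying the copy.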


Re: Coprocessor end point vs MapReduce?

2012-10-17 Thread Michael Segel
Hi, 

I'm a firm believer in KISS (Keep It Simple, Stupid) 

The Map/Reduce (map job only) is the simplest and least prone to failure. 

Not sure why you would want to do this using coprocessors. 

How often are you running this job? It sounds like it's going to be sporadic.

-Mike

On Oct 17, 2012, at 7:11 PM, Jean-Marc Spaggiari jean-m...@spaggiari.org 
wrote:

 Hi,
 
 Can someone please help me to understand the pros and cons between
 those 2 options for the following usecase?
 
 I need to transfer all the rows between 2 timestamps to another table.
 
 My first idea was to run a MapReduce to map the rows and store them on
 another table, and then delete them using an end point coprocessor.
 But the more I look into it, the more I think the MapReduce is not a
 good idea and I should use a coprocessor instead.
 
 BUT... The MapReduce framework guarantee me that it will run against
 all the regions. I tried to stop a regionserver while the job was
 running. The region moved, and the MapReduce restarted the job from
 the new location. Will the coprocessor do the same thing?
 
 Also, I found the webconsole for the MapReduce with the number of
 jobs, the status, etc. Is there the same thing with the coprocessors?
 
 Are all coprocessors running at the same time on all regions, which
 mean we can have 100 of them running on a regionserver at a time? Or
 are they running like the MapReduce jobs based on some configured
 values?
 
 Thanks,
 
 JM
 



Re: Coprocessor end point vs MapReduce?

2012-10-17 Thread Jean-Marc Spaggiari
Hi Mike,

I'm expecting to run the job weekly. I initially thought about using
end points because I found HBASE-6942 which was a good example for my
needs.

I'm fine with the Put part for the Map/Reduce, but I'm not sure about
the delete. That's why I looked at coprocessors. Then I figured that I
could also do the Put on the coprocessor side.

In an M/R job, can I delete the row I'm dealing with based on some criteria
like timestamp? If I do that, I will not be doing bulk deletes, but
deleting the rows one by one, right? That might be very slow.

If in the future I want to run the job daily, might that be an issue?

Or should I go with the initial idea of doing the Put with the M/R job
and the delete with HBASE-6942?

Thanks,

JM


2012/10/17, Michael Segel michael_se...@hotmail.com:
 Hi,

 I'm a firm believer in KISS (Keep It Simple, Stupid)

 The Map/Reduce (map job only) is the simplest and least prone to failure.

 Not sure why you would want to do this using coprocessors.

 How often are you running this job? It sounds like its going to be
 sporadic.

 -Mike

 On Oct 17, 2012, at 7:11 PM, Jean-Marc Spaggiari jean-m...@spaggiari.org
 wrote:

 Hi,

 Can someone please help me to understand the pros and cons between
 those 2 options for the following usecase?

 I need to transfer all the rows between 2 timestamps to another table.

 My first idea was to run a MapReduce to map the rows and store them on
 another table, and then delete them using an end point coprocessor.
 But the more I look into it, the more I think the MapReduce is not a
 good idea and I should use a coprocessor instead.

 BUT... The MapReduce framework guarantee me that it will run against
 all the regions. I tried to stop a regionserver while the job was
 running. The region moved, and the MapReduce restarted the job from
 the new location. Will the coprocessor do the same thing?

 Also, I found the webconsole for the MapReduce with the number of
 jobs, the status, etc. Is there the same thing with the coprocessors?

 Are all coprocessors running at the same time on all regions, which
 mean we can have 100 of them running on a regionserver at a time? Or
 are they running like the MapReduce jobs based on some configured
 values?

 Thanks,

 JM





Re: Coprocessor end point vs MapReduce?

2012-10-17 Thread Michael Segel
If you're going to be running this weekly, I would suggest that you stick with 
the M/R job. 

Is there any reason why you need to be worried about the time it takes to do 
the deletes?


On Oct 17, 2012, at 8:19 PM, Jean-Marc Spaggiari jean-m...@spaggiari.org 
wrote:

 Hi Mike,
 
 I'm expecting to run the job weekly. I initially thought about using
 end points because I found HBASE-6942 which was a good example for my
 needs.
 
 I'm fine with the Put part for the Map/Reduce, but I'm not sure about
 the delete. That's why I look at coprocessors. Then I figure that I
 also can do the Put on the coprocessor side.
 
 On a M/R, can I delete the row I'm dealing with based on some criteria
 like timestamp? If I do that, I will not do bulk deletes, but I will
 delete the rows one by one, right? Which might be very slow.
 
 If in the future I want to run the job daily, might that be an issue?
 
 Or should I go with the initial idea of doing the Put with the M/R job
 and the delete with HBASE-6942?
 
 Thanks,
 
 JM
 
 
 2012/10/17, Michael Segel michael_se...@hotmail.com:
 Hi,
 
 I'm a firm believer in KISS (Keep It Simple, Stupid)
 
 The Map/Reduce (map job only) is the simplest and least prone to failure.
 
 Not sure why you would want to do this using coprocessors.
 
 How often are you running this job? It sounds like its going to be
 sporadic.
 
 -Mike
 
 On Oct 17, 2012, at 7:11 PM, Jean-Marc Spaggiari jean-m...@spaggiari.org
 wrote:
 
 Hi,
 
 Can someone please help me to understand the pros and cons between
 those 2 options for the following usecase?
 
 I need to transfer all the rows between 2 timestamps to another table.
 
 My first idea was to run a MapReduce to map the rows and store them on
 another table, and then delete them using an end point coprocessor.
 But the more I look into it, the more I think the MapReduce is not a
 good idea and I should use a coprocessor instead.
 
 BUT... The MapReduce framework guarantee me that it will run against
 all the regions. I tried to stop a regionserver while the job was
 running. The region moved, and the MapReduce restarted the job from
 the new location. Will the coprocessor do the same thing?
 
 Also, I found the webconsole for the MapReduce with the number of
 jobs, the status, etc. Is there the same thing with the coprocessors?
 
 Are all coprocessors running at the same time on all regions, which
 mean we can have 100 of them running on a regionserver at a time? Or
 are they running like the MapReduce jobs based on some configured
 values?
 
 Thanks,
 
 JM
 
 
 
 



Re: Coprocessor end point vs MapReduce?

2012-10-17 Thread Jean-Marc Spaggiari
I don't have any concern about the time it's taking. It's more about
the load it's putting on the cluster. I have other jobs that I need to
run (secondary index, data processing, etc.). So the more time this
new job is taking, the less CPU the others will have.

I tried the M/R and I really liked the way it's done. So my only
concern will really be the performance of the delete part.

That's why I'm wondering what's the best practice to move a row to
another table.

2012/10/17, Michael Segel michael_se...@hotmail.com:
 If you're going to be running this weekly, I would suggest that you stick
 with the M/R job.

 Is there any reason why you need to be worried about the time it takes to do
 the deletes?


 On Oct 17, 2012, at 8:19 PM, Jean-Marc Spaggiari jean-m...@spaggiari.org
 wrote:

 Hi Mike,

 I'm expecting to run the job weekly. I initially thought about using
 end points because I found HBASE-6942 which was a good example for my
 needs.

 I'm fine with the Put part for the Map/Reduce, but I'm not sure about
 the delete. That's why I look at coprocessors. Then I figure that I
 also can do the Put on the coprocessor side.

 On a M/R, can I delete the row I'm dealing with based on some criteria
 like timestamp? If I do that, I will not do bulk deletes, but I will
 delete the rows one by one, right? Which might be very slow.

 If in the future I want to run the job daily, might that be an issue?

 Or should I go with the initial idea of doing the Put with the M/R job
 and the delete with HBASE-6942?

 Thanks,

 JM


 2012/10/17, Michael Segel michael_se...@hotmail.com:
 Hi,

 I'm a firm believer in KISS (Keep It Simple, Stupid)

 The Map/Reduce (map job only) is the simplest and least prone to
 failure.

 Not sure why you would want to do this using coprocessors.

 How often are you running this job? It sounds like its going to be
 sporadic.

 -Mike

 On Oct 17, 2012, at 7:11 PM, Jean-Marc Spaggiari
 jean-m...@spaggiari.org
 wrote:

 Hi,

 Can someone please help me to understand the pros and cons between
 those 2 options for the following usecase?

 I need to transfer all the rows between 2 timestamps to another table.

 My first idea was to run a MapReduce to map the rows and store them on
 another table, and then delete them using an end point coprocessor.
 But the more I look into it, the more I think the MapReduce is not a
 good idea and I should use a coprocessor instead.

 BUT... The MapReduce framework guarantee me that it will run against
 all the regions. I tried to stop a regionserver while the job was
 running. The region moved, and the MapReduce restarted the job from
 the new location. Will the coprocessor do the same thing?

 Also, I found the webconsole for the MapReduce with the number of
 jobs, the status, etc. Is there the same thing with the coprocessors?

 Are all coprocessors running at the same time on all regions, which
 mean we can have 100 of them running on a regionserver at a time? Or
 are they running like the MapReduce jobs based on some configured
 values?

 Thanks,

 JM








Re: Problems using unqualified hostname on hbase

2012-10-17 Thread yun peng
Thanks for the notes. I am running the project configuration for
comparison (as a worst case for locality)...
On the other hand, even if I make them colocate, the problem persists, as
the property hbase.zookeeper.quorum has to be a fully qualified name.

Does this problem have anything to do with the DNS settings in hbase?
Regards,
Richard

On Wed, Oct 17, 2012 at 3:13 PM, Doug Meil doug.m...@explorysmedical.comwrote:


  Hi there.  You generally don't want to run with 2 clusters like that (HBase on
  one, HDFS on the other) because your regions have 0% locality.

  For more information on this topic, see:

 http://hbase.apache.org/book.html#regions.arch.locality




 On 10/17/12 12:19 PM, Richard Tang tristartom.t...@gmail.com wrote:

 Hello, everyone,
 I have problems using hbase based on unqualified hostname. My ``hbase``
 runs  in a cluster and ``hdfs`` on another cluster. While using fully
 qualified name on ``hbase``, for properties like ``hbase.rootdir`` and
 ``hbase.zookeeper.quorum``, there is no problem. But when I change them to
 be shorter unqualified names, like ``node4`` and ``node2``, (which are
 resolved to local ip address by ``/etc/hosts``, like ``10.0.0.8``), the
 hbase cluster begin to throw ``Connect refused`` messages. Anyone
 encounter
 same problem here?What is the possible reason behind all theses? Thanks.
 Regards,
 Richard





RE: Where is code in hbase that physically delete a record?

2012-10-17 Thread Anoop Sam John
Hi Yun,
 We have preCompactScannerOpen() and preCompact() hooks.
As we said, for compaction, a scanner for reading all corresponding HFiles (all
HFiles in a major compaction) will be created, and the scan goes via that scanner
(calling its next() methods). The kernel does it this way.
Now using these hooks you can create a wrapper over the actual scanner. In fact
you can use the preCompact() hook (I think that is fine for you). By the time it
is called, the actual scanner has been made and that object is passed to your
hook. You can create a custom scanner impl, wrap the actual scanner within it,
and return the new wrapper scanner from your hook [yes, its return type is
InternalScanner]. The actual scanner you can use as a delegate to do the actual
scanning. Now all the KVs (which the underlying scanner passed up) will flow via
your new wrapper scanner, where you can drop certain KVs based on your condition
or logic.

Core -> WrapperScannerImpl            : next(List<KeyValue>)
WrapperScannerImpl -> actual scanner  : next(List<KeyValue>), which does the real
                                        scan from the HFiles
The wrapper then looks at the returned list of KVs and removes those you don't
want; only the KVs it passes back end up in the final merged file.

Hope that makes it clear for you :)

Note: preCompactScannerOpen() is called before the actual scanner is even
created, while preCompact() is called after this scanner has been created. You
can see the code in Store#compactStore().

-Anoop-

From: yun peng [pengyunm...@gmail.com]
Sent: Wednesday, October 17, 2012 9:04 PM
To: user@hbase.apache.org
Subject: Re: Where is code in hbase that physically delete a record?

Hi, Ram and Anoop, Thanks for the nice reference on the java file, which I
will check through.

It is interesting to know about the recent feature on
preCompactScannerOpen() hook. Ram, it would be nice if I can know how to
specify conditions like c1 = 'a'.  I have also checked the example code in
hbase 6496 link https://issues.apache.org/jira/browse/HBASE-6496. which
show how to delete data before time as in a on-demand specification...
Cheers,
Yun

On Wed, Oct 17, 2012 at 8:46 AM, Ramkrishna.S.Vasudevan 
ramkrishna.vasude...@huawei.com wrote:

 Also to see the code how the delete happens pls refer to StoreScanner.java
 and how the ScanQueryMatcher.match() works.

 That is where we decide if any kv has to be avoided due to already deleted
 tombstone marker.

 Forgot to tell you about this.

 Regards
 Ram

  -Original Message-
  From: yun peng [mailto:pengyunm...@gmail.com]
  Sent: Wednesday, October 17, 2012 5:54 PM
  To: user@hbase.apache.org
  Subject: Where is code in hbase that physically delete a record?
 
  Hi, All,
  I want to find internal code in hbase where physical deleting a record
  occurs.
 
  -some of my understanding.
  Correct me if I am wrong. (It is largely based on my experience and
  even
  speculation.) Logically deleting a KeyValue data in hbase is performed
  by
  marking tombmarker (by Delete() per records) or setting TTL/max_version
  (per Store). After these actions, however, the physical data are still
  there, somewhere in the system. Physically deleting a record in hbase
  is
  realised by *a scanner to discard a keyvalue data record* during the
  major_compact.
 
  -what I need
  I want to extend hbase to associate some actions with physically
  deleting a
  record. Does hbase provide such hook (or coprocessor API) to inject
  code
  for each KV record that is skipped by hbase storescanner in
  major_compact.
  If not, anyone knows where should I look into in hbase (-0.94.2) for
  such
  code modification?
 
  Thanks.
  Yun



RE: could not start HMaster

2012-10-17 Thread 谢良
Is there any complaint in the HDFS log?

From: yulin...@dell.com [yulin...@dell.com]
Sent: October 16, 2012 4:35
To: user@hbase.apache.org
Subject: RE: could not start HMaster

No, I don't think so. This is a dedicated testing machine and no automatic 
cleaning up on the /tmp folder...

Thanks,

YuLing

-Original Message-
From: Jimmy Xiang [mailto:jxi...@cloudera.com]
Sent: Monday, October 15, 2012 1:32 PM
To: user@hbase.apache.org
Subject: Re: could not start HMaster

Is your /tmp folder cleaned up automatically and some files are gone?

Thanks,
Jimmy

On Mon, Oct 15, 2012 at 12:26 PM,  yulin...@dell.com wrote:
 Hi,

 I set up a single node HBase server on top of Hadoop and it has been working 
 fine with most of my testing scenarios such as creating tables and inserting 
 data. Just during the weekend, I accidentally left a testing script running 
 that inserts about 67 rows every min for three days. Today when I looked at 
 the environment, I found out that HBase master could not be started anymore. 
 Digging into the logs, I could see that starting from the second day, HBase 
 first got an exception as follows:

 2012-10-13 13:05:07,367 INFO org.apache.hadoop.hbase.regionserver.wal.HLog: 
 Roll 
 /tmp/hbase-root/hbase/.logs/sflow-linux02.santanet.dell.com,47137,1348606516541/sflow-linux02.santanet.dell.com%2C47137%2C1348606516541.1350155105992,
  entries=7981, filesize=3754556.  for 
 /tmp/hbase-root/hbase/.logs/sflow-linux02.santanet.dell.com,47137,1348606516541/sflow-linux02.santanet.dell.com%2C47137%2C1348606516541.1350158707364
 2012-10-13 13:05:07,367 INFO org.apache.hadoop.hbase.regionserver.wal.HLog: 
 moving old hlog file 
 /tmp/hbase-root/hbase/.logs/sflow-linux02.santanet.dell.com,47137,1348606516541/sflow-linux02.santanet.dell.com%2C47137%2C1348606516541.1348606520442
  whose highest sequenceid is 4 to 
 /tmp/hbase-root/hbase/.oldlogs/sflow-linux02.santanet.dell.com%2C47137%2C1348606516541.1348606520442
 2012-10-13 13:05:07,379 FATAL 
 org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server 
 sflow-linux02.santanet.dell.com,47137,1348606516541: IOE in log roller
 java.io.FileNotFoundException: File 
 file:/tmp/hbase-root/hbase/.logs/sflow-linux02.santanet.dell.com,47137,1348606516541/sflow-linux02.santanet.dell.com%2C47137%2C1348606516541.1348606520442
  does not exist.
at 
 org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:397)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:213)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
at 
 org.apache.hadoop.fs.RawLocalFileSystem.rename(RawLocalFileSystem.java:287)
at 
 org.apache.hadoop.fs.ChecksumFileSystem.rename(ChecksumFileSystem.java:428)
at 
 org.apache.hadoop.hbase.regionserver.wal.HLog.archiveLogFile(HLog.java:825)
at 
 org.apache.hadoop.hbase.regionserver.wal.HLog.cleanOldLogs(HLog.java:708)
at 
 org.apache.hadoop.hbase.regionserver.wal.HLog.rollWriter(HLog.java:603)
at 
 org.apache.hadoop.hbase.regionserver.LogRoller.run(LogRoller.java:94)
at java.lang.Thread.run(Thread.java:662)

 Then SplitLogManager kept splitting the logs for about two days:
 2012-10-13 13:05:09,061 WARN org.apache.zookeeper.server.NIOServerCnxn: 
 caught end of stream exception
 EndOfStreamException: Unable to read additional data from client sessionid 
 0x139ff3656b30003, likely client has closed socket
at 
 org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
at 
 org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:224)
at java.lang.Thread.run(Thread.java:662)
 2012-10-13 13:05:09,061 INFO org.apache.zookeeper.server.NIOServerCnxn: 
 Closed socket connection for client /127.0.0.1:52573 which had sessionid 
 0x139ff3656b30003
 2012-10-13 13:05:09,082 INFO org.apache.zookeeper.ClientCnxn: EventThread 
 shut down
 2012-10-13 13:05:09,085 INFO 
 org.apache.hadoop.hbase.master.handler.ServerShutdownHandler: Splitting logs 
 for sflow-linux02.santanet.dell.com,47137,1348606516541
 2012-10-13 13:05:09,086 INFO org.apache.hadoop.hbase.master.SplitLogManager: 
 dead splitlog worker sflow-linux02.santanet.dell.com,47137,1348606516541
 2012-10-13 13:05:09,101 INFO org.apache.hadoop.hbase.master.SplitLogManager: 
 started splitting logs in 
 [file:/tmp/hbase-root/hbase/.logs/sflow-linux02.santanet.dell.com,47137,1348606516541-splitting]
 2012-10-13 13:05:14,545 INFO org.apache.hadoop.hbase.regionserver.Leases: 
 RegionServer:0;sflow-linux02.santanet.dell.com,47137,1348606516541.leaseChecker
  closing leases
 2012-10-13 13:05:14,545 INFO org.apache.hadoop.hbase.regionserver.Leases: 
 RegionServer:0;sflow-linux02.santanet.dell.com,47137,1348606516541.leaseChecker
  closed leases
 2012-10-13 13:08:09,275 INFO org.apache.hadoop.hbase.master.SplitLogManager: 
 task /hbase/splitlog/RESCAN28 entered state done 
 

Re: crafting your key - scan vs. get

2012-10-17 Thread Neil Yalowitz
This is a helpful response, thanks.  Our use case fits the "Show me the
most recent events by user A" scenario you described.

So using the first example, a table populated with events of user ID AA.

ROW    COLUMN+CELL
 AA    column=data:event, timestamp=1350420705459, value=myeventval1
 AA    column=data:event9998, timestamp=1350420704490, value=myeventval2
 AA    column=data:event9997, timestamp=1350420704567, value=myeventval3
NOTE1: I replaced the TS stuff with ...9997 for brevity, and the
example user ID AA would actually be hashed to avoid hotspotting
NOTE2: I assume I should shorten the chosen column family and qualifier
before writing it to a large production table (for instance, d instead of
data and e instead of event)

I hope I have that right.  Thanks for the response!

As for including enough description for the question to be not-boring,
I'm never quite sure when an email will grow so long that no one will read
it.  :)  So to give more background: Each event is about 1KB of data.  The
frequency is highly variable... over any given period of time, some users
may only log one event and no more, some users may log a few events (10 to
100), in some rare cases a user may log many events (1000+).  The width of
the column is some concern for the users with many events, but I'm thinking
a few rare rows with 1KB x 1000+ width shouldn't kill us.

If I may ask a couple of followup question about your comments:

 Then store each event in a separate column where the column name is
something like event + (max Long - Time Stamp) .

 This will place the most recent event first.

Although I know row keys are sorted, I'm not sure what this means for a
qualifier.  The scan result can depend on what cf:qual is used?  ...and
that determines which column value is first?  Is this related to using
setMaxResultsPerColumnFamily(1)?  (ie-- only return one column value, so
sort on qualifier and return the first val found)

 The reason I say event + the long, is that you may want to place user
specific information in a column and you would want to make sure it was in
front of the event data.

Same question as above, I'm not sure what would place a column in front.
 Am I missing something?

 In the first case, you can use get(); while it is still a scan internally, it's
a very efficient fetch.
 In the second, you will always need to do a scan.

This is the core of my original question.  My anecdotal tests in hbase
shell showed a Get executing about 3x faster than a Scan with
start/stoprow, but I don't trust my crude testing much and hoped someone
could describe the performance trade-off between Scan vs. Get.


Thanks again for anyone who read this far.


Neil Yalowitz
neilyalow...@gmail.com

On Wed, Oct 17, 2012 at 10:45 AM, Michael Segel
michael_se...@hotmail.comwrote:

 Neil,


 Since you asked
 Actually your question is kind of a boring question. ;-) [Note I will
 probably get flamed for saying it, even if it is the truth!]

 Having said that...
 Boring as it is, its an important topic that many still seem to trivialize
 in terms of its impact on performance.

 Before answering your question, lets take a step back and ask a more
 important question...
 What data do you to capture and store in HBase?
 and then ask yourself...
 How do I plan on accessing the data?

 From what I can tell, you want to track certain events made by a user.
 So you're recording at Time X, user A did something.

 Then the question is how do you want to access the data.

 Do you primarily say Show me all the events in the past 15 minutes and
 organize them by user?
 Or do you say Show me the most recent events by user A ?

 Here's the issue.

 If you are more interested and will frequently ask the question of Show
 me the most recent events by user A,

 Then you would want to do the following:
 Key = User ID (hashed if necessary)
 Column Family: Data (For lack of a better name)

 Then store each event in a separate column where the column name is
 something like event + (max Long - Time Stamp) .

 This will place the most recent event first.

 The reason I say event + the long, is that you may want to place user
 specific information in a column and you would want to make sure it was in
 front of the event data.

 Now if your access pattern was more along the lines of show me  the events
 that occurred in the past 15 minutes, then you would use the time stamp and
 then have to worry about hot spotting and region splits. But then you could
 get your data from a simple start/stop row scan.

 In the first case, you can use get(); while it is still a scan internally, it's
 a very efficient fetch.
 In the second, you will always need to do a scan.

 Having said that, there are other things to think about including
 frequency and how wide your rows will get over time.
 (Mainly in terms of the first example I gave.)

 The reason I said that your question is boring is that its been asked
 numerous times and every time 
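
A minimal sketch of the reverse-timestamp qualifier pattern from this thread,
assuming the 0.94 client API; the table name events_by_user, the family d, the
qualifier prefix e, and the class name LatestEventSketch are illustrative
placeholders:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class LatestEventSketch {
  private static final byte[] FAMILY = Bytes.toBytes("d");   // short family name
  private static final byte[] PREFIX = Bytes.toBytes("e");   // short qualifier prefix

  // (Long.MAX_VALUE - ts) shrinks as ts grows, so newer events sort first
  // among the "e..." qualifiers of a row.
  static byte[] qualifierFor(long ts) {
    return Bytes.add(PREFIX, Bytes.toBytes(Long.MAX_VALUE - ts));
  }

  public static void main(String[] args) throws Exception {
    HTable table = new HTable(HBaseConfiguration.create(), "events_by_user");
    byte[] userRow = Bytes.toBytes("AA");   // a hashed user id in practice

    // Write one event value under the reverse-timestamp qualifier.
    Put put = new Put(userRow);
    put.add(FAMILY, qualifierFor(System.currentTimeMillis()), Bytes.toBytes("myeventval1"));
    table.put(put);

    // "Most recent event for user A": a single-row Get. Cells come back sorted
    // by qualifier, so the first KeyValue is the newest event; the
    // setMaxResultsPerColumnFamily(1) limiter mentioned above could avoid
    // pulling back the whole (possibly wide) row.
    Get get = new Get(userRow);
    get.addFamily(FAMILY);
    Result r = table.get(get);
    KeyValue newest = r.raw()[0];
    System.out.println(Bytes.toString(newest.getValue()));

    table.close();
  }
}

The time-keyed alternative described above would instead use a start/stop-row
Scan and accept the hotspotting trade-off.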

RE: Where is code in hbase that physically delete a record?

2012-10-17 Thread Ramkrishna.S.Vasudevan
Hi Yun

Hope Anoop's clear explanation will help you.
Just to add on: after you wrap the StoreScanner in your custom scanner impl,
you will invoke next(List<KeyValue>) on the delegate (here the delegate is
the actual StoreScanner).
The delegate will give you the KV list that it has fetched from the underlying
scanners (MemStore and StoreFileScanners).
Now, on the returned KVs you can do a check (say, if a KV has column C1 and
its value is 'a', just skip it) so that this scanner does not pass the KV on
to the compaction, which reads through your custom scanner.

The code may look like this:
class CustomScanner implements InternalScanner {
  private final InternalScanner delegate;   // the actual compaction scanner, passed in

  public CustomScanner(InternalScanner delegate) {
    this.delegate = delegate;
  }

  public boolean next(List<KeyValue> kvs) throws IOException {
    boolean more = delegate.next(kvs);
    // Do the necessary filtering here: drop KVs whose column is c1 and value is 'a'.
    for (Iterator<KeyValue> it = kvs.iterator(); it.hasNext();) {
      KeyValue kv = it.next();
      if (Bytes.equals(kv.getQualifier(), Bytes.toBytes("c1"))
          && Bytes.equals(kv.getValue(), Bytes.toBytes("a"))) {
        it.remove();
      }
    }
    return more;
  }

  // The remaining next(...) overloads of InternalScanner and close() should
  // simply delegate in the same way.
}
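
For completeness, a minimal sketch of the preCompact() hook that would return
such a wrapper, assuming the 0.94 RegionObserver API; the observer class name
CompactionFilterObserver is hypothetical, it reuses the CustomScanner sketched
above, and it would still have to be registered on the table (for example via
HTableDescriptor.addCoprocessor()) or in hbase-site.xml:

import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.InternalScanner;
import org.apache.hadoop.hbase.regionserver.Store;

public class CompactionFilterObserver extends BaseRegionObserver {
  @Override
  public InternalScanner preCompact(ObserverContext<RegionCoprocessorEnvironment> e,
      Store store, InternalScanner scanner) {
    // The compaction reads through whatever is returned here, so any KeyValue
    // the wrapper drops never makes it into the newly written HFile.
    return new CustomScanner(scanner);
  }
}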

Regards
Ram

 -Original Message-
 From: Anoop Sam John [mailto:anoo...@huawei.com]
 Sent: Thursday, October 18, 2012 9:02 AM
 To: user@hbase.apache.org
 Subject: RE: Where is code in hbase that physically delete a record?
 
 Hi Yun,
  We have preCompactScannerOpen() and preCompact() hooks..
 As we said, for compaction, a scanner for reading all corresponding
 HFiles ( all HFiles in major compaction) will be created and scan via
 that scanner.. ( calling next() methods).. The kernel will do this
 way..
 Now using these hooks you can create a wrapper over the actual
 scanner... In fact you can use preCompact() hook(I think that is fine
 for you).. By the time this is being called,  the actual scanner is
 made and will get that object passed to your hook... You can create  a
 custom scanner impl and wrap the actual scanner within that and return
 the new wrapper scanner from your post hook.. [Yes its return type is
 InternalScanner]  The actual scanner you can use as a delegator to do
 the actual scanning purpose... Now all the KVs ( which the underlying
 scanner passed) will flow via ur new wrapper scanner where you can
 avoid certain KVs based on your condition or logic
 
  Core -> WrapperScannerImpl           : next(List<KeyValue>)
  WrapperScannerImpl -> actual scanner : next(List<KeyValue>), which does the real
                                         scan from the HFiles (the actual scanner
                                         is created by the core)
  The wrapper then looks at the returned list of KVs and removes those you don't
  want; only the KVs it passes back end up in the final merged file.
  
 Hope I make it clear for you :)
 
 Note : - preCompactScannerOpen() will be called before even creating
 the actual scanner while preCompact() after this scanner creation.. You
 can see the code in Store#compactStore()
 
 -Anoop-
 
 From: yun peng [pengyunm...@gmail.com]
 Sent: Wednesday, October 17, 2012 9:04 PM
 To: user@hbase.apache.org
 Subject: Re: Where is code in hbase that physically delete a record?
 
 Hi, Ram and Anoop, Thanks for the nice reference on the java file,
 which I
 will check through.
 
 It is interesting to know about the recent feature on
 preCompactScannerOpen() hook. Ram, it would be nice if I can know how
 to
 specify conditions like c1 = 'a'.  I have also checked the example code
 in
 hbase 6496 link https://issues.apache.org/jira/browse/HBASE-6496.
 which
 show how to delete data before time as in a on-demand specification...
 Cheers,
 Yun
 
 On Wed, Oct 17, 2012 at 8:46 AM, Ramkrishna.S.Vasudevan 
 ramkrishna.vasude...@huawei.com wrote:
 
  Also to see the code how the delete happens pls refer to
 StoreScanner.java
  and how the ScanQueryMatcher.match() works.
 
  That is where we decide if any kv has to be avoided due to already
 deleted
  tombstone marker.
 
  Forgot to tell you about this.
 
  Regards
  Ram
 
   -Original Message-
   From: yun peng [mailto:pengyunm...@gmail.com]
   Sent: Wednesday, October 17, 2012 5:54 PM
   To: user@hbase.apache.org
   Subject: Where is code in hbase that physically delete a record?
  
   Hi, All,
   I want to find internal code in hbase where physical deleting a
 record
   occurs.
  
   -some of my understanding.
   Correct me if I am wrong. (It is largely based on my experience and
   even speculation.) Logically deleting a KeyValue in hbase is performed
   by writing a tombstone marker (via Delete() per record) or by setting
   TTL/max_versions (per Store). After these actions, however, the
   physical data is still there, somewhere in the system. Physically
   deleting a record in hbase is realised by *a scanner discarding a
   keyvalue record* during the major_compact.
  
   -what I need
   I want to extend hbase to associate some actions with physically
   deleting a record. Does hbase provide such a hook (or coprocessor API)
   to inject code for each KV record that is skipped by the hbase
   StoreScanner in major_compact?
   If not, anyone knows where 

RE: Coprocessor end point vs MapReduce?

2012-10-17 Thread Anoop Sam John

Hi Jean
   Are all coprocessors running at the same time on all regions
Yes, it will try to run them all in parallel.. It will submit one callable for
each of the involved regions. It uses the Executor pool available with the
HTable, though, so the number of free slots in that pool and the total region
count determine how parallel the run really is..
The MapReduce framework guarantees me that it will run against
 all the regions. I tried to stop a regionserver while the job was
 running. The region moved, and the MapReduce restarted the job from
 the new location. Will the coprocessor do the same thing?
Yes it will.. There will be retries (max 10 times by default) for every call to
a region.
Another point that came to my mind now: what will happen if a region splits in
between? How will MR handle that case? Sorry, I don't know. Need to see the
code.

Regarding your use case Jean,
 You want to put some data into another table, right? How do you plan to make
use of a CP for this Put? (I wonder.) For the bulk delete, as you said, if you
use an MR job it is essentially a scan to the client side followed by deleting
rows one by one (though with many parallel clients, your mappers). So, as you
expect, it will be very slow compared to the new approach we are trying in
HBASE-6942..
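
To make the parallel, per-region execution concrete, here is a rough
0.94-style sketch of invoking an endpoint from the client side.
MyBulkDeleteProtocol, deleteBefore() and the variable names are only
illustrative, not the actual HBASE-6942 interface:

// Hypothetical endpoint protocol; the real HBASE-6942 interface may differ.
public interface MyBulkDeleteProtocol extends CoprocessorProtocol {
  long deleteBefore(long timestamp) throws IOException;
}

// Client side: one callable is submitted per region in the key range, run in
// parallel on the HTable's executor pool, with the usual per-region retries.
// (coprocessorExec() declares 'throws Throwable', so wrap it accordingly.)
HTable table = new HTable(conf, "my_table");
final long cutoffTs = System.currentTimeMillis() - 7 * 24 * 3600 * 1000L;
Map<byte[], Long> deletedPerRegion = table.coprocessorExec(
    MyBulkDeleteProtocol.class, null, null,   // null start/end row = all regions
    new Batch.Call<MyBulkDeleteProtocol, Long>() {
      public Long call(MyBulkDeleteProtocol instance) throws IOException {
        return instance.deleteBefore(cutoffTs);
      }
    });
table.close();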

Hope I have answered your questions.. :)

-Anoop-

From: Michael Segel [michael_se...@hotmail.com]
Sent: Thursday, October 18, 2012 7:20 AM
To: user@hbase.apache.org
Subject: Re: Coprocessor end point vs MapReduce?

Run your weekly job in a low priority fair scheduler/capacity scheduler queue.
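
If you go that route, a rough MR1-era sketch of pointing the job at a
low-priority pool/queue (the pool and queue names here are just examples):

Configuration conf = HBaseConfiguration.create();
conf.set("mapred.fairscheduler.pool", "low");   // fair scheduler pool
conf.set("mapred.job.queue.name", "low");       // capacity scheduler queue
Job job = new Job(conf, "weekly-archive-job");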

Maybe it's just me, but I look at Coprocessors as a structure similar to RDBMS
triggers and stored procedures.
You need to show restraint and use them sparingly, otherwise you end up
creating performance issues.

Just IMHO.

-Mike

On Oct 17, 2012, at 8:44 PM, Jean-Marc Spaggiari jean-m...@spaggiari.org 
wrote:

 I don't have any concern about the time it's taking. It's more about
 the load it's putting on the cluster. I have other jobs that I need to
 run (secondary index, data processing, etc.). So the more time this
 new job is taking, the less CPU the others will have.

 I tried the M/R and I really liked the way it's done. So my only
 concern will really be the performance of the delete part.

 That's why I'm wondering what's the best practice to move a row to
 another table.

 2012/10/17, Michael Segel michael_se...@hotmail.com:
 If you're going to be running this weekly, I would suggest that you stick
 with the M/R job.

 Is there any reason why you need to be worried about the time it takes to do
 the deletes?


 On Oct 17, 2012, at 8:19 PM, Jean-Marc Spaggiari jean-m...@spaggiari.org
 wrote:

 Hi Mike,

 I'm expecting to run the job weekly. I initially thought about using
 end points because I found HBASE-6942 which was a good example for my
 needs.

 I'm fine with the Put part for the Map/Reduce, but I'm not sure about
 the delete. That's why I look at coprocessors. Then I figure that I
 also can do the Put on the coprocessor side.

 In an M/R job, can I delete the rows I'm dealing with based on some criteria
 like the timestamp? If I do that, I will not be doing bulk deletes; I will be
 deleting the rows one by one, right? Which might be very slow.
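
 Roughly, the one-by-one approach I have in mind would look like the map-only
 sketch below (table names, timestamps and the mapper class are placeholders):

 // Driver: scan only the wanted time range, run map-only, and write Deletes
 // back to the source table through TableOutputFormat.
 Scan scan = new Scan();
 scan.setTimeRange(startTs, endTs);
 scan.setCaching(500);
 Job job = new Job(conf, "delete-moved-rows");
 TableMapReduceUtil.initTableMapperJob("source_table", scan, DeleteMapper.class,
     ImmutableBytesWritable.class, Delete.class, job);
 TableMapReduceUtil.initTableReducerJob("source_table", null, job);
 job.setNumReduceTasks(0);

 public static class DeleteMapper extends TableMapper<ImmutableBytesWritable, Delete> {
   protected void map(ImmutableBytesWritable row, Result value, Context context)
       throws IOException, InterruptedException {
     // one Delete per row; TableOutputFormat buffers and flushes them
     context.write(row, new Delete(row.get()));
   }
 }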

 If in the future I want to run the job daily, might that be an issue?

 Or should I go with the initial idea of doing the Put with the M/R job
 and the delete with HBASE-6942?

 Thanks,

 JM


 2012/10/17, Michael Segel michael_se...@hotmail.com:
 Hi,

 I'm a firm believer in KISS (Keep It Simple, Stupid)

 The Map/Reduce (map job only) is the simplest and least prone to
 failure.

 Not sure why you would want to do this using coprocessors.

 How often are you running this job? It sounds like it's going to be
 sporadic.

 -Mike

 On Oct 17, 2012, at 7:11 PM, Jean-Marc Spaggiari
 jean-m...@spaggiari.org
 wrote:

 Hi,

 Can someone please help me to understand the pros and cons between
 those 2 options for the following usecase?

 I need to transfer all the rows between 2 timestamps to another table.

 My first idea was to run a MapReduce to map the rows and store them on
 another table, and then delete them using an end point coprocessor.
 But the more I look into it, the more I think the MapReduce is not a
 good idea and I should use a coprocessor instead.

 BUT... The MapReduce framework guarantees me that it will run against
 all the regions. I tried to stop a regionserver while the job was
 running. The region moved, and the MapReduce restarted the job from
 the new location. Will the coprocessor do the same thing?

 Also, I found the webconsole for the MapReduce with the number of
 jobs, the status, etc. Is there the same thing with the coprocessors?

 Are all coprocessors running at the same time on all regions, which
 means we can have 100 of them running on a regionserver at a time? Or
 are they running like the MapReduce jobs, based on some configured
 values?

 Thanks,

 JM








RE: Regionservers not connecting to master

2012-10-17 Thread Ramkrishna.S.Vasudevan
Just check your /etc/hosts files.  I have not worked with VMs, so I can't
tell the problem more precisely anyway.

Regards
Ram

 -Original Message-
 From: Dan Brodsky [mailto:danbrod...@gmail.com]
 Sent: Wednesday, October 17, 2012 11:05 PM
 To: user@hbase.apache.org
 Subject: Re: Regionservers not connecting to master
 
 Well, slight change: only 1 of the ZK peers happens to work. When a RS
 connects to the other 2, it doesn't go further than that. The 1 ZK
 node that happens to work is the one that runs on the same VM as the
 master.
 
 Sounds like it could be network connectivity issues, so I'm going to
 investigate that a bit further, but other suggestions are welcome.
 
 
 On Wed, Oct 17, 2012 at 1:29 PM, Dan Brodsky danbrod...@gmail.com
 wrote:
  Ram,
 
  Thanks for your suggestions.
 
  The datanodes are all built using the same image, so I know they're
  all pointed to the same ZK nodes.
 
  I monitored all three ZK logs, the master log, and the regionserver
  log for each RS I was trying to bring back online. I'm glad I have a
  big screen. :-) Here is what I found:
 
  Whenever a regionserver connects to one particular ZK peer *first*,
 it
  never goes online. The ZK log shows a successful connection
  negotiating a timeout value, and the RS's log shows a successful ZK
  connection, but then it just sits there.
 
  When a regionserver starts up and connects to one of the other two ZK
  peers first, it connects to a second one successfully, then contacts
  the master, and it comes up and all is happy.
 
  So the problem of regionservers not connecting to master only happens
  when the RS tries one particular ZK node as its first ZK connection.
  But the logs aren't helpful for diagnosing further than that.
 
  Additional thoughts?
 
 
  On Wed, Oct 17, 2012 at 9:12 AM, Ramkrishna.S.Vasudevan
  ramkrishna.vasude...@huawei.com wrote:
  Can you try starting some of the regionservers that are not connecting
  at all?  Maybe start 2 of them.
  Observe the master logs.  See whether it says
  'Waiting for RegionServers to checkin'.
 
  Just to confirm, are the ZK IP and port correct throughout the cluster?
  If it is a multitenant cluster then maybe the other regionservers are
  connecting to some other ZK cluster?
  Wild guess :)
 
  Regards
  Ram
  -Original Message-
  From: Dan Brodsky [mailto:danbrod...@gmail.com]
  Sent: Wednesday, October 17, 2012 6:31 PM
  To: user@hbase.apache.org
  Subject: Regionservers not connecting to master
 
  Good morning,
 
  I have a 10 node Hadoop/Hbase cluster, plus a namenode VM, plus
 three
  Zookeeper quorum peers (one on the namenode, one on a dedicated ZK
  peer VM, and one on a third box). All 10 HDFS datanodes are also
 Hbase
  regionservers.
 
  Several weeks ago, we had six HDFS datanodes go offline suddenly
 (with
  no meaningful error messages), and since then, I have been unable
 to
  get all 10 regionservers to connect to the Hbase master. I've tried
  bringing the cluster down and rebooting all the boxes, but no joy.
 The
  machines are all running, and hbase-regionserver appears to start
  normally on each one.
 
  Right now, my master status page (http://namenode:60010) shows 3
  regionservers online. There are also dozens of regions in
 transition
  listed on the status page (in the PENDING_OPEN state), but each of
  those are on one of the regionservers already online.
 
  The 7 other regionservers' log files show a successful connection
 to
  one ZK peer, followed by a regular trail of these messages:
 
  2012-10-17 12:36:08,394 DEBUG
  org.apache.hadoop.hbase.io.hfile.LruBlockCache: LRU Stats:
 total=8.17
  MB, free=987.67 MB, max=995.84 MB, blocks=0, accesses=0, hits=0,
  hitRatio=0cachingAccesses=0, cachingHits=0,
  cachingHitsRatio=0evictions=0, evicted=0, evictedPerRun=NaN
 
  If I had to wager a guess, it seems like the 7 offline
 regionservers
  are not connecting to other ZK peers, but there isn't anything in
 the
  ZK logs to indicate why.
 
  Thoughts?
 
  Dan
 



RE: Unable to add co-processor to table through HBase api

2012-10-17 Thread Ramkrishna.S.Vasudevan
Do let me know if you get stuck.  Maybe I did not get your actual
problem.
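
Just to be explicit about what I meant below, a minimal sketch of adding the
coprocessor to the descriptor before the table is created (the table, family
and class names are only taken from your snippet as examples; for an already
existing table I believe the changed descriptor has to be pushed back with
modifyTable() rather than just mutating the one returned by
getTableDescriptor()):

Configuration conf = HBaseConfiguration.create();
HBaseAdmin admin = new HBaseAdmin(conf);
HTableDescriptor htd = new HTableDescriptor("txn_subset");
htd.addFamily(new HColumnDescriptor("cf"));
htd.addCoprocessor(
    "com.intuit.hbase.poc.coprocessor.observer.IhubTxnRegionObserver");
admin.createTable(htd);

// For an existing table (sketch): disable, push the modified descriptor, enable.
// admin.disableTable("txn_subset");
// admin.modifyTable(Bytes.toBytes("txn_subset"), htd);
// admin.enableTable("txn_subset");
admin.close();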

All the best.

Regards
Ram

 -Original Message-
 From: anil gupta [mailto:anilgupt...@gmail.com]
 Sent: Wednesday, October 17, 2012 11:34 PM
 To: user@hbase.apache.org
 Subject: Re: Unable to add co-processor to table through HBase api
 
 Hi Ram,
 
The table exists and I don't get any error while running the program (I
would get an error if the table did not exist). I am running a distributed
cluster.
 
 Tried following additional ways also:
 
1. I tried loading the AggregationImplementation coproc.
2. I also tried adding the coprocs while the table is enabled.
 
 
 Also had a look at the JUnit test cases and could not find any
 difference.
 
 I am going to try adding the coproc along with jar in Hdfs and see what
 happens.
 
 Thanks,
 Anil Gupta
 
 On Tue, Oct 16, 2012 at 11:44 PM, Ramkrishna.S.Vasudevan 
 ramkrishna.vasude...@huawei.com wrote:
 
  I tried out a sample test class.  It is working properly.  I just have a
  doubt: are you doing the htd.addCoprocessor() step before creating the
  table?  Try it that way; I hope it should work.
 
  Regards
  Ram
 
   -Original Message-
   From: anil gupta [mailto:anilgupt...@gmail.com]
   Sent: Wednesday, October 17, 2012 4:05 AM
   To: user@hbase.apache.org
   Subject: Unable to add co-processor to table through HBase api
  
   Hi All,
  
   I would like to add a RegionObserver to an HBase table through the HBase
   api. I don't want to put this RegionObserver as a user or system
   co-processor in hbase-site.xml since it is specific to one table, so the
   option of using hbase properties is out. I have already copied the jar
   file into the classpath of the region servers and restarted the cluster.
  
   Can anyone point out the problem in the following code for adding the
   co-processor to the table:
   private void modifyTable(String name) throws IOException {
     Configuration conf = HBaseConfiguration.create();
     HBaseAdmin hAdmin = new HBaseAdmin(conf);
     hAdmin.disableTable("txn_subset");
     if (!hAdmin.isTableEnabled("txn_subset")) {
       // using err so that it's easy to read this on the eclipse console
       System.err.println("Trying to add coproc to table");
       hAdmin.getTableDescriptor(Bytes.toBytes("txn_subset")).addCoprocessor(
           "com.intuit.hbase.poc.coprocessor.observer.IhubTxnRegionObserver");
       if (hAdmin.getTableDescriptor(Bytes.toBytes("txn_subset")).hasCoprocessor(
           "com.intuit.hbase.poc.coprocessor.observer.IhubTxnRegionObserver")) {
         System.err.println("YIPIE!!!");
       }
       hAdmin.enableTable("ihub_txn_subset");
     }
     hAdmin.close();
   }
   --
    Thanks & Regards,
   Anil Gupta
 
 
 
 
 --
  Thanks & Regards,
 Anil Gupta