regionserver stucking

2013-04-17 Thread hua beatls
HI,
   from the web UI I found one of my 5 regionservers missing, and checking the log
I see:

[hadoop@hadoop1 logs]$ tail -f hbase-hadoop-regionserver-hadoop1.log
2013-04-17 15:21:24,789 DEBUG
org.apache.hadoop.hbase.regionserver.CompactSplitThread: Waiting for Split
Thread to finish...
2013-04-17 15:22:24,789 DEBUG
org.apache.hadoop.hbase.regionserver.CompactSplitThread: Waiting for Split
Thread to finish...

but the regionserver process is still alive in the jps output.

 any suggestion?

Thanks!

beatls


Re: Loading text files from local file system

2013-04-17 Thread Suraj Varma
Have you considered using hfile.compression, perhaps with snappy
compression?
See this thread:
http://grokbase.com/t/hbase/user/10cqrd06pc/hbase-bulk-load-script
--Suraj
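
A minimal sketch of one concrete way to act on this suggestion, assuming HBase
0.94.x and that the Snappy native libraries are installed on every node: the bulk
output path of importtsv (HFileOutputFormat.configureIncrementalLoad) picks up the
compression configured on the target column family, so enabling Snappy on the
family makes the generated HFiles come out compressed. Table and family names are
taken from the commands quoted further down; verify against your own version
before relying on this.

// Hedged sketch, not from the thread: enable Snappy on the family that
// importtsv writes to, so the bulk-load HFiles are written compressed.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.io.hfile.Compression;
import org.apache.hadoop.hbase.util.Bytes;

public class EnableFamilyCompression {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    // Disable the table first unless online schema changes are enabled.
    admin.disableTable("PRODUCTS");
    // Note: modifyColumn replaces the family descriptor, so copy over any
    // non-default settings (versions, TTL, ...) before applying this.
    HColumnDescriptor cf = new HColumnDescriptor("PRODUCT_INFO");
    cf.setCompressionType(Compression.Algorithm.SNAPPY);
    admin.modifyColumn(Bytes.toBytes("PRODUCTS"), cf);
    admin.enableTable("PRODUCTS");
  }
}

The same change can also be made from the HBase shell with an alter command;
either way, re-running the importtsv job afterwards produces compressed store
files, which should ease the DFS space pressure described below.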



On Tue, Apr 16, 2013 at 9:31 PM, Omkar Joshi omkar.jo...@lntinfotech.comwrote:

 The background thread is here :


 http://mail-archives.apache.org/mod_mbox/hbase-user/201304.mbox/%3ce689a42b73c5a545ad77332a4fc75d8c1efbe84...@vshinmsmbx01.vshodc.lntinfotech.com%3E

 Following are the commands that I'm using to load files onto HBase :

 HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath`
 ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-0.94.6.1.jar importtsv
 '-Dimporttsv.separator=;'
 -Dimporttsv.columns=HBASE_ROW_KEY,PRODUCT_INFO:NAME,PRODUCT_INFO:CATEGORY,PRODUCT_INFO:GROUP,PRODUCT_INFO:COMPANY,PRODUCT_INFO:COST,PRODUCT_INFO:COLOR,PRODUCT_INFO:BLANK_COLUMN
 -Dimporttsv.bulk.output=hdfs://cldx-1139-1033:9000/hbase/storefileoutput_6
 PRODUCTS hdfs://cldx-1139-1033:9000/hbase/copiedFromLocal/product_6.txt

 HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath`
 ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-0.94.6.1.jar
 completebulkload hdfs://cldx-1139-1033:9000/hbase/storefileoutput_6 PRODUCTS

 As seen, the text files to be loaded in HBase first need to be loaded on
 HDFS. Given our infrastructure constraints/limitations, I'm getting space
 issues. The data in the text files is around 20GB + replication is
 consuming a lot of DFS.

 Is there a way wherein a text file can be loaded directly from the local
 file system onto HBase?

 Regards,
 Omkar Joshi




Re: regionserver stucking

2013-04-17 Thread ramkrishna vasudevan
Can you attach a thread dump for this? Which version of HBase are you
using?

Logs, if attached, would also help.

Regards
Ram


On Wed, Apr 17, 2013 at 1:07 PM, hua beatls bea...@gmail.com wrote:

 HI,
from web ui I find one of my 5 regionserver missing,   and check the log
 find:

 [hadoop@hadoop1 logs]$ tail -f hbase-hadoop-regionserver-hadoop1.log
 2013-04-17 15:21:24,789 DEBUG
 org.apache.hadoop.hbase.regionserver.CompactSplitThread: Waiting for Split
 Thread to finish...
 2013-04-17 15:22:24,789 DEBUG
 org.apache.hadoop.hbase.regionserver.CompactSplitThread: Waiting for Split
 Thread to finish...

 but the regionserver process is still alive from jps output.

  any suggestion?

 Thanks!

 beatls



Re: regionserver stucking

2013-04-17 Thread hua beatls
Any good tool for a thread dump? Can you recommend one?

Thanks!

beatls


On Wed, Apr 17, 2013 at 4:06 PM, ramkrishna vasudevan 
ramkrishna.s.vasude...@gmail.com wrote:

 Can you attach a thread dump for this ?  Which version of HBase are you
 using.

 Logs also if attached would be fine.

 Regards
 Ram


 On Wed, Apr 17, 2013 at 1:07 PM, hua beatls bea...@gmail.com wrote:

  HI,
 from web ui I find one of my 5 regionserver missing,   and check the
 log
  find:
 
 
  [hadoop@hadoop1 logs]$ tail -f hbase-hadoop-regionserver-hadoop1.log
  2013-04-17 15:21:24,789 DEBUG
  org.apache.hadoop.hbase.regionserver.CompactSplitThread: Waiting for
 Split
  Thread to finish...
  2013-04-17 15:22:24,789 DEBUG
  org.apache.hadoop.hbase.regionserver.CompactSplitThread: Waiting for
 Split
  Thread to finish...
 
 
  but the regionserver process is still alive from jps output.
 
   any suggestion?
 
  Thanks!
 
  beatls
 



RegionServer shutdown with ScanWildcardColumnTracker exception

2013-04-17 Thread Amit Sela
Hi all,

I had a regionserver crash during a counter increment. Looking at the
regionserver log I saw:

org.apache.hadoop.hbase.DroppedSnapshotException: region: TABLE_NAME,
ROW_KEY...at
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1472)
at
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1351)
at
org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1292)
at
org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:406)
at
org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:380)
at
org.apache.hadoop.hbase.regionserver.MemStoreFlusher.run(MemStoreFlusher.java:243)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.io.IOException: ScanWildcardColumnTracker.checkColumn ran
into a column actually smaller than the previous column: *QUALIFIER*
at
org.apache.hadoop.hbase.regionserver.ScanWildcardColumnTracker.checkColumn(ScanWildcardColumnTracker.java:104)
at
org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:354)
at
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:362)
at
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:311)
at
org.apache.hadoop.hbase.regionserver.Store.internalFlushCache(Store.java:738)
at
org.apache.hadoop.hbase.regionserver.Store.flushCache(Store.java:673)
at
org.apache.hadoop.hbase.regionserver.Store.access$400(Store.java:108)
at
org.apache.hadoop.hbase.regionserver.Store$StoreFlusherImpl.flushCache(Store.java:2276)
at
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1447)

The strange thing is that the *QUALIFIER* name as it appears in the log is
misspelled; there is no, and never was, such a qualifier name.

Thanks,

Amit.


Re: regionserver stucking

2013-04-17 Thread hua beatls
[hadoop@hadoop1 bin]$ jstack 27737
2013-04-17 16:33:07
Full thread dump Java HotSpot(TM) 64-Bit Server VM (22.1-b02 mixed mode):

"Attach Listener" daemon prio=10 tid=0x7f8f34006000 nid=0x625f waiting
on condition [0x]
   java.lang.Thread.State: RUNNABLE

"SIGTERM handler" daemon prio=10 tid=0x7f8f34005800 nid=0x5281 waiting
for monitor entry [0x7f8f57303000]
   java.lang.Thread.State: BLOCKED (on object monitor)
at java.lang.Shutdown.exit(Shutdown.java:212)
- waiting to lock 0x00061b178a08 (a java.lang.Class for
java.lang.Shutdown)
at java.lang.Terminator$1.handle(Terminator.java:52)
at sun.misc.Signal$1.run(Signal.java:212)
at java.lang.Thread.run(Thread.java:722)

"Thread-5" prio=10 tid=0x7f8dbc00a000 nid=0x1901 in Object.wait()
[0x7f8d4de9d000]
   java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1258)
- locked 0x00061ae2ba58 (a java.lang.Thread)
at org.apache.hadoop.hbase.util.Threads.shutdown(Threads.java:94)
at org.apache.hadoop.hbase.util.Threads.shutdown(Threads.java:82)
at
org.apache.hadoop.hbase.regionserver.ShutdownHook$ShutdownHookThread.run(ShutdownHook.java:114)
at
org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)

"SIGTERM handler" daemon prio=10 tid=0x7f8f34004800 nid=0x1900 in
Object.wait() [0x7f8d4efae000]
   java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1258)
- locked 0x00061b09a740 (a
org.apache.hadoop.util.ShutdownHookManager$1)
at java.lang.Thread.join(Thread.java:1332)
at
java.lang.ApplicationShutdownHooks.runHooks(ApplicationShutdownHooks.java:106)
at
java.lang.ApplicationShutdownHooks$1.run(ApplicationShutdownHooks.java:46)
at java.lang.Shutdown.runHooks(Shutdown.java:123)
at java.lang.Shutdown.sequence(Shutdown.java:167)
at java.lang.Shutdown.exit(Shutdown.java:212)
- locked 0x00061b178a08 (a java.lang.Class for
java.lang.Shutdown)
at java.lang.Terminator$1.handle(Terminator.java:52)
at sun.misc.Signal$1.run(Signal.java:212)
at java.lang.Thread.run(Thread.java:722)

"RS_CLOSE_ROOT-hadoop1,60020,1365731099431-0" prio=10
tid=0x7f8eec346000 nid=0x4f94 waiting on condition [0x7f8f57bf6000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  0x00061b37e050 (a
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
at
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1043)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1103)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)

"RS_CLOSE_REGION-hadoop1,60020,1365731099431-2" prio=10
tid=0x7f8d5c005800 nid=0x4ba8 waiting on condition [0x7f8f57202000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  0x00061b381df8 (a
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
at
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1043)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1103)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)

"RS_CLOSE_REGION-hadoop1,60020,1365731099431-1" prio=10
tid=0x7f8d6c005800 nid=0x5322 waiting on condition [0x7f8d4edac000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  0x00061b381df8 (a
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
at
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at

Re: regionserver stucking

2013-04-17 Thread ramkrishna vasudevan
Just do jstack with pid.

Regards
Ram


On Wed, Apr 17, 2013 at 1:56 PM, hua beatls bea...@gmail.com wrote:

 any good tool for thread dump? can you recommand?

 Thanks!

 beatls


 On Wed, Apr 17, 2013 at 4:06 PM, ramkrishna vasudevan 
 ramkrishna.s.vasude...@gmail.com wrote:

  Can you attach a thread dump for this ?  Which version of HBase are you
  using.
 
  Logs also if attached would be fine.
 
  Regards
  Ram
 
 
  On Wed, Apr 17, 2013 at 1:07 PM, hua beatls bea...@gmail.com wrote:
 
   HI,
  from web ui I find one of my 5 regionserver missing,   and check the
  log
   find:
  
  
 
   [hadoop@hadoop1 logs]$ tail -f hbase-hadoop-regionserver-hadoop1.log
   2013-04-17 15:21:24,789 DEBUG
   org.apache.hadoop.hbase.regionserver.CompactSplitThread: Waiting for
  Split
   Thread to finish...
   2013-04-17 15:22:24,789 DEBUG
   org.apache.hadoop.hbase.regionserver.CompactSplitThread: Waiting for
  Split
   Thread to finish...
  
  
 
   but the regionserver process is still alive from jps output.
  
any suggestion?
  
   Thanks!
  
   beatls
  
 



Re: Loading text files from local file system

2013-04-17 Thread Suraj Varma
Maybe I misunderstood your constraint ... are you saying that your DFS
itself is constrained due to file size & replication? If so, how
about setting dfs.replication to 1 for the job?

There are other options like chopping up your file and processing it
piecemeal ... or perhaps customizing LoadIncrementalHFiles to process
compressed input files and so forth ...

See if the dfs.replication + hfile.compression option works for you first.
--Suraj



On Wed, Apr 17, 2013 at 1:00 AM, Suraj Varma svarma...@gmail.com wrote:

 Have you considered using hfile.compression, perhaps with snappy
 compression?
 See this thread:
 http://grokbase.com/t/hbase/user/10cqrd06pc/hbase-bulk-load-script
 --Suraj



 On Tue, Apr 16, 2013 at 9:31 PM, Omkar Joshi 
 omkar.jo...@lntinfotech.comwrote:

 The background thread is here :


 http://mail-archives.apache.org/mod_mbox/hbase-user/201304.mbox/%3ce689a42b73c5a545ad77332a4fc75d8c1efbe84...@vshinmsmbx01.vshodc.lntinfotech.com%3E

 Following are the commands that I'm using to load files onto HBase :

 HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath`
 ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-0.94.6.1.jar importtsv
 '-Dimporttsv.separator=;'
 -Dimporttsv.columns=HBASE_ROW_KEY,PRODUCT_INFO:NAME,PRODUCT_INFO:CATEGORY,PRODUCT_INFO:GROUP,PRODUCT_INFO:COMPANY,PRODUCT_INFO:COST,PRODUCT_INFO:COLOR,PRODUCT_INFO:BLANK_COLUMN
 -Dimporttsv.bulk.output=hdfs://cldx-1139-1033:9000/hbase/storefileoutput_6
 PRODUCTS hdfs://cldx-1139-1033:9000/hbase/copiedFromLocal/product_6.txt

 HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath`
 ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-0.94.6.1.jar
 completebulkload hdfs://cldx-1139-1033:9000/hbase/storefileoutput_6 PRODUCTS

 As seen, the text files to be loaded in HBase first need to be loaded on
 HDFS. Given our infrastructure constraints/limitations, I'm getting space
 issues. The data in the text files is around 20GB + replication is
 consuming a lot of DFS.

 Is there a way wherein a text file can be loaded directly from the local
 file system onto HBase?

 Regards,
 Omkar Joshi






Re: RegionServer shutdown with ScanWildcardColumnTracker exception

2013-04-17 Thread ramkrishna vasudevan
Seems interesting. Can you tell us what families and qualifiers are
available in your schema?

Any other interesting logs that you can see before this?

BTW, the version of HBase is also needed. If we can track it down we can
then file a JIRA if it is a bug.

Regards
Ram


On Wed, Apr 17, 2013 at 2:00 PM, Amit Sela am...@infolinks.com wrote:

 Hi all,

 I had a regionserver crushed during counters increment. Looking at the
 regionserver log I saw:

 org.apache.hadoop.hbase.DroppedSnapshotException: region: TABLE_NAME,
 ROW_KEY...at

 org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1472)
 at

 org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1351)
 at
 org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1292)
 at

 org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:406)
 at

 org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:380)
 at

 org.apache.hadoop.hbase.regionserver.MemStoreFlusher.run(MemStoreFlusher.java:243)
 at java.lang.Thread.run(Thread.java:722)
 Caused by: java.io.IOException: ScanWildcardColumnTracker.checkColumn ran
 into a column actually smaller than the previous column: *QUALIFIER*
 at

 org.apache.hadoop.hbase.regionserver.ScanWildcardColumnTracker.checkColumn(ScanWildcardColumnTracker.java:104)
 at

 org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:354)
 at

 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:362)
 at

 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:311)
 at

 org.apache.hadoop.hbase.regionserver.Store.internalFlushCache(Store.java:738)
 at
 org.apache.hadoop.hbase.regionserver.Store.flushCache(Store.java:673)
 at
 org.apache.hadoop.hbase.regionserver.Store.access$400(Store.java:108)
 at

 org.apache.hadoop.hbase.regionserver.Store$StoreFlusherImpl.flushCache(Store.java:2276)
 at

 org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1447)

 The strange thing is that the *QUALIFER* name as it appears in the log is
 misspelled there is no, and never was such qualifier name.

 Thanks,

 Amit.



Re: regionserver stucking

2013-04-17 Thread Mohammad Tariq
You could make use of jVisualVM as well. Comes in quite handy.

Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com


On Wed, Apr 17, 2013 at 2:05 PM, ramkrishna vasudevan 
ramkrishna.s.vasude...@gmail.com wrote:

 Just do jstack with pid.

 Regards
 Ram


 On Wed, Apr 17, 2013 at 1:56 PM, hua beatls bea...@gmail.com wrote:

  any good tool for thread dump? can you recommand?
 
  Thanks!
 
  beatls
 
 
  On Wed, Apr 17, 2013 at 4:06 PM, ramkrishna vasudevan 
  ramkrishna.s.vasude...@gmail.com wrote:
 
   Can you attach a thread dump for this ?  Which version of HBase are you
   using.
  
   Logs also if attached would be fine.
  
   Regards
   Ram
  
  
   On Wed, Apr 17, 2013 at 1:07 PM, hua beatls bea...@gmail.com wrote:
  
HI,
   from web ui I find one of my 5 regionserver missing,   and check
 the
   log
find:
   
   
  
 
[hadoop@hadoop1 logs]$ tail -f hbase-hadoop-regionserver-hadoop1.log
2013-04-17 15:21:24,789 DEBUG
org.apache.hadoop.hbase.regionserver.CompactSplitThread: Waiting for
   Split
Thread to finish...
2013-04-17 15:22:24,789 DEBUG
org.apache.hadoop.hbase.regionserver.CompactSplitThread: Waiting for
   Split
Thread to finish...
   
   
  
 
but the regionserver process is still alive from jps output.
   
 any suggestion?
   
Thanks!
   
beatls
   
  
 



Re: RegionServer shutdown with ScanWildcardColumnTracker exception

2013-04-17 Thread Amit Sela
The cluster runs Hadoop 1.0.4 and HBase 0.94.2

I have three families in this table: weekly, daily, hourly. Each family has
the following qualifiers:
Weekly - impressions_{countrycode}_{week#} - country code is 0, 1 or ALL
(aggregation of both 0 and 1)
Daily and hourly are the same but with MMdd and MMddhh
respectively.

Just before the exception the regionserver StoreFile executes the
following:

2013-04-16 17:56:06,769 [regionserver8041.cacheFlusher] INFO
org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter
type for hdfs://hadoop-master.infolinks.com:8000/hbase/URL_COUNTERS/af2760e
4d04a9e3025d1fb53bdba8acf/.tmp/dc4ce516887f4e0bbaf6201d69ba90bc:
CompoundBloomFilterWriter
2013-04-16 17:56:07,331 [regionserver8041.cacheFlusher] INFO
org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO
DeleteFamily was added to HFile (hdfs://hbase-master-address:8000/hbase
/URL_COUNTERS/*af2760e4d04a9e3025d1fb53bdba8acf*
/.tmp/dc4ce516887f4e0bbaf6201d69ba90bc)
2013-04-16 17:56:07,331 [regionserver8041.cacheFlusher] INFO
org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=210517246,
memsize=39.3m, into tmp file hdfs://hbase-master:8000/hbase
/URL_COUNTERS/*af2760e4d04a9e3025d1fb53bdba8acf*
/.tmp/dc4ce516887f4e0bbaf6201d69ba90bc
2013-04-16 17:56:07,357 [regionserver8041.cacheFlusher] INFO
org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter
type for hdfs://hbase-master:8000/hbase/URL_COUNTERS/*af2760e*
*4d04a9e3025d1fb53bdba8acf*/.tmp/3fa7993dcb294be1bca5e4d7357f4003:
CompoundBloomFilterWriter
2013-04-16 17:56:07,608 [regionserver8041.cacheFlusher] INFO
org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO
DeleteFamily was added to HFile (hdfs://hbase-master:8000/hbase
/URL_COUNTERS/*af2760e4d04a9e3025d1fb53bdba8acf*
/.tmp/3fa7993dcb294be1bca5e4d7357f4003)
2013-04-16 17:56:07,608 [regionserver8041.cacheFlusher] FATAL
org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server
region-server-address,8041,1364993168088: Replay of HLog required
. Forcing server shutdown
DroppedSnapshotException: region: TABLE,ROWKEY,1364317591568.*
af2760e4d04a9e3025d1fb53bdba8acf*.


...


On Wed, Apr 17, 2013 at 11:47 AM, ramkrishna vasudevan 
ramkrishna.s.vasude...@gmail.com wrote:

 Seems interesting.  Can  you tell us what are the families and the
 qualifiers available in your schema.

 Any other interesting logs that you can see before this?

 BTW the version of HBase is also needed?  If we can track it out we can
 then file a JIRA if it is a bug.

 Regards
 RAm


 On Wed, Apr 17, 2013 at 2:00 PM, Amit Sela am...@infolinks.com wrote:

  Hi all,
 
  I had a regionserver crushed during counters increment. Looking at the
  regionserver log I saw:
 
  org.apache.hadoop.hbase.DroppedSnapshotException: region: TABLE_NAME,
  ROW_KEY...at
 
 
 org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1472)
  at
 
 
 org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1351)
  at
 
 org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1292)
  at
 
 
 org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:406)
  at
 
 
 org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:380)
  at
 
 
 org.apache.hadoop.hbase.regionserver.MemStoreFlusher.run(MemStoreFlusher.java:243)
  at java.lang.Thread.run(Thread.java:722)
  Caused by: java.io.IOException: ScanWildcardColumnTracker.checkColumn ran
  into a column actually smaller than the previous column: *QUALIFIER*
  at
 
 
 org.apache.hadoop.hbase.regionserver.ScanWildcardColumnTracker.checkColumn(ScanWildcardColumnTracker.java:104)
  at
 
 
 org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:354)
  at
 
 
 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:362)
  at
 
 
 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:311)
  at
 
 
 org.apache.hadoop.hbase.regionserver.Store.internalFlushCache(Store.java:738)
  at
  org.apache.hadoop.hbase.regionserver.Store.flushCache(Store.java:673)
  at
  org.apache.hadoop.hbase.regionserver.Store.access$400(Store.java:108)
  at
 
 
 org.apache.hadoop.hbase.regionserver.Store$StoreFlusherImpl.flushCache(Store.java:2276)
  at
 
 
 org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1447)
 
  The strange thing is that the *QUALIFER* name as it appears in the log is
  misspelled there is no, and never was such qualifier name.
 
  Thanks,
 
  Amit.
 



RE: Loading text files from local file system

2013-04-17 Thread Omkar Joshi
Yeah DFS space is a constraint.

I'll check the options specified by you.

Regards,
Omkar Joshi

-Original Message-
From: Suraj Varma [mailto:svarma...@gmail.com] 
Sent: Wednesday, April 17, 2013 2:07 PM
To: user@hbase.apache.org
Subject: Re: Loading text files from local file system

Maybe I misunderstood your constraint ... are you saying that your DFS
itself is having constraint due to file size & replication? If so, how
about setting dfs.replication to 1 for the job?

There are other options like chopping up your file and processing it
piecemeal ... or perhaps customizing LoadIncrementalFiles to process
compressed input files and so forth ...

See if the dfs.replication + hfile.compression option works for you first.
--Suraj



On Wed, Apr 17, 2013 at 1:00 AM, Suraj Varma svarma...@gmail.com wrote:

 Have you considered using hfile.compression, perhaps with snappy
 compression?
 See this thread:
 http://grokbase.com/t/hbase/user/10cqrd06pc/hbase-bulk-load-script
 --Suraj



 On Tue, Apr 16, 2013 at 9:31 PM, Omkar Joshi 
 omkar.jo...@lntinfotech.comwrote:

 The background thread is here :


 http://mail-archives.apache.org/mod_mbox/hbase-user/201304.mbox/%3ce689a42b73c5a545ad77332a4fc75d8c1efbe84...@vshinmsmbx01.vshodc.lntinfotech.com%3E

 Following are the commands that I'm using to load files onto HBase :

 HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath`
 ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-0.94.6.1.jar importtsv
 '-Dimporttsv.separator=;'
 -Dimporttsv.columns=HBASE_ROW_KEY,PRODUCT_INFO:NAME,PRODUCT_INFO:CATEGORY,PRODUCT_INFO:GROUP,PRODUCT_INFO:COMPANY,PRODUCT_INFO:COST,PRODUCT_INFO:COLOR,PRODUCT_INFO:BLANK_COLUMN
 -Dimporttsv.bulk.output=hdfs://cldx-1139-1033:9000/hbase/storefileoutput_6
 PRODUCTS hdfs://cldx-1139-1033:9000/hbase/copiedFromLocal/product_6.txt

 HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath`
 ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-0.94.6.1.jar
 completebulkload hdfs://cldx-1139-1033:9000/hbase/storefileoutput_6 PRODUCTS

 As seen, the text files to be loaded in HBase first need to be loaded on
 HDFS. Given our infrastructure constraints/limitations, I'm getting space
 issues. The data in the text files is around 20GB + replication is
 consuming a lot of DFS.

 Is there a way wherein a text file can be loaded directly from the local
 file system onto HBase?

 Regards,
 Omkar Joshi






Re: regionserver stucking

2013-04-17 Thread hua beatls
HI,
from jstack pid, an abnormal state:
   "SIGTERM handler" daemon prio=10 tid=0x7f8f34005800 nid=0x5281
waiting for monitor entry [0x7f8f57303000]
java.lang.Thread.State: BLOCKED (on object monitor)
at java.lang.Shutdown.exit(Shutdown.java:212)
What is the reason?

Thanks!
 beatls


On Wed, Apr 17, 2013 at 4:54 PM, Mohammad Tariq donta...@gmail.com wrote:

 You could make use of jVisualVM as well. Comes in quite handy.

 Warm Regards,
 Tariq
 https://mtariq.jux.com/
 cloudfront.blogspot.com


 On Wed, Apr 17, 2013 at 2:05 PM, ramkrishna vasudevan 
 ramkrishna.s.vasude...@gmail.com wrote:

  Just do jstack with pid.
 
  Regards
  Ram
 
 
  On Wed, Apr 17, 2013 at 1:56 PM, hua beatls bea...@gmail.com wrote:
 
   any good tool for thread dump? can you recommand?
  
   Thanks!
  
   beatls
  
  
   On Wed, Apr 17, 2013 at 4:06 PM, ramkrishna vasudevan 
   ramkrishna.s.vasude...@gmail.com wrote:
  
Can you attach a thread dump for this ?  Which version of HBase are
 you
using.
   
Logs also if attached would be fine.
   
Regards
Ram
   
   
On Wed, Apr 17, 2013 at 1:07 PM, hua beatls bea...@gmail.com
 wrote:
   
 HI,
from web ui I find one of my 5 regionserver missing,   and check
  the
log
 find:


   
  
 
 [hadoop@hadoop1 logs]$ tail -f
 hbase-hadoop-regionserver-hadoop1.log
 2013-04-17 15:21:24,789 DEBUG
 org.apache.hadoop.hbase.regionserver.CompactSplitThread: Waiting
 for
Split
 Thread to finish...
 2013-04-17 15:22:24,789 DEBUG
 org.apache.hadoop.hbase.regionserver.CompactSplitThread: Waiting
 for
Split
 Thread to finish...


   
  
 
 but the regionserver process is still alive from jps output.

  any suggestion?

 Thanks!

 beatls

   
  
 



Re: regionserver stucking

2013-04-17 Thread ramkrishna vasudevan
The CompactSplitThread is not responding to the interrupt call that happens
through shutdownNow().
So either the thread has already been interrupted, or the call to
shutdownNow() is not taking effect. Not very sure of the problem.

Which version of HBase? Any logs available?

Regards
Ram
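
A simplified, hedged illustration (plain Java, not HBase code) of why the second
"SIGTERM handler" thread in the dump above ends up BLOCKED on java.lang.Shutdown:
the first signal has already entered Shutdown.exit() and is running the shutdown
hooks; one hook joins a worker thread that never finishes (the CompactSplitThread
in the regionserver's case), so the hooks never return and any later exit() call
just waits on the Shutdown class lock.

public class StuckShutdownHookDemo {
  public static void main(String[] args) throws Exception {
    // Stand-in for a worker that ignores interrupts and never stops,
    // playing the role of the stuck split thread.
    final Thread worker = new Thread(new Runnable() {
      public void run() {
        while (true) {
          try {
            Thread.sleep(1000);
          } catch (InterruptedException e) {
            // swallow the interrupt and keep running
          }
        }
      }
    });
    worker.setDaemon(true);
    worker.start();

    // Shutdown hook that joins the worker, mirroring Threads.shutdown().
    Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
      public void run() {
        try {
          worker.join();   // never returns, so the JVM never finishes exiting
        } catch (InterruptedException e) {
          // ignored
        }
      }
    }));

    System.exit(0);        // a later exit() would block exactly like the dump shows
  }
}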


On Wed, Apr 17, 2013 at 2:48 PM, hua beatls bea...@gmail.com wrote:

 HI,
 from stack pid,  innormal state:
SIGTERM handler daemon prio=10 tid=0x7f8f34005800 nid=0x5281
 waiting for monitor entry [0x7f8f57303000]
 java.lang.Thread.State: BLOCKED (on object monitor)
 at java.lang.Shutdown.exit(Shutdown.java:212)
 what the reason?

 Thanks!
  beatls


 On Wed, Apr 17, 2013 at 4:54 PM, Mohammad Tariq donta...@gmail.com
 wrote:

  You could make use of jVisualVM as well. Comes in quite handy.
 
  Warm Regards,
  Tariq
  https://mtariq.jux.com/
  cloudfront.blogspot.com
 
 
  On Wed, Apr 17, 2013 at 2:05 PM, ramkrishna vasudevan 
  ramkrishna.s.vasude...@gmail.com wrote:
 
   Just do jstack with pid.
  
   Regards
   Ram
  
  
   On Wed, Apr 17, 2013 at 1:56 PM, hua beatls bea...@gmail.com wrote:
  
any good tool for thread dump? can you recommand?
   
Thanks!
   
beatls
   
   
On Wed, Apr 17, 2013 at 4:06 PM, ramkrishna vasudevan 
ramkrishna.s.vasude...@gmail.com wrote:
   
 Can you attach a thread dump for this ?  Which version of HBase are
  you
 using.

 Logs also if attached would be fine.

 Regards
 Ram


 On Wed, Apr 17, 2013 at 1:07 PM, hua beatls bea...@gmail.com
  wrote:

  HI,
 from web ui I find one of my 5 regionserver missing,   and
 check
   the
 log
  find:
 
 

   
  
 
  [hadoop@hadoop1 logs]$ tail -f
  hbase-hadoop-regionserver-hadoop1.log
  2013-04-17 15:21:24,789 DEBUG
  org.apache.hadoop.hbase.regionserver.CompactSplitThread: Waiting
  for
 Split
  Thread to finish...
  2013-04-17 15:22:24,789 DEBUG
  org.apache.hadoop.hbase.regionserver.CompactSplitThread: Waiting
  for
 Split
  Thread to finish...
 
 

   
  
 
  but the regionserver process is still alive from jps output.
 
   any suggestion?
 
  Thanks!
 
  beatls
 

   
  
 



Speeding up the row count

2013-04-17 Thread Omkar Joshi
Hi,

I have two tables - CUSTOMERS(6 + rows) and PRODUCTS(1000851 rows).

The table structures are:

CUSTOMERS
rowkey :   CUSTOMER_ID

column family : CUSTOMER_INFO

columns :  NAME
EMAIL
ADDRESS
MOBILE


PRODUCTS
rowkey :   PRODUCT_ID

column family : PRODUCT_INFO

columns : NAME
CATEGORY
GROUP
COMPANY
COST
COLOR

I'm trying to get the row count for each table using the following snippet :
.
.
.
hbaseCRUD.getTableCount(args[1], "CUSTOMER_INFO", "NAME");
.
.
hbaseCRUD.getTableCount(args[1], "PRODUCT_INFO", "NAME");

public long getTableCount(String tableName, String columnFamilyName,
    String columnName) {
  AggregationClient aggregationClient = new AggregationClient(config);
  Scan scan = new Scan();
  scan.addFamily(Bytes.toBytes(columnFamilyName));
  if (columnName != null && !columnName.isEmpty()) {
    scan.addColumn(Bytes.toBytes(columnFamilyName),
        Bytes.toBytes(columnName));
  }

  long rowCount = 0;
  try {
    rowCount = aggregationClient.rowCount(Bytes.toBytes(tableName),
        null, scan);
  } catch (Throwable e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
  }
  System.out.println("row count is " + rowCount);

  return rowCount;
}

For CUSTOMERS, the response is acceptable, but for PRODUCTS it is
timing out (even on the shell: "1000851 row(s) in 258.9220 seconds").

What needs to be done to get a response quickly? An approach other than
AggregationClient, or tweaking the Scan in the above code snippet?

Regards,
Omkar Joshi




Re: RegionServer shutdown with ScanWildcardColumnTracker exception

2013-04-17 Thread ramkrishna vasudevan
Hi Amit

Checking the code, this is possible when the qualifiers are not sorted. Do
you have any CPs in your path which try to play with the KVs?

Seems to be a very weird thing.
Can you try doing a scan on the KV just before this happens? That will tell
you the existing KVs that are present.

Even now, if you still have the cluster, you can try scanning the region
for which the flush happened. That will give us some more info.

Regards
Ram
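
A minimal sketch of what such an inspection might look like from the Java client,
assuming the table seen in the log (URL_COUNTERS), the hourly family Amit
describes, and a placeholder "ROW_KEY" for the row named in the
DroppedSnapshotException; setMaxVersions() keeps every version so the qualifiers
can be examined in the order the server returns them.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class InspectRowQualifiers {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "URL_COUNTERS");
    try {
      Get get = new Get(Bytes.toBytes("ROW_KEY"));   // placeholder row key
      get.addFamily(Bytes.toBytes("hourly"));
      get.setMaxVersions();                          // keep all versions
      Result result = table.get(get);
      // Print qualifiers in the order the server returned them.
      for (KeyValue kv : result.raw()) {
        System.out.println(Bytes.toString(kv.getQualifier()) + " @ " + kv.getTimestamp());
      }
    } finally {
      table.close();
    }
  }
}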


On Wed, Apr 17, 2013 at 2:36 PM, Amit Sela am...@infolinks.com wrote:

 The cluster runs Hadoop 1.0.4 and HBase 0.94.2

 I have three families in this table: weekly, daily, hourly. each family has
 the following qualifiers:
 Weekly - impressions_{countrycode}_{week#} - country code is 0, 1 or ALL
 (aggregation of both 0 and 1)
 Daily and hourly are the same but with MMdd and MMddhh
 respectively.

 Just before the exception the regionserver StoreFile executes the
 following:

 2013-04-16 17:56:06,769 [regionserver8041.cacheFlusher] INFO
 org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter
 type for hdfs://
 hadoop-master.infolinks.com:8000/hbase/URL_COUNTERS/af2760e
 4d04a9e3025d1fb53bdba8acf/.tmp/dc4ce516887f4e0bbaf6201d69ba90bc:
 CompoundBloomFilterWriter
 2013-04-16 17:56:07,331 [regionserver8041.cacheFlusher] INFO
 org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO
 DeleteFamily was added to HFile (hdfs://hbase-master-address:8000/hbase
 /URL_COUNTERS/*af2760e4d04a9e3025d1fb53bdba8acf*
 /.tmp/dc4ce516887f4e0bbaf6201d69ba90bc)
 2013-04-16 17:56:07,331 [regionserver8041.cacheFlusher] INFO
 org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=210517246,
 memsize=39.3m, into tmp file hdfs://hbase-master:8000/hbase
 /URL_COUNTERS/*af2760e4d04a9e3025d1fb53bdba8acf*
 /.tmp/dc4ce516887f4e0bbaf6201d69ba90bc
 2013-04-16 17:56:07,357 [regionserver8041.cacheFlusher] INFO
 org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter
 type for hdfs://hbase-master:8000/hbase/URL_COUNTERS/*af2760e*
 *4d04a9e3025d1fb53bdba8acf*/.tmp/3fa7993dcb294be1bca5e4d7357f4003:
 CompoundBloomFilterWriter
 2013-04-16 17:56:07,608 [regionserver8041.cacheFlusher] INFO
 org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO
 DeleteFamily was added to HFile (hdfs://hbase-master:8000/hbase
 /URL_COUNTERS/*af2760e4d04a9e3025d1fb53bdba8acf*
 /.tmp/3fa7993dcb294be1bca5e4d7357f4003)
 2013-04-16 17:56:07,608 [regionserver8041.cacheFlusher] FATAL
 org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server
 region-server-address,8041,1364993168088: Replay of HLog required
 . Forcing server shutdown
 DroppedSnapshotException: region: TABLE,ROWKEY,1364317591568.*
 af2760e4d04a9e3025d1fb53bdba8acf*.
 ...


 On Wed, Apr 17, 2013 at 11:47 AM, ramkrishna vasudevan 
 ramkrishna.s.vasude...@gmail.com wrote:

  Seems interesting.  Can  you tell us what are the families and the
  qualifiers available in your schema.
 
  Any other interesting logs that you can see before this?
 
  BTW the version of HBase is also needed?  If we can track it out we can
  then file a JIRA if it is a bug.
 
  Regards
  RAm
 
 
  On Wed, Apr 17, 2013 at 2:00 PM, Amit Sela am...@infolinks.com wrote:
 
   Hi all,
  
   I had a regionserver crushed during counters increment. Looking at the
   regionserver log I saw:
  
   org.apache.hadoop.hbase.DroppedSnapshotException: region: TABLE_NAME,
   ROW_KEY...at
  
  
 
 org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1472)
   at
  
  
 
 org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1351)
   at
  
 
 org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1292)
   at
  
  
 
 org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:406)
   at
  
  
 
 org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:380)
   at
  
  
 
 org.apache.hadoop.hbase.regionserver.MemStoreFlusher.run(MemStoreFlusher.java:243)
   at java.lang.Thread.run(Thread.java:722)
   Caused by: java.io.IOException: ScanWildcardColumnTracker.checkColumn
 ran
   into a column actually smaller than the previous column: *QUALIFIER*
   at
  
  
 
 org.apache.hadoop.hbase.regionserver.ScanWildcardColumnTracker.checkColumn(ScanWildcardColumnTracker.java:104)
   at
  
  
 
 org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:354)
   at
  
  
 
 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:362)
   at
  
  
 
 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:311)
   at
  
  
 
 org.apache.hadoop.hbase.regionserver.Store.internalFlushCache(Store.java:738)
   at
   org.apache.hadoop.hbase.regionserver.Store.flushCache(Store.java:673)
   at
   

Re: Speeding up the row count

2013-04-17 Thread Jean-Marc Spaggiari
Hi,

You might want to take a look at that:

http://hbase.apache.org/book/ops_mgt.html#rowcounter

JM
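
For reference, a client-side variant that avoids the AggregationClient coprocessor
entirely: a plain scan with FirstKeyOnlyFilter so only the first KeyValue of each
row crosses the wire, plus a larger caching value so fewer RPC round trips are
needed. This is a hedged sketch rather than anything from the thread; the
RowCounter MapReduce job linked above will still scale better for very large
tables. Table and family names are taken from Omkar's message.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class ClientSideRowCount {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "PRODUCTS");
    Scan scan = new Scan();
    scan.addFamily(Bytes.toBytes("PRODUCT_INFO"));
    scan.setFilter(new FirstKeyOnlyFilter()); // only the first KV per row is returned
    scan.setCaching(1000);                    // rows fetched per RPC
    scan.setCacheBlocks(false);               // don't churn the block cache on a full scan
    long rows = 0;
    ResultScanner scanner = table.getScanner(scan);
    try {
      for (Result r : scanner) {
        rows++;
      }
    } finally {
      scanner.close();
      table.close();
    }
    System.out.println("row count is " + rows);
  }
}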


2013/4/17 Omkar Joshi omkar.jo...@lntinfotech.com

 Hi,

 I'm having two tables - CUSTOMERS(6 + rows) and PRODUCTS(1000851 rows).

 The table structures are  :

 CUSTOMERS
 rowkey :   CUSTOMER_ID

 column family : CUSTOMER_INFO

 columns :  NAME
 EMAIL
 ADDRESS
 MOBILE


 PRODUCTS
 rowkey :   PRODUCT_ID

 column family : PRODUCT_INFO

 columns : NAME
 CATEGORY
 GROUP
 COMPANY
 COST
 COLOR

 I'm trying to get the row count for each table using the following snippet
 :
 .
 .
 .
 hbaseCRUD.getTableCount(args[1], CUSTOMER_INFO,NAME);
 .
 .
 hbaseCRUD.getTableCount(args[1], PRODUCT_INFO,NAME);

 public long getTableCount(String tableName, String columnFamilyName,
   String columnName) {
 AggregationClient aggregationClient = new
 AggregationClient(config);
 Scan scan = new Scan();
 scan.addFamily(Bytes.toBytes(columnFamilyName));
  if (columnName != null && !columnName.isEmpty()) {
   scan.addColumn(Bytes.toBytes(columnFamilyName),
   Bytes.toBytes(columnName));
 }

 long rowCount = 0;
 try {
   rowCount =
 aggregationClient.rowCount(Bytes.toBytes(tableName),
   null, scan);
 } catch (Throwable e) {
   // TODO Auto-generated catch block
   e.printStackTrace();
 }
 System.out.println(row count is  + rowCount);

 return rowCount;
   }

 For CUSTOMERS, the response is acceptable but for PRODUCTS, it is
 timing-out(even on the shell 1000851 row(s) in 258.9220 seconds).

 What needs to be done to get a response quickly? Approach other than
 AggregationClient or tweaking the Scan in the above code snippet?

 Regards,
 Omkar Joshi




Re: RegionServer shutdown with ScanWildcardColumnTracker exception

2013-04-17 Thread Amit Sela
I scanned over this counter with and without column specification and all
looks OK now.
I have no CPs in this table.
Is there some kind of a hint mechanism in HBase's internal scan? Because
it's weird that ScanWildcardColumnTracker.checkColumn says that the column is
smaller than the previous column: *imprersions_ALL_2013041617*. There is no
"imprersions", only "impressions", and "r" is indeed smaller than "s"; could it
be some kind of hint bug? I don't think I know enough of HBase internals to
fully understand that...



On Wed, Apr 17, 2013 at 1:42 PM, ramkrishna vasudevan 
ramkrishna.s.vasude...@gmail.com wrote:

 Hi Amit

 Checking the code this is possible when the qualifiers are not sorted.  Do
 you have any CPs in your path which tries to play with the KVs?

 Seems to be a very weird thing.
 Can you try doing a scan on the KV just before this happens.  That will tel
 you the existing kvs that are present.

 Even now if you can have the cluster you can try scanning for the region
 for which the flush happened.  That will give us some more info.

 Regards
 Ram


 On Wed, Apr 17, 2013 at 2:36 PM, Amit Sela am...@infolinks.com wrote:

  The cluster runs Hadoop 1.0.4 and HBase 0.94.2
 
  I have three families in this table: weekly, daily, hourly. each family
 has
  the following qualifiers:
  Weekly - impressions_{countrycode}_{week#} - country code is 0, 1 or ALL
  (aggregation of both 0 and 1)
  Daily and hourly are the same but with MMdd and MMddhh
  respectively.
 
  Just before the exception the regionserver StoreFile executes the
  following:
 
  2013-04-16 17:56:06,769 [regionserver8041.cacheFlusher] INFO
  org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom
 filter
  type for hdfs://
  hadoop-master.infolinks.com:8000/hbase/URL_COUNTERS/af2760e
  4d04a9e3025d1fb53bdba8acf/.tmp/dc4ce516887f4e0bbaf6201d69ba90bc:
  CompoundBloomFilterWriter
  2013-04-16 17:56:07,331 [regionserver8041.cacheFlusher] INFO
  org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO
  DeleteFamily was added to HFile (hdfs://hbase-master-address:8000/hbase
  /URL_COUNTERS/*af2760e4d04a9e3025d1fb53bdba8acf*
  /.tmp/dc4ce516887f4e0bbaf6201d69ba90bc)
  2013-04-16 17:56:07,331 [regionserver8041.cacheFlusher] INFO
  org.apache.hadoop.hbase.regionserver.Store: Flushed ,
 sequenceid=210517246,
  memsize=39.3m, into tmp file hdfs://hbase-master:8000/hbase
  /URL_COUNTERS/*af2760e4d04a9e3025d1fb53bdba8acf*
  /.tmp/dc4ce516887f4e0bbaf6201d69ba90bc
  2013-04-16 17:56:07,357 [regionserver8041.cacheFlusher] INFO
  org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom
 filter
  type for hdfs://hbase-master:8000/hbase/URL_COUNTERS/*af2760e*
  *4d04a9e3025d1fb53bdba8acf*/.tmp/3fa7993dcb294be1bca5e4d7357f4003:
  CompoundBloomFilterWriter
  2013-04-16 17:56:07,608 [regionserver8041.cacheFlusher] INFO
  org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO
  DeleteFamily was added to HFile (hdfs://hbase-master:8000/hbase
  /URL_COUNTERS/*af2760e4d04a9e3025d1fb53bdba8acf*
  /.tmp/3fa7993dcb294be1bca5e4d7357f4003)
  2013-04-16 17:56:07,608 [regionserver8041.cacheFlusher] FATAL
  org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
 server
  region-server-address,8041,1364993168088: Replay of HLog required
  . Forcing server shutdown
  DroppedSnapshotException: region: TABLE,ROWKEY,1364317591568.*
  af2760e4d04a9e3025d1fb53bdba8acf*.
  ...
 
 
  On Wed, Apr 17, 2013 at 11:47 AM, ramkrishna vasudevan 
  ramkrishna.s.vasude...@gmail.com wrote:
 
   Seems interesting.  Can  you tell us what are the families and the
   qualifiers available in your schema.
  
   Any other interesting logs that you can see before this?
  
   BTW the version of HBase is also needed?  If we can track it out we can
   then file a JIRA if it is a bug.
  
   Regards
   RAm
  
  
   On Wed, Apr 17, 2013 at 2:00 PM, Amit Sela am...@infolinks.com
 wrote:
  
Hi all,
   
I had a regionserver crushed during counters increment. Looking at
 the
regionserver log I saw:
   
org.apache.hadoop.hbase.DroppedSnapshotException: region: TABLE_NAME,
ROW_KEY...at
   
   
  
 
 org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1472)
at
   
   
  
 
 org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1351)
at
   
  
 
 org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1292)
at
   
   
  
 
 org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:406)
at
   
   
  
 
 org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:380)
at
   
   
  
 
 org.apache.hadoop.hbase.regionserver.MemStoreFlusher.run(MemStoreFlusher.java:243)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.io.IOException: ScanWildcardColumnTracker.checkColumn
  ran
into a column actually smaller than the 

Problem in filters

2013-04-17 Thread Omkar Joshi
Hi,

I have a table named ORDERS with 1000851 rows:

rowkey :   ORDER_ID

column family : ORDER_DETAILS
columns : CUSTOMER_ID
PRODUCT_ID
REQUEST_DATE
PRODUCT_QUANTITY
PRICE
PAYMENT_MODE

I'm using the following code to access the data :

public void executeOrdersQuery() {
  /*
   * SELECT ORDER_ID,CUSTOMER_ID,PRODUCT_ID,QUANTITY FROM ORDERS WHERE
   * QUANTITY >= 16 and PRODUCT_ID='P60337998'
   */
  String tableName = "ORDERS";

  String family = "ORDER_DETAILS";
  int quantity = 16;
  String productId = "P60337998";

  SingleColumnValueFilter quantityFilter = new SingleColumnValueFilter(
      Bytes.toBytes(family), Bytes.toBytes("PRODUCT_QUANTITY"),
      CompareFilter.CompareOp.GREATER_OR_EQUAL,
      Bytes.toBytes(quantity));

  SingleColumnValueFilter productIdFilter = new SingleColumnValueFilter(
      Bytes.toBytes(family), Bytes.toBytes("PRODUCT_ID"),
      CompareFilter.CompareOp.EQUAL, Bytes.toBytes(productId));

  FilterList filterList = new FilterList(
      FilterList.Operator.MUST_PASS_ALL);
  // filterList.addFilter(quantityFilter);
  filterList.addFilter(productIdFilter);

  Scan scan = new Scan();
  scan.addColumn(Bytes.toBytes(family), Bytes.toBytes("ORDER_ID"));
  scan.addColumn(Bytes.toBytes(family), Bytes.toBytes("CUSTOMER_ID"));
  scan.addColumn(Bytes.toBytes(family), Bytes.toBytes("PRODUCT_ID"));
  scan.addColumn(Bytes.toBytes(family), Bytes.toBytes("QUANTITY"));

  scan.setFilter(filterList);

  HTableInterface tbl = hTablePool.getTable(Bytes.toBytes(tableName));
  ResultScanner scanResults = null;
  try {
    scanResults = tbl.getScanner(scan);

    System.out.println("scanResults : ");

    for (Result result : scanResults) {
      System.out.println("The result is " + result);
    }

  } catch (IOException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
  } finally {
    try {
      tbl.close();
    } catch (IOException e) {
      // TODO Auto-generated catch block
      e.printStackTrace();
    }
  }
}

The first few records of the table are:

O12004457;C110;P60337998;2000-5-17;16;19184.0;cash;"Customer is the new emperor.
Either you give him what he desires or you are banished from his kingdom.";"Before
you place your order, we reserve the right to change these terms and conditions at
any time. Any such changes will take effect when posted on this website and it is
your responsibility to read these terms and conditions on each occasion you use
this website. We will never supply you with substitute goods. Our VAT registration
number is 875 5055 01.";

O12004458;C425;P50478434;2008-4-30;3;831825.0;debit;"In times of change, the
learners will inherit the earth, while the knowers will find themselves
beautifully equipped to deal with a world that no longer exists";"Before you place
your order, we reserve the right to change these terms and conditions at any time.
Any such changes will take effect when posted on this website and it is your
responsibility to read these terms and conditions on each occasion you use this
website. We will never supply you with substitute goods. Our VAT registration
number is 875 5055 01.";



If I don't use any filter, the row that I'm trying to fetch is returned along
with thousands of others, but as soon as I use even a single filter (the other is
commented out), no results are returned.

Is there some problem with my code?

Regards,
Omkar Joshi





Re: HBase random read performance

2013-04-17 Thread Michel Segel
Wouldn't do that... Changing block size is the last thing you want to do.

First question...

What is your key?

Second...
What is your record size that you are attempting to read.


Third...
Compare the 10k multiget versus 10k individual gets.

Fourth... are your random keys sorted?
If not, try sorting them...

There are a lot of issues that can affect performance...



Sent from a remote device. Please excuse any typos...

Mike Segel
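
A minimal sketch of the "sort the keys first" idea, assuming the keys arrive as
raw byte[] and that one batched multi-get is issued for the whole set (matching
the multiget-versus-individual-gets comparison above); the table name is a
placeholder. Sorting with HBase's own byte comparator groups keys by region, so
region lookups and block reads cluster together.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class SortedMultiGet {
  // Hedged sketch: sort the random keys before the multi-get, per the advice above.
  public static Result[] fetch(List<byte[]> keys) throws Exception {
    Collections.sort(keys, Bytes.BYTES_COMPARATOR);   // HBase row-key ordering
    List<Get> gets = new ArrayList<Get>(keys.size());
    for (byte[] k : keys) {
      gets.add(new Get(k));
    }
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "MY_TABLE");      // placeholder table name
    try {
      return table.get(gets);                         // one batched multi-get
    } finally {
      table.close();
    }
  }
}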

On Apr 15, 2013, at 3:17 AM, Anoop Sam John anoo...@huawei.com wrote:

 Ankit
 I guess you might be having default HFile block size which is 
 64KB.
 For random gets a lower value will be better. Try with something like 8KB
 and check the latency.
 
 Ya ofcourse blooms can help (if major compaction was not done at the time of 
 testing)
 
 -Anoop-
 
 From: Ankit Jain [ankitjainc...@gmail.com]
 Sent: Saturday, April 13, 2013 11:01 AM
 To: user@hbase.apache.org
 Subject: HBase random read performance
 
 Hi All,
 
 We are using HBase 0.94.5 and Hadoop 1.0.4.
 
 We have HBase cluster of 5 nodes(5 regionservers and 1 master node). Each
 regionserver has 8 GB RAM.
 
 We have loaded 25 millions records in HBase table, regions are pre-split
 into 16 regions and all the regions are equally loaded.
 
 We are getting very low random read performance while performing multi get
 from HBase.
 
 We are passing random 1 row-keys as input, while HBase is taking around
 17 secs to return 1 records.
 
 Please suggest some tuning to increase HBase read performance.
 
 Thanks,
 Ankit Jain
 iLabs
 
 
 
 --
 Thanks,
 Ankit Jain


Re: Problem in filters

2013-04-17 Thread Ted Yu
If you specify productIdFilter without using FilterList, what would you get?

Thanks

On Apr 17, 2013, at 4:51 AM, Omkar Joshi omkar.jo...@lntinfotech.com wrote:

 Hi,
 
 I'm having the a table named ORDERS with 1000851 rows:
 
 rowkey :   ORDER_ID
 
 column family : ORDER_DETAILS
columns : CUSTOMER_ID
PRODUCT_ID
REQUEST_DATE
PRODUCT_QUANTITY
PRICE
PAYMENT_MODE
 
 I'm using the following code to access the data :
 
 public void executeOrdersQuery() {
/*
* SELECT ORDER_ID,CUSTOMER_ID,PRODUCT_ID,QUANTITY FROM ORDERS WHERE
* QUANTITY =16 and PRODUCT_ID='P60337998'
*/
String tableName = ORDERS;
 
String family = ORDER_DETAILS;
int quantity = 16;
String productId = P60337998;
 
SingleColumnValueFilter quantityFilter = new 
 SingleColumnValueFilter(
Bytes.toBytes(family), 
 Bytes.toBytes(PRODUCT_QUANTITY),
CompareFilter.CompareOp.GREATER_OR_EQUAL,
Bytes.toBytes(quantity));
 
SingleColumnValueFilter productIdFilter = new 
 SingleColumnValueFilter(
Bytes.toBytes(family), Bytes.toBytes(PRODUCT_ID),
CompareFilter.CompareOp.EQUAL, 
 Bytes.toBytes(productId));
 
FilterList filterList = new FilterList(
FilterList.Operator.MUST_PASS_ALL);
// filterList.addFilter(quantityFilter);
filterList.addFilter(productIdFilter);
 
Scan scan = new Scan();
scan.addColumn(Bytes.toBytes(family), Bytes.toBytes(ORDER_ID));
scan.addColumn(Bytes.toBytes(family), 
 Bytes.toBytes(CUSTOMER_ID));
scan.addColumn(Bytes.toBytes(family), Bytes.toBytes(PRODUCT_ID));
scan.addColumn(Bytes.toBytes(family), Bytes.toBytes(QUANTITY));
 
scan.setFilter(filterList);
 
HTableInterface tbl = 
 hTablePool.getTable(Bytes.toBytes(tableName));
ResultScanner scanResults = null;
try {
  scanResults = tbl.getScanner(scan);
 
  System.out.println(scanResults : );
 
  for (Result result : scanResults) {
System.out.println(The result is  + result);
  }
 
} catch (IOException e) {
  // TODO Auto-generated catch block
  e.printStackTrace();
} finally {
  try {
tbl.close();
  } catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
  }
}
 
  }
 
 First few records of the table are :
 
 O12004457;C110;P60337998;2000-5-17;16;19184.0;cash;Customer is the new 
 emperor. Either you give him what he desires or you are
 banished from his kingdom.;Before you place your order, we reserve the right 
 to change these terms and conditions at any time
 .Any such changes will take effect when posted on this website and it is your 
 responsibility to read these terms and condition
 s on each occasion you use this website. We will never supply you with 
 substitute goods.Our VAT registration number is 875 505
 5 01.;
 
 O12004458;C425;P50478434;2008-4-30;3;831825.0;debit;In times of change, the 
 learners will inherit the earth, while the knowers
 will find themselves beautifully equipped to deal with a world that no longer 
 exists;Before you place your order, we reserve
 the right to change these terms and conditions at any time.Any such changes 
 will take effect when posted on this website and i
 t is your responsibility to read these terms and conditions on each occasion 
 you use this website. We will never supply you wi
 th substitute goods.Our VAT registration number is 875 5055 01.;
 
 
 
 If I don't use any filter, the row that I'm trying to fetch is returned along 
 with the 1000s of others but as soon as I use even a single filter(the other 
 is commented), no results are returned.
 
 Is there some problem with my code?
 
 Regards,
 Omkar Joshi
 
 
 


Re: Problem in filters

2013-04-17 Thread Jean-Marc Spaggiari
Hi Omkar,

Using the shell, can you scan the first few lines from your table to make
sure it's stored with the expected format? Don't forget to limit the number
of rows retrieved.

JM
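
One more thing worth checking, offered as a hedged sketch rather than a confirmed
diagnosis: if the ORDERS table was loaded from text the way the PRODUCTS table was
(importtsv), every cell holds string bytes, so a filter value built with
Bytes.toBytes(16) (an int) can never match a stored "16", and GREATER_OR_EQUAL on
string bytes compares lexicographically, not numerically. Also note that the scan
adds a QUANTITY column while the schema and the filter use PRODUCT_QUANTITY. A
variant of the filter setup, with column names from the original message (assumes
the same imports as Omkar's snippet: org.apache.hadoop.hbase.client.*,
org.apache.hadoop.hbase.filter.*, org.apache.hadoop.hbase.util.Bytes):

// Hedged sketch: build the filters against string-encoded values and require
// the filtered column to be present; setFilterIfMissing(true) drops rows that
// don't have the column at all.
SingleColumnValueFilter productIdFilter = new SingleColumnValueFilter(
    Bytes.toBytes("ORDER_DETAILS"), Bytes.toBytes("PRODUCT_ID"),
    CompareFilter.CompareOp.EQUAL, Bytes.toBytes("P60337998"));
productIdFilter.setFilterIfMissing(true);

SingleColumnValueFilter quantityFilter = new SingleColumnValueFilter(
    Bytes.toBytes("ORDER_DETAILS"), Bytes.toBytes("PRODUCT_QUANTITY"),
    CompareFilter.CompareOp.GREATER_OR_EQUAL, Bytes.toBytes("16"));
quantityFilter.setFilterIfMissing(true);
// Caveat: GREATER_OR_EQUAL on string bytes is lexicographic, not numeric.

FilterList filters = new FilterList(FilterList.Operator.MUST_PASS_ALL);
filters.addFilter(productIdFilter);
filters.addFilter(quantityFilter);

Scan scan = new Scan();
scan.addFamily(Bytes.toBytes("ORDER_DETAILS")); // keep the filtered columns in the scan
scan.setFilter(filters);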


2013/4/17 Omkar Joshi omkar.jo...@lntinfotech.com

 Hi Ted,

 I tried using only productIdFilter without FilterList but still no output.

 public void executeOrdersQuery() {
 /*
  * SELECT ORDER_ID,CUSTOMER_ID,PRODUCT_ID,QUANTITY FROM
 ORDERS WHERE
  * QUANTITY =16 and PRODUCT_ID='P60337998'
  */
 String tableName = ORDERS;

 String family = ORDER_DETAILS;
 int quantity = 16;
 String productId = P60337998;

 SingleColumnValueFilter quantityFilter = new
 SingleColumnValueFilter(
 Bytes.toBytes(family),
 Bytes.toBytes(PRODUCT_QUANTITY),
 CompareFilter.CompareOp.GREATER_OR_EQUAL,
 Bytes.toBytes(quantity));

 SingleColumnValueFilter productIdFilter = new
 SingleColumnValueFilter(
 Bytes.toBytes(family),
 Bytes.toBytes(PRODUCT_ID),
 CompareFilter.CompareOp.EQUAL,
 Bytes.toBytes(productId));

 FilterList filterList = new FilterList(
 FilterList.Operator.MUST_PASS_ALL);
 // filterList.addFilter(quantityFilter);
 filterList.addFilter(productIdFilter);

 Scan scan = new Scan();
 scan.addColumn(Bytes.toBytes(family),
 Bytes.toBytes(ORDER_ID));
 scan.addColumn(Bytes.toBytes(family),
 Bytes.toBytes(CUSTOMER_ID));
 scan.addColumn(Bytes.toBytes(family),
 Bytes.toBytes(PRODUCT_ID));
 scan.addColumn(Bytes.toBytes(family),
 Bytes.toBytes(QUANTITY));

 // scan.setFilter(filterList);
 scan.setFilter(productIdFilter);

 HTableInterface tbl =
 hTablePool.getTable(Bytes.toBytes(tableName));
 ResultScanner scanResults = null;
 try {
 scanResults = tbl.getScanner(scan);

 System.out.println(scanResults : );

 for (Result result : scanResults) {
 System.out.println(The result is  +
 result);
 }

 } catch (IOException e) {
 // TODO Auto-generated catch block
 e.printStackTrace();
 } finally {
 try {
 tbl.close();
 } catch (IOException e) {
 // TODO Auto-generated catch block
 e.printStackTrace();
 }
 }

 }

 Regards,
 Omkar Joshi


 -Original Message-
 From: Ted Yu [mailto:yuzhih...@gmail.com]
 Sent: Wednesday, April 17, 2013 6:46 PM
 To: user@hbase.apache.org
 Cc: user@hbase.apache.org
 Subject: Re: Problem in filters

 If you specify producIdFilter without using FilterList, what would you get
 ?

 Thanks

 On Apr 17, 2013, at 4:51 AM, Omkar Joshi omkar.jo...@lntinfotech.com
 wrote:

  Hi,
 
  I'm having the a table named ORDERS with 1000851 rows:
 
  rowkey :   ORDER_ID
 
  column family : ORDER_DETAILS
 columns : CUSTOMER_ID
 PRODUCT_ID
 REQUEST_DATE
 PRODUCT_QUANTITY
 PRICE
 PAYMENT_MODE
 
  I'm using the following code to access the data :
 
  public void executeOrdersQuery() {
 /*
 * SELECT ORDER_ID,CUSTOMER_ID,PRODUCT_ID,QUANTITY FROM ORDERS
 WHERE
 * QUANTITY =16 and PRODUCT_ID='P60337998'
 */
 String tableName = ORDERS;
 
 String family = ORDER_DETAILS;
 int quantity = 16;
 String productId = P60337998;
 
 SingleColumnValueFilter quantityFilter = new
 SingleColumnValueFilter(
 Bytes.toBytes(family),
 Bytes.toBytes(PRODUCT_QUANTITY),
 CompareFilter.CompareOp.GREATER_OR_EQUAL,
 Bytes.toBytes(quantity));
 
 SingleColumnValueFilter productIdFilter = new
 SingleColumnValueFilter(
 Bytes.toBytes(family),
 Bytes.toBytes(PRODUCT_ID),
 CompareFilter.CompareOp.EQUAL,
 Bytes.toBytes(productId));
 
 FilterList filterList = new FilterList(
 FilterList.Operator.MUST_PASS_ALL);
 // filterList.addFilter(quantityFilter);
 filterList.addFilter(productIdFilter);
 
 Scan scan = new Scan();
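A minimal sketch of two things worth double-checking with SingleColumnValueFilter when a scan unexpectedly returns nothing (the qualifier names and value encodings below are assumptions, not a statement about how this ORDERS table was actually written): the comparison bytes must be encoded the same way the cells were stored (string bytes versus Bytes.toBytes(int)), and setFilterIfMissing(true) keeps rows that lack the tested column from slipping through:

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class ProductIdFilterSketch {
    public static Scan buildScan() {
        byte[] family = Bytes.toBytes("ORDER_DETAILS");

        // If the cells were loaded as text, the stored value is string bytes,
        // so compare against the string form rather than Bytes.toBytes(int).
        SingleColumnValueFilter productIdFilter = new SingleColumnValueFilter(
                family, Bytes.toBytes("PRODUCT_ID"),
                CompareFilter.CompareOp.EQUAL, Bytes.toBytes("P60337998"));
        // Without this, rows that carry no PRODUCT_ID cell at all still pass.
        productIdFilter.setFilterIfMissing(true);

        Scan scan = new Scan();
        // Select the whole family while debugging so the filtered column is
        // always fetched; narrow with addColumn once the filter behaves.
        scan.addFamily(family);
        scan.setFilter(productIdFilter);
        return scan;
    }
}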
   

Re: RegionServer shutdown with ScanWildcardColumnTracker exception

2013-04-17 Thread ramkrishna vasudevan
@Lars
You have any suggestions on this?

@Amit
You have any Encoder enabled like the Prefix Encoding stuff?
There was one optimization added recently but that is not in 0.94.2

Regards
Ram


On Wed, Apr 17, 2013 at 5:17 PM, Amit Sela am...@infolinks.com wrote:

 I scanned over this counter with and without column specification and all
 looks OK now.
 I have no CPs in this table.
 Is there some kind of a hint mechanism in HBase' internal scan ? because
 it's weird that ScanWildcardColumnTracker.checkColumn says that column is
 smaller than previous column: *imprersions_ALL_2013041617*. there is no
 imprersions only impressions and r is indeed smaller than s, could it be
 some kind of hint bug ? I don't think I know enough of HBase internals to
 fully understand that...



 On Wed, Apr 17, 2013 at 1:42 PM, ramkrishna vasudevan 
 ramkrishna.s.vasude...@gmail.com wrote:

  Hi Amit
 
  Checking the code this is possible when the qualifiers are not sorted.
  Do
  you have any CPs in your path which tries to play with the KVs?
 
  Seems to be a very weird thing.
  Can you try doing a scan on the KV just before this happens.  That will
  tell
  you the existing kvs that are present.
 
  Even now if you can have the cluster you can try scanning for the region
  for which the flush happened.  That will give us some more info.
 
  Regards
  Ram
 
 
  On Wed, Apr 17, 2013 at 2:36 PM, Amit Sela am...@infolinks.com wrote:
 
   The cluster runs Hadoop 1.0.4 and HBase 0.94.2
  
   I have three families in this table: weekly, daily, hourly. each family
  has
   the following qualifiers:
   Weekly - impressions_{countrycode}_{week#} - country code is 0, 1 or
 ALL
   (aggregation of both 0 and 1)
   Daily and hourly are the same but with MMdd and MMddhh
   respectively.
  
   Just before the exception the regionserver StoreFile executes the
   following:
  
   2013-04-16 17:56:06,769 [regionserver8041.cacheFlusher] INFO
   org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom
  filter
   type for hdfs://
   hadoop-master.infolinks.com:8000/hbase/URL_COUNTERS/af2760e
   4d04a9e3025d1fb53bdba8acf/.tmp/dc4ce516887f4e0bbaf6201d69ba90bc:
   CompoundBloomFilterWriter
   2013-04-16 17:56:07,331 [regionserver8041.cacheFlusher] INFO
   org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO
   DeleteFamily was added to HFile (hdfs://hbase-master-address:8000/hbase
   /URL_COUNTERS/*af2760e4d04a9e3025d1fb53bdba8acf*
   /.tmp/dc4ce516887f4e0bbaf6201d69ba90bc)
   2013-04-16 17:56:07,331 [regionserver8041.cacheFlusher] INFO
   org.apache.hadoop.hbase.regionserver.Store: Flushed ,
  sequenceid=210517246,
   memsize=39.3m, into tmp file hdfs://hbase-master:8000/hbase
   /URL_COUNTERS/*af2760e4d04a9e3025d1fb53bdba8acf*
   /.tmp/dc4ce516887f4e0bbaf6201d69ba90bc
   2013-04-16 17:56:07,357 [regionserver8041.cacheFlusher] INFO
   org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom
  filter
   type for hdfs://hbase-master:8000/hbase/URL_COUNTERS/*af2760e*
   *4d04a9e3025d1fb53bdba8acf*/.tmp/3fa7993dcb294be1bca5e4d7357f4003:
   CompoundBloomFilterWriter
   2013-04-16 17:56:07,608 [regionserver8041.cacheFlusher] INFO
   org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO
   DeleteFamily was added to HFile (hdfs://hbase-master:8000/hbase
   /URL_COUNTERS/*af2760e4d04a9e3025d1fb53bdba8acf*
   /.tmp/3fa7993dcb294be1bca5e4d7357f4003)
   2013-04-16 17:56:07,608 [regionserver8041.cacheFlusher] FATAL
   org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
  server
   region-server-address,8041,1364993168088: Replay of HLog required
   . Forcing server shutdown
   DroppedSnapshotException: region: TABLE,ROWKEY,1364317591568.*
   af2760e4d04a9e3025d1fb53bdba8acf*.
   
   
   ...
  
  
   On Wed, Apr 17, 2013 at 11:47 AM, ramkrishna vasudevan 
   ramkrishna.s.vasude...@gmail.com wrote:
  
Seems interesting.  Can  you tell us what are the families and the
qualifiers available in your schema.
   
Any other interesting logs that you can see before this?
   
BTW the version of HBase is also needed?  If we can track it out we
 can
then file a JIRA if it is a bug.
   
Regards
RAm
   
   
On Wed, Apr 17, 2013 at 2:00 PM, Amit Sela am...@infolinks.com
  wrote:
   
 Hi all,

 I had a regionserver crushed during counters increment. Looking at
  the
 regionserver log I saw:

 org.apache.hadoop.hbase.DroppedSnapshotException: region:
 TABLE_NAME,
 ROW_KEY...at


   
  
 
 org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1472)
 at


   
  
 
 org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1351)
 at

   
  
 
 org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1292)
 at


   
  
 
 org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:406)
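One way to act on the suggestion above of scanning the KVs around the problem qualifier is a raw scan over the affected row, which returns every KeyValue still held for it, including delete markers, so an out-of-order qualifier becomes visible. This is only a sketch: the row key below is a placeholder, and it assumes the setRaw support available in HBase 0.94.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class RawRowDump {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "URL_COUNTERS");
        byte[] row = Bytes.toBytes("ROWKEY");            // placeholder row key
        Scan scan = new Scan(row, Bytes.add(row, new byte[] { 0 }));
        scan.setRaw(true);          // also return delete markers
        scan.setMaxVersions();      // do not collapse versions
        ResultScanner rs = table.getScanner(scan);
        try {
            for (Result r : rs) {
                for (KeyValue kv : r.raw()) {
                    System.out.println(kv);  // prints row/family/qualifier/ts/type
                }
            }
        } finally {
            rs.close();
            table.close();
        }
    }
}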
  

Re: RegionServer shutdown with ScanWildcardColumnTracker exception

2013-04-17 Thread Amit Sela
No, no encoding.


On Wed, Apr 17, 2013 at 6:56 PM, ramkrishna vasudevan 
ramkrishna.s.vasude...@gmail.com wrote:

 @Lars
 You have any suggestions on this?

 @Amit
 You have any Encoder enabled like the Prefix Encoding stuff?
 There was one optimization added recently but that is not in 0.94.2

 Regards
 Ram


 On Wed, Apr 17, 2013 at 5:17 PM, Amit Sela am...@infolinks.com wrote:

  I scanned over this counter with and without column specification and all
  looks OK now.
  I have no CPs in this table.
  Is there some kind of a hint mechanism in HBase' internal scan ? because
  it's weird that ScanWildcardColumnTracker.checkColumn says that column is
  smaller than previous column: *imprersions_ALL_2013041617*. there is no
  imprersions only impressions and r is indeed smaller than s, could it be
  some kind of hint bug ? I don't think I know enough of HBase internals to
  fully understand that...
 
 
 
  On Wed, Apr 17, 2013 at 1:42 PM, ramkrishna vasudevan 
  ramkrishna.s.vasude...@gmail.com wrote:
 
   Hi Amit
  
   Checking the code this is possible when the qualifiers are not sorted.
   Do
   you have any CPs in your path which tries to play with the KVs?
  
   Seems to be a very weird thing.
   Can you try doing a scan on the KV just before this happens.  That will
  tel
   you the existing kvs that are present.
  
   Even now if you can have the cluster you can try scanning for the
 region
   for which the flush happened.  That will give us some more info.
  
   Regards
   Ram
  
  
   On Wed, Apr 17, 2013 at 2:36 PM, Amit Sela am...@infolinks.com
 wrote:
  
The cluster runs Hadoop 1.0.4 and HBase 0.94.2
   
I have three families in this table: weekly, daily, hourly. each
 family
   has
the following qualifiers:
Weekly - impressions_{countrycode}_{week#} - country code is 0, 1 or
  ALL
(aggregation of both 0 and 1)
Daily and hourly are the same but with MMdd and MMddhh
respectively.
   
Just before the exception the regionserver StoreFile executes the
following:
   
2013-04-16 17:56:06,769 [regionserver8041.cacheFlusher] INFO
org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom
   filter
type for hdfs://
hadoop-master.infolinks.com:8000/hbase/URL_COUNTERS/af2760e
4d04a9e3025d1fb53bdba8acf/.tmp/dc4ce516887f4e0bbaf6201d69ba90bc:
CompoundBloomFilterWriter
2013-04-16 17:56:07,331 [regionserver8041.cacheFlusher] INFO
org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and
 NO
DeleteFamily was added to HFile
 (hdfs://hbase-master-address:8000/hbase
/URL_COUNTERS/*af2760e4d04a9e3025d1fb53bdba8acf*
/.tmp/dc4ce516887f4e0bbaf6201d69ba90bc)
2013-04-16 17:56:07,331 [regionserver8041.cacheFlusher] INFO
org.apache.hadoop.hbase.regionserver.Store: Flushed ,
   sequenceid=210517246,
memsize=39.3m, into tmp file hdfs://hbase-master:8000/hbase
/URL_COUNTERS/*af2760e4d04a9e3025d1fb53bdba8acf*
/.tmp/dc4ce516887f4e0bbaf6201d69ba90bc
2013-04-16 17:56:07,357 [regionserver8041.cacheFlusher] INFO
org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom
   filter
type for hdfs://hbase-master:8000/hbase/URL_COUNTERS/*af2760e*
*4d04a9e3025d1fb53bdba8acf*/.tmp/3fa7993dcb294be1bca5e4d7357f4003:
CompoundBloomFilterWriter
2013-04-16 17:56:07,608 [regionserver8041.cacheFlusher] INFO
org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and
 NO
DeleteFamily was added to HFile (hdfs://hbase-master:8000/hbase
/URL_COUNTERS/*af2760e4d04a9e3025d1fb53bdba8acf*
/.tmp/3fa7993dcb294be1bca5e4d7357f4003)
2013-04-16 17:56:07,608 [regionserver8041.cacheFlusher] FATAL
org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
   server
region-server-address,8041,1364993168088: Replay of HLog required
. Forcing server shutdown
DroppedSnapshotException: region: TABLE,ROWKEY,1364317591568.*
af2760e4d04a9e3025d1fb53bdba8acf*.


...
   
   
On Wed, Apr 17, 2013 at 11:47 AM, ramkrishna vasudevan 
ramkrishna.s.vasude...@gmail.com wrote:
   
 Seems interesting.  Can  you tell us what are the families and the
 qualifiers available in your schema.

 Any other interesting logs that you can see before this?

 BTW the version of HBase is also needed?  If we can track it out we
  can
 then file a JIRA if it is a bug.

 Regards
 RAm


 On Wed, Apr 17, 2013 at 2:00 PM, Amit Sela am...@infolinks.com
   wrote:

  Hi all,
 
  I had a regionserver crushed during counters increment. Looking
 at
   the
  regionserver log I saw:
 
  org.apache.hadoop.hbase.DroppedSnapshotException: region:
  TABLE_NAME,
  ROW_KEY...at
 
 

   
  
 
 org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1472)
  at
 
 

   
  
 
 

Re: Problem in filters

2013-04-17 Thread Ian Varley
Omkar,

Have you considered using Phoenix (https://github.com/forcedotcom/phoenix), a 
SQL skin over HBase to execute your SQL directly? That'll save you from 
learning all the nuances of HBase filters and give you as good or better 
performance.

Once you've downloaded and installed Phoenix, here's what you'd need to do:

// One time DDL statement
Connection conn =
DriverManager.getConnection("jdbc:phoenix:your-zookeeper-quorum-host");
conn.createStatement().execute("CREATE VIEW ORDERS(\n" +
// Not sure what the PK is, so I added this column
"ORDER_DETAILS.ORDER_DETAILS_ID VARCHAR NOT NULL PRIMARY KEY,\n" +
// If you have fixed length IDs, then use CHAR(xxx)
"ORDER_DETAILS.CUSTOMER_ID VARCHAR,\n" +
"ORDER_DETAILS.PRODUCT_ID VARCHAR,\n" +
"ORDER_DETAILS.REQUEST_DATE DATE,\n" +
"ORDER_DETAILS.PRODUCT_QUANTITY INTEGER,\n" +
"ORDER_DETAILS.PRICE DECIMAL(10,2),\n" +
 // not sure on the type here, but this might map to an Enum
"ORDER_DETAILS.PAYMENT_MODE CHAR(1)\n" +
")");

// Running the query:
Connection conn =
DriverManager.getConnection("jdbc:phoenix:your-zookeeper-quorum-host");
PreparedStatement stmt = conn.prepareStatement(
"SELECT ORDER_ID,CUSTOMER_ID,PRODUCT_ID,QUANTITY\n" +
"FROM ORDERS WHERE QUANTITY >= ? and PRODUCT_ID=?");
stmt.setInt(1,16);
stmt.setString(2,"P60337998");
ResultSet rs = stmt.executeQuery();
while (rs.next()) {
System.out.println("ORDER_ID=" + rs.getString("ORDER_ID") + ",CUSTOMER_ID=" +
rs.getString("CUSTOMER_ID") +
",PRODUCT_ID=" + rs.getString("PRODUCT_ID") + ",QUANTITY=" +
rs.getInt("QUANTITY"));
}

There are different trade-offs for the make up of the columns in your PK, 
depending on your access patterns. Getting this right could prevent full table 
scans and make your query execute much faster. Also, there are performance 
trade-offs for using a VIEW versus a TABLE.

Ian


On Apr 17, 2013, at 8:32 AM, Jean-Marc Spaggiari wrote:

Hi Omkar,

Using the shell, can you scan the few first lines from your table to make
sure it's store with the expected format? Don't forget the limit the number
of rows retrieved.

JM


2013/4/17 Omkar Joshi 
omkar.jo...@lntinfotech.com

Hi Ted,

I tried using only productIdFilter without FilterList but still no output.

public void executeOrdersQuery() {
   /*
* SELECT ORDER_ID,CUSTOMER_ID,PRODUCT_ID,QUANTITY FROM
ORDERS WHERE
* QUANTITY =16 and PRODUCT_ID='P60337998'
*/
   String tableName = ORDERS;

   String family = ORDER_DETAILS;
   int quantity = 16;
   String productId = P60337998;

   SingleColumnValueFilter quantityFilter = new
SingleColumnValueFilter(
   Bytes.toBytes(family),
Bytes.toBytes(PRODUCT_QUANTITY),
   CompareFilter.CompareOp.GREATER_OR_EQUAL,
   Bytes.toBytes(quantity));

   SingleColumnValueFilter productIdFilter = new
SingleColumnValueFilter(
   Bytes.toBytes(family),
Bytes.toBytes(PRODUCT_ID),
   CompareFilter.CompareOp.EQUAL,
Bytes.toBytes(productId));

   FilterList filterList = new FilterList(
   FilterList.Operator.MUST_PASS_ALL);
   // filterList.addFilter(quantityFilter);
   filterList.addFilter(productIdFilter);

   Scan scan = new Scan();
   scan.addColumn(Bytes.toBytes(family),
Bytes.toBytes(ORDER_ID));
   scan.addColumn(Bytes.toBytes(family),
Bytes.toBytes(CUSTOMER_ID));
   scan.addColumn(Bytes.toBytes(family),
Bytes.toBytes(PRODUCT_ID));
   scan.addColumn(Bytes.toBytes(family),
Bytes.toBytes(QUANTITY));

   // scan.setFilter(filterList);
   scan.setFilter(productIdFilter);

   HTableInterface tbl =
hTablePool.getTable(Bytes.toBytes(tableName));
   ResultScanner scanResults = null;
   try {
   scanResults = tbl.getScanner(scan);

   System.out.println(scanResults : );

   for (Result result : scanResults) {
   System.out.println(The result is  +
result);
   }

   } catch (IOException e) {
   // TODO Auto-generated catch block
   e.printStackTrace();
   } finally {
   try {
   tbl.close();
   } catch (IOException e) {
   // TODO Auto-generated catch block
   e.printStackTrace();
   }
   }

   }

Regards,
Omkar Joshi


-Original Message-
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Wednesday, April 17, 2013 6:46 PM
To: user@hbase.apache.org
Cc: 

Re: RegionServer shutdown with ScanWildcardColumnTracker exception

2013-04-17 Thread ramkrishna vasudevan
There is a hint mechanism available when scanning happens.  But I don't
think there should be much of a difference between a scan that happens during
a flush and the normal scan.

Will look through the code and come back on this.

Regards
RAm


On Wed, Apr 17, 2013 at 9:40 PM, Amit Sela am...@infolinks.com wrote:

 No, no encoding.


 On Wed, Apr 17, 2013 at 6:56 PM, ramkrishna vasudevan 
 ramkrishna.s.vasude...@gmail.com wrote:

  @Lars
  You have any suggestions on this?
 
  @Amit
  You have any Encoder enabled like the Prefix Encoding stuff?
  There was one optimization added recently but that is not in 0.94.2
 
  Regards
  Ram
 
 
  On Wed, Apr 17, 2013 at 5:17 PM, Amit Sela am...@infolinks.com wrote:
 
   I scanned over this counter with and without column specification and
 all
   looks OK now.
   I have no CPs in this table.
   Is there some kind of a hint mechanism in HBase' internal scan ?
 because
   it's weird that ScanWildcardColumnTracker.checkColumn says that column
 is
   smaller than previous column: *imprersions_ALL_2013041617*. there is no
   imprersions only impressions and r is indeed smaller than s, could it
 be
   some kind of hint bug ? I don't think I know enough of HBase internals
 to
   fully understand that...
  
  
  
   On Wed, Apr 17, 2013 at 1:42 PM, ramkrishna vasudevan 
   ramkrishna.s.vasude...@gmail.com wrote:
  
Hi Amit
   
Checking the code this is possible when the qualifiers are not
 sorted.
Do
you have any CPs in your path which tries to play with the KVs?
   
Seems to be a very weird thing.
Can you try doing a scan on the KV just before this happens.  That
 will
   tel
you the existing kvs that are present.
   
Even now if you can have the cluster you can try scanning for the
  region
for which the flush happened.  That will give us some more info.
   
Regards
Ram
   
   
On Wed, Apr 17, 2013 at 2:36 PM, Amit Sela am...@infolinks.com
  wrote:
   
 The cluster runs Hadoop 1.0.4 and HBase 0.94.2

 I have three families in this table: weekly, daily, hourly. each
  family
has
 the following qualifiers:
 Weekly - impressions_{countrycode}_{week#} - country code is 0, 1
 or
   ALL
 (aggregation of both 0 and 1)
 Daily and hourly are the same but with MMdd and MMddhh
 respectively.

 Just before the exception the regionserver StoreFile executes the
 following:

 2013-04-16 17:56:06,769 [regionserver8041.cacheFlusher] INFO
 org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom
filter
 type for hdfs://
 hadoop-master.infolinks.com:8000/hbase/URL_COUNTERS/af2760e
 4d04a9e3025d1fb53bdba8acf/.tmp/dc4ce516887f4e0bbaf6201d69ba90bc:
 CompoundBloomFilterWriter
 2013-04-16 17:56:07,331 [regionserver8041.cacheFlusher] INFO
 org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom
 and
  NO
 DeleteFamily was added to HFile
  (hdfs://hbase-master-address:8000/hbase
 /URL_COUNTERS/*af2760e4d04a9e3025d1fb53bdba8acf*
 /.tmp/dc4ce516887f4e0bbaf6201d69ba90bc)
 2013-04-16 17:56:07,331 [regionserver8041.cacheFlusher] INFO
 org.apache.hadoop.hbase.regionserver.Store: Flushed ,
sequenceid=210517246,
 memsize=39.3m, into tmp file hdfs://hbase-master:8000/hbase
 /URL_COUNTERS/*af2760e4d04a9e3025d1fb53bdba8acf*
 /.tmp/dc4ce516887f4e0bbaf6201d69ba90bc
 2013-04-16 17:56:07,357 [regionserver8041.cacheFlusher] INFO
 org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom
filter
 type for hdfs://hbase-master:8000/hbase/URL_COUNTERS/*af2760e*
 *4d04a9e3025d1fb53bdba8acf*/.tmp/3fa7993dcb294be1bca5e4d7357f4003:
 CompoundBloomFilterWriter
 2013-04-16 17:56:07,608 [regionserver8041.cacheFlusher] INFO
 org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom
 and
  NO
 DeleteFamily was added to HFile (hdfs://hbase-master:8000/hbase
 /URL_COUNTERS/*af2760e4d04a9e3025d1fb53bdba8acf*
 /.tmp/3fa7993dcb294be1bca5e4d7357f4003)
 2013-04-16 17:56:07,608 [regionserver8041.cacheFlusher] FATAL
 org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
server
 region-server-address,8041,1364993168088: Replay of HLog required
 . Forcing server shutdown
 DroppedSnapshotException: region: TABLE,ROWKEY,1364317591568.*
 af2760e4d04a9e3025d1fb53bdba8acf*.
 
 
 ...


 On Wed, Apr 17, 2013 at 11:47 AM, ramkrishna vasudevan 
 ramkrishna.s.vasude...@gmail.com wrote:

  Seems interesting.  Can  you tell us what are the families and
 the
  qualifiers available in your schema.
 
  Any other interesting logs that you can see before this?
 
  BTW the version of HBase is also needed?  If we can track it out
 we
   can
  then file a JIRA if it is a bug.
 
  Regards
  RAm
 
 
  On Wed, Apr 17, 2013 at 2:00 PM, Amit Sela am...@infolinks.com
wrote:
 

hbase hbase.rootdir configuration

2013-04-17 Thread lztaomin
HI
I use the Hadoop HA, and hadoop ha works very well, but with my hbase configuration
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://cluster/hbase</value>
</property>
  hbase can not access HDFS. How should I configure hbase.rootdir correctly? Thanks very much.

My core-site.xml configuration 
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://cluster</value>
</property>
<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>

   My  hdfs-site.xml  configuration 
<property>
  <name>dfs.federation.nameservices</name>
  <value>cluster</value>
</property>
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>8192</value>
</property>

<property>
  <name>dfs.namenode.name.dir</name>
  <value>/ytxt/hadoopData</value>
</property>

<property>
  <name>dfs.ha.namenodes.cluster</name>
  <value>nn0,nn1</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.cluster.nn0</name>
  <value>sy-hadoop-namenode1.189read.com:9000</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.cluster.nn1</name>
  <value>sy-hadoop-namenode2.189read.com:9000</value>
</property>

<property>
  <name>dfs.namenode.http-address.cluster.nn0</name>
  <value>sy-hadoop-namenode1.189read.com:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.cluster.nn1</name>
  <value>sy-hadoop-namenode2.189read.com:50070</value>
</property>

<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>/HAshared</value>
</property>

<property>
  <name>dfs.client.failover.proxy.provider.cluster</name>
  <value>org.apache.Hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/home/hadoop/.ssh/id_rsa</value>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
<property>
  <name>ha.zookeeper.quorum</name>
  <value>sy-hadoop-namenode1.189read.com,sy-hadoop-namenode2.189read.com,datanode1:2181,datanode2:2181,datanode3:2181</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/ytxt/hadoopData</value>
</property>




lztaomin

Re: RegionServer shutdown with ScanWildcardColumnTracker exception

2013-04-17 Thread ramkrishna vasudevan
Is there any testcases that tries to reproduce your issue?

Regards
Ram


On Wed, Apr 17, 2013 at 9:47 PM, ramkrishna vasudevan 
ramkrishna.s.vasude...@gmail.com wrote:

 There is a hint mechanism available when scanning happens.  But i dont
 think there should be much of difference between a scan that happens during
 flush and the normal scan.

 Will look thro the code and come back on this.

 Regards
 RAm


 On Wed, Apr 17, 2013 at 9:40 PM, Amit Sela am...@infolinks.com wrote:

 No, no encoding.


 On Wed, Apr 17, 2013 at 6:56 PM, ramkrishna vasudevan 
 ramkrishna.s.vasude...@gmail.com wrote:

  @Lars
  You have any suggestions on this?
 
  @Amit
  You have any Encoder enabled like the Prefix Encoding stuff?
  There was one optimization added recently but that is not in 0.94.2
 
  Regards
  Ram
 
 
  On Wed, Apr 17, 2013 at 5:17 PM, Amit Sela am...@infolinks.com wrote:
 
   I scanned over this counter with and without column specification and
 all
   looks OK now.
   I have no CPs in this table.
   Is there some kind of a hint mechanism in HBase' internal scan ?
 because
   it's weird that ScanWildcardColumnTracker.checkColumn says that
 column is
   smaller than previous column: *imprersions_ALL_2013041617*. there is
 no
   imprersions only impressions and r is indeed smaller than s, could it
 be
   some kind of hint bug ? I don't think I know enough of HBase
 internals to
   fully understand that...
  
  
  
   On Wed, Apr 17, 2013 at 1:42 PM, ramkrishna vasudevan 
   ramkrishna.s.vasude...@gmail.com wrote:
  
Hi Amit
   
Checking the code this is possible when the qualifiers are not
 sorted.
Do
you have any CPs in your path which tries to play with the KVs?
   
Seems to be a very weird thing.
Can you try doing a scan on the KV just before this happens.  That
 will
   tel
you the existing kvs that are present.
   
Even now if you can have the cluster you can try scanning for the
  region
for which the flush happened.  That will give us some more info.
   
Regards
Ram
   
   
On Wed, Apr 17, 2013 at 2:36 PM, Amit Sela am...@infolinks.com
  wrote:
   
 The cluster runs Hadoop 1.0.4 and HBase 0.94.2

 I have three families in this table: weekly, daily, hourly. each
  family
has
 the following qualifiers:
 Weekly - impressions_{countrycode}_{week#} - country code is 0, 1
 or
   ALL
 (aggregation of both 0 and 1)
 Daily and hourly are the same but with MMdd and MMddhh
 respectively.

 Just before the exception the regionserver StoreFile executes the
 following:

 2013-04-16 17:56:06,769 [regionserver8041.cacheFlusher] INFO
 org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family
 Bloom
filter
 type for hdfs://
 hadoop-master.infolinks.com:8000/hbase/URL_COUNTERS/af2760e
 4d04a9e3025d1fb53bdba8acf/.tmp/dc4ce516887f4e0bbaf6201d69ba90bc:
 CompoundBloomFilterWriter
 2013-04-16 17:56:07,331 [regionserver8041.cacheFlusher] INFO
 org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom
 and
  NO
 DeleteFamily was added to HFile
  (hdfs://hbase-master-address:8000/hbase
 /URL_COUNTERS/*af2760e4d04a9e3025d1fb53bdba8acf*
 /.tmp/dc4ce516887f4e0bbaf6201d69ba90bc)
 2013-04-16 17:56:07,331 [regionserver8041.cacheFlusher] INFO
 org.apache.hadoop.hbase.regionserver.Store: Flushed ,
sequenceid=210517246,
 memsize=39.3m, into tmp file hdfs://hbase-master:8000/hbase
 /URL_COUNTERS/*af2760e4d04a9e3025d1fb53bdba8acf*
 /.tmp/dc4ce516887f4e0bbaf6201d69ba90bc
 2013-04-16 17:56:07,357 [regionserver8041.cacheFlusher] INFO
 org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family
 Bloom
filter
 type for hdfs://hbase-master:8000/hbase/URL_COUNTERS/*af2760e*
 *4d04a9e3025d1fb53bdba8acf*/.tmp/3fa7993dcb294be1bca5e4d7357f4003:
 CompoundBloomFilterWriter
 2013-04-16 17:56:07,608 [regionserver8041.cacheFlusher] INFO
 org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom
 and
  NO
 DeleteFamily was added to HFile (hdfs://hbase-master:8000/hbase
 /URL_COUNTERS/*af2760e4d04a9e3025d1fb53bdba8acf*
 /.tmp/3fa7993dcb294be1bca5e4d7357f4003)
 2013-04-16 17:56:07,608 [regionserver8041.cacheFlusher] FATAL
 org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING
 region
server
 region-server-address,8041,1364993168088: Replay of HLog required
 . Forcing server shutdown
 DroppedSnapshotException: region: TABLE,ROWKEY,1364317591568.*
 af2760e4d04a9e3025d1fb53bdba8acf*.
 
 
 ...


 On Wed, Apr 17, 2013 at 11:47 AM, ramkrishna vasudevan 
 ramkrishna.s.vasude...@gmail.com wrote:

  Seems interesting.  Can  you tell us what are the families and
 the
  qualifiers available in your schema.
 
  Any other interesting logs that you can see before this?
 
  BTW the version of HBase is also needed?  If we can track it
 

Re: hbase hbase.rootdir configuration

2013-04-17 Thread shashwat shriparv
On Wed, Apr 17, 2013 at 8:25 PM, lztaomin lztao...@163.com wrote:

 hdfs://cluster/hbase/val


Where is the port number bro

*Thanks  Regards*

∞
Shashwat Shriparv
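With an HA nameservice there is normally no port in the URI at all; the logical name (cluster here) is resolved through the failover proxy settings, so the question is whether those hdfs-site.xml settings are visible on HBase's classpath. A hedged sanity check from an HBase node, assuming the logical nameservice is cluster:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RootdirCheck {
    public static void main(String[] args) throws Exception {
        // Uses whatever core-site.xml/hdfs-site.xml HBase itself can see; if the
        // nameservice does not resolve here, hbase.rootdir will not resolve either.
        Configuration conf = HBaseConfiguration.create();
        FileSystem fs = FileSystem.get(new URI("hdfs://cluster/hbase"), conf);
        System.out.println("exists: " + fs.exists(new Path("hdfs://cluster/hbase")));
    }
}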


Re: HBaseStorage. Inconsistent result.

2013-04-17 Thread Jean-Daniel Cryans
Can you run a RowCounter a bunch of times to see if it exhibits the same
issue? It would tell us if it's HBase or Pig that causes the issue.

http://hbase.apache.org/book.html#rowcounter

J-D
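A sketch of driving the bundled RowCounter from code, assuming the 0.94 org.apache.hadoop.hbase.mapreduce.RowCounter API; mmpages is the table from the report quoted below, and the ROWS counter printed with the job's counters is what to compare across runs:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.RowCounter;
import org.apache.hadoop.mapreduce.Job;

public class CountMmpages {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Same MR job the rowcounter utility runs; launch it a few times
        // and compare the ROWS counter between runs.
        Job job = RowCounter.createSubmittableJob(conf, new String[] { "mmpages" });
        boolean ok = job.waitForCompletion(true);
        System.exit(ok ? 0 : 1);
    }
}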


On Tue, Apr 9, 2013 at 3:58 AM, Eugene Morozov emoro...@griddynamics.com wrote:

 Hello everyone.

 I have following script:
 pages = LOAD 'hbase://mmpages' USING
 org.apache.pig.backend.hadoop.hbase.HBaseStorage('t:d', '-loadKey');
 pages2 = FOREACH pages GENERATE $0;
 pages3 = DISTINCT pages2;
 g_pages = GROUP pages3 all PARALLEL 1;
 s_pages = FOREACH g_pages GENERATE 'count', COUNT(pages3);
 DUMP s_pages;

 It just calculates number of keys in the table.
 The issue with this is that it gives me different results.
 I had two launch.
 * first one - 7 tasks in parallel (I launched same script 7 times
 trying to imitate heavy workload)
 * second one - 9 tasks in parallel.

 All 7 guys in first and 8 guys in second give me correct result, which is:

 Input(s):
 Successfully read 246419854 records (102194 bytes) from: hbase://mmpages
 ...
 (count,246419854)


 But one last of second run gives different
 Input(s):
 Successfully read 246419853 records (102194 bytes) from: hbase://mmpages
 ...
 (count,246419853)

 Number of read bytes is same, but number of rows is different.

 There was definitely no change in mmpages. We do not use standard
 Put/Delete - only bulkImport and there were no  Major compaction run on
 this table. Even if it would be run, it wouldn't delete anything,
 because TTL of this page is = '2147483647'. Moreover this table was for
 debug purposes - nobody uses it, but me.


 Original issue I got was actually same, but with my own HBaseStorage. It
 gives much less consistent results. For example for 7 parallel run it gives
 me:
 --(count,246419854)
 --(count,246419173) : Successfully read 246419173 records (2333164 bytes)
 from: hbase://mmpages
 --(count,246419854) : Successfully read 246419854 records (2333164 bytes)
 from: hbase://mmpages
 --(count,246419854) : Successfully read 246419854 records (2333164 bytes)
 from: hbase://mmpages
 --(count,246419173) : Successfully read 246419173 records (2333164 bytes)
 from: hbase://mmpages
 --(count,246418816) : Successfully read 246418816 records (2333164 bytes)
 from: hbase://mmpages
 --(count,246418690)
 -- and one job has been failed due to lease exception.
 During run with my own HBaseStorage I see many map tasks killed with lease
 does not exist exception, though job usually finish successful.

 As you can see number of read bytes is exactly same every time, but numbers
 of read rows are different. Exactly same I got with native HBaseStorage,
 though difference is really small.

 But anyway, I didn't expect to see that original HBaseStorage could also do
 the trick. And now my question is more about org.apache...HBaseStorage than
 about my own HBaseStorage.

 Any advice
 to prove anything regarding native org.apache...HBaseStorage to fix it
 or
 to do more experiments on the matter would be really really
 appreciated.
 --
 Eugene Morozov
 Developer of Grid Dynamics
 Skype: morozov.evgeny
 www.griddynamics.com
 emoro...@griddynamics.com



RE: How to configure mapreduce archive size?

2013-04-17 Thread Xia_Yang
Hi,

I am using hbase -0.94.1. The hadoop which is packaged within it is hadoop 
1.0.3. There are some hbase mapreduce jobs running on my server. After some 
time, I found that my folder /tmp/hadoop-root/mapred/local/archive has 14G size.

I did not explicitly use Hadoop DistributedCache in my code. Does Hbase have 
some settings to write jar file to this folder? How could I remove them or 
limit the size?

Thank you.

Jane

From: Yang, Xia
Sent: Wednesday, April 17, 2013 11:19 AM
To: 'u...@hadoop.apache.org'
Subject: RE: How to configure mapreduce archive size?

Hi Hemanth and Bejoy KS,

I have tried both mapred-site.xml and core-site.xml. They do not work. I set 
the value to 50K just for testing purpose, however the folder size already goes 
to 900M now. As in your email, After they are done, the property will help 
cleanup the files due to the limit set.  How frequently the cleanup task will 
be triggered?

Regarding the job.xml, I cannot use JT web UI to find it. It seems when hadoop 
is packaged within Hbase, this is disabled. I am only using Hbase jobs. I was 
suggested by Hbase people to get help from Hadoop mailing list. I will contact 
them again.

Thanks,

Jane

From: Hemanth Yamijala [mailto:yhema...@thoughtworks.com]
Sent: Tuesday, April 16, 2013 9:35 PM
To: u...@hadoop.apache.org
Subject: Re: How to configure mapreduce archive size?

You can limit the size by setting local.cache.size in the mapred-site.xml (or 
core-site.xml if that works for you). I mistakenly mentioned mapred-default.xml 
in my last mail - apologies for that. However, please note that this does not 
prevent whatever is writing into the distributed cache from creating those 
files when they are required. After they are done, the property will help 
cleanup the files due to the limit set.

That's why I am more keen on finding what is using the files in the Distributed 
cache. It may be useful if you can ask on the HBase list as well if the APIs 
you are using are creating the files you mention (assuming you are only running 
HBase jobs on the cluster and nothing else)

Thanks
Hemanth

On Tue, Apr 16, 2013 at 11:15 PM, xia_y...@dell.com 
wrote:
Hi Hemanth,

I did not explicitly use DistributedCache in my code. I did not use any 
command line arguments like -libjars either.

Where can I find job.xml? I am using Hbase MapReduce API and not setting any 
job.xml.

The key point is I want to limit the size of 
/tmp/hadoop-root/mapred/local/archive. Could you help?

Thanks.

Xia

From: Hemanth Yamijala 
[mailto:yhema...@thoughtworks.com]
Sent: Thursday, April 11, 2013 9:09 PM

To: u...@hadoop.apache.org
Subject: Re: How to configure mapreduce archive size?

TableMapReduceUtil has APIs like addDependencyJars which will use 
DistributedCache. I don't think you are explicitly using that. Are you using 
any command line arguments like -libjars etc when you are launching the 
MapReduce job ? Alternatively you can check job.xml of the launched MR job to 
see if it has set properties having prefixes like mapred.cache. If nothing's 
set there, it would seem like some other process or user is adding jars to 
DistributedCache when using the cluster.

Thanks
hemanth
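One way to check, from the launching code itself, whether anything is being pushed into the DistributedCache when the JT web UI is not available is to dump the cache-related properties from the job's Configuration just before submission. A sketch only; the property prefixes below are the usual Hadoop 1.x ones (mapred.cache.*) plus the newer names, and the class and method names are made up for illustration:

import java.util.Map;
import org.apache.hadoop.conf.Configuration;

public class CachePropDump {
    // Call with job.getConfiguration() right before job.waitForCompletion(true).
    public static void dump(Configuration conf) {
        for (Map.Entry<String, String> e : conf) {
            String key = e.getKey();
            if (key.startsWith("mapred.cache") || key.startsWith("mapreduce.job.cache")) {
                System.out.println(key + " = " + e.getValue());
            }
        }
    }
}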



On Thu, Apr 11, 2013 at 11:40 PM, xia_y...@dell.com 
wrote:
Hi Hemanth,

Attached is some sample folders within my 
/tmp/hadoop-root/mapred/local/archive. There are some jar and class files 
inside.

My application uses MapReduce job to do purge Hbase old data. I am using basic 
HBase MapReduce API to delete rows from Hbase table. I do not specify to use 
Distributed cache. Maybe HBase use it?

Some code here:

   Scan scan = new Scan();
   scan.setCaching(500);// 1 is the default in Scan, which will be 
bad for MapReduce jobs
   scan.setCacheBlocks(false);  // don't set to true for MR jobs
   scan.setTimeRange(Long.MIN_VALUE, timestamp);
   // set other scan attrs
   // the purge start time
   Date date=new Date();
   TableMapReduceUtil.initTableMapperJob(
 tableName,// input table
 scan,   // Scan instance to control CF and attribute 
selection
 MapperDelete.class, // mapper class
 null, // mapper output key
 null,  // mapper output value
 job);

   job.setOutputFormatClass(TableOutputFormat.class);
   job.getConfiguration().set(TableOutputFormat.OUTPUT_TABLE, tableName);
   job.setNumReduceTasks(0);

   boolean b = job.waitForCompletion(true);

From: Hemanth Yamijala 
[mailto:yhema...@thoughtworks.com]
Sent: Thursday, April 11, 2013 12:29 AM

To: u...@hadoop.apache.org
Subject: Re: How to configure mapreduce archive size?

Could you paste the contents of the directory ? Not sure whether that will 
help, but just giving it a shot.

What application 

Re: RegionServer shutdown with ScanWildcardColumnTracker exception

2013-04-17 Thread Amit Sela
No. It happened in our production environment after running counters
increments every 5 minutes for a few weeks now. I could try to reproduce in
test cluster environment but that would mean running for weeks as well...
but I will keep digging and let you guys know if it happens again or / and
I have more information or insights on the issue.

Thanks.


On Wed, Apr 17, 2013 at 8:18 PM, ramkrishna vasudevan 
ramkrishna.s.vasude...@gmail.com wrote:

 Is there any testcases that tries to reproduce your issue?

 Regards
 Ram


 On Wed, Apr 17, 2013 at 9:47 PM, ramkrishna vasudevan 
 ramkrishna.s.vasude...@gmail.com wrote:

  There is a hint mechanism available when scanning happens.  But i dont
  think there should be much of difference between a scan that happens
 during
  flush and the normal scan.
 
  Will look thro the code and come back on this.
 
  Regards
  RAm
 
 
  On Wed, Apr 17, 2013 at 9:40 PM, Amit Sela am...@infolinks.com wrote:
 
  No, no encoding.
 
 
  On Wed, Apr 17, 2013 at 6:56 PM, ramkrishna vasudevan 
  ramkrishna.s.vasude...@gmail.com wrote:
 
   @Lars
   You have any suggestions on this?
  
   @Amit
   You have any Encoder enabled like the Prefix Encoding stuff?
   There was one optimization added recently but that is not in 0.94.2
  
   Regards
   Ram
  
  
   On Wed, Apr 17, 2013 at 5:17 PM, Amit Sela am...@infolinks.com
 wrote:
  
I scanned over this counter with and without column specification
 and
  all
looks OK now.
I have no CPs in this table.
Is there some kind of a hint mechanism in HBase' internal scan ?
  because
it's weird that ScanWildcardColumnTracker.checkColumn says that
  column is
smaller than previous column: *imprersions_ALL_2013041617*. there
 is
  no
imprersions only impressions and r is indeed smaller than s, could
 it
  be
some kind of hint bug ? I don't think I know enough of HBase
  internals to
fully understand that...
   
   
   
On Wed, Apr 17, 2013 at 1:42 PM, ramkrishna vasudevan 
ramkrishna.s.vasude...@gmail.com wrote:
   
 Hi Amit

 Checking the code this is possible when the qualifiers are not
  sorted.
 Do
 you have any CPs in your path which tries to play with the KVs?

 Seems to be a very weird thing.
 Can you try doing a scan on the KV just before this happens.  That
  will
tel
 you the existing kvs that are present.

 Even now if you can have the cluster you can try scanning for the
   region
 for which the flush happened.  That will give us some more info.

 Regards
 Ram


 On Wed, Apr 17, 2013 at 2:36 PM, Amit Sela am...@infolinks.com
   wrote:

  The cluster runs Hadoop 1.0.4 and HBase 0.94.2
 
  I have three families in this table: weekly, daily, hourly. each
   family
 has
  the following qualifiers:
  Weekly - impressions_{countrycode}_{week#} - country code is 0,
 1
  or
ALL
  (aggregation of both 0 and 1)
  Daily and hourly are the same but with MMdd and MMddhh
  respectively.
 
  Just before the exception the regionserver StoreFile executes
 the
  following:
 
  2013-04-16 17:56:06,769 [regionserver8041.cacheFlusher] INFO
  org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family
  Bloom
 filter
  type for hdfs://
  hadoop-master.infolinks.com:8000/hbase/URL_COUNTERS/af2760e
  4d04a9e3025d1fb53bdba8acf/.tmp/dc4ce516887f4e0bbaf6201d69ba90bc:
  CompoundBloomFilterWriter
  2013-04-16 17:56:07,331 [regionserver8041.cacheFlusher] INFO
  org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom
  and
   NO
  DeleteFamily was added to HFile
   (hdfs://hbase-master-address:8000/hbase
  /URL_COUNTERS/*af2760e4d04a9e3025d1fb53bdba8acf*
  /.tmp/dc4ce516887f4e0bbaf6201d69ba90bc)
  2013-04-16 17:56:07,331 [regionserver8041.cacheFlusher] INFO
  org.apache.hadoop.hbase.regionserver.Store: Flushed ,
 sequenceid=210517246,
  memsize=39.3m, into tmp file hdfs://hbase-master:8000/hbase
  /URL_COUNTERS/*af2760e4d04a9e3025d1fb53bdba8acf*
  /.tmp/dc4ce516887f4e0bbaf6201d69ba90bc
  2013-04-16 17:56:07,357 [regionserver8041.cacheFlusher] INFO
  org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family
  Bloom
 filter
  type for hdfs://hbase-master:8000/hbase/URL_COUNTERS/*af2760e*
 
 *4d04a9e3025d1fb53bdba8acf*/.tmp/3fa7993dcb294be1bca5e4d7357f4003:
  CompoundBloomFilterWriter
  2013-04-16 17:56:07,608 [regionserver8041.cacheFlusher] INFO
  org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom
  and
   NO
  DeleteFamily was added to HFile (hdfs://hbase-master:8000/hbase
  /URL_COUNTERS/*af2760e4d04a9e3025d1fb53bdba8acf*
  /.tmp/3fa7993dcb294be1bca5e4d7357f4003)
  2013-04-16 17:56:07,608 [regionserver8041.cacheFlusher] FATAL
  org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING
  region
 server
  

Re: Speeding up the row count

2013-04-17 Thread Vedad Kirlic
Hi Omkar,

If you are not interested in occurrences of specific column (e.g. name,
email ... ), and just want to get total number of rows (regardless of their
content - i.e. columns), you should avoid adding any columns to the Scan, in
which case the coprocessor implementation behind AggregationClient will add a
FirstKeyOnlyFilter to the Scan to avoid loading unnecessary columns, so
this should result in some speed up.

This is similar approach to what hbase shell 'count' implementation does,
although reduction in overhead in that case is bigger, since data transfer
from region server to client (shell) is minimized, whereas in case of
coprocessor, data does not leave region server, so most of the improvement
in that case should come from avoiding loading of unnecessary files. Not
sure how this will apply to your particular case, given that data set per
row seems to be rather small. Also, in case of AggregateClient you will
benefit if/when your tables span multiple regions. Essentially, performance
of this approach will 'degrade' as your table gets bigger, but only to the
point when it splits, from which point it should be pretty constant. Having
this in mind, and your type of data, you might consider pre-splitting your
tables.

DISCLAIMER: this is mostly theoretical, since I'm not an expert in hbase
internals :), so your best bet is to try it - I'm too lazy to verify impact
my self ;)

Finally, if your case can tolerate eventual consistency of counters with
actual number of rows, you can, as already suggested, have RowCounter map
reduce run every once in a while, write the counter(s) back to hbase, and
read those when you need to obtain the number of rows.

Regards,
Vedad



--
View this message in context: 
http://apache-hbase.679495.n3.nabble.com/Speeding-up-the-row-count-tp4042378p4042415.html
Sent from the HBase User mailing list archive at Nabble.com.
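For reference, a minimal sketch of the coprocessor-based count described above, assuming HBase 0.94 with the AggregateImplementation coprocessor enabled on the region servers; the table name is a placeholder, and no columns are added to the Scan so the server side can fall back to FirstKeyOnlyFilter:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.coprocessor.AggregationClient;
import org.apache.hadoop.hbase.client.coprocessor.LongColumnInterpreter;
import org.apache.hadoop.hbase.util.Bytes;

public class CoprocessorRowCount {
    public static void main(String[] args) throws Throwable {
        Configuration conf = HBaseConfiguration.create();
        AggregationClient aggregationClient = new AggregationClient(conf);
        Scan scan = new Scan();   // no columns: only the first KV per row is read
        long rows = aggregationClient.rowCount(Bytes.toBytes("MY_TABLE"),
                new LongColumnInterpreter(), scan);
        System.out.println("row count = " + rows);
    }
}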


Rolling Restart and Load Balancer.

2013-04-17 Thread Jean-Marc Spaggiari
When we are doing rolling restarts, the load balancer is automatically
turned off.

hbase@node3:~/hbase$ ./bin/graceful_stop.sh --restart --reload --debug buldo
Disabling balancer!
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.94.6.1, r1464658, Thu Apr  4 10:58:50 PDT 2013

balance_switch false
false

0 row(s) in 0.4110 seconds
.
.
.


However, in the documentation, we still recommend turning it off manually
before doing the rolling restart: http://hbase.apache.org/book.html#rolling

Should the documentation be updated to reflect that it's not required
anymore to turn off the load balancer since it will be done by the script?

JM


Re: Rolling Restart and Load Balancer.

2013-04-17 Thread Ted Yu
We should make the documentation match the script. 

Thanks

On Apr 17, 2013, at 1:15 PM, Jean-Marc Spaggiari jean-m...@spaggiari.org 
wrote:

 When we are doing rolling restarts, the load balancer is automatically
 turned off.
 
 hbase@node3:~/hbase$ ./bin/graceful_stop.sh --restart --reload --debug buldo
 Disabling balancer!
 HBase Shell; enter 'helpRETURN' for list of supported commands.
 Type exitRETURN to leave the HBase Shell
 Version 0.94.6.1, r1464658, Thu Apr  4 10:58:50 PDT 2013
 
 balance_switch false
 false
 
 0 row(s) in 0.4110 seconds
 .
 .
 .
 
 
 However, in the documentation, we still recommand to turn it of manually
 before doing the rolling restart: http://hbase.apache.org/book.html#rolling
 
 Should the documentation be updated to reflect that it's not required
 anymore to turn off the load balancer since it will be done by the script?
 
 JM


Re: namenode recovery from HA enviorment

2013-04-17 Thread Azuryy Yu
Did the active or the standby NN fail?  Does it fail over automatically?

Just run hadoop-daemon.sh start namenode on your failed NN.


On Thu, Apr 18, 2013 at 11:54 AM, huaxiang huaxi...@asiainfo-linkage.com wrote:

 Hi,

 One of my HA namenodes failed, any guide to recover it safely? CDH4.1.2
 with ZKFC.



Thanks!





beatls




Under Heavy Write Load + Replication On : Brings All My Region Servers Dead

2013-04-17 Thread Ameya Kantikar
I am running Hbase 0.94.2 from cloudera cdh4.2. (10 machine cluster)

Under heavy write load, and when replication is on, all my region servers
are going down.
I checked with the cloudera version, it has the HBASE-2611 bug patched in the
version I am using, so not sure what's going on. Here is the stack:

2013-04-18 01:47:33,423 INFO
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager:
Atomically moving relevance-hbase5-snc1.snc1,60020,1366247910200's hlogs to
my queue

2013-04-18 01:47:33,424 DEBUG
org.apache.hadoop.hbase.replication.ReplicationZookeeper:  The multi list
size is: 1

2013-04-18 01:47:33,425 WARN
org.apache.hadoop.hbase.replication.ReplicationZookeeper: Got exception in
copyQueuesFromRSUsingMulti:

org.apache.zookeeper.KeeperException$NotEmptyException: KeeperErrorCode =
Directory not empty

at
org.apache.zookeeper.KeeperException.create(KeeperException.java:125)

at org.apache.zookeeper.ZooKeeper.multiInternal(ZooKeeper.java:925)

at org.apache.zookeeper.ZooKeeper.multi(ZooKeeper.java:901)

at
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.multi(RecoverableZooKeeper.java:538)

at
org.apache.hadoop.hbase.zookeeper.ZKUtil.multiOrSequential(ZKUtil.java:1457)

at
org.apache.hadoop.hbase.replication.ReplicationZookeeper.copyQueuesFromRSUsingMulti(ReplicationZookeeper.java:705)

at
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager$NodeFailoverWorker.run(ReplicationSourceManager.java:585)

at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)

at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)

at java.lang.Thread.run(Thread.java:662)


Followed by

2013-04-18 01:47:36,043 FATAL
org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server
relevance-hbase2-snc1.snc1,60020,1366247745434: Writing replication status


I checked by turning replication off, and everything seems fine. I can
reproduce this bug almost every time I run my write heavy job.


Here is the complete log:

http://pastebin.com/da0m475T



Any ideas?


Ameya


Re: Under Heavy Write Load + Replication On : Brings All My Region Servers Dead

2013-04-17 Thread Himanshu Vashishtha
Hello Ameya,

Sorry to hear that.

You have two options:

1) Apply HBase-8099 patch to your version. (
https://issues.apache.org/jira/browse/HBASE-8099) The patch is simple, so
should be easy to do, OR,
2) Turn off zk.multi feature (see hbase-default.xml). (You can refer to
CDH4.2.0 docs for that)

This fix (HBase-8099) will be in CDH4.2.1, though.

Please ask the list if you have any more questions.

Thanks,
Himanshu
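For option 2), the switch behind the zk multi feature is, to my knowledge, the hbase.zookeeper.useMulti property in 0.94.x / CDH4.2, and the usual place to set it to false is hbase-site.xml on every region server. A small sketch that just reports what the client-side configuration currently resolves it to:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CheckUseMulti {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Assumed property name; false disables the multi-based replication
        // queue failover that the stack trace above goes through.
        System.out.println("hbase.zookeeper.useMulti = "
                + conf.getBoolean("hbase.zookeeper.useMulti", false));
    }
}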

On Wed, Apr 17, 2013 at 10:38 PM, Ameya Kantikar am...@groupon.com wrote:

 I am running Hbase 0.94.2 from cloudera cdh4.2. (10 machine cluster)

 Under heavy write load, and when replication is on, all my region servers
 are going down.
 I checked with cloudera version, it has HBASE-2611 bug patched in the
 version I am using, so not sure whats going on. Here is the stack:

 2013-04-18 01:47:33,423 INFO
 org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager:
 Atomically moving relevance-hbase5-snc1.snc1,60020,1366247910200's hlogs to
 my queue

 2013-04-18 01:47:33,424 DEBUG
 org.apache.hadoop.hbase.replication.ReplicationZookeeper:  The multi list
 size is: 1

 2013-04-18 01:47:33,425 WARN
 org.apache.hadoop.hbase.replication.ReplicationZookeeper: Got exception in
 copyQueuesFromRSUsingMulti:

 org.apache.zookeeper.KeeperException$NotEmptyException: KeeperErrorCode =
 Directory not empty

 at
 org.apache.zookeeper.KeeperException.create(KeeperException.java:125)

 at org.apache.zookeeper.ZooKeeper.multiInternal(ZooKeeper.java:925)

 at org.apache.zookeeper.ZooKeeper.multi(ZooKeeper.java:901)

 at

 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.multi(RecoverableZooKeeper.java:538)

 at

 org.apache.hadoop.hbase.zookeeper.ZKUtil.multiOrSequential(ZKUtil.java:1457)

 at

 org.apache.hadoop.hbase.replication.ReplicationZookeeper.copyQueuesFromRSUsingMulti(ReplicationZookeeper.java:705)

 at

 org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager$NodeFailoverWorker.run(ReplicationSourceManager.java:585)

 at

 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)

 at

 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)

 at java.lang.Thread.run(Thread.java:662)


 Followed by

 2013-04-18 01:47:36,043 FATAL
 org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server
 relevance-hbase2-snc1.snc1,60020,1366247745434: Writing replication status


 I checked by turning replication off, and everything seems fine. I can
 reproduce this bug almost every time I run my write heavy job.


 Here is the complete log:

 http://pastebin.com/da0m475T



 Any ideas?


 Ameya