Hi Ted, all,
I have set
hfile.block.cache.size to 0.6
hbase.regionserver.handler.count to 60
DATA_BLOCK_ENCODING = 'FAST_DIFF'
BLOOMFILTER = 'ROW'
BLOCKSIZE = '8192'
BLOCKCACHE = 'true'
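For reference, settings like these split across the server config and the table schema. A sketch of how they would be applied, with hypothetical table/family names 't1'/'f1' (the hbase-site.xml properties are really XML property elements; shown in shorthand here):

    # hbase-site.xml (region servers)
    hfile.block.cache.size = 0.6
    hbase.regionserver.handler.count = 60

    # HBase shell
    create 't1', {NAME => 'f1', DATA_BLOCK_ENCODING => 'FAST_DIFF',
      BLOOMFILTER => 'ROW', BLOCKSIZE => '8192', BLOCKCACHE => 'true'}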
Performance has improved.
But after creating another table with the same size and
Here is the description of the two tables created:
FIRST TABLE
Thanks for reply!
I'm running with single thread.
Actually I want to know: how fast can HBase writes really be with each
thread?
And how can I optimize them?
By configuring BLOCKSIZE, the write buffer, ...?
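On the write-buffer question: with a single client thread, throughput is usually dominated by per-RPC round trips, so a common first step is client-side batching. A sketch of the relevant client setting (the 8 MB value is illustrative; the 0.94-era default is 2 MB):

    <!-- hbase-site.xml on the client -->
    <property>
      <name>hbase.client.write.buffer</name>
      <value>8388608</value>
    </property>

Combined with HTable.setAutoFlush(false), Puts are then sent to the region servers in batches instead of one RPC per Put.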
I'm a newbie.
Thanks for help!
jingych
From: Asaf Mesika
Date: 2013-11-26 19:20
To:
..
2013-11-27 18:26:18,102 FATAL org.apache.hadoop.hbase.master.HMaster:
Master server abort: loaded coprocessors are: []
2013-11-27 18:26:18,102 FATAL org.apache.hadoop.hbase.master.HMaster:
Unexpected state : H,
What about hbase.version? Is it used? Should it be updated automatically?
Thanks
2013/11/26 Ted Yu yuzhih...@gmail.com
Take a look at bin/rolling-restart.sh where you will see various options.
bq. i would like to keep current (.7) folder and move my installation to
.13 folder
You mean
Have you checked region server log on d199.uuc.com ?
Cheers
On Nov 27, 2013, at 3:21 AM, Jiajun Chen chenjia...@uuwatch.com wrote:
..
2013-11-27 18:26:18,102 FATAL org.apache.hadoop.hbase.master.HMaster:
Master server abort: loaded coprocessors are: []
2013-11-27 18:26:18,102 FATAL
Hi,
I am trying to install snappy compression for HBase. I believe I have
installed the library and wanted to check by using the CompressionTest
utility. I am issuing the following command:
bin/hbase org.apache.hadoop.hbase.util.CompressionTest
hdfs://C-Master:/usr/local/hadoop/hbase snappy
bq. out of maxHeapMB=15983
In previous email you said RAM is 8GB. Above figure is larger than 8GB.
There're 6 coprocessors installed on each table.
I wonder if what you observed was related to HBASE-10047.
Cheers
On Wed, Nov 27, 2013 at 12:22 AM, Job Thomas j...@suntecgroup.com wrote:
Hi
bq. hdfs://C-Master*:*/
Did you actually type the second colon?
On Wed, Nov 27, 2013 at 6:58 AM, dwijesinghe dwijesin...@ivantagehealth.com
wrote:
Hi,
I am trying to install snappy compression for HBase. I believe I have
installed the library and wanted to check by using the
I tried both with and without (was unsure of the syntax for specifying the
hbase path). Same result either way.
--
View this message in context:
http://apache-hbase.679495.n3.nabble.com/CompressionTest-failing-tp4053156p4053159.html
Sent from the HBase User mailing list archive at Nabble.com.
Log snippet shows 8020.
Is that the correct port?
BTW '/usr/local/hadoop' seems to indicate a local path.
On Wed, Nov 27, 2013 at 7:15 AM, dwijesinghe dwijesin...@ivantagehealth.com
wrote:
I tried both with and without (was unsure of the syntax for specifying the
hbase path). Same result
Thank you for your reply.
That port is open for access, as stated in my security rules for the
cluster. Is there any further configuration I should do to that port/should
it be using another port?
Also I was a little confused by the instructions there. It prefixed the path
with hdfs:// but it
Here is the usage example given by CompressionTest:

    hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/testfile gz
CompressionTest would write to this file. This is to verify that
compression works on each node.
You should specify file: as scheme.
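Putting that together, a local check would look like this (paths are illustrative):

    bin/hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/testfile snappy

Run it on each node; if the snappy native libraries are wired up correctly, the tool should report SUCCESS.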
Cheers
On Wed, Nov 27,
Hi all,
I've read a lot of good things about Phoenix here and I have a few
questions that maybe some of you, who already use Phoenix, can help me with:
How does Phoenix handle pre-existing data (before it was deployed) ?
Does the deployment require HBase restart or just RegionServers restart ?
Amit:
Have you subscribed to phoenix-hbase-...@googlegroups.com ?
Cheers
On Wed, Nov 27, 2013 at 8:23 AM, Amit Sela am...@infolinks.com wrote:
Hi all,
I've read a lot of good things about Phoenix here and I have a few
questions that maybe some of you, who already use Phoenix, can help me
I actually asked some of these questions in the phoenix-hbase-user
googlegroup but never got an answer...
On Wed, Nov 27, 2013 at 6:39 PM, Ted Yu yuzhih...@gmail.com wrote:
Amit:
Have you subscribed to phoenix-hbase-...@googlegroups.com ?
Cheers
On Wed, Nov 27, 2013 at 8:23 AM, Amit Sela
Ted,
Thank you so much for your help. I was able to successfully test my snappy
installation by following that example.
You're welcome.
Actually the ref guide mentions this tool:
http://hbase.apache.org/book.html#compression.test
On Wed, Nov 27, 2013 at 8:52 AM, dwijesinghe dwijesin...@ivantagehealth.com
wrote:
Ted,
Thank you so much for your help. I was able to successfully test my snappy
installation by
Amit,
So sorry we didn't answer your question before - I'll post an answer now
over on our mailing list.
Thanks,
James
On Wed, Nov 27, 2013 at 8:46 AM, Amit Sela am...@infolinks.com wrote:
I actually asked some of these questions in the phoenix-hbase-user
googlegroup but never got an
For 0.94.X, hbase.version would stay at 7
You don't need to take extra action.
Cheers
On Wed, Nov 27, 2013 at 6:00 AM, Federico Gaule fga...@despegar.com wrote:
What about hbase.version? Is it used? Should it be updated automatically?
Thanks
2013/11/26 Ted Yu yuzhih...@gmail.com
Take
Did you play with HBase 0.96+ at some point? Looks like there is junk in a
znode.
-- Lars
From: Jiajun Chen chenjia...@uuwatch.com
To: user@hbase.apache.org user@hbase.apache.org
Sent: Wednesday, November 27, 2013 3:21 AM
Subject: HMaster Aborted for
The following JIRA has been integrated into branch 2.2:
HADOOP-10132 RPC#stopProxy() should log the class of proxy when
IllegalArgumentException is encountered
FYI
On Mon, Nov 25, 2013 at 9:56 PM, Ted Yu yuzhih...@gmail.com wrote:
Update:
Henry tried my patch attached to HBASE-10029
From
Hi all,
Knowing that replication metrics are global at the region server level in
HBase 0.94.13, what is the meaning of a metric like sizeOfLogQueue when
replicating to more than one peer/slave? Is it the queue size reported by
the last replication source thread? Does the last thread win? Can I
I didn't play with HBase 0.96+ at any point.
and on d199.uuc.com
2013-11-27 18:24:33,421 INFO org.apache.hadoop.hbase.regionserver.Store:
Completed compaction of 3 file(s) in page of H,http://istock.jrj.com.cn/article,002024,6567377.html,1385541132079.18c9cb11b3e673dec07038f166fb3ef7. into
Jiajun:
Are you able to show us some more of the region server log ?
Thanks
On Nov 27, 2013, at 9:54 PM, Jiajun Chen chenjia...@uuwatch.com wrote:
I didn't play with HBase 0.96+ at any point.
and on d199.uuc.com
2013-11-27 18:24:33,421 INFO org.apache.hadoop.hbase.regionserver.Store:
Hi,
Thanks for update.
After spending quite a bit of time on Hadoop/HBase, I couldn't find anything
awkward in the logs.
In the end, what I learned is that the reason for the outage was an I/O error
thrown by one of the disks on which we are storing the NameNode files.
One more suggestion we need is regarding
2013-11-27 18:24:33,375 INFO
org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter
type for hdfs://
master.uc.uuc.com:9000/hbase/H/18c9cb11b3e673dec07038f166fb3ef7/.tmp/832ec249071c45b3934a186046ca429d:
CompoundBloomFilterWriter
2013-11-27 18:24:33,385 INFO