copy your hbase-site.xml here
On Thu, Mar 10, 2011 at 3:01 PM, 茅旭峰 m9s...@gmail.com wrote:
It seems like there are lots of WAL files in .logs and .oldlogs
directories.
Is there any parameter to control the size of those WAL files? Or the frequency at which to check the WAL files?
Thanks a lot.
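For anyone else hitting this: two hbase-site.xml properties commonly govern WAL rolling in 0.90-era HBase. The values below are the shipped defaults as far as I recall; treat this fragment as illustrative, not a recommendation:

```xml
<!-- Illustrative hbase-site.xml fragment; values are the 0.90-era defaults -->
<property>
  <name>hbase.regionserver.logroll.period</name>
  <value>3600000</value> <!-- roll the WAL at least this often, in ms (1 hour) -->
</property>
<property>
  <name>hbase.regionserver.maxlogs</name>
  <value>32</value> <!-- force flushes once this many WAL files accumulate -->
</property>
```
Files under .oldlogs are WALs whose edits have all been flushed; the master cleans them up on its own schedule.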
I have not set much stuff in the xml file.
hbase-site.xml
===
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://cloud135:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
Thank you J-D. These operations are clearer to me now.
On 03/10/2011 12:58 AM, Jean-Daniel Cryans wrote:
There seem to be quite a few questions in your email; I'll try to answer all of them, but pardon me if I miss any.
J-D
On Wed, Mar 9, 2011 at 1:33 AM, Iulia
Dear all,
When I stop and start the HBase server, HTable (org.apache.hadoop.hbase.client) throws an exception.
Here is my code test:
HTable ht = new HTable(config, "T1");
byte[] row = Bytes.toBytes(1);
Get get = new Get(row);
Result result = ht.get(get);
Hi,
I want to delete one column family in a Java program.
Here is my code:
-
Configuration conf=HBaseConfiguration.create();
HBaseAdmin admin = new HBaseAdmin(conf);
if (admin.tableExists(tableName)) {
HTable table = new HTable(conf,tableName);
Delete d=new
Hi,
I suggested that he upgrade his environment to the latest version, so
at this time he used CDH3b4 (HBase 0.90.1) and performed the same
test procedure. Now he got a new issue: HMaster was aborted
because it couldn't reach the host that had the kernel panic.
Can anybody verify this
When you call new Get(row) the first time, the HBase client caches the
connection to the appropriate region server. After you restart HBase (I
think you run standalone?), when you do Get(row) again, the HBase client
still uses the cached connection, so it throws the exception.
But it seems it should not keep throwing the exception.
Hi Iulia, when major-compaction does run well, how long does it take? (I
know it doesn't answer your question).
On Thu, Mar 10, 2011 at 1:06 AM, Iulia Zidaru iulia.zid...@1and1.ro wrote:
Thank you for your answer. It is very useful. Unfortunately, our write
pattern changes files across all
You need to call the following constructor of Delete, which sets the row, before calling table.delete():
public Delete(byte [] row) {
Cheers
On Thu, Mar 10, 2011 at 2:19 AM, Sindy sindyban...@gmail.com wrote:
Hi,
I want to delete one column family in a Java program.
Here is my code:
It took 18h and it ran well; this means that the insertion time was
not affected. I'm not sure what "well" means, that's why I'm curious
about compaction times in other systems. We don't have other numbers
because we haven't monitored it so far.
On 03/10/2011 04:12 PM, M. C. Srivas wrote:
Hi Iulia,
-
Hi, (See attached files: hbase-env.sh, hbase-site.xml)
Trying to run HBase 0.90.1 on a Windows machine with Cygwin. I was able to
start HBase, confirmed by the display at http://localhost:60010/master.jsp.
But when I tried to start the shell, I got the following exception.
I found the major compaction operation in the API:
admin.majorCompact(tableNameOrRegionName)
Thank you,
On 03/10/2011 04:34 PM, Iulia Zidaru wrote:
It took 18h and it ran well; this means that the insertion time was
not affected. I'm not sure what "well" means, that's why I'm curious
about
Hello HBase Users/Authors,
I may be digging old things out for which there could be a lot of
debate/discussion. I wasn't able to find much info, and I'm looking for
direct and precise answers.
It is simple and complex/tricky at the same time.
Ques1: Does HBase support composite keys,
Hi ,
We are in the process of installing HBase 0.90.1.
We compiled the hadoop-append source code and swapped it into the HBase
distribution.
We succeeded in running some tests using only Hadoop (map/reduce jobs),
but after starting HBase we see this region server list:
There are 2
Mohit,
The short answer is that the row key is the primary key and hence has a backing
index. Outside of that, HBase doesn't support secondary indexes.
Your primary key could be a composite of values.
You can use a secondary index, and some of the code is out on GitHub. But it's
not
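Since the answer above says the primary key can be a composite of values, here is a minimal sketch of one way to build such a key (hypothetical helper in plain Java; real client code would use org.apache.hadoop.hbase.util.Bytes). The point is that HBase orders rows by raw bytes, so the encoding must make byte order match the order you want:

```java
// Composite row key sketch: rows sort first by userId, then by timestamp.
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class CompositeKey {

    // Join a variable-length id and a fixed-width timestamp with a 0x00
    // separator (assumes ids never contain 0x00).
    static byte[] make(String userId, long timestamp) {
        byte[] id = userId.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(id.length + 1 + 8);
        buf.put(id);
        buf.put((byte) 0);
        buf.putLong(timestamp); // big-endian: byte order matches numeric order for non-negative values
        return buf.array();
    }

    // Unsigned lexicographic comparison, i.e. how HBase orders row keys.
    static int compare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }
}
```

With such a key, a prefix scan on the userId bytes retrieves all of that user's rows in timestamp order.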
I am trying to estimate the cost of hosting own HBase cluster vs using EC2.
Could anyone give me some guidance?
Cluster size ~ 6 to 8 nodes
Usage ~ at least 12 hours/day with lot of read/write operations. (I know I
need to have more concrete usage number here)
Thank you so much :)
Here is what we do at GumGum for multi-valued fields:
1) Make a new family if there are a large number of fields
2) Use JSON or comma-separated string lists.
3) Make a composite column name such as email|work, email|home
It is not recommended to make a new family for a small number of fields as
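Option 3 above can be sketched in a few lines (hypothetical helper, not GumGum's actual code): the sub-field name is encoded into the column qualifier and split back on read, assuming '|' never appears in field names:

```java
// Composite column qualifier: "field|subfield" encoded into one string.
public class CompositeQualifier {

    static String qualifier(String field, String subfield) {
        return field + "|" + subfield;
    }

    static String[] split(String qualifier) {
        int i = qualifier.indexOf('|');
        return new String[] { qualifier.substring(0, i), qualifier.substring(i + 1) };
    }
}
```

On read you can then scan with a column prefix filter on "email|" to fetch every sub-field of the email field at once.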
Hi Weishung,
See the EC2 instance pricing details here:
http://aws.amazon.com/ec2/#pricing
and try to calculate it out vs. price quotes for hardware.
You'll need to run at _least_ m1.large or c1.xlarge instances for HBase.
There was a recent discussion thread
With no information whatsoever about size of the data, I would guess a cost
of about $4000 / node with annual hosting and power requirements about
$2000/year.
This is probably no more accurate than one order of magnitude. It has a
decent chance of being on the close order of magnitude. In
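Plugging the rough figures above into the 6-8 node cluster from the question gives a back-of-envelope total (my arithmetic, not the poster's; I read the $2000/year as a per-cluster hosting figure, which is an assumption — adjust if it is per node):

```java
// Back-of-envelope cluster cost from order-of-magnitude estimates.
public class ClusterCost {

    // Upfront hardware cost: perNodeDollars for each node.
    static long upfront(int nodes, long perNodeDollars) {
        return (long) nodes * perNodeDollars;
    }

    // Total over `years`, adding the annual hosting/power figure.
    static long totalOverYears(int nodes, long perNodeDollars, long annualDollars, int years) {
        return upfront(nodes, perNodeDollars) + (long) annualDollars * years;
    }
}
```

For 8 nodes at $4000 each that is $32,000 upfront, so a three-year total lands in the mid five figures, which is the number to weigh against three years of EC2 instance-hours.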
Sindy,
Are you trying to drop the column family completely from the table?
If so, you'll need to use HBaseAdmin.deleteColumn().
If you're only trying to remove values from the column family, then you will
need to specify a row, as Ted mentions. And the table must be enabled to
handle the
Hi all,
I have a small Hadoop and HBase cluster with 4 nodes all acting as datanodes
and regionservers, with replication set to 3. I am bulk loading data in
HBase using the importtsv program, writing heavily to one table that
initially had no data in it and only 1 region. I'll call this TableA.
On a related note:
I have tested my patch in our dev cluster for HBASE-3373
It works nicely.
On Thu, Mar 10, 2011 at 10:09 AM, Stack st...@duboce.net wrote:
Nichole:
There is a bug in our load balancer where what we thought was random is
not. It'll be fixed in 0.90.2. See
Hi,
I've got some data in HBase that I'd hate to lose. Yeah, very original. :))
I know I can:
1) make a export/backup of 1 table at a time using
org.apache.hadoop.hbase.mapreduce.Export from HBASE-1684
2) copy 1 table at a time using
Whats in the regionserver logs? Please put up regionserver and
datanode excerpts.
Thanks Jack,
St.Ack
On Thu, Mar 10, 2011 at 10:31 AM, Jack Levin magn...@gmail.com wrote:
All was well, until this happened:
http://pastebin.com/iM1niwrS
and all regionservers went down. Is this the xciever issue?
On Thu, Mar 10, 2011 at 11:33 AM, Otis Gospodnetic
otis_gospodne...@yahoo.com wrote:
1) make a export/backup of 1 table at a time using
org.apache.hadoop.hbase.mapreduce.Export from HBASE-1684
This is actually checked in. See:
./bin/hadoop jar hbase-0.X.X.jar
2) copy 1 table at a time using
Hi,
1) make a export/backup of 1 table at a time using
org.apache.hadoop.hbase.mapreduce.Export from HBASE-1684
This is actually checked in. See:
./bin/hadoop jar hbase-0.X.X.jar
2) copy 1 table at a time using
http://pastebin.com/ZmsyvcVc Here is the regionserver log, they all have
similar stuff,
On Thu, Mar 10, 2011 at 11:34 AM, Stack st...@duboce.net wrote:
Whats in the regionserver logs? Please put up regionserver and
datanode excerpts.
Thanks Jack,
St.Ack
On Thu, Mar 10, 2011 at 10:31 AM,
Looks like a datanode went down. InterruptedException is how Java
interrupts IO in threads; it's similar to the EINTR errno. That
means the actual source of the abort is higher up...
So back to how InterruptedException works... at some point a thread in
the JVM decides that the VM should
On Thu, Mar 10, 2011 at 12:31 PM, Otis Gospodnetic
otis_gospodne...@yahoo.com wrote:
Options 1) and 2) will give you a snapshot on a table at a particular
instance in time. You'll get the state of the row at the time the
MapReduce job crosses that row.
Hm, isn't this contradictory? That
On Thu, Mar 10, 2011 at 3:41 AM, Tatsuya Kawano tatsuya6...@gmail.com wrote:
I suggested him to upgrade his environment to the latest version, so
at this time, he used CDH3b4 (HBase 0.90.1) and performed the same
test procedure. Then now he got a new issue. HMaster was aborted
because it
On Wed, Mar 9, 2011 at 11:01 PM, 茅旭峰 m9s...@gmail.com wrote:
It seems like there are lots of WAL files in .logs and .oldlogs directories.
Is there any parameter to control
the size of those WAL files? Or the frequency at which to check the WAL
files.
Check your regionserver logs. They should
That looks like someone trying to connect to the master without doing
the handshake properly. Do you have old versions of hbase around the
place? Or some other process connecting to the HBase Master?
As for being unresponsive for 100 seconds, what was going on in your cluster?
Any clues in
Everything Gary said.
Something interesting Netflix said this week at the ccevent conference was that
they were able to depreciate Reserved Instance payments as a capital expenditure.
Also, c1.xlarge is one of only three instance types that seem to get their own
physical server for each instance
Hi Stack,
Thanks for checking this issue and filing HBASE-3617. Well, that command was
supposed to make the node crash and shut down. I'll check the detailed procedure
and try to reproduce this issue over the weekend.
This is odd. Communication with the RegionServer was working fine up
until it
I just took a day course on the Amazon cloud, and the instructor mentioned that
every time you spin up a VM it gets a different IP and hostname. If this is true,
how do you keep the configuration files current every time you add a new VM or
power on an existing cluster?
Thanks
-Pete
-Original
For the first time I am using a get to retrieve a record vs. a scan to pull
back a bunch of records. I discovered that I am only seeing 239 Gets per
second. This is causing my import time to go from 90 seconds to over 27
minutes. Any idea of what would be a 'normal' get rate?
Thanks
-Pete
Depends on how well cached you are.
Remember, random gets require disk seeks. 239 gets/sec is 239 * 1-3
seeks/sec (approx. 1-3 store files per get). So that seems reasonable
yes, sorry.
-ryan
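The arithmetic behind that estimate is simple enough to write down (my numbers, not Ryan's: I assume each uncached get costs one seek per store file, and a commodity spindle sustains roughly 100-200 random seeks/sec):

```java
// Back-of-envelope seek budget for random gets.
public class GetRate {

    // Total disk seeks per second implied by a get rate.
    static double seeksPerSec(double getsPerSec, double storeFilesPerGet) {
        return getsPerSec * storeFilesPerGet;
    }

    // How many spindles it takes to sustain that seek rate.
    static long spindlesNeeded(double seeksPerSec, double seeksPerSpindle) {
        return (long) Math.ceil(seeksPerSec / seeksPerSpindle);
    }
}
```

At 3 store files per get, 239 gets/sec is 717 seeks/sec, i.e. on the order of five busy spindles, which is why warming the block cache (or batching gets) changes the picture so dramatically.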
On Thu, Mar 10, 2011 at 3:55 PM, Peter Haidinyak phaidin...@local.com wrote:
For the first time I am
Can you try replacing the jruby jar file in your 0.90.1 install with the one
from 0.20.6? With 0.90, the jruby version had to be downgraded due to some
licensing issues - this has been resolved and will be fixed with 0.90.2
It might help - not sure - so, try that out and ping back if you still
I’m experimenting with the coprocessor functionality in trunk, specifically
region observers. The documentation talks about the ability to chain
coprocessors such that they are called one after another. I noticed in the
source (CoprocessorHost.java) that coprocessors are added to HBase using a
Hi Jesse,
Yes, the intent is definitely to allow chaining of multiple user
coprocessors for the same table. If EnvironmentPriorityComparator is
blocking that, that is definitely a bug and we should get it fixed.
Would you mind opening a JIRA for the issue?
Thanks,
Gary
On Thu, Mar 10, 2011
Hi Peter,
We boot the master first, then boot the slaves after the master's IP address is
known.
Instances are initialized using user-data scripts.
We do substitutions on config details when creating the user-data for the
instances.
So this is sufficient for transient/testing clusters. For
iletken-test-2 died.
J-D
2011/3/10 M.Deniz OKTAR deniz.ok...@gmail.com:
Hi,
Still working on the issue. This is one of the last trials I am doing before
ordering a new cluster.
I was going through the Yahoo benchmark again and HBase became non-responsive
for a long time (about 100 secs)
Here is the stack trace:
11/03/11 08:56:44 WARN zookeeper.ClientCnxn: Exception closing session
0x12ea29ecc580005 to sun.nio.ch.SelectionKeyImpl@1175422
java.io.IOException: Read error rc = -1 java.nio.DirectByteBuffer[pos=0
lim=4 cap=4]
at
After changing the code to
HTable ht = new HTable(config, "T1");
byte[] row = Bytes.toBytes(1);
Get get = new Get(row);
Result result = ht.get(get);
System.out.println(Bytes.toString(result.getValue("F1".getBytes(), "C1".getBytes())));
try {
Hi Suraj,
Thanks, it worked. I replaced the jruby-complete-1.0.3 jar with
jruby-complete-1.2.0.jar and the shell started without any exception.
Friends,
how do I best achieve an intersection of sets of row ids?
Suppose I have two tables with similar row ids;
how can I get the row ids present in one and not in the other?
Do things get better if I have row ids as values in some qualifier / as
the qualifier itself?
I hope the question is not too
What did you change? (It's hard looking at the code to see what line is
different; better to post a diff.)
St.Ack
On Thu, Mar 10, 2011 at 6:09 PM, King JKing beuk...@gmail.com wrote:
After change code to
HTable ht = new HTable(config, T1);
byte[] row = Bytes.toBytes(1);
Get
Done: https://issues.apache.org/jira/browse/HBASE-3624.
On Thu, Mar 10, 2011 at 5:12 PM, Gary Helmling ghelml...@gmail.com wrote:
Hi Jesse,
Yes, the intent is definitely to allow chaining of multiple user
coprocessors for the same table. If EnvironmentPriorityComparator is
blocking that,
That's the weird thing.
The region server is still alive; it just paused for a while, and I don't know
what causes those long pauses. I checked the garbage collector logs; nothing
was taking too long.
I'm suspecting hardware.
--
Deniz
2011/3/11 Jean-Daniel Cryans jdcry...@apache.org
iletken-test-2
You mean like write a map-reduce program that joins the key sets and outputs
what you want?
On Thu, Mar 10, 2011 at 8:08 PM, Vishal Kapoor
vishal.kapoor...@gmail.comwrote:
Friends,
how do I best achieve intersection of sets of row ids
suppose I have two tables with similar row ids
how can I
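The map-reduce join suggested above boils down, per key, to a set intersection/difference. Here is an in-memory sketch of that logic (plain Java sets, my own illustration; a real job would stream sorted keys from both tables through a reducer instead of holding them in memory):

```java
// Set operations a key-join MapReduce job effectively computes.
import java.util.Arrays;
import java.util.Set;
import java.util.TreeSet;

public class RowIdSets {

    static Set<String> setOf(String... ids) {
        return new TreeSet<>(Arrays.asList(ids));
    }

    // Row ids present in both tables.
    static Set<String> intersection(Set<String> a, Set<String> b) {
        Set<String> out = new TreeSet<>(a);
        out.retainAll(b);
        return out;
    }

    // Row ids present in `a` but not in `b`.
    static Set<String> difference(Set<String> a, Set<String> b) {
        Set<String> out = new TreeSet<>(a);
        out.removeAll(b);
        return out;
    }
}
```

In the MapReduce version, each mapper emits (rowId, tableTag) and the reducer sees all tags for a rowId at once, so "in one and not the other" is just checking which tags arrived.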
Hi,
Since HBase has a mechanism to replicate edit logs to another HBase cluster, I
was wondering if people think it would be possible to implement HBase=>Hive
replication (and really make the destination pluggable later on)?
I'm asking because while one can integrate Hive and HBase by creating