Dear all,
I've installed HBase from the Cloudera repository, using the
karmic-cdh3 distribution, which evidently gives me hbase version
0.89.20100621+17. I'm particularly interested in creating a few
secondary indexes and was initially using the following blog as an
example:
Hi Todd,
Thanks for that. I'm a bit new to HBase at the minute, so I might
refrain from making any changes just yet; however, if I get some spare
time I may compare 0.20 and 0.89 and see what the functional
differences are. Ignoring the tests, the bulk of the issues appear to
be changed
Todd,
It seems that I'm not the only one looking at this issue. ;-)
Secondary indexes are going to be an issue for many people who adopt HBase and
MapReduce.
I'd hope that the keepers of HBase rethink their decision to push contrib out
to Github.
-Mike
From: t...@cloudera.com
Date: Wed,
Dear All,
Once I read an article about HBase; it said that HBase is good for
reading but not for writing.
So, for everyday applications, should we still use an RDBMS and batch-load the
data from the RDBMS into HBase? Or is there another solution?
Regards
Firdaus
Hi,
I've checked the release notes of HDFS 0.21 and saw that two fixes from
hadoop-append are included, two others are not, and there are still more that
have to do with sync.
Is hadoop-append for HBase made obsolete by HDFS 0.21?
Thank you,
Thomas Koch, http://www.koch.ro
Hi All,
I have set up HBase in standalone mode. Both my server and client are
running on the same box.
When I tried to load a large amount of data, the client is hanging after
loading around 32 records.
It looks like it may be a timeout issue. Please find below the snippet from
the logs,
Dear all,
My current HBase/Hadoop architecture has HBase region servers on the
same physical boxes as the HDFS data-nodes. I'm getting an awful lot
of region server crashes. The last thing that happens appears to be a
DroppedSnapshotException, caused by an IOException: could not
complete write
Jamie,
Does your configuration meet the requirements?
http://hbase.apache.org/docs/r0.20.5/api/overview-summary.html#requirements
ulimit and xcievers, if not set, are time bombs that usually go off when
the cluster is under load.
J-D
On Wed, Jul 7, 2010 at 9:11 AM, Jamie Cockrill
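For reference, the transceiver limit J-D mentions is set in hdfs-site.xml on each datanode. The snippet below is a minimal sketch; the value shown matches the one Jamie reports later in the thread, and note the property name keeps Hadoop's historical misspelling:

```xml
<!-- hdfs-site.xml: raise the datanode transceiver limit.
     The property name deliberately keeps the historical misspelling. -->
<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>2047</value>
</property>
```

The ulimit side is checked with `ulimit -n` as the user running the datanode and regionserver processes; the requirements page above covers both.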
Which HBase version are you running? Did you even take a look at the server
logs? (this client log you pasted just informs us that the connection to
ZooKeeper was successful)
J-D
On Wed, Jul 7, 2010 at 8:49 AM, manua agarwal.m...@gmail.com wrote:
Hi All,
I have set up the Hbase in
HBase is currently faster at writing than random reading, but long
scans are faster than writing. Not sure exactly what that unnamed
article is referring to.
Also, about using any other DBMS in conjunction with HBase, I would
simply recommend using the right tool for the right job.
J-D
On Wed,
HBase probably won't support 0.21 at all, since that release is marked
unstable. HBase 0.90 will be on hadoop 0.20-append which has a
different implementation for sync (HDFS-200 instead of HDFS-265).
I personally expect that everything will be tied back together for Hadoop 0.22.
J-D
On Wed, Jul
I have minimal knowledge of banking IT systems, so unless someone else
on this list has done that kind of work in the past (eg integrating
HBase in banking systems) and is willing to share some knowledge,
you'll have to do your own homework and find out whether HBase fits
your use case.
One last thing, a slight oddity of our setup is that although we're on
Hadoop 0.20.2, we were previously on 0.18.something and upgraded. That
went fine and there have been no problems, however some convenience
base-classes that we created for our jobs were based on the old
pre-0.20 API, as such
Hey Jamie,
Using the deprecated classes should be fine - many people use them with
success.
The xceivers thing is certainly worth checking.
The other thing to check is GC tuning. Have you changed heap size or
anything in the hbase configuration, or just left it at defaults?
-Todd
On Wed, Jul
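For anyone following along, heap size and GC settings live in conf/hbase-env.sh. A rough sketch of the kind of change Todd is asking about follows; the values here are illustrative assumptions, not recommendations from this thread:

```shell
# conf/hbase-env.sh: example heap and GC tuning (illustrative values)

# Heap for the HBase daemons, in MB (the default in this era was 1000)
export HBASE_HEAPSIZE=2000

# Use the concurrent collector to shorten stop-the-world pauses, which
# can otherwise expire the ZooKeeper session and kill the regionserver
export HBASE_OPTS="-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70"
```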
Bad news, it looks like my xcievers is set as it should be, it's in
the hdfs-site.xml and looking at the job.xml of one of my jobs in the
job-tracker, it's showing that property as set to 2047. I've cat |
grepped one of the datanode logs and although there were a few in
there, they were from a few
On the subject of GC and heap, I've left those as defaults. I could
look at those if that's the next logical step? Would there be anything
in any of the logs that I should look at?
One thing I have noticed is that it does take an absolute age to log
in to the DN/RS to restart the RS once it's
On Wed, Jul 7, 2010 at 10:32 AM, Jamie Cockrill jamie.cockr...@gmail.com wrote:
On the subject of GC and heap, I've left those as defaults. I could
look at those if that's the next logical step? Would there be anything
in any of the logs that I should look at?
One thing I have noticed is that
On the subject of swapping, I'm re-running one of the jobs to have a
go. All the load is going to one regionserver at the moment (no region
splits have occurred yet) and it's on (via top):
Mem: 8184284k total, ~813k used, ~524000k free, 28000k buffers
(might be inaccurate, can't type at a ms
Hello,
In my current application environment, I need to have two HBase
clusters running in two different racks, to form a fault-tolerant
group to tolerate power failure. Then I have an HBase client, which is
sitting outside of these two clusters, to make invocations to these
two HBase
Passing the hbase.zookeeper.quorum config will do exactly what you
need in 0.89, but I'm not sure that it will work in 0.20
J-D
On Wed, Jul 7, 2010 at 10:46 AM, Jun Li jltz922...@gmail.com wrote:
Hello,
In my current application environment, I need to have two HBase
clusters running in two
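As a concrete sketch of J-D's suggestion, the client can be pointed at one cluster's ZooKeeper ensemble through hbase.zookeeper.quorum in its hbase-site.xml; switching to the other rack's cluster means pointing the same property at that rack's ensemble. Hostnames below are placeholders:

```xml
<!-- client-side hbase-site.xml: pick which cluster to talk to -->
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>zk1.rack-a.example.com,zk2.rack-a.example.com,zk3.rack-a.example.com</value>
</property>
```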
Hi,
I am using Hbase version 0.20.5.
Also, I am running both the server and client on the same box. Please find
below the other log snippets,
Log File : hbase-manu-regionserver-domU-12-31-39-06-62-43.log
Wed Jul 7 13:42:17 EDT 2010 Starting regionserver on domU-12-31-39-06-62-43
ulimit -n
Swappiness at 0 is good, but also don't overcommit your memory!
J-D
On Wed, Jul 7, 2010 at 10:53 AM, Jamie Cockrill
jamie.cockr...@gmail.com wrote:
I think you're right.
Unfortunately the machines are on a separate network to this laptop,
so I'm having to type everything across, apologies
PS, I've now reset my MAX_FILESIZE back to the default (from the 1GB
I raised it to). It caused me to run into a delightful
'YouAreDeadException' which looks very related to the Garbage
collection issues on the Troubleshooting page, as my Zookeeper session
expired.
Thanks
Jamie
On 7 July
You configured your table to use LZO, but it's not on the classpath.
Please read and follow
http://wiki.apache.org/hadoop/UsingLzoCompression
J-D
On Wed, Jul 7, 2010 at 11:58 AM, manua agarwal.m...@gmail.com wrote:
Hi,
I have created a swap space of 1Gb, reduced the heap size to 500Mb and
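For reference, the wiki page J-D links boils down to making the hadoop-lzo jar and its native libraries visible to HBase. A rough sketch in conf/hbase-env.sh follows; the paths are placeholders of my own, not from this thread:

```shell
# conf/hbase-env.sh: add the LZO jar to HBase's classpath (placeholder path)
export HBASE_CLASSPATH=/opt/hadoop-gpl-compression/hadoop-lzo.jar

# The native .so files must also be reachable by the regionserver JVM,
# e.g. copied under $HBASE_HOME/lib/native (see the wiki page for details)
```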
From: Jean-Daniel Cryans jdcry...@apache.org
Also, about using any other DBMS in conjunction with HBase, I would
simply recommend using the right tool for the right job.
This seems like a sensible approach to me.
We are using HBase for the data that needs massive scalability, but are
There is no released version of Hadoop 0.21. For the foreseeable
future HBase releases will depend on either CDHv3 or the ASF append
branch.
-ryan
On Wed, Jul 7, 2010 at 9:38 AM, Jean-Daniel Cryans jdcry...@apache.org wrote:
HBase probably won't support 0.21 at all, since that release is
That said, the API of 0.21 should be pretty close to compatible, so even
though most testing will be happening against the branches Ryan mentioned
below, I think we should be able to trivially work on either those branches
or 0.21 in the 0.90 timeframe.
-Todd
On Wed, Jul 7, 2010 at 1:25 PM, Ryan
FWIW, one could use Cassandra, HBase, MongoDB and MySQL to support a
single product if they are used in a way that makes sense WRT their
features. The downside will obviously be maintaining radically
different systems.
Our experience at StumbleUpon is that our business has been built on
top of
Hey James,
Can you file a JIRA with information about the unhelpful exception message?
I'm on a mission to hunt down common errors with unhelpful exceptions, and
you seem to have discovered one.
Thanks,
Jeff
On Tue, Jul 6, 2010 at 5:39 AM, Jamie Cockrill jamie.cockr...@gmail.com wrote:
Dear
Hi:
I got an undefined method error for columns.to_java_bytes when executing the
get command under the hbase shell in ./bin/hbase-0.20.5. The problem happened
at line 554 of the Hbase.rb Ruby code.
What should I do to fix it?
Any help is appreciated.
Thanks,
Fuesane Cheng