On Mon, Aug 13, 2012 at 6:10 AM, Gurjeet Singh gurj...@gmail.com wrote:
Thanks Lars!
One final question: is it advisable to issue multiple threads
against a single HTable instance, like so:
HTable table = ...
for (int i = 0; i < 10; i++) {
new ScanThread(table, startRow, endRow,
Thanks a lot!
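For reference, HTable in this era of HBase is documented as not thread-safe, so the usual pattern is one HTable per thread over a shared Configuration (the cluster connection is cached per conf). A minimal sketch; the table name and row ranges are hypothetical stand-ins for the snippet above:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

final Configuration conf = HBaseConfiguration.create(); // shared; connection cached per conf
for (int i = 0; i < 10; i++) {
    final byte[] startRow = Bytes.toBytes("row-" + i);       // hypothetical ranges
    final byte[] endRow   = Bytes.toBytes("row-" + (i + 1));
    new Thread(new Runnable() {
        public void run() {
            try {
                // each thread opens its own HTable; HTable itself is not thread-safe
                HTable table = new HTable(conf, "table1");   // hypothetical table name
                try {
                    ResultScanner scanner = table.getScanner(new Scan(startRow, endRow));
                    for (Result r : scanner) {
                        // process r here
                    }
                    scanner.close();
                } finally {
                    table.close();
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }).start();
}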
On Sun, Aug 12, 2012 at 3:57 PM, Harsh J ha...@cloudera.com wrote:
Bryan,
I believe running with -Djava.net.preferIPv4Stack=true should work just
fine.
I can add a macosx section to the ref guide if the above works for you, Bryan.
St.Ack
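For reference, a sketch of where that flag typically goes, assuming the stock hbase-env.sh:

# hbase-env.sh: force IPv4 so the JVM binds correctly on Mac OS X
export HBASE_OPTS="$HBASE_OPTS -Djava.net.preferIPv4Stack=true"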
On Fri, Aug 10, 2012 at 8:37 AM, J Mohamed Zahoor jmo...@gmail.com wrote:
Look at this post for more about Catalog Janitor
http://blog.zahoor.in/2012/08/hbase-hmaster-architecture/
Nice blog Mohamed (It's 'HBase', not 'Hbase'). I added a link to it in
our reference guide from the Master section.
On Sun, Aug 12, 2012 at 12:52 PM, David Koch ogd...@googlemail.com wrote:
Hi Anil,
Thank you for your advice.
We don't have a native column-typing metadata facility in HBase
currently, so there is nothing for the shell to leverage when decoding
the bytes returned in a scan.
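Until then, decoding happens client-side; a minimal sketch with the stock Bytes utility (the table, row, and column names are hypothetical):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

Configuration conf = HBaseConfiguration.create();
HTable table = new HTable(conf, "table1");            // hypothetical table
Result r = table.get(new Get(Bytes.toBytes("row1"))); // hypothetical row
byte[] raw = r.getValue(Bytes.toBytes("fam"), Bytes.toBytes("qual1"));
int asInt = Bytes.toInt(raw);        // undoes Bytes.toBytes(int)
String asUtf8 = Bytes.toString(raw); // undoes Bytes.toBytes(String)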
Harsh is right. You were looking in the wrong place.
Regards!
Yong
On Sun, Aug 12, 2012 at 1:40 PM, Harsh J ha...@cloudera.com wrote:
Richard,
The property disables major compactions from happening automatically.
However, if you choose to do this, you should ensure you have a cron
job that does the major compactions manually instead.
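A sketch of such a job, driving the shell's major_compact command (the table name and schedule are hypothetical):

# crontab: major-compact 'mytable' every Sunday at 3am
0 3 * * 0  echo "major_compact 'mytable'" | hbase shell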
Anil,
Do you have root cause on the RS failure? I have never heard of one RS
failure causing a whole job to fail.
On Tue, Aug 7, 2012 at 1:59 PM, anil gupta anilgupt...@gmail.com wrote:
Hi HBase Folks,
I ran the bulk loader yesterday night to load data in a table. During the
bulk loading
I encountered a similar problem in my CP. I think the issue is that you are
using YCSB to put a lot of records in a short period of time. By default
YCSB has a big write buffer. In the CP you do not set autoflush, so by
default it is true. Therefore a lot of incoming puts to the CP are waiting
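For reference, the client-side buffering knobs being discussed (the table name and buffer size are illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;

Configuration conf = HBaseConfiguration.create();
HTable table = new HTable(conf, "usertable");  // hypothetical table
table.setAutoFlush(false);                     // buffer puts client-side instead of one RPC per put
table.setWriteBufferSize(12 * 1024 * 1024);    // e.g. 12 MB; flushed when full or on flushCommits()
// ... issue puts ...
table.flushCommits();                          // push any remaining buffered puts
table.close();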
I was thinking that an easier way might even be to just add the conversion
capability at the ruby shell level. Something like the following where you
can give a third qualifier that describes how you want it interpreted.
get|scan 'table1', {COLUMNS => ['fam:qual1:toInt', 'fam:qual2:toUTF8',
Hi,
I'm pretty new to HBase and am currently evaluating it for use in a project I'm
working on.
I use HBase from Cloudera CDH4, which is 0.92.1.
I'm trying to calculate an average via a coprocessor with this code:
Scan scan = new Scan((metricID + "," +
basetime_begin).getBytes(), (metricID
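For what it's worth, 0.92 ships an aggregation coprocessor that can compute an average without custom endpoint code. A sketch, assuming AggregateImplementation has been loaded on the table and the values were written as longs; the table, family, qualifier, and range variables are hypothetical:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.coprocessor.AggregationClient;
import org.apache.hadoop.hbase.client.coprocessor.LongColumnInterpreter;
import org.apache.hadoop.hbase.util.Bytes;

Configuration conf = HBaseConfiguration.create();
String metricID = "metric42";                  // hypothetical
long basetimeBegin = 0L, basetimeEnd = 1000L;  // hypothetical range
Scan scan = new Scan(Bytes.toBytes(metricID + "," + basetimeBegin),
                     Bytes.toBytes(metricID + "," + basetimeEnd));
scan.addColumn(Bytes.toBytes("fam"), Bytes.toBytes("value")); // hypothetical column
AggregationClient aggClient = new AggregationClient(conf);
// avg() fans the computation out to the regions via the coprocessor
double avg = aggClient.avg(Bytes.toBytes("metrics"),          // hypothetical table
                           new LongColumnInterpreter(), scan);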
Hi Guys,
Sorry for not mentioning the version I am currently running. My current
version is HBase 0.92.1 (CDH4), and I am running Hadoop 2.0.0-alpha with YARN for
MR. My original post was for HBase 0.92. Here are some more details of my
current setup:
I am running an 8-slave, 4-admin-node cluster on
Anil,
Do you know what happens when an airplane with too heavy a cargo tries to
take off?
You run out of runway and you crash and burn.
Looking at your post, why are you starting 8 map processes on each slave?
That's tunable and you clearly do not have enough memory in
I've decided to write an end-to-end Installation guide for HBase, which also
includes HDFS, user configuration and tons of other stuff no guide ever
mentions, in a blog post: http://blog.devving.com/hbase-quickstart-guide/
I hope that all the newbies who get a task assignment like I did at work
Hi Mike,
I tried doing that by setting up properties in mapred-site.xml, but YARN
doesn't seem to honor the
mapreduce.tasktracker.map.tasks.maximum property. Here is a reference to a discussion of the same
problem:
Nice initiative.
Regards,
Mohammad Tariq
Hi Mike,
Here is the link to my email on Hadoop list regarding YARN problem:
http://mail-archives.apache.org/mod_mbox/hadoop-common-user/201208.mbox/%3ccaf1+vs8of4vshbg14b7sgzbb_8ty7gc9lw3nm1bm0v+24ck...@mail.gmail.com%3E
Somehow the link to the Cloudera mail in my last email does not seem to work.
Not sure why you're having an issue getting an answer.
Even if you're not a YARN expert, Google is your friend.
See:
Not really a good idea or anything new.
Essentially a full table scan where you're doing a closer inspection on the key
to see if it matches your search regex, before actually fetching the entire row
and returning it.
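That is, something along these lines; a sketch where the regex and names are illustrative:

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.RegexStringComparator;
import org.apache.hadoop.hbase.filter.RowFilter;

// Full scan, but the filter runs server-side and only returns rows whose
// key matches the regex -- still O(table), just less RPC traffic.
Scan scan = new Scan();
scan.setFilter(new RowFilter(CompareFilter.CompareOp.EQUAL,
                             new RegexStringComparator("^user123.*")));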
Secondary indexes are pretty straightforward.
You have your primary key
Hi Mike,
You hit the nail on the head that I need to lower the memory by setting
yarn.nodemanager.resource.memory-mb. Here's the other major bug of YARN you
are talking about: I already tried setting that property to 1500 MB in
yarn-site.xml and setting yarn.app.mapreduce.am.resource.mb to 1000 MB
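For reference, those two settings as they would appear in yarn-site.xml (the values are the ones from this thread, not recommendations):

<!-- yarn-site.xml: cap the memory the NodeManager hands out to containers -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>1500</value>
</property>
<!-- memory for the MapReduce ApplicationMaster container -->
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>1000</value>
</property>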
I am beginning to think that this is a configuration issue on my
cluster. Do the following configuration files seem sane ?
hbase-env.sh https://gist.github.com/3345338
hbase-site.xml https://gist.github.com/3345356
Gurjeet
On Mon, Aug 13, 2012 at 5:30 PM, lars hofhansl
Hi Mike,
I am constrained by the hardware available for the POC cluster. We are waiting for
hardware which we will use for performance testing.
Best Regards,
Anil
On Aug 13, 2012, at 6:59 PM, Michael Segel michael_se...@hotmail.com wrote:
Anil,
I don't know if you can call it a bug if you don't have
Please pardon me while I ramble; this started off as a short response and
is now... lengthy.
I've also seen Megastore-inspired secondary index implementations that
clone the data from the primary table into the secondary table, by
sort order of the attribute that is indexed. In Megastore this was
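A sketch of that clone-style index write (all names are hypothetical; the index row key is the attribute value followed by the primary key, so a scan over the index comes back sorted by the attribute):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

Configuration conf = HBaseConfiguration.create();
HTable indexTable = new HTable(conf, "users_by_email"); // hypothetical index table
String attrValue = "alice@example.com";                 // the indexed attribute (hypothetical)
String primaryKey = "user123";                          // row key in the primary table (hypothetical)
// attribute value first, primary key as tie-breaker => index scans sort by attribute
byte[] indexKey = Bytes.add(Bytes.toBytes(attrValue), Bytes.toBytes(primaryKey));
Put p = new Put(indexKey);
p.add(Bytes.toBytes("d"), Bytes.toBytes("pk"), Bytes.toBytes(primaryKey)); // pointer back to primary row
indexTable.put(p);
indexTable.close();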
Anil,
Same hardware, fewer VMs.