On Thu, May 12, 2011 at 7:37 AM, Chris Bohme ch...@pinkmatter.com wrote:
When manually browsing to the recovered.edits folder in HDFS and opening
them with the HFile tool, an error is shown: Trailer header is wrong
They are not HFiles, so yes, you'll see that (they are straight
SequenceFiles, IIRC).
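If you want to peek inside one of those files, something like the following
sketch should work (assuming the Hadoop 0.20-era SequenceFile.Reader API; the
path argument is whatever file you picked under recovered.edits/):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;

public class DumpRecoveredEdits {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // Path to one file under a region's recovered.edits/ directory
    Path p = new Path(args[0]);
    SequenceFile.Reader reader = new SequenceFile.Reader(fs, p, conf);
    try {
      // Confirms these are SequenceFiles, not HFiles
      System.out.println("key class:   " + reader.getKeyClassName());
      System.out.println("value class: " + reader.getValueClassName());
    } finally {
      reader.close();
    }
  }
}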
Hi All,
I started using HBase a while ago and I am working on an open source
project which has some old jars from Hadoop and HBase.
Now I am trying to update it to the latest Hadoop and HBase versions and I
am having trouble doing it.
Notably, I have a jar *hbase-0.20.3-indexed.jar* in the
That contrib was moved to github, see
https://github.com/hbase-trx/hbase-transactional-tableindexed
J-D
On Thu, May 12, 2011 at 8:28 AM, Praveen Bathala pbatha...@gmail.com wrote:
Hi All,
I started using HBase a while ago and I am working on an open source
project which has some old jars
Seems the master had lots of problems talking to that node... can you
repro? If you jstack, are all the handlers filled?
J-D
2011/5/12 Gaojinchao gaojinc...@huawei.com:
Thank you for the reply.
It seems the master had some problem, but I am not sure what's up.
I am not familiar with RPC and need
I got the contrib from git, but I don't see any classes related to idx in
there.
- Prvn
On Thu, May 12, 2011 at 12:59 PM, Jean-Daniel Cryans jdcry...@apache.orgwrote:
That contrib was moved to github, see
https://github.com/hbase-trx/hbase-transactional-tableindexed
J-D
On Thu, May 12, 2011 at
Hi Jean,
We have upgraded to branch-0.20-append working with HBase 0.20.6, but it
looks like we are still hitting the same problem. Today I found that we
started to get tons of these issues when a Hadoop balance started. I am
wondering whether Hadoop balancing of the data files will impact the meta info
Hi,
Is it recommended to use blades and a SAN for Hadoop/HBase cluster nodes, or
will it be more performant to use separate server machines with dedicated
CPU/hard disks on each of them?
Could using a SAN cause a bottleneck or degrade performance for Hadoop
nodes (using the same SAN for all nodes)?
Thanks for your help. We are implementing our own secondary index table to
get rid of the scan and replace those calls with Get.
One common trend that we are following, to ensure the frontend web
application is performant as per our expectations, is to always try and use
Gets from the UI instead
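For illustration, a rough sketch of the pattern we mean, with made-up table,
family, and qualifier names; the index row's cell stores the main table's row
key, so one Get replaces a Scan:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class SecondaryIndexLookup {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable indexTable = new HTable(conf, "my_index"); // hypothetical names
    HTable mainTable = new HTable(conf, "my_main");

    // The index row key is the secondary value; its single cell
    // stores the row key of the matching row in the main table.
    Result idx = indexTable.get(new Get(Bytes.toBytes("user@example.com")));
    byte[] mainRow = idx.getValue(Bytes.toBytes("ref"), Bytes.toBytes("row"));

    // One point Get on the main table instead of a full Scan.
    Result row = mainTable.get(new Get(mainRow));
    System.out.println(row);
  }
}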
Hi folks, the wiki Troubleshooting page is now obsolete. It's accessible via
the obsolete pages link.
Stack just updated the website with the changes to the Troubleshooting section.
Enjoy!
-Original Message-
From: Doug Meil [mailto:doug.m...@explorysmedical.com]
Sent: Friday, May
Hey Guys,
Not sure if this functionality is available or not; if it's not, consider this
a feature request :).
The main summary is that rows can contain massive amounts of data, so we can
narrow selection by family. However, if the family is large enough, is there a
way to grab parts of the
Don't forget that a Get is just a one-row Scan; they share the same code
path internally. The only difference, of course, is that a Get just
returns that one row and is therefore fairly fast (unless your row is
huge, think hundreds of MBs).
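To illustrate (table and row key below are made up), the two forms return the
same row; the only wrinkle is that a Scan's stop row is exclusive, so a zero
byte is appended to bound it to exactly one row:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class GetVsScan {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable"); // hypothetical
    byte[] row = Bytes.toBytes("row-1");

    // The Get form:
    Result viaGet = table.get(new Get(row));

    // The equivalent one-row Scan (stop row is exclusive):
    Scan scan = new Scan(row, Bytes.add(row, new byte[] { 0 }));
    ResultScanner scanner = table.getScanner(scan);
    Result viaScan = scanner.next();
    scanner.close();

    System.out.println(viaGet);
    System.out.println(viaScan);
  }
}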
-ryan
On Thu, May 12, 2011 at 1:31 PM, Himanish Kushary
Dear all,
I have sent quite a few questions about the issue we have hit these days,
which has been killing my team.
We are now using HBase 0.20.6 with Hadoop branch-0.20-append.
The region server gets tons of logs like the following:
2011-05-12 12:11:47,349 WARN org.apache.hadoop.hdfs.DFSClient:
Thank you Stack! I will try applying it to my code base and see if it works.
Thanks again
Vidhya
On 5/12/11 11:02 AM, Stack st...@duboce.net wrote:
The issue says that it was applied to the branch for 0.90.2. That's
a misstatement. The patch was not applied. Will apply to the branch
now.
In general, Hadoop applications will perform much better with dedicated
local disks (don't use RAID for data drives, either).
On Thu, May 12, 2011 at 1:42 PM, sean barden sbar...@gmail.com wrote:
You'll get the best performance out of dedicated hardware.
Sean
On Thu, May 12, 2011 at 3:25
If I understand what you need, there is the ColumnPaginationFilter that does
exactly what you mention.
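For example, something along these lines should page through a wide row, 100
columns at a time (the table, family, and offset below are made up):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.filter.ColumnPaginationFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class ColumnPagination {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable"); // hypothetical
    Get get = new Get(Bytes.toBytes("big-row"));
    get.addFamily(Bytes.toBytes("f"));
    // Return at most 100 columns, skipping the first 200.
    get.setFilter(new ColumnPaginationFilter(100, 200));
    Result page = table.get(get);
    System.out.println(page.size() + " columns in this page");
  }
}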
From: m...@imageshack.net
Subject: Pagination through families / columns?
Date: Thu, 12 May 2011 13:49:16 -0700
To: user@hbase.apache.org
Hey Guys,
Not sure if this functionality is
Pardon my being slow, but I think I now understand what you are getting at. I
took a look at a heap dump on one of our production servers, which is carrying
10k regions. I see retention of an HServerLoad per online region. The count
of HServerLoad/HRegionLoad instances retained can be regions
Maybe I didn't describe the issue clearly (I hate English ^_^). Yes, we need it
on the branch. I don't have a patch yet, but if needed I can try my best to fix
this issue.
-Original Message-
From: Stack [mailto:saint@gmail.com]
Sent: May 13, 2011 8:46
To: user@hbase.apache.org
Cc: user@hbase.apache.org;
Have a go at fixing it. If you do, there is some hope it will make it into
the code base soon.
My English isn't too good either, but it's better than my Chinese.
Stack
On May 12, 2011, at 18:04, Jack Zhang(jian) jack.zhangj...@huawei.com wrote:
Maybe I didn't describe the issue clearly (I
Hi,
Using hbase-0.20.6.
A MapReduce job started failing in the map phase (using an HBase table as input
for the mapper); it ran fine for a week or so, starting with empty tables.
Task tracker log:
Task attempt_201105121141_0002_m_000452_0 failed to report status for 600
seconds. Killing
Region
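(If the maps are legitimately slow rather than hung, the usual workaround is to
report progress from the mapper so the TaskTracker doesn't kill the attempt; a
minimal sketch below, assuming the TableMapper API — the class and output types
are illustrative, not our actual job.)

import java.io.IOException;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.Text;

public class SlowRowMapper extends TableMapper<ImmutableBytesWritable, Text> {
  @Override
  protected void map(ImmutableBytesWritable row, Result values, Context context)
      throws IOException, InterruptedException {
    // ... slow per-row work ...
    // Heartbeat so the TaskTracker doesn't kill the attempt after
    // mapred.task.timeout (600s by default).
    context.progress();
  }
}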
OK, I will try to do it.
Jian Zhang(Jack)
-Original Message-
From: Stack [mailto:saint@gmail.com]
Sent: May 13, 2011 9:48
To: user@hbase.apache.org
Cc: user@hbase.apache.org
Subject: Re: Re: Hmaster is OutOfMemory
Have a go at fixing it. If you do, there is some hope it will make it into
the code base
You'd have to hack it up into the Thrift server; shouldn't be so bad,
but there's no such doc.
J-D
On Thu, May 12, 2011 at 8:26 PM, Matthew Ward m...@imageshack.net wrote:
Oh interesting, is there a way to access it via Thrift (from PHP)? Are there
some docs I can read up on?
Thanks!
The error message in the datanode log is pretty obvious about the
config if you really hit it. The error message you pasted from the DN
doesn't look complete either.
J-D
On Wed, May 11, 2011 at 7:30 AM, Stanley Xu wenhao...@gmail.com wrote:
Dear all,
We are using hadoop 0.20.2 with a couple
Your ZooKeeper stuff is fine; as you can see, it's at INFO level.
What's fatal, though, is that the master can't connect to your namenode (on
port 9000).
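A quick way to check from the HBase side is a sketch like this (illustrative
only): resolve hbase.rootdir and touch the filesystem; if hbase.rootdir or
fs.default.name points at a host:port where no NameNode is listening, it fails
the same way:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CheckRootdir {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // hbase.rootdir must point at the host:port the NameNode actually
    // listens on, e.g. hdfs://localhost:9000/hbase (assumes it is set).
    Path root = new Path(conf.get("hbase.rootdir"));
    FileSystem fs = root.getFileSystem(conf);
    System.out.println("Connected to " + fs.getUri()
        + ", rootdir exists: " + fs.exists(root));
  }
}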
J-D
2011/5/11 Άρμεν Αρσακιάν arsak...@ee.duth.gr:
I installed HBase on my Mac OS 10.6 machine, and when I try to run hbase
master start I get the