On Fri, May 7, 2010 at 12:03 AM, Takayuki Tsunakawa wrote:
> If versioning is not necessary from your requirement, you can ignore
> timestamps (do not have to specify timestamp in API call).
Yes, it's actually recommended not to specify timestamps manually in
API calls, particularly when inserting.
I would argue that the primary reason for versioning has nothing to do with
"rescuing users" or being able to recover data.
To reiterate what others have said, the reason HBase/BigTable is
versioned is the immutable nature of its data (an update is a newer
version on top of the old one).
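The behavior described above can be sketched with a toy model (hypothetical, not the HBase API): a cell never overwrites its value on update; it accumulates timestamped versions, a default read returns only the newest one, and older versions remain retrievable, much like specifying VERSIONS on a get or scan.

```python
# Toy model of a versioned cell. An "update" adds a newer timestamped
# version on top of the old one; nothing is overwritten in place.
import bisect

class VersionedCell:
    def __init__(self):
        self._versions = []  # (timestamp, value) pairs, kept sorted by timestamp

    def put(self, value, timestamp):
        # Inserting with a newer timestamp shadows, but does not replace,
        # the existing versions.
        bisect.insort(self._versions, (timestamp, value))

    def get(self):
        # Default read: newest version only.
        return self._versions[-1][1]

    def get_versions(self, max_versions=3):
        # Like asking for multiple VERSIONS: newest first.
        return [value for _, value in reversed(self._versions)][:max_versions]

cell = VersionedCell()
cell.put("alice", timestamp=100)
cell.put("alicia", timestamp=200)  # update = a newer version; the old one is kept
print(cell.get())            # prints "alicia"
print(cell.get_versions())   # prints ['alicia', 'alice']
```

This also illustrates why letting the server assign timestamps is the safe default: if a client supplies its own timestamps out of order, a "newer" write can land behind an existing version and become invisible to default reads.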
I am curious as to whether the current Hive query support against HBase can
handle your use case (as a way to bypass the export to a relational store)?
-b
On Wed, May 5, 2010 at 12:22 AM, Michelan Arendse wrote:
> I don't know what the row start and end keys are - they're GUID keys (improves
> writes across
Hi,
First of all, thanks to all the HBase contributors for getting 0.20.4 out.
We're planning on upgrading soon, and we're also looking forward to 0.20.5.
Recently we've had a couple of problems where HBase (0.20.3) can't seem to
read a file, and the client spews errors like this:
java.io.IOException: Cannot open filename
/hbase/users/73382377/data/312780071564432169
On Fri, May 7, 2010 at 8:27 PM, James Baldassari wrote:
> java.io.IOException: Cannot open filename
> /hbase/users/73382377/data/312780071564432169
>
This is the regionserver log? Is this deploying the region? It fails?
> Our cluster throughput goes from around 3k requests/second down to 500-10
This could very well be HBASE-2231.
Do you find that region servers occasionally crash after going into GC
pauses?
-Todd
On Fri, May 7, 2010 at 9:02 PM, Stack wrote:
> On Fri, May 7, 2010 at 8:27 PM, James Baldassari
> wrote:
> > java.io.IOException: Cannot open filename
> > /hbase/users/7338
On Sat, May 8, 2010 at 12:02 AM, Stack wrote:
> On Fri, May 7, 2010 at 8:27 PM, James Baldassari
> wrote:
> > java.io.IOException: Cannot open filename
> > /hbase/users/73382377/data/312780071564432169
> >
> This is the regionserver log? Is this deploying the region? It fails?
>
This error is
Thanks, I'll check out HBASE-2231. Prior to this problem occurring, our
cluster had been running for almost 2 weeks with no problems. I'm not sure
about the GC pauses, but I'll look through the logs. I've never noticed
that before, though.
Also, maybe it would help to understand how we're using
If you can grep for '4841840178880951849' as well
as /hbase/users/73382377/data/312780071564432169 across all of your datanode
logs plus your NN, and put that online somewhere, that would be great. If
you can grep with -C 20 to get some context that would help as well.
Grepping for the region in question
OK, these logs are huge, so I'm just going to post the first 1,000 lines
from each for now. Let me know if it would be helpful to have more. The
namenode logs didn't contain either of the strings you were interested in.
A few of the datanode logs had '4841840178880951849':
http://pastebin.com/4M