Thanks once more. It works fine now.
2010/9/29 Jean-Daniel Cryans jdcry...@apache.org:
The fix is here: http://pastebin.com/zuL23e0U
We're going to do a push to github later today, along with other
patches that require more testing.
J-D
On Wed, Sep 29, 2010 at 10:54 AM, Andrey Stepachev
Hi there! I wonder if there is a reliable tutorial on how to compile HBase in
Eclipse.
I would be grateful if anyone could help me.
Thanks in advance.
Sorry for that :)
I still haven't pushed the new version to github, as I'm still testing a
few unrelated improvements that are coming in (nothing fancy, as it's
already in Apache's SVN).
J-D
On Thu, Sep 30, 2010 at 12:01 AM, Andrey Stepachev oct...@gmail.com wrote:
Thanks once more. It works fine now.
After running stargate with no issues for a month or two, I encountered some
problems today, and can't quite figure it out. Perhaps someone could give me
advice.
I start up stargate
$ ./hbase-daemon.sh start rest --port=8080
starting rest, logging to
OK. I applied the patch to our internal git. 10 hours of an MR job and it still looks fine.
2010/9/30 Jean-Daniel Cryans jdcry...@apache.org:
Sorry for that :)
I still haven't pushed the new version to github, as I'm still testing a
few unrelated improvements that are coming in (nothing fancy, as it's
already
I can't find any documentation which says you shouldn't write to the same HBase
table you're scanning. But it doesn't seem to work... I have a mapper
(subclass of TableMapper) which scans a table, and for each row encountered
during the scan, it updates a column of the row, writing it back to
Are at least the export file formats compatible?
On Thu, Sep 30, 2010 at 11:28 AM, Jean-Daniel Cryans jdcry...@apache.org wrote:
There's no migration.
J-D
On Thu, Sep 30, 2010 at 11:24 AM, Dmitriy Lyubimov dlie...@gmail.com
wrote:
Hi,
I tried to find the info on data migration from 0.20
Sorry, no migration as in there's nothing special to do... it just
works. And like Ryan said, no coming back.
J-D
On Thu, Sep 30, 2010 at 11:30 AM, Dmitriy Lyubimov dlie...@gmail.com wrote:
Are at least the export file formats compatible?
On Thu, Sep 30, 2010 at 11:28 AM, Jean-Daniel Cryans
Thank you, Ryan, Jean-Daniel.
That's extremely nice. I don't care about going back so much.
One question though -- in the case of going back -- are the export routine's
formats compatible, or would I have to pump the data through a format of my own if I
want to re-flush it back?
Thanks.
-Dmitriy
On Thu,
HTML directory listing is not Stargate output. Got something else running on
port 8080?
Best regards,
- Andy
--- On Thu, 9/30/10, mike anderson saidthero...@gmail.com wrote:
From: mike anderson saidthero...@gmail.com
Subject: stargate troubles
To: hbase-u...@hadoop.apache.org
Date:
I'm inserting several million rows into a table that has ~50 columns, 31 of
which are days of the month and are often null. For the null ones I issue a
delete instead of a put to make sure any previous data in that column gets
deleted. I'm inserting 1000 rows at a time, so the behavior is to put
What JVM version are you using? A read-write lock should not be able to
deadlock on itself like that. There have been known JVM bugs involving locks,
alas...
On Sep 30, 2010 12:09 PM, Matt Corgan mcor...@hotpads.com wrote:
I'm inserting several million rows into a table that has ~50 columns, 31 of
RWLock can indeed deadlock with itself - it doesn't support lock
upgrade from read to write.
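Todd's point can be demonstrated with plain `java.util.concurrent`: a thread holding the read lock of a `ReentrantReadWriteLock` can never acquire the write lock (no read-to-write upgrade), while the reverse downgrade is allowed. A minimal, self-contained sketch:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockUpgrade {
    public static void main(String[] args) {
        ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

        rw.readLock().lock();
        // Upgrade attempt: acquiring the write lock while holding the read
        // lock cannot succeed; a blocking lock() here would hang forever.
        boolean upgraded = rw.writeLock().tryLock();
        System.out.println("upgraded = " + upgraded); // prints: upgraded = false
        rw.readLock().unlock();

        // Downgrade (write -> read) IS supported:
        rw.writeLock().lock();
        rw.readLock().lock();
        rw.writeLock().unlock();
        System.out.println("downgrade ok");
    }
}
```

So a thread that does a read-locked operation and then, on the same code path, needs the write lock will self-deadlock unless it releases the read lock first.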
-Todd
On Thu, Sep 30, 2010 at 12:14 PM, Ryan Rawson ryano...@gmail.com wrote:
What JVM version are you using? A read-write lock should not be able to
deadlock on itself like that. There have been known
Sorry - that was the latest release in the downloads section of
hbase.apache.org. I'll upgrade right away.
We're on 1.6.0_17. I've seen warnings about u18 having problems. Do you
recommend u20?
On Thu, Sep 30, 2010 at 4:20 PM, Ryan Rawson ryano...@gmail.com wrote:
Indeed, but we never do
Actually, I first encountered the problem when I was using HBase standalone,
i.e. outside of map-reduce. I use an HTable client to simultaneously scan and
update a table. While iterating the scan, I do a put() of a single column back
to the current row returned by the scanner.
What I see is
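The standalone pattern just described (iterate a scan with an HTable client and put a single column back to the current row) might look roughly like this against the 0.20-era client API; the table, family, and qualifier names are illustrative, not from the original message:

```java
// Hedged sketch: scan a table and write one column back to each row as the
// scanner returns it. Assumes a running cluster and a table "mytable" with
// family "cf".
HTable table = new HTable(new HBaseConfiguration(), "mytable");
Scan scan = new Scan();
ResultScanner scanner = table.getScanner(scan);
try {
    for (Result row : scanner) {
        Put put = new Put(row.getRow());
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("status"), Bytes.toBytes("seen"));
        table.put(put);  // buffered client-side unless autoFlush is enabled
    }
} finally {
    scanner.close();
}
```

As discussed later in the thread, the puts may not be visible to subsequent reads until the client's write buffer is flushed.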
Yes, that should work. What version of HBase are you using? I would
appreciate a test case :-)
-ryan
On Thu, Sep 30, 2010 at 2:47 PM, Curt Allred c...@mediosystems.com wrote:
Actually, I first encountered the problem when I was using HBase standalone,
i.e. outside of map-reduce. I use an
version 0.20.3.
I'll try to reduce my code to a test case I can give you.
Thanks!
-Original Message-
From: Ryan Rawson [mailto:ryano...@gmail.com]
Sent: Thursday, September 30, 2010 2:53 PM
To: user@hbase.apache.org
Subject: Re: Is it legal to write to the same HBase table you're
Can you also try 0.20.6 and see if you can repro on that? There have
been a lot of changes between those versions that potentially affect
this issue.
-ryan
On Thu, Sep 30, 2010 at 2:58 PM, Curt Allred c...@mediosystems.com wrote:
version 0.20.3.
I'll try to reduce my code to a test case I can
Right now, most of our boxes have 3 disks in them. We take a small
partition on each of those and RAID-stripe them together to use as the
OS partition, then allocate the rest of the disks as JBOD for HDFS storage.
We are building out a new cluster and I'm wondering if there are any
better
Sorry to jump in on the tail end of this...
What sort of buffer size do you have on your client?
What it sounds like is that you're doing a put() but you don't see the row in
the table until either the client-side buffer is full or you've flushed the
buffer, which then writes the records to
I didn't try changing the write buffer size from the default. Looks like the
default is 2MB. I'll try setting it to zero. Thanks for the tip.
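The client-side buffering being discussed could be controlled roughly like this with the 0.20-era HTable API; the configuration and table name are placeholders:

```java
// Hedged sketch: controlling when Puts actually reach the servers.
HTable table = new HTable(new HBaseConfiguration(), "mytable");

// Option 1: send every Put immediately (no client-side buffering).
table.setAutoFlush(true);

// Option 2: buffer Puts client-side and flush explicitly.
table.setAutoFlush(false);
table.setWriteBufferSize(2 * 1024 * 1024);  // 2MB is the default
// ... issue table.put(...) calls here; they accumulate in the buffer ...
table.flushCommits();  // push all buffered Puts to the region servers now
```

With autoFlush off, rows only become visible to readers once the buffer fills or `flushCommits()` is called, which matches the symptom described above.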
-Original Message-
From: Michael Segel [mailto:michael_se...@hotmail.com]
Sent: Thursday, September 30, 2010 4:19 PM
To: user@hbase.apache.org
Hi
We have a cluster up and running in production. Firstly, thanks to J-D and the group
for all the initial input/help.
I'm trying to get a handle on DFS disk usage. All I'm storing in HDFS is HBase
records.
If I do a hadoop du on the /hbase_data dir, I see 1 gig so far, whereas DFS usage via
the web