http://nosql.mypopescu.com/post/3163240962/ycbs-benchmark-results-for-cassandra-hbase-mongodb
Not sure what the issue was there with HBase, but would this be much
improved if they tried it again with 0.90?
Thanks,
___
David Engfer
[817.360.4923]
david.engfer@gmail
That paper looked like complete nonsense. The slide on HBase has
misunderstandings about how we work. The data size is minuscule.
We get slower when you add nodes (?).
St.Ack
On Fri, Feb 11, 2011 at 7:01 AM, David Engfer david.eng...@gmail.com wrote:
Which straggler Andrew, the hbase-0.90.1.jar in lib? That should be
harmless, no?
Looks like our mvn lib building could do with a bit of weeding. Let
me file an issue for that.
St.Ack
On Fri, Feb 11, 2011 at 1:15 AM, Andrew Purtell apurt...@apache.org wrote:
Never mind, I read forward
I know Ryan is anti-runtime-check, but since several smart people have been
bitten by this, it seems we could do something simple, like checking whether
there are multiple hbase-X.jars where X differs on the classpath?
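A check along these lines could be as simple as scanning java.class.path for plain hbase-&lt;version&gt;.jar entries with differing versions. This is only a sketch of the idea, not anything in HBase itself; the class and method names here are invented:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class HBaseJarCheck {
    // Matches trailing jar names like hbase-0.90.1.jar; deliberately skips
    // companion jars such as hbase-0.90.1-tests.jar, which are legitimate.
    private static final Pattern HBASE_JAR =
            Pattern.compile("hbase-([0-9][0-9.]*)\\.jar$");

    /** Returns the distinct hbase-X.jar versions found on a classpath string. */
    static List<String> hbaseVersionsOn(String classpath) {
        List<String> versions = new ArrayList<String>();
        for (String entry : classpath.split(File.pathSeparator)) {
            Matcher m = HBASE_JAR.matcher(entry);
            if (m.find() && !versions.contains(m.group(1))) {
                versions.add(m.group(1));
            }
        }
        return versions;
    }

    public static void main(String[] args) {
        List<String> versions = hbaseVersionsOn(System.getProperty("java.class.path"));
        if (versions.size() > 1) {
            System.err.println("WARNING: multiple hbase jars on classpath: " + versions);
        }
    }
}
```

Run once at startup, it would have caught the stray 0.90.0 jar sitting next to 0.90.1 in lib.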
On Fri, Feb 11, 2011 at 7:36 AM, Stack st...@duboce.net wrote:
Which straggler Andrew, the
Thanks for the clarification; didn't think that was true.
On Fri, Feb 11, 2011 at 9:36 AM, Jean-Daniel Cryans jdcry...@apache.org wrote:
Whatever happened, something was way off the way they
These slides were presented at FOSDEM, the conference where I just spoke about
FB HBase. I went to the talk.
There were some glaringly obvious problems that came out during the Q&A.
Someone asked how many regions he had. Didn't know. Are you network, I/O, or CPU
bottlenecked? Didn't know.
Imagine what happens when an HBase user tries to put in some customization
without knowing what lurks in the lib dir?
Our philosophy should encourage people to make modifications to open source
code, not place a potential bomb somewhere.
My two cents.
On Fri, Feb 11, 2011 at 7:42 AM, Todd Lipcon
Also, YCSB at least doesn't check the data that is returned.
That means their numbers for /dev/null would have been even better.
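One common way to make a benchmark read verifiable is to derive each value deterministically from its key, so a read can be checked against what the load phase must have written. This is a hedged sketch of that idea, not YCSB's actual code; the names are invented:

```java
import java.util.Arrays;
import java.util.Random;

public class ReadValidation {
    /** Deterministic payload for a key: seed a PRNG from the key's hash. */
    static byte[] expectedValue(String key, int length) {
        byte[] value = new byte[length];
        new Random(key.hashCode()).nextBytes(value);
        return value;
    }

    /** Check what came back against what the load phase must have written. */
    static boolean validateRead(String key, byte[] returned) {
        return Arrays.equals(returned, expectedValue(key, returned.length));
    }
}
```

With a check like this in the read path, a store that silently dropped or corrupted data could not post good numbers.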
On Fri, Feb 11, 2011 at 7:36 AM, Jean-Daniel Cryans jdcry...@apache.org wrote:
Finally, in research the most important step is validation of the
results. They
No, an earlier version from before that I failed to delete while moving jars
around. So this is a user problem, but I foresee it coming up again and again.
Best regards,
- Andy
--- On Fri, 2/11/11, Stack st...@duboce.net wrote:
From: Stack st...@duboce.net
Subject: Re: [VOTE] HBase
Yes. We need to fix the assembly. It's going to trip folks up. I
don't think it's a sinker on the RC though, especially as we shipped
0.90.0 w/ this same issue. What do you think, boss?
St.Ack
On Fri, Feb 11, 2011 at 9:30 AM, Andrew Purtell apurt...@apache.org wrote:
No an earlier version from
I see:
this.executorService.startExecutorService(ExecutorType.MASTER_SERVER_OPERATIONS,
    conf.getInt("hbase.master.executor.serverops.threads", 3));
this.executorService.startExecutorService(ExecutorType.MASTER_META_SERVER_OPERATIONS,
On Fri, Feb 11, 2011 at 10:59 AM, Ted Yu yuzhih...@gmail.com wrote:
The fix assumed that the user wouldn't change the value of
hbase.master.executor.serverops.threads, which is not in hbase-default.xml.
Interesting.
If users are changing advanced undocumented configs, then they deserve what
they
Right, but the 2.78-hour check [threadWakeFrequency (10,000 ms) x multiplier
(1,000) = 10^7 ms, i.e. ~2.78 hours] comes before a full compaction, and from
the code it looks like the NPE would have blocked any Store that had no
timeRangeTracker from ever getting a major compaction... unless it had been triggered in some
HRegion is the internal implementation of a region inside the
regionserver; you don't get it from a client.
That data is being sent to the master, and it's being published to Ganglia
and the metrics system.
On Fri, Feb 11, 2011 at 1:23 PM, Ted Yu yuzhih...@gmail.com wrote:
HTable can return region
Our QA is going to generate a lot of data using HBase 0.20.6 on cluster A.
In order to keep the application that accesses such data functional, I plan
to migrate the related HBase tables to cluster B before upgrading cluster A to
HBase 0.90.1.
/hbase/table would be copied over to cluster B.
Please
QA can use the export/import tool.
I wonder if there is a faster way of transferring a whole table across
clusters.
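For reference, the export/import route is the MapReduce tool shipped with HBase; roughly like the following, where the table name, output paths, and namenode addresses are all placeholders:

```shell
# On cluster A: dump tableX to a SequenceFile directory in HDFS
hbase org.apache.hadoop.hbase.mapreduce.Export tableX /tmp/export-tableX

# Copy the dump between the two HDFS instances
hadoop distcp hdfs://nnA:8020/tmp/export-tableX hdfs://nnB:8020/tmp/export-tableX

# On cluster B: load it into a pre-created tableX with the same schema
hbase org.apache.hadoop.hbase.mapreduce.Import tableX /tmp/export-tableX
```

It's slower than a raw file copy because every cell is rewritten through the client API, but it works across versions.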
On Fri, Feb 11, 2011 at 2:11 PM, Ted Yu yuzhih...@gmail.com wrote:
Our QA is going to generate a lot of data using hbase 0.20.6 on cluster A
In order to keep the application that
On Fri, Feb 11, 2011 at 2:32 PM, Ted Yu yuzhih...@gmail.com wrote:
QA can use export/import tool
I wonder if there is a faster way of transferring whole table across
clusters.
You saw the recent Mozilla posting by Xavier about wrangling
cross-datacenter copying? If not, check it out.
Are your
Are you referring to https://issues.apache.org/jira/browse/HBASE-3451 ?
Our clusters are in the same DC but not on the same HDFS.
Our issue is that the table to be transferred doesn't exist on the destination
cluster.
Thanks
On Fri, Feb 11, 2011 at 2:41 PM, Stack st...@duboce.net wrote:
On Fri, Feb 11,
The way we've moved data from cluster A to cluster B is:
1. Turn on replication from A -> B
2. Run a MapReduce export on A and a MapReduce import on B
The timestamp functionality of HBase makes it so the import to B won't overwrite any of the newer
replicated data. Once the import is complete we
I don't think replication is in 0.20.6.
Can we do this:
create the table with the same schema on cluster B, then
distcp /hbase/tableX from cluster A to B?
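A raw distcp of the table directory can work, but only if the files are quiescent and cluster B's .META. is rebuilt afterwards. A sketch under those assumptions (hostnames and ports are invented; add_table.rb was the META-rebuild script shipped in bin/ in this era):

```shell
# Disable tableX (or stop cluster A) so HFiles and logs stop changing,
# then copy the table directory between HDFS instances
hadoop distcp hdfs://nnA:8020/hbase/tableX hdfs://nnB:8020/hbase/tableX

# On cluster B: the copied regions are not in .META. yet;
# rebuild the META rows from the files on disk
hbase org.jruby.Main bin/add_table.rb /hbase/tableX
```

Much faster than export/import since the HFiles move as-is, at the cost of taking the table offline for the copy.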
On Fri, Feb 11, 2011 at 3:27 PM, Jeff Whiting je...@qualtrics.com wrote:
The way we've moved data from cluster A to cluster B is:
1. Turn on
Seems reasonable to stay -1 given HBASE-3524.
This weekend I'm rolling RPMs of 0.90.1rc0 + ... a few patches (including 3524)
... for deployment to preproduction staging. Depending how that goes we may
have jiras and patches for you next week.
Best regards,
- Andy
From: Stack
I am generally +1, but we'll need another RC to address HBASE-3524.
Here is some of my other report of running this:
Been running a variant of this found here:
https://github.com/stumbleupon/hbase/tree/su_prod_90
Running in dev here at SU now.
Also been testing that against our Hadoop CDH3b2
Ryan:
Can you share how you built the patched CDH3b2?
When I used 'ant jar', I got build/hadoop-core-0.20.2-CDH3b2-SNAPSHOT.jar,
which was much larger than the official hadoop-core-0.20.2+320.jar.
Hadoop had trouble starting if I used hadoop-core-0.20.2-CDH3b2-SNAPSHOT.jar
in place of the official jar.
I put up the patch I used, then changed the version to 0.20.2-322
and just did ant jar. I crippled the forrest crap in build.xml... I
didn't check the file size of the resulting jar though.
-ryan
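Ryan's steps, as I read them, amount to something like the following in a CDH3b2 source checkout. The patch file name and the exact build.xml edits are assumptions; he didn't name them:

```shell
# Apply the patch in question (file name hypothetical)
patch -p0 < the-patch.diff

# Edit build.xml by hand: set the version property to 0.20.2-322
# and gut the forrest documentation targets, then build just the jar
ant jar

# The result lands under build/
ls build/hadoop-core-*.jar
```

Skipping the forrest doc targets explains why a plain 'ant jar' without that edit produced a much larger, differently named jar.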
On Fri, Feb 11, 2011 at 10:08 PM, Ted Yu yuzhih...@gmail.com wrote:
Ryan:
Can you share how you
my jar looks like:
-rw-r--r-- 1 hadoop hadoop 2861459 2011-02-09 16:34 hadoop-core-0.20.2+322.jar
-ryan
On Fri, Feb 11, 2011 at 10:29 PM, Ryan Rawson ryano...@gmail.com wrote:
I put up the patch I used, I then changed the version to 0.20.2-322
and just did ant jar. I crippled the forrest
Is it possible for you to share the hadoop-core-0.20.2+320.jar that you
built ?
Thanks
On Fri, Feb 11, 2011 at 10:29 PM, Ryan Rawson ryano...@gmail.com wrote:
I put up the patch I used, I then changed the version to 0.20.2-322
and just did ant jar. I crippled the forrest crap in build.xml...
I call it 0.20.2-322 and it's at http://people.apache.org/~rawson/repo/ (m2 repo);
for just the jar, you can find it there.
On Fri, Feb 11, 2011 at 10:35 PM, Ted Yu yuzhih...@gmail.com wrote:
Is it possible for you to share the hadoop-core-0.20.2+320.jar that you
built ?
Thanks
On Fri, Feb
Oh right, the groupId is com.cloudera not org.apache, so the other dir...
On Fri, Feb 11, 2011 at 10:41 PM, Ted Yu yuzhih...@gmail.com wrote:
I don't see it under
http://people.apache.org/~rawson/repo/org/apache/hadoop/hadoop-core/
Should I look somewhere else ?
On Fri, Feb 11, 2011 at 10:37
I am going to patch CDH3B3 with 347 today/tomorrow and try it.
We plan to test this more
and roll it into our production environment.
Likewise.
Best regards,
- Andy
Problems worthy of attack prove their worth by hitting back.
- Piet Hein (via Tom White)
--- On Fri, 2/11/11, Ryan