Was thinking: does HBase have to use hostnames? What if you are running this in
a firewalled environment that does not have DNS access?
On 2 Apr 2012, at 06:31, Ben Cuthbert wrote:
All, when I try to run in distributed mode with two servers I get this error
when starting the slave node:
two nodes
node1 =
Ben,
Please see:
http://hbase.apache.org/book/os.html#dns
- Dave
On Mon, Apr 2, 2012 at 5:25 AM, Ben Cuthbert bencuthb...@ymail.com wrote:
Hi Dave
Thanks. So what happens when you run in a network that does not have DNS across
firewalls, e.g. replicating from a primary data center to a backup data center?
On 2 Apr 2012, at 14:33, Dave Wang wrote:
Follow the instructions here:
http://blog.lars-francke.de/2010/08/16/performance-testing-hbase-using-ycsb/
The load portion will load a thousand rows into HBase for testing.
On Sun, Apr 1, 2012 at 8:12 PM, Mahdi Negahi negahi.ma...@hotmail.com wrote:
Thanks for your reply,
but I install and
See the link to the BigTable paper here...
http://hbase.apache.org/book.html#other.info
... and there is other reading material and videos too.
On 4/1/12 11:30 PM, Mahdi Negahi negahi.ma...@hotmail.com wrote:
Thanks, but all databases have good sample datasets, like the Cinema database in
Neo4j, etc.,
but
Also, see this chapter.
http://hbase.apache.org/book.html#schema
On 4/2/12 11:40 AM, Doug Meil doug.m...@explorysmedical.com wrote:
When I try to count the rows I get this output after a while:
hbase(main):001:0> list
TABLE
tsdb
tsdb-uid
2 row(s) in 0.7600 seconds
hbase(main):002:0> count 'tsdb-uid'
ERROR: org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to
find region for tsdb-uid,,99 after 7
Can you run 'bin/hbase hbck' and see if there are any inconsistencies?
Thanks
On Mon, Apr 2, 2012 at 7:07 AM, Toni Moreno toni.mor...@gmail.com wrote:
The link I referred you to states that forward and reverse resolution is
required for at least 0.92.x. If you do not have DNS, then perhaps you
can hardcode the resolutions in /etc/hosts or similar.
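For example, entries like these in /etc/hosts on every node (the hostnames and addresses below are made up) give the cluster both forward and reverse resolution without a DNS server:

```
# /etc/hosts -- example addresses and names only
10.0.0.1   hbase-master.example.internal   hbase-master
10.0.0.2   hbase-rs1.example.internal      hbase-rs1
10.0.0.3   hbase-rs2.example.internal      hbase-rs2
```

Every node needs the same entries, and the names have to match what each server reports as its hostname.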
- Dave
On Mon, Apr 2, 2012 at 7:19 AM, Ben Cuthbert bencuthb...@ymail.com wrote:
Hi
+common-u...@hadoop.apache.org
Hi Harsh,
Thanks for the information.
Is there any way to differentiate between a client-side property and a
server-side property? Or is there a document listing whether each property is
server- or client-side? Many times I have to speculate about this and try out
test runs.
Please note though that YCSB 0.1.4 is now fully mavenized and uses the POM to
pull in the various dependencies, as well as supplying a script that you can
use to avoid the lengthy java command line. So the build steps and invocation
have changed a bit, but the overall idea stays the same.
Lars
Dear Doug,
I think you didn't read my question :) I know what HBase is; I have worked with
it successfully, designed my tables, and inserted a bit of data. But my
question is: I need a sample database for testing for my Master's thesis, one
that contains more than 1000 rows.
Regards
Dear Lars,
Do you have any updated guideline? I'm not professional in Java and Maven.
Regards, Mahdi
Sorry for jumping on this thread late, but I have seen very similar
behavior in our cluster with hadoop 0.23.2 (CDH4B2 snapshot) and hbase
0.23.1. We have a small, 7 node cluster (48GB/16Core/6x10Kdisk/GigE
network) with about 500M rows/4Tb of data. The random read performance
is excellent, but,
Okay. I guess we will look into putting in the host entries.
On 2 Apr 2012, at 17:19, Dave Wang wrote:
2012/4/2 Alok Singh a...@urbanairship.com:
I heard yesterday that the first conference dedicated to HBase will be held
in the next few days. Where can I find all the information about the event?
regards and best wishes
--
Marcos Luis Ortíz Valmaseda (@marcosluis2186)
Data Engineer at UCI
http://marcosluis2186.posterous.com
10mo. ANIVERSARIO
http://www.hbasecon.com/
On Apr 2, 2012, at 10:16 PM, Marcos Ortiz wrote:
We are frequently seeing flush storms like the following:
2012-03-29 07:44:32,743 INFO
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter: Using
syncFs -- HDFS-200
2012-03-29 07:44:32,749 INFO org.apache.hadoop.hbase.regionserver.wal.HLog:
Roll
HBaseCon is also on the home page...
http://hbase.apache.org/
On 4/2/12 3:18 PM, Lars George lars.geo...@gmail.com wrote:
Hello,
I am using HBase Thrift for my app. I have made a table for patients which has
a column family called info that contains his/her general info.
I want to make a method to search for a patient by name and date of birth.
I didn't find any method for searching; all require the
On Mon, Apr 2, 2012 at 12:27 PM, Miles Spielberg mi...@box.com wrote:
Our region servers are each hosting ~270 regions. Our writes are extremely
well distributed (our HBase keys are output from a hash function) and small
(~100s of bytes). I believe that the writes are being so well distributed
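As an aside, the "keys output from a hash function" scheme can be sketched as below; MD5 and the hex encoding are assumptions for illustration, not necessarily what Miles's setup uses:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch: hash the natural key so rowkeys spread uniformly across
// regions instead of hotspotting a single region server.
public class HashedKey {
    public static String rowKey(String naturalKey) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] digest = md.digest(naturalKey.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new RuntimeException(e); // MD5 ships with every JDK
        }
    }

    public static void main(String[] args) {
        // Two adjacent natural keys hash to very different rowkeys.
        System.out.println(rowKey("patient-0041"));
        System.out.println(rowKey("patient-0042"));
    }
}
```

The trade-off, of course, is that hashed keys give up meaningful range scans.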
HBasene?
https://github.com/akkumar/hbasene
On 04/02/2012 04:46 PM, Bryan Beaudreault wrote:
I imagine you don't want this search to have to scan the entire patients
table to find someone by their name, assuming there could be many many
patients. It may be a better idea to create a search
Thanks Bryan, I will try it; it sounds good.
But another question: how could I make a table with 2 row keys, name and date?
Sent from my iPad
On Apr 2, 2012, at 10:47 PM, Bryan Beaudreault bbeaudrea...@hubspot.com
wrote:
You could use a prefix on the rowkey. I imagine there are multiple
different field types, so just have an enum or something that enumerates
the different field types you have, such as name, date, email, etc. Each
value would have a 1 char identifier, so then your search table would have
rowkeys
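To sketch what such search-table rowkeys could look like (the one-char tags and the patient-id suffix here are assumptions for illustration, not anything agreed in this thread):

```java
// Sketch of an index/search-table key layout: one row per indexed
// field value, "<tag>:<value>|<patient id>", so a prefix scan on
// "n:john" finds every patient whose name starts with "john".
public class SearchKeys {
    // Hypothetical one-char tags for the indexed field types.
    public static final char NAME = 'n';
    public static final char DOB  = 'd';

    // Rowkey stored in the search table, pointing back at the patient row.
    public static String indexKey(char tag, String value, String patientId) {
        return tag + ":" + value + "|" + patientId;
    }

    // Prefix to hand to a scan when looking up a field value.
    public static String scanPrefix(char tag, String value) {
        return tag + ":" + value;
    }

    public static void main(String[] args) {
        System.out.println(indexKey(NAME, "john smith", "p0042")); // n:john smith|p0042
        System.out.println(scanPrefix(DOB, "1980-06-15"));         // d:1980-06-15
    }
}
```

With keys shaped like this, a scan whose start row is the prefix and whose stop row is the prefix with its last byte incremented returns exactly the matching index entries.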
On Mon, Apr 2, 2012 at 1:41 PM, Jean-Daniel Cryans jdcry...@apache.org wrote:
Decrease hbase.hregion.memstore.flush.size?
Even if you decrease it enough so that you don't hit the too many hlogs
you'll still end up flushing tiny files which will trigger compactions a
lot too.
Are there
Thanks for the suggestion, Sandy. I will let you know the outcome once I run
the job.
On Mon, Apr 2, 2012 at 3:26 PM, Sandy Pratt prat...@adobe.com wrote:
It might work to set the property as final on the server side, so that
clients can't override it:
property
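Presumably along the lines of the standard Hadoop final-parameter form; the property name below is only a placeholder, since the message above was cut off before naming one:

```xml
<property>
  <name>some.server.side.property</name>
  <value>server-chosen-value</value>
  <!-- "final" stops client-side configs from overriding this value -->
  <final>true</final>
</property>
```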
Juhani,
Have you looked at any of the logs from your perf runs? Can you try
running HBase's performance evaluation with debug comments on? I'd like
to know if what I'm seeing is the same as you.
I've started running some of these and have encountered what seems to be
networking code issues.
Hi guys, conversation went off the list briefly as I resent stack
dumps to stack. We've moved back to hdfs 0.20.2 but want to post this
back here and try to summarize events as well as our experiences with
0.23 and concerns.
Quick summary: after having some issues with 0.20.2(since resolved),
we
Hi Alok, please refer to my previous post where I detailed some of the
stuff we did.
At this point, I'm unsure if it is actually possible to get good
autoFlushed throughput with 0.23, we weren't able to and switched back
to 0.20.2
If you want to persevere however, please let us know if you make
Jon,
we had a fair few long pauses. Our test tool gave us latency numbers, and
we saw a lot of requests taking much longer than they should.
Unfortunately we didn't hold onto our logs from the PerformanceEvaluation runs.
Also I would note that PerformanceEvaluation internally disables
autoFlush, so it
The interesting point I didn't mention from my simplistic tests is that
these slowdowns were present when using 0.92-ish HBase on top of CDH3u3 HDFS
(the old-school 0.20.x-based Hadoop; it didn't even use a Hadoop 0.23-based
HDFS). I'm in the process of testing a hypothesis Todd suggested
On Mon, Apr 2, 2012 at 8:19 PM, Jonathan Hsieh j...@cloudera.com wrote:
I'm in the process of testing a hypothesis Todd suggested
and will share results after test is done.
What is the hypothesis?
St.Ack
Hi,
I have two regionservers and two tables with 10 regions each. On startup, the
first table's 10 regions are assigned to the first RS and the next table's
regions are assigned to the other RS. So when I use a coprocessor, it is not
being executed on both RSs. What could the problem be?
--
Regards,
Balaji,K