This is how the /etc/hosts file looks on the HBase master node:
ubuntu@master:~$ cat /etc/hosts
10.78.21.133 master
#10.62.126.245 slave1
#10.154.133.161 slave1
10.224.115.218 slave1
10.32.213.195 slave2
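As an aside, commented-out duplicate entries like the slave1 lines above are a common source of confusion. Here is a small illustrative Python sketch (not from the thread; the helper is made up) that parses such a snippet and flags hostnames with more than one active IP:

```python
# Sanity-check an /etc/hosts snippet for hostnames mapped to multiple IPs.
# Illustrative only; the contents below mirror the file in the thread.
hosts_snippet = """\
10.78.21.133 master
#10.62.126.245 slave1
#10.154.133.161 slave1
10.224.115.218 slave1
10.32.213.195 slave2
"""

def active_mappings(text):
    """Return {hostname: [ips]} for non-commented lines."""
    mapping = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and commented-out entries
        ip, *names = line.split()
        for name in names:
            mapping.setdefault(name, []).append(ip)
    return mapping

mapping = active_mappings(hosts_snippet)
conflicts = {h: ips for h, ips in mapping.items() if len(ips) > 1}
print(mapping)
print(conflicts)  # {} - the duplicate slave1 lines are commented out
```

With the two commented lines skipped, each hostname resolves to exactly one IP, so no conflict is reported.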
and the code which actually tries to connect is as shown below:
See https://issues.apache.org/jira/browse/HBASE-3556
It looks like you are using a very old release, 0.90 perhaps?
On Thu, Jul 31, 2014 at 2:24 PM, Chandrashekhar Kotekar
shekhar.kote...@gmail.com wrote:
No, we are using hbase-0.90.6-cdh3u6
Regards,
Chandrash3khar Kotekar
Mobile - +91 8600011455
On Thu, Jul 31, 2014 at 12:49 PM, Qiang Tian tian...@gmail.com wrote:
Oh. 0.90.6 is VERY old. Any chance for you to upgrade to a more recent version
like CDH4 or even CDH5 (HBase 0.98)?
2014-07-31 3:25 GMT-04:00 Chandrashekhar Kotekar shekhar.kote...@gmail.com:
Hi,
I have a conceptual question and would appreciate hints.
My task is to save files to HDFS, to maintain some information about
them in an HBase table, and then serve both to the application.
Per file I have around 50 rows with 10 columns (in 2 column families) in
the tables, which have
Hi All,
I am using the MapReduce API to read an HBase table, based on a scan operation
in the mapper, and write the data to a file in the reducer.
I am using HBase version 0.94.5.23.
*Problem:*
Now in my job, my mapper outputs both key and value as Text, but my
reducer outputs key as Text and
What's the read/write mix in your workload?
Have you looked at HBASE-10070 'HBase read high-availability using
timeline-consistent region replicas' (phase 1 has been merged for the
upcoming 1.0 release)?
Cheers
On Thu, Jul 31, 2014 at 8:17 AM, Wilm Schumacher wilm.schumac...@cawoom.com
On 31.07.2014 at 18:08, Ted Yu wrote:
What's the read/write mix in your workload?
I would think around
1 put to 2-5 reads for the hdfs files (estimated)
and
1 put to hundreds of reads in the hbase table
So in short form:
= for the files
* number of puts ~ gets
* small number of puts
HBASE-11339 'HBase MOB' may be of interest to you - it is still in
development.
Cheers
On Thu, Jul 31, 2014 at 9:21 AM, Wilm Schumacher wilm.schumac...@cawoom.com
wrote:
If you put all values from columns into the row key, you wouldn't be able
to utilize the benefits that column families bring you (e.g. the essential
column family feature).
See:
http://hbase.apache.org/book.html#columnfamily
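To make the trade-off concrete, here is a hypothetical Python sketch contrasting a composite row key with keeping fields in columns; the field names and widths are invented for illustration:

```python
# Illustrative only: contrast a composite row key with column storage.
def composite_key(user_id, event_type, ts):
    """Pack all values into the row key using fixed-width fields."""
    return f"{user_id:0>8}|{event_type:<10}|{ts:0>13}"

key = composite_key("42", "click", "1406800000000")
print(key)

# With everything packed into the key, a scan must decode every row key
# to filter on event_type. Kept as a column in its own family instead,
# a server-side value filter (with the essential column family
# optimization) can avoid loading the other family's data for rows that
# do not match.
```

The fixed-width padding keeps keys sortable, but any filtering on the embedded fields happens by key parsing rather than by column-family-aware filters.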
Cheers
On Wed, Jul 30, 2014 at 6:13 PM, yl wu wuyl6...@gmail.com wrote:
Hi
Hi Wilm,
What else will this cluster do? Are you planning to run MR against the data
here? If this cluster is dedicated to your application and you have enough
IO capacity to support all application needs on the cluster, I see no
reason to run two clusters.
The reason we recommend against
Does anyone know how to read data from OpenTSDB/HBase into PySpark? I’ve
searched multiple forums and can’t seem to find the answer. Thanks!
Hello,
Have you tried to use the Thrift bindings for Python? An example can be
found under the hbase-examples directory:
https://github.com/apache/hbase/blob/0.96/hbase-examples/src/main/python/thrift1/DemoClient.py
regards,
esteban.
--
Cloudera, Inc.
On Thu, Jul 31, 2014 at 11:36 AM, CHU,
Hi,
On 31.07.2014 at 20:28, Nick Dimiduk wrote:
What else will this cluster do? Are you planning to run MR against the data
here?
The cluster does nothing other than run this application. The application
consists of the HDFS part and the HBase part.
And yes, I plan to run some MR jobs against the data.
Hi Parkirat,
I don't follow the reducer problem you're having. Can you post your code
that configures the job? I assume you're using TableMapReduceUtil someplace.
Your reducer is removing duplicate values? Sounds like you need to update
its logic to only emit a value once. Pastebin-ing your
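The "emit a value only once" logic can be sketched in plain Python (the real job would do this inside a Hadoop Reducer; the names here are hypothetical):

```python
# Minimal sketch of emit-once reducer logic, in plain Python.
def dedup_reducer(key, values):
    """Emit each distinct value exactly once, preserving first-seen order."""
    seen = set()
    for v in values:
        if v not in seen:
            seen.add(v)
            yield key, v

out = list(dedup_reducer("k1", ["a", "b", "a", "c", "b"]))
print(out)  # [('k1', 'a'), ('k1', 'b'), ('k1', 'c')]
```

Since all values for a key arrive at one reduce call, a per-key set is enough; no global state across keys is needed.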
Hi,
I am trying to write a customized rebalancing algorithm. I would like to
run the rebalancer every 30 minutes inside a single thread. I would also
like to completely disable Helix triggering the rebalancer.
I have a few questions:
1) What's the best way to run the custom controller? Can I
Sorry - wrong user mailing list - please ignore...
On Thu, Jul 31, 2014 at 12:12 PM, Varun Sharma va...@pinterest.com wrote:
+1 for happybase
On Thu, Jul 31, 2014 at 2:47 PM, Stack st...@duboce.net wrote: