Hi Lucas
As you said, RegionSizeCalculator is developed on top of 0.94, but the
class has dependencies, viz.:
import org.apache.hadoop.hbase.RegionLoad;
import org.apache.hadoop.hbase.ServerLoad;
I am unable to find these classes in 0.94.x.
Are these classes available in 0.94 under some other package?
Many thanks, I got it
The default TTL is over 69 years
-Original Message-
From: lars hofhansl [mailto:la...@apache.org]
Sent: Tuesday, February 18, 2014 12:01 AM
To: user@hbase.apache.org
Subject: Re: TTL forever
Just do not set any TTL, the default is forever.
Hi,
Add this import:
import org.apache.hadoop.hbase.HServerLoad;
And rename the classes:
ServerLoad -> HServerLoad
RegionLoad -> HServerLoad.RegionLoad
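To make the rename concrete, a non-runnable fragment of what the backport's imports would look like (assuming the HBase 0.94 client jars are on the classpath):

```java
// 0.96+ originals:
//   import org.apache.hadoop.hbase.ServerLoad;
//   import org.apache.hadoop.hbase.RegionLoad;

// 0.94 equivalent -- both live under HServerLoad:
import org.apache.hadoop.hbase.HServerLoad;

// Then, throughout the backported RegionSizeCalculator:
//   ServerLoad  becomes  HServerLoad
//   RegionLoad  becomes  HServerLoad.RegionLoad  (a nested class)
```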
Lukas
On 18.2.2014 11:19, Vikram Singh Chandel wrote:
Hi Lucas
As you said that RegionSizeCalculator is developed on top of 0.94, the
Hi Mohamed,
The default value is MAX_VALUE, which is treated as forever. So the default
TTL is NOT 69 years; the default TTL IS forever.
JM
2014-02-18 5:19 GMT-05:00 Mohamed Ghareb m.ghar...@tedata.net:
Many thanks, I got it
The default TTL is over 69 years
-Original Message-
From: lars
I also calculated the TTL in years, just for fun. :) But as Jean-Marc said,
the default TTL is forever.
On Tue, Feb 18, 2014 at 2:05 PM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
Hi Mohamed,
Default value is MAX_VALUE, which is considered as forever. So default
TTL is NOT 69 years.
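For the fun of it, the "69 years" number comes from reading Integer.MAX_VALUE as seconds; a quick check in plain Java (nothing HBase-specific) shows it is actually just over 68 years, and HBase interprets that sentinel as "forever" anyway:

```java
public class TtlMath {
    public static void main(String[] args) {
        // The default TTL is Integer.MAX_VALUE seconds, which HBase
        // treats as "keep forever" rather than as a real deadline.
        long ttlSeconds = Integer.MAX_VALUE;           // 2147483647
        double years = ttlSeconds / (365.25 * 24 * 3600);
        System.out.printf("%.1f years%n", years);      // roughly 68 years
    }
}
```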
On executing ./hbase shell I am getting the error below...
This is a new HDFS/HBase installation...
[sas@172.20.8.20~/hbase-0.98.0-hadoop2/bin]$ ./hbase shell
2014-02-18 20:34:36,531 INFO [main] Configuration.deprecation:
hadoop.native.lib is deprecated. Instead, use io.native.lib.available
Do you have any Ruby package installed outside of HBase? Also, what's your
JDK version? What's the value of your JAVA_HOME variable?
JM
2014-02-18 10:05 GMT-05:00 Upendra Yadav upendra1...@gmail.com:
On executing ./hbase shell I am getting the error below...
This is a new HDFS/HBase
See reply from Pete in this post:
https://groups.google.com/forum/#!topic/logstash-users/7dS2quZt_98
On Tue, Feb 18, 2014 at 9:05 AM, Upendra Yadav upendra1...@gmail.com wrote:
On executing ./hbase shell I am getting the error below...
This is a new HDFS/HBase installation...
Do you have any Ruby package installed outside of HBase?
NO
Also, what's your JDK version?
java version 1.6.0_24
Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)
What's the value of your JAVA_HOME variable?
/home/sas/jdk1.6.0_24
I think it's Ruby using this tmp directory, not HBase.
Can you try to setup TMPDIR?
export TMPDIR=/home/sas/hbase-0.98.0-hadoop2/tmp/
2014-02-18 11:54 GMT-05:00 Upendra Yadav upendra1...@gmail.com:
Do you have any Ruby package installed outside of HBase?
NO
Also, what's your JDK version?
Hi All,
We are getting the following exceptions when we run an HBase
MapReduce job using *Oozie*. However, when we run the same job
manually (using hadoop jar), it runs fine. In both cases we are
running as the same user. Also, I configured all the configurations properly.
Hbase
After setting TMPDIR, the same errors are still coming... :(
On Tue, Feb 18, 2014 at 10:31 PM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
I think it's Ruby using this tmp directory, not HBase.
Can you try to setup TMPDIR?
export TMPDIR=/home/sas/hbase-0.98.0-hadoop2/tmp/
2014-02-18
Hum.
Not really sure, but maybe you can try to add
ENV['TMPDIR']='/home/sas/hbase-0.98.0-hadoop2/tmp/' into hirb.rb file?
2014-02-18 13:00 GMT-05:00 Upendra Yadav upendra1...@gmail.com:
After setting TMPDIR, the same errors are still coming... :(
On Tue, Feb 18, 2014 at 10:31 PM, Jean-Marc
Moving the discussion to the user mailing list.
Hi Jignesh,
You cannot really map MySQL tables to HBase. You need to rethink your
schema when moving to HBase. For example, in MySQL a key can span multiple
columns; in HBase, there is just the single row key itself, etc.
What are you trying to achieve?
JM
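One common way to bridge that gap is to flatten a multi-column primary key into a single HBase row key yourself. A minimal sketch in plain Java (the field names and separator are invented for illustration):

```java
import java.nio.charset.StandardCharsets;

public class CompositeKey {
    // Build one HBase row key from (tenantId, userId), the way a MySQL
    // composite primary key would be flattened. Zero-padding the numeric
    // part keeps byte-wise sorting consistent with numeric order.
    static byte[] rowKey(String tenantId, long userId) {
        String key = tenantId + "|" + String.format("%012d", userId);
        return key.getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(new String(rowKey("acme", 42), StandardCharsets.UTF_8));
        // acme|000000000042
    }
}
```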
2014-02-18
On Mon, Feb 17, 2014 at 11:50 PM, Kamesh Bhallamudi kamesh.had...@gmail.com
wrote:
Hi All,
We are getting the following exceptions when we run an HBase
MapReduce job using *Oozie*. However, when we run the same job
manually (using hadoop jar), it runs fine. In both cases,
What are you trying to do? Do you want hex keys or decimal keys? Usually
you'll want fixed-length, zero-padded keys if you don't want to be
surprised by how they sort.
St.Ack
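The sorting surprise is easy to demonstrate, since HBase orders row keys as raw bytes. A small plain-Java illustration (the keys are hypothetical):

```java
import java.util.Arrays;

public class PaddedKeys {
    public static void main(String[] args) {
        // Unpadded decimal keys sort lexicographically, not numerically:
        String[] raw = {"2", "10", "1"};
        Arrays.sort(raw);
        System.out.println(Arrays.toString(raw));    // [1, 10, 2] -- surprising

        // Zero-padding to a fixed width restores numeric order:
        String[] padded = {pad(2), pad(10), pad(1)};
        Arrays.sort(padded);
        System.out.println(Arrays.toString(padded)); // [0000000001, 0000000002, 0000000010]
    }

    static String pad(long n) {
        return String.format("%010d", n);            // e.g. 42 -> "0000000042"
    }
}
```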
On Mon, Feb 17, 2014 at 3:28 AM, Mohamed Ghareb m.ghar...@tedata.net wrote:
Kindly when I create new tables with
On Mon, Feb 10, 2014 at 12:25 AM, Asaf Mesika asaf.mes...@gmail.com wrote:
Hi,
We have HBase 0.94.7 deployed in production with 54 Region Servers (Hadoop
1).
Couple of days ago, we had an incident which made our system unusable for
several hours.
HBase started emitting WARN exceptions
On Mon, Feb 17, 2014 at 1:59 AM, Asaf Mesika asaf.mes...@gmail.com wrote:
Hi,
Apparently this just happened on a staging machine as well. The common ground
between them is a failed disk (1 out of 8).
It seems like a bug if HBase can't recover from a failed disk. Could it be
that short circuit is
Jean,
We have a product which runs on MySQL, and we are trying to move it to
HBase to create a multi-tenant database.
I agree with you that, because of the NoSQL nature of the database, we should
denormalize the MySQL schema. However, I was thinking of making quick
progress by first creating dirty
Hi Dean,
Any chance you've tested Ram's patch? Does it work for you?
Thanks,
Nick
On Mon, Jan 27, 2014 at 8:28 AM, Dean hikeonp...@gmail.com wrote:
Hi Ram,
We'll give it a shot, thanks!
-Dean
You might want to take a look at Phoenix
http://phoenix.incubator.apache.org/
JM
2014-02-18 19:29 GMT-05:00 Jignesh Patel jigneshmpa...@gmail.com:
Jean,
We have a product which runs on MySQL, and we are trying to move it to
HBase to create a multi-tenant database.
I agree with you that
Can you tell us which Oozie version you're using?
bq. Hbase version : 0.94.2.21
I am not aware of such a release. Are you using 0.94.2?
Thanks
On Tue, Feb 18, 2014 at 1:50 AM, Kamesh Bhallamudi
kamesh.had...@gmail.com wrote:
Hi All,
We are getting the following exceptions when we run
On Sat, Feb 15, 2014 at 8:01 PM, Jack Levin magn...@gmail.com wrote:
Looks like I patched it in DFSClient.java, here is the patch:
https://gist.github.com/anonymous/9028934
I moved the 'deadNodes' list outside as a global field that is accessible by
all running threads, so at any point
Hi Nick,
This is near the top of our TODO list. I pledge to report back as soon
as we've tested it.
Thanks,
Dean
Hi Jignesh,
Phoenix has support for multi-tenant tables:
http://phoenix.incubator.apache.org/multi-tenancy.html. Also, your primary
key constraint would transfer over as-is, since Phoenix supports composite
row keys. Essentially your pk constraint values get concatenated together
to form your row
I have bad luck... I added ENV['TMPDIR']='/home/sas/hbase-0.98.0-hadoop2/tmp/' into
the hirb.rb file..
Still not working...
In the end I had no other option, and I asked my system-admin team to mount
/tmp without noexec on the machine where I will execute ./hbase
shell..
And after that it's
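The noexec mount is the likely root cause here: JRuby (which backs the HBase shell) unpacks native libraries into the temp directory and then cannot execute them from a noexec filesystem. A rough plain-Java probe for this condition (a sketch, not a definitive diagnostic; behavior on noexec mounts is the assumption here):

```java
import java.io.File;
import java.io.IOException;

public class NoexecProbe {
    public static void main(String[] args) throws IOException {
        // Create a scratch file in the default temp dir and try to mark it
        // executable; on a noexec mount, canExecute() typically stays false
        // even after chmod succeeds, because access(X_OK) is denied there.
        File probe = File.createTempFile("noexec-probe", ".bin");
        try {
            boolean marked = probe.setExecutable(true);
            System.out.println("temp dir: " + probe.getParent());
            System.out.println("executable bit honored: " + (marked && probe.canExecute()));
        } finally {
            probe.delete();
        }
    }
}
```

Pointing TMPDIR (or -Djava.io.tmpdir) at a directory on a non-noexec filesystem is the usual workaround when remounting /tmp is not an option.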