Hi all, I use HBase 0.94, and I hit a problem when I stop HBase.
Last week I added this configuration to $HBASE_HOME/bin/hbase:
elif [ "$COMMAND" = "master" ] ; then
  CLASS='org.apache.hadoop.hbase.master.HMaster'
  HBASE_OPTS="$HBASE_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=10444"
I
The HBase Team is pleased to announce the immediate release of HBase 0.94.11.
Download it from your favorite Apache mirror [1]. This release has also been
pushed to Apache's maven repository.
As before, all previous 0.92.x and 0.94.x releases can be upgraded to 0.94.11
via a rolling upgrade without
Hi Andrew,
I don't think the homebrew recipes are managed by an HBase developer.
Rather, someone in the community has taken it upon themselves to
provide the project through brew. Likewise, the Apache HBase project does
not provide RPM or DEB packages, but you're likely to find them if you look
ar
Hi everyone,
I'm facing the same issue as Pablo. Renaming the classes I use in an HBase
context improved network usage by more than 20%. It would be really nice to
have an improvement around this.
On 08/20/2013 01:15 PM, Jean-Marc Spaggiari wrote:
But even if we are using Protobuf, he is going to
When you start looking at secondary indexing, they really become powerful when
you want to join two tables.
(Something I thought was already being discussed)
So you can use the inverted table as a secondary index with one small glitch...
And then create a table of indexes, where each row
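To make the inverted-table idea above concrete, here is a plain-Java sketch (no HBase dependencies; the table contents, row keys, and class name are made up for illustration): the "index table" maps each column value back to the row keys holding it, so a value lookup or join becomes point reads instead of a full scan.

```java
import java.util.*;

// Plain-Java sketch of an inverted table used as a secondary index.
// The index maps each column value back to the row keys holding it.
public class InvertedIndexSketch {

    // Build the inverted index, then answer: which rows share the
    // indexed value of the given row?
    static List<String> rowsSharingValue(Map<String, String> mainTable,
                                         String rowKey) {
        Map<String, List<String>> index = new HashMap<>();
        for (Map.Entry<String, String> e : mainTable.entrySet()) {
            index.computeIfAbsent(e.getValue(), k -> new ArrayList<>())
                 .add(e.getKey());
        }
        List<String> rows = new ArrayList<>(index.get(mainTable.get(rowKey)));
        Collections.sort(rows); // HashMap iteration order is unspecified
        return rows;
    }

    public static void main(String[] args) {
        Map<String, String> users = new HashMap<>();
        users.put("user1", "Paris");
        users.put("user2", "Tokyo");
        users.put("user3", "Paris");
        System.out.println(rowsSharingValue(users, "user1")); // [user1, user3]
    }
}
```

In HBase terms the index would itself be a table whose row key is the value, which is where the "small glitch" of keeping the two tables consistent comes in.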
On Mon, Aug 19, 2013 at 11:52 PM, Monish r wrote:
> Hi Jean,
>
s/Jean/Jean-Daniel ;)
> Thanks for the explanation.
>
> Just a clarification on the third answer,
>
> In our current cluster (0.90.6), I find that, irrespective of whether TTL
> is set or not, major compaction rewrite
RTFM? ;)
Thanks for pointing me to this link! I have all the responses I need there.
JM
2013/8/20 Jean-Daniel Cryans
> You can find a lot here: http://hbase.apache.org/replication.html
>
> And how many logs you can queue is how much disk space you have :)
>
>
> On Tue, Aug 20, 2013 at 7:23 AM,
You can find a lot here: http://hbase.apache.org/replication.html
And how many logs you can queue is how much disk space you have :)
On Tue, Aug 20, 2013 at 7:23 AM, Jean-Marc Spaggiari <
jean-m...@spaggiari.org> wrote:
> Hi,
>
> If I have a master -> slave replication, and master went down, re
But even if we are using Protobuf, he is going to face the same issue,
right?
We should have a way to send the filter once, together with a number, to
tell the regions that this filter will from then on be represented by that
number. There is some risk in re-using a number for a filter that is already
using it,
b
Hi Ted,
I'm using 0.94.7. Great! So moving to 0.95 will avoid this issue.
Thanks!
2013/8/20 Ted Yu
> Are you using HBase 0.92 or 0.94 ?
>
> In 0.95 and later releases, HbaseObjectWritable doesn't exist. Protobuf is
> used for communication.
>
> Cheers
>
>
> On Tue, Aug 20, 2013 at 8:56 AM, Pa
Are you using HBase 0.92 or 0.94 ?
In 0.95 and later releases, HbaseObjectWritable doesn't exist. Protobuf is
used for communication.
Cheers
On Tue, Aug 20, 2013 at 8:56 AM, Pablo Medina wrote:
> Hi all,
>
> I'm using custom filters to retrieve filtered data from HBase using the
> native api.
The scan will be broken up into multiple map tasks, each of which will run
over a single split of the table (look at TableInputFormat to see how it is
done). The map tasks will run in parallel.
Jeff
On Tue, Aug 20, 2013 at 8:45 AM, yonghu wrote:
> Hello,
>
> I know if I use default scan api,
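The reply above describes how TableInputFormat cuts the scan into per-region splits. As a rough plain-Java illustration (not the actual HBase code; the split sizing here is a simplification), each split covers a contiguous slice of the sorted row range and is scanned by its own parallel worker, while rows stay ordered within a split:

```java
import java.util.*;
import java.util.stream.*;

// Sketch of parallel scanning over splits: cut the sorted row range
// into contiguous slices, one per "map task", and scan each slice
// independently. Order holds within a slice, not across slices.
public class SplitScanSketch {

    static List<List<Integer>> scanInSplits(List<Integer> sortedRows,
                                            int numSplits) {
        int per = (int) Math.ceil(sortedRows.size() / (double) numSplits);
        return IntStream.range(0, numSplits)
                .parallel() // each split is scanned by its own worker
                .<List<Integer>>mapToObj(i -> new ArrayList<>(sortedRows.subList(
                        Math.min(i * per, sortedRows.size()),
                        Math.min((i + 1) * per, sortedRows.size()))))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Integer> rows =
                IntStream.rangeClosed(1, 10).boxed().collect(Collectors.toList());
        System.out.println(scanInSplits(rows, 3)); // [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10]]
    }
}
```

This is also why a MapReduce job writing straight to HDFS does not pay the serial-scan cost of a single client: no global ordering of results is promised across splits.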
Hi all,
I'm using custom filters to retrieve filtered data from HBase using the
native API. I noticed that the fully-qualified class names of those custom
filters are being sent as the byte representation of the string using
Text.writeString(). This consumes a lot of network bandwidth in my case due
to using
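For a sense of scale, here is a self-contained sketch of the overhead described above. It uses DataOutputStream.writeUTF as a stand-in for Text.writeString (both write a length prefix followed by the string's bytes, though the exact prefix encoding differs); the class names are hypothetical:

```java
import java.io.*;

// Sketch: with 0.94-style Writable RPC, the filter's fully-qualified
// class name is serialized on every request, so its length is paid
// per RPC. writeUTF here stands in for Text.writeString.
public class FilterNameCostSketch {

    // Bytes spent on the wire just to name the filter class.
    static int serializedSize(String className) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        new DataOutputStream(buf).writeUTF(className); // 2-byte length + UTF-8 bytes
        return buf.size();
    }

    public static void main(String[] args) throws IOException {
        int longName  = serializedSize("com.mycompany.analytics.hbase.filters.CustomRowValueFilter");
        int shortName = serializedSize("f.CRVF");
        // Shortening the class name shrinks every single RPC by the difference.
        System.out.println(longName + " vs " + shortName + " bytes per request");
    }
}
```

This is consistent with the observation earlier in the thread that simply renaming filter classes measurably reduced network usage.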
Hello,
I know that if I use the default scan API, HBase scans the table in a serial
manner, as it needs to guarantee the order of the returned tuples. My
question is: if I use MapReduce to read the HBase table and write the results
directly to HDFS, rather than returning them to the client, is the HBase scan
still in a se
Hi,
If I have a master -> slave replication and the master goes down, replication
will start back where it was when the master comes back online. Fine.
If I have a master -> slave replication and the slave goes down, is the data
queued until the slave comes back online and then sent? If so, how big can
b
Hi,
Are you able to run FSCK on HDFS to see if there is any issue? And HBCK on
HBase, but don't use the repair option for now...
JM
2013/8/13 g_jinlong
> Caused by: java.io.EOFException:
>
> The file has been fully read, but the program is still reading it.
>
> Analyzing this message, it seems your file has been read to the end while
> the program is still trying to read from it.
>
>
>
>
> g_jinlong
Hi Lars,
Thank you for your reply, and sorry for the lack of clarity.
Actually, the HBase daemon is running only on the master, just one server. It
uses HDFS as its storage.
The input data is on EBS. It is written into HBase, which runs over HDFS
backed by EBS.
The only tuning I did is:
hbase.
Hi,
I am running HBase in pseudo-distributed mode on top of HDFS.
Recently, I was facing problems related to long GC pauses.
When I read the official documentation, it suggested increasing the ZooKeeper
timeout.
I am planning to make it 10 minutes. I understand the risk of increasing the
timeout means it