java -d64 -version works well in the shell.
2014-09-15 11:59 GMT+08:00 牛兆捷 nzjem...@gmail.com:
I use hbase-0.98.5-hadoop2 and modified the default heap settings of the region
server in hbase-env.sh as below (keeping all the other parameters in the file
at their defaults):
export HBASE_REGIONSERVER_OPTS="-Xmn200m"
It works now, after configuring JAVA_HOME explicitly.
JAVA_HOME was configured as $JAVA_HOME (inherited from the environment) by
default; now I have set it to the complete path of my JDK explicitly.
It is a little strange, though: $JAVA_HOME is already set in the shell
environment, so why do I still need to configure it again?
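A likely explanation (my understanding, not stated in this thread): the HBase start scripts launch daemons from non-interactive, non-login shells (e.g. over ssh), which do not read your interactive profile, so a JAVA_HOME exported only there never reaches the daemon. Setting it in conf/hbase-env.sh sidesteps that; the JDK path below is illustrative:

```shell
# conf/hbase-env.sh -- JDK path is illustrative, adjust to your install
export JAVA_HOME=/usr/lib/jvm/java-7-oracle
export HBASE_REGIONSERVER_OPTS="-Xmn200m"
```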
Thanks Anoop. I did that, but the only method that was getting called in my
filter was public byte[] toByteArray(), even though I override
transformCell.
Thanks,
Nishanth
On Thu, Sep 11, 2014 at 10:51 PM, Anoop John anoop.hb...@gmail.com wrote:
And you have to implement
transformCell(*final*
Take a look at StoreScanner#next():

    ScanQueryMatcher.MatchCode qcode = matcher.match(kv);
    switch (qcode) {
      case INCLUDE:
      case INCLUDE_AND_SEEK_NEXT_ROW:
      case INCLUDE_AND_SEEK_NEXT_COL:
        Filter f = matcher.getFilter();
        if (f != null) {
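As an aside, a self-contained sketch with simplified stand-in classes (not the real HBase API): the scanner-side call site above only ever invokes transformCell, so overriding toByteArray alone, which only affects serialization, never changes the cells a scan returns. The same dispatch pattern in miniature:

```java
// Stand-in for FilterBase (hypothetical, mirrors the dispatch pattern):
class SimpleFilterBase {
    // serialization hook: used when shipping the filter over RPC
    byte[] toByteArray() { return new byte[0]; }
    // transformation hook: this is what the scanner invokes per cell
    String transformCell(final String cell) { return cell; }
}

class UppercaseFilter extends SimpleFilterBase {
    @Override
    String transformCell(final String cell) { return cell.toUpperCase(); }
}

public class FilterDemo {
    public static void main(String[] args) {
        SimpleFilterBase f = new UppercaseFilter();
        // the scanner's call site: only transformCell affects the result
        System.out.println(f.transformCell("value")); // prints VALUE
    }
}
```

If transformCell never fires in a real filter, it is also worth checking that the override's signature exactly matches the base-class method; a mismatched signature silently becomes an overload the scanner never calls.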
Hi,
We are thinking of adding some new 64-bit servers to our 32-bit HBase cluster
to run RegionServers on. Is there anything we should worry about or pay extra
attention to?
Thanks,
Otis
Well, many HBase internal limits are based on the architecture, so that
should impact you right away in resource utilization on the RSs: a heap size
or a setting that worked fine for your 32-bit RSs might need to be tuned
for a 64-bit environment. It might be easier to run a 32-bit JVM in
the
Hi Esteban,
Sorry, I meant to point that out in my original email - yeah, heap sizes,
Xmx, and such will be different for 32-bit and 64-bit servers, but I was
wondering if there is anything in HBase that could complain if, say, a
region written on a 32-bit server moves to a 64-bit server or
Only where we touch the native Hadoop libraries, I think. If you have
specified compression implemented with a Hadoop native library, like
Snappy or LZO, and have forgotten to deploy 64-bit native libraries
when you move to this 64-bit environment, you won't be able to open the
affected table(s) until
Do we have any kind of native compression in PB (protobuf)? Or not at all?
Because if so, it might be an issue.
I run a 0.94 cluster with a mix of 32- and 64-bit nodes with Snappy enabled,
but I have never tried 0.96 or later on it...
2014-09-15 18:36 GMT-04:00 Andrew Purtell apurt...@apache.org:
Only where we touch the native
bq. Do we have kind of native compression in PB?
I don't think so.
On Mon, Sep 15, 2014 at 4:28 PM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
Do we have kind of native compression in PB? Or not at all? Because if so,
it might be an issue.
I run a 0.94 cluster with a mix of 32 and
On Mon, Sep 15, 2014 at 4:28 PM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
Do we have kind of native compression in PB?
Protobuf has its own encodings; the Java language bindings implement
them in Java.
--
Best regards,
- Andy
bq. 98.1 on dest cluster
Looking at the history of SnapshotManifestV1, it came in with HBASE-7987,
which went into 0.99.0.
Perhaps you're using a distro that includes HBASE-7987?
On Mon, Sep 15, 2014 at 4:58 PM, Gautam gautamkows...@gmail.com wrote:
Hello,
I'm trying to copy data between HBase
Yep, looks like the CDH distro backports HBASE-7987. Having said that, is
there a transition path for us, or are we hosed :-) ? In general, what's the
recommended way to achieve this? At this point I feel I'm going around the
system to achieve what I want. If nothing else works with export snapshot
0.94 and 0.98 differ in directory layout,
so 0.98 is not able to read the 0.94 layout unless you run the migration tool,
which basically moves all the data into a default namespace directory,
e.g.
/hbase/table -> /hbase/data/default/table
/hbase/.archive/table -> /hbase/archive/default/table
Matteo
On Mon,
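The layout change Matteo describes amounts to moving directories in HDFS; purely as an illustration of the mapping above (the real migration/upgrade tool does more than this, so these commands are not a substitute for running it):

```shell
# illustration only -- run the actual migration tool instead
hadoop fs -mv /hbase/table /hbase/data/default/table
hadoop fs -mv /hbase/.archive/table /hbase/archive/default/table
```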
Hi all,
I'd like to solicit your help in providing current rules of thumb and best
practices for a section of the HBase Reference Guide. I've done some
research and started a JIRA at
https://issues.apache.org/jira/browse/HBASE-11791. I'd really appreciate it
if you guys could take some time and
Thanks for the reply Matteo.
This is exactly what I did. I modified the source cluster's dir structure
to mimic that of the 98 cluster. I even got as far as it trying to look
through the reference files.
I end up with this exception:
14/09/15 23:34:59 ERROR snapshot.ExportSnapshot: Snapshot
Can you post the full exception and the file path?
Maybe there is a bug in looking up the reference file.
It seems it is not able to find enough data in the file...
Matteo
On Mon, Sep 15, 2014 at 10:08 PM, Gautam gautamkows...@gmail.com wrote:
Thanks for the reply Matteo.
This is exactly
14/09/15 23:34:59 DEBUG snapshot.SnapshotManifestV1: Adding reference for
file (4/4): hftp://master42.stg.com:50070/hbase/.hbase-snapshot/msg_snapshot/84f60fc2aa7e96df91e6289e6c19dc25/c/afe341e4149649578c5861e32494dbec
14/09/15 23:34:59 ERROR snapshot.ExportSnapshot: Snapshot export failed
While you continue on the snapshot approach, have you tried to Export the
table in 0.94 to HDFS, and then Import the data from HDFS into 0.98?
On Sep 15, 2014 10:19 PM, Matteo Bertozzi theo.berto...@gmail.com wrote:
can you post the full exception and the file path ?
maybe there is a bug in
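The Export/Import route suggested above can be sketched as follows (table name and paths are hypothetical, and the destination table must exist before the Import):

```shell
# on the source (0.94) cluster: dump the table to HDFS as SequenceFiles
hbase org.apache.hadoop.hbase.mapreduce.Export mytable /export/mytable

# copy the export between clusters (e.g. with distcp), then on the
# destination (0.98) cluster:
hbase org.apache.hadoop.hbase.mapreduce.Import mytable /export/mytable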