Hello!
I have a problem with a long-running HBase client. I am using
HBase-0.98.6-cdh5.3.1.
After a few days of running, the application hits a bug (?) with a trace as
below. While debugging the issue I saw that the client tries to reach a
region that does not exist. The region did exist in the past,
however
After spending more time on it, I realised that my understanding was wrong
and my question was invalid.
I am still trying to get more information regarding the problem and will
update the thread once I have a better handle on the problem.
Apologies for the confusion.
On Thu, Sep 24, 2015 at 10:32 AM,
Hi
In the version that you were using, the default scan caching was 1000 (I
believe; I would need to check the old code). So in that case it was trying
to fetch 1000 rows per RPC, each row with 20k columns. Now, when you say
that the client was missing rows, did you check the server logs?
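(To make the caching-vs-batch distinction concrete: a minimal sketch
against the 0.98-era client API; the table name and the sizes are made up
for illustration.)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class WideRowScan {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable"); // hypothetical table name
    try {
      Scan scan = new Scan();
      scan.setCaching(10);  // rows fetched per RPC; keep small for wide rows
      scan.setBatch(1000);  // max cells per Result; a 20k-column row arrives
                            // as multiple partial Results with the same key
      ResultScanner scanner = table.getScanner(scan);
      try {
        for (Result r : scanner) {
          System.out.println(Bytes.toString(r.getRow()) + " cells=" + r.size());
        }
      } finally {
        scanner.close();
      }
    } finally {
      table.close();
    }
  }
}

Note that with setBatch set, one wide row comes back as several partial
Results, so client code that assumes one Result per row can look as if it
is dropping rows.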
Did you get any
Just trying to understand more:
you have a combination of PrefixFilter and SingleColumnValueFilter - now
the column you have specified in the SingleColumnValueFilter - is it the
only column that you have in your table? Or are there many other columns,
and one such column was used in the
Hi,
The problem that I am actually facing is that when doing a scan over rows
where each row has a very large number of cells (a large number of
columns), the scan API seems to be silently dropping data - in my case I
noticed that an entire row of data was missing in a few cases.
On suggestions from
22:28:10,233 DEBUG org.apache.hadoop.hbase.util.ByteStringer
- Failed to classload HBaseZeroCopyByteString:
java.lang.IllegalAccessError: class com.google.protobuf.HBaseZeroCopyByteString
cannot access its superclass com.google.protobuf.LiteralByteString
Can you check the classpath?
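(For what it's worth: this IllegalAccessError typically means
LiteralByteString and HBaseZeroCopyByteString were loaded by different
classloaders. A common remedy, hedged because it depends on how the job is
launched, is to make sure hbase-protocol.jar is on the job's classpath,
e.g.:

export HADOOP_CLASSPATH="$(hbase classpath)"

or by listing the path to hbase-protocol.jar explicitly in
HADOOP_CLASSPATH.)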
I have no idea; some people try to use "curl" to determine the active NN.
My suggestion is different. You should put remote NN HA configuration in
hdfs-site.xml.
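(A hedged sketch of those client-side hdfs-site.xml entries; the
nameservice name "remotecluster" and the hostnames are made up, and the
local cluster's own nameservice stays in the list:)

<property>
  <name>dfs.nameservices</name>
  <value>localcluster,remotecluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.remotecluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.remotecluster.nn1</name>
  <value>remote-nn1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.remotecluster.nn2</name>
  <value>remote-nn2.example.com:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.remotecluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

With this in place the client addresses the remote cluster as
hdfs://remotecluster/... without knowing which NN is active.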
2015-09-24 14:33 GMT+02:00 Akmal Abbasov :
> > add remote cluster HA configuration to your "local" hdfs client
> >
Hi all,
I am trying to get the HBaseReadExample from Apache Flink to run. I have
filled a table with the HBaseWriteExample (which works great) and purposely
split it over 3 regions.
Now when I try to read from it, the first split seems to be scanned fine
(170 rows), and after that the Connections
> add remote cluster HA configuration to your "local" hdfs client
> configuration
I am using the following command in a script
$HBASE_PATH/bin/hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot
snapshot-name -copy-to hdfs://remote_hbase_master/hbase
In this case, how can I know which
Gaurav:
Please also check GC activities on the client side.
Here is the reason I brought this to your attention:
HBASE-14177 Full GC on client may lead to missing scan results
Cheers
On Thu, Sep 24, 2015 at 2:13 AM, ramkrishna vasudevan <
ramkrishna.s.vasude...@gmail.com> wrote:
> Hi
>
> In
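(For the GC check suggested above: a hedged sketch of enabling GC logging
on the client JVM, using the standard Java 7/8 flags; the log path is
illustrative.)

-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/myapp/client-gc.log

Long stop-the-world pauses in that log, approaching the scanner timeout,
are the trigger condition HBASE-14177 describes.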
Hi Akmal,
It will be better if you use the nameservice value; then you will not need
to worry about which NN is active. I believe you can find that property in
Hadoop's core-site.xml file.
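(For example, with the remote HA pair defined as a nameservice in the
client's hdfs-site.xml - "remotecluster" here is a made-up name - the
ExportSnapshot destination from the earlier mail would become:)

$HBASE_PATH/bin/hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot
snapshot-name -copy-to hdfs://remotecluster/hbase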
Sent from my iPhone
On Sep 24, 2015, at 7:23 AM, Akmal Abbasov wrote:
>> My
Hello,
I tried to change hbase.hregion.memstore.flush.size for a table, but it
didn't work.
(I just wanted to see if I can set a different memstore flush size for each
table.)
create 't1', {NAME => 'cf', CONFIGURATION =>
{'hbase.hregion.memstore.flush.size' => '1048576'}}
What can I set with CONFIGURATION?
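(Broadly, CONFIGURATION takes per-table, or per-column-family, overrides of
hbase.* keys that would otherwise come from hbase-site.xml. A hedged sketch
of applying and then verifying the override on an existing table:)

alter 't1', CONFIGURATION => {'hbase.hregion.memstore.flush.size' => '1048576'}
describe 't1'

If the override took effect, describe should echo it back among the table
attributes.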
Which release of HBase do you use?
I used command similar to yours and I got :
hbase(main):005:0> describe 't3'
Table t3 is ENABLED
t3
COLUMN FAMILIES DESCRIPTION
{NAME => 'cf', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW',
REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION =>
Hi all,
Is there a better way to warm HFile indexes other than scanning through my
datasets? I did see PREFETCH_BLOCKS_ON_OPEN, but the warning that it "is
not a good idea if the data to be preloaded will not fit into the
blockcache" makes me wary. Why would this be a bad idea?
Thanks!
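(For reference, a hedged sketch of how that attribute is set per column
family from the shell; the table and family names are made up. As the reply
below notes, it covers data blocks, which is where the blockcache-size
warning comes from.)

alter 'mytable', {NAME => 'cf', PREFETCH_BLOCKS_ON_OPEN => 'true'}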
I think the option you mentioned is for data blocks.
As for index blocks, please refer to HFile V2 design doc.
Also see this:
https://issues.apache.org/jira/browse/HBASE-3857?focusedCommentId=13031489&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13031489
Cheers
On
bq. Excluding datanode RS-1:50010
Was RS-1 the only data node to be excluded in that timeframe?
Have you run fsck to see if hdfs is healthy?
Cheers
On Thu, Sep 24, 2015 at 7:47 PM, Alexandre Normand <
alexandre.norm...@opower.com> wrote:
> Hi Ted,
> We'll be upgrading to cdh5 in the coming
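(A hedged example of the fsck check suggested above; the /hbase path is
illustrative, and on CDH4 the hadoop entry point also works:)

hdfs fsck /hbase -files -blocks -locations
hadoop fsck /hbase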
Hi Ted,
We'll be upgrading to cdh5 in the coming months but we're unfortunately
stuck on 0.94.6 at the moment.
The RS logs were empty around the time of the failed snapshot restore
operation, but the following errors were in the master log. The node
'RS-1' is the only node indicated in the logs.
Hey,
We're trying to restore a snapshot of a relatively big table (20TB) using
HBase 0.94.6-cdh4.5.0 and we're getting timeouts doing so. We increased the
timeout configurations (hbase.snapshot.master.timeoutMillis,
hbase.snapshot.region.timeout, hbase.snapshot.master.timeout.millis) to 10
minutes
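(For concreteness, a hedged sketch of those overrides in hbase-site.xml,
with 10 minutes expressed as 600000 ms and the property names as listed
above:)

<property>
  <name>hbase.snapshot.master.timeoutMillis</name>
  <value>600000</value>
</property>
<property>
  <name>hbase.snapshot.region.timeout</name>
  <value>600000</value>
</property>
<property>
  <name>hbase.snapshot.master.timeout.millis</name>
  <value>600000</value>
</property>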
Hi,
There are many other columns, and one such column was used in the
SingleColumnValueFilter. My intention is to first use the PrefixFilter to
narrow the data scope, then use the SingleColumnValueFilter to choose the
correct record, and last use the FirstKeyOnlyFilter to get just one KV and
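(A hedged Java sketch of the filter chain described; the prefix, family,
qualifier, and value are made up. With MUST_PASS_ALL, the filters in the
list are applied together to each cell, in order.)

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;
import org.apache.hadoop.hbase.filter.PrefixFilter;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class FilterChain {
  static Scan buildScan() {
    FilterList filters = new FilterList(FilterList.Operator.MUST_PASS_ALL);
    filters.addFilter(new PrefixFilter(Bytes.toBytes("user123|")));    // 1. narrow the scope
    filters.addFilter(new SingleColumnValueFilter(Bytes.toBytes("cf"), // 2. pick the record
        Bytes.toBytes("status"), CompareOp.EQUAL, Bytes.toBytes("active")));
    filters.addFilter(new FirstKeyOnlyFilter());                       // 3. one KV per row
    Scan scan = new Scan();
    scan.setFilter(filters);
    return scan;
  }
}

One caveat worth testing: depending on where the target column sorts within
the row, FirstKeyOnlyFilter can cut the row off before the
SingleColumnValueFilter ever sees that column, so this particular
combination deserves careful verification.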