Hello.
Occasionally, when closing a region, the RS_CLOSE_REGION thread is unable to
acquire a lock and remains in the WAITING state.
(The cluster load has increased recently.)
As a result, the region state persists as PENDING_CLOSE.
The thread holding the lock is the RPC handler.
If you have any good tips on
Tried with the given snippet.
It works when a table is placed on a single RegionServer, but when the table
is distributed across the cluster, I am not able to scan it. Let me know if
I am going wrong somewhere.
On Tue, Jul 10, 2018 at 2:13 PM, Reid Chan wrote:
> Try this way:
>
>
> Connection connection
Does your hbase client run on multiple machines?
R.C
From: Lalit Jadhav
Sent: 11 July 2018 14:31:40
To: user@hbase.apache.org
Subject: Re: Unable to read from Kerberised HBase
Tried with the given snippet.
It works when a table is placed on a single
I am a subscriber please add me thanks
Yes.
On Wed, Jul 11, 2018 at 2:58 PM, Reid Chan wrote:
> Does your hbase client run on multiple machines?
>
> R.C
>
>
>
> From: Lalit Jadhav
> Sent: 11 July 2018 14:31:40
> To: user@hbase.apache.org
> Subject: Re: Unable to read from Kerberised HBase
>
Does every machine where the hbase client runs have your specific keytab and
the corresponding principal?
From the snippet, I can tell that you're using a service principal to log in
(in name/hostname@REALM format), and each principal should be different
because of the different hostnames.
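To make the per-host principal idea concrete: Hadoop-style service principals are usually configured with a `_HOST` placeholder that each machine expands to its own hostname before logging in from its local keytab (the Hadoop libraries do this via `SecurityUtil.getServerPrincipal` before `UserGroupInformation.loginUserFromKeytab`). A minimal sketch of that expansion; the principal and hostnames below are made-up examples:

```java
public class PrincipalExpansion {
    /**
     * Expand the Hadoop-style _HOST placeholder in a service principal
     * (name/_HOST@REALM) to the given fully-qualified hostname, so each
     * machine logs in with its own principal from its own keytab.
     */
    static String expandPrincipal(String pattern, String fqdn) {
        String[] parts = pattern.split("[/@]");
        if (parts.length != 3 || !"_HOST".equals(parts[1])) {
            return pattern; // not a _HOST service principal; leave untouched
        }
        return parts[0] + "/" + fqdn.toLowerCase() + "@" + parts[2];
    }

    public static void main(String[] args) {
        // Hypothetical principal and hostnames, for illustration only.
        String pattern = "hbase/_HOST@EXAMPLE.COM";
        System.out.println(expandPrincipal(pattern, "node1.example.com"));
        // -> hbase/node1.example.com@EXAMPLE.COM
        System.out.println(expandPrincipal(pattern, "node2.example.com"));
        // -> hbase/node2.example.com@EXAMPLE.COM
    }
}
```

The practical consequence is the one R.C. points out: each machine needs a keytab that actually contains that machine's own principal, not a copy of another host's keytab.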
R.C
There must be a handler thread running (or stuck) somewhere, so the
close-region thread can't obtain the write lock. You can look closely at
your thread dump.
The handler thread you pasted above is just a thread that can't obtain the
read lock, since the close thread is waiting for the write lock.
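That blocked state can be reproduced with a plain `ReentrantReadWriteLock`. This is only an illustrative sketch of the lock semantics (both roles run on one thread for determinism), not HBase's actual region-close code:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RegionLockSketch {
    public static void main(String[] args) {
        ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

        // An RPC handler holds the read lock (e.g. serving a long request,
        // or stuck) ...
        lock.readLock().lock();

        // ... so the close-region thread cannot take the write lock.
        boolean closed = lock.writeLock().tryLock();
        System.out.println("close got write lock: " + closed); // false

        // Only once every reader releases can the close proceed.
        lock.readLock().unlock();
        closed = lock.writeLock().tryLock();
        System.out.println("close got write lock: " + closed); // true
    }
}
```

And since later readers queue behind the waiting writer, a handler that never releases its read lock leaves the region stuck in PENDING_CLOSE, which matches the symptom described at the top of the thread.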
Best
Dear All
I saw there is an hbase-spark module in the HBase code, and there is a
jira for it: https://issues.apache.org/jira/browse/HBASE-13992
That jira says the hbase-spark module code initially came from
https://github.com/cloudera-labs/SparkOnHBase
And in another discussion list it's
To expand on this, I'm also having the inverse issue. I had to take down
our main HBase today, and now when I run hbck it looks for the
hbase:meta,,1 table on a region server that is serving a read-replica
metadata table, and it fails.
It seems like something is messed up on
Hi All
I have a query regarding HBase replication and oldWALs.
HBase version: 1.2.1
To enable HBase indexing, we use the below command on the table:
alter '', {NAME => 'CF1', REPLICATION_SCOPE => 1}
By doing this, replication actually gets enabled, as hbase-indexer requires
it, as per my understanding.
Unless you are including the date+time in the rowKey yourself, no.
HBase has exactly one index for fast lookups, and that is the rowKey.
Any other query operation is (essentially) an exhaustive search.
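If time-ordered access matters, the usual workaround is exactly what's hinted at above: encode a (reversed) timestamp in the rowKey yourself so a forward scan returns the newest rows first. A minimal sketch of such a key layout, with a made-up entity id:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class ReversedTimestampKey {
    /** Build a rowKey of the form: entityId bytes + (Long.MAX_VALUE - ts). */
    static byte[] rowKey(String entityId, long ts) {
        byte[] id = entityId.getBytes(StandardCharsets.UTF_8);
        return ByteBuffer.allocate(id.length + Long.BYTES)
                .put(id)
                .putLong(Long.MAX_VALUE - ts) // reversed: newer ts -> smaller key
                .array();
    }

    public static void main(String[] args) {
        byte[] older = rowKey("sensor-1", 1_000L);
        byte[] newer = rowKey("sensor-1", 2_000L);
        // HBase sorts rowKeys lexicographically as unsigned bytes, so the
        // newer row sorts first and a forward scan sees newest data first.
        System.out.println(Arrays.compareUnsigned(newer, older) < 0); // true
    }
}
```

Note this only orders rows within one entity prefix; finding the last change across a whole table is still an exhaustive scan, and keys that lead with a timestamp alone tend to hotspot a single region.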
On 7/11/18 12:07 PM, Ming wrote:
Hi, all,
Is there a way to get the last row
Thanks Sean for your reply
I still have some questions unanswered:
Q1: How is HBase synchronized with the hbase-indexer?
Q2: What optimizations can I apply?
Q3: As is clear from my stats, the data in oldWALs is quite huge, so it is
not getting cleared by my HMaster. How can I improve my HDFS space issue due to
The hbase-spark module in the HBase project (which hasn't yet made it
into a release) is FWICT the eventual replacement for both the
Cloudera Labs SparkOnHBase and the Hortonworks SHC.
The code in the hbase-spark module started as an update of the
SparkOnHBase code and then quickly expanded via
Presuming you're using the Lily indexer[1], yes, it relies on HBase's
built-in cross-cluster replication.
The replication system stores WALs until it can successfully send them
for replication. If you look in ZK you should be able to see which
regionserver(s) are waiting to send those WALs over.
Hi, all,
Is there a way to get the last row put/delete into an HBase table?
In other words, how can I tell the last time an HBase table was changed? I was
trying to check the HDFS file stats, but HBase has a memstore, so that is not
a good way, and the HFile location is internal to HBase.
My
oldWALs are supposed to be cleaned by a master background chore; I also doubt
they are still needed.
HBASE-20352 (for the 1.x versions) speeds up cleaning oldWALs; it may address
your concern that "OldWals is quite huge".
R.C
From: Manjeet Singh
Sent: 12 July
Thanks Josh,
My application leaves the timestamp as the default, so each put/delete should
have a timestamp generated by HBase itself. I was thinking there was a way to
get the last timestamp.
But thinking about it again, yes, you are right: the data is sorted by rowkey,
so there is no quick way to get the
I have one more question.
If Solr has its own data, meaning it maintains the data in its shards, and
HBase maintains it in the data folder, why are oldWALs still needed?
Thanks
Manjeet singh
On Wed, 11 Jul 2018, 23:19 Manjeet Singh,
wrote:
> Thanks Sean for your reply
>
> I still have some
Please see for subscription information:
http://hbase.apache.org/mail-lists.html
On Wed, Jul 11, 2018 at 4:19 AM bill.zhou wrote:
> I am a subscriber please add me thanks
>
>