+user@phoenix
Hi Vishnu,
Can you try restarting the region server where you are seeing the timeout
on row locks? It would be helpful if you could share the RS logs.
Can you also provide details such as what kind of operations were done before
the restart, and would you be willing to share the table schemas?
Thanks,
To Ram:
I am not sure whether I misunderstand the code, so I want to check whether
there is any room for improvement.
To Ted:
Sorry, I made some mistakes in my previous description.
I meant that we could use a Guava Multimap:
private Multimap<String, Long> hostAndWeight
so we would not need to store the hostname many times.
Thanks
YQ:
The HostAndWeight is basically a tuple.
In getTopHosts(), hosts are retrieved.
In getWeight(String host), weight is retrieved.
Why do you think a single Long is enough?
Cheers
On Wed, Jun 22, 2016 at 9:28 PM, ramkrishna vasudevan <
ramkrishna.s.vasude...@gmail.com> wrote:
Hi WangYQ,
For code-related suggestions, if you feel there is an improvement or a bug, it
is preferable to raise a JIRA and provide a patch. Please feel free to raise a
JIRA with your suggestion and an explanation of why you want to change it.
Regards
Ram
On Thu, Jun 23, 2016 at 9:36 AM, WangYQ wrote:
There is a class named "HDFSBlocksDistribution" that uses a tree map "hostAndWeight"
to store the data:
private Map<String, HostAndWeight> hostAndWeight
I think we can use
private Map<String, Long> hostAndWeight
to store the data instead.
thanks
I tried the following:
1) Truncating the system stats table did not work.
2) Setting phoenix.stats.useCurrentTime=false
But no luck - the wait time increased even further.
On Thu, Jun 23, 2016 at 9:04 AM, vishnu rao wrote:
So with the Spark user given the SYSTEM label access and the fix for
HBASE-15707 in place, are things working for you, or are there still some issues?
Please let us know.
-Anoop-
On Fri, Jun 17, 2016 at 12:07 PM, ramkrishna vasudevan
wrote:
>>>so long as only the HBase user and
Hi Biju
Yes, local index.
It all started when one box crashed.
When I brought up a new one, the error was localized to the new box.
After a cluster restart, it spread to all servers.
I shall attempt to clear the system stats and increase the meta cache size.
Vishnu,
Are you using "local index" on any of the tables? We have seen similar
issues while using "local index".
On Wed, Jun 22, 2016 at 12:25 PM, vishnu rao wrote:
The server dies when trying to take the thread dump.
I believe I am experiencing this bug:
https://issues.apache.org/jira/browse/PHOENIX-2508
On Wed, Jun 22, 2016 at 5:03 PM, Heng Chen wrote:
You weren't setting the classpath. In Bash, you can't put a $ in front of
the variable name when you're assigning a value to it.
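To illustrate (the jar path below is just a made-up example, not your actual classpath):

```shell
# Wrong: bash expands $CLASSPATH first, so the line below is not an
# assignment - bash tries to run the expansion result as a command.
# $CLASSPATH=/opt/hbase/lib/hbase-client.jar
# Right: no $ on the left-hand side; use $ only when reading the variable.
CLASSPATH=/opt/hbase/lib/hbase-client.jar
echo "$CLASSPATH"
```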
-Dima
On Wednesday, June 22, 2016, Mahesha999 wrote:
Hey, thanks. That worked. It seems my lack of experience with Linux is causing
the trouble. Can you tell me what was going on?
Jinhong:
Please take a look at the 3rd paragraph of:
http://hbase.apache.org/book.html#gcpause
Cheers
On Wed, Jun 22, 2016 at 2:09 AM, Heng Chen wrote:
8000/200 = 40: if your table is balanced well enough, each RS will serve 40
requests per second, which is fine for an RS. Have you tried setting -Xmn
smaller to reduce the young generation?
2016-06-22 16:12 GMT+08:00 jinhong lu :
Which thread holds the row lock? Could you dump the jstack with 'jstack -l
pid'?
2016-06-22 16:14 GMT+08:00 vishnu rao :
400 regions, 8000 qps for the whole table. HBase 1.0, and heap -Xmx32G, -Xms32G,
-Xmn4G.
Thanks,
lujinhong
> On Jun 22, 2016, at 15:53, Heng Chen wrote:
Hi Heng,
2016-06-22 08:13:42,256 WARN
[B.defaultRpcServer.handler=32,queue=2,port=16020] regionserver.HRegion:
Failed getting lock in batch put,
row=\x01\xD6\xFD\xC9\xDC\xE4\x08\xC4\x0D\xBESM\xC2\x82\x14Z
java.io.IOException: Timed out waiting for lock for row:
How many regions do you have for the table? Is the 8000 qps for one RS or for
the whole table? What's your Java heap size now? And what's your HBase
version?
2016-06-22 12:39 GMT+08:00 jinhong lu :
> I got a cluster of 200 regionserver, and one of the tables is about 3T and
>
Could you paste the whole jstack and the related RS log? It seems the row write
lock was occupied by some thread. We need more information to find it.
2016-06-22 13:48 GMT+08:00 vishnu rao :
> need some help. this has happened for 2 of my servers