Re: Re: question on "drain region servers"

2016-04-25 Thread WangYQ
Thanks.
In HBase 0.99.0, I found the Ruby file draining_servers.rb.


I have some suggestions for this tool:
1. If I add region server hs1 to the draining servers and hs1 restarts, the znode
still exists in ZooKeeper, but the HMaster will no longer treat hs1 as a draining
server. I think that when we add a region server to the draining servers, we do not
need to store the start code in ZooKeeper, just the hostname and port.
2. If we add hs1 to the draining servers but hs1 keeps restarting, we will need to
add hs1 several times; likewise, when we want to delete hs1's draining information,
we will need to delete it several times.


 
Finally, what was the original motivation for this tool? Some scenario
descriptions would be helpful.
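Suggestion 1 can be illustrated with a small sketch (plain Python with hypothetical names, not HBase code): a server name that embeds a start code changes on every restart, so a draining znode keyed by the full server name no longer matches the restarted server.

```python
# Hypothetical illustration: HBase identifies a region server as
# host,port,startcode. The start code changes on restart, so a draining
# znode that embeds it stops matching the restarted server.
def znode_name(host, port, startcode=None):
    parts = [host, str(port)]
    if startcode is not None:
        parts.append(str(startcode))
    return ",".join(parts)

before_restart = znode_name("hs1.example.com", 16020, 1461540000000)
after_restart = znode_name("hs1.example.com", 16020, 1461541111111)
assert before_restart != after_restart  # same server, stale draining znode

# Storing only host and port would survive the restart:
assert znode_name("hs1.example.com", 16020) == "hs1.example.com,16020"
```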






At 2016-04-26 11:33:10, "Ted Yu"  wrote:
>Please take a look at:
>bin/draining_servers.rb
>
>On Mon, Apr 25, 2016 at 8:12 PM, WangYQ  wrote:
>
>> In HBase, I see there is a "drain region server" feature.
>>
>>
>> If a region server is added to the draining servers in ZooKeeper, regions
>> will no longer be moved to it.
>>
>>
>> But how does a region server get added to the draining servers? Do we add
>> it by hand, or does it add itself automatically?


Re: question on "drain region servers"

2016-04-25 Thread Ted Yu
Please take a look at:
bin/draining_servers.rb

On Mon, Apr 25, 2016 at 8:12 PM, WangYQ  wrote:

> In HBase, I see there is a "drain region server" feature.
>
>
> If a region server is added to the draining servers in ZooKeeper, regions
> will no longer be moved to it.
>
>
> But how does a region server get added to the draining servers? Do we add
> it by hand, or does it add itself automatically?


question on "drain region servers"

2016-04-25 Thread WangYQ
In HBase, I see there is a "drain region server" feature.


If a region server is added to the draining servers in ZooKeeper, regions will
no longer be moved to it.


But how does a region server get added to the draining servers? Do we add it by
hand, or does it add itself automatically?

Re: Slow sync cost

2016-04-25 Thread Saad Mufti
Thanks, the meaning makes sense now but I still need to figure out why we
keep seeing this. How would I know if this is just us overloading the
capacity of our system with too much write load vs some configuration
problem?

Thanks.


Saad

On Mon, Apr 25, 2016 at 4:25 PM, Ted Yu  wrote:

> w.r.t. the pipeline, please see this description:
>
> http://itm-vm.shidler.hawaii.edu/HDFS/ArchDocUseCases.html
>
> On Mon, Apr 25, 2016 at 12:18 PM, Saad Mufti  wrote:
>
> > Hi,
> >
> > In our large HBase cluster based on CDH 5.5 in AWS, we're constantly
> seeing
> > the following messages in the region server logs:
> >
> > 2016-04-25 14:02:55,178 INFO
> > org.apache.hadoop.hbase.regionserver.wal.FSHLog: Slow sync cost: 258 ms,
> > current pipeline:
> > [DatanodeInfoWithStorage[10.99.182.165:50010
> > ,DS-281d4c4f-23bd-4541-bedb-946e57a0f0fd,DISK],
> > DatanodeInfoWithStorage[10.99.182.236:50010
> > ,DS-f8e7e8c9-6fa0-446d-a6e5-122ab35b6f7c,DISK],
> > DatanodeInfoWithStorage[10.99.182.195:50010
> > ,DS-3beae344-5a4a-4759-ad79-a61beabcc09d,DISK]]
> >
> >
> > These happen regularly while HBase appears to be operating normally with
> > decent read and write performance. We do have occasional performance
> > problems when regions are auto-splitting, and at first I thought this was
> > related, but now I see it happens all the time.
> >
> >
> > Can someone explain what this really means and whether we should be concerned? I
> > tracked down the source code that outputs it in
> >
> >
> >
> hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java
> >
> > but after going through the code I think I'd need to know much more about
> > the code to glean anything from it or the associated JIRA ticket
> > https://issues.apache.org/jira/browse/HBASE-11240.
> >
> > Also, what is this "pipeline" the ticket and code talks about?
> >
> > Thanks in advance for any information and/or clarification anyone can
> > provide.
> >
> > 
> >
> > Saad
> >
>


Re: Hbase shell script from java

2016-04-25 Thread Saad Mufti
I don't know why you would want to do this all through the hbase shell if
your main driver is Java, unless you have a lot of existing complicated
scripts you want to leverage. Why not just write Java code against the
standard Java HBase client?

Or, if you need different parameters each time you invoke a script,
parameterize the script and pass different arguments on each invocation.
HBase shell scripts are just Ruby scripts with special built-in HBase
commands and functions, so parameterize them the same way you would
parameterize any Ruby script.
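One way to drive such a parameterized script from another program is to build the command line and run it as a subprocess. This is a sketch (shown in Python for brevity; the same ProcessBuilder pattern applies in Java). The script name is hypothetical, and whether `hbase shell` forwards extra arguments to the script's ARGV varies by version, so treat the argument passing as an assumption to verify:

```python
import subprocess

def hbase_shell_cmd(script, *args):
    # Hypothetical helper: build the non-interactive command line
    # "hbase shell <script> <args...>". Whether the extra arguments reach
    # the script's ARGV depends on the HBase version; this is a sketch.
    return ["hbase", "shell", script, *args]

cmd = hbase_shell_cmd("create_tables.rb", "my_namespace", "3")
assert cmd == ["hbase", "shell", "create_tables.rb", "my_namespace", "3"]
# subprocess.run(cmd, check=True)  # would execute against the configured cluster
```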


Saad


On Mon, Apr 25, 2016 at 7:10 PM, Saurabh Malviya (samalviy) <
samal...@cisco.com> wrote:

> Thanks, that will work for me.
>
> I am just curious how people do this in industry. Consider a case where you
> have more than 100 tables and need to modify the table scripts heavily for
> each deployment, for performance or other reasons.
>
> -Saurabh
>
> -Original Message-
> From: Saad Mufti [mailto:saad.mu...@gmail.com]
> Sent: Sunday, April 24, 2016 2:55 PM
> To: user@hbase.apache.org
> Subject: Re: Hbase shell script from java
>
> Why can't you install hbase on your local machine, with the configuration
> pointing it to your desired cluster, then run the hbase shell and its
> script locally?
>
> I believe the HBase web UI has a convenient link to download client
> configuration.
>
> 
> Saad
>
>
> On Sun, Apr 24, 2016 at 5:22 PM, Saurabh Malviya (samalviy) <
> samal...@cisco.com> wrote:
>
> > I need to execute this command remotely.
> >
> > Right now I SSH into the HBase master and execute the script there, which
> > is not the most elegant way to do it.
> >
> > Saurabh
> >
> > Please see
> >
> >
> > https://blog.art-of-coding.eu/executing-operating-system-commands-from
> > -java/
> >
> > > On Apr 23, 2016, at 10:18 PM, Saurabh Malviya (samalviy) <
> > samal...@cisco.com> wrote:
> > >
> > > Hi,
> > >
> > > Is there any way to run an HBase shell script from Java? I also asked
> > this question earlier at the URL below.
> > >
> > > We have a bunch of scripts that need to change frequently for
> > performance tuning.
> > >
> > >
> > http://grokbase.com/p/hbase/user/161ezbnk11/run-hbase-shell-script-fro
> > m-java
> > >
> > >
> > > -Saurabh
> >
>


RE: Hbase shell script from java

2016-04-25 Thread Saurabh Malviya (samalviy)
Thanks, that will work for me.

I am just curious how people do this in industry. Consider a case where you have
more than 100 tables and need to modify the table scripts heavily for each
deployment, for performance or other reasons.

-Saurabh

-Original Message-
From: Saad Mufti [mailto:saad.mu...@gmail.com] 
Sent: Sunday, April 24, 2016 2:55 PM
To: user@hbase.apache.org
Subject: Re: Hbase shell script from java

Why can't you install hbase on your local machine, with the configuration 
pointing it to your desired cluster, then run the hbase shell and its script 
locally?

I believe the HBase web UI has a convenient link to download client 
configuration.


Saad


On Sun, Apr 24, 2016 at 5:22 PM, Saurabh Malviya (samalviy) < 
samal...@cisco.com> wrote:

> I need to execute this command remotely.
>
> Right now I SSH into the HBase master and execute the script there, which
> is not the most elegant way to do it.
>
> Saurabh
>
> Please see
>
>
> https://blog.art-of-coding.eu/executing-operating-system-commands-from
> -java/
>
> > On Apr 23, 2016, at 10:18 PM, Saurabh Malviya (samalviy) <
> samal...@cisco.com> wrote:
> >
> > Hi,
> >
> > Is there any way to run an HBase shell script from Java? I also asked
> this question earlier at the URL below.
> >
> > We have a bunch of scripts that need to change frequently for
> performance tuning.
> >
> >
> http://grokbase.com/p/hbase/user/161ezbnk11/run-hbase-shell-script-fro
> m-java
> >
> >
> > -Saurabh
>


Re: Slow sync cost

2016-04-25 Thread Ted Yu
w.r.t. the pipeline, please see this description:

http://itm-vm.shidler.hawaii.edu/HDFS/ArchDocUseCases.html

On Mon, Apr 25, 2016 at 12:18 PM, Saad Mufti  wrote:

> Hi,
>
> In our large HBase cluster based on CDH 5.5 in AWS, we're constantly seeing
> the following messages in the region server logs:
>
> 2016-04-25 14:02:55,178 INFO
> org.apache.hadoop.hbase.regionserver.wal.FSHLog: Slow sync cost: 258 ms,
> current pipeline:
> [DatanodeInfoWithStorage[10.99.182.165:50010
> ,DS-281d4c4f-23bd-4541-bedb-946e57a0f0fd,DISK],
> DatanodeInfoWithStorage[10.99.182.236:50010
> ,DS-f8e7e8c9-6fa0-446d-a6e5-122ab35b6f7c,DISK],
> DatanodeInfoWithStorage[10.99.182.195:50010
> ,DS-3beae344-5a4a-4759-ad79-a61beabcc09d,DISK]]
>
>
> These happen regularly while HBase appears to be operating normally with
> decent read and write performance. We do have occasional performance
> problems when regions are auto-splitting, and at first I thought this was
> related, but now I see it happens all the time.
>
>
> Can someone explain what this really means and whether we should be concerned? I
> tracked down the source code that outputs it in
>
>
> hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java
>
> but after going through the code I think I'd need to know much more about
> the code to glean anything from it or the associated JIRA ticket
> https://issues.apache.org/jira/browse/HBASE-11240.
>
> Also, what is this "pipeline" the ticket and code talks about?
>
> Thanks in advance for any information and/or clarification anyone can
> provide.
>
> 
>
> Saad
>


Re: current leaseholder is trying to recreate file error with ProcedureV2

2016-04-25 Thread Ted Yu
Can you pastebin more of the master log ?

Which version of hadoop are you using ?

A log snippet from the namenode w.r.t. state-0073.log may also
provide some more clues.

Thanks

On Mon, Apr 25, 2016 at 12:56 PM, donmai  wrote:

> Hi all,
>
> I'm getting a strange error during table creation / disable in HBase 1.1.2:
>
>
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException):
> failed to create file /hbase/MasterProcWALs/state-0073.log
> for DFSClient_NONMAPREDUCE_87753856_1 for client 10.66.102.192 because
> current leaseholder is trying to recreate file.
>
> Looks somewhat related to HBASE-14234 - what's the root cause behind this
> error message?
>
> Full stack trace below:
>
> -
>
> 2016-04-25 15:24:01,356 WARN
>  [B.defaultRpcServer.handler=7,queue=7,port=6] wal.WALProcedureStore:
> failed to create log file with id=73
>
>
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException):
> failed to create file /hbase/MasterProcWALs/state-0073.log
> for DFSClient_NONMAPREDUCE_87753856_1 for client 10.66.102.192 because
> current leaseholder is trying to recreate file.
>
> at
>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2988)
>
> at
>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2737)
>
> at
>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2632)
>
> at
>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2519)
>
> at
>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:566)
>
> at
>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:394)
>
> at
>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>
> at
>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:635)
>
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>
> at java.security.AccessController.doPrivileged(Native Method)
>
> at javax.security.auth.Subject.doAs(Subject.java:415)
>
> at
>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>
> at org.apache.hadoop.ipc.Client.call(Client.java:1468)
>
> at org.apache.hadoop.ipc.Client.call(Client.java:1399)
>
> at
>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:241)
>
> at com.sun.proxy.$Proxy31.create(Unknown Source)
>
> at
>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:295)
>
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>
> at
>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>
> at
>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>
> at java.lang.reflect.Method.invoke(Method.java:606)
>
> at
>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>
> at
>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>
> at com.sun.proxy.$Proxy32.create(Unknown Source)
>
> at
>
> org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1738)
>
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1671)
>
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1596)
>
> at
>
> org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:397)
>
> at
>
> org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:393)
>
> at
>
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>
> at
>
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:393)
>
> at
>
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:337)
>
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:912)
>
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:893)
>
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:790)
>
> at
>
> org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.rollWriter(WALProcedureStore.java:706)
>
> at
>
> org.apache.hadoop.hbase.procedure2

current leaseholder is trying to recreate file error with ProcedureV2

2016-04-25 Thread donmai
Hi all,

I'm getting a strange error during table creation / disable in HBase 1.1.2:

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException):
failed to create file /hbase/MasterProcWALs/state-0073.log
for DFSClient_NONMAPREDUCE_87753856_1 for client 10.66.102.192 because
current leaseholder is trying to recreate file.

Looks somewhat related to HBASE-14234 - what's the root cause behind this
error message?

Full stack trace below:

-

2016-04-25 15:24:01,356 WARN
 [B.defaultRpcServer.handler=7,queue=7,port=6] wal.WALProcedureStore:
failed to create log file with id=73

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException):
failed to create file /hbase/MasterProcWALs/state-0073.log
for DFSClient_NONMAPREDUCE_87753856_1 for client 10.66.102.192 because
current leaseholder is trying to recreate file.

at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2988)

at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2737)

at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2632)

at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2519)

at
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:566)

at
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:394)

at
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)

at
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:635)

at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)

at java.security.AccessController.doPrivileged(Native Method)

at javax.security.auth.Subject.doAs(Subject.java:415)

at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)

at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

at org.apache.hadoop.ipc.Client.call(Client.java:1468)

at org.apache.hadoop.ipc.Client.call(Client.java:1399)

at
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:241)

at com.sun.proxy.$Proxy31.create(Unknown Source)

at
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:295)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:606)

at
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)

at
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)

at com.sun.proxy.$Proxy32.create(Unknown Source)

at
org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1738)

at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1671)

at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1596)

at
org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:397)

at
org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:393)

at
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)

at
org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:393)

at
org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:337)

at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:912)

at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:893)

at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:790)

at
org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.rollWriter(WALProcedureStore.java:706)

at
org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.rollWriter(WALProcedureStore.java:676)

at
org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.checkAndTryRoll(WALProcedureStore.java:655)

at
org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.insert(WALProcedureStore.java:355)

at
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.submitProcedure(ProcedureExecutor.java:524)

at
org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1481)


Slow sync cost

2016-04-25 Thread Saad Mufti
Hi,

In our large HBase cluster based on CDH 5.5 in AWS, we're constantly seeing
the following messages in the region server logs:

2016-04-25 14:02:55,178 INFO
org.apache.hadoop.hbase.regionserver.wal.FSHLog: Slow sync cost: 258 ms,
current pipeline:
[DatanodeInfoWithStorage[10.99.182.165:50010,DS-281d4c4f-23bd-4541-bedb-946e57a0f0fd,DISK],
DatanodeInfoWithStorage[10.99.182.236:50010,DS-f8e7e8c9-6fa0-446d-a6e5-122ab35b6f7c,DISK],
DatanodeInfoWithStorage[10.99.182.195:50010
,DS-3beae344-5a4a-4759-ad79-a61beabcc09d,DISK]]


These happen regularly while HBase appears to be operating normally with
decent read and write performance. We do have occasional performance
problems when regions are auto-splitting, and at first I thought this was
related, but now I see it happens all the time.


Can someone explain what this really means and whether we should be concerned? I
tracked down the source code that outputs it in

hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java

but after going through the code I think I'd need to know much more about
the code to glean anything from it or the associated JIRA ticket
https://issues.apache.org/jira/browse/HBASE-11240.

Also, what is this "pipeline" the ticket and code talks about?

Thanks in advance for any information and/or clarification anyone can
provide.



Saad
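For what it's worth, the sync cost and the datanode pipeline can be pulled out of a log line like the one above with a short script. This is a hypothetical parsing sketch, not anything from HBase itself:

```python
import re

# Abbreviated copy of the FSHLog message quoted above.
line = ("Slow sync cost: 258 ms, current pipeline: "
        "[DatanodeInfoWithStorage[10.99.182.165:50010,DS-281d4c4f,DISK], "
        "DatanodeInfoWithStorage[10.99.182.236:50010,DS-f8e7e8c9,DISK]]")

# Each pipeline entry has the form DatanodeInfoWithStorage[host:port,storage,type].
nodes = re.findall(r"DatanodeInfoWithStorage\[([\d.]+:\d+)", line)
cost_ms = int(re.search(r"Slow sync cost: (\d+) ms", line).group(1))

assert cost_ms == 258
assert nodes == ["10.99.182.165:50010", "10.99.182.236:50010"]
```

Aggregating these over a day of logs makes it easy to see whether slow syncs cluster on particular datanodes or correlate with write bursts.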


Re: Balancing reads and writes

2016-04-25 Thread Stack
Make sure you are on an HBase that has this fix in it too: HBASE-15213
(append goes the same code path as increment).
St.Ack

On Mon, Apr 25, 2016 at 1:38 AM, Kevin Bowling 
wrote:

> Yeah, here's the config I ended up with. Out of the box it had really
> severe blocking during write bursts; reads are much better with this and
> with the handlers turned up a bit:
>
>   <property>
>     <name>hbase.ipc.server.callqueue.read.ratio</name>
>     <value>0.4</value>
>   </property>
>   <property>
>     <name>hbase.ipc.server.callqueue.scan.ratio</name>
>     <value>0.5</value>
>   </property>
>   <property>
>     <name>hbase.ipc.server.callqueue.handler.factor</name>
>     <value>0.5</value>
>   </property>
>
> Regards,
> Kevin
>
> On Sat, Apr 16, 2016 at 9:27 PM, Vladimir Rodionov  >
> wrote:
>
> > There are separate RPC queues for reads and writes in 1.0+ (not sure about
> > 0.98). You need to set the sizes of these queues accordingly.
> >
> > -Vlad
> >
> > On Sat, Apr 16, 2016 at 4:23 PM, Kevin Bowling  >
> > wrote:
> >
> > > Hi,
> > >
> > > Using OpenTSDB 2.2 with its "appends" feature, I see significant impact
> > on
> > > read performance when writes are happening.  If a process injects a few
> > > hundred thousand points in batch, the call queues on the region servers
> > > blow up, and until they drain, a new read request is basically blocked
> > > at the end of the line.
> > >
> > > Any recommendations for keeping reads balanced vs writes?
> > >
> > > Regards,
> > > Kevin
> > >
> >
>


Re: New blog post: HDFS HSM and HBase by Jingcheng Du and Wei Zhou

2016-04-25 Thread Heng Chen
That's great!  We are ready to use SSD to improve read performance now.

2016-04-23 8:25 GMT+08:00 Stack :

> It is well worth the read. It goes deep, so it is a bit long, and I had to
> cut it up into Apache Blog sized pieces. Start reading here:
> https://blogs.apache.org/hbase/entry/hdfs_hsm_and_hbase_part
>
> St.Ack
>


Re: Balancing reads and writes

2016-04-25 Thread Kevin Bowling
Yeah, here's the config I ended up with. Out of the box it had really
severe blocking during write bursts; reads are much better with this and
with the handlers turned up a bit:

  <property>
    <name>hbase.ipc.server.callqueue.read.ratio</name>
    <value>0.4</value>
  </property>
  <property>
    <name>hbase.ipc.server.callqueue.scan.ratio</name>
    <value>0.5</value>
  </property>
  <property>
    <name>hbase.ipc.server.callqueue.handler.factor</name>
    <value>0.5</value>
  </property>

Regards,
Kevin

On Sat, Apr 16, 2016 at 9:27 PM, Vladimir Rodionov 
wrote:

> There are separate RPC queues for reads and writes in 1.0+ (not sure about
> 0.98). You need to set the sizes of these queues accordingly.
>
> -Vlad
>
> On Sat, Apr 16, 2016 at 4:23 PM, Kevin Bowling 
> wrote:
>
> > Hi,
> >
> > Using OpenTSDB 2.2 with its "appends" feature, I see significant impact
> on
> > read performance when writes are happening.  If a process injects a few
> > hundred thousand points in batch, the call queues on the region servers
> > blow up, and until they drain, a new read request is basically blocked
> > at the end of the line.
> >
> > Any recommendations for keeping reads balanced vs writes?
> >
> > Regards,
> > Kevin
> >
>
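The ratios in Kevin's config can be read as simple queue arithmetic. The sketch below is a rough simplification (an assumption about how the executor rounds and splits, not HBase's exact code) of how handler.factor, read.ratio, and scan.ratio partition the RPC call queues:

```python
def split_queues(handlers, handler_factor, read_ratio, scan_ratio):
    # Rough simplification (not HBase's exact rounding/logic) of how the
    # hbase.ipc.server.callqueue.* settings partition the RPC call queues.
    total = max(1, int(handlers * handler_factor))  # number of call queues
    reads = int(total * read_ratio)                 # queues serving reads
    writes = total - reads
    scans = int(reads * scan_ratio)                 # read queues reserved for scans
    return writes, reads - scans, scans

# With 30 handlers and the config above: 15 queues -> 9 write, 3 get, 3 scan.
assert split_queues(30, 0.5, 0.4, 0.5) == (9, 3, 3)
```

The point of the split is isolation: a burst of writes can only fill the write queues, so gets and scans keep their own dedicated handlers.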


Re: Can not connect local java client to a remote Hbase

2016-04-25 Thread SOUFIANI Mustapha | السفياني مصطفى
Hi,
Thanks, Sreeram, for replying.
But could you please tell me how this can be done? Is it in
hbase-site.xml?
Thanks in advance.
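For reference, a hedged sketch of what such an hbase-site.xml change could look like; the property names below are an assumption and should be checked against the HBase version in use:

```xml
<!-- Hypothetical hbase-site.xml fragment: bind the master and region server
     RPC to all interfaces instead of the loopback address. The property
     names are an assumption; verify them for your HBase version. -->
<property>
  <name>hbase.master.ipc.address</name>
  <value>0.0.0.0</value>
</property>
<property>
  <name>hbase.regionserver.ipc.address</name>
  <value>0.0.0.0</value>
</property>
```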

2016-04-23 7:47 GMT+01:00 Sreeram :

> Hi Soufiani,
>
> Can you try changing your configuration to have the region server listen on
> 0.0.0.0:16020 and the master listen on 0.0.0.0:16000?
>
> 127.0.0.1 being local loopback will not be accessible from outside.
>
> Regards,
> Sreeram
>
>
> On Fri, Apr 22, 2016 at 9:00 PM, SOUFIANI Mustapha | السفياني مصطفى <
> s.mustaph...@gmail.com> wrote:
>
> > Thanks, Sachin, for your help. I already checked this issue on the
> > official Pentaho users forum and it seems to be OK for them too.
> > I really don't know what the problem could be, but anyway, thanks
> > again for your help.
> > Regards.
> >
> > 2016-04-22 16:17 GMT+01:00 Sachin Mittal :
> >
> > > Your ports are open. Your settings are fine. The issue seems to be
> > > elsewhere, but I am not sure where.
> > > Check with Pentaho, maybe.
> > >
> > > On Fri, Apr 22, 2016 at 8:44 PM, SOUFIANI Mustapha | السفياني مصطفى <
> > > s.mustaph...@gmail.com> wrote:
> > >
> > > > Maybe those ports are not open:
> > > > hduser@big-services:~$ telnet localhost 16020
> > > > Trying ::1...
> > > > Trying 127.0.0.1...
> > > > Connected to localhost.
> > > > Escape character is '^]'.
> > > >
> > >
> >
>