Re: HBase cluster health java api

2017-01-19 Thread kiran
Thanks Ted.. Will explore

On Thu, Jan 19, 2017 at 6:54 PM, Ted Yu  wrote:

> You can use the following method from HBaseAdmin:
>
>   public ClusterStatus getClusterStatus() throws IOException {
>
> where ClusterStatus has getter for retrieving live server count:
>
>   public int getServersSize() {
>
> and getter for retrieving dead server count:
>
>   public int getDeadServers() {
>
>
> FYI
>
> On Thu, Jan 19, 2017 at 4:52 AM, kiran wrote:
>
> > Dear All,
> >
> > Is there a way to check HBase cluster health (Master, ZooKeeper,
> > Regionservers) using the Java API? I have seen HBaseAdmin.clusterHealth,
> > but it checks the health of the master and ZooKeeper only.
> >
> > My use case is slightly different.
> >
> > I need to monitor the health of the HBase cluster, including region
> > servers, every 2 minutes. 2 minutes is my turnaround time and I can't go
> > beyond that.
> >
> > If X% (30-70) of regionservers fail, or the master or ZooKeeper fails, I
> > will treat the HBase cluster as down.
> >
> > Hbase version : 0.98.6
> >
> > --
> > Thank you
> > Kiran Sarvabhotla
> >
> > -Even a correct decision is wrong when it is taken late
> >
>



-- 
Thank you
Kiran Sarvabhotla

-Even a correct decision is wrong when it is taken late


Re: How to store/get video files and image files in hbase?

2017-01-19 Thread Ted Yu
Manjeet:
For #2, the threshold / maxMobDataSize values on the command line are quite
low. You can give them larger values to mimic your usage.
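For instance, something like this (the numbers are purely illustrative,
scaled up toward the 100 KB MOB threshold used elsewhere in this thread,
reusing the flags from the command you quoted):

  sudo -u hbase hbase org.apache.hadoop.hbase.IntegrationTestIngestWithMOB \
    -threshold 102400 \
    -minMobDataSize 51200 \
    -maxMobDataSize 512000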

For #3, the snippet you cited is correct.

Cheers
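For #3, a rough end-to-end sketch (this assumes a build that ships the MOB
API, i.e. the vendor artifacts discussed below or Apache HBase 2.0+; the
table name and file path are made up):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class MobPutExample {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Create a table with a MOB-enabled family, as in the snippet below.
      HColumnDescriptor hcd = new HColumnDescriptor("f");
      hcd.setMobEnabled(true);      // values above the threshold go to MOB files
      hcd.setMobThreshold(102400L); // 100 KB
      HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("media"));
      htd.addFamily(hcd);
      admin.createTable(htd);

      // Writing is then just a normal Put of the file's bytes; MOB storage
      // is transparent to the client.
      byte[] blob = Files.readAllBytes(Paths.get("/tmp/sample.jpg"));
      Put put = new Put(Bytes.toBytes("row1"));
      put.addColumn(Bytes.toBytes("f"), Bytes.toBytes("data"), blob);
      try (Table table = conn.getTable(TableName.valueOf("media"))) {
        table.put(put);
      }
    }
  }
}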

On Thu, Jan 19, 2017 at 9:48 AM, Josh Elser  wrote:

> I would suggest that you ask vendor-specific questions via the vendor's
> channel in the future.
>
> Add the following repository to your Maven project[1] and specify the
> correct version for the artifact (e.g. 1.1.2.2.4.2.0-258) to pull in the
> correct dependencies for your project.
>
> [1] http://repo.hortonworks.com/content/groups/public
>
>
> Manjeet Singh wrote:
>
>> Hi All
>>
>> I am using HBase 1.1.2 on HDP 2.4.2.0. I have a few questions:
>>
>> (1) The code below does not compile; what is the Maven dependency for it?
>>
>> HColumnDescriptor hcd = new HColumnDescriptor("f");
>> hcd.setMobEnabled(true);
>> hcd.setMobThreshold(102400L);
>>
>> (2) Below are two links, and both talk about the same test utility:
>>
>>   sudo -u hbase hbase org.apache.hadoop.hbase.IntegrationTestIngestWithMOB \
>>     -threshold 1024 \
>>     -minMobDataSize 512 \
>>     -maxMobDataSize threshold * 5
>>
>> What is the relevance of this utility, and what should I expect from it?
>> I ran it and it produced a lot of logs.
>>
>> How can I validate it?
>>
>> https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.0/bk_data-access/content/ch_MOB-support.html
>>
>> https://hbase.apache.org/book.html#hbase_mob
>>
>> (3) I am saving a large file by converting it into a byte array and using
>> a normal Put command. I have enabled all the settings mentioned in the
>> URLs above; the only missing part is:
>>
>> HColumnDescriptor hcd = new HColumnDescriptor("f");
>> hcd.setMobEnabled(true);
>> hcd.setMobThreshold(102400L);
>>
>> Can anybody tell me if this is OK?
>>
>> Thanks
>>
>> Manjeet
>>
>> On Sun, Jan 15, 2017 at 4:34 PM, Manjeet Singh wrote:
>>
>>> It could be 5 MB, 15 MB, or 50 MB; the sizes vary.
>>>
>>> -Manjeet
>>>
>>> On Thu, Jan 12, 2017 at 11:06 PM, Ted Yu wrote:
>>>
>>>> How big are your video files expected to be ?
>>>>
>>>> I assume you have read:
>>>>
>>>> http://hbase.apache.org/book.html#hbase_mob
>>>>
>>>> Is the example there not enough ?
>>>>
>>>> Cheers
>>>>
>>>> On Thu, Jan 12, 2017 at 9:33 AM, Manjeet Singh
>>>> <manjeet.chand...@gmail.com> wrote:
>>>>
>>>>> Hi All,
>>>>>
>>>>> Can anybody help me understand how to store video files and image files
>>>>> in HBase?
>>>>>
>>>>> I have seen lots of blogs but have not found an exact step-by-step
>>>>> example.
>>>>>
>>>>> If anybody is able to share Java native code, it would be quite helpful.
>>>>>
>>>>> Thanks
>>>>> Manjeet
>>>
>>> --
>>> luv all


Re: How to store/get video files and image files in hbase?

2017-01-19 Thread Josh Elser
I would suggest that you ask vendor-specific questions via the vendor's
channel in the future.


Add the following repository to your Maven project[1] and specify the
correct version for the artifact (e.g. 1.1.2.2.4.2.0-258) to pull in the
correct dependencies for your project.


[1] http://repo.hortonworks.com/content/groups/public

Manjeet Singh wrote:

Hi All

I am using HBase 1.1.2 on HDP 2.4.2.0. I have a few questions:

(1) The code below does not compile; what is the Maven dependency for it?

HColumnDescriptor hcd = new HColumnDescriptor("f");
hcd.setMobEnabled(true);
hcd.setMobThreshold(102400L);

(2) Below are two links, and both talk about the same test utility:

  sudo -u hbase hbase org.apache.hadoop.hbase.IntegrationTestIngestWithMOB \
    -threshold 1024 \
    -minMobDataSize 512 \
    -maxMobDataSize threshold * 5

What is the relevance of this utility, and what should I expect from it?
I ran it and it produced a lot of logs.

How can I validate it?

https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.0/bk_data-access/content/ch_MOB-support.html

https://hbase.apache.org/book.html#hbase_mob

(3) I am saving a large file by converting it into a byte array and using
a normal Put command. I have enabled all the settings mentioned in the
URLs above; the only missing part is:

HColumnDescriptor hcd = new HColumnDescriptor("f");
hcd.setMobEnabled(true);
hcd.setMobThreshold(102400L);

Can anybody tell me if this is OK?

Thanks

Manjeet

On Sun, Jan 15, 2017 at 4:34 PM, Manjeet Singh wrote:


It could be 5 MB, 15 MB, or 50 MB; the sizes vary.

-Manjeet

On Thu, Jan 12, 2017 at 11:06 PM, Ted Yu  wrote:


How big are your video files expected to be ?

I assume you have read:

http://hbase.apache.org/book.html#hbase_mob

Is the example there not enough ?

Cheers

On Thu, Jan 12, 2017 at 9:33 AM, Manjeet Singh
<manjeet.chand...@gmail.com> wrote:


Hi All,

Can anybody help me understand how to store video files and image files in
HBase?

I have seen lots of blogs but have not found an exact step-by-step example.

If anybody is able to share Java native code, it would be quite helpful.


Thanks
Manjeet




--
luv all


Re: hbase has problems with two hostname

2017-01-19 Thread Yu Li
If you have two network cards and one hostname bound to each, please make
sure to set the hostname you'd like HBase to use in /etc/hosts on each
node, including the HMaster and every RS, as illustrated below.
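A one-line illustration, reusing a host from this thread (with glibc, the
first name after the IP is the canonical hostname returned by reverse
lookup, so the name HBase should use goes first; the ordering shown here is
an assumption about the intended setup):

10.19.16.31  bjsh19-16-31.qbos.com  wjsa-tsl02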

Hope this helps.

Best Regards,
Yu

On 17 January 2017 at 23:47, Dima Spivak  wrote:

> Is there any other DNS server running that might be confusing reverse
> lookup? What happens if you run `host YOUR_RS_IP_ADDRESS`?
>
> And what kind of machines are you using in your deployment?
>
> Cheers,
>
> On Mon, Jan 16, 2017 at 11:34 PM C R  wrote:
>
> > Thanks,
> >
> >
> >
> >
> >
> > I deployed my HBase very simply, which has one Master and three
> > regionservers.
> >
> >
> >
> >
> >
> > [hbase@bjsh19-16-30 conf]$ more regionservers
> >
> > bjsh19-16-33.qbos.com
> >
> > bjsh19-16-34.qbos.com
> >
> > bjsh19-16-35.qbos.com
> >
> > [hbase@bjsh19-16-30 conf]$ more hbase-site.xml
> >
> > ...
> >
> > <configuration>
> >   <property>
> >     <name>zookeeper.znode.parent</name>
> >     <value>/hbase117</value>
> >   </property>
> >   <property>
> >     <name>hbase.rootdir</name>
> >     <value>hdfs://bidc/hbase117</value>
> >   </property>
> >   <property>
> >     <name>hbase.zookeeper.quorum</name>
> >     <value>bjsh19-16-30.qbos.com,bjsh19-16-31.qbos.com,bjsh19-16-32.qbos.com</value>
> >   </property>
> >   <property>
> >     <name>hbase.cluster.distributed</name>
> >     <value>true</value>
> >   </property>
> >   <property>
> >     <name>hbase.zookeeper.property.clientPort</name>
> >     <value>2181</value>
> >   </property>
> > </configuration>
> >
> > The unusual part is the /etc/hosts file, which maps one IP to two
> > hostnames on all nodes, so the log has this message:
> >
> > ...
> >
> > the server that tried to transition was wjsa-tsl05,16020,1484623636195
> > not the expected bjsh19-16-34.qbos.com,16020,1484623636195
> >
> > ...
> >
> >
> >
> > From: Dima Spivak
> > Sent: January 17, 2017 4:50
> > To: user@hbase.apache.org
> > Subject: Re: hbase has problems with two hostname
> >
> >
> > Hi C R,
> >
> > Like many Hadoop-like services, HBase is pretty temperamental about
> > requiring forward and reverse DNS to work properly. FWIW, the
> > configuration file where you can populate RegionServers doesn't tend to
> > matter as long as the hbase-site.xml file is populated correctly (it's
> > just used to start daemons from one place).
> >
> > If you pass along more details about how exactly you're deploying HBase,
> > we might be able to give more advice.
> >
> > On Mon, Jan 16, 2017 at 8:00 PM C R  wrote:
> >
> >
> >
> > >
> >
> > >
> >
> > >
> >
> > >
> >
> > >
> >
> > >
> >
> > >
> >
> > >
> >
> > >
> >
> > >
> >
> > >
> >
> > >
> >
> > >
> >
> > >
> >
> > >
> >
> > >
> >
> > > more /etc/hosts
> > >
> > > ...
> > >
> > > 10.19.16.31  bjsh19-16-31.qbos.com wjsa-tsl02
> > >
> > > ...
> > >
> > > There are six regionservers listed in the web console, but only three
> > > in the configuration file, and the metadata tables are also not online.
> > >
> > > The HMaster dies after a while.
> > >
> > > What should I do?
> > >
> > > snapshot:
> > >
> > > 2017-01-17 11:45:24,394 INFO  [MASTER_SERVER_OPERATIONS-bjsh19-16-30:16000-0] master.AssignmentManager: Assigning hbase:namespace,,1484623643279.30fab746cb3b6ceadcbda421459204b9. to bjsh19-16-34.qbos.com,16020,1484623636195
> > >
> > > 2017-01-17 11:45:24,395 INFO  [bjsh19-16-30:16000.activeMasterManager] master.AssignmentManager: Joined the cluster in 23ms, failover=true
> > >
> > > 2017-01-17 11:50:24,314 FATAL [bjsh19-16-30:16000.activeMasterManager] master.HMaster: Failed to become active master
> > >
> > > java.io.IOException: Timedout 300000ms waiting for namespace table to be assigned
> > > at org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:104)
> > > at org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:986)
> > > at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:780)
> > > at ...

Re: SocketTimeoutException on regionservers

2017-01-19 Thread Yu Li
Have you ever checked the RS GC logs and observed any long pauses?

Please also check the load on your RS/DN machines when this
SocketTimeoutException happens; there may be a pause caused by a system
stall rather than by JVM GC.

Hope this helps.

Best Regards,
Yu

On 17 January 2017 at 13:29, Stack  wrote:

> Timeout after waiting ten seconds to read from HDFS is no fun. Tell us more
> about your HDFS setup. You collect system metrics on disks? Machines are
> healthy all around? How frequently does this occur?
>
> Thanks for the question,
> S
>
> On Thu, Jan 12, 2017 at 10:18 AM, Tulasi Paradarami
> <tulasi.krishn...@gmail.com> wrote:
>
> > Hi,
> >
> > I noticed that regionservers are intermittently raising the following
> > exceptions, which manifest as request timeouts on the client side. HDFS
> > is in a healthy state and there are no corrupted blocks (from "hdfs fsck"
> > results). Datanodes were not out of service when this error occurs, and
> > GC on datanodes is usually around 0.3 sec.
> >
> > Also, when these exceptions occur, the HDFS metric "Send Data Packet
> > Blocked On Network Average Time" tends to go up.
> >
> > Here are the configured values for some of the relevant parameters:
> > dfs.client.socket-timeout: 10s
> > dfs.datanode.socket.write.timeout: 10s
> > dfs.namenode.avoid.read.stale.datanode: true
> > dfs.namenode.avoid.write.stale.datanode: true
> > dfs.datanode.max.xcievers: 8192
> >
> > Any pointers towards what could be causing these exceptions are
> > appreciated. Thanks.
> >
> > CDH 5.7.2
> > HBase 1.2.0
> >
> > ---> Regionserver logs
> >
> > 2017-01-11 19:19:04,940 WARN  [PriorityRpcServer.handler=3,queue=1,port=60020] hdfs.BlockReaderFactory: I/O error constructing remote block reader.
> > java.net.SocketTimeoutException: 10000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/datanode3:27094 remote=/datanode2:50010]
> > at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
> > ...
> >
> > 2017-01-11 19:19:04,995 WARN  [PriorityRpcServer.handler=11,queue=1,port=60020] hdfs.DFSClient: Connection failure: Failed to connect to /datanode2:50010 for file /hbase/data/default//ec9ca
> > java.net.SocketTimeoutException: 10000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/datanode3:27107 remote=/datanode2:50010]
> > at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
> > at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
> > at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.readChannelFully(PacketReceiver.java:258)
> > at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:209)
> > at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:171)
> > at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)
> > at org.apache.hadoop.hdfs.RemoteBlockReader2.readNextPacket(RemoteBlockReader2.java:207)
> > at org.apache.hadoop.hdfs.RemoteBlockReader2.read(RemoteBlockReader2.java:156)
> > at org.apache.hadoop.hdfs.BlockReaderUtil.readAll(BlockReaderUtil.java:32)
> > at org.apache.hadoop.hdfs.RemoteBlockReader2.readAll(RemoteBlockReader2.java:386)
> > at org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1193)
> > at org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:1112)
> > at org.apache.hadoop.hdfs.DFSInputStream.pread(DFSInputStream.java:1473)
> > at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1432)
> > at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:89)
> > at org.apache.hadoop.hbase.io.hfile.HFileBlock.positionalReadWithExtra(HFileBlock.java:752)
> > at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1448)
> > at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1648)
> > at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1532)
> > at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:445)
> > at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:261)
> > at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:642)
> > at ...

Re: HBase cluster health java api

2017-01-19 Thread Ted Yu
You can use the following method from HBaseAdmin:

  public ClusterStatus getClusterStatus() throws IOException {

where ClusterStatus has getter for retrieving live server count:

  public int getServersSize() {

and getter for retrieving dead server count:

  public int getDeadServers() {


FYI
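A minimal sketch of such a check against the 0.98-era client API (the 50%
threshold and the bare-bones setup are assumptions; run it from your own
2-minute scheduler, and note that the call itself failing is also a signal
that the master or ZooKeeper is unreachable):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.ClusterStatus;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class ClusterHealthCheck {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf); // 0.98-era constructor
    try {
      ClusterStatus status = admin.getClusterStatus();
      int live = status.getServersSize();  // live regionserver count
      int dead = status.getDeadServers();  // dead regionserver count
      double deadPct = 100.0 * dead / (live + dead);
      // Treat the cluster as down once the dead fraction crosses the chosen
      // threshold (30-70% in the question); 50% here is an assumption.
      boolean down = deadPct >= 50.0;
      System.out.println("live=" + live + " dead=" + dead + " down=" + down);
    } finally {
      admin.close();
    }
  }
}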

On Thu, Jan 19, 2017 at 4:52 AM, kiran  wrote:

> Dear All,
>
> Is there a way to check HBase cluster health (Master, ZooKeeper,
> Regionservers) using the Java API? I have seen HBaseAdmin.clusterHealth,
> but it checks the health of the master and ZooKeeper only.
>
> My use case is slightly different.
>
> I need to monitor the health of the HBase cluster, including region
> servers, every 2 minutes. 2 minutes is my turnaround time and I can't go
> beyond that.
>
> If X% (30-70) of regionservers fail, or the master or ZooKeeper fails, I
> will treat the HBase cluster as down.
>
> Hbase version : 0.98.6
>
> --
> Thank you
> Kiran Sarvabhotla
>
> -Even a correct decision is wrong when it is taken late
>


HBase cluster health java api

2017-01-19 Thread kiran
Dear All,

Is there a way to check HBase cluster health (Master, ZooKeeper,
Regionservers) using the Java API? I have seen HBaseAdmin.clusterHealth,
but it checks the health of the master and ZooKeeper only.

My use case is slightly different.

I need to monitor the health of the HBase cluster, including region
servers, every 2 minutes. 2 minutes is my turnaround time and I can't go
beyond that.

If X% (30-70) of regionservers fail, or the master or ZooKeeper fails, I
will treat the HBase cluster as down.

Hbase version : 0.98.6

-- 
Thank you
Kiran Sarvabhotla

-Even a correct decision is wrong when it is taken late


Re: scan performance

2017-01-19 Thread Rajeshkumar J
The same one, but now I think we have found the cause.

We have one column qualifier and five columns in every table. We add a
SingleColumnValueFilter to the scan based on the input parameters. The scan
is always successful except when we add one specific column: we have tested
the other four columns with a SingleColumnValueFilter in a single scan and
the scan succeeds, but when I test with this specific column I get the
lease exception.

Is there any reason for this?


On Thu, Jan 19, 2017 at 6:01 PM, Yu Li  wrote:

> So the answer in the previous mail thread
> <...201701.mbox/%3CCAM7-19+6mbP5Cdmtoovh4aN5H_GPaFAzy1xt_tdl1mum+t-...@mail.gmail.com%3E>
> didn't resolve your problem, or is this a new one? If it is a new one, do
> you mind talking about more details? Thanks.
>
> Best Regards,
> Yu
>
> On 19 January 2017 at 20:08, Rajeshkumar J wrote:
>
> > I am using SingleColumnValueFilter for filtering based on some values.
> > With this I am getting a lease expired exception during the scan. So is
> > there any way to solve this?
> >
>


Re: scan performance

2017-01-19 Thread Yu Li
So the answer in the previous mail thread didn't resolve your problem, or
is this a new one? If it is a new one, do you mind talking about more
details? Thanks.

Best Regards,
Yu

On 19 January 2017 at 20:08, Rajeshkumar J wrote:

> I am using SingleColumnValueFilter for filtering based on some values.
> With this I am getting a lease expired exception during the scan. So is
> there any way to solve this?
>


scan performance

2017-01-19 Thread Rajeshkumar J
I am using SingleColumnValueFilter for filtering based on some values.
With this I am getting a lease expired exception during the scan. So is
there any way to solve this?
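For reference, a minimal sketch of such a filtered scan (the table, family,
qualifier, and value are hypothetical; setFilterIfMissing and a modest
caching value are common knobs to try when SingleColumnValueFilter scans run
into scanner lease timeouts, since smaller batches keep each next() RPC
shorter):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class FilteredScanExample {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("mytable"))) {
      SingleColumnValueFilter filter = new SingleColumnValueFilter(
          Bytes.toBytes("f"), Bytes.toBytes("col1"),
          CompareOp.EQUAL, Bytes.toBytes("someValue"));
      filter.setFilterIfMissing(true); // skip rows that lack the column
      Scan scan = new Scan();
      scan.setFilter(filter);
      scan.setCaching(100); // fewer rows per RPC; helps stay under the lease timeout
      try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result r : scanner) {
          System.out.println(Bytes.toString(r.getRow()));
        }
      }
    }
  }
}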